Sample records for imaging software platform

  1. Future Directions for Astronomical Image Display

    NASA Technical Reports Server (NTRS)

    Mandel, Eric

    2000-01-01

    In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.

  2. Multiattribute selection of acute stroke imaging software platform for Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) clinical trial.

    PubMed

    Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A

    2013-04-01

    The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
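
    The multi-attribute selection described above can be illustrated with a minimal additive value model. The sketch below (Python) uses hypothetical platforms, attributes, weights and scores rather than the EXTEND panel's actual evaluation data; it only shows the weighted-sum scoring and a crude weight-perturbation check of the kind a sensitivity analysis performs.

      import numpy as np

      # Hypothetical platforms, attribute weights and 0..1 scores -- illustration only,
      # not the EXTEND panel's actual evaluation data.
      platforms = ["Platform A", "Platform B", "Platform C", "Platform D"]
      weights = np.array([0.35, 0.25, 0.25, 0.15])   # e.g. speed, automation, validation, cost
      scores = np.array([
          [0.6, 0.5, 0.4, 0.8],
          [0.7, 0.6, 0.5, 0.6],
          [0.5, 0.7, 0.6, 0.7],
          [0.9, 0.8, 0.9, 0.5],
      ])

      overall = scores @ weights                     # additive value of each platform
      best = int(np.argmax(overall))
      print("selected:", platforms[best])

      # Crude sensitivity check: perturb the weights and see how often the winner holds
      rng = np.random.default_rng(0)
      perturbed = rng.dirichlet(20 * weights, size=1000)
      winners = np.argmax(perturbed @ scores.T, axis=1)
      print("win rate of selected platform:", float(np.mean(winners == best)))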

  3. VA's Integrated Imaging System on three platforms.

    PubMed

    Dayhoff, R E; Maloney, D L; Majurski, W J

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image, and graphics data. This system has been transferred from its original two-screen, DOS-based MUMPS platform to an X Window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that affect software design and software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.

  4. VA's Integrated Imaging System on three platforms.

    PubMed Central

    Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.

    1992-01-01

    The DHCP Integrated Imaging System provides users with integrated patient data including text, image, and graphics data. This system has been transferred from its original two-screen, DOS-based MUMPS platform to an X Window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that affect software design and software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983

  5. A software platform for phase contrast x-ray breast imaging research.

    PubMed

    Bliznakova, K; Russo, P; Mettivier, G; Requardt, H; Popov, P; Bravin, A; Buliev, I

    2015-06-01

    To present and validate a computer-based simulation platform dedicated to phase contrast x-ray breast imaging research. The software platform, developed at the Technical University of Varna on the basis of a previously validated x-ray imaging software simulator, comprises modules for object creation and for x-ray image formation. These modules were updated to take into account the refractive index for phase contrast imaging, as well as to implement the Fresnel-Kirchhoff diffraction theory of propagating x-ray waves. Projection images are generated in an in-line acquisition geometry. To test and validate the platform, several phantoms differing in complexity were constructed and imaged at 25 keV and 60 keV at beamline ID17 of the European Synchrotron Radiation Facility. The software platform was used to design computational phantoms that mimic those used in the experimental study and to generate x-ray images in absorption and phase contrast modes. The visual and quantitative results of the validation process showed an overall good correlation between simulated and experimental images and demonstrated the potential of this platform for research in phase contrast x-ray imaging of the breast. The application of the platform is demonstrated in a feasibility study in which phase contrast images of complex inhomogeneous and anthropomorphic breast phantoms are compared with x-ray images generated in absorption mode. The improved visibility of mammographic structures suggests further investigation and optimisation of phase contrast x-ray breast imaging, especially when abnormalities are present. The software platform can also be exploited for educational purposes. Copyright © 2015 Elsevier Ltd. All rights reserved.
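
    The in-line geometry mentioned above can be illustrated with a short numerical sketch: an exit wave behind a simple phantom is propagated to the detector with a Fresnel propagator applied in Fourier space. The geometry, material values and propagator choice below are illustrative assumptions, not the simulator's actual implementation.

      import numpy as np

      E_keV = 25.0
      wavelength = 1.2398e-9 / E_keV            # metres (hc/E, E in keV)
      n_pix, pixel = 512, 5e-6                  # grid size and pixel pitch (m)
      z = 1.0                                   # propagation distance (m)

      # Simple phantom: a disc with small absorption and phase shift
      y, x = np.mgrid[-n_pix//2:n_pix//2, -n_pix//2:n_pix//2] * pixel
      disc = (x**2 + y**2) < (0.5e-3)**2
      exit_wave = np.exp(-0.5 * disc) * np.exp(-1j * 1.5 * disc)

      # Paraxial Fresnel propagator in the spatial-frequency domain
      fx = np.fft.fftfreq(n_pix, d=pixel)
      FX, FY = np.meshgrid(fx, fx)
      H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))

      intensity = np.abs(np.fft.ifft2(np.fft.fft2(exit_wave) * H))**2
      print("intensity range with edge enhancement:", intensity.min(), intensity.max())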

  6. A software platform for the analysis of dermatology images

    NASA Astrophysics Data System (ADS)

    Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon

    2017-11-01

    The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability to read a file that contains a dermatology image and supports image formats such as Windows bitmaps, JPEG, JPEG 2000, Portable Network Graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image. The automated selection of a ROI includes filtering for smoothing the image and thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images from other anatomical parts, such as the breast or lung, after proper re-training of the classification algorithms.
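
    The automated ROI step described above (smoothing followed by thresholding) can be sketched in a few lines of Python with scikit-image. The input file name, filter size and use of Otsu's threshold are assumptions for illustration, not the platform's actual code.

      import numpy as np
      from skimage import io, color, filters, measure

      img = io.imread("lesion.jpg")                   # hypothetical dermatology image
      gray = color.rgb2gray(img)
      smooth = filters.gaussian(gray, sigma=2)        # smoothing before thresholding
      mask = smooth < filters.threshold_otsu(smooth)  # assume the lesion is darker than skin

      labels = measure.label(mask)
      largest = max(measure.regionprops(labels), key=lambda r: r.area)
      roi = labels == largest.label                   # binary ROI mask of the largest region
      print("ROI area (pixels):", largest.area)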

  7. Fiji: an open-source platform for biological-image analysis.

    PubMed

    Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert

    2012-06-28

    Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
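
    As a flavour of the rapid prototyping the abstract refers to, Fiji's script editor can run short Python (Jython) scripts against the ImageJ API. The snippet below is only an illustrative sketch (the sample image URL and filter setting are assumptions), not code from the Fiji paper itself.

      # Run from Fiji's Script Editor with the language set to Python (Jython).
      from ij import IJ

      imp = IJ.openImage("https://imagej.net/images/blobs.gif")  # classic sample image
      IJ.run(imp, "Gaussian Blur...", "sigma=2")                 # smooth the image
      print("mean intensity after blur:", imp.getStatistics().mean)
      imp.show()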

  8. A Software Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.

    2007-01-01

    Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.

  9. Informatics in radiology (infoRAD): free DICOM image viewing and processing software for the Macintosh computer: what's available and what it can do for you.

    PubMed

    Escott, Edward J; Rubinstein, David

    2004-01-01

    It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
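
    The conversion task the article addresses (DICOM to a presentation-friendly format) can also be scripted in a platform-independent way. The sketch below uses pydicom and Pillow with a hypothetical input file and a simple min-max window; it is an illustration of the workflow, not one of the reviewed Macintosh programs.

      import numpy as np
      import pydicom
      from PIL import Image

      ds = pydicom.dcmread("ct_slice.dcm")        # hypothetical input file
      pixels = ds.pixel_array.astype(np.float32)

      # Simple min-max windowing to 8 bits (real viewers apply window centre/width)
      lo, hi = pixels.min(), pixels.max()
      img8 = ((pixels - lo) / (hi - lo) * 255).astype(np.uint8)

      Image.fromarray(img8).save("ct_slice.jpg", quality=90)
      print("saved", img8.shape, "image as JPEG")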

  10. A user-friendly LabVIEW software platform for grating based X-ray phase-contrast imaging.

    PubMed

    Wang, Shenghao; Han, Huajie; Gao, Kun; Wang, Zhili; Zhang, Can; Yang, Meng; Wu, Zhao; Wu, Ziyu

    2015-01-01

    X-ray phase-contrast imaging can provide greatly improved contrast over conventional absorption-based imaging for weakly absorbing samples, such as biological soft tissues and fibre composites. In this study, we introduced an easy and fast way to develop a user-friendly software platform dedicated to the new grating-based X-ray phase-contrast imaging setup at the National Synchrotron Radiation Laboratory of the University of Science and Technology of China. The software controls 21 motorized stages, a piezoelectric stage, and an X-ray tube, and it also covers image acquisition with a flat-panel detector for automatic phase-stepping scans. Moreover, a data post-processing module for signal retrieval and other custom features are in principle available. With a seamless integration of all the necessary functions in one software package, this platform greatly facilitates users' activities during experimental runs with this grating-based X-ray phase-contrast imaging setup.
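
    The signal retrieval performed after a phase-stepping scan can be sketched compactly: for each pixel, the intensity over the stepping positions is approximately a sinusoid whose mean, first-harmonic phase and visibility yield the absorption, differential-phase and dark-field images. The arrays below are random stand-ins for real flat and sample scans; this illustrates the standard retrieval, not the LabVIEW platform's internal code.

      import numpy as np

      def retrieve(stack):
          """stack: (n_steps, H, W) intensities over one stepping period."""
          coeffs = np.fft.fft(stack, axis=0)
          a0 = np.abs(coeffs[0]) / stack.shape[0]          # mean intensity
          a1 = 2 * np.abs(coeffs[1]) / stack.shape[0]      # first-harmonic amplitude
          phase = np.angle(coeffs[1])                      # first-harmonic phase
          return a0, a1 / a0, phase                        # mean, visibility, phase

      n_steps, H, W = 8, 64, 64
      flat = np.random.poisson(1000, (n_steps, H, W)).astype(float)    # reference scan
      sample = np.random.poisson(800, (n_steps, H, W)).astype(float)   # sample scan

      m_f, v_f, p_f = retrieve(flat)
      m_s, v_s, p_s = retrieve(sample)
      absorption = -np.log(m_s / m_f)
      diff_phase = np.angle(np.exp(1j * (p_s - p_f)))      # wrapped phase difference
      dark_field = -np.log(v_s / v_f)
      print(absorption.mean(), diff_phase.mean(), dark_field.mean())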

  11. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    PubMed

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image-guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.

  12. TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.

    PubMed

    Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan

    2017-01-01

    Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy by reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully explore the capability for data transmission in computers. We applied TDat in large-volume data rigid registration and neuron tracing in whole-brain data with single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
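
    The two ideas highlighted above, cuboid reads and parallel computing, can be sketched in a few lines of Python. The volume shape, block size, file name and per-block operation below are hypothetical; this only illustrates the access pattern, not TDat's implementation.

      import numpy as np
      from multiprocessing import Pool

      SHAPE = (1024, 1024, 1024)          # z, y, x of a large volumetric dataset (hypothetical)
      BLOCK = 256                         # cuboid edge length

      def block_mean(origin, path="brain_volume.raw"):
          vol = np.memmap(path, dtype=np.uint16, mode="r", shape=SHAPE)
          z, y, x = origin
          cube = np.asarray(vol[z:z+BLOCK, y:y+BLOCK, x:x+BLOCK])   # load one cuboid into RAM
          return origin, float(cube.mean())

      if __name__ == "__main__":
          origins = [(z, y, x)
                     for z in range(0, SHAPE[0], BLOCK)
                     for y in range(0, SHAPE[1], BLOCK)
                     for x in range(0, SHAPE[2], BLOCK)]
          with Pool(processes=8) as pool:
              for origin, value in pool.imap_unordered(block_mean, origins):
                  pass                     # write the reformatted block / statistic here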

  13. OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization

    NASA Astrophysics Data System (ADS)

    Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian

    2018-03-01

    OIPAV (Ophthalmic Image Processing, Analysis and Visualization) is cross-platform software specially oriented to ophthalmic images. It provides a wide range of functionalities, including data I/O, image processing, interaction, ophthalmic disease detection, data analysis and visualization, to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates workflows for processing ophthalmic images, and improves quantitative evaluations. In this paper, we will present the system design and functional modules of the platform and demonstrate various applications. With satisfying functional scalability and expandability, we believe that the software can be widely applied in the ophthalmology field.

  14. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  15. Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform

    PubMed Central

    Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.

    2011-01-01

    A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799

  16. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  17. Platform-Independent Cirrus and Spectralis Thickness Measurements in Eyes with Diabetic Macular Edema Using Fully Automated Software

    PubMed Central

    Willoughby, Alex S.; Chiu, Stephanie J.; Silverman, Rachel K.; Farsiu, Sina; Bailey, Clare; Wiley, Henry E.; Ferris, Frederick L.; Jaffe, Glenn J.

    2017-01-01

    Purpose We determine whether the automated segmentation software, Duke Optical Coherence Tomography Retinal Analysis Program (DOCTRAP), can measure, in a platform-independent manner, retinal thickness on Cirrus and Spectralis spectral domain optical coherence tomography (SD-OCT) images in eyes with diabetic macular edema (DME) under treatment in a clinical trial. Methods Automatic segmentation software was used to segment the internal limiting membrane (ILM), inner retinal pigment epithelium (RPE), and Bruch's membrane (BM) in SD-OCT images acquired by Cirrus and Spectralis commercial systems, from the same eye, on the same day during a clinical interventional DME trial. Mean retinal thickness differences were compared across commercial and DOCTRAP platforms using intraclass correlation (ICC) and Bland-Altman plots. Results The mean 1 mm central subfield thickness difference (standard error [SE]) comparing segmentation of Spectralis images with DOCTRAP versus HEYEX was 0.7 (0.3) μm (0.2 pixels). The corresponding value comparing segmentation of Cirrus images with DOCTRAP versus Cirrus software was 2.2 (0.7) μm. The mean 1 mm central subfield thickness difference (SE) comparing segmentation of Cirrus and Spectralis scan pairs with DOCTRAP using BM as the outer retinal boundary was −2.3 (0.9) μm compared to 2.8 (0.9) μm with inner RPE as the outer boundary. Conclusions DOCTRAP segmentation of Cirrus and Spectralis images produces validated thickness measurements that are very similar to each other, and very similar to the values generated by the corresponding commercial software in eyes with treated DME. Translational Relevance This software enables automatic total retinal thickness measurements across two OCT platforms, a process that is impractical to perform manually. PMID:28180033
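
    The agreement statistics used above can be reproduced with a few lines of Python. The sketch below computes the Bland-Altman mean difference and limits of agreement on made-up thickness values, not trial data.

      import numpy as np

      cirrus = np.array([312.0, 287.5, 401.2, 355.8, 298.4])      # central subfield thickness, um
      spectralis = np.array([309.6, 289.0, 398.7, 358.1, 300.2])

      diff = cirrus - spectralis
      mean_diff = diff.mean()
      loa = 1.96 * diff.std(ddof=1)                               # limits of agreement half-width
      print(f"mean difference: {mean_diff:.1f} um "
            f"(LoA {mean_diff - loa:.1f} to {mean_diff + loa:.1f} um)")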

  18. Spotlight-8 Image Analysis Software

    NASA Technical Reports Server (NTRS)

    Klimek, Robert; Wright, Ted

    2006-01-01

    Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.
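
    The automated sequence mode described above amounts to a loop of per-frame measurements written to a text file. Below is a minimal Python sketch of that pattern; the file pattern, region of interest and statistics are hypothetical, not Spotlight's actual output format.

      import glob
      import numpy as np
      from skimage import io

      roi = (slice(100, 200), slice(150, 300))            # y, x area of interest
      with open("results.txt", "w") as out:
          out.write("frame\tmean\tmax\tarea_above_threshold\n")
          for path in sorted(glob.glob("frames/frame_*.png")):
              frame = io.imread(path, as_gray=True)[roi]
              out.write(f"{path}\t{frame.mean():.4f}\t{frame.max():.4f}\t"
                        f"{(frame > 0.5).sum()}\n")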

  19. SU-D-BRD-02: A Web-Based Image Processing and Plan Evaluation Platform (WIPPEP) for Future Cloud-Based Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, X; Liu, L; Xing, L

    Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and plan evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server and computation server. Each independent server communicates with the others through HTTP requests. The web server is the key component; it provides visualizations and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send/receive HTTP requests. Our computation server was implemented in Delphi, Python and PHP, which can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server virtually hosting the web server, image server, and computation servers separately. Users can visit our internal website with the Chrome browser, select a specific patient, visualize the images and RT structures belonging to that patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibited potential for future cloud-based radiotherapy.

  20. OSIRIX: open source multimodality image navigation software

    NASA Astrophysics Data System (ADS)

    Rosset, Antoine; Pysher, Lance; Spadola, Luca; Ratib, Osman

    2005-04-01

    The goal of our project is to develop a completely new software platform that will allow users to efficiently and conveniently navigate through large sets of multidimensional data without the need for high-end, expensive hardware or software. We also elected to develop our system on new open source software libraries, allowing other institutions and developers to contribute to this project. OsiriX is a free and open-source imaging software package designed to manipulate and visualize large sets of medical images: http://homepage.mac.com/rossetantoine/osirix/

  1. The ImageJ ecosystem: an open platform for biomedical image analysis

    PubMed Central

    Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368

  2. The ImageJ ecosystem: An open platform for biomedical image analysis.

    PubMed

    Schindelin, Johannes; Rueden, Curtis T; Hiner, Mark C; Eliceiri, Kevin W

    2015-01-01

    Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available-from commercial to academic, special-purpose to Swiss army knife, small to large-but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on the life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts the life sciences, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. © 2015 Wiley Periodicals, Inc.

  3. Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform

    NASA Astrophysics Data System (ADS)

    Liu, H. S.; Liao, H. M.

    2015-08-01

    A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can take measurements directly on the images. In order to calculate positioning properly, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate images, coordinates, and camera position; however, it is very expensive, and users cannot use the result immediately because the position information is not embedded into the image. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can calculate positioning with the open source software OpenCV. In the end, we use the open source panorama browser Panini and integrate all of these into the open source GIS software Quantum GIS. In this way, a complete data collection and processing system can be constructed.

  4. Platform-independent software for medical image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin

    1997-05-01

    We have developed a software tool for image processing over the Internet. The tool is a general-purpose, easy to use, flexible, platform-independent image processing software package with functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems. It was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use. In order to use the tool, the user needs to download the software from our site before he/she runs it using any Java interpreter, such as those supplied by Sun, Symantec, Borland or Microsoft. Future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters. The software is then able to access and process any image on the Internet or on the local computer. Using a 512 X 512 X 8-bit image, a 3 X 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory. A window/level operation took 0.38 seconds while a 3 X 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and can run without the need for any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms. Also, it could facilitate sharing of medical image databases and collaboration amongst researchers and clinicians, regardless of location.
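
    For comparison with the 1997 timings quoted above, the same two operations can be timed today in a few lines of Python with SciPy. This is not the original Java tool; the random test image is a stand-in.

      import time
      import numpy as np
      from scipy import ndimage

      img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
      kernel = np.ones((3, 3)) / 9.0

      t0 = time.perf_counter()
      ndimage.convolve(img.astype(np.float32), kernel)     # 3x3 convolution
      t1 = time.perf_counter()
      ndimage.median_filter(img, size=3)                   # 3x3 median filter
      t2 = time.perf_counter()
      print(f"3x3 convolution: {t1 - t0:.3f} s, 3x3 median: {t2 - t1:.3f} s")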

  5. NiftyPET: a High-throughput Software Platform for High Quantitative Accuracy and Precision PET Imaging and Analysis.

    PubMed

    Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien

    2018-01-01

    We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.

  6. ImagePy: an open-source, Python-based and platform-independent software package for bioimage analysis.

    PubMed

    Wang, Anliang; Yan, Xiaolong; Wei, Zhijun

    2018-04-27

    This note presents the design of a scalable software package named ImagePy for analysing biological images. Our contribution is concentrated on facilitating extensibility and interoperability of the software through decoupling the data model from the user interface. Especially with assistance from the Python ecosystem, this software framework makes modern computer algorithms easier to apply in bioimage analysis. ImagePy is free and open source software, with documentation and code available at https://github.com/Image-Py/imagepy under the BSD license. It has been tested on the Windows, Mac and Linux operating systems. wzjdlut@dlut.edu.cn or yxdragon@imagepy.org.

  7. Free and open source software for the manipulation of digital images.

    PubMed

    Solomon, Robert W

    2009-06-01

    Free and open source software is a type of software that is nearly as powerful as commercial software but is freely downloadable. This software can do almost everything that the expensive programs can. GIMP (GNU Image Manipulation Program) is the free program that is comparable to Photoshop, and versions are available for Windows, Macintosh, and Linux platforms. This article briefly describes how GIMP can be installed and used to manipulate radiology images. It is no longer necessary to budget large amounts of money for high-quality software to achieve the goals of image processing and document creation because free and open source software is available for the user to download at will.

  8. Raspberry Pi-powered imaging for plant phenotyping.

    PubMed

    Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A

    2018-03-01

    Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
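
    A minimal sketch of the capture-then-measure loop described above: take a picture with the Raspberry Pi camera module, then segment the plant by a colour threshold. It uses the picamera and OpenCV packages; the HSV range is an assumption that would need tuning per setup, and PlantCV offers more complete workflows than this illustration.

      import cv2
      import numpy as np
      from picamera import PiCamera

      camera = PiCamera()
      camera.capture("plant.jpg")                    # acquire one image

      img = cv2.imread("plant.jpg")
      hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
      mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))   # green-ish pixels (assumed range)
      print("projected shoot area (pixels):", int(np.count_nonzero(mask)))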

  9. CONRAD—A software framework for cone-beam imaging in radiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maier, Andreas; Choi, Jang-Hwan; Riess, Christian

    2013-11-15

    Purpose: In the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects. Methods: CONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source. Results: A total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74,000 non-blank lines of code, of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size. Conclusions: As a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and quantitative performance comparison between the methods of different groups.
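
    CONRAD itself is Java-based; as a language-agnostic illustration of the filtered back-projection (FBP) reconstruction it bundles, the sketch below simulates parallel-beam projections of the Shepp-Logan phantom and reconstructs them with scikit-image. It demonstrates the algorithm concept, not CONRAD's API.

      import numpy as np
      from skimage.data import shepp_logan_phantom
      from skimage.transform import radon, iradon, rescale

      phantom = rescale(shepp_logan_phantom(), 0.5)          # 200 x 200 test object
      angles = np.linspace(0.0, 180.0, 180, endpoint=False)
      sinogram = radon(phantom, theta=angles)                # forward projection
      reco = iradon(sinogram, theta=angles, filter_name="ramp")   # filtered back-projection
      print("reconstruction RMSE:", float(np.sqrt(np.mean((reco - phantom) ** 2))))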

  10. Software platform for simulation of a prototype proton CT scanner.

    PubMed

    Giacometti, Valentina; Bashkirov, Vladimir A; Piersimoni, Pierluigi; Guatelli, Susanna; Plautz, Tia E; Sadrozinski, Hartmut F-W; Johnson, Robert P; Zatserklyaniy, Andriy; Tessonnier, Thomas; Parodi, Katia; Rosenfeld, Anatoly B; Schulte, Reinhard W

    2017-03-01

    Proton computed tomography (pCT) is a promising imaging technique to substitute or at least complement x-ray CT for more accurate proton therapy treatment planning, as it allows proton relative stopping power to be calculated directly from proton energy loss measurements. A proton CT scanner with a silicon-based particle tracking system and a five-stage scintillating energy detector has been completed. In parallel, a modular software platform was developed to characterize the performance of the proposed pCT scanner. The modular pCT software platform consists of (1) a Geant4-based simulation modeling the Loma Linda proton therapy beam line and the prototype proton CT scanner, (2) water equivalent path length (WEPL) calibration of the scintillating energy detector, and (3) an image reconstruction algorithm for the reconstruction of the relative stopping power (RSP) of the scanned object. In this work, each component of the modular pCT software platform is described and validated with respect to experimental data and benchmarked against theoretical predictions. In particular, the RSP reconstruction was validated with experimental scans, water column measurements, and theoretical calculations. The results show that the pCT software platform accurately reproduces the performance of the existing prototype pCT scanner, with RSP agreement between experimental and simulated values to better than 1.5%. The validated platform is a versatile tool for clinical proton CT performance and application studies in a virtual setting. The platform is flexible and can be modified to simulate not yet existing versions of pCT scanners and higher proton energies than those currently clinically available. © 2017 American Association of Physicists in Medicine.
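
    The WEPL calibration idea in step (2) can be illustrated in a few lines: a calibration curve maps the energy-detector response to water-equivalent path length, and measured responses are converted by interpolation. The calibration points below are made up for illustration, not the scanner's measured values.

      import numpy as np

      # Hypothetical calibration: detector response (a.u.) vs. known WEPL (mm)
      response_cal = np.array([220.0, 190.0, 155.0, 115.0, 70.0, 20.0])
      wepl_cal = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])

      def response_to_wepl(response):
          # np.interp needs increasing x, so flip the monotonically decreasing curve
          return np.interp(response, response_cal[::-1], wepl_cal[::-1])

      measured = np.array([200.5, 130.0, 45.0])
      print("WEPL estimates (mm):", response_to_wepl(measured))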

  11. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
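
    A small sketch of the block-wise processing pattern that such block-volume data structures enable: each block is filtered with enough overlap (a halo) that the stitched result matches processing the whole volume at once. The block size and filter are arbitrary choices for illustration, not the platform's actual data structure.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      vol = np.random.rand(128, 128, 128).astype(np.float32)
      block, sigma = 64, 2
      halo = int(4 * sigma)                      # enough support for the Gaussian kernel

      out = np.empty_like(vol)
      for z in range(0, vol.shape[0], block):
          lo = max(z - halo, 0)
          hi = min(z + block + halo, vol.shape[0])
          filtered = gaussian_filter(vol[lo:hi], sigma=sigma)
          out[z:z+block] = filtered[z-lo:z-lo+block]   # keep only the valid interior

      ref = gaussian_filter(vol, sigma=sigma)
      print("max stitching error:", float(np.abs(out - ref).max()))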

  12. High-throughput two-dimensional root system phenotyping platform facilitates genetic analysis of root growth and development.

    PubMed

    Clark, Randy T; Famoso, Adam N; Zhao, Keyan; Shaff, Jon E; Craft, Eric J; Bustamante, Carlos D; McCouch, Susan R; Aneshansley, Daniel J; Kochian, Leon V

    2013-02-01

    High-throughput phenotyping of root systems requires a combination of specialized techniques and adaptable plant growth, root imaging and software tools. A custom phenotyping platform was designed to capture images of whole root systems, and novel software tools were developed to process and analyse these images. The platform and its components are adaptable to a wide range of root phenotyping studies using diverse growth systems (hydroponics, paper pouches, gel and soil) involving several plant species, including, but not limited to, rice, maize, sorghum, tomato and Arabidopsis. The RootReader2D software tool is free and publicly available and was designed with both user-guided and automated features that increase flexibility and enhance efficiency when measuring root growth traits from specific roots or entire root systems during large-scale phenotyping studies. To demonstrate the unique capabilities and high-throughput capacity of this phenotyping platform for studying root systems, genome-wide association studies on rice (Oryza sativa) and maize (Zea mays) root growth were performed, and root traits related to aluminium (Al) tolerance were analysed in the parents of the maize nested association mapping (NAM) population. © 2012 Blackwell Publishing Ltd.

  13. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology.

    PubMed

    Markiewicz, Tomasz

    2011-03-30

    The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important part is the image processing toolbox, offering many built-in functions, including mathematical morphology, and implementations of many artificial neural networks as AI tools. It is a very popular platform for creating specialized programs for image analysis, also in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on Java Servlet Pages with the Tomcat server as the servlet container. In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with the set of input data, an output structure with numerical results and a Matlab web figure. Any function prepared in that manner can be used as a Java function in Java Servlet Pages (JSP). The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly with the image processing. The complete JSP page can be run by the Tomcat server. The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides an image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data with the image are sent to the servlet on Tomcat. When analysis is done, the client obtains the graphical results as an image with marked recognized cells and also the quantitative output. Additionally, the results are stored in a server database. The internet platform was tested on a PC Intel Core2 Duo T9600 2.8 GHz 4 GB RAM server with 768x576 pixel size, 1.28 MB tiff format images referring to meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: for analysis by CAMI, locally on the server, 3.5 seconds; for remote analysis, 26 seconds, of which 22 seconds were used for data transfer via the internet connection. For a jpg format image (102 KB) the time was reduced to 14 seconds. The results have confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different staining, tissue, morphometry approaches, etc. A significant problem is the implementation of the JSP page in multithreaded form, so that it can be used in parallel by many users. The presented platform for image analysis in pathology can be especially useful for small laboratories without their own image analysis system.

  14. Using MATLAB software with Tomcat server and Java platform for remote image analysis in pathology

    PubMed Central

    2011-01-01

    Background The Matlab software is one of the most advanced development tools for applications in engineering practice. From our point of view the most important part is the image processing toolbox, offering many built-in functions, including mathematical morphology, and implementations of many artificial neural networks as AI tools. It is a very popular platform for creating specialized programs for image analysis, also in pathology. Based on the latest version of the Matlab Builder Java toolbox, it is possible to create software serving as a remote system for image analysis in pathology via internet communication. The internet platform can be realized based on Java Servlet Pages with the Tomcat server as the servlet container. Methods In the presented software implementation we propose remote image analysis realized by Matlab algorithms. These algorithms can be compiled to an executable jar file with the help of the Matlab Builder Java toolbox. The Matlab function must be declared with the set of input data, an output structure with numerical results and a Matlab web figure. Any function prepared in that manner can be used as a Java function in Java Servlet Pages (JSP). The graphical user interface providing the input data and displaying the results (also in graphical form) must be implemented in JSP. Additionally, data storage to a database can be implemented within the algorithm written in Matlab with the help of the Matlab Database Toolbox, directly with the image processing. The complete JSP page can be run by the Tomcat server. Results The proposed tool for remote image analysis was tested on the Computerized Analysis of Medical Images (CAMI) software developed by the author. The user provides an image and case information (diagnosis, staining, image parameters, etc.). When analysis is initialized, the input data with the image are sent to the servlet on Tomcat. When analysis is done, the client obtains the graphical results as an image with marked recognized cells and also the quantitative output. Additionally, the results are stored in a server database. The internet platform was tested on a PC Intel Core2 Duo T9600 2.8 GHz 4 GB RAM server with 768x576 pixel size, 1.28 MB tiff format images referring to meningioma tumour (x400, Ki-67/MIB-1). The time consumption was as follows: for analysis by CAMI, locally on the server, 3.5 seconds; for remote analysis, 26 seconds, of which 22 seconds were used for data transfer via the internet connection. For a jpg format image (102 Kb) the time was reduced to 14 seconds. Conclusions The results have confirmed that the designed remote platform can be useful for pathology image analysis. The time consumption depends mainly on the image size and the speed of the internet connection. The presented implementation can be used for many types of analysis with different staining, tissue, morphometry approaches, etc. A significant problem is the implementation of the JSP page in multithreaded form, so that it can be used in parallel by many users. The presented platform for image analysis in pathology can be especially useful for small laboratories without their own image analysis system. PMID:21489188

  15. PLUS: open-source toolkit for ultrasound-guided intervention systems.

    PubMed

    Lasso, Andras; Heffter, Tamas; Rankin, Adam; Pinter, Csaba; Ungi, Tamas; Fichtinger, Gabor

    2014-10-01

    A variety of advanced image analysis methods have been under development for ultrasound-guided interventions. Unfortunately, the transition from an image analysis algorithm to clinical feasibility trials as part of an intervention system requires integration of many components, such as imaging and tracking devices, data processing algorithms, and visualization software. The objective of our paper is to provide a freely available open-source software platform, PLUS (Public software Library for Ultrasound), to facilitate rapid prototyping of ultrasound-guided intervention systems for translational clinical research. PLUS provides a variety of methods for interventional tool pose and ultrasound image acquisition from a wide range of tracking and imaging devices, spatial and temporal calibration, volume reconstruction, simulated image generation, and recording and live streaming of the acquired data. This paper introduces PLUS, explains its functionality and architecture, and presents typical uses and performance in ultrasound-guided intervention systems. PLUS fulfills the essential requirements for the development of ultrasound-guided intervention systems and it aspires to become a widely used translational research prototyping platform. PLUS is freely available as open source software under the BSD license and can be downloaded from http://www.plustoolkit.org.

  16. A specialized plug-in software module for computer-aided quantitative measurement of medical images.

    PubMed

    Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H

    2003-12-01

    This paper presents a specialized system for the quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability and ease of use.

  17. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    NASA Astrophysics Data System (ADS)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90ms (11Hz). This is suitable to track tumors moving due to respiration (~0.3Hz) or heartbeat (~1Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies, patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.
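
    As a greatly reduced illustration of the intensity-based registration at the core of such a platform, the sketch below recovers a 2D translation by maximising normalised cross-correlation with SciPy. The real system registers DRRs rendered from a 3D volume against 2D X-ray images on the GPU; this toy example, with simulated data, only shows the intensity-metric idea.

      import numpy as np
      from scipy import ndimage, optimize

      ref = ndimage.gaussian_filter(np.random.rand(128, 128), 3)   # synthetic reference frame
      true_shift = (4.3, -2.7)
      mov = ndimage.shift(ref, true_shift, mode="nearest")         # "moving" frame

      def neg_ncc(shift):
          warped = ndimage.shift(mov, -np.asarray(shift), mode="nearest")
          a, b = warped - warped.mean(), ref - ref.mean()
          return -float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b)))

      res = optimize.minimize(neg_ncc, x0=[0.0, 0.0], method="Powell")
      print("recovered shift:", res.x, "true shift:", true_shift)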

  18. HelioScan: a software framework for controlling in vivo microscopy setups with high hardware flexibility, functional diversity and extendibility.

    PubMed

    Langer, Dominik; van 't Hoff, Marcel; Keller, Andreas J; Nagaraja, Chetan; Pfäffli, Oliver A; Göldi, Maurice; Kasper, Hansjörg; Helmchen, Fritjof

    2013-04-30

    Intravital microscopy such as in vivo imaging of brain dynamics is often performed with custom-built microscope setups controlled by custom-written software to meet specific requirements. Continuous technological advancement in the field has created a need for new control software that is flexible enough to support the biological researcher with innovative imaging techniques and provide the developer with a solid platform for quickly and easily implementing new extensions. Here, we introduce HelioScan, a software package written in LabVIEW, as a platform serving this dual role. HelioScan is designed as a collection of components that can be flexibly assembled into microscope control software tailored to the particular hardware and functionality requirements. Moreover, HelioScan provides a software framework, within which new functionality can be implemented in a quick and structured manner. A specific HelioScan application assembles at run-time from individual software components, based on user-definable configuration files. Due to its component-based architecture, HelioScan can exploit synergies of multiple developers working in parallel on different components in a community effort. We exemplify the capabilities and versatility of HelioScan by demonstrating several in vivo brain imaging modes, including camera-based intrinsic optical signal imaging for functional mapping of cortical areas, standard two-photon laser-scanning microscopy using galvanometric mirrors, and high-speed in vivo two-photon calcium imaging using either acousto-optic deflectors or a resonant scanner. We recommend HelioScan as a convenient software framework for the in vivo imaging community. Copyright © 2013 Elsevier B.V. All rights reserved.
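    HelioScan itself is written in LabVIEW, so the snippet below is only a language-agnostic illustration (in Python, with hypothetical component names) of the run-time assembly idea described above: components register themselves under configuration keys, and an application is assembled from a user-editable configuration file.

```python
# Minimal sketch of assembling an application from named components at run
# time, driven by a configuration file. HelioScan is written in LabVIEW;
# the component names and parameters below are hypothetical.
import json

REGISTRY = {}

def component(name):
    """Class decorator that registers a component under a config key."""
    def wrap(cls):
        REGISTRY[name] = cls
        return cls
    return wrap

@component("camera")
class Camera:
    def __init__(self, **params):
        self.params = params
    def start(self):
        print("acquiring with", self.params)

@component("galvo_scanner")
class GalvoScanner:
    def __init__(self, **params):
        self.params = params
    def start(self):
        print("scanning with", self.params)

def assemble(config_text):
    """Instantiate the components named in a JSON configuration."""
    cfg = json.loads(config_text)
    return [REGISTRY[entry["type"]](**entry.get("params", {}))
            for entry in cfg["components"]]

config = ('{"components": [{"type": "camera", "params": {"exposure_ms": 10}},'
          ' {"type": "galvo_scanner", "params": {"fps": 30}}]}')
for comp in assemble(config):
    comp.start()
```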

  19. Platform for Post-Processing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don J.

    2010-01-01

    Signal- and image-processing methods are commonly needed to extract information from the waveforms, improve image resolution, and highlight defects in an image. Since all waveform-based nondestructive evaluation (NDE) methods share some similarity, a common software platform containing multiple signal- and image-processing techniques makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as acoustic emission, vibration, or earthquake data.
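    As an illustration of one of the capabilities listed above, the snippet below applies standard 1-D wavelet soft-threshold de-noising to a synthetic ultrasonic-style waveform using the PyWavelets package. It is a generic stand-in, not the commercial tool's algorithm, and the wavelet and threshold choices are illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising of a 1-D waveform, a simple
    stand-in for the kind of de-noising an NDE signal-processing toolbox
    applies to ultrasonic A-scans."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate noise sigma from the finest detail band (robust MAD estimate)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Example: noisy decaying echo
t = np.linspace(0, 1, 1024)
clean = np.exp(-5 * t) * np.sin(2 * np.pi * 40 * t)
noisy = clean + 0.2 * np.random.default_rng(1).normal(size=t.size)
denoised = wavelet_denoise(noisy)
```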

  20. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    Considering the requirement for highly realistic, real-time dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are briefly introduced and the main features of the Vega development environment are analyzed. Methods for infrared modeling of targets and backgrounds are presented, the design flow chart of the real-time IR image generation software and the functions of the TMM Tool, MAT Tool and sensor module are explained, and the real-time performance of the software is addressed.

  1. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research.

    PubMed

    Campagnola, Luke; Kratz, Megan B; Manis, Paul B

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.

  2. The Design and Development of Test Platform for Wheat Precision Seeding Based on Image Processing Techniques

    NASA Astrophysics Data System (ADS)

    Li, Qing; Lin, Haibo; Xiu, Yu-Feng; Wang, Ruixue; Yi, Chuijie

    The test platform for wheat precision seeding based on image processing techniques is designed to support development of a wheat precision seed metering device with high efficiency and precision. Using image processing techniques, the platform captures images of seeds (wheat) on the conveyor belt as they fall from the seed metering device. These data are then processed and analyzed to calculate the qualified rate, reseeding rate and missed-seeding rate, etc. This paper introduces the overall structure and design parameters of the platform and the hardware and software of the image acquisition system, as well as the method of seed identification and seed-spacing measurement based on image thresholding and locating each seed's center. Analysis of the experimental results shows that the measurement error is less than ±1 mm.
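    The seed-identification and spacing measurement described above reduces to thresholding the belt image, labeling connected components, and differencing the seed centers along the belt direction. A minimal sketch with SciPy, using an artificial belt image and an illustrative threshold:

```python
import numpy as np
from scipy import ndimage

def seed_spacings(gray_image, thresh=128):
    """Segment seeds by thresholding, find their centers, and return the
    spacing between consecutive seeds along the belt direction; a minimal
    sketch of the measurement described above (threshold and parameters
    are illustrative)."""
    binary = gray_image > thresh                  # seeds brighter than belt
    labels, n = ndimage.label(binary)             # connected components
    centers = ndimage.center_of_mass(binary, labels, range(1, n + 1))
    xs = sorted(c[1] for c in centers)            # order along the belt axis
    return np.diff(xs)                            # pixel spacings; convert to
                                                  # mm with a calibration factor

# Synthetic belt image with three "seeds"
img = np.zeros((40, 200), dtype=np.uint8)
for x in (30, 95, 160):
    img[18:22, x:x + 4] = 255
print(seed_spacings(img))   # approximately [65, 65]
```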

  3. Different source image fusion based on FPGA

    NASA Astrophysics Data System (ADS)

    Luo, Xiao; Piao, Yan

    2016-03-01

    Video image fusion combines video obtained from different image sensors so that the sensors complement each other, producing video that is rich in information and well suited to the human visual system. Infrared cameras penetrate harsh environments such as smoke, fog and low-light conditions, but capture little image detail and do not match the human visual system well. Visible-light imaging yields detailed, high-resolution images suited to the visual system, but is easily affected by the external environment. Fusing infrared and visible video involves algorithms of high complexity and computational cost, with large memory footprints and high clock-rate requirements; most implementations are in software (e.g., C or C++), while hardware-platform implementations are less common. In this paper, based on the imaging characteristics of infrared and visible-light images, software and hardware are combined: the registration parameters are obtained in MATLAB, and the gray-level weighted average method is implemented on the hardware platform to perform the fusion. The resulting fused image effectively increases the amount of information acquired from the scene.
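    The gray-level weighted average method mapped onto the FPGA is simple pixel arithmetic on the registered image pair. A minimal software reference of that arithmetic (weights and frame sizes are illustrative, not from the paper):

```python
import numpy as np

def weighted_average_fusion(visible_gray, infrared_gray, w_vis=0.5):
    """Pixel-wise gray-level weighted average fusion of a registered
    visible/infrared pair, the same arithmetic the paper maps onto the
    FPGA (weights here are illustrative)."""
    vis = visible_gray.astype(np.float32)
    ir = infrared_gray.astype(np.float32)
    fused = w_vis * vis + (1.0 - w_vis) * ir
    return np.clip(fused, 0, 255).astype(np.uint8)

# Example with random 8-bit frames standing in for registered video fields
rng = np.random.default_rng(0)
vis = rng.integers(0, 256, (480, 640), dtype=np.uint8)
ir = rng.integers(0, 256, (480, 640), dtype=np.uint8)
fused = weighted_average_fusion(vis, ir, w_vis=0.6)
```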

  4. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionality. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
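    The scalability analysis above is grounded in Amdahl's law, which bounds the speedup of a workload whose parallelizable fraction is p when run on n cores. A short numeric illustration (the fractions below are illustrative, not values from the paper):

```python
def amdahl_speedup(parallel_fraction, n_cores):
    """Amdahl's-law speedup S(n) = 1 / ((1 - p) + p / n) for a workload
    whose parallelizable fraction is p, as used in the scalability
    analysis above."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_cores)

# A near-12x speedup on 12 cores implies an almost fully parallel workload:
for p in (0.90, 0.99, 0.999):
    print(p, round(amdahl_speedup(p, 12), 2))   # ~5.71, 10.81, 11.87
```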

  5. Initial experience with a handheld device digital imaging and communications in medicine viewer: OsiriX mobile on the iPhone.

    PubMed

    Choudhri, Asim F; Radvany, Martin G

    2011-04-01

    Medical imaging is commonly used to diagnose many emergent conditions, as well as to plan treatment. Digital images can be reviewed on almost any computing platform. Modern mobile phones and handheld devices are portable computing platforms with robust software programming interfaces, powerful processors, and high-resolution displays. OsiriX mobile, a new Digital Imaging and Communications in Medicine viewing program, is available for the iPhone/iPod touch platform. This raises the possibility of mobile review of diagnostic medical images to expedite diagnosis and treatment planning using a commercial off-the-shelf solution, facilitating communication among radiologists and referring clinicians.

  6. Image processing in biodosimetry: A proposal of a generic free software platform.

    PubMed

    Dumpelmann, Matthias; Cadena da Matta, Mariel; Pereira de Lemos Pinto, Marcela Maria; de Salazar E Fernandes, Thiago; Borges da Silva, Edvane; Amaral, Ademir

    2015-08-01

    The scoring of chromosome aberrations is the most reliable biological method for evaluating individual exposure to ionizing radiation. However, microscopic analysis of human metaphase chromosomes, generally employed to identify aberrations, mainly dicentrics (chromosomes with two centromeres), is a laborious task. This method is time consuming and its application in biological dosimetry would be almost impossible in the case of a large-scale radiation incident. In this project, a generic software platform was enhanced for automatic chromosome image processing, starting from a framework originally developed for the European Union Framework V project Simbio for applications in the area of source localization from electroencephalographic signals. The platform's capability is demonstrated by a study comparing automatic segmentation strategies for chromosomes in microscopic images.

  7. Distributed nuclear medicine applications using World Wide Web and Java technology.

    PubMed

    Knoll, P; Höll, K; Mirzaei, S; Koriska, K; Köhn, H

    2000-01-01

    At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK) including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension was used to achieve a robust, platform-independent and network-centric solution. Medical image processing software based on this technology is presented and adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT).
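    The record above demonstrates Java's performance with an iterative SPECT reconstruction algorithm but does not spell the algorithm out. As a generic stand-in, the sketch below implements the classic maximum-likelihood expectation-maximization (MLEM) update on a tiny synthetic system matrix; it is written in Python purely for illustration rather than in the paper's Java, and it is not the authors' implementation.

```python
import numpy as np

def mlem(system_matrix, sinogram, n_iter=20):
    """Toy MLEM update, a standard iterative reconstruction scheme:
    x <- x / (A^T 1) * A^T (y / (A x))."""
    A = system_matrix
    x = np.ones(A.shape[1])                    # flat initial image
    sens = A.T @ np.ones(A.shape[0])           # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(sinogram, proj, out=np.zeros_like(proj),
                          where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny synthetic problem: random nonnegative system matrix and phantom
rng = np.random.default_rng(0)
A = rng.random((40, 16))
x_true = rng.random(16)
y = A @ x_true
x_hat = mlem(A, y, n_iter=200)   # approaches x_true for this noiseless case
```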

  8. A Versatile Phenotyping System and Analytics Platform Reveals Diverse Temporal Responses to Water Availability in Setaria.

    PubMed

    Fahlgren, Noah; Feldman, Maximilian; Gehan, Malia A; Wilson, Melinda S; Shyu, Christine; Bryant, Douglas W; Hill, Steven T; McEntee, Colton J; Warnasooriya, Sankalpi N; Kumar, Indrajit; Ficor, Tracy; Turnipseed, Stephanie; Gilbert, Kerrigan B; Brutnell, Thomas P; Carrington, James C; Mockler, Todd C; Baxter, Ivan

    2015-10-05

    Phenotyping has become the rate-limiting step in using large-scale genomic data to understand and improve agricultural crops. Here, the Bellwether Phenotyping Platform for controlled-environment plant growth and automated multimodal phenotyping is described. The system has capacity for 1140 plants, which pass daily through stations to record fluorescence, near-infrared, and visible images. Plant Computer Vision (PlantCV) was developed as open-source, hardware platform-independent software for quantitative image analysis. In a 4-week experiment, wild Setaria viridis and domesticated Setaria italica had fundamentally different temporal responses to water availability. While both lines produced similar levels of biomass under limited water conditions, Setaria viridis maintained the same water-use efficiency under water replete conditions, while Setaria italica shifted to less efficient growth. Overall, the Bellwether Phenotyping Platform and PlantCV software detected significant effects of genotype and environment on height, biomass, water-use efficiency, color, plant architecture, and tissue water status traits. All ∼ 79,000 images acquired during the course of the experiment are publicly available. Copyright © 2015 The Author. Published by Elsevier Inc. All rights reserved.

  9. Uses of megavoltage digital tomosynthesis in radiotherapy

    NASA Astrophysics Data System (ADS)

    Sarkar, Vikren

    With the advent of intensity modulated radiotherapy, radiation treatment plans are becoming more conformal to the tumor, with decreasing margins. It is therefore of prime importance that the patient be positioned correctly prior to treatment, and image-guided treatment is necessary for intensity modulated radiotherapy plans to be implemented successfully. Current advanced imaging devices require costly hardware and software upgrades, and radiation imaging solutions, such as cone beam computed tomography, may introduce extra radiation dose to the patient in order to acquire better quality images. Thus, there is a need to extend the capabilities of existing imaging devices while reducing cost and radiation dose. Existing electronic portal imaging devices can be used to generate computed tomography-like tomograms from projection images acquired over a small angle using the technique of cone-beam digital tomosynthesis. Since it uses a fraction of the images required for computed tomography reconstruction, this technique correspondingly delivers only a fraction of the imaging dose to the patient. Furthermore, cone-beam digital tomosynthesis can be offered as a software-only solution as long as a portal imaging device is available. In this study, the feasibility of performing digital tomosynthesis using individually acquired megavoltage images from a charge-coupled-device-based electronic portal imaging device was investigated. Three digital tomosynthesis reconstruction algorithms, shift-and-add, filtered back-projection, and the simultaneous algebraic reconstruction technique, were compared with respect to final image quality and the radiation dose delivered during imaging. A software platform, DART, was created using a combination of the Matlab and C++ languages. The platform allows for the registration of a reference Cone Beam Digital Tomosynthesis (CBDT) image against a daily acquired set to determine how to shift the patient prior to treatment. Finally, the software was extended to investigate whether the digital tomosynthesis dataset could be used in an adaptive radiotherapy regimen, using the Pinnacle treatment planning software to recalculate the delivered dose. The feasibility study showed that the megavoltage CBDT visually agreed with corresponding megavoltage computed tomography images. The comparative study showed that the best compromise between imaging quality and imaging dose is obtained when 11 projection images, acquired over an imaging angle of 40°, are used with the filtered back-projection algorithm. DART was successfully used to register reference and daily image sets to within 1 mm in-plane and 2.5 mm out-of-plane. The DART platform was also effectively used to generate updated files that the Pinnacle treatment planning system used to calculate the updated dose in a rigidly shifted patient. These doses were then used to calculate a cumulative dose distribution that could be used by a physician as a reference to decide when the treatment plan should be updated. In conclusion, this study showed that a software-only solution can extend existing electronic portal imaging devices to function as cone-beam digital tomosynthesis devices and meet the daily imaging requirements of image-guided intensity modulated radiotherapy treatments. The DART platform also has the potential to be used as part of an adaptive radiotherapy solution.
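    Of the three reconstruction algorithms compared above, shift-and-add is the simplest to illustrate: each projection is shifted in proportion to its acquisition angle and the depth of the plane of interest, and the shifted stack is averaged. The sketch below uses an idealized parallel-beam geometry and synthetic projections, so it is only a conceptual stand-in for the megavoltage cone-beam setup studied, not DART code.

```python
import numpy as np

def shift_and_add(projections, angles_deg, depth_px):
    """Toy parallel-beam shift-and-add tomosynthesis: each projection is
    shifted in proportion to tan(angle) and the selected plane depth, then
    the stack is averaged, so structures in that plane reinforce while
    off-plane structures blur out."""
    recon = np.zeros_like(projections[0], dtype=np.float64)
    for proj, ang in zip(projections, angles_deg):
        shift = -int(round(depth_px * np.tan(np.radians(ang))))
        recon += np.roll(proj, shift, axis=1)
    return recon / len(projections)

# 11 projections over a 40 degree arc (the study's preferred compromise),
# each containing a synthetic point object at depth 10 px
angles = np.linspace(-20, 20, 11)
projections = []
for ang in angles:
    p = np.zeros((64, 64))
    p[32, 32 + int(round(10 * np.tan(np.radians(ang))))] = 1.0
    projections.append(p)
plane = shift_and_add(projections, angles, depth_px=10)   # peak at (32, 32)
```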

  10. Cryo-Imaging and Software Platform for Analysis of Molecular MR Imaging of Micrometastases

    PubMed Central

    Qutaish, Mohammed Q.; Zhou, Zhuxian; Prabhu, David; Liu, Yiqiao; Busso, Mallory R.; Izadnegahdar, Donna; Gargesha, Madhusudhana; Lu, Hong; Lu, Zheng-Rong

    2018-01-01

    We created and evaluated a preclinical, multimodality imaging, and software platform to assess molecular imaging of small metastases. This included experimental methods (e.g., GFP-labeled tumor and high resolution multispectral cryo-imaging), nonrigid image registration, and interactive visualization of imaging agent targeting. We describe technological details earlier applied to GFP-labeled metastatic tumor targeting by molecular MR (CREKA-Gd) and red fluorescent (CREKA-Cy5) imaging agents. Optimized nonrigid cryo-MRI registration enabled nonambiguous association of MR signals to GFP tumors. Interactive visualization of out-of-RAM volumetric image data allowed one to zoom to a GFP-labeled micrometastasis, determine its anatomical location from color cryo-images, and establish the presence/absence of targeted CREKA-Gd and CREKA-Cy5. In a mouse with >160 GFP-labeled tumors, we determined that in the MR images every tumor in the lung >0.3 mm2 had visible signal and that some metastases as small as 0.1 mm2 were also visible. More tumors were visible in CREKA-Cy5 than in CREKA-Gd MRI. Tape transfer method and nonrigid registration allowed accurate (<11 μm error) registration of whole mouse histology to corresponding cryo-images. Histology showed inflammation and necrotic regions not labeled by imaging agents. This mouse-to-cells multiscale and multimodality platform should uniquely enable more informative and accurate studies of metastatic cancer imaging and therapy. PMID:29805438

  11. The NifTK software platform for image-guided interventions: platform overview and NiftyLink messaging.

    PubMed

    Clarkson, Matthew J; Zombori, Gergely; Thompson, Steve; Totz, Johannes; Song, Yi; Espak, Miklos; Johnsen, Stian; Hawkes, David; Ourselin, Sébastien

    2015-03-01

    To perform research in image-guided interventions, researchers need a wide variety of software components, and assembling these components into a flexible and reliable system can be a challenging task. In this paper, the NifTK software platform is presented. A key focus has been high-performance streaming of stereo laparoscopic video data, ultrasound data and tracking data simultaneously. A new messaging library called NiftyLink is introduced that uses the OpenIGTLink protocol and provides the user with easy-to-use asynchronous two-way messaging, high reliability and comprehensive error reporting. A small suite of applications called NiftyGuide has been developed, containing lightweight applications for grabbing data, currently from position trackers and ultrasound scanners. These applications use NiftyLink to stream data into NiftyIGI, which is a workstation-based application, built on top of MITK, for visualisation and user interaction. Design decisions, performance characteristics and initial applications are described in detail. NiftyLink was tested for latency when transmitting images, tracking data, and interleaved imaging and tracking data. NiftyLink can transmit tracking data at 1,024 frames per second (fps) with latency of 0.31 milliseconds, and 512 KB images with latency of 6.06 milliseconds at 32 fps. NiftyIGI was tested, receiving stereo high-definition laparoscopic video at 30 fps, tracking data from 4 rigid bodies at 20-30 fps and ultrasound data at 20 fps with rendering refresh rates between 2 and 20 Hz with no loss of user interaction. These packages form part of the NifTK platform and have proven to be successful in a variety of image-guided surgery projects. Code and documentation for the NifTK platform are available from http://www.niftk.org . NiftyLink is provided open-source under a BSD license and available from http://github.com/NifTK/NiftyLink . The code for this paper is tagged IJCARS-2014.

  12. ACQ4: an open-source software platform for data acquisition and analysis in neurophysiology research

    PubMed Central

    Campagnola, Luke; Kratz, Megan B.; Manis, Paul B.

    2014-01-01

    The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org. PMID:24523692

  13. Interfaces and Integration of Medical Image Analysis Frameworks: Challenges and Opportunities.

    PubMed

    Covington, Kelsie; McCreedy, Evan S; Chen, Min; Carass, Aaron; Aucoin, Nicole; Landman, Bennett A

    2010-05-25

    Clinical research with medical imaging typically involves large-scale data analysis with interdependent software toolsets tied together in a processing workflow. Numerous, complementary platforms are available, but these are not readily compatible in terms of workflows or data formats. Both image scientists and clinical investigators could benefit from using the framework which is a most natural fit to the specific problem at hand, but pragmatic choices often dictate that a compromise platform is used for collaboration. Manual merging of platforms through carefully tuned scripts has been effective, but exceptionally time consuming and is not feasible for large-scale integration efforts. Hence, the benefits of innovation are constrained by platform dependence. Removing this constraint via integration of algorithms from one framework into another is the focus of this work. We propose and demonstrate a light-weight interface system to expose parameters across platforms and provide seamless integration. In this initial effort, we focus on four platforms Medical Image Analysis and Visualization (MIPAV), Java Image Science Toolkit (JIST), command line tools, and 3D Slicer. We explore three case studies: (1) providing a system for MIPAV to expose internal algorithms and utilize these algorithms within JIST, (2) exposing JIST modules through self-documenting command line interface for inclusion in scripting environments, and (3) detecting and using JIST modules in 3D Slicer. We review the challenges and opportunities for light-weight software integration both within development language (e.g., Java in MIPAV and JIST) and across languages (e.g., C/C++ in 3D Slicer and shell in command line tools).

  14. Cloud-enabled microscopy and droplet microfluidic platform for specific detection of Escherichia coli in water.

    PubMed

    Golberg, Alexander; Linshiz, Gregory; Kravets, Ilia; Stawski, Nina; Hillson, Nathan J; Yarmush, Martin L; Marks, Robert S; Konry, Tania

    2014-01-01

    We report an all-in-one platform - ScanDrop - for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a "cloud" network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2-4 days for other currently available standard detection methods.

  15. Cloud-Enabled Microscopy and Droplet Microfluidic Platform for Specific Detection of Escherichia coli in Water

    PubMed Central

    Kravets, Ilia; Stawski, Nina; Hillson, Nathan J.; Yarmush, Martin L.; Marks, Robert S.; Konry, Tania

    2014-01-01

    We report an all-in-one platform – ScanDrop – for the rapid and specific capture, detection, and identification of bacteria in drinking water. The ScanDrop platform integrates droplet microfluidics, a portable imaging system, and cloud-based control software and data storage. The cloud-based control software and data storage enables robotic image acquisition, remote image processing, and rapid data sharing. These features form a “cloud” network for water quality monitoring. We have demonstrated the capability of ScanDrop to perform water quality monitoring via the detection of an indicator coliform bacterium, Escherichia coli, in drinking water contaminated with feces. Magnetic beads conjugated with antibodies to E. coli antigen were used to selectively capture and isolate specific bacteria from water samples. The bead-captured bacteria were co-encapsulated in pico-liter droplets with fluorescently-labeled anti-E. coli antibodies, and imaged with an automated custom designed fluorescence microscope. The entire water quality diagnostic process required 8 hours from sample collection to online-accessible results compared with 2–4 days for other currently available standard detection methods. PMID:24475107

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Apte, A; Veeraraghavan, H; Oh, J

    Purpose: To present an open source and free platform to facilitate radiomics research: the “Radiomics toolbox” in CERR. Method: There is a scarcity of open source tools that support end-to-end modeling of image features to predict patient outcomes. The “Radiomics toolbox” strives to fill the need for such a software platform. The platform supports (1) import of various kinds of image modalities like CT, PET, MR, SPECT, US; (2) contouring tools to delineate structures of interest; (3) extraction and storage of image based features like 1st order statistics, gray-scale co-occurrence and zone-size matrix based texture features and shape features; and (4) statistical analysis. Statistical analysis of the extracted features is supported with basic functionality that includes univariate correlations and Kaplan-Meier curves and advanced functionality that includes feature reduction and multivariate modeling. The graphical user interface and the data management are implemented in Matlab for ease of development and readability of code and features for a wide audience. Open-source software developed with other programming languages is integrated to enhance various components of this toolbox. For example: Java-based DCM4CHE for import of DICOM, R for statistical analysis. Results: The Radiomics toolbox will be distributed as open source, GNU-licensed software. The toolbox was prototyped for modeling an oropharyngeal PET dataset at MSKCC. The analysis will be presented in a separate paper. Conclusion: The Radiomics Toolbox provides an extensible platform for extracting and modeling image features. To emphasize new uses of CERR for radiomics and image-based research, we have changed the name from the “Computational Environment for Radiotherapy Research” to the “Computational Environment for Radiological Research”.
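    The texture features listed in item (3) are built on gray-level co-occurrence statistics. The toolbox itself is MATLAB-based, so the snippet below is an independent Python illustration of computing a few co-occurrence features for a region of interest with scikit-image (function names as of scikit-image 0.19); the synthetic ROI and quantization settings are arbitrary.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(roi, levels=32):
    """Gray-level co-occurrence texture features for a 2-D region of
    interest: a small Python stand-in for the kind of texture features the
    (MATLAB-based) toolbox extracts."""
    roi = roi.astype(np.float64)
    span = roi.max() - roi.min() + 1e-12
    q = np.clip((roi - roi.min()) / span * (levels - 1), 0,
                levels - 1).astype(np.uint8)     # quantize to few gray levels
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

roi = np.random.default_rng(0).integers(0, 255, (64, 64))
print(glcm_features(roi))
```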

  17. Development of an optoelectronic holographic platform for otolaryngology applications

    NASA Astrophysics Data System (ADS)

    Harrington, Ellery; Dobrev, Ivo; Bapat, Nikhil; Flores, Jorge Mauricio; Furlong, Cosme; Rosowski, John; Cheng, Jeffery Tao; Scarpino, Chris; Ravicz, Michael

    2010-08-01

    In this paper, we present advances on our development of an optoelectronic holographic computing platform with the ability to quantitatively measure full-field-of-view nanometer-scale movements of the tympanic membrane (TM). These measurements can facilitate otologists' ability to study and diagnose hearing disorders in humans. The holographic platform consists of a laser delivery system and an otoscope. The control software, called LaserView, is written in Visual C++ and handles communication and synchronization between hardware components. It provides a user-friendly interface to allow viewing of holographic images with several tools to automate holography-related tasks and facilitate hardware communication. The software uses a series of concurrent threads to acquire images, control the hardware, and display quantitative holographic data at video rates and in two modes of operation: optoelectronic holography and lensless digital holography. The holographic platform has been used to perform experiments on several live and post-mortem specimens, and is to be deployed in a medical research environment with future developments leading to its eventual clinical use.

  18. PScan 1.0: flexible software framework for polygon based multiphoton microscopy

    NASA Astrophysics Data System (ADS)

    Li, Yongxiao; Lee, Woei Ming

    2016-12-01

    Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy has significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, there have been several flexible software platforms catering to custom-built microscopy systems, e.g. ScanImage, HelioScan and MicroManager, that perform at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB image processing toolbox. It has the capability to communicate directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and is flexible enough for users to adapt it to their high speed imaging systems.

  19. GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array

    PubMed Central

    Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.

    2014-01-01

    Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080
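    The beamforming workload described above is data-parallel across image pixels, which is what the GPU implementation exploits. The sketch below shows the underlying per-pixel delay-and-sum operation on the CPU with NumPy, under simplified assumptions (receive-only focusing with a fixed transmit origin, no Hadamard decoding or apodization); it indicates the kind of computation being parallelized rather than the authors' implementation.

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, pixels_x, pixels_z):
    """CPU sketch of per-pixel delay-and-sum beamforming: every image pixel
    independently sums the channel data sampled at its round-trip delay.
    Geometry is simplified (single transmit origin at the array center)."""
    n_samples, n_elems = rf.shape
    image = np.zeros((len(pixels_z), len(pixels_x)))
    for iz, z in enumerate(pixels_z):
        for ix, x in enumerate(pixels_x):
            # transmit path from origin plus receive path to each element
            d = z + np.sqrt((x - element_x) ** 2 + z ** 2)
            idx = np.round(d / c * fs).astype(int)
            valid = idx < n_samples
            image[iz, ix] = rf[idx[valid], np.nonzero(valid)[0]].sum()
    return image

# Tiny synthetic aperture: 8 elements, white-noise RF data
rng = np.random.default_rng(0)
rf = rng.normal(size=(2048, 8))
elem_x = np.linspace(-3.5e-3, 3.5e-3, 8)
img = delay_and_sum(rf, elem_x, fs=40e6, c=1540.0,
                    pixels_x=np.linspace(-5e-3, 5e-3, 32),
                    pixels_z=np.linspace(5e-3, 25e-3, 64))
```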

  20. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcia, Marie-Paule, E-mail: marie-paule.garcia@univ-brest.fr; Villoing, Daphnée; McKay, Erin

    Purpose: The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. Methods: The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit GATE offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on GATE to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores user’s imaging requirements and generates automatically command files used as input for GATE. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant GATE input files are generated for the virtual patient model and associated pharmacokinetics. Results: Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body “step and shoot” acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.

  1. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry.

    PubMed

    Garcia, Marie-Paule; Villoing, Daphnée; McKay, Erin; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel

    2015-12-01

    The TestDose platform was developed to generate scintigraphic imaging protocols and associated dosimetry by Monte Carlo modeling. TestDose is part of a broader project (www.dositest.com) whose aim is to identify the biases induced by different clinical dosimetry protocols. The TestDose software allows handling the whole pipeline from virtual patient generation to resulting planar and SPECT images and dosimetry calculations. The originality of their approach relies on the implementation of functional segmentation for the anthropomorphic model representing a virtual patient. Two anthropomorphic models are currently available: 4D XCAT and ICRP 110. A pharmacokinetic model describes the biodistribution of a given radiopharmaceutical in each defined compartment at various time-points. The Monte Carlo simulation toolkit gate offers the possibility to accurately simulate scintigraphic images and absorbed doses in volumes of interest. The TestDose platform relies on gate to reproduce precisely any imaging protocol and to provide reference dosimetry. For image generation, TestDose stores user's imaging requirements and generates automatically command files used as input for gate. Each compartment is simulated only once and the resulting output is weighted using pharmacokinetic data. Resulting compartment projections are aggregated to obtain the final image. For dosimetry computation, emission data are stored in the platform database and relevant gate input files are generated for the virtual patient model and associated pharmacokinetics. Two samples of software runs are given to demonstrate the potential of TestDose. A clinical imaging protocol for the Octreoscan™ therapeutical treatment was implemented using the 4D XCAT model. Whole-body "step and shoot" acquisitions at different times postinjection and one SPECT acquisition were generated within reasonable computation times. Based on the same Octreoscan™ kinetics, a dosimetry computation performed on the ICRP 110 model is also presented. The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.
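    A key efficiency point in both records above is that each compartment is simulated only once with GATE, and the stored output is then weighted by the pharmacokinetic activity at each time point before the compartment projections are aggregated. The sketch below illustrates just that weighting-and-summing step, with hypothetical compartment names and mono-exponential kinetics; it is not TestDose code.

```python
import numpy as np

def aggregate_projections(compartment_projections, activities_at_t):
    """Combine per-compartment projections, each simulated once with unit
    activity, into the image at a given time point by weighting each with
    its pharmacokinetic activity (a minimal sketch of the aggregation step;
    inputs are illustrative)."""
    total = np.zeros_like(next(iter(compartment_projections.values())),
                          dtype=np.float64)
    for name, proj in compartment_projections.items():
        total += activities_at_t[name] * proj
    return total

# Two hypothetical compartments with mono-exponential washout at t = 4 h
rng = np.random.default_rng(0)
projs = {"liver": rng.random((128, 128)), "kidneys": rng.random((128, 128))}
t_h = 4.0
activities = {"liver": 100.0 * np.exp(-0.1 * t_h),
              "kidneys": 60.0 * np.exp(-0.3 * t_h)}
image_at_t = aggregate_projections(projs, activities)
```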

  2. A system for the real-time display of radar and video images of targets

    NASA Technical Reports Server (NTRS)

    Allen, W. W.; Burnside, W. D.

    1990-01-01

    Described here is a software and hardware system for the real-time display of radar and video images for use in a measurement range. The main purpose is to give the reader a clear idea of the software and hardware design and its functions. This system is designed around a Tektronix XD88-30 graphics workstation, used to display radar images superimposed on video images of the actual target. The system's purpose is to provide a platform for the analysis and documentation of radar images and their associated targets in a menu-driven, user-oriented environment.

  3. Histostitcher™: An informatics software platform for reconstructing whole-mount prostate histology using the extensible imaging platform framework

    PubMed Central

    Toth, Robert J.; Shih, Natalie; Tomaszewski, John E.; Feldman, Michael D.; Kutter, Oliver; Yu, Daphne N.; Paulus, John C.; Paladini, Ginaluca; Madabhushi, Anant

    2014-01-01

    Context: Co-registration of ex-vivo histologic images with pre-operative imaging (e.g., magnetic resonance imaging [MRI]) can be used to align and map disease extent, and to identify quantitative imaging signatures. However, ex-vivo histology images are frequently sectioned into quarters prior to imaging. Aims: This work presents Histostitcher™, a software system designed to create a pseudo whole mount histology section (WMHS) from a stitching of four individual histology quadrant images. Materials and Methods: Histostitcher™ uses user-identified fiducials on the boundary of two quadrants to stitch such quadrants. An original prototype of Histostitcher™ was designed using the Matlab programming language. However, clinical use was limited due to slow performance, computer memory constraints and an inefficient workflow. The latest version was created using the extensible imaging platform (XIP™) architecture in the C++ programming language. A fast graphics processing unit renderer was designed to intelligently cache the visible parts of the histology quadrants, and the workflow was significantly improved to allow modifying existing fiducials, fast transformations of the quadrants and saving/loading sessions. Results: The new stitching platform yielded a significantly more efficient workflow and reconstruction than the previous prototype. It was tested on a traditional desktop computer, a Windows 8 Surface Pro tablet device and a 27 inch multi-touch display, with little performance difference between the different devices. Conclusions: Histostitcher™ is a fast, efficient framework for reconstructing pseudo WMHS from individually imaged quadrants. The highly modular XIP™ framework was used to develop an intuitive interface and future work will entail mapping the disease extent from the pseudo WMHS onto pre-operative MRI. PMID:24843820
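    Stitching two quadrants from user-identified fiducials reduces, at its core, to estimating a rigid transform from corresponding point pairs. The sketch below shows the standard least-squares (Kabsch/Procrustes-style) solution for the 2-D rigid case on synthetic fiducials; it illustrates the principle only and is not Histostitcher™ code.

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Least-squares 2-D rigid transform (rotation R and translation t)
    mapping fiducial points `src` onto `dst`, the basic computation behind
    stitching one quadrant to its neighbour from user-picked fiducials."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Fiducials on the shared boundary of two quadrants (synthetic example)
src = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0]])
theta = np.radians(3.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([120.0, -4.0])
R, t = rigid_transform_2d(src, dst)              # recovers R_true and t
```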

  4. Software designs of image processing tasks with incremental refinement of computation.

    PubMed

    Anastasia, Davide; Andreopoulos, Yiannis

    2010-08-01

    Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent nonincremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
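    The incremental-refinement idea can be illustrated in a few lines: decompose the input into bitplanes, process the most significant planes first, and let each additional plane refine the partial result of a linear operation such as convolution. The sketch below is a simplified stand-in for the authors' bitplane/packing framework, using SciPy's convolution for brevity.

```python
import numpy as np
from scipy.ndimage import convolve

def incremental_convolution(image_u8, kernel, planes_to_use):
    """Bitplane-based incremental refinement: process the most significant
    bitplanes first, so a partial result is available if the cycle budget
    runs out; each additional plane monotonically refines the output toward
    the exact convolution (a sketch of the idea, not the authors' scheme)."""
    result = np.zeros(image_u8.shape, dtype=np.float64)
    for bit in range(7, 7 - planes_to_use, -1):           # MSB first
        plane = ((image_u8 >> bit) & 1).astype(np.float64) * (1 << bit)
        result += convolve(plane, kernel, mode="nearest")
        yield result.copy()                                # refined estimate

kernel = np.ones((3, 3)) / 9.0                             # box blur
img = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
estimates = list(incremental_convolution(img, kernel, planes_to_use=4))
exact = convolve(img.astype(np.float64), kernel, mode="nearest")
errors = [np.abs(e - exact).mean() for e in estimates]     # decreases monotonically
```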

  5. GIFT-Cloud: A data sharing and collaboration platform for medical imaging research.

    PubMed

    Doel, Tom; Shakir, Dzhoshkun I; Pratt, Rosalind; Aertsen, Michael; Moggridge, James; Bellon, Erwin; David, Anna L; Deprest, Jan; Vercauteren, Tom; Ourselin, Sébastien

    2017-02-01

    Clinical imaging data are essential for developing research software for computer-aided diagnosis, treatment planning and image-guided surgery, yet existing systems are poorly suited for data sharing between healthcare and academia: research systems rarely provide an integrated approach for data exchange with clinicians; hospital systems are focused towards clinical patient care with limited access for external researchers; and safe haven environments are not well suited to algorithm development. We have established GIFT-Cloud, a data and medical image sharing platform, to meet the needs of GIFT-Surg, an international research collaboration that is developing novel imaging methods for fetal surgery. GIFT-Cloud also has general applicability to other areas of imaging research. GIFT-Cloud builds upon well-established cross-platform technologies. The Server provides secure anonymised data storage, direct web-based data access and a REST API for integrating external software. The Uploader provides automated on-site anonymisation, encryption and data upload. Gateways provide a seamless process for uploading medical data from clinical systems to the research server. GIFT-Cloud has been implemented in a multi-centre study for fetal medicine research. We present a case study of placental segmentation for pre-operative surgical planning, showing how GIFT-Cloud underpins the research and integrates with the clinical workflow. GIFT-Cloud simplifies the transfer of imaging data from clinical to research institutions, facilitating the development and validation of medical research software and the sharing of results back to the clinical partners. GIFT-Cloud supports collaboration between multiple healthcare and research institutions while satisfying the demands of patient confidentiality, data security and data ownership. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  6. Analyzing huge pathology images with open source software.

    PubMed

    Deroulers, Christophe; Ameisen, David; Badoual, Mathilde; Gerin, Chloé; Granier, Alexandre; Lartaud, Marc

    2013-06-06

    Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides build up a technical challenge since the images occupy often several gigabytes and cannot be fully opened in a computer's memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. The virtual slide(s) for this article can be found here:http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272.

  7. Analyzing huge pathology images with open source software

    PubMed Central

    2013-01-01

    Background Digital pathology images are increasingly used both for diagnosis and research, because slide scanners are nowadays broadly available and because the quantitative study of these images yields new insights in systems biology. However, such virtual slides build up a technical challenge since the images occupy often several gigabytes and cannot be fully opened in a computer’s memory. Moreover, there is no standard format. Therefore, most common open source tools such as ImageJ fail at treating them, and the others require expensive hardware while still being prohibitively slow. Results We have developed several cross-platform open source software tools to overcome these limitations. The NDPITools provide a way to transform microscopy images initially in the loosely supported NDPI format into one or several standard TIFF files, and to create mosaics (division of huge images into small ones, with or without overlap) in various TIFF and JPEG formats. They can be driven through ImageJ plugins. The LargeTIFFTools achieve similar functionality for huge TIFF images which do not fit into RAM. We test the performance of these tools on several digital slides and compare them, when applicable, to standard software. A statistical study of the cells in a tissue sample from an oligodendroglioma was performed on an average laptop computer to demonstrate the efficiency of the tools. Conclusions Our open source software enables dealing with huge images with standard software on average computers. They are cross-platform, independent of proprietary libraries and very modular, allowing them to be used in other open source projects. They have excellent performance in terms of execution speed and RAM requirements. They open promising perspectives both to the clinician who wants to study a single slide and to the research team or data centre who do image analysis of many slides on a computer cluster. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5955513929846272 PMID:23829479

  8. Introducing PLIA: Planetary Laboratory for Image Analysis

    NASA Astrophysics Data System (ADS)

    Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.

    2005-08-01

    We present a graphical software tool developed under IDL software to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and grow for adding new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: Image navigation, photometrical corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  9. Software-codec-based full motion video conferencing on the PC using visual pattern image sequence coding

    NASA Astrophysics Data System (ADS)

    Barnett, Barry S.; Bovik, Alan C.

    1995-04-01

    This paper presents a real time full motion video conferencing system based on the Visual Pattern Image Sequence Coding (VPISC) software codec. The prototype system hardware is comprised of two personal computers, two camcorders, two frame grabbers, and an ethernet connection. The prototype system software has a simple structure. It runs under the Disk Operating System, and includes a user interface, a video I/O interface, an event driven network interface, and a free running or frame synchronous video codec that also acts as the controller for the video and network interfaces. Two video coders have been tested in this system. Simple implementations of Visual Pattern Image Coding and VPISC have both proven to support full motion video conferencing with good visual quality. Future work will concentrate on expanding this prototype to support the motion compensated version of VPISC, as well as encompassing point-to-point modem I/O and multiple network protocols. The application will be ported to multiple hardware platforms and operating systems. The motivation for developing this prototype system is to demonstrate the practicality of software based real time video codecs. Furthermore, software video codecs are not only cheaper, but are more flexible system solutions because they enable different computer platforms to exchange encoded video information without requiring on-board protocol compatible video codec hardware. Software based solutions enable true low cost video conferencing that fits the 'open systems' model of interoperability that is so important for building portable hardware and software applications.

  10. Recent Progress in Optical Biosensors Based on Smartphone Platforms

    PubMed Central

    Geng, Zhaoxin; Zhang, Xiong; Fan, Zhiyuan; Lv, Xiaoqing; Su, Yue; Chen, Hongda

    2017-01-01

    With a rapid improvement of smartphone hardware and software, especially complementary metal oxide semiconductor (CMOS) cameras, many optical biosensors based on smartphone platforms have been presented, which have pushed the development of the point-of-care testing (POCT). Imaging-based and spectrometry-based detection techniques have been widely explored via different approaches. Combined with the smartphone, imaging-based and spectrometry-based methods are currently used to investigate a wide range of molecular properties in chemical and biological science for biosensing and diagnostics. Imaging techniques based on smartphone-based microscopes are utilized to capture microscale analytes, while spectrometry-based techniques are used to probe reactions or changes of molecules. Here, we critically review the most recent progress in imaging-based and spectrometry-based smartphone-integrated platforms that have been developed for chemical experiments and biological diagnosis. We focus on the analytical performance and the complexity for implementation of the platforms. PMID:29068375

  11. Recent Progress in Optical Biosensors Based on Smartphone Platforms.

    PubMed

    Geng, Zhaoxin; Zhang, Xiong; Fan, Zhiyuan; Lv, Xiaoqing; Su, Yue; Chen, Hongda

    2017-10-25

    With a rapid improvement of smartphone hardware and software, especially complementary metal oxide semiconductor (CMOS) cameras, many optical biosensors based on smartphone platforms have been presented, which have pushed the development of the point-of-care testing (POCT). Imaging-based and spectrometry-based detection techniques have been widely explored via different approaches. Combined with the smartphone, imaging-based and spectrometry-based methods are currently used to investigate a wide range of molecular properties in chemical and biological science for biosensing and diagnostics. Imaging techniques based on smartphone-based microscopes are utilized to capture microscale analytes, while spectrometry-based techniques are used to probe reactions or changes of molecules. Here, we critically review the most recent progress in imaging-based and spectrometry-based smartphone-integrated platforms that have been developed for chemical experiments and biological diagnosis. We focus on the analytical performance and the complexity for implementation of the platforms.

  12. IBEX: An open infrastructure software platform to facilitate collaborative work in radiomics

    PubMed Central

    Zhang, Lifei; Fried, David V.; Fave, Xenia J.; Hunter, Luke A.; Court, Laurence E.

    2015-01-01

    Purpose: Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. Methods: The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate whether the relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions. Results: Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. Conclusions: The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation. PMID:25735289

  13. IBEX: an open infrastructure software platform to facilitate collaborative work in radiomics.

    PubMed

    Zhang, Lifei; Fried, David V; Fave, Xenia J; Hunter, Luke A; Yang, Jinzhong; Court, Laurence E

    2015-03-01

    Radiomics, which is the high-throughput extraction and analysis of quantitative image features, has been shown to have considerable potential to quantify the tumor phenotype. However, at present, a lack of software infrastructure has impeded the development of radiomics and its applications. Therefore, the authors developed the imaging biomarker explorer (IBEX), an open infrastructure software platform that flexibly supports common radiomics workflow tasks such as multimodality image data import and review, development of feature extraction algorithms, model validation, and consistent data sharing among multiple institutions. The IBEX software package was developed using the MATLAB and C/C++ programming languages. The software architecture deploys the modern model-view-controller, unit testing, and function handle programming concepts to isolate each quantitative imaging analysis task, to validate whether the relevant data and algorithms are fit for use, and to plug in new modules. On one hand, IBEX is self-contained and ready to use: it has implemented common data importers, common image filters, and common feature extraction algorithms. On the other hand, IBEX provides an integrated development environment on top of MATLAB and C/C++, so users are not limited to its built-in functions. In the IBEX developer studio, users can plug in, debug, and test new algorithms, extending IBEX's functionality. IBEX also supports quality assurance for data and feature algorithms: image data, regions of interest, and feature algorithm-related data can be reviewed, validated, and/or modified. More importantly, two key elements in collaborative workflows, the consistency of data sharing and the reproducibility of calculation results, are embedded in the IBEX workflow: image data, feature algorithms, and model validation including newly developed ones from different users can be easily and consistently shared so that results can be more easily reproduced between institutions. Researchers with a variety of technical skill levels, including radiation oncologists, physicists, and computer scientists, have found the IBEX software to be intuitive, powerful, and easy to use. IBEX can be run on any computer with the Windows operating system and 1 GB RAM. The authors fully validated the implementation of all importers, preprocessing algorithms, and feature extraction algorithms. Windows version 1.0 beta of stand-alone IBEX and IBEX's source code can be downloaded. The authors successfully implemented IBEX, an open infrastructure software platform that streamlines common radiomics workflow tasks. Its transparency, flexibility, and portability can greatly accelerate the pace of radiomics research and pave the way toward successful clinical translation.
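
    For orientation, a minimal sketch of the kind of first-order feature extraction a radiomics workflow performs over a region of interest; IBEX itself is implemented in MATLAB/C++ and its feature set is far richer, so the function below and its feature names are illustrative assumptions only:

      # Minimal first-order radiomics feature sketch; not the IBEX implementation.
      import numpy as np

      def first_order_features(image, roi_mask, bins=64):
          """Compute simple intensity statistics over voxels inside a binary ROI mask."""
          vals = image[roi_mask > 0].astype(np.float64)
          hist, _ = np.histogram(vals, bins=bins)
          p = hist / hist.sum()
          p = p[p > 0]
          return {
              "mean": vals.mean(),
              "std": vals.std(),
              "min": vals.min(),
              "max": vals.max(),
              "skewness": ((vals - vals.mean()) ** 3).mean() / (vals.std() ** 3 + 1e-12),
              "kurtosis": ((vals - vals.mean()) ** 4).mean() / (vals.std() ** 4 + 1e-12),
              "entropy": float(-(p * np.log2(p)).sum()),   # histogram entropy in bits
          }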

  14. Increasing the impact of medical image computing using community-based open-access hackathons: The NA-MIC and 3D Slicer experience.

    PubMed

    Kapur, Tina; Pieper, Steve; Fedorov, Andriy; Fillion-Robin, J-C; Halle, Michael; O'Donnell, Lauren; Lasso, Andras; Ungi, Tamas; Pinter, Csaba; Finet, Julien; Pujol, Sonia; Jagadeesan, Jayender; Tokuda, Junichi; Norton, Isaiah; Estepar, Raul San Jose; Gering, David; Aerts, Hugo J W L; Jakab, Marianna; Hata, Nobuhiko; Ibanez, Luiz; Blezek, Daniel; Miller, Jim; Aylward, Stephen; Grimson, W Eric L; Fichtinger, Gabor; Wells, William M; Lorensen, William E; Schroeder, Will; Kikinis, Ron

    2016-10-01

    The National Alliance for Medical Image Computing (NA-MIC) was launched in 2004 with the goal of investigating and developing an open source software infrastructure for the extraction of information and knowledge from medical images using computational methods. Several leading research and engineering groups participated in this effort that was funded by the US National Institutes of Health through a variety of infrastructure grants. This effort transformed 3D Slicer from an internal, Boston-based, academic research software application into a professionally maintained, robust, open source platform with international leadership and developer and user communities. Critical improvements to the widely used underlying open source libraries and tools (VTK, ITK, CMake, CDash, DCMTK) were an additional consequence of this effort. This project has contributed to close to a thousand peer-reviewed publications and a growing portfolio of US and international funded efforts expanding the use of these tools in new medical computing applications every year. In this editorial, we discuss what we believe are gaps in the way medical image computing is pursued today; how a well-executed research platform can enable discovery, innovation and reproducible science ("Open Science"); and how our quest to build such a software platform has evolved into a productive and rewarding social engineering exercise in building an open-access community with a shared vision. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. HTML5 PivotViewer: high-throughput visualization and querying of image data on the web.

    PubMed

    Taylor, Stephen; Noble, Roger

    2014-09-15

    Visualization and analysis of large numbers of biological images has generated a bottleneck in research. We present HTML5 PivotViewer, a novel, open source, platform-independent viewer making use of the latest web technologies that allows seamless access to images and associated metadata for each image. This provides a powerful method to allow end users to mine their data. Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2. © The Author 2014. Published by Oxford University Press.

  16. A cloud-based multimodality case file for mobile devices.

    PubMed

    Balkman, Jason D; Loehfelm, Thomas W

    2014-01-01

    Recent improvements in Web and mobile technology, along with the widespread use of handheld devices in radiology education, provide unique opportunities for creating scalable, universally accessible, portable image-rich radiology case files. A cloud database and a Web-based application for radiologic images were developed to create a mobile case file with reasonable usability, download performance, and image quality for teaching purposes. A total of 75 radiology cases related to breast, thoracic, gastrointestinal, musculoskeletal, and neuroimaging subspecialties were included in the database. Breast imaging cases are the focus of this article, as they best demonstrate handheld display capabilities across a wide variety of modalities. This case subset also illustrates methods for adapting radiologic content to cloud platforms and mobile devices. Readers will gain practical knowledge about storage and retrieval of cloud-based imaging data, an awareness of techniques used to adapt scrollable and high-resolution imaging content for the Web, and an appreciation for optimizing images for handheld devices. The evaluation of this software demonstrates the feasibility of adapting images from most imaging modalities to mobile devices, even in cases of full-field digital mammograms, where high resolution is required to represent subtle pathologic features. The cloud platform allows cases to be added and modified in real time by using only a standard Web browser with no application-specific software. Challenges remain in developing efficient ways to generate, modify, and upload radiologic and supplementary teaching content to this cloud-based platform. Online supplemental material is available for this article. ©RSNA, 2014.

  17. Software Transition Project Retrospectives and the Application of SEL Effort Estimation Model and Boehm's COCOMO to Complex Software Transition Projects

    NASA Technical Reports Server (NTRS)

    McNeill, Justin

    1995-01-01

    The Multimission Image Processing Subsystem (MIPS) at the Jet Propulsion Laboratory (JPL) has managed transitions of application software sets from one operating system and hardware platform to multiple operating systems and hardware platforms. As a part of these transitions, cost estimates were generated from the personal experience of in-house developers and managers to calculate the total effort required for such projects. Productivity measures have been collected for two such transitions, one very large and the other relatively small in terms of source lines of code. These estimates used a cost estimation model similar to the Software Engineering Laboratory (SEL) Effort Estimation Model. Experience in transitioning software within JPL MIPS has uncovered a high incidence of interface complexity. Interfaces, both internal and external to individual software applications, have contributed to software transition project complexity, and thus to scheduling difficulties and larger than anticipated design work on software to be ported.
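
    For context, a worked sketch of Boehm's Basic COCOMO relations referenced above; the coefficients below are the published organic-mode constants of Basic COCOMO, while the SEL model and JPL's in-house estimates use their own locally calibrated parameters, and the 50 KLOC figure is purely illustrative:

      # Basic COCOMO sketch (organic mode); the SEL model calibrates its own coefficients.
      def cocomo_basic(kloc, a=2.4, b=1.05, c=2.5, d=0.38):
          """Return (effort in person-months, schedule in months) for kloc thousand lines of code."""
          effort = a * kloc ** b          # E = a * KLOC^b
          schedule = c * effort ** d      # T = c * E^d
          return effort, schedule

      # Illustrative only: a hypothetical 50 KLOC transition effort.
      effort_pm, months = cocomo_basic(50.0)
      print(f"effort ~ {effort_pm:.1f} person-months over ~ {months:.1f} months")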

  18. Open source software projects of the caBIG In Vivo Imaging Workspace Software special interest group.

    PubMed

    Prior, Fred W; Erickson, Bradley J; Tarbox, Lawrence

    2007-11-01

    The Cancer Biomedical Informatics Grid (caBIG) program was created by the National Cancer Institute to facilitate sharing of IT infrastructure, data, and applications among the National Cancer Institute-sponsored cancer research centers. The program was launched in February 2004 and now links more than 50 cancer centers. In April 2005, the In Vivo Imaging Workspace was added to promote the use of imaging in cancer clinical trials. At the inaugural meeting, four special interest groups (SIGs) were established. The Software SIG was charged with identifying projects that focus on open-source software for image visualization and analysis. To date, two projects have been defined by the Software SIG. The eXtensible Imaging Platform project has produced a rapid application development environment that researchers may use to create targeted workflows customized for specific research projects. The Algorithm Validation Tools project will provide a set of tools and data structures that will be used to capture measurement information and the associated metadata needed to allow a gold standard to be defined for a given database, against which change analysis algorithms can be tested. Through these and future efforts, the caBIG In Vivo Imaging Workspace Software SIG endeavors to advance imaging informatics and provide new open-source software tools to advance cancer research.

  19. Platform for Postprocessing Waveform-Based NDE

    NASA Technical Reports Server (NTRS)

    Roth, Don

    2008-01-01

    Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP and Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information of the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple set of X-Y paired data in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from the scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse filtering, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
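
    As a hedged sketch of the kind of processing described (not the NASA LabVIEW implementation), the following forms a C-scan image from the peak amplitude at each scan location and computes a frequency-domain transform of one waveform; the array shapes and sampling interval are assumptions:

      # Sketch of C-scan formation from a waveform set; not the NASA LabVIEW software.
      import numpy as np

      def c_scan_peak_amplitude(waveforms):
          """waveforms: (ny, nx, nt) time-domain waves, one per scan location.
          Returns an (ny, nx) C-scan image of peak absolute amplitude."""
          return np.abs(waveforms).max(axis=-1)

      def spectrum(wave, dt):
          """Return (frequencies, magnitude spectrum) of one waveform sampled every dt seconds."""
          freqs = np.fft.rfftfreq(wave.size, dt)
          mag = np.abs(np.fft.rfft(wave))
          return freqs, mag

      # Illustrative usage with synthetic data (assumed 64 x 64 scan, 1024 samples, 100 MHz).
      waves = np.random.randn(64, 64, 1024)
      image = c_scan_peak_amplitude(waves)
      f, m = spectrum(waves[0, 0], dt=1e-8)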

  20. A New Effort for Atmospherical Forecast: Meteorological Image Processing Software (MIPS) for Astronomical Observations

    NASA Astrophysics Data System (ADS)

    Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.

    2016-12-01

    We describe new software (MIPS) for the analysis and image processing of meteorological satellite (Meteosat) data for an astronomical observatory. This software will be able to help make atmospheric forecasts (cloud, humidity, rain) for robotic telescopes using Meteosat data. MIPS uses a Python library for Eumetsat data that aims to be completely open source and is licensed under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.
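
    A minimal sketch of the h5py/numpy/PIL workflow the abstract mentions, assuming a Meteosat channel stored as a 2-D dataset in an HDF5 file; the file name and dataset path are hypothetical and not MIPS's actual layout:

      # Sketch of loading a Meteosat-style channel with h5py and rendering it with PIL.
      # The file name and dataset path below are hypothetical placeholders.
      import h5py
      import numpy as np
      from PIL import Image

      def render_channel(h5_path, dataset, out_png):
          with h5py.File(h5_path, "r") as f:
              data = f[dataset][()].astype(np.float64)   # read the 2-D channel
          # stretch to 0..255 for an 8-bit preview image
          lo, hi = np.nanpercentile(data, [1, 99])
          scaled = np.clip((data - lo) / max(hi - lo, 1e-9), 0.0, 1.0) * 255.0
          Image.fromarray(scaled.astype(np.uint8)).save(out_png)

      # render_channel("meteosat_sample.h5", "channels/IR_108", "ir_108.png")  # hypothetical names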

  1. DART, a platform for the creation and registration of cone beam digital tomosynthesis datasets.

    PubMed

    Sarkar, Vikren; Shi, Chengyu; Papanikolaou, Niko

    2011-04-01

    Digital tomosynthesis is an imaging modality that allows for tomographic reconstructions using only a fraction of the images needed for CT reconstruction. Since it provides tomographic images with a smaller imaging dose delivered to the patient, the technique offers much promise for use in patient positioning prior to radiation delivery. This paper describes a software environment developed to help in the creation of digital tomosynthesis image sets from digital portal images using three different reconstruction algorithms. The software then allows for use of the tomograms for patient positioning or for dose recalculation if shifts are not applied, possibly as part of an adaptive radiotherapy regimen.
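
    For illustration, a classic shift-and-add reconstruction is sketched below; the paper's three reconstruction algorithms are not specified here, so this stands only as a generic example of how a tomosynthesis plane can be formed from shifted projections:

      # Shift-and-add tomosynthesis sketch; DART's actual reconstruction algorithms may differ.
      import numpy as np

      def shift_and_add(projections, shifts_px):
          """Reconstruct one plane from a stack of projections.
          projections: (n_proj, H, W); shifts_px: (n_proj,) lateral shifts (pixels) that
          bring the plane of interest into registration in each projection."""
          plane = np.zeros_like(projections[0], dtype=np.float64)
          for proj, s in zip(projections, shifts_px):
              plane += np.roll(proj, int(round(s)), axis=1)   # integer-pixel shift for simplicity
          return plane / len(projections)

      # The shift for a given reconstruction depth would normally be derived from the known
      # source/detector geometry; here it is simply supplied by the caller.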

  2. A Virtual Radial Arm Maze for the Study of Multiple Memory Systems in a Functional Magnetic Resonance Imaging Environment

    PubMed Central

    Xu, Dongrong; Hao, Xuejun; Wang, Zhishun; Duan, Yunsuo; Liu, Feng; Marsh, Rachel; Yu, Shan; Peterson, Bradley S.

    2015-01-01

    An increasing number of functional brain imaging studies are employing computer-based virtual reality (VR) to study changes in brain activity during the performance of high-level psychological and cognitive tasks. We report the development of a VR radial arm maze that adapts, for human use in a scanning environment, the same general experimental design of behavioral tasks that has been used with remarkable effectiveness for the study of multiple memory systems in rodents. The software platform is independent of specific computer hardware and operating systems, as we aim to provide shared access to this technology by the research community. We hope that doing so will provide greater standardization of software platform and study paradigm that will reduce variability and improve the comparability of findings across studies. We report the details of the design and implementation of this platform and provide information for downloading of the system for demonstration and research applications. PMID:26366052

  3. Missile signal processing common computer architecture for rapid technology upgrade

    NASA Astrophysics Data System (ADS)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as the sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific, and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application may be programmed under existing real-time operating systems using parallel processing software libraries, resulting in highly portable code that can be rapidly migrated to new platforms as processor technology evolves. Use of standardized development tools and third-party software upgrades is enabled, as is rapid upgrade of processing components as improved algorithms are developed. The resulting weapon system will have a superior processing capability over a custom approach at the time of deployment as a result of shorter development cycles and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and can migrate between weapon system variants, enabled by modification simplicity. This paper presents a reference design using the new approach that utilizes an AltiVec PowerPC parallel COTS platform. It uses a VxWorks-based real-time operating system (RTOS), and application code developed using an efficient parallel vector library (PVL). A quantification of computing requirements and a demonstration of the interceptor algorithm operating on this real-time platform are provided.
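
    As one concrete example of the front-end stage mentioned above, a hedged sketch of two-point non-uniformity correction (NUC) follows; it is a textbook formulation, not the interceptor implementation, and the calibration targets are assumptions:

      # Two-point non-uniformity correction (NUC) sketch; not the interceptor's actual front end.
      import numpy as np

      def two_point_nuc(raw, dark, flat, target_low=0.0, target_high=1.0):
          """Correct per-pixel gain/offset using calibration frames taken at two known
          irradiance levels (dark and flat)."""
          raw = raw.astype(np.float64)
          dark = dark.astype(np.float64)
          flat = flat.astype(np.float64)
          gain = (target_high - target_low) / np.maximum(flat - dark, 1e-6)
          offset = target_low - gain * dark
          return gain * raw + offset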

  4. Cancer Slide Digital Archive (CDSA) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    The CDSA is a web-based platform to support the sharing, management and analysis of digital pathology data. The Emory instance currently hosts over 23,000 images from The Cancer Genome Atlas, and the software is being developed within the ITCR grant to be deployable as a digital pathology platform for other labs and Cancer Institutes.

  5. QuantWorm: a comprehensive software package for Caenorhabditis elegans phenotypic assays.

    PubMed

    Jung, Sang-Kyu; Aleman-Meza, Boanerges; Riepe, Celeste; Zhong, Weiwei

    2014-01-01

    Phenotypic assays are crucial in genetics; however, traditional methods that rely on human observation are unsuitable for quantitative, large-scale experiments. Furthermore, there is an increasing need for comprehensive analyses of multiple phenotypes to provide multidimensional information. Here we developed an automated, high-throughput computer imaging system for quantifying multiple Caenorhabditis elegans phenotypes. Our imaging system is composed of a microscope equipped with a digital camera and a motorized stage connected to a computer running the QuantWorm software package. Currently, the software package contains one data acquisition module and four image analysis programs: WormLifespan, WormLocomotion, WormLength, and WormEgg. The data acquisition module collects images and videos. The WormLifespan software counts the number of moving worms by using two time-lapse images; the WormLocomotion software computes the velocity of moving worms; the WormLength software measures worm body size; and the WormEgg software counts the number of eggs. To evaluate the performance of our software, we compared its results with manual measurements. We then demonstrated the application of the QuantWorm software in a drug assay and a genetic assay. Overall, the QuantWorm software provided accurate measurements at high speed. Software source code, executable programs, and sample images are available at www.quantworm.org. Our software package has several advantages over current imaging systems for C. elegans. It is an all-in-one package for quantifying multiple phenotypes. The QuantWorm software is written in Java and its source code is freely available, so it does not require use of commercial software or libraries. It can be run on multiple platforms and easily customized to cope with new methods and requirements.
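
    QuantWorm itself is written in Java; purely as an illustration of the two-image movement count described for WormLifespan, a hedged Python sketch follows, with the difference threshold and minimum blob area as assumptions:

      # Sketch of counting moving worms from two time-lapse images by frame differencing.
      # QuantWorm is a Java package; this only illustrates the idea, and the thresholds are assumptions.
      import numpy as np
      from scipy import ndimage

      def count_moving_objects(img_t0, img_t1, diff_thresh=25.0, min_area=50):
          diff = np.abs(img_t1.astype(np.float64) - img_t0.astype(np.float64))
          moved = diff > diff_thresh                       # pixels that changed between frames
          labels, n = ndimage.label(moved)                 # connected components of changed pixels
          areas = ndimage.sum(moved, labels, index=range(1, n + 1))
          return int(np.sum(np.asarray(areas) >= min_area))  # blobs large enough to be a worm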

  6. JIP: Java image processing on the Internet

    NASA Astrophysics Data System (ADS)

    Wang, Dongyan; Lin, Bo; Zhang, Jun

    1998-12-01

    In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can be easily applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, physics, or other areas such as employee training and charged software consumption.

  7. Open-source platforms for navigated image-guided interventions.

    PubMed

    Ungi, Tamas; Lasso, Andras; Fichtinger, Gabor

    2016-10-01

    Navigation technology is changing the clinical standards in medical interventions by making existing procedures more accurate, and new procedures possible. Navigation is based on preoperative or intraoperative imaging combined with 3-dimensional position tracking of interventional tools registered to the images. Research of navigation technology in medical interventions requires significant engineering efforts. The difficulty of developing such complex systems has been limiting the clinical translation of new methods and ideas. A key to the future success of this field is to provide researchers with platforms that allow rapid implementation of applications with minimal resources spent on reimplementing existing system features. A number of platforms have been already developed that can share data in real time through standard interfaces. Complete navigation systems can be built using these platforms using a layered software architecture. In this paper, we review the most popular platforms, and show an effective way to take advantage of them through an example surgical navigation application. Copyright © 2016 Elsevier B.V. All rights reserved.
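
    A core step such platforms share is registering tracked-tool (fiducial) points to image space; the following is a generic paired-point rigid registration sketch (Kabsch/Procrustes), not the API of any particular platform:

      # Paired-point rigid registration sketch: find R, t mapping tracker-space fiducials
      # onto their image-space counterparts (generic illustration only).
      import numpy as np

      def rigid_register(src, dst):
          """src, dst: (N, 3) corresponding points. Returns (R, t) with dst ~ R @ src + t."""
          src_c = src - src.mean(axis=0)
          dst_c = dst - dst.mean(axis=0)
          H = src_c.T @ dst_c
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          t = dst.mean(axis=0) - R @ src.mean(axis=0)
          return R, t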

  8. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network

    PubMed Central

    Fedorov, Andriy; Beichel, Reinhard; Kalpathy-Cramer, Jayashree; Finet, Julien; Fillion-Robin, Jean-Christophe; Pujol, Sonia; Bauer, Christian; Jennings, Dominique; Fennessy, Fiona; Sonka, Milan; Buatti, John; Aylward, Stephen; Miller, James V.; Pieper, Steve; Kikinis, Ron

    2012-01-01

    Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm, and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer. PMID:22770690

  9. AMIDE: a free software tool for multimodality medical image analysis.

    PubMed

    Loening, Andreas Markus; Gambhir, Sanjiv Sam

    2003-07-01

    Amide's a Medical Image Data Examiner (AMIDE) has been developed as a user-friendly, open-source software tool for displaying and analyzing multimodality volumetric medical images. Central to the package's abilities to simultaneously display multiple data sets (e.g., PET, CT, MRI) and regions of interest is the on-demand data reslicing implemented within the program. Data sets can be freely shifted, rotated, viewed, and analyzed with the program automatically handling interpolation as needed from the original data. Validation has been performed by comparing the output of AMIDE with that of several existing software packages. AMIDE runs on UNIX, Macintosh OS X, and Microsoft Windows platforms, and it is freely available with source code under the terms of the GNU General Public License.
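
    AMIDE implements its on-demand reslicing in C; as a hedged illustration of the same operation, the sketch below resamples a volume on a rotated, shifted grid with trilinear interpolation using SciPy:

      # On-demand reslicing sketch: sample a volume on an arbitrarily rotated/shifted grid
      # with trilinear interpolation. This is only an illustration, not AMIDE's code.
      import numpy as np
      from scipy.ndimage import affine_transform

      def reslice(volume, rotation, offset_voxels, out_shape=None):
          """rotation: 3x3 matrix mapping output voxel indices to input voxel indices;
          offset_voxels: translation in input voxel units."""
          return affine_transform(volume, rotation, offset=offset_voxels,
                                  output_shape=out_shape or volume.shape,
                                  order=1, mode="constant", cval=0.0)  # order=1 -> trilinear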

  10. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    Apples are among the most highly consumed fruits in daily life. However, because of their high damage potential and the strong influence of damage on taste and export value, apple quality has to be assessed before the fruit reaches the consumer's hand. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the positioning of the sample. A graphical user interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system was developed to acquire the images of three samples from each camera and display the image processing results in real time. An image processing algorithm was developed using OpenCV and C++. The software is able to control the hardware system and classify each apple into two grades based on the presence or absence of surface bruises of 5 mm size. The experimental results are promising, and the system, with further modification, could be applicable to industrial production in the near future.
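
    The authors' algorithm is not reproduced here; as a hedged sketch of one way such a bruise check can work, the following thresholds dark regions and keeps blobs at least about 5 mm across, with the intensity threshold and mm-per-pixel scale as assumptions:

      # Sketch of a bruise check: threshold dark regions and keep blobs at least ~5 mm across.
      # Not the authors' algorithm; the intensity threshold and mm-per-pixel scale are assumptions.
      import numpy as np
      from scipy import ndimage

      def has_bruise(gray, dark_thresh=90, mm_per_px=0.2, min_diameter_mm=5.0):
          min_area_px = np.pi * (min_diameter_mm / 2.0 / mm_per_px) ** 2   # area of a 5 mm disc
          candidate = gray < dark_thresh                     # bruises image as darker regions
          labels, n = ndimage.label(candidate)
          if n == 0:
              return False
          areas = ndimage.sum(candidate, labels, index=range(1, n + 1))
          return bool(np.max(areas) >= min_area_px)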

  11. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software by the capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating a posteriori (after acquisition), and interacting with a database. These range from simple annotation interfaces to full onboard data management systems, with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software package are outlined, the ideal software is discussed, and future trends are presented.

  12. Rich internet application system for patient-centric healthcare data management using handheld devices.

    PubMed

    Constantinescu, L; Pradana, R; Kim, J; Gong, P; Fulham, Michael; Feng, D

    2009-01-01

    Rich Internet Applications (RIAs) are an emerging software platform that blurs the line between web service and native application, and is a powerful tool for handheld device deployment. By democratizing health data management and widening its availability, this software platform has the potential to revolutionize telemedicine, clinical practice, medical education and information distribution, particularly in rural areas, and to make patient-centric medical computing a reality. In this paper, we propose a telemedicine application that leverages the ability of a mobile RIA platform to transcode, organise and present textual and multimedia data, which are sourced from medical database software. We adopted a web-based approach to communicate, in real-time, with an established hospital information system via a custom RIA. The proposed solution allows communication between handheld devices and a hospital information system for media streaming with support for real-time encryption, on any RIA enabled platform. We demonstrate our prototype's ability to securely and rapidly access, without installation requirements, medical data ranging from simple textual records to multi-slice PET-CT images and maximum intensity (MIP) projections.

  13. Developing an Interactive Data Visualization Tool to Assess the Impact of Decision Support on Clinical Operations.

    PubMed

    Huber, Timothy C; Krishnaraj, Arun; Monaghan, Dayna; Gaskin, Cree M

    2018-05-18

    Due to mandates from recent legislation, clinical decision support (CDS) software is being adopted by radiology practices across the country. This software provides imaging study decision support for referring providers at the point of order entry. CDS systems produce a large volume of data, providing opportunities for research and quality improvement. In order to better visualize and analyze trends in this data, an interactive data visualization dashboard was created using a commercially available data visualization platform. Following the integration of a commercially available clinical decision support product into the electronic health record, a dashboard was created using a commercially available data visualization platform (Tableau, Seattle, WA). Data generated by the CDS were exported from the data warehouse, where they were stored, into the platform. This allowed for real-time visualization of the data generated by the decision support software. The creation of the dashboard allowed the output from the CDS platform to be more easily analyzed and facilitated hypothesis generation. Integrating data visualization tools into clinical decision support tools allows for easier data analysis and can streamline research and quality improvement efforts.

  14. High-speed laser photoacoustic imaging system combined with a digital ultrasonic imaging platform

    NASA Astrophysics Data System (ADS)

    Zeng, Lvming; Liu, Guodong; Ji, Xuanrong; Ren, Zhong; Huang, Zhen

    2009-07-01

    In the emerging field of combined ultrasound/photoacoustic imaging in biomedical photonics research, we present and demonstrate a high-speed laser photoacoustic imaging system combined with a digital ultrasound imaging platform. In the prototype system, a new B-mode digital ultrasonic imaging system is modified as the hardware platform, with 384 vertical transducer elements. The centre resonance frequency of the piezoelectric transducer is 5.0 MHz, with greater than 70% pulse-echo -6 dB fractional bandwidth. A PCI-6541 modular instrument is used as the hardware control centre of the test system; it features 32 high-speed channels to build a low-skew, multi-channel system. The digital photoacoustic data are transferred to a computer for subsequent reconstruction at a 25 MHz clock frequency. Meanwhile, the control and analysis software was developed in the LabVIEW language on a virtual instrument platform. In the breast tissue experiment, the reconstructed image agrees well with the original sample, and the spatial resolution of the system reaches 0.2 mm with a multi-element synthetic aperture focusing technique. Therefore, the system and method may have significant value in improving the early detection of cancer in the breast and other organs.

  15. CellShape: A user-friendly image analysis tool for quantitative visualization of bacterial cell factories inside.

    PubMed

    Goñi-Moreno, Ángel; Kim, Juhyun; de Lorenzo, Víctor

    2017-02-01

    Visualization of the intracellular constituents of individual bacteria while performing as live biocatalysts is in principle doable through more or less sophisticated fluorescence microscopy. Unfortunately, rigorous quantitation of the wealth of data embodied in the resulting images requires bioinformatic tools that are not widespread within the community; moreover, they are often subject to licensing that impedes software reuse. In this context, we have developed CellShape, a user-friendly platform for image analysis with subpixel precision and a double-threshold segmentation system for quantification of fluorescent signals stemming from single cells. CellShape is entirely coded in Python, a free, open-source programming language with widespread community support. For a developer, CellShape enhances extensibility (ease of software improvements) by acting as an interface to access and use existing Python modules; for an end-user, CellShape presents standalone executable files ready to open without installation. We have adopted this platform to analyse in unprecedented detail the tridimensional distribution of the constituents of the gene expression flow (DNA, RNA polymerase, mRNA and ribosomal proteins) in individual cells of the industrial platform strain Pseudomonas putida KT2440. While the first release of CellShape (v0.8) is readily operational, users and/or developers are enabled to expand the platform further. Copyright © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
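
    CellShape's double-threshold segmentation is described only at a high level in the abstract; the following generic hysteresis-style sketch shows one common way a high and a low threshold can be combined, and is not CellShape's actual code:

      # Generic double-threshold (hysteresis-style) segmentation sketch; the thresholds and
      # connectivity rule are assumptions, not CellShape's exact procedure.
      import numpy as np
      from scipy import ndimage

      def double_threshold(img, low, high):
          """Keep weak (> low) regions only if they contain at least one strong (> high) pixel."""
          strong = img > high
          weak = img > low
          labels, n = ndimage.label(weak)
          if n == 0:
              return np.zeros_like(weak)
          keep = np.unique(labels[strong])      # labels of weak regions touched by strong pixels
          keep = keep[keep > 0]
          return np.isin(labels, keep)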

  16. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling

    NASA Astrophysics Data System (ADS)

    Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.

  17. microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling.

    PubMed

    Comi, Troy J; Neumann, Elizabeth K; Do, Thanh D; Sweedler, Jonathan V

    2017-09-01

    Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
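
    The central geometric step in image-guided MS profiling is mapping microscopy pixel coordinates to instrument stage coordinates; the least-squares affine fit below is a generic sketch of that calibration, not microMS's interface:

      # Sketch of calibrating a pixel -> stage coordinate mapping from fiducial pairs by
      # least-squares affine fit. Generic illustration only, not microMS's actual API.
      import numpy as np

      def fit_affine(pixel_pts, stage_pts):
          """pixel_pts, stage_pts: (N, 2) corresponding points, N >= 3.
          Returns a 2x3 matrix A with stage ~ A @ [x, y, 1]."""
          ones = np.ones((pixel_pts.shape[0], 1))
          P = np.hstack([pixel_pts, ones])                  # (N, 3) homogeneous pixel coords
          A, *_ = np.linalg.lstsq(P, stage_pts, rcond=None) # (3, 2) solution
          return A.T                                        # (2, 3)

      def pixel_to_stage(A, xy):
          return A @ np.append(xy, 1.0)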

  18. Radiology and Enterprise Medical Imaging Extensions (REMIX).

    PubMed

    Erdal, Barbaros S; Prevedello, Luciano M; Qian, Songyue; Demirer, Mutlu; Little, Kevin; Ryu, John; O'Donnell, Thomas; White, Richard D

    2018-02-01

    Radiology and Enterprise Medical Imaging Extensions (REMIX) is a platform originally designed to support both the medical imaging-driven clinical and clinical-research operational needs of the Department of Radiology of The Ohio State University Wexner Medical Center. REMIX accommodates the storage and handling of "big imaging data," as needed for large multi-disciplinary cancer-focused programs. The evolving REMIX platform contains an array of integrated tools/software packages for the following: (1) server and storage management; (2) image reconstruction; (3) digital pathology; (4) de-identification; (5) business intelligence; (6) texture analysis; and (7) artificial intelligence. These capabilities, along with documentation and guidance explaining how to interact with commercial systems (e.g., PACS, EHR, commercial databases) that currently exist in clinical environments, are to be made freely available.

  19. Prototyping machine vision software on the World Wide Web

    NASA Astrophysics Data System (ADS)

    Karantalis, George; Batchelor, Bruce G.

    1998-10-01

    Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised, and these run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software, operating over the Internet or a company-wide Intranet. Thus, there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision system designers.

  20. On-line 3-dimensional confocal imaging in vivo.

    PubMed

    Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M

    2000-09-01

    In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy to use, on-line, interactive features should help to improve the clinical utility of this technology.
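
    As a hedged sketch of the odd/even field separation described above (assuming interlaced grayscale frames with an even number of scan lines stored as a NumPy array), splitting each frame into two fields doubles the number of CMTF samples:

      # Sketch of separating odd/even video fields so each interlaced frame yields two
      # CMTF samples. Assumes frames are (n_frames, rows, cols) with an even row count.
      import numpy as np

      def split_fields(frames):
          """Return a field sequence with twice as many samples as the input frames."""
          even = frames[:, 0::2, :]            # even scan lines of every frame
          odd = frames[:, 1::2, :]             # odd scan lines of every frame
          fields = np.empty((frames.shape[0] * 2,) + even.shape[1:], dtype=frames.dtype)
          fields[0::2] = even
          fields[1::2] = odd
          return fields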

  1. The Digital Slide Archive: A Software Platform for Management, Integration, and Analysis of Histology for Cancer Research.

    PubMed

    Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D

    2017-11-01

    Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 AACR.

  2. Automated phenotype pattern recognition of zebrafish for high-throughput screening.

    PubMed

    Schutera, Mark; Dickmeis, Thomas; Mione, Marina; Peravali, Ravindra; Marcato, Daniel; Reischl, Markus; Mikut, Ralf; Pylatiuk, Christian

    2016-07-03

    In recent years, the zebrafish (Danio rerio) has become a key model organism in genetic and chemical screenings. A growing number of experiments and an expanding interest in zebrafish research make it increasingly essential to automate the distribution of embryos and larvae into standard microtiter plates or other sample holders for screening, often according to phenotypical features. Until now, such sorting processes have been carried out by manual handling of the larvae and manual feature detection. Here, a prototype platform for image acquisition together with classification software is presented. Zebrafish embryos and larvae and their features, such as pigmentation, are detected automatically from the image. Zebrafish of 4 different phenotypes can be classified through pattern recognition at 72 h post fertilization (hpf), allowing the software to classify an embryo into 2 distinct phenotypic classes: wild-type versus variant. The zebrafish phenotypes are classified with an accuracy of 79-99% without any user interaction. A description of the prototype platform and of the algorithms for image processing and pattern recognition is presented.

  3. HTML5 PivotViewer: high-throughput visualization and querying of image data on the web

    PubMed Central

    Taylor, Stephen; Noble, Roger

    2014-01-01

    Motivation: Visualization and analysis of large numbers of biological images has generated a bottleneck in research. We present HTML5 PivotViewer, a novel, open source, platform-independent viewer making use of the latest web technologies that allows seamless access to images and associated metadata for each image. This provides a powerful method to allow end users to mine their data. Availability and implementation: Documentation, examples and links to the software are available from http://www.cbrg.ox.ac.uk/data/pivotviewer/. The software is licensed under GPLv2. Contact: stephen.taylor@imm.ox.ac.uk and roger@coritsu.com PMID:24849578

  4. XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences

    NASA Astrophysics Data System (ADS)

    Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle

    2002-05-01

    We developed a multi-modality image presentation software for display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goal of this pilot project is to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.

  5. Design of a Single-Cell Positioning Controller Using Electroosmotic Flow and Image Processing

    PubMed Central

    Ay, Chyung; Young, Chao-Wang; Chen, Jhong-Yin

    2013-01-01

    The objective of the current research was not only to provide a fast and automatic positioning platform for single cells, but also to improve biomolecular manipulation techniques. In this study, an automatic platform for cell positioning using electroosmotic flow and image processing technology was designed. The platform was developed using a PCI image acquisition interface card for capturing images from a microscope and then transferring them to a computer using human-machine interface software. This software was designed with the Laboratory Virtual Instrument Engineering Workbench, a graphical language, for finding cell positions and viewing the driving trace, and uses the fuzzy logic method for controlling the voltage or duration of an electric field. In experiments on real human leukemic cells (U-937), the cell-positioning success rate achieved by controlling the voltage factor reaches 100% within 5 s. Greater precision is obtained when controlling the time factor, whereby the success rate reaches 100% within 28 s. Advantages in both high speed and high precision are attained if these two voltage and time control methods are combined. The control speed with the combined method is about 5.18 times greater than that achieved by the time method, and the control precision with the combined method is more than five times greater than that achieved by the voltage method. PMID:23698272

  6. A modular and programmable development platform for capsule endoscopy system.

    PubMed

    Khan, Tareq Hasan; Shrestha, Ravi; Wahid, Khan A

    2014-06-01

    The state-of-the-art capsule endoscopy (CE) technology offers painless examination for the patients and the ability to examine the interior of the gastrointestinal tract by a noninvasive procedure for the gastroenterologists. In this work, a modular and flexible CE development system platform consisting of a miniature field programmable gate array (FPGA) based electronic capsule, a microcontroller based portable data recorder unit and computer software is designed and developed. Due to the flexible and reprogrammable nature of the system, various image processing and compression algorithms can be tested in the design without requiring any hardware change. The designed capsule prototype supports various imaging modes including white light imaging (WLI) and narrow band imaging (NBI), and communicates with the data recorder in full duplex fashion, which enables configuring the image size and imaging mode in real time during examination. A low complexity image compressor based on a novel color-space is implemented inside the capsule to reduce the amount of RF transmission data. The data recorder contains a graphical LCD for real time image viewing and SD cards for storing image data. Data can be uploaded to a computer or smartphone by SD card, USB interface or wireless Bluetooth link. Computer software is developed that decompresses and reconstructs images. The fabricated capsule PCBs have a diameter of 16 mm. Ex-vivo animal testing has also been conducted to validate the results.

  7. EOS workstation

    NASA Technical Reports Server (NTRS)

    Leberl, Franz; Karspeck, Milan; Millot, Michel; Maurice, Kelly; Jackson, Matt

    1992-01-01

    This final report summarizes the work done from mid-1989 until January 1992 to develop a prototype set of tools for the analysis of EOS-type images. Such images are characterized by great multiplicity and quantity. A single 'snapshot' of EOS-type imagery may contain several hundred component images, so that at a particular pixel one finds multiple gray values. A prototype EOS sensor, AVIRIS, has 224 gray values at each pixel. The work focused on the ability to utilize very large images and continuously roam through those images, zoom, and hold more than one black-and-white or color image, for example for stereo viewing or for image comparisons. A second focus was the utilization of so-called 'image cubes', where multiple images need to be co-registered and then jointly analyzed, viewed, and manipulated. The target computer platform that was selected was a high-performance graphics superworkstation, the Stardent 3000. This particular platform offered many graphics tools such as the Application Visualization System (AVS) and Dore, but it lacked commercial third-party software for relational databases, image processing, etc. The project was able to cope with these limitations, and a phase-3 activity is currently being negotiated to port the software and enhance it for use with a novel graphics superworkstation to be introduced into the market in the spring of 1993.

  8. Advances in single-cell experimental design made possible by automated imaging platforms with feedback through segmentation.

    PubMed

    Crick, Alex J; Cammarota, Eugenia; Moulang, Katie; Kotar, Jurij; Cicuta, Pietro

    2015-01-01

    Live optical microscopy has become an essential tool for studying the dynamical behaviors and variability of single cells, and cell-cell interactions. However, experiments and data analysis in this area are often extremely labor intensive, and it has often not been achievable or practical to perform properly standardized experiments on a statistically viable scale. We have addressed this challenge by developing automated live imaging platforms to help standardize experiments, increase throughput, and unlock previously impossible experiments. Our real-time cell tracking programs communicate in feedback with microscope and camera control software, and they are highly customizable, flexible, and efficient. As examples of our current research which utilize these automated platforms, we describe two quite different applications: egress-invasion interactions of malaria parasites and red blood cells, and imaging of immune cells which possess high motility and internal dynamics. The automated imaging platforms are able to track a large number of motile cells simultaneously, over hours or even days at a time, greatly increasing data throughput and opening up new experimental possibilities. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Unified Framework for Development, Deployment and Robust Testing of Neuroimaging Algorithms

    PubMed Central

    Joshi, Alark; Scheinost, Dustin; Okuda, Hirohito; Belhachemi, Dominique; Murphy, Isabella; Staib, Lawrence H.; Papademetris, Xenophon

    2011-01-01

    Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software—BioImage Suite (bioimagesuite.org). PMID:21249532

  10. Software components for medical image visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Starreveld, Yves P.; Gobbi, David G.; Finnis, Kirk; Peters, Terence M.

    2001-05-01

    Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy to understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Iris, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system and a frame-based stereotaxy neurosurgery planning tool. The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object-oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
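
    As an illustration of the kind of reusable component described above, the sketch below shows a Python/VTK volume-rendering snippet of the same flavour. It uses present-day VTK class names rather than the authors' component library, and the file name is a placeholder.

    ```python
    # Sketch of a reusable Python/VTK volume-rendering component of the kind the
    # paper describes. Uses current VTK class names; "volume.mhd" is a placeholder.
    import vtk

    def render_volume(filename):
        reader = vtk.vtkMetaImageReader()          # read a MetaImage volume
        reader.SetFileName(filename)

        mapper = vtk.vtkSmartVolumeMapper()
        mapper.SetInputConnection(reader.GetOutputPort())

        color = vtk.vtkColorTransferFunction()     # simple grayscale transfer function
        color.AddRGBPoint(0.0, 0.0, 0.0, 0.0)
        color.AddRGBPoint(255.0, 1.0, 1.0, 1.0)
        opacity = vtk.vtkPiecewiseFunction()
        opacity.AddPoint(0.0, 0.0)
        opacity.AddPoint(255.0, 0.8)

        prop = vtk.vtkVolumeProperty()
        prop.SetColor(color)
        prop.SetScalarOpacity(opacity)
        prop.SetInterpolationTypeToLinear()

        volume = vtk.vtkVolume()
        volume.SetMapper(mapper)
        volume.SetProperty(prop)

        renderer = vtk.vtkRenderer()
        renderer.AddVolume(volume)
        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        window.Render()
        interactor.Start()

    if __name__ == "__main__":
        render_volume("volume.mhd")   # placeholder path
    ```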

  11. FRACOR-software toolbox for deterministic mapping of fracture corridors in oil fields on AutoCAD platform

    NASA Astrophysics Data System (ADS)

    Ozkaya, Sait I.

    2018-03-01

    Fracture corridors are interconnected large fractures in a narrow subvertical tabular array, which usually traverse the entire reservoir vertically and extend for several hundred meters laterally. Fracture corridors, with their huge conductivities, constitute an important element of many fractured reservoirs. Unlike small diffuse fractures, actual fracture corridors must be mapped deterministically for simulation or field development purposes. Fracture corridors can be identified and quantified definitively with borehole image logs and well testing. However, there are rarely sufficient image logs or well tests, and it is necessary to utilize various fracture corridor indicators with varying degrees of reliability. Integration of data from many different sources, in turn, requires a platform with powerful editing and layering capability. Available commercial reservoir characterization software packages, with layering and editing capabilities, can be cost intensive. CAD packages are far more affordable and may easily acquire the versatility and power of commercial software packages with the addition of a small software toolbox. The objective of this communication is to present FRACOR, a software toolbox which enables deterministic 2D fracture corridor mapping and modeling on the AutoCAD platform. The FRACOR toolbox is written in AutoLISP and contains several independent routines to import and integrate available fracture corridor data from an oil field and export the results as text files. The resulting fracture corridor maps consist mainly of fracture corridors with different confidence levels, derived from the combination of static and dynamic data, and exclusion zones where no fracture corridor can exist. The exported text file of fracture corridors from FRACOR can be imported into an upscaling program to generate a fracture grid for dual porosity simulation or used for field development and well planning.

  12. Phenotiki: an open software and hardware platform for affordable and easy image-based phenotyping of rosette-shaped plants.

    PubMed

    Minervini, Massimo; Giuffrida, Mario V; Perata, Pierdomenico; Tsaftaris, Sotirios A

    2017-04-01

    Phenotyping is important to understand plant biology, but current solutions are costly, not versatile or are difficult to deploy. To solve this problem, we present Phenotiki, an affordable system for plant phenotyping that, relying on off-the-shelf parts, provides an easy to install and maintain platform, offering an out-of-the-box experience for a well-established phenotyping need: imaging rosette-shaped plants. The accompanying software (with available source code) processes data originating from our device seamlessly and automatically. Our software relies on machine learning to devise robust algorithms, and includes an automated leaf count obtained from 2D images without the need of depth (3D). Our affordable device (~€200) can be deployed in growth chambers or greenhouses to acquire optical 2D images of up to approximately 60 adult Arabidopsis rosettes concurrently. Data from the device are processed remotely on a workstation or via a cloud application (based on CyVerse). In this paper, we present a proof-of-concept validation experiment on top-view images of 24 Arabidopsis plants in a combination of genotypes that has not been compared previously. Phenotypic analysis with respect to morphology, growth, color and leaf count has not been performed comprehensively before now. We confirm the findings of others on some of the extracted traits, showing that we can phenotype at reduced cost. We also perform extensive validations with external measurements and with higher fidelity equipment, and find no loss in statistical accuracy when we use the affordable setting that we propose. Device set-up instructions and analysis software are publicly available (http://phenotiki.com). © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.

  13. Open source software and low cost sensors for teaching UAV science

    NASA Astrophysics Data System (ADS)

    Kefauver, S. C.; Sanchez-Bragado, R.; El-Haddad, G.; Araus, J. L.

    2016-12-01

    Drones, also known as UASs (unmanned aerial systems), UAVs (unmanned aerial vehicles) or RPAS (remotely piloted aircraft systems), are both useful advanced scientific platforms and recreational toys that are appealing to younger generations. As such, they can make for excellent education tools as well as low-cost scientific research project alternatives. However, the process of going from taking pretty pictures to doing remote sensing science can be daunting if one is presented with only expensive software and sensor options. There are a number of open-source tools and low-cost platform and sensor options available that can provide excellent scientific research results and, by often requiring more user involvement than commercial software and sensors, provide even greater educational benefits. Scale-invariant feature transform (SIFT) based tools are available, such as the Microsoft Image Composite Editor (ICE), which can create quality 2D image mosaics with some motion and terrain adjustments, and VisualSFM (Structure from Motion), which provides full image mosaicking with movement and orthorectification capabilities. RGB image quantification using alternate color space transforms, such as the BreedPix indices, can be calculated via plugins in the open-source software Fiji (http://fiji.sc/Fiji; http://github.com/george-haddad/CIMMYT). Recent analyses of aerial images from UAVs over different vegetation types and environments have shown that RGB metrics can outperform more costly commercial sensors. Specifically, Hue-based pixel counts, the Triangle Greenness Index (TGI), and the Normalized Green Red Difference Index (NGRDI) consistently outperformed NDVI in estimating abiotic and biotic stress impacts on crop health. Also, simple kits are available for NDVI camera conversions. Furthermore, suggestions for multivariate analyses of the different RGB indices in the "R program for statistical computing", such as classification and regression trees, can allow for a more approachable interpretation of results in the classroom.
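
    For reference, two of the RGB indices mentioned above can be computed directly from the red, green and blue channels. The sketch below uses the commonly cited simplified TGI form and an illustrative hue window for green-pixel counting; the thresholds and file name are assumptions, not the values used in the cited analyses.

    ```python
    # Per-pixel RGB indices for vegetation assessment. The simplified TGI form and
    # the green hue window are common conventions, not the authors' exact settings.
    import numpy as np
    from skimage import io, color

    def rgb_indices(path):
        rgb = io.imread(path).astype(float)[..., :3]   # H x W x 3, values 0-255
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        eps = 1e-9
        ngrdi = (g - r) / (g + r + eps)        # Normalized Green Red Difference Index
        tgi = g - 0.39 * r - 0.61 * b          # simplified Triangle Greenness Index
        hsv = color.rgb2hsv(rgb / 255.0)
        hue_deg = hsv[..., 0] * 360.0
        green_fraction = np.mean((hue_deg > 60) & (hue_deg < 180))  # illustrative window
        return ngrdi.mean(), tgi.mean(), green_fraction

    print(rgb_indices("plot_mosaic.png"))      # placeholder file name
    ```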

  14. Pre-Hardware Optimization of Spacecraft Image Processing Software Algorithms and Hardware Implementation

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Flatley, Thomas P.; Hestnes, Phyllis; Jentoft-Nilsen, Marit; Petrick, David J.; Day, John H. (Technical Monitor)

    2001-01-01

    Spacecraft telemetry rates have steadily increased over the last decade, presenting a problem for real-time processing by ground facilities. This paper proposes a solution to a related problem for the Geostationary Operational Environmental Spacecraft (GOES-8) image processing application. Although large supercomputer facilities are the obvious heritage solution, they are very costly, making it imperative to seek a feasible alternative engineering solution at a fraction of the cost. The solution is based on a Personal Computer (PC) platform and a synergy of optimized software algorithms and re-configurable computing (RC) hardware technologies, such as Field Programmable Gate Arrays (FPGA) and Digital Signal Processing (DSP). It has been shown in [1] and [2] that this configuration can provide superior, inexpensive performance for a chosen application on the ground station or on-board a spacecraft. However, since this technology is still maturing, intensive pre-hardware steps are necessary to achieve the benefits of hardware implementation. This paper describes these steps for the GOES-8 application, a software project developed using Interactive Data Language (IDL) (trademark of Research Systems, Inc.) on a Workstation/UNIX platform. The solution involves converting the application to a PC/Windows/RC platform, selected mainly for the availability of low-cost, adaptable, high-speed RC hardware. In order for the hybrid system to run, the IDL software was modified to account for platform differences. It was interesting to examine the gains and losses in performance on the new platform, as well as unexpected observations, before implementing hardware. After substantial pre-hardware optimization steps, the necessity of hardware implementation for bottleneck code in the PC environment became evident and solvable, beginning with the methodology described in [1] and [2], and implementing a novel methodology for this specific application [6]. The PC-RC interface bandwidth problem for the class of applications with moderate input-output data rates but large intermediate multi-thread data streams has been addressed and mitigated. This opens up a new class of satellite image processing applications in which bottleneck problems can be solved using RC technologies. The level of abstraction of a science algorithm necessary for RC hardware implementation is also described. Selected Matlab functions already implemented in hardware were investigated for their direct applicability to the GOES-8 application, with the intent to create a library of Matlab and IDL RC functions for ongoing work. A complete class of spacecraft image processing applications using embedded re-configurable computing technology to meet real-time requirements, including performance results and comparison with the existing system, is described in this paper.

  15. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the low cost and compact structure of FPGAs, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Comparative experiments with the various fusion algorithms show that good heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
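
    The three pixel-level fusion rules named above are simple enough to state exactly; the NumPy sketch below reproduces them on co-registered 8-bit visible and infrared frames. The 0.5/0.5 weights and frame sizes are illustrative choices, not the hardware implementation.

    ```python
    # Pixel-level fusion of co-registered visible and infrared images using the three
    # rules discussed in the paper: weighted averaging, maximum and minimum selection.
    import numpy as np

    def fuse(visible, infrared, method="average", w_vis=0.5):
        """visible, infrared: uint8 grayscale images of identical shape."""
        v = visible.astype(np.float32)
        i = infrared.astype(np.float32)
        if method == "average":                # gray-scale weighted averaging
            fused = w_vis * v + (1.0 - w_vis) * i
        elif method == "max":                  # maximum selection
            fused = np.maximum(v, i)
        elif method == "min":                  # minimum selection
            fused = np.minimum(v, i)
        else:
            raise ValueError("unknown fusion method")
        return np.clip(fused, 0, 255).astype(np.uint8)

    # Example with synthetic frames (stand-ins for the CCD and thermal imager outputs):
    vis = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    ir = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    print(fuse(vis, ir, "average").shape, fuse(vis, ir, "max").dtype)
    ```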

  16. Leaf-GP: an open and automated software application for measuring growth phenotypes for arabidopsis and wheat.

    PubMed

    Zhou, Ji; Applegate, Christopher; Alonso, Albor Dobon; Reynolds, Daniel; Orford, Simon; Mackiewicz, Michal; Griffiths, Simon; Penfield, Steven; Pullen, Nick

    2017-01-01

    Plants demonstrate dynamic growth phenotypes that are determined by genetic and environmental factors. Phenotypic analysis of growth features over time is a key approach to understand how plants interact with environmental change as well as respond to different treatments. Although the importance of measuring dynamic growth traits is widely recognised, available open software tools are limited in terms of batch image processing, multiple trait analyses, software usability and cross-referencing results between experiments, making automated phenotypic analysis problematic. Here, we present Leaf-GP (Growth Phenotypes), an easy-to-use and open software application that can be executed on different computing platforms. To facilitate diverse scientific communities, we provide three software versions, including a graphic user interface (GUI) for personal computer (PC) users, a command-line interface for high-performance computer (HPC) users, and a well-commented interactive Jupyter Notebook (also known as the iPython Notebook) for computational biologists and computer scientists. The software is capable of extracting multiple growth traits automatically from large image datasets. We have utilised it in Arabidopsis thaliana and wheat (Triticum aestivum) growth studies at the Norwich Research Park (NRP, UK). By quantifying a number of growth phenotypes over time, we have identified diverse plant growth patterns between different genotypes under several experimental conditions. As Leaf-GP has been evaluated with noisy image series acquired by different imaging devices (e.g. smartphones and digital cameras) and still produced reliable biological outputs, we believe that our automated analysis workflow and customised computer-vision-based feature extraction software implementation can serve a broader plant research community in their growth and development studies. Furthermore, because we implemented Leaf-GP based on open Python-based computer vision, image analysis and machine learning libraries, we believe that our software not only can contribute to biological research, but also demonstrates how to utilise existing open numeric and scientific libraries (e.g. Scikit-image, OpenCV, SciPy and Scikit-learn) to build sound plant phenomics analytic solutions in an efficient and effective way. Leaf-GP is a sophisticated software application that provides three approaches to quantify growth phenotypes from large image series. We demonstrate its usefulness and high accuracy based on two biological applications: (1) the quantification of growth traits for Arabidopsis genotypes under two temperature conditions; and (2) measuring wheat growth in the glasshouse over time. The software is easy to use and cross-platform; it can be executed on Mac OS, Windows and HPC systems, with open Python-based scientific libraries preinstalled. Our work presents the advancement of how to integrate computer vision, image analysis, machine learning and software engineering in plant phenomics software implementation. To serve the plant research community, our modular source code, detailed comments, executables (.exe for Windows; .app for Mac), and experimental results are freely available at https://github.com/Crop-Phenomics-Group/Leaf-GP/releases.
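
    As a rough sketch of the kind of Scikit-image based trait extraction referred to above (not Leaf-GP's actual implementation), a projected rosette area can be measured from a top-view image series as follows; the excess-green segmentation, pixel calibration and file names are assumptions.

    ```python
    # Sketch of a projected-leaf-area measurement over an image series, in the spirit
    # of the Scikit-image based pipeline described above (not Leaf-GP's own code).
    import numpy as np
    from skimage import io
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def rosette_area(path, pixel_area_mm2=0.01):   # mm^2 per pixel: assumed calibration
        rgb = io.imread(path).astype(float)[..., :3]
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        exg = 2 * g - r - b                        # excess-green separates plant from soil
        mask = exg > threshold_otsu(exg)
        labels = label(mask)
        largest = max(regionprops(labels), key=lambda p: p.area)  # keep the rosette
        return largest.area * pixel_area_mm2

    series = ["day01.png", "day02.png", "day03.png"]   # placeholder file names
    print([rosette_area(p) for p in series])           # growth trait over time
    ```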

  17. An Open Source Software and Web-GIS Based Platform for Airborne SAR Remote Sensing Data Management, Distribution and Sharing

    NASA Astrophysics Data System (ADS)

    Changyong, Dou; Huadong, Guo; Chunming, Han; Ming, Liu

    2014-03-01

    With more and more Earth observation data available to the community, how to manage and share these valuable remote sensing datasets is becoming an urgent issue to be solved. Web-based Geographical Information System (GIS) technology provides a convenient way for users in different locations to share and make use of the same dataset. In order to efficiently use the airborne Synthetic Aperture Radar (SAR) remote sensing data acquired by the Airborne Remote Sensing Center of the Institute of Remote Sensing and Digital Earth (RADI), Chinese Academy of Sciences (CAS), a Web-GIS based platform for airborne SAR data management, distribution and sharing was designed and developed. The major features of the system include a map-based navigation search interface, full-resolution imagery shown overlaid on the map, and the exclusive use of Open Source Software (OSS) throughout the platform. The functions of the platform include browsing imagery on the map-based navigation interface, ordering and downloading data online, and image dataset and user management. At present, the system is under testing at RADI and will enter regular operation soon.

  18. Uav Borne Low Altitude Photogrammetry System

    NASA Astrophysics Data System (ADS)

    Lin, Z.; Su, G.; Xie, F.

    2012-07-01

    In this paper, the three major aspects of an Unmanned Aerial Vehicle (UAV) system for low altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system and the data processing software, are discussed. First, according to the technical requirements regarding minimum cruising speed, shortest taxiing distance, level of flight control and performance in turbulent flight, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters and unmanned airships) are compared and analyzed. Second, considering the restrictions on the payload weight of a platform and the resolution of a sensor, together with the exposure equation and the theory of optical information, particular emphasis is placed on the principles of designing self-calibrating and self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, a software package named MAP-AT, designed around the particular characteristics of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts more effort into the automatic extraction, automatic checking and manually aided adding of tie points for images with large tilt angles. Based on the process for low altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished; the accuracies of their aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000 and 1:500 mapping.

  19. MOPEX: a software package for astronomical image processing and visualization

    NASA Astrophysics Data System (ADS)

    Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley

    2006-06-01

    We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, and point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for a much more intuitive and efficient way of performing image processing. The GUI allows the control of image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing prompt feedback on the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though the package was originally designed for the Spitzer Space Telescope mission, much of its functionality is of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part of the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.

  20. IDEAL: Images Across Domains, Experiments, Algorithms and Learning

    NASA Astrophysics Data System (ADS)

    Ushizima, Daniela M.; Bale, Hrishikesh A.; Bethel, E. Wes; Ercius, Peter; Helms, Brett A.; Krishnan, Harinarayan; Grinberg, Lea T.; Haranczyk, Maciej; Macdowell, Alastair A.; Odziomek, Katarzyna; Parkinson, Dilworth Y.; Perciano, Talita; Ritchie, Robert O.; Yang, Chao

    2016-11-01

    Research across science domains is increasingly reliant on image-centric data. Software tools are in high demand to uncover relevant, but hidden, information in digital images, such as those coming from faster next generation high-throughput imaging platforms. The challenge is to analyze the data torrent generated by the advanced instruments efficiently, and provide insights such as measurements for decision-making. In this paper, we overview work performed by an interdisciplinary team of computational and materials scientists, aimed at designing software applications and coordinating research efforts connecting (1) emerging algorithms for dealing with large and complex datasets; (2) data analysis methods with emphasis in pattern recognition and machine learning; and (3) advances in evolving computer architectures. Engineering tools around these efforts accelerate the analyses of image-based recordings, improve reusability and reproducibility, scale scientific procedures by reducing time between experiments, increase efficiency, and open opportunities for more users of the imaging facilities. This paper describes our algorithms and software tools, showing results across image scales, demonstrating how our framework plays a role in improving image understanding for quality control of existent materials and discovery of new compounds.

  1. The CoreWall Project: An Update for 2007

    NASA Astrophysics Data System (ADS)

    Yu-Chung Chen, J.; Higgins, S.; Hur, H.; Ito, E.; Jenkins, C. J.; Johnson, A.; Leigh, J.; Morin, P.; Lee, J.

    2007-12-01

    The CoreWall Suite is an NSF-supported collaborative development of real-time core description (Corelyzer), stratigraphic correlation (Correlator), and data visualization (CoreNavigator) software to be used by the marine, terrestrial and Antarctic science communities. The overall goal of the CoreWall software development is to bring portable, cross-platform tools to the broader drilling and coring communities to expand and enhance data visualization and to enhance collaborative integration of multiple datasets. The CoreWall Project is now in its second year and significant progress has been made on all three software components. Corelyzer has undergone two field deployments and testing by the ANDRILL program in 2006 (and again in fall 2007) and by ICDP's SAFOD project (summer 2007). In addition, the CoreWall group and ICDP are working together so that the ICDP core description (DIS) system can expose DIS core data directly to Corelyzer seamlessly and be available to future ICDP and IODP Mission-Specific Platform expeditions. Educators have also taken note of the software's ease of use and strong visualization capabilities and have begun exploring curriculum projects with Corelyzer. To ensure that the software development is integrated with other community IT activities, such as the development of the U.S. IODP Phase 2 Scientific Ocean Drilling Vessel (SODV), a Steering Committee was constituted. It is composed of key U.S. IODP and related database (e.g., CHRONOS, SedDB) developers and users as well as representatives of other core-based enterprises (e.g., ANDRILL, ICDP, LacCore). Corelyzer (CoreWall's main visual core description tool) displays digital core images from one or more cores along with discrete data streams (e.g., physical properties, downhole logs) and nested images (e.g., thin sections, fossils) to provide a robust approach to the description of sediment cores. Corelyzer's digital image handling allows cores to be viewed from micron to km scale, determined by the image resolution, along a sliding plane, effectively making it a "digital microscope". Detailed features such as lithologic variation, macroscopic grain size variation, bioturbation intensity, chemical composition and micropaleontology are easier to interpret and annotate. Significant new capabilities have been added to allow for importing multiple images and data types, sharing/exporting Corelyzer "work sessions" for multiple users, enhanced annotations, as well as support for other activities like examining clasts and handling sample requests. The new Correlator software, an updated version of the Splicer/Sagan software used by ODP for over 10 years, has been ported into a single analysis tool that works across multiple platforms and interacts seamlessly with JANUS (ODP's relational database), CHRONOS, PetDB, SedDB, dbSEABED and other databases. This functionality will result in a CoreWall Suite module that can be used and distributed anywhere for stratigraphic and age correlation tasks. CoreNavigator, a spatial data discovery tool, has taken on a virtual-globe interface that allows users to enter Corelyzer from a geographic-visual standpoint.

  2. The role of 3D printing in treating craniomaxillofacial congenital anomalies.

    PubMed

    Lopez, Christopher D; Witek, Lukasz; Torroni, Andrea; Flores, Roberto L; Demissie, David B; Young, Simon; Cronstein, Bruce N; Coelho, Paulo G

    2018-05-20

    Craniomaxillofacial congenital anomalies comprise approximately one third of all congenital birth defects and include deformities such as alveolar clefts, craniosynostosis, and microtia. Current surgical treatments commonly require the use of autogenous graft materials, which are difficult to shape, limited in supply, associated with donor-site morbidity and unable to grow with a maturing skeleton. Our group has demonstrated that 3D printed bio-ceramic scaffolds can generate vascularized bone within large, critical-sized defects (defects too large to heal spontaneously) of the craniomaxillofacial skeleton. Furthermore, these scaffolds are also able to function as a delivery vehicle for a new osteogenic agent with a well-established safety profile. The same 3D printers and imaging software platforms have been leveraged by our team to create sterilizable patient-specific intraoperative models for craniofacial reconstruction. For microtia repair, the current standard-of-care surgical guide is a two-dimensional drawing taken from the contralateral ear. Our laboratory has used 3D printers and open source software platforms to design personalized microtia surgical models. In this review, we report on the advancements in tissue engineering principles, digital imaging software platforms and 3D printing that have culminated in the application of this technology to repair large bone defects in skeletally immature transitional models and to provide in-house manufactured, sterilizable patient-specific models for craniofacial reconstruction. © 2018 Wiley Periodicals, Inc.

  3. The Orthanc Ecosystem for Medical Imaging.

    PubMed

    Jodogne, Sébastien

    2018-05-03

    This paper reviews the components of Orthanc, a free and open-source, highly versatile ecosystem for medical imaging. At the core of the Orthanc ecosystem, the Orthanc server is a lightweight vendor neutral archive that provides PACS managers with a powerful environment to automate and optimize the imaging flows that are very specific to each hospital. The Orthanc server can be extended with plugins that provide solutions for teleradiology, digital pathology, or enterprise-ready databases. It is shown how software developers and research engineers can easily develop external software or Web portals dealing with medical images, with minimal knowledge of the DICOM standard, thanks to the advanced programming interface of the Orthanc server. The paper concludes by introducing the Stone of Orthanc, an innovative toolkit for the cross-platform rendering of medical images.
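
    To make the "minimal knowledge of DICOM" point concrete, the sketch below queries an Orthanc server over plain HTTP. It assumes an unauthenticated server on localhost:8042 and uses endpoints from the public Orthanc REST API as the author of this sketch understands it; verify against your server's version and configuration.

    ```python
    # Minimal sketch of talking to an Orthanc server over its REST API with plain HTTP.
    # Assumes an unauthenticated Orthanc instance listening on localhost:8042.
    import requests

    BASE = "http://localhost:8042"

    # List the Orthanc identifiers of all stored patients.
    patients = requests.get(f"{BASE}/patients").json()

    for pid in patients[:3]:
        # Retrieve the main DICOM tags of each patient.
        info = requests.get(f"{BASE}/patients/{pid}").json()
        print(pid, info["MainDicomTags"].get("PatientName"))

    # Download one instance as a DICOM file (identifier taken from the server).
    instances = requests.get(f"{BASE}/instances").json()
    if instances:
        dicom_bytes = requests.get(f"{BASE}/instances/{instances[0]}/file").content
        with open("instance.dcm", "wb") as f:
            f.write(dicom_bytes)
    ```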

  4. Platform for Automated Real-Time High Performance Analytics on Medical Image Data.

    PubMed

    Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A

    2018-03-01

    Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.

  5. Building a virtual simulation platform for quasistatic breast ultrasound elastography using open source software: A preliminary investigation.

    PubMed

    Wang, Yu; Helminen, Emily; Jiang, Jingfeng

    2015-09-01

    Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that were normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms and other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE.

  6. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  7. A Common DPU Platform for ESA JUICE Mission Instruments

    NASA Astrophysics Data System (ADS)

    Aberg, Martin; Hellstrom, Daniel; Samuelsson, Arne; Torelli, Felice

    2016-08-01

    This paper describes the hardware and software platform, based on the GR712RC [1] LEON3-FT, that Cobham Gaisler developed in accordance with the common system requirements of the ten scientific instruments on-board the ESA JUICE spacecraft destined for the Jupiter system [8]. The radiation-hardened DPU platform features EDAC-protected boot, application and working memories of configurable sizes, and SpaceWire, FPGA I/O-32/16/8, GPIO, UART and SPI I/O interfaces. The design has undergone PSA, risk, WCA and radiation analyses, etc., to justify component and design choices, resulting in a robust design that can be used in spacecraft requiring a total dose of up to 100 krad(Si). The manufactured prototype board uses engineering models of the flight components to ensure that development is representative. Validated boot, standby and driver software accommodates the various DPU platform configurations. The boot software performs low-level DPU initialization; the standby software handles OBC SpaceWire communication and, finally, the loading and execution of application images typically stored in the non-volatile application memory.

  8. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    PubMed

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT (DAX) pipeline automation tool, and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from the system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.

  9. Small animal imaging platform for quantitative assessment of short-wave infrared-emitting contrast agents

    NASA Astrophysics Data System (ADS)

    Hu, Philip; Mingozzi, Marco; Higgins, Laura M.; Ganapathy, Vidya; Zevon, Margot; Riman, Richard E.; Roth, Charles M.; Moghe, Prabhas V.; Pierce, Mark C.

    2015-03-01

    We report the design, calibration, and testing of a pre-clinical small animal imaging platform for use with short-wave infrared (SWIR) emitting contrast agents. Unlike materials emitting at visible or near-infrared wavelengths, SWIR-emitting agents require detection systems with sensitivity in the 1-2 μm wavelength region, beyond the range of commercially available small animal imagers. We used a collimated 980 nm laser beam to excite rare-earth-doped NaYF4:Er,Yb nanocomposites, as an example of a SWIR emitting material under development for biomedical imaging applications. This beam was raster scanned across the animal, with fluorescence in the 1550 nm wavelength region detected by an InGaAs area camera. Background adjustment and intensity non-uniformity corrections were applied in software. The final SWIR fluorescence image was overlaid onto a standard white-light image for registration of contrast agent uptake with respect to anatomical features.
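
    The background adjustment and non-uniformity correction mentioned above amount to standard frame arithmetic; a generic flat-field style sketch is shown below. It is an illustration of the general technique, not the authors' exact processing chain, and the synthetic frames are stand-ins for real camera data.

    ```python
    # Generic background-subtraction and flat-field style non-uniformity correction
    # for a raster-scanned SWIR fluorescence image (a sketch, not the authors' code).
    import numpy as np

    def correct_frame(raw, dark, flat):
        """raw:  SWIR frame with the animal and 980 nm excitation on
           dark: frame acquired with the excitation off (background)
           flat: response to a spatially uniform emitter (non-uniformity map)"""
        raw = raw.astype(np.float64)
        dark = dark.astype(np.float64)
        flat = flat.astype(np.float64)
        gain = flat - dark
        gain /= gain.mean()                  # normalize the pixel-to-pixel gain map
        corrected = (raw - dark) / np.clip(gain, 1e-6, None)
        return np.clip(corrected, 0, None)

    # Synthetic stand-ins for camera frames:
    raw = np.random.poisson(200, (256, 320)).astype(float)
    dark = np.full((256, 320), 90.0)
    flat = np.full((256, 320), 150.0) + np.random.normal(0, 5, (256, 320))
    print(correct_frame(raw, dark, flat).mean())
    ```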

  10. A flexible software architecture for scalable real-time image and video processing applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

    Real-time image and video processing applications require skilled architects, and recent trends in hardware platforms make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
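
    The topic-based publish/subscribe routing described for the messaging layer can be illustrated in a few lines of Python. This is a toy in-process broker with made-up topic names, not the architecture's actual messaging layer, which uses message passing between processing modules.

    ```python
    # Toy in-process message broker illustrating topic-based publish/subscribe routing
    # of the kind used by the messaging layer (not the paper's implementation).
    from collections import defaultdict

    class Broker:
        def __init__(self):
            self._subscribers = defaultdict(list)   # topic -> list of callbacks

        def subscribe(self, topic, callback):
            self._subscribers[topic].append(callback)

        def publish(self, topic, message):
            # Only subscribers registered for this topic receive the message.
            for callback in self._subscribers[topic]:
                callback(message)

    broker = Broker()
    broker.subscribe("frames/raw", lambda m: print("processing module got", m))
    broker.subscribe("frames/processed", lambda m: print("visualization module got", m))

    broker.publish("frames/raw", {"id": 1, "source": "camera0"})
    broker.publish("frames/processed", {"id": 1, "defects": 3})
    ```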

  11. Integration of LCoS-SLM and LabVIEW based software to simulate fundamental optics, wave optics, and Fourier optics

    NASA Astrophysics Data System (ADS)

    Lyu, Bo-Han; Wang, Chen; Tsai, Chun-Wei

    2017-08-01

    Jasper Display Corp. (JDC) offers a high-reflectivity, high-resolution Liquid Crystal on Silicon Spatial Light Modulator (LCoS-SLM), which includes an associated controller ASIC and LabVIEW-based modulation software. Based on this LCoS-SLM, also packaged as an Education Kit (EDK), we provide a training platform which includes a series of optical theory lessons and experiments for university students. The EDK not only provides LabVIEW-based operation software to produce Computer Generated Holograms (CGHs) for generating basic diffraction or holographic images, but also provides simulation software to verify the experimental results simultaneously. We believe that a robust LCoS-SLM, operation software, simulation software, training system, and training course can help students study fundamental optics, wave optics, and Fourier optics more easily. Building on this fundamental knowledge, they can develop their own skills and create new innovations in optoelectronic applications in the future.
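
    As an example of the sort of pattern CGH operation software generates, the sketch below computes a blazed-grating phase map and quantizes it to 8-bit gray levels for display on an SLM. The resolution and grating period are placeholder values, not JDC's parameters, and the code is a generic illustration rather than the EDK software.

    ```python
    # Compute a blazed-grating phase hologram and quantize it to 8-bit gray levels,
    # as an SLM would display it. Resolution and grating period are placeholders.
    import numpy as np

    def blazed_grating(width=1920, height=1080, period_px=16, horizontal=True):
        y, x = np.mgrid[0:height, 0:width]
        coord = x if horizontal else y
        phase = 2 * np.pi * (coord % period_px) / period_px   # sawtooth 0..2*pi
        gray = np.round(phase / (2 * np.pi) * 255).astype(np.uint8)
        return gray                                           # frame to send to the SLM

    pattern = blazed_grating()
    print(pattern.shape, pattern.min(), pattern.max())
    ```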

  12. Technical Note: scuda: A software platform for cumulative dose assessment.

    PubMed

    Park, Seyoun; McNutt, Todd; Plishker, William; Quon, Harry; Wong, John; Shekhar, Raj; Lee, Junghoon

    2016-10-01

    Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (scuda) that can be seamlessly integrated into the clinical workflow. scuda consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean±STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.
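
    The dose-accumulation step described above (mapping each fraction's dose back to the planning anatomy with the computed deformation field and summing) can be sketched in a few lines with SciPy. This is a simplified CPU illustration with synthetic arrays, not scuda's GPU-accelerated implementation or its dose engine.

    ```python
    # Simplified illustration of dose accumulation: warp each daily dose grid back to
    # the planning CT frame using the DIR deformation field, then sum. This is a CPU
    # sketch, not scuda's GPU-accelerated implementation.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def accumulate_dose(daily_doses, deformation_fields):
        """daily_doses:        list of 3D dose arrays, one per fraction (Gy)
           deformation_fields: list of (dz, dy, dx) arrays mapping planning-CT voxels
                               to their locations in the corresponding daily image"""
        shape = daily_doses[0].shape
        grid = np.indices(shape).astype(np.float64)           # (3, Z, Y, X) voxel coords
        total = np.zeros(shape)
        for dose, (dz, dy, dx) in zip(daily_doses, deformation_fields):
            coords = grid + np.stack([dz, dy, dx])            # where each voxel maps to
            total += map_coordinates(dose, coords, order=1, mode="nearest")
        return total

    # Tiny synthetic example: two fractions with zero deformation.
    shape = (8, 16, 16)
    doses = [np.full(shape, 2.0), np.full(shape, 2.0)]
    fields = [tuple(np.zeros(shape) for _ in range(3))] * 2
    print(accumulate_dose(doses, fields).max())               # -> 4.0 Gy
    ```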

  13. An image-processing software package: UU and Fig for optical metrology applications

    NASA Astrophysics Data System (ADS)

    Chen, Lujie

    2013-06-01

    Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], the Fourier transform [2], digital image correlation [3], camera calibration [4], etc., in which image processing is a critical and indispensable component. While it is not too difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions to process optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets. At the time of writing, they have been tested on Windows, Linux and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point cloud to surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
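
    As a small example of the phase-shifting computation referenced above [1], the standard four-step algorithm recovers the wrapped phase from four fringe images shifted by pi/2. The sketch below is a generic implementation with synthetic fringes, not tied to UU's code.

    ```python
    # Standard four-step phase-shifting algorithm: recover the wrapped phase from four
    # fringe images with phase shifts of 0, pi/2, pi and 3*pi/2 (generic sketch).
    import numpy as np

    def four_step_phase(i1, i2, i3, i4):
        """Return the wrapped phase in (-pi, pi] for each pixel."""
        return np.arctan2(i4.astype(float) - i2.astype(float),
                          i1.astype(float) - i3.astype(float))

    # Synthetic fringes to check the formula:
    true_phase = np.linspace(-np.pi, np.pi, 256)[None, :] * np.ones((64, 1))
    shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
    frames = [100 + 50 * np.cos(true_phase + s) for s in shifts]
    wrapped = four_step_phase(*frames)
    print(np.allclose(wrapped, np.arctan2(np.sin(true_phase), np.cos(true_phase)), atol=1e-6))
    ```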

  14. Building Interactive Visualizations for Geochronological Data

    NASA Astrophysics Data System (ADS)

    Zeringue, J.; Bowring, J. F.; McLean, N. M.; Pastor, F.

    2014-12-01

    Since the early 1990s, Ken Ludwig's Isoplot software has been the tool of choice for visualization and analysis of isotopic data used for geochronology. The software is an add-in to Microsoft Excel that allows users to generate visual representations of data. However, recent changes to Excel have made Isoplot more difficult to use and maintain, and the software is no longer supported. In the last several years, the Cyber Infrastructure Research and Development Lab for the Earth Sciences (CIRDLES), at the College of Charleston, has worked collaboratively with geochronologists to develop U-Pb_Redux, a software product that provides some of Isoplot's functionality for U-Pb geochronology. However, the community needs a full and complete Isoplot replacement that is open source, platform independent, and not dependent on proprietary software. This temporary lapse in tooling also presents a tremendous opportunity for scientific computing in the earth sciences. When Isoplot was written for Excel, it gained much of the platform's flexibility and power but also was burdened with its limitations. For example, Isoplot could not be used outside of Excel, could not be cross-platform (so long as Excel wasn't), could not be embedded in other applications, and only static images could be produced. Nonetheless this software was and still is a powerful tool that has served the community for more than two decades and the trade-offs were more than acceptable. In 2014, we seek to gain flexibility not available with Excel. We propose that the next generation of charting software be reusable, platform-agnostic, and interactive. This new software should allow scientists to easily explore—not just passively view—their data. Beginning in the fall of 2013, researchers at CIRDLES began planning for and prototyping a 21st-century replacement for Isoplot, which we call Topsoil, an anagram of Isoplot. This work is being conducted in the public domain at https://github.com/CIRDLES/topsoil. We welcome and encourage community participation and contributions.

  15. Assessment of left ventricular size and function by 3-dimensional transthoracic echocardiography: Impact of the echocardiography platform and analysis software.

    PubMed

    Castel, Anne Laure; Toledano, Manuel; Tribouilloy, Christophe; Delelis, François; Mailliet, Amandine; Marotte, Nathalie; Guerbaai, Raphaëlle A; Levy, Franck; Graux, Pierre; Ennezat, Pierre-Vladimir; Maréchaux, Sylvestre

    2018-05-27

    Whether the echocardiography platform and analysis software impact left ventricular (LV) volumes, ejection fraction (EF), and stroke volume (SV) by transthoracic tridimensional echocardiography (3DE) has not yet been assessed. Hence, our aim was to compare 3DE LV end-diastolic and end-systolic volumes (EDV and ESV), LVEF, and SV obtained with echocardiography platforms from 2 different manufacturers. 3DE was performed in 84 patients (65% of screened consecutive patients), with equipment from 2 different manufacturers, with subsequent off-line postprocessing to obtain parameters of LV function and size (Philips QLAB 3DQ and General Electric EchoPAC 4D autoLVQ). Twenty-five patients with clinical indication for cardiac magnetic resonance imaging served as a validation subgroup. LVEDV and LVESV from the 2 vendors were highly correlated (r = 0.93), but compared with 4D autoLVQ, the use of Qlab 3DQ resulted in lower LVEDV and LVESV (bias: 11 mL, limits of agreement: -25 to +47 and bias: 6 mL, limits of agreement: -22 to +34, respectively). The agreement between the LVEF values from the 2 software packages was poor (intraclass correlation coefficient 0.62) despite no or minimal bias. SVs were also lower with Qlab 3DQ advanced compared with 4D autoLVQ, and both were poorly correlated (r = 0.66). Consistently, the underestimation of LVEDV, LVESV, and SV by 3DE compared with cardiac magnetic resonance imaging was more pronounced with Philips QLAB 3DQ advanced than with 4D autoLVQ. The echocardiography platform and analysis software significantly affect the values of LV parameters obtained by 3DE. Intervendor standardization and improvements in 3DE modalities are needed to broaden the use of LV parameters obtained by 3DE in clinical practice. Copyright © 2018. Published by Elsevier Inc.
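
    For readers unfamiliar with the agreement statistics quoted above, the following minimal sketch (with hypothetical volume values, not the study data) shows how the bias and 95% limits of agreement of a Bland-Altman comparison are computed.

        # Bland-Altman bias and 95% limits of agreement for paired measurements.
        # The volume arrays below are hypothetical examples.
        import numpy as np

        def bland_altman(a, b):
            """Return bias and 95% limits of agreement for paired measurements a, b."""
            diff = np.asarray(a, float) - np.asarray(b, float)
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

        lvedv_vendor1 = [120.0, 95.0, 150.0, 110.0, 132.0]   # mL, hypothetical
        lvedv_vendor2 = [131.0, 104.0, 162.0, 118.0, 140.0]  # mL, hypothetical
        bias, loa = bland_altman(lvedv_vendor2, lvedv_vendor1)
        print(f"bias = {bias:.1f} mL, limits of agreement = {loa[0]:.1f} to {loa[1]:.1f} mL")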

  16. Use of Fisheye Parrot Bebop 2 Images for 3d Modelling Using Commercial Photogrammetric Software

    NASA Astrophysics Data System (ADS)

    Pagliari, D.; Pinto, L.

    2018-05-01

    Fisheye cameras installed on board mass-market UAS are becoming very popular, and such platforms are increasingly used for photogrammetric purposes. The interest in wide-angle images for 3D modelling is confirmed by the introduction of fisheye models in several commercial software packages. The paper examines the different mathematical models implemented in the best-known commercial photogrammetric software packages, highlighting the different processing pipelines and analysing the achievable results in terms of checkpoint residuals, as well as the quality of the delivered 3D point clouds. A two-step approach based on the creation of undistorted images has been tested too. An experimental test has been carried out using a Parrot Bebop 2 UAS by performing a flight over a historical complex located near Piacenza (Northern Italy), which is characterized by the simultaneous presence of horizontal, vertical and oblique surfaces. Different flight configurations have been tested to evaluate the potential and possible drawbacks of the previously mentioned UAS platform. Results confirmed that the fisheye images acquired with the Parrot Bebop 2 are suitable for 3D modelling, ensuring accuracies of the photogrammetric blocks of the order of the GSD (about 0.05 m normal to the optic axis for a flight height of 35 m). The generated point clouds have been compared to a reference scan, acquired by means of a MS60 MultiStation, resulting in differences below 0.05 m in all directions.
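
    The accuracy statement above is tied to the ground sample distance (GSD). The sketch below shows the standard nadir GSD formula; the pixel pitch and focal length are placeholder values, not the Parrot Bebop 2 specification.

        # Ground sample distance for a nadir view: pixel footprint on the ground.
        # Sensor parameters below are placeholders, not the Bebop 2 data sheet.
        def ground_sample_distance(pixel_pitch_mm, focal_length_mm, flight_height_m):
            """GSD in metres per pixel."""
            return pixel_pitch_mm * flight_height_m / focal_length_mm

        gsd = ground_sample_distance(pixel_pitch_mm=0.0015,   # hypothetical 1.5 µm pixel
                                     focal_length_mm=1.8,     # hypothetical focal length
                                     flight_height_m=35.0)
        print(f"GSD ~ {gsd:.3f} m/pixel")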

  17. LHCb Dockerized Build Environment

    NASA Astrophysics Data System (ADS)

    Clemencic, M.; Belin, M.; Closier, J.; Couturier, B.

    2017-10-01

    Used as lightweight virtual machines or as enhanced chroot environments, Linux containers, and in particular the Docker abstraction over them, are more and more popular in the virtualization communities. The LHCb Core Software team decided to investigate how to use Docker containers to provide stable and reliable build environments for the different supported platforms, including the obsolete ones which cannot be installed on modern hardware, to be used in integration builds, releases and by any developer. We present here the techniques and procedures set up to define and maintain the Docker images and how these images can be used to develop on modern Linux distributions for platforms otherwise not accessible.

  18. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.
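
    As a generic illustration of the first step described above, finding matching features in the overlap region between adjacent images, the following sketch uses OpenCV's ORB detector and brute-force matcher. It is not the JPL FineCal implementation, and a bundle adjustment over the CAHVOR camera models would still be needed to refine the cameras.

        # Generic feature matching between two overlapping images (requires opencv-python).
        # Not JPL FineCal code; shown only to illustrate the correspondence step.
        import cv2

        def match_overlap(img_path_a, img_path_b, max_matches=200):
            a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
            b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
            orb = cv2.ORB_create(nfeatures=2000)
            kp_a, des_a = orb.detectAndCompute(a, None)
            kp_b, des_b = orb.detectAndCompute(b, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
            # Return matched pixel coordinates; a camera-model refinement would
            # consume correspondences like these.
            return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in matches[:max_matches]]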

  19. GelScape: a web-based server for interactively annotating, manipulating, comparing and archiving 1D and 2D gel images.

    PubMed

    Young, Nelson; Chang, Zhan; Wishart, David S

    2004-04-12

    GelScape is a web-based tool that permits facile, interactive annotation, comparison, manipulation and storage of protein gel images. It uses Java applet-servlet technology to allow rapid, remote image handling and image processing in a platform-independent manner. It supports many of the features found in commercial, stand-alone gel analysis software including spot annotation, spot integration, gel warping, image resizing, HTML image mapping, image overlaying as well as the storage of gel image and gel annotation data in compliance with Federated Gel Database requirements.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kowalkowski, Jim; Lyon, Adam; Paterno, Marc

    Over the past few years, container technology has become increasingly promising as a means to seamlessly make our software available across a wider range of platforms. In December 2015, we decided to put together a set of docker images that serve as a demonstration of this container technology for managing a run-time environment for art-related software projects, and also serve as a set of test cases for evaluation of performance. Docker[1] containers provide a way to “wrap up a piece of software in a complete filesystem that contains everything it needs to run”. In combination with Shifter[2], such containers provide a way to run software developed and deployed on “typical” HEP platforms (such as SLF 6, in common use at Fermilab and on OSG platforms) on HPC facilities at NERSC. Docker containers provide a means of delivering software that can be run on a variety of hosts without needing to be compiled specially for each OS to be supported. This could substantially reduce the effort required to create and validate a new release, since one build could be suitable for use on both grid machines (both FermiGrid and OSG) as well as any machine capable of running the Docker container. In addition, docker containers may provide for a quick and easy way for users to install and use a software release in a standardized environment. This report contains the results and status of this demonstration and evaluation.

  1. Image manipulation software portable on different hardware platforms: what is the cost?

    NASA Astrophysics Data System (ADS)

    Ligier, Yves; Ratib, Osman M.; Funk, Matthieu; Perrier, Rene; Girard, Christian; Logean, Marianne

    1992-07-01

    A hospital-wide PACS project is currently under development at the University Hospital of Geneva. The visualization and manipulation of images provided by different imaging modalities constitutes one of the most challenging components of a PACS. Because requirements differ depending on the clinical usage, such visualization software had to be provided on different types of workstations in different sectors of the PACS. The user interface has to be the same independently of the underlying workstation. Besides a standard set of image manipulation and processing tools, there is a need for more specific clinical tools that can easily be adapted to specific medical requirements. To achieve this, the software has to run on different operating and windowing systems: the standard Unix/X-11/OSF-Motif based workstations and the Macintosh family, and it should be easily portable to other systems. This paper describes the design of such a system and discusses the extra cost and efforts involved in the development of portable and easily expandable software.

  2. Get the Picture?

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Positive Systems has worked in conjunction with Stennis Space Center to design the ADAR System 5500. This is a four-band airborne digital imaging system used to capture multispectral imagery similar to that available from satellite platforms such as Landsat, SPOT and the new generation of high resolution satellites. Positive Systems has provided remote sensing services for the development of digital aerial camera systems and software for commercial aerial imaging applications.

  3. The Microphenotron: a robotic miniaturized plant phenotyping platform with diverse applications in chemical biology.

    PubMed

    Burrell, Thomas; Fozard, Susan; Holroyd, Geoff H; French, Andrew P; Pound, Michael P; Bigley, Christopher J; James Taylor, C; Forde, Brian G

    2017-01-01

    Chemical genetics provides a powerful alternative to conventional genetics for understanding gene function. However, its application to plants has been limited by the lack of a technology that allows detailed phenotyping of whole-seedling development in the context of a high-throughput chemical screen. We have therefore sought to develop an automated micro-phenotyping platform that would allow both root and shoot development to be monitored under conditions where the phenotypic effects of large numbers of small molecules can be assessed. The 'Microphenotron' platform uses 96-well microtitre plates to deliver chemical treatments to seedlings of Arabidopsis thaliana L. and is based around four components: (a) the 'Phytostrip', a novel seedling growth device that enables chemical treatments to be combined with the automated capture of images of developing roots and shoots; (b) an illuminated robotic platform that uses a commercially available robotic manipulator to capture images of developing shoots and roots; (c) software to control the sequence of robotic movements and integrate these with the image capture process; (d) purpose-made image analysis software for automated extraction of quantitative phenotypic data. Imaging of each plate (representing 80 separate assays) takes 4 min and can easily be performed daily for time-course studies. As currently configured, the Microphenotron has a capacity of 54 microtitre plates in a growth room footprint of 2.1 m², giving a potential throughput of up to 4320 chemical treatments in a typical 10-day experiment. The Microphenotron has been validated by using it to screen a collection of 800 natural compounds for qualitative effects on root development and to perform a quantitative analysis of the effects of a range of concentrations of nitrate and ammonium on seedling development. The Microphenotron is an automated screening platform that for the first time is able to combine large numbers of individual chemical treatments with a detailed analysis of whole-seedling development, and particularly root system development. The Microphenotron should provide a powerful new tool for chemical genetics and for wider chemical biology applications, including the development of natural and synthetic chemical products for improved agricultural sustainability.

  4. MorphoGraphX: A platform for quantifying morphogenesis in 4D.

    PubMed

    Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne H K; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S

    2015-05-06

    Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX ( www.MorphoGraphX.org), a software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.

  5. Transplant Image Processing Technology under Windows into the Platform Based on MiniGUI

    NASA Astrophysics Data System (ADS)

    Gan, Lan; Zhang, Xu; Lv, Wenya; Yu, Jia

    MFC provides a large number of digital image processing-related API functions, together with object-oriented class mechanisms, giving image processing technology strong support on Windows. In embedded systems, however, image processing does not have the MFC environment of Windows because of hardware and software restrictions. This paper therefore draws on the experience of image processing technology under Windows and transplants it into MiniGUI-based embedded systems. The results show that MiniGUI/Embedded graphical user interface applications for image processing, used in an embedded image processing system, can achieve good results.

  6. Open source software in a practical approach for post processing of radiologic images.

    PubMed

    Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea

    2015-03-01

    The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet the basic requirements such as free availability, stand-alone application, presence of a graphical user interface, ease of installation and advanced features beyond simple image display. Capabilities of data import, data export, metadata handling, 2D viewing, 3D viewing, platform support and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render; while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.

  7. Development of a translation stage for in situ noninvasive analysis and high-resolution imaging

    NASA Astrophysics Data System (ADS)

    Strivay, David; Clar, Mathieu; Rakkaa, Said; Hocquet, Francois-Philippe; Defeyt, Catherine

    2016-11-01

    Noninvasive imaging techniques and analytical instrumentation for cultural heritage object studies have undergone a tremendous development over the last years. Many new miniature and/or handheld systems have been developed and optimized. Nonetheless, these instruments are usually used with a tripod or a manual positioning system, which is very time consuming when performing point analysis or 2D scanning of a surface. The Centre Européen d'Archéométrie has built a translation system made of pluggable 1 m rails, with a maximum length and height of 3 m. Three motors embedded in the system allow the platform to be moved along these axes and toward or away from the sample. The rails hold a displacement system, providing a continuous movement. Any position can be reached with a reproducibility of 0.1 mm. The displacements are controlled over an Ethernet connection by a laptop computer running multiplatform custom-made software written in Java. This software allows complete control over the positioning using a simple, unique, and concise interface. Automatic scanning can be performed over a large surface of 3 m by 3 m. The Ethernet wires also provide the power for the different motors and, if necessary, for the detection head. The platform was originally designed for an XRF detection head (with its full power supply) but can now accommodate many different systems such as IR reflectography, digital cameras, hyperspectral cameras, and Raman probes. The positioning system can be modified to combine the acquisition software of the imaging or analytical techniques with the positioning software.
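
    A minimal sketch of the kind of automated serpentine scan such a platform performs is shown below. The TCP command string ("MOVE x y"), host and port are hypothetical; the actual Java control software and motor protocol are not described in the record.

        # Schematic serpentine raster scan sent over a TCP connection.
        # The "MOVE x y" protocol is invented for illustration only.
        import socket
        import time

        def raster_scan(host, port, width_m=3.0, height_m=3.0, step_m=0.01, dwell_s=0.5):
            nx = int(round(width_m / step_m)) + 1
            ny = int(round(height_m / step_m)) + 1
            with socket.create_connection((host, port)) as sock:
                for j in range(ny):
                    cols = range(nx) if j % 2 == 0 else reversed(range(nx))  # serpentine path
                    for i in cols:
                        x, y = i * step_m, j * step_m
                        sock.sendall(f"MOVE {x:.4f} {y:.4f}\n".encode())
                        time.sleep(dwell_s)   # wait for the measurement at this point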

  8. Next generation of decision making software for nanopatterns characterization: application to semiconductor industry

    NASA Astrophysics Data System (ADS)

    Dervilllé, A.; Labrosse, A.; Zimmermann, Y.; Foucher, J.; Gronheid, R.; Boeckx, C.; Singh, A.; Leray, P.; Halder, S.

    2016-03-01

    The dimensional scaling in IC manufacturing strongly drives the demands on CD and defect metrology techniques and their measurement uncertainties. Defect review has become as important as CD metrology, and together they create a new metrology paradigm because they generate a completely new need for flexible, robust and scalable metrology software. Current software architectures and metrology algorithms are performant, but they must be pushed to a higher level in order to follow the roadmap's speed and requirements, for example: managing defects and CD in a one-step algorithm, customizing algorithms and output features for each R&D team's environment, and providing software updates every day or every week so that R&D teams can easily explore various development strategies. The final goal is to avoid spending hours and days manually tuning algorithms to analyze metrology data and to allow R&D teams to stay focused on their expertise. The benefits are drastic cost reduction, more efficient R&D teams and better process quality. In this paper, we propose a new generation of software platform and development infrastructure which can integrate specific metrology business modules. For example, we will show the integration of a chemistry module dedicated to electronic materials such as Directed Self-Assembly features. We will show a new generation of image analysis algorithms which are able to manage defect rates, image classification, CD and roughness measurements at the same time, with high-throughput performance in order to be compatible with HVM. In a second part, we will assess the reliability, the customization of algorithms and the capability of the software platform to follow new semiconductor metrology software requirements: flexibility, robustness, high throughput and scalability. Finally, we will demonstrate how such an environment has allowed a drastic reduction of the data analysis cycle time.

  9. A versatile nondestructive evaluation imaging workstation

    NASA Technical Reports Server (NTRS)

    Chern, E. James; Butler, David W.

    1994-01-01

    Ultrasonic C-scan and eddy current imaging systems are pointwise evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen, point by point, in order to acquire data and generate images. Since the ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined using the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of the computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation for applications such as dispensing, machining, packaging, sorting, and other industrial tasks.
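
    The pointwise acquisition described above can be pictured as filling a 2D array one grid position at a time. The sketch below is a schematic illustration with placeholder driver callbacks, not the IBM PC software of the paper.

        # Schematic pointwise C-scan acquisition: visit a grid of positions and
        # store one amplitude per point in a 2D image. Callbacks are placeholders
        # for the actual scanner and instrument drivers.
        import numpy as np

        def acquire_cscan(move_probe, read_amplitude, nx, ny, dx, dy):
            """Build an (ny, nx) C-scan image by raster scanning the probe."""
            image = np.zeros((ny, nx))
            for j in range(ny):
                for i in range(nx):
                    move_probe(i * dx, j * dy)          # position the probe mechanically
                    image[j, i] = read_amplitude()      # e.g. gated peak amplitude here
            return image

        # Example with dummy drivers (replace with ultrasonic or eddy current I/O):
        img = acquire_cscan(lambda x, y: None, lambda: np.random.rand(),
                            nx=50, ny=40, dx=0.5, dy=0.5)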

  10. A versatile nondestructive evaluation imaging workstation

    NASA Astrophysics Data System (ADS)

    Chern, E. James; Butler, David W.

    1994-02-01

    Ultrasonic C-scan and eddy current imaging systems are pointwise evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen, point by point, in order to acquire data and generate images. Since the ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined using the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of the computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation for applications such as dispensing, machining, packaging, sorting, and other industrial tasks.

  11. Building a virtual simulation platform for quasistatic breast ultrasound elastography using open source software: A preliminary investigation

    PubMed Central

    Wang, Yu; Helminen, Emily; Jiang, Jingfeng

    2015-01-01

    Purpose: Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. Methods: The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Results: Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that were normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. Conclusions: The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms and other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE. PMID:26328994
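
    As a simplified illustration of one step of the pipeline above, populating a virtual phantom with point scatterers before calling the ultrasound simulator, the sketch below uses plain rejection sampling inside an analytic ellipsoid. The actual platform places scatterers with an octree-based algorithm inside a tetrahedral mesh; the geometry and scatterer count here are arbitrary.

        # Simplified scatterer placement for a virtual ultrasound phantom
        # (rejection sampling in an ellipsoid, not the paper's octree-in-mesh method).
        import numpy as np

        def place_scatterers(n, half_axes=(0.05, 0.04, 0.03), rng=None):
            """Return (n, 3) scatterer positions [m] and random amplitudes inside an ellipsoid."""
            rng = np.random.default_rng(rng)
            half_axes = np.asarray(half_axes)
            pts = []
            while len(pts) < n:
                p = rng.uniform(-1.0, 1.0, size=3) * half_axes
                if np.sum((p / half_axes) ** 2) <= 1.0:   # keep points inside the ellipsoid
                    pts.append(p)
            positions = np.array(pts)
            amplitudes = rng.standard_normal(n)           # random scattering strengths
            return positions, amplitudes

        positions, amplitudes = place_scatterers(20000)   # e.g. input to an ultrasound simulator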

  12. The Image Data Resource: A Bioimage Data Integration and Publication Platform.

    PubMed

    Williams, Eleanor; Moore, Josh; Li, Simon W; Rustici, Gabriella; Tarkowska, Aleksandra; Chessel, Anatole; Leo, Simone; Antal, Bálint; Ferguson, Richard K; Sarkans, Ugis; Brazma, Alvis; Salas, Rafael E Carazo; Swedlow, Jason R

    2017-08-01

    Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.

  13. Portable image-manipulation software: what is the extra development cost?

    PubMed

    Ligier, Y; Ratib, O; Funk, M; Perrier, R; Girard, C; Logean, M

    1992-08-01

    A hospital-wide picture archiving and communication system (PACS) project is currently under development at the University Hospital of Geneva. The visualization and manipulation of images provided by different imaging modalities constitutes one of the most challenging components of a PACS. It was necessary to provide this visualization software on a number of types of workstations because of the varying requirements imposed by the range of clinical uses it must serve. The user interface must be the same, independent of the underlying workstation. In addition to a standard set of image-manipulation and processing tools, there is a need for more specific clinical tools that can be easily adapted to specific medical requirements. To achieve this goal, it was decided to develop modular and portable software called OSIRIS. This software is available on two different operating systems (the UNIX standard X-11/OSF-Motif based workstations and the Macintosh family) and can be easily ported to other systems. The extra effort required to design such software in a modular and portable way was worthwhile because it resulted in a platform that can be easily expanded and adapted to a variety of specific clinical applications. Its portability allows users to benefit from the rapidly evolving workstation technology and to adapt the performance to suit their needs.

  14. Technical Note: PLASTIMATCH MABS, an open source tool for automatic image segmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zaffino, Paolo; Spadea, Maria Francesca

    Purpose: Multiatlas based segmentation is largely used in many clinical and research applications. Due to its good performance, it has recently been included in some commercial platforms for radiotherapy planning and surgery guidance. However, to date, a software tool with no restrictions on the anatomical district or image modality is still missing. In this paper we introduce PLASTIMATCH MABS, an open source software that can be used with any image modality for automatic segmentation. Methods: The PLASTIMATCH MABS workflow consists of two main parts: (1) an offline phase, where optimal registration and voting parameters are tuned, and (2) an online phase, where a new patient is labeled from scratch by using the same parameters as identified in the former phase. Several registration strategies, as well as different voting criteria, can be selected. A flexible atlas selection scheme is also available. To prove the effectiveness of the proposed software across anatomical districts and image modalities, it was tested on two very different scenarios: head and neck (H&N) CT segmentation for radiotherapy application, and magnetic resonance image brain labeling for neuroscience investigation. Results: For the neurological study, the minimum Dice was equal to 0.76 (investigated structures: left and right caudate, putamen, thalamus, and hippocampus). For the head and neck case, the minimum Dice was 0.42 for the most challenging structures (optic nerves and submandibular glands) and 0.62 for the other ones (mandible, brainstem, and parotid glands). The time required to obtain the labels was compatible with a real clinical workflow (35 and 120 min). Conclusions: The proposed software fills a gap in the multiatlas based segmentation field, since all currently available tools (both for commercial and for research purposes) are restricted to a well specified application. Furthermore, it can be adopted as a platform for exploring MABS parameters and as a reference implementation for comparing against other segmentation algorithms.
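
    Two generic building blocks of the approach evaluated above are label fusion by majority voting and scoring with the Dice coefficient. The sketch below illustrates both with hypothetical binary label maps; it is not the PLASTIMATCH MABS implementation.

        # Majority voting across propagated atlas labels and Dice overlap scoring.
        # The label maps below are random stand-ins, not real atlas data.
        import numpy as np

        def majority_vote(label_maps):
            """Fuse binary label maps (same shape) by simple majority voting."""
            stack = np.stack(label_maps).astype(np.uint8)
            return (stack.sum(axis=0) * 2 > stack.shape[0]).astype(np.uint8)

        def dice(a, b):
            """Dice overlap between two binary masks."""
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        atlases = [np.random.rand(64, 64, 64) > 0.5 for _ in range(5)]   # stand-in label maps
        fused = majority_vote(atlases)
        print("Dice vs. first atlas:", dice(fused, atlases[0]))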

  15. A virtual fluoroscopy system to verify seed positioning accuracy during prostate permanent seed implants.

    PubMed

    Sarkar, V; Gutierrez, A N; Stathakis, S; Swanson, G P; Papanikolaou, N

    2009-01-01

    The purpose of this project was to develop a software platform to produce a virtual fluoroscopic image as an aid for permanent prostate seed implants. Seed location information from a pre-plan was extracted and used as input to in-house developed software to produce a virtual fluoroscopic image. In order to account for differences in patient positioning on the day of treatment, the user was given the ability to make changes to the virtual image. The system has been shown to work as expected for all test cases. The system allows for quick (on average less than 10 sec) generation of a virtual fluoroscopic image of the planned seed pattern. The image can be used as a verification tool to aid the physician in evaluating how close the implant is to the planned distribution throughout the procedure and enable remedial action should a large deviation be observed.
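
    A minimal sketch of the underlying idea, projecting planned 3D seed coordinates onto a 2D detector plane to form a virtual fluoroscopic view, is given below. The point-source geometry and seed coordinates are generic placeholders, not the published system's parameters.

        # Perspective projection of 3D seed positions onto a detector plane,
        # using a generic point-source model (illustrative geometry only).
        import numpy as np

        def project_seeds(seeds_mm, source_z_mm=1000.0, detector_z_mm=-400.0):
            """Project (N, 3) seed positions onto the detector plane z = detector_z_mm."""
            seeds = np.asarray(seeds_mm, float)
            # Ray from the x-ray source at (0, 0, source_z) through each seed.
            t = (detector_z_mm - source_z_mm) / (seeds[:, 2] - source_z_mm)
            u = t * seeds[:, 0]
            v = t * seeds[:, 1]
            return np.column_stack([u, v])      # 2D detector coordinates, in mm

        uv = project_seeds([[10.0, 5.0, 20.0], [-8.0, 12.0, 35.0]])   # hypothetical seeds
        print(uv)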

  16. AGScan: a pluggable microarray image quantification software based on the ImageJ library.

    PubMed

    Cathelin, R; Lopez, F; Klopp, Ch

    2007-01-15

    Many different programs are available to analyze microarray images. Most programs are commercial packages; some are free. In the latter group, only a few offer automatic grid alignment and a batch mode. More often than not, a program implements only one quantification algorithm. AGScan is an open source program that works on all major platforms. It is based on the ImageJ library [Rasband (1997-2006)] and offers a plug-in extension system to add new functions to manipulate images, align grids and quantify spots. It is appropriate for daily laboratory use and also as a framework for new algorithms. The program is freely distributed under the X11 Licence. The install instructions can be found in the user manual. The software can be downloaded from http://mulcyber.toulouse.inra.fr/projects/agscan/. Questions and plug-ins can be sent to the contact listed below.

  17. Evaluation environment for digital and analog pathology: a platform for validation studies

    PubMed Central

    Gallas, Brandon D.; Gavrielides, Marios A.; Conway, Catherine M.; Ivansky, Adam; Keay, Tyler C.; Cheng, Wei-Chung; Hipp, Jason; Hewitt, Stephen M.

    2014-01-01

    We present a platform for designing and executing studies that compare pathologists interpreting histopathology of whole slide images (WSIs) on a computer display to pathologists interpreting glass slides on an optical microscope. eeDAP is an evaluation environment for digital and analog pathology. The key element in eeDAP is the registration of the WSI to the glass slide. Registration is accomplished through computer control of the microscope stage and a camera mounted on the microscope that acquires real-time images of the microscope field of view (FOV). Registration allows for the evaluation of the same regions of interest (ROIs) in both domains. This can reduce or eliminate disagreements that arise from pathologists interpreting different areas and focuses on the comparison of image quality. We reduced the pathologist interpretation area from an entire glass slide (10 to 30 mm²) to small ROIs (<50 μm²). We also made possible the evaluation of individual cells. We summarize eeDAP’s software and hardware and provide calculations and corresponding images of the microscope FOV and the ROIs extracted from the WSIs. The eeDAP software can be downloaded from the Google code website (project: eeDAP) as a MATLAB source or as a precompiled stand-alone license-free application. PMID:26158076

  18. Evaluation environment for digital and analog pathology: a platform for validation studies.

    PubMed

    Gallas, Brandon D; Gavrielides, Marios A; Conway, Catherine M; Ivansky, Adam; Keay, Tyler C; Cheng, Wei-Chung; Hipp, Jason; Hewitt, Stephen M

    2014-10-01

    We present a platform for designing and executing studies that compare pathologists interpreting histopathology of whole slide images (WSIs) on a computer display to pathologists interpreting glass slides on an optical microscope. eeDAP is an evaluation environment for digital and analog pathology. The key element in eeDAP is the registration of the WSI to the glass slide. Registration is accomplished through computer control of the microscope stage and a camera mounted on the microscope that acquires real-time images of the microscope field of view (FOV). Registration allows for the evaluation of the same regions of interest (ROIs) in both domains. This can reduce or eliminate disagreements that arise from pathologists interpreting different areas and focuses on the comparison of image quality. We reduced the pathologist interpretation area from an entire glass slide (10 to 30 mm²) to small ROIs (<50 μm²). We also made possible the evaluation of individual cells. We summarize eeDAP's software and hardware and provide calculations and corresponding images of the microscope FOV and the ROIs extracted from the WSIs. The eeDAP software can be downloaded from the Google code website (project: eeDAP) as a MATLAB source or as a precompiled stand-alone license-free application.

  19. P-TRAP: a Panicle TRAit Phenotyping tool.

    PubMed

    A L-Tam, Faroq; Adam, Helene; Anjos, António dos; Lorieux, Mathias; Larmande, Pierre; Ghesquière, Alain; Jouannic, Stefan; Shahbazkia, Hamid Reza

    2013-08-29

    In crops, inflorescence complexity and the shape and size of the seed are among the most important characters that influence yield. For example, rice panicles vary considerably in the number and order of branches, elongation of the axis, and the shape and size of the seed. Manual low-throughput phenotyping methods are time consuming, and the results are unreliable. However, high-throughput image analysis of the qualitative and quantitative traits of rice panicles is essential for understanding the diversity of the panicle as well as for breeding programs. This paper presents P-TRAP software (Panicle TRAit Phenotyping), a free open source application for high-throughput measurements of panicle architecture and seed-related traits. The software is written in Java and can be used with different platforms (the user-friendly Graphical User Interface (GUI) uses Netbeans Platform 7.3). The application offers three main tools: a tool for the analysis of panicle structure, a spikelet/grain counting tool, and a tool for the analysis of seed shape. The three tools can be used independently or simultaneously for analysis of the same image. Results are then reported in the Extensible Markup Language (XML) and Comma Separated Values (CSV) file formats. Images of rice panicles were used to evaluate the efficiency and robustness of the software. Compared to data obtained by manual processing, P-TRAP produced reliable results in a much shorter time. In addition, manual processing is not repeatable because dry panicles are vulnerable to damage. The software is very useful, practical and collects much more data than human operators. P-TRAP is a new open source software that automatically recognizes the structure of a panicle and the seeds on the panicle in numeric images. The software processes and quantifies several traits related to panicle structure, detects and counts the grains, and measures their shape parameters. In short, P-TRAP offers both efficient results and a user-friendly environment for experiments. The experimental results showed very good accuracy compared to field operator, expert verification and well-known academic methods.

  20. P-TRAP: a Panicle Trait Phenotyping tool

    PubMed Central

    2013-01-01

    Background In crops, inflorescence complexity and the shape and size of the seed are among the most important characters that influence yield. For example, rice panicles vary considerably in the number and order of branches, elongation of the axis, and the shape and size of the seed. Manual low-throughput phenotyping methods are time consuming, and the results are unreliable. However, high-throughput image analysis of the qualitative and quantitative traits of rice panicles is essential for understanding the diversity of the panicle as well as for breeding programs. Results This paper presents P-TRAP software (Panicle TRAit Phenotyping), a free open source application for high-throughput measurements of panicle architecture and seed-related traits. The software is written in Java and can be used with different platforms (the user-friendly Graphical User Interface (GUI) uses Netbeans Platform 7.3). The application offers three main tools: a tool for the analysis of panicle structure, a spikelet/grain counting tool, and a tool for the analysis of seed shape. The three tools can be used independently or simultaneously for analysis of the same image. Results are then reported in the Extensible Markup Language (XML) and Comma Separated Values (CSV) file formats. Images of rice panicles were used to evaluate the efficiency and robustness of the software. Compared to data obtained by manual processing, P-TRAP produced reliable results in a much shorter time. In addition, manual processing is not repeatable because dry panicles are vulnerable to damage. The software is very useful, practical and collects much more data than human operators. Conclusions P-TRAP is a new open source software that automatically recognizes the structure of a panicle and the seeds on the panicle in numeric images. The software processes and quantifies several traits related to panicle structure, detects and counts the grains, and measures their shape parameters. In short, P-TRAP offers both efficient results and a user-friendly environment for experiments. The experimental results showed very good accuracy compared to field operator, expert verification and well-known academic methods. PMID:23987653

  1. Integrated platform for optimized solar PV system design and engineering plan set generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adeyemo, Samuel

    2015-12-30

    The Aurora team has developed software that allows users to quickly generate a three-dimensional model for a building, with a corresponding irradiance map, from any two-dimensional image with associated geo-coordinates. The purpose of this project is to build upon that technology by developing and distributing to solar installers a software platform that automatically retrieves engineering, financial and geographic data for a specific site, and quickly generates an optimal customer proposal and corresponding engineering plans for that site. At the end of the project, Aurora’s optimization platform would have been used to make at least one thousand proposals from at least ten unique solar installation companies, two of whom would sign economically viable contracts to use the software. Furthermore, Aurora’s algorithms would be tested to show that in at least seventy percent of cases, Aurora automatically generated a design equivalent to or better than what a human could have done manually. A ‘better’ design is one that generates more energy for the same cost, or that generates a higher return on investment, while complying with all site-specific aesthetic, electrical and spatial requirements.

  2. The multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) high performance computing infrastructure: applications in neuroscience and neuroinformatics research

    PubMed Central

    Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.

    2014-01-01

    The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinfomatics and brain imaging research. PMID:24734019

  3. Fast software-based volume rendering using multimedia instructions on PC platforms and its application to virtual endoscopy

    NASA Astrophysics Data System (ADS)

    Mori, Kensaku; Suenaga, Yasuhito; Toriwaki, Jun-ichiro

    2003-05-01

    This paper describes a software-based fast volume rendering (VolR) method on a PC platform by using multimedia instructions, such as SIMD instructions, which are currently available in PCs' CPUs. This method achieves fast rendering speed through highly optimized software rather than an improved rendering algorithm. In volume rendering using a ray casting method, the system requires fast execution of the following processes: (a) interpolation of voxel or color values at sample points, (b) computation of normal vectors (gray-level gradient vectors), (c) calculation of shaded values obtained by dot-products of normal vectors and light source direction vectors, (d) memory access to a huge area, and (e) efficient ray skipping at translucent regions. The proposed software implements these fundamental processes in volume rendering by using special instruction sets for multimedia processing. The proposed software can generate virtual endoscopic images of a 3-D volume of 512x512x489 voxel size by volume rendering with perspective projection, specular reflection, and on-the-fly normal vector computation on a conventional PC without any special hardware at thirteen frames per second.
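
    Two of the per-sample steps listed above, (b) and (c), reduce to simple array arithmetic. The NumPy sketch below shows central-difference gradient normals and Lambertian shading by a dot product; it illustrates the arithmetic the paper vectorizes with SIMD instructions and is not the authors' code.

        # Gray-level gradient normals and Lambertian shading for a scalar volume.
        # Illustrative NumPy version of steps (b) and (c); the stand-in volume is random.
        import numpy as np

        def gradient_normals(volume):
            """Central-difference gradient, normalized to unit vectors per voxel."""
            gz, gy, gx = np.gradient(volume.astype(np.float32))
            g = np.stack([gx, gy, gz], axis=-1)
            norm = np.linalg.norm(g, axis=-1, keepdims=True)
            return g / np.maximum(norm, 1e-6)

        def shade(normals, light_dir=(0.0, 0.0, 1.0)):
            """Lambertian term: clamped dot product of normals with the light direction."""
            light = np.asarray(light_dir, np.float32)
            light = light / np.linalg.norm(light)
            return np.clip(normals @ light, 0.0, 1.0)

        volume = np.random.rand(64, 64, 64).astype(np.float32)   # stand-in volume
        shaded = shade(gradient_normals(volume))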

  4. Automated Transmission-Mode Scanning Electron Microscopy (tSEM) for Large Volume Analysis at Nanoscale Resolution

    PubMed Central

    Kuwajima, Masaaki; Mendenhall, John M.; Lindsey, Laurence F.; Harris, Kristen M.

    2013-01-01

    Transmission-mode scanning electron microscopy (tSEM) on a field emission SEM platform was developed for efficient and cost-effective imaging of circuit-scale volumes from brain at nanoscale resolution. Image area was maximized while optimizing the resolution and dynamic range necessary for discriminating key subcellular structures, such as small axonal, dendritic and glial processes, synapses, smooth endoplasmic reticulum, vesicles, microtubules, polyribosomes, and endosomes which are critical for neuronal function. Individual image fields from the tSEM system were up to 4,295 µm2 (65.54 µm per side) at 2 nm pixel size, contrasting with image fields from a modern transmission electron microscope (TEM) system, which were only 66.59 µm2 (8.160 µm per side) at the same pixel size. The tSEM produced outstanding images and had reduced distortion and drift relative to TEM. Automated stage and scan control in tSEM easily provided unattended serial section imaging and montaging. Lens and scan properties on both TEM and SEM platforms revealed no significant nonlinear distortions within a central field of ∼100 µm2 and produced near-perfect image registration across serial sections using the computational elastic alignment tool in Fiji/TrakEM2 software, and reliable geometric measurements from RECONSTRUCT™ or Fiji/TrakEM2 software. Axial resolution limits the analysis of small structures contained within a section (∼45 nm). Since this new tSEM is non-destructive, objects within a section can be explored at finer axial resolution in TEM tomography with current methods. Future development of tSEM tomography promises thinner axial resolution producing nearly isotropic voxels and should provide within-section analyses of structures without changing platforms. Brain was the test system given our interest in synaptic connectivity and plasticity; however, the new tSEM system is readily applicable to other biological systems. PMID:23555711

  5. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers.

    PubMed

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science.

  6. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    PubMed Central

    Cui, Yang; Hanley, Luke

    2015-01-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science. PMID:26133872

  7. ChiMS: Open-source instrument control software platform on LabVIEW for imaging/depth profiling mass spectrometers

    NASA Astrophysics Data System (ADS)

    Cui, Yang; Hanley, Luke

    2015-06-01

    ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser based mass spectrometers imaging and various other experiments in laser physics, physical chemistry, and surface science.

  8. High resolution image processing on low-cost microcomputers

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1993-01-01

    Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.

  9. OsiriX: an open-source software for navigating in multidimensional DICOM images.

    PubMed

    Rosset, Antoine; Spadola, Luca; Ratib, Osman

    2004-09-01

    A multidimensional image navigation and display software was designed for display and interpretation of large sets of multidimensional and multimodality images such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphic capabilities of the OpenGL graphic standard widely used for computer games optimized for taking advantage of any hardware graphic accelerator boards available. In the design of the software special attention was given to adapt the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device widely used in the video and movie industry was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or on-screen cursors and sliders. The program can easily be adapted for very specific tasks that require a limited number of functions, by adding and removing tools from the program's toolbar and avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that all new developments in image processing that could emerge from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.

  10. NecroQuant: quantitative assessment of radiological necrosis

    NASA Astrophysics Data System (ADS)

    Hwang, Darryl H.; Mohamed, Passant; Varghese, Bino A.; Cen, Steven Y.; Duddalwar, Vinay

    2017-11-01

    Clinicians can now objectively quantify tumor necrosis by Hounsfield units and enhancement characteristics from multiphase contrast enhanced CT imaging. NecroQuant has been designed to work as part of a radiomics pipelines. The software is a departure from the conventional qualitative assessment of tumor necrosis, as it provides the user (radiologists and researchers) a simple interface to precisely and interactively define and measure necrosis in contrast-enhanced CT images. Although, the software is tested here on renal masses, it can be re-configured to assess tumor necrosis across variety of tumors from different body sites, providing a generalized, open, portable, and extensible quantitative analysis platform that is widely applicable across cancer types to quantify tumor necrosis.
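
    A minimal sketch of the kind of quantification described above, the fraction of tumor voxels whose post-contrast enhancement stays below a threshold, is given below. The 20 HU cut-off and the synthetic volumes are hypothetical illustrations, not NecroQuant's validated criteria.

        # Fraction of tumor voxels with low contrast enhancement (illustrative only;
        # the threshold and volumes are hypothetical, not NecroQuant's criteria).
        import numpy as np

        def necrosis_fraction(pre_hu, post_hu, tumor_mask, enhancement_threshold_hu=20.0):
            """Fraction of tumor voxels whose enhancement is below the threshold."""
            enhancement = post_hu[tumor_mask] - pre_hu[tumor_mask]
            return float(np.mean(enhancement < enhancement_threshold_hu))

        # Hypothetical pre/post-contrast volumes and a cubic tumor mask:
        pre = np.random.normal(35, 10, (64, 64, 64))
        post = pre + np.random.normal(40, 25, (64, 64, 64))
        mask = np.zeros((64, 64, 64), dtype=bool)
        mask[20:40, 20:40, 20:40] = True
        print(f"necrotic fraction ~ {necrosis_fraction(pre, post, mask):.2f}")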

  11. BIRD: Bio-Image Referral Database. Design and implementation of a new web based and patient multimedia data focused system for effective medical diagnosis and therapy.

    PubMed

    Pinciroli, Francesco; Masseroli, Marco; Acerbo, Livio A; Bonacina, Stefano; Ferrari, Roberto; Marchente, Mario

    2004-01-01

    This paper presents a low cost software platform prototype supporting health care personnel in retrieving patient referral multimedia data. These information are centralized in a server machine and structured by using a flexible eXtensible Markup Language (XML) Bio-Image Referral Database (BIRD). Data are distributed on demand to requesting client in an Intranet network and transformed via eXtensible Stylesheet Language (XSL) to be visualized in an uniform way on market browsers. The core server operation software has been developed in PHP Hypertext Preprocessor scripting language, which is very versatile and useful for crafting a dynamic Web environment.
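
    The XML-to-browser path described above (records stored as XML and transformed via XSL for display) can be sketched in a few lines; the example below uses lxml, and the element names and stylesheet are invented for illustration rather than taken from BIRD's actual schema.

        # Server-side XSL transformation of an XML referral record into HTML.
        # The record structure and stylesheet are hypothetical examples.
        from lxml import etree

        record_xml = b"""<referral><patient>Jane Doe</patient><study>MRI brain</study></referral>"""
        stylesheet = b"""<xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:template match="/referral">
            <html><body>
              <h1><xsl:value-of select="patient"/></h1>
              <p><xsl:value-of select="study"/></p>
            </body></html>
          </xsl:template>
        </xsl:stylesheet>"""

        transform = etree.XSLT(etree.fromstring(stylesheet))
        html = transform(etree.fromstring(record_xml))
        print(str(html))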

  12. Intraoperative optical coherence tomography: past, present, and future

    PubMed Central

    Ehlers, J P

    2016-01-01

    To provide an overview of the current state of intraoperative optical coherence tomography (OCT). Literature review of studies pertaining to intraoperative OCT examining both the technology aspects of the imaging platform and the current evidence for patient care. Over the last several years, there have been significant advances in integrative technology for intraoperative OCT. This has resulted in the development of multiple microscope-integrated systems and a rapidly expanding field of image-guided surgical care. Multiple studies have demonstrated the potential role for intraoperative OCT in facilitating surgeon understanding of the surgical environment, tissue configuration, and overall changes to anatomy. In fact, the PIONEER and DISCOVER studies, both demonstrated a potential significant percentage of cases that intraoperative OCT alters surgical decision-making in both anterior and posterior segment surgery. Current areas of exploration and development include OCT-compatible instrumentation, automated tracking, intraoperative OCT software platforms, and surgeon feedback/visualization platforms. Intraoperative OCT is an emerging technology that holds promise for enhancing the surgical care of both anterior segment and posterior segment conditions. Hurdles remain for adoption and widespread utilization, including cost, optimized feedback platforms, and more definitive value for individualized surgical care with image guidance. PMID:26681147

  13. PACS for Bhutan: a cost effective open source architecture for emerging countries.

    PubMed

    Ratib, Osman; Roduit, Nicolas; Nidup, Dechen; De Geer, Gerard; Rosset, Antoine; Geissbuhler, Antoine

    2016-10-01

    This paper reports the design and implementation of an innovative and cost-effective imaging management infrastructure suitable for radiology centres in emerging countries. It was implemented in the main referral hospital of Bhutan, which is equipped with a CT, an MRI, digital radiology, and several ultrasound units. The hospital lacked the necessary informatics infrastructure for image archiving and interpretation and needed a system for distributing images to clinical wards. The solution developed for this project combines several open source software platforms in a robust and versatile archiving and communication system connected to analysis workstations equipped with an FDA-certified version of a highly popular open-source software package. The whole system was implemented on standard off-the-shelf hardware. The system was installed in three days, and training of the radiologists as well as the technical and IT staff was provided onsite to ensure full ownership of the system by the local team. Radiologists were rapidly able to read and interpret studies on the diagnostic workstations, which significantly benefited their workflow and allowed them to perform diagnostic tasks more efficiently. Furthermore, images were also made available to several clinical units on standard desktop computers through a web-based viewer. • Open source imaging informatics platforms can provide cost-effective alternatives for PACS • Robust and cost-effective open architecture can provide adequate solutions for emerging countries • Imaging informatics is often lacking in hospitals equipped with digital modalities.
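
    As a hedged illustration of how a modality or workstation pushes studies into such an open-source archive, the following Python sketch sends one DICOM file with a C-STORE request using pydicom/pynetdicom; the host, port, AE titles, and file name are placeholders, and this is not the specific software deployed in Bhutan.

      from pydicom import dcmread
      from pynetdicom import AE, StoragePresentationContexts

      # Hypothetical archive address and AE titles; replace with real values.
      ARCHIVE_HOST, ARCHIVE_PORT, ARCHIVE_AET = "pacs.example.org", 11112, "ARCHIVE"

      def store_image(dicom_path):
          """Send one DICOM file to the archive with a C-STORE request."""
          ds = dcmread(dicom_path)
          ae = AE(ae_title="WORKSTATION")
          ae.requested_contexts = StoragePresentationContexts   # common storage SOP classes
          assoc = ae.associate(ARCHIVE_HOST, ARCHIVE_PORT, ae_title=ARCHIVE_AET)
          if assoc.is_established:
              status = assoc.send_c_store(ds)
              assoc.release()
              return bool(status) and status.Status == 0x0000   # 0x0000 means success
          return False

      if __name__ == "__main__":
          print("stored:", store_image("ct_slice_001.dcm"))     # hypothetical file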

  14. Development and validation of an open source quantification tool for DSC-MRI studies.

    PubMed

    Gordaliza, P M; Mateos-Pérez, J M; Montesinos, P; Guzmán-de-Villoria, J A; Desco, M; Vaquero, J J

    2015-03-01

    This work presents the development of an open source tool for the quantification of dynamic susceptibility-weighted contrast-enhanced (DSC) perfusion studies. The development of this tool is motivated by the lack of open source tools implemented on open platforms to allow external developers to implement their own quantification methods easily and without the need to pay for a development license. This quantification tool was developed as a plugin for the ImageJ image analysis platform using the Java programming language. A modular approach was used in the implementation of the components, in such a way that the addition of new methods can be done without breaking any of the existing functionalities. For the validation process, images from seven patients with brain tumors were acquired and quantified with the presented tool and with a widely used clinical software package. The resulting perfusion parameters were then compared. Perfusion parameters and the corresponding parametric images were obtained. When no gamma-fitting is used, an excellent agreement with the tool used as a gold-standard was obtained (R² > 0.8 and values are within the 95% CI limits in Bland-Altman plots). An open source tool that performs quantification of perfusion studies using magnetic resonance imaging has been developed and validated against a clinical software package. It works as an ImageJ plugin and the source code has been published with an open source license. Copyright © 2015 Elsevier Ltd. All rights reserved.
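
    One representative computation in DSC-MRI quantification is the conversion of the signal-time curve into a contrast concentration curve and the integration of that curve to obtain a relative cerebral blood volume. The sketch below shows this standard calculation in Python with synthetic data; it is not the plugin's Java code, and the variable names and simple trapezoidal integration are assumptions.

      import numpy as np

      def concentration_from_signal(signal, s0, te):
          """Convert DSC-MRI signal S(t) to a relative contrast concentration
          using the standard single-exponential model: C(t) ~ -(1/TE) * ln(S(t)/S0)."""
          return -np.log(signal / s0) / te

      def relative_cbv(signal, t, te, n_baseline=10):
          """Relative CBV ~ area under the concentration-time curve."""
          s0 = signal[:n_baseline].mean()                 # pre-bolus baseline signal
          conc = concentration_from_signal(signal, s0, te)
          dt = np.diff(t)
          return float(np.sum(0.5 * (conc[1:] + conc[:-1]) * dt))  # trapezoid rule

      # synthetic bolus passage for one voxel
      t = np.linspace(0, 90, 60)                          # seconds
      signal = 600 - 250 * np.exp(-((t - 40) / 8) ** 2)   # signal drop during bolus
      print(f"relative CBV (arbitrary units): {relative_cbv(signal, t, te=0.03):.1f}")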

  15. Flightspeed Integral Image Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Thompson, David R.

    2009-01-01

    The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image. This integral frame facilitates a wide range of fast image-processing functions. This toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
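
    The integral-image structure at the core of FIIAT can be illustrated in a few lines of Python (FIIAT itself is written in C): after one pass to build the cumulative table, the sum over any rectangle costs only four lookups.

      import numpy as np

      def integral_image(img):
          """Cumulative 2-D sum; ii[r, c] equals the sum of img[:r, :c]."""
          ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
          ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
          return ii

      def rect_sum(ii, r0, c0, r1, c1):
          """Sum of img[r0:r1, c0:c1] using four lookups (O(1) per query)."""
          return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

      img = np.arange(64, dtype=np.int64).reshape(8, 8)   # toy 8-bit-style image
      ii = integral_image(img)
      assert rect_sum(ii, 2, 3, 6, 7) == img[2:6, 3:7].sum()
      print(rect_sum(ii, 2, 3, 6, 7))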

  16. GiNA, an Efficient and High-Throughput Software for Horticultural Phenotyping

    PubMed Central

    Diaz-Garcia, Luis; Covarrubias-Pazaran, Giovanny; Schlautman, Brandon; Zalapa, Juan

    2016-01-01

    Traditional methods for trait phenotyping have been a bottleneck for research in many crop species due to their intensive labor, high cost, complex implementation, lack of reproducibility and propensity to subjective bias. Recently, multiple high-throughput phenotyping platforms have been developed, but most of them are expensive, species-dependent, complex to use, and available only for major crops. To overcome such limitations, we present the open-source software GiNA, which is a simple and free tool for measuring horticultural traits such as shape- and color-related parameters of fruits, vegetables, and seeds. GiNA is multiplatform software available in both R and MATLAB® programming languages and uses conventional images from digital cameras with minimal requirements. It can process up to 11 different horticultural morphological traits such as length, width, two-dimensional area, volume, projected skin, surface area, and RGB color, among other parameters. Different validation tests produced highly consistent results under different lighting conditions and camera setups, making GiNA a very reliable platform for high-throughput phenotyping. Five-fold cross-validation between manual and GiNA measurements of length and width in cranberry fruits yielded accuracies of 0.97 and 0.92, respectively. The same strategy yielded prediction accuracies above 0.83 for color estimates produced from images of cranberries analyzed with GiNA compared to the total anthocyanin content (TAcy) of the same fruits measured with the industry-standard methodology. Our platform provides a scalable, easy-to-use and affordable tool for massive acquisition of phenotypic data of fruits, seeds, and vegetables. PMID:27529547

  17. GiNA, an Efficient and High-Throughput Software for Horticultural Phenotyping.

    PubMed

    Diaz-Garcia, Luis; Covarrubias-Pazaran, Giovanny; Schlautman, Brandon; Zalapa, Juan

    2016-01-01

    Traditional methods for trait phenotyping have been a bottleneck for research in many crop species due to their intensive labor, high cost, complex implementation, lack of reproducibility and propensity to subjective bias. Recently, multiple high-throughput phenotyping platforms have been developed, but most of them are expensive, species-dependent, complex to use, and available only for major crops. To overcome such limitations, we present the open-source software GiNA, which is a simple and free tool for measuring horticultural traits such as shape- and color-related parameters of fruits, vegetables, and seeds. GiNA is multiplatform software available in both R and MATLAB® programming languages and uses conventional images from digital cameras with minimal requirements. It can process up to 11 different horticultural morphological traits such as length, width, two-dimensional area, volume, projected skin, surface area, and RGB color, among other parameters. Different validation tests produced highly consistent results under different lighting conditions and camera setups, making GiNA a very reliable platform for high-throughput phenotyping. Five-fold cross-validation between manual and GiNA measurements of length and width in cranberry fruits yielded accuracies of 0.97 and 0.92, respectively. The same strategy yielded prediction accuracies above 0.83 for color estimates produced from images of cranberries analyzed with GiNA compared to the total anthocyanin content (TAcy) of the same fruits measured with the industry-standard methodology. Our platform provides a scalable, easy-to-use and affordable tool for massive acquisition of phenotypic data of fruits, seeds, and vegetables.
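
    The kind of measurement GiNA automates, segmenting fruit from a plain background and reporting size and color per object, can be sketched with scikit-image as below; the background threshold, pixel-to-centimeter scale, and synthetic test image are assumptions, and this is not GiNA's R/MATLAB code.

      import numpy as np
      from skimage import measure

      def measure_fruits(rgb, bg_threshold=200, px_per_cm=40.0):
          """Toy image-based fruit phenotyping (illustrative, not GiNA's code).

          rgb          : HxWx3 uint8 image of fruits on a light background
          bg_threshold : gray level above which a pixel counts as background (assumption)
          px_per_cm    : calibration factor, e.g. from a ruler in the frame (assumption)
          """
          gray = rgb.mean(axis=2)
          labels = measure.label(gray < bg_threshold)     # fruit = darker than background
          results = []
          for region in measure.regionprops(labels):
              if region.area < 50:                        # ignore specks
                  continue
              mask = labels == region.label
              results.append({
                  "length_cm": region.major_axis_length / px_per_cm,
                  "width_cm":  region.minor_axis_length / px_per_cm,
                  "area_cm2":  region.area / px_per_cm ** 2,
                  "mean_rgb":  rgb[mask].mean(axis=0).round(1).tolist(),
              })
          return results

      # synthetic image: one dark "fruit" disc on a white background
      img = np.full((200, 200, 3), 255, dtype=np.uint8)
      yy, xx = np.mgrid[:200, :200]
      img[(yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2] = (140, 20, 40)
      print(measure_fruits(img))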

  18. Uranus: a rapid prototyping tool for FPGA embedded computer vision

    NASA Astrophysics Data System (ADS)

    Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.

    2007-01-01

    The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for the simulation and evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can combine the previously compiled soft-operators in a high-level processing chain and code their own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.

  19. Image Processing Occupancy Sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The Image Processing Occupancy Sensor, or IPOS, is a novel sensor technology developed at the National Renewable Energy Laboratory (NREL). The sensor is based on low-cost embedded microprocessors widely used by the smartphone industry and leverages mature open-source computer vision software libraries. Compared to traditional passive infrared and ultrasonic-based motion sensors currently used for occupancy detection, IPOS has shown the potential for improved accuracy and a richer set of feedback signals for occupant-optimized lighting, daylighting, temperature setback, ventilation control, and other occupancy and location-based uses. Unlike traditional passive infrared (PIR) or ultrasonic occupancy sensors, which infer occupancy based only on motion, IPOS uses digital image-based analysis to detect and classify various aspects of occupancy, including the presence of occupants regardless of motion, their number, location, and activity levels, as well as the illuminance properties of the monitored space. The IPOS software leverages the recent availability of low-cost embedded computing platforms, computer vision software libraries, and camera elements.
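
    A hedged sketch of the sort of camera-based analysis such a sensor performs is shown below, using OpenCV background subtraction as a simple foreground/occupancy cue; the threshold and camera index are assumptions, and IPOS itself goes further (presence without motion, occupant counts, location, and illuminance).

      import cv2

      # Illustrative occupancy cue from a camera stream (not the IPOS code):
      # a background model flags changed pixels; a large foreground area
      # suggests that someone is present in the monitored space.
      OCCUPIED_FRACTION = 0.02                    # assumed tuning threshold

      cap = cv2.VideoCapture(0)                   # default camera (assumption)
      bg = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

      while cap.isOpened():
          ok, frame = cap.read()
          if not ok:
              break
          mask = bg.apply(frame)                  # 255 where pixels differ from background
          fraction = (mask > 0).mean()
          occupied = fraction > OCCUPIED_FRACTION
          cv2.imshow("foreground", mask)
          print(f"foreground fraction {fraction:.3f} -> occupied={occupied}")
          if cv2.waitKey(30) & 0xFF == 27:        # Esc to quit
              break

      cap.release()
      cv2.destroyAllWindows()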

  20. Technical Note: SCUDA: A software platform for cumulative dose assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Seyoun; McNutt, Todd; Quon, Harry

    Purpose: Accurate tracking of anatomical changes and computation of actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for the adoption in clinic as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. Methods: SCUDA consists of deformable image registration (DIR), segmentation, dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases from which it automatically queries/retrieves patient images, radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. Results: The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean ± STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment including additional computation for dose accumulation. Conclusions: The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring the treatment quality and detecting significant dosimetric variations that are keys to successful ART.
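
    The final accumulation step, warping each fraction's dose into the planning geometry with its deformation field and summing, can be sketched with SimpleITK as below. The file names and the availability of per-fraction displacement fields are assumptions for illustration; SCUDA's own GPU-accelerated DIR and dose engine are not reproduced here.

      import SimpleITK as sitk

      # Hypothetical inputs: the file names and per-fraction displacement
      # fields are assumptions for illustration only.
      planning_ct = sitk.ReadImage("planning_ct.nii.gz")
      fractions = [("dose_fx01.nii.gz", "dvf_fx01.nii.gz"),
                   ("dose_fx02.nii.gz", "dvf_fx02.nii.gz")]

      accumulated = sitk.Image(planning_ct.GetSize(), sitk.sitkFloat64)
      accumulated.CopyInformation(planning_ct)

      for dose_path, dvf_path in fractions:
          daily_dose = sitk.ReadImage(dose_path, sitk.sitkFloat64)
          dvf = sitk.Cast(sitk.ReadImage(dvf_path), sitk.sitkVectorFloat64)
          warp = sitk.DisplacementFieldTransform(dvf)       # daily-to-planning mapping
          # resample the daily dose onto the planning CT grid, then add it up
          warped = sitk.Resample(daily_dose, planning_ct, warp,
                                 sitk.sitkLinear, 0.0, sitk.sitkFloat64)
          accumulated = accumulated + warped

      sitk.WriteImage(accumulated, "accumulated_dose.nii.gz")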

  1. Geneious Basic: An integrated and extendable desktop software platform for the organization and analysis of sequence data

    PubMed Central

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-01-01

    Summary: The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Availability and implementation: Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl. Contact: peter@biomatters.com PMID:22543367

  2. Geneious Basic: an integrated and extendable desktop software platform for the organization and analysis of sequence data.

    PubMed

    Kearse, Matthew; Moir, Richard; Wilson, Amy; Stones-Havas, Steven; Cheung, Matthew; Sturrock, Shane; Buxton, Simon; Cooper, Alex; Markowitz, Sidney; Duran, Chris; Thierer, Tobias; Ashton, Bruce; Meintjes, Peter; Drummond, Alexei

    2012-06-15

    The two main functions of bioinformatics are the organization and analysis of biological data using computational resources. Geneious Basic has been designed to be an easy-to-use and flexible desktop software application framework for the organization and analysis of biological data, with a focus on molecular sequences and related data types. It integrates numerous industry-standard discovery analysis tools, with interactive visualizations to generate publication-ready images. One key contribution to researchers in the life sciences is the Geneious public application programming interface (API) that affords the ability to leverage the existing framework of the Geneious Basic software platform for virtually unlimited extension and customization. The result is an increase in the speed and quality of development of computation tools for the life sciences, due to the functionality and graphical user interface available to the developer through the public API. Geneious Basic represents an ideal platform for the bioinformatics community to leverage existing components and to integrate their own specific requirements for the discovery, analysis and visualization of biological data. Binaries and public API freely available for download at http://www.geneious.com/basic, implemented in Java and supported on Linux, Apple OSX and MS Windows. The software is also available from the Bio-Linux package repository at http://nebc.nerc.ac.uk/news/geneiousonbl.

  3. SU-E-J-04: Integration of Interstitial High Intensity Therapeutic Ultrasound Applicators On a Clinical MRI-Guided High Intensity Focused Ultrasound Treatment Planning Software Platform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellens, N; Partanen, A; Ghoshal, G

    Purpose: Interstitial high intensity therapeutic ultrasound (HITU) applicators can be used to ablate tissue percutaneously, allowing for minimally-invasive treatment without ionizing radiation [1,2]. The purpose of this study was to evaluate the feasibility and usability of combining multielement interstitial HITU applicators with a clinical magnetic resonance imaging (MRI)-guided focused ultrasound software platform. Methods: The Sonalleve software platform (Philips Healthcare, Vantaa, Finland) combines anatomical MRI for target selection and multi-planar MRI thermometry to provide real-time temperature information. The MRI-compatible interstitial US applicators (Acoustic MedSystems, Savoy, IL, USA) had 1–4 cylindrical US elements, each 1 cm long with either 180° or 360° of active surface. Each applicator (4 Fr diameter, enclosed within a 13 Fr flexible catheter) was inserted into a tissue-mimicking agar-silica phantom. Degassed water was circulated around the transducers for cooling and coupling. Based on the location of the applicator, a virtual transducer overlay was added to the software to assist targeting and to allow automatic thermometry slice placement. The phantom was sonicated at 7 MHz for 5 minutes with 6–8 W of acoustic power for each element. MR thermometry data were collected during and after sonication. Results: Preliminary testing indicated that the applicator location could be identified in the planning images and the transducer locations predicted within 1 mm accuracy using the overlay. Ablation zones (thermal dose ≥ 240 CEM43) for 2 active, adjacent US elements ranged from 18 mm × 24 mm (width × length) to 25 mm × 25 mm for the 6 W and 8 W sonications, respectively. Conclusion: The combination of interstitial HITU applicators and this software platform holds promise for novel approaches in minimally-invasive MRI-guided therapy, especially when bony structures or air-filled cavities may preclude extracorporeal HIFU. [1] Diederich et al. IEEE UFFC 46.5 (1999): 1218. [2] Chopra et al. PMB 50.21 (2005): 4957. Funding support was provided by Philips Healthcare and in-kind support from Acoustic MedSystems Inc. Ari Partanen is a paid employee of Philips Healthcare. Goutam Ghoshal and Everette Clif Burdette are paid employees of Acoustic MedSystems Inc.
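
    The "thermal dose ≥ 240 CEM43" criterion refers to cumulative equivalent minutes at 43 °C. A minimal Python sketch of the usual Sapareto-Dewey formulation is given below with a synthetic temperature trace; it is not the Sonalleve thermometry pipeline.

      import numpy as np

      def cem43(temps_c, dt_s):
          """Cumulative equivalent minutes at 43 degrees C (Sapareto-Dewey).

          CEM43 = sum( R**(43 - T_i) * dt ), with R = 0.5 for T >= 43 C
          and R = 0.25 below 43 C; dt converted to minutes.
          """
          temps_c = np.asarray(temps_c, dtype=float)
          r = np.where(temps_c >= 43.0, 0.5, 0.25)
          return float(np.sum(r ** (43.0 - temps_c) * (dt_s / 60.0)))

      # synthetic thermometry trace for one voxel: heat to about 57 C, then cool
      t = np.arange(0, 300, 1.0)                       # 5 min, 1 s sampling
      temps = 37 + 20 * np.exp(-((t - 150) / 60.0) ** 2)
      dose = cem43(temps, dt_s=1.0)
      print(f"CEM43 = {dose:.0f} min, ablated: {dose >= 240}")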

  4. MSiReader v1.0: Evolving Open-Source Mass Spectrometry Imaging Software for Targeted and Untargeted Analyses.

    PubMed

    Bokhart, Mark T; Nazari, Milad; Garrard, Kenneth P; Muddiman, David C

    2018-01-01

    A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software package written on the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through MSiReader software, significantly reducing data analysis time. An image overlay feature allows the incorporation of complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing the facile analysis of polarity switching experiments without the need for data parsing prior to loading the data file into MSiReader. A quality assurance feature to generate a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added to allow for the investigation of MMA across the imaging experiment. Most importantly, as new features have been added, performance has not degraded; in fact, it has been dramatically improved. These new tools and the improvements to the performance in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.
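
    Mass measurement accuracy heatmaps plot the parts-per-million error between observed and theoretical m/z for the chosen analyte at each imaging pixel. The following minimal sketch shows that per-pixel calculation; the array names and the example m/z value are illustrative, not MSiReader internals.

      import numpy as np

      def mma_ppm(observed_mz, theoretical_mz):
          """Mass measurement accuracy in parts per million (ppm)."""
          return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

      # observed m/z of one analyte across a 3x4 grid of imaging pixels
      theoretical = 760.5851                          # hypothetical analyte m/z
      observed = theoretical + np.random.normal(0, 0.0015, size=(3, 4))
      heatmap = mma_ppm(observed, theoretical)        # ppm error per pixel
      print(np.round(heatmap, 2))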

  5. MorphoGraphX: A platform for quantifying morphogenesis in 4D

    PubMed Central

    Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne HK; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S

    2015-01-01

    Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth. DOI: http://dx.doi.org/10.7554/eLife.05864.001 PMID:25946108

  6. The Medical Imaging Interaction Toolkit: challenges and advances : 10 years of open-source development.

    PubMed

    Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo

    2013-07-01

    The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.

  7. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

    UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAV and freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and the geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modeling software contributes significantly to its expansion. However, the algorithms of the 3D modeling software are black boxes. As a result, only a few studies have been able to evaluate their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet and the commercial software PhotoScan was also employed; investigations were performed in this paper using check points and images obtained from a UAV.
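
    Accuracy evaluation with 3D coordinate check points typically reduces to the root-mean-square error between model-derived and surveyed coordinates; a minimal sketch with made-up check points is shown below (the coordinates are purely illustrative).

      import numpy as np

      def rmse_per_axis(modeled_xyz, surveyed_xyz):
          """Root-mean-square error of check points, per axis (X, Y, Z)."""
          diff = np.asarray(modeled_xyz, float) - np.asarray(surveyed_xyz, float)
          return np.sqrt((diff ** 2).mean(axis=0))

      # made-up check points (metres): surveyed truth vs. model-derived coordinates
      surveyed = np.array([[10.00, 20.00, 5.00],
                           [15.00, 22.00, 5.20],
                           [12.50, 18.00, 4.90]])
      modeled = surveyed + np.array([[0.02, -0.01, 0.05],
                                     [-0.03, 0.02, -0.04],
                                     [0.01, 0.01, 0.06]])
      print("RMSE [m] per axis:", rmse_per_axis(modeled, surveyed).round(3))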

  8. Development of RT-components for the M-3 Strawberry Harvesting Robot

    NASA Astrophysics Data System (ADS)

    Yamashita, Tomoki; Tanaka, Motomasa; Yamamoto, Satoshi; Hayashi, Shigehiko; Saito, Sadafumi; Sugano, Shigeki

    We are now developing the strawberry harvesting robot called the “M-3” prototype robot system under the 4th urgent project of MAFF. In order to develop the control software of the M-3 robot more efficiently, we adopted the RT-middleware “OpenRTM-aist” software platform. In this system, we developed 9 kinds of RT-Components (RTCs): a robot task sequence player RTC, a proxy RTC for the image processing software, a DC motor controller RTC, an arm kinematics RTC, and so on. In this paper, we discuss the advantages of the RT-middleware development approach and the problems of operating the RTC-configured robotic system by end-users.

  9. The Open Microscopy Environment: open image informatics for the biological sciences

    NASA Astrophysics Data System (ADS)

    Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.

    2016-07-01

    Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
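
    For Python access, the omero-py gateway is the usual route; the sketch below connects to a server, reads basic image metadata, and fetches one plane as a NumPy array. The host, credentials, and image ID are placeholders, and the calls are shown as commonly documented rather than verified against a specific OMERO release.

      from omero.gateway import BlitzGateway

      # Placeholder server, credentials and image ID; replace with real values.
      HOST, PORT, USER, PASSWORD, IMAGE_ID = "omero.example.org", 4064, "user", "secret", 1

      conn = BlitzGateway(USER, PASSWORD, host=HOST, port=PORT)
      if conn.connect():
          image = conn.getObject("Image", IMAGE_ID)
          print(image.getName(), image.getSizeX(), image.getSizeY(),
                image.getSizeZ(), image.getSizeC(), image.getSizeT())
          # fetch one 2-D plane (z=0, channel=0, timepoint=0) as a numpy array
          plane = image.getPrimaryPixels().getPlane(0, 0, 0)
          print("plane shape:", plane.shape, "dtype:", plane.dtype)
          conn.close()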

  10. Raster Metafile and Raster Metafile Translator

    NASA Technical Reports Server (NTRS)

    Taylor, Nancy L.; Everton, Eric L.; Randall, Donald P.; Gates, Raymond L.; Skeens, Kristi M.

    1989-01-01

    The intent is to present an effort undertaken at NASA Langley Research Center to design a generic raster image format and to develop tools for processing images prepared in this format. Both the Raster Metafile (RM) format and the Raster Metafile Translator (RMT) are addressed. This document is intended to serve a varied audience including: users wishing to display and manipulate raster image data, programmers responsible for either interfacing the RM format with other raster formats or for developing new RMT device drivers, and programmers charged with installing the software on a host platform.

  11. Digital shaded relief image of a carbonate platform (northern Great Bahama Bank): Scenery seen and unseen

    NASA Astrophysics Data System (ADS)

    Boss, Stephen K.

    1996-11-01

    A mosaic image of the northern Great Bahama Bank was created from separate gray-scale Landsat images using photo-editing and image analysis software that is commercially available for desktop computers. Measurements of pixel gray levels (relative scale from 0 to 255 referred to as digital number, DN) on the mosaic image were compared to bank-top bathymetry (determined from a network of single-channel, high-resolution seismic profiles), bottom type (coarse sand, sandy mud, barren rock, or reef determined from seismic profiles and diver observations), and vegetative cover (presence and/or absence and relative density of the marine angiosperm Thalassia testudinum determined from diver observations). Results of these analyses indicate that bank-top bathymetry is a primary control on observed pixel DN, bottom type is a secondary control on pixel DN, and vegetative cover is a tertiary influence on pixel DN. Consequently, processing of the gray-scale Landsat mosaic with a directional gradient edge-detection filter generated a physiographic shaded relief image resembling bank-top bathymetric patterns related to submerged physiographic features across the platform. The visibility of submerged karst landforms, Pleistocene eolianite ridges, islands, and possible paleo-drainage patterns created during sea-level lowstands is significantly enhanced on processed images relative to the original mosaic. Bank-margin ooid shoals, platform interior sand bodies, reef edifices, and bidirectional sand waves are features resulting from Holocene carbonate deposition that are also more clearly visible on the new physiographic images. Combined with observational data (single-channel, high-resolution seismic profiles, bottom observations by SCUBA divers, sediment and rock cores) across the northern Great Bahama Bank, these physiographic images facilitate comprehension of areal relations among antecedent platform topography, physical processes, and ensuing depositional patterns during sea-level rise.
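
    The directional gradient edge-detection step can be sketched in a few lines: a Sobel derivative projected onto a chosen illumination azimuth turns a gray-scale mosaic into a shaded-relief-style image. The example below is illustrative (synthetic surface, arbitrary azimuth) and is not the author's exact filter.

      import numpy as np
      from scipy import ndimage

      def directional_gradient(gray, azimuth_deg=315.0):
          """Shaded-relief-style image: gradient component along one direction.

          gray        : 2-D array of pixel digital numbers (0-255)
          azimuth_deg : illumination direction, clockwise from north
                        (315 deg = light from the northwest, an assumed convention)
          """
          gy = ndimage.sobel(gray.astype(float), axis=0)   # derivative along rows
          gx = ndimage.sobel(gray.astype(float), axis=1)   # derivative along columns
          az = np.deg2rad(azimuth_deg)
          shade = gx * np.sin(az) - gy * np.cos(az)        # directional derivative
          shade -= shade.min()                             # rescale to 0-255 for display
          return (255 * shade / max(shade.max(), 1e-9)).astype(np.uint8)

      # toy "bank-top" surface: a ridge crossing a flat platform
      yy, xx = np.mgrid[:128, :128]
      mosaic = 120 + 60 * np.exp(-((xx - yy) / 12.0) ** 2)
      relief = directional_gradient(mosaic)
      print(relief.shape, relief.dtype)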

  12. Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors

    PubMed Central

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena

    2013-01-01

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Using actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so, to determine the most suitable implementation platform, analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with the database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
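
    After motion compensation, ISAR image formation is commonly a two-dimensional Fourier transform over fast time (range) and slow time (Doppler). The toy Python sketch below forms an image from a simulated two-scatterer data matrix; it is a schematic illustration, not the authors' processing chain.

      import numpy as np

      # Toy dechirped, motion-compensated data matrix: pulses x range samples.
      n_pulses, n_samples = 128, 256
      rng = np.random.default_rng(0)
      data = np.zeros((n_pulses, n_samples), dtype=complex)

      # two point scatterers, each a complex sinusoid in both dimensions
      for f_range, f_doppler in [(0.12, 0.05), (0.30, -0.10)]:
          p = np.arange(n_pulses)[:, None]
          s = np.arange(n_samples)[None, :]
          data += np.exp(2j * np.pi * (f_range * s + f_doppler * p))
      data += 0.1 * (rng.standard_normal(data.shape) + 1j * rng.standard_normal(data.shape))

      # Range compression plus Doppler processing = 2-D FFT; windowing reduces sidelobes.
      window = np.hanning(n_pulses)[:, None] * np.hanning(n_samples)[None, :]
      isar = np.fft.fftshift(np.fft.fft2(data * window))
      image_db = 20 * np.log10(np.abs(isar) + 1e-12)
      print("peak at (doppler, range) bin:",
            np.unravel_index(image_db.argmax(), image_db.shape))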

  13. Computational burden resulting from image recognition of high resolution radar sensors.

    PubMed

    López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L; Rufo, Elena

    2013-04-22

    This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. Using actual data collected by radar, the stages and algorithms needed to obtain ISAR images are reviewed, including high resolution range profile generation, motion compensation, and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition processes are burdensome and time consuming, so, to determine the most suitable implementation platform, analysis of the computational complexity is of great interest. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and comparison with the database, is explained separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation.

  14. Volumetric neuroimage analysis extensions for the MIPAV software package.

    PubMed

    Bazin, Pierre-Louis; Cuzzocreo, Jennifer L; Yassa, Michael A; Gandler, William; McAuliffe, Matthew J; Bassett, Susan S; Pham, Dzung L

    2007-09-15

    We describe a new collection of publicly available software tools for performing quantitative neuroimage analysis. The tools perform semi-automatic brain extraction, tissue classification, Talairach alignment, and atlas-based measurements within a user-friendly graphical environment. They are implemented as plug-ins for MIPAV, a freely available medical image processing software package from the National Institutes of Health. Because the plug-ins and MIPAV are implemented in Java, both can be utilized on nearly any operating system platform. In addition to the software plug-ins, we have also released a digital version of the Talairach atlas that can be used to perform regional volumetric analyses. Several studies are conducted applying the new tools to simulated and real neuroimaging data sets.

  15. Engineering and algorithm design for an image processing Api: a technical report on ITK--the Insight Toolkit.

    PubMed

    Yoo, Terry S; Ackerman, Michael J; Lorensen, William E; Schroeder, Will; Chalana, Vikram; Aylward, Stephen; Metaxas, Dimitris; Whitaker, Ross

    2002-01-01

    We present the detailed planning and execution of the Insight Toolkit (ITK), an application programming interface (API) for the segmentation and registration of medical image data. This public resource has been developed through the NLM Visible Human Project, and is in beta test as an open-source software offering under cost-free licensing. The toolkit concentrates on 3D medical data segmentation and registration algorithms, multimodal and multiresolution capabilities, and portable, platform-independent support for Windows and Linux/Unix systems. This toolkit was built using current practices in software engineering. Specifically, we embraced the concept of generic programming during the development of these tools, working extensively with C++ templates and the freedom and flexibility they allow. Software development tools for distributed consortium-based code development have been created and are also publicly available. We discuss our assumptions, design decisions, and some lessons learned.

  16. OpenMSI: A High-Performance Web-Based Platform for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Greiner, Annette; Cholia, Shreyas

    Mass spectrometry imaging (MSI) enables researchers to probe endogenous molecules directly within the architecture of the biological matrix. Unfortunately, efficient access, management, and analysis of the data generated by MSI approaches remain major challenges to this rapidly developing field. Despite the availability of numerous dedicated file formats and software packages, it is a widely held viewpoint that the biggest challenge is simply opening, sharing, and analyzing a file without loss of information. Here we present OpenMSI, a software framework and platform that addresses these challenges via an advanced, high-performance, extensible file format and Web API for remote data access (http://openmsi.nersc.gov). The OpenMSI file format supports storage of raw MSI data, metadata, and derived analyses in a single, self-describing format based on HDF5 and is supported by a large range of analysis software (e.g., Matlab and R) and programming languages (e.g., C++, Fortran, and Python). Careful optimization of the storage layout of MSI data sets using chunking, compression, and data replication accelerates common, selective data access operations while minimizing data storage requirements, and is a critical enabler of rapid data I/O. The OpenMSI file format has been shown to provide a >2000-fold improvement for image access operations, enabling spectrum and image retrieval in less than 0.3 s across the Internet even for 50 GB MSI data sets. To make remote high-performance compute resources accessible for analysis and to facilitate data sharing and collaboration, we describe an easy-to-use yet powerful Web API, enabling fast and convenient access to MSI data, metadata, and derived analysis results stored remotely to facilitate high-performance data analysis and enable implementation of Web-based data sharing, visualization, and analysis.
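
    The storage ideas behind the format (HDF5 with chunking and compression chosen to favor selective access) can be illustrated with h5py as below; the dataset name, chunk shape, and toy data cube are assumptions, not the OpenMSI format specification.

      import numpy as np
      import h5py

      # Toy MSI cube: 64 x 64 pixels, 2000 m/z bins (real data sets are far larger).
      cube = np.random.poisson(5, size=(64, 64, 2000)).astype(np.float32)

      with h5py.File("msi_example.h5", "w") as f:
          dset = f.create_dataset(
              "msidata",                    # dataset name is an assumption, not the OpenMSI layout
              data=cube,
              chunks=(16, 16, 2000),        # chunk shape chosen to favour whole-spectrum reads
              compression="gzip",
              compression_opts=4,
          )
          dset.attrs["num_mz_bins"] = 2000  # minimal metadata, illustrative only

      with h5py.File("msi_example.h5", "r") as f:
          spectrum = f["msidata"][10, 20, :]      # one pixel's full spectrum
          ion_image = f["msidata"][:, :, 500]     # one m/z slice across the image
          print(spectrum.shape, ion_image.shape)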

  17. A three-dimensional image processing program for accurate, rapid, and semi-automated segmentation of neuronal somata with dense neurite outgrowth

    PubMed Central

    Ross, James D.; Cullen, D. Kacy; Harris, James P.; LaPlaca, Michelle C.; DeWeerth, Stephen P.

    2015-01-01

    Three-dimensional (3-D) image analysis techniques provide a powerful means to rapidly and accurately assess complex morphological and functional interactions between neural cells. Current software-based identification methods of neural cells generally fall into two applications: (1) segmentation of cell nuclei in high-density constructs or (2) tracing of cell neurites in single cell investigations. We have developed novel methodologies to permit the systematic identification of populations of neuronal somata possessing rich morphological detail and dense neurite arborization throughout thick tissue or 3-D in vitro constructs. The image analysis incorporates several novel automated features for the discrimination of neurites and somata by initially classifying features in 2-D and merging these classifications into 3-D objects; the 3-D reconstructions automatically identify and adjust for over- and under-segmentation errors. Additionally, the platform provides for software-assisted error corrections to further minimize error. These features attain very accurate cell boundary identifications to handle a wide range of morphological complexities. We validated these tools using confocal z-stacks from thick 3-D neural constructs where neuronal somata had varying degrees of neurite arborization and complexity, achieving an accuracy of ≥95%. We demonstrated the robustness of these algorithms in a more complex arena through the automated segmentation of neural cells in ex vivo brain slices. These novel methods surpass previous techniques in robustness and accuracy through: (1) the ability to process neurites and somata, (2) bidirectional segmentation correction, and (3) validation via software-assisted user input. This 3-D image analysis platform provides valuable tools for the unbiased analysis of neural tissue or tissue surrogates within a 3-D context, appropriate for the study of multi-dimensional cell-cell and cell-extracellular matrix interactions. PMID:26257609

  18. Retina-like sensor image coordinates transformation and display

    NASA Astrophysics Data System (ADS)

    Cao, Fengmei; Cao, Nan; Bai, Tingzhu; Song, Shengyu

    2015-03-01

    For a new kind of retina-like sensor camera, image acquisition, coordinate transformation, and interpolation need to be realized. Both the coordinate transformation and the interpolation are computed in polar coordinates due to the sensor's particular pixel distribution. The image interpolation is based on sub-pixel interpolation, and its relative weights are obtained in polar coordinates. The hardware platform is composed of the retina-like sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes real-time image acquisition, coordinate transformation, and interpolation.
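
    The Cartesian-to-polar resampling with sub-pixel interpolation can be sketched with SciPy's map_coordinates, as below; the sampling grid, bilinear order, and test image are illustrative and do not reproduce the VC++/MIL implementation.

      import numpy as np
      from scipy.ndimage import map_coordinates

      def to_polar(img, n_radii=128, n_angles=256):
          """Resample a 2-D image onto a (radius, angle) grid with bilinear
          (sub-pixel) interpolation about the image centre."""
          cy, cx = (np.array(img.shape) - 1) / 2.0
          r_max = min(cy, cx)
          radii = np.linspace(0, r_max, n_radii)
          angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
          rr, aa = np.meshgrid(radii, angles, indexing="ij")
          rows = cy + rr * np.sin(aa)
          cols = cx + rr * np.cos(aa)
          return map_coordinates(img.astype(float), [rows, cols], order=1)

      # toy test image: a bright ring, which becomes a bright horizontal band
      yy, xx = np.mgrid[:256, :256]
      ring = (np.abs(np.hypot(yy - 127.5, xx - 127.5) - 60) < 3).astype(float)
      polar = to_polar(ring)
      print(polar.shape, "brightest radius bin:", polar.sum(axis=1).argmax())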

  19. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells.

    PubMed

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W; Gautier, Virginie W

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip.

  20. MATtrack: A MATLAB-Based Quantitative Image Analysis Platform for Investigating Real-Time Photo-Converted Fluorescent Signals in Live Cells

    PubMed Central

    Courtney, Jane; Woods, Elena; Scholz, Dimitri; Hall, William W.; Gautier, Virginie W.

    2015-01-01

    We introduce here MATtrack, an open source MATLAB-based computational platform developed to process multi-Tiff files produced by a photo-conversion time lapse protocol for live cell fluorescent microscopy. MATtrack automatically performs a series of steps required for image processing, including extraction and import of numerical values from Multi-Tiff files, red/green image classification using gating parameters, noise filtering, background extraction, contrast stretching and temporal smoothing. MATtrack also integrates a series of algorithms for quantitative image analysis enabling the construction of mean and standard deviation images, clustering and classification of subcellular regions and injection point approximation. In addition, MATtrack features a simple user interface, which enables monitoring of Fluorescent Signal Intensity in multiple Regions of Interest, over time. The latter encapsulates a region growing method to automatically delineate the contours of Regions of Interest selected by the user, and performs background and regional Average Fluorescence Tracking, and automatic plotting. Finally, MATtrack computes convenient visualization and exploration tools including a migration map, which provides an overview of the protein intracellular trajectories and accumulation areas. In conclusion, MATtrack is an open source MATLAB-based software package tailored to facilitate the analysis and visualization of large data files derived from real-time live cell fluorescent microscopy using photoconvertible proteins. It is flexible, user friendly, compatible with Windows, Mac, and Linux, and a wide range of data acquisition software. MATtrack is freely available for download at eleceng.dit.ie/courtney/MATtrack.zip. PMID:26485569
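
    The core measurement, a background-corrected mean fluorescence per region of interest over time, can be sketched as below; the array shapes, masks, and synthetic decay are illustrative assumptions, not MATtrack's MATLAB code.

      import numpy as np

      def roi_traces(frames, roi_masks, bg_mask):
          """Background-corrected mean fluorescence per ROI over time.

          frames    : (T, H, W) array of image frames
          roi_masks : dict name -> boolean (H, W) mask for each region of interest
          bg_mask   : boolean (H, W) mask of a background region
          """
          bg = frames[:, bg_mask].mean(axis=1)               # per-frame background level
          return {name: frames[:, mask].mean(axis=1) - bg
                  for name, mask in roi_masks.items()}

      # synthetic movie: an ROI whose signal decays after photo-conversion at t=0
      T, H, W = 50, 64, 64
      frames = 100 + 10 * np.random.rand(T, H, W)
      decay = 400 * np.exp(-np.arange(T) / 15.0)             # converted-signal decay
      frames[:, 20:30, 20:30] += decay[:, None, None]

      rois = {"converted_region": np.zeros((H, W), bool)}
      rois["converted_region"][20:30, 20:30] = True
      background = np.zeros((H, W), bool)
      background[:10, :10] = True

      trace = roi_traces(frames, rois, background)["converted_region"]
      print("t=0: %.1f   t=%d: %.1f" % (trace[0], T - 1, trace[-1]))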

  1. Perk Station – Percutaneous Surgery Training and Performance Measurement Platform

    PubMed Central

    Vikal, Siddharth; U-Thainual, Paweena; Carrino, John A.; Iordachita, Iulian; Fischer, Gregory S.; Fichtinger, Gabor

    2009-01-01

    Motivation Image-guided percutaneous (through the skin) needle-based surgery has become part of routine clinical practice in performing procedures such as biopsies, injections and therapeutic implants. A novice physician typically performs needle interventions under the supervision of a senior physician; a slow and inherently subjective training process that lacks objective, quantitative assessment of the surgical skill and performance. Shortening the learning curve and increasing procedural consistency are important factors in assuring high-quality medical care. Methods This paper describes a laboratory validation system, called Perk Station, for standardized training and performance measurement under different assistance techniques for needle-based surgical guidance systems. The initial goal of the Perk Station is to assess and compare different techniques: 2D image overlay, biplane laser guide, laser protractor and conventional freehand. The main focus of this manuscript is the planning and guidance software system developed on the 3D Slicer platform, a free, open source software package designed for visualization and analysis of medical image data. Results The prototype Perk Station has been successfully developed, the associated needle insertion phantoms were built, and the graphical user interface was fully implemented. The system was inaugurated in undergraduate teaching and a wide array of outreach activities. Initial results, experiences, ongoing activities and future plans are reported. PMID:19539446

  2. Implementation of remote monitoring and managing switches

    NASA Astrophysics Data System (ADS)

    Leng, Junmin; Fu, Guo

    2010-12-01

    In order to strengthen the safety performance of the network and provide greater convenience and efficiency for operators and managers, a system for the remote monitoring and management of switches has been designed and implemented using advanced network technology and existing network resources. A fast-speed Internet Protocol camera (FS IP Camera) was selected, which has a 32-bit RISC embedded processor and supports a number of protocols. The Motion-JPEG image compression algorithm was adopted so that high-resolution images can be transmitted over narrow network bandwidth. The architecture of the whole monitoring and management system was designed and implemented according to the current infrastructure of the network and switches, and the control and administrative software was designed accordingly. The dynamic JavaServer Pages (JSP) development platform is used in the system, and an SQL (Structured Query Language) Server database is used to store and access image information, network messages, and user data. The reliability and security of the system are further strengthened by access control. The software is cross-platform, so multiple operating systems (UNIX, Linux, and Windows) are supported. The system can greatly reduce manpower costs and allows problems to be found and solved quickly.

  3. Survey of Non-Rigid Registration Tools in Medicine.

    PubMed

    Keszei, András P; Berkels, Benjamin; Deserno, Thomas M

    2017-02-01

    We catalogue available software solutions for non-rigid image registration to support scientists in selecting suitable tools for specific medical registration purposes. Registration tools were identified using a non-systematic search in Pubmed, Web of Science, IEEE Xplore® Digital Library, Google Scholar, and through references in identified sources (n = 22). Exclusions are due to unavailability or inappropriateness. The remaining (n = 18) tools were classified by (i) access and technology, (ii) interfaces and application, (iii) living community, (iv) supported file formats, and (v) types of registration methodologies emphasizing the similarity measures implemented. Out of the 18 tools, (i) 12 are open source, 8 are released under a permissive free license, which imposes the least restrictions on the use and further development of the tool, 8 provide graphical processing unit (GPU) support; (ii) 7 are built on software platforms, 5 were developed for brain image registration; (iii) 6 are under active development but only 3 have had their last update in 2015 or 2016; (iv) 16 support the Analyze format, while 7 file formats can be read with only one of the tools; and (v) 6 provide multiple registration methods and 6 provide landmark-based registration methods. Based on open source availability, licensing, GPU support, an active community, the number of supported file formats, algorithms, and similarity measures, the tools Elastix and Plastimatch are chosen for the ITK platform and for use without platform requirements, respectively. Researchers in medical image analysis already have a large choice of registration tools freely available. However, the most recently published algorithms may not yet be included in these tools.

  4. MSiReader: an open-source interface to view and analyze high resolving power MS imaging files on Matlab platform.

    PubMed

    Robichaud, Guillaume; Garrard, Kenneth P; Barry, Jeremy A; Muddiman, David C

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, a wide variety of data file formats are used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  5. MSiReader: An Open-Source Interface to View and Analyze High Resolving Power MS Imaging Files on Matlab Platform

    NASA Astrophysics Data System (ADS)

    Robichaud, Guillaume; Garrard, Kenneth P.; Barry, Jeremy A.; Muddiman, David C.

    2013-05-01

    During the past decade, the field of mass spectrometry imaging (MSI) has greatly evolved, to a point where it has now been fully integrated by most vendors as an optional or dedicated platform that can be purchased with their instruments. However, the technology is not mature and multiple research groups in both academia and industry are still very actively studying the fundamentals of imaging techniques, adapting the technology to new ionization sources, and developing new applications. As a result, a wide variety of data file formats are used to store mass spectrometry imaging data and, concurrent with the development of MSI, collaborative efforts have been undertaken to introduce common imaging data file formats. However, few free software packages to read and analyze files of these different formats are readily available. We introduce here MSiReader, a free open source application to read and analyze high resolution MSI data from the most common MSI data formats. The application is built on the Matlab platform (Mathworks, Natick, MA, USA) and includes a large selection of data analysis tools and features. People who are unfamiliar with the Matlab language will have little difficulty navigating the user-friendly interface, and users with Matlab programming experience can adapt and customize MSiReader for their own needs.

  6. HyperCard and Other Macintosh Applications in Astronomy Education

    NASA Astrophysics Data System (ADS)

    Meisel, D.

    1992-12-01

    For the past six years, Macintosh computers have been used in introductory astronomy classes and laboratories with HyperCard and other commercial Macintosh software. I will review some of the available software that has been found particularly useful in undergraduate situations. The review will start with HyperCard (a programmable "index card" system) since it is a mature multimedia platform for the Macintosh. Experiences with the Voyager, the TS-24, MathCad, NIH Image, and other programs as used by the author and George Mumford (Tufts University) in courses and workshops will be described.

  7. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment -- San Joaquin Basin (5010): Chapter 28 in Petroleum systems and geologic assessment of oil and gas in the San Joaquin Basin Province, California

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the assessment process. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The reader may import the data into computers and software directly from the portable document format (.pdf) files of the text, without transcription. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  8. Chapter 2: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - The Wind River Basin Province

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2007-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report, and archival data that permit the user to perform further analyses, are available elsewhere on this CD-ROM. The reader may import the data into computers and software directly from the Portable Document Format files (.pdf files) of the text, without transcription. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in raw form as tab-delimited text files (.tab files).

  9. DICOM image secure communications with Internet protocols IPv6 and IPv4.

    PubMed

    Zhang, Jianguo; Yu, Fenghai; Sun, Jianyong; Yang, Yuanyuan; Liang, Chenwen

    2007-01-01

    Image-data transmission from one site to another through a public network is usually characterized in terms of privacy, authenticity, and integrity. In this paper, we first describe a general scenario of how images are delivered from one site to another through a wide-area network (WAN) with security features of data privacy, integrity, and authenticity. Second, we give a common implementation method for the digital imaging and communications in medicine (DICOM) image communication software library with IPv6/IPv4 for high-speed broadband Internet using open-source software. Third, we discuss the two major secure-transmission methods currently used in medical image-data communication with privacy support: IP security (IPsec) and the secure sockets layer (SSL) or transport layer security (TLS). Fourth, we describe a test schema of multiple-modality DICOM image communications through TCP/IPv4 and TCP/IPv6 with different security methods, security algorithms, and operating systems, and evaluate the test results. We found that there are tradeoffs between choosing the IPsec-based and the SSL/TLS-based security implementations of the IPv6/IPv4 protocols. If the WAN only uses IPv6, as in high-speed broadband Internet, the choice is IPsec-based security. If the networks are IPv4 or a combination of IPv6 and IPv4, it is better to use SSL/TLS security. The Linux platform has more security algorithms implemented than the Windows (XP) platform and achieved better performance in most experiments of IPv6- and IPv4-based DICOM image communications. In teleradiology or enterprise PACS applications, the Linux operating system may be the better choice as a peer security gateway for both IPsec- and SSL/TLS-based secure DICOM communications across public networks.
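
    As a generic sketch of the SSL/TLS transport option discussed above (not a DICOM implementation; a toolkit such as DCMTK or pynetdicom would handle the DICOM association itself), the following uses Python's standard ssl module; the host name, port, and certificate paths are placeholder assumptions.

      # TLS client connection sketch; IPv4 vs IPv6 is chosen automatically by the
      # address family, and TLS runs identically over both.
      import socket
      import ssl

      context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH, cafile="ca_cert.pem")
      context.load_cert_chain(certfile="client_cert.pem", keyfile="client_key.pem")

      with socket.create_connection(("pacs.example.org", 2762)) as raw_sock:
          with context.wrap_socket(raw_sock, server_hostname="pacs.example.org") as tls_sock:
              print("negotiated protocol:", tls_sock.version())
              print("cipher suite:", tls_sock.cipher())
              # DICOM PDUs would be exchanged over tls_sock here.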

  10. A new tool for supervised classification of satellite images available on web servers: Google Maps as a case study

    NASA Astrophysics Data System (ADS)

    García-Flores, Agustín.; Paz-Gallardo, Abel; Plaza, Antonio; Li, Jun

    2016-10-01

    This paper describes a new web platform dedicated to the classification of satellite images called Hypergim. The current implementation of this platform enables users to perform classification of satellite images from any part of the world thanks to the worldwide maps provided by Google Maps. To perform this classification, Hypergim uses unsupervised algorithms like Isodata and K-means. Here, we present an extension of the original platform in which we adapt Hypergim to use supervised algorithms to improve the classification results. This involves a significant modification of the user interface, providing the user with a way to obtain samples of the classes present in the images for use in the training phase of the classification process. Another main goal of this development is to improve the runtime of the image classification process. To achieve this goal, we use a parallel implementation of the Random Forest classification algorithm, a modification of the well-known CURFIL software package. The use of this type of algorithm for image classification is widespread today thanks to its precision and ease of training. The implementation of Random Forest was developed using the CUDA platform, which enables us to exploit several models of NVIDIA graphics processing units to execute general-purpose computing tasks such as image classification algorithms. In addition to CUDA, we use other parallel libraries such as Intel Boost, taking advantage of the multithreading capabilities of modern CPUs. To ensure the best possible results, the platform is deployed in a cluster of commodity graphics processing units (GPUs), so that multiple users can use the tool concurrently. The experimental results indicate that the new algorithm significantly outperforms the previous unsupervised algorithms implemented in Hypergim, in both runtime and precision of the actual classification of the images.
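
    The supervised workflow described above (user-collected class samples train a classifier that then labels every pixel) can be sketched with scikit-learn's random forest rather than the CUDA-based CURFIL implementation; the arrays and class count below are illustrative.

      # Random-forest pixel classification sketch (scikit-learn, not CURFIL/CUDA).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      image = np.random.rand(200, 200, 3)                # placeholder image tile (H, W, bands)
      sample_pixels = np.random.rand(300, 3)             # band values at user-labelled points
      sample_labels = np.random.randint(0, 4, size=300)  # e.g. water/urban/forest/soil

      clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
      clf.fit(sample_pixels, sample_labels)

      # Classify the whole image by flattening it into a (H*W, bands) feature matrix.
      h, w, b = image.shape
      labels = clf.predict(image.reshape(-1, b)).reshape(h, w)
      print("classified map shape:", labels.shape)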

  11. ITK: enabling reproducible research and open science

    PubMed Central

    McCormick, Matthew; Liu, Xiaoxiao; Jomier, Julien; Marion, Charles; Ibanez, Luis

    2014-01-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. Such tools will empower researchers and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitated its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated, and improved. By building on the efficiency and quality of open-source methodologies, ITK has provided the medical imaging community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, at 0.46. PMID:24600387

  12. ITK: enabling reproducible research and open science.

    PubMed

    McCormick, Matthew; Liu, Xiaoxiao; Jomier, Julien; Marion, Charles; Ibanez, Luis

    2014-01-01

    Reproducibility verification is essential to the practice of the scientific method. Researchers report their findings, which are strengthened as other independent groups in the scientific community share similar outcomes. In the many scientific fields where software has become a fundamental tool for capturing and analyzing data, this requirement of reproducibility implies that reliable and comprehensive software platforms and tools should be made available to the scientific community. Such tools will empower researchers and the public to verify, through practice, the reproducibility of observations that are reported in the scientific literature. Medical image analysis is one of the fields in which the use of computational resources, both software and hardware, is an essential platform for performing experimental work. In this arena, the introduction of the Insight Toolkit (ITK) in 1999 has transformed the field and facilitated its progress by accelerating the rate at which algorithmic implementations are developed, tested, disseminated, and improved. By building on the efficiency and quality of open-source methodologies, ITK has provided the medical imaging community with an effective platform on which to build a daily workflow that incorporates the true scientific practices of reproducibility verification. This article describes the multiple tools, methodologies, and practices that the ITK community has adopted, refined, and followed during the past decade in order to become one of the research communities with the most modern reproducibility verification infrastructure. For example, 207 contributors have created over 2400 unit tests that provide over 84% code line test coverage. The Insight Journal, an open publication journal associated with the toolkit, has seen over 360,000 publication downloads. The median normalized closeness centrality, a measure of knowledge flow, resulting from the distributed peer code review system was high, at 0.46.

  13. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model

    PubMed Central

    Saul, Katherine R.; Hu, Xiao; Goehler, Craig M.; Vidt, Meghan E.; Daly, Melissa; Velisar, Anca; Murray, Wendy M.

    2014-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated the extent to which differences in technical implementation in popular simulation software environments result in differences in kinematic predictions for single- and multi-joint movements, using EMG-based and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using the SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in the muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms. PMID:24995410

  14. Benchmarking of dynamic simulation predictions in two software platforms using an upper limb musculoskeletal model.

    PubMed

    Saul, Katherine R; Hu, Xiao; Goehler, Craig M; Vidt, Meghan E; Daly, Melissa; Velisar, Anca; Murray, Wendy M

    2015-01-01

    Several open-source or commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated the extent to which differences in technical implementation in popular simulation software environments result in differences in kinematic predictions for single- and multi-joint movements, using EMG-based and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using the SIMM-Dynamics Pipeline-SD/Fast and OpenSim platforms. The most substantial divergence results from differences in the muscle model and actuator paths. This model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.

  15. Software system design for the non-null digital Moiré interferometer

    NASA Astrophysics Data System (ADS)

    Chen, Meng; Hao, Qun; Hu, Yao; Wang, Shaopu; Li, Tengfei; Li, Lin

    2016-11-01

    Aspheric optical components are an indispensable part of modern optical systems. With the development of fabrication techniques for aspheric optical elements, high-precision testing of the figure error of aspheric surfaces has become an urgent issue. We proposed a digital Moiré interferometer technique (DMIT) based on the partial compensation principle for aspheric and freeform surface measurement. Different from a traditional interferometer, DMIT consists of a real and a virtual interferometer. The virtual interferometer is simulated with Zemax software to perform phase shifting and alignment. Results are obtained by a series of calculations on the real interferogram and the virtual interferograms generated by computer. DMIT requires a specific, reliable software system to ensure normal operation. Image acquisition and data processing are two important parts of this system, and realizing the connection between the real and virtual interferometers is also a challenge. In this paper, we present a software system design for DMIT with a friendly user interface and robust data-processing features, enabling us to acquire the figure error of the measured asphere. We choose Visual C++ as the software development platform and control the ideal interferometer through hybrid programming with Zemax. After image acquisition and data transmission, the system calls image-processing algorithms written in Matlab to calculate the figure error of the measured asphere. We tested the software system experimentally by measuring an aspheric surface, demonstrating the feasibility of the software system.
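
    The abstract mentions that the virtual interferometer performs phase shifting but does not give the algorithm; as a purely illustrative stand-in, the standard four-step phase-shifting formula can be sketched as follows, with synthetic frames in place of captured interferograms.

      # Four-step phase-shifting sketch (a standard algorithm, not necessarily DMIT's).
      import numpy as np

      def wrapped_phase(i1, i2, i3, i4):
          """Wrapped phase (radians) from frames shifted by 0, pi/2, pi, 3*pi/2."""
          return np.arctan2(i4 - i2, i1 - i3)

      # Synthetic data standing in for acquired interferograms.
      y, x = np.mgrid[0:256, 0:256]
      true_phase = 2 * np.pi * (x + y) / 256.0
      frames = [np.cos(true_phase + k * np.pi / 2) for k in range(4)]
      phi = wrapped_phase(*frames)
      print("wrapped phase range:", float(phi.min()), "to", float(phi.max()))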

  16. The IMS Software Integration Platform

    DTIC Science & Technology

    1993-04-12

    (Recovered excerpt from a damaged scan.) The report describes a data model designed to incorporate all data shared by the IMS applications; some entities (time series, images, algorithm-specific parameters) must be managed within it. The programming interface summarized in the report includes database routines (dbwhoami, dbcancel), transaction management (dbcommit, dbrollback), key counter assignment (dbgetcounter), and string handling utilities for converting between C strings and padded arrays (cstr_to_pad, pad_to_cstr).

  17. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image datasets over the web. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring the rendered images to an HTML5-capable web browser on the client side. Compared to traditional local visualization solutions, our solution does not require users to install extra software or download the whole volume dataset from the PACS server. This web-based design makes it feasible for users to access the 3D medical image visualization service wherever internet access is available.
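
    A minimal sketch of the server-side rendering idea, assuming a hypothetical Flask endpoint that returns one slice of a volume as a PNG for an HTML5 client; the framework choice, URL layout, and volume are illustrative and are not the system described above.

      # Hypothetical slice-serving endpoint; the client only needs <img src="/slice/60">.
      import io
      import numpy as np
      from PIL import Image
      from flask import Flask, send_file

      app = Flask(__name__)
      volume = np.random.rand(128, 256, 256)   # placeholder volume (slices, rows, cols)

      @app.route("/slice/<int:index>")
      def slice_png(index):
          data = volume[index % volume.shape[0]]
          img = Image.fromarray((255 * data / data.max()).astype(np.uint8))
          buf = io.BytesIO()
          img.save(buf, format="PNG")
          buf.seek(0)
          return send_file(buf, mimetype="image/png")

      if __name__ == "__main__":
          app.run()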

  18. Co-Registration of Terrestrial and Uav-Based Images - Experimental Results

    NASA Astrophysics Data System (ADS)

    Gerke, M.; Nex, F.; Jende, P.

    2016-03-01

    For many applications within urban environments, the combined use of images taken from the ground and from unmanned aerial platforms is attractive: while the upper parts of objects, including roofs, can be observed from the airborne perspective, the ground images complement the data with lateral views to retrieve a complete visualisation or 3D reconstruction of areas of interest. The automatic co-registration of air- and ground-based images is still a challenge and cannot be considered solved. The main obstacle originates from the fact that objects are photographed from quite different angles, and hence state-of-the-art tie-point measurement approaches cannot cope with the induced perspective transformation. A first important step towards a solution is to use airborne images taken under slant directions. Those oblique views not only help to connect vertical images and horizontal views but also provide image information from 3D structures not visible from the other two directions. In our experience, however, careful planning and many images taken under different viewing angles are still needed to support automatic matching across all images and a complete bundle block adjustment. Nevertheless, the entire process remains quite sensitive: the removal of a single image might lead to a completely different or wrong solution, or to separation of the image blocks. In this paper we analyse the impact different parameters and strategies have on the solution, namely (a) the tie-point matcher used and (b) the software used for bundle adjustment. Using the data provided in the context of the ISPRS benchmark on multi-platform photogrammetry, we systematically address these influences. For tie-point matching we test the standard SIFT point extractor and descriptor, as well as the SURF, ASIFT, and ORB techniques and (A)KAZE, which is based on a nonlinear scale space. In terms of pre-processing we analyse the Wallis filter. Results show that in more challenging situations, in this case for data captured from different platforms on different days, most approaches do not perform well. Wallis filtering emerged as most helpful, especially for the SIFT approach. The commercial software pix4dmapper succeeds in the overall bundle adjustment only for some configurations, and notably not for the entire image block provided.
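
    The tie-point matching step being compared can be sketched with OpenCV's AKAZE detector and a Lowe-style ratio test; the image paths are placeholders and this is an illustration, not the authors' processing chain.

      # AKAZE feature matching sketch with a ratio test (OpenCV).
      import cv2

      img1 = cv2.imread("terrestrial.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder paths
      img2 = cv2.imread("uav_oblique.jpg", cv2.IMREAD_GRAYSCALE)

      detector = cv2.AKAZE_create()               # nonlinear scale space, cf. (A)KAZE above
      kp1, des1 = detector.detectAndCompute(img1, None)
      kp2, des2 = detector.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_HAMMING)   # AKAZE descriptors are binary
      matches = matcher.knnMatch(des1, des2, k=2)

      # Keep only matches that clearly beat their second-best candidate.
      good = [m for m, n in matches if m.distance < 0.75 * n.distance]
      print(len(good), "tie-point candidates out of", len(matches), "raw matches")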

  19. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation is necessary in health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in histomic images arise from numerous complex factors, correlations between changes in the imaging analysis and the time, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of measurement data, which is obtained easily and quickly from Scientific Software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Such systems support improvements towards Lean and Six Sigma, affecting all branches of biology and medicine. As an example of the use of statistics, the major types of variation in a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities, whereas common causes are known to be associated with gender, race, size, and genetic make-up. Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results to baseline data using F-statistics. Self-pairings over time are also useful. Special and common causes are identified apart from aging in applying the statistical methods. In the future, implementation of imaging measurement methods by research staff, doctors, and concerned patient partners will result in improved health diagnosis, reporting, and cause determination. The long-term prospect for quantified measurements is better quality in imaging analysis, with applications of higher utility for health care providers.
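
    As a small illustration of the F-statistic comparisons mentioned above, the sketch below runs a one-way analysis of variance on synthetic BMD-like measurements with SciPy; the groups and values are hypothetical.

      # One-way ANOVA (F-test) on synthetic bone-mineral-density groups.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      baseline = rng.normal(1.00, 0.08, size=30)    # baseline BMD (g/cm^2), synthetic
      follow_up = rng.normal(0.95, 0.08, size=30)   # follow-up measurements, synthetic
      reference = rng.normal(1.02, 0.08, size=30)   # reference group, synthetic

      f_stat, p_value = stats.f_oneway(baseline, follow_up, reference)
      print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
      # A small p-value flags variation beyond the common-cause spread of the groups.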

  20. MSiReader v1.0: Evolving Open-Source Mass Spectrometry Imaging Software for Targeted and Untargeted Analyses

    NASA Astrophysics Data System (ADS)

    Bokhart, Mark T.; Nazari, Milad; Garrard, Kenneth P.; Muddiman, David C.

    2018-01-01

    A major update to the mass spectrometry imaging (MSI) software MSiReader is presented, offering a multitude of newly added features critical to MSI analyses. MSiReader is a free, open-source, and vendor-neutral software written in the MATLAB platform and is capable of analyzing most common MSI data formats. A standalone version of the software, which does not require a MATLAB license, is also distributed. The newly incorporated data analysis features expand the utility of MSiReader beyond simple visualization of molecular distributions. The MSiQuantification tool allows researchers to calculate absolute concentrations from quantification MSI experiments exclusively through the MSiReader software, significantly reducing data analysis time. An image overlay feature allows complementary imaging modalities to be displayed with the MSI data. A polarity filter has also been incorporated into the data loading step, allowing facile analysis of polarity-switching experiments without the need to parse the data prior to loading the data file into MSiReader. A quality assurance feature to generate a mass measurement accuracy (MMA) heatmap for an analyte of interest has also been added to allow investigation of MMA across the imaging experiment. Most importantly, as new features have been added, performance has not degraded; in fact, it has been dramatically improved. These new tools and the performance improvements in MSiReader v1.0 enable the MSI community to evaluate their data in greater depth and in less time.
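
    The mass measurement accuracy behind the MMA heatmap feature is a simple parts-per-million error; a sketch of the calculation (with made-up numbers, independent of MSiReader's MATLAB code) is shown below.

      # Mass measurement accuracy (ppm) for per-pixel measured m/z values.
      import numpy as np

      def mma_ppm(measured_mz, theoretical_mz):
          """Mass measurement accuracy in parts per million."""
          return (measured_mz - theoretical_mz) / theoretical_mz * 1e6

      measured = np.array([[760.5851, 760.5838],    # illustrative per-pixel m/z values
                           [760.5860, 760.5847]])
      print(mma_ppm(measured, 760.5845))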

  1. SITHON: An Airborne Fire Detection System Compliant with Operational Tactical Requirements

    PubMed Central

    Kontoes, Charalabos; Keramitsoglou, Iphigenia; Sifakis, Nicolaos; Konstantinidis, Pavlos

    2009-01-01

    In response to the urgent need of fire managers for timely information on fire location and extent, the SITHON system was developed. SITHON is a fully digital thermal imaging system, integrating an INS/GPS and a digital camera, designed to provide timely positioned and projected thermal images and video data streams that can be rapidly integrated into the GIS operated by Crisis Control Centres. This article presents in detail the hardware and software components of SITHON and demonstrates the first encouraging results of test flights over the Sithonia Peninsula in Northern Greece. It is envisaged that the SITHON system will soon be operated onboard various airborne platforms, including fire brigade airplanes and helicopters as well as UAV platforms owned and operated by the Greek Air Force. PMID:22399963

  2. Automatic thermographic image defect detection of composites

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Liebenberg, Bjorn; Raymont, Jeff; Santospirito, SP

    2011-05-01

    Detecting defects, and especially reliably measuring defect sizes, are critical objectives in automatic NDT defect detection applications. In this work, the Sentence software is proposed for the analysis of pulsed thermography and near-IR images of composite materials. Furthermore, the Sentence software delivers an end-to-end, user-friendly platform for engineers to perform complete manual inspections, as well as tools that allow senior engineers to develop inspection templates and profiles, reducing the requisite thermographic skill level of the operating engineer. Finally, the Sentence software can also offer complete independence from operator decisions through its fully automated "Beep on Defect" detection functionality. The end-to-end automatic inspection system includes sub-systems for defining a panel profile, generating an inspection plan, controlling a robot arm, and capturing thermographic images to detect defects. A statistical model has been built to analyze the entire image, evaluate grey-scale ranges, import sentencing criteria, and automatically detect impact damage defects. A full-width-half-maximum algorithm has been used to quantify the flaw sizes. The identified defects are imported into the sentencing engine, which then sentences the inspection (automatically compares analysis results against acceptance criteria) by comparing the most significant defect or group of defects against the inspection standards.
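
    The full-width-half-maximum sizing step can be illustrated with a simple profile-based estimate; the profile below is synthetic and the Sentence software's actual implementation is not public, so treat this only as a sketch of the general technique.

      # FWHM estimate on a 1D intensity profile across an indication.
      import numpy as np

      def fwhm(profile, pixel_pitch=1.0):
          """Width at half of the peak amplitude above the local background."""
          background = np.min(profile)
          half = background + 0.5 * (np.max(profile) - background)
          above = np.where(profile >= half)[0]
          return (above[-1] - above[0]) * pixel_pitch

      x = np.linspace(-10, 10, 201)
      profile = 0.2 + np.exp(-x**2 / (2 * 2.0**2))   # Gaussian-like defect signature
      print("estimated flaw width:", fwhm(profile, pixel_pitch=0.1), "mm")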

  3. Publishing Platform for Scientific Software - Lessons Learned

    NASA Astrophysics Data System (ADS)

    Hammitzsch, Martin; Fritzsch, Bernadette; Reusser, Dominik; Brembs, Björn; Deinzer, Gernot; Loewe, Peter; Fenner, Martin; van Edig, Xenia; Bertelmann, Roland; Pampel, Heinz; Klump, Jens; Wächter, Joachim

    2015-04-01

    Scientific software has become an indispensable commodity for the production, processing, and analysis of empirical data, as well as for modelling and simulation of complex processes, and it has a significant influence on the quality of research results. To strengthen recognition of the academic performance involved in scientific software development, to increase its visibility, and to promote the reproducibility of research results, concepts for the publication of scientific software have to be developed, tested, evaluated, and then transferred into operations. For this, the publication and citability of scientific software have to fulfil scientific criteria by means of defined processes and the use of persistent identifiers, similar to data publications. The SciForge project is addressing these challenges. Based on interviews, a blueprint for a scientific software publishing platform and a systematic implementation plan have been designed. In addition, the potential of journals, software repositories, and persistent identifiers has been evaluated to improve the publication and dissemination of reusable software solutions. It is important that procedures for publishing software, as well as methods and tools for software engineering, are reflected in the architecture of the platform in order to improve the quality of the software and the results of research. It is also necessary to work continuously on improving the conditions that promote the adoption and sustainable utilization of scientific software publications, including institutional policies for the development and publication of scientific software and policies for establishing the necessary competencies and skills of scientists and IT personnel. To implement the concepts developed in SciForge, a combined bottom-up / top-down approach is considered that will be implemented in parallel in different scientific domains, e.g., in earth sciences, climate research, and the life sciences. Based on the developed blueprints, a scientific software publishing platform will be iteratively implemented, tested, and evaluated, so that the platform is developed continuously on the basis of the experience and results gained. The platform services will be extended one by one according to the requirements of the communities. In this way, the platform for the publication of scientific software can be improved and stabilized incrementally as a tool with software-, science-, publishing-, and user-oriented features.

  4. Effect of different types of prosthetic platforms on stress-distribution in dental implant-supported prostheses.

    PubMed

    Minatel, Lurian; Verri, Fellippo Ramos; Kudo, Guilherme Abu Halawa; de Faria Almeida, Daniel Augusto; de Souza Batista, Victor Eduardo; Lemos, Cleidiel Aparecido Araujo; Pellizzer, Eduardo Piza; Santiago, Joel Ferreira

    2017-02-01

    A biomechanical analysis of different types of implant connections is relevant to clinical practice because it may impact the longevity of the rehabilitation treatment. Therefore, the objective of this study was to evaluate Morse taper connections and the stress distribution of structures associated with the platform-switching (PSW) concept, by obtaining data on the biomechanical behavior of the main structures in relation to the dental implant using 3-dimensional finite element methodology. Four models were simulated (each containing a single prosthesis over the implant) in the molar region, with the following specifications: M1 and M2 are external hexagonal implants on regular platforms; M3 is an external hexagonal implant using the PSW concept; and M4 is a Morse taper implant. The modeling process involved the use of images from InVesalius CT (computed tomography) processing software, which were refined using Rhinoceros 4.0 and SolidWorks 2011 CAD software. The models were then exported into the finite element program (FEMAP 11.0) to configure the meshes. The models were processed using NeiNastran software. The main results are that M1 (regular 4 mm diameter) had the highest stress concentration area and highest microstrain concentration for bone tissue, dental implants, and the retaining screw (P<0.05). Using the PSW concept increases the area of stress concentration in the retaining screw (P<0.05) more than in the regular-platform implant. It was concluded that the increase in diameter is beneficial for stress distribution and that the PSW concept produced higher stress concentrations in the retaining screw and the crown compared with the regular-platform implant. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. FTOOLS: A general package of software to manipulate FITS files

    NASA Astrophysics Data System (ADS)

    Blackburn, J. K.; Shaw, R. A.; Payne, H. E.; Hayes, J. J. E.; Heasarc

    1999-12-01

    FTOOLS, a highly modular collection of utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. The FTOOLS package contains many utility programs that perform modular tasks on any FITS image or table, as well as higher-level analysis programs designed specifically for data from current and past high-energy astrophysics missions. The utility programs for FITS tables are especially rich and powerful, and provide functions for presenting file contents, extracting specific rows or columns, appending or merging tables, binning values in a column, or selecting subsets of rows based on a boolean expression. Individual FTOOLS programs can easily be chained together in scripts to achieve more complex operations such as the generation and display of spectra or light curves. FTOOLS development began in 1991 and has produced the main set of data analysis software for the current ASCA and RXTE space missions and for other archival sets of X-ray and gamma-ray data. The FTOOLS software package is supported on most UNIX platforms and on Windows machines. The user interface is controlled by standard parameter files that are very similar to those used by IRAF. The package is self-documenting through a stand-alone help task called fhelp. Software is written in ANSI C and FORTRAN to provide portability across most computer systems. The data-format dependencies between hardware platforms are isolated through the FITSIO library package.
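
    The kind of row and column operations FTOOLS exposes as command-line tasks can be sketched in Python with astropy.io.fits (not part of FTOOLS itself); the file and column names are placeholders.

      # FITS table inspection, row selection, and column extraction with astropy.
      from astropy.io import fits

      with fits.open("events.fits") as hdul:       # placeholder file name
          hdul.info()                              # list HDUs, similar in spirit to fstruct
          table = hdul[1].data                     # first binary-table extension
          bright = table[table["RATE"] > 10.0]     # boolean row selection, cf. fselect
          print(bright["TIME"][:5], bright["RATE"][:5])   # column extraction, cf. fdump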

  6. AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves

    NASA Astrophysics Data System (ADS)

    Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.

    2017-02-01

    ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
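
    The time-series differential photometry AIJ performs can be sketched outside of Java with astropy and photutils; the file name, star positions, and aperture radius below are placeholder assumptions, not AIJ's defaults.

      # Differential aperture photometry sketch for one frame (photutils, not AIJ).
      import numpy as np
      from astropy.io import fits
      from photutils.aperture import CircularAperture, aperture_photometry

      image = fits.getdata("frame_0001.fits")            # placeholder frame
      positions = [(512.3, 498.7), (230.1, 410.9)]       # target star, comparison star
      apertures = CircularAperture(positions, r=8.0)
      phot = aperture_photometry(image, apertures)

      target_flux, comp_flux = phot["aperture_sum"]
      print("differential magnitude:", -2.5 * np.log10(target_flux / comp_flux))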

  7. FISH Finder: a high-throughput tool for analyzing FISH images

    PubMed Central

    Shirley, James W.; Ty, Sereyvathana; Takebayashi, Shin-ichiro; Liu, Xiuwen; Gilbert, David M.

    2011-01-01

    Motivation: Fluorescence in situ hybridization (FISH) is used to study the organization and positioning of specific DNA sequences within the cell nucleus. Analyzing the data from FISH images is a tedious process that invokes an element of subjectivity. Automated FISH image analysis offers savings in time as well as the benefit of objective data analysis. While several FISH image analysis software tools have been developed, they often use a threshold-based segmentation algorithm for nucleus segmentation. As fluorescence signal intensities can vary significantly from experiment to experiment, from cell to cell, and within a cell, threshold-based segmentation is inflexible and often insufficient for automatic image analysis, leading to additional manual segmentation and potential subjective bias. To overcome these problems, we developed a graphical software tool called FISH Finder to automatically analyze FISH images that vary significantly. By posing nucleus segmentation as a classification problem, a compound Bayesian classifier is employed so that contextual information is utilized, resulting in reliable classification and boundary extraction. This makes it possible to analyze FISH images efficiently and objectively without adjustment of input parameters. Additionally, FISH Finder was designed to analyze the distances between differentially stained FISH probes. Availability: FISH Finder is a standalone MATLAB application and platform-independent software. The program is freely available from: http://code.google.com/p/fishfinder/downloads/list Contact: gilbert@bio.fsu.edu PMID:21310746
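
    The threshold-based segmentation that FISH Finder improves upon can be sketched with scikit-image (FISH Finder's own compound Bayesian classifier is not reproduced here); the nucleus channel below is a random placeholder.

      # Baseline threshold-based nucleus segmentation (scikit-image), for contrast
      # with the classifier-based approach described above.
      import numpy as np
      from skimage import filters, measure, morphology

      dapi = np.random.rand(512, 512)            # placeholder nucleus channel
      threshold = filters.threshold_otsu(dapi)   # single global intensity threshold
      mask = morphology.remove_small_objects(dapi > threshold, min_size=200)
      nuclei = measure.label(mask)
      print("nuclei found:", int(nuclei.max()))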

  8. Next Generation Image-Based Phenotyping of Root System Architecture

    NASA Astrophysics Data System (ADS)

    Davis, T. W.; Shaw, N. M.; Cheng, H.; Larson, B. G.; Craft, E. J.; Shaff, J. E.; Schneider, D. J.; Piñeros, M. A.; Kochian, L. V.

    2016-12-01

    The development of the Plant Root Imaging and Data Acquisition (PRIDA) hardware/software system enables researchers to collect digital images, along with all the relevant experimental details, of a range of hydroponically grown agricultural crop roots for 2D and 3D trait analysis. Previous efforts of image-based root phenotyping focused on young cereals, such as rice; however, there is a growing need to measure both older and larger root systems, such as those of maize and sorghum, to improve our understanding of the underlying genetics that control favorable rooting traits for plant breeding programs to combat the agricultural risks presented by climate change. Therefore, a larger imaging apparatus has been prototyped for capturing 3D root architecture with an adaptive control system and innovative plant root growth media that retains three-dimensional root architectural features. New publicly available multi-platform software has been released with considerations for both high throughput (e.g., 3D imaging of a single root system in under ten minutes) and high portability (e.g., support for the Raspberry Pi computer). The software features unified data collection, management, exploration and preservation for continued trait and genetics analysis of root system architecture. The new system makes data acquisition efficient and includes features that address the needs of researchers and technicians, such as reduced imaging time, semi-automated camera calibration with uncertainty characterization, and safe storage of the critical experimental data.

  9. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized, and functionally related images). The visual texture of the images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.
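
    The quantity being volume-rendered, the Laplacian of a scalar density, can be sketched on a regular grid with central differences; the Gaussian "density" below is synthetic and this is not the authors' feature-detection software.

      # Central-difference Laplacian of a synthetic density on a 3D grid.
      import numpy as np

      grid = np.linspace(-3.0, 3.0, 64)
      x, y, z = np.meshgrid(grid, grid, grid, indexing="ij")
      rho = np.exp(-(x**2 + y**2 + z**2))        # placeholder charge-density-like field
      h = grid[1] - grid[0]

      laplacian = (np.roll(rho, 1, 0) + np.roll(rho, -1, 0) +
                   np.roll(rho, 1, 1) + np.roll(rho, -1, 1) +
                   np.roll(rho, 1, 2) + np.roll(rho, -1, 2) - 6 * rho) / h**2
      print("Laplacian range:", float(laplacian.min()), "to", float(laplacian.max()))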

  10. Investigating Public Perception of Occupational Therapy: An Environmental Scan of Three Media Outlets.

    PubMed

    Walsh, Wendy E

    Using a phenomenological approach, this study investigated visibility and perception of the profession of occupational therapy in three media outlets. Content analysis occurred on LexisNexis Academic (LNA), Google Images, and Twitter platforms. Analysis of LNA identified the prevalence of articles about occupational therapy in domestic newspapers and similar media avenues, MaxQDA qualitative software coded Google Images from a search on occupational therapy, and AnalyzeWords evaluated Twitter feeds of four health care professions for presence and tone in a social media context. Results indicate that although occupational therapy is 100 years old, its presence in news and online platforms could be stronger. This study suggests that a clear professional identity for occupational therapy practitioners must be strategically communicated through academic and social platforms. Such advocacy promotes the profession, meets the next iteration of occupational therapy's professional vision, and allows occupational therapy to remain a prominent and formidable stakeholder in today's health care marketplace. Copyright © 2018 by the American Occupational Therapy Association, Inc.

  11. Design of verification platform for wireless vision sensor networks

    NASA Astrophysics Data System (ADS)

    Ye, Juanjuan; Shang, Fei; Yu, Chuang

    2017-08-01

    At present, the majority of research on wireless vision sensor networks (WVSNs) still remains at the software simulation stage, and very few verification platforms for WVSNs are available for use. This situation seriously restricts the transformation of WVSNs from theoretical research to practical application, so it is necessary to study the construction of a verification platform for WVSNs. This paper combines a wireless transceiver module, a visual information acquisition module, and a power acquisition module to design a high-performance wireless vision sensor node built around an ARM11 microprocessor, and selects AODV as the routing protocol to set up a verification platform for WVSNs called AdvanWorks. Experiments show that AdvanWorks can successfully achieve image acquisition, coding, and wireless transmission, and can obtain the effective distance parameters between nodes, which lays a good foundation for follow-up applications of WVSNs.

  12. plas.io: Open Source, Browser-based WebGL Point Cloud Visualization

    NASA Astrophysics Data System (ADS)

    Butler, H.; Finnegan, D. C.; Gadomski, P. J.; Verma, U. K.

    2014-12-01

    Point cloud data, in the form of Light Detection and Ranging (LiDAR), RADAR, or semi-global matching (SGM) image processing, are rapidly becoming a foundational data type for quantifying and characterizing geospatial processes. Visualization of these data, due to their overall volume and irregular arrangement, is often difficult. Technological advancements in web browsers, in the form of WebGL and HTML5, have made interactivity and visualization capabilities that once existed only in desktop software ubiquitously available. plas.io is an open-source JavaScript application that provides point cloud visualization, exploitation, and compression features in a web-browser platform, reducing the reliance on client-based desktop applications. The wide reach of WebGL and browser-based technologies means plas.io's capabilities can be delivered to a diverse list of devices -- from phones and tablets to high-end workstations -- with very little custom software development. These properties make plas.io an ideal open platform for researchers and software developers to communicate visualizations of complex and rich point cloud data to devices to which everyone has easy access.

  13. Managing a Real-Time Embedded Linux Platform with Buildroot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diamond, J.; Martin, K.

    2015-01-01

    Developers of real-time embedded software often need to build the operating system, kernel, tools, and supporting applications from source to work with the differences in their hardware configuration. The first attempts to introduce Linux-based real-time embedded systems into the Fermilab accelerator controls system used this approach, but it was found to be time-consuming, difficult to maintain, and difficult to adapt to different hardware configurations. Buildroot is an open source build system with a menu-driven configuration tool (similar to the Linux kernel build system) that automates this process. A customized Buildroot [1] system has been developed for use in the Fermilab accelerator controls system that includes several hardware configuration profiles (including Intel, ARM and PowerPC) and packages for Fermilab support software. A bootable image file is produced containing the Linux kernel, shell, and supporting software suite that varies from 3 to 20 megabytes, ideal for network booting. The result is a platform that is easier to maintain and deploy in diverse hardware configurations.

  14. [Porting Radiotherapy Software of Varian to Cloud Platform].

    PubMed

    Zou, Lian; Zhang, Weisha; Liu, Xiangxiang; Xie, Zhao; Xie, Yaoqin

    2017-09-30

    To develop a low-cost private cloud platform for radiotherapy software. First, a private cloud platform based on OpenStack and virtual GPU hardware was built. Then, all the Varian radiotherapy software modules were installed on virtual machines on the private cloud platform and the corresponding functions were configured. Finally, the software on the cloud could be accessed through a virtual desktop client. The function test results of the cloud workstation show that a cloud workstation is equivalent to an isolated physical workstation, and any client on the LAN can use the cloud workstation smoothly. The cloud platform transplantation in this study is economical and practical. The project not only improves the utilization rate of radiotherapy software, but also makes it possible for cloud computing technology to expand its applications to the field of radiation oncology.

  15. Inselect: Automating the Digitization of Natural History Collections

    PubMed Central

    Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.

    2015-01-01

    The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208

  16. Inselect: Automating the Digitization of Natural History Collections.

    PubMed

    Hudson, Lawrence N; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W; van der Walt, Stéfan; Smith, Vincent S

    2015-01-01

    The world's natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect-a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization.

  17. Adding Value to the Network: Exploring the Software as a Service and Platform as a Service Models for Mobile Operators

    NASA Astrophysics Data System (ADS)

    Gonçalves, Vânia

    The environments of software development and software provision are shifting to Web-based platforms supported by Platform/Software as a Service (PaaS/SaaS) models. This paper will make the case that there is equally an opportunity for mobile operators to identify additional sources of revenue by exposing network functionalities through Web-based service platforms. By elaborating on the concepts, benefits and risks of SaaS and PaaS, several factors that should be taken into consideration in applying these models to the telecom world are delineated.

  18. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study

    PubMed Central

    2018-01-01

    Background: Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective: The aim of this study was to fulfill the neuroimaging community's need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Methods: Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. Results: All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer's Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the fields of Alzheimer's disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting-state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions: To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer's Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer's disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962

  19. Performance Evaluation of Cots Uav for Architectural Heritage Documentation. a Test on S.GIULIANO Chapel in Savigliano (cn) - Italy

    NASA Astrophysics Data System (ADS)

    Chiabrando, F.; Teppati Losè, L.

    2017-08-01

    More and more, the use of UAV platforms is becoming a standard for image or video acquisition from an aerial point of view. In response to the enormous growth in demand, the production of COTS (Commercial Off The Shelf) platforms and systems is increasing to meet market requirements. In recent years, different platforms have been developed and sold at low to medium cost, and nowadays the offer of interesting systems is very large. One of the most important companies producing UAVs and other imaging systems is DJI (Dà-Jiāng Innovations Science and Technology Co., Ltd), founded in 2006 and headquartered in Shenzhen, China. The platforms realized by the company range from low-cost systems up to professional equipment tailored for the high-resolution acquisitions needed for film-making purposes. Given the characteristics of the latest low-cost DJI platforms, their onboard sensors, and the performance of modern photogrammetric software based on Structure from Motion (SfM) algorithms, these systems are nowadays employed for performing 3D surveys from the small up to the large scale. The present paper aims to test the image quality, flight operations, flight planning, and accuracy of the final products of three COTS platforms realized by DJI: the Mavic Pro, the Phantom 4, and the Phantom 4 PRO. The test site chosen was the Chapel of San Giuliano in the municipality of Savigliano (Cuneo, Italy), a small church with two aisles dating back to the early eleventh century.

  20. Reproducible Bioconductor workflows using browser-based interactive notebooks and containers.

    PubMed

    Almugbel, Reem; Hung, Ling-Hong; Hu, Jiaming; Almutairy, Abeer; Ortogero, Nicole; Tamta, Yashaswi; Yeung, Ka Yee

    2018-01-01

    Bioinformatics publications typically include complex software workflows that are difficult to describe in a manuscript. We describe and demonstrate the use of interactive software notebooks to document and distribute bioinformatics research. We provide a user-friendly tool, BiocImageBuilder, that allows users to easily distribute their bioinformatics protocols through interactive notebooks uploaded to either a GitHub repository or a private server. We present four different interactive Jupyter notebooks using R and Bioconductor workflows to infer differential gene expression, analyze cross-platform datasets, process RNA-seq data and KinomeScan data. These interactive notebooks are available on GitHub. The analytical results can be viewed in a browser. Most importantly, the software contents can be executed and modified. This is accomplished using Binder, which runs the notebook inside software containers, thus avoiding the need to install any software and ensuring reproducibility. All the notebooks were produced using custom files generated by BiocImageBuilder. BiocImageBuilder facilitates the publication of workflows with a point-and-click user interface. We demonstrate that interactive notebooks can be used to disseminate a wide range of bioinformatics analyses. The use of software containers to mirror the original software environment ensures reproducibility of results. Parameters and code can be dynamically modified, allowing for robust verification of published results and encouraging rapid adoption of new methods. Given the increasing complexity of bioinformatics workflows, we anticipate that these interactive software notebooks will become as necessary for documenting software methods as traditional laboratory notebooks have been for documenting bench protocols, and as ubiquitous. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  1. NeuroPG: open source software for optical pattern generation and data acquisition

    PubMed Central

    Avants, Benjamin W.; Murphy, Daniel B.; Dapello, Joel A.; Robinson, Jacob T.

    2015-01-01

    Patterned illumination using a digital micromirror device (DMD) is a powerful tool for optogenetics. Compared to a scanning laser, DMDs are inexpensive and can easily create complex illumination patterns. Combining these complex spatiotemporal illumination patterns with optogenetics allows DMD-equipped microscopes to probe neural circuits by selectively manipulating the activity of many individual cells or many subcellular regions at the same time. To use DMDs to study neural activity, scientists must develop specialized software to coordinate optical stimulation patterns with the acquisition of electrophysiological and fluorescence data. To meet this growing need, we have developed open source optical pattern generation software for neuroscience—NeuroPG—that combines DMD control, sample visualization, and data acquisition in one application. Built on a MATLAB platform, NeuroPG can also process, analyze, and visualize data. The software is designed specifically for the Mightex Polygon400; however, as an open source package, NeuroPG can be modified to incorporate any data acquisition, imaging, or illumination equipment that is compatible with MATLAB’s Data Acquisition and Image Acquisition toolboxes. PMID:25784873

  2. A Supporting Platform for Semi-Automatic Hyoid Bone Tracking and Parameter Extraction from Videofluoroscopic Images for the Diagnosis of Dysphagia Patients.

    PubMed

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Paik, Nam Jong; Ryu, Ju Seok; Kim, In Young

    2017-04-01

    Conventional kinematic analysis of videofluoroscopic (VF) swallowing image, most popular for dysphagia diagnosis, requires time-consuming and repetitive manual extraction of diagnostic information from multiple images representing one swallowing period, which results in a heavy work load for clinicians and excessive hospital visits for patients to receive counseling and prescriptions. In this study, a software platform was developed that can assist in the VF diagnosis of dysphagia by automatically extracting a two-dimensional moving trajectory of the hyoid bone as well as 11 temporal and kinematic parameters. Fifty VF swallowing videos containing both non-mandible-overlapped and mandible-overlapped cases from eight patients with dysphagia of various etiologies and 19 videos from ten healthy controls were utilized for performance verification. Percent errors of hyoid bone tracking were 1.7 ± 2.1% for non-overlapped images and 4.2 ± 4.8% for overlapped images. Correlation coefficients between manually extracted and automatically extracted moving trajectories of the hyoid bone were 0.986 ± 0.017 (X-axis) and 0.992 ± 0.006 (Y-axis) for non-overlapped images, and 0.988 ± 0.009 (X-axis) and 0.991 ± 0.006 (Y-axis) for overlapped images. Based on the experimental results, we believe that the proposed platform has the potential to improve the satisfaction of both clinicians and patients with dysphagia.
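
    The hyoid-bone trajectory extraction described above can be pictured as frame-to-frame landmark tracking. The sketch below shows one generic way to do this with OpenCV template matching; it illustrates the idea only, it is not the authors' algorithm, and track_point is a hypothetical helper.

```python
# Minimal sketch of semi-automatic point tracking across video frames by
# normalized cross-correlation template matching (illustrative only).
import cv2
import numpy as np

def track_point(frames, start_xy, patch=21, search=60):
    """Track a manually seeded landmark (e.g., the hyoid bone) through frames.

    frames   : list of grayscale images (uint8 arrays)
    start_xy : (x, y) seed selected by the user on the first frame
    patch    : edge length of the template cut around the landmark
    search   : edge length of the search window in the next frame
    """
    h = patch // 2
    x, y = start_xy
    trajectory = [(x, y)]
    template = frames[0][y - h:y + h + 1, x - h:x + h + 1]
    for frame in frames[1:]:
        s = search // 2
        y0, y1 = max(y - s, 0), min(y + s, frame.shape[0])
        x0, x1 = max(x - s, 0), min(x + s, frame.shape[1])
        window = frame[y0:y1, x0:x1]
        score = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
        _, _, _, max_loc = cv2.minMaxLoc(score)
        x = x0 + max_loc[0] + h          # convert back to frame coordinates
        y = y0 + max_loc[1] + h
        trajectory.append((x, y))
        template = frame[y - h:y + h + 1, x - h:x + h + 1]  # update template
    return np.array(trajectory)
```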

  3. Chapter 3: Tabular Data and Graphical Images in Support of the U.S. Geological Survey National Oil and Gas Assessment - Western Gulf Province, Smackover-Austin-Eagle Ford Composite Total Petroleum System (504702)

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. Computers and software may import the data without transcription from the Portable Document Format files (.pdf files) of the text by the reader. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  4. Open source pipeline for ESPaDOnS reduction and analysis

    NASA Astrophysics Data System (ADS)

    Martioli, Eder; Teeple, Doug; Manset, Nadine; Devost, Daniel; Withington, Kanoa; Venne, Andre; Tannock, Megan

    2012-09-01

    OPERA is a Canada-France-Hawaii Telescope (CFHT) open source collaborative software project currently under development for an ESPaDOnS echelle spectro-polarimetric image reduction pipeline. OPERA is designed to be fully automated, performing calibrations and reduction, producing one-dimensional intensity and polarimetric spectra. The calibrations are performed on two-dimensional images. Spectra are extracted using an optimal extraction algorithm. While primarily designed for CFHT ESPaDOnS data, the pipeline is being written to be extensible to other echelle spectrographs. A primary design goal is to make use of fast, modern object-oriented technologies. Processing is controlled by a harness, which manages a set of processing modules that make use of a collection of native OPERA software libraries and standard external software libraries. The harness and modules are completely parametrized by site configuration and instrument parameters. The software is open-ended, permitting users of OPERA to extend the pipeline capabilities. All these features have been designed to provide a portable infrastructure that facilitates collaborative development, code re-usability and extensibility. OPERA is free software with support for both GNU/Linux and MacOSX platforms. The pipeline is hosted on SourceForge under the name "opera-pipeline".
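
    The harness-and-modules design described above can be illustrated with a small, language-agnostic sketch. The Python below is purely conceptual (OPERA itself is not a Python project, and the module names are hypothetical); it only shows how a harness parametrized by site and instrument configuration can drive a chain of processing modules.

```python
# Conceptual sketch of a "harness drives parametrized modules" pipeline design.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Module:
    name: str
    run: Callable[[Dict], Dict]          # consumes and produces named products
    params: Dict = field(default_factory=dict)

class Harness:
    """Runs an ordered list of modules, passing a shared product dictionary."""
    def __init__(self, site_config: Dict, instrument_config: Dict):
        self.config = {**site_config, **instrument_config}
        self.modules: List[Module] = []

    def add(self, module: Module):
        self.modules.append(module)

    def execute(self) -> Dict:
        products = {"config": self.config}
        for m in self.modules:
            print(f"running {m.name}")
            products.update(m.run({**products, **m.params}))
        return products

# Example wiring (hypothetical steps for an echelle reduction):
harness = Harness({"site": "CFHT"}, {"instrument": "ESPaDOnS"})
harness.add(Module("bias", lambda p: {"master_bias": "..."}))
harness.add(Module("order_trace", lambda p: {"order_geometry": "..."}))
harness.add(Module("optimal_extraction", lambda p: {"spectrum_1d": "..."}))
products = harness.execute()
```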

  5. Creating Math Videos: Comparing Platforms and Software

    ERIC Educational Resources Information Center

    Abbasian, Reza O.; Sieben, John T.

    2016-01-01

    In this paper we present a short tutorial on creating mini-videos using two platforms--PCs and tablets such as iPads--and software packages that work with these devices. Specifically, we describe the step-by-step process of creating and editing videos using a Wacom Intuos pen-tablet plus Camtasia software on a PC platform and using the software…

  6. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales.

    PubMed

    Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.
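
    The core quantity such a tool reports for each ROI is the relative fluorescence change over time. The NumPy sketch below shows that underlying computation for user-defined masks; it is not SamuROI's API, and the synthetic movie exists only to make the example runnable.

```python
# Generic sketch of extracting dF/F0 traces for user-defined ROIs from an
# image time series (not SamuROI code, just the underlying idea).
import numpy as np

def roi_dff_traces(stack, roi_masks, f0_frames=30):
    """stack: (T, H, W) fluorescence movie; roi_masks: list of boolean (H, W) masks."""
    traces = []
    for mask in roi_masks:
        f = stack[:, mask].mean(axis=1)          # mean fluorescence per frame
        f0 = f[:f0_frames].mean()                # baseline from the first frames
        traces.append((f - f0) / f0)             # dF/F0
    return np.vstack(traces)                     # (n_rois, T)

# Example with synthetic data:
rng = np.random.default_rng(0)
movie = rng.normal(100.0, 1.0, size=(200, 64, 64))
movie[80:120, 10:20, 10:20] += 25.0              # simulated transient in one ROI
roi = np.zeros((64, 64), dtype=bool); roi[10:20, 10:20] = True
print(roi_dff_traces(movie, [roi]).max())        # peak dF/F0 of the transient
```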

  7. SamuROI, a Python-Based Software Tool for Visualization and Analysis of Dynamic Time Series Imaging at Multiple Spatial Scales

    PubMed Central

    Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.

    2017-01-01

    The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482

  8. SpecTracer: A Python-Based Interactive Solution for Echelle Spectra Reduction

    NASA Astrophysics Data System (ADS)

    Romero Matamala, Oscar Fernando; Petit, Véronique; Caballero-Nieves, Saida Maria

    2018-01-01

    SpecTracer is a newly developed interactive solution to reduce cross dispersed echelle spectra. The use of widgets saves the user the steep learning curves of currently available reduction software. SpecTracer uses well established image processing techniques based on IRAF to successfully extract the stellar spectra. Comparisons with other reduction software, like IRAF, show comparable results, with the added advantages of ease of use, platform independence and portability. This tool can obtain meaningful scientific data and also serve as a training tool, especially for undergraduates doing research, in the procedure for spectroscopic analysis.
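
    At its simplest, extracting a cross-dispersed order reduces to summing flux in a narrow band around a traced order centre. The sketch below illustrates that geometry with a plain boxcar sum; real packages such as SpecTracer or IRAF use optimal extraction and careful calibration, so this is an illustration only.

```python
# Illustrative boxcar extraction of one echelle order along a polynomial trace.
import numpy as np

def extract_order(image, trace_coeffs, half_width=4):
    """Sum flux in a band of +/- half_width pixels around a traced order.

    image        : 2D array with dispersion along axis 1 (columns)
    trace_coeffs : polynomial coefficients giving row = p(column)
    """
    ncols = image.shape[1]
    cols = np.arange(ncols)
    center = np.polyval(trace_coeffs, cols)
    spectrum = np.empty(ncols)
    for x in cols:
        y0 = int(round(center[x]))
        spectrum[x] = image[y0 - half_width:y0 + half_width + 1, x].sum()
    return spectrum
```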

  9. Evaluation of DICOM viewer software for workflow integration in clinical trials

    NASA Astrophysics Data System (ADS)

    Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.

    2015-03-01

    The digital imaging and communications in medicine (DICOM) protocol is nowadays the leading standard for capture, exchange and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, different from patient's care in hospital, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Due to missing integration, even just the visualization of patient's image data in electronic case report forms (eCRFs) is impossible. Four increasing levels for integration of DICOM components into EDCS are conceivable, raising functionality but also demands on interfaces with each level. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Concerning general, integration, and viewing requirements the survey involves the criteria (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis. Covering most use cases, an EDCS and PACS interconnection with Weasis is suggested.

  10. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
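
    The processing chain named above (beamforming, envelope detection, log compression) can be sketched compactly in NumPy. The following is a simplified, CPU-only illustration with placeholder parameters, not SUPRA's GPU implementation.

```python
# Minimal sketch of software-defined ultrasound steps: delay-and-sum
# beamforming of raw channel data, envelope detection, log compression.
import numpy as np
from scipy.signal import hilbert

def das_beamform_line(channel_data, element_x, fs, c=1540.0, x_focus=0.0):
    """channel_data: (n_elements, n_samples) RF data for one transmit event."""
    n_elem, n_samp = channel_data.shape
    t = np.arange(n_samp) / fs
    depth = c * t / 2.0                                   # two-way travel mapping
    line = np.zeros(n_samp)
    for e in range(n_elem):
        # receive path from each depth point back to element e
        rx_dist = np.sqrt(depth**2 + (element_x[e] - x_focus) ** 2)
        delay = (depth + rx_dist) / c                     # transmit + receive time
        line += np.interp(delay, t, channel_data[e])      # delay-and-sum
    env = np.abs(hilbert(line))                           # envelope detection
    env /= env.max() + 1e-12
    return 20.0 * np.log10(env + 1e-6)                    # log-compressed dB line

# Example: 64 elements at 0.3 mm pitch, 40 MHz sampling of synthetic noise.
rng = np.random.default_rng(1)
rf = rng.normal(size=(64, 2048))
x = (np.arange(64) - 31.5) * 0.3e-3
bmode_line = das_beamform_line(rf, x, fs=40e6)
```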

  11. Imaging of karsts on buried carbonate platform in Central Luconia Province, Malaysia

    NASA Astrophysics Data System (ADS)

    Nur Fathiyah Jamaludin, Siti; Mubin, Mukhriz; Latiff, Abdul Halim Abdul

    2017-10-01

    Imaging of carbonate rocks in the subsurface through seismic methods is always challenging due to their heterogeneity and fast velocity compared to other rock types. The existence of karst features on the carbonate rocks makes it even more complicated to interpret the reflectors. Utilization of modern interpretation software such as PETREL and GeoTeric® to image the karst morphology makes it possible to model the karst network within the buried carbonate platform used in this study. Using a combination of different seismic attributes such as Variance, Conformance, Continuity, Amplitude, Frequency and Edge attributes, we are able to image the karst features present in the proven gas field in Central Luconia Province, Malaysia. The mentioned attributes are excellent for visualizing and imaging stratigraphic features, based on differences in acoustic impedance, as well as structural features, which include karst. 2D and 3D karst models were developed to give a better understanding of the characteristics of the identified karsts. From the models, it is found that the karsts are concentrated in the top part of the carbonate reservoir (epikarst) and in the middle layer, with some of them becoming extensive and creating karst networks, either laterally or vertically. Most of the vertical karst networks are related to the existence of faults that displaced all the horizons in the carbonate platform.
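
    Of the attributes listed, the variance attribute is the simplest to illustrate: a local variance computed in a small moving window highlights discontinuities such as faults and karst edges. The sketch below shows that principle only; commercial packages implement dip-steered and multi-trace variants.

```python
# Hedged sketch of a local variance attribute over a 3D seismic cube.
import numpy as np
from scipy.ndimage import uniform_filter

def variance_attribute(cube, window=(3, 3, 9)):
    """cube: 3D array (inline, crossline, time/depth samples)."""
    mean = uniform_filter(cube, size=window)
    mean_sq = uniform_filter(cube * cube, size=window)
    return np.clip(mean_sq - mean * mean, 0.0, None)   # local variance volume
```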

  12. Space Images for NASA JPL Android Version

    NASA Technical Reports Server (NTRS)

    Nelson, Jon D.; Gutheinz, Sandy C.; Strom, Joshua R.; Arca, Jeremy M.; Perez, Martin; Boggs, Karen; Stanboli, Alice

    2013-01-01

    This software addresses the demand for easily accessible NASA JPL images and videos by providing a user friendly and simple graphical user interface that can be run via the Android platform from any location where Internet connection is available. This app is complementary to the iPhone version of the application. A backend infrastructure stores, tracks, and retrieves space images from the JPL Photojournal and Institutional Communications Web server, and catalogs the information into a streamlined rating infrastructure. This system consists of four distinguishing components: image repository, database, server-side logic, and Android mobile application. The image repository contains images from various JPL flight projects. The database stores the image information as well as the user rating. The server-side logic retrieves the image information from the database and categorizes each image for display. The Android mobile application is an interfacing delivery system that retrieves the image information from the server for each Android mobile device user. Also created is a reporting and tracking system for charting and monitoring usage. Unlike other Android mobile image applications, this system uses the latest emerging technologies to produce image listings based directly on user input. This allows for countless combinations of images returned. The backend infrastructure uses industry-standard coding and database methods, enabling future software improvement and technology updates. The flexibility of the system design framework permits multiple levels of display possibilities and provides integration capabilities. Unique features of the software include image/video retrieval from a selected set of categories, image Web links that can be shared among e-mail users, sharing to Facebook/Twitter, marking as user's favorites, and image metadata searchable for instant results.

  13. Application of the GNU Radio platform in the multistatic radar

    NASA Astrophysics Data System (ADS)

    Szlachetko, Boguslaw; Lewandowski, Andrzej

    2009-06-01

    This document presents the application of the Software Defined Radio-based platform in the multistatic radar. This platform consists of four-sensor linear antenna, Universal Software Radio Peripheral (USRP) hardware (radio frequency frontend) and GNU-Radio PC software. The paper provides information about architecture of digital signal processing performed by USRP's FPGA (digital down converting blocks) and PC host (implementation of the multichannel digital beamforming). The preliminary results of the signal recording performed by our experimental platform are presented.
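
    The multichannel digital beamforming performed on the host can be illustrated with a narrowband delay-and-sum sketch for the four-sensor linear array. This is NumPy pseudocode for the concept, not the actual GNU Radio flowgraph, and the parameters are placeholders.

```python
# Illustrative narrowband delay-and-sum beamforming for a 4-element linear array.
import numpy as np

def beamform(samples, steering_deg, spacing, wavelength):
    """samples: (4, N) complex baseband channels from the linear antenna."""
    n_ch = samples.shape[0]
    theta = np.deg2rad(steering_deg)
    # phase progression across the array for a plane wave arriving from theta
    phases = 2j * np.pi * spacing / wavelength * np.arange(n_ch) * np.sin(theta)
    weights = np.exp(-phases) / n_ch
    return weights @ samples          # one beamformed output stream

# Scanning several steering angles yields multiple simultaneous beams from the
# same recorded channel data, which is the basis of multistatic processing.
```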

  14. Development of a phenotyping platform for high throughput screening of nodal root angle in sorghum.

    PubMed

    Joshi, Dinesh C; Singh, Vijaya; Hunt, Colleen; Mace, Emma; van Oosterom, Erik; Sulman, Richard; Jordan, David; Hammer, Graeme

    2017-01-01

    In sorghum, the growth angle of nodal roots is a major component of root system architecture. It strongly influences the spatial distribution of roots of mature plants in the soil profile, which can impact drought adaptation. However, selection for nodal root angle in sorghum breeding programs has been restricted by the absence of a suitable high throughput phenotyping platform. The aim of this study was to develop a phenotyping platform for the rapid, non-destructive and digital measurement of nodal root angle of sorghum at the seedling stage. The phenotyping platform comprises 500 soil-filled root chambers (50 × 45 × 0.3 cm in size), made of transparent perspex sheets that were placed in metal tubs and covered with polycarbonate sheets. Around 3 weeks after sowing, once the first flush of nodal roots was visible, roots were imaged in situ using an imaging box that included two digital cameras that were remotely controlled by two android tablets. Free software (openGelPhoto.tcl) allowed precise measurement of nodal root angle from the digital images. The reliability and efficiency of the platform were evaluated by screening a large nested association mapping population of sorghum and a set of hybrids in six independent experimental runs that included up to 500 plants each. The platform revealed extensive genetic variation and high heritability (repeatability) for nodal root angle. High genetic correlations and consistent ranking of genotypes across experimental runs confirmed the reproducibility of the platform. This low cost, high throughput root phenotyping platform requires no sophisticated equipment, is adaptable to most glasshouse environments and is well suited to dissect the genetic control of nodal root angle of sorghum. The platform is suitable for use in sorghum breeding programs aiming to improve drought adaptation through root system architecture manipulation.
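
    The measurement itself reduces to simple trigonometry once the root base and tip have been digitized on the image. The snippet below shows that step in Python purely for illustration; the platform's own measurement tool is openGelPhoto.tcl.

```python
# Angle of a nodal root relative to the vertical, from two digitized points
# in image coordinates (y increases downwards).
import math

def root_angle_deg(base_xy, tip_xy):
    dx = tip_xy[0] - base_xy[0]
    dy = tip_xy[1] - base_xy[1]
    return math.degrees(math.atan2(abs(dx), dy))   # 0 deg = straight down

print(root_angle_deg((100, 50), (140, 130)))       # ~26.6 deg from vertical
```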

  15. Software Tools for Development on the Peregrine System | High-Performance Computing | NREL

    Science.gov Websites

    This NREL page describes software tools for building and managing software at the source code level on the Peregrine high-performance computing system. Cross-Platform Make and SCons: the "Cross-Platform Make" (CMake) package is from Kitware, and SCons is a modern software build tool based on Python.

  16. Internet-based videoconferencing and data collaboration for the imaging community.

    PubMed

    Poon, David P; Langkals, John W; Giesel, Frederik L; Knopp, Michael V; von Tengg-Kobligk, Hendrik

    2011-01-01

    Internet protocol-based digital data collaboration with videoconferencing is not yet well utilized in the imaging community. Videoconferencing, combined with proven low-cost solutions, can provide reliable functionality and speed, which will improve rapid, time-saving, and cost-effective communications, within large multifacility institutions or globally with the unlimited reach of the Internet. The aim of this project was to demonstrate the implementation of a low-cost hardware and software setup that facilitates global data collaboration using WebEx and GoToMeeting Internet protocol-based videoconferencing software. Both products' features were tested and evaluated for feasibility across 2 different Internet networks, including a video quality and recording assessment. Cross-compatibility with an Apple OS is also noted in the evaluations. Departmental experiences with WebEx pertaining to clinical trials are also described. Real-time remote presentation of dynamic data was generally consistent across platforms. A reliable and inexpensive hardware and software setup for complete Internet-based data collaboration/videoconferencing can be achieved.

  17. GABBs: Cyberinfrastructure for Self-Service Geospatial Data Exploration, Computation, and Sharing

    NASA Astrophysics Data System (ADS)

    Song, C. X.; Zhao, L.; Biehl, L. L.; Merwade, V.; Villoria, N.

    2016-12-01

    Geospatial data are present everywhere today with the proliferation of location-aware computing devices. This is especially true in the scientific community where large amounts of data are driving research and education activities in many domains. Collaboration over geospatial data, for example, in modeling, data analysis and visualization, must still overcome the barriers of specialized software and expertise among other challenges. In addressing these needs, the Geospatial data Analysis Building Blocks (GABBs) project aims at building geospatial modeling, data analysis and visualization capabilities in an open source web platform, HUBzero. Funded by NSF’s Data Infrastructure Building Blocks initiative, GABBs is creating a geospatial data architecture that integrates spatial data management, mapping and visualization, and interfaces in the HUBzero platform for scientific collaborations. The geo-rendering enabled Rappture toolkit, a generic Python mapping library, geospatial data exploration and publication tools, and an integrated online geospatial data management solution are among the software building blocks from the project. The GABBs software will be available through Amazon's AWS Marketplace VM images and as open source. Hosting services are also available to the user community. The outcome of the project will enable researchers and educators to self-manage their scientific data, rapidly create GIS-enabled tools, share geospatial data and tools on the web, and build dynamic workflows connecting data and tools, all without requiring significant software development skills, GIS expertise or IT administrative privileges. This presentation will describe the GABBs architecture, toolkits and libraries, and showcase the scientific use cases that utilize GABBs capabilities, as well as the challenges and solutions for GABBs to interoperate with other cyberinfrastructure platforms.

  18. Multispectral tissue characterization for intestinal anastomosis optimization.

    PubMed

    Cha, Jaepyeong; Shademan, Azad; Le, Hanh N D; Decker, Ryan; Kim, Peter C W; Kang, Jin U; Krieger, Axel

    2015-10-01

    Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement.

  19. Multispectral tissue characterization for intestinal anastomosis optimization

    PubMed Central

    Cha, Jaepyeong; Shademan, Azad; Le, Hanh N. D.; Decker, Ryan; Kim, Peter C. W.; Kang, Jin U.; Krieger, Axel

    2015-01-01

    Abstract. Intestinal anastomosis is a surgical procedure that restores bowel continuity after surgical resection to treat intestinal malignancy, inflammation, or obstruction. Despite the routine nature of intestinal anastomosis procedures, the rate of complications is high. Standard visual inspection cannot distinguish the tissue subsurface and small changes in spectral characteristics of the tissue, so existing tissue anastomosis techniques that rely on human vision to guide suturing could lead to problems such as bleeding and leakage from suturing sites. We present a proof-of-concept study using a portable multispectral imaging (MSI) platform for tissue characterization and preoperative surgical planning in intestinal anastomosis. The platform is composed of a fiber ring light-guided MSI system coupled with polarizers and image analysis software. The system is tested on ex vivo porcine intestine tissue, and we demonstrate the feasibility of identifying optimal regions for suture placement. PMID:26440616

  20. A novel medical image data-based multi-physics simulation platform for computational life sciences.

    PubMed

    Neufeld, Esra; Szczerba, Dominik; Chavannes, Nicolas; Kuster, Niels

    2013-04-06

    Simulating and modelling complex biological systems in computational life sciences requires specialized software tools that can perform medical image data-based modelling, jointly visualize the data and computational results, and handle large, complex, realistic and often noisy anatomical models. The required novel solvers must provide the power to model the physics, biology and physiology of living tissue within the full complexity of the human anatomy (e.g. neuronal activity, perfusion and ultrasound propagation). A multi-physics simulation platform satisfying these requirements has been developed for applications including device development and optimization, safety assessment, basic research, and treatment planning. This simulation platform consists of detailed, parametrized anatomical models, a segmentation and meshing tool, a wide range of solvers and optimizers, a framework for the rapid development of specialized and parallelized finite element method solvers, a visualization toolkit-based visualization engine, a Python scripting interface for customized applications, a coupling framework, and more. Core components are cross-platform compatible and use open formats. Several examples of applications are presented: hyperthermia cancer treatment planning, tumour growth modelling, evaluating the magneto-haemodynamic effect as a biomarker and physics-based morphing of anatomical models.

  1. How to Build a Hybrid Neurofeedback Platform Combining EEG and fMRI

    PubMed Central

    Mano, Marsel; Lécuyer, Anatole; Bannier, Elise; Perronnet, Lorraine; Noorzadeh, Saman; Barillot, Christian

    2017-01-01

    Multimodal neurofeedback estimates brain activity using information acquired with more than one neurosignal measurement technology. In this paper we describe how to set up and use a hybrid platform based on simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI), then we illustrate how to use it for conducting bimodal neurofeedback experiments. The paper is intended for those willing to build a multimodal neurofeedback system, to guide them through the different steps of the design, setup, and experimental applications, and help them choose a suitable hardware and software configuration. Furthermore, it reports practical information from bimodal neurofeedback experiments conducted in our lab. The platform presented here has a modular parallel processing architecture that promotes real-time signal processing performance and simple future addition and/or replacement of processing modules. Various unimodal and bimodal neurofeedback experiments conducted in our lab showed high performance and accuracy. Currently, the platform is able to provide neurofeedback based on electroencephalography and functional magnetic resonance imaging, but the architecture and the working principles described here are valid for any other combination of two or more real-time brain activity measurement technologies. PMID:28377691

  2. A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT.

    PubMed

    Badea, Cristian T; Hedlund, Laurence W; Johnson, G Allan

    2013-01-01

    CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging.
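
    The DSA images such a system acquires are conventionally formed by log-subtracting a contrast frame from a pre-contrast mask frame. The sketch below shows that textbook operation in NumPy; it is not part of the LabVIEW platform described above.

```python
# Generic digital subtraction angiography (DSA) computation: only the
# iodinated vessels remain after log-subtraction of contrast and mask frames.
import numpy as np

def dsa_frame(mask, contrast, eps=1.0):
    """mask, contrast: raw detector images (same geometry, float arrays)."""
    log_mask = np.log(mask + eps)
    log_contrast = np.log(contrast + eps)
    return log_mask - log_contrast     # positive where contrast agent attenuates

# Applying this frame-by-frame to a gated sequence yields the DSA time series.
```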

  3. A LabVIEW Platform for Preclinical Imaging Using Digital Subtraction Angiography and Micro-CT

    PubMed Central

    Badea, Cristian T.; Hedlund, Laurence W.; Johnson, G. Allan

    2013-01-01

    CT and digital subtraction angiography (DSA) are ubiquitous in the clinic. Their preclinical equivalents are valuable imaging methods for studying disease models and treatment. We have developed a dual source/detector X-ray imaging system that we have used for both micro-CT and DSA studies in rodents. The control of such a complex imaging system requires substantial software development for which we use the graphical language LabVIEW (National Instruments, Austin, TX, USA). This paper focuses on a LabVIEW platform that we have developed to enable anatomical and functional imaging with micro-CT and DSA. Our LabVIEW applications integrate and control all the elements of our system including a dual source/detector X-ray system, a mechanical ventilator, a physiological monitor, and a power microinjector for the vascular delivery of X-ray contrast agents. Various applications allow cardiac- and respiratory-gated acquisitions for both DSA and micro-CT studies. Our results illustrate the application of DSA for cardiopulmonary studies and vascular imaging of the liver and coronary arteries. We also show how DSA can be used for functional imaging of the kidney. Finally, the power of 4D micro-CT imaging using both prospective and retrospective gating is shown for cardiac imaging. PMID:27006920

  4. Melanie II--a third-generation software package for analysis of two-dimensional electrophoresis images: I. Features and user interface.

    PubMed

    Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F

    1997-12-01

    Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome era.

  5. Dedicated computer system AOTK for image processing and analysis of horse navicular bone

    NASA Astrophysics Data System (ADS)

    Zaborowicz, M.; Fojud, A.; Koszela, K.; Mueller, W.; Górna, K.; Okoń, P.; Piekarska-Boniecka, H.

    2017-07-01

    The aim of this research was to develop the dedicated application AOTK (Polish: Analiza Obrazu Trzeszczki Kopytowej) for image processing and analysis of the horse navicular bone. The application was built with Visual Studio 2013 on the .NET platform, and the image processing and analysis algorithms were implemented using the AForge.NET libraries. The implemented algorithms enable accurate extraction of the characteristics of navicular bones and saving of the data to external files. Modules implemented in AOTK allow calculation of distances selected by the user and a preliminary assessment of how well the structure of the examined objects is preserved. The application interface is designed to give the user the best possible view of the analyzed images.

  6. Architecture of a high-performance surgical guidance system based on C-arm cone-beam CT: software platform for technical integration and clinical translation

    NASA Astrophysics Data System (ADS)

    Uneri, Ali; Schafer, Sebastian; Mirota, Daniel; Nithiananthan, Sajendra; Otake, Yoshito; Reaungamornrat, Sureerat; Yoo, Jongheun; Stayman, J. Webster; Reh, Douglas; Gallia, Gary L.; Khanna, A. Jay; Hager, Gregory; Taylor, Russell H.; Kleinszig, Gerhard; Siewerdsen, Jeffrey H.

    2011-03-01

    Intraoperative imaging modalities are becoming more prevalent in recent years, and the need for integration of these modalities with surgical guidance is rising, creating new possibilities as well as challenges. In the context of such emerging technologies and new clinical applications, a software architecture for cone-beam CT (CBCT) guided surgery has been developed with emphasis on binding open-source surgical navigation libraries and integrating intraoperative CBCT with novel, application-specific registration and guidance technologies. The architecture design is focused on accelerating translation of task-specific technical development in a wide range of applications, including orthopaedic, head-and-neck, and thoracic surgeries. The surgical guidance system is interfaced with a prototype mobile C-arm for high-quality CBCT and, through a modular software architecture, integration of different tools and devices consistent with surgical workflow in each of these applications is realized. Specific modules are developed according to the surgical task, such as: 3D-3D rigid or deformable registration of preoperative images, surgical planning data, and up-to-date CBCT images; 3D-2D registration of planning and image data in real-time fluoroscopy and/or digitally reconstructed radiographs (DRRs); compatibility with infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; real-time "virtual fluoroscopy" computed from GPU-accelerated DRRs; and multi-modality image display. The platform aims to minimize offline data processing by exposing quantitative tools that analyze and communicate factors of geometric precision. The system was translated to preclinical phantom and cadaver studies for assessment of fiducial (FRE) and target registration error (TRE) showing sub-mm accuracy in targeting and video overlay within intraoperative CBCT. The work culminates in the development of a CBCT guidance system (reported here for the first time) that leverages the technical developments in C-arm CBCT and associated technologies for realizing a high-performance system for translation to clinical studies.
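
    One of the listed modules, the digitally reconstructed radiograph, can be illustrated in heavily simplified form: integrating attenuation through the CT volume along rays. The sketch below uses a parallel-beam sum along one axis with a nominal HU-to-attenuation conversion; the actual system performs GPU-accelerated perspective ray casting, which is not reproduced here.

```python
# Simplified parallel-beam DRR: integrate linear attenuation through a CT volume.
import numpy as np

def parallel_drr(ct_hu, axis=1, mu_water=0.02):
    """ct_hu: 3D CT volume in Hounsfield units; returns a 2D DRR (arbitrary units)."""
    mu = mu_water * (1.0 + ct_hu / 1000.0)      # HU -> linear attenuation (per mm)
    mu = np.clip(mu, 0.0, None)
    line_integral = mu.sum(axis=axis)           # integrate along the chosen ray axis
    return np.exp(-line_integral)               # transmitted intensity image
```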

  7. Image Harvest: an open-source platform for high-throughput plant image processing and analysis

    PubMed Central

    Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal

    2016-01-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
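
    A "digital trait" in this context is simply a number computed from a segmented plant image. The sketch below extracts two such traits (projected shoot area and plant height) from a boolean plant mask; it is a generic illustration, not the Image Harvest API.

```python
# Generic digital-trait extraction from a segmented plant image.
import numpy as np

def shoot_traits(plant_mask, mm_per_pixel):
    """plant_mask: boolean 2D array marking plant pixels (e.g., from color thresholding)."""
    area_mm2 = plant_mask.sum() * mm_per_pixel ** 2
    rows = np.where(plant_mask.any(axis=1))[0]
    height_mm = (rows.max() - rows.min() + 1) * mm_per_pixel if rows.size else 0.0
    return {"projected_area_mm2": float(area_mm2), "height_mm": float(height_mm)}
```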

  8. TU-AB-303-08: GPU-Based Software Platform for Efficient Image-Guided Adaptive Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, S; Robinson, A; McNutt, T

    2015-06-15

    Purpose: In this study, we develop an integrated software platform for adaptive radiation therapy (ART) that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. Methods: The proposed system consists of three key components: 1) deformable image registration (DIR), 2) automatic segmentation, and 3) dose computation/accumulation. The computationally intensive modules including DIR and dose computation have been implemented on a graphics processing unit (GPU). All required patient-specific data including the planning CT (pCT) with contours, daily cone-beam CTs, and treatment plan are automatically queried and retrieved from their own databases. To improve the accuracy of DIR between pCT and CBCTs, we use the double force demons DIR algorithm in combination with iterative CBCT intensity correction by local intensity histogram matching. Segmentation of daily CBCT is then obtained by propagating contours from the pCT. Daily dose delivered to the patient is computed on the registered pCT by a GPU-accelerated superposition/convolution algorithm. Finally, computed daily doses are accumulated to show the total delivered dose to date. Results: Since the accuracy of DIR critically affects the quality of the other processes, we first evaluated our DIR method on eight head-and-neck cancer cases and compared its performance. Normalized mutual information (NMI) and normalized cross-correlation (NCC) were computed as similarity measures, and our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. Experimental results show that our registration method is more consistent and robust than existing algorithms, and also computationally efficient. Computation time at each fraction took around one minute (30–50 seconds for registration and 15–25 seconds for dose computation). Conclusion: We developed an integrated GPU-accelerated software platform that enables accurate and efficient DIR, auto-segmentation, and dose computation, thus supporting an efficient ART workflow. This work was supported by NIH/NCI under grant R42CA137886.
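
    The two similarity measures quoted in the results, normalized cross-correlation and normalized mutual information, are easy to state concretely. The sketch below gives common definitions for images on the same grid; the NMI form used here is (H(A)+H(B))/H(A,B), and the abstract does not specify which variant the authors used.

```python
# Common definitions of NCC and NMI between two images on the same grid.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def nmi(a, b, bins=64):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = hist / hist.sum()
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))   # Shannon entropy
    return float((h(p_a) + h(p_b)) / h(p_ab))
```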

  9. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C.; Somerville, P.; Jordan, T. H.

    2013-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving SCEC researchers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Broadband Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms of a historical earthquake for which observed strong ground motion data is available. Also in validation mode, the Broadband Platform calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. During the past year, we have modified the software to enable the addition of a large number of historical events, and we are now adding validation simulation inputs and observational data for 23 historical events covering the Eastern and Western United States, Japan, Taiwan, Turkey, and Italy. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. By establishing an interface between scientific modules with a common set of input and output files, the Broadband Platform facilitates the addition of new scientific methods, which are written by earth scientists in a number of languages such as C, C++, Fortran, and Python. The Broadband Platform's modular design also supports the reuse of existing software modules as building blocks to create new scientific methods. Additionally, the Platform implements a wrapper around each scientific module, converting input and output files to and from the specific formats required (or produced) by individual scientific codes. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes the addition of 3 new simulation methods and several new data products, such as map and distance-based goodness of fit plots. Finally, as the number and complexity of scenarios simulated using the Broadband Platform increase, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.
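
    A goodness-of-fit comparison between simulated and observed ground motions is, at its simplest, a statistic over log residuals of spectral amplitudes. The sketch below shows such a generic measure; the Broadband Platform's own GOF modules compute several more elaborate, published metrics.

```python
# Generic spectral goodness-of-fit sketch: per-period bias and scatter of
# natural-log residuals between observed and simulated spectral amplitudes.
import numpy as np

def log_residual_gof(obs_spectra, sim_spectra):
    """obs_spectra, sim_spectra: (n_stations, n_periods) positive spectral amplitudes."""
    resid = np.log(obs_spectra) - np.log(sim_spectra)
    return resid.mean(axis=0), resid.std(axis=0)   # bias and scatter vs. period
```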

  10. Technical note: DIRART--A software suite for deformable image registration and adaptive radiotherapy research.

    PubMed

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A

    2011-01-01

    Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms to provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a good amount of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. © 2011 American Association of Physicists in Medicine.
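
    The "inverse consistency" property mentioned above can be checked directly: composing the forward and backward displacement fields should give approximately zero residual displacement. The 2D NumPy/SciPy sketch below illustrates that check; DIRART itself is a MATLAB toolkit, so this is only a conceptual translation.

```python
# Inverse-consistency error of a forward/backward displacement field pair (2D).
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(u_fwd, u_bwd):
    """u_fwd, u_bwd: displacement fields of shape (2, H, W) in pixel units,
    mapping fixed->moving and moving->fixed respectively."""
    H, W = u_fwd.shape[1:]
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    y_f, x_f = yy + u_fwd[0], xx + u_fwd[1]          # forward-warped positions
    # sample the backward field at the forward-warped positions
    u_b_y = map_coordinates(u_bwd[0], [y_f, x_f], order=1, mode="nearest")
    u_b_x = map_coordinates(u_bwd[1], [y_f, x_f], order=1, mode="nearest")
    ice = np.sqrt((u_fwd[0] + u_b_y) ** 2 + (u_fwd[1] + u_b_x) ** 2)
    return ice.mean(), ice.max()                     # mean and worst-case residual
```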

  11. The SCEC Broadband Platform: A Collaborative Open-Source Software Package for Strong Ground Motion Simulation and Validation

    NASA Astrophysics Data System (ADS)

    Silva, F.; Maechling, P. J.; Goulet, C. A.; Somerville, P.; Jordan, T. H.

    2014-12-01

    The Southern California Earthquake Center (SCEC) Broadband Platform is a collaborative software development project involving geoscientists, earthquake engineers, graduate students, and the SCEC Community Modeling Environment. The SCEC Broadband Platform (BBP) is open-source scientific software that can generate broadband (0-100Hz) ground motions for earthquakes, integrating complex scientific modules that implement rupture generation, low and high-frequency seismogram synthesis, non-linear site effects calculation, and visualization into a software system that supports easy on-demand computation of seismograms. The Broadband Platform operates in two primary modes: validation simulations and scenario simulations. In validation mode, the Platform runs earthquake rupture and wave propagation modeling software to calculate seismograms for a well-observed historical earthquake. Then, the BBP calculates a number of goodness of fit measurements that quantify how well the model-based broadband seismograms match the observed seismograms for a certain event. Based on these results, the Platform can be used to tune and validate different numerical modeling techniques. In scenario mode, the Broadband Platform can run simulations for hypothetical (scenario) earthquakes. In this mode, users input an earthquake description, a list of station names and locations, and a 1D velocity model for their region of interest, and the Broadband Platform software then calculates ground motions for the specified stations. Working in close collaboration with scientists and research engineers, the SCEC software development group continues to add new capabilities to the Broadband Platform and to release new versions as open-source scientific software distributions that can be compiled and run on many Linux computer systems. Our latest release includes 5 simulation methods, 7 simulation regions covering California, Japan, and Eastern North America, the ability to compare simulation results against GMPEs, and several new data products, such as map and distance-based goodness of fit plots. As the number and complexity of scenarios simulated using the Broadband Platform increases, we have added batching utilities to substantially improve support for running large-scale simulations on computing clusters.

  12. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute an array of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.

  13. Optimal processing for gel electrophoresis images: Applying Monte Carlo Tree Search in GelApp.

    PubMed

    Nguyen, Phi-Vu; Ghezal, Ali; Hsueh, Ya-Chih; Boudier, Thomas; Gan, Samuel Ken-En; Lee, Hwee Kuan

    2016-08-01

    In biomedical research, gel band size estimation in electrophoresis analysis is a routine process. To facilitate and automate this process, numerous software have been released, notably the GelApp mobile app. However, the band detection accuracy is limited due to a band detection algorithm that cannot adapt to the variations in input images. To address this, we used the Monte Carlo Tree Search with Upper Confidence Bound (MCTS-UCB) method to efficiently search for optimal image processing pipelines for the band detection task, thereby improving the segmentation algorithm. Incorporating this into GelApp, we report a significant enhancement of gel band detection accuracy by 55.9 ± 2.0% for protein polyacrylamide gels, and 35.9 ± 2.5% for DNA SYBR green agarose gels. This implementation is a proof-of-concept in demonstrating MCTS-UCB as a strategy to optimize general image segmentation. The improved version of GelApp-GelApp 2.0-is freely available on both Google Play Store (for Android platform), and Apple App Store (for iOS platform). © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
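
    The Upper Confidence Bound rule at the heart of MCTS-UCB balances exploiting operations that have produced good segmentations against exploring untried ones. The sketch below shows a UCB1-style selection over candidate pre-processing operations; the full method builds a search tree over whole pipelines, which is not reproduced here, and the operation names and reward are placeholders.

```python
# UCB1-style selection among candidate image-processing operations.
import math
import random

def ucb_select(arms, c=1.4):
    """arms: dict name -> [sum_of_rewards, n_pulls]; returns the arm to try next."""
    total = sum(n for _, n in arms.values())
    best, best_score = None, -float("inf")
    for name, (reward_sum, n) in arms.items():
        if n == 0:
            return name                          # try untested operations first
        score = reward_sum / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best, best_score = name, score
    return best

# Example: three candidate pre-processing operations for band detection.
arms = {"median_blur": [0.0, 0], "clahe": [0.0, 0], "top_hat": [0.0, 0]}
for _ in range(50):
    op = ucb_select(arms)
    reward = random.random()                     # placeholder for a Dice/F1 score
    arms[op][0] += reward
    arms[op][1] += 1
```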

  14. Chapter 6. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment-East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover interior salt basins total petroleum system (504902), Travis Peak and Hosston formations.

    USGS Publications Warehouse

    ,

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on the CD-ROM. Computers and software may import the data without transcription from the Portable Document Format files (.pdf files) of the text by the reader. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  15. Chapter 3. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--East Texas basin and Louisiana-Mississippi salt basins provinces, Jurassic Smackover Interior salt basins total petroleum system (504902), Cotton Valley group.

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2006-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on the CD-ROM. Computers and software may import the data without transcription from the Portable Document Format files (.pdf files) of the text by the reader. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).

  16. An Autonomous Gps-Denied Unmanned Vehicle Platform Based on Binocular Vision for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    Qin, M.; Wan, X.; Shao, Y. Y.; Li, S. Y.

    2018-04-01

    Vision-based navigation has become an attractive solution for autonomous navigation in planetary exploration. This paper presents our work of designing and building an autonomous vision-based GPS-denied unmanned vehicle and developing ARFM (Adaptive Robust Feature Matching)-based VO (Visual Odometry) software for its autonomous navigation. The hardware system is mainly composed of a binocular stereo camera, a pan-and-tilt unit, a master machine, and a tracked chassis. The ARFM-based VO software system contains four modules: camera calibration, ARFM-based 3D reconstruction, position and attitude calculation, and bundle adjustment (BA). Two VO experiments were carried out using both outdoor images from an open dataset and indoor images captured by our vehicle; the results demonstrate that our vision-based unmanned vehicle is able to achieve autonomous localization and has the potential for future planetary exploration.
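
    One monocular step of such a visual odometry pipeline (feature matching followed by relative pose recovery) can be sketched with OpenCV. This is only a hedged illustration: the paper's ARFM matcher and full stereo pipeline are not reproduced, and the file names and camera intrinsics in the usage comment are placeholders.

```python
# A minimal two-frame relative-pose sketch with ORB features and RANSAC,
# standing in for the adaptive robust feature matching described above.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate rotation R and unit translation t between two grayscale frames."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC rejects outlier matches before the essential matrix is accepted.
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t

# Usage (file names and intrinsics are placeholders):
# K = np.array([[718.9, 0, 607.2], [0, 718.9, 185.2], [0, 0, 1]])
# R, t = relative_pose(cv2.imread("f0.png", 0), cv2.imread("f1.png", 0), K)
```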

  17. Development of a Web-Enabled Learning Platform for Geospatial Laboratories: Improving the Undergraduate Learning Experience

    ERIC Educational Resources Information Center

    Mui, Amy B.; Nelson, Sarah; Huang, Bruce; He, Yuhong; Wilson, Kathi

    2015-01-01

    This paper describes a web-enabled learning platform providing remote access to geospatial software that extends the learning experience outside of the laboratory setting. The platform was piloted in two undergraduate courses, and includes a software server, a data server, and remote student users. The platform was designed to improve the quality…

  18. Research and Deployment a Hospital Open Software Platform for e-Health on the Grid System at VAST/IAMI

    NASA Astrophysics Data System (ADS)

    van Tuyet, Dao; Tuan, Ngo Anh; van Lang, Tran

    Grid computing has been a growing topic in recent years, attracting the attention of scientists from many fields, and many Grid systems have been built to serve their demands. Tools for developing Grid systems, such as Globus, gLite, and Unicore, continue to be developed; in particular, gLite, the Grid middleware, has been developed by the European scientific community in recent years. The constant growth of Grid technology has opened the way for new opportunities in terms of information and data exchange in a secure and collaborative context. These opportunities can be exploited to offer physicians new telemedicine services that improve their collaborative capacities. Our platform gives physicians an easy way to use a telemedicine environment to manage and share patient information (such as electronic medical records and DICOM-formatted images) between remote locations. This paper presents the gLite-based Grid infrastructure; the main components of gLite; the challenge scenario in which new applications can be developed to improve collaborative work between scientists; and the process of deploying the Hospital Open Software Platform for E-health (HOPE) on the Grid.

  19. Assessment of mass detection performance in contrast enhanced digital mammography

    NASA Astrophysics Data System (ADS)

    Carton, Ann-Katherine; de Carvalho, Pablo M.; Li, Zhijin; Dromain, Clarisse; Muller, Serge

    2015-03-01

    Using simulated CESM images, we address the detectability of contrast-agent-enhancing masses in contrast-enhanced spectral mammography (CESM), a dual-energy technique providing functional projection images of breast tissue perfusion and vascularity. First, the realism of simulated CESM images from anthropomorphic breast software phantoms generated with a software X-ray imaging platform was validated. Breast texture was characterized by power-law coefficients calculated in data sets of real clinical and simulated images. We also performed a two-alternative forced-choice (2-AFC) psychophysical experiment whereby simulated and real images were presented side-by-side to an experienced radiologist to test whether real images could be distinguished from the simulated images. It was found that texture in our simulated CESM images has a fairly realistic appearance. Next, the relative performance of human readers and previously developed mathematical observers was assessed for the detection of iodine-enhancing mass lesions containing different contrast agent concentrations. A four-alternative forced-choice (4-AFC) task was designed; the task for the model and human observers was to detect which one of the four simulated dual-energy (DE) recombined images contained an iodine-enhancing mass. Our results showed that the NPW and NPWE models largely outperform human observers. After introduction of an internal noise component, both model observers approached human performance. The CHO observer performed slightly worse than the average human observer. There is still work to be done in improving model observers as predictors of human-observer performance. Larger trials could also improve our test statistics. We hope that in the future, this framework of software breast phantoms, virtual image acquisition and processing, and mathematical observers can be beneficial for optimizing CESM imaging techniques.
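
    The decision rule of the non-prewhitening (NPW) observer mentioned above reduces to a template inner product; the NPWE variant additionally applies an eye filter, and the internal-noise version adds a random decision component. The sketch below uses synthetic images, not the study's simulated CESM data.

```python
# NPW observer in a 4-AFC trial: compute the template response for each
# alternative and choose the largest. Template and noise here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def npw_statistic(image, template):
    # NPW test statistic: inner product of the image with the expected signal.
    return float(np.sum(image * template))

y, x = np.mgrid[-16:16, -16:16]
template = np.exp(-(x**2 + y**2) / (2 * 5.0**2))        # faint Gaussian "mass"
images = [rng.normal(0.0, 1.0, template.shape) for _ in range(4)]
signal_location = 2
images[signal_location] += 0.3 * template                # signal-present alternative

scores = [npw_statistic(im, template) for im in images]
print("chosen:", int(np.argmax(scores)), "truth:", signal_location)
```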

  20. Open-source software platform for medical image segmentation applications

    NASA Astrophysics Data System (ADS)

    Namías, R.; D'Amato, J. P.; del Fresno, M.

    2017-11-01

    Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists. Moreover, their use is usually limited when detection of complex and multiple adjacent objects of interest is needed. In addition, the continually increasing volumes of medical imaging scans require more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework which allows the combination of existing explicit deformable models in an efficient and transparent way, handling simultaneously different segmentation strategies and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top layer, and the processing core filters at the bottom layer. We apply the framework to different real-case medical image segmentation scenarios on publicly available datasets, including bladder and prostate segmentation from 2D MRI and heart segmentation in 3D CT. Our experiments on these concrete problems show that this framework facilitates complex and multi-object segmentation goals while providing a fast prototyping open-source segmentation tool.
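
    As a hedged illustration of the kind of building block such a framework combines, the snippet below runs one classic explicit deformable model (a snake) on a synthetic blob with scikit-image; it is not the authors' framework or API.

```python
# A single active-contour (snake) segmentation of a synthetic bright disk.
import numpy as np
from skimage.draw import disk
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic image with one bright, smooth circular object.
img = np.zeros((200, 200))
rr, cc = disk((100, 100), 40)
img[rr, cc] = 1.0
img = gaussian(img, sigma=3)

# Initialize the contour as a circle around the object, then let it deform.
theta = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 60 * np.sin(theta), 100 + 60 * np.cos(theta)])
snake = active_contour(img, init, alpha=0.015, beta=10, gamma=0.001)
print(snake.shape)  # (200, 2): contour points hugging the disk boundary
```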

  1. Design of an automated imaging system for use in a space experiment

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Bozzolo, Nora G.; Lewis, Catherine C.; Pestak, Christopher J.

    1991-01-01

    An experiment, carried out on an orbiting platform, examines the mass transfer across gas-liquid and liquid-liquid interfaces. It employs an imaging system with real-time image analysis. The design includes optical design, imager selection and integration, positioner control, image recording, software development for processing, and interfaces to telemetry. It addresses the constraints of weight, volume, and electric power associated with placing the experiment in the Space Shuttle cargo bay. Challenging elements of the design are: imaging and recording of a 200-micron-diameter bubble with a resolution of 2 microns to serve as a primary source of data; frame rates varying from 500 frames per second to 1 frame per second, depending on the experiment phase; and providing three-dimensional information to determine the shape of the bubble.

  2. Development of a UAV system for VNIR-TIR acquisitions in precision agriculture

    NASA Astrophysics Data System (ADS)

    Misopolinos, L.; Zalidis, Ch.; Liakopoulos, V.; Stavridou, D.; Katsigiannis, P.; Alexandridis, T. K.; Zalidis, G.

    2015-06-01

    Adoption of precision agriculture techniques requires the development of specialized tools that provide spatially distributed information. Both flying platforms and airborne sensors are continuously evolving to cover the needs of plant and soil sensing at affordable costs. Due to restrictions in payload, flying platforms are usually limited to carrying a single sensor on board. The aim of this work is to present the development of a vertical take-off and landing autonomous unmanned aerial vehicle (VTOL UAV) system for the simultaneous acquisition of high resolution vertical images at the visible, near infrared (VNIR) and thermal infrared (TIR) wavelengths. A system was developed that has the ability to trigger two cameras simultaneously with a fully automated process and no pilot intervention. A commercial unmanned hexacopter UAV platform was optimized to increase reliability, ease of operation and automation. The designed system's communication platform is based on a reduced instruction set computing (RISC) processor running Linux OS with custom-developed drivers, in an efficient way while keeping cost and weight to a minimum. Special software was also developed for the automated image capture, data processing and on-board data and metadata storage. The system was tested over a kiwifruit field in northern Greece, at flying heights of 70 and 100 m above the ground. The acquired images were mosaicked and geo-corrected. Images from both flying heights were of good quality and revealed unprecedented detail within the field. The normalized difference vegetation index (NDVI) was calculated along with the thermal image in order to provide information on the accurate location of stressors and other parameters related to the crop productivity. Compared to other available sources of data, this system can provide low cost, high resolution and easily repeatable information to cover the requirements of precision agriculture.
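
    The NDVI layer mentioned above is a simple per-pixel band ratio. The sketch below assumes co-registered red and near-infrared reflectance arrays (for example, sampled from the mosaicked orthoimages); the values shown are synthetic.

```python
# NDVI = (NIR - Red) / (NIR + Red), computed per pixel with NumPy.
import numpy as np

def ndvi(nir, red, eps=1e-9):
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)   # eps avoids division by zero

nir = np.array([[0.60, 0.55], [0.20, 0.18]])  # synthetic reflectance patch
red = np.array([[0.10, 0.12], [0.15, 0.16]])
print(ndvi(nir, red))  # high over vigorous canopy, low over stressed crop or soil
```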

  3. Molecular brain imaging in the multimodality era

    PubMed Central

    Price, Julie C

    2012-01-01

    Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068

  4. A Neuroimaging Web Services Interface as a Cyber Physical System for Medical Imaging and Data Management in Brain Research: Design Study.

    PubMed

    Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek

    2018-04-26

    Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and is securely accessible through a Web interface and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of Structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with keen interest on multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. ©Gabriel Lizarraga, Chunfei Li, Mercedes Cabrerizo, Warren Barker, David A Loewenstein, Ranjan Duara, Malek Adjouadi. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 26.04.2018.

  5. Assessing the Effects of Software Platforms on Volumetric Segmentation of Glioblastoma

    PubMed Central

    Dunn, William D.; Aerts, Hugo J.W.L.; Cooper, Lee A.; Holder, Chad A.; Hwang, Scott N.; Jaffe, Carle C.; Brat, Daniel J.; Jain, Rajan; Flanders, Adam E.; Zinn, Pascal O.; Colen, Rivka R.; Gutman, David A.

    2017-01-01

    Background: Radiological assessments of biologically relevant regions in glioblastoma have been associated with genotypic characteristics, implying a potential role in personalized medicine. Here, we assess the reproducibility and association with survival of two volumetric segmentation platforms and explore how methodology could impact subsequent interpretation and analysis. Methods: Post-contrast T1- and T2-weighted FLAIR MR images of 67 TCGA patients were segmented into five distinct compartments (necrosis, contrast-enhancement, FLAIR, post contrast abnormal, and total abnormal tumor volumes) by two quantitative image segmentation platforms - 3D Slicer and a method based on Velocity AI and FSL. We investigated the internal consistency of each platform by correlation statistics, association with survival, and concordance with consensus neuroradiologist ratings using ordinal logistic regression. Results: We found high correlations between the two platforms for FLAIR, post contrast abnormal, and total abnormal tumor volumes (Spearman's r(67) = 0.952, 0.959, and 0.969 respectively). Only modest agreement was observed for necrosis and contrast-enhancement volumes (r(67) = 0.693 and 0.773 respectively), likely arising from differences in manual and automated segmentation methods of these regions by 3D Slicer and Velocity AI/FSL, respectively. Survival analysis based on AUC revealed significant predictive power of both platforms for the following volumes: contrast-enhancement, post contrast abnormal, and total abnormal tumor volumes. Finally, ordinal logistic regression demonstrated correspondence to manual ratings for several features. Conclusion: Tumor volume measurements from both volumetric platforms produced highly concordant and reproducible estimates across platforms for general features. As automated or semi-automated volumetric measurements replace manual linear or area measurements, it will become increasingly important to keep in mind that measurement differences between segmentation platforms for more detailed features could influence downstream survival or radiogenomic analyses. PMID:29600296
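
    The cross-platform agreement figures quoted above are rank correlations; the sketch below shows the calculation on made-up volume pairs, not the study's data.

```python
# Spearman correlation between the same compartment volume measured by two
# segmentation platforms; the numbers are illustrative placeholders in cm^3.
import numpy as np
from scipy.stats import spearmanr

platform_a = np.array([12.1, 30.4, 8.7, 55.2, 21.0, 40.3])  # e.g., 3D Slicer
platform_b = np.array([11.8, 29.9, 9.5, 53.8, 22.4, 39.1])  # e.g., Velocity AI/FSL

rho, p = spearmanr(platform_a, platform_b)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```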

  6. Fast segmentation and high-quality three-dimensional volume mesh creation from medical images for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jermyn, Michael; Ghadyani, Hamid; Mastanduno, Michael A.; Turner, Wes; Davis, Scott C.; Dehghani, Hamid; Pogue, Brian W.

    2013-08-01

    Multimodal approaches that combine near-infrared (NIR) and conventional imaging modalities have been shown to improve optical parameter estimation dramatically and thus represent a prevailing trend in NIR imaging. These approaches typically involve applying anatomical templates from magnetic resonance imaging/computed tomography/ultrasound images to guide the recovery of optical parameters. However, merging these data sets using current technology requires multiple software packages, substantial expertise, and a significant time commitment, and often results in unacceptably poor mesh quality for optical image reconstruction, a reality that represents a significant roadblock for translational research of multimodal NIR imaging. This work addresses these challenges directly by introducing automated Digital Imaging and Communications in Medicine (DICOM) image stack segmentation and a new one-click three-dimensional mesh generator optimized for multimodal NIR imaging, and combining these capabilities into a single software package (available for free download) with a streamlined workflow. Image processing time and mesh quality benchmarks were examined for four common multimodal NIR use-cases (breast, brain, pancreas, and small animal) and were compared to a commercial image processing package. Applying these tools resulted in a fivefold decrease in image processing time and 62% improvement in minimum mesh quality, in the absence of extra mesh postprocessing. These capabilities represent a significant step toward enabling translational multimodal NIR research for both expert and nonexpert users in an open-source platform.

  7. A novel real time imaging platform to quantify macrophage phagocytosis.

    PubMed

    Kapellos, Theodore S; Taylor, Lewis; Lee, Heyne; Cowley, Sally A; James, William S; Iqbal, Asif J; Greaves, David R

    2016-09-15

    Phagocytosis of pathogens, apoptotic cells and debris is a key feature of macrophage function in host defense and tissue homeostasis. Quantification of macrophage phagocytosis in vitro has traditionally been technically challenging. Here we report the optimization and validation of the IncuCyte ZOOM® real time imaging platform for macrophage phagocytosis based on pHrodo® pathogen bioparticles, which only fluoresce when localized in the acidic environment of the phagolysosome. Image analysis and fluorescence quantification were performed with the automated IncuCyte™ Basic Software. Titration of the bioparticle number showed that the system is more sensitive than a spectrofluorometer, as it can detect phagocytosis when using 20× less E. coli bioparticles. We exemplified the power of this real time imaging platform by studying phagocytosis of murine alveolar, bone marrow and peritoneal macrophages. We further demonstrate the ability of this platform to study modulation of the phagocytic process, as pharmacological inhibitors of phagocytosis suppressed bioparticle uptake in a concentration-dependent manner, whereas opsonins augmented phagocytosis. We also investigated the effects of macrophage polarization on E. coli phagocytosis. Bone marrow-derived macrophage (BMDM) priming with M2 stimuli, such as IL-4 and IL-10 resulted in higher engulfment of bioparticles in comparison with M1 polarization. Moreover, we demonstrated that tolerization of BMDMs with lipopolysaccharide (LPS) results in impaired E. coli bioparticle phagocytosis. This novel real time assay will enable researchers to quantify macrophage phagocytosis with a higher degree of accuracy and sensitivity and will allow investigation of limited populations of primary phagocytes in vitro. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  8. Open Marketplace for Simulation Software on the Basis of a Web Platform

    NASA Astrophysics Data System (ADS)

    Kryukov, A. P.; Demichev, A. P.

    2016-02-01

    The focus in the development of a new generation of middleware is shifting from global grid systems to building convenient and efficient web platforms for remote access to individual computing resources. A further line of their development, suggested in this work, relates not only to the quantitative increase in their number and the expansion of the scientific, engineering, and manufacturing areas in which they are used, but also to improved technology for remote deployment of application software on the resources interacting with the web platforms. Currently, services for providers of application software in the context of science-oriented web platforms are not sufficiently developed. The new application-software marketplace web platforms proposed in this work should have all the features of existing web platforms for submitting jobs to remote resources, plus specific web services for interaction on market principles between providers and consumers of application packages. The suggested approach will be validated using the example of simulation applications in the field of nonlinear optics.

  9. Updates to FuncLab, a Matlab based GUI for handling receiver functions

    NASA Astrophysics Data System (ADS)

    Porritt, Robert W.; Miller, Meghan S.

    2018-02-01

    Receiver functions are a versatile tool commonly used in seismic imaging. Depending on how they are processed, they can be used to image discontinuity structure within the crust or mantle or they can be inverted for seismic velocity either directly or jointly with complementary datasets. However, modern studies generally require large datasets which can be challenging to handle; therefore, FuncLab was originally written as an interactive Matlab GUI to assist in handling these large datasets. This software uses a project database to allow interactive trace editing, data visualization, H-κ stacking for crustal thickness and Vp/Vs ratio, and common conversion point stacking while minimizing computational costs. Since its initial release, significant advances have been made in the implementation of web services and changes in the underlying Matlab platform have necessitated a significant revision to the software. Here, we present revisions to the software, including new features such as data downloading via irisFetch.m, receiver function calculations via processRFmatlab, on-the-fly cross-section tools, interface picking, and more. In the descriptions of the tools, we present its application to a test dataset in Michigan, Wisconsin, and neighboring areas following the passage of USArray Transportable Array. The software is made available online at https://robporritt.wordpress.com/software.

  10. Open source hardware and software platform for robotics and artificial intelligence applications

    NASA Astrophysics Data System (ADS)

    Liang, S. Ng; Tan, K. O.; Lai Clement, T. H.; Ng, S. K.; Mohammed, A. H. Ali; Mailah, Musa; Azhar Yussof, Wan; Hamedon, Zamzuri; Yussof, Zulkifli

    2016-02-01

    Recent developments in open source hardware and software platforms (Android, Arduino, Linux, OpenCV, etc.) have enabled rapid development of previously expensive and sophisticated systems on a lower budget and with flatter learning curves for developers. Using these platforms, we designed and developed a Java-based 3D robotic simulation system with a graph database, which is integrated in online and offline modes with an Android-Arduino-based rubbish-picking remote-control car. The combination of the open source hardware and software systems creates a flexible and expandable platform for further developments in the future, both in software and hardware, in particular in combination with a graph database for artificial intelligence, as well as with more sophisticated hardware such as legged or humanoid robots.

  11. Investigation of Matlab® as platform in navigation and control of an Automatic Guided Vehicle utilising an omnivision sensor.

    PubMed

    Kotze, Ben; Jordaan, Gerrit

    2014-08-25

    Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed.

  12. A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system. PMID:22736956

  13. Low-cost diffuse optical tomography for the classroom

    NASA Astrophysics Data System (ADS)

    Minagawa, Taisuke; Zirak, Peyman; Weigel, Udo M.; Kristoffersen, Anna K.; Mateos, Nicolas; Valencia, Alejandra; Durduran, Turgut

    2012-10-01

    Diffuse optical tomography (DOT) is an emerging imaging modality with potential applications in oncology, neurology, and other clinical areas. It allows the non-invasive probing of the tissue function using relatively inexpensive and safe instrumentation. An educational laboratory setup of a DOT system could be used to demonstrate how photons propagate through tissues, basics of medical tomography, and the concepts of multiple scattering and absorption. Here, we report a DOT setup that could be introduced to the advanced undergraduate or early graduate curriculum using inexpensive and readily available tools. The basis of the system is the LEGO Mindstorms NXT platform which controls the light sources, the detectors (photo-diodes), a mechanical 2D scanning platform, and the data acquisition. A basic tomographic reconstruction is implemented in standard numerical software, and 3D images are reconstructed. The concept was tested and developed in an educational environment that involved a high-school student and a group of post-doctoral fellows.
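
    The classroom-level reconstruction step can be illustrated with plain unfiltered back-projection; this is only a minimal NumPy/SciPy sketch of the concept, not the LEGO/NXT system's own code.

```python
# Unfiltered back-projection: smear each 1D projection across the image plane,
# rotate it into its acquisition angle, and sum over all angles.
import numpy as np
from scipy.ndimage import rotate

def backproject(sinogram, angles_deg, size):
    """sinogram: (n_angles, n_detectors) parallel-beam projections."""
    recon = np.zeros((size, size))
    for proj, ang in zip(sinogram, angles_deg):
        smear = np.tile(proj, (size, 1))                   # constant along the beam
        recon += rotate(smear, ang, reshape=False, order=1)
    return recon / len(angles_deg)

# Tiny synthetic example: project a square phantom, then reconstruct it (blurred).
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
angles = np.arange(0, 180, 10)
sino = np.array([rotate(phantom, -a, reshape=False, order=1).sum(axis=0)
                 for a in angles])
print(backproject(sino, angles, 64).shape)  # (64, 64)
```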

  14. A vision-based driver nighttime assistance and surveillance system based on intelligent image sensing techniques and a heterogamous dual-core embedded system architecture.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Chiang, Chuan-Yen; Liu, Chuan-Ming; Yuan, Shyan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS system) implemented by a set of embedded software components and modules, and integrates these modules to accomplish a component-based system framework on an embedded heterogamous dual-core platform. Therefore, this study develops and implements computer vision and sensing techniques of nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames in front of the host car captured from CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogamous dual-core embedded platform. Peripheral devices, including image grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle-embedded vision-based nighttime driver assistance and surveillance system.

  15. Investigation of Matlab® as Platform in Navigation and Control of an Automatic Guided Vehicle Utilising an Omnivision Sensor

    PubMed Central

    Kotze, Ben; Jordaan, Gerrit

    2014-01-01

    Automatic Guided Vehicles (AGVs) are navigated utilising multiple types of sensors for detecting the environment. In this investigation such sensors are replaced and/or minimized by the use of a single omnidirectional camera picture stream. An area of interest is extracted, and by using image processing the vehicle is navigated on a set path. Reconfigurability is added to the route layout by signs incorporated in the navigation process. The result is the possible manipulation of a number of AGVs, each on its own designated colour-signed path. This route is reconfigurable by the operator with no programming alteration or intervention. A low resolution camera and a Matlab® software development platform are utilised. The use of Matlab® lends itself to speedy evaluation and implementation of image processing options on the AGV, but its functioning in such an environment needs to be assessed. PMID:25157548

  16. An Automated Platform for High-Resolution Tissue Imaging Using Nanospray Desorption Electrospray Ionization Mass Spectrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.

    2012-10-02

    An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter, consistent with literature data obtained using magnetic resonance spectroscopy.

  17. Applications of Sentinel-2 data for agriculture and forest monitoring using the absolute difference (ZABUD) index derived from the AgroEye software (ESA)

    NASA Astrophysics Data System (ADS)

    de Kok, R.; WeŻyk, P.; PapieŻ, M.; Migo, L.

    2017-10-01

    To convince new users of the advantages of the Sentinel-2 sensor, a simplification of classic remote sensing tools makes it possible to create a platform of communication among domain specialists in agricultural analysis, visual image interpreters, and remote sensing programmers. An index value, known in the remote sensing user domain as "Zabud", was selected to represent, in color, the essentials of a time-series analysis. The color index used in a color atlas offers a working platform for agricultural field control. This creates a database of test and training areas that enables rapid anomaly detection in the agricultural domain. The use cases and simplifications now function as an introduction to Sentinel-2-based remote sensing in an area that previously relied on VHR imagery and aerial data to serve mainly visual interpretation. The database extension with detected anomalies allows developers of open source software to design solutions for further agricultural control with remote sensing.
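
    Assuming the ZABUD layer is essentially a per-pixel absolute difference between two co-registered acquisition dates, as the title suggests, the idea can be sketched as follows; the exact band choice and scaling used by AgroEye are not reproduced here.

```python
# Per-pixel absolute-difference change layer between two Sentinel-2 dates.
import numpy as np

def abs_difference(band_t1, band_t2):
    return np.abs(band_t2.astype(np.float64) - band_t1.astype(np.float64))

t1 = np.array([[0.30, 0.32], [0.28, 0.31]])  # one band, first date (synthetic)
t2 = np.array([[0.31, 0.45], [0.27, 0.30]])  # same band, second date (synthetic)
print(abs_difference(t1, t2))  # large values flag potential anomalies in a parcel
```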

  18. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies.

    PubMed

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform.

  19. VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies

    PubMed Central

    Lee, Yooyoung; Micheals, Ross J; Filliben, James J; Phillips, P Jonathon

    2013-01-01

    The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions due to illuminations, environments, and subject characteristics (e.g., distance, movement, face/body visibility, blinking, etc.). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST’s measurement mission. VASIR is designed to accommodate both ideal (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized by use of rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR performance using the MBGC (Multiple Biometric Grand Challenge) NIR (Near-Infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-based dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate their algorithm performance, and thus serves as a valuable research platform. PMID:26401431

  20. Specification and preliminary design of the CARTA system for satellite cartography

    NASA Technical Reports Server (NTRS)

    Machadoesilva, A. J. F. (Principal Investigator); Neto, G. C.; Serra, P. R. M.; Souza, R. C. M.; Mitsuo, Fernando Augusta, II

    1984-01-01

    Digital imagery acquired by satellite has inherent geometric distortion due to sensor characteristics and platform variations. At INPE, a software system for geometric correction of LANDSAT MSS imagery is under development. Such corrected imagery will be useful for map generation. Important examples are the generation of LANDSAT image-charts for the Amazon region and the possibility of integrating digital satellite imagery into a Geographic Information System.

  1. Comparison of fMRI data analysis by SPM99 on different operating systems.

    PubMed

    Shinagawa, Hideo; Honda, Ei-ichi; Ono, Takashi; Kurabayashi, Tohru; Ohyama, Kimie

    2004-09-01

    The hardware chosen for fMRI data analysis may depend on the platform already present in the laboratory or on the supporting software. In this study, we ran SPM99 on multiple platforms to examine whether we could analyze fMRI data with SPM99, and to compare differences and limitations in processing fMRI data that could be attributed to hardware capabilities. Six normal right-handed volunteers participated in a study of hand-grasping to obtain fMRI data. Each subject performed a run that consisted of 98 images. The run was measured using a gradient echo-type echo planar imaging sequence on a 1.5T apparatus with a head coil. We used several personal computer (PC), Unix and Linux machines to analyze the fMRI data. There were no differences in the results obtained on the PC, Unix and Linux machines. The only limitations in processing large amounts of fMRI data were found using PC machines. This suggests that the results obtained with different machines were not affected by differences in hardware components, such as the CPU, memory and hard drive. Rather, it is likely that the limitations in analyzing large amounts of fMRI data were due to differences in the operating system (OS).

  2. A Platform-Independent Plugin for Navigating Online Radiology Cases.

    PubMed

    Balkman, Jason D; Awan, Omer A

    2016-06-01

    Software methods that enable navigation of radiology cases on various digital platforms differ between handheld devices and desktop computers. This has resulted in poor compatibility of online radiology teaching files across mobile smartphones, tablets, and desktop computers. A standardized, platform-independent, or "agnostic" approach for presenting online radiology content was produced in this work by leveraging modern hypertext markup language (HTML) and JavaScript web software technology. We describe the design and evaluation of this software, demonstrate its use across multiple viewing platforms, and make it publicly available as a model for future development efforts.

  3. Integrating medical imaging analyses through a high-throughput bundled resource imaging system

    NASA Astrophysics Data System (ADS)

    Covington, Kelsie; Welch, E. Brian; Jeong, Ha-Kyu; Landman, Bennett A.

    2011-03-01

    Exploitation of advanced, PACS-centric image analysis and interpretation pipelines provides well-developed storage, retrieval, and archival capabilities along with state-of-the-art data providence, visualization, and clinical collaboration technologies. However, pursuit of integrated medical imaging analysis through a PACS environment can be limiting in terms of the overhead required to validate, evaluate and integrate emerging research technologies. Herein, we address this challenge through presentation of a high-throughput bundled resource imaging system (HUBRIS) as an extension to the Philips Research Imaging Development Environment (PRIDE). HUBRIS enables PACS-connected medical imaging equipment to invoke tools provided by the Java Imaging Science Toolkit (JIST) so that a medical imaging platform (e.g., a magnetic resonance imaging scanner) can pass images and parameters to a server, which communicates with a grid computing facility to invoke the selected algorithms. Generated images are passed back to the server and subsequently to the imaging platform from which the images can be sent to a PACS. JIST makes use of an open application program interface layer so that research technologies can be implemented in any language capable of communicating through a system shell environment (e.g., Matlab, Java, C/C++, Perl, LISP, etc.). As demonstrated in this proof-of-concept approach, HUBRIS enables evaluation and analysis of emerging technologies within well-developed PACS systems with minimal adaptation of research software, which simplifies evaluation of new technologies in clinical research and provides a more convenient use of PACS technology by imaging scientists.
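
    The shell-level integration pattern described above (a platform-side wrapper handing an image and parameters to an external tool) can be sketched as follows; the tool name and its flags are hypothetical, not part of JIST or HUBRIS.

```python
# Invoke a hypothetical command-line analysis tool and collect its output file.
import subprocess
from pathlib import Path

def run_external_analysis(in_path: Path, out_path: Path, smoothing_mm: float) -> Path:
    cmd = [
        "analysis_tool",              # hypothetical executable exposed by a module
        "--input", str(in_path),
        "--output", str(out_path),
        "--smoothing", str(smoothing_mm),
    ]
    # check=True raises CalledProcessError if the tool exits with a non-zero status.
    subprocess.run(cmd, check=True, capture_output=True, text=True)
    return out_path

# run_external_analysis(Path("scan.nii.gz"), Path("result.nii.gz"), 2.0)
```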

  4. Architecture and Implementation of OpenPET Firmware and Embedded Software

    PubMed Central

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; Peng, Qiyu; Choong, Woon-Seng

    2016-01-01

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the flexibility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics – a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called Support Board, where 1 Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules), to be processed, mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration. PMID:27110034

  5. Evaluation of the BreastSimulator software platform for breast tomography

    NASA Astrophysics Data System (ADS)

    Mettivier, G.; Bliznakova, K.; Sechopoulos, I.; Boone, J. M.; Di Lillo, F.; Sarno, A.; Castriconi, R.; Russo, P.

    2017-08-01

    The aim of this work was the evaluation of the software BreastSimulator, a breast x-ray imaging simulation package, as a tool for the creation of 3D uncompressed breast digital models and for the simulation and the optimization of computed tomography (CT) scanners dedicated to the breast. Eight 3D digital breast phantoms were created with glandular fractions in the range 10%-35%. The models are characterised by different sizes and realistic modelled anatomical features. X-ray CT projections were simulated for a dedicated cone-beam CT scanner and reconstructed with the FDK algorithm. X-ray projection images were simulated for 5 mono-energetic (27, 32, 35, 43 and 51 keV) and 3 poly-energetic x-ray spectra typically employed in current CT scanners dedicated to the breast (49, 60, or 80 kVp). Clinical CT images acquired from two different clinical breast CT scanners were used for comparison purposes. The quantitative evaluation included calculation of the power-law exponent, β, from simulated and real breast tomograms, based on the power spectrum fitted with a function of the spatial frequency, f, of the form S(f) = α/f^β. The breast models were validated by comparison against clinical breast CT and published data. We found that the calculated β coefficients were close to those of clinical CT data from a dedicated breast CT scanner and reported data in the literature. In evaluating the software package BreastSimulator to generate breast models suitable for use with breast CT imaging, we found that the breast phantoms produced with the software tool can reproduce the anatomical structure of real breasts, as evaluated by calculating the β exponent from the power spectral analysis of simulated images. As such, this research tool might contribute considerably to the further development, testing and optimisation of breast CT imaging techniques.
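
    The β exponent used for this validation can be estimated by a straight-line fit of log S(f) against log f on a radially averaged power spectrum; the sketch below applies the idea to a synthetic image, not to the study's tomograms.

```python
# Radially average the 2D power spectrum, then fit log S(f) = log(alpha) - beta*log(f).
import numpy as np

def beta_exponent(image):
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny // 2, x - nx // 2).astype(int)       # radial frequency bin
    s = np.bincount(r.ravel(), power.ravel()) / np.maximum(np.bincount(r.ravel()), 1)
    f = np.arange(len(s))
    keep = (f > 2) & (f < min(ny, nx) // 4) & (s > 0)
    slope, _ = np.polyfit(np.log(f[keep]), np.log(s[keep]), 1)
    return -slope                                            # S(f) ~ alpha / f**beta

rng = np.random.default_rng(1)
print(round(beta_exponent(rng.normal(size=(256, 256))), 2))  # ~0 for white noise
```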

  6. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  7. Biomedical Big Data Training Collaborative (BBDTC): An effort to bridge the talent gap in biomedical science and research.

    PubMed

    Purawat, Shweta; Cowart, Charles; Amaro, Rommie E; Altintas, Ilkay

    2016-06-01

    The BBDTC (https://biobigdata.ucsd.edu) is a community-oriented platform to encourage high-quality knowledge dissemination with the aim of growing a well-informed biomedical big data community through collaborative efforts on training and education. The BBDTC collaborative is an e-learning platform that supports the biomedical community to access, develop and deploy open training materials. The BBDTC supports Big Data skill training for biomedical scientists at all levels, and from varied backgrounds. The natural hierarchy of courses allows them to be broken into and handled as modules. Modules can be reused in the context of multiple courses and reshuffled, producing a new and different, dynamic course called a playlist. Users may create playlists to suit their learning requirements and share them with individual users or the wider public. BBDTC leverages the maturity and design of the HUBzero content-management platform for delivering educational content. To facilitate the migration of existing content, the BBDTC supports importing and exporting course material from the edX platform. Migration tools will be extended in the future to support other platforms. Hands-on training software packages, i.e., toolboxes, are supported through Amazon EC2 and Virtualbox virtualization technologies, and they are available as: (i) downloadable lightweight Virtualbox Images providing a standardized software tool environment with software packages and test data on their personal machines, and (ii) remotely accessible Amazon EC2 Virtual Machines for accessing biomedical big data tools and scalable big data experiments. At the moment, the BBDTC site contains three open biomedical big data training courses with lecture contents, videos and hands-on training utilizing VM toolboxes, covering diverse topics. The courses have enhanced the hands-on learning environment by providing structured content that users can use at their own pace. A four-course biomedical big data series is planned for development in 2016.

  8. Environmental Assessment and Monitoring with ICAMS (Image Characterization and Modeling System) Using Multiscale Remote-Sensing Data

    NASA Technical Reports Server (NTRS)

    Lam, N.; Qiu, H.-I.; Quattrochi, Dale A.; Zhao, Wei

    1997-01-01

    With the rapid increase in spatial data, especially in the NASA-EOS (Earth Observing System) era, it is necessary to develop efficient and innovative tools to handle and analyze these data so that environmental conditions can be assessed and monitored. A main difficulty facing geographers and environmental scientists in environmental assessment and measurement is that spatial analytical tools are not easily accessible. We have recently developed a remote sensing/GIS software module called Image Characterization and Modeling System (ICAMS) to provide specialized spatial analytical tools for the measurement and characterization of satellite and other forms of spatial data. ICAMS runs on both the Intergraph-MGE and Arc/info UNIX and Windows-NT platforms. The main techniques in ICAMS include fractal measurement methods, variogram analysis, spatial autocorrelation statistics, textural measures, aggregation techniques, normalized difference vegetation index (NDVI), and delineation of land/water and vegetated/non-vegetated boundaries. In this paper, we demonstrate the main applications of ICAMS on the Intergraph-MGE platform using Landsat Thematic Mapper images from the city of Lake Charles, Louisiana. While the utilities of ICAMS' spatial measurement methods (e.g., fractal indices) in assessing environmental conditions remain to be researched, making the software available to a wider scientific community can permit the techniques in ICAMS to be evaluated and used for a diversity of applications. The findings from these various studies should lead to improved algorithms and more reliable models for environmental assessment and monitoring.
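
    As a hedged illustration of the fractal measurement idea behind one of these modules (not ICAMS code), the box-counting dimension of a binary land/water mask can be estimated as follows.

```python
# Box-counting dimension: count occupied boxes at several scales and fit
# log N(s) versus log s; the slope of the fit gives -D.
import numpy as np

def box_counting_dimension(mask):
    h, w = mask.shape
    sizes = [s for s in (2, 4, 8, 16, 32) if s <= min(h, w) // 4]
    counts = []
    for s in sizes:
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())   # boxes containing foreground
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Example: a filled square patch behaves like a 2D object (dimension ~2).
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True
print(round(box_counting_dimension(mask), 2))
```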

  9. Integrated Power, Avionics, and Software (iPAS) Space Telecommunications Radio System (STRS) Radio User's Guide -- Advanced Exploration Systems (AES)

    NASA Technical Reports Server (NTRS)

    Roche, Rigoberto; Shalkhauser, Mary Jo Windmille

    2017-01-01

    The Integrated Power, Avionics and Software (iPAS) software-defined radio (SDR) was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RAICS) platform for radio development at NASA Johnson Space Center. Software and hardware description language (HDL) code were delivered by NASA Glenn Research Center for use in the iPAS test bed and for development of their own Space Telecommunications Radio System (STRS) waveforms on the RAICS platform. The purpose of this document is to describe how to set up and operate the iPAS STRS radio platform with its delivered test waveform.

  10. Many-core computing for space-based stereoscopic imaging

    NASA Astrophysics Data System (ADS)

    McCall, Paul; Torres, Gildo; LeGrand, Keith; Adjouadi, Malek; Liu, Chen; Darling, Jacob; Pernicka, Henry

    The potential benefits of using parallel computing in real-time visual-based satellite proximity operations missions are investigated. Improvements in performance and relative navigation solutions over single thread systems can be achieved through multi- and many-core computing. Stochastic relative orbit determination methods benefit from the higher measurement frequencies, allowing them to more accurately determine the associated statistical properties of the relative orbital elements. More accurate orbit determination can lead to reduced fuel consumption and extended mission capabilities and duration. Inherent to the process of stereoscopic image processing is the difficulty of loading, managing, parsing, and evaluating large amounts of data efficiently, which may result in delays or highly time consuming processes for single (or few) processor systems or platforms. In this research we utilize the Single-Chip Cloud Computer (SCC), a fully programmable 48-core experimental processor, created by Intel Labs as a platform for many-core software research, provided with a high-speed on-chip network for sharing information along with advanced power management technologies and support for message-passing. The results from utilizing the SCC platform for the stereoscopic image processing application are presented in the form of Performance, Power, Energy, and Energy-Delay-Product (EDP) metrics. Also, a comparison between the SCC results and those obtained from executing the same application on a commercial PC are presented, showing the potential benefits of utilizing the SCC in particular, and any many-core platforms in general for real-time processing of visual-based satellite proximity operations missions.
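
    The energy-related metrics named above follow directly from average power and execution time; the numbers in this small worked example are illustrative only.

```python
# Energy = average power x delay; EDP = energy x delay (lower is better for both).
def energy_delay_product(power_watts, delay_seconds):
    energy_joules = power_watts * delay_seconds
    return energy_joules, energy_joules * delay_seconds

# e.g., a stereo-matching kernel drawing 25 W for 0.8 s on a many-core device:
energy, edp = energy_delay_product(25.0, 0.8)
print(energy, edp)  # 20.0 J, 16.0 J*s
```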

  11. Arabidopsis phenotyping through Geometric Morphometrics.

    PubMed

    Manacorda, Carlos A; Asurmendi, Sebastian

    2018-06-18

    Recently, much technical progress has been achieved in the field of plant phenotyping. High-throughput platforms and the development of improved algorithms for rosette image segmentation now make it possible to extract shape and size parameters for genetic, physiological and environmental studies on a large scale. The development of low-cost phenotyping platforms and freeware resources makes it possible to widely expand phenotypic analysis tools for Arabidopsis. However, objective descriptors of shape parameters that could be used independently of the platform and segmentation software used are still lacking, and shape descriptions still rely on ad hoc or even sometimes contradictory descriptors, which could make comparisons difficult and perhaps inaccurate. Modern geometric morphometrics is a family of methods in quantitative biology proposed to be the main source of data and analytical tools in the emerging field of phenomics studies. Based on the location of landmarks (corresponding points) over imaged specimens and by combining geometry, multivariate analysis and powerful statistical techniques, these tools offer the possibility to reproducibly and accurately account for shape variations amongst groups and measure them in shape distance units. Here, a particular scheme of landmark placement on Arabidopsis rosette images is proposed to study shape variation in the case of viral infection processes. Shape differences between controls and infected plants are quantified throughout the infectious process and visualized. Quantitative comparisons between two unrelated ssRNA+ viruses are shown and reproducibility issues are assessed. Combined with the newest automated platforms and plant segmentation procedures, geometric morphometric tools could boost phenotypic feature extraction and processing in an objective, reproducible manner.
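
    The core superimposition step of geometric morphometrics can be sketched with SciPy's Procrustes routine; the landmark coordinates below are made up and do not follow the paper's rosette landmark scheme.

```python
# Procrustes superimposition of two landmark configurations; the returned
# disparity is a simple shape-distance summary after removing size, rotation
# and translation.
import numpy as np
from scipy.spatial import procrustes

control = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])   # hypothetical
infected = np.array([[0.1, 0.0], [0.9, 0.1], [1.0, 0.8], [0.0, 0.9]])  # hypothetical

aligned_ctrl, aligned_inf, disparity = procrustes(control, infected)
print(f"shape disparity: {disparity:.4f}")
```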

  12. Hybrid Cloud Computing Environment for EarthCube and Geoscience Community

    NASA Astrophysics Data System (ADS)

    Yang, C. P.; Qin, H.

    2016-12-01

    The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience communities can deploy and manage applications using base or customized virtual machine images, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects, such as CHORDS, BCube, CINERGI, OntoSoft, and other EarthCube building blocks, have deployed or started migrating their projects to this platform. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs of each project (e.g., images, port numbers, usable cloud capacity) in advance, based on communications between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or more virtual machines, access the virtual machine(s) to set up the computing environment if need be, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.
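
    As a hedged sketch of the public-cloud side of such a workflow (not ECITE code), the snippet below launches a single virtual machine on AWS with the boto3 library; the image ID, instance type and key name are placeholders that an administrator would replace with project-specific values.

    import boto3

    # Connect to the public-cloud side of a hybrid setup; AWS credentials are assumed
    # to be configured in the environment. All identifiers below are placeholders.
    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder: a project-specific VM image
        InstanceType="t3.large",           # placeholder: capacity agreed with the project
        KeyName="my-project-key",          # placeholder: key pair for researcher access
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", instances[0].id)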

  13. Image Harvest: an open-source platform for high-throughput plant image processing and analysis.

    PubMed

    Knecht, Avi C; Campbell, Malachy T; Caprez, Adam; Swanson, David R; Walia, Harkamal

    2016-05-01

    High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable to processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest was developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. © The Author 2016. Published by Oxford University Press on behalf of the Society for Experimental Biology.
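
    Image Harvest itself targets computing grids such as the Open Science Grid; the sketch below only illustrates the general idea of processing a batch of plant images in parallel on a multi-core machine, using a crude green-pixel count as a stand-in for a digital trait. The directory name, threshold values and trait definition are illustrative, not part of the IH API.

    import glob
    from multiprocessing import Pool

    import cv2
    import numpy as np

    def projected_shoot_area(path):
        """Rough green-pixel count as a stand-in for a digital trait (illustrative only)."""
        img = cv2.imread(path)
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))  # green range
        return path, int(cv2.countNonZero(mask))

    if __name__ == "__main__":
        paths = glob.glob("plant_images/*.png")          # placeholder directory
        with Pool() as pool:                             # one worker per available core
            for name, area in pool.map(projected_shoot_area, paths):
                print(name, area)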

  14. Technical Note: DIRART – A software suite for deformable image registration and adaptive radiotherapy research

    PubMed Central

    Yang, Deshan; Brame, Scott; El Naqa, Issam; Apte, Aditya; Wu, Yu; Murty Goddu, S.; Mutic, Sasa; Deasy, Joseph O.; Low, Daniel A.

    2011-01-01

    Purpose: Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). Methods: DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. Results: DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. Conclusions: By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. PMID:21361176
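
    DIRART itself is MATLAB-based; as a language-neutral illustration of the kind of deformable registration it wraps, the sketch below runs a basic Demons registration with the SimpleITK Python library and resamples the moving image with the resulting displacement vector field. The file names are placeholders and the parameters are not DIRART defaults.

    import SimpleITK as sitk

    # Placeholder file names; any two roughly aligned 3D images of the same patient would do.
    fixed = sitk.ReadImage("planning_ct.nii.gz", sitk.sitkFloat32)
    moving = sitk.ReadImage("daily_cbct.nii.gz", sitk.sitkFloat32)

    # Basic single-resolution Demons deformable registration.
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.5)          # smoothing of the displacement field
    displacement_field = demons.Execute(fixed, moving)

    # Wrap the displacement vector field in a transform and warp the moving image.
    transform = sitk.DisplacementFieldTransform(displacement_field)
    warped = sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0, moving.GetPixelID())
    sitk.WriteImage(warped, "daily_cbct_deformed.nii.gz")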

  15. Tabular data and graphical images in support of the U.S. Geological Survey National Oil and Gas Assessment--San Juan Basin Province (5022): Chapter 7 in Total petroleum systems and geologic assessment of undiscovered oil and gas resources in the San Juan Basin Province, exclusive of Paleozoic rocks, New Mexico and Colorado

    USGS Publications Warehouse

    Klett, T.R.; Le, P.A.

    2013-01-01

    This chapter describes data used in support of the process being applied by the U.S. Geological Survey (USGS) National Oil and Gas Assessment (NOGA) project. Digital tabular data used in this report and archival data that permit the user to perform further analyses are available elsewhere on this CD-ROM. The data can be imported into software directly, without the reader having to transcribe it from the Portable Document Format (.pdf) files of the text. Because of the number and variety of platforms and software available, graphical images are provided as .pdf files and tabular data are provided in a raw form as tab-delimited text files (.tab files).
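
    As a small, hedged illustration (not USGS-supplied code), tab-delimited .tab files of this kind can be loaded for further analysis with the pandas library; the file name below is a placeholder for one of the tables on the CD-ROM.

    import pandas as pd

    # Placeholder file name for one of the tab-delimited assessment tables.
    table = pd.read_csv("assessment_results.tab", sep="\t")
    print(table.head())
    print(table.describe())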

  16. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data

    PubMed Central

    Venkat, A.; Christensen, C.; Gyulassy, A.; Summa, B.; Federer, F.; Angelucci, A.; Pascucci, V.

    2017-01-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data. PMID:28638896

  17. A Scalable Cyberinfrastructure for Interactive Visualization of Terascale Microscopy Data.

    PubMed

    Venkat, A; Christensen, C; Gyulassy, A; Summa, B; Federer, F; Angelucci, A; Pascucci, V

    2016-08-01

    The goal of the recently emerged field of connectomics is to generate a wiring diagram of the brain at different scales. To identify brain circuitry, neuroscientists use specialized microscopes to perform multichannel imaging of labeled neurons at a very high resolution. CLARITY tissue clearing allows imaging labeled circuits through entire tissue blocks, without the need for tissue sectioning and section-to-section alignment. Imaging the large and complex non-human primate brain with sufficient resolution to identify and disambiguate between axons, in particular, produces massive data, creating great computational challenges to the study of neural circuits. Researchers require novel software capabilities for compiling, stitching, and visualizing large imagery. In this work, we detail the image acquisition process and a hierarchical streaming platform, ViSUS, that enables interactive visualization of these massive multi-volume datasets using a standard desktop computer. The ViSUS visualization framework has previously been shown to be suitable for 3D combustion simulation, climate simulation and visualization of large scale panoramic images. The platform is organized around a hierarchical cache oblivious data layout, called the IDX file format, which enables interactive visualization and exploration in ViSUS, scaling to the largest 3D images. In this paper we showcase the ViSUS framework used in an interactive setting with the microscopy data.

  18. Open Source software and social networks: disruptive alternatives for medical imaging.

    PubMed

    Ratib, Osman; Rosset, Antoine; Heuberger, Joris

    2011-05-01

    In recent decades several major changes in computer and communication technology have pushed the limits of imaging informatics and PACS beyond the traditional system architecture, providing new perspectives and innovative approaches to a traditionally conservative medical community. Disruptive technologies such as the World Wide Web, wireless networking, Open Source software, and the recent emergence of cyber communities and social networks have imposed an accelerated pace and major quantum leaps in the progress of computer and technology infrastructure applicable to medical imaging applications. This paper reviews the impact and potential benefits of two major trends in consumer market software development and how they will influence the future of medical imaging informatics. Open Source software is emerging as an attractive and cost-effective alternative to traditional commercial software development, and collaborative social networks provide a new model of communication that is better suited to the needs of the medical community. Evidence shows that successful Open Source software tools have penetrated the medical market and have proven to be more robust and cost effective than their commercial counterparts. Developed by developers who are themselves part of the user community, and tested by a large number of contributing users, these tools are usually better adapted to users' needs and more robust than traditional software programs. This context allows a much faster and more appropriate development and evolution of the software platforms. Similarly, communication technology has opened up to the general public in a way that has changed social behavior and habits, adding a new dimension to the way people communicate and interact with each other. The new paradigms have also slowly penetrated the professional market and ultimately the medical community. Secure social networks that allow groups of people to easily communicate and exchange information are a new model that is particularly suitable for specific groups of healthcare professionals and for physicians. This model has also changed the expectations of how patients wish to communicate with their physicians. Emerging disruptive technologies and innovative paradigms such as Open Source software are leading the way to a new generation of information systems that will slowly change the way physicians, healthcare providers, and patients interact and communicate in the future. The impact of these new technologies is particularly effective in image communication, PACS and teleradiology. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  19. General consumer communication tools for improved image management and communication in medicine.

    PubMed

    Rosset, Chantal; Rosset, Antoine; Ratib, Osman

    2005-12-01

    We elected to explore new technologies emerging on the general consumer market that can improve and facilitate image and data communication in medical and clinical environments. These new technologies, developed for communication and storage of data, can improve user convenience and facilitate the communication and transport of images and related data beyond the usual limits and restrictions of a traditional picture archiving and communication system (PACS) network. We specifically tested and implemented three new technologies provided on Apple computer platforms. (1) We adopted the iPod, a portable MP3 player with hard disk storage, to easily and quickly move large numbers of DICOM images. (2) We adopted iChat, videoconferencing and instant-messaging software, to transmit DICOM images in real time to a distant computer for conferencing and teleradiology. (3) Finally, we developed a direct secure interface to use the iDisk service, a file-sharing service based on WebDAV technology, to send and share DICOM files between distant computers. These three technologies were integrated into new open-source image navigation and display software called OsiriX, allowing for manipulation and communication of multimodality and multidimensional DICOM image data sets. This software is freely available as an open-source project at http://homepage.mac.com/rossetantoine/OsiriX. Our experience showed that the implementation of these technologies allowed us to significantly enhance the existing PACS with valuable new features without any additional investment or the need for complex extensions of our infrastructure. The added features, such as teleradiology, secure and convenient image and data communication, and the use of external data storage services, open the gate to a much broader extension of our imaging infrastructure to the outside world.
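
    As a small, hedged illustration of the kind of DICOM handling such tools build on (not OsiriX code), the snippet below reads a DICOM file with the pydicom library and prints a few header fields before accessing the pixel data; the file name is a placeholder.

    import pydicom

    # Placeholder path to a DICOM image exported from a PACS or modality.
    ds = pydicom.dcmread("IM0001.dcm")

    print("Modality:       ", ds.Modality)
    print("Study date:     ", ds.get("StudyDate", "unknown"))
    print("Rows x Columns: ", ds.Rows, "x", ds.Columns)

    pixels = ds.pixel_array    # requires NumPy; shape (Rows, Columns) for single-frame images
    print("Pixel array shape:", pixels.shape)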

  20. Framework programmable platform for the advanced software development workstation. Integration mechanism design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Reddy, Uday; Ackley, Keith; Futrell, Mike

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by this model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated.

  1. Fast high-energy X-ray imaging for Severe Accidents experiments on the future PLINIUS-2 platform

    NASA Astrophysics Data System (ADS)

    Berge, L.; Estre, N.; Tisseur, D.; Payan, E.; Eck, D.; Bouyer, V.; Cassiaut-Louis, N.; Journeau, C.; Tellier, R. Le; Pluyette, E.

    2018-01-01

    The future PLINIUS-2 platform of CEA Cadarache will be dedicated to the study of corium interactions in severe nuclear accidents, and will host innovative large-scale experiments. The Nuclear Measurement Laboratory of CEA Cadarache is in charge of real-time high-energy X-ray imaging set-ups, for the study of the corium-water and corium-sodium interaction, and of the corium stratification process. Imaging such large and high-density objects requires a 15 MeV linear electron accelerator coupled to a tungsten target creating a high-energy Bremsstrahlung X-ray flux, with a corresponding dose rate of about 100 Gy/min at 1 m. The signal is detected by phosphor screens coupled to high-framerate scientific CMOS cameras. The imaging set-up is designed using experimentally validated in-house simulation software (MODHERATO). The code computes quantitative radiographic signals from the description of the source, object geometry and composition, detector, and geometrical configuration (magnification factor, etc.). It accounts for several noise sources (photonic and electronic noise, Swank noise, and readout noise), and for image blur due to the source spot size and to detector unsharpness. With a view to PLINIUS-2, the simulation has been improved to account for the scattered flux, which is expected to be significant. The paper presents the scattered flux calculation using the MCNP transport code, and its integration into the MODHERATO simulation. Then the validation of the improved simulation is presented, through comparison with measured images taken on a small-scale equivalent set-up on the PLINIUS platform. Excellent agreement is achieved. This improved simulation is therefore being used to design the PLINIUS-2 imaging set-ups (source, detectors, cameras, etc.).
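
    MODHERATO itself is an in-house CEA code; the simplified sketch below only illustrates the basic quantities such a radiographic simulation manipulates: Beer-Lambert attenuation through an object of varying thickness, followed by Poisson (photonic) and Gaussian (electronic/readout) noise. All numbers are arbitrary placeholders, not PLINIUS-2 parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 2D thickness map (cm) of a dense object, with a thinner region for contrast.
    thickness = np.full((256, 256), 8.0)
    thickness[100:150, 100:150] = 4.0

    mu = 0.45            # placeholder linear attenuation coefficient (1/cm)
    photons_in = 1.0e4   # placeholder mean photon count per pixel

    # Beer-Lambert attenuation, then photonic (Poisson) and electronic/readout (Gaussian) noise.
    expected = photons_in * np.exp(-mu * thickness)
    signal = rng.poisson(expected).astype(float)
    signal += rng.normal(0.0, 20.0, size=signal.shape)

    contrast = signal[100:150, 100:150].mean() / signal[:50, :50].mean()
    print(f"Simulated contrast between thin and thick regions: {contrast:.2f}")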

  2. Towards an Open, Distributed Software Architecture for UxS Operations

    NASA Technical Reports Server (NTRS)

    Cross, Charles D.; Motter, Mark A.; Neilan, James H.; Qualls, Garry D.; Rothhaar, Paul M.; Tran, Loc; Trujillo, Anna C.; Allen, B. Danette

    2015-01-01

    To address the growing need to evaluate, test, and certify an ever expanding ecosystem of UxS platforms in preparation of cultural integration, NASA Langley Research Center's Autonomy Incubator (AI) has taken on the challenge of developing a software framework in which UxS platforms developed by third parties can be integrated into a single system which provides evaluation and testing, mission planning and operation, and out-of-the-box autonomy and data fusion capabilities. This software framework, named AEON (Autonomous Entity Operations Network), has two main goals. The first goal is the development of a cross-platform, extensible, onboard software system that provides autonomy at the mission execution and course-planning level, a highly configurable data fusion framework sensitive to the platform's available sensor hardware, and plug-and-play compatibility with a wide array of computer systems, sensors, software, and controls hardware. The second goal is the development of a ground control system that acts as a test-bed for integration of the proposed heterogeneous fleet, and allows for complex mission planning, tracking, and debugging capabilities. The ground control system should also be highly extensible and allow plug-and-play interoperability with third party software systems. In order to achieve these goals, this paper proposes an open, distributed software architecture which utilizes at its core the Data Distribution Service (DDS) standards, established by the Object Management Group (OMG), for inter-process communication and data flow. The design decisions proposed herein leverage the advantages of existing robotics software architectures and the DDS standards to develop software that is scalable, high-performance, fault tolerant, modular, and readily interoperable with external platforms and software.

  3. Framework Programmable Platform for the Advanced Software Development Workstation: Preliminary system design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, John W., IV; Henderson, Richard; Futrell, Michael T.

    1991-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The focus here is on the design of components that make up the FPP. These components serve as supporting systems for the Integration Mechanism and the Framework Processor and provide the 'glue' that ties the FPP together. Also discussed are the components that allow the platform to operate in a distributed, heterogeneous environment and to manage the development and evolution of software system artifacts.

  4. Open source posturography.

    PubMed

    Rey-Martinez, Jorge; Pérez-Fernández, Nicolás

    2016-12-01

    The proposed validation goal of an intra-class correlation coefficient of 0.9 was reached with the results of this study. Based on these results, we consider the developed software (RombergLab) to be validated balance assessment software. The reliability of this software depends on the technical specifications of the force platform used. The objective was to develop and validate posturography software and share its source code in open source terms. In a prospective, non-randomized validation study, 20 consecutive adults underwent two balance assessment tests: six-condition posturography was performed using clinically approved software and a force platform, and the same conditions were measured using the newly developed open source software with a low-cost force platform. The intra-class correlation coefficient of the sway area, obtained from the center-of-pressure variations in both devices for the six conditions, was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
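
    The validation variable above, sway area derived from centre-of-pressure (COP) variations, can be approximated in several ways; the sketch below (not RombergLab code) computes the area of the convex hull enclosing a simulated COP trajectory using SciPy. The COP samples are random placeholders standing in for force-platform output.

    import numpy as np
    from scipy.spatial import ConvexHull

    rng = np.random.default_rng(1)

    # Placeholder centre-of-pressure trajectory (x, y) in mm, e.g. 30 s sampled at 40 Hz.
    cop = np.cumsum(rng.normal(0.0, 0.5, size=(1200, 2)), axis=0)

    # For 2D points, ConvexHull.volume is the enclosed area (and .area the perimeter).
    hull = ConvexHull(cop)
    print(f"Sway area (convex hull): {hull.volume:.1f} mm^2")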

  5. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory indicates that the technology has high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology based on MEMS. The image capturing and processing system consists of software and hardware. We build the core image processing hardware platform around TI's high-performance DSP chip, the TMS320DM642, and design the image capturing board around the MT9P031, Micron's high-frame-rate, low-power-consumption CMOS sensor. Finally, we use Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We design the video capture driver based on TI's class/mini-driver model and the network output program based on the NDK kit for image capturing, processing, and transmission. Experiments show that the system offers high capture resolution and fast processing speed, with network transmission speeds of up to 100 Mbps.

  6. Space-Shuttle Emulator Software

    NASA Technical Reports Server (NTRS)

    Arnold, Scott; Askew, Bill; Barry, Matthew R.; Leigh, Agnes; Mermelstein, Scott; Owens, James; Payne, Dan; Pemble, Jim; Sollinger, John; Thompson, Hiram

    2007-01-01

    A package of software has been developed to execute a raw binary image of the space shuttle flight software for simulation of the computational effects of operation of space shuttle avionics. This software can be run on inexpensive computer workstations. Heretofore, it was necessary to use real flight computers to perform such tests and simulations. The package includes a program that emulates the space shuttle orbiter general-purpose computer [consisting of a central processing unit (CPU), input/output processor (IOP), master sequence controller, and bus-control elements]; an emulator of the orbiter display electronics unit and models of the associated cathode-ray tubes, keyboards, and switch controls; computational models of the data-bus network; computational models of the multiplexer-demultiplexer components; an emulation of the pulse-code modulation master unit; an emulation of the payload data interleaver; a model of the master timing unit; a model of the mass memory unit; and a software component that ensures compatibility of telemetry and command services between the simulated space shuttle avionics and a mission control center. The software package is portable to several host platforms.

  7. Flexible imaging payload for real-time fluorescent biological imaging in parabolic, suborbital and space analog environments

    NASA Astrophysics Data System (ADS)

    Bamsey, Matthew T.; Paul, Anna-Lisa; Graham, Thomas; Ferl, Robert J.

    2014-10-01

    Fluorescent imaging offers the ability to monitor biological functions, in this case biological responses to space-related environments. For plants, fluorescent imaging can include general health indicators such as chlorophyll fluorescence as well as specific metabolic indicators such as engineered fluorescent reporters. This paper describes the Flex Imager a fluorescent imaging payload designed for Middeck Locker deployment and now tested on multiple flight and flight-related platforms. The Flex Imager and associated payload elements have been developed with a focus on 'flexibility' allowing for multiple imaging modalities and change-out of individual imaging or control components in the field. The imaging platform is contained within the standard Middeck Locker spaceflight form factor, with components affixed to a baseplate that permits easy rearrangement and fine adjustment of components. The Flex Imager utilizes standard software packages to simplify operation, operator training, and evaluation by flight provider flight test engineers, or by researchers processing the raw data. Images are obtained using a commercial cooled CCD image sensor, with light-emitting diodes for excitation and a suite of filters that allow biological samples to be imaged over wavelength bands of interest. Although baselined for the monitoring of green fluorescent protein and chlorophyll fluorescence from Arabidopsis samples, the Flex Imager payload permits imaging of any biological sample contained within a standard 10 cm by 10 cm square Petri plate. A sample holder was developed to secure sample plates under different flight profiles while permitting sample change-out should crewed operations be possible. In addition to crew-directed imaging, autonomous or telemetric operation of the payload is also a viable operational mode. An infrared camera has also been integrated into the Flex Imager payload to allow concurrent fluorescent and thermal imaging of samples. The Flex Imager has been utilized to assess, in real-time, the response of plants to novel environments including various spaceflight analogs, including several parabolic flight environments as well as hypobaric plant growth chambers. Basic performance results obtained under these operational environments, as well as laboratory-based tests are described. The Flex Imager has also been designed to be compatible with emerging suborbital platforms.

  8. RAY-UI: A powerful and extensible user interface for RAY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baumgärtel, P., E-mail: peter.baumgaertel@helmholtz-berlin.de; Erko, A.; Schäfers, F.

    2016-07-27

    The RAY-UI project started as a proof-of-concept for an interactive and graphical user interface (UI) for the well-known ray tracing software RAY [1]. In the meantime, it has evolved into a powerful enhanced version of RAY that will serve as the platform for future development and improvement of associated tools. The software as of today supports nearly all sophisticated simulation features of RAY. Furthermore, it delivers very significant usability and work efficiency improvements. Beamline elements can be quickly added or removed in the interactive sequence view. Parameters of any selected element can be accessed directly and in arbitrary order. With a single click, parameter changes can be tested and new simulation results can be obtained. All analysis results can be explored interactively right after ray tracing by means of powerful integrated image viewing and graphing tools. Unlimited image planes can be positioned anywhere in the beamline, and bundles of image planes can be created for moving the plane along the beam to identify the focus position with live updates of the simulated results. In addition to showing the features and workflow of RAY-UI, we will give an overview of the underlying software architecture as well as examples for use and an outlook for future developments.

  9. FPGA Coprocessor for Accelerated Classification of Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Scharenbroich, Lucas J.; Werne, Thomas A.

    2008-01-01

    An effort related to that described in the preceding article focuses on developing a spaceborne processing platform for fast and accurate onboard classification of image data, a critical part of modern satellite image processing. The approach again has been to exploit the versatility of the recently developed hybrid Virtex-4FX field-programmable gate array (FPGA), running diverse science applications on embedded processors while taking advantage of the reconfigurable hardware resources of the FPGAs. In this case, the FPGA serves as a coprocessor that implements legacy C-language support-vector-machine (SVM) image-classification algorithms to detect and identify natural phenomena such as flooding, volcanic eruptions, and sea-ice break-up. The FPGA provides hardware acceleration, offering greater onboard processing capability than previously demonstrated in software. The original C-language program demonstrated on an imaging instrument aboard the Earth Observing-1 (EO-1) satellite implements a linear-kernel SVM algorithm for classifying parts of the images as snow, water, ice, land, cloud, or unclassified. Current onboard processors, such as on EO-1, have limited computing power, extremely limited active storage capability and are no longer considered state-of-the-art. Using commercially available software that translates C-language programs into hardware description language (HDL) files, the legacy C-language program, and two newly formulated programs for a more capable expanded-linear-kernel and a more accurate polynomial-kernel SVM algorithm, have been implemented in the Virtex-4FX FPGA. In tests, the FPGA implementations have exhibited significant speedups over conventional software implementations running on general-purpose hardware.
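
    The linear-kernel SVM classification itself is straightforward to express in software; the sketch below is illustrative only (not the EO-1 flight code), training scikit-learn's SVC with a linear kernel on hypothetical per-pixel band values and predicting classes such as snow, water, ice, land or cloud.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Hypothetical training set: per-pixel spectral band values (rows) with known labels.
    X_train = rng.random((500, 9))                 # e.g. 9 spectral bands per pixel
    y_train = rng.integers(0, 5, size=500)         # 0=snow, 1=water, 2=ice, 3=land, 4=cloud

    clf = SVC(kernel="linear")                     # linear-kernel SVM
    clf.fit(X_train, y_train)

    # Classify new, unlabeled pixels with the same 9 bands.
    X_new = rng.random((100, 9))
    labels = clf.predict(X_new)
    print("Predicted class counts:", np.bincount(labels, minlength=5))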

  10. [Central online quality assurance in radiology: an IT solution exemplified by the German Breast Cancer Screening Program].

    PubMed

    Czwoydzinski, J; Girnus, R; Sommer, A; Heindel, W; Lenzen, H

    2011-09-01

    Physical-technical quality assurance is one of the essential tasks of the National Reference Centers in the German Breast Cancer Screening Program. For this purpose the mammography units are required to transfer the measured values of the constancy tests on a daily basis and all phantom images created for this purpose on a weekly basis to the reference centers. This is a serious logistical challenge. To meet these requirements, we developed an innovative software tool. By the end of 2005, we had already developed web-based software (MammoControl) allowing the transmission of constancy test results via entry forms. For automatic analysis and transmission of the phantom images, we then introduced an extension (MammoControl DIANA). This was based on Java, Java Web Start, the NetBeans Rich Client Platform, the Pixelmed Java DICOM Toolkit and the ImageJ library. MammoControl DIANA was designed to run locally in the mammography units. This allows automated on-site image analysis. Both results and compressed images can then be transmitted to the reference center. We developed analysis modules for the daily and monthly constancy tests and additionally for a homogeneity test. The software we developed facilitates the immediate availability of measurement results, phantom images, and DICOM header data in all reference centers. This allows both targeted guidance and short response times in the case of errors. We achieved a consistent IT-based evaluation with standardized tools for the entire screening program in Germany. © Georg Thieme Verlag KG Stuttgart · New York.

  11. ArrayNinja: An Open Source Platform for Unified Planning and Analysis of Microarray Experiments.

    PubMed

    Dickson, B M; Cornett, E M; Ramjan, Z; Rothbart, S B

    2016-01-01

    Microarray-based proteomic platforms have emerged as valuable tools for studying various aspects of protein function, particularly in the field of chromatin biochemistry. Microarray technology itself is largely unrestricted in regard to printable material and platform design, and efficient multidimensional optimization of assay parameters requires fluidity in the design and analysis of custom print layouts. This motivates the need for streamlined software infrastructure that facilitates the combined planning and analysis of custom microarray experiments. To this end, we have developed ArrayNinja as a portable, open source, and interactive application that unifies the planning and visualization of microarray experiments and provides maximum flexibility to end users. Array experiments can be planned, stored to a private database, and merged with the imaged results for a level of data interaction and centralization that is not currently attainable with available microarray informatics tools. © 2016 Elsevier Inc. All rights reserved.

  12. Investigation into Cloud Computing for More Robust Automated Bulk Image Geoprocessing

    NASA Technical Reports Server (NTRS)

    Brown, Richard B.; Smoot, James C.; Underwood, Lauren; Armstrong, C. Duane

    2012-01-01

    Geospatial resource assessments frequently require timely geospatial data processing that involves large multivariate remote sensing data sets. In particular, for disasters, response requires rapid access to large data volumes, substantial storage space and high performance processing capability. The processing and distribution of this data into usable information products requires a processing pipeline that can efficiently manage the required storage, computing utilities, and data handling requirements. In recent years, with the availability of cloud computing technology, cloud processing platforms have made available a powerful new computing infrastructure resource that can meet this need. To assess the utility of this resource, this project investigates cloud computing platforms for bulk, automated geoprocessing capabilities with respect to data handling and application development requirements. This presentation describes work being conducted by the Applied Sciences Program Office at NASA Stennis Space Center. A prototypical set of image manipulation and transformation processes that incorporate sample Unmanned Airborne System data was developed to create value-added products and tested for implementation on the "cloud". This project outlines the steps involved in creating and testing open-source process code developed on a local prototype platform, and then transitioning this code with associated environment requirements into an analogous, but memory- and processor-enhanced, cloud platform. A data processing cloud was used to store both standard digital camera panchromatic and multi-band image data, which were subsequently subjected to standard image processing functions such as NDVI (Normalized Difference Vegetation Index), NDMI (Normalized Difference Moisture Index), band stacking, reprojection, and other similar data processes. Cloud infrastructure service providers were evaluated by taking these locally tested processing functions, and then applying them to a given cloud-enabled infrastructure to assess and compare environment setup options and enabled technologies. This project reviews findings that were observed when cloud platforms were evaluated for bulk geoprocessing capabilities based on data handling and application development requirements.
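
    Of the standard functions listed, NDVI is the simplest to show concretely: NDVI = (NIR - Red) / (NIR + Red). The sketch below computes it with NumPy on placeholder red and near-infrared bands; it is not the project's cloud pipeline code.

    import numpy as np

    rng = np.random.default_rng(42)

    # Placeholder red and near-infrared bands (reflectance values in [0, 1]).
    red = rng.random((512, 512)).astype(np.float32)
    nir = rng.random((512, 512)).astype(np.float32)

    # NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
    denom = nir + red
    ndvi = np.where(denom > 0, (nir - red) / denom, 0.0)

    print("NDVI range:", float(ndvi.min()), "to", float(ndvi.max()))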

  13. a Novel Technique for Precision Geometric Correction of Jitter Distortion for the Europa Imaging System and Other Rolling-Shutter Cameras

    NASA Astrophysics Data System (ADS)

    Kirk, R. L.; Shepherd, M.; Sides, S. C.

    2018-04-01

    We use simulated images to demonstrate a novel technique for mitigating geometric distortions caused by platform motion ("jitter") as two-dimensional image sensors are exposed and read out line by line ("rolling shutter"). The results indicate that the Europa Imaging System (EIS) on NASA's Europa Clipper can likely meet its scientific goals requiring 0.1-pixel precision. We are therefore adapting the software used to demonstrate and test rolling shutter jitter correction to become part of the standard processing pipeline for EIS. The correction method will also apply to other rolling-shutter cameras, provided they have the operational flexibility to read out selected "check lines" at chosen times during the systematic readout of the frame area.

  14. A Unified Algebraic and Logic-Based Framework Towards Safe Routing Implementations

    DTIC Science & Technology

    2015-08-13

    Software-defined Networks (SDN). We developed a declarative platform for implementing SDN protocols using declarative...and debugging several SDN applications. Example-based SDN synthesis. The recent emergence of software-defined networks offers an opportunity to design...domain of Software-defined Networks (SDN).

  15. Mining biomedical images towards valuable information retrieval in biomedical and life sciences

    PubMed Central

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which creates a need for bioimaging platforms that extract and analyze the text and content of biomedical images and thereby support effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved, and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578

  16. Strip mosaicing confocal microscopy for rapid imaging over large areas of excised tissue

    NASA Astrophysics Data System (ADS)

    Abeytunge, Sanjee; Li, Yongbiao; Larson, Bjorg; Peterson, Gary; Toledo-Crow, Ricardo; Rajadhyaksha, Milind

    2012-03-01

    Confocal mosaicing microscopy is a developing technology platform for imaging tumor margins directly in fresh tissue, without the processing that is required for conventional pathology. Previously, basal cell carcinoma margins were detected by mosaicing of confocal images of 12 x 12 mm2 of excised tissue from Mohs surgery. This mosaicing took 9 minutes. Recently we reported the initial feasibility of a faster approach called "strip mosaicing" on 10 x 10 mm2 of tissue that was demonstrated in 3 minutes. In this paper we report further advances in instrumentation and software. Rapid mosaicing of confocal images on large areas of fresh tissue potentially offers a means to perform pathology at the bedside. Thus, strip mosaicing confocal microscopy may serve as an adjunct to pathology for imaging tumor margins to guide surgery.

  17. NOTE: MMCTP: a radiotherapy research environment for Monte Carlo and patient-specific treatment planning

    NASA Astrophysics Data System (ADS)

    Alexander, A.; DeBlois, F.; Stroian, G.; Al-Yahya, K.; Heath, E.; Seuntjens, J.

    2007-07-01

    Radiotherapy research lacks a flexible computational research environment for Monte Carlo (MC) and patient-specific treatment planning. The purpose of this study was to develop a flexible software package on low-cost hardware with the aim of integrating new patient-specific treatment planning with MC dose calculations suitable for large-scale prospective and retrospective treatment planning studies. We designed the software package 'McGill Monte Carlo treatment planning' (MMCTP) for the research development of MC and patient-specific treatment planning. The MMCTP design consists of a graphical user interface (GUI), which runs on a simple workstation connected through standard secure-shell protocol to a cluster for lengthy MC calculations. Treatment planning information (e.g., images, structures, beam geometry properties and dose distributions) is converted into a convenient MMCTP local file storage format designated, the McGill RT format. MMCTP features include (a) DICOM_RT, RTOG and CADPlan CART format imports; (b) 2D and 3D visualization views for images, structure contours, and dose distributions; (c) contouring tools; (d) DVH analysis, and dose matrix comparison tools; (e) external beam editing; (f) MC transport calculation from beam source to patient geometry for photon and electron beams. The MC input files, which are prepared from the beam geometry properties and patient information (e.g., images and structure contours), are uploaded and run on a cluster using shell commands controlled from the MMCTP GUI. The visualization, dose matrix operation and DVH tools offer extensive options for plan analysis and comparison between MC plans and plans imported from commercial treatment planning systems. The MMCTP GUI provides a flexible research platform for the development of patient-specific MC treatment planning for photon and electron external beam radiation therapy. The impact of this tool lies in the fact that it allows for systematic, platform-independent, large-scale MC treatment planning for different treatment sites. Patient recalculations were performed to validate the software and ensure proper functionality.
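
    The DVH analysis mentioned above reduces a 3D dose matrix and a structure mask to a cumulative dose-volume curve; the sketch below is a minimal NumPy illustration of that reduction (not MMCTP code), with a random dose matrix and a spherical structure standing in for real data.

    import numpy as np

    rng = np.random.default_rng(7)

    # Placeholder 3D dose matrix (Gy) and a spherical mask standing in for a structure contour.
    dose = rng.gamma(shape=2.0, scale=10.0, size=(64, 64, 64))
    z, y, x = np.indices(dose.shape)
    mask = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 <= 15 ** 2

    # Cumulative DVH: for each dose level, the fraction of the structure receiving at least that dose.
    structure_dose = dose[mask]
    levels = np.linspace(0.0, structure_dose.max(), 200)
    volume_fraction = [(structure_dose >= d).mean() * 100.0 for d in levels]

    d95 = np.percentile(structure_dose, 5)   # dose received by at least 95% of the structure volume
    print(f"D95 of the toy structure: {d95:.1f} Gy; DVH sampled at {len(levels)} dose levels")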

  18. Architecture and Implementation of OpenPET Firmware and Embedded Software

    DOE PAGES

    Abu-Nimeh, Faisal T.; Ito, Jennifer; Moses, William W.; ...

    2016-01-11

    OpenPET is an open source, modular, extendible, and high-performance platform suitable for multi-channel data acquisition and analysis. Due to the versatility of the hardware, firmware, and software architectures, the platform is capable of interfacing with a wide variety of detector modules not only in medical imaging but also in homeland security applications. Analog signals from radiation detectors share similar characteristics: a pulse whose area is proportional to the deposited energy and whose leading edge is used to extract a timing signal. As a result, a generic design method of the platform is adopted for the hardware, firmware, and software architectures and implementations. The analog front-end is hosted on a module called a Detector Board, where each board can filter, combine, timestamp, and process multiple channels independently. The processed data is formatted and sent through a backplane bus to a module called a Support Board, where one Support Board can host up to eight Detector Board modules. The data in the Support Board, coming from 8 Detector Board modules, can be aggregated or correlated (if needed) depending on the algorithm implemented or runtime mode selected. It is then sent out to a computer workstation for further processing. The number of channels (detector modules) to be processed mandates the overall OpenPET System Configuration, which is designed to handle up to 1,024 channels using 16-channel Detector Boards in the Standard System Configuration and 16,384 channels using 32-channel Detector Boards in the Large System Configuration.

  19. Detection of Lock on Radar System Based on Ultrasonic US 100 Sensor And Arduino Uno R3 With Image Processing GUI

    NASA Astrophysics Data System (ADS)

    Baskoro, F.; Reynaldo, B. R.

    2018-04-01

    Electronics technology, especially in the field of microcontrollers, is developing very rapidly. Microcontrollers have found many useful applications in everyday life as well as in laboratory research. In this study, an Arduino Uno R3, a platform based on the ATmega328 microcontroller, is used as a distance sensor readout to determine the distance of an object with high accuracy. The method relies on the Timer/Counter function of the Arduino Uno R3. The Arduino Uno R3 platform carries an ATMEL ATmega328 microcontroller, which supports clock speeds of up to 20 MHz, offers 16-bit counting capability, and is programmed in the C language. With the Arduino Uno R3 platform, the ATmega328 microcontroller can be programmed using the Arduino IDE software, which is simpler and easier to use because it is supported by libraries and many support programs. The result of this research is a distance measurement system that determines the location of an object using the US-100 ultrasonic sensor and the ATmega328-based Arduino Uno R3, with the results displayed through an image processing GUI.

  20. Framework Programmable Platform for the Advanced Software Development Workstation (FPP/ASDW). Demonstration framework document. Volume 1: Concepts and activity descriptions

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Dewitte, Paul S.; Crump, John W.; Ackley, Keith A.

    1992-01-01

    The Framework Programmable Software Development Platform (FPP) is a project aimed at effectively combining tool and data integration mechanisms with a model of the software development process to provide an intelligent integrated software development environment. Guided by the model, this system development framework will take advantage of an integrated operating environment to automate effectively the management of the software development process so that costly mistakes during the development phase can be eliminated. The Advanced Software Development Workstation (ASDW) program is conducting research into development of advanced technologies for Computer Aided Software Engineering (CASE).

  1. Software architecture for time-constrained machine vision applications

    NASA Astrophysics Data System (ADS)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2013-01-01

    Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility, because they are normally oriented toward particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty for reuse, and inefficient execution on multicore processors. We present a novel software architecture for time-constrained machine vision applications that addresses these issues. The architecture is divided into three layers. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message-passing interface based on a dynamic publish/subscribe pattern. A topic-based filtering in which messages are published to topics is used to route the messages from the publishers to the subscribers interested in a particular type of message. The application layer provides a repository for reusable application modules designed for machine vision applications. These modules, which include acquisition, visualization, communication, user interface, and data processing, take advantage of the power of well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, the proposed architecture is applied to a real machine vision application: a jam detector for steel pickling lines.
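
    The messaging layer described above follows a dynamic publish/subscribe pattern with topic-based filtering; the sketch below is a minimal, single-process illustration of that pattern in Python (it is not the authors' architecture or a DDS binding, and the topic names are invented).

    from collections import defaultdict
    from typing import Any, Callable

    class MessageBus:
        """Minimal topic-based publish/subscribe broker (illustrative only)."""

        def __init__(self) -> None:
            self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

        def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
            self._subscribers[topic].append(callback)

        def publish(self, topic: str, message: Any) -> None:
            # Only subscribers registered for this topic receive the message.
            for callback in self._subscribers[topic]:
                callback(message)

    bus = MessageBus()
    bus.subscribe("frames/raw", lambda msg: print("acquisition ->", msg))
    bus.subscribe("alarms/jam", lambda msg: print("jam detector ->", msg))

    bus.publish("frames/raw", {"frame_id": 1, "timestamp_ms": 16})
    bus.publish("alarms/jam", {"line": "pickling-1", "severity": "high"})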

  2. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application.

    PubMed

    Hanwell, Marcus D; de Jong, Wibe A; Harris, Christopher J

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction, connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code is hosted on the GitHub platform, with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web, going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data is stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
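
    As a small, hypothetical illustration of JSON-based storage of a calculation record (the schema shown is invented for the example, not the platform's actual format), such data can be serialized and exchanged easily across languages:

    import json

    # Hypothetical record for a single calculation; field names are illustrative only.
    record = {
        "molecule": {
            "formula": "H2O",
            "atoms": [
                {"element": "O", "coords": [0.000, 0.000, 0.117]},
                {"element": "H", "coords": [0.000, 0.757, -0.469]},
                {"element": "H", "coords": [0.000, -0.757, -0.469]},
            ],
        },
        "calculation": {"code": "NWChem", "method": "B3LYP", "basis": "6-31G*"},
        "results": {"total_energy_hartree": -76.408},
    }

    payload = json.dumps(record, indent=2)   # JSON is easy to exchange across languages
    restored = json.loads(payload)
    print("Atoms:", len(restored["molecule"]["atoms"]))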

  3. Enterprise Implementation of Digital Pathology: Feasibility, Challenges, and Opportunities.

    PubMed

    Hartman, D J; Pantanowitz, L; McHugh, J S; Piccoli, A L; OLeary, M J; Lauro, G R

    2017-10-01

    Digital pathology is becoming technically possible to implement for routine pathology work. At our institution, we have been using digital pathology for second opinion intraoperative consultations for over 10 years. Herein, we describe our experience in converting to a digital pathology platform for primary pathology diagnosis. We implemented an incremental rollout for digital pathology on subspecialty benches, beginning with cases that contained small amounts of tissue (biopsy specimens). We successfully scanned over 40,000 slides through our digital pathology system. Several lessons (both challenges and opportunities) were learned through this implementation. A successful conversion to digital pathology requires pre-imaging adjustments, integrated software and post-imaging evaluations.

  4. 3-D interactive visualisation tools for Hi spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  5. Structure-from-motion for MAV image sequence analysis with photogrammetric applications

    NASA Astrophysics Data System (ADS)

    Schönberger, J. L.; Fraundorfer, F.; Frahm, J.-M.

    2014-08-01

    MAV systems have found increased attention in the photogrammetric community as an (autonomous) image acquisition platform for accurate 3D reconstruction. For an accurate reconstruction in feasible time, the acquired imagery requires specialized SfM software. Current systems typically use high-resolution sensors in pre-planned flight missions from far distance. We describe and evaluate a new SfM pipeline specifically designed for sequential, close-distance, and low-resolution imagery from mobile cameras with relatively high frame-rate and high overlap. Experiments demonstrate reduced computational complexity by leveraging the temporal consistency, comparable accuracy and point density with respect to state-of-the-art systems.

  6. Pulseq-Graphical Programming Interface: Open source visual environment for prototyping pulse sequences and integrated magnetic resonance imaging algorithm development.

    PubMed

    Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam

    2018-03-11

    To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Mobile Phones Democratize and Cultivate Next-Generation Imaging, Diagnostics and Measurement Tools

    PubMed Central

    Ozcan, Aydogan

    2014-01-01

    In this article, I discuss some of the emerging applications and the future opportunities and challenges created by the use of mobile phones and their embedded components for the development of next-generation imaging, sensing, diagnostics and measurement tools. The massive volume of mobile phone users, which has now reached ~7 billion, drives the rapid improvements of the hardware, software and high-end imaging and sensing technologies embedded in our phones, transforming the mobile phone into a cost-effective and yet extremely powerful platform to run e.g., biomedical tests and perform scientific measurements that would normally require advanced laboratory instruments. This rapidly evolving and continuing trend will help us transform how medicine, engineering and sciences are practiced and taught globally. PMID:24647550

  8. Framework Programmable Platform for the advanced software development workstation: Framework processor design document

    NASA Technical Reports Server (NTRS)

    Mayer, Richard J.; Blinn, Thomas M.; Mayer, Paula S. D.; Ackley, Keith A.; Crump, Wes; Sanders, Les

    1991-01-01

    The design of the Framework Processor (FP) component of the Framework Programmable Software Development Platform (FPP) is described. The FPP is a project aimed at combining effective tool and data integration mechanisms with a model of the software development process in an intelligent integrated software development environment. Guided by the model, this Framework Processor will take advantage of an integrated operating environment to provide automated support for the management and control of the software development process so that costly mistakes during the development phase can be eliminated.

  9. Solar Asset Management Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iverson, Aaron; Zviagin, George

    Ra Power Management (RPM) has developed a cloud based software platform that manages the financial and operational functions of third party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.

  10. Practical Applications of Digital Pathology.

    PubMed

    Saeed-Vafa, Daryoush; Magliocco, Anthony M

    2015-04-01

    Virtual microscopy and advances in machine learning have paved the way for the ever-expanding field of digital pathology. Multiple image-based computing environments capable of performing automated quantitative and morphological analyses are the foundation on which digital pathology is built. The applications for digital pathology in the clinical setting are numerous and are explored along with the digital software environments themselves, as well as the different analytical modalities specific to digital pathology. Prospective studies, case-control analyses, meta-analyses, and detailed descriptions of software environments were explored that pertained to digital pathology and its use in the clinical setting. Many different software environments have advanced platforms capable of improving digital pathology and potentially influencing clinical decisions. The potential of digital pathology is vast, particularly with the introduction of numerous software environments available for use. With all the digital pathology tools available as well as those in development, the field will continue to advance, particularly in the era of personalized medicine, providing health care professionals with more precise prognostic information as well as helping them guide treatment decisions.

  11. Distributed and Collaborative Software Analysis

    NASA Astrophysics Data System (ADS)

    Ghezzi, Giacomo; Gall, Harald C.

    Throughout the years software engineers have come up with a myriad of specialized tools and techniques that focus on a certain type of software analysis, such as source code analysis, co-change analysis or bug prediction. However, easy and straightforward synergies between these analyses and tools rarely exist because of their stand-alone nature, their platform dependence, their different input and output formats and the variety of data to analyze. As a consequence, distributed and collaborative software analysis scenarios, and in particular interoperability, are severely limited. We describe a distributed and collaborative software analysis platform that allows for a seamless interoperability of software analysis tools across platform, geographical and organizational boundaries. We realize software analysis tools as services that can be accessed and composed over the Internet. These distributed analysis services shall be widely accessible in our incrementally augmented Software Analysis Broker, where organizations and tool providers can register and share their tools. To allow (semi-) automatic use and composition of these tools, they are classified and mapped into a software analysis taxonomy and adhere to specific meta-models and ontologies for their category of analysis.

  12. MilxXplore: a web-based system to explore large imaging datasets.

    PubMed

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As the size of the cohorts keeps increasing with large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. MilxXplore is an open-source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user-friendly, collaborative and efficient way. Compared to existing software solutions that often provide an overview of the results at the subject level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparison of the results against the rest of the population. MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important for sharing and publishing results of imaging analysis.

  13. Assessing UAV platform types and optical sensor specifications

    NASA Astrophysics Data System (ADS)

    Altena, B.; Goedemé, T.

    2014-05-01

    Photogrammetric acquisition with unmanned aerial vehicles (UAVs) has grown extensively over the last couple of years. Such mobile platforms and their processing software have matured, resulting in a market which offers off-the-shelf mapping solutions to surveying companies and geospatial enterprises. Different approaches in platform type and optical instruments exist, though their resulting products have similar specifications. To demonstrate differences in acquisition practice, a case study over an open mine was flown with two different off-the-shelf UAVs (a fixed-wing and a multi-rotor). The resulting imagery is analyzed to clarify the differences in collection quality. We look at image settings, and stress the need for photographic experience if manual settings are applied; for mapping production it might be safest to set the camera to automatic. Furthermore, we try to estimate whether blur is present due to image motion. A subtle trend towards blur seems to be present for the fast-flying platform, though its extent is of similar order to that of the slow-moving one; both systems operate at their limits. Finally, the lens distortion is assessed with special attention to chromatic aberration. Calibration shows that such aberrations can be present, although detecting this phenomenon directly in the imagery is not straightforward. For such effects a normal lens is sufficient, though a better lens and collimator do give significant improvement.
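
    As a back-of-the-envelope illustration of the image-motion blur the study tries to estimate, blur in pixels is roughly platform speed times exposure time divided by ground sampling distance. The Python sketch below uses assumed speeds, exposure, and ground sampling distance, not values from the case study.

      # Rough motion-blur estimate: blur_px = speed * exposure_time / ground_sampling_distance
      def blur_pixels(speed_mps, exposure_s, gsd_m):
          return speed_mps * exposure_s / gsd_m

      # Assumed example values: fixed-wing at 18 m/s, multi-rotor at 4 m/s,
      # 1/1000 s exposure, 3 cm ground sampling distance.
      for name, speed in [("fixed-wing", 18.0), ("multi-rotor", 4.0)]:
          print(name, round(blur_pixels(speed, 1.0 / 1000.0, 0.03), 2), "px")
      # -> fixed-wing 0.6 px, multi-rotor 0.13 px: both close to the resolving limit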

  14. Sub-arcminute pointing from a balloonborne platform

    NASA Astrophysics Data System (ADS)

    Craig, William W.; McLean, Ryan; Hailey, Charles J.

    1998-07-01

    We describe the design and performance of the pointing and aspect reconstruction system on the Gamma-Ray Arcminute Telescope Imaging System. The payload consists of a 4 m long gamma-ray telescope, capable of producing images of the gamma-ray sky at an angular resolution of 2 arcminutes. The telescope is operated at an altitude of 40 km in azimuth/elevation pointing mode. Using a variety of sensors, including attitude GPS, fiber optic gyroscopes, and star and sun trackers, the system is capable of pointing the gamma-ray payload to within an arcminute from the balloon-borne platform. The system is designed for long-term autonomous operation and performed to specification throughout a recent 36-hour flight from Alice Springs, Australia. A star tracker and pattern recognition software developed for the mission permit aspect reconstruction to better than 10 arcseconds. The narrow-field star tracker system is capable of acquiring and identifying a star field without external input. We present flight data from all sensors and the resultant gamma-ray source localizations.

  15. Experimental results in autonomous landing approaches by dynamic machine vision

    NASA Astrophysics Data System (ADS)

    Dickmanns, Ernst D.; Werner, Stefan; Kraus, S.; Schell, R.

    1994-07-01

    The 4-D approach to dynamic machine vision, exploiting full spatio-temporal models of the process to be controlled, has been applied to onboard autonomous landing approaches of aircraft. Aside from image sequence processing, for which it was developed initially, it is also used for data fusion from a range of sensors. By prediction-error feedback, an internal representation of the aircraft state relative to the runway in 3-D space and time is servo-maintained in the interpretation process, from which the required control actions are derived. The validity and efficiency of the approach have been proven both in hardware-in-the-loop simulations and in flight experiments with a twin-turboprop Do128 aircraft under perturbations from cross winds and wind gusts. The software package has been ported to `C' and onto a new transputer image processing platform; the system has been expanded for bifocal vision with two cameras of different focal length mounted fixed relative to each other on a two-axis platform for viewing direction control.
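
    The prediction-error feedback loop can be illustrated with a toy one-dimensional predictor/corrector: an internal state representation is propagated forward with the process model and then corrected by the discrepancy between expected and observed measurements. All matrices, noise levels, and measurements in this Python sketch are illustrative assumptions, not the Do128 flight models.

      # Toy predictor/corrector (Kalman-style) illustrating prediction-error feedback
      import numpy as np

      dt = 0.04                                       # assumed 25 Hz video cycle
      F = np.array([[1.0, dt], [0.0, 1.0]])           # state transition (position, velocity)
      H = np.array([[1.0, 0.0]])                      # only position is observed in the image
      Q = np.diag([1e-4, 1e-3])                       # assumed process noise
      R = np.array([[0.5]])                           # assumed measurement noise

      x = np.zeros(2)
      P = np.eye(2)
      for z in [0.10, 0.22, 0.35, 0.49]:              # synthetic image-based position measurements
          x = F @ x                                   # predict the internal representation forward
          P = F @ P @ F.T + Q
          innovation = z - H @ x                      # prediction error: expectation vs. measurement
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + (K @ innovation).ravel()            # servo-maintain the state estimate
          P = (np.eye(2) - K @ H) @ P
      print("estimated position and velocity:", x)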

  16. Smoothing-Based Relative Navigation and Coded Aperture Imaging

    NASA Technical Reports Server (NTRS)

    Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher

    2017-01-01

    This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple, small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches the exact solution, as opposed to the linearized approximations typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.
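
    To make the smoothing-versus-filtering distinction concrete, the toy Python sketch below estimates a short chain of one-dimensional "poses" by jointly minimising absolute and relative measurement residuals, which is the maximum a posteriori estimate under Gaussian noise; a filter, in contrast, would only update the latest state. The measurement values and the SciPy solver are assumptions for illustration, not the project's algorithm.

      # Toy MAP smoothing over a whole trajectory via nonlinear least squares
      import numpy as np
      from scipy.optimize import least_squares

      z_abs = np.array([0.0, 1.1, 2.0, 2.9])          # assumed noisy absolute observations of 4 poses
      z_rel = np.array([1.0, 1.0, 1.0])               # assumed noisy relative (odometry-like) measurements

      def residuals(x):
          return np.concatenate([x - z_abs,           # absolute measurement residuals
                                 np.diff(x) - z_rel]) # relative measurement residuals

      sol = least_squares(residuals, x0=np.zeros(4))  # optimises all poses at once (smoothing)
      print("smoothed poses:", sol.x)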

  17. Web Platform for Sharing Modeling Software in the Field of Nonlinear Optics

    NASA Astrophysics Data System (ADS)

    Dubenskaya, Julia; Kryukov, Alexander; Demichev, Andrey

    2018-02-01

    We describe the prototype of a Web platform intended for sharing software programs for computer modeling in the rapidly developing field of nonlinear optics phenomena. The suggested platform is built on top of the HUBZero open-source middleware. In addition to the basic HUBZero installation, we added to our platform the capability to run Docker containers via an external application server and to send calculation programs to those containers for execution. The presented web platform provides a wide range of features and might be of benefit to nonlinear optics researchers.
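
    The container-execution step can be sketched with the Docker SDK for Python: a calculation program is dispatched to a container and its output collected. This only illustrates the general mechanism under an assumed image and command; it is not the HUBZero application-server code.

      # Sketch: run a calculation program inside a Docker container and collect its output
      import docker

      client = docker.from_env()                      # connect to the local Docker daemon
      logs = client.containers.run(
          image="python:3.11-slim",                   # assumed base image for the calculation program
          command=["python", "-c", "print('nonlinear optics run finished')"],
          remove=True,                                # discard the container after the run
      )
      print(logs.decode())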

  18. Review on the Celestial Sphere Positioning of FITS Format Image Based on WCS and Research on General Visualization

    NASA Astrophysics Data System (ADS)

    Song, W. M.; Fan, D. W.; Su, L. Y.; Cui, C. Z.

    2017-11-01

    Calculating the coordinate parameters recorded as key/value pairs in the FITS (Flexible Image Transport System) header is the key to determining a FITS image's position in the celestial coordinate system, so a general procedure for computing these parameters is of great significance. The parameters can be calculated effectively by combining the CCD-related parameters of the astronomical telescope (such as field of view, focal length, and the celestial coordinates of the optical axis), an astronomical image recognition algorithm, and WCS (World Coordinate System) theory. The CCD parameters determine the scope of the star catalogue, so they can be used to build a reference catalogue for the celestial region corresponding to the astronomical image; star pattern recognition then matches the astronomical image against this reference catalogue and yields a table relating a number of stars' CCD plane coordinates to their celestial coordinates; and, according to the chosen projection from the sphere to the plane, WCS builds the transfer functions between these two coordinate systems, so that the astronomical position of any image pixel can be determined from the previously computed table. FITS is the mainstream data format for transmitting and analysing scientific data, but FITS images can only be viewed, edited, and analysed in professional astronomy software, which limits their use in popular science education in astronomy. The realization of a general image visualization method is therefore significant. The FITS image is first converted to a PNG or JPEG image; the coordinate parameters in the FITS header are converted to metadata in the form of AVM (Astronomy Visualization Metadata), and the metadata are then added to the PNG or JPEG header. This method meets amateur astronomers' general needs for viewing and analysing astronomical images on non-astronomical software platforms. The overall design flow is realized through a Java program and tested with SExtractor, WorldWide Telescope, picture viewers, and other software.
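
    For readers unfamiliar with the WCS step, the pixel-to-celestial conversion described above is the kind of operation astropy provides once the header keywords are in place; a minimal sketch, assuming a hypothetical FITS file whose header already carries valid WCS keywords (the AVM-embedding step for PNG/JPEG output is not shown).

      # Sketch: read WCS keywords from a FITS header and map a pixel to celestial coordinates
      from astropy.io import fits
      from astropy.wcs import WCS

      with fits.open("example.fits") as hdul:         # hypothetical file name
          wcs = WCS(hdul[0].header)                   # CTYPE/CRVAL/CRPIX keywords define the projection
          ny, nx = hdul[0].data.shape
      sky = wcs.pixel_to_world(nx / 2, ny / 2)        # celestial position of the image centre
      print(sky.to_string("hmsdms"))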

  19. Open-Source Learning Management System and Web 2.0 Online Social Software Applications as Learning Platforms for an Elementary School in Singapore

    ERIC Educational Resources Information Center

    Tay, Lee Yong; Lim, Cher Ping; Lye, Sze Yee; Ng, Kay Joo; Lim, Siew Khiaw

    2011-01-01

    This paper analyses how an elementary-level future school in Singapore implements and uses various open-source online platforms, which are easily available online and could be implemented with minimal software cost, for the purpose of teaching and learning. Online platforms have the potential to facilitate students' engagement for independent and…

  20. Thermal Analysis for Monitoring Effects of Shock-Induced Physical, Mechanical, and Chemical Changes in Materials

    DTIC Science & Technology

    2015-01-19

    Proteus measurement and analysis software for data acquisition, storage, and evaluation on the MS Windows platform, which enables multitasking with simultaneous evaluation and operation.

  1. Digital Transplantation Pathology: Combining Whole Slide Imaging, Multiplex Staining, and Automated Image Analysis

    PubMed Central

    Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J

    2013-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785

  2. Ultrasound estimates of muscle quality in older adults: reliability and comparison of Photoshop and ImageJ for the grayscale analysis of muscle echogenicity

    PubMed Central

    Seamon, Bryant A.; Teixeira, Carla; Ismail, Catheeja

    2016-01-01

    Background. Quantitative diagnostic ultrasound imaging has been proposed as a method of estimating muscle quality using measures of echogenicity. The Rectangular Marquee Tool (RMT) and the Free Hand Tool (FHT) are two types of editing features used in Photoshop and ImageJ for determining a region of interest (ROI) within an ultrasound image. The primary objective of this study is to determine the intrarater and interrater reliability of Photoshop and ImageJ for the estimate of muscle tissue echogenicity in older adults via grayscale histogram analysis. The secondary objective is to compare the mean grayscale values obtained using both the RMT and FHT methods across both image analysis platforms. Methods. This cross-sectional observational study features 18 community-dwelling men (age = 61.5 ± 2.32 years). Longitudinal views of the rectus femoris were captured using B-mode ultrasound. The ROI for each scan was selected by 2 examiners using the RMT and FHT methods from each software program. Their reliability is assessed using intraclass correlation coefficients (ICCs) and the standard error of the measurement (SEM). Measurement agreement for these values is depicted using Bland-Altman plots. A paired t-test is used to determine mean differences in echogenicity expressed as grayscale values using the RMT and FHT methods to select the post-image acquisition ROI. The degree of association among ROI selection methods and image analysis platforms is analyzed using the coefficient of determination (R²). Results. The raters demonstrated excellent intrarater and interrater reliability using the RMT and FHT methods across both platforms (lower bound 95% CI ICC = .97–.99, p < .001). The mean difference between the echogenicity estimates obtained with the RMT and FHT methods was .87 grayscale levels (95% CI [.54–1.21], p < .0001) using data obtained with both programs. The SEM for Photoshop was .97 and 1.05 grayscale levels when using the RMT and FHT ROI selection methods, respectively. Comparatively, the SEM values were .72 and .81 grayscale levels, respectively, when using the RMT and FHT ROI selection methods in ImageJ. Uniform coefficients of determination (R² = .96–.99, p < .001) indicate strong positive associations among the grayscale histogram analysis measurement conditions independent of the ROI selection methods and imaging platform. Conclusion. Our method for evaluating muscle echogenicity demonstrated a high degree of intrarater and interrater reliability using both the RMT and FHT methods across 2 common image analysis platforms. The minimal measurement error exhibited by the examiners demonstrates that the ROI selection methods used with Photoshop and ImageJ are suitable for the post-acquisition image analysis of tissue echogenicity in older adults. PMID:26925339
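
    The echogenicity measure itself reduces to the mean grayscale value of the pixels inside the selected ROI; a small Python/OpenCV sketch of that computation for a rectangular (RMT-style) ROI, with a hypothetical image and coordinates, not a reproduction of the Photoshop or ImageJ workflows.

      # Sketch: mean grayscale (echogenicity estimate) inside a rectangular ROI
      import cv2

      img = cv2.imread("rectus_femoris.png", cv2.IMREAD_GRAYSCALE)   # hypothetical exported B-mode image
      x0, y0, x1, y1 = 120, 80, 320, 220                             # assumed ROI corners
      roi = img[y0:y1, x0:x1]
      print("mean grayscale (0-255):", float(roi.mean()))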

  3. relaxGUI: a new software for fast and simple NMR relaxation data analysis and calculation of ps-ns and μs motion of proteins.

    PubMed

    Bieri, Michael; d'Auvergne, Edward J; Gooley, Paul R

    2011-06-01

    Investigation of protein dynamics on the ps-ns and μs-ms timeframes provides detailed insight into the mechanisms of enzymes and the binding properties of proteins. Nuclear magnetic resonance (NMR) is an excellent tool for studying protein dynamics at atomic resolution. Analysis of relaxation data using model-free analysis can be a tedious and time consuming process, which requires good knowledge of scripting procedures. The software relaxGUI was developed for fast and simple model-free analysis and is fully integrated into the software package relax. It is written in Python and uses wxPython to build the graphical user interface (GUI) for maximum performance and multi-platform use. This software allows the analysis of NMR relaxation data with ease and the generation of publication quality graphs as well as color coded images of molecular structures. The interface is designed for simple data analysis and management. The software was tested and validated against the command line version of relax.

  4. DPOI: Distributed software system development platform for ocean information service

    NASA Astrophysics Data System (ADS)

    Guo, Zhongwen; Hu, Keyong; Jiang, Yongguo; Sun, Zhaosui

    2015-02-01

    Ocean information management is of great importance as it has been employed in many areas of ocean science and technology. However, the development of Ocean Information Systems (OISs) often suffers from low efficiency because of repetitive work and continuous modifications caused by dynamic requirements. In this paper, the basic requirements of OISs are analyzed first, and then a novel platform, DPOI, is proposed to improve development efficiency and enhance software quality of OISs by providing off-the-shelf resources. In the platform, the OIS is decomposed hierarchically into a set of modules, which can be reused in different system developments. These modules include the acquisition middleware and data loader that collect data from instruments and files, respectively, the database that stores data consistently, the components that support fast application generation, the web services that make data from distributed sources syntactically consistent by use of predefined schemas, and the configuration toolkit that enables software customization. With the assistance of the development platform, the software development needs no programming and the development procedure is thus accelerated greatly. We have applied the development platform in practical developments and evaluated its efficiency in several development practices and different development approaches. The results show that DPOI significantly improves development efficiency and software quality.

  5. Developing a methodology for three-dimensional correlation of PET–CT images and whole-mount histopathology in non-small-cell lung cancer

    PubMed Central

    Dahele, M.; Hwang, D.; Peressotti, C.; Sun, L.; Kusano, M.; Okhai, S.; Darling, G.; Yaffe, M.; Caldwell, C.; Mah, K.; Hornby, J.; Ehrlich, L.; Raphael, S.; Tsao, M.; Behzadi, A.; Weigensberg, C.; Ung, Y.C.

    2008-01-01

    Background Understanding the three-dimensional (3D) volumetric relationship between imaging and functional or histopathologic heterogeneity of tumours is a key concept in the development of image-guided radiotherapy. Our aim was to develop a methodologic framework to enable the reconstruction of resected lung specimens containing non-small-cell lung cancer (nsclc), to register the result in 3D with diagnostic imaging, and to import the reconstruction into a radiation treatment planning system. Methods and Results We recruited 12 patients for an investigation of radiology–pathology correlation (rpc) in nsclc. Before resection, imaging by positron emission tomography (pet) or computed tomography (ct) was obtained. Resected specimens were formalin-fixed for 1–24 hours before sectioning at 3-mm to 10-mm intervals. To try to retain the original shape, we embedded the specimens in agar before sectioning. Consecutive sections were laid out for photography and manually adjusted to maintain shape. Following embedding, the tissue blocks underwent whole-mount sectioning (4-μm sections) and staining with hematoxylin and eosin. Large histopathology slides were used to whole-mount entire sections for digitization. The correct sequence was maintained to assist in subsequent reconstruction. Using Photoshop (Adobe Systems Incorporated, San Jose, CA, U.S.A.), contours were placed on the photographic images to represent the external borders of the section and the extent of macroscopic disease. Sections were stacked in sequence and manually oriented in Photoshop. The macroscopic tumour contours were then transferred to MATLAB (The Mathworks, Natick, MA, U.S.A.) and stacked, producing 3D surface renderings of the resected specimen and embedded gross tumour. To evaluate the microscopic extent of disease, customized “tile-based” and commercial confocal panoramic laser scanning (TISSUEscope: Biomedical Photometrics, Waterloo, ON) systems were used to generate digital images of whole-mount histopathology sections. Using the digital whole-mount images and imaging software, we contoured the gross and microscopic extent of disease. Two methods of registering pathology and imaging were used. First, selected pet and ct images were transferred into Photoshop, where they were contoured, stacked, and reconstructed. After importing the pathology and the imaging contours to MATLAB, the contours were reconstructed, manually rotated, and rigidly registered. In the second method, MATLAB tumour renderings were exported to a software platform for manual registration with the original pet and ct images in multiple planes. Data from this software platform were then exported to the Pinnacle radiation treatment planning system in dicom (Digital Imaging and Communications in Medicine) format. Conclusions There is no one definitive method for 3D volumetric rpc in nsclc. An innovative approach to the 3D reconstruction of resected nsclc specimens incorporates agar embedding of the specimen and whole-mount digital histopathology. The reconstructions can be rigidly and manually registered to imaging modalities such as ct and pet and exported to a radiation treatment planning system. PMID:19008992

  6. A Java application for tissue section image analysis.

    PubMed

    Kamalov, R; Guillaud, M; Haskins, D; Harrison, A; Kemp, R; Chiu, D; Follen, M; MacAulay, C

    2005-02-01

    The medical industry has taken advantage of Java and Java technologies over the past few years, in large part due to the language's platform-independence and object-oriented structure. As such, Java provides powerful and effective tools for developing tissue section analysis software. The background and execution of this development are discussed in this publication. Object-oriented structure allows for the creation of "Slide", "Unit", and "Cell" objects to simulate the corresponding real-world objects. Different functions may then be created to perform various tasks on these objects, thus facilitating the development of the software package as a whole. At the current time, substantial parts of the initially planned functionality have been implemented. Getafics 1.0 is fully operational and currently supports a variety of research projects; however, there are certain features of the software that currently introduce unnecessary complexity and inefficiency. In the future, we hope to include features that obviate these problems.

  7. Antibiogramj: A tool for analysing images from disk diffusion tests.

    PubMed

    Alonso, C A; Domínguez, C; Heras, J; Mata, E; Pascual, V; Torres, C; Zarazaga, M

    2017-05-01

    Disk diffusion testing, known as antibiogram, is widely applied in microbiology to determine the antimicrobial susceptibility of microorganisms. The measurement of the diameter of the zone of growth inhibition of microorganisms around the antimicrobial disks in the antibiogram is frequently performed manually by specialists using a ruler. This is a time-consuming and error-prone task that might be simplified using automated or semi-automated inhibition zone readers. However, most readers are usually expensive instruments with embedded software that require significant changes in laboratory design and workflow. Based on the workflow employed by specialists to determine the antimicrobial susceptibility of microorganisms, we have designed a software tool that, from images of disk diffusion tests, semi-automatises the process. Standard computer vision techniques are employed to achieve such an automatisation. We present AntibiogramJ, a user-friendly and open-source software tool to semi-automatically determine, measure and categorise inhibition zones of images from disk diffusion tests. AntibiogramJ is implemented in Java and deals with images captured with any device that incorporates a camera, including digital cameras and mobile phones. The fully automatic procedure of AntibiogramJ for measuring inhibition zones achieves an overall agreement of 87% with an expert microbiologist; moreover, AntibiogramJ includes features to easily detect when the automatic reading is not correct and fix it manually to obtain the correct result. AntibiogramJ is a user-friendly, platform-independent, open-source, and free tool that, to the best of our knowledge, is the most complete software tool for antibiogram analysis without requiring any investment in new equipment or changes in the laboratory. Copyright © 2017 Elsevier B.V. All rights reserved.
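
    The kind of standard computer vision step involved can be sketched with OpenCV: locate an antimicrobial disk with a Hough circle transform, classify pixels as lawn or clear agar with an Otsu threshold, and walk outward from the disk until the ring of pixels is mostly lawn, giving the inhibition-zone diameter. This is a simplified Python illustration under assumed parameters and lighting (lawn brighter than the clear zone), not the AntibiogramJ (Java) implementation.

      # Sketch: estimate one inhibition-zone diameter from a disk diffusion test photo (assumed inputs)
      import cv2
      import numpy as np

      gray = cv2.imread("antibiogram_plate.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical plate photo
      blur = cv2.medianBlur(gray, 5)

      # 1) find the antimicrobial disk (a circle of roughly known radius)
      circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                                 param1=100, param2=30, minRadius=15, maxRadius=40)
      cx, cy, disk_r = np.round(circles[0, 0]).astype(int)

      # 2) classify pixels as bacterial lawn vs. clear agar (lawn assumed brighter here)
      _, lawn = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

      # 3) grow rings outward from the disk until most ring pixels belong to the lawn
      h, w = gray.shape
      yy, xx = np.ogrid[:h, :w]
      dist = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
      for r in range(disk_r + 2, min(h, w) // 2):
          ring = (dist >= r - 1) & (dist < r + 1)
          if lawn[ring].mean() > 127:                 # ring is mostly lawn -> zone edge reached
              print("inhibition zone diameter ~", 2 * r, "px")
              break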

  8. LIME: 3D visualisation and interpretation of virtual geoscience models

    NASA Astrophysics Data System (ADS)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, create novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight and high performance 3D software for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results. The background of the software is described and case studies from outcrop geology, in hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel methods developed.

  9. Software-based high-level synthesis design of FPGA beamformers for synthetic aperture imaging.

    PubMed

    Amaro, Joao; Yiu, Billy Y S; Falcao, Gabriel; Gomes, Marco A C; Yu, Alfred C H

    2015-05-01

    Field-programmable gate arrays (FPGAs) can potentially be configured as beamforming platforms for ultrasound imaging, but a long design time and skilled expertise in hardware programming are typically required. In this article, we present a novel approach to the efficient design of FPGA beamformers for synthetic aperture (SA) imaging via the use of software-based high-level synthesis techniques. Software kernels (coded in OpenCL) were first developed to stage-wise handle SA beamforming operations, and their corresponding FPGA logic circuitry was emulated through a high-level synthesis framework. After design space analysis, the fine-tuned OpenCL kernels were compiled into register transfer level descriptions to configure an FPGA as a beamformer module. The processing performance of this beamformer was assessed through a series of offline emulation experiments that sought to derive beamformed images from SA channel-domain raw data (40-MHz sampling rate, 12-bit resolution). With 128 channels, our FPGA-based SA beamformer can achieve 41 frames per second (fps) processing throughput (3.44 × 10⁸ pixels per second for frame size of 256 × 256 pixels) at 31.5 W power consumption (1.30 fps/W power efficiency). It utilized 86.9% of the FPGA fabric and operated at a 196.5 MHz clock frequency (after optimization). Based on these findings, we anticipate that FPGA and high-level synthesis can together foster rapid prototyping of real-time ultrasound processor modules at low power consumption budgets.
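
    For orientation, the operation those OpenCL kernels accelerate is essentially delay-and-sum: for every image pixel, each channel's sample is picked at the round-trip propagation delay and the samples are summed coherently. Below is a NumPy reference sketch for a single synthetic-aperture transmit event; the geometry, names, and synthetic data are assumptions for illustration only, not the article's kernels.

      # Reference sketch of delay-and-sum beamforming for one synthetic-aperture transmit event
      import numpy as np

      def das_single_event(rf, fs, c, elem_x, tx_x, xs, zs):
          """rf: (n_samples, n_channels) channel data; elem_x: receive element x-positions [m];
          tx_x: transmit element x-position [m]; xs, zs: image grid coordinates [m]."""
          n_samp, _ = rf.shape
          img = np.zeros((len(zs), len(xs)))
          for iz, z in enumerate(zs):
              for ix, x in enumerate(xs):
                  t_tx = np.hypot(x - tx_x, z) / c                    # transmit path delay
                  t_rx = np.hypot(x - elem_x, z) / c                  # per-channel receive delays
                  idx = np.round((t_tx + t_rx) * fs).astype(int)      # sample index per channel
                  ch = np.flatnonzero(idx < n_samp)
                  img[iz, ix] = rf[idx[ch], ch].sum()                 # coherent sum across channels
          return img

      # Tiny synthetic example: 4 channels of random data beamformed onto a 2 x 4 pixel grid
      rng = np.random.default_rng(0)
      out = das_single_event(rng.standard_normal((2048, 4)), fs=40e6, c=1540.0,
                             elem_x=np.linspace(-1.5e-3, 1.5e-3, 4), tx_x=0.0,
                             xs=np.linspace(-2e-3, 2e-3, 4), zs=np.linspace(5e-3, 8e-3, 2))
      print(out.shape)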

  10. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multimode-compatible image acquisition system for an HD area array CCD is designed. The hardware and software designs of the color video capture system for the HD area array CCD KAI-02150 from Truesense Imaging are analyzed, and the structural parameters of the HD area array CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. The noise in the video signal (kTC noise and 1/f noise) is filtered using the Correlated Double Sampling (CDS) technique to enhance the signal-to-noise ratio of the system. Compatible hardware and software designs for the two other image sensors of the same series, KAI-04050 and KAI-08050, are also put forward; these two HD image sensors have as many as four million and eight million effective pixels, respectively. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system to carry out a top-down modular design, which realizes the hardware design in software and improves development efficiency. Finally, the required timing drive signals are simulated accurately using the Quartus II 12.1 development platform together with VHDL. The simulation results indicate that the driving circuit is characterized by a simple framework, low power consumption, and strong anti-interference ability, meeting current demands for miniaturization and high definition.

  11. Flexible Software Architecture for Visualization and Seismic Data Analysis

    NASA Astrophysics Data System (ADS)

    Petunin, S.; Pavlov, I.; Mogilenskikh, D.; Podzyuban, D.; Arkhipov, A.; Baturuin, N.; Lisin, A.; Smith, A.; Rivers, W.; Harben, P.

    2007-12-01

    Research in the field of seismology requires software and signal processing utilities for seismogram manipulation and analysis. Seismologists and data analysts often encounter a major problem in the use of any particular software application specific to seismic data analysis: the tuning of commands and windows to the specific waveforms and hot key combinations so as to fit their familiar informational environment. The ability to modify the user's interface independently from the developer requires an adaptive code structure. An adaptive code structure also allows for expansion of software capabilities such as new signal processing modules and implementation of more efficient algorithms. Our approach is to use a flexible "open" architecture for development of geophysical software. This report presents an integrated solution for organizing a logical software architecture based on the Unix version of the Geotool software implemented on the Microsoft .NET 2.0 platform. Selection of this platform greatly expands the variety and number of computers that can implement the software, including laptops that can be utilized in field conditions. It also facilitates implementation of communication functions for seismic data requests from remote databases through the Internet. The main principle of the new architecture for Geotool is that scientists should be able to add new routines for digital waveform analysis via software plug-ins that utilize the basic Geotool display for GUI interaction. The use of plug-ins allows the efficient integration of diverse signal-processing software, including software still in preliminary development, into an organized platform without changing the fundamental structure of that platform itself. An analyst's use of Geotool is tracked via a metadata file so that future studies can reconstruct, and alter, the original signal processing operations. The work has been completed in the framework of a joint Russian-American project.

  12. Campus Energy Model for Control and Performance Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2014-09-19

    The core of the modeling platform is an extensible block library for the MATLAB/Simulink software suite. The platform enables true co-simulation (interaction at each simulation time step) with NREL's state-of-the-art modeling tools and other energy modeling software.

  13. Status report of the end-to-end ASKAP software system: towards early science operations

    NASA Astrophysics Data System (ADS)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 reflector antennas of 12 m diameter, each equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way it has produced some great science results. Commissioning of ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: the ingest nodes (a 16-node cluster), the fast temporary storage (a 1 PB Lustre file system) and the processing supercomputer (a 200 TFlop system). This HPC platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional" or user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline, which is required to support the full 300 MHz bandwidth of Array Release 1, starts in early 2016 and will be followed by the deployment of the real-time data processing components. In addition to the Central Processor, the first production release of the CSIRO ASKAP Science Data Archive (CASDA) has also been deployed in one of the Pawsey Supercomputing Centre facilities and is integrated into the end-to-end ASKAP data flow system. This paper describes the current status of the "end-to-end" data flow software system from preparing observations to data acquisition, processing and archiving, and the challenges of integrating an HPC facility as a key part of the instrument. It also shares some lessons learned since the start of integration activities and the challenges ahead in preparation for the start of the Early Science program.

  14. Development of a ROV Deployed Video Analysis Tool for Rapid Measurement of Submerged Oil/Gas Leaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savas, Omer

    Expanded deep sea drilling around the globe makes it necessary to have readily available tools to quickly and accurately measure discharge rates from accidental submerged oil/gas leak jets so that first responders can deploy adequate resources for containment. We have developed and tested a field-deployable video analysis software package which is able to provide, in the field, sufficiently accurate flow rate estimates for initial responders to accidental oil discharges in submarine operations. The essence of our approach is based on tracking coherent features at the interface in the near field of immiscible turbulent jets. The software package, UCB_Plume, is ready to be used by first responders for field implementation. We have tested the tool on submerged water and oil jets which are made visible using fluorescent dyes. We have been able to estimate the discharge rate to within 20% accuracy. A high-end Windows laptop computer is suggested as the operating platform, and a USB-connected high-speed, high-resolution monochrome camera as the imaging device is sufficient for acquiring flow images under continuous unidirectional illumination and running the software in the field. Results are obtained in a matter of minutes.
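
    The underlying estimate can be illustrated in a few lines: track the displacement of interface features between two frames to get a near-field velocity, then scale by the jet cross-section to obtain a volumetric rate. The Python sketch below assumes the tracked feature speed approximates the discharge velocity, which is a simplification of the UCB_Plume approach, and every number in it is hypothetical.

      # Toy flow-rate estimate from tracked interface features (hypothetical numbers, simplified physics)
      import math

      pixel_size_m = 0.002          # metres per pixel at the jet, from scene scale (assumed)
      frame_dt_s = 1.0 / 500.0      # high-speed camera frame interval (assumed)
      mean_shift_px = 6.3           # average feature displacement between consecutive frames (assumed)
      orifice_diameter_m = 0.05     # leak orifice diameter (assumed)

      velocity = mean_shift_px * pixel_size_m / frame_dt_s            # ~6.3 m/s near-field speed
      area = math.pi * orifice_diameter_m ** 2 / 4.0
      print("discharge ~", round(velocity * area, 4), "m^3/s")        # ~0.0124 m^3/s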

  15. Earth Surface Monitoring with COSI-Corr, Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Leprince, S.; Ayoub, F.; Avouac, J.

    2009-12-01

    Co-registration of Optically Sensed Images and Correlation (COSI-Corr) is a software package developed at the California Institute of Technology (USA) for accurate geometrical processing of optical satellite and aerial imagery. Initially developed for the measurement of co-seismic ground deformation using optical imagery, COSI-Corr is now used for a wide range of applications in Earth Sciences, which take advantage of the software's capability to co-register, with very high accuracy, images taken from different sensors and acquired at different times. As long as a sensor is supported in COSI-Corr, all images between the supported sensors can be accurately orthorectified and co-registered. For example, it is possible to co-register a series of SPOT images, a series of aerial photographs, as well as to register a series of aerial photographs with a series of SPOT images, etc... Currently supported sensors include the SPOT 1-5, Quickbird, Worldview 1 and Formosat 2 satellites, the ASTER instrument, and frame camera acquisitions from e.g., aerial survey or declassified satellite imagery. Potential applications include accurate change detection between multi-temporal and multi-spectral images, and the calibration of pushbroom cameras. In particular, COSI-Corr provides a powerful correlation tool, which allows for accurate estimation of surface displacement. The accuracy depends on many factors (e.g., cloud, snow, and vegetation cover, shadows, temporal changes in general, steadiness of the imaging platform, defects of the imaging system, etc...) but in practice, the standard deviation of the measurements obtained from the correlation of multi-temporal images is typically around 1/20 to 1/10 of the pixel size. The software package also includes post-processing tools such as denoising, destriping, and stacking tools to facilitate data interpretation. Examples drawn from current research in, e.g., seismotectonics, glaciology, and geomorphology will be presented. COSI-Corr is developed in IDL (Interactive Data Language), integrated under the user-friendly interface ENVI (Environment for Visualizing Images), and is distributed free of charge for academic research purposes.
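
    A comparable subpixel displacement measurement (though not COSI-Corr's IDL/ENVI frequency-domain correlator itself) can be sketched in Python with scikit-image's phase correlation, which resolves shifts to a fraction of a pixel when an upsampling factor is supplied; the image names are placeholders.

      # Sketch: subpixel offset between two co-registered image patches via phase correlation
      from skimage.io import imread
      from skimage.registration import phase_cross_correlation

      before = imread("pre_event_patch.tif", as_gray=True)            # hypothetical orthorectified patches
      after = imread("post_event_patch.tif", as_gray=True)

      # upsample_factor=20 resolves shifts down to ~1/20 of a pixel, comparable to the accuracy quoted above
      shift, error, _ = phase_cross_correlation(before, after, upsample_factor=20)
      print("displacement (rows, cols) in pixels:", shift)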

  16. P-MartCancer: A New Online Platform to Access CPTAC Datasets and Enable New Analyses | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    The November 1, 2017 issue of Cancer Research is dedicated to a collection of computational resource papers in genomics, proteomics, animal models, imaging, and clinical subjects for non-bioinformaticists looking to incorporate computing tools into their work. Scientists at Pacific Northwest National Laboratory have developed P-MartCancer, an open, web-based interactive software tool that enables statistical analyses of peptide or protein data generated from mass-spectrometry (MS)-based global proteomics experiments.

  17. Mining biomedical images towards valuable information retrieval in biomedical and life sciences.

    PubMed

    Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas

    2016-01-01

    Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches, and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, which creates a need for bioimaging platforms that extract and analyse the text and content of biomedical images and can be leveraged to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved, and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction, and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.

  18. REVEAL: Software Documentation and Platform Migration

    NASA Technical Reports Server (NTRS)

    Wilson, Michael A.; Veibell, Victoir T.; Freudinger, Lawrence C.

    2008-01-01

    The Research Environment for Vehicle Embedded Analysis on Linux (REVEAL) is reconfigurable data acquisition software designed for network-distributed test and measurement applications. In development since 2001, it has been successfully demonstrated in support of a number of actual missions within NASA's Suborbital Science Program. Improvements to software configuration control were needed to properly support both an ongoing transition to operational status and continued evolution of REVEAL capabilities. For this reason, the project described in this report targets REVEAL software source documentation and deployment of the software on a small set of hardware platforms different from what is currently used in the baseline system implementation. This report specifically describes the actions taken over a ten-week period by two undergraduate student interns and serves as a final report for that internship. The topics discussed include: the documentation of REVEAL source code; the migration of REVEAL to other platforms; and an end-to-end field test that successfully validates the efforts.

  19. Development and implementation of an EPID-based method for localizing isocenter.

    PubMed

    Hyer, Daniel E; Mart, Christopher J; Nixon, Earl

    2012-11-08

    The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of radiation isocenter to an accuracy of less than 1 mm using the EPID (Electronic Portal Imaging Device). The proposed solution uses a collimator setting of 10 × 10 cm² to acquire EPID images of a new phantom constructed from LEGO blocks. Images from a number of gantry and collimator angles are analyzed by automated analysis software to determine the position of the jaws and center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180° collimator rotation to determine if the phantom is centered in the dimension being investigated. Repeated tests show that the system is reproducible and independent of the imaging session, and calculated offsets of the phantom from radiation isocenter are a function of phantom setup only. The accuracy of the algorithm's calculated offsets was verified by imaging the LEGO phantom before and after applying the calculated offset. These measurements show that the offsets are predicted with an accuracy of approximately 0.3 mm, which is on the order of the detector's pitch. Comparison with a star-shot analysis yielded agreement of isocenter location within 0.5 mm. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two linac manufacturers. A Varian Optical Guidance Platform (OGP) calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at radiation isocenter to reduce setup uncertainty in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment and OGP calibration on a monthly basis.
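
    The centering test reduces to simple arithmetic: if the jaw-to-phantom-centre distance is d0 at the nominal collimator angle and d180 after a 180° collimator rotation, half their difference is the phantom's offset from radiation isocenter along that axis. A hedged Python sketch of that step, with variable names and sample distances that are illustrative rather than taken from the study:

      # Offset of the phantom from radiation isocenter along one axis, from a pair of EPID images
      def isocenter_offset_mm(d_col0_mm, d_col180_mm):
          """Half the change in jaw-to-phantom-centre distance across a 180-degree collimator rotation."""
          return (d_col0_mm - d_col180_mm) / 2.0

      # Assumed example: the two EPID measurements differ by 0.6 mm
      print(isocenter_offset_mm(50.3, 49.7), "mm")    # -> 0.3 mm, on the order of the detector pitch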

  20. ODIN-object-oriented development interface for NMR.

    PubMed

    Jochimsen, Thies H; von Mengershausen, Michael

    2004-09-01

    A cross-platform development environment for nuclear magnetic resonance (NMR) experiments is presented. It allows rapid prototyping of new pulse sequences and provides a common programming interface for different system types. With this object-oriented interface implemented in C++, the programmer is capable of writing applications to control an experiment that can be executed on different measurement devices, even from different manufacturers, without the need to modify the source code. Due to the clear design of the software, new pulse sequences can be created, tested, and executed within a short time. To post-process the acquired data, an interface to well-known numerical libraries is part of the framework. This allows a transparent integration of the data processing instructions into the measurement module. The software focuses mainly on NMR imaging, but can also be used with limitations for spectroscopic experiments. To demonstrate the capabilities of the framework, results of the same experiment, carried out on two NMR imaging systems from different manufacturers are shown and compared with the results of a simulation.

  1. A LabVIEW®-based software for the control of the AUTORAD platform: a fully automated multisequential flow injection analysis Lab-on-Valve (MSFIA-LOV) system for radiochemical analysis.

    PubMed

    Barbesi, Donato; Vicente Vilas, Víctor; Millet, Sylvain; Sandow, Miguel; Colle, Jean-Yves; Aldave de Las Heras, Laura

    2017-01-01

    A LabVIEW®-based software for the control of the fully automated multi-sequential flow injection analysis Lab-on-Valve (MSFIA-LOV) platform AutoRAD performing radiochemical analysis is described. The analytical platform interfaces an Arduino®-based device triggering multiple detectors, providing a flexible and fit-for-purpose choice of detection systems. The different analytical devices are interfaced to the PC running LabVIEW® VI software using USB and RS232 interfaces, both for sending commands and receiving confirmation or error responses. The AUTORAD platform has been successfully applied for the chemical separation and determination of Sr, an important fission product pertinent to nuclear waste.
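
    The command/confirmation exchange with the Arduino®-based trigger device follows the familiar serial request-response pattern; a minimal pySerial sketch under an assumed port, baud rate, and command strings (the actual AUTORAD protocol and LabVIEW® VIs are not reproduced here).

      # Sketch: send a command to a serial-attached trigger device and wait for its acknowledgement
      import serial                                   # pySerial

      with serial.Serial("/dev/ttyACM0", 9600, timeout=2) as port:    # assumed port and baud rate
          port.write(b"TRIGGER DET1\n")               # hypothetical command string
          reply = port.readline().decode(errors="replace").strip()
          if reply != "OK":                           # hypothetical confirmation token
              raise RuntimeError(f"device reported: {reply!r}")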

  2. Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software

    NASA Technical Reports Server (NTRS)

    Hunter, George; Boisvert, Benjamin

    2013-01-01

    This document is the final report for the project entitled "Upgrades to the Probabilistic NAS Platform Air Traffic Simulation Software." This report consists of 17 sections which document the results of the several subtasks of this effort. The Probabilistic NAS Platform (PNP) is an air operations simulation platform developed and maintained by the Saab Sensis Corporation. The improvements made to the PNP simulation include the following: an airborne distributed separation assurance capability, a required time of arrival assignment and conformance capability, and a tactical and strategic weather avoidance capability.

  3. OpenNEX, a private-public partnership in support of the national climate assessment

    NASA Astrophysics Data System (ADS)

    Nemani, R. R.; Wang, W.; Michaelis, A.; Votava, P.; Ganguly, S.

    2016-12-01

    The NASA Earth Exchange (NEX) is a collaborative computing platform that has been developed with the objective of bringing scientists together with the software tools, massive global datasets, and supercomputing resources necessary to accelerate research in Earth systems science and global change. NEX is funded as an enabling tool for sustaining the national climate assessment. Over the past five years, researchers have used the NEX platform and produced a number of data sets highly relevant to the National Climate Assessment. These include high-resolution climate projections using different downscaling techniques and trends in historical climate from satellite data. To enable a broader community in exploiting the above datasets, the NEX team partnered with public cloud providers to create the OpenNEX platform. OpenNEX provides ready access to NEX data holdings on a number of public cloud platforms along with pertinent analysis tools and workflows in the form of Machine Images and Docker Containers, lectures and tutorials by experts. We will showcase some of the applications of OpenNEX data and tools by the community on Amazon Web Services, Google Cloud and the NEX Sandbox.

  4. Programmable Logic Device (PLD) Design Description for the Integrated Power, Avionics, and Software (iPAS) Space Telecommunications Radio System (STRS) Radio

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.

    2017-01-01

    The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software-defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS-compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. At the conclusion of the development, the software and hardware description language (HDL) code was delivered to JSC for their use in their iPAS test bed to get hands-on experience with the STRS standard, and for development of their own STRS Waveforms on the now STRS-compliant platform. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx ML605 Virtex-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an Embedded PC (Axiomtek eBox 620-110-FL) running the Ubuntu 12.04 operating system. Figure 1 shows the RIACS platform hardware. The result of this development is a very low-cost STRS-compliant platform that can be used for waveform developments for multiple applications. The purpose of this document is to describe the design of the HDL code for the FPGA portion of the iPAS STRS Radio, particularly the design of the FPGA wrapper and the test waveform.

  5. A case study in open source innovation: developing the Tidepool Platform for interoperability in type 1 diabetes management.

    PubMed

    Neinstein, Aaron; Wong, Jenise; Look, Howard; Arbiter, Brandon; Quirk, Kent; McCanne, Steve; Sun, Yao; Blum, Michael; Adi, Saleh

    2016-03-01

    Develop a device-agnostic cloud platform to host diabetes device data and catalyze an ecosystem of software innovation for type 1 diabetes (T1D) management. An interdisciplinary team decided to establish a nonprofit company, Tidepool, and build open-source software. Through a user-centered design process, the authors created a software platform, the Tidepool Platform, to upload and host T1D device data in an integrated, device-agnostic fashion, as well as an application ("app"), Blip, to visualize the data. Tidepool's software utilizes the principles of modular components, modern web design including REST APIs and JavaScript, cloud computing, agile development methodology, and robust privacy and security. By consolidating the currently scattered and siloed T1D device data ecosystem into one open platform, Tidepool can improve access to the data and enable new possibilities and efficiencies in T1D clinical care and research. The Tidepool Platform decouples diabetes apps from diabetes devices, allowing software developers to build innovative apps without requiring them to design a unique back-end (e.g., database and security) or unique ways of ingesting device data. It allows people with T1D to choose to use any preferred app regardless of which device(s) they use. The authors believe that the Tidepool Platform can solve two current problems in the T1D device landscape: 1) limited access to T1D device data and 2) poor interoperability of data from different devices. If proven effective, Tidepool's open source, cloud model for health data interoperability is applicable to other healthcare use cases. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
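
    To illustrate the kind of device-agnostic REST interaction described, the sketch below uploads and retrieves normalized device records with the requests library. The host, endpoint paths, JSON field names, and token handling are hypothetical placeholders, not Tidepool's actual API.

        import requests

        BASE_URL = "https://api.example-tidepool.org"      # hypothetical host
        session = requests.Session()
        session.headers.update({"Authorization": "Bearer <token>"})  # token obtained elsewhere

        # Upload a batch of normalized glucose readings for a user.
        readings = [
            {"type": "cbg", "time": "2016-01-15T08:00:00Z", "value": 6.2, "units": "mmol/L"},
            {"type": "cbg", "time": "2016-01-15T08:05:00Z", "value": 6.4, "units": "mmol/L"},
        ]
        resp = session.post(f"{BASE_URL}/v1/users/user123/data", json=readings, timeout=30)
        resp.raise_for_status()

        # Any app can later pull the same device-agnostic records back for visualization.
        data = session.get(f"{BASE_URL}/v1/users/user123/data",
                           params={"type": "cbg", "startDate": "2016-01-15"}, timeout=30).json()
        print(len(data), "records retrieved")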

  6. A case study in open source innovation: developing the Tidepool Platform for interoperability in type 1 diabetes management

    PubMed Central

    Wong, Jenise; Look, Howard; Arbiter, Brandon; Quirk, Kent; McCanne, Steve; Sun, Yao; Blum, Michael; Adi, Saleh

    2016-01-01

    Objective Develop a device-agnostic cloud platform to host diabetes device data and catalyze an ecosystem of software innovation for type 1 diabetes (T1D) management. Materials and Methods An interdisciplinary team decided to establish a nonprofit company, Tidepool, and build open-source software. Results Through a user-centered design process, the authors created a software platform, the Tidepool Platform, to upload and host T1D device data in an integrated, device-agnostic fashion, as well as an application (“app”), Blip, to visualize the data. Tidepool’s software utilizes the principles of modular components, modern web design including REST APIs and JavaScript, cloud computing, agile development methodology, and robust privacy and security. Discussion By consolidating the currently scattered and siloed T1D device data ecosystem into one open platform, Tidepool can improve access to the data and enable new possibilities and efficiencies in T1D clinical care and research. The Tidepool Platform decouples diabetes apps from diabetes devices, allowing software developers to build innovative apps without requiring them to design a unique back-end (e.g., database and security) or unique ways of ingesting device data. It allows people with T1D to choose to use any preferred app regardless of which device(s) they use. Conclusion The authors believe that the Tidepool Platform can solve two current problems in the T1D device landscape: 1) limited access to T1D device data and 2) poor interoperability of data from different devices. If proven effective, Tidepool’s open source, cloud model for health data interoperability is applicable to other healthcare use cases. PMID:26338218

  7. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE PAGES

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    2017-10-30

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform, with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.
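
    As a rough illustration of storing chemical structures as JSON rather than XML, the sketch below serializes a small molecule with Python's json module; the schema (atoms, coordinates, bonds) and the endpoint URL are simplified assumptions, not the platform's exact format.

        import json
        import requests

        # Simplified, assumed structure for a molecule record.
        water = {
            "name": "water",
            "atoms": {
                "elements": ["O", "H", "H"],
                "coords_3d": [0.000, 0.000, 0.117,
                              0.000, 0.757, -0.469,
                              0.000, -0.757, -0.469],
            },
            "bonds": {"connections": [[0, 1], [0, 2]], "order": [1, 1]},
            "properties": {"charge": 0, "multiplicity": 1},
        }

        # Serialize for storage or for a JavaScript web client.
        payload = json.dumps(water, indent=2)
        print(payload)

        # Hypothetical endpoint of a federated instance accepting molecule records:
        # requests.post("https://chemdata.example.org/api/v1/molecules", data=payload,
        #               headers={"Content-Type": "application/json"})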

  8. Open chemistry: RESTful web APIs, JSON, NWChem and the modern web application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanwell, Marcus D.; de Jong, Wibe A.; Harris, Christopher J.

    An end-to-end platform for chemical science research has been developed that integrates data from computational and experimental approaches through a modern web-based interface. The platform offers an interactive visualization and analytics environment that functions well on mobile, laptop and desktop devices. It offers pragmatic solutions to ensure that large and complex data sets are more accessible. Existing desktop applications/frameworks were extended to integrate with high-performance computing resources, and offer command-line tools to automate interaction - connecting distributed teams to this software platform on their own terms. The platform was developed openly, and all source code hosted on the GitHub platform, with automated deployment possible using Ansible coupled with standard Ubuntu-based machine images deployed to cloud machines. The platform is designed to enable teams to reap the benefits of the connected web - going beyond what conventional search and analytics platforms offer in this area. It also has the goal of offering federated instances that can be customized to the sites/research performed. Data gets stored using JSON, extending upon previous approaches using XML, building structures that support computational chemistry calculations. These structures were developed to make it easy to process data across different languages, and send data to a JavaScript-based web client.

  9. Evolution of a Reconfigurable Processing Platform for a Next Generation Space Software Defined Radio

    NASA Technical Reports Server (NTRS)

    Kacpura, Thomas J.; Downey, Joseph A.; Anderson, Keffery R.; Baldwin, Keith

    2014-01-01

    The National Aeronautics and Space Administration (NASA)/Harris Ka-Band Software Defined Radio (SDR) is the first fully reprogrammable, space-qualified SDR operating in the Ka-Band frequency range. Providing far higher data communication rates than previously possible, this SDR offers in-orbit reconfiguration, multi-waveform operation, and fast deployment due to its highly modular hardware and software architecture. Currently in operation on the International Space Station (ISS), this new paradigm of reconfigurable technology is enabling experimenters to investigate navigation and networking in the space environment. The modular SDR and the NASA-developed Space Telecommunications Radio System (STRS) architecture standard are the basis for Harris' reusable, digital signal processing space platform, trademarked as AppSTAR. As a result, two new space radio products are a synthetic aperture radar payload and an Automatic Dependent Surveillance-Broadcast (ADS-B) receiver. In addition, Harris is currently developing many new products similar to the Ka-Band software defined radio for other applications. For NASA's next-generation flight Ka-Band radio development, leveraging these advancements could lead to a more robust and more capable software defined radio. The space environment has special considerations, different from terrestrial applications, that must be taken into account for any system operated in space. Each space mission has unique requirements that can make these systems unique, and such requirements can result in products that are expensive and limited in reuse. Space systems put a premium on size, weight and power. A key trade is the amount of reconfigurability in a space system. The more reconfigurable the hardware platform, the easier it is to adapt the platform to the next mission, and this reduces the amount of non-recurring engineering costs. However, the more reconfigurable platforms often use more spacecraft resources. Software has similar considerations to hardware. Having an architecture standard promotes reuse of software and firmware. Space platforms have limited processor capability, which makes the trade on the amount of flexibility paramount.

  10. A cross-platform freeware tool for digital reconstruction of neuronal arborizations from image stacks.

    PubMed

    Brown, Kerry M; Donohue, Duncan E; D'Alessandro, Giampaolo; Ascoli, Giorgio A

    2005-01-01

    Digital reconstruction of neuronal arborizations is an important step in the quantitative investigation of cellular neuroanatomy. In this process, neurites imaged by microscopy are semi-manually traced through the use of specialized computer software and represented as binary trees of branching cylinders (or truncated cones). This form of reconstruction file is efficient and parsimonious, and allows extensive morphometric analysis as well as the implementation of biophysical models of electrophysiology. Here, we describe Neuron_Morpho, a plugin for the popular Java application ImageJ that mediates the digital reconstruction of neurons from image stacks. Both the executable and code of Neuron_Morpho are freely distributed (www.maths.soton.ac.uk/staff/D'Alessandro/morpho or www.krasnow.gmu.edu/L-Neuron), and are compatible with all major computer platforms (including Windows, Mac, and Linux). We tested Neuron_Morpho by reconstructing two neurons from each of two preparations representing different brain areas (hippocampus and cerebellum), neuritic types (pyramidal cell dendrites and olivary axonal projection terminals), and labeling methods (rapid Golgi impregnation and anterograde dextran amine), and quantitatively comparing the resulting morphologies to those of the same cells reconstructed with the standard commercial system, Neurolucida. None of the numerous morphometric measures that were analyzed displayed any significant or systematic difference between the two reconstruction systems.
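
    The sketch below illustrates, in simplified form, the kind of tree-of-cylinders representation described: each node stores a 3D point, a radius, and a parent index, from which basic morphometrics can be derived. The class and function names are illustrative, not Neuron_Morpho's internal format.

        import math
        from dataclasses import dataclass

        @dataclass
        class Node:
            x: float
            y: float
            z: float
            radius: float
            parent: int  # index of parent node, -1 for the root (soma)

        def total_length(nodes):
            """Sum of segment lengths between each node and its parent."""
            length = 0.0
            for n in nodes:
                if n.parent >= 0:
                    p = nodes[n.parent]
                    length += math.dist((n.x, n.y, n.z), (p.x, p.y, p.z))
            return length

        def branch_point_count(nodes):
            """Nodes with more than one child are branch points (bifurcations)."""
            children = [0] * len(nodes)
            for n in nodes:
                if n.parent >= 0:
                    children[n.parent] += 1
            return sum(1 for c in children if c > 1)

        # Toy tree: a soma with one bifurcating dendrite.
        tree = [Node(0, 0, 0, 5.0, -1), Node(10, 0, 0, 1.0, 0),
                Node(20, 5, 0, 0.8, 1), Node(20, -5, 0, 0.8, 1)]
        print(total_length(tree), branch_point_count(tree))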

  11. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.

  12. Automatic Orientation of Large Blocks of Oblique Images

    NASA Astrophysics Data System (ADS)

    Rupnik, E.; Nex, F.; Remondino, F.

    2013-05-01

    Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to advantages such as ease of interpretation, completeness through mitigation of occluded areas, and system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. The configuration of the cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable. For large image blocks this is certainly an impediment. In the theoretical part of the work we review three common least squares adjustment methods and recap possible ways to orient a multi-camera system. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon EOS 1D Mark III) with different focal lengths. Oblique cameras are rotated in the four looking directions (forward, backward, left and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a developed tool to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold: reduced computational time and improved success of the Bundle Block Adjustment. It exploits the georeferenced information provided by the Applanix system in constraining feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately, an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pix.

  13. Usability of a real-time tracked augmented reality display system in musculoskeletal injections

    NASA Astrophysics Data System (ADS)

    Baum, Zachary; Ungi, Tamas; Lasso, Andras; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Image-guided needle interventions are seldom performed with augmented reality guidance in clinical practice due to many workspace and usability restrictions. We propose a real-time optically tracked image overlay system to make image-guided musculoskeletal injections more efficient and assess its usability in a bed-side clinical environment. METHODS: An image overlay system consisting of an optically tracked viewbox, tablet computer, and semitransparent mirror allows users to navigate scanned patient volumetric images in real-time using software built on the open-source 3D Slicer application platform. A series of experiments were conducted to evaluate the latency and screen refresh rate of the system using different image resolutions. To assess the usability of the system and software, five medical professionals were asked to navigate patient images while using the overlay and completed a questionnaire to assess the system. RESULTS: In assessing the latency of the system with scanned images of varying size, screen refresh rates were approximately 5 FPS. The study showed that participants found using the image overlay system easy, and found the table-mounted system significantly more usable and effective than the handheld system. CONCLUSION: The system's latency was comparable across scanned images of varying size. During our usability study, participants preferred the table-mounted system over the handheld one. The participants also felt that the system itself was simple to use and understand. With these results, the image overlay system shows promise for use in a clinical environment.

  14. Online molecular image repository and analysis system: A multicenter collaborative open-source infrastructure for molecular imaging research and application.

    PubMed

    Rahman, Mahabubur; Watabe, Hiroshi

    2018-05-01

    Molecular imaging serves as an important tool for researchers and clinicians to visualize and investigate complex biochemical phenomena using specialized instruments; these instruments are either used individually or in combination with targeted imaging agents to obtain images related to specific diseases with high sensitivity, specificity, and signal-to-noise ratios. However, molecular imaging, which is a multidisciplinary research field, faces several challenges, including the integration of imaging informatics with bioinformatics and medical informatics, the requirement for reliable and robust image analysis algorithms, effective quality control of imaging facilities, and challenges related to individualized disease mapping, data sharing, software architecture, and knowledge management. As a cost-effective and open-source approach to address these challenges, we developed a flexible, transparent, and secure infrastructure, named MIRA (Molecular Imaging Repository and Analysis), primarily using the Python programming language and a MySQL relational database system deployed on a Linux server. MIRA is designed with a centralized image archiving infrastructure and information database so that a multicenter collaborative informatics platform can be built. The capability of dealing with metadata, normalizing image file formats, and storing and viewing different types of documents and multimedia files makes MIRA considerably flexible. With features like logging, auditing, commenting, sharing, and searching, MIRA is useful as an Electronic Laboratory Notebook for effective knowledge management. In addition, MIRA's centralized approach facilitates on-the-fly remote access to all its features through any web browser. Furthermore, the open-source approach provides the opportunity for sustainable continued development. MIRA offers an infrastructure that can be used as a cross-boundary collaborative molecular imaging research platform to accelerate advances in cancer diagnosis and therapeutics. Copyright © 2018 Elsevier Ltd. All rights reserved.
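
    A bare-bones sketch of the centralized metadata-archiving idea is shown below, using Python's built-in sqlite3 so the example is self-contained; MIRA itself is described as using MySQL on a Linux server, and the table and column names here are illustrative.

        import sqlite3

        conn = sqlite3.connect("mi_repository.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS image_records (
                id          INTEGER PRIMARY KEY,
                study_id    TEXT NOT NULL,
                modality    TEXT NOT NULL,      -- e.g. PET, SPECT, optical
                tracer      TEXT,
                acquired_at TEXT,
                file_path   TEXT NOT NULL,      -- normalized image file in the archive
                notes       TEXT                -- free-text lab-notebook style comment
            )
        """)

        conn.execute(
            "INSERT INTO image_records (study_id, modality, tracer, acquired_at, file_path, notes) "
            "VALUES (?, ?, ?, ?, ?, ?)",
            ("STUDY-001", "PET", "FDG", "2018-02-14T10:30:00",
             "/archive/STUDY-001/scan001.nii", "baseline scan"),
        )
        conn.commit()

        # A collaborator at another center can later search the shared metadata.
        for row in conn.execute(
                "SELECT study_id, modality, acquired_at FROM image_records WHERE modality = ?",
                ("PET",)):
            print(row)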

  15. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, operating system support, and ease of software integration. This paper briefly describes a couple of the consumer digital camera standards, and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  16. Digital data from shuttle photography: The effects of platform variables

    NASA Technical Reports Server (NTRS)

    Davis, Bruce E.

    1987-01-01

    Two major criticisms of using Shuttle hand-held photography as an Earth science sensor are that it is nondigital and nonquantitative, and that it has inconsistent platform characteristics, e.g., variable look angles, especially as compared to remote sensing satellites such as LANDSAT and SPOT. However, these criticisms are assumptions and have not been systematically investigated. The spectral effects of off-nadir views of hand-held photography from the Shuttle and their role in interpretation of lava flow morphology on the island of Hawaii are studied. Digitization of photography at JSC and use of LIPS image analysis software in obtaining data are discussed. Preliminary interpretative results of one flow are given. Most of the time was spent in developing procedures and overcoming equipment problems. Preliminary data are satisfactory for detailed analysis.

  17. Simulation of parafoil reconnaissance imagery

    NASA Astrophysics Data System (ADS)

    Kogler, Kent J.; Sutkus, Linas; Troast, Douglas; Kisatsky, Paul; Charles, Alain M.

    1995-08-01

    Reconnaissance from unmanned platforms is currently of interest to DoD and civil sectors concerned with drug trafficking and illegal immigration. Platforms employed vary from motorized aircraft to tethered balloons. One approach currently under evaluation deploys a TV camera suspended from a parafoil delivered to the area of interest by a cannon-launched projectile. Imagery is then transmitted to a remote monitor for processing and interpretation. This paper presents results of imagery obtained from simulated parafoil flights, in which software techniques were developed to introduce image degradation caused by atmospheric obscurants and by perturbations of the normal parafoil flight trajectory induced by wind gusts. The approach to capturing continuous motion imagery from captive flight test recordings, the introduction of simulated effects, and the transfer of the processed imagery back to video tape are described.

  18. Anima: Modular Workflow System for Comprehensive Image Data Analysis

    PubMed Central

    Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa

    2014-01-01

    Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps starting from data import and pre-processing to segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541

  19. Validation of ALFIA: a platform for quantifying near-infrared fluorescent images of lymphatic propulsion in humans

    NASA Astrophysics Data System (ADS)

    Rasmussen, John C.; Bautista, Merrick; Tan, I.-Chih; Adams, Kristen E.; Aldrich, Melissa; Marshall, Milton V.; Fife, Caroline E.; Maus, Erik A.; Smith, Latisha A.; Zhang, Jingdan; Xiang, Xiaoyan; Zhou, Shaohua Kevin; Sevick-Muraca, Eva M.

    2011-02-01

    Recently, we demonstrated near-infrared (NIR) fluorescence imaging for quantifying real-time lymphatic propulsion in humans following intradermal injections of microdose amounts of indocyanine green. However, computational methods for image analysis are underdeveloped, hindering the translation and clinical adaptation of NIR fluorescent lymphatic imaging. In our initial work we used ImageJ and custom MATLAB programs to manually identify lymphatic vessels and individual propulsion events using the temporal transit of the fluorescent dye. In addition, we extracted the apparent velocities of contractile propagation and the time periods between propulsion events. Extensive time and effort were required to analyze the 6-8 gigabytes of NIR fluorescent images obtained for each subject. To alleviate this bottleneck, we commenced development of ALFIA, an integrated software platform which will permit automated, near real-time analysis of lymphatic function using NIR fluorescent imaging. However, prior to automation, the base algorithms calculating the apparent velocity and period must be validated to verify that they produce results consistent with the proof-of-concept programs. To do this, both methods were used to analyze NIR fluorescent images of two subjects, and the number of propulsive events identified, the average apparent velocities, and the average periods for each subject were compared. Paired Student's t-tests indicate that the differences between their average results are not significant. With the base algorithms validated, further development and automation of ALFIA can be realized, significantly reducing the amount of user interaction required and potentially enabling the near real-time, clinical evaluation of NIR fluorescent lymphatic imaging.
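
    The validation step described, comparing per-subject results from the manual workflow and from ALFIA's algorithms with paired Student's t-tests, can be sketched as follows; the velocity values are toy placeholders, not measurements from the study.

        from scipy import stats

        # Apparent propulsion velocities for the same vessels, one value per vessel,
        # as produced by the manual (ImageJ/MATLAB) workflow and by ALFIA. Toy values.
        manual_velocity_cm_s = [0.81, 0.95, 1.10, 0.74, 0.88, 1.02]
        alfia_velocity_cm_s  = [0.79, 0.97, 1.08, 0.75, 0.90, 1.01]

        t_stat, p_value = stats.ttest_rel(manual_velocity_cm_s, alfia_velocity_cm_s)
        print(f"paired t = {t_stat:.3f}, p = {p_value:.3f}")
        # A large p-value (e.g. > 0.05) would indicate no significant difference
        # between the two analysis pipelines.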

  20. MMX-I: A data-processing software for multi-modal X-ray imaging and tomography

    NASA Astrophysics Data System (ADS)

    Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.

    2017-06-01

    Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil permits multimodal imaging and tomography to be performed during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabytes) multimodal data-sets that are generated. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g., from the fields of biology, life sciences, geology, and geobiology), much of which has no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore, we have developed a multi-platform (Mac, Windows and Linux 64-bit) data processing tool, which is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.

  1. An automated live imaging platform for studying merozoite egress-invasion in malaria cultures.

    PubMed

    Crick, Alex J; Tiffert, Teresa; Shah, Sheel M; Kotar, Jurij; Lew, Virgilio L; Cicuta, Pietro

    2013-03-05

    Most cases of severe and fatal malaria are caused by the intraerythrocytic asexual reproduction cycle of Plasmodium falciparum. One of the most intriguing and least understood stages in this cycle is the brief preinvasion period during which dynamic merozoite-red-cell interactions align the merozoite apex in preparation for penetration. Studies of the molecular mechanisms involved in this process face formidable technical challenges, requiring multiple observations of merozoite egress-invasion sequences in live cultures under controlled experimental conditions, using high-resolution microscopy and a variety of fluorescent imaging tools. Here we describe a first successful step in the development of a fully automated, robotic imaging platform to enable such studies. Schizont-enriched live cultures of P. falciparum were set up on an inverted stage microscope with software-controlled motorized functions. By applying a variety of imaging filters and selection criteria, we identified infected red cells that were likely to rupture imminently, and recorded their coordinates. We developed a video-image analysis method that detected and automatically recorded merozoite egress events in 100% of the 40 egress-invasion sequences recorded in this study. We observed a substantial polymorphism of the dynamic condition of pre-egress infected cells, probably reflecting asynchronies in the diversity of confluent processes leading to merozoite release. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
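
    A highly simplified sketch of the video-image analysis idea, flagging frames whose change from the previous frame exceeds a threshold as candidate egress events, is given below; the synthetic frames and the threshold are illustrative assumptions rather than the platform's actual algorithm.

        import numpy as np

        def detect_egress(frames, threshold=8.0):
            """Return indices of frames whose mean absolute difference from the
            previous frame exceeds the threshold (candidate egress events)."""
            events = []
            for i in range(1, len(frames)):
                diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
                if diff > threshold:
                    events.append(i)
            return events

        # Synthetic stack: quiet frames with one abrupt change at frame 50.
        rng = np.random.default_rng(1)
        stack = [rng.integers(100, 110, (64, 64), dtype=np.uint8) for _ in range(100)]
        stack[50] = rng.integers(140, 200, (64, 64), dtype=np.uint8)   # simulated rupture
        print(detect_egress(stack))   # frames 50 and 51 bracket the simulated event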

  2. Giga-pixel fluorescent imaging over an ultra-large field-of-view using a flatbed scanner.

    PubMed

    Göröcs, Zoltán; Ling, Yuye; Yu, Meng Dai; Karahalios, Dimitri; Mogharabi, Kian; Lu, Kenny; Wei, Qingshan; Ozcan, Aydogan

    2013-11-21

    We demonstrate a new fluorescent imaging technique that can screen for fluorescent micro-objects over an ultra-wide field-of-view (FOV) of ~532 cm², i.e., 19 cm × 28 cm, reaching a space-bandwidth product of more than 2 billion. For achieving such a large FOV, we modified the hardware and software of a commercially available flatbed scanner, and added a custom-designed absorbing fluorescent filter and a two-dimensional array of external light sources for computer-controlled, high-angle fluorescent excitation. We also re-programmed the driver of the scanner to take full control of the scanner hardware and achieve the highest possible exposure time, gain and sensitivity for detection of fluorescent micro-objects through the gradient index self-focusing lens array that is positioned in front of the scanner sensor chip. For example, this large FOV of our imaging platform allows us to screen more than 2.2 mL of undiluted whole blood for detection of fluorescent micro-objects within <5 minutes. This high-throughput fluorescent imaging platform could be useful for rare cell research and cytometry applications by enabling rapid screening of large volumes of optically dense media. Our results constitute the first time that a flatbed scanner has been converted to a fluorescent imaging system, achieving a record large FOV.

  3. Virtual Reality as a Story Telling Platform for Geoscience Communication

    NASA Astrophysics Data System (ADS)

    Lazar, K.; Moysey, S. M.

    2017-12-01

    Capturing the attention of students and the public is a critical step for increasing societal interest and literacy in earth science issues. Virtual reality (VR) provides a means for geoscience engagement that is well suited to place-based learning through exciting and immersive experiences. One approach is to create fully-immersive virtual gaming environments where players interact with physical objects, such as rock samples and outcrops, to pursue geoscience learning goals. Developing an experience like this, however, can require substantial programming expertise and resources. At the other end of the development spectrum, it is possible for anyone to create immersive virtual experiences with 360-degree imagery, which can be made interactive using easy to use VR editing software to embed videos, audio, images, and other content within the 360-degree image. Accessible editing tools like these make the creation of VR experiences something that anyone can tackle. Using the VR editor ThingLink and imagery from Google Maps, for example, we were able to create an interactive tour of the Grand Canyon, complete with embedded assessments, in a matter of hours. The true power of such platforms, however, comes from the potential to engage students as content authors to create and share stories of place that explore geoscience issues from their personal perspective. For example, we have used combinations of 360-degree images with interactive mapping and web platforms to enable students with no programming experience to create complex web apps as highly engaging story telling platforms. We highlight here examples of how we have implemented such story telling approaches with students to assess learning in courses, to share geoscience research outcomes, and to communicate issues of societal importance.

  4. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.
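
    The sketch below illustrates the coordinate transformation and sub-pixel (bilinear) interpolation step in simplified form, resampling a rectangular frame onto a log-polar-style pixel layout; the log-polar geometry is an assumption used for illustration and differs from the real sensor's pixel distribution.

        import numpy as np

        def bilinear_sample(img, x, y):
            """Sample the image at fractional (x, y) with bilinear interpolation."""
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
            fx, fy = x - x0, y - y0
            top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
            bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
            return (1 - fy) * top + fy * bot

        def to_retina_like(img, n_rings=64, n_sectors=128):
            """Resample a rectangular image onto exponentially spaced rings and sectors."""
            h, w = img.shape
            cx, cy = w / 2.0, h / 2.0
            r_max = min(cx, cy) - 1
            out = np.zeros((n_rings, n_sectors), dtype=float)
            for i in range(n_rings):
                r = r_max ** ((i + 1) / n_rings)          # exponentially spaced rings
                for j in range(n_sectors):
                    theta = 2 * np.pi * j / n_sectors
                    out[i, j] = bilinear_sample(img, cx + r * np.cos(theta),
                                                cy + r * np.sin(theta))
            return out

        frame = np.random.rand(480, 640)                  # stand-in for a captured frame
        print(to_retina_like(frame).shape)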

  5. Hardware Interface Description for the Integrated Power, Avionics, and Software (iPAS) Space Telecommunications Radio System (STRS) Radio

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Roche, Rigoberto

    2017-01-01

    The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS-compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx ML605 Virtex-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an Embedded PC (Axiomtek eBox 620-110-FL) running the Ubuntu 12.04 operating system. Figure 1 shows the RIACS platform hardware. The result of this development is a very low cost, STRS-compliant platform that can be used for waveform development for multiple applications. The purpose of this document is to describe how to develop a new waveform using the RIACS platform, the Very High Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) FPGA wrapper code, and the STRS implementation on the Axiomtek processor.

  6. NASA Tech Briefs, April 2012

    NASA Technical Reports Server (NTRS)

    2012-01-01

    Topics include: Computational Ghost Imaging for Remote Sensing; Digital Architecture for a Trace Gas Sensor Platform; Dispersed Fringe Sensing Analysis - DFSA; Indium Tin Oxide Resistor-Based Nitric Oxide Microsensors; Gas Composition Sensing Using Carbon Nanotube Arrays; Sensor for Boundary Shear Stress in Fluid Flow; Model-Based Method for Sensor Validation; Qualification of Engineering Camera for Long-Duration Deep Space Missions; Remotely Powered Reconfigurable Receiver for Extreme Environment Sensing Platforms; Bump Bonding Using Metal-Coated Carbon Nanotubes; In Situ Mosaic Brightness Correction; Simplex GPS and InSAR Inversion Software; Virtual Machine Language 2.1; Multi-Scale Three-Dimensional Variational Data Assimilation System for Coastal Ocean Prediction; Pandora Operation and Analysis Software; Fabrication of a Cryogenic Bias Filter for Ultrasensitive Focal Plane; Processing of Nanosensors Using a Sacrificial Template Approach; High-Temperature Shape Memory Polymers; Modular Flooring System; Non-Toxic, Low-Freezing, Drop-In Replacement Heat Transfer Fluids; Materials That Enhance Efficiency and Radiation Resistance of Solar Cells; Low-Cost, Rugged High-Vacuum System; Static Gas-Charging Plug; Floating Oil-Spill Containment Device; Stemless Ball Valve; Improving Balance Function Using Low Levels of Electrical Stimulation of the Balance Organs; Oxygen-Methane Thruster; Lunar Navigation Determination System - LaNDS; Launch Method for Kites in Low-Wind or No-Wind Conditions; Supercritical CO2 Cleaning System for Planetary Protection and Contamination Control Applications; Design and Performance of a Wideband Radio Telescope; Finite Element Models for Electron Beam Freeform Fabrication Process; Autonomous Information Unit for Fine-Grain Data Access Control and Information Protection in a Net-Centric System; Vehicle Detection for RCTA/ANS (Autonomous Navigation System); Image Mapping and Visual Attention on the Sensory Ego-Sphere; HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis; and IMAGESEER - IMAGEs for Education and Research.

  7. DrishtiCare: a telescreening platform for diabetic retinopathy powered with fundus image analysis.

    PubMed

    Joshi, Gopal Datt; Sivaswamy, Jayanthi

    2011-01-01

    Diabetic retinopathy is the leading cause of blindness in urban populations. Early diagnosis through regular screening and timely treatment has been shown to prevent visual loss and blindness. It is very difficult to cater to this vast set of diabetes patients, primarily because of the high costs in reaching out to patients and a scarcity of skilled personnel. Telescreening offers a cost-effective solution to reach out to patients but is still inadequate due to an insufficient number of experts who serve the diabetes population. Developments in fundus image analysis have shown promise in addressing the scarcity of skilled personnel for large-scale screening. This article aims at addressing the underlying issues in traditional telescreening to develop a solution that leverages the developments carried out in fundus image analysis. We propose a novel Web-based telescreening solution (called DrishtiCare) integrating various value-added fundus image analysis components. A Web-based platform on the software as a service (SaaS) delivery model is chosen to make the service cost-effective, easy to use, and scalable. A server-based prescreening system is employed to scrutinize the fundus images of patients and to refer them to the experts. An automatic quality assessment module ensures transfer of fundus images that meet grading standards. An easy-to-use interface, enabled with new visualization features, is designed for case examination by experts. Three local primary eye hospitals have participated in and used DrishtiCare's telescreening service. A preliminary evaluation of the proposed platform was performed on a set of 119 patients, of whom 23% were identified with sight-threatening retinopathy. Currently, evaluation at a larger scale is in progress, and a total of 450 patients have been enrolled. The proposed approach provides an innovative way of integrating automated fundus image analysis in the telescreening framework to address well-known challenges in large-scale disease screening. It offers a low-cost, effective, and easily adoptable screening solution to primary care providers. © 2010 Diabetes Technology Society.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iverson, Aaron

    Ra Power Management (RPM) has developed a cloud-based software platform that manages the financial and operational functions of third-party-financed solar projects throughout their lifecycle. RPM's software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and a reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.

  9. Creating an X Window Terminal-Based Information Technology Center.

    ERIC Educational Resources Information Center

    Klassen, Tim W.

    1997-01-01

    The creation of an information technology center at the University of Oregon Science Library is described. Goals included providing access to Internet-based resources and multimedia software, platforms for running science-oriented software, and resources so students can create multimedia materials. A mixed-lab platform was created with Unix-based…

  10. Use of the NetBeans Platform for NASA Robotic Conjunction Assessment Risk Analysis

    NASA Technical Reports Server (NTRS)

    Sabey, Nickolas J.

    2014-01-01

    The latest Java and JavaFX technologies are very attractive software platforms for customers involved in space mission operations such as those of NASA and the US Air Force. For NASA Robotic Conjunction Assessment Risk Analysis (CARA), the NetBeans platform provided an environment in which scalable software solutions could be developed quickly and efficiently. Both Java 8 and the NetBeans platform are in the process of simplifying CARA development in secure environments by providing a significant amount of capability in a single accredited package, where accreditation alone can account for 6-8 months for each library or software application. Capabilities either in use or being investigated by CARA include: 2D and 3D displays with JavaFX, parallelization with the new Streams API, and scalability through the NetBeans plugin architecture.

  11. Intraoperative visualization and assessment of electromagnetic tracking error

    NASA Astrophysics Data System (ADS)

    Harish, Vinyas; Ungi, Tamas; Lasso, Andras; MacDonald, Andrew; Nanji, Sulaiman; Fichtinger, Gabor

    2015-03-01

    Electromagnetic tracking allows for increased flexibility in designing image-guided interventions; however, it is well understood that electromagnetic tracking is prone to error. Visualization and assessment of the tracking error should take place in the operating room with minimal interference with the clinical procedure. The goal was to achieve this ideal in an open-source software implementation in a plug-and-play manner, without requiring programming from the user. We use optical tracking as a ground truth. An electromagnetic sensor and optical markers are mounted onto a stylus device, pivot-calibrated for both trackers. Electromagnetic tracking error is defined as the difference in tool tip position between the electromagnetic and optical readings. Multiple measurements are interpolated into a thin-plate B-spline transform, visualized in real time using 3D Slicer. All tracked devices are used in a plug-and-play manner through the open-source SlicerIGT and PLUS extensions of the 3D Slicer platform. Tracking error was measured multiple times to assess reproducibility of the method, both with and without placing ferromagnetic objects in the workspace. Results from exhaustive grid sampling and freehand sampling were similar, indicating that a quick freehand sampling is sufficient to detect unexpected or excessive field distortion in the operating room. The software is available as a plug-in for the 3D Slicer platform. Results demonstrate potential for visualizing electromagnetic tracking error in real time in intraoperative environments in feasibility clinical trials of image-guided interventions.
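
    The core interpolation idea, fitting a smooth thin-plate-spline field to sparse error measurements (electromagnetic minus optical tip positions), can be sketched with SciPy as follows; this stands in for, and is not, the SlicerIGT/PLUS implementation, and the sample points are randomly generated toy values.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(0)
        # Freehand stylus positions in the workspace (mm) and toy EM-minus-optical errors (mm).
        sample_points = rng.uniform(-100, 100, size=(40, 3))
        error_vectors = 0.01 * sample_points + rng.normal(0, 0.2, size=(40, 3))

        # Thin-plate-spline radial basis function interpolation of the error field.
        field = RBFInterpolator(sample_points, error_vectors, kernel="thin_plate_spline")

        # Evaluate the interpolated error magnitude on a coarse grid for visualization.
        grid = np.stack(np.meshgrid(*[np.linspace(-100, 100, 11)] * 3, indexing="ij"),
                        axis=-1).reshape(-1, 3)
        magnitudes = np.linalg.norm(field(grid), axis=1)
        print(f"max interpolated error: {magnitudes.max():.2f} mm")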

  12. Genetic and Computational Approaches for Studying Plant Development and Abiotic Stress Responses Using Image-Based Phenotyping

    NASA Astrophysics Data System (ADS)

    Campbell, M. T.; Walia, H.; Grondin, A.; Knecht, A.

    2017-12-01

    The development of abiotic stress tolerant crops (i.e., tolerant of drought, salinity, or heat stress) requires the discovery of DNA sequence variants associated with stress tolerance-related traits. However, many traits underlying adaptation to abiotic stress involve a suite of physiological pathways that may be induced at different times throughout the duration of stress. Conventional single-point phenotyping approaches fail to fully capture these temporal responses, and thus downstream genetic analysis may only identify a subset of the genetic variants that are important for adaptation to sub-optimal environments. Although genomic resources for crops have advanced tremendously, the collection of phenotypic data for morphological and physiological traits is laborious and remains a significant bottleneck in bridging the phenotype-genotype gap. In recent years, the availability of automated, image-based phenotyping platforms has provided researchers with an opportunity to collect morphological and physiological traits non-destructively in a highly controlled environment. Moreover, these platforms allow abiotic stress responses to be recorded throughout the duration of the experiment, and have facilitated the use of function-valued traits for genetic analyses in major crops. We will present our approaches for addressing abiotic stress tolerance in cereals. This talk will focus on novel open-source software to process and extract biologically meaningful data from images generated by these phenomics platforms. In addition, we will discuss the statistical approaches used to model longitudinal phenotypes and dissect the genetic basis of dynamic responses to these abiotic stresses throughout development.

  13. Pathogen metadata platform: software for accessing and analyzing pathogen strain information.

    PubMed

    Chang, Wenling E; Peterson, Matthew W; Garay, Christopher D; Korves, Tonia

    2016-09-15

    Pathogen metadata includes information about where and when a pathogen was collected and the type of environment it came from. Along with genomic nucleotide sequence data, this metadata is growing rapidly and becoming a valuable resource not only for research but also for biosurveillance and public health. However, current freely available tools for analyzing these data are geared towards bioinformaticians and/or do not provide the summaries and visualizations needed to readily interpret results. We designed a platform to easily access and summarize data about pathogen samples. The software includes a PostgreSQL database that captures metadata useful for disease outbreak investigations, and scripts for downloading and parsing data from NCBI BioSample and BioProject into the database. The software provides a user interface to query metadata and obtain standardized results in an exportable, tab-delimited format. To visually summarize results, the user interface provides a 2D histogram for user-selected metadata types and mapping of geolocated entries. The software is built on the LabKey data platform, an open-source data management platform, which enables developers to add functionalities. We demonstrate the use of the software in querying for a pathogen serovar and for genome sequence identifiers. This software enables users to create a local database for pathogen metadata, populate it with data from NCBI, easily query the data, and obtain visual summaries. Some of the components, such as the database, are modular and can be incorporated into other data platforms. The source code is freely available for download at https://github.com/wchangmitre/bioattribution.
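
    The parsing step described, extracting outbreak-relevant attributes from NCBI BioSample XML before loading them into the database, can be sketched as follows; the XML snippet is a simplified illustrative record rather than a verbatim NCBI export, and attribute names vary between samples.

        import xml.etree.ElementTree as ET

        SAMPLE_XML = """
        <BioSampleSet>
          <BioSample accession="SAMN00000001">
            <Attributes>
              <Attribute attribute_name="serovar">Enteritidis</Attribute>
              <Attribute attribute_name="collection_date">2015-06-01</Attribute>
              <Attribute attribute_name="geo_loc_name">USA: Virginia</Attribute>
            </Attributes>
          </BioSample>
        </BioSampleSet>
        """

        def parse_biosamples(xml_text, wanted=("serovar", "collection_date", "geo_loc_name")):
            """Extract selected attributes from each BioSample record."""
            rows = []
            for sample in ET.fromstring(xml_text).iter("BioSample"):
                row = {"accession": sample.get("accession")}
                for attr in sample.iter("Attribute"):
                    name = attr.get("attribute_name")
                    if name in wanted:
                        row[name] = attr.text
                rows.append(row)
            return rows

        for record in parse_biosamples(SAMPLE_XML):
            print(record)   # rows like this would be inserted into the platform's database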

  14. Architecture of a platform for hardware-in-the-loop simulation of flying vehicle control systems

    NASA Astrophysics Data System (ADS)

    Belokon', S. A.; Zolotukhin, Yu. N.; Filippov, M. N.

    2017-07-01

    A hardware-software platform is presented, which is designed for the development and hardware-in-the-loop simulation of flying vehicle control systems. This platform supports the construction of the mathematical model of the plant, the development of algorithms and software for the onboard radioelectronic equipment and the ground control station, and the visualization of a three-dimensional model of the vehicle and of the external environment seen from the cockpit in the simulator training mode.

  15. Supporting in- and off-Hospital Patient Management Using a Web-based Integrated Software Platform.

    PubMed

    Spyropoulos, Basile; Botsivali, Maria; Tzavaras, Aris; Pierros, Vasileios

    2015-01-01

    In this paper, a Web-based software platform appropriately designed to support the continuity of health care information and management for both in- and out-of-hospital care is presented. The system has additional features, such as the formation of continuity-of-care records and the transmission of referral letters via a semantically annotated web service. The platform's Web orientation provides significant advantages, allowing remote access to be accomplished easily.

  16. Surveillance and reconnaissance ground system architecture

    NASA Astrophysics Data System (ADS)

    Devambez, Francois

    2001-12-01

    Modern conflicts induce various modes of deployment, depending on the type of conflict, the type of mission, and the phase of the conflict. It is therefore impossible to define fixed-architecture systems for surveillance ground segments. Thales has developed a structure for a ground segment based on the operational functions required and on the definition of modules and networks. These modules are software and hardware modules, including communications and networks. This ground segment is called MGS (Modular Ground Segment), and is intended for use in airborne reconnaissance systems, surveillance systems, and UAV systems. The main parameters for the definition of a modular ground image exploitation system are: compliance with various operational configurations, easy adaptation to the evolution of these configurations, interoperability with NATO and multinational forces, security, multi-sensor and multi-platform capabilities, technical modularity, evolutivity, and reduction of life-cycle cost. The general performance of the MGS is presented: types of sensors, acquisition process, exploitation of images, report generation, database management, dissemination, and interface with C4I. The MGS is then described as a set of hardware and software modules and their organization to build numerous operational configurations. Architectures range from a minimal configuration intended for a mono-sensor image exploitation system to a full image intelligence center for multilevel exploitation of multiple sensors.

  17. A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners

    NASA Technical Reports Server (NTRS)

    Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.

    1991-01-01

    In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER2 platform. The QVS2 is designed around the AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, keyboard, and 4.0 mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.

  18. Waveform Developer's Guide for the Integrated Power, Avionics, and Software (iPAS) Space Telecommunications Radio System (STRS) Radio

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary Jo W.; Roche, Rigoberto

    2017-01-01

    The Space Telecommunications Radio System (STRS) provides a common, consistent framework for software defined radios (SDRs) to abstract the application software from the radio platform hardware. The STRS standard aims to reduce the cost and risk of using complex, configurable and reprogrammable radio systems across NASA missions. To promote the use of the STRS architecture for future NASA advanced exploration missions, NASA Glenn Research Center (GRC) developed an STRS-compliant SDR on a radio platform used by the Advanced Exploration Systems program at the Johnson Space Center (JSC) in their Integrated Power, Avionics, and Software (iPAS) laboratory. The iPAS STRS Radio was implemented on the Reconfigurable, Intelligently-Adaptive Communication System (RIACS) platform, currently being used for radio development at JSC. The platform consists of a Xilinx ML605 Virtex-6 FPGA board, an Analog Devices FMCOMMS1-EBZ RF transceiver board, and an Embedded PC (Axiomtek eBox 620-110-FL) running the Ubuntu 12.04 operating system. The result of this development is a very low cost, STRS-compliant platform that can be used for waveform development for multiple applications. The purpose of this document is to describe how to develop a new waveform using the RIACS platform, the Very High Speed Integrated Circuits (VHSIC) Hardware Description Language (VHDL) FPGA wrapper code, and the STRS implementation on the Axiomtek processor.

  19. MilxXplore: a web-based system to explore large imaging datasets

    PubMed Central

    Bourgeat, P; Dore, V; Villemagne, V L; Rowe, C C; Salvado, O; Fripp, J

    2013-01-01

    Objective As large-scale medical imaging studies are becoming more common, there is an increasing reliance on automated software to extract quantitative information from these images. As cohort sizes keep increasing in large studies, there is also a need for tools that allow results from automated image processing and analysis to be presented in a way that enables fast and efficient quality checking, tagging and reporting on cases in which automatic processing failed or was problematic. Materials and methods MilxXplore is an open source visualization platform, which provides an interface to navigate and explore imaging data in a web browser, giving the end user the opportunity to perform quality control and reporting in a user friendly, collaborative and efficient way. Discussion Compared to existing software solutions that often provide an overview of the results at the subject's level, MilxXplore pools the results of individual subjects and time points together, allowing easy and efficient navigation and browsing through the different acquisitions of a subject over time, and comparison of the results against the rest of the population. Conclusions MilxXplore is fast, flexible and allows remote quality checks of processed imaging data, facilitating data sharing and collaboration across multiple locations, and can be easily integrated into a cloud computing pipeline. With the growing trend of open data and open science, such a tool will become increasingly important to share and publish results of imaging analysis. PMID:23775173

  20. Bringing Legacy Visualization Software to Modern Computing Devices via Application Streaming

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2014-05-01

    Planning software compatibility across forthcoming generations of computing platforms is a problem commonly encountered in software engineering and development. While this problem can affect any class of software, data analysis and visualization programs are particularly vulnerable. This is due in part to their inherent dependency on specialized hardware and computing environments. A number of strategies and tools have been designed to aid software engineers with this task. While generally embraced by developers at 'traditional' software companies, these methodologies are often dismissed by the scientific software community as unwieldy, inefficient and unnecessary. As a result, many important and storied scientific software packages can struggle to adapt to a new computing environment; for example, one in which much work is carried out on sub-laptop devices (such as tablets and smartphones). Rewriting these packages for a new platform often requires significant investment in terms of development time and developer expertise. In many cases, porting older software to modern devices is neither practical nor possible. As a result, replacement software must be developed from scratch, wasting resources better spent on other projects. Enabled largely by the rapid rise and adoption of cloud computing platforms, 'Application Streaming' technologies allow legacy visualization and analysis software to be operated wholly from a client device (be it laptop, tablet or smartphone) while retaining full functionality and interactivity. It mitigates much of the developer effort required by other more traditional methods while simultaneously reducing the time it takes to bring the software to a new platform. This work will provide an overview of Application Streaming and how it compares against other technologies which allow scientific visualization software to be executed from a remote computer. We will discuss the functionality and limitations of existing application streaming frameworks and how a developer might prepare their software for application streaming. We will also examine the secondary benefits realized by moving legacy software to the cloud. Finally, we will examine the process by which a legacy Java application, the Integrated Data Viewer (IDV), is to be adapted for tablet computing via Application Streaming.

  1. Methodology for automating software systems. Task 1 of the foundations for automating software systems

    NASA Technical Reports Server (NTRS)

    Moseley, Warren

    1989-01-01

    The early stages of a research program designed to establish an experimental research platform for software engineering are described. Major emphasis is placed on Computer Assisted Software Engineering (CASE). The Poor Man's CASE Tool is based on the Apple Macintosh system, employing available software including Focal Point II, Hypercard, XRefText, and Macproject. These programs are functional in themselves, but through advanced linking are available for operation from within the tool being developed. The research platform is intended to merge software engineering technology with artificial intelligence (AI). In the first prototype of the PMCT, however, the sections of AI are not included. CASE tools assist the software engineer in planning goals, routes to those goals, and ways to measure progress. The method described allows software to be synthesized instead of being written or built.

  2. PLUME and research software

    NASA Astrophysics Data System (ADS)

    Baudin, Veronique; Gomez-Diaz, Teresa

    2013-04-01

    The PLUME open platform (https://www.projet-plume.org) has as its first goal to share competences and to promote the knowledge of software experts within the French higher education and research communities. The platform offers open access for everybody to more than 380 index cards describing software that is useful and economical for this community. The second goal of PLUME is to improve the visibility of software produced by research laboratories within the higher education and research communities. The "development-ESR" index cards briefly describe the main features of the software, including references to associated research publications. The platform contains more than 300 cards describing research software, 89 of which have an English version. In this talk we describe the theme classification and the taxonomy of the index cards and their evolution as new themes are added to the project. We also focus on the organisation of PLUME as an open project and its interest in promoting free/open source software from and for research, contributing to the creation of a community of shared knowledge.

  3. Volcview: A Web-Based Platform for Satellite Monitoring of Volcanic Activity and Eruption Response

    NASA Astrophysics Data System (ADS)

    Schneider, D. J.; Randall, M.; Parker, T.

    2014-12-01

    The U.S. Geological Survey (USGS), in cooperation with University and State partners, operates five volcano observatories that employ specialized software packages and computer systems to process and display real-time data coming from in-situ geophysical sensors and from near-real-time satellite sources. However, access to these systems both inside and from outside the observatory offices is limited in some cases by factors such as software cost, network security, and bandwidth. Thus, a variety of Internet-based tools have been developed by the USGS Volcano Science Center to: 1) Improve accessibility to data sources for staff scientists across volcano monitoring disciplines; 2) Allow access for observatory partners and for after-hours, on-call duty scientists; 3) Provide situational awareness for emergency managers and the general public. Herein we describe VolcView (volcview.wr.usgs.gov), a freely available, web-based platform for display and analysis of near-real-time satellite data. Initial geographic coverage is of the volcanoes in Alaska, the Russian Far East, and the Commonwealth of the Northern Mariana Islands. Coverage of other volcanoes in the United States will be added in the future. Near-real-time satellite data from NOAA, NASA and JMA satellite systems are processed to create image products for detection of elevated surface temperatures and volcanic ash and SO2 clouds. VolcView uses HTML5 and the canvas element to provide image overlays (volcano location and alert status, annotation, and location information) and image products that can be queried to provide data values, location and measurement capabilities. Use over the past year during the eruptions of Pavlof, Veniaminof, and Cleveland volcanoes in Alaska by the Alaska Volcano Observatory, the National Weather Service, and the U.S. Air Force has reinforced the utility of shared situational awareness and has guided further development. Planned developments include overlay of volcanic cloud trajectory and dispersion models, atmospheric temperature profiles, and incorporation of monitoring alerts from ground and satellite-based algorithms. Challenges for future development include reducing the latency in satellite data reception and processing, and increasing the geographic coverage from polar-orbiting satellite platforms.

  4. A wellness software platform with smart wearable devices and the demonstration report for personal wellness management

    NASA Astrophysics Data System (ADS)

    Kang, Won-Seok; Son, Chang-Sik; Lee, Sangho; Choi, Rock-Hyun; Ha, Yeong-Mi

    2017-07-01

    In this paper, we introduce WellnessHumanCare, a semi-automatic wellness management software platform that provides complex wellness data acquisition (mental, physical and environmental) with smart wearable devices, complex wellness condition analysis, privacy-aware online/offline recommendations, and real-time monitoring apps (smartphone-based and web-based). To show the efficiency of WellnessHumanCare, we demonstrated a wellness management service over three months with 79 participants: an experimental group of 39 at H Corp. and a control group of 40 at K Corp., Korea.

  5. Raven-II: an open platform for surgical robotics research.

    PubMed

    Hannaford, Blake; Rosen, Jacob; Friedman, Diana W; King, Hawkeye; Roan, Phillip; Cheng, Lei; Glozman, Daniel; Ma, Ji; Kosari, Sina Nia; White, Lee

    2013-04-01

    The Raven-II is a platform for collaborative research on advances in surgical robotics. Seven universities have begun research using this platform. The Raven-II system has two 3-DOF spherical positioning mechanisms capable of mounting interchangeable 4-DOF instruments. The Raven-II software is based on open standards such as Linux and ROS to maximally facilitate software development. The mechanism is robust enough for repeated experiments and animal surgery experiments, but is not engineered to sufficient safety standards for human use. Mechanisms in place for interaction among the user community and dissemination of results include an electronic forum, an online software SVN repository, and meetings and workshops at major robotics conferences.

  6. An embedded multi-core parallel model for real-time stereo imaging

    NASA Astrophysics Data System (ADS)

    He, Wenjing; Hu, Jian; Niu, Jingyu; Li, Chuanrong; Liu, Guangyu

    2018-04-01

    Real-time processing based on embedded systems will enhance the application capability of stereo imaging for LiDAR and hyperspectral sensors. Research on task partitioning and scheduling strategies for embedded multiprocessor systems started relatively late compared with that for PC platforms. In this paper, a parallel model for stereo imaging on an embedded multi-core processing platform is studied and verified. After analyzing the computing load, throughput capacity and buffering requirements, a two-stage pipeline parallel model based on message transmission is established. This model can be applied to fast stereo imaging for airborne sensors with various characteristics. To demonstrate the feasibility and effectiveness of the parallel model, parallel software was designed using test flight data, based on the 8-core DSP processor TMS320C6678. The results indicate that the design performed well in workload distribution and achieved a speed-up ratio of up to 6.4.
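    To make the two-stage, message-passing idea concrete, the sketch below builds an analogous pipeline in Python with multiprocessing queues. It illustrates the general pattern only, not the authors' DSP implementation; the frame contents and processing steps are placeholders.

```python
# Assumed two-stage pipeline: stage 1 pre-processes raw "frames",
# stage 2 performs a placeholder imaging/reduction step.
from multiprocessing import Process, Queue

def stage1(raw_q, mid_q):
    while True:
        frame = raw_q.get()
        if frame is None:                 # poison pill ends the stage
            mid_q.put(None)
            break
        mid_q.put([x * 2 for x in frame])  # placeholder pre-processing

def stage2(mid_q, out_q):
    while True:
        frame = mid_q.get()
        if frame is None:
            out_q.put(None)
            break
        out_q.put(sum(frame))              # placeholder imaging step

if __name__ == "__main__":
    raw_q, mid_q, out_q = Queue(), Queue(), Queue()
    workers = [Process(target=stage1, args=(raw_q, mid_q)),
               Process(target=stage2, args=(mid_q, out_q))]
    for w in workers:
        w.start()
    for i in range(4):                     # feed four dummy frames
        raw_q.put([i, i + 1, i + 2])
    raw_q.put(None)
    results = []
    while (r := out_q.get()) is not None:
        results.append(r)
    for w in workers:
        w.join()
    print(results)                         # [6, 12, 18, 24]
```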

  7. Operations Manager Tim Miller checks out software for the Airborne Synthetic Aperture Radar (AIRSAR)

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Tim Miller checks out software for the Airborne Synthetic Aperture Radar (AIRSAR). He was the AIRSAR operations manager for NASA's Jet Propulsion Laboratory. The AIRSAR produces imaging data for a range of studies conducted by the DC-8. NASA is using a DC-8 aircraft as a flying science laboratory. The platform aircraft, based at NASA's Dryden Flight Research Center, Edwards, Calif., collects data for many experiments in support of scientific projects serving the world scientific community. Included in this community are NASA, federal, state, academic and foreign investigators. Data gathered by the DC-8 at flight altitude and by remote sensing have been used for scientific studies in archeology, ecology, geography, hydrology, meteorology, oceanography, volcanology, atmospheric chemistry, soil science and biology.

  8. Digital transplantation pathology: combining whole slide imaging, multiplex staining and automated image analysis.

    PubMed

    Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J

    2012-01-01

    Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.

  9. Interactive tele-radiological segmentation systems for treatment and diagnosis.

    PubMed

    Zimeras, S; Gortzis, L G

    2012-01-01

    Telehealth is the exchange of health information and the provision of health care services through electronic information and communications technology, where participants are separated by geographic, time, social and cultural barriers. The shift of telemedicine from desktop platforms to wireless and mobile technologies is likely to have a significant impact on healthcare in the future. It is therefore crucial to develop a general information-exchange e-medical system that enables its users to perform online and offline medical consultations and diagnosis. During medical diagnosis, image analysis techniques combined with doctors' opinions can be useful for final medical decisions. Quantitative analysis of digital images requires detection and segmentation of the borders of the object of interest. In medical images, segmentation has traditionally been done by human experts. Even with the aid of image processing software (computer-assisted segmentation tools), manual segmentation of 2D and 3D CT images is tedious, time-consuming, and thus impractical, especially in cases where a large number of objects must be specified. Substantial computational and storage requirements become especially acute when object orientation and scale have to be considered. Therefore automated or semi-automated segmentation techniques are essential if these software applications are ever to gain widespread clinical use. The main purpose of this work is to analyze segmentation techniques for the definition of anatomical structures under telemedical systems.

  10. Integration of digital signal processing technologies with pulsed electron paramagnetic resonance imaging

    PubMed Central

    Pursley, Randall H.; Salem, Ghadi; Devasahayam, Nallathamby; Subramanian, Sankaran; Koscielniak, Janusz; Krishna, Murali C.; Pohida, Thomas J.

    2006-01-01

    The integration of modern data acquisition and digital signal processing (DSP) technologies with Fourier transform electron paramagnetic resonance (FT-EPR) imaging at radiofrequencies (RF) is described. The FT-EPR system operates at a Larmor frequency (Lf) of 300 MHz to facilitate in vivo studies. This relatively low frequency Lf, in conjunction with our ~10 MHz signal bandwidth, enables the use of direct free induction decay time-locked subsampling (TLSS). This particular technique provides advantages by eliminating the traditional analog intermediate frequency downconversion stage along with the corresponding noise sources. TLSS also results in manageable sample rates that facilitate the design of DSP-based data acquisition and image processing platforms. More specifically, we utilize a high-speed field programmable gate array (FPGA) and a DSP processor to perform advanced real-time signal and image processing. The migration to a DSP-based configuration offers the benefits of improved EPR system performance, as well as increased adaptability to various EPR system configurations (i.e., software configurable systems instead of hardware reconfigurations). The required modifications to the FT-EPR system design are described, with focus on the addition of DSP technologies including the application-specific hardware, software, and firmware developed for the FPGA and DSP processor. The first results of using real-time DSP technologies in conjunction with direct detection bandpass sampling to implement EPR imaging at RF frequencies are presented. PMID:16243552
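    As a rough numerical illustration of the direct subsampling idea, the sketch below shows a 300 MHz tone aliasing to a low intermediate frequency when sampled far below its Nyquist rate. The 80 MS/s rate is an assumption chosen for the example, not a value from the paper.

```python
# Hedged illustration of bandpass subsampling: a narrowband signal near the
# 300 MHz Larmor frequency aliases to a low IF without an analog mixer.
import numpy as np

f_signal = 300e6          # carrier near the Larmor frequency (Hz)
fs = 80e6                 # subsampling rate (assumed for the example)
n = np.arange(4096)
x = np.cos(2 * np.pi * f_signal * n / fs)

spectrum = np.abs(np.fft.rfft(x * np.hanning(n.size)))
f_axis = np.fft.rfftfreq(n.size, d=1 / fs)
alias = f_axis[np.argmax(spectrum)]
print(f"signal at {f_signal/1e6:.0f} MHz aliases to ~{alias/1e6:.1f} MHz")
# Expected alias: |300 MHz - 4 * 80 MHz| = 20 MHz
```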

  11. Utilizing Technology for FCS Education: Selecting Appropriate Interactive Webinar Software

    ERIC Educational Resources Information Center

    Zoumenou, Virginie; Sigman-Grant, Madeleine; Coleman, Gayle; Malekian, Fatemeh; Zee, Julia M. K.; Fountain, Brent J.; Marsh, Akela

    2015-01-01

    The purpose of this research was to identify commonly used interactive webinar software platforms and to conduct a testing session on best practices related to an interactive webinar. The study employed the Adobe Connect and the Maestro Conference platforms. The 15 participants experienced five best practices: pre-work, polling, breakout room,…

  12. Data Analysis of a Space Experiment: Common Software Tackles Uncommon Task

    NASA Technical Reports Server (NTRS)

    Wilkinson, R. Allen

    1998-01-01

    Presented here are the software adaptations developed by laboratory scientists to process the space experiment data products from three experiments on two International Microgravity Laboratory Missions (IML-1 and IML-2). The challenge was to accommodate interacting with many types of hardware and software developed by both European Space Agency (ESA) and NASA aerospace contractors, where data formats were neither commercial nor familiar to scientists. Some of the data had been corrupted by bit shifting of byte boundaries. Least-significant/most-significant byte swapping also occurred, as might be expected for the various hardware platforms involved. The data consisted of 20 GBytes per experiment of both numerical and image data. A significant percentage of the bytes were consumed in NASA formatting with extra layers of packetizing structure. It was provided in various pieces to the scientists on magnetic tapes, Syquest cartridges, DAT tapes, CD-ROMs, analog video tapes, and by network FTP. In this paper I provide some science background and present the software processing used to make the data useful in the months after the missions.
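    The two data-repair steps mentioned above can be illustrated with a small hypothetical Python sketch (not the original processing code): reinterpreting big-endian integers on a little-endian host, and shifting a bit-misaligned stream back onto byte boundaries. The 3-bit shift is an arbitrary example value.

```python
# Minimal sketch of endianness correction and bit re-alignment (assumed data).
import numpy as np

# Endianness: reinterpret big-endian 16-bit integers on a little-endian host.
raw = bytes([0x01, 0x00, 0x02, 0x00])              # two big-endian values: 256, 512
values = np.frombuffer(raw, dtype=">i2").astype(np.int16)
print(values)                                       # [256 512]

# Bit shift: undo a known 3-bit shift across byte boundaries.
def unshift_bits(data: bytes, shift: int) -> bytes:
    bits = int.from_bytes(data, "big")
    bits >>= shift                                  # drop the spurious leading bits
    return bits.to_bytes(len(data), "big")

print(unshift_bits(b"\x08\x10", 3).hex())           # '0102'
```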

  13. SignalPlant: an open signal processing software platform.

    PubMed

    Plesinger, F; Jurco, J; Halamek, J; Jurak, P

    2016-07-01

    The growing technical standard of acquisition systems allows the acquisition of large records, often reaching gigabytes or more in size, as is the case with whole-day electroencephalograph (EEG) recordings, for example. Although current 64-bit software for signal processing is able to process (e.g. filter, analyze, etc) such data, visual inspection and labeling will probably suffer from rather long latency during the rendering of large portions of recorded signals. For this reason, we have developed SignalPlant, a stand-alone application for signal inspection, labeling and processing. The main motivation was to supply investigators with a tool allowing fast and interactive work with large multichannel records produced by EEG, electrocardiograph and similar devices. The rendering latency was compared with EEGLAB and proved significantly faster when displaying an image from a large number of samples (e.g. 163 times faster for 75 × 10^6 samples). The presented SignalPlant software is available free and does not depend on any other computation software. Furthermore, it can be extended with plugins by third parties, ensuring its adaptability to future research tasks and new data formats.
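    The abstract does not say how SignalPlant achieves its rendering speed; one common approach for drawing tens of millions of samples quickly, shown below purely as an assumed illustration, is per-pixel min/max decimation of the waveform.

```python
# One common technique (not necessarily SignalPlant's internal method):
# draw per-pixel min/max envelopes instead of every sample.
import numpy as np

def minmax_decimate(signal: np.ndarray, pixels: int):
    """Return (mins, maxs), one pair per horizontal pixel column."""
    usable = (signal.size // pixels) * pixels
    chunks = signal[:usable].reshape(pixels, -1)
    return chunks.min(axis=1), chunks.max(axis=1)

x = np.random.randn(10_000_000).astype(np.float32)   # large synthetic record
mins, maxs = minmax_decimate(x, pixels=1920)
print(mins.shape, maxs.shape)                          # (1920,) (1920,)
```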

  14. Development and Validation of an Interactive Internet Platform for Older People: The Healthy Ageing Through Internet Counselling in the Elderly Study.

    PubMed

    Jongstra, Susan; Beishuizen, Cathrien; Andrieu, Sandrine; Barbera, Mariagnese; van Dorp, Matthijs; van de Groep, Bram; Guillemont, Juliette; Mangialasche, Francesca; van Middelaar, Tessa; Moll van Charante, Eric; Soininen, Hilkka; Kivipelto, Miia; Richard, Edo

    2017-02-01

    A myriad of Web-based applications on self-management have been developed, but few focus on older people. In the face of global aging, older people form an important target population for cardiovascular prevention. This article describes the full development of an interactive Internet platform for older people, which was designed for the Healthy Ageing Through Internet Counselling in the Elderly (HATICE) study. We provide recommendations to design senior-friendly Web-based applications for a new approach to multicomponent cardiovascular prevention. The development of the platform followed five phases: (1) conceptual framework; (2) platform concept and functional design; (3) platform building (software and content); (4) testing and pilot study; and (5) final product. We performed a meta-analysis, reviewed guidelines for cardiovascular diseases, and consulted end users, experts, and software developers to create the platform concept and content. The software was built in iterative cycles. In the pilot study, 41 people aged ≥65 years used the platform for 8 weeks. Participants used the interactive features of the platform and appreciated the coach support. During all phases adjustments were made to incorporate all improvements from the previous phases. The final platform is a personal, secured, and interactive platform supported by a coach. When carefully designed, an interactive Internet platform is acceptable and feasible for use by older people with basic computer skills. To improve acceptability by older people, we recommend involving the end users in the process of development, to personalize the platform and to combine the application with human support. The interactive HATICE platform will be tested for efficacy in a multinational randomized controlled trial (ISRCTN48151589).

  15. Virtual network computing: cross-platform remote display and collaboration software.

    PubMed

    Konerding, D E

    1999-04-01

    VNC (Virtual Network Computing) is a computer program written to address the problem of cross-platform remote desktop/application display. VNC uses a client/server model in which an image of the desktop of the server is transmitted to the client and displayed. The client collects mouse and keyboard input from the user and transmits them back to the server. The VNC client and server can run on Windows 95/98/NT, MacOS, and Unix (including Linux) operating systems. VNC is multi-user on Unix machines (any number of servers can be run, and they are unrelated to the primary display of the computer), while it is effectively single-user on Macintosh and Windows machines (only one server can be run, displaying the contents of the primary display of the server). The VNC servers can be configured to allow more than one client to connect at one time, effectively allowing collaboration through the shared desktop. I describe the function of VNC, provide details of installation, describe how it achieves its goal, and evaluate the use of VNC for molecular modelling. VNC is an extremely useful tool for collaboration, instruction, software development, and debugging of graphical programs with remote users.

  16. Development of Analytical Plug-ins for ENSITE: Version 1.0

    DTIC Science & Technology

    2017-11-01

    ENSITE’s core-software platform builds upon leading geospatial platforms already in use by the Army and is designed to offer an easy-to-use, customized set of workflows for CB planners. Within this platform are added software compo...

  17. Modeling Stratigraphic Architecture of Deep-water Deposits Using a Small Unmanned Aircraft: Neogene Thin-bedded Turbidites, East Coast Basin, New Zealand

    NASA Astrophysics Data System (ADS)

    Nieminski, N.; Graham, S. A.

    2014-12-01

    One of the outstanding challenges of field geology is inaccessibility of exposure. The ability to view and characterize outcrops that are difficult to study from the ground is greatly improved by aerial investigation. Detailed stratigraphic architecture of such exposures is best addressed by using advances and availability of small unmanned aircraft systems (sUAS) that can safely navigate from high-altitude overviews of study areas to within a meter of the exposure of interest. High-resolution photographs acquired at various elevations and azimuths by sUAS are then used to convert field measurements to digital representations in three-dimensions at a fine scale. Photogrammetric software is used to capture complex, detailed topography by creating digital surface models with a range imaging technique that estimates three-dimensional structures from two-dimensional image sequences. The digital surface model is overlain by detailed, high-resolution photography. Pairing sUAS technology with readily available photogrammetry software that requires little processing time and resources offers a revolutionary and cost-effective methodology for geoscientists to investigate and quantify stratigraphic and structural complexity of field studies from the convenience of the office. These methods of imaging and modeling remote outcrops are demonstrated in the East Coast Basin, New Zealand, where wave-cut platform exposures of Miocene deep-water deposits offer a unique opportunity to investigate the flow processes and resulting characteristics of thin-bedded turbidite deposits. Stratigraphic architecture of wavecut platform and vertically-dipping exposures of these thin-bedded turbidites is investigated with sUAS coupled with Structure from Motion (SfM) photogrammetry software. This approach allows the geometric and spatial variation of deep-water architecture to be characterized continuously along 2,000 meters of lateral exposure, as well as to measure and quantify cyclic variations in thin-bedded turbidites at centimeter scale. Results yield a spatial and temporal understanding of a deep-water depositional system at a scale that was previously unattainable using conventional field geology techniques, and a virtual outcrop that can be used for classroom education.

  18. SAR target recognition using behaviour library of different shapes in different incidence angles and polarisations

    NASA Astrophysics Data System (ADS)

    Fallahpour, Mojtaba Behzad; Dehghani, Hamid; Jabbar Rashidi, Ali; Sheikhi, Abbas

    2018-05-01

    Target recognition is one of the most important issues in the interpretation of the synthetic aperture radar (SAR) images. Modelling, analysis, and recognition of the effects of influential parameters in the SAR can provide a better understanding of the SAR imaging systems, and therefore facilitates the interpretation of the produced images. Influential parameters in SAR images can be divided into five general categories of radar, radar platform, channel, imaging region, and processing section, each of which has different physical, structural, hardware, and software sub-parameters with clear roles in the finally formed images. In this paper, for the first time, a behaviour library that includes the effects of polarisation, incidence angle, and shape of targets, as radar and imaging region sub-parameters, in the SAR images are extracted. This library shows that the created pattern for each of cylindrical, conical, and cubic shapes is unique, and due to their unique properties these types of shapes can be recognised in the SAR images. This capability is applied to data acquired with the Canadian RADARSAT1 satellite.

  19. Telemedicine with integrated data security in ATM-based networks

    NASA Astrophysics Data System (ADS)

    Thiel, Andreas; Bernarding, Johannes; Kurth, Ralf; Wenzel, Rudiger; Villringer, Arno; Tolxdorff, Thomas

    1997-05-01

    Telemedical services rely on the digital transfer of large amounts of data in a short time. The acceptance of these services therefore requires new hard- and software concepts. The fast exchange of data is well performed within a high-speed ATM-based network. Fast access to the data from different platforms imposes more difficult problems, which may be divided into those relating to standardized data formats and those relating to different levels of data security across nations. For standardized access to the image data, a DICOM 3.0 server was implemented. Images were converted into the DICOM 3.0 standard if necessary. Access to the server is provided by an implementation of DICOM in JAVA, allowing access to the data from different platforms. Data protection measures to ensure the secure transfer of sensitive patient data are not yet solved within the DICOM concept. We investigated different schemes to protect data using the DICOM/JAVA modality with as little impact on data transfer speed as possible.

  20. Distributed Processing of Sentinel-2 Products using the BIGEARTH Platform

    NASA Astrophysics Data System (ADS)

    Bacu, Victor; Stefanut, Teodor; Nandra, Constantin; Mihon, Danut; Gorgan, Dorian

    2017-04-01

    The constellation of observational satellites orbiting around Earth is constantly increasing, providing more data that need to be processed in order to extract meaningful information and knowledge from it. Sentinel-2 satellites, part of the Copernicus Earth Observation program, aim to be used in agriculture, forestry and many other land management applications. ESA's SNAP toolbox can be used to process data gathered by Sentinel-2 satellites but is limited to the resources provided by a stand-alone computer. In this paper we present a cloud based software platform that makes use of this toolbox together with other remote sensing software applications to process Sentinel-2 products. The BIGEARTH software platform [1] offers an integrated solution for processing Earth Observation data coming from different sources (such as satellites or on-site sensors). The flow of processing is defined as a chain of tasks based on the WorDeL description language [2]. Each task could rely on a different software technology (such as Grass GIS and ESA's SNAP) in order to process the input data. One important feature of the BIGEARTH platform comes from this possibility of interconnection and integration, throughout the same flow of processing, of the various well known software technologies. All this integration is transparent from the user perspective. The proposed platform extends the SNAP capabilities by enabling specialists to easily scale the processing over distributed architectures, according to their specific needs and resources. The software platform [3] can be used in multiple configurations. In the basic one the software platform runs as a standalone application inside a virtual machine. Obviously in this case the computational resources are limited but it will give an overview of the functionalities of the software platform, and also the possibility to define the flow of processing and later on to execute it on a more complex infrastructure. The most complex and robust configuration is based on cloud computing and allows the installation on a private or public cloud infrastructure. In this configuration, the processing resources can be dynamically allocated and the execution time can be considerably improved by the available virtual resources and the number of parallelizable sequences in the processing flow. The presentation highlights the benefits and issues of the proposed solution by analyzing some significant experimental use cases. Main references for further information: [1] BigEarth project, http://cgis.utcluj.ro/projects/bigearth [2] Constantin Nandra, Dorian Gorgan: "Defining Earth data batch processing tasks by means of a flexible workflow description language", ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., III-4, 59-66, (2016). [3] Victor Bacu, Teodor Stefanut, Dorian Gorgan, "Adaptive Processing of Earth Observation Data on Cloud Infrastructures Based on Workflow Description", Proceedings of the Intelligent Computer Communication and Processing (ICCP), IEEE-Press, pp.444-454, (2015).

  1. Toward an integrated software platform for systems pharmacology

    PubMed Central

    Ghosh, Samik; Matsuoka, Yukiko; Asai, Yoshiyuki; Hsin, Kun-Yi; Kitano, Hiroaki

    2013-01-01

    Understanding complex biological systems requires the extensive support of computational tools. This is particularly true for systems pharmacology, which aims to understand the action of drugs and their interactions in a systems context. Computational models play an important role as they can be viewed as an explicit representation of biological hypotheses to be tested. A series of software and data resources are used for model development, verification and exploration of the possible behaviors of biological systems using the model, which may not be possible or cost-effective with experiments. Software platforms play a dominant role in supporting creativity and productivity and have transformed many industries; the same techniques can be applied to biology as well. Establishing an integrated software platform will be the next important step in the field. © 2013 The Authors. Biopharmaceutics & Drug Disposition published by John Wiley & Sons, Ltd. PMID:24150748

  2. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
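    The Python-plus-VTK pattern the abstract describes can be sketched as follows. This minimal example renders a placeholder sphere rather than neuroimaging data and is not taken from the DV3D codebase; it only shows the source, mapper, actor, renderer chain that such tools drive from Python.

```python
# Minimal sketch of Python driving VTK's C++ rendering pipeline (assumed,
# illustrative scene; DV3D itself loads imaging volumes and overlays).
import vtk

source = vtk.vtkSphereSource()                 # stand-in for loaded imaging data
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(source.GetOutputPort())

actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)

window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)

interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                             # opens an interactive 3D view
```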

  3. Final Report Ra Power Management 1255 10-15-16 FINAL_Public

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iverson, Aaron

    Ra Power Management (RPM) has developed a cloud-based software platform that manages the financial and operational functions of third-party financed solar projects throughout their lifecycle. RPM’s software streamlines and automates the sales, financing, and management of a portfolio of solar assets. The software helps solar developers automate the most difficult aspects of asset management, leading to increased transparency, efficiency, and reduction in human error. More importantly, our platform will help developers save money by improving their operating margins.

  4. Simulated Thin-Film Growth and Imaging

    NASA Astrophysics Data System (ADS)

    Schillaci, Michael

    2001-06-01

    Thin-films have become the cornerstone of the electronics, telecommunications, and broadband markets. A list of potential products includes: computer boards and chips, satellites, cell phones, fuel cells, superconductors, flat panel displays, optical waveguides, building and automotive windows, food and beverage plastic containers, metal foils, pipe plating, vision ware, manufacturing equipment and turbine engines. For all of these reasons a basic understanding of the physical processes involved in both growing and imaging thin-films can provide a wonderful research project for advanced undergraduate and first-year graduate students. After producing rudimentary two- and three-dimensional thin-film models incorporating ballistic deposition and nearest-neighbor Coulomb-type interactions, the QM tunneling equations are used to produce simulated scanning tunneling microscope (SSTM) images of the films. A discussion of computational platforms, languages, and software packages that may be used to accomplish similar results is also given.
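    A minimal sketch of the kind of 2D ballistic-deposition model the abstract mentions is given below. The sticking rule and parameters are illustrative assumptions, not the authors' code.

```python
# Simple 2D ballistic deposition: particles fall onto a substrate and stick
# at the top of their column or beside a taller nearest neighbour.
import numpy as np

def ballistic_deposition(width=200, n_particles=20000, rng=None):
    rng = rng or np.random.default_rng(0)
    height = np.zeros(width, dtype=int)
    for _ in range(n_particles):
        col = rng.integers(width)
        left = height[col - 1] if col > 0 else 0
        right = height[col + 1] if col < width - 1 else 0
        height[col] = max(height[col] + 1, left, right)
    return height

profile = ballistic_deposition()
print(f"mean height {profile.mean():.1f}, roughness {profile.std():.1f}")
```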

  5. Study on nondestructive detection system based on x-ray for wire ropes conveyer belt

    NASA Astrophysics Data System (ADS)

    Miao, Changyun; Shi, Boya; Wan, Peng; Li, Jie

    2008-03-01

    A nondestructive X-ray detection system for wire rope conveyor belts is designed using X-ray detection technology. In this paper the X-ray detection principle is analyzed and a design scheme for the system is presented; image processing of the conveyor belt is investigated and image processing algorithms are given; the X-ray acquisition and receiving board is designed with the use of an FPGA and a DSP; and the software of the system is programmed in C#.NET on the WINXP/WIN2000 platform. Experiments indicate the system can perform remote real-time detection of wire rope conveyor belt images, find faults and give an alarm in time. The system is intuitive, operates in real time, and is highly accurate. It can be used for fault detection of wire rope conveyor belts in mines, ports, terminals and other fields.

  6. A resilient and secure software platform and architecture for distributed spacecraft

    NASA Astrophysics Data System (ADS)

    Otte, William R.; Dubey, Abhishek; Karsai, Gabor

    2014-06-01

    A distributed spacecraft is a cluster of independent satellite modules flying in formation that communicate via ad-hoc wireless networks. This system in space is a cloud platform that facilitates sharing sensors and other computing and communication resources across multiple applications, potentially developed and maintained by different organizations. Effectively, such an architecture can realize the functions of monolithic satellites at a reduced cost and with improved adaptivity and robustness. The openness of these architectures poses special challenges because the distributed software platform has to support applications from different security domains and organizations, and information flows have to be carefully managed and compartmentalized. If the platform is used as a robust shared resource, its management, configuration, and resilience become a challenge in themselves. We have designed and prototyped a distributed software platform for such architectures. The core element of the platform is a new operating system whose services were designed to restrict access to the network and the file system, and to enforce resource management constraints for all non-privileged processes. Mixed-criticality applications operating at different security labels are deployed and controlled by a privileged management process that also pre-configures all information flows. This paper describes the design and objective of this layer.

  7. Avoidable Software Procurements

    DTIC Science & Technology

    2012-09-01

    Keywords and acronyms indexed for this report include: software license, software usage, ELA, Software as a Service (SaaS), Software Asset Management (SAM), Platform as a Service (PaaS), System Management Server (SMS), Solutions for Enterprise Wide... The report notes that with the delivery of full cloud services, the cloud computing service model will transition from IaaS to SaaS (Software as a Service).

  8. ATLAS@AWS

    NASA Astrophysics Data System (ADS)

    Gehrcke, Jan-Philip; Kluth, Stefan; Stonjek, Stefan

    2010-04-01

    We show how the ATLAS offline software is ported on the Amazon Elastic Compute Cloud (EC2). We prepare an Amazon Machine Image (AMI) on the basis of the standard ATLAS platform Scientific Linux 4 (SL4). Then an instance of the SLC4 AMI is started on EC2 and we install and validate a recent release of the ATLAS offline software distribution kit. The installed software is archived as an image on the Amazon Simple Storage Service (S3) and can be quickly retrieved and connected to new SL4 AMI instances using the Amazon Elastic Block Store (EBS). ATLAS jobs can then configure against the release kit using the ATLAS configuration management tool (cmt) in the standard way. The output of jobs is exported to S3 before the SL4 AMI is terminated. Job status information is transferred to the Amazon SimpleDB service. The whole process of launching instances of our AMI, starting, monitoring and stopping jobs and retrieving job output from S3 is controlled from a client machine using python scripts implementing the Amazon EC2/S3 API via the boto library working together with small scripts embedded in the SL4 AMI. We report our experience with setting up and operating the system using standard ATLAS job transforms.
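    The launch/upload/terminate cycle described above can be sketched with the modern boto3 library (the original work used the earlier boto library). The AMI id, instance type, bucket and file names below are placeholders, not values from the ATLAS setup.

```python
# Hedged sketch of the EC2/S3 workflow pattern, using boto3 as a stand-in.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3")

# Launch one instance of a prepared machine image (placeholder AMI id).
run = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                        InstanceType="m5.large",
                        MinCount=1, MaxCount=1)
instance_id = run["Instances"][0]["InstanceId"]
print("started", instance_id)

# After the job has run, its output is uploaded to S3 (placeholder names).
s3.upload_file("job_output.tar.gz", "my-results-bucket", "jobs/job_output.tar.gz")

# Finally the instance is terminated once the output is safely stored.
ec2.terminate_instances(InstanceIds=[instance_id])
```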

  9. OpenROCS: a software tool to control robotic observatories

    NASA Astrophysics Data System (ADS)

    Colomé, Josep; Sanz, Josep; Vilardell, Francesc; Ribas, Ignasi; Gil, Pere

    2012-09-01

    We present the Open Robotic Observatory Control System (OpenROCS), an open source software platform developed for the robotic control of telescopes. It acts as a software infrastructure that executes all the necessary processes to implement responses to the system events that appear in the routine and non-routine operations associated to data-flow and housekeeping control. The OpenROCS software design and implementation provides a high flexibility to be adapted to different observatory configurations and event-action specifications. It is based on an abstract model that is independent of the specific hardware or software and is highly configurable. Interfaces to the system components are defined in a simple manner to achieve this goal. We give a detailed description of the version 2.0 of this software, based on a modular architecture developed in PHP and XML configuration files, and using standard communication protocols to interface with applications for hardware monitoring and control, environment monitoring, scheduling of tasks, image processing and data quality control. We provide two examples of how it is used as the core element of the control system in two robotic observatories: the Joan Oró Telescope at the Montsec Astronomical Observatory (Catalonia, Spain) and the SuperWASP Qatar Telescope at the Roque de los Muchachos Observatory (Canary Islands, Spain).

  10. Characterization of Disease-Related Covariance Topographies with SSMPCA Toolbox: Effects of Spatial Normalization and PET Scanners

    PubMed Central

    Peng, Shichun; Ma, Yilong; Spetsieris, Phoebe G; Mattis, Paul; Feigin, Andrew; Dhawan, Vijay; Eidelberg, David

    2013-01-01

    In order to generate imaging biomarkers from disease-specific brain networks, we have implemented a general toolbox to rapidly perform scaled subprofile modeling (SSM) based on principal component analysis (PCA) on brain images of patients and normals. This SSMPCA toolbox can define spatial covariance patterns whose expression in individual subjects can discriminate patients from controls or predict behavioral measures. The technique may depend on differences in spatial normalization algorithms and brain imaging systems. We have evaluated the reproducibility of characteristic metabolic patterns generated by SSMPCA in patients with Parkinson's disease (PD). We used [18F]fluorodeoxyglucose PET scans from PD patients and normal controls. Motor-related (PDRP) and cognition-related (PDCP) metabolic patterns were derived from images spatially normalized using four versions of SPM software (spm99, spm2, spm5 and spm8). Differences between these patterns and subject scores were compared across multiple independent groups of patients and control subjects. These patterns and subject scores were highly reproducible with different normalization programs in terms of disease discrimination and cognitive correlation. Subject scores were also comparable in PD patients imaged across multiple PET scanners. Our findings confirm a very high degree of consistency among brain networks and their clinical correlates in PD using images normalized in four different SPM platforms. SSMPCA toolbox can be used reliably for generating disease-specific imaging biomarkers despite the continued evolution of image preprocessing software in the neuroimaging community. Network expressions can be quantified in individual patients independent of different physical characteristics of PET cameras. PMID:23671030
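    A heavily simplified sketch of the SSM/PCA computation behind such covariance patterns is shown below (log transform, double centring, PCA via SVD). It illustrates the general method only; it is not the SSMPCA toolbox code, and the toy data are random.

```python
# Assumed, simplified SSM/PCA: derive covariance patterns and subject scores.
import numpy as np

def ssm_pca(images):
    """images: subjects x voxels array of positive values (e.g. FDG uptake)."""
    log_data = np.log(images)
    centred = log_data - log_data.mean(axis=1, keepdims=True)   # remove each subject's mean
    centred = centred - centred.mean(axis=0, keepdims=True)      # remove the group mean profile
    u, s, vt = np.linalg.svd(centred, full_matrices=False)
    patterns = vt            # rows are spatial covariance patterns (PDRP-like)
    scores = u * s           # each subject's expression score on each pattern
    return patterns, scores

rng = np.random.default_rng(1)
scans = rng.lognormal(mean=1.0, sigma=0.1, size=(20, 5000))      # toy "scans": 20 subjects
patterns, scores = ssm_pca(scans)
print(patterns.shape, scores.shape)                               # (20, 5000) (20, 20)
# A new subject's score on a pattern is the dot product of its similarly
# centred log-profile with that pattern.
```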

  11. Characterization of disease-related covariance topographies with SSMPCA toolbox: effects of spatial normalization and PET scanners.

    PubMed

    Peng, Shichun; Ma, Yilong; Spetsieris, Phoebe G; Mattis, Paul; Feigin, Andrew; Dhawan, Vijay; Eidelberg, David

    2014-05-01

    To generate imaging biomarkers from disease-specific brain networks, we have implemented a general toolbox to rapidly perform scaled subprofile modeling (SSM) based on principal component analysis (PCA) on brain images of patients and normals. This SSMPCA toolbox can define spatial covariance patterns whose expression in individual subjects can discriminate patients from controls or predict behavioral measures. The technique may depend on differences in spatial normalization algorithms and brain imaging systems. We have evaluated the reproducibility of characteristic metabolic patterns generated by SSMPCA in patients with Parkinson's disease (PD). We used [18F]fluorodeoxyglucose PET scans from patients with PD and normal controls. Motor-related (PDRP) and cognition-related (PDCP) metabolic patterns were derived from images spatially normalized using four versions of SPM software (spm99, spm2, spm5, and spm8). Differences between these patterns and subject scores were compared across multiple independent groups of patients and control subjects. These patterns and subject scores were highly reproducible with different normalization programs in terms of disease discrimination and cognitive correlation. Subject scores were also comparable in patients with PD imaged across multiple PET scanners. Our findings confirm a very high degree of consistency among brain networks and their clinical correlates in PD using images normalized in four different SPM platforms. SSMPCA toolbox can be used reliably for generating disease-specific imaging biomarkers despite the continued evolution of image preprocessing software in the neuroimaging community. Network expressions can be quantified in individual patients independent of different physical characteristics of PET cameras. Copyright © 2013 Wiley Periodicals, Inc.

  12. DeepInfer: open-source deep learning deployment toolkit for image-guided therapy

    NASA Astrophysics Data System (ADS)

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-03-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  13. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy.

    PubMed

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A; Kapur, Tina; Wells, William M; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-02-11

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose "DeepInfer" - an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections.

  14. Comparing the Microsoft Kinect to a traditional mouse for adjusting the viewed tissue densities of three-dimensional anatomical structures

    NASA Astrophysics Data System (ADS)

    Juhnke, Bethany; Berron, Monica; Philip, Adriana; Williams, Jordan; Holub, Joseph; Winer, Eliot

    2013-03-01

    Advancements in medical image visualization in recent years have enabled three-dimensional (3D) medical images to be volume-rendered from magnetic resonance imaging (MRI) and computed tomography (CT) scans. Medical data is crucial for patient diagnosis and medical education, and analyzing these three-dimensional models rather than two-dimensional (2D) slices would enable more efficient analysis by surgeons and physicians, especially non-radiologists. An interaction device that is intuitive, robust, and easily learned is necessary to integrate 3D modeling software into the medical community. The keyboard and mouse configuration does not readily manipulate 3D models because these traditional interface devices function within two degrees of freedom, not the six degrees of freedom presented in three dimensions. Using a familiar, commercial-off-the-shelf (COTS) device for interaction would minimize training time and enable maximum usability with 3D medical images. Multiple techniques are available to manipulate 3D medical images and provide doctors more innovative ways of visualizing patient data. One such example is windowing. Windowing is used to adjust the viewed tissue density of digital medical data. A software platform available at the Virtual Reality Applications Center (VRAC), named Isis, was used to visualize and interact with the 3D representations of medical data. In this paper, we present the methodology and results of a user study that examined the usability of windowing 3D medical imaging using a Kinect™ device compared to a traditional mouse.
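    Windowing itself, the operation being adjusted with the Kinect or mouse, can be sketched in a few lines. The window level/width values below are typical presets used only for illustration, and the slice data are synthetic; this is not the Isis implementation.

```python
# Hedged sketch of CT windowing: map Hounsfield units through a window
# level/width to 8-bit display grey levels. The input device only changes
# the level and width values.
import numpy as np

def apply_window(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Map HU values into 0-255 display grey levels for a given window."""
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).astype(np.uint8)

slice_hu = np.random.default_rng(0).integers(-1000, 2000, size=(512, 512))
soft_tissue = apply_window(slice_hu, level=40, width=400)     # typical soft-tissue window
bone = apply_window(slice_hu, level=300, width=1500)           # typical bone window
print(soft_tissue.dtype, soft_tissue.min(), soft_tissue.max())
```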

  15. DeepInfer: Open-Source Deep Learning Deployment Toolkit for Image-Guided Therapy

    PubMed Central

    Mehrtash, Alireza; Pesteie, Mehran; Hetherington, Jorden; Behringer, Peter A.; Kapur, Tina; Wells, William M.; Rohling, Robert; Fedorov, Andriy; Abolmaesumi, Purang

    2017-01-01

    Deep learning models have outperformed some of the previous state-of-the-art approaches in medical image analysis. Instead of using hand-engineered features, deep models attempt to automatically extract hierarchical representations at multiple levels of abstraction from the data. Therefore, deep models are usually considered to be more flexible and robust solutions for image analysis problems compared to conventional computer vision models. They have demonstrated significant improvements in computer-aided diagnosis and automatic medical image analysis applied to such tasks as image segmentation, classification and registration. However, deploying deep learning models often has a steep learning curve and requires detailed knowledge of various software packages. Thus, many deep models have not been integrated into the clinical research workflows causing a gap between the state-of-the-art machine learning in medical applications and evaluation in clinical research procedures. In this paper, we propose “DeepInfer” – an open-source toolkit for developing and deploying deep learning models within the 3D Slicer medical image analysis platform. Utilizing a repository of task-specific models, DeepInfer allows clinical researchers and biomedical engineers to deploy a trained model selected from the public registry, and apply it to new data without the need for software development or configuration. As two practical use cases, we demonstrate the application of DeepInfer in prostate segmentation for targeted MRI-guided biopsy and identification of the target plane in 3D ultrasound for spinal injections. PMID:28615794

  16. Portable widefield imaging device for ICG-detection of the sentinel lymph node

    NASA Astrophysics Data System (ADS)

    Govone, Angelo Biasi; Gómez-García, Pablo Aurelio; Carvalho, André Lopes; Capuzzo, Renato de Castro; Magalhães, Daniel Varela; Kurachi, Cristina

    2015-06-01

    Metastasis is one of the major cancer complications, since malignant cells detach from the primary tumor and reach other organs or tissues. The sentinel lymph node (SLN) is the first lymphatic structure to be affected by the malignant cells, but locating it is still a great challenge for the medical team. This is due to the fact that the lymph nodes are located between the muscle fibers, making their visualization difficult. Seeking to aid the surgeon in the detection of the SLN, the present study aims to develop a widefield fluorescence imaging device using indocyanine green as the fluorescence marker. The system is basically composed of a 780nm illumination unit, optical components for 810nm fluorescence detection, two CCD cameras, a laptop, and dedicated software. The illumination unit has 16 diode lasers. A dichroic mirror and bandpass filters select and deliver the excitation light to the interrogated tissue, and select and deliver the fluorescence light to the camera. One camera is responsible for the acquisition of visible light and the other one for the acquisition of the ICG fluorescence. The software, developed on the LabVIEW® platform, generates a real-time merged image where it is possible to observe the fluorescence spots, related to the lymph nodes, superimposed on the image under white light. The system was tested in a mouse model, and a first patient with tongue cancer was imaged. Both results showed the potential use of the presented fluorescence imaging system for sentinel lymph node detection.
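    The real-time merge step can be illustrated with a short, assumed sketch using OpenCV and synthetic frames. The original software was written in LabVIEW; the threshold, colours and frame sizes here are arbitrary choices for illustration.

```python
# Hedged sketch: overlay the near-infrared fluorescence channel (shown in
# green) on the visible-light frame.
import cv2
import numpy as np

def merge_fluorescence(visible_bgr: np.ndarray, fluor_gray: np.ndarray,
                       threshold: int = 50, alpha: float = 0.6) -> np.ndarray:
    """Blend bright fluorescence pixels, coloured green, into the visible frame."""
    mask = fluor_gray > threshold
    overlay = visible_bgr.copy()
    overlay[mask] = (0, 255, 0)                     # mark fluorescent regions
    return cv2.addWeighted(overlay, alpha, visible_bgr, 1 - alpha, 0)

visible = np.zeros((480, 640, 3), dtype=np.uint8)    # placeholder camera frames
fluor = np.zeros((480, 640), dtype=np.uint8)
fluor[200:240, 300:340] = 200                        # simulated ICG hotspot
merged = merge_fluorescence(visible, fluor)
print(merged.shape)
```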

  17. A Randomized Trial Comparing Classical Participatory Design to VandAID, an Interactive CrowdSourcing Platform to Facilitate User-centered Design.

    PubMed

    Dufendach, Kevin R; Koch, Sabine; Unertl, Kim M; Lehmann, Christoph U

    2017-10-26

    Early involvement of stakeholders in the design of medical software is particularly important due to the need to incorporate complex knowledge and actions associated with clinical work. Standard user-centered design methods include focus groups and participatory design sessions with individual stakeholders, which generally limit user involvement to a small number of individuals due to the significant time investments from designers and end users. The goal of this project was to reduce the effort for end users to participate in co-design of a software user interface by developing an interactive web-based crowdsourcing platform. In a randomized trial, we compared a new web-based crowdsourcing platform to standard participatory design sessions. We developed an interactive, modular platform that allows responsive remote customization and design feedback on a visual user interface based on user preferences. The responsive canvas is a dynamic HTML template that responds in real time to user preference selections. Upon completion, the design team can view the user's interface creations through an administrator portal and download the structured selections through a REDCap interface. We have created a software platform that allows users to customize a user interface and see the results of that customization in real time, receiving immediate feedback on the impact of their design choices. Neonatal clinicians used the new platform to successfully design and customize a neonatal handoff tool. They received no specific instruction and yet were able to use the software easily and reported high usability. VandAID, a new web-based crowdsourcing platform, can involve multiple users in user-centered design simultaneously and provides means of obtaining design feedback remotely. The software can provide design feedback at any stage in the design process, but it will be of greatest utility for specifying user requirements and evaluating iterative designs with multiple options.

  18. Molecular Optical Simulation Environment (MOSE): A Platform for the Simulation of Light Propagation in Turbid Media

    PubMed Central

    Ren, Shenghan; Chen, Xueli; Wang, Hailong; Qu, Xiaochao; Wang, Ge; Liang, Jimin; Tian, Jie

    2013-01-01

    The study of light propagation in turbid media has attracted extensive attention in the field of biomedical optical molecular imaging. In this paper, we present a software platform for the simulation of light propagation in turbid media named the “Molecular Optical Simulation Environment (MOSE)”. Based on the gold standard of the Monte Carlo method, MOSE simulates light propagation both in tissues with complicated structures and through free-space. In particular, MOSE synthesizes realistic data for bioluminescence tomography (BLT), fluorescence molecular tomography (FMT), and diffuse optical tomography (DOT). The user-friendly interface and powerful visualization tools facilitate data analysis and system evaluation. As a major measure for resource sharing and reproducible research, MOSE aims to provide freeware for research and educational institutions, which can be downloaded at http://www.mosetm.net. PMID:23577215
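
    The abstract does not reproduce MOSE's implementation; the sketch below is a minimal Monte Carlo photon-weight random walk in an infinite homogeneous turbid medium with isotropic scattering, intended only to illustrate the sampling loop on which such simulators are built. The optical coefficients, photon count, and step limit are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def absorbed_fraction(mu_a=0.1, mu_s=10.0, n_photons=10000, n_steps=200):
        """Minimal Monte Carlo random walk in an infinite homogeneous turbid medium
        (coefficients in 1/mm). Photon weights are attenuated by absorption at each
        event and scattered isotropically; returns the fraction of launched energy
        absorbed after n_steps events."""
        mu_t = mu_a + mu_s
        pos = np.zeros((n_photons, 3))
        direction = np.tile(np.array([0.0, 0.0, 1.0]), (n_photons, 1))  # launch along +z
        weight = np.ones(n_photons)
        absorbed = 0.0
        for _ in range(n_steps):
            step = -np.log(rng.random(n_photons)) / mu_t        # sampled free path length
            pos += direction * step[:, None]
            deposit = weight * (mu_a / mu_t)                    # energy absorbed at this event
            absorbed += deposit.sum()
            weight -= deposit
            # Isotropic scattering: draw a new direction uniformly on the sphere.
            cos_t = 2.0 * rng.random(n_photons) - 1.0
            phi = 2.0 * np.pi * rng.random(n_photons)
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            direction = np.stack([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t], axis=1)
        return absorbed / n_photons

    print(f"absorbed fraction: {absorbed_fraction():.3f}")
    ```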

  19. Cost effective raspberry pi-based radio frequency identification tagging of mice suitable for automated in vivo imaging.

    PubMed

    Bolaños, Federico; LeDue, Jeff M; Murphy, Timothy H

    2017-01-30

    Automation of animal experimentation improves consistency and reduces the potential for error while decreasing animal stress and increasing well-being. Radio frequency identification (RFID) tagging can identify individual mice in group housing environments, enabling animal-specific tracking of physiological parameters. We describe a simple protocol to RFID-tag and detect mice. RFID tags were injected subcutaneously after brief isoflurane anesthesia and do not require surgical steps such as suturing or incisions. We employ glass-encapsulated 125 kHz tags that can be read within 30.2 ± 2.4 mm of the antenna. A Raspberry Pi single-board computer and tag reader enable automated logging, and cross-platform support is possible through Python. We provide sample software written in Python that offers a flexible and cost-effective system for logging the weights of multiple mice against pre-defined targets. The sample software can serve as the basis of any behavioral or physiological task where users need to identify and track specific animals. Recently, we have applied this system of tagging to automated mouse brain imaging within home cages. We provide a cost-effective solution employing open-source software to facilitate adoption in applications such as automated imaging or tracking individual animal weights during tasks where food or water restriction is employed as motivation for a specific behavior. Copyright © 2016 Elsevier B.V. All rights reserved.
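
    The authors' Python sample software is not reproduced here. The sketch below is a hedged illustration assuming a serial-attached reader that emits one ASCII tag ID per line; the port name, tag IDs, target weights, and scale read-out are all hypothetical.

    ```python
    import csv
    import time

    import serial  # pyserial; the reader is assumed to present as a serial device

    # Hypothetical target weights (grams) keyed by RFID tag ID.
    TARGET_WEIGHTS = {"900_000123456789": 25.0, "900_000987654321": 27.5}

    def log_weights(port="/dev/ttyUSB0", scale_read=lambda: 25.3,
                    logfile="weights.csv"):
        """Read tag IDs from a serial RFID reader (one ASCII ID per line, an
        assumption about the reader's output format) and append timestamped
        weights relative to each animal's target."""
        with serial.Serial(port, 9600, timeout=1) as reader, \
                open(logfile, "a", newline="") as fh:
            writer = csv.writer(fh)
            while True:
                line = reader.readline().decode("ascii", errors="ignore").strip()
                if not line or line not in TARGET_WEIGHTS:
                    continue
                weight = scale_read()            # placeholder for the balance read-out
                target = TARGET_WEIGHTS[line]
                writer.writerow([time.time(), line, weight, weight - target])
                fh.flush()
    ```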

  20. Optical analysis of electro-optical systems by MTF calculus

    NASA Astrophysics Data System (ADS)

    Barbarini, Elisa Signoreto; Dos Santos, Daniel, Jr.; Stefani, Mário Antonio; Yasuoka, Fátima Maria Mitsue; Castro Neto, Jarbas C.; Rodrigues, Evandro Luís Linhari

    2011-08-01

    One of the widely used methods for performance analysis of an optical system is the determination of the Modulation Transfer Function (MTF). The MTF represents a quantitative and direct measure of image quality and, besides being an objective test, it can be applied to concatenated optical systems. This paper presents the application of software called SMTF (software modulation transfer function), built with C++ and OpenCV, for MTF calculation on electro-optical systems. Through this technique, it is possible to develop specific methods to measure the real-time performance of a digital fundus camera, an infrared sensor, and an ophthalmological surgery microscope. Each optical instrument mentioned has a particular device to measure the MTF response, which is under development. The MTF information then assists the analysis of the optical system alignment and also defines its resolution limit from the MTF graph. The results obtained from the implemented software are compared with the theoretical MTF curves of the analyzed systems.
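
    SMTF's internal algorithm is not described in the abstract; the following hedged sketch shows one common edge-based way to estimate an MTF: differentiate an edge-spread function to obtain the line-spread function, window it, and take the normalized FFT magnitude. The pixel pitch and synthetic edge are assumptions.

    ```python
    import numpy as np

    def mtf_from_edge(edge_profile, pixel_pitch_mm=0.01):
        """Estimate the MTF from a 1-D edge-spread function (ESF): differentiate to
        obtain the line-spread function (LSF), window it, take the FFT magnitude,
        and normalize to 1 at zero frequency."""
        esf = np.asarray(edge_profile, dtype=float)
        lsf = np.gradient(esf)
        lsf *= np.hanning(lsf.size)                 # suppress edge truncation effects
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)   # cycles/mm
        return freqs, mtf

    # Synthetic blurred edge standing in for a measured profile.
    x = np.linspace(-1, 1, 256)
    edge = 0.5 * (1 + np.tanh(x / 0.05))
    freqs, mtf = mtf_from_edge(edge)
    ```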

  1. Application of Nexus copy number software for CNV detection and analysis.

    PubMed

    Darvishi, Katayoon

    2010-04-01

    Among human structural genomic variation, copy number variants (CNVs) are the most common known component, comprising gains or losses of DNA segments that are generally 1 kb in length or longer. Array-based comparative genomic hybridization (aCGH) has emerged as a powerful tool for detecting CNVs. With the rapid increase in the density of array technology and the adoption of new high-throughput technologies, a reliable and computationally scalable method for accurate mapping of recurring DNA copy number aberrations has become a main focus in research. Here we introduce Nexus Copy Number software, a platform-independent tool, to analyze the output files of all types of commercial and custom-made comparative genomic hybridization (CGH) and single-nucleotide polymorphism (SNP) arrays, such as those manufactured by Affymetrix, Agilent Technologies, Illumina, and Roche NimbleGen. It also supports data generated by various array image-analysis software tools such as GenePix, ImaGene, and BlueFuse. (c) 2010 by John Wiley & Sons, Inc.

  2. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience

    PubMed Central

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S.; Mcginnis, Issac

    2017-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, ranging from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. They exhibit poor reliability growth and hence a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management for improving the software development process. As NASA has moved towards product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions. PMID:29278255
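
    The paper's data and exact model set are not reproduced here; the sketch below only illustrates fitting two standard NHPP mean-value functions (Goel-Okumoto exponential and delayed S-shaped) to hypothetical cumulative defect counts with scipy.optimize.curve_fit.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        """Exponential NHPP mean-value function: expected cumulative defects."""
        return a * (1.0 - np.exp(-b * t))

    def delayed_s_shaped(t, a, b):
        """Delayed S-shaped NHPP mean-value function."""
        return a * (1.0 - (1.0 + b * t) * np.exp(-b * t))

    # Hypothetical weekly cumulative defect counts for one release.
    t = np.arange(1, 13, dtype=float)
    defects = np.array([3, 8, 15, 24, 31, 37, 41, 44, 46, 47, 48, 48], dtype=float)

    for name, model in [("Goel-Okumoto", goel_okumoto), ("S-shaped", delayed_s_shaped)]:
        params, _ = curve_fit(model, t, defects, p0=(defects[-1], 0.1), maxfev=10000)
        resid = defects - model(t, *params)
        print(f"{name}: a={params[0]:.1f}, b={params[1]:.3f}, SSE={np.sum(resid**2):.1f}")
    ```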

  3. Software Reliability Analysis of NASA Space Flight Software: A Practical Experience.

    PubMed

    Sukhwani, Harish; Alonso, Javier; Trivedi, Kishor S; Mcginnis, Issac

    2016-01-01

    In this paper, we present the software reliability analysis of the flight software of a recently launched space mission. For our analysis, we use the defect reports collected during the flight software development. We find that this software was developed in multiple releases, each release spanning all software life-cycle phases. We also find that the software releases were developed and tested for four different hardware platforms, ranging from off-the-shelf or emulation hardware to actual flight hardware. For releases that exhibit reliability growth or decay, we fit Software Reliability Growth Models (SRGM); otherwise we fit a distribution function. We find that most releases exhibit reliability growth, with Log-Logistic (NHPP) and S-Shaped (NHPP) as the best-fit SRGMs. For the releases that experience reliability decay, we investigate the causes. We find that such releases were the first software releases to be tested on a new hardware platform, and hence they encountered major hardware integration issues. Such releases also seem to have been developed under time pressure in order to start testing on the new hardware platform sooner. They exhibit poor reliability growth and hence a high predicted failure rate. Other problems include hardware specification changes and delivery delays from vendors. Thus, our analysis provides critical insights and inputs to management for improving the software development process. As NASA has moved towards product line engineering for its flight software development, software for future space missions will be developed in a similar manner, and hence the analysis results for this mission can be considered a baseline for future flight software missions.

  4. ART-ML: a new markup language for modelling and representation of biological processes in cardiovascular diseases.

    PubMed

    Karvounis, E C; Exarchos, T P; Fotiou, E; Sakellarios, A I; Iliopoulou, D; Koutsouris, D; Fotiadis, D I

    2013-01-01

    With an ever-increasing number of biological models available on the internet, a standardized modelling framework is required to allow information to be accessed and visualized. In this paper we propose a novel Extensible Markup Language (XML) based format called ART-ML that aims at supporting the interoperability and reuse of models of geometry, blood flow, plaque progression and stent modelling exported by any cardiovascular disease modelling software. ART-ML has been developed and tested using ARTool. ARTool is a platform for the automatic processing of various image modalities of coronary and carotid arteries. The images and their content are fused to develop morphological models of the arteries in 3D representations. All the above procedures integrate disparate data formats, protocols and tools. ART-ML, which extends ARTool, proposes a representation that makes the individual resources interoperable, creating a standard unified model for the description of the data and, consequently, a machine-independent format for their exchange and representation. More specifically, the ARTool platform incorporates efficient algorithms which are able to perform blood flow simulations and atherosclerotic plaque evolution modelling. Integration of data layers between different modules within ARTool is based upon the interchange of information included in the ART-ML model repository. ART-ML provides a markup representation that enables the representation and management of embedded models within the cardiovascular disease modelling platform, and the storage and interchange of well-defined information. The corresponding ART-ML model incorporates all relevant information regarding geometry, blood flow, plaque progression and stent modelling procedures. All created models are stored in a model repository database which is accessible to the research community using efficient web interfaces, enabling the interoperability of models from any cardiovascular disease modelling software. ART-ML can be used as a reference ML model in multiscale simulations of plaque formation and progression, incorporating all scales of the biological processes.
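
    The published ART-ML schema is not shown in the abstract; the fragment below is a purely illustrative sketch, built with Python's ElementTree, of how geometry, blood-flow and plaque-progression sections might be grouped. All element and attribute names are hypothetical, not the actual ART-ML vocabulary.

    ```python
    import xml.etree.ElementTree as ET

    # Element and attribute names below are illustrative, not the published ART-ML schema.
    model = ET.Element("artml", version="1.0")
    geometry = ET.SubElement(model, "geometry", modality="CT", artery="carotid")
    ET.SubElement(geometry, "mesh", file="carotid_surface.vtk")
    flow = ET.SubElement(model, "bloodFlow")
    ET.SubElement(flow, "boundaryCondition", type="inlet", velocity_m_s="0.35")
    plaque = ET.SubElement(model, "plaqueProgression")
    ET.SubElement(plaque, "timeStep", months="6", lipidVolume_mm3="12.4")

    ET.indent(model)                      # Python 3.9+: pretty-print indentation
    print(ET.tostring(model, encoding="unicode"))
    ```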

  5. ChRIS--A web-based neuroimaging and informatics system for collecting, organizing, processing, visualizing and sharing of medical data.

    PubMed

    Pienaar, Rudolph; Rannou, Nicolas; Bernal, Jorge; Hahn, Daniel; Grant, P Ellen

    2015-01-01

    The utility of web browsers for general-purpose computing, long anticipated, is only now coming to fruition. In this paper we present a web-based medical image data and information management software platform called ChRIS ([Boston] Children's Research Integration System). ChRIS' deep functionality allows for easy retrieval of medical image data from resources typically found in hospitals; organizes and presents information in a modern feed-like interface; provides access to a growing library of plugins that process these data, typically on a connected High Performance Compute Cluster; allows for easy data sharing between users and instances of ChRIS; and provides powerful 3D visualization and real-time collaboration.

  6. GC31G-1182: Opennex, a Private-Public Partnership in Support of the National Climate Assessment

    NASA Technical Reports Server (NTRS)

    Nemani, Ramakrishna R.; Wang, Weile; Michaelis, Andrew; Votava, Petr; Ganguly, Sangram

    2016-01-01

    The NASA Earth Exchange (NEX) is a collaborative computing platform that has been developed with the objective of bringing scientists together with the software tools, massive global datasets, and supercomputing resources necessary to accelerate research in Earth systems science and global change. NEX is funded as an enabling tool for sustaining the national climate assessment. Over the past five years, researchers have used the NEX platform and produced a number of data sets highly relevant to the National Climate Assessment. These include high-resolution climate projections using different downscaling techniques and trends in historical climate from satellite data. To enable a broader community in exploiting the above datasets, the NEX team partnered with public cloud providers to create the OpenNEX platform. OpenNEX provides ready access to NEX data holdings on a number of public cloud platforms along with pertinent analysis tools and workflows in the form of Machine Images and Docker Containers, lectures and tutorials by experts. We will showcase some of the applications of OpenNEX data and tools by the community on Amazon Web Services, Google Cloud and the NEX Sandbox.

  7. What can we learn from in-soil imaging of a live plant: X-ray Computed Tomography and 3D numerical simulation of root-soil system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiaofan; Varga, Tamas; Liu, Chongxuan

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere. X-ray Computed Tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. A combination of XCT, open-source software, and in-house developed code was used to non-invasively image a prairie dropseed (Sporobolus heterolepis) specimen, segment the root data to obtain a 3D image of the root structure, and extract quantitative information from the 3D data, respectively. Based on the explicitly-resolved root structure, pore-scale computational fluid dynamics (CFD) simulations were applied to numerically investigate the root-soil-groundwater system. The plant root conductivity, soil hydraulic conductivity and transpiration rate were shown to control the groundwater distribution. Furthermore, the coupled imaging-modeling approach demonstrates a realistic platform to investigate rhizosphere flow processes and would be feasible to provide useful information linked to upscaled models.

  8. What can we learn from in-soil imaging of a live plant: X-ray Computed Tomography and 3D numerical simulation of root-soil system

    DOE PAGES

    Yang, Xiaofan; Varga, Tamas; Liu, Chongxuan; ...

    2017-05-04

    Plant roots play a critical role in plant-soil-microbe interactions that occur in the rhizosphere. X-ray Computed Tomography (XCT) has been proven to be an effective tool for non-invasive root imaging and analysis. A combination of XCT, open-source software, and in-house developed code was used to non-invasively image a prairie dropseed (Sporobolus heterolepis) specimen, segment the root data to obtain a 3D image of the root structure, and extract quantitative information from the 3D data, respectively. Based on the explicitly-resolved root structure, pore-scale computational fluid dynamics (CFD) simulations were applied to numerically investigate the root-soil-groundwater system. The plant root conductivity, soil hydraulic conductivity and transpiration rate were shown to control the groundwater distribution. Furthermore, the coupled imaging-modeling approach demonstrates a realistic platform to investigate rhizosphere flow processes and would be feasible to provide useful information linked to upscaled models.

  9. Validity of computational hemodynamics in human arteries based on 3D time-of-flight MR angiography and 2D electrocardiogram gated phase contrast images

    NASA Astrophysics Data System (ADS)

    Yu, Huidan (Whitney); Chen, Xi; Chen, Rou; Wang, Zhiqiang; Lin, Chen; Kralik, Stephen; Zhao, Ye

    2015-11-01

    In this work, we demonstrate the validity of 4-D patient-specific computational hemodynamics (PSCH) based on 3-D time-of-flight (TOF) MR angiography (MRA) and 2-D electrocardiogram (ECG) gated phase contrast (PC) images. The mesoscale lattice Boltzmann method (LBM) is employed to segment morphological arterial geometry from TOF MRA, to extract velocity profiles from ECG PC images, and to simulate fluid dynamics on a unified GPU accelerated computational platform. Two healthy volunteers are recruited to participate in the study. For each volunteer, a 3-D high resolution TOF MRA image and 10 2-D ECG gated PC images are acquired to provide the morphological geometry and the time-varying flow velocity profiles for necessary inputs of the PSCH. Validation results will be presented through comparisons of LBM vs. 4D Flow Software for flow rates and LBM simulation vs. MRA measurement for blood flow velocity maps. Indiana University Health (IUH) Values Fund.

  10. Mobile Ultrasound Plane Wave Beamforming on iPhone or iPad using Metal- based GPU Processing

    NASA Astrophysics Data System (ADS)

    Hewener, Holger J.; Tretbar, Steffen H.

    Mobile and cost-effective ultrasound devices are being used in point-of-care scenarios and the trauma room. To reduce the costs of such devices, we previously presented the possibility of using consumer devices like the Apple iPad for full signal processing of raw data for ultrasound image generation. Using technologies like plane wave imaging, which generates a full image with only one excitation/reception event, the acquisition times and power consumption of ultrasound imaging can be reduced for low-power mobile devices based on consumer electronics, realizing the transition from FPGA- or ASIC-based beamforming to more flexible software beamforming. The massively parallel beamforming can be done with the Apple framework "Metal" for advanced graphics and general-purpose GPU processing on the iOS platform. We were able to integrate the beamforming reconstruction into our mobile ultrasound processing application with imaging rates up to 70 Hz on iPad Air 2 hardware.
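
    The Metal GPU kernel itself is not shown; the following NumPy sketch illustrates the delay-and-sum computation for a single 0° plane-wave transmit that such a kernel parallelizes. The transducer geometry, sampling rate, and synthetic channel data are assumptions.

    ```python
    import numpy as np

    def das_plane_wave(rf, fs, c, pitch, z_axis, x_axis):
        """Delay-and-sum beamforming of a single 0-degree plane-wave transmit.
        rf: (n_samples, n_elements) channel data, fs: sampling rate [Hz],
        c: speed of sound [m/s], pitch: element spacing [m]."""
        n_samples, n_elements = rf.shape
        elem_x = (np.arange(n_elements) - (n_elements - 1) / 2) * pitch
        image = np.zeros((z_axis.size, x_axis.size))
        for iz, z in enumerate(z_axis):
            for ix, x in enumerate(x_axis):
                # Transmit delay (plane wave) + receive delay (element to pixel).
                t = (z + np.sqrt(z**2 + (x - elem_x) ** 2)) / c
                idx = np.clip((t * fs).astype(int), 0, n_samples - 1)
                image[iz, ix] = rf[idx, np.arange(n_elements)].sum()
        return image

    # Tiny synthetic example: random channel data, 64 elements at 300 um pitch.
    rf = np.random.randn(2048, 64)
    img = das_plane_wave(rf, fs=40e6, c=1540.0, pitch=0.3e-3,
                         z_axis=np.linspace(5e-3, 40e-3, 64),
                         x_axis=np.linspace(-9e-3, 9e-3, 64))
    ```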

  11. RSEIS and RFOC: Seismic Analysis in R

    NASA Astrophysics Data System (ADS)

    Lees, J. M.

    2015-12-01

    Open software is essential for reproducible scientific exchange. R packages provide a platform for the development of seismological investigation software that can be properly documented and traced for data processing. A suite of R packages designed for a wide range of seismic analyses is currently available in the free software platform R, which is based on the S language developed at Bell Labs decades ago. Routines in R can be run as standalone function calls or developed in object-oriented mode. R comes with a base set of routines and thousands of user-developed packages. The packages developed at UNC include subroutines and interactive codes for processing seismic data, analyzing geographic information (GIS) and inverting data involved in a variety of geophysical applications. Packages related to seismic analysis currently available on CRAN (Comprehensive R Archive Network, http://www.r-project.org/) are RSEIS, Rquake, GEOmap, RFOC, zoeppritz, RTOMO, geophys, Rwave, PEIP, hht, and rFDSN. These include signal processing, data management, mapping, earthquake location, deconvolution, focal mechanisms, wavelet transforms, Hilbert-Huang transforms, tomographic inversion, and Mogi deformation, among other useful functionality. All software in R packages is required to have detailed documentation, making the exchange and modification of existing software easy. In this presentation, I will focus on the packages RSEIS and RFOC, showing examples from a variety of seismic analyses. The R approach has similarities to the popular (and expensive) MATLAB platform, although R is open source and free to download.

  12. Software-Defined Radio for Space-to-Space Communications

    NASA Technical Reports Server (NTRS)

    Fisher, Ken; Jih, Cindy; Moore, Michael S.; Price, Jeremy C.; Abbott, Ben A.; Fritz, Justin A.

    2011-01-01

    A paper describes the Space-to-Space Communications System (SSCS) Software-Defined Radio (SDR) research project to determine the most appropriate method for creating flexible and reconfigurable radios to implement wireless communications channels for space vehicles so that fewer radios are required, and commonality in hardware and software architecture can be leveraged for future missions. The ability to reconfigure the SDR through software enables one radio platform to be reconfigured to interoperate with many different waveforms. This means a reduction in the number of physical radio platforms necessary to support a space mission's communication requirements, thus decreasing the total size, weight, and power needed for a mission.

  13. Mindboggling morphometry of human brains

    PubMed Central

    Bao, Forrest S.; Giard, Joachim; Stavsky, Eliezer; Lee, Noah; Rossa, Brian; Reuter, Martin; Chaibub Neto, Elias

    2017-01-01

    Mindboggle (http://mindboggle.info) is an open source brain morphometry platform that takes in preprocessed T1-weighted MRI data and outputs volume, surface, and tabular data containing label, feature, and shape information for further analysis. In this article, we document the software and demonstrate its use in studies of shape variation in healthy and diseased humans. The number of different shape measures and the size of the populations make this the largest and most detailed shape analysis of human brains ever conducted. Brain image morphometry shows great potential for providing much-needed biological markers for diagnosing, tracking, and predicting progression of mental health disorders. Very few software algorithms provide more than measures of volume and cortical thickness, while more subtle shape measures may provide more sensitive and specific biomarkers. Mindboggle computes a variety of (primarily surface-based) shapes: area, volume, thickness, curvature, depth, Laplace-Beltrami spectra, Zernike moments, etc. We evaluate Mindboggle’s algorithms using the largest set of manually labeled, publicly available brain images in the world and compare them against state-of-the-art algorithms where they exist. All data, code, and results of these evaluations are publicly available. PMID:28231282

  14. FTOOLS: A FITS Data Processing and Analysis Software Package

    NASA Astrophysics Data System (ADS)

    Blackburn, J. K.

    FTOOLS, a highly modular collection of over 110 utilities for processing and analyzing data in the FITS (Flexible Image Transport System) format, has been developed in support of the HEASARC (High Energy Astrophysics Science Archive Research Center) at NASA's Goddard Space Flight Center. Each utility performs a single simple task such as presentation of file contents, extraction of specific rows or columns, appending or merging tables, binning values in a column or selecting subsets of rows based on a boolean expression. Individual utilities can easily be chained together in scripts to achieve more complex operations such as the generation and displaying of spectra or light curves. The collection of utilities provides both generic processing and analysis utilities and utilities specific to high energy astrophysics data sets used for the ASCA, ROSAT, GRO, and XTE missions. A core set of FTOOLS providing support for generic FITS data processing, FITS image analysis and timing analysis can easily be split out of the full software package for users not needing the high energy astrophysics mission utilities. The FTOOLS software package is designed to be both compatible with IRAF and completely stand alone in a UNIX or VMS environment. The user interface is controlled by standard IRAF parameter files. The package is self documenting through the IRAF help facility and a stand alone help task. Software is written in ANSI C and Fortran to provide portability across most computer systems. The data format dependencies between hardware platforms are isolated through the FITSIO library package.

  15. Archive of Digital Chirp Subbottom Profile Data Collected During USGS Cruise 14BIM05 Offshore of Breton Island, Louisiana, August 2014

    USGS Publications Warehouse

    Forde, Arnell S.; Flocks, James G.; Wiese, Dana S.; Fredericks, Jake J.

    2016-03-29

    The archived trace data are in standard SEG Y rev. 0 format (Barry and others, 1975); the first 3,200 bytes of the card image header are in American Standard Code for Information Interchange (ASCII) format instead of Extended Binary Coded Decimal Interchange Code (EBCDIC) format. The SEG Y files are available on the DVD version of this report or online, downloadable via the USGS Coastal and Marine Geoscience Data System (http://cmgds.marine.usgs.gov). The data are also available for viewing using GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org) multi-platform open source software. The Web version of this archive does not contain the SEG Y trace files. To obtain the complete DVD archive, contact USGS Information Services at 1-888-ASK-USGS or infoservices@usgs.gov. The SEG Y files may be downloaded and processed with commercial or public domain software such as Seismic Unix (SU) (Cohen and Stockwell, 2010). See the How To Download SEG Y Data page for download instructions. The printable profiles are provided as Graphics Interchange Format (GIF) images processed and gained using SU software and can be viewed from the Profiles page or by using the links located on the trackline maps; refer to the Software page for links to example SU processing scripts.
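
    As a hedged illustration of working with these files, the sketch below reads the 3,200-byte textual header of a SEG Y file and splits it into 40 eighty-character card-image lines, decoding as ASCII (as stated for this archive) with an EBCDIC fallback for standard rev. 0 files. The file name is hypothetical.

    ```python
    def read_segy_text_header(path):
        """Return the 3,200-byte SEG Y textual header as 40 eighty-character
        'card image' lines. This archive stores it in ASCII; standard SEG Y
        rev. 0 files use EBCDIC, so fall back to that encoding if needed."""
        with open(path, "rb") as fh:
            raw = fh.read(3200)
        try:
            text = raw.decode("ascii")
        except UnicodeDecodeError:
            text = raw.decode("cp500")       # EBCDIC code page
        return [text[i:i + 80] for i in range(0, 3200, 80)]

    # for line in read_segy_text_header("14BIM05_line01.sgy"):  # hypothetical file name
    #     print(line)
    ```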

  16. MEGA X: Molecular Evolutionary Genetics Analysis across Computing Platforms.

    PubMed

    Kumar, Sudhir; Stecher, Glen; Li, Michael; Knyaz, Christina; Tamura, Koichiro

    2018-06-01

    The Molecular Evolutionary Genetics Analysis (Mega) software implements many analytical methods and tools for phylogenomics and phylomedicine. Here, we report a transformation of Mega to enable cross-platform use on Microsoft Windows and Linux operating systems. Mega X does not require virtualization or emulation software and provides a uniform user experience across platforms. Mega X has additionally been upgraded to use multiple computing cores for many molecular evolutionary analyses. Mega X is available in two interfaces (graphical and command line) and can be downloaded from www.megasoftware.net free of charge.

  17. Arsenic removal from contaminated groundwater by membrane-integrated hybrid plant: optimization and control using Visual Basic platform.

    PubMed

    Chakrabortty, S; Sen, M; Pal, P

    2014-03-01

    A simulation software package (ARRPA) has been developed on the Microsoft Visual Basic platform for optimization and control of a novel membrane-integrated arsenic separation plant, in the absence of any such software. The user-friendly, menu-driven software is based on a dynamic linearized mathematical model developed for the hybrid treatment scheme. The model captures the chemical kinetics in the pre-treating chemical reactor and the separation and transport phenomena involved in nanofiltration. The software has been validated through extensive experimental investigations. The agreement between the outputs of the computer simulation program and the experimental findings is excellent and consistent under varying operating conditions, reflecting the high degree of accuracy and reliability of the software. High values of the overall correlation coefficient (R² = 0.989) and the Willmott d-index (0.989) are indicators of the capability of the software in analyzing the performance of the plant. The software permits pre-analysis and manipulation of input data, helps in optimization, and exhibits the performance of the integrated plant visually on a graphical platform. Performance analysis of the whole system as well as the individual units is possible using the tool. The software, the first of its kind in its domain and in the well-known Microsoft Excel environment, is likely to be very useful in the successful design, optimization and operation of an advanced hybrid treatment plant for removal of arsenic from contaminated groundwater.
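
    The two agreement statistics quoted above can be computed as sketched below (squared Pearson correlation and Willmott's index of agreement); the observed/simulated values shown are hypothetical and not the paper's data.

    ```python
    import numpy as np

    def agreement_stats(observed, predicted):
        """Squared Pearson correlation (R^2) and Willmott's index of agreement d."""
        o, p = np.asarray(observed, float), np.asarray(predicted, float)
        r = np.corrcoef(o, p)[0, 1]
        o_mean = o.mean()
        d = 1.0 - np.sum((p - o) ** 2) / np.sum((np.abs(p - o_mean) + np.abs(o - o_mean)) ** 2)
        return r ** 2, d

    # Hypothetical observed vs. simulated arsenic removal efficiencies (%).
    obs = [92.1, 93.5, 95.0, 96.2, 97.8]
    sim = [91.8, 93.9, 94.6, 96.5, 97.5]
    r2, d = agreement_stats(obs, sim)
    print(f"R^2 = {r2:.3f}, Willmott d = {d:.3f}")
    ```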

  18. Global Ultraviolet Imaging Processing for the GGS Polar Visible Imaging System (VIS)

    NASA Technical Reports Server (NTRS)

    Frank, L. A.

    1997-01-01

    The Visible Imaging System (VIS) on Polar spacecraft of the NASA Goddard Space Flight Center was launched into orbit about Earth on February 24, 1996. Since shortly after launch, the Earth Camera subsystem of the VIS has been operated nearly continuously to acquire far ultraviolet, global images of Earth and its northern and southern auroral ovals. The only exceptions to this continuous imaging occurred for approximately 10 days at the times of the Polar spacecraft re-orientation maneuvers in October, 1996 and April, 1997. Since launch, approximately 525,000 images have been acquired with the VIS Earth Camera. The VIS instrument operational health continues to be excellent. Since launch, all systems have operated nominally with all voltages, currents, and temperatures remaining at nominal values. In addition, the sensitivity of the Earth Camera to ultraviolet light has remained constant throughout the operation period. Revised flight software was uploaded to the VIS in order to compensate for the spacecraft wobble. This is accomplished by electronic shuttering of the sensor in synchronization with the 6-second period of the wobble, thus recovering the original spatial resolution obtainable with the VIS Earth Camera. In addition, software patches were uploaded to make the VIS immune to signal dropouts that occur in the sliprings of the despun platform mechanism. These changes have worked very well. The VIS and in particular the VIS Earth Camera is fully operational and will continue to acquire global auroral images as the sun progresses toward solar maximum conditions after the turn of the century.

  19. Software-based approach toward vendor independent real-time photoacoustic imaging using ultrasound beamformed data

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Huang, Howard; Lei, Chen; Kim, Younsu; Boctor, Emad M.

    2017-03-01

    Photoacoustic (PA) imaging has shown its potential for many clinical applications, but current research and usage of PA imaging are constrained by additional hardware costs to collect channel data, as the PA signals are incorrectly processed in existing clinical ultrasound systems. This problem arises from the fact that ultrasound systems beamform the PA signals as echoes from the ultrasound transducer instead of directly from illuminated sources. Consequently, conventional implementations of PA imaging rely on parallel channel acquisition from research platforms, which are not only slow and expensive, but are also mostly not approved by the FDA for clinical use. In previous studies, we have proposed the synthetic-aperture based photoacoustic re-beamformer (SPARE) that uses ultrasound beamformed radio frequency (RF) data as the input, which is readily available in clinical ultrasound scanners. The goal of this work is to implement the SPARE beamformer in a clinical ultrasound system, and to experimentally demonstrate its real-time visualization. Assuming a high pulse repetition frequency (PRF) laser is used, a PZT-based pseudo PA source transmission was synchronized with the ultrasound line trigger. As a result, the frame rate increases when limiting the image field-of-view (FOV), with 50 to 20 frames per second achieved for FOVs from 35 mm to 70 mm depth, respectively. Although in reality the maximum PRF of laser firing limits the PA image frame rate, this result indicates that the developed software is capable of displaying PA images with the maximum possible frame rate for a given laser system without acquiring channel data.

  20. Potential of the Cogex Software Platform to Replace Logbooks in Capstone Design Projects

    ERIC Educational Resources Information Center

    Foley, David; Charron, François; Plante, Jean-Sébastien

    2018-01-01

    Recent technologies are offering the power to share and grow knowledge and ideas in unprecedented ways. The CogEx software platform was developed to take advantage of the digital world with innovative ideas to support designers work in both industrial and academic contexts. This paper presents a qualitative study on the usage of CogEx during…

  1. Parallel algorithm of VLBI software correlator under multiprocessor environment

    NASA Astrophysics Data System (ADS)

    Zheng, Weimin; Zhang, Dong

    2007-11-01

    The correlator is the key signal processing equipment of a Very Long Baseline Interferometry (VLBI) synthetic aperture telescope. It receives the mass of data collected by the VLBI observatories and produces the visibility function of the target, which can be used for spacecraft positioning, baseline length measurement, synthesis imaging, and other scientific applications. VLBI data correlation is both data-intensive and computation-intensive. This paper presents the algorithms of two parallel software correlators for multiprocessor environments. A near real-time correlator for spacecraft tracking adopts pipelining and thread-parallel technology, and runs on SMP (Symmetric Multiple Processor) servers. Another high-speed prototype correlator using a mixed Pthreads and MPI (Message Passing Interface) parallel algorithm is realized on a small Beowulf cluster platform. Both correlators have a flexible structure, are scalable, and can correlate data from 10 stations.
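
    Neither correlator's code is reproduced here; the sketch below only illustrates the core FX-correlation step that such software parallelizes: FFT each station's segment, multiply one spectrum by the conjugate of the other, and accumulate over segments. The segment length and synthetic signals are assumptions.

    ```python
    import numpy as np

    def fx_correlate(station_a, station_b, fft_len=1024):
        """Accumulate the cross-spectrum (visibility) of two stations' sampled
        voltage streams, segment by segment (FX-style correlation)."""
        n_segments = min(station_a.size, station_b.size) // fft_len
        cross = np.zeros(fft_len // 2 + 1, dtype=complex)
        for k in range(n_segments):
            seg_a = station_a[k * fft_len:(k + 1) * fft_len]
            seg_b = station_b[k * fft_len:(k + 1) * fft_len]
            cross += np.fft.rfft(seg_a) * np.conj(np.fft.rfft(seg_b))
        return cross / n_segments

    # Synthetic test: a common signal with a small relative delay plus noise.
    rng = np.random.default_rng(1)
    common = rng.standard_normal(1 << 16)
    a = common + 0.5 * rng.standard_normal(common.size)
    b = np.roll(common, 3) + 0.5 * rng.standard_normal(common.size)
    # Inverse FFT of the cross-spectrum gives the cross-correlation; its peak
    # reveals the relative delay between the two stations.
    lags = np.fft.irfft(fx_correlate(a, b))
    ```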

  2. Web based 3-D medical image visualization on the PC.

    PubMed

    Kim, N; Lee, D H; Kim, J H; Kim, Y; Cho, H J

    1998-01-01

    With the recent advance of the Web and its associated technologies, information sharing in distributed computing environments has gained a great amount of attention from researchers in many application areas, such as medicine, engineering, and business. One basic requirement of distributed medical consultation systems is that geographically dispersed, disparate participants are allowed to exchange information readily with each other. Such software also needs to be supported on a broad range of computer platforms to increase the software's accessibility. In this paper, the development of a World Wide Web-based medical consultation system for radiology imaging is addressed to provide platform independence and greater accessibility. The system supports sharing of 3-dimensional objects. We use VRML (Virtual Reality Modeling Language), which is the de facto standard for 3-D modeling on the Web. 3-D objects are reconstructed from CT or MRI volume data in VRML format, which can be viewed and manipulated easily in Web browsers with a VRML plug-in. A marching cubes method is used in the transformation of scanned volume data sets to polygonal surfaces in VRML. A decimation algorithm is adopted to reduce the number of polygons in the resulting VRML file. 3-D volume data are often very large, hence loading the data on PC-level computers requires a significant reduction of the size of the data while minimizing the loss of the original shape information. This is also important to decrease network delays. A prototype system has been implemented (http://cybernet5.snu.ac.kr/-cyber/mrivrml.html), and several sessions of experiments were carried out.
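
    As a hedged, present-day illustration of the surface-extraction step (not the original implementation), the sketch below uses scikit-image's marching cubes on a synthetic volume; decimation and VRML export are not shown.

    ```python
    import numpy as np
    from skimage import measure

    # Synthetic volume: a sphere of radius 20 voxels standing in for a segmented CT/MRI organ.
    zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32]
    volume = (np.sqrt(xx**2 + yy**2 + zz**2) < 20).astype(np.float32)

    # Extract an iso-surface; verts/faces could then be decimated and written out
    # (e.g., as VRML IndexedFaceSet nodes) for viewing in a browser plug-in.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(f"{verts.shape[0]} vertices, {faces.shape[0]} triangles before decimation")
    ```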

  3. Evaluation of H.264 and H.265 full motion video encoding for small UAS platforms

    NASA Astrophysics Data System (ADS)

    McGuinness, Christopher D.; Walker, David; Taylor, Clark; Hill, Kerry; Hoffman, Marc

    2016-05-01

    Of all the steps in the image acquisition and formation pipeline, compression is the only process that degrades image quality. A selected compression algorithm succeeds or fails to provide sufficient quality at the requested compression rate depending on how well the algorithm is suited to the input data. Applying an algorithm designed for one type of data to a different type often results in poor compression performance. This is mostly the case when comparing the performance of H.264, designed for standard definition data, to HEVC (High Efficiency Video Coding), which the Joint Collaborative Team on Video Coding (JCT-VC) designed for high-definition data. This study focuses on evaluating how HEVC compares to H.264 when compressing data from small UAS platforms. To compare the standards directly, we assess two open-source traditional software solutions: x264 and x265. These software-only comparisons allow us to establish a baseline of how much improvement can generally be expected of HEVC over H.264. Then, specific solutions leveraging different types of hardware are selected to understand the limitations of commercial-off-the-shelf (COTS) options. Algorithmically, regardless of the implementation, HEVC is found to provide similar quality video as H.264 at 40% lower data rates for video resolutions greater than 1280x720, roughly 1 Megapixel (MPx). For resolutions less than 1MPx, H.264 is an adequate solution though a small (roughly 20%) compression boost is earned by employing HEVC. New low cost, size, weight, and power (CSWAP) HEVC implementations are being developed and will be ideal for small UAS systems.

  4. SIproc: an open-source biomedical data processing platform for large hyperspectral images.

    PubMed

    Berisha, Sebastian; Chang, Shengyuan; Saki, Sam; Daeinejad, Davar; He, Ziqi; Mankar, Rupali; Mayerich, David

    2017-04-10

    There has recently been significant interest within the vibrational spectroscopy community to apply quantitative spectroscopic imaging techniques to histology and clinical diagnosis. However, many of the proposed methods require collecting spectroscopic images that have a similar region size and resolution to the corresponding histological images. Since spectroscopic images contain significantly more spectral samples than traditional histology, the resulting data sets can approach hundreds of gigabytes to terabytes in size. This makes them difficult to store and process, and the tools available to researchers for handling large spectroscopic data sets are limited. Fundamental mathematical tools, such as MATLAB, Octave, and SciPy, are extremely powerful but require that the data be stored in fast memory. This memory limitation becomes impractical for even modestly sized histological images, which can be hundreds of gigabytes in size. In this paper, we propose an open-source toolkit designed to perform out-of-core processing of hyperspectral images. By taking advantage of graphical processing unit (GPU) computing combined with adaptive data streaming, our software alleviates common workstation memory limitations while achieving better performance than existing applications.
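
    SIproc's GPU streaming engine is not reproduced here; the sketch below illustrates the general out-of-core pattern on the CPU side: memory-map a large hyperspectral cube and process it in slabs so only one slab is resident at a time. The cube dimensions and file layout are assumptions.

    ```python
    import numpy as np

    # Hypothetical dimensions: 2048 x 2048 pixels, 800 spectral bands, stored as
    # a raw float32 cube in band-sequential order (bands, rows, cols).
    BANDS, ROWS, COLS = 800, 2048, 2048
    CHUNK_ROWS = 64                     # rows processed per pass; tune to fit RAM

    def band_means(path):
        """Compute per-band mean intensity without loading the cube into memory."""
        cube = np.memmap(path, dtype=np.float32, mode="r", shape=(BANDS, ROWS, COLS))
        totals = np.zeros(BANDS, dtype=np.float64)
        for r0 in range(0, ROWS, CHUNK_ROWS):
            chunk = np.asarray(cube[:, r0:r0 + CHUNK_ROWS, :])   # stream one slab
            totals += chunk.sum(axis=(1, 2))
        return totals / (ROWS * COLS)
    ```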

  5. Open-source image reconstruction of super-resolution structured illumination microscopy data in ImageJ

    PubMed Central

    Müller, Marcel; Mönkemöller, Viola; Hennig, Simon; Hübner, Wolfgang; Huser, Thomas

    2016-01-01

    Super-resolved structured illumination microscopy (SR-SIM) is an important tool for fluorescence microscopy. SR-SIM microscopes perform multiple image acquisitions with varying illumination patterns, and reconstruct them to a super-resolved image. In its most frequent, linear implementation, SR-SIM doubles the spatial resolution. The reconstruction is performed numerically on the acquired wide-field image data, and thus relies on a software implementation of specific SR-SIM image reconstruction algorithms. We present fairSIM, an easy-to-use plugin that provides SR-SIM reconstructions for a wide range of SR-SIM platforms directly within ImageJ. For research groups developing their own implementations of super-resolution structured illumination microscopy, fairSIM takes away the hurdle of generating yet another implementation of the reconstruction algorithm. For users of commercial microscopes, it offers an additional, in-depth analysis option for their data independent of specific operating systems. As a modular, open-source solution, fairSIM can easily be adapted, automated and extended as the field of SR-SIM progresses. PMID:26996201

  6. Cross-platform validation and analysis environment for particle physics

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

  7. LIMAO: Cross-platform software for simulating laser-induced alignment and orientation dynamics of linear-, symmetric- and asymmetric tops

    NASA Astrophysics Data System (ADS)

    Szidarovszky, Tamás; Jono, Maho; Yamanouchi, Kaoru

    2018-07-01

    A user-friendly and cross-platform software called Laser-Induced Molecular Alignment and Orientation simulator (LIMAO) has been developed. The program can be used to simulate within the rigid rotor approximation the rotational dynamics of gas phase molecules induced by linearly polarized intense laser fields at a given temperature. The software is implemented in the Java and Mathematica programming languages. The primary aim of LIMAO is to aid experimental scientists in predicting and analyzing experimental data representing laser-induced spatial alignment and orientation of molecules.

  8. A client/server system for Internet access to biomedical text/image databanks.

    PubMed

    Thoma, G R; Long, L R; Berman, L E

    1996-01-01

    Internet access to mixed text/image databanks is finding application in the medical world. An example is a database of medical X-rays and associated data consisting of demographic, socioeconomic, physician's exam, medical laboratory and other information collected as part of a nationwide health survey conducted by the government. Another example is a collection of digitized cryosection images, CT and MR taken of cadavers as part of the National Library of Medicine's Visible Human Project. In both cases, the challenge is to provide access to both the image and the associated text for a wide end-user community to create atlases, conduct epidemiological studies, and develop image-specific algorithms for compression, enhancement and other types of image processing, among many other applications. The databanks mentioned above are being created in prototype form. This paper describes the prototype system developed for the archiving of the data and the client software that enables a broad range of end users to access the archive, retrieve text and image data, display the data and manipulate the images. System design considerations include: data organization in a relational database management system with object-oriented extensions; a hierarchical organization of the image data by different resolution levels for different user classes; client design based on common hardware and software platforms incorporating SQL search capability, X Window, Motif and TAE (a development environment supporting rapid prototyping and management of graphic-oriented user interfaces); the potential to include ultra-high-resolution display monitors as a user option; an intuitive user interface paradigm for building complex queries; and contrast enhancement, magnification and mensuration tools for better viewing by the user.

  9. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    NASA Astrophysics Data System (ADS)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because it can deploy to different platforms. The clipped contour data and UAV image data were processed and exported into the web format using Unity 3D. The process then continued by publishing it to the web server for comparing the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. It therefore helps decision makers and planners in this field decide which contour interval is applicable for their task.

  10. Radiation and scattering from printed antennas on cylindrically conformal platforms

    NASA Technical Reports Server (NTRS)

    Kempel, Leo C.; Volakis, John L.; Bindiganavale, Sunil

    1994-01-01

    The goal was to develop suitable methods and software for the analysis of antennas on cylindrical coated and uncoated platforms. Specifically, the finite element boundary integral and finite element ABC methods were employed successfully, and associated software was developed for the analysis and design of wraparound and discrete cavity-backed arrays situated on cylindrical platforms. This work led to the successful implementation of analysis software for such antennas. Developments which played a role in this respect are the efficient implementation of the 3D Green's function for a metallic cylinder, the incorporation of the fast Fourier transform in computing the matrix-vector products executed in the solver of the finite element-boundary integral system, and the development of a new absorbing boundary condition for terminating the finite element mesh on cylindrical surfaces.

  11. Development and implementation of an EPID‐based method for localizing isocenter

    PubMed Central

    Hyer, Daniel E.; Nixon, Earl

    2012-01-01

    The aim of this study was to develop a phantom and analysis software that could be used to quickly and accurately determine the location of radiation isocenter to an accuracy of less than 1 mm using the EPID (Electronic Portal Imaging Device). The proposed solution uses a collimator setting of 10 × 10 cm² to acquire EPID images of a new phantom constructed from LEGO blocks. Images from a number of gantry and collimator angles are analyzed by automated analysis software to determine the position of the jaws and the center of the phantom in each image. The distance between a chosen jaw and the phantom center is then compared to the same distance measured after a 180° collimator rotation to determine if the phantom is centered in the dimension being investigated. Repeated tests show that the system is reproducibly independent of the imaging session, and the calculated offsets of the phantom from radiation isocenter are a function of phantom setup only. The accuracy of the algorithm's calculated offsets was verified by imaging the LEGO phantom before and after applying the calculated offset. These measurements show that the offsets are predicted with an accuracy of approximately 0.3 mm, which is on the order of the detector's pitch. Comparison with a star-shot analysis yielded agreement of isocenter location within 0.5 mm. Additionally, the phantom and software are completely independent of linac vendor, and this study presents results from two linac manufacturers. A Varian Optical Guidance Platform (OGP) calibration array was also integrated into the phantom to allow calibration of the OGP while the phantom is positioned at radiation isocenter, reducing setup uncertainty in the calibration. This solution offers a quick, objective method to perform isocenter localization as well as laser alignment and OGP calibration on a monthly basis. PACS number: 87.55.Qr PMID:23149787
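
    One way to read the comparison described above: if the phantom were exactly at isocenter, the jaw-to-center distance would be identical before and after the 180° collimator rotation, so half their difference estimates the offset along that axis. The sketch below is an interpretation, not the authors' code, and the numbers are hypothetical.

    ```python
    def phantom_offset_mm(dist_col0_mm, dist_col180_mm):
        """Offset of the phantom center from radiation isocenter along one axis,
        estimated from the jaw-to-center distance measured before and after a
        180-degree collimator rotation (a sketch of the comparison described above)."""
        return 0.5 * (dist_col0_mm - dist_col180_mm)

    # Example: 50.6 mm at collimator 0 deg, 50.0 mm at 180 deg -> 0.3 mm offset.
    print(phantom_offset_mm(50.6, 50.0))
    ```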

  12. The Development of GIS Educational Resources Sharing among Central Taiwan Universities

    NASA Astrophysics Data System (ADS)

    Chou, T.-Y.; Yeh, M.-L.; Lai, Y.-C.

    2011-09-01

    Using GIS in the classroom enhances students' computer skills and broadens their range of knowledge. This paper highlights GIS integration on an e-learning platform and introduces a variety of abundant educational resources. The research project demonstrates tools for an e-learning environment and delivers case studies of learning interaction from Central Taiwan universities. Feng Chia University (FCU) obtained a remarkable academic project subsidized by the Ministry of Education and developed an e-learning platform for excellence in teaching/learning programs among central Taiwan's universities. The aim of the project is to integrate the educational resources of 13 universities in central Taiwan, with FCU serving as the hub. To overcome the problem of distance, e-platforms have been established to create experiences with collaboration-enhanced learning. The e-platforms provide coordination of web service access among the educational community and deliver GIS educational resources. Most GIS-related courses cover the development of GIS, principles of cartography, spatial data analysis and overlaying, terrain analysis, buffer analysis, 3D GIS application, remote sensing, GPS technology, and WebGIS, MobileGIS, and ArcGIS manipulation. In each GIS case study, students are taught to understand the geographic meaning, collect spatial data, and then use ArcGIS software to analyze the spatial data. One of the e-learning platforms provides lesson plans and presentation slides, so students can learn ArcGIS online. As they analyze spatial data, they can connect to the GIS hub to obtain the data they need, including satellite images, aerial photos, and vector data. Moreover, the e-learning platforms provide solutions and resources, and different levels of image scale have been integrated into the systems. Multi-scale spatial development and analyses in central Taiwan integrate academic research resources among CTTLRC partners, thus establishing a decision-making support mechanism in teaching and learning and accelerating communication, cooperation and sharing among academic units.

  13. Database for the collection and analysis of clinical data and images of neoplasms of the sinonasal tract.

    PubMed

    Trimarchi, Matteo; Lund, Valerie J; Nicolai, Piero; Pini, Massimiliano; Senna, Massimo; Howard, David J

    2004-04-01

    The Neoplasms of the Sinonasal Tract software package (NSNT v 1.0) implements a complete visual database for patients with sinonasal neoplasia, facilitating standardization of data and statistical analysis. The software, which is compatible with the Macintosh and Windows platforms, provides a multiuser application with a dedicated server (on Windows NT or 2000, or Macintosh OS 9 or X, and a network of clients) together with web access, if required. The system hardware consists of an Apple Power Macintosh G4 500 MHz computer with PCI bus, 256 Mb of RAM plus a 60 Gb hard disk, or any IBM-compatible computer with a Pentium 2 processor. Image acquisition may be performed with different frame-grabber cards for analog or digital video input of different standards (PAL, SECAM, or NTSC) and levels of quality (VHS, S-VHS, Betacam, Mini DV, DV). The visual database is based on 4th Dimension by 4D Inc, and video compression is performed in real-time MPEG format. Six sections have been developed: demographics, symptoms, extent of disease, radiology, treatment, and follow-up. Acquisition of data includes computed tomography and magnetic resonance imaging, histology, and endoscopy images, allowing sequential comparison. Statistical analysis integral to the program provides Kaplan-Meier survival curves. The development of a dedicated, user-friendly database for sinonasal neoplasia facilitates a multicenter network and has obvious clinical and research benefits.

  14. STSE: Spatio-Temporal Simulation Environment Dedicated to Biology.

    PubMed

    Stoma, Szymon; Fröhlich, Martina; Gerber, Susanne; Klipp, Edda

    2011-04-28

    Recently, the availability of high-resolution microscopy together with advancements in the development of biomarkers as reporters of biomolecular interactions has increased the importance of imaging methods in molecular cell biology. These techniques enable the investigation of cellular characteristics like volume, size and geometry, as well as the volume and geometry of intracellular compartments and the amount of existing proteins, in a spatially resolved manner. Such detailed investigations have opened up many new areas of research in the study of spatial, complex and dynamic cellular systems. One of the crucial challenges for the study of such systems is the design of a well-structured and optimized workflow to provide systematic and efficient hypothesis verification. Computer science can efficiently address this task by providing software that facilitates handling, analysis, and evaluation of biological data to the benefit of experimenters and modelers. The Spatio-Temporal Simulation Environment (STSE) is a set of open-source tools provided to conduct spatio-temporal simulations in discrete structures based on microscopy images. The framework contains modules to digitize, represent, analyze, and mathematically model spatial distributions of biochemical species. Graphical user interface (GUI) tools provided with the software enable meshing of the simulation space based on the Voronoi concept. In addition, it supports automatically acquiring spatial information for the mesh from the images based on pixel luminosity (e.g. corresponding to molecular levels from microscopy images). STSE is freely available either as a stand-alone version or included in the Linux live distribution Systems Biology Operational Software (SB.OS) and can be downloaded from http://www.stse-software.org/. The Python source code as well as a comprehensive user manual and video tutorials are also offered to the research community. We discuss the main concepts of the STSE design and workflow and demonstrate its usefulness using the example of a signaling cascade leading to the formation of a morphological gradient of Fus3 within the cytoplasm of the mating yeast cell Saccharomyces cerevisiae. STSE is an efficient and powerful novel platform, designed for computational handling and evaluation of microscopic images. It allows for an uninterrupted workflow including digitization, representation, analysis, and mathematical modeling. By providing the means to relate the simulation to the image data, it allows for systematic, image-driven model validation or rejection. STSE can be scripted and extended using the Python language. STSE should be considered an API together with workflow guidelines and a collection of GUI tools rather than a stand-alone application. The priority of the project is to provide an easy and intuitive way of extending and customizing the software using the Python language.
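
    STSE's own meshing code is not reproduced here; the sketch below illustrates the kind of step described above: partition an image into Voronoi cells around seed points (by nearest-seed assignment) and assign each cell its mean pixel luminosity. The image and seeds are synthetic.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def voronoi_cell_means(image, seeds):
        """Assign every pixel to its nearest seed (the Voronoi partition) and
        return the per-pixel cell labels plus the mean luminosity of each cell."""
        rows, cols = np.indices(image.shape)
        pixels = np.column_stack([rows.ravel(), cols.ravel()])
        _, cell = cKDTree(seeds).query(pixels)          # Voronoi cell index per pixel
        values = image.ravel().astype(float)
        means = np.bincount(cell, weights=values) / np.bincount(cell)
        return cell.reshape(image.shape), means

    # Synthetic microscopy-like image and a handful of seed points.
    rng = np.random.default_rng(2)
    img = rng.random((128, 128))
    seeds = rng.integers(0, 128, size=(20, 2))
    labels, cell_means = voronoi_cell_means(img, seeds)
    ```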

  15. Layered approach to workstation design for medical image viewing

    NASA Astrophysics Data System (ADS)

    Haynor, David R.; Zick, Gregory L.; Heritage, Marcus B.; Kim, Yongmin

    1992-07-01

    Software engineering principles suggest that complex software systems are best constructed from independent, self-contained modules, thereby maximizing the portability, maintainability, and modifiability of the resulting code. This principle is important in the design of medical imaging workstations, where further developments in technology (CPU, memory, interface devices, displays, network connections) are required for clinically acceptable workstations, and it is desirable to provide different hardware platforms with the same look and feel for the user. In addition, the set of desired functions is relatively well understood, but the optimal user interface for delivering these functions on a clinically acceptable workstation still varies by department, specialty, and individual preference. At the University of Washington, we are developing a viewing station based on the IBM RISC/6000 computer and on new technologies that are just becoming commercially available, including advanced voice recognition systems and an ultra-high-speed network. We are developing a set of specifications and a conceptual design for the workstation and will be producing a prototype. This paper presents our current concepts concerning the architecture and software system design of the future prototype. Our conceptual design specifies requirements for a Database Application Programming Interface (DBAPI) and a User API (UAPI). The DBAPI consists of a set of subroutine calls that define the admissible transactions between the workstation and an image archive. The UAPI describes the requests a user interface program can make of the workstation; it incorporates basic display and image processing functions, yet is specifically designed to allow extensions to the basic set at the application level. We will discuss the fundamental elements of the two APIs and illustrate their application to workstation design.
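
    The DBAPI/UAPI layering described above can be sketched in any language. The hypothetical Python rendering below illustrates the separation; the method names are invented for illustration and are not taken from the paper, which targeted a workstation platform rather than Python.

      # Hypothetical sketch of the layered DBAPI / UAPI separation described above.
      # Method names are invented for illustration; the paper defines the real APIs.
      from abc import ABC, abstractmethod

      class DatabaseAPI(ABC):
          """Admissible transactions between the workstation and an image archive."""
          @abstractmethod
          def query_studies(self, patient_id: str) -> list: ...
          @abstractmethod
          def fetch_image(self, study_id: str, image_id: str) -> bytes: ...

      class UserAPI(ABC):
          """Requests a user-interface program can make of the workstation."""
          @abstractmethod
          def display(self, image: bytes, window: int, level: int) -> None: ...
          @abstractmethod
          def register_extension(self, name: str, handler) -> None: ...

      class Workstation(UserAPI):
          """Core workstation: implements the UAPI on top of any DBAPI backend."""
          def __init__(self, archive: DatabaseAPI):
              self.archive = archive           # archive backend is swappable
              self.extensions = {}
          def display(self, image, window, level):
              print(f"rendering {len(image)} bytes at W={window}/L={level}")
          def register_extension(self, name, handler):
              self.extensions[name] = handler  # application-level extensions to the base set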

  16. A Software Development Platform for Wearable Medical Applications.

    PubMed

    Zhang, Ruikai; Lin, Wei

    2015-10-01

    Wearable medical devices have become a leading trend in the healthcare industry. Microcontrollers are computers on a chip with sufficient processing power and are the preferred embedded computing units in these devices. We have developed a software platform specifically for the design of wearable medical applications with a small code footprint on the microcontroller. It is supported by the open-source real-time operating system FreeRTOS and supplemented with a set of standard APIs for the architecture-specific hardware interfaces on the microcontroller for data acquisition and wireless communication. We modified the tick counter routine in FreeRTOS to include a real-time soft clock. Combined with the multitasking features of FreeRTOS, the platform offers quick development of wearable applications and easy porting of application code to different microprocessors. Test results have demonstrated that application software developed using this platform is highly efficient in CPU usage while maintaining a small code footprint to accommodate the limited memory space of microcontrollers.
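
    The tick-driven soft-clock idea can be illustrated independently of the platform's actual C/FreeRTOS code, which is not reproduced in the abstract. The Python sketch below is a conceptual stand-in: it assumes a hypothetical 1 kHz tick rate and simply derives wall-clock time from a counted tick.

      # Conceptual sketch (Python, for illustration only; the platform itself runs
      # C on FreeRTOS) of a soft real-time clock derived from an RTOS tick counter.
      TICK_RATE_HZ = 1000  # hypothetical tick frequency

      class SoftClock:
          def __init__(self, epoch_seconds: int):
              self.seconds = epoch_seconds    # set once from a host or RTC at start-up
              self.ticks = 0
          def on_tick(self):
              """Called from the (simulated) tick interrupt, once per tick."""
              self.ticks += 1
              if self.ticks >= TICK_RATE_HZ:  # one full second has elapsed
                  self.ticks = 0
                  self.seconds += 1
          def now(self) -> float:
              return self.seconds + self.ticks / TICK_RATE_HZ

      clock = SoftClock(epoch_seconds=1_700_000_000)
      for _ in range(1500):                   # simulate 1.5 s of ticks
          clock.on_tick()
      print(clock.now())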

  17. CROPPER: a metagene creator resource for cross-platform and cross-species compendium studies.

    PubMed

    Paananen, Jussi; Storvik, Markus; Wong, Garry

    2006-09-22

    Current genomic research methods provide researchers with enormous amounts of data. Combining data from different high-throughput research technologies commonly available in biological databases can lead to novel findings and increase research efficiency. However, combining data from heterogeneous sources is often a very arduous task. These sources can be different microarray technology platforms, genomic databases, or experiments performed on various species. Our aim was to develop a software program that could facilitate combining data from heterogeneous sources, and thus allow researchers to perform genomic cross-platform/cross-species studies and to use existing experimental data for compendium studies. We have developed a web-based software resource, called CROPPER, that uses the latest genomic information concerning different data identifiers and orthologous genes from the Ensembl database. CROPPER can be used to combine genomic data from heterogeneous sources, allowing researchers to perform cross-platform/cross-species compendium studies without the need for complex computational tools or an in-house database. We also present an example of a simple cross-platform/cross-species compendium study based on publicly available Parkinson's disease data derived from different sources. CROPPER is a user-friendly and freely available web-based software resource that can be successfully used for cross-species/cross-platform compendium studies.
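
    The core "metagene" idea, joining heterogeneous datasets through a shared identifier or ortholog mapping, can be sketched with toy data. The probe identifiers, values, and metagene names below are invented for illustration; CROPPER itself derives the mapping from Ensembl annotations.

      # Illustrative sketch of the metagene idea: joining heterogeneous datasets
      # through an identifier/ortholog mapping (toy data, not CROPPER's own code).
      human_chip = {"AFFX_001": 2.4, "AFFX_002": 0.7}    # hypothetical human probe values
      mouse_chip = {"ILMN_9001": 1.9, "ILMN_9002": 0.5}  # hypothetical mouse probe values

      # Probe -> metagene mapping, e.g. derived from gene/ortholog identifiers.
      to_metagene = {
          "AFFX_001": "META_SNCA", "ILMN_9001": "META_SNCA",
          "AFFX_002": "META_PARK2", "ILMN_9002": "META_PARK2",
      }

      compendium = {}
      for probe_values in (human_chip, mouse_chip):
          for probe, value in probe_values.items():
              compendium.setdefault(to_metagene[probe], []).append(value)

      print(compendium)   # {'META_SNCA': [2.4, 1.9], 'META_PARK2': [0.7, 0.5]}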

  18. eeDAP: An Evaluation Environment for Digital and Analog Pathology.

    PubMed

    Gallas, Brandon D; Cheng, Wei-Chung; Gavrielides, Marios A; Ivansky, Adam; Keay, Tyler; Wunderlich, Adam; Hipp, Jason; Hewitt, Stephen M

    2014-01-01

    The purpose of this work is to present a platform for designing and executing studies that compare pathologists interpreting histopathology on whole slide images (WSI) viewed on a computer display with pathologists interpreting glass slides on an optical microscope. Here we present eeDAP, an evaluation environment for digital and analog pathology. The key element of eeDAP is the registration of the WSI to the glass slide. Registration is accomplished through computer control of the microscope stage and a camera mounted on the microscope that acquires images of the real-time microscope view. Registration allows the same regions of interest (ROIs) to be evaluated in both domains. This can reduce or eliminate disagreements that arise from pathologists interpreting different areas and focuses the comparison on image quality. We reduced the pathologist interpretation area from an entire glass slide (≈10-30 mm)² to small ROIs of less than (50 µm)², and also made possible the evaluation of individual cells. We summarize eeDAP's software and hardware and provide calculations, and corresponding images, of the microscope field of view and the ROIs extracted from the WSIs. These calculations help convey eeDAP's functionality and operating principles, while the images give a sense of the look and feel of studies that can be conducted in the digital and analog domains. The eeDAP software can be downloaded from code.google.com (project: eeDAP) as Matlab source or as a precompiled stand-alone license-free application.
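
    The registration step, mapping a region selected on the WSI to a physical microscope-stage position, can be illustrated with a generic affine fit. The sketch below is not eeDAP's Matlab implementation; the landmark coordinates are hypothetical, and a real system would also handle rotation checks, refinement, and stage backlash.

      # Sketch of the registration idea behind eeDAP (not its Matlab implementation):
      # fit an affine map from WSI pixel coordinates to microscope stage coordinates
      # using a few manually matched landmark points (hypothetical values below).
      import numpy as np

      wsi_px   = np.array([[1000, 1200], [8000, 1500], [4000, 9000]], float)      # pixels
      stage_um = np.array([[2510, 3020], [20030, 3790], [10050, 22560]], float)   # micrometres

      # Solve stage = [x, y, 1] @ A in a least-squares sense.
      A, *_ = np.linalg.lstsq(np.hstack([wsi_px, np.ones((3, 1))]), stage_um, rcond=None)

      def wsi_to_stage(x, y):
          """Map a WSI ROI centre (pixels) to a stage position (micrometres)."""
          return np.array([x, y, 1.0]) @ A

      print(wsi_to_stage(5000, 5000).round(1))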

  19. CILogon: An Integrated Identity and Access Management Platform for Science

    NASA Astrophysics Data System (ADS)

    Basney, J.

    2016-12-01

    When scientists work together, they use web sites and other software to share their ideas and data. To ensure the integrity of their work, these systems require the scientists to log in and verify that they are part of the team working on a particular science problem. Too often, the identity and access verification process is a stumbling block for the scientists. Scientific research projects are forced to invest time and effort into developing and supporting Identity and Access Management (IAM) services, distracting them from the core goals of their research collaboration. CILogon provides an IAM platform that enables scientists to work together to meet their IAM needs more effectively so they can allocate more time and effort to their core mission of scientific research. The CILogon platform enables federated identity management and collaborative organization management. Federated identity management enables researchers to use their home organization identities to access cyberinfrastructure, rather than requiring yet another username and password to log on. Collaborative organization management enables research projects to define user groups for authorization to collaboration platforms (e.g., wikis, mailing lists, and domain applications). CILogon's IAM platform serves the unique needs of research collaborations, namely the need to dynamically form collaboration groups across organizations and countries, sharing access to data, instruments, compute clusters, and other resources to enable scientific discovery. CILogon provides a software-as-a-service platform to ease integration with cyberinfrastructure, while making all software components publicly available under open source licenses to enable re-use. Figure 1 illustrates the components and interfaces of this platform. CILogon has been operational since 2010 and has been used by over 7,000 researchers from more than 170 identity providers to access cyberinfrastructure including Globus, LIGO, Open Science Grid, SeedMe, and XSEDE. The "CILogon 2.0" platform, launched in 2016, adds support for virtual organization (VO) membership management, identity linking, international collaborations, and standard integration protocols, through integration with the Internet2 COmanage collaboration software.
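
    As a purely illustrative sketch of the group-based authorization pattern that collaborative organization management enables, the snippet below checks a group claim asserted after federated login. The function, claim layout, and group names are hypothetical and are not the CILogon or COmanage API, which expose this information through standard protocols.

      # Hedged sketch of group-based authorization in a collaboration portal
      # (illustrative only; not the CILogon/COmanage API).
      def is_authorized(identity_claims: dict, required_group: str) -> bool:
          """Allow access if the federated identity asserts membership in the group."""
          return required_group in identity_claims.get("isMemberOf", [])

      claims = {  # hypothetical claims released after federated login
          "sub": "jdoe@university.edu",
          "isMemberOf": ["co:ligo:members", "co:xsede:allocation-abc"],
      }
      print(is_authorized(claims, "co:ligo:members"))   # True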

  20. A LabVIEW® based generic CT scanner control software platform.

    PubMed

    Dierick, M; Van Loo, D; Masschaele, B; Boone, M; Van Hoorebeke, L

    2010-01-01

    UGCT, the Centre for X-ray tomography at Ghent University (Belgium), does research on X-ray tomography and its applications, including the development and construction of state-of-the-art CT scanners for scientific research. Because these scanners are built for very different purposes, they differ considerably in their physical implementations; however, they all share common core functionality. In this context, a generic software platform was developed using LabVIEW® in order to provide the same interface and functionality on all scanners. This article describes the concept and features of this software and its potential for tomography in a research setting. The core concept is to rigorously separate the abstract operation of a CT scanner from its actual physical configuration. This separation is achieved by implementing a sender-listener architecture. The advantages are that the resulting software platform is generic, scalable, highly efficient, easy to develop and extend, and deployable on future scanners with minimal effort.
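
    The sender-listener separation can be illustrated with a minimal publish/subscribe sketch. The actual platform is written in LabVIEW; the Python below is only a conceptual stand-in, and the topic names and handlers are invented.

      # Minimal sender-listener (publish/subscribe) sketch illustrating the separation
      # between abstract scanner commands and hardware-specific listeners.
      # (The actual platform is LabVIEW-based; all names here are invented.)
      class MessageBus:
          def __init__(self):
              self.listeners = {}
          def subscribe(self, topic, callback):
              self.listeners.setdefault(topic, []).append(callback)
          def send(self, topic, **payload):
              for callback in self.listeners.get(topic, []):
                  callback(**payload)

      bus = MessageBus()

      # Hardware-specific listeners for one physical scanner configuration.
      bus.subscribe("stage.rotate", lambda angle: print(f"motor driver: rotate {angle} deg"))
      bus.subscribe("detector.acquire", lambda ms: print(f"detector: expose {ms} ms"))

      # The generic scan loop only sends abstract commands, independent of the hardware.
      for step in range(3):
          bus.send("stage.rotate", angle=step * 0.5)
          bus.send("detector.acquire", ms=200)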
