STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.
Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X
2009-08-01
This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
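The parallelization model the abstract describes is PBS job fan-out: one queued job per image set. Below is a minimal sketch of that generic pattern, not STAMPS's actual scripts; the job-script name, the per-subject pipeline script and the resource request are hypothetical.

```python
# Minimal PBS fan-out sketch: generate and queue one job per subject
# directory. Assumes a PBS scheduler (qsub on PATH); script names are
# hypothetical, not STAMPS's own.
import subprocess
from pathlib import Path

PBS_TEMPLATE = """#!/bin/bash
#PBS -N stamp_{subject}
#PBS -l nodes=1:ppn=4,walltime=04:00:00
cd $PBS_O_WORKDIR
./run_stamp_pipeline.sh {subject}   # hypothetical per-subject pipeline script
"""

def submit_all(subject_dir: str) -> None:
    for subject in sorted(Path(subject_dir).iterdir()):
        script = Path(f"stamp_{subject.name}.pbs")
        script.write_text(PBS_TEMPLATE.format(subject=subject.name))
        subprocess.run(["qsub", str(script)], check=True)  # scheduler does the fan-out

if __name__ == "__main__":
    submit_all("subjects/")
```

Because each subject's nonlinear registration is independent, this embarrassingly parallel layout works equally well on a multi-node cluster or a single multi-core machine running a PBS server.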
Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.
A Software Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.
2007-01-01
Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem that a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also invite scientists and engineers interested in sharing customized signal and image processing algorithms to contribute to this effort; the authors will code up contributed algorithms and include them in future releases.
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from the waves, improve resolution of, and highlight defects in an image. Since some similarity exists for all waveform-based nondestructive evaluation (NDE) methods, it would seem that a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. This software offers the user hundreds of basic and advanced signal- and image-processing capabilities including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as acoustic emission, vibration, or earthquake signals.
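For a flavor of the wavelet de-noising capability mentioned above, here is a minimal 1-D soft-thresholding sketch using the PyWavelets package. It is a generic textbook scheme (universal threshold on the detail coefficients), not the product's own algorithm.

```python
# Minimal 1-D wavelet de-noising sketch (soft thresholding with a
# universal threshold). Requires the PyWavelets package (pywt).
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise level estimated from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(signal.size))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: signal.size]

t = np.linspace(0, 1, 1024)
noisy = np.sin(2 * np.pi * 5 * t) + 0.3 * np.random.randn(t.size)
clean = wavelet_denoise(noisy)
```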
Advantages and Disadvantages in Image Processing with Free Software in Radiology.
Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan
2018-01-15
Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and the source code is freely available, and therefore they can be easily obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, this last group of free applications has limitations in its use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These devices include CT (Computerised Tomography), MRI (Magnetic Resonance Imaging) and PET (Positron Emission Tomography) scanners. Nevertheless, the programs included in these workstations have a high cost that depends on the software provider and is subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We will compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.
Villoria, Eduardo M; Lenzi, Antônio R; Soares, Rodrigo V; Souki, Bernardo Q; Sigurdsson, Asgeir; Marques, Alexandre P; Fidel, Sandra R
2017-01-01
To describe the use of open-source software for the post-processing of CBCT imaging in the assessment of periapical lesion development after endodontic treatment. CBCT scans were retrieved from the endodontic records of two patients. Three-dimensional virtual models, voxel counting, volumetric measurement (mm³) and mean intensity of the periapical lesion were obtained with the ITK-SNAP v. 3.0 software. Three-dimensional models of the lesions were aligned and overlapped in the MeshLab software, which performed an automatic registration of the anatomical structures based on the best fit. Qualitative and quantitative analyses of the changes in lesion size after treatment were performed with the 3DMeshMetric software. ITK-SNAP v. 3.0 showed a smaller voxel count and volume for the lesion segmented in yellow, indicating a reduction in lesion volume after treatment. A higher mean intensity of the image segmented in yellow was also observed, suggesting new bone formation. Colour mapping and the "point value" tool allowed visualization of the reduction of periapical lesions in several regions. Researchers and clinicians thus have the opportunity to use open-source software in the monitoring of endodontic periapical lesions.
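The voxel count and the volume in mm³ reported above are linked by the scan's voxel spacing. A generic sketch of that relationship (not ITK-SNAP code; the spacing value below is an assumption):

```python
# Voxel count and volume (mm^3) from a binary segmentation; generic sketch.
import numpy as np

def lesion_volume_mm3(label_map: np.ndarray, voxel_spacing_mm: tuple) -> tuple:
    """label_map: 3-D array where nonzero voxels belong to the segmented lesion."""
    voxel_count = int(np.count_nonzero(label_map))
    voxel_volume = float(np.prod(voxel_spacing_mm))  # mm^3 per voxel
    return voxel_count, voxel_count * voxel_volume

seg = np.zeros((64, 64, 64), dtype=np.uint8)
seg[20:30, 20:30, 20:30] = 1                          # toy "lesion"
count, vol = lesion_volume_mm3(seg, (0.3, 0.3, 0.3))  # assumed CBCT spacing
print(count, "voxels =", round(vol, 2), "mm^3")
```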
Introducing PLIA: Planetary Laboratory for Image Analysis
NASA Astrophysics Data System (ADS)
Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.
2005-08-01
We present a graphical software tool developed under IDL software to navigate, process and analyze planetary images. The software has a complete Graphical User Interface and is cross-platform. It can also run under the IDL Virtual Machine without the need to own an IDL license. The set of tools included allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, as well as image treatment under several procedures. Being written in IDL, it is modular and easy to modify and grow for adding new capabilities. We show several examples of the software capabilities with Galileo-Venus observations: Image navigation, photometrical corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.
Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young
2014-01-01
To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm with regard to image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige®), performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". The two sets of specimen mammograms were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality, with better image preference, compared with FFDM without use of the software.
PIV Data Validation Software Package
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
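The out-of-plane vorticity computation listed under capability (3) is a standard finite-difference operation on the planar velocity field; a generic sketch (not the package's own code) follows.

```python
# Out-of-plane vorticity (dv/dx - du/dy) from a planar PIV vector field.
import numpy as np

def vorticity_z(u: np.ndarray, v: np.ndarray, dx: float, dy: float) -> np.ndarray:
    """u, v: 2-D velocity components on a regular grid with spacings dx, dy."""
    dvdx = np.gradient(v, dx, axis=1)   # central differences in x
    dudy = np.gradient(u, dy, axis=0)   # central differences in y
    return dvdx - dudy

# Toy solid-body rotation: vorticity should be ~2*omega everywhere.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
omega = 3.0
print(vorticity_z(-omega * y, omega * x, x[0, 1] - x[0, 0], y[1, 0] - y[0, 0]).mean())
```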
Open source software in a practical approach for post processing of radiologic images.
Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea
2015-03-01
The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet basic requirements such as free availability, stand-alone application, presence of a graphical user interface, ease of installation and advanced features beyond simple image display. The data import, data export, metadata handling, 2D viewer, 3D viewer, platform support and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score higher than or equal to eight. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, VR Render; OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.
Hortolà, Policarp
2010-01-01
When dealing with microscopic still images of some kinds of samples, the out-of-focus problem represents a particularly serious limiting factor for the subsequent generation of fully sharp 3D animations. In order to produce fully-focused 3D animations of strongly uneven surface microareas, a vertical stack of six digital secondary-electron SEM micrographs of a human bloodstain microarea was acquired. Afterwards, single combined images were generated using image post-processing software intended for macrophotography and light microscopy. Subsequently, 3D animations of texture and topography were obtained in different formats using a combination of software tools. Finally, a 3D-like animation of a texture-topography composite was obtained in different formats using another combination of software tools. On the one hand, the results indicate that the use of image post-processing software not concerned primarily with electron micrographs allows one to obtain, in an easy way, fully-focused images of strongly uneven surface microareas of bloodstains from small series of partially out-of-focus digital SEM micrographs. On the other hand, the results also indicate that such small series of electron micrographs can be used to generate 3D and 3D-like animations that can subsequently be converted into different formats, using user-friendly software tools that were not originally designed for SEM and are readily available on the Internet. Although the focus of this study was on bloodstains, the methods used here are probably also relevant for studying the surface microstructures of other organic or inorganic materials that are difficult to display sharply in a single SEM micrograph.
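The "single combined image" step above is a focus-stacking operation. Below is a minimal sketch of one common approach (per-pixel selection of the sharpest slice via a local variance-of-Laplacian measure); it is a generic method, not the commercial software used in the study.

```python
# Minimal focus-stacking sketch: for each pixel, keep the value from the
# stack slice with the highest local sharpness (squared Laplacian response,
# locally averaged). Generic method, illustrative only.
import numpy as np
from scipy import ndimage

def focus_stack(stack: np.ndarray, window: int = 9) -> np.ndarray:
    """stack: (n_slices, H, W) grayscale z-stack; returns one all-in-focus image."""
    sharpness = np.empty(stack.shape, dtype=float)
    for i, img in enumerate(stack):
        lap = ndimage.laplace(img.astype(float))
        sharpness[i] = ndimage.uniform_filter(lap ** 2, size=window)
    best = np.argmax(sharpness, axis=0)               # sharpest slice per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```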
On the release of cppxfel for processing X-ray free-electron laser images.
Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K; Stuart, David Ian
2016-06-01
As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.
On the release of cppxfel for processing X-ray free-electron laser images
Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K.; ...
2016-05-11
As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.
A user-friendly LabVIEW software platform for grating based X-ray phase-contrast imaging.
Wang, Shenghao; Han, Huajie; Gao, Kun; Wang, Zhili; Zhang, Can; Yang, Meng; Wu, Zhao; Wu, Ziyu
2015-01-01
X-ray phase-contrast imaging can provide greatly improved contrast over conventional absorption-based imaging for weakly absorbing samples, such as biological soft tissues and fibre composites. In this study, we introduced an easy and fast way to develop a user-friendly software platform dedicated to the new grating-based X-ray phase-contrast imaging setup at the National Synchrotron Radiation Laboratory of the University of Science and Technology of China. The software controls 21 motorized stages, a piezoelectric stage and an X-ray tube; it also covers image acquisition with a flat-panel detector for automatic phase-stepping scans. Moreover, a data post-processing module for signal retrieval and other custom features is in principle available. With a seamless integration of all the necessary functions in one software package, this platform greatly facilitates users' activities during experimental runs with this grating-based X-ray phase-contrast imaging setup.
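The signal retrieval behind a phase-stepping scan is commonly done per pixel via a Fourier analysis of the stepping curve: the mean gives the absorption signal, the first-harmonic phase the differential phase, and the visibility the dark field. A generic NumPy sketch of that standard retrieval (not the platform's LabVIEW code):

```python
# Standard Fourier retrieval from a grating-interferometry phase-stepping
# scan. steps: (n_steps, H, W) intensities over one grating period.
import numpy as np

def phase_stepping_retrieval(steps: np.ndarray):
    f = np.fft.fft(steps, axis=0)
    a0 = np.abs(f[0]) / steps.shape[0]         # mean -> absorption signal
    a1 = 2 * np.abs(f[1]) / steps.shape[0]     # first-harmonic amplitude
    phase = np.angle(f[1])                     # differential-phase signal
    visibility = a1 / a0                       # dark field via visibility loss
    return a0, phase, visibility
```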
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies were conducted using a non-metric digital camera and dense image matching algorithms, as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were employed. In the research, the OSM Bundler, the VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Cardiovascular imaging environment: will the future be cloud-based?
Kawel-Boehm, Nadine; Bluemke, David A
2017-07-01
In cardiovascular CT and MR imaging, large datasets have to be stored, post-processed, analyzed and distributed. Besides basic assessment of volume and function in cardiac magnetic resonance imaging, for example, more sophisticated quantitative analysis is requested, requiring specific software. Several institutions cannot afford various types of software or provide the expertise to perform sophisticated analysis. Areas covered: Various cloud services exist related to data storage and analysis specifically for cardiovascular CT and MR imaging. Instead of on-site data storage, cloud providers offer flexible storage services on a pay-per-use basis. To avoid the purchase and maintenance of specialized software for cardiovascular image analysis, e.g. to assess myocardial iron overload, MR 4D flow and fractional flow reserve, evaluation can be performed with cloud-based software by the consumer, or the complete analysis is performed by the cloud provider. However, challenges to widespread implementation of cloud services include regulatory issues regarding patient privacy and data security. Expert commentary: If patient privacy and data security are guaranteed, cloud imaging is a valuable option for coping with the storage of large image datasets and for offering sophisticated cardiovascular image analysis to institutions of all sizes.
Inselect: Automating the Digitization of Natural History Collections
Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.
2015-01-01
The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208
Inselect: Automating the Digitization of Natural History Collections.
Hudson, Lawrence N; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W; van der Walt, Stéfan; Smith, Vincent S
2015-01-01
The world's natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect-a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization.
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping of the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-contained darkroom, which includes an imaging tank, a motorized rotating bearing and a digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capture and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following the pre-processing binarization of digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
Free software helps map and display data
NASA Astrophysics Data System (ADS)
Wessel, Paul; Smith, Walter H. F.
When creating camera-ready figures, most scientists are familiar with the sequence of raw data → processing → final illustration and with spending large sums of money to finalize papers for submission to scientific journals, prepare proposals, and create overheads and slides for various presentations. This process can be tedious and is often done manually, since available commercial or in-house software usually can do only part of the job. To expedite this process, we introduce the Generic Mapping Tools (GMT), a free, public domain software package that can be used to manipulate columns of tabular data, time series, and gridded data sets and to display these data in a variety of forms ranging from simple x-y plots to maps and color, perspective, and shaded-relief illustrations. GMT uses the PostScript page description language, which can create arbitrarily complex images in gray tones or 24-bit true color by superimposing multiple plot files. Line drawings, bitmapped images, and text can be easily combined in one illustration. PostScript plot files are device-independent, meaning the same file can be printed at 300 dots per inch (dpi) on an ordinary laserwriter or at 2470 dpi on a phototypesetter when ultimate quality is needed. GMT software is written as a set of UNIX tools and is totally self-contained and fully documented. The system is offered free of charge to federal agencies and nonprofit educational organizations worldwide and is distributed over the Internet.
Digital image modification detection using color information and its histograms.
Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na
2016-09-01
The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against the use of JPEG compression, blurring, noise, or other types of post-processing operations. These post-processing operations are frequently used with the intention of concealing tampering and reducing tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distribution. Blocks from the tampered regions will reside within the same cluster, since both copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring.
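The overlapping-block matching idea at the core of such detectors can be sketched very compactly. The version below uses plain channel means as a stand-in for the paper's color moments and descriptors, and is illustrative only, not the proposed method.

```python
# Stripped-down copy-move candidate search: summarize overlapping blocks
# with a small feature vector, sort features so near-duplicates become
# neighbours, and flag pairs of spatially distant blocks with matching
# features. Channel means stand in for the paper's richer descriptors.
import numpy as np

def copy_move_candidates(img: np.ndarray, block: int = 16, step: int = 8,
                         feat_tol: float = 1.0, min_dist: int = 32):
    """img: (H, W, C) color image; returns candidate block-pair coordinates."""
    feats, coords = [], []
    for y in range(0, img.shape[0] - block, step):
        for x in range(0, img.shape[1] - block, step):
            patch = img[y:y + block, x:x + block]
            feats.append(patch.reshape(-1, img.shape[2]).mean(axis=0))
            coords.append((y, x))
    feats = np.asarray(feats)
    order = np.lexsort(feats.T)           # similar features end up adjacent
    pairs = []
    for i, j in zip(order[:-1], order[1:]):
        far_apart = np.hypot(*np.subtract(coords[i], coords[j])) > min_dist
        if far_apart and np.abs(feats[i] - feats[j]).max() < feat_tol:
            pairs.append((coords[i], coords[j]))
    return pairs
```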
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
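The elementary measurement underlying such INR metrics is a sub-pixel registration between an image chip and a reference. A generic sketch using scikit-image's phase correlation (not the IPATS implementation):

```python
# Sub-pixel shift estimation between a reference chip and an observed chip
# via phase correlation; generic sketch, not IPATS code.
import numpy as np
from skimage.registration import phase_cross_correlation

def chip_shift(reference: np.ndarray, observed: np.ndarray):
    shift, error, _ = phase_cross_correlation(reference, observed,
                                              upsample_factor=100)
    return shift   # (row, col) offset in pixels, ~1/100-pixel resolution

ref = np.random.rand(64, 64)
obs = np.roll(ref, (2, 3), axis=(0, 1))   # known shift for a sanity check
print(chip_shift(ref, obs))               # ~[-2, -3]: shift registering obs to ref
```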
Optimizing Performance of Scientific Visualization Software to Support Frontier-Class Computations
2015-08-01
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images before and after surgery is necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in different areas of image processing such as denoising, multiscale image analysis, edge detection and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the paper introduces two strategies: utilizing the efficient explicit method, together with a software technique that solves the anisotropic diffusion filter effectively despite its mathematical instability; and proposing an automatic stopping criterion that, unlike other stopping criteria, takes only the input image into consideration, in addition to the quality of the denoised image, ease of use and runtime. Various medical images are examined to confirm the claim.
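The classical explicit scheme the abstract builds on is Perona-Malik diffusion. A minimal sketch follows; the paper's automatic stopping criterion is not reproduced here, so this version simply runs a fixed number of iterations (the time step is kept small for stability of the explicit scheme).

```python
# Minimal explicit Perona-Malik anisotropic diffusion; generic scheme,
# fixed iteration count instead of the paper's stopping criterion.
import numpy as np

def anisotropic_diffusion(img: np.ndarray, n_iter: int = 20,
                          kappa: float = 30.0, dt: float = 0.15) -> np.ndarray:
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite differences to the four nearest neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance g(s) = exp(-(s/kappa)^2) per direction.
        u += dt * sum(np.exp(-(d / kappa) ** 2) * d for d in (dn, ds, de, dw))
    return u
```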
Simulation based mask defect repair verification and disposition
NASA Astrophysics Data System (ADS)
Guo, Eric; Zhao, Shirley; Zhang, Skin; Qian, Sandy; Cheng, Guojie; Vikram, Abhishek; Li, Ling; Chen, Ye; Hsiang, Chingyun; Zhang, Gary; Su, Bo
2009-10-01
As the industry moves towards sub-65nm technology nodes, mask inspection, with increased sensitivity and shrinking critical defect size, catches more and more nuisance and false defects. Increased defect counts pose great challenges in post-inspection defect classification and disposition: which defects are real and, among the real defects, which should be repaired and how the repairs should be verified. In this paper, we address the challenges in mask defect verification and disposition, in particular post-repair defect verification, with an efficient methodology using SEM mask defect images and optical inspection mask defect images (the latter only for verification of phase- and transmission-related defects). We demonstrate the flow using programmed mask defects in a sub-65nm technology node design. In total, 20 types of defects were designed, including defects found in typical real circuit environments, with 30 different sizes for each type. The SEM image was taken for each programmed defect after the test mask was made. Selected defects were repaired and SEM images from the test mask were taken again. Wafers were printed with the test mask before and after repair as defect printability references. A software tool, SMDD (Simulation-based Mask Defect Disposition), has been used in this study. The software is used to extract edges from the mask SEM images and convert them into polygons to save in GDSII format. Then, the converted polygons from the SEM images were filled with the correct tone to form mask patterns and were merged back into the original GDSII design file. This merge serves the contour simulation, since the SEM images normally cover only a small area (~1 μm) and accurate simulation requires including a larger area to capture optical proximity effects. With a lithography process model, the resist contour of the area of interest (AOI, the area surrounding a mask defect) can be simulated. If such a sophisticated model is not available, a simple optical model can be used to obtain the simulated aerial image intensity in the AOI. With built-in contour analysis functions, the SMDD software can easily compare the contour (or intensity) differences between the defect pattern and the normal pattern. With user-provided judging criteria, the software can easily disposition the defect based on contour comparison. In addition, process sensitivity properties, like MEEF and NILS, can be readily obtained in the AOI with a lithography model, which will make mask defect disposition criteria more intelligent.
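The edge-extraction step of the flow (SEM image to polygons) can be approximated with standard iso-contour tracing. A generic sketch using scikit-image follows; the GDSII export and the lithography simulation are beyond this sketch.

```python
# Generic sketch of the edge-to-polygon step: extract iso-contours from a
# grayscale SEM image and keep them as polygon vertex lists. Not SMDD code.
import numpy as np
from skimage import measure

def sem_to_polygons(sem_image: np.ndarray, level: float = None):
    img = sem_image.astype(float)
    if level is None:
        level = 0.5 * (img.min() + img.max())     # simple mid-gray threshold
    contours = measure.find_contours(img, level)  # list of (N, 2) row/col arrays
    return [c[:, ::-1] for c in contours]         # reorder to (x, y) vertices
```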
Current and future trends in marine image annotation software
NASA Astrophysics Data System (ADS)
Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.
2016-12-01
Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically graphical user interfaces, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software by the capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow users to input and display data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software package are outlined, the ideal software is discussed and future trends are presented.
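Reduced to its data model, the time-stamped, geo-referenced event log that all human-input MIAS share might look like the toy sketch below; the field names and example values are hypothetical, not taken from any of the 23 reviewed tools.

```python
# Toy data model for a time-stamped, geo-referenced annotation log keyed to
# a video time code; hypothetical fields, illustrative only.
import json
from dataclasses import dataclass, asdict

@dataclass
class Annotation:
    video_timecode: str   # e.g. "01:23:45:10"
    label: str            # semantic class assigned by the annotator
    lat: float            # platform position at the time of the event
    lon: float
    depth_m: float
    annotator: str

log = [Annotation("01:23:45:10", "coral colony", 38.52, -28.63, 812.0, "jd")]
with open("dive_log.json", "w") as f:
    json.dump([asdict(a) for a in log], f, indent=2)
```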
High Resolution Near Real Time Image Processing and Support for MSSS Modernization
NASA Astrophysics Data System (ADS)
Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.
2012-09-01
This paper describes applications-engineering work on image enhancement software performed in support of Maui Space Surveillance System (MSSS) Modernization. It also covers R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities. This includes Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award -- and our selection and planned use for an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data, and an overview of high resolution image enhancement and near real time processing status. We then describe recent image enhancement applications development and support for MSSS Modernization and results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image processing enhancement have been realized over the past several years, including a key application that has realized more than a 10,000-times speedup compared to the original R&D code -- and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low latency, near real time processing of data collected by the ground-based sensor during overhead passes of space objects.
NASA Astrophysics Data System (ADS)
Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.
2014-12-01
The increasing use of mobile phones equipped with digital cameras and the ability to post images and information to the Internet in real time has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time-stamped storm report photographs to NWS weather forecast office social media pages via a mobile phone application has recently generated positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, processing and distribution system and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) a processing and distribution software and hardware system, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time-stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network bandwidth manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations. Hovering over individual PSRs would reveal photo thumbnails, and clicking on them would display the full-resolution photograph. Here, we present initial NWS forecaster feedback received from social media posted PSRs motivating the possible advantages of PSRs within AWIPS-II, the details of developing and implementing a PSR system, and possible future applications beyond severe weather reports and AWIPS-II.
NASA Astrophysics Data System (ADS)
Cucchiaro, S.; Maset, E.; Fusiello, A.; Cazorzi, F.
2018-05-01
In recent years, the combination of Structure-from-Motion (SfM) algorithms and UAV-based aerial images has revolutionised 3D topographic surveys for natural environment monitoring, offering low-cost, fast and high-quality data acquisition and processing. Continuous monitoring of morphological changes through multi-temporal (4D) SfM surveys makes it possible, for example, to analyse torrent dynamics even in complex topographic environments like debris-flow catchments, provided that appropriate tools and procedures are employed in the data processing steps. In this work we test two different software packages (3DF Zephyr Aerial and Agisoft Photoscan) on a dataset composed of both UAV and terrestrial images acquired on a debris-flow reach (Moscardo torrent - North-eastern Italian Alps). Unlike other papers in the literature, we evaluate the results not only on the raw point clouds generated by the Structure-from-Motion and Multi-View Stereo algorithms, but also on the Digital Terrain Models (DTMs) created after post-processing. Outcomes show differences between the DTMs that can be considered irrelevant for the geomorphological phenomena under analysis. This study confirms that SfM photogrammetry can be a valuable tool for monitoring sediment dynamics, but accurate point cloud post-processing is required to reliably localize geomorphological changes.
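Localizing geomorphological change from two such DTMs is typically done with a DEM of difference, masked by a level-of-detection threshold. A generic sketch (the threshold value is an assumption, not from the paper):

```python
# DEM-of-difference sketch: subtract co-registered DTM rasters and mask
# cells below a level-of-detection threshold so only credible change remains.
import numpy as np

def dem_of_difference(dtm_t1: np.ndarray, dtm_t0: np.ndarray,
                      lod_m: float = 0.05) -> np.ndarray:
    """Cell-wise elevation change (m); cells under the threshold become NaN."""
    dod = dtm_t1 - dtm_t0
    return np.where(np.abs(dod) >= lod_m, dod, np.nan)

erosion = np.nansum(np.clip(dem_of_difference(np.zeros((10, 10)),
                                              np.full((10, 10), 0.1)), None, 0))
```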
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
Summary: With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today's single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279
Hemodynamics model of fluid–solid interaction in internal carotid artery aneurysms
Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju
2010-01-01
The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data calculated first in the CFX software were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in the workbench software. The results of the simulation could be visualized in CFX-post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography. PMID:20812022
Hemodynamics model of fluid-solid interaction in internal carotid artery aneurysms.
Bai-Nan, Xu; Fu-Yu, Wang; Lei, Liu; Xiao-Jun, Zhang; Hai-Yue, Ju
2011-01-01
The objective of this study is to present a relatively simple method to reconstruct cerebral aneurysms as 3D numerical grids. The method accurately duplicates the geometry to provide computer simulations of the blood flow. Initial images were obtained by using CT angiography and 3D digital subtraction angiography in DICOM format. The image was processed by using MIMICS software, and the 3D fluid model (blood flow) and 3D solid model (wall) were generated. The subsequent output was exported to the ANSYS workbench software to generate the volumetric mesh for further hemodynamic study. The fluid model was defined and simulated in CFX software while the solid model was calculated in ANSYS software. The force data calculated first in the CFX software were transferred to the ANSYS software, and after receiving the force data, total mesh displacement data were calculated in the ANSYS software. Then, the mesh displacement data were transferred back to the CFX software. The data exchange was processed in the workbench software. The results of the simulation could be visualized in CFX-post. Two examples of grid reconstruction and blood flow simulation for patients with internal carotid artery aneurysms were presented. The wall shear stress, wall total pressure, and von Mises stress could be visualized. This method seems to be relatively simple and suitable for direct use by neurosurgeons or neuroradiologists, and may be a practical tool for planning treatment and follow-up of patients after neurosurgical or endovascular interventions with 3D angiography.
Automated phenotype pattern recognition of zebrafish for high-throughput screening.
Schutera, Mark; Dickmeis, Thomas; Mione, Marina; Peravali, Ravindra; Marcato, Daniel; Reischl, Markus; Mikut, Ralf; Pylatiuk, Christian
2016-07-03
Over the last years, the zebrafish (Danio rerio) has become a key model organism in genetic and chemical screenings. A growing number of experiments and an expanding interest in zebrafish research make it increasingly essential to automate the distribution of embryos and larvae into standard microtiter plates or other sample holders for screening, often according to phenotypical features. Until now, such sorting processes have been carried out by manually handling the larvae and manual feature detection. Here, a prototype platform for image acquisition together with classification software is presented. Zebrafish embryos and larvae and their features such as pigmentation are detected automatically from the image. Zebrafish of 4 different phenotypes can be classified through pattern recognition at 72 h post fertilization (hpf), allowing the software to sort an embryo into 2 distinct phenotypic classes: wild-type versus variant. The zebrafish phenotypes are classified with an accuracy of 79-99% without any user interaction. A description of the prototype platform and of the algorithms for image processing and pattern recognition is presented.
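A toy version of such a wild-type-versus-variant decision can be built on a couple of per-larva features. The feature choices and numbers below are illustrative assumptions, not the paper's actual features or data.

```python
# Toy wild-type vs. variant classifier on hypothetical per-larva features
# (mean pigmentation intensity, pigmented-area fraction); illustrative only.
import numpy as np
from sklearn.svm import SVC

# Columns: [mean pigmentation intensity, pigmented-area fraction]
X_train = np.array([[0.62, 0.18], [0.58, 0.17], [0.21, 0.05], [0.25, 0.07]])
y_train = np.array(["wild-type", "wild-type", "variant", "variant"])

clf = SVC(kernel="linear").fit(X_train, y_train)
print(clf.predict([[0.60, 0.16]]))   # -> ['wild-type']
```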
NASA Astrophysics Data System (ADS)
Weber, Walter H.; Mair, H. Douglas; Jansen, Dion
2003-03-01
A suite of basic signal processors has been developed. These basic building blocks can be cascaded together to form more complex processors without the need for programming. The data structures between each of the processors are handled automatically. This allows a processor built for one purpose to be applied to any type of data such as images, waveform arrays and single values. The processors are part of Winspect Data Acquisition software. The new processors are fast enough to work on A-scan signals live while scanning. Their primary use is to extract features, reduce noise or to calculate material properties. The cascaded processors work equally well on live A-scan displays, live gated data or as a post-processing engine on saved data. Researchers are able to call their own MATLAB or C-code from anywhere within the processor structure. A built-in formula node processor that uses a simple algebraic editor may make external user programs unnecessary. This paper also discusses the problems associated with ad hoc software development and how graphical programming languages can tie up researchers writing software rather than designing experiments.
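The cascading idea reduces neatly to code: if each processor is a callable from data to data, a chain of them is itself a processor. The sketch below is conceptual, unrelated to Winspect internals.

```python
# Conceptual sketch of cascaded signal processors: each stage is a callable,
# and a cascade of stages is a new processor with no extra programming.
import numpy as np
from functools import reduce

def cascade(*processors):
    return lambda data: reduce(lambda d, p: p(d), processors, data)

rectify = np.abs                                                # feature extraction
smooth = lambda d: np.convolve(d, np.ones(5) / 5, mode="same")  # noise reduction
peak_amp = lambda d: d.max()                                    # scalar output

process_ascan = cascade(rectify, smooth, peak_amp)  # applies to any A-scan array
print(process_ascan(np.sin(np.linspace(0, 10, 500))))
```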
MOPEX: a software package for astronomical image processing and visualization
NASA Astrophysics Data System (ADS)
Makovoz, David; Roby, Trey; Khan, Iffat; Booth, Hartley
2006-06-01
We present MOPEX - a software package for astronomical image processing and display. The package is a combination of command-line driven image processing software written in C/C++ with a Java-based GUI. The main image processing capabilities include creating mosaic images, image registration, background matching, point source extraction, as well as a number of minor image processing tasks. The combination of the image processing and display capabilities allows for much more intuitive and efficient way of performing image processing. The GUI allows for the control over the image processing and display to be closely intertwined. Parameter setting, validation, and specific processing options are entered by the user through a set of intuitive dialog boxes. Visualization feeds back into further processing by providing a prompt feedback of the processing results. The GUI also allows for further analysis by accessing and displaying data from existing image and catalog servers using a virtual observatory approach. Even though originally designed for the Spitzer Space Telescope mission, a lot of functionalities are of general usefulness and can be used for working with existing astronomical data and for new missions. The software used in the package has undergone intensive testing and benefited greatly from effective software reuse. The visualization part has been used for observation planning for both the Spitzer and Herschel Space Telescopes as part the tool Spot. The visualization capabilities of Spot have been enhanced and integrated with the image processing functionality of the command-line driven MOPEX. The image processing software is used in the Spitzer automated pipeline processing, which has been in operation for nearly 3 years. The image processing capabilities have also been tested in off-line processing by numerous astronomers at various institutions around the world. The package is multi-platform and includes automatic update capabilities. The software package has been developed by a small group of software developers and scientists at the Spitzer Science Center. It is available for distribution at the Spitzer Science Center web page.
Wang, Chunliang; Ritter, Felix; Smedby, Orjan
2010-07-01
To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and the stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but will still be able to achieve seamless integration between the two systems, and the IPC procedure is totally transparent to the final user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
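The shared-memory hand-off that makes sub-10 ms transfers possible can be sketched with Python's standard library (the OsiriX/MeVisLab implementation itself is Objective-C/C++): the producer writes pixels into a named segment and the consumer maps the same segment without copying. The segment name below is a hypothetical example.

```python
# Zero-copy image hand-off via named shared memory (Python 3.8+ stdlib);
# conceptual sketch of the IPC pattern, not the OsiriX/MeVisLab code.
import numpy as np
from multiprocessing import shared_memory

img = np.random.randint(0, 4096, (512, 512), dtype=np.uint16)  # "PACS" image

shm = shared_memory.SharedMemory(create=True, size=img.nbytes, name="pacs_img")
np.ndarray(img.shape, img.dtype, buffer=shm.buf)[:] = img      # producer writes

view = shared_memory.SharedMemory(name="pacs_img")             # consumer attaches
received = np.ndarray(img.shape, img.dtype, buffer=view.buf).copy()

view.close(); shm.close(); shm.unlink()
assert np.array_equal(img, received)
```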
[Design and development of the DSA digital subtraction workstation].
Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo
2008-05-01
According to patient examination criteria and the demands of all related departments, a DSA digital subtraction workstation has been successfully designed; it is introduced in this paper through an analysis of the characteristics of the video source of a DSA system manufactured by GE, which has no standard DICOM interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are transformed into DICOM format and can then be shared among different machines.
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure of UltraCam aerial camera systems. Results from the laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, and thus we present details of that workflow as well. The first part consists of the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.
A software tool for advanced MRgFUS prostate therapy planning and follow up
NASA Astrophysics Data System (ADS)
van Straaten, Dörte; Hoogenboom, Martijn; van Amerongen, Martinus J.; Weiler, Florian; Issawi, Jumana Al; Günther, Matthias; Fütterer, Jurgen; Jenne, Jürgen W.
2017-03-01
US-guided HIFU/FUS ablation for the therapy of prostate cancer is a clinically established method, while MR-guided HIFU/FUS applications for the prostate have recently entered clinical evaluation. Even though MRI examination is an excellent diagnostic tool for prostate cancer, it is a time-consuming procedure and not practicable within an MRgFUS therapy session. The aim of our ongoing work is to develop software to support therapy planning and post-therapy follow-up for MRgFUS on localized prostate cancer, based on multi-parametric MR protocols. The clinical workflow of diagnosis, therapy and follow-up of MR-guided FUS on prostate cancer was analyzed in depth. Based on this, the image processing workflow was designed and all necessary components, e.g. GUI, viewer, registration tools etc., were defined and implemented. The software is based on MeVisLab with several C++ modules implemented for the image processing tasks. The developed software, called LTC (Local Therapy Control), automatically registers and visualizes all images (T1w, T2w, DWI etc.) and ADC or perfusion maps gained from the diagnostic MRI session. This maximum of diagnostic information helps to segment all necessary ROIs, e.g. the tumor, for therapy planning. Final therapy planning will be performed based on these segmentation data in the following MRgFUS therapy session. In addition, the developed software should help to evaluate therapy success through synchronization and display of pre-therapeutic, therapy and follow-up image data, including the therapy plan and thermal dose information. In this ongoing project, the first stand-alone prototype was completed and will be clinically evaluated.
Hsieh, Sheng-Hsun; Li, Yung-Hui; Tien, Chung-Hao; Chang, Chin-Chen
2016-12-01
Iris recognition has gained increasing popularity over the last few decades; however, the stand-off distance in a conventional iris recognition system is too short, which limits its applications. In this paper, we propose a novel hardware-software hybrid method to increase the stand-off distance of an iris recognition system. When designing the system hardware, we use an optimized wavefront coding technique to extend the depth of field. To compensate for the image blurring caused by wavefront coding, on the software side the proposed system uses a local patch-based super-resolution method to restore the blurred image to its clear version. The collaborative effect of the new hardware design and software post-processing showed great potential in our experiments: such improvement cannot be achieved by a hardware- or software-only design. The proposed system increases the capture volume of a conventional iris recognition system by three times while maintaining the system's high recognition rate.
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
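To illustrate the central operation such a simulator performs, here is a minimal sketch of convolving an astrophysical scene with a coronagraphic PSF; the scene contents, the Gaussian stand-in PSF, and the grid size are illustrative assumptions, and this is not crispy's actual API.

```python
import numpy as np
from scipy.signal import fftconvolve

# Illustrative astrophysical scene on a 256x256 focal-plane grid:
# a host star, one planet a million times fainter, and a smooth dust ring.
yy, xx = np.mgrid[:256, :256]
scene = np.zeros((256, 256))
scene[128, 128] = 1.0
scene[128, 168] = 1e-6
r = np.hypot(yy - 128, xx - 128)
scene += 1e-8 * np.exp(-((r - 60.0) / 8.0) ** 2)

# Toy coronagraphic PSF (a narrow Gaussian stand-in), normalized to unit sum.
psf = np.exp(-(r ** 2) / (2.0 * 2.0 ** 2))
psf /= psf.sum()

# The simulated detector frame is the scene convolved with the PSF; a
# time-varying simulation repeats this with a different PSF at each step.
image = fftconvolve(scene, psf, mode="same")
```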
Platform-independent software for medical image processing on the Internet
NASA Astrophysics Data System (ADS)
Mancuso, Michael E.; Pathak, Sayan D.; Kim, Yongmin
1997-05-01
We have developed a software tool for image processing over the Internet: a general-purpose, easy-to-use, flexible, platform-independent image processing package with the functions most commonly used in medical image processing. It provides for processing of medical images located either remotely on the Internet or locally. The software was written in Java, the new programming language developed by Sun Microsystems, and was compiled and tested using Microsoft's Visual Java 1.0 and Microsoft's Just in Time Compiler 1.00.6211. The software is simple and easy to use: the user downloads the software from our site and runs it using any Java interpreter, such as those supplied by Sun, Symantec, Borland, or Microsoft (future versions of the operating systems supplied by Sun, Microsoft, Apple, IBM, and others will include Java interpreters). The software is then able to access and process any image on the Internet or on the local computer. Using a 512 x 512 x 8-bit image, a 3 x 3 convolution took 0.88 seconds on an Intel Pentium Pro PC running at 200 MHz with 64 Mbytes of memory; a window/level operation took 0.38 seconds, and a 3 x 3 median filter took 0.71 seconds. These performance numbers demonstrate the feasibility of using this software interactively on desktop computers. Our software tool supports various image processing techniques commonly used in medical image processing and runs without any specialized hardware. It can become an easily accessible resource over the Internet to promote the learning and understanding of image processing algorithms, and it could facilitate sharing of medical image databases and collaboration among researchers and clinicians, regardless of location.
LIBRA is a fully-automatic breast density estimation software solution based on a published algorithm that works on either raw (i.e., “FOR PROCESSING”) or vendor post-processed (i.e., “FOR PRESENTATION”) digital mammography images. LIBRA has been applied to over 30,000 screening exams and is being increasingly utilized in larger studies.
Measuring the complexity of design in real-time imaging software
NASA Astrophysics Data System (ADS)
Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.
2007-02-01
Due to the intricacies of the algorithms involved, the design of imaging software is considered more complex than that of non-image-processing software (Sangwan et al., 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image-processing and non-image-processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003), and found that it was not always possible to quantitatively compare the complexity of imaging applications and non-image-processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. It may therefore be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe a methodology for measuring complexity in imaging systems. We then apply the new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far-reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.
Pre- and Post-Processing Tools to Streamline the CFD Process
NASA Technical Reports Server (NTRS)
Dorney, Suzanne Miller
2002-01-01
This viewgraph presentation provides information on software development tools to facilitate the use of CFD (Computational Fluid Dynamics) codes. The specific CFD codes FDNS and CORSAIR are profiled, and uses for software development tools with these codes during pre-processing, interim-processing, and post-processing are explained.
Characterization of fission gas bubbles in irradiated U-10Mo fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casella, Andrew M.; Burkes, Douglas E.; MacFarlan, Paul J.
2017-09-01
Irradiated U-10Mo fuel samples were prepared with traditional mechanical potting and polishing methods within a hot cell. They were then removed and imaged with an SEM located outside the hot cell. The images were processed with basic imaging techniques from three separate software packages. The results were compared, and a baseline method for characterizing fission gas bubbles in the samples is proposed. It is hoped that, through adoption of or comparison to this baseline method, sample characterization can be standardized across the field of post-irradiation examination of metal fuels.
Sedgewick, Gerald J.; Ericson, Marna
2015-01-01
Obtaining digital images of color brightfield microscopy is an important aspect of biomedical research and the clinical practice of diagnostic pathology. Although the field of digital pathology has seen tremendous advances in whole-slide imaging systems, little effort has been directed toward standardizing color brightfield digital imaging to maintain image-to-image consistency and tonal linearity. Using a single camera and microscope to obtain digital images of three stains, we show that microscope and camera systems inherently produce image-to-image variation. Moreover, we demonstrate that post-processing with a widely used raster graphics editor does not completely correct for session-to-session inconsistency. We introduce a reliable method for creating consistent images with a hardware/software solution (ChromaCal™; Datacolor Inc., NJ) that provides color standardization, preserves linear tonal levels, and sets white balance and brightness automatically to consistent levels. The resulting image consistency will also streamline mean density and morphometry measurements, as images are easily segmented and single thresholds can be used. We suggest that this is a superior method for color brightfield imaging, which can be used for quantification and can be readily incorporated into workflows. PMID:25575568
NASA Technical Reports Server (NTRS)
1990-01-01
The Ames digital image velocimetry technology has been incorporated in a commercially available image processing software package that allows motion measurement of images on a PC alone. The software, manufactured by Werner Frei Associates, is IMAGELAB FFT, a general-purpose image processing system with a variety of other applications, among them image enhancement of fingerprints and use by banks and law enforcement agencies for analysis of videos recorded during robberies.
3D OCT imaging in clinical settings: toward quantitative measurements of retinal structures
NASA Astrophysics Data System (ADS)
Zawadzki, Robert J.; Fuller, Alfred R.; Zhao, Mingtao; Wiley, David F.; Choi, Stacey S.; Bower, Bradley A.; Hamann, Bernd; Izatt, Joseph A.; Werner, John S.
2006-02-01
The acquisition speed of current FD-OCT (Fourier-domain optical coherence tomography) instruments allows rapid screening of three-dimensional (3D) volumes of human retinas in clinical settings. Taking advantage of this ability requires software that lets physicians display and access volumetric data and that supports post-processing to extract important quantitative information such as thickness maps and segmented volumes. We describe our clinical FD-OCT system used to acquire 3D data from the human retina over the macula and optic nerve head. B-scans are registered to remove motion artifacts and post-processed with customized 3D visualization and analysis software. Our analysis software includes standard 3D visualization techniques along with a machine-learning support vector machine (SVM) algorithm that allows a user to semi-automatically segment different retinal structures and layers. Our program makes possible measurements of retinal layer thickness as well as volumes of structures of interest, despite the presence of noise and structural deformations associated with retinal pathology. Our software has been tested successfully in clinical settings for its efficacy in assessing 3D retinal structures in healthy as well as diseased cases, facilitating diagnosis and treatment monitoring of retinal diseases.
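As a sketch of how an SVM can drive semi-automatic segmentation of this kind, the following example trains scikit-learn's SVC on a few user-labeled voxels and classifies the rest of the volume; the feature choice, seed indices, and labels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.svm import SVC

def voxel_features(volume: np.ndarray) -> np.ndarray:
    """Per-voxel features: raw intensity plus a 3x3x3 local mean."""
    local_mean = uniform_filter(volume, size=3)
    return np.stack([volume.ravel(), local_mean.ravel()], axis=1)

def segment(volume, seed_idx, seed_labels):
    """Train an SVM on a few user-labeled voxels, then classify every voxel."""
    feats = voxel_features(volume)
    clf = SVC(kernel="rbf", gamma="scale")
    clf.fit(feats[seed_idx], seed_labels)
    return clf.predict(feats).reshape(volume.shape)

volume = np.random.rand(16, 64, 64)            # stand-in OCT volume
seeds = np.array([100, 2000, 30000, 60000])    # flat indices the user clicked
labels = np.array([0, 0, 1, 1])                # e.g. 0 = vitreous, 1 = retina
mask = segment(volume, seeds, labels)
```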
Nagi, Sana Ehsen; Khan, Farhan Raza
2017-01-01
With root canal treatment, organic debris and micro-organisms are removed from the pulp space and an ideal canal preparation is achieved that is conducive to hermetic obturation. The purpose of this study was to correlate the pre-operative canal curvature with the post-operative curvature in extracted human teeth prepared with the K-3 rotary system. Root canal preparation was carried out on extracted human molars and premolars using K-3 endodontic rotary files. Pre- and post-operative digital radiographs of the teeth were taken in order to compare pre- and post-operative canal curvature. The images were saved in an image retrieval system (Gendex software, USA), and change in canal curvature was measured using the software measuring tool (VixWin software, USA). Student's paired t-test and Pearson's correlation test were applied at the 0.05 level of significance. There was a statistically significant difference between pre-operative and post-operative canal curvature (p-value < 0.001) and a strong positive correlation (91%) between pre-operative and post-operative canal curvature in teeth prepared with the K-3 rotary files. A significant difference between pre- and post-instrumentation curvature was found. Degree of canal curvature was not correlated with the time taken for canal preparation.
Friedman, Tamir; Michalski, Mark; Goodman, T Rob; Brown, J Elliott
2016-03-01
Three-dimensional (3D) printing has recently erupted into the medical arena due to decreased costs and increased availability of printers and software tools. Due to the lack of detailed information in the medical literature on methods for 3D printing, we have reviewed the medical and engineering literature on the various methods for 3D printing and compiled them into a practical "how to" format, enabling the novice to start 3D printing with very limited funds. We describe (1) background knowledge, (2) imaging parameters, (3) software, (4) hardware, (5) post-processing, and (6) financial aspects required to cost-effectively reproduce a patient's disease ex vivo so that the patient, engineer, and surgeon may hold the anatomy and associated pathology in their hands.
NASA Astrophysics Data System (ADS)
Yang, Yanlong; Zhou, Xing; Li, Runze; Van Horn, Mark; Peng, Tong; Lei, Ming; Wu, Di; Chen, Xun; Yao, Baoli; Ye, Tong
2015-03-01
Bessel beams have been used in many applications due to their unique optical property of maintaining an unchanged intensity profile during propagation. In imaging applications, Bessel beams have been used to provide extended focuses for volumetric imaging and a uniform illumination plane in light-sheet microscopy. Coupled with two-photon excitation, Bessel beams have been successfully used to realize fluorescence-projected volumetric imaging. We previously demonstrated a stereoscopic solution, two-photon fluorescence stereomicroscopy (TPFSM), for recovering depth information in volumetric imaging with Bessel beams. In TPFSM, tilted Bessel beams are used to generate stereoscopic images on a laser scanning two-photon fluorescence microscope; after post-processing, the acquired volume images provide 3D perception when viewed through anaglyph 3D glasses. However, the tilted Bessel beams were generated by shifting either an axicon or an objective laterally, and the slow imaging speed and severe aberrations made this hard to use in real-time volume imaging. In this article, we report recent improvements of TPFSM with a newly designed scanner and imaging software, which allow 3D stereoscopic imaging without moving any optical components on the setup. These improvements have dramatically increased focusing quality and imaging speed, so that TPFSM can potentially be performed in real time to provide 3D visualization in scattering media without post-image processing.
Image enhancement software for underwater recovery operations: User's manual
NASA Astrophysics Data System (ADS)
Partridge, William J.; Therrien, Charles W.
1989-06-01
This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and similar functions in real time through hardware lookup tables, performs histogram equalization automatically, and can capture one or more frames and average them or apply one of several processing algorithms to a captured frame. The report is in the form of a user manual and includes guided tutorial and reference sections. A digital image processing primer in the appendix explains the principal concepts used in the image processing.
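The lookup-table approach the manual describes can be sketched as follows: build a 256-entry equalization LUT from the frame histogram, then apply it as a single indexed lookup, which is what makes hardware LUTs fast enough for live video. This is a generic reconstruction, not the original Navy code.

```python
import numpy as np

def equalization_lut(frame: np.ndarray) -> np.ndarray:
    """256-entry lookup table that equalizes the histogram of an 8-bit frame."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    span = max(int(cdf[-1] - cdf_min), 1)        # guard against flat frames
    lut = (cdf - cdf_min) * 255.0 / span
    return np.clip(np.round(lut), 0, 255).astype(np.uint8)

frame = (np.random.rand(480, 640) ** 3 * 255).astype(np.uint8)  # murky test frame
enhanced = equalization_lut(frame)[frame]    # applying a LUT is one indexed lookup
```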
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system with those same parameters. The gathered optically sampled images are processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective-assessment datasets validate one another. The main conclusions are: image post-processing can improve image quality; it can do so even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our post-processing method, image quality is better when the camera MTF lies within a small range.
NASA Astrophysics Data System (ADS)
Avdelidis, N. P.; Kappatos, V.; Georgoulas, G.; Karvelis, P.; Deli, C. K.; Theodorakeas, P.; Giakas, G.; Tsiokanos, A.; Koui, M.; Jamurtas, A. Z.
2017-04-01
Exercise-induced muscle damage (EIMD) is usually experienced by (i) people who have been physically inactive for prolonged periods and then begin sudden training, and (ii) athletes who train beyond their normal limits. EIMD is not easy to detect and quantify with common measurement tools and methods. Thermography has been used successfully as a research detection tool in medicine for the last six decades, but very limited work has been reported in the EIMD area. The main purpose of this research is to assess and characterize EIMD using thermography and image processing techniques. The first step towards that goal is to develop a reliable segmentation technique to isolate the region of interest (ROI). A semi-automatic image processing software tool was designed, and regions of the left and right leg were segmented based on superpixels: the image is partitioned into a number of regions, and the user indicates which regions belong to each of the two legs. To validate the software, an extensive experimental investigation was carried out, acquiring thermographic images of the rectus femoris muscle before, immediately after, and 24, 48, and 72 hours after an acute bout of eccentric exercise (5 sets of 15 maximum repetitions) in males and females (20-30 years old). Results indicate that the semi-automated approach provides an excellent benchmark that can be used as a reliable clinical tool.
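A hedged sketch of the superpixel step: scikit-image's SLIC partitions the thermogram into regions, and the user then tags which superpixel labels belong to each leg. The parameter values and the clicked labels are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from skimage.segmentation import slic

# Stand-in thermogram: one thermal frame in degrees Celsius.
thermogram = 30.0 + 3.0 * np.random.rand(240, 320)
norm = (thermogram - thermogram.min()) / np.ptp(thermogram)

# Partition the frame into ~200 superpixels; the user then tags which
# superpixel labels belong to the left leg and which to the right.
segments = slic(norm, n_segments=200, compactness=0.1, channel_axis=None)

left_labels = [3, 4, 17]                  # labels the user clicked (illustrative)
left_mask = np.isin(segments, left_labels)
left_mean_temp = thermogram[left_mask].mean()
```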
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter-type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one is based on BAE Systems' SOCET SET classical commercial photogrammetric software and the other is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that, with rigorous processing, it is possible to reach results that are consistent with theory. We also point out several further research topics and, based on theoretical and empirical results, give recommendations for the properties of the imaging sensor, data collection, and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Image enhancement in positron emission mammography
NASA Astrophysics Data System (ADS)
Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.
2017-02-01
Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of breast images previously reconstructed by a commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. The method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Results: In all of the patients' breast studies, the post-processed images proved to have higher resolution and lower noise compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations, with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging iterative deconvolution algorithm with a novel resolution-subsets-based approach (RSEMD) that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial in facilitating diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).
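The paper characterizes RSEMD as an extended Richardson-Lucy algorithm with resolution subsets; the sketch below implements only the base Richardson-Lucy iteration, with the iteration count echoing the reported 6-iteration SNR plateau. The PSF and data are synthetic stand-ins, not the scanner's.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=6):
    """Base Richardson-Lucy iteration; RSEMD adds resolution subsets on top."""
    psf_mirror = psf[::-1, ::-1]
    estimate = np.clip(observed, 1e-6, None).astype(float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

yy, xx = np.mgrid[:64, :64]
psf = np.exp(-((yy - 32.0) ** 2 + (xx - 32.0) ** 2) / 18.0)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[30, 30] = truth[34, 38] = 100.0        # two point-like uptake foci
observed = fftconvolve(truth, psf, mode="same")
sharpened = richardson_lucy(observed, psf)   # SNR reportedly plateaus near 6 iterations
```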
Peterman, Robert J; Jiang, Shuying; Johe, Rene; Mukherjee, Padma M
2016-12-01
Dolphin® visual treatment objective (VTO) prediction software is routinely used by orthodontists during treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered a vital tool, its accuracy in two-jaw surgical procedures is not well understood. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback, and to evaluate the efficacy of the software in such complex cases. This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue profile prediction. Prediction images were compared to the actual post-treatment profile photos to determine differences. Dolphin Imaging's software was determined to be accurate within an error range of +/- 2 mm on the X-axis at most landmarks; the lower lip predictions were the least accurate. Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner, but not for precise treatment planning of surgical movements. The program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.
Carlton, Connor D; Mitchell, Samantha; Lewis, Patrick
2018-01-01
Over the past decade, Structure from Motion (SfM) has increasingly been used as a means of digital preservation and for documenting archaeological excavations, architecture, and cultural material. However, few studies have tapped the potential of using SfM to document and analyze taphonomic processes affecting burials for forensic science purposes. This project utilized SfM models to elucidate specific post-depositional events affecting a series of three human cadavers deposited at the South East Texas Applied Forensic Science Facility (STAFS). The aim of this research was to test the ability of untrained researchers to employ spatial software and photogrammetry for data collection purposes. Over a period of three months, a single-lens reflex (SLR) camera was used to capture a series of overlapping images at periodic stages in the decomposition of each cadaver. These images were processed through photogrammetric software into 3D models that can be measured, manipulated, and viewed, and photogrammetric and geospatial software was used to map changes in decomposition and movement of the body from the original deposition points. Project results indicate that SfM and GIS are useful tools for documenting decomposition and taphonomic processes, and that photogrammetry is an efficient, relatively simple, and affordable means of documentation.
The CTBTO/WMO Atmospheric Backtracking Response System and the Data Fusion Exercise 2007
2008-09-01
...sensitivity of the measurement (sample) towards releases at all points on the globe. For a more comprehensive description, see the presentation from last... localization information, including the error ellipse, is comparatively small. The red spots on the right image mark seismic events that occurred on... hours indicated in the calendar of the PTS post-processing software WEB-GRAPE. 2008 Monitoring Research Review: Ground-Based Nuclear...
Lizarraga, Gabriel; Li, Chunfei; Cabrerizo, Mercedes; Barker, Warren; Loewenstein, David A; Duara, Ranjan; Adjouadi, Malek
2018-04-26
Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must first be converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. The aim of this study was to fulfill the neuroimaging community's need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber physical system for neuroimaging and clinical data in brain research. Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected and securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline starts from a FreeSurfer reconstruction of structural magnetic resonance imaging images. The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer's Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Notable leading researchers in the field of Alzheimer's Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer's Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer's Disease patients. The obtained results can be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease.
NASA Astrophysics Data System (ADS)
Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan
2010-07-01
This paper proposes a pipelined circuit architecture implemented in an FPGA, a very-large-scale integrated circuit (VLSI), which efficiently handles the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute the imaging system, each processor undertaking its own systematic task and coordinating its work with the others. The system on programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. Adequate timing margin lets the FPGA perform the NUC image pre-processing algorithm with ease, providing a favorable guarantee for the post-image processing performed in the DSP. The paper also presents a hardware (HW) and software (SW) co-design in the FPGA. This architecture thus yields a multiprocessor image processing system and a smart solution that satisfies the system's performance requirements.
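For reference, the standard two-point NUC computation that such pipelines implement can be sketched as below, here in floating point; the paper's FPGA version would use fixed-point per-pixel gain/offset tables, and the synthetic calibration frames are illustrative assumptions.

```python
import numpy as np

def two_point_nuc(raw, dark, flat):
    """Two-point non-uniformity correction for one IRFPA frame.

    dark: mean frame viewing a low-flux (cold) uniform blackbody
    flat: mean frame viewing a high-flux (hot) uniform blackbody
    """
    gain = (flat.mean() - dark.mean()) / np.maximum(flat - dark, 1e-6)
    return (raw - dark) * gain + dark.mean()

rng = np.random.default_rng(1)
offset = rng.normal(0.0, 5.0, (240, 320))   # fixed-pattern offset noise
dark = 100.0 + offset
flat = 900.0 + 1.2 * offset                 # mild gain non-uniformity
raw = 500.0 + offset                        # scene frame carrying the same pattern
corrected = two_point_nuc(raw, dark, flat)
# The per-pixel gain/offset tables are precomputed once; per frame the hardware
# then applies one multiply-add per pixel, which is what makes real-time NUC cheap.
```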
ODIN-object-oriented development interface for NMR.
Jochimsen, Thies H; von Mengershausen, Michael
2004-09-01
A cross-platform development environment for nuclear magnetic resonance (NMR) experiments is presented. It allows rapid prototyping of new pulse sequences and provides a common programming interface for different system types. With this object-oriented interface implemented in C++, the programmer is capable of writing applications to control an experiment that can be executed on different measurement devices, even from different manufacturers, without the need to modify the source code. Due to the clear design of the software, new pulse sequences can be created, tested, and executed within a short time. To post-process the acquired data, an interface to well-known numerical libraries is part of the framework. This allows a transparent integration of the data processing instructions into the measurement module. The software focuses mainly on NMR imaging, but can also be used with limitations for spectroscopic experiments. To demonstrate the capabilities of the framework, results of the same experiment, carried out on two NMR imaging systems from different manufacturers are shown and compared with the results of a simulation.
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of imaging, image workflow, and post-processing, and a lack of algorithmic standards that hinders result comparability. As a consequence, wide adoption of fMRI as a clinical tool is low, contributing to community physicians' uncertainty about how to integrate fMRI into practice. In addition, training of physicians in fMRI is in its infancy and requires clinical and technical understanding. Therefore, many institutions that perform fMRI maintain a team of basic researchers and physicians to run fMRI as a routine imaging tool. To provide fMRI as an advanced diagnostic tool to the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and made available to institutions that do not have these resources in-house. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G has the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise through Internet and HealthGrid Web Services technology.
Clayton, Gemma
2013-06-01
This project was undertaken as part of the PhD research of Paul Malone, Principal Investigator, Covance plc, Harrogate. Mr Malone approached the photography department with the aim of settling the current debate on the anatomical and histological features of the distal radioulnar ligaments by capturing the anatomy photographically throughout the process of microtome dissection. The author was approached to lead the photographic protocol as part of her postgraduate certificate training at Staffordshire University. High-resolution digital images of an entire human arm were required, the main area of interest being the distal radioulnar joint of the wrist. Images were taken at 40 μm intervals as the specimen was sliced; when microtomy passed through the ligaments, images were made at 20 μm intervals. A method of suspending a camera approximately 1 metre above the specimen was devised, together with preparations for the capture, processing, and storage of images. The resulting images were then subjected to further analysis in the form of 3-dimensional reconstruction using computer modelling techniques and software. The possibility of merging the images with sequences obtained from both CT and MRI using image handling software is also being explored, in collaboration with the University of Manchester's Visualisation Centre.
Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center
NASA Technical Reports Server (NTRS)
Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val
1989-01-01
Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for visualization of computational aerodynamics, applicable to visualizing flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that a high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." This infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was native (i.e., direct installation on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability: first, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure at another HPC center with different hardware infrastructure, library availability, and software permission policies; second, the spiders were not developed in an isolated manner, which led to software dependency issues during system upgrades or remote software installation. To address such issues, we describe recent innovations using containerization techniques with XNAT/DAX to isolate the infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability, from the system level to the application level; (2) flexible and dynamic software development and expansion; and (3) scalable spider deployment compatible with HPC clusters and local workstations.
NASA Astrophysics Data System (ADS)
Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.
2001-05-01
The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to produce single photon emission computed tomography (SPECT) data. It incorporates a heart model of known size and shape, making it invaluable for measuring the accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected by standard post-processing quantitation programs are converted, through interpolation in space and time, into new B-spline models. Perfusion abnormalities are added to the model based on the results of standard perfusion quantification. The new LV is translated and rotated to fit within the existing atria and right ventricular models, which are scaled to the size of the LV. Simulations were created for five patients with myocardial infarctions who had undergone SPECT perfusion imaging. The shape, size, and motion of the resulting activity maps were compared visually to the original SPECT images and matched well in all cases. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating the efficacy of processing algorithms.
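A hedged sketch of the interpolation step using SciPy: fit a periodic smoothing B-spline through one detected contour and resample it densely; repeating this across slices and time frames yields a surface model. The contour data and smoothing factor are illustrative assumptions, and the MCAT phantom's actual spline machinery is not reproduced here.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# One detected endocardial contour: a noisy ring of 3D surface points.
t = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False)
x = 30.0 * np.cos(t) + 0.3 * np.random.randn(24)
y = 30.0 * np.sin(t) + 0.3 * np.random.randn(24)
z = np.full(24, 12.0)

# Fit a periodic smoothing B-spline through the contour, then resample densely;
# doing this per slice and per time frame builds up the new surface model.
tck, u = splprep([x, y, z], s=2.0, per=True)
xs, ys, zs = splev(np.linspace(0.0, 1.0, 200), tck)
```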
fMRI paradigm designing and post-processing tools
James, Jija S; Rajesh, PG; Chandran, Anuvitha VS; Kesavadas, Chandrasekharan
2014-01-01
In this article, we first review aspects of functional magnetic resonance imaging (fMRI) paradigm design for major cognitive functions using stimulus delivery systems such as Cogent, E-Prime, and Presentation, along with their technical aspects. We also review stimulus presentation possibilities (block, event-related) for visual or auditory paradigms and their advantages in both clinical and research settings. The second part focuses on fMRI data post-processing tools such as Statistical Parametric Mapping (SPM) and BrainVoyager, and discusses the preprocessing steps involved (realignment, co-registration, normalization, smoothing) in these software packages, as well as the statistical analysis principles of General Linear Modeling for the final interpretation of a functional activation result. PMID:24851001
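The GLM logic these packages apply per voxel can be sketched in a few lines; the boxcar design, simplified gamma-difference HRF, and noise model below are illustrative assumptions, not SPM's implementation.

```python
import numpy as np
from scipy.stats import gamma

# Block design: 10 s task / 10 s rest, TR = 2 s, 100 volumes (illustrative).
n, tr = 100, 2.0
boxcar = (np.arange(n) * tr % 20.0) < 10.0

# Simplified hemodynamic response as a difference of gamma densities.
t = np.arange(0.0, 30.0, tr)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)
regressor = np.convolve(boxcar.astype(float), hrf)[:n]

# GLM: y = X @ beta + error, solved per voxel by ordinary least squares.
X = np.column_stack([regressor, np.ones(n)])      # task regressor + baseline
y = 0.8 * regressor + np.random.randn(n)          # one voxel's time series
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# t-statistic for the task effect (contrast c = [1, 0]).
c = np.array([1.0, 0.0])
resid = y - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
t_stat = (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.pinv(X.T @ X) @ c)
```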
An image-processing software package: UU and Fig for optical metrology applications
NASA Astrophysics Data System (ADS)
Chen, Lujie
2013-06-01
Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], and camera calibration [4], in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few are catered to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets; at the time of writing, they have been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, data fitting, 3D image processing, vector image processing, precision device control (rotary stages, PZT stages, etc.), point-cloud-to-surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
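As an example of the kind of computation such a package supports, here is the classic four-step phase-shifting calculation; this is a generic sketch with synthetic fringes, not UU's actual code.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 90 degrees each.

    With I_k = A + B*cos(phi + k*pi/2), k = 0..3, it follows that
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi).
    """
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringes over a tilted surface (illustrative).
yy, xx = np.mgrid[:256, :256]
phi_true = 0.05 * xx + 0.02 * yy
frames = [100.0 + 50.0 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_wrapped = four_step_phase(*frames)     # equals phi_true modulo 2*pi
```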
Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.
2017-01-01
The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482
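The core bookkeeping SamuROI automates, extracting a mean-fluorescence trace per ROI and converting it to ΔF/F, can be sketched generically as below. This is not SamuROI's API; the mask, movie, and baseline percentile are illustrative assumptions.

```python
import numpy as np

def roi_traces(movie, masks):
    """Mean fluorescence over time for each boolean ROI mask; movie is (t, y, x)."""
    return np.stack([movie[:, m].mean(axis=1) for m in masks])

def dff(trace, baseline_pct=10.0):
    """Delta-F-over-F, taking a low percentile of the trace as the baseline F0."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0

movie = np.random.rand(500, 128, 128).astype(np.float32)  # stand-in Ca2+ movie
soma = np.zeros((128, 128), dtype=bool)
soma[60:70, 60:70] = True                                 # one hand-drawn ROI
signal = dff(roi_traces(movie, [soma])[0])
```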
Correspondence between fiber post and drill dimensions for post canal preparation.
Portigliatti, Ricardo Pablo; Tumini, José Luis; Bertoldi Hepburn, Alejandro Daniel; Aromando, Romina Flavia; Olmos, Jorge Lorenzo
2017-12-01
To compare fiber posts of several calibers and brands with their corresponding root canal preparation drills. Three widely used endodontic post brands and their drills were evaluated: Exacto, ParaPost Taper Lux, and Macro-Lock Illusion X-RO. Fiber posts and drills were microphotographed with a scanning electron microscope, and images were analyzed using the ImageJ image processing software. Fiber post diameter at the apical extreme (Pd0), fiber post diameter at 5 mm from the apical extreme (Pd5), drill diameter at the apical extreme (Dd0), and drill diameter at 5 mm from the apical extreme (Dd5) were analyzed. The data were statistically analyzed using Student's t-test. Exacto posts 0.5 showed larger dimensions than their corresponding drills at Pd0 (P < 0.05). Macro-Lock posts showed no significant differences from their drills at Pd0 in any of the studied groups. ParaPost drills 4.5, 5, and 5.5 were significantly larger than their posts at Dd0 (P < 0.05). Exacto posts 0.5 and 1 showed larger dimensions than their drills at Pd5 (P < 0.05), while Exacto posts number 2 showed smaller calibers than their corresponding drills at Pd5 (P < 0.05). Macro-Lock drills number 4 and ParaPost drills number 5 were larger than their posts at Dd5 (P < 0.05). Poor spatial correspondence between post and drill dimensions can adversely affect the film thickness of the resin cement, diminishing bond strength due to polymerization shrinkage. The lack of correspondence in size between posts and drills may lead to the formation of empty chambers between the post and the endodontic obturation, with excessive luting cement thickness, thus inducing critical C-factor stresses.
Compact Video Microscope Imaging System Implemented in Colloid Studies
NASA Technical Reports Server (NTRS)
McDowell, Mark
2002-01-01
Photographs show a fiber-optic light source; a microscope and charge-coupled device (CCD) camera head connected to the camera body; the CCD camera body feeding data to an image acquisition board in a PC; and a Cartesian robot controlled via a PC board. The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimal amount of user intervention. The system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations, which is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface that can be used independently of the hardware for further post-experiment analysis. CMIS was developed in the SML Laboratory at the NASA Glenn Research Center, has been adapted for colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package able to handle over 100 space and industrial applications.
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach makes it possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also requires far fewer iteration steps than conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
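As a minimal sketch of how a sensor-derived PSF can drive a non-iterative post-stage correction, the following uses Wiener deconvolution from scikit-image. The PSF input stands in for whatever the wavefront sensor provides, and the balance parameter is purely illustrative; this is not the paper's algorithm.

```python
# Single-pass deblurring once a turbulence PSF estimate is available.
import numpy as np
from skimage import restoration

def correct(blurred: np.ndarray, psf: np.ndarray) -> np.ndarray:
    psf = psf / psf.sum()                         # normalize PSF energy
    return restoration.wiener(blurred, psf, balance=0.1)  # regularized inverse
```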
Badano, Luigi P; Kolias, Theodore J; Muraru, Denisa; Abraham, Theodore P; Aurigemma, Gerard; Edvardsen, Thor; D'Hooge, Jan; Donal, Erwan; Fraser, Alan G; Marwick, Thomas; Mertens, Luc; Popescu, Bogdan A; Sengupta, Partho P; Lancellotti, Patrizio; Thomas, James D; Voigt, Jens-Uwe
2018-03-27
The EACVI/ASE/Industry Task Force to standardize deformation imaging prepared this consensus document to standardize definitions and techniques for using two-dimensional (2D) speckle tracking echocardiography (STE) to assess left atrial, right ventricular, and right atrial myocardial deformation. This document is intended for both the technical engineering community and the clinical community at large to provide guidance on selecting the functional parameters to measure and how to measure them using 2D STE. This document represents a significant step forward in the collaboration between the scientific societies and industry, since the technical specifications of the software packages designed to post-process echocardiographic datasets have been agreed upon and shared before their actual development. Hopefully, this will lead to more clinically oriented software packages that are better tailored to clinical needs and will allow industry to save time and resources in their development.
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Day, John H. (Technical Monitor)
2000-01-01
Post-processing of data related to a Global Positioning System (GPS) simulation is an important activity in the qualification of a GPS receiver for space flight. Because a GPS simulator is a critical resource, it is desirable to move the pertinent simulation data off the simulator as soon as a test is completed. The simulator data files are usually moved to a Personal Computer (PC), where the post-processing of the receiver-logged measurements, solutions data, and simulated data is performed. Typically, post-processing is accomplished using PC-based commercial software languages and tools. Because of the generality of commercial software systems, their general-purpose functions are notoriously slow and more often than not are the bottleneck, even for short-duration experiments. For example, it may take 8 hours to post-process data from a 6-hour simulation. There is a need to do post-processing faster, especially in order to use previous test results as feedback for the next simulation setup. This paper demonstrates that a fast software linear interpolation algorithm is applicable to a large class of engineering problems, like GPS simulation data post-processing, where computational time is a critical resource and one of the most important considerations. An approach is developed that speeds up post-processing by an order of magnitude. It is based on improving the bottleneck interpolation algorithm using a priori information that is specific to the GPS simulation application. The presented post-processing scheme was used in support of several successful space flight missions carrying GPS receivers. A future approach to solving the post-processing performance problem using Field Programmable Gate Array (FPGA) technology is described.
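The flavor of such an a priori speed-up can be sketched as follows: if the simulator output is known to be uniformly sampled in time, the bracketing sample index can be computed directly in O(1) instead of searched for, removing the interpolation bottleneck. This is an illustrative reconstruction, not the paper's algorithm.

```python
# Linear interpolation on a uniform grid t_k = t0 + k*dt (assumed a priori).
import numpy as np

def interp_uniform(t_query, t0, dt, values):
    i = int((t_query - t0) // dt)            # direct index, no search needed
    i = max(0, min(i, len(values) - 2))      # clamp to valid bracket
    frac = (t_query - (t0 + i * dt)) / dt
    return (1.0 - frac) * values[i] + frac * values[i + 1]
```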
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-03-01
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding the extra medical imaging information into a standard DICOM header. A DICOM IQSC software toolkit has been developed to implement this SC-centered integration of quantitative analysis information for routine nuclear medicine practice. Initial experiments show that the DICOM IQSC method is simple to implement and seamlessly integrates post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
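A hedged sketch of the general embedding idea, using pydicom: quantification results are serialized into a private block of an existing Secondary Capture header. The tag group, private-creator string, and payload layout below are invented for illustration and are not the DICOM IQSC conventions.

```python
# Store IQ/ROI/TAC results alongside a screen-shot SC via a private DICOM block.
import json
import pydicom

ds = pydicom.dcmread("screenshot_sc.dcm")            # hypothetical SC file
block = ds.private_block(0x00E1, "IQSC_DEMO", create=True)  # illustrative group/creator
payload = {"roi_mean": 4.7, "tac": [1.2, 3.4, 2.8]}  # illustrative quantification data
block.add_new(0x01, "LT", json.dumps(payload))       # long-text element in the block
ds.save_as("screenshot_sc_iq.dcm")
```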
GUIDOS: tools for the assessment of pattern, connectivity, and fragmentation
NASA Astrophysics Data System (ADS)
Vogt, Peter
2013-04-01
Pattern, connectivity, and fragmentation can be considered pillars of a quantitative analysis of digital landscape images. The free software toolbox GUIDOS (http://forest.jrc.ec.europa.eu/download/software/guidos) includes a variety of dedicated methodologies for the quantitative assessment of these features. Amongst others, Morphological Spatial Pattern Analysis (MSPA) is used for an intuitive description of image pattern structures and the automatic detection of connectivity pathways. GUIDOS includes tools for the detection and quantitative assessment of key nodes and links, as well as for defining connectedness in raster images and setting up appropriate input files for an enhanced network analysis using Conefor Sensinode. Finally, fragmentation is usually defined from a species point of view, but a generic and quantifiable indicator is needed to measure fragmentation and its changes. Some preliminary results for different conceptual approaches will be shown for a sample dataset. Complemented by pre- and post-processing routines and a complete GIS environment, the portable GUIDOS Toolbox may facilitate a holistic assessment in risk assessment studies, landscape planning, and conservation/restoration policies. Alternatively, individual analysis components may contribute to or enhance studies conducted with other software packages in landscape ecology.
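The morphological intuition behind MSPA can be sketched with plain binary morphology. Real MSPA distinguishes many more classes (islet, bridge, loop, and so on), so this toy only separates core pixels from edge pixels.

```python
# Classify foreground pixels of a binary land-cover mask as core or edge.
import numpy as np
from scipy import ndimage

def core_edge(mask: np.ndarray, edge_width: int = 1):
    mask = mask.astype(bool)
    core = ndimage.binary_erosion(mask, iterations=edge_width)  # survives erosion
    edge = mask & ~core                                         # foreground rim
    return core, edge
```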
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g., traffic webcams for phenological studies, or camera networks monitoring ski tracks and avalanches over mountains for hydrological studies. To efficiently harness the potential of these camera networks, easy-to-use software that can obtain and handle images from different networks with different protocols and standards is necessary. Numerous software packages for analyzing images from webcam networks are freely available, with different strong features not only for analyzing but also for post-processing digital images. But specifically for ease of use, applicability, and scalability, a different set of features could be added. Thus, a more customized approach would be of high value, not only for analyzing images of comprehensive camera networks, but also for enabling operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which such a customized approach is followed. FMIPROT currently has the following features: • straightforward installation, • no software dependencies requiring extra installations, • communication with multiple camera networks, • automatic downloading and handling of images, • a user-friendly and simple user interface, • data filtering, • visualization of results on customizable plots, • plugins that allow users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". Color fraction extraction calculates the fractions of the red, green, and blue colors in a region of interest, along with brightness and luminance parameters. The vegetation indices analysis is a collection of indices used in vegetation phenology and includes "Green Fraction" (green chromatic coordinate), "Green-Red Vegetation Index", and "Green Excess Index". A "Snow cover fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is currently being implemented. FMIPROT is being developed during the EU Life+ MONIMET project, for which we mounted 28 cameras at 14 different sites in Finland as the MONIMET camera network. In this paper, we present details of FMIPROT and analysis results from the MONIMET camera network, and discuss planned future developments of FMIPROT.
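For instance, the "Green Fraction" (green chromatic coordinate) analysis named above reduces to a simple band ratio over a region of interest; a minimal sketch:

```python
# GCC = G / (R + G + B), computed over an ROI of an RGB webcam image.
import numpy as np

def green_chromatic_coordinate(rgb_roi: np.ndarray) -> float:
    """rgb_roi: H x W x 3 array from a camera image (ROI already cropped)."""
    r, g, b = (rgb_roi[..., i].astype(float).sum() for i in range(3))
    return g / (r + g + b)
```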
Retrieval of land cover information under thin fog in Landsat TM image
NASA Astrophysics Data System (ADS)
Wei, Yuchun
2008-04-01
Thin fog, which often appears in remote sensing images of subtropical climate regions, results in low image quality and poor image mapping. It is therefore necessary to develop image processing methods to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, which lies in the subtropical climate zone of China, was used as an example, and a workflow and method for retrieving land cover information under thin fog were built based on the ENVI software and a single TM image. The workflow comprises three steps: 1) isolating the thin fog area in the image according to the spectral differences of the different bands; 2) retrieving the visible-band information of the different land cover types under thin fog from the near-infrared bands, according to the relationships between the near-infrared and visible bands of those land cover types in the fog-free area; 3) image post-processing. The results showed that the method is simple and suitable, and can be used to improve the quality of TM image mapping effectively.
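Step 2 of this workflow can be sketched as a per-class linear regression between bands. The function below is an illustrative reconstruction under the assumption of a simple linear NIR-visible relationship, not the paper's exact ENVI procedure.

```python
# Fit vis ~ a*nir + b over fog-free pixels, then predict vis under thin fog.
import numpy as np

def fit_and_fill(vis, nir, clear_mask, fog_mask):
    a, b = np.polyfit(nir[clear_mask], vis[clear_mask], 1)  # slope, intercept
    filled = vis.copy()
    filled[fog_mask] = a * nir[fog_mask] + b                # retrieved signal
    return filled
```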
Comparison of breast percent density estimation from raw versus processed digital mammograms
NASA Astrophysics Data System (ADS)
Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina
2011-03-01
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
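A sketch of the statistics reported above, with invented stand-in values for the paired PD% readings:

```python
# Pearson correlation plus paired t-test for a systematic raw-vs-processed offset.
from scipy import stats

pd_raw = [22.1, 15.4, 31.0, 18.7, 25.2]    # PD% from raw images (illustrative)
pd_proc = [23.0, 16.9, 32.4, 19.6, 26.7]   # PD% from post-processed images

r, _ = stats.pearsonr(pd_raw, pd_proc)
t, p = stats.ttest_rel(pd_raw, pd_proc)    # paired comparison
print(f"r = {r:.2f}, paired t-test p = {p:.3f}")
```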
Appel, R D; Palagi, P M; Walther, D; Vargas, J R; Sanchez, J C; Ravier, F; Pasquali, C; Hochstrasser, D F
1997-12-01
Although two-dimensional electrophoresis (2-DE) computer analysis software packages have existed ever since 2-DE technology was developed, it is only now that the hardware and software technology allows large-scale studies to be performed on low-cost personal computers or workstations, and that setting up a 2-DE computer analysis system in a small laboratory is no longer considered a luxury. After a first attempt in the seventies and early eighties to develop 2-DE analysis software systems on hardware that had poor or even no graphical capabilities, followed in the late eighties by a wave of innovative software developments that were possible thanks to new graphical interface standards such as XWindows, a third generation of 2-DE analysis software packages has now come to maturity. It can be run on a variety of low-cost, general-purpose personal computers, thus making the purchase of a 2-DE analysis system easily attainable for even the smallest laboratory that is involved in proteome research. Melanie II 2-D PAGE, developed at the University Hospital of Geneva, is such a third-generation software system for 2-DE analysis. Based on unique image processing algorithms, this user-friendly object-oriented software package runs on multiple platforms, including Unix, MS-Windows 95 and NT, and Power Macintosh. It provides efficient spot detection and quantitation, state-of-the-art image comparison, statistical data analysis facilities, and is Internet-ready. Linked to proteome databases such as those available on the World Wide Web, it represents a valuable tool for the "Virtual Lab" of the post-genome area.
OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization
NASA Astrophysics Data System (ADS)
Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian
2018-03-01
OIPAV (Ophthalmic Image Processing, Analysis and Visualization) is a cross-platform software system specially oriented to ophthalmic images. It provides a wide range of functionalities, including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization, to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates workflows for processing ophthalmic images, and improves quantitative evaluations. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. With satisfying scalability and expandability, we believe that the software can be widely applied in the ophthalmology field.
[Progress in Application of Measuring Skeleton by CT in Forensic Anthropology Research].
Miao, C Y; Xu, L; Wang, N; Zhang, M; Li, Y S; Lü, J X
2017-02-01
Individual identification by measuring the human skeleton is an important research area in forensic anthropology. Computed tomography (CT) technology can provide high-resolution images of the skeleton, and skeleton images can be reformatted by software in the post-processing workstation. Different anthropological skeleton measurement indexes, such as diameter, angle, area, and volume, can be measured on sectional and reformatted images, and the measurement process is barely affected by human factors. This paper reviews the literature at home and abroad on the application of CT-based skeleton measurement in forensic anthropology research for individual identification in four aspects: sex determination, height inference, facial soft tissue thickness measurement, and age estimation. The major technologies and applications of CT in forensic anthropology research are compared and discussed, respectively. Copyright© by the Editorial Department of Journal of Forensic Medicine.
NASA Technical Reports Server (NTRS)
Aldcroft, T.; Karovska, M.; Cresitello-Dittmar, M.; Cameron, R.
2000-01-01
The aspect system of the Chandra Observatory plays a key role in realizing the full potential of Chandra's x-ray optics and detectors. To achieve the highest spatial and spectral resolution (for grating observations), an accurate post-facto time history of the spacecraft attitude and internal alignment is needed. The CXC has developed a suite of tools which process sensor data from the aspect camera assembly and gyroscopes, and produce the spacecraft aspect solution. In this poster, the design of the aspect pipeline software is briefly described, followed by details of aspect system performance during the first eight months of flight. The two key metrics of aspect performance are: image reconstruction accuracy, which measures the x-ray image blurring introduced by aspect; and celestial location, which is the accuracy of detected source positions in absolute sky coordinates.
Earth Surface Monitoring with COSI-Corr, Techniques and Applications
NASA Astrophysics Data System (ADS)
Leprince, S.; Ayoub, F.; Avouac, J.
2009-12-01
Co-registration of Optically Sensed Images and Correlation (COSI-Corr) is a software package developed at the California Institute of Technology (USA) for accurate geometrical processing of optical satellite and aerial imagery. Initially developed for the measurement of co-seismic ground deformation using optical imagery, COSI-Corr is now used for a wide range of applications in the Earth sciences, which take advantage of the software's capability to co-register, with very high accuracy, images taken from different sensors and acquired at different times. As long as a sensor is supported in COSI-Corr, all images between the supported sensors can be accurately orthorectified and co-registered: for example, it is possible to co-register a series of SPOT images or a series of aerial photographs, as well as to register a series of aerial photographs with a series of SPOT images. Currently supported sensors include the SPOT 1-5, Quickbird, Worldview 1, and Formosat 2 satellites, the ASTER instrument, and frame camera acquisitions from, e.g., aerial surveys or declassified satellite imagery. Potential applications include accurate change detection between multi-temporal and multi-spectral images, and the calibration of pushbroom cameras. In particular, COSI-Corr provides a powerful correlation tool, which allows for accurate estimation of surface displacement. The accuracy depends on many factors (e.g., cloud, snow, and vegetation cover, shadows, temporal changes in general, steadiness of the imaging platform, and defects of the imaging system), but in practice the standard deviation of the measurements obtained from the correlation of multi-temporal images is typically around 1/20 to 1/10 of the pixel size. The software package also includes post-processing tools such as denoising, destriping, and stacking tools to facilitate data interpretation. Examples drawn from current research in, e.g., seismotectonics, glaciology, and geomorphology will be presented. COSI-Corr is developed in IDL (Interactive Data Language), integrated under the user-friendly interface ENVI (Environment for Visualizing Images), and is distributed free of charge for academic research purposes.
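The correlation-based displacement measurement can be approximated with up-sampled phase correlation from scikit-image. This is in the spirit of COSI-Corr's correlator rather than its actual frequency-domain implementation; the up-sampling factor targets the ~1/20-pixel regime quoted above.

```python
# Sub-pixel offset between co-registered image patches from two acquisition dates.
from skimage.registration import phase_cross_correlation

def surface_displacement(patch_before, patch_after):
    shift, error, _ = phase_cross_correlation(
        patch_before, patch_after, upsample_factor=20)
    return shift  # (dy, dx) in pixels; multiply by pixel size for meters
```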
SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows.
Brun, Francesco; Massimi, Lorenzo; Fratini, Michela; Dreossi, Diego; Billé, Fulvio; Accardo, Agostino; Pugliese, Roberto; Cedola, Alessia
2017-01-01
When considering the acquisition of experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image reconstruction part of the experiment, and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP has been designed for post-beamtime (off-line) use and for new reconstructions of past archived data at the user's home institution, where simple computing resources are available. Releases of the software can be downloaded at the Elettra Scientific Computing group GitHub repository: https://github.com/ElettraSciComp/STP-Gui.
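The two essential steps named above, flat fielding and FBP, reduce to a few lines for a parallel-beam slice. A minimal sketch follows; array shapes and clipping constants are illustrative, and this is not the STP pipeline itself.

```python
# Flat fielding + filtered back projection for one parallel-beam CT slice.
import numpy as np
from skimage.transform import iradon

def reconstruct_slice(proj, flat, dark, angles_deg):
    """proj: (n_angles, n_det) raw projections; flat/dark: (n_det,) fields."""
    norm = (proj - dark) / np.clip(flat - dark, 1e-6, None)   # flat fielding
    sino = -np.log(np.clip(norm, 1e-6, None))                 # absorption contrast
    return iradon(sino.T, theta=angles_deg, filter_name="ramp")  # FBP
```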
Processing LiDAR Data to Predict Natural Hazards
NASA Technical Reports Server (NTRS)
Fairweather, Ian; Crabtree, Robert; Hager, Stacey
2008-01-01
ELF-Base and ELF-Hazards (wherein 'ELF' signifies 'Extract LiDAR Features' and 'LiDAR' signifies 'light detection and ranging') are developmental software modules for processing remote-sensing LiDAR data to identify past natural hazards (principally, landslides) and predict future ones. ELF-Base processes raw LiDAR data, including LiDAR intensity data that are often ignored in other software, to create digital terrain models (DTMs) and digital feature models (DFMs) with sub-meter accuracy. ELF-Hazards fuses raw LiDAR data, data from multispectral and hyperspectral optical images, and DTMs and DFMs generated by ELF-Base to generate hazard risk maps. Advanced algorithms in these software modules include line-enhancement and edge-detection algorithms, surface-characterization algorithms, and algorithms that implement innovative data-fusion techniques. The line-extraction and edge-detection algorithms enable users to locate such features as faults and landslide headwall scarps. Also implemented in this software are improved methodologies for identification and mapping of past landslide events by use of (1) accurate, ELF-derived surface characterizations and (2) three LiDAR/optical-data-fusion techniques: post-classification data fusion, maximum-likelihood estimation modeling, and hierarchical within-class discrimination. This software is expected to enable faster, more accurate forecasting of natural hazards than has previously been possible.
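The edge-detection stage can be illustrated with a simple gradient-magnitude filter over a DTM. This toy stands in for ELF's more advanced line-enhancement and edge-detection algorithms, and the threshold is arbitrary.

```python
# Flag abrupt slope breaks (e.g., headwall scarp candidates) in a DTM.
import numpy as np
from scipy import ndimage

def scarp_candidates(dtm: np.ndarray, threshold: float = 5.0) -> np.ndarray:
    gx = ndimage.sobel(dtm, axis=1)
    gy = ndimage.sobel(dtm, axis=0)
    return np.hypot(gx, gy) > threshold   # boolean mask of sharp terrain edges
```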
A PC-based multispectral scanner data evaluation workstation: Application to Daedalus scanners
NASA Technical Reports Server (NTRS)
Jedlovec, Gary J.; James, Mark W.; Smith, Matthew R.; Atkinson, Robert J.
1991-01-01
In late 1989, a personal computer (PC)-based data evaluation workstation was developed to support post flight processing of Multispectral Atmospheric Mapping Sensor (MAMS) data. The MAMS Quick View System (QVS) is an image analysis and display system designed to provide the capability to evaluate Daedalus scanner data immediately after an aircraft flight. Even in its original form, the QVS offered the portability of a personal computer with the advanced analysis and display features of a mainframe image analysis system. It was recognized, however, that the original QVS had its limitations, both in speed and processing of MAMS data. Recent efforts are presented that focus on overcoming earlier limitations and adapting the system to a new data tape structure. In doing so, the enhanced Quick View System (QVS2) will accommodate data from any of the four spectrometers used with the Daedalus scanner on the NASA ER2 platform. The QVS2 is designed around the AST 486/33 MHz CPU personal computer and comes with 10 EISA expansion slots, keyboard, and 4.0 mbytes of memory. Specialized PC-McIDAS software provides the main image analysis and display capability for the system. Image analysis and display of the digital scanner data is accomplished with PC-McIDAS software.
Kudo, Kohsuke; Uwano, Ikuko; Hirai, Toshinori; Murakami, Ryuji; Nakamura, Hideo; Fujima, Noriyuki; Yamashita, Fumio; Goodwin, Jonathan; Higuchi, Satomi; Sasaki, Makoto
2017-04-10
The purpose of the present study was to compare different software algorithms for processing DSC perfusion images of cerebral tumors with respect to i) the relative CBV (rCBV) calculated, ii) the cutoff value for discriminating low- and high-grade gliomas, and iii) the diagnostic performance for differentiating these tumors. Following institutional review board approval, informed consent was obtained from all patients. Thirty-five patients with primary glioma (grade II, 9; grade III, 8; and grade IV, 18 patients) were included. DSC perfusion imaging was performed with a 3-Tesla MRI scanner. CBV maps were generated using 11 different algorithms from four commercially available software packages and one academic program. The rCBV of each tumor relative to normal white matter was calculated by ROI measurements. Differences in rCBV values were compared between algorithms for each tumor grade. Receiver operating characteristic (ROC) analysis was conducted to evaluate the diagnostic performance of the different algorithms for differentiating between grades. Several algorithms showed significant differences in rCBV, especially for grade IV tumors. When differentiating between low-grade (II) and high-grade (III/IV) tumors, the area under the ROC curve (Az) was similar (range 0.85-0.87), and there were no significant differences in Az between any pair of algorithms. In contrast, the optimal cutoff values varied between algorithms (range 4.18-6.53). rCBV values of tumors and cutoff values for discriminating low- and high-grade gliomas differed between software packages, suggesting that optimal software-specific cutoff values should be used for the diagnosis of high-grade gliomas.
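The ROC analysis described can be sketched as follows, with invented rCBV values; the optimal cutoff is taken at the maximum Youden index, one common choice.

```python
# Az (area under the ROC curve) and an optimal rCBV cutoff by Youden's J.
import numpy as np
from sklearn.metrics import roc_curve, auc

rcbv = np.array([2.1, 3.5, 7.2, 5.9, 1.8, 6.4, 8.0, 4.9])   # illustrative values
high_grade = np.array([0, 0, 1, 1, 0, 1, 1, 1])             # 1 = grade III/IV

fpr, tpr, thresholds = roc_curve(high_grade, rcbv)
print("Az =", auc(fpr, tpr))
print("optimal cutoff =", thresholds[np.argmax(tpr - fpr)])  # max Youden J
```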
JIP: Java image processing on the Internet
NASA Astrophysics Data System (ADS)
Wang, Dongyan; Lin, Bo; Zhang, Jun
1998-12-01
In this paper, we present JIP - Java Image Processing on the Internet, a new Internet-based application for remote education and software presentation. JIP offers an integrated learning environment on the Internet where remote users not only can share static HTML documents and lecture notes, but also can run and reuse dynamic distributed software components, without having the source code or any extra work of software compilation, installation, and configuration. By implementing a platform-independent distributed computational model, local computational resources are consumed instead of the resources on a central server. As an extended Java applet, JIP allows users to select local image files on their computers or specify any image on the Internet using a URL as input. Multimedia lectures such as streaming video/audio and digital images are integrated into JIP and intelligently associated with specific image processing functions. Watching demonstrations and practicing the functions with user-selected input data dramatically encourages learning interest, while promoting the understanding of image processing theory. The JIP framework can easily be applied to other subjects in education or software presentation, such as digital signal processing, business, mathematics, and physics, or to other areas such as employee training and pay-per-use software consumption.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-01
... With Image Processing Systems, Components Thereof, and Associated Software; Notice of Investigation ... and associated software by reason of infringement of certain claims of U.S. Patent Nos. 7,043,087 ... image processing systems, components thereof, and associated software that infringe one or more of claims 1, 6, and ...
Paskevich, Valerie F.
1992-01-01
The Branch of Atlantic Marine Geology has been involved in the collection, processing, and digital mosaicking of high-, medium-, and low-resolution side-scan sonar data during the past 6 years. In the past, processing and digital mosaicking were accomplished with a dedicated, shore-based computer system. Given the need to process side-scan data in the field, and the increased power and reduced cost of workstations, a need was identified for an image processing package on a UNIX-based computer system that could be utilized in the field as well as be more generally available to Branch personnel. This report describes the initial development of that package, referred to as the Woods Hole Image Processing System (WHIPS). The software was developed using the Unidata NetCDF software interface to allow data to be more readily portable between different computer operating systems.
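The portability role of NetCDF can be illustrated by writing a processed image grid to a self-describing file; the variable and dimension names below are invented, not the WHIPS schema.

```python
# Write a processed sonar image to NetCDF so it moves between operating systems.
import numpy as np
from netCDF4 import Dataset

def save_image(path: str, image: np.ndarray):
    with Dataset(path, "w") as nc:
        nc.createDimension("row", image.shape[0])
        nc.createDimension("col", image.shape[1])
        var = nc.createVariable("backscatter", "f4", ("row", "col"))
        var[:] = image
```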
Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A
2013-04-01
The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical both for the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) clinical trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision-making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform, followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded, generic multi-attribute selection methodology for imaging software was developed and implemented, assuring both a high-quality decision outcome and a rational and transparent decision process. This development contributes to the stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple-criteria decision analysis and can be used for choosing the most appropriate imaging software while ensuring both a robust decision process and robust outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
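A toy sketch of an additive multi-attribute value model of the kind such panels use; the attribute names, weights, and platform scores are invented for illustration and bear no relation to the actual EXTEND evaluation.

```python
# Weighted additive value model: score each candidate and pick the best.
attributes = {"accuracy": 0.4, "speed": 0.3, "usability": 0.2, "support": 0.1}
platforms = {
    "Platform A": {"accuracy": 0.9, "speed": 0.7, "usability": 0.8, "support": 0.6},
    "Platform B": {"accuracy": 0.8, "speed": 0.9, "usability": 0.6, "support": 0.7},
}

def value(scores):  # weighted sum of normalized (0-1) attribute scores
    return sum(attributes[a] * scores[a] for a in attributes)

best = max(platforms, key=lambda p: value(platforms[p]))
print(best)
```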
NASA Astrophysics Data System (ADS)
Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie
2014-02-01
Sentinel lymph node (SLN) in vivo detection is vital in breast cancer surgery. This paper presents new imaging software for a near-infrared fluorescence-based surgical navigation system (SNS), developed by our research group, for SLN detection surgery. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) that has been developed in our lab, and is designed specifically for intraoperative imaging and postoperative data analysis. It consists of the following modules: the control module, the image grabbing module, the real-time display module, the data saving module, and the image processing module. Several algorithms have been designed to achieve the required performance, for example, an image registration algorithm based on correlation matching. Key features of the software include: setting the control parameters of the SNS; acquiring, displaying, and storing the intraoperative imaging data automatically in real time; and analyzing and processing the saved image data. The developed software has been used to successfully detect the SLNs in 21 cases of breast cancer patients. In the near future, we plan to improve the software performance, and it will be extensively used for clinical purposes.
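Registration by correlation matching, as named above, can be sketched with normalized cross-correlation from scikit-image. This illustrates the principle only and is not the SNS implementation.

```python
# Locate a template patch within a video frame via normalized cross-correlation.
import numpy as np
from skimage.feature import match_template

def register(frame: np.ndarray, template: np.ndarray):
    ncc = match_template(frame, template)             # correlation surface
    row, col = np.unravel_index(np.argmax(ncc), ncc.shape)
    return row, col                                   # top-left offset of best match
```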
NASA Astrophysics Data System (ADS)
Chen, Po-Hao; Botzolakis, Emmanuel; Mohan, Suyash; Bryan, R. N.; Cook, Tessa
2016-03-01
In radiology, diagnostic errors occur through either failure of detection or incorrect interpretation. Errors are estimated to occur in 30-35% of all exams and contribute to 40-54% of medical malpractice litigation. In this work, we focus on reducing incorrect interpretation of known imaging features. The existing literature categorizes the cognitive biases that lead a radiologist to an incorrect diagnosis despite correctly recognized abnormal imaging features: anchoring bias, framing effect, availability bias, and premature closure. Computational methods make a unique contribution here, as they do not exhibit the same cognitive biases as a human. Bayesian networks formalize the diagnostic process: they modify pre-test diagnostic probabilities using clinical and imaging features, arriving at a post-test probability for each possible diagnosis. To translate Bayesian networks to clinical practice, we implemented an entirely web-based, open-source software tool. In this tool, the radiologist first selects a network of choice (e.g., basal ganglia). Large, clearly labeled buttons displaying salient imaging features are then shown on the screen, serving both as a checklist and as input. As the radiologist inputs the value of an extracted imaging feature, the conditional probabilities of each possible diagnosis are updated. The software presents its level of diagnostic discrimination using a Pareto distribution chart, updated with each additional imaging feature. Active collaboration with the clinical radiologist is a feasible approach to software design and leads to design decisions closely coupling the complex mathematics of conditional probability in Bayesian networks with practice.
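The underlying update is ordinary Bayesian conditioning; a minimal sketch with invented diagnoses, priors, and likelihoods (a real network also models dependencies between features):

```python
# Multiply the prior by each feature's likelihood per diagnosis, then renormalize.
priors = {"dx_A": 0.5, "dx_B": 0.3, "dx_C": 0.2}        # pre-test probabilities
likelihood = {                                           # P(feature | diagnosis)
    "calcification": {"dx_A": 0.8, "dx_B": 0.2, "dx_C": 0.5},
}

def update(post, feature):
    post = {dx: p * likelihood[feature][dx] for dx, p in post.items()}
    z = sum(post.values())
    return {dx: p / z for dx, p in post.items()}         # post-test probabilities

posterior = update(priors, "calcification")
```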
NASA Technical Reports Server (NTRS)
1992-01-01
To convert raw data into environmental products, the National Weather Service and other organizations use the Global 9000 image processing system marketed by Global Imaging, Inc. The company's GAE software package is an enhanced version of the TAE, developed by Goddard Space Flight Center to support remote sensing and image processing applications. The system can be operated in three modes and is combined with HP Apollo workstation hardware.
The ImageJ ecosystem: an open platform for biomedical image analysis
Schindelin, Johannes; Rueden, Curtis T.; Hiner, Mark C.; Eliceiri, Kevin W.
2015-01-01
Technology in microscopy advances rapidly, enabling increasingly affordable, faster, and more precise quantitative biomedical imaging, which necessitates correspondingly more-advanced image processing and analysis techniques. A wide range of software is available – from commercial to academic, special-purpose to Swiss army knife, small to large–but a key characteristic of software that is suitable for scientific inquiry is its accessibility. Open-source software is ideal for scientific endeavors because it can be freely inspected, modified, and redistributed; in particular, the open-software platform ImageJ has had a huge impact on life sciences, and continues to do so. From its inception, ImageJ has grown significantly due largely to being freely available and its vibrant and helpful user community. Scientists as diverse as interested hobbyists, technical assistants, students, scientific staff, and advanced biology researchers use ImageJ on a daily basis, and exchange knowledge via its dedicated mailing list. Uses of ImageJ range from data visualization and teaching to advanced image processing and statistical analysis. The software's extensibility continues to attract biologists at all career stages as well as computer scientists who wish to effectively implement specific image-processing algorithms. In this review, we use the ImageJ project as a case study of how open-source software fosters its suites of software tools, making multitudes of image-analysis technology easily accessible to the scientific community. We specifically explore what makes ImageJ so popular, how it impacts life science, how it inspires other projects, and how it is self-influenced by coevolving projects within the ImageJ ecosystem. PMID:26153368
Image-Processing Software For A Hypercube Computer
NASA Technical Reports Server (NTRS)
Lee, Meemong; Mazer, Alan S.; Groom, Steven L.; Williams, Winifred I.
1992-01-01
The Concurrent Image Processing Executive (CIPE) is a software system intended for developing and running image-processing application programs in a concurrent computing environment. Designed to shield the programmer from the complexities of concurrent-system architecture, it provides an interactive image-processing environment for the end user. CIPE utilizes the architectural characteristics of a particular concurrent system to maximize efficiency while preserving architectural independence for the user and programmer. CIPE runs on a Mark-IIIfp 8-node hypercube computer and an associated SUN-4 host computer.
The variability of software scoring of the CDMAM phantom associated with a limited number of images
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying J.; Van Metter, Richard
2007-03-01
Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all of these methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. For one measurement trial, the threshold visibility thickness (TVT) for each disk diameter was determined from the CDCOM scorings of a randomly selected group of eight images, using previously reported post-analysis methods. This random selection process was repeated 3,000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies, and different digital systems were compared. The additional variability of the 0.1 mm disk diameter was explored by comparing the results from two different image data sets acquired under the same conditions on the same system. The magnitude and type of error estimated for the experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test can be underestimated by the commonly used method of random re-sampling.
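The resampling experiment can be mimicked in a few lines; the per-image scores below are simulated from a binomial model rather than taken from CDCOM output, so the numbers only illustrate the mechanics.

```python
# Repeatedly draw 8 of 36 images and record a score statistic to see how much
# an 8-image estimate can wander.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.binomial(n=16, p=0.6, size=36) / 16   # simulated per-image scores

trials = [rng.choice(scores, size=8, replace=False).mean() for _ in range(3000)]
print("spread of 8-image estimates:", np.std(trials))
```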
MOSAIC: Software for creating mosaics from collections of images
NASA Technical Reports Server (NTRS)
Varosi, F.; Gezari, D. Y.
1992-01-01
We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented using the Interactive Data Language (IDL) and incorporates new methods for the processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures while preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or SunView graphics interface.
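A hedged sketch of photometry-preserving co-addition, the core of any mosaicking step: overlapping exposures are averaged via an explicit coverage map rather than overwritten (integer pixel offsets are assumed for brevity; MOSAIC's actual methods are more sophisticated).

```python
# Sum exposures into a canvas and divide by per-pixel coverage.
import numpy as np

def mosaic(frames, offsets, shape):
    """frames: list of 2D arrays; offsets: list of (row, col) placements."""
    canvas = np.zeros(shape)
    weight = np.zeros(shape)
    for f, (r, c) in zip(frames, offsets):
        canvas[r:r + f.shape[0], c:c + f.shape[1]] += f
        weight[r:r + f.shape[0], c:c + f.shape[1]] += 1
    return canvas / np.maximum(weight, 1)   # average where exposures overlap
```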
Web-based interactive 2D/3D medical image processing and visualization software.
Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid
2010-05-01
There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations that prevent such applications from running on every type of workstation. By developing web-based tools, users can access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a purely web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, a web-user-interface layer, a server communication layer, and a wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open-source libraries. Desktop-like user interaction is provided by using AJAX technology in the web user interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed the implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user interface is designed such that users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics for, the origin of a digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches face many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and having undergone a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive, and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based, and processing regularities based. Furthermore, this paper investigates the challenging problems and the proposed strategies of such schemes, based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness, and computational efficiency of source camera brand, model, or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina
2016-04-01
To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates, with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.
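The group comparison described (ANOVA with Tukey-Kramer post hoc correction) can be sketched with SciPy's Tukey HSD, available in recent SciPy releases; the VBD arrays below are invented stand-ins for the 68-subject estimates.

```python
# One-way ANOVA across the three modality estimates, then pairwise Tukey HSD.
from scipy import stats

vbd_ffdm = [10.2, 12.5, 9.8, 11.4]   # illustrative VBD (%) per modality
vbd_dbt = [18.9, 22.0, 17.5, 20.6]
vbd_mri = [15.8, 17.3, 16.1, 16.9]

print(stats.f_oneway(vbd_ffdm, vbd_dbt, vbd_mri))
print(stats.tukey_hsd(vbd_ffdm, vbd_dbt, vbd_mri))  # pairwise adjusted p-values
```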
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of great significance for image reading and diagnosis. As a part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on image series. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then designs and implements a web service system for three-dimensional post-processing of medical images based on the WADO protocol. In order to improve the scalability of the proposed system, the business tasks and calculation operations were separated into two modules. The results showed that the proposed system can provide three-dimensional post-processing services for medical images to multiple clients at the same time, which meets the demand for accessing three-dimensional post-processing operations on volume data over the web.
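A basic WADO-URI retrieval, the access mechanism this service builds on, looks like the following; the host and UIDs are placeholders, while the query parameters are the standard ones from DICOM PS3.18.

```python
# Retrieve one DICOM object over WADO-URI.
import requests

params = {
    "requestType": "WADO",
    "studyUID": "1.2.840.113619.2.67.1",      # placeholder UIDs
    "seriesUID": "1.2.840.113619.2.67.1.1",
    "objectUID": "1.2.840.113619.2.67.1.1.1",
    "contentType": "application/dicom",
}
resp = requests.get("http://pacs.example.org/wado", params=params, timeout=30)
open("object.dcm", "wb").write(resp.content)
```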
Apple Image Processing Educator
NASA Technical Reports Server (NTRS)
Gunther, F. J.
1981-01-01
A software system design is proposed and demonstrated with pilot-project software. The system permits the Apple II microcomputer to be used for personalized computer-assisted instruction in the digital image processing of LANDSAT images. The programs provide data input, menu selection, graphic and hard-copy displays, and both general and detailed instructions. The pilot-project results are considered to be successful indicators of the capabilities and limits of microcomputers for digital image processing education.
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
2014-07-08
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software to meet this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual spheroid, and outputs the results in two different forms in spreadsheets for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis pipeline adapted for large numbers of images, providing high-throughput computation and a quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy background that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process; implementing it is beneficial for making 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
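A hedged sketch of Snakes-based sizing: an active contour is evolved from a circular initialization and axial lengths are read off the contour's principal components. The parameters and the variance-based axis estimate are illustrative, not SpheroidSizer's method.

```python
# Evolve an active contour around a spheroid and estimate its axial lengths.
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

def spheroid_axes(image, center, radius):
    theta = np.linspace(0, 2 * np.pi, 200)
    init = np.column_stack([center[0] + radius * np.sin(theta),
                            center[1] + radius * np.cos(theta)])  # (row, col)
    snake = active_contour(gaussian(image, 3), init, alpha=0.015, beta=10)
    cov = np.cov((snake - snake.mean(axis=0)).T)          # contour-point spread
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return 2 * np.sqrt(2 * eigvals)  # full axes; boundary variance = (semi-axis)^2/2
```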
Escott, Edward J; Rubinstein, David
2004-01-01
It is often necessary for radiologists to use digital images in presentations and conferences. Most imaging modalities produce images in the Digital Imaging and Communications in Medicine (DICOM) format. The image files tend to be large and thus cannot be directly imported into most presentation software, such as Microsoft PowerPoint; the large files also consume storage space. There are many free programs that allow viewing and processing of these files on a personal computer, including conversion to more common file formats such as the Joint Photographic Experts Group (JPEG) format. Free DICOM image viewing and processing software for computers running on the Microsoft Windows operating system has already been evaluated. However, many people use the Macintosh (Apple Computer) platform, and a number of programs are available for these users. The World Wide Web was searched for free DICOM image viewing or processing software that was designed for the Macintosh platform or is written in Java and is therefore platform independent. The features of these programs and their usability were evaluated. There are many free programs for the Macintosh platform that enable viewing and processing of DICOM images. (c) RSNA, 2004.
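The DICOM-to-JPEG conversion described is straightforward with the open-source pydicom and Pillow libraries; a minimal sketch with a placeholder filename and naive full-range windowing (clinical viewers apply proper window/level settings instead).

```python
# Convert one DICOM image to JPEG for use in presentation software.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("slice.dcm")                 # placeholder filename
arr = ds.pixel_array.astype(float)
arr -= arr.min()                                  # naive full-range windowing
arr = (255 * arr / max(float(arr.max()), 1.0)).astype(np.uint8)
Image.fromarray(arr).save("slice.jpg")
```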
Digital radiography: optimization of image quality and dose using multi-frequency software.
Precht, H; Gerke, O; Rosendahl, K; Tingberg, A; Waaler, D
2012-09-01
New developments in the processing of digital radiographs (DR), including multi-frequency processing (MFP), allow optimization of image quality and radiation dose. This is particularly promising in children, as they are believed to be more sensitive to ionizing radiation than adults. To examine whether the use of MFP software reduces the radiation dose without compromising quality in DR of the femur, 5-year-old-equivalent anthropomorphic and technical phantoms were studied. A total of 110 images of an anthropomorphic phantom were acquired on a DR system (Canon DR with CXDI-50 C detector and MLT[S] software) and analyzed by three pediatric radiologists using Visual Grading Analysis. In addition, 3,500 images of a technical contrast-detail phantom (CDRAD 2.0) provided an objective image-quality assessment. Optimal image quality was maintained at a dose reduction of 61% with MLT[S]-optimized images. Even for images of diagnostic quality, MLT[S] provided a dose reduction of 88% compared with the reference image. The impact of the software on image quality was significant for dose (mAs), the dynamic range dark region, and the frequency band. By optimizing image processing parameters, a significant dose reduction is possible without significant loss of image quality.
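Generic multi-frequency processing can be sketched as band-splitting with Gaussian blurs and per-band gains. This illustrates the MFP idea only and is not Canon's MLT[S] algorithm; the band scales and gains are invented.

```python
# Split a radiograph into frequency bands and recombine with per-band gains.
from scipy import ndimage

def mfp(image, sigmas=(1, 4, 16), gains=(1.6, 1.3, 1.0)):
    img = image.astype(float)
    out = ndimage.gaussian_filter(img, sigmas[-1])   # coarsest base band
    prev = img
    for sigma, gain in zip(sigmas, gains):
        blurred = ndimage.gaussian_filter(img, sigma)
        out += gain * (prev - blurred)               # amplified detail band
        prev = blurred
    return out                                       # all gains = 1 reproduces img
```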
NASA Technical Reports Server (NTRS)
Ayap, Shanti; Fisher, Forest; Gladden, Roy; Khanampompan, Teerapat
2008-01-01
This software tool saves time and reduces risk by automating two labor-intensive and error-prone post-processing steps required for every DKF [DSN (Deep Space Network) Keyword File] that MRO (Mars Reconnaissance Orbiter) produces, and it is being extended to post-process the corresponding TSOE (Text Sequence Of Events) as well. The need for this post-processing step stems from limitations in the seq-gen modeling, which result in incorrect DKF generation that must then be cleaned up in post-processing.
Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing
NASA Technical Reports Server (NTRS)
Logan, Thomas L.; Bryant, Nevin A.
1987-01-01
The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Image-Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.
Pict'Earth: A new Method of Virtual Globe Data Acquisition
NASA Astrophysics Data System (ADS)
Johnson, J.; Long, S.; Riallant, D.; Hronusov, V.
2007-12-01
Georeferenced aerial imagery facilitates and enhances Earth science investigations. The realized value of imagery as a tool is measured from the spatial, temporal and radiometric resolution of the imagery. Currently, there is a need for a system that facilitates the rapid acquisition and distribution of high-resolution aerial earth images of localized areas. The Pict'Earth group has developed an apparatus and software algorithms that facilitate such tasks. Hardware includes a small radio-controlled model airplane (RC UAV); light smartphones with high-resolution cameras (Nokia NSeries devices); and a GPS connected to the smartphone via the Bluetooth protocol, or a GPS-equipped phone. Software includes Python code that controls the functions of the smartphone and GPS to acquire data in-flight; online Virtual Globe applications including Google Earth; AJAX/Web 2.0 technologies and services; and APIs and libraries for developers, all based on open XML-based GIS data standards. This new process for acquisition and distribution of high-resolution aerial earth images includes the following stages. Perform a survey over the area of interest (AOI) with the RC UAV (mobile live processing): in real time, our software collects images from the smartphone camera and positional data (latitude, longitude, altitude and heading) from the GPS. The software then calculates the earth footprint (geoprint) of each image and creates KML files that incorporate the georeferenced images and tracks of the UAV. Optionally, it is possible to send the data in-flight via SMS/MMS (text and multimedia messages), or over cellular internet networks via FTP. In post-processing, the images are filtered, transformed, and assembled into an orthorectified image mosaic. The final mosaic is then cut into tiles and uploaded as a user-ready product to web servers in KML format for use in Virtual Globes and other GIS applications. The obtained images and resultant data have high spatial resolution, can be updated in near-real time (high temporal resolution), and provide current radiance values (which is important for seasonal work). The final mosaics can also be assembled into time-lapse sequences and presented temporally. The suggested solution is cost-effective when compared to alternative methods of acquiring similar imagery. The systems are compact, mobile, and do not require a substantial amount of auxiliary equipment. Ongoing development of the software makes it possible to adapt the technology to different platforms, smartphones, sensors, and types of data. The range of application of this technology potentially covers a large part of the spectrum of Earth sciences, including the calibration and validation of high-resolution satellite-derived products. These systems are currently being used for monitoring of dynamic land and water surface processes, and can be used for reconnaissance when locating and establishing field measurement sites.
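To make the geoprint step concrete, here is a minimal sketch of computing a ground footprint from GPS position and heading and emitting a KML GroundOverlay. The camera field of view, the nadir-pointing assumption, the flat-earth small-footprint approximation and the rotation sign convention are all assumptions, not details taken from the Pict'Earth software.

```python
import math

def geoprint(lat, lon, alt_m, heading_deg, fov_deg=60.0):
    """Approximate the ground footprint of a nadir photo as a LatLonBox."""
    half = alt_m * math.tan(math.radians(fov_deg / 2.0))   # half-width in metres
    dlat = half / 111_320.0                                # metres -> deg latitude
    dlon = half / (111_320.0 * math.cos(math.radians(lat)))
    return dict(north=lat + dlat, south=lat - dlat,
                east=lon + dlon, west=lon - dlon,
                rotation=-heading_deg)  # KML rotates counter-clockwise

def kml_overlay(image_href, box):
    """Emit a KML GroundOverlay element for one georeferenced image."""
    return f"""<GroundOverlay>
  <Icon><href>{image_href}</href></Icon>
  <LatLonBox>
    <north>{box['north']:.6f}</north><south>{box['south']:.6f}</south>
    <east>{box['east']:.6f}</east><west>{box['west']:.6f}</west>
    <rotation>{box['rotation']:.1f}</rotation>
  </LatLonBox>
</GroundOverlay>"""

print(kml_overlay("img_0001.jpg", geoprint(48.85, 2.35, 120.0, 90.0)))
```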
NASA Technical Reports Server (NTRS)
Benkelman, Cody A.
1997-01-01
The project team has outlined several technical objectives which will allow the companies to improve on their current capabilities. These include modifications to the imaging system, enabling it to operate more cost-effectively and with greater ease of use; automation of the post-processing software to mosaic and orthorectify the image scenes collected; and the addition of radiometric calibration to greatly aid the ability to perform accurate change detection. Business objectives include fine-tuning of the market plan plus specification of future product requirements, expansion of sales activities (including identification of the additional resources required to meet stated revenue objectives), development of a product distribution plan, and implementation of a worldwide sales effort.
A new software for dimensional measurements in 3D endodontic root canal instrumentation.
Sinibaldi, Raffaele; Pecci, Raffaella; Somma, Francesco; Della Penna, Stefania; Bedini, Rossella
2012-01-01
The main issue to be faced in obtaining size estimates of the 3D modification of the dental canal after endodontic treatment is the co-registration of the image stacks obtained through micro computed tomography (micro-CT) scans before and after treatment. Here, quantitative analysis of micro-CT images has been performed by means of new dedicated software targeted at the analysis of the root canal after endodontic instrumentation. This software analytically calculates the best superposition between the pre- and post-treatment structures using the inertia tensor of the tooth. This strategy avoids minimization procedures, which can be user-dependent and time-consuming. Once the co-registration had been achieved, dimensional measurements were performed by simultaneous evaluation of quantitative parameters over the two superimposed stacks of micro-CT images. The software automatically calculated the changes in volume, surface and symmetry axes occurring in 3D after the instrumentation. The calculation is based on direct comparison of the canal and canal branches selected by the user on the pre-treatment image stack.
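A minimal NumPy sketch of the inertia-tensor idea: compute each volume's center of mass and principal axes, then derive the superposing rotation analytically, with no iterative minimization. The segmentation into binary masks and the handling of eigenvector sign ambiguity are assumptions not covered by the abstract.

```python
import numpy as np

def principal_axes(mask):
    """Center of mass and inertia-tensor eigenvectors of a binary voxel volume."""
    pts = np.argwhere(mask)            # voxel coordinates of the structure
    c = pts.mean(axis=0)
    d = pts - c
    # Inertia tensor: I = sum(|d|^2 * Id3 - outer(d, d)) over all voxels
    inertia = (d ** 2).sum() * np.eye(3) - d.T @ d
    _, vecs = np.linalg.eigh(inertia)  # columns sorted by eigenvalue
    return c, vecs

def align(pre_mask, post_mask):
    """Rotation and translation superposing the post volume onto the pre volume.

    Note: eigenvectors carry a sign ambiguity that a real implementation
    must resolve (e.g. by checking third moments) before applying R.
    """
    c_pre, v_pre = principal_axes(pre_mask)
    c_post, v_post = principal_axes(post_mask)
    R = v_pre @ v_post.T               # maps post principal axes onto pre axes
    return R, c_pre, c_post
```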
Development and implementation of software systems for imaging spectroscopy
Boardman, J.W.; Clark, R.N.; Mazer, A.S.; Biehl, L.L.; Kruse, F.A.; Torson, J.; Staenz, K.
2006-01-01
Specialized software systems have played a crucial role throughout the twenty-five-year course of the development of the new technology of imaging spectroscopy, or hyperspectral remote sensing. By their very nature, hyperspectral data place unique and demanding requirements on the computer software used to visualize, analyze, process and interpret them. Often described as a marriage of the two technologies of reflectance spectroscopy and airborne/spaceborne remote sensing, imaging spectroscopy in fact produces data sets with unique qualities, unlike previous remote sensing or spectrometer data. Because of these unique spatial and spectral properties, hyperspectral data are not readily processed or exploited with legacy software systems inherited from either of the two parent fields of study. This paper provides brief reviews of seven important software systems developed specifically for imaging spectroscopy.
NASA Technical Reports Server (NTRS)
1973-01-01
A description of each of the software modules of the Image Data Processing System (IDAPS) is presented. The changes in the software modules are the result of additions to the application software of the system and an upgrade of the IBM 7094 Mod(1) computer to a 1301 disk storage configuration. Necessary information about IDAPS software is supplied to the computer programmer who desires to make changes in the software system or to use portions of the software outside of the IDAPS system. Each software module is documented with: module name, purpose, usage, common block(s) description, method (algorithm of subroutine), flow diagram (if needed), subroutines called, and storage requirements.
Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David
2017-03-01
The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools for rib fracture assessment in forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures at autopsy) were randomly selected and included in this study. All cases received a native whole-body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different clinical and forensic experience who were blinded to the autopsy results evaluated all cases in random order of modality and case. Each reader's rib fracture assessment was evaluated against autopsy and a CT consensus read as the radiologic reference. A detailed evaluation of relevant test parameters revealed better agreement with the CT consensus read than with the autopsy. Modality C was the significantly fastest rib fracture detection modality, despite slightly lower statistical test parameters compared to modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared to standard autopsy alone. The eagle tool, being easy to use, is suited for initial rib fracture screening prior to autopsy and can therefore be beneficial for forensic pathologists.
Ibrahim, Reham S; Fathy, Hoda
2018-03-30
Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple and fast HPTLC image analysis method for simultaneous quantification of the officially recognized markers 6-, 8-, 10-gingerol and 6-shogaol, in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying and storage employed during processing have a great influence on the ginger chemo-profile; the different forms of processed ginger should not be used interchangeably. Moreover, it was deemed necessary to consider the holistic metabolic profile for comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.
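A minimal sketch of the untargeted step, assuming the densitogram fingerprints have already been exported from ImageJ to a CSV file (one row per sample); the file name, delimiter and autoscaling choice are assumptions, and scikit-learn stands in for whatever chemometrics package the authors used.

```python
import numpy as np
from sklearn.decomposition import PCA

# Rows: samples (processing variants); columns: intensity at each Rf position
X = np.loadtxt("fingerprints.csv", delimiter=",")   # hypothetical ImageJ export
X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)  # autoscale each variable
scores = PCA(n_components=2).fit_transform(X)       # PC1/PC2 scores for the score plot
print(scores)
```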
MMX-I: data-processing software for multimodal X-ray imaging and tomography.
Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea
2016-05-01
A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.
PScan 1.0: flexible software framework for polygon based multiphoton microscopy
NASA Astrophysics Data System (ADS)
Li, Yongxiao; Lee, Woei Ming
2016-12-01
Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy has significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, there have been several flexible software platforms catering to custom-built microscopy systems, e.g. ScanImage, HelioScan and MicroManager, which perform at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high-speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB image processing toolbox. It has the capability to communicate directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high-speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and contains sufficient flexibility for users to adapt it to their high-speed imaging systems.
GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array
Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.
2014-01-01
Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080
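The data-level parallelism being exploited is easiest to see in the synthetic-aperture delay-and-sum operation itself. Below is a hedged NumPy sketch (CPU-side, with the per-pixel loop vectorized over transmit/receive pairs) rather than the authors' CUDA code; it omits Hadamard decoding, aperture weighting, envelope detection and sub-sample interpolation.

```python
import numpy as np

def das_image(rf, tx_pos, rx_pos, pixels, fs, c=1540.0):
    """Delay-and-sum over pre-collected RF data.

    rf: (n_tx, n_rx, n_samples) return signals; tx_pos/rx_pos: (n, 3) element
    positions in metres; pixels: (n_pix, 3) image points; fs: sample rate (Hz).
    """
    img = np.zeros(len(pixels))
    n_samp = rf.shape[2]
    for k, p in enumerate(pixels):
        d_tx = np.linalg.norm(tx_pos - p, axis=1)        # transmit path lengths
        d_rx = np.linalg.norm(rx_pos - p, axis=1)        # receive path lengths
        # Round-trip delay for every tx/rx pair, as nearest sample indices
        idx = ((d_tx[:, None] + d_rx[None, :]) / c * fs).astype(int)
        idx = np.clip(idx, 0, n_samp - 1)
        # Coherent sum over all pairs (magnitude only; no envelope detection)
        img[k] = np.abs(rf[np.arange(rf.shape[0])[:, None],
                           np.arange(rf.shape[1])[None, :], idx].sum())
    return img
```

On a GPU each pixel (and each tx/rx pair) maps to an independent thread, which is exactly the data-level parallelism the abstract refers to.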
Reference software implementation for GIFTS ground data processing
NASA Astrophysics Data System (ADS)
Garcia, R. K.; Howell, H. B.; Knuteson, R. O.; Martin, G. D.; Olson, E. R.; Smuga-Otto, M. J.
2006-08-01
Future satellite weather instruments such as high spectral resolution imaging interferometers pose a challenge to the atmospheric science and software development communities due to the immense data volumes they will generate. An open-source, scalable reference software implementation demonstrating the calibration of radiance products from an imaging interferometer, the Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), is presented. This paper covers essential design principles laid out in summary system diagrams, lessons learned during implementation and preliminary test results from the GIFTS Information Processing System (GIPS) prototype.
Developing tools for digital radar image data evaluation
NASA Technical Reports Server (NTRS)
Domik, G.; Leberl, F.; Raggam, J.
1986-01-01
The refinement of radar image analysis methods has led to a need for a systems approach to radar image processing software. Developments stimulated by satellite radar are combined with standard image processing techniques to create a user environment for manipulating and analyzing airborne and satellite radar images. One aim is to create radar products from the original data that make the image content easier to understand; these results, derived from the original digital images, are called secondary image products. Another aim is to support interactive SAR image analysis. Software methods permit use of a digital height model to create ortho images, synthetic images, stereo-ortho images, radar maps or color combinations of different component products. Efforts are ongoing to integrate individual tools into a combined hardware/software environment for interactive radar image analysis.
Control Software for Advanced Video Guidance Sensor
NASA Technical Reports Server (NTRS)
Howard, Richard T.; Book, Michael L.; Bryan, Thomas C.
2006-01-01
Embedded software has been developed specifically for controlling an Advanced Video Guidance Sensor (AVGS). A Video Guidance Sensor is an optoelectronic system that provides guidance for automated docking of two vehicles. Such a system includes pulsed laser diodes and a video camera, the output of which is digitized. From the positions of digitized target images and known geometric relationships, the relative position and orientation of the vehicles are computed. The present software consists of two subprograms running in two processors that are parts of the AVGS. The subprogram in the first processor receives commands from an external source, checks the commands for correctness, performs commanded non-image-data-processing control functions, and sends image data processing parts of commands to the second processor. The subprogram in the second processor processes image data as commanded. Upon power-up, the software performs basic tests of functionality, then effects a transition to a standby mode. When a command is received, the software goes into one of several operational modes (e.g. acquisition or tracking). The software then returns, to the external source, the data appropriate to the command.
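The command-validation and mode logic described above maps naturally onto a small state machine. The sketch below is a generic Python illustration with an invented command set; it is not the actual embedded AVGS flight software, which runs as two subprograms on two processors in a compiled language.

```python
from enum import Enum, auto

class Mode(Enum):
    STANDBY = auto()
    ACQUISITION = auto()
    TRACKING = auto()

# Hypothetical command table; the real AVGS command dictionary is not public.
TRANSITIONS = {
    ("STANDBY", "ACQUIRE"): Mode.ACQUISITION,
    ("ACQUISITION", "TRACK"): Mode.TRACKING,
    ("TRACKING", "STANDBY"): Mode.STANDBY,
}

class Controller:
    def __init__(self):
        self.mode = Mode.STANDBY        # entered after power-up self-tests pass

    def handle(self, cmd):
        """Check a command, change mode if legal, and echo the resulting state."""
        nxt = TRANSITIONS.get((self.mode.name, cmd))
        if nxt is None:                  # reject malformed or illegal commands
            return {"cmd": cmd, "accepted": False, "mode": self.mode.name}
        self.mode = nxt
        return {"cmd": cmd, "accepted": True, "mode": self.mode.name}

ctl = Controller()
print(ctl.handle("ACQUIRE"))   # {'cmd': 'ACQUIRE', 'accepted': True, 'mode': 'ACQUISITION'}
```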
Software-assisted post-interventional assessment of radiofrequency ablation
NASA Astrophysics Data System (ADS)
Rieder, Christian; Geisler, Benjamin; Bruners, Philipp; Isfort, Peter; Na, Hong-Sik; Mahnken, Andreas H.; Hahn, Horst K.
2014-03-01
Radiofrequency ablation (RFA) is becoming a standard procedure for minimally invasive tumor treatment in clinical practice. Due to its common technical procedure, low complication rate, and low cost, RFA has become an alternative to surgical resection in the liver. To evaluate the therapy success of RFA, thorough follow-up imaging is essential. Conventionally, the shape, size, and position of tumor and coagulation are visually compared in a side-by-side manner using pre- and post-interventional images. To objectify the verification of treatment success, a novel software assistant allowing fast and accurate comparison of tumor and coagulation is proposed. In this work, the clinical value of the proposed assessment software is evaluated. In a retrospective clinical study, 39 cases of hepatic tumor ablation are evaluated using the prototype software and conventional image comparison by four radiologists with different levels of experience. The cases are randomized and evaluated in two sessions to avoid any recall bias. Self-confidence of correct diagnosis (local recurrence vs. no local recurrence) on a six-point scale is given for each case by the radiologists. Sensitivity, specificity, positive and negative predictive values, as well as receiver operating characteristic (ROC) curves, are calculated for both methods. It is shown that the software-assisted method allows physicians to correctly identify local tumor recurrence with a higher percentage than the conventional method (sensitivity: 0.6 vs. 0.35), whereas the percentage of correctly identified successful ablations is slightly reduced (specificity: 0.83 vs. 0.89).
Image Understanding Architecture
1991-09-01
The goal of this effort was to define an architecture to support real-time, knowledge-based image understanding, and to develop the software support environment needed to utilize it. In addition to sensory and knowledge-based processing, it is useful to introduce a level of symbolic processing. Subject terms: Image Understanding Architecture; knowledge-based vision; real-time computer vision; AI; software simulator; parallel processor.
Using digital photo technology to improve visualization of gastric lumen CT images
NASA Astrophysics Data System (ADS)
Pyrgioti, M.; Kyriakidis, A.; Chrysostomou, S.; Panaritis, V.
2006-12-01
In order to better evaluate gastric lumen CT images, a new method was applied to the images using image-processing software. During a 12-month period, 69 patients with various gastric symptoms and 20 volunteers with a normal upper gastrointestinal system underwent computed tomography of the upper gastrointestinal system. Just before the examination, the patients and the volunteers underwent preparation with 40 ml soda water and 10 ml gastrografin. All the CT images were digitized with an Olympus 3.2-Mpixel digital camera and further processed with image-processing software. The administration per os of gastrografin and soda water resulted in distension of the stomach and consequently better visualization of all the anatomic parts. By using image-processing software on a PC, all the pathological and normal images of the stomach were better assessed diagnostically. We believe that digital photo technology improves the diagnostic capacity not only of CT images but also of MRI and probably many other imaging methods.
Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz
2015-01-01
This article aimed at comparing the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCAT®, OnDemand3D® and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest difference between the software-derived measurements and the gold standard was obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCAT (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
Free and open source software for the manipulation of digital images.
Solomon, Robert W
2009-06-01
Free and open source software is software that is nearly as powerful as commercial software but is freely downloadable. This software can do almost everything that the expensive programs can. GIMP (GNU Image Manipulation Program) is the free program that is comparable to Photoshop, and versions are available for Windows, Macintosh, and Linux platforms. This article briefly describes how GIMP can be installed and used to manipulate radiology images. It is no longer necessary to budget large amounts of money for high-quality software to achieve the goals of image processing and document creation, because free and open source software is available for the user to download at will.
Wizard CD Plus and ProTaper Universal: analysis of apical transportation using new software
GIANNASTASIO, Daiana; da ROSA, Ricardo Abreu; PERES, Bernardo Urbanetto; BARRETO, Mirela Sangoi; DOTTO, Gustavo Nogara; KUGA, Milton Carlos; PEREIRA, Jefferson Ricardo; SÓ, Marcus Vinícius Reis
2013-01-01
Objective This study has two aims: 1) to evaluate the apical transportation of the Wizard CD Plus and ProTaper Universal after preparation of simulated root canals; 2) to compare, against Adobe Photoshop, the ability of a new software package (Regeemy) to superpose and subtract images. Material and Methods Twenty-five simulated root canals in acrylic-resin blocks (with 20º curvature) underwent cone beam computed tomography before and after preparation with the rotary systems (70 kVp, 4 mA, 10 s and with the 8×8 cm FoV selection). Canals were prepared up to the F2 (ProTaper) and 24.04 (Wizard CD Plus) instruments and the working length was established at 15 mm. The tomographic images were imported into the iCAT Vision software and CorelDraw for standardization. The superposition of pre- and post-instrumentation images from both systems was performed using Regeemy and Adobe Photoshop. The apical transportation was measured in millimetres using ImageJ. Five acrylic-resin blocks were used to validate the superposition achieved by the software. Student's t-test for independent samples was used to evaluate the apical transportation achieved by the rotary systems using each software package individually. Student's t-test for paired samples was used to compare the ability of each software package to superpose and subtract images from one rotary system at a time. Results The values obtained with Regeemy and Adobe Photoshop were similar for both rotary systems (P>0.05). ProTaper Universal and Wizard CD Plus promoted similar apical transportation regardless of the software used for image superposition and subtraction (P>0.05). Conclusion Wizard CD Plus and ProTaper Universal promoted little apical transportation. Regeemy is a feasible software package for superposing and subtracting images and appears to be an alternative to Adobe Photoshop. PMID:24212994
Subband/Transform MATLAB Functions For Processing Images
NASA Technical Reports Server (NTRS)
Glover, D.
1995-01-01
SUBTRANS software is a package of routines implementing image-data-processing functions for use with MATLAB(TM) software. Provides capability to transform image data with block transforms and to produce spatial-frequency subbands of the transformed data. Functions can be cascaded to provide further decomposition into more subbands. Also used in image-data-compression systems; for example, transforms used to prepare data for lossy compression. Written for use in the MATLAB mathematical-analysis environment.
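To illustrate what a block-transform subband decomposition does, here is a short sketch in Python/NumPy (Python rather than MATLAB, for consistency with the other examples in this compilation): an 8×8 block DCT whose coefficient (u, v) planes are regrouped into 64 spatial-frequency subbands. The block size and orthonormal DCT are illustrative assumptions, not the SUBTRANS defaults.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_subbands(img, b=8):
    """Split an image into b*b spatial-frequency subbands via a block DCT."""
    h, w = (d - d % b for d in img.shape)          # crop to a multiple of b
    img = img[:h, :w]
    blocks = img.reshape(h // b, b, w // b, b).swapaxes(1, 2)  # (h/b, w/b, b, b)
    coeffs = dctn(blocks, axes=(2, 3), norm="ortho")
    # Subband (u, v) collects coefficient (u, v) from every block
    return coeffs.transpose(2, 3, 0, 1)            # (b, b, h/b, w/b)

subbands = block_dct_subbands(np.random.rand(64, 64))
print(subbands.shape)   # (8, 8, 8, 8): 64 subbands, each 8x8 pixels here
```

Cascading, as the abstract describes, would apply the same decomposition again to a chosen subband (typically the lowest-frequency one).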
Li, Haobo; Chen, Yanxi; Qiang, Minfei; Zhang, Kun; Jiang, Yuchen; Zhang, Yijie; Jia, Xiaoyang
2017-06-14
The objective of this study is to evaluate the value of computed tomography (CT) post-processing images in the postoperative assessment of Lisfranc injuries compared with plain radiographs. A total of 79 cases with closed Lisfranc injuries that were treated with conventional open reduction and internal fixation from January 2010 to June 2016 were analyzed. Postoperative assessment was performed by two independent orthopedic surgeons with both plain radiographs and CT post-processing images. Inter- and intra-observer agreement were analyzed by kappa statistics, while the differences between the two postoperative imaging assessments were assessed using the χ² test (McNemar's test). Significance was assumed when p < 0.05. Inter- and intra-observer agreement of CT post-processing images was much higher than that of plain radiographs. Non-anatomic reduction was more easily identified in patients with injuries of Myerson classifications A, B1, B2, and C1 using CT post-processing images across all groups (p < 0.05), and poor internal fixation was also more easily detected in patients with injuries of Myerson classifications A, B1, B2, and C2 using CT post-processing images across all groups (p < 0.05). CT post-processing images can be more reliable than plain radiographs in the postoperative assessment of reduction and implant placement for Lisfranc injuries.
Three Dimensional Optical Coherence Tomography Imaging: Advantages and Advances
Gabriele, Michelle L; Wollstein, Gadi; Ishikawa, Hiroshi; Xu, Juan; Kim, Jongsick; Kagemann, Larry; Folio, Lindsey S; Schuman, Joel S.
2010-01-01
Three dimensional (3D) ophthalmic imaging using optical coherence tomography (OCT) has revolutionized assessment of the eye, the retina in particular. Recent technological improvements have made the acquisition of 3D-OCT datasets feasible. However, while volumetric data can improve disease diagnosis and follow-up, novel image analysis techniques are now necessary in order to process the dense 3D-OCT dataset. Fundamental software improvements include methods for correcting subject eye motion, segmenting structures or volumes of interest, extracting relevant data post hoc and signal averaging to improve delineation of retinal layers. In addition, innovative methods for image display, such as C-mode sectioning, provide a unique viewing perspective and may improve interpretation of OCT images of pathologic structures. While all of these methods are being developed, most remain in an immature state. This review describes the current status of 3D-OCT scanning and interpretation, and discusses the need for standardization of clinical protocols as well as the potential benefits of 3D-OCT scanning that could come when software methods for fully exploiting these rich data sets are available clinically. The implications of new image analysis approaches include improved reproducibility of measurements garnered from 3D-OCT, which may then help improve disease discrimination and progression detection. In addition, 3D-OCT offers the potential for preoperative surgical planning and intraoperative surgical guidance. PMID:20542136
Development of image analysis software for quantification of viable cells in microchips.
Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland
2018-01-01
Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data that is generated requires automated methods for the processing and analysis of all the resulting information. The software packages available so far are suitable for the processing of fluorescence and phase contrast images, but often do not provide good results from transmission light microscopy images, due to intrinsic variation in the image acquisition technique itself (adjustment of brightness/contrast, for instance) and the variability between image acquisitions introduced by operators and equipment. This contribution presents image-processing software, Python-based image analysis for cell growth (PIACG), that is able to calculate, in a highly efficient way, the total area of the well occupied by cells of fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.
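Since PIACG itself is Python-based, a minimal sketch of the core measurement might look like the following, using scikit-image; the thresholding method, the darker-than-background polarity and the small-object cutoff are assumptions, not PIACG's published pipeline.

```python
from skimage import filters, io, morphology

img = io.imread("well.png", as_gray=True)          # transmission light image
th = filters.threshold_otsu(img)
cells = img < th                                   # assume cells darker than background
cells = morphology.remove_small_objects(cells, min_size=64)  # drop debris
occupied_fraction = cells.mean()                   # fraction of well area covered
print(f"occupied area fraction: {occupied_fraction:.3f}")
```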
The influence of software filtering in digital mammography image quality
NASA Astrophysics Data System (ADS)
Michail, C.; Spyropoulou, V.; Kalyvas, N.; Valais, I.; Dimitropoulos, N.; Fountos, G.; Kandarakis, I.; Panayiotakis, G.
2009-05-01
Breast cancer is one of the most frequently diagnosed cancers among women. Several techniques have been developed to help in the early detection of breast cancer, such as conventional and digital x-ray mammography, positron and single-photon emission mammography, etc. A key advantage in digital mammography is that images can be manipulated as simple computer image files. Thus non-dedicated, commercially available image manipulation software can be employed to process and store the images. The image processing tools of the Photoshop (CS 2) software usually incorporate digital filters which may be used to reduce image noise, enhance contrast and increase spatial resolution. However, improving one image quality parameter may result in degradation of another. The aim of this work was to investigate the influence of three sharpening filters, hereafter named sharpen, sharpen more and sharpen edges, on image resolution and noise. Image resolution was assessed by means of the Modulation Transfer Function (MTF). In conclusion, it was found that the correct use of commercial non-dedicated software on digital mammograms may improve some aspects of image quality.
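As a sketch of how an MTF is commonly estimated from an edge image (the abstract does not state the authors' exact procedure, so this edge-method outline is an assumption): differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform.

```python
import numpy as np

def mtf_from_edge(esf, px_mm):
    """Edge spread function -> line spread function -> MTF.

    esf: 1-D profile across a sharp edge (assumed rising); px_mm: pixel pitch in mm.
    Returns spatial frequencies (cycles/mm) and the normalized MTF.
    """
    lsf = np.gradient(esf)                 # differentiate the edge profile
    lsf = lsf / lsf.sum()                  # normalize so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=px_mm)
    return freqs, mtf
```

Running this on profiles from the unfiltered and sharpened images would show directly how each Photoshop filter trades resolution against noise.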
NASA Technical Reports Server (NTRS)
Redmann, G. H.
1976-01-01
Recent advances in image processing and new applications are presented to the user community to stimulate the development and transfer of this technology to industrial and commercial applications. The Proceedings contains 37 papers and abstracts, including many illustrations (some in color) and provides a single reference source for the user community regarding the ordering and obtaining of NASA-developed image-processing software and science data.
NASA Astrophysics Data System (ADS)
Tonbul, H.; Kavzoglu, T.
2017-12-01
Forest fires are among the most important natural disasters, causing damage to natural habitat and human life. Mapping fire-damaged forest areas is crucial for assessing the ecological effects caused by fire, monitoring land cover changes and modeling the atmospheric and climatic effects of fire. In this context, satellite data give users a great advantage by enabling rapid detection of burning areas and determination of the severity of fire damage. Mediterranean ecosystems in particular set suitable conditions for forest fires. In this study, the burnt areas of the forest fire that occurred in June 2017 in the Pedrógão Grande region of Portugal were determined using Landsat 8 OLI and Sentinel-2A satellite images. The Pedrógão Grande fire was one of the largest fires in Portugal: more than 60 people were killed and thousands of hectares were ravaged. Four pairs of pre-fire and post-fire top-of-atmosphere (TOA) and atmospherically corrected images were utilized. The red and near-infrared (NIR) spectral bands of the pre-fire and post-fire images were stacked and the multiresolution segmentation algorithm was applied. In the segmentation process, the image objects were generated with estimated optimum homogeneity criteria. Using eCognition software, rule sets were created to distinguish unburned areas from burned areas. In constructing the rule sets, NDVI threshold values were determined pre- and post-fire, and areas of vegetation loss were detected using the NDVI difference image. The results showed that both satellite images yielded successful results for burned area discrimination, with a very high degree of consistency in terms of spatial overlap and total burned area (over 93%). Object-based image analysis (OBIA) was found to be highly effective in the delineation of burnt areas.
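The NDVI-difference step reduces to a few lines of array arithmetic. The sketch below assumes co-registered red and NIR bands already loaded as NumPy arrays, and the 0.25 change threshold is an illustrative assumption, not the value used in the study.

```python
import numpy as np

def burned_mask(red_pre, nir_pre, red_post, nir_post, threshold=0.25):
    """Flag pixels whose NDVI dropped by more than `threshold` after the fire."""
    eps = 1e-6                                        # avoid division by zero
    ndvi_pre = (nir_pre - red_pre) / (nir_pre + red_pre + eps)
    ndvi_post = (nir_post - red_post) / (nir_post + red_post + eps)
    return (ndvi_pre - ndvi_post) > threshold         # boolean burned-area mask
```

In the study the thresholding is applied to segmented image objects in eCognition rather than to individual pixels, but the underlying NDVI-difference criterion is the same.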
Mori, Shinichiro; Inaniwa, Taku; Kumagai, Motoki; Kuwae, Tsunekazu; Matsuzaki, Yuka; Furukawa, Takuji; Shirai, Toshiyuki; Noda, Koji
2012-06-01
To increase the accuracy of carbon ion beam scanning therapy, we have developed a graphical user interface-based digitally reconstructed radiograph (DRR) software system for use in routine clinical practice at our center. The DRR software is used in particular scenarios in the new treatment facility to achieve the same level of geometrical accuracy at treatment as at the imaging session. The DRR calculation is implemented simply as the summation of CT image voxel values along the X-ray projection ray. Since we implemented graphics processing unit-based computation, the DRR images are calculated with a speed sufficient for the particular clinical practice requirements. Since high-spatial-resolution flat panel detector (FPD) images must be registered to the reference DRR images during the patient setup process in all scenarios, the DRR images also need a spatial resolution close to that of the FPD images. To overcome the limitation imposed on the CT spatial resolution by the CT voxel size, we applied image processing to improve the spatial resolution of the calculated DRRs. The DRR software introduced here enabled patient positioning with sufficient accuracy for the implementation of carbon-ion beam scanning therapy at our center.
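The voxel-summation idea can be shown in a few lines. This parallel-beam NumPy sketch sums attenuation along one voxel axis, whereas a clinical implementation (including the authors' GPU version) traces diverging rays through the volume with interpolation; the HU-to-attenuation mapping below is a standard but assumed conversion.

```python
import numpy as np

def simple_drr(ct, axis=1, mu_water=0.02):
    """Parallel-beam DRR: sum attenuation along one voxel axis.

    ct: 3-D array of Hounsfield units; mu_water: assumed per-voxel
    attenuation scale for water at the effective X-ray energy.
    """
    mu = mu_water * (1.0 + ct / 1000.0)   # HU -> linear attenuation coefficient
    mu = np.clip(mu, 0.0, None)           # air and below contribute nothing
    raysum = mu.sum(axis=axis)            # line integral along the ray direction
    drr = 1.0 - np.exp(-raysum)           # map transmitted intensity to display
    return drr / drr.max()
```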
Pre-stack depth Migration imaging of the Hellenic Subduction Zone
NASA Astrophysics Data System (ADS)
Hussni, S. G.; Becel, A.; Schenini, L.; Laigle, M.; Dessa, J. X.; Galve, A.; Vitard, C.
2017-12-01
In 365 AD, a major M>8 tsunamigenic earthquake occurred along the southwestern segment of the Hellenic subduction zone. Although this is the largest seismic event ever reported in Europe, some fundamental questions remain regarding the deep geometry of the interplate megathrust, as well as of other faults within the overriding plate potentially connected to it. The main objective here is to image those deep structures, whose depths range between 15 and 45 km, using leading-edge seismic reflection equipment. To this end, a 210-km-long multichannel seismic profile was acquired with the 8-km-long streamer and the 6600 cu. in. source of R/V Marcus Langseth. This was realized at the end of 2015, during the SISMED cruise. This survey was made possible through a collective effort gathering several labs (Géoazur, LDEO, ISTEP, ENS-Paris, EOST, LDO, Dpt. Geosciences of Pau Univ.). A preliminary processing sequence was first applied using the Geovation software of CGG, which yielded a post-stack time migration of the collected data, as well as a pre-stack time migration obtained with a model derived from velocity analyses. Using Paradigm software, a pre-stack depth migration was subsequently carried out. This step required some tuning of the pre-processing sequence in order to improve multiple removal and noise suppression and to better reveal the true geometry of reflectors in depth. This iteration of pre-processing included the use of the parabolic Radon transform, FK filtering and time-variant band-pass filtering. An initial velocity model was built using depth-converted RMS velocities obtained from SISMED data for the sedimentary layer, complemented at depth with a smooth version of the tomographic velocities derived from coincident wide-angle data acquired during the 2012 ULYSSE survey. Then, we performed a Kirchhoff pre-stack depth migration with traveltimes calculated using the Eikonal equation. The velocity model was then tuned through residual velocity analyses to flatten reflections in common reflection point gathers. These new results improve the imaging of deep reflectors and even reveal some new features. We will present this work, a comparison with our previously obtained post-stack time migration, as well as some insights into the new geological structures revealed by the depth imaging.
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Syed, Hazari I.
1995-01-01
This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.
Automatic Orientation of Large Blocks of Oblique Images
NASA Astrophysics Data System (ADS)
Rupnik, E.; Nex, F.; Remondino, F.
2013-05-01
Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to their advantages, such as ease of interpretation, completeness through mitigation of occluded areas, as well as system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. The configuration of the cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable; for large image blocks this is certainly an impediment. In the theoretical part of the work we review three common least squares adjustment methods and recap possible ways of orienting a multi-camera system. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon EOS 1D Mark III) with different focal lengths. The oblique cameras are rotated in the four looking directions (forward, backward, left and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a tool developed to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold: reduced computational time and improved success of the Bundle Block Adjustment. It exploits the georeferenced information provided by the Applanix system to constrain feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pix.
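A connectivity analysis of this kind can be sketched as a footprint-overlap test: only image pairs whose projected ground footprints intersect are worth passing to feature extraction and relative orientation. The snippet below uses Shapely on assumed footprints derived from the georeferencing data; it illustrates the general idea rather than the authors' actual tool.

```python
from itertools import combinations
from shapely.geometry import Polygon

def connected_pairs(footprints):
    """footprints: dict image_name -> list of (x, y) ground-footprint corners.

    Returns the image pairs whose footprints overlap, i.e. the candidate
    pairs for tie-point matching in the bundle block adjustment.
    """
    polys = {name: Polygon(corners) for name, corners in footprints.items()}
    return [(a, b) for a, b in combinations(polys, 2)
            if polys[a].intersects(polys[b])]

pairs = connected_pairs({
    "img1": [(0, 0), (100, 0), (100, 80), (0, 80)],
    "img2": [(60, 40), (160, 40), (160, 120), (60, 120)],
    "img3": [(500, 500), (600, 500), (600, 580), (500, 580)],
})
print(pairs)   # [('img1', 'img2')] -- img3 shares no overlap
```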
Software for MR image overlay guided needle insertions: the clinical translation process
NASA Astrophysics Data System (ADS)
Ungi, Tamas; U-Thainual, Paweena; Fritz, Jan; Iordachita, Iulian I.; Flammang, Aaron J.; Carrino, John A.; Fichtinger, Gabor
2013-03-01
PURPOSE: Needle guidance software using augmented reality image overlay was translated from the experimental phase to support preclinical and clinical studies. Major functional and structural changes were needed to meet clinical requirements. We present the process applied to fulfill these requirements, and selected features that may be applied in the translational phase of other image-guided surgical navigation systems. METHODS: We used an agile software development process for rapid adaptation to unforeseen clinical requests. The process is based on iterations of operating room test sessions, feedback discussions, and software development sprints. The open-source application framework of 3D Slicer and the NA-MIC kit provided sufficient flexibility and stable software foundations for this work. RESULTS: All requirements were addressed in a process with 19 operating room test iterations. Most features developed in this phase were related to workflow simplification and operator feedback. CONCLUSION: Efficient and affordable modifications were facilitated by an open source application framework and frequent clinical feedback sessions. Results of cadaver experiments show that software requirements were successfully solved after a limited number of operating room tests.
The Gemini Planet Imager: integration and status
NASA Astrophysics Data System (ADS)
Macintosh, Bruce A.; Anthony, Andre; Atwood, Jennifer; Barriga, Nicolas; Bauman, Brian; Caputa, Kris; Chilcote, Jeffery; Dillon, Daren; Doyon, René; Dunn, Jennifer; Gavel, Donald T.; Galvez, Ramon; Goodsell, Stephen J.; Graham, James R.; Hartung, Markus; Isaacs, Joshua; Kerley, Dan; Konopacky, Quinn; Labrie, Kathleen; Larkin, James E.; Maire, Jerome; Marois, Christian; Millar-Blanchaer, Max; Nunez, Arturo; Oppenheimer, Ben R.; Palmer, David W.; Pazder, John; Perrin, Marshall; Poyneer, Lisa A.; Quirez, Carlos; Rantakyro, Frederik; Reshtov, Vlad; Saddlemyer, Leslie; Sadakuni, Naru; Savransky, Dmitry; Sivaramakrishnan, Anand; Smith, Malcolm; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Weiss, Jason; Wiktorowicz, Sloane
2012-09-01
The Gemini Planet Imager is a next-generation instrument for the direct detection and characterization of young warm exoplanets, designed to be an order of magnitude more sensitive than existing facilities. It combines a 1700-actuator adaptive optics system, an apodized-pupil Lyot coronagraph, a precision interferometric infrared wavefront sensor, and an integral field spectrograph. All hardware and software subsystems are now complete and undergoing integration and test at UC Santa Cruz. We will present test results on each subsystem and the results of end-to-end testing. In laboratory testing, GPI has achieved a raw contrast (without post-processing) of 10^-6 (5σ) at 0.4", and with multiwavelength speckle suppression, 2×10^-7 at the same separation.
Practical guide for implementing hybrid PET/MR clinical service: lessons learned from our experience
Parikh, Nainesh; Friedman, Kent P.; Shah, Shetal N.; Chandarana, Hersh
2015-01-01
Positron emission tomography (PET) and magnetic resonance imaging, until recently, have been performed on separate PET and MR systems with varying temporal delay between the two acquisitions. The interpretation of these two separately acquired studies requires cognitive fusion by radiologists/nuclear medicine physicians or dedicated and challenging post-processing. Recent advances in hardware and software with introduction of hybrid PET/MR systems have made it possible to acquire the PET and MR images simultaneously or near simultaneously. This review article serves as a road-map for clinical implementation of hybrid PET/MR systems and briefly discusses hardware systems, the personnel needs, safety and quality issues, and reimbursement topics based on experience at NYU Langone Medical Center and Cleveland Clinic. PMID:25985966
The GPM Ground Validation Program: Pre to Post-Launch
NASA Astrophysics Data System (ADS)
Petersen, W. A.
2014-12-01
NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, types and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one has been conducted post-launch. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from the pre- to the post-launch era and highlight early evaluations of GPM satellite datasets.
Documentation of operational protocol for the use of MAMA software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwartz, Daniel S.
2016-01-21
Image analysis of Scanning Electron Microscope (SEM) micrographs is a complex process that can vary significantly between analysts. The factors causing the variation are numerous, and the purpose of Task 2b is to develop and test a set of protocols designed to minimize variation in image analysis between different analysts and laboratories, specifically using the MAMA software package, Version 2.1. The protocols were designed to be "minimally invasive", so that expert SEM operators will not be overly constrained in the way they analyze particle samples. The protocols will be tested using a round-robin approach where results from expert SEM users at Los Alamos National Laboratory, Lawrence Livermore National Laboratory, Pacific Northwest National Laboratory, Savannah River National Laboratory, and the National Institute of Standards and Testing will be compared. The variation of the results will be used to quantify uncertainty in the particle image analysis process. The round-robin exercise will proceed with 3 levels of rigor, each with their own set of protocols, as described below in Tasks 2b.1, 2b.2, and 2b.3. The uncertainty will be developed using NIST standard reference material SRM 1984 "Thermal Spray Powder - Particle Size Distribution, Tungsten Carbide/Cobalt (Acicular)" [Reference 1]. Full details are available in the Certificate of Analysis, posted on the NIST website (http://www.nist.gov/srm/).
NASA Technical Reports Server (NTRS)
Vu, Duc; Sandor, Michael; Agarwal, Shri
2005-01-01
CSAM Metrology Software Tool (CMeST) is a computer program for analysis of false-color CSAM images of plastic-encapsulated microcircuits. (CSAM signifies C-mode scanning acoustic microscopy.) The colors in the images indicate areas of delamination within the plastic packages. Heretofore, the images have been interpreted by human examiners. Hence, interpretations have not been entirely consistent and objective. CMeST processes the color information in image-data files to detect areas of delamination without incurring inconsistencies of subjective judgement. CMeST can be used to create a database of baseline images of packages acquired at given times for comparison with images of the same packages acquired at later times. Any area within an image can be selected for analysis, which can include examination of different delamination types by location. CMeST can also be used to perform statistical analyses of image data. Results of analyses are available in a spreadsheet format for further processing. The results can be exported to any data-base-processing software.
NASA Astrophysics Data System (ADS)
Dostálová, Alena; Naeimi, Vahid; Wagner, Wolfgang; Elefante, Stefano; Cao, Senmao; Persson, Henrik
2016-10-01
One of the major advantages of Sentinel-1 data is the capability to provide very high spatio-temporal coverage, allowing the mapping of large areas as well as the creation of dense time-series of Sentinel-1 acquisitions. The SGRT software developed at TU Wien aims at automated processing of Sentinel-1 data for global and regional products. The first step of the processing consists of geocoding the Sentinel-1 data with the help of the S1TBX software and resampling them to a common grid. These resampled images serve as input for the product derivation. Thus, it is very important to select the most reliable processing settings and assess the geocoding uncertainty for both backscatter and projected local incidence angle images. Within this study, a selection of Sentinel-1 acquisitions over 3 test areas in Europe was processed manually in the S1TBX software, testing multiple software versions, processing settings and digital elevation models (DEMs), and the accuracy of the resulting geocoded images was assessed. Secondly, all available Sentinel-1 data over the areas were processed using the selected settings and a detailed quality check was performed. Overall, a strong influence of the DEM used on the geocoding quality was confirmed, with differences up to 80 meters in areas with higher terrain variations. In flat areas, the geocoding accuracy of the backscatter images was overall good, with observed shifts between 0 and 30 m. Larger systematic shifts were identified in the case of projected local incidence angle images. These results encourage the automated processing of large volumes of Sentinel-1 data.
PACS-based interface for 3D anatomical structure visualization and surgical planning
NASA Astrophysics Data System (ADS)
Koehl, Christophe; Soler, Luc; Marescaux, Jacques
2002-05-01
The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. An easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of and interaction with virtual organs delineated from CT scans or MRI. This software provides 3D real-time surface rendering of anatomical structures, an accurate evaluation of volumes and distances, and improved radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology: it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step in the future development of augmented reality and surgical simulation systems.
Spot restoration for GPR image post-processing
Paglieroni, David W; Beer, N. Reginald
2014-05-20
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
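The final peak-identification step in the patent can be illustrated with a standard local-maximum filter over the post-processed energy image. The neighborhood size and the mean-plus-three-sigma threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(energy, size=9, threshold=None):
    """Return (row, col) positions of local energy maxima above a threshold.

    energy: 2-D post-processed SAR energy image; size: neighborhood side
    length in pixels; threshold: assumed noise floor if not supplied.
    """
    if threshold is None:
        threshold = energy.mean() + 3.0 * energy.std()   # assumed noise model
    local_max = energy == maximum_filter(energy, size=size)
    return np.argwhere(local_max & (energy > threshold))

peaks = detect_peaks(np.random.rand(128, 128))
print(len(peaks), "candidate subsurface objects")
```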
Truong, Quynh A.; Thai, Wai-ee; Wai, Bryan; Cordaro, Kevin; Cheng, Teresa; Beaudoin, Jonathan; Xiong, Guanglei; Cheung, Jim W.; Altman, Robert; Min, James K.; Singh, Jagmeet P.; Barrett, Conor D.; Danik, Stephan
2015-01-01
Background Myocardial scar is a substrate for ventricular tachycardia and sudden cardiac death. Late enhancement computed tomography (CT) imaging can detect scar, but it remains unclear whether the newer late enhancement dual-energy (LE-DECT) acquisition has benefit over standard single-energy late enhancement (LE-CT). Objective We aim to compare late enhancement CT using the newer LE-DECT acquisition and single-energy LE-CT acquisitions to pathology and electroanatomical map (EAM) in an experimental chronic myocardial infarction (MI) porcine study. Methods In 8 chronic MI pigs (59±5 kg), we performed dual-source CT, EAM, and pathology. For CT imaging, we performed 3 acquisitions at 10 minutes post-contrast: LE-CT 80 kV, LE-CT 100 kV, and LE-DECT with two post-processing software settings. Results Of the sequences, LE-CT 100 kV provided the best contrast-to-noise ratio (all p≤0.03) and the best correlation to pathology for scar (ρ=0.88). While LE-DECT overestimated scar (both p=0.02), LE-CT images did not (both p=0.08). On a segment basis (n=136), all CT sequences had high specificity (87–93%) and modest sensitivity (50–67%), with LE-CT 100 kV having the highest specificity of 93% for scar detection compared to pathology and agreement with EAM (κ 0.69). Conclusions Standard single-energy LE-CT, particularly at 100 kV, matched pathology and EAM better than dual-energy LE-DECT for scar detection. Larger human trials, as well as more technical studies that optimize energy settings with newer hardware and software, are warranted. PMID:25977115
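The sequence comparison above hinges on the contrast-to-noise ratio; as a rough illustration, the sketch below computes CNR between scar and remote-myocardium regions of interest. The ROI values and the use of an air ROI for the noise estimate are illustrative assumptions, not the study's protocol.

```python
# Minimal sketch: contrast-to-noise ratio between two tissue ROIs.
import numpy as np

def cnr(scar_roi, myocardium_roi, noise_roi):
    """CNR = (mean_scar - mean_myocardium) / SD of background noise."""
    return (scar_roi.mean() - myocardium_roi.mean()) / noise_roi.std()

rng = np.random.default_rng(1)
scar = rng.normal(180, 15, 500)    # hypothetical HU samples
myo = rng.normal(110, 15, 500)
air = rng.normal(0, 12, 500)
print(f"CNR = {cnr(scar, myo, air):.2f}")
```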
CFHT ESPaDOnS data processing and calibration pipeline: Upena and OPERA (optical spectropolarimetry)
NASA Astrophysics Data System (ADS)
Martioli, Eder; Teeple, D.; Manset, Nadine
2011-03-01
CFHT is responsible for processing raw ESPaDOnS images, removing instrument-related artifacts, and delivering science-ready data to the PIs. Here we describe the Upena pipeline, the software used to reduce the echelle spectro-polarimetric data obtained with the ESPaDOnS instrument. Upena is an automated pipeline that performs calibration and reduction of raw images. It can perform both real-time reduction on an image-by-image basis and a complete reduction after the observing night. Upena produces polarization and intensity spectra in FITS format. The pipeline is designed to perform parallel computing for improved speed, ensuring that the final products are delivered to the PIs before noon HST after each night of observations. We also present the OPERA project, an open-source pipeline to reduce ESPaDOnS data that will be developed as a collaborative effort between CFHT and the scientific community. OPERA will match the core capabilities of Upena and, in addition, will be open-source, flexible and extensible.
NASA Astrophysics Data System (ADS)
Wi, S.; Ray, P. A.; Brown, C.
2015-12-01
A software package developed to facilitate building distributed hydrologic models in a modular modeling system is presented. The software package provides a user-friendly graphical user interface that eases its practical use in water resources-related research and practice. The modular modeling system organizes the options available to users when assembling models according to the stages of the hydrological cycle, such as potential evapotranspiration, soil moisture accounting, and snow/glacier melting processes. The software is intended to be a comprehensive tool that simplifies the tasks of developing, calibrating, validating, and using hydrologic models through intelligent automation that minimizes user effort and reduces opportunities for error. Processes automated so far include the definition of system boundaries (i.e., watershed delineation), climate and geographical input generation, and parameter calibration. Built-in post-processing toolkits greatly improve the functionality of the software as a decision support tool for water resources system management and planning. Example post-processing toolkits enable streamflow simulation at ungauged sites with predefined model parameters, and perform climate change risk assessment by means of the decision scaling approach. The software is validated through application to watersheds representing a variety of hydrologic regimes.
Image retrieval and processing system version 2.0 development work
NASA Technical Reports Server (NTRS)
Slavney, Susan H.; Guinness, Edward A.
1991-01-01
The Image Retrieval and Processing System (IRPS) is a software package developed at Washington University and used by the NASA Regional Planetary Image Facilities (RPIF's). The IRPS combines data base management and image processing components to allow the user to examine catalogs of image data, locate the data of interest, and perform radiometric and geometric calibration of the data in preparation for analysis. Version 1.0 of IRPS was completed in Aug. 1989 and was installed at several RPIF's. Other RPIF's use remote logins via NASA Science Internet to access IRPS at Washington University. Work has begun on designing and populating a catalog of Magellan image products that will be part of IRPS Version 2.0, planned for release by the end of calendar year 1991. With this catalog, a user will be able to search by orbit and by location for Magellan Basic Image Data Records (BIDR's), Mosaicked Image Data Records (MIDR's), and Altimetry-Radiometry Composite Data Records (ARCDR's). The catalog will include the Magellan CD-ROM volume, directory, and file name for each data product. The image processing component of IRPS is based on the Planetary Image Cartography Software (PICS) developed by the U.S. Geological Survey, Flagstaff, Arizona. To augment PICS capabilities, a set of image processing programs was developed that is compatible with PICS-format images. This software includes general-purpose functions that PICS does not have, analysis and utility programs for specific data sets, and programs from other sources that were modified to work with PICS images. Some of this software will be integrated into the Version 2.0 release of IRPS. A table is presented that lists the programs with a brief functional description of each.
Fiji: an open-source platform for biological-image analysis.
Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert
2012-06-28
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
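As a hedged illustration of the rapid prototyping Fiji enables, the snippet below is a minimal Jython (Python-syntax) script of the kind Fiji's built-in script editor runs: it opens a sample image and applies ImageJ commands through the scripting API. It assumes a running Fiji instance, and the sample-image URL is an assumption based on ImageJ's commonly used example image.

```python
# Minimal Jython sketch for Fiji's script editor: any ImageJ command is
# scriptable via the IJ API. Requires Fiji; the image URL is an assumption.
from ij import IJ

imp = IJ.openImage("https://imagej.net/images/blobs.gif")
IJ.run(imp, "Gaussian Blur...", "sigma=2")   # standard ImageJ filter command
IJ.run(imp, "Invert", "")
imp.show()
```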
A service protocol for post-processing of medical images on the mobile device
NASA Astrophysics Data System (ADS)
He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian
2014-03-01
With growing computing capability and display size, mobile devices are increasingly used to help clinicians view patient information and medical images anywhere and anytime. However, transferring medical images with large data sizes from a picture archiving and communication system to a mobile client is difficult and time-consuming, since wireless networks are unstable and bandwidth-limited. Besides, limited by computing capability, memory and battery endurance, it is hard to provide a satisfactory quality of experience for radiologists handling complex post-processing of medical images on a mobile device, such as real-time direct interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement the post-processing of medical images instead of local rendering, and a service protocol is developed to standardize the communication between the render server and the mobile client. In order to make mobile devices on different platforms able to access post-processing of medical images, the protocol is described in Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value retrieval) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol. This instance allows a mobile device to access medical image post-processing services on the render server via a client application or a web page.
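Purely as an illustration of what such an XML-described request might look like, the sketch below builds a hypothetical 3D post-processing message. All element and attribute names here are invented for illustration; the paper defines its own schema.

```python
# Illustrative sketch only: an XML request for a 3D post-processing
# operation (maximum intensity projection). Element names are hypothetical.
import xml.etree.ElementTree as ET

request = ET.Element("Request", type="3DPostProcessing")
ET.SubElement(request, "Session", token="<auth-token>")          # placeholder
ET.SubElement(request, "Series", uid="1.2.840.0.0.0")            # hypothetical UID
op = ET.SubElement(request, "Operation", name="MIP")
ET.SubElement(op, "Param", key="azimuth", value="45")
ET.SubElement(op, "Param", key="elevation", value="30")

print(ET.tostring(request, encoding="unicode"))
```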
Mirion--a software package for automatic processing of mass spectrometric images.
Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B
2013-08-01
Mass spectrometric imaging (MSI) techniques are of growing interest for the life sciences. In recent years, the development of new instruments employing ion sources tailored for spatial scanning has allowed the acquisition of large data sets. Subsequent data processing, however, is still a bottleneck in the analytical process, as manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds has turned out to be the most appropriate method to visualize the results of such scans, as humans interpret images faster and more easily than plain numbers. Image generation is thus a complex and time-consuming, yet very effective, task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most mass spectrometer vendors. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images enables direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
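To illustrate the imzML-based workflow (not Mirion itself), the sketch below builds an ion distribution image with the open pyimzML parser, assuming that package is installed. The file name, target m/z and tolerance are placeholders.

```python
# Minimal sketch: ion image from an imzML file via the open pyimzML parser.
# File name, m/z value and tolerance are illustrative assumptions.
from pyimzml.ImzMLParser import ImzMLParser, getionimage

parser = ImzMLParser("tissue_scan.imzML")          # hypothetical data set
# Sum intensities within +/- 0.1 Da of m/z 760.59 at every pixel
ion_image = getionimage(parser, mz_value=760.59, tol=0.1)
print(ion_image.shape)
```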
Coincident Extraction of Line Objects from Stereo Image Pairs.
1983-09-01
[Extraction fragment: only outline entries and truncated sentences survive.] Outline: 4.4.3 Reconstruction of intersections; 4.5 Final result processing; 5. Presentation of the results; 5.1 FIM image processing system; 5.2 Extraction results. The surviving text notes that the existing software system had to be modified and extended considerably to achieve this goal, and that processing the full 8000 pixels of each image without explicit loading of subimages could not yet be performed due to computer system software problems.
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of the analysis. The proposed goal-oriented strategy should help biologists better apprehend image analysis in the context of their research and should allow them to interact efficiently with image processing specialists.
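As a concrete, hedged example of the kind of pipeline such a design process produces, the sketch below chains a typical denoise-segment-measure sequence. The parameters are illustrative assumptions; in the paper's spirit they would be chosen backwards from the target measurements.

```python
# Minimal sketch of an assembled bioimage analysis pipeline:
# denoise -> threshold -> clean up -> label -> measure.
import numpy as np
from skimage import filters, measure, morphology

def count_objects(image):
    smoothed = filters.gaussian(image, sigma=2)            # denoise
    binary = smoothed > filters.threshold_otsu(smoothed)   # segment
    binary = morphology.remove_small_objects(binary, 64)   # remove specks
    labels = measure.label(binary)                         # identify objects
    props = measure.regionprops(labels)                    # quantify
    return len(props), [p.area for p in props]

rng = np.random.default_rng(2)
fake = rng.random((256, 256))                              # stand-in image
print(count_objects(fake)[0])
```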
A study of quantification of aortic compliance in mice using radial acquisition phase contrast MRI
NASA Astrophysics Data System (ADS)
Zhao, Xuandong
Spatiotemporal changes in blood flow velocity measured using phase contrast magnetic resonance imaging (MRI) can be used to quantify pulse wave velocity (PWV) and wall shear stress (WSS), well-known indices of vessel compliance. A study was conducted to measure the PWV in the aortic arch of young healthy children using conventional phase contrast MRI and a post-processing algorithm that automatically tracks the peak velocity in phase contrast images. It is shown that the PWV calculated using peak velocity-time data has less variability than that calculated using mean velocity and flow. Conventional MR data acquisition techniques lack both the spatial and temporal resolution needed to accurately calculate PWV and WSS in in vivo studies using transgenic animal models of arterial diseases. Radial k-space acquisition can improve both spatial and temporal resolution. A major part of this thesis was devoted to developing technology for radial phase contrast magnetic resonance (RPCMR) cine imaging on a 7 Tesla animal scanner. A pulse sequence with asymmetric radial k-space acquisition was designed and implemented. The software developed to reconstruct the RPCMR images includes gridding, density compensation and centering of k-space, which corrects the image ghosting introduced by hardware response time. Image processing software was developed to automatically segment the vessel lumen and correct for phase offset due to eddy currents. Finally, in vivo and ex vivo aortic compliance measurements were conducted in a well-established mouse model of atherosclerosis, the apolipoprotein E-knockout (ApoE-KO) mouse. Using the RPCMR technique, a significantly higher PWV as well as a higher average WSS was detected in 9-month-old ApoE-KO mice compared to wild-type mice. A follow-up ex vivo test of tissue elasticity confirmed the impaired distensibility of the aortic arteries of ApoE-KO mice.
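As a hedged sketch of the gridding step in such a reconstruction (not the thesis code), the snippet below performs crude bin-average gridding of radial k-space samples onto a Cartesian grid followed by an inverse FFT. Production reconstructions use convolution gridding (e.g. with a Kaiser-Bessel kernel) and explicit ramp density compensation; the trajectory here is a toy example.

```python
# Minimal sketch: bin-average gridding of radial k-space samples
# (|k| <= 0.5 cycles/pixel) onto an N x N grid, then inverse FFT.
import numpy as np

def grid_radial_kspace(kx, ky, data, N):
    grid = np.zeros((N, N), dtype=complex)
    count = np.zeros((N, N))
    ix = np.clip(np.round((kx + 0.5) * (N - 1)).astype(int), 0, N - 1)
    iy = np.clip(np.round((ky + 0.5) * (N - 1)).astype(int), 0, N - 1)
    np.add.at(grid, (iy, ix), data)          # accumulate samples per cell
    np.add.at(count, (iy, ix), 1.0)
    grid[count > 0] /= count[count > 0]      # average duplicates
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))

# Toy demo: a point object (constant k-space signal) on 64 radial spokes
theta = np.linspace(0, np.pi, 64, endpoint=False)
r = np.linspace(-0.5, 0.5, 128)
kx = (r[None, :] * np.cos(theta)[:, None]).ravel()
ky = (r[None, :] * np.sin(theta)[:, None]).ravel()
image = grid_radial_kspace(kx, ky, np.ones(kx.size, complex), 128)
```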
Platform for Postprocessing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don
2008-01-01
Taking advantage of the similarities that exist among all waveform-based non-destructive evaluation (NDE) methods, a common software platform has been developed containing multiple signal- and image-processing techniques for waveforms and images. The NASA NDE Signal and Image Processing software has been developed using the latest versions of LabVIEW and its associated Advanced Signal Processing and Vision Toolkits. The software is usable on a PC with Windows XP or Windows Vista. The software has been designed with a commercial-grade interface in which two main windows, the Waveform Window and the Image Window, are displayed if the user chooses a waveform file to display. Within these two main windows, most actions are chosen through logically conceived run-time menus. The Waveform Window has plots for both the raw time-domain waves and their frequency-domain transformations (fast Fourier transform and power spectral density). The Image Window shows the C-scan image formed from information in the time-domain waveform (such as peak amplitude) or its frequency-domain transformation at each scan location. The user also has the ability to open an image, a series of images, or a simple set of X-Y paired data in text format. Each of the Waveform and Image Windows contains menus from which to perform many user actions. An option exists to use raw waves obtained directly from a scan, or waves after deconvolution if the system wave response is provided. Two types of deconvolution, time-based subtraction or inverse filtering, can be performed to arrive at a deconvolved wave set. Additionally, the menu on the Waveform Window allows preprocessing of waveforms prior to image formation, scaling and display of waveforms, formation of different types of images (including non-standard types such as velocity), gating of portions of waves prior to image formation, and several other miscellaneous and specialized operations. The menu available on the Image Window allows many further image processing and analysis operations, some of which are found in commercially available image-processing software programs (such as Adobe Photoshop), and some that are not (removing outliers, B-scan information, region-of-interest analysis, line profiles, and precision feature measurements).
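As a hedged sketch of one of the two deconvolution types mentioned, the snippet below applies a regularized inverse filter in the frequency domain to remove the system wave response from a measured waveform. The regularization constant and the synthetic signals are illustrative assumptions.

```python
# Minimal sketch: inverse-filter deconvolution of an NDE waveform by the
# system wave response, regularized to avoid dividing by near-zero spectra.
import numpy as np

def inverse_filter_deconvolve(measured, system_response, eps=1e-3):
    n = len(measured)
    M = np.fft.rfft(measured, n)
    S = np.fft.rfft(system_response, n)
    D = M * np.conj(S) / (np.abs(S) ** 2 + eps)   # regularized division
    return np.fft.irfft(D, n)

t = np.linspace(0.0, 1e-6, 512)
system = np.exp(-((t - 2e-7) ** 2) / (2 * (2e-8) ** 2))   # toy system response
measured = np.convolve(system, system, mode="same")        # toy measurement
recovered = inverse_filter_deconvolve(measured, system)
```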
Pertuz, Said; McDonald, Elizabeth S.; Weinstein, Susan P.; Conant, Emily F.
2016-01-01
Purpose To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board–approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration–cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging–based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. © RSNA, 2015 Online supplemental material is available for this article. PMID:26491909
Digital image processing of bone - Problems and potentials
NASA Technical Reports Server (NTRS)
Morey, E. R.; Wronski, T. J.
1980-01-01
The development of a digital image processing system for bone histomorphometry and fluorescent marker monitoring is discussed. The system in question is capable of making measurements of UV or light microscope features on a video screen with either video or computer-generated images, and comprises a microscope, low-light-level video camera, video digitizer and display terminal, color monitor, and PDP 11/34 computer. Capabilities demonstrated in the analysis of an undecalcified rat tibia include the measurement of perimeter and total bone area, and the generation of microscope images, false color images, digitized images and contoured images for further analysis. Software development will be based on an existing software library, specifically the mini-VICAR system developed at JPL. It is noted that the potentials of the system in terms of speed and reliability far exceed any problems associated with hardware and software development.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azevedo, S.G.; Fitch, J.P.
1987-10-21
Conventional software interfaces that use imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal-processing software (SIG). As an alternative, "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal LISP Environment (ISLE) is an example of an interpreted functional language interface based on Common LISP. Advantages of ISLE include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence (AI) software. 10 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azevedo, S.G.; Fitch, J.P.
1987-05-01
Conventional software interfaces which utilize imperative computer commands or menu interactions are often restrictive environments when used for researching new algorithms or analyzing processed experimental data. We found this to be true with current signal processing software (SIG). Existing "functional language" interfaces provide features such as command nesting for a more natural interaction with the data. The Image and Signal Lisp Environment (ISLE) will be discussed as an example of an interpreted functional language interface based on Common LISP. Additional benefits include multidimensional and multiple data-type independence through dispatching functions, dynamic loading of new functions, and connections to artificial intelligence software.
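The "dispatching functions" idea in the two ISLE records above has a direct modern analogue; the sketch below shows it in Python (not ISLE's LISP): one generic operation whose implementation is selected by the argument's type, with nested calls composing naturally. The smoothing operation itself is an invented example.

```python
# A modern analogue of type-dispatching functions for data-type
# independence: the same generic call works for arrays and plain lists.
from functools import singledispatch
import numpy as np

@singledispatch
def smooth(data):
    raise TypeError(f"no smoothing method for {type(data)}")

@smooth.register
def _(data: np.ndarray):                 # signals/images as arrays
    kernel = np.ones(3) / 3.0
    if data.ndim == 1:
        return np.convolve(data, kernel, mode="same")
    return data                          # (2D case elided for brevity)

@smooth.register
def _(data: list):                       # plain Python lists
    return smooth(np.asarray(data, dtype=float)).tolist()

print(smooth([1.0, 4.0, 1.0, 4.0]))      # nested dispatch: list -> array
```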
ELAS: A powerful, general purpose image processing package
NASA Technical Reports Server (NTRS)
Walters, David; Rickman, Douglas
1991-01-01
ELAS is a software package which has been utilized as an image processing tool for more than a decade. It has been the source of several commercial packages. Now available on UNIX workstations, it is a very powerful, flexible set of software. Applications at Stennis Space Center have spanned a very wide range of areas, including medicine, forestry, geology, ecological modeling, and sonar imagery. It remains one of the most powerful image processing packages available, either commercially or in the public domain.
Spatially assisted down-track median filter for GPR image post-processing
Paglieroni, David W; Beer, N Reginald
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
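In the spirit of the title (and without claiming to reproduce the patented spatial guidance), the sketch below applies a median filter elongated along the down-track axis of a GPR image and subtracts it to suppress slowly varying clutter. The footprint shape is an illustrative assumption.

```python
# Minimal sketch: down-track median filtering of a GPR image frame.
import numpy as np
from scipy.ndimage import median_filter

def down_track_median(image, length=15):
    # axis 0 = down-track direction; 1-pixel width cross-track
    return median_filter(image, size=(length, 1))

rng = np.random.default_rng(3)
gpr = rng.normal(size=(300, 64))
background = down_track_median(gpr)
detrended = gpr - background       # residual highlights compact targets
```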
Parallel Wavefront Analysis for a 4D Interferometer
NASA Technical Reports Server (NTRS)
Rao, Shanti R.
2011-01-01
This software provides a programming interface for automating data collection with a PhaseCam interferometer from 4D Technology, and for distributing the image-processing algorithm across a cluster of general-purpose computers. Multiple instances of 4Sight (4D Technology's proprietary software) run on a networked cluster of computers. Each connects to a single server (the controller) and waits for instructions. The controller directs the interferometer to acquire several images, then assigns each image to a different computer for processing. When the image processing is finished, the server directs one of the computers to collate and combine the processed images, saving the resulting measurement in a file on disk. The available software captures approximately 100 images and analyzes them immediately. This software separates the capture and analysis processes, so that analysis can be done at a different time and faster, by running the algorithm in parallel across several processors. The PhaseCam family of interferometers can measure an optical system in milliseconds, but it takes many seconds to process the data so that it is usable. In characterizing an adaptive optics system, like the next generation of astronomical observatories, thousands of measurements are required, and the processing time quickly becomes excessive. A programming interface distributes data processing for a PhaseCam interferometer across a Windows computing cluster. A scriptable controller program coordinates data acquisition from the interferometer, storage on networked hard disks, and parallel processing. Idle time of the interferometer is minimized. This architecture is implemented in Python and JavaScript, and may be altered to fit a customer's needs.
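As a hedged sketch of the controller/worker split described above (the real system drives the proprietary 4Sight software, which is replaced here by a toy phase-extraction step), the snippet below farms captured frames out to a process pool and collates the results.

```python
# Minimal sketch: distribute per-frame analysis across workers, then collate.
import numpy as np
from multiprocessing import Pool

def analyze_frame(path):
    """Placeholder per-image analysis; the real system runs 4Sight here."""
    frame = np.load(path)
    return np.unwrap(np.angle(frame))             # toy phase extraction

def collate(results):
    return np.mean(np.stack(results), axis=0)     # combine into one measurement

if __name__ == "__main__":
    # Stand-in captures; the real controller pulls frames from the PhaseCam.
    rng = np.random.default_rng(8)
    paths = []
    for i in range(8):
        path = f"capture_{i:03d}.npy"
        np.save(path, rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)))
        paths.append(path)

    with Pool() as pool:                          # workers = the cluster nodes
        processed = pool.map(analyze_frame, paths)
    print(collate(processed).shape)
```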
Feature extraction from multiple data sources using genetic programming
NASA Astrophysics Data System (ADS)
Szymanski, John J.; Brumby, Steven P.; Pope, Paul A.; Eads, Damian R.; Esch-Mosher, Diana M.; Galassi, Mark C.; Harvey, Neal R.; McCulloch, Hersey D.; Perkins, Simon J.; Porter, Reid B.; Theiler, James P.; Young, Aaron C.; Bloch, Jeffrey J.; David, Nancy A.
2002-08-01
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. We use the GENetic Imagery Exploitation (GENIE) software for this purpose, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land cover features including towns, wildfire burn scars, and forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szymanski, J. J.; Brumby, Steven P.; Pope, P. A.
Feature extraction from imagery is an important and long-standing problem in remote sensing. In this paper, we report on work using genetic programming to perform feature extraction simultaneously from multispectral and digital elevation model (DEM) data. The tool used is the GENetic Imagery Exploitation (GENIE) software, which produces image-processing software that inherently combines spatial and spectral processing. GENIE is particularly useful in exploratory studies of imagery, such as one often does in combining data from multiple sources. The user trains the software by painting the feature of interest with a simple graphical user interface. GENIE then uses genetic programming techniques to produce an image-processing pipeline. Here, we demonstrate evolution of image processing algorithms that extract a range of land-cover features including towns, grasslands, wild fire burn scars, and several types of forest. We use imagery from the DOE/NNSA Multispectral Thermal Imager (MTI) spacecraft, fused with USGS 1:24000 scale DEM data.
TDat: An Efficient Platform for Processing Petabyte-Scale Whole-Brain Volumetric Images.
Li, Yuxin; Gong, Hui; Yang, Xiaoquan; Yuan, Jing; Jiang, Tao; Li, Xiangning; Sun, Qingtao; Zhu, Dan; Wang, Zhenyu; Luo, Qingming; Li, Anan
2017-01-01
Three-dimensional imaging of whole mammalian brains at single-neuron resolution has generated terabyte (TB)- and even petabyte (PB)-sized datasets. Due to their size, processing these massive image datasets can be hindered by the computer hardware and software typically found in biological laboratories. To fill this gap, we have developed an efficient platform named TDat, which adopts a novel data reformatting strategy by reading cuboid data and employing parallel computing. In data reformatting, TDat is more efficient than any other software. In data accessing, we adopted parallelization to fully exploit the data transmission capability of the computer. We applied TDat to large-volume rigid registration and to neuron tracing in whole-brain data with single-neuron resolution, which has never been demonstrated in other studies. We also showed its compatibility with various computing platforms, image processing software and imaging systems.
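As a hedged illustration of cuboid-wise access to volumes larger than RAM (the strategy TDat's reformatting enables, though not its actual format), the sketch below memory-maps a raw volume and reads one block. The file name, dtype and toy shape are assumptions; real whole-brain datasets run to TB-PB, but the access pattern is the same.

```python
# Minimal sketch: read one cuboid from a memory-mapped volume without
# loading the whole file; the OS pages in only the bytes the slice covers.
import numpy as np

shape = (512, 512, 512)   # (z, y, x) voxels; toy size, real data is TB-PB
volume = np.memmap("brain_block.raw", dtype=np.uint16, mode="w+", shape=shape)

def read_cuboid(vol, origin, size):
    z, y, x = origin
    dz, dy, dx = size
    return np.asarray(vol[z:z+dz, y:y+dy, x:x+dx])   # copies just the block

block = read_cuboid(volume, (128, 256, 64), (64, 64, 64))
print(block.shape, int(block.mean()))
```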
Correcting Concomitant Gradient Distortion in Microtesla Magnetic Resonance Imaging
NASA Astrophysics Data System (ADS)
Myers, Whittier
2005-03-01
Progress in ultra-low field magnetic resonance imaging (MRI) using an untuned gradiometer coupled to a Superconducting Quantum Interference Device (SQUID) has resulted in three-dimensional images with an in-plane resolution of 2 mm. Protons in samples up to 80 mm in size were prepolarized in a 100 mT field, manipulated by ˜100 μT/m gradients for image encoding, and detected by the SQUID in the ˜65 μT precession field. Maxwell's equations prohibit a unidirectional magnetic field gradient. While the additional concomitant gradients can be neglected in high-field MRI, they distort high-resolution images of large samples taken in microtesla precession fields. We propose two methods to mitigate such distortion: raising the precession field during image encoding, and software post-processing. Both approaches are demonstrated using computer simulations and MRI images. Simulations show that the combination of these techniques can correct the concomitant gradient distortion present in a 4-mm resolution image of an object the size of a human brain with a precession field of 50 μT. Supported by USDOE.
NASA Technical Reports Server (NTRS)
Kumar, P.; Lin, F. Y.; Vaishampayan, V.; Farvardin, N.
1986-01-01
A complete documentation of the software developed in the Communication and Signal Processing Laboratory (CSPL) during the period of July 1985 to March 1986 is provided. Utility programs and subroutines that were developed for a user-friendly image and speech processing environment are described. Additional programs for data compression of image and speech type signals are included. Also, programs for zero-memory and block transform quantization in the presence of channel noise are described. Finally, several routines for simulating the performance of image compression algorithms are included.
Predictive images of postoperative levator resection outcome using image processing software.
Mawatari, Yuki; Fukushima, Mikiko
2016-01-01
This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller's muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery.
Predictive images of postoperative levator resection outcome using image processing software
Mawatari, Yuki; Fukushima, Mikiko
2016-01-01
Purpose This study aims to evaluate the efficacy of processed images to predict postoperative appearance following levator resection. Methods Analysis involved 109 eyes from 65 patients with blepharoptosis who underwent advancement of levator aponeurosis and Müller’s muscle complex (levator resection). Predictive images were prepared from preoperative photographs using the image processing software (Adobe Photoshop®). Images of selected eyes were digitally enlarged in an appropriate manner and shown to patients prior to surgery. Results Approximately 1 month postoperatively, we surveyed our patients using questionnaires. Fifty-six patients (89.2%) were satisfied with their postoperative appearances, and 55 patients (84.8%) positively responded to the usefulness of processed images to predict postoperative appearance. Conclusion Showing processed images that predict postoperative appearance to patients prior to blepharoptosis surgery can be useful for those patients concerned with their postoperative appearance. This approach may serve as a useful tool to simulate blepharoptosis surgery. PMID:27757008
Non-linear Post Processing Image Enhancement
NASA Technical Reports Server (NTRS)
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post-processing, based on the feedforward neural network topology, is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean-square non-linear filter, showing examples of the high-frequency recovery, and the statistical properties of the filter are given.
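As a hedged sketch of the concept (not the paper's trained filter), the snippet below applies a tiny feedforward network patch-wise to an image. The weights are untrained random placeholders; in the paper's setting they would be trained to restore high frequencies lost to JPEG compression.

```python
# Minimal sketch: a 3x3-patch feedforward network used as a non-linear
# post-processing filter. Weights are untrained placeholders.
import numpy as np

rng = np.random.default_rng(4)
W1, b1 = rng.normal(scale=0.1, size=(16, 9)), np.zeros(16)   # patch -> hidden
W2, b2 = rng.normal(scale=0.1, size=(1, 16)), np.zeros(1)    # hidden -> pixel

def nn_filter(image):
    out = image.copy()
    padded = np.pad(image, 1, mode="edge")
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            patch = padded[i:i + 3, j:j + 3].ravel()
            hidden = np.tanh(W1 @ patch + b1)    # the non-linearity is the point
            out[i, j] = (W2 @ hidden + b2)[0]
    return out

print(nn_filter(rng.random((32, 32))).shape)
```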
A MultiDiscipline Approach to Digitizing Historic Seismograms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Andrew
2016-04-07
Retriever Technology has developed, and made available free of charge, a seismogram digitization software package called SKATE (Seismogram Kit for Automatic Trace Extraction). We have developed an extensive set of algorithms that process seismogram image files, provide editing tools, and output time series data. The software is available online, free of charge, at seismo.redfish.com. To demonstrate the speed and cost effectiveness of the software, we have processed over 30,000 images.
NASA Astrophysics Data System (ADS)
Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek
2009-09-01
High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: (1) critical information can be provided faster, and (2) more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
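As a rough illustration of the texture measure underlying PANTEX, the sketch below computes GLCM contrast over several displacements and directions for one image window and takes the minimum, a common way to obtain a rotation-robust built-up cue. The window content, distances, and the use of plain contrast are illustrative assumptions.

```python
# Minimal sketch: GLCM contrast over multiple directions for one window.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
window = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)  # quantized pixels

angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
glcm = graycomatrix(window, distances=[1, 2], angles=angles,
                    levels=64, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")   # shape: (n_distances, n_angles)
pantex_like = contrast.min()               # min over displacements/directions
print(pantex_like)
```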
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-03-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
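As a hedged sketch of the local contrast thresholding step (PHANTAST additionally corrects halo artifacts, which is omitted here), the snippet below marks pixels whose local standard deviation, normalized by the local mean, exceeds a threshold. The window size and threshold are illustrative assumptions.

```python
# Minimal sketch: local-contrast segmentation of a phase contrast image;
# the cell-covered fraction of the mask approximates confluency.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_segment(image, window=25, threshold=0.05):
    mean = uniform_filter(image, window)
    mean_sq = uniform_filter(image**2, window)
    std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
    contrast = std / (mean + 1e-9)
    return contrast > threshold

rng = np.random.default_rng(6)
pcm = rng.random((240, 320))               # stand-in PCM image
mask = local_contrast_segment(pcm)
print(f"confluency ~ {100 * mask.mean():.1f}%")
```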
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-01-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521
NASA Technical Reports Server (NTRS)
Buckner, J. D.; Council, H. W.; Edwards, T. R.
1974-01-01
Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.
NASA Astrophysics Data System (ADS)
Montanini, R.; Quattrocchi, A.; Piccolo, S. A.
2016-09-01
Alphanumeric marking is a common technique employed in industrial applications for the identification of products. However, the realised mark can deteriorate, either through extensive use or through deliberate deletion (e.g. removal of identification numbers from weapons or vehicles). Many destructive and non-destructive techniques have been attempted for recovery of the lost data, but all present several restrictions. In this paper, active infrared thermography has been exploited for the first time to assess its effectiveness in restoring paint-covered and abraded labels made by means of different manufacturing processes (laser, dot peen, impact, cold press and scribe). Optical excitation of the target surface was achieved using pulse (PT), lock-in (LT) and step heating (SHT) thermography. Raw infrared images were analysed with a dedicated image processing software tool originally developed in Matlab™, exploiting several methods, including thermographic signal reconstruction (TSR), guided filtering (GF), block guided filtering (BGF) and logarithmic transformation (LN). Proper processing of the raw infrared images resulted in superior contrast and enhanced readability. In particular, for deeply abraded marks, good outcomes were obtained by applying the logarithmic transformation to raw PT images and block guided filtering to raw phase LT images. With PT and LT it was relatively easy to recover labels covered by paint, with the latter providing better thermal contrast for all the examined targets. Step heating thermography, in contrast, never led to adequate label identification.
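As a rough illustration of the logarithmic transformation mentioned above, the sketch below applies it to a raw thermal frame to stretch low-amplitude contrast. The offset handling and normalization to [0, 1] for display are illustrative assumptions.

```python
# Minimal sketch: logarithmic contrast enhancement of a raw thermal frame.
import numpy as np

def log_enhance(frame):
    frame = frame - frame.min() + 1e-6            # keep the log argument positive
    out = np.log(frame)
    return (out - out.min()) / (out.max() - out.min())

rng = np.random.default_rng(7)
raw = 300.0 + rng.normal(scale=0.05, size=(256, 256))  # kelvin-scale toy frame
raw[100:110, 100:160] += 0.2                           # faint mark signature
enhanced = log_enhance(raw)
```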
Object-oriented design of medical imaging software.
Ligier, Y; Ratib, O; Logean, M; Girard, C; Perrier, R; Scherrer, J R
1994-01-01
A special software package for interactive display and manipulation of medical images was developed at the University Hospital of Geneva, as part of a hospital wide Picture Archiving and Communication System (PACS). This software package, called Osiris, was especially designed to be easily usable and adaptable to the needs of noncomputer-oriented physicians. The Osiris software has been developed to allow the visualization of medical images obtained from any imaging modality. It provides generic manipulation tools, processing tools, and analysis tools more specific to clinical applications. This software, based on an object-oriented paradigm, is portable and extensible. Osiris is available on two different operating systems: the Unix X-11/OSF-Motif based workstations, and the Macintosh family.
Documentation of procedures for textural/spatial pattern recognition techniques
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Bryant, W. F.
1976-01-01
A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatially post-processing a classified image can cut the classification error to one-half or one-third of its initial value, (2) spatially post-processing the classified image using combined spectral and textural features produces a resulting image with less error than post-processing a classified image using only spectral features, and (3) classification without spatial post-processing using the combined spectral-textural features tends to produce about the same error rate as classification without spatial post-processing using only spectral features.
Intelligence algorithms for autonomous navigation in a ground vehicle
NASA Astrophysics Data System (ADS)
Petkovsek, Steve; Shakya, Rahul; Shin, Young Ho; Gautam, Prasanna; Norton, Adam; Ahlgren, David J.
2012-01-01
This paper will discuss the approach to autonomous navigation used by "Q," an unmanned ground vehicle designed by the Trinity College Robot Study Team to participate in the Intelligent Ground Vehicle Competition (IGVC). For the 2011 competition, Q's intelligence was upgraded in several different areas, resulting in a more robust decision-making process and a more reliable system. In 2010-2011, the software of Q was modified to operate in a modular parallel manner, with all subtasks (including motor control, data acquisition from sensors, image processing, and intelligence) running simultaneously in separate software processes using the National Instruments (NI) LabVIEW programming language. This eliminated processor bottlenecks and increased flexibility in the software architecture. Though overall throughput was increased, the long runtime of the image processing process (150 ms) reduced the precision of Q's real-time decisions. Q had slow reaction times to obstacles detected only by its cameras, such as white lines, and was limited to slow speeds on the course. To address this issue, the image processing software was simplified and also pipelined to increase the image processing throughput and minimize the robot's reaction times. The vision software was also modified to detect differences in the texture of the ground, so that specific surfaces (such as ramps and sand pits) could be identified. While previous iterations of Q failed to detect white lines that were not on a grassy surface, this new software allowed Q to dynamically alter its image processing state so that appropriate thresholds could be applied to detect white lines in changing conditions. In order to maintain an acceptable target heading, a path history algorithm was used to deal with local obstacle fields and GPS waypoints were added to provide a global target heading. These modifications resulted in Q placing 5th in the autonomous challenge and 4th in the navigation challenge at IGVC.
NASA Astrophysics Data System (ADS)
Wang, Cuihuan; Kim, Leonard; Barnard, Nicola; Khan, Atif; Pierce, Mark C.
2016-02-01
Our long term goal is to develop a high-resolution imaging method for comprehensive assessment of tissue removed during lumpectomy procedures. By identifying regions of high-grade disease within the excised specimen, we aim to develop patient-specific post-operative radiation treatment regimens. We have assembled a benchtop spectral-domain optical coherence tomography (SD-OCT) system with 1320 nm center wavelength. Automated beam scanning enables "sub-volumes" spanning 5 mm x 5 mm x 2 mm (500 A-lines x 500 B-scans x 2 mm in depth) to be collected in under 15 seconds. A motorized sample positioning stage enables multiple sub-volumes to be acquired across an entire tissue specimen. Sub-volumes are rendered from individual B-scans in 3D Slicer software and en face (XY) images are extracted at specific depths. These images are then tiled together using MosaicJ software to produce a large-area en face view (up to 40 mm x 25 mm). After OCT imaging, specimens were sectioned and stained with H&E, allowing comparison between OCT image features and disease markers on histopathology. This manuscript describes the technical aspects of image acquisition and reconstruction, and reports an initial qualitative comparison between large-area en face OCT images and H&E-stained tissue sections. Future goals include developing image reconstruction algorithms for mapping an entire sample, and registering OCT image volumes with clinical CT and MRI images for post-operative treatment planning.
HST/WFC3: Understanding and Mitigating Radiation Damage Effects in the CCD Detectors
NASA Astrophysics Data System (ADS)
Baggett, S.; Anderson, J.; Sosey, M.; MacKenty, J.; Gosmeyer, C.; Noeske, K.; Gunning, H.; Bourque, M.
2015-09-01
At the heart of the Hubble Space Telescope Wide Field Camera 3 (HST/WFC3) UVIS channel resides a 4096x4096 pixel e2v CCD array. While these detectors are performing extremely well after more than 5 years in low-earth orbit, the cumulative effects of radiation damage cause a continual growth in the hot pixel population and a progressive loss in charge transfer efficiency (CTE) over time. The decline in CTE has two effects: (1) it reduces the detected source flux as the defects trap charge during readout and (2) it systematically shifts source centroids as the trapped charge is later released. The flux losses can be significant, particularly for faint sources in low background images. Several mitigation options exist, including target placement within the field of view, empirical stellar photometric corrections, post-flash mode and an empirical pixel-based CTE correction. The application of a post-flash has been remarkably effective in WFC3 at reducing CTE losses in low background images for a relatively small noise penalty. Currently all WFC3 observers are encouraged to post-flash images with low backgrounds. Another powerful option in mitigating CTE losses is the pixel-based CTE correction. Analogous to the CTE correction software currently in use in the HST Advanced Camera for Surveys (ACS) pipeline, the algorithm employs an empirical, observationally constrained model of how much charge is captured and released in order to reconstruct the image. Applied to images (with or without post-flash) after they are acquired, the software is currently available as a standalone routine. The correction will be incorporated into the standard WFC3 calibration pipeline.
Autonomous robot software development using simple software components
NASA Astrophysics Data System (ADS)
Burke, Thomas M.; Chung, Chan-Jin
2004-10-01
Developing software to control a sophisticated lane-following, obstacle-avoiding, autonomous robot can be demanding and beyond the capabilities of novice programmers - but it doesn't have to be. A creative software design utilizing only basic image processing and a little algebra, has been employed to control the LTU-AISSIG autonomous robot - a contestant in the 2004 Intelligent Ground Vehicle Competition (IGVC). This paper presents a software design equivalent to that used during the IGVC, but with much of the complexity removed. The result is an autonomous robot software design, that is robust, reliable, and can be implemented by programmers with a limited understanding of image processing. This design provides a solid basis for further work in autonomous robot software, as well as an interesting and achievable robotics project for students.
Mathematical methods in medical image processing
Angenent, Sigurd; Pichon, Eric; Tannenbaum, Allen
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
GPM Ground Validation: Pre to Post-Launch Era
NASA Astrophysics Data System (ADS)
Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George
2015-04-01
NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified worldwide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM Microwave Imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer Level-II data. When combined, MRMS and VN datasets enable a more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one has been conducted post-launch. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation summarizes the evolution of the NASA GPM GV program from the pre- to the post-launch era, with a focus on the evaluation of year-1 post-launch GPM satellite datasets, including Level II GPROF, DPR and Combined algorithms and Level III IMERG products.
Three-dimensional contrasted visualization of pancreas in rats using clinical MRI and CT scanners.
Yin, Ting; Coudyzer, Walter; Peeters, Ronald; Liu, Yewei; Cona, Marlein Miranda; Feng, Yuanbo; Xia, Qian; Yu, Jie; Jiang, Yansheng; Dymarkowski, Steven; Huang, Gang; Chen, Feng; Oyen, Raymond; Ni, Yicheng
2015-01-01
The purpose of this work was to visualize the pancreas in post-mortem rats with local contrast medium infusion by three-dimensional (3D) magnetic resonance imaging (MRI) and computed tomography (CT) using clinical imagers. A total of 16 Sprague Dawley rats of about 300 g were used for pancreas visualization. Following baseline imaging, a mixed contrast medium dye called GadoIodo-EB, containing optimized concentrations of Gd-DOTA, iomeprol and Evans blue, was infused into the distally obstructed common bile duct (CBD) for post-contrast imaging with 3.0 T MRI and 128-slice CT scanners. Images were post-processed with the MeVisLab software package. MRI findings were co-registered with CT scans and validated with histomorphology, with relative contrast ratios quantified. Without contrast enhancement, the pancreas was indiscernible. After infusion of the GadoIodo-EB solution, only the pancreatic region became outstandingly visible, as shown by 3D-rendered MRI and CT and proven by colored dissection and histological examinations. The measured volume of the pancreas averaged 1.12 ± 0.04 cm³ after standardization. Relative contrast ratios were 93.28 ± 34.61% and 26.45 ± 5.29% for MRI and CT, respectively. We have developed a multifunctional contrast medium dye that helps clearly visualize and delineate the rat pancreas in situ using clinical MRI and CT scanners. The topographic landmarks thus created with 3D demonstration may provide guidelines for future in vivo pancreatic MRI research in rodents. Copyright © 2015 John Wiley & Sons, Ltd.
48 CFR 242.503-2 - Post-award conference procedure.
Code of Federal Regulations, 2014 CFR
2014-10-01
... contracts that include the clause at 252.234-7004, Cost and Software Data Reporting, postaward conferences shall include a discussion of the contractor's standard cost and software data reporting (CSDR) process...
48 CFR 242.503-2 - Post-award conference procedure.
Code of Federal Regulations, 2012 CFR
2012-10-01
... contracts that include the clause at 252.234-7004, Cost and Software Data Reporting, postaward conferences shall include a discussion of the contractor's standard cost and software data reporting (CSDR) process...
48 CFR 242.503-2 - Post-award conference procedure.
Code of Federal Regulations, 2011 CFR
2011-10-01
... contracts that include the clause at 252.234-7004, Cost and Software Data Reporting, postaward conferences shall include a discussion of the contractor's standard cost and software data reporting (CSDR) process...
Imaging through Fog Using Polarization Imaging in the Visible/NIR/SWIR Spectrum
2017-01-11
Excerpt: one post-processing step completes the image dehazing. Figure 6: Basic architecture of the … Figure 7: Basic architecture of post-processing techniques to recover a dehazed image from a raw image.
NASA Astrophysics Data System (ADS)
Weihusen, Andreas; Ritter, Felix; Kröger, Tim; Preusser, Tobias; Zidowitz, Stephan; Peitgen, Heinz-Otto
2007-03-01
Image-guided radiofrequency (RF) ablation has become a significant part of clinical routine as a minimally invasive method for the treatment of focal liver malignancies. Medical imaging is used in all parts of the clinical workflow of an RF ablation, incorporating treatment planning, interventional targeting, and result assessment. This paper describes a software application designed to support the RF ablation workflow with the requirements of clinical routine in mind, such as easy user interaction and a high degree of robust, fast automatic processing, so that the physician does not spend too much time at the computer. The application therefore provides a collection of specialized image processing and visualization methods for treatment planning and result assessment. The algorithms are adapted to CT as well as to MR imaging. The planning support contains semi-automatic methods for the segmentation of liver tumors and the surrounding vascular system, as well as interactive virtual positioning of RF applicators and a concluding numerical estimation of the achievable heat distribution. The assessment of the ablation result is supported by segmentation of the coagulative necrosis and an interactive registration of pre- and post-interventional image data for the comparison of tumor and necrosis segmentation masks. An automatic quantification of surface distances is performed to verify the embedding of the tumor area into the thermal lesion area. The visualization methods support representations in the commonly used orthogonal 2D view as well as in 3D scenes.
The design of real time infrared image generation software based on Creator and Vega
NASA Astrophysics Data System (ADS)
Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu
2013-09-01
To meet the requirement for highly realistic, real-time dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for modeling infrared targets and backgrounds are presented, the flow chart of the development process for the real-time IR image generation software is given, and the functions of the TMM Tool, the MAT Tool, and the sensor module are explained; the real-time behavior of the software is also addressed.
Footprint Representation of Planetary Remote Sensing Data
NASA Astrophysics Data System (ADS)
Walter, S. H. G.; Gasselt, S. V.; Michael, G.; Neukum, G.
The geometric outline of remote sensing image data, the so-called footprint, can be represented as a number of coordinate tuples. These polygons are associated with corresponding attribute information, such as orbit name, ground and image resolution, solar longitude, and illumination conditions, to generate a powerful base for classification of planetary experiment data. Speed, handling, and extended capabilities are the reasons for using geodatabases to store and access these data types. Techniques for such a spatial database of footprint data are demonstrated using the Relational Database Management System (RDBMS) PostgreSQL, spatially enabled by the PostGIS extension. As examples, footprints of the HRSC and OMEGA instruments, both onboard ESA's Mars Express orbiter, are generated and connected to attribute information. The aim is to provide high-resolution footprints of the OMEGA instrument to the science community for the first time and make them available for web-based mapping applications like the "Planetary Interactive GIS-on-the-Web Analyzable Database" (PIGWAD), produced by the USGS. Map overlays with HRSC or other instruments like MOC and THEMIS (footprint maps are already available for these instruments and can be integrated into the database) allow on-the-fly intersection and comparison as well as extended statistics of the data. Footprint polygons are generated one by one using standard software provided by the instrument teams. Attribute data are calculated and stored together with the geometric information. In the case of HRSC, the coordinates of the footprints are already available in the VICAR label of each image file. Using the VICAR RTL and PostgreSQL's libpq C library, they are loaded into the database using the Well-Known Text (WKT) notation of the Open Geospatial Consortium (OGC). For the OMEGA instrument, image data are read using IDL routines developed and distributed by the OMEGA team. Image outlines are exported together with relevant attribute data to the industry-standard Shapefile format. These files are translated to a Structured Query Language (SQL) command sequence suitable for insertion into the PostGIS/PostgreSQL database using the shp2pgsql data loader provided by the PostGIS software. PostgreSQL's advanced features such as geometry types, rules, operators, and functions allow complex spatial queries and on-the-fly processing of data at the DBMS level, e.g., generalisation of the outlines. Processing done by the DBMS, visualisation via GIS systems, and utilisation for web-based applications like mapservers will be demonstrated.
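As an illustration of this loading step, the sketch below (hypothetical table, column, and connection names; plain Python with psycopg2 rather than the libpq C code described above) inserts a WKT footprint with attributes into a PostGIS-enabled PostgreSQL database and runs an on-the-fly intersection query of the kind mentioned:

```python
# Illustrative sketch only: a WKT footprint plus attributes loaded into
# PostGIS, then queried spatially. All names here are hypothetical.
import psycopg2

conn = psycopg2.connect(dbname="planetary", user="mapper")
cur = conn.cursor()

# Requires the PostGIS extension; a plain 'geometry' column keeps the
# example independent of any particular Mars SRID definition.
cur.execute("""
    CREATE TABLE IF NOT EXISTS footprints (
        orbit_name   text,
        ground_res_m real,
        solar_long   real,
        outline      geometry
    )
""")

# A footprint as a closed ring of lon/lat tuples in WKT notation.
wkt = "POLYGON((-72.1 -6.5, -71.8 -6.5, -71.8 -7.9, -72.1 -7.9, -72.1 -6.5))"
cur.execute(
    "INSERT INTO footprints VALUES (%s, %s, %s, ST_GeomFromText(%s))",
    ("h1235", 12.5, 334.2, wkt),
)

# On-the-fly intersection at DBMS level, as described in the abstract.
cur.execute(
    "SELECT orbit_name FROM footprints "
    "WHERE ST_Intersects(outline, ST_GeomFromText(%s))",
    (wkt,),
)
print(cur.fetchall())
conn.commit()
```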
Rapid processing of PET list-mode data for efficient uncertainty estimation and data analysis
NASA Astrophysics Data System (ADS)
Markiewicz, P. J.; Thielemans, K.; Schott, J. M.; Atkinson, D.; Arridge, S. R.; Hutton, B. F.; Ourselin, S.
2016-07-01
In this technical note we propose a rapid and scalable software solution for the processing of PET list-mode data, which allows the efficient integration of list mode data processing into the workflow of image reconstruction and analysis. All processing is performed on the graphics processing unit (GPU), making use of streamed and concurrent kernel execution together with data transfers between disk and CPU memory as well as CPU and GPU memory. This approach leads to fast generation of multiple bootstrap realisations, and when combined with fast image reconstruction and analysis, it enables assessment of uncertainties of any image statistic and of any component of the image generation process (e.g. random correction, image processing) within reasonable time frames (e.g. within five minutes per realisation). This is of particular value when handling complex chains of image generation and processing. The software outputs the following: (1) estimate of expected random event data for noise reduction; (2) dynamic prompt and random sinograms of span-1 and span-11 and (3) variance estimates based on multiple bootstrap realisations of (1) and (2) assuming reasonable count levels for acceptable accuracy. In addition, the software produces statistics and visualisations for immediate quality control and crude motion detection, such as: (1) count rate curves; (2) centre of mass plots of the radiodistribution for motion detection; (3) video of dynamic projection views for fast visual list-mode skimming and inspection; (4) full normalisation factor sinograms. To demonstrate the software, we present an example of the above processing for fast uncertainty estimation of regional SUVR (standard uptake value ratio) calculation for a single PET scan of 18F-florbetapir using the Siemens Biograph mMR scanner.
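The bootstrap idea at the core of this workflow is simple to state; the following CPU-only sketch (synthetic event data and sinogram dimensions, not the authors' GPU implementation) resamples list-mode events with replacement and propagates each realisation to a stand-in image statistic:

```python
# Bootstrap resampling of PET list-mode events: each realisation draws
# events with replacement, histograms them into a sinogram, and evaluates
# a downstream statistic whose spread estimates the uncertainty.
import numpy as np

rng = np.random.default_rng(seed=1)
n_bins = 344 * 252                        # hypothetical sinogram size
events = rng.integers(0, n_bins, 10**6)   # one sinogram-bin index per event

def realisation(ev):
    sample = rng.choice(ev, size=ev.size, replace=True)
    return np.bincount(sample, minlength=n_bins)

# In the real workflow each realisation would be reconstructed to an image;
# a simple region sum stands in for the image statistic here.
stats = [realisation(events)[:1000].sum() for _ in range(20)]
print("bootstrap mean/std:", np.mean(stats), np.std(stats, ddof=1))
```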
Image-Based Single Cell Profiling: High-Throughput Processing of Mother Machine Experiments
Sachs, Christian Carsten; Grünberger, Alexander; Helfrich, Stefan; Probst, Christopher; Wiechert, Wolfgang; Kohlheyer, Dietrich; Nöh, Katharina
2016-01-01
Background: Microfluidic lab-on-chip technology combined with live-cell imaging has enabled the observation of single cells in their spatio-temporal context. The mother machine (MM) cultivation system is particularly attractive for the long-term investigation of rod-shaped bacteria since it facilitates continuous cultivation and observation of individual cells over many generations in a highly parallelized manner. To date, the lack of fully automated image analysis software limits the practical applicability of the MM as a phenotypic screening tool. Results: We present an image analysis pipeline for the automated processing of MM time-lapse image stacks. The pipeline supports all analysis steps, i.e., image registration, orientation correction, channel/cell detection, cell tracking, and result visualization. Tailored algorithms account for the specialized MM layout to enable a robust automated analysis. Image data generated in a two-day growth study (≈90 GB) are analyzed in ≈30 min, with negligible differences in growth rate between automated and manual evaluation. The proposed methods are implemented in the software molyso (MOther machine anaLYsis SOftware), which provides a new profiling tool for the unbiased analysis of hitherto inaccessible large-scale MM image stacks. Conclusion: Presented is molyso, ready-to-use open-source software (BSD-licensed) for the unsupervised analysis of MM time-lapse image stacks. The molyso source code and user manual are available at https://github.com/modsim/molyso. PMID:27661996
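As a generic sketch of the channel/cell detection step (this is not molyso's actual implementation; the frame and parameter choices are synthetic), a thresholding-plus-labeling pass of the kind such pipelines build on looks like this:

```python
# Threshold a (synthetic) frame and label connected components as cell
# candidates; real MM pipelines add registration, tracking, and layout
# priors on top of a detection step of roughly this shape.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

rng = np.random.default_rng(7)
frame = rng.normal(0.2, 0.05, (128, 128))   # noisy background
frame[40:90, 60:66] = 0.8                   # a bright rod-shaped "cell"

mask = frame > threshold_otsu(frame)
for region in regionprops(label(mask)):
    print(region.label, region.area, region.centroid)
```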
Land application driven performance requirements for airborne imaging spectroscopy
NASA Astrophysics Data System (ADS)
Schaepman, M. E.; Schläpfer, D.; Kaiser, J. W.; Brazile, J.; Itten, K. I.
2003-04-01
Over the past few years, a joint Swiss/Belgian ESA initiative resulted in a project to build a precursor mission of future spaceborne imaging spectrometers, namely APEX (Airborne Prism Experiment). APEX is designed to be an airborne dispersive pushbroom imaging spectrometer operating in the solar reflected wavelength range between 400 and 2500 nm. The system is optimized for land applications, including limnology, snow, and soil, amongst others. The baseline requirements of APEX are built on various land-application requirements and subsequently modelled to at-sensor radiances. The model is based on existing biophysical and biochemical retrieval algorithms and assumes no physical limitation of the sensor system. Final technology limitations are discussed using system tradeoffs. The absolute radiance calibration of APEX includes use of an internal calibration facility before and after data acquisition, as well as a laboratory calibration and a performance model serving as a stable reference. We will discuss the instrument's present status in its breadboarding phase, including some new results with respect to the detector development and design optimization for imaging spectrometers. In the same framework of APEX, a complete processing and archiving facility (PAF) is developed. The PAF not only includes imaging spectrometer data processing up to physical units, but also geometric and atmospheric correction for each scene, as well as calibration data input. The PAF software includes an Internet-based web server and provides interfaces to data users as well as instrument operators and programmers. The software design, tools, and life cycle are discussed as well. Further, we will discuss particular instrument requirements (resampling, bad pixel treatment, etc.) in view of the operation of the PAF, as well as their consequences on product quality. Finally, we will discuss a combined approach for geometric and atmospheric correction including BRDF (or view angle) related effects.
West, A G; Goldsmith, G R; Matimati, I; Dawson, T E
2011-08-30
Previous studies have demonstrated the potential for large errors to occur when analyzing waters containing organic contaminants using isotope ratio infrared spectroscopy (IRIS). In an attempt to address this problem, IRIS manufacturers now provide post-processing spectral analysis software capable of identifying samples with the types of spectral interference that compromise their stable isotope analysis. Here we report two independent tests of this post-processing spectral analysis software on two IRIS systems, OA-ICOS (Los Gatos Research Inc.) and WS-CRDS (Picarro Inc.). Following a similar methodology to a previous study, we cryogenically extracted plant leaf water and soil water and measured the δ²H and δ¹⁸O values of identical samples by isotope ratio mass spectrometry (IRMS) and IRIS. As an additional test, we analyzed plant stem waters and tap waters by IRMS and IRIS in an independent laboratory. For all tests we assumed that the IRMS value represented the "true" value against which we could compare the stable isotope results from the IRIS methods. Samples showing significant deviations from the IRMS value (>2σ) were considered to be contaminated and representative of spectral interference in the IRIS measurement. Over the two studies, 83% of plant species were considered contaminated on OA-ICOS and 58% on WS-CRDS. Post-analysis, spectra were analyzed using the manufacturers' spectral analysis software in order to see if the software correctly identified contaminated samples. In our tests the software performed well, identifying all the samples with major errors. However, some false negatives indicate that user evaluation and testing of the software are necessary. Repeat sampling of plants showed considerable variation in the discrepancies between IRIS and IRMS. As such, we recommend that spectral analysis of IRIS data be incorporated into standard post-processing routines. Furthermore, we suggest that the results from spectral analysis be included when reporting stable isotope data from IRIS. Copyright © 2011 John Wiley & Sons, Ltd.
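The screening criterion is easy to express numerically; a hedged illustration (invented values and an assumed analytical precision, not the authors' data) of flagging IRIS results that deviate from paired IRMS values by more than 2σ:

```python
# Flag IRIS measurements deviating from paired IRMS values by more than
# 2 sigma; values and the precision estimate are invented for illustration.
import numpy as np

d18o_irms = np.array([-6.2, -5.9, -8.1, -7.4])  # "true" values (per mil)
d18o_iris = np.array([-6.1, -4.2, -8.0, -7.5])  # spectroscopic values

sigma = 0.2                                     # assumed combined precision
contaminated = np.abs(d18o_iris - d18o_irms) > 2 * sigma
print(contaminated)                             # [False  True False False]
```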
Topomapping of Mars with HRSC images, ISIS, and a commercial stereo workstation
NASA Astrophysics Data System (ADS)
Kirk, R. L.; Howington-Kraus, E.; Galuszka, D.; Redding, B.; Hare, T. M.
HRSC on Mars Express [1] is the first camera designed specifically for stereo imaging to be used in mapping a planet other than the Earth. Nine detectors view the planet through a single lens to obtain four-band color coverage and stereo images at 3 to 5 distinct angles in a single pass over the target. The short interval between acquisition of the images ensures that changes that could interfere with stereo matching are minimized. The resolution of the nadir channel is 12.5 m at periapsis, poorer at higher points in the elliptical orbit. The stereo channels are typically operated at 2x coarser resolution and the color channels at 4x or 8x. Since the commencement of operations in January 2004, approximately 58% of Mars has been imaged at nadir resolutions better than 50 m/pixel. This coverage is expected to increase significantly during the recently approved extended mission of Mars Express, giving the HRSC dataset enormous potential for regional and even global mapping. Systematic processing of the HRSC images is carried out at the German Aerospace Center (DLR) in Berlin. Preliminary digital topographic models (DTMs) at 200 m/post resolution and orthorectified image products are produced in near-realtime for all orbits by using the VICAR software system [2]. The tradeoff of universal coverage but limited DTM resolution makes these products optimal for many but not all research studies. Experiments on adaptive processing with the same software, for a limited number of orbits, have allowed DTMs of higher resolution (down to 50 m/post) to be produced [3]. In addition, numerous Co-Investigators on the HRSC team (including ourselves) are actively researching techniques to improve on the standard products, by such methods as bundle adjustment, alternate approaches to stereo DTM generation, and refinement of DTMs by photoclinometry (shape-from-shading) [4]. The HRSC team is conducting a systematic comparison of these alternative processing approaches by arranging for team members to produce DTMs in a consistent coordinate system from a carefully chosen suite of test images [5]. Here, we describe our own approach to HRSC processing and the results we obtained with the test images. We have developed an independent capability for processing of HRSC images at the USGS, based on the approach previously taken with Mars Global Surveyor Mars Orbiter Camera (MGS MOC) images [6]. The chosen approach uses both the USGS digital cartographic system ISIS and the commercial photogrammetric software SOCET SET (BAE Systems) and exploits the strengths of each. This capability provides an independent point of comparison for the standard processing, as described here. It also prepares us for systematic mapping with HRSC data, if desired, and makes some useful processing tools (including relatively powerful photometric normalization and photoclinometry software) available to a wide community of ISIS users. ISIS [7] provides an end-to-end system for the analysis of digital images and production of maps from them that is readily extended to new missions. Its stereo capabilities are, however, limited. SOCET SET [8] is tailored to aerial and Earth-orbital imagery but provides a complete workflow with modules for bundle adjustment (MST), automatic stereomatching (ATE), and interactive quality control/editing of DTMs with stereo viewing (ITE).
Our processing approach for MOC and other stereo datasets has been to use ISIS to ingest images in an archival format, decompress them as necessary, and perform instrument-specific radiometric calibration. Software written in ISIS is used to translate the image and, more importantly, orientation parameters and other metadata, to the formats understood by SOCET SET. The commercial system is then used for "three-dimensional" processing: bundle adjustment (including measurement of needed control points), DTM generation, and DTM editing. Final steps such as orthorectification and mosaicking of images can be performed either in SOCET SET or in ISIS after exporting the DTM data back to it. This workflow was modified slightly for HRSC to take advantage of the standard processing performed at the DLR. As the first step in DTM production, we import VICAR Level 2 files (radiometrically calibrated but still in the raw camera geometry) into ISIS, where they can immediately be used or exported to SOCET SET. HRSC Level 3 and 4 products (DTMs and orthorectified images) can also be imported and used as map-projected data (e.g., Level 4 DTMs from DLR can be compared with those produced in SOCET SET). Our results for images from orbit h1235 (covering western Candor Chasma) and the adjacent orbits h0894, h0905, h0927 (Nanedi Valles) are encouraging, even though we were unable to take full advantage of the multiple-line design of HRSC in the analysis. The version of SOCET SET used (5.2) does not allow for the introduction of constraints in the bundle adjustment to ensure that the images from a single HRSC orbit share the same trajectory and pointing history. We therefore computed offsets to the trajectory and pointing angles for each image of the set as if they were fully independent. Furthermore, a limitation of the existing SOCET (and ISIS) pushbroom scanner sensor models is that the exposure time per line is taken as constant for each image. HRSC is generally operated so that the line time changes multiple times per orbit, requiring us to split each VICAR image into multiple files for processing. Because the segments of each image could not be constrained to have consistent adjustments, the DTM of Nanedi Valles produced from these image segments contained small discontinuities at the segment boundaries. This problem did not arise for Candor Chasma because the entire study area was covered without changes in the time per image line. The latest release of SOCET SET (5.3) incorporates the ability to do constrained bundle adjustment and should solve these problems. In addition, we are modifying the ISIS and SOCET sensor models to allow changes of line time within an image. This will greatly reduce the effort needed to work with HRSC image sets with frequent line time changes (i.e., the vast majority), because we will no longer have to split them into short segments that must be controlled and processed individually. In addition, a bug in recent and current versions of SOCET SET prevents the capability for multi-way image matching from being used with sets of scanner images. We therefore collected separate DTMs by pairwise matching of each combination of images (nadir-stereo1, nadir-stereo2, stereo1-stereo2) within an orbit and merged the results. The bug will be corrected in a future release of SOCET SET, making multi-way matching possible. This is expected to improve the robustness of DTM generation and reduce the need for interactive editing.
The Candor Chasma bundle adjustment yielded RMS two-dimensional residuals of 0.5 to 0.7 pixels in most bands, 1.4 pixels in the blue. RMS residuals to the ground control provided by Mars Orbiter Laser Altimeter (MOLA) data were ~180 m horizontally but only 15 m vertically. Adjustments to the spacecraft position and orientation were surprisingly large and may be correlated: 0.1 to 2.4 km in position, ≤0.3° in omega, ≤0.8° in the other two angles. Placement of the (manually selected) control points was found to be critical; matching MOLA to the images to constrain horizontal coordinates is easiest at slope breaks such as the canyon edges, but vertical constraints are best obtained in areas of low slope. As a result, it is preferable to choose separate points for horizontal and vertical control. It is also useful to import the MOLA ground tracks into SOCET SET in order to be sure of picking control points on or near altimetry profiles rather than in gaps where the MOLA DTM has been filled by interpolation. We collected DTMs at 75 m/post in the interior of Candor Chasma and 300 m/post on the walls and surrounding plateau, and merged the results from both spacings and all three image combinations at 75 m/post. For Nanedi Valles, which lacks the extremely steep or flat areas encountered in Candor, DTMs at both spacings were collected over the full study area. A small amount of interactive editing was performed to remove areas of obvious matcher errors from the individual DTMs before they were merged. In most cases, this resulted in the combined DTM being based on the other, more successful matching results. Parts of the plateau around Candor Chasma, which has very little image texture, could not be matched successfully and were filled with MOLA data. As would be expected, the resulting DTM appears sharper than either MOLA at 463 m/post or the preliminary HRSC DTM at 200 m/post. The added detail is subjectively well correlated with the image but is not as sharp at the 75 m (~3 pixel) grid spacing.
We note in conclusion that orthorectification of the images, photometric normalization and modeling, and photoclinometry are all performed with the free software system ISIS. At the moment, the commercial software SOCET SET is required for both bundle adjustment and stereo DTM production. The USGS is currently developing its own bundle adjustment software for HRSC and other line scanners, which, when available, will make it possible for ISIS users to control HRSC images to MOLA and therefore to use the altimetric topography in subsequent processing and analysis steps similar to those described here. Acknowledgement: For this study, the HRSC Experiment Team of the German Aerospace Center (DLR) in Berlin has provided HRSC Preliminary 200m DTM(s). References: [1] Neukum, G., et al. (2004) Nature, 432, 971. [2] Scholten, F., et al. (2005) PE&RS, 71, 1143. [3] Gwinner, K., et al. (2005) PFG, 5, 387. [4] Albertz, J., et al. (2005) PE&RS, 71, 1153. [5] Heipke, C., et al. (2006) IAPRS, submitted. [6] Kirk, R.L., et al. (2003) JGR, 108, 8088. [7] Eliason, E. (1997) LPS XXVIII, 331; Gaddis et al. (1997) LPS XXVIII, 387; Torson, J., and K. Becker, (1997) LPS XXVIII, 1443. [8] Miller, S.B., and A.S. Walker (1993) ACSM/ASPRS Annual Conv., 3, 256; S.B., and A.S. Walker (1995) Z. Phot. Fern. 63, 4. [9] Kirk, R.L. (1987) Ph.D. Thesis, Caltech, Part III. [10] Kirk, R.L., et al. (2003) ISPRS-ET Workshop, http://astrogeology.usgs.gov/Projects/ISPRS/Meetings/Houston2003/abstracts/ Kirk_isprs_mar03.pdf. 4
A model for the prediction of latent errors using data obtained during the development process
NASA Technical Reports Server (NTRS)
Gaffney, J. E., Jr.; Martello, S. J.
1984-01-01
A model implemented in a program that runs on the IBM PC for estimating the latent (or post-ship) error content of a body of software upon its initial release to the user is presented. The model employs the count of errors discovered in one or more of the error discovery activities during development, such as a design inspection, as input data for a process that provides estimates of the total lifetime (injected) error content and of the latent (or post-ship) error content, i.e., the errors remaining at delivery. The model presumes that these activities cover all of the opportunities during the software development process for error discovery (and removal).
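The abstract does not reproduce the model's equations, so the sketch below is a loudly hypothetical stand-in for the general idea only: given error counts discovered in successive activities and an assumed per-activity detection efficiency, infer the total injected error content and the latent errors remaining at delivery.

```python
# Hypothetical illustration, not the paper's model: assume each discovery
# activity finds a fixed fraction of the errors still present, and invert
# the observed counts for total injected and latent error content.
def estimate_latent(discovered, efficiency=0.5):
    remaining = 1.0   # fraction of injected errors still in the software
    total = 0.0
    for found in discovered:
        # Errors present at this stage = total * remaining; the count found
        # implies a lower bound on the total injected error content.
        total = max(total, found / (efficiency * remaining))
        remaining *= (1.0 - efficiency)
    return total, total * remaining   # (injected, latent at delivery)

injected, latent = estimate_latent([50, 30, 12])
print(f"estimated injected: {injected:.0f}, latent: {latent:.0f}")
```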
NASA Astrophysics Data System (ADS)
Knevels, Raphael; Leopold, Philip; Petschko, Helene
2017-04-01
With high-resolution airborne Light Detection and Ranging (LiDAR) data more commonly available, many studies have been performed to exploit the detailed information on the Earth's surface contained in these data and to analyse its limitations. Specifically in the field of natural hazards, digital terrain models (DTMs) have been used to map hazardous processes such as landslides, mainly by visual interpretation of LiDAR DTM derivatives. However, new approaches are striving towards automatic detection of landslides to speed up the process of generating landslide inventories. These studies usually use a combination of optical imagery and terrain data, and are designed in commercial software packages such as ESRI ArcGIS, Definiens eCognition, or MathWorks MATLAB. The objective of this study was to investigate the potential of open-source software for automatic landslide detection based only on high-resolution LiDAR DTM derivatives in a study area within the federal state of Burgenland, Austria. The study area is very prone to landslides, which have been mapped with different methodologies in recent years. The free development environment R was used to integrate open-source geographic information system (GIS) software, such as SAGA (System for Automated Geoscientific Analyses), GRASS (Geographic Resources Analysis Support System), or TauDEM (Terrain Analysis Using Digital Elevation Models). The implemented geographic-object-based image analysis (GEOBIA) consisted of (1) derivation of land surface parameters, such as slope, surface roughness, curvature, or flow direction; (2) finding an optimal scale parameter by the use of an objective function; (3) multi-scale segmentation; (4) classification of landslide parts (main scarp, body, flanks) by k-means thresholding; (5) assessment of the classification performance using a pre-existing landslide inventory; and (6) post-processing analysis for further use in landslide inventories. The results of the developed open-source approach demonstrated good success rates in objectively detecting landslides in high-resolution topography data by GEOBIA.
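Step (4) can be sketched compactly; the example below uses Python and scikit-learn rather than the study's R/SAGA/GRASS toolchain, with synthetic segment-level features standing in for the real land-surface parameters:

```python
# Cluster segment-level terrain attributes with k-means to separate
# scarp-like segments from stable terrain; all feature values are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# One row per image segment: [mean slope (deg), roughness, curvature]
features = np.vstack([
    rng.normal([12, 0.2, 0.0], 0.5, (100, 3)),   # stable terrain
    rng.normal([28, 0.8, -0.3], 0.5, (30, 3)),   # scarp-like segments
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(np.bincount(labels))   # segment counts per cluster
```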
NASA Astrophysics Data System (ADS)
Gómez-Gutiérrez, Álvaro; Juan de Sanjosé-Blasco, José; Schnabel, Susanne; de Matías-Bejarano, Javier; Pulido-Fernández, Manuel; Berenguer-Sempere, Fernando
2015-04-01
In this work, the hypothesis that 3D models obtained with Structure from Motion (SfM) approaches can be improved using images pre-processed by High Dynamic Range (HDR) techniques is tested. Photographs of the Veleta Rock Glacier in Spain were captured with different exposure values (EV0, EV+1 and EV-1), two focal lengths (35 and 100 mm), and under different weather conditions in the years 2008, 2009, 2011, 2012 and 2014. HDR images were produced using the different EV steps within the Fusion F.1 software. Point clouds were generated using commercial and freely available SfM software: Agisoft Photoscan and 123D Catch. Models obtained using pre-processed and non-preprocessed images were compared in a 3D environment with a benchmark 3D model obtained by means of a Terrestrial Laser Scanner (TLS). A total of 40 point clouds were produced, georeferenced, and compared. Results indicated that for the Agisoft Photoscan software, differences in accuracy between models obtained with pre-processed and non-preprocessed images were not significant from a statistical viewpoint. However, in the case of the freely available software 123D Catch, models obtained using images pre-processed by HDR techniques presented a higher point density and were more accurate. This tendency was observed across the five studied years and under different capture conditions. More work should be done in the near future to corroborate whether the results of similar software packages can be improved by HDR techniques (e.g., ARC3D, Bundler and PMVS2, CMP SfM, Photosynth and VisualSFM).
User Interactive Software for Analysis of Human Physiological Data
NASA Technical Reports Server (NTRS)
Cowings, Patricia S.; Toscano, William; Taylor, Bruce C.; Acharya, Soumydipta
2006-01-01
Ambulatory physiological monitoring has been used to study human health and performance in space and in a variety of Earth-based environments (e.g., military aircraft, armored vehicles, small groups in isolation, and patients). Large, multi-channel data files are typically recorded in these environments, and these files often require the removal of contaminated data prior to processing and analysis. Physiological data processing can now be performed with user-friendly, interactive software developed by the Ames Psychophysiology Research Laboratory. This software, which runs on a Windows platform, contains various signal-processing routines for both time- and frequency-domain data analyses (e.g., peak detection, differentiation and integration, digital filtering, adaptive thresholds, Fast Fourier Transform power spectrum, auto-correlation, etc.). Data acquired with any ambulatory monitoring system that provides text or binary file formats are easily imported to the processing software. The application provides a graphical user interface where one can manually select and correct data artifacts utilizing linear and zero interpolation and adding trigger points for missed peaks. Block and moving-average routines are also provided for data reduction. Processed data in numeric and graphic format can be exported to Excel. This software, PostProc (for post-processing), requires the Dadisp engineering spreadsheet (DSP Development Corp), or equivalent, for implementation. Specific processing routines were written for electrocardiography, electroencephalography, electromyography, blood pressure, skin conductance level, impedance cardiography (cardiac output, stroke volume, thoracic fluid volume), temperature, and respiration.
Computer Sciences and Data Systems, volume 1
NASA Technical Reports Server (NTRS)
1987-01-01
Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.
Finite Element Simulations of Kaikoura, NZ Earthquake using DInSAR and High-Resolution DSMs
NASA Astrophysics Data System (ADS)
Barba, M.; Willis, M. J.; Tiampo, K. F.; Glasscoe, M. T.; Clark, M. K.; Zekkos, D.; Stahl, T. A.; Massey, C. I.
2017-12-01
Three-dimensional displacements from the Kaikoura, NZ, earthquake in November 2016 are imaged here using Differential Interferometric Synthetic Aperture Radar (DInSAR) and high-resolution Digital Surface Model (DSM) differencing and optical pixel tracking. Full-resolution co- and post-seismic interferograms of Sentinel-1A/B images are constructed using the JPL ISCE software. The OSU SETSM software is used to produce repeat 0.5 m posting DSMs from commercial satellite imagery, which are supplemented with UAV derived DSMs over the Kaikoura fault rupture on the eastern South Island, NZ. DInSAR provides long-wavelength motions while DSM differencing and optical pixel tracking provides both horizontal and vertical near fault motions, improving the modeling of shallow rupture dynamics. JPL GeoFEST software is used to perform finite element modeling of the fault segments and slip distributions and, in turn, the associated asperity distribution. The asperity profile is then used to simulate event rupture, the spatial distribution of stress drop, and the associated stress changes. Finite element modeling of slope stability is accomplished using the ultra high-resolution UAV derived DSMs to examine the evolution of post-earthquake topography, landslide dynamics and volumes. Results include new insights into shallow dynamics of fault slip and partitioning, estimates of stress change, and improved understanding of its relationship with the associated seismicity, deformation, and triggered cascading hazards.
NASA Astrophysics Data System (ADS)
Neff, T.; Kiessling, F.; Brix, G.; Baudendistel, K.; Zechmann, C.; Giesel, F. L.; Bendl, R.
2005-09-01
Planning of radiotherapy is often difficult due to the limitations of morphological imaging. New imaging techniques enable the integration of biological information into treatment planning and help to improve the detection of vital and aggressive tumour areas. This may improve clinical outcome. However, morphological data sets remain the gold standard in the planning of radiotherapy. In this paper, we introduce an in-house software platform enabling us to combine images from different imaging modalities, yielding biological and morphological information, in a workflow-driven approach. This is demonstrated for the combination of morphological CT, MRI, functional DCE-MRI, and PET data. Data of patients with a tumour of the prostate and with a meningioma were examined with DCE-MRI, applying pharmacokinetic two-compartment models for post-processing. The results were compared with the clinical plans for radiation therapy. The generated parameter maps give additional information about tumour spread, which can be incorporated in the definition of safety margins.
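The abstract names pharmacokinetic two-compartment models without giving their form; the standard Tofts formulation widely used for DCE-MRI post-processing (an assumption here, not necessarily the exact model applied in this work) relates the tissue concentration C_t to the arterial plasma concentration C_p via the transfer constant K^trans and the extravascular extracellular volume fraction v_e:

```latex
% Standard Tofts two-compartment model (assumed form):
C_t(t) = K^{\mathrm{trans}} \int_0^t C_p(\tau)\, e^{-k_{ep}\,(t-\tau)}\, d\tau,
\qquad k_{ep} = \frac{K^{\mathrm{trans}}}{v_e}
```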
Optical analysis of crystal growth
NASA Technical Reports Server (NTRS)
Workman, Gary L.; Passeur, Andrea; Harper, Sabrina
1994-01-01
Processing and data reduction of holographic images from Spacelab presents some interesting challenges in determining the effects of microgravity on crystal growth processes. Evaluation of several processing techniques, including the Computerized Holographic Image Processing System and the image processing software ITEX150, will provide fundamental information for holographic analysis of the space flight data.
Flifla, M J; Garreau, M; Rolland, J P; Coatrieux, J L; Thomas, D
1992-12-01
'IBIS' is a set of computer programs concerned with the processing of electron micrographs, with particular emphasis on the requirements for structural analyses of biological macromolecules. The software is written in FORTRAN 77 and runs on Unix workstations. A description of the various functions and the implementation mode is given. Some examples illustrate the user interface.
Three-dimensional micro-scale strain mapping in living biological soft tissues.
Moo, Eng Kuan; Sibole, Scott C; Han, Sang Kuy; Herzog, Walter
2018-04-01
Non-invasive characterization of the mechanical micro-environment surrounding cells in biological tissues at multiple length scales is important for the understanding of the role of mechanics in regulating the biosynthesis and phenotype of cells. However, there is a lack of imaging methods that allow for characterization of the cell micro-environment in three-dimensional (3D) space. The aims of this study were (i) to develop a multi-photon laser microscopy protocol capable of imprinting 3D grid lines onto living tissue at a high spatial resolution, and (ii) to develop image processing software capable of analyzing the resulting microscopic images and performing high-resolution 3D strain analyses. Using articular cartilage as the biological tissue of interest, we present a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning length scales from the tissue to the cell level. Using custom image processing software (lsmgridtrack), we provide accurate and robust 3D micro-strain analysis that allows for detailed qualitative and quantitative assessment of the 3D tissue kinematics. This novel technique preserves tissue structural integrity post-scanning, therefore allowing for multiple strain measurements at different time points in the same specimen. The approach can also be applied to other biological tissues, such as the meniscus and annulus fibrosus, as well as tissue-engineered constructs, for the characterization of their mechanical properties. The proposed technique is versatile and opens doors for experimental and theoretical investigations of the relationship between tissue deformation and cell biosynthesis. Studies of this nature may enhance our understanding of the mechanisms underlying cell mechano-transduction, and thus, adaptation and degeneration of soft connective tissues. Copyright © 2018 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
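As a sketch of the kind of computation such strain analysis performs (not lsmgridtrack's actual code; the gradient values are invented), the displacement gradient estimated from tracked grid intersections yields the deformation gradient F and the Green-Lagrange strain E = (1/2)(F^T F - I):

```python
# Green-Lagrange strain from a displacement gradient estimated between
# tracked grid points; the 3x3 gradient here is invented for illustration.
import numpy as np

# du/dX: e.g., finite differences of grid-line intersections before/after load.
grad_u = np.array([[ 0.02,  0.01, 0.00],
                   [ 0.00, -0.03, 0.00],
                   [ 0.00,  0.00, 0.01]])

F = np.eye(3) + grad_u              # deformation gradient
E = 0.5 * (F.T @ F - np.eye(3))     # Green-Lagrange strain tensor
print(np.round(E, 4))
```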
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The aim of the project was to develop software that extracts the characteristics of a greenhouse tomato from its image. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from each picture, and export it to an external file. The software is intended for batch analysis of the collected research material, with the obtained information saved as a CSV file. It computes 33 independent parameters describing each tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used to analyze other fruits and vegetables of spherical shape.
A specialized plug-in software module for computer-aided quantitative measurement of medical images.
Wang, Q; Zeng, Y J; Huo, P; Hu, J L; Zhang, J H
2003-12-01
This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement, and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability, and ease of use.
Analysis-Software for Hyperspectral Algal Reflectance Probes v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timlin, Jerilyn A.; Reichardt, Thomas A.; Jenson, Travis J.
This software provides onsite analysis of the hyperspectral reflectance data acquired on an outdoor algal pond by a multichannel, fiber-coupled spectroradiometer. The analysis algorithm is based on numerical inversion of a reflectance model, in which the above-water reflectance is expressed as a function of the single backscattering albedo, which is dependent on the backscatter and absorption coefficients of the algal culture, which are in turn related to the algal biomass and pigment optical activity, respectively. Prior to the development of this software, while raw multichannel data were displayed in real time, analysis required a post-processing procedure to extract the relevant parameters. This software provides the capability to track the temporal variation of such culture parameters in real time, as raw data are being acquired, or can be run in a post-processing mode. The software allows the user to select between different algal species, incorporate the appropriate calibration data, and observe the quality of the resulting model inversions.
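The inversion itself amounts to nonlinear least squares; the sketch below assumes an illustrative model form and coefficients (not the software's actual ones) in which reflectance is a function of the single backscattering albedo:

```python
# Fit measured above-water reflectance with R = 0.33 * bb / (a + bb), where
# backscatter bb and absorption a depend on biomass and pigment parameters.
# Model form and constants are assumptions for illustration only.
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(400, 750, 50)                       # wavelengths (nm)

def reflectance(params, wl):
    biomass, pigment = params
    bb = 0.002 + 0.01 * biomass                      # backscatter coefficient
    a = 0.01 + pigment * np.exp(-((wl - 675) / 25) ** 2)  # pigment absorption
    return 0.33 * bb / (a + bb)                      # via backscattering albedo

measured = reflectance([1.2, 0.08], wl)              # synthetic "data"

fit = least_squares(lambda p: reflectance(p, wl) - measured,
                    x0=[0.5, 0.05], bounds=([0, 0], [10, 1]))
print("recovered biomass, pigment:", fit.x)
```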
Römpp, Andreas; Schramm, Thorsten; Hester, Alfons; Klinkert, Ivo; Both, Jean-Pierre; Heeren, Ron M A; Stöckli, Markus; Spengler, Bernhard
2011-01-01
Imaging mass spectrometry is the method of scanning a sample of interest and generating an "image" of the intensity distribution of a specific analyte. The data sets consist of a large number of mass spectra which are usually acquired with identical settings. Existing data formats are not sufficient to describe an MS imaging experiment completely. The data format imzML was developed to allow the flexible and efficient exchange of MS imaging data between different instruments and data analysis software. For this purpose, the MS imaging data is divided in two separate files. The mass spectral data is stored in a binary file to ensure efficient storage. All metadata (e.g., instrumental parameters, sample details) are stored in an XML file which is based on the standard data format mzML developed by HUPO-PSI. The original mzML controlled vocabulary was extended to include specific parameters of imaging mass spectrometry (such as x/y position and spatial resolution). The two files (XML and binary) are connected by offset values in the XML file and are unambiguously linked by a universally unique identifier. The resulting datasets are comparable in size to the raw data and the separate metadata file allows flexible handling of large datasets. Several imaging MS software tools already support imzML. This allows choosing from a (growing) number of processing tools. One is no longer limited to proprietary software, but is able to use the processing software which is best suited for a specific question or application. On the other hand, measurements from different instruments can be compared within one software application using identical settings for data processing. All necessary information for evaluating and implementing imzML can be found at http://www.imzML.org.
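The offset mechanism is the key design point; conceptually, reading one spectrum reduces to a seek into the binary (.ibd) file at the offset recorded in the XML, as in this simplified sketch (placeholder names, not a full imzML parser):

```python
# Conceptual sketch of the two-file layout: XML metadata points into the
# binary (.ibd) file via offsets and array lengths. Not a full parser.
import numpy as np

def read_spectrum(ibd_path, offset, length, dtype=np.float32):
    """Read one spectrum's intensity array at the byte offset recorded in
    the .imzML metadata file (offset/length values are parsed from the XML)."""
    with open(ibd_path, "rb") as f:
        f.seek(offset)
        return np.fromfile(f, dtype=dtype, count=length)

# Example call; offset/length would come from the accompanying .imzML file:
# intensities = read_spectrum("example.ibd", offset=16, length=2048)
```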
Dynamic feature analysis for Voyager at the Image Processing Laboratory
NASA Technical Reports Server (NTRS)
Yagi, G. M.; Lorre, J. J.; Jepsen, P. L.
1978-01-01
Voyager 1 and 2 were launched from Cape Kennedy to Jupiter, Saturn, and beyond on September 5, 1977 and August 20, 1977. The role of the Image Processing Laboratory is to provide the Voyager Imaging Team with the necessary support to identify atmospheric features (tiepoints) for Jupiter and Saturn data, and to analyze and display them in a suitable form. This support includes the software needed to acquire and store tiepoints, the hardware needed to interactively display images and tiepoints, and the general image processing environment necessary for decalibration and enhancement of the input images. The objective is an understanding of global circulation in the atmospheres of Jupiter and Saturn. Attention is given to the Voyager imaging subsystem, the Voyager imaging science objectives, hardware, software, display monitors, a dynamic feature study, decalibration, navigation, and data base.
Remote hardware-reconfigurable robotic camera
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.
2001-10-01
In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.
AIRSAR Web-Based Data Processing
NASA Technical Reports Server (NTRS)
Chu, Anhua; Van Zyl, Jakob; Kim, Yunjin; Hensley, Scott; Lou, Yunling; Madsen, Soren; Chapman, Bruce; Imel, David; Durden, Stephen; Tung, Wayne
2007-01-01
The AIRSAR automated, Web-based data processing and distribution system is an integrated, end-to-end synthetic aperture radar (SAR) processing system. Designed to function under limited resources and rigorous demands, AIRSAR eliminates operational errors and provides for paperless archiving. It also provides a yearly tune-up of the processor on flight missions, as well as quality assurance with new radar modes and anomalous-data compensation. The software fully integrates a Web-based SAR data-user request subsystem, a data processing system to automatically generate co-registered multi-frequency images from both polarimetric and interferometric data collection modes at 80/40/20 MHz bandwidth, an automated verification quality assurance subsystem, and an automatic data distribution system for use in the remote-sensing community. Features include Survey Automation Processing, in which the software can automatically generate a quick-look image overnight, without operator intervention, from an entire 90-GB raw SAR data tape recorded at 32 MB/s. The software also allows product ordering and distribution via a Web-based user request system. To make AIRSAR more user-friendly, it has been designed to let users search by entering the desired mission flight line (Missions Searching), or to search for any mission flight line by entering the desired latitude and longitude (Map Searching). For precision image automation processing, the software generates products according to each data processing request stored in the database via a queue management system. Users are able to have automatic generation of co-registered multi-frequency images, as the software performs polarimetric and/or interferometric SAR data processing in ground and/or slant projection according to user processing requests for one of the 12 radar modes.
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry (PIV) data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms was developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure, as well as the subtraction of model reflection, is performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no-flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition-set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that the fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work were generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing, and validation of these algorithms are presented.
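A compact sketch of the alignment idea (not the authors' code; synthetic arrays throughout): subtract a reflection-only image, locate a fiduciary mark's centroid, and shift the frame so that the centroid matches its location in the background image.

```python
# Reflection subtraction followed by centroid-based shift to a common
# reference frame; images, mask, and centroid values are synthetic.
import numpy as np
from scipy.ndimage import center_of_mass, shift

def align_to_background(frame, reflection, bg_centroid, mark_mask):
    cleaned = frame.astype(float) - reflection      # remove model reflections
    centroid = center_of_mass(mark_mask * cleaned)  # fiduciary mark centroid
    dy, dx = np.subtract(bg_centroid, centroid)
    return shift(cleaned, (dy, dx))                 # move to common frame

frame = np.zeros((64, 64)); frame[30:34, 40:44] = 1.0
mask = np.zeros_like(frame); mask[28:36, 38:46] = 1.0
aligned = align_to_background(frame, np.zeros_like(frame), (16.0, 16.0), mask)
print(np.argwhere(aligned > 0.5).mean(axis=0))      # centroid now near (16, 16)
```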
Vorstenbosch, Joshua; Islur, Avi
2017-06-01
Breast augmentation is among the most frequently performed cosmetic plastic surgeries. Providing patients with "realistic" 3D simulations of breast augmentation outcomes is becoming increasingly common. Until recently, such programs were costly and required significant equipment, training, and office space. New simple user-friendly cloud-based programs have been developed, but to date there remains a paucity of objective evidence comparing these 3D simulations with the post-operative outcomes. To determine the aesthetic similarity between pre-operative 3D simulation generated by Crisalix and real post-operative outcomes. A retrospective review of 20 patients receiving bilateral breast augmentation was conducted comparing 6-month post-operative outcomes with 3D simulation using Crisalix software. Similarities between post-operative and simulated images were measured by three attending plastic surgeons and ten plastic surgery residents using a series of parameters. Assessment reveals similarity between the 3D simulation and 6-month post-operative images for overall appearance, breast height, breast width, breast volume, breast projection, and nipple correction. Crisalix software generated more representative simulations for symmetric breasts than for tuberous or ptotic breasts. Comparison of overall aesthetic outcome to simulation showed that the post-operative outcome was more appealing for the symmetric and tuberous breasts and less appealing for the ptotic breasts. Our data suggest that Crisalix offers a good overall 3D simulated image of post-operative breast augmentation outcomes. Improvements to the simulation of the post-operative outcomes for ptotic and tuberous breasts would result in greater predictive capabilities of Crisalix. Collectively, Crisalix offers good predictive simulations for symmetric breasts. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors http://www.springer.com/00266 .
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional linear imaging methods limit the viewer to a single, fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction overcomes the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the captured image; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extracted by comparing and isolating objects within the virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.
VIEW-Station software and its graphical user interface
NASA Astrophysics Data System (ADS)
Kawai, Tomoaki; Okazaki, Hiroshi; Tanaka, Koichiro; Tamura, Hideyuki
1992-04-01
VIEW-Station is a workstation-based image processing system which merges the state-of-the-art software environment of Unix with the computing power of a fast image processor. VIEW-Station has a hierarchical software architecture, which facilitates device independence when porting across various hardware configurations, and provides extensibility in the development of application systems. The core image computing language is V-Sugar. V-Sugar provides a set of image-processing datatypes and allows image processing algorithms to be simply expressed, using a functional notation. VIEW-Station provides a hardware independent window system extension called VIEW-Windows. In terms of GUI (Graphical User Interface) VIEW-Station has two notable aspects. One is to provide various types of GUI as visual environments for image processing execution. Three types of interpreters called µV-Sugar, VS-Shell and VPL are provided. Users may choose whichever they prefer based on their experience and tasks. The other notable aspect is to provide facilities to create GUI for new applications on the VIEW-Station system. A set of widgets are available for construction of task-oriented GUI. A GUI builder called VIEW-Kid is developed for WYSIWYG interactive interface design.
Note: A simple image processing based fiducial auto-alignment method for sample registration.
Robertson, Wesley D; Porto, Lucas R; Ip, Candice J X; Nantel, Megan K T; Tellkamp, Friedjof; Lu, Yinfei; Miller, R J Dwayne
2015-08-01
A simple method for the location and auto-alignment of sample fiducials for sample registration using widely available MATLAB/LabVIEW software is demonstrated. The method is robust, easily implemented, and applicable to a wide variety of experiment types for improved reproducibility and increased setup speed. The software uses image processing to locate and measure the diameter and center point of circular fiducials for distance self-calibration and iterative alignment and can be used with most imaging systems. The method is demonstrated to be fast and reliable in locating and aligning sample fiducials, provided here by a nanofabricated array, with accuracy within the optical resolution of the imaging system. The software was further demonstrated to register, load, and sample the dynamically wetted array.
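The core of the method, locating circular fiducials and measuring their centers and diameters, can be illustrated with a short sketch. The following Python/OpenCV fragment is a hedged approximation of that idea, not the authors' MATLAB/LabVIEW implementation; the file name and radius bounds are placeholders.

    # Sketch of circular-fiducial detection; illustrative only.
    import cv2
    import numpy as np

    def find_fiducials(path, min_r=10, max_r=60):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.medianBlur(img, 5)              # suppress noise before the transform
        circles = cv2.HoughCircles(img, cv2.HOUGH_GRADIENT, dp=1, minDist=2 * min_r,
                                   param1=100, param2=30,
                                   minRadius=min_r, maxRadius=max_r)
        if circles is None:
            return []
        # Each detection is (x_center, y_center, diameter) in pixels; the known
        # physical diameter divided by the pixel diameter self-calibrates the scale.
        return [(x, y, 2 * r) for x, y, r in np.round(circles[0]).astype(int)]

Iterating detection, moving the stage toward the measured center, and re-imaging gives the kind of closed-loop alignment the abstract describes.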
MO-PIS-Exhibit Hall-01: Tools for TG-142 Linac Imaging QA I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clements, M; Wiesmeyer, M
2014-06-15
Partners in Solutions is an exciting new program in which AAPM partners with our vendors to present practical “hands-on” information about the equipment and software systems that we use in our clinics. The therapy topic this year is solutions for TG-142 recommendations for linear accelerator imaging QA. Note that the sessions are being held in a special-purpose room built on the Exhibit Hall floor, to encourage further interaction with the vendors.
Automated Imaging QA for TG-142 with RIT (Presentation Time: 2:45 – 3:15 PM). This presentation will discuss software tools for automated imaging QA and phantom analysis for TG-142. All modalities used in radiation oncology will be discussed, including CBCT, planar kV imaging, planar MV imaging, and imaging and treatment coordinate coincidence. Vendor-supplied phantoms as well as a variety of third-party phantoms will be shown, along with appropriate analyses, proper phantom setup procedures and scanning settings, and a discussion of image quality metrics. Tools for process automation will be discussed, including RIT Cognition (machine learning for phantom image identification), RIT Cerberus (automated file system monitoring and searching), and RunQueueC (batch processing of multiple images). In addition to phantom analysis, tools for statistical tracking, trending, and reporting will be discussed. This discussion will include an introduction to statistical process control, a valuable tool in analyzing data and determining appropriate tolerances.
An Introduction to TG-142 Imaging QA Using Standard Imaging Products (Presentation Time: 3:15 – 3:45 PM). Medical physicists want to understand the logic behind TG-142 imaging QA. What is often missing is a firm understanding of the connections between the EPID and OBI phantom imaging, the software “algorithms” that calculate the QA metrics, the establishment of baselines, and the analysis and interpretation of the results. The goal of our brief presentation will be to establish and solidify these connections. Our talk will be motivated by the Standard Imaging, Inc. phantom and software solutions. We will present and explain each of the image quality metrics in TG-142 in terms of the theory, mathematics, and algorithms used to implement them in the Standard Imaging PIPSpro software. In the process, we will identify the regions of the phantom images that are analyzed by each algorithm. We will then discuss the process of creating baselines and typical ranges of acceptable values for each image quality metric.
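Statistical process control, mentioned above as a tool for trending QA metrics, reduces to computing a center line and control limits from the measurement history. A minimal sketch, assuming an individuals chart with a moving-range sigma estimate and made-up metric values (a textbook convention, not either vendor's implementation):

    # Individuals-chart control limits for a trended QA metric.
    import numpy as np

    def control_limits(values):
        values = np.asarray(values, dtype=float)
        center = values.mean()
        # Sigma from the mean moving range (d2 = 1.128 for subgroups of 2).
        sigma = np.abs(np.diff(values)).mean() / 1.128
        return center - 3 * sigma, center, center + 3 * sigma

    lcl, cl, ucl = control_limits([0.52, 0.55, 0.51, 0.54, 0.53, 0.61])
    print(f"LCL={lcl:.3f}  CL={cl:.3f}  UCL={ucl:.3f}")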
FITS Liberator: Image processing software
NASA Astrophysics Data System (ADS)
Lindberg Christensen, Lars; Nielsen, Lars Holm; Nielsen, Kaspar K.; Johansen, Teis; Hurt, Robert; de Martin, David
2012-06-01
The ESA/ESO/NASA FITS Liberator makes it possible to process and edit astronomical science data in the FITS format to produce stunning images of the universe. Formerly a plugin for Adobe Photoshop, the current version of FITS Liberator is a stand-alone application and no longer requires Photoshop. This image processing software makes it possible to create color images using raw observations from a range of telescopes; the FITS Liberator continues to support the FITS and PDS formats, preferred by astronomers and planetary scientists respectively, which enables data to be processed from a wide range of telescopes and planetary probes, including ESO's Very Large Telescope, the NASA/ESA Hubble Space Telescope, NASA's Spitzer Space Telescope, ESA's XMM-Newton telescope, and the Cassini-Huygens and Mars Reconnaissance Orbiter probes.
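The basic step such software performs, reading FITS pixel data and stretching it for display, can be sketched in a few lines. This example uses astropy rather than FITS Liberator itself, and 'image.fits' is a placeholder file name:

    # Read FITS pixel data and apply a simple linear stretch for display.
    import numpy as np
    from astropy.io import fits

    with fits.open("image.fits") as hdul:      # placeholder file name
        data = hdul[0].data.astype(float)

    # Percentile-based linear stretch, a common first pass before fine-tuning.
    lo, hi = np.percentile(data, (0.5, 99.5))
    scaled = np.clip((data - lo) / (hi - lo), 0.0, 1.0)
    display = (255 * scaled).astype(np.uint8)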
Web-Based Mapping Puts the World at Your Fingertips
NASA Technical Reports Server (NTRS)
2008-01-01
NASA's award-winning Earth Resources Laboratory Applications Software (ELAS) package was developed at Stennis Space Center. Since 1978, ELAS has been used worldwide for processing satellite and airborne sensor imagery data of the Earth's surface into readable and usable information. DATASTAR Inc., of Picayune, Mississippi, has used ELAS software in the DATASTAR Image Processing Exploitation (DIPEx) desktop and Internet image processing, analysis, and manipulation software. The new DIPEx Version III includes significant upgrades and improvements over its predecessor, among them worldwide geospatial dimensionality and numerous other enhancements that are seamlessly supported in the World Wide Web version of this true Web application.
Electrophoresis gel image processing and analysis using the KODAK 1D software.
Pizzonia, J
2001-06-01
The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
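The Gaussian-modeling idea mentioned above, recovering a band's true peak from a profile clipped by 8-bit saturation, can be sketched as follows. This is an illustrative fit on synthetic data, not the KODAK 1D algorithm:

    # Fit a Gaussian to a 1-D band profile, excluding saturated samples,
    # so a clipped peak can still be modeled; synthetic data only.
    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, amp, mu, sigma, offset):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

    x = np.arange(100, dtype=float)
    profile = gaussian(x, 300.0, 50.0, 6.0, 20.0)
    profile = np.minimum(profile, 255.0)          # 8-bit saturation clips the peak

    keep = profile < 255.0                        # fit only the unsaturated samples
    popt, _ = curve_fit(gaussian, x[keep], profile[keep],
                        p0=(profile.max(), x[np.argmax(profile)], 5.0, profile.min()))
    amp, mu, sigma, offset = popt
    band_area = amp * sigma * np.sqrt(2.0 * np.pi)   # area under the recovered peak

The fitted amplitude and area recover the information the clipped pixels lost, which is the essence of the correction discussed in the article.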
cisTEM, user-friendly software for single-particle image processing.
Grant, Timothy; Rohou, Alexis; Grigorieff, Nikolaus
2018-03-07
We have developed new open-source software called cisTEM (computational imaging system for transmission electron microscopy) for the processing of data for high-resolution electron cryo-microscopy and single-particle averaging. cisTEM features a graphical user interface that is used to submit jobs, monitor their progress, and display results. It implements a full processing pipeline including movie processing, image defocus determination, automatic particle picking, 2D classification, ab-initio 3D map generation from random parameters, 3D classification, and high-resolution refinement and reconstruction. Some of these steps implement newly-developed algorithms; others were adapted from previously published algorithms. The software is optimized to enable processing of typical datasets (2000 micrographs, 200k–300k particles) on a high-end, CPU-based workstation in half a day or less, comparable to GPU-accelerated processing. Jobs can also be scheduled on large computer clusters using flexible run profiles that can be adapted for most computing environments. cisTEM is available for download from cistem.org. © 2018, Grant et al.
The Western Aeronautical Test Range. Chapter 10 Tools
NASA Technical Reports Server (NTRS)
Knudtson, Kevin; Park, Alice; Downing, Robert; Sheldon, Jack; Harvey, Robert; Norcross, April
2011-01-01
The Western Aeronautical Test Range (WATR) staff at the NASA Dryden Flight Research Center is developing translation software called Chapter 10 Tools in response to challenges posed by post-flight processing of data files originating from various on-board digital recorders that follow the Range Commanders Council Inter-Range Instrumentation Group (IRIG) 106 Chapter 10 Digital Recording Standard but use differing interpretations of the Standard. The software will read the data files regardless of the vendor implementation of the source recorder, displaying data, identifying and correcting errors, and producing a data file that can be successfully processed post-flight.
Low-cost digital image processing at the University of Oklahoma
NASA Technical Reports Server (NTRS)
Harrington, J. A., Jr.
1981-01-01
Computer-assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and is dependent upon initial preprocessing of a LANDSAT computer-compatible tape using software developed for an IBM 370/158 computer. In-house preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general-purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and for image analysis using either of the two approaches to low-cost LANDSAT data processing are described.
Processing, mosaicking and management of the Monterey Bay digital sidescan-sonar images
Chavez, P.S.; Isbrecht, J.; Galanis, P.; Gabel, G.L.; Sides, S.C.; Soltesz, D.L.; Ross, Stephanie L.; Velasco, M.G.
2002-01-01
Sidescan-sonar imaging systems with digital capabilities have now been available for approximately 20 years. In this paper we present several of the digital image processing techniques developed by the U.S. Geological Survey (USGS) and used to apply intensity/radiometric and geometric corrections to, as well as to enhance and digitally mosaic, sidescan-sonar images of the Monterey Bay region. New software run by a WWW server was designed and implemented to allow very large image data sets, such as the digital mosaic, to be easily viewed interactively, including the ability to roam throughout the digital mosaic at the web site in either compressed or full 1-m resolution. The processing is separated into two stages: preprocessing and information extraction. In the preprocessing stage, sensor-specific algorithms are applied to correct for both geometric and intensity/radiometric distortions introduced by the sensor. This is followed by digital mosaicking of the track-line strips into quadrangle format, which can be used as input to either visual or digital image analysis and interpretation. An automatic seam removal procedure was used in combination with an interactive digital feathering/stenciling procedure to help minimize tone- or seam-matching problems between image strips from adjacent track-lines. The sidescan-sonar image processing package is part of the USGS Mini Image Processing System (MIPS) and has been designed to process data collected by any 'generic' digital sidescan-sonar imaging system. The USGS MIPS software, developed over the last 20 years as a public domain package, is available on the WWW at: http://terraweb.wr.usgs.gov/trs/software.html.
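The feathering step described above can be illustrated with a minimal sketch: blend the overlap between two adjacent strips with linearly varying weights so the seam ramps smoothly instead of jumping. Plain 2-D arrays and a purely linear ramp are assumed here, a simplification of the USGS procedure:

    # Linear feathering of the overlap between two image strips.
    import numpy as np

    def feather_overlap(left, right, overlap):
        """Blend the last `overlap` columns of `left` into the first
        `overlap` columns of `right` with linearly varying weights."""
        w = np.linspace(1.0, 0.0, overlap)                 # 1 -> 0 across the seam
        blend = w * left[:, -overlap:] + (1.0 - w) * right[:, :overlap]
        return np.hstack([left[:, :-overlap], blend, right[:, overlap:]])

    a = np.full((4, 10), 100.0)
    b = np.full((4, 10), 140.0)
    mosaic = feather_overlap(a, b, overlap=4)   # seam now ramps from 100 to 140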
Scalable software architecture for on-line multi-camera video processing
NASA Astrophysics Data System (ADS)
Camplani, Massimo; Salgado, Luis
2011-03-01
In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability, and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. In this paper, as a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and can easily work with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
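The Central Unit / Processing Unit split can be sketched with Python's multiprocessing module; class, queue, and variable names here are illustrative only, and the per-frame "detection" is a stand-in:

    # Toy supervisor/worker sketch of the Central Unit and PUs.
    import multiprocessing as mp
    import numpy as np

    def processing_unit(cam_id, frames, results):
        """One PU: pull frames (standing in for acquisition) and process them."""
        while True:
            frame = frames.get()
            if frame is None:                     # shutdown signal from the supervisor
                break
            results.put((cam_id, float(frame.sum())))   # stand-in for object detection

    if __name__ == "__main__":
        frames, results = mp.Queue(), mp.Queue()
        pus = [mp.Process(target=processing_unit, args=(i, frames, results))
               for i in range(2)]                 # the Central Unit supervises the PUs
        for p in pus:
            p.start()
        for _ in range(4):
            frames.put(np.zeros((480, 640)))      # four dummy frames
        for _ in pus:
            frames.put(None)
        for _ in range(4):
            print(results.get())                  # drain detections as they arrive
        for p in pus:
            p.join()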
Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T
2007-03-01
Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.
Making Visible the Coding Process: Using Qualitative Data Software in a Post-Structural Study
ERIC Educational Resources Information Center
Ryan, Mary
2009-01-01
Qualitative research methods require transparency to ensure the "trustworthiness" of the data analysis. The intricate processes of organising, coding and analysing the data are often rendered invisible in the presentation of the research findings, which requires a "leap of faith" for the reader. Computer assisted data analysis software can be used…
Inspection design using 2D phased array, TFM and cueMAP software
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGilp, Ailidh; Dziewierz, Jerzy; Lardner, Tim
2014-02-18
A simulation suite, cueMAP, has been developed to facilitate the design of inspection processes and sparse 2D array configurations. At the core of cueMAP is a Total Focusing Method (TFM) imaging algorithm that enables computer-assisted design of ultrasonic inspection scenarios, including the design of bespoke array configurations to match the inspection criteria. This in-house developed TFM code allows for interactive evaluation of image quality indicators of ultrasonic imaging performance when utilizing a 2D phased array working in FMC/TFM mode. The cueMAP software uses a series of TFM images to build a map of the resolution, contrast, and sensitivity of imaging performance for a simulated reflector swept across the inspection volume. The software takes into account probe properties, wedge or water standoff, and effects of specimen curvature. In the validation process for this new software package, two 2D arrays have been evaluated on 304n stainless steel samples, typical of the primary circuit in nuclear plants. Thick-section samples have been inspected using a 1 MHz 2D matrix array. Due to the processing efficiency of the software, the data collected from these array configurations has been used to investigate the influence of sub-aperture operation on inspection performance.
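TFM itself is a delay-and-sum over full matrix capture (FMC) data: for each image point, sum the A-scan samples at the transmit-to-point plus point-to-receive travel times over all element pairs. A bare-bones sketch for one pixel, with assumed array geometry and names (not taken from cueMAP):

    # Delay-and-sum TFM for a single image point.
    import numpy as np

    def tfm_pixel(fmc, elem_xz, px, pz, c, fs):
        """fmc[tx, rx, t]: full matrix capture A-scans; elem_xz[n] = (x, z)
        of element n; (px, pz): image point; c: wave speed; fs: sample rate."""
        n_el, _, n_t = fmc.shape
        dist = np.hypot(elem_xz[:, 0] - px, elem_xz[:, 1] - pz)   # element-to-pixel
        value = 0.0
        for tx in range(n_el):
            for rx in range(n_el):
                t_idx = int(round((dist[tx] + dist[rx]) / c * fs))
                if t_idx < n_t:
                    value += fmc[tx, rx, t_idx]
        return value

A practical implementation vectorizes this over all pixels; sweeping a simulated reflector through the volume and evaluating each resulting image is what yields the resolution, contrast, and sensitivity maps described above.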
Software for X-Ray Images Calculation of Hydrogen Compression Device in Megabar Pressure Range
NASA Astrophysics Data System (ADS)
Egorov, Nikolay; Bykov, Alexander; Pavlov, Valery
2007-06-01
Software for x-ray image simulation is described. The software is part of an x-ray method used to investigate the equation of state of hydrogen in the megabar pressure range. A graphical interface clearly and simply allows users to input the data for the x-ray image calculation (properties of the studied device, parameters of the x-ray radiation source, parameters of the x-ray radiation recorder, and the experiment geometry), to display the calculation results, and to transmit them efficiently to other software for processing. The calculation time is minimized, which makes it possible to perform calculations in a dialogue regime. The software is written in the MATLAB system.
Nolden, Marco; Zelzer, Sascha; Seitel, Alexander; Wald, Diana; Müller, Michael; Franz, Alfred M; Maleike, Daniel; Fangerau, Markus; Baumhauer, Matthias; Maier-Hein, Lena; Maier-Hein, Klaus H; Meinzer, Hans-Peter; Wolf, Ivo
2013-07-01
The Medical Imaging Interaction Toolkit (MITK) has been available as open-source software for almost 10 years now. In this period the requirements of software systems in the medical image processing domain have become increasingly complex. The aim of this paper is to show how MITK evolved into a software system that is able to cover all steps of a clinical workflow including data retrieval, image analysis, diagnosis, treatment planning, intervention support, and treatment control. MITK provides modularization and extensibility on different levels. In addition to the original toolkit, a module system, micro services for small, system-wide features, a service-oriented architecture based on the Open Services Gateway initiative (OSGi) standard, and an extensible and configurable application framework allow MITK to be used, extended and deployed as needed. A refined software process was implemented to deliver high-quality software, ease the fulfillment of regulatory requirements, and enable teamwork in mixed-competence teams. MITK has been applied by a worldwide community and integrated into a variety of solutions, either at the toolkit level or as an application framework with custom extensions. The MITK Workbench has been released as a highly extensible and customizable end-user application. Optional support for tool tracking, image-guided therapy, diffusion imaging as well as various external packages (e.g. CTK, DCMTK, OpenCV, SOFA, Python) is available. MITK has also been used in several FDA/CE-certified applications, which demonstrates the high-quality software and rigorous development process. MITK provides a versatile platform with a high degree of modularization and interoperability and is well suited to meet the challenging tasks of today's and tomorrow's clinically motivated research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoaf, S.; APS Engineering Support Division
A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
New Software for Ensemble Creation in the Spitzer-Space-Telescope Operations Database
NASA Technical Reports Server (NTRS)
Laher, Russ; Rector, John
2004-01-01
Some of the computer pipelines used to process digital astronomical images from NASA's Spitzer Space Telescope require multiple input images, in order to generate high-level science and calibration products. The images are grouped into ensembles according to well documented ensemble-creation rules by making explicit associations in the operations Informix database at the Spitzer Science Center (SSC). The advantage of this approach is that a simple database query can retrieve the required ensemble of pipeline input images. New and improved software for ensemble creation has been developed. The new software is much faster than the existing software because it uses pre-compiled database stored-procedures written in Informix SPL (SQL programming language). The new software is also more flexible because the ensemble creation rules are now stored in and read from newly defined database tables. This table-driven approach was implemented so that ensemble rules can be inserted, updated, or deleted without modifying software.
Digital Earth Watch: Investigating the World with Digital Cameras
NASA Astrophysics Data System (ADS)
Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.
2015-12-01
Every digital camera, including the smartphone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website that lets students explore light, color and pixels, manipulate color in images, and make measurements will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org and Picture Post: http://picturepost.unh.edu
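As an example of the pixel-level measurements such tools teach, an "excess green" index can be computed from an ordinary photo in a few lines; 'post.jpg' is a placeholder and this is not the DEW software itself:

    # Excess-green vegetation index (2G - R - B) from a digital photo.
    import numpy as np
    from PIL import Image

    rgb = np.asarray(Image.open("post.jpg").convert("RGB"), dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    excess_green = 2 * g - r - b                  # large where foliage is green
    print("mean greenness:", excess_green.mean())

Tracking this kind of index across repeat photographs of the same Picture Post scene is one simple way to quantify change over time.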
Desktop publishing and medical imaging: paper as hardcopy medium for digital images.
Denslow, S
1994-08-01
Desktop-publishing software and hardware have progressed to the point that many widely used word-processing programs are capable of printing high-quality digital images with many shades of gray from black to white. Accordingly, it should be relatively easy to print digital medical images on paper for reports, instructional materials, and research notes. Components were assembled that were necessary for extracting image data from medical imaging devices and converting the data to a form usable by word-processing software. A system incorporating these components was implemented in a medical setting and has been operating for 18 months. The use of this system by medical staff has been monitored.
NASA Technical Reports Server (NTRS)
Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.
1982-01-01
Documentation of the preliminary software developed as a framework for a generalized integrated robotic system simulation is presented. The program structure is composed of three major functions controlled by a program executive. The three major functions are: system definition, analysis tools, and post processing. The system definition function handles user input of system parameters and definition of the manipulator configuration. The analysis tools function handles the computational requirements of the program. The post processing function allows for more detailed study of the results of analysis tool function executions. Also documented is the manipulator joint model software to be used as the basis of the manipulator simulation which will be part of the analysis tools capability.
[Development of a Text-Data Based Learning Tool That Integrates Image Processing and Displaying].
Shinohara, Hiroyuki; Hashimoto, Takeyuki
2015-01-01
We developed a text-data based learning tool that integrates image processing and display using Excel. The knowledge required for programming this tool is limited to using absolute, relative, and composite cell references and learning approximately 20 mathematical functions available in Excel. The new tool is capable of resolution translation, geometric transformation, spatial-filter processing, Radon transform, Fourier transform, convolutions, correlations, deconvolutions, wavelet transform, mutual information, and simulation of proton density-, T1-, and T2-weighted MR images. The processed images of 128 × 128 or 256 × 256 pixels are observed directly within Excel worksheets without using any particular image display software. The results of image processing using this tool were compared with those obtained using the C language, and the new tool was judged to have sufficient accuracy to be practically useful. The images displayed on Excel worksheets were compared with images shown by binary-data display software; this comparison indicated that the image quality in the Excel worksheets was nearly equal to the latter in visual impression. Since image processing is performed on text data, the process is visible and can be related directly to the mathematical equations within the program. We conclude that the newly developed tool is adequate as a computer-assisted learning tool for use in medical image processing.
The Preliminary Results of GMSTech: A Software Development for Microseismic Characterization
NASA Astrophysics Data System (ADS)
Rohaman, Maman; Suhendi, Cahli; Verdhora Ry, Rexha; Sugiartono Prabowo, Billy; Widiyantoro, Sri; Nugraha, Andri Dian; Yudistira, Tedi; Mujihardi, Bambang
2017-04-01
The processing of microseismic data requires reliable software for imaging the subsurface conditions related to occurring microseismicity. In general, currently available software is specific to a certain processing module and developed by different developers. Software with integrated processing modules offers better value because users can work more easily and quickly. We developed GMSTech (Ganesha Microseismic Technology), C#-based stand-alone software consisting of several modules for the processing of microseismic data. Its function is to solve a non-linear inverse problem and image the subsurface. The C# library is supported by ILNumerics to reduce time consumption and give good visualization. In this preliminary result, we present three developed modules: (1) hypocenter determination, (2) moment magnitude calculation, and (3) 3D seismic tomography. In the first module, we provide four methods for locating the microseismic events that can be chosen by the user independently: the simulated annealing method, the guided grid-search method, Geiger's method, and joint hypocenter determination (JHD). The second module can be used for calculating moment magnitude using the Brune method and for estimating the released energy of the event. Finally, we also provide the module of 3-D seismic tomography for imaging velocity structures based on delay-time tomography. We demonstrated the software using both synthetic data and real data from a geothermal field in Indonesia. The results for all modules are reliable, as reviewed statistically by RMS error. We will keep examining the software using other sets of data and developing further processing modules.
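The plain grid-search flavor of hypocenter determination can be sketched compactly: scan candidate locations, compute travel times in an assumed constant-velocity medium, and keep the location that minimizes the RMS residual. Station coordinates, picks, and velocity below are made up, and this is a much-simplified stand-in for GMSTech's solvers:

    # Toy grid-search hypocenter location in a constant-velocity medium.
    import numpy as np

    stations = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0],
                         [0.0, 4.0, 0.0], [4.0, 4.0, 0.0]])   # station x, y, z in km
    t_obs = np.array([1.10, 0.95, 0.98, 0.80])                # picked arrivals, s
    v = 3.0                                                   # assumed velocity, km/s

    best, best_rms = None, np.inf
    for x in np.arange(0.0, 5.0, 0.25):
        for y in np.arange(0.0, 5.0, 0.25):
            for z in np.arange(0.0, 5.0, 0.25):
                t_calc = np.linalg.norm(stations - [x, y, z], axis=1) / v
                t0 = (t_obs - t_calc).mean()        # origin time: mean residual
                rms = np.sqrt(((t_obs - t_calc - t0) ** 2).mean())
                if rms < best_rms:
                    best, best_rms = (x, y, z, t0), rms
    print("hypocenter (x, y, z, t0):", best, "RMS:", best_rms)

A "guided" grid search of the kind the abstract names would refine the grid around the current best cell rather than scanning exhaustively.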
Selections from 2017: Image Processing with AstroImageJ
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-12-01
Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves
Published January 2017
[Figure: The AIJ image display. A wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick access icons, and interactive histogram. Collins et al. 2017]
Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data.
Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it. AstroImageJ, as a GUI-driven, easily installed, public-domain tool, is a uniquely accessible tool for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data.
Some features of AstroImageJ (as reported by Astrobites):
Image calibration: generate master flat, dark, and bias frames
Image arithmetic: combine images via subtraction, addition, division, multiplication, etc.
Stack editing: easily perform operations on a series of images
Image stabilization and image alignment features
Precise coordinate converters: calculate Heliocentric and Barycentric Julian Dates
WCS coordinates: determine precisely where a telescope was pointed for an image by plate-solving using Astrometry.net
Macro and plugin support: write your own macros
Multi-aperture photometry with interactive light curve fitting: plot light curves of a star in real time
Citation: Karen A. Collins et al 2017 AJ 153 77. doi:10.3847/1538-3881/153/2/77
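The multi-aperture photometry that AstroImageJ automates boils down to summing counts in a circular aperture and subtracting a sky estimate from a surrounding annulus. A hedged numpy sketch on a synthetic image (not AIJ's implementation):

    # Aperture photometry: source counts minus annulus-estimated sky.
    import numpy as np

    def aperture_flux(img, cx, cy, r_ap, r_in, r_out):
        yy, xx = np.indices(img.shape)
        rr = np.hypot(xx - cx, yy - cy)
        sky = np.median(img[(rr >= r_in) & (rr < r_out)])   # per-pixel background
        in_ap = rr < r_ap
        return img[in_ap].sum() - sky * in_ap.sum()

    img = np.random.default_rng(0).poisson(100.0, (64, 64)).astype(float)
    img[30:34, 30:34] += 500.0                              # synthetic star
    print("net counts:", aperture_flux(img, 31.5, 31.5, 6, 10, 15))

Repeating this measurement across an image series and plotting the net counts against time is, in essence, the light curve AIJ draws in real time.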
Automated Diatom Analysis Applied to Traditional Light Microscopy: A Proof-of-Concept Study
NASA Astrophysics Data System (ADS)
Little, Z. H. L.; Bishop, I.; Spaulding, S. A.; Nelson, H.; Mahoney, C.
2017-12-01
Diatom identification and enumeration by high-resolution light microscopy is required for many areas of research and water quality assessment. Such analyses, however, are both expertise- and labor-intensive. These challenges motivate the need for an automated process to efficiently and accurately identify and enumerate diatoms. Improvements in particle analysis software have increased the likelihood that diatom enumeration can be automated. VisualSpreadsheet software provides a possible solution for automated particle analysis of high-resolution light microscope diatom images. We applied the software, independent of its complementary FlowCam hardware, to automated analysis of light microscope images containing diatoms. Through numerous trials, we arrived at threshold settings that correctly segmented 67% of the total possible diatom valves and fragments from broad fields of view. (Of 183 light microscope images containing 255 diatom particles, 216 diatom valves and fragments of valves were processed, with 170 properly analyzed and focused upon by the software.) Manual analysis of the images yielded 255 particles in 400 seconds, whereas the software yielded a total of 216 particles in 68 seconds, highlighting the software's approximately five-fold efficiency advantage in particle analysis time. As in past efforts, incomplete or incorrect recognition was found for images with multiple valves in contact or valves with little contrast. The software has the potential to be an effective tool in assisting taxonomists with diatom enumeration by completing a large portion of the analyses. Benefits and limitations of the approach are presented to allow for development of future work in image analysis and automated enumeration of traditional light microscope images containing diatoms.
TU-AB-204-01: Device Approval Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delfino, J.
The responsibilities of the Food and Drug Administration (FDA) have increased since the inception of the Food and Drugs Act in 1906. Medical devices first came under comprehensive regulation with the passage of the 1938 Food, Drug, and Cosmetic Act. In 1971, FDA also took on the responsibility for consumer protection against unnecessary exposure to radiation-emitting devices for home and occupational use. However, it was not until 1976, under the Medical Device Regulation Act, that the FDA was responsible for the safety and effectiveness of medical devices. This session will be presented by the Division of Radiological Health (DRH) and the Division of Imaging, Diagnostics, and Software Reliability (DIDSR) from the Center for Devices and Radiological Health (CDRH) at the FDA. The symposium will discuss how we protect and promote public health, with a focus on medical physics applications organized into four areas: pre-market device review, post-market surveillance, device compliance, and current regulatory research efforts and partnerships with other organizations. The pre-market session will summarize the pathways FDA uses to regulate the investigational use and commercialization of diagnostic imaging and radiation therapy medical devices in the US, highlighting resources available to assist investigators and manufacturers. The post-market session will explain the post-market surveillance and compliance activities FDA performs to monitor the safety and effectiveness of devices on the market. The third session will describe research efforts that support the regulatory mission of the Agency. An overview of our regulatory research portfolio to advance our understanding of medical physics and imaging technologies and approaches to their evaluation will be discussed. Lastly, mechanisms that FDA uses to seek public input and promote collaborations with professional, government, and international organizations, such as AAPM, the International Electrotechnical Commission (IEC), Image Gently, and the Quantitative Imaging Biomarkers Alliance (QIBA), to fulfill FDA's mission will be discussed.
Learning Objectives:
1. Understand FDA's pre-market and post-market review processes for medical devices.
2. Understand FDA's current regulatory research activities in the areas of medical physics and imaging products.
3. Understand how involvement with AAPM and other organizations can also help to promote innovative, safe, and effective medical devices.
Disclosure: J. Delfino, nothing to disclose.
TU-AB-204-00: CDRH/FDA Regulatory Processes and Device Science Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Session description identical to TU-AB-204-01 above.
NASA Technical Reports Server (NTRS)
Matthews, Christine G.; Posenau, Mary-Anne; Leonard, Desiree M.; Avis, Elizabeth L.; Debure, Kelly R.; Stacy, Kathryn; Vonofenheim, Bill
1992-01-01
The intent is to provide an introduction to the image processing capabilities available at the Langley Research Center (LaRC) Central Scientific Computing Complex (CSCC). Various image processing software components are described. Information is given concerning the use of these components in the Data Visualization and Animation Laboratory at LaRC.
a Photogrammetric Pipeline for the 3d Reconstruction of Cassis Images on Board Exomars Tgo
NASA Astrophysics Data System (ADS)
Simioni, E.; Re, C.; Mudric, T.; Pommerol, A.; Thomas, N.; Cremonese, G.
2017-07-01
CaSSIS (Colour and Stereo Surface Imaging System) is the stereo imaging system onboard the European Space Agency and ROSCOSMOS ExoMars Trace Gas Orbiter (TGO), which was launched on 14 March 2016 and entered an elliptical Mars orbit on 19 October 2016. During the first bounded orbits, CaSSIS returned its first multiband images, taken on 22 and 26 November 2016. The telescope acquired 11 images, each composed of 30 framelets, of the Martian surface near the Hebes Chasma and Noctis Labyrinthus regions, reaching a distance of 250 km from the surface at closest approach. Despite the eccentricity of this first orbit, CaSSIS provided one stereo pair with a mean ground resolution of 6 m from a mean distance of 520 km. The team at the Astronomical Observatory of Padova (OAPD-INAF) is involved in several stereo-oriented missions and is developing software for the generation of Digital Terrain Models from CaSSIS images. The software will then be adapted for other projects involving stereo camera systems. To compute accurate 3D models, several sequential methods and tools have been developed. The preliminary pipeline provides the generation of rectified images from the CaSSIS framelets, a matching core, and post-processing methods. The software includes, in particular, automatic tie-point detection by the Speeded Up Robust Features (SURF) operator, an initial search for correspondences through the Normalized Cross-Correlation (NCC) algorithm, and the Adaptive Least Squares Matching (LSM) algorithm in a hierarchical approach. This work shows a preliminary DTM generated from the first CaSSIS stereo images.
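The NCC correspondence search in the pipeline can be illustrated with a didactic sketch: slide a template over the search image and score each window by normalized cross-correlation. This is a deliberately naive version, not the OAPD-INAF code:

    # Exhaustive normalized cross-correlation template matching.
    import numpy as np

    def ncc_match(image, template):
        th, tw = template.shape
        t = (template - template.mean()) / (template.std() + 1e-12)
        best, best_score = None, -np.inf
        for i in range(image.shape[0] - th + 1):
            for j in range(image.shape[1] - tw + 1):
                win = image[i:i + th, j:j + tw]
                w = (win - win.mean()) / (win.std() + 1e-12)
                score = float((w * t).mean())          # NCC score in [-1, 1]
                if score > best_score:
                    best, best_score = (i, j), score
        return best, best_score

In a hierarchical pipeline such as the one described, this coarse NCC match seeds the sub-pixel adaptive least-squares refinement.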
NASA Astrophysics Data System (ADS)
Noordmans, Herke Jan; de Roode, Rowland; Verdaasdonk, Rudolf
2007-03-01
Multi-spectral images of human tissue taken in vivo often contain image alignment problems, as patients have difficulty retaining their posture during the acquisition time of 20 seconds. Previously, attempts were made to correct motion errors with image registration software developed for MR or CT data, but these algorithms proved too slow and error-prone for practical use with multi-spectral images. A new software package has been developed which allows the user to play a decisive role in the registration process, as the user can monitor the progress of the registration continuously and force it in the right direction when it starts to fail. The software efficiently exploits video card hardware to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the card. To show the feasibility of this new registration process, the software was applied in clinical practice to evaluate dosimetry for psoriasis and KTP laser treatment. The microscopic differences between images of normal skin and skin exposed to UV light proved that an affine registration step, including zooming and slanting, is critical for a subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of the potential of PC video card hardware greatly improves the speed of multi-spectral image registration.
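The critical affine pre-registration step (capable of zooming and slanting) can be sketched with OpenCV's ECC maximization; this is an illustrative stand-in for the authors' video-card-accelerated package, and the smoothing size and termination criteria below are ordinary defaults rather than their settings:

    # Affine pre-registration of one spectral band to a reference band.
    import cv2
    import numpy as np

    def affine_register(reference, moving):
        ref = reference.astype(np.float32)
        mov = moving.astype(np.float32)
        warp = np.eye(2, 3, dtype=np.float32)              # start from the identity
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        _, warp = cv2.findTransformECC(ref, mov, warp, cv2.MOTION_AFFINE,
                                       criteria, None, 5)
        # Resample the moving band into the reference frame.
        return cv2.warpAffine(mov, warp, mov.shape[::-1],
                              flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)

An elastic (non-rigid) refinement would follow this step, which is exactly the ordering the abstract argues for.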
Operational Analysis of Time-Optimal Maneuvering for Imaging Spacecraft
2013-03-01
Time-optimal maneuvering is analyzed for the Singapore-developed X-SAT imaging spacecraft. The analysis is facilitated through the use of AGI's Systems Tool Kit (STK) software and an Analytic Hierarchy Process (AHP)-based approach.
PANDA: a pipeline toolbox for analyzing brain diffusion images.
Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang
2013-01-01
Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice in in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI to diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel-level, the atlas-level and the Tract-Based Spatial Statistics (TBSS)-level and can finish the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to be interactive and to adjust the input/output settings, as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
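The FA and MD metrics PANDA produces follow standard formulas on the diffusion-tensor eigenvalues; a minimal numpy rendering of those formulas (not PANDA's own code):

    # Fractional anisotropy and mean diffusivity from tensor eigenvalues.
    import numpy as np

    def fa_md(evals):
        l1, l2, l3 = evals
        md = (l1 + l2 + l3) / 3.0
        fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                     / (l1 ** 2 + l2 ** 2 + l3 ** 2))
        return fa, md

    print(fa_md((1.7e-3, 0.3e-3, 0.3e-3)))    # elongated tensor -> FA close to 1

PANDA's value is in running the whole chain, from DICOM/NIfTI through tensor fitting to such per-voxel maps and network construction, unattended and in parallel across subjects.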
System for Continuous Delivery of MODIS Imagery to Internet Mapping Applications
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2008-01-01
This software represents a complete, unsupervised processing chain that generates a continuously updating global image of the Earth from the most recent available MODIS Level 1B scenes. The software constantly updates a global image of the Earth at 250 m per pixel.
Automated daily quality control analysis for mammography in a multi-unit imaging center.
Sundell, Veli-Matti; Mäkelä, Teemu; Meaney, Alexander; Kaasalainen, Touko; Savolainen, Sauli
2018-01-01
Background: The high requirements for mammography image quality necessitate a systematic quality assurance process. Digital imaging allows automation of the image quality analysis, which can potentially improve repeatability and objectivity compared to a visual evaluation made by the users. Purpose: To develop an automatic image quality analysis software for daily mammography quality control in a multi-unit imaging center. Material and Methods: An automated image quality analysis software using the discrete wavelet transform and multiresolution analysis was developed for the American College of Radiology accreditation phantom. The software was validated by analyzing 60 randomly selected phantom images from six mammography systems and 20 phantom images with different dose levels from one mammography system. The results were compared to a visual analysis made by four reviewers. Additionally, long-term image quality trends of a full-field digital mammography system and a computed radiography mammography system were investigated. Results: The automated software produced feature detection levels comparable to visual analysis. The agreement was good in the case of fibers, while the software detected somewhat more microcalcifications and characteristic masses. Long-term follow-up via a quality assurance web portal demonstrated the feasibility of using the software for monitoring the performance of mammography systems in a multi-unit imaging center. Conclusion: Automated image quality analysis enables monitoring the performance of digital mammography systems in an efficient, centralized manner.
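The discrete-wavelet-transform decomposition underlying the analysis can be sketched with PyWavelets; the phantom image here is a random stand-in, and the actual detection logic is not reproduced:

    # Multiresolution decomposition of a (stand-in) phantom image.
    import numpy as np
    import pywt

    phantom = np.random.default_rng(1).normal(size=(256, 256))   # stand-in image
    coeffs = pywt.wavedec2(phantom, "db2", level=3)

    approx = coeffs[0]               # coarse approximation: large masses live here
    for lvl, (ch, cv, cd) in enumerate(coeffs[1:], start=1):
        # Detail bands at successively finer scales; fibers and
        # microcalcifications appear as localized large coefficients.
        print(f"detail level {lvl}: energy {np.sum(ch**2 + cv**2 + cd**2):.1f}")

Scoring the detail coefficients in the known phantom feature regions, scale by scale, is one plausible way such a tool can mimic a reviewer's per-feature detection counts.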
Training system for digital mammographic diagnoses of breast cancer
NASA Astrophysics Data System (ADS)
Thomaz, R. L.; Nirschl Crozara, M. G.; Patrocinio, A. C.
2013-03-01
As technology evolves, analog mammography systems are being replaced by digital systems. The digital system uses video monitors to display mammographic images instead of the screen-film and negatoscope previously used for analog images. The change in the way mammographic images are visualized may require a different approach for training health care professionals in diagnosing breast cancer with digital mammography. Thus, this paper presents a computational approach to training health care professionals that provides a smooth transition between analog and digital technology, together with training in using the advantages of digital image processing tools to diagnose breast cancer. This computational approach consists of software in which it is possible to open, process, and diagnose a full mammogram case from a database containing the digital images of each of the mammographic views. The software communicates with a gold-standard digital mammogram case database. This database contains the digital images in Tagged Image File Format (TIFF) and the respective diagnoses according to BI-RADS™; these files are read by the software and shown to the user as needed. There are also digital image processing tools that can be used to provide better visualization of each single image. The software was built on a minimalist, user-friendly interface concept intended to help in the smooth transition. It also has an interface for inputting diagnoses from the professional being trained, providing result feedback. The system has already been completed but has not yet been applied to any professional training.
An SPM12 extension for multiple sclerosis lesion segmentation
NASA Astrophysics Data System (ADS)
Roura, Eloy; Oliver, Arnau; Cabezas, Mariano; Valverde, Sergi; Pareto, Deborah; Vilanova, Joan C.; Ramió-Torrentà, Lluís.; Rovira, Àlex; Lladó, Xavier
2016-03-01
Purpose: Magnetic resonance imaging is nowadays the hallmark for diagnosing multiple sclerosis (MS), which is characterized by white matter lesions. Several approaches have recently been presented to tackle the lesion segmentation problem, but none has been accepted as a standard tool in daily clinical practice. In this work we present another tool that automatically segments white matter lesions, outperforming current state-of-the-art approaches. Methods: This work is an extension of Roura et al. [1], where external and platform-dependent pre-processing libraries (brain extraction, noise reduction and intensity normalization) were required to achieve optimal performance. Here we have updated and included all these required pre-processing steps in a single framework (the SPM software). Therefore, no external tools are needed to achieve the desired segmentation results. Besides, we have changed the working space from T1w to FLAIR, reducing interpolation errors produced in the registration process from FLAIR to T1w space. Finally, a post-processing constraint based on shape and location has been added to reduce false positive detections. Results: The evaluation of the tool has been done on 24 MS patients. Qualitative and quantitative results are shown with both approaches in terms of lesion detection and segmentation. Conclusion: We have simplified both installation and implementation of the approach, providing a multiplatform tool integrated into the SPM software which relies only on T1w and FLAIR images. With this new version we have reduced the computation time of the previous approach while maintaining its performance.
Design and Applications of Rapid Image Tile Producing Software Based on Mosaic Dataset
NASA Astrophysics Data System (ADS)
Zha, Z.; Huang, W.; Wang, C.; Tang, D.; Zhu, L.
2018-04-01
Map tile technology is widely used in web geographic information services, and efficient map tile production is a key technology for rapid web delivery of imagery. In this paper, rapid image tile production software based on a mosaic dataset is designed and the tile-production workflow is given. Key technologies such as cluster processing, map representation, tile checking, tile conversion, and in-memory compression are discussed. Implemented in software and tested on actual image data, the results show that this software has a high degree of automation, effectively reduces the number of I/O operations, and improves tile production efficiency. Moreover, manual operations are reduced significantly.
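For orientation, the arithmetic at the heart of any web tile scheme maps a coordinate to a tile index per zoom level. The standard Web-Mercator ("slippy map") convention is shown below; the paper's own tiling scheme may differ:

    # Standard Web-Mercator tile indexing: lon/lat -> (column, row) at a zoom.
    import math

    def lonlat_to_tile(lon, lat, zoom):
        n = 2 ** zoom
        x = int((lon + 180.0) / 360.0 * n)
        y = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
        return x, y

    print(lonlat_to_tile(114.3, 30.6, 10))        # tile column and row at zoom 10

Because each zoom level quadruples the tile count, batching tiles through a cluster and compressing them in memory, as the paper describes, is what keeps production times practical.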
Ghosh, Adarsh; Singh, Tulika; Singla, Veenu; Bagga, Rashmi; Khandelwal, Niranjan
2017-12-01
Apparent diffusion coefficient (ADC) maps are usually generated by built-in software provided by the MRI scanner vendors; however, various open-source postprocessing software packages are available for image manipulation and parametric map generation. The purpose of this study is to establish the reproducibility of absolute ADC values obtained using different postprocessing software programs. DW images with three b values were obtained with a 1.5-T MRI scanner, and the trace images were obtained. ADC maps were automatically generated by the in-line software provided by the vendor during image generation and were also separately generated with postprocessing software. These ADC maps were compared on the basis of ROIs using the paired t test, Bland-Altman plot, mountain plot, and Passing-Bablok regression plot. There was a statistically significant difference in the mean ADC values obtained from the different postprocessing software programs when the same baseline trace DW images were used for ADC map generation. If ADC values are to be used as a quantitative cutoff for histologic characterization of tissues, standardization of the postprocessing algorithm is essential across processing software packages, especially in view of the implementation of vendor-neutral archiving.
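One common way an ADC map is computed, and exactly the kind of step where implementations can diverge, is a per-voxel log-linear fit of S(b) = S0 * exp(-b * ADC) across the acquired b values. A single-voxel sketch with made-up signals:

    # Log-linear ADC fit for one voxel across three b values.
    import numpy as np

    b_values = np.array([0.0, 500.0, 1000.0])     # s/mm^2
    signals = np.array([1000.0, 640.0, 410.0])    # one voxel's trace DWI signals

    slope, _ = np.polyfit(b_values, np.log(signals), 1)
    adc = -slope                                  # mm^2/s
    print(f"ADC = {adc:.2e} mm^2/s")

Vendors and open-source packages can differ in which b values they include, how they weight the fit, and how they handle noise floors, which is consistent with the discrepancies this study reports.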
Software architecture for intelligent image processing using Prolog
NASA Astrophysics Data System (ADS)
Jones, Andrew C.; Batchelor, Bruce G.
1994-10-01
We describe a prototype system for interactive image processing using Prolog, implemented by the first author on an Apple Macintosh computer. This system is inspired by Prolog+, but differs from it in two particularly important respects. The first is that whereas Prolog+ assumes the availability of dedicated image processing hardware, with which the Prolog system communicates, our present system implements image processing functions in software using the C programming language. The second difference is that although our present system supports Prolog+ commands, these are implemented in terms of lower-level Prolog predicates which provide a more flexible approach to image manipulation. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image-processing functions, and the interface between these functions and the Prolog system. We also explain how the Prolog+ commands have been implemented. The system described in this paper is a fairly early prototype, and we outline how we intend to develop the system, a task which is expedited by the extensible architecture we have implemented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, X; Liu, L; Xing, L
Purpose: Visualization and processing of medical images and radiation treatment plan evaluation have traditionally been constrained to local workstations with limited computation power and limited ability for data sharing and software updates. We present a web-based image processing and planning evaluation platform (WIPPEP) for radiotherapy applications with high efficiency, ubiquitous web access, and real-time data sharing. Methods: This software platform consists of three parts: web server, image server, and computation server. The independent servers communicate with each other through HTTP requests. The web server is the key component: it provides visualization and the user interface through front-end web browsers and relays information to the backend to process user requests. The image server serves as a PACS system. The computation server performs the actual image processing and dose calculation. The web server backend is developed using Java Servlets and the frontend is developed using HTML5, JavaScript, and jQuery. The image server is based on the open-source DCM4CHEE PACS system. The computation server can be written in any programming language as long as it can send and receive HTTP requests. Our computation server was implemented in Delphi, Python, and PHP, which can process data directly or via a C++ program DLL. Results: This software platform runs on a 32-core CPU server that virtually hosts the web server, image server, and computation servers separately. Users can visit our internal website with a Chrome browser, select a specific patient, visualize images and RT structures belonging to that patient, and perform image segmentation on the Delphi computation server and Monte Carlo dose calculation on the Python or PHP computation server. Conclusion: We have developed a web-based image processing and plan evaluation platform prototype for radiotherapy. This system has clearly demonstrated the feasibility of performing image processing and plan evaluation through a web browser and exhibits potential for future cloud-based radiotherapy.
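The three-tier design (web server, image server, and computation server talking over HTTP) is easy to prototype. Below is a minimal sketch of a computation-server endpoint using only the Python standard library; the endpoint behavior and JSON payload are hypothetical, not the platform's actual protocol:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class ComputeHandler(BaseHTTPRequestHandler):
    """Accepts a JSON task from the web-server backend and returns a result."""
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        task = json.loads(self.rfile.read(length) or b"{}")
        # ... run segmentation / dose calculation here ...
        body = json.dumps({"status": "done",
                           "task": task.get("name", "unknown")}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ComputeHandler).serve_forever()
```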
ERIC Educational Resources Information Center
Greenberg, Richard
1998-01-01
Describes the Image Processing for Teaching (IPT) project, which provides digital image processing to excite students about science and mathematics as they use research-quality software on microcomputers. Provides information on IPT, whose dissemination components have included widespread teacher education and curriculum-based materials…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-09
... With Image Processing Systems, Components Thereof, and Associated Software; Notice of Commission... importation of certain electronic devices with image processing systems, components thereof, and associated... direct infringement is asserted and the accused article does not meet every limitation of the asserted...
No filter: A characterization of #pharmacist posts on Instagram.
Hindman, F Mark; Bukowitz, Alison E; Reed, Brent N; Mattingly, T Joseph
The primary objective was to characterize the underlying intent of Instagram posts using the hashtag metadata term "#pharmacist" over a 1-year period. The secondary objective was to determine whether statistically significant relationships existed between the categories and the two dichotomous variables tested: self-portrayed images and relation to health care. Retrospective, cross-sectional, mixed-methods, exploratory, descriptive study. A review of available Instagram posts using the hashtag metadata "#pharmacist" from November 4, 2014, to November 3, 2015. Data were collected using software provided by NEXT Analytics. A sample of 14 random days was selected. Six hundred sixty-one Instagram posts containing "#pharmacist" in the caption. Categorization of post (including both picture and primary caption), self-portrayed images (i.e., "selfie"), and health care-related images. One thousand three hundred thirty-eight posts were collected from the 14-day sample. Of the posts, 661 (49.4%) were analyzed; the remainder were excluded for being written in a non-English language or containing "#pharmacist" in the comments of the post rather than the primary caption; 19.7% of all posts fell into the Celebration category, followed by Work Experience and Advertisement with 18.6% and 12.6%, respectively. The remainder of the categories contained 10% or fewer posts. Less than 25% of posts were self-portrayed images, and 88% of posts were deemed health care-related. Instagram is an emerging social media platform that can be used to expand patient education, professional advocacy, and public health outreach. In this study, the majority of #pharmacist posts were celebratory in nature, and the majority were determined to be related to health care. Posts containing #pharmacist may provide the opportunity to educate the public regarding the knowledge and capabilities of pharmacists. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Shameoni Niaei, M.; Kilic, Y.; Yildiran, B. E.; Yüzlükoglu, F.; Yesilyaprak, C.
2016-12-01
We describe new software (MIPS) for the analysis and processing of meteorological satellite (Meteosat) images for an astronomical observatory. This software can help produce atmospheric forecasts (cloud, humidity, rain) from Meteosat data for robotic telescopes. MIPS uses a Python library for EUMETSAT data, aims to be completely open source, and is licensed under the GNU General Public Licence (GPL). MIPS is platform independent and uses h5py, numpy, and PIL with the general-purpose, high-level programming language Python and the Qt framework.
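MIPS itself is not reproduced here, but its stated stack (h5py, numpy, PIL) suggests how such a tool ingests Meteosat data. A sketch under loud assumptions: the HDF5 dataset path and the crude threshold-based cloud flag below are hypothetical illustrations, not MIPS's actual logic:

```python
import h5py
import numpy as np
from PIL import Image

# Hypothetical dataset path; real EUMETSAT product layouts vary by format.
with h5py.File("meteosat_scene.h5", "r") as f:
    ir = np.array(f["/image_data/ir_108"], dtype=float)

# Crude cloud flag: pixels significantly colder than the scene mean
# (clouds are cold in a thermal-IR channel).
mask = (ir < ir.mean() - ir.std()).astype(np.uint8) * 255
Image.fromarray(mask).save("cloud_mask.png")
```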
AstroImageJ: Image Processing and Photometric Extraction for Ultra-precise Astronomical Light Curves
NASA Astrophysics Data System (ADS)
Collins, Karen A.; Kielkopf, John F.; Stassun, Keivan G.; Hessman, Frederic V.
2017-02-01
ImageJ is a graphical user interface (GUI) driven, public domain, Java-based, software package for general image processing traditionally used mainly in life sciences fields. The image processing capabilities of ImageJ are useful and extendable to other scientific fields. Here we present AstroImageJ (AIJ), which provides an astronomy specific image display environment and tools for astronomy specific image calibration and data reduction. Although AIJ maintains the general purpose image processing capabilities of ImageJ, AIJ is streamlined for time-series differential photometry, light curve detrending and fitting, and light curve plotting, especially for applications requiring ultra-precise light curves (e.g., exoplanet transits). AIJ reads and writes standard Flexible Image Transport System (FITS) files, as well as other common image formats, provides FITS header viewing and editing, and is World Coordinate System aware, including an automated interface to the astrometry.net web portal for plate solving images. AIJ provides research grade image calibration and analysis tools with a GUI driven approach, and easily installed cross-platform compatibility. It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data with one tightly integrated software package.
George, L D; Lusty, J; Owens, D R; Ollerton, R L
1999-08-01
To determine whether software processing of digitised retinal images using a "sharpen" filter improves the ability to grade diabetic retinopathy. 150 macula centred retinal images were taken as 35 mm colour transparencies representing a spectrum of diabetic retinopathy, digitised, and graded in random order before and after the application of a sharpen filter (Adobe Photoshop). Digital enhancement of contrast and brightness was performed and a ×2 digital zoom was utilised. The grades from the unenhanced and enhanced digitised images were compared with the same retinal fields viewed as slides. Overall agreement in retinopathy grade from the digitised images improved from 83.3% (125/150) to 94.0% (141/150) with sight threatening diabetic retinopathy (STDR) correctly identified in 95.5% (84/88) and 98.9% (87/88) of cases when using unenhanced and enhanced images respectively. In total, five images were overgraded and four undergraded from the enhanced images compared with 17 and eight images respectively when using unenhanced images. This study demonstrates that the already good agreement in grading performance can be further improved by software manipulation or processing of digitised retinal images.
George, L; Lusty, J; Owens, D; Ollerton, R
1999-01-01
AIMS—To determine whether software processing of digitised retinal images using a "sharpen" filter improves the ability to grade diabetic retinopathy. METHODS—150 macula centred retinal images were taken as 35 mm colour transparencies representing a spectrum of diabetic retinopathy, digitised, and graded in random order before and after the application of a sharpen filter (Adobe Photoshop). Digital enhancement of contrast and brightness was performed and a ×2 digital zoom was utilised. The grades from the unenhanced and enhanced digitised images were compared with the same retinal fields viewed as slides. RESULTS—Overall agreement in retinopathy grade from the digitised images improved from 83.3% (125/150) to 94.0% (141/150) with sight threatening diabetic retinopathy (STDR) correctly identified in 95.5% (84/88) and 98.9% (87/88) of cases when using unenhanced and enhanced images respectively. In total, five images were overgraded and four undergraded from the enhanced images compared with 17 and eight images respectively when using unenhanced images. CONCLUSION—This study demonstrates that the already good agreement in grading performance can be further improved by software manipulation or processing of digitised retinal images. PMID:10413691
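The enhancement step in these studies (Photoshop sharpen plus contrast/brightness adjustment) has straightforward open-source equivalents. A sketch with Pillow; the file name and unsharp-mask parameters are illustrative, and Photoshop's exact kernel is not replicated:

```python
from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("retina_slide.png")                 # hypothetical digitised slide
sharpened = img.filter(ImageFilter.SHARPEN)          # fixed 3x3 sharpen kernel
unsharp = img.filter(ImageFilter.UnsharpMask(radius=2, percent=150))
enhanced = ImageEnhance.Contrast(sharpened).enhance(1.2)  # mild contrast boost
enhanced.save("retina_enhanced.png")
```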
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, C; Yin, Y
Purpose: The purpose of this research is to investigate which texture features extracted from FDG-PET images by the gray-level co-occurrence matrix (GLCM) have a higher prognostic value than the others. Methods: 21 non-small cell lung cancer (NSCLC) patients were enrolled in the study. Patients underwent 18F-FDG PET/CT scans both pre-treatment and post-treatment. First, the tumors were extracted by our in-house developed software. Second, the clinical features, including the maximum SUV and tumor volume, were extracted with MIM Vista software, and the texture features, including angular second moment, contrast, inverse difference moment, entropy, and correlation, were extracted using MATLAB. The differences were calculated by subtracting the pre-treatment features from the post-treatment features. Finally, SPSS software was used to obtain the Pearson correlation coefficients and Spearman rank correlation coefficients between the change ratios of the texture features and the change ratios of the clinical features. Results: The Pearson and Spearman rank correlation coefficients between contrast and maximum SUV were 0.785 and 0.709. The Pearson and Spearman values between inverse difference moment and tumor volume were 0.953 and 0.942. Conclusion: This preliminary study showed that the relationships between different texture features and the same clinical feature differ, and that the prognostic value of contrast and inverse difference moment was higher than that of the other three texture features extracted by GLCM.
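The five GLCM features named in the abstract can be reproduced with scikit-image (spelled graycomatrix/graycoprops in recent releases, greycomatrix/greycoprops in older ones). A sketch on a stand-in ROI; entropy is computed by hand since it is not a built-in property:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = np.random.randint(0, 64, (32, 32), dtype=np.uint8)  # stand-in PET tumor ROI
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)

features = {
    "ASM": graycoprops(glcm, "ASM")[0, 0],               # angular second moment
    "contrast": graycoprops(glcm, "contrast")[0, 0],
    "IDM": graycoprops(glcm, "homogeneity")[0, 0],       # inverse difference moment
    "correlation": graycoprops(glcm, "correlation")[0, 0],
}
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log(p[p > 0]))
print(features)
```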
DOCLIB: a software library for document processing
NASA Astrophysics Data System (ADS)
Jaeger, Stefan; Zhu, Guangyu; Doermann, David; Chen, Kevin; Sampat, Summit
2006-01-01
Most researchers would agree that research in the field of document processing can benefit tremendously from a common software library through which institutions are able to develop and share research-related software and applications across academic, business, and government domains. However, despite several attempts in the past, the research community still lacks a widely-accepted standard software library for document processing. This paper describes a new library called DOCLIB, which tries to overcome the drawbacks of earlier approaches. Many of DOCLIB's features are unique either in themselves or in their combination with others, e.g. the factory concept for support of different image types, the juxtaposition of image data and metadata, or the add-on mechanism. We cherish the hope that DOCLIB serves the needs of researchers better than previous approaches and will readily be accepted by a larger group of scientists.
Echocardiography derived three-dimensional printing of normal and abnormal mitral annuli.
Mahmood, Feroze; Owais, Khurram; Montealegre-Gallegos, Mario; Matyal, Robina; Panzica, Peter; Maslow, Andrew; Khabbaz, Kamal R
2014-01-01
The objective of this study was to assess the clinical feasibility of using echocardiographic data to generate three-dimensional models of normal and pathologic mitral valve annuli before and after repair procedures. High-resolution transesophageal echocardiographic data from five patients were analyzed to delineate and track the mitral annulus (MA) using TomTec Image-Arena software. Coordinates representing the annulus were imported into SolidWorks software for constructing solid models. These solid models were converted to stereolithographic (STL) file format and three-dimensionally printed by a commercially available MakerBot Replicator 2 three-dimensional printer. Total time from image acquisition to printing was approximately 30 min. The models created were highly reflective of the known geometry, shape, and size of normal and pathologic mitral annuli. Post-repair models also closely resembled the shapes of the rings they were implanted with. Compared with echocardiographic images of annuli seen on a computer screen, the physical models were able to convey clinical information more comprehensively, making them helpful in appreciating pathology as well as post-repair changes. Three-dimensional printing of the MA is possible and clinically feasible using routinely obtained echocardiographic images. Given the short turnaround time and the lack of need for additional imaging, the technique we describe here has the potential for rapid integration into clinical practice to assist with surgical education, planning, and decision-making.
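The study's pipeline goes through SolidWorks to produce the STL; purely as an illustration of the file format, the hypothetical sketch below fans triangles from the annulus centroid and writes ASCII STL directly, turning a tracked annulus curve into a printable surface (not the authors' pipeline):

```python
import numpy as np

def annulus_to_stl(points, path):
    """points: (N, 3) ordered coordinates of a tracked mitral annulus."""
    pts = np.asarray(points, float)
    c = pts.mean(axis=0)                              # annulus centroid
    with open(path, "w") as f:
        f.write("solid annulus\n")
        for i in range(len(pts)):
            a, b = pts[i], pts[(i + 1) % len(pts)]    # consecutive ring points
            n = np.cross(b - a, c - a)
            n /= np.linalg.norm(n) + 1e-12            # unit facet normal
            f.write(f"  facet normal {n[0]} {n[1]} {n[2]}\n    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid annulus\n")

# Synthetic saddle-shaped annulus, 64 points, radii in millimetres
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
ring = np.stack([30 * np.cos(t), 30 * np.sin(t), 5 * np.cos(2 * t)], axis=1)
annulus_to_stl(ring, "annulus.stl")
```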
Parallel-Processing Software for Creating Mosaic Images
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Deen, Robert; McCauley, Michael; DeJong, Eric
2008-01-01
A computer program implements parallel processing for nearly real-time creation of panoramic mosaics of images of terrain acquired by video cameras on an exploratory robotic vehicle (e.g., a Mars rover). Because the original images are typically acquired at various camera positions and orientations, it is necessary to warp the images into the reference frame of the mosaic before stitching them together to create the mosaic. [Also see "Parallel-Processing Software for Correlating Stereo Images," Software Supplement to NASA Tech Briefs, Vol. 31, No. 9 (September 2007) page 26.] The warping algorithm in this computer program reflects the considerations that (1) for every pixel in the desired final mosaic, a good corresponding point must be found in one or more of the original images and (2) for this purpose, one needs a good mathematical model of the cameras and a good correlation of individual pixels with respect to their positions in three dimensions. The desired mosaic is divided into slices, each of which is assigned to one of a number of central processing units (CPUs) operating simultaneously. The results from the CPUs are gathered and placed into the final mosaic. The time taken to create the mosaic depends upon the number of CPUs, the speed of each CPU, and whether a local or a remote data-staging mechanism is used.
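The slice-per-CPU strategy the abstract describes maps directly onto a process pool. A minimal sketch in Python with the warp step stubbed out (the real program models camera geometry and pixel correlation per slice):

```python
import numpy as np
from multiprocessing import Pool

HEIGHT, WIDTH, N_SLICES = 512, 2048, 8

def render_slice(rows):
    """Warp source images into mosaic rows r0..r1 (stubbed with a fill)."""
    r0, r1 = rows
    return r0, np.full((r1 - r0, WIDTH), float(r0), dtype=np.float32)

if __name__ == "__main__":
    bounds = np.linspace(0, HEIGHT, N_SLICES + 1, dtype=int)
    jobs = list(zip(bounds[:-1], bounds[1:]))
    mosaic = np.empty((HEIGHT, WIDTH), dtype=np.float32)
    with Pool() as pool:                     # one worker per CPU by default
        for r0, strip in pool.map(render_slice, jobs):
            mosaic[r0:r0 + strip.shape[0]] = strip   # gather into the mosaic
```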
Vision-sensing image analysis for GTAW process control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, D.D.
1994-11-01
Image analysis of a gas tungsten arc welding (GTAW) process was completed using video images from a charge coupled device (CCD) camera inside a specially designed coaxial (GTAW) electrode holder. Video data was obtained from filtered and unfiltered images, with and without the GTAW arc present, showing weld joint features and locations. Data Translation image processing boards, installed in an IBM PC AT 386 compatible computer, and Media Cybernetics image processing software were used to investigate edge flange weld joint geometry for image analysis.
Microvax-based data management and reduction system for the regional planetary image facilities
NASA Technical Reports Server (NTRS)
Arvidson, R.; Guinness, E.; Slavney, S.; Weiss, B.
1987-01-01
Presented is a progress report for the Regional Planetary Image Facilities (RPIF) prototype image data management and reduction system being jointly implemented by Washington University and the USGS, Flagstaff. The system will consist of a MicroVAX with a high capacity (approx 300 megabyte) disk drive, a compact disk player, an image display buffer, a videodisk player, USGS image processing software, and SYSTEM 1032 - a commercial relational database management package. The USGS, Flagstaff, will transfer their image processing software including radiometric and geometric calibration routines, to the MicroVAX environment. Washington University will have primary responsibility for developing the database management aspects of the system and for integrating the various aspects into a working system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Alfonsi; C. Rabiti; D. Mandelli
The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed thermal-hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that dispatches several functionalities: it derives and actuates the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring and control in the phase space; it performs both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and it facilitates input/output handling through a Graphical User Interface (GUI) and a post-processing data-mining module.
Neutron imaging data processing using the Mantid framework
NASA Astrophysics Data System (ADS)
Pouzols, Federico M.; Draper, Nicholas; Nagella, Sri; Yang, Erica; Sajid, Ahmed; Ross, Derek; Ritchie, Brian; Hill, John; Burca, Genoveva; Minniti, Triestino; Moreton-Smith, Christopher; Kockelmann, Winfried
2016-09-01
Several imaging instruments are currently being constructed at neutron sources around the world. The Mantid software project provides an extensible framework that supports high-performance computing for data manipulation, analysis and visualisation of scientific data. At ISIS, IMAT (Imaging and Materials Science & Engineering) will offer unique time-of-flight neutron imaging techniques which impose several software requirements to control the data reduction and analysis. Here we outline the extensions currently being added to Mantid to provide specific support for neutron imaging requirements.
An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis
NASA Astrophysics Data System (ADS)
Kim, Yongmin; Alexander, Thomas
1986-06-01
In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.
Postprocessing classification images
NASA Technical Reports Server (NTRS)
Kan, E. P.
1979-01-01
Program cleans up remote-sensing maps. It can be used with existing image-processing software. Remapped images closely resemble familiar resource information maps and can replace or supplement classification images not postprocessed by this program.
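The program's exact cleanup rules are not given in this summary; the textbook equivalent is a majority (mode) filter that reassigns each pixel to the dominant class in its neighborhood, sketched here with scipy:

```python
import numpy as np
from scipy import ndimage

def majority_filter(classmap, size=3):
    """Replace each pixel with the most frequent class in its size x size window."""
    def mode(window):
        vals, counts = np.unique(window.astype(int), return_counts=True)
        return vals[np.argmax(counts)]
    return ndimage.generic_filter(classmap, mode, size=size)

noisy = np.random.choice([0, 1, 2], size=(64, 64), p=[0.6, 0.3, 0.1])
clean = majority_filter(noisy)        # isolated misclassified pixels removed
```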
NASA Technical Reports Server (NTRS)
Kizhner, Semion; Day, John H. (Technical Monitor)
2000-01-01
Post-processing of data from a GPS receiver test in a GPS simulator and test facility is an important step towards qualifying a receiver for space flight. Although the GPS simulator provides all the parameters needed to analyze a simulation, as well as excellent analysis tools on the simulator workstation, post-processing is not a GPS simulator or receiver function alone, and it must be planned as a separate pre-flight test program requirement. A GPS simulator is a critical resource, and it is desirable to move the pertinent test data off the simulator as soon as a test is completed. The receiver and simulator databases are used to extract the test data files for post-processing. These files are then usually moved from the simulator and receiver systems to a personal computer (PC) platform, where post-processing is typically done using PC-based commercial software languages and tools. Because of the generality of commercial software systems, their functions are notoriously slow and are more often than not the bottleneck, even for short-duration simulator-based tests. There is a need to do post-processing faster, within an hour of test completion, including all required operations on the simulator and receiver to prepare and move off the post-processing files. This is especially significant for feeding the results of one test into the next simulation setup, or for running near back-to-back simulation scenarios. Solving the post-processing timing problem is critical to the success of a pre-flight test program. Towards this goal, an approach was developed that speeds up post-processing by an order of magnitude. It is based on improving the algorithm of the post-processing bottleneck function using a priori information specific to a GPS simulation application, and on using only the necessary volume of truth data. The presented post-processing scheme was used in support of several successful space flight missions carrying GPS receivers.
Near Real-Time Georeference of Umanned Aerial Vehicle Images for Post-Earthquake Response
NASA Astrophysics Data System (ADS)
Wang, S.; Wang, X.; Dou, A.; Yuan, X.; Ding, L.; Ding, X.
2018-04-01
The rapid collection of Unmanned Aerial Vehicle (UAV) remote sensing images plays an important role in quickly submitting disaster information and monitoring severely damaged objects after an earthquake. However, for the hundreds of UAV images collected in one flight sortie, the traditional data processing methods are image stitching and three-dimensional reconstruction, which take one to several hours and slow disaster response. If the images are instead searched manually, much more time is spent selecting them, and the selected images lack a spatial reference. Therefore, a near-real-time rapid georeference method for UAV remote sensing disaster data is proposed in this paper. The UAV images are georeferenced using the position and attitude data collected by the UAV flight control system, and the georeferenced data are organized by means of world files, a format developed by ESRI. The rapid georeference software was written in C#, using the Geospatial Data Abstraction Library (GDAL). The results show that the method can georeference up to one thousand UAV images within one minute, meeting the demand of rapid disaster response, which is of great value in disaster emergency applications.
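A world file is just six plain-text lines, so once position and attitude have been reduced to a ground sample distance and a top-left corner (the hard part the paper addresses), writing the georeference is trivial. A sketch for the simple north-up case, with hypothetical coordinates:

```python
def write_world_file(path, gsd, top_left_x, top_left_y):
    """Write an ESRI world file for a north-up image.
    gsd: ground sample distance in map units per pixel."""
    lines = [
        gsd,          # A: pixel width
        0.0,          # D: rotation term (0 for north-up)
        0.0,          # B: rotation term
        -gsd,         # E: pixel height (negative: row index grows southward)
        top_left_x,   # C: x of the center of the top-left pixel
        top_left_y,   # F: y of the center of the top-left pixel
    ]
    with open(path, "w") as f:
        f.write("\n".join(f"{v:.8f}" for v in lines) + "\n")

# Example: 0.05 m/pixel UAV frame, top-left pixel center at hypothetical
# projected coordinates (500000, 4300000); a .jgw pairs with a .jpg image.
write_world_file("DSC0001.jgw", 0.05, 500000.0, 4300000.0)
```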
Content standards for medical image metadata
NASA Astrophysics Data System (ADS)
d'Ornellas, Marcos C.; da Rocha, Rafael P.
2003-12-01
Medical images are at the heart of healthcare diagnostic procedures. They have provided not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from image generation to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This data repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data, as well as the diversified user access requirements, the implementation of a medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it focuses on the evaluation of medical image metadata content and on metadata quality management.
Prototyping machine vision software on the World Wide Web
NASA Astrophysics Data System (ADS)
Karantalis, George; Batchelor, Bruce G.
1998-10-01
Interactive image processing is a proven technique for analyzing industrial vision applications and building prototype systems. Several of the previous implementations have used dedicated hardware to perform the image processing, with a top layer of software providing a convenient user interface. More recently, self-contained software packages have been devised, and these run on a standard computer. The advent of the Java programming language has made it possible to write platform-independent software operating over the Internet or a company-wide intranet. Thus there arises the possibility of designing at least some shop-floor inspection/control systems without the vision engineer ever entering the factories where they will be used. If successful, this project will have a major impact on the productivity of vision system designers.
Source detection in astronomical images by Bayesian model comparison
NASA Astrophysics Data System (ADS)
Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher
2014-12-01
The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a model of source that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing the influence on it of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely-used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
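The paper's full hierarchical prior is beyond a snippet, but the core idea of scoring regions by how poorly a background model explains them can be illustrated with a simple log-likelihood ratio between a per-region Gaussian fit and a robust global background fit (a stand-in for the paper's Bayes factor, not the authors' method):

```python
import numpy as np
from scipy.stats import norm

def anomaly_scores(image, box=8):
    med = np.median(image)
    sigma = 1.4826 * np.median(np.abs(image - med))   # robust background sigma
    h, w = image.shape
    scores = np.zeros((h // box, w // box))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            r = image[i*box:(i+1)*box, j*box:(j+1)*box].ravel()
            ll_bg = norm.logpdf(r, med, sigma).sum()                 # background
            ll_fit = norm.logpdf(r, r.mean(), r.std() + 1e-9).sum()  # free model
            scores[i, j] = ll_fit - ll_bg   # large => "suspiciously different"
    return scores

sky = np.random.normal(0, 1, (256, 256))
sky[100:108, 40:48] += 0.8                  # faint, diffuse synthetic source
print(anomaly_scores(sky).max())
```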
Transplant Image Processing Technology under Windows into the Platform Based on MiniGUI
NASA Astrophysics Data System (ADS)
Gan, Lan; Zhang, Xu; Lv, Wenya; Yu, Jia
MFC provides a large number of API functions related to digital image processing, along with object-oriented class mechanisms, giving image processing technology strong support in Windows. In embedded systems, however, hardware and software restrictions mean that no MFC-like environment is available. Therefore, this paper draws on the image processing technology of Windows and transplants it into MiniGUI-based embedded systems. The results show that MiniGUI/Embedded graphical user interface applications for image processing achieve good results in embedded image processing systems.
Automatic Feature Extraction System.
1982-12-01
exploitation. It was used for processing of black and white and multispectral reconnaissance photography, side-looking synthetic aperture radar imagery... the image data and different software modules for image queuing and formatting; the result of the input process will be images in standard AFES file... timely manner. The FFS configuration provides the environment necessary for integrated testing of image processing functions and design and...
Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B
2013-11-01
A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for collimator-detector response, resulting in an improvement in reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess improvement in image quality using the software and to evaluate the potential of performing reduced time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification applied was the acquisition of an eight-frame gated data set acquired using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data such that the effect of reduced time acquisitions could be assessed without conferring additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts upon reconstructed image quality as adjudged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols; filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over local current processing protocols (P<0.05). The RR algorithm improved image quality compared with local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half of the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent with this approach.
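The gating trick is worth noting because it generalizes: acquiring N gated partitions with a fixed simulated trigger lets any reduced-time acquisition be emulated afterwards by summing subsets of frames, without extra scanning time for the patient. A numpy sketch with synthetic Poisson projections:

```python
import numpy as np

frames = np.random.poisson(5.0, size=(8, 128, 128))  # 8 gated projection sets

full_time = frames.sum(axis=0)       # all 8 partitions = full acquisition
half_time = frames[:4].sum(axis=0)   # 4 partitions emulate half-time
quarter_time = frames[:2].sum(axis=0)

# Each summed set is then reconstructed independently (e.g., OSEM or a
# resolution-recovery algorithm) and image quality compared across counts.
```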
A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer
NASA Astrophysics Data System (ADS)
Luckman, Adrian J.; Allinson, Nigel M.
1989-03-01
A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.
Uranus: a rapid prototyping tool for FPGA embedded computer vision
NASA Astrophysics Data System (ADS)
Rosales-Hernández, Victor; Castillo-Jimenez, Liz; Viveros-Velez, Gilberto; Zuñiga-Grajeda, Virgilio; Treviño Torres, Abel; Arias-Estrada, M.
2007-01-01
The starting point for all successful system development is simulation. Performing high-level simulation of a system can help to identify, isolate, and fix design problems. This work presents Uranus, a software tool for the simulation and evaluation of image processing algorithms, with support for migrating them to an FPGA environment for algorithm acceleration and embedded processing purposes. The tool includes an integrated library of previously coded software operators and provides the necessary support to read and display image sequences as well as video files. The user can employ the previously compiled soft-operators in a high-level processing chain and code his or her own operators. In addition to the prototyping tool, Uranus offers an FPGA-based hardware architecture with the same organization as the software prototyping part. The hardware architecture contains a library of FPGA IP cores for image processing that are connected to a PowerPC-based system. The Uranus environment is intended for rapid prototyping of machine vision and migration to an FPGA accelerator platform, and it is distributed for academic purposes.
Capturing a failure of an ASIC in-situ, using infrared radiometry and image processing software
NASA Technical Reports Server (NTRS)
Ruiz, Ronald P.
2003-01-01
Failures in electronic devices can sometimes be tricky to locate, especially if they are buried inside radiation-shielded containers designed to work in outer space. Such was the case with a malfunctioning ASIC (Application Specific Integrated Circuit) that was drawing excessive power at a specific temperature during temperature cycle testing. To analyze the failure, infrared radiometry (thermography) was used in combination with image processing software to locate precisely where the power was being dissipated at the moment the failure took place. The IR imaging software was used to make the image of the target and background appear as unity. As testing proceeded and the failure mode was reached, temperature changes revealed the precise location of the fault. The results gave the design engineers the information they needed to fix the problem. This paper describes the techniques and equipment used to accomplish this failure analysis.
An efficient approach to integrated MeV ion imaging.
Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M
2018-03-01
An ionoluminescence (IL) spectral imaging system, alongside the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, has been implemented at the Van de Graaff laboratory of Tehran. Versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through the respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ list-mode data format. The appropriate performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.
Processing Ocean Images to Detect Large Drift Nets
NASA Technical Reports Server (NTRS)
Veenstra, Tim
2009-01-01
A computer program processes the digitized outputs of a set of downward-looking video cameras aboard an aircraft flying over the ocean. The purpose served by this software is to facilitate the detection of large drift nets that have been lost, abandoned, or jettisoned. The development of this software and of the associated imaging hardware is part of a larger effort to develop means of detecting and removing large drift nets before they cause further environmental damage to the ocean and to shores on which they sometimes impinge. The software is capable of near-real-time processing of as many as three video feeds at a rate of 30 frames per second. After a user sets the parameters of an adjustable algorithm, the software analyzes each video stream, detects any anomaly, issues a command to point a high-resolution camera toward the location of the anomaly, and, once the camera has been so aimed, issues a command to trigger the camera shutter. The resulting high-resolution image is digitized, and the resulting data are automatically uploaded to the operator's computer for analysis.
Software for Partly Automated Recognition of Targets
NASA Technical Reports Server (NTRS)
Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark; Selinsky, T.
2002-01-01
The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). This program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.
Kohlmeier, Carsten; Behrens, Peter; Böger, Andreas; Ramachandran, Brinda; Caparso, Anthony; Schulze, Dirk; Stude, Philipp; Heiland, Max; Assaf, Alexandre T
2017-12-01
The ATI SPG microstimulator is designed to be fixed on the posterior maxilla, with the integrated lead extending into the pterygopalatine fossa to electrically stimulate the sphenopalatine ganglion (SPG) as a treatment for cluster headache. Preoperative surgical planning to ensure the placement of the microstimulator in close proximity (within 5 mm) to the SPG is critical for treatment efficacy. The aim of this study was to improve the surgical procedure by navigating the initial dissection prior to implantation using a passive optical navigation system and to match the post-operative CBCT images with the preoperative treatment plan to verify the accuracy of the intraoperative placement of the microstimulator. Custom methods and software were used that result in a 3D rotatable digitally reconstructed fluoroscopic image illustrating the patient-specific placement with the ATI SPG microstimulator. Those software tools were preoperatively integrated with the planning software of the navigation system to be used intraoperatively for navigated placement. Intraoperatively, the SPG microstimulator was implanted by completing the initial dissection with CT navigation, while the final position of the stimulator was verified by 3D CBCT. Those reconstructed images were then immediately matched with the preoperative CT scans with the digitally inserted SPG microstimulator. This method allowed for visual comparison of both CT scans and verified correct positioning of the SPG microstimulator. Twenty-four surgeries were performed using this new method of CT navigated assistance during SPG microstimulator implantation. Those results were compared to results of 21 patients previously implanted without the assistance of CT navigation. Using CT navigation during the initial dissection, an average distance reduction of 1.2 mm between the target point and electrode tip of the SPG microstimulator was achieved. Using the navigation software for navigated implantation and matching the preoperative planned scans with those performed post-operatively, the average distance was 2.17 mm with navigation, compared to 3.37 mm in the 28 surgeries without navigation. Results from this new procedure showed a significant reduction (p = 0.009) in the average distance from the SPG microstimulator to the desired target point. Therefore, a distinct improvement could be achieved in positioning of the SPG microstimulator through the use of intraoperative navigation during the initial dissection and by post-operative matching of pre- and post-operatively performed CBCT scans.
Digital techniques for processing Landsat imagery
NASA Technical Reports Server (NTRS)
Green, W. B.
1978-01-01
An overview of the basic techniques used to process Landsat images with a digital computer, and of the VICAR image processing software developed at JPL and available to users through the NASA-sponsored COSMIC computer program distribution center, is presented. Examples are given of subjective processing performed to improve the information display for the human observer, such as contrast enhancement, pseudocolor display, and band ratioing, and of quantitative processing using mathematical models, such as classification based on multispectral signatures of different areas within a given scene and geometric transformation of imagery into standard mapping projections. The examples are illustrated by Landsat scenes of the Andes mountains and the Altyn-Tagh fault zone in China before and after contrast enhancement, and by classification of land use in Portland, Oregon. The VICAR image processing software system, which consists of a language translator that simplifies execution of image processing programs and provides a general purpose format so that imagery from a variety of sources can be processed by the same basic set of general applications programs, is described.
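Band ratioing of the kind mentioned is a one-liner in array terms: dividing one spectral band by another suppresses illumination differences, and the normalized variant is the familiar NDVI. A numpy sketch with stand-in band arrays:

```python
import numpy as np

nir = np.random.rand(256, 256)            # stand-in near-infrared band
red = np.random.rand(256, 256)            # stand-in red band

ratio = nir / np.clip(red, 1e-6, None)    # simple band ratio
ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)   # normalized difference

# Linear contrast stretch to 8-bit for display:
display = np.interp(ndvi, (ndvi.min(), ndvi.max()), (0, 255)).astype(np.uint8)
```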
Ahn, Jungmo; Park, JaeYeon; Park, Donghwan; Paek, Jeongyeup; Ko, JeongGil
2018-01-01
With the introduction of various advanced deep learning algorithms, initiatives for image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) using deep learning software tools. A prerequisite in applying CNN to real world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), that are capable of monitoring natural environment phenomena using tiny and low-power cameras on resource-limited embedded devices, can be considered as an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that has direct impact on network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy in the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be well-classified with a CNN model trained in advance using adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
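The pre-processing pair the paper proposes (resize, then color quantization before transmission) is easy to sketch with Pillow; the sizes and palette depth below are illustrative, not the authors' tuned settings:

```python
from PIL import Image

img = Image.open("capture.jpg")           # hypothetical frame from a WISN node
small = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
quantized = small.quantize(colors=32)     # 32-entry palette cuts bytes sharply
quantized.save("capture_compressed.png")

# The backend CNN is trained on images passed through the same
# resize + quantize transform, which is what preserves accuracy.
```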
Software tool for portal dosimetry research.
Vial, P; Hunt, P; Greer, P B; Oliver, L; Baldock, C
2008-09-01
This paper describes a software tool developed for research into the use of an electronic portal imaging device (EPID) to verify dose for intensity modulated radiation therapy (IMRT) beams. A portal dose image prediction (PDIP) model that predicts the EPID response to IMRT beams has been implemented into a commercially available treatment planning system (TPS). The software tool described in this work was developed to modify the TPS PDIP model by incorporating correction factors into the predicted EPID image to account for the difference in EPID response to open beam radiation and multileaf collimator (MLC) transmitted radiation. The processes performed by the software tool include: i) read the MLC file and the PDIP from the TPS, ii) calculate the fraction of beam-on time that each point in the IMRT beam is shielded by MLC leaves, iii) interpolate correction factors from look-up tables, iv) create a corrected PDIP image from the product of the original PDIP and the correction factors and write the corrected image to file, v) display, analyse, and export various image datasets. The software tool was developed using the Microsoft Visual Studio.NET framework with the C# compiler. The operation of the software tool was validated. This software provided useful tools for EPID dosimetry research, and it is being utilised and further developed in ongoing EPID dosimetry and IMRT dosimetry projects.
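Steps ii-iv can be sketched with numpy: estimate the beam-on fraction each pixel is MLC-shielded, interpolate a correction factor from a look-up table, and multiply it into the predicted image. The table values below are invented placeholders; the real tool uses measured open-beam versus MLC-transmission data:

```python
import numpy as np

pdip = np.ones((64, 64))                 # predicted EPID image (step i)
shielded_frac = np.random.rand(64, 64)   # step ii: 0 = open, 1 = always shielded

# Step iii: correction factor vs shielded fraction (hypothetical table).
table_frac = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
table_cf = np.array([1.00, 0.99, 0.97, 0.94, 0.90])
cf = np.interp(shielded_frac, table_frac, table_cf)

corrected_pdip = pdip * cf               # step iv: corrected prediction
```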
Zhang, Chunhua; Walters, Dan; Kovacs, John M.
2014-01-01
With the growth of the low altitude remote sensing (LARS) industry in recent years, their practical application in precision agriculture seems all the more possible. However, only a few scientists have reported using LARS to monitor crop conditions. Moreover, there have been concerns regarding the feasibility of such systems for producers given the issues related to the post-processing of images, technical expertise, and timely delivery of information. The purpose of this study is to showcase actual requests by farmers to monitor crop conditions in their fields using an unmanned aerial vehicle (UAV). Working in collaboration with farmers in northeastern Ontario, we use optical and near-infrared imagery to monitor fertilizer trials, conduct crop scouting and map field tile drainage. We demonstrate that LARS imagery has many practical applications. However, several obstacles remain, including the costs associated with both the LARS system and the image processing software, the extent of professional training required to operate the LARS and to process the imagery, and the influence from local weather conditions (e.g. clouds, wind) on image acquisition all need to be considered. Consequently, at present a feasible solution for producers might be the use of LARS service provided by private consultants or in collaboration with LARS scientific research teams. PMID:25386696
Zandieh, Shahin; Bernt, Reinhard; Knoll, Peter; Wenzel, Thomas; Hittmair, Karl; Haller, Joerg; Hergan, Klaus; Mirzaei, Siroos
2016-04-01
Many people exposed to torture later suffer from torture-related post-traumatic stress disorder (TR-PTSD). The aim of this study was to analyze the morphologic and functional brain changes in patients with TR-PTSD using magnetic resonance imaging (MRI) and positron emission tomography (PET). This study evaluated 19 subjects. Thirteen subcortical brain structures were evaluated using FSL software. On the T1-weighted images, normalized brain volumes were measured using SIENAX software. The study compared the volume of the brain and 13 subcortical structures in 9 patients suffering from TR-PTSD after torture and 10 healthy volunteers (HV). Diffusion-weighted imaging (DWI) was performed in the transverse plane. In addition, the 18F-FDG PET data were evaluated to identify the activity of the elected regions. The mean left hippocampal volume for the TR-PTSD group was significantly lower than in the HV group (post hoc test (Bonferroni) P < 0.001). There was a significant difference between the gray matter volume of the patients with TR-PTSD and the HV group (post hoc test (Bonferroni) P < 0.001). The TR-PTSD group showed low significant expansion of the ventricles in contrast to the HV group (post hoc test (Bonferroni) P < 0.001). Diffusion-weighted imaging revealed significant differences in the right frontal lobe and the left occipital lobe between the TR-PTSD and HV group (post hoc test (Bonferroni) P < 0.001). Moderate hypometabolism was noted in the occipital lobe in 6 of the 9 patients with TR-PTSD, in the temporal lobe in 1 of the 9 patients, and in the caudate nucleus in 5 of the 9 patients. In 2 cases, additional hypometabolism was observed in the posterior cingulate cortex and in the parietal and frontal lobes. The findings from this study show that TR-PTSD might have a deleterious influence on a set of specific brain structures. This study also demonstrated that PET combined with MRI is sensitive in detecting possible metabolic and structural brain changes in TR-PTSD.
An advanced software suite for the processing and analysis of silicon luminescence images
NASA Astrophysics Data System (ADS)
Payne, D. N. R.; Vargas, C.; Hameiri, Z.; Wenham, S. R.; Bagnall, D. M.
2017-06-01
Luminescence imaging is a versatile characterisation technique used for a broad range of research and industrial applications, particularly for the field of photovoltaics where photoluminescence and electroluminescence imaging is routinely carried out for materials analysis and quality control. Luminescence imaging can reveal a wealth of material information, as detailed in extensive literature, yet these techniques are often only used qualitatively instead of being utilised to their full potential. Part of the reason for this is the time and effort required for image processing and analysis in order to convert image data to more meaningful results. In this work, a custom built, Matlab based software suite is presented which aims to dramatically simplify luminescence image processing and analysis. The suite includes four individual programs which can be used in isolation or in conjunction to achieve a broad array of functionality, including but not limited to, point spread function determination and deconvolution, automated sample extraction, image alignment and comparison, minority carrier lifetime calibration and iron impurity concentration mapping.
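The suite is Matlab based and not public in this summary, but its PSF-deconvolution step has a standard open-source counterpart in Richardson-Lucy deconvolution (scikit-image spells the iteration-count argument num_iter in recent releases, iterations in older ones):

```python
import numpy as np
from scipy.signal import convolve2d
from skimage.restoration import richardson_lucy

psf = np.ones((5, 5)) / 25.0                         # stand-in measured PSF
lum = np.random.rand(128, 128)                       # luminescence image
blurred = convolve2d(lum, psf, mode="same", boundary="symm")
deblurred = richardson_lucy(blurred, psf, num_iter=30)
```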
"Proximal Sensing" capabilities for snow cover monitoring
NASA Astrophysics Data System (ADS)
Valt, Mauro; Salvatori, Rosamaria; Plini, Paolo; Salzano, Roberto; Giusti, Marco; Montagnoli, Mauro; Sigismondi, Daniele; Cagnati, Anselmo
2013-04-01
The seasonal snow cover represents one of the most important land cover classes for environmental studies in mountain areas, especially considering its variation over time. Snow cover and its extent play a relevant role in studies of atmospheric dynamics and the evolution of climate. It is also important for the analysis and management of water resources and for the management of tourism in mountain areas. Recently, webcam images collected at daily or even hourly intervals have been used to observe snow-covered areas; those images, properly processed, can be considered a very important environmental data source. Images captured by digital cameras are a useful tool at the local scale, providing images even when cloud cover makes observation by satellite sensors impossible. When suitably processed these images can be used for scientific purposes, having a good resolution (at least 800x600 at 16 million colours) and a very good sampling frequency (hourly images taken throughout the year). Once stored in databases, these images therefore represent an important source of information for the study of recent climatic changes, for evaluating the available water resources, and for analysing the daily surface evolution of the snow cover. The Snow-noSnow software has been specifically designed to automatically detect the extent of snow cover in webcam images with very limited human intervention. The software was tested on images collected in the Alps (ARPAV webcam network) and on the Apennines at a pilot station equipped for this project by CNR-IIA. The results obtained with Snow-noSnow are comparable to those achieved by photo-interpretation and better than those obtained using the image segmentation routines implemented in commercial image processing software. Additionally, Snow-noSnow operates in a semi-automatic way and has a reduced processing time. The analysis of this kind of imagery can usefully support the interpretation of remote sensing images, especially those provided by high spatial resolution sensors. Keywords: snow cover monitoring, digital images, software, Alps, Apennines.
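Snow-noSnow's classifier is not detailed here; a minimal baseline for webcam snow mapping is a per-pixel brightness/low-saturation threshold, which the hypothetical sketch below applies with Pillow and numpy:

```python
import numpy as np
from PIL import Image

hsv = np.asarray(Image.open("webcam.jpg").convert("HSV"), dtype=float)
sat, val = hsv[..., 1] / 255.0, hsv[..., 2] / 255.0

snow = (val > 0.7) & (sat < 0.25)     # bright and nearly colourless pixels
print(f"snow-covered fraction of frame: {100.0 * snow.mean():.1f}%")
```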
3D Modelling with the Samsung Gear 360
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2017-02-01
The Samsung Gear 360 is a consumer-grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that direct use of the projection generated inside the mobile phone or with Gear 360 Action Director (the desktop software for post-processing) has relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front- and rear-facing) in photogrammetric reconstructions, an alternative solution for generating the equirectangular projections was developed. A calibration aimed at estimating the intrinsic parameters of the two-lens camera, as well as their relative orientation, made it possible to generate new equirectangular projections from which a significant improvement in geometric accuracy was achieved.
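As an illustration of regenerating an equirectangular projection from a calibrated fisheye image, the sketch below assumes an ideal equidistant lens model (r = f·θ) for the front lens only, cropped to a square frame; the paper's calibration additionally recovers lens distortion and the relative orientation of the two lenses, which is omitted here:

```python
# Sketch: remap one fisheye image to an equirectangular panorama under an
# ideal equidistant model. Assumed illustration, not the paper's method.
import cv2
import numpy as np

def fisheye_to_equirect(img, fov_deg=195.0, out_w=2048, out_h=1024):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    f = (w / 2.0) / np.radians(fov_deg / 2.0)     # equidistant focal length

    u, v = np.meshgrid(np.arange(out_w), np.arange(out_h))
    lon = (u / out_w - 0.5) * 2.0 * np.pi         # -pi .. pi
    lat = (0.5 - v / out_h) * np.pi               # -pi/2 .. pi/2

    # Unit viewing direction; the front lens looks along +z.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))      # angle from optical axis
    phi = np.arctan2(y, x)

    r = f * theta                                 # equidistant projection
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    out = cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)
    out[theta > np.radians(fov_deg / 2.0)] = 0    # outside the lens FOV
    return out
```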
Chimera Grid Tools
NASA Technical Reports Server (NTRS)
Chan, William M.; Rogers, Stuart E.; Nash, Steven M.; Buning, Pieter G.; Meakin, Robert
2005-01-01
Chimera Grid Tools (CGT) is a software package for performing computational fluid dynamics (CFD) analysis utilizing the Chimera-overset-grid method. For modeling viscous flows about geometrically complex bodies in relative motion, the Chimera-overset-grid method is among the most computationally cost-effective methods for obtaining accurate aerodynamic results. CGT contains a large collection of tools for generating overset grids, preparing inputs for computer programs that solve the flow equations on the grids, and post-processing flow-solution data. The tools in CGT include grid-editing tools, surface-grid-generation tools, volume-grid-generation tools, utility scripts, configuration scripts, and post-processing tools (including tools for generating animated images of flows and for calculating forces and moments exerted on affected bodies). One of the tools, denoted OVERGRID, is a graphical user interface (GUI) that serves to visualize the grids and flow solutions and provides central access to many other tools. The GUI facilitates the generation of grids for a new flow-field configuration. Scripts that follow the grid-generation process can then be constructed to mostly automate grid generation for similar configurations. CGT is designed for use in conjunction with a computer-aided-design program that provides the geometry description of the bodies, and with a flow-solver program.
Rules-based process window OPC
NASA Astrophysics Data System (ADS)
O'Brien, Sean; Soper, Robert; Best, Shane; Mason, Mark
2008-03-01
As a preliminary step towards model-based process window OPC, we have analyzed the impact of correcting post-OPC layouts using rules-based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for the 65 nm active and poly layers was generated by classifying these failure sites. The rules were based upon segment runlengths, figure spaces, and adjacent figure widths. Comparing the layouts before and after the rules-based operations, 2.1 million sites were corrected on the active layer of a small chip, and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak-margin sites distinct from those repaired by the rules-based corrections. For the active layer, more than 75% of the sites corrected by rules would have printed without a defect, indicating that most rules-based cleanups degrade the lithographic pattern. Some sites were missed by the rules-based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end, dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle model-based procedure is needed, one that changes only those sites with unsatisfactory lithographic margin.
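A hypothetical fragment showing the flavour of such a rules table; the paper's actual 65 nm rule values are not public, so every threshold below is an invented placeholder:

```python
# Invented illustration of a geometric rules check on post-OPC segments.
def violates_rules(segment, min_runlength=60, min_space=70, min_width=60):
    """segment: dict with runlength, space, and adjacent width in nm."""
    return (segment["runlength"] < min_runlength
            or segment["space"] < min_space
            or segment["width"] < min_width)

segments = [{"runlength": 45, "space": 80, "width": 65},
            {"runlength": 90, "space": 75, "width": 70}]
flagged = [i for i, s in enumerate(segments) if violates_rules(s)]
print(f"sites needing cleanup: {flagged}")
```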
Spencer, Jean L; Bhatia, Vivek N; Whelan, Stephen A; Costello, Catherine E; McComb, Mark E
2013-12-01
The identification of protein post-translational modifications (PTMs) is an increasingly important component of proteomics and biomarker discovery, but very few tools exist for performing fast and easy characterization of global PTM changes and differential comparison of PTMs across groups of data obtained from liquid chromatography-tandem mass spectrometry experiments. STRAP PTM (Software Tool for Rapid Annotation of Proteins: Post-Translational Modification edition) is a program that was developed to facilitate the characterization of PTMs using spectral counting and a novel scoring algorithm to accelerate the identification of differential PTMs from complex data sets. The software facilitates multi-sample comparison by collating, scoring, and ranking PTMs and by summarizing data visually. The freely available software (beta release) installs on a PC and processes data in protXML format obtained from files parsed through the Trans-Proteomic Pipeline. The easy-to-use interface allows examination of results at protein, peptide, and PTM levels, and the overall design offers tremendous flexibility that provides proteomics insight beyond simple assignment and counting.
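The spectral-counting idea can be sketched as follows; the log-ratio score below is an illustrative stand-in, not STRAP PTM's actual scoring algorithm:

```python
# Hypothetical illustration of PTM spectral counting and differential ranking.
import math
from collections import Counter

def count_ptms(psms):
    """psms: iterable of (protein, site, modification) tuples from one sample."""
    return Counter((prot, site, mod) for prot, site, mod in psms)

def differential_score(counts_a, counts_b, pseudo=1.0):
    """Rank PTM sites by log2 ratio of spectral counts between two samples."""
    keys = set(counts_a) | set(counts_b)
    scores = {k: math.log2((counts_a[k] + pseudo) / (counts_b[k] + pseudo))
              for k in keys}
    return sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)
```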
NASA Astrophysics Data System (ADS)
Conerty, Michelle D.; Castracane, James; Cacace, Anthony T.; Parnes, Steven M.; Gardner, Glendon M.; Miller, Mitchell B.
1995-05-01
Electronic Speckle Pattern Interferometry (ESPI) is a nondestructive optical evaluation technique that is capable of determining surface and subsurface integrity through the quantitative evaluation of static or vibratory motion. By utilizing state-of-the-art developments in lasers, fiber optics, and solid-state detector technology, this technique has become applicable to medical research and diagnostics. Based on initial support from NIDCD and continued support from InterScience, Inc., we have been developing a range of instruments for improved diagnostic evaluation in otolaryngological applications based on the technique of ESPI. These compact fiber-optic instruments are capable of making real-time interferometric measurements of the target tissue. Image post-processing software under ongoing development is currently capable of extracting the desired quantitative results from the acquired interferometric images. The goal of the research is to develop a fully automated system in which the image processing and quantification will be performed in hardware in near real time. Subsurface details of both tympanic membrane and vocal cord dynamics could speed the diagnosis of otosclerosis and laryngeal tumors, and aid in the evaluation of surgical procedures.
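The phase extraction underlying such quantification is commonly done with phase-shifting; a generic four-step reconstruction (an assumed illustration, not InterScience's implementation) looks like this:

```python
# Standard four-step phase-shifting reconstruction: four interferograms taken
# at 0, 90, 180, and 270 degree phase shifts yield a wrapped phase map.
import numpy as np

def four_step_phase(i0, i90, i180, i270):
    """Wrapped phase (radians, in (-pi, pi]) from four interferograms."""
    return np.arctan2(i270 - i90, i0 - i180)
```

The wrapped map is then unwrapped and scaled by the laser wavelength to obtain displacement.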
NASA Astrophysics Data System (ADS)
Lakdawalla, E. S.
2008-11-01
Many recent planetary science missions, including the Mars Exploration Rovers, Cassini-Huygens, and New Horizons, have instituted a policy of rapid release of "raw" images to the Internet within days or even hours of their acquisition. The availability of these data, along with the increasing power of home computers and the availability of high-bandwidth Internet connections, has stimulated the development of a worldwide community of armchair planetary scientists, who are able to participate in the everyday drama of exploratory missions' encounters with new worlds and new landscapes. Far from passive onlookers, many of these enthusiasts have taught themselves image processing techniques and have even written software to perform automated processing and mosaicking of these raw data sets. They rapidly produce stunning visualizations and then post them to their own blogs or online forums, where they also engage in discussing scientific observations and inferences about the data sets, broadening missions' public outreach efforts beyond their direct reach. These amateur space scientists feel a deep sense of involvement in and connection to space missions, which makes them enthusiastic (and occasionally demanding) supporters of space exploration.
Buried object detection in GPR images
Paglieroni, David W; Chambers, David H; Bond, Steven W; Beer, W. Reginald
2014-04-29
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
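The final detection step can be sketched as a local-maximum search over the energy of the post-processed frame; the window size and threshold below are illustrative assumptions:

```python
# Sketch of peak detection in a post-processed energy image.
import numpy as np
from scipy.ndimage import maximum_filter

def detect_peaks(energy, window=9, min_level=None):
    """Return (row, col) of local maxima that exceed a detection threshold."""
    if min_level is None:
        min_level = energy.mean() + 3.0 * energy.std()   # assumed threshold
    is_max = energy == maximum_filter(energy, size=window)
    return np.argwhere(is_max & (energy > min_level))
```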
Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions
Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner
2016-01-01
In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time-consuming, subjective, and prone to human error. In recent years, research has therefore focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity, and reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time-kinematic GPS system for positioning, as well as hardware for vehicle control, image storage and acquisition, is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used in multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered using the classification results, yielding the quantity of grape bunches, the number of berries, and the berry diameter. PMID:27983669
Software system design for the non-null digital Moiré interferometer
NASA Astrophysics Data System (ADS)
Chen, Meng; Hao, Qun; Hu, Yao; Wang, Shaopu; Li, Tengfei; Li, Lin
2016-11-01
Aspheric optical components are an indispensable part of modern optical systems. With the development of aspheric optical element fabrication techniques, high-precision figure error testing of aspheric surfaces has become an urgent issue. We proposed a digital Moiré interferometer technique (DMIT) based on the partial compensation principle for aspheric and freeform surface measurement. Unlike a traditional interferometer, DMIT consists of a real and a virtual interferometer. The virtual interferometer is simulated with Zemax software to perform phase-shifting and alignment, and the results are obtained by a series of calculations on the real interferogram and the computer-generated virtual interferograms. DMIT requires a specific, reliable software system to ensure normal operation. Image acquisition and data processing are two important parts of this system, and realizing the connection between the real and virtual interferometers is a further challenge. In this paper, we present a software system design for DMIT with a friendly user interface and robust data processing features, enabling us to acquire the figure error of the measured asphere. We chose Visual C++ as the software development platform and control the ideal interferometer through hybrid programming with Zemax. After image acquisition and data transmission, the system calls image processing algorithms written in Matlab to calculate the figure error of the measured asphere. We tested the software system experimentally, measuring an aspheric surface and demonstrating the feasibility of the software system.
WE-FG-202-12: Investigation of Longitudinal Salivary Gland DCE-MRI Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ger, R; Howell, R; Li, H
Purpose: To determine the correlation between dose and changes through treatment in dynamic contrast-enhanced (DCE) MRI voxel parameters (Ktrans, kep, Ve, and Vp) within the salivary glands of head and neck oropharyngeal squamous cell carcinoma (HNSCC) patients. Methods: 17 HNSCC patients treated with definitive radiation therapy completed DCE-MRI scans on a 3T scanner at pre-treatment, mid-treatment, and post-treatment time points. Mid-treatment and post-treatment DCE images were deformably registered to pre-treatment DCE images (Velocity software package). Pharmacokinetic analysis of the DCE images used a modified Tofts model to produce parameter maps, with an arterial input function selected from each patient's perivertebral space on the image (NordicICE software package). In-house software was developed for voxel-by-voxel longitudinal analysis of the salivary glands within the registered images. The planning CT was rigidly registered to the pre-treatment DCE image to obtain dose values in each voxel. Voxels within the lower and upper dose quartiles for each gland were averaged for each patient, and the averages of the patients' means for the two quartiles were then compared. Dose relationships were also assessed by Spearman correlations between dose and voxel parameter changes for each patient's gland. Results: Changes in parameter means between time points were observed, but inter-patient variability was high. Ve of the parotid was the only parameter that had a consistently significant longitudinal difference between dose quartiles. The highest Spearman correlation was Vp of the sublingual gland for the change from pre-treatment to mid-treatment, with only ρ=0.29. Conclusion: In this preliminary study, there was large inter-patient variability in the changes of DCE voxel parameters, with no clear relationship with dose. Additional patients may reduce the uncertainties and allow determination of whether parameter-dose relationships exist.
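The modified Tofts analysis referred to above fits each voxel's concentration curve to the extended Tofts model, C_t(t) = Ktrans·∫C_p(τ)e^(−kep(t−τ))dτ + Vp·C_p(t), with Ve = Ktrans/kep. A minimal sketch of such a voxelwise fit (the study itself used NordicICE, not this code):

```python
# Sketch of an extended-Tofts fit per voxel. t in minutes on a uniform grid,
# cp is the arterial input function, ct the voxel concentration curve.
import numpy as np
from scipy.optimize import curve_fit

def extended_tofts(t, ktrans, kep, vp, cp):
    dt = t[1] - t[0]
    conv = np.convolve(cp, np.exp(-kep * t))[:len(t)] * dt   # Cp * exp kernel
    return ktrans * conv + vp * cp

def fit_voxel(t, ct, cp):
    model = lambda t_, ktrans, kep, vp: extended_tofts(t_, ktrans, kep, vp, cp)
    p0, bounds = (0.1, 0.5, 0.02), ([0, 0, 0], [5, 10, 1])   # assumed limits
    (ktrans, kep, vp), _ = curve_fit(model, t, ct, p0=p0, bounds=bounds)
    return ktrans, kep, ktrans / kep, vp                      # Ktrans, kep, Ve, Vp
```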
Radar signal pre-processing to suppress surface bounce and multipath
Paglieroni, David W; Mast, Jeffrey E; Beer, N. Reginald
2013-12-31
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicate the presence of a subsurface object.
More flexibility in representing geometric distortion in astronomical images
NASA Astrophysics Data System (ADS)
Shupe, David L.; Laher, Russ R.; Storrie-Lombardi, Lisa; Surace, Jason; Grillmair, Carl; Levitan, David; Sesar, Branimir
2012-09-01
A number of popular software tools in the public domain are used by astronomers, professional and amateur alike, but some of the tools that have similar purposes cannot be easily interchanged, owing to the lack of a common standard. In the case of image distortion, SCAMP and SExtractor, available from Astromatic.net, perform astrometric calibration and source-object extraction on image data, and image-data geometric distortion is computed in celestial coordinates with polynomial coefficients stored in the FITS header with the PVi_j keywords. Another widely used astrometric-calibration service, Astrometry.net, solves for distortion in pixel coordinates using the SIP convention that was introduced by the Spitzer Science Center. Until now, due to the complexity of these distortion representations, it was very difficult to use the output of one of these packages as input to the other. New Python software, along with faster-computing C-language translations, has been developed at the Infrared Processing and Analysis Center (IPAC) to convert FITS-image headers from PV to SIP and vice versa. It is now possible to straightforwardly use Astrometry.net for astrometric calibration and then SExtractor for source-object extraction. The new software also enables astrometric calibration by SCAMP followed by image visualization with tools that support SIP distortion but not PV. The software has been incorporated into the image-processing pipelines of the Palomar Transient Factory (PTF), which generate FITS images with headers containing both distortion representations. The software permits the conversion of archived images, such as those from the Spitzer Heritage Archive and the NASA/IPAC Infrared Science Archive, from SIP to PV or vice versa. This new capability renders unnecessary any new representation, such as the proposed TPV distortion convention.
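For reference, applying a SIP distortion correction is a straightforward polynomial evaluation; the sketch below assumes the A/B coefficient arrays have already been read from the FITS header (e.g., the A_2_0, B_0_3, ... keywords):

```python
# Sketch of the SIP convention: pixel offsets (u, v) relative to CRPIX are
# corrected by polynomials before the linear CD matrix is applied.
import numpy as np

def sip_correct(u, v, A, B):
    """A[p, q] multiplies u**p * v**q; returns distortion-corrected offsets."""
    du = sum(A[p, q] * u**p * v**q for p in range(A.shape[0])
             for q in range(A.shape[1]))
    dv = sum(B[p, q] * u**p * v**q for p in range(B.shape[0])
             for q in range(B.shape[1]))
    return u + du, v + dv
```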
Riffel, Johannes H; Keller, Marius G P; Aurich, Matthias; Sander, Yannick; Andre, Florian; Giusca, Sorin; Aus dem Siepen, Fabian; Seitz, Sebastian; Galuschky, Christian; Korosoglou, Grigorios; Mereles, Derliz; Katus, Hugo A; Buss, Sebastian J
2015-07-01
Myocardial deformation measurement is superior to left ventricular ejection fraction in identifying early changes in myocardial contractility and predicting cardiovascular outcome, but the lack of standardization hinders its clinical implementation. The aim of this study is to investigate a novel standardized deformation imaging approach based on the feature tracking algorithm for the assessment of global longitudinal strain (GLS) and global circumferential strain (GCS) in echocardiography and cardiac magnetic resonance imaging (CMR). 70 subjects undergoing CMR were consecutively investigated with echocardiography within a median time of 30 min. GLS and GCS were analyzed with post-processing software incorporating the same standardized algorithm for both modalities. Global strain was defined as the relative shortening of the whole endocardial contour length and calculated according to the strain formula. Mean GLS values were -16.2 ± 5.3 and -17.3 ± 5.3 % for echocardiography and CMR, respectively. GLS did not differ significantly between the two imaging modalities, which showed strong correlation (r = 0.86), a small bias (-1.1 %) and narrow 95 % limits of agreement (LOA ± 5.4 %). Mean GCS values were -17.9 ± 6.3 and -24.4 ± 7.8 % for echocardiography and CMR, respectively. GCS was significantly underestimated by echocardiography (p < 0.001), with a weaker correlation (r = 0.73), a higher bias (-6.5 %) and wider LOA (± 10.5 %). GLS showed a strong correlation (r = 0.92) when image quality was good, while the correlation dropped to r = 0.82 with poor acoustic windows in echocardiography. GCS assessment revealed a strong correlation (r = 0.87) only when echocardiographic image quality was good. No significant differences in GLS between two different echocardiographic vendors could be detected. Quantitative assessment of GLS using a standardized software algorithm allows direct comparison of values irrespective of the imaging modality. GLS may therefore serve as a reliable parameter for the assessment of global left ventricular function in clinical routine, alongside standard evaluation of the ejection fraction.
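The strain formula referred to above reduces to the relative change of the endocardial contour length between end-diastole and end-systole; a minimal sketch:

```python
# Global strain as relative shortening of the endocardial contour length.
# Contours are (N, 2) arrays of x/y points along the endocardial border.
import numpy as np

def contour_length(pts):
    return np.hypot(*np.diff(pts, axis=0).T).sum()

def global_strain(contour_ed, contour_es):
    l_ed, l_es = contour_length(contour_ed), contour_length(contour_es)
    return 100.0 * (l_es - l_ed) / l_ed     # percent; negative = shortening
```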
PI2GIS: processing image to geographical information systems, a learning tool for QGIS
NASA Astrophysics Data System (ADS)
Correia, R.; Teodoro, A.; Duarte, L.
2017-10-01
To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques, and it has become common to use image processing plugins that add new capabilities/functionalities to Geographical Information System (GIS) software. The aim of this work was to develop an open-source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated into GIS software (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified because new plugins can be developed easily and quickly using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, it was realized that a set of operations very useful in teaching remote sensing and image processing classes was lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification, and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of a northern area of Portugal.
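The two vegetation indices named above are simple band arithmetic; sketched with numpy on reflectance arrays (for Landsat 8 OLI: blue = band 2, red = band 4, NIR = band 5), with the standard EVI coefficients:

```python
# NDVI and EVI from reflectance bands; small epsilon avoids division by zero.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red + 1e-12)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L + 1e-12)
```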
Shi, Yulin; Veidenbaum, Alexander V; Nicolau, Alex; Xu, Xiangmin
2015-01-15
Modern neuroscience research demands computing power. Neural circuit mapping studies such as those using laser scanning photostimulation (LSPS) produce large amounts of data and require intensive computation for post hoc processing and analysis. Here we report on the design and implementation of a cost-effective desktop computer system for accelerated experimental data processing with recent GPU computing technology. A new version of Matlab with GPU-enabled functions is used to develop programs that run on Nvidia GPUs to harness their parallel computing power. We evaluated both the central processing unit (CPU) and GPU-enabled computational performance of our system in benchmark testing and practical applications. The experimental results show that GPU-CPU co-processing of simulated data and actual LSPS experimental data clearly outperformed the multi-core CPU, with up to a 22× speedup depending on the computational task. Further, we present a comparison of numerical accuracy between GPU and CPU computation to verify the precision of GPU computation. In addition, we show how GPUs can be effectively adapted to improve the performance of commercial image processing software such as Adobe Photoshop. To the best of our knowledge, this is the first demonstration of GPU application in neural circuit mapping and electrophysiology-based data processing. Together, GPU-enabled computation enhances our ability to process large-scale data sets derived from neural circuit mapping studies, allowing for increased processing speeds while retaining data precision. Copyright © 2014 Elsevier B.V. All rights reserved.
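The measurement idea translates directly to Python with CuPy, a numpy-compatible GPU array library (the paper itself used Matlab's GPU-enabled functions). A rough timing sketch, assuming an Nvidia GPU with CUDA is present:

```python
# Illustrative CPU-vs-GPU FFT timing; array size and task are placeholders.
import time
import numpy as np
import cupy as cp

x_cpu = np.random.rand(4096, 4096).astype(np.float32)
x_gpu = cp.asarray(x_cpu)
cp.fft.fft2(x_gpu); cp.cuda.Device().synchronize()   # warm-up (kernel setup)

t0 = time.perf_counter()
np.fft.fft2(x_cpu)
t_cpu = time.perf_counter() - t0

t0 = time.perf_counter()
cp.fft.fft2(x_gpu)
cp.cuda.Device().synchronize()   # wait for the asynchronous GPU kernel
t_gpu = time.perf_counter() - t0

print(f"speedup: {t_cpu / t_gpu:.1f}x")
```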
A new procedure of modal parameter estimation for high-speed digital image correlation
NASA Astrophysics Data System (ADS)
Huňady, Róbert; Hagara, Martin
2017-09-01
The paper deals with the use of 3D digital image correlation in determining modal parameters of mechanical systems. It is a non-contact optical method that uses precise, high-resolution digital cameras to measure full-field spatial displacements and strains of bodies. Most often this method is utilized for testing components or determining material properties of various specimens. When high-speed cameras are used for the measurement, the correlation system is capable of capturing various dynamic behaviors, including vibration, which makes the method potentially applicable to experimental modal analysis. For that purpose, the authors proposed a measuring chain for the correlation system Q-450 and developed a software application called DICMAN 3D, which allows the direct use of this system in modal testing. The application provides post-processing of the measured data and estimation of modal parameters. It has its own graphical user interface, in which several algorithms for determining natural frequencies, mode shapes, and damping of particular vibration modes are implemented. The paper describes the basic principle of the new estimation procedure, which is crucial for post-processing. Since the FRF matrix resulting from the measurement is usually relatively large, estimating modal parameters directly from the FRF matrix may be time-consuming and may occupy a large part of computer memory. The procedure implemented in DICMAN 3D provides a significant reduction in memory requirements and computational time while achieving high accuracy of the modal parameters; its computational efficiency is particularly evident when the FRF matrix consists of thousands of measurement DOFs. The functionality of the software application is demonstrated on a practical example in which the modal parameters of a composite plate excited by an impact hammer were determined. For verification of the obtained results, a complementary experiment was conducted in which the vibration responses were measured using conventional acceleration sensors. In both cases MIMO analysis was performed.
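A drastically simplified stand-in for such an estimator, peak-picking natural frequencies from an averaged FRF magnitude and estimating damping from the half-power bandwidth, assuming peaks rise from a near-zero baseline (DICMAN 3D's actual procedure is more elaborate):

```python
# Half-power (-3 dB) peak picking on a frequency response magnitude.
import numpy as np
from scipy.signal import find_peaks, peak_widths

def modal_peaks(freqs, frf_mag, prominence=0.1):
    peaks, _ = find_peaks(frf_mag, prominence=prominence * frf_mag.max())
    # Width at ~0.707 of peak height (valid when baseline is near zero).
    widths = peak_widths(frf_mag, peaks, rel_height=1 - 1 / np.sqrt(2))[0]
    df = freqs[1] - freqs[0]
    fn = freqs[peaks]                    # natural frequencies
    zeta = (widths * df) / (2.0 * fn)    # half-power damping ratio estimate
    return fn, zeta
```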
Software and Algorithms for Biomedical Image Data Processing and Visualization
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Lambert, James; Lam, Raymond
2004-01-01
A new software package equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been used specifically for analysis of plaque on the teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from the surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system, enabling clinical trials and analysis with minimal human intervention. These features offer distinct advantages over competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces of teeth. This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, as well as robotic applications such as product inspection or assembly of parts in space and industry.
Color reproduction software for a digital still camera
NASA Astrophysics Data System (ADS)
Lee, Bong S.; Park, Du-Sik; Nam, Byung D.
1998-04-01
We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and color-matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for estimating the unknown monitor gamma was also proposed. The level-processed values were then adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of the test color samples. The various reproduced images, generated according to four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialog box implemented in our software, so that a user can easily choose the best reproduction by comparing them.
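The matrix-fitting step can be sketched as an ordinary least-squares regression over the measured colour patches:

```python
# Fit a 3x3 colour transformation matrix by least squares between
# gamma-corrected camera RGB and measured tristimulus (XYZ) values.
import numpy as np

def fit_color_matrix(rgb_linear, xyz_measured):
    """rgb_linear, xyz_measured: (N, 3) arrays over N colour patches."""
    M, *_ = np.linalg.lstsq(rgb_linear, xyz_measured, rcond=None)
    return M.T                     # so that xyz ~= M @ rgb (column vectors)

def apply_matrix(M, rgb_linear):
    return rgb_linear @ M.T

# For a 3-by-4 matrix, append a column of ones to rgb_linear before fitting.
```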
SABRE--A Novel Software Tool for Bibliographic Post-Processing.
ERIC Educational Resources Information Center
Burge, Cecil D.
1989-01-01
Describes the software architecture and application of SABRE (Semi-Automated Bibliographic Environment), which is one of the first products to provide a semi-automatic environment for relevancy ranking of citations obtained from searches of bibliographic databases. Features designed to meet the review, categorization, culling, and reporting needs…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ochs, R.
The responsibilities of the Food and Drug Administration (FDA) have increased since the inception of the Food and Drugs Act in 1906. Medical devices first came under comprehensive regulation with the passage of the 1938 Food, Drug, and Cosmetic Act. In 1971 FDA also took on the responsibility for consumer protection against unnecessary exposure to radiation-emitting devices for home and occupational use. However, it was not until 1976, under the Medical Device Regulation Act, that the FDA was responsible for the safety and effectiveness of medical devices. This session will be presented by the Division of Radiological Health (DRH) and the Division of Imaging, Diagnostics, and Software Reliability (DIDSR) from the Center for Devices and Radiological Health (CDRH) at the FDA. The symposium will discuss how we protect and promote public health, with a focus on medical physics applications organized into four areas: pre-market device review, post-market surveillance, device compliance, and current regulatory research efforts and partnerships with other organizations. The pre-market session will summarize the pathways FDA uses to regulate the investigational use and commercialization of diagnostic imaging and radiation therapy medical devices in the US, highlighting resources available to assist investigators and manufacturers. The post-market session will explain the post-market surveillance and compliance activities FDA performs to monitor the safety and effectiveness of devices on the market. The third session will describe research efforts that support the regulatory mission of the Agency, with an overview of the regulatory research portfolio that advances our understanding of medical physics and imaging technologies and approaches to their evaluation. Lastly, the mechanisms that FDA uses to seek public input and promote collaborations with professional, government, and international organizations, such as AAPM, the International Electrotechnical Commission (IEC), Image Gently, and the Quantitative Imaging Biomarkers Alliance (QIBA), among others, will be discussed. Learning Objectives: (1) Understand FDA's pre-market and post-market review processes for medical devices. (2) Understand FDA's current regulatory research activities in the areas of medical physics and imaging products. (3) Understand how being involved with AAPM and other organizations can help to promote innovative, safe, and effective medical devices. J. Delfino: nothing to disclose.
A software platform for the analysis of dermatology images
NASA Astrophysics Data System (ADS)
Vlassi, Maria; Mavraganis, Vlasios; Asvestas, Panteleimon
2017-11-01
The purpose of this paper is to present a software platform, developed in the Python programming environment, that can be used for the processing and analysis of dermatology images. The platform provides the capability of reading a file that contains a dermatology image and supports image formats such as Windows bitmaps, JPEG, JPEG2000, portable network graphics, and TIFF. Furthermore, it provides suitable tools for selecting, either manually or automatically, a region of interest (ROI) on the image; the automated selection of a ROI includes filtering for smoothing the image, followed by thresholding. The proposed software platform has a friendly and clear graphical user interface and could be a useful second-opinion tool for a dermatologist. Furthermore, it could be used to classify images of other anatomical parts, such as the breast or lung, after proper re-training of the classification algorithms.
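A minimal sketch of the automated ROI selection (smoothing followed by thresholding); the Gaussian kernel width and the Otsu criterion are assumptions, since the paper does not specify them:

```python
# Smooth-then-threshold ROI selection for a single-channel image.
import numpy as np
from skimage import filters

def auto_roi(gray):
    """gray: 2D float array of a dermatology image (single channel)."""
    smooth = filters.gaussian(gray, sigma=2.0)      # suppress hair/noise
    mask = smooth < filters.threshold_otsu(smooth)  # lesions assumed darker
    return mask
```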
SU-E-J-225: CEST Imaging in Head and Neck Cancer Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Hwang, K; Fuller, C
Purpose: Chemical Exchange Saturation Transfer (CEST) imaging is an MRI technique that enables the detection and imaging of metabolically active compounds in vivo. It has been used to differentiate tumor types and metabolic characteristics. Unlike PET/CT, CEST imaging does not use isotopes, so it can be used on patients repeatedly. This study reports the preliminary results of CEST imaging in head and neck cancer (HNC) patients. Methods: A CEST imaging sequence and post-processing software were developed on a 3T clinical MRI scanner. Ten patients with human papilloma virus-positive oropharyngeal cancer were imaged in their immobilized treatment position. A 5 mm slice CEST image was acquired (128×128, FOV=20∼24cm) to encompass the maximum dimension of the tumor. Twenty-nine offset frequencies (from −7.8 ppm to +7.8 ppm) were acquired to obtain the Z-spectrum, and asymmetry analysis was used to extract the CEST contrasts. ROIs at the tumor, nodes, and surrounding tissues were measured. Results: CEST images were successfully acquired, and Z-spectrum asymmetry analysis demonstrated clear CEST contrast in the tumor as well as the surrounding tissues. 3∼5% CEST contrast in the range of 1 to 4 ppm was noted in tumors as well as grossly involved nodes. Injection of glucose produced a marked increase of CEST contrast in the tumor region (∼10%). Motion and pulsation artifacts tend to smear the CEST contrast, making interpretation of the image contrast difficult. Field non-uniformity, pulsation in blood vessels, and susceptibility artifacts caused by air cavities were also problematic for CEST imaging. Conclusion: We have demonstrated successful CEST acquisition and Z-spectrum reconstruction in HNC patients on a clinical scanner. MRI acquisition in the immobilized treatment position is critical for image quality as well as for the success of CEST image acquisition. CEST images provide novel contrast of metabolites in HNC and present great potential for the pre- and post-treatment assessment of patients undergoing radiation therapy.
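The asymmetry analysis computes, for each positive offset Δω, MTRasym = [S(−Δω) − S(+Δω)]/S0 from the Z-spectrum; a minimal sketch:

```python
# Z-spectrum asymmetry analysis for one voxel or ROI.
import numpy as np

def mtr_asym(offsets_ppm, z_signal, s0):
    """offsets_ppm: ascending, symmetric about 0; returns asymmetry for dw > 0."""
    z = z_signal / s0
    pos = offsets_ppm > 0
    # Interpolate the negative branch onto the positive offsets.
    neg = np.interp(offsets_ppm[pos], -offsets_ppm[::-1], z[::-1])
    return offsets_ppm[pos], neg - z[pos]
```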
Towards a framework for agent-based image analysis of remote-sensing data
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-01-01
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects' properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is extensively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adapt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the processing steps implemented on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, backed by point spread function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition: given access to ultrasound data from early stages (raw channel data, radio-frequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
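The first stage of such a pipeline, delay-and-sum beamforming, can be sketched compactly; the version below assumes a plane-wave transmit at normal incidence and a linear array, and is far simpler than SUPRA's GPU implementation:

```python
# Compact delay-and-sum beamformer for one scan line.
# rf: (n_channels, n_samples) channel data; elem_x: element x-positions (m);
# fs: sampling rate (Hz); c: speed of sound (m/s).
import numpy as np

def das_line(rf, elem_x, fs, c, line_x=0.0, depths=None):
    n_ch, n_s = rf.shape
    if depths is None:
        depths = np.arange(n_s) * c / (2 * fs)           # two-way travel
    out = np.zeros(len(depths))
    for i, z in enumerate(depths):
        # Plane-wave transmit reaches depth z at t = z/c; receive delay is
        # the distance from the focus (line_x, z) back to each element.
        d_rx = np.sqrt((elem_x - line_x) ** 2 + z ** 2)
        idx = np.round((z + d_rx) / c * fs).astype(int)
        valid = idx < n_s
        out[i] = rf[np.arange(n_ch)[valid], idx[valid]].sum()
    return out
```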
Moving base Gravity Gradiometer Survey System (GGSS) program
NASA Astrophysics Data System (ADS)
Pfohl, Louis; Rusnak, Walter; Jircitano, Albert; Grierson, Andrew
1988-04-01
The GGSS program began in early 1983 with the objective of delivering a landmobile and airborne system capable of fast, accurate, and economical gravity gradient surveys of large areas anywhere in the world. The objective included the development and use of post-mission data reduction software to process the survey data into solutions for the gravity disturbance vector components (north, east and vertical). This document describes the GGSS equipment hardware and software, integration and lab test procedures and results, and airborne and land survey procedures and results. Included are discussions on test strategies, post-mission data reduction algorithms, and the data reduction processing experience. Perspectives and conclusions are drawn from the results.
Image analysis for maintenance of coating quality in nickel electroplating baths--real time control.
Vidal, M; Amigo, J M; Bro, R; van den Berg, F; Ostra, M; Ubide, C
2011-11-07
The aim of this paper is to show how it is possible to extract analytical information from images acquired with a flatbed scanner and make use of this information for real-time control of a nickel plating process. Digital images of steel sheets plated in a nickel bath are used to follow the process as specific additives degrade. Dedicated software has been developed to make the obtained results accessible to process operators. This includes obtaining the RGB image, selecting the red-channel data exclusively, calculating the histogram of the red-channel data, and calculating the mean colour value (MCV) and the standard deviation of the red-channel data. The MCV is then used by the software to determine the concentrations of the additives Supreme Plus Brightner (SPB) and SA-1 (for confidentiality reasons, the chemical contents cannot be further detailed) present in the bath; these two additives degrade and their concentrations change during the process. Finally, the software informs the operator when the bath is producing plating of unsuitable quality and suggests the amounts of SPB and SA-1 to be added in order to recover the original plating quality. Copyright © 2011 Elsevier B.V. All rights reserved.
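The red-channel statistics are simple to reproduce; a sketch with PIL/numpy (the file name is a placeholder):

```python
# Red-channel histogram, mean colour value (MCV), and standard deviation.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("plated_sheet.png").convert("RGB"))
red = img[..., 0].astype(float)
hist, _ = np.histogram(red, bins=256, range=(0, 256))  # as in the paper
mcv, sd = red.mean(), red.std()
print(f"MCV = {mcv:.1f}, SD = {sd:.1f}")
```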
A quality-refinement process for medical imaging applications.
Neuhaus, J; Maleike, D; Nolden, M; Kenngott, H-G; Meinzer, H-P; Wolf, I
2009-01-01
To introduce and evaluate a process for refinement of software quality that is suitable for research groups. In order to avoid constraining researchers too much, the quality improvement process has to be designed carefully. The scope of this paper is to present and evaluate a process to advance quality aspects of existing research prototypes in order to make them ready for initial clinical studies. The proposed process is tailored for research environments and is therefore more lightweight than traditional quality management processes: it focuses on the quality criteria that are important at the given stage of the software life cycle, and it emphasizes tools that automate aspects of the process. To evaluate the additional effort the process entails, it was applied, as an example, to eight prototypical software modules for medical image processing. The introduced process was applied to improve the quality of all prototypes so that they could be successfully used in clinical studies. The quality refinement required an average of 13 person-days of additional effort per project. Overall, 107 bugs were found and resolved by applying the process. Careful selection of quality criteria and the use of automated process tools lead to a lightweight quality refinement process, suitable for scientific research groups, that can be applied to ensure a successful transfer of technical software prototypes into clinical research workflows.
OsiriX: an open-source software for navigating in multidimensional DICOM images.
Rosset, Antoine; Spadola, Luca; Ratib, Osman
2004-09-01
A multidimensional image navigation and display software package was designed for the display and interpretation of large sets of multidimensional and multimodality images, such as combined PET-CT studies. The software is developed in Objective-C on a Macintosh platform under the MacOS X operating system using the GNUstep development environment. It also benefits from the extremely fast and optimized 3D graphics capabilities of the OpenGL graphics standard, which is widely used for computer games and optimized to take advantage of any available hardware graphics accelerator boards. In the design of the software, special attention was given to adapting the user interface to the specific and complex tasks of navigating through large sets of image data. An interactive jog-wheel device, widely used in the video and movie industry, was implemented to allow users to navigate in the different dimensions of an image set much faster than with a traditional mouse or with on-screen cursors and sliders. The program can easily be adapted to very specific tasks that require a limited number of functions by adding and removing tools from the program's toolbar, avoiding an overwhelming number of unnecessary tools and functions. The processing and image rendering tools of the software are based on the open-source libraries ITK and VTK. This ensures that new developments in image processing emerging from other academic institutions using these libraries can be directly ported to the OsiriX program. OsiriX is provided free of charge under the GNU open-source licensing agreement at http://homepage.mac.com/rossetantoine/osirix.
Crowdsourcing Based 3D Modeling
NASA Astrophysics Data System (ADS)
Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.
2016-06-01
Web-based photo albums that support organizing and viewing users' images are widely used. These services provide a convenient solution for storing, editing, and sharing images, and in many cases users attach geotags to the images to enable their use, for example, in location-based applications on social networks. Our paper discusses a procedure that collects open-access images from a site frequently visited by tourists. Geotagged pictures showing a sight or tourist attraction are selected and processed in photogrammetric software that produces a 3D model of the captured object. For this investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used laser scanning as well as DSLR and smartphone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived with photogrammetric processing software simply by using the community's images, without visiting the site.
PANDA: a pipeline toolbox for analyzing brain diffusion images
Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang
2013-01-01
Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice for in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of a few established packages, including the FMRIB Software Library (FSL), the Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit and MRIcron, were employed in PANDA. Using any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI data into diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel level, the atlas level and the Tract-Based Spatial Statistics (TBSS) level, and can complete the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to interactively adjust the input/output settings as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies. PMID:23439846
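The diffusion metrics PANDA outputs follow directly from the tensor eigenvalues; for reference:

```python
# FA and MD from the three diffusion-tensor eigenvalues per voxel.
import numpy as np

def fa_md(evals):
    """evals: (..., 3) array of tensor eigenvalues (lambda1..lambda3)."""
    md = evals.mean(axis=-1)
    num = np.sqrt(((evals - md[..., None]) ** 2).sum(axis=-1))
    den = np.sqrt((evals ** 2).sum(axis=-1)) + 1e-12
    fa = np.sqrt(1.5) * num / den
    return fa, md
```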
AXAF-1 High Resolution Assembly Image Model and Comparison with X-Ray Ground Test Image
NASA Technical Reports Server (NTRS)
Zissa, David E.
1999-01-01
The x-ray ground test of the AXAF-I High Resolution Mirror Assembly was completed in 1997 at the X-ray Calibration Facility at Marshall Space Flight Center. Mirror surface measurements by HDOS, alignment results from Kodak, and the predicted gravity distortion in the horizontal test configuration are being used to model the x-ray test image. The Marshall Space Flight Center (MSFC) image modeling serves as a cross-check with Smithsonian Astrophysical Observatory modeling. The MSFC image prediction software has evolved from the MSFC model of the x-ray test of the largest AXAF-I mirror pair in 1991, and its development is being assisted by the University of Alabama in Huntsville. The modeling process, modeling software, and image prediction will be discussed, and the image prediction will be compared with the x-ray test results.
NASA Astrophysics Data System (ADS)
Law, Yuen C.; Tenbrinck, Daniel; Jiang, Xiaoyi; Kuhlen, Torsten
2014-03-01
Computer-assisted processing and interpretation of medical ultrasound images is one of the most challenging tasks within image analysis. Physical phenomena in ultrasonographic images, e.g., the characteristic speckle noise and shadowing effects, make the majority of standard image analysis methods suboptimal. Furthermore, validation of adapted computer vision methods proves to be difficult due to missing ground truth information. There is no widely accepted software phantom in the community, and existing software phantoms are not flexible enough to support the use of specific speckle models for different tissue types, e.g., muscle and fat tissue. In this work we propose an anatomical software phantom with realistic speckle pattern simulation to fill this gap and provide a flexible tool for validation purposes in medical ultrasound image analysis. We discuss the generation of speckle patterns and perform statistical analysis of the simulated textures to obtain quantitative measures of the realism and accuracy of the resulting textures.
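A common way to generate fully developed speckle, and a plausible reading of the approach (the authors' code may differ), is to convolve a randomly phased scatterer field, weighted by a tissue echogenicity map, with a complex PSF and take the envelope:

```python
# Minimal speckle-pattern generator over an echogenicity map.
import numpy as np
from scipy.signal import fftconvolve

def simulate_speckle(echogenicity, psf, rng=None):
    """echogenicity: 2D map of tissue reflectivity; psf: complex 2D kernel."""
    rng = rng or np.random.default_rng(0)
    amp = rng.standard_normal(echogenicity.shape) * np.sqrt(echogenicity)
    phase = rng.uniform(0, 2 * np.pi, echogenicity.shape)
    scatterers = amp * np.exp(1j * phase)
    rf = fftconvolve(scatterers, psf, mode="same")
    return np.abs(rf)              # envelope image with speckle texture
```

Per-tissue speckle models then amount to different echogenicity statistics or PSFs for, e.g., muscle versus fat regions.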
Gallo-Oller, Gabriel; Ordoñez, Raquel; Dotor, Javier
2018-06-01
Since its first description, Western blot has been widely used in molecular biology labs. It constitutes a multistep method that allows the detection and/or quantification of proteins from simple to complex protein mixtures. Quantification constitutes a critical step in obtaining accurate and reproducible Western blot results. Given the technical knowledge required for densitometry analysis, together with resource availability, standard office scanners are often used for image acquisition of developed Western blot films, and the use of semi-quantitative software such as ImageJ (Java-based image-processing and analysis software) is clearly increasing in different scientific fields. In this work, we describe the use of an office scanner coupled with the ImageJ software, together with a new image background subtraction method, for accurate Western blot quantification. The proposed method represents an affordable, accurate and reproducible approach that could be used where resource availability is limited. Copyright © 2018 Elsevier B.V. All rights reserved.
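A sketch of the densitometry idea, using a morphological opening as a stand-in for ImageJ's rolling-ball background subtraction (the paper's own background-subtraction scheme may differ):

```python
# Band densitometry with morphological background subtraction.
import numpy as np
from scipy.ndimage import grey_opening

def band_density(film, roi, ball=50):
    """film: 2D array (inverted so bands are bright); roi: (r0, r1, c0, c1)."""
    background = grey_opening(film, size=(ball, ball))   # smooth baseline
    net = film - background
    r0, r1, c0, c1 = roi
    return net[r0:r1, c0:c1].sum()   # integrated band intensity
```

Band intensities are then typically normalised to a loading control measured the same way.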
APQ-102 imaging radar digital image quality study
NASA Technical Reports Server (NTRS)
Griffin, C. R.; Estes, J. M.
1982-01-01
A modified APQ-102 side-looking radar collected synthetic aperture radar (SAR) data which were digitized and recorded on wideband magnetic tape. These tapes were then ground-processed into computer-compatible tapes (CCTs). The CCTs may then be processed into high-resolution radar images by software on the CYBER computer.
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps are reported, scientists not only follow good laboratory practices but also avoid the ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
Initial Navigation Alignment of Optical Instruments on GOES-R
NASA Astrophysics Data System (ADS)
Isaacson, P.; DeLuccia, F.; Reth, A. D.; Igli, D. A.; Carter, D.
2016-12-01
The GOES-R satellite is the first in NOAA's next-generation series of geostationary weather satellites. In addition to a number of space weather sensors, it will carry two principal optical earth-observing instruments, the Advanced Baseline Imager (ABI) and the Geostationary Lightning Mapper (GLM). During launch, currently scheduled for November of 2016, the alignment of these optical instruments is anticipated to shift from that measured during pre-launch characterization. While both instruments have image navigation and registration (INR) processing algorithms to enable automated geolocation of the collected data, the launch-derived misalignment may be too large for these approaches to function without an initial adjustment to calibration parameters. The parameters that may require adjustment are for Line of Sight Motion Compensation (LMC), and the adjustments will be estimated on orbit during the post-launch test (PLT) phase. We have developed approaches to estimate the initial alignment errors for both ABI and GLM image products. Our approaches involve comparison of ABI and GLM images collected during PLT to a set of reference ("truth") images using custom image processing tools and other software (the INR Performance Assessment Tool Set, or "IPATS") being developed for other INR assessments of ABI and GLM data. IPATS is based on image correlation approaches to determine offsets between input and reference images, and these offsets are the fundamental input to our estimate of the initial alignment errors. Initial testing of our alignment algorithms on proxy datasets lends high confidence that their application will determine the initial alignment errors to within sufficient accuracy to enable the operational INR processing approaches to proceed in a nominal fashion. We will report on the algorithms, implementation approach, and status of these initial alignment tools being developed for the GOES-R ABI and GLM instruments.
Spatially adaptive migration tomography for multistatic GPR imaging
Paglieroni, David W; Beer, N. Reginald
2013-08-13
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Synthetic aperture integration (SAI) algorithm for SAR imaging
Chambers, David H; Mast, Jeffrey E; Paglieroni, David W; Beer, N. Reginald
2013-07-09
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Zero source insertion technique to account for undersampling in GPR imaging
Chambers, David H; Mast, Jeffrey E; Paglieroni, David W
2014-02-25
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Real-time system for imaging and object detection with a multistatic GPR array
Paglieroni, David W; Beer, N Reginald; Bond, Steven W; Top, Philip L; Chambers, David H; Mast, Jeffrey E; Donetti, John G; Mason, Blake C; Jones, Steven M
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Qiu, L L; Li, S; Bai, Y X
2016-06-01
To develop surgical templates for orthodontic miniscrew implantation based on cone-beam CT (CBCT) three-dimensional (3D) images, and to evaluate the safety and stability of implantation guided by the templates. DICOM data obtained from patients who had undergone CBCT scans were processed using Mimics software, and 3D images of the teeth and maxillary bone were acquired. Meanwhile, 3D images of the miniscrews were acquired using Solidworks software and processed with Mimics software. The virtual position of the miniscrews was determined based on the 3D images of teeth, bone, and miniscrews. 3D virtual templates were designed according to the virtual implantation plans. STL files were output and the real templates were fabricated with a stereolithographic apparatus (SLA). Postoperative CBCT scans were used to evaluate implantation safety, and the stability of the miniscrews was investigated. All the templates were positioned accurately and remained stable throughout the implantation process. No root damage was found. The deviations were (1.73±0.65) mm at the corona and (1.28±0.82) mm at the apex, respectively. The stability of the miniscrews was good. Surgical templates for miniscrew implantation could be produced from 3D CBCT images and fabricated by SLA; implantation guided by these templates was safe and stable.
NASA Technical Reports Server (NTRS)
2013-01-01
Topics include: Cloud Absorption Radiometer Autonomous Navigation System - CANS, Software Method for Computed Tomography Cylinder Data Unwrapping, Re-slicing, and Analysis, Discrete Data Qualification System and Method Comprising Noise Series Fault Detection, Simple Laser Communications Terminal for Downlink from Earth Orbit at Rates Exceeding 10 Gb/s, Application Program Interface for the Orion Aerodynamics Database, Hyperspectral Imager-Tracker, Web Application Software for Ground Operations Planning Database (GOPDb) Management, Software Defined Radio with Parallelized Software Architecture, Compact Radar Transceiver with Included Calibration, Phase Change Material Thermal Power Generator, The Thermal Hogan - A Means of Surviving the Lunar Night, Micromachined Active Magnetic Regenerator for Low-Temperature Magnetic Coolers, Nano-Ceramic Coated Plastics, Preparation of a Bimetal Using Mechanical Alloying for Environmental or Industrial Use, Phase Change Material for Temperature Control of Imager or Sounder on GOES Type Satellites in GEO, Dual-Compartment Inflatable Suitlock, Modular Connector Keying Concept, Genesis Ultrapure Water Megasonic Wafer Spin Cleaner, Piezoelectrically Initiated Pyrotechnic Igniter, Folding Elastic Thermal Surface - FETS, Multi-Pass Quadrupole Mass Analyzer, Lunar Sulfur Capture System, Environmental Qualification of a Single-Crystal Silicon Mirror for Spaceflight Use, Planar Superconducting Millimeter-Wave/Terahertz Channelizing Filter, Qualification of UHF Antenna for Extreme Martian Thermal Environments, Ensemble Eclipse: A Process for Prefab Development Environment for the Ensemble Project, ISS Live!, Space Operations Learning Center (SOLC) iPhone/iPad Application, Software to Compare NPP HDF5 Data Files, Planetary Data Systems (PDS) Imaging Node Atlas II, Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit, Translating MAPGEN to ASPEN for MER, Support Routines for In Situ Image Processing, and Semi-Supervised Eigenbasis Novelty Detection.
Your Personal Analysis Toolkit - An Open Source Solution
NASA Astrophysics Data System (ADS)
Mitchell, T.
2009-12-01
Open source software is commonly known for its web browsers, word processors and programming languages. However, there is a vast array of open source software focused on geographic information management and geospatial application building in general. For geo-professionals, having easy access to tools for our jobs is crucial. Open source software provides the opportunity to add a tool to your tool belt and carry it with you for your entire career - with no license fees, a supportive community and the opportunity to test, adopt and upgrade at your own pace. OSGeo is a US-registered non-profit representing more than a dozen mature geospatial data management applications and programming resources. Tools cover areas such as desktop GIS, web-based mapping frameworks, metadata cataloging, spatial database analysis, image processing and more. Learn about some of these tools as they apply to AGU members, as well as how you can join OSGeo and its members in getting the job done with powerful open source tools. If you haven't heard of OSSIM, MapServer, OpenLayers, PostGIS, GRASS GIS or the many other projects under our umbrella - then you need to hear this talk. Invest in yourself - use open source!
1998-03-28
This image-based surface map of Pluto was assembled by computer image-processing software from four separate images of Pluto's disk taken with the European Space Agency's Faint Object Camera aboard NASA's Hubble Space Telescope.
Pre-Processes for Urban Areas Detection in SAR Images
NASA Astrophysics Data System (ADS)
Altay Açar, S.; Bayır, Ş.
2017-11-01
In this study, pre-processes for urban area detection in synthetic aperture radar (SAR) images are examined. These pre-processes are image smoothing, thresholding, and white-coloured region determination. Image smoothing is carried out to remove noise, then thresholding is applied to obtain a binary image. Finally, candidate urban areas are detected by white-coloured region determination. All pre-processes are applied using the developed software. Two different SAR images acquired by TerraSAR-X are used in the experimental study. The results are shown visually.
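The three pre-processes described above map naturally onto a few lines of array code. The following Python sketch is illustrative only (the authors' own software is not shown here); the Gaussian sigma and the mean-plus-one-standard-deviation threshold rule are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import ndimage

def candidate_urban_regions(sar_image, smooth_sigma=2.0, threshold=None):
    # 1) Image smoothing to suppress speckle noise.
    smoothed = ndimage.gaussian_filter(sar_image.astype(float), sigma=smooth_sigma)
    # 2) Thresholding to obtain a binary image.
    if threshold is None:
        threshold = smoothed.mean() + smoothed.std()  # assumed rule, not the paper's
    binary = smoothed > threshold
    # 3) White-coloured region determination via connected-component labelling.
    labels, n_regions = ndimage.label(binary)
    return labels, n_regions
```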
2009-09-01
[Fragmentary thesis excerpt: acknowledgments and a list of figures (phycobiliprotein absorption spectra; image processing for automated cell counts). Recoverable method detail: images were acquired with a digital camera and Axiovision 4.6.3 software, then measured, and cell metrics were determined using the MATLAB image processing toolbox.]
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Nurminen, K.; Hakala, T.
2014-08-01
Remote sensing based on unmanned airborne vehicles (UAVs) is a rapidly developing field of technology. UAVs enable accurate, flexible, low-cost and multiangular measurements of the 3D geometric, radiometric, and temporal properties of land and vegetation using various sensors. In this paper we present a geometric processing chain for a multiangular measurement system designed for measuring object directional reflectance characteristics in the wavelength range of 400-900 nm. The technique is based on a novel, lightweight spectral camera designed for UAV use. The multiangular measurement is conducted by collecting vertical and oblique area-format spectral images. End products of the geometric processing are image exterior orientations, 3D point clouds and digital surface models (DSMs). These data are needed for the radiometric processing chain that produces reflectance image mosaics and multiangular bidirectional reflectance factor (BRF) observations. The geometric processing workflow consists of the following three steps: (1) determining approximate image orientations using Visual Structure from Motion (VisualSFM) software, (2) calculating improved orientations and sensor calibration using a method based on self-calibrating bundle block adjustment (standard photogrammetric software; this step is optional), and finally (3) creating dense 3D point clouds and DSMs using Photogrammetric Surface Reconstruction from Imagery (SURE) software, which is based on a semi-global matching algorithm and is capable of providing a point density corresponding to the pixel size of the image. We have tested the geometric processing workflow over various targets, including test fields, agricultural fields, lakes and complex 3D structures such as forests.
Govsa, Figen; Ozer, Mehmet Asim; Sirinturk, Suzan; Eraslan, Cenk; Alagoz, Ahmet Kemal
2017-08-01
A new application of teaching anatomy includes the use of computed tomography angiography (CTA) images to create clinically relevant three-dimensional (3D) printed models. The purpose of this article is to review recent innovations in the process and the application of 3D printed models as a tool in undergraduate and postgraduate medical education. Images of the aortic arch pattern obtained by CTA were converted into 3D images using the free Google SketchUp software and saved in stereolithography format. Using a 3D printer (Makerbot), a model was printed in polylactic acid material. A two-vessel left aortic arch was identified, consisting of the brachiocephalic trunk and the left subclavian artery. The life-like 3D models could be rotated 360° in all axes by hand. Early adopters in education and clinical practice have embraced medical imaging-guided 3D printed anatomical models for their ability to provide tactile feedback and a superior appreciation of the visuospatial relationships between anatomical structures. Printed vascular models are used to assist in preoperative planning, to develop intraoperative guidance tools, and to teach patients and surgical trainees in surgical practice.
NASA Astrophysics Data System (ADS)
Karmazikov, Y. V.; Fainberg, E. M.
2005-06-01
The integration of DICOM-compatible equipment into medical hardware and software systems is considered. The structure of the data acquisition and transformation process is illustrated using the example of the digital radiography and angiography systems included in the DIMOL-IK hardware-software complex. Algorithms for data acquisition and analysis are proposed, and questions of the further processing and storage of the acquired data are considered.
yourSky: Custom Sky-Image Mosaics via the Internet
NASA Technical Reports Server (NTRS)
Jacob, Joseph
2003-01-01
yourSky (http://yourSky.jpl.nasa.gov) is a computer program that supplies custom astronomical image mosaics of sky regions specified by requesters using client computers connected to the Internet. [yourSky is an upgraded version of the software reported in Software for Generating Mosaics of Astronomical Images (NPO-21121), NASA Tech Briefs, Vol. 25, No. 4 (April 2001), page 16a.] A requester no longer has to engage in the tedious process of determining what subset of images is needed, nor even to know how the images are indexed in image archives. Instead, in response to a requester's specification of the size and location of the sky area (and optionally of the desired set and type of data, resolution, coordinate system, projection, and image format), yourSky automatically retrieves the component image data from archives totaling tens of terabytes stored on computer tape and disk drives at multiple sites and assembles the component images into a mosaic image by use of a high-performance parallel code. yourSky runs on the server computer where the mosaics are assembled. Because yourSky includes a Web-interface component, no special client software is needed: ordinary Web browser software is sufficient.
CellAnimation: an open source MATLAB framework for microscopy assays.
Georgescu, Walter; Wikswo, John P; Quaranta, Vito
2012-01-01
Advances in microscopy technology have led to the creation of high-throughput microscopes that are capable of generating several hundred gigabytes of images in a few days. Analyzing such a wealth of data manually is nearly impossible and requires an automated approach. There are at present a number of open-source and commercial software packages that allow the user to apply algorithms of different degrees of sophistication to the images and extract desired metrics. However, the types of metrics that can be extracted are severely limited by the specific image processing algorithms that the application implements, and by the expertise of the user. In most commercial software, code unavailability prevents implementation by the end user of newly developed algorithms better suited for a particular type of imaging assay. While it is possible to implement new algorithms in open-source software, rewiring an image processing application requires a high degree of expertise. To obviate these limitations, we have developed an open-source high-throughput application that allows implementation of different biological assays, such as cell tracking or ancestry recording, through the use of small, relatively simple image processing modules connected into sophisticated imaging pipelines. By connecting modules, non-expert users can apply the particular combination of well-established and novel algorithms developed by us and others that is best suited for each individual assay type. In addition, our data exploration and visualization modules make it easy to discover or select specific cell phenotypes from a heterogeneous population. CellAnimation is distributed under the Creative Commons Attribution-NonCommercial 3.0 Unported license (http://creativecommons.org/licenses/by-nc/3.0/). CellAnimation source code and documentation may be downloaded from www.vanderbilt.edu/viibre/software/documents/CellAnimation.zip. Sample data are available at www.vanderbilt.edu/viibre/software/documents/movies.zip. walter.georgescu@vanderbilt.edu Supplementary data available at Bioinformatics online.
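The module-pipeline idea is language-agnostic. A minimal Python sketch of the pattern (not CellAnimation's MATLAB API; the two example modules are hypothetical):

```python
from typing import Callable, Sequence
import numpy as np

Module = Callable[[np.ndarray], np.ndarray]

def run_pipeline(image: np.ndarray, modules: Sequence[Module]) -> np.ndarray:
    for module in modules:          # each module transforms the running image
        image = module(image)
    return image

def normalize(im):                  # rescale intensities to [0, 1]
    return (im - im.min()) / max(np.ptp(im), 1e-12)

def binarize(im):                   # crude segmentation by mean threshold
    return (im > im.mean()).astype(float)

segmented = run_pipeline(np.random.rand(64, 64), [normalize, binarize])
```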
FEM analysis of different dental root canal-post systems in young permanent teeth.
Vitale, M C; Chiesa, M; Coltellaro, F; Bignardi, C; Celozzi, M; Poggio, C
2008-09-01
The aim of this work was to carry out a comparative evaluation of the structural behaviour of different root canal posts (cylindrical, conical and triple conical) fitted in a second lower bicuspid and subjected to compression and bending tests. The study was carried out using the finite element method (FEM) of structural analysis. Different three-dimensional models were obtained from CAT images of an extracted tooth, endodontically treated and filled with gutta-percha and a triple conical glass post. The images were processed with image-analysis software (Mimics and Ansys) and CAD software (Rhinoceros 3D). A Class II restoration was virtually created in the models. In the numerical simulation, dental tissues (enamel, dentine and root cement), gutta-percha, root canal cement, the different posts, different cementation techniques and crown restoration (composites and adhesive systems) were considered. Strain distributions in the dental tissues, in the root canal cement and in the posts were compared. The equivalent tensions and their single components (traction, compression and shear) were analysed. In all the posts examined, the coronal portion proved to be the most highly strained, even if the total tension in the different tooth-post systems analysed was uniformly distributed. Similar behaviour was shown by the root canal cement. Under the analysed bond and load conditions, which vary according to the geometry of the posts considered, our results confirm that there is no substantial difference in deformation among the posts, the root canal cement and the treated tooth.
Stahlberg, E; Planert, M; Panagiotopoulos, N; Horn, M; Wiedner, M; Kleemann, M; Barkhausen, J; Goltz, J P
2017-02-01
The aim was to evaluate the feasibility and efficacy of a new method for pre-operative calculation of an appropriate C-arm position for iliac bifurcation visualisation during endovascular aortic repair (EVAR) procedures using three-dimensional computed tomography angiography (CTA) post-processing software. Post-processing software was used to simulate C-arm angulations in two dimensions (oblique, cranial/caudal) for appropriate visualisation of distal landing zones at the iliac bifurcation during EVAR. Retrospectively, 27 consecutive EVAR patients (25 men, mean ± SD age 73 ± 7 years) were identified; one group of patients (NEW; n = 12 [23 iliac bifurcations]), treated after implementation of the new method, was compared with a group of patients (OLD; n = 15 [23 iliac bifurcations]) treated with EVAR before the method was applied. In the OLD group, a median of 2.0 (interquartile range [IQR] 1-3) digital subtraction angiography runs were needed per iliac bifurcation versus 1.0 (IQR 1-1) runs in the NEW group (p = .007). The median dose area products per iliac bifurcation were 11951 mGy·cm² (IQR 7308-16663 mGy·cm²) for the NEW group and 39394 mGy·cm² (IQR 19066-53702 mGy·cm²) for the OLD group, respectively (p = .001). The median volume of contrast per iliac bifurcation was 13.0 mL (IQR 13-13 mL) in the NEW and 26 mL (IQR 13-39 mL) in the OLD group (p = .007). Pre-operative simulation of the appropriate C-arm angulation in two dimensions using dedicated computed tomography angiography post-processing software is feasible and significantly reduces radiation and contrast medium exposure. Copyright © 2016 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
Novel wavelength diversity technique for high-speed atmospheric turbulence compensation
NASA Astrophysics Data System (ADS)
Arrasmith, William W.; Sullivan, Sean F.
2010-04-01
The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. The development of parallel processing capabilities is finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. Previously, the somewhat conventional wavelength diversity method has been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.
Data to Pictures to Data: Outreach Imaging Software and Metadata
NASA Astrophysics Data System (ADS)
Levay, Z.
2011-07-01
A convergence between astronomy and digital photography has enabled a steady stream of visually rich imagery from state-of-the-art data. The accessibility of hardware and software has facilitated an explosion of astronomical images for outreach, from space-based observatories, ground-based professional facilities and among the vibrant amateur astrophotography community. Producing imagery from science data involves a combination of custom software to understand FITS data (FITS Liberator), off-the-shelf, industry-standard software to composite multi-wavelength data and edit digital photographs (Adobe Photoshop), and application of photo/image-processing techniques. Some additional effort is needed to close the loop and enable this imagery to be conveniently available for various purposes beyond web and print publication. The metadata paradigms in digital photography now comply with FITS and science software to carry information such as keyword tags and world coordinates, enabling these images to be used in more sophisticated, imaginative ways, exemplified by Sky in Google Earth and World Wide Telescope.
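A minimal sketch of the data-to-picture step, assuming a single-extension FITS file (the file name is a placeholder); FITS Liberator and Photoshop perform far more sophisticated versions of this stretch-and-export workflow:

```python
import numpy as np
from astropy.io import fits
import matplotlib.pyplot as plt

data = fits.getdata("ngc1234.fits").astype(float)
# Asinh stretch compresses the bright end so faint structure stays visible.
stretched = np.arcsinh(data - np.nanmin(data))
stretched /= np.nanmax(stretched)
plt.imsave("ngc1234.png", stretched, cmap="gray", origin="lower")
```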
ScanImage: flexible software for operating laser scanning microscopes.
Pologruto, Thomas A; Sabatini, Bernardo L; Svoboda, Karel
2003-05-17
Laser scanning microscopy is a powerful tool for analyzing the structure and function of biological specimens. Although numerous commercial laser scanning microscopes exist, some of the more interesting and challenging applications demand custom design. A major impediment to custom design is the difficulty of building custom data acquisition hardware and writing the complex software required to run the laser scanning microscope. We describe a simple, software-based approach to operating a laser scanning microscope without the need for custom data acquisition hardware. Data acquisition and control of laser scanning are achieved through standard data acquisition boards. The entire burden of signal integration and image processing is placed on the CPU of the computer. We quantitate the effectiveness of our data acquisition and signal conditioning algorithm under a variety of conditions. We implement our approach in an open source software package (ScanImage) and describe its functionality. We present ScanImage, software to run a flexible laser scanning microscope that allows easy custom design.
SoFAST: Automated Flare Detection with the PROBA2/SWAP EUV Imager
NASA Astrophysics Data System (ADS)
Bonte, K.; Berghmans, D.; De Groof, A.; Steed, K.; Poedts, S.
2013-08-01
The Sun Watcher with Active Pixels and Image Processing (SWAP) EUV imager onboard PROBA2 provides a non-stop stream of coronal extreme-ultraviolet (EUV) images at a cadence of typically 130 seconds. These images show the solar drivers of space weather, such as flares and erupting filaments. We have developed a software tool that automatically processes the images and localises and identifies flares. On the one hand, the output of this software tool is intended as a service to the Space Weather Segment of ESA's Space Situational Awareness (SSA) program. On the other hand, we consider the PROBA2/SWAP images as a model for the data from the Extreme Ultraviolet Imager (EUI) instrument prepared for the future Solar Orbiter mission, where onboard intelligence is required for prioritising data within the challenging telemetry quota. In this article we present the concept of the software, the first statistics on its effectiveness, and the real-time online display of its results. Our results indicate that it is not only possible to detect EUV flares automatically in an acquired dataset, but that quantifying a range of EUV dynamics is also possible. The method is based on thresholding of macropixelled image sequences. The robustness and simplicity of the algorithm are a clear advantage for future onboard use.
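The core detection step, thresholding of macropixelled image sequences, can be sketched compactly. The block size and brightening factor below are illustrative assumptions, not SoFAST's operational settings:

```python
import numpy as np

def macropixel(image, block=8):
    """Average the image over block x block macropixels (crop to a multiple)."""
    h = (image.shape[0] // block) * block
    w = (image.shape[1] // block) * block
    return image[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def flare_candidates(frame_prev, frame_next, factor=1.5):
    """Flag macropixels whose brightness jumps by more than `factor`."""
    m0, m1 = macropixel(frame_prev), macropixel(frame_next)
    return m1 > factor * np.maximum(m0, 1e-9)   # True where brightness jumps
```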
Software designs of image processing tasks with incremental refinement of computation.
Anastasia, Davide; Andreopoulos, Yiannis
2010-08-01
Software realizations of computationally-demanding image processing tasks (e.g., image transforms and convolution) do not currently provide graceful degradation when their clock-cycle budgets are reduced, e.g., when delay deadlines are imposed in a multitasking environment to meet throughput requirements. This is an important obstacle in the quest for full utilization of modern programmable platforms' capabilities, since worst-case considerations must be in place for reasonable quality of results. In this paper, we propose (and make available online) platform-independent software designs performing bitplane-based computation combined with an incremental packing framework in order to realize block transforms, 2-D convolution and frame-by-frame block matching. The proposed framework realizes incremental computation: progressive processing of input-source increments improves the output quality monotonically. Comparisons with the equivalent non-incremental software realization of each algorithm reveal that, for the same precision of the result, the proposed approach can lead to comparable or faster execution, while it can be arbitrarily terminated and provide the result up to the computed precision. Application examples with region-of-interest-based incremental computation, task scheduling per frame, and energy-distortion scalability verify that our proposal provides significant performance scalability with graceful degradation.
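Because convolution is linear, a bitplane decomposition of the input yields exactly this kind of incremental refinement: partial sums over bitplanes, taken from the most significant plane downward, approach the full result monotonically and can be stopped at any point. A sketch of the idea (not the authors' released designs):

```python
import numpy as np
from scipy.signal import convolve2d

def incremental_convolution(image_u8, kernel):
    """Yield progressively refined approximations of the full 2-D convolution,
    one input bitplane (MSB first) at a time; stopping early gives a usable
    partial result at the precision computed so far."""
    out_shape = (image_u8.shape[0] + kernel.shape[0] - 1,
                 image_u8.shape[1] + kernel.shape[1] - 1)
    approx = np.zeros(out_shape)
    for b in range(7, -1, -1):                       # most significant plane first
        plane = ((image_u8 >> b) & 1).astype(float) * (1 << b)
        approx = approx + convolve2d(plane, kernel)  # add one input increment
        yield b, approx
```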
Software for Partly Automated Recognition of Targets
NASA Technical Reports Server (NTRS)
Opitz, David; Blundell, Stuart; Bain, William; Morris, Matthew; Carlson, Ian; Mangrich, Mark
2003-01-01
The Feature Analyst is a computer program for assisted (partially automated) recognition of targets in images. This program was developed to accelerate the processing of high-resolution satellite image data for incorporation into geographic information systems (GIS). The program creates an advanced user interface that embeds proprietary machine-learning algorithms in commercial image-processing and GIS software. A human analyst provides samples of target features from multiple sets of data, then the software develops a data-fusion model that automatically extracts the remaining features from selected sets of data. The program thus leverages the natural ability of humans to recognize objects in complex scenes, without requiring the user to explain the human visual recognition process by means of lengthy software. Two major subprograms are the reactive agent and the thinking agent. The reactive agent strives to quickly learn the user's tendencies while the user is selecting targets and to increase the user's productivity by immediately suggesting the next set of pixels that the user may wish to select. The thinking agent utilizes all available resources, taking as much time as needed, to produce the most accurate autonomous feature-extraction model possible.
Evaluation of a breast software model for 2D and 3D X-ray imaging studies of the breast.
Baneva, Yanka; Bliznakova, Kristina; Cockmartin, Lesley; Marinov, Stoyko; Buliev, Ivan; Mettivier, Giovanni; Bosmans, Hilde; Russo, Paolo; Marshall, Nicholas; Bliznakov, Zhivko
2017-09-01
In X-ray imaging, test objects reproducing the characteristics of breast anatomy are used to optimize image processing and reconstruction, lesion detection performance, image quality and radiation-induced detriment. Recently, a physical phantom with a structured background was introduced for both 2D mammography and breast tomosynthesis. A software version of this phantom and a few related versions are now available, and a comparison between these 3D software phantoms and the physical phantom is presented here. The software breast phantom simulates a semi-cylindrical container filled with spherical beads of different diameters. Four computational breast phantoms were generated with a dedicated software application; for two of these, physical phantoms are also available and are used for the side-by-side comparison. Planar projections in mammography and tomosynthesis were simulated under identical incident air kerma conditions. Tomosynthesis slices were reconstructed with in-house developed reconstruction software. In addition to a visual comparison, parameters such as fractal dimension, power-law exponent β, and second-order statistics (skewness, kurtosis) of planar projections and tomosynthesis reconstructed images were compared. Visually, an excellent agreement between simulated and real planar and tomosynthesis images is observed. The comparison also shows an overall very good agreement between parameters evaluated from simulated and experimental images. The computational breast phantoms showed a close match with their physical versions. The detailed mathematical analysis of the images confirms the agreement between real and simulated 2D mammography and tomosynthesis images. The software phantom is ready for optimization purposes and extrapolation to other breast imaging techniques. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
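One of the cited texture measures, the power-law exponent β, is conventionally estimated from the slope of the radially averaged power spectrum on log-log axes. A sketch under that convention (the fit range is an assumption):

```python
import numpy as np

def power_law_beta(image):
    """Estimate beta where radially averaged power ~ frequency**(-beta)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image - image.mean()))) ** 2
    cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
    y, x = np.indices(spectrum.shape)
    r = np.hypot(y - cy, x - cx).astype(int)        # integer radius per pixel
    radial = (np.bincount(r.ravel(), spectrum.ravel())
              / np.maximum(np.bincount(r.ravel()), 1))
    rmax = min(cy, cx)
    freqs = np.arange(1, rmax)                      # skip the DC bin
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[1:rmax]), 1)
    return -slope
```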
A software to digital image processing to be used in the voxel phantom development.
Vieira, J W; Lima, F R A
2009-11-15
Anthropomorphic models used in computational dosimetry, also called phantoms, are based on digital images recorded from scans of real people by computed tomography (CT) or magnetic resonance imaging (MRI). Voxel phantom construction requires computational processing for transformation of image formats, compaction of two-dimensional (2-D) images into three-dimensional (3-D) matrices, image sampling and quantization, image enhancement, restoration and segmentation, among other tasks. A researcher in computational dosimetry will rarely find all of these capabilities in a single piece of software, and this difficulty almost always slows the pace of the research or forces the use, sometimes inadequate, of alternative tools. The need to integrate the several tasks mentioned above to obtain an image that can be used in a computational exposure model motivated the development of the Digital Image Processing (DIP) software, mainly to solve particular problems in dissertations and theses developed by members of the Grupo de Pesquisa em Dosimetria Numérica (GDN/CNPq). Because of this particular objective, the software uses the Portuguese language in its implementation and interfaces. This paper presents the second version of DIP, whose main changes are a more formal organization of menus and menu items and a menu for digital image segmentation. Currently, DIP contains the menus Fundamentos, Visualizações, Domínio Espacial, Domínio de Frequências, Segmentações and Estudos. Each menu contains items and sub-items with functionalities that, usually, take an image as input and produce an image or an attribute as output. DIP reads, edits and writes binary files containing the 3-D matrix corresponding to a stack of axial images of a given geometry, which can be a human body or another volume of interest. It can also read any type of computational image and make conversions. When a task involves only an output image, this is saved as a JPEG file using the Windows default; when it involves an image stack, the output binary file is denominated SGI (Simulações Gráficas Interativas, or Interactive Graphic Simulations), an acronym already used in other publications of the GDN/CNPq.
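Reading one of the described 3-D binary stacks reduces to a reshape of raw bytes. In this sketch the file name, data type, and dimensions are hypothetical; in practice they would come from the study's documentation:

```python
import numpy as np

# Hypothetical stack geometry: slices, rows, columns.
nz, ny, nx = 220, 256, 256
volume = np.fromfile("phantom.sgi", dtype=np.uint8).reshape(nz, ny, nx)

mid_axial = volume[nz // 2]            # one 2-D axial image from the stack
np.save("phantom_volume.npy", volume)  # re-export in a portable format
```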
Digital Earth Watch And Picture Post Network: Measuring The Environment Through Digital Images
NASA Astrophysics Data System (ADS)
Schloss, A. L.; Beaudry, J.; Pickle, J.; Carrera, F.
2012-12-01
Digital Earth Watch (DEW) involves individuals, schools, organizations and communities in the systematic monitoring of their local environment, especially vegetation health. The program offers people the means to join the Picture Post network and to study and analyze their own findings using DEW software. A Picture Post is an easy-to-use and inexpensive platform for repeatedly taking digital photographs as a standardized set of images of the entire 360° landscape, which can then be shared over the Internet on the Picture Post website. This simple concept has the potential to create a wealth of information and data on changing environmental conditions, which is important for a society grappling with the effects of environmental change. Picture Posts may be added by anyone interested in monitoring a particular location. The value of a Picture Post is in the commitment of participants to take repeated photographs - monthly, weekly, or even daily - to build up a long-term record over many years. This poster shows examples of Picture Post pictures being used for monitoring and research applications, and a DEW mobile app for capturing repeat digital photographs at a virtual post. We invite individuals, schools, informal education centers, groups and communities to join. [Figure captions: a new post and its website; creating a virtual post using the mobile app.]
NASA Astrophysics Data System (ADS)
Pandey, Palak; Kunte, Pravin D.
2016-10-01
This study presents an easy, modular, user-friendly, and flexible software package for processing Landsat 7 ETM+ and Landsat 8 OLI-TIRS data to estimate suspended particulate matter concentrations in coastal waters. The package includes 1) algorithms developed using the freely downloadable SCILAB package, 2) ERDAS models for iterative processing of Landsat images, and 3) an ArcMap tool for plotting and map making. Using the SCILAB package, a module is written for geometric corrections, radiometric corrections, and obtaining normalized water-leaving reflectance from Landsat 8 OLI-TIRS and Landsat 7 ETM+ data. Using ERDAS models, a sequence of modules is developed for iterative processing of Landsat images and estimating suspended particulate matter concentrations. Processed images are used to prepare suspended sediment concentration maps. The applicability of the package is demonstrated by estimating and plotting seasonal suspended sediment concentration maps off the Bengal delta. The software is flexible enough to accommodate other remotely sensed data, such as Ocean Color Monitor (OCM) data, Indian Remote Sensing (IRS) data, and MODIS data, by replacing a few parameters in the algorithm.
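A central radiometric step in such a chain is the conversion of Landsat 8 OLI digital numbers to top-of-atmosphere reflectance. The sketch below uses the standard USGS formula; the multiplicative and additive factors come from the scene's MTL metadata file (the defaults shown are the commonly published values, given for illustration):

```python
import numpy as np

def toa_reflectance(dn, mult=2.0e-5, add=-0.1, sun_elev_deg=60.0):
    """TOA reflectance = (M_rho * DN + A_rho) / sin(sun elevation);
    mult/add and the sun elevation are read from the MTL metadata."""
    rho = mult * dn.astype(float) + add              # scaled reflectance
    return rho / np.sin(np.radians(sun_elev_deg))    # sun-angle correction
```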
Dipy, a library for the analysis of diffusion MRI data.
Garyfallidis, Eleftherios; Brett, Matthew; Amirbekian, Bagrat; Rokem, Ariel; van der Walt, Stefan; Descoteaux, Maxime; Nimmo-Smith, Ian
2014-01-01
Diffusion Imaging in Python (Dipy) is a free and open source software project for the analysis of data from diffusion magnetic resonance imaging (dMRI) experiments. dMRI is an application of MRI that can be used to measure structural features of brain white matter. Many methods have been developed to use dMRI data to model the local configuration of white matter nerve fiber bundles and infer the trajectory of bundles connecting different parts of the brain. Dipy gathers implementations of many different methods in dMRI, including: diffusion signal pre-processing; reconstruction of diffusion distributions in individual voxels; fiber tractography and fiber track post-processing, analysis and visualization. Dipy aims to provide transparent implementations for all the different steps of dMRI analysis with a uniform programming interface. We have implemented classical signal reconstruction techniques, such as the diffusion tensor model and deterministic fiber tractography. In addition, cutting edge novel reconstruction techniques are implemented, such as constrained spherical deconvolution and diffusion spectrum imaging (DSI) with deconvolution, as well as methods for probabilistic tracking and original methods for tractography clustering. Many additional utility functions are provided to calculate various statistics, informative visualizations, as well as file-handling routines to assist in the development and use of novel techniques. In contrast to many other scientific software projects, Dipy is not being developed by a single research group. Rather, it is an open project that encourages contributions from any scientist/developer through GitHub and open discussions on the project mailing list. Consequently, Dipy today has an international team of contributors, spanning seven different academic institutions in five countries and three continents, which is still growing.
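A minimal sketch of a typical Dipy workflow, from raw diffusion data to a fractional anisotropy map; the file names are placeholders:

```python
from dipy.io.image import load_nifti
from dipy.io.gradients import read_bvals_bvecs
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel

# Load the diffusion volume and its acquisition scheme.
data, affine = load_nifti("dwi.nii.gz")
bvals, bvecs = read_bvals_bvecs("dwi.bval", "dwi.bvec")
gtab = gradient_table(bvals, bvecs)

# Fit the classical diffusion tensor model voxel by voxel.
fit = TensorModel(gtab).fit(data)
fa = fit.fa   # fractional anisotropy volume
```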
Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L
2018-07-01
To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms were evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on the visual grading analysis score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p > 0.05), whereas the PIP detector had significantly lower VGAS (p < 0.0001). Processing did not influence VGAS (p = 0.819). Increasing dose resulted in significantly higher VGAS (p < 0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting the initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.
NASA Astrophysics Data System (ADS)
Chen, Wo-Hsing; Sanghvi, Narendra T.; Carlson, Roy; Uchida, Toyoaki
2011-09-01
Sonablate® 500 (SB-500) HIFU devices have been used successfully to treat prostate cancer non-invasively. In addition, Visually Directed HIFU with the SB-500 has demonstrated higher efficacy. Visually Directed HIFU works by displaying hyperechoic changes on the B-mode ultrasound images. However, small changes in the grey-scale images are not detectable by Visually Directed HIFU. To detect all tissue changes reliably, the SB-500 was enhanced with quantitative, real-time Tissue Change Monitoring (TCM) software. TCM uses pulse-echo ultrasound backscattered RF signals in 2D to estimate changes in tissue properties caused by HIFU. The RF signal energy difference is calculated in selected frequency bands (pre and post HIFU) for each treatment site. The results are overlaid on the real-time ultrasound image in green, yellow and orange to represent low, medium and high degrees of change in backscattered energy levels. The color mapping scheme was derived from measured temperatures and backscattered RF signals in in vitro chicken tissue experiments. The TCM software was installed and tested in a clinical device to obtain human RF data. Post-HIFU contrast-enhanced MRI scans verified necrotic regions of the prostate. The color mapping success rate at higher HIFU power levels was 94% in the initial clinical test. Based on these results, TCM software has been released for wider usage. The clinical studies with TCM in Japan and The Bahamas have provided the following PSA (ng/ml) results. Japan (n = 97), PSA pre-treatment/post-treatment: minimum 0.7/0.0, maximum 76.0/4.73, median 6.89/0.07, standard deviation 11.19/0.62. The Bahamas (n = 59): minimum 0.4/0.0, maximum 13.0/1.4, median 4.7/0.1, standard deviation 2.8/0.3.
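The underlying computation, the RF energy difference in a selected frequency band before and after HIFU, can be sketched as follows; the sampling rate and band edges are illustrative, not SB-500 parameters, and the two signals are assumed to be the same length:

```python
import numpy as np

def band_energy_change_db(rf_pre, rf_post, fs=50e6, band=(2e6, 6e6)):
    """Energy change (dB) of backscattered RF in a frequency band."""
    freqs = np.fft.rfftfreq(len(rf_pre), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    e_pre = np.sum(np.abs(np.fft.rfft(rf_pre))[in_band] ** 2)
    e_post = np.sum(np.abs(np.fft.rfft(rf_post))[in_band] ** 2)
    return 10 * np.log10(e_post / e_pre)   # >0 dB means stronger backscatter
```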
NASA Astrophysics Data System (ADS)
Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.
2009-12-01
This paper concerns easily accessible, integrated web-based analysis of satellite images with plug-in-based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use-case scenario, we describe the ILWIS software and its toolbox for accessing satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine, without much effort, their own data with remotely available data and processing functionality. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are increasingly a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS and remote sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping tools, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open source software package, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-) services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been done since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software using the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes satellite images available via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data stored locally on his machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins, and unfold our plans to implement other OGC standards, such as WCS and WPS, in the same context. The latter, especially, can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even execution in scalable, on-demand cloud computing environments.
gr-MRI: A software package for magnetic resonance imaging using software defined radios
NASA Astrophysics Data System (ADS)
Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.
2016-09-01
The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
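Generating a frequency-swept (chirp) pulse as complex baseband samples, as validated in the study, is a standard computation; this numpy sketch is generic and not gr-MRI's own API:

```python
import numpy as np

def swept_pulse(f_start, f_stop, duration, fs):
    """Linear frequency sweep: the phase is the time integral of f(t)."""
    t = np.arange(int(duration * fs)) / fs
    k = (f_stop - f_start) / duration            # sweep rate, Hz/s
    phase = 2 * np.pi * (f_start * t + 0.5 * k * t**2)
    return np.exp(1j * phase)                    # complex baseband samples

pulse = swept_pulse(-250e3, 250e3, 2e-3, 2e6)    # a 500 kHz bandwidth sweep
```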
Semi-automated camera trap image processing for the detection of ungulate fence crossing events.
Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija
2017-09-27
Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring a substantial investment of time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone, semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had previously been processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a 54.8% reduction in images requiring further human characterization while retaining 72.6% of the known fence crossing events. This program can give researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
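A background-subtraction rule of the kind described can be sketched in a few lines; the thresholds and the running-average update rate below are illustrative, not the program's tuned values:

```python
import numpy as np

def is_candidate_event(frame, background, diff_thresh=25, frac_thresh=0.01):
    """Flag a grayscale frame if more than `frac_thresh` of its pixels
    differ from the background model by more than `diff_thresh`."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > diff_thresh).mean() > frac_thresh

def update_background(background, frame, alpha=0.05):
    """Running-average background model, updated a little per frame."""
    return (1 - alpha) * background + alpha * frame.astype(float)
```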
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.
He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper
2018-07-26
Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, one of the biggest remaining challenges is easy-to-use data processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, a user-friendly and full-featured MSI software package named MassImager, comprising three subsystems (Solution, Visualization, and Intelligence), is developed, focusing on interactive visualization, in situ biomarker discovery, and artificial-intelligence-assisted pathological diagnosis. Simplified data preprocessing and high-throughput MSI data exchange and serialization jointly guarantee quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse user-defined operations for visual processing, including multiple-ion visualization, multiple-channel superposition, image normalization, visual resolution enhancement, and image filtering. Regions-of-interest analysis can be performed precisely through interactive visualization between the ion images and mass spectra, guided by an overlaid optical image, to directly find region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, which shows great potential for visual in situ biomarker discovery and artificial-intelligence-assisted pathological diagnosis of cancer. All these features are integrated in MassImager to provide a deep MSI processing solution at the in situ metabolomics level for biomarker discovery and future clinical pathological diagnosis.
Noninvasive Test Detects Cardiovascular Disease
NASA Technical Reports Server (NTRS)
2007-01-01
At NASA's Jet Propulsion Laboratory (JPL), NASA-developed Video Imaging Communication and Retrieval (VICAR) software laid the groundwork for analyzing images of all kinds. A project seeking to use imaging technology for health care diagnosis began when the imaging team considered using the VICAR software to analyze X-ray images of soft tissue. With marginal success using X-rays, the team applied the same methodology to ultrasound imagery, which was already digitally formatted. The new approach proved successful for assessing amounts of plaque build-up and arterial wall thickness, direct predictors of heart disease, and the result was a noninvasive diagnostic system with the ability to accurately predict heart health. Medical Technologies International Inc. (MTI) further developed and then submitted the technology to a rigorous review process at the FDA, which cleared the software for public use. The software, patented under the name Prowin, is being used in MTI's patented ArterioVision, a carotid intima-media thickness (CIMT) test that uses ultrasound image-capturing and analysis software to noninvasively identify the risk for the major cause of heart attacks and strokes: atherosclerosis. ArterioVision provides a direct measurement of atherosclerosis by safely and painlessly measuring the thickness of the first two layers of the carotid artery wall using an ultrasound procedure and advanced image-analysis software. The technology is now in use in all 50 states and in many countries throughout the world.
Advances in Software Tools for Pre-processing and Post-processing of Overset Grid Computations
NASA Technical Reports Server (NTRS)
Chan, William M.
2004-01-01
Recent developments in three pieces of software for performing pre-processing and post-processing work on numerical computations using overset grids are presented. The first is the OVERGRID graphical interface which provides a unified environment for the visualization, manipulation, generation and diagnostics of geometry and grids. Modules are also available for automatic boundary conditions detection, flow solver input preparation, multiple component dynamics input preparation and dynamics animation, simple solution viewing for moving components, and debris trajectory analysis input preparation. The second is a grid generation script library that enables rapid creation of grid generation scripts. A sample of recent applications will be described. The third is the OVERPLOT graphical interface for displaying and analyzing history files generated by the flow solver. Data displayed include residuals, component forces and moments, number of supersonic and reverse flow points, and various dynamics parameters.
Unmanned Vehicle Guidance Using Video Camera/Vehicle Model
NASA Technical Reports Server (NTRS)
Sutherland, T.
1999-01-01
A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
Raster Metafile And Raster Metafile Translator Programs
NASA Technical Reports Server (NTRS)
Randall, Donald P.; Gates, Raymond L.; Skeens, Kristi M.
1994-01-01
The Raster Metafile (RM) computer program implements a generic raster image format, and the Raster Metafile Translator (RMT) program is an assortment of software tools for processing images prepared in this format. Processing includes reading, writing, and displaying RM images. Other image-manipulation features, such as a minimal compositing operator and a resizing option, are available under the RMT command structure. RMT is written in FORTRAN 77 and C.
Photometric Modeling of Simulated Surface-Resolved Bennu Images
NASA Astrophysics Data System (ADS)
Golish, D.; DellaGiustina, D. N.; Clark, B.; Li, J. Y.; Zou, X. D.; Bennett, C. A.; Lauretta, D. S.
2017-12-01
The Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) is a NASA mission to study and return a sample of asteroid (101955) Bennu. Imaging data from the mission will be used to develop empirical surface-resolved photometric models of Bennu at a series of wavelengths. These models will be used to photometrically correct panchromatic and color base maps of Bennu, compensating for variations due to shadows and photometric angle differences, thereby minimizing seams in mosaicked images. Well-corrected mosaics are critical to the generation of a global hazard map and a global 1064-nm reflectance map which predicts LIDAR response. These data products directly feed into the selection of a site from which to safely acquire a sample. We also require photometric correction for the creation of color ratio maps of Bennu. Color ratio maps provide insight into the composition and geological history of the surface and allow for comparison to other Solar System small bodies. In advance of OSIRIS-REx's arrival at Bennu, we use simulated images to judge the efficacy of both the photometric modeling software and the mission observation plan. Our simulation software is based on USGS's Integrated Software for Imagers and Spectrometers (ISIS) and uses a synthetic shape model, a camera model, and an empirical photometric model to generate simulated images. This approach gives us the flexibility to create simulated images of Bennu based on analog surfaces from other small Solar System bodies and to test our modeling software under those conditions. Our photometric modeling software fits image data to several conventional empirical photometric models and produces the best fit model parameters. The process is largely automated, which is crucial to the efficient production of data products during proximity operations. The software also produces several metrics on the quality of the observations themselves, such as surface coverage and the completeness of the data set for evaluating the phase and disk functions of the surface. Application of this software to simulated mission data has revealed limitations in the initial mission design, which has fed back into the planning process. The entire photometric pipeline further serves as an exercise of planned activities for proximity operations.
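As an illustration of fitting one conventional empirical photometric model, the Minnaert disk function, to image data (the mission pipeline fits several models and is more elaborate; the function names, starting values, and use of SciPy here are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def minnaert(angles, albedo, k):
    """Minnaert model: I/F = albedo * mu0**k * mu**(k - 1),
    where mu0 = cos(incidence) and mu = cos(emission)."""
    mu0, mu = angles
    return albedo * mu0**k * mu**(k - 1)

def fit_minnaert(inc, emi, iof):
    """Fit (albedo, k) to per-pixel incidence/emission angles (radians)
    and measured I/F. Pixels near the limb (mu ~ 0) should be masked
    before fitting to keep the model well behaved."""
    mu0, mu = np.cos(inc), np.cos(emi)
    popt, _ = curve_fit(minnaert, (mu0, mu), iof, p0=[0.03, 0.6])
    return popt  # best-fit (albedo, k)
```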
Software implementation of the SKIPSM paradigm under PIP
NASA Astrophysics Data System (ADS)
Hack, Ralf; Waltz, Frederick M.; Batchelor, Bruce G.
1997-09-01
SKIPSM (separated-kernel image processing using finite state machines) is a technique for implementing large-kernel binary-morphology operators and many other operations. While earlier papers on SKIPSM concentrated mainly on implementations using pipelined hardware, there is considerable scope for achieving major speed improvements in software systems. Using identical control software, one-pass binary erosion and dilation with structuring elements (SEs) ranging from the trivial (3 by 3) to the gigantic (51 by 51, or even larger) are readily available. Processing speed is independent of the size of the SE, making the SKIPSM approach practical for work with very large SEs on ordinary desktop computers. PIP (prolog image processing) is an interactive machine vision prototyping environment developed at the University of Wales Cardiff. It consists of a large number of image processing operators embedded within the standard AI language Prolog. This paper describes the SKIPSM implementation of binary morphology operators within PIP. A large set of binary erosion and dilation operations (circles, squares, diamonds, octagons, etc.) is available to the user through a command-line driven dialogue, via pull-down menus, or incorporated into standard (Prolog) programs. Little has been done thus far to optimize speed in this first software implementation of SKIPSM. Nevertheless, the results are impressive. The paper describes sample applications and presents timing figures. Readers have the opportunity to try out these operations on demonstration software written by the University of Wales, or via their WWW home page at http://bruce.cs.cf.ac.uk/bruce/index.html.
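A minimal sketch of the idea that makes SE-size-independent erosion possible: for a flat square SE, the 2-D erosion separates into row and column passes, and each 1-D pass reduces to a run-length counter updated once per pixel, i.e. a tiny finite state machine (a pure-Python sketch in the spirit of SKIPSM, not the PIP implementation):

```python
import numpy as np

def erode_1d(row, n):
    """1-D binary erosion by a length-n window. The run-length counter
    is the state of a small finite state machine updated once per
    pixel, so per-pixel cost is independent of n."""
    out = np.zeros_like(row)
    run = 0
    for i, v in enumerate(row):
        run = run + 1 if v else 0   # FSM state: current run of 1s
        if run >= n:
            out[i - n // 2] = 1     # centred output pixel
    return out

def erode_square(img, n):
    """Erosion by an n-by-n square SE, separated into a row pass
    followed by a column pass."""
    tmp = np.array([erode_1d(r, n) for r in img])
    return np.array([erode_1d(c, n) for c in tmp.T]).T
```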
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, R.M.; Zander, M.E.; Brown, S.K.
1992-09-01
This paper describes the application of video image processing to beam profile measurements on the Ground Test Accelerator (GTA). A diagnostic was needed to measure beam profiles in the intermediate matching section (IMS) between the radio-frequency quadrupole (RFQ) and the drift tube linac (DTL). Beam profiles are measured by injecting puffs of gas into the beam. The light emitted from the beam-gas interaction is captured and processed by a video image processing system, generating the beam profile data. A general purpose, modular and flexible video image processing system, imagetool, was used for the GTA image profile measurement. The development of both software and hardware for imagetool and its integration with the GTA control system (GTACS) will be discussed. The software includes specialized algorithms for analyzing data and calibrating the system. The underlying design philosophy of imagetool was tested by the experience of building and using the system, pointing the way for future improvements. The current status of the system will be illustrated by samples of experimental data.
Direct-Solve Image-Based Wavefront Sensing
NASA Technical Reports Server (NTRS)
Lyon, Richard G.
2009-01-01
A method of wavefront sensing (more precisely characterized as a method of determining the deviation of a wavefront from a nominal figure) has been invented as an improved means of assessing the performance of an optical system as affected by such imperfections as misalignments, design errors, and fabrication errors. The method is implemented by software running on a single-processor computer that is connected, via a suitable interface, to the image sensor (typically, a charge-coupled device) in the system under test. The software collects a digitized single image from the image sensor and displays it on a computer monitor. The software directly solves for the wavefront in a fraction of a second, and a picture of the wavefront is displayed. The solution process involves, among other things, fast Fourier transforms. It has been reported that the wavefront is decomposed into modes of the optical system under test, though it has not been reported whether this decomposition is post-processing of the solution or part of the solution process itself.
Imaging Systems: What, When, How.
ERIC Educational Resources Information Center
Lunin, Lois F.; And Others
1992-01-01
The three articles in this special section on document image files discuss intelligent character recognition, including comparison with optical character recognition; selection of displays for document image processing, focusing on paperlike displays; and imaging hardware, software, and vendors, including guidelines for system selection. (MES)
[Application of optical flow dynamic texture in land use/cover change detection].
Yan, Li; Gong, Yi-Long; Zhang, Yi; Duan, Wei
2014-11-01
In the present study, a novel change detection approach for high-resolution remote sensing images is proposed based on the optical flow dynamic texture (OFDT), which extracts land use/land cover change information automatically through a dynamic description of ground-object changes. Optical flow theory is used to describe, in principle, the gradual change of ground objects, breaking with the assumption of sudden ground-object change made by past remote sensing change detection methods. Because the steps of the method are simple, it can be integrated into systems and software that need to detect ground-object changes, such as land resource management and urban planning software. The method takes into account the temporal dimension between remote sensing images, which provides a richer set of information for change detection and improves on the many methods that depend mainly on spatial information. Here, the optical flow dynamic texture is the basic reflection of change and is used, combined with spectral information, in support vector machine post-classification change detection on high-resolution imagery. The temporal texture considered here involves a smaller amount of data than most spatial textures, and the highly automated texture computation has only one parameter to set, relieving much of the burden of manual evaluation. The effectiveness of the proposed approach is evaluated with 2011 and 2012 QuickBird datasets covering Duerbert Mongolian Autonomous County of Daqing City, China, and the effects of different optical flow smoothness coefficients on the description of ground-object changes are analyzed in depth. The experimental result is satisfactory, with an overall accuracy of 87.29% and a Kappa index of 0.8507, and the method achieves better performance than post-classification change detection using spectral information only.
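A hedged sketch of deriving a temporal "dynamic texture" band from two co-registered acquisitions using dense optical flow (OpenCV's Farneback flow is a stand-in here; the paper's exact optical flow formulation and smoothness handling may differ, and the parameter values are illustrative):

```python
import cv2
import numpy as np

def flow_magnitude_texture(img_t1, img_t2, poly_sigma=1.2):
    """Dense optical flow between two co-registered images of the same
    scene at different dates; the per-pixel flow magnitude serves as a
    temporal texture band that can be stacked with spectral bands for
    SVM post-classification change detection."""
    g1 = cv2.cvtColor(img_t1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img_t2, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(
        g1, g2, None, 0.5, 3, 15, 3, 5, poly_sigma, 0)
    return np.linalg.norm(flow, axis=2)   # per-pixel change magnitude
```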
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
21 CFR 876.1300 - Ingestible telemetric gastrointestinal capsule imaging system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... images of the small bowel with a wireless camera contained in a capsule. This device includes an... receiving/recording unit, a data storage device, computer software to process the images, and accessories...
Image processing system design for microcantilever-based optical readout infrared arrays
NASA Astrophysics Data System (ADS)
Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu
2012-12-01
Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication; in addition, theory predicts high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on an image capturing and processing system for this technology. The system consists of software and hardware. The image processing core hardware platform is built around TI's high-performance TMS320DM642 DSP, and the image capturing board around the MT9P031, Micron's high-frame-rate, low-power CMOS sensor. Finally, Intel's LXT971A network transceiver is used to design the network output board. The software system is built on the real-time operating system DSP/BIOS: the video capture driver is based on TI's class/mini-driver model, and the network output program on the NDK kit, for image capturing, processing, and transmission. Experiments show that the system achieves high capture resolution and fast processing speed, with network transmission speeds up to 100 Mbps.
The image-guided surgery toolkit IGSTK: an open source C++ software toolkit.
Enquobahrie, Andinet; Cheng, Patrick; Gary, Kevin; Ibanez, Luis; Gobbi, David; Lindseth, Frank; Yaniv, Ziv; Aylward, Stephen; Jomier, Julien; Cleary, Kevin
2007-11-01
This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process philosophically similar to agile methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining a users' and developers' mailing list, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
SU-F-J-72: A Clinical Usable Integrated Contouring Quality Evaluation Software for Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, S; Dolly, S; Cai, B
Purpose: To introduce the Auto Contour Evaluation (ACE) software, a clinically usable, user-friendly, efficient, all-in-one toolbox for automatically identifying common contouring errors in radiotherapy treatment planning using supervised machine learning techniques. Methods: ACE is developed in C# on the Microsoft .NET framework, with Windows Presentation Foundation (WPF) for elegant GUI design and smooth GUI transition animations through the integration of graphics engines and high dots-per-inch (DPI) settings on modern high-resolution monitors. The industry-standard Model-View-ViewModel (MVVM) design pattern was chosen as the major architecture of ACE for a clean coding structure, deep modularization, easy maintainability, and seamless communication with other clinical software. ACE consists of 1) a patient data importing module integrated with the clinical patient database server, 2) a module that displays 2D DICOM images and RT structures simultaneously, 3) a 3D RT structure visualization module using the Visualization Toolkit (VTK) library, and 4) a contour evaluation module using supervised pattern recognition algorithms to detect contouring errors and display detection results. ACE relies on supervised learning algorithms for all image and data processing; implementations of the related algorithms are powered by the Accord.NET scientific computing library for efficiency and effectiveness. Results: ACE can take a patient's CT images and RT structures from commercial treatment planning software via direct user input or from the patient database. All functionalities, including 2D and 3D image visualization and RT contour error detection, have been demonstrated with real clinical patient cases. Conclusion: ACE implements supervised learning algorithms and combines image processing and graphical visualization modules for RT contour verification. It has great potential for automated radiotherapy contouring quality verification; structured with the MVVM pattern, it is highly maintainable and extensible and supports smooth connections with other clinical software tools.
The application of digital techniques to the analysis of metallurgical experiments
NASA Technical Reports Server (NTRS)
Rathz, T. J.
1977-01-01
The application of a specific digital computer system (known as the Image Data Processing System) to the analysis of three NASA-sponsored metallurgical experiments is discussed in some detail. The basic hardware and software components of the Image Data Processing System are presented. Many figures are presented in the discussion of each experimental analysis in an attempt to show the accuracy and speed that the Image Data Processing System affords in analyzing photographic images dealing with metallurgy, and in particular with material processing.
Henderson, Fiona; Hart, Philippa J; Pradillo, Jesus M; Kassiou, Michael; Christie, Lidan; Williams, Kaye J; Boutin, Herve; McMahon, Adam
2018-05-15
Stroke is a leading cause of disability worldwide. Understanding the recovery process post-stroke is essential; however, longer-term recovery studies are lacking. In vivo positron emission tomography (PET) can image biological recovery processes, but is limited by spatial resolution and its targeted nature. Untargeted mass spectrometry imaging offers high spatial resolution, providing an ideal ex vivo tool for brain recovery imaging. Magnetic resonance imaging (MRI) was used to image a rat brain 48 h after ischaemic stroke to locate the infarcted regions of the brain. PET was carried out 3 months post-stroke using the tracers [18F]DPA-714 for TSPO and [18F]IAM6067 for sigma-1 receptors to image neuroinflammation and neurodegeneration, respectively. The rat brain was flash-frozen immediately after PET scanning, and sectioned for matrix-assisted laser desorption/ionisation mass spectrometry (MALDI-MS) imaging. Three months post-stroke, PET imaging shows minimal detection of neurodegeneration and neuroinflammation, indicating that the brain has stabilised. However, MALDI-MS images reveal distinct differences in lipid distributions (e.g. phosphatidylcholine and sphingomyelin) between the scar and the healthy brain, suggesting that recovery processes are still in play. It is currently not known if the altered lipids in the scar will change on a longer time scale, or if they are stabilised products of the brain post-stroke. The data demonstrate the ability to combine MALDI-MS with in vivo PET to image different aspects of stroke recovery.
How Digital Image Processing Became Really Easy
NASA Astrophysics Data System (ADS)
Cannon, Michael
1988-02-01
In the early and mid-1970s, digital image processing was the subject of intense university and corporate research. The research lay along two lines: (1) developing mathematical techniques for improving the appearance of or analyzing the contents of images represented in digital form, and (2) creating cost-effective hardware to carry out these techniques. The research has been very effective, as evidenced by the continued decline of image processing as a research topic and the rapid growth of commercial companies marketing digital image processing software and hardware.
IPL Processing of the Viking Orbiter Images of Mars
NASA Technical Reports Server (NTRS)
Ruiz, R. M.; Elliott, D. A.; Yagi, G. M.; Pomphrey, R. B.; Power, M. A.; Farrell, W., Jr.; Lorre, J. J.; Benton, W. D.; Dewar, R. E.; Cullen, L. E.
1977-01-01
The Viking orbiter cameras returned over 9000 images of Mars during the 6-month nominal mission. Digital image processing was required to produce products suitable for quantitative and qualitative scientific interpretation. Processing included the production of surface elevation data using computer stereophotogrammetric techniques, crater classification based on geomorphological characteristics, and the generation of color products using multiple black-and-white images recorded through spectral filters. The Image Processing Laboratory of the Jet Propulsion Laboratory was responsible for the design, development, and application of the software required to produce these 'second-order' products.
Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform
NASA Astrophysics Data System (ADS)
Liu, H. S.; Liao, H. M.
2015-08-01
A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera position. These data allow the construction of large volumes of images with geographic coordinates, so that users can take measurements directly on the images. To calculate position properly, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate image, coordinates, and camera position, but it is very expensive, and users cannot use the result immediately because the position information is not embedded into the image. In consideration of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, positioning is calculated with the open source software OpenCV. Finally, the open source panorama browser Panini is used, and everything is integrated into the open source GIS software Quantum GIS. In this way, a complete data processing system can be constructed.
False colors removal on the YCr-Cb color space
NASA Astrophysics Data System (ADS)
Tomaselli, Valeria; Guarnera, Mirko; Messina, Giuseppe
2009-01-01
Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit, or correct false colors and other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving this purpose, mainly because the input of a post-processing algorithm is a fully restored RGB color image. Moreover, post-processing can be applied more than once in order to meet some quality criteria. In this paper we propose an effective technique for reducing, in the YCrCb color space, the color artifacts generated by conventional color interpolation algorithms. This solution efficiently removes false colors and can be executed while performing the edge emphasis process.
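A generic sketch of chroma-only filtering in the YCrCb space, which captures the spirit of removing false colors without touching luminance detail (this is not the paper's specific algorithm; the median filter and kernel size are illustrative assumptions):

```python
import cv2

def suppress_false_colors(bgr, ksize=5):
    """Median-filter the chroma planes (Cr, Cb) while leaving luma (Y)
    untouched: demosaicing false colors appear as isolated chroma
    outliers, whereas edge detail lives mostly in Y."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    cr = cv2.medianBlur(cr, ksize)
    cb = cv2.medianBlur(cb, ksize)
    return cv2.cvtColor(cv2.merge([y, cr, cb]), cv2.COLOR_YCrCb2BGR)
```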
Making and Using Aesthetically Pleasing Images With HDI
NASA Astrophysics Data System (ADS)
Buckner, Spencer L.
2017-01-01
The Half-Degree Imager (HDI) was installed as the primary imager on the 0.9-m WIYN telescope in October 2013. In the three-plus years since then it has proven to be highly effective as a scientific instrument for the 0.9-m WIYN consortium. One thing that has been missing from the mix is aesthetically pleasing images for use in publicity and public outreach. The lack of "pretty pictures" is understandable, since the HDI is designed for scientific use and observers are given limited telescope time. However, images which appeal to the general public can be an effective tool for public outreach, publicity, and recruitment of students into astronomy programs. To offset the loss of an observer's limited telescope time, "pretty picture" images can be taken under less-than-desirable conditions, when photometric studies would have limited usefulness. Astroimaging has become a popular pastime among both amateur and professional astronomers. At Austin Peay State University, astrophotography is a popular course with non-science majors that wish to complete an astronomy minor as well as physics majors pursuing the astrophysics track. Images of a number of Messier objects have been taken with the HDI camera and are used to teach the basics of image calibration and processing for aesthetic value to students in the astrophotography class. Using HDI images with most image processing software commercially available to the public does present some problems, though: the extended FITS format of the images is not readable by most amateur image processing software, and such software can also have problems recognizing the filter configurations of the HDI. How these issues are overcome, and how the images are used in APSU courses, publicity, and public outreach, will be discussed in this presentation along with finished pictures. A poster describing the processing techniques used will be displayed during the concurrent HDI poster session along with several poster-sized prints of images.
UCXp camera imaging principle and key technologies of data post-processing
NASA Astrophysics Data System (ADS)
Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao
2014-03-01
The large-format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages over cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lenses. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data from new Beichuan County, this paper describes the data processing and its effects.
Assessment of the mechanics of a tissue-engineered rat trachea in an image-processing environment.
Silva, Thiago Henrique Gomes da; Pazetti, Rogerio; Aoki, Fabio Gava; Cardoso, Paulo Francisco Guerreiro; Valenga, Marcelo Henrique; Deffune, Elenice; Evaristo, Thaiane; Pêgo-Fernandes, Paulo Manuel; Moriya, Henrique Takachi
2014-07-01
Despite the recent success regarding the transplantation of tissue-engineered airways, the mechanical properties of these grafts are not well understood. Mechanical assessment of a tissue-engineered airway graft before implantation may be used in the future as a predictor of function. The aim of this preliminary work was to develop a noninvasive image-processing environment for the assessment of airway mechanics. Decellularized, recellularized and normal tracheas (groups DECEL, RECEL, and CONTROL, respectively) immersed in Krebs-Henseleit solution were ventilated by a small-animal ventilator connected to a Fleisch pneumotachograph and two pressure transducers (differential and gauge). A camera connected to a stereomicroscope captured images of the pulsation of the trachea before instillation of saline solution and after instillation of Krebs-Henseleit solution, followed by instillation with Krebs-Henseleit with methacholine 0.1 M (protocols A, K and KMCh, respectively). The data were post-processed with computer software and statistical comparisons between groups and protocols were performed. There were statistically significant variations in the image measurements of the medial region of the trachea between the groups (two-way analysis of variance [ANOVA], p<0.01) and of the proximal region between the groups and protocols (two-way ANOVA, p<0.01). The technique developed in this study is an innovative method for performing a mechanical assessment of engineered tracheal grafts that will enable evaluation of the viscoelastic properties of neo-tracheas prior to transplantation.
Scherer, Michael D; Kattadiyil, Mathew T; Parciak, Ewa; Puri, Shweta
2014-01-01
Three-dimensional radiographic imaging for dental implant treatment planning is gaining widespread interest and popularity. However, application of the data from 3D imaging can initially be a complex and daunting process. The purpose of this article is to describe features of three software packages and the respective computerized guided surgical templates (GST) fabricated from them. A step-by-step method of interpreting the imaging data and ordering a GST to simplify the process of surgical planning and implant placement is discussed.
Concurrent Image Processing Executive (CIPE). Volume 1: Design overview
NASA Technical Reports Server (NTRS)
Lee, Meemong; Groom, Steven L.; Mazer, Alan S.; Williams, Winifred I.
1990-01-01
The design and implementation of a Concurrent Image Processing Executive (CIPE), which is intended to become the support system software for a prototype high performance science analysis workstation are described. The target machine for this software is a JPL/Caltech Mark 3fp Hypercube hosted by either a MASSCOMP 5600 or a Sun-3, Sun-4 workstation; however, the design will accommodate other concurrent machines of similar architecture, i.e., local memory, multiple-instruction-multiple-data (MIMD) machines. The CIPE system provides both a multimode user interface and an applications programmer interface, and has been designed around four loosely coupled modules: user interface, host-resident executive, hypercube-resident executive, and application functions. The loose coupling between modules allows modification of a particular module without significantly affecting the other modules in the system. In order to enhance hypercube memory utilization and to allow expansion of image processing capabilities, a specialized program management method, incremental loading, was devised. To minimize data transfer between host and hypercube, a data management method which distributes, redistributes, and tracks data set information was implemented. The data management also allows data sharing among application programs. The CIPE software architecture provides a flexible environment for scientific analysis of complex remote sensing image data, such as planetary data and imaging spectrometry, utilizing state-of-the-art concurrent computation capabilities.
2002-12-01
...radio and batteries. The procedures outlined in this CHETN will concentrate on the Magellan GPS ProMARK X-CP receiver as it was used to collect... The Magellan GPS ProMARK X-CP is a small, robust, light receiver that can log 9 hr of both pseudorange and carrier-phase satellite data for post... post-processing software, pseudorange GPS data recorded by the ProMARK X-CP can be post-processed differentially to achieve 1-3 m (3.3-9.8 ft) horizontal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tenney, J.L.
SARS is a data acquisition system designed to gather and process radar data from aircraft flights. A database of flight trajectories has been developed for Albuquerque, NM, and Amarillo, TX; the data are used for safety analysis and risk assessment reports. To support this database effort, Sandia developed a collection of hardware and software tools to collect and post-process the aircraft radar data. This document describes the data reduction tools which comprise the SARS, and maintenance procedures for the hardware and software system.
A graphic user interface for efficient 3D photo-reconstruction based on free software
NASA Astrophysics Data System (ADS)
Castillo, Carlos; James, Michael; Gómez, Jose A.
2015-04-01
Recently, different studies have stressed the applicability of 3D photo-reconstruction based on Structure from Motion algorithms in a wide range of geoscience applications. For the purpose of image photo-reconstruction, a number of commercial and freely available software packages have been developed (e.g. Agisoft Photoscan, VisualSFM). The workflow typically involves different stages such as image matching, sparse and dense photo-reconstruction, point cloud filtering, and georeferencing. For approaches using open and free software, each of these stages usually requires a different application. In this communication, we present an easy-to-use graphic user interface (GUI) developed in Matlab® code as a tool for efficient 3D photo-reconstruction making use of powerful existing software: VisualSFM (Wu, 2015) for photo-reconstruction and CloudCompare (Girardeau-Montaut, 2015) for point cloud processing. The GUI performs as a manager of configurations and algorithms, taking advantage of the command-line modes of the existing software, which allows an intuitive and automated processing workflow for the geoscience user. The GUI includes several additional features: a) a routine for significantly reducing the duration of the image matching operation, normally the most time-consuming stage; b) graphical outputs for understanding the overall performance of the algorithm (e.g. camera connectivity, point cloud density); c) a number of useful options typically performed before and after the photo-reconstruction stage (e.g. removal of blurry images, image renaming, vegetation filtering); d) a manager of batch processing for the automated reconstruction of different image datasets. In this study we explore the advantages of this new tool by testing its performance using imagery collected in several soil erosion applications. References: Girardeau-Montaut, D. 2015. CloudCompare documentation, accessed at http://cloudcompare.org/ Wu, C. 2015. VisualSFM documentation, accessed at http://ccwu.me/vsfm/doc.html#.
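A minimal sketch of how such a GUI can drive existing tools through their command-line modes, as described above (shown with Python's subprocess module; the exact VisualSFM arguments depend on the build, so treat the "sfm+pmvs" option and the example paths as assumptions to check against the documentation cited above):

```python
import subprocess
from pathlib import Path

def reconstruct(image_dir: str, out_nvm: str) -> None:
    """Run VisualSFM in command-line mode on a folder of images,
    requesting sparse reconstruction followed by dense (PMVS) output.
    The argument string is taken from the VisualSFM documentation and
    should be verified for your installation."""
    cmd = ["VisualSFM", "sfm+pmvs", str(Path(image_dir)), out_nvm]
    subprocess.run(cmd, check=True)   # raises if reconstruction fails

# Hypothetical usage with an example dataset path:
# reconstruct("surveys/plot_2015_images", "surveys/plot_2015.nvm")
```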
Assessment of the performance of electrode arrays using an image processing technique
NASA Astrophysics Data System (ADS)
Usman, N.; Khiruddin, A.; Nawawi, Mohd
2017-08-01
Interpreting an inverted resistivity section is time consuming and tedious, and requires other sources of information to be geologically relevant. An image processing technique was used to perform post-inversion processing, making geophysical data interpretation easier. The inverted data sets were imported into PCI Geomatica 9.0.1 for further processing; they were clipped and merged together to match the coordinates of the three layers and permit pixel-to-pixel analysis. The dipole-dipole array is more sensitive to resistivity variation with depth than the Wenner-Schlumberger and pole-dipole arrays. Image processing serves as a good post-inversion tool in geophysical data processing.
Spotlight-8 Image Analysis Software
NASA Technical Reports Server (NTRS)
Klimek, Robert; Wright, Ted
2006-01-01
Spotlight is a cross-platform GUI-based software package designed to perform image analysis on sequences of images generated by combustion and fluid physics experiments run in a microgravity environment. Spotlight can perform analysis on a single image in an interactive mode or perform analysis on a sequence of images in an automated fashion. Image processing operations can be employed to enhance the image before various statistics and measurement operations are performed. An arbitrarily large number of objects can be analyzed simultaneously with independent areas of interest. Spotlight saves results in a text file that can be imported into other programs for graphing or further analysis. Spotlight can be run on Microsoft Windows, Linux, and Apple OS X platforms.
Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R
2015-07-01
Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software.
A novel method about detecting missing holes on the motor carling
NASA Astrophysics Data System (ADS)
Xu, Hongsheng; Tan, Hao; Li, Guirong
2018-03-01
After a deep analysis of how an image processing system can detect missing holes on the motor carling, we design the whole system in accordance with the actual production conditions of the motor carling. We then describe the system's hardware and software in detail: their general functions are introduced, the constituent modules are discussed, and the theory behind the design of these modules is presented. The methods for delimiting the image region to be processed, for edge detection, and for circle detection using the randomized Hough transform are explained in detail. Finally, the system's test results in the laboratory and in the factory are given.
Development of digital interactive processing system for NOAA satellites AVHRR data
NASA Astrophysics Data System (ADS)
Gupta, R. K.; Murthy, N. N.
The paper discusses a digital image processing system for NOAA/AVHRR data, including land applications, configured around a VAX 11/750 host computer supported by an FPS 100 array processor, Comtal graphic display, and HP plotting devices. The system software provides a relational database with query and editing facilities and a man-machine interface using form, menu, and prompt inputs, including validation of user entries for data type and range. Preprocessing software covers data calibration, Sun-angle correction, geometric corrections for Earth-curvature effects and Earth-rotation offsets, and Earth location of the AVHRR image. The implemented image enhancement techniques, such as grey-level stretching, histogram equalization, and convolution, are discussed. Software implementation details for the computation of the vegetative index and normalized vegetative index using NOAA/AVHRR channels 1 and 2 data are presented together with output; the scientific background for such computations and the obtainability of similar indices from Landsat/MSS data are also included. The paper concludes by specifying further planned software developments and the progress envisaged in the field of vegetation index studies.
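The channel 1/channel 2 index computations reduce to simple band arithmetic: VI = ch2 - ch1 and NDVI = (ch2 - ch1)/(ch2 + ch1). A minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np

def vegetation_indices(ch1_vis, ch2_nir):
    """Vegetative index (VI) and normalized difference vegetation
    index (NDVI) from AVHRR channel 1 (visible) and channel 2
    (near-infrared) data."""
    c1 = np.asarray(ch1_vis, dtype=np.float64)
    c2 = np.asarray(ch2_nir, dtype=np.float64)
    vi = c2 - c1
    denom = c2 + c1
    ndvi = np.where(denom != 0, vi / denom, 0.0)  # guard zero denominators
    return vi, ndvi
```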
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
21 CFR 892.2050 - Picture archiving and communications system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... processing of medical images. Its hardware components may include workstations, digitizers, communications... hardcopy devices. The software components may provide functions for performing operations related to image...
Image processing and products for the Magellan mission to Venus
NASA Technical Reports Server (NTRS)
Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche
1992-01-01
The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.
Gómez-Villafuertes, Rosa; Paniagua-Herranz, Lucía; Gascon, Sergio; de Agustín-Durán, David; Ferreras, María de la O; Gil-Redondo, Juan Carlos; Queipo, María José; Menendez-Mendez, Aida; Pérez-Sen, Ráquel; Delicado, Esmerilda G; Gualix, Javier; Costa, Marcos R; Schroeder, Timm; Miras-Portugal, María Teresa; Ortega, Felipe
2017-12-16
Understanding the mechanisms that control critical biological events of neural cell populations, such as proliferation, differentiation, or cell fate decisions, will be crucial to design therapeutic strategies for many diseases affecting the nervous system. Current methods to track cell populations rely on their final outcomes in still images and they generally fail to provide sufficient temporal resolution to identify behavioral features in single cells. Moreover, variations in cell death, behavioral heterogeneity within a cell population, dilution, spreading, or the low efficiency of the markers used to analyze cells are all important handicaps that will lead to incomplete or incorrect read-outs of the results. Conversely, performing live imaging and single cell tracking under appropriate conditions represents a powerful tool to monitor each of these events. Here, a time-lapse video-microscopy protocol, followed by post-processing, is described to track neural populations with single cell resolution, employing specific software. The methods described enable researchers to address essential questions regarding the cell biology and lineage progression of distinct neural populations.
A software tool of digital tomosynthesis application for patient positioning in radiotherapy.
Yan, Hui; Dai, Jian-Rong
2016-03-08
Digital tomosynthesis (DTS) is an imaging modality that reconstructs tomographic images from two-dimensional kV projections covering a narrow scan angle. Compared with conventional cone-beam CT (CBCT), it requires less time and radiation dose for data acquisition, and it is feasible to apply the technique to patient positioning in radiotherapy. To facilitate its clinical application, a software tool was developed in which the reconstruction processes are accelerated by a graphics processing unit (GPU). DTS application requires two reconstruction and two registration processes, unlike conventional CBCT application, which requires one image reconstruction process and one image registration process. The reconstruction stage produces two types of DTS. One type, reconstructed from cone-beam (CB) projections covering a narrow scan angle, is named onboard DTS (ODTS) and represents the actual patient position in the treatment room. The other, reconstructed from digitally reconstructed radiographs (DRRs), is named reference DTS (RDTS) and represents the ideal patient position in the treatment room. Prior to the reconstruction of RDTS, the DRRs are reconstructed from the planning CT using the same acquisition settings as the CB projections. The registration stage consists of two matching processes between ODTS and RDTS: the target shifts in the lateral and longitudinal axes are obtained from the matching in the coronal view, while the target shifts in the longitudinal and vertical axes are obtained from the matching in the sagittal view. In this software, both the DRR and DTS reconstruction algorithms were implemented in GPU environments for acceleration. A comprehensive evaluation of the software tool was performed, covering geometric accuracy, image quality, registration accuracy, and reconstruction efficiency. The average correlation coefficient between DRR/DTS generated by the GPU-based and CPU-based algorithms is 0.99. Based on measurements of a cube phantom on DTS, the geometric errors are within 0.5 mm in all three axes. For both the cube phantom and a pelvic phantom, the registration errors are within 0.5 mm in all three axes. Compared with the CPU-based algorithms, the performance of the DRR and DTS reconstructions is improved by a factor of 15 to 20. In summary, a GPU-based software tool was developed for DTS-based patient positioning in radiotherapy. Its geometric and registration accuracy meet the clinical requirements for patient setup, and the high performance of the DRR and DTS reconstruction algorithms was achieved through GPU-based computation. It is a useful software tool for researchers and clinicians evaluating DTS for patient positioning in radiotherapy.
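The simplest conceptual form of narrow-angle tomosynthesis reconstruction is shift-and-add; the sketch below reconstructs a single plane under a simplified parallel-shift geometry (the paper's GPU implementation and exact cone-beam geometry are not reproduced here; all names and parameters are illustrative):

```python
import numpy as np
from scipy.ndimage import shift

def shift_and_add_plane(projections, angles_deg, plane_offset_mm, pixel_mm):
    """Reconstruct one tomosynthesis plane by shift-and-add: each
    projection is translated so that structures in the chosen plane
    align, then all projections are averaged. Structures outside the
    plane remain misaligned and blur out."""
    acc = np.zeros_like(projections[0], dtype=np.float64)
    for proj, ang in zip(projections, angles_deg):
        # Parallel-beam simplification: the in-plane shift grows with
        # tan(scan angle) and with the plane's offset from isocentre.
        dx_px = plane_offset_mm * np.tan(np.deg2rad(ang)) / pixel_mm
        acc += shift(proj.astype(np.float64), (0.0, dx_px),
                     order=1, mode="nearest")
    return acc / len(projections)
```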
A software platform for phase contrast x-ray breast imaging research.
Bliznakova, K; Russo, P; Mettivier, G; Requardt, H; Popov, P; Bravin, A; Buliev, I
2015-06-01
To present and validate a computer-based simulation platform dedicated to phase contrast x-ray breast imaging research. The software platform, developed at the Technical University of Varna on the basis of a previously validated x-ray imaging software simulator, comprises modules for object creation and for x-ray image formation. These modules were updated to take into account the refractive index for phase contrast imaging, and the Fresnel-Kirchhoff diffraction theory of the propagating x-ray waves was implemented. Projection images are generated in an in-line acquisition geometry. To test and validate the platform, several phantoms differing in complexity were constructed and imaged at 25 keV and 60 keV at beamline ID17 of the European Synchrotron Radiation Facility. The software platform was used to design computational phantoms that mimic those used in the experimental study and to generate x-ray images in absorption and phase contrast modes. The visual and quantitative results of the validation showed an overall good correlation between simulated and experimental images and demonstrate the potential of this platform for research in phase contrast x-ray imaging of the breast. The application of the platform is demonstrated in a feasibility study of phase contrast images of complex inhomogeneous and anthropomorphic breast phantoms, compared with x-ray images generated in absorption mode. The improved visibility of mammographic structures suggests further investigation and optimisation of phase contrast x-ray breast imaging, especially when abnormalities are present. The software platform can also be exploited for educational purposes.
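The free-space propagation step at the core of in-line phase contrast simulation is commonly computed with an FFT-based transfer function; a minimal sketch under the paraxial (Fresnel) approximation (not the platform's actual code; names and units are assumptions):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, distance):
    """Propagate a complex wavefield over `distance` using the angular
    spectrum form of the Fresnel transfer function. `dx` is the pixel
    pitch; all lengths in metres."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Paraxial free-space transfer function
    H = np.exp(-1j * np.pi * wavelength * distance * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# The intensity recorded at the detector is |field|**2.
```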
Image Processing for Teaching.
ERIC Educational Resources Information Center
Greenberg, R.; And Others
1993-01-01
The Image Processing for Teaching project provides a powerful medium to excite students about science and mathematics, especially children from minority groups and others whose needs have not been met by traditional teaching. Using professional-quality software on microcomputers, students explore a variety of scientific data sets, including…
Pipeline for illumination correction of images for high-throughput microscopy.
Singh, S; Bray, M-A; Jones, T R; Carpenter, A E
2014-12-01
The presence of systematic noise in images in high-throughput microscopy experiments can significantly impact the accuracy of downstream results. Among the most common sources of systematic noise is non-homogeneous illumination across the image field. This often adds an unacceptable level of noise, obscures true quantitative differences and precludes biological experiments that rely on accurate fluorescence intensity measurements. In this paper, we seek to quantify the improvement in the quality of high-content screen readouts due to software-based illumination correction. We present a straightforward illumination correction pipeline that has been used by our group across many experiments. We test the pipeline on real-world high-throughput image sets and evaluate the performance of the pipeline at two levels: (a) Z'-factor to evaluate the effect of the image correction on a univariate readout, representative of a typical high-content screen, and (b) classification accuracy on phenotypic signatures derived from the images, representative of an experiment involving more complex data mining. We find that applying the proposed post-hoc correction method improves performance in both experiments, even when illumination correction has already been applied using software associated with the instrument. To facilitate the ready application and future development of illumination correction methods, we have made our complete test data sets as well as open-source image analysis pipelines publicly available. This software-based solution has the potential to improve outcomes for a wide variety of image-based HTS experiments.
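A compact sketch of the retrospective (post-hoc) correction idea, estimating a smooth illumination function from many images and dividing it out, together with the Z'-factor used as the univariate quality readout (filter choice and sizes are illustrative assumptions; the published pipeline differs in detail):

```python
import numpy as np
from scipy.ndimage import median_filter

def illumination_function(images, filter_size=51):
    """Estimate a smooth, unit-mean illumination image by averaging
    many images from the experiment and smoothing heavily; correction
    is then per-pixel division: corrected = raw / illum."""
    avg = np.mean([im.astype(np.float64) for im in images], axis=0)
    smooth = median_filter(avg, size=filter_size)
    return smooth / smooth.mean()

def z_factor(pos, neg):
    """Z'-factor of positive/negative control readouts:
    Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|."""
    return 1.0 - 3.0 * (np.std(pos) + np.std(neg)) / abs(np.mean(pos) - np.mean(neg))
```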
Initial clinical results with a new needle screen storage phosphor system in chest radiograms.
Körner, M; Wirth, S; Treitl, M; Reiser, M; Pfeifer, K-J
2005-11-01
To evaluate image quality and anatomical detail depiction in dose-reduced digital plain chest radiograms using a new needle screen storage phosphor (NIP) in comparison to full-dose conventional powder screen storage phosphor (PIP) images. 24 supine chest radiograms were obtained with PIP at standard dose and compared to follow-up studies of the same patients obtained with NIP with dose reduced to 50% of the PIP dose (all imaging systems: AGFA-Gevaert, Mortsel, Belgium). In both systems, identical versions of post-processing software supplied by the manufacturer were used with matched parameters. Six independent readers blinded to both modality and dose evaluated the images for depiction and differentiation of defined anatomical regions (peripheral lung parenchyma, central lung parenchyma, hilum, heart, diaphragm, upper mediastinum, and bone). All NIP images were compared to the corresponding PIP images using a five-point scale (−2, clearly inferior, to +2, clearly superior). Overall image quality was rated for each PIP and NIP image separately (1, not usable, to 5, excellent). PIP and dose-reduced NIP images were rated equivalent. Mean image noise impression was only slightly higher on NIP images. Mean image quality for NIP showed no significant differences (p > 0.05, Mann-Whitney U test). With the use of the new needle-structured storage phosphors in chest radiography, a dose reduction of up to 50% is possible without detracting from image quality or detail depiction. Especially in patients with multiple follow-up studies, the overall dose can be decreased significantly.
Fuzzy Logic Enhanced Digital PIV Processing Software
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1999-01-01
Digital Particle Image Velocimetry (DPIV) is an instantaneous, planar velocity measurement technique that is ideally suited for studying transient flow phenomena in high speed turbomachinery. DPIV is being actively used at the NASA Glenn Research Center to study both stable and unstable operating conditions in a high speed centrifugal compressor. Commercial PIV systems are readily available which provide near real time feedback of the PIV image data quality. These commercial systems are well designed to facilitate the expedient acquisition of PIV image data. However, as with any general purpose system, these commercial PIV systems do not meet all of the data processing needs required for PIV image data reduction in our compressor research program. An in-house PIV PROCessing (PIVPROC) code has been developed for reducing PIV data. The PIVPROC software incorporates fuzzy logic data validation for maximum information recovery from PIV image data. PIVPROC enables combined cross-correlation/particle tracking wherein the highest possible spatial resolution velocity measurements are obtained.
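The cross-correlation core of such DPIV processing can be sketched in a few lines of Python. This illustrates only the standard FFT-based interrogation-window correlation, not PIVPROC's fuzzy-logic validation or particle-tracking stages.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Displacement of the correlation peak between two interrogation
    windows, computed via FFT-based circular cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    centre = np.array(corr.shape) // 2
    return np.array(peak) - centre         # (dy, dx) in pixels

# Example: the second window is the first shifted by (3, 5) pixels.
rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, 5), axis=(0, 1))
print(piv_displacement(frame, shifted))    # -> [3 5]
```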
[Optimizing histological image data for 3-D reconstruction using an image equalizer].
Roth, A; Melzer, K; Annacker, K; Lipinski, H G; Wiemann, M; Bingmann, D
2002-01-01
Bone cells form a wired network within the extracellular bone matrix. To analyse this complex 3D structure, we employed a confocal fluorescence imaging procedure to visualize live bone cells within their native surroundings. By means of newly developed image processing software, the "Image-Equalizer", we aimed to enhance the contrast and eliminate artefacts in such a way that cell bodies as well as fine interconnecting processes were visible.
Mato Abad, Virginia; García-Polo, Pablo; O'Daly, Owen; Hernández-Tamames, Juan Antonio; Zelaya, Fernando
2016-04-01
The method of Arterial Spin Labeling (ASL) has experienced a significant rise in its application to functional imaging, since it is the only technique capable of measuring blood perfusion in a truly non-invasive manner. Currently, there are no commercial packages for processing ASL data and there is no recognized standard for normalizing ASL data to a common frame of reference. This work describes a new Automated Software for ASL Processing (ASAP) that can automatically process several ASL datasets. ASAP includes functions for all stages of image pre-processing: quantification, skull-stripping, co-registration, partial volume correction and normalization. To assess the applicability and validity of the toolbox, this work shows its application in the study of hypoperfusion in a sample of healthy subjects at risk of progressing to Alzheimer's disease. ASAP requires limited user intervention, minimizing the possibility of random and systematic errors, and produces cerebral blood flow maps that are ready for statistical group analysis. The software is easy to operate and results in excellent quality of spatial normalization. The results found in this evaluation study are consistent with previous studies that find decreased perfusion in Alzheimer's patients in similar regions and demonstrate the applicability of ASAP. Copyright © 2015 Elsevier Inc. All rights reserved.
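As an illustration of the quantification stage mentioned above, here is a minimal Python implementation of the standard single-compartment (p)CASL model widely used for ASL CBF quantification. The default timing and labelling parameters are assumptions, and ASAP's own quantification may differ in detail.

```python
import numpy as np

def quantify_cbf(delta_m, m0, pld=1.8, tau=1.8, t1_blood=1.65,
                 alpha=0.85, lam=0.9):
    """Single-compartment (p)CASL quantification in ml/100g/min.
    delta_m: control-label difference image; m0: proton-density image;
    pld (post-labelling delay) and tau (label duration) in seconds;
    alpha: labelling efficiency; lam: partition coefficient (ml/g)."""
    num = 6000.0 * lam * delta_m * np.exp(pld / t1_blood)
    den = 2.0 * alpha * t1_blood * m0 * (1.0 - np.exp(-tau / t1_blood))
    return num / den

# Example on synthetic data (arbitrary scanner units):
cbf = quantify_cbf(np.full((64, 64), 0.9), np.full((64, 64), 100.0))
```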
Software manual for operating particle displacement tracking data acquisition and reduction system
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
The software manual is presented. The necessary steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large-memory-buffer frame-grabber board to record low velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
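One way such time-coding can work is to write successive frames into a composite image at distinct grey levels, so that the brightness ordering of a particle's spots encodes the time direction along its track. The sketch below is a hypothetical Python illustration of that idea; the code levels and combination rule are assumptions, not the actual PDT scheme.

```python
import numpy as np

def time_code(frames):
    """Combine a sequence of single-exposure frames into one composite
    image, writing later frames at lower (assumed) grey levels."""
    levels = np.linspace(1.0, 0.4, len(frames))   # assumed code levels
    composite = np.zeros_like(frames[0], dtype=float)
    for frame, level in zip(frames, levels):
        composite = np.maximum(composite, level * frame)
    return composite
```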
NASA Technical Reports Server (NTRS)
Hofman, L. B.; Erickson, W. K.; Donovan, W. E.
1984-01-01
The Image Display and Analysis System (MIDAS), developed at NASA/Ames for the analysis of Landsat MSS images, is described. The MIDAS computer power and memory, graphics, resource-sharing, expansion and upgrade, environment and maintenance, and software/user-interface requirements are outlined; the implementation hardware (including a 32-bit microprocessor, 512K error-correcting RAM, a 70- or 140-Mbyte formatted disk drive, a 512 x 512 x 24 color frame buffer, and a local-area-network transceiver) and applications software (ELAS, CIE, and P-EDITOR) are characterized; and implementation problems, performance data, and costs are examined. Planned improvements in MIDAS hardware and design goals and areas of exploration for MIDAS software are discussed.
Yang, Deshan; Brame, Scott; El Naqa, Issam; Aditya, Apte; Wu, Yu; Goddu, S Murty; Mutic, Sasa; Deasy, Joseph O; Low, Daniel A
2011-01-01
Recent years have witnessed tremendous progress in image-guided radiotherapy technology and a growing interest in the possibilities for adapting treatment planning and delivery over the course of treatment. One obstacle faced by the research community has been the lack of a comprehensive open-source software toolkit dedicated to adaptive radiotherapy (ART). To address this need, the authors have developed a software suite called the Deformable Image Registration and Adaptive Radiotherapy Toolkit (DIRART). DIRART is an open-source toolkit developed in MATLAB. It is designed in an object-oriented style with a focus on user-friendliness, features, and flexibility. It contains four classes of DIR algorithms, including the newer inverse-consistency algorithms that provide consistent displacement vector fields in both directions. It also contains common ART functions, an integrated graphical user interface, a variety of visualization and image-processing features, dose metric analysis functions, and interface routines. These interface routines make DIRART a powerful complement to the Computational Environment for Radiotherapy Research (CERR) and popular image-processing toolkits such as ITK. DIRART provides a set of image processing/registration algorithms and postprocessing functions to facilitate the development and testing of DIR algorithms. It also offers a wide range of options for DIR results visualization, evaluation, and validation. By exchanging data with treatment planning systems via DICOM-RT files and CERR, and by bringing image registration algorithms closer to radiotherapy applications, DIRART is potentially a convenient and flexible platform that may facilitate ART and DIR research. © 2011 American Association of Physicists in Medicine.
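Inverse consistency, mentioned above, can be checked numerically: composing the forward displacement field with the backward field sampled at the forward-displaced positions should return (near) zero. A minimal 2-D sketch in Python with illustrative constant fields follows; DIRART itself is MATLAB, and its internals are not reproduced here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def inverse_consistency_error(fwd, bwd):
    """Mean residual |fwd(x) + bwd(x + fwd(x))| of a 2-D DVF pair.
    fwd, bwd: arrays of shape (2, ny, nx) holding (dy, dx) fields."""
    ny, nx = fwd.shape[1:]
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    coords = [yy + fwd[0], xx + fwd[1]]        # forward-displaced positions
    bwd_at_fwd = np.stack([map_coordinates(bwd[i], coords, order=1,
                                           mode='nearest') for i in range(2)])
    residual = fwd + bwd_at_fwd
    return np.sqrt((residual**2).sum(axis=0)).mean()

# A perfectly inverse-consistent pair: a constant shift and its negation.
fwd = np.zeros((2, 32, 32)); fwd[0] += 1.5
bwd = -fwd
print(inverse_consistency_error(fwd, bwd))     # -> 0.0
```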
Application of Neutron Tomography in Culture Heritage research.
Mongy, T
2014-02-01
Neutron Tomography (NT) investigation of Culture Heritage (CH) objects is an efficient tool for understanding the culture of ancient civilizations. Neutron imaging (NI) is a state-of-the-art non-destructive tool in the area of CH and plays an important role in modern archeology. The NI technology can be widely utilized in the field of elemental analysis. At the Egypt Second Research Reactor (ETRR-2), a collimated Neutron Radiography (NR) beam is employed for neutron imaging purposes. A digital CCD camera is utilized for recording the beam attenuation in the sample. This helps in the detection of hidden objects and the characterization of material properties. Research activity can be extended to use computer software for quantitative neutron measurement. Development of image processing algorithms can be used to obtain high-quality images. In this work, a full description of ETRR-2 and its up-to-date neutron imaging system is given. A tomographic investigation of a forged clay artifact representing a CH object was carried out by neutron imaging methods in order to obtain hidden information and highlight some attractive quantitative measurements. Computer software was used for image processing and enhancement. The Astra Image 3.0 Pro software was also employed for high-precision measurements and image enhancement using advanced algorithms. This work increased the effective utilization of the ETRR-2 Neutron Radiography/Tomography (NR/T) technique in Culture Heritage activities. © 2013 Elsevier Ltd. All rights reserved.
Computer-aided diagnosis software for vulvovaginal candidiasis detection from Pap smear images.
Momenzadeh, Mohammadreza; Vard, Alireza; Talebi, Ardeshir; Mehri Dehnavi, Alireza; Rabbani, Hossein
2018-01-01
Vulvovaginal candidiasis (VVC) is a common gynecologic infection that occurs when there is an overgrowth of the yeast called Candida. VVC diagnosis is usually done by observing a Pap smear sample under a microscope and searching for the conidium and mycelium components of Candida. This manual method is time-consuming, subjective and tedious. Any diagnostic tool that detects VVC semi- or fully automatically can be very helpful to pathologists. This article presents computer-aided diagnosis (CAD) software to improve human diagnosis of VVC from Pap smear samples. The proposed software is designed based on phenotypic and morphological features of Candida in Pap smear sample images. The software provides a user-friendly interface consisting of a set of image processing tools and analytical results that help to detect Candida and determine the severity of illness. The software was evaluated on 200 Pap smear sample images and obtained a specificity of 91.04% and a sensitivity of 92.48% in detecting VVC. As a result, the use of the proposed software reduces diagnostic time and can be employed as a second objective opinion for pathologists. © 2017 Wiley Periodicals, Inc.
Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.
2001-01-01
Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.
Ring artifact reduction in synchrotron x-ray tomography through helical acquisition
NASA Astrophysics Data System (ADS)
Pelt, Daniël M.; Parkinson, Dilworth Y.
2018-03-01
In synchrotron x-ray tomography, systematic defects in certain detector elements can result in arc-shaped artifacts in the final reconstructed image of the scanned sample. These ring artifacts are commonly found in many applications of synchrotron tomography, and can make it difficult or impossible to use the reconstructed image in further analyses. The severity of ring artifacts is often reduced in practice by applying pre-processing on the acquired data, or post-processing on the reconstructed image. However, such additional processing steps can introduce additional artifacts as well, and rely on specific choices of hyperparameter values. In this paper, a different approach to reducing the severity of ring artifacts is introduced: a helical acquisition mode. By moving the sample parallel to the rotation axis during the experiment, the sample is detected at different detector positions in each projection, reducing the effect of systematic errors in detector elements. Alternatively, helical acquisition can be viewed as a way to transform ring artifacts to helix-like artifacts in the reconstructed volume, reducing their severity. We show that data acquired with the proposed mode can be transformed to data acquired with a virtual circular trajectory, enabling further processing of the data with existing software packages for circular data. Results for both simulated data and experimental data show that the proposed method is able to significantly reduce ring artifacts in practice, even compared with popular existing methods, without introducing additional artifacts.
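A hedged sketch of the core rebinning idea, assuming a known, constant vertical translation per projection (in detector pixels): shifting each projection back by the accumulated stage motion yields a virtual circular scan in which each sample slice has been seen by many different detector rows. The sign convention and interpolation handling are assumptions; the paper's actual transformation treats cropping and sub-pixel accuracy more carefully.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def helical_to_circular(projections, dz_per_proj):
    """Re-align helical projections (n_proj, rows, cols) to a virtual
    circular trajectory by undoing the vertical stage translation.
    Each slice is thereby sampled by varying detector rows, which
    breaks up ring artifacts from fixed defective elements."""
    out = np.empty_like(projections)
    for i, proj in enumerate(projections):
        # sign of the shift depends on the stage's direction of travel
        out[i] = nd_shift(proj, (-i * dz_per_proj, 0.0),
                          order=1, mode='nearest')
    return out
```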
HyspIRI On-Board Science Data Processing
NASA Technical Reports Server (NTRS)
Flatley, Tom
2010-01-01
Topics include on-board science data processing, on-board image processing, software upset mitigation, on-board data reduction, on-board VSWIR products, the HyspIRI demonstration testbed, and processor comparison.
Jain, Niharika; Pawar, Ajinkya M.; Ukey, Piyush D.; Jain, Prashant K.; Thakur, Bhagyashree; Gupta, Abhishek
2017-01-01
Objectives: To compare the relative axis modification and canal concentricity after glide path preparation with a 20/0.02 hand K-file (NITIFLEX®) and a 20/0.04 rotary file (HyFlex™ CM) with subsequent instrumentation with a 1.5 mm self-adjusting file (SAF). Materials and Methods: One hundred and twenty ISO 15, 0.02 taper, Endo Training Blocks (Dentsply Maillefer, Ballaigues, Switzerland) were acquired and randomly divided into the following two groups (n = 60): Group 1, establishing a glide path up to a 20/0.02 hand K-file (NITIFLEX®) followed by instrumentation with a 1.5 mm SAF; and Group 2, establishing a glide path up to a 20/0.04 rotary file (HyFlex™ CM) followed by instrumentation with a 1.5 mm SAF. Pre- and post-instrumentation digital images were processed with MATLAB R2013 software to identify the central axis, and then superimposed using digital imaging software (Picasa 3.0 software, Google Inc., California, USA) taking five landmarks as reference points. Student's t-test for pairwise comparisons was applied with the level of significance set at 0.05. Results: Training blocks instrumented with the 20/0.04 rotary file and SAF were associated with less deviation in the canal axis (at all five marked points), representing better canal concentricity compared to those in which the glide path was established with 20/0.02 hand K-files followed by SAF instrumentation. Conclusion: Canal geometry is better maintained after SAF instrumentation with a prior glide path established with a 20/0.04 rotary file. PMID:28855752
IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java
Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2015-01-01
Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319
Lessons Learned through the Development and Publication of AstroImageJ
NASA Astrophysics Data System (ADS)
Collins, Karen
2018-01-01
As lead author of the scientific image processing software package AstroImageJ (AIJ), I will discuss the reasoning behind why we decided to release AIJ to the public, and the lessons we learned related to the development, publication, distribution, and support of AIJ. I will also summarize the AIJ code language selection, code documentation and testing approaches, code distribution, update, and support facilities used, and the code citation and licensing decisions. Since AIJ was initially developed as part of my graduate research and was my first scientific open source software publication, many of my experiences and difficulties encountered may parallel those of others new to scientific software publication. Finally, I will discuss the benefits and disadvantages of releasing scientific software that I now recognize after having AIJ in the public domain for more than five years.
Yuki, I; Kambayashi, Y; Ikemura, A; Abe, Y; Kan, I; Mohamed, A; Dahmani, C; Suzuki, T; Ishibashi, T; Takao, H; Urashima, M; Murayama, Y
2016-02-01
Combination of high-resolution C-arm CT and novel metal artifact reduction software may contribute to the assessment of aneurysms treated with stent-assisted coil embolization. This study aimed to evaluate the efficacy of a novel Metal Artifact Reduction prototype software combined with the currently available high spatial-resolution C-arm CT prototype implementation by using an experimental aneurysm model treated with stent-assisted coil embolization. Eight experimental aneurysms were created in 6 swine. Coil embolization of each aneurysm was performed by using a stent-assisted technique. High-resolution C-arm CT with intra-arterial contrast injection was performed immediately after the treatment. The obtained images were processed with Metal Artifact Reduction. Five neurointerventional specialists reviewed the image quality before and after Metal Artifact Reduction. Observational and quantitative analyses (via image analysis software) were performed. Every aneurysm was successfully created and treated with stent-assisted coil embolization. Before Metal Artifact Reduction, coil loops protruding through the stent lumen were not visualized due to the prominent metal artifacts produced by the coils. These became visible after Metal Artifact Reduction processing. Contrast filling in the residual aneurysm was also visualized after Metal Artifact Reduction in every aneurysm. Both the observational (P < .0001) and quantitative (P < .001) analyses showed significant reduction of the metal artifacts after application of the Metal Artifact Reduction prototype software. The combination of high-resolution C-arm CT and Metal Artifact Reduction enables differentiation of the coil mass, stent, and contrast material on the same image by significantly reducing the metal artifacts produced by the platinum coils. This novel image technique may improve the assessment of aneurysms treated with stent-assisted coil embolization. © 2016 by American Journal of Neuroradiology.
NASA Astrophysics Data System (ADS)
Preuss, R.
2014-12-01
This article discusses the current capabilities of automated processing of image data on the example of using PhotoScan software by Agisoft. At present, image data obtained by various registration systems (metric and non-metric cameras) placed on airplanes, satellites, or more often on UAVs is used to create photogrammetric products. Multiple registrations of an object or land area (large groups of photos are captured) are usually performed in order to eliminate obscured areas as well as to raise the final accuracy of the photogrammetric product. Because of this, the geometry of the resulting image blocks is far from the typical configuration of images. For fast image georeferencing, automatic image matching algorithms are currently applied. They can create a model of a block in a local coordinate system or, using initial exterior orientation and measured control points, can provide image georeferencing in an external reference frame. In the case of non-metric image application, it is also possible to carry out a self-calibration process at this stage. Image matching algorithms are also used in the generation of dense point clouds reconstructing the spatial shape of the object (area). In subsequent processing steps it is possible to obtain typical photogrammetric products such as an orthomosaic, DSM or DTM and a photorealistic solid model of an object. All the aforementioned processing steps are implemented in a single program, in contrast to standard commercial software dividing the steps into dedicated modules. Image processing leading to final georeferenced products can be fully automated, including sequential implementation of the processing steps at predetermined control parameters. The paper presents the practical results of fully automatic generation of orthomosaics for both images obtained by a metric Vexcel camera and a block of images acquired by a non-metric UAV system.
NASA Technical Reports Server (NTRS)
Kwok, John H.; Call, Jared A.; Khanampornpan, Teerapat
2012-01-01
This software automatically processes Sally Ride Science (SRS)-delivered MoonKAM camera control files (.ccf) into uplink products for the GRAIL-A and GRAIL-B spacecraft as part of an education and public outreach (EPO) extension to the GRAIL mission. Once properly validated and deemed safe for execution onboard the spacecraft, MoonKommand generates the command products via the Automated Sequence Processor (ASP) and generates uplink (.scmf) files for radiation to the GRAIL-A and/or GRAIL-B spacecraft. Any errors detected along the way are reported back to SRS via email. With MoonKommand, SRS can control their EPO instrument as part of a fully automated process. Inputs are received from SRS as either image capture files (.ccficd) for new image requests, or downlink/delete files (.ccfdl) for requesting image downlink from the instrument and on-board memory management. The MoonKommand outputs are command and file-load (.scmf) files that will be uplinked by the Deep Space Network (DSN). Without the MoonKommand software, uplink product generation for the MoonKAM instrument would be a manual process. The software is specific to the MoonKAM instrument on the GRAIL mission. At the time of this writing, the GRAIL mission was making final preparations to begin the science phase, which was scheduled to continue until June 2012.
Proteomic analysis and comparison of the biopsy and autopsy specimen of human brain temporal lobe.
He, Sizhi; Wang, Qingsong; He, Jintang; Pu, Hai; Yang, Wei; Ji, Jianguo
2006-09-01
Proteomic study of the human temporal lobe can help us to understand the physiological function of the CNS in normal as well as pathological states. Proteomic tools are potent for the assessment of protein stability post mortem. In this pilot study, a human temporal lobe biopsy specimen with chronic pharmacoresistant temporal lobe epilepsy (TLE) and a control autopsy specimen were separated by 2-DE. Using MALDI-TOF-MS and MS/MS, 375 protein spots were identified, which were the products of 267 genes. Six down-regulated and 23 up-regulated protein spots in the autopsy specimen were ascertained after gel image analysis with the ImageMaster software. A number of proteins, including neurotransmitter metabolic and glycolytic enzymes, cytoprotective proteins and cytoskeletal proteins, were found to be decreased, while the precursor of apolipoprotein A-I was increased in the TLE brain. We tried several methods to prepare the protein samples and found that DNase and RNase treatment, ultracentrifugation and Amersham clean-up kit purification can improve gel separation quality. This work optimized the sample preparation method, constructed a primary protein database of the human temporal lobe, and found some proteins with remarkable level changes probably involved in the post-mortem process and chronic pharmacoresistant TLE pathogenesis.
High resolution image processing on low-cost microcomputers
NASA Technical Reports Server (NTRS)
Miller, R. L.
1993-01-01
Recent advances in microcomputer technology have resulted in systems that rival the speed, storage, and display capabilities of traditionally larger machines. Low-cost microcomputers can provide a powerful environment for image processing. A new software program which offers sophisticated image display and analysis on IBM-based systems is presented. Designed specifically for a microcomputer, this program provides a wide-range of functions normally found only on dedicated graphics systems, and therefore can provide most students, universities and research groups with an affordable computer platform for processing digital images. The processing of AVHRR images within this environment is presented as an example.
Uses of software in digital image analysis: a forensic report
NASA Astrophysics Data System (ADS)
Sharma, Mukesh; Jha, Shailendra
2010-02-01
Forensic image analysis requires expertise to interpret the content of an image, or the image itself, in legal matters. Major sub-disciplines of forensic image analysis with law enforcement applications include photogrammetry, photographic comparison, content analysis and image authentication. Its applications in forensic science range from documenting crime scenes to enhancing faint or indistinct patterns such as partial fingerprints. The process of forensic image analysis can involve several different tasks, regardless of the type of image analysis performed. In this paper, the authors explain these tasks, grouped into three categories: image compression, image enhancement and restoration, and measurement extraction, with the help of examples such as signature comparison, counterfeit currency comparison and footwear sole impressions using the software packages Canvas and Corel Draw.
77 FR 40082 - Certain Gaming and Entertainment Consoles, Related Software, and Components Thereof...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-06
... Commission remands for the ALJ to (1) apply the Commission's opinion in Certain Electronic Devices With Image Processing Systems, Components Thereof, and Associated Software, Inv. No. 337-TA-724, Comm'n Op. (Dec. 21...
Secure Display of Space-Exploration Images
NASA Technical Reports Server (NTRS)
Cheng, Cecilia; Thornhill, Gillian; McAuley, Michael
2006-01-01
Java EDR Display Interface (JEDI) is software for either local display or secure Internet distribution, to authorized clients, of image data acquired from cameras aboard spacecraft engaged in exploration of remote planets. (EDR signifies experimental data record, which, in effect, signifies image data.) Processed at NASA's Multimission Image Processing Laboratory (MIPL), the data can be from either near-realtime processing streams or stored files. JEDI uses the Java Advanced Imaging application program interface, plus input/output packages that are parts of the Video Image Communication and Retrieval software of the MIPL, to display images. JEDI can be run as either a standalone application program or within a Web browser as a servlet with an applet front end. In either operating mode, JEDI communicates using the HTTP(s) protocol(s). In the Web-browser case, the user must provide a password to gain access. For each user and/or image data type, there is a configuration file, called a "personality file," containing parameters that control the layout of the displays and the information to be included in them. Once JEDI has accepted the user's password, it processes the requested EDR (provided that user is authorized to receive the specific EDR) to create a display according to the user's personality file.
Image processing and analysis of Saturn's rings
NASA Technical Reports Server (NTRS)
Yagi, G. M.; Jepsen, P. L.; Garneau, G. W.; Mosher, J. A.; Doyle, L. R.; Lorre, J. J.; Avis, C. C.; Korsmo, E. P.
1981-01-01
Processing of Voyager image data of Saturn's rings at JPL's Image Processing Laboratory is described. A software system to navigate the flight images, facilitate feature tracking, and to project the rings has been developed. This system has been used to make measurements of ring radii and to measure the velocities of the spoke features in the B-Ring. A projected ring movie to study the development of these spoke features has been generated. Finally, processing to facilitate comparison of the photometric properties of Saturn's rings at various phase angles is described.
Photogrammetric 3d Building Reconstruction from Thermal Images
NASA Astrophysics Data System (ADS)
Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.
2017-08-01
This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
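A minimal rigid ICP iteration of the kind referenced above can be sketched in Python with SciPy's k-d tree and an SVD (Kabsch) solve. This is a bare illustration under ideal conditions, not the authors' procedure, which additionally fuses TIR- and RGB-derived point clouds.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Minimal rigid ICP: iteratively align point cloud src (N,3) to dst (M,3)."""
    src = src.copy()
    tree = cKDTree(dst)
    for _ in range(iters):
        _, idx = tree.query(src)            # 1. nearest-neighbour matches
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)         # 2. Kabsch rigid transform
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                  #    (reflection-safe rotation)
        t = mu_d - R @ mu_s
        src = src @ R.T + t                 # 3. apply and repeat
    return src

# Example: recover a known rotation + translation of a random cloud.
rng = np.random.default_rng(0)
cloud = rng.random((200, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
moved = cloud @ Rz.T + np.array([0.1, -0.2, 0.05])
aligned = icp(cloud, moved)                 # aligned is approximately moved
```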
NASA Technical Reports Server (NTRS)
Huh, Oscar Karl; Leibowitz, Scott G.; Dirosa, Donald; Hill, John M.
1986-01-01
The use of NOAA Advanced Very High Resolution Radiometer/High Resolution Picture Transmission (AVHRR/HRPT) imagery for earth resource applications is presented for the applications scientist working within the various Earth science, resource, and agricultural disciplines. A guide to processing NOAA AVHRR data using the hardware and software systems integrated for this NASA project is provided. The processing steps from raw data on computer-compatible tapes (1B data format) through usable qualitative and quantitative products for applications are given. The manual is divided into two parts. The first section describes the NOAA satellite system, its sensors, and the theoretical basis for using these data for environmental applications. Part 2 is a hands-on description of how to use a specific image processing system, the International Imaging Systems, Inc. (I2S) Model 75 Array Processor and S575 software, to process these data.
Programmable lithography engine (ProLE) grid-type supercomputer and its applications
NASA Astrophysics Data System (ADS)
Petersen, John S.; Maslow, Mark J.; Gerold, David J.; Greenway, Robert T.
2003-06-01
There are many variables that can affect lithographic dependent device yield. Because of this, it is not enough to make optical proximity corrections (OPC) based on the mask type, wavelength, lens, illumination type and coherence. Resist chemistry and physics, along with substrate, exposure, and all post-exposure processing, must be considered too. Only a holistic approach to finding imaging solutions will accelerate yield and maximize performance. Since experiments are too costly in both time and money, accomplishing this takes massive amounts of accurate simulation capability. Our solution is to create a workbench that has a set of advanced user applications that utilize best-in-class simulator engines for solving litho-related DFM problems using distributive computing. Our product, ProLE (Programmable Lithography Engine), is an integrated system that combines Petersen Advanced Lithography Inc.'s (PAL's) proprietary applications and cluster management software wrapped around commercial software engines, along with optional commercial hardware and software. It uses the most rigorous lithography simulation engines to solve deep sub-wavelength imaging problems accurately and at speeds that are several orders of magnitude faster than current methods. Specifically, ProLE uses full vector thin-mask aerial image models or, when needed, full across-source 3D electromagnetic field simulation to make accurate aerial image predictions along with calibrated resist models. The ProLE workstation from Petersen Advanced Lithography, Inc., is the first commercial product that makes it possible to do these intensive calculations in a fraction of the time previously required, thus significantly reducing time to market for advanced technology devices. In this work, ProLE is introduced through model comparison to show why vector imaging and rigorous resist models work better than less rigorous models; then some applications that use our distributive computing solution are shown. Topics covered describe why ProLE solutions are needed from an economic and technical aspect, a high-level discussion of how the distributive system works, speed benchmarking, and finally, a brief survey of applications including advanced aberrations for lens sensitivity and flare studies, optical proximity correction for a bitcell, and an application that allows evaluation of the potential of a design to have systematic failures during fabrication.
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.; Zagidulin, Dmitri; Rauser, Richard W.
2000-01-01
Capabilities and expertise related to the development of links between nondestructive evaluation (NDE) and finite element analysis (FEA) at Glenn Research Center (GRC) are demonstrated. Current tools to analyze data produced by computed tomography (CT) scans are exercised to help assess the damage state in high-temperature structural composite materials. A utility translator was written to convert Velocity (an image-processing software package) STL data files to a suitable CAD-FEA file type. Finite element analyses are carried out with MARC, a commercial nonlinear finite element code, and the analytical results are discussed. Modeling was established by building an MSC/Patran (a pre- and post-processing finite element package) generated model and comparing it to a model generated by Velocity in conjunction with MSC/Patran Graphics. Modeling issues and results are discussed in this paper. The entire process that outlines the tie between the data extracted via NDE and the finite element modeling and analysis is fully described.
NASA Technical Reports Server (NTRS)
Hussey, K. J.; Hall, J. R.; Mortensen, R. A.
1986-01-01
Image processing methods and software used to animate nonimaging remotely sensed data on cloud cover are described. Three FORTRAN programs were written in the VICAR2/TAE image processing domain to perform 3D perspective rendering, to interactively select parameters controlling the projection, and to interpolate parameter sets for animation images between key frames. Operation of the 3D programs and transferring the images to film is automated using executive control language and custom hardware to link the computer and camera.
Branderhorst, Woutjan; de Groot, Jerry E; van Lier, Monique G J T B; Highnam, Ralph P; den Heeten, Gerard J; Grimbergen, Cornelis A
2017-08-01
To assess the accuracy of two methods of determining the contact area between the compression paddle and the breast in mammography. An accurate method to determine the contact area is essential to accurately calculate the average compression pressure applied by the paddle. For a set of 300 breast compressions, we measured the contact areas between breast and paddle, both capacitively using a transparent foil with indium-tin-oxide (ITO) coating attached to the paddle, and retrospectively from the obtained mammograms using image processing software (Volpara Enterprise, algorithm version 1.5.2). A gold standard was obtained from video images of the compressed breast. During each compression, the breast was illuminated from the sides in order to create a dark shadow on the video image where the breast was in contact with the compression paddle. We manually segmented the shadows captured at the time of x-ray exposure and measured their areas. We found a strong correlation between the manual segmentations and the capacitive measurements [r = 0.989, 95% CI (0.987, 0.992)] and between the manual segmentations and the image processing software [r = 0.978, 95% CI (0.972, 0.982)]. Bland-Altman analysis showed a bias of -0.0038 dm² for the capacitive measurement [SD 0.0658, 95% limits of agreement (-0.1329, 0.1252)] and -0.0035 dm² for the image processing software [SD 0.0962, 95% limits of agreement (-0.1921, 0.1850)]. The size of the contact area between the paddle and the breast can be determined accurately and precisely, both in real-time using the capacitive method, and retrospectively using image processing software. This result is beneficial for scientific research, data analysis and quality control systems that depend on one of these two methods for determining the average pressure on the breast during mammographic compression. © 2017 Sigmascreening B.V. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
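The Bland-Altman statistics quoted above reduce to a few lines of Python; the paired values below are illustrative placeholders, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between paired measurements."""
    diff = np.asarray(a) - np.asarray(b)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Example with illustrative paired contact-area measurements (dm²):
manual     = np.array([1.02, 1.35, 0.88, 1.60, 1.21])
capacitive = np.array([1.05, 1.30, 0.90, 1.58, 1.19])
bias, loa = bland_altman(manual, capacitive)
```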
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
Software organization for a prolog-based prototyping system for machine vision
NASA Astrophysics Data System (ADS)
Jones, Andrew C.; Hack, Ralf; Batchelor, Bruce G.
1996-11-01
We describe PIP (Prolog Image Processing), a prototype system for interactive image processing using Prolog, implemented on an Apple Macintosh computer. PIP is the latest in a series of systems, carrying the collective title Prolog+, whose implementation the third author has been involved in. PIP differs from our previous systems in two particularly important respects. The first is that whereas we previously required dedicated image processing hardware, the present system implements image processing routines in software. The second difference is that our present system is hierarchical in structure, where the top level of the hierarchy emulates Prolog+, but there is a flexible infrastructure which supports more sophisticated image manipulation which we will be able to exploit in due course. We discuss the impact of the Apple Macintosh operating system upon the implementation of the image processing functions, and the interface between these functions and the Prolog system. We also explain how the existing set of Prolog+ commands has been implemented. PIP is now nearing maturity, and we will make a version of it generally available in the near future. However, although the present version of PIP constitutes a complete image processing tool, there are a number of ways in which we intend to enhance future versions, with a view to added flexibility and efficiency; we discuss these ideas briefly near the end of the present paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The software processes recorded thermal video and detects the flight tracks of birds and bats that passed through the camera's field of view. The output is a set of images that show complete flight tracks for any detections, with the direction of travel indicated and the thermal image of the animal delineated. A report of the descriptive features of each detected track is also output in the form of a comma-separated value text file.
Parallel workflow tools to facilitate human brain MRI post-processing
Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang
2015-01-01
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
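The subject-level parallelism these workflow tools exploit can be illustrated with Python's standard library alone. The two steps below are stand-ins for real post-processing stages such as skull-stripping and normalization; the pattern, not the function names, is the point.

```python
from concurrent.futures import ProcessPoolExecutor
import time

def step_a(x):
    time.sleep(0.1)          # stand-in for a real processing step
    return x + 1

def step_b(x):
    time.sleep(0.1)          # stand-in for the next dependent step
    return x * 2

def process_subject(subject_id):
    """One subject's serial pipeline: each step feeds the next."""
    return subject_id, step_b(step_a(subject_id))

if __name__ == "__main__":
    subjects = range(8)
    # Independent subjects run concurrently across worker processes,
    # while each subject's own steps remain strictly ordered.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = dict(pool.map(process_subject, subjects))
    print(results)
```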
Predicting tool life in turning operations using neural networks and image processing
NASA Astrophysics Data System (ADS)
Mikołajczyk, T.; Nowicki, K.; Bustillo, A.; Yu Pimenov, D.
2018-05-01
A two-step method is presented for the automatic prediction of tool life in turning operations. First, experimental data are collected for three cutting edges under the same constant processing conditions. In these experiments, the tool wear parameter, VB, is measured with conventional methods, and the same parameter is estimated using Neural Wear, a customized software package that combines flank wear image recognition and Artificial Neural Networks (ANNs). Second, an ANN model of tool life is trained with the data collected from the first two cutting edges, and the resulting model is evaluated on two different subsets for the third cutting edge: the first subset is obtained from the direct measurement of tool wear and the second is obtained from the Neural Wear software that estimates tool wear using edge images. Although the fully automated solution, the Neural Wear software for tool wear recognition plus the ANN model of tool life prediction, presented a slightly higher error than the direct measurements, it was within the same range and can meet all industrial requirements. These results confirm that the combination of image recognition software and ANN modelling could potentially be developed into a useful industrial tool for low-cost estimation of tool life in turning operations.
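A sketch of the regression step with scikit-learn, on synthetic stand-in data: the features (cutting time and flank wear VB) and the remaining-life target only mimic the structure of the paper's data, and a random split stands in for its edge-wise train/test split.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: wear VB grows noisily with cutting time t.
rng = np.random.default_rng(3)
t = rng.uniform(0, 30, size=(300, 1))                  # minutes of cutting
vb = 0.01 * t + 0.002 * rng.standard_normal((300, 1))  # flank wear [mm]
X = np.hstack([t, vb])
y = (30.0 - t).ravel()                                 # toy "life left" target

# Train on one portion, evaluate on the held-out remainder.
model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000,
                     random_state=0)
model.fit(X[:200], y[:200])
print(model.score(X[200:], y[200:]))                   # R² on held-out data
```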
Clauser, Paola; Marcon, Magda; Maieron, Marta; Zuiani, Chiara; Bazzocchi, Massimo; Baltzer, Pascal A T
2016-07-01
To evaluate the influence of post-processing systems, intra- and inter-reader agreement on the variability of apparent diffusion coefficient (ADC) measurements in breast lesions. Forty-one patients with 41 biopsy-proven breast lesions gave their informed consent and were included in this prospective IRB-approved study. Magnetic resonance imaging (MRI) examinations were performed at 1.5 T using an EPI-DWI sequence, with b-values of 0 and 1000 s/mm². Two radiologists (R1, R2) reviewed the images in separate sessions and measured the ADC for each lesion, using an MRI workstation (S-WS), a PACS workstation (P-WS) and a commercial DICOM viewer (O-SW). Agreement was evaluated using the intraclass correlation coefficient (ICC), Bland-Altman plots and the coefficient of variation (CV). Thirty-one malignant, two high-risk and eight benign mass-like lesions were analysed. Intra-reader agreement was almost perfect (ICC-R1 = 0.974; ICC-R2 = 0.990) while inter-reader agreement was substantial (ICC from 0.615 to 0.682). Bland-Altman plots revealed a significant bias in ADC values measured between O-SW and S-WS (P = 0.025); no further systematic differences were identified. CV varied from 6.8% to 7.9%. Post-processing systems may have a significant, although minor, impact on ADC measurements in breast lesions. While intra-reader agreement is high, the main source of ADC variability seems to be caused by inter-reader variation. • ADC provides quantitative information on breast lesions independent from the system used. • ADC measurement using different workstations and software systems is generally reliable. • Systematic, but minor, differences may occur between different post-processing systems. • Inter-reader agreement of ADC measurements exceeded intra-reader agreement.
Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L
2016-01-15
Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole head images in a single step. The method is simple to install and requires minimal user interaction. This method is equally applicable to different types of MR images. Results were evaluated with Dice and Jacquard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation of brain volumes are preserved. A downloadable software package not otherwise available for extraction of brains from whole head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole head images, rendering them useable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.
A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing
NASA Technical Reports Server (NTRS)
Overmeyer, Austin D.
2015-01-01
A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.
Digital imaging technology assessment: Digital document storage project
NASA Technical Reports Server (NTRS)
1989-01-01
An ongoing technical assessment and requirements definition project is examining the potential role of digital imaging technology at NASA's STI facility. The focus is on the basic components of imaging technology in today's marketplace as well as the components anticipated in the near future. Presented is a requirement specification for a prototype project, an initial examination of current image processing at the STI facility, and an initial summary of image processing projects at other sites. Operational imaging systems incorporate scanners, optical storage, high resolution monitors, processing nodes, magnetic storage, jukeboxes, specialized boards, optical character recognition gear, pixel addressable printers, communications, and complex software processes.
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better-quality analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. It thus becomes urgent not only to develop advanced modern hardware, but also to implement the special software infrastructure needed to use it in everyday clinical practice, the so-called Picture Archiving and Communication Systems (PACS). Developing a distributed PACS is a challenging task in present-day medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
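The map/reduce decomposition of image reconstruction can be illustrated in pure Python: each mapper backprojects one chunk of projections into a partial image, and the reducer combines partial images by summation. This is an unfiltered, nearest-bin toy, not the paper's Hadoop implementation, which would express the same two functions through the Hadoop streaming or Java API.

```python
import numpy as np
from functools import reduce

def mapper(chunk):
    """Map: backproject one chunk of (angles, sinogram rows) into a
    partial 128x128 image (nearest-bin, unfiltered, for illustration)."""
    angles, sino_rows = chunk
    partial = np.zeros((128, 128))
    yy, xx = np.mgrid[0:128, 0:128] - 64.0
    for theta, row in zip(angles, sino_rows):
        s = (xx * np.cos(theta) + yy * np.sin(theta) + 64).astype(int)
        partial += row[np.clip(s, 0, 127)]
    return partial

def reducer(a, b):
    """Reduce: partial images combine by simple summation."""
    return a + b

angles = np.linspace(0, np.pi, 64, endpoint=False)
sino = np.ones((64, 128))                    # placeholder sinogram
chunks = [(angles[i::4], sino[i::4]) for i in range(4)]  # 4 "nodes"
image = reduce(reducer, map(mapper, chunks))
```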
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
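SIPS itself is written in IDL, but the spectral unmixing it supports reduces, in its simplest linear form, to a least-squares estimate of endmember abundances per pixel. A NumPy illustration with synthetic endmembers (not SIPS code; sum-to-one and nonnegativity constraints are omitted):

```python
# Linear spectral unmixing in its simplest, unconstrained form.
import numpy as np

rng = np.random.default_rng(0)

# Endmember spectra: 3 materials x 224 AVIRIS-like bands.
endmembers = rng.random((3, 224))

# A mixed pixel: 50% of material 0, 30% of 1, 20% of 2, plus noise.
true_fracs = np.array([0.5, 0.3, 0.2])
pixel = true_fracs @ endmembers + 0.01 * rng.standard_normal(224)

# Least-squares estimate of the abundance fractions.
fracs, *_ = np.linalg.lstsq(endmembers.T, pixel, rcond=None)
print(np.round(fracs, 2))   # approximately [0.5, 0.3, 0.2]
```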
Dendritic tree extraction from noisy maximum intensity projection images in C. elegans.
Greenblum, Ayala; Sznitman, Raphael; Fua, Pascal; Arratia, Paulo E; Oren, Meital; Podbilewicz, Benjamin; Sznitman, Josué
2014-06-12
Maximum Intensity Projections (MIP) of neuronal dendritic trees obtained from confocal microscopy are frequently used to study the relationship between tree morphology and mechanosensory function in the model organism C. elegans. Extracting dendritic trees from noisy images, however, remains a strenuous process that has traditionally relied on manual approaches. Here, we focus on automated and reliable 2D segmentation of dendritic trees following a statistical learning framework. Our dendritic tree extraction (DTE) method uses small amounts of labelled training data on MIPs to learn noise models of texture-based features from the responses of tree structures and image background. Our strategy lies in evaluating statistical models of noise that account for both the variability generated by the imaging process and the aggregation of information in the MIP images. These noise models are then used within a probabilistic (Bayesian) framework to provide a coarse 2D dendritic tree segmentation. Finally, post-processing is applied to refine the segmentations and provide skeletonized trees using a morphological thinning process. Following a leave-one-out cross-validation (LOOCV) method on an MIP database with available "ground truth" images, we demonstrate that our approach provides significant improvements in tree-structure segmentation over traditional intensity-based methods. Improvements for MIPs under various imaging conditions are both qualitative and quantitative, as measured from receiver operating characteristic (ROC) curves and the yield and error rates in the final segmentations. In a final step, we demonstrate our DTE approach on previously unseen MIP samples, including the extraction of skeletonized structures, and compare our method to state-of-the-art dendritic tree tracing software. Overall, our DTE method allows for robust dendritic tree segmentation in noisy MIPs, outperforming traditional intensity-based methods. This approach provides a usable segmentation framework, ultimately delivering a speed-up in dendritic tree identification for the user and a reliable first step towards further morphological characterization of tree arborization.
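A stripped-down version of this learn-then-skeletonize pipeline can be written with scikit-learn and scikit-image; the texture features and the classifier below are generic stand-ins, not the paper's noise models or Bayesian step.

```python
# Stripped-down learn-then-skeletonize pipeline: generic texture features
# and a random forest stand in for the paper's noise models.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace
from sklearn.ensemble import RandomForestClassifier
from skimage.morphology import skeletonize

def texture_features(img):
    """Per-pixel feature stack: raw intensity, smoothed intensity, and a
    Laplacian-of-Gaussian response (simple stand-ins for texture features)."""
    smooth = gaussian_filter(img, sigma=2)
    return np.stack([img, smooth, laplace(smooth)], axis=-1)

def train_and_segment(train_img, train_labels, test_img):
    """train_labels: binary mask (1 = tree, 0 = background) on train_img."""
    X = texture_features(train_img).reshape(-1, 3)
    clf = RandomForestClassifier(n_estimators=50).fit(X, train_labels.ravel())
    prob = clf.predict_proba(texture_features(test_img).reshape(-1, 3))[:, 1]
    mask = prob.reshape(test_img.shape) > 0.5    # coarse segmentation
    return skeletonize(mask)                     # thinned dendritic tree
```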
Software for keratometry measurements using portable devices
NASA Astrophysics Data System (ADS)
Iyomasa, C. M.; Ventura, L.; De Groote, J. J.
2010-02-01
In this work we present image processing software for automatic astigmatism measurements, developed for a hand-held keratometer. The system projects 36 light spots from LEDs, arranged in a precise circle, onto the lachrymal film of the examined cornea. The displacement, size, and deformation of the reflected image of these light spots are analyzed to provide the keratometry. The purpose of this research is to develop software that performs fast and precise calculations on mainstream mobile devices; in other words, software that can be implemented on portable computer systems that are low cost and easy to handle. This project brings portability to keratometers and is preliminary work towards a portable corneal topographer.
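The measurement step described here, finding the reflected spots and their deviation from the ideal projection circle, can be sketched as follows; the threshold, ring geometry, and downstream keratometry formula are hypothetical placeholders.

```python
# Hypothetical spot-measurement step: locate the 36 reflected LED spots
# and measure their radial deviation from the ideal projection circle.
import numpy as np
from skimage.measure import label, regionprops

def spot_centroids(img, thresh):
    """Centroids (row, col) of bright reflected spots in a grayscale image."""
    regions = regionprops(label(img > thresh))
    return np.array([r.centroid for r in regions])

def radial_displacements(centroids, center, ring_radius):
    """Deviation of each spot from the ideal circle; these deviations feed
    the keratometry/astigmatism calculation."""
    radii = np.linalg.norm(centroids - np.asarray(center), axis=1)
    return radii - ring_radius
```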
ERIC Educational Resources Information Center
Kirchoff, Jeffrey S. J.; Cook, Mike
2015-01-01
It is well established that the 21st century literate student needs to be able to effectively craft and interpret texts that use multiple communicative modes. Graphic novels are one text type that facilitates such literacy instruction, as the seamless relationship between words, image, and sound (in the form of sound effects) are inherent to the…
ERIC Educational Resources Information Center
Carleton, Renee E.
2012-01-01
Computer-aided learning (CAL) is used increasingly to teach anatomy in post-secondary programs. Studies show that augmentation of traditional cadaver dissection and model examination by CAL can be associated with positive student learning outcomes. In order to reduce costs associated with the purchase of skeletons and models and to encourage study…
Using Microsoft PowerPoint as an Astronomical Image Analysis Tool
NASA Astrophysics Data System (ADS)
Beck-Winchatz, Bernhard
2006-12-01
Engaging students in the analysis of authentic scientific data is an effective way to teach them about the scientific process and to develop their problem solving, teamwork and communication skills. In astronomy several image processing and analysis software tools have been developed for use in school environments. However, the practical implementation in the classroom is often difficult because the teachers may not have the comfort level with computers necessary to install and use these tools, they may not have adequate computer privileges and/or support, and they may not have the time to learn how to use specialized astronomy software. To address this problem, we have developed a set of activities in which students analyze astronomical images using basic tools provided in PowerPoint. These include measuring sizes, distances, and angles, and blinking images. In contrast to specialized software, PowerPoint is broadly available on school computers. Many teachers are already familiar with PowerPoint, and the skills developed while learning how to analyze astronomical images are highly transferable. We will discuss several practical examples of measurements, including the following:
- Variations in the distances to the sun and moon from their angular sizes
- Magnetic declination from images of shadows
- Diameter of the moon from lunar eclipse images
- Sizes of lunar craters
- Orbital radii of the Jovian moons and mass of Jupiter
- Supernova and comet searches
- Expansion rate of the universe from images of distant galaxies
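As an example of the arithmetic behind the first measurement in the list, the small-angle approximation d = D / theta converts an angular size measured in an image into a distance:

```python
# Distance to the Moon from its measured angular size, using the
# small-angle approximation d = D / theta (theta in radians).
import math

MOON_DIAMETER_KM = 3474.8      # known physical diameter
angular_size_deg = 0.52        # e.g. measured from an image

theta = math.radians(angular_size_deg)
distance_km = MOON_DIAMETER_KM / theta
print(f"{distance_km:,.0f} km")   # roughly 383,000 km
```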
Photo CD and Other Digital Imaging Technologies: What's out There and What's It For?
ERIC Educational Resources Information Center
Chen, Ching-Chih
1993-01-01
Describes Kodak's Photo CD technology and its impact on digital imaging. Color desktop publishing, image processing and preservation, image archival storage, and interactive multimedia development, as well as the equipment, software, and services that make these applications possible, are described. Contact information for developers and…
gr-MRI: A software package for magnetic resonance imaging using software defined radios.
Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A
2016-09-01
The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially available SDRs.
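gr-MRI's GNU Radio flowgraphs are not reproduced here, but the image-reconstruction step for Cartesian spin-echo data reduces to a 2D inverse FFT of the acquired k-space matrix, as this self-checking NumPy sketch shows:

```python
# Cartesian spin-echo reconstruction as a 2D inverse FFT of k-space
# (illustration of the processing stage; not gr-MRI's own code).
import numpy as np

def reconstruct(kspace):
    """kspace: 2D complex array, one row per phase-encode line."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

# Self-check with a synthetic square phantom.
phantom = np.zeros((128, 128))
phantom[48:80, 48:80] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
assert np.allclose(reconstruct(kspace), phantom, atol=1e-9)
```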
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
Radiographic Image Acquisition & Processing Software for Security Markets. It is used to operate commercial x-ray scanners and to manipulate x-ray images for emergency responders, including state, local, federal, and US military bomb technicians and analysts.
TU-AB-204-03: Research Activities in Medical Physics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badano, A.
The responsibilities of the Food and Drug Administration (FDA) have increased since the inception of the Food and Drugs Act in 1906. Medical devices first came under comprehensive regulation with the passage of the 1938 Food, Drug, and Cosmetic Act. In 1971 the FDA also took on the responsibility for consumer protection against unnecessary exposure to radiation-emitting devices for home and occupational use. However, it was not until 1976, under the Medical Device Regulation Act, that the FDA became responsible for the safety and effectiveness of medical devices. This session will be presented by the Division of Radiological Health (DRH) and the Division of Imaging, Diagnostics, and Software Reliability (DIDSR) from the Center for Devices and Radiological Health (CDRH) at the FDA. The symposium will discuss how we protect and promote public health, with a focus on medical physics applications organized into four areas: pre-market device review, post-market surveillance, device compliance, and current regulatory research efforts and partnerships with other organizations. The pre-market session will summarize the pathways FDA uses to regulate the investigational use and commercialization of diagnostic imaging and radiation therapy medical devices in the US, highlighting resources available to assist investigators and manufacturers. The post-market session will explain the post-market surveillance and compliance activities FDA performs to monitor the safety and effectiveness of devices on the market. The third session will describe research efforts that support the regulatory mission of the Agency, with an overview of the regulatory research portfolio that advances our understanding of medical physics and imaging technologies and approaches to their evaluation. Lastly, the mechanisms FDA uses to seek public input and promote collaborations with professional, government, and international organizations, such as AAPM, the International Electrotechnical Commission (IEC), Image Gently, and the Quantitative Imaging Biomarkers Alliance (QIBA), to fulfill FDA's mission will be discussed. Learning objectives: (1) understand FDA's pre-market and post-market review processes for medical devices; (2) understand FDA's current regulatory research activities in the areas of medical physics and imaging products; (3) understand how involvement with AAPM and other organizations can help to promote innovative, safe, and effective medical devices. J. Delfino, nothing to disclose.
TU-AB-204-02: Device Adverse Events and Compliance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzales, S.
The responsibilities of the Food and Drug Administration (FDA) have increased since the inception of the Food and Drugs Act in 1906. Medical devices first came under comprehensive regulation with the passage of the 1938 Food, Drug, and Cosmetic Act. In 1971 the FDA also took on the responsibility for consumer protection against unnecessary exposure to radiation-emitting devices for home and occupational use. However, it was not until 1976, under the Medical Device Regulation Act, that the FDA became responsible for the safety and effectiveness of medical devices. This session will be presented by the Division of Radiological Health (DRH) and the Division of Imaging, Diagnostics, and Software Reliability (DIDSR) from the Center for Devices and Radiological Health (CDRH) at the FDA. The symposium will discuss how we protect and promote public health, with a focus on medical physics applications organized into four areas: pre-market device review, post-market surveillance, device compliance, and current regulatory research efforts and partnerships with other organizations. The pre-market session will summarize the pathways FDA uses to regulate the investigational use and commercialization of diagnostic imaging and radiation therapy medical devices in the US, highlighting resources available to assist investigators and manufacturers. The post-market session will explain the post-market surveillance and compliance activities FDA performs to monitor the safety and effectiveness of devices on the market. The third session will describe research efforts that support the regulatory mission of the Agency, with an overview of the regulatory research portfolio that advances our understanding of medical physics and imaging technologies and approaches to their evaluation. Lastly, the mechanisms FDA uses to seek public input and promote collaborations with professional, government, and international organizations, such as AAPM, the International Electrotechnical Commission (IEC), Image Gently, and the Quantitative Imaging Biomarkers Alliance (QIBA), to fulfill FDA's mission will be discussed. Learning objectives: (1) understand FDA's pre-market and post-market review processes for medical devices; (2) understand FDA's current regulatory research activities in the areas of medical physics and imaging products; (3) understand how involvement with AAPM and other organizations can help to promote innovative, safe, and effective medical devices. J. Delfino, nothing to disclose.
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer from significant artifacts. Tensor-fitting-based post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although this is an effective approach, when multiple data points are corrupted the method may no longer correctly identify and reject them. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that integrating cISID into fitting-based methods significantly improves retrospective detection performance in post-processing analysis. The performance of the cISID criterion used alone was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of corrupted data. The real-time monitoring based on cISID, followed by post-processing fitting-based outlier rejection, can provide a robust environment for routine DTI studies. PMID:24204551
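The exact cISID formula is not given in the abstract; the sketch below implements only the underlying intuition, flagging slices whose mean intensity jumps sharply relative to both axial neighbours, with the paper's correction terms omitted.

```python
# Simplified inter-slice intensity-discontinuity score (the paper's cISID
# adds corrections not reproduced here).
import numpy as np

def discontinuity_scores(volume):
    """volume: 3D array (slices, rows, cols) of one diffusion-weighted image."""
    means = volume.reshape(volume.shape[0], -1).mean(axis=1)
    scores = np.zeros_like(means, dtype=float)
    # Interior slices: deviation from the average of the two neighbours.
    scores[1:-1] = np.abs(means[1:-1] - 0.5 * (means[:-2] + means[2:]))
    return scores

def corrupted_slices(volume, n_sigma=3.0):
    """Indices of slices whose score is an outlier (> mean + n_sigma * std)."""
    s = discontinuity_scores(volume)
    return np.where(s > s.mean() + n_sigma * s.std())[0]
```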
MIA - A free and open source software for gray scale medical image analysis.
Wollny, Gert; Kellman, Peter; Ledesma-Carbayo, María-Jesus; Skinner, Matthew M; Hublin, Jean-Jacques; Hierl, Thomas
2013-10-11
Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive, and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is using command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only a few tools exist that provide this kind of processing interface; they are usually quite task specific and don't provide a clear approach when one wants to shape a new command line tool from a prototype shell script. The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design, based on atomic plug-ins and single-task command line tools, makes it easy to extend MIA, usually without the requirement to touch or recompile existing code. In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
NASA Astrophysics Data System (ADS)
West, Amanda M.; Evangelista, Paul H.; Jarnevich, Catherine S.; Kumar, Sunil; Swallow, Aaron; Luizza, Matthew W.; Chignell, Stephen M.
2017-07-01
Among the most pressing concerns of land managers in post-wildfire landscapes are the establishment and spread of invasive species. Land managers need accurate maps of invasive species cover for targeted management post-disturbance that are easily transferable across space and time. In this study, we sought to develop an iterative, replicable methodology based on limited invasive species occurrence data, freely available remotely sensed data, and open source software to predict the distribution of Bromus tectorum (cheatgrass) in a post-wildfire landscape. We developed four species distribution models using eight spectral indices derived from five months of Landsat 8 Operational Land Imager (OLI) data in 2014. These months corresponded to both cheatgrass growing period and time of field data collection in the study area. The four models were improved using an iterative approach in which a threshold for cover was established, and all models had high sensitivity values when tested on an independent dataset. We also quantified the area at highest risk for invasion in future seasons given 2014 distribution, topographic covariates, and seed dispersal limitations. These models demonstrate the effectiveness of using derived multi-date spectral indices as proxies for species occurrence on the landscape, the importance of selecting thresholds for invasive species cover to evaluate ecological risk in species distribution models, and the applicability of Landsat 8 OLI and the Software for Assisted Habitat Modeling for targeted invasive species management.
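To make the index-plus-threshold idea concrete: a single spectral index computed from Landsat 8 bands, thresholded against field data, yields a binary presence map. The sketch below uses NDVI as a representative index and a placeholder threshold; the study itself combined eight indices and four distribution models.

```python
# One spectral index, thresholded into a binary presence map (illustrative:
# the study combined eight Landsat 8 OLI indices and four SDMs).
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from Landsat 8 bands 5 (NIR)
    and 4 (red); the small epsilon avoids division by zero."""
    return (nir - red) / (nir + red + 1e-9)

def presence_mask(nir, red, threshold=0.2):
    """Binary cheatgrass-presence prediction from one index; the threshold
    value here is a placeholder to be tuned on field occurrence data."""
    return ndvi(nir, red) > threshold
```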
Mentrup, Detlef; Jockel, Sascha; Menser, Bernd; Neitzel, Ulrich
2016-06-01
The aim of this work was to experimentally compare the contrast improvement factors (CIFs) of a newly developed software-based scatter correction with the CIFs achieved by an antiscatter grid. To this end, three aluminium discs were placed in the lung, retrocardiac, and abdominal areas of a thorax phantom, and digital radiographs of the phantom were acquired both with and without a stationary grid. The contrast generated by the discs was measured in both images, and the CIFs achieved by grid usage were determined for each disc. Additionally, the non-grid images were processed with scatter correction software. The contrasts generated by the discs were determined in the scatter-corrected images, and the corresponding CIFs were calculated. The CIFs obtained with the grid and with the software were in good agreement. In conclusion, the experiment demonstrates quantitatively that software-based scatter correction can restore the image contrast of a non-grid image in a manner comparable with an antiscatter grid.
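Using the common definition of the CIF, the ratio of object contrast in the corrected image to that in the uncorrected one, the measurement reduces to a few lines; ROI selection is assumed to happen elsewhere.

```python
# Contrast improvement factor: object contrast in the corrected image
# divided by object contrast in the uncorrected (non-grid) image.
import numpy as np

def contrast(disc_roi, background_roi):
    """Relative contrast of a disc against its local background."""
    b = float(np.mean(background_roi))
    return abs(float(np.mean(disc_roi)) - b) / b

def cif(disc_corr, bg_corr, disc_raw, bg_raw):
    """CIF from region-of-interest pixel arrays of both images."""
    return contrast(disc_corr, bg_corr) / contrast(disc_raw, bg_raw)
```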
NASA Technical Reports Server (NTRS)
2001-01-01
DATASTAR, Inc., of Picayune, Miss., has taken NASA's award-winning Earth Resources Laboratory Applications (ELAS) software program and evolved it into a unique spatial-imagery service delivered over the Internet. ELAS was developed in the early 1980s to process satellite and airborne sensor imagery of the Earth's surface into readable and usable information. While several software packages on the market allow the manipulation of spatial data into usable products, this is usually a laborious task. The new program, called the DATASTAR Image Processing Exploitation (DIPX) Delivery Service, is a subscription service available over the Internet that takes the work out of the equation and provides normalized geospatial data in the form of decision products.
Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface
NASA Technical Reports Server (NTRS)
Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken
2005-01-01
This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, 83.5% of them by MER. Each rover has 9 cameras (Navcam, front and rear Hazcam, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1-megapixel) images in the same format. All onboard image-processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat-field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), and stereo processing of left/right pairs provides raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-space), and slope maps.
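Of the onboard steps listed, flat-field correction has a compact textbook form, shown below; this is the generic formulation, not the MER flight code.

```python
# Textbook flat-field correction: subtract the dark frame, then divide by
# the normalized per-pixel gain estimated from a uniform-target exposure.
import numpy as np

def flat_field_correct(raw, dark, flat):
    """raw, dark, flat: 2D arrays of equal shape."""
    gain = (flat - dark).astype(float)
    gain /= gain.mean()                      # mean gain normalized to 1.0
    return (raw - dark) / np.clip(gain, 1e-6, None)
```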
A generalized model for multi-marker analysis of cell cycle progression in synchrony experiments.
Mayhew, Michael B; Robinson, Joshua W; Jung, Boyoun; Haase, Steven B; Hartemink, Alexander J
2011-07-01
To advance understanding of eukaryotic cell division, it is important to observe the process precisely. To this end, researchers monitor changes in dividing cells as they traverse the cell cycle, with the presence or absence of morphological or genetic markers indicating a cell's position in a particular interval of the cell cycle. A wide variety of marker data is available, including information-rich cellular imaging data. However, few formal statistical methods have been developed to use these valuable data sources in estimating how a population of cells progresses through the cell cycle. Furthermore, existing methods are designed to handle only a single binary marker of cell cycle progression at a time. Consequently, they cannot facilitate comparison of experiments involving different sets of markers. Here, we develop a new sampling model to accommodate an arbitrary number of different binary markers that characterize the progression of a population of dividing cells along a branching process. We engineer a strain of Saccharomyces cerevisiae with fluorescently labeled markers of cell cycle progression, and apply our new model to two image datasets we collected from the strain, as well as an independent dataset of different markers. We use our model to estimate the duration of post-cytokinetic attachment between an S. cerevisiae mother and daughter cell. The Java implementation is fast and extensible, and includes a graphical user interface. Our model provides a powerful and flexible cell cycle analysis tool, suitable to any type or combination of binary markers. The software is available from: http://www.cs.duke.edu/~amink/software/cloccs/. michael.mayhew@duke.edu; amink@cs.duke.edu.
Special Software for Planetary Image Processing and Research
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.
2016-06-01
Special modules for the photogrammetric processing of remote sensing data were developed, providing the opportunity to effectively organize and optimize planetary studies. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of the shape parameters of a celestial body, global-view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. Different types of data have been used for photogrammetric processing, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites, and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years, acquired from orbit under various illumination conditions and at various resolutions as well as by planetary rovers from the surface. Planetary image processing is a complex task that can take from a few months to years. We present an efficient pipeline procedure that makes it possible to obtain different data products and supports the long path from planetary images to celestial-body maps. The obtained data - new three-dimensional control-point networks, elevation models, and orthomosaics - enabled accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).
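The Sun-illumination estimation mentioned above can be illustrated with a standard hillshade-style computation on a DEM: the cosine of the solar incidence angle per facet. The axis and azimuth conventions below are simplified assumptions, not the PHOTOMOD module's.

```python
# Hillshade-style Sun-illumination estimate on a DEM: cosine of the solar
# incidence angle per facet (simplified conventions; illustrative only).
import numpy as np

def sun_illumination(dem, sun_azimuth_deg, sun_elevation_deg, cellsize=1.0):
    """dem: 2D elevation array; returns values in [0, 1] (0 = fully shaded)."""
    dzdy, dzdx = np.gradient(dem, cellsize)      # slopes along rows, cols
    nz = 1.0 / np.sqrt(dzdx**2 + dzdy**2 + 1.0)  # unit surface normals
    nx, ny = -dzdx * nz, -dzdy * nz
    az = np.radians(sun_azimuth_deg)
    el = np.radians(sun_elevation_deg)
    # Unit vector toward the Sun (azimuth measured from the +y axis).
    sx, sy, sz = np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)
    return np.clip(nx * sx + ny * sy + nz * sz, 0.0, 1.0)
```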