Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.
Handels, H; Ehrhardt, J
2009-01-01
Medical image computing has become one of the most challenging fields in medical informatics. In image-based diagnostics of the future, software assistance will become increasingly important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters that characterize the state and changes of image structures of interest (e.g., tumors, organs, vessels, bones) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility, and robustness. Moreover, the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods from different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are: image segmentation, image registration, image analysis for quantification and computer-assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. In particular, model-based image computing techniques open up new perspectives for the prediction of organ changes and patient risk analysis, and will gain importance in the diagnostics and therapy of the future. From a methodological point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized, application-specific systems and their integration into the clinical workflow; enhanced computational models for image analysis and virtual-reality training systems; integration of different image computing methods; further integration of multimodal image data and biosignals; and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or operation planning is a complex, interdisciplinary process. Image computing methods enable new insights into the patient's image data and have the potential to improve medical diagnostics and patient treatment.
Flightspeed Integral Image Analysis Toolkit
NASA Technical Reports Server (NTRS)
Thompson, David R.
2009-01-01
The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses: ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image, facilitating a wide range of fast image-processing functions. The toolkit is applicable to a wide range of autonomous image analysis tasks in the spaceflight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints, providing an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
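The "integral image" (summed-area table) at the core of FIIAT is simple to state: each table entry stores the cumulative sum of all pixels above and to the left of it, so any rectangular sum reduces to four lookups. Here is a minimal sketch in Python; the function names are illustrative and do not reflect FIIAT's actual C API, though the integer arithmetic mirrors the library's stated design constraint.

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero pad row/column: ii[r, c] = sum(img[:r, :c])."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```

Because every box sum costs the same four lookups regardless of box size, feature extraction time becomes independent of filter scale, which is what makes this structure attractive for fixed-point flight hardware.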
Jiang, Weiping; Wang, Li; Niu, Xiaoji; Zhang, Quan; Zhang, Hui; Tang, Min; Hu, Xiangyun
2014-01-01
A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSS) when satellite-based navigation is unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable even if few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision integrated solutions for position (centimeter level) and attitude (half-degree level) can be achieved in a global reference frame. PMID:25330046
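The abstract describes the filter only at a high level; the sketch below shows the generic iterated EKF measurement update (Gauss-Newton form) on which such tightly-coupled integrations are typically built. The observation model `h`, its Jacobian `H_jac`, and the iteration count are placeholders, not the authors' image/INS formulation.

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, iters=5):
    """One IEKF measurement update: relinearize h() about the refined estimate."""
    x_i = x.copy()
    for _ in range(iters):
        H = H_jac(x_i)                      # Jacobian at the current iterate
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        # Gauss-Newton step; the H @ (x - x_i) term accounts for relinearization
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P = (np.eye(len(x)) - K @ H) @ P
    return x_i, P
```

The iteration matters precisely in the low-feature regime the paper targets: with a strongly nonlinear camera measurement model, relinearizing about the refined state recovers accuracy a single EKF update would lose.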
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system for viewing-angle and viewing-zone improvement. In the tracking integral imaging system, the pickup angles of each elemental lens in the lens array are decided by the positions of the viewers, which means the elemental images can be made for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, it is necessary to formulate the relationship between the multiple viewers' positions and the elemental images. We analyzed this relationship and the conditions for multiple viewers, and verified them by the implementation of a two-viewer tracking integral imaging system.
2017-12-02
Report: Acquisition of an Advanced Thermal Analysis and Imaging System for Integration with Interdisciplinary Research and Education in Low Density Organic-Inorganic Materials
Dictionary-based image reconstruction for superresolution in integrated circuit imaging.
Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim
2015-06-01
Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
The Image Data Resource: A Bioimage Data Integration and Publication Platform.
Williams, Eleanor; Moore, Josh; Li, Simon W; Rustici, Gabriella; Tarkowska, Aleksandra; Chessel, Anatole; Leo, Simone; Antal, Bálint; Ferguson, Richard K; Sarkans, Ugis; Brazma, Alvis; Salas, Rafael E Carazo; Swedlow, Jason R
2017-08-01
Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
Predicting neuropathic ulceration: analysis of static temperature distributions in thermal images
NASA Astrophysics Data System (ADS)
Kaabouch, Naima; Hu, Wen-Chen; Chen, Yi; Anderson, Julie W.; Ames, Forrest; Paulson, Rolf
2010-11-01
Foot ulcers affect millions of Americans annually. Conventional methods used to assess skin integrity, including inspection and palpation, may be valuable approaches, but they usually do not detect changes in skin integrity until an ulcer has already developed. We analyze the feasibility of thermal imaging as a technique to assess the integrity of the skin and its many layers. Thermal images are analyzed using an asymmetry analysis, combined with a genetic algorithm, to examine the infrared images for early detection of foot ulcers. Preliminary results show that the proposed technique can reliably and efficiently detect inflammation and hence effectively predict potential ulceration.
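The abstract does not spell out the asymmetry computation, but contralateral comparison of plantar temperatures is the standard starting point for this kind of analysis. A minimal sketch follows, assuming co-registered left/right thermal images; the 2.2 degree threshold is a commonly cited value from the diabetic-foot literature, an assumption rather than a value from this paper, and the genetic-algorithm refinement step is not reproduced.

```python
import numpy as np

def asymmetry_map(left_foot, right_foot, threshold=2.2):
    """Flag pixels whose contralateral temperature difference exceeds `threshold`
    (degrees C; 2.2 is a commonly cited clinical value, not this paper's).
    Assumes both plantar thermal images are co-registered; the right foot is
    mirrored so anatomically matching points line up."""
    diff = np.abs(left_foot - np.fliplr(right_foot))
    return diff > threshold
```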
Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F
2010-01-01
An integrated software package, the Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research, and it has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list or create a novel model, and perform parameter estimation, all without having to write any computer code. For image analysis, the COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in the COMKAT image tool. It also displays and automatically registers images from two modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speed. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
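COMKAT itself is a MATLAB package; as a flavor of the kind of compartment-model fit it automates, here is a minimal one-tissue compartment analogue in Python. The uniform time grid, plasma input function `cp`, and starting values are illustrative assumptions, and this is not COMKAT code.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_tissue(t, K1, k2, cp):
    """C_T(t) = K1 * exp(-k2*t) convolved with Cp(t), on a uniform time grid t."""
    dt = t[1] - t[0]
    return K1 * dt * np.convolve(np.exp(-k2 * t), cp)[: len(t)]

def fit_one_tissue(t, cp, ct_meas):
    """Estimate (K1, k2) from a measured tissue time-activity curve ct_meas."""
    model = lambda t, K1, k2: one_tissue(t, K1, k2, cp)
    (K1, k2), _ = curve_fit(model, t, ct_meas, p0=(0.1, 0.1), bounds=(0, np.inf))
    return K1, k2
```

Applying such a fit independently to every voxel's time-activity curve is what produces the parametric images mentioned in the abstract, which is why distributed computing support is useful there.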
Sparse models for correlative and integrative analysis of imaging and genetic data
Lin, Dongdong; Cao, Hongbao; Calhoun, Vince D.
2014-01-01
The development of advanced medical imaging technologies and high-throughput genomic measurements has enhanced our ability to understand their interplay as well as their relationship with human behavior by integrating these two types of datasets. However, the high dimensionality and heterogeneity of these datasets presents a challenge to conventional statistical methods; there is a high demand for the development of both correlative and integrative analysis approaches. Here, we review our recent work on developing sparse representation based approaches to address this challenge. We show how sparse models are applied to the correlation and integration of imaging and genetic data for biomarker identification. We present examples on how these approaches are used for the detection of risk genes and classification of complex diseases such as schizophrenia. Finally, we discuss future directions on the integration of multiple imaging and genomic datasets including their interactions such as epistasis. PMID:25218561
Computerized image analysis for quantitative neuronal phenotyping in zebrafish.
Liu, Tianming; Lu, Jianfeng; Wang, Ye; Campbell, William A; Huang, Ling; Zhu, Jinmin; Xia, Weiming; Wong, Stephen T C
2006-06-15
An integrated microscope image analysis pipeline is developed for automatic analysis and quantification of phenotypes in zebrafish with altered expression of Alzheimer's disease (AD)-linked genes. We hypothesize that a slight impairment of neuronal integrity in a large number of zebrafish carrying the mutant genotype can be detected through the computerized image analysis method. Key functionalities of our zebrafish image processing pipeline include quantification of neuron loss in zebrafish embryos due to knockdown of AD-linked genes, automatic detection of defective somites, and quantitative measurement of gene expression levels in zebrafish with altered expression of AD-linked genes or treatment with a chemical compound. These quantitative measurements enable the archival of analyzed results and relevant meta-data. The structured database is organized for statistical analysis and data modeling to better understand neuronal integrity and phenotypic changes of zebrafish under different perturbations. Our results show that the computerized analysis is comparable to manual counting with equivalent accuracy and improved efficacy and consistency. Development of such an automated data analysis pipeline represents a significant step forward to achieve accurate and reproducible quantification of neuronal phenotypes in large scale or high-throughput zebrafish imaging studies.
Testing for a Signal with Unknown Location and Scale in a Stationary Gaussian Random Field
1994-01-07
AMS subject classifications: Secondary 60D05, 52A22. Key words and phrases: Euler characteristic, integral geometry, image analysis, Gaussian fields, volume of tubes.
Multifacet structure of observed reconstructed integral images.
Martínez-Corral, Manuel; Javidi, Bahram; Martínez-Cuenca, Raúl; Saavedra, Genaro
2005-04-01
Three-dimensional images generated by an integral imaging system suffer from degradations in the form of a grid of multiple facets. This multifacet structure breaks the continuity of the observed image and therefore reduces its visual quality. We perform an analysis of this effect and present guidelines for the design of lenslet imaging parameters to optimize viewing conditions with respect to the multifacet degradation. We consider the optimization of the system in terms of field of view, observer position and pupil function, lenslet parameters, and type of reconstruction. Numerical tests are presented to verify the theoretical analysis.
Integrated optical 3D digital imaging based on DSP scheme
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.
2008-03-01
We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is based on a parallel hardware structure in which a DSP and a field-programmable gate array (FPGA) together realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipelined processing of fringe projection, image acquisition, and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are presented to show the validity of the proposed scheme.
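Phase measurement profilometry, the method adopted here, recovers a wrapped phase map from a small set of phase-shifted fringe images. Below is a minimal four-step sketch using the classic arctangent formula; it illustrates the core computation, not the authors' DSP implementation.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images with phase shifts 0, pi/2, pi, 3*pi/2.
    With I_n = A + B*cos(phi + n*pi/2), tan(phi) = (I4 - I2) / (I1 - I3)."""
    return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)
```

The result is wrapped to (-pi, pi]; the phase-unwrapping stage the abstract mentions is the step that removes the 2*pi ambiguities before the phase is converted to height.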
Proceedings of the Third Annual Symposium on Mathematical Pattern Recognition and Image Analysis
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.
1985-01-01
Topics addressed include: multivariate spline method; normal mixture analysis applied to remote sensing; image data analysis; classifications in spatially correlated environments; probability density functions; graphical nonparametric methods; subpixel registration analysis; hypothesis integration in image understanding systems; rectification of satellite scanner imagery; spatial variation in remotely sensed images; smooth multidimensional interpolation; and optimal frequency domain textural edge detection filters.
Deserno, Thomas M; Haak, Daniel; Brandenburg, Vincent; Deserno, Verena; Classen, Christoph; Specht, Paula
2014-12-01
Especially for investigator-initiated research at universities and academic institutions, Internet-based rare disease registries (RDR) are required that integrate electronic data capture (EDC) with automatic image analysis or manual image annotation. We propose a modular framework merging alphanumeric and binary data capture. In concordance with the Office of Rare Diseases Research recommendations, a requirement analysis was performed based on several RDR databases currently hosted at Uniklinik RWTH Aachen, Germany. With respect to the study management tool that is already successfully operating at the Clinical Trial Center Aachen, the Google Web Toolkit was chosen, with Hibernate and Gilead connecting to a MySQL database management system. Image and signal data integration and processing are supported by the Apache Commons FileUpload library and ImageJ-based Java code, respectively. As a proof of concept, the framework is instantiated for the German Calciphylaxis Registry. The framework is composed of five mandatory core modules: (1) Data Core, (2) EDC, (3) Access Control, (4) Audit Trail, and (5) Terminology, as well as six optional modules: (6) Binary Large Object (BLOB), (7) BLOB Analysis, (8) Standard Operation Procedure, (9) Communication, (10) Pseudonymization, and (11) Biorepository. Modules 1-7 are implemented in the German Calciphylaxis Registry. The proposed RDR framework is easily instantiated and directly integrates image management and analysis. As open-source software, it may assist improved data collection and analysis of rare diseases in the near future.
A Mobile Food Record For Integrated Dietary Assessment*
Ahmad, Ziad; Kerr, Deborah A.; Bosch, Marc; Boushey, Carol J.; Delp, Edward J.; Khanna, Nitin; Zhu, Fengqing
2017-01-01
This paper presents an integrated dietary assessment system based on food image analysis that uses mobile devices or smartphones. We describe two components of our integrated system: a mobile application and an image-based food nutrient database that is connected to the mobile application. An easy-to-use mobile application user interface is described that was designed based on user preferences as well as the requirements of the image analysis methods. The user interface is validated by user feedback collected from several studies. Food nutrient and image databases are also described, which facilitate image-based dietary assessment and enable dietitians and other healthcare professionals to monitor patients' dietary intake in real time. The system has been tested and validated in several user studies involving more than 500 users who took more than 60,000 food images under controlled and community-dwelling conditions. PMID:28691119
Image Segmentation Analysis for NASA Earth Science Applications
NASA Technical Reports Server (NTRS)
Tilton, James C.
2010-01-01
NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
Integrating medical imaging analyses through a high-throughput bundled resource imaging system
NASA Astrophysics Data System (ADS)
Covington, Kelsie; Welch, E. Brian; Jeong, Ha-Kyu; Landman, Bennett A.
2011-03-01
Exploitation of advanced, PACS-centric image analysis and interpretation pipelines provides well-developed storage, retrieval, and archival capabilities along with state-of-the-art data providence, visualization, and clinical collaboration technologies. However, pursuit of integrated medical imaging analysis through a PACS environment can be limiting in terms of the overhead required to validate, evaluate and integrate emerging research technologies. Herein, we address this challenge through presentation of a high-throughput bundled resource imaging system (HUBRIS) as an extension to the Philips Research Imaging Development Environment (PRIDE). HUBRIS enables PACS-connected medical imaging equipment to invoke tools provided by the Java Imaging Science Toolkit (JIST) so that a medical imaging platform (e.g., a magnetic resonance imaging scanner) can pass images and parameters to a server, which communicates with a grid computing facility to invoke the selected algorithms. Generated images are passed back to the server and subsequently to the imaging platform from which the images can be sent to a PACS. JIST makes use of an open application program interface layer so that research technologies can be implemented in any language capable of communicating through a system shell environment (e.g., Matlab, Java, C/C++, Perl, LISP, etc.). As demonstrated in this proof-of-concept approach, HUBRIS enables evaluation and analysis of emerging technologies within well-developed PACS systems with minimal adaptation of research software, which simplifies evaluation of new technologies in clinical research and provides a more convenient use of PACS technology by imaging scientists.
Gutman, David A; Cobb, Jake; Somanna, Dhananjaya; Park, Yuna; Wang, Fusheng; Kurc, Tahsin; Saltz, Joel H; Brat, Daniel J; Cooper, Lee A D
2013-01-01
Background: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. Objective: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. Materials and methods: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. Results: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20 000 whole-slide images from 22 cancer types. Discussion: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic, and MRI measurements in glioblastomas, and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. Conclusions: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints. PMID:23893318
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
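A hedged sketch of the proposed PCA+HPF coupling follows: perform PCA on the upsampled MS bands, extract high-frequency detail from the pan image with a box filter, and inject that detail into the first principal component before inverting the transform. The injection gain `alpha`, kernel size, and scaling rule are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA

def pca_hpf_pansharpen(ms_up, pan, alpha=0.7, box=5):
    """ms_up: (H, W, B) MS image upsampled to the pan grid; pan: (H, W)."""
    h, w, b = ms_up.shape
    pan = pan.astype(float)
    pca = PCA(n_components=b)
    pcs = pca.fit_transform(ms_up.reshape(-1, b).astype(float))
    pan_hp = pan - uniform_filter(pan, size=box)          # high-frequency pan detail
    pc1 = pcs[:, 0].reshape(h, w)
    # scale the injected detail to the first PC's dynamic range
    pcs[:, 0] = (pc1 + alpha * pan_hp * pc1.std() / pan_hp.std()).ravel()
    return pca.inverse_transform(pcs).reshape(h, w, b)
```

Injecting only the high-pass residue, rather than substituting the whole first component, is what limits the spectral distortion the abstract attributes to plain PCA substitution.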
FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples
Scrofani, G.; Sola-Pikabea, J.; Llavador, A.; Sanchez-Ortiga, E.; Barreiro, J. C.; Saavedra, G.; Garcia-Sucerquia, J.; Martínez-Corral, M.
2017-01-01
In this work, the Fourier integral microscope (FIMic), an ultimate design of 3D-integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the microscope objective of the host microscope, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold extended depth of field and 2-fold better spatial resolution in comparison with conventional integral microscopy are reported. Our claims are supported by theoretical analysis and experimental images of a resolution test target, cotton fibers, and in-vivo 3D imaging of biological specimens. PMID:29359107
USDA-ARS?s Scientific Manuscript database
Using five centimeter resolution images acquired with an unmanned aircraft system (UAS), we developed and evaluated an image processing workflow that included the integration of resolution-appropriate field sampling, feature selection, object-based image analysis, and processing approaches for UAS i...
Analysis of autostereoscopic three-dimensional images using multiview wavelets.
Saveljev, Vladimir; Palchikova, Irina
2016-08-10
We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images.
Integrated analysis of remote sensing products from basic geological surveys. [Brazil
NASA Technical Reports Server (NTRS)
Dasilvafagundesfilho, E. (Principal Investigator)
1984-01-01
Recent advances in remote sensing have led to the development of several techniques for obtaining image information. These techniques are analyzed as effective tools in geological mapping. A strategy for optimizing the use of images in basic geological surveying is presented. It embraces an integrated analysis of spatial, spectral, and temporal data through photoptic (color additive viewer) and computer processing at different scales, allowing large areas to be surveyed in a fast, precise, and low-cost manner.
Intershot Analysis of Flows in DIII-D
NASA Astrophysics Data System (ADS)
Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.
2016-10-01
Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resultant line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determines the relative weight of the component matrices used in the final flow inversion matrix. Serial processing has been used for the lower-divertor-viewing flow camera's 800x600 pixel image. The full cross-section-viewing camera will require parallel processing of its 2160x2560 pixel image. We will discuss using a Posix thread pool and a Tesla K40c GPU in the processing of this data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.
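Since the response matrices are precomputed, each inversion reduces to linear least squares. A minimal sketch of that step is below; the matrix names and the equilibrium-weighted combination are illustrative, not the DIII-D analysis code.

```python
import numpy as np

def invert_flow(A_emis, brightness, A_par, phase_image, weights):
    """Least-squares inversion of line-integrated images.
    A_emis: (n_pixels, n_cells) emissivity line-integral response matrix.
    A_par:  list of response matrices for the parallel-flow unit vectors,
            combined with equilibrium-derived weights before inversion."""
    emis, *_ = np.linalg.lstsq(A_emis, brightness.ravel(), rcond=None)
    A_flow = sum(w * A for w, A in zip(weights, A_par))
    flow, *_ = np.linalg.lstsq(A_flow, phase_image.ravel(), rcond=None)
    return emis, flow
```

For the 2160x2560 camera the matrix-vector products dominate, which is why a thread pool or GPU helps: the rows of the least-squares system can be processed independently.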
Edge Preserved Speckle Noise Reduction Using Integrated Fuzzy Filters
Dewal, M. L.; Rohit, Manoj Kumar
2014-01-01
Echocardiographic images inherently contain speckle noise, which makes visual reading and analysis quite difficult. The multiplicative speckle noise masks finer details necessary for diagnosis of abnormalities. A novel speckle reduction technique based on the integration of geometric, Wiener, and fuzzy filters is proposed and analyzed in this paper. The denoising applications of fuzzy filters are studied and analyzed along with 26 denoising techniques. It is observed that the geometric filter retains noise; to address this issue, a Wiener filter is embedded into the geometric filter during the iteration process. The performance of the geometric-Wiener filter is further enhanced using fuzzy filters, and the proposed despeckling techniques are called integrated fuzzy filters. Fuzzy filters based on the moving-average and median values are employed in the integrated fuzzy filters. The performance of the integrated fuzzy filters is tested on echocardiographic images and synthetic images in terms of image quality metrics. It is observed that the performance parameters are highest for the integrated fuzzy filters in comparison to fuzzy and geometric-fuzzy filters. The clinical validation reveals that the output images obtained using the geometric-Wiener, integrated fuzzy, nonlocal means, and detail-preserving anisotropic diffusion filters are acceptable. The necessary finer details are retained in the denoised echocardiographic images. PMID:27437499
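As one concrete member of the fuzzy-filter family discussed here, the sketch below implements a triangular membership function centered on each window's moving-average value (TMAV-style): pixels far from the window mean, such as speckle spikes, receive low weight. The window size and membership shape are illustrative choices, not necessarily the paper's configuration.

```python
import numpy as np

def fuzzy_tmav(img, k=3):
    """Weighted window mean with triangular membership about the window mean."""
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            win = p[r:r + k, c:c + k]
            dev = np.abs(win - win.mean())
            spread = dev.max()
            w = 1.0 - dev / spread if spread > 0 else np.ones_like(win)
            w = np.maximum(w, 1e-6)          # keep weights strictly positive
            out[r, c] = (w * win).sum() / w.sum()
    return out
```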
Towards a framework for agent-based image analysis of remote-sensing data
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-01-01
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916
Validating a Geographical Image Retrieval System.
ERIC Educational Resources Information Center
Zhu, Bin; Chen, Hsinchun
2000-01-01
Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…
Rabal, Obdulia; Link, Wolfgang; Serelde, Beatriz G; Bischoff, James R; Oyarzabal, Julen
2010-04-01
Here we report the development and validation of a complete solution to manage and analyze the data produced by image-based phenotypic screening campaigns of small-molecule libraries. In one step, initial crude images are analyzed for multiple cytological features, statistical analysis is performed, and molecules that produce the desired phenotypic profile are identified. A naïve Bayes classifier, integrating chemical and phenotypic spaces, is built and utilized during the process to assess those images initially classified as "fuzzy," providing automated iterative feedback tuning. Simultaneously, all this information is directly annotated in a relational database containing the chemical data. This novel, fully automated method was validated by conducting a re-analysis of results from a high-content screening campaign involving 33,992 molecules used to identify inhibitors of the PI3K/Akt signaling pathway. Ninety-two percent of the confirmed hits identified by the conventional multistep analysis method were identified using this integrated one-step system, along with 40 new hits (14.9% of the total) that were originally false negatives. Ninety-six percent of true negatives were also properly recognized. Web-based access to the database, with customizable data retrieval and visualization tools, facilitates the posterior analysis of annotated cytological features, which allows identification of additional phenotypic profiles; thus, further analysis of the original crude images is not required.
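A minimal sketch of the classifier idea, integrating chemical and phenotypic spaces by simple feature concatenation: confident hits and non-hits train the model, and borderline "fuzzy" wells are re-scored by their posterior hit probability. Using scikit-learn's Gaussian naive Bayes is an assumption for illustration; the paper's actual implementation and feature set are not reproduced.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def rescore_fuzzy(chem_feats, pheno_feats, labels, fuzzy_chem, fuzzy_pheno):
    """labels: 1 for confident hits, 0 for confident non-hits.
    Returns posterior hit probabilities for the 'fuzzy' wells."""
    X = np.hstack([chem_feats, pheno_feats])      # joint chemical+phenotypic space
    clf = GaussianNB().fit(X, labels)
    X_fuzzy = np.hstack([fuzzy_chem, fuzzy_pheno])
    return clf.predict_proba(X_fuzzy)[:, 1]       # column 1 = class "hit"
```

Feeding the re-scored wells back into the training set and repeating is one plausible reading of the "automated iterative feedback tuning" the abstract describes.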
Ontology-based, Tissue MicroArray oriented, image centered tissue bank
Viti, Federica; Merelli, Ivan; Caprera, Andrea; Lazzari, Barbara; Stella, Alessandra; Milanesi, Luciano
2008-01-01
Background: The Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment, and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results: In this work we propose a Tissue MicroArray web-oriented system that supports researchers in managing bio-samples and, through the use of ontologies, enables tissue sharing aimed at the design of Tissue MicroArray experiments and the evaluation of results. Indeed, our system provides ontological descriptions both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working on well-defined terms, it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions: Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a fully compliant Gene Ontology definition, enabling statistical studies of the correlation between the analyzed pathology and the most commonly related biological processes. PMID:18460177
Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch
2011-01-01
In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked in regions of non-interest. Image integrity analysis is conducted by comparing embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier we trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
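The signature construction is easy to sketch: partition the region of interest into pixel blocks and compute low-order geometric moments of each. A minimal version follows; the block size and moment orders are illustrative, and the watermark embedding step is omitted.

```python
import numpy as np

def block_moments(block, max_order=2):
    """Geometric moments m_pq = sum_x sum_y x^p y^q I(x, y) for p + q <= max_order."""
    ys, xs = np.mgrid[0:block.shape[0], 0:block.shape[1]]
    return np.array([(xs**p * ys**q * block).sum()
                     for p in range(max_order + 1)
                     for q in range(max_order + 1 - p)])

def image_signature(img, bs=16):
    """Concatenated block moments over a grid of bs x bs blocks."""
    h, w = (d - d % bs for d in img.shape)
    return np.array([block_moments(img[r:r + bs, c:c + bs].astype(float))
                     for r in range(0, h, bs) for c in range(0, w, bs)])
```

Comparing the recomputed signature against the embedded one then localizes tampering to the blocks whose moments disagree, which is what makes the per-block layout useful here.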
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology for rationally addressing the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
Interfaces and Integration of Medical Image Analysis Frameworks: Challenges and Opportunities.
Covington, Kelsie; McCreedy, Evan S; Chen, Min; Carass, Aaron; Aucoin, Nicole; Landman, Bennett A
2010-05-25
Clinical research with medical imaging typically involves large-scale data analysis with interdependent software toolsets tied together in a processing workflow. Numerous, complementary platforms are available, but these are not readily compatible in terms of workflows or data formats. Both image scientists and clinical investigators could benefit from using the framework which is the most natural fit to the specific problem at hand, but pragmatic choices often dictate that a compromise platform is used for collaboration. Manual merging of platforms through carefully tuned scripts has been effective, but is exceptionally time consuming and is not feasible for large-scale integration efforts. Hence, the benefits of innovation are constrained by platform dependence. Removing this constraint via integration of algorithms from one framework into another is the focus of this work. We propose and demonstrate a light-weight interface system to expose parameters across platforms and provide seamless integration. In this initial effort, we focus on four platforms: Medical Image Analysis and Visualization (MIPAV), the Java Image Science Toolkit (JIST), command line tools, and 3D Slicer. We explore three case studies: (1) providing a system for MIPAV to expose internal algorithms and utilize these algorithms within JIST, (2) exposing JIST modules through a self-documenting command line interface for inclusion in scripting environments, and (3) detecting and using JIST modules in 3D Slicer. We review the challenges and opportunities for light-weight software integration both within a development language (e.g., Java in MIPAV and JIST) and across languages (e.g., C/C++ in 3D Slicer and shell in command line tools).
DOT National Transportation Integrated Search
2006-05-08
This paper describes the integration of wavelet analysis and time-domain beamforming of microphone array output signals for analyzing the acoustic emissions from airplane-generated wake vortices. This integrated process provides visual and quanti...
Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T
2016-06-01
Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video, as conventionally performed (such as background subtraction or block matching), which means that the movement properties do not significantly affect the detection quality. Instead, object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found to be efficient under severe noise conditions.
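Computational integral imaging reconstruction is typically a shift-and-sum operation: each elemental image is displaced in proportion to its lenslet index and the chosen reconstruction depth, and the overlaps are averaged, bringing objects at that depth into focus. A simplified sketch under that assumption follows; the geometry constants are illustrative, not this paper's calibration.

```python
import numpy as np

def ciir(elemental, pitch_px, depth_ratio):
    """elemental: (K, L, h, w) array of elemental images.
    depth_ratio = g / z, the lens-to-sensor gap over reconstruction depth,
    which controls the per-lenslet shift at the chosen plane."""
    K, L, h, w = elemental.shape
    shift = pitch_px * depth_ratio
    H = h + int(round(shift * (K - 1)))
    W = w + int(round(shift * (L - 1)))
    acc = np.zeros((H, W))
    cnt = np.zeros((H, W))
    for k in range(K):
        for l in range(L):
            r0, c0 = int(round(k * shift)), int(round(l * shift))
            acc[r0:r0 + h, c0:c0 + w] += elemental[k, l]
            cnt[r0:r0 + h, c0:c0 + w] += 1
    return acc / np.maximum(cnt, 1)   # average where elemental images overlap
```

Averaging many independent elemental views at the in-focus plane is also why this class of methods degrades gracefully under noise, which the abstract emphasizes.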
NASA Astrophysics Data System (ADS)
Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.
2016-03-01
Today, subjects' medical data in controlled clinical trials is captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support the integration of subjects' image data, although medical imaging looms large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system for clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated in an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images collected in the study so far have been correctly identified and successfully integrated into the corresponding subjects' eCRFs. Using this system, manual steps for the study personnel are reduced, and, therefore, errors, latency, and costs are decreased. Our approach also increases data security and privacy.
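The color-calibration step the reference cards enable can be sketched as fitting a linear correction from the measured patch colors to their known reference values. A minimal version follows, assuming the patches have already been located and averaged; the affine model and the patch-handling details are assumptions, not the App's exact pipeline.

```python
import numpy as np

def color_correction_matrix(measured, reference):
    """measured, reference: (N, 3) RGB values of the card's N patches.
    Returns a 4x3 affine matrix M with [r, g, b, 1] @ M ~= reference."""
    A = np.hstack([measured, np.ones((len(measured), 1))])
    M, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return M

def apply_correction(img, M):
    """Apply the fitted correction to an (H, W, 3) image."""
    flat = img.reshape(-1, 3).astype(float)
    out = np.hstack([flat, np.ones((len(flat), 1))]) @ M
    return out.clip(0, 255).reshape(img.shape)
```

Because the same card appears in every photograph, each image carries its own calibration target, so varying phone cameras and lighting can be normalized without any per-device profiling.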
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-03-01
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit, DICOM IQSC, has been developed to implement this SC-centered integration of quantitative analysis information for the routine practice of nuclear medicine. Initial experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
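A hedged sketch of the embedding idea using pydicom: attach quantification results to a Secondary Capture object in a private block, so the object still round-trips through a standard PACS. The tag group, private-creator string, file names, and JSON payload layout are illustrative, not the paper's exact encoding.

```python
import json
import pydicom

# Load an existing Secondary Capture screenshot (hypothetical file name).
ds = pydicom.dcmread("sc_screenshot.dcm")

# Reserve a private block; "IQSC" is an illustrative private-creator string.
block = ds.private_block(0x0011, "IQSC", create=True)
block.add_new(0x01, "LO", "ROI+TAC quantification v1")   # short description
block.add_new(0x02, "UT", json.dumps({                    # quantification payload
    "roi": {"label": "lesion", "mean": 4.2, "max": 7.9},
    "tac": {"t_s": [0, 60, 120], "activity": [1.1, 3.4, 2.8]},
}))

ds.save_as("sc_with_iq.dcm")
```

Using a private block keeps the added elements namespaced to the creator, so conforming viewers that do not know about the IQ payload simply ignore it.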
Advanced image based methods for structural integrity monitoring: Review and prospects
NASA Astrophysics Data System (ADS)
Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.
2018-02-01
There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics has brought about a paradigm change in sensing such phenomena. Hence, several widely applicable optical approaches are playing a significant role in support of experiments. The current review describes advanced image-based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI), and Speckle Pattern Shearing Interferometry (Shearography). These non-contact, full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and they evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid-state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN), permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single-photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
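The PSW idea can be sketched directly: locate adjacent single-photon peaks in the photon counting histogram, take their spacing as the conversion gain, and express the width of a peak in electron units. A minimal version follows; the prominence threshold and the local-moment width estimate are illustrative choices, not the paper's procedure.

```python
import numpy as np
from scipy.signal import find_peaks

def read_noise_from_pch(counts, bin_centers):
    """counts: histogram of many low-light pixel readings; bin_centers in DN.
    Assumes at least two resolved single-photon peaks."""
    peaks, _ = find_peaks(counts, prominence=counts.max() * 0.05)
    gain = np.mean(np.diff(bin_centers[peaks]))   # DN per photoelectron (spacing)
    i = peaks[0]                                   # isolate the first photon peak
    lo, hi = max(i - 10, 0), min(i + 10, len(counts))
    w, x = counts[lo:hi], bin_centers[lo:hi]
    mu = (w * x).sum() / w.sum()                   # local moment estimates
    sigma = np.sqrt((w * (x - mu) ** 2).sum() / w.sum())
    return sigma / gain                            # read noise in e- rms
```

The ratio only makes sense in the DSERN regime the abstract describes: once the read noise exceeds roughly half an electron, the photon peaks merge and neither the separation nor the width can be measured.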
Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis
NASA Technical Reports Server (NTRS)
Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.
2004-01-01
This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors arising from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and will excite spacecraft and telescope structural modes. The structural/optical response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.
Mesoscale and severe storms (Mass) data management and analysis system
NASA Technical Reports Server (NTRS)
Hickey, J. S.; Karitani, S.; Dickerson, M.
1984-01-01
Progress on the Mesoscale and Severe Storms (MASS) data management and analysis system is described. An interactive atmospheric database management software package to convert four types of data (Sounding, Single Level, Grid, Image) into standard random-access formats is implemented and integrated with the MASS AVE80 Series general-purpose plotting and graphics display data analysis software package. An interactive analysis and display graphics software package (AVE80) for analyzing large volumes of conventional and satellite-derived meteorological data is enhanced to provide imaging/color graphics display utilizing color video hardware integrated into the MASS computer system. Local and remote smart-terminal capability is provided by installing APPLE III computer systems within individual scientists' offices and integrating them with the MASS system, thus providing color video display, graphics, and character display of the four data types.
The effect of human image in B2C website design: an eye-tracking study
NASA Astrophysics Data System (ADS)
Wang, Qiuzhen; Yang, Yi; Wang, Qi; Ma, Qingguo
2014-09-01
On B2C shopping websites, effective visual designs can bring about positive emotional experiences for consumers. From this perspective, this article develops a research model to explore the impact of human images as a visual element on consumers' online shopping emotions and subsequent attitudes towards websites. This study conducted an eye-tracking experiment to collect both eye movement data and questionnaire data to test the research model. Questionnaire data analysis showed that product pictures combined with human images induced positive emotions among participants, thus promoting their attitudes towards online shopping websites. Specifically, product pictures with human images first produced higher levels of image appeal and perceived social presence, thus stimulating higher levels of enjoyment and subsequent positive attitudes towards the websites. Moreover, a moderating effect of product type was demonstrated on the relationship between the presence of human images and the level of image appeal. Specifically, human images significantly increased the level of image appeal when integrated in entertainment product pictures, while this relationship was not significant for utilitarian products. Eye-tracking data analysis further supported these results and provided plausible explanations. The presence of human images significantly increased the pupil size of participants regardless of product type. For entertainment products, participants paid more attention to product pictures integrated with human images, whereas for utilitarian products more attention was paid to functional product information than to product pictures, regardless of whether human images were present.
Fast Image Texture Classification Using Decision Trees
NASA Technical Reports Server (NTRS)
Thompson, David R.
2011-01-01
Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation-hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
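A condensed sketch of the ingredients named above (integral image, O(1) box features, a decision tree via scikit-learn); border handling and the actual training data are omitted:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def integral_image(img):
        """ii[y, x] = sum of img[:y+1, :x+1]."""
        return img.cumsum(0).cumsum(1)

    def box_mean(ii, y, x, h):
        """Mean of the (2h+1)x(2h+1) box centred at (y, x), in O(1) per query.
        Assumes the box lies strictly inside the image."""
        s = (ii[y + h, x + h] - ii[y - h - 1, x + h]
             - ii[y + h, x - h - 1] + ii[y - h - 1, x - h - 1])
        return s / float((2 * h + 1) ** 2)

    def pixel_features(ii, y, x, scales=(1, 2, 4, 8)):
        return [box_mean(ii, y, x, h) for h in scales]

    # X: per-pixel multi-scale features; y: texture labels from training imagery.
    # clf = DecisionTreeClassifier(max_depth=8).fit(X, y)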
Aaldering, Loes; Vliegenthart, Rens
Despite the large amount of research into both media coverage of politics and political leadership, surprisingly little research has been devoted to the ways political leaders are discussed in the media. This paper studies whether computer-aided content analysis can be applied to examining political leadership images in Dutch newspaper articles. It first provides a conceptualization of political leader character traits that integrates different perspectives in the literature. Moreover, this paper measures twelve political leadership images in media coverage, based on a large-scale computer-assisted content analysis of Dutch media coverage (including almost 150,000 newspaper articles), and systematically tests the quality of the employed measurement instrument by assessing the relationship between the images, the variance in the measurement, and the over-time development of images for two party leaders, and by comparing the computer results with manual coding. We conclude that the computerized content analysis provides a valid measurement of the leadership images in Dutch newspapers. Moreover, we find that the dimensions political craftsmanship, vigorousness, integrity, communicative performance and consistency are regularly applied in discussing party leaders, but that portrayal of party leaders in terms of responsiveness is almost completely absent in Dutch newspapers.
Image Harvest: an open-source platform for high-throughput plant image processing and analysis
Knecht, Avi C.; Campbell, Malachy T.; Caprez, Adam; Swanson, David R.; Walia, Harkamal
2016-01-01
High-throughput plant phenotyping is an effective approach to bridge the genotype-to-phenotype gap in crops. Phenomics experiments typically result in large-scale image datasets, which are not amenable for processing on desktop computers, thus creating a bottleneck in the image-analysis pipeline. Here, we present an open-source, flexible image-analysis framework, called Image Harvest (IH), for processing images originating from high-throughput plant phenotyping platforms. Image Harvest is developed to perform parallel processing on computing grids and provides an integrated feature for metadata extraction from large-scale file organization. Moreover, the integration of IH with the Open Science Grid provides academic researchers with the computational resources required for processing large image datasets at no cost. Image Harvest also offers functionalities to extract digital traits from images to interpret plant architecture-related characteristics. To demonstrate the applications of these digital traits, a rice (Oryza sativa) diversity panel was phenotyped and genome-wide association mapping was performed using digital traits that are used to describe different plant ideotypes. Three major quantitative trait loci were identified on rice chromosomes 4 and 6, which co-localize with quantitative trait loci known to regulate agronomically important traits in rice. Image Harvest is an open-source software for high-throughput image processing that requires a minimal learning curve for plant biologists to analyze phenomics datasets. PMID:27141917
Chondronikola, Maria; Sidossis, Labros S.; Richardson, Lisa M.; Temple, Jeff R.; van den Berg, Patricia A.; Herndon, David N.; Meyer, Walter J.
2012-01-01
Objective: Burn injury deformities and obesity have been associated with social integration difficulty and body image dissatisfaction. However, the combined effects of obesity and burn injury on social integration difficulty and body image dissatisfaction are unknown. Methods: Adolescent and young adult burn injury survivors were categorized as normal weight (n=47) or overweight and obese (n=21). Burn-related and anthropometric information was obtained from patients' medical records, while validated questionnaires were used to assess the main outcomes and possible confounders. Analysis of covariance and multiple linear regressions were performed to evaluate the objectives of this study. Results: Obese and overweight burn injury survivors did not experience increased body image dissatisfaction (12 ± 4.3 vs 13.1 ± 4.4, p = 0.57) or social integration difficulty (17.5 ± 6.9 vs 15.5 ± 5.7, p = 0.16) compared to normal weight burn injury survivors. Weight status was not a significant predictor of social integration difficulty or body image dissatisfaction (p = 0.19 and p = 0.24, respectively). However, mobility limitations predicted greater social integration difficulty (p = 0.005) and body image dissatisfaction (p < 0.001), while higher weight status at burn was a borderline significant predictor of body image dissatisfaction (p = 0.05). Conclusions: Obese and overweight adolescents and young adults who sustained a major burn injury as children do not experience greater social integration difficulty and body image dissatisfaction compared to normal weight burn injury survivors. Mobility limitations and higher weight status at burn are likely more important factors affecting the long-term social integration difficulty and body image dissatisfaction of these young people. PMID:23292577
Fiji: an open-source platform for biological-image analysis.
Schindelin, Johannes; Arganda-Carreras, Ignacio; Frise, Erwin; Kaynig, Verena; Longair, Mark; Pietzsch, Tobias; Preibisch, Stephan; Rueden, Curtis; Saalfeld, Stephan; Schmid, Benjamin; Tinevez, Jean-Yves; White, Daniel James; Hartenstein, Volker; Eliceiri, Kevin; Tomancak, Pavel; Cardona, Albert
2012-06-28
Fiji is a distribution of the popular open-source software ImageJ focused on biological-image analysis. Fiji uses modern software engineering practices to combine powerful software libraries with a broad range of scripting languages to enable rapid prototyping of image-processing algorithms. Fiji facilitates the transformation of new algorithms into ImageJ plugins that can be shared with end users through an integrated update system. We propose Fiji as a platform for productive collaboration between computer science and biology research communities.
Oetjen, Janina; Aichler, Michaela; Trede, Dennis; Strehlow, Jan; Berger, Judith; Heldmann, Stefan; Becker, Michael; Gottschalk, Michael; Kobarg, Jan Hendrik; Wirtz, Stefan; Schiffler, Stefan; Thiele, Herbert; Walch, Axel; Maass, Peter; Alexandrov, Theodore
2013-09-02
MALDI imaging mass spectrometry (MALDI-imaging) has emerged as a spatially resolved, label-free bioanalytical technique for direct analysis of biological samples and was recently introduced for the analysis of 3D tissue specimens. We present a new experimental and computational pipeline for molecular analysis of tissue specimens which integrates 3D MALDI-imaging, magnetic resonance imaging (MRI), and histological staining and microscopy, and evaluate the pipeline by applying it to the analysis of a mouse kidney. To ensure sample integrity and reproducible sectioning, we utilized PAXgene fixation and paraffin embedding and proved its compatibility with MRI. Altogether, 122 serial sections of the kidney were analyzed using MALDI-imaging, resulting in a 3D dataset of 200 GB comprising 2 million spectra. We show that elastic image registration better compensates for local distortions of tissue sections. The computational analysis of 3D MALDI-imaging data was performed using our spatial segmentation pipeline, which determines regions of distinct molecular composition and finds m/z-values co-localized with these regions. For facilitated interpretation of the 3D distribution of ions, we evaluated isosurfaces providing simplified visualization. We present the data in a multimodal fashion, combining 3D MALDI-imaging with MRI volume rendering and with light microscopic images of histologically stained sections. Our novel experimental and computational pipeline for 3D MALDI-imaging can be applied to address clinical questions such as proteomic analysis of tumor morphologic heterogeneity. Examining the protein distribution as well as the drug distribution throughout an entire tumor using our pipeline will facilitate understanding of the molecular mechanisms of carcinogenesis. Copyright © 2013 Elsevier B.V. All rights reserved.
Image analysis by integration of disparate information
NASA Technical Reports Server (NTRS)
Lemoigne, Jacqueline
1993-01-01
Image analysis often starts with some preliminary segmentation which provides a representation of the scene needed for further interpretation. Segmentation can be performed in several ways, which are categorized as pixel-based, edge-based, and region-based. Each of these approaches is affected differently by various factors, and the final result may be improved by integrating several or all of these methods, thus taking advantage of their complementary nature. In this paper, we propose an approach that integrates pixel-based and edge-based results by utilizing an iterative relaxation technique. This approach has been implemented on a massively parallel computer and tested on remotely sensed imagery from the Landsat Thematic Mapper (TM) sensor.
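One way such an integration can be realized (a minimal sketch, not the paper's exact scheme): relax per-pixel label probabilities by averaging over neighbours, with the edge map blocking support from propagating across boundaries.

    import numpy as np

    def relax(prob, edges, iters=10, alpha=0.5):
        """prob: (H, W, K) initial label probabilities; edges: (H, W) in [0, 1]."""
        for _ in range(iters):
            nb = np.zeros_like(prob)
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nb += np.roll(prob, (dy, dx), axis=(0, 1))
            nb /= 4.0
            w = alpha * (1.0 - edges)[..., None]     # strong edges suppress smoothing
            prob = (1.0 - w) * prob + w * nb
            prob /= prob.sum(axis=2, keepdims=True)  # keep label probabilities normalized
        return prob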
Providing integrity, authenticity, and confidentiality for header and pixel data of DICOM images.
Al-Haj, Ali
2015-04-01
The exchange of medical images over public networks is subject to different types of security threats. This has triggered persistent demands for secured telemedicine implementations that will provide confidentiality, authenticity, and integrity for the transmitted images. The medical image exchange standard (DICOM) offers mechanisms to provide confidentiality for the header data of the image but not for the pixel data. On the other hand, it offers mechanisms to achieve authenticity and integrity for the pixel data but not for the header data. In this paper, we propose a crypto-based algorithm that provides confidentiality, authenticity, and integrity for the pixel data as well as for the header data. This is achieved by applying strong cryptographic primitives utilizing internally generated security data, such as encryption keys, hashing codes, and digital signatures. The security data are generated internally from the header and the pixel data, thus a strong bond is established between the DICOM data and the corresponding security data. The proposed algorithm has been evaluated extensively using DICOM images of different modalities. Simulation experiments show that confidentiality, authenticity, and integrity have been achieved, as reflected by the results we obtained for normalized correlation, entropy, PSNR, histogram analysis, and robustness.
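A heavily simplified sketch of the header-pixel bond using standard primitives (hashlib plus the cryptography package); this is not the paper's exact algorithm, and deriving a key from header fields alone is shown for illustration only:

    import hashlib, os
    import pydicom
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    ds = pydicom.dcmread("image.dcm")                        # hypothetical input file
    header = f"{ds.PatientID}|{ds.SOPInstanceUID}".encode()  # header-derived data

    key = hashlib.sha256(header).digest()    # security data generated internally
    nonce = os.urandom(12)
    sealed = AESGCM(key).encrypt(nonce, bytes(ds.PixelData), header)
    # AES-GCM provides confidentiality for the pixel data and, via the header
    # used as associated data, binds integrity/authenticity to the header.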
IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java
Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2015-01-01
Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along with the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is supported by a Groovy script interface to the JVM. We demonstrate IQM's image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and aims at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today's single-cell studies is quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction- and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279
Advanced GPR imaging of sedimentary features: integrated attribute analysis applied to sand dunes
NASA Astrophysics Data System (ADS)
Zhao, Wenke; Forte, Emanuele; Fontolan, Giorgio; Pipan, Michele
2018-04-01
We evaluate the applicability and effectiveness of integrated GPR attribute analysis for imaging the internal sedimentary features of the Piscinas Dunes, SW Sardinia, Italy. The main objective is to explore the limits of GPR techniques for studying sediment-body geometry and to provide a non-invasive, high-resolution characterization of the different subsurface domains of the dune architecture. For this purpose, we exploit the high-quality Piscinas dataset to extract and test different attributes of the GPR trace. Composite displays of multiple attributes related to amplitude, frequency, similarity and textural features are displayed with overlays and RGB mixed models. A multi-attribute comparative analysis is used to characterize different radar facies to better understand the characteristics of internal reflection patterns. The results demonstrate that the proposed integrated GPR attribute analysis can provide enhanced information about the spatial distribution of sediment bodies, allowing an enhanced and more constrained data interpretation.
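A small sketch of how two standard trace attributes and an RGB mix can be computed (NumPy/SciPy; the specific attribute choices are illustrative, not the authors' full set):

    import numpy as np
    from scipy.signal import hilbert

    def rgb_attributes(section):
        """section: (samples, traces) GPR matrix -> (samples, traces, 3) RGB mix."""
        analytic = hilbert(section, axis=0)
        env = np.abs(analytic)                        # instantaneous amplitude (envelope)
        phase = np.unwrap(np.angle(analytic), axis=0)
        ifreq = np.abs(np.gradient(phase, axis=0))    # instantaneous-frequency proxy
        norm = lambda a: (a - a.min()) / (np.ptp(a) + 1e-12)
        return np.dstack([norm(env), norm(ifreq), norm(np.abs(section))])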
Mari, João Fernando; Saito, José Hiroki; Neves, Amanda Ferreira; Lotufo, Celina Monteiro da Cruz; Destro-Filho, João-Batista; Nicoletti, Maria do Carmo
2015-12-01
Microelectrode Arrays (MEA) are devices for long-term electrophysiological recording of extracellular spontaneous or evoked activities of in vitro neuron cultures. This work proposes and develops a framework for quantitative and morphological analysis of neuron cultures on MEAs, by processing their corresponding images, acquired by fluorescence microscopy. The neurons are segmented from the fluorescence channel images using a combination of segmentation by thresholding, watershed transform, and object classification. The positioning of microelectrodes is obtained from the transmitted light channel images using the circular Hough transform. The proposed method was applied to images of dissociated cultures of rat dorsal root ganglion (DRG) neuronal cells. The morphological and topological quantitative analysis carried out produced information regarding the state of the culture, such as population count, neuron-to-neuron and neuron-to-microelectrode distances, soma morphologies, neuron sizes, and neuron and microelectrode spatial distributions. Most analyses of microscopy images taken from neuronal cultures on MEAs consider only simple qualitative measures. The proposed framework also aims to standardize the image processing and to compute quantitative, useful measures for integrated image-signal studies and further computational simulations. As the results show, the implemented microelectrode identification method is robust, and so are the implemented neuron segmentation and classification methods (with a correct segmentation rate of up to 84%). The quantitative information retrieved by the method is highly relevant to assist the integrated study of recorded electrophysiological signals as well as the physical aspects of the neuron culture on the MEA. Although the experiments deal with DRG cell images, cortical and hippocampal cell images could also be processed with small adjustments in the image processing parameter estimation.
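A condensed scikit-image sketch of the two detection steps described above (all parameters hypothetical): thresholding plus watershed for neuron somata, and the circular Hough transform for electrode positions.

    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import canny, peak_local_max
    from skimage.segmentation import watershed
    from skimage.transform import hough_circle, hough_circle_peaks

    def segment_neurons(fluo):
        mask = fluo > threshold_otsu(fluo)
        dist = ndi.distance_transform_edt(mask)
        peaks = peak_local_max(dist, labels=mask, min_distance=5)
        markers = np.zeros(fluo.shape, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
        return watershed(-dist, markers, mask=mask)   # labeled somata

    def find_electrodes(bright, radii=np.arange(8, 15)):
        h = hough_circle(canny(bright), radii)
        _, cx, cy, r = hough_circle_peaks(h, radii, total_num_peaks=60)
        return cx, cy, r                              # electrode centres and radii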
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper concerns the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm as the primary EV sensor. For the integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, without using either precision navigation or detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
An Integrated Imaging Detector of Polarization and Spectral Content
NASA Technical Reports Server (NTRS)
Rust, D. M.; Thompson, K. E.
1993-01-01
A new type of image detector has been designed to simultaneously analyze the polarization of light at all picture elements in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a charge-coupled device (CCD), with signal-analysis circuitry and analog-to-digital converters, all integrated on a silicon chip. It should be capable of 1:10^4 polarization discrimination. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Innovations in the IDID include (1) two interleaved 512 x 1024-pixel imaging arrays (one for each polarization plane); (2) large dynamic range (well depth of 10^6 electrons per pixel); (3) simultaneous readout of both images at 10 million pixels per second each; (4) on-chip analog signal processing to produce polarization maps in real time; (5) on-chip 10-bit A/D conversion. When used with a lithium-niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can collect and analyze simultaneous images at two wavelengths. Precise photometric analysis of molecular or atomic concentrations in the atmosphere is one suggested application. When used in a solar telescope, the IDID will measure the polarization, which can then be converted to maps of the vector magnetic fields on the solar surface.
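The on-chip polarization map reduces to a normalized difference of the two interleaved images; as a sketch (I0 and I90 standing for the two de-interleaved polarization planes):

    import numpy as np

    def polarization_map(i0, i90):
        i0 = i0.astype(np.int64)
        i90 = i90.astype(np.int64)
        total = i0 + i90                          # total intensity per pixel
        return (i0 - i90) / np.maximum(total, 1)  # degree of polarization, divide-guarded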
Light-induced voltage alteration for integrated circuit analysis
Cole, Jr., Edward I.; Soden, Jerry M.
1995-01-01
An apparatus and method are described for analyzing an integrated circuit (IC). The invention uses a focused light beam that is scanned over a surface of the IC to generate a light-induced voltage alteration (LIVA) signal for analysis of the IC. The LIVA signal may be used to generate an image of the IC showing the location of any defects in the IC; and it may be further used to image and control the logic states of the IC. The invention has uses for IC failure analysis, for the development of ICs, for production-line inspection of ICs, and for qualification of ICs.
Satellite image analysis using neural networks
NASA Technical Reports Server (NTRS)
Sheldon, Roger A.
1990-01-01
The tremendous backlog of unanalyzed satellite data necessitates the development of improved methods for data cataloging and analysis. Ford Aerospace has developed an image analysis system, SIANN (Satellite Image Analysis using Neural Networks) that integrates the technologies necessary to satisfy NASA's science data analysis requirements for the next generation of satellites. SIANN will enable scientists to train a neural network to recognize image data containing scenes of interest and then rapidly search data archives for all such images. The approach combines conventional image processing technology with recent advances in neural networks to provide improved classification capabilities. SIANN allows users to proceed through a four step process of image classification: filtering and enhancement, creation of neural network training data via application of feature extraction algorithms, configuring and training a neural network model, and classification of images by application of the trained neural network. A prototype experimentation testbed was completed and applied to climatological data.
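A schematic of the four-step workflow in modern open-source terms (illustrative only, not the SIANN code; the features and model below are assumptions):

    import numpy as np
    from scipy import ndimage as ndi
    from sklearn.neural_network import MLPClassifier

    def scene_features(img):
        smooth = ndi.gaussian_filter(img, 2.0)       # step 1: filtering and enhancement
        grad = np.abs(np.gradient(smooth)).mean()    # step 2: simple feature extraction
        return [smooth.mean(), smooth.std(), grad]

    # Steps 3-4: configure/train a small network, then classify archive images.
    # X = [scene_features(im) for im in training_images]; y = scene labels
    # clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    # labels = clf.predict([scene_features(im) for im in archive_images])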
Developing tools for digital radar image data evaluation
NASA Technical Reports Server (NTRS)
Domik, G.; Leberl, F.; Raggam, J.
1986-01-01
The refinement of radar image analysis methods has led to a need for a systems approach to radar image processing software. Developments stimulated through satellite radar are combined with standard image processing techniques to create a user environment to manipulate and analyze airborne and satellite radar images. One aim is to create radar products for the user from the original data to enhance the ease of understanding the contents. The results are called secondary image products and derive from the original digital images. Another aim is to support interactive SAR image analysis. Software methods permit use of a digital height model to create ortho images, synthetic images, stereo-ortho images, radar maps or color combinations of different component products. Efforts are ongoing to integrate individual tools into a combined hardware/software environment for interactive radar image analysis.
Chen, Xiaoxia; Zhao, Jing; Chen, Tianshu; Gao, Tao; Zhu, Xiaoli; Li, Genxi
2018-01-01
Comprehensive analysis of the expression level and location of tumor-associated membrane proteins (TMPs) is of vital importance for the profiling of tumor cells. Currently, two kinds of independent techniques, i.e. ex situ detection and in situ imaging, are usually required for the quantification and localization of TMPs, respectively, resulting in some inevitable problems. Methods: Herein, based on a well-designed and fluorophore-labeled DNAzyme, we develop an integrated and facile method in which imaging and quantification of TMPs in situ are achieved simultaneously in a single system. The labeled DNAzyme not only produces localized fluorescence for the visualization of TMPs but also catalyzes the cleavage of a substrate to produce quantitative fluorescent signals that can be collected from solution for the sensitive detection of TMPs. Results: Results from the DNAzyme-based in situ imaging and quantification of TMPs match well with traditional immunofluorescence and western blotting. In addition to the advantage of being two-in-one, the DNAzyme-based method is highly sensitive, allowing the detection of TMPs in only 100 cells. Moreover, the method is nondestructive. Cells after analysis retain their physiological activity and can be cultured for other applications. Conclusion: The integrated system provides solid results for both imaging and quantification of TMPs, making it a competitive method over some traditional techniques for the analysis of TMPs, and offering potential application as a toolbox in the future.
Anima: Modular Workflow System for Comprehensive Image Data Analysis
Rantanen, Ville; Valori, Miko; Hautaniemi, Sampsa
2014-01-01
Modern microscopes produce vast amounts of image data, and computational methods are needed to analyze and interpret these data. Furthermore, a single image analysis project may require tens or hundreds of analysis steps, starting from data import and pre-processing to segmentation and statistical analysis, and ending with visualization and reporting. To manage such large-scale image data analysis projects, we present here a modular workflow system called Anima. Anima is designed for comprehensive and efficient image data analysis development, and it contains several features that are crucial in high-throughput image data analysis: programming language independence, batch processing, easily customized data processing, interoperability with other software via application programming interfaces, and advanced multivariate statistical analysis. The utility of Anima is shown with two case studies focusing on testing different algorithms developed in different imaging platforms and an automated prediction of alive/dead C. elegans worms by integrating several analysis environments. Anima is fully open source and available with documentation at www.anduril.org/anima. PMID:25126541
Viewpoints on Medical Image Processing: From Science to Application
Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas
2013-01-01
Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development with clear trends toward integrated applications in diagnostics, treatment planning and treatment. PMID:24078804
Lu, Dengsheng; Batistella, Mateus; Moran, Emilio
2009-01-01
Traditional change detection approaches have proven difficult for detecting vegetation changes in moist tropical regions with multitemporal images. This paper explores the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data for vegetation change detection in the Brazilian Amazon. A principal component analysis was used to integrate TM and HRG panchromatic data. Vegetation change/non-change was detected with the image differencing approach based on the TM and HRG fused image and the corresponding TM image. A rule-based approach was used to classify the TM and HRG multispectral images into thematic maps with three coarse land-cover classes: forest, non-forest vegetation, and non-vegetation lands. A hybrid approach combining image differencing and post-classification comparison was used to detect vegetation change trajectories. This research indicates promising vegetation change detection techniques, especially for vegetation gain and loss, even when very limited reference data are available. PMID:19789721
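A compact sketch of the two computational steps (PCA-based fusion by component substitution, then differencing with a ±k·sigma threshold); this is a generic formulation under stated assumptions, not the authors' exact procedure:

    import numpy as np

    def pca_fuse(ms, pan):
        """ms: (H, W, B) multispectral resampled to the pan grid; pan: (H, W)."""
        X = ms.reshape(-1, ms.shape[2]).astype(float)
        mu = X.mean(0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        pcs = (X - mu) @ Vt.T
        p = pan.ravel().astype(float)
        # Substitute mean/std-matched pan for the first principal component.
        pcs[:, 0] = (p - p.mean()) / p.std() * pcs[:, 0].std() + pcs[:, 0].mean()
        return (pcs @ Vt + mu).reshape(ms.shape)

    def change_mask(t1, t2, k=2.0):
        d = t2.astype(float) - t1.astype(float)
        return np.abs(d - d.mean()) > k * d.std()   # change/non-change by thresholding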
OIPAV: an integrated software system for ophthalmic image processing, analysis and visualization
NASA Astrophysics Data System (ADS)
Zhang, Lichun; Xiang, Dehui; Jin, Chao; Shi, Fei; Yu, Kai; Chen, Xinjian
2018-03-01
OIPAV (Ophthalmic Image Processing, Analysis and Visualization) is a cross-platform software system specially oriented to ophthalmic images. It provides a wide range of functionalities including data I/O, image processing, interaction, ophthalmic disease detection, data analysis and visualization to help researchers and clinicians deal with various ophthalmic images such as optical coherence tomography (OCT) images and color fundus photographs. It enables users to easily access ophthalmic image data produced by different imaging devices, facilitates workflows for processing ophthalmic images, and improves quantitative evaluations. In this paper, we present the system design and functional modules of the platform and demonstrate various applications. With satisfying functional scalability and expandability, we believe that the software can be widely applied in the field of ophthalmology.
An architecture for a brain-image database
NASA Technical Reports Server (NTRS)
Herskovits, E. H.
2000-01-01
The widespread availability of methods for noninvasive assessment of brain structure has enabled researchers to investigate neuroimaging correlates of normal aging, cerebrovascular disease, and other processes; we designate such studies as image-based clinical trials (IBCTs). We propose an architecture for a brain-image database, which integrates image processing and statistical operators, and thus supports the implementation and analysis of IBCTs. The implementation of this architecture is described and results from the analysis of image and clinical data from two IBCTs are presented. We expect that systems such as this will play a central role in the management and analysis of complex research data sets.
A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging
NASA Astrophysics Data System (ADS)
Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc
2015-06-01
High-speed X-ray imaging applications play a crucial role in non-destructive investigations of the dynamics in materials science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to a more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated in a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
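A minimal software model of an image-based trigger of this kind (the real system runs in FPGA logic; the threshold and background-update rate below are hypothetical):

    import numpy as np

    def triggered_frames(frames, threshold=8.0, alpha=0.05):
        """Yield (index, frame) whenever a frame departs from a running reference."""
        ref = None
        for i, frame in enumerate(frames):
            f = frame.astype(np.float32)
            if ref is None:
                ref = f
                continue
            if np.mean(np.abs(f - ref)) > threshold:
                yield i, frame                        # event detected: keep this frame
            ref = (1.0 - alpha) * ref + alpha * f     # slowly track the static background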
Integration of geological remote-sensing techniques in subsurface analysis
Taranik, James V.; Trautwein, Charles M.
1976-01-01
Geological remote sensing is defined as the study of the Earth utilizing electromagnetic radiation which is either reflected or emitted from its surface in wavelengths ranging from 0.3 micrometre to 3 metres. The natural surface of the Earth is composed of a diversified combination of surface cover types, and geologists must understand the characteristics of surface cover types to successfully evaluate remotely sensed data. In some areas landscape surface cover changes throughout the year, and analysis of imagery acquired at different times of year can yield additional geological information. Integration of different scales of analysis allows landscape features to be effectively interpreted. Interpretation of the static elements displayed on imagery is referred to as an image interpretation. Image interpretation is dependent upon: (1) the geologist's understanding of the fundamental aspects of image formation, and (2) his ability to detect, delineate, and classify image radiometric data; recognize radiometric patterns; and identify landscape surface characteristics as expressed on imagery. A geologic interpretation integrates surface characteristics of the landscape with subsurface geologic relationships. Development of a geologic interpretation from imagery is dependent upon: (1) the geologist's ability to interpret geomorphic processes from their static surface expression as landscape characteristics on imagery, and (2) his ability to conceptualize the dynamic processes responsible for the evolution of interpreted geologic relationships (his ability to develop geologic models). The integration of geologic remote-sensing techniques in subsurface analysis is illustrated by the development of an exploration model for ground water in the Tucson area of Arizona, and by the development of an exploration model for mineralization in southwest Idaho.
Klukas, Christian; Chen, Dijun; Pape, Jean-Michel
2014-01-01
High-throughput phenotyping is emerging as an important technology to dissect phenotypic components in plants. Efficient image processing and feature extraction are prerequisites to quantify plant growth and performance based on phenotypic traits. Issues include data management, image analysis, and result visualization of large-scale phenotypic data sets. Here, we present the Integrated Analysis Platform (IAP), an open-source framework for high-throughput plant phenotyping. IAP provides user-friendly interfaces, and its core functions are highly adaptable. Our system supports image data transfer from different acquisition environments and large-scale image analysis for different plant species based on real-time imaging data obtained from different spectra. Due to the huge amount of data to manage, we utilized a common data structure for efficient storage and organization of both input data and result data. We implemented a block-based method for automated image processing to extract a representative list of plant phenotypic traits. We also provide tools for built-in data plotting and result export. For validation of IAP, we performed an example experiment that contains 33 maize (Zea mays ‘Fernandez’) plants, which were grown for 9 weeks in an automated greenhouse with nondestructive imaging. Subsequently, the image data were subjected to automated analysis with the maize pipeline implemented in our system. We found that the computed digital volume and number of leaves correlate with our manually measured data with high accuracy, up to 0.98 and 0.95, respectively. In summary, IAP provides multiple functionalities for import/export, management, and automated analysis of high-throughput plant phenotyping data, and its analysis results are highly reliable. PMID:24760818
An image analysis system for near-infrared (NIR) fluorescence lymph imaging
NASA Astrophysics Data System (ADS)
Zhang, Jingdan; Zhou, Shaohua Kevin; Xiang, Xiaoyan; Rasmussen, John C.; Sevick-Muraca, Eva M.
2011-03-01
Quantitative analysis of lymphatic function is crucial for understanding the lymphatic system and diagnosing the associated diseases. Recently, a near-infrared (NIR) fluorescence imaging system was developed for real-time imaging of lymphatic propulsion by intradermal injection of a microdose of an NIR fluorophore distal to the lymphatics of interest. However, the previous analysis software [3, 4] is underdeveloped, requiring extensive time and effort to analyze a NIR image sequence. In this paper, we develop a number of image processing techniques to automate the data analysis workflow, including an object tracking algorithm to stabilize the subject and remove the motion artifacts, an image representation named the flow map to characterize lymphatic flow more reliably, and an automatic algorithm to compute lymph velocity and frequency of propulsion. By integrating all these techniques into a system, the analysis workflow significantly reduces the amount of required user interaction and improves the reliability of the measurement.
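As a sketch of the stabilization step under stated assumptions (pure translation, scikit-image phase correlation): estimate each frame's drift against the first frame and shift it back before computing flow maps.

    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def stabilize(frames):
        """frames: (T, H, W) NIR sequence -> motion-corrected stack."""
        ref = frames[0]
        out = [ref]
        for f in frames[1:]:
            drift, _, _ = phase_cross_correlation(ref, f, upsample_factor=10)
            out.append(nd_shift(f, drift))   # undo the estimated (dy, dx) drift
        return np.stack(out)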
Ultrasound Assessment of Human Meniscus.
Viren, Tuomas; Honkanen, Juuso T; Danso, Elvis K; Rieppo, Lassi; Korhonen, Rami K; Töyräs, Juha
2017-09-01
The aim of the present study was to evaluate the applicability of ultrasound imaging to quantitative assessment of the human meniscus in vitro. Meniscus samples (n = 26) were harvested from 13 knee joints of non-arthritic human cadavers. Subsequently, three locations (anterior, center and posterior) from each meniscus were imaged with two ultrasound transducers (frequencies 9 and 40 MHz), and quantitative ultrasound parameters were determined. Furthermore, partial-least-squares regression analysis was applied to the ultrasound signal to determine the relations between ultrasound scattering and meniscus integrity. Significant correlations between measured and predicted meniscus compositions and mechanical properties were obtained (R^2 = 0.38-0.69, p < 0.05). The relationship between conventional ultrasound parameters and the integrity of the meniscus was weaker. To conclude, ultrasound imaging exhibited potential for evaluation of meniscus integrity. Higher ultrasound frequency combined with multivariate analysis of ultrasound backscattering was found to be the most sensitive for evaluation of meniscus integrity. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
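A sketch of the multivariate step with scikit-learn; the placeholder arrays below stand in for the backscatter spectra and reference measurements, which are not reproduced here:

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X = rng.random((26, 200))      # placeholder: per-sample backscatter spectra
    y = rng.random(26)             # placeholder: measured mechanical property

    pls = PLSRegression(n_components=5)
    y_hat = cross_val_predict(pls, X, y, cv=13).ravel()  # cross-validated predictions
    r2 = np.corrcoef(y, y_hat)[0, 1] ** 2                # cf. R^2 = 0.38-0.69 reported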
Toshiba TDF-500 High Resolution Viewing And Analysis System
NASA Astrophysics Data System (ADS)
Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.
1988-06-01
A high-resolution, operator-interactive medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high-resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator-interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40 Hz frame and 80 Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high-performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the instantaneous high memory-bandwidth requirements of the multiple viewers and array processors, an ultra-fast memory system is used. This memory system has a bandwidth capability of 400 MB/sec and a total capacity of 256 MB. This bandwidth is more than adequate to support several high-resolution CRTs and also the fast processing unit. This fully integrated approach allows effective real-time image processing. The integrated design of the viewing system, memory system and array processor is key to the imaging system, whose architecture this paper describes.
NASA Astrophysics Data System (ADS)
Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.
2012-05-01
The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of multi-layered aeronautic composite materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more particular investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.
Picosecond imaging of signal propagation in integrated circuits
NASA Astrophysics Data System (ADS)
Frohmann, Sven; Dietz, Enrico; Dittrich, Helmar; Hübers, Heinz-Wilhelm
2017-04-01
Optical analysis of integrated circuits (ICs) is a powerful tool for analyzing security functions that are implemented in an IC. We present a photon emission microscope for picosecond imaging of hot-carrier luminescence in ICs in the near-infrared spectral range from 900 to 1700 nm. It allows for semi-invasive signal tracking in fully operational ICs at the gate or transistor level with a timing precision of approximately 6 ps. The capabilities of the microscope are demonstrated by imaging the operation of two ICs fabricated in 180 nm and 60 nm process technologies.
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
NASA Astrophysics Data System (ADS)
Guldner, Ian H.; Yang, Lin; Cowdrick, Kyle R.; Wang, Qingfei; Alvarez Barrios, Wendy V.; Zellmer, Victoria R.; Zhang, Yizhe; Host, Misha; Liu, Fang; Chen, Danny Z.; Zhang, Siyuan
2016-04-01
Metastatic microenvironments are spatially and compositionally heterogeneous. This seemingly stochastic heterogeneity provides researchers great challenges in elucidating factors that determine metastatic outgrowth. Herein, we develop and implement an integrative platform that will enable researchers to obtain novel insights from intricate metastatic landscapes. Our two-segment platform begins with whole tissue clearing, staining, and imaging to globally delineate metastatic landscape heterogeneity with spatial and molecular resolution. The second segment of our platform applies our custom-developed SMART 3D (Spatial filtering-based background removal and Multi-chAnnel forest classifiers-based 3D ReconsTruction), a multi-faceted image analysis pipeline, permitting quantitative interrogation of functional implications of heterogeneous metastatic landscape constituents, from subcellular features to multicellular structures, within our large three-dimensional (3D) image datasets. Coupling whole tissue imaging of brain metastasis animal models with SMART 3D, we demonstrate the capability of our integrative pipeline to reveal and quantify volumetric and spatial aspects of brain metastasis landscapes, including diverse tumor morphology, heterogeneous proliferative indices, metastasis-associated astrogliosis, and vasculature spatial distribution. Collectively, our study demonstrates the utility of our novel integrative platform to reveal and quantify the global spatial and volumetric characteristics of the 3D metastatic landscape with unparalleled accuracy, opening new opportunities for unbiased investigation of novel biological phenomena in situ.
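The abstract names the two SMART 3D ingredients (spatial-filtering-based background removal and multi-channel forest classifiers) without implementation detail. The following Python sketch is a schematic analogue, not the authors' code: a median filter stands in for the unspecified spatial filter, and scikit-learn's random forest stands in for the multi-channel forest classifier.

```python
import numpy as np
from scipy.ndimage import median_filter
from sklearn.ensemble import RandomForestClassifier

def remove_background(volume, size=5):
    # Spatial-filtering-based background removal: subtract a median-filtered
    # estimate of the slowly varying background, clipping at zero.
    background = median_filter(volume, size=size)
    return np.clip(volume - background, 0, None)

def classify_voxels(channels, train_mask, train_labels):
    # channels: list of 3D arrays (one per stain/channel), all the same shape.
    # train_mask: boolean 3D array marking annotated voxels.
    X_train = np.stack([c[train_mask] for c in channels], axis=1)
    clf = RandomForestClassifier(n_estimators=100).fit(X_train, train_labels)
    X_all = np.stack([c.ravel() for c in channels], axis=1)
    return clf.predict(X_all).reshape(channels[0].shape)
```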
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.
2017-06-01
Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog a person's pores, as hormonal changes make the skin oilier. The problem is that people have no real assessment of the sensitivity of their skin in terms of the fluid build-up on their faces that tends to develop into acne vulgaris, and thus suffer more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, it aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.
Cardiac imaging: working towards fully-automated machine analysis & interpretation.
Slomka, Piotr J; Dey, Damini; Sitek, Arkadiusz; Motwani, Manish; Berman, Daniel S; Germano, Guido
2017-03-01
Non-invasive imaging plays a critical role in managing patients with cardiovascular disease. Although subjective visual interpretation remains the clinical mainstay, quantitative analysis facilitates objective, evidence-based management, and advances in clinical research. This has driven developments in computing and software tools aimed at achieving fully automated image processing and quantitative analysis. In parallel, machine learning techniques have been used to rapidly integrate large amounts of clinical and quantitative imaging data to provide highly personalized individual patient-based conclusions. Areas covered: This review summarizes recent advances in automated quantitative imaging in cardiology and describes the latest techniques which incorporate machine learning principles. The review focuses on the cardiac imaging techniques which are in wide clinical use. It also discusses key issues and obstacles for these tools to become utilized in mainstream clinical practice. Expert commentary: Fully-automated processing and high-level computer interpretation of cardiac imaging are becoming a reality. Application of machine learning to the vast amounts of quantitative data generated per scan and integration with clinical data also facilitates a move to more patient-specific interpretation. These developments are unlikely to replace interpreting physicians but will provide them with highly accurate tools to detect disease, risk-stratify, and optimize patient-specific treatment. However, with each technological advance, we move further from human dependence and closer to fully-automated machine interpretation.
Integrated system for automated financial document processing
NASA Astrophysics Data System (ADS)
Hassanein, Khaled S.; Wesolkowski, Slawo; Higgins, Ray; Crabtree, Ralph; Peng, Antai
1997-02-01
A system was developed that integrates intelligent document analysis with multiple character/numeral recognition engines in order to achieve high accuracy automated financial document processing. In this system, images are accepted in both their grayscale and binary formats. A document analysis module starts by extracting essential features from the document to help identify its type (e.g. personal check, business check, etc.). These features are also utilized to conduct a full analysis of the image to determine the location of interesting zones such as the courtesy amount and the legal amount. These fields are then made available to several recognition knowledge sources such as courtesy amount recognition engines and legal amount recognition engines through a blackboard architecture. This architecture allows all the available knowledge sources to contribute incrementally and opportunistically to the solution of the given recognition query. Performance results on a test set of machine printed business checks using the integrated system are also reported.
PAVECHECK: integrating deflection and GPR for network condition surveys.
DOT National Transportation Integrated Search
2009-01-01
The PAVECHECK data integration and analysis system was developed to merge Falling Weight Deflectometer (FWD) and Ground Penetrating Radar (GPR) data together with digital video images of surface conditions. In this study Global Positioning System...
Mossotti, Victor G.; Eldeeb, A. Raouf
2000-01-01
Turcotte, 1997, and Barton and La Pointe, 1995, have identified many potential uses for the fractal dimension in physicochemical models of surface properties. The image-analysis program described in this report is an extension of the program set MORPH-I (Mossotti and others, 1998), which provided the fractal analysis of electron-microscope images of pore profiles (Mossotti and Eldeeb, 1992). MORPH-II, an integration of the modified kernel of the program MORPH-I with image calibration and editing facilities, was designed to measure the fractal dimension of the exposed surfaces of stone specimens as imaged in cross section in an electron microscope.
NASA Astrophysics Data System (ADS)
Scott, Richard; Khan, Faisal M.; Zeineh, Jack; Donovan, Michael; Fernandez, Gerardo
2015-03-01
Immunofluorescent (IF) image analysis of tissue pathology has proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as characterization of glandular architectures in discrete gland rings. However, while biomarker and glandular morphometric features have been combined as separate predictors in multivariate models, there is a lack of integrative features for biomarkers co-localized within specific morphological sub-types; for example the evaluation of androgen receptor (AR) expression within Gleason 3 glands only. In this work we propose a novel framework employing multiple techniques to generate integrated metrics of morphology and biomarker expression. We demonstrate the utility of the approaches in predicting clinical disease progression in images from 326 prostate biopsies and 373 prostatectomies. Our proposed integrative approaches yield significant improvements over existing IF image feature metrics. This work presents some of the first algorithms for generating innovative characteristics in tissue diagnostics that integrate co-localized morphometry and protein biomarker expression.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when an image is captured under poor imaging conditions or when dealing with high-bit images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain an image with high dynamic range and high contrast for human perception or interpretation. It is therefore very natural to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. Human vision psychology and perception hold that humans' perception of and response to an intensity fluctuation δu in a visual signal are weighted by the background stimulus u, instead of being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit star images captured in night scenes and on infrared images, both static images and video streams. For video streams, the algorithm reduces jitter by using the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel value mapping process depends not only on the current pixel but also on the pixels in a window surrounding it, usually 3×3. The results of the improved algorithms were evaluated by entropy analysis and visual perception analysis. The experiments showed that the improved APE algorithms improved image quality: the target and the surrounding assistant targets could be identified easily, and noise was not amplified much. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of the image and the video stream, while for high-quality images they do not degrade image quality.
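The abstract describes Adaptive Plateau Equalization only in outline. As a rough illustration of the core step, histogram equalization with the histogram clipped at a plateau value, here is a minimal Python sketch for 16-bit images; the paper's adaptive plateau selection, inter-frame plateau correction, and 3×3 neighborhood mapping are not reproduced.

```python
import numpy as np

def plateau_equalize(img, plateau):
    """Histogram equalization with the histogram clipped at a plateau value.

    Clipping limits the dominance of large uniform backgrounds (e.g. night sky)
    when mapping a 16-bit image down to an 8-bit display range.
    """
    hist, _ = np.histogram(img, bins=65536, range=(0, 65536))
    clipped = np.minimum(hist, plateau)           # clip histogram at the plateau
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf /= cdf[-1]                                # normalize CDF to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)      # map 16-bit input to 8-bit output
```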
Information Acquisition, Analysis and Integration
2016-08-03
Topics: sensing and processing theory, applications, signal processing, image and video processing, machine learning, and technology transfer. The project introduced new approaches to long-standing problems such as image and video deblurring. Cited work includes Polatkan, G. Sapiro, D. Blei, D. B. Dunson, and L. Carin, "Deep learning with hierarchical convolution factor analysis," IEEE...
NASA Astrophysics Data System (ADS)
Qie, G.; Wang, G.; Wang, M.
2016-12-01
Mixed pixels and shadows cast by buildings in urban areas impede accurate estimation and mapping of city vegetation carbon density. In most previous studies these factors are ignored, which results in underestimation of city vegetation carbon density. In this study we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. First, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce shadow effects on carbon estimation. Second, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City in Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from LSUA was then added into the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that the integrated models obtained higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
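The abstract does not give the unmixing formulation, but linear spectral unmixing is commonly posed as a nonnegative least-squares fit of endmember spectra to each pixel. A minimal sketch follows; the endmember matrix and pixel values below are hypothetical, and the paper's LSUA may differ in its constraints and endmember selection.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel):
    """Linear spectral unmixing of one pixel: solve min ||E @ f - pixel||
    subject to f >= 0, then normalize the fractions to sum to one."""
    f, _ = nnls(endmembers, pixel)   # endmembers: (bands, n_endmembers)
    s = f.sum()
    return f / s if s > 0 else f

# Hypothetical endmember reflectances (vegetation, soil, impervious) for 6 bands.
E = np.array([[0.05, 0.18, 0.30],
              [0.08, 0.20, 0.32],
              [0.04, 0.25, 0.33],
              [0.45, 0.30, 0.35],
              [0.25, 0.40, 0.38],
              [0.12, 0.35, 0.40]])
pixel = np.array([0.20, 0.22, 0.21, 0.38, 0.33, 0.28])
print(unmix_pixel(E, pixel))  # first entry is the estimated vegetation fraction
```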
Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yonggang; Thomas, Maikael A.
We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA's IRAP (Integrated Review and Analysis Program).
Quantification of fibre polymerization through Fourier space image analysis
Nekouzadeh, Ali; Genin, Guy M.
2011-01-01
Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
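As a rough sketch of the described approach, integrating a band-pass-filtered power spectral density to obtain a total-length metric, the following assumes a uniformly sampled greyscale image and omits the paper's intensity-correction procedure; the band limits f_lo and f_hi are user-chosen to match the fibre thickness of interest.

```python
import numpy as np

def fibre_length_metric(img, f_lo, f_hi):
    """Integrate the band-pass-filtered power spectral density of a greyscale
    image; the pass band selects spatial frequencies matching fibre thickness."""
    img = img.astype(np.float64) - img.mean()     # suppress the DC component
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - nx / 2, y - ny / 2)          # radial frequency in pixel units
    band = (r >= f_lo) & (r <= f_hi)              # annular band-pass mask
    return psd[band].sum() / img.size             # band energy as a length proxy
```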
An Integrative Object-Based Image Analysis Workflow for Uav Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the super-pixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of the proposed method.
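The initial over-segmentation step is standard enough to sketch. Below, scikit-image's SLIC produces the super-pixels that would form the leaves of the BPT; building and filtering the tree itself, which is the heart of the paper, is not shown.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

def initial_partition(panorama, n_segments=5000):
    """Over-segment a mosaicked RGB panorama with SLIC to form the leaves
    of a hierarchical (BPT-style) region representation."""
    labels = slic(panorama, n_segments=n_segments, compactness=10, start_label=1)
    # Per-superpixel mean colour, usable as the initial region model
    # before iterative region merging.
    means = {r.label: panorama[labels == r.label].mean(axis=0)
             for r in regionprops(labels)}
    return labels, means
```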
Automated analysis of hot spot X-ray images at the National Ignition Facility
NASA Astrophysics Data System (ADS)
Khan, S. F.; Izumi, N.; Glenn, S.; Tommasini, R.; Benedetti, L. R.; Ma, T.; Pak, A.; Kyrala, G. A.; Springer, P.; Bradley, D. K.; Town, R. P. J.
2016-11-01
At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. For implosions with temperatures above ˜4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.
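The mode decomposition of an intensity contour can be illustrated compactly. Assuming a contour already extracted as radius samples r(θ) at uniformly spaced angles, the sketch below estimates relative Fourier mode amplitudes; the Legendre-mode analysis is analogous and not shown.

```python
import numpy as np

def fourier_modes(theta, radius, n_max=4):
    """Decompose a hot-spot intensity contour r(theta) into low-order Fourier
    modes; relative mode amplitudes quantify implosion asymmetry.

    Assumes theta is uniformly sampled over [0, 2*pi)."""
    a0 = radius.mean()                            # mean contour radius
    modes = {}
    for n in range(1, n_max + 1):
        an = 2 * np.mean(radius * np.cos(n * theta))
        bn = 2 * np.mean(radius * np.sin(n * theta))
        modes[n] = np.hypot(an, bn) / a0          # relative amplitude of mode n
    return a0, modes
```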
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.
Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this problem remains challenging due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly aims to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been made between the results of some well-known techniques and the proposed method, and different metrics are implemented to evaluate performance. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics.
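The abstract specifies a PCA-plus-DWT weighting but not its exact form. A plausible minimal sketch using PyWavelets follows: PCA on the coarse approximation bands supplies blending weights, and a max-absolute rule fuses the detail coefficients; the authors' actual weighting scheme may differ.

```python
import numpy as np
import pywt

def fuse_pca_dwt(img_a, img_b, wavelet="db2", level=2):
    """Fuse two registered views: PCA weights blend the coarse approximation,
    and a max-absolute rule keeps the sharper detail coefficients."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    # PCA on the two approximation bands: blending weights come from the
    # leading eigenvector of their 2x2 covariance matrix.
    X = np.stack([ca[0].ravel(), cb[0].ravel()])
    w = np.linalg.eigh(np.cov(X))[1][:, -1]       # eigenvector of largest eigenvalue
    w = np.abs(w) / np.abs(w).sum()               # normalize to positive weights
    fused = [w[0] * ca[0] + w[1] * cb[0]]
    for da, db in zip(ca[1:], cb[1:]):            # (cH, cV, cD) tuples per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```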
Development of imaging biomarkers and generation of big data.
Alberich-Bayarri, Ángel; Hernández-Navarro, Rafael; Ruiz-Martínez, Enrique; García-Castro, Fabio; García-Juan, David; Martí-Bonmatí, Luis
2017-06-01
Several image processing algorithms have emerged to cover unmet clinical needs, but their application to radiological routine with a clear clinical impact is still not straightforward. Moving from local infrastructures to big ones, such as Medical Imaging Biobanks (millions of studies), or even Federations of Medical Imaging Biobanks (in some cases totaling hundreds of millions of studies), requires the integration of automated pipelines for fast analysis of pooled data to extract clinically relevant conclusions, not uniquely linked to medical imaging but in combination with other information such as genetic profiling. A general strategy for the development of imaging biomarkers and their integration in the cloud for quantitative management and exploitation in large databases is herein presented. The proposed platform has been successfully launched and is currently being validated among an early-adopter community of radiologists, clinicians, and medical imaging researchers.
The Research on Lunar Calibration of GF-4 Satellite
NASA Astrophysics Data System (ADS)
Qi, W.; Tan, W.
2018-04-01
Starting from the lunar observation requirements of the GF-4 satellite, the main indices, such as the resolution, the imaging field, the reflected radiance and the imaging integration time, are analyzed in combination with the imaging features and parameters of the camera. The analysis results show that lunar observation with the GF-4 satellite offers high resolution and a wide field that can image the whole moon, that the pupil radiance reflected by the moon is within the dynamic range of the camera, and that lunar image quality can be better guaranteed by setting a reasonable integration time. At the same time, the radiation transmission model of the lunar radiometric calibration is traced, and the radiometric accuracy is evaluated.
Integrated sensor biopsy device for real time tissue metabolism analysis
NASA Astrophysics Data System (ADS)
Delgado Alonso, Jesus; Lieberman, Robert A.; DiCarmine, Paul M.; Berry, David; Guzman, Narciso; Marpu, Sreekar B.
2018-02-01
Current methods for guiding cancer biopsies rely almost exclusively on images derived from X-ray, ultrasound, or magnetic resonance, which essentially characterize suspected lesions based only on tissue density. This paper presents a sensor-integrated biopsy device for in situ tissue analysis that will enable biopsy teams to measure local tissue chemistry in real time during biopsy procedures, adding a valuable new set of parameters to augment and extend conventional image guidance. A first demonstrator integrating three chemical and biochemical sensors was tested in a mouse strain that is a spontaneous breast cancer model. In all cases, the multi-sensor probe was able to discriminate between healthy tissue, the edge of the tumor, and full insertion inside the cancerous tissue, recording real-time information about tissue metabolism.
Integrated approach to multimodal media content analysis
NASA Astrophysics Data System (ADS)
Zhang, Tong; Kuo, C.-C. Jay
1999-12-01
In this work, we present a system for the automatic segmentation, indexing and retrieval of audiovisual data based on the combination of audio, visual and textual content analysis. The video stream is demultiplexed into audio, image and caption components. Then, a semantic segmentation of the audio signal based on audio content analysis is conducted, and each segment is indexed as one of the basic audio types. The image sequence is segmented into shots based on visual information analysis, and keyframes are extracted from each shot. Meanwhile, keywords are detected from the closed captions. Index tables are designed for both linear and non-linear access to the video. Experiments show that the proposed methods for multimodal media content analysis are effective and that the integrated framework achieves satisfactory results for video information filtering and retrieval.
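Shot segmentation of an image sequence is classically done by thresholding frame-to-frame histogram differences. The sketch below illustrates that general technique rather than the paper's specific detector; the bin count and threshold are assumptions.

```python
import numpy as np

def shot_boundaries(frames, threshold=0.4):
    """Detect cuts by thresholding normalized histogram differences between
    consecutive greyscale frames (values in 0..255)."""
    cuts, prev = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=64, range=(0, 256))
        hist = hist / hist.sum()                     # normalize per frame
        if prev is not None and 0.5 * np.abs(hist - prev).sum() > threshold:
            cuts.append(i)                           # large change -> shot boundary
        prev = hist
    return cuts
```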
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakubek, J.; Cejnarova, A.; Platkevic, M.
Single quantum counting pixel detectors of the Medipix type are starting to be used in various radiographic applications. Compared to standard devices for digital imaging (such as CCDs or CMOS sensors) they present significant advantages: direct conversion of radiation to an electric signal, energy sensitivity, noiseless image integration, unlimited dynamic range, and absolute linearity. In this article we describe the use of the pixel device TimePix for image accumulation gated by a late trigger signal. The technique is demonstrated on imaging coincidence instrumental neutron activation analysis (Imaging CINAA). This method allows one to determine the concentration and distribution of a certain preselected element in an inspected sample.
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Real-time MRI guidance of cardiac interventions.
Campbell-Washburn, Adrienne E; Tavallaei, Mohammad A; Pop, Mihaela; Grant, Elena K; Chubb, Henry; Rhode, Kawal; Wright, Graham A
2017-10-01
Cardiac magnetic resonance imaging (MRI) is appealing to guide complex cardiac procedures because it is ionizing radiation-free and offers flexible soft-tissue contrast. Interventional cardiac MR promises to improve existing procedures and enable new ones for complex arrhythmias, as well as congenital and structural heart disease. Guiding invasive procedures demands faster image acquisition, reconstruction and analysis, as well as intuitive intraprocedural display of imaging data. Standard cardiac MR techniques such as 3D anatomical imaging, cardiac function and flow, parameter mapping, and late-gadolinium enhancement can be used to gather valuable clinical data at various procedural stages. Rapid intraprocedural image analysis can extract and highlight critical information about interventional targets and outcomes. In some cases, real-time interactive imaging is used to provide a continuous stream of images displayed to interventionalists for dynamic device navigation. Alternatively, devices are navigated relative to a roadmap of major cardiac structures generated through fast segmentation and registration. Interventional devices can be visualized and tracked throughout a procedure with specialized imaging methods. In a clinical setting, advanced imaging must be integrated with other clinical tools and patient data. In order to perform these complex procedures, interventional cardiac MR relies on customized equipment, such as interactive imaging environments, in-room image display, audio communication, hemodynamic monitoring and recording systems, and electroanatomical mapping and ablation systems. Operating in this sophisticated environment requires coordination and planning. This review provides an overview of the imaging technology used in MRI-guided cardiac interventions. Specifically, this review outlines clinical targets, standard image acquisition and analysis tools, and the integration of these tools into clinical workflow. Level of Evidence: 1. Technical Efficacy: Stage 5. J. Magn. Reson. Imaging 2017;46:935-950.
Terahertz Technology: A Boon to Tablet Analysis
Wagh, M. P.; Sonawane, Y. H.; Joshi, O. U.
2009-01-01
The terahertz gap spans frequencies from ∼0.3 THz to ∼10 THz in the electromagnetic spectrum, between the microwave and infrared regions. Terahertz radiation is invisible to the naked eye. In comparison with x-rays, it is intrinsically safe, non-destructive and non-invasive. Terahertz spectroscopy enables 3D imaging of structures and materials, and the measurement of the unique spectral fingerprints of chemical and physical forms. Terahertz radiation is produced by dendrimer-based high-power terahertz sources and spectroscopy technologies. It resolves many of the questions left unanswered by complementary techniques, such as optical imaging, Raman and infrared spectroscopy. In the pharmaceutical industry it enables nondestructive, internal, chemical analysis of tablets, capsules, and other dosage forms. Tablet coatings are a major factor in drug bioavailability; therefore coating integrity and uniformity are of crucial importance to quality. Terahertz imaging gives unparalleled certainty about the integrity of tablet coatings and the matrix performance of tablet cores. This article demonstrates the potential of terahertz pulse imaging for the analysis of tablet coating thickness by illustrating the technique on tablets. PMID:20490288
Madden, David J.; Parks, Emily L.; Tallman, Catherine W.; Boylan, Maria A.; Hoagey, David A.; Cocjin, Sally B.; Packard, Lauren E.; Johnson, Micah A.; Chou, Ying-hui; Potter, Guy G.; Chen, Nan-kuei; Siciliano, Rachel E.; Monge, Zachary A.; Honig, Jesse A.; Diaz, Michele T.
2017-01-01
Age-related decline in fluid cognition can be characterized as a disconnection among specific brain structures, leading to a decline in functional efficiency. The potential sources of disconnection, however, are unclear. We investigated imaging measures of cerebral white matter integrity, resting-state functional connectivity, and white matter hyperintensity (WMH) volume as mediators of the relation between age and fluid cognition, in 145 healthy, community-dwelling adults 19–79 years of age. At a general level of analysis, with a single composite measure of fluid cognition and single measures of each of the three imaging modalities, age exhibited an independent influence on the cognitive and imaging measures, and the imaging variables did not mediate the age-cognition relation. At a more specific level of analysis, resting-state functional connectivity of sensorimotor networks was a significant mediator of the age-related decline in executive function. These findings suggest that different levels of analysis lead to different models of neurocognitive disconnection, and that resting-state functional connectivity, in particular, may contribute to age-related decline in executive function. PMID:28389085
Rubel, Oliver; Bowen, Benjamin P
2018-01-01
Mass spectrometry imaging (MSI) is a transformative imaging method that supports the untargeted, quantitative measurement of the chemical composition and spatial heterogeneity of complex samples with broad applications in life sciences, bioenergy, and health. While MSI data can be routinely collected, its broad application is currently limited by the lack of easily accessible analysis methods that can process data of the size, volume, diversity, and complexity generated by MSI experiments. The development and application of cutting-edge analytical methods is a core driver in MSI research for new scientific discoveries, medical diagnostics, and commercial innovation. However, the lack of means to share, apply, and reproduce analyses hinders the broad application, validation, and use of novel MSI analysis methods. To address this central challenge, we introduce the Berkeley Analysis and Storage Toolkit (BASTet), a novel framework for shareable and reproducible data analysis that supports standardized data and analysis interfaces, integrated data storage, data provenance, workflow management, and a broad set of integrated tools. Based on BASTet, we describe the extension of the OpenMSI mass spectrometry imaging science gateway to enable web-based sharing, reuse, analysis, and visualization of data analyses and derived data products. We demonstrate the application of BASTet and OpenMSI in practice to identify and compare characteristic substructures in the mouse brain based on their chemical composition measured via MSI.
Future Directions for Astronomical Image Display
NASA Technical Reports Server (NTRS)
Mandel, Eric
2000-01-01
In the "Future Directions for Astronomical Image Displav" project, the Smithsonian Astrophysical Observatory (SAO) and the National Optical Astronomy Observatories (NOAO) evolved our existing image display program into fully extensible. cross-platform image display software. We also devised messaging software to support integration of image display into astronomical analysis systems. Finally, we migrated our software from reliance on Unix and the X Window System to a platform-independent architecture that utilizes the cross-platform Tcl/Tk technology.
Ma, Kevin C; Fernandez, James R; Amezcua, Lilyana; Lerner, Alex; Shiroishi, Mark S; Liu, Brent J
2015-12-01
MRI has been used to identify multiple sclerosis (MS) lesions in the brain and spinal cord visually. Integrating patient information into an electronic patient record system has become key for modern patient care in recent years. Clinically, it is also necessary to track patients' progress in longitudinal studies, in order to provide a comprehensive understanding of disease progression and response to treatment. As the amount of required data increases, there exists a need for an efficient systematic solution to store and analyze MS patient data, disease profiles, and disease tracking for both clinical and research purposes. An imaging informatics based system, called MS eFolder, has been developed as an integrated patient record system for data storage and analysis of MS patients. The eFolder system, with a DICOM-based database, includes a module for lesion contouring by radiologists, an MS lesion quantification tool to quantify MS lesion volume in 3D, brain parenchyma fraction analysis, and quantitative analysis and tracking of volume changes in longitudinal studies. Patient data, including MR images, have been collected retrospectively at University of Southern California Medical Center (USC) and Los Angeles County Hospital (LAC). The MS eFolder utilizes web-based components, such as a browser-based graphical user interface (GUI) and a web-based database. The eFolder database stores patient clinical data (demographics, MS disease history, family history, etc.), MR imaging-related data found in DICOM headers, and lesion quantification results. Lesion quantification results are derived from radiologists' contours on brain MRI studies and quantified into 3-dimensional volumes and locations. Quantified results of white matter lesions are integrated into a structured report based on the DICOM-SR protocol and templates. The user interface displays patient clinical information and original MR images, and allows viewing of structured reports of quantified results. The GUI also includes a data mining tool to handle unique search queries for MS. System workflow and dataflow steps have been designed based on the IHE post-processing workflow profile, including workflow process tracking, MS lesion contouring and quantification of MR images at a post-processing workstation, and storage of quantitative results as DICOM-SR in a DICOM-based storage system. The web-based GUI is designed to display zero-footprint DICOM web-accessible data objects (WADO) and the SR objects. The MS eFolder system has been designed and developed as an integrated data storage and mining solution for both clinical and research environments, while providing unique features such as quantitative lesion analysis and disease tracking over a longitudinal study. The comprehensive integrated image and clinical database provided by MS eFolder offers a platform for treatment assessment, outcomes analysis and decision support. The proposed system serves as a platform for future quantitative analysis derived automatically from CAD algorithms that can also be integrated within the system for individual disease tracking and future MS-related research. Ultimately the eFolder provides a decision-support infrastructure that can eventually be used as add-on value to the overall electronic medical record.
LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites
NASA Technical Reports Server (NTRS)
Wukelic, G. E. (Principal Investigator)
1983-01-01
No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.
Functional optical coherence tomography for live dynamic analysis of mouse embryonic cardiogenesis
NASA Astrophysics Data System (ADS)
Wang, Shang; Lopez, Andrew L.; Larina, Irina V.
2018-02-01
Blood flow, heart contraction, and tissue stiffness are important regulators of cardiac morphogenesis and function during embryonic development. Defining how these factors are integrated is critically important to advance prevention, diagnostics, and treatment of congenital heart defects. Mammalian embryonic development takes place deep within the maternal body, which makes cardiodynamic imaging and analysis during early developmental stages in humans inaccessible. With thousands of mutant lines available and well-established genetic manipulation tools, the mouse is a great model for understanding how biomechanical factors are integrated with molecular pathways to regulate cardiac function and development. Dynamic imaging and quantitative analysis of the biomechanics of live mouse embryos have become increasingly important, which demands continuous advancement of imaging techniques and live assessment approaches. This has been one of the major drivers pushing the frontier of embryonic imaging toward better resolution, higher speed, deeper penetration, and more diverse and effective contrasts. Optical coherence tomography (OCT) has played a significant role in addressing such demands, and its non-labeling imaging, 3D capability, large working distance, and various functional derivatives allow OCT to cover a number of specific applications in embryonic imaging. Recently, our group has made several technical improvements in using OCT to probe the biomechanical aspects of live developing mouse embryos at early stages. These include direct volumetric structural and functional imaging of the cardiodynamics, four-dimensional quantitative Doppler imaging and analysis of the cardiac blood flow, and four-dimensional blood flow separation from the cardiac wall tissue in the beating embryonic heart. Here, we present a short review of these studies together with brief descriptions of previous work that demonstrate OCT as a valuable imaging tool for research in developmental cardiology.
Jeong, Y J; Oh, T I; Woo, E J; Kim, K J
2017-07-01
Recently, highly flexible and soft pressure distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied to reconstruct pressure distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from boundary measurements of the fabric sensors.
[Research Progress of Multi-Modal Medical Image Fusion at the Feature Level].
Zhang, Junjie; Zhou, Tao; Lu, Huiling; Wang, Huiqun
2016-04-01
Medical image fusion realizes the integration of the advantages of functional images and anatomical images. This article discusses the research progress of multi-modal medical image fusion at the feature level. We first describe the principle of medical image fusion at the feature level. We then analyze and summarize the applications of fuzzy sets, rough sets, D-S evidence theory, artificial neural networks, principal component analysis and other fusion methods in medical image fusion. Lastly, we indicate present problems and future research directions for multi-modal medical image fusion.
SIMA: Python software for analysis of dynamic fluorescence imaging data.
Kaifosh, Patrick; Zaremba, Jeffrey D; Danielson, Nathan B; Losonczy, Attila
2014-01-01
Fluorescence imaging is a powerful method for monitoring dynamic signals in the nervous system. However, analysis of dynamic fluorescence imaging data remains burdensome, in part due to the shortage of available software tools. To address this need, we have developed SIMA, an open source Python package that facilitates common analysis tasks related to fluorescence imaging. Functionality of this package includes correction of motion artifacts occurring during in vivo imaging with laser-scanning microscopy, segmentation of imaged fields into regions of interest (ROIs), and extraction of signals from the segmented ROIs. We have also developed a graphical user interface (GUI) for manual editing of the automatically segmented ROIs and automated registration of ROIs across multiple imaging datasets. This software has been designed with flexibility in mind to allow for future extension with different analysis methods and potential integration with other packages. Software, documentation, and source code for the SIMA package and ROI Buddy GUI are freely available at http://www.losonczylab.org/sima/.
White Matter Changes in Tinnitus: Is It All Age and Hearing Loss?
Yoo, Hye Bin; De Ridder, Dirk; Vanneste, Sven
2016-02-01
Tinnitus is a condition characterized by the perception of auditory phantom sounds. It is known to result from complex interactions between auditory and nonauditory brain regions. However, previous structural imaging studies of tinnitus patients showed evidence of significant white matter changes caused by hearing loss that are positively correlated with aging. The current study focused on which aspects of tinnitus pathology affect white matter integrity the most. We used the diffusion tensor imaging technique to acquire images that have higher contrast in brain white matter, and analyzed how white matter is influenced by tinnitus-related factors using voxel-based methods, region-of-interest analysis, and deterministic tractography. As a result, white matter integrity in chronic tinnitus patients was both directly affected by age and also mediated by hearing loss. The most important changes in white matter were found bilaterally in the anterior corona radiata, the anterior corpus callosum, and the bilateral sagittal strata. In the tractography analysis, white matter integrity values in tracts of the right parahippocampus were correlated with subjective tinnitus loudness.
Integration of Optical Coherence Tomography Scan Patterns to Augment Clinical Data Suite
NASA Technical Reports Server (NTRS)
Mason, S.; Patel, N.; Van Baalen, M.; Tarver, W.; Otto, C.; Samuels, B.; Koslovsky, M.; Schaefer, C.; Taiym, W.; Wear, M.;
2018-01-01
Vision changes identified in long-duration spaceflight astronauts have led Space Medicine at NASA to adopt a more comprehensive clinical monitoring protocol. Optical Coherence Tomography (OCT) was recently implemented at NASA, including on board the International Space Station in 2013. NASA is collaborating with Heidelberg Engineering to increase the fidelity of the current OCT data set by integrating the traditional circumpapillary OCT image with radial and horizontal block images at the optic nerve head. The retinal nerve fiber layer was segmented by two experienced individuals. Intra-rater (N=4 subjects, 70 images) and inter-rater (N=4 subjects, 221 images) agreement analyses were performed. The results of this analysis and the potential benefits will be presented.
Analysis of tracheid development in suppressed-growth Ponderosa Pine using the FPL ring profiler
C. Tim Scott; David W. Vahey
2012-01-01
The Ring Profiler was developed to examine the cross-sectional morphology of wood tracheids in a 12.5-mm core sample. The instrument integrates a specially designed staging apparatus with an optical imaging system to obtain high-contrast, high-resolution images containing about 200-500 tracheids. These images are further enhanced and analyzed to extract tracheid cross-...
Integrated software for the detection of epileptogenic zones in refractory epilepsy.
Mottini, Alejandro; Miceli, Franco; Albin, Germán; Nuñez, Margarita; Ferrándo, Rodolfo; Aguerrebere, Cecilia; Fernandez, Alicia
2010-01-01
In this paper we present an integrated software package designed to help nuclear medicine physicians detect epileptogenic zones (EZ) by means of ictal-interictal SPECT and MR images. The tool was designed to be flexible, user-friendly and efficient. A novel detection method (a-contrario) was included alongside the classical detection method (subtraction analysis). The software's performance was evaluated with two separate sets of validation studies: visual interpretation of 12 patient images by an experienced observer, and objective analysis of virtual brain phantom experiments by proposed numerical observers. Our results support the potential use of the proposed software to help nuclear medicine physicians detect EZ in clinical practice.
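The classical subtraction method named in this abstract has a simple core: normalize the coregistered ictal and interictal SPECT volumes, subtract them, and keep voxels whose difference exceeds a z-score threshold. A minimal NumPy sketch of that idea (our illustration, not the authors' implementation, and omitting the required coregistration step):

    import numpy as np

    def subtraction_map(ictal, interictal, z_thresh=2.0):
        """Highlight candidate epileptogenic zones by normalized subtraction.

        ictal, interictal: coregistered 3D SPECT volumes (NumPy arrays).
        Returns a binary mask of voxels whose normalized difference exceeds
        z_thresh standard deviations of the difference image.
        """
        # Normalize each volume to its mean intensity to remove global
        # count-rate differences between the two acquisitions.
        ictal_n = ictal / ictal.mean()
        inter_n = interictal / interictal.mean()

        diff = ictal_n - inter_n
        z = (diff - diff.mean()) / diff.std()
        return z > z_thresh

    # Example with synthetic volumes: a small hyperperfused focus.
    rng = np.random.default_rng(0)
    interictal = rng.normal(100, 5, (32, 32, 32))
    ictal = interictal + rng.normal(0, 5, (32, 32, 32))
    ictal[10:14, 10:14, 10:14] += 40          # simulated ictal hyperperfusion
    print(subtraction_map(ictal, interictal, z_thresh=4.0).sum(),
          "suprathreshold voxels")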
NASA Astrophysics Data System (ADS)
Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.
2016-03-01
In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a 'connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.
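The opacity encoding described above amounts to mapping the percentage longitudinal change in connectivity density onto an alpha value in [0, 1]. A hedged sketch of such a mapping (our illustration; the saturation parameter max_pct is an assumption, not from the paper):

    import numpy as np

    def link_opacity(density_t0, density_t1, max_pct=100.0):
        """Map percentage change in connectivity density to opacity in [0, 1].

        density_t0, density_t1: region-pair connectivity densities at the
        baseline and follow-up time points (NumPy arrays of equal shape).
        Changes of max_pct percent or more are drawn fully opaque.
        """
        pct_change = 100.0 * np.abs(density_t1 - density_t0) / np.maximum(density_t0, 1e-12)
        return np.clip(pct_change / max_pct, 0.0, 1.0)

    # Two fictitious region pairs: a 25% drop and a 150% increase.
    print(link_opacity(np.array([0.40, 0.10]), np.array([0.30, 0.25])))
    # -> [0.25 1.  ]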
Jahanshad, Neda; Kochunov, Peter; Sprooten, Emma; Mandl, René C.; Nichols, Thomas E.; Almassy, Laura; Blangero, John; Brouwer, Rachel M.; Curran, Joanne E.; de Zubicaray, Greig I.; Duggirala, Ravi; Fox, Peter T.; Hong, L. Elliot; Landman, Bennett A.; Martin, Nicholas G.; McMahon, Katie L.; Medland, Sarah E.; Mitchell, Braxton D.; Olvera, Rene L.; Peterson, Charles P.; Starr, John M.; Sussmann, Jessika E.; Toga, Arthur W.; Wardlaw, Joanna M.; Wright, Margaret J.; Hulshoff Pol, Hilleke E.; Bastin, Mark E.; McIntosh, Andrew M.; Deary, Ian J.; Thompson, Paul M.; Glahn, David C.
2013-01-01
The ENIGMA (Enhancing NeuroImaging Genetics through Meta-Analysis) Consortium was set up to analyze brain measures and genotypes from multiple sites across the world to improve the power to detect genetic variants that influence the brain. Diffusion tensor imaging (DTI) yields quantitative measures sensitive to brain development and degeneration, and some common genetic variants may be associated with white matter integrity or connectivity. DTI measures, such as the fractional anisotropy (FA) of water diffusion, may be useful for identifying genetic variants that influence brain microstructure. However, genome-wide association studies (GWAS) require large populations to obtain sufficient power to detect and replicate significant effects, motivating a multi-site consortium effort. As part of an ENIGMA–DTI working group, we analyzed high-resolution FA images from multiple imaging sites across North America, Australia, and Europe to address the challenge of harmonizing imaging data collected at multiple sites. Four hundred images of healthy adults aged 18–85 from four sites were used to create a template and corresponding skeletonized FA image as a common reference space. Using twin and pedigree samples of different ethnicities, we applied our common template to evaluate the heritability of tract-derived FA measures. We show that our template is reliable for integrating multiple datasets by combining results through meta-analysis and unifying the data through exploratory mega-analyses. Our results may help prioritize regions of the FA map that are consistently influenced by additive genetic factors for future genetic discovery studies. Protocols and templates are publicly available at http://enigma.loni.ucla.edu/ongoing/dti-working-group/. PMID:23629049
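Combining per-site effect estimates through meta-analysis, as described above, is conventionally an inverse-variance weighting of site-level results. A minimal fixed-effects sketch (the generic textbook method, not the ENIGMA pipeline; the numbers are hypothetical):

    import numpy as np

    def fixed_effects_meta(betas, ses):
        """Inverse-variance fixed-effects meta-analysis of per-site estimates.

        betas: per-site effect sizes for one FA-skeleton voxel/ROI.
        ses:   corresponding standard errors.
        Returns the pooled effect and its standard error.
        """
        w = 1.0 / np.asarray(ses) ** 2          # inverse-variance weights
        beta_pooled = np.sum(w * betas) / np.sum(w)
        se_pooled = np.sqrt(1.0 / np.sum(w))
        return beta_pooled, se_pooled

    # Four hypothetical sites reporting a genetic effect on skeleton FA.
    print(fixed_effects_meta([0.012, 0.008, 0.015, 0.010],
                             [0.004, 0.006, 0.005, 0.004]))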
An integrated content and metadata based retrieval system for art.
Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James
2004-03-01
A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
ERIC Educational Resources Information Center
Tataw, Oben Moses
2013-01-01
Interdisciplinary research in computer science requires the development of computational techniques for practical application in different domains. This usually requires careful integration of different areas of technical expertise. This dissertation presents image and time series analysis algorithms, with practical interdisciplinary applications…
NASA Technical Reports Server (NTRS)
Thompson, Karl E.; Rust, David M.; Chen, Hua
1995-01-01
A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10(exp 6) electrons per pixel), simultaneous readout and display of both images at 10(exp 6) pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.
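Producing a polarization map from the two interleaved arrays reduces, per pixel, to the normalized difference of the two channels. A NumPy sketch of that standard two-channel computation (our illustration of the principle, not the IDID's on-chip circuitry):

    import numpy as np

    def polarization_map(i_parallel, i_perpendicular):
        """Per-pixel normalized polarization from two analyzer channels.

        i_parallel, i_perpendicular: co-registered images from the two
        interleaved arrays (one per polarization plane). Returns values in
        [-1, 1]; 0 means unpolarized light at that pixel.
        """
        total = i_parallel + i_perpendicular
        return (i_parallel - i_perpendicular) / np.maximum(total, 1e-12)

    rng = np.random.default_rng(1)
    a = rng.uniform(100, 200, (512, 1024))   # channel 1 (one plane)
    b = a * 0.8                              # channel 2: ~11% net polarization
    print(polarization_map(a, b).mean())     # ~ (1-0.8)/(1+0.8) = 0.111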
MOSAIC: Software for creating mosaics from collections of images
NASA Technical Reports Server (NTRS)
Varosi, F.; Gezari, D. Y.
1992-01-01
We have developed a powerful, versatile image processing and analysis software package called MOSAIC, designed specifically for the manipulation of digital astronomical image data obtained with (but not limited to) two-dimensional array detectors. The software package is implemented using the Interactive Data Language (IDL), and incorporates new methods for processing, calibration, analysis, and visualization of astronomical image data, stressing effective methods for the creation of mosaic images from collections of individual exposures, while at the same time preserving the photometric integrity of the original data. Since IDL is available on many computers, the MOSAIC software runs on most UNIX and VAX workstations with the X-Windows or SunView graphics interface.
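A mosaicking scheme that preserves photometric integrity typically accumulates flux and coverage separately and divides at the end, so overlapping exposures are averaged rather than summed. A compact NumPy sketch of that accumulation idea (our illustration; MOSAIC itself is written in IDL):

    import numpy as np

    def build_mosaic(shape, exposures):
        """Average overlapping exposures into a mosaic without altering flux.

        shape:     (ny, nx) of the output mosaic grid.
        exposures: iterable of (image, (y0, x0)) placements on the grid.
        """
        flux = np.zeros(shape)
        coverage = np.zeros(shape)
        for img, (y0, x0) in exposures:
            ny, nx = img.shape
            flux[y0:y0 + ny, x0:x0 + nx] += img
            coverage[y0:y0 + ny, x0:x0 + nx] += 1.0
        # Pixels never covered are marked NaN; overlaps are averaged.
        return np.where(coverage > 0, flux / np.maximum(coverage, 1), np.nan)

    # Two 4x4 frames overlapping by two columns; the overlap is averaged.
    frame = np.full((4, 4), 10.0)
    mosaic = build_mosaic((4, 6), [(frame, (0, 0)), (frame, (0, 2))])
    print(mosaic)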
Applications of magnetic resonance image segmentation in neurology
NASA Astrophysics Data System (ADS)
Heinonen, Tomi; Lahtinen, Antti J.; Dastidar, Prasun; Ryymin, Pertti; Laarne, Paeivi; Malmivuo, Jaakko; Laasonen, Erkki; Frey, Harry; Eskola, Hannu
1999-05-01
After the introduction of digital imaging devices in medicine, computerized tissue recognition and classification have become important in research and clinical applications. Segmented data can be applied in numerous research fields, including volumetric analysis of particular tissues and structures, construction of anatomical models, 3D visualization, and multimodal visualization, making segmentation essential in modern image analysis. In this research project, several PC-based software tools were developed to segment medical images, to visualize raw and segmented images in 3D, and to produce EEG brain maps in which MR images and EEG signals were integrated. The software package was tested and validated in numerous clinical research projects in a hospital environment.
Demonstrating Change with Astronaut Photography Using Object Based Image Analysis
NASA Technical Reports Server (NTRS)
Hollier, Andi; Jagge, Amy
2017-01-01
Every day, hundreds of images of Earth flood the Crew Earth Observations database as astronauts use hand held digital cameras to capture spectacular frames from the International Space Station. The variety of resolutions and perspectives provide a template for assessing land cover change over decades. We will focus on urban growth in the second fastest growing city in the nation, Houston, TX, using Object-Based Image Analysis. This research will contribute to the land change science community, integrated resource planning, and monitoring of the rapid rate of urban sprawl.
NASA Astrophysics Data System (ADS)
Pansing, Craig W.; Hua, Hong; Rolland, Jannick P.
2005-08-01
Head-mounted display (HMD) technologies find a variety of applications in the field of 3D virtual and augmented environments, 3D scientific visualization, as well as wearable displays. While most of the current HMDs use head pose to approximate line of sight, we propose to investigate approaches and designs for integrating eye tracking capability into HMDs from a low-level system design perspective and to explore schemes for optimizing system performance. In this paper, we particularly propose to optimize the illumination scheme, which is a critical component in designing an eye tracking-HMD (ET-HMD) integrated system. An optimal design can improve not only eye tracking accuracy, but also robustness. Using LightTools, we present the simulation of a complete eye illumination and imaging system using an eye model along with multiple near infrared LED (IRLED) illuminators and imaging optics, showing the irradiance variation of the different eye structures. The simulation of dark pupil effects along with multiple 1st-order Purkinje images will be presented. A parametric analysis is performed to investigate the relationships between the IRLED configurations and the irradiance distribution at the eye, and a set of optimal configuration parameters is recommended. The analysis will be further refined by actual eye image acquisition and processing.
NASA Astrophysics Data System (ADS)
Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery
2013-04-01
Karst areas occupy about 14% of the world's land. Karst terranes of different origins create difficult conditions for building, industrial activity and tourism, and are a source of heightened danger for the environment. Mapping of karst (sinkhole) hazards will clearly be one of the most significant problems of engineering geophysics in the 21st century. Taking into account the complexity of geological media, unfavourable environments and the known ambiguity of geophysical data analysis, examination with a single geophysical method may be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing, such as denoising, enhancement and discrimination of signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, surface or underground observations). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling to compute the various geophysical effects that characterize karst phenomena. The second phase develops signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase integrates these methods to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of this integrated methodology for geophysical field examination will enable recognition of karst terranes even at small signal-to-noise ratios in complex geological environments. For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters. This set of parameters serves as a signature of the image and is used to discriminate images containing a karst cavity (K) from images that do not contain karst (N). The algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of its coherency directions. As a result of the previous steps we obtain two sets, K and N, of signature vectors for images from sections containing a karst cavity and non-karst subsurface, respectively.
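The signature described above, a histogram of local coherency directions reduced to a fixed-length vector, can be approximated with plain gradient orientations. A simplified sketch (gradient orientation is our stand-in for the authors' coherency measure):

    import numpy as np

    def orientation_signature(image, n_bins=16):
        """Histogram of local gradient orientations as an image signature.

        Each pixel votes for its gradient direction, weighted by gradient
        magnitude; the normalized histogram serves as a fixed-length
        signature vector for discriminating K from N images.
        """
        gy, gx = np.gradient(image.astype(float))
        angles = np.arctan2(gy, gx)                  # in (-pi, pi]
        weights = np.hypot(gx, gy)
        hist, _ = np.histogram(angles, bins=n_bins,
                               range=(-np.pi, np.pi), weights=weights)
        total = hist.sum()
        return hist / total if total > 0 else hist

    # A 'layered' section concentrates its signature in two bins.
    layers = np.tile(np.sin(np.linspace(0, 8 * np.pi, 64))[:, None], (1, 64))
    print(orientation_signature(layers).round(2))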
Integration of XNAT/PACS, DICOM, and Research Software for Automated Multi-modal Image Analysis.
Gao, Yurui; Burns, Scott S; Lauzon, Carolyn B; Fong, Andrew E; James, Terry A; Lubar, Joel F; Thatcher, Robert W; Twillie, David A; Wirt, Michael D; Zola, Marc A; Logan, Bret W; Anderson, Adam W; Landman, Bennett A
2013-03-29
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Integration of XNAT/PACS, DICOM, and research software for automated multi-modal image analysis
NASA Astrophysics Data System (ADS)
Gao, Yurui; Burns, Scott S.; Lauzon, Carolyn B.; Fong, Andrew E.; James, Terry A.; Lubar, Joel F.; Thatcher, Robert W.; Twillie, David A.; Wirt, Michael D.; Zola, Marc A.; Logan, Bret W.; Anderson, Adam W.; Landman, Bennett A.
2013-03-01
Traumatic brain injury (TBI) is an increasingly important public health concern. While there are several promising avenues of intervention, clinical assessments are relatively coarse and comparative quantitative analysis is an emerging field. Imaging data provide potentially useful information for evaluating TBI across functional, structural, and microstructural phenotypes. Integration and management of disparate data types are major obstacles. In a multi-institution collaboration, we are collecting electroencephalography (EEG), structural MRI, diffusion tensor MRI (DTI), and single photon emission computed tomography (SPECT) from a large cohort of US Army service members exposed to mild or moderate TBI who are undergoing experimental treatment. We have constructed a robust informatics backbone for this project centered on the DICOM standard and eXtensible Neuroimaging Archive Toolkit (XNAT) server. Herein, we discuss (1) optimization of data transmission, validation and storage, (2) quality assurance and workflow management, and (3) integration of high performance computing with research software.
Breen, Michael S; Uhlmann, Anne; Ozcan, Sureyya; Chan, Man; Pinto, Dalila; Bahn, Sabine; Stein, Dan J
2017-03-02
Methamphetamine-associated psychosis (MAP) involves widespread neurocognitive and molecular deficits; however, accurate diagnosis remains challenging. Integrating relationships between biological markers, brain imaging and clinical parameters may provide an improved mechanistic understanding of MAP that could in turn drive the development of better diagnostics and treatment approaches. We applied selected reaction monitoring (SRM)-based proteomics, profiling 43 proteins in serum previously implicated in the etiology of major psychiatric disorders, and integrated these data with diffusion tensor imaging (DTI) and psychometric measurements from patients diagnosed with MAP (N = 12), methamphetamine dependence without psychosis (MA; N = 14) and healthy controls (N = 16). Protein analysis identified changes in APOC2 and APOH, which differed significantly in MAP compared to MA and controls. DTI analysis indicated widespread increases in mean diffusivity and radial diffusivity delineating extensive loss of white matter integrity and axon demyelination in MAP. Upon integration, several co-linear relationships between serum proteins and DTI measures reported in healthy controls were disrupted in MA and MAP groups; these involved areas of the brain critical for memory and social emotional processing. These findings suggest that serum proteomics and DTI are sensitive measures for detecting pathophysiological changes in MAP and describe a potential diagnostic fingerprint of the disorder.
The integration of a LANDSAT analysis capability with a geographic information system
NASA Technical Reports Server (NTRS)
Nordstrand, E. A.
1981-01-01
The integration of LANDSAT data was achieved by developing a flexible, compatible analysis tool and by using an existing data base to select the usable data from a LANDSAT analysis. The software package allows manipulation of grid cell data, plus the flexibility to let the user include FORTRAN statements for special functions. Using this combination of capabilities, the user can classify a LANDSAT image and then selectively merge the results with other data that may exist for the study area.
Identification of suitable fundus images using automated quality assessment methods.
Şevik, Uğur; Köse, Cemal; Berber, Tolga; Erdöl, Hidayet
2014-04-01
Retinal image quality assessment (IQA) is a crucial process for automated retinal image analysis systems to obtain an accurate and successful diagnosis of retinal diseases. Consequently, the first step in a good retinal image analysis system is measuring the quality of the input image. We present an approach for finding medically suitable retinal images for retinal diagnosis. We used a three-class grading system that consists of good, bad, and outlier classes. We created a retinal image quality dataset with a total of 216 consecutive images called the Diabetic Retinopathy Image Database. We identified the suitable images within the good images for automatic retinal image analysis systems using a novel method. Subsequently, we evaluated our retinal image suitability approach using the Digital Retinal Images for Vessel Extraction and Standard Diabetic Retinopathy Database Calibration level 1 public datasets. The results were measured through the F1 metric, which is a harmonic mean of precision and recall metrics. The highest F1 scores of the IQA tests were 99.60%, 96.50%, and 85.00% for good, bad, and outlier classes, respectively. Additionally, the accuracy of our suitable image detection approach was 98.08%. Our approach can be integrated into any automatic retinal analysis system with sufficient performance scores.
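Since the evaluation hinges on the F1 metric, the harmonic mean of precision and recall, a one-function sketch makes the computation explicit (the counts below are hypothetical):

    def f1_score(tp, fp, fn):
        """F1 = harmonic mean of precision and recall, from raw counts."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    # Hypothetical counts for a 'good quality' class.
    print(round(f1_score(tp=248, fp=1, fn=1), 4))  # -> 0.996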
Porcel, José M; Hernández, Paula; Martínez-Alonso, Montserrat; Bielsa, Silvia; Salud, Antonieta
2015-02-01
The role of fluorodeoxyglucose (FDG)-PET imaging for diagnosing malignant pleural effusions is not well defined. The aim of this study was to summarize the evidence for its use in ruling in or out the malignant origin of a pleural effusion or thickening. A meta-analysis was conducted of diagnostic accuracy studies published in the Cochrane Library, PubMed, and Embase (inception to June 2013) without language restrictions. Two investigators selected studies that had evaluated the performance of FDG-PET imaging in patients with pleural effusions or thickening, using pleural cytopathology or histopathology as the reference standard for malignancy. Subgroup analyses were conducted according to FDG-PET imaging interpretation (qualitative or semiquantitative), PET imaging equipment (PET vs integrated PET-CT imaging), and/or target population (known lung cancer or malignant pleural mesothelioma). Study quality was assessed using Quality Assessment of Diagnostic Accuracy Studies-2. We used a bivariate random-effects model for the analysis and pooling of diagnostic performance measures across studies. Fourteen non-high risk of bias studies, comprising 407 patients with malignant and 232 with benign pleural conditions, met the inclusion criteria. Semiquantitative PET imaging readings had a significantly lower sensitivity for diagnosing malignant effusions than visual assessments (82% vs 91%; P = .026). The pooled test characteristics of integrated PET-CT imaging systems using semiquantitative interpretations for identifying malignant effusions were: sensitivity, 81%; specificity, 74%; positive likelihood ratio (LR), 3.22; negative LR, 0.26; and area under the curve, 0.838. Resultant data were heterogeneous, and spectrum bias should be considered when appraising FDG-PET imaging operating characteristics. The moderate accuracy of PET-CT imaging using semiquantitative readings precludes its routine recommendation for discriminating malignant from benign pleural effusions.
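The pooled likelihood ratios reported above follow from sensitivity and specificity by the standard definitions, though pooled LRs from a bivariate model need not exactly equal the ratio of the pooled sensitivity and specificity. A small sketch of the formulas:

    def likelihood_ratios(sensitivity, specificity):
        """Positive and negative likelihood ratios for a diagnostic test."""
        lr_pos = sensitivity / (1.0 - specificity)
        lr_neg = (1.0 - sensitivity) / specificity
        return lr_pos, lr_neg

    # The paper's pooled semiquantitative PET-CT estimates.
    lr_pos, lr_neg = likelihood_ratios(0.81, 0.74)
    print(f"LR+ = {lr_pos:.2f}, LR- = {lr_neg:.2f}")  # ~3.1 and ~0.26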
Young, Nelson; Chang, Zhan; Wishart, David S
2004-04-12
GelScape is a web-based tool that permits facile, interactive annotation, comparison, manipulation and storage of protein gel images. It uses Java applet-servlet technology to allow rapid, remote image handling and image processing in a platform-independent manner. It supports many of the features found in commercial, stand-alone gel analysis software including spot annotation, spot integration, gel warping, image resizing, HTML image mapping, image overlaying as well as the storage of gel image and gel annotation data in compliance with Federated Gel Database requirements.
GLO-Roots: an imaging platform enabling multidimensional characterization of soil-grown root systems
Rellán-Álvarez, Rubén; Lobet, Guillaume; Lindner, Heike; Pradier, Pierre-Luc; Sebastian, Jose; Yee, Muh-Ching; Geng, Yu; Trontin, Charlotte; LaRue, Therese; Schrager-Lavelle, Amanda; Haney, Cara H; Nieu, Rita; Maloof, Julin; Vogel, John P; Dinneny, José R
2015-01-01
Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes. DOI: http://dx.doi.org/10.7554/eLife.07597.001 PMID:26287479
Image analysis and modeling in medical image computing. Recent developments and advances.
Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T
2012-01-01
Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the grade of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements in clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models in the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images such as radiographic images, dual-energy CT images, MR images, diffusion tensor images and microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body. Hence, model-based image computing methods are important tools to improve medical diagnostics and patient treatment in the future.
Integrated light and scanning electron microscopy of GFP-expressing cells.
Peddie, Christopher J; Liv, Nalan; Hoogenboom, Jacob P; Collinson, Lucy M
2014-01-01
Integration of light and electron microscopes provides imaging tools in which fluorescent proteins can be localized to cellular structures with a high level of precision. However, until recently, there were few methods that could deliver specimens with sufficient fluorescent signal and electron contrast for dual imaging without intermediate staining steps. Here, we report protocols that preserve green fluorescent protein (GFP) in whole cells and in ultrathin sections of resin-embedded cells, with membrane contrast for integrated imaging. Critically, GFP is maintained in a stable and active state within the vacuum of an integrated light and scanning electron microscope. For light microscopists, additional structural information gives context to fluorescent protein expression in whole cells, illustrated here by analysis of filopodia and focal adhesions in Madin Darby canine kidney cells expressing GFP-Paxillin. For electron microscopists, GFP highlights the proteins of interest within the architectural space of the cell, illustrated here by localization of the conical lipid diacylglycerol to cellular membranes. © 2014 Elsevier Inc. All rights reserved.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed multi-modality image presentation software for the display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goals of this pilot project are to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and a customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities, and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kugland, N. L.; Ryutov, D. D.; Plechaty, C.
2012-10-15
Proton imaging is commonly used to reveal the electric and magnetic fields that are found in high energy density plasmas. Presented here is an analysis of this technique that is directed towards developing additional insight into the underlying physics. This approach considers: formation of images in the limits of weak and strong intensity variations; caustic formation and structure; image inversion to obtain line-integrated field characteristics; direct relations between images and electric or magnetic field structures in a plasma; imaging of sharp features such as Debye sheaths and shocks. Limitations on spatial and temporal resolution are assessed, and similarities with optical shadowgraphy are noted. Synthetic proton images are presented to illustrate the analysis. These results will be useful for quantitatively analyzing experimental proton imaging data and verifying numerical codes.
Automatic classification of spectral units in the Aristarchus plateau
NASA Astrophysics Data System (ADS)
Erard, S.; Le Mouelic, S.; Langevin, Y.
1999-09-01
A reduction scheme has recently been proposed for the NIR images of Clementine (Le Mouelic et al., JGR 1999). This reduction has been used to build an integrated UV-vis-NIR image cube of the Aristarchus region, from which compositional and maturity variations can be studied (Pinet et al., LPSC 1999). We will present an analysis of this image cube, providing a classification into spectral types and spectral units. The image cube is processed with Gmode analysis using three different data sets. Normalized spectra provide a classification based mainly on spectral slope variations (i.e. maturity and volcanic glasses); this analysis discriminates between craters plus ejecta, mare basalts, and DMD. Olivine-rich areas and the Aristarchus central peak are also recognized. Continuum-removed spectra provide a classification more related to compositional variations, which correctly identifies olivine- and pyroxene-rich areas (in Aristarchus, Krieger, Schiaparelli, ...). A third analysis uses spectral parameters related to maturity and Fe composition (reflectance, 1 µm band depth, and spectral slope) rather than intensities; it provides the most spatially consistent picture, but fails to detect Vallis Schroeteri and the DMDs. A supplementary unit, younger and rich in pyroxene, is found on the south rim of Aristarchus. In conclusion, Gmode analysis can discriminate between the different spectral types already identified with more classical methods (PCA, linear mixing, ...). No prior assumption is made about the data structure, such as the number and nature of endmembers or a linear relationship between input variables. The variability of the spectral types is intrinsically accounted for, so that the level of analysis is always restricted to meaningful limits. A complete classification should integrate several analyses based on different sets of parameters. Gmode is therefore a powerful, lightweight tool for first-look analysis of spectral imaging data. This research has been partly funded by the French Programme National de Planetologie.
Suh, Sungho; Itoh, Shinya; Aoyama, Satoshi; Kawahito, Shoji
2010-01-01
For low-noise complementary metal-oxide-semiconductor (CMOS) image sensors, reducing pixel source-follower noise is becoming very important. Column-parallel high-gain readout circuits are useful for low-noise CMOS image sensors. This paper presents column-parallel high-gain signal readout circuits, correlated multiple sampling (CMS) circuits, and their noise reduction effects. In CMS, the gain of the noise cancelling is controlled by the number of samplings. It has an effect similar to that of an amplified CDS for thermal noise, but is slightly more effective for 1/f and RTS noises. Two types of CMS, with simple integration and folding integration, are proposed. In the folding integration, the output signal swing is suppressed by negative feedback using a comparator and a one-bit D-to-A converter. The CMS circuit using the folding integration technique makes it possible to achieve a very low noise level while maintaining a wide dynamic range. The noise reduction effects of these circuits have been investigated through noise analysis and the implementation of a 1-Mpixel pinned-photodiode CMOS image sensor. Using 16 samplings, a dynamic range of 59.4 dB and a noise level of 1.9 e(-) are obtained for the simple integration CMS, and 75 dB and 2.2 e(-) for the folding integration CMS, respectively.
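The thermal-noise benefit of correlated multiple sampling comes from averaging M reset samples and M signal samples before taking their difference, which shrinks uncorrelated noise roughly as 1/sqrt(M). A toy simulation of that scaling (illustrative only; it models neither 1/f nor RTS noise, for which the abstract notes CMS does somewhat better):

    import numpy as np

    def cms_readout(signal, thermal_sigma, m_samples, rng):
        """Correlated multiple sampling: average M reset and M signal samples.

        Returns the CDS-style difference of the two averages; uncorrelated
        thermal noise in the result shrinks roughly as 1/sqrt(M).
        """
        reset = rng.normal(0.0, thermal_sigma, m_samples).mean()
        sig = rng.normal(signal, thermal_sigma, m_samples).mean()
        return sig - reset

    rng = np.random.default_rng(42)
    for m in (1, 4, 16):
        reads = [cms_readout(100.0, 2.0, m, rng) for _ in range(20000)]
        print(f"M={m:2d}  read-noise sigma ~ {np.std(reads):.3f} e-")
    # Expect ~2.83, ~1.41, ~0.71 e- (i.e. sigma * sqrt(2/M)).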
Rieckmann, Anna; Hedden, Trey; Younger, Alayna P; Sperling, Reisa A; Johnson, Keith A; Buckner, Randy L
2016-02-01
Aging-related differences in white matter integrity, the presence of amyloid plaques, and density of biomarkers indicative of dopamine functions can be detected and quantified with in vivo human imaging. The primary aim of the present study was to investigate whether these imaging-based measures constitute independent imaging biomarkers in older adults, which would speak to the hypothesis that the aging brain is characterized by multiple independent neurobiological cascades. We assessed MRI-based markers of white matter integrity and PET-based marker of dopamine transporter density and amyloid deposition in the same set of 53 clinically normal individuals (age 65-87). A multiple regression analysis demonstrated that dopamine transporter availability is predicted by white matter integrity, which was detectable even after controlling for chronological age. Further post-hoc exploration revealed that dopamine transporter availability was further associated with systolic blood pressure, mirroring the established association between cardiovascular health and white matter integrity. Dopamine transporter availability was not associated with the presence of amyloid burden. Neurobiological correlates of dopamine transporter measures in aging are therefore likely unrelated to Alzheimer's disease but are aligned with white matter integrity and cardiovascular risk. More generally, these results suggest that two common imaging markers of the aging brain that are typically investigated separately do not reflect independent neurobiological processes. Hum Brain Mapp 37:621-631, 2016. © 2015 Wiley Periodicals, Inc.
Google glass based immunochromatographic diagnostic test analysis
NASA Astrophysics Data System (ADS)
Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan
2015-03-01
Integration of optical imagers and sensors into recently emerging wearable computational devices allows for simpler and more intuitive methods of integrating biomedical imaging and medical diagnostics tasks into existing infrastructures. Here we demonstrate the ability of one such device, the Google Glass, to perform qualitative and quantitative analysis of immunochromatographic rapid diagnostic tests (RDTs) using a voice-commandable hands-free software-only interface, as an alternative to larger and more bulky desktop or handheld units. Using the built-in camera of Glass to image one or more RDTs (labeled with Quick Response (QR) codes), our Glass software application uploads the captured image and related information (e.g., user name, GPS, etc.) to our servers for remote analysis and storage. After digital analysis of the RDT images, the results are transmitted back to the originating Glass device, and made available through a website in geospatial and tabular representations. We tested this system on qualitative human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) RDTs. For qualitative HIV tests, we demonstrate successful detection and labeling (i.e., yes/no decisions) for up to 6-fold dilution of HIV samples. For quantitative measurements, we activated and imaged PSA concentrations ranging from 0 to 200 ng/mL and generated calibration curves relating the RDT line intensity values to PSA concentration. By providing automated digitization of both qualitative and quantitative test results, this wearable colorimetric diagnostic test reader platform on Google Glass can reduce operator errors caused by poor training, provide real-time spatiotemporal mapping of test results, and assist with remote monitoring of various biomedical conditions.
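Generating a calibration curve like the one described is, at its simplest, a regression of test-line intensity against known analyte concentration, inverted for unknown samples. A sketch with made-up intensity values (not the study's data):

    import numpy as np

    # Hypothetical mean test-line intensities measured from imaged PSA RDTs
    # at known concentrations (ng/mL); illustrative values only.
    concentrations = np.array([0.0, 25.0, 50.0, 100.0, 200.0])
    intensities = np.array([2.0, 21.0, 38.0, 69.0, 121.0])

    # Fit a simple linear calibration: intensity = a * concentration + b.
    a, b = np.polyfit(concentrations, intensities, deg=1)

    def concentration_from_intensity(intensity):
        """Invert the calibration to estimate concentration for a new RDT."""
        return (intensity - b) / a

    print(f"slope={a:.3f}, intercept={b:.2f}")
    print(f"estimated PSA at intensity 50: "
          f"{concentration_from_intensity(50):.1f} ng/mL")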
Registration and rectification needs of geology
NASA Technical Reports Server (NTRS)
Chavez, P. S., Jr.
1982-01-01
Geologic applications of remotely sensed imaging encompass five areas of interest. The five areas include: (1) enhancement and analysis of individual images; (2) work with small area mosaics of imagery which have been map projection rectified to individual quadrangles; (3) development of large area mosaics of multiple images for several counties or states; (4) registration of multitemporal images; and (5) data integration from several sensors and map sources. Examples for each of these types of applications are summarized.
Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.
Integrating DICOM structure reporting (SR) into the medical imaging informatics data grid
NASA Astrophysics Data System (ADS)
Lee, Jasper; Le, Anh; Liu, Brent
2008-03-01
The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis tools (CAD) that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are being developed in the radiology field, the generated DICOM Structure Reports (SR) holding key radiological findings and measurements that are not part of the DICOM image need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance and method involved in adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is a MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file into the existing MI2 Data Grid will be shown.
Downie, H F; Adu, M O; Schmidt, S; Otten, W; Dupuy, L X; White, P J; Valentine, T A
2015-07-01
The morphology of roots and root systems influences the efficiency by which plants acquire nutrients and water, anchor themselves and provide stability to the surrounding soil. Plant genotype and the biotic and abiotic environment significantly influence root morphology, growth and ultimately crop yield. The challenge for researchers interested in phenotyping root systems is, therefore, not just to measure roots and link their phenotype to the plant genotype, but also to understand how the growth of roots is influenced by their environment. This review discusses progress in quantifying root system parameters (e.g. in terms of size, shape and dynamics) using imaging and image analysis technologies and also discusses their potential for providing a better understanding of root:soil interactions. Significant progress has been made in image acquisition techniques, however trade-offs exist between sample throughput, sample size, image resolution and information gained. All of these factors impact on downstream image analysis processes. While there have been significant advances in computation power, limitations still exist in statistical processes involved in image analysis. Utilizing and combining different imaging systems, integrating measurements and image analysis where possible, and amalgamating data will allow researchers to gain a better understanding of root:soil interactions. © 2014 John Wiley & Sons Ltd.
Device and methods for "gold standard" registration of clinical 3D and 2D cerebral angiograms
NASA Astrophysics Data System (ADS)
Madan, Hennadii; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2015-03-01
Translation of any novel or existing 3D-2D image registration method into clinical image-guidance systems is limited by the lack of objective validation on clinical image datasets. The main reason is that, besides the calibration of the 2D imaging system, a reference or "gold standard" registration is very difficult to obtain on clinical image datasets. In the context of cerebral endovascular image-guided interventions (EIGIs), we present a calibration device in the form of a headband with integrated fiducial markers and, secondly, propose an automated pipeline comprising 3D and 2D image processing, analysis and annotation steps, the result of which is a retrospective calibration of the 2D imaging system and an optimal, i.e., "gold standard", registration of 3D and 2D images. The device and methods were used to create the "gold standard" on 15 datasets of 3D and 2D cerebral angiograms, where each dataset was acquired on a patient undergoing EIGI for either aneurysm coiling or embolization of an arteriovenous malformation. The use of the device integrated seamlessly into the clinical workflow of EIGI, while the automated pipeline eliminated all manual input and interactive image processing, analysis or annotation. In this way, the time to obtain the "gold standard" was reduced from 30 minutes to less than one, and the "gold standard" 3D-2D registration on all 15 datasets of cerebral angiograms was obtained with sub-0.1 mm accuracy.
ConfocalGN: A minimalistic confocal image generator
NASA Astrophysics Data System (ADS)
Dmitrieff, Serge; Nédélec, François
Validating image analysis pipelines and training machine-learning segmentation algorithms require images with known features. Synthetic images can be used for this purpose, with the advantage that large reference sets can be produced easily. It is, however, essential to obtain images that are as realistic as possible in terms of noise and resolution, which is challenging in the field of microscopy. We describe ConfocalGN, a user-friendly software tool that can generate synthetic microscopy stacks from a ground truth (i.e. the observed object) specified as a 3D bitmap or a list of fluorophore coordinates. The software can analyze a real microscope image stack to set the noise parameters and directly generate new images of the object with noise characteristics similar to those of the sample image. With minimal input from the user and a modular architecture, ConfocalGN is easily integrated with existing image analysis solutions.
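The signal-dependent noise that ConfocalGN matches from a sample image can be approximated by Poisson photon noise plus additive Gaussian detector noise. A minimal stand-alone sketch of that generation step (our simplification; the package's PSF convolution and measured-noise matching are omitted):

    import numpy as np

    def synthetic_stack(ground_truth, photons_per_unit, read_sigma, rng):
        """Generate a noisy synthetic stack from a ground-truth intensity map.

        ground_truth:     3D array of relative fluorophore density.
        photons_per_unit: expected photon count per unit ground-truth value.
        read_sigma:       std. dev. of additive Gaussian detector noise.
        """
        expected = ground_truth * photons_per_unit
        shot = rng.poisson(expected).astype(float)   # signal-dependent noise
        return shot + rng.normal(0.0, read_sigma, ground_truth.shape)

    rng = np.random.default_rng(7)
    truth = np.zeros((8, 64, 64))
    truth[:, 28:36, 28:36] = 1.0                     # a bright cube 'object'
    stack = synthetic_stack(truth, photons_per_unit=50, read_sigma=3.0, rng=rng)
    print(stack.mean(), stack[:, 30, 30].mean())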
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Camp, Jon J.; Holmes, David R.; Huddleston, Paul M.; Lu, Lichun; Yaszemski, Michael J.; Robb, Richard A.
2012-03-01
Failure of the spine's structural integrity from metastatic disease can lead to both pain and neurologic deficit. Fractures that require treatment occur in over 30% of bony metastases. Our objective is to use computed tomography (CT) in conjunction with previously developed analytic techniques to predict fracture risk in cancer patients with metastatic disease of the spine. Current clinical practice for cancer patients with spine metastasis often requires an empirical decision regarding spinal reconstructive surgery. Early image-based software systems used for CT analysis are time-consuming and poorly suited for clinical application. The Biomedical Image Resource (BIR) at Mayo Clinic, Rochester has developed an image analysis computer program that calculates, from CT scans, the residual load-bearing capacity of a vertebra with metastatic cancer. The Spine Cancer Assessment (SCA) program is built on a platform designed for clinical practice, with a workflow format that allows rapid selection of patient CT exams, followed by guided image analysis tasks, resulting in a fracture risk report. The analysis features allow the surgeon to quickly isolate a single vertebra and obtain an immediate pre-surgical, multiple-parallel-section, composite-beam fracture risk analysis based on algorithms developed at Mayo Clinic. The analysis software is undergoing clinical validation studies. We expect this approach will facilitate patient management and the use of reliable guidelines for selecting among treatment options based on fracture risk.
Gu, Yuanyuan; Mai, Xiaoqin; Luo, Yue-jia
2013-01-01
The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes.
Clock Scan Protocol for Image Analysis: ImageJ Plugins.
Dobretsov, Maxim; Petkau, Georg; Hayar, Abdallah; Petkau, Eugen
2017-06-19
The clock scan protocol for image analysis is an efficient tool to quantify the average pixel intensity within, at the border of, and outside (background) a closed or segmented convex-shaped region of interest, leading to the generation of an averaged integral radial pixel-intensity profile. The protocol was originally developed in 2006 as a Visual Basic 6 script, but as such it had limited distribution. To address this problem, and to join similar recent efforts by others, we converted the original clock scan protocol code into two Java-based plugins compatible with NIH-sponsored and freely available image analysis programs like ImageJ or Fiji ImageJ. Furthermore, these plugins have several new functions that further expand the range of capabilities of the original protocol, such as analysis of multiple regions of interest and image stacks. The latter feature of the program is especially useful in applications in which it is important to determine changes related to time and location. Thus, clock scan analysis of stacks of biological images may potentially be applied to the spreading of Na+ or Ca++ within a single cell, as well as to the analysis of spreading activity (e.g., Ca++ waves) in populations of synaptically connected or gap-junction-coupled cells. Here, we describe these new clock scan plugins and show some examples of their applications in image analysis.
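The averaged integral radial profile at the heart of the clock scan can be sketched in a few lines: sample the image along rays from the ROI center out to twice the border distance and average across angles. A NumPy approximation for a circular ROI (the plugins handle arbitrary convex and segmented outlines; this is a simplified illustration):

    import numpy as np

    def clock_scan_profile(image, center, border_radius,
                           n_angles=360, n_steps=120):
        """Average radial intensity profile, normalized to the ROI border.

        Samples along n_angles rays out to 2x the border radius (nearest-
        neighbor sampling), so the first half of the profile lies inside
        the ROI, the midpoint is the border, and the rest is background.
        """
        cy, cx = center
        radii = np.linspace(0.0, 2.0 * border_radius, n_steps)
        profile = np.zeros(n_steps)
        for theta in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
            ys = np.clip(np.round(cy + radii * np.sin(theta)).astype(int),
                         0, image.shape[0] - 1)
            xs = np.clip(np.round(cx + radii * np.cos(theta)).astype(int),
                         0, image.shape[1] - 1)
            profile += image[ys, xs]
        return profile / n_angles

    # A bright disk of radius 20 on a dark background: the profile steps
    # down at half of the normalized scan length (the ROI border).
    img = np.zeros((128, 128))
    yy, xx = np.ogrid[:128, :128]
    img[(yy - 64) ** 2 + (xx - 64) ** 2 <= 20 ** 2] = 100.0
    prof = clock_scan_profile(img, (64, 64), border_radius=20)
    print(prof[:5].mean(), prof[-5:].mean())   # ~100 inside, ~0 outside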
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, D; Anderson, C; Mayo, C
Purpose: To extend the functionality of a commercial treatment planning system (TPS) to support (i) direct use of quantitative image-based metrics within treatment plan optimization and (ii) evaluation of dose-functional volume relationships to assist in functional image adaptive radiotherapy. Methods: A script was written that interfaces with a commercial TPS via an Application Programming Interface (API). The script executes a program that performs dose-functional volume analyses. Written in C#, the script reads the dose grid and correlates it with image data on a voxel-by-voxel basis through API extensions that can access registration transforms. A user interface was designed through WinForms to input parameters and display results. To test the performance of this program, image- and dose-based metrics computed from perfusion SPECT images aligned to the treatment planning CT were generated, validated, and compared. Results: The integration of image analysis information was successfully implemented as a plug-in to a commercial TPS. Perfusion SPECT images were used to validate the calculation and display of image-based metrics as well as dose-intensity metrics and histograms for defined structures on the treatment planning CT. Various biological dose correction models, custom image-based metrics, dose-intensity computations, and dose-intensity histograms were applied to analyze the image-dose profile. Conclusion: It is possible to add image analysis features to commercial TPSs through custom scripting applications. A tool was developed to enable the evaluation of image-intensity-based metrics in the context of functional targeting and avoidance. In addition to providing dose-intensity metrics and histograms that can be easily extracted from a plan database and correlated with outcomes, the system can also be extended to a plug-in optimization system, which can directly use the computed metrics for optimization of post-treatment tumor or normal tissue response models. Supported by NIH - P01 - CA059827.
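As a rough illustration of the dose-functional volume analysis described above, the following sketch correlates a dose grid with a co-registered functional image voxel by voxel and derives simple dose-intensity metrics. It assumes both grids have already been resampled to the same voxel lattice; the function names and the percentile-based metric are hypothetical, not the script's actual API.

```python
import numpy as np

def dose_intensity_histogram(dose, intensity, mask, n_bins=50):
    """Intensity-weighted histogram of dose inside a structure mask."""
    hist, edges = np.histogram(dose[mask], bins=n_bins, weights=intensity[mask])
    cumulative = 1.0 - np.cumsum(hist) / hist.sum()  # fraction of function at >= dose level
    return edges, hist, cumulative

def mean_dose_high_function(dose, intensity, mask, pct=70):
    """Mean dose to the most-perfused voxels (top (100-pct)% by intensity)."""
    threshold = np.percentile(intensity[mask], pct)
    selected = mask & (intensity >= threshold)
    return dose[selected].mean()
```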
An evaluation of EREP (Skylab) and ERTS imagery for integrated natural resources survey
NASA Technical Reports Server (NTRS)
Vangenderen, J. L. (Principal Investigator)
1973-01-01
The author has identified the following significant results. An experimental procedure has been devised and is being tested for natural resource surveys to cope with the problems of interpreting and processing the large quantities of data provided by Skylab and ERTS. Some basic aspects of orbital imagery such as scale, the role of repetitive coverage, and types of sensors are being examined in relation to integrated surveys of natural resources and regional development planning. Extrapolation away from known ground conditions, a fundamental technique for mapping resources, becomes very effective when used on orbital imagery supported by field mapping. Meaningful boundary delimitations can be made on orbital images using various image enhancement techniques. To meet the needs of many developing countries, this investigation into the use of satellite imagery for integrated resource surveys involves the analysis of the images by means of standard visual photointerpretation methods.
Lin, Jui-Ching; Heeschen, William; Reffner, John; Hook, John
2012-04-01
The combination of integrated focused ion beam-scanning electron microscope (FIB-SEM) serial sectioning and imaging techniques with image analysis provided quantitative characterization of three-dimensional (3D) pigment dispersion in dried paint films. The focused ion beam in a FIB-SEM dual beam system enables great control in slicing paints, and the sectioning process can be synchronized with SEM imaging providing high quality serial cross-section images for 3D reconstruction. Application of Euclidean distance map and ultimate eroded points image analysis methods can provide quantitative characterization of 3D particle distribution. It is concluded that 3D measurement of binder distribution in paints is effective to characterize the order of pigment dispersion in dried paint films.
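The two named analysis steps, the Euclidean distance map (EDM) and the ultimate eroded points (UEP), can be approximated with standard Python imaging tools, as in the hedged sketch below. It assumes a binary mask of the binder phase from the segmented FIB-SEM volume; the helper name and the min_distance parameter are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max

def edm_and_uep(binder_mask, min_distance=3):
    """EDM of the binder phase and its local maxima (ultimate eroded points)."""
    edm = ndi.distance_transform_edt(binder_mask)      # distance to nearest pigment surface
    uep = peak_local_max(edm, min_distance=min_distance)
    radii = edm[tuple(uep.T)]                          # local binder "thickness" at each UEP
    return edm, uep, radii
```

The distribution of the UEP radii then characterizes how finely the pigment is dispersed in the binder, in 2D slices or directly in the reconstructed 3D volume.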
Prescott, Jeffrey William
2013-02-01
The importance of medical imaging for clinical decision making has been steadily increasing over the last four decades. Recently, there has also been an emphasis on medical imaging for preclinical decision making, i.e., for use in pharmaceutical and medical device development. There is also a drive towards quantification of imaging findings by using quantitative imaging biomarkers, which can improve the sensitivity, specificity, accuracy and reproducibility of imaged characteristics used for diagnostic and therapeutic decisions. An important component of the discovery, characterization, validation and application of quantitative imaging biomarkers is the extraction of information and meaning from images through image processing and subsequent analysis. However, many advanced image processing and analysis methods are not applied directly to questions of clinical interest, i.e., for diagnostic and therapeutic decision making, a consideration that should be closely linked to the development of such algorithms. This article is meant to address these concerns. First, quantitative imaging biomarkers are introduced by providing definitions and concepts. Then, potential applications of advanced image processing and analysis to areas of quantitative imaging biomarker research are described; specifically, research into osteoarthritis (OA), Alzheimer's disease (AD) and cancer is presented. Then, challenges in quantitative imaging biomarker research are discussed. Finally, a conceptual framework for integrating clinical and preclinical considerations into the development of quantitative imaging biomarkers and their computer-assisted methods of extraction is presented.
Verifying Air Force Weather Passive Satellite Derived Cloud Analysis Products
NASA Astrophysics Data System (ADS)
Nobis, T. E.
2017-12-01
Air Force Weather (AFW) has developed an hourly World-Wide Merged Cloud Analysis (WWMCA) using imager data from 16 geostationary and polar-orbiting satellites. The analysis product contains information on cloud fraction, height, type and various optical properties including optical depth and integrated water path. All of these products are derived using a suite of algorithms which rely exclusively on passively sensed data from short-, mid- and long-wave imager channels. The system integrates satellites with a wide range of capabilities, from the relatively simple two-channel OLS imager to the 16-channel ABI/AHI, to create a seamless global analysis in real time. Over the last couple of years, AFW has started utilizing independent verification data from actively sensed cloud measurements to better understand the performance limitations of the WWMCA. Sources utilized include space-based lidars (CALIPSO, CATS) and radar (CloudSat) as well as ground-based lidars from the Department of Energy ARM sites and several European cloud radars. This work will present findings from our efforts to compare actively and passively sensed cloud information, including comparison techniques and limitations as well as the performance of the passively derived cloud information against the active measurements.
Integrative radiogenomic analysis for multicentric radiophenotype in glioblastoma
Kong, Doo-Sik; Kim, Jinkuk; Lee, In-Hee; Kim, Sung Tae; Seol, Ho Jun; Lee, Jung-Il; Park, Woong-Yang; Ryu, Gyuha; Wang, Zichen; Ma'ayan, Avi; Nam, Do-Hyun
2016-01-01
We postulated that multicentric glioblastoma (GBM) represents a more invasive form than solitary GBM and has its own genomic characteristics. From May 2004 to June 2010, we retrospectively identified 51 treatment-naïve GBM patients with available clinical information from the Samsung Medical Center data registry. Multicentricity of the tumor was defined as the presence of multiple foci on the T1 contrast enhancement of MR images or high signal for multiple lesions without contiguity of each other on the FLAIR image. Kaplan-Meier survival analysis demonstrated that multicentric GBM had a worse prognosis than solitary GBM (median, 16.03 vs. 20.57 months, p < 0.05). Copy number variation (CNV) analysis revealed an increase in 11 regions and a decrease in 17 regions in the multicentric GBM. Gene expression profiling identified 738 genes that were increased and 623 genes that were decreased in the multicentric radiophenotype (p < 0.001). Integration of the CNV and expression datasets identified twelve representative genes: CPM, LANCL2, LAMP1, GAS6, DCUN1D2, CDK4, AGAP2, TSPAN33, PDLIM1, CLDN12, and GTPBP10, having high correlation across CNV, gene expression and patient outcome. Network and enrichment analyses showed that the multicentric tumor had elevated fibrotic signaling pathways compared with a more proliferative and mitogenic signal in the solitary tumors. Noninvasive radiological imaging together with integrative radiogenomic analysis can provide an important tool in helping to advance personalized therapy for the more clinically aggressive subset of GBM. PMID:26863628
Image preprocessing study on KPCA-based face recognition
NASA Astrophysics Data System (ADS)
Li, Xuan; Li, Dehua
2015-12-01
Face recognition, as an important biometric identification method with friendly, natural, and convenient advantages, has attracted increasing attention. This paper investigates a face recognition system comprising face detection, feature extraction, and recognition, focusing on the theory and key technology of the preprocessing methods used in face detection and on how different preprocessing choices affect recognition results with the KPCA method. We choose the YCbCr color space for skin segmentation and integral projection for face location. Face images are preprocessed with erosion and dilation (the opening and closing operations) and an illumination compensation method, and then analyzed with a face recognition method based on kernel principal component analysis; experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that, under certain conditions, the kernel method based on the PCA algorithm makes the extracted features represent the original image information better through nonlinear feature extraction, yielding a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results, and hence different recognition rates in the recognition stage. In addition, in kernel principal component analysis, the degree of the polynomial kernel function can affect the recognition result.
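A minimal sketch of the recognition stage described above, using scikit-learn's KernelPCA with a polynomial kernel (whose degree the abstract notes affects the recognition result) followed by a nearest-neighbour classifier; this illustrates the general KPCA pipeline, not the authors' MATLAB implementation, and the function name is ours.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier

def kpca_recognize(X_train, y_train, X_test, n_components=50, degree=2):
    """KPCA features (polynomial kernel) plus nearest-neighbour matching.

    X_train / X_test: rows are vectorized, preprocessed face images.
    """
    kpca = KernelPCA(n_components=n_components, kernel='poly', degree=degree)
    z_train = kpca.fit_transform(X_train)
    z_test = kpca.transform(X_test)
    clf = KNeighborsClassifier(n_neighbors=1).fit(z_train, y_train)
    return clf.predict(z_test)
```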
Implementation of stimulated Raman scattering microscopy for single cell analysis
NASA Astrophysics Data System (ADS)
D'Arco, Annalisa; Ferrara, Maria Antonietta; Indolfi, Maurizio; Tufano, Vitaliano; Sirleto, Luigi
2017-05-01
In this work, we present the successful realization of a nonlinear microscope, not commercially available, based on stimulated Raman scattering (SRS). It is obtained by integrating a femtosecond SRS spectroscopic setup with an inverted research microscope equipped with a scanning unit. Exploiting the strong vibrational contrast of SRS, it provides label-free imaging for single-cell analysis. Validation tests on images of polystyrene beads are reported to demonstrate the feasibility of the approach. To test the microscope on biological structures, we report and discuss label-free images of lipid droplets inside fixed adipocyte cells.
Optical design and testing: introduction.
Liang, Chao-Wen; Koshel, John; Sasian, Jose; Breault, Robert; Wang, Yongtian; Fang, Yi Chin
2014-10-10
Optical design and testing has numerous applications in industrial, military, consumer, and medical settings. Assembling a complete imaging or nonimage optical system may require the integration of optics, mechatronics, lighting technology, optimization, ray tracing, aberration analysis, image processing, tolerance compensation, and display rendering. This issue features original research ranging from the optical design of image and nonimage optical stimuli for human perception, optics applications, bio-optics applications, 3D display, solar energy system, opto-mechatronics to novel imaging or nonimage modalities in visible and infrared spectral imaging, modulation transfer function measurement, and innovative interferometry.
The Route to an Integrative Associative Memory Is Influenced by Emotion
Murray, Brendan D.; Kensinger, Elizabeth A.
2014-01-01
Though the hippocampus typically has been implicated in processes related to associative binding, special types of associations – such as those created by integrative mental imagery – may be supported by processes implemented in other medial temporal-lobe or sensory processing regions. Here, we investigated what neural mechanisms underlie the formation and subsequent retrieval of integrated mental images, and whether those mechanisms differ based on the emotionality of the integration (i.e., whether it contains an emotional item or not). Participants viewed pairs of words while undergoing a functional MRI scan. They were instructed to imagine the two items separately from one another (“non-integrative” study) or as a single, integrated mental image (“integrative” study). They provided ratings of how successful they were at generating vivid images that fit the instructions. They were then given a surprise associative recognition test, also while undergoing an fMRI scan. The cuneus showed parametric correspondence to increasing imagery success selectively during encoding and retrieval of emotional integrations, while the parahippocampal gyri and prefrontal cortices showed parametric correspondence during the encoding and retrieval of non-emotional integrations. Connectivity analysis revealed that selectively during negative integration, left amygdala activity was negatively correlated with frontal and hippocampal activity. These data indicate that individuals utilize two different neural routes for forming and retrieving integrations depending on their emotional content, and they suggest a potentially disruptive role for the amygdala on frontal and medial-temporal regions during negative integration. PMID:24427267
NASA Astrophysics Data System (ADS)
Barón-Aznar, C.; Moreno-Jiménez, S.; Celis, M. A.; Lárraga-Gutiérrez, J. M.; Ballesteros-Zebadúa, P.
2008-08-01
Integrated dose is the total energy delivered to a radiotherapy target. This physical parameter could be a predictor for complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT patient images from the National Institute of Neurology and Neurosurgery and BrainScan software, this work presents the mean density of 21 multiform glioblastomas, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.
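Since integrated dose is total energy, it can be estimated by summing dose times voxel mass over the target, with mass derived from CT density. The sketch below uses a crude linear HU-to-density mapping as a stand-in for a proper calibration curve; all names are illustrative, not the authors' code.

```python
import numpy as np

def integrated_dose_joules(dose_gy, hu, voxel_volume_cm3, target_mask):
    """Total energy (J) = sum of dose (Gy = J/kg) times voxel mass (kg)."""
    density_g_cm3 = 1.0 + hu / 1000.0          # crude linear HU-to-density mapping
    mass_kg = density_g_cm3[target_mask] * voxel_volume_cm3 * 1e-3
    return float(np.sum(dose_gy[target_mask] * mass_kg))
```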
MULTISCALE TENSOR ANISOTROPIC FILTERING OF FLUORESCENCE MICROSCOPY FOR DENOISING MICROVASCULATURE.
Prasath, V B S; Pelapur, R; Glinskii, O V; Glinsky, V V; Huxley, V H; Palaniappan, K
2015-04-01
Fluorescence microscopy images are contaminated by noise, and improving image quality by filtering without blurring vascular structures is an important step in automatic image analysis. The application of interest here is to accurately and automatically extract the structural components of the microvascular system from images acquired by fluorescence microscopy. A robust denoising process is necessary in order to extract accurate vascular morphology information. For this purpose, we propose a multiscale tensor anisotropic diffusion model which progressively and adaptively updates the amount of smoothing while preserving vessel boundaries accurately. Based on a coherency-enhancing flow with a planar confidence measure and fused 3D structure information, our method integrates multiple scales for microvasculature preservation and noise removal in membrane structures. Experimental results on simulated synthetic images and epifluorescence images show the advantage of our improvement over other related diffusion filters. We further show that the proposed multiscale integration approach improves the denoising accuracy of different tensor diffusion methods to obtain better microvasculature segmentation.
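For intuition, a scalar Perona-Malik diffusion sketch is given below; the paper's multiscale tensor model with coherency-enhancing flow is substantially richer, so treat this only as an illustration of edge-preserving anisotropic smoothing, with all parameter values arbitrary.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.15):
    """Scalar edge-preserving diffusion (periodic borders via np.roll)."""
    u = np.asarray(img, dtype=float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)    # conductance: small across strong edges
    for _ in range(n_iter):
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u, 1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u, 1, axis=1) - u
        u += dt * (g(dN) * dN + g(dS) * dS + g(dE) * dE + g(dW) * dW)
    return u
```

The conductance g shrinks where local gradients are large, so smoothing proceeds along vessels rather than across their boundaries.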
Design and implementation of a fault-tolerant and dynamic metadata database for clinical trials
NASA Astrophysics Data System (ADS)
Lee, J.; Zhou, Z.; Talini, E.; Documet, J.; Liu, B.
2007-03-01
In recent imaging-based clinical trials, quantitative image analysis (QIA) and computer-aided diagnosis (CAD) methods are increasing in productivity due to higher-resolution imaging capabilities. A radiology core doing clinical trials has been analyzing more treatment methods, and there is a growing quantity of metadata that needs to be stored and managed. These radiology centers are also collaborating with many off-site imaging field sites and need a way to communicate metadata between one another in a secure infrastructure. Our solution is to implement a data storage grid with a fault-tolerant and dynamic metadata database design to unify metadata from different clinical trial experiments and field sites. Although metadata from images follow the DICOM standard, clinical trials also produce metadata specific to regions-of-interest and quantitative image analysis. We have implemented a data access and integration (DAI) server layer where multiple field sites can access multiple metadata databases in the data grid through a single web-based grid service. The centralization of metadata database management simplifies the task of adding new databases into the grid and also decreases the risk of configuration errors seen in peer-to-peer grids. In this paper, we address the design and implementation of a data grid metadata storage that has fault tolerance and dynamic integration for imaging-based clinical trials.
Ravi, Keerthi Sravan; Potdar, Sneha; Poojar, Pavan; Reddy, Ashok Kumar; Kroboth, Stefan; Nielsen, Jon-Fredrik; Zaitsev, Maxim; Venkatesan, Ramesh; Geethanath, Sairam
2018-03-11
To provide a single open-source platform for comprehensive MR algorithm development inclusive of simulations, pulse sequence design and deployment, reconstruction, and image analysis. We integrated the "Pulseq" platform for vendor-independent pulse programming with Graphical Programming Interface (GPI), a scientific development environment based on Python. Our integrated platform, Pulseq-GPI, permits sequences to be defined visually and exported to the Pulseq file format for execution on an MR scanner. For comparison, Pulseq files using either MATLAB only ("MATLAB-Pulseq") or Python only ("Python-Pulseq") were generated. We demonstrated three fundamental sequences on a 1.5 T scanner. Execution times of the three variants of implementation were compared on two operating systems. In vitro phantom images indicate equivalence with the vendor-supplied implementations and MATLAB-Pulseq. The examples demonstrated in this work illustrate the unifying capability of Pulseq-GPI. The execution times of all three implementations were fast (a few seconds). The software is capable of user-interface-based development and/or command-line programming. The tool demonstrated here, Pulseq-GPI, integrates the open-source simulation, reconstruction and analysis capabilities of GPI Lab with the pulse sequence design and deployment features of Pulseq. Current and future work includes providing an ISMRMRD interface and incorporating Specific Absorption Rate and Peripheral Nerve Stimulation computations. Copyright © 2018 Elsevier Inc. All rights reserved.
Misaligned Image Integration With Local Linear Model.
Baba, Tatsuya; Matsuoka, Ryo; Shirai, Keiichiro; Okuda, Masahiro
2016-05-01
We present a new image integration technique for a flash and long-exposure image pair to capture a dark scene without incurring blurring or noisy artifacts. Most existing methods require well-aligned images for the integration, which is often a burdensome restriction in practical use. We address this issue by locally transferring the colors of the flash images using a small fraction of the corresponding pixels in the long-exposure images. We formulate the image integration as a convex optimization problem with the local linear model. The proposed method makes it possible to integrate the color of the long-exposure image with the detail of the flash image without causing any harmful effects to its contrast, where we do not need perfect alignment between the images by virtue of our new integration principle. We show that our method successfully outperforms the state of the art in the image integration and reference-based color transfer for challenging misaligned data sets.
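A hedged sketch of the local linear model idea follows: within each patch, the long-exposure colour is modelled as a*flash + b, fitted by least squares on the subset of pixels assumed to correspond across the pair. This per-patch fit stands in for the paper's convex optimization; the patch size, validity mask, and function name are assumptions.

```python
import numpy as np

def local_linear_transfer(flash, longexp, valid, patch=16, eps=1e-4):
    """Fit longexp ~ a * flash + b per patch and channel on valid pixels only."""
    out = flash.astype(float).copy()
    H, W, C = flash.shape
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            m = valid[y:y + patch, x:x + patch]
            if m.sum() < 8:               # too few correspondences: keep flash colours
                continue
            for c in range(C):
                f = flash[y:y + patch, x:x + patch, c].astype(float)
                l = longexp[y:y + patch, x:x + patch, c].astype(float)
                fm, lm = f[m], l[m]
                a = ((fm - fm.mean()) * (lm - lm.mean())).mean() / (fm.var() + eps)
                b = lm.mean() - a * fm.mean()
                out[y:y + patch, x:x + patch, c] = a * f + b
    return out
```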
Integration of Multi-Modal Biomedical Data to Predict Cancer Grade and Patient Survival.
Phan, John H; Hoffman, Ryan; Kothari, Sonal; Wu, Po-Yen; Wang, May D
2016-02-01
The Big Data era in Biomedical research has resulted in large-cohort data repositories such as The Cancer Genome Atlas (TCGA). These repositories routinely contain hundreds of matched patient samples for genomic, proteomic, imaging, and clinical data modalities, enabling holistic and multi-modal integrative analysis of human disease. Using TCGA renal and ovarian cancer data, we conducted a novel investigation of multi-modal data integration by combining histopathological image and RNA-seq data. We compared the performances of two integrative prediction methods: majority vote and stacked generalization. Results indicate that integration of multiple data modalities improves prediction of cancer grade and outcome. Specifically, stacked generalization, a method that integrates multiple data modalities to produce a single prediction result, outperforms both single-data-modality prediction and majority vote. Moreover, stacked generalization reveals the contribution of each data modality (and specific features within each data modality) to the final prediction result and may provide biological insights to explain prediction performance.
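A minimal scikit-learn sketch of stacked generalization over two modalities follows; the column splits standing in for histopathology image features and RNA-seq features are placeholders, and the base and meta learners are illustrative choices rather than the study's exact models.

```python
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_stacker(img_cols, rna_cols):
    """One base learner per modality; a meta-learner fuses their predictions."""
    img_model = make_pipeline(
        ColumnTransformer([('img', 'passthrough', img_cols)]),
        RandomForestClassifier(n_estimators=200))
    rna_model = make_pipeline(
        ColumnTransformer([('rna', 'passthrough', rna_cols)]),
        RandomForestClassifier(n_estimators=200))
    return StackingClassifier(
        estimators=[('image', img_model), ('rnaseq', rna_model)],
        final_estimator=LogisticRegression(), cv=5)
```

For example, `build_stacker(list(range(200)), list(range(200, 1200))).fit(X, y)` would stack two modality-specific learners on a feature matrix whose first 200 columns hold image features; the meta-learner's coefficients then indicate each modality's contribution, echoing the interpretability the abstract highlights.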
DrishtiCare: a telescreening platform for diabetic retinopathy powered with fundus image analysis.
Joshi, Gopal Datt; Sivaswamy, Jayanthi
2011-01-01
Diabetic retinopathy is the leading cause of blindness in urban populations. Early diagnosis through regular screening and timely treatment has been shown to prevent visual loss and blindness. It is very difficult to cater to this vast set of diabetes patients, primarily because of the high costs of reaching out to patients and a scarcity of skilled personnel. Telescreening offers a cost-effective solution to reach out to patients but is still inadequate due to an insufficient number of experts who serve the diabetes population. Developments in fundus image analysis have shown promise in addressing the scarcity of skilled personnel for large-scale screening. This article aims at addressing the underlying issues in traditional telescreening to develop a solution that leverages the developments in fundus image analysis. We propose a novel Web-based telescreening solution (called DrishtiCare) integrating various value-added fundus image analysis components. A Web-based platform on the software as a service (SaaS) delivery model is chosen to make the service cost-effective, easy to use, and scalable. A server-based prescreening system is employed to scrutinize the fundus images of patients and to refer them to the experts. An automatic quality assessment module ensures the transfer of fundus images that meet grading standards. An easy-to-use interface, enabled with new visualization features, is designed for case examination by experts. Three local primary eye hospitals have participated in and used DrishtiCare's telescreening service. A preliminary evaluation of the proposed platform was performed on a set of 119 patients, of whom 23% were identified with sight-threatening retinopathy. Currently, evaluation at a larger scale is in progress, and a total of 450 patients have been enrolled. The proposed approach provides an innovative way of integrating automated fundus image analysis into the telescreening framework to address well-known challenges in large-scale disease screening. It offers a low-cost, effective, and easily adoptable screening solution to primary care providers. © 2010 Diabetes Technology Society.
Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.
Saadia, Ayesha; Rashdi, Adnan
2016-12-01
Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely degrades image quality and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter is proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel; fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of the image. In stage 2, the resultant image is further improved by a fractional-order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total Variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and the Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of echocardiographic images is effective in noise suppression and removal; it not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
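The stage-1 filter can be sketched as below; the Gaussian-like fuzzy membership function centred on the window median is our assumption for illustration, not necessarily the authors' exact weighting rule, and the function name is hypothetical.

```python
import numpy as np

def fuzzy_weighted_mean(img, sigma=20.0):
    """Replace each pixel with a fuzzy-weighted mean of its 3x3 neighbourhood."""
    padded = np.pad(img.astype(float), 1, mode='reflect')
    out = np.empty(img.shape, dtype=float)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            win = padded[y:y + 3, x:x + 3]
            # fuzzy memberships: weights fall off with distance from the window median
            w = np.exp(-((win - np.median(win)) / sigma) ** 2)
            out[y, x] = np.sum(w * win) / np.sum(w)
    return out
```

Pixels that deviate strongly from their neighbourhood (likely speckle) receive small weights, which suppresses noise while leaving coherent edges largely intact.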
de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos
2018-07-01
The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJ OA). The imaging dataset consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with a diagnosis of TMJ OA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who had experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age- and sex-matched control subjects (39.4 ± 15.4 years) who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire and blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA) and a flexible web-based system for data storage, computation and integration (DSCI) of high-dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ with 91% close agreement between the clinician consensus and the SVA classifier. The DSCI remotely ran a novel statistical analysis, Multivariate Functional Shape Data Analysis, which computed high-dimensional correlations between 3D shape coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at clinical, imaging and biological levels, using novel flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Daffara, C.; Parisotto, S.; Mariotti, P. I.
2015-06-01
The cultural heritage field is discovering how valuable thermal analysis is as a tool to improve restoration, thanks to its ability to inspect hidden details. In this work a novel dual-mode imaging approach, based on the integration of thermography and thermal quasi-reflectography (TQR) in the mid-IR, is demonstrated for effective mapping of surface materials and of sub-surface detachments in mural painting. The tool was validated through a unique application: the "Monocromo" by Leonardo da Vinci in Italy. The dual-mode acquisition provided two spatially aligned datasets: the TQR image and the thermal sequence. The main steps of the workflow included: 1) TQR analysis to map surface features; 2) TQR-based estimation of the emissivity; 3) projection of the TQR frame on a reference orthophoto and TQR mosaicking; 4) thermography analysis to map detachments; 5) use of TQR to solve spatial referencing and mosaicking for the thermally processed frames. Referencing thermal images in the visible is a difficult aspect of the thermography technique that the dual-mode approach solves effectively. We finally obtained the TQR and thermal maps spatially referenced to the mural painting, thus providing the restorer a valuable tool for the restoration of the detachments.
A high-level 3D visualization API for Java and ImageJ.
Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin
2010-05-21
Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
Optomechanical integrated simulation of Mars medium resolution lens with large field of view
NASA Astrophysics Data System (ADS)
Yang, Wenqiang; Xu, Guangzhou; Yang, Jianfeng; Sun, Yi
2017-10-01
The lens of a Mars detector is exposed to solar radiation and space temperatures for long periods during orbit, so the ambient temperature of the optical system is in a dynamic state. Optical and mechanical changes caused by heat lead to drift of the camera's visual axis and to wavefront distortion. The surface distortion of an optical lens includes rigid-body displacement and distortion of the surface shape. This paper uses a calculation method based on integrated optomechanical analysis to explore the impact of thermodynamic loads on image quality. A simulation model of the lens structure was established in the analysis software; the shape distribution and the surface characterization parameters of the lens over several temperature ranges were analyzed and compared, yielding PV/RMS values, deformation maps of the lens surface, and an evaluation of imaging quality. The simulation captured lens surface shape and shape distribution under loads that are difficult to measure under experimental conditions. The integrated optomechanical simulation method can obtain the changes in optical parameters brought about by thermal loads, showing that integrated analysis plays an important role in guiding lens design.
Alterations in White Matter Integrity in Young Adults with Smartphone Dependence
Hu, Yuanming; Long, Xiaojing; Lyu, Hanqing; Zhou, Yangyang; Chen, Jianxiang
2017-01-01
Smartphone dependence (SPD) is increasingly regarded as a psychological problem; however, the underlying neural substrates of SPD are still not clear. High-resolution magnetic resonance imaging provides a useful tool to help understand and manage the disorder. In this study, a tract-based spatial statistics (TBSS) analysis of diffusion tensor imaging (DTI) was used to measure white matter integrity in young adults with SPD. A total of 49 subjects were recruited and categorized into SPD and control groups based on clinical behavioral tests. To localize regions with abnormal white matter integrity in SPD, voxel-wise analysis of fractional anisotropy (FA) and mean diffusivity (MD) over the whole brain was performed by TBSS. Correlations between the quantitative measures of brain structure and the behavioral measures were computed. Our results demonstrated that SPD subjects had significantly lower white matter integrity than controls in the superior longitudinal fasciculus (SLF), superior corona radiata (SCR), internal capsule, external capsule, sagittal stratum, fornix/stria terminalis and midbrain structures. Correlation analysis showed that the observed abnormalities in the internal capsule and stria terminalis were correlated with the severity of dependence and with behavioral assessments. Our findings facilitate a primary understanding of white matter characteristics in SPD and indicate that the structural deficits might be linked to behavioral impairments. PMID:29163108
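The FA and MD maps analyzed by TBSS are standard scalar summaries of the diffusion tensor; given per-voxel eigenvalues they follow from textbook formulas, as in this short sketch (the function name is illustrative, and this is not the TBSS pipeline itself):

```python
import numpy as np

def fa_md(eigenvalues):
    """FA and MD from diffusion-tensor eigenvalues; eigenvalues: (..., 3)."""
    ev = np.asarray(eigenvalues, dtype=float)
    md = ev.mean(axis=-1)                                # mean diffusivity
    num = np.sqrt(((ev - md[..., None]) ** 2).sum(axis=-1))
    den = np.sqrt((ev ** 2).sum(axis=-1))
    fa = np.sqrt(1.5) * np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return fa, md
```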
Multimodal digital color imaging system for facial skin lesion analysis
NASA Astrophysics Data System (ADS)
Bae, Youngwoo; Lee, Youn-Heum; Jung, Byungjo
2008-02-01
In dermatology, various digital imaging modalities have been used as important tools to quantitatively evaluate the treatment effect on skin lesions. Cross-polarization color images have been used to evaluate skin chromophore (melanin and hemoglobin) information, and parallel-polarization images to evaluate skin texture information. In addition, UV-A-induced fluorescent images have been widely used to evaluate various skin conditions such as sebum, keratosis, sun damage, and vitiligo. In order to maximize the evaluation efficacy for various skin lesions, it is necessary to integrate these imaging modalities into one imaging system. In this study, we propose a multimodal digital color imaging system that provides four different digital color images: a standard color image, parallel- and cross-polarization color images, and a UV-A-induced fluorescent color image. Herein, we describe the imaging system and present examples of image analysis. By analyzing the color information and morphological features of facial skin lesions, we are able to evaluate various skin lesions comparably and simultaneously. In conclusion, we are confident that the multimodal color imaging system can be utilized as an important assistant tool in dermatology.
Timing Analysis with INTEGRAL: Comparing Different Reconstruction Algorithms
NASA Technical Reports Server (NTRS)
Grinberg, V.; Kreykenboehm, I.; Fuerst, F.; Wilms, J.; Pottschmidt, K.; Bel, M. Cadolle; Rodriquez, J.; Marcu, D. M.; Suchy, S.; Markowitz, A.;
2010-01-01
INTEGRAL is one of the few instruments capable of detecting X-rays above 20 keV. It is therefore in principle well suited for studying X-ray variability in this regime. Because INTEGRAL uses coded-mask instruments for imaging, the reconstruction of light curves of X-ray sources is highly non-trivial. We present results from the comparison of two commonly employed algorithms, which primarily measure flux from mask deconvolution (ii_lc_extract) and from calculating the pixel-illuminated fraction (ii_light). Both methods agree well for timescales above about 10 s, the highest time resolution for which image reconstruction is possible. For higher time resolution, ii_light produces meaningful results, although the overall variance of the light curves is not preserved.
Sun, Yajuan; Yu, Hongjuan; Ma, Jingquan; Lu, Peiou
2016-01-01
The aim of our study was to evaluate the role of 18F-FDG PET/CT integrated imaging in differentiating malignant from benign pleural effusion. A total of 176 patients with pleural effusion who underwent 18F-FDG PET/CT examination to differentiate malignancy from benignancy were retrospectively researched. The images of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging were visually analyzed. The suspected malignant effusion was characterized by the presence of nodular or irregular pleural thickening on CT imaging. Whereas on PET imaging, pleural 18F-FDG uptake higher than mediastinal activity was interpreted as malignant effusion. Images of 18F-FDG PET/CT integrated imaging were interpreted by combining the morphologic feature of pleura on CT imaging with the degree and form of pleural 18F-FDG uptake on PET imaging. One hundred and eight patients had malignant effusion, including 86 with pleural metastasis and 22 with pleural mesothelioma, whereas 68 patients had benign effusion. The sensitivities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging in detecting malignant effusion were 75.0%, 91.7% and 93.5%, respectively, which were 69.8%, 91.9% and 93.0% in distinguishing metastatic effusion. The sensitivity of 18F-FDG PET/CT integrated imaging in detecting malignant effusion was higher than that of CT imaging (p = 0.000). For metastatic effusion, 18F-FDG PET imaging had higher sensitivity (p = 0.000) and better diagnostic consistency with 18F-FDG PET/CT integrated imaging compared with CT imaging (Kappa = 0.917 and Kappa = 0.295, respectively). The specificities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging were 94.1%, 63.2% and 92.6% in detecting benign effusion. The specificities of CT imaging and 18F-FDG PET/CT integrated imaging were higher than that of 18F-FDG PET imaging (p = 0.000 and p = 0.000, respectively), and CT imaging had better diagnostic consistency with 18F-FDG PET/CT integrated imaging compared with 18F-FDG PET imaging (Kappa = 0.881 and Kappa = 0.240, respectively). 18F-FDG PET/CT integrated imaging is a more reliable modality in distinguishing malignant from benign pleural effusion than 18F-FDG PET imaging and CT imaging alone. For image interpretation of 18F-FDG PET/CT integrated imaging, the PET and CT portions play a major diagnostic role in identifying metastatic effusion and benign effusion, respectively.
National Defense Center of Excellence for Industrial Metrology and 3D Imaging
2012-10-18
validation rather than mundane data-reduction/analysis tasks. Indeed, the new financial and technical resources being brought to bear by integrating CT...of extremely fast axial scanners. By replacing the single-spot detector by a detector array, a three-dimensional image is acquired by one depth scan...the number of acquired voxels per complete two-dimensional or three-dimensional image, the axial and lateral resolution, the depth range, the
Three-dimensional passive sensing photon counting for object classification
NASA Astrophysics Data System (ADS)
Yeom, Seokwon; Javidi, Bahram; Watson, Edward
2007-04-01
In this keynote address, we address three-dimensional (3D) distortion-tolerant object recognition using photon-counting integral imaging (II). A photon-counting linear discriminant analysis (LDA) is discussed for classification of photon-limited images. We develop a compact distortion-tolerant recognition system based on the multiple-perspective imaging of II. Experimental and simulation results have shown that a low level of photons is sufficient to classify out-of-plane rotated objects.
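A toy sketch of the photon-limited classification setting follows: photon counting is simulated with a Poisson model at a low mean photon level, and the count images are classified with scikit-learn's LDA. This is a stand-in for the paper's photon-counting LDA formulation, with all names and parameter values illustrative.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def photon_limited(images, mean_photons=100, seed=0):
    """Simulate photon-count images: Poisson sampling of normalized irradiance."""
    rng = np.random.default_rng(seed)
    flat = images.reshape(len(images), -1).astype(float)
    p = flat / flat.sum(axis=1, keepdims=True)    # normalized irradiance per image
    return rng.poisson(mean_photons * p)

def train_photon_lda(train_images, labels):
    """Fit LDA on photon-limited versions of the training images."""
    return LinearDiscriminantAnalysis().fit(photon_limited(train_images), labels)
```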
GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.
Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A
2017-03-01
We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework to enable graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, build and execute an array of image analysis routines, and include existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.
Integrated circuit failure analysis by low-energy charge-induced voltage alteration
Cole, Jr., Edward I.
1996-01-01
A scanning electron microscope apparatus and method are described for detecting and imaging open-circuit defects in an integrated circuit (IC). The invention uses a low-energy high-current focused electron beam that is scanned over a device surface of the IC to generate a charge-induced voltage alteration (CIVA) signal at the location of any open-circuit defects. The low-energy CIVA signal may be used to generate an image of the IC showing the location of any open-circuit defects. A low electron beam energy is used to prevent electrical breakdown in any passivation layers in the IC and to minimize radiation damage to the IC. The invention has uses for IC failure analysis, for production-line inspection of ICs, and for qualification of ICs.
Breast Mass Detection in Digital Mammogram Based on Gestalt Psychology
Bu, Qirong; Liu, Feihong; Zhang, Min; Ren, Yu; Lv, Yi
2018-01-01
Inspired by gestalt psychology, we combine human cognitive characteristics with the knowledge of radiologists in medical image analysis. In this paper, a novel framework is proposed to detect breast masses in digitized mammograms. It can be divided into three modules: sensation integration, semantic integration, and verification. After analyzing the process of radiologists' mammography screening, a series of visual rules based on the morphological characteristics of breast masses are presented and quantified by mathematical methods. The framework can be seen as an effective trade-off between bottom-up sensation and top-down recognition methods. This is a new exploratory method for the automatic detection of lesions. The experiments are performed on the Mammographic Image Analysis Society (MIAS) and Digital Database for Screening Mammography (DDSM) data sets. The sensitivity reached 92% at 1.94 false positives per image (FPI) on MIAS and 93.84% at 2.21 FPI on DDSM. Our framework has achieved a better performance compared with other algorithms. PMID:29854359
Gutman, David A; Khalilia, Mohammed; Lee, Sanghoon; Nalisnik, Michael; Mullen, Zach; Beezley, Jonathan; Chittajallu, Deepak R; Manthey, David; Cooper, Lee A D
2017-11-01
Tissue-based cancer studies can generate large amounts of histology data in the form of glass slides. These slides contain important diagnostic, prognostic, and biological information and can be digitized into expansive and high-resolution whole-slide images using slide-scanning devices. Effectively utilizing digital pathology data in cancer research requires the ability to manage, visualize, share, and perform quantitative analysis on these large amounts of image data, tasks that are often complex and difficult for investigators with the current state of commercial digital pathology software. In this article, we describe the Digital Slide Archive (DSA), an open-source web-based platform for digital pathology. DSA allows investigators to manage large collections of histologic images and integrate them with clinical and genomic metadata. The open-source model enables DSA to be extended to provide additional capabilities. Cancer Res; 77(21); e75-78. ©2017 American Association for Cancer Research.
Mangold, Stefanie; De Cecco, Carlo N; Wichmann, Julian L; Canstein, Christian; Varga-Szemes, Akos; Caruso, Damiano; Fuller, Stephen R; Bamberg, Fabian; Nikolaou, Konstantin; Schoepf, U Joseph
2016-05-01
To compare, on an intra-individual basis, the effect of automated tube voltage selection (ATVS), an integrated-circuit detector and advanced iterative reconstruction on radiation dose and image quality of aortic CTA studies using 2nd and 3rd generation dual-source CT (DSCT). We retrospectively evaluated 32 patients who had undergone CTA of the entire aorta with both 2nd generation DSCT at 120 kV using filtered back projection (FBP) (protocol 1) and 3rd generation DSCT using ATVS, an integrated-circuit detector and advanced iterative reconstruction (protocol 2). Contrast-to-noise ratio (CNR) was calculated. Image quality was subjectively evaluated using a five-point scale. Radiation dose parameters were recorded. All studies were considered of diagnostic image quality. CNR was significantly higher with protocol 2 (15.0±5.2 vs 11.0±4.2; p<.0001). Subjective image quality analysis revealed no significant differences for evaluation of attenuation (p=0.08501), but image noise was rated significantly lower with protocol 2 (p=0.0005). Mean tube voltage and effective dose were 94.7±14.1 kV and 6.7±3.9 mSv with protocol 2, and 120±0 kV and 11.5±5.2 mSv with protocol 1 (p<0.0001, respectively). Aortic CTA performed with 3rd generation DSCT, ATVS, an integrated-circuit detector, and advanced iterative reconstruction allows a substantial reduction of radiation exposure while improving image quality in comparison with 120 kV imaging with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
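The contrast-to-noise ratio reported above is conventionally computed from ROI statistics, roughly as in this sketch (exact ROI placement and the choice of noise estimate are study-specific assumptions, and the function name is ours):

```python
import numpy as np

def cnr(vessel_roi, reference_roi):
    """CNR = (mean vessel attenuation - mean reference) / image noise (SD)."""
    noise = reference_roi.std()            # noise estimated in the reference ROI
    return (vessel_roi.mean() - reference_roi.mean()) / noise
```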
Ibrahim, Reham S; Fathy, Hoda
2018-03-30
Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally enhanced HPTLC method in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple and fast HPTLC image analysis method for simultaneous quantification of the officially recognized markers 6-, 8-, 10-gingerol and 6-shogaol, in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying and storage employed during processing have a great influence on the ginger chemo-profile, so the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Wiseman, Marcie C.; Moradi, Bonnie
2010-01-01
On the basis of integrating objectification theory research with research on body image and eating problems among sexual minority men, the present study examined relations among sociocultural and psychological correlates of eating disorder symptoms with a sample of 231 sexual minority men. Results of a path analysis supported tenets of…
Nicholson, C; Tao, L
1993-12-01
This paper describes the theory of an integrative optical imaging system and its application to the analysis of the diffusion of 3-, 10-, 40-, and 70-kDa fluorescent dextran molecules in agarose gel and the brain extracellular microenvironment. The method uses a precisely defined source of fluorescent molecules pressure-ejected from a micropipette, and a detailed theory of the intensity contributions from out-of-focus molecules in a three-dimensional medium to a two-dimensional image. Dextrans tagged with either tetramethylrhodamine or Texas Red were ejected into 0.3% agarose gel or rat cortical slices maintained in a perfused chamber at 34°C and imaged using a compound epifluorescent microscope with a 10× water-immersion objective. About 20 images were taken at 2-10-s intervals, recorded with a cooled CCD camera, then transferred to a 486 PC for quantitative analysis. The diffusion coefficient in agarose gel, D, and the apparent diffusion coefficient, D*, in brain tissue were determined by fitting an integral expression relating the measured two-dimensional image intensity to the theoretical three-dimensional dextran concentration. The measurements in dilute agarose gel provided a reference value of D and validated the method. Values of the tortuosity, λ = (D/D*)^(1/2), for the 3- and 10-kDa dextrans were 1.70 and 1.63, respectively, which were consistent with previous values derived from tetramethylammonium measurements in cortex. Tortuosities for the 40- and 70-kDa dextrans had significantly larger values of 2.16 and 2.25, respectively. This suggests that the extracellular space may have local constrictions that hinder the diffusion of molecules above a critical size that lies in the range of many neurotrophic compounds.
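The tortuosity relation is simple enough to verify numerically; a worked example with the reported 3-kDa value, assuming an arbitrary reference D:

```python
D_gel = 1.0                      # reference diffusion coefficient (arbitrary units)
D_star = D_gel / 1.70 ** 2       # apparent coefficient implied by lambda = 1.70
lam = (D_gel / D_star) ** 0.5    # tortuosity: lambda = (D / D*)^(1/2)
print(round(lam, 2))             # -> 1.7
```

A tortuosity of 1.70 thus corresponds to an apparent diffusion coefficient roughly 2.9 times smaller than in free gel, which is why the larger reported values for 40- and 70-kDa dextrans imply markedly stronger hindrance.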
CMOS Time-Resolved, Contact, and Multispectral Fluorescence Imaging for DNA Molecular Diagnostics
Guo, Nan; Cheung, Ka Wai; Wong, Hiu Tung; Ho, Derek
2014-01-01
Instrumental limitations such as bulkiness and high cost prevent the fluorescence technique from becoming ubiquitous for point-of-care deoxyribonucleic acid (DNA) detection and other in-field molecular diagnostics applications. The complementary metal-oxide-semiconductor (CMOS) technology, benefiting from process scaling, provides several advanced capabilities such as high integration density, high-resolution signal processing, and low power consumption, enabling sensitive, integrated, and low-cost fluorescence analytical platforms. In this paper, CMOS time-resolved, contact, and multispectral imaging are reviewed. Recently reported CMOS fluorescence analysis microsystem prototypes are surveyed to highlight the present state of the art. PMID:25365460
Current and future trends in marine image annotation software
NASA Astrophysics Data System (ADS)
Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.
2016-12-01
Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) has enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input consist basically of a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main modes of operation: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation, point count, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed and future trends are presented.
Mingus Discontinuous Multiphysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pat Notz, Dan Turner
Mingus provides hybrid coupled local/non-local mechanics analysis capabilities that extend several traditional methods to applications with inherent discontinuities. Its primary features include adaptations of solid mechanics, fluid dynamics and digital image correlation that naturally accommodate disjointed data or irregular solution fields by assimilating a variety of discretizations (such as control volume finite elements, peridynamics and meshless control point clouds). The goal of this software is to provide an analysis framework for multiphysics engineering problems with an integrated image correlation capability that can be used for experimental validation and model
Geyer, Lucas L; Glenn, G Russell; De Cecco, Carlo Nicola; Van Horn, Mark; Canstein, Christian; Silverman, Justin R; Krazinski, Aleksander W; Kemper, Jenny M; Bucher, Andreas; Ebersberger, Ullrich; Costello, Philip; Bamberg, Fabian; Schoepf, U Joseph
2015-09-01
To use suitable objective methods of analysis to assess the influence of the combination of an integrated-circuit computed tomographic (CT) detector and iterative reconstruction (IR) algorithms on the visualization of small (≤3-mm) coronary artery stents. By using a moving heart phantom, 18 data sets obtained from three coronary artery stents with small diameters were investigated. A second-generation dual-source CT system equipped with an integrated-circuit detector was used. Images were reconstructed with filtered back-projection (FBP) and IR at a section thickness of 0.75 mm (FBP75 and IR75, respectively) and IR at a section thickness of 0.50 mm (IR50). Multirow intensity profiles in Hounsfield units were modeled by using a sum-of-Gaussians fit to analyze in-plane image characteristics. Out-of-plane image characteristics were analyzed with z upslope of multicolumn intensity profiles in Hounsfield units. Statistical analysis was conducted with one-way analysis of variance and the Student t test. Independent of stent diameter and heart rate, IR75 resulted in significantly increased xy sharpness, signal-to-noise ratio, and contrast-to-noise ratio, as well as decreased blurring and noise compared with FBP75 (eg, 2.25-mm stent, 0 beats per minute; xy sharpness, 278.2 vs 252.3; signal-to-noise ratio, 46.6 vs 33.5; contrast-to-noise ratio, 26.0 vs 16.8; blurring, 1.4 vs 1.5; noise, 15.4 vs 21.2; all P < .001). In the z direction, the upslopes were substantially higher in the IR50 reconstructions (2.25-mm stent: IR50, 94.0; IR75, 53.1; and FBP75, 48.1; P < .001). The implementation of an integrated-circuit CT detector provides substantially sharper out-of-plane resolution of coronary artery stents at 0.5-mm section thickness, while the use of iterative image reconstruction mostly improves in-plane stent visualization.
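To make the profile-fitting step concrete, below is a minimal sketch of a sum-of-Gaussians fit to a single synthetic cross-stent intensity profile using scipy; the two-lobe model, parameter names and noise level are our assumptions, not the study's exact implementation.

    import numpy as np
    from scipy.optimize import curve_fit

    def sum_of_gaussians(x, a1, mu1, s1, a2, mu2, s2, offset):
        """Two Gaussian lobes (the stent walls) on a constant background."""
        return (a1 * np.exp(-(x - mu1)**2 / (2 * s1**2))
                + a2 * np.exp(-(x - mu2)**2 / (2 * s2**2)) + offset)

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 121)                    # position across the stent, mm
    profile = sum_of_gaussians(x, 300, -1.1, 0.3, 300, 1.1, 0.3, 40)
    profile += rng.normal(0, 15, x.size)           # CT noise, HU

    p0 = [250, -1.0, 0.4, 250, 1.0, 0.4, 0]        # rough initial guess
    popt, _ = curve_fit(sum_of_gaussians, x, profile, p0=p0)
    # Sharpness can then be scored from the fitted widths s1, s2 (narrower = sharper).
    print(popt)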
Information theoretic analysis of linear shift-invariant edge-detection operators
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2012-06-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and on the quality of the resulting images. Huck et al. proposed a definitive information-theoretic analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. An edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.
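For readers unfamiliar with the class of operators being assessed: a linear shift-invariant edge detector applies the same convolution kernel at every pixel, so it is fully characterized by its frequency response, which is what makes an information-theoretic treatment tractable. A minimal sketch with the familiar Sobel kernels (our choice of example, not necessarily one analyzed by the authors):

    import numpy as np
    from scipy.ndimage import convolve

    # Sobel kernels: one fixed kernel per direction, applied everywhere,
    # hence linear and shift-invariant.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T

    image = np.random.rand(64, 64)       # stand-in for an acquired image
    gx = convolve(image, kx)             # horizontal gradient estimate
    gy = convolve(image, ky)             # vertical gradient estimate
    edge_magnitude = np.hypot(gx, gy)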
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin
Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa
2016-08-01
For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in an either voluntary or involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors firstly built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. The resulting average dice similarity coefficient (93.28% ± 1.46%) and margin error (0.49 ± 0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
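The linked statistical shapes are point distribution models in the classic PCA sense: align training contours, compute the mean, and keep the leading modes of variation. A minimal sketch of that construction (landmark counts and data are synthetic; the authors' linked face/face-jaw/face-jaw-palate structure is not reproduced here):

    import numpy as np

    # Training contours: N examples, each a set of K landmark points (x, y),
    # flattened to 2K-vectors. Shapes are assumed already aligned.
    N, K = 20, 50
    shapes = np.random.rand(N, 2 * K)       # stand-in for delineated contours

    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # PCA via SVD of the centered training matrix
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenvalues = S**2 / (N - 1)
    modes = Vt                               # rows: principal modes of variation

    # A new plausible shape is the mean plus a weighted sum of a few modes,
    # with weights b typically constrained, e.g. |b_i| <= 3*sqrt(eigenvalue_i).
    b = np.zeros(len(eigenvalues))
    b[0] = 2.0 * np.sqrt(eigenvalues[0])
    new_shape = mean_shape + b @ modes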
Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David
2013-08-01
A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.
Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging.
Ding, Lei; Yuan, Han
2013-04-01
Electroencephalography (EEG) and magnetoencephalography (MEG) have different sensitivities to differently configured brain activations, making them complementary in providing independent information for better detection and inverse reconstruction of brain sources. In the present study, we developed an integrative approach, which integrates a novel sparse electromagnetic source imaging method, i.e., variation-based cortical current density (VB-SCCD), together with the combined use of EEG and MEG data in reconstructing complex brain activity. To perform simultaneous analysis of multimodal data, we proposed to normalize EEG and MEG signals according to their individual noise levels to create unit-free measures. Our Monte Carlo simulations demonstrated that this integrative approach is capable of reconstructing complex cortical brain activations (up to 10 simultaneously activated and randomly located sources). Results from experimental data showed that complex brain activations evoked in a face recognition task were successfully reconstructed using the integrative approach, which were consistent with other research findings and validated by independent data from functional magnetic resonance imaging using the same stimulus protocol. Reconstructed cortical brain activations from both simulations and experimental data provided precise source localizations as well as accurate spatial extents of localized sources. In comparison with studies using EEG or MEG alone, the performance of cortical source reconstructions using combined EEG and MEG was significantly improved. We demonstrated that this new sparse ESI methodology with integrated analysis of EEG and MEG data could accurately probe spatiotemporal processes of complex human brain activations. This is promising for noninvasively studying large-scale brain networks of high clinical and scientific significance. Copyright © 2011 Wiley Periodicals, Inc.
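The noise-based normalization that makes EEG and MEG jointly usable can be illustrated in a few lines; the pre-stimulus noise estimate and channel counts below are illustrative assumptions, and the paper's exact whitening scheme may differ.

    import numpy as np

    def normalize_by_noise(data, noise_segment):
        """Scale each channel by its own noise standard deviation,
        yielding unit-free measures comparable across modalities."""
        noise_std = noise_segment.std(axis=1, keepdims=True)
        return data / noise_std

    eeg = np.random.randn(64, 1000) * 5e-6        # 64 channels, volts
    meg = np.random.randn(102, 1000) * 1e-13      # 102 channels, tesla
    eeg_u = normalize_by_noise(eeg, eeg[:, :200])  # pre-stimulus noise estimate
    meg_u = normalize_by_noise(meg, meg[:, :200])
    combined = np.vstack([eeg_u, meg_u])           # joint, unit-free measurement vector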
NASA Astrophysics Data System (ADS)
Bangs, Corey F.; Kruse, Fred A.; Olsen, Chris R.
2013-05-01
Hyperspectral data were assessed to determine the effect of integrating spectral data and extracted texture feature data on classification accuracy. Four separate spectral ranges (hundreds of spectral bands total) were used from the Visible and Near Infrared (VNIR) and Shortwave Infrared (SWIR) portions of the electromagnetic spectrum. Haralick texture features (contrast, entropy, and correlation) were extracted from the average gray-level image for each of the four spectral ranges studied. A maximum likelihood classifier was trained using a set of ground truth regions of interest (ROIs) and applied separately to the spectral data, texture data, and a fused dataset containing both. Classification accuracy was measured by comparison of results to a separate verification set of test ROIs. Analysis indicates that the spectral range (source of the gray-level image) used to extract the texture feature data has a significant effect on the classification accuracy. This result applies to texture-only classifications as well as the classification of integrated spectral data and texture feature data sets. Overall classification improvement for the integrated data sets was near 1%. Individual improvement for integrated spectral and texture classification of the "Urban" class showed approximately 9% accuracy increase over spectral-only classification. Texture-only classification accuracy was highest for the "Dirt Path" class at approximately 92% for the spectral range from 947 to 1343 nm. This research demonstrates the effectiveness of texture feature data for more accurate analysis of hyperspectral data and the importance of selecting the correct spectral range to be used for the gray-level image source to extract these features.
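A minimal sketch of extracting the three Haralick features named above from a gray-level co-occurrence matrix, using scikit-image (in releases before 0.19 these functions are spelled greycomatrix/greycoprops); the random image, distance and angle settings are placeholders:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    gray = (np.random.rand(128, 128) * 255).astype(np.uint8)  # average gray-level image

    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    contrast = graycoprops(glcm, 'contrast')[0, 0]
    correlation = graycoprops(glcm, 'correlation')[0, 0]
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # not provided by graycoprops
    features = [contrast, entropy, correlation]       # texture bands to fuse with spectra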
Twin imaging phenomenon of integral imaging.
Hu, Juanmei; Lou, Yimin; Wu, Fengmin; Chen, Aixi
2018-05-14
The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of the conclusions are only suitable for integral imaging systems using diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused-illumination reflective integral imaging system. Interactive twin images, comprising a real and a virtual 3D image of one object, can be activated in the system. The imaging phenomenon is similar to the conjugate imaging effect of a hologram, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems 80 μm thick have also been made to verify the imaging phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) light source. When the LED moves, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.
Al-Bayati, Mohammad; Grueneisen, Johannes; Lütje, Susanne; Sawicki, Lino M; Suntharalingam, Saravanabavaan; Tschirdewahn, Stephan; Forsting, Michael; Rübben, Herbert; Herrmann, Ken; Umutlu, Lale; Wetter, Axel
2018-01-01
To evaluate diagnostic accuracy of integrated 68Gallium labelled prostate-specific membrane antigen (68Ga-PSMA)-11 positron emission tomography (PET)/MRI in patients with primary prostate cancer (PCa) as compared to multi-parametric MRI. A total of 22 patients with recently diagnosed primary PCa underwent clinically indicated 68Ga-PSMA-11 PET/CT for initial staging followed by integrated 68Ga-PSMA-11 PET/MRI. Images of multi-parametric magnetic resonance imaging (mpMRI), PET and PET/MRI were evaluated separately by applying Prostate Imaging Reporting and Data System (PIRADSv2) for mpMRI and a 5-point Likert scale for PET and PET/MRI. Results were compared with pathology reports of biopsy or resection. Statistical analyses including receiver operating characteristics analysis were performed to compare the diagnostic performance of mpMRI, PET and PET/MRI. PET and integrated PET/MRI demonstrated a higher diagnostic accuracy than mpMRI (area under the curve: mpMRI: 0.679, PET and PET/MRI: 0.951). The proportion of equivocal results (PIRADS 3 and Likert 3) was considerably higher in mpMRI than in PET and PET/MRI. In a notable proportion of equivocal PIRADS results, PET led to a correct shift towards higher suspicion of malignancy and enabled correct lesion classification. Integrated 68Ga-PSMA-11 PET/MRI demonstrates higher diagnostic accuracy than mpMRI and is particularly valuable in tumours with equivocal results from PIRADS classification. © 2018 S. Karger AG, Basel.
Integrating legacy tools and data sources
DOT National Transportation Integrated Search
1999-01-01
Under DARPA and internal funding, Lockheed Martin has been researching information needs profiling to manage information dissemination as applied to logistics, image analysis and exploitation, and battlefield information management. We have demonstra...
Analysis of Orientations of Collagen Fibers by Novel Fiber-Tracking Software
NASA Astrophysics Data System (ADS)
Wu, Jun; Rajwa, Bartlomiej; Filmer, David L.; Hoffmann, Christoph M.; Yuan, Bo; Chiang, Ching-Shoei; Sturgis, Jennie; Robinson, J. Paul
2003-12-01
Recent evidence supports the notion that the biological functions of the extracellular matrix (ECM) are highly correlated not only with its composition but also with its structure. This article integrates confocal microscopy imaging and image-processing techniques to analyze the microstructural properties of the ECM. It describes a two- and three-dimensional fiber middle-line tracing algorithm that may be used to quantify collagen fibril organization. We utilized computer simulation and statistical analysis to validate the developed algorithm. The algorithms were then applied to confocal images of collagen gels made with reconstituted bovine collagen type I to demonstrate the computation of the orientations of individual fibers.
He, Jingzhen; Zu, Yuliang; Wang, Qing; Ma, Xiangxing
2014-12-01
The purpose of this study was to determine the performance of low-dose computed tomography (CT) scanning with an integrated circuit (IC) detector in defining fine structures of the temporal bone in children, by comparison with a conventional detector. The study was performed with the approval of our institutional review board and the patients' anonymity was maintained. A total of 86 children <3 years of age underwent imaging of the temporal bone with low-dose CT (80 kV/150 mAs) equipped with either an IC detector or a conventional discrete-circuit (DC) detector. The image noise was measured for quantitative analysis. Thirty-five structures of the temporal bone were further assessed and rated by 2 radiologists for qualitative analysis. κ statistics were used to determine the agreement between the 2 radiologists on each image. The Mann-Whitney U test was used to determine the difference in image quality between the 2 detector systems. Objective analysis showed that the image noise was significantly lower (P<0.001) with the IC detector than with the DC detector. The κ values for qualitative assessment of the 35 fine anatomical structures revealed high interobserver agreement. The delineation of 30 of the 35 landmarks (86%) with the IC detector was superior to that with the conventional DC detector (P<0.05), although there were no differences in the delineation of the remaining 5 structures (P>0.05). The low-dose CT images acquired with the IC detector provide better depiction of fine osseous structures of the temporal bone than those acquired with the conventional DC detector.
Sun, Yajuan; Yu, Hongjuan; Ma, Jingquan
2016-01-01
Objective The aim of our study was to evaluate the role of 18F-FDG PET/CT integrated imaging in differentiating malignant from benign pleural effusion. Methods A total of 176 patients with pleural effusion who underwent 18F-FDG PET/CT examination to differentiate malignancy from benignancy were retrospectively reviewed. The images of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging were visually analyzed. Suspected malignant effusion was characterized by the presence of nodular or irregular pleural thickening on CT imaging, whereas on PET imaging, pleural 18F-FDG uptake higher than mediastinal activity was interpreted as indicating malignant effusion. Images of 18F-FDG PET/CT integrated imaging were interpreted by combining the morphologic features of the pleura on CT imaging with the degree and form of pleural 18F-FDG uptake on PET imaging. Results One hundred and eight patients had malignant effusion, including 86 with pleural metastasis and 22 with pleural mesothelioma, whereas 68 patients had benign effusion. The sensitivities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging in detecting malignant effusion were 75.0%, 91.7% and 93.5%, respectively, and were 69.8%, 91.9% and 93.0% in distinguishing metastatic effusion. The sensitivity of 18F-FDG PET/CT integrated imaging in detecting malignant effusion was higher than that of CT imaging (p = 0.000). For metastatic effusion, 18F-FDG PET imaging had higher sensitivity (p = 0.000) and better diagnostic consistency with 18F-FDG PET/CT integrated imaging than CT imaging (Kappa = 0.917 and Kappa = 0.295, respectively). The specificities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging in detecting benign effusion were 94.1%, 63.2% and 92.6%. The specificities of CT imaging and 18F-FDG PET/CT integrated imaging were higher than that of 18F-FDG PET imaging (p = 0.000 and p = 0.000, respectively), and CT imaging had better diagnostic consistency with 18F-FDG PET/CT integrated imaging than 18F-FDG PET imaging (Kappa = 0.881 and Kappa = 0.240, respectively). Conclusion 18F-FDG PET/CT integrated imaging is a more reliable modality for distinguishing malignant from benign pleural effusion than 18F-FDG PET imaging or CT imaging alone. In the interpretation of 18F-FDG PET/CT integrated images, the PET and CT portions play the major diagnostic role in identifying metastatic effusion and benign effusion, respectively. PMID:27560933
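The headline figures reduce to simple ratios. As a worked check (a sketch, with the true/false counts inferred from the reported percentages: 101 of 108 malignant effusions detected gives 93.5%, and 63 of 68 benign effusions correctly classified gives 92.6%):

    def sensitivity_specificity(tp, fn, tn, fp):
        """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
        return tp / (tp + fn), tn / (tn + fp)

    # PET/CT integrated imaging: counts inferred from the reported percentages.
    sens, spec = sensitivity_specificity(tp=101, fn=7, tn=63, fp=5)
    print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")  # 0.935, 0.926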
Artifacts in magnetic spirals retrieved by transport of intensity equation (TIE)
NASA Astrophysics Data System (ADS)
Cui, J.; Yao, Y.; Shen, X.; Wang, Y. G.; Yu, R. C.
2018-05-01
The artifacts in magnetic structures reconstructed from Lorentz transmission electron microscopy (LTEM) images with the TIE method have been analyzed in detail. Processing of simulated images of Bloch and Néel spirals indicated that improper parameters in TIE may overestimate high-frequency information and induce false features in the retrieved images. Specimen tilting further complicates the analysis of the images, because the LTEM image contrast is not the result of the magnetization distribution within the specimen alone but of the integral projection of the magnetic induction filling the entire space, including the specimen.
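For orientation, a minimal sketch of the standard Fourier-space TIE phase retrieval under the simplifying assumption of nearly uniform intensity; the Tikhonov regularization parameter alpha below is exactly the kind of processing parameter whose improper choice can distort the retrieved structure (grid size, wavenumber and alpha are illustrative):

    import numpy as np

    def tie_phase(dIdz, I0, k, dx, alpha=1e-3):
        """Recover phase from the through-focus intensity derivative via the
        transport of intensity equation, assuming nearly uniform intensity I0:
            laplacian(phi) = -(k / I0) * dI/dz
        Solved in Fourier space with Tikhonov-regularized inverse Laplacian."""
        ny, nx = dIdz.shape
        qx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        qy = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
        QX, QY = np.meshgrid(qx, qy)
        q2 = QX**2 + QY**2
        rhs = -(k / I0) * dIdz
        # Inverse Laplacian: divide by -q^2, regularized near q = 0
        phi_hat = np.fft.fft2(rhs) * (-q2 / (q2**2 + alpha))
        return np.real(np.fft.ifft2(phi_hat))

    # Example with synthetic data (300 keV electrons, ~1.97 pm wavelength):
    dIdz = np.random.rand(128, 128) - 0.5
    phi = tie_phase(dIdz, I0=1.0, k=2 * np.pi / 1.97e-12, dx=1e-9)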
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khan, S. F.; Izumi, N.; Glenn, S.
At the National Ignition Facility, the symmetry of the hot spot of imploding capsules is diagnosed by imaging the emitted x-rays using gated cameras and image plates. The symmetry of an implosion is an important factor in the yield generated from the resulting fusion process. The x-ray images are analyzed by decomposing the image intensity contours into Fourier and Legendre modes. This paper focuses on the additional protocols for the time-integrated shape analysis from image plates. Here, for implosions with temperatures above ~4 keV, the hard x-ray background can be utilized to infer the temperature of the hot spot.
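The azimuthal (Fourier) part of such a mode decomposition is easy to sketch: sample the contour radius uniformly in angle and read the mode amplitudes off an FFT. The contour below, with a deliberate mode-2 distortion, is synthetic; a Legendre decomposition would follow the same pattern with numpy.polynomial.legendre.

    import numpy as np

    def fourier_modes(radius, n_modes=4):
        """Decompose a hot-spot contour r(theta), sampled uniformly in angle,
        into azimuthal Fourier modes; amplitudes relative to the mean radius
        quantify the implosion asymmetry."""
        coeffs = np.fft.rfft(radius) / len(radius)
        r0 = coeffs[0].real                        # mean radius (mode 0)
        amplitudes = 2 * np.abs(coeffs[1:n_modes + 1]) / r0
        return r0, amplitudes

    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    r = 50 + 3 * np.cos(2 * theta)                 # a mode-2 distortion, pixels
    r0, amps = fourier_modes(r)
    print(r0, amps)                                # mode-2 amplitude ~= 3/50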
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image, which is a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
Racoceanu, Daniel; Capron, Frédérique
2016-01-01
Being able to provide a traceable and dynamic second opinion has become an ethical priority for patients and health care professionals in modern computer-aided medicine. In this perspective, a semantic cognitive virtual microscopy approach was recently initiated with the MICO project, focusing on cognitive digital pathology. The approach supports the elaboration of pathology-compliant daily protocols dedicated to breast cancer grading, in particular mitotic counts and nuclear atypia. A proof of concept has been elaborated, and an extension of these approaches is now underway in a collaborative digital pathology framework, the FlexMIm project. As important milestones on the way to routine digital pathology, a series of pioneering international benchmarking initiatives have been launched for mitosis detection (MITOS), nuclear atypia grading (MITOS-ATYPIA) and glandular structure detection (GlaS), some of the fundamental grading components in diagnosis and prognosis. These initiatives make it possible to envisage a consolidated validation reference database for digital pathology in the very near future. This reference database will need coordinated efforts from all major teams working in this area worldwide, and it will certainly represent a critical bottleneck for the acceptance of all future imaging modules in clinical practice. In line with recent advances in molecular imaging and genetics, keeping the microscopic modality at the core of future digital systems in pathology is fundamental to ensure the acceptance of these new technologies, as well as for a deeper, systemic, structured comprehension of the pathologies. After all, at the scale of routine whole-slide imaging (WSI; ∼0.22 µm/pixel), the microscopic image represents a structured 'genomic cluster', providing a naturally structured support for integrative digital pathology approaches. In order to accelerate and structure the integration of this heterogeneous information, a major effort is and will continue to be devoted to morphological microsemiology (microscopic morphology semantics). Besides ensuring the traceability of the results (second opinion) and supporting the orchestration of high-content image analysis modules, the role of semantics will be crucial for the correlation between digital pathology and noninvasive medical imaging modalities. In addition, semantics has an important role in modelling the links between traditional microscopy and recent label-free technologies. The massive amount of visual data is challenging and is an intrinsic characteristic of digital pathology. The design of an operational integrative microscopy framework needs to focus on a scalable multiscale imaging formalism. In this sense, we prospectively consider some of the most recent scalable methodologies adapted to digital pathology, such as marked point processes for nuclear atypia and point-set mathematical morphology for architecture grading. To orchestrate this scalable framework, semantics-based WSI management (analysis, exploration, indexing, retrieval and report generation support) represents an important means towards integrating big data into biomedicine. This insight reflects our vision through an instantiation of the essential building blocks of this type of architecture. The generic approach introduced here is applicable to a number of challenges related to molecular imaging, high-content image management and, more generally, bioinformatics. © 2016 S. Karger AG, Basel.
Automated identification of retained surgical items in radiological images
NASA Astrophysics Data System (ADS)
Agam, Gady; Gan, Lin; Moric, Mario; Gluncic, Vicko
2015-03-01
Retained surgical items (RSIs) in patients are a major operating room (OR) patient safety concern. An RSI is any surgical tool, sponge, needle or other item inadvertently left in a patient's body during the course of surgery. If left undetected, RSIs may lead to serious negative health consequences such as sepsis, internal bleeding, and even death. To help physicians efficiently and effectively detect RSIs, we are developing computer-aided detection (CADe) software for X-ray (XR) image analysis, utilizing large amounts of currently available image data to produce a clinically effective RSI detection system. Physician analysis of XRs for the purpose of RSI detection is a relatively lengthy process that may take up to 45 minutes to complete. It is also error prone, due to the relatively low acuity of the human eye for RSIs in XR images. The system we are developing is based on computer vision and machine learning algorithms. We address the problem of low incidence by proposing synthesis algorithms. The CADe software we are developing may be integrated into a picture archiving and communication system (PACS), be implemented as a stand-alone software application, or be integrated into portable XR machine software through application programming interfaces. Preliminary experimental results on actual XR images demonstrate the effectiveness of the proposed approach.
Methods and potentials for using satellite image classification in school lessons
NASA Astrophysics Data System (ADS)
Voss, Kerstin; Goetzke, Roland; Hodam, Henryk
2011-11-01
The FIS project - FIS stands for Fernerkundung in Schulen (Remote Sensing in Schools) - aims at a better integration of the topic of satellite remote sensing into school lessons. Accordingly, the overarching objective is to teach pupils basic knowledge of remote sensing and its fields of application. Despite the growing significance of digital geomedia, the topic of remote sensing is not broadly supported in schools. Often it is reduced to a short reflection on satellite images and used only as additional illustration of issues relevant to the curriculum. Without addressing the handling of image data, this can hardly contribute to the improvement of the pupils' methodological competences. Because remote sensing covers more than simple visual interpretation of satellite images, it is necessary to integrate remote sensing methods such as preprocessing, classification and change detection. Dealing with these topics often fails because of confusing background information and the lack of easy-to-use software. Based on these insights, the FIS project created several simple analysis tools for remote sensing in school lessons, which enable teachers as well as pupils to approach the topic in a structured way. The functionality and fields of application of these analysis tools are presented in detail with the help of three different tools for satellite image classification.
Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) for WFIRST/AFTA
NASA Technical Reports Server (NTRS)
Gong, Qian; McElwain, Michael; Greeley, Bradford; Grammer, Bryan; Marx, Catherine; Memarsadeghi, Nargess; Hilton, George; Perrin, Marshall; Sayson, Llop; Domingo, Jorge;
2015-01-01
Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) is a prototype lenslet-array-based integral field spectrometer (IFS) designed for high-contrast imaging of extrasolar planets. PISCES will be used to advance the technology readiness of the high-contrast IFS baselined for the Wide-Field InfraRed Survey Telescope-Astrophysics Focused Telescope Assets (WFIRST/AFTA) coronagraph instrument. PISCES will be integrated into the high contrast imaging testbed (HCIT) at the Jet Propulsion Laboratory and will work with both the Hybrid Lyot Coronagraph (HLC) and the Shaped Pupil Coronagraph (SPC). We present the PISCES optical design, including the similarities and differences between lenslet-based IFSs and conventional spectrometers, the trade-off between a refractive and a reflective design, and the compatibility with upgrading from the current 1k x 1k detector array to a 4k x 4k detector array. The optical analysis, alignment plan, and mechanical design of the instrument are also discussed.
A neural network ActiveX based integrated image processing environment.
Ciuca, I; Jitaru, E; Alaicescu, M; Moisil, I
2000-01-01
The paper outlines an integrated image processing environment that uses neural network ActiveX technology for object recognition and classification. The image processing environment, which is Windows based, encapsulates a Multiple-Document Interface (MDI) and is menu driven. Object (shape) parameter extraction focuses on features that are invariant under translation, rotation and scale transformations. The neural network models that can be incorporated as ActiveX components into the environment allow both clustering and classification of objects from the analysed image. Mapping neural networks perform an input sensitivity analysis on the extracted feature measurements and thus facilitate the removal of irrelevant features and improvements in the degree of generalisation. The program has been used to assess modifications of the ventricular system in hydrocephalus, in a study calculating the Evans index and the angle of the frontal horns.
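Hu's invariant moments are the classic example of shape descriptors invariant under translation, rotation and scale; the sketch below (using scikit-image, our choice of library rather than anything named in the paper) shows the usual chain from central to normalized to Hu moments:

    import numpy as np
    from skimage.measure import moments_central, moments_normalized, moments_hu

    # Binary mask of a segmented object.
    mask = np.zeros((100, 100))
    mask[30:70, 40:60] = 1                  # a simple rectangular blob

    mu = moments_central(mask)              # translation-invariant
    nu = moments_normalized(mu)             # + scale-invariant
    hu = moments_hu(nu)                     # + rotation-invariant, 7 values
    print(hu)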
Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D.
2009-01-01
Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities, including the interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, which indicated that reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and a single nucleotide polymorphism array produced interesting findings. PMID:19834575
Liu, Jingyu; Demirci, Oguz; Calhoun, Vince D
2008-01-01
Relationships between genomic data and functional brain images are of great interest but require new analysis approaches to integrate the high-dimensional data types. This letter presents an extension of a technique called parallel independent component analysis (paraICA), which enables the joint analysis of multiple modalities, including the interconnections between them. We extend our earlier work by allowing for multiple interconnections and by providing important overfitting controls. Performance was assessed by simulations under different conditions, which indicated that reliable results can be extracted by properly balancing overfitting and underfitting. An application to functional magnetic resonance images and a single nucleotide polymorphism array produced interesting findings.
The Precision Formation Flying Integrated Analysis Tool (PFFIAT)
NASA Technical Reports Server (NTRS)
Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor
2004-01-01
Several space missions presently in the concept phase (e.g. Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedentedly large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g. solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g. phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.
The Precision Formation Flying Integrated Analysis Tool (PFFIAT)
NASA Technical Reports Server (NTRS)
Stoneking, Eric; Lyon, Richard G.; Sears, Edie; Lu, Victor
2004-01-01
Several space missions presently in the concept phase (e.g. Stellar Imager, Submillimeter Probe of Evolutionary Cosmic Structure, Terrestrial Planet Finder) plan to use multiple spacecraft flying in precise formation to synthesize unprecedentedly large aperture optical systems. These architectures present challenges to the attitude and position determination and control system; optical performance is directly coupled to spacecraft pointing with typical control requirements being on the scale of milliarcseconds and nanometers. To investigate control strategies, rejection of environmental disturbances, and sensor and actuator requirements, a capability is needed to model both the dynamical and optical behavior of such a distributed telescope system. This paper describes work ongoing at NASA Goddard Space Flight Center toward the integration of a set of optical analysis tools (Optical System Characterization and Analysis Research software, or OSCAR) with the Formation Flying Test Bed (FFTB). The resulting system is called the Precision Formation Flying Integrated Analysis Tool (PFFIAT), and it provides the capability to simulate closed-loop control of optical systems composed of elements mounted on multiple spacecraft. The attitude and translation spacecraft dynamics are simulated in the FFTB, including effects of the space environment (e.g. solar radiation pressure, differential orbital motion). The resulting optical configuration is then processed by OSCAR to determine an optical image. From this image, wavefront sensing (e.g. phase retrieval) techniques are being developed to derive attitude and position errors. These error signals will be fed back to the spacecraft control systems, completing the control loop. A simple case study is presented to demonstrate the present capabilities of the tool.
NASA Astrophysics Data System (ADS)
Ma, Kevin; Liu, Joseph; Zhang, Xuejun; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent
2016-03-01
We have designed and developed a multiple sclerosis eFolder system for patient data storage and image viewing, with automatic lesion quantification results stored in DICOM-SR format. The web-based system aims to be integrated into DICOM-compliant clinical and research environments to aid clinicians in patient treatment and data analysis. The system needs to quantify lesion volumes and to identify and register lesion locations in order to track shifts in the volume and number of lesions in a longitudinal study. To perform lesion registration, we have developed a brain warping and normalization methodology using the Statistical Parametric Mapping (SPM) MATLAB toolkit for brain MRI. Patients' brain MR images are processed via SPM's normalization routines, and the brain images are analyzed and warped according to the tissue probability map. Lesion identification and contouring are completed by neuroradiologists, and lesion volume quantification is completed by the eFolder's CAD program. Lesion comparison in longitudinal studies highlights key growth and active regions, and the results demonstrate successful lesion registration and tracking over a longitudinal study. Lesion changes are graphically represented in the web-based user interface, and users are able to correlate patient progress with changes in the MRI images. The completed lesion and disease tracking tool will enable the eFolder to provide complete patient profiles, improve the efficiency of patient care, and support comprehensive data analysis through an integrated imaging informatics system.
Integrated thermal disturbance analysis of optical system of astronomical telescope
NASA Astrophysics Data System (ADS)
Yang, Dehua; Jiang, Zibo; Li, Xinnan
2008-07-01
During operation, an astronomical telescope undergoes thermal disturbance, which is especially serious in solar telescopes and may degrade image quality. This drives careful investigation of thermal loads during the design phase, and measures must be applied to assess their effect on final image quality. Integrated modeling analysis boosts the process of finding a comprehensively optimal design scheme by software simulation. In this paper, we focus on the finite element analysis (FEA) software ANSYS for thermal disturbance analysis and the optical design software ZEMAX for optical system design. The integrated model based on ANSYS and ZEMAX is first briefly described from an overview point of view. Afterwards, we discuss the establishment of the thermal model. A complete power-series polynomial in the spatial coordinates is introduced to represent the temperature field analytically. We also borrow a linear interpolation technique, derived from the shape functions of finite element theory, to interface the thermal model with the structural model and to apply the temperatures to the structural model nodes. Thereby, the thermal loads are transferred with as high fidelity as possible. The data interface and communication between the two software packages are discussed, mainly for the mirror surfaces and hence for the representation and transformation of the optical figure. We compare and comment on the two methods, Zernike polynomials and power series expansion, for representing a deformed optical surface and transferring it to ZEMAX. The application of these methods to surfaces with non-circular apertures is also discussed. Finally, an optical telescope with a parabolic primary mirror 900 mm in diameter is analyzed to illustrate the above discussion. A finite element model of the parts of the telescope of greatest interest is generated in ANSYS with the necessary structural simplification and equivalences. Thermal analysis is performed, the resulting positions and figures of the optics are retrieved and transferred to ZEMAX, and the final image quality under thermal disturbance is evaluated.
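Representing a deformed mirror figure by Zernike coefficients amounts to a least-squares fit of the FEA node displacements over the unit disk. A minimal sketch with the first few (unnormalized) Zernike terms; the node coordinates, sag error and term set are synthetic stand-ins:

    import numpy as np

    def zernike_basis(rho, theta):
        """First few Zernike terms (piston, tilts, defocus, astigmatism),
        evaluated on the unit disk; enough to illustrate the surface fit."""
        return np.column_stack([
            np.ones_like(rho),              # Z0: piston
            rho * np.cos(theta),            # Z1: x tilt
            rho * np.sin(theta),            # Z2: y tilt
            2 * rho**2 - 1,                 # Z3: defocus
            rho**2 * np.cos(2 * theta),     # Z4: astigmatism
        ])

    # FEA node displacements on the mirror surface (stand-in data):
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 500)
    y = rng.uniform(-1, 1, 500)
    inside = x**2 + y**2 <= 1
    rho, theta = np.hypot(x, y)[inside], np.arctan2(y, x)[inside]
    sag_error = 1e-7 * (2 * rho**2 - 1) + 1e-8 * rng.standard_normal(rho.size)

    A = zernike_basis(rho, theta)
    coeffs, *_ = np.linalg.lstsq(A, sag_error, rcond=None)
    print(coeffs)          # the defocus coefficient should dominate (~1e-7)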
1-Million droplet array with wide-field fluorescence imaging for digital PCR.
Hatch, Andrew C; Fisher, Jeffrey S; Tovar, Armando R; Hsieh, Albert T; Lin, Robert; Pentoney, Stephen L; Yang, David L; Lee, Abraham P
2011-11-21
Digital droplet reactors are useful as chemical and biological containers that discretize reagents into picolitre or nanolitre volumes for the analysis of single cells, organisms, or molecules. However, most DNA-based assays require processing samples on the order of tens of microlitres that contain from one to millions of fragments to be detected. Presented in this work is a droplet microfluidic platform and fluorescence imaging setup designed to better meet these high-throughput and high-dynamic-range needs by integrating multiple high-throughput droplet processing schemes on the chip. The design is capable of generating over 1 million monodisperse, 50-picolitre droplets in 2-7 minutes, which then self-assemble into high-density three-dimensional sphere-packing configurations in a large viewing chamber for visualization and analysis. The droplets then undergo on-chip polymerase chain reaction (PCR) amplification and fluorescence detection to digitally quantify the sample's nucleic acid contents. Wide-field fluorescence images are captured using a low-cost 21-megapixel digital camera and macro lens with an 8-12 cm(2) field of view at 1× to 0.85× magnification, respectively. We demonstrate both end-point and real-time imaging ability to perform on-chip quantitative digital PCR analysis of the entire droplet array. Compared to previous work, this highly integrated design yields a 100-fold increase in the number of on-chip digitized reactors with simultaneous fluorescence imaging for digital PCR based assays.
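Digital quantification from the droplet array rests on a Poisson correction: if a fraction p of droplets fluoresce, the mean number of template copies per droplet is -ln(1 - p). A worked sketch under assumed counts (the 250,000 positives below are illustrative, not results from the paper):

    import numpy as np

    def copies_per_droplet(n_positive, n_total):
        """Poisson correction for digital PCR: with fraction p of droplets
        fluorescing, the mean template copies per droplet is -ln(1 - p)."""
        p = n_positive / n_total
        return -np.log(1.0 - p)

    droplet_volume_l = 50e-12                       # 50 pL droplets
    lam = copies_per_droplet(n_positive=250_000, n_total=1_000_000)
    concentration = lam / droplet_volume_l          # copies per litre of sample
    print(f"{lam:.4f} copies/droplet, {concentration:.3e} copies/L")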
Fully automated muscle quality assessment by Gabor filtering of second harmonic generation images
NASA Astrophysics Data System (ADS)
Paesen, Rik; Smolders, Sophie; Vega, José Manolo de Hoyos; Eijnde, Bert O.; Hansen, Dominique; Ameloot, Marcel
2016-02-01
Although structural changes on the sarcomere level of skeletal muscle are known to occur due to various pathologies, rigorous studies of the reduced sarcomere quality remain scarce. This can possibly be explained by the lack of an objective tool for analyzing and comparing sarcomere images across biological conditions. Recent developments in second harmonic generation (SHG) microscopy and increasing insight into the interpretation of sarcomere SHG intensity profiles have made SHG microscopy a valuable tool to study microstructural properties of sarcomeres. Typically, sarcomere integrity is analyzed by fitting a set of manually selected, one-dimensional SHG intensity profiles with a supramolecular SHG model. To circumvent this tedious manual selection step, we developed a fully automated image analysis procedure to map the sarcomere disorder for the entire image at once. The algorithm relies on a single-frequency wavelet-based Gabor approach and includes a newly developed normalization procedure allowing for unambiguous data interpretation. The method was validated by showing the correlation between the sarcomere disorder, quantified by the M-band size obtained from manually selected profiles, and the normalized Gabor value ranging from 0 to 1 for decreasing disorder. Finally, to elucidate the applicability of our newly developed protocol, Gabor analysis was used to study the effect of experimental autoimmune encephalomyelitis on the sarcomere regularity. We believe that the technique developed in this work holds great promise for high-throughput, unbiased, and automated image analysis to study sarcomere integrity by SHG microscopy.
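A minimal sketch of the single-frequency Gabor filtering step using scikit-image; the sarcomere spatial frequency, orientation and crude max-normalization below are placeholders for the paper's own calibrated normalization procedure:

    import numpy as np
    from skimage.filters import gabor

    # A regular sarcomere pattern responds strongly to a Gabor filter tuned
    # to its spatial frequency, so the local filter magnitude can serve as
    # an order score once properly normalized.
    shg = np.random.rand(256, 256)              # stand-in for an SHG image
    sarcomere_freq = 1 / 10.0                   # cycles/pixel, assumed spacing

    real, imag = gabor(shg, frequency=sarcomere_freq, theta=0)
    magnitude = np.hypot(real, imag)
    order_map = magnitude / magnitude.max()     # crude normalization to [0, 1]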
Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K
2017-10-17
Quantitative computed tomography has been proposed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways-fall-on-the-hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%, p < 0.001) than that measured in images reconstructed using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%, p < 0.001, and 18.2%, p < 0.001, respectively) than those measured in images reconstructed by the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen
2013-06-01
Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management, requiring a complex IT infrastructure spanning multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baron-Aznar, C.; Moreno-Jimenez, S.; Celis, M. A.
2008-08-11
Integrated dose is the total energy delivered to a radiotherapy target. This physical parameter could be a predictor of complications such as brain edema and radionecrosis after stereotactic radiotherapy treatments for brain tumors. Integrated dose depends on the tissue density and volume. Using CT images of patients from the National Institute of Neurology and Neurosurgery and BrainScan(c) software, this work presents the mean density of 21 glioblastomas multiforme, comparative results for normal tissue, and the estimated integrated dose for each case. The relationship between integrated dose and the probability of complications is discussed.
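Since integrated dose is energy, it follows directly from mean dose and target mass (dose in gray is joules per kilogram, and mass is density times volume). A one-line worked sketch with illustrative numbers, not values from the study:

    def integrated_dose_joules(mean_dose_gy, volume_cm3, density_g_cm3):
        """Total energy imparted: dose [Gy = J/kg] x mass [kg],
        with mass = density x volume."""
        mass_kg = density_g_cm3 * volume_cm3 / 1000.0
        return mean_dose_gy * mass_kg

    # e.g. 20 Gy to a 30 cm^3 target of density 1.04 g/cm^3:
    print(integrated_dose_joules(20.0, 30.0, 1.04))   # ~0.62 J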
Quantitative assessment of image motion blur in diffraction images of moving biological cells
NASA Astrophysics Data System (ADS)
Wang, He; Jin, Changrong; Feng, Yuanming; Qi, Dandan; Sa, Yu; Hu, Xin-Hua
2016-02-01
Motion blur (MB) presents a significant challenge for obtaining high-contrast image data from biological cells with the polarization diffraction imaging flow cytometry (p-DIFC) method. A new p-DIFC experimental system with a time-delay-integration (TDI) CCD camera has been developed to evaluate MB and its effect on image analysis. Diffraction images of MCF-7 and K562 cells have been acquired with different speed-mismatch ratios and compared to characterize MB quantitatively. Frequency analysis of the diffraction images shows that the degree of MB can be quantified by bandwidth variations of the diffraction images along the motion direction. The analytical results were confirmed by the p-DIFC image data acquired at different speed-mismatch ratios and used to validate a method for numerical simulation of MB on blur-free diffraction images, which provides a useful tool to examine the blurring effect on diffraction images acquired from the same cell. These results provide insight into the dependence of diffraction images on MB and allow significant improvement in rapid biological cell assays with the p-DIFC method.
Integration of electro-anatomical and imaging data of the left ventricle: An evaluation framework.
Soto-Iglesias, David; Butakoff, Constantine; Andreu, David; Fernández-Armenta, Juan; Berruezo, Antonio; Camara, Oscar
2016-08-01
Integration of electrical and structural information for scar characterization in the left ventricle (LV) is a crucial step to better guide radio-frequency ablation therapies, which are usually performed in complex ventricular tachycardia (VT) cases. This integration requires finding a common representation where to map the electrical information from the electro-anatomical map (EAM) surfaces and tissue viability information from delay-enhancement magnetic resonance images (DE-MRI). However, the development of a consistent integration method is still an open problem due to the lack of a proper evaluation framework to assess its accuracy. In this paper we present both: (i) an evaluation framework to assess the accuracy of EAM and imaging integration strategies with simulated EAM data and a set of global and local measures; and (ii) a new integration methodology based on a planar disk representation where the LV surface meshes are quasi-conformally mapped (QCM) by flattening, allowing for simultaneous visualization and joint analysis of the multi-modal data. The developed evaluation framework was applied to estimate the accuracy of the QCM-based integration strategy on a benchmark dataset of 128 synthetically generated ground-truth cases presenting different scar configurations and EAM characteristics. The obtained results demonstrate a significant reduction in global overlap errors (50-100%) with respect to state-of-the-art integration techniques, also better preserving the local topology of small structures such as conduction channels in scars. Data from seventeen VT patients were also used to study the feasibility of the QCM technique in a clinical setting, consistently outperforming the alternative integration techniques in the presence of sparse and noisy clinical data. The proposed evaluation framework has allowed a rigorous comparison of different EAM and imaging data integration strategies, providing useful information to better guide clinical practice in complex cardiac interventions. Copyright © 2016 Elsevier B.V. All rights reserved.
Mitigating fringing in discrete frequency infrared imaging using time-delayed integration
Ran, Shihao; Berisha, Sebastian; Mankar, Rupali; Shih, Wei-Chuan; Mayerich, David
2018-01-01
Infrared (IR) spectroscopic microscopes provide the potential for label-free quantitative molecular imaging of biological samples, which can be used to aid in histology, forensics, and pharmaceutical analysis. Most IR imaging systems use broadband illumination combined with a spectrometer to separate the signal into spectral components. This technique is currently too slow for many biomedical applications such as clinical diagnosis, primarily due to the limited availability of bright mid-infrared sources and sensitive MCT detectors. There has been a recent push to increase throughput using coherent light sources, such as synchrotron radiation and quantum cascade lasers. While these sources provide a significant increase in intensity, the coherence introduces fringing artifacts in the final image. We demonstrate that applying time-delayed integration in one dimension can dramatically reduce fringing artifacts with minimal alterations to the standard infrared imaging pipeline. The proposed technique also offers the potential for less expensive focal plane array detectors, since linear arrays can be more readily incorporated into the proposed framework. PMID:29552416
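To see how one-dimensional time-delayed integration averages out detector-fixed artifacts such as coherent fringes, consider a minimal software simulation. It assumes the scene shifts by exactly one row per frame while the fringe pattern stays fixed to the detector; the function name and that assumption are illustrative only, not the paper's implementation.

```python
import numpy as np

def tdi_accumulate(frames):
    """Simulate one-dimensional time-delayed integration (TDI).

    `frames` is a stack of 2D images in which the *scene* moves down
    by exactly one row per frame while detector-fixed artifacts
    (e.g. coherent fringes) stay put.  Shifting frame k back by k rows
    re-aligns the scene, so the sum reinforces the scene while
    averaging the fringes over N detector positions.
    """
    frames = np.asarray(frames, dtype=float)
    acc = np.zeros_like(frames[0])
    for k, f in enumerate(frames):
        acc += np.roll(f, -k, axis=0)  # undo the k-row scene shift
        # np.roll wraps around; rows wrapped in from the far edge are
        # invalid in a real scan and would be cropped.
    return acc / len(frames)
```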
Rapid Prototyping Integrated With Nondestructive Evaluation and Finite Element Analysis
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Baaklini, George Y.
2001-01-01
Most reverse engineering approaches involve imaging or digitizing an object then creating a computerized reconstruction that can be integrated, in three dimensions, into a particular design environment. Rapid prototyping (RP) refers to the practical ability to build high-quality physical prototypes directly from computer aided design (CAD) files. Using rapid prototyping, full-scale models or patterns can be built using a variety of materials in a fraction of the time required by more traditional prototyping techniques (refs. 1 and 2). Many software packages have been developed and are being designed to tackle the reverse engineering and rapid prototyping issues just mentioned. For example, image processing and three-dimensional reconstruction visualization software such as Velocity2 (ref. 3) are being used to carry out the construction process of three-dimensional volume models and the subsequent generation of a stereolithography file that is suitable for CAD applications. Producing three-dimensional models of objects from computed tomography (CT) scans is becoming a valuable nondestructive evaluation methodology (ref. 4). Real components can be rendered and subjected to temperature and stress tests using structural engineering software codes. For this to be achieved, accurate high-resolution images have to be obtained via CT scans and then processed, converted into a traditional file format, and translated into finite element models. Prototyping a three-dimensional volume of a composite structure by reading in a series of two-dimensional images generated via CT and by using and integrating commercial software (e.g. Velocity2, MSC/PATRAN (ref. 5), and Hypermesh (ref. 6)) is being applied successfully at the NASA Glenn Research Center. The building process from structural modeling to the analysis level is outlined in reference 7. Subsequently, a stress analysis of a composite cooling panel under combined thermomechanical loading conditions was performed to validate this process.
Low-level processing for real-time image analysis
NASA Technical Reports Server (NTRS)
Eskenazi, R.; Wilf, J. M.
1979-01-01
A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map and a microprocessor, which is integrated into the system, clusters the edges, and represents them as chain codes. Image statistics, useful for higher level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real time image analysis that uses this system is given.
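As a concrete illustration of the chain-code representation mentioned above, here is a minimal sketch of the classic 8-connected Freeman chain code. The direction numbering and the toy contour are assumptions of this sketch; the abstract does not specify the system's own encoding conventions.

```python
# 8-connected Freeman chain code directions, indexed 0..7:
# (dx, dy) pairs, code 0 = east, with y increasing downward.
DIRS = [(1, 0), (1, -1), (0, -1), (-1, -1),
        (-1, 0), (-1, 1), (0, 1), (1, 1)]
CODE = {d: i for i, d in enumerate(DIRS)}

def chain_code(contour):
    """Encode an ordered pixel contour as a Freeman chain code.

    `contour` is a sequence of (x, y) pixel coordinates in which
    consecutive points are 8-neighbours.  The result is the compact
    list of direction codes that a system like the one above would
    store for each clustered edge.
    """
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(CODE[(x1 - x0, y1 - y0)])
    return codes

# Example: a small L-shaped edge segment
print(chain_code([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]))  # [0, 0, 6, 6]
```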
An efficient approach to integrated MeV ion imaging.
Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M
2018-03-01
An ionoluminescence (IL) spectral imaging system, besides the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, is implemented at the Van de Graaff laboratory of Tehran. Versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through the respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ listmode data format. The appropriate performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.
Malcova, Ivana; Farkasovsky, Marian; Senohrabkova, Lenka; Vasicova, Pavla; Hasek, Jiri
2016-05-01
Live-imaging analysis is performed in many laboratories all over the world. Various tools have been developed to enable protein labeling either in plasmid or genomic context in live yeast cells. Here, we introduce a set of nine integrative modules for the C-terminal gene tagging that combines three fluorescent proteins (FPs)-ymTagBFP, mCherry and yTagRFP-T-with three dominant selection markers: geneticin, nourseothricin and hygromycin. In addition, the construction of two episomal modules for Saccharomyces cerevisiae with photostable yTagRFP-T is also reported. Our cassettes with orange, red and blue FPs can be combined with other fluorescent probes like green fluorescent protein to prepare double- or triple-labeled strains for multicolor live-cell imaging. Primers for PCR amplification of the cassettes were designed in such a way as to be fully compatible with the existing PCR toolbox representing over 50 various integrative modules and also with deletion cassettes either for single or repeated usage to enable a cost-effective and an easy exchange of tags. New modules can also be used for biochemical analysis since antibodies are available for all three fluorescent probes. © FEMS 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Fusion of imaging and nonimaging data for surveillance aircraft
NASA Astrophysics Data System (ADS)
Shahbazian, Elisa; Gagnon, Langis; Duquet, Jean Remi; Macieszczak, Maciej; Valin, Pierre
1997-06-01
This paper describes a phased incremental integration approach for application of image analysis and data fusion technologies to provide automated intelligent target tracking and identification for airborne surveillance on board an Aurora Maritime Patrol Aircraft. The sensor suite of the Aurora consists of a radar, an identification friend or foe (IFF) system, an electronic support measures (ESM) system, a spotlight synthetic aperture radar (SSAR), a forward looking infra-red (FLIR) sensor and a link-11 tactical datalink system. Lockheed Martin Canada (LMCan) is developing a testbed, which will be used to analyze and evaluate approaches for combining the data provided by the existing sensors, which were initially not designed to feed a fusion system. Three concurrent research proof-of-concept activities provide techniques, algorithms and methodology into three sequential phases of integration of this testbed. These activities are: (1) analysis of the fusion architecture (track/contact/hybrid) most appropriate for the type of data available, (2) extraction and fusion of simple features from the imaging data into the fusion system performing automatic target identification, and (3) development of a unique software architecture which will permit integration and independent evolution, enhancement and optimization of various decision aid capabilities, such as multi-sensor data fusion (MSDF), situation and threat assessment (STA) and resource management (RM).
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user-friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software, relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than a widely-used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
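The central BPM idea, treating a second imaging modality as a voxel-specific regressor in a general linear model, can be sketched in a few lines. The BPM toolbox itself is implemented in MATLAB on top of SPM; the following numpy fragment is only a hypothetical illustration of the voxel-wise regression, with the variable names and t-statistic bookkeeping assumed here.

```python
import numpy as np

def voxelwise_glm(y_imgs, x_imgs, covariates=None):
    """Voxel-wise general linear model with an imaging regressor.

    y_imgs : (n_subjects, n_voxels) primary-modality data
    x_imgs : (n_subjects, n_voxels) second modality, used as a
             voxel-specific regressor (the core BPM idea)
    covariates : optional (n_subjects, p) subject-level regressors

    Returns per-voxel slopes and t-statistics for the imaging
    regressor.
    """
    n, v = y_imgs.shape
    base = np.ones((n, 1)) if covariates is None else \
        np.column_stack([np.ones(n), covariates])
    betas, tvals = np.empty(v), np.empty(v)
    for j in range(v):
        X = np.column_stack([base, x_imgs[:, j]])      # design matrix
        coef, *_ = np.linalg.lstsq(X, y_imgs[:, j], rcond=None)
        resid = y_imgs[:, j] - X @ coef
        sigma2 = resid @ resid / (n - X.shape[1])      # residual variance
        cov = sigma2 * np.linalg.inv(X.T @ X)
        betas[j] = coef[-1]
        tvals[j] = coef[-1] / np.sqrt(cov[-1, -1])
    return betas, tvals
```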
Madden, Jeremy T.; Toth, Scott J.; Dettmar, Christopher M.; Newman, Justin A.; Oglesbee, Robert A.; Hedderich, Hartmut G.; Everly, R. Michael; Becker, Michael; Ronau, Judith A.; Buchanan, Susan K.; Cherezov, Vadim; Morrow, Marie E.; Xu, Shenglan; Ferguson, Dale; Makarov, Oleg; Das, Chittaranjan; Fischetti, Robert; Simpson, Garth J.
2013-01-01
Nonlinear optical (NLO) instrumentation has been integrated with synchrotron X-ray diffraction (XRD) for combined single-platform analysis, initially targeting applications for automated crystal centering. Second-harmonic-generation microscopy and two-photon-excited ultraviolet fluorescence microscopy were evaluated for crystal detection and assessed by X-ray raster scanning. Two optical designs were constructed and characterized; one positioned downstream of the sample and one integrated into the upstream optical path of the diffractometer. Both instruments enabled protein crystal identification with integration times between 80 and 150 µs per pixel, representing a ∼10³–10⁴-fold reduction in the per-pixel exposure time relative to X-ray raster scanning. Quantitative centering and analysis of phenylalanine hydroxylase from Chromobacterium violaceum cPAH, Trichinella spiralis deubiquitinating enzyme TsUCH37, human κ-opioid receptor complex kOR-T4L produced in lipidic cubic phase (LCP), intimin prepared in LCP, and α-cellulose samples were performed by collecting multiple NLO images. The crystalline samples were characterized by single-crystal diffraction patterns, while α-cellulose was characterized by fiber diffraction. Good agreement was observed between the sample positions identified by NLO and XRD raster measurements for all samples studied. PMID:23765294
A Compact Imaging Detector of Polarization and Spectral Content
NASA Technical Reports Server (NTRS)
Rust, D. M.; Kumar, A.; Thompson, K. E.
1993-01-01
A new type of image detector will simultaneously analyze the polarization of light at all picture elements in a scene. The integrated Dual Imaging Detector (IDID) consists of a polarizing beam splitter bonded to a charge-coupled device (CCD), with signal-analysis circuitry and analog-to-digital converters, all integrated on a silicon chip. The polarizing beam splitter can be either a Ronchi ruling, or an array of cylindrical lenslets, bonded to a birefringent wafer. The wafer, in turn, is bonded to the CCD so that light in the two orthogonal planes of polarization falls on adjacent pairs of pixels. The use of a high-index birefringent material, e.g., rutile, allows the IDID to operate at f-numbers as high as f/3.5. Other aspects of the detector are discussed.
Wang, Jianfeng; Zheng, Wei; Lin, Kan; Huang, Zhiwei
2016-01-01
We report the development and implementation of a unique integrated Mueller-matrix (MM) near-infrared (NIR) imaging and Mueller-matrix point-wise diffuse reflectance (DR) spectroscopy technique for improving colonic cancer detection and diagnosis. Point-wise MM DR spectra can be acquired from any suspicious tissue areas indicated by MM imaging. A total of 30 paired colonic tissue specimens (normal vs. cancer) were measured using the integrated MM imaging and point-wise MM DR spectroscopy system. Polar decomposition algorithms are employed on the acquired images and spectra to derive three polarization metrics including depolarization, diattenuation and retardance for colonic tissue characterization. The decomposition results show that tissue depolarization and retardance are significantly decreased (p<0.001, paired 2-sided Student’s t-test, n = 30); while the tissue diattenuation is significantly increased (p<0.001, paired 2-sided Student’s t-test, n = 30) associated with colonic cancer. Further partial least squares discriminant analysis (PLS-DA) and leave-one-tissue-site-out cross-validation (LOSCV) show that the combination of the three polarization metrics provides the best diagnostic accuracy of 95.0% (sensitivity: 93.3%, and specificity: 96.7%) compared to either of the three polarization metrics (sensitivities of 93.3%, 83.3%, and 80.0%; and specificities of 90.0%, 96.7%, and 80.0%, respectively, for the depolarization, diattenuation and retardance metrics) for colonic cancer detection. This work suggests that the integrated MM NIR imaging and point-wise MM NIR diffuse reflectance spectroscopy has the potential to improve the early detection and diagnosis of malignant lesions in the colon. PMID:27446640
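For readers unfamiliar with how scalar polarization metrics fall out of a measured Mueller matrix, the sketch below computes the Gil-Bernabeu depolarization index. Note the hedge: the paper derives its depolarization metric via Lu-Chipman polar decomposition, which is not what this simpler index implements; it is shown only to illustrate the kind of quantity extracted from a 4 x 4 Mueller matrix.

```python
import numpy as np

def depolarization_index(M):
    """Gil-Bernabeu depolarization index of a 4x4 Mueller matrix.

    Returns 1 for a non-depolarizing element and 0 for an ideal
    depolarizer.  This is a simpler scalar alternative to the
    Lu-Chipman depolarization power used in the paper.
    """
    M = np.asarray(M, dtype=float)
    return np.sqrt((np.sum(M ** 2) - M[0, 0] ** 2) / (3.0 * M[0, 0] ** 2))

# Identity Mueller matrix (clear, non-depolarizing medium) -> 1.0
print(depolarization_index(np.eye(4)))
```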
Automatic gang graffiti recognition and interpretation
NASA Astrophysics Data System (ADS)
Parra, Albert; Boutin, Mireille; Delp, Edward J.
2017-09-01
One of the roles of emergency first responders (e.g., police and fire departments) is to prevent and protect against events that can jeopardize the safety and well-being of a community. In the case of criminal gang activity, tools are needed for finding, documenting, and taking the necessary actions to mitigate the problem or issue. We describe an integrated mobile-based system capable of using location-based services, combined with image analysis, to track and analyze gang activity through the acquisition, indexing, and recognition of gang graffiti images. This approach uses image analysis methods for color recognition, image segmentation, and image retrieval and classification. A database of gang graffiti images is described that includes not only the images but also metadata related to the images, such as date and time, geoposition, gang, gang member, colors, and symbols. The user can then query the data in a useful manner. We have implemented these features both as applications for Android and iOS hand-held devices and as a web-based interface.
Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.
Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta
2014-07-01
We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field, and its potential benefits to large-scale geospatial visualization in general.
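To make the prefix-matching spatial index concrete, the toy sketch below builds a quadtree key by interleaving quadrant choices into a base-4 string, so that nearby objects share long key prefixes. The projection, depth and function name are assumptions of this sketch; the paper's actual index layout is not described in the abstract.

```python
def quadkey(ra_deg, dec_deg, depth=12):
    """Toy quadtree key for prefix-matching of sky positions.

    Normalizes (ra, dec) into the unit square and records the
    quadrant chosen at each subdivision level as a base-4 digit;
    objects whose keys share a prefix of length d fall in the same
    cell of side 1/2**d.
    """
    x = (ra_deg % 360.0) / 360.0
    y = (dec_deg + 90.0) / 180.0
    # clamp so dec = +90 does not overflow the unit square
    x, y = min(x, 1.0 - 1e-12), min(y, 1.0 - 1e-12)
    key = []
    for _ in range(depth):
        x, y = x * 2.0, y * 2.0
        qx, qy = int(x), int(y)        # quadrant bits at this level
        key.append(str(qx + 2 * qy))
        x, y = x - qx, y - qy
    return "".join(key)

# Nearby objects share long prefixes:
print(quadkey(150.10, 2.20)[:8])
print(quadkey(150.11, 2.21)[:8])
```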
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, tackling available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions of simulation and virtual reconstruction. VR tools enable the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. The data, suitably processed and integrated, provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy to the photorealistic performance of the shaped surfaces. Lastly, a new solution for virtual navigation is tested: within a single environment, it offers the chance to interact with the high-resolution oriented spherical panoramas and the reconstructed 3D model at once.
Breast histopathology image segmentation using spatio-colour-texture based graph partition method.
Belsare, A D; Mushrif, M M; Pangarkar, M A; Meshram, N
2016-06-01
This paper proposes a novel integrated spatio-colour-texture based graph partitioning method for segmentation of nuclear arrangement in tubules with a lumen or in solid islands without a lumen from digitized Hematoxylin-Eosin stained breast histology images, in order to automate the process of histology breast image analysis to assist pathologists. We propose a new similarity based superpixel generation method and integrate it with texton representation to form a spatio-colour-texture map of the breast histology image. A new weighted distance based similarity measure is then used for graph generation, and the final segmentation is obtained using the normalized cuts method. Extensive experiments show that the proposed algorithm can segment nuclear arrangement in normal as well as malignant ducts in breast histology tissue images. For evaluation of the proposed method, a ground-truth image database of 100 malignant and nonmalignant breast histology images was created with the help of two expert pathologists, and a quantitative evaluation of the proposed breast histology image segmentation was performed. It shows that the proposed method outperforms other methods. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
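The normalized-cuts step can be approximated with standard spectral graph partitioning, as in the following sketch: threshold the Fiedler vector (second-smallest eigenvector) of the normalized graph Laplacian built from an affinity matrix. This is a generic illustration of the technique class, not the paper's full pipeline of superpixels, texton maps and weighted distances.

```python
import numpy as np
from scipy.sparse import csgraph

def spectral_bipartition(W):
    """Two-way graph partition via the Fiedler vector.

    W is a symmetric (n, n) affinity matrix, e.g. similarities between
    superpixels.  Thresholding the second-smallest eigenvector of the
    normalized graph Laplacian at zero approximates the normalized cut.
    """
    L = csgraph.laplacian(W, normed=True)
    vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
    fiedler = vecs[:, 1]             # second-smallest eigenvector
    return fiedler >= 0              # boolean cluster labels

# Example: two 3-node cliques joined by one weak edge.
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
W[2, 3] = W[3, 2] = 0.1
print(spectral_bipartition(W))  # groups nodes 0-2 apart from nodes 3-5
```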
TheHiveDB image data management and analysis framework.
Muehlboeck, J-Sebastian; Westman, Eric; Simmons, Andrew
2014-01-06
The hive database system (theHiveDB) is a web-based brain imaging database, collaboration, and activity system which has been designed as an imaging workflow management system capable of handling cross-sectional and longitudinal multi-center studies. It can be used to organize and integrate existing data from heterogeneous projects as well as data from ongoing studies. It has been conceived to guide and assist the researcher throughout the entire research process, integrating all relevant types of data across modalities (e.g., brain imaging, clinical, and genetic data). TheHiveDB is a modern activity and resource management system capable of scheduling image processing on both private compute resources and the cloud. The activity component supports common image archival and management tasks as well as established pipeline processing (e.g., Freesurfer for extraction of scalar measures from magnetic resonance images). Furthermore, via theHiveDB activity system algorithm developers may grant access to virtual machines hosting versioned releases of their tools to collaborators and the imaging community. The application of theHiveDB is illustrated with a brief use case based on organizing, processing, and analyzing data from the publicly available Alzheimer Disease Neuroimaging Initiative. PMID:24432000
Spatial-scanning hyperspectral imaging probe for bio-imaging applications
NASA Astrophysics Data System (ADS)
Lim, Hoong-Ta; Murukeshan, Vadakke Matham
2016-03-01
The three common methods to perform hyperspectral imaging are the spatial-scanning, spectral-scanning, and snapshot methods. However, only the spectral-scanning and snapshot methods have been configured as hyperspectral imaging probes to date. This paper presents a spatial-scanning (pushbroom) hyperspectral imaging probe, which is realized by integrating a pushbroom hyperspectral imager with an imaging probe. The proposed hyperspectral imaging probe can also function as an endoscopic probe by integrating a custom-fabricated image fiber bundle unit. The imaging probe is configured by incorporating a gradient-index lens at the end face of an image fiber bundle that consists of about 50 000 individual fiberlets. The necessary simulations, methodology, and detailed instrumentation aspects are explained, followed by an assessment of the developed probe's performance. Resolution test targets such as the United States Air Force chart, as well as bio-samples such as chicken breast tissue with blood clots, are used as test samples for resolution analysis and performance validation. The system is built on a pushbroom hyperspectral imaging system with a video camera and has the advantage of acquiring information from a large number of spectral bands with a selectable region of interest. The advantages of this spatial-scanning hyperspectral imaging probe can be extended to test samples or tissues residing in regions that are difficult to access, with potential diagnostic bio-imaging applications.
PIRATE: pediatric imaging response assessment and targeting environment
NASA Astrophysics Data System (ADS)
Glenn, Russell; Zhang, Yong; Krasin, Matthew; Hua, Chiaho
2010-02-01
By combining the strengths of various imaging modalities, the multimodality imaging approach has potential to improve tumor staging, delineation of tumor boundaries, chemo-radiotherapy regime design, and treatment response assessment in cancer management. To address the urgent need for efficient tools to analyze large-scale clinical trial data, we have developed an integrated multimodality, functional and anatomical imaging analysis software package for target definition and therapy response assessment in pediatric radiotherapy (RT) patients. Our software provides quantitative tools for automated image segmentation, region-of-interest (ROI) histogram analysis, spatial volume-of-interest (VOI) analysis, and voxel-wise correlation across modalities. To demonstrate the clinical applicability of this software, histogram analyses were performed on baseline and follow-up 18F-fluorodeoxyglucose (18F-FDG) PET images of nine patients with rhabdomyosarcoma enrolled in an institutional clinical trial at St. Jude Children's Research Hospital. In addition, we combined 18F-FDG PET, dynamic-contrast-enhanced (DCE) MR, and anatomical MR data to visualize the heterogeneity in tumor pathophysiology with the ultimate goal of adaptive targeting of regions with high tumor burden. Our software is able to simultaneously analyze multimodality images across multiple time points, which could greatly speed up the analysis of large-scale clinical trial data and validation of potential imaging biomarkers.
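A minimal sketch of the ROI histogram analysis step follows, assuming a 3D PET array and a boolean mask; the particular descriptors and names are this sketch's choices, since the abstract does not enumerate the software's exact feature set.

```python
import numpy as np

def roi_histogram_stats(pet_img, roi_mask, bins=64):
    """Summarize tracer uptake inside a region of interest.

    pet_img  : 3D array of PET intensities (e.g. SUV)
    roi_mask : boolean 3D array of the same shape
    Returns the histogram plus a few scalar descriptors of the kind
    commonly compared between baseline and follow-up scans.
    """
    vals = pet_img[roi_mask]
    hist, edges = np.histogram(vals, bins=bins)
    return {
        "hist": hist,
        "edges": edges,
        "mean": vals.mean(),
        "max": vals.max(),
        "p90": np.percentile(vals, 90),
        "volume_voxels": int(roi_mask.sum()),
    }

# Response assessment then compares descriptors across time points, e.g.:
# delta_p90 = roi_histogram_stats(followup, m)["p90"] - \
#             roi_histogram_stats(baseline, m)["p90"]
```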
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the 3 core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated in the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Wolfs, Cecile J. A.; Brás, Mariana G.; Schyns, Lotte E. J. R.; Nijsten, Sebastiaan M. J. J. G.; van Elmpt, Wouter; Scheib, Stefan G.; Baltes, Christof; Podesta, Mark; Verhaegen, Frank
2017-08-01
The aim of this work is to assess the performance of 2D time-integrated (2D-TI), 2D time-resolved (2D-TR) and 3D time-integrated (3D-TI) portal dosimetry in detecting dose discrepancies between the planned and (simulated) delivered dose caused by simulated changes in the anatomy of lung cancer patients. For six lung cancer patients, tumor shift, tumor regression and pleural effusion are simulated by modifying their CT images. Based on the modified CT images, time-integrated (TI) and time-resolved (TR) portal dose images (PDIs) are simulated and 3D-TI doses are calculated. The modified and original PDIs and 3D doses are compared by a gamma analysis with various gamma criteria. Furthermore, the difference in the D95% (ΔD95%) of the GTV is calculated and used as a gold standard. The correlation between the gamma fail rate and the ΔD95% is investigated, as well as the sensitivity and specificity of all combinations of portal dosimetry method, gamma criteria and gamma fail rate threshold. On the individual patient level, there is a correlation between the gamma fail rate and the ΔD95%, which cannot be found at the group level. The sensitivity and specificity analysis showed that there is not one combination of portal dosimetry method, gamma criteria and gamma fail rate threshold that can detect all simulated anatomical changes. This work shows that it will be more beneficial to relate portal dosimetry and DVH analysis on the patient level, rather than trying to quantify a relationship for a group of patients. With regard to optimizing sensitivity and specificity, different combinations of portal dosimetry method, gamma criteria and gamma fail rate should be used to optimally detect certain types of anatomical changes.
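The gamma analysis used to compare the modified and original dose distributions has a standard definition that is easy to sketch: each point passes if some nearby reference point is close in both distance (the DTA criterion) and dose (the DD criterion). The brute-force version below is illustrative only; clinical implementations add interpolation and faster searches, and the paper's exact criteria and normalization are not assumed here.

```python
import numpy as np

def gamma_index_2d(ref, ev, spacing_mm, dd_pct=3.0, dta_mm=3.0):
    """Brute-force 2D gamma analysis (global dose normalization).

    ref, ev : 2D reference and evaluated dose arrays on the same grid
    spacing_mm : pixel spacing
    For every pixel, gamma is the minimum over nearby reference pixels
    of sqrt((dist/DTA)^2 + (dose_diff/DD)^2); gamma <= 1 passes.
    """
    dd = dd_pct / 100.0 * ref.max()                 # global dose criterion
    half = int(np.ceil(2 * dta_mm / spacing_mm))    # search window radius
    ny, nx = ref.shape
    gamma = np.full(ref.shape, np.inf)
    for j in range(ny):
        for i in range(nx):
            j0, j1 = max(0, j - half), min(ny, j + half + 1)
            i0, i1 = max(0, i - half), min(nx, i + half + 1)
            jj, ii = np.mgrid[j0:j1, i0:i1]
            dist2 = ((jj - j) ** 2 + (ii - i) ** 2) * spacing_mm ** 2
            dose2 = (ref[j0:j1, i0:i1] - ev[j, i]) ** 2
            g2 = dist2 / dta_mm ** 2 + dose2 / dd ** 2
            gamma[j, i] = np.sqrt(g2.min())
    return gamma

# Gamma fail rate as used for the thresholding above:
# fail_rate = np.mean(gamma_index_2d(planned, delivered, 1.0) > 1.0)
```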
NASA Astrophysics Data System (ADS)
Ai, Lingyu; Kim, Eun-Soo
2018-03-01
We propose a method for refocusing-range and image-quality enhanced optical reconstruction of three-dimensional (3-D) objects from integral images using only a 3 × 3 periodic δ-function array (PDFA), which is called a principal PDFA (P-PDFA). By directly convolving the elemental image array (EIA) captured from 3-D objects with the P-PDFAs whose spatial periods correspond to each object's depth, a set of spatially-filtered EIAs (SF-EIAs) is extracted, from which 3-D objects can be reconstructed refocused at their real depth. Since the convolution operations are performed directly on each of the minimum 3 × 3 EIs of the picked-up EIA, the capturing and refocused-depth ranges of 3-D objects can be greatly enhanced, and 3-D objects with much improved image quality can be reconstructed without any preprocessing operations. Through ray-optical analysis and optical experiments with actual 3-D objects, the feasibility of the proposed method has been confirmed.
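The convolution step at the heart of the method can be sketched directly: build a 3 × 3 periodic δ-function kernel whose spike spacing matches the depth-dependent period, and convolve it with the captured EIA. The depth-to-period mapping follows the pickup geometry and is assumed given; the names and normalization below are this sketch's assumptions, not the paper's code.

```python
import numpy as np
from scipy.signal import fftconvolve

def pdfa_kernel(period, n=3):
    """n x n periodic delta-function array with spatial period
    `period` (in pixels): unit spikes on an n x n grid spaced
    `period` apart, zeros elsewhere, normalized by the spike count."""
    size = period * (n - 1) + 1
    k = np.zeros((size, size))
    k[::period, ::period] = 1.0
    return k / n ** 2

def spatially_filtered_eia(eia, period):
    """Convolve a captured elemental image array with the PDFA whose
    period corresponds to one object depth, yielding a
    spatially-filtered EIA refocused at that depth."""
    return fftconvolve(eia, pdfa_kernel(period), mode="same")
```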
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier; it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially image databases. While images are available at the fingertip, difficulties arise when image data needs to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but cost-intensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.
CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor
2004-05-01
Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual view visualization, registration, real-time tracking, segmentation, etc) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.
Loziuk, Philip; Meier, Florian; Johnson, Caroline
2016-01-01
Quantitative methods for detection of biological molecules are needed more than ever before in the emerging age of “omics” and “big data.” Here, we provide an integrated approach for systematic analysis of the “lipidome” in tissue. To test our approach in a biological context, we utilized brain tissue selectively deficient for the transcription factor Specificity Protein 2 (Sp2). Conditional deletion of Sp2 in the mouse cerebral cortex results in developmental deficiencies including disruption of lipid metabolism. Silver (Ag) cationization was implemented for infrared matrix-assisted laser desorption electrospray ionization (IR-MALDESI) to enhance the ion abundances for olefinic lipids, as these have been linked to regulation by Sp2. Combining Ag-doped and conventional IR-MALDESI imaging, this approach was extended to IR-MALDESI imaging of embryonic mouse brains. Further, our imaging technique was combined with bottom-up shotgun proteomic LC-MS/MS analysis and western blot for comparing Sp2 conditional knockout (Sp2-cKO) and wild-type (WT) cortices of tissue sections. This provided an integrated omics dataset which revealed many specific changes to fundamental cellular processes and biosynthetic pathways. In particular, step-specific altered abundances of nucleotides, lipids, and associated proteins were observed in the cerebral cortices of Sp2-cKO embryos. PMID:26942738
Analysis of smear in high-resolution remote sensing satellites
NASA Astrophysics Data System (ADS)
Wahballah, Walid A.; Bazan, Taher M.; El-Tohamy, Fawzy; Fathy, Mahmoud
2016-10-01
High-resolution remote sensing satellites (HRRSS) that use time delay and integration (TDI) CCDs have the potential to introduce large amounts of image smear. Clocking and velocity mismatch smear are two of the key factors inducing image smear. Clocking smear is caused by the discrete manner in which the charge is clocked in the TDI-CCDs. The relative motion between the HRRSS and the observed object requires that the image motion velocity be strictly synchronized with the velocity of the charge packet transfer (line rate) throughout the integration time. When imaging an object off-nadir, the image motion velocity changes, resulting in asynchronization between the image velocity and the CCD's line rate. A model for estimating the image motion velocity in HRRSS is derived. The influence of this velocity mismatch combined with clocking smear on the modulation transfer function (MTF) is investigated using Matlab simulation. The analysis is performed for cross-track and along-track imaging with different satellite attitude angles and TDI steps. The results reveal that the velocity mismatch ratio and the number of TDI steps have a serious impact on the smear MTF; a velocity mismatch ratio of 2% degrades the smear MTF by 32% at the Nyquist frequency when the TDI steps change from 32 to 96. In addition, the results show that to achieve the requirement MTF_smear ≥ 0.95, for TDI steps of 16 and 64, the allowable roll angles are 13.7° and 6.85° and the permissible pitch angles are no more than 9.6° and 4.8°, respectively.
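For intuition about why velocity mismatch and TDI stage count degrade the MTF together, the sketch below evaluates the textbook linear-motion smear model, in which the mismatch accumulates over the TDI integration into a smear extent of roughly n_tdi × mismatch pixels. This is a generic model, not the paper's combined clocking-plus-mismatch formulation, so it will not reproduce the paper's exact numbers.

```python
import numpy as np

def smear_mtf(f_cyc_per_px, n_tdi, mismatch_ratio):
    """Generic linear-motion smear MTF.

    Velocity mismatch accumulates over the TDI integration: the total
    smear extent is roughly s = n_tdi * mismatch_ratio pixels, and a
    uniform linear smear of extent s has MTF(f) = |sinc(s * f)|.
    """
    s = n_tdi * mismatch_ratio                  # smear extent in pixels
    return np.abs(np.sinc(s * f_cyc_per_px))    # np.sinc(x) = sin(pi x)/(pi x)

# Trend at the Nyquist frequency (0.5 cycles/pixel) for a 2% mismatch:
for n in (16, 32, 64, 96):
    print(n, smear_mtf(0.5, n, 0.02))
```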
Collimating slicer for optical integral field spectroscopy
NASA Astrophysics Data System (ADS)
Laurent, Florence; Hénault, François
2016-07-01
Integral Field Spectroscopy (IFS) is a technique that gives simultaneously the spectrum of each spatial sampling element of a given field. It is a powerful tool which rearranges the data cube represented by two spatial dimensions defining the field and the spectral decomposition (x, y, λ) in a detector plane. In IFS, the "spatial" unit reorganizes the field, while the "spectral" unit is composed of a classical spectrograph. For the spatial unit, three main techniques - microlens array, microlens array associated with fibres and image slicer - are used in astronomical instrumentation. The development of a collimating slicer proposes a new type of optical integral field spectroscopy that should be more compact. The main idea is to combine the image slicer with the collimator of the spectrograph, mixing the "spatial" and "spectral" units. The traditional combination of slicer, pupil and slit elements and spectrograph collimator is replaced by a new one composed of a slicer and spectrograph collimator only. After testing a few configurations, this new system looks very promising for low-resolution spectrographs. In this paper, the state of the art of integral field spectroscopy using image slicers will be described. The new system, based on the development of a collimating slicer for optical integral field spectroscopy, will be depicted. First system analysis results and future improvements will be discussed.
Dynamic CT myocardial perfusion imaging: performance of 3D semi-automated evaluation software.
Ebersberger, Ullrich; Marcus, Roy P; Schoepf, U Joseph; Lo, Gladys G; Wang, Yining; Blanke, Philipp; Geyer, Lucas L; Gray, J Cranston; McQuiston, Andrew D; Cho, Young Jun; Scheuering, Michael; Canstein, Christian; Nikolaou, Konstantin; Hoffmann, Ellen; Bamberg, Fabian
2014-01-01
To evaluate the performance of three-dimensional semi-automated evaluation software for the assessment of myocardial blood flow (MBF) and blood volume (MBV) at dynamic myocardial perfusion computed tomography (CT). Volume-based software relying on marginal space learning and probabilistic boosting tree-based contour fitting was applied to CT myocardial perfusion imaging data of 37 subjects. In addition, all image data were analysed manually and both approaches were compared with SPECT findings. Study endpoints included time of analysis and conventional measures of diagnostic accuracy. Of 592 analysable segments, 42 showed perfusion defects on SPECT. Average analysis times for the manual and software-based approaches were 49.1 ± 11.2 and 16.5 ± 3.7 min, respectively (P < 0.01). There was strong agreement between the two measures of interest (MBF, ICC = 0.91, and MBV, ICC = 0.88, both P < 0.01) and no significant difference in diagnostic accuracy between the manual and software-based approaches for either MBF or MBV (all comparisons P > 0.05). Three-dimensional semi-automated evaluation of dynamic myocardial perfusion CT data provides similar measures and diagnostic accuracy to manual evaluation, albeit with substantially reduced analysis times. This capability may aid the integration of this test into clinical workflows. Key points:
• Myocardial perfusion CT is attractive for comprehensive coronary heart disease assessment.
• Traditional image analysis methods are cumbersome and time-consuming.
• Automated 3D perfusion software shortens analysis times.
• Automated 3D perfusion software increases standardisation of myocardial perfusion CT.
• Automated, standardised analysis fosters myocardial perfusion CT integration into clinical practice.
Simulation of bright-field microscopy images depicting pap-smear specimen
Malm, Patrik; Brun, Anders; Bengtsson, Ewert
2015-01-01
As digital imaging is becoming a fundamental part of medical and biomedical research, the demand for computer-based evaluation using advanced image analysis is becoming an integral part of many research projects. A common problem when developing new image analysis algorithms is the need of large datasets with ground truth on which the algorithms can be tested and optimized. Generating such datasets is often tedious and introduces subjectivity and interindividual and intraindividual variations. An alternative to manually created ground-truth data is to generate synthetic images where the ground truth is known. The challenge then is to make the images sufficiently similar to the real ones to be useful in algorithm development. One of the first and most widely studied medical image analysis tasks is to automate screening for cervical cancer through Pap-smear analysis. As part of an effort to develop a new generation cervical cancer screening system, we have developed a framework for the creation of realistic synthetic bright-field microscopy images that can be used for algorithm development and benchmarking. The resulting framework has been assessed through a visual evaluation by experts with extensive experience of Pap-smear images. The results show that images produced using our described methods are realistic enough to be mistaken for real microscopy images. The developed simulation framework is very flexible and can be modified to mimic many other types of bright-field microscopy images. © 2015 The Authors. Published by Wiley Periodicals, Inc. on behalf of ISAC PMID:25573002
Guided filter and principal component analysis hybrid method for hyperspectral pansharpening
NASA Astrophysics Data System (ADS)
Qu, Jiahui; Li, Yunsong; Dong, Wenqian
2018-01-01
Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution through integrating an HS image with a panchromatic (PAN) image. A guided filter (GF) and principal component analysis (PCA) hybrid HS pansharpening method is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image. The first principal component (PC1) channel concentrates on the spatial information of the HS image. Different from the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial information difference between the HS image and the enhanced PAN image. Then, in order to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial information difference is injected into the PC1 channel through multiplying by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms several other state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
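The described pipeline (PCA on the interpolated HS cube, guided-filter extraction of the spatial detail, scaled injection into PC1, inverse PCA) can be sketched compactly. The following Python fragment is a simplified, hypothetical rendering: the PAN pre-sharpening step is omitted, the tradeoff parameter is a free constant rather than the paper's derived value, and all names are this sketch's.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA

def box(img, r):
    """Mean filter over a (2r+1) x (2r+1) box."""
    return uniform_filter(img, size=2 * r + 1)

def guided_filter(guide, src, r=4, eps=1e-4):
    """Classic single-channel guided filter (He et al. formulation)."""
    m_i, m_p = box(guide, r), box(src, r)
    var_i = box(guide * guide, r) - m_i * m_i
    cov_ip = box(guide * src, r) - m_i * m_p
    a = cov_ip / (var_i + eps)
    b = m_p - a * m_i
    return box(a, r) * guide + box(b, r)

def gf_pca_pansharpen(hs_up, pan, trade_off=0.7):
    """Sketch of GF/PCA hybrid pansharpening.

    hs_up : (H, W, B) HS image already interpolated to the PAN grid
    pan   : (H, W) panchromatic image
    """
    H, W, B = hs_up.shape
    pca = PCA(n_components=B)
    pcs = pca.fit_transform(hs_up.reshape(-1, B)).reshape(H, W, B)
    pc1 = pcs[..., 0]
    # match PAN to PC1's dynamic range before extracting detail
    pan_m = (pan - pan.mean()) / pan.std() * pc1.std() + pc1.mean()
    detail = pan_m - guided_filter(pan_m, pc1)   # missing spatial detail
    pcs[..., 0] = pc1 + trade_off * detail       # scaled injection into PC1
    return pca.inverse_transform(pcs.reshape(-1, B)).reshape(H, W, B)
```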
Photofragment image analysis using the Onion-Peeling Algorithm
NASA Astrophysics Data System (ADS)
Manzhos, Sergei; Loock, Hans-Peter
2003-07-01
With the growing popularity of the velocity map imaging technique, a need for the analysis of photoion and photoelectron images arose. Here, a computer program is presented that allows for the analysis of cylindrically symmetric images. It permits the inversion of the projection of the 3D charged particle distribution using the Onion Peeling Algorithm. Further analysis includes the determination of radial and angular distributions, from which velocity distributions and spatial anisotropy parameters are obtained. Identification and quantification of the different photolysis channels is therefore straightforward. In addition, the program features geometry correction, centering, and multi-Gaussian fitting routines, as well as a user-friendly graphical interface and the possibility of generating synthetic images using either the fitted or user-defined parameters.
Program summary
Title of program: Glass Onion
Catalogue identifier: ADRY
Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRY
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer: IBM PC
Operating system under which the program has been tested: Windows 98, Windows 2000, Windows NT
Programming language used: Delphi 4.0
Memory required to execute with typical data: 18 Mwords
No. of bits in a word: 32
No. of bytes in distributed program, including test data, etc.: 9 911 434
Distribution format: zip file
Keywords: photofragment image, onion peeling, anisotropy parameters
Nature of physical problem: Information about velocity and angular distributions of photofragments is the basis on which the analysis of the photolysis process rests. Reconstructing the three-dimensional distribution from the photofragment image is the first step; further processing involves angular and radial integration of the inverted image to obtain velocity and angular distributions. Provisions have to be made to correct for slight distortions of the image, and to verify the accuracy of the analysis process.
Method of solution: The "Onion Peeling" algorithm described by Helm [Rev. Sci. Instrum. 67 (6) (1996)] is used to perform the image reconstruction. Angular integration with a subsequent multi-Gaussian fit supplies information about the velocity distribution of the photofragments, whereas radial integration with subsequent expansion of the angular distributions over Legendre polynomials gives the spatial anisotropy parameters. Fitting algorithms have been developed to centre the image and to correct for image distortion.
Restrictions on the complexity of the problem: The maximum image size (1280×1280) and resolution (16 bit) are restricted by available memory and can be changed in the source code. Initial centre coordinates within 5 pixels may be required for the correction and centering algorithms to converge. Peaks on the velocity profile separated by less than the peak width may not be deconvolved. In the charged particle image reconstruction, it is assumed that the kinetic energy released in the dissociation process is small compared to the energy acquired in the electric field. For the fitting parameters to be physically meaningful, cylindrical symmetry of the image has to be assumed, but the actual inversion algorithm is stable against distortions of such symmetry in experimental images.
Typical running time: The analysis procedure can be divided into three parts: inversion, fitting, and geometry correction. The inversion time grows approximately as R³, where R is the radius of the region of interest: for R = 200 pixels it is less than a minute, for R = 400 pixels less than 6 min on a 400 MHz IBM personal computer. The time for the velocity fitting procedure to converge depends strongly on the number of peaks in the velocity profile and the convergence criterion. It ranges between less than a second for simple curves and a few minutes for profiles with up to twenty peaks. The time taken for the image correction scales as R² and depends on the curve profile. It is on the order of a few minutes for images with R = 500 pixels.
Unusual features of the program: Our centering and image correction algorithm is based on Fourier analysis of the radial distribution to ensure the sharpest velocity profile and is insensitive to an uneven intensity distribution. An angular averaging option exists to stabilize the inversion algorithm without losing resolution.
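The onion-peeling inversion itself is compact enough to sketch. Below, one half-row of a cylindrically symmetric projection is inverted by modeling the distribution as constant within unit-width annuli and solving the resulting upper-triangular chord-length system from the outermost shell inwards; this illustrates the algorithm class used by Glass Onion, not its exact Delphi implementation.

```python
import numpy as np

def onion_peel_half_profile(proj):
    """Onion-peeling inversion of one half-row of a symmetric image.

    proj[i] is the line-of-sight projection at distance i pixels from
    the symmetry axis.  Model: the 2D distribution is constant within
    unit-width annuli, so proj = L @ f with chord-length matrix
    L[i, j] = 2*(sqrt((j+1)^2 - i^2) - sqrt(j^2 - i^2)) for j >= i.
    Peeling solves this upper-triangular system from the outermost
    shell inwards.
    """
    n = len(proj)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            L[i, j] = 2.0 * (np.sqrt((j + 1) ** 2 - i ** 2)
                             - np.sqrt(max(j * j - i * i, 0)))
    f = np.zeros(n)
    for j in range(n - 1, -1, -1):          # outermost ring first
        f[j] = (proj[j] - L[j, j + 1:] @ f[j + 1:]) / L[j, j]
    return f  # radial distribution, one value per annulus
```

A full image would be processed by inverting each row on both sides of the symmetry axis and averaging the results.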
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowen, Benjamin; Ruebel, Oliver; Fischer, Curt R.
BASTet is an advanced software library written in Python. BASTet serves as the analysis and storage library for the OpenMSI project. BASTet is an integrated framework for: i) storage of spectral imaging data, ii) storage of derived analysis data, iii) provenance of analyses, and iv) integration and execution of analyses via complex workflows. BASTet implements the API for the HDF5 storage format used by OpenMSI. Analyses that are developed using BASTet benefit from direct integration with the storage format, automatic tracking of provenance, and direct integration with command-line and workflow execution tools. BASTet also defines interfaces to enable developers to directly integrate their analyses with OpenMSI's web-based viewing infrastructure without having to know OpenMSI. BASTet also provides numerous helper classes and tools to assist with the conversion of data files, ease parallel implementation of analysis algorithms, ease interaction with web-based functions, and describe methods for data reduction. BASTet also includes detailed developer documentation, user tutorials, iPython notebooks, and other supporting documents.
A CAD Approach to Integrating NDE With Finite Element
NASA Technical Reports Server (NTRS)
Abdul-Aziz, Ali; Downey, James; Ghosn, Louis J.; Baaklini, George Y.
2004-01-01
Nondestructive evaluation (NDE) is one of several technologies applied at NASA Glenn Research Center to determine atypical deformities, cracks, and other anomalies experienced by structural components. NDE consists of applying high-quality imaging techniques (such as x-ray imaging and computed tomography (CT)) to discover hidden manufactured flaws in a structure. Efforts are in progress to integrate NDE with the finite element (FE) computational method to perform detailed structural analysis of a given component. This report presents the core outlines for an in-house technical procedure that incorporates this combined NDE-FE interrelation. An example is presented to demonstrate the applicability of this analytical procedure. FE analysis of a test specimen is performed, and the resulting von Mises stresses and the stress concentrations near the anomalies are observed, which indicates the fidelity of the procedure. Additional information elaborating on the steps needed to perform such an analysis is clearly presented in the form of mini step-by-step guidelines.
Integration of OLEDs in biomedical sensor systems: design and feasibility analysis
NASA Astrophysics Data System (ADS)
Rai, Pratyush; Kumar, Prashanth S.; Varadan, Vijay K.
2010-04-01
Organic (electronic) Light Emitting Diodes (OLEDs) have been shown to have applications in the field of lighting and flexible displays. These devices can also be incorporated in sensors as a light source for imaging/fluorescence sensing in miniaturized systems for biomedical applications, and as low-cost displays for sensor output. Current device capabilities align well with the aforementioned applications, namely low-power diffuse lighting and momentary/push-button dynamic displays. A top-emission OLED design has been proposed that can be incorporated with the sensor and peripheral electrical circuitry, also based on organic electronics. A feasibility analysis is carried out for an integrated optical imaging/sensor system, based on luminosity and spectral bandwidth. A similar study is also carried out for a sensor output display system that functions as a pseudo active OLED matrix. A power model is presented for device power requirements and constraints. The feasibility analysis is also supplemented with a discussion of ink-jet printing and stamping techniques and the possibility of roll-to-roll manufacturing.
Visually enhanced CCTV digital surveillance utilizing Intranet and Internet.
Ozaki, Nobuyuki
2002-07-01
This paper describes a solution for integrated plant supervision utilizing closed circuit television (CCTV) digital surveillance. Three basic requirements are first addressed as the platform of the system, with a discussion of suitable video compression. The system configuration is described in blocks. The system provides surveillance functionality (real-time monitoring) and process analysis functionality (a troubleshooting tool). The paper describes the formulation of a practical performance design for determining various encoder parameters. It also introduces image processing techniques for enhancing the original CCTV digital image to lessen the burden on operators. Some screenshots are shown for the surveillance functionality. For process analysis, an image searching filter supported by image processing techniques is explained with screenshots. Multimedia surveillance, i.e. the merger with process data surveillance (the SCADA system), is also explained.
Metadata management for high content screening in OMERO
Li, Simon; Besson, Sébastien; Blackburn, Colin; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gillen, Kenneth; Leigh, Roger; Lindner, Dominik; Linkert, Melissa; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Allan, Chris; Burel, Jean-Marie; Moore, Josh; Swedlow, Jason R.
2016-01-01
High content screening (HCS) experiments create a classic data management challenge: multiple large sets of heterogeneous structured and unstructured data that must be integrated and linked to produce a set of "final" results. These different data include images, reagents, protocols, analytic output, and phenotypes, all of which must be stored, linked and made accessible for users, scientists, collaborators and, where appropriate, the wider community. The OME Consortium has built several open source tools for managing, linking and sharing these different types of data. The OME Data Model is a metadata specification that supports the image data and metadata recorded in HCS experiments. Bio-Formats is a Java library that reads recorded image data and metadata and includes support for several HCS screening systems. OMERO is an enterprise data management application that integrates image data, experimental and analytic metadata and makes them accessible for visualization, mining, sharing and downstream analysis. We discuss how Bio-Formats and OMERO handle these different data types, and how they can be used to integrate, link and share HCS experiments in facilities and public data repositories. OME specifications and software are open source and are available at https://www.openmicroscopy.org. PMID:26476368
NASA Astrophysics Data System (ADS)
Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang
The paper presents the concepts of lever arm and boresight angle, the design requirements for calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method stacks three consecutive stereo images and applies OTF calibration using ground control points. Boresight-angle calibration of the laser scanner uses both manual and automatic methods with ground control points. Integrated calibration between the digital camera and laser scanner is introduced to improve the systematic precision of the two sensors. Analysis of measurements between ground control points and their corresponding image points in sequential images shows that object positions from camera and images agree to within about 15 cm in relative error and 20 cm in absolute error. Comparing ground control points with their corresponding laser point clouds, the error is less than 20 cm. These experimental results indicate that the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy, high-density road spatial data.
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
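As an illustration of the encryption step described above, the following is a minimal NumPy sketch of double random phase encoding applied to one 2D elemental image, keeping only sparse phase information. The array size, the 25% retention ratio, and the mask generation are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np

def drpe_encrypt(img, rng):
    """Encrypt a normalized 2D image with two random phase masks."""
    m1 = np.exp(2j * np.pi * rng.random(img.shape))  # input-plane mask
    m2 = np.exp(2j * np.pi * rng.random(img.shape))  # Fourier-plane mask
    return np.fft.ifft2(np.fft.fft2(img * m1) * m2)

def sparse_phase(cipher, keep, rng):
    """Discard amplitude and all but a random fraction of phase samples."""
    kept = rng.random(cipher.shape) < keep           # pixels whose phase is stored
    return np.where(kept, np.exp(1j * np.angle(cipher)), 0)

rng = np.random.default_rng(42)
elemental = rng.random((64, 64))                     # stand-in 2D elemental image
cipher = drpe_encrypt(elemental, rng)
sparse = sparse_phase(cipher, keep=0.25, rng=rng)    # the data actually retained
```

Authentication would then correlate reconstructions from such sparse-phase elemental images against a reference at each depth; that nonlinear correlation step is not shown here.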
Ethnicity identification from face images
NASA Astrophysics Data System (ADS)
Lu, Xiaoguang; Jain, Anil K.
2004-08-01
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
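The ensemble structure described above can be sketched in a few lines of scikit-learn. This is a hedged stand-in: one LDA per image scale with posteriors combined by the product rule; the plain-downsampling feature extraction and the random stand-in data are assumptions, not the paper's face-image pipeline.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def multiscale_views(images, scales=(1, 2, 4)):
    """Flatten each image after subsampling at several scales."""
    return [images[:, ::s, ::s].reshape(len(images), -1) for s in scales]

rng = np.random.default_rng(0)
X = rng.random((200, 32, 32))            # stand-in face images
y = rng.integers(0, 2, 200)              # stand-in two-class labels

views = multiscale_views(X)
models = [LinearDiscriminantAnalysis().fit(v, y) for v in views]

# Product rule: multiply the per-scale class posteriors, then take argmax.
posteriors = np.ones((len(X), 2))
for clf, v in zip(models, views):
    posteriors *= clf.predict_proba(v)
pred = posteriors.argmax(axis=1)
```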
Updating Landsat-derived land-cover maps using change detection and masking techniques
NASA Technical Reports Server (NTRS)
Likens, W.; Maw, K.
1982-01-01
The California Integrated Remote Sensing System's San Bernardino County Project was devised to study the utilization of a data base at a number of jurisdictional levels. The present paper discusses the implementation of change-detection and masking techniques in the updating of Landsat-derived land-cover maps. A baseline land-cover classification was first created from a 1976 image; the adjusted 1976 image was then compared with a 1979 scene by the techniques of (1) multidate image classification, (2) difference image-distribution tails thresholding, (3) difference image classification, and (4) multi-dimensional chi-square analysis of a difference image. The union of the results of methods 1, 3 and 4 was used to create a mask of possible change areas between 1976 and 1979, which served to limit analysis of the update image and reduce comparison errors in unchanged areas. The techniques of spatially smoothing change-detection products and of combining the results of difference-based change-detection algorithms are also shown to improve Landsat change-detection accuracies.
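A minimal NumPy sketch of technique (2) above, thresholding the tails of a difference-image distribution to build a change mask that limits reclassification of the update image. The k-sigma threshold and the synthetic band values are illustrative assumptions.

```python
import numpy as np

def change_mask(band_t1, band_t2, k=2.0):
    """Flag pixels whose difference falls in the distribution tails."""
    diff = band_t2.astype(float) - band_t1.astype(float)
    mu, sigma = diff.mean(), diff.std()
    return np.abs(diff - mu) > k * sigma     # True = candidate change pixel

rng = np.random.default_rng(1)
t1976 = rng.integers(0, 255, (100, 100))
t1979 = t1976 + rng.normal(0, 5, (100, 100))  # mostly unchanged scene
mask = change_mask(t1976, t1979)
# Classification of the 1979 scene would then be restricted to mask==True,
# while mask==False pixels keep their 1976 labels.
```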
A concept for holistic whole body MRI data analysis, Imiomics
Malmberg, Filip; Johansson, Lars; Lind, Lars; Sundbom, Magnus; Ahlström, Håkan; Kullberg, Joel
2017-01-01
Purpose: To present and evaluate a whole-body image analysis concept, Imiomics (imaging–omics), and an image registration method that enables Imiomics analyses by deforming all image data to a common coordinate system, so that the information in each voxel can be compared between persons or within a person over time and integrated with non-imaging data. Methods: The presented image registration method utilizes relative elasticity constraints of different tissues obtained from whole-body water-fat MRI. The registration method is evaluated by inverse consistency and Dice coefficients, and the Imiomics concept is evaluated by example analyses of importance for metabolic research using non-imaging parameters where we know what to expect. The example analyses include whole-body imaging atlas creation, anomaly detection, and cross-sectional and longitudinal analysis. Results: The image registration method evaluation on 128 subjects shows low inverse consistency errors and high Dice coefficients. Also, the statistical atlas with fat content intensity values shows low standard deviation values, indicating successful deformations to the common coordinate system. The example analyses show expected associations and correlations which agree with explicit measurements, thereby illustrating the usefulness of the proposed Imiomics concept. Conclusions: The registration method is well-suited for Imiomics analyses, which enable analyses of relationships to non-imaging data, e.g. clinical data, in new types of holistic targeted and untargeted big-data analysis. PMID:28241015
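The two evaluation measures named above are standard and compact enough to sketch directly. The following NumPy snippet computes the Dice coefficient between binary label masks and a simple inverse-consistency error for a forward/backward displacement-field pair; the field shapes and the nearest-neighbour field composition are simplifying assumptions.

```python
import numpy as np

def dice(a, b):
    """Dice overlap of two boolean label masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def inverse_consistency_error(u, v):
    """Mean ||u(x) + v(x + u(x))|| for displacement fields of shape (h, w, 2),
    sampling v with nearest-neighbour lookup (zero for perfectly consistent fields)."""
    h, w, _ = u.shape
    ys, xs = np.mgrid[0:h, 0:w]
    yq = np.clip(np.rint(ys + u[..., 0]).astype(int), 0, h - 1)
    xq = np.clip(np.rint(xs + u[..., 1]).astype(int), 0, w - 1)
    return np.linalg.norm(u + v[yq, xq], axis=-1).mean()

rng = np.random.default_rng(0)
u = rng.normal(0, 1, (32, 32, 2))
err = inverse_consistency_error(u, -u)   # crude approximate inverse for illustration
```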
Display of travelling 3D scenes from single integral-imaging capture
NASA Astrophysics Data System (ADS)
Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro
2016-06-01
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. Applying this method improves the quality of 3D display images and videos.
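A toy NumPy sketch of the core operation behind such refocusable displays: shift each elemental image in proportion to its lens index and average, so that one plane of the scene comes into focus. The grid size and the integer shift model are illustrative assumptions, not the authors' transformation.

```python
import numpy as np

def refocus(elementals, shift_per_lens):
    """Shift-and-sum refocusing; elementals: (ny, nx, h, w) array."""
    ny, nx, h, w = elementals.shape
    acc = np.zeros((h, w))
    for iy in range(ny):
        for ix in range(nx):
            dy = int(round((iy - ny // 2) * shift_per_lens))
            dx = int(round((ix - nx // 2) * shift_per_lens))
            acc += np.roll(elementals[iy, ix], (dy, dx), axis=(0, 1))
    return acc / (ny * nx)

rng = np.random.default_rng(0)
capture = rng.random((5, 5, 64, 64))      # stand-in 5x5 elemental-image grid
frame = refocus(capture, shift_per_lens=1.5)
# Sweeping shift_per_lens moves the focused plane through the scene; saving
# one frame per value yields the simulated camera travel described above.
```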
Tracking and imaging humans on heterogeneous infrared sensor arrays for law enforcement applications
NASA Astrophysics Data System (ADS)
Feller, Steven D.; Zheng, Y.; Cull, Evan; Brady, David J.
2002-08-01
We present a plan for the integration of geometric constraints in the source, sensor and analysis levels of sensor networks. The goal of geometric analysis is to reduce the dimensionality and complexity of distributed sensor data analysis so as to achieve real-time recognition and response to significant events. Application scenarios include biometric tracking of individuals, counting and analysis of individuals in groups of humans and distributed sentient environments. We are particularly interested in using this approach to provide networks of low cost point detectors, such as infrared motion detectors, with complex imaging capabilities. By extending the capabilities of simple sensors, we expect to reduce the cost of perimeter and site security applications.
The EarthKAM project: creating space imaging tools for teaching and learning
NASA Astrophysics Data System (ADS)
Dodson, Holly; Levin, Paula; Ride, Sally; Souviney, Randall
2000-07-01
The EarthKAM Project is a NASA-supported partnership of secondary and university students with Earth Science and educational researchers. This report describes an ongoing series of activities that more effectively integrate Earth images into classroom instruction. In this project, students select and analyze images of the Earth taken during Shuttle flights and use the tools of modern science (computers, data analysis tools and the Internet) to disseminate the images and results of their research. A related study, the Visualizing Earth Project, explores in greater detail the cognitive aspects of image processing and the educational potential of visualizations in science teaching and learning. The content and organization of the EarthKAM datasystem of images and metadata are also described. An associated project is linking this datasystem of images with the Getty Thesaurus of Geographic Names, which will allow users to access a wide range of geographic and political information for the regions shown in EarthKAM images. Another project will provide tools for automated feature extraction from EarthKAM images. In order to make EarthKAM resources available to a larger number of schools, the next important goal is to create an integrated datasystem that combines iterative resource validation and publication, with multimedia management of instructional materials.
Pu, Hongbin; Sun, Da-Wen; Ma, Ji; Cheng, Jun-Hu
2015-01-01
The potential of visible and near-infrared hyperspectral imaging was investigated as a rapid and nondestructive technique for classifying fresh and frozen-thawed meats by integrating critical spectral and image features extracted from hyperspectral images in the region of 400-1000 nm. Six feature wavelengths (400, 446, 477, 516, 592 and 686 nm) were identified using uninformative variable elimination and the successive projections algorithm. Image textural features of the principal component images from the hyperspectral images were obtained using histogram statistics (HS), the gray level co-occurrence matrix (GLCM) and the gray level-gradient co-occurrence matrix (GLGCM). Using these spectral and textural features, probabilistic neural network (PNN) models for the classification of fresh and frozen-thawed pork meats were established. Compared with the models using the optimum wavelengths only, the optimum wavelengths with HS image features, and the optimum wavelengths with GLCM image features, the model integrating the optimum wavelengths with GLGCM image features gave the highest classification rates of 93.14% and 90.91% for the calibration and validation sets, respectively. The results indicated that classification accuracy can be improved by combining spectral features with textural features, and that the fusion of critical spectral and textural features has better potential than spectral features alone for classifying fresh and frozen-thawed pork meat. Copyright © 2014 Elsevier Ltd. All rights reserved.
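The GLCM part of such a feature pipeline can be sketched with scikit-image. The stand-in principal-component image, the 8-bit quantization, and the distance/angle choices below are illustrative assumptions (scikit-image >= 0.19 for the graycomatrix spelling).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
pc_image = (rng.random((128, 128)) * 255).astype(np.uint8)  # stand-in PC1 image

glcm = graycomatrix(pc_image, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
# Such GLCM statistics would be concatenated with the six feature-wavelength
# reflectances before training a classifier such as a PNN.
```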
NASA Astrophysics Data System (ADS)
Brandner, Wolfgang; Hormuth, Felix
Lucky Imaging improves the angular resolution of astronomical observations hampered by atmospheric turbulence ("seeing"). Unlike adaptive optics, Lucky Imaging is a passive observing technique with individual integration times comparable to the atmospheric coherence time. Thanks to the advent of essentially noise-free electron-multiplying CCD detectors, Lucky Imaging has seen renewed interest in the past decade. It is now routinely used at a number of 2-5-m class telescopes, such as ESO's NTT. We review the history of Lucky Imaging, present the technical implementation, describe the data analysis philosophy, and show some recent results obtained with this technique. We also discuss the advantages and limitations of Lucky Imaging compared to other passive and active high angular resolution observing techniques.
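A compact NumPy sketch of the basic Lucky Imaging idea: score each short exposure by a sharpness proxy, keep the best few percent, and shift-and-add them on the brightest pixel. The sharpness metric, the 5% selection rate, and the peak-based alignment are common simplifications, not a specific pipeline's recipe.

```python
import numpy as np

def lucky_stack(frames, keep_frac=0.05):
    """frames: (n, h, w) stack of short exposures of the same field."""
    # Sharpness proxy: peak intensity relative to total flux per frame.
    scores = frames.max(axis=(1, 2)) / frames.sum(axis=(1, 2))
    best = np.argsort(scores)[-max(1, int(len(frames) * keep_frac)):]
    h, w = frames.shape[1:]
    acc = np.zeros((h, w))
    for i in best:
        py, px = np.unravel_index(frames[i].argmax(), (h, w))
        acc += np.roll(frames[i], (h // 2 - py, w // 2 - px), axis=(0, 1))
    return acc / len(best)

rng = np.random.default_rng(0)
stacked = lucky_stack(rng.random((200, 64, 64)))   # stand-in exposure stack
```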
Blind subjects construct conscious mental images of visual scenes encoded in musical form.
Cronly-Dillon, J; Persaud, K C; Blore, R
2000-01-01
Blind (previously sighted) subjects are able to analyse, describe and graphically represent a number of high-contrast visual images translated into musical form de novo. We presented musical transforms of a random assortment of photographic images of objects and urban scenes to such subjects, a few of which depicted architectural and other landmarks that may be useful in navigating a route to a particular destination. Our blind subjects were able to use the sound representation to construct a conscious mental image that was revealed by their ability to depict a visual target by drawing it. We noted the similarity between the way the visual system integrates information from successive fixations to form a representation that is stable across eye movements and the way a succession of image frames (encoded in sound) which depict different portions of the image are integrated to form a seamless mental image. Finally, we discuss the profound resemblance between the way a professional musician carries out a structural analysis of a musical composition in order to relate its structure to the perception of musical form and the strategies used by our blind subjects in isolating structural features that collectively reveal the identity of visual form. PMID:11413637
Skounakis, Emmanouil; Farmaki, Christina; Sakkalis, Vangelis; Roniotis, Alexandros; Banitsas, Konstantinos; Graf, Norbert; Marias, Konstantinos
2010-01-01
This paper presents a novel, open-access interactive platform for 3D medical image analysis, simulation and visualization, focusing on oncology images. The platform was developed through constant interaction and feedback from expert clinicians, integrating a thorough analysis of their requirements, with the ultimate goal of assisting in accurately delineating tumors. It allows clinicians not only to work with a large number of 3D tomographic datasets but also to efficiently annotate multiple regions of interest in the same session. Manual and semi-automatic segmentation techniques combined with integrated correction tools assist in the quick and refined delineation of tumors, while different users can add different components related to oncology, such as tumor growth and simulation algorithms for improving therapy planning. The platform has been tested by different users and over a large number of heterogeneous tomographic datasets to ensure stability, usability, extensibility and robustness, with promising results. The platform, a manual and tutorial videos are available at http://biomodeling.ics.forth.gr. It is free to use under the GNU General Public License.
Interactive rendering of acquired materials on dynamic geometry using frequency analysis.
Bagher, Mahdi Mohammad; Soler, Cyril; Subr, Kartic; Belcour, Laurent; Holzschuch, Nicolas
2013-05-01
Shading acquired materials with high-frequency illumination is computationally expensive. Estimating the shading integral requires multiple samples of the incident illumination. The number of samples required may vary across the image, and the image itself may have high- and low-frequency variations, depending on a combination of several factors. Adaptively distributing computational budget across the pixels for shading is a challenging problem. In this paper, we depict complex materials such as acquired reflectances, interactively, without any precomputation based on geometry. In each frame, we first estimate the frequencies in the local light field arriving at each pixel, as well as the variance of the shading integrand. Our frequency analysis accounts for combinations of a variety of factors: the reflectance of the object projecting to the pixel, the nature of the illumination, the local geometry and the camera position relative to the geometry and lighting. We then exploit this frequency information (bandwidth and variance) to adaptively sample for reconstruction and integration. For example, fewer pixels per unit area are shaded for pixels projecting onto diffuse objects, and fewer samples are used for integrating illumination incident on specular objects.
Cao, Hongbao; Duan, Junbo; Lin, Dongdong; Shugart, Yin Yao; Calhoun, Vince; Wang, Yu-Ping
2014-11-15
Integrative analysis of multiple data types can take advantage of their complementary information and therefore may provide higher power to identify potential biomarkers that would be missed by individual data analysis. Due to the different natures of diverse data modalities, data integration is challenging. Here we address the data integration problem by developing a generalized sparse model (GSM) that uses weighting factors to integrate multi-modality data for biomarker selection. As an example, we applied the GSM to a joint analysis of two types of schizophrenia data sets: 759,075 SNPs and 153,594 functional magnetic resonance imaging (fMRI) voxels in 208 subjects (92 cases/116 controls). To solve this small-sample-large-variable problem, we developed a novel sparse representation based variable selection (SRVS) algorithm, with the primary aim of identifying biomarkers associated with schizophrenia. To validate the effectiveness of the selected variables, we performed multivariate classification followed by ten-fold cross validation. We compared our proposed SRVS algorithm with an earlier sparse model based variable selection algorithm for integrated analysis, and with traditional statistical methods for univariate data analysis (the chi-squared test for SNP data and ANOVA for fMRI data). Results showed that our proposed SRVS method can identify novel biomarkers that show stronger capability in distinguishing schizophrenia patients from healthy controls. Moreover, better classification ratios were achieved using biomarkers from both types of data, suggesting the importance of integrative analysis. Copyright © 2014 Elsevier Inc. All rights reserved.
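The SRVS algorithm itself is not reproduced here; as a hedged stand-in, the following scikit-learn sketch shows only the weighted multimodal integration idea: scale the SNP and fMRI feature blocks by modality weighting factors, concatenate them, and run a sparse (L1) selector. The dimensions, weights, and Lasso choice are toy assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 208
snp = rng.random((n, 500))                 # toy SNP block (paper: 759,075)
fmri = rng.random((n, 300))                # toy fMRI voxel block (paper: 153,594)
y = rng.integers(0, 2, n).astype(float)    # case/control labels

w_snp, w_fmri = 0.6, 0.4                   # modality weighting factors
X = np.hstack([w_snp * snp, w_fmri * fmri])

sel = Lasso(alpha=0.05).fit(X, y)
biomarkers = np.flatnonzero(sel.coef_)     # indices of selected variables
```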
Automated Spatio-Temporal Analysis of Remotely Sensed Imagery for Water Resources Management
NASA Astrophysics Data System (ADS)
Bahr, Thomas
2016-04-01
Since 2012, the state of California has faced an extreme drought, which impacts water supply in many ways. Advanced remote sensing is an important technology to better assess water resources, monitor drought conditions and water supplies, plan for drought response and mitigation, and measure drought impacts. In the present case study, recent time series analysis capabilities are used to examine surface water in reservoirs located along the western flank of the Sierra Nevada region of California. The case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language); thus, ENVI analytics run via the object-oriented, IDL-based ENVITask API. A time series of Landsat images (L-5 TM, L-7 ETM+, L-8 OLI) over the AOI was obtained for 1999 to 2015 (October acquisitions). Downloaded from the USGS EarthExplorer web site, they were already georeferenced to a UTM Zone 10N (WGS-84) coordinate system. ENVITasks were used to pre-process the Landsat images as follows:
• Triangulation-based gap-filling for the SLC-off Landsat-7 ETM+ images.
• Spatial subsetting to the same geographic extent.
• Radiometric correction to top-of-atmosphere (TOA) reflectance.
• Atmospheric correction using QUAC®, which determines atmospheric correction parameters directly from the observed pixel spectra in a scene, without ancillary information.
Spatio-temporal analysis was executed with the following tasks:
• Creation of Modified Normalized Difference Water Index (MNDWI, Xu 2006) images to enhance open water features while suppressing noise from built-up land, vegetation, and soil.
• Threshold-based classification of the water index images to extract the water features (see the sketch after this abstract).
• Classification aggregation as a post-classification cleanup step.
• Export of the respective water classes to vector layers for further evaluation in a GIS.
• Animation of the classification series and export to a common video format.
• Plotting the time series of water surface area in square kilometers.
The automated spatio-temporal analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study:
• Integration within any ArcGIS environment, whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool: a Python script file retrieves the parameters from the user interface and runs the precompiled IDL code, which in turn interfaces between the Python script and the relevant ENVITasks.
• Publishing the spatio-temporal analysis tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution for publishing and deploying advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be encapsulated in a single ENVITask.
• Integration into an existing geospatial workflow using the Python-to-IDL Bridge. This mechanism allows calling IDL code within Python on a user-defined platform.
The results of this case study verify the drastic decrease in the amount of surface water in the AOI, indicative of the major drought that is pervasive throughout California. Accordingly, the time series analysis was correlated successfully with the daily reservoir elevations of the Don Pedro reservoir (station DNP, operated by CDEC).
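Outside the ENVI/IDL toolchain, the water-extraction core of this workflow is a short computation. The sketch below computes the MNDWI (Xu 2006) from green and SWIR reflectance bands and thresholds it into a water mask; the stand-in band arrays, the zero threshold, and the 30 m pixel size are illustrative assumptions.

```python
import numpy as np

def mndwi(green, swir, eps=1e-6):
    """Modified Normalized Difference Water Index: (green - SWIR) / (green + SWIR)."""
    green, swir = green.astype(float), swir.astype(float)
    return (green - swir) / (green + swir + eps)

def water_mask(green, swir, threshold=0.0):
    return mndwi(green, swir) > threshold      # True = open water

rng = np.random.default_rng(0)
green = rng.random((100, 100))                 # stand-in TOA reflectance bands
swir = rng.random((100, 100))
water = water_mask(green, swir)
area_km2 = water.sum() * (30 * 30) / 1e6       # Landsat 30 m pixels
# Repeating this per October acquisition yields the surface-water time series.
```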
Multi-object segmentation framework using deformable models for medical imaging analysis.
Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel
2016-08-01
Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed over the past three decades for the extraction of anatomical or functional structures in medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation, combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions, such as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows several deformable models to be integrated to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing a suitable combination to be selected for different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to resolve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications, especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities, giving excellent quantitative results.
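The DMA framework itself is not public API here, so the sketch below runs a single scikit-image active contour on a synthetic blob to illustrate the kind of deformable model the framework coordinates; evolving several such snakes under an interaction-resolving controller is the paper's contribution and is not shown.

```python
import numpy as np
from skimage.filters import gaussian
from skimage.segmentation import active_contour

# Synthetic image: one bright disk to segment.
yy, xx = np.mgrid[0:200, 0:200]
img = ((yy - 100) ** 2 + (xx - 100) ** 2 < 40 ** 2).astype(float)

# Circular initial contour around the object, in (row, col) coordinates
# (scikit-image >= 0.16 convention assumed).
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([100 + 60 * np.sin(s), 100 + 60 * np.cos(s)])

snake = active_contour(gaussian(img, 3), init,
                       alpha=0.015, beta=10, gamma=0.001)
# In a DMA-style setting, one snake per structure would evolve in parallel
# under a control module that resolves overlaps between models.
```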
Zhang, Guojin; Senak, Laurence; Moore, David J
2011-05-01
Spatially resolved infrared (IR) and Raman images are acquired from human hair cross sections or intact hair fibers. The full informational content of these spectra is spatially correlated to hair chemistry, anatomy, and structural organization through univariate and multivariate data analysis. Specific IR and Raman images from untreated human hair describing the spatial dependence of lipid and protein distribution, protein secondary structure, lipid chain conformational order, and the distribution of disulfide cross-links in hair protein are presented in this study. Factor analysis of the image plane acquired with IR microscopy in hair sections permits delineation of specific micro-regions within the hair. These data indicate that both IR and Raman imaging of molecular structural changes in specific regions of hair will prove to be valuable tools in the understanding of hair structure, physiology, and the effect of various stresses upon its integrity.
Hyperspectral imaging for non-contact analysis of forensic traces.
Edelman, G J; Gaston, E; van Leeuwen, T G; Cullen, P J; Aalders, M C G
2012-11-30
Hyperspectral imaging (HSI) integrates conventional imaging and spectroscopy to obtain both spatial and spectral information from a specimen. This technique enables investigators to analyze the chemical composition of traces and simultaneously visualize their spatial distribution. HSI offers significant potential for the detection, visualization, identification and age estimation of forensic traces. The rapid, non-destructive and non-contact features of HSI mark its suitability as an analytical tool for forensic science. This paper provides an overview of the principles, instrumentation and analytical techniques involved in hyperspectral imaging. We describe recent advances in HSI technology motivating forensic science applications, e.g. the development of portable and fast image acquisition systems. Reported forensic science applications are reviewed. Challenges are addressed, such as the analysis of traces on backgrounds encountered in casework, concluding with a summary of possible future applications. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Image analysis-based modelling for flower number estimation in grapevine.
Millan, Borja; Aquino, Arturo; Diago, Maria P; Tardaguila, Javier
2017-02-01
Grapevine flower number per inflorescence provides valuable information that can be used for assessing yield. Considerable research has been conducted toward developing a technological tool for this purpose, based on image analysis and predictive modelling. However, the behaviour of variety-independent predictive models and their yield prediction capabilities on a wide set of varieties has never been evaluated. Inflorescence images from 11 grapevine Vitis vinifera L. varieties were acquired under field conditions. The flower number per inflorescence and the flower number visible in the images were calculated manually, and automatically using an image analysis algorithm. These datasets were used to calibrate and evaluate the behaviour of two linear (single-variable and multivariable) models and a nonlinear variety-independent model. As a result, the integrated tool composed of the image analysis algorithm and the nonlinear approach showed the highest performance and robustness (RPD = 8.32, RMSE = 37.1). The yield estimation capabilities of the flower number in conjunction with fruit set rate (R² = 0.79) and average berry weight (R² = 0.91) were also tested. This study proves the accuracy of flower number per inflorescence estimation using an image analysis algorithm and a nonlinear model that is generally applicable to different grapevine varieties. This provides a fast, non-invasive and reliable tool for estimation of yield at harvest. © 2016 Society of Chemical Industry.
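Calibrating such a nonlinear model is a one-call fit in SciPy. The sketch below maps the flower count visible in an image to the true count per inflorescence; the power-law form N = a * n**b and the synthetic calibration data are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(n_visible, a, b):
    """Hypothetical variety-independent mapping from visible to total flowers."""
    return a * np.power(n_visible, b)

rng = np.random.default_rng(0)
n_visible = rng.uniform(50, 400, 80)                        # algorithm's counts
n_true = 1.8 * n_visible ** 1.05 * rng.normal(1, 0.05, 80)  # manual counts

params, _ = curve_fit(model, n_visible, n_true, p0=(1.0, 1.0))
rmse = np.sqrt(np.mean((model(n_visible, *params) - n_true) ** 2))
```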
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create compelling three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method for calculating depth from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
Methods of training the graduate level and professional geologist in remote sensing technology
NASA Technical Reports Server (NTRS)
Kolm, K. E.
1981-01-01
Requirements for a basic course in remote sensing to accommodate the needs of graduate-level and professional geologists are described. The course should stress the general topics of basic remote sensing theory; the theory and data types of different remote sensing systems; an introduction to the basic concepts of computer image processing and analysis; the characteristics of different data types; the development of methods for geological interpretation; the integration of all scales and data types of remote sensing in a given study; the integration of other data bases (geophysical and geochemical) into a remote sensing study; and geological remote sensing applications. The laboratories should stress hands-on experience to reinforce the concepts and procedures presented in the lecture. The geologist should then be encouraged to pursue a second course in computer image processing and analysis of remotely sensed data.
Rough-Fuzzy Clustering and Unsupervised Feature Selection for Wavelet Based MR Image Segmentation
Maji, Pradipta; Roy, Shaswati
2015-01-01
Image segmentation is an indispensable process in the visualization of human tissues, particularly during clinical analysis of brain magnetic resonance (MR) images. For many human experts, manual segmentation is a difficult and time-consuming task, which makes an automated brain MR image segmentation method desirable. In this regard, this paper presents a new segmentation method for brain MR images, judiciously integrating the merits of rough-fuzzy computing and multiresolution image analysis techniques. The proposed method assumes that the major brain tissues, namely, gray matter, white matter, and cerebrospinal fluid, have different textural properties in MR images. Dyadic wavelet analysis is used to extract a scale-space feature vector for each pixel, while rough-fuzzy clustering is used to address the uncertainty problem of brain MR image segmentation. An unsupervised feature selection method is introduced, based on the maximum relevance-maximum significance criterion, to select relevant and significant textural features for the segmentation problem, while a mathematical morphology based skull stripping preprocessing step is proposed to remove non-cerebral tissues such as the skull. The performance of the proposed method, along with a comparison with related approaches, is demonstrated on a set of synthetic and real brain MR images using standard validity indices. PMID:25848961
Yi, Faliu; Lee, Jieun; Moon, Inkyu
2014-05-01
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
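The GPU parallelization is the paper's point, but the per-pixel statistical test is easy to sketch on a CPU. The NumPy stand-in below reconstructs one depth plane from lookup-table shifts and classifies each point as focus or off-focus by the variance of its samples; the shift values and the variance threshold are illustrative assumptions.

```python
import numpy as np

def focus_map(elementals, shifts, var_thresh):
    """elementals: (k, h, w); shifts: (k, 2) integer (dy, dx) per elemental
    image for one depth plane (the lookup-table entries for that depth)."""
    stack = np.stack([np.roll(e, tuple(s), axis=(0, 1))
                      for e, s in zip(elementals, shifts)])
    recon = stack.mean(axis=0)             # reconstructed depth-image values
    variance = stack.var(axis=0)           # spread of each point's samples
    return recon, variance < var_thresh    # True = focus, False = off-focus

rng = np.random.default_rng(0)
ei = rng.random((9, 64, 64))               # stand-in elemental images
table = rng.integers(-3, 4, (9, 2))        # stand-in lookup-table shifts
recon, focus = focus_map(ei, table, var_thresh=0.05)
# On a GPU, each depth plane and each pixel's variance test run in parallel.
```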
NASA Technical Reports Server (NTRS)
Rochon, Gilbert L.
1989-01-01
A user requirements analysis (URA) was undertaken to determine an appropriate public domain Geographic Information System (GIS) software package for potential integration with NASA's LAS (Land Analysis System) 5.0 image processing system. The necessity for a public domain system was underscored by the perceived need for source code access and flexibility in tailoring the GIS to the needs of a heterogeneous group of end-users, and by specific constraints imposed by LAS and its user interface, the Transportable Applications Executive (TAE). Subsequently, a review was conducted of a variety of public domain GIS candidates, including GRASS 3.0, MOSS, IEMIS, and two university-based packages, IDRISI and KBGIS. The review method was a modified version of the GIS evaluation process developed by the Federal Interagency Coordinating Committee on Digital Cartography. One IEMIS-derivative product, the ALBE (AirLand Battlefield Environment) GIS, emerged as the most promising candidate for integration with LAS. IEMIS (Integrated Emergency Management Information System) was developed by the Federal Emergency Management Agency (FEMA). ALBE GIS is currently under development at the Pacific Northwest Laboratory under contract with the U.S. Army Corps of Engineers' Engineering Topographic Laboratory (ETL). Accordingly, recommendations are offered with respect to a potential LAS/ALBE GIS linkage and to further system enhancements, including coordination with the development of the Spatial Analysis and Modeling System (SAMS) GIS under Goddard's IDM (Intelligent Data Management) effort at the National Space Science Data Center.
Gerson, Cindy J; Goldstein, Steven; Heacox, Albert E
2009-10-01
Cryopreservation is commonly used for the long-term storage of heart valve allografts. Despite the excellent hemodynamic performance and durability of cryopreserved allografts, reports have questioned whether cryopreservation affects the valvular structural proteins, collagen and elastin. This study uses two-photon laser scanning confocal microscopy (LSCM) to evaluate the effect of cryopreservation on collagen and elastin integrity within the leaflet and conduit of aortic and pulmonary human heart valves. To permit pairwise comparisons of fresh and cryopreserved tissue, test valves were bisected longitudinally with one segment imaged fresh and the other imaged after cryopreservation and brief storage in liquid nitrogen. Collagen was detected by second harmonic generation (SHG) stimulation and elastin by autofluorescence excitation. Qualitative analysis of all resultant images indicated the maintenance of collagen and elastin structure within leaflet and conduit post-cryopreservation. Analysis of the optimized percent laser transmission (OPLT) required for full dynamic range imaging of collagen and elastin showed that OPLT observations were highly variable among both fresh and cryopreserved samples. Changes in donor-specific average OPLT in response to cryopreservation exhibited no consistent directional trend. The donor-aggregated results predominantly showed no statistically significant change in collagen and elastin average OPLT due to cryopreservation. Since OPLT has an inverse relationship with structural signal intensity, these results indicate that there was largely no statistical difference in collagen and elastin signal strength between fresh and cryopreserved tissue. Overall, this study indicates that the conventional cryopreservation of human heart valve allografts does not detrimentally affect their collagen and elastin structural integrity.
NASA Astrophysics Data System (ADS)
Ma, Kevin; Wang, Ximing; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent
2015-03-01
In the past, we have developed and demonstrated a multiple sclerosis (MS) eFolder system for patient data storage, image viewing, and automatic lesion quantification, with results stored in DICOM-SR format. The web-based system aims to be integrated into DICOM-compliant clinical and research environments to aid clinicians in patient treatment and disease tracking. This year, we have further developed the eFolder system to handle big data analysis and data mining in today's medical imaging field. The database has been updated to allow data mining and data look-up of DICOM-SR lesion analysis contents. Longitudinal studies are tracked, and any changes in lesion volumes and brain parenchyma volumes are calculated and shown on the web-based user interface as graphical representations. Longitudinal lesion characteristic changes are compared with the patient's disease history, including treatments, symptom progression, and any other changes in the disease profile. The image viewer has been updated so that imaging studies can be viewed side-by-side to allow visual comparisons. We aim to use the web-based medical imaging informatics eFolder system to demonstrate big data analysis in medical imaging, and to use the analysis results to predict MS disease trends and patterns in Hispanic and Caucasian populations in our pilot study. The discovery of disease patterns between the two ethnicities is a big data analysis result that will help lead to personalized patient care and treatment planning.
Hybrid Electron Microscopy Normal Mode Analysis graphical interface and protocol.
Sorzano, Carlos Oscar S; de la Rosa-Trevín, José Miguel; Tama, Florence; Jonić, Slavica
2014-11-01
This article presents an integral graphical interface to the Hybrid Electron Microscopy Normal Mode Analysis (HEMNMA) approach that was developed for capturing continuous motions of large macromolecular complexes from single-particle EM images. HEMNMA was shown to be a good approach for analyzing multiple conformations of a macromolecular complex, but it could not be widely used in the EM field due to the lack of an integral interface. In particular, its use required switching among different software sources, and selecting modes for image analysis was difficult without a graphical interface. The graphical interface was thus developed to simplify the practical use of HEMNMA. It is implemented in the open-source software package Xmipp 3.1 (http://xmipp.cnb.csic.es) and only a small part of it relies on MATLAB, which is accessible through the main interface. Such integration provides the user with an easy way to perform the analysis of macromolecular dynamics and forms a direct connection to the single-particle reconstruction process. A step-by-step HEMNMA protocol with the graphical interface is given in full detail in the Supplementary material. The graphical interface will be useful to experimentalists who are interested in studies of continuous conformational changes of macromolecular complexes beyond the modeling of continuous heterogeneity in single particle reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Herskovits, E. H.; Megalooikonomou, V.; Davatzikos, C.; Chen, A.; Bryan, R. N.; Gerring, J. P.
1999-01-01
PURPOSE: To determine whether there is an association between the spatial distribution of lesions detected at magnetic resonance (MR) imaging of the brain in children after closed-head injury and the development of secondary attention-deficit/hyperactivity disorder (ADHD). MATERIALS AND METHODS: Data obtained from 76 children without prior history of ADHD were analyzed. MR images were obtained 3 months after closed-head injury. After manual delineation of lesions, images were registered to the Talairach coordinate system. For each subject, registered images and secondary ADHD status were integrated into a brain-image database, which contains depiction (visualization) and statistical analysis software. Using this database, we assessed visually the spatial distributions of lesions and performed statistical analysis of image and clinical variables. RESULTS: Of the 76 children, 15 developed secondary ADHD. Depiction of the data suggested that children who developed secondary ADHD had more lesions in the right putamen than children who did not develop secondary ADHD; this impression was confirmed statistically. After Bonferroni correction, we could not demonstrate significant differences between secondary ADHD status and lesion burdens for the right caudate nucleus or the right globus pallidus. CONCLUSION: Closed-head injury-induced lesions in the right putamen in children are associated with subsequent development of secondary ADHD. Depiction software is useful in guiding statistical analysis of image data.
Time-of-flight depth image enhancement using variable integration time
NASA Astrophysics Data System (ADS)
Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong
2013-03-01
Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance from a large amount of infra-red light, which needs to be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times, exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration times, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured a moving bar of a metronome with different integration times. The experiment shows that the proposed method can effectively remove motion artifacts while preserving an SNR comparable to that of depth images acquired with a long integration time.
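The fusion step can be sketched with PyWavelets. The snippet below decomposes the (already depth-calibrated) short- and long-integration depth images, averages the approximation coefficients, and keeps the larger-magnitude detail coefficient at each position; the wavelet, level, and fusion rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np
import pywt

def fuse_depth(short_d, long_d, wavelet="db2", level=2):
    """Fuse two depth images in the wavelet domain."""
    cs = pywt.wavedec2(short_d, wavelet, level=level)
    cl = pywt.wavedec2(long_d, wavelet, level=level)
    fused = [(cs[0] + cl[0]) / 2.0]                  # average approximations
    for ds, dl in zip(cs[1:], cl[1:]):               # per-level detail triples
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(ds, dl)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
short_depth = rng.random((128, 128))    # low SNR, few motion artifacts
long_depth = short_depth + rng.normal(0, 0.01, (128, 128))
fused = fuse_depth(short_depth, long_depth)
```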
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from the waves, improve the resolution of an image, and highlight defects in it. Since some similarity exists among all waveform-based nondestructive evaluation (NDE) methods, a common software platform containing multiple signal- and image-processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data. It offers the user hundreds of basic and advanced signal- and image-processing capabilities, including esoteric 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included for signal- and image-processing capability, so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data such as that from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data such as that from acoustic emission, vibration, or earthquakes.
Lotz, Judith M; Hoffmann, Franziska; Lotz, Johannes; Heldmann, Stefan; Trede, Dennis; Oetjen, Janina; Becker, Michael; Ernst, Günther; Maas, Peter; Alexandrov, Theodore; Guntinas-Lichius, Orlando; Thiele, Herbert; von Eggeling, Ferdinand
2017-07-01
In recent years, matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) has become an imaging technique with the potential to characterize complex tumor tissue. Its combination with other modalities and with standard histology techniques has been achieved through image registration methods, enhancing the analysis possibilities. We analyzed an oral squamous cell carcinoma across up to 162 consecutive sections with MALDI MSI, hematoxylin and eosin (H&E) staining and immunohistochemistry (IHC) against CD31. Spatial segmentation maps of the MALDI MSI data were generated by similarity-based clustering of the spectra. Next, the maps were overlaid with the H&E microscopy images and the results were interpreted by an experienced pathologist. Image registration was used to fuse both modalities and to build a three-dimensional (3D) model. To visualize structures below the resolution of MALDI MSI, IHC was carried out against CD31 and the results were additionally embedded. The integration of 3D MALDI MSI data with H&E and IHC images allows a correlation between histological and molecular information, leading to a better understanding of the functional heterogeneity of tumors. This article is part of a Special Issue entitled: MALDI Imaging, edited by Dr. Corinna Henkel and Prof. Peter Hoffmann. Copyright © 2016 Elsevier B.V. All rights reserved.
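The spatial-segmentation step has a simple generic form: cluster the per-pixel spectra by similarity and map the cluster labels back onto the tissue grid. The scikit-learn sketch below uses KMeans and toy dimensions as stand-ins for the paper's actual similarity-based clustering pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w, n_mz = 60, 80, 500
cube = rng.random((h, w, n_mz))              # stand-in MSI data cube
spectra = cube.reshape(-1, n_mz)             # one spectrum per pixel

labels = KMeans(n_clusters=5, n_init=10).fit_predict(spectra)
segmentation_map = labels.reshape(h, w)      # overlaid on the H&E image next
```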
NASA Astrophysics Data System (ADS)
Hua, H.; Owen, S. E.; Yun, S.; Lundgren, P.; Fielding, E. J.; Agram, P.; Manipon, G.; Stough, T. M.; Simons, M.; Rosen, P. A.; Wilson, B. D.; Poland, M. P.; Cervelli, P. F.; Cruz, J.
2013-12-01
Space-based geodetic measurement techniques such as Interferometric Synthetic Aperture Radar (InSAR) and Continuous Global Positioning System (CGPS) are now important elements in our toolset for monitoring earthquake-generating faults, volcanic eruptions, hurricane damage, landslides, reservoir subsidence, and other natural and man-made hazards. Geodetic imaging's unique ability to capture surface deformation with high spatial and temporal resolution has revolutionized both earthquake science and volcanology. Continuous monitoring of surface deformation and surface change before, during, and after natural hazards improves decision-making through better forecasts, increased situational awareness, and more informed recovery. However, analyses of InSAR and GPS data sets are currently handcrafted following events and are not generated rapidly and reliably enough for use in operational response to natural disasters. Additionally, the sheer data volumes needed to handle a continuous stream of InSAR data sets present a bottleneck. It has been estimated that continuous processing of InSAR coverage of California alone over 3 years would reach PB-scale data volumes. Our Advanced Rapid Imaging and Analysis for Monitoring Hazards (ARIA-MH) science data system enables both science and decision-making communities to monitor areas of interest with derived geodetic data products via seamless data preparation, processing, discovery, and access. We will present our findings on the use of hybrid-cloud computing to improve the timely processing and delivery of geodetic data products, on integrating event notifications from USGS to improve timely processing for response, and on providing browse results for quick looks with other tools for integrative analysis.
ERIC Educational Resources Information Center
Hafner, Mathias
2008-01-01
Cell biology and molecular imaging technologies have made enormous progress in basic research. However, the transfer of this knowledge to the pharmaceutical drug discovery process, or even therapeutic improvements for disorders such as neuronal diseases, is still in its infancy. This transfer needs scientists who can integrate basic research with…
Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) for WFIRST/AFTA
NASA Technical Reports Server (NTRS)
Gong, Qian; Mcelwain, Michael; Greeley, Bradford; Grammer, Bryan; Marx, Catherine; Memarsadeghi, Nargess; Stapelfeldt, Karl; Hilton, George; Sayson, Jorge Llop; Perrin, Marshall;
2015-01-01
Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) is a lenslet array based integral field spectrometer (IFS) designed for high contrast imaging of extrasolar planets. PISCES will be used to advance the technology readiness of the high contrast IFS baselined on the Wide-Field InfraRed Survey Telescope/Astrophysics Focused Telescope Assets (WFIRST/AFTA) coronagraph instrument. PISCES will be integrated into the high contrast imaging testbed (HCIT) at the Jet Propulsion Laboratory and will work with both the Hybrid Lyot Coronagraph (HLC) and the Shaped Pupil Coronagraph (SPC) configurations. We discuss why the lenslet array based IFS was selected for PISCES. We present the PISCES optical design, including the similarities and differences of lenslet based IFSs relative to conventional spectrometers, the trade-off between a refractive and a reflective design, as well as the specific function of the pinhole mask on the back surface of the lenslet array in further suppressing speckles introduced by star light. The optical analysis, alignment plan, and mechanical design of the instrument are discussed.
Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) for WFIRST-AFTA
NASA Technical Reports Server (NTRS)
Gong, Qian; Mcelwain, Michael; Greeley, Bradford; Grammer, Bryan; Marx, Catherine; Memarsadeghi, Nargess; Stapelfeldt, Karl; Hilton, George; Sayson, Jorge Llop; Perrin, Marshall;
2015-01-01
Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) is a lenslet array based integral field spectrometer (IFS) designed for high contrast imaging of extrasolar planets. PISCES will be used to advance the technology readiness of the high contrast IFS baselined on the Wide-Field InfraRed Survey Telescope/Astrophysics Focused Telescope Assets (WFIRST-AFTA) coronagraph instrument. PISCES will be integrated into the high contrast imaging testbed (HCIT) at the Jet Propulsion Laboratory (JPL) and will work with both the Hybrid Lyot Coronagraph (HLC) and the Shaped Pupil Coronagraph (SPC) configurations. We discuss why the lenslet array based IFS was selected for PISCES. We present the PISCES optical design, including the similarities and differences of lenslet based IFSs to normal spectrometers, the trade-off between a refractive design and reflective design, as well as the specific function of our pinhole mask on the back surface of the lenslet array to reduce the diffraction from the edge of the lenslets. The optical analysis, alignment plan, and mechanical design of the instrument will be discussed.
NASA Technical Reports Server (NTRS)
Solarna, David; Moser, Gabriele; Le Moigne-Stewart, Jacqueline; Serpico, Sebastiano B.
2017-01-01
Because of the large variety of sensors and spacecraft collecting data, planetary science needs to integrate various multi-sensor and multi-temporal images. These multiple data represent a precious asset, as they allow the study of targets' spectral responses and of changes in surface structure; because of their variety, they also require accurate and robust registration. A new crater detection algorithm, used to extract features that are then integrated into an image registration framework, is presented. A marked point process-based method has been developed to model the spatial distribution of elliptical objects (i.e. the craters), and a birth-death Markov chain Monte Carlo method, coupled with a region-based scheme aiming at computational efficiency, is used to find the optimal configuration fitting the image. The extracted features are exploited, together with a newly defined fitness function based on a modified Hausdorff distance, by an image registration algorithm whose architecture has been designed to minimize computational time.
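A small SciPy/NumPy sketch of a modified Hausdorff distance (in the Dubuisson-Jain sense) between two sets of extracted crater centers, the kind of fitness term such a registration step could evaluate; the paper's fitness function is a modified variant and may differ in detail.

```python
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a, b):
    """a, b: (n, 2) and (m, 2) arrays of feature coordinates."""
    d = cdist(a, b)                        # pairwise Euclidean distances
    return max(d.min(axis=1).mean(),       # mean nearest-neighbour A -> B
               d.min(axis=0).mean())       # mean nearest-neighbour B -> A

craters_ref = np.array([[10.0, 12.0], [40.0, 55.0], [70.0, 30.0]])
craters_new = craters_ref + 1.5            # same field under a small offset
score = modified_hausdorff(craters_ref, craters_new)   # lower = better fit
```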
Parallel computing in experimental mechanics and optical measurement: A review (II)
NASA Astrophysics Data System (ADS)
Wang, Tianyi; Kemao, Qian
2018-05-01
With advantages such as non-destructiveness, high sensitivity and high accuracy, optical techniques have been successfully applied to the measurement of various important physical quantities in experimental mechanics (EM) and optical measurement (OM). However, in pursuit of higher image resolutions for higher accuracy, the computational burden of optical techniques has become much heavier. Therefore, in recent years, heterogeneous platforms composed of hardware such as CPUs and GPUs have been widely employed to accelerate these techniques due to their cost-effectiveness, short development cycle, easy portability, and high scalability. In this paper, we analyze various works by first illustrating their different architectures, followed by introducing their various parallel patterns for high speed computation. Next, we review the effects of CPU and GPU parallel computing specifically in EM & OM applications in a broad scope, including digital image/volume correlation, fringe pattern analysis, tomography, hyperspectral imaging, computer-generated holograms, and integral imaging. In our survey, we have found that high parallelism can always be exploited in such applications for the development of high-performance systems.
Integration of Network Biology and Imaging to Study Cancer Phenotypes and Responses.
Tian, Ye; Wang, Sean S; Zhang, Zhen; Rodriguez, Olga C; Petricoin, Emanuel; Shih, Ie-Ming; Chan, Daniel; Avantaggiati, Maria; Yu, Guoqiang; Ye, Shaozhen; Clarke, Robert; Wang, Chao; Zhang, Bai; Wang, Yue; Albanese, Chris
2014-01-01
Ever-growing "omics" data and continuously accumulating biological knowledge provide an unprecedented opportunity to identify molecular biomarkers and their interactions that are responsible for cancer phenotypes that can be accurately defined by clinical measurements such as in vivo imaging. Since signaling or regulatory networks are dynamic and context-specific, systematic efforts to characterize such structural alterations must effectively distinguish significant network rewiring from random background fluctuations. Here we introduce a novel integration of network biology and imaging to study cancer phenotypes and responses to treatments at the molecular systems level. Specifically, Differential Dependence Network (DDN) analysis was used to detect statistically significant topological rewiring in molecular networks between two phenotypic conditions, and in vivo Magnetic Resonance Imaging (MRI) was used to more accurately define phenotypic sample groups for such differential analysis. We applied DDN to analyze two distinct phenotypic groups of breast cancer and study how genomic instability affects the molecular network topologies in high-grade ovarian cancer. Further, FDA-approved arsenic trioxide (ATO) and the ND2-SmoA1 mouse model of Medulloblastoma (MB) were used to extend our analyses of combined MRI and Reverse Phase Protein Microarray (RPMA) data to assess tumor responses to ATO and to uncover the complexity of therapeutic molecular biology.
NASA Astrophysics Data System (ADS)
Großerueschkamp, Frederik; Bracht, Thilo; Diehl, Hanna C.; Kuepper, Claus; Ahrens, Maike; Kallenbach-Thieltges, Angela; Mosig, Axel; Eisenacher, Martin; Marcus, Katrin; Behrens, Thomas; Brüning, Thomas; Theegarten, Dirk; Sitek, Barbara; Gerwert, Klaus
2017-03-01
Diffuse malignant mesothelioma (DMM) is a heterogeneous malignant neoplasia manifesting with three subtypes: epithelioid, sarcomatoid and biphasic. DMM exhibits a high degree of spatial heterogeneity that complicates a thorough understanding of the different underlying molecular processes in each subtype. We present a novel approach to spatially resolve the heterogeneity of a tumour in a label-free manner by integrating FTIR imaging and laser capture microdissection (LCM). Subsequent proteome analysis of the dissected homogeneous samples additionally provides molecular resolution. FTIR imaging resolves tumour subtypes within tissue thin-sections in an automated and label-free manner with an accuracy of about 85% for DMM subtypes. Even in highly heterogeneous tissue structures, our label-free approach can identify small regions of interest, which can be dissected as homogeneous samples using LCM. Subsequent proteome analysis provides a location-specific molecular characterization. Applied to DMM subtypes, we identify 142 differentially expressed proteins, including five protein biomarkers commonly used in DMM immunohistochemistry panels. Thus, FTIR imaging resolves not only morphological alterations within tissue but also alterations at the level of single proteins in tumour subtypes. Our fully automated FTIR-guided LCM workflow opens new avenues for collecting homogeneous samples for precise and predictive biomarkers from omics studies.
Lippolis, Giuseppe; Edsjö, Anders; Helczynski, Leszek; Bjartell, Anders; Overgaard, Niels Chr
2013-09-05
Prostate cancer is one of the leading causes of cancer-related deaths. For diagnosis, predicting the outcome of the disease, and for assessing potential new biomarkers, pathologists and researchers routinely analyze histological samples. Morphological and molecular information may be integrated by aligning microscopic histological images in a multiplex fashion. This process is usually time-consuming and results in intra- and inter-user variability. The aim of this study was to investigate the feasibility of using modern image analysis methods for automated alignment of microscopic images from differently stained adjacent paraffin sections from prostatic tissue specimens. Tissue samples, obtained from biopsy or radical prostatectomy, were sectioned and stained with either hematoxylin & eosin (H&E), immunohistochemistry for p63 and AMACR, or time-resolved fluorescence (TRF) for the androgen receptor (AR). Image pairs were aligned allowing for translation, rotation and scaling. The registration was performed automatically by first detecting landmarks in both images using the scale-invariant feature transform (SIFT), followed by the well-known RANSAC protocol for finding point correspondences, and finally a Procrustes fit for the alignment. The registration results were evaluated using both visual and quantitative criteria as defined in the text. Three experiments were carried out. First, images of consecutive tissue sections stained with H&E and p63/AMACR were successfully aligned in 85 of 88 cases (96.6%). The failures occurred in 3 out of 13 cores with highly aggressive cancer (Gleason score ≥ 8). Second, TRF and H&E image pairs were aligned correctly in 103 out of 106 cases (97%). The third experiment considered the alignment of image pairs with the same staining (H&E) coming from a stack of 4 sections. The success rate for alignment dropped from 93.8% in adjacent sections to 22% for the sections furthest apart. The proposed method is both reliable and fast and therefore well suited for automatic segmentation and analysis of specific areas of interest, combining morphological information with protein expression data from three consecutive tissue sections. Finally, the performance of the algorithm seems to be largely unaffected by the Gleason grade of the prostate tissue samples examined, at least up to Gleason score 7.
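The landmark-plus-RANSAC pipeline described above maps directly onto standard OpenCV calls. A minimal sketch follows, assuming grayscale section images on disk; note that it estimates the 4-DOF similarity transform (translation, rotation, scale) directly within RANSAC rather than as a separate Procrustes step, and the file names are hypothetical:

```python
import cv2
import numpy as np

def align_similarity(fixed, moving):
    """Estimate a translation+rotation+scale transform mapping `moving`
    onto `fixed` from SIFT correspondences filtered by RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(fixed, None)
    k2, d2 = sift.detectAndCompute(moving, None)
    # Ratio-test matching of SIFT descriptors.
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects outlier correspondences; the fitted model is a
    # 4-DOF similarity transform (translation, rotation, uniform scale).
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return M

# Usage: warp the moving section onto the fixed one (hypothetical files).
fixed = cv2.imread("he_section.png", cv2.IMREAD_GRAYSCALE)
moving = cv2.imread("p63_amacr_section.png", cv2.IMREAD_GRAYSCALE)
M = align_similarity(fixed, moving)
registered = cv2.warpAffine(moving, M, fixed.shape[::-1])
```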
Laurinaviciene, Aida; Plancoulaine, Benoit; Baltrusaityte, Indra; Meskauskas, Raimundas; Besusparis, Justinas; Lesciute-Krilaviciene, Daiva; Raudeliunas, Darius; Iqbal, Yasir; Herlin, Paulette; Laurinavicius, Arvydas
2014-01-01
Digital immunohistochemistry (IHC) is one of the most promising applications brought by new generation image analysis (IA). While conventional IHC staining quality is monitored by semi-quantitative visual evaluation of tissue controls, IA may require more sensitive measurement. We designed an automated system to digitally monitor IHC multi-tissue controls, based on SQL-level integration of the laboratory information system with image and statistical analysis tools. Consecutive sections of a TMA containing 10 cores of breast cancer tissue were used as tissue controls in routine Ki67 IHC testing. The Ventana slide label barcode ID was sent to the LIS to register the serial section sequence. The slides were stained and scanned (Aperio ScanScope XT), and IA was performed by the Aperio/Leica Colocalization and Genie Classifier/Nuclear algorithms. SQL-based integration ensured automated statistical analysis of the IA data by the SAS Enterprise Guide project. Factor analysis and plot visualizations were performed to explore slide-to-slide variation of the Ki67 IHC staining results in the control tissue. Slide-to-slide intra-core IHC staining analysis revealed rather significant variation of the variables reflecting the sample size, while Brown and Blue Intensity were relatively stable. To further investigate this variation, the IA results from the 10 cores were aggregated to minimize tissue-related variance. Factor analysis revealed an association between the variables reflecting the sample size detected by IA and Blue Intensity. Since the main feature to be extracted from the tissue controls was staining intensity, we further explored the variation of the intensity variables in the individual cores. MeanBrownBlue Intensity ((Brown+Blue)/2) and DiffBrownBlue Intensity (Brown-Blue) were introduced to better contrast the absolute intensity and the colour balance variation in each core; relevant factor scores were extracted. Finally, tissue-related factors of IHC staining variance were explored in the individual tissue cores. Our solution enabled monitoring of IHC multi-tissue control staining by means of IA, followed by automated statistical analysis, integrated into the laboratory workflow. We found that, even in consecutive serial tissue sections, tissue-related factors affected the IHC IA results; meanwhile, less intense blue counterstain was associated with a smaller amount of tissue detected by the IA tools.
Bravo-Zanoguera, Miguel E; Laris, Casey A; Nguyen, Lam K; Oliva, Mike; Price, Jeffrey H
2007-01-01
Efficient image cytometry of a conventional microscope slide means rapid acquisition and analysis of 20 gigapixels of image data (at 0.3-µm sampling). The voluminous data motivate increased acquisition speed to enable many biomedical applications. Continuous-motion time-delay-and-integrate (TDI) scanning has the potential to speed image acquisition while retaining sensitivity, but the challenge of implementing high-resolution autofocus operating simultaneously with acquisition has limited its adoption. We develop a dynamic autofocus system for this need using: 1. a "volume camera," consisting of nine fiber optic imaging conduits to charge-coupled device (CCD) sensors, that acquires images in parallel from different focal planes; 2. an array of mixed analog-digital processing circuits that measure the high spatial frequencies of the multiple image streams to create focus indices; and 3. a software system that reads and analyzes the focus data streams and calculates best focus for closed feedback loop control. Our system updates autofocus at 56 Hz (or once every 21 µm of stage travel) to collect sharply focused images sampled at 0.3 × 0.3 µm²/pixel at a stage speed of 2.3 mm/s. The system, tested by focusing in phase contrast and imaging long fluorescence strips, achieves high-performance closed-loop image-content-based autofocus in continuous scanning for the first time.
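The focus-index idea, scoring each focal-plane stream by its high-spatial-frequency content and then picking the best plane in closed loop, can be sketched in software as follows. This is a simplified stand-in for the paper's analog circuits; the nine-plane geometry and the acquisition call are assumptions:

```python
import numpy as np
from scipy import ndimage

def focus_index(img):
    """High-spatial-frequency energy of one image stream; a software
    stand-in for the analog focus-measure circuits."""
    hp = ndimage.laplace(img.astype(float))   # high-pass filter
    return float(np.mean(hp ** 2))

def best_focus(planes, z_positions):
    """Given images from parallel focal planes, fit a parabola to the
    focus indices and return the interpolated best-focus position."""
    idx = np.array([focus_index(p) for p in planes])
    a, b, c = np.polyfit(z_positions, idx, 2)
    return -b / (2 * a) if a < 0 else z_positions[np.argmax(idx)]

# Inside the scan loop, the result would feed the feedback controller:
# planes = camera.read_all_planes()                  # hypothetical call
# z_cmd = best_focus(planes, np.linspace(-4, 4, 9))  # nine planes, microns
```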
NASA Astrophysics Data System (ADS)
Wang, Ximing; Kim, Bokkyu; Park, Ji Hoon; Wang, Erik; Forsyth, Sydney; Lim, Cody; Ravi, Ragini; Karibyan, Sarkis; Sanchez, Alexander; Liu, Brent
2017-03-01
Quantitative imaging biomarkers are used widely in clinical trials for tracking and evaluation of medical interventions. Previously, we have presented a web-based informatics system utilizing quantitative imaging features for predicting outcomes in stroke rehabilitation clinical trials. The system integrates imaging feature extraction tools and a web-based statistical analysis tool. The tools include a generalized linear mixed model (GLMM) that can investigate potential significance and correlation based on features extracted from clinical data and quantitative biomarkers. The imaging feature extraction tools allow the user to collect imaging features, and the GLMM module allows the user to select clinical data and imaging features such as stroke lesion characteristics from the database as regressors and regressands. This paper discusses the application scenario and evaluation results of the system in a stroke rehabilitation clinical trial. The system was utilized to manage clinical data and extract imaging biomarkers including stroke lesion volume, location and ventricle/brain ratio. The GLMM module was validated and the efficiency of data analysis was also evaluated.
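A minimal sketch of the statistical core, using a linear mixed model (a common special case of the GLMM) from statsmodels with a per-subject random intercept. The column names and synthetic data below are placeholders, not the system's actual schema:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial table: one row per subject visit, with imaging
# biomarkers (lesion volume, ventricle/brain ratio) as regressors.
rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(30), 4),
    "motor_score": rng.normal(50, 10, n),
    "lesion_volume": rng.gamma(2.0, 5.0, n),
    "vb_ratio": rng.normal(0.25, 0.05, n),
})

# Mixed model: motor outcome on imaging biomarkers, with a random
# intercept per subject to absorb repeated-measures correlation.
model = smf.mixedlm("motor_score ~ lesion_volume + vb_ratio",
                    data=df, groups=df["subject_id"])
print(model.fit().summary())
```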
Yang, Tian T; Weng, Shi F; Zheng, Na; Pan, Qing H; Cao, Hong L; Liu, Liang; Zhang, Hai D; Mu, Da W
2011-04-15
Fourier transform infrared (FTIR) imaging and microspectroscopy have been extensively applied in the identification and investigation of both healthy and diseased tissues. FTIR imaging can be used to determine the biodistribution of several molecules of interest (carbohydrates, lipids, proteins) for tissue analysis, without the need for prior staining of these tissues. Molecular structure data, such as protein secondary structure and the collagen triple helix, can also be obtained from the same analysis. Thus, several histopathological lesions, for example myocardial infarction, can be identified from FTIR-analyzed tissue images, which allows for more accurate discrimination between healthy tissues and pathological lesions. Accordingly, we propose FTIR imaging as a new tool integrating both molecular and histopathological assessment to investigate the degree of pathological changes in tissues. In this study, myocardial infarction is presented as an illustrative example of the wide potential of FTIR imaging for biomedical applications.
Image edge detection based tool condition monitoring with morphological component analysis.
Yu, Xiaolong; Lin, Xin; Dai, Yiquan; Zhu, Kunpeng
2017-07-01
The measurement and monitoring of tool condition are key to product precision in automated manufacturing. To meet this need, this study proposes a novel tool wear monitoring approach based on edge detection in the monitored images. Image edge detection is a fundamental tool for obtaining image features. The approach extracts the tool edge with morphological component analysis: through decomposition of the original tool wear image, it reduces the influence of texture and noise on edge measurement. Based on sparse representation of the target image and edge detection, the approach can accurately extract the tool wear edge with a continuous and complete contour, and is convenient for characterizing tool conditions. Compared with established algorithms in the literature, this approach improves the integrity and connectivity of edges, and the results show that it achieves better geometric accuracy and a lower error rate in the estimation of tool conditions.
NASA Technical Reports Server (NTRS)
Brumfield, J. O. (Editor); Schiffman, Y. M. (Editor)
1982-01-01
Topics dealing with the integration of remotely sensed data with geographic information system for application in energy resources management are discussed. Associated remote sensing and image analysis techniques are also addressed.
LLIMAS: Revolutionizing integrated modeling and analysis at MIT Lincoln Laboratory
NASA Astrophysics Data System (ADS)
Doyle, Keith B.; Stoeckel, Gerhard P.; Rey, Justin J.; Bury, Mark E.
2017-08-01
MIT Lincoln Laboratory's Integrated Modeling and Analysis Software (LLIMAS) enables the development of novel engineering solutions for advanced prototype systems through unique insights into engineering performance and interdisciplinary behavior to meet challenging size, weight, power, environmental, and performance requirements. LLIMAS is a multidisciplinary design optimization tool that wraps numerical optimization algorithms around an integrated framework of structural, thermal, optical, stray light, and computational fluid dynamics analysis capabilities. LLIMAS software is highly extensible and has developed organically across a variety of technologies including laser communications, directed energy, photometric detectors, chemical sensing, laser radar, and imaging systems. The custom software architecture leverages the capabilities of existing industry standard commercial software and supports the incorporation of internally developed tools. Recent advances in LLIMAS's Structural-Thermal-Optical Performance (STOP), aeromechanical, and aero-optical capabilities as applied to Lincoln prototypes are presented.
High-speed railway signal trackside equipment patrol inspection system
NASA Astrophysics Data System (ADS)
Wu, Nan
2018-03-01
The high-speed railway signal trackside equipment patrol inspection system comprehensively applies TDI (time delay integration), a high-speed and highly responsive CMOS architecture, low-illumination photosensitive techniques, image data compression, machine vision and related technologies. Installed on a high-speed railway inspection train, it collects, manages and analyzes images of signal trackside equipment appearance while the train is running. The system automatically filters the signal trackside equipment images out of a large volume of background images, and identifies equipment changes by comparison with the original image data. Combining ledger data with train location information, the system accurately locates the trackside equipment, providing sound guidance for maintenance.
Color image encryption using random transforms, phase retrieval, chaotic maps, and diffusion
NASA Astrophysics Data System (ADS)
Annaby, M. H.; Rushdi, M. A.; Nehary, E. A.
2018-04-01
The recent tremendous proliferation of color imaging applications has been accompanied by growing research in data encryption to secure color images against adversary attacks. While recent color image encryption techniques perform reasonably well, they still exhibit vulnerabilities and deficiencies in terms of statistical security measures due to image data redundancy and inherent weaknesses. This paper proposes two encryption algorithms that largely address these deficiencies and boost the security strength through a novel integration of random fractional Fourier transforms, phase retrieval algorithms, and chaotic scrambling and diffusion. We show through detailed experiments and statistical analysis that the proposed enhancements significantly improve security measures and immunity to attacks.
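As a toy illustration of the chaotic scrambling-and-diffusion stage only (the random fractional Fourier transform and phase-retrieval stages are omitted), one might write:

```python
import numpy as np

def logistic_sequence(n, x0=0.3779, r=3.99):
    """Chaotic logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt_channel(channel, key=0.3779):
    """Scramble pixel positions with a chaos-derived permutation, then
    diffuse the values by XOR with a chaos-derived keystream."""
    flat = channel.ravel()
    seq = logistic_sequence(flat.size, x0=key)
    perm = np.argsort(seq)                 # scrambling permutation
    stream = (seq * 255).astype(np.uint8)  # diffusion keystream
    cipher = flat[perm] ^ stream
    return cipher.reshape(channel.shape), perm

# One 8-bit channel of a colour image; decryption XORs with the same
# keystream and then applies the inverse permutation.
img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
cipher, perm = encrypt_channel(img)
```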
Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2012-01-01
A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machine classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. An important contribution of this work consists in estimating the DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.
Recognizable or Not: Towards Image Semantic Quality Assessment for Compression
NASA Astrophysics Data System (ADS)
Liu, Dong; Wang, Dandan; Li, Houqiang
2017-12-01
Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.
Hyperspectral Raman imaging of bone growth and regrowth chemistry
NASA Astrophysics Data System (ADS)
Pezzuti, Jerilyn A.; Morris, Michael D.; Bonadio, Jeffrey F.; Goldstein, Steven A.
1998-06-01
Hyperspectral Raman microscopic imaging of carbonated hydroxyapatite (HAP) is used to follow the chemistry of bone growth and regrowth. Deep red excitation is employed to minimize protein fluorescence interference. A passive line generator based on Powell lens optics and a motorized translation stage provide the imaging capabilities. Raman image contrast is generated from several lines of the HAP Raman spectrum, primarily the PO4^3- band. Factor analysis is used to minimize the integration time needed for acceptable contrast and to explore the chemical species within the bone. Bone age is visualized as variations in image intensity. High-definition, high-resolution images of newly formed bone and mature bone are compared qualitatively. The technique is currently under evaluation for the study of experimental therapies for fracture repair.
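Generating image contrast from a chosen spectral line reduces to integrating each pixel's spectrum over that band. A sketch, assuming a (rows, cols, bands) datacube and taking the hydroxyapatite phosphate band near 960 cm^-1 as the window:

```python
import numpy as np

def band_image(cube, wavenumbers, lo=930.0, hi=990.0):
    """Collapse a hyperspectral Raman cube (rows, cols, bands) to a 2D
    contrast image by integrating over one spectral band, here a window
    around the HAP phosphate band near 960 cm^-1."""
    sel = (wavenumbers >= lo) & (wavenumbers <= hi)
    # Trapezoidal integration along the spectral axis of the window.
    return np.trapz(cube[:, :, sel], wavenumbers[sel], axis=2)

# Usage with a synthetic cube: 64x64 pixels, 512 spectral channels.
wn = np.linspace(400, 1800, 512)
cube = np.random.rand(64, 64, 512)
img = band_image(cube, wn)   # intensity map; brighter = more mineral
```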
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seibert, J; Imbergamo, P
The expansion and integration of diagnostic imaging technologies such as On Board Imaging (OBI) and Cone Beam Computed Tomography (CBCT) into radiation oncology has required radiation oncology physicists to be responsible for and become familiar with assessing image quality. Unfortunately many radiation oncology physicists have had little or no training or experience in measuring and assessing image quality. Many physicists have turned to automated QA analysis software without having a fundamental understanding of image quality measures. This session will review the basic image quality measures of imaging technologies used in the radiation oncology clinic, such as low contrast resolution, high contrast resolution, uniformity, noise, and contrast scale, and how to measure and assess them in a meaningful way. Additionally a discussion of the implementation of an image quality assurance program in compliance with Task Group recommendations will be presented along with the advantages and disadvantages of automated analysis methods. Learning Objectives: Review and understanding of the fundamentals of image quality. Review and understanding of the basic image quality measures of imaging modalities used in the radiation oncology clinic. Understand how to implement an image quality assurance program and to assess basic image quality measures in a meaningful way.
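For illustration, the basic phantom measures mentioned above (noise, uniformity, contrast) can be computed from simple ROI statistics. The definitions below are one common convention, not the specific Task Group formulae:

```python
import numpy as np

def roi_stats(img, center, half=10):
    """Mean and standard deviation inside a square ROI."""
    r, c = center
    roi = img[r - half:r + half, c - half:c + half].astype(float)
    return roi.mean(), roi.std()

def uniformity_noise_contrast(img, center_roi, edge_rois, bg_roi):
    """Noise from the central ROI, integral uniformity across edge ROIs,
    and contrast of the centre against a background ROI."""
    m_c, sd_c = roi_stats(img, center_roi)
    means = [roi_stats(img, p)[0] for p in edge_rois] + [m_c]
    uniformity = (max(means) - min(means)) / (max(means) + min(means))
    m_b, _ = roi_stats(img, bg_roi)
    return {"noise_sd": sd_c,
            "integral_uniformity": uniformity,
            "contrast": (m_c - m_b) / m_b}

# Demo on a synthetic uniform-phantom image.
img = np.random.normal(100.0, 5.0, (256, 256))
print(uniformity_noise_contrast(
    img, (128, 128),
    edge_rois=[(30, 128), (226, 128), (128, 30), (128, 226)],
    bg_roi=(20, 20)))
```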
Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging
Gruev, Viktor
2015-11-05
Report AFRL-AFOSR-VA-TR-2015-0359 (performance period 02/15/2011 - 08/15/2015): "... investigate alternative spectral imaging architectures based on my previous experience in this research area. I will develop nanowire polarization ..."
Quantitative Analysis of Venus Radar Backscatter Data in ArcGIS
NASA Technical Reports Server (NTRS)
Long, S. M.; Grosfils, E. B.
2005-01-01
Ongoing mapping of the Ganiki Planitia (V14) quadrangle of Venus and definition of material units has involved an integrated but qualitative analysis of Magellan radar backscatter images and topography using standard geomorphological mapping techniques. However, such analyses do not take full advantage of the quantitative information contained within the images. Analysis of the backscatter coefficient allows a much more rigorous statistical comparison between mapped units, permitting first-order self-similarity tests of geographically separated materials assigned identical geomorphological labels. Such analyses cannot be performed directly on pixel (DN) values from Magellan backscatter images, because the pixels are scaled to the Muhleman law for radar echoes on Venus and are not corrected for latitudinal variations in incidence angle. Therefore, DN values must be converted, based on pixel latitude, back to their backscatter coefficient values before accurate statistical analysis can occur. Here we present a method for performing the conversions and analysis of Magellan backscatter data using commonly available ArcGIS software and illustrate the advantages of the process for geological mapping.
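A hedged sketch of the conversion step outside ArcGIS: evaluate the Muhleman law at each pixel's incidence angle and undo the DN scaling. The linear DN-to-dB coefficients below are placeholders that must be replaced by the values documented for the specific Magellan product:

```python
import numpy as np

def muhleman(theta):
    """Muhleman scattering law used in Magellan processing; theta is the
    incidence angle in radians."""
    return 0.0118 * np.cos(theta) / (np.sin(theta) + 0.111 * np.cos(theta)) ** 3

def dn_to_sigma0(dn, theta, a=0.2, b=-20.2):
    """Recover the backscatter coefficient from Magellan pixel values,
    assuming DN encodes the echo in dB relative to the Muhleman law as
    dB = a*DN + b; a and b here are placeholders from us, not the
    product specification."""
    rel_db = a * np.asarray(dn, dtype=float) + b
    return muhleman(theta) * 10.0 ** (rel_db / 10.0)

# theta can be computed per pixel from latitude via the orbit geometry,
# or read from an incidence-angle backplane where available.
print(dn_to_sigma0([120], np.deg2rad(35.0)))
```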
Chen, C; Li, H; Zhou, X; Wong, S T C
2008-05-01
Image-based, high-throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, and segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the images in order to extract the long and thin protrusions on spiky cells. Then, the constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and with the ground truth (manual labelling by experts in RNAi screening data), our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
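The two-step structure (nuclei first, then cytoplasm grown from nuclear seeds) is easy to prototype with scikit-image. The sketch below substitutes a Sobel edge map and a seeded watershed for the paper's scale-adaptive steerable filter and GCBAC stage:

```python
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu, sobel
from skimage.measure import label
from skimage.segmentation import watershed

def segment_cells(nuclei_img, cyto_img):
    """Two-step segmentation: label nuclei, then grow each label through
    the cytoplasm channel with a seeded watershed."""
    # Step 1: nuclei as bright blobs in the nuclear channel.
    nuclei = label(nuclei_img > threshold_otsu(nuclei_img))
    # Step 2: watershed on a cytoplasm edge map, seeded by the nuclei
    # and restricted to a foreground mask of the cytoplasm channel.
    mask = cyto_img > threshold_otsu(cyto_img)
    cells = watershed(sobel(cyto_img), markers=nuclei, mask=mask)
    return nuclei, cells

# Usage: nuclei_img and cyto_img are the two channels of one field,
# e.g. loaded with skimage.io.imread from the screening data.
```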
Mulrane, Laoighse; Rexhepaj, Elton; Smart, Valerie; Callanan, John J; Orhan, Diclehan; Eldem, Türkan; Mally, Angela; Schroeder, Susanne; Meyer, Kirstin; Wendt, Maria; O'Shea, Donal; Gallagher, William M
2008-08-01
The widespread use of digital slides has only recently come to the fore with the development of high-throughput scanners and high performance viewing software. This development, along with the optimisation of compression standards and image transfer techniques, has allowed the technology to be used in wide-reaching applications including integration of images into hospital information systems and histopathological training, as well as the development of automated image analysis algorithms for prediction of histological aberrations and quantification of immunohistochemical stains. Here, the use of this technology in the creation of a comprehensive library of images of preclinical toxicological relevance is demonstrated. The images, acquired using the Aperio ScanScope CS and XT slide acquisition systems, form part of the ongoing EU FP6 Integrated Project, Innovative Medicines for Europe (InnoMed). In more detail, PredTox (abbreviation for Predictive Toxicology) is a subproject of InnoMed and comprises a consortium of 15 industrial (13 large pharma, 1 technology provider and 1 SME) and three academic partners. The primary aim of this consortium is to assess the value of combining data generated from 'omics technologies (proteomics, transcriptomics, metabolomics) with the results from more conventional toxicology methods, to facilitate better-informed decision making in preclinical safety evaluation. A library of 1709 scanned images was created of full-face sections of liver and kidney tissue specimens from male Wistar rats treated with 16 proprietary and reference compounds of known toxicity; additional biological materials from these treated animals were separately used to create 'omics data that will ultimately be used to populate an integrated toxicological database. For assessment of the digital slides, a web-enabled digital slide management system, Digital SlideServer (DSS), was employed to enable integration of the digital slide content into the 'omics database and to facilitate remote viewing by pathologists connected with the project. DSS also facilitated manual annotation of digital slides by the pathologists, specifically in relation to marking particular lesions of interest. Tissue microarrays (TMAs) were constructed from the specimens for the purpose of creating a repository of tissue from animals used in the study, with a view to later-stage biomarker assessment. As the PredTox consortium itself aims to identify new biomarkers of toxicity, these TMAs will be a valuable means of validation. In summary, a large repository of histological images was created, enabling subsequent pathological analysis of samples through remote viewing; along with the utilisation of TMA technology, this will allow the validation of biomarkers identified by the PredTox consortium. The population of the PredTox database with these digitised images represents the creation of the first toxicological database integrating 'omics and preclinical data with histological images.
A hierarchical SVG image abstraction layer for medical imaging
NASA Astrophysics Data System (ADS)
Kim, Edward; Huang, Xiaolei; Tan, Gang; Long, L. Rodney; Antani, Sameer
2010-03-01
As medical imaging rapidly expands, there is an increasing need to structure and organize image data for efficient analysis, storage and retrieval. In response, a large fraction of research in the areas of content-based image retrieval (CBIR) and picture archiving and communication systems (PACS) has focused on structuring information to bridge the "semantic gap", a disparity between machine and human image understanding. An additional consideration in medical images is the organization and integration of clinical diagnostic information. As a step towards bridging the semantic gap, we design and implement a hierarchical image abstraction layer using an XML-based language, Scalable Vector Graphics (SVG). Our method encodes features from the raw image and clinical information into an extensible "layer" that can be stored in an SVG document and efficiently searched. Any feature extracted from the raw image, including color, texture, orientation, size, neighbor information, etc., can be combined in our abstraction with high-level descriptions or classifications. Our representation can natively characterize an image in a hierarchical tree structure to support multiple levels of segmentation. Furthermore, being a World Wide Web Consortium (W3C) standard, SVG can be displayed by most web browsers, interacted with by ECMAScript (a standardized scripting language, e.g. JavaScript, JScript), and indexed and retrieved by XML databases and XQuery. Using these open technologies enables straightforward integration into existing systems. Our results show that the flexibility and extensibility of our abstraction facilitate effective storage and retrieval of medical images.
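A minimal sketch of such an abstraction layer with the Python standard library: each region becomes an SVG group carrying its outline and extracted features as attributes, and the segmentation hierarchy is expressed by nesting groups. Element and attribute names here are ours, not the paper's schema:

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

def region_group(region_id, points, features, label):
    """One abstraction-layer node: a <g> group holding the region outline
    plus extracted features as data- attributes; child regions are nested
    <g> elements, giving the hierarchical tree of segmentations."""
    g = ET.Element(f"{{{SVG_NS}}}g", {
        "id": region_id,
        "data-label": label,
        "data-texture": f"{features['texture']:.3f}",
    })
    ET.SubElement(g, f"{{{SVG_NS}}}polygon", {
        "points": " ".join(f"{x},{y}" for x, y in points)})
    return g

svg = ET.Element(f"{{{SVG_NS}}}svg", {"width": "512", "height": "512"})
organ = region_group("r1", [(10, 10), (200, 15), (180, 220)],
                     {"texture": 0.81}, "organ")
lesion = region_group("r1-1", [(60, 60), (90, 65), (80, 100)],
                      {"texture": 0.32}, "lesion")
organ.append(lesion)   # multi-level segmentation: lesion within organ
svg.append(organ)
ET.ElementTree(svg).write("abstraction_layer.svg")
```

Because the layer is plain SVG/XML, it remains browser-viewable and queryable with XQuery or any XML database, which is the integration benefit the abstract emphasizes.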
Cao, Lu; Graauw, Marjo de; Yan, Kuan; Winkel, Leah; Verbeek, Fons J
2016-05-03
Endocytosis is regarded as a mechanism of attenuating epidermal growth factor receptor (EGFR) signaling and of receptor degradation. There is increasing evidence that breast cancer progression is associated with a defect in EGFR endocytosis. In order to find related ribonucleic acid (RNA) regulators in this process, high-throughput imaging with fluorescent markers is used to visualize the complex EGFR endocytosis process. Subsequently, a dedicated automatic image and data analysis system is developed and applied to extract phenotype measurements and distinguish different developmental episodes from the huge number of images acquired through high-throughput imaging. In the image analysis, a phenotype measurement quantifies the important image information into distinct features or measurements. The manner in which prominent measurements are chosen to represent the dynamics of the EGFR process is therefore a crucial step for the identification of the phenotype. In the subsequent data analysis, classification is used to categorize each observation by making use of all prominent measurements obtained from image analysis. A well-constructed classification strategy therefore raises the performance of our image and data analysis system. In this paper, we illustrate an integrated analysis method for EGFR signalling through image analysis of microscopy images. Sophisticated wavelet-based texture measurements are used to obtain a good description of the characteristic stages in EGFR signalling. A hierarchical classification strategy is designed to improve the recognition of phenotypic episodes of EGFR during endocytosis. Different strategies for normalization, feature selection and classification are evaluated. The results of the performance assessment clearly demonstrate that our hierarchical classification scheme, combined with a selected set of features, provides a notable improvement in the temporal analysis of EGFR endocytosis. Moreover, it is shown that the addition of the wavelet-based texture features contributes to this improvement. Our workflow can be applied in drug discovery to analyze defective EGFR endocytosis processes.
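A compact sketch of the two ingredients: wavelet-based texture features per image patch, and a two-stage (hierarchical) classifier. Labels and data here are synthetic placeholders; the actual feature set and class hierarchy are study-specific:

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

def wavelet_texture_features(patch, wavelet="db2", level=3):
    """Energy of each detail subband of a 2D wavelet decomposition -- a
    compact texture descriptor for one cell image patch."""
    coeffs = pywt.wavedec2(patch.astype(float), wavelet, level=level)
    feats = []
    for cH, cV, cD in coeffs[1:]:          # detail subbands per level
        feats += [np.mean(cH**2), np.mean(cV**2), np.mean(cD**2)]
    return np.array(feats)

# Synthetic demo of the two-stage strategy: stage 1 separates two coarse
# phenotype groups; stage 2 refines episodes within group 1 only.
rng = np.random.default_rng(0)
patches = rng.random((120, 64, 64))
X = np.array([wavelet_texture_features(p) for p in patches])
y_coarse = rng.integers(0, 2, 120)          # placeholder labels
y_fine = rng.integers(0, 3, 120)

stage1 = RandomForestClassifier().fit(X, y_coarse)
in_g1 = y_coarse == 1
stage2 = RandomForestClassifier().fit(X[in_g1], y_fine[in_g1])
```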
Multimodal image analysis of clinical influences on preterm brain development
Ball, Gareth; Aljabar, Paul; Nongena, Phumza; Kennea, Nigel; Gonzalez‐Cinca, Nuria; Falconer, Shona; Chew, Andrew T.M.; Harper, Nicholas; Wurie, Julia; Rutherford, Mary A.; Edwards, A. David
2017-01-01
Objective Premature birth is associated with numerous complex abnormalities of white and gray matter and a high incidence of long‐term neurocognitive impairment. An integrated understanding of these abnormalities and their association with clinical events is lacking. The aim of this study was to identify specific patterns of abnormal cerebral development and their antenatal and postnatal antecedents. Methods In a prospective cohort of 449 infants (226 male), we performed a multivariate and data‐driven analysis combining multiple imaging modalities. Using canonical correlation analysis, we sought separable multimodal imaging markers associated with specific clinical and environmental factors and correlated to neurodevelopmental outcome at 2 years. Results We found five independent patterns of neuroanatomical variation that related to clinical factors including age, prematurity, sex, intrauterine complications, and postnatal adversity. We also confirmed the association between imaging markers of neuroanatomical abnormality and poor cognitive and motor outcomes at 2 years. Interpretation This data‐driven approach defined novel and clinically relevant imaging markers of cerebral maldevelopment, which offer new insights into the nature of preterm brain injury. Ann Neurol 2017;82:233–246 PMID:28719076
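The core of the analysis, canonical correlation between an imaging-feature matrix and a clinical-factor matrix, can be sketched with scikit-learn; the matrices below are random placeholders sized like the cohort:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 449                                  # cohort size in the study
imaging = rng.standard_normal((n, 50))   # placeholder multimodal image features
clinical = rng.standard_normal((n, 8))   # placeholder clinical/environmental factors

# CCA finds paired projections of the imaging and clinical matrices
# whose scores are maximally correlated; each component is one candidate
# pattern of clinically driven neuroanatomical variation.
cca = CCA(n_components=5)
img_scores, clin_scores = cca.fit_transform(imaging, clinical)
corrs = [np.corrcoef(img_scores[:, k], clin_scores[:, k])[0, 1]
         for k in range(5)]
print(corrs)
```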
Handels, H; Busch, C; Encarnação, J; Hahn, C; Kühn, V; Miehe, J; Pöppl, S I; Rinast, E; Rossmanith, C; Seibert, F; Will, A
1997-03-01
The software system KAMEDIN (Kooperatives Arbeiten und MEdizinische Diagnostik auf Innovativen Netzen) is a multimedia telemedicine system for exchange, cooperative diagnostics, and remote analysis of digital medical image data. It provides components for visualisation, processing, and synchronised audio-visual discussion of medical images. Techniques of computer-supported cooperative work (CSCW) synchronise user interactions during a teleconference. Visibility of both the local and remote cursor on the conference workstations facilitates telepointing and reinforces the conference partner's telepresence. Audio communication during teleconferences is supported by an integrated audio component. Furthermore, brain tissue segmentation with artificial neural networks can be performed on an external supercomputer as a remote image analysis procedure. KAMEDIN is designed as a low-cost CSCW tool for ISDN-based telecommunication, but it can be used on any network supporting TCP/IP. In a field test, KAMEDIN was installed in 15 clinics and medical departments to validate the system's usability. The telemedicine system KAMEDIN has been developed, tested, and evaluated within a research project sponsored by German Telekom.
NASA Astrophysics Data System (ADS)
Lim, Hoong-Ta; Murukeshan, Vadakke Matham
2017-06-01
Hyperspectral imaging combines imaging and spectroscopy to provide detailed spectral information for each spatial point in the image. This gives a three-dimensional spatial-spatial-spectral datacube with hundreds of spectral images. Probe-based hyperspectral imaging systems have been developed so that they can be used in regions where conventional table-top platforms would find it difficult to access. A fiber bundle, which is made up of specially-arranged optical fibers, has recently been developed and integrated with a spectrograph-based hyperspectral imager. This forms a snapshot hyperspectral imaging probe, which is able to form a datacube using the information from each scan. Compared to the other configurations, which require sequential scanning to form a datacube, the snapshot configuration is preferred in real-time applications where motion artifacts and pixel misregistration can be minimized. Principal component analysis is a dimension-reducing technique that can be applied in hyperspectral imaging to convert the spectral information into uncorrelated variables known as principal components. A confidence ellipse can be used to define the region of each class in the principal component feature space and for classification. This paper demonstrates the use of the snapshot hyperspectral imaging probe to acquire data from samples of different colors. The spectral library of each sample was acquired and then analyzed using principal component analysis. Confidence ellipse was then applied to the principal components of each sample and used as the classification criteria. The results show that the applied analysis can be used to perform classification of the spectral data acquired using the snapshot hyperspectral imaging probe.
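A sketch of the classification step described above: project spectra onto principal components, fit a per-class confidence ellipse from the class mean and covariance, and classify test points by their Mahalanobis distance against a chi-square threshold. Synthetic spectra stand in for the probe data:

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

def fit_class_ellipse(scores, level=0.95):
    """Mean and inverse covariance of one class in PC space, plus the
    chi-square threshold defining its confidence ellipse."""
    mu = scores.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
    return mu, cov_inv, chi2.ppf(level, df=scores.shape[1])

def inside(x, mu, cov_inv, thresh):
    """Membership test: squared Mahalanobis distance vs. the threshold."""
    d = x - mu
    return float(d @ cov_inv @ d) <= thresh

# Demo: two colour classes separated in the first two PCs.
rng = np.random.default_rng(2)
spectra = np.vstack([rng.normal(0, 1, (40, 100)),
                     rng.normal(3, 1, (40, 100))])
pcs = PCA(n_components=2).fit_transform(spectra)
mu, cov_inv, th = fit_class_ellipse(pcs[:40])
print(inside(pcs[0], mu, cov_inv, th), inside(pcs[-1], mu, cov_inv, th))
```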
Nikolaisen, Julie; Nilsson, Linn I. H.; Pettersen, Ina K. N.; Willems, Peter H. G. M.; Lorens, James B.; Koopman, Werner J. H.; Tronstad, Karl J.
2014-01-01
Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions and (adaptation to) endogenous and exogenous stress. In this sense mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e. “mitochondrial dynamics”) are linked to cellular (patho) physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated 2-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology they are not applicable for thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescence protein (mitoGFP) were visualized by 3D confocal microscopy and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. Thus, our results demonstrate that 3D imaging and quantification are crucial for proper understanding of mitochondrial shape and topology in non-flat cells. In summary, we here present an integrative method for unbiased 3D quantification of mitochondrial shape and network properties in mammalian cells. PMID:24988307
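The 3D quantification step reduces to connected-component labelling of the thresholded stack followed by per-object measurements. A simplified sketch; the fragmentation index here is our illustrative summary, not the paper's exact descriptor set:

```python
import numpy as np
from scipy import ndimage as ndi

def quantify_mito_3d(stack, threshold):
    """Label mitochondria in a 3D confocal stack (z, y, x) and report
    per-object volume plus a simple network descriptor."""
    mask = stack > threshold
    labels, n = ndi.label(mask)            # 3D connected components
    voxels = ndi.sum(mask, labels, index=np.arange(1, n + 1))
    return {
        "n_objects": n,
        "total_volume_vox": int(mask.sum()),
        "mean_object_volume_vox": float(voxels.mean()) if n else 0.0,
        # Fragmented networks: many small objects; fused: few large ones.
        "fragmentation_index": n / mask.sum() if mask.any() else 0.0,
    }

# Usage with a synthetic stack; real data would be the mitoGFP channel.
stack = np.random.rand(30, 256, 256)
print(quantify_mito_3d(stack, threshold=0.995))
```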
Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W
2017-01-01
The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.
Dynamic integral imaging technology for 3D applications (Conference Presentation)
NASA Astrophysics Data System (ADS)
Huang, Yi-Pai; Javidi, Bahram; Martínez-Corral, Manuel; Shieh, Han-Ping D.; Jen, Tai-Hsiang; Hsieh, Po-Yuan; Hassanfiroozi, Amir
2017-05-01
Depth and resolution are always a trade-off in integral imaging technology. With dynamically adjustable devices, the two factors can be fully compensated through time-multiplexed addressing. Such dynamic devices can be mechanically or electrically driven. In this presentation, we mainly focus on various liquid crystal devices which can change the focal length, scan and shift the image position, or switch between 2D and 3D modes. Using liquid crystal devices, dynamic integral imaging has been successfully applied to 3D display, capture, and bio-imaging applications.
NASA Technical Reports Server (NTRS)
McDowell, Mark
2004-01-01
An integrated algorithm for decomposing overlapping particle images (multi-particle objects) along with determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. This algorithm has been tested using real, simulated, and synthetic data and the results are presented and discussed.
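The centroid step, an intensity-weighted center of mass per labelled object, has a direct SciPy equivalent. The sketch below omits the eight-direction perimeter trace and the neural-network overlap decomposition:

```python
import numpy as np
from scipy import ndimage as ndi

def intensity_weighted_centroids(img, threshold):
    """Label bright particle objects and return the intensity-weighted
    centre of mass of each."""
    labels, n = ndi.label(img > threshold)
    return ndi.center_of_mass(img, labels, index=range(1, n + 1))

# Synthetic demo: two Gaussian particle images in one frame.
yy, xx = np.mgrid[0:64, 0:64]
img = (np.exp(-((yy - 20)**2 + (xx - 18)**2) / 20.0) +
       np.exp(-((yy - 45)**2 + (xx - 40)**2) / 30.0))
print(intensity_weighted_centroids(img, threshold=0.2))
```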
Interventional Molecular Imaging.
Solomon, Stephen B; Cornelis, Francois
2016-04-01
Although molecular imaging has had a dramatic impact on diagnostic imaging, it has only recently begun to be integrated into interventional procedures. Its significant impact is attributed to its ability to provide noninvasive, physiologic information that supplements conventional morphologic imaging. The four major interventional opportunities for molecular imaging are, first, to provide guidance to localize a target; second, to provide tissue analysis to confirm that the target has been reached; third, to provide in-room, posttherapy assessment; and fourth, to deliver targeted therapeutics. With improved understanding and application of (18)F-FDG, as well as the addition of new molecular probes beyond (18)F-FDG, the future holds significant promise for the expansion of molecular imaging into the realm of interventional procedures.
Roles of universal three-dimensional image analysis devices that assist surgical operations.
Sakamoto, Tsuyoshi
2014-04-01
The circumstances surrounding medical image analysis have evolved rapidly, to the point where the "imaging" obtained through medical imaging modalities and the "analysis" we employ have become amalgamated: for the image analysis of any organ system, "imaging" and "analysis" have moved closer together, as if the two had become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR), used with the helical scan, had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and 3D diagnostic imaging and image analysis of the human body started on a full scale. Volume rendering: the development of a new rendering algorithm and substantial improvements in memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. A new value was created by this development: computed tomography (CT) images that had previously served only "diagnosis" became "applicable to treatment." In the past, before the development of volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image, but these developments have allowed the depiction of a 3D image on a monitor. Current technology: currently, in Japan, estimation of the liver volume and of the perfusion areas of the portal vein and hepatic vein is vigorously being adopted during preoperative planning for hepatectomy. This circumstance seems to have been brought about by substantial improvement of the basic techniques described above and by upgraded user interfaces that allow doctors easy manipulation by themselves. The following describes the specific techniques. Future of post-processing technology: it is expected, in terms of the role of image analysis, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. Furthermore, in the treatment field, techniques coordinating various devices will be strongly required for surgical navigation. Indeed, surgery using image navigation is being widely studied, and coordination with hardware, including robots, will also be developed.
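For concreteness, the MPR idea, resampling an arbitrary plane through a CT volume, can be sketched in a few lines of NumPy/SciPy; the axis conventions and parameter names are ours:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def mpr_slice(volume, origin, u, v, shape=(256, 256), spacing=1.0):
    """Sample an oblique multi-planar reconstruction: the plane through
    `origin` spanned by orthonormal direction vectors u and v, resampled
    from the volume by trilinear interpolation."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    rows = (rows - shape[0] / 2) * spacing
    cols = (cols - shape[1] / 2) * spacing
    pts = (np.asarray(origin, float)[:, None, None]
           + u[:, None, None] * rows + v[:, None, None] * cols)
    return map_coordinates(volume, pts, order=1)

# Demo: an oblique plane through a random volume (z, y, x index order).
vol = np.random.rand(128, 128, 128)
sl = mpr_slice(vol, origin=(64, 64, 64),
               u=(0.0, 1.0, 0.0), v=(0.707, 0.0, 0.707))
```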
Gray, Julie McLaughlin; Frank, Gelya; Roll, Shawn C.
2018-01-01
Musculoskeletal sonography is rapidly extending beyond radiology; however, best practices for successful integration into new practice contexts are unknown. This study explored non-physician experiences with the processes of training and integration of musculoskeletal sonography into rehabilitation. Qualitative data were captured through multiple sources and iterative thematic analysis was used to describe two occupational therapists’ experiences. The dominant emerging theme was competency, in three domains: technical, procedural and analytical. Additionally, three practice considerations were illuminated: (1) understanding imaging within the dynamics of rehabilitation, (2) navigating nuances of interprofessional care, and (3) implications for post-professional training. Findings indicate that sonography training for rehabilitation providers requires multi-level competency development and consideration of practice complexities. These data lay a foundation on which to explore and develop best practices for incorporating sonographic imaging into the clinic as a means for engaging clients as active participants in the rehabilitation process to improve health and rehabilitation outcomes. PMID:28830315
An independent software system for the analysis of dynamic MR images.
Torheim, G; Lombardi, M; Rinck, P A
1997-01-01
A computer system for the manual, semi-automatic, and automatic analysis of dynamic MR images was to be developed on UNIX and personal computer platforms. The system was to offer an integrated and standardized way of performing both image processing and analysis, independent of the MR unit used. The system consists of modules that are easily adaptable to special needs. Data from MR units or other diagnostic imaging equipment, such as CT, ultrasonography, or nuclear medicine, can be processed through the ACR-NEMA/DICOM standard file formats. A full set of functions is available, among them cine-loop visual analysis and generation of time-intensity curves. Parameters such as cross-correlation coefficients, area under the curve, peak/maximum intensity, wash-in and wash-out slopes, time to peak, and relative signal intensity/contrast enhancement can be calculated. Other parameters can be extracted by fitting functions like the gamma-variate function. Region-of-interest data and parametric values can easily be exported. The system has been successfully tested in animal and patient examinations.
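As a concrete illustration of the curve analysis described above, the sketch below fits a gamma-variate function to a region-of-interest time-intensity curve and derives time to peak, peak intensity, and area under the curve. This is a minimal sketch assuming numpy and scipy; the synthetic curve, initial guesses, and variable names are illustrative and are not taken from the system itself.

```python
# Hedged sketch: gamma-variate fit to a ROI time-intensity curve.
import numpy as np
from scipy.optimize import curve_fit

def gamma_variate(t, A, t0, alpha, beta):
    """Gamma-variate model: zero before bolus arrival t0, then a skewed peak."""
    dt = np.clip(t - t0, 0.0, None)
    return A * dt**alpha * np.exp(-dt / beta)

t = np.linspace(0, 60, 120)                        # acquisition times (s)
rng = np.random.default_rng(0)
roi_curve = gamma_variate(t, 100, 8, 2.5, 4) + rng.normal(0, 2, t.size)

p0 = [50.0, 5.0, 2.0, 3.0]                         # rough initial guesses
params, _ = curve_fit(gamma_variate, t, roi_curve, p0=p0,
                      bounds=(0, np.inf))          # keep parameters positive

fit = gamma_variate(t, *params)
print("time to peak (s):", t[np.argmax(fit)])
print("peak intensity:  ", fit.max())
print("area under curve:", np.trapz(fit, t))
```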
In Situ Characterization of Boehmite Particles in Water Using Liquid SEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yao, Juan; Arey, Bruce W.; Yang, Li
In situ imaging and elemental analysis of boehmite (AlOOH) particles in water is realized using the System for Analysis at the Liquid Vacuum Interface (SALVI) and Scanning Electron Microscopy (SEM). This paper describes the method and key steps in integrating the vacuum-compatible SALVI with SEM and obtaining secondary electron (SE) images of particles in liquid under high vacuum. Energy dispersive X-ray spectroscopy (EDX) is used to obtain elemental analysis of particles in liquid. Synthesized AlOOH particles are used as a model in the liquid SEM illustration. Our results demonstrate that particles can be imaged in the SE mode with good resolution. The AlOOH EDX spectrum shows a significant Al signal compared with the deionized water and empty channel controls. In situ liquid SEM is a powerful technique for studying particles in liquid, with many exciting applications. This procedure aims to provide the technical details of liquid SEM imaging and EDX analysis using SALVI and to help other researchers avoid potential pitfalls of this approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.
2014-08-26
Understanding the interactions of structured communities known as "biofilms" and other complex matrices is possible through X-ray micro-tomography imaging of the biofilms. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels, so new software is required for effective interpretation and analysis of the data. This work describes the development and application of tools to analyze and visualize high-resolution X-ray micro-tomography datasets.
Evaluation of osteoarthritis progression using polarization-sensitive optical coherence tomography
NASA Astrophysics Data System (ADS)
Nassif, Nader A.; Pierce, Mark C.; Park, B. Hyle; Cense, Barry; de Boer, Johannes F.
2004-07-01
Osteoarthritis is a prevalent medical condition that presents a diagnostic and therapeutic challenge to physicians today because of the inability to assess the integrity of the articular cartilage early in the disease. Polarization-sensitive optical coherence tomography (PS-OCT) is a high-resolution, non-contact imaging modality that provides cross-sectional images with additional information regarding the integrity of the collagen matrix. Imaging with PS-OCT provides information on the thickness of the articular cartilage and an index of biochemical changes based on alterations in the optical properties (i.e., birefringence) of the tissue. We demonstrate initial experiments performed on specimens collected following total knee replacement surgery. Articular cartilage was imaged using a 1310 nm PS-OCT system in which both intensity and phase images were acquired. PS-OCT images were compared with histology, and the changes in tissue optical properties were characterized. Analysis of the intensity images demonstrates differences between healthy and diseased cartilage in surface and thickness. Phase maps of the tissue demonstrated distinct differences between healthy and diseased tissue. PS-OCT was able to image a gradual loss of birefringence as the tissue became more diseased. In this way, determining the rate of change of the phase provides a quantitative measure of pathology. Thus, imaging and evaluation of osteoarthritis using PS-OCT can be a useful means of quantitative assessment of the disease.
The effect of CT technical factors on quantification of lung fissure integrity
NASA Astrophysics Data System (ADS)
Chong, D.; Brown, M. S.; Ochs, R.; Abtin, F.; Brown, M.; Ordookhani, A.; Shaw, G.; Kim, H. J.; Gjertson, D.; Goldin, J. G.
2009-02-01
A new emphysema treatment uses endobronchial valves to perform lobar volume reduction. The degree of fissure completeness may predict treatment efficacy. This study investigated the behavior of a semiautomated algorithm for quantifying lung fissure integrity in CT with respect to reconstruction kernel and dose. Raw CT data were obtained for six asymptomatic patients from a high-risk population for lung cancer. The patients were scanned on either a Siemens Sensation 16 or 64, using a low-dose protocol of 120 kVp, 25 mAs. Images were reconstructed using kernels ranging from smooth to sharp (B10f, B30f, B50f, B70f). Research software was used to simulate an even lower-dose acquisition of 15 mAs, and images were generated at the same kernels, resulting in 8 series per patient. The left major fissure was manually contoured axially at regular intervals, yielding 37 contours across all patients. These contours were read into an image analysis and pattern classification system which computed a Fissure Integrity Score (FIS) for each kernel and dose. FIS values were analyzed using a mixed-effects model with kernel and dose as fixed effects and patient as a random effect to test for differences due to kernel and dose. Analysis revealed no difference in FIS between the smooth kernels (B10f, B30f) or between the sharp kernels (B50f, B70f), but there was a significant difference between the sharp and smooth groups (p = 0.020). There was no significant difference in FIS between the two low-dose reconstructions (p = 0.882). Using a cutoff of 90%, the number of incomplete fissures increased from 5 to 10 when the imaging protocol changed from B50f to B30f. Reconstruction kernel has a significant effect on quantification of fissure integrity in CT. This has potential implications when selecting patients for endobronchial valve therapy.
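The statistical test described here can be reproduced in outline with a linear mixed-effects model. The sketch below assumes statsmodels and pandas; the column names and toy per-contour scores are invented for illustration (the real study used 37 contours reconstructed at 4 kernels and 2 doses), and this is not the authors' analysis code.

```python
# Hedged sketch: FIS ~ kernel + dose fixed effects, patient random effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from itertools import product

rng = np.random.default_rng(0)
kernels = ["B10f", "B30f", "B50f", "B70f"]
records = []
for i, patient in enumerate(["p1", "p2", "p3", "p4", "p5", "p6"]):
    for kernel, dose in product(kernels, [25, 15]):
        # Toy scores: per-patient offset, kernel effect, noise.
        fis = 90 + i - 3 * kernels.index(kernel) + rng.normal(0, 1)
        records.append((patient, kernel, dose, fis))
df = pd.DataFrame(records, columns=["patient", "kernel", "dose", "fis"])

model = smf.mixedlm("fis ~ C(kernel) + C(dose)", data=df, groups=df["patient"])
print(model.fit().summary())   # fixed-effect p-values for kernel and dose
```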
Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system
NASA Astrophysics Data System (ADS)
Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu
2018-09-01
A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and to improve image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II and the designed freeform optical element, a floating aerial 3D image is presented.
ANAlyte: A modular image analysis tool for ANA testing with indirect immunofluorescence.
Di Cataldo, Santa; Tonti, Simone; Bottino, Andrea; Ficarra, Elisa
2016-05-01
The automated analysis of indirect immunofluorescence (IIF) images for Anti-Nuclear Autoantibody (ANA) testing is a fairly recent field that is receiving ever-growing interest from the research community. ANA testing relies on the categorization of the intensity level and fluorescent pattern of IIF images of HEp-2 cells to perform a differential diagnosis of important autoimmune diseases. Nevertheless, it suffers from a tremendous lack of repeatability due to subjectivity in the visual interpretation of the images. Automation of the analysis is seen as the only valid solution to this problem. Several works in the literature address individual steps of the workflow; nonetheless, integrating such steps and assessing their effectiveness as a whole is still an open challenge. We present a modular tool, ANAlyte, able to characterize an IIF image in terms of fluorescent intensity level and fluorescent pattern without any user interaction. For this purpose, ANAlyte integrates the following: (i) an Intensity Classifier module, which categorizes the intensity level of the input slide based on multi-scale contrast assessment; (ii) a Cell Segmenter module, which splits the input slide into individual HEp-2 cells; (iii) a Pattern Classifier module, which determines the fluorescent pattern of the slide based on the patterns of the individual cells. To demonstrate the accuracy and robustness of our tool, we experimentally validated ANAlyte on two different public benchmarks of IIF HEp-2 images with a rigorous leave-one-out cross-validation strategy. We obtained overall accuracies of fluorescent intensity and pattern classification of around 85% and above 90%, respectively. We assessed all results by comparison with some of the most representative state-of-the-art works. Unlike most other works in the recent literature, ANAlyte aims at automating all the major steps of ANA image analysis. Results on public benchmarks demonstrate that the tool can characterize HEp-2 slides in terms of intensity and fluorescent pattern with accuracy better than or comparable to state-of-the-art techniques, even when such techniques are run on manually segmented cells. Hence, ANAlyte can be proposed as a valid solution to the problem of ANA testing automation. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.
Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H
2017-04-01
Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J
2013-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the gap between traditional histology and global "-omic" analyses. Benefits include side-by-side comparisons, objective quantification of biopsy findings, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, these challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785
Integral imaging with multiple image planes using a uniaxial crystal plate.
Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho
2003-08-11
Integral imaging has been attracting much attention recently for its advantages, such as full parallax, continuous viewpoints, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value due to degradation of the image resolution. In this paper, we propose a method to provide observers with enhanced depth perception without severe resolution degradation by using the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and the dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.
microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling
NASA Astrophysics Data System (ADS)
Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.
2017-09-01
Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
Integration of a clinical trial database with a PACS
NASA Astrophysics Data System (ADS)
van Herk, M.
2014-03-01
Many clinical trials use Electronic Case Report Forms (ECRF), e.g., from OpenClinica. Trial data are augmented if DICOM scans, dose cubes, etc. from the Picture Archiving and Communication System (PACS) are included for data mining. Unfortunately, there is as yet no structured way to collect DICOM objects in trial databases. In this paper, we obtain a tight integration of ECRF and PACS using open source software. Methods: DICOM identifiers for selected images/series/studies are stored in associated ECRF events (e.g., baseline) as follows: 1) JavaScript added to OpenClinica communicates over HTTP with a gateway server inside the hospital's firewall; 2) on this gateway, an open source DICOM server runs scripts to query and select the data, returning anonymized identifiers; 3) the scripts then collect, anonymize, zip and transmit the selected data to a central trial server; 4) there, the data are stored in a DICOM archive which allows authorized ECRF users to view and download the anonymous images associated with each event. Results: All integration scripts are open source. The PACS administrator configures the anonymization script and decides to use the gateway in passive (receiving) mode or in an active mode going out to the PACS to gather data. Our ECRF-centric approach supports automatic data mining by iterating over the cases in the ECRF database, providing the identifiers to load images and the clinical data to correlate with image analysis results. Conclusions: Using open source software and web technology, a tight integration has been achieved between PACS and ECRF.
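Step 2 of the workflow above, a gateway-side script querying the PACS and returning identifiers, might look roughly like the following. This is a hedged sketch using the pydicom/pynetdicom libraries rather than the authors' actual scripts; the host, port, AE titles, and accession number are placeholders.

```python
# Hedged sketch: DICOM C-FIND from a trial gateway against a hospital PACS.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="TRIAL_GATEWAY")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.AccessionNumber = "A123456"     # placeholder identifier from the ECRF
query.StudyInstanceUID = ""           # empty = requested return key

assoc = ae.associate("pacs.hospital.local", 104, ae_title="HOSPITAL_PACS")
if assoc.is_established:
    for status, identifier in assoc.send_c_find(
            query, StudyRootQueryRetrieveInformationModelFind):
        # 0xFF00/0xFF01 are the "pending" statuses carrying a match
        if status and status.Status in (0xFF00, 0xFF01) and identifier:
            print("found study:", identifier.StudyInstanceUID)
    assoc.release()
```

The returned (anonymized) StudyInstanceUIDs are what would be written back into the ECRF event, so later analysis can iterate over cases and re-fetch the matching images.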
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. All-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by the generalization method combined with image fusion on multi-focus elemental images.
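A minimal sketch of block-based multi-focus fusion of the kind named above follows. It assumes the elemental images are already aligned (i.e., generalized), scores each block by Laplacian energy as a sharpness proxy, and keeps the sharpest source per block; it is an illustration of the general technique, not the authors' implementation.

```python
# Hedged sketch: block-based multi-focus fusion via Laplacian energy.
import numpy as np
from scipy.ndimage import laplace

def fuse_multifocus(images, block=16):
    """images: list of aligned 2D arrays focused at different depths."""
    h, w = images[0].shape
    fused = np.zeros((h, w), dtype=images[0].dtype)
    sharpness = [laplace(img.astype(float)) ** 2 for img in images]
    for y in range(0, h, block):
        for x in range(0, w, block):
            sl = (slice(y, min(y + block, h)), slice(x, min(x + block, w)))
            scores = [s[sl].mean() for s in sharpness]   # per-block sharpness
            fused[sl] = images[int(np.argmax(scores))][sl]
    return fused

imgs = [np.random.rand(64, 64) for _ in range(3)]  # stand-ins for elemental images
print(fuse_multifocus(imgs).shape)
```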
Third party EPID with IGRT capability retrofitted onto an existing medical linear accelerator
Odero, DO; Shimm, DS
2009-01-01
Radiation therapy requires precision to avoid unintended irradiation of normal organs. Electronic Portal Imaging Devices (EPIDs) can help with precise patient positioning for accurate treatment. EPIDs are now bundled with new linear accelerators, or they can be purchased from the Linac manufacturer for retrofit. Retrofitting a third party EPID to a linear accelerator can pose challenges. The authors describe a relatively inexpensive third party CCD camera-based EPID manufactured by TheraView (Cablon Medical B.V.), installed onto a Siemens Primus linear accelerator, and integrated with a Lantis record and verify system, an Oldelft simulator with Digital Therapy Imaging (DTI) unit, and a Philips ADAC Pinnacle treatment planning system (TPS). This system integrates well with existing equipment and its software can process DICOM images from other sources. The system provides a complete imaging system that eliminates the need for separate software for portal image viewing, interpretation, analysis, archiving, image guided radiation therapy and other image management applications. It can also be accessed remotely via safe VPN tunnels. The TheraView EPID retrofit therefore presents an example of a less expensive alternative to linear accelerator manufacturers' proprietary EPIDs, suitable for implementation in radiation therapy departments in third world countries, which are often faced with limited financial resources. PMID:21611056
Ameisen, David; Deroulers, Christophe; Perrier, Valérie; Bouhidel, Fatiha; Battistella, Maxime; Legrès, Luc; Janin, Anne; Bertheau, Philippe; Yunès, Jean-Baptiste
2014-01-01
Since microscopic slides can now be automatically digitized and integrated in the clinical workflow, quality assessment of Whole Slide Images (WSI) has become a crucial issue. We present a no-reference quality assessment method that has been thoroughly tested since 2010 and is under implementation in multiple sites, both public university-hospitals and private entities. It is part of the FlexMIm R&D project which aims to improve the global workflow of digital pathology. For these uses, we have developed two programming libraries, in Java and Python, which can be integrated in various types of WSI acquisition systems, viewers and image analysis tools. Development and testing have been carried out on a MacBook Pro i7 and on a bi-Xeon 2.7GHz server. Libraries implementing the blur assessment method have been developed in Java, Python, PHP5 and MySQL5. For web applications, JavaScript, Ajax, JSON and Sockets were also used, as well as the Google Maps API. Aperio SVS files were converted into the Google Maps format using VIPS and Openslide libraries. We designed the Java library as a Service Provider Interface (SPI), extendable by third parties. Analysis is computed in real-time (3 billion pixels per minute). Tests were made on 5000 single images, 200 NDPI WSI, 100 Aperio SVS WSI converted to the Google Maps format. Applications based on our method and libraries can be used upstream, as calibration and quality control tool for the WSI acquisition systems, or as tools to reacquire tiles while the WSI is being scanned. They can also be used downstream to reacquire the complete slides that are below the quality threshold for surgical pathology analysis. WSI may also be displayed in a smarter way by sending and displaying the regions of highest quality before other regions. Such quality assessment scores could be integrated as WSI's metadata shared in clinical, research or teaching contexts, for a more efficient medical informatics workflow.
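In the same spirit as the blur assessment described above (the exact FlexMIm metric is not reproduced here), a tile-wise no-reference sharpness check can be sketched as follows. The tile size and threshold are illustrative, and numpy/scipy are assumed.

```python
# Hedged sketch: flag blurred WSI tiles by variance of the Laplacian.
import numpy as np
from scipy.ndimage import laplace

def blurred_tiles(wsi, tile=256, threshold=50.0):
    """wsi: 2D grayscale array; returns (x, y, score) for suspect tiles."""
    flagged = []
    for y in range(0, wsi.shape[0] - tile + 1, tile):
        for x in range(0, wsi.shape[1] - tile + 1, tile):
            score = laplace(wsi[y:y+tile, x:x+tile].astype(float)).var()
            if score < threshold:                 # low Laplacian variance = blur
                flagged.append((x, y, score))
    return flagged

print(blurred_tiles(np.random.rand(512, 512)))
```

Flagged tile coordinates are exactly what an acquisition system would need in order to rescan regions while the slide is still on the scanner, as the abstract describes.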
Using image mapping towards biomedical and biological data sharing
2013-01-01
Image-based data integration in eHealth and life sciences is typically concerned with the method used for anatomical space mapping, needed to retrieve, compare and analyse large volumes of biomedical data. In mapping one image onto another image, a mechanism is used to match and find the corresponding spatial regions which have the same meaning between the source and the matching image. Image-based data integration is useful for integrating data of various information structures. Here we discuss a broad range of issues related to data integration of various information structures, review exemplary work on image representation and mapping, and discuss the challenges that these techniques may bring. PMID:24059352
The comparative effectiveness of conventional and digital image libraries.
McColl, R I; Johnson, A
2001-03-01
Before introducing a hospital-wide image database to improve access, navigation and retrieval speed, a comparative study between a conventional slide library and a matching image database was undertaken to assess its relative benefits. Paired time trials and personal questionnaires revealed faster retrieval rates, higher image quality, and easier viewing for the pilot digital image database. Analysis of confidentiality, copyright and data protection exposed similar issues for both systems, thus concluding that the digital image database is a more effective library system. The authors suggest that in the future, medical images will be stored on large, professionally administered, centrally located file servers, allowing specialist image libraries to be tailored locally for individual users. The further integration of the database with web technology will enable cheap and efficient remote access for a wide range of users.
Baur, Heidi; Gatterer, Hannes; Hotter, Barbara; Kopp, Martin
2017-06-01
[Purpose] The aim of this study was to examine the influence of Structural Integration and Fascial Fitness, a new form of physical exercise, on body image and the perception of back pain. [Subjects and Methods] In total, 33 participants with non-specific back pain were split into two groups and performed three sessions of Structural Integration or Fascial Fitness within a 3-week period. Before and after the interventions, perception of back pain and body image were evaluated using standardized questionnaires. [Results] Structural Integration significantly decreased non-specific back pain and improved both "negative body image" and "vital body dynamics". Fascial Fitness led to a significant improvement on the "negative body image" subscale. Benefits of Structural Integration did not significantly vary in magnitude from those of Fascial Fitness. [Conclusion] Both Structural Integration and Fascial Fitness can lead to a more positive body image after only three sessions. Moreover, the therapeutic technique of Structural Integration can reduce back pain.
Bennett, Ilana J; Stark, Craig E L
2016-03-01
Pattern separation describes the orthogonalization of similar inputs into unique, non-overlapping representations. This computational process is thought to serve memory by reducing interference and to be mediated by the dentate gyrus of the hippocampus. Using ultra-high in-plane resolution diffusion tensor imaging (hrDTI) in older adults, we previously demonstrated that integrity of the perforant path, which provides input to the dentate gyrus from entorhinal cortex, was associated with mnemonic discrimination, a behavioral outcome designed to load on pattern separation. The current hrDTI study assessed the specificity of this perforant path integrity-mnemonic discrimination relationship relative to other cognitive constructs (identified using a factor analysis) and white matter tracts (hippocampal cingulum, fornix, corpus callosum) in 112 healthy adults (20-87 years). Results revealed age-related declines in integrity of the perforant path and other medial temporal lobe (MTL) tracts (hippocampal cingulum, fornix). Controlling for global effects of brain aging, perforant path integrity related only to the factor that captured mnemonic discrimination performance. Comparable integrity-mnemonic discrimination relationships were also observed for the hippocampal cingulum and fornix. Thus, whereas perforant path integrity specifically relates to mnemonic discrimination, mnemonic discrimination may be mediated by a broader MTL network. Copyright © 2015 Elsevier Inc. All rights reserved.
Fuin, Niccolo; Catalano, Onofrio Antonio; Scipioni, Michele; Canjels, Lisanne P W; Izquierdo, David; Pedemonte, Stefano; Catana, Ciprian
2018-01-25
Purpose: We present an approach for concurrent reconstruction of respiratory motion compensated abdominal DCE-MRI and PET data in an integrated PET/MR scanner. The MR and PET reconstructions share the same motion vector fields (MVFs) derived from radial MR data; the approach is robust to changes in respiratory pattern and does not increase the total acquisition time. Methods: PET and DCE-MRI data of 12 oncological patients were simultaneously acquired for 6 minutes on an integrated PET/MR system after administration of 18F-FDG and gadoterate meglumine. Golden-angle radial MR data were continuously acquired simultaneously with PET data and sorted into multiple motion phases based on a respiratory signal derived directly from the radial MR data. The resulting multidimensional dataset was reconstructed using a compressed sensing approach that exploits sparsity among respiratory phases. MVFs obtained using the full 6 minutes (MC_6-min) and only the last 1 minute (MC_1-min) of data were incorporated into the PET reconstruction to obtain motion-corrected PET images, and into an MR iterative reconstruction algorithm to produce a series of motion-corrected DCE-MRI images (moco_GRASP). The motion-correction methods (MC_6-min and MC_1-min) were evaluated by qualitative analysis of the MR images and quantitative analysis of maximum and mean standardized uptake values (SUVmax, SUVmean), contrast, signal-to-noise ratio (SNR) and lesion volume in the PET images. Results: Motion-corrected MC_6-min PET images demonstrated 30%, 23%, 34% and 18% increases in average SUVmax, SUVmean, contrast and SNR, and an average 40% reduction in lesion volume with respect to the non-motion-corrected PET images. The changes in these figures of merit were smaller but still substantial for the MC_1-min protocol: 19%, 10%, 15% and 9% increases in average SUVmax, SUVmean, contrast and SNR, and a 28% reduction in lesion volume. Moco_GRASP images were deemed of acceptable or better diagnostic image quality with respect to conventional breath-hold Cartesian VIBE acquisitions. Conclusion: We presented a method that allows the simultaneous acquisition of respiratory motion-corrected diagnostic-quality DCE-MRI and quantitatively accurate PET data in an integrated PET/MR scanner with negligible prolongation of acquisition time compared to routine PET/DCE-MRI protocols. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
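The respiratory sorting step described above can be sketched in outline: given a per-spoke respiratory amplitude derived from the radial data, spokes are binned into motion phases. This is a simplified amplitude-sorted version under assumed data shapes, not the authors' reconstruction code.

```python
# Hedged sketch: bin golden-angle radial spokes into respiratory phases.
import numpy as np

def bin_spokes_by_phase(resp_signal, n_phases=4):
    """resp_signal: 1D array, one respiratory amplitude per radial spoke.
    Returns a list of index arrays, one per phase, with equal spoke counts."""
    order = np.argsort(resp_signal)    # end-expiration ... end-inspiration
    return np.array_split(order, n_phases)

resp = np.random.rand(600)             # placeholder signal, 600 spokes
phases = bin_spokes_by_phase(resp, n_phases=4)
print([len(p) for p in phases])        # spokes available per motion phase
```

Each phase's spokes would then feed a separate undersampled reconstruction, with sparsity across phases exploited jointly, before MVFs are estimated between the phase images.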
Large-Scale medical image analytics: Recent methodologies, applications and Future directions.
Zhang, Shaoting; Metaxas, Dimitris
2016-10-01
Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that the scale of image retrieval systems should be increased significantly, to a point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.
Lin, Wei-Che; Chou, Kun-Hsien; Chen, Chao-Long; Chen, Hsiu-Ling; Lu, Cheng-Hsien; Li, Shau-Hsuan; Huang, Chu-Chung; Lin, Ching-Po; Cheng, Yu-Fan
2014-01-01
Cerebral edema is the common pathogenic mechanism for cognitive impairment in minimal hepatic encephalopathy. Whether complete reversibility of brain edema, cognitive deficits, and their associated imaging findings can be achieved after liver transplantation remains an open question. To characterize white matter integrity before and after liver transplantation in patients with minimal hepatic encephalopathy, multiple diffusivity indices acquired via diffusion tensor imaging were applied. Twenty-eight patients and thirty age- and sex-matched healthy volunteers were included. Multiple diffusivity indices were obtained from diffusion tensor images, including mean diffusivity, fractional anisotropy, axial diffusivity and radial diffusivity. The assessment was repeated 6-12 months after transplantation. Differences in white matter integrity between groups, as well as longitudinal changes, were evaluated using tract-based spatial statistical analysis. Correlation analyses were performed to identify relationships at the first, pre-transplantation scan and in the interval changes among the neuropsychiatric tests, clinical laboratory tests, and diffusion tensor imaging indices. After transplantation, decreased water diffusivity without fractional anisotropy change, indicating reversible cerebral edema, was found in the left anterior cingulate, claustrum, postcentral gyrus, and right corpus callosum. However, a progressive decrease in fractional anisotropy and an increase in radial diffusivity, suggesting demyelination, were noted in the temporal lobe. Improved pre-transplantation albumin levels and interval changes were associated with better recovery of diffusion tensor imaging indices. Improvements in interval diffusion tensor imaging indices in the right postcentral gyrus were correlated with correction of visuospatial function scores. In conclusion, longitudinal voxel-wise analysis of multiple diffusion tensor imaging indices demonstrated different white matter changes in minimal hepatic encephalopathy patients. Transplantation improved extracellular cerebral edema and the results of associated cognition tests. However, white matter demyelination may advance in the temporal lobe.
Implementation of Enterprise Imaging Strategy at a Chinese Tertiary Hospital.
Li, Shanshan; Liu, Yao; Yuan, Yifang; Li, Jia; Wei, Lan; Wang, Yuelong; Fei, Xiaolu
2018-01-04
Medical images have become increasingly important in clinical practice and medical research, and the need to manage images at the hospital level has become urgent in China. To unify patient identification in examinations from different medical specialties, increase convenient access to medical images under authentication, and make medical images suitable for further artificial intelligence investigations, we implemented an enterprise imaging strategy by adopting an image integration platform as the main tool at Xuanwu Hospital. Workflow re-engineering and business system transformation was also performed to ensure the quality and content of the imaging data. More than 54 million medical images and approximately 1 million medical reports were integrated, and uniform patient identification, images, and report integration were made available to the medical staff and were accessible via a mobile application, which were achieved by implementing the enterprise imaging strategy. However, to integrate all medical images of different specialties at a hospital and ensure that the images and reports are qualified for data mining, some further policy and management measures are still needed.
Kong, Jun; Wang, Fusheng; Teodoro, George; Cooper, Lee; Moreno, Carlos S; Kurc, Tahsin; Pan, Tony; Saltz, Joel; Brat, Daniel
2013-12-01
In this paper, we present a novel framework for microscopic image analysis of nuclei, data management, and high performance computation to support translational research involving nuclear morphometry features, molecular data, and clinical outcomes. Our image analysis pipeline consists of nuclei segmentation and feature computation facilitated by high performance computing with coordinated execution on multi-core CPUs and Graphics Processing Units (GPUs). All data derived from image analysis are managed in a spatial relational database supporting highly efficient scientific queries. We applied our image analysis workflow to 159 glioblastomas (GBM) from The Cancer Genome Atlas dataset. With integrative studies, we found that statistics of four specific nuclear features were significantly associated with patient survival. Additionally, we correlated nuclear features with molecular data and found interesting results that support pathologic domain knowledge. We found that Proneural subtype GBMs had the smallest mean nuclear Eccentricity and the largest mean nuclear Extent and MinorAxisLength. We also found that gene expression of the stem cell marker MYC and the cell proliferation marker MKI67 correlated with nuclear features. To complement and inform pathologists of relevant diagnostic features, we queried the most representative nuclear instances from each patient population based on genetic and transcriptional classes. Our results demonstrate that specific nuclear features carry prognostic significance and associations with transcriptional and genetic classes, highlighting the potential of high throughput pathology image analysis as a complementary approach to human-based review and translational research.
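The kind of scientific query such a relational feature database supports might look like the following sketch. The schema, column names, toy rows, and the sqlite3 stand-in engine are all invented for illustration; the authors' actual system and schema are not reproduced here.

```python
# Hedged sketch: per-patient summaries over a nuclear-morphometry table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE nuclei (
    patient_id TEXT, eccentricity REAL, extent REAL, minor_axis_length REAL)""")
conn.executemany(
    "INSERT INTO nuclei VALUES (?, ?, ?, ?)",
    [("case-02", 0.81, 0.62, 7.9), ("case-02", 0.77, 0.66, 8.4),
     ("case-06", 0.64, 0.74, 9.8), ("case-06", 0.69, 0.71, 9.1)])

# Mean features per patient, ordered by eccentricity.
for row in conn.execute("""
        SELECT patient_id,
               AVG(eccentricity), AVG(extent), AVG(minor_axis_length)
        FROM nuclei GROUP BY patient_id ORDER BY 2"""):
    print(row)
```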
Berquist, Rachel M.; Gledhill, Kristen M.; Peterson, Matthew W.; Doan, Allyson H.; Baxter, Gregory T.; Yopak, Kara E.; Kang, Ning; Walker, H. J.; Hastings, Philip A.; Frank, Lawrence R.
2012-01-01
Museum fish collections possess a wealth of anatomical and morphological data that are essential for documenting and understanding biodiversity. Obtaining access to specimens for research, however, is not always practical and frequently conflicts with the need to maintain the physical integrity of specimens and the collection as a whole. Non-invasive three-dimensional (3D) digital imaging therefore serves a critical role in facilitating the digitization of these specimens for anatomical and morphological analysis as well as facilitating an efficient method for online storage and sharing of this imaging data. Here we describe the development of the Digital Fish Library (DFL, http://www.digitalfishlibrary.org), an online digital archive of high-resolution, high-contrast, magnetic resonance imaging (MRI) scans of the soft tissue anatomy of an array of fishes preserved in the Marine Vertebrate Collection of Scripps Institution of Oceanography. We have imaged and uploaded MRI data for over 300 marine and freshwater species, developed a data archival and retrieval system with a web-based image analysis and visualization tool, and integrated these into the public DFL website to disseminate data and associated metadata freely over the web. We show that MRI is a rapid and powerful method for accurately depicting the in-situ soft-tissue anatomy of preserved fishes in sufficient detail for large-scale comparative digital morphology. However these 3D volumetric data require a sophisticated computational and archival infrastructure in order to be broadly accessible to researchers and educators. PMID:22493695
A Novel Unsupervised Segmentation Quality Evaluation Method for Remote Sensing Images
Tang, Yunwei; Jing, Linhai; Ding, Haifeng
2017-01-01
The segmentation of a high spatial resolution remote sensing image is a critical step in geographic object-based image analysis (GEOBIA). Evaluating the performance of segmentation without ground truth data, i.e., unsupervised evaluation, is important for the comparison of segmentation algorithms and the automatic selection of optimal parameters. This unsupervised strategy currently faces several challenges in practice, such as difficulties in designing effective indicators and limitations of the spectral values in the feature representation. This study proposes a novel unsupervised evaluation method to quantitatively measure the quality of segmentation results to overcome these problems. In this method, multiple spectral and spatial features of images are first extracted simultaneously and then integrated into a feature set to improve the quality of the feature representation of ground objects. The indicators designed for spatial stratified heterogeneity and spatial autocorrelation are included to estimate the properties of the segments in this integrated feature set. These two indicators are then combined into a global assessment metric as the final quality score. The trade-offs of the combined indicators are accounted for using a strategy based on the Mahalanobis distance, which can be exhibited geometrically. The method is tested on two segmentation algorithms and three testing images. The proposed method is compared with two existing unsupervised methods and a supervised method to confirm its capabilities. Through comparison and visual analysis, the results verified the effectiveness of the proposed method and demonstrated the reliability and improvements of this method with respect to other methods. PMID:29064416
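One of the ingredients named above, a spatial autocorrelation indicator, is commonly computed as global Moran's I. The sketch below shows that computation over toy per-segment feature values with a binary adjacency matrix; it illustrates the indicator only, not the paper's full metric or its Mahalanobis-based combination.

```python
# Hedged sketch: global Moran's I over per-segment feature values.
import numpy as np

def morans_i(values, weights):
    """values: (n,) feature per segment; weights: (n, n) spatial weights."""
    z = values - values.mean()
    n = len(values)
    s0 = weights.sum()
    return (n / s0) * (z @ weights @ z) / (z @ z)

vals = np.array([3.0, 3.2, 2.9, 7.8, 8.1])    # toy per-segment means
w = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)  # toy segment adjacency
print(morans_i(vals, w))                      # > 0: neighboring segments similar
```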
Fundamentals of Structural Geology
NASA Astrophysics Data System (ADS)
Pollard, David D.; Fletcher, Raymond C.
2005-09-01
Fundamentals of Structural Geology provides a new framework for the investigation of geological structures by integrating field mapping and mechanical analysis. Assuming a basic knowledge of physical geology, introductory calculus and physics, it emphasizes the observational data, modern mapping technology, principles of continuum mechanics, and the mathematical and computational skills necessary to quantitatively map, describe, model, and explain deformation in Earth's lithosphere. By starting from the fundamental conservation laws of mass and momentum, the constitutive laws of material behavior, and the kinematic relationships for strain and rate of deformation, the authors demonstrate the relevance of solid and fluid mechanics to structural geology. This book offers a modern quantitative approach to structural geology for advanced students and researchers in structural geology and tectonics. It is supported by a website hosting images from the book, additional colour images, student exercises and MATLAB scripts; solutions to the exercises are available to instructors. The book integrates field mapping using modern technology with the analysis of structures based on a complete mechanics. MATLAB is used to visualize physical fields and analytical results, and MATLAB scripts can be downloaded from the website to recreate textbook graphics and enable students to explore their choice of parameters and boundary conditions. The supplementary website hosts color images of outcrop photographs used in the text, supplementary color images, and images of textbook figures for classroom presentations. The website also includes student exercises designed to instill the fundamental relationships and to encourage visualization of the evolution of geological structures.
NASA Astrophysics Data System (ADS)
Dai, Yunpeng; Chen, Xueli; Yin, Jipeng; Kang, Xiaoyu; Wang, Guodong; Zhang, Xianghan; Nie, Yongzhan; Wu, Kaichun; Liang, Jimin
2016-08-01
The aim of this article is to investigate the influence of the tracer injection dose (ID) and camera integration time (IT) on quantifying the pharmacokinetics of Cy5.5-GX1 in gastric cancer BGC-823 cell xenografted mice. Based on three factors (whether or not free GX1 was injected, the ID of Cy5.5-GX1, and the camera IT), 32 mice were randomly divided into eight groups and received 60-min dynamic fluorescence imaging. The Gurfinkel exponential model (GEXPM) and the Lammertsma simplified reference tissue model (SRTM), combined with a singular value decomposition analysis, were used to quantitatively analyze the acquired dynamic fluorescence images. The binding potential (Bp) and the sum of the pharmacokinetic rate constants (SKRC) of Cy5.5-GX1 were determined by the SRTM and GEXPM, respectively. In the tumor region, the SKRC value exhibited an obvious trend with changes in the tracer ID, but the Bp value was not sensitive to it. Both the Bp and SKRC values were independent of the camera IT. In addition, the tumor-to-muscle ratio was correlated with the camera IT but was independent of the tracer ID. Dynamic fluorescence imaging in conjunction with a kinetic analysis may provide more quantitative information than static fluorescence imaging, especially a priori information on the optimal ID of targeted probes for individual therapy.
Data layer integration for the national map of the united states
Usery, E.L.; Finn, M.P.; Starbuck, M.
2009-01-01
The integration of geographic data layers in multiple raster and vector formats, from many different organizations and at a variety of resolutions and scales, is a significant problem for The National Map of the United States being developed by the U.S. Geological Survey. Our research has examined data integration from a layer-based approach for five of The National Map data layers: digital orthoimages, elevation, land cover, hydrography, and transportation. An empirical approach has included visual assessment by a set of respondents with statistical analysis to establish the meaning of various types of integration. A separate theoretical approach with established hypotheses tested against actual data sets has resulted in an automated procedure for integration of specific layers and is being tested. The empirical analysis has established resolution bounds on meanings of integration with raster datasets and distance bounds for vector data. The theoretical approach has used a combination of theories on cartographic transformation and generalization, such as Töpfer's radical law, and additional research concerning optimum viewing scales for digital images to establish a set of guiding principles for integrating data of different resolutions.
Use of sonic tomography to detect and quantify wood decay in living trees
Gilbert, Gregory S.; Ballesteros, Javier O.; Barrios-Rodriguez, Cesar A.; Bonadies, Ernesto F.; Cedeño-Sánchez, Marjorie L.; Fossatti-Caballero, Nohely J.; Trejos-Rodríguez, Mariam M.; Pérez-Suñiga, José Moises; Holub-Young, Katharine S.; Henn, Laura A. W.; Thompson, Jennifer B.; García-López, Cesar G.; Romo, Amanda C.; Johnston, Daniel C.; Barrick, Pablo P.; Jordan, Fulvia A.; Hershcovich, Shiran; Russo, Natalie; Sánchez, Juan David; Fábrega, Juan Pablo; Lumpkin, Raleigh; McWilliams, Hunter A.; Chester, Kathleen N.; Burgos, Alana C.; Wong, E. Beatriz; Diab, Jonathan H.; Renteria, Sonia A.; Harrower, Jennifer T.; Hooton, Douglas A.; Glenn, Travis C.; Faircloth, Brant C.; Hubbell, Stephen P.
2016-01-01
Premise of the study: Field methodology and image analysis protocols using acoustic tomography were developed and evaluated as a tool to estimate the amount of internal decay and damage of living trees, with special attention to tropical rainforest trees with irregular trunk shapes. Methods and Results: Living trunks of a diversity of tree species in tropical rainforests in the Republic of Panama were scanned using an Argus Electronic PiCUS 3 Sonic Tomograph and evaluated for the amount and patterns of internal decay. A protocol using ImageJ analysis software was used to quantify the proportions of intact and compromised wood. The protocols provide replicable estimates of internal decay and cavities for trees of varying shapes, wood density, and bark thickness. Conclusions: Sonic tomography, coupled with image analysis, provides an efficient, noninvasive approach to evaluate decay patterns and structural integrity of even irregularly shaped living trees. PMID:28101433
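A rough sketch of the ImageJ-style quantification step follows: classify tomogram pixels by color class and report area fractions. The color rules are placeholders (real PiCUS tomograms would need calibrated hue thresholds), the synthetic array stands in for a scanned tomogram, and this is not the protocol's actual macro.

```python
# Hedged sketch: pixel-class area fractions from a tomogram image.
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(400, 400, 3))    # stand-in RGB tomogram
r, g, b = (img[..., i].astype(int) for i in range(3))

trunk = (r + g + b) > 60                          # placeholder trunk mask
intact = trunk & (g >= r) & (g >= b)              # placeholder: greenish = sound
compromised = trunk & ~intact                     # remainder = decay/cavity

print("intact fraction:     ", intact.sum() / trunk.sum())
print("compromised fraction:", compromised.sum() / trunk.sum())
```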
Quantification and analysis of respiratory motion from 4D MRI
NASA Astrophysics Data System (ADS)
Aizzuddin Abd Rahni, Ashrani; Lewis, Emma; Wells, Kevin
2014-11-01
It is well known that respiratory motion affects image acquisition as well as external beam radiotherapy (EBRT) treatment planning and delivery. However, existing approaches to respiratory motion management are often based on a generic view of respiratory motion, such as the overall movement of organs, tissue or fiducials. This paper therefore presents a more in-depth analysis of respiratory motion based on 4D MRI, for further integration into motion correction in image acquisition or image-based EBRT. Internal and external motion were first analysed separately, on a per-organ basis for internal motion. Principal component analysis (PCA) was then performed on the internal and external motion vectors separately, and the relationship between the two PCA spaces was analysed. The motion extracted from 4D MRI was in general found to be consistent with what has been reported in the literature.
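The PCA step described above can be sketched as follows: PCA is applied separately to the internal and external motion vectors, and the leading components of the two spaces are then correlated. Data shapes and variable names are illustrative, assuming numpy; this is not the paper's own code.

```python
# Hedged sketch: separate PCA on internal/external motion, then correlation.
import numpy as np

def pca_scores(data, n_components=2):
    """data: (n_timepoints, n_features) motion vectors; returns PC scores."""
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

internal = np.random.rand(100, 300)   # e.g. organ surface displacements
external = np.random.rand(100, 20)    # e.g. skin marker displacements

int_scores = pca_scores(internal)
ext_scores = pca_scores(external)
# Correlate the first principal components of the two motion spaces.
print(np.corrcoef(int_scores[:, 0], ext_scores[:, 0])[0, 1])
```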
FIMTrack: An open source tracking and locomotion analysis software for small animals.
Risse, Benjamin; Berh, Dimitri; Otto, Nils; Klämbt, Christian; Jiang, Xiaoyi
2017-05-01
Imaging and analyzing the locomotion behavior of small animals such as Drosophila larvae or C. elegans worms has become an integral subject of biological research. In the past we introduced FIM, a novel imaging system for acquiring high-contrast images. This system, in combination with the associated tracking software FIMTrack, is already used by many groups all over the world. However, so far there has not been an in-depth discussion of the technical aspects. Here we elaborate on the implementation details of FIMTrack and give an in-depth explanation of the algorithms used. Among others, the software offers several tracking strategies to cover a wide range of model organisms, locomotion types, and camera properties. Furthermore, the software facilitates stimulus-based analysis in combination with built-in manual tracking and correction functionalities. All features are integrated in an easy-to-use graphical user interface. To demonstrate the potential of FIMTrack we provide an evaluation of its accuracy using manually labeled data. The source code is available under the GNU GPLv3 at https://github.com/i-git/FIMTrack and pre-compiled binaries for Windows and Mac are available at http://fim.uni-muenster.de.
Rice, Lauren J; Lagopoulos, Jim; Brammer, Michael; Einfeld, Stewart L
2017-09-01
Prader-Willi Syndrome (PWS) is a genetic disorder characterized by infantile hypotonia, hyperphagia, hypogonadism, growth hormone deficiency, intellectual disability, and severe emotional and behavioral problems. The brain mechanisms that underpin these disturbances are unknown. Diffusion tensor imaging (DTI) enables in vivo investigation of the microstructural integrity of white matter pathways. To date, only one study has used DTI to examine white matter alterations in PWS. However, that study used selected regions of interest, rather than a whole brain analysis. In the present study, we used diffusion tensor and magnetic resonance (T1-weighted) imaging to examine microstructural white matter changes in 15 individuals with PWS (17-30 years) and 15 age- and gender-matched controls. Whole-brain voxel-wise statistical analysis of fractional anisotropy (FA) was carried out using tract-based spatial statistics (TBSS). Significantly decreased FA was found localized to the left hemisphere in individuals with PWS within the splenium of the corpus callosum, the internal capsule including the posterior thalamic radiation, and the inferior frontal occipital fasciculus (IFOF). Reduced integrity of these white matter pathways in individuals with PWS may relate to orientating attention, emotion recognition, semantic processing, and sensorimotor dysfunction. © 2017 Wiley Periodicals, Inc.
Zöllner, Frank G; Daab, Markus; Sourbron, Steven P; Schad, Lothar R; Schoenberg, Stefan O; Weisser, Gerald
2016-01-14
Perfusion imaging has become an important image-based tool to derive physiological information in various applications, like tumor diagnostics and therapy, stroke, (cardio-)vascular diseases, or functional assessment of organs. However, even after 20 years of intense research in this field, perfusion imaging still remains a research tool without broad clinical usage. One problem is the lack of standardization in technical aspects which have to be considered for successful quantitative evaluation; the second problem is a lack of tools that allow a direct integration into the diagnostic workflow in radiology. Five compartment models, namely a one-compartment model (1CP), a two-compartment exchange model (2CXM), a two-compartment uptake model (2CUM), a two-compartment filtration model (2FM), and finally the extended Tofts model (ETM), were implemented as a plugin for the DICOM workstation OsiriX. Moreover, the plugin has a clean graphical user interface and provides means for quality management during the perfusion data analysis. Based on reference test data, the implementation was validated against a reference implementation. No differences were found in the calculated parameters. We developed open source software to analyse DCE-MRI perfusion data. The software is designed as a plugin for the DICOM workstation OsiriX. It features a clean GUI and provides a simple workflow for data analysis, while it can also be seen as a toolbox providing implementations of several recent compartment models to be applied in research tasks. Integration into the infrastructure of a radiology department is given via OsiriX. Results are saved automatically, and reports generated during data analysis ensure quality control.
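As an illustration of the last of these models, a minimal sketch of the extended Tofts model evaluated by discrete convolution; the toy arterial input function and parameter values are illustrative assumptions, not taken from the paper or its reference data:

```python
import numpy as np

def extended_tofts(t, cp, ktrans, ve, vp):
    """Tissue concentration under the extended Tofts model:
    Ct(t) = vp*Cp(t) + Ktrans * integral of Cp(tau)*exp(-kep*(t-tau)) dtau,
    with kep = Ktrans / ve, evaluated by discrete convolution."""
    kep = ktrans / ve
    dt = t[1] - t[0]                      # assumes uniform sampling
    kernel = np.exp(-kep * t)
    conv = np.convolve(cp, kernel)[: len(t)] * dt
    return vp * cp + ktrans * conv

t = np.arange(0, 300, 1.0)                       # seconds
cp = 5.0 * (t / 30.0) * np.exp(1 - t / 30.0)     # toy arterial input function
ct = extended_tofts(t, cp, ktrans=0.15 / 60, ve=0.3, vp=0.05)
print(ct[:5])
```

Fitting any of the five models to measured curves then reduces to a nonlinear least-squares problem over the model parameters.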
On the release of cppxfel for processing X-ray free-electron laser images.
Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K; Stuart, David Ian
2016-06-01
As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.
A versatile atomic force microscope integrated with a scanning electron microscope.
Kreith, J; Strunz, T; Fantner, E J; Fantner, G E; Cordill, M J
2017-05-01
A versatile atomic force microscope (AFM), which can be installed in a scanning electron microscope (SEM), is introduced. The flexible design of the instrument enables correlated analysis for different experimental configurations, such as AFM imaging directly after nanoindentation in vacuum. In order to demonstrate the capabilities of the specially designed AFM installed inside a SEM, slip steps emanating around nanoindents in single crystalline brass were examined. This example showcases how the combination of AFM and SEM imaging can be utilized for quantitative dislocation analysis through the measurement of the slip step heights without the hindrance of oxide formation. Finally, an in situ nanoindentation technique is introduced, illustrating the use of AFM imaging during indentation experiments to examine plastic deformation occurring under the indenter tip. The mechanical indentation data are correlated to the SEM and AFM images to estimate the number of dislocations emitted to the surface.
Image analysis for microelectronic retinal prosthesis.
Hallum, L E; Cloherty, S L; Lovell, N H
2008-01-01
By way of extracellular, stimulating electrodes, a microelectronic retinal prosthesis aims to render discrete, luminous spots (so-called phosphenes) in the visual field, thereby providing a phosphene image (PI) as a rudimentary remediation of profound blindness. As part thereof, a digital camera, or some other photosensitive array, captures frames, frames are analyzed, and phosphenes are actuated accordingly by way of modulated charge injections. Here, we present a method that allows the assessment of image analysis schemes for integration with a prosthetic device, that is, the means of converting the captured image (high resolution) to modulated charge injections (low resolution). We use the mutual-information function to quantify the amount of information conveyed to the PI observer (device implantee), while accounting for the statistics of visual stimuli. We demonstrate an effective scheme involving overlapping, Gaussian kernels, and discuss extensions of the method to account for short-term visual memory in observers, and their perceptual errors of omission and commission.
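A minimal sketch of the two ingredients named here: a Gaussian-kernel scheme for reducing a captured frame to a coarse phosphene image, and a histogram-based mutual-information estimate. The grid size, kernel width, and brightness levels are illustrative assumptions, not the authors' parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def phosphene_image(frame, grid=16, sigma=2.0, levels=8):
    """Reduce a high-resolution frame to a coarse phosphene image by
    Gaussian smoothing, subsampling to a grid, and quantizing to the
    number of distinguishable brightness levels (values illustrative)."""
    smooth = gaussian_filter(frame.astype(float), sigma)
    step = frame.shape[0] // grid
    coarse = smooth[step // 2 :: step, step // 2 :: step][:grid, :grid]
    return np.floor(levels * coarse / (coarse.max() + 1e-9)).clip(0, levels - 1)

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X;Y) in bits between two equal-size arrays."""
    pxy, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
frame = rng.random((128, 128))
pi = phosphene_image(frame)
# MI between the coarse rendering and the block means it encodes
block = frame.reshape(16, 8, 16, 8).mean(axis=(1, 3))
print(mutual_information(pi, block))
```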
NASA Technical Reports Server (NTRS)
Kemp, James Herbert (Inventor); Talukder, Ashit (Inventor); Lambert, James (Inventor); Lam, Raymond (Inventor)
2008-01-01
A computer-implemented system and method of intra-oral analysis for measuring plaque removal is disclosed. The system includes hardware for real-time image acquisition and software to store the acquired images on a patient-by-patient basis. The system implements algorithms to segment teeth of interest from surrounding gum, and uses a real-time image-based morphing procedure to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The system integrates these components into a single software suite with an easy-to-use graphical user interface (GUI) that allows users to do an end-to-end run of a patient record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image.
On the release of cppxfel for processing X-ray free-electron laser images
Ginn, Helen Mary; Evans, Gwyndaf; Sauter, Nicholas K.; ...
2016-05-11
As serial femtosecond crystallography expands towards a variety of delivery methods, including chip-based methods, and smaller collected data sets, the requirement to optimize the data analysis to produce maximum structure quality is becoming increasingly pressing. Here cppxfel, a software package primarily written in C++, which showcases several data analysis techniques, is released. This software package presently indexes images using DIALS (diffraction integration for advanced light sources) and performs an initial orientation matrix refinement, followed by post-refinement of individual images against a reference data set. Cppxfel is released with the hope that the unique and useful elements of this package can be repurposed for existing software packages. However, as released, it produces high-quality crystal structures and is therefore likely to be also useful to experienced users of X-ray free-electron laser (XFEL) software who wish to maximize the information extracted from a limited number of XFEL images.
Wang, Jun; Hwang, Kiwook; Braas, Daniel; Dooraghi, Alex; Nathanson, David; Campbell, Dean O.; Gu, Yuchao; Sandberg, Troy; Mischel, Paul; Radu, Caius; Chatziioannou, Arion F.; Phelps, Michael E.; Christofk, Heather; Heath, James R.
2014-01-01
We report on a radiopharmaceutical imaging platform designed to capture the kinetics of cellular responses to drugs. Methods: A portable in vitro molecular imaging system, comprising a microchip and a beta-particle imaging camera, permits routine cell-based radioassays on small numbers of either suspension or adherent cells. We investigate the response kinetics of model lymphoma and glioblastoma cancer cell lines to [18F]fluorodeoxyglucose ([18F]FDG) uptake following drug exposure. Those responses are correlated with kinetic changes in the cell cycle, or with changes in receptor-tyrosine kinase signaling. Results: The platform enables radioassays directly on multiple cell types, and yields results comparable to conventional approaches, but uses smaller sample sizes, permits a higher level of quantitation, and does not require cell lysis. Conclusion: The kinetic analysis enabled by the platform provides a rapid (~1 hour) drug screening assay. PMID:23978446
Walton, Barbara L; Verbeck, Guido F
2014-08-19
Matrix-assisted laser desorption ionization (MALDI) imaging is gaining popularity, but matrix effects such as mass spectral interference and damage to the sample limit its applications. Replacing traditional matrices with silver particles capable of equivalent or increased photon energy absorption from the incoming laser has proven to be beneficial for low mass analysis. Not only can silver clusters be advantageous for low mass compound detection, but they can be used for imaging as well. Conventional matrix application methods can obstruct samples, such as fingerprints, rendering them useless after mass analysis. The ability to image latent fingerprints without causing damage to the ridge pattern is important as it allows for further characterization of the print. The application of silver clusters by soft-landing ion mobility allows for enhanced MALDI and preservation of fingerprint integrity.
The optimal algorithm for Multi-source RS image fusion.
Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan
2016-01-01
In order to solve the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), which integrates the merits of the genetic algorithm with the advantages of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid transform as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The main points are summarized as follows:
• The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
• The article presents the GSDA algorithm for self-adaptive adjustment of the fusion rules.
• The text proposes the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.
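A minimal sketch of the kind of objective function described, a weighted sum of fusion-quality indices; image entropy and mean gradient magnitude are chosen here as illustrative indices, since the paper's actual evaluation indices and weights are not specified in the abstract:

```python
import numpy as np

def fusion_objective(fused, weights=(0.5, 0.5)):
    """Weighted sum of two common fusion-quality indices (illustrative):
    image entropy and mean gradient magnitude. Higher is better."""
    hist, _ = np.histogram(fused, bins=256, range=(0, 1))
    p = hist[hist > 0] / hist[hist > 0].sum()
    entropy = -(p * np.log2(p)).sum()
    gy, gx = np.gradient(fused.astype(float))
    sharpness = np.hypot(gx, gy).mean()
    return weights[0] * entropy + weights[1] * sharpness

rng = np.random.default_rng(7)
print(fusion_objective(rng.random((64, 64))))
```

An optimizer such as a genetic algorithm would then search the fusion-rule parameters for the candidate that maximizes this score.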
NASA Astrophysics Data System (ADS)
Nagarajan, Sounderya; Pioche-Durieu, Catherine; Tizei, Luiz H. G.; Fang, Chia-Yi; Bertrand, Jean-Rémi; Le Cam, Eric; Chang, Huan-Cheng; Treussart, François; Kociak, Mathieu
2016-06-01
Light and Transmission Electron Microscopies (LM and TEM) hold potential in bioimaging owing to the advantages of fast imaging of multiple cells with LM and ultrastructure resolution offered by TEM. Integrated or correlated LM and TEM are the current approaches to combine the advantages of both techniques. Here we propose an alternative in which the electron beam of a scanning TEM (STEM) is used to excite concomitantly the luminescence of nanoparticle labels (a process known as cathodoluminescence, CL), and image the cell ultrastructure. This CL-STEM imaging allows obtaining luminescence spectra and imaging ultrastructure simultaneously. We present a proof of principle experiment, showing the potential of this technique in image cytometry of cell vesicular components. To label the vesicles we used fluorescent diamond nanocrystals (nanodiamonds, NDs) of size ~150 nm coated with different cationic polymers, known to trigger different internalization pathways. Each polymer was associated with a type of ND with a different emission spectrum. With CL-STEM, for each individual vesicle, we were able to measure (i) their size with nanometric resolution, (ii) their content in different ND labels, and realize intracellular component cytometry. In contrast to the recently reported organelle flow cytometry technique that requires cell sonication, CL-STEM-based image cytometry preserves the cell integrity and provides a much higher resolution in size. Although this novel approach is still limited by a low throughput, the automatization of data acquisition and image analysis, combined with improved intracellular targeting, should facilitate applications in cell biology at the subcellular level. Electronic supplementary information (ESI) available. See DOI: 10.1039/c6nr01908k
DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool
Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary
2008-01-01
Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
Gulati, Gaurav; Jones, Jordan T; Lee, Gregory; Altaye, Mekibib; Beebe, Dean W; Meyers-Eaton, Jamie; Wiley, Kasha; Brunner, Hermine I; DiFrancesco, Mark W
2017-02-01
To evaluate a safe, noninvasive magnetic resonance imaging (MRI) method to measure regional blood-brain barrier integrity and investigate its relationship with neurocognitive function and regional gray matter volume in juvenile-onset systemic lupus erythematosus (SLE). In this cross-sectional, case-control study, capillary permeability was measured as a marker of blood-brain barrier integrity in juvenile SLE patients and matched healthy controls, using a combination of arterial spin labeling and diffusion-weighted brain MRI. Regional gray matter volume was measured by voxel-based morphometry. Correlation analysis was done to investigate the relationship between regional capillary permeability and regional gray matter volume. Formal neurocognitive testing was completed (measuring attention, visuoconstructional ability, working memory, and psychomotor speed), and scores were regressed against regional blood-brain barrier integrity among juvenile SLE patients. Formal cognitive testing confirmed normal cognitive ability in all juvenile SLE subjects (n = 11) included in the analysis. Regional capillary permeability was negatively associated (P = 0.026) with neurocognitive performance concerning psychomotor speed in the juvenile SLE cohort. Compared with controls (n = 11), juvenile SLE patients had significantly greater capillary permeability involving Brodmann's areas 19, 28, 36, and 37 and caudate structures (P < 0.05 for all). There is imaging evidence of increased regional capillary permeability in juvenile SLE patients with normal cognitive performance using a novel noninvasive MRI technique. These blood-brain barrier outcomes appear consistent with functional neuronal network alterations and gray matter volume loss previously observed in juvenile SLE patients with overt neurocognitive deficits, supporting the notion that blood-brain barrier integrity loss precedes the loss of cognitive ability in juvenile SLE. Longitudinal studies are needed to confirm the findings of this pilot study. © 2016, American College of Rheumatology.
Liu, Jinxia; Cao, Yue; Wang, Qiu; Pan, Wenjuan; Ma, Fei; Liu, Changhong; Chen, Wei; Yang, Jianbo; Zheng, Lei
2016-01-01
Water-injected beef has aroused public concern as a major food-safety issue in meat products. In the study, the potential of multispectral imaging analysis in the visible and near-infrared (405-970 nm) regions was evaluated for identifying water-injected beef. A multispectral vision system was used to acquire images of beef injected with up to 21% content of water, and partial least squares regression (PLSR) algorithm was employed to establish prediction model, leading to quantitative estimations of actual water increase with a correlation coefficient (r) of 0.923. Subsequently, an optimized model was achieved by integrating spectral data with feature information extracted from ordinary RGB data, yielding better predictions (r = 0.946). Moreover, the prediction equation was transferred to each pixel within the images for visualizing the distribution of actual water increase. These results demonstrate the capability of multispectral imaging technology as a rapid and non-destructive tool for the identification of water-injected beef. Copyright © 2015 Elsevier Ltd. All rights reserved.
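A minimal sketch of the modeling step described: fitting a PLSR calibration between per-sample spectra and water-increase levels, then applying the fitted equation pixel-wise to form a distribution map. The data shapes, band count, and values are illustrative assumptions, not the study's measurements:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
# Hypothetical data: mean spectra (19 bands) per sample vs. injected-water %
X = rng.random((60, 19))
y = 21 * rng.random(60)

pls = PLSRegression(n_components=5)
pls.fit(X, y)
r = np.corrcoef(pls.predict(X).ravel(), y)[0, 1]
print(f"calibration r = {r:.3f}")

# Pixel-wise mapping: apply the same equation to every pixel spectrum
pixels = rng.random((100 * 100, 19))
water_map = pls.predict(pixels).reshape(100, 100)
```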
From microscopy to whole slide digital images: a century and a half of image analysis.
Taylor, Clive R
2011-12-01
In the year 1850, microscopes had evolved in quality to the point that the "first pathologists emerged from the treacherous swamps of medieval practice onto the relatively firm ground that histopathology seemed to offer." These early pathologists began to practice the art of image analysis, and diagnostic surgical pathology was born. Today the traditional microscope, in the hands of an experienced pathologist, is established as the gold standard for diagnosis of cancer and other diseases. Nonetheless, it is a tool and a technology that is more than 150 years old. Rapid advances in the capabilities of digital imaging hardware and software now offer the real possibility of moving to a new level of practice, using whole slide digital images for diagnosis, education, and research in morphologic pathology. Potential efficiencies in work flow and diagnostic integration, coupled with the use of powerful new analytic methods, promise radically to change the future shape of surgical pathology.
Wells, Darren M.; French, Andrew P.; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein; Bennett, Malcolm J.; Pridmore, Tony P.
2012-01-01
Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana. PMID:22527394
Wells, Darren M; French, Andrew P; Naeem, Asad; Ishaq, Omer; Traini, Richard; Hijazi, Hussein I; Bennett, Malcolm J; Pridmore, Tony P
2012-06-05
Roots are highly responsive to environmental signals encountered in the rhizosphere, such as nutrients, mechanical resistance and gravity. As a result, root growth and development is very plastic. If this complex and vital process is to be understood, methods and tools are required to capture the dynamics of root responses. Tools are needed which are high-throughput, supporting large-scale experimental work, and provide accurate, high-resolution, quantitative data. We describe and demonstrate the efficacy of the high-throughput and high-resolution root imaging systems recently developed within the Centre for Plant Integrative Biology (CPIB). This toolset includes (i) robotic imaging hardware to generate time-lapse datasets from standard cameras under infrared illumination and (ii) automated image analysis methods and software to extract quantitative information about root growth and development both from these images and via high-resolution light microscopy. These methods are demonstrated using data gathered during an experimental study of the gravitropic response of Arabidopsis thaliana.
Hologlyphics: volumetric image synthesis performance system
NASA Astrophysics Data System (ADS)
Funk, Walter
2008-02-01
This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images, for example voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and various content has been developed and shown to live audiences by a live performer. Real world applications will be explored, with feedback on the human factors.
Design of an automated imaging system for use in a space experiment
NASA Technical Reports Server (NTRS)
Hartz, William G.; Bozzolo, Nora G.; Lewis, Catherine C.; Pestak, Christopher J.
1991-01-01
An experiment on an orbiting platform examines the mass transfer across gas-liquid and liquid-liquid interfaces. It employs an imaging system with real-time image analysis. The design includes optical design, imager selection and integration, positioner control, image recording, software development for processing, and interfaces to telemetry. It addresses the constraints of weight, volume, and electric power associated with placing the experiment in the Space Shuttle cargo bay. Challenging elements of the design are: imaging and recording of a 200-micron-diameter bubble with a resolution of 2 microns to serve as a primary source of data; frame rates varying from 500 frames per second to 1 frame per second, depending on the experiment phase; and providing three-dimensional information to determine the shape of the bubble.
Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle
NASA Technical Reports Server (NTRS)
Grosvenor, Sandy; Jones, Jeremy; Koratkar, Anuradha; Li, Connie; Mackey, Jennifer; Neher, Ken; Wolf, Karl; Obenschain, Arthur F. (Technical Monitor)
2001-01-01
A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations more efficiently, The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper examines the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what have been its successes and challenges.
Linking Science Analysis with Observation Planning: A Full Circle Data Lifecycle
NASA Technical Reports Server (NTRS)
Jones, Jeremy; Grosvenor, Sandy; Wolf, Karl; Li, Connie; Koratkar, Anuradha; Powers, Edward I. (Technical Monitor)
2001-01-01
A clear goal of the Virtual Observatory (VO) is to enable new science through analysis of integrated astronomical archives. An additional and powerful possibility of the VO is to link and integrate these new analyses with planning of new observations. By providing tools that can be used for observation planning in the VO, the VO will allow the data lifecycle to come full circle: from theory to observations to data and back around to new theories and new observations. The Scientist's Expert Assistant (SEA) Simulation Facility (SSF) is working to combine the ability to access existing archives with the ability to model and visualize new observations. Integrating the two will allow astronomers to better use the integrated archives of the VO to plan and predict the success of potential new observations. The full circle lifecycle enabled by SEA can allow astronomers to make substantial leaps in the quality of data and science returns on new observations. Our paper will examine the exciting potential of integrating archival analysis with new observation planning, such as performing data calibration analysis on archival images and using that analysis to predict the success of new observations, or performing dynamic signal-to-noise analysis combining historical results with modeling of new instruments or targets. We will also describe how the development of the SSF is progressing and what its successes and challenges have been.
THz optical design considerations and optimization for medical imaging applications
NASA Astrophysics Data System (ADS)
Sung, Shijun; Garritano, James; Bajwa, Neha; Nowroozi, Bryan; Llombart, Nuria; Grundfest, Warren; Taylor, Zachary D.
2014-09-01
THz imaging system design will play an important role in making possible the imaging of targets with arbitrary properties and geometries. This study discusses design considerations and imaging performance optimization techniques for THz quasioptical imaging system optics. Analysis of field and polarization distortion by off-axis parabolic (OAP) mirrors in THz imaging optics shows how distortions are carried through a series of mirrors while guiding the THz beam. While distortions of the beam profile by individual mirrors are not significant, these effects are compounded by a series of mirrors in antisymmetric orientation. It is shown that symmetric orientation of the OAP mirrors effectively cancels this distortion to recover the original beam profile. Additionally, symmetric orientation can correct for some geometrical off-focusing due to misalignment. We also demonstrate an alternative method to test for overall system optics alignment by investigating the imaging performance of a tilted target plane. An asymmetric signal profile as a function of the target plane's tilt angle indicates when one or more imaging components are misaligned, giving a preferred tilt direction. Such analysis can offer additional insight into often elusive source device misalignment in an integrated system. Imaging plane tilting characteristics are representative of a 3-D modulation transfer function of the imaging system. A symmetric tilted-plane profile is preferred to optimize imaging performance.
Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina
2018-01-01
The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
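A minimal sketch of the two-level design described: extracting a small panel of intensity features per image region and feeding the resulting signatures to a multivariate machine learning model. The feature panel, synthetic data, and random-forest choice are illustrative stand-ins for CaPTk's much richer panels and models:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def intensity_features(region):
    """First level: a small panel of histogram features per region.
    (CaPTk extracts far richer panels; this is a minimal stand-in.)"""
    r = region.ravel().astype(float)
    return [r.mean(), r.std(), np.percentile(r, 10), np.percentile(r, 90)]

rng = np.random.default_rng(3)
regions = [rng.normal(loc=c, size=(32, 32)) for c in rng.choice([0.0, 1.0], 80)]
labels = [int(r.mean() > 0.5) for r in regions]
X = np.array([intensity_features(r) for r in regions])

# Second level: a multivariate model turns the signature into a biomarker
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:5]))
```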
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
Ehsan, Shoaib; Clark, Adrian F; Naveed ur Rehman; McDonald-Maier, Klaus D
2015-07-10
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of the integral image present several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms which allow a substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems.
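For reference, a minimal software sketch of the integral-image recursion these hardware algorithms decompose, together with the four-lookup box sum it enables; the row-parallel hardware decomposition itself is not reproduced here:

```python
import numpy as np

def integral_image(img):
    """Integral image via the recursive row/column sums
    s(x, y) = s(x, y-1) + i(x, y);  ii(x, y) = ii(x-1, y) + s(x, y)."""
    return img.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from four integral-image lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.int64).reshape(4, 4)
ii = integral_image(img)
assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```

The constant four-lookup cost of `box_sum`, independent of the rectangle size, is what makes SURF-style rectangular filters run at constant speed.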
Radiology and Enterprise Medical Imaging Extensions (REMIX).
Erdal, Barbaros S; Prevedello, Luciano M; Qian, Songyue; Demirer, Mutlu; Little, Kevin; Ryu, John; O'Donnell, Thomas; White, Richard D
2018-02-01
Radiology and Enterprise Medical Imaging Extensions (REMIX) is a platform originally designed to support both the clinical and the clinical-research medical-imaging-driven operational needs of the Department of Radiology of The Ohio State University Wexner Medical Center. REMIX accommodates the storage and handling of "big imaging data," as needed for large multi-disciplinary cancer-focused programs. The evolving REMIX platform contains an array of integrated tools/software packages for the following: (1) server and storage management; (2) image reconstruction; (3) digital pathology; (4) de-identification; (5) business intelligence; (6) texture analysis; and (7) artificial intelligence. These capabilities, along with documentation and guidance explaining how to interact with a commercial system (e.g., PACS, EHR, commercial database) that currently exists in clinical environments, are to be made freely available.
Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.
Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike
2010-01-01
An increasingly common component of studies in synthetic and systems biology is analysis of dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on the single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images (segmentation and lineage reconstruction) to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation, and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
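A minimal sketch of the frame-to-frame linking step: greedy nearest-neighbour assignment of cell centroids between consecutive frames. The paper's scoring additionally uses neighbourhood context; only the distance term is kept here, and the coordinates are illustrative:

```python
import numpy as np

def link_frames(prev, curr, max_dist=20.0):
    """Greedy frame-to-frame linking: score each (previous, current)
    cell pair by centroid distance and link nearest pairs first."""
    pairs = sorted(
        (np.hypot(*(p - c)), i, j)
        for i, p in enumerate(prev)
        for j, c in enumerate(curr)
    )
    links, used_p, used_c = {}, set(), set()
    for d, i, j in pairs:
        if d <= max_dist and i not in used_p and j not in used_c:
            links[j] = i           # current cell j descends from previous i
            used_p.add(i)
            used_c.add(j)
    return links

prev = np.array([[10.0, 10.0], [40.0, 40.0]])
curr = np.array([[12.0, 11.0], [41.0, 38.0], [70.0, 70.0]])
print(link_frames(prev, curr))    # cell 2 is unlinked: a new (or divided) cell
```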
Design and Construction of a Field Capable Snapshot Hyperspectral Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Arik, Glenda H.
2005-01-01
The computed-tomography imaging spectrometer (CTIS) is a device which captures the spatial and spectral content of a rapidly evolving scene in a single image frame. The most recent CTIS design is optically all-reflective and uses as its dispersive device a state-of-the-art reflective computer-generated hologram (CGH). This project focuses on the instrument's transition from laboratory to field. This design will enable the CTIS to withstand a harsh desert environment. The system is modeled in optical design software using a tolerance analysis. The tolerances guide the design of the athermal mount and component parts. The parts are assembled into a working mount shell where the performance of the mounts is tested for thermal integrity. An interferometric analysis of the reflective CGH is also performed.
Reconfigurable and responsive droplet-based compound micro-lenses.
Nagelberg, Sara; Zarzar, Lauren D; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M; Kolle, Mathias
2017-03-07
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications, integral micro-scale imaging devices and light field display technology, thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses.
Reconfigurable and responsive droplet-based compound micro-lenses
Nagelberg, Sara; Zarzar, Lauren D.; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A.; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M.; Kolle, Mathias
2017-01-01
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications—integral micro-scale imaging devices and light field display technology—thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses. PMID:28266505
Position Accuracy Analysis of a Robust Vision-Based Navigation
NASA Astrophysics Data System (ADS)
Gaglione, S.; Del Pizzo, S.; Troisi, S.; Angrisano, A.
2018-05-01
Using images to determine camera position and attitude is a consolidated method, widely used for applications like UAV navigation. In harsh environments, where GNSS could be degraded or denied, image-based positioning could represent a possible candidate for an integrated or alternative system. In this paper, such a method is investigated using a system based on a single camera and 3D maps. A robust estimation method is proposed in order to limit the effect of blunders or noisy measurements on the position solution. The proposed approach is tested using images collected in an urban canyon, where GNSS positioning is very inaccurate. A photogrammetric survey was previously performed to build the 3D model of the tested area. The position accuracy analysis is performed and the effect of the proposed robust method is validated.
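A minimal sketch of one standard robust formulation of this problem, RANSAC-based PnP as provided by OpenCV; the camera matrix, correspondences, and injected blunder are synthetic assumptions, and the paper's own estimator is not reproduced here:

```python
import numpy as np
import cv2

# Hypothetical correspondences between 3D model points (e.g. from a
# photogrammetric city model) and their detections in the camera image
object_pts = np.array([[0, 0, 0], [4, 0, 0], [4, 3, 0], [0, 3, 0],
                       [2, 1.5, 2], [1, 2, 5], [3, 1, 4], [0.5, 0.5, 1]],
                      dtype=np.float64)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])

rvec_true = np.array([[0.1], [-0.2], [0.05]])
tvec_true = np.array([[0.5], [-0.3], [12.0]])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)
image_pts[5] += 40.0                     # inject one gross blunder

# RANSAC-based PnP rejects the blunder instead of averaging it in
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_pts, image_pts, K, None, reprojectionError=3.0)
print(ok, tvec.ravel(), "inliers:", inliers.ravel())
```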
IHE cross-enterprise document sharing for imaging: design challenges
NASA Astrophysics Data System (ADS)
Noumeir, Rita
2006-03-01
Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Records (EHR). This profile proposes an architecture based on a central Registry that holds metadata describing published Documents residing in one or multiple Document Repositories. As medical images constitute an important part of the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I), is based on the XDS architecture. The XDS-I integration solution that is published as part of the IHE Technical Framework is the result of an extensive investigation of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.
NASA Astrophysics Data System (ADS)
Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.
2017-03-01
In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.
Haring, Martijn T; Liv, Nalan; Zonnevylle, A Christiaan; Narvaez, Angela C; Voortman, Lenard M; Kruit, Pieter; Hoogenboom, Jacob P
2017-03-02
In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.
Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.
2017-01-01
In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample. PMID:28252673
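A minimal sketch of the registration arithmetic underlying such pointer-based overlay: fitting a least-squares affine transform between commanded beam positions and detected spot centroids. The grid, distortion, and noise values are synthetic assumptions; the published method operates on cathodoluminescence images rather than simulated points:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src -> dst (N x 2 arrays).
    With cathodoluminescence pointers, src would be commanded beam
    positions and dst the detected spot centroids in the optical image."""
    A = np.hstack([src, np.ones((len(src), 1))])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs                          # 3 x 2 matrix: [A | b]^T

rng = np.random.default_rng(5)
beam = rng.uniform(0, 1000, size=(9, 2))   # e.g. a 3x3 pointer grid
true = beam @ np.array([[1.01, 0.002], [-0.003, 0.99]]) + [5.0, -3.0]
spots = true + rng.normal(0, 0.5, beam.shape)   # centroid detection noise
M = fit_affine(beam, spots)
resid = np.hstack([beam, np.ones((9, 1))]) @ M - spots
print("RMS residual (px):", float(np.sqrt((resid ** 2).mean())))
```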
Kaneta, Tomohiro; Nakatsuka, Masahiro; Nakamura, Kei; Seki, Takashi; Yamaguchi, Satoshi; Tsuboi, Masahiro; Meguro, Kenichi
2016-01-01
SPECT is an important diagnostic tool for dementia. Recently, statistical analysis of SPECT has been commonly used for dementia research. In this study, we evaluated the accuracy of visual SPECT evaluation and/or statistical analysis for the diagnosis (Dx) of Alzheimer disease (AD) and other forms of dementia in our community-based study "The Osaki-Tajiri Project." Eighty-nine consecutive outpatients with dementia were enrolled and underwent brain perfusion SPECT with 99mTc-ECD. Diagnostic accuracy of SPECT was tested using 3 methods: visual inspection (SPECT Dx), an automated diagnostic tool using statistical analysis with the easy Z-score imaging system (eZIS Dx), and visual inspection plus eZIS (integrated Dx). Integrated Dx showed the highest sensitivity, specificity, and accuracy, whereas eZIS was the second most accurate method. We also observed a higher than expected rate of false-negative SPECT findings in AD cases. Among these, 50% showed hypofrontality and were diagnosed as frontotemporal lobar degeneration. These cases typically showed regional "hot spots" in the primary sensorimotor cortex (i.e., a sensorimotor hot spot sign), which we determined were associated with AD rather than frontotemporal lobar degeneration. We concluded that diagnostic ability was improved by the integrated use of visual assessment and statistical analysis. In addition, the detection of a sensorimotor hot spot sign was useful to detect AD when hypofrontality is present and improved the ability to properly diagnose AD.
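A minimal sketch of the core arithmetic behind Z-score imaging of the kind eZIS performs: voxel-wise comparison of a patient scan against a normal database. Spatial and count normalization are assumed already done, and all data here are synthetic:

```python
import numpy as np

def z_score_map(patient, normals):
    """Voxel-wise Z-score of a patient scan against a normal database:
    Z = (mean_normal - patient) / SD_normal, so positive Z marks
    hypoperfusion relative to the normals."""
    mu = normals.mean(axis=0)
    sd = normals.std(axis=0, ddof=1)
    return (mu - patient) / np.where(sd > 0, sd, np.inf)

rng = np.random.default_rng(4)
normals = rng.normal(100, 10, size=(20, 8, 8, 8))   # 20 normal scans
patient = rng.normal(100, 10, size=(8, 8, 8))
patient[2:4, 2:4, 2:4] -= 35                        # simulated hypoperfusion
z = z_score_map(patient, normals)
print("voxels with Z > 2:", int((z > 2).sum()))
```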
Scanning fluorescent microthermal imaging apparatus and method
Barton, Daniel L.; Tangyunyong, Paiboon
1998-01-01
A scanning fluorescent microthermal imaging (FMI) apparatus and method is disclosed, useful for integrated circuit (IC) failure analysis, that uses a scanned and focused beam from a laser to excite a thin fluorescent film disposed over the surface of the IC. By collecting fluorescent radiation from the film, and performing point-by-point data collection with a single-point photodetector, a thermal map of the IC is formed to measure any localized heating associated with defects in the IC.
Analysis of an integrated 8-channel Tx/Rx body array for use as a body coil in 7-Tesla MRI
NASA Astrophysics Data System (ADS)
Orzada, Stephan; Bitz, Andreas K.; Johst, Sören; Gratz, Marcel; Völker, Maximilian N.; Kraff, Oliver; Abuelhaija, Ashraf; Fiedler, Thomas M.; Solbach, Klaus; Quick, Harald H.; Ladd, Mark E.
2017-06-01
Object: In this work an 8-channel array integrated into the gap between the gradient coil and bore liner of a 7-Tesla whole-body magnet is presented that would allow a workflow closer to that of systems at lower magnetic fields that have a built-in body coil; this integrated coil is compared to a local 8-channel array built from identical elements placed directly on the patient. Materials and Methods: SAR efficiency and the homogeneity of the right-rotating B1 field component (B1+) are investigated numerically and compared to the local array. Power efficiency measurements are performed in the MRI system. First in vivo gradient echo images are acquired with the integrated array. Results: While the remote array shows a slightly better performance in terms of B1+ homogeneity, the power efficiency and the SAR efficiency are inferior to those of the local array: the transmit voltage has to be increased by a factor of 3.15 (roughly a ten-fold increase in transmit power) to achieve equal flip angles in a central axial slice. The g-factor calculations show a better parallel imaging g-factor for the local array. The field of view of the integrated array is larger than that of the local array. First in vivo images with the integrated array look subjectively promising. Conclusion: Although some RF performance parameters of the integrated array are inferior to those of a tight-fitting local array, these disadvantages might be compensated by the use of amplifiers with higher power and the use of local receive arrays. In addition, the distant placement provides the potential to include more elements in the array design.
NASA Astrophysics Data System (ADS)
Hamedianfar, Alireza; Shafri, Helmi Zulhaidi Mohd
2016-04-01
This paper integrates decision tree-based data mining (DM) and object-based image analysis (OBIA) to provide a transferable model for the detailed characterization of urban land-cover classes using WorldView-2 (WV-2) satellite images. Many articles have been published on OBIA in recent years based on DM for different applications. However, less attention has been paid to the generation of a transferable model for characterizing detailed urban land-cover features. Three subsets of WV-2 images were used in this paper to generate transferable OBIA rule-sets. Many features were explored by using a DM algorithm, which created the classification rules as a decision tree (DT) structure from the first study area. The developed DT algorithm was applied to object-based classifications in the first study area. After this process, we validated the capability and transferability of the classification rules on the second and third subsets. Detailed ground truth samples were collected to assess the classification results. The first, second, and third study areas achieved 88%, 85%, and 85% overall accuracies, respectively. Results from the investigation indicate that DM is an efficient method to provide optimal and transferable classification rules for OBIA, which accelerates the rule-set creation stage in the OBIA classification domain.
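A minimal sketch of how a decision tree learned from object features reads directly as a transferable rule-set; the four object features and the toy labelling rule are illustrative assumptions, not the paper's feature set:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(6)
# Hypothetical per-object features from segmentation of a WV-2 scene:
# mean NIR, mean red, NDVI, object compactness
X = rng.random((200, 4))
y = (X[:, 2] > 0.5).astype(int)        # toy rule: vegetation if NDVI high

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# The learned tree prints directly as a rule-set that can be re-applied
# to objects segmented from other image subsets
print(export_text(tree, feature_names=["nir", "red", "ndvi", "compactness"]))
```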
Nondestructive cryomicro-CT imaging enables structural and molecular analysis of human lung tissue.
Vasilescu, Dragoş M; Phillion, André B; Tanabe, Naoya; Kinose, Daisuke; Paige, David F; Kantrowitz, Jacob J; Liu, Gang; Liu, Hanqiao; Fishbane, Nick; Verleden, Stijn E; Vanaudenaerde, Bart M; Lenburg, Marc; Stevenson, Christopher S; Spira, Avrum; Cooper, Joel D; Hackett, Tillie-Louise; Hogg, James C
2017-01-01
Micro-computed tomography (CT) enables three-dimensional (3D) imaging of complex soft tissue structures, but current protocols used to achieve this goal preclude cellular and molecular phenotyping of the tissue. Here we describe a radiolucent cryostage that permits micro-CT imaging of unfixed frozen human lung samples at an isotropic voxel size of (11 µm)³ under conditions where the sample is maintained frozen at -30°C during imaging. The cryostage was tested for thermal stability to maintain samples frozen up to 8 h. This report describes the methods used to choose the materials required for cryostage construction and demonstrates that whole genome mRNA integrity and expression are not compromised by exposure to micro-CT radiation and that the tissue can be used for immunohistochemistry. The new cryostage provides a novel method enabling integration of 3D tissue structure with cellular and molecular analysis to facilitate the identification of molecular determinants of disease. The described micro-CT cryostage provides a novel way to study the three-dimensional lung structure preserved without the effects of fixatives while enabling subsequent studies of the cellular matrix composition and gene expression. This approach will, for the first time, enable researchers to study structural changes of lung tissues that occur with disease and correlate them with changes in gene or protein signatures. Copyright © 2017 the American Physiological Society.
Wojdyla, Justyna Aleksandra; Panepucci, Ezequiel; Martiel, Isabelle; Ebner, Simon; Huang, Chia-Ying; Caffrey, Martin; Bunk, Oliver; Wang, Meitian
2016-01-01
A fast continuous grid scan protocol has been incorporated into the Swiss Light Source (SLS) data acquisition and analysis software suite on the macromolecular crystallography (MX) beamlines. Its combination with fast-readout single-photon-counting hybrid pixel array detectors (PILATUS and EIGER) allows for diffraction-based identification of crystal diffraction hotspots and the location and centering of membrane protein microcrystals in the lipid cubic phase (LCP) in in meso in situ serial crystallography plates and silicon nitride supports. Diffraction-based continuous grid scans with both still and oscillation images are supported. Examples that include a grid scan of a large (50 nl) LCP bolus and analysis of the resulting diffraction images are presented. Scanning transmission X-ray microscopy (STXM) complements and benefits from fast grid scanning. STXM has been demonstrated at the SLS beamline X06SA for near-zero-dose detection of protein crystals mounted on different types of sample supports at room and cryogenic temperatures. Flash-cooled crystals in nylon loops were successfully identified in differential and integrated phase images. Crystals of just 10 µm thickness were visible in integrated phase images using data collected with the EIGER detector. STXM offers a truly low-dose method for locating crystals on solid supports prior to diffraction data collection at both synchrotron microfocusing and free-electron laser X-ray facilities. PMID:27275141
NASA Technical Reports Server (NTRS)
Mcmurdie, Lynn; Katsaros, Kristina
1992-01-01
We examine integrated water vapor fields and rain intensity patterns derived from the Scanning Multichannel Microwave Radiometer (SMMR) and Special Sensor Microwave/Imager (SSM/I) for several rapidly deepening and non-rapidly deepening midlatitude cyclones in the North Atlantic. Our goal is to identify features in the satellite data unique to the rapidly deepening cases, and to explore how these data can potentially be used in the analysis and forecasting of these events.
Integrating TV/digital data spectrograph system
NASA Technical Reports Server (NTRS)
Duncan, B. J.; Fay, T. D.; Miller, E. R.; Wamsteker, W.; Brown, R. M.; Neely, P. L.
1975-01-01
A 25-mm vidicon camera was previously modified to allow operation in an integration mode for low-light-level astronomical work. The camera was then mated to a low-dispersion spectrograph for obtaining spectral information in the 400 to 750 nm range. A high speed digital video image system was utilized to digitize the analog video signal, place the information directly into computer-type memory, and record data on digital magnetic tape for permanent storage and subsequent analysis.
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The multispectral imaging system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system) and an embedded computer. The embedded computer has excellent universality and expansibility, and its small volume and weight suit an airborne platform, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, filter wheel and stabilized platform operation, and image and POS data acquisition, and stores the images and data. Peripheral devices can be connected through the embedded computer's ports, so system operation and management of the stored image data are straightforward. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
Comparison and evaluation on image fusion methods for GaoFen-1 imagery
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Zhao, Junqing; Zhang, Ling
2016-10-01
Currently, many research works focus on the best fusion method for satellite images from SPOT, QuickBird, Landsat and so on, but only a few discuss the application to GaoFen-1 satellite images. This paper compares four fusion methods, principal component analysis transform, Brovey transform, hue-saturation-value transform, and Gram-Schmidt transform, from the perspective of preserving the original image's spectral information. The experimental results showed that the images produced by the four fusion methods not only retain the high spatial resolution of the panchromatic band but also carry abundant spectral information. Through comparison and evaluation, the Brovey transform integrates spatial detail well, but its color fidelity is not the best. The brightness and color distortion of the hue-saturation-value transformed image is the largest. The principal component analysis transform performs well in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, renders vegetation edges most distinctly, and yields a fused image with higher sharpness than principal component analysis; it is the most suitable for distinguishing vegetation from non-vegetation areas in GaoFen-1 satellite images. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and the image fusion algorithm.
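Of the four methods compared, the Brovey transform has the simplest closed form: each multispectral band, resampled to the panchromatic grid, is scaled by the ratio of the panchromatic band to the sum of the multispectral bands. A minimal sketch (array shapes and data are hypothetical):

```python
# Sketch of the Brovey transform: each upsampled multispectral band is scaled by
# the ratio of the panchromatic band to the sum of the multispectral bands.
import numpy as np

def brovey_fuse(ms, pan, eps=1e-6):
    """ms: (bands, H, W) multispectral resampled to the pan grid; pan: (H, W)."""
    ratio = pan / (ms.sum(axis=0) + eps)   # eps avoids division by zero
    return ms * ratio                      # broadcasts over the band axis

ms = np.random.rand(3, 256, 256)           # hypothetical R, G, B bands
pan = np.random.rand(256, 256)
fused = brovey_fuse(ms, pan)
```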
Mohammed, Ali I; Gritton, Howard J; Tseng, Hua-an; Bucklin, Mark E; Yao, Zhaojie; Han, Xue
2016-02-08
Advances in neurotechnology have been integral to the investigation of neural circuit function in systems neuroscience. Recent improvements in high-performance fluorescent sensors and scientific CMOS cameras enable optical imaging of neural networks at a much larger scale. While these exciting technical advances demonstrate the potential of the technique, further improvements in data acquisition and analysis, especially those that allow effective processing of increasingly large datasets, would greatly promote the application of optical imaging in systems neuroscience. Here we demonstrate the ability of wide-field imaging to capture the concurrent dynamic activity of hundreds to thousands of neurons over millimeters of brain tissue in behaving mice. This system allows the visualization of morphological details at a higher spatial resolution than has been previously achieved using similar functional imaging modalities. To analyze the expansive data sets, we developed software to facilitate rapid downstream data processing. Using this system, we show that a large fraction of anatomically distinct hippocampal neurons respond to discrete environmental stimuli associated with classical conditioning, and that the observed temporal dynamics of transient calcium signals are sufficient for exploring certain spatiotemporal features of large neural networks.
Computer-Assisted Microscopy in Science Teaching and Research.
ERIC Educational Resources Information Center
Radice, Gary P.
1997-01-01
Describes a technological approach to teaching the relationships between biological form and function. Computer-assisted image analysis was integrated into a microanatomy course. Students spend less time memorizing and more time observing, measuring, and interpreting, building technical and analytical skills. Appendices list hardware and software…
Magnetic force microscopy method and apparatus to detect and image currents in integrated circuits
Campbell, Ann. N.; Anderson, Richard E.; Cole, Jr., Edward I.
1995-01-01
A magnetic force microscopy method and improved magnetic tip for detecting and quantifying internal magnetic fields resulting from currents in integrated circuits. Detection of the current is used for failure analysis, design verification, and model validation. The interaction of the current on the integrated chip with a magnetic field can be detected using a cantilevered magnetic tip. Enhanced sensitivity for both ac and dc current and voltage detection is achieved by ac coupling or a heterodyne technique. The techniques can be used to extract information from analog circuits.
Graph Laplacian Regularization for Image Denoising: Analysis in the Continuous Domain.
Pang, Jiahao; Cheung, Gene
2017-04-01
Inverse imaging problems are inherently underdetermined, and hence it is important to employ appropriate image priors for regularization. One recently popular prior, the graph Laplacian regularizer, assumes that the target pixel patch is smooth with respect to an appropriately chosen graph. However, the mechanisms and implications of imposing the graph Laplacian regularizer on the original inverse problem are not well understood. To address this, we interpret neighborhood graphs of pixel patches as discrete counterparts of Riemannian manifolds and perform analysis in the continuous domain, providing insights into several fundamental aspects of graph Laplacian regularization for image denoising. Specifically, we first show the convergence of the graph Laplacian regularizer to a continuous-domain functional that integrates a norm measured in a locally adaptive metric space. Focusing on image denoising, we derive an optimal metric space assuming non-local self-similarity of pixel patches, leading to an optimal graph Laplacian regularizer for denoising in the discrete domain. We then interpret graph Laplacian regularization as an anisotropic diffusion scheme to explain its behavior during iterations, e.g., its tendency to promote piecewise smooth signals under certain settings. To verify our analysis, an iterative image denoising algorithm is developed. Experimental results show that our algorithm performs competitively with state-of-the-art denoising methods, such as BM3D for natural images, and outperforms them significantly for piecewise smooth images.
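The discrete side of this analysis can be illustrated compactly: with a graph Laplacian L built from pixel-similarity weights, the regularized denoising problem min_x ||x - y||^2 + lambda * x'Lx reduces to the linear system (I + lambda*L)x = y. A small self-contained sketch on a 4-neighbour pixel graph (the weighting kernel and parameter values are illustrative, not the paper's):

```python
# Sketch: denoise a small image by solving (I + lambda*L) x = y, where L is the
# Laplacian of a 4-neighbour pixel graph with intensity-similarity edge weights.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def graph_laplacian_denoise(y, lam=4.0, sigma=0.1):
    h, w = y.shape
    idx = np.arange(h * w).reshape(h, w)
    rows, cols, wts = [], [], []
    for di, dj in [(0, 1), (1, 0)]:                  # right and down neighbours
        a = idx[: h - di, : w - dj].ravel()
        b = idx[di:, dj:].ravel()
        wij = np.exp(-(y.ravel()[a] - y.ravel()[b]) ** 2 / (2 * sigma ** 2))
        rows += [a, b]; cols += [b, a]; wts += [wij, wij]
    W = sp.csr_matrix((np.concatenate(wts), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(h * w, h * w))
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # combinatorial Laplacian
    x = spsolve(sp.eye(h * w) + lam * L, y.ravel())
    return x.reshape(h, w)

noisy = np.random.rand(64, 64)          # hypothetical noisy patch
clean = graph_laplacian_denoise(noisy)
```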
Spraggins, Jeffrey M; Rizzo, David G; Moore, Jessica L; Noto, Michael J; Skaar, Eric P; Caprioli, Richard M
2016-06-01
MALDI imaging mass spectrometry is a powerful analytical tool enabling the visualization of biomolecules in tissue. However, there are unique challenges associated with protein imaging experiments including the need for higher spatial resolution capabilities, improved image acquisition rates, and better molecular specificity. Here we demonstrate the capabilities of ultra-high speed MALDI-TOF and high mass resolution MALDI FTICR IMS platforms as they relate to these challenges. High spatial resolution MALDI-TOF protein images of rat brain tissue and cystic fibrosis lung tissue were acquired at image acquisition rates >25 pixels/s. Structures as small as 50 μm were spatially resolved and proteins associated with host immune response were observed in cystic fibrosis lung tissue. Ultra-high speed MALDI-TOF enables unique applications including megapixel molecular imaging as demonstrated for lipid analysis of cystic fibrosis lung tissue. Additionally, imaging experiments using MALDI FTICR IMS were shown to produce data with high mass accuracy (<5 ppm) and resolving power (∼75 000 at m/z 5000) for proteins up to ∼20 kDa. Analysis of clear cell renal cell carcinoma using MALDI FTICR IMS identified specific proteins localized to healthy tissue regions, within the tumor, and also in areas of increased vascularization around the tumor. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Contextual analysis of immunological response through whole-organ fluorescent imaging.
Woodruff, Matthew C; Herndon, Caroline N; Heesters, B A; Carroll, Michael C
2013-09-01
As fluorescent microscopy has developed, significant insights have been gained into the establishment of immune responses within secondary lymphoid organs, particularly in draining lymph nodes. While established techniques such as confocal imaging and intravital multi-photon microscopy have proven invaluable, they provide limited insight into the architectural and structural context in which these responses occur. To interrogate the role of the lymph node environment in immune response effectively, a new set of imaging tools that take the broader architectural context into account must be brought to bear on emerging immunological questions. Using two different methods of whole-organ imaging, optical clearing and three-dimensional reconstruction of serially sectioned lymph nodes, fluorescent representations of whole lymph nodes can be acquired at cellular resolution. Using freely available post-processing tools, images of unlimited size and depth can be assembled into cohesive, contextual snapshots of immunological response. Through robust iterative analysis techniques, these highly complex three-dimensional images can be reduced to sortable object data sets, which can then be used to interrogate questions at the cellular level within the broader context of lymph node biology. By combining existing imaging technology with complex methods of sample preparation and capture, we have developed efficient systems for contextualizing immunological phenomena within lymphatic architecture. In combination with robust approaches to image analysis, these advances provide a path to integrating scientific understanding of basic lymphatic biology into the complex nature of immunological response.
Integrating Robotic Observatories into Astronomy Labs
NASA Astrophysics Data System (ADS)
Ruch, Gerald T.
2015-01-01
The University of St. Thomas (UST) and a consortium of five local schools are using the UST Robotic Observatory, which houses a 17-inch telescope, to develop labs and image processing tools that allow easy integration of observational labs into existing introductory astronomy curricula. Our lab design removes the burden of equipment ownership by sharing access to a common resource, and removes the burden of data processing by automating processing tasks that are not relevant to the learning objectives. Each laboratory exercise takes place over two lab periods. During period one, students design and submit observation requests via the lab website. Between periods, the telescope automatically acquires the data and our image processing pipeline produces data ready for student analysis. During period two, the students retrieve their data from the website and perform the analysis. The first lab, 'Weighing Jupiter,' was successfully implemented at UST and several of our partner schools. We are currently developing a second lab to measure the age of and distance to a globular cluster.
Zhang, Douglas; Lee, Junmin; Kilian, Kristopher A
2017-10-01
Cells in tissue receive a host of soluble and insoluble signals in a context-dependent fashion, where integration of these cues through a complex network of signal transduction cascades will define a particular outcome. Biomaterials scientists and engineers are tasked with designing materials that can at least partially recreate this complex signaling milieu towards new materials for biomedical applications. In this progress report, recent advances in high throughput techniques and high content imaging approaches that are facilitating the discovery of efficacious biomaterials are described. From microarrays of synthetic polymers, peptides and full-length proteins, to designer cell culture systems that present multiple biophysical and biochemical cues in tandem, it is discussed how the integration of combinatorics with high content imaging and analysis is essential to extracting biologically meaningful information from large scale cellular screens to inform the design of next generation biomaterials. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
Davis, Frank W.; Quattrochi, Dale A.; Ridd, Merrill K.; Lam, Nina S.-N.; Walsh, Stephen J.
1991-01-01
This paper discusses some basic scientific issues and research needs in the joint processing of remotely sensed and GIS data for environmental analysis. Two general topics are treated in detail: (1) scale dependence of geographic data and the analysis of multiscale remotely sensed and GIS data, and (2) data transformations and information flow during data processing. The discussion of scale dependence focuses on the theory and applications of spatial autocorrelation, geostatistics, and fractals for characterizing and modeling spatial variation. Data transformations during processing are described within the larger framework of geographical analysis, encompassing sampling, cartography, remote sensing, and GIS. Development of better user interfaces between image processing, GIS, database management, and statistical software is needed to expedite research on these and other impediments to integrated analysis of remotely sensed and GIS data.
NASA Astrophysics Data System (ADS)
Wang, Ximing; Edwardson, Matthew; Dromerick, Alexander; Winstein, Carolee; Wang, Jing; Liu, Brent
2015-03-01
Previously, we presented an Interdisciplinary Comprehensive Arm Rehabilitation Evaluation (ICARE) imaging informatics system that supports a large-scale phase III stroke rehabilitation trial. The ePR system is capable of displaying anonymized patient imaging studies and reports, and the system is accessible to multiple clinical trial sites and users across the United States via the web. However, the prior multicenter stroke rehabilitation trials lack any significant neuroimaging analysis infrastructure. In stroke related clinical trials, identification of the stroke lesion characteristics can be meaningful as recent research shows that lesion characteristics are related to stroke scale and functional recovery after stroke. To facilitate the stroke clinical trials, we hope to gain insight into specific lesion characteristics, such as vascular territory, for patients enrolled into large stroke rehabilitation trials. To enhance the system's capability for data analysis and data reporting, we have integrated new features with the system: a digital brain template display, a lesion quantification tool and a digital case report form. The digital brain templates are compiled from published vascular territory templates at each of 5 angles of incidence. These templates were updated to include territories in the brainstem using a vascular territory atlas and the Medical Image Processing, Analysis and Visualization (MIPAV) tool. The digital templates are displayed for side-by-side comparisons and transparent template overlay onto patients' images in the image viewer. The lesion quantification tool quantifies planimetric lesion area from user-defined contour. The digital case report form stores user input into a database, then displays contents in the interface to allow for reviewing, editing, and new inputs. In sum, the newly integrated system features provide the user with readily-accessible web-based tools to identify the vascular territory involved, estimate lesion area, and store these results in a web-based digital format.
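The planimetric lesion-area computation from a user-defined contour reduces, in the simplest case, to the shoelace formula scaled by the in-plane pixel spacing. A sketch under that assumption (the function name and spacing values are hypothetical, not part of the described system):

```python
# Sketch: planimetric area of a user-drawn lesion contour via the shoelace formula,
# scaled by hypothetical in-plane pixel spacing values.
import numpy as np

def contour_area_mm2(xs, ys, px_mm=0.9, py_mm=0.9):
    xs, ys = np.asarray(xs), np.asarray(ys)
    pix_area = 0.5 * abs(np.dot(xs, np.roll(ys, 1)) - np.dot(ys, np.roll(xs, 1)))
    return pix_area * px_mm * py_mm

area = contour_area_mm2([10, 60, 60, 10], [10, 10, 40, 40])  # 50x30 px rectangle
```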
Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho
2007-10-01
Integral floating display is a recently proposed three-dimensional (3D) display method that provides a dynamic 3D image in the vicinity of the observer. It has a viewing window, and correct 3D images can be observed only through it. However, the positional difference between the viewing window and the floating image limits the viewing zone of the integral floating system. In this paper, we provide the principle and experimental results of adjusting the location of the viewing window of the integral floating display system by modifying the elemental image region for integral imaging. We explain the characteristics of the viewing window and propose how to move it to maximize the viewing zone.
Despeckle filtering software toolbox for ultrasound imaging of the common carotid artery.
Loizou, Christos P; Theofanous, Charoula; Pantziaris, Marios; Kasparis, Takis
2014-04-01
Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression through time. It is also used for border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and for the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are very important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of ultrasound B-mode imaging. Noise reduction is therefore essential for improving the visual observation quality, or as a pre-processing step for further automated analysis such as image segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. To facilitate this pre-processing step, we have developed in MATLAB(®) a unified toolbox that integrates image despeckle filtering (IDF), texture analysis and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image intensity normalization, 10 different despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), 65 texture features, 15 quantitative image quality metrics and objective image quality evaluation. The software is publicly available in an executable form, which can be downloaded from http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. It was observed that the despeckle filters DsFlsmv and DsFhmedian improved image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
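The toolbox's DsFlsmv filter belongs to the local-statistics (mean-variance) family; the following is a generic Lee-type sketch of that family, not the toolbox's exact implementation:

```python
# Generic local-statistics (Lee-type) despeckle sketch: the same mean/variance
# principle as DsFlsmv, but not the toolbox's exact implementation.
import numpy as np
from scipy.ndimage import uniform_filter

def lee_despeckle(img, size=7):
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = np.maximum(sq_mean - mean ** 2, 0)
    noise_var = var.mean()                      # crude global speckle-noise estimate
    gain = var / (var + noise_var + 1e-12)      # near 0 in flat areas, near 1 on edges
    return mean + gain * (img - mean)

ultrasound = np.random.gamma(4.0, 0.25, (128, 128))   # hypothetical speckled image
filtered = lee_despeckle(ultrasound)
```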
NASA Astrophysics Data System (ADS)
Rinehart, Matthew T.; LaCroix, Jeffrey; Henderson, Marcus; Katz, David; Wax, Adam
2011-03-01
The effectiveness of microbicidal gels, topical products developed to prevent infection by sexually transmitted diseases including HIV/AIDS, is governed by the extent of gel coverage, the pharmacokinetics of active pharmaceutical ingredients (APIs), and the integrity of the vaginal epithelium. While biopsies provide localized information about drug delivery and tissue structure, in vivo measurements are preferable in providing objective data on API and gel coating distribution as well as tissue integrity. We are developing a system combining confocal fluorescence microscopy with optical coherence tomography (OCT) to simultaneously measure local concentrations and diffusion coefficients of APIs during transport from microbicidal gels into tissue, while assessing tissue integrity. The confocal module acquires 2-D images of fluorescent APIs multiple times per second, allowing analysis of lateral diffusion kinetics. The custom Fourier-domain OCT module has a maximum A-scan rate of 54 kHz and provides depth-resolved tissue integrity information coregistered with the confocal fluorescence measurements. The combined system is validated by imaging phantoms with a surrogate fluorophore. Time-resolved API concentration measured at fixed depths is analyzed for diffusion kinetics. This multimodal system will eventually be implemented in vivo for objective evaluation of microbicide product performance.
Multimodal biophotonic workstation for live cell analysis.
Esseling, Michael; Kemper, Björn; Antkowiak, Maciej; Stevenson, David J; Chaudet, Lionel; Neil, Mark A A; French, Paul W; von Bally, Gert; Dholakia, Kishan; Denz, Cornelia
2012-01-01
A reliable description and quantification of the complex physiology and reactions of living cells requires a multimodal analysis with various measurement techniques. We have investigated the integration of different techniques into a biophotonic workstation that can provide biological researchers with these capabilities. The combination of a micromanipulation tool with three different imaging principles is accomplished in a single inverted microscope which makes the results from all the techniques directly comparable. Chinese Hamster Ovary (CHO) cells were manipulated by optical tweezers while the feedback was directly analyzed by fluorescence lifetime imaging, digital holographic microscopy and dynamic phase-contrast microscopy. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Technical Reports Server (NTRS)
1999-01-01
Through an initial SBIR contract with Langley Research Center, Stress Photonics, Inc. was able to successfully market its thermal strain measurement device, known as the Delta Therm 1000. The company furthered its research on structural integrity analysis by signing another contract with Langley, this time an STTR contract, to develop its polariscope stress technology. Its commercial polariscope, the GFP 1000, involves a single rotating optical element and a digital camera for full-field image acquisition. The digital camera allows data to be acquired automatically, quickly, and efficiently. Software analysis presents the data in an easy-to-interpret image format, depicting the magnitude of the shear strains and the directions of the principal strains.
Advances in three-dimensional integral imaging: sensing, display, and applications [Invited].
Xiao, Xiao; Javidi, Bahram; Martinez-Corral, Manuel; Stern, Adrian
2013-02-01
Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
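One of the standard reconstruction methods overviewed in such reviews is computational shift-and-sum refocusing, in which each elemental image is back-projected with a depth-dependent shift and the overlaps are averaged. A minimal sketch (the lenslet pitch, depth parameterization, and data are illustrative assumptions):

```python
# Sketch of computational integral-imaging reconstruction: elemental images are
# back-projected with a depth-dependent shift and averaged (shift-and-sum refocusing).
import numpy as np

def reconstruct_plane(elems, pitch_px, depth_ratio):
    """elems: (K, K, h, w) array of elemental images; depth_ratio sets the shift."""
    K, _, h, w = elems.shape
    shift = int(round(pitch_px / depth_ratio))   # per-lenslet disparity at this depth
    H, W = h + (K - 1) * shift, w + (K - 1) * shift
    acc = np.zeros((H, W)); cnt = np.zeros((H, W))
    for i in range(K):
        for j in range(K):
            acc[i*shift:i*shift+h, j*shift:j*shift+w] += elems[i, j]
            cnt[i*shift:i*shift+h, j*shift:j*shift+w] += 1
    return acc / np.maximum(cnt, 1)

elems = np.random.rand(5, 5, 32, 32)             # hypothetical 5x5 elemental images
plane = reconstruct_plane(elems, pitch_px=8, depth_ratio=2.0)
```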
Heiland, Max; Pohlenz, Philipp; Blessmann, Marco; Habermann, Christian R; Oesterhelweg, Lars; Begemann, Philipp C; Schmidgunst, Christian; Blake, Felix A S; Püschel, Klaus; Schmelzle, Rainer; Schulze, Dirk
2007-12-01
The aim of this study was to evaluate soft tissue image quality of a mobile cone-beam computed tomography (CBCT) scanner with an integrated flat-panel detector. Eight fresh human cadavers were used in this study. For evaluation of soft tissue visualization, CBCT data sets and corresponding computed tomography (CT) and magnetic resonance imaging (MRI) data sets were acquired. Evaluation was performed with the help of 10 defined cervical anatomical structures. The statistical analysis of the scoring results of 3 examiners revealed the CBCT images to be of inferior quality regarding the visualization of most of the predefined structures. Visualization without a significant difference was found regarding the demarcation of the vertebral bodies and the pyramidal cartilages, the arteriosclerosis of the carotids (compared with CT), and the laryngeal skeleton (compared with MRI). Regarding arteriosclerosis of the carotids compared with MRI, CBCT proved to be superior. The integration of a flat-panel detector improves soft tissue visualization using a mobile CBCT scanner.
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.
Carmichael, Owen; Sakhanenko, Lyudmila
2015-05-15
We develop statistical methodology for HARDI, a popular brain imaging technique, based on the high-order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves, or fibers, which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense as it blends linear algebra concepts from high-order tensors with asymptotic statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3] to several fibers per voxel. It is also pioneering statistical work on tractography from HARDI data. It avoids the typical limitations of deterministic tractography methods while delivering the same information as probabilistic tractography methods. Our method is computationally cheap, and it provides a well-founded mathematical and statistical framework in which diverse functionals on fibers, directions and tensors can be studied in a systematic and rigorous way.
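Tracing an integral curve through an estimated direction field is, numerically, an ordinary differential equation problem; a fixed-step Runge-Kutta tracer over a synthetic field conveys the idea. This is a generic sketch, not the authors' estimator, which also propagates uncertainty into confidence ellipsoids:

```python
# Sketch: trace an integral curve (fiber) through a principal-direction field with
# fixed-step RK4; the direction field here is synthetic, not HARDI-derived.
import numpy as np

def trace_fiber(seed, direction, n_steps=200, h=0.5):
    """direction(p) -> unit 3-vector at point p; returns the traced polyline."""
    pts = [np.asarray(seed, float)]
    for _ in range(n_steps):
        p = pts[-1]
        k1 = direction(p)
        k2 = direction(p + 0.5 * h * k1)
        k3 = direction(p + 0.5 * h * k2)
        k4 = direction(p + h * k3)
        pts.append(p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(pts)

# Hypothetical smooth direction field: gently curving around the z-axis.
field = lambda p: np.array([-p[1], p[0], 1.0]) / np.linalg.norm([-p[1], p[0], 1.0])
fiber = trace_fiber([1.0, 0.0, 0.0], field)
```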
The Role of Laser Speckle Imaging in Port-Wine Stain Research: Recent Advances and Opportunities
Choi, Bernard; Tan, Wenbin; Jia, Wangcun; White, Sean M.; Moy, Wesley J.; Yang, Bruce Y.; Zhu, Jiang; Chen, Zhongping; Kelly, Kristen M.; Nelson, J. Stuart
2016-01-01
Here, we review our current knowledge on the etiology and treatment of port-wine stain (PWS) birthmarks. Current treatment options have significant limitations in terms of efficacy. With the combination of 1) a suitable preclinical microvascular model, 2) laser speckle imaging (LSI) to evaluate blood-flow dynamics, and 3) a longitudinal experimental design, rapid preclinical assessment of new phototherapies can be translated from the lab to the clinic. The combination of photodynamic therapy (PDT) and pulsed-dye laser (PDL) irradiation achieves a synergistic effect that reduces the radiant exposures the individual phototherapies require to achieve persistent vascular shutdown. PDL combined with anti-angiogenic agents is a promising strategy to achieve persistent vascular shutdown by preventing reformation and reperfusion of photocoagulated blood vessels. Integration of LSI into the clinical workflow may lead to surgical image guidance that maximizes acute photocoagulation and is expected to improve PWS therapeutic outcomes. Continued integration of noninvasive optical imaging technologies and biochemical analysis is collectively expected to lead to more robust treatment strategies. PMID:27013846
Knauer, Uwe; Matros, Andrea; Petrovic, Tijana; Zanker, Timothy; Scott, Eileen S; Seiffert, Udo
2017-01-01
Hyperspectral imaging is an emerging means of assessing plant vitality, stress parameters, nutrition status, and diseases. Extraction of target values from the high-dimensional datasets relies on pixel-wise processing of the full spectral information, on appropriate selection of individual bands, or on calculation of spectral indices. Limitations of such approaches are reduced classification accuracy, reduced robustness due to spatial variation of the spectral information across the surface of the measured objects, and a loss of information intrinsic to band selection and the use of spectral indices. In this paper we present an improved spatial-spectral segmentation approach for the analysis of hyperspectral imaging data and its application to the prediction of powdery mildew infection levels (disease severity) of intact Chardonnay grape bunches shortly before veraison. Instead of calculating texture features (spatial features) independently for the huge number of spectral bands, dimensionality reduction by means of Linear Discriminant Analysis (LDA) was applied first to derive a few descriptive image bands. Subsequent classification was based on modified Random Forest classifiers and selective extraction of texture parameters from the integral image representation of the image bands generated. Dimensionality reduction, integral images, and the selective feature extraction led to improved classification accuracies of up to [Formula: see text] for detached berries used as a reference sample (training dataset). Our approach was validated by predicting infection levels for a sample of 30 intact bunches. Classification accuracy improved with the number of decision trees of the Random Forest classifier. These results corresponded with qPCR results. An accuracy of 0.87 was achieved in classification of healthy, infected, and severely diseased bunches. However, discrimination between visually healthy and infected bunches proved to be challenging for a few samples, perhaps due to colonized berries or sparse mycelia hidden within the bunch or airborne conidia on the berries that were detected by qPCR. An advanced approach to hyperspectral image classification based on combined spatial and spectral image features, potentially applicable to many available hyperspectral sensor technologies, has been developed and validated to improve the detection of powdery mildew infection levels of Chardonnay grape bunches. The spatial-spectral approach especially improved the detection of light infection levels compared with pixel-wise spectral data analysis. This approach is expected to improve the speed and accuracy of disease detection once the thresholds for fungal biomass detected by hyperspectral imaging are established; it can also facilitate monitoring in plant phenotyping of grapevine and additional crops.
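The first two stages of the pipeline, LDA to compress the spectral bands into a few descriptive image bands followed by Random Forest classification, can be sketched as follows (the data, class labels, and parameter choices are hypothetical, and the texture features from integral images are omitted for brevity):

```python
# Sketch of the spatial-spectral pipeline's first two stages: LDA compresses the
# spectral bands to a few discriminative bands, then a Random Forest classifies.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_pixels, n_bands = 5000, 200                    # hypothetical hyperspectral cube
X = rng.random((n_pixels, n_bands))
y = rng.integers(0, 3, n_pixels)                 # healthy / infected / severe

lda = LinearDiscriminantAnalysis(n_components=2).fit(X, y)   # <= n_classes - 1 bands
X_reduced = lda.transform(X)                     # the few "descriptive image bands"
# Texture features from integral images of X_reduced would be appended here.
rf = RandomForestClassifier(n_estimators=200).fit(X_reduced, y)
```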
FISSA: A neuropil decontamination toolbox for calcium imaging signals.
Keemink, Sander W; Lowe, Scott C; Pakan, Janelle M P; Dylda, Evelyn; van Rossum, Mark C W; Rochefort, Nathalie L
2018-02-22
In vivo calcium imaging has become a method of choice for imaging neuronal population activity throughout the nervous system. These experiments generate large sequences of images. Their analysis is computationally intensive and typically involves motion correction, image segmentation into regions of interest (ROIs), and extraction of fluorescence traces from each ROI. Out-of-focus fluorescence from surrounding neuropil and other cells can strongly contaminate the signal assigned to a given ROI. In this study, we introduce the FISSA toolbox (Fast Image Signal Separation Analysis) for neuropil decontamination. Given pre-defined ROIs, the FISSA toolbox automatically extracts the surrounding local neuropil and performs blind-source separation with non-negative matrix factorization. Using both simulated and in vivo data, we show that this toolbox performs similarly to or better than existing published methods. FISSA requires little RAM and allows for fast processing of large datasets even on a standard laptop. The FISSA toolbox is available in Python, with an option for MATLAB-format outputs, and can easily be integrated into existing workflows. It is available from GitHub and the standard Python repositories.
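The separation step rests on non-negative matrix factorization of the ROI trace together with traces from surrounding neuropil sub-regions. A generic sketch of that principle, not FISSA's actual API:

```python
# Generic sketch of NMF-based source separation for an ROI plus neuropil shells;
# this illustrates the principle only, not FISSA's API.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(2)
T = 1000
cell = np.abs(rng.standard_normal(T)); neuropil = np.abs(rng.standard_normal(T))
# Rows: mean traces of the ROI and 4 surrounding neuropil sub-regions (mixtures).
mix = np.array([[1.0, 0.4], [0.1, 1.0], [0.2, 0.9], [0.1, 1.1], [0.3, 0.8]])
traces = mix @ np.vstack([cell, neuropil])

model = NMF(n_components=2, init="nndsvda", max_iter=500)
W = model.fit_transform(traces)          # mixing weights per sub-region
S = model.components_                    # separated source time courses
# The source weighted most strongly in the ROI row is taken as the cell signal.
cell_est = S[np.argmax(W[0])]
```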
An improved level set method for brain MR images segmentation and bias correction.
Chen, Yunjie; Zhang, Jianwei; Macione, Jim
2009-10-01
Intensity inhomogeneities cause considerable difficulty in the quantitative analysis of magnetic resonance (MR) images. Thus, bias field estimation is a necessary step before quantitative analysis of MR data can be undertaken. This paper presents a variational level set approach to bias correction and segmentation for images with intensity inhomogeneities. Our method is based on the observation that intensities in a relatively small local region are separable, despite the inseparability of the intensities in the whole image caused by the overall intensity inhomogeneity. We first define a localized K-means-type clustering objective function for image intensities in a neighborhood around each point. The cluster centers in this objective function carry a multiplicative factor that estimates the bias within the neighborhood. The objective function is then integrated over the entire domain to define the data term within the level set framework. Our method is able to capture bias fields of quite general profiles. Moreover, it is robust to initialization and thereby allows fully automated application. The proposed method has been used on images of various modalities with promising results.
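In notation consistent with the abstract, the localized clustering criterion plausibly takes the following form (a sketch of the described construction, not the paper's exact equation):

```latex
% Localized K-means-type clustering energy with a multiplicative bias estimate b(y);
% a sketch consistent with the abstract, not the paper's exact notation.
\[
\mathcal{E}(y) \;=\; \sum_{i=1}^{N} \int_{\mathcal{O}_y}
K(y - x)\,\bigl| I(x) - b(y)\, c_i \bigr|^{2}\, u_i(x)\, dx,
\qquad
\mathcal{E} \;=\; \int_{\Omega} \mathcal{E}(y)\, dy,
\]
% where K is a window centered at y, c_i are the cluster centers, u_i are membership
% functions, and integrating over the image domain Omega yields the level-set data term.
```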
Fully automated three-dimensional microscopy system
NASA Astrophysics Data System (ADS)
Kerschmann, Russell L.
2000-04-01
Tissue-scale structures such as vessel networks are imaged at micron resolution with the Virtual Tissue System (VT System). VT System imaging of cubic millimeters of tissue and other material extends the capabilities of conventional volumetric techniques such as confocal microscopy, and allows for the first time the integrated 2D and 3D analysis of important tissue structural relationships. The VT System eliminates the need for glass slide-mounted tissue sections and instead captures images directly from the surface of a block containing a sample. Tissues are stained en bloc with fluorochrome compounds, embedded in an optically conditioned polymer that suppresses image signals from deep within the block, and serially sectioned for imaging. Thousands of fully registered 2D images are automatically captured digitally to completely convert tissue samples into blocks of high-resolution information. The resulting multi-gigabyte data sets constitute the raw material for precision visualization and analysis. Cellular function may be seen in a larger anatomical context. VT System technology makes tissue metrics, accurate cell enumeration, and cell cycle analyses possible while preserving the full histologic setting.
NASA Astrophysics Data System (ADS)
Matula, Petr; Kumar, Anil; Wörz, Ilka; Harder, Nathalie; Erfle, Holger; Bartenschlager, Ralf; Eils, Roland; Rohr, Karl
2008-03-01
We present an image analysis approach as part of a high-throughput microscopy siRNA-based screening system using cell arrays for the identification of cellular genes involved in hepatitis C and dengue virus replication. Our approach comprises: cell nucleus segmentation, quantification of virus replication level in the neighborhood of segmented cell nuclei, localization of regions with transfected cells, cell classification by infection status, and quality assessment of an experiment and single images. In particular, we propose a novel approach for the localization of regions of transfected cells within cell array images, which combines model-based circle fitting and grid fitting. By this scheme we integrate information from single cell array images and knowledge from the complete cell arrays. The approach is fully automatic and has been successfully applied to a large number of cell array images from screening experiments. The experimental results show a good agreement with the expected behaviour of positive as well as negative controls and encourage the application to screens from further high-throughput experiments.
Computer program documentation for the patch subsampling processor
NASA Technical Reports Server (NTRS)
Nieves, M. J.; Obrien, S. O.; Oney, J. K. (Principal Investigator)
1981-01-01
The programs presented are intended to provide a way to extract a sample from a full-frame scene and summarize it in a useful way. The sample in each case was chosen to fill a 512-by-512 pixel (sample-by-line) image since this is the largest image that can be displayed on the Integrated Multivariant Data Analysis and Classification System. This sample size provides one megabyte of data for manipulation and storage and contains about 3% of the full-frame data. A patch image processor computes means for 256 32-by-32 pixel squares which constitute the 512-by-512 pixel image. Thus, 256 measurements are available for 8 vegetation indexes over a 100-mile square.
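The core reduction, means over 256 non-overlapping 32-by-32 squares tiling the 512-by-512 sample, is a single reshape-and-average in modern terms (a sketch with hypothetical data):

```python
# Sketch of the patch processor's core reduction: means of 256 non-overlapping
# 32-by-32 squares tiling a 512-by-512 sample.
import numpy as np

sample = np.random.rand(512, 512)                               # hypothetical subsample
patch_means = sample.reshape(16, 32, 16, 32).mean(axis=(1, 3))  # shape (16, 16)
assert patch_means.size == 256                                  # 256 measurements per band
```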
Object-oriented analysis and design of an ECG storage and retrieval system integrated with an HIS.
Wang, C; Ohe, K; Sakurai, T; Nagase, T; Kaihara, S
1996-03-01
For a hospital information system, object-oriented methodology plays an increasingly important role, especially for the management of digitized data, e.g., electrocardiogram, electroencephalogram, electromyogram, spirogram, X-ray, CT and histopathological images, which are not yet computerized in most hospitals. As a first step in an object-oriented approach to hospital information management and the storage of medical data in an object-oriented database, we connected electrocardiographs to a hospital network and integrated an ECG storage and retrieval system with a hospital information system. In this paper, the object-oriented analysis and design of the ECG storage and retrieval system are reported.
NASA Astrophysics Data System (ADS)
Chen, H.; Ye, Sh.; Nedzvedz, O. V.; Ablameyko, S. V.
2018-03-01
Study of crowd movement is an important practical problem, and its solution is used in video surveillance systems for preventing various emergency situations. In the general case, a group of fast-moving people is of more interest than a group of stationary or slow-moving people. We propose a new method for crowd movement analysis using a video sequence, based on integral optical flow. We have determined several characteristics of a moving crowd such as density, speed, direction of motion, symmetry, and in/out index. These characteristics are used for further analysis of a video scene.
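An integral image of the per-pixel flow magnitude lets the total motion inside any rectangular region be read off with four lookups, which is what makes region-wise crowd characteristics cheap to compute. A sketch (the region coordinates and data are hypothetical):

```python
# Sketch: an integral image of per-pixel flow magnitude lets any rectangular
# region's total motion be read off with four lookups.
import numpy as np

def integral_image(a):
    return np.pad(a, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def region_sum(ii, r0, c0, r1, c1):        # rows r0..r1-1, cols c0..c1-1
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

flow_mag = np.random.rand(240, 320)        # hypothetical |optical flow| per pixel
ii = integral_image(flow_mag)
density = region_sum(ii, 60, 80, 120, 160) / (60 * 80)   # mean motion in one cell
```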
Li, Jiong; Wang, Xuandong; Zheng, Dongye; Lin, Xinyi; Wei, Zuwu; Zhang, Da; Li, Zhuanfang; Zhang, Yun; Wu, Ming; Liu, Xiaolong
2018-05-22
Theranostic nanoprobes integrating dual-modal imaging with therapeutic functions such as photodynamic therapy (PDT) have exhibited significant potency in cancer treatment due to their high imaging accuracy and non-invasive advantages for cancer elimination. However, the biocompatibility and efficient tumor accumulation of these nanoprobes are still unsatisfactory for clinical application. In this study, a photosensitizer-loaded magnetic nanobead further coated with a layer of cancer cell membrane (SSAP-Ce6@CCM) was designed to improve biocompatibility and cellular uptake and ultimately achieve enhanced MR/NIR fluorescence imaging and PDT efficacy. Compared with similar nanobeads without CCM coating, SSAP-Ce6@CCM showed significantly enhanced cellular uptake, as evidenced by Prussian blue staining, confocal laser scanning microscopy (CLSM) and flow cytometric analysis. Consequently, SSAP-Ce6@CCM displayed a more distinct MR/NIR imaging ability and more obvious photo-cytotoxicity towards cancer cells under 670 nm laser irradiation. Furthermore, the enhanced PDT effect conferred by the cancer cell membrane coating was demonstrated in SMMC-7721 tumor-bearing mice through observation of tumor growth and pathological examination of tumor tissue. Therefore, this CCM-disguised nanobead integrating MR/NIR fluorescence dual-modal imaging and photodynamic therapy might be a promising theranostic platform for tumor treatment.
Hwang, Wonjun; Wang, Haitao; Kim, Hyunwoo; Kee, Seok-Cheol; Kim, Junmo
2011-04-01
The authors present a robust face recognition system for large-scale data sets taken under uncontrolled illumination variations. The proposed face recognition system consists of a novel illumination-insensitive preprocessing method, hybrid Fourier-based facial feature extraction, and a score fusion scheme. First, in the preprocessing stage, a face image is transformed into an illumination-insensitive image, called an "integral normalized gradient image," by normalizing and integrating the smoothed gradients of the facial image. Then, for feature extraction by complementary classifiers, multiple face models based on hybrid Fourier features are applied. The hybrid Fourier features are extracted from different Fourier domains in different frequency bandwidths, and each feature is individually classified by linear discriminant analysis. In addition, multiple face models are generated from plural normalized face images that have different eye distances. Finally, to combine scores from the multiple complementary classifiers, a log-likelihood-ratio-based score fusion scheme is applied. The proposed system is evaluated using the Face Recognition Grand Challenge (FRGC) experimental protocols; FRGC is a large publicly available data set. Experimental results on the FRGC version 2.0 data sets show that the proposed method achieves an average verification rate of 81.49% on 2-D face images under various environmental variations such as illumination changes, expression changes, and elapsed time.
Design of a front-end integrated circuit for 3D acoustic imaging using 2D CMUT arrays.
Ciçek, Ihsan; Bozkurt, Ayhan; Karaman, Mustafa
2005-12-01
Integration of front-end electronics with 2D capacitive micromachined ultrasonic transducer (CMUT) arrays has been a challenging issue due to the small element size and large channel count. We present the design and verification of a front-end drive-readout integrated circuit for 3D ultrasonic imaging using 2D CMUT arrays. The circuit cell dedicated to a single CMUT array element consists of a high-voltage pulser and a low-noise readout amplifier. To analyze the circuit cell together with the CMUT element, we developed an electrical CMUT model with parameters derived through finite element analysis, and performed both pre- and post-layout verification. An experimental chip consisting of a 4 × 4 array of the designed circuit cells, each cell occupying a 200 × 200 µm² area, was formed for the initial test studies and scheduled for fabrication in 0.8 µm, 50 V CMOS technology. The designed circuit is suitable for integration with CMUT arrays through flip-chip bonding and the CMUT-on-CMOS process.
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 µm p-well CMOS process, and consists of a 128 × 128 array of 40 µm × 40 µm pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
EEG and MEG data analysis in SPM8.
Litvak, Vladimir; Mattout, Jérémie; Kiebel, Stefan; Phillips, Christophe; Henson, Richard; Kilner, James; Barnes, Gareth; Oostenveld, Robert; Daunizeau, Jean; Flandin, Guillaume; Penny, Will; Friston, Karl
2011-01-01
SPM is free and open-source software written in MATLAB (The MathWorks, Inc.). In addition to standard M/EEG preprocessing, we presently offer three main analysis tools: (i) statistical analysis of scalp maps, time-frequency images, and volumetric 3D source reconstruction images based on the general linear model, with correction for multiple comparisons using random field theory; (ii) Bayesian M/EEG source reconstruction, including support for group studies, simultaneous EEG and MEG, and fMRI priors; (iii) dynamic causal modelling (DCM), an approach combining neural modelling with data analysis, for which there are several variants dealing with evoked responses, steady-state responses (power spectra and cross-spectra), induced responses, and phase coupling. SPM8 is integrated with the FieldTrip toolbox, making it possible for users to combine a variety of standard analysis methods with new schemes implemented in SPM and to build custom analysis tools using powerful graphical user interface (GUI) and batching tools.
Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A
2016-06-01
To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and to demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by post-reconstruction image-domain GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional post-reconstruction correction. Phantom and in vivo results demonstrate that integrated GNL correction reduces the image blurring introduced by conventional GNL correction, while still correcting GNL-induced coarse-scale geometric distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional post-reconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.
Nho, Kwangsik; Horgusluoglu, Emrin; Kim, Sungeun; Risacher, Shannon L; Kim, Dokyoon; Foroud, Tatiana; Aisen, Paul S; Petersen, Ronald C; Jack, Clifford R; Shaw, Leslie M; Trojanowski, John Q; Weiner, Michael W; Green, Robert C; Toga, Arthur W; Saykin, Andrew J
2016-08-12
Pathogenic mutations in PSEN1 are known to cause familial early-onset Alzheimer's disease (EOAD), but common variants in PSEN1 have not been found to strongly influence late-onset AD (LOAD). The association of rare variants in PSEN1 with LOAD-related endophenotypes has received little attention. In this study, we performed a rare variant association analysis of PSEN1 with quantitative biomarkers of LOAD using whole genome sequencing (WGS) by integrating bioinformatics and imaging informatics. A WGS data set (N = 815) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort was used in this analysis. 757 non-Hispanic Caucasian participants underwent WGS from a blood sample and high-resolution T1-weighted structural MRI at baseline. An automated MRI analysis technique (FreeSurfer) was used to measure cortical thickness and volume of neuroanatomical structures. We assessed imaging and cerebrospinal fluid (CSF) biomarkers as LOAD-related quantitative endophenotypes. Single-variant analyses were performed using PLINK and gene-based analyses of rare variants were performed using the optimal Sequence Kernel Association Test (SKAT-O). A total of 839 rare variants (MAF < 1/√(2N) = 0.0257) were found within a region of ±10 kb from PSEN1. Among them, six exonic (three non-synonymous) variants were observed. A single-variant association analysis showed that the PSEN1 p.E318G variant increases the risk of LOAD only in participants carrying the APOE ε4 allele, where individuals carrying the minor allele of this PSEN1 risk variant have lower CSF Aβ1-42 and higher CSF tau. A gene-based analysis resulted in a significant association of rare but not common (MAF ≥ 0.0257) PSEN1 variants with bilateral entorhinal cortical thickness. This is the first study to show that PSEN1 rare variants collectively show a significant association with brain atrophy in regions preferentially affected by LOAD, providing further support for a role of PSEN1 in LOAD. The PSEN1 p.E318G variant increases the risk of LOAD only in APOE ε4 carriers. Integrating bioinformatics with imaging informatics for identification of rare variants could help explain the missing heritability in LOAD.
Securing Digital Images Integrity using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Hajji, Tarik; Itahriouan, Zakaria; Ouazzani Jamil, Mohammed
2018-05-01
Digital image signature is a technique used to protect image integrity. The application of this technique can serve several areas of imaging applied to smart cities. The objective of this work is to propose two methods to protect digital image integrity. We describe two approaches that use artificial neural networks (ANN) to digitally sign an image: the first is “Direct Signature without learning” and the second is “Direct Signature with learning”. This paper presents the theory behind the proposed approaches and an experimental study to test their effectiveness.
Identification of Alfalfa Leaf Diseases Using Image Recognition Technology
Qin, Feng; Liu, Dongxia; Sun, Bingda; Ruan, Liu; Ma, Zhanhong; Wang, Haiguang
2016-01-01
Common leaf spot (caused by Pseudopeziza medicaginis), rust (caused by Uromyces striatus), Leptosphaerulina leaf spot (caused by Leptosphaerulina briosiana) and Cercospora leaf spot (caused by Cercospora medicaginis) are the four common types of alfalfa leaf diseases. Timely and accurate diagnoses of these diseases are critical for disease management, alfalfa quality control and the healthy development of the alfalfa industry. In this study, the identification and diagnosis of the four types of alfalfa leaf diseases were investigated using pattern recognition algorithms based on image-processing technology. A sub-image with one or multiple typical lesions was obtained by artificial cutting from each acquired digital disease image. Then the sub-images were segmented using twelve lesion segmentation methods integrated with clustering algorithms (including K_means clustering, fuzzy C-means clustering and K_median clustering) and supervised classification algorithms (including logistic regression analysis, Naive Bayes algorithm, classification and regression tree, and linear discriminant analysis). After a comprehensive comparison, the segmentation method integrating the K_median clustering algorithm and linear discriminant analysis was chosen to obtain lesion images. After the lesion segmentation using this method, a total of 129 texture, color and shape features were extracted from the lesion images. Based on the features selected using three methods (ReliefF, 1R and correlation-based feature selection), disease recognition models were built using three supervised learning methods, including the random forest, support vector machine (SVM) and K-nearest neighbor methods. A comparison of the recognition results of the models was conducted. The results showed that when the ReliefF method was used for feature selection, the SVM model built with the most important 45 features (selected from a total of 129 features) was the optimal model. For this SVM model, the recognition accuracies of the training set and the testing set were 97.64% and 94.74%, respectively. Semi-supervised models for disease recognition were built based on the 45 effective features that were used for building the optimal SVM model. For the optimal semi-supervised models built with three ratios of labeled to unlabeled samples in the training set, the recognition accuracies of the training set and the testing set were both approximately 80%. The results indicated that image recognition of the four alfalfa leaf diseases can be implemented with high accuracy. This study provides a feasible solution for lesion image segmentation and image recognition of alfalfa leaf disease. PMID:27977767
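To make the pipeline shape concrete, here is a highly simplified Python sketch: cluster pixel colors to separate lesion from healthy leaf, then classify lesion feature vectors with an SVM. It substitutes K-means for the paper's K_median + linear discriminant analysis combination and uses random stand-in data, so it illustrates the workflow only, not the reported method or accuracy.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def segment_lesion(rgb_image, n_clusters=2):
    """Label each pixel by color cluster; the darkest cluster is taken as lesion."""
    pixels = rgb_image.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)
    brightness = [pixels[labels == k].mean() for k in range(n_clusters)]
    return (labels == int(np.argmin(brightness))).reshape(rgb_image.shape[:2])

leaf = np.random.rand(64, 64, 3)        # stand-in for a sub-image with a lesion
lesion_mask = segment_lesion(leaf)

# Stand-ins for the 45 selected texture/color/shape features and the four
# disease classes; a real pipeline would extract these from lesion images.
X = np.random.rand(80, 45)
y = np.random.randint(0, 4, size=80)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```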
Mankoff, David A; Farwell, Michael D; Clark, Amy S; Pryma, Daniel A
2015-01-01
The ability to measure biochemical and molecular processes to guide cancer treatment represents a potentially powerful tool for trials of targeted cancer therapy. These assays have traditionally been performed by analysis of tissue samples. However, more recently, functional and molecular imaging has been developed that is capable of in vivo assays of cancer biochemistry and molecular biology and is highly complementary to tissue-based assays. Cancer imaging biomarkers can play a key role in increasing the efficacy and efficiency of therapeutic clinical trials and also provide insight into the biologic mechanisms that bring about a therapeutic response. Future progress will depend on close collaboration between imaging scientists and cancer physicians and on public and commercial sponsors, to take full advantage of what imaging has to offer for clinical trials of targeted cancer therapy. This review will provide examples of how molecular imaging can inform targeted cancer clinical trials and clinical decision making by (1) measuring regional expression of the therapeutic target, (2) assessing early (pharmacodynamic) response to treatment, and (3) predicting therapeutic outcome. The review includes a discussion of basic principles of molecular imaging biomarkers in cancer, with an emphasis on those methods that have been tested in patients. We then review clinical trials designed to evaluate imaging tests as integrated markers embedded in a therapeutic clinical trial with the goal of validating the imaging tests as integral markers that can aid patient selection and direct response-adapted treatment strategies. Examples of recently completed multicenter trials using imaging biomarkers are highlighted.
Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J
2012-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.
Infrared and visible image fusion with spectral graph wavelet transform.
Yan, Xiang; Qin, Hanlin; Li, Jia; Zhou, Huixin; Zong, Jing-guo
2015-09-01
Infrared and visible image fusion is a popular topic in image analysis because it can integrate complementary information and obtain a reliable and accurate description of scenes. Multiscale transform theory, as a signal representation method, is widely used in image fusion. In this paper, a novel infrared and visible image fusion method is proposed based on the spectral graph wavelet transform (SGWT) and the bilateral filter. The main novelty of this study is that SGWT is used for image fusion. On the one hand, source images are decomposed by SGWT in its transform domain. The proposed approach not only effectively preserves the details of the different source images, but also represents the irregular areas of the source images well. On the other hand, a novel weighted average method based on the bilateral filter is proposed to fuse low- and high-frequency subbands by taking advantage of the spatial consistency of natural images. Experimental results demonstrate that the proposed method outperforms seven recently proposed image fusion methods in terms of both visual effect and objective evaluation metrics.
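The bilateral-filter-based weighting can be sketched as follows. The paper applies it to SGWT subband coefficients, whereas this simplified Python example (OpenCV assumed available) applies the same idea directly in the image domain, so it is a conceptual sketch rather than the published method.

```python
import numpy as np
import cv2

def fuse(ir, vis, d=9, sigma_color=75, sigma_space=75):
    # Bilateral filtering yields an edge-preserving smooth "base" layer;
    # local detail energy (image minus base) drives the fusion weights.
    base_ir = cv2.bilateralFilter(ir, d, sigma_color, sigma_space)
    base_vis = cv2.bilateralFilter(vis, d, sigma_color, sigma_space)
    e_ir = np.abs(ir.astype(float) - base_ir)
    e_vis = np.abs(vis.astype(float) - base_vis)
    w = e_ir / (e_ir + e_vis + 1e-6)    # weight toward the detail-rich source
    return (w * ir + (1 - w) * vis).astype(np.uint8)

ir = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # toy inputs
vis = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
fused = fuse(ir, vis)
```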
Dzyubachyk, Oleh; Essers, Jeroen; van Cappellen, Wiggert A; Baldeyron, Céline; Inagaki, Akiko; Niessen, Wiro J; Meijering, Erik
2010-10-01
Complete, accurate and reproducible analysis of intracellular foci from fluorescence microscopy image sequences of live cells requires full automation of all processing steps involved: cell segmentation and tracking followed by foci segmentation and pattern analysis. Integrated systems for this purpose are lacking. Extending our previous work in cell segmentation and tracking, we developed a new system for performing fully automated analysis of fluorescent foci in single cells. The system was validated by applying it to two common tasks: intracellular foci counting (in DNA damage repair experiments) and cell-phase identification based on foci pattern analysis (in DNA replication experiments). Experimental results show that the system performs comparably to expert human observers. Thus, it may replace tedious manual analyses for the considered tasks, and enables high-content screening. The described system was implemented in MATLAB (The MathWorks, Inc., USA) and compiled to run within the MATLAB environment. The routines together with four sample datasets are available at http://celmia.bigr.nl/. The software is planned for public release, free of charge for non-commercial use, after publication of this article.
Registration and Fusion of Multiple Source Remotely Sensed Image Data
NASA Technical Reports Server (NTRS)
LeMoigne, Jacqueline
2004-01-01
Earth and Space Science often involve the comparison, fusion, and integration of multiple types of remotely sensed data at various temporal, radiometric, and spatial resolutions. Results of this integration may be utilized for global change analysis, global coverage of an area at multiple resolutions, map updating or validation of new instruments, as well as integration of data provided by multiple instruments carried on multiple platforms, e.g. in spacecraft constellations or fleets of planetary rovers. Our focus is on developing methods to perform fast, accurate and automatic image registration and fusion. General methods for automatic image registration are being reviewed and evaluated. Various choices for feature extraction, feature matching and similarity measurements are being compared, including wavelet-based algorithms, mutual information and statistically robust techniques. Our work also involves studies related to image fusion and investigates dimension reduction and co-kriging for application-dependent fusion. All methods are being tested using several multi-sensor datasets, acquired at EOS Core Sites, and including multiple sensors such as IKONOS, Landsat-7/ETM+, EO1/ALI and Hyperion, MODIS, and SeaWIFS instruments. Issues related to the coregistration of data from the same platform (i.e., AIRS and MODIS from Aqua) or from several platforms of the A-train (i.e., MLS, HIRDLS, OMI from Aura with AIRS and MODIS from Terra and Aqua) will also be considered.
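Of the similarity measures mentioned, mutual information is straightforward to compute from a joint histogram; a minimal NumPy sketch (not the project's code) follows.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information between two equally sized images, in nats."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

a = np.random.rand(128, 128)
print(mutual_information(a, a))                       # self-MI: high
print(mutual_information(a, np.random.rand(128, 128)))  # independent: ~0
```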
Zou, Ke; Huang, Xiaoqi; Li, Tao; Gong, Qiyong; Li, Zhe; Ou-yang, Luo; Deng, Wei; Chen, Qin; Li, Chunxiao; Ding, Yi; Sun, Xueli
2008-01-01
Objective: The purpose of our study was to investigate alterations of white matter integrity in adults with major depressive disorder (MDD) using magnetic resonance imaging (MRI). Methods: We performed diffusion tensor imaging with a 3T MRI scanner on 45 patients with major depression and 45 healthy controls matched for age, sex and education. Using a voxel-based analysis, we measured the fractional anisotropy (FA), and we investigated the differences between the patient and control groups. We examined the correlations between the microstructure abnormalities of white matter and symptom severity, age of illness onset and cumulative illness duration, respectively. Results: We found a significant decrease in FA in the left hemisphere, including the anterior limb of the internal capsule and the inferior parietal portion of the superior longitudinal fasciculus, in patients with MDD compared with healthy controls. Diffusion tensor imaging measures in the left anterior limb of the internal capsule were negatively related to the severity of depressive symptoms, even after we controlled for age and sex. Conclusion: Our findings provide new evidence of microstructural changes of white matter in non–late-onset adult depression. Our results complement those observed in late-life depression and support the hypothesis that the disruption of cortical–subcortical circuit integrity may be involved in the etiology of major depressive disorder. PMID:18982175
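For reference, the per-voxel FA value compared between groups is computed from the three eigenvalues of the diffusion tensor; a small sketch of the standard formula with illustrative eigenvalues (not study data):

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """FA = sqrt(3/2) * ||lambda - mean||_2 / ||lambda||_2."""
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()                          # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

print(fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3))  # ~0.80, white-matter-like
print(fractional_anisotropy(1.0e-3, 1.0e-3, 1.0e-3))  # 0.0, isotropic
```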
Solution processed integrated pixel element for an imaging device
NASA Astrophysics Data System (ADS)
Swathi, K.; Narayan, K. S.
2016-09-01
We demonstrate the implementation of a solid state circuit/structure comprising a high performing polymer field effect transistor (PFET) utilizing an oxide layer in conjunction with a self-assembled monolayer (SAM) as the dielectric and a bulk-heterostructure based organic photodiode as a CMOS-like pixel element for an imaging sensor. Practical usage of functional organic photon detectors requires on-chip components for image capture and signal transfer, as in the CMOS/CCD architecture, rather than simple photodiode arrays, in order to increase the speed and sensitivity of the sensor. The availability of high performing PFETs with low operating voltage and photodiodes with high sensitivity provides the necessary prerequisite to implement a CMOS-type image sensing device structure based on organic electronic devices. Solution processing routes in organic electronics offer relatively facile procedures to integrate these components, combined with unique features of large area, form factor and multiple optical attributes. We utilize the inherent property of a binary mixture in a blend to phase-separate vertically and create a graded junction for effective photocurrent response. The implemented design enables photocharge generation along with on-chip charge-to-voltage conversion with performance parameters comparable to traditional counterparts. Charge integration analysis for the passive pixel element using 2D TCAD simulations is also presented to evaluate the different processes that take place in the monolithic structure.
A prototype of mammography CADx scheme integrated to imaging quality evaluation techniques
NASA Astrophysics Data System (ADS)
Schiabel, Homero; Matheus, Bruno R. N.; Angelo, Michele F.; Patrocínio, Ana Claudia; Ventura, Liliane
2011-03-01
As all women over the age of 40 are recommended to undergo mammographic exams every two years, the demands on radiologists to evaluate mammographic images in short periods of time have increased considerably. As a tool to improve quality and accelerate analysis, CADe/CADx (computer-aided detection/diagnosis) schemes have been investigated, but very few complete CADe/CADx schemes have been developed and most are restricted to detection rather than diagnosis. The existing ones are usually tied to specific mammographic equipment (usually DR), which makes them very expensive. This paper therefore describes a prototype of a complete mammography CADx scheme, developed by our research group, integrated with an imaging quality evaluation process. The basic structure consists of pre-processing modules based on image acquisition and digitization procedures (FFDM, CR or film + scanner), a segmentation tool to detect clustered microcalcifications and suspect masses, and a classification scheme that evaluates the presence of microcalcification clusters as well as possible malignant masses based on their contours. The aim is to provide not only information on the detected structures but also a pre-report with a BI-RADS classification. At this time the system still lacks an interface integrating all the modules. Despite this, it is functional as a prototype for clinical practice testing, with results comparable to others reported in the literature.
NASA Astrophysics Data System (ADS)
Prodanovic, M.; Esteva, M.; Ketcham, R. A.
2017-12-01
Nanometer- to centimeter-scale imaging such as (focused ion beam) scanning electron microscopy, magnetic resonance imaging and X-ray (micro)tomography has since the 1990s introduced 2D and 3D datasets of rock microstructure that allow investigation of nonlinear flow and mechanical phenomena at length scales that are otherwise inaccessible to laboratory measurements. The numerical approaches that use such images produce various upscaled parameters required by subsurface flow and deformation simulators. All of this has revolutionized our knowledge of grain-scale phenomena. However, a lack of data-sharing infrastructure among research groups makes it difficult to integrate different length scales. We have developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal (https://www.digitalrocksportal.org) that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of engineering and geosciences researchers not necessarily trained in computer science or data analysis. Digital Rocks Portal (NSF EarthCube Grant 1541008) is the first repository for imaged porous microstructure data. It is implemented within the reliable, 24/7-maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (University of Texas at Austin). Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative. We show how the data can be documented, referenced in publications via digital object identifiers, visualized, searched for and linked to other repositories. We show the recently implemented integration of remote parallel visualization, bulk upload for large datasets, as well as a preliminary flow simulation workflow with the pore structures currently stored in the repository. We discuss the issues of collecting correct metadata, data discoverability and repository sustainability.
Merging Dietary Assessment with the Adolescent Lifestyle
Schap, TusaRebecca E; Zhu, Fengqing M; Delp, Edward J; Boushey, Carol J
2013-01-01
The use of image-based dietary assessment methods shows promise for improving dietary self-report among children. The Technology Assisted Dietary Assessment (TADA) food record application is a self-administered food record specifically designed to address the burden and human error associated with conventional methods of dietary assessment. Users would take images of foods and beverages at all eating occasions using a mobile telephone or mobile device with an integrated camera (e.g., Apple iPhone, Google Nexus One, Apple iPod Touch). Once the images are taken, they are transferred to a back-end server for automated analysis. The first step in this process, image analysis (i.e., segmentation, feature extraction, and classification), allows for automated food identification. Portion size estimation is also automated via segmentation and geometric shape template modeling. The results of the automated food identification and volume estimation can be indexed with the Food and Nutrient Database for Dietary Studies (FNDDS) to provide a detailed diet analysis for use in epidemiologic or intervention studies. Data collected during controlled feeding studies in a camp-like setting have allowed for formative evaluation and validation of the TADA food record application. This review summarizes the system design and the evidence-based development of image-based methods for dietary assessment among children. PMID:23489518
Tameem, Hussain Z.; Sinha, Usha S.
2011-01-01
Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features. PMID:21785520
NASA Astrophysics Data System (ADS)
Tameem, Hussain Z.; Sinha, Usha S.
2007-11-01
Osteoarthritis (OA) is a heterogeneous and multi-factorial disease characterized by the progressive loss of articular cartilage. Magnetic Resonance Imaging has been established as an accurate technique to assess cartilage damage through both cartilage morphology (volume and thickness) and cartilage water mobility (Spin-lattice relaxation, T2). The Osteoarthritis Initiative, OAI, is a large scale serial assessment of subjects at different stages of OA including those with pre-clinical symptoms. The electronic availability of the comprehensive data collected as part of the initiative provides an unprecedented opportunity to discover new relationships in complex diseases such as OA. However, imaging data, which provides the most accurate non-invasive assessment of OA, is not directly amenable for data mining. Changes in morphometry and relaxivity with OA disease are both complex and subtle, making manual methods extremely difficult. This chapter focuses on the image analysis techniques to automatically localize the differences in morphometry and relaxivity changes in different population sub-groups (normal and OA subjects segregated by age, gender, and race). The image analysis infrastructure will enable automatic extraction of cartilage features at the voxel level; the ultimate goal is to integrate this infrastructure to discover relationships between the image findings and other clinical features.
Saliency-Guided Change Detection of Remotely Sensed Images Using Random Forest
NASA Astrophysics Data System (ADS)
Feng, W.; Sui, H.; Chen, X.
2018-04-01
Studies based on object-based image analysis (OBIA), representing a paradigm shift in change detection (CD), have achieved remarkable progress in the last decade; their aim is to develop more intelligent interpretation and analysis methods. The prediction accuracy and performance stability of random forest (RF), a relatively new machine learning algorithm, are better than those of many single predictors and ensemble forecasting methods. In this paper, we present a novel CD approach for high-resolution remote sensing images, which incorporates visual saliency and RF. First, highly homogeneous and compact image super-pixels are generated using super-pixel segmentation, and the optimal segmentation result is obtained through image superimposition and principal component analysis (PCA). Second, saliency detection is used to guide the search for interest regions in the initial difference image obtained via the improved robust change vector analysis (RCVA) algorithm. The salient regions within the difference image that correspond to the binarized saliency map are extracted, and these regions are subjected to fuzzy c-means (FCM) clustering to obtain a pixel-level pre-classification result, which serves as a prerequisite for the superpixel-based analysis. Third, on the basis of the optimal segmentation and pixel-level pre-classification results, the change possibility of each super-pixel is calculated. Furthermore, the changed and unchanged super-pixels that serve as training samples are automatically selected. The spectral features and Gabor features of each super-pixel are extracted. Finally, superpixel-based CD is implemented by applying RF based on these samples. Experimental results on Ziyuan 3 (ZY3) multi-spectral images show that the proposed method outperforms the compared methods in CD accuracy, and confirm the feasibility and effectiveness of the proposed approach.
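A minimal sketch of the final superpixel-plus-RF step, using scikit-image SLIC superpixels and a scikit-learn random forest. Features and pseudo-labels here are random stand-ins; the actual method derives them from the saliency and FCM stages described above.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

image = np.random.rand(200, 200, 3)            # stand-in for ZY3 imagery
segments = slic(image, n_segments=300, compactness=10)

def superpixel_features(img, segs):
    """Pool simple per-superpixel statistics (stand-ins for spectral/Gabor features)."""
    feats = []
    for label in np.unique(segs):
        pix = img[segs == label]
        feats.append(np.concatenate([pix.mean(axis=0), pix.std(axis=0)]))
    return np.array(feats)

X = superpixel_features(image, segments)
y = np.random.randint(0, 2, size=len(X))       # pseudo-labels from the FCM step
rf = RandomForestClassifier(n_estimators=200).fit(X, y)
change_map = rf.predict(X)                     # changed/unchanged per superpixel
```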
In Situ Characterization of Boehmite Particles in Water Using Liquid SEM.
Yao, Juan; Arey, Bruce W; Yang, Li; Zhang, Fei; Komorek, Rachel; Chun, Jaehun; Yu, Xiao-Ying
2017-09-27
In situ imaging and elemental analysis of boehmite (AlOOH) particles in water is realized using the System for Analysis at the Liquid Vacuum Interface (SALVI) and Scanning Electron Microscopy (SEM). This paper describes the method and key steps in integrating the vacuum-compatible SALVI with SEM and obtaining secondary electron (SE) images of particles in liquid in high vacuum. Energy dispersive x-ray spectroscopy (EDX) is used to obtain elemental analysis of particles in liquid and of control samples, including deionized (DI) water only and an empty channel. Synthesized boehmite (AlOOH) particles suspended in liquid are used as a model in the liquid SEM illustration. The results demonstrate that the particles can be imaged in the SE mode with good resolution (i.e., 400 nm). The AlOOH EDX spectrum shows a significant aluminum (Al) signal when compared with the DI water and empty channel controls. In situ liquid SEM is a powerful technique to study particles in liquid with many exciting applications. This procedure aims to provide the technical know-how needed to conduct liquid SEM imaging and EDX analysis using SALVI and to reduce potential pitfalls when using this approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davis, E.L.
A novel method for performing real-time acquisition and processing of Landsat/EROS data covers all aspects including radiometric and geometric corrections of multispectral scanner or return-beam vidicon inputs, image enhancement, statistical analysis, feature extraction, and classification. Radiometric transformations include bias/gain adjustment, noise suppression, calibration, scan angle compensation, and illumination compensation, including topography and atmospheric effects. Correction or compensation for geometric distortion includes sensor-related distortions, such as centering, skew, size, scan nonlinearity, radial symmetry, and tangential symmetry. Also included are object image-related distortions such as aspect angle (altitude), scale distortion (altitude), terrain relief, and earth curvature. Ephemeral corrections are also applied to compensate for satellite forward movement, earth rotation, altitude variations, satellite vibration, and mirror scan velocity. Image enhancement includes high-pass, low-pass, and Laplacian mask filtering and data restoration for intermittent losses. Resource classification is provided by statistical analysis including histograms, correlational analysis, matrix manipulations, and determination of spectral responses. Feature extraction includes spatial frequency analysis, which is used in parallel discriminant functions in each array processor for rapid determination. The technique uses integrated parallel array processors that decimate the tasks concurrently under supervision of a control processor. The operator-machine interface is optimized for programming ease and graphics image windowing.
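The bias/gain radiometric adjustment mentioned above is a per-band linear sensor model; a tiny sketch with illustrative (not actual Landsat) calibration coefficients:

```python
import numpy as np

def dn_to_radiance(dn, gain, bias):
    """L = gain * DN + bias, applied element-wise to one band."""
    return gain * dn.astype(float) + bias

band = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # raw digital numbers
radiance = dn_to_radiance(band, gain=0.1, bias=-1.5)          # toy coefficients
```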
HCS road: an enterprise system for integrated HCS data management and analysis.
Jackson, Donald; Lenard, Michael; Zelensky, Alexander; Shaikh, Mohammad; Scharpf, James V; Shaginaw, Richard; Nawade, Mahesh; Agler, Michele; Cloutier, Normand J; Fennell, Myles; Guo, Qi; Wardwell-Swanson, Judith; Zhao, Dandan; Zhu, Yingjie; Miller, Christopher; Gill, James
2010-08-01
The effective analysis and interpretation of high-content screening (HCS) data requires joining results to information on experimental treatments and controls, normalizing data, and selecting hits or fitting concentration-response curves. HCS data have unique requirements that are not supported by traditional high-throughput screening databases, including the ability to designate separate positive and negative controls for different measurements in multiplexed assays; the ability to capture information on the cell lines, fluorescent reagents, and treatments in each assay; the ability to store and use individual-cell and image data; and the ability to support HCS readers and software from multiple vendors along with third-party image analysis tools. To address these requirements, the authors developed an enterprise system for the storage and processing of HCS images and results. This system, HCS Road, supports target identification, lead discovery, lead evaluation, and lead profiling activities. A dedicated client supports experimental design, data review, and core analyses and displays images together with results for assay development, hit assessment, and troubleshooting. Data can be exported to third-party applications for further analysis and exploration. HCS Road provides a single source for high-content results across the organization, regardless of the group or instrument that produced them.
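One of the normalization steps such a system automates can be sketched simply: expressing each well relative to the plate's negative- and positive-control means. The values below are illustrative only, not HCS Road's actual computation.

```python
import numpy as np

neg = np.array([100.0, 98.0, 102.0])   # negative-control wells
pos = np.array([10.0, 12.0, 11.0])     # positive-control wells
samples = np.array([55.0, 90.0, 20.0])

# Percent activity between control means: 0% ~ negative, 100% ~ positive.
percent = 100 * (neg.mean() - samples) / (neg.mean() - pos.mean())
print(percent.round(1))
```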
Image Decoding of Photonic Crystal Beads Array in the Microfluidic Chip for Multiplex Assays
Yuan, Junjie; Zhao, Xiangwei; Wang, Xiaoxia; Gu, Zhongze
2014-01-01
Along with the miniaturization and increasing intelligence of biomedical instruments, the growing demand for health monitoring anywhere and anytime elevates the need for the development of point-of-care testing (POCT). Photonic crystal beads (PCBs), as one kind of good encoded microcarrier, can be integrated with microfluidic chips to realize cost-effective and highly sensitive multiplex bioassays. However, automated analysis of them is difficult due to the characteristics of the PCBs and the unique detection manner. In this paper, we propose a strategy that takes advantage of automated image processing for color decoding of the PCB array in the microfluidic chip for multiplex assays. By processing and aligning two modal images, epi-fluorescence and epi-white light, every intact bead in the image is accurately extracted and decoded by its PC color, which stands for the target species. This method, which shows high robustness and accuracy under various configurations, eliminates the high hardware requirements of spectroscopy analysis and user-interaction software, and provides adequate support for the general automated analysis of POCT based on PCB arrays. PMID:25341876
Campagnola, Luke; Kratz, Megan B; Manis, Paul B
2014-01-01
The complexity of modern neurophysiology experiments requires specialized software to coordinate multiple acquisition devices and analyze the collected data. We have developed ACQ4, an open-source software platform for performing data acquisition and analysis in experimental neurophysiology. This software integrates the tasks of acquiring, managing, and analyzing experimental data. ACQ4 has been used primarily for standard patch-clamp electrophysiology, laser scanning photostimulation, multiphoton microscopy, intrinsic imaging, and calcium imaging. The system is highly modular, which facilitates the addition of new devices and functionality. The modules included with ACQ4 provide for rapid construction of acquisition protocols, live video display, and customizable analysis tools. Position-aware data collection allows automated construction of image mosaics and registration of images with 3-dimensional anatomical atlases. ACQ4 uses free and open-source tools including Python, NumPy/SciPy for numerical computation, PyQt for the user interface, and PyQtGraph for scientific graphics. Supported hardware includes cameras, patch clamp amplifiers, scanning mirrors, lasers, shutters, Pockels cells, motorized stages, and more. ACQ4 is available for download at http://www.acq4.org.
Blackboard architecture for medical image interpretation
NASA Astrophysics Data System (ADS)
Davis, Darryl N.; Taylor, Christopher J.
1991-06-01
There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.
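The control structure described can be sketched as a minimal blackboard pattern in Python; all class and hypothesis names below are illustrative, not the system's actual modules.

```python
class Blackboard:
    """Shared store of hypotheses that knowledge sources read and post to."""
    def __init__(self):
        self.hypotheses = {}

class KnowledgeSource:
    def can_contribute(self, bb):   # precondition check
        raise NotImplementedError
    def contribute(self, bb):       # post or refine hypotheses
        raise NotImplementedError

class LocationHypothesiser(KnowledgeSource):
    def can_contribute(self, bb):
        return "landmark_location" not in bb.hypotheses
    def contribute(self, bb):
        bb.hypotheses["landmark_location"] = (120, 85)  # model-predicted point

def control_loop(bb, sources, max_cycles=10):
    """Hypothesise-and-test cycle: fire any knowledge source that is ready."""
    for _ in range(max_cycles):
        ready = [ks for ks in sources if ks.can_contribute(bb)]
        if not ready:
            break
        ready[0].contribute(bb)
    return bb.hypotheses

print(control_loop(Blackboard(), [LocationHypothesiser()]))
```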
Goals and potential career advancement of licensed practical nurses in Japan.
Ikeda, Mari; Inoue, Katsuya; Kamibeppu, Kiyoko
2008-10-01
To investigate the effects of personal and professional variables on the career advancement intentions of working Licensed Practical Nurses (LPNs). In Japan, the two levels of professional nursing licensure, the LPN and the registered nurse (RN), are likely to be integrated in the future. It is therefore important to know the career advancement intentions of LPNs. Questionnaires were sent to a sample of 356 LPNs. Analysis of variance (ANOVA) and discriminant analysis were used. We found that those who had a positive image of LPNs along with a positive image of RNs showed interest in career advancement. The ANOVA results showed that age had a negative effect; however, discriminant analysis suggested that age is not as significant as other variables. Our results indicate that the 'image of RNs' and 'role-acceptance' factors have an effect on the career advancement intentions of LPNs. They suggest that Nursing Managers should create a supportive working environment where LPNs would feel encouraged to carry out the nursing role, thereby creating a positive image of nursing in general, which would lead to career motivation and the pursuit of RN status.
Information theoretic analysis of edge detection in visual communication
NASA Astrophysics Data System (ADS)
Jiang, Bo; Rahman, Zia-ur
2010-08-01
Generally, the designs of digital image processing algorithms and image gathering devices remain separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the artifacts introduced into the process by the image gathering process. However, experiments show that the image gathering process profoundly impacts the performance of digital image processing and the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. In this paper, we perform an end-to-end information-theoretic system analysis to assess edge detection methods. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and the parameters, such as sampling, additive noise, etc., that define the image gathering system. The edge detection algorithm is regarded to have high performance only if the information rate from the scene to the edge approaches the maximum possible. This goal can be achieved only by jointly optimizing all processes. People generally use subjective judgment to compare different edge detection methods. There is no common tool that can be used to evaluate the performance of the different algorithms and to guide selection of the best algorithm for a given system or scene. Our information-theoretic assessment becomes this new tool, which allows us to compare the different edge detection operators in a common environment.
Barteneva, Natasha S; Vorobjev, Ivan A
2018-01-01
In this paper, we review some of the recent advances in cellular heterogeneity and single-cell analysis methods. In modern research of cellular heterogeneity, there are four major approaches: analysis of pooled samples, single-cell analysis, high-throughput single-cell analysis, and, lately, integrated analysis of cellular populations at the single-cell level. Recently developed high-throughput single-cell genetic analysis methods such as RNA-Seq require a purification step and destruction of the analyzed cell, often providing a snapshot of the investigated cell without spatiotemporal context. Correlative analysis of multiparameter morphological, functional, and molecular information is important for differentiation of more uniform groups in the spectrum of different cell types. Simplified distributions (histograms and 2D plots) can underrepresent biologically significant subpopulations. Future directions may include the development of nondestructive methods for dissecting molecular events in intact cells, simultaneous correlative cellular analysis of phenotypic and molecular features by hybrid technologies such as imaging flow cytometry, and further progress in supervised and unsupervised statistical analysis algorithms.
Scanning fluorescent microthermal imaging apparatus and method
Barton, D.L.; Tangyunyong, P.
1998-01-06
A scanning fluorescent microthermal imaging (FMI) apparatus and method is disclosed, useful for integrated circuit (IC) failure analysis, that uses a scanned and focused beam from a laser to excite a thin fluorescent film disposed over the surface of the IC. By collecting fluorescent radiation from the film, and performing point-by-point data collection with a single-point photodetector, a thermal map of the IC is formed to measure any localized heating associated with defects in the IC.
Verbalization and imagery in the process of formation of operator labor skills
NASA Technical Reports Server (NTRS)
Mistyuk, V. V.
1975-01-01
Sensorimotor control tests show that mastering operational skills occurs under conditions that stimulate the operator to independent active analysis and summarization of current information with the goal of clarifying the signs and the integral images that are a model of the situation. Goal directed determination of such an image requires inner and external speech, activates and improves the thinking of the operator, accelerates the training process, increases its effectiveness, and enables the formation of strategies in anticipating the course of events.
MassImager: A software for interactive and in-depth analysis of mass spectrometry imaging data.
He, Jiuming; Huang, Luojiao; Tian, Runtao; Li, Tiegang; Sun, Chenglong; Song, Xiaowei; Lv, Yiwei; Luo, Zhigang; Li, Xin; Abliz, Zeper
2018-07-26
Mass spectrometry imaging (MSI) has become a powerful tool to probe molecular events in biological tissue. However, one of the biggest challenges is the lack of easy-to-use data processing software for discovering the underlying biological information in complicated and huge MSI datasets. Here, a user-friendly and full-featured MSI software package named MassImager, comprising three subsystems, Solution, Visualization and Intelligence, is developed, focusing on interactive visualization, in-situ biomarker discovery and artificial-intelligence-assisted pathological diagnosis. Simplified data preprocessing and high-throughput MSI data exchange and serialization jointly guarantee quick reconstruction of ion images and rapid analysis of datasets of dozens of gigabytes. It also offers diverse self-defined operations for visual processing, including multiple ion visualization, multiple channel superposition, image normalization, visual resolution enhancement and image filtering. Region-of-interest analysis can be performed precisely through interactive visualization between the ion images and mass spectra, as well as the overlaid optical image guide, to directly find region-specific biomarkers. Moreover, automatic pattern recognition can be achieved immediately upon supervised or unsupervised multivariate statistical modeling. Clear discrimination between cancer tissue and adjacent tissue within an MSI dataset can be seen in the generated pattern image, which shows great potential for visual in-situ biomarker discovery and artificially intelligent pathological diagnosis of cancer. All the features are integrated together in MassImager to provide a deep MSI processing solution at the in-situ metabolomics level for biomarker discovery and future clinical pathological diagnosis. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
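The core MSI visualization operation referred to, reconstructing an ion image, can be sketched in a few lines: sum each pixel's spectral intensity within a narrow m/z window. The data here are random stand-ins, not MassImager code.

```python
import numpy as np

mz = np.linspace(100, 1000, 2000)          # shared m/z axis
cube = np.random.rand(32, 32, mz.size)     # per-pixel spectra (toy data)

def ion_image(cube, mz, target, tol=0.25):
    """Sum intensities in a +/- tol window around the target m/z."""
    window = (mz > target - tol) & (mz < target + tol)
    return cube[:, :, window].sum(axis=2)

img = ion_image(cube, mz, target=500.3)
img_norm = img / img.max()                 # simple image normalization
```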
Rutten-Jacobs, Loes C A; Tozer, Daniel J; Duering, Marco; Malik, Rainer; Dichgans, Martin; Markus, Hugh S; Traylor, Matthew
2018-06-01
Structural integrity of the white matter is a marker of cerebral small vessel disease, which is the major cause of vascular dementia and a quarter of all strokes. Genetic studies provide a way to obtain novel insights into the disease mechanism underlying cerebral small vessel disease. The aim was to identify common variants associated with microstructural integrity of the white matter and to elucidate the relationships of white matter structural integrity with stroke, major depressive disorder, and Alzheimer disease. This genome-wide association analysis included 8448 individuals from UK Biobank, a population-based cohort study that recruited individuals aged 40 to 69 years from across the United Kingdom between 2006 and 2010. Microstructural integrity was measured as fractional anisotropy (FA)- and mean diffusivity (MD)-derived parameters on diffusion tensor images. White matter hyperintensity volumes (WMHV) were assessed on T2-weighted fluid-attenuated inversion recovery images. We identified 1 novel locus at genome-wide significance (VCAN [versican]: rs13164785, P = 3.7×10⁻¹⁸ for MD; rs67827860, P = 1.3×10⁻¹⁴ for FA). LD score regression showed a significant genome-wide correlation between FA, MD, and WMHV (FA-WMHV rG 0.39 [SE, 0.15]; MD-WMHV rG 0.56 [SE, 0.19]). In polygenic risk score analysis, FA, MD, and WMHV were significantly associated with lacunar stroke, MD with major depressive disorder, and WMHV with Alzheimer disease. Genetic variants within the VCAN gene may play a role in the mechanisms underlying microstructural integrity of the white matter in the brain measured as FA and MD. Mechanisms underlying white matter alterations are shared with cerebrovascular disease, and inherited differences in white matter microstructure impact on Alzheimer disease and major depressive disorder. © 2018 The Authors.
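The polygenic risk scores used in the association analysis are, at their simplest, weighted sums of risk-allele dosages; a minimal sketch with illustrative effect sizes (not the study's variants or weights):

```python
import numpy as np

dosages = np.array([[0, 1, 2, 1],    # individuals x variants (0/1/2 risk alleles)
                    [2, 0, 1, 0]])
betas = np.array([0.12, -0.05, 0.30, 0.08])  # per-variant GWAS effect sizes

prs = dosages @ betas                # one score per individual
print(prs)
```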
Hahn, Paul; Migacz, Justin; O'Donnell, Rachelle; Day, Shelley; Lee, Annie; Lin, Phoebe; Vann, Robin; Kuo, Anthony; Fekrat, Sharon; Mruthyunjaya, Prithvi; Postel, Eric A; Izatt, Joseph A; Toth, Cynthia A
2013-01-01
The authors have recently developed a high-resolution microscope-integrated spectral domain optical coherence tomography (MIOCT) device designed to enable OCT acquisition simultaneous with surgical maneuvers. The purpose of this report is to describe translation of this device from preclinical testing into human intraoperative imaging. Before human imaging, surgical conditions were fully simulated for extensive preclinical MIOCT evaluation in a custom model eye system. Microscope-integrated spectral domain OCT images were then acquired in normal human volunteers and during vitreoretinal surgery in patients who consented to participate in a prospective institutional review board-approved study. Microscope-integrated spectral domain OCT images were obtained before and at pauses in surgical maneuvers and were compared based on predetermined diagnostic criteria to images obtained with a high-resolution spectral domain research handheld OCT system (HHOCT; Bioptigen, Inc) at the same time point. Cohorts of five consecutive patients were imaged. Successful end points were predefined, including ≥80% correlation in identification of pathology between MIOCT and HHOCT in ≥80% of the patients. Microscope-integrated spectral domain OCT was favorably evaluated by study surgeons and scrub nurses, all of whom responded that they would consider participating in human intraoperative imaging trials. The preclinical evaluation identified significant improvements that were made before MIOCT use during human surgery. The MIOCT transition into clinical human research was smooth. Microscope-integrated spectral domain OCT imaging in normal human volunteers demonstrated high resolution comparable to tabletop scanners. In the operating room, after an initial learning curve, surgeons successfully acquired human macular MIOCT images before and after surgical maneuvers. Microscope-integrated spectral domain OCT imaging confirmed preoperative diagnoses, such as full-thickness macular hole and vitreomacular traction, and demonstrated postsurgical changes in retinal morphology. Two cohorts of five patients were imaged. In the second cohort, the predefined end points were exceeded with ≥80% correlation between microscope-mounted OCT and HHOCT imaging in 100% of the patients. This report describes high-resolution MIOCT imaging using the prototype device in human eyes during vitreoretinal surgery, with successful achievement of predefined end points for imaging. Further refinements and investigations will be directed toward fully integrating MIOCT with vitreoretinal and other ocular surgery to image surgical maneuvers in real time.
NASA Astrophysics Data System (ADS)
Lemmens, R.; Maathuis, B.; Mannaerts, C.; Foerster, T.; Schaeffer, B.; Wytzisk, A.
2009-12-01
This paper presents easily accessible, integrated, web-based analysis of satellite images with plug-in-based open source software. The paper is targeted at both users and developers of geospatial software. Guided by a use case scenario, we describe the ILWIS software and its toolbox to access satellite images through the GEONETCast broadcasting system. The last two decades have shown a major shift from stand-alone software systems to networked ones, often client/server applications using distributed geo-(web-)services. This allows organisations to combine, without much effort, their own data with remotely available data and processing functionality. Key to this integrated spatial data analysis is low-cost access to data from within user-friendly and flexible software. Web-based open source software solutions are often a powerful option for developing countries. The Integrated Land and Water Information System (ILWIS) is a PC-based GIS & Remote Sensing software package, comprising a complete suite of image processing, spatial analysis and digital mapping, and was developed as commercial software from the early nineties onwards. Recent project efforts have migrated ILWIS into a modular, plug-in-based open source software, and provide web-service support for OGC-based web mapping and processing. The core objective of the ILWIS Open source project is to provide a maintainable framework for researchers and software developers to implement training components, scientific toolboxes and (web-) services. The latest plug-ins have been developed for multi-criteria decision making, water resources analysis and spatial statistics analysis. The development of this framework has been done since 2007 in the context of 52°North, an open initiative that advances the development of cutting-edge open source geospatial software, using the GPL license. GEONETCast, as part of the emerging Global Earth Observation System of Systems (GEOSS), puts essential environmental data at the fingertips of users around the globe. This user-friendly and low-cost information dissemination provides global information as a basis for decision-making in a number of critical areas, including public health, energy, agriculture, weather, water, climate, natural disasters and ecosystems. GEONETCast makes available satellite images via Digital Video Broadcast (DVB) technology. An OGC WMS interface and plug-ins which convert GEONETCast data streams allow an ILWIS user to integrate various distributed data sources with data locally stored on his machine. Our paper describes a use case in which ILWIS is used with GEONETCast satellite imagery for decision-making processes in Ghana. We also explain how the ILWIS software can be extended with additional functionality by means of building plug-ins, and unfold our plans to implement other OGC standards, such as WCS and WPS, in the same context. Especially the latter can be seen as a major step forward in terms of moving well-proven desktop-based processing functionality to the web. This enables the embedding of ILWIS functionality in Spatial Data Infrastructures or even execution in scalable and on-demand cloud computing environments.
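A request against an OGC WMS interface like the one the ILWIS plug-ins expose can be sketched with the OWSLib client library; the endpoint URL and layer name below are placeholders, not a real ILWIS/GEONETCast service.

```python
from owslib.wms import WebMapService

# Placeholder endpoint and layer; substitute a real WMS service to run this.
wms = WebMapService("http://example.org/wms", version="1.1.1")
response = wms.getmap(layers=["rainfall_estimate"],
                      srs="EPSG:4326",
                      bbox=(-4.0, 4.0, 2.0, 12.0),  # roughly Ghana
                      size=(512, 512),
                      format="image/png")
with open("map.png", "wb") as f:
    f.write(response.read())
```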
Bühnemann, Claudia; Li, Simon; Yu, Haiyue; Branford White, Harriet; Schäfer, Karl L; Llombart-Bosch, Antonio; Machado, Isidro; Picci, Piero; Hogendoorn, Pancras C W; Athanasou, Nicholas A; Noble, J Alison; Hassan, A Bassim
2014-01-01
Driven by genomic somatic variation, tumour tissues are typically heterogeneous, yet unbiased quantitative methods are rarely used to analyse heterogeneity at the protein level. Motivated by this problem, we developed automated image segmentation of images of multiple biomarkers in Ewing sarcoma to generate distributions of biomarkers between and within tumour cells. We further integrate high dimensional data with patient clinical outcomes utilising random survival forest (RSF) machine learning. Using material from cohorts of genetically diagnosed Ewing sarcoma with EWSR1 chromosomal translocations, confocal images of tissue microarrays were segmented with level sets and watershed algorithms. Each cell nucleus and cytoplasm were identified in relation to DAPI and CD99, respectively, and protein biomarkers (e.g. Ki67, pS6, Foxo3a, EGR1, MAPK) localised relative to nuclear and cytoplasmic regions of each cell in order to generate image feature distributions. The image distribution features were analysed with RSF in relation to known overall patient survival from three separate cohorts (185 informative cases). Variation in pre-analytical processing resulted in elimination of a high number of non-informative images that had poor DAPI localisation or biomarker preservation (67 cases, 36%). The distribution of image features for biomarkers in the remaining high quality material (118 cases, 104 features per case) were analysed by RSF with feature selection, and performance assessed using internal cross-validation, rather than a separate validation cohort. A prognostic classifier for Ewing sarcoma with low cross-validation error rates (0.36) was comprised of multiple features, including the Ki67 proliferative marker and a sub-population of cells with low cytoplasmic/nuclear ratio of CD99. Through elimination of bias, the evaluation of high-dimensionality biomarker distribution within cell populations of a tumour using random forest analysis in quality controlled tumour material could be achieved. Such an automated and integrated methodology has potential application in the identification of prognostic classifiers based on tumour cell heterogeneity.
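The nucleus-segmentation step can be sketched with a standard distance-transform watershed in scikit-image; this illustrates the general approach, not the authors' level-set pipeline, and uses random stand-in data for the DAPI channel.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

dapi = np.random.rand(256, 256)            # stand-in for a DAPI channel
mask = dapi > threshold_otsu(dapi)         # foreground (nuclei) mask

# Split touching nuclei: seed the watershed at distance-transform peaks.
distance = ndi.distance_transform_edt(mask)
peaks = peak_local_max(distance, min_distance=5, labels=mask)
markers = np.zeros_like(mask, dtype=int)
markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
labels = watershed(-distance, markers, mask=mask)
print(labels.max(), "candidate nuclei")
```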