Representation of scientific methodology in secondary science textbooks
NASA Astrophysics Data System (ADS)
Binns, Ian C.
The purpose of this investigation was to assess the representation of scientific methodology in secondary science textbooks. More specifically, this study looked at how textbooks introduced scientific methodology and to what degree the examples from the rest of the textbook, the investigations, and the images were consistent with the text's description of scientific methodology, if at all. The sample included eight secondary science textbooks from two publishers, McGraw-Hill/Glencoe and Harcourt/Holt, Rinehart & Winston. Data consisted, first, of all student text and teacher text that referred to scientific methodology. Second, all investigations in the textbooks were analyzed. Finally, any images that depicted scientists working were also collected and analyzed. The text analysis and activity analysis used the ethnographic content analysis approach developed by Altheide (1996). The rubrics used for the text analysis and activity analysis were initially guided by the Benchmarks (AAAS, 1993), the NSES (NRC, 1996), and the nature of science literature. Preliminary analyses helped to refine each of the rubrics and ground them in the data. Image analysis used stereotypes identified in the Draw-A-Scientist Test (DAST) literature. Findings indicated that all eight textbooks presented mixed views of scientific methodology in their initial descriptions. Five textbooks placed more emphasis on the traditional view and three placed more emphasis on the broad view. Results also revealed that the initial descriptions, examples, investigations, and images all emphasized the broad view for Glencoe Biology and the traditional view for Chemistry: Matter and Change. The initial descriptions, examples, investigations, and images in the other six textbooks were not consistent. Overall, the textbook with the most appropriate depiction of scientific methodology was Glencoe Biology and the textbook with the least appropriate depiction was Physics: Principles and Problems. These findings suggest that, compared to earlier investigations, textbooks have begun to improve in how they represent scientific methodology. However, there is still much room for improvement. Future research needs to consider how textbooks impact teachers' and students' understandings of scientific methodology.
Imaging Girls: Visual Methodologies and Messages for Girls' Education
ERIC Educational Resources Information Center
Magno, Cathryn; Kirk, Jackie
2008-01-01
This article describes the use of visual methodologies to examine images of girls used by development agencies to portray and promote their work in girls' education, and provides a detailed discussion of three report cover images. It details the processes of methodology and tool development for the visual analysis and presents initial 'readings'…
Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study
NASA Technical Reports Server (NTRS)
Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia
2015-01-01
Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.
Leavesley, Silas J; Sweat, Brenner; Abbott, Caitlyn; Favreau, Peter; Rich, Thomas C
2018-01-01
Spectral imaging technologies have been used for many years by the remote sensing community. More recently, these approaches have been applied to biomedical problems, where they have shown great promise. However, biomedical spectral imaging has been complicated by the high variance of biological data and the reduced ability to construct test scenarios with fixed ground truths. Hence, it has been difficult to objectively assess and compare biomedical spectral imaging assays and technologies. Here, we present a standardized methodology that allows assessment of the performance of biomedical spectral imaging equipment, assays, and analysis algorithms. This methodology incorporates real experimental data and a theoretical sensitivity analysis, preserving the variability present in biomedical image data. We demonstrate that this approach can be applied in several ways: to compare the effectiveness of spectral analysis algorithms, to compare the response of different imaging platforms, and to assess the level of target signature required to achieve a desired performance. Results indicate that it is possible to compare even very different hardware platforms using this methodology. Future applications could include a range of optimization tasks, such as maximizing detection sensitivity or acquisition speed, providing high utility for investigators ranging from design engineers to biomedical scientists. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Object-Based Image Analysis Beyond Remote Sensing - the Human Perspective
NASA Astrophysics Data System (ADS)
Blaschke, T.; Lang, S.; Tiede, D.; Papadakis, M.; Györi, A.
2016-06-01
We introduce a prototypical methodological framework for a place-based GIS-RS system for the spatial delineation of place, incorporating spatial analysis and mapping techniques and drawing on methods from fields such as environmental psychology, geography, and computer science. The methodological lynchpin for delineating place in terms of objects is object-based image analysis (OBIA).
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué
2015-10-01
In this paper, a new methodology for diagnosing skin cancer in images of dermatologic spots using image processing is presented. Currently, skin cancer is one of the most frequent diseases in humans. The methodology is based on Fourier spectral analysis using classic, inverse and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique is developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.
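To make the spectral-index idea concrete, here is a minimal Python sketch of a k-law nonlinear spectral measure; the exponent k, the radial low/high-frequency cutoff, and the index definition are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def klaw_spectral_index(gray: np.ndarray, k: float = 0.3) -> float:
    """Scalar index summarizing high-frequency content of the k-law spectrum."""
    F = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    mag = np.abs(F) ** k                    # k-law nonlinearity compresses dynamic range
    h, w = mag.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)    # radial spatial frequency
    cutoff = 0.25 * min(h, w)               # assumed low/high-frequency split
    return float(mag[r >= cutoff].sum() / mag.sum())

# A higher index would flag more complex (irregular) spot patterns.
```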
A Methodology for Anatomic Ultrasound Image Diagnostic Quality Assessment.
Hemmsen, Martin Christian; Lange, Theis; Brandt, Andreas Hjelm; Nielsen, Michael Bachmann; Jensen, Jorgen Arendt
2017-01-01
This paper discusses methods for the assessment of ultrasound image quality based on our experiences with evaluating new methods for anatomic imaging. It presents a methodology to ensure a fair assessment between competing imaging methods using clinically relevant evaluations. The methodology is valuable in the continuing process of method optimization and guided development of new imaging methods. It includes a three-phase study plan covering initial prototype development through clinical assessment. Recommendations for the clinical assessment protocol, software, and statistical analysis are presented. Earlier uses of the methodology have shown that it ensures the validity of the assessment, as it separates the influences of developer, investigator, and assessor once a research protocol has been established. This separation reduces confounding influences from the developer on the result, so that the clinical value is properly revealed. This paper exemplifies the methodology using recent studies of synthetic aperture sequential beamforming tissue harmonic imaging.
STEM_CELL: a software tool for electron microscopy: part 2--analysis of crystalline materials.
Grillo, Vincenzo; Rossi, Francesca
2013-02-01
New graphical software (STEM_CELL) for the analysis of HRTEM and STEM-HAADF images is introduced here in detail. The advantage of the software, beyond its graphical interface, is that it combines different analysis algorithms and simulation (described in an associated article) to produce novel analysis methodologies. Different implementations of, and improvements to, state-of-the-art approaches are reported for image analysis, filtering, normalization, and background subtraction. In particular, two important methodological results are highlighted: (i) the definition of a procedure for atomic-scale quantitative analysis of HAADF images, and (ii) the extension of geometric phase analysis to large regions, potentially up to 1 μm, through the use of undersampled images with aliasing effects. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ortiz-Jaramillo, B.; Fandiño Toro, H. A.; Benitez-Restrepo, H. D.; Orjuela-Vargas, S. A.; Castellanos-Domínguez, G.; Philips, W.
2012-03-01
Infrared Non-Destructive Testing (INDT) is known as an effective and rapid method for nondestructive inspection. It can detect a broad range of near-surface structural flaws in metallic and composite components. Those flaws are modeled as smooth contours centered at peaks of stored thermal energy, termed Regions of Interest (ROIs), which dedicated methodologies must detect. In this paper, we present a methodology for ROI extraction in INDT tasks based on multi-resolution analysis, which is robust to low ROI contrast and to the non-uniform heating that affects low spatial frequencies and hinders the detection of relevant points in the image. The methodology combines local correlation, Gaussian scale analysis and local edge detection. Local correlation between the image and a Gaussian window provides interest points related to ROIs; a Gaussian window is used because thermal behavior is well modeled by smooth Gaussian contours. Gaussian scale analysis examines details in the image across resolutions, mitigating low contrast and non-uniform heating and avoiding manual selection of the Gaussian window size. Finally, local edge detection provides a good estimate of the ROI boundaries. The resulting methodology performs as well as or better than other dedicated algorithms proposed in the state of the art.
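A hedged sketch of the three ingredients named above (local correlation with a Gaussian window, Gaussian scale analysis, and local edge detection); the kernel scales and peak thresholds are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import canny, peak_local_max

def extract_roi_candidates(thermal: np.ndarray):
    img = thermal.astype(float)
    # Multi-resolution step: a difference of Gaussian scales suppresses the
    # slowly varying non-uniform heating while keeping blob-like thermal peaks.
    detail = gaussian_filter(img, 2) - gaussian_filter(img, 15)
    # Interest points: local maxima of the band-passed image.
    peaks = peak_local_max(detail, min_distance=10, threshold_rel=0.3)
    # Local edges approximate the ROI boundaries around each interest point.
    edges = canny(detail, sigma=2.0)
    return peaks, edges
```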
A methodology to event reconstruction from trace images.
Milliet, Quentin; Delémont, Olivier; Sapin, Eric; Margot, Pierre
2015-03-01
The widespread use of digital imaging devices for surveillance (CCTV) and entertainment (e.g., mobile phones, compact cameras) has increased the number of images recorded and the opportunities to consider the images as traces or documentation of criminal activity. The forensic science literature focuses almost exclusively on technical issues and evidence assessment [1]. Earlier steps in the investigation phase have been neglected and must be considered. This article is the first comprehensive description of a methodology for event reconstruction using images. This formal methodology was conceptualised from practical experiences and applied to different contexts and case studies to test and refine it. Based on this practical analysis, we propose a systematic approach that includes a preliminary analysis followed by four main steps. These steps form a sequence in which the results of each step rely on the previous step. However, the methodology is not linear: it is a cyclic, iterative progression for obtaining knowledge about an event. The preliminary analysis is a pre-evaluation phase wherein the potential relevance of images is assessed. In the first step, images are detected and collected as pertinent trace material; the second step involves organising them and assessing their quality and informative potential. The third step includes reconstruction using clues about space, time and actions. Finally, in the fourth step, the images are evaluated and selected as evidence. These steps are described and illustrated using practical examples. The paper outlines how images elicit information about persons, objects, space, time and actions throughout the investigation process to reconstruct an event step by step. We emphasise the hypothetico-deductive reasoning framework, which demonstrates the contribution of images to generating, refining or eliminating propositions or hypotheses. This methodology provides a sound basis for extending image use as evidence and, more generally, as clues in investigation and crime reconstruction processes. Copyright © 2015 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
An innovative and shared methodology for event reconstruction using images in forensic science.
Milliet, Quentin; Jendly, Manon; Delémont, Olivier
2015-09-01
This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas
2017-03-01
Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV) infection. Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis techniques has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust computerized image analysis methods for CPA assessment has proven to be a major limitation. The current work introduces a three-stage fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation, as well as for fibrosis detection in liver tissue regions, in the first and the third stages of the methodology, respectively. Because several types of tissue regions may appear in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from CPA computation. For the evaluation of the methodology, 79 liver biopsy images were employed, yielding a 1.31% mean absolute CPA error with a 0.923 concordance correlation coefficient. The proposed methodology is designed to (i) avoid the manual threshold-based and region selection processes widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
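The clustering stages can be sketched as follows; k-means is used as a stand-in for the paper's clustering algorithms, the stain-colour assumption is hypothetical, and the classifier stage that excludes non-liver regions is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

def cpa_estimate(rgb: np.ndarray) -> float:
    pixels = rgb.reshape(-1, 3).astype(float)
    # Stage 1: two clusters separate bright background from stained tissue.
    bg = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
    tissue_label = np.argmin([pixels[bg == i].mean() for i in (0, 1)])  # darker cluster
    tissue = pixels[bg == tissue_label]
    # Stage 3: within tissue, two clusters separate collagen from parenchyma.
    fib = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tissue)
    # Assume collagen is the redder cluster (e.g., a Sirius red stain).
    collagen_label = np.argmax([tissue[fib == i, 0].mean() for i in (0, 1)])
    return 100.0 * (fib == collagen_label).sum() / len(tissue)  # CPA in percent
```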
Almeida, Mariana R; Correa, Deleon N; Zacca, Jorge J; Logrado, Lucio Paulo Lima; Poppi, Ronei J
2015-02-20
The aim of this study was to develop a methodology using Raman hyperspectral imaging and chemometric methods for the identification of pre- and post-blast explosive residues on banknote surfaces. The explosives studied were of military, commercial and propellant types. After acquisition of the hyperspectral images, independent component analysis (ICA) was applied to extract the pure spectra and the distribution maps of the corresponding image constituents. The performance of the methodology was evaluated by the explained variance and the lack of fit of the models, by comparing the ICA-recovered spectra with reference spectra using correlation coefficients, and by checking for rotational ambiguity in the ICA solutions. The methodology was applied to forensic samples to solve an automated teller machine explosion case. Independent component analysis proved to be a suitable curve resolution method, achieving performance equivalent to multivariate curve resolution with alternating least squares (MCR-ALS). At low concentrations, MCR-ALS showed some limitations, as it did not provide the correct solution. The detection limit of the methodology presented in this study was 50 μg cm−2. Copyright © 2014 Elsevier B.V. All rights reserved.
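A short sketch of the ICA unmixing step on an unfolded hyperspectral cube, using scikit-learn's FastICA as a stand-in for the authors' implementation; the number of components is an assumption.

```python
import numpy as np
from sklearn.decomposition import FastICA

def unmix_cube(cube: np.ndarray, n_components: int = 4):
    """cube: (rows, cols, wavenumbers) Raman hyperspectral image."""
    r, c, w = cube.shape
    X = cube.reshape(r * c, w)                  # pixels x spectral channels
    ica = FastICA(n_components=n_components, random_state=0)
    scores = ica.fit_transform(X)               # per-pixel abundances
    spectra = ica.mixing_.T                     # recovered pure-component spectra
    maps = scores.reshape(r, c, n_components)   # spatial distribution maps
    return spectra, maps

# Recovered spectra would then be matched to reference explosive spectra,
# e.g., by correlation coefficients as described above.
```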
Development and Current Status of Skull-Image Superimposition - Methodology and Instrumentation.
Lan, Y
1992-12-01
This article reviews the literature on, and evaluates, the development and application of skull-image superimposition technology - both instrumentation and methodology - contributed by a number of scholars since 1935. Along with a comparison of the methodologies involved in the two superimposition techniques - photographic and video - the author characterizes the techniques in practice and the recent advances in computerized image superimposition processing. The major disadvantage of the conventional approaches is their reliance on subjective interpretation. Through painstaking comparison and analysis, computerized image processing can produce more conclusive identifications by directly testing and evaluating the various programmed indices. Copyright © 1992 Central Police University.
Vergucht, Eva; Brans, Toon; Beunis, Filip; Garrevoet, Jan; Bauters, Stephen; De Rijcke, Maarten; Deruytter, David; Janssen, Colin; Riekel, Christian; Burghammer, Manfred; Vincze, Laszlo
2015-07-01
Recently, a radically new synchrotron radiation-based elemental imaging approach for the analysis of biological model organisms and single cells in their natural in vivo state was introduced. The methodology combines optical tweezers (OT) technology for non-contact laser-based sample manipulation with synchrotron radiation confocal X-ray fluorescence (XRF) microimaging for the first time at ESRF-ID13. The optical manipulation possibilities and limitations of biological model organisms, the OT setup developments for XRF imaging and the confocal XRF-related challenges are reported. In general, the applicability of the OT-based setup is extended with the aim of introducing the OT XRF methodology in all research fields where highly sensitive in vivo multi-elemental analysis is of relevance at the (sub)micrometre spatial resolution level.
Tularosa Basin Play Fairway Analysis: Methodology Flow Charts
Adam Brandt
2015-11-15
These images show the comprehensive methodology used for creation of a Play Fairway Analysis to explore the geothermal resource potential of the Tularosa Basin, New Mexico. The deterministic methodology was originated by the petroleum industry, but was custom-modified to function as a knowledge-based geothermal exploration tool. The stochastic PFA flow chart uses weights of evidence, and is data-driven.
NASA Astrophysics Data System (ADS)
Cabrera Debuc, Delia; Salinas, Harry M.; Ranganathan, Sudarshan; Tátrai, Erika; Gao, Wei; Shen, Meixiao; Wang, Jianhua; Somfai, Gábor M.; Puliafito, Carmen A.
2010-07-01
We demonstrate quantitative analysis and error correction of optical coherence tomography (OCT) retinal images by using a custom-built, computer-aided grading methodology. A total of 60 Stratus OCT (Carl Zeiss Meditec, Dublin, California) B-scans collected from ten normal healthy eyes are analyzed by two independent graders. The average retinal thickness per macular region is compared with the automated Stratus OCT results. Intergrader and intragrader reproducibility is calculated by Bland-Altman plots of the mean difference between both gradings and by Pearson correlation coefficients. In addition, the correlation between Stratus OCT and our methodology-derived thickness is also presented. The mean thickness difference between Stratus OCT and our methodology is 6.53 μm and 26.71 μm when using the inner segment/outer segment (IS/OS) junction and outer segment/retinal pigment epithelium (OS/RPE) junction as the outer retinal border, respectively. Overall, the median of the thickness differences as a percentage of the mean thickness is less than 1% and 2% for the intragrader and intergrader reproducibility test, respectively. The measurement accuracy range of the OCT retinal image analysis (OCTRIMA) algorithm is between 0.27 and 1.47 μm and 0.6 and 1.76 μm for the intragrader and intergrader reproducibility tests, respectively. Pearson correlation coefficients demonstrate R² > 0.98 for all Early Treatment Diabetic Retinopathy Study (ETDRS) regions. Our methodology facilitates a more robust and localized quantification of the retinal structure in normal healthy controls and patients with clinically significant intraretinal features.
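The reproducibility statistics used above reduce to a few lines; this sketch computes the Bland-Altman bias, the 95% limits of agreement, and the Pearson R², assuming 1-D arrays of per-region thickness means.

```python
import numpy as np

def bland_altman(grader_a: np.ndarray, grader_b: np.ndarray):
    diff = grader_a - grader_b
    bias = diff.mean()                    # mean difference between gradings
    half = 1.96 * diff.std(ddof=1)        # 95% limits-of-agreement half-width
    r = np.corrcoef(grader_a, grader_b)[0, 1]
    return bias, (bias - half, bias + half), r ** 2
```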
Segmentation-free image processing and analysis of precipitate shapes in 2D and 3D
NASA Astrophysics Data System (ADS)
Bales, Ben; Pollock, Tresa; Petzold, Linda
2017-06-01
Segmentation based image analysis techniques are routinely employed for quantitative analysis of complex microstructures containing two or more phases. The primary advantage of these approaches is that spatial information on the distribution of phases is retained, enabling subjective judgements of the quality of the segmentation and subsequent analysis process. The downside is that computing micrograph segmentations with data from morphologically complex microstructures gathered with error-prone detectors is challenging and, if no special care is taken, the artifacts of the segmentation will make any subsequent analysis and conclusions uncertain. In this paper we demonstrate, using a two phase nickel-base superalloy microstructure as a model system, a new methodology for analysis of precipitate shapes using a segmentation-free approach based on the histogram of oriented gradients feature descriptor, a classic tool in image analysis. The benefits of this methodology for analysis of microstructure in two and three-dimensions are demonstrated.
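The descriptor at the core of this segmentation-free approach is available off the shelf; a minimal sketch follows, with cell and block sizes as illustrative assumptions.

```python
from skimage.feature import hog

def precipitate_shape_signature(micrograph):
    """HOG feature vector summarizing local shape/orientation content."""
    return hog(
        micrograph,
        orientations=9,
        pixels_per_cell=(16, 16),
        cells_per_block=(2, 2),
        feature_vector=True,
    )

# Signatures from different samples can then be compared (e.g., by cosine
# distance) to quantify changes in precipitate morphology without segmenting.
```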
Veress, Alexander I.; Klein, Gregory; Gullberg, Grant T.
2013-01-01
The objectives of the following research were to evaluate the utility of a deformable image registration technique known as hyperelastic warping for the measurement of local strains in the left ventricle through the analysis of clinical, gated PET image datasets. Two normal human male subjects were sequentially imaged with PET and tagged MRI imaging. Strain predictions were made for systolic contraction using warping analyses of the PET images and HARP-based strain analyses of the MRI images. Coefficient of determination R² values were computed for the comparison of circumferential and radial strain predictions produced by each methodology. There was good correspondence between the methodologies, with R² values of 0.78 for the radial strains of both hearts, and R² = 0.81 and R² = 0.83 for the circumferential strains. The strain predictions were not statistically different (P ≤ 0.01). A series of sensitivity results indicated that the methodology was relatively insensitive to alterations in image intensity, random image noise, and alterations in fiber structure. This study demonstrated that warping was able to provide strain predictions of systolic contraction of the LV consistent with those provided by tagged MRI.
Collagen morphology and texture analysis: from statistics to classification
Mostaço-Guidolin, Leila B.; Ko, Alex C.-T.; Wang, Fei; Xiang, Bo; Hewko, Mark; Tian, Ganghong; Major, Arkady; Shiomi, Masashi; Sowa, Michael G.
2013-01-01
In this study we present an image analysis methodology capable of quantifying morphological changes in tissue collagen fibril organization caused by pathological conditions. Texture analysis based on first-order statistics (FOS) and second-order statistics such as the gray-level co-occurrence matrix (GLCM) was explored to extract second-harmonic generation (SHG) image features that are associated with the structural and biochemical changes of tissue collagen networks. Based on these extracted quantitative parameters, multi-group classification of SHG images was performed. With combined FOS and GLCM texture values, we achieved reliable classification of SHG collagen images acquired from atherosclerotic arteries with >90% accuracy, sensitivity and specificity. The proposed methodology can be applied to a wide range of conditions involving collagen re-modeling, such as in skin disorders, different types of fibrosis and musculoskeletal diseases affecting ligaments and cartilage. PMID:23846580
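A compact sketch of the FOS + GLCM feature extraction, assuming an 8-bit SHG image; the distances, angles, and chosen properties are illustrative.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def collagen_texture_features(shg: np.ndarray) -> dict:
    img = shg.astype(np.uint8)               # assumes 8-bit intensity input
    feats = {"mean": float(img.mean()), "std": float(img.std())}  # first-order stats
    glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    for prop in ("contrast", "homogeneity", "energy", "correlation"):
        feats[prop] = float(graycoprops(glcm, prop).mean())
    return feats

# The resulting feature vector can feed any multi-group classifier.
```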
Athanasiou, Lambros; Sakellarios, Antonis I; Bourantas, Christos V; Tsirka, Georgia; Siogkas, Panagiotis; Exarchos, Themis P; Naka, Katerina K; Michalis, Lampros K; Fotiadis, Dimitrios I
2014-07-01
Optical coherence tomography and intravascular ultrasound are the most widely used methodologies in clinical practice as they provide high resolution cross-sectional images that allow comprehensive visualization of the lumen and plaque morphology. Several methods have been developed in recent years to process the output of these imaging modalities, which allow fast, reliable and reproducible detection of the luminal borders and characterization of plaque composition. These methods have proven useful in the study of the atherosclerotic process as they have facilitated analysis of a vast amount of data. This review presents currently available intravascular ultrasound and optical coherence tomography processing methodologies for segmenting and characterizing the plaque area, highlighting their advantages and disadvantages, and discusses the future trends in intravascular imaging.
Delakis, Ioannis; Wise, Robert; Morris, Lauren; Kulama, Eugenia
2015-11-01
The purpose of this work was to evaluate the contrast-detail performance of full-field digital mammography (FFDM) systems using ideal (Hotelling) observer signal-to-noise ratio (SNR) methodology and to ascertain whether it can be considered an alternative to the conventional, automated analysis of CDMAM phantom images. Five FFDM units currently used in the national breast screening programme were evaluated, which differed with respect to age, detector, Automatic Exposure Control (AEC) and target/filter combination. Contrast-detail performance was analysed using CDMAM and ideal observer SNR methodology. The ideal observer SNR was calculated for input signals originating from gold discs of varying thicknesses and diameters, and then used to estimate the threshold gold thickness for each diameter as per CDMAM analysis. The variability of both methods and the dependence of CDMAM analysis on phantom manufacturing discrepancies were also investigated. Results from both CDMAM and ideal observer methodologies were informative differentiators of FFDM systems' contrast-detail performance, displaying comparable patterns with respect to the FFDM systems' type and age. CDMAM results suggested higher threshold gold thickness values compared with the ideal observer methodology, especially for small-diameter details, which can be attributed to the behaviour of the CDMAM phantom used in this study. In addition, the ideal observer methodology results showed lower variability than the CDMAM results. The ideal observer SNR methodology can provide a useful metric of FFDM systems' contrast-detail characteristics and could be considered a surrogate for conventional, automated analysis of CDMAM images. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
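For a known signal in correlated noise, the Hotelling observer SNR is SNR² = sᵀ K⁻¹ s, with s the expected signal (e.g., a gold-disc template) and K the noise covariance; a minimal sketch, in which the template and covariance estimation details are assumptions:

```python
import numpy as np

def hotelling_snr(signal: np.ndarray, noise_samples: np.ndarray) -> float:
    """signal: flattened expected disc signal; noise_samples: (n, pixels)."""
    K = np.cov(noise_samples, rowvar=False)
    K += 1e-6 * np.eye(K.shape[0])            # regularize for invertibility
    return float(np.sqrt(signal @ np.linalg.solve(K, signal)))

# Sweeping disc diameter and thickness and solving SNR = threshold yields a
# contrast-detail curve comparable to the CDMAM result described above.
```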
Digital-image processing and image analysis of glacier ice
Fitzpatrick, Joan J.
2013-01-01
This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
Healy, Sinead; McMahon, Jill; Owens, Peter; Dockery, Peter; FitzGerald, Una
2018-02-01
Image segmentation is often imperfect, particularly in complex image sets such as z-stack micrographs of slice cultures, and there is a need for sufficient detail of the parameters used in quantitative image analysis to allow independent repeatability and appraisal. For the first time, we have critically evaluated, quantified and validated the performance of different segmentation methodologies using z-stack images of ex vivo glial cells. The BioVoxxel toolbox plugin, available in FIJI, was used to measure the relative quality, accuracy, specificity and sensitivity of 16 global and 9 local automatic thresholding algorithms. Automatic thresholding yields improved binary representation of glial cells compared with the conventional user-chosen single-threshold approach for confocal z-stacks acquired from ex vivo slice cultures. The performance of threshold algorithms varies considerably in quality, specificity, accuracy and sensitivity, with entropy-based thresholds scoring highest for fluorescent staining. We have used the BioVoxxel toolbox to correctly and consistently select the best automated threshold algorithm to segment z-projected images of ex vivo glial cells for downstream digital image analysis and to define segmentation quality. The automated OLIG2 cell count was validated using stereology. As image segmentation and feature extraction can quite critically affect the performance of successive steps in the image analysis workflow, it is becoming increasingly necessary to consider the quality of digital segmentation methodologies. Here, we have applied, validated and extended an existing performance-check methodology in the BioVoxxel toolbox to z-projected images of ex vivo glial cells. Copyright © 2017 Elsevier B.V. All rights reserved.
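The benchmarking loop can be sketched with a few scikit-image global thresholds standing in for the BioVoxxel algorithm set; the ground truth is assumed to be a manually validated boolean mask.

```python
import numpy as np
from skimage.filters import threshold_li, threshold_otsu, threshold_yen

def benchmark_thresholds(zproj: np.ndarray, ground_truth: np.ndarray) -> dict:
    """ground_truth: boolean mask of the same shape as the z-projection."""
    results = {}
    for name, thresh in [("otsu", threshold_otsu), ("yen", threshold_yen),
                         ("li", threshold_li)]:
        mask = zproj > thresh(zproj)
        tp = np.sum(mask & ground_truth)
        tn = np.sum(~mask & ~ground_truth)
        fp = np.sum(mask & ~ground_truth)
        fn = np.sum(~mask & ground_truth)
        results[name] = {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / mask.size,
        }
    return results
```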
Mollison, Daisy; Sellar, Robin; Bastin, Mark; Mollison, Denis; Chandran, Siddharthan; Wardlaw, Joanna; Connick, Peter
2017-01-01
Background: Moderate correlation exists between the imaging quantification of brain white matter lesions and cognitive performance in people with multiple sclerosis (MS). This may reflect the greater importance of other features, including subvisible pathology, or methodological limitations of the primary literature. Objective: To summarise the cognitive clinico-radiological paradox and explore the potential methodological factors that could influence the assessment of this relationship. Methods: Systematic review and meta-analysis of primary research relating cognitive function to white matter lesion burden. Results: Fifty papers met eligibility criteria for review, and meta-analysis of overall results was possible in thirty-two (2050 participants). Aggregate correlation between cognition and T2 lesion burden was r = -0.30 (95% confidence interval: -0.34, -0.26). Wide methodological variability was seen, particularly related to key factors in the cognitive data capture and image analysis techniques. Conclusions: Resolving the persistent clinico-radiological paradox will likely require simultaneous evaluation of multiple components of the complex pathology using optimum measurement techniques for both cognitive and MRI feature quantification. We recommend a consensus initiative to support common standards for image analysis in MS, enabling benchmarking while also supporting ongoing innovation.
Nguyen, Hien D; Ullmann, Jeremy F P; McLachlan, Geoffrey J; Voleti, Venkatakaushik; Li, Wenze; Hillman, Elizabeth M C; Reutens, David C; Janke, Andrew L
2018-02-01
Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques are enabling visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology, via Gaussian mixtures, is proposed for clustering data from such visualizations. The methodology is theoretically justified and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.
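A condensed sketch of the idea: summarize each trace by coefficients in a small basis, then fit a Gaussian mixture. The cosine basis and cluster count are illustrative assumptions, not the paper's exact model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_traces(traces: np.ndarray, n_clusters: int = 5, n_basis: int = 10):
    """traces: (n_voxels, n_timepoints) fluorescence time series."""
    t = np.linspace(0, 1, traces.shape[1])
    basis = np.vstack([np.cos(np.pi * k * t) for k in range(n_basis)]).T
    coefs, *_ = np.linalg.lstsq(basis, traces.T, rcond=None)  # functional coefficients
    gmm = GaussianMixture(n_components=n_clusters, random_state=0)
    return gmm.fit_predict(coefs.T)                           # cluster label per voxel
```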
NASA Astrophysics Data System (ADS)
Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.
2018-05-01
A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each has different methodological bases, advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system's stability. The stability of the DF system was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE) for single-camera analysis, and 0.07 to 0.19 mm for dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
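The core comparison can be sketched with a bare collinearity (pinhole) projection, ignoring principal-point and distortion terms; the calibration tuples (f, R, c) and the simulated point grid are assumptions.

```python
import numpy as np

def project(points: np.ndarray, f: float, R: np.ndarray, c: np.ndarray):
    """Collinearity model: world points (n, 3) -> image coordinates (n, 2)."""
    p = (R @ (points - c).T).T          # rotate translated points into camera frame
    return -f * p[:, :2] / p[:, 2:3]

def stability_rmse(points: np.ndarray, calib_a: tuple, calib_b: tuple) -> float:
    """RMSE between coordinates projected with two calibration datasets."""
    d = project(points, *calib_a) - project(points, *calib_b)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))
```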
Biological Parametric Mapping: A Statistical Toolbox for Multi-Modality Brain Image Analysis
Casanova, Ramon; Ryali, Srikanth; Baer, Aaron; Laurienti, Paul J.; Burdette, Jonathan H.; Hayasaka, Satoru; Flowers, Lynn; Wood, Frank; Maldjian, Joseph A.
2006-01-01
In recent years multiple brain MR imaging modalities have emerged; however, analysis methodologies have mainly remained modality specific. In addition, when comparing across imaging modalities, most researchers have been forced to rely on simple region-of-interest type analyses, which do not allow the voxel-by-voxel comparisons necessary to answer more sophisticated neuroscience questions. To overcome these limitations, we developed a toolbox for multimodal image analysis called biological parametric mapping (BPM), based on a voxel-wise use of the general linear model. The BPM toolbox incorporates information obtained from other modalities as regressors in a voxel-wise analysis, thereby permitting investigation of more sophisticated hypotheses. The BPM toolbox has been developed in MATLAB with a user-friendly interface for performing analyses, including voxel-wise multimodal correlation, ANCOVA, and multiple regression. It has a high degree of integration with the SPM (statistical parametric mapping) software, relying on it for visualization and statistical inference. Furthermore, statistical inference for a correlation field, rather than the widely used T-field, has been implemented in the correlation analysis for more accurate results. An example with in-vivo data is presented demonstrating the potential of the BPM methodology as a tool for multimodal image analysis. PMID:17070709
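The voxel-wise GLM at the heart of BPM reduces to an ordinary regression at every voxel; a minimal numpy sketch (not the toolbox's MATLAB code), with assumed array shapes:

```python
import numpy as np

def voxelwise_glm(y_img: np.ndarray, x_img: np.ndarray, covariates: np.ndarray):
    """y_img, x_img: (n_subjects, n_voxels); covariates: (n_subjects, p)."""
    n, v = y_img.shape
    tmap = np.empty(v)
    for j in range(v):                    # independent GLM at every voxel
        X = np.column_stack([np.ones(n), x_img[:, j], covariates])
        beta, *_ = np.linalg.lstsq(X, y_img[:, j], rcond=None)
        resid = y_img[:, j] - X @ beta
        sigma2 = resid @ resid / (n - X.shape[1])
        covb = sigma2 * np.linalg.inv(X.T @ X)
        tmap[j] = beta[1] / np.sqrt(covb[1, 1])   # t-stat for the imaging regressor
    return tmap
```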
Mesquita, D P; Dias, O; Amaral, A L; Ferreira, E C
2009-04-01
In recent years, a great deal of attention has been focused on research into activated sludge processes, where the solid-liquid separation phase is frequently considered of critical importance because of the different problems that severely affect the compaction and settling of the sludge. Bearing that in mind, in this work image analysis routines were developed in the Matlab environment, allowing the identification and characterization of microbial aggregates and protruding filaments in eight different wastewater treatment plants over a combined period of 2 years. The monitoring of the activated sludge contents allowed the detection of bulking events, showing that the developed image analysis methodology is adequate for continuous examination of the morphological changes in microbial aggregates and subsequent estimation of the sludge volume index. The obtained results thus proved the developed image analysis methodology to be a feasible method for the continuous monitoring of activated sludge systems and the identification of disturbances.
A methodology for the semi-automatic digital image analysis of fragmental impactites
NASA Astrophysics Data System (ADS)
Chanou, A.; Osinski, G. R.; Grieve, R. A. F.
2014-04-01
A semi-automated digital image analysis method is developed for the comparative textural study of impact melt-bearing breccias. This method uses the freeware software ImageJ developed by the National Institute of Health (NIH). Digital image analysis is performed on scans of hand samples (10-15 cm across), based on macroscopic interpretations of the rock components. All image processing and segmentation are done semi-automatically, with the least possible manual intervention. The areal fraction of components is estimated and modal abundances can be deduced, where the physical optical properties (e.g., contrast, color) of the samples allow it. Other parameters that can be measured include, for example, clast size, clast-preferred orientations, average box-counting dimension or fragment shape complexity, and nearest neighbor distances (NnD). This semi-automated method allows the analysis of a larger number of samples in a relatively short time. Textures, granulometry, and shape descriptors are of considerable importance in rock characterization. The methodology is used to determine the variations of the physical characteristics of some examples of fragmental impactites.
Super-Resolution Imaging Strategies for Cell Biologists Using a Spinning Disk Microscope
Hosny, Neveen A.; Song, Mingying; Connelly, John T.; Ameer-Beg, Simon; Knight, Martin M.; Wheeler, Ann P.
2013-01-01
In this study we use a spinning disk confocal microscope (SD) to generate super-resolution images of multiple cellular features from any plane in the cell. We obtain super-resolution images by using stochastic intensity fluctuations of biological probes, combining Photoactivated Localization Microscopy (PALM)/Stochastic Optical Reconstruction Microscopy (STORM) methodologies. We compared different image analysis algorithms for processing super-resolution data to identify the most suitable for analysis of particular cell structures. SOFI was chosen for X and Y and was able to achieve a resolution of ca. 80 nm; however, higher resolution (>30 nm) was possible, dependent on the super-resolution image analysis algorithm used. Our method uses low laser power and fluorescent probes which are available either commercially or through the scientific community, and therefore it is gentle enough for biological imaging. Through comparative studies with structured illumination microscopy (SIM) and widefield epifluorescence imaging we identified that our methodology was advantageous for imaging cellular structures which are not immediately at the cell-substrate interface, including the nuclear architecture and mitochondria. We have shown that it is possible to obtain two-colour images, which highlights the potential this technique has for high-content screening, imaging of multiple epitopes and live cell imaging. PMID:24130668
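For reference, the lowest-order SOFI computation used in pipelines like this is simply the per-pixel temporal variance (the second-order, zero-lag cumulant) of the fluctuation movie; a minimal sketch, assuming an adequate frame count and probe blinking statistics:

```python
import numpy as np

def sofi2(stack: np.ndarray) -> np.ndarray:
    """stack: (n_frames, rows, cols) fluctuation movie -> SOFI-2 image."""
    fluct = stack - stack.mean(axis=0, keepdims=True)  # remove the static mean
    return (fluct ** 2).mean(axis=0)                   # per-pixel variance image
```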
2014-01-01
Background: Digital image analysis has the potential to address issues surrounding traditional histological techniques including a lack of objectivity and high variability, through the application of quantitative analysis. A key initial step in image analysis is the identification of regions of interest. A widely applied methodology is that of segmentation. This paper proposes the application of image analysis techniques to segment skin tissue with varying degrees of histopathological damage. The segmentation of human tissue is challenging as a consequence of the complexity of the tissue structures and inconsistencies in tissue preparation, hence there is a need for a new robust method with the capability to handle the additional challenges materialising from histopathological damage. Methods: A new algorithm has been developed which combines enhanced colour information, created following a transformation to the L*a*b* colourspace, with general image intensity information. A colour normalisation step is included to enhance the algorithm's robustness to variations in the lighting and staining of the input images. The resulting optimised image is subjected to thresholding and the segmentation is fine-tuned using a combination of morphological processing and object classification rules. The segmentation algorithm was tested on 40 digital images of haematoxylin & eosin (H&E) stained skin biopsies. Accuracy, sensitivity and specificity of the algorithmic procedure were assessed through the comparison of the proposed methodology against manual methods. Results: Experimental results show the proposed fully automated methodology segments the epidermis with a mean specificity of 97.7%, a mean sensitivity of 89.4% and a mean accuracy of 96.5%. When a simple user interaction step is included, the specificity increases to 98.0%, the sensitivity to 91.0% and the accuracy to 96.8%. The algorithm segments effectively for different severities of tissue damage. Conclusions: Epidermal segmentation is a crucial first step in a range of applications including melanoma detection and the assessment of histopathological damage in skin. The proposed methodology is able to segment the epidermis with different levels of histological damage. The basic method framework could be applied to segmentation of other epithelial tissues. PMID:24521154
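A condensed sketch of the reported steps (normalization, L*a*b* transform, thresholding, morphological clean-up); the channel weights, object-size cutoff, and structuring element are assumptions, and the object classification rules are omitted.

```python
import numpy as np
from skimage import color, filters, morphology

def segment_epidermis(rgb: np.ndarray) -> np.ndarray:
    norm = (rgb - rgb.min()) / (rgb.max() - rgb.min())  # crude colour normalisation
    lab = color.rgb2lab(norm)
    # Combine colour (a* channel, sensitive to eosin staining) with lightness.
    feature = lab[..., 1] - 0.5 * lab[..., 0]
    mask = feature > filters.threshold_otsu(feature)
    mask = morphology.remove_small_objects(mask, min_size=500)
    return morphology.binary_closing(mask, morphology.disk(5))
```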
Churilov, Leonid; Liu, Daniel; Ma, Henry; Christensen, Soren; Nagakane, Yoshinari; Campbell, Bruce; Parsons, Mark W; Levi, Christopher R; Davis, Stephen M; Donnan, Geoffrey A
2013-04-01
The appropriateness of a software platform for rapid MRI assessment of the amount of salvageable brain tissue after stroke is critical for both the validity of the Extending the Time for Thrombolysis in Emergency Neurological Deficits (EXTEND) Clinical Trial of stroke thrombolysis beyond 4.5 hours and for stroke patient care outcomes. The objective of this research is to develop and implement a methodology for selecting the acute stroke imaging software platform most appropriate for the setting of a multi-centre clinical trial. A multi-disciplinary decision making panel formulated the set of preferentially independent evaluation attributes. Alternative Multi-Attribute Value Measurement methods were used to identify the best imaging software platform followed by sensitivity analysis to ensure the validity and robustness of the proposed solution. Four alternative imaging software platforms were identified. RApid processing of PerfusIon and Diffusion (RAPID) software was selected as the most appropriate for the needs of the EXTEND trial. A theoretically grounded generic multi-attribute selection methodology for imaging software was developed and implemented. The developed methodology assured both a high quality decision outcome and a rational and transparent decision process. This development contributes to stroke literature in the area of comprehensive evaluation of MRI clinical software. At the time of evaluation, RAPID software presented the most appropriate imaging software platform for use in the EXTEND clinical trial. The proposed multi-attribute imaging software evaluation methodology is based on sound theoretical foundations of multiple criteria decision analysis and can be successfully used for choosing the most appropriate imaging software while ensuring both robust decision process and outcomes. © 2012 The Authors. International Journal of Stroke © 2012 World Stroke Organization.
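A weighted additive value model, the simplest multi-attribute value measurement form, can be sketched as follows; the attributes, weights, and scores are invented for illustration and do not reproduce the panel's actual judgements.

```python
import numpy as np

def rank_platforms(scores: np.ndarray, weights: np.ndarray, names: list):
    """scores: (alternatives, attributes) on a common 0-1 value scale."""
    w = weights / weights.sum()           # normalize attribute weights
    overall = scores @ w                  # weighted additive overall value
    order = np.argsort(overall)[::-1]
    return [(names[i], float(overall[i])) for i in order]

# Example with hypothetical numbers only:
# rank_platforms(np.array([[0.9, 0.7], [0.6, 0.8]]),
#                np.array([2.0, 1.0]), ["RAPID", "platform B"])
```

Sensitivity analysis then amounts to perturbing the weights and checking whether the top-ranked alternative changes.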
ERIC Educational Resources Information Center
Sims, Judy R.; Giordano, Joseph
A research study assessed the amount of front-page newspaper coverage allotted to "character/competence/image" issues versus "platform/political" issues in the 1992 presidential campaign. Using the textual analysis methodology of content analysis, researchers coded the front pages of the following five newspapers between August 1 and…
Correcting sample drift using Fourier harmonics.
Bárcena-González, G; Guerrero-Lebrero, M P; Guerrero, E; Reyes, D F; Braza, V; Yañez, A; Nuñez-Moraleda, B; González, D; Galindo, P L
2018-07-01
During image acquisition of crystalline materials by high-resolution scanning transmission electron microscopy, sample drift can lead to distortions and shears that hinder quantitative analysis and characterization. In order to measure and correct this effect, several authors have proposed methodologies that make use of series of images. In this work, we introduce a methodology to determine the drift angle via Fourier analysis of a single image, based on measurements of the angles of the second Fourier harmonics in different quadrants. Two different approaches, both independent of the acquisition angle of the image, are evaluated. In addition, our results demonstrate that the determination of the drift angle is more accurate when using measurements from non-consecutive quadrants when the acquisition angle is an odd multiple of 45°. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimization of oncological 18F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in 18F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
NASA Astrophysics Data System (ADS)
Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.
2012-05-01
The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautic composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data are processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation, managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.
Segmentation of medical images using explicit anatomical knowledge
NASA Astrophysics Data System (ADS)
Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee
1999-07-01
Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures from the representation of knowledge. Such an architecture is particularly suitable for medical image segmentation because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. This method has been applied to three separate problems: 3D thoracic CT, chest X-rays and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods that incorporate representations of anatomical knowledge.
Assessment of cluster yield components by image analysis.
Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose
2015-04-01
Berry weight, berry number and cluster weight are key parameters of yield estimation for the wine and table-grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R² between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image analysis-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
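The berry-detection chain (Canny edges followed by a circular Hough transform) maps directly onto scikit-image; the radius range and peak count below are assumptions that depend on image scale and cluster size.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_berries(gray: np.ndarray, r_min: int = 15, r_max: int = 40):
    edges = canny(gray, sigma=2.0)                 # berry contours
    radii = np.arange(r_min, r_max, 2)
    accumulator = hough_circle(edges, radii)       # circular Hough transform
    _, cx, cy, r = hough_circle_peaks(accumulator, radii, total_num_peaks=200,
                                      min_xdistance=10, min_ydistance=10)
    return np.column_stack([cx, cy, r])            # one row per detected berry
```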
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, by assembling a series of image processing operations. Many commercial or free bioimage analysis software are now available and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology to rationally address the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists to better apprehend image analysis in the context of their research and should allow them to efficiently interact with image processing specialists.
Samsi, Siddharth; Krishnamurthy, Ashok K.; Gurcan, Metin N.
2012-01-01
Follicular lymphoma (FL) is one of the most common non-Hodgkin lymphomas in the United States. Diagnosis and grading of FL are based on the review of histopathological tissue sections under a microscope and are influenced by human factors such as fatigue and reader bias. Computer-aided image analysis tools can help improve the accuracy of diagnosis and grading and act as another tool at the pathologist's disposal. Our group has been developing algorithms for identifying follicles in immunohistochemical images. These algorithms have been tested and validated on small images extracted from whole slide images. However, using these algorithms to analyze the entire whole slide image requires significant changes to the processing methodology, since the images are relatively large (on the order of 100k × 100k pixels). In this paper we discuss the challenges involved in analyzing whole slide images and propose potential computational methodologies for addressing them. We discuss the use of parallel computing tools on commodity clusters and compare the performance of serial and parallel implementations of our approach. PMID:22962572
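The standard way to make a 100k × 100k-pixel slide tractable is to tile it and farm the tiles out to worker processes. The sketch below is a generic illustration of that pattern, not the authors' implementation: a real pipeline would read tiles lazily (e.g. via OpenSlide) rather than hold the slide in memory, and `analyze_tile` stands in for the actual follicle-detection algorithm.

```python
from multiprocessing import Pool
import numpy as np

TILE = 4096  # tile edge length in pixels; an assumption for illustration

def analyze_tile(args):
    """Run the per-tile analysis step (placeholder for follicle detection)."""
    x, y, tile = args
    score = float(tile.mean())  # stand-in for the real segmentation algorithm
    return x, y, score

def tiles(image):
    """Yield non-overlapping tiles of a slide held as a NumPy array."""
    h, w = image.shape[:2]
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            yield x, y, image[y:y + TILE, x:x + TILE]

if __name__ == "__main__":
    slide = np.random.randint(0, 255, (16384, 16384), dtype=np.uint8)  # toy stand-in
    with Pool() as pool:
        results = pool.map(analyze_tile, tiles(slide))
    print(f"processed {len(results)} tiles")
```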
Kopriva, Ivica; Hadžija, Mirko; Popović Hadžija, Marijana; Korolija, Marina; Cichocki, Andrzej
2011-01-01
A methodology is proposed for nonlinear contrast-enhanced unsupervised segmentation of multispectral (color) microscopy images of principally unstained specimens. The methodology exploits spectral diversity and spatial sparseness to find anatomical differences between the materials (cells, nuclei, and background) present in the image. It consists of r-th-order rational variety mapping (RVM) followed by matrix/tensor factorization. The sparseness constraint implies a duality between nonlinear unsupervised segmentation and multiclass pattern assignment problems. Classes not linearly separable in the original input space become separable with high probability in the higher-dimensional mapped space. Hence, RVM mapping has two advantages: it implicitly takes into account nonlinearities present in the image (i.e., they are not required to be known) and it increases the spectral diversity (i.e., contrast) between materials, owing to the increased dimensionality of the mapped space. This is expected to improve the performance of systems for automated classification and analysis of microscopic histopathological images. The methodology was validated using RVM of the second and third orders on experimental multispectral microscopy images of unstained sciatic nerve fibers (nervus ischiadicus) and of unstained white pulp in spleen tissue, compared with a manually defined ground truth labeled by two trained pathophysiologists. The methodology can also be useful for additional contrast enhancement of images of stained specimens. PMID:21708116
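As a hedged analogy only: a second-order RVM of the per-pixel spectral vector resembles a degree-2 polynomial feature map, and nonnegative matrix factorization is one concrete instance of the "matrix factorization with sparseness" step. The sketch below illustrates that map-then-factorize pattern with scikit-learn; it is not the authors' algorithm, and the toy data are synthetic.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.decomposition import NMF

# Toy multispectral image: H x W pixels, B channels with values in [0, 1].
H, W, B = 64, 64, 3
img = np.random.rand(H, W, B)
X = img.reshape(-1, B)  # one row per pixel

# Second-order mapping: monomials up to degree 2 of the channel values,
# standing in for the paper's rational variety mapping.
X2 = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

# Nonnegative matrix factorization separates the mapped pixels into a few
# material classes (cells, nuclei, background -> n_components=3).
nmf = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
codes = nmf.fit_transform(X2)                 # per-pixel class abundances
labels = codes.argmax(axis=1).reshape(H, W)   # hard segmentation map
```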
Detection of white matter lesion regions in MRI using SLIC0 and convolutional neural network.
Diniz, Pedro Henrique Bandeira; Valente, Thales Levi Azevedo; Diniz, João Otávio Bandeira; Silva, Aristófanes Corrêa; Gattass, Marcelo; Ventura, Nina; Muniz, Bernardo Carvalho; Gasparetto, Emerson Leandro
2018-04-19
White matter lesions are non-static brain lesions with a prevalence rate of up to 98% in the elderly population. Because they may be associated with several brain diseases, it is important that they are detected as soon as possible. Magnetic resonance imaging (MRI) provides three-dimensional data with the ability to detect and emphasize contrast differences in soft tissues, providing rich information about human soft tissue anatomy. However, the amount of data provided by these images is far too large for manual analysis/interpretation, making this a difficult and time-consuming task for specialists. This work presents a computational methodology capable of detecting regions of white matter lesions in FLAIR-modality brain MRI. The techniques highlighted in this methodology are SLIC0 clustering for candidate segmentation and convolutional neural networks for candidate classification. The methodology proposed here consists of four steps: (1) image acquisition, (2) image preprocessing, (3) candidate segmentation and (4) candidate classification. The methodology was applied to 91 magnetic resonance images provided by DASA and achieved an accuracy of 98.73%, a specificity of 98.77% and a sensitivity of 78.79%, with a false-positive rate of 0.005, without any false-positive reduction technique, in the detection of white matter lesion regions. These results demonstrate the feasibility of using SLIC0 and convolutional neural network techniques to detect white matter lesion regions in brain MRI.
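The candidate-generation step can be sketched with scikit-image, whose `slic` function exposes the zero-parameter SLIC0 variant via `slic_zero=True` (scikit-image ≥ 0.19 for the `channel_axis` argument). The CNN classifier is replaced here by a placeholder brightness test; the slice itself is a synthetic stand-in.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

# One axial FLAIR slice as a 2D float array (toy stand-in here).
slice_img = np.random.rand(256, 256)

# SLIC0: the zero-parameter variant of SLIC, enabled via slic_zero=True.
segments = slic(slice_img, n_segments=400, slic_zero=True, channel_axis=None)

# Each superpixel becomes a lesion candidate; in the paper each candidate
# is passed to a trained CNN, replaced here by a brightness heuristic.
candidates = [
    region.label
    for region in regionprops(segments, intensity_image=slice_img)
    if region.mean_intensity > 0.7  # placeholder for CNN classification
]
print(f"{len(candidates)} candidate regions flagged")
```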
Shinde, V; Burke, K E; Chakravarty, A; Fleming, M; McDonald, A A; Berger, A; Ecsedy, J; Blakemore, S J; Tirrell, S M; Bowman, D
2014-01-01
Immunohistochemistry-based biomarkers are commonly used to understand target inhibition in key cancer pathways in preclinical models and clinical studies. Automated slide-scanning and advanced high-throughput image analysis software technologies have evolved into a routine methodology for quantitative analysis of immunohistochemistry-based biomarkers. Alongside the traditional pathology H-score based on physical slides, the pathology world is welcoming digital pathology and advanced quantitative image analysis, which have enabled tissue- and cellular-level analysis. An automated workflow was implemented that includes automated staining, slide-scanning, and image analysis methodologies to explore biomarkers involved in 2 cancer targets: Aurora A and NEDD8-activating enzyme (NAE). The 2 workflows highlight the evolution of our immunohistochemistry laboratory and the different needs and requirements of each biological assay. Skin biopsies obtained from MLN8237 (Aurora A inhibitor) phase 1 clinical trials were evaluated for mitotic and apoptotic index, while mitotic index and defects in chromosome alignment and spindles were assessed in tumor biopsies to demonstrate Aurora A inhibition. Additionally, in both preclinical xenograft models and an acute myeloid leukemia phase 1 trial of the NAE inhibitor MLN4924, development of a novel image algorithm enabled measurement of downstream pathway modulation upon NAE inhibition. In the highlighted studies, developing a biomarker strategy based on automated image analysis solutions enabled project teams to confirm target and pathway inhibition and understand downstream outcomes of target inhibition with increased throughput and quantitative accuracy. These case studies demonstrate a strategy that combines a pathologist's expertise with automated image analysis to support oncology drug discovery and development programs.
Speed-Accuracy Tradeoffs in Speech Production
2017-06-01
…imaging data of speech production. A theoretical framework for considering Fitts' law in the domain of speech production is elucidated. Methodological […] articulatory kinematics conform to Fitts' law. A second, associated goal is to address the methodological challenges inherent in performing Fitts-style analysis on rtMRI data of speech production. Methodological challenges include segmenting continuous speech into specific motor tasks, defining key…
Rosa C. Goodman; Douglass F. Jacobs; Robert P. Karrfalt
2006-01-01
This paper discusses the potential to use X-ray image analysis as a rapid and nondestructive test of viability of northern red oak (Quercus rubra L.) acorns and the methodology to do so. Acorns are sensitive to desiccation and lose viability as moisture content (MC) decreases, so we examined X-ray images for cotyledon damage in dried acorns to...
NASA Astrophysics Data System (ADS)
Tene, Yair; Tene, Noam; Tene, G.
1993-08-01
An interactive data fusion methodology combining video, audio, and nonlinear structural dynamic analysis, with potential application in forensic engineering, is presented. The methodology was developed and successfully demonstrated in the analysis of the collapse of a heavy transportable bridge during preparation for testing. Multiple bridge element failures were identified after the collapse, including fracture, cracking and rupture of high-performance structural materials. Videotape recorded by a hand-held camcorder was the only source of information about the collapse sequence. The interactive data fusion methodology extracted the relevant information from the videotape and from dynamic nonlinear structural analysis, leading to a full account of the sequence of events during the bridge collapse.
GPR image analysis to locate water leaks from buried pipes by applying variance filters
NASA Astrophysics Data System (ADS)
Ocaña-Levario, Silvia J.; Carreño-Alvarado, Elizabeth P.; Ayala-Cabrera, David; Izquierdo, Joaquín
2018-05-01
Nowadays there is growing interest in controlling and reducing the amount of water lost through leakage in water supply systems (WSSs). Leakage is, in fact, one of the biggest problems faced by the managers of these utilities. This work addresses the problem of leakage in WSSs by using ground penetrating radar (GPR) as a non-destructive method. The main objective is to identify and extract features such as leaks and pipe components from GPR images under controlled laboratory conditions, using a methodology based on second-order statistical parameters, and, from the extracted features, to create 3D models that allow quick visualization of components and leaks in WSSs from GPR image analysis and subsequent interpretation. This methodology has been used before in other fields and has provided promising results. The results obtained with the proposed methodology are presented, analyzed, interpreted and compared with those obtained using a well-established multi-agent-based methodology. They show that the variance filter is capable of highlighting the characteristics of components and anomalies in an intuitive manner, so that they can be identified by non-specialist personnel using the 3D models we develop. This research intends to pave the way towards future intelligent detection systems that enable the automatic detection of leaks in WSSs.
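The second-order statistic at the heart of the method, the local variance, follows the identity Var[x] = E[x²] − (E[x])² computed over a sliding window. A minimal NumPy/SciPy sketch (window size and threshold are illustrative assumptions, and the radargram is synthetic):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def variance_filter(image, size=5):
    """Local variance in a size x size window: E[x^2] - (E[x])^2."""
    mean = uniform_filter(image, size)
    mean_sq = uniform_filter(image ** 2, size)
    return mean_sq - mean ** 2

# Toy B-scan: a GPR radargram as a 2D array (traces x samples).
radargram = np.random.randn(256, 512)
highlighted = variance_filter(radargram, size=7)

# High-variance regions flag scattering structures such as pipes or
# leak-saturated soil; a threshold picks them out for 3D modelling.
mask = highlighted > np.percentile(highlighted, 95)
```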
Focus control enhancement and on-product focus response analysis methodology
NASA Astrophysics Data System (ADS)
Kim, Young Ki; Chen, Yen-Jen; Hao, Xueli; Samudrala, Pavan; Gomez, Juan-Manuel; Mahoney, Mark O.; Kamalizadeh, Ferhad; Hanson, Justin K.; Lee, Shawn; Tian, Ye
2016-03-01
With decreasing critical depth of focus (CDOF) for 20/14 nm technology and beyond, focus errors are becoming increasingly critical for on-product performance. Current on-product focus control techniques in high-volume manufacturing are limited: it is difficult to define measurable focus error and to optimize the focus response on product with existing methods, owing to the lack of credible focus measurement methodologies. Alongside developments in scanner imaging and focus control capability and general tool stability maintenance, on-product focus control improvements are also required to meet on-product imaging specifications. In this paper, we discuss focus monitoring, wafer (edge) fingerprint correction and on-product focus budget analysis using the diffraction-based focus (DBF) measurement methodology. Several examples are presented showing improved focus response and control on product wafers. A method is also discussed for a focus interlock automation system on product in a high-volume manufacturing (HVM) environment.
Plancoulaine, Benoit; Laurinaviciene, Aida; Herlin, Paulette; Besusparis, Justinas; Meskauskas, Raimundas; Baltrusaityte, Indra; Iqbal, Yasir; Laurinavicius, Arvydas
2015-10-19
Digital image analysis (DIA) enables higher accuracy, reproducibility, and capacity in enumerating cell populations by immunohistochemistry; however, the most unique benefits may be obtained by evaluating the spatial distribution and intra-tissue variance of markers. The proliferative activity of breast cancer tissue, estimated by the Ki67 labeling index (Ki67 LI), is a prognostic and predictive biomarker requiring robust measurement methodologies. We performed DIA on whole-slide images (WSI) of 302 surgically removed Ki67-stained breast cancer specimens; the tumor classifier algorithm was used to detect tumor tissue automatically but was not trained to distinguish between invasive and non-invasive carcinoma cells. The WSI DIA-generated data were subsampled by hexagonal tiling (HexT). Distribution and texture parameters were compared to conventional WSI DIA and pathology report data. Factor analysis of the data set, including total numbers of tumor cells, the Ki67 LI, and Ki67 distribution and texture indicators, extracted 4 factors, identified as entropy, proliferation, bimodality, and cellularity. The factor scores were further utilized in cluster analysis, outlining subcategories of heterogeneous tumors with predominant entropy, bimodality, or both at different levels of proliferative activity. The methodology also allowed the visualization of Ki67 LI heterogeneity in tumors and the automated detection and quantitative evaluation of Ki67 hotspots, based on the upper quintile of the HexT data, conceptualized as the "Pareto hotspot". We conclude that systematic subsampling of DIA-generated data into HexT enables comprehensive Ki67 LI analysis that reflects aspects of intra-tumor heterogeneity and may serve as a methodology to improve digital immunohistochemistry in general.
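Hexagonal subsampling of per-cell DIA output can be illustrated in a few lines with Matplotlib's `hexbin`, which accumulates an arbitrary per-point value over a hexagonal grid. The cell coordinates and positivity flags below are synthetic stand-ins; the upper-quintile rule mirrors the "Pareto hotspot" definition in the abstract.

```python
import numpy as np
import matplotlib.pyplot as plt

# Per-cell output of a DIA run: centroid coordinates plus a Ki67-positivity
# flag for each detected tumor cell (synthetic stand-ins here).
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10000, 20000), rng.uniform(0, 8000, 20000)
positive = rng.random(20000) < 0.2

# Hexagonal tiling: mean positivity per hexagon is the local Ki67 LI.
hb = plt.hexbin(x, y, C=positive.astype(float),
                reduce_C_function=np.mean, gridsize=30)
local_li = hb.get_array()

# "Pareto hotspot": hexagons in the upper quintile of the local LI values.
hotspot_threshold = np.quantile(local_li, 0.8)
print(f"{(local_li >= hotspot_threshold).sum()} hotspot tiles "
      f"(LI >= {hotspot_threshold:.2f})")
```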
Stochastic HKMDHE: A multi-objective contrast enhancement algorithm
NASA Astrophysics Data System (ADS)
Pratiher, Sawon; Mukhopadhyay, Sabyasachi; Maity, Srideep; Pradhan, Asima; Ghosh, Nirmalya; Panigrahi, Prasanta K.
2018-02-01
This contribution proposes a novel extension of the existing 'Hyper Kurtosis-based Modified Duo-Histogram Equalization' (HKMDHE) algorithm for multi-objective contrast enhancement of biomedical images. A modified objective function is formulated by jointly optimizing the individual histogram-equalization objectives. The adequacy of the proposed methodology has been experimentally validated with respect to image quality metrics such as brightness preservation, peak signal-to-noise ratio (PSNR), the Structural Similarity Index (SSIM) and the universal image quality metric. A performance analysis comparing the proposed Stochastic HKMDHE with existing histogram equalization methodologies, namely Global Histogram Equalization (GHE) and Contrast Limited Adaptive Histogram Equalization (CLAHE), is also given.
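The two baselines and two of the quoted metrics are readily reproduced with scikit-image; the proposed HKMDHE variant itself is not publicly available, so this sketch only shows the GHE/CLAHE comparison scaffolding on a built-in test image.

```python
import numpy as np
from skimage import data, exposure
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

img = data.moon()  # low-contrast test image shipped with scikit-image

ghe = exposure.equalize_hist(img)                          # global HE
clahe = exposure.equalize_adapthist(img, clip_limit=0.03)  # CLAHE

ref = img.astype(float) / 255.0  # reference in [0, 1] for the metrics
for name, out in [("GHE", ghe), ("CLAHE", clahe)]:
    psnr = peak_signal_noise_ratio(ref, out, data_range=1.0)
    ssim = structural_similarity(ref, out, data_range=1.0)
    print(f"{name}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```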
Ziessman, Harvey A; Majd, Massoud
2009-07-01
We reviewed our experience with technetium-99m dimercaptosuccinic acid (99mTc-DMSA) scintigraphy obtained during an imaging pilot study for a multicenter investigation (Randomized Intervention for Children With Vesicoureteral Reflux, RIVUR) of the effectiveness of daily antimicrobial prophylaxis for preventing recurrent urinary tract infection and renal scarring. We analyzed the imaging methodology and its relation to diagnostic image quality. 99mTc-DMSA imaging guidelines were provided to participating sites. High-resolution planar imaging with parallel-hole or pinhole collimation was required. Two core reviewers evaluated all submitted images. The analysis covered appropriate views, presence or absence of patient motion, adequate magnification, sufficient counts and diagnostic image quality. Inter-reader agreement was evaluated. We evaluated 70 99mTc-DMSA studies from 14 institutions. Variability was noted in both methodology and image quality. The correlation (r value) between dose administered and patient age was 0.780. For parallel-hole collimator imaging, good correlation was noted between activity administered and counts (r = 0.800); for pinhole imaging the correlation was poor (r = 0.110). A total of 10 studies (17%) were rejected for quality issues of motion, kidney overlap, inadequate magnification, inadequate counts or poor image quality; the submitting institution was informed, provided with recommendations for improving quality, and required to resubmit another study. Only 4 studies (6%) were judged differently by the 2 reviewers, and the differences were minor. Methodology and image quality for 99mTc-DMSA scintigraphy varied more than expected between institutions. The most common reason for poor image quality was inadequate count acquisition, with insufficient attention to the tradeoff between administered dose, length of image acquisition, start time of imaging and resulting image quality. Inter-observer agreement between the core readers was high. The pilot study ensured good-quality, standardized diagnostic images for the RIVUR investigation.
Quantitative data analysis for live imaging of bone.
Seno, Shigeto
Because bone is a hard tissue, it has long been difficult to observe the interior of living bone. With recent progress in microscopy and fluorescent-probe technology, it has become possible to observe the various activities of the many cell types that form bone tissue. On the other hand, the quantitative growth of the data and the increasing diversity and complexity of the images make quantitative analysis by visual inspection difficult, and the development of methodologies for processing microscopic images and analyzing the resulting data has been long awaited. In this article, we introduce the research field of bioimage informatics, which lies on the boundary between biology and information science, and then outline basic image processing techniques for the quantitative analysis of live imaging data of bone.
NASA Astrophysics Data System (ADS)
Mozgovoy, Dmitry K.; Hnatushenko, Volodymyr V.; Vasyliev, Volodymyr V.
2018-04-01
Vegetation and water bodies are fundamental elements of urban ecosystems, and water mapping is critical for urban and landscape planning and management. A methodology is proposed for the automated recognition of vegetation and water bodies across megacities in sub-meter-resolution satellite imagery in the visible and IR bands. By processing multispectral images from the SuperView-1A satellite, vector layers of recognized vegetation and water objects were obtained. Analysis of the image processing results showed sufficiently high accuracy in delineating the boundaries of recognized objects and good class separation. The developed methodology provides a significant increase in the efficiency and reliability of updating maps of large cities while reducing financial costs. Owing to its high degree of automation, the proposed methodology can be implemented as a geo-information web service serving a wide range of public services and commercial institutions.
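A generic (and much simpler) illustration of visible/NIR vegetation and water discrimination is spectral-index thresholding; this is a standard remote-sensing baseline, not the paper's algorithm, and the thresholds are assumptions to be tuned per sensor and scene.

```python
import numpy as np

# Toy reflectance bands standing in for a multispectral scene (G, R, NIR).
rng = np.random.default_rng(1)
green, red, nir = (rng.random((512, 512)) for _ in range(3))

# Standard spectral indices; the small epsilon guards against division by zero.
ndvi = (nir - red) / (nir + red + 1e-9)      # vegetation index
ndwi = (green - nir) / (green + nir + 1e-9)  # water index (McFeeters)

vegetation_mask = ndvi > 0.3  # illustrative threshold
water_mask = ndwi > 0.2       # illustrative threshold

print(f"vegetation: {vegetation_mask.mean():.1%} of pixels, "
      f"water: {water_mask.mean():.1%}")
```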
Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun
2015-02-01
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker: an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure used to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as the repeatability and reproducibility of the biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies, to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper reviews meta-analysis procedures for such an evaluation, including the identification of suitable studies, the statistical methodology for evaluating and summarizing the performance metrics, and complete and transparent reporting of the results. The review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of the assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes.
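For orientation, here is the standard DerSimonian-Laird random-effects pooling that such meta-analyses typically start from; note this is the baseline the paper says can break down with small studies, not the alternative approaches it proposes. The study estimates and variances below are fabricated illustrative numbers.

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooling of per-study performance estimates."""
    y = np.asarray(estimates, float)
    v = np.asarray(variances, float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) /
               (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Example: repeatability coefficients from five small test-retest studies.
pooled, se, tau2 = dersimonian_laird(
    [0.12, 0.18, 0.10, 0.22, 0.15],
    [0.0015, 0.0040, 0.0012, 0.0060, 0.0025])
print(f"pooled = {pooled:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.4f}")
```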
Diagnosis of cutaneous thermal burn injuries by multispectral imaging analysis
NASA Technical Reports Server (NTRS)
Anselmo, V. J.; Zawacki, B. E.
1978-01-01
Special photographic or television image analysis is shown to be a potentially useful technique for assisting the physician in the early diagnosis of thermal burn injury. Background on the medical and physiological problems of burns is presented, and the proposed methodology for burn diagnosis is discussed from both theoretical and clinical points of view. The television/computer system constructed to perform this analysis is described, and the clinical results are discussed.
Contribution to the application of two-colour imaging to diesel combustion
NASA Astrophysics Data System (ADS)
Payri, F.; Pastor, J. V.; García, J. M.; Pastor, J. M.
2007-08-01
The two-colour method (2C) is a well-known methodology for estimating flame temperature and the soot-related KL factor. A 2C imaging system was built with a single charge-coupled device (CCD) camera for visualization of the diesel flame in a single-cylinder two-stroke engine with optical access. The work presented here focuses on methodological aspects: the influence of calibration uncertainties on the measured temperature and KL factor is analysed. In addition, a theoretical study is presented that links the true flame temperature and soot distributions with those derived from the 2C images. Finally, an experimental study was carried out to show the influence of injection pressure, air density and air temperature on the 2C-derived parameters. Comparison with the expected results shows the limitations of this methodology for diesel flame analysis.
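A minimal sketch of the 2C inversion, under stated assumptions and not the authors' calibration procedure: soot radiance is modelled as Planck emission times a Hottel-Broughton emissivity, 1 - exp(-KL/λ^α) with α = 1.39 (a common value for the visible range), and the two-wavelength system is solved numerically. The first radiation constant is set to arbitrary units since the demo synthesizes and then recovers its own data.

```python
import numpy as np
from scipy.optimize import fsolve

C2 = 14388.0   # second radiation constant [um*K]
ALPHA = 1.39   # Hottel-Broughton dispersion exponent (visible, assumed)

def radiance(T, KL, lam_um):
    """Soot spectral radiance: emissivity * Planck (C1 set to 1, a.u.)."""
    emissivity = 1.0 - np.exp(-KL / lam_um ** ALPHA)
    planck = 1.0 / (lam_um ** 5 * (np.exp(C2 / (lam_um * T)) - 1.0))
    return emissivity * planck

lam1, lam2 = 0.55, 0.65  # measurement wavelengths [um]

# Synthesize "measured" intensities from a known state, then recover it.
T_true, KL_true = 2200.0, 1.2
I1, I2 = radiance(T_true, KL_true, lam1), radiance(T_true, KL_true, lam2)

def residuals(p):
    T, KL = p
    return [radiance(T, KL, lam1) - I1, radiance(T, KL, lam2) - I2]

T_est, KL_est = fsolve(residuals, x0=[1800.0, 0.5])
print(f"T = {T_est:.0f} K, KL = {KL_est:.2f}")
```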
Cellular neural network-based hybrid approach toward automatic image registration
NASA Astrophysics Data System (ADS)
Arun, Pattathal VijayaKumar; Katiyar, Sunil Kumar
2013-01-01
Image registration is a key component of various image processing operations that involve the analysis of different image data sets. Automatic image registration has seen the application of many intelligent methodologies over the past decade; however, the inability to properly model object shape and contextual information has limited the attainable accuracy. A framework for accurate feature-shape modeling and adaptive resampling is proposed, using advanced techniques such as support vector machines, cellular neural networks (CNNs), the scale-invariant feature transform (SIFT), coresets, and cellular automata. The CNN has been found effective in improving both the feature matching and the resampling stages of registration, and the complexity of the approach has been considerably reduced using coreset optimization. The salient features of this work are CNN-based SIFT feature-point optimization, adaptive resampling, and intelligent object modeling. The developed methodology has been compared with contemporary methods using different statistical measures; investigations over various satellite images revealed that considerable success was achieved with the approach. The system dynamically uses spectral and spatial information to represent contextual knowledge through a CNN-Prolog approach. The methodology is also shown to be effective in providing intelligent interpretation and adaptive resampling.
High resolution earth observation from geostationary orbit by optical aperture synthesis
NASA Astrophysics Data System (ADS)
Mesrine, M.; Thomas, E.; Garin, S.; Blanc, P.; Alis, C.; Cassaing, F.; Laubier, D.
2017-11-01
In this paper, we describe optical aperture synthesis (OAS) imaging instrument concepts studied by Alcatel Alenia Space under a CNES R&T contract in terms of technical feasibility. First, a methodology for selecting the aperture configuration is proposed, based on the definition and quantification of image quality criteria adapted to an OAS instrument for direct imaging of extended objects. The following section presents, for each interferometer type (Michelson and Fizeau), the corresponding optical configurations compatible with a large field of view from geostationary orbit. These optical concepts take into account the constraints imposed by the foreseen resolution and by the implementation of the co-phasing functions. The fourth section is dedicated to the analysis of the co-phasing methodologies, from configuration deployment to fine stabilization during observation. Finally, we present a trade-off analysis that allows the concept to be selected with respect to the mission specification and the constraints related to instrument accommodation under the launcher shroud and in-orbit deployment.
GuidosToolbox: universal digital image object analysis
Peter Vogt; Kurt Riitters
2017-01-01
The increased availability of mapped environmental data calls for better tools to analyze the spatial characteristics and information contained in those maps. Publicly available, user-friendly and universal tools are needed to foster the interdisciplinary development and application of methodologies for the extraction of image object information properties contained in...
Artistic image analysis using graph-based learning approaches.
Carneiro, Gustavo
2013-08-01
We introduce a new methodology for the problem of artistic image analysis, which, among other tasks, involves the automatic identification of the visual classes present in an artwork. In this paper, we advocate the idea that artistic image analysis must explore a graph that captures the network of artistic influences by computing similarities in terms of appearance and manual annotation. One of the novelties of our methodology is a formulation that combines these two similarities in a single graph in a principled way. Using this graph, we show that an efficient random walk algorithm based on an inverted label propagation formulation produces more accurate annotation and retrieval results than the following baseline algorithms: bag of visual words, label propagation, matrix completion, and structural learning. We also show that the proposed approach leads to more efficient inference and training procedures. The experiments are run on a database containing 988 artistic images (with 49 visual classification problems divided into one multiclass problem with 27 classes and 48 binary problems), where we report inference and training running times and quantitative comparisons with respect to several retrieval and annotation performance measures.
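For readers unfamiliar with graph-based annotation, here is the standard label-spreading iteration (the "label propagation" baseline the abstract cites, in the Zhou et al. style), not the authors' inverted formulation. The toy graph and seed labels are fabricated for illustration.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, iters=100):
    """Label spreading on a similarity graph.

    W: (n, n) symmetric similarity matrix; Y: (n, c) one-hot seed labels
    (zero rows for unlabeled images). Returns soft label scores F.
    """
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))  # symmetric normalization D^-1/2 W D^-1/2
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F

# Toy graph: 5 images, 2 visual classes, edges weighted by appearance similarity.
W = np.array([[0, .9, .1, 0, 0],
              [.9, 0, .2, 0, 0],
              [.1, .2, 0, .8, .3],
              [0, 0, .8, 0, .9],
              [0, 0, .3, .9, 0]], float)
Y = np.zeros((5, 2)); Y[0, 0] = 1; Y[4, 1] = 1  # two annotated seed images
print(label_propagation(W, Y).argmax(axis=1))   # predicted class per image
```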
NASA Astrophysics Data System (ADS)
Davila, Yves; Crouzeix, Laurent; Douchin, Bernard; Collombet, Francis; Grunevald, Yves-Henri
2017-08-01
Reinforcement angle orientation has a significant effect on the mechanical properties of composite materials. This work presents a methodology for introducing variable reinforcement angles into finite element (FE) models of composite structures. The study of reinforcement orientation variations uses meta-models to identify and control a continuous variation across the composite ply. First, the reinforcement angle is measured by image analysis of the composite plies during the lay-up phase. The image analysis results show that variations in the mean ply orientation are between -0.5 and 0.5°, with standard deviations ranging between 0.34 and 0.41°. Automatic post-treatment of the images determines the global and local angle variations, yielding good agreement, both visually and numerically, between the analysed images and the identified parameters. A composite plate analysed at the end of the cooling phase is presented as a case study. Here, the variations in residual strain induced by the variability in reinforcement orientation reach up to 28% of the strain field of the homogeneous FE model. The proposed methodology has shown its capability to introduce material and geometrical variability into FE analyses of layered composite structures.
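One common way to measure local fibre angle from ply images (a generic illustration; the paper does not specify this exact technique) is the smoothed structure tensor: the dominant gradient direction is recovered from the tensor, and the fibre direction is perpendicular to it. The toy image below has known fibre orientation so the estimate can be checked.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_orientation(image, sigma=5.0):
    """Dominant local gradient orientation (degrees) from the structure tensor."""
    gy, gx = np.gradient(image.astype(float))
    Jxx = gaussian_filter(gx * gx, sigma)
    Jxy = gaussian_filter(gx * gy, sigma)
    Jyy = gaussian_filter(gy * gy, sigma)
    return np.rad2deg(0.5 * np.arctan2(2 * Jxy, Jxx - Jyy))

# Toy ply image: straight fibres whose intensity wave vector lies at 85 deg,
# i.e. fibres oriented at about -5 deg from the horizontal.
y, x = np.mgrid[0:256, 0:256].astype(float)
angle = np.deg2rad(85.0)
ply = np.sin((x * np.cos(angle) + y * np.sin(angle)) / 3.0)

theta = gradient_orientation(ply)
fibre_angle = theta - 90.0  # fibres run perpendicular to the gradient
print(f"mean fibre angle {fibre_angle.mean():.2f} deg, "
      f"std {fibre_angle.std():.2f} deg")
```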
A Robust Actin Filaments Image Analysis Framework
Alioscha-Perez, Mitchel; Benadiba, Carine; Goossens, Katty; Kasas, Sandor; Dietler, Giovanni; Willaert, Ronnie; Sahli, Hichem
2016-01-01
The cytoskeleton is a highly dynamical protein network that plays a central role in numerous cellular physiological processes, and is traditionally divided into three components according to its chemical composition, i.e. the actin, tubulin and intermediate filament cytoskeletons. Understanding cytoskeleton dynamics is of prime importance for unveiling the mechanisms involved in cell adaptation to any type of stress. Fluorescence imaging of cytoskeleton structures allows analyzing the impact of mechanical stimulation on the cytoskeleton, but it also imposes additional challenges in the image processing stage, such as the presence of imaging-related artifacts and the heavy blurring introduced by (high-throughput) automated scans. Although a considerable number of image-based analytical tools exist to address image processing and analysis, most of them are unfit to cope with these challenges. Filamentous structures in images can be considered as a piecewise composition of quasi-straight segments (at least at some finer or coarser scale). Based on this observation, we propose a three-step actin filament extraction methodology: (i) the input image is decomposed into a 'cartoon' part corresponding to the filament structures in the image and a noise/texture part; (ii) a multi-scale line detector is applied to the 'cartoon' image; and (iii) a quasi-straight filament merging algorithm performs the final fiber extraction. The proposed robust actin filament image analysis framework allows extracting individual filaments in the presence of noise, artifacts and heavy blurring. Moreover, it provides numerous parameters such as filament orientation, position and length, useful for further analysis. Cell image decomposition is relatively under-exploited in biological image processing, and our study shows the benefits it provides when addressing such tasks. Experimental validation was conducted using publicly available datasets and osteoblasts grown under two different conditions: static (control) and fluid shear stress. The proposed methodology exhibited higher sensitivity values and similar accuracy compared to state-of-the-art methods. PMID:27551746
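Steps (i) and (ii) can be approximated with scikit-image: total-variation denoising is one common way to obtain a cartoon/texture split, and the Sato filter is a standard multi-scale ridge detector. Both are stand-ins for the paper's specific operators, the merging step (iii) is omitted, and a built-in sample image replaces real fluorescence data.

```python
import numpy as np
from skimage import data
from skimage.restoration import denoise_tv_chambolle
from skimage.filters import sato

# Stand-in for a fluorescence image of filamentous structures.
img = data.camera().astype(float) / 255.0

# (i) Cartoon/texture split: TV denoising keeps the piecewise-smooth
# "cartoon" part; the residual holds noise and texture.
cartoon = denoise_tv_chambolle(img, weight=0.1)
texture = img - cartoon

# (ii) Multi-scale line (ridge) detection on the cartoon part.
ridges = sato(cartoon, sigmas=range(1, 4), black_ridges=False)

# (iii) A threshold on the ridge response yields candidate filament pixels,
# which a merging step would then link into individual filaments.
filament_mask = ridges > ridges.mean() + 2 * ridges.std()
print(f"{filament_mask.sum()} candidate filament pixels")
```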
Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms
NASA Technical Reports Server (NTRS)
Kurdila, Andrew J.; Sharpley, Robert C.
1999-01-01
This paper presents a final report on wavelet and multiresolution analysis for finite element networking paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering to accelerate large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based particle image velocimetry algorithms and of reduced-order input-output models for nonlinear systems using wavelet approximations.
Retinal imaging analysis based on vessel detection.
Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila
2017-07-01
With the advancement of digital imaging and computing power, computationally intelligent technologies are in high demand in ophthalmological care and treatment. In the current research, Retina Image Analysis (RIA) was developed for optometrists at the Eye Care Center of Management and Science University. This research aims to analyze the retina through vessel detection. RIA assists in the analysis of retinal images, offering specialists various options such as saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the development methodology in this research. In conclusion, Retina Image Analysis may help optometrists gain a better understanding when analyzing a patient's retina. The Retina Image Analysis procedure was developed using MATLAB (R2011b), and promising results were attained that are comparable with the state of the art.
Application of Six Sigma methodology to a diagnostic imaging process.
Taner, Mehmet Tolga; Sezen, Bulent; Atwat, Kamal M
2012-01-01
This paper aims to apply the Six Sigma methodology to improve workflow by eliminating the causes of failure in the medical imaging department of a private Turkish hospital. The design, measure, analyse, improve and control (DMAIC) improvement cycle, workflow charts, fishbone diagrams and Pareto charts were employed, together with rigorous data collection in the department. The identification of root causes of repeat sessions and delays was followed by failure mode and effect analysis, hazard analysis and decision tree analysis. The most frequent causes of failure were malfunction of the RIS/PACS system and improper positioning of patients. Following extensive training of professionals, the sigma level was increased from 3.5 to 4.2. The data were collected over only four months. Six Sigma's data measurement and process improvement methodology is the impetus for health care organisations to rethink their workflow and reduce malpractice. It involves measuring, recording and reporting data on a regular basis, which enables the administration to monitor workflow continuously. The improvements in the workflow under study, made by determining the failures and potential risks associated with radiologic care, will have a positive impact on society in terms of patient safety. Having eliminated repeat examinations, the risk of exposure to additional radiation was also minimised. This paper supports the need to apply Six Sigma and presents an evaluation of the process in an imaging department.
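For context on what "from 3.5 to 4.2 sigma" means numerically, the conventional conversion between defect rate and short-term sigma level uses the normal quantile plus the customary 1.5-sigma long-term shift (an industry convention, not a law). The defect counts below are back-calculated illustrations, not figures from the paper.

```python
from scipy.stats import norm

def sigma_level(defects, opportunities):
    """Short-term sigma level from a defect rate, with the 1.5-sigma shift."""
    dpmo = 1e6 * defects / opportunities  # defects per million opportunities
    return norm.ppf(1 - dpmo / 1e6) + 1.5

# Illustrative (assumed) counts: repeat sessions per million opportunities
# before and after the DMAIC interventions.
print(f"before: {sigma_level(22750, 1e6):.1f} sigma")  # ~3.5
print(f"after:  {sigma_level(3470, 1e6):.1f} sigma")   # ~4.2
```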
Performance evaluation methodology for historical document image binarization.
Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis
2013-02-01
Document image binarization is of great importance in the document image analysis and recognition pipeline, since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as in verifying its effectiveness, by providing qualitative and quantitative indications of its performance. This paper presents a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
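The unweighted core of such pixel-based evaluation is straightforward; the sketch below computes recall, precision and F-measure from binary result and ground-truth masks. The paper's bias-diminishing weighting scheme is omitted, and the two masks are toy fabrications.

```python
import numpy as np

def binarization_scores(result, ground_truth):
    """Pixel-based recall, precision and F-measure for a binarized image.

    Both inputs are boolean arrays where True marks foreground (ink) pixels.
    """
    tp = np.logical_and(result, ground_truth).sum()
    fp = np.logical_and(result, ~ground_truth).sum()
    fn = np.logical_and(~result, ground_truth).sum()
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f = (2 * recall * precision / (recall + precision)
         if recall + precision else 0.0)
    return recall, precision, f

gt = np.zeros((100, 100), bool); gt[40:60, 20:80] = True   # toy ground truth
res = np.zeros_like(gt); res[42:60, 20:90] = True          # toy binarization
print("R=%.3f P=%.3f F=%.3f" % binarization_scores(res, gt))
```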
3D Texture Features Mining for MRI Brain Tumor Identification
NASA Astrophysics Data System (ADS)
Rahim, Mohd Shafry Mohd; Saba, Tanzila; Nayer, Fatima; Syed, Afraz Zahra
2014-03-01
Medical image segmentation is the process of extracting regions of interest and dividing an image into its individual meaningful, homogeneous components; these components have a strong relationship with the objects of interest in the image. Medical image segmentation is a mandatory initial step in computer-aided diagnosis and therapy, and it is a sophisticated and challenging task because of the complex nature of medical images. Indeed, successful medical image analysis depends heavily on segmentation accuracy. Texture is one of the major features used to identify regions of interest in an image or to classify an object, and 2D texture features yield poor classification results. Hence, this paper presents 3D feature extraction based on texture analysis, with an SVM used as the segmentation technique in the testing methodology.
Radiomics: Extracting more information from medical images using advanced feature analysis
Lambin, Philippe; Rios-Velazquez, Emmanuel; Leijenaar, Ralph; Carvalho, Sara; van Stiphout, Ruud G.P.M.; Granton, Patrick; Zegers, Catharina M.L.; Gillies, Robert; Boellard, Ronald; Dekker, André; Aerts, Hugo J.W.L.
2015-01-01
Solid cancers are spatially and temporally heterogeneous. This limits the use of invasive biopsy-based molecular assays but gives huge potential to medical imaging, which can capture intra-tumoural heterogeneity non-invasively. During the past decades, medical imaging innovations, with new hardware, new imaging agents and standardised protocols, have allowed the field to move towards quantitative imaging. The development of automated and reproducible analysis methodologies to extract more information from image-based features is therefore also required. Radiomics, the high-throughput extraction of large amounts of image features from radiographic images, addresses this problem and is one of the approaches that hold great promise but need further validation in multi-centric settings and in the laboratory. PMID:22257792
Forsythe, Alex; Street, Nichola; Helmy, Mai
2017-08-01
Differences between norm ratings collected when participants are asked to consider more than one picture characteristic are contrasted with the traditional methodological approach of collecting ratings separately for each image construct. We present data suggesting that normative data based on procedures that ask participants to consider multiple image constructs simultaneously could be confounded. We provide data for two new image constructs: beauty, and the extent to which participants encounter the stimuli in their everyday lives. Analysis of these data suggests that familiarity and encounter tap different image constructs. The extent to which an observer encounters an object predicts human judgments of visual complexity. Encountering an image was also found to be an important predictor of beauty, but familiarity with that image was not. Taken together, these results suggest that continuing to collect complexity measures from human judgments is a pointless exercise; automated measures are more reliable and valid, and are shown here to predict human preferences.
Non-destructive, high-content analysis of wheat grain traits using X-ray micro computed tomography.
Hughes, Nathan; Askew, Karen; Scotson, Callum P; Williams, Kevin; Sauze, Colin; Corke, Fiona; Doonan, John H; Nibau, Candida
2017-01-01
Wheat is one of the most widely grown crops in temperate climates, for food and animal feed. In order to meet the demands of the predicted population increase in an ever-changing climate, wheat production needs to increase dramatically. Spike and grain traits are critical determinants of final yield, and grain uniformity is a commercially desirable trait, but their analysis is laborious and often requires destructive harvest. One of the current challenges is to develop an accurate, non-destructive method for spike and grain trait analysis capable of handling large populations. In this study we describe the development of a robust method for the accurate extraction and measurement of spike and grain morphometric parameters from images acquired by X-ray micro-computed tomography (μCT). The image analysis pipeline developed automatically identifies the plant material of interest in μCT images, performs image analysis, and extracts morphometric data. As a proof of principle, this integrated methodology was used to analyse the spikes of a population of wheat plants subjected to high temperatures under two different water regimes. Temperature has a negative effect on spike height and grain number, with the middle of the spike being the most affected region. The data also confirmed that increased grain volume was correlated with the decrease in grain number under mild stress. Being able to measure plant phenotypes quickly and non-destructively is crucial to advancing our understanding of gene function and the effects of the environment. We report on the development of an image analysis pipeline capable of accurately and reliably extracting spike and grain traits from crops without the loss of positional information. This methodology, applied here to wheat spikes, can be readily applied to other economically important crop species.
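The measurement core of such a pipeline, once the spike has been isolated, reduces to thresholding, 3D connected-component labelling, and per-component morphometrics. A toy scikit-image sketch (the volume is random noise standing in for a real μCT scan, and the threshold is an assumption; a real pipeline would add morphological clean-up):

```python
import numpy as np
from skimage.measure import label, regionprops

# Toy CT volume: grains appear as bright voxels inside a darker spike.
volume = np.random.rand(128, 128, 128)

# Threshold into a binary grain mask (calibrated density threshold in practice).
mask = volume > 0.995

# Connected-component labelling separates individual grains while
# preserving their position along the spike (the z axis here).
grains = label(mask)
props = regionprops(grains)

volumes = [p.area for p in props]             # voxel counts per grain
centroids_z = [p.centroid[0] for p in props]  # position along the spike
print(f"{len(props)} grains, mean volume {np.mean(volumes):.1f} voxels")
```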
Process perspective on image quality evaluation
NASA Astrophysics Data System (ADS)
Leisti, Tuomas; Halonen, Raisa; Kokkonen, Anna; Weckman, Hanna; Mettänen, Marja; Lensu, Lasse; Ritala, Risto; Oittinen, Pirkko; Nyman, Göte
2008-01-01
The psychological complexity of multivariate image quality evaluation makes it difficult to develop general image quality metrics. Quality evaluation involves several mental processes, and ignoring these processes while using only a few test images can lead to biased results. Using a qualitative/quantitative (Interpretation Based Quality, IBQ) methodology, we examined the process of pair-wise comparison in a setting where the quality of images printed by a laser printer on different paper grades was evaluated. The test image consisted of a picture of a table covered with several objects; three other images were also used, photographs of a woman, a cityscape and a countryside. In addition to the pair-wise comparisons, the observers (N=10) were interviewed about the subjective quality attributes they used in making their quality decisions. An examination of the individual pair-wise comparisons revealed serious inconsistencies in the observers' evaluations of the test image content, but not of the other contents. The qualitative analysis showed that this inconsistency was due to the observers' focus of attention; the lack of easily recognizable context in the test image may have contributed to it. To obtain reliable knowledge of the effect of image context or attention on subjective image quality, a qualitative methodology is needed.
Whalley, H C; Kestelman, J N; Rimmington, J E; Kelso, A; Abukmeil, S S; Best, J J; Johnstone, E C; Lawrie, S M
1999-07-30
The Edinburgh High Risk Project is a longitudinal study of brain structure (and function) in subjects at high risk of developing schizophrenia in the next 5-10 years for genetic reasons. In this article we describe the methods of volumetric analysis of structural magnetic resonance images used in the study. We also consider potential sources of error in these methods: the validity of our image analysis techniques; inter- and intra-rater reliability; possible positional variation; and thresholding criteria used in separating brain from cerebro-spinal fluid (CSF). Investigation with a phantom test object (of similar imaging characteristics to the brain) provided evidence for the validity of our image acquisition and analysis techniques. Both inter- and intra-rater reliability were found to be good in whole brain measures but less so for smaller regions. There were no statistically significant differences in positioning across the three study groups (patients with schizophrenia, high risk subjects and normal volunteers). A new technique for thresholding MRI scans longitudinally is described (the 'rescale' method) and compared with our established method (thresholding by eye). Few differences between the two techniques were seen at 3- and 6-month follow-up. These findings demonstrate the validity and reliability of the structural MRI analysis techniques used in the Edinburgh High Risk Project, and highlight methodological issues of general concern in cross-sectional and longitudinal studies of brain structure in healthy control subjects and neuropsychiatric populations.
The cell monolayer trajectory from the system state point of view.
Stys, Dalibor; Vanek, Jan; Nahlik, Tomas; Urban, Jan; Cisar, Petr
2011-10-01
Time-lapse microscopic movies are being increasingly utilized for understanding the derivation of cell states and predicting cell futures. Often, fluorescence and other types of labeling are not available or desirable, and cell-state definitions based on observable structures must be used. We present a methodology for cell behavior recognition and prediction based on the analysis of short-term recurrent cell behavior; this approach has theoretical justification in non-linear dynamics theory. The methodology is based on general stochastic systems theory, which allows us to define the cell states, the trajectory and the system itself. As a first step, we introduce a novel image content descriptor for cell-state characterization based on the information contribution (gain) of each image point. The linkage between the method and general system theory is presented as a general frame for interpreting cell behavior. We also discuss extended cell description, system theory and methodology for future development. This methodology may be used for many practical purposes, ranging from advanced, medically relevant, precise cell culture diagnostics to utilitarian cell recognition against a noisy or uneven image background. In addition, the results are theoretically justified.
ERIC Educational Resources Information Center
Py, Bernard, Ed.
2000-01-01
Articles in this issue include the following: "Représentations sociales et discours. Questions épistémologiques et méthodologiques" (Social Representations and Discourse. Questions of Epistemology and Methodology); "Aspects théoriques et méthodologiques de la recherche sur le traitement discursif des représentations sociales" (Theoretical and Methodological Aspects of Research on the Discursive Treatment of Social Representations)…
An image-processing methodology for extracting bloodstain pattern features.
Arthur, Ravishka M; Humburg, Philomena J; Hoogenboom, Jerry; Baiker, Martin; Taylor, Michael C; de Bruin, Karla G
2017-08-01
There is a growing trend in forensic science to develop methods that make forensic pattern comparison tasks more objective. This has generally involved the application of suitable image-processing methods to provide numerical data for identification or comparison. This paper outlines a unique image-processing methodology that analysts can use to generate reliable pattern data to assist them in forming objective conclusions about a pattern. A range of features were defined and extracted from a laboratory-generated impact spatter pattern. These features were based in part on bloodstain properties commonly used in the analysis of spatter bloodstain patterns, and their values were consistent with properties reported qualitatively for such patterns. The image-processing method developed shows considerable promise as a way to establish measurable, discriminating pattern criteria that are lacking in current bloodstain pattern taxonomies.
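Classic per-stain features of the kind the paper builds on can be sketched by fitting an ellipse to each stain: length, width, orientation, and the textbook impact-angle relation sin(α) = width/length for an elliptical stain. The paper's feature set is broader; the stain below is synthetic so the geometry can be verified.

```python
import numpy as np
from skimage.draw import ellipse
from skimage.measure import label, regionprops

# Toy binary spatter pattern: one elliptical stain on a clean background.
pattern = np.zeros((200, 200), bool)
rr, cc = ellipse(100, 100, 12, 30, rotation=np.deg2rad(20))
pattern[rr, cc] = True

# Fit an ellipse to each stain and derive classic bloodstain features.
for stain in regionprops(label(pattern)):
    length = stain.major_axis_length
    width = stain.minor_axis_length
    impact_angle = np.degrees(np.arcsin(min(1.0, width / length)))
    direction = np.degrees(stain.orientation)
    print(f"len={length:.1f}px width={width:.1f}px "
          f"impact={impact_angle:.1f}deg orient={direction:.1f}deg")
```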
Marin, Diego; Gegundez-Arias, Manuel E; Suero, Angel; Bravo, Jose M
2015-02-01
Development of automatic retinal disease diagnosis systems based on computer analysis of retinal images can provide remarkably quicker screening programs for early detection. Such systems are mainly focused on the detection of the earliest ophthalmic signs of illness and require previous identification of fundal landmark features such as the optic disc (OD), the fovea or the blood vessels. A methodology for accurate center-position location and OD retinal region segmentation on digital fundus images is presented in this paper. The methodology performs a set of iterative opening-closing morphological operations on the original retinography intensity channel to produce a bright-region-enhanced image. Taking the blood vessel confluence at the OD into account, a 2-step automatic thresholding procedure is then applied to obtain a reduced region of interest, where the center and the OD pixel region are finally obtained by performing the circular Hough transform on a set of OD boundary candidates generated through the application of the Prewitt edge detector. The methodology was evaluated on 1200 and 1748 fundus images from the publicly available MESSIDOR and MESSIDOR-2 databases, acquired from diabetic patients and thus constituting clinical cases of interest within the framework of automated diagnosis of retinal diseases associated with diabetes mellitus. The methodology proved highly accurate in OD-center location: the average Euclidean distance between the methodology-provided and actual OD-center positions was 6.08, 9.22 and 9.72 pixels for retinas of 910, 1380 and 1455 pixels in size, respectively. OD segmentation was evaluated in terms of Jaccard and Dice coefficients, as well as the mean average distance between estimated and actual OD boundaries. Comparison with the results reported by other reviewed OD segmentation methodologies shows that our proposal achieves better overall performance. Its effectiveness and robustness make the proposed automated OD location and segmentation method a suitable tool for integration into a complete prescreening system for early diagnosis of retinal diseases.
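The edge-map-plus-circular-Hough core of the method can be sketched with scikit-image. For brevity a Canny detector replaces the paper's Prewitt step, the morphological enhancement is omitted, the radius range is an assumption, and a built-in sample image stands in for a retinography.

```python
import numpy as np
from skimage import data, color, feature, transform

# A retinography intensity channel would be used in practice; a sample
# RGB image stands in here purely so the snippet runs end to end.
img = color.rgb2gray(data.astronaut())

# Edge map of bright-region boundaries (Canny used here for brevity).
edges = feature.canny(img, sigma=2.0)

# Circular Hough transform over plausible optic-disc radii (assumed range).
radii = np.arange(20, 60, 2)
hough = transform.hough_circle(edges, radii)
_, cx, cy, r = transform.hough_circle_peaks(hough, radii, total_num_peaks=1)
print(f"OD candidate: centre=({cx[0]}, {cy[0]}), radius={r[0]} px")
```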
Cancer Detection Using Neural Computing Methodology
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad; Kohen, Hamid S.; Bearman, Gregory H.; Seligson, David B.
2001-01-01
This paper describes a novel learning methodology used to analyze biomaterials. The premise of this research is to help pathologists quickly identify anomalous cells in a cost-efficient manner. Skilled pathologists must methodically, efficiently and carefully analyze histopathologic materials manually for the presence, amount and degree of malignancy and/or other disease states. The prolonged attention required to accomplish this task induces fatigue that may result in a higher rate of diagnostic errors. In addition, automated image analysis systems to date lack a sufficiently intelligent means of identifying even the most general regions of interest in tissue-based studies, and this shortfall greatly limits their utility. An intelligent data-understanding system that could quickly and accurately identify diseased tissues and/or choose regions of interest would be expected to increase the accuracy of diagnosis and usher in truly automated tissue-based image analysis.
Preprocessing film-copied MRI for studying morphological brain changes.
Pham, Tuan D; Eisenblätter, Uwe; Baune, Bernhard T; Berger, Klaus
2009-06-15
Magnetic resonance imaging (MRI) of the brain is one of the important data items for studying memory and morbidity in the elderly, as these images can provide useful information through quantitative measures of various regions of interest of the brain. As an effort to fully automate the biomedical analysis of the brain, to be combined with genetic data from the same human population in cases where the original MRI records are missing, this paper presents two effective methods for addressing this imaging problem. The first method handles the restoration of film-copied MRI; the second involves the segmentation of the image data. Experimental results and comparisons with other methods suggest the usefulness of the proposed image analysis methodology.
Cost-effectiveness modelling in diagnostic imaging: a stepwise approach.
Sailer, Anna M; van Zwam, Wim H; Wildberger, Joachim E; Grutters, Janneke P C
2015-12-01
Diagnostic imaging (DI) is the fastest growing sector in medical expenditures and takes a central role in medical decision-making. The increasing number of various and new imaging technologies induces a growing demand for cost-effectiveness analysis (CEA) in imaging technology assessment. In this article we provide a comprehensive framework of direct and indirect effects that should be considered for CEA in DI, suitable for all imaging modalities. We describe and explain the methodology of decision analytic modelling in six steps aiming to transfer theory of CEA to clinical research by demonstrating key principles of CEA in a practical approach. We thereby provide radiologists with an introduction to the tools necessary to perform and interpret CEA as part of their research and clinical practice. • DI influences medical decision making, affecting both costs and health outcome. • This article provides a comprehensive framework for CEA in DI. • A six-step methodology for conducting and interpreting cost-effectiveness modelling is proposed.
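The stepwise logic of a decision-analytic comparison reduces, at its core, to expected costs and effects per strategy and their incremental ratio. The toy calculation below illustrates this with entirely hypothetical numbers (costs, QALYs and the willingness-to-pay threshold are assumptions for illustration, not values from the article).

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical expected values per patient: imaging-first vs. standard work-up.
cost_imaging, qaly_imaging = 1800.0, 8.35
cost_standard, qaly_standard = 1200.0, 8.30

ratio = icer(cost_imaging, qaly_imaging, cost_standard, qaly_standard)
threshold = 20000.0  # assumed willingness-to-pay per QALY gained
print(f"ICER = {ratio:.0f} per QALY; cost-effective: {ratio < threshold}")
```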
GIS Methodology for Planning Planetary-Rover Operations
NASA Technical Reports Server (NTRS)
Powell, Mark; Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang
2007-01-01
A document describes a methodology for utilizing image data downlinked from cameras aboard a robotic ground vehicle (rover) on a remote planet for analyzing and planning operations of the vehicle and of any associated spacecraft. Traditionally, the cataloging and presentation of large numbers of downlinked planetary-exploration images have been done by use of two organizational methods: temporal organization and correlation between activity plans and images. In contrast, the present methodology involves spatial indexing of image data by use of the computational discipline of geographic information systems (GIS), which has been maturing in terrestrial applications for decades, but, until now, has not been widely used in support of exploration of remote planets. The use of GIS to catalog data products for analysis is intended to increase efficiency and effectiveness in planning rover operations, just as GIS has proven to be a source of powerful computational tools in such terrestrial endeavors as law enforcement, military strategic planning, surveying, political science, and epidemiology. The use of GIS also satisfies the need for a map-based user interface that is intuitive to rover-activity planners, many of whom are deeply familiar with maps and know how to use them effectively in field geology.
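To make the spatial-indexing idea concrete, the sketch below indexes image products by rover site-frame coordinates with a simple uniform-grid index; a production GIS would use proper map projections and richer data structures, and all names here are hypothetical.

```python
from collections import defaultdict

class GridIndex:
    """Toy spatial index: buckets image IDs into uniform grid cells."""
    def __init__(self, cell_size=10.0):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def add_image(self, image_id, x, y):
        self.buckets[self._key(x, y)].append((image_id, x, y))

    def query(self, x, y, radius):
        """Return IDs of all images within `radius` meters of (x, y)."""
        hits, span = [], int(radius // self.cell) + 1
        kx, ky = self._key(x, y)
        for i in range(kx - span, kx + span + 1):
            for j in range(ky - span, ky + span + 1):
                for image_id, px, py in self.buckets[(i, j)]:
                    if (px - x) ** 2 + (py - y) ** 2 <= radius ** 2:
                        hits.append(image_id)
        return hits

idx = GridIndex()
idx.add_image("navcam_0042", 12.5, 7.0)
print(idx.query(10.0, 6.0, radius=5.0))  # -> ['navcam_0042']
```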
Imaging of polysaccharides in the tomato cell wall with Raman microspectroscopy
2014-01-01
Background: The primary cell wall of fruits and vegetables is a structure mainly composed of polysaccharides (pectins, hemicelluloses, cellulose). The polysaccharides are assembled into a network and linked together, and the composition of the plant cell wall is thought to have an important influence on the mechanical properties of fruits and vegetables. Results: In this study the Raman microspectroscopy technique was applied to visualize the distribution of polysaccharides in the cell wall of fruit. The methodology of sample preparation, measurement with the Raman microscope and multivariate image analysis is discussed. Single-band imaging (for preliminary analysis) and multivariate image analysis methods (principal component analysis and multivariate curve resolution) were used for the identification and localization of the components in the primary cell wall. Conclusions: Raman microspectroscopy supported by multivariate image analysis methods is useful in distinguishing cellulose and pectins in the cell wall of tomatoes, and shows that biopolymers can be localized in minimally prepared samples. PMID:24917885
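As a sketch of the multivariate image analysis step, the snippet below runs PCA on an unfolded hyperspectral cube so that per-pixel score maps can be refolded into images highlighting spectrally distinct components; the random cube and component count are stand-ins for real Raman data.

```python
import numpy as np
from sklearn.decomposition import PCA

rows, cols, n_bands = 64, 64, 512
cube = np.random.rand(rows, cols, n_bands)   # stand-in for measured spectra

X = cube.reshape(-1, n_bands)                # unfold: one spectrum per pixel
pca = PCA(n_components=3)
scores = pca.fit_transform(X)                # per-pixel component scores

# Refold score maps to image geometry; each map highlights pixels dominated
# by one spectral component (e.g., cellulose- vs. pectin-like signatures).
score_maps = scores.reshape(rows, cols, 3)
print(score_maps.shape, pca.explained_variance_ratio_)
```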
Ilovich, Ohad; Qutaish, Mohammed; Hesterman, Jacob; Orcutt, Kelly; Hoppin, Jack; Polyak, Ildiko; Seaman, Marc; Abu-Yousif, Adnan; Cvet, Donna; Bradley, Daniel
2018-05-04
In vitro properties of antibody drug conjugates (ADCs) such as binding, internalization, and cytotoxicity are often well characterized prior to in vivo studies. Interpretation of in vivo studies could be significantly enhanced by molecular imaging tools. We present here a dual-isotope cryo-imaging quantitative autoradiography (CIQA) methodology combined with advanced 3D imaging and analysis, allowing for the simultaneous study of both antibody and payload distribution in tissues of interest in a pre-clinical setting. Methods: TAK-264, an investigational anti-guanylyl cyclase C (GCC)-targeting ADC, was synthesized utilizing tritiated monomethyl auristatin E (MMAE). The tritiated ADC was then conjugated to DTPA, labeled with indium-111 and evaluated in vivo in GCC-positive and GCC-negative tumor-bearing animals. Results: CIQA reveals the time course of drug release from the ADC and its distribution into tumor regions that are less accessible to the antibody. For GCC-positive tumors, a representative section obtained 96 hours post tracer injection showed only 0.8% of the voxels with co-localized signal, versus over 15% of the voxels for a GCC-negative tumor section, suggesting successful and specific cleaving of the toxin in the antigen-positive lesions. Conclusion: The combination of an established autoradiography technology with advanced image analysis methodologies affords an experimental tool that can support detailed characterization of ADC tumor penetration and pharmacokinetics. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
A survey on deep learning in medical image analysis.
Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I
2017-12-01
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe
2015-08-01
FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B at both imaging time points. Feature values depended on the intensity resolution and, of the two assessed methods, RB was shown to allow for a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (used as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and their interpretation, emphasizing the importance of standardized methodology in tumor texture analysis.
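The two discretization schemes compared above are simple to state in code. The sketch below implements both: RD with a fixed number of bins D (bin size varies per image) and RB with a fixed bin size B in SUV units (bin count varies per image); the exact rounding convention is an assumption.

```python
import numpy as np

def discretize_rd(suv, D=64):
    # RD: the SUV range of this image is split into D equally spaced bins,
    # so the intensity resolution (bin size) differs from image to image.
    edges = np.linspace(suv.min(), suv.max(), D + 1)
    return np.digitize(suv, edges[1:-1])      # bin indices 0 .. D-1

def discretize_rb(suv, B=0.5):
    # RB: every bin spans exactly B SUV, identical across all images,
    # which keeps bin boundaries comparable between patients and scans.
    return np.floor(suv / B).astype(int)

suv = np.random.gamma(shape=3.0, scale=1.5, size=(32, 32))  # toy SUV map
print(discretize_rd(suv).max() + 1, "RD bins;",
      discretize_rb(suv).max() + 1, "RB bins")
```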
Banzato, Tommaso; Fiore, Enrico; Morgante, Massimo; Manuali, Elisabetta; Zotti, Alessandro
2016-10-01
Hepatic lipidosis is the most widespread hepatic disease in the lactating cow. A new methodology to estimate the degree of fatty infiltration of the liver in lactating cows by means of texture analysis of B-mode ultrasound images is proposed. B-mode ultrasonography of the liver was performed in 48 Holstein Friesian cows using standardized ultrasound parameters. Liver biopsies to determine the triacylglycerol content of the liver (TAGqa) were obtained from each animal. A large number of texture parameters were calculated on the ultrasound images using free software. Based on the TAGqa content of the liver, 29 samples were classified as mild (TAGqa < 50 mg/g), 6 as moderate (50 mg/g
Nguyen, Phan; Bashirzadeh, Farzad; Hundloe, Justin; Salvado, Olivier; Dowson, Nicholas; Ware, Robert; Masters, Ian Brent; Bhatt, Manoj; Kumar, Aravind Ravi; Fielding, David
2012-03-01
Morphologic and sonographic features of endobronchial ultrasound (EBUS) convex probe images are helpful in predicting metastatic lymph nodes. Grey scale texture analysis is a well-established methodology that has been applied to ultrasound images in other fields of medicine. The aim of this study was to determine if this methodology could differentiate between benign and malignant lymphadenopathy on EBUS images. Lymph nodes from digital images of EBUS procedures were manually mapped to obtain a region of interest and were analyzed in a prediction set. The regions of interest were analyzed for the following grey scale texture features in MATLAB (version 7.8.0.347 [R2009a]): mean pixel value, difference between maximal and minimal pixel value, SEM pixel value, entropy, correlation, energy, and homogeneity. Significant grey scale texture features were used to assess a validation set compared with fluorodeoxyglucose (FDG)-PET-CT scan findings where available. Fifty-two malignant nodes and 48 benign nodes were in the prediction set. Malignant nodes had a greater difference between the maximal and minimal pixel values, SEM pixel value, entropy, and correlation, and a lower energy (P < .0001 for all values). Fifty-one lymph nodes were in the validation set; 44 of 51 (86.3%) were classified correctly. Eighteen of these lymph nodes also had FDG-PET-CT scan assessment, which correctly classified 14 of 18 nodes (77.8%), compared with grey scale texture analysis, which correctly classified 16 of 18 nodes (88.9%). Grey scale texture analysis of EBUS convex probe images can be used to differentiate malignant and benign lymphadenopathy. Preliminary results are comparable to FDG-PET-CT scan.
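Most of the listed features can be computed with a few lines of NumPy and scikit-image. The sketch below reproduces the feature set on a toy region of interest; parameters (offset, angle, grey levels) are illustrative assumptions rather than the study's exact MATLAB settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(roi):
    """Grey scale texture features for a uint8 region of interest."""
    feats = {
        "mean": roi.mean(),
        "max_min_diff": int(roi.max()) - int(roi.min()),
        "sem": roi.std(ddof=1) / np.sqrt(roi.size),
    }
    # Shannon entropy from the grey-level histogram.
    p = np.bincount(roi.ravel(), minlength=256) / roi.size
    p = p[p > 0]
    feats["entropy"] = -np.sum(p * np.log2(p))

    # Co-occurrence-based features at a 1-pixel horizontal offset.
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    for prop in ("correlation", "energy", "homogeneity"):
        feats[prop] = graycoprops(glcm, prop)[0, 0]
    return feats

roi = (np.random.rand(50, 50) * 255).astype(np.uint8)  # stand-in node ROI
print(texture_features(roi))
```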
NASA Astrophysics Data System (ADS)
Carneiro, Renato Lajarim; Poppi, Ronei Jesus
2014-01-01
In the present work the homogeneity of a pharmaceutical formulation presented as a cream was studied using infrared imaging spectroscopy and chemometric methodologies such as principal component analysis (PCA) and multivariate curve resolution with alternating least squares (MCR-ALS). A cream formulation, presented as an emulsion, was prepared using imiquimod as the active pharmaceutical ingredient (API) and the excipients water, vaseline, an emulsifier and a carboxylic acid to dissolve the API. After exposure at 45 °C for 3 months in an accelerated stability test, the presence of some crystals was observed, indicating homogeneity problems in the formulation. PCA exploratory analysis showed that the crystal composition was different from that of the emulsion, since the score maps revealed crystal structures within the emulsion. MCR-ALS estimated the spectra of the crystals and the emulsion. The crystals presented amine and C-H bands, suggesting that the precipitate was a salt formed by the carboxylic acid and imiquimod. These results indicate the potential of infrared imaging spectroscopy in conjunction with chemometric methodologies as an analytical tool to ensure the quality of cream formulations in the pharmaceutical industry.
Kids'Cam: An Objective Methodology to Study the World in Which Children Live.
Signal, Louise N; Smith, Moira B; Barr, Michelle; Stanley, James; Chambers, Tim J; Zhou, Jiang; Duane, Aaron; Jenkin, Gabrielle L S; Pearson, Amber L; Gurrin, Cathal; Smeaton, Alan F; Hoek, Janet; Ni Mhurchu, Cliona
2017-09-01
This paper reports on a new methodology to objectively study the world in which children live. The primary research study (Kids'Cam Food Marketing) illustrates the method; numerous ancillary studies include exploration of children's exposure to alcohol, smoking, "blue" space and gambling, and their use of "green" space, transport, and sun protection. One hundred sixty-eight randomly selected children (aged 11-13 years) recruited from 16 randomly selected schools in Wellington, New Zealand used wearable cameras and GPS units for 4 days, recording imagery every 7 seconds and longitude/latitude locations every 5 seconds. Data were collected from July 2014 to June 2015. Analysis commenced in 2015 and is ongoing. Bespoke software was used to manually code images for variables of interest including setting, marketing media, and product category to produce variables for statistical analysis. GPS data were extracted and cleaned in ArcGIS, version 10.3 for exposure spatial analysis. Approximately 1.4 million images and 2.2 million GPS coordinates were generated (most were usable) from many settings including the difficult to measure aspects of exposures in the home, at school, and during leisure time. The method is ethical, legal, and acceptable to children and the wider community. This methodology enabled objective analysis of the world in which children live. The main arm examined the frequency and nature of children's exposure to food and beverage marketing and provided data on difficult to measure settings. The methodology will likely generate robust evidence facilitating more effective policymaking to address numerous public health concerns. Copyright © 2017. Published by Elsevier Inc.
Invariant correlation to position and rotation using a binary mask applied to binary and gray images
NASA Astrophysics Data System (ADS)
Álvarez-Borrego, Josué; Solorza, Selene; Bueno-Ibarra, Mario A.
2013-05-01
In this paper, further alternative ways to generate binary ring masks are studied, and a new methodology is presented for analyses in which the image is distorted by rotation. The new algorithm has a low computational cost. Signature vectors of the target, like those of the object to be recognized in the problem image, are obtained using a binary ring mask constructed from either the real or the imaginary part of the Fourier transform, analyzing two different conditions for each. In this manner, each target or problem image has four unique binary ring masks. The four variants are analyzed and the best is chosen. In addition, because any rotated image includes some distortion, the best transect in the Fourier plane is chosen in order to obtain the best signature among the different ways of building the binary mask. The methodology is applied to two cases: identifying different alphabetic letters in Arial font and identifying different fossil diatom images. Considering the great similarity among diatom images, the results obtained are excellent.
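A stripped-down version of the signature extraction is sketched below: a binary mask is built from the sign of the real part of the image's Fourier spectrum (one of the four variants discussed), and a rotation-tolerant signature is read off ring by ring; the ring construction details are simplified assumptions.

```python
import numpy as np

def ring_mask_signature(img, n_rings=30):
    F = np.fft.fftshift(np.fft.fft2(img))
    binary = np.real(F) > 0          # binary mask from the real part's sign

    # Concentric rings indexed by radius from the spectrum center.
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)
    edges = np.linspace(0, r.max(), n_rings + 1)

    # Signature: spectral magnitude summed over the masked pixels of each ring.
    mag = np.abs(F) * binary
    return np.array([mag[(r >= edges[i]) & (r < edges[i + 1])].sum()
                     for i in range(n_rings)])

img = np.random.rand(128, 128)       # stand-in letter or diatom image
sig = ring_mask_signature(img)
print(sig.shape)                     # (30,) -> compared against target signatures
```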
Bjornsson, Christopher S; Lin, Gang; Al-Kofahi, Yousef; Narayanaswamy, Arunachalam; Smith, Karen L; Shain, William; Roysam, Badrinath
2009-01-01
Brain structural complexity has confounded prior efforts to extract quantitative image-based measurements. We present a systematic ‘divide and conquer’ methodology for analyzing three-dimensional (3D) multi-parameter images of brain tissue to delineate and classify key structures, and compute quantitative associations among them. To demonstrate the method, thick (~100 μm) slices of rat brain tissue were labeled using 3 – 5 fluorescent signals, and imaged using spectral confocal microscopy and unmixing algorithms. Automated 3D segmentation and tracing algorithms were used to delineate cell nuclei, vasculature, and cell processes. From these segmentations, a set of 23 intrinsic and 8 associative image-based measurements was computed for each cell. These features were used to classify astrocytes, microglia, neurons, and endothelial cells. Associations among cells and between cells and vasculature were computed and represented as graphical networks to enable further analysis. The automated results were validated using a graphical interface that permits investigator inspection and corrective editing of each cell in 3D. Nuclear counting accuracy was >89%, and cell classification accuracy ranged from 81–92% depending on cell type. We present a software system named FARSIGHT implementing our methodology. Its output is a detailed XML file containing measurements that may be used for diverse quantitative hypothesis-driven and exploratory studies of the central nervous system. PMID:18294697
Automated image-based phenotypic analysis in zebrafish embryos
Vogt, Andreas; Cholewinski, Andrzej; Shen, Xiaoqiang; Nelson, Scott; Lazo, John S.; Tsang, Michael; Hukriede, Neil A.
2009-01-01
Presently, the zebrafish is the only vertebrate model compatible with contemporary paradigms of drug discovery. Zebrafish embryos are amenable to the automation necessary for high-throughput chemical screens, and optical transparency makes them potentially suited for image-based screening. However, the lack of tools for automated analysis of complex images presents an obstacle to utilizing the zebrafish as a high-throughput screening model. We have developed an automated system for imaging and analyzing zebrafish embryos in multi-well plates regardless of embryo orientation and without user intervention. Images of fluorescent embryos were acquired on a high-content reader and analyzed using an artificial intelligence-based image analysis method termed Cognition Network Technology (CNT). CNT reliably detected transgenic fluorescent embryos (Tg(fli1:EGFP)y1) arrayed in 96-well plates and quantified intersegmental blood vessel development in embryos treated with small molecule inhibitors of angiogenesis. The results demonstrate it is feasible to adapt image-based high-content screening methodology to measure complex whole organism phenotypes. PMID:19235725
Quantitative imaging assay for NF-κB nuclear translocation in primary human macrophages
Noursadeghi, Mahdad; Tsang, Jhen; Haustein, Thomas; Miller, Robert F.; Chain, Benjamin M.; Katz, David R.
2008-01-01
Quantitative measurement of NF-κB nuclear translocation is an important research tool in cellular immunology. Established methodologies have a number of limitations, such as poor sensitivity, high cost or dependence on cell lines. Novel imaging methods to measure nuclear translocation of transcriptionally active components of NF-κB are being used but are also partly limited by the need for specialist imaging equipment or image analysis software. Herein we present a method for quantitative detection of NF-κB RelA nuclear translocation, using immunofluorescence microscopy and the public domain image analysis software ImageJ, that can be easily adopted for cellular immunology research without the need for specialist image analysis expertise and at low cost. The method presented here is validated by demonstrating the time course and dose response of NF-κB nuclear translocation in primary human macrophages stimulated with LPS, and by comparison with a commercial NF-κB activation reporter cell line. PMID:18036607
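A common way to express such a readout is the ratio of mean nuclear to mean cytoplasmic NF-κB staining intensity per cell. The sketch below computes that ratio from an immunofluorescence image and two masks; the ratio-based convention and all array names are assumptions for illustration, not necessarily the authors' exact metric.

```python
import numpy as np

def translocation_ratio(nfkb, nucleus_mask, cell_mask):
    """Mean nuclear / mean cytoplasmic intensity; > 1 suggests translocation."""
    cytoplasm_mask = cell_mask & ~nucleus_mask
    return nfkb[nucleus_mask].mean() / nfkb[cytoplasm_mask].mean()

# Toy single cell with a brighter nucleus (stimulated-like state).
nfkb = np.full((40, 40), 50.0)
nucleus = np.zeros((40, 40), bool); nucleus[15:25, 15:25] = True
cell = np.zeros((40, 40), bool); cell[5:35, 5:35] = True
nfkb[nucleus] = 120.0
print(round(translocation_ratio(nfkb, nucleus, cell), 2))  # -> 2.4
```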
Kinjo, Masataka
2018-01-01
Neurodegenerative diseases, including amyotrophic lateral sclerosis (ALS), Alzheimer’s disease, Parkinson’s disease, and Huntington’s disease, are devastating proteinopathies with misfolded protein aggregates accumulating in neuronal cells. Inclusion bodies of protein aggregates are frequently observed in the neuronal cells of patients. Investigation of the underlying causes of neurodegeneration requires the establishment and selection of appropriate methodologies for detailed investigation of the state and conformation of protein aggregates. In the current review, we present an overview of the principles and application of several methodologies used for the elucidation of protein aggregation, specifically ones based on determination of fluctuations of fluorescence. The discussed methods include fluorescence correlation spectroscopy (FCS), imaging FCS, image correlation spectroscopy (ICS), photobleaching ICS (pbICS), number and brightness (N&B) analysis, super-resolution optical fluctuation imaging (SOFI), and transient state (TRAST) monitoring spectroscopy. Some of these methodologies are classical protein aggregation analyses, while others are not yet widely used. Collectively, the methods presented here should help the future development of research not only into protein aggregation but also neurodegenerative diseases. PMID:29570669
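At the heart of several of the fluctuation methods listed above (FCS and its imaging variants) is the normalized intensity autocorrelation G(tau) = &lt;dF(t) dF(t+tau)&gt; / &lt;F&gt;^2. The sketch below computes it from a toy fluorescence trace; real FCS instruments use hardware correlators and fit G(tau) to diffusion models.

```python
import numpy as np

def autocorrelation(trace, max_lag):
    """Normalized fluorescence-fluctuation autocorrelation G(tau)."""
    dF = trace - trace.mean()
    return np.array([np.mean(dF[:len(dF) - lag] * dF[lag:]) / trace.mean() ** 2
                     for lag in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
trace = 100.0 + rng.normal(0, 10, size=100_000)  # stand-in photon-count trace
G = autocorrelation(trace, max_lag=50)
print(G[:3])  # decays toward zero; amplitude scales as 1/N molecules in focus
```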
Computer analysis of gallbladder ultrasonic images towards recognition of pathological lesions
NASA Astrophysics Data System (ADS)
Ogiela, M. R.; Bodzioch, S.
2011-06-01
This paper presents a new approach to gallbladder ultrasonic (US) image processing and analysis aimed at automatic detection and interpretation of disease symptoms on processed US images. First, a new heuristic method of filtering gallbladder contours from images is presented. A major stage in this filtration is to segment and section off the areas occupied by the organ. The paper provides an inventive algorithm for the holistic extraction of gallbladder image contours, based on rank filtration as well as on the analysis of line profile sections of the examined organs. The second part concerns detecting the most important lesion symptoms of the gallbladder. Automating a diagnostic process always comes down to developing algorithms that analyze the object of diagnosis and verify the occurrence of symptoms related to a given affection. The methodology of computer analysis of US gallbladder images presented here is clearly utilitarian in nature and, after standardization, can be used as a technique for supporting the diagnosis of selected gallbladder disorders using images of this organ.
Daniel, Floréal; Mounier, Aurélie; Pérez-Arantegui, Josefina; Pardos, Carlos; Prieto-Taboada, Nagore; Fdez-Ortiz de Vallejuelo, Silvia; Castro, Kepa
2017-06-01
The development of non-invasive techniques for the characterization of pigments is crucial in order to preserve the integrity of the artwork. In this sense, the usefulness of hyperspectral imaging was demonstrated: it allows pigment characterization across the whole painting, although it sometimes requires complementation by other point-by-point techniques. In the present article, the advantages of hyperspectral imaging over point-by-point spectroscopic analysis were evaluated. For that purpose, three paintings were analysed by hyperspectral imaging, handheld X-ray fluorescence and handheld Raman spectroscopy in order to determine the best non-invasive technique for pigment identification. Thanks to this work, the main pigments used in Aragonese artworks, and especially in Goya's paintings, were identified and mapped by imaging reflection spectroscopy. All the analysed pigments corresponded to those used at the time of Goya. Regarding the techniques used, the information obtained by hyperspectral imaging and point-by-point analysis was, in general, different and complementary. Given this fact, selecting only one technique is not recommended, and the present work demonstrates the usefulness of combining all the techniques used as the best non-invasive methodology for pigment characterization. Moreover, the proposed methodology is a relatively quick procedure that allows a larger number of Goya's paintings in the museum to be surveyed, increasing the possibility of obtaining significant results and providing a chance for extensive comparisons, which are relevant from the point of view of art history.
NASA Astrophysics Data System (ADS)
Korte, Stefan; Berger, Roland; Hänze, Martin
2017-05-01
We assessed the impact of teaching methodological aspects of physics on students' scientistic beliefs and subject interest in physics in a repeated-measurement design with a total of 142 students of upper secondary physics classes. Students gained knowledge of methodological aspects from the pre-test to the post-test and reported reduced scientistic beliefs, both from their own views and from their presumed prototypical physicists' views. We found no direct impact of teaching on students' subject interest in physics. As path analysis indicates, this result can be traced back to opposing paths: Lower scientistic beliefs of students attenuate subject interest while lower presumed scientistic beliefs that they hold of physicists foster subject interest. This finding is in accordance with the self-to-prototype matching theory that predicts an impact of the overlap between students' self-image and their prototypical image on subject interest in physics.
NASA Astrophysics Data System (ADS)
Mohammad Sadeghi, Majid; Kececi, Emin Faruk; Bilsel, Kerem; Aralasmak, Ayse
2017-03-01
Medical imaging has great importance in the earlier detection, better treatment and follow-up of diseases. 3D medical image analysis of CT and MRI images has also been used to aid surgery by enabling patient-specific implant fabrication, for which a precise three-dimensional model of the associated body parts is essential. In this paper, a 3D image processing methodology for finding the plane on which the glenoid surface has a maximum surface area is proposed. Finding this surface is the first step in designing a patient-specific shoulder joint implant.
An evaluation of the directed flow graph methodology
NASA Technical Reports Server (NTRS)
Snyder, W. E.; Rajala, S. A.
1984-01-01
The applicability of the Directed Graph Methodology (DGM) to the design and analysis of special purpose image and signal processing hardware was evaluated. A special purpose image processing system was designed and described using DGM. The design, suitable for very large scale integration (VLSI) implements a region labeling technique. Two computer chips were designed, both using metal-nitride-oxide-silicon (MNOS) technology, as well as a functional system utilizing those chips to perform real time region labeling. The system is described in terms of DGM primitives. As it is currently implemented, DGM is inappropriate for describing synchronous, tightly coupled, special purpose systems. The nature of the DGM formalism lends itself more readily to modeling networks of general purpose processors.
Delakis, Ioannis; Hammad, Omer; Kitney, Richard I
2007-07-07
Wavelet-based de-noising has been shown to improve image signal-to-noise ratio in magnetic resonance imaging (MRI) while maintaining spatial resolution. Wavelet-based de-noising techniques typically implemented in MRI require that noise displays uniform spatial distribution. However, images acquired with parallel MRI have spatially varying noise levels. In this work, a new algorithm for filtering images with parallel MRI is presented. The proposed algorithm extracts the edges from the original image and then generates a noise map from the wavelet coefficients at finer scales. The noise map is zeroed at locations where edges have been detected and directional analysis is also used to calculate noise in regions of low-contrast edges that may not have been detected. The new methodology was applied on phantom and brain images and compared with other applicable de-noising techniques. The performance of the proposed algorithm was shown to be comparable with other techniques in central areas of the images, where noise levels are high. In addition, finer details and edges were maintained in peripheral areas, where noise levels are low. The proposed methodology is fully automated and can be applied on final reconstructed images without requiring sensitivity profiles or noise matrices of the receiver coils, therefore making it suitable for implementation in a clinical MRI setting.
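A simplified version of such an edge-aware wavelet filter is sketched below with PyWavelets: detail coefficients are soft-thresholded using a noise estimate from the finest scale, and detected edges are preserved. The thresholding rule and edge handling are simplified assumptions, not the paper's exact scheme (which builds a spatially varying noise map with directional analysis).

```python
import numpy as np
import pywt
from skimage import filters

def denoise(img, wavelet="db4", level=2):
    grad = filters.sobel(img)
    edges = grad > filters.threshold_otsu(grad)   # edge map to protect

    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate (MAD) from the finest-scale detail coefficients.
    sigma = np.median(np.abs(coeffs[-1][0])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(img.size))   # universal threshold

    # Soft-threshold all detail subbands; a spatially varying noise map,
    # zeroed at edge locations, would modulate `thr` per pixel here.
    new_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(d, thr, mode="soft") for d in details)
        for details in coeffs[1:]
    ]
    out = pywt.waverec2(new_coeffs, wavelet)[:img.shape[0], :img.shape[1]]
    out[edges] = img[edges]                       # keep original edge pixels
    return out

print(denoise(np.random.rand(128, 128)).shape)
```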
Uranga, Jon; Arrizabalaga, Haritz; Boyra, Guillermo; Hernandez, Maria Carmen; Goñi, Nicolas; Arregui, Igor; Fernandes, Jose A; Yurramendi, Yosu; Santiago, Josu
2017-01-01
This study presents a methodology for the automated analysis of commercial medium-range sonar signals for detecting presence/absence of bluefin tuna (Tunnus thynnus) in the Bay of Biscay. The approach uses image processing techniques to analyze sonar screenshots. For each sonar image we extracted measurable regions and analyzed their characteristics. Scientific data was used to classify each region into a class ("tuna" or "no-tuna") and build a dataset to train and evaluate classification models by using supervised learning. The methodology performed well when validated with commercial sonar screenshots, and has the potential to automatically analyze high volumes of data at a low cost. This represents a first milestone towards the development of acoustic, fishery-independent indices of abundance for bluefin tuna in the Bay of Biscay. Future research lines and additional alternatives to inform stock assessments are also discussed.
Characterizing pigments with hyperspectral imaging variable false-color composites
NASA Astrophysics Data System (ADS)
Hayem-Ghez, Anita; Ravaud, Elisabeth; Boust, Clotilde; Bastian, Gilles; Menu, Michel; Brodie-Linder, Nancy
2015-11-01
Hyperspectral imaging has been used for pigment characterization on paintings for the last 10 years. It is a noninvasive technique that combines the power of spectrophotometry with that of imaging technologies. We have access to a visible and near-infrared hyperspectral camera, ranging from 400 to 1000 nm in 80-160 spectral bands. In order to treat the large amount of data that this imaging technique generates, one can use statistical tools such as principal component analysis (PCA). To characterize pigments, researchers mostly use PCA, convex geometry algorithms and the comparison of resulting clusters to database spectra with a specific tolerance (like the Spectral Angle Mapper tool in the dedicated software ENVI). Our approach originates from false-color photography and aims at providing a simple tool to identify pigments with imaging spectroscopy. It can be considered a quick first analysis to reveal the principal pigments of a painting, before using a more complete multivariate statistical tool. We study pigment spectra for each kind of hue (blue, green, red and yellow) to identify the wavelengths that maximize spectral differences. The case of red pigments is the most interesting because our methodology discriminates the red pigments very well, even red lakes, which are always difficult to identify. As for the yellow and blue categories, the method represents clear progress over infrared false-color (IRFC) photography for pigment discrimination. We apply our methodology to study the pigments on a painting by Eustache Le Sueur, a French painter of the seventeenth century, and compare the results to other noninvasive analyses such as X-ray fluorescence and optical microscopy. Finally, we draw conclusions about the advantages and limits of the variable false-color image method using hyperspectral imaging.
Kikuchi, Kumiko; Masuda, Yuji; Yamashita, Toyonobu; Kawai, Eriko; Hirao, Tetsuji
2015-05-01
Heterogeneity with respect to skin color tone is one of the key factors in visual perception of facial attractiveness and age. However, there have been few studies on quantitative analyses of the color heterogeneity of facial skin. The purpose of this study was to develop image evaluation methods for skin color heterogeneity focusing on skin chromophores and then characterize ethnic differences and age-related changes. A facial imaging system equipped with an illumination unit and a high-resolution digital camera was used to develop image evaluation methods for skin color heterogeneity. First, melanin and/or hemoglobin images were obtained using pigment-specific image-processing techniques, which involved conversion from Commission Internationale de l'Eclairage XYZ color values to melanin and/or hemoglobin indexes as measures of their contents. Second, a spatial frequency analysis with threshold settings was applied to the individual images. Cheek skin images of 194 healthy Asian and Caucasian female subjects were acquired using the imaging system. Applying this methodology, the skin color heterogeneity of Asian and Caucasian faces was characterized. The proposed pigment-specific image-processing techniques allowed visual discrimination of skin redness from skin pigmentation. In the heterogeneity analyses of cheek skin color, age-related changes in melanin were clearly detected in Asian and Caucasian skin. Furthermore, it was found that the heterogeneity indexes of hemoglobin were significantly higher in Caucasian skin than in Asian skin. We have developed evaluation methods for skin color heterogeneity by image analyses based on the major chromophores, melanin and hemoglobin, with special reference to their size. This methodology focusing on skin color heterogeneity should be useful for better understanding of aging and ethnic differences. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Image-derived input function with factor analysis and a-priori information.
Simončič, Urban; Zanotti-Fregonara, Paolo
2015-02-01
Quantitative PET studies often require the cumbersome and invasive procedure of arterial cannulation to measure the input function. This study sought to minimize the number of necessary blood samples by developing a factor-analysis-based image-derived input function (IDIF) methodology for dynamic PET brain studies. IDIF estimation was performed as follows: (a) carotid and background regions were segmented manually on an early PET time frame; (b) blood-weighted and tissue-weighted time-activity curves (TACs) were extracted with factor analysis; (c) factor analysis results were denoised and scaled using the voxels with the highest blood signal; (d) using population data and one blood sample at 40 min, the whole-blood TAC was estimated from the postprocessed factor analysis results; and (e) the parent concentration was finally estimated by correcting the whole-blood curve with measured radiometabolite concentrations. The methodology was tested using data from 10 healthy individuals imaged with [(11)C](R)-rolipram. The accuracy of the IDIFs was assessed against full arterial sampling by comparing the area under the curve of the input functions and by calculating the total distribution volume (VT). The shape of the image-derived whole-blood TAC matched the reference arterial curves well, and the whole-blood areas under the curve were accurately estimated (mean error 1.0±4.3%). The relative Logan-V(T) error was -4.1±6.4%. Compartmental modeling and spectral analysis gave less accurate V(T) results compared with Logan. A factor-analysis-based IDIF for [(11)C](R)-rolipram brain PET studies that relies on a single blood sample and population data can be used for accurate quantification of Logan-V(T) values.
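Step (b), extracting blood- and tissue-weighted kinetics from the carotid region, can be prototyped with a matrix factorization. In the sketch below, non-negative matrix factorization stands in for the paper's factor analysis; the data shapes and the earliest-peak heuristic for picking the blood factor are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

n_voxels, n_frames = 500, 30
tacs = np.abs(np.random.rand(n_voxels, n_frames))  # stand-in voxel TACs

nmf = NMF(n_components=2, init="nndsvda", max_iter=500)
weights = nmf.fit_transform(tacs)   # per-voxel loading on each factor
factors = nmf.components_           # candidate blood- and tissue-weighted TACs

# Heuristic: the blood factor peaks earliest after injection. It would then
# be denoised and rescaled with the hottest voxels and one venous sample,
# as in steps (c)-(d) above.
blood = factors[np.argmin(factors.argmax(axis=1))]
print(blood.shape)
```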
Biologically-inspired data decorrelation for hyper-spectral imaging
NASA Astrophysics Data System (ADS)
Picon, Artzai; Ghita, Ovidiu; Rodriguez-Vaamonde, Sergio; Iriondo, Pedro Ma; Whelan, Paul F.
2011-12-01
Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods require complex and subjective training procedures and, in addition, the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification.
Masè, Michela; Cristoforetti, Alessandro; Avogaro, Laura; Tessarolo, Francesco; Piccoli, Federico; Caola, Iole; Pederzolli, Carlo; Graffigna, Angelo; Ravelli, Flavia
2015-01-01
The assessment of collagen structure in cardiac pathology, such as atrial fibrillation (AF), is essential for a complete understanding of the disease. This paper introduces a novel methodology for the quantitative description of collagen network properties, based on the combination of nonlinear optical microscopy with a spectral approach of image processing and analysis. Second-harmonic generation (SHG) microscopy was applied to atrial tissue samples from cardiac surgery patients, providing label-free, selective visualization of the collagen structure. The spectral analysis framework, based on 2D-FFT, was applied to the SHG images, yielding a multiparametric description of collagen fiber orientation (angle and anisotropy indexes) and texture scale (dominant wavelength and peak dispersion indexes). The proof-of-concept application of the methodology showed the capability of our approach to detect and quantify differences in the structural properties of the collagen network in AF versus sinus rhythm patients. These results suggest the potential of our approach in the assessment of collagen properties in cardiac pathologies related to a fibrotic structural component.
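The core of the spectral approach (orientation and anisotropy from the 2D power spectrum) can be prototyped in a few lines. In the sketch below the dominant angle and a peak-to-mean anisotropy index are read from the angular distribution of spectral power; the index definitions are common conventions, not necessarily the authors' exact formulas, and note that the spectral peak lies perpendicular to the fiber direction.

```python
import numpy as np

def orientation_stats(img, n_angles=180):
    F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(F) ** 2

    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    theta = np.arctan2(yy - h // 2, xx - w // 2) % np.pi   # orientation 0..pi

    # Angular distribution of spectral power in n_angles bins.
    bins = (theta / np.pi * n_angles).astype(int) % n_angles
    angular = np.bincount(bins.ravel(), weights=power.ravel(),
                          minlength=n_angles)

    dominant_deg = angular.argmax() * 180.0 / n_angles
    anisotropy = angular.max() / angular.mean()   # ~1 means isotropic texture
    return dominant_deg, anisotropy

stripes = np.tile(np.sin(np.linspace(0, 20 * np.pi, 128)), (128, 1))
print(orientation_stats(stripes))   # strongly anisotropic toy "fibers"
```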
NASA Astrophysics Data System (ADS)
Felipe-Sesé, Luis; López-Alba, Elías; Siegmann, Philip; Díaz, Francisco A.
2016-12-01
A low-cost approach for three-dimensional (3-D) full-field displacement measurement is applied to the analysis of the large displacements involved in two different mechanical events. The method is based on a combination of fringe projection and two-dimensional digital image correlation (DIC) techniques. The two techniques were employed simultaneously using an RGB camera and a color encoding method; therefore, it is possible to measure in-plane and out-of-plane displacements at the same time with only one camera, even at high speed rates. The proposed methodology was employed for the analysis of large displacements during contact experiments on a soft material block. Displacement results were successfully compared with those obtained using a 3D-DIC commercial system. Moreover, the analysis of displacements during an impact test on a metal plate was performed to emphasize the application of the methodology to dynamic events. Results show a good level of agreement, highlighting the potential of FP + 2D DIC as a low-cost alternative for the analysis of large-deformation problems.
NASA Astrophysics Data System (ADS)
Tustison, Nicholas J.; Contrella, Benjamin; Altes, Talissa A.; Avants, Brian B.; de Lange, Eduard E.; Mugler, John P.
2013-03-01
The utility of pulmonary functional imaging techniques, such as hyperpolarized 3He MRI, has encouraged their inclusion in research studies for longitudinal assessment of disease progression and the study of treatment effects. We present methodology for performing voxelwise statistical analysis of ventilation maps derived from hyperpolarized 3He MRI, which incorporates multivariate template construction using simultaneously acquired 1H and 3He images. Additional processing steps include intensity normalization, bias correction, 4-D longitudinal segmentation, and generation of expected ventilation maps prior to voxelwise regression analysis. The analysis is demonstrated on a cohort of eight individuals with diagnosed cystic fibrosis (CF) undergoing treatment, imaged five times at two-week intervals under a prescribed treatment schedule.
Analytical robustness of quantitative NIR chemical imaging for Islamic paper characterization
NASA Astrophysics Data System (ADS)
Mahgoub, Hend; Gilchrist, John R.; Fearn, Thomas; Strlič, Matija
2017-07-01
Recently, spectral imaging techniques such as multispectral imaging (MSI) and hyperspectral imaging (HSI) have gained importance in the field of heritage conservation. This paper explores the analytical robustness of quantitative chemical imaging for Islamic paper characterization by focusing on the effect of different measurement and processing parameters, i.e. acquisition conditions and calibration, on the accuracy of the collected spectral data. This will provide a better understanding of a technique that can provide a measure of change in collections through imaging. For the quantitative model, a special calibration target was devised using 105 samples from a well-characterized reference Islamic paper collection. Two material properties were of interest: starch sizing and cellulose degree of polymerization (DP). Multivariate data analysis methods were used to develop discrimination and regression models, which served as an evaluation methodology for the metrology of quantitative NIR chemical imaging. Spectral data were collected using a pushbroom HSI scanner (Gilden Photonics Ltd) in the 1000-2500 nm range with a spectral resolution of 6.3 nm, using a mirror scanning setup and halogen illumination. Data were acquired under different measurement conditions and acquisition parameters. Preliminary results demonstrated the potential of the evaluation methodology, indicating that measurement parameters such as the use of different lenses and different scanning backgrounds may not greatly influence the quantitative results. Moreover, the evaluation methodology allowed for the selection of the best pre-treatment method to be applied to the data.
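A typical regression model for such data is partial least squares (PLS) on per-sample mean spectra. The sketch below fits and cross-validates a PLS model for degree of polymerization; the spectra, reference values and component count are synthetic stand-ins, not the paper's calibration data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

n_samples, n_bands = 105, 238           # ~1000-2500 nm at 6.3 nm steps
X = np.random.rand(n_samples, n_bands)  # stand-in mean NIR spectra
y = np.random.rand(n_samples) * 2000    # stand-in DP reference values

pls = PLSRegression(n_components=8)
r2 = cross_val_score(pls, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", r2.mean())

# Applied pixel-wise to a calibrated hyperspectral cube, the fitted model
# yields a spatial DP map of a document rather than a single bulk value.
```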
Axelrod, Noel; Radko, Anna; Lewis, Aaron; Ben-Yosef, Nissim
2004-04-10
A methodology is described for phase restoration of an object function from differential interference contrast (DIC) images. The methodology involves collecting a set of DIC images in the same plane with different bias retardation between the two illuminating light components produced by a Wollaston prism. These images, together with one conventional bright-field image, allows for reduction of the phase deconvolution restoration problem from a highly complex nonlinear mathematical formulation to a set of linear equations that can be applied to resolve the phase for images with a relatively large number of pixels. Additionally, under certain conditions, an on-line atomic force imaging system that does not interfere with the standard DIC illumination modes resolves uncertainties in large topographical variations that generally lead to a basic problem in DIC imaging, i.e., phase unwrapping. Furthermore, the availability of confocal detection allows for a three-dimensional reconstruction with high accuracy of the refractive-index measurement of the object that is to be imaged. This has been applied to reconstruction of the refractive index of an arrayed waveguide in a region in which a defect in the sample is present. The results of this paper highlight the synergism of far-field microscopies integrated with scanned probe microscopies and restoration algorithms for phase reconstruction.
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to advances in sensor technology, growing volumes of large medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. At the same time, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383
NASA Astrophysics Data System (ADS)
Xu, Yuanhong; Liu, Jingquan; Zhang, Jizhen; Zong, Xidan; Jia, Xiaofang; Li, Dan; Wang, Erkang
2015-05-01
A portable lab-on-a-chip methodology to generate ionic liquid-functionalized carbon nanodots (CNDs) was developed via electrochemical oxidation of screen-printed carbon electrodes. The CNDs can be successfully applied for efficient cell imaging and solid-state electrochemiluminescence sensor fabrication on paper-based chips.
An update on technical and methodological aspects for cardiac PET applications.
Presotto, Luca; Busnardo, Elena; Gianolli, Luigi; Bettinardi, Valentino
2016-12-01
Positron emission tomography (PET) is indicated for a large number of cardiac diseases: perfusion and viability studies are commonly used to evaluate coronary artery disease; PET can also be used to assess sarcoidosis and endocarditis, as well as to investigate amyloidosis. Furthermore, a hot topic for research is plaque characterization. Most of these studies are technically very challenging. High count rates and short acquisition times characterize perfusion scans while very small targets have to be imaged in inflammation/infection and plaques examinations. Furthermore, cardiac PET suffers from respiratory and cardiac motion blur. Each type of studies has specific requirements from the technical and methodological point of view, thus PET systems with overall high performances are required. Furthermore, in the era of hybrid PET/computed tomography (CT) and PET/Magnetic Resonance Imaging (MRI) systems, the combination of complementary functional and anatomical information can be used to improve diagnosis and prognosis. Moreover, PET images can be qualitatively and quantitatively improved exploiting information from the other modality, using advanced algorithms. In this review we will report the latest technological and methodological innovations for PET cardiac applications, with particular reference to the state of the art of the hybrid PET/CT and PET/MRI. We will also report the most recent advancements in software, from reconstruction algorithms to image processing and analysis programs.
"Big Data" in Rheumatology: Intelligent Data Modeling Improves the Quality of Imaging Data.
Landewé, Robert B M; van der Heijde, Désirée
2018-05-01
Analysis of imaging data in rheumatology is a challenge. Reliability of scores is an issue for several reasons. Signal-to-noise ratio of most imaging techniques is rather unfavorable (too little signal in relation to too much noise). Optimal use of all available data may help to increase credibility of imaging data, but knowledge of complicated statistical methodology and the help of skilled statisticians are required. Clinicians should appreciate the merits of sophisticated data modeling and liaise with statisticians to increase the quality of imaging results, as proper imaging studies in rheumatology imply more than a supersensitive imaging technique alone. Copyright © 2018 Elsevier Inc. All rights reserved.
Modeling 4D Pathological Changes by Leveraging Normative Models
Wang, Bo; Prastawa, Marcel; Irimia, Andrei; Saha, Avishek; Liu, Wei; Goh, S.Y. Matthew; Vespa, Paul M.; Van Horn, John D.; Gerig, Guido
2016-01-01
With the increasing use of efficient multimodal 3D imaging, clinicians are able to access longitudinal imaging to stage pathological diseases, to monitor the efficacy of therapeutic interventions, or to assess and quantify rehabilitation efforts. Analysis of such four-dimensional (4D) image data presenting pathologies, including disappearing and newly appearing lesions, represents a significant challenge due to the presence of complex spatio-temporal changes. Image analysis methods for such 4D image data have to include not only a concept for joint segmentation of 3D datasets to account for inherent correlations of subject-specific repeated scans but also a mechanism to account for large deformations and the destruction and formation of lesions (e.g., edema, bleeding) due to underlying physiological processes associated with damage, intervention, and recovery. In this paper, we propose a novel framework that provides joint segmentation and registration to tackle the inherent problem of image registration in the presence of objects not present in all images of the time series. Our methodology models 4D changes in pathological anatomy across time and also provides an explicit mapping of a healthy normative template to a subject's image data with pathologies. Since atlas-moderated segmentation methods cannot explain the appearance and location of pathological structures that are not represented in the template atlas, the new framework provides different options for initialization via a supervised learning approach, iterative semisupervised active learning, and also transfer learning, which results in a fully automatic 4D segmentation method. We demonstrate the effectiveness of our novel approach with synthetic experiments and a 4D multimodal MRI dataset of severe traumatic brain injury (TBI), including validation via comparison to expert segmentations. The proposed methodology is generic with regard to different clinical applications requiring quantitative analysis of 4D imaging representing spatio-temporal changes of pathologies. PMID:27818606
Athanasiou, Lambros S; Rigas, George A; Sakellarios, Antonis I; Exarchos, Themis P; Siogkas, Panagiotis K; Naka, Katerina K; Panetta, Daniele; Pelosi, Gualtiero; Vozzi, Federico; Michalis, Lampros K; Parodi, Oberdan; Fotiadis, Dimitrios I
2015-10-01
A framework for the inflation of micro-CT and histology data using intravascular ultrasound (IVUS) images is presented. The proposed methodology consists of three steps. In the first step the micro-CT/histological images are manually co-registered with IVUS by experts using fiducial points as landmarks. In the second step the lumen of both the micro-CT/histological images and the IVUS images is automatically segmented. Finally, in the third step the micro-CT/histological images are inflated by applying a transformation method to each image. The transformation method is based on the contour difference between the IVUS and micro-CT/histological images. In order to validate the proposed image inflation methodology, plaque areas in the inflated micro-CT and histological images were compared with those in the IVUS images. The proposed methodology for inflating micro-CT/histological images increases the sensitivity of plaque area matching between the inflated and the IVUS images (7% and 22% in histological and micro-CT images, respectively). Copyright © 2015 Elsevier Ltd. All rights reserved.
Thompkins, Andie M.; Deshpande, Gopikrishna; Waggoner, Paul; Katz, Jeffrey S.
2017-01-01
Neuroimaging of the domestic dog is a rapidly expanding research topic in terms of the cognitive domains being investigated. Because dogs have shared both a physical and social world with humans for thousands of years, they provide a unique and socially relevant means of investigating a variety of shared human and canine psychological phenomena. Additionally, their trainability allows for neuroimaging to be carried out noninvasively in an awake and unrestrained state. In this review, a brief overview of functional magnetic resonance imaging (fMRI) is followed by an analysis of recent research with dogs using fMRI. Methodological and conceptual concerns found across multiple studies are raised, and solutions to these issues are suggested. With the research capabilities brought by canine functional imaging, findings may improve our understanding of canine cognitive processes, identify neural correlates of behavioral traits, and provide early-life selection measures for dogs in working roles. PMID:29456781
Oil Spill Detection: Past and Future Trends
NASA Astrophysics Data System (ADS)
Topouzelis, Konstantinos; Singha, Suman
2016-08-01
In the last 15 years, the detection of oil spills by satellite means has moved from experimental to operational. What has really changed is satellite image availability: from the age of "no data" in the late 1990s we have moved forward 15 years to the age of the "Sentinels", with an abundance of data. Whether from large accidents related to offshore oil exploration and production activity or from illegal discharges from tankers, oil on the sea surface can now be regularly monitored over European waters. National and transnational organizations (i.e. the European Maritime Safety Agency's 'CleanSeaNet' service) routinely use SAR imagery to detect oil owing to its all-weather, day-and-night imaging capability. However, over all these years the scientific detection methodology has remained relatively constant. From manual analysis to fully automatic detection methodologies, no significant contribution has been published in recent years, and certainly none has dramatically changed the rules of detection. On the contrary, although the overall accuracy of the methodology is questioned, the four main classification steps (dark area detection, feature extraction, statistical database creation, and classification) are continuously improving. In recent years, researchers have taken up polarimetric SAR data for oil spill detection and characterization, although the utilization of Pol-SAR data for this purpose remains questionable due to the lack of verified datasets and the low spatial coverage of Pol-SAR data. The present paper points out the drawbacks of oil spill detection in recent years and focuses on the bottlenecks of oil spill detection methodologies. Solutions based on data availability, management and analysis are also proposed. Moreover, an ideal detection system is discussed with regard to satellite imagery and in situ observations using different scales and sensors.
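The first of the four classification steps, dark area detection, is often a local thresholding of the calibrated SAR intensity image. The sketch below illustrates that step with scikit-image; parameters are illustrative, and an operational system would add land masking, feature extraction over each candidate region, and a trained classifier.

```python
import numpy as np
from skimage import filters, morphology, measure

def detect_dark_areas(sar, min_area=200):
    # A local threshold copes with wind-driven backscatter variation.
    local_thr = filters.threshold_local(sar, block_size=51, offset=0.02)
    dark = sar < local_thr
    dark = morphology.remove_small_objects(dark, min_size=min_area)
    # Each labeled region becomes an oil-spill candidate for subsequent
    # feature extraction and classification.
    return measure.label(dark)

sar = np.random.rand(512, 512)          # stand-in calibrated SAR intensity
labels = detect_dark_areas(sar)
print(labels.max(), "candidate dark regions")
```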
Goede, Patricia A.; Lauman, Jason R.; Cochella, Christopher; Katzman, Gregory L.; Morton, David A.; Albertine, Kurt H.
2004-01-01
Use of digital medical images has become common over the last several years, coincident with the release of inexpensive, mega-pixel quality digital cameras and the transition to digital radiology operation by hospitals. One problem that clinicians, medical educators, and basic scientists encounter when handling images is the difficulty of using business and graphic arts commercial-off-the-shelf (COTS) software in multicontext authoring and interactive teaching environments. The authors investigated and developed software-supported methodologies to help clinicians, medical educators, and basic scientists become more efficient and effective in their digital imaging environments. The software that the authors developed provides the ability to annotate images based on a multispecialty methodology for annotation and visual knowledge representation. This annotation methodology is designed by consensus, with contributions from the authors and physicians, medical educators, and basic scientists in the Departments of Radiology, Neurobiology and Anatomy, Dermatology, and Ophthalmology at the University of Utah. The annotation methodology functions as a foundation for creating, using, reusing, and extending dynamic annotations in a context-appropriate, interactive digital environment. The annotation methodology supports the authoring process as well as output and presentation mechanisms. The annotation methodology is the foundation for a Windows implementation that allows annotated elements to be represented as structured eXtensible Markup Language and stored separate from the image(s). PMID:14527971
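A minimal sketch of the core idea of representing annotations as structured XML stored separately from the image; the element and attribute names below are hypothetical, not the schema developed by the authors.

```python
import xml.etree.ElementTree as ET

def save_annotation(image_file, annotations, out_file):
    """Write annotated elements as structured XML, kept apart from the image."""
    root = ET.Element("imageAnnotation", image=image_file)
    for ann in annotations:
        el = ET.SubElement(root, "annotation",
                           label=ann["label"], author=ann["author"])
        for x, y in ann["outline"]:
            ET.SubElement(el, "point", x=str(x), y=str(y))
    ET.ElementTree(root).write(out_file, encoding="unicode")

save_annotation("ct_slice_042.png",
                [{"label": "lesion", "author": "radiologist1",
                  "outline": [(120, 88), (131, 90), (128, 104)]}],
                "ct_slice_042.annotation.xml")
```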
Performance characterization of image and video analysis systems at Siemens Corporate Research
NASA Astrophysics Data System (ADS)
Ramesh, Visvanathan; Jolly, Marie-Pierre; Greiffenhagen, Michael
2000-06-01
There has been a significant increase in commercial products using image analysis techniques to solve real-world problems in diverse fields such as manufacturing, medical imaging, document analysis, transportation, and public security. This has been accelerated by various factors: more advanced algorithms, the availability of cheaper sensors, and faster processors. While algorithms continue to improve in performance, a major stumbling block in translating algorithmic improvements into faster deployment of image analysis systems is the lack of characterization of the limits of algorithms and of how they affect total system performance. The research community has realized the need for performance analysis, and there have been significant efforts in the last few years to remedy the situation. Our efforts at SCR have focused on statistical modeling and characterization of modules and systems, with emphasis on both white-box and black-box methodologies to evaluate and optimize vision systems. In the first part of this paper we review the literature on performance characterization and provide an overview of the status of research in performance characterization of image and video understanding systems. The second part of the paper covers performance evaluation of medical image segmentation algorithms. Finally, we highlight some research issues in performance analysis in medical imaging systems.
An efficient approach to integrated MeV ion imaging.
Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M
2018-03-01
An ionoluminescence (IL) spectral imaging system, alongside the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, has been implemented at the Van de Graaff laboratory of Tehran. Versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through the respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ list-mode data format. The appropriate performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.
Kang, Stella K; Rawson, James V; Recht, Michael P
2017-12-05
Provided methodologic training, more imagers can contribute to the evidence basis on improved health outcomes and value in diagnostic imaging. The Value of Imaging Through Comparative Effectiveness Research Program was developed to provide hands-on, practical training in five core areas for comparative effectiveness and big biomedical data research: decision analysis, cost-effectiveness analysis, evidence synthesis, big data principles, and applications of big data analytics. The program's mixed format consists of web-based modules for asynchronous learning as well as in-person sessions for practical skills and group discussion. Seven diagnostic radiology subspecialties and cardiology are represented in the first group of program participants, showing the collective potential for greater depth of comparative effectiveness research in the imaging community. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Jagadale, Basavaraj N.; Udupa, Jayaram K.; Tong, Yubing; Wu, Caiyun; McDonough, Joseph; Torigian, Drew A.; Campbell, Robert M.
2018-02-01
General surgeons, orthopedists, and pulmonologists individually treat patients with thoracic insufficiency syndrome (TIS). The benefits of growth-sparing procedures such as Vertical Expandable Prosthetic Titanium Rib (VEPTR) insertion for treating patients with TIS have been demonstrated. However, at present there is no objective assessment metric to examine different thoracic structural components individually as to their roles in the syndrome, in contributing to dynamics and function, and in influencing treatment outcome. Using thoracic dynamic MRI (dMRI), we have been developing a methodology to overcome this problem. In this paper, we extend this methodology from our previous structural analysis approaches to examining lung tissue properties. We process the T2-weighted dMRI images through a series of steps involving 4D image construction of the acquired dMRI images, intensity non-uniformity correction and standardization of the 4D image, lung segmentation, and estimation of the parameters describing lung tissue intensity distributions in the 4D image. Based on pre- and post-operative dMRI data sets from 25 TIS patients (predominantly neuromuscular and congenital conditions), we demonstrate how lung tissue can be characterized by the estimated distribution parameters. Our results show that standardized T2-weighted image intensity values decrease from the pre- to the post-operative condition, likely reflecting improved lung aeration post-operatively. In both pre- and post-operative conditions, the intensity values also decrease from end-expiration to end-inspiration, supporting the basic premise of our approach.
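A minimal sketch of the final step described above, estimating lung-tissue intensity distribution parameters inside a lung mask per respiratory phase of the 4D image; the array layout and the simple mean/std/median summary are assumptions for illustration.

```python
import numpy as np

def lung_intensity_params(volume_4d, lung_mask_4d):
    """volume_4d, lung_mask_4d: arrays of shape (T, Z, Y, X); one
    parameter set per respiratory phase t of the 4D image."""
    params = []
    for t in range(volume_4d.shape[0]):
        vals = volume_4d[t][lung_mask_4d[t] > 0].astype(float)
        params.append({"phase": t, "mean": vals.mean(),
                       "std": vals.std(), "median": float(np.median(vals))})
    return params
```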
Nonlinear Image Denoising Methodologies
2002-05-01
In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in… stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution…
Advances in interpretation of subsurface processes with time-lapse electrical imaging
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Timothy C.; Slater, Lee D.
2015-01-01
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
Image processing and machine learning in the morphological analysis of blood cells.
Rodellar, J; Alférez, S; Acevedo, A; Molina, A; Merino, A
2018-05-01
This review focuses on how image processing and machine learning can be useful for the morphological characterization and automatic recognition of cell images captured from peripheral blood smears. The basics of the three core elements (segmentation, quantitative features, and classification) are outlined, and recent literature is discussed. Although red blood cells are a significant part of this context, this study focuses on malignant lymphoid cells and blast cells. There is no doubt that these technologies may help the cytologist to perform efficient, objective, and fast morphological analysis of blood cells. They may also help in the interpretation of some morphological features and may serve as learning and survey tools. Although research is still needed, it is important to define screening strategies to exploit the potential of image-based automatic recognition systems integrated into the daily routine of laboratories along with other analysis methodologies. © 2018 John Wiley & Sons Ltd.
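As a concrete illustration of the three core elements, here is a minimal sketch using generic open-source tools (scikit-image and scikit-learn); the threshold choice, feature set, and classifier are illustrative, not the systems reviewed.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def cell_features(gray_image):
    """Segment cells (assumed darker than background) and return one
    feature vector per detected cell."""
    mask = gray_image < threshold_otsu(gray_image)
    feats = []
    for region in regionprops(label(mask), intensity_image=gray_image):
        if region.area < 100:                # skip debris
            continue
        feats.append([region.area, region.eccentricity,
                      region.solidity, region.mean_intensity])
    return np.asarray(feats)

# Classification: X rows are per-cell feature vectors, y the labels
# assigned by a cytologist, e.g.
# clf = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
```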
High content image analysis for human H4 neuroglioma cells exposed to CuO nanoparticles.
Li, Fuhai; Zhou, Xiaobo; Zhu, Jinmin; Ma, Jinwen; Huang, Xudong; Wong, Stephen T C
2007-10-09
High content screening (HCS)-based image analysis is becoming an important and widely used research tool. Capitalizing on this technology, ample cellular information can be extracted from high content cellular images. In this study, an automated, reliable, and quantitative cellular image analysis system developed in house has been employed to quantify the toxic responses of human H4 neuroglioma cells exposed to metal oxide nanoparticles, and it has proven to be an essential tool in our study. The cellular images of H4 neuroglioma cells exposed to different concentrations of CuO nanoparticles were sampled using an IN Cell Analyzer 1000. A fully automated cellular image analysis system has been developed to perform the image analysis for cell viability. A multiple adaptive thresholding method was used to classify the pixels of the nuclei image into three classes: bright nuclei, dark nuclei, and background. During the development of our image analysis methodology, we achieved the following: (1) Gaussian filtering with a proper scale was applied to the cellular images to generate a local intensity maximum inside each nucleus; (2) a novel local intensity maxima detection method based on the gradient vector field was established; and (3) a statistical model based splitting method was proposed to overcome the under-segmentation problem. Computational results indicate that 95.9% of nuclei can be detected and segmented correctly by the proposed image analysis system. The proposed automated image analysis system can effectively segment the images of human H4 neuroglioma cells exposed to CuO nanoparticles. The computational results confirmed our biological finding that human H4 neuroglioma cells had a dose-dependent toxic response to the insult of CuO nanoparticles.
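A minimal sketch of the first two steps, with skimage's peak_local_max standing in for the authors' gradient-vector-field maxima detection; the scale and separation parameters are illustrative.

```python
from skimage.filters import gaussian
from skimage.feature import peak_local_max

def detect_nuclei(nuclei_image, scale=4.0, min_separation=10):
    # Gaussian filtering at a proper scale so that each nucleus
    # contains a single local intensity maximum.
    smoothed = gaussian(nuclei_image, sigma=scale)
    # Maxima detection; stand-in for the gradient-vector-field method.
    return peak_local_max(smoothed, min_distance=min_separation,
                          threshold_rel=0.1)   # (row, col) per nucleus
```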
Light Microscopy at Maximal Precision
NASA Astrophysics Data System (ADS)
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
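A minimal sketch of the fitting idea behind PERI, with a toy 2D Gaussian "particle" model standing in for the detailed optical model of the microscope; parameter names and the optimizer choice are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

yy, xx = np.mgrid[0:64, 0:64]

def model(params):
    """Toy generative model: one Gaussian 'particle' plus background."""
    x0, y0, radius, amplitude, background = params
    return background + amplitude * np.exp(
        -((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * radius ** 2))

def fit_image(measured, initial_guess):
    residual = lambda p: (model(p) - measured).ravel()
    return least_squares(residual, initial_guess).x

# e.g. fit_image(image, [32, 32, 3.0, 1.0, 0.1]) returns sub-pixel
# position and radius estimates limited mainly by the noise in the data.
```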
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klein, Adam
2015-01-01
This thesis presents work on advancements and applications of methodology for the analysis of biological samples using mass spectrometry. Included in this work are improvements to chemical cross-linking mass spectrometry (CXMS) for the study of protein structures and mass spectrometry imaging and quantitative analysis to study plant metabolites. Applications include using matrix-assisted laser desorption/ionization-mass spectrometry imaging (MALDI-MSI) to further explore metabolic heterogeneity in plant tissues and chemical interactions at the interface between plants and pests. Additional work was focused on developing liquid chromatography-mass spectrometry (LC-MS) methods to investigate metabolites associated with plant-pest interactions.
Recent developments in imaging system assessment methodology, FROC analysis and the search model.
Chakraborty, Dev P
2011-08-21
A frequent problem in imaging is assessing whether a new imaging system is an improvement over an existing standard. Observer performance methods, in particular the receiver operating characteristic (ROC) paradigm, are widely used in this context. In ROC analysis lesion location information is not used and consequently scoring ambiguities can arise in tasks, such as nodule detection, involving finding localized lesions. This paper reviews progress in the free-response ROC (FROC) paradigm in which the observer marks and rates suspicious regions and the location information is used to determine whether lesions were correctly localized. Reviewed are FROC data analysis, a search-model for simulating FROC data, predictions of the model and a method for estimating the parameters. The search model parameters are physically meaningful quantities that can guide system optimization.
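A minimal sketch of how an empirical FROC curve can be computed from mark-rating data: as the rating threshold is lowered, the lesion localization fraction (LLF) is plotted against the mean number of non-lesion localizations per case (NLF). The input format is an assumption for illustration.

```python
import numpy as np

def froc_points(ll_ratings, nl_ratings, n_lesions, n_cases):
    """ll_ratings: ratings of marks that hit a true lesion;
    nl_ratings: ratings of all other marks."""
    ll = np.asarray(ll_ratings, float)
    nl = np.asarray(nl_ratings, float)
    thresholds = np.unique(np.concatenate([ll, nl]))[::-1]
    llf = np.array([(ll >= t).sum() / n_lesions for t in thresholds])
    nlf = np.array([(nl >= t).sum() / n_cases for t in thresholds])
    return nlf, llf    # x and y of the empirical FROC curve
```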
NASA Astrophysics Data System (ADS)
Lin, H.; Zhang, X.; Wu, X.; Tarnas, J. D.; Mustard, J. F.
2018-04-01
Quantitative analysis of hydrated minerals from hyperspectral remote sensing data is fundamental for understanding Martian geologic processes. Because of the difficulty of selecting endmembers from hyperspectral images, sparse unmixing has been proposed for CRISM data on Mars; however, it becomes challenging when the endmember library grows dramatically. Here, we propose a new methodology, termed Target Transformation Constrained Sparse Unmixing (TTCSU), to accurately detect hydrous minerals on Mars. A new version of the target transformation technique proposed in our recent work was used to obtain potential detections from CRISM data. Sparse unmixing constrained with these detections as prior information was applied to CRISM single-scattering albedo images, which were calculated using a Hapke radiative transfer model. This methodology increases the success rate of the automatic endmember selection of sparse unmixing and yields more accurate abundances. CRISM images of southwest Melas Chasma that have been well analyzed previously were used to validate our methodology in this study. The sulfate jarosite was detected in southwest Melas Chasma; its distribution is consistent with previous work, and its abundance is comparable. More validations will be done in our future work.
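A minimal sketch of sparse unmixing constrained by prior detections, with scikit-learn's Lasso(positive=True) standing in for the paper's constrained solver; the regularization weight is illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def unmix_pixel(spectrum, library, detected_idx, alpha=1e-3):
    """library: bands x endmembers; detected_idx: endmembers flagged
    by the target-transformation step (the prior information)."""
    A = library[:, detected_idx]
    solver = Lasso(alpha=alpha, positive=True, fit_intercept=False)
    solver.fit(A, spectrum)
    abundances = np.zeros(library.shape[1])
    abundances[detected_idx] = solver.coef_
    return abundances
```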
Application of tolerance limits to the characterization of image registration performance.
Fedorov, Andriy; Wells, William M; Kikinis, Ron; Tempany, Clare M; Vangel, Mark G
2014-07-01
Deformable image registration is used increasingly in image-guided interventions and other applications. However, validation and characterization of registration performance remain areas that require further study. We propose an analysis methodology for deriving tolerance limits on the initial conditions for deformable registration that reliably lead to a successful registration. This approach results in a concise summary of the probability of registration failure, while accounting for the variability in the test data. The (β, γ) tolerance limit can be interpreted as a value of the input parameter that leads to a successful registration outcome in at least 100β% of cases with 100γ% confidence. The utility of the methodology is illustrated by summarizing the performance of a deformable registration algorithm evaluated in three different experimental setups of increasing complexity. Our examples are based on clinical data collected during MRI-guided prostate biopsy, registered using a publicly available deformable registration tool. The results indicate that the proposed methodology can be used to generate concise graphical summaries of the experiments, as well as a probabilistic estimate of the registration outcome for a future sample. Its use may facilitate improved objective assessment, comparison, and retrospective stress-testing of deformable registration.
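A minimal sketch of a distribution-free one-sided (β, γ) tolerance limit based on order statistics; applying it to registration, the sample would summarize per-case initial conditions that still led to success, which is an assumed experimental reduction for illustration.

```python
import numpy as np
from scipy.stats import binom

def one_sided_tolerance_limit(sample, beta=0.9, gamma=0.95):
    """Return x such that, with 100*gamma% confidence, at least
    100*beta% of the population lies below x (distribution-free)."""
    x = np.sort(np.asarray(sample, float))
    n = len(x)
    for k in range(1, n + 1):
        # P[Binomial(n, beta) <= k - 1] is the confidence that the
        # k-th order statistic covers at least a beta fraction.
        if binom.cdf(k - 1, n, beta) >= gamma:
            return x[k - 1]
    raise ValueError("sample too small for the requested beta and gamma")
```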
NASA Astrophysics Data System (ADS)
Lima Neto, Irineu A.; Misságia, Roseane M.; Ceia, Marco A.; Archilha, Nathaly L.; Oliveira, Lucas C.
2014-11-01
Carbonate reservoirs exhibit heterogeneous pore systems and a wide variety of grain types, which affect the rock's elastic properties and the reservoir parameter relationships. To study the Albian carbonates in the Campos Basin, a methodology is proposed to predict the amount of microporosity and the representative aspect ratio of these inclusions. The method assumes three pore-space scales in two representative inclusion scenarios: 1) a macro-mesopore median aspect ratio from thin-section digital image analysis (DIA) and 2) a microporosity aspect ratio predicted from the measured P-wave velocities. Through a laboratory analysis of 10 grainstone core samples of Albian age, the P- and S-wave velocities (Vp and Vs) are evaluated at effective pressures of 0-10 MPa. The analytical theories in the proposed methodology are functions of the aspect ratios from the differential effective medium (DEM) theory, the macro-mesopore system recognized from the DIA, the amount of microporosity determined by the difference between the porosities estimated from laboratory helium-gas measurements and the thin-section petrographic images, and the P-wave velocities under dry effective pressure conditions. The DIA procedure is applied to estimate the local and global parameters, and the textural implications concerning ultrasonic velocities and image resolution. The macro-mesopore inclusions contribute to stiffer rocks and higher velocities, whereas the microporosity inclusions contribute to softer rocks and lower velocities. We observe a high potential for this methodology, which uses the microporosity aspect ratio inverted from Vp to predict Vs with good agreement. The results acceptably characterize the Albian grainstones. The representative macro-mesopore aspect ratio is 0.5, and the inverted microporosity aspect ratio ranges from 0.01 to 0.07. The effective pressure induced a slight porosity reduction during the triaxial tests, mainly in the microporosity inclusions, slightly changing the amount and the aspect ratio of the microporosity.
Functional connectomics from a "big data" perspective.
Xia, Mingrui; He, Yong
2017-10-15
In the last decade, explosive growth in functional connectome studies has been observed. Accumulating knowledge has significantly contributed to our understanding of the brain's functional network architectures in health and disease. With the development of innovative neuroimaging techniques, the establishment of large brain datasets and the increasing accumulation of published findings, functional connectomic research has begun to move into the era of "big data", which generates unprecedented opportunities for discovery in brain science and simultaneously encounters various challenging issues, such as data acquisition, management and analyses. Big data on the functional connectome exhibits several critical features: high spatial and/or temporal precision, large sample sizes, long-term recording of brain activity, multidimensional biological variables (e.g., imaging, genetic, demographic, cognitive and clinical) and/or vast quantities of existing findings. We review studies regarding functional connectomics from a big data perspective, with a focus on recent methodological advances in state-of-the-art image acquisition (e.g., multiband imaging), analysis approaches and statistical strategies (e.g., graph theoretical analysis, dynamic network analysis, independent component analysis, multivariate pattern analysis and machine learning), as well as reliability and reproducibility validations. We highlight the novel findings in the application of functional connectomic big data to the exploration of the biological mechanisms of cognitive functions, normal development and aging and of neurological and psychiatric disorders. We advocate the urgent need to expand efforts directed at the methodological challenges and discuss the direction of applications in this field. Copyright © 2017 Elsevier Inc. All rights reserved.
Flame filtering and perimeter localization of wildfires using aerial thermal imagery
NASA Astrophysics Data System (ADS)
Valero, Mario M.; Verstockt, Steven; Rios, Oriol; Pastor, Elsa; Vandecasteele, Florian; Planas, Eulàlia
2017-05-01
Airborne thermal infrared (TIR) imaging systems are increasingly used for wildfire tactical monitoring, since they show important advantages over spaceborne platforms and visible sensors while becoming much more affordable and much lighter than multispectral cameras. However, the analysis of aerial TIR images entails a number of difficulties which have thus far prevented monitoring tasks from being fully automated. One of these issues is the appearance of flame projections during the geo-correction of off-nadir images. Filtering these flames is essential in order to accurately estimate the geographical location of the fuel burning interface. We therefore present a methodology which allows the automatic localisation of the active fire contour free of flame projections. The actively burning area is detected in georeferenced TIR images through a combination of intensity thresholding techniques, morphological processing, and active contours. Subsequently, flame projections are filtered out by temporal frequency analysis of the appropriate contour descriptors. The proposed algorithm was tested on footage acquired during three large-scale experimental field burns. Results suggest this methodology may be suitable for automating the acquisition of quantitative data about the fire evolution. As future work, a revision of the low-pass filter implemented for the temporal analysis (currently a median filter) is recommended. The availability of up-to-date information about the fire state would improve situational awareness during an emergency response and could be used to calibrate data-driven simulators capable of issuing accurate short-term forecasts of the subsequent fire evolution.
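A minimal sketch of two stages of the pipeline, thresholding plus morphological clean-up of a TIR frame and temporal median filtering of a per-frame contour descriptor; the threshold and window length are illustrative, not the calibrated settings of the study.

```python
import numpy as np
from scipy import ndimage

def active_fire_mask(tir_frame, temperature_threshold=500.0):
    """Threshold a georeferenced TIR frame and clean it up morphologically."""
    mask = tir_frame > temperature_threshold
    mask = ndimage.binary_opening(mask, iterations=2)
    return ndimage.binary_closing(mask, iterations=2)

def filter_projections(descriptor_series, window=5):
    """Median-filter a per-frame contour descriptor across time to
    suppress high-frequency flame projections."""
    return ndimage.median_filter(np.asarray(descriptor_series, float),
                                 size=window)
```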
Imaging genetics of schizophrenia in the post-GWAS era.
Arslan, Ayla
2018-01-03
Imaging genetics is a research methodology studying the effect of genetic variation on brain structure, function, behavior, and risk for psychopathology. Since the early 2000s, imaging genetics has been increasingly used in the research of schizophrenia (SZ). SZ is a severe mental disorder whose underlying neurobiology is not precisely known; however, new genetic and neurobiological data are creating a climate for new avenues. The accumulating data of genome-wide association studies (GWAS) continuously decode SZ risk genes. Global neuroimaging consortia produce collections of brain phenotypes from tens of thousands of people. In this context, imaging genetics will be strategically important both for the validation and the discovery of SZ-related findings. Thus, the study of GWAS-supported risk variants as candidate genes for validation by neuroimaging is one trend. The study of epigenetic differences in relation to variations of brain phenotypes and the study of large-scale multivariate analysis of genome-wide and brain-wide associations are other trends. While these studies hold great potential for understanding the neurobiology of SZ, the problem of reproducibility appears as a major challenge, which requires standardization of study designs and compensation for methodological limitations such as sensitivity and specificity. On the other hand, advancements in neuroimaging and optical and electron microscopy, along with the use of genetically encoded fluorescent probes and robust statistical approaches, will not only catalyze integrative methodologies but will also help better design imaging genetics studies. In this invited paper, I discuss the current perspective of imaging genetics and emerging opportunities of SZ research. Copyright © 2017 Elsevier Inc. All rights reserved.
Tavares, Ana P M; Silva, Rui P; Amaral, António L; Ferreira, Eugénio C; Xavier, Ana M R B
2014-02-01
Image analysis techniques were applied to identify morphological changes of pellets from the white-rot fungus Trametes versicolor in agitated submerged cultures during the production of exopolysaccharide (EPS) or ligninolytic enzymes. Batch tests with four different experimental conditions were carried out. Two different culture media were used, namely yeast medium and Trametes defined medium, and the addition of ligninolytic inducers such as xylidine or pulp and paper industrial effluent was evaluated. Laccase activity, EPS production, and final biomass contents were determined for the batch assays, and pellet morphology was assessed by image analysis techniques. The data obtained made it possible to establish the choice of metabolic pathways according to the experimental conditions, either laccase production in the Trametes defined medium or EPS production in the rich yeast medium experiments. Furthermore, the image processing and analysis methodology allowed a better comprehension of the physiological phenomena with respect to the corresponding pellet morphological stages.
NASA Astrophysics Data System (ADS)
Mallepudi, Sri Abhishikth; Calix, Ricardo A.; Knapp, Gerald M.
2011-02-01
In recent years there has been a rapid increase in the size of video and image databases. Effective searching and retrieval of images from these databases is a significant current research area. In particular, there is growing interest in query capabilities based on semantic image features such as objects, locations, and materials, known as content-based image retrieval. This study investigated mechanisms for identifying materials present in an image. These capabilities provide additional information impacting conditional probabilities about images (e.g., objects made of steel are more likely to be buildings). They are useful in Building Information Modeling (BIM) and in automatic enrichment of images. I2T methodologies are a way to enrich an image by generating text descriptions based on image analysis. In this work, a learning model is trained to detect certain materials in images. To train the model, an image dataset was constructed containing single-material images of bricks, cloth, grass, sand, stones, and wood. For generalization purposes, an additional set of 50 images containing multiple materials (some not used in training) was constructed. Two different supervised learning classification models were investigated: a single multi-class SVM classifier, and multiple binary SVM classifiers (one per material). Image features included Gabor filter parameters for texture and color histogram data for RGB components. All classification accuracy scores using the SVM-based method were above 85%. The second model helped gather more information from the images, since it could assign multiple classes to an image. A framework for the I2T methodology is presented.
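A minimal sketch of the feature extraction and the one-classifier-per-material model described above; the Gabor frequencies, histogram bins, and SVM settings are illustrative.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def image_features(rgb_image):
    gray = rgb_image.mean(axis=2)
    feats = []
    for frequency in (0.1, 0.2, 0.4):            # Gabor texture features
        real, imag = gabor(gray, frequency=frequency)
        magnitude = np.hypot(real, imag)
        feats += [magnitude.mean(), magnitude.std()]
    for channel in range(3):                     # RGB histogram features
        hist, _ = np.histogram(rgb_image[..., channel],
                               bins=8, range=(0, 255))
        feats += list(hist / hist.sum())
    return np.array(feats)

# One binary SVM per material; an image receives every label whose
# classifier fires, so multiple materials can be assigned to one image.
classifiers = {m: SVC(kernel="rbf")
               for m in ("brick", "cloth", "grass", "sand", "stone", "wood")}
```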
Automated determination of dust particles trajectories in the coma of comet 67P
NASA Astrophysics Data System (ADS)
Marín-Yaseli de la Parra, J.; Küppers, M.; Perez Lopez, F.; Besse, S.; Moissl, R.
2017-09-01
During the more than two years Rosetta spent at comet 67P, it took thousands of images that contain individual dust particles. To arrive at statistics of the dust properties, automatic image analysis is required. We present a new methodology for fast dust identification using a star-mask reference system to match a set of images automatically. The main goal is to derive particle size distributions and to determine whether traces of the size distribution of primordial pebbles are still present in today's cometary dust [1].
NASA Astrophysics Data System (ADS)
Molina-Viedma, Ángel J.; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.
2017-10-01
In recent years, many efforts have been made to exploit full-field optical measurement techniques for modal identification. Three-dimensional digital image correlation (DIC) using high-speed cameras has been extensively employed for this purpose. Modal identification algorithms are applied to process the frequency response functions (FRF), which relate the displacement response of the structure to the excitation force. However, one of the most common tests for modal analysis involves base motion excitation of a structural element instead of force excitation; in this case, the relationship between response and excitation is typically based on displacements, known as transmissibility functions. In this study, a methodology for experimental modal analysis using high-speed 3D digital image correlation and base motion excitation tests is proposed. In particular, a cantilever beam was excited from its base with a random signal, using a clamped edge joint. Full-field transmissibility functions were obtained through the beam and converted into FRF for proper identification, considering a single degree-of-freedom theoretical conversion. Subsequently, modal identification was performed using a circle-fit approach. The proposed methodology facilitates the management of the typically large number of data points involved in the DIC measurement during modal identification. Moreover, it was possible to determine the natural frequencies, damping ratios, and full-field mode shapes without requiring any additional tests. Finally, the results were experimentally validated by comparing them with those obtained using traditional accelerometers, analytical models, and finite element method analyses; the comparison was performed using the quantitative modal assurance criterion. The results showed a high level of correspondence, consolidating the proposed experimental methodology.
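A minimal sketch of the circle-fit step: an algebraic (Kasa) least-squares circle fitted to FRF samples around a resonance in the complex (Nyquist) plane; extracting natural frequency and damping from the fitted circle is omitted for brevity.

```python
import numpy as np

def fit_circle(frf_values):
    """Kasa fit of a circle to complex FRF samples near a resonance.
    Returns the centre (a, b) and radius of the Nyquist circle."""
    x, y = frf_values.real, frf_values.imag
    # Linear least squares for x^2 + y^2 = 2ax + 2by + c.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    sol, *_ = np.linalg.lstsq(A, x ** 2 + y ** 2, rcond=None)
    a, b, c = sol
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)
```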
WE-B-BRC-01: Current Methodologies in Risk Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rath, F.
Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for the use of process mapping, failure modes and effects analysis (FMEA), and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: description of risk assessment methodologies used in healthcare and industry; discussion of radiation oncology-specific risk assessment strategies and issues; evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.
A novel mesh processing based technique for 3D plant analysis
2012-01-01
Background In recent years, imaging based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusion By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width, and leaf length, we obtained mean absolute errors of 9.34%, 5.75%, and 8.78%, and correlation coefficients of 0.88, 0.96, and 0.95, respectively. The temporal matching of leaves was accurate in 95% of the cases, and the average execution time required to analyse a plant over four time-points was 4.9 minutes. The mesh processing based methodology is thus considered suitable for quantitative 4D monitoring of plant phenotypic features. PMID:22553969
Content-based quality evaluation of color images: overview and proposals
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Richard, Noel; Colantoni, Philippe; Fernandez-Maloigne, Christine
2003-12-01
The automatic prediction of perceived quality from image data in general, and the assessment of particular image characteristics or attributes that may need improvement in particular, is becoming an increasingly important part of intelligent imaging systems. The purpose of this paper is to propose that the color imaging community develop a software package, available on the Internet, to help users select the approach best suited to a given application. The ultimate goal of this project is to propose, and then implement, an open and unified color imaging system to set up a favourable context for the evaluation and analysis of color imaging processes. Many different methods for measuring the performance of a process have been proposed by different researchers. In this paper, we discuss the advantages and shortcomings of the main analysis criteria and performance measures currently used. The aim is not to establish a harsh competition between algorithms or processes, but rather to test and compare the efficiency of methodologies, firstly to highlight the strengths and weaknesses of a given algorithm or methodology on a given image type, and secondly to make these results publicly available. This paper focuses on two important unsolved problems. Why is it so difficult to select a color space that gives better results than another? Why is it so difficult to select an image quality metric that gives better results than another, with respect to the judgment of the human visual system? Several methods used in color imaging or in image quality will thus be discussed. Proposals for content-based image measures and means of developing a standard test suite will then be presented. The above advocates for an evaluation protocol based on an automated procedure; this is the ultimate goal of our proposal.
ERIC Educational Resources Information Center
Hultman, Karin; Lenz Taguchi, Hillevi
2010-01-01
The purpose of this paper is to challenge the habitual anthropocentric gaze we use when analysing educational data, which takes human beings as the starting point and centre, and gives humans a self-evident higher position above other matter in reality. By enacting analysis of photographic images from a preschool playground, using a "relational…
ERIC Educational Resources Information Center
Massie, Keith R.
2011-01-01
The current study examined over 3000 visual images on the homepages of 234 national universities to determine how power relations are depicted. Using a hybrid methodology of grounded theory, critical discursive analysis, and facial prominence scoring, the work culminates in a theory: The (Im)Balanced Theory of College Identity Formation Online. The…
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computer tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinfomatics and brain imaging research. PMID:24734019
Near-infrared hyperspectral imaging for quality analysis of agricultural and food products
NASA Astrophysics Data System (ADS)
Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.
2010-04-01
Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure high-quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and chemometric tools used in spectral analyses. Hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and development of prediction and classification algorithms and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.
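A minimal sketch of the common analysis pattern for hyperspectral cubes, treating each pixel as a spectrum, compressing with PCA, and classifying against reference labels (e.g., sound versus damaged kernels); the component count and classifier choice are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def classify_hypercube(cube, labels_per_pixel, n_components=10):
    """cube: (rows, cols, bands); labels_per_pixel: (rows, cols) ints."""
    X = cube.reshape(-1, cube.shape[-1])          # one spectrum per pixel
    y = labels_per_pixel.ravel()
    scores = PCA(n_components=n_components).fit_transform(X)
    model = LinearDiscriminantAnalysis().fit(scores, y)
    return model.predict(scores).reshape(cube.shape[:2])
```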
Axial segmentation of lungs CT scan images using canny method and morphological operation
NASA Astrophysics Data System (ADS)
Noviana, Rina; Febriani, Rasal, Isram; Lubis, Eva Utari Cintamurni
2017-08-01
Segmentation is a very important topic in digital image processing, found in various fields of image analysis, particularly medical imaging. Axial segmentation of lung CT scans is beneficial for diagnosing abnormalities and planning surgery, and it makes it possible to examine every section of the lungs. The segmentation results are used to detect the presence of nodules. The methods used in this analysis are image cropping, image binarization, Canny edge detection, and morphological operations. Image cropping is performed to separate the lung areas, which form the region of interest (ROI). The binarization step generates a binary image with two grey levels, black and white, from the lung CT scan image. The Canny method is used for edge detection, and morphological operations are applied to smooth the lung edges. The segmentation methodology shows good results, obtaining a very smooth edge; moreover, the image background can be removed in order to isolate the lungs.
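A minimal sketch of the processing chain described above using scikit-image; the crop region, Canny scale, and structuring element size are illustrative.

```python
from skimage.filters import threshold_otsu
from skimage.feature import canny
from skimage.morphology import binary_closing, disk

def segment_lungs(ct_slice, crop):
    """crop: tuple of slices isolating the lung region of interest."""
    roi = ct_slice[crop]
    binary = roi < threshold_otsu(roi)            # lungs are dark on CT
    edges = canny(roi.astype(float), sigma=2.0)   # Canny edge detection
    smooth = binary_closing(binary, disk(5))      # smooth the lung edge
    return smooth, edges
```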
Solar Data Mining at Georgia State University
NASA Astrophysics Data System (ADS)
Angryk, R.; Martens, P. C.; Schuh, M.; Aydin, B.; Kempton, D.; Banda, J.; Ma, R.; Naduvil-Vadukootu, S.; Akkineni, V.; Küçük, A.; Filali Boubrahimi, S.; Hamdi, S. M.
2016-12-01
In this talk we give an overview of research projects related to solar data analysis that are conducted at Georgia State University. We will provide update on multiple advances made by our research team on the analysis of image parameters, spatio-temporal patterns mining, temporal data analysis and our experiences with big, heterogeneous solar data visualization, analysis, processing and storage. We will talk about up-to-date data mining methodologies, and their importance for big data-driven solar physics research.
Context-sensitive extraction of tree crown objects in urban areas using VHR satellite images
NASA Astrophysics Data System (ADS)
Ardila, Juan P.; Bijker, Wietske; Tolpekin, Valentyn A.; Stein, Alfred
2012-04-01
Municipalities need accurate and updated inventories of urban vegetation in order to manage green resources and estimate their return on investment in urban forestry activities. Earlier studies have shown that semi-automatic tree detection using remote sensing is a challenging task. This study aims to develop a reproducible geographic object-based image analysis (GEOBIA) methodology to locate and delineate tree crowns in urban areas using high resolution imagery. We propose a GEOBIA approach that considers the spectral, spatial, and contextual characteristics of tree objects in the urban space. The study presents classification rules that exploit object features at multiple segmentation scales, modifying the labeling and shape of image objects. The GEOBIA methodology was implemented on QuickBird images acquired over the cities of Enschede and Delft (The Netherlands), resulting in identification rates of 70% and 82%, respectively. False negative errors concentrated on small trees, and false positive errors in private gardens. The quality of crown boundaries was acceptable, with an overall delineation error <0.24 outside of gardens and backyards.
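A minimal sketch of the GEOBIA pattern, segmenting the image into objects and labeling each by a simple spectral rule; the segmentation scale and NDVI threshold are illustrative, not the rule set of the study.

```python
import numpy as np
from skimage.segmentation import felzenszwalb

def tree_crown_candidates(image_rgbn, ndvi_threshold=0.4):
    """image_rgbn: float array (rows, cols, 4) with bands R, G, B, NIR."""
    segments = felzenszwalb(image_rgbn[..., :3], scale=50,
                            sigma=1.0, min_size=20)
    red, nir = image_rgbn[..., 0], image_rgbn[..., 3]
    ndvi = (nir - red) / (nir + red + 1e-9)
    crowns = np.zeros_like(segments, dtype=bool)
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        if ndvi[mask].mean() > ndvi_threshold:    # vegetated object
            crowns |= mask
    return crowns
```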
Estimation of integral curves from high angular resolution diffusion imaging (HARDI) data.
Carmichael, Owen; Sakhanenko, Lyudmila
2015-05-15
We develop statistical methodology for HARDI, a popular brain imaging technique, based on the high-order tensor model by Özarslan and Mareci [10]. We investigate how uncertainty in the imaging procedure propagates through all levels of the model: signals, tensor fields, vector fields, and fibers. We construct asymptotically normal estimators of the integral curves, or fibers, which allow us to trace the fibers together with confidence ellipsoids. The procedure is computationally intense, as it blends linear algebra concepts from high-order tensors with asymptotic statistical analysis. The theoretical results are illustrated on simulated and real datasets. This work generalizes the statistical methodology proposed for low angular resolution diffusion tensor imaging by Carmichael and Sakhanenko [3] to several fibers per voxel. It is also a pioneering statistical work on tractography from HARDI data. It avoids the typical limitations of deterministic tractography methods while delivering the same information as probabilistic tractography methods. Our method is computationally cheap, and it provides a well-founded mathematical and statistical framework in which diverse functionals on fibers, directions, and tensors can be studied in a systematic and rigorous way.
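A minimal sketch of tracing an integral curve through an estimated direction field with fourth-order Runge-Kutta integration; nearest-neighbour lookup stands in for proper interpolation, and the confidence-ellipsoid construction of the paper is not reproduced.

```python
import numpy as np

def trace_fiber(vector_field, seed, step=0.5, n_steps=200):
    """vector_field: array (Z, Y, X, 3) of unit direction vectors."""
    def direction(p):
        idx = tuple(np.clip(np.round(p).astype(int), 0,
                            np.array(vector_field.shape[:3]) - 1))
        return vector_field[idx]
    curve = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        p = curve[-1]
        k1 = direction(p)
        k2 = direction(p + 0.5 * step * k1)
        k3 = direction(p + 0.5 * step * k2)
        k4 = direction(p + step * k3)
        curve.append(p + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(curve)
```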
Michez, Adrien; Piégay, Hervé; Lisein, Jonathan; Claessens, Hugues; Lejeune, Philippe
2016-03-01
Riparian forests are critically endangered by many anthropogenic pressures and natural hazards. The importance of riparian zones has been acknowledged by European Directives, involving multi-scale monitoring. The use of very-high-resolution, hyperspatial imagery in a multi-temporal approach is an emerging topic. The trend is reinforced by the recent and rapid growth of the use of unmanned aerial systems (UAS), which has prompted the development of innovative methodology. Our study proposes a methodological framework to explore how a set of multi-temporal images acquired during a vegetative period can differentiate some of the deciduous riparian forest species and their health conditions. More specifically, the developed approach intends to identify, through a process of variable selection, which variables derived from UAS imagery and which scale of image analysis are the most relevant to our objectives. The methodological framework is applied to two study sites to describe the riparian forest through two fundamental characteristics: the species composition and the health condition. These characteristics were selected not only for their use as proxies for the ecological integrity of the riparian zone but also for their use in river management. The comparison of various scales of image analysis identified the smallest object-based image analysis (OBIA) objects (ca. 1 m(2)) as the most relevant scale. Variables derived from spectral information (band ratios) were identified as the most appropriate, followed by variables related to the vertical structure of the forest. Classification results show good overall accuracies for the species composition of the riparian forest (five classes; 79.5% and 84.1% for site 1 and site 2, respectively). The classification scenario regarding the health condition of the black alders of site 1 performed best (90.6%). The quality of the classification models developed with a UAS-based, cost-effective, and semi-automatic approach competes successfully with those developed using more expensive imagery, such as multispectral and hyperspectral airborne imagery. The high overall accuracy obtained for the classification of diseased alders opens the door to applications dedicated to monitoring the health condition of riparian forests. Our methodological framework will allow UAS users to manage the large metric datasets derived from such dense imagery time series.
Brain imaging registry for neurologic diagnosis and research
NASA Astrophysics Data System (ADS)
Hoo, Kent S., Jr.; Wong, Stephen T. C.; Knowlton, Robert C.; Young, Geoffrey S.; Walker, John; Cao, Xinhua; Dillon, William P.; Hawkins, Randall A.; Laxer, Kenneth D.
2002-05-01
The purpose of this paper is to demonstrate the importance of building a brain imaging registry (BIR) on top of existing medical information systems, including the Picture Archiving and Communication System (PACS) environment. We describe the design framework for a cluster of data marts whose purpose is to provide clinicians and researchers efficient access to a large volume of raw and processed patient images and associated data originating from multiple operational systems over time and spread across different hospital departments and laboratories. The framework is designed using object-oriented analysis and design methodology. The BIR data marts each contain complete image and textual data relating to patients with a particular disease.
Optical System Design for Noncontact, Normal Incidence, THz Imaging of in vivo Human Cornea.
Sung, Shijun; Dabironezare, Shahab; Llombart, Nuria; Selvin, Skyler; Bajwa, Neha; Chantra, Somporn; Nowroozi, Bryan; Garritano, James; Goell, Jacob; Li, Alex; Deng, Sophie X; Brown, Elliott; Grundfest, Warren S; Taylor, Zachary D
2018-01-01
Reflection-mode terahertz (THz) imaging of corneal tissue water content (CTWC) is a proposed method for early, accurate detection and study of corneal diseases. Despite promising results from ex vivo and in vivo cornea studies, interpretation of the reflectivity data is confounded by the contact between corneal tissue and the dielectric windows used to flatten the imaging field. Herein, we present an optical design for non-contact THz imaging of the cornea. A beam-scanning methodology performs angular, normal-incidence sweeps of a focused beam over the corneal surface while keeping the source, detector, and patient stationary. A quasioptical analysis method is developed to analyze the theoretical resolution and imaging field intensity profile, and these results are compared to the electric field distribution computed with a physical optics analysis code. Imaging experiments validate the optical theories behind the design and suggest that quasioptical methods are sufficient for the design of THz corneal imaging systems. Successful imaging operations support the feasibility of non-contact in vivo imaging. We believe that this optical system design will enable the first clinically relevant, in vivo exploration of CTWC using THz technology.
Imaging biomarkers in multiple Sclerosis: From image analysis to population imaging.
Barillot, Christian; Edan, Gilles; Commowick, Olivier
2016-10-01
The production of imaging data in medicine increases more rapidly than the capacity of computing models to extract information from it. The grand challenges of better understanding the brain, offering better care for neurological disorders, and stimulating new drug design will not be achieved without significant advances in computational neuroscience. The road to success is to develop a new, generic, computational methodology and to confront and validate this methodology on relevant diseases with adapted computational infrastructures. This new concept sustains the need to build new research paradigms to better understand the natural history of the pathology at the early phase, and to better aggregate data that will provide the most complete representation of the pathology in order to better correlate imaging with other relevant features such as clinical, biological, or genetic data. In this context, one of the major challenges of neuroimaging in clinical neurosciences is to detect quantitative signs of pathological evolution as early as possible, to prevent disease progression, evaluate therapeutic protocols, or even better understand and model the natural history of a given neurological pathology. Many diseases encompass brain alterations often not visible on conventional MRI sequences, especially in normal appearing brain tissues (NABT). MRI often has low specificity for differentiating between possible pathological changes which could help in discriminating between the different pathological stages or grades. The objective of medical image analysis procedures is to define new quantitative neuroimaging biomarkers to track the evolution of the pathology at different levels. This paper illustrates this issue in one acute neuro-inflammatory pathology: multiple sclerosis (MS). It presents current medical image analysis approaches and explains how this field of research will evolve in the next decade to integrate larger scales of information at the temporal, cellular, structural, and morphological levels. Copyright © 2016 Elsevier B.V. All rights reserved.
Herring, Kristen D; Oppenheimer, Stacey R; Caprioli, Richard M
2007-11-01
Direct tissue analysis using matrix-assisted laser desorption ionization mass spectrometry (MALDI MS) provides in situ molecular analysis of a wide variety of biological molecules including xenobiotics. This technology allows measurement of these species in their native biological environment without the use of target-specific reagents such as antibodies. It can be used to profile discrete cellular regions and obtain region-specific images, providing information on the relative abundance and spatial distribution of proteins, peptides, lipids, and drugs. In this article, we report the sample preparation, MS data acquisition and analysis, and protein identification methodologies used in our laboratory for profiling/imaging MS and how this has been applied to kidney disease and toxicity.
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-09-01
Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography leading to underestimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multiresolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low-resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model, which may introduce artifacts in regions where no significant correlation exists between anatomical and functional details. A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artifacts and significantly improving quality of the corrected images and their quantitative accuracy. A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multiresolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information.
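For readers who want to experiment with the idea, the core MMA operation (injecting high-resolution anatomical wavelet detail into a low-resolution functional image) can be sketched in a few lines. This is a toy 2D, single-level illustration using PyWavelets, not the authors' 3D local model; the blending weight `alpha` is a hypothetical parameter, and the images are assumed co-registered and equally sized.

```python
import numpy as np
import pywt

def mma_sketch(pet_slice, anat_slice, wavelet="db2", alpha=0.5):
    """Blend high-frequency anatomical detail into a PET slice (toy 2D, one level)."""
    # Decompose both images into approximation and detail subbands.
    pet_approx, pet_details = pywt.dwt2(pet_slice, wavelet)
    _, anat_details = pywt.dwt2(anat_slice, wavelet)
    # Keep the PET approximation (quantitative signal); mix in anatomical detail.
    mixed = tuple(alpha * a + (1 - alpha) * p
                  for a, p in zip(anat_details, pet_details))
    return pywt.idwt2((pet_approx, mixed), wavelet)
```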
Le Pogam, Adrien; Hatt, Mathieu; Descourt, Patrice; Boussion, Nicolas; Tsoumpas, Charalampos; Turkheimer, Federico E.; Prunier-Aesch, Caroline; Baulieu, Jean-Louis; Guilloteau, Denis; Visvikis, Dimitris
2011-01-01
Purpose: Partial volume effects (PVE) are consequences of the limited spatial resolution in emission tomography leading to under-estimation of uptake in tissues of size similar to the point spread function (PSF) of the scanner as well as activity spillover between adjacent structures. Among PVE correction methodologies, a voxel-wise mutual multi-resolution analysis (MMA) was recently introduced. MMA is based on the extraction and transformation of high resolution details from an anatomical image (MR/CT) and their subsequent incorporation into a low resolution PET image using wavelet decompositions. Although this method allows creating PVE corrected images, it is based on a 2D global correlation model which may introduce artefacts in regions where no significant correlation exists between anatomical and functional details. Methods: A new model was designed to overcome these two issues (2D only and global correlation) using a 3D wavelet decomposition process combined with a local analysis. The algorithm was evaluated on synthetic, simulated and patient images, and its performance was compared to the original approach as well as the geometric transfer matrix (GTM) method. Results: Quantitative performance was similar to the 2D global model and GTM in correlated cases. In cases where mismatches between anatomical and functional information were present, the new model outperformed the 2D global approach, avoiding artefacts and significantly improving quality of the corrected images and their quantitative accuracy. Conclusions: A new 3D local model was proposed for a voxel-wise PVE correction based on the original mutual multi-resolution analysis approach. Its evaluation demonstrated an improved and more robust qualitative and quantitative accuracy compared to the original MMA methodology, particularly in the absence of full correlation between anatomical and functional information. PMID:21978037
Diamond, James; Anderson, Neil H; Bartels, Peter H; Montironi, Rodolfo; Hamilton, Peter W
2004-09-01
Quantitative examination of prostate histology offers clues in the diagnostic classification of lesions and in the prediction of response to treatment and prognosis. To facilitate the collection of quantitative data, the development of machine vision systems is necessary. This study explored the use of imaging for identifying tissue abnormalities in prostate histology. Medium-power histological scenes were recorded from whole-mount radical prostatectomy sections at ×40 objective magnification and assessed by a pathologist as exhibiting stroma, normal tissue (nonneoplastic epithelial component), or prostatic carcinoma (PCa). A machine vision system was developed that divided the scenes into subregions of 100 × 100 pixels and subjected each to image-processing techniques. Analysis of morphological characteristics allowed the identification of normal tissue. Analysis of image texture demonstrated that Haralick feature 4 was the most suitable for discriminating stroma from PCa. Using these morphological and texture measurements, it was possible to define a classification scheme for each subregion. The machine vision system is designed to integrate these classification rules and generate digital maps of tissue composition from the classification of subregions; 79.3% of subregions were correctly classified. Established classification rates have demonstrated the validity of the methodology on small scenes; a logical extension was to apply the methodology to whole slide images via scanning technology. The machine vision system is capable of classifying these images. The machine vision system developed in this project facilitates the exploration of morphological and texture characteristics in quantifying tissue composition. It also illustrates the potential of quantitative methods to provide highly discriminatory information in the automated identification of prostatic lesions using computer vision.
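As a rough illustration of the texture step, Haralick features can be computed per 100 × 100 subregion with the mahotas library. Mapping the published "Haralick feature 4" (sum-of-squares variance) to index 3 of mahotas' 13-feature vector is an assumption on our part, and any stroma/PCa discrimination threshold would need calibrating on labeled tiles.

```python
import numpy as np
import mahotas

def tile_haralick(image, tile=100):
    """Yield (row, col, 13 Haralick features) for each tile of a grayscale image."""
    for r in range(0, image.shape[0] - tile + 1, tile):
        for c in range(0, image.shape[1] - tile + 1, tile):
            sub = image[r:r + tile, c:c + tile]
            # Mean over the four co-occurrence directions.
            feats = mahotas.features.haralick(sub.astype(np.uint8),
                                              return_mean=True)
            yield r, c, feats
```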
NASA Astrophysics Data System (ADS)
Agarwal, Smriti; Bisht, Amit Singh; Singh, Dharmendra; Pathak, Nagendra Prasad
2014-12-01
Millimetre wave (MMW) imaging is gaining tremendous interest among researchers, with potential applications in security checks, standoff personnel screening, automotive collision avoidance, and more. Current state-of-the-art imaging techniques, viz. microwave and X-ray imaging, suffer from lower resolution and harmful ionizing radiation, respectively. In contrast, MMW imaging operates at lower power and is non-ionizing, hence medically safe. Despite these favourable attributes, MMW imaging faces several challenges: it is still a relatively unexplored area and lacks a suitable imaging methodology for extracting complete target information. In view of these challenges, a MMW active imaging radar system at 60 GHz was designed for standoff imaging applications. A C-scan (horizontal and vertical scanning) methodology was developed that provides a cross-range resolution of 8.59 mm. The paper further details a suitable target identification and classification methodology. For identification of regular-shape targets, a mean-standard deviation based segmentation technique was formulated and validated using a different target shape. For classification, a probability density function based target material discrimination methodology was proposed and validated on a different dataset. Lastly, a novel artificial neural network based, scale- and rotation-invariant image reconstruction methodology is proposed to counter distortions in the image caused by noise, rotation or scale variations. The designed neural network, once trained with sample images, automatically handles these deformations and successfully reconstructs the corrected image for the test targets. The techniques developed in this paper are tested and validated using four regular shapes, viz. rectangle, square, triangle and circle.
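The mean-standard deviation segmentation mentioned for regular-shape targets suggests a simple intensity rule. A minimal sketch, assuming pixels are flagged when they deviate from the image mean by more than k standard deviations (the factor k is illustrative; the paper's exact formulation may differ):

```python
import numpy as np

def mean_std_segment(image, k=1.0):
    """Binary mask of pixels more than k standard deviations from the image mean."""
    mu, sigma = image.mean(), image.std()
    return np.abs(image - mu) > k * sigma
```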
Multi-scale Gaussian representation and outline-learning based cell image segmentation.
Farhan, Muhammad; Ruusuvuori, Pekka; Emmenlauer, Mario; Rämö, Pauli; Dehio, Christoph; Yli-Harja, Olli
2013-01-01
High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy, with a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks.
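A minimal sketch of the scale-space step, assuming the coefficient of variation is taken pixel-wise across a stack of Gaussian-smoothed copies of the image (the scale list is illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cov_scale_space(image, sigmas=(1, 2, 4, 8)):
    """Pixel-wise coefficient of variation across a Gaussian scale-space stack."""
    stack = np.stack([gaussian_filter(image.astype(float), s) for s in sigmas])
    return stack.std(axis=0) / (stack.mean(axis=0) + 1e-9)
```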
Multi-scale Gaussian representation and outline-learning based cell image segmentation
2013-01-01
Background: High-throughput genome-wide screening to study gene-specific functions, e.g. for drug discovery, demands fast automated image analysis methods to assist in unraveling the full potential of such studies. Image segmentation is typically at the forefront of such analysis as the performance of the subsequent steps, for example, cell classification, cell tracking etc., often relies on the results of segmentation. Methods: We present a cell cytoplasm segmentation framework which first separates cell cytoplasm from image background using a novel approach based on image enhancement and the coefficient of variation of a multi-scale Gaussian scale-space representation. A novel outline-learning based classification method is developed using regularized logistic regression with embedded feature selection which classifies image pixels as outline/non-outline to give cytoplasm outlines. Refinement of the detected outlines to separate cells from each other is performed in a post-processing step where the nuclei segmentation is used as contextual information. Results and conclusions: We evaluate the proposed segmentation methodology using two challenging test cases, presenting images with completely different characteristics, with cells of varying size, shape, texture and degrees of overlap. The feature selection and classification framework for outline detection produces very simple sparse models which use only a small subset of the large, generic feature set, that is, only 7 and 5 features for the two cases. Quantitative comparison of the results for the two test cases against state-of-the-art methods shows that our methodology outperforms them with an increase of 4-9% in segmentation accuracy, with a maximum accuracy of 93%. Finally, the results obtained for diverse datasets demonstrate that our framework not only produces accurate segmentation but also generalizes well to different segmentation tasks. PMID:24267488
Benefits of utilizing CellProfiler as a characterization tool for U-10Mo nuclear fuel
Collette, R.; Douglas, J.; Patterson, L.; ...
2015-05-01
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium-molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to 'pass' or 'fail' an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries.
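A hedged sketch of such a three-part quality metric, using a fitted illumination plane for the gradient term, variance of the Laplacian as a focus proxy, and a precomputed scratch mask. These proxies, weights, and normalizations are our assumptions, not the published CellProfiler pipeline.

```python
import numpy as np
from scipy.ndimage import laplace

def quality_score(image, scratch_mask, grad_w=0.4, focus_w=0.4, scratch_w=0.2):
    """Toy 0-1 quality score from illumination gradient, focus, and scratches."""
    rows, cols = np.indices(image.shape)
    A = np.column_stack([rows.ravel(), cols.ravel(), np.ones(image.size)])
    coeffs, *_ = np.linalg.lstsq(A, image.ravel().astype(float), rcond=None)
    grad = np.hypot(coeffs[0], coeffs[1])        # illumination tilt of fitted plane
    focus = laplace(image.astype(float)).var()   # higher variance = sharper image
    scratch = scratch_mask.mean()                # fraction of scratched pixels
    # Fold the three terms into one score; reference values are illustrative.
    return (grad_w * np.exp(-grad)
            + focus_w * min(focus / 100.0, 1.0)
            + scratch_w * (1.0 - scratch))
```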
3D Volumetric Analysis of Fluid Inclusions Using Confocal Microscopy
NASA Astrophysics Data System (ADS)
Proussevitch, A.; Mulukutla, G.; Sahagian, D.; Bodnar, B.
2009-05-01
Fluid inclusions preserve valuable information regarding hydrothermal, metamorphic, and magmatic processes. The molar quantities of liquid and gaseous components in the inclusions can be estimated from their volumetric measurements at room temperature combined with knowledge of the PVTX properties of the fluid and homogenization temperatures. Thus, accurate measurements of inclusion volumes and their two phase components are critical. One of the greatest advantages of laser scanning confocal microscopy (LSCM) as applied to fluid inclusion analysis is that it is affordable for large numbers of samples, given the appropriate software analysis tools and methodology. Our present work is directed toward developing those tools and methods. For the last decade, LSCM has been considered a potential method for inclusion volume measurements. Nevertheless, adequate and accurate measurement by LSCM has not yet been achieved for fluid inclusions containing non-fluorescing fluids, owing to many technical challenges in image analysis, despite the fact that the cost of collecting raw LSCM imagery has decreased dramatically in recent years. These problems mostly relate to the image analysis methodology and software tools needed for pre-processing and image segmentation, which enable solid, liquid and gaseous components to be delineated. Other challenges involve image quality and contrast, which are controlled by the fluorescence of the material (most aqueous fluid inclusions do not fluoresce at the appropriate laser wavelengths), the material's optical properties, and the application of transmitted and/or reflected confocal illumination. In this work we have identified the key problems of image analysis and propose some potential solutions. For instance, we found that higher-contrast pseudo-confocal transmitted light images could be overlaid with poor-contrast true-confocal reflected light images within the same stack of z-ordered slices. This approach allows one to narrow the interface boundaries between the phases before the application of segmentation routines. In turn, we found that an active contour segmentation technique works best for these types of geomaterials. The method was developed by adapting a medical software package implemented using the Insight Toolkit (ITK) set of algorithms developed for segmentation of anatomical structures. We have developed a manual analysis procedure with the potential for 2 micron resolution in 3D volume rendering that is specifically designed for application to fluid inclusion volume measurements.
Near ground level sensing for spatial analysis of vegetation
NASA Technical Reports Server (NTRS)
Sauer, Tom; Rasure, John; Gage, Charlie
1991-01-01
Measured changes in vegetation indicate the dynamics of ecological processes and can identify the impacts from disturbances. Traditional methods of vegetation analysis tend to be slow because they are labor intensive; as a result, these methods are often confined to small local area measurements. Scientists need new algorithms and instruments that will allow them to efficiently study environmental dynamics across a range of different spatial scales. A new methodology that addresses this problem is presented. This methodology includes the acquisition, processing, and presentation of near ground level image data and its corresponding spatial characteristics. The systematic approach taken encompasses a feature extraction process, a supervised and unsupervised classification process, and a region labeling process yielding spatial information.
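One plausible realization of the unsupervised classification and region labeling steps, assuming a multi-band near-ground image and standard scikit-learn/SciPy tools; the cluster count and target class are illustrative, not values from the original system.

```python
import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

def classify_and_label(image_bands, n_classes=4, target_class=0):
    """Cluster pixels of an (H, W, B) image, then label connected regions of one class."""
    h, w, b = image_bands.shape
    labels = KMeans(n_clusters=n_classes, n_init=10).fit_predict(
        image_bands.reshape(-1, b)).reshape(h, w)
    # Connected-component labeling of the chosen vegetation class.
    regions, n_regions = ndimage.label(labels == target_class)
    return labels, regions, n_regions
```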
Unsupervised Detection of Planetary Craters by a Marked Point Process
NASA Technical Reports Server (NTRS)
Troglio, G.; Benediktsson, J. A.; Le Moigne, J.; Moser, G.; Serpico, S. B.
2011-01-01
With the launch of several planetary missions in the last decade, a large number of planetary images is being acquired. Because of the huge amount of acquired data, automatic and robust processing techniques are preferable for data analysis. Here, the aim is to achieve a robust and general methodology for crater detection. A novel technique based on a marked point process is proposed. First, the contours in the image are extracted. The object boundaries are modeled as a configuration of an unknown number of random ellipses, i.e., the contour image is considered as a realization of a marked point process. Then, an energy function is defined, containing both an a priori energy and a likelihood term. The global minimum of this function is estimated by using reversible-jump Markov chain Monte Carlo dynamics and a simulated annealing scheme. The main idea behind marked point processes is to model objects within a stochastic framework: marked point processes represent a very promising current approach in stochastic image modeling and provide a powerful and methodologically rigorous framework to efficiently map and detect objects and structures in an image with excellent robustness to noise. The proposed method for crater detection has several feasible applications; one such application area is image registration by matching the extracted features.
NASA Astrophysics Data System (ADS)
Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.
2006-09-01
Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two-, three- and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
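The circle-detection core of such a pipeline can be approximated with scikit-image's stock circular Hough transform. Note the paper uses an improved Hough variant, so this sketch, with assumed edge-detector settings and radius range, is only a baseline illustration.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import hough_circle, hough_circle_peaks

def detect_bubbles(gray, radii=np.arange(5, 60, 2), n_peaks=50):
    """Return (cx, cy, r) for candidate bubbles/drops in a grayscale frame."""
    edges = canny(gray, sigma=2)               # edge segments of object contours
    accum = hough_circle(edges, radii)         # one accumulator per candidate radius
    _, cx, cy, r = hough_circle_peaks(accum, radii, total_num_peaks=n_peaks)
    return cx, cy, r
```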
Peirlinck, Mathias; De Beule, Matthieu; Segers, Patrick; Rebelo, Nuno
2018-05-28
Patient-specific biomechanical modeling of the cardiovascular system is complicated by the presence of a physiological pressure load, given that the imaged tissue is in a pre-stressed and pre-strained state. Neglecting this prestressed state in solid tissue mechanics models leads to erroneous metrics (e.g. wall deformation, peak stress, wall shear stress), which in turn are used for device design choices, risk assessment (e.g. procedure, rupture) and surgery planning. It is thus of utmost importance to incorporate this deformed and loaded tissue state into the computational models, which implies solving an inverse problem (calculating an undeformed geometry given the load and the deformed geometry). Methodologies to solve this inverse problem can be categorized into iterative and direct methodologies, both having their inherent advantages and disadvantages. Direct methodologies are typically based on the inverse elastostatics (IE) approach and offer a computationally efficient single-shot methodology to compute the in vivo stress state. However, cumbersome, problem-specific derivations of the formulations and non-trivial access to the finite element analysis (FEA) code, especially for commercial products, hinder broad implementation of these methodologies. For that reason, we developed a novel, modular IE approach and implemented this methodology in a commercial FEA solver with minor user subroutine interventions. The accuracy of this methodology was demonstrated in an arterial tube and a porcine biventricular myocardium model. The computational power and efficiency of the methodology were shown by computing the in vivo stress and strain state, and the corresponding unloaded geometry, for two models containing multiple interacting incompressible, anisotropic (fiber-embedded) and hyperelastic material behaviors: a patient-specific abdominal aortic aneurysm and a full 4-chamber heart model.
A new hyperspectral image compression paradigm based on fusion
NASA Astrophysics Data System (ADS)
Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto
2016-10-01
The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor; hence, the process must be performed by space-qualified hardware, with area, power and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high resolution multispectral image. These two degraded images are then sent to the ground, where they must be fused using a fusion algorithm for hyperspectral and multispectral images, in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple, with the fusion process used to reconstruct the image being the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and their results corroborate the benefits of the proposed methodology.
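The on-board half of the scheme is deliberately simple. A sketch under the assumption that spatial degradation is plain decimation and spectral degradation is band-group averaging (both factors are illustrative; the paper evaluates several degradation methodologies):

```python
import numpy as np

def onboard_compress(cube, spatial_factor=4, bands_per_group=10):
    """Split an (H, W, B) hyperspectral cube into the two downlinked products."""
    # Product 1: low spatial resolution, full spectral resolution.
    lr_hsi = cube[::spatial_factor, ::spatial_factor, :]
    # Product 2: full spatial resolution, reduced spectral resolution.
    h, w, b = cube.shape
    groups = b // bands_per_group
    hr_msi = cube[:, :, :groups * bands_per_group].reshape(
        h, w, groups, bands_per_group).mean(axis=3)
    return lr_hsi, hr_msi
```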
Dialoguing with Dreams in Existential Art Therapy
ERIC Educational Resources Information Center
Moon, Bruce L.
2007-01-01
This article presents a theoretical and methodological framework for interactive dialogue and analysis of dream images in existential art therapy. In this phenomenological-existential approach, the client and art therapist are regarded as equal partners with respect to sharing in the process of creation and discovery of meaning (Frankl, 1955,…
Li, Qian; Chang, Young-Tae
2006-01-01
This protocol outlines a methodology for the preparation and characterization of three RNA-specific fluorescent probes (E36, E144 and F22) and their use in live cell imaging. It describes a detailed procedure for their chemical synthesis and purification; serial product characterization and quality control tests, including measurements of their fluorescence properties in solution, measurement of RNA specificity and analysis of cellular toxicity; and live cell staining and counterstaining with Hoechst or DAPI. Preparation and application of these RNA imaging probes takes 1 week.
Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.
Handels, H; Ehrhardt, J
2009-01-01
Medical image computing has become one of the most challenging fields in medical informatics. In the image-based diagnostics of the future, software assistance will become more and more important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters to characterize the state and changes of image structures of interest (e.g. tumors, organs, vessels, bones etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility and robustness, and the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods from different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer-assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. In particular, model-based image computing techniques open up new perspectives for the prediction of organ changes and patient risk analysis, and will gain importance in the diagnostics and therapy of the future. From a methodological point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals, and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or operation planning is a complex interdisciplinary process. Image computing methods enable new insights into the patient's image data and have the potential to improve medical diagnostics and patient treatment in the future.
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)
1987-01-01
The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.
Applications of artificial intelligence V; Proceedings of the Meeting, Orlando, FL, May 18-20, 1987
NASA Technical Reports Server (NTRS)
Gilmore, John F. (Editor)
1987-01-01
The papers contained in this volume focus on current trends in applications of artificial intelligence. Topics discussed include expert systems, image understanding, artificial intelligence tools, knowledge-based systems, heuristic systems, manufacturing applications, and image analysis. Papers are presented on expert system issues in automated, autonomous space vehicle rendezvous; traditional versus rule-based programming techniques; applications to the control of optional flight information; methodology for evaluating knowledge-based systems; and real-time advisory system for airborne early warning.
Simulation and optimization of volume holographic imaging systems in Zemax.
Wissmann, Patrick; Oh, Se Baek; Barbastathis, George
2008-05-12
We present a new methodology for ray-tracing analysis of volume holographic imaging (VHI) systems. Using the k-sphere formulation, we apply geometrical relationships to describe the volumetric diffraction effects imposed on rays passing through a volume hologram. We explain the k-sphere formulation in conjunction with the ray-tracing process and describe its implementation as a Zemax UDS (User Defined Surface). We conclude with examples of simulation and optimization results and demonstrate the consistency and usefulness of the proposed model.
Use of sonic tomography to detect and quantify wood decay in living trees
Gilbert, Gregory S.; Ballesteros, Javier O.; Barrios-Rodriguez, Cesar A.; Bonadies, Ernesto F.; Cedeño-Sánchez, Marjorie L.; Fossatti-Caballero, Nohely J.; Trejos-Rodríguez, Mariam M.; Pérez-Suñiga, José Moises; Holub-Young, Katharine S.; Henn, Laura A. W.; Thompson, Jennifer B.; García-López, Cesar G.; Romo, Amanda C.; Johnston, Daniel C.; Barrick, Pablo P.; Jordan, Fulvia A.; Hershcovich, Shiran; Russo, Natalie; Sánchez, Juan David; Fábrega, Juan Pablo; Lumpkin, Raleigh; McWilliams, Hunter A.; Chester, Kathleen N.; Burgos, Alana C.; Wong, E. Beatriz; Diab, Jonathan H.; Renteria, Sonia A.; Harrower, Jennifer T.; Hooton, Douglas A.; Glenn, Travis C.; Faircloth, Brant C.; Hubbell, Stephen P.
2016-01-01
Premise of the study: Field methodology and image analysis protocols using acoustic tomography were developed and evaluated as a tool to estimate the amount of internal decay and damage of living trees, with special attention to tropical rainforest trees with irregular trunk shapes. Methods and Results: Living trunks of a diversity of tree species in tropical rainforests in the Republic of Panama were scanned using an Argus Electronic PiCUS 3 Sonic Tomograph and evaluated for the amount and patterns of internal decay. A protocol using ImageJ analysis software was used to quantify the proportions of intact and compromised wood. The protocols provide replicable estimates of internal decay and cavities for trees of varying shapes, wood density, and bark thickness. Conclusions: Sonic tomography, coupled with image analysis, provides an efficient, noninvasive approach to evaluate decay patterns and structural integrity of even irregularly shaped living trees. PMID:28101433
NASA Astrophysics Data System (ADS)
Robinson, G.; Ahmed, Ashraf A.; Hamill, G. A.
2016-07-01
This paper presents the application of a novel methodology to quantify saltwater intrusion parameters in laboratory-scale experiments. The methodology uses an automated image analysis procedure, minimising manual inputs and the systematic errors they can introduce. This allowed quantification of the width of the mixing zone, which is difficult to measure with experimental methods based on visual observations. Glass beads of different grain sizes were tested under both steady-state and transient conditions. The transient results showed good correlation between experimental and numerical intrusion rates. The experimental intrusion rates revealed that the saltwater wedge reached a steady-state condition sooner while receding than while advancing. The hydrodynamics of the experimental mixing zone exhibited similar traits; a greater increase in the width of the mixing zone was observed in the receding saltwater wedge, which indicates faster fluid velocities and higher dispersion. The angle-of-intrusion analysis revealed the formation of a volume of diluted saltwater at the toe position when the saltwater wedge is prompted to recede. In addition, different physical repeats of the experiment produced an average coefficient of variation of less than 0.18 for the measured toe length and width of the mixing zone.
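The mixing-zone width measurement can be illustrated by treating the processed image as a normalized concentration field and measuring, per column, the span between two concentration contours. The 25%/75% bounds here are an assumption, not necessarily the study's definition.

```python
import numpy as np

def mixing_zone_width(conc, lo=0.25, hi=0.75):
    """Per-column width (pixels) of the zone between two concentration contours.

    conc: 2D array of normalized saltwater concentration in [0, 1],
    with rows = depth and columns = horizontal position."""
    widths = []
    for col in conc.T:
        inside = np.where((col >= lo) & (col <= hi))[0]
        widths.append(inside.ptp() + 1 if inside.size else 0)
    return np.asarray(widths)
```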
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, Adriana L.; Varga, Tamas
Branching structures such as lungs, blood vessels and plant roots play a critical role in life. The growth, structure, and function of these branching structures have an immense effect on our lives. Therefore, quantitative size information on such structures in their native environment is invaluable for studying their growth and the effect of the environment on them. X-ray computed tomography (XCT) has been an effective tool for in situ imaging and analysis of branching structures. We developed a costless tool that approximates the surface and volume of branching structures. Our methodology of noninvasive imaging, segmentation and extraction of quantitative information is demonstrated through the analysis of a plant root in its soil medium from 3D tomography data. XCT data collected on a grass specimen were used to visualize its root structure. A suite of open-source software was employed to segment the root from the soil and determine its isosurface, which was used to calculate its volume and surface. This methodology of processing 3D data is applicable to other branching structures, even when the structure of interest is of similar x-ray attenuation to its environment and difficulties arise with sample segmentation.
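The isosurface step described here is reproducible with open-source Python tools. A sketch assuming the root/soil separation has already produced a binary or probability volume, and that scikit-image is an acceptable stand-in for the unnamed software suite:

```python
import numpy as np
from skimage import measure

def surface_and_volume(volume, level=0.5, spacing=(1.0, 1.0, 1.0)):
    """Isosurface area and voxel-count volume of a segmented 3D image."""
    verts, faces, _, _ = measure.marching_cubes(volume, level=level,
                                                spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    # Volume from voxel counting, scaled by the physical voxel size.
    volume_estimate = (volume > level).sum() * np.prod(spacing)
    return area, volume_estimate
```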
Epel, Boris; Sundramoorthy, Subramanian V.; Barth, Eugene D.; Mailer, Colin; Halpern, Howard J.
2011-01-01
Purpose: The authors compare two electron paramagnetic resonance imaging modalities at 250 MHz to determine advantages and disadvantages of those modalities for in vivo oxygen imaging. Methods: Electron spin echo (ESE) and continuous wave (CW) methodologies were used to obtain three-dimensional images of a narrow linewidth, water soluble, nontoxic oxygen-sensitive trityl molecule OX063 in vitro and in vivo. The authors also examined sequential images obtained from the same animal injected intravenously with trityl spin probe to determine temporal stability of methodologies. Results: A study of phantoms with different oxygen concentrations revealed a threefold advantage of the ESE methodology in terms of reduced imaging time and more precise oxygen resolution for samples with less than 70 torr oxygen partial pressure. Above ∼100 torr, CW performed better. The images produced by both methodologies showed pO2 distributions with similar mean values. However, ESE images demonstrated superior performance in low pO2 regions while missing voxels in high pO2 regions. Conclusions: ESE and CW have different areas of applicability. ESE is superior for hypoxia studies in tumors. PMID:21626937
Physics-based deformable organisms for medical image analysis
NASA Astrophysics Data System (ADS)
Hamarneh, Ghassan; McIntosh, Chris
2005-04-01
Previously, "Deformable organisms" were introduced as a novel paradigm for medical image analysis that uses artificial life modelling concepts. Deformable organisms were designed to complement the classical bottom-up deformable models methodologies (geometrical and physical layers), with top-down intelligent deformation control mechanisms (behavioral and cognitive layers). However, a true physical layer was absent and in order to complete medical image segmentation tasks, deformable organisms relied on pure geometry-based shape deformations guided by sensory data, prior structural knowledge, and expert-generated schedules of behaviors. In this paper we introduce the use of physics-based shape deformations within the deformable organisms framework yielding additional robustness by allowing intuitive real-time user guidance and interaction when necessary. We present the results of applying our physics-based deformable organisms, with an underlying dynamic spring-mass mesh model, to segmenting and labelling the corpus callosum in 2D midsagittal magnetic resonance images.
CT radiation profile width measurement using CR imaging plate raw data
Yang, Chang‐Ying Joseph
2015-01-01
This technical note demonstrates computed tomography (CT) radiation profile measurement using computed radiography (CR) imaging plate raw data, showing that it is possible to perform the CT collimation width measurement in a single scan without saturating the imaging plate. Previously described methods require careful adjustments to the CR reader settings in order to avoid signal clipping in the CR-processed image. CT radiation profile measurements were taken as part of routine quality control on 14 CT scanners from four vendors. CR cassettes were placed on the CT scanner bed, raised to the isocenter, and leveled. Axial scans were taken at all available collimations, advancing the cassette for each scan. The CR plates were processed and the raw CR data were analyzed using MATLAB scripts to measure collimation widths. The raw data approach was compared with previously established methodology. The quality control analysis scripts are released as open source under Creative Commons licensing. A log-linear relationship was found between raw pixel value and air kerma, and raw data collimation width measurements were in agreement with CR-processed, bit-reduced data analyzed using previously described methodology. The raw data approach, with its intrinsically wider dynamic range, allows improved measurement flexibility and precision. As a result, we demonstrate a methodology for CT collimation width measurements using a single CT scan and without the need for CR scanning parameter adjustments, which is more convenient for routine quality control work. PMID:26699559
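Given the reported log-linear response, the collimation-width measurement reduces to linearizing the raw profile and taking its full width at half maximum. In this sketch the calibration form and constant `scale` are hypothetical and would have to be replaced by the measured reader response.

```python
import numpy as np

def collimation_fwhm_mm(raw_profile, pixel_pitch_mm, scale=1000.0):
    """FWHM (mm) of a CT radiation profile from raw CR pixel values.

    Assumes a hypothetical log-linear response, kerma ~ 10**(pixel / scale)."""
    kerma = 10.0 ** (np.asarray(raw_profile, float) / scale)
    kerma -= kerma.min()                # remove unexposed baseline
    half = kerma.max() / 2.0
    above = np.where(kerma >= half)[0]

    def crossing(i_out, i_in):
        # Fractional index where the profile crosses `half` between two samples.
        return i_out + (half - kerma[i_out]) / (kerma[i_in] - kerma[i_out]) * (i_in - i_out)

    left = crossing(above[0] - 1, above[0]) if above[0] > 0 else float(above[0])
    right = (crossing(above[-1] + 1, above[-1])
             if above[-1] < len(kerma) - 1 else float(above[-1]))
    return (right - left) * pixel_pitch_mm
```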
NASA Astrophysics Data System (ADS)
Davis, Benjamin L.; Berrier, Joel C.; Shields, Douglas W.; Kennefick, Julia; Kennefick, Daniel; Seigar, Marc S.; Lacy, Claud H. S.; Puerari, Ivânio
2012-04-01
A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historically quoted pitch angles of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow comparison of spiral galaxy pitch angles with other galactic parameters and tests of spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques.
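As a simpler stand-in for the authors' two-dimensional FFT decomposition, the defining property of a logarithmic spiral (ln r linear in the winding angle) yields a direct least-squares pitch-angle estimate from traced arm points. This fit is an illustrative alternative, not the published Fourier method, and it assumes the points are ordered along a single arm.

```python
import numpy as np

def pitch_angle_deg(x, y, cx=0.0, cy=0.0):
    """Least-squares pitch angle (degrees) of arm points about center (cx, cy).

    Fits ln r = ln r0 + tan(P) * theta, the logarithmic spiral relation."""
    dx, dy = np.asarray(x) - cx, np.asarray(y) - cy
    theta = np.unwrap(np.arctan2(dy, dx))   # continuous winding angle along the arm
    ln_r = np.log(np.hypot(dx, dy))
    slope, _ = np.polyfit(theta, ln_r, 1)
    return np.degrees(np.arctan(slope))
```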
Three-dimensional analysis of implanted magnetic-resonance-visible meshes.
Sindhwani, Nikhil; Feola, Andrew; De Keyzer, Frederik; Claus, Filip; Callewaert, Geertje; Urbankova, Iva; Ourselin, Sebastien; D'hooge, Jan; Deprest, Jan
2015-10-01
Our primary objective was to develop relevant algorithms for quantification of mesh position and 3D shape in magnetic resonance (MR) images. In this proof-of-principle study, one patient with severe anterior vaginal wall prolapse was implanted with an MR-visible mesh. High-resolution MR images of the pelvis were acquired 6 weeks and 8 months postsurgery. 3D models were created using semiautomatic segmentation techniques. Conformational changes were recorded quantitatively using part-comparison analysis. An ellipticity measure is proposed to record longitudinal conformational changes in the mesh arms. The surface that is the effective reinforcement provided by the mesh is calculated using a novel methodology. The area of this surface is the effective support area (ESA). MR-visible mesh was clearly outlined in the images, which allowed us to longitudinally quantify mesh configuration between 6 weeks and 8 months after implantation. No significant changes were found in mesh position, effective support area, conformation of the mesh's main body, and arm length during the period of observation. Ellipticity profiles show longitudinal conformational changes in posterior arms. This paper proposes novel methodologies for a systematic 3D assessment of the position and morphology of MR-visible meshes. A novel semiautomatic tool was developed to calculate the effective area of support provided by the mesh, a potentially clinically important parameter.
Weiskopf, Nikolaus; Veit, Ralf; Erb, Michael; Mathiak, Klaus; Grodd, Wolfgang; Goebel, Rainer; Birbaumer, Niels
2003-07-01
A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.
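The on-line preprocessing described here (linear trend removal plus ROI averaging) is straightforward to emulate offline. A sketch assuming a fixed boolean ROI mask over a list of acquired volumes; a true real-time system would update the trend estimate incrementally as each volume arrives.

```python
import numpy as np

def roi_feedback(volumes, roi_mask):
    """Detrended ROI mean time-series from a list of 3D volumes (one per TR)."""
    ts = np.array([vol[roi_mask].mean() for vol in volumes])
    t = np.arange(len(ts))
    slope, intercept = np.polyfit(t, ts, 1)   # linear drift estimate
    return ts - (slope * t + intercept)       # signal to feed back to the subject
```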
Lackey, Daniel P; Carruth, Eric D; Lasher, Richard A; Boenisch, Jan; Sachse, Frank B; Hitchcock, Robert W
2011-11-01
Gap junctions play a fundamental role in intercellular communication in cardiac tissue. Various types of heart disease including hypertrophy and ischemia are associated with alterations of the spatial arrangement of gap junctions. Previous studies applied two-dimensional optical and electron-microscopy to visualize gap junction arrangements. In normal cardiomyocytes, gap junctions were primarily found at cell ends, but can be found also in more central regions. In this study, we extended these approaches toward three-dimensional reconstruction of gap junction distributions based on high-resolution scanning confocal microscopy and image processing. We developed methods for quantitative characterization of gap junction distributions based on analysis of intensity profiles along the principal axes of myocytes. The analyses characterized gap junction polarization at cell ends and higher-order statistical image moments of intensity profiles. The methodology was tested in rat ventricular myocardium. Our analysis yielded novel quantitative data on gap junction distributions. In particular, the analysis demonstrated that the distributions exhibit significant variability with respect to polarization, skewness, and kurtosis. We suggest that this methodology provides a quantitative alternative to current approaches based on visual inspection, with applications in particular in characterization of engineered and diseased myocardium. Furthermore, we propose that these data provide improved input for computational modeling of cardiac conduction.
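The profile statistics used here (polarization at the cell ends plus higher-order moments) can be computed directly from a longitudinal intensity profile; the 10% end fraction defining "cell ends" is an assumption for illustration.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def profile_stats(profile, end_frac=0.10):
    """Polarization, skewness, and kurtosis of a longitudinal intensity profile."""
    p = np.asarray(profile, float)
    n_end = max(1, int(len(p) * end_frac))
    # Fraction of total intensity concentrated at the two cell ends.
    polarization = (p[:n_end].sum() + p[-n_end:].sum()) / p.sum()
    return polarization, skew(p), kurtosis(p)
```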
Cichy, Radoslaw Martin; Teng, Santani
2017-02-19
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue 'Auditory and visual scene analysis'.
Alexakis, Dimitrios D.; Mexis, Filippos-Dimitrios K.; Vozinaki, Anthi-Eirini K.; Daliakopoulos, Ioannis N.; Tsanis, Ioannis K.
2017-01-01
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R2 values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies. PMID:28635625
Alexakis, Dimitrios D; Mexis, Filippos-Dimitrios K; Vozinaki, Anthi-Eirini K; Daliakopoulos, Ioannis N; Tsanis, Ioannis K
2017-06-21
A methodology for elaborating multi-temporal Sentinel-1 and Landsat 8 satellite images for estimating topsoil Soil Moisture Content (SMC) to support hydrological simulation studies is proposed. After pre-processing the remote sensing data, backscattering coefficient, Normalized Difference Vegetation Index (NDVI), thermal infrared temperature and incidence angle parameters are assessed for their potential to infer ground measurements of SMC, collected at the top 5 cm. A non-linear approach using Artificial Neural Networks (ANNs) is tested. The methodology is applied in Western Crete, Greece, where a SMC gauge network was deployed during 2015. The performance of the proposed algorithm is evaluated using leave-one-out cross validation and sensitivity analysis. ANNs prove to be the most efficient in SMC estimation yielding R² values between 0.7 and 0.9. The proposed methodology is used to support a hydrological simulation with the HEC-HMS model, applied at the Keramianos basin which is ungauged for SMC. Results and model sensitivity highlight the contribution of combining Sentinel-1 SAR and Landsat 8 images for improving SMC estimates and supporting hydrological studies.
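A minimal version of the described regression setup: an ANN mapping the four predictors (backscatter, NDVI, thermal temperature, incidence angle) to SMC, scored with leave-one-out cross validation in scikit-learn. The network size and input scaling are our assumptions, not the published configuration.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import r2_score

def loo_smc_model(X, y):
    """X: (n, 4) predictors [sigma0, NDVI, T_ir, incidence]; y: measured SMC."""
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,),
                                       max_iter=5000, random_state=0))
    # One held-out station per fold, as in leave-one-out cross validation.
    y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
    return y_pred, r2_score(y, y_pred)
```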
Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka
2017-01-01
Considering the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning and image capturing and processing devices and algorithms, and advances in the development of novel stationary phases as well as various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail using HPTLC chromatograms of plant resins as an example. In that sense, variables such as gray intensities of pixels along the solvent front, peak areas and mean peak values were used as input data and compared to obtain the best classification models. Important steps in image analysis (baseline removal, denoising, target peak alignment and normalization) were pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and can be proposed for further investigation in HPTLC fingerprinting.
NASA Astrophysics Data System (ADS)
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continues to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differs, implying that the observation lengths of each modality can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Simulations show the inter-modal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.
microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling
NASA Astrophysics Data System (ADS)
Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.
2017-09-01
Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling.
Comi, Troy J; Neumann, Elizabeth K; Do, Thanh D; Sweedler, Jonathan V
2017-09-01
Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
A comparative study of 2 computer-assisted methods of quantifying brightfield microscopy images.
Tse, George H; Marson, Lorna P
2013-10-01
Immunohistochemistry continues to be a powerful tool for the detection of antigens. There are several commercially available software packages that allow image analysis; however, these can be complex, require a relatively high level of computer skills, and can be expensive. We compared 2 commonly available software packages, Adobe Photoshop CS6 and ImageJ, in their ability to quantify percentage positive area after picrosirius red (PSR) staining and 3,3'-diaminobenzidine (DAB) staining. On analysis of DAB-stained B cells in the mouse spleen, detected with a biotinylated primary rat anti-mouse-B220 antibody, there was no significant difference between converting brightfield microscopy images to binary images to measure black and white pixels using ImageJ and measuring a range of brown pixels with Photoshop (Student t test, P=0.243; correlation r=0.985). When analyzing mouse kidney allografts stained with PSR, Photoshop achieved a greater interquartile range while maintaining a lower 10th percentile value compared with analysis with ImageJ. A lower 10th percentile reflects that Photoshop analysis is better at analyzing tissues with low levels of positive pixels, which is particularly relevant for control tissues or negative controls, whereas after ImageJ analysis the same images would result in spuriously high levels of positivity. Furthermore, comparing the 2 methods by Bland-Altman plot revealed that the 2 methodologies did not agree when measuring images with a higher percentage of positive staining, and correlation was poor (r=0.804). We conclude that for computer-assisted analysis of images of DAB-stained tissue there is no difference between using Photoshop or ImageJ. However, for analysis of color images where differentiation into a binary pattern is not easy, such as with PSR, Photoshop is superior at identifying higher levels of positivity while maintaining differentiation of low levels of positive staining.
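The two counting strategies compared in this study reduce to thresholding rules. In this sketch the grayscale cutoff and the rough DAB-brown RGB criterion are both illustrative stand-ins for ImageJ's binarization and Photoshop's interactive color-range selection.

```python
import numpy as np

def percent_positive_binary(gray, threshold=128):
    """ImageJ-style: percentage of pixels below a grayscale threshold."""
    return 100.0 * (gray < threshold).mean()

def percent_positive_brown(rgb):
    """Photoshop-style: percentage of pixels in a rough DAB-brown RGB range."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    brown = (r > g) & (g > b) & (r > 60) & (b < 150)
    return 100.0 * brown.mean()
```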
Radiotracer imaging studies in hepatic encephalopathy: ISHEN practice guidelines.
Berding, Georg; Banati, Richard B; Buchert, Ralph; Chierichetti, Franca; Grover, Vijay P B; Kato, Akinobu; Keiding, Susanne; Taylor-Robinson, Simon D
2009-05-01
There is a lack of consensus on radiotracer usage in hepatic encephalopathy (HE). We have focused our attention on three main areas: (i) radiotracer imaging in animal models of HE, (ii) methodological issues of radiotracer imaging in HE and (iii) radiotracer imaging studies on the pathophysiology and (new) therapies in HE. We suggest the following: 1. Positron emission tomography (PET) and single photon emission computed tomography lend themselves to the study of animal models of HE, but the models that are suitable depend on the specific research question. Magnetic resonance imaging (MRI) may be a useful alternative technique. 2. Owing to the cost of the technique, there is a need for multicentre human PET studies to overcome the problem of underpowered small studies being undertaken in individual research centres. There should be a unified PET protocol with central, anonymised data analysis in one centre, using validated methodology, on behalf of all participating centres. Such studies would be useful for the assessment of early intervention in patients with subtle neuropsychiatric symptoms, or for clarification of the effect of liver transplantation on HE. 3. While radiotracer imaging modalities remain useful research tools for the study of pathogenesis and for the assessment of treatment effects, there is no consensus on the use of imaging in routine clinical practice for diagnosis and prognosis. The most promising objective tools appear to be magnetic resonance spectroscopy (MRS) and volumetric MRI, which can be performed in multiple centres without the difficulties that radiotracer imaging entails.
Algorithm sensitivity analysis and parameter tuning for tissue image segmentation pipelines.
Teodoro, George; Kurç, Tahsin M; Taveira, Luís F R; Melo, Alba C M A; Gao, Yi; Kong, Jun; Saltz, Joel H
2017-04-01
Sensitivity analysis and parameter tuning are important processes in large-scale image analysis. They are very costly because the image analysis workflows are required to be executed several times to systematically correlate output variations with parameter changes or to tune parameters. An integrated solution with minimum user interaction that uses effective methodologies and high performance computing is required to scale these studies to large imaging datasets and expensive analysis workflows. The experiments with two segmentation workflows show that the proposed approach can (i) quickly identify and prune parameters that are non-influential; (ii) search a small fraction (about 100 points) of the parameter search space with billions to trillions of points and improve the quality of segmentation results (Dice and Jaccard metrics) by as much as 1.42× compared to the results from the default parameters; (iii) attain good scalability on a high performance cluster with several effective optimizations. Our work demonstrates the feasibility of performing sensitivity analyses, parameter studies and auto-tuning with large datasets. The proposed framework can enable the quantification of error estimations and output variations in image segmentation pipelines. Source code: https://github.com/SBU-BMI/region-templates/. Contact: teodoro@unb.br. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
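The Dice and Jaccard metrics used to report segmentation quality have standard set-overlap definitions; a minimal sketch on boolean masks:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A and B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: |A and B| / |A or B|."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

a = np.zeros((100, 100), bool); a[20:60, 20:60] = True
b = np.zeros((100, 100), bool); b[30:70, 30:70] = True
print(dice(a, b), jaccard(a, b))
```

A "1.42× improvement" in these metrics means the tuned parameters raised the overlap score by that factor relative to the defaults.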
Greenwood, Taylor J; Lopez-Costa, Rodrigo I; Rhoades, Patrick D; Ramírez-Giraldo, Juan C; Starr, Matthew; Street, Mandie; Duncan, James; McKinstry, Robert C
2015-01-01
The marked increase in radiation exposure from medical imaging, especially in children, has caused considerable alarm and spurred efforts to preserve the benefits but reduce the risks of imaging. Applying the principles of the Image Gently campaign, data-driven process and quality improvement techniques such as process mapping and flowcharting, cause-and-effect diagrams, Pareto analysis, statistical process control (control charts), failure mode and effects analysis, "lean" or Six Sigma methodology, and closed feedback loops led to a multiyear program that has reduced overall computed tomographic (CT) examination volume by more than fourfold and concurrently decreased radiation exposure per CT study without compromising diagnostic utility. This systematic approach involving education, streamlining access to magnetic resonance imaging and ultrasonography, auditing with comparison with benchmarks, applying modern CT technology, and revising CT protocols has led to a more than twofold reduction in CT radiation exposure between 2005 and 2012 for patients at the authors' institution while maintaining diagnostic utility. (©)RSNA, 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harben, P E; Harris, D; Myers, S
Seismic imaging and tracking methods have intelligence and monitoring applications. Current systems, however, do not adequately calibrate or model the unknown geological heterogeneity. Current systems are also not designed for rapid data acquisition and analysis in the field. This project seeks to build the core technological capabilities coupled with innovative deployment, processing, and analysis methodologies to allow seismic methods to be effectively utilized in the applications of seismic imaging and vehicle tracking where rapid (minutes to hours) and real-time analysis is required. The goal of this project is to build capabilities in acquisition system design and utilization, in full 3D finite difference modeling, and in statistical characterization of geological heterogeneity. Such capabilities coupled with a rapid field analysis methodology based on matched field processing are applied to problems associated with surveillance, battlefield management, finding hard and deeply buried targets, and portal monitoring. This project benefits the U.S. military and intelligence community in support of LLNL's national-security mission. FY03 was the final year of this project. In the 2.5 years this project was active, numerous and varied developments and milestones were accomplished. A wireless communication module for seismic data was developed to facilitate rapid seismic data acquisition and analysis. The E3D code was enhanced to include topographic effects. Codes were developed to implement the Karhunen-Loeve (K-L) statistical methodology for generating geological heterogeneity that can be utilized in E3D modeling. The matched field processing methodology applied to vehicle tracking, based on a field calibration to characterize geological heterogeneity, was tested and successfully demonstrated in a tank tracking experiment at the Nevada Test Site. A 3-seismic-array vehicle tracking testbed was installed on-site at LLNL for testing real-time seismic tracking methods. A field experiment was conducted over a tunnel at the Nevada Test Site that quantified the tunnel reflection signal and, coupled with modeling, identified key needs and requirements in the experimental layout of sensors. A large field experiment was conducted at the Lake Lynn Laboratory, a mine safety research facility in Pennsylvania, over a tunnel complex in realistic, difficult conditions. This experiment gathered the data necessary for a full 3D attempt to apply the methodology. The experiment also collected data to analyze the capability to detect and locate in-tunnel explosions for mine safety and other applications.
Identification of novel loci for the generation of reporter mice
Rebecchi, Monica; Levandis, Giovanna
2017-01-01
Deciphering the etiology of complex pathologies at the molecular level requires longitudinal studies encompassing multiple biochemical pathways (apoptosis, proliferation, inflammation, oxidative stress). In vivo imaging of current reporter animals has enabled the spatio-temporal analysis of specific molecular events; however, the lack of a multiplicity of loci for the generalized and regulated expression of integrated transgenes hampers the creation of systems for the simultaneous analysis of more than one biochemical pathway at a time. Here we developed and tested an in vivo-based methodology for the identification of multiple insertional loci suitable for the generation of reliable reporter mice. The validity of the methodology was tested by generating novel mice useful for reporting on inflammation and oxidative stress. PMID:27899606
Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter
2017-02-01
Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies for conducting quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.
Large-Scale medical image analytics: Recent methodologies, applications and Future directions.
Zhang, Shaoting; Metaxas, Dimitris
2016-10-01
Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. In particular, we advocate that image retrieval systems should be scaled up significantly, to the point at which interactive systems become effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion. Copyright © 2016. Published by Elsevier B.V.
A data grid for imaging-based clinical trials
NASA Astrophysics Data System (ADS)
Zhou, Zheng; Chao, Sander S.; Lee, Jasper; Liu, Brent; Documet, Jorge; Huang, H. K.
2007-03-01
Clinical trials play a crucial role in testing new drugs or devices in modern medicine. Medical imaging has also become an important tool in clinical trials because images provide a unique and fast diagnosis with visual observation and quantitative assessment. A typical imaging-based clinical trial consists of: 1) a well-defined, rigorous clinical trial protocol; 2) a radiology core that has a quality control mechanism, a biostatistics component, and a server for storing and distributing data and analysis results; and 3) many field sites that generate and send image studies to the radiology core. As the number of clinical trials increases, it becomes a challenge for a radiology core servicing multiple trials to have a server robust enough to administer and quickly distribute information to participating radiologists/clinicians worldwide. The Data Grid can satisfy the aforementioned requirements of imaging-based clinical trials. In this paper, we present a Data Grid architecture for imaging-based clinical trials. A Data Grid prototype has been implemented in the Image Processing and Informatics (IPI) Laboratory at the University of Southern California to test and evaluate performance in storing trial images and analysis results for a clinical trial. The implementation methodology and evaluation protocol of the Data Grid are presented.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
ERIC Educational Resources Information Center
Rosique-Blasco, Mario; Madrid-Guijarro, Antonia; García-Pérez-de-Lema, Domingo
2016-01-01
Purpose: The purpose of this paper is to explore how entrepreneurial skills (such as creativity, proactivity and risk tolerance) and socio-cultural factors (such as role model and businessman image) affect secondary education students' propensity towards entrepreneurial options in their future careers. Design/methodology/approach: A sample of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korte, Andrew R
This thesis presents efforts to improve matrix-assisted laser desorption ionization-mass spectrometry imaging (MALDI-MSI) methodology for the analysis of metabolites from plant tissue samples. The first chapter consists of a general introduction to the technique of MALDI-MSI, and the sixth and final chapter provides a brief summary and an outlook on future work.
Noninvasive three-dimensional live imaging methodology for the spindles at meiosis and mitosis
NASA Astrophysics Data System (ADS)
Zheng, Jing-gao; Huo, Tiancheng; Tian, Ning; Chen, Tianyuan; Wang, Chengming; Zhang, Ning; Zhao, Fengying; Lu, Danyu; Chen, Dieyan; Ma, Wanyun; Sun, Jia-lin; Xue, Ping
2013-05-01
The spindle plays a crucial role in normal chromosome alignment and segregation during meiosis and mitosis. Studying spindles in living cells noninvasively is of great value in assisted reproduction technology (ART). Here, we present a novel spindle imaging methodology, full-field optical coherence tomography (FF-OCT). Without any dye labeling or fixation, we demonstrate the first successful application of FF-OCT to noninvasive three-dimensional (3-D) live imaging of the meiotic spindles within living mouse oocytes at metaphase II, as well as the mitotic spindles in living zygotes at metaphase and telophase. By post-processing the 3-D dataset obtained with FF-OCT, the important morphological and spatial parameters of the spindles, such as the short and long axes, spatial localization, and the angle of meiotic spindle deviation from the first polar body in the oocyte, were precisely measured with a spatial resolution of 0.7 μm. Our results reveal the potential of FF-OCT as an imaging tool capable of noninvasive 3-D live morphological analysis of spindles, which might be useful for ART-related procedures and many other spindle-related studies.
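Extracting short and long axes from a segmented 3-D structure is commonly done by principal component analysis of the voxel coordinates; the sketch below is a generic illustration of that idea (synthetic point cloud, not the authors' pipeline):

```python
import numpy as np

# Toy spindle: an ellipsoidal cloud of segmented voxel coordinates (in µm).
rng = np.random.default_rng(11)
pts = rng.normal(size=(2000, 3)) * np.array([6.0, 2.0, 2.0])  # long axis along x

centered = pts - pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # ascending eigenvalues

long_axis = 2.0 * np.sqrt(eigvals[-1])    # spread (about 2 sigma) per principal axis
short_axis = 2.0 * np.sqrt(eigvals[0])
print(f"long ~{long_axis:.1f} um, short ~{short_axis:.1f} um")
```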
de Dumast, Priscille; Mirabel, Clément; Cevidanes, Lucia; Ruellas, Antonio; Yatabe, Marilia; Ioshida, Marcos; Ribera, Nina Tubau; Michoud, Loic; Gomes, Liliane; Huang, Chao; Zhu, Hongtu; Muniz, Luciana; Shoukri, Brandon; Paniagua, Beatriz; Styner, Martin; Pieper, Steve; Budin, Francois; Vimort, Jean-Baptiste; Pascal, Laura; Prieto, Juan Carlos
2018-07-01
The purpose of this study is to describe the methodological innovations of a web-based system for storage, integration and computation of biomedical data, using a training imaging dataset to remotely compute a deep neural network classifier of temporomandibular joint osteoarthritis (TMJOA). The imaging dataset for this study consisted of three-dimensional (3D) surface meshes of mandibular condyles constructed from cone beam computed tomography (CBCT) scans. The training dataset consisted of 259 condyles, 105 from control subjects and 154 from patients with a diagnosis of TMJOA. For the image analysis classification, 34 right and left condyles from 17 patients (39.9 ± 11.7 years), who had experienced signs and symptoms of the disease for less than 5 years, were included as the testing dataset. For the integrative statistical model of clinical, biological and imaging markers, the sample consisted of the same 17 test OA subjects and 17 age- and sex-matched control subjects (39.4 ± 15.4 years), who did not show any sign or symptom of OA. For these 34 subjects, a standardized clinical questionnaire and blood and saliva samples were also collected. The technological methodologies in this study include a deep neural network classifier of 3D condylar morphology (ShapeVariationAnalyzer, SVA), and a flexible web-based system for data storage, computation and integration (DSCI) of high-dimensional imaging, clinical, and biological data. The DSCI system trained and tested the neural network, indicating 5 stages of structural degenerative changes in condylar morphology in the TMJ, with 91% agreement between the clinician consensus and the SVA classifier. The DSCI system also remotely ran a novel statistical analysis, Multivariate Functional Shape Data Analysis, which computed high-dimensional correlations between 3D shape coordinates, clinical pain levels and levels of biological markers, and then graphically displayed the computation results. The findings of this study demonstrate a comprehensive phenotypic characterization of TMJ health and disease at the clinical, imaging and biological levels, using novel, flexible and versatile open-source tools for a web-based system that provides advanced shape statistical analysis and a neural network based classification of temporomandibular joint osteoarthritis. Published by Elsevier Ltd.
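The abstract does not give the SVA network's architecture, so the sketch below is only a schematic stand-in: a small scikit-learn classifier trained on synthetic flattened 3D shape features with the study's class sizes (105 control, 154 TMJOA):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
n_points = 50                                   # vertices per condyle mesh (toy)
X_control = rng.normal(0.0, 1.0, size=(105, n_points * 3))
X_oa      = rng.normal(0.3, 1.0, size=(154, n_points * 3))  # shifted toy class
X = np.vstack([X_control, X_oa])
y = np.array([0] * 105 + [1] * 154)             # 0 = control, 1 = TMJOA

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```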
Assessing the validity of discourse analysis: transdisciplinary convergence
NASA Astrophysics Data System (ADS)
Jaipal-Jamani, Kamini
2014-12-01
Research studies using discourse analysis approaches make claims about phenomena or issues based on interpretation of written or spoken text, which includes images and gestures. How are findings/interpretations from discourse analysis validated? This paper proposes transdisciplinary convergence as a way to validate discourse analysis approaches to research. The argument is made that discourse analysis explicitly grounded in semiotics, systemic functional linguistics, and critical theory, offers a credible research methodology. The underlying assumptions, constructs, and techniques of analysis of these three theoretical disciplines can be drawn on to show convergence of data at multiple levels, validating interpretations from text analysis.
Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop
NASA Technical Reports Server (NTRS)
Vane, G. (Editor); Goetz, A. F. H. (Editor)
1985-01-01
The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.
Bagci, Ulas; Foster, Brent; Miller-Jaster, Kirsten; Luna, Brian; Dey, Bappaditya; Bishai, William R; Jonsson, Colleen B; Jain, Sanjay; Mollura, Daniel J
2013-07-23
Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline into our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers' understanding of infectious diseases. We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB), for a total of 92, 44, and 24 scans for the respective studies, with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists' delineations (ground truths). Metabolic and gross volumes of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, the severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to state-of-the-art segmentation algorithms. For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R2=0.8922, p<0.01 and R2=0.8664, p<0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from the quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data, respectively (all within the resolution limits). Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections, as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models is efficient and accurate compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy.
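The SUV metric used above is conventionally the tissue activity concentration normalised by injected dose per unit body weight; a minimal sketch with illustrative values (not from the study):

```python
def suv(activity_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalised standardized uptake value:
    SUV = tissue activity / (injected dose / body weight)."""
    dose_kbq = injected_dose_mbq * 1000.0
    weight_g = body_weight_kg * 1000.0      # 1 g of tissue is roughly 1 mL
    return activity_kbq_per_ml / (dose_kbq / weight_g)

# Illustrative rabbit-scale numbers only:
print(suv(activity_kbq_per_ml=12.0, injected_dose_mbq=50.0, body_weight_kg=3.0))
```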
NASA Astrophysics Data System (ADS)
van Eycke, Yves-Rémi; Allard, Justine; Salmon, Isabelle; Debeir, Olivier; Decaestecker, Christine
2017-02-01
Immunohistochemistry (IHC) is a widely used technique in pathology to evidence protein expression in tissue samples. However, this staining technique is known to present inter-batch variations. Whole slide imaging in digital pathology offers a possibility to overcome this problem by means of image normalisation techniques. In the present paper we propose a methodology to objectively evaluate the need for image normalisation and to identify the best way to perform it. This methodology uses tissue microarray (TMA) materials and statistical analyses to evidence the possible variations occurring at the colour and intensity levels, as well as to evaluate the efficiency of image normalisation methods in correcting them. We applied our methodology to test different methods of image normalisation based on blind colour deconvolution, which we adapted for IHC staining. These tests were carried out for different IHC experiments on different tissue types, targeting different proteins with different subcellular localisations. Our methodology enabled us to establish and validate inter-batch normalisation transforms which correct the non-relevant IHC staining variations. The normalised image series were then processed to extract coherent quantitative features characterising the IHC staining patterns.
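The paper adapts blind colour deconvolution, which is not reproduced here; as a point of comparison, the classical fixed-matrix (Ruifrok-Johnston) separation of haematoxylin/eosin/DAB stains is available in scikit-image:

```python
import numpy as np
from skimage.color import rgb2hed

rgb = np.random.rand(64, 64, 3)    # stand-in for an RGB IHC tile in [0, 1]
hed = rgb2hed(rgb)                 # haematoxylin / eosin / DAB channels
dab = hed[..., 2]                  # DAB optical-density image
print("mean DAB optical density:", dab.mean())
```

Batch-level statistics of such stain channels are the kind of quantities the proposed methodology compares across TMA slides.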
NASA Astrophysics Data System (ADS)
Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.
2005-11-01
Simulation is useful in the validation of functional image analysis methods, particularly given the number of analysis techniques currently available that lack thorough validation. Current simulation methods suffer from long run times or unrealistic results, making it difficult to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF), and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows for the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects that are to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, then transformed to the individual subject space, convolved with a measured PSF and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College, London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to the system resolution rather than the slightly larger kernel used routinely. A significant correlation was found between the effective volume of a simulated abnormality and the size detected using SPM99. Improvements in VOI analysis sensitivity were found when using the region median rather than the region mean. The method and dataset provide an efficient methodology for the comparison and cross-validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.
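The core simulation step, defining a defect, blurring it to system resolution, and removing it from a normal image, can be sketched with a Gaussian proxy for the measured PSF (the authors used a measured PSF; the FWHM below is illustrative):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

normal = np.full((64, 64, 64), 100.0)      # stand-in normal perfusion volume
defect = np.zeros_like(normal)
defect[30:40, 30:40, 30:40] = 20.0         # abnormality defined in atlas space

fwhm_voxels = 8.0                          # assumed system resolution
sigma = fwhm_voxels / 2.355                # FWHM -> Gaussian sigma
blurred = gaussian_filter(defect, sigma)   # blur defect to system resolution
abnormal = normal - blurred                # remove counts from the normal image
print(abnormal.min(), abnormal.max())
```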
2011-01-01
When applying echo-Doppler imaging for either clinical or research purposes, it is very important to select the most adequate modality/technology and to choose the most reliable and reproducible measurements. Quality control is a mainstay of reducing variability among institutions and operators and must be ensured by using appropriate procedures for the acquisition, storage and interpretation of echo-Doppler data. This goal can be achieved by employing an echo core laboratory (ECL), with responsibility for standardizing image acquisition processes (performed at the peripheral echo-labs) and analysis (by monitoring and optimizing the internal intra- and inter-reader variability of measurements). Accordingly, the Working Group of Echocardiography of the Italian Society of Cardiology decided to design standardized procedures for image acquisition in peripheral laboratories and for reading procedures, and to propose a methodological approach to assess the reproducibility of echo-Doppler parameters of cardiac structure and function using both standard and advanced technologies. A number of cardiologists experienced in cardiac ultrasound were involved in setting up an ECL available for future studies involving complex imaging or including echo-Doppler measures as primary or secondary efficacy or safety end-points. The present manuscript describes the methodology of the procedures (image acquisition and measurement reading) and documents the work done so far to test the reproducibility of the different echo-Doppler modalities (standard and advanced). These procedures are also suggested for use in non-referral echocardiographic laboratories as an internal quality check, with the aim of optimizing the clinical consistency of echo-Doppler data. PMID:21943283
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-02-01
The SIRT (Simultaneous Iterative Reconstruction Technique) algorithm is commonly used in electron tomography to calculate the original volume of the sample from noisy images, but the results provided by this iterative procedure are strongly dependent on the specific implementation of the algorithm, as well as on the number of iterations employed for the reconstruction. In this work, a methodology for selecting the iteration number of the SIRT reconstruction that provides the most accurate segmentation is proposed. The methodology is based on the statistical analysis of the intensity profiles at the edges of the objects in the reconstructed volume. A phantom that resembles a carbon black aggregate was created to validate the methodology, and the SIRT implementations of two free software packages (TOMOJ and TOMO3D) were used. Copyright © 2016 Elsevier B.V. All rights reserved.
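SIRT admits several formulations, which is precisely why reconstructions differ between packages; one common variant updates the volume estimate with row- and column-normalised backprojections. A minimal sketch on a toy system matrix:

```python
import numpy as np

def sirt(A, b, n_iter):
    """x <- x + C A^T R (b - A x), with R and C the inverse row/column sums
    of the projection matrix A. Packages differ in exactly these details."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += C * (A.T @ (R * (b - A @ x)))
    return x

rng = np.random.default_rng(1)
A = np.abs(rng.normal(size=(40, 20)))      # toy projection geometry
x_true = rng.random(20)
print(np.linalg.norm(sirt(A, A @ x_true, 200) - x_true))
```

Stopping this loop too early or too late changes edge sharpness, which is what the proposed intensity-profile criterion exploits.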
On estimation of secret message length in LSB steganography in spatial domain
NASA Astrophysics Data System (ADS)
Fridrich, Jessica; Goljan, Miroslav
2004-06-01
In this paper, we present a new method for estimating the secret message length of bit-streams embedded using the Least Significant Bit embedding (LSB) at random pixel positions. We introduce the concept of a weighted stego image and then formulate the problem of determining the unknown message length as a simple optimization problem. The methodology is further refined to obtain more stable and accurate results for a wide spectrum of natural images. One of the advantages of the new method is its modular structure and a clean mathematical derivation that enables elegant estimator accuracy analysis using statistical image models.
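As background for the estimation problem, a toy version of the embedding being attacked, payload bits written into the least significant bits of randomly chosen pixels, looks like this (the estimator itself, based on the weighted stego image, is beyond this sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
cover = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

p = 0.3                                    # fraction of pixels carrying payload
mask = rng.random(cover.shape) < p         # random embedding positions
bits = rng.integers(0, 2, size=cover.shape, dtype=np.uint8)
stego = np.where(mask, (cover & 0xFE) | bits, cover)

# Embedding flips roughly half of the selected LSBs, so the observable
# change rate is about p/2 -- the quantity a length estimator must recover.
print("fraction of pixels changed:", np.mean(stego != cover))
```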
Laurinavicius, Arvydas; Plancoulaine, Benoit; Laurinaviciene, Aida; Herlin, Paulette; Meskauskas, Raimundas; Baltrusaityte, Indra; Besusparis, Justinas; Dasevicius, Darius; Elie, Nicolas; Iqbal, Yasir; Bor, Catherine
2014-01-01
Immunohistochemical Ki67 labelling index (Ki67 LI) reflects proliferative activity and is a potential prognostic/predictive marker of breast cancer. However, its clinical utility is hindered by the lack of standardized measurement methodologies. Besides tissue heterogeneity aspects, the key element of methodology remains accurate estimation of Ki67-stained/counterstained tumour cell profiles. We aimed to develop a methodology to ensure and improve accuracy of the digital image analysis (DIA) approach. Tissue microarrays (one 1-mm spot per patient, n = 164) from invasive ductal breast carcinoma were stained for Ki67 and scanned. Criterion standard (Ki67-Count) was obtained by counting positive and negative tumour cell profiles using a stereology grid overlaid on a spot image. DIA was performed with Aperio Genie/Nuclear algorithms. A bias was estimated by ANOVA, correlation and regression analyses. Calibration steps of the DIA by adjusting the algorithm settings were performed: first, by subjective DIA quality assessment (DIA-1), and second, to compensate for the bias established (DIA-2). Visual estimate (Ki67-VE) on the same images was performed by five pathologists independently. ANOVA revealed significant underestimation bias (P < 0.05) for DIA-0, DIA-1 and two pathologists' VE, while DIA-2, VE-median and three other VEs were within the same range. Regression analyses revealed best accuracy for the DIA-2 (R-square = 0.90), exceeding that of VE-median, individual VEs and other DIA settings. Bidirectional bias for the DIA-2, with overestimation at the low end and underestimation at the high end of the scale, was detected. Measurement error correction by inverse regression was applied to improve DIA-2-based prediction of the Ki67-Count, in particular for the clinically relevant interval of Ki67-Count < 40%. Potential clinical impact of the prediction was tested by dichotomising the cases at cut-off values of 10, 15, and 20%. A misclassification rate of 5-7% was achieved, compared to 11-18% for the VE-median-based prediction. Our experiments provide a methodology to achieve accurate Ki67-LI estimation by DIA, based on proper validation, calibration, and measurement error correction procedures, guided by quantified bias from reference values obtained by stereology grid count. This basic validation step is an important prerequisite for high-throughput automated DIA applications to investigate tissue heterogeneity and clinical utility aspects of Ki67 and other immunohistochemistry (IHC) biomarkers.
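One simple reading of the inverse-regression correction is to calibrate the DIA readout against the reference counts and then invert the fitted line; a toy sketch with synthetic data (the bias model here is ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
ki67_count = rng.uniform(0, 40, 164)                    # reference grid counts (%)
dia = 2.0 + 0.85 * ki67_count + rng.normal(0, 2, 164)   # biased DIA readings (toy)

slope, intercept = np.polyfit(ki67_count, dia, 1)       # calibrate DIA vs reference
predicted_count = (dia - intercept) / slope             # inverse-regression correction
print("residual SD:", np.std(predicted_count - ki67_count))
```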
NASA Astrophysics Data System (ADS)
Qie, G.; Wang, G.; Wang, M.
2016-12-01
Mixed pixels and shadows cast by buildings impede accurate estimation and mapping of vegetation carbon density in urban areas. In most previous studies these factors are ignored, resulting in underestimation of city vegetation carbon density. In this study, we present an integrated methodology to improve the accuracy of mapping city vegetation carbon density. First, we applied a linear shadow removal analysis (LSRA) to remotely sensed Landsat 8 images to reduce shadow effects on carbon estimation. Second, we integrated a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), and applied and compared the integrated models on the shadow-removed images to map vegetation carbon density. The methodology was examined in Shenzhen City in Southeast China. A data set from a total of 175 sample plots measured in 2013 and 2014 was used to train the models. The independent variables that contributed statistically significantly to improving the fit of the models to the data and reducing the sum of squared errors were selected from a total of 608 variables derived from different image band combinations and transformations. The vegetation fraction from the LSUA was then added to the models as an important independent variable. The estimates obtained were evaluated using a cross-validation method. Our results showed that the integrated models achieved higher accuracies than traditional methods that ignore the effects of mixed pixels and shadows. This study indicates that the integrated method has great potential for improving the accuracy of urban vegetation carbon density estimation. Key words: urban vegetation carbon, shadow, spectral unmixing, spatial modeling, Landsat 8 images
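Of the estimators listed, kNN is the simplest to illustrate; the sketch below uses toy predictors (the study derived 608 candidate variables plus the LSUA vegetation fraction) with the study's plot count:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.random((175, 6))                    # toy per-plot predictors
y = 30.0 * X[:, 0] + rng.normal(0, 3, 175)  # toy carbon density response

knn = KNeighborsRegressor(n_neighbors=5)
print(cross_val_score(knn, X, y, cv=5, scoring="r2"))   # cross-validated accuracy
```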
Duncan, Niall W; Wiebking, Christine; Muñoz-Torres, Zeidy; Northoff, Georg
2014-01-15
There is an increasing interest in combining different imaging modalities to investigate the relationship between neural and biochemical activity. More specifically, imaging techniques like MRS and PET that allow for biochemical measurement are combined with techniques like fMRI and EEG that measure neural activity in different states. Such combination of neural and biochemical measures raises not only technical issues, such as merging the different data sets, but also several methodological issues. These methodological issues – ranging from hypothesis generation and hypothesis-guided use of technical facilities to target measures and experimental measures – are the focus of this paper. We discuss the various methodological problems and issues raised by the combination of different imaging methodologies in order to investigate neuro-biochemical relationships on a regional level in humans. For example, the choice of transmitter and scan type is discussed, along with approaches that allow the establishment of particular specificities (such as regional or biochemical) to in turn make results fully interpretable. An algorithm that can be used as a form of checklist for designing such multimodal studies is presented. The paper concludes that while several methodological and technical caveats need to be overcome and addressed, multimodal imaging of the neuro-biochemical relationship provides an important tool to better understand the physiological mechanisms of the human brain.
Contrast-detail phantom scoring methodology.
Thomas, Jerry A; Chakrabarti, Kish; Kaczmarek, Richard; Romanyukha, Alexander
2005-03-01
Published results of medical imaging studies which make use of contrast detail mammography (CDMAM) phantom images for analysis are difficult to compare, since the data are often not analyzed in the same way. In order to address this situation, the concept of ideal contrast detail curves is suggested. The ideal contrast detail curves are constructed based on the requirement of having the same product of the diameter and contrast (disk thickness) of the minimal correctly determined object for every row of the CDMAM phantom image. A correlation and comparison of five different quality parameters of the CDMAM phantom image determined for the obtained ideal contrast detail curves is performed. The image quality parameters compared include: (1) contrast detail curve--a graph correlating "minimal correct reading" diameter and disk thickness; (2) correct observation ratio--the ratio of the number of correctly identified objects to the actual total number of objects, multiplied by 100; (3) image quality figure--the sum of the products of the diameter of the smallest scored object and its relative contrast; (4) figure-of-merit--the value obtained from extrapolation of the contrast detail curve to the origin (i.e., zero disk diameter); and (5) k-factor--the product of the thickness and the diameter of the smallest correctly identified disks. The analysis carried out showed the existence of a nonlinear relationship between the above parameters, which means that using different CDMAM image quality parameters can potentially lead to different conclusions about changes in image quality. Construction of the ideal contrast detail curves for the CDMAM phantom is an attempt to determine the quantitative limits of the CDMAM phantom as employed for image quality evaluation. These limits are determined by the relationship between certain parameters of a digital mammography system and the set of gold disk sizes in the CDMAM phantom. Recommendations are made on which CDMAM phantom regions should be used for scoring at different image quality levels and which scoring methodology may be most appropriate. Special attention is also paid to the use of the CDMAM phantom for image quality assessment of digital mammography systems, particularly in the vicinity of the Nyquist frequency.
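The scoring parameters enumerated above are all simple arithmetic on the (diameter, thickness) pairs of the smallest correctly identified disks; a sketch following the definitions in the abstract (illustrative values, with disk thickness standing in for relative contrast):

```python
def correct_observation_ratio(n_correct, n_total):
    """Correctly identified objects as a percentage of all objects."""
    return 100.0 * n_correct / n_total

def image_quality_figure(smallest_scored):
    """Sum over rows of (smallest correctly scored diameter x its contrast);
    `smallest_scored` holds (diameter_mm, thickness_um) pairs."""
    return sum(d * t for d, t in smallest_scored)

def k_factor(diameter_mm, thickness_um):
    """Product of thickness and diameter of the smallest identified disk."""
    return diameter_mm * thickness_um

rows = [(0.25, 1.0), (0.31, 0.8), (0.40, 0.6)]   # illustrative per-row values
print(correct_observation_ratio(38, 45),
      image_quality_figure(rows),
      k_factor(*rows[0]))
```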
NASA Astrophysics Data System (ADS)
Gwinner, K.; Jaumann, R.; Hauber, E.; Hoffmann, H.; Heipke, C.; Oberst, J.; Neukum, G.; Ansan, V.; Bostelmann, J.; Dumke, A.; Elgner, S.; Erkeling, G.; Fueten, F.; Hiesinger, H.; Hoekzema, N. M.; Kersten, E.; Loizeau, D.; Matz, K.-D.; McGuire, P. C.; Mertens, V.; Michael, G.; Pasewaldt, A.; Pinet, P.; Preusker, F.; Reiss, D.; Roatsch, T.; Schmidt, R.; Scholten, F.; Spiegel, M.; Stesky, R.; Tirsch, D.; van Gasselt, S.; Walter, S.; Wählisch, M.; Willner, K.
2016-07-01
The High Resolution Stereo Camera (HRSC) of ESA's Mars Express is designed to map and investigate the topography of Mars. The camera, in particular its Super Resolution Channel (SRC), also obtains images of Phobos and Deimos on a regular basis. As HRSC is a push broom scanning instrument with nine CCD line detectors mounted in parallel, its unique feature is the ability to obtain along-track stereo images and four colors during a single orbital pass. The sub-pixel accuracy of 3D points derived from stereo analysis allows producing DTMs with grid size of up to 50 m and height accuracy on the order of one image ground pixel and better, as well as corresponding orthoimages. Such data products have been produced systematically for approximately 40% of the surface of Mars so far, while global shape models and a near-global orthoimage mosaic could be produced for Phobos. HRSC is also unique because it bridges between laser altimetry and topography data derived from other stereo imaging instruments, and provides geodetic reference data and geological context to a variety of non-stereo datasets. This paper, in addition to an overview of the status and evolution of the experiment, provides a review of relevant methods applied for 3D reconstruction and mapping, and respective achievements. We will also review the methodology of specific approaches to science analysis based on joint analysis of DTM and orthoimage information, or benefitting from high accuracy of co-registration between multiple datasets, such as studies using multi-temporal or multi-angular observations, from the fields of geomorphology, structural geology, compositional mapping, and atmospheric science. Related exemplary results from analysis of HRSC data will be discussed. After 10 years of operation, HRSC covered about 70% of the surface by panchromatic images at 10-20 m/pixel, and about 97% at better than 100 m/pixel. As the areas with contiguous coverage by stereo data are increasingly abundant, we also present original data related to the analysis of image blocks and address methodology aspects of newly established procedures for the generation of multi-orbit DTMs and image mosaics. The current results suggest that multi-orbit DTMs with grid spacing of 50 m can be feasible for large parts of the surface, as well as brightness-adjusted image mosaics with co-registration accuracy of adjacent strips on the order of one pixel, and at the highest image resolution available. These characteristics are demonstrated by regional multi-orbit data products covering the MC-11 (East) quadrangle of Mars, representing the first prototype of a new HRSC data product level.
A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL
NASA Astrophysics Data System (ADS)
Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto
2017-10-01
Hyperspectral imaging systems provide images in which single pixels carry information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications are related to distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by hyperspectral sensors has to be rapidly processed and analysed. For such purposes, parallel hardware devices, such as Field Programmable Gate Arrays (FPGAs), are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies that can be used to implement the desired algorithms on the specific device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution, in which a single high-level synthesis design language can be used to efficiently develop applications on multiple, different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
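The FUN algorithm itself is not reproduced here, but the linear unmixing problem it accelerates, recovering per-pixel endmember abundances from a mixed spectrum, can be sketched with a generic non-negative least-squares baseline:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(9)
bands, n_endmembers = 50, 3
E = np.abs(rng.normal(size=(bands, n_endmembers)))   # endmember signatures
true_ab = np.array([0.6, 0.3, 0.1])
pixel = E @ true_ab + rng.normal(0, 0.01, bands)     # observed mixed spectrum

abundances, _ = nnls(E, pixel)   # least squares with non-negativity constraint
abundances /= abundances.sum()   # impose sum-to-one post hoc
print(abundances)
```

On an FPGA it is this per-pixel solve that gets parallelised across the image.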
Carvalho, Fabiola B; Gonçalves, Marcelo; Tanomaru-Filho, Mário
2007-04-01
The purpose of this study was to describe a new technique using Adobe Photoshop CS (San Jose, CA) image-analysis software to evaluate the radiographic changes of chronic periapical lesions after root canal treatment by digital subtraction radiography. Thirteen upper anterior human teeth with pulp necrosis and radiographic images of chronic periapical lesions were endodontically treated and radiographed 0, 2, 4, and 6 months after root canal treatment using a film holder. The radiographic films were automatically developed and digitized. The radiographic images taken 0, 2, 4, and 6 months after root canal therapy were submitted to digital subtraction in pairs (0 and 2 months, 2 and 4 months, and 4 and 6 months) by choosing the "image," "calculation," "subtract," and "new document" tools from the Adobe Photoshop CS toolbar. The resulting images showed areas of periapical healing in all cases. According to this methodology, the healing or expansion of periapical lesions can be evaluated by means of digital subtraction radiography using Adobe Photoshop CS software.
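For readers without Photoshop, the same subtraction step can be sketched in a few lines of NumPy, assuming the two radiographs are already geometrically aligned (as the film holder ensures) and loaded as 8-bit grayscale arrays. The mid-gray offset is the usual convention in subtraction radiography, not a detail taken from this paper.

```python
# Hedged sketch of digital subtraction radiography: unchanged anatomy
# maps to mid-gray (128); brighter regions indicate density gain
# (e.g., periapical healing), darker regions indicate density loss.
import numpy as np

def subtract_radiographs(follow_up, baseline, offset=128):
    diff = follow_up.astype(np.int16) - baseline.astype(np.int16) + offset
    return np.clip(diff, 0, 255).astype(np.uint8)
```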
A forensic science perspective on the role of images in crime investigation and reconstruction.
Milliet, Quentin; Delémont, Olivier; Margot, Pierre
2014-12-01
This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to demand a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Potter, Christopher
2013-01-01
The Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) methodology was applied to detect changes in perennial vegetation cover at marshland sites in Northern California reported to have undergone restoration between 1999 and 2009. Results showed extensive contiguous areas of restored marshland plant cover at 10 of the 14 sites selected. Gains in woody shrub cover and/or recovery of herbaceous cover that remains productive and evergreen on a year-round basis could be mapped from the image results. However, LEDAPS may not be highly sensitive to changes in wetlands that have been restored mainly with seasonal herbaceous cover (e.g., vernal pools), due to the ephemeral nature of the plant greenness signal. Based on this evaluation, the LEDAPS methodology would be capable of fulfilling a pressing need for consistent, continual, low-cost monitoring of changes in marshland ecosystems of the Pacific Flyway.
Botta, Francesca; Ferrari, Mahila; Chiesa, Carlo; Vitali, Sara; Guerriero, Francesco; Nile, Maria Chiara De; Mira, Marta; Lorenzon, Leda; Pacilio, Massimiliano; Cremonesi, Marta
2018-04-01
To investigate the clinical implication of performing pre-treatment dosimetry for 90Y-microsphere liver radioembolization on 99mTc-MAA SPECT images reconstructed without attenuation or scatter correction and quantified with the patient relative calibration methodology. Twenty-five patients treated with SIR-Spheres® at Istituto Europeo di Oncologia and 31 patients treated with TheraSphere® at Istituto Nazionale Tumori were considered. For each acquired 99mTc-MAA SPECT, four reconstructions were performed: with attenuation and scatter correction (AC_SC), only attenuation (AC_NoSC), only scatter (NoAC_SC) and without corrections (NoAC_NoSC). Absorbed dose maps were calculated from the activity maps, quantified applying the patient relative calibration to the SPECT images. Whole Liver (WL) and Tumor (T) regions were drawn on CT images. The Injected Liver (IL) region was defined to include the voxels receiving an absorbed dose >3.8 Gy/GBq. Whole Healthy Liver (WHL) and Healthy Injected Liver (HIL) regions were obtained as WHL = WL - T and HIL = IL - T. Average absorbed doses to WHL and HIL were calculated, and the injection activity was derived following each Institute's procedure. The values obtained from AC_NoSC, NoAC_SC and NoAC_NoSC images were compared to the reference value suggested by AC_SC images using Bland-Altman analysis and the Wilcoxon paired test (5% significance threshold). Absorbed-dose maps were compared to the reference map (AC_SC) in global terms using the Voxel Normalized Mean Square Error (%VNMSE), and at voxel level by calculating for each voxel the normalized difference with the reference value. The uncertainty affecting absorbed dose at voxel level was accounted for in the comparison; to this purpose, the voxel-count fluctuation due to Poisson and reconstruction noise was estimated from SPECT images of a water phantom acquired and reconstructed as patient images. NoAC_SC images led to activity prescriptions not significantly different from the reference AC_SC images; the individual differences (<0.1 GBq for all IEO patients, <0.6 GBq for all but one INT patient) were comparable to the uncertainty affecting the activity measurement. AC_NoSC and NoAC_NoSC images, instead, yielded significantly different activity prescriptions and wider 95% confidence intervals in the Bland-Altman analysis. Concerning the absorbed dose map, AC_NoSC images had the smallest %VNMSE value and the highest fraction of voxels differing by less than 2 standard deviations from AC_SC. The patient relative calibration methodology can compensate for the missing attenuation correction when performing healthy liver pre-treatment dosimetry: safe treatments can be planned even on NoAC_SC images, suggesting activities comparable to AC_SC images. Scatter correction is recommended due to its heavy impact on healthy liver dosimetry. © 2018 American Association of Physicists in Medicine.
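The Bland-Altman comparison used above is straightforward to reproduce. The sketch below computes the bias and 95% limits of agreement between two sets of activity prescriptions; the numbers are hypothetical placeholders, not the study's patient data.

```python
# Bland-Altman agreement between two activity-prescription methods.
import numpy as np

def bland_altman(a, b):
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # bias, 95% limits

noac_sc = [1.52, 2.10, 1.80, 2.45, 1.95]   # GBq, hypothetical
ac_sc   = [1.50, 2.05, 1.83, 2.40, 1.99]   # GBq, hypothetical reference
bias, limits = bland_altman(noac_sc, ac_sc)
print(f"bias = {bias:.3f} GBq, 95% limits of agreement = {limits}")
```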
Barbosa, Daniel J C; Ramos, Jaime; Lima, Carlos S
2008-01-01
Capsule endoscopy is an important tool to diagnose tumor lesions in the small bowel. Capsule endoscopic images possess vital information expressed by color and texture. This paper presents an approach based on the textural analysis of the different color channels, using the wavelet transform to select the bands with the most significant texture information. A new image is then synthesized from the selected wavelet bands through the inverse wavelet transform. The features of each image are based on second-order textural information, and they are used in a classification scheme using a multilayer perceptron neural network. The proposed methodology has been applied to real data taken from capsule endoscopic exams and reached 98.7% sensitivity and 96.6% specificity. These results support the feasibility of the proposed algorithm.
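A minimal sketch of the band-selection idea follows, under stated assumptions: each color channel is decomposed with a 2D wavelet transform, only the most energetic detail band is kept, and second-order (co-occurrence) features are computed on the rebuilt texture image. The wavelet family, the energy-based selection rule, and the feature set are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops

def texture_image(channel, wavelet="db2", level=2):
    """Rebuild an image from the highest-energy wavelet detail band."""
    coeffs = pywt.wavedec2(channel.astype(float), wavelet, level=level)
    details = coeffs[1:]
    energies = [sum(np.sum(d**2) for d in trio) for trio in details]
    keep = int(np.argmax(energies))
    new = [np.zeros_like(coeffs[0])]           # drop the approximation
    for i, trio in enumerate(details):
        new.append(tuple(d if i == keep else np.zeros_like(d) for d in trio))
    return pywt.waverec2(new, wavelet)

def second_order_features(img):
    """Contrast/homogeneity/energy from a gray-level co-occurrence matrix."""
    q = np.uint8(255 * (img - img.min()) / (np.ptp(img) + 1e-9))
    glcm = graycomatrix(q, distances=[1], angles=[0], levels=256, normed=True)
    return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
```

Feature vectors produced this way would then feed a multilayer perceptron classifier, as the abstract describes.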
Evolutionary Computing Methods for Spectral Retrieval
NASA Technical Reports Server (NTRS)
Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna
2009-01-01
A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
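The retrieval loop can be illustrated with a toy simulated-annealing example: a fitness function measures the misfit between an observed spectrum and a synthetic one generated from candidate parameters. The forward model below is a stand-in Gaussian absorption line, not a NASA radiative-transfer code, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(1.0, 2.0, 200)                  # wavelength grid (um)

def forward(params):                             # toy synthetic spectrum
    depth, center, width = params
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / width) ** 2)

observed = forward([0.3, 1.4, 0.05]) + rng.normal(0, 0.005, wl.size)
fitness = lambda p: np.mean((forward(p) - observed) ** 2)

p = np.array([0.1, 1.2, 0.1])                    # initial guess
best, best_f = p.copy(), fitness(p)
T = 1e-3
for _ in range(20000):
    cand = p + rng.normal(0, [0.01, 0.01, 0.005])
    if not (0 < cand[0] < 1 and wl[0] < cand[1] < wl[-1] and cand[2] > 0):
        continue
    df = fitness(cand) - fitness(p)
    if df < 0 or rng.random() < np.exp(-df / T):  # Metropolis acceptance
        p = cand
        if fitness(p) < best_f:
            best, best_f = p.copy(), fitness(p)
    T *= 0.9997                                   # geometric cooling schedule
print(best)   # should approach [0.3, 1.4, 0.05]
```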
Fast localization of optic disc and fovea in retinal images for eye disease screening
NASA Astrophysics Data System (ADS)
Yu, H.; Barriga, S.; Agurto, C.; Echegaray, S.; Pattichis, M.; Zamora, G.; Bauman, W.; Soliz, P.
2011-03-01
Optic disc (OD) and fovea locations are two important anatomical landmarks in automated analysis of retinal disease in color fundus photographs. This paper presents a new, fast, fully automatic optic disc and fovea localization algorithm developed for diabetic retinopathy (DR) screening. The optic disc localization methodology comprises two steps. First, the OD location is identified using template matching and a directional matched filter. To reduce false positives due to bright areas of pathology, we exploit vessel characteristics inside the optic disc. The location of the fovea is estimated as the point of lowest matched filter response within a search area determined by the optic disc location. Second, optic disc segmentation is performed. Based on the detected optic disc location, a fast hybrid level-set algorithm which combines region information and edge gradient to drive the curve evolution is used to segment the optic disc boundary. Extensive evaluation was performed on 1200 images (Messidor) composed of 540 images of healthy retinas, 431 images with DR but no risk of macular edema (ME), and 229 images with DR and risk of ME. The OD localization methodology achieved a 98.3% success rate, while fovea localization achieved a 95% success rate. The average mean absolute distance (MAD) between the OD segmentation algorithm and the "gold standard" is 10.5% of the estimated OD radius. Qualitatively, 97% of the images achieved Excellent to Fair performance for OD segmentation. The segmentation algorithm performs well even on blurred images.
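The first localization step can be sketched with normalized cross-correlation, assuming a mean optic-disc template built from training images; the vessel-based refinement and matched-filter fovea search described above are omitted here.

```python
# Template-matching sketch for coarse optic-disc localization.
import numpy as np
from skimage.feature import match_template

def locate_od(green_channel, od_template):
    """green_channel: 2D fundus channel; od_template: smaller 2D patch."""
    # pad_input=True makes the response map the same size as the image.
    response = match_template(green_channel, od_template, pad_input=True)
    row, col = np.unravel_index(np.argmax(response), response.shape)
    return row, col, response[row, col]   # candidate OD center and score
```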
NASA Technical Reports Server (NTRS)
Franks, Shannon; Masek, Jeffrey G.; Headley, Rachel M.; Gasch, John; Arvidson, Terry
2009-01-01
The Global Land Survey (GLS) 2005 is a cloud-free, orthorectified collection of Landsat imagery acquired during the 2004-2007 epoch intended to support global land-cover and ecological monitoring. Due to the numerous complexities in selecting imagery for the GLS2005, NASA and the U.S. Geological Survey (USGS) sponsored the development of an automated scene selection tool, the Large Area Scene Selection Interface (LASSI), to aid in the selection of imagery for this data set. This innovative approach to scene selection applied a user-defined weighting system to various scene parameters: image cloud cover, image vegetation greenness, choice of sensor, and the ability of the Landsat 7 Scan Line Corrector (SLC)-off pair to completely fill image gaps, among others. The parameters considered in scene selection were weighted according to their relative importance to the data set, along with the algorithm's sensitivity to that weight. This paper describes the methodology and analysis that established the parameter weighting strategy, as well as the post-screening processes used in selecting the optimal data set for GLS2005.
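In the spirit of the weighted scene selection described above, the sketch below scores candidate scenes with user-defined weights. The metric names, weights, and normalizations are hypothetical illustrations; LASSI's actual parameters and scoring details are not reproduced here.

```python
# Illustrative weighted scoring of candidate Landsat scenes.
def scene_score(scene, weights):
    # Each metric is normalized to [0, 1], larger is better (assumed).
    metrics = {
        "clear_sky": 1.0 - scene["cloud_cover"],        # cloud-free fraction
        "greenness": scene["ndvi_peak"],                # vegetation greenness
        "sensor":    1.0 if scene["sensor"] == "L5" else 0.8,
        "gap_fill":  scene.get("slc_off_pair_fill", 1.0),
    }
    return sum(weights[k] * v for k, v in metrics.items())

weights = {"clear_sky": 0.5, "greenness": 0.2, "sensor": 0.1, "gap_fill": 0.2}
candidates = [
    {"cloud_cover": 0.05, "ndvi_peak": 0.7, "sensor": "L7", "slc_off_pair_fill": 0.95},
    {"cloud_cover": 0.20, "ndvi_peak": 0.8, "sensor": "L5"},
]
best = max(candidates, key=lambda s: scene_score(s, weights))
```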
Christodoulou, Asterios; Mikrogeorgis, Georgios; Vouzara, Triantafillia; Papachristou, Konstantinos; Angelopoulos, Christos; Nikolaidis, Nikolaos; Pitas, Ioannis; Lyroudia, Kleoniki
2018-02-15
In this study, the three-dimensional (3D) modification of root canal curvature after application of the Reciproc instrumentation technique was measured using cone beam computed tomography (CBCT) imaging and a special algorithm developed for the 3D measurement of root canal curvature. Thirty extracted upper molars were selected and digital radiographs were taken of each tooth. Root curvature was measured using the Schneider method, and the roots were divided into three groups of 10 each according to their curvature: Group 1 (0°-20°), Group 2 (21°-40°), Group 3 (41°-60°). CBCT imaging was applied to each tooth before and after instrumentation, and the data were examined using a specially developed CBCT image analysis algorithm. Instrumentation with Reciproc led to a decrease of the curvature by 30.23% on average in all groups. The proposed methodology proved able to measure the curvature of the root canal and its 3D modification after instrumentation.
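For reference, the 2D Schneider angle used for the initial grouping is the angle between a line along the coronal canal axis and a line from the point where the canal begins to curve to the apex. A small sketch with hypothetical pixel coordinates:

```python
import numpy as np

def schneider_angle(p_orifice, p_curve, p_apex):
    """Angle (degrees) between the coronal-axis line and the curve-to-apex line."""
    v1 = np.asarray(p_curve, float) - np.asarray(p_orifice, float)
    v2 = np.asarray(p_apex, float) - np.asarray(p_curve, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical points picked on a radiograph (row, col):
print(schneider_angle((100, 40), (160, 45), (200, 80)))
```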
Functional imaging with low-resolution brain electromagnetic tomography (LORETA): a review.
Pascual-Marqui, R D; Esslen, M; Kochi, K; Lehmann, D
2002-01-01
This paper reviews several recent publications that have successfully used the functional brain imaging method known as LORETA. Emphasis is placed on the electrophysiological and neuroanatomical basis of the method, on the localization properties of the method, and on the validation of the method in real experimental human data. Papers that criticize LORETA are briefly discussed. LORETA publications in the 1994-1997 period based localization inference on images of raw electric neuronal activity. In 1998, a series of papers appeared that based localization inference on the statistical parametric mapping methodology applied to high-time resolution LORETA images. Starting in 1999, quantitative neuroanatomy was added to the methodology, based on the digitized Talairach atlas provided by the Brain Imaging Centre, Montreal Neurological Institute. The combination of these methodological developments has placed LORETA at a level that compares favorably to the more classical functional imaging methods, such as PET and fMRI.
Blackboard architecture for medical image interpretation
NASA Astrophysics Data System (ADS)
Davis, Darryl N.; Taylor, Christopher J.
1991-06-01
There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.
Cognitive approaches for patterns analysis and security applications
NASA Astrophysics Data System (ADS)
Ogiela, Marek R.; Ogiela, Lidia
2017-08-01
This paper presents new opportunities for developing innovative solutions for semantic pattern classification and visual cryptography based on cognitive and bio-inspired approaches. Such techniques can be used to evaluate the meaning of analyzed patterns or encrypted information, and allow that meaning to be incorporated into the classification task or encryption process. They also allow crypto-biometric solutions to extend personalized cryptography methodologies based on visual pattern analysis. In particular, the application of cognitive information systems for semantic analysis of different patterns is presented, and a novel application of such systems for visual secret sharing is described. Visual shares for divided information can be created based on a threshold procedure, which may depend on personal abilities to recognize image details visible on the divided images.
Robinson, Jonathan P; Lumley, Philip J; Claridge, Ela; Cooper, Paul R; Grover, Liam M; Williams, Richard L; Walmsley, A Damien
2012-11-01
MicroCT allows the complex canal network of teeth to be mapped but does not readily distinguish between structural tissue (dentine) and the debris generated during cleaning. The aim was to introduce a validated approach for identifying debris following routine instrumentation and disinfection. The mesial canals of 12 mandibular molars were instrumented and irrigated with EDTA and NaOCl. MicroCT images acquired before and after instrumentation were assessed qualitatively and quantitatively. Debris in the canal space was identified through morphological image analysis and superimposition of the before and after images. This revealed that removal of debris is hindered by protrusions and micro-canals within the tooth that create areas inaccessible to the irrigant. Although the analytical methodology did provide measurements of the debris produced, biological differences between canals introduced variance. Both irrigants reduced debris compared to the control, with EDTA producing a decrease and NaOCl a further decrease. However, anatomical variation did not allow definitive conclusions on which irrigant was best to use, although both reduced debris build-up. This work presents a new approach for distinguishing between debris and structural inorganic tissue in the root canals of teeth. The application may prove useful in other calcified tissue shape determination. Remaining debris may contain bacteria and obstruct the flow of irrigating solutions into lateral canal anatomy. This new approach for detecting the amount of remaining debris in canal systems following instrumentation provides a clearer methodology for the identification of such debris. Copyright © 2012 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)
2006-01-01
High resolution B-mode ultrasound images of the common carotid artery are obtained with an ultrasound transducer using a standardized methodology. Subjects are supine with the head counter-rotated 45 degrees using a head pillow. The jugular vein and carotid artery are located and positioned in a vertically stacked orientation. The transducer is rotated 90 degrees around the centerline of the transverse image of the stacked structure to obtain a longitudinal image while maintaining the vessels in a stacked position. A computerized methodology assists operators to accurately replicate images obtained over several spaced-apart examinations. The methodology utilizes a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time live ultrasound image from the current examination is displayed next to it on the opposite side. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier examination. Utilizing this methodology for measurement of vascular dimensions such as carotid arterial IMT and diameter, the coefficient of variation is substantially reduced, to values of approximately 1.0% to 1.25%. All images contain anatomical landmarks for reproducing probe angulation, including visualization of the carotid bulb and stacking of the jugular vein above the carotid artery; initial instrumentation settings used at a baseline measurement are maintained during all follow-up examinations.
Spectral imaging: principles and applications.
Garini, Yuval; Young, Ian T; McNamara, George
2006-08-01
Spectral imaging extends the capabilities of biological and clinical studies to simultaneously study multiple features such as organelles and proteins, qualitatively and quantitatively. Spectral imaging combines two well-known scientific methodologies, namely spectroscopy and imaging, to provide a new advantageous tool. The need to measure the spectrum at each point of the image requires combining dispersive optics with the more common imaging equipment, and introduces constraints as well. The principles of spectral imaging and a few representative applications are described. Spectral imaging analysis is necessary because the complex data structure cannot be analyzed visually. A few of the algorithms are discussed with emphasis on their usage for different experimental modes (fluorescence and bright field). Finally, spectral imaging, like any method, should be evaluated in light of its advantages for specific applications, a selection of which is described. Spectral imaging is a relatively new technique and its full potential is yet to be exploited. Nevertheless, several applications have already shown its potential. (c) 2006 International Society for Analytical Cytology.
User Oriented Platform for Data Analytics in Medical Imaging Repositories.
Valerio, Miguel; Godinho, Tiago Marques; Costa, Carlos
2016-01-01
The production of medical imaging studies and associated data has been growing in the last decades. Their primary use is to support medical diagnosis and treatment processes. However, the secondary use of the tremendous amount of stored data is generally more limited. Nowadays, medical imaging repositories have turned into rich databanks holding not only the images themselves, but also a wide range of metadata related to the medical practice. Exploring these repositories through data analysis and business intelligence techniques has the potential of increasing the efficiency and quality of the medical practice. Nevertheless, the continuous production of tremendous amounts of data makes their analysis difficult by conventional approaches. This article proposes a novel automated methodology to derive knowledge from medical imaging repositories that does not disrupt the regular medical practice. Our method is able to apply statistical analysis and business intelligence techniques directly on top of live institutional repositories. It is a Web-based solution that provides extensive dashboard capabilities, including complete charting and reporting options, combined with data mining components. Moreover, it enables the operator to set a wide multitude of query parameters and operators through the use of an intuitive graphical interface.
Cancer Biotechnology | Center for Cancer Research
Biotechnology advances continue to underscore the need to educate NCI fellows in new methodologies. The Cancer Biotechnology course will be held on the NCI-Frederick campus on January 29, 2016 (Bldg. 549, Main Auditorium) and the course will be repeated on the Bethesda campus on February 9, 2016 (Natcher Balcony C). The latest advances in DNA, protein and image analysis will
Joint PET-MR respiratory motion models for clinical PET motion correction
NASA Astrophysics Data System (ADS)
Manber, Richard; Thielemans, Kris; Hutton, Brian F.; Wan, Simon; McClelland, Jamie; Barnes, Anna; Arridge, Simon; Ourselin, Sébastien; Atkinson, David
2016-09-01
Patient motion due to respiration can lead to artefacts and blurring in positron emission tomography (PET) images, in addition to quantification errors. The integration of PET with magnetic resonance (MR) imaging in PET-MR scanners provides complementary clinical information, and allows the use of high spatial resolution and high contrast MR images to monitor and correct motion-corrupted PET data. In this paper we build on previous work to form a methodology for respiratory motion correction of PET data, and show it can improve PET image quality whilst having minimal impact on clinical PET-MR protocols. We introduce a joint PET-MR motion model, using only 1 min per PET bed position of simultaneously acquired PET and MR data to provide a respiratory motion correspondence model that captures inter-cycle and intra-cycle breathing variations. In the model setup, 2D multi-slice MR provides the dynamic imaging component, and PET data, via low spatial resolution framing and principal component analysis, provides the model surrogate. We evaluate different motion models (1D and 2D linear, and 1D and 2D polynomial) by computing model-fit and model-prediction errors on dynamic MR images on a data set of 45 patients. Finally we apply the motion model methodology to 5 clinical PET-MR oncology patient datasets. Qualitative PET reconstruction improvements and artefact reduction are assessed with visual analysis, and quantitative improvements are calculated using standardised uptake value (SUVpeak and SUVmax) changes in avid lesions. We demonstrate the capability of a joint PET-MR motion model to predict respiratory motion by showing significantly improved image quality of PET data acquired before the motion model data. The method can be used to incorporate motion into the reconstruction of any length of PET acquisition, with only 1 min of extra scan time, and with no external hardware required.
Application of atomic force microscopy as a nanotechnology tool in food science.
Yang, Hongshun; Wang, Yifen; Lai, Shaojuan; An, Hongjie; Li, Yunfei; Chen, Fusheng
2007-05-01
Atomic force microscopy (AFM) provides a method for detecting nanoscale structural information. First, this review explains the fundamentals of AFM, including principles, manipulation, and analysis. Applications of AFM in food science and technology research are then reported, including qualitative macromolecule and polymer imaging, complicated or quantitative structure analysis, molecular interaction, molecular manipulation, surface topography, and nanofood characterization. The results suggest that AFM can provide insightful knowledge of food properties, and that AFM analysis can be used to illustrate some mechanisms of property changes during processing and storage. However, the current difficulty in applying AFM to food research is the lack of appropriate methodology for different food systems. A better understanding of AFM technology and the development of corresponding methodologies for complicated food systems would lead to a more in-depth understanding of food properties at the macromolecular level and enlarge their applications. The AFM results could greatly improve food processing and storage technologies.
Luster measurements of lips treated with lipstick formulations.
Yadav, Santosh; Issa, Nevine; Streuli, David; McMullen, Roger; Fares, Hani
2011-01-01
In this study, digital photography in combination with image analysis was used to measure the luster of several lipstick formulations containing varying amounts and types of polymers. A weighed amount of lipstick was applied to a mannequin's lips and the mannequin was illuminated by a uniform beam of a white light source. Digital images of the mannequin were captured with a high-resolution camera and the images were analyzed using image analysis software. Luster analysis was performed using the Stamm (L(Stamm)) and Reich-Robbins (L(R-R)) luster parameters. Statistical analysis was performed on each luster parameter (L(Stamm) and L(R-R)), peak height, and peak width. Peak heights for the lipstick formulations containing 11% and 5% VP/eicosene copolymer were statistically different from those of the control. The L(Stamm) and L(R-R) parameters for the treatment containing 11% VP/eicosene copolymer were statistically different from those of the control. Based on the results obtained in this study, we are able to determine whether a polymer is a good pigment dispersant and contributes to the visually detected shine of a lipstick upon application. The methodology presented in this paper could serve as a tool for investigators to screen their ingredients for shine in lipstick formulations.
Chemical Applications of a Programmable Image Acquisition System
NASA Astrophysics Data System (ADS)
Ogren, Paul J.; Henry, Ian; Fletcher, Steven E. S.; Kelly, Ian
2003-06-01
Image analysis is widely used in chemistry, both for rapid qualitative evaluations using techniques such as thin layer chromatography (TLC) and for quantitative purposes such as well-plate measurements of analyte concentrations or fragment-size determinations in gel electrophoresis. This paper describes a programmable system for image acquisition and processing that is currently used in the laboratories of our organic and physical chemistry courses. It has also been used in student research projects in analytical chemistry and biochemistry. The potential range of applications is illustrated by brief presentations of four examples: (1) using well-plate optical transmission data to construct a standard concentration absorbance curve; (2) the quantitative analysis of acetaminophen in Tylenol and acetylsalicylic acid in aspirin using TLC with fluorescence detection; (3) the analysis of electrophoresis gels to determine DNA fragment sizes and amounts; and, (4) using color change to follow reaction kinetics. The supplemental material in JCE Online contains information on two additional examples: deconvolution of overlapping bands in protein gel electrophoresis, and the recovery of data from published images or graphs. The JCE Online material also presents additional information on each example, on the system hardware and software, and on the data analysis methodology.
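The well-plate example (1) rests on the Beer-Lambert relation A = -log10(I/I0), where I is the mean pixel intensity over a sample well and I0 over a blank. A minimal sketch, with hypothetical well coordinates:

```python
import numpy as np

def well_absorbance(image, well_slice, blank_slice):
    """Absorbance of one well relative to a blank well in the same image."""
    i_well = image[well_slice].mean()
    i_blank = image[blank_slice].mean()
    return -np.log10(i_well / i_blank)

# Example usage on a grayscale transmission image (coordinates hypothetical):
# A = well_absorbance(img, np.s_[120:160, 200:240], np.s_[120:160, 40:80])
# Plotting A against known standard concentrations gives the calibration curve.
```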
Time-varying bispectral analysis of visually evoked multi-channel EEG
NASA Astrophysics Data System (ADS)
Chandran, Vinod
2012-12-01
Theoretical foundations of higher order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals using a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are considered as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can be estimated reliably with known levels at which it is significantly different from zero and can be tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronization at different regions of the brain as the neural response develops. The methodology is applied to analyze evoked EEG responses to flash visual stimuli presented to the left and right eye separately. The EEG electrode array is segmented based on bicoherence evolution with time using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG, which is less invasive and less expensive than magnetic resonance imaging.
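An ensemble bicoherence estimate at one stimulus offset can be sketched as below; each row of `segments` is one trial's windowed EEG at the same latency. The normalization follows one common convention (others exist in the literature), and the bin indices are the caller's responsibility.

```python
import numpy as np

def bicoherence(segments, f1, f2):
    """Bicoherence at FFT bins (f1, f2); values near 1 indicate
    quadratic phase coupling. segments: (n_trials, n_samples)."""
    X = np.fft.rfft(segments * np.hanning(segments.shape[1]), axis=1)
    triple = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])  # bispectrum terms
    num = np.abs(triple.mean()) ** 2
    den = (np.abs(X[:, f1] * X[:, f2]) ** 2).mean() * (np.abs(X[:, f1 + f2]) ** 2).mean()
    return num / den

# e.g., pass the bins nearest 8 Hz to probe alpha-band phase coupling,
# and slide the segment window to track bicoherence versus stimulus offset.
```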
Goudeketting, Seline R; Heinen, Stefan G H; Ünlü, Çağdaş; van den Heuvel, Daniel A F; de Vries, Jean-Paul P M; van Strijen, Marco J; Sailer, Anna M
2017-08-01
To systematically review and meta-analyze the added value of 3-dimensional (3D) image fusion technology in endovascular aortic repair for its potential to reduce contrast media volume, radiation dose, procedure time, and fluoroscopy time. Electronic databases were systematically searched for studies published between January 2010 and March 2016 that included a control group describing 3D fusion imaging in endovascular aortic procedures. Two independent reviewers assessed the methodological quality of the included studies and extracted data on iodinated contrast volume, radiation dose, procedure time, and fluoroscopy time. The contrast use for standard and complex endovascular aortic repairs (fenestrated, branched, and chimney) were pooled using a random-effects model; outcomes are reported as the mean difference with 95% confidence intervals (CIs). Seven studies, 5 retrospective and 2 prospective, involving 921 patients were selected for analysis. The methodological quality of the studies was moderate (median 17, range 15-18). The use of fusion imaging led to an estimated mean reduction in iodinated contrast of 40.1 mL (95% CI 16.4 to 63.7, p=0.002) for standard procedures and a mean 70.7 mL (95% CI 44.8 to 96.6, p<0.001) for complex repairs. Secondary outcome measures were not pooled because of potential bias in nonrandomized data, but radiation doses, procedure times, and fluoroscopy times were lower, although not always significantly, in the fusion group in 6 of the 7 studies. Compared with the control group, 3D fusion imaging is associated with a significant reduction in the volume of contrast employed for standard and complex endovascular aortic procedures, which can be particularly important in patients with renal failure. Radiation doses, procedure times, and fluoroscopy times were reduced when 3D fusion was used.
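One standard way to produce such pooled mean differences is DerSimonian-Laird random-effects pooling, sketched below with placeholder study values (not the reviewed data).

```python
import numpy as np

def pool_random_effects(effects, variances):
    """DerSimonian-Laird pooled mean difference with 95% CI."""
    w = 1.0 / np.asarray(variances, float)
    e = np.asarray(effects, float)
    fixed = np.sum(w * e) / np.sum(w)
    q = np.sum(w * (e - fixed) ** 2)                 # Cochran's Q
    tau2 = max(0.0, (q - (len(e) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_star = 1.0 / (np.asarray(variances, float) + tau2)
    pooled = np.sum(w_star * e) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical per-study mean contrast reductions (mL) and variances:
print(pool_random_effects([35.0, 48.0, 41.0], [120.0, 90.0, 150.0]))
```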
A strategy to optimize CT pediatric dose with a visual discrimination model
NASA Astrophysics Data System (ADS)
Gutierrez, Daniel; Gudinchet, François; Alamo-Maestre, Leonor T.; Bochud, François O.; Verdun, Francis R.
2008-03-01
Technological developments of computed tomography (CT) have led to a drastic increase of its clinical utilization, creating concerns about patient exposure. To better control dose to patients, we propose a methodology to find an objective compromise between dose and image quality by means of a visual discrimination model. A GE LightSpeed-Ultra scanner was used to perform the acquisitions. A QRM 3D low-contrast resolution phantom (QRM, Germany) was scanned using CTDIvol values in the range of 1.7 to 103 mGy. Raw data obtained with the highest CTDIvol were afterwards processed to simulate dose reductions by white noise addition. Noise realism of the simulations was verified by comparing the shape and amplitude of normalized noise power spectra (NNPS) and standard deviation measurements. Patient images were acquired using the Diagnostic Reference Levels (DRL) proposed in Switzerland. Dose reduction was then simulated, as for the QRM phantom, to obtain five different CTDIvol levels, down to 3.0 mGy. Image quality of phantom images was assessed with the Sarnoff JNDmetrix visual discrimination model and compared to an assessment made by means of the ROC methodology, taken as a reference. For patient images a similar approach was taken, but using the Visual Grading Analysis (VGA) method as a reference. A relationship between Sarnoff JNDmetrix and ROC results was established for low contrast detection in phantom images, demonstrating that the Sarnoff JNDmetrix can be used for qualification of images with highly correlated noise. Patient image qualification showed a threshold of conspicuity loss only for children over 35 kg.
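A hedged sketch of the dose-reduction simulation follows, under the common approximation that CT image noise variance scales inversely with CTDIvol; real CT noise is correlated, which is presumably why the study verified its simulations against measured noise power spectra. Parameter values are illustrative.

```python
import numpy as np

def simulate_lower_dose(img_hu, sigma_high, dose_high, dose_low, rng=None):
    """Add white noise so total variance matches the lower-dose level
    (assumes sigma^2 proportional to 1/dose; a simplification)."""
    rng = rng or np.random.default_rng()
    sigma_add = sigma_high * np.sqrt(dose_high / dose_low - 1.0)
    return img_hu + rng.normal(0.0, sigma_add, img_hu.shape)

# e.g., degrade a 103 mGy acquisition (measured sigma ~10 HU, hypothetical)
# to a simulated 25 mGy level:
# low = simulate_lower_dose(img, sigma_high=10.0, dose_high=103.0, dose_low=25.0)
```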
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; Chen, H; Mutic, S
Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of the known, ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of the image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
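The core statistical-shape idea can be sketched generically: train a principal-component model on flattened organ contours (one row per patient), then sample new shapes as the mean plus bounded weights on the leading modes. This is a generic point-distribution-model sketch, not the GAD implementation; the data layout and the uniform +/-3 standard-deviation bound are illustrative assumptions.

```python
import numpy as np

def train_shape_model(X):
    """X: (n_patients, n_coords) matrix of flattened organ contours."""
    mean = X.mean(axis=0)
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / (X.shape[0] - 1)          # variance along each mode
    return mean, vt, var

def sample_shape(mean, vt, var, n_modes=5, rng=None, limit=3.0):
    """Generate a new plausible shape within statistically sound bounds."""
    rng = rng or np.random.default_rng()
    b = rng.uniform(-limit, limit, n_modes) * np.sqrt(var[:n_modes])
    return mean + b @ vt[:n_modes]
```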
Teng, Santani
2017-01-01
In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044019
NASA Astrophysics Data System (ADS)
Belfort, Benjamin; Weill, Sylvain; Lehmann, François
2017-04-01
A novel, non-invasive imaging technique that determines 2D maps of water content in unsaturated porous media is presented. This method directly relates digitally measured intensities to the water content of the porous medium. It requires the classical image analysis steps, i.e., normalization, filtering, background subtraction, scaling and calibration. The main advantages of this approach are that no calibration experiment is needed and that no tracer or dye is injected into the flow tank. The procedure enables effective processing of a large number of photographs and thus produces 2D water content maps at high temporal resolution. A drainage / imbibition experiment in a 2D flow tank with inner dimensions of 40 cm x 14 cm x 6 cm (L x W x D) was carried out to validate the methodology. The accuracy of the proposed approach is assessed using numerical simulations with a state-of-the-art computational code that solves the Richards equation. Comparison of the cumulative mass leaving and entering the flow tank, and of the water content maps produced by the photographic measurement technique and the numerical simulations, demonstrates the efficiency and high accuracy of the proposed method for investigating vadose zone flow processes. Application examples in a larger flow tank with various boundary conditions are finally presented to illustrate the potential of the methodology.
Collaborative Initiative on Fetal Alcohol Spectrum Disorders: Methodology of Clinical Projects
Mattson, Sarah N.; Foroud, Tatiana; Sowell, Elizabeth R.; Jones, Kenneth Lyons; Coles, Claire D.; Fagerlund, Åse; Autti-Rämö, Ilona; May, Philip A.; Adnams, Colleen M.; Konovalova, Valentina; Wetherill, Leah; Arenson, Andrew D.; Barnett, William K.; Riley, Edward P.
2009-01-01
The Collaborative Initiative on Fetal Alcohol Spectrum Disorders (CIFASD) was created in 2003 to further understanding of fetal alcohol spectrum disorders. Clinical and basic science projects collect data across multiple sites using standardized methodology. This paper describes the methodology being used by the clinical projects that pertain to assessment of children and adolescents. Domains being addressed are dysmorphology, neurobehavior, 3D facial imaging, and brain imaging. PMID:20036488
Automated fluorescent microscopic image analysis of PTBP1 expression in glioma
Becker, Aline; Elder, Brad; Puduvalli, Vinay; Winter, Jessica; Gurcan, Metin
2017-01-01
Multiplexed immunofluorescent testing has not entered into diagnostic neuropathology due to the presence of several technical barriers, among them autofluorescence. This study presents the implementation of a methodology capable of overcoming the visual challenges of fluorescent microscopy for diagnostic neuropathology by using automated digital image analysis, with the long-term goal of providing unbiased quantitative analyses of multiplexed biomarkers for solid tissue neuropathology. In this study, we validated PTBP1, a putative biomarker for glioma, and tested the extent to which immunofluorescent microscopy combined with automated and unbiased image analysis would permit the utility of PTBP1 as a biomarker to distinguish diagnostically challenging surgical biopsies. As a paradigm, we utilized second resections from patients diagnosed either with reactive brain changes (pseudoprogression) or with recurrent glioblastoma (true progression). Our image analysis workflow was capable of removing background autofluorescence and permitted quantification of DAPI-PTBP1 positive cells, PTBP1-positive nuclei, and the mean intensity value of PTBP1 signal in cells. Traditional pathological interpretation was unable to distinguish between groups due to unacceptably high discordance rates amongst expert neuropathologists. Our data demonstrated that recurrent glioblastoma showed more DAPI-PTBP1 positive cells and a higher mean intensity value of PTBP1 signal compared to resections from second surgeries that showed only reactive gliosis. Our work demonstrates the potential of utilizing automated image analysis to overcome the challenges of implementing fluorescent microscopy in diagnostic neuropathology. PMID:28282372
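A minimal sketch of the quantification step follows: threshold the DAPI channel to find nuclei, then count those whose mean PTBP1 signal exceeds a cutoff. The threshold choices are illustrative, and the study's autofluorescence-removal stage is assumed to have run beforehand.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def count_positive(dapi, ptbp1, ptbp1_cutoff):
    """Count nuclei whose mean PTBP1 intensity exceeds the cutoff."""
    nuclei = label(dapi > threshold_otsu(dapi))       # segment DAPI nuclei
    positive, intensities = 0, []
    for region in regionprops(nuclei, intensity_image=ptbp1):
        intensities.append(region.mean_intensity)
        if region.mean_intensity > ptbp1_cutoff:
            positive += 1
    mean_signal = float(np.mean(intensities)) if intensities else 0.0
    return positive, mean_signal
```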
Optimisation and evaluation of hyperspectral imaging system using machine learning algorithm
NASA Astrophysics Data System (ADS)
Suthar, Gajendra; Huang, Jung Y.; Chidangil, Santhosh
2017-10-01
Hyperspectral imaging (HSI), also called imaging spectrometry, originated in remote sensing. Hyperspectral imaging is an emerging imaging modality for medical applications, especially in disease diagnosis and image-guided surgery. HSI acquires a three-dimensional dataset called a hypercube, with two spatial dimensions and one spectral dimension. The spatially resolved spectral imaging obtained by HSI provides diagnostic information about the object's physiology, morphology, and composition. The present work involves testing and evaluating the performance of a hyperspectral imaging system. The methodology involved manually taking reflectance measurements of the object across many images or scans. The objects used for the evaluation of the system were cabbage and tomato. The data were then converted to the required format and analyzed using a machine learning algorithm. The machine learning algorithms applied were able to distinguish between the objects present in the hypercube obtained by the scan. It was concluded from the results that the system was working as expected, as shown by the distinct spectra obtained using the machine-learning algorithm.
NASA Astrophysics Data System (ADS)
Huang, Chao; Nie, Liming; Schoonover, Robert W.; Guo, Zijian; Schirra, Carsten O.; Anastasio, Mark A.; Wang, Lihong V.
2012-06-01
A challenge in photoacoustic tomography (PAT) brain imaging is to compensate for aberrations in the measured photoacoustic data due to their propagation through the skull. By use of information regarding the skull morphology and composition obtained from adjunct x-ray computed tomography image data, we developed a subject-specific imaging model that accounts for such aberrations. A time-reversal-based reconstruction algorithm was employed with this model for image reconstruction. The image reconstruction methodology was evaluated in experimental studies involving phantoms and monkey heads. The results establish that our reconstruction methodology can effectively compensate for skull-induced acoustic aberrations and improve image fidelity in transcranial PAT.
NASA Astrophysics Data System (ADS)
Wood, M.; Neal, J. C.; Hostache, R.; Corato, G.; Chini, M.; Giustarini, L.; Matgen, P.; Wagener, T.; Bates, P. D.
2015-12-01
Synthetic Aperture Radar (SAR) satellites are capable of all-weather day and night observations that can discriminate between land and smooth open water surfaces over large scales. Because of this, there has been much interest in the use of SAR satellite data to improve our understanding of water processes, in particular fluvial flood inundation mechanisms. Past studies show that integrating SAR-derived data with hydraulic models can improve simulations of flooding. However, while much of this work focusses on improving model channel roughness values or inflows in ungauged catchments, improvement of model bathymetry is often overlooked. The provision of good bathymetric data is critical to the performance of hydraulic models, but there are only a small number of ways to obtain bathymetry information where no direct measurements exist. Spatially distributed river depths are also rarely available. We present a methodology for calibration of model average channel depth and roughness parameters concurrently using SAR images of flood extent and a Sub-Grid model utilising hydraulic geometry concepts. The methodology uses real data from the European Space Agency's archive of ENVISAT[1] Wide Swath Mode images of the River Severn between Worcester and Tewkesbury during flood peaks between 2007 and 2010. Historic ENVISAT WSM images are currently free and easy to access from the archive, but the methodology can be applied with any available SAR data. The approach makes use of the SAR image processing algorithm of Giustarini[2] et al. (2013) to generate binary flood maps. A unique feature of the calibration methodology is to also use parameter 'identifiability' to locate the parameters with higher accuracy from a pre-assigned range (adopting the DYNIA method proposed by Wagener[3] et al., 2003). [1] https://gpod.eo.esa.int/services/ [2] Giustarini. 2013. 'A Change Detection Approach to Flood Mapping in Urban Areas Using TerraSAR-X'. IEEE Transactions on Geoscience and Remote Sensing, vol. 51, no. 4. [3] Wagener. 2003. 'Towards reduced uncertainty in conceptual rainfall-runoff modelling: Dynamic identifiability analysis'. Hydrol. Process. 17, 455-476.
A Hitchhiker's Guide to Functional Magnetic Resonance Imaging
Soares, José M.; Magalhães, Ricardo; Moreira, Pedro S.; Sousa, Alexandre; Ganz, Edward; Sampaio, Adriana; Alves, Victor; Marques, Paulo; Sousa, Nuno
2016-01-01
Functional Magnetic Resonance Imaging (fMRI) studies have become increasingly popular both with clinicians and researchers as they are capable of providing unique insights into brain functions. However, multiple technical considerations (ranging from specifics of paradigm design to imaging artifacts, complex protocol definition, and a multitude of processing and analysis methods, as well as intrinsic methodological limitations) must be considered and addressed in order to optimize fMRI analysis and to arrive at the most accurate and grounded interpretation of the data. In practice, the researcher/clinician must choose, from many available options, the most suitable software tool for each stage of the fMRI analysis pipeline. Herein we provide a straightforward guide designed to address, for each of the major stages, the techniques, and tools involved in the process. We have developed this guide both to help those new to the technique to overcome the most critical difficulties in its use, as well as to serve as a resource for the neuroimaging community. PMID:27891073
Beam uniformity analysis of infrared laser illuminators
NASA Astrophysics Data System (ADS)
Allik, Toomas H.; Dixon, Roberta E.; Proffitt, R. Patrick; Fung, Susan; Ramboyong, Len; Soyka, Thomas J.
2015-02-01
Uniform near-infrared (NIR) and short-wave infrared (SWIR) illuminators are desired for low-ambient-light detection, recognition, and identification in military applications. Factors that contribute to laser illumination image degradation are high-frequency coherent laser speckle and low-frequency nonuniformities created by the laser or external laser cavity optics. Laser speckle analysis and beam uniformity improvements have been independently studied by numerous authors, but analysis to separate these two effects from a single measurement technique has not been published. In this study, profiles of compact diode-laser NIR and SWIR illuminators were measured and evaluated. Digital 12-bit images were recorded with a flat-field-calibrated InGaAs camera at F/1.4 and F/16. Separating beam uniformity components from laser speckle was approximated by filtering the original image. The goal of this paper is to identify and quantify the beam quality variation of illumination prototypes, raise awareness of its impact on range performance modeling, and develop measurement techniques and methodologies for military, industry, and vendors of active sources.
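The filtering-based separation can be sketched as follows: a heavy low-pass filter estimates the slow beam-uniformity envelope, and the residual carries the high-frequency speckle. The Gaussian cutoff is an illustrative choice, not the paper's filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def beam_metrics(img, sigma_px=25):
    """Split a beam image into low-frequency envelope and speckle residual."""
    img = img.astype(float)
    envelope = gaussian_filter(img, sigma_px)           # slow beam shape
    speckle = img - envelope                            # high-frequency residual
    uniformity = envelope.std() / envelope.mean()       # beam nonuniformity
    speckle_contrast = speckle.std() / envelope.mean()  # speckle level
    return uniformity, speckle_contrast
```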
Retinal image registration for eye movement estimation.
Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan
2015-01-01
This paper describes a novel methodology for eye fixation measurement using a unique video-ophthalmoscope setup and an advanced image registration approach. The representation of eye movements via the Poincare plot is also introduced. The properties, limitations, and perspectives of this methodology are discussed.
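For orientation, a Poincare plot charts each sample of a movement trace against the next one; the scatter is commonly summarized by the SD1/SD2 descriptors sketched below. The input here is assumed to be a per-frame horizontal shift estimated by the registration step; the abstract does not specify these details.

```python
import numpy as np

def poincare_descriptors(x):
    """SD1/SD2 of the Poincare scatter of consecutive samples of x."""
    x = np.asarray(x, float)
    a, b = x[:-1], x[1:]                          # the (x_n, x_{n+1}) pairs
    sd1 = np.std((b - a) / np.sqrt(2), ddof=1)    # short-term variability
    sd2 = np.std((b + a) / np.sqrt(2), ddof=1)    # long-term variability
    return sd1, sd2
```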
Infrared spectroscopy and spectroscopic imaging in forensic science.
Ewing, Andrew V; Kazarian, Sergei G
2017-01-16
Infrared spectroscopy and spectroscopic imaging are robust, label-free and inherently non-destructive methods with a high chemical specificity and sensitivity that are frequently employed in forensic science research and practice. This review aims to discuss the applications and recent developments of these methodologies in this field. Furthermore, the use of the recently emerged Fourier transform infrared (FT-IR) spectroscopic imaging in transmission, external reflection and Attenuated Total Reflection (ATR) modes is summarised with relevance and potential for forensic science applications. This spectroscopic imaging approach provides the opportunity to obtain the chemical composition of fingermarks and information about possible contaminants deposited at a crime scene. Research that demonstrates the great potential of these techniques for analysis of fingerprint residues, explosive materials and counterfeit drugs is reviewed. The implications of this research for the examination of different materials are considered, along with an outlook on possible future research avenues for the application of vibrational spectroscopic methods to the analysis of forensic samples.
Peregrina-Barreto, Hayde; Perez-Corona, Elizabeth; Rangel-Magdaleno, Jose; Ramos-Garcia, Ruben; Chiu, Roger; Ramirez-San-Juan, Julio C
2017-06-01
Visualization of deep blood vessels in speckle images is an important task as it is used to analyze the dynamics of the blood flow and the health status of biological tissue. Laser speckle imaging is a wide-field optical technique to measure relative blood flow speed based on local speckle contrast analysis. However, it has been reported that this technique is limited to blood vessels at a certain depth (about 300 μm) because of the high scattering of the sample; beyond this depth, the quality of the vessel's image decreases. The use of a representation based on homogeneity values, computed from the co-occurrence matrix, is proposed as it provides an improved vessel definition and its corresponding diameter. Moreover, a methodology is proposed for automatic blood vessel location based on kurtosis analysis. Results were obtained from different skin phantoms, showing that it is possible to identify the vessel region for different morphologies, even up to 900 μm in depth.
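A homogeneity representation can be sketched with a sliding-window gray-level co-occurrence matrix; window size, quantization, and angles below are illustrative parameters, not the paper's settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def homogeneity_map(img, win=15, levels=32):
    """Per-pixel GLCM homogeneity over a sliding window."""
    q = np.uint8((levels - 1) * (img - img.min()) / (np.ptp(img) + 1e-9))
    half = win // 2
    out = np.zeros_like(img, dtype=float)
    for r in range(half, img.shape[0] - half):
        for c in range(half, img.shape[1] - half):
            patch = q[r - half:r + half + 1, c - half:c + half + 1]
            glcm = graycomatrix(patch, [1], [0, np.pi / 2],
                                levels=levels, normed=True)
            out[r, c] = graycoprops(glcm, "homogeneity").mean()
    return out
```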
Shaikh, Faiq; Franc, Benjamin; Allen, Erastus; Sala, Evis; Awan, Omer; Hendrata, Kenneth; Halabi, Safwan; Mohiuddin, Sohaib; Malik, Sana; Hadley, Dexter; Shrestha, Rasu
2018-03-01
Enterprise imaging has channeled various technological innovations to the field of clinical radiology, ranging from advanced imaging equipment and postacquisition iterative reconstruction tools to image analysis and computer-aided detection tools. More recently, the advancement in the field of quantitative image analysis coupled with machine learning-based data analytics, classification, and integration has ushered in the era of radiomics, a paradigm shift that holds tremendous potential in clinical decision support as well as drug discovery. However, there are important issues to consider to incorporate radiomics into a clinically applicable system and a commercially viable solution. In this two-part series, we offer insights into the development of the translational pipeline for radiomics from methodology to clinical implementation (Part 1) and from that point to enterprise development (Part 2). In Part 2 of this two-part series, we study the components of the strategy pipeline, from clinical implementation to building enterprise solutions. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Felipe-Sesé, Luis; López-Alba, Elías; Hannemann, Benedikt; Schmeer, Sebastian; Diaz, Francisco A
2017-06-28
A quasistatic indentation numerical analysis of a round-section specimen made of soft material has been performed and validated with a full-field experimental technique, i.e., Digital Image Correlation 3D. The contact experiment consisted of loading a 25 mm diameter rubber cylinder up to a 5 mm indentation and then unloading. Experimental strain fields measured at the surface of the specimen during the experiment were compared with those obtained from two numerical analyses employing two different hyperelastic material models. The comparison was performed using a new Image Decomposition methodology that makes possible a direct comparison of full-field data independently of their scale or orientation. Numerical results show a good level of agreement with those measured during the experiments. However, since image decomposition allows the differences to be quantified, it was observed that one of the adopted material models yields smaller differences from the experimental results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
Benefits of utilizing CellProfiler as a characterization tool for U–10Mo nuclear fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.; Douglas, J.; Patterson, L.
2015-07-15
Automated image processing techniques have the potential to aid in the performance evaluation of nuclear fuels by eliminating judgment calls that may vary from person-to-person or sample-to-sample. Analysis of in-core fuel performance is required for design and safety evaluations related to almost every aspect of the nuclear fuel cycle. This study presents a methodology for assessing the quality of uranium–molybdenum fuel images and describes image analysis routines designed for the characterization of several important microstructural properties. The analyses are performed in CellProfiler, an open-source program designed to enable biologists without training in computer vision or programming to automatically extract cellular measurements from large image sets. The quality metric scores an image based on three parameters: the illumination gradient across the image, the overall focus of the image, and the fraction of the image that contains scratches. The metric presents the user with the ability to ‘pass’ or ‘fail’ an image based on a reproducible quality score. Passable images may then be characterized through a separate CellProfiler pipeline, which enlists a variety of common image analysis techniques. The results demonstrate the ability to reliably pass or fail images based on the illumination, focus, and scratch fraction of the image, followed by automatic extraction of morphological data with respect to fission gas voids, interaction layers, and grain boundaries. Highlights: • A technique is developed to score U–10Mo FIB-SEM image quality using CellProfiler. • The pass/fail metric is based on image illumination, focus, and area scratched. • Automated image analysis is performed in pipeline fashion to characterize images. • Fission gas void, interaction layer, and grain boundary coverage data is extracted. • Preliminary characterization results demonstrate consistency of the algorithm.
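As a rough illustration of the three-parameter quality metric described above, the sketch below scores an image on illumination gradient (plane fit), focus (variance of the Laplacian), and scratch fraction. The thresholds, weights, and the scratch proxy are hypothetical; the published pipeline is built in CellProfiler and is not reproduced here.

```python
# Hedged sketch of a pass/fail image-quality score, assuming a 2-D
# grayscale array `img`. Tolerances are illustrative, not published values.
import numpy as np
from scipy.ndimage import laplace

def quality_score(img, grad_tol=0.2, focus_tol=1e-3, scratch_tol=0.05):
    img = img.astype(float) / img.max()          # normalise to [0, 1]
    h, w = img.shape
    # Illumination gradient: fit a plane and measure its relief.
    yy, xx = np.mgrid[0:h, 0:w]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(h * w)])
    coef, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    plane = (A @ coef).reshape(h, w)
    grad = plane.max() - plane.min()
    # Focus: variance of the Laplacian (higher = sharper).
    focus = laplace(img).var()
    # Scratch fraction: crudely proxied by strong dark residuals
    # from the illumination plane (a stand-in for scratch detection).
    scratch = np.mean((img - plane) < -0.2)
    passed = (grad < grad_tol) and (focus > focus_tol) and (scratch < scratch_tol)
    return passed, dict(gradient=grad, focus=focus, scratch=scratch)
```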
Mining biomedical images towards valuable information retrieval in biomedical and life sciences
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which has created a need for bioimaging platforms for feature extraction and analysis of the text and content in biomedical images, to be exploited in implementing effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578
Determination of Shed Ice Particle Size Using High Speed Digital Imaging
NASA Technical Reports Server (NTRS)
Broughton, Howard; Owens, Jay; Sims, James J.; Bond, Thomas H.
1996-01-01
A full scale model of an aircraft engine inlet was tested at NASA Lewis Research Center's Icing Research Tunnel. Simulated natural ice sheds from the engine inlet lip were studied using high speed digital image acquisition and image analysis. Strategic camera placement integrated at the model design phase allowed the study of ice accretion on the inlet lip and the resulting shed ice particles at the aerodynamic interface plane at the rear of the inlet prior to engine ingestion. The resulting digital images were analyzed using commercial and proprietary software to determine the size of the ice particles that could potentially be ingested by the engine during a natural shedding event. A methodology was developed to calibrate the imaging system and ensure consistent and accurate measurements of the ice particles for a wide range of icing conditions.
A graph-based approach to detect spatiotemporal dynamics in satellite image time series
NASA Astrophysics Data System (ADS)
Guttler, Fabio; Ienco, Dino; Nin, Jordi; Teisseire, Maguelonne; Poncelet, Pascal
2017-08-01
Enhancing the frequency of satellite acquisitions represents a key issue for the Earth Observation community nowadays. Repeated observations are crucial for monitoring purposes, particularly when intra-annual processes should be taken into account. Time series of images constitute a valuable source of information in these cases. The goal of this paper is to propose a new methodological framework to automatically detect and extract spatiotemporal information from satellite image time series (SITS). Existing methods dealing with such kinds of data are usually classification-oriented and cannot provide information about evolutions and temporal behaviors. In this paper we propose a graph-based strategy that combines object-based image analysis (OBIA) with data mining techniques. Image objects computed at each individual timestamp are connected across the time series and generate a set of evolution graphs. Each evolution graph is associated with a particular area within the study site and stores information about its temporal evolution. Such information can be explored in depth at the evolution graph scale or used to compare the graphs and supply a general picture at the study site scale. We validated our framework on two study sites located in the South of France involving different types of natural, semi-natural and agricultural areas. The results obtained from a Landsat SITS support the quality of the methodological approach and illustrate how the framework can be employed to extract and characterize spatiotemporal dynamics.
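A minimal sketch of the evolution-graph construction is given below, assuming one segmented label image per acquisition date and a simple area-overlap linking rule; the overlap threshold and the rule itself are illustrative stand-ins for the published strategy.

```python
# Hedged sketch: connect image objects across a time series into
# evolution graphs. `label_images` is a list of 2-D integer label
# arrays, one per date, with 0 as background.
import networkx as nx
import numpy as np

def evolution_graph(label_images, min_overlap=0.3):
    G = nx.DiGraph()
    for t in range(len(label_images) - 1):
        cur, nxt = label_images[t], label_images[t + 1]
        for obj in np.unique(cur):
            if obj == 0:
                continue
            mask = cur == obj
            # Link to every next-date object covering enough of `mask`.
            labels, counts = np.unique(nxt[mask], return_counts=True)
            for lab, cnt in zip(labels, counts):
                if lab != 0 and cnt / mask.sum() >= min_overlap:
                    G.add_edge((t, obj), (t + 1, lab))
    return G

# Weakly connected components of G then play the role of evolution
# graphs, each describing the temporal behaviour of one area.
```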
Teresa E. Jordan
2015-10-22
The files included in this submission contain all data pertinent to the methods and results of this task’s output, which is a cohesive multi-state map of all known potential geothermal reservoirs in our region, ranked by their potential favorability. Favorability is quantified using a new metric, Reservoir Productivity Index, as explained in the Reservoirs Methodology Memo (included in zip file). Shapefile and images of the Reservoir Productivity and Reservoir Uncertainty are included as well.
NASA Astrophysics Data System (ADS)
Haridas, Aswin; Crivoi, Alexandru; Prabhathan, P.; Chan, Kelvin; Murukeshan, V. M.
2017-06-01
The use of carbon fiber-reinforced polymer (CFRP) composite materials in the aerospace industry has greatly improved the load-carrying properties and the design flexibility of aircraft structures. A high strength-to-weight ratio, low thermal conductivity, and a low thermal expansion coefficient give CFRP an edge for applications demanding stringent loading conditions. Specifically, this paper focuses on the behavior of CFRP composites under stringent thermal loads. The properties of composites are largely affected by external thermal loads, especially when the loads are beyond the glass transition temperature, Tg, of the composite. Beyond this point, the composites are subject to prominent changes in mechanical and thermal properties, which may further lead to material decomposition. Furthermore, because thermal damage formation is chaotic, a strict dimension cannot be associated with the formed damage. In this context, this paper focuses on comparing multiple speckle image analysis algorithms to effectively characterize the thermal damage formed on the CFRP specimen. This would provide a fast method for quantifying the extent of heat damage in carbon composites, thus reducing the required inspection time. The image analysis methods used for the comparison include fractal dimensional analysis of the formed speckle pattern and analysis of the number and size of the various connected elements in the binary image.
WE-B-BRC-02: Risk Analysis and Incident Learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fraass, B.
Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology-specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: Description of risk assessment methodologies used in healthcare and industry; Discussion of radiation oncology-specific risk assessment strategies and issues; Evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.
NASA Astrophysics Data System (ADS)
Bosca, Ryan J.; Jackson, Edward F.
2016-01-01
Assessing and mitigating the various sources of bias and variance associated with image quantification algorithms is essential to the use of such algorithms in clinical research and practice. Assessment is usually accomplished with grid-based digital reference objects (DRO) or, more recently, digital anthropomorphic phantoms based on normal human anatomy. Publicly available digital anthropomorphic phantoms can provide a basis for generating realistic model-based DROs that incorporate the heterogeneity commonly found in pathology. Using a publicly available vascular input function (VIF) and digital anthropomorphic phantom of a normal human brain, a methodology was developed to generate a DRO based on the general kinetic model (GKM) that represented realistic and heterogeneously enhancing pathology. GKM parameters were estimated from a deidentified clinical dynamic contrast-enhanced (DCE) MRI exam. This clinical imaging volume was co-registered with a discrete tissue model, and model parameters estimated from clinical images were used to synthesize a DCE-MRI exam that consisted of normal brain tissues and a heterogeneously enhancing brain tumor. An example application of spatial smoothing was used to illustrate potential applications in assessing quantitative imaging algorithms. A voxel-wise Bland-Altman analysis demonstrated negligible differences between the parameters estimated with and without spatial smoothing (using a small radius Gaussian kernel). In this work, we reported an extensible methodology for generating model-based anthropomorphic DROs containing normal and pathological tissue that can be used to assess quantitative imaging algorithms.
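The voxel-wise Bland-Altman comparison reduces to a few lines; the sketch below assumes two co-registered parameter maps (e.g., estimates with and without smoothing) that have already been masked to the tissue of interest. Variable names are illustrative.

```python
# Minimal Bland-Altman sketch for two voxel-wise parameter maps.
import numpy as np

def bland_altman(map_a, map_b):
    a, b = map_a.ravel(), map_b.ravel()
    mean = (a + b) / 2.0          # per-voxel average of the two estimates
    diff = a - b                  # per-voxel difference
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1) # 95% limits of agreement half-width
    return mean, diff, bias, (bias - loa, bias + loa)
```

Plotting `diff` against `mean` with horizontal lines at the bias and the limits of agreement gives the conventional Bland-Altman view of agreement between the two estimates.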
High Throughput Multispectral Image Processing with Applications in Food Science.
Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John
2015-01-01
Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of the implementation of Process Analytical Technology (PAT) in the food industry, image processing can be used not only in the estimation and even prediction of food quality but also in the detection of adulteration. Towards these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity and data reproducibility and to enable low-cost information extraction and faster quality assessment without human intervention. The outcome of the image processing is propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we demonstrate its efficiency and robustness against currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
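A condensed sketch of the GMM segmentation step is shown below, assuming an (H, W, B) multispectral reflectance cube. The variance-based band ranking is a stand-in assumption, not the paper's unsupervised band-selection scheme.

```python
# Hedged sketch: unsupervised multispectral segmentation with a
# Gaussian Mixture Model over a reduced set of bands.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment(cube, n_classes=3, n_bands=5):
    H, W, B = cube.shape
    # Keep the most variable bands (illustrative stand-in for the
    # published band-selection scheme).
    keep = np.argsort(cube.reshape(-1, B).var(axis=0))[-n_bands:]
    X = cube[..., keep].reshape(-1, len(keep))
    gmm = GaussianMixture(n_components=n_classes, covariance_type='full',
                          random_state=0).fit(X)
    return gmm.predict(X).reshape(H, W)   # per-pixel class labels
```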
NASA Astrophysics Data System (ADS)
Davis, Benjamin L.; Berrier, J. C.; Shields, D. W.; Kennefick, J.; Kennefick, D.; Seigar, M. S.; Lacy, C. H. S.; Puerari, I.
2012-01-01
A logarithmic spiral is a prominent feature appearing in a majority of observed galaxies. This feature has long been associated with the traditional Hubble classification scheme, but historically quoted pitch angles of spiral galaxies have been almost exclusively qualitative. We have developed a methodology, utilizing two-dimensional fast Fourier transformations of images of spiral galaxies, in order to isolate and measure the pitch angles of their spiral arms. Our technique provides a quantitative way to measure this morphological feature. This will allow precise comparison of spiral galaxy evolution with other galactic parameters and testing of spiral arm genesis theories. In this work, we detail our image processing and analysis of spiral galaxy images and discuss the robustness of our analysis techniques. The authors gratefully acknowledge support for this work from NASA Grant NNX08AW03A.
Color model comparative analysis for breast cancer diagnosis using H and E stained images
NASA Astrophysics Data System (ADS)
Li, Xingyu; Plataniotis, Konstantinos N.
2015-03-01
Digital cancer diagnosis is a research realm where signal processing techniques are used to analyze and classify color histopathology images. Unlike grayscale image analysis of magnetic resonance imaging or X-ray, the colors in histopathology images convey a large amount of histological information and thus play a significant role in cancer diagnosis. Though color information is widely used in histopathology work, to date there are few studies on color model selection for feature extraction in cancer diagnosis schemes. This paper addresses the problem of color space selection for digital cancer classification using H and E stained images, and investigates the effectiveness of various color models (RGB, HSV, CIE L*a*b*, and the stain-dependent H and E decomposition model) in breast cancer diagnosis. In particular, we build a diagnosis framework as a comparison benchmark and take specific concerns of medical decision systems into account in the evaluation. The evaluation methodologies include feature discriminative power evaluation and final diagnosis performance comparison. Experimentation on a publicly accessible histopathology image set suggests that the H and E decomposition model outperforms the other assessed color spaces. To explain the varying performance of the color spaces, our analysis via mutual information estimation demonstrates that the color components in the H and E model are less dependent, and thus most of the feature discriminative power is collected in one channel instead of being spread out among the channels of the other color spaces.
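For reference, the color representations compared above can be produced with scikit-image as follows; rgb2hed performs the stain-dependent Hematoxylin-Eosin-DAB deconvolution, while the benchmark's feature set and classifier are not shown here.

```python
# Sketch of the color-space conversions compared in the study above,
# assuming `rgb` is a float image in [0, 1] with shape (H, W, 3).
from skimage.color import rgb2hsv, rgb2lab, rgb2hed

def color_representations(rgb):
    return {
        'RGB': rgb,
        'HSV': rgb2hsv(rgb),
        'Lab': rgb2lab(rgb),           # CIE L*a*b*
        'HED': rgb2hed(rgb),           # stain-dependent H&E decomposition
    }
```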
Jeong, Jeong-Won; Shin, Dae C; Do, Synho; Marmarelis, Vasilis Z
2006-08-01
This paper presents a novel segmentation methodology for automated classification and differentiation of soft tissues using multiband data obtained with the newly developed system of high-resolution ultrasonic transmission tomography (HUTT) for imaging biological organs. This methodology extends and combines two existing approaches: the L-level set active contour (AC) segmentation approach and the agglomerative hierarchical k-means approach for unsupervised clustering (UC). To prevent the current iterative minimization AC algorithm from being trapped in a local minimum, we introduce a multiresolution approach that applies the level set functions at successively increasing resolutions of the image data. The resulting AC clusters are subsequently rearranged by the UC algorithm, which seeks the optimal set of clusters yielding the minimum within-cluster distances in the feature space. The presented results from Monte Carlo simulations and experimental animal-tissue data demonstrate that the proposed methodology outperforms other existing methods without depending on heuristic parameters and provides a reliable means for soft tissue differentiation in HUTT images.
Ciraj-Bjelac, Olivera; Faj, Dario; Stimac, Damir; Kosutic, Dusko; Arandjic, Danijela; Brkic, Hrvoje
2011-04-01
The purpose of this study is to investigate the need for, and the possible achievements of, a comprehensive QA programme, and to look at the effects of simple corrective actions on image quality in Croatia and in Serbia. The paper focuses on activities related to the technical and radiological aspects of QA. The methodology consisted of two phases. The aim of the first phase was the initial assessment of mammography practice in terms of image quality, patient dose and equipment performance in a selected number of mammography units in Croatia and Serbia. Subsequently, corrective actions were suggested and implemented, and the same parameters were then re-assessed. Most of the suggested corrective actions were simple, low-cost and possible to implement immediately, as they were related to working habits in the mammography units, such as film processing and darkroom conditions. It has been demonstrated how a simple quantitative assessment of image quality can be used for optimisation purposes. Analysis of image quality parameters such as optical density (OD), gradient and contrast demonstrated general similarities between mammography practices in Croatia and Serbia. The applied methodology should be expanded to a larger number of hospitals and applied on a regular basis. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Automated quantification of pancreatic β-cell mass
Golson, Maria L.; Bush, William S.
2014-01-01
β-Cell mass is a parameter commonly measured in studies of islet biology and diabetes. However, the rigorous quantification of pancreatic β-cell mass using conventional histological methods is a time-consuming process. Rapidly evolving virtual slide technology with high-resolution slide scanners and newly developed image analysis tools has the potential to transform β-cell mass measurement. To test the effectiveness and accuracy of this new approach, we assessed pancreata from normal C57Bl/6J mice and from mouse models of β-cell ablation (streptozotocin-treated mice) and β-cell hyperplasia (leptin-deficient mice), using a standardized systematic sampling of pancreatic specimens. Our data indicate that automated analysis of virtual pancreatic slides is highly reliable and yields results consistent with those obtained by conventional morphometric analysis. This new methodology will allow investigators to dramatically reduce the time required for β-cell mass measurement by automating high-resolution image capture and analysis of entire pancreatic sections. PMID:24760991
From intuition to statistics in building subsurface structural models
Brandenburg, J.P.; Alpak, F.O.; Naruk, S.; Solum, J.
2011-01-01
Experts associated with the oil and gas exploration industry suggest that combining forward trishear models with stochastic global optimization algorithms allows a quantitative assessment of the uncertainty associated with a given structural model. The methodology is applied to incompletely imaged structures related to deepwater hydrocarbon reservoirs, and the results are compared to prior manual palinspastic restorations and borehole data. This methodology is also useful for extending structural interpretations into other areas of limited resolution, such as subsalt, in addition to extrapolating existing data into seismic data gaps. This technique can be used for rapid reservoir appraisal and potentially has other applications in seismic processing, well planning, and borehole stability analysis.
NASA Astrophysics Data System (ADS)
Alperovich, Leonid; Averbuch, Amir; Eppelbaum, Lev; Zheludev, Valery
2013-04-01
Karst areas occupy about 14% of the world's land. Karst terranes of different origin create difficult conditions for building, industrial activity and tourism, and are a source of heightened danger for the environment. Mapping of karst (sinkhole) hazards will clearly be one of the most significant problems of engineering geophysics in the twenty-first century. Taking into account the complexity of geological media, unfavourable environments and the known ambiguity of geophysical data analysis, examination with a single geophysical method may be insufficient. Wavelet methodology as a whole has a significant impact on cardinal problems of geophysical signal processing such as denoising of signals, enhancement of signals, distinguishing of signals with closely related characteristics, and integrated analysis of different geophysical fields (satellite, airborne, earth surface or underground observed data). We developed a three-phase approach to the integrated geophysical localization of subsurface karsts (the same approach could be used for subsequent monitoring of karst dynamics). The first phase consists of modeling to compute the various geophysical effects that characterize karst phenomena. The second phase develops signal processing approaches for analyzing profile or areal geophysical observations. Finally, the third phase integrates these methods in order to create a new method for the combined interpretation of different geophysical data. Our combined geophysical analysis builds on modern developments in wavelet techniques for signal and image processing. The development of this integrated methodology of geophysical field examination will enable recognition of karst terranes even at a small ratio of useful signal to noise in complex geological environments. For analyzing the geophysical data, we used a technique based on an algorithm that characterizes a geophysical image by a limited number of parameters. This set of parameters serves as a signature of the image and is utilized to discriminate images containing a karst cavity (K) from images not containing karst (N). The constructed algorithm consists of the following main phases: (a) collection of the database, (b) characterization of geophysical images, and (c) dimensionality reduction. Each image is then characterized by the histogram of the coherency directions. As a result of the previous steps we obtain two sets, K and N, of signature vectors for images from sections containing a karst cavity and non-karst subsurface, respectively.
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post-processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
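The two core steps, reflection subtraction and fiduciary-mark alignment, are sketched below. Mark detection is reduced to thresholding the brightest blob, which is an illustrative simplification of locating painted fiduciary marks; the threshold is an assumed input.

```python
# Hedged sketch of reflection removal and centroid-based re-alignment
# for PIV frames, assuming 2-D grayscale arrays.
import numpy as np
from scipy import ndimage

def remove_reflection(frames, reflection_img):
    """Subtract the reflection-only image from each PIV frame."""
    return [np.clip(f.astype(float) - reflection_img, 0, None) for f in frames]

def mark_centroid(img, thresh):
    """Centroid of the largest bright blob (stand-in fiduciary mark)."""
    binary = img > thresh
    labels, n = ndimage.label(binary)
    if n == 0:
        raise ValueError("no fiduciary mark found")
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    return ndimage.center_of_mass(img, labels, int(np.argmax(sizes)) + 1)

def align_to_background(frame, background, thresh):
    """Shift a frame so its mark centroid matches the background's."""
    dy, dx = np.subtract(mark_centroid(background, thresh),
                         mark_centroid(frame, thresh))
    return ndimage.shift(frame, (dy, dx))
```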
Effects of image processing on the detective quantum efficiency
NASA Astrophysics Data System (ADS)
Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na
2010-04-01
Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing the methodologies for image quality characterization. However, as the methodologies for such characterizations have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE) are affected by the image processing algorithm. Image performance parameters such as the MTF, NPS, and DQE were evaluated using the international electro-technical commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a white (flat-field) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. The results showed that the modifications considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA = 0 image. This suggests that MUSICA post-processing values affect the image when it is evaluated for image quality. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study could serve as a baseline to evaluate imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE.
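As one concrete ingredient of the MTF/NPS/DQE chain discussed above, the sketch below estimates a 2-D NPS from a flat-field image by averaging periodograms of detrended ROIs. The ROI size, first-order detrending, and pixel pitch are common choices assumed for illustration, not the full IEC-prescribed procedure.

```python
# Hedged sketch: 2-D noise power spectrum estimate from a uniform
# ("white") image. MTF and DQE computation are omitted.
import numpy as np

def nps_2d(flat_image, roi=128, pixel_pitch_mm=0.1):
    h, w = flat_image.shape
    n_rois, spectra = 0, np.zeros((roi, roi))
    for i in range(0, h - roi + 1, roi):
        for j in range(0, w - roi + 1, roi):
            patch = flat_image[i:i + roi, j:j + roi].astype(float)
            patch -= patch.mean()            # simple detrend per ROI
            spectra += np.abs(np.fft.fft2(patch)) ** 2
            n_rois += 1
    # Normalisation in the usual form: pixel area over ROI size.
    return spectra / n_rois * (pixel_pitch_mm ** 2) / (roi * roi)
```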
3D/4D multiscale imaging in acute lymphoblastic leukemia cells: visualizing dynamics of cell death
NASA Astrophysics Data System (ADS)
Sarangapani, Sreelatha; Mohan, Rosmin Elsa; Patil, Ajeetkumar; Lang, Matthew J.; Asundi, Anand
2017-06-01
Quantitative phase detection is a new methodology that provides quantitative information on cellular morphology for monitoring cell status, drug response and toxicity. In this paper, the morphological changes in acute leukemia cells treated with chitosan were detected using d'Bioimager, a robust imaging system. Quantitative phase images of the cells were obtained with numerical analysis. Results show that the average area and optical volume of the chitosan-treated cells are significantly reduced when compared with the control cells, which reveals the effect of chitosan on the cancer cells. These results indicate that d'Bioimager can be used as a non-invasive imaging alternative for measuring the morphological changes of living cells in real time.
Mass Spectrometry Using Nanomechanical Systems: Beyond the Point-Mass Approximation.
Sader, John E; Hanay, M Selim; Neumann, Adam P; Roukes, Michael L
2018-03-14
The mass measurement of single molecules, in real time, is performed routinely using resonant nanomechanical devices. This approach models the molecules as point particles. A recent development now allows the spatial extent (and, indeed, image) of the adsorbate to be characterized using multimode measurements (Hanay, M. S., Nature Nanotechnol., 10, 2015, pp. 339-344). This "inertial imaging" capability is achieved through virtual re-engineering of the resonator's vibrating modes, by linear superposition of their measured frequency shifts. Here, we present a complementary and simplified methodology for the analysis of these inertial imaging measurements that exhibits similar performance while streamlining implementation. This development, together with the software that we provide, enables the broad implementation of inertial imaging and opens the door to a range of novel characterization studies of nanoscale adsorbates.
WE-B-BRC-03: Risk in the Context of Medical Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samei, E.
Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA) and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology-specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: Description of risk assessment methodologies used in healthcare and industry; Discussion of radiation oncology-specific risk assessment strategies and issues; Evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.
NASA Astrophysics Data System (ADS)
Doebrich, Marcus; Markstaller, Klaus; Karmrodt, Jens; Kauczor, Hans-Ulrich; Eberle, Balthasar; Weiler, Norbert; Thelen, Manfred; Schreiber, Wolfgang G.
2005-04-01
In this study, an algorithm was developed to measure the distribution of pulmonary time constants (TCs) from dynamic computed tomography (CT) data sets during a sudden airway pressure step-up. Simulations with synthetic data were performed to test the methodology as well as the influence of experimental noise. Furthermore, the algorithm was applied to in vivo data. In five pigs, sudden changes in airway pressure were imposed during dynamic CT acquisition in healthy lungs and in a saline lavage ARDS model. The fractional gas content in the imaged slice (FGC) was calculated by density measurements for each CT image. Temporal variations of the FGC were analysed assuming a model with a continuous distribution of exponentially decaying time constants. The simulations proved the feasibility of the method, and the influence of experimental noise could be well evaluated. Analysis of the in vivo data showed that in healthy lungs ventilation processes can more likely be characterized by discrete TCs, whereas in ARDS lungs continuous distributions of TCs are observed. The temporal behaviour of lung inflation and deflation can be characterized objectively using the described new methodology. This study indicates that continuous distributions of TCs reflect lung ventilation mechanics more accurately than discrete TCs.
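A continuous TC distribution can be recovered, in sketch form, by non-negative least squares over a grid of candidate time constants. The saturating-exponential model and the grid bounds below are assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch: recover a distribution of time constants from an
# FGC(t) curve after a pressure step, via NNLS over a log-spaced grid.
import numpy as np
from scipy.optimize import nnls

def tc_distribution(t, fgc, n_tc=60, tc_min=0.01, tc_max=10.0):
    """t: sample times (s); fgc: fractional gas content at those times."""
    taus = np.logspace(np.log10(tc_min), np.log10(tc_max), n_tc)
    # Design matrix: one saturating exponential per candidate TC.
    A = 1.0 - np.exp(-np.outer(t, 1.0 / taus))
    weights, _residual = nnls(A, fgc - fgc[0])
    return taus, weights   # `weights` approximates the TC distribution
```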
Parks, Nathan A.
2013-01-01
The simultaneous application of transcranial magnetic stimulation (TMS) with non-invasive neuroimaging provides a powerful method for investigating functional connectivity in the human brain and the causal relationships between areas in distributed brain networks. TMS has been combined with numerous neuroimaging techniques including, electroencephalography (EEG), functional magnetic resonance imaging (fMRI), and positron emission tomography (PET). Recent work has also demonstrated the feasibility and utility of combining TMS with non-invasive near-infrared optical imaging techniques, functional near-infrared spectroscopy (fNIRS) and the event-related optical signal (EROS). Simultaneous TMS and optical imaging affords a number of advantages over other neuroimaging methods but also involves a unique set of methodological challenges and considerations. This paper describes the methodology of concurrently performing optical imaging during the administration of TMS, focusing on experimental design, potential artifacts, and approaches to controlling for these artifacts. PMID:24065911
Biostatistical analysis of quantitative immunofluorescence microscopy images.
Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C
2016-12-01
Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
Responding mindfully to distressing psychosis: A grounded theory analysis.
Abba, Nicola; Chadwick, Paul; Stevenson, Chris
2008-01-01
This study investigates the psychological process involved when people with current distressing psychosis learn to respond mindfully to unpleasant psychotic sensations (voices, thoughts, and images). Sixteen participants were interviewed on completion of a mindfulness group program. Grounded theory methodology was used to generate a theory of the core psychological process, using a systematically applied set of methods linking analysis with data collection. The theory inducted describes the experience of relating differently to psychosis through a three-stage process: centering in awareness of psychosis; allowing voices, thoughts, and images to come and go without reaction or struggle; and reclaiming power through acceptance of psychosis and the self. The conceptual and clinical applications of the theory and its limits are discussed.
NASA Astrophysics Data System (ADS)
Luccas, R. F.; Granados, X.; Obradors, X.; Puig, T.
2014-10-01
A methodology based on real-space vortex image analysis is presented that is able to estimate semi-quantitatively the relevant energy densities of an arbitrary array of vortices, map the interaction energy distributions and evaluate the pinning energy associated with particular defects. The combined study, using nanostructuration tools, a vortex visualization technique and the energy method, is seen as an opportunity to estimate vortex pinning potential strengths. In particular, the spatial distributions of vortex energy densities induced by surface nanoindented scratches are evaluated and compared to those of twin boundaries. This comparative study underlines the remarkable role of surface nanoscratches in pinning vortices and their potential in the design of novel devices for pinning and guiding vortex motion.
2013-01-01
Background Infectious diseases are the second leading cause of death worldwide. In order to better understand and treat them, an accurate evaluation using multi-modal imaging techniques for anatomical and functional characterizations is needed. For non-invasive imaging techniques such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET), there have been many engineering improvements that have significantly enhanced the resolution and contrast of the images, but there are still insufficient computational algorithms available for researchers to use when accurately quantifying imaging data from anatomical structures and functional biological processes. Since the development of such tools may potentially translate basic research into the clinic, this study focuses on the development of a quantitative and qualitative image analysis platform that provides a computational radiology perspective for pulmonary infections in small animal models. Specifically, we designed (a) a fast and robust automated and semi-automated image analysis platform and a quantification tool that can facilitate accurate diagnostic measurements of pulmonary lesions as well as volumetric measurements of anatomical structures, and incorporated (b) an image registration pipeline to our proposed framework for volumetric comparison of serial scans. This is an important investigational tool for small animal infectious disease models that can help advance researchers’ understanding of infectious diseases. Methods We tested the utility of our proposed methodology by using sequentially acquired CT and PET images of rabbit, ferret, and mouse models with respiratory infections of Mycobacterium tuberculosis (TB), H1N1 flu virus, and an aerosolized respiratory pathogen (necrotic TB) for a total of 92, 44, and 24 scans for the respective studies with half of the scans from CT and the other half from PET. Institutional Administrative Panel on Laboratory Animal Care approvals were obtained prior to conducting this research. First, the proposed computational framework registered PET and CT images to provide spatial correspondences between images. Second, the lungs from the CT scans were segmented using an interactive region growing (IRG) segmentation algorithm with mathematical morphology operations to avoid false positive (FP) uptake in PET images. Finally, we segmented significant radiotracer uptake from the PET images in lung regions determined from CT and computed metabolic volumes of the significant uptake. All segmentation processes were compared with expert radiologists’ delineations (ground truths). Metabolic and gross volume of lesions were automatically computed with the segmentation processes using PET and CT images, and percentage changes in those volumes over time were calculated. Standardized uptake value (SUV) analysis from PET images was conducted as a complementary quantitative metric for disease severity assessment. Thus, severity and extent of pulmonary lesions were examined through both PET and CT images using the aforementioned quantification metrics outputted from the proposed framework. Results Each animal study was evaluated within the same subject class, and all steps of the proposed methodology were evaluated separately. We quantified the accuracy of the proposed algorithm with respect to the state-of-the-art segmentation algorithms.
For evaluation of the segmentation results, the Dice similarity coefficient (DSC) was used as an overlap measure and the Hausdorff distance as a shape dissimilarity measure. Significant correlations regarding the estimated lesion volumes were obtained in both CT and PET images with respect to the ground truths (R² = 0.8922, p < 0.01 and R² = 0.8664, p < 0.01, respectively). The segmentation accuracy (DSC (%)) was 93.4±4.5% for normal lung CT scans and 86.0±7.1% for pathological lung CT scans. Experiments showed excellent agreement (all above 85%) with expert evaluations for both structural and functional imaging modalities. Apart from the quantitative analysis of each animal, we also qualitatively showed how metabolic volumes changed over time by examining serial PET/CT scans. Evaluation of the registration processes was based on anatomical landmark points precisely defined by expert clinicians. Average errors of 2.66, 3.93, and 2.52 mm were found in the rabbit, ferret, and mouse data (all within the resolution limits), respectively. Quantitative results obtained from the proposed methodology were visually related to the progress and severity of the pulmonary infections, as verified by the participating radiologists. Moreover, we demonstrated that lesions due to the infections were metabolically active and appeared multi-focal in nature, and we observed similar patterns in the CT images as well. Consolidation and ground glass opacity were the main abnormal imaging patterns and consistently appeared in all CT images. We also found that the gross and metabolic lesion volume percentages follow the same trend as the SUV-based evaluation in the longitudinal analysis. Conclusions We explored the feasibility of using PET and CT imaging modalities in three distinct small animal models for two diverse pulmonary infections. We concluded from the clinical findings, derived from the proposed computational pipeline, that PET-CT imaging is an invaluable hybrid modality for tracking pulmonary infections longitudinally in small animals and has great potential to become routinely used in clinics. Our proposed methodology showed that automated computer-aided lesion detection and quantification of pulmonary infections in small animal models is efficient and accurate compared to the clinical standard of manual and semi-automated approaches. Automated analysis of images in pre-clinical applications can increase the efficiency and quality of pre-clinical findings that ultimately inform downstream experimental design in human clinical studies; this innovation will allow researchers and clinicians to more effectively allocate study resources with respect to research demands without compromising accuracy. PMID:23879987
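The two evaluation measures used above are straightforward to compute; the sketch below applies the Dice similarity coefficient to binary masks and obtains the (symmetric) Hausdorff distance from SciPy's two directed distances.

```python
# Minimal sketch of the DSC overlap measure and the Hausdorff shape
# dissimilarity measure used for segmentation evaluation.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """mask_*: boolean arrays of equal shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(points_a, points_b):
    """points_*: (N, D) arrays of boundary coordinates."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])
```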
Aerial Images and Convolutional Neural Network for Cotton Bloom Detection.
Xu, Rui; Li, Changying; Paterson, Andrew H; Jiang, Yu; Sun, Shangpeng; Robertson, Jon S
2017-01-01
Monitoring flower development can provide useful information for production management, estimating yield and selecting specific genotypes of crops. The main goal of this study was to develop a methodology to detect and count cotton flowers, or blooms, using color images acquired by an unmanned aerial system. The aerial images were collected from two test fields in 4 days. A convolutional neural network (CNN) was designed and trained to detect cotton blooms in raw images, and their 3D locations were calculated using the dense point cloud constructed from the aerial images with the structure from motion method. The quality of the dense point cloud was analyzed and plots with poor quality were excluded from data analysis. A constrained clustering algorithm was developed to register the same bloom detected from different images based on the 3D location of the bloom. The accuracy and incompleteness of the dense point cloud were analyzed because they affected the accuracy of the 3D location of the blooms and thus the accuracy of the bloom registration result. The constrained clustering algorithm was validated using simulated data, showing good efficiency and accuracy. The bloom count from the proposed method was comparable with the number counted manually with an error of -4 to 3 blooms for the field with a single plant per plot. However, more plots were underestimated in the field with multiple plants per plot due to hidden blooms that were not captured by the aerial images. The proposed methodology provides a high-throughput method to continuously monitor the flowering progress of cotton.
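A simplified stand-in for the bloom-registration step is sketched below: detections from different images whose 3D locations fall within a tolerance are merged by single-linkage clustering with a distance criterion. The published constrained algorithm adds constraints (e.g., at most one detection per image per cluster) that are omitted here, and the tolerance is a hypothetical value.

```python
# Hedged sketch: merge repeated detections of the same bloom by
# clustering their 3-D locations with a hard distance threshold.
import numpy as np
from scipy.cluster.hierarchy import fclusterdata

def register_blooms(xyz, tol=0.05):
    """xyz: (N, 3) bloom locations in metres.
    Returns a cluster id per detection and the resulting bloom count."""
    labels = fclusterdata(xyz, t=tol, criterion='distance', method='single')
    return labels, len(np.unique(labels))
```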
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giannantonio, T.; et al.
Optical imaging surveys measure both the galaxy density and the gravitational lensing-induced shear fields across the sky. Recently, the Dark Energy Survey (DES) collaboration used a joint fit to two-point correlations between these observables to place tight constraints on cosmology (DES Collaboration et al. 2017). In this work, we develop the methodology to extend the DES Collaboration et al. (2017) analysis to include cross-correlations of the optical survey observables with gravitational lensing of the cosmic microwave background (CMB) as measured by the South Pole Telescope (SPT) and Planck. Using simulated analyses, we show how the resulting set of five two-point functions increases the robustness of the cosmological constraints to systematic errors in galaxy lensing shear calibration. Additionally, we show that contamination of the SPT+Planck CMB lensing map by the thermal Sunyaev-Zel'dovich effect is a potentially large source of systematic error for two-point function analyses, but show that it can be reduced to acceptable levels in our analysis by masking clusters of galaxies and imposing angular scale cuts on the two-point functions. The methodology developed here will be applied to the analysis of data from the DES, the SPT, and Planck in a companion work.
Real Time Intelligent Target Detection and Analysis with Machine Vision
NASA Technical Reports Server (NTRS)
Howard, Ayanna; Padgett, Curtis; Brown, Kenneth
2000-01-01
We present an algorithm for detecting a specified set of targets for an Automatic Target Recognition (ATR) application. ATR involves processing images for detecting, classifying, and tracking targets embedded in a background scene. We address the problem of discriminating between targets and nontarget objects in a scene by evaluating 40x40 image blocks belonging to an image. Each image block is first projected onto a set of templates specifically designed to separate images of targets embedded in a typical background scene from those background images without targets. These filters are found using directed principal component analysis which maximally separates the two groups. The projected images are then clustered into one of n classes based on a minimum distance to a set of n cluster prototypes. These cluster prototypes have previously been identified using a modified clustering algorithm based on prior sensed data. Each projected image pattern is then fed into the associated cluster's trained neural network for classification. A detailed description of our algorithm will be given in this paper. We outline our methodology for designing the templates, describe our modified clustering algorithm, and provide details on the neural network classifiers. Evaluation of the overall algorithm demonstrates that our detection rates approach 96% with a false positive rate of less than 0.03%.
Reyes, Mauricio; Zysset, Philippe
2017-01-01
Osteoporosis leads to hip fractures in aging populations and is diagnosed by modern medical imaging techniques such as quantitative computed tomography (QCT). Hip fracture sites involve trabecular bone, whose strength is determined by volume fraction and orientation, known as fabric. However, bone fabric cannot be reliably assessed in clinical QCT images of the proximal femur. Accordingly, we propose a novel registration-based estimation of bone fabric designed to preserve tensor properties of bone fabric and to map bone fabric by a global and local decomposition of the gradient of a non-rigid image registration transformation. Furthermore, no comprehensive analysis of the critical components of this methodology has been previously conducted. Hence, the aim of this work was to identify the best registration-based strategy to assign bone fabric to the QCT image of a patient’s proximal femur. The normalized correlation coefficient and curvature-based regularization were used for image-based registration, and the Frobenius norm of the stretch tensor of the local gradient was selected to quantify the distance among the proximal femora in the population. Based on this distance, the closest, farthest and mean femora, distinguished by sex, were chosen as alternative atlases to evaluate their influence on bone fabric prediction. Second, we analyzed different tensor mapping schemes for bone fabric prediction: identity, rotation-only, rotation and stretch tensor. Third, we investigated the use of a population average fabric atlas. A leave-one-out (LOO) evaluation study was performed with a dual QCT and HR-pQCT database of 36 pairs of human femora. The quality of the fabric prediction was assessed with three metrics: the tensor norm (TN) error, the degree of anisotropy (DA) error and the angular deviation of the principal tensor direction (PTD). The closest femur atlas (CTP) with a full rotation (CR) for fabric mapping delivered the best results with a TN error of 7.3 ± 0.9%, a DA error of 6.6 ± 1.3% and a PTD error of 25 ± 2°. The closest to the population mean femur atlas (MTP) using the same mapping scheme yielded only slightly higher errors than CTP for substantially less computing effort. The population average fabric atlas yielded substantially higher errors than the MTP with the CR mapping scheme. Accounting for sex did not bring any significant improvements. The identified fabric mapping methodology will be exploited in patient-specific QCT-based finite element analysis of the proximal femur to improve the prediction of hip fracture risk. PMID:29176881
Bhateja, Vikrant; Moin, Aisha; Srivastava, Anuja; Bao, Le Nguyen; Lay-Ekuakille, Aimé; Le, Dac-Nhuong
2016-07-01
Computer-based diagnosis of Alzheimer's disease can be performed through the analysis of the functional and structural changes in the brain. Multispectral image fusion deliberates upon fusion of the complementary information while discarding the surplus information to achieve a solitary image which encloses both spatial and spectral details. This paper presents a Non-Sub-sampled Contourlet Transform (NSCT) based multispectral image fusion model for computer-aided diagnosis of Alzheimer's disease. The proposed fusion methodology involves color transformation of the input multispectral image. The multispectral image in YIQ color space is decomposed using NSCT, followed by dimensionality reduction using a modified Principal Component Analysis algorithm on the low frequency coefficients. Further, the high frequency coefficients are enhanced using a non-linear enhancement function. Two different fusion rules are then applied to the low-pass and high-pass sub-bands: phase congruency is applied to the low frequency coefficients, and a combination of directive contrast and normalized Shannon entropy is applied to the high frequency coefficients. The superiority of the fusion response is depicted by comparisons made with other state-of-the-art fusion approaches (in terms of various fusion metrics).
Rodrigues, Pedro L.; Rodrigues, Nuno F.; Duque, Duarte; Granja, Sara; Correia-Pinto, Jorge; Vilaça, João L.
2014-01-01
Background. Regulating mechanisms of branching morphogenesis of fetal rat lung explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, the outer contour, and the peripheral airway buds of lung explants during cellular development from microscopic images. Methods. The outer contour was defined using an adaptive and multiscale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the branch end-points of a skeletonized image of the inner lung epithelium. Results. The time for lung branching morphometric analysis was reduced by 98% compared with the manual method. The best results were obtained in the first two days of cellular development, with smaller standard deviations. Nonsignificant differences were found between the automatic and manual results on all culture days. Conclusions. The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing reliable comparison between different researchers. PMID:25250057
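The bud-counting step lends itself to a short sketch: skeletonize the binary epithelial mask and count end-points, i.e. skeleton pixels with exactly one 8-connected neighbour. This is an illustrative reading of the described procedure, assuming the segmentation mask is already available.

```python
# Sketch of peripheral bud counting from a binary epithelium mask.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def count_peripheral_buds(epithelium_mask):
    skel = skeletonize(epithelium_mask.astype(bool))
    neighbours = convolve(skel.astype(int), np.ones((3, 3), int),
                          mode="constant")
    # an end-point counts itself plus exactly one neighbour
    return int(np.sum(skel & (neighbours == 2)))
```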
IJ-OpenCV: Combining ImageJ and OpenCV for processing images in biomedicine.
Domínguez, César; Heras, Jónathan; Pascual, Vico
2017-05-01
The effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library. Based on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library. We have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest. The IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wollenweber, Scott D; Kemp, Brad J
2016-11-01
This investigation aimed to develop a scanner quantification performance methodology and compare multiple metrics between two scanners under different imaging conditions. Most PET scanners are designed to work over a wide dynamic range of patient imaging conditions. Clinical constraints, however, often impact the realization of the entitlement performance for a particular scanner design. Using less injected dose and imaging for a shorter time are often key considerations, all while maintaining "acceptable" image quality and quantitative capability. A dual phantom measurement including resolution inserts was used to measure the effects of in-plane (x, y) and axial (z) system resolution between two PET/CT systems with different block detector crystal dimensions. One of the scanners had significantly thinner slices. Several quantitative measures, including feature contrast recovery, max/min value, and feature profile accuracy were derived from the resulting data and compared between the two scanners and multiple phantoms and alignments. At the clinically relevant count levels used, the scanner with thinner slices had improved performance of approximately 2%, averaged over phantom alignments, measures, and reconstruction methods, for the head-sized phantom, mainly demonstrated with the rods aligned perpendicular to the scanner axis. That same scanner had a slightly decreased performance of -1% for the larger body-size phantom, mostly due to an apparent noise increase in the images. Most of the differences in the metrics between the two scanners were less than 10%. Using the proposed scanner performance methodology, it was shown that smaller detector elements and a larger number of image voxels require higher count density in order to demonstrate improved image quality and quantitation. In a body imaging scenario under typical clinical conditions, the potential advantages of the design must overcome increases in noise due to lower count density.
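Of the quantitative measures named, feature contrast recovery has a simple conventional form; the sketch below assumes the usual definition (measured contrast divided by true contrast), which the abstract does not spell out.

```python
# Sketch of a contrast recovery coefficient, assuming the conventional definition.
def contrast_recovery(roi_mean, bkg_mean, true_ratio):
    """CRC = measured contrast / true contrast (1.0 = perfect recovery)."""
    return (roi_mean / bkg_mean - 1.0) / (true_ratio - 1.0)

# e.g. a hot rod measured at 2.2x background with a true 4:1 ratio -> CRC = 0.4
print(contrast_recovery(roi_mean=22e3, bkg_mean=10e3, true_ratio=4.0))
```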
SU-C-304-05: Use of Local Noise Power Spectrum and Wavelets in Comprehensive EPID Quality Assurance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, S; Gopal, A; Yan, G
2015-06-15
Purpose: As EPIDs are increasingly used for IMRT QA and real-time treatment verification, comprehensive quality assurance (QA) of EPIDs becomes critical. Current QA with phantoms such as the Las Vegas and PIPSpro™ can fail in the early detection of EPID artifacts. Beyond image quality assessment, we propose a quantitative methodology using the local noise power spectrum (NPS) to characterize image noise and the wavelet transform to identify bad pixels and inter-subpanel flat-fielding artifacts. Methods: A total of 93 image sets including bar-pattern images and open exposure images were collected from four iViewGT a-Si EPID systems over three years. Quantitative metrics such as the modulation transfer function (MTF), NPS and detective quantum efficiency (DQE) were computed for each image set. Local 2D NPS was calculated for each subpanel. A 1D NPS was obtained by radially averaging the 2D NPS and fitted to a power-law function. The R-square and slope of the linear regression analysis were used for panel performance assessment. Haar wavelet transformation was employed to identify pixel defects and non-uniform gain correction across subpanels. Results: Overall image quality was assessed with DQE based on empirically derived area under curve (AUC) thresholds. Using linear regression analysis of the 1D NPS, panels with acceptable flat fielding were indicated by r-square values between 0.8 and 1 and slopes of −0.4 to −0.7. However, for panels requiring flat-fielding recalibration, r-square values less than 0.8 and slopes from +0.2 to −0.4 were observed. The wavelet transform successfully identified pixel defects and inter-subpanel flat-fielding artifacts. Standard QA with the Las Vegas and PIPSpro phantoms failed to detect these artifacts. Conclusion: The proposed QA methodology is promising for the early detection of imaging and dosimetric artifacts of EPIDs. Local NPS can accurately characterize the noise level within each subpanel, while the wavelet transform can detect bad pixels and inter-subpanel flat-fielding artifacts.
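The panel-assessment rule (r-square and slope thresholds on the power-law fit of the 1D NPS) can be sketched compactly; the radial-averaging and fitting details below are illustrative assumptions, not the authors' exact code.

```python
# Sketch: radially average a 2D NPS, fit a power law in log-log space, and
# apply the acceptance thresholds quoted in the abstract.
import numpy as np
from scipy.stats import linregress

def radial_average(nps2d):
    h, w = nps2d.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2, y - h / 2).astype(int)
    return np.bincount(r.ravel(), nps2d.ravel()) / np.bincount(r.ravel())

def assess_subpanel(nps2d):
    nps1d = radial_average(nps2d)[1:]          # drop the DC bin
    freq = np.arange(1, nps1d.size + 1)        # arbitrary frequency units
    fit = linregress(np.log(freq), np.log(nps1d))
    ok = fit.rvalue**2 > 0.8 and -0.7 <= fit.slope <= -0.4
    return fit.slope, fit.rvalue**2, ok
```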
MRI Superresolution Using Self-Similarity and Image Priors
Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2010-01-01
In typical clinical Magnetic Resonance Imaging settings, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for posterior analysis or postprocessing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from low-resolution ones using information from coplanar high-resolution images acquired from the same subject. Furthermore, the reconstruction process is constrained to be consistent with the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data are supplied to show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques is presented to demonstrate the improved performance of the proposed methodology. PMID:21197094
Imaging nanoscale lattice variations by machine learning of x-ray diffraction microscopy data
Laanait, Nouamane; Zhang, Zhan; Schlepütz, Christian M.
2016-08-09
In this paper, we present a novel methodology based on machine learning to extract lattice variations in crystalline materials, at the nanoscale, from an x-ray Bragg diffraction-based imaging technique. By employing a full-field microscopy setup, we capture real space images of materials, with imaging contrast determined solely by the x-ray diffracted signal. The data sets that emanate from this imaging technique are a hybrid of real space information (image spatial support) and reciprocal lattice space information (image contrast), and are intrinsically multidimensional (5D). By a judicious application of established unsupervised machine learning techniques and multivariate analysis to this multidimensional data cube, we show how to extract features that can be ascribed physical interpretations in terms of common structural distortions, such as lattice tilts and dislocation arrays. Finally, we demonstrate this 'big data' approach to x-ray diffraction microscopy by identifying structural defects present in an epitaxial ferroelectric thin-film of lead zirconate titanate.
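The unsupervised step reduces to unfolding the data cube so that each real-space pixel becomes one observation and then extracting a few components; a minimal sketch with stand-in dimensions follows.

```python
# Sketch: unfold a (real-space x reciprocal-space) data cube and run PCA.
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(64, 64, 16, 16)        # (x, y, qx, qy) stand-in data
X = cube.reshape(64 * 64, -1)                # one row per real-space pixel
scores = PCA(n_components=4).fit_transform(X)
component_maps = scores.reshape(64, 64, 4)   # maps of candidate structural features
```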
A whole brain morphometric analysis of changes associated with pre-term birth
NASA Astrophysics Data System (ADS)
Thomaz, C. E.; Boardman, J. P.; Counsell, S.; Hill, D. L. G.; Hajnal, J. V.; Edwards, A. D.; Rutherford, M. A.; Gillies, D. F.; Rueckert, D.
2006-03-01
Pre-term birth is strongly associated with subsequent neuropsychiatric impairment. To identify structural differences in preterm infants, we have examined a dataset of magnetic resonance (MR) images containing 88 preterm infants and 19 term born controls. We have analyzed these images by combining image registration, deformation based morphometry (DBM), multivariate statistics, and effect size maps (ESM). The methodology described has been performed directly on the MR intensity images rather than on segmented versions of the images. The results indicate that the approach makes clear the statistical differences between the control and preterm samples, showing leave-one-out classification accuracies of 94.74% and 95.45%, respectively. In addition, by finding the most discriminant direction between the groups and using DBM features and ESM, we are able to identify not only the changes between preterm and term groups but also their relative relevance in terms of volume expansion and contraction.
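A minimal sketch of the leave-one-out evaluation, using a Fisher linear discriminant as the classifier and random stand-in features (the paper's DBM features are not reproduced):

```python
# Sketch: leave-one-out classification of preterm vs. term subjects.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

X = np.random.rand(107, 500)                 # 88 preterm + 19 term, stand-in features
y = np.r_[np.zeros(88), np.ones(19)]
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                      cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2%}")
```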
Analysis and Implementation of Methodologies for the Monitoring of Changes in Eye Fundus Images
NASA Astrophysics Data System (ADS)
Gelroth, A.; Rodríguez, D.; Salvatelli, A.; Drozdowicz, B.; Bizai, G.
2011-12-01
We present a support system for change detection in fundus images of the same patient taken at different time intervals. This process is useful for monitoring pathologies lasting for long periods of time, as ophthalmologic pathologies usually are. We propose a flow of preprocessing, processing and postprocessing applied to a set of images selected from a public database, presenting pathological progression. A test interface was developed to select the images to be compared, apply the different methods developed, and display the results. We measured the system performance in terms of sensitivity, specificity and computation time. We have obtained good results, higher than 84% for the first two parameters and processing times lower than 3 seconds for 512x512 pixel images. For the specific case of detection of changes associated with bleeding, the system responds with sensitivity and specificity over 98%.
Mining biomedical images towards valuable information retrieval in biomedical and life sciences.
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images, to support the implementation of effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.
Heuristic Enhancement of Magneto-Optical Images for NDE
NASA Astrophysics Data System (ADS)
Cacciola, Matteo; Megali, Giuseppe; Pellicanò, Diego; Calcagno, Salvatore; Versaci, Mario; Morabito, FrancescoCarlo
2010-12-01
The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each technique, like the magneto-optic imaging treated here, is affected by particular types of noise related to the specific device used for acquisition. Therefore, the design of ever more accurate image processing is often required by relevant applications, for instance, in implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. A comparison of the proposed approach with other advanced image processing methodologies used for denoising magneto-optic images (MOIs) is carried out, in order to show the advantages and weaknesses of ICA in improving the accuracy and performance of rivet/flaw detection.
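A minimal sketch of ICA-based preprocessing of this kind, assuming a stack of registered magneto-optic images and a manually chosen set of noise components; this illustrates the idea rather than the paper's pipeline.

```python
# Sketch: separate a stack of MOIs into independent sources, zero the
# components judged to be noise, and reconstruct the stack.
import numpy as np
from sklearn.decomposition import FastICA

def ica_denoise(stack, noise_components):
    """stack: (n_images, n_pixels); returns the denoised stack."""
    ica = FastICA(n_components=stack.shape[0], random_state=0)
    sources = ica.fit_transform(stack.T)          # (n_pixels, n_components)
    sources[:, noise_components] = 0.0            # suppress noise sources
    return ica.inverse_transform(sources).T
```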
Wood texture classification by fuzzy neural networks
NASA Astrophysics Data System (ADS)
Gonzaga, Adilson; de Franca, Celso A.; Frere, Annie F.
1999-03-01
The majority of scientific papers focusing on wood classification for pencil manufacturing take into account defects and visual appearance. Traditional methodologies are based on texture analysis by co-occurrence matrix, by image modeling, or by tonal measures over the plate surface. In this work, we propose to classify plates of wood without biological defects like insect holes, nodes, and cracks by analyzing their texture. In this methodology we divide the plate image into several rectangular windows or local areas and reduce the number of gray levels. From each local area, we compute the histogram of differences and extract texture features, giving them as input to a Local Neuro-Fuzzy Network (LNFN). Those features are taken from the histogram of differences instead of the image pixels due to their better performance and illumination independence. Among several features, such as mean, contrast, second moment, entropy, and IDN, the last three showed the best results for network training. Each LNFN output is taken as input to a Partial Neuro-Fuzzy Network (PNFN) classifying a pencil region on the plate. Finally, the outputs from the PNFN are taken as input to a global fuzzy logic stage that performs the plate classification. Each pencil classification within the plate takes into account its quality index.
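The feature-extraction step can be sketched directly: build the histogram of grey-level differences for one window (grey levels assumed already reduced) and compute the three features reported as most effective. The IDN weighting used here is one common definition and is an assumption.

```python
# Sketch: histogram-of-differences texture features for one window.
import numpy as np

def hod_features(window, levels=32):
    """window: 2D array with grey levels already quantized to < `levels`."""
    diff = np.abs(np.diff(window.astype(int), axis=1)).ravel()
    p = np.bincount(diff, minlength=levels)[:levels].astype(float)
    p /= p.sum()
    d = np.arange(levels)
    second_moment = np.sum(p ** 2)                       # angular second moment
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    idn = np.sum(p / (1.0 + d / levels))                 # inverse difference normalized
    return second_moment, entropy, idn
```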
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
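One common remedy in this literature is to diagnose the correlation between model-derived regressors and then serially orthogonalize one with respect to the other before the GLM fit; a minimal sketch with synthetic regressors follows.

```python
# Sketch: diagnose collinearity between two learning-model regressors and
# remove the shared variance by serial orthogonalization.
import numpy as np

def orthogonalize(x, wrt):
    """Remove from x its projection onto wrt."""
    return x - wrt * (x @ wrt) / (wrt @ wrt)

rng = np.random.default_rng(0)
value = rng.standard_normal(200)                     # predicted-value regressor
rpe = 0.8 * value + 0.6 * rng.standard_normal(200)   # correlated RPE regressor
print(np.corrcoef(value, rpe)[0, 1])                 # quantify the collinearity
rpe_unique = orthogonalize(rpe, value)               # variance unique to the RPE
```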
European research platform IPANEMA at the SOLEIL synchrotron for ancient and historical materials.
Bertrand, L; Languille, M-A; Cohen, S X; Robinet, L; Gervais, C; Leroy, S; Bernard, D; Le Pennec, E; Josse, W; Doucet, J; Schöder, S
2011-09-01
IPANEMA, a research platform devoted to ancient and historical materials (archaeology, cultural heritage, palaeontology and past environments), is currently being set up at the synchrotron facility SOLEIL (Saint-Aubin, France; SOLEIL opened to users in January 2008). The new platform is open to French, European and international users. The activities of the platform are centred on two main fields: increased support to synchrotron projects on ancient materials, and methodological research. The IPANEMA team currently occupies temporary premises at SOLEIL, but the platform comprises the construction of a new building that will comply with conservation and environmental standards and of a hard X-ray imaging beamline, named PUMA, currently in its conceptual design phase. Since 2008, the team has supported synchrotron work at SOLEIL and at European synchrotron facilities on a range of topics including pigment degradation in paintings, the composition of musical instrument varnishes, and the provenancing of medieval archaeological ferrous artefacts. Once the platform is fully operational, user support will primarily take place within medium-term research projects for 'hosted' scientists, PhDs and post-docs. IPANEMA methodological research is focused on advanced two-dimensional/three-dimensional imaging and spectroscopy and statistical image analysis, both optimized for ancient materials.
The Escherichia coli Proteome: Past, Present, and Future Prospects†
Han, Mee-Jung; Lee, Sang Yup
2006-01-01
Proteomics has emerged as an indispensable methodology for large-scale protein analysis in functional genomics. The Escherichia coli proteome has been extensively studied and is well defined in terms of biochemical, biological, and biotechnological data. Even before the entire E. coli proteome was fully elucidated, the largest available data set had been integrated to decipher regulatory circuits and metabolic pathways, providing valuable insights into global cellular physiology and the development of metabolic and cellular engineering strategies. With the recent advent of advanced proteomic technologies, the E. coli proteome has been used for the validation of new technologies and methodologies such as sample prefractionation, protein enrichment, two-dimensional gel electrophoresis, protein detection, mass spectrometry (MS), combinatorial assays with n-dimensional chromatographies and MS, and image analysis software. These important technologies will not only provide a great amount of additional information on the E. coli proteome but also synergistically contribute to other proteomic studies. Here, we review the past development and current status of E. coli proteome research in terms of its biological, biotechnological, and methodological significance and suggest future prospects. PMID:16760308
Ergonomic assessment methodologies in manual handling of loads--opportunities in organizations.
Pires, Claudia
2012-01-01
The present study was developed based on the analysis of workplaces in the engineering industry, particularly in automotive companies. The main objectives of the study were to evaluate workplace activities involving manual handling, using the assessment methodologies of the NIOSH Ergonomic Equation [1] and Manual Material Handling [2], present in ISO 11228 [3-4], and to consider the possibility of developing musculoskeletal injuries associated with these activities, an issue of great concern in all industrial sectors. The suitability of each method to the task concerned was also assessed. The study was conducted in three steps. The first step was to collect images and information about the target tasks. In the second step, the analysis was carried out, determining the method to use and evaluating the activities. Finally, the results were reviewed and acted upon. The study identified situations requiring urgent action according to the methodologies used, and solutions were developed to address the identified problems, eliminating and/or minimizing conditions that are strenuous or harmful to employees.
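The revised NIOSH lifting equation referenced above has a standard closed form; the sketch below implements it, with the frequency (FM) and coupling (CM) multipliers passed in from the NIOSH tables and all inputs hypothetical.

```python
# Sketch of the revised NIOSH lifting equation and lifting index.
def niosh_rwl(H, V, D, A, FM, CM):
    """Recommended weight limit in kg (H, V, D in cm; A in degrees)."""
    LC = 23.0                         # load constant, kg
    HM = 25.0 / H                     # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)  # vertical multiplier
    DM = 0.82 + 4.5 / D               # distance multiplier
    AM = 1.0 - 0.0032 * A             # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

load = 12.0                                     # kg actually handled (hypothetical)
li = load / niosh_rwl(H=40, V=50, D=50, A=30, FM=0.88, CM=0.95)
print(f"lifting index: {li:.2f}")               # LI > 1 flags an at-risk task
```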
Self-registering spread-spectrum barcode method
Cummings, Eric B.; Even Jr., William R.
2004-11-09
A novel spread spectrum barcode methodology is disclosed that allows a barcode to be read in its entirety even when a significant fraction or majority of the barcode is obscured. The barcode methodology makes use of registration or clocking information that is distributed along with the encoded user data across the barcode image. This registration information allows for the barcode image to be corrected for imaging distortion such as zoom, rotation, tilt, curvature, and perspective.
A Chip and Pixel Qualification Methodology on Imaging Sensors
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Petkov, Mihail; Nguyen, Duc N.; Novak, Frank
2004-01-01
This paper presents a qualification methodology for imaging sensors. In addition to overall chip reliability characterization based on the sensor's overall figures of merit, such as Dark Rate, Linearity, Dark Current Non-Uniformity, Fixed Pattern Noise and Photon Response Non-Uniformity, a simulation technique is proposed and used to project pixel reliability. The projected pixel reliability is directly related to imaging quality and provides additional sensor reliability information and performance control.
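The listed figures of merit can be estimated from stacks of dark and flat-field frames; a minimal sketch under conventional definitions (which the abstract does not specify) follows.

```python
# Sketch: per-pixel non-uniformity metrics from dark and flat frame stacks.
import numpy as np

def sensor_metrics(dark_stack, flat_stack):
    """Stacks: (n_frames, rows, cols) arrays from the same sensor."""
    dark_mean = dark_stack.mean(axis=0)          # per-pixel dark level
    flat_mean = flat_stack.mean(axis=0)
    dcnu = dark_mean.std() / dark_mean.mean()    # dark current non-uniformity
    signal = flat_mean - dark_mean
    prnu = signal.std() / signal.mean()          # photo response non-uniformity
    fpn = dark_mean.std()                        # fixed pattern noise (dark)
    return dcnu, prnu, fpn
```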
Brain tumor classification using AFM in combination with data mining techniques.
Huml, Marlene; Silye, René; Zauner, Gerald; Hutterer, Stephan; Schilcher, Kurt
2013-01-01
Although classification of astrocytic tumors is standardized by the WHO grading system, which is mainly based on microscopy-derived, histomorphological features, there is great interobserver variability. The main causes are thought to be the complexity of morphological details varying from tumor to tumor and from patient to patient, variations in the technical histopathological procedures like staining protocols, and finally the individual experience of the diagnosing pathologist. Thus, to raise astrocytoma grading to a more objective standard, this paper proposes a methodology based on atomic force microscopy (AFM) derived images made from histopathological samples in combination with data mining techniques. By comparing AFM images with corresponding light microscopy images of the same area, the progressive formation of cavities due to cell necrosis was identified as a typical morphological marker for a computer-assisted analysis. Using genetic programming as a tool for feature analysis, a best model was created that achieved 94.74% classification accuracy in distinguishing grade II tumors from grade IV ones. While utilizing modern image analysis techniques, AFM may become an important tool in astrocytic tumor diagnosis. By this way patients suffering from grade II tumors are identified unambiguously, having a less risk for malignant transformation. They would benefit from early adjuvant therapies.
Pourfarzad, Amir; Haddad Khodaparast, Mohammad Hossein; Karimi, Mehdi; Mortazavi, Seyed Ali
2014-10-01
Nowadays, the use of bread improvers has become an essential part of improving the production methods and quality of bakery products. In the present study, the Response Surface Methodology (RSM) was used to determine the optimum improver gel formulation which gave the best quality, shelf life, sensory and image properties for Barbari flat bread. Sodium stearoyl-2-lactylate (SSL), diacetyl tartaric acid esters of monoglyceride (DATEM) and propylene glycol (PG) were constituents of the gel and considered in this study. A second-order polynomial model was fitted to each response and the regression coefficients were determined using least square method. The optimum gel formulation was found to be 0.49 % of SSL, 0.36 % of DATEM and 0.5 % of PG when desirability function method was applied. There was a good agreement between the experimental data and their predicted counterparts. Results showed that the RSM, image processing and texture analysis are useful tools to investigate, approximate and predict a large number of bread properties.
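The RSM step amounts to fitting a second-order polynomial in the three gel constituents by least squares; a minimal sketch with stand-in data follows (the desirability-function optimization is omitted).

```python
# Sketch: second-order response surface fit for three factors (SSL, DATEM, PG).
import numpy as np

def quadratic_design(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x2, x1 * x3, x2 * x3,
                            x1**2, x2**2, x3**2])

X = np.random.rand(20, 3)            # coded factor levels (stand-in design)
y = np.random.rand(20)               # one measured response, e.g. a texture score
coef, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
```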
A statistical estimation of Snow Water Equivalent coupling ground data and MODIS images
NASA Astrophysics Data System (ADS)
Bavera, D.; Bocchiola, D.; de Michele, C.
2007-12-01
The Snow Water Equivalent (SWE) is an important component of the hydrologic balance of mountain basins and snow-fed areas in general. The total cumulated snow water equivalent at the end of the accumulation season represents the water available at melt. Here, a statistical methodology to estimate the Snow Water Equivalent at April 1st is developed by coupling ground data (snow depth and snow density measurements) and MODIS images. The methodology is applied to the Mallero river basin (about 320 km²), located in the Central Alps, northern Italy, where 11 snow gauges and numerous sparse snow density measurements are available. The application covers 7 years, from 2001 to 2007. The analysis has identified some problems in the MODIS information due to cloud cover and misclassification caused by orographic shadow. The study is performed in the framework of the AWARE (A tool for monitoring and forecasting Available WAter REsource in mountain environment) EU-project, a STREP Project in the VI F.P., GMES Initiative.
Spatial/Spectral Identification of Endmembers from AVIRIS Data using Mathematical Morphology
NASA Technical Reports Server (NTRS)
Plaza, Antonio; Martinez, Pablo; Gualtieri, J. Anthony; Perez, Rosa M.
2001-01-01
During the last several years, a number of airborne and satellite hyperspectral sensors have been developed or improved for remote sensing applications. Imaging spectrometry allows the detection of materials, objects and regions in a particular scene with a high degree of accuracy. Hyperspectral data typically consist of hundreds of thousands of spectra, so the analysis of this information is a key issue. Mathematical morphology theory is a widely used nonlinear technique for image analysis and pattern recognition. Although it is especially well suited to segment binary or grayscale images with irregular and complex shapes, its application in the classification/segmentation of multispectral or hyperspectral images has been quite rare. In this paper, we discuss a new completely automated methodology to find endmembers in the hyperspectral data cube using mathematical morphology. The extension of classic morphology to the hyperspectral domain allows us to integrate spectral and spatial information in the analysis process. In Section 3, some basic concepts about mathematical morphology and the technical details of our algorithm are provided. In Section 4, the accuracy of the proposed method is tested by its application to real hyperspectral data obtained from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imaging spectrometer. Some details about these data and reference results, obtained by well-known endmember extraction techniques, are provided in Section 2. Finally, in Section 5 we expose the main conclusions at which we have arrived.
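A minimal sketch of an 'extended' morphological operation of the kind proposed: within each sliding window, the dilation keeps the most spectrally distinct pixel, scored by its summed spectral angle to the other pixels in the window. This is an illustrative reading under stated assumptions, not the authors' exact algorithm.

```python
# Sketch: spectral-angle-based "extended dilation" over a hyperspectral cube.
import numpy as np

def spectral_angle(a, b):
    cosv = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cosv, -1.0, 1.0))

def extended_dilation(cube, win=3):
    """cube: (rows, cols, bands); borders are left as zeros for brevity."""
    rows, cols, bands = cube.shape
    out = np.zeros_like(cube)
    half = win // 2
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            patch = cube[i-half:i+half+1, j-half:j+half+1].reshape(-1, bands)
            scores = [sum(spectral_angle(p, q) for q in patch) for p in patch]
            out[i, j] = patch[int(np.argmax(scores))]   # most spectrally distinct pixel
    return out
```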
Processing Satellite Imagery To Detect Waste Tire Piles
NASA Technical Reports Server (NTRS)
Skiles, Joseph; Schmidt, Cynthia; Wuinlan, Becky; Huybrechts, Catherine
2007-01-01
A methodology for processing commercially available satellite spectral imagery has been developed to enable identification and mapping of waste tire piles in California. The California Integrated Waste Management Board initiated the project and provided funding for the method's development. The methodology combines commercially available image-processing and georeferencing software to develop a model that specifically distinguishes between tire piles and other objects. The methodology reduces the time that must be spent to initially survey a region for tire sites, thereby increasing the time inspectors and managers have available for remediation of the sites. Remediation is needed because millions of used tires are discarded every year, waste tire piles pose fire hazards, and mosquitoes often breed in water trapped in tires. It should be possible to adapt the methodology to regions outside California by modifying some of the algorithms implemented in the software to account for geographic differences in spectral characteristics associated with terrain and climate. The task of identifying tire piles in satellite imagery is uniquely challenging because of their low reflectance levels: tires tend to be spectrally confused with shadows and deep water, both of which reflect little light to satellite-borne imaging systems. In this methodology, the challenge is met, in part, by use of software that implements the Tire Identification from Reflectance (TIRe) model. The development of the TIRe model incorporated lessons learned in previous research on the detection and mapping of tire piles by manual/visual and/or computational analysis of aerial and satellite imagery. The TIRe model is a computational model for identifying tire piles and discriminating between tire piles and other objects. The input to the TIRe model is the georeferenced but otherwise raw satellite spectral imagery of the geographic region to be surveyed. The TIRe model identifies the darkest objects in the images and, on the basis of spatial and spectral image characteristics, discriminates against other dark objects, which can include vegetation, some bodies of water, and dark soils. The TIRe model can identify piles of as few as 100 tires. The output of the TIRe model is a binary mask showing areas containing suspected tire piles and spectrally similar features. This mask is overlaid on the original satellite imagery and examined by a trained image analyst, who strives to further discriminate against non-tire objects that the TIRe model tentatively identified as tire piles. After the analyst has made adjustments, the mask is used to create a synoptic, geographically accurate tire-pile survey map, which can be overlaid with a road map and/or any other map or set of georeferenced data, according to a customer's preferences.
A, Anila H; G, Upendar Reddy; Ali, Firoj; Taye, Nandaraj; Chattopadhyay, Samit; Das, Amitava
2015-11-04
We report a new chemodosimetric probe () for specific recognition of cysteine (Cys) in aqueous buffer and in whey protein isolated from fresh cow's milk. Using this reagent we could develop a luminescence-based methodology for estimation of Cys released from a commercially available Cys-supplement drug by aminoacylase-1 in live cells.
Dolled-Filhart, Marisa P; Gustavson, Mark D
2012-11-01
Translational oncology has been improved by using tissue microarrays (TMAs), which facilitate biomarker analysis of large cohorts on a single slide. This has allowed for rapid analysis and validation of potential biomarkers for prognostic and predictive value, as well as for evaluation of biomarker prevalence. Coupled with quantitative analysis of immunohistochemical (IHC) staining, objective and standardized biomarker data from tumor samples can further advance companion diagnostic approaches for the identification of drug-responsive or resistant patient subpopulations. This review covers the advantages, disadvantages and applications of TMAs for biomarker research. Research literature and reviews of TMAs and quantitative image analysis methodology have been surveyed for this review (with an AQUA® analysis focus). Applications such as multi-marker diagnostic development and pathway-based biomarker subpopulation analyses are described. Tissue microarrays are a useful tool for biomarker analyses including prevalence surveys, disease progression assessment and addressing potential prognostic or predictive value. By combining quantitative image analysis with TMAs, analyses will be more objective and reproducible, allowing for more robust IHC-based diagnostic test development. Quantitative multi-biomarker IHC diagnostic tests that can predict drug response will allow for greater success of clinical trials for targeted therapies and provide more personalized clinical decision making.
Barker, John R; Martinez, Antonio
2018-04-04
Efficient analytical image charge models are derived for the full spatial variation of the electrostatic self-energy of electrons in semiconductor nanostructures that arises from dielectric mismatch using semi-classical analysis. The methodology provides a fast, compact and physically transparent computation for advanced device modeling. The underlying semi-classical model for the self-energy has been established and validated during recent years and depends on a slight modification of the macroscopic static dielectric constants for individual homogeneous dielectric regions. The model has been validated for point charges as close as one interatomic spacing to a sharp interface. A brief introduction to image charge methodology is followed by a discussion and demonstration of the traditional failure of the methodology to derive the electrostatic potential at arbitrary distances from a source charge. However, the self-energy involves the local limit of the difference between the electrostatic Green functions for the full dielectric heterostructure and the homogeneous equivalent. It is shown that high convergence may be achieved for the image charge method for this local limit. A simple re-normalisation technique is introduced to reduce the number of image terms to a minimum. A number of progressively complex 3D models are evaluated analytically and compared with high precision numerical computations. Accuracies of 1% are demonstrated. Introducing a simple technique for modeling the transition of the self-energy between disparate dielectric structures we generate an analytical model that describes the self-energy as a function of position within the source, drain and gated channel of a silicon wrap round gate field effect transistor on a scale of a few nanometers cross-section. At such scales the self-energies become large (typically up to ~100 meV) close to the interfaces as well as along the channel. The screening of a gated structure is shown to reduce the self-energy relative to un-gated nanowires.
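For orientation, the textbook single-interface limit of this image-charge self-energy, for a charge q in medium 1 at distance z from a planar boundary with medium 2, is as follows (this special case is standard electrostatics, not the paper's full gated geometry):

```latex
% Self-energy of a point charge near one planar dielectric interface:
% positive (repulsive) when \varepsilon_1 > \varepsilon_2.
\[
  \Sigma(z) \;=\; \frac{q^{2}}{16\pi\varepsilon_{0}\varepsilon_{1}\,z}\,
  \frac{\varepsilon_{1}-\varepsilon_{2}}{\varepsilon_{1}+\varepsilon_{2}}
\]
```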
Jethanandani, Amit; Lin, Timothy A; Volpe, Stefania; Elhalawani, Hesham; Mohamed, Abdallah S R; Yang, Pei; Fuller, Clifton D
2018-01-01
Radiomics has been widely investigated for non-invasive acquisition of quantitative textural information from anatomic structures. While the vast majority of radiomic analysis is performed on images obtained from computed tomography, magnetic resonance imaging (MRI)-based radiomics has generated increased attention. In head and neck cancer (HNC), however, attempts to perform consistent investigations are sparse, and it is unclear whether the resulting textural features can be reproduced. To address this unmet need, we systematically reviewed the quality of existing MRI radiomics research in HNC. Literature search was conducted in accordance with guidelines established by Preferred Reporting Items for Systematic Reviews and Meta-Analyses. Electronic databases were examined from January 1990 through November 2017 for common radiomic keywords. Eligible completed studies were then scored using a standardized checklist that we developed from Enhancing the Quality and Transparency of Health Research guidelines for reporting machine-learning predictive model specifications and results in biomedical research, defined by Luo et al. (1). Descriptive statistics of checklist scores were populated, and a subgroup analysis of methodology items alone was conducted in comparison to overall scores. Sixteen completed studies and four ongoing trials were selected for inclusion. Of the completed studies, the nasopharynx was the most common site of study (37.5%). MRI modalities varied with only four of the completed studies (25%) extracting radiomic features from a single sequence. Study sample sizes ranged between 13 and 118 patients (median of 40), and final radiomic signatures ranged from 2 to 279 features. Analyzed endpoints included either segmentation or histopathological classification parameters (44%) or prognostic and predictive biomarkers (56%). Liu et al. (2) addressed the highest number of our checklist items (total score: 48), and a subgroup analysis of methodology checklist items alone did not demonstrate any difference in scoring trends between studies [Spearman's ρ = 0.94 ( p < 0.0001)]. Although MRI radiomic applications demonstrate predictive potential in analyzing diverse HNC outcomes, methodological variances preclude accurate and collective interpretation of data.
MTF evaluation of white pixel sensors
NASA Astrophysics Data System (ADS)
Lindner, Albrecht; Atanassov, Kalin; Luo, Jiafu; Goma, Sergio
2015-01-01
We present a methodology to compare image sensors with traditional Bayer RGB layouts to sensors with alternative layouts containing white pixels. We focused on the sensors' resolving powers, which we measured in the form of a modulation transfer function for variations in both luma and chroma channels. We present the design of the test chart, the acquisition of images, the image analysis, and an interpretation of the results. We demonstrate the approach using the example of two sensors that differ only in their color filter arrays. We confirmed that the sensor with white pixels and the corresponding demosaicing result in a higher resolving power in the luma channel, but a lower resolving power in the chroma channels, when compared to the traditional Bayer sensor.
EEG-Informed fMRI: A Review of Data Analysis Methods
Abreu, Rodolfo; Leal, Alberto; Figueiredo, Patrícia
2018-01-01
The simultaneous acquisition of electroencephalography (EEG) with functional magnetic resonance imaging (fMRI) is a very promising non-invasive technique for the study of human brain function. Despite continuous improvements, it remains a challenging technique, and a standard methodology for data analysis is yet to be established. Here we review the methodologies that are currently available to address the challenges at each step of the data analysis pipeline. We start by surveying methods for pre-processing both EEG and fMRI data. On the EEG side, we focus on the correction for several MR-induced artifacts, particularly the gradient and pulse artifacts, as well as other sources of EEG artifacts. On the fMRI side, we consider image artifacts induced by the presence of EEG hardware inside the MR scanner, and the contamination of the fMRI signal by physiological noise of non-neuronal origin, including a review of several approaches to model and remove it. We then provide an overview of the approaches specifically employed for the integration of EEG and fMRI when using EEG to predict the blood oxygenation level dependent (BOLD) fMRI signal, the so-called EEG-informed fMRI integration strategy, the most commonly used strategy in EEG-fMRI research. Finally, we systematically review methods used for the extraction of EEG features reflecting neuronal phenomena of interest. PMID:29467634
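The EEG-informed integration strategy described above is commonly implemented by convolving an EEG feature time course with a canonical haemodynamic response function and resampling at the fMRI TR; a minimal sketch with conventional double-gamma parameters (an assumption) follows.

```python
# Sketch: turn an EEG feature into a BOLD predictor for the fMRI GLM.
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma HRF (peak near 5 s, undershoot near 15 s)."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

fs, tr = 250.0, 2.0                                   # EEG rate (Hz) and fMRI TR (s)
eeg_feature = np.random.rand(int(60 * fs))            # e.g. alpha-band power (stand-in)
t = np.arange(0, 32, 1 / fs)                          # 32 s HRF kernel
bold_pred = np.convolve(eeg_feature, hrf(t))[: eeg_feature.size]
regressor = bold_pred[:: int(fs * tr)]                # one sample per TR
```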
Buildings Change Detection Based on Shape Matching for Multi-Resolution Remote Sensing Imagery
NASA Astrophysics Data System (ADS)
Abdessetar, M.; Zhong, Y.
2017-09-01
Building change detection can quantify temporal effects on urban areas for urban evolution studies or damage assessment in disaster cases. In this context, change analysis may involve the use of available satellite images at different resolutions for quick response. In this paper, to avoid traditional methods that require image resampling and suffer from salt-and-pepper effects, building change detection based on shape matching is proposed for multi-resolution remote sensing images. Since an object's shape can be extracted from remote sensing imagery and the shapes of corresponding objects in multi-scale images are similar, shape analysis is practical for detecting building changes in multi-scale imagery. The proposed methodology can therefore handle different pixel sizes when identifying new and demolished buildings in urban areas using the geometric properties of the objects of interest. After rectifying the multi-date, multi-resolution images by image-to-image registration with an optimal RMS value, object-based image classification is performed to extract building shapes from the images. Next, centroid-coincident matching is conducted on the extracted building shapes, based on the Euclidean distance between shape centroids (from shape T0 to shape T1 and vice versa), in order to define corresponding building objects. New and demolished buildings are then identified where the obtained distances are greater than the RMS value (no match at the same location).
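A minimal sketch of the centroid-coincident matching step, assuming building centroids have already been extracted at both dates; the tolerance plays the role of the RMS value mentioned above.

```python
# Sketch: pair T0 building centroids with their nearest T1 centroids and
# flag unmatched shapes as demolished (T0) or new (T1).
import numpy as np
from scipy.spatial import cKDTree

def match_buildings(centroids_t0, centroids_t1, tol):
    """centroids_*: (N, 2) arrays of map coordinates; tol: match tolerance."""
    tree = cKDTree(centroids_t1)
    dist, idx = tree.query(centroids_t0)
    demolished = np.where(dist > tol)[0]          # T0 shapes with no T1 match
    matched_t1 = set(idx[dist <= tol])
    new = [j for j in range(len(centroids_t1)) if j not in matched_t1]
    return demolished, np.array(new)
```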
Quantitative assessment of dynamic PET imaging data in cancer imaging.
Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E
2012-11-01
Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake or static imaging that provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features and especially in using dynamic PET imaging for quantitative therapeutic response monitoring for cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementations from cross-center calibration to the inadequacy of a common informatics infrastructure. Underlying principles and methodology of PET dynamic imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.
David, Ortiz P; Sierra-Sosa, Daniel; Zapirain, Begoña García
2017-01-06
Pressure ulcers have become a subject of study in recent years due to the high costs of treatment and the decreased quality of life of patients. These chronic wounds are related to the global increase in life expectancy, with geriatric and physically disabled patients being the groups principally affected by this condition. Diagnosis and treatment of these injuries by medical personnel usually take weeks or even months. Using non-invasive techniques, such as image processing, it is possible to analyze ulcers and aid in their diagnosis. This paper proposes a novel technique for image segmentation based on contrast changes, using synthetic frequencies obtained from the grayscale value available in each pixel of the image. These synthetic frequencies are calculated using the model of energy density over an electric field to describe a relation between a constant density and the image amplitude at a pixel. A toroidal geometry is used to decompose the image into different contrast levels by varying the synthetic frequencies. The decomposed image is then binarized by applying Otsu's threshold, yielding the contours that describe the contrast variations. Morphological operations are used to obtain the desired segment of the image. The proposed technique is evaluated on a database of 51 pressure ulcer images provided by the Centre IGURCO. Segmentation of these pressure ulcer images can aid in their diagnosis and treatment. To provide evidence of the technique's performance, digital image correlation was used as a measure, comparing the segments obtained using the methodology with the real segments. The proposed technique is compared with two benchmark algorithms. The technique achieves an average correlation of 0.89 with a variation of ±0.1 and a computational time of 9.04 seconds. The methodology presents better segmentation results than the benchmark algorithms, using less computational time and without the need for an initial condition.
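The binarisation and clean-up stages can be sketched with standard tools (the synthetic-frequency contrast decomposition itself is not reproduced):

```python
# Sketch: threshold one contrast level and clean the mask morphologically.
from skimage.filters import threshold_otsu
from skimage.morphology import binary_closing, binary_opening, disk

def segment_contrast_level(decomposed):
    """decomposed: one contrast-level image from the decomposition step."""
    mask = decomposed > threshold_otsu(decomposed)
    mask = binary_opening(mask, disk(3))         # drop small speckle
    return binary_closing(mask, disk(3))         # close gaps in the contour
```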
Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.
Saadia, Ayesha; Rashdi, Adnan
2016-12-01
Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects their quality and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter is proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel; fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of the image. In stage 2, the resulting image is further improved by a fractional order integration filter. The effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and for real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and the Improved Fractional Order Differential (IFD) model. The comparison has been done on a quantitative and qualitative basis. For the quantitative analysis, metrics such as Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results on artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
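A minimal sketch of stage 1, assuming a simple triangular fuzzy membership for the weights (the paper's exact membership functions are not given in the abstract):

```python
# Sketch: fuzzy weighted mean over 3x3 windows; weights decay with the
# absolute grey-level difference from the centre pixel.
import numpy as np

def fuzzy_weighted_mean(img, spread=30.0):
    out = img.astype(float).copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            win = img[i-1:i+2, j-1:j+2].astype(float)
            w = np.maximum(1.0 - np.abs(win - img[i, j]) / spread, 0.0)
            out[i, j] = np.sum(w * win) / np.sum(w)   # centre weight is 1, so sum > 0
    return out
```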
Van Pamel, Anton; Brett, Colin R; Lowe, Michael J S
2014-12-01
Improving the ultrasound inspection capability for coarse-grained metals remains of longstanding interest and is expected to become increasingly important for next-generation electricity power plants. Conventional ultrasonic A-, B-, and C-scans have been found to suffer from strong background noise caused by grain scattering, which can severely limit the detection of defects. However, in recent years, array probes and full matrix capture (FMC) imaging algorithms have unlocked exciting possibilities for improvements. To improve and compare these algorithms, we must rely on robust methodologies to quantify their performance. This article proposes such a methodology to evaluate the detection performance of imaging algorithms. For illustration, the methodology is applied to some example data using three FMC imaging algorithms; total focusing method (TFM), phase-coherent imaging (PCI), and decomposition of the time-reversal operator with multiple scattering filter (DORT MSF). However, it is important to note that this is solely to illustrate the methodology; this article does not attempt the broader investigation of different cases that would be needed to compare the performance of these algorithms in general. The methodology considers the statistics of detection, presenting the detection performance as probability of detection (POD) and probability of false alarm (PFA). A test sample of coarse-grained nickel super alloy, manufactured to represent materials used for future power plant components and containing some simple artificial defects, is used to illustrate the method on the candidate algorithms. The data are captured in pulse-echo mode using 64-element array probes at center frequencies of 1 and 5 MHz. In this particular case, it turns out that all three algorithms are shown to perform very similarly when comparing their flaw detection capabilities.
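The detection statistics can be sketched by sweeping an amplitude threshold over flaw-region and grain-noise values taken from the reconstructed images, tracing POD against PFA; a minimal sketch follows.

```python
# Sketch: ROC-style POD/PFA curve from image amplitudes in flaw and
# noise-only regions of the reconstructions.
import numpy as np

def pod_pfa_curve(flaw_amps, noise_amps, n=200):
    thresholds = np.linspace(noise_amps.min(), flaw_amps.max(), n)
    pod = np.array([(flaw_amps > t).mean() for t in thresholds])
    pfa = np.array([(noise_amps > t).mean() for t in thresholds])
    return pod, pfa
```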
ΤND: a thyroid nodule detection system for analysis of ultrasound images and videos.
Keramidas, Eystratios G; Maroulis, Dimitris; Iakovidis, Dimitris K
2012-06-01
In this paper, we present a computer-aided-diagnosis (CAD) system prototype, named TND (Thyroid Nodule Detector), for the detection of nodular tissue in ultrasound (US) thyroid images and videos acquired during thyroid US examinations. The proposed system incorporates an original methodology that involves a novel algorithm for automatic definition of the boundaries of the thyroid gland, and a novel approach for the extraction of noise resilient image features effectively representing the textural and the echogenic properties of the thyroid tissue. Through extensive experimental evaluation on real thyroid US data, its accuracy in thyroid nodule detection has been estimated to exceed 95%. These results attest to the feasibility of the clinical application of TND, for the provision of a second more objective opinion to the radiologists by exploiting image evidences.
NASA Astrophysics Data System (ADS)
Lesparre, N.; Boyle, A.; Grychtol, B.; Cabrera, J.; Marteau, J.; Adler, A.
2016-05-01
Electrical resistivity images supply information on sub-surface structures and are classically used to characterize fault geometry. Here we exploit the presence of a tunnel intersecting a regional fault to inject electrical currents between the surface and the tunnel, improving the image resolution at depth. We apply an original methodology for defining the inversion parametrization, based on pilot points, to better deal with the heterogeneous sounding of the medium. An enlarged region of high spatial resolution is shown by analysis of point spread functions as well as by inversion of synthetics. Such evaluations highlight the advantages of using transmission measurements, obtained by transferring a few electrodes from the main profile, to increase the sounding depth. Based on the resulting image, we propose a revised structure for the medium surrounding the Cernon fault, supported by geological observations and muon flux measurements.
Analysis of crystalline lens coloration using a black and white charge-coupled device camera.
Sakamoto, Y; Sasaki, K; Kojima, M
1994-01-01
To analyze lens coloration in vivo, we used a new type of Scheimpflug camera based on a black and white charge-coupled device (CCD) camera, and a new methodology was proposed. Scheimpflug images of the lens were taken three times, through red (R), green (G), and blue (B) filters, respectively. The three images corresponding to the R, G, and B channels were combined into one image on the cathode-ray tube (CRT) display. The spectral transmittance of the tricolor filters and the spectral sensitivity of the CCD camera were used to correct the scattered-light intensity of each image. Coloration of the lens was expressed on a CIE standard chromaticity diagram. The lens coloration of seven eyes analyzed by this method showed values almost the same as those obtained by the previous method using color film.
Tekes, A; Jackson, E M; Ogborn, J; Liang, S; Bledsoe, M; Durand, D J; Jallo, G; Huisman, T A G M
2016-06-01
Lean Six Sigma methodology is increasingly used to drive improvement in patient safety, quality of care, and cost-effectiveness throughout the US health care delivery system. To demonstrate our value as specialists, radiologists can combine lean methodologies with imaging expertise to optimize imaging elements-of-care pathways. In this article, we describe a Lean Six Sigma project with the goal of reducing the relative use of pediatric head CTs in our population of patients with hydrocephalus by 50% within 6 months. We applied a Lean Six Sigma methodology using a multidisciplinary team at a quaternary care academic children's center. The existing baseline imaging practice for hydrocephalus was outlined in a Kaizen session, and potential interventions were discussed. An improved radiation-free workflow with ultrafast MR imaging was created. Baseline data were collected for 3 months by using the departmental radiology information system. Data collection continued postintervention and during the control phase (each for 3 months). The percentage of neuroimaging per technique (head CT, head ultrasound, ultrafast brain MR imaging, and routine brain MR imaging) was recorded during each phase. The improved workflow resulted in a 75% relative reduction in the percentage of hydrocephalus imaging performed by CT between the pre- and postintervention/control phases (Z-test, P = .0001). Our lean interventions in the pediatric hydrocephalus care pathway resulted in a significant reduction in head CT orders and increased use of ultrafast brain MR imaging. © 2016 by American Journal of Neuroradiology.
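The reported pre/post comparison reads like a two-proportion z-test; a minimal sketch under that assumption follows, with hypothetical study counts rather than the authors' data.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    return (p1 - p2) / se

# Hypothetical counts: CT share falls from 40% of 200 studies to 10% of 210.
print(f"z = {two_proportion_z(0.40, 200, 0.10, 210):.2f}")
```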
NASA Astrophysics Data System (ADS)
Maurer, Thomas; Gustavos Trujillo Siliézar, Carlos; Oeser, Anne; Pohle, Ina; Hinz, Christoph
2016-04-01
In evolving initial landscapes, vegetation development depends on a variety of feedback effects. One of the less understood feedback loops is the interaction between throughfall and plant canopy development. The amount of throughfall is governed by the characteristics of the vegetation canopy, whereas vegetation pattern evolution may in turn depend on the spatio-temporal distribution of throughfall. Meteorological factors that may influence throughfall, while at the same time interacting with the canopy, include wind speed, wind direction, and rainfall intensity. Our objective is to investigate how throughfall, vegetation canopy, and meteorological variables interact in an exemplary eco-hydrological system in its initial development phase, in which the canopy is very heterogeneous and rapidly changing. For that purpose, we developed a methodological approach combining field methods, raster image analysis, and multivariate statistics. The research area for this study is the Hühnerwasser ('Chicken Creek') catchment in Lower Lusatia, Brandenburg, Germany, where after eight years of succession the spatial distribution of plant species is highly heterogeneous, leading to increasingly differentiated throughfall patterns. The constructed 6-ha catchment offers ideal conditions for our study due to the rapidly changing vegetation structure and the availability of complementary monitoring data. Throughfall data were obtained by 50 tipping-bucket rain gauges arranged in two transects and connected via a wireless sensor network, covering the predominant vegetation types of the catchment (locust copses, dense sallow thorn bushes and reeds, base herbaceous and medium-rise small-reed vegetation, and open areas covered by moss and lichens). The spatial configuration of the vegetation canopy at each measurement site was described via digital image analysis of hemispheric photographs of the canopy using the ArcGIS Spatial Analyst, GapLight, and ImageJ software. Meteorological data from two on-site weather stations (wind direction, wind speed, air temperature, air humidity, insolation, soil temperature, precipitation) were provided by the 'Research Platform Chicken Creek' (https://www.tu-cottbus.de/projekte/en/oekosysteme/startseite.html). Data were combined and multivariate statistical analyses (PCA, cluster analysis, regression trees) were conducted using R to i) obtain statistical indices describing the relevant characteristics of the data and ii) identify the determining factors for throughfall intensity. The methodology is currently being tested, and results will be presented. Preliminary evaluation of the image analysis approach showed only marginal systematic deviations between the different software tools applied, which makes the developed workflow a viable tool for canopy characterization. Results from this study will have a broad spectrum of possible applications, for instance the development/calibration of rainfall interception models, incorporation into eco-hydrological models, or testing the fault tolerance of wireless rainfall sensor networks.
Resting-State Functional Connectivity in the Infant Brain: Methods, Pitfalls, and Potentiality.
Mongerson, Chandler R L; Jennings, Russell W; Borsook, David; Becerra, Lino; Bajic, Dusica
2017-01-01
Early brain development is characterized by rapid growth and perpetual reconfiguration, driven by a dynamic milieu of heterogeneous processes. Postnatal brain plasticity is associated with increased vulnerability to environmental stimuli. However, little is known regarding the ontogeny and temporal manifestations of inter- and intra-regional functional connectivity that comprise functional brain networks. Resting-state functional magnetic resonance imaging (rs-fMRI) has emerged as a promising non-invasive neuroinvestigative tool, measuring spontaneous fluctuations in blood oxygen level dependent (BOLD) signal at rest that reflect baseline neuronal activity. Over the past decade, its application has expanded to infant populations, providing unprecedented insight into functional organization of the developing brain, as well as early biomarkers of abnormal states. However, many methodological issues of rs-fMRI analysis need to be resolved prior to standardization of the technique for infant populations. As a primary goal, this methodological manuscript will (1) present a robust methodological protocol to extract and assess resting-state networks in early infancy using independent component analysis (ICA), such that investigators without previous knowledge in the field can implement the analysis and reliably obtain viable results consistent with previous literature; (2) review the current methodological challenges and ethical considerations associated with the emerging field of infant rs-fMRI analysis; and (3) discuss the significance of rs-fMRI application in infants for future investigations of neurodevelopment in the context of early life stressors and pathological processes. The overarching goal is to catalyze efforts toward development of robust, infant-specific acquisition and preprocessing pipelines, as well as to promote greater transparency by researchers regarding the methods used.
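For orientation only, the core ICA step might look like the toy sketch below using scikit-learn's FastICA; real infant rs-fMRI pipelines involve dedicated neuroimaging tools and extensive preprocessing, so this is merely a schematic of the decomposition, and the data and component count are assumptions.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy preprocessed rs-fMRI matrix: rows = time points, columns = voxels.
rng = np.random.default_rng(1)
data = rng.standard_normal((150, 5000))

# Spatial ICA: decompose the data into spatially independent component maps.
ica = FastICA(n_components=20, max_iter=500, random_state=0)
time_courses = ica.fit_transform(data)   # (150, 20) component time courses
spatial_maps = ica.components_           # (20, 5000) voxel-wise component maps
```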
Mezgec, Simon; Eftimov, Tome; Bucher, Tamara; Koroušić Seljak, Barbara
2018-04-06
The present study tested the combination of an established and validated food-choice research method (the 'fake food buffet') with a new food-matching technology to automate data collection and analysis. The methodology combines fake-food image recognition using deep learning with food matching and standardization based on natural language processing. The former is distinctive in that it uses a single deep learning network to perform both the segmentation and the classification at the pixel level of the image. To assess its performance, measures based on the standard pixel accuracy and Intersection over Union were applied. The final accuracy of the deep learning model, trained on fake-food images acquired from 124 study participants and covering fifty-five food classes, was 92.18%, while the food matching was performed with a classification accuracy of 93%. The present findings are a step towards automating dietary assessment and food-choice research. The methodology outperforms other approaches in pixel accuracy, and since it is the first automatic solution for recognizing images of fake foods, the results could be used as a baseline for possible future studies. As the approach enables a semi-automatic description of recognized food items (e.g. with respect to FoodEx2), these can be linked to any food composition database that applies the same classification and description system.
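The two segmentation metrics named above are standard and easy to state in code; a small sketch follows, operating on generic class-indexed label images rather than the study's data.

```python
import numpy as np

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == truth).mean())

def mean_iou(pred, truth, n_classes):
    """Mean Intersection over Union across the classes present in the image."""
    ious = []
    for c in range(n_classes):
        union = np.logical_or(pred == c, truth == c).sum()
        if union:
            inter = np.logical_and(pred == c, truth == c).sum()
            ious.append(inter / union)
    return float(np.mean(ious))
```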
Mapping land cover from satellite images: A basic, low cost approach
NASA Technical Reports Server (NTRS)
Elifrits, C. D.; Barney, T. W.; Barr, D. J.; Johannsen, C. J.
1978-01-01
Simple, inexpensive methodologies developed for mapping general land cover and land use categories from LANDSAT images are reported. One methodology, a stepwise, interpretive, direct-tracing technique, was developed through working with university students from different disciplines with no previous experience in satellite image interpretation. The technique results in maps that are very accurate in relation to actual land cover, especially relative to the small investment in skill, time, and money needed to produce the products.
Zoppellaro, Giacomo; Venneri, Lucia; Khattar, Rajdeep S; Li, Wei; Senior, Roxy
2016-06-01
Ultrasound contrast agents may be used for the assessment of regional wall motion and myocardial perfusion, but are generally considered not suitable for deformation analysis. The aim of our study was to assess the feasibility of deformation imaging on contrast-enhanced images using a novel methodology. We prospectively enrolled 40 patients who underwent stress echocardiography with continuous intravenous infusion of SonoVue for the assessment of myocardial perfusion imaging with the flash replenishment technique. We compared longitudinal strain (Lε) values, assessed with a vendor-independent software (2D CPA), on 68 resting contrast-enhanced and 68 resting noncontrast recordings. Strain analysis on contrast recordings was evaluated in the first cardiac cycles after the flash. Tracking of contrast images was deemed feasible in all subjects and in all views. Contrast administration improved image quality and increased the number of segments used for deformation analysis. Lε of noncontrast and contrast-enhanced images were statistically different (-18.8 ± 4.5% and -22.8 ± 5.4%, respectively; P < 0.001), but their correlation was good (ICC 0.65, 95% CI 0.42-0.78). Patients with resting wall-motion abnormalities showed lower Lε values on contrast recordings (-18.6 ± 6.0% vs. -24.2 ± 5.5%; P < 0.01). Intra-operator and inter-operator reproducibility was good for both noncontrast and contrast images, with no statistical differences. Our study shows that deformation analysis on postflash contrast-enhanced images is feasible and reproducible. Therefore, it would be possible to perform a simultaneous evaluation of wall-motion abnormalities, volumes, ejection fraction, perfusion defects, and cardiac deformation on the same contrast recording. © 2016, Wiley Periodicals, Inc.
Weng, Hsu-Huei; Noll, Kyle R; Johnson, Jason M; Prabhu, Sujit S; Tsai, Yuan-Hsiung; Chang, Sheng-Wei; Huang, Yen-Chu; Lee, Jiann-Der; Yang, Jen-Tsung; Yang, Cheng-Ta; Tsai, Ying-Huang; Yang, Chun-Yuh; Hazle, John D; Schomer, Donald F; Liu, Ho-Ling
2018-02-01
Purpose To compare functional magnetic resonance (MR) imaging for language mapping (hereafter, language functional MR imaging) with direct cortical stimulation (DCS) in patients with brain tumors and to assess factors associated with its accuracy. Materials and Methods PubMed/MEDLINE and related databases were searched for research articles published between January 2000 and September 2016. Findings were pooled by using bivariate random-effects and hierarchic summary receiver operating characteristic curve models. Meta-regression and subgroup analyses were performed to evaluate whether publication year, functional MR imaging paradigm, magnetic field strength, statistical threshold, and analysis software affected classification accuracy. Results Ten articles with a total of 214 patients were included in the analysis. On a per-patient basis, the pooled sensitivity and specificity of functional MR imaging was 44% (95% confidence interval [CI]: 14%, 78%) and 80% (95% CI: 54%, 93%), respectively. On a per-tag basis (ie, each DCS stimulation site or "tag" was considered a separate data point across all patients), the pooled sensitivity and specificity were 67% (95% CI: 51%, 80%) and 55% (95% CI: 25%, 82%), respectively. The per-tag analysis showed significantly higher sensitivity for studies with shorter functional MR imaging session times (P = .03) and relaxed statistical threshold (P = .05). Significantly higher specificity was found when expressive language task (P = .02), longer functional MR imaging session times (P < .01), visual presentation of stimuli (P = .04), and stringent statistical threshold (P = .01) were used. Conclusion Results of this study showed moderate accuracy of language functional MR imaging when compared with intraoperative DCS, and the included studies displayed significant methodologic heterogeneity. © RSNA, 2017 Online supplemental material is available for this article.
Data mining for the analysis of hippocampal zones in Alzheimer's disease
NASA Astrophysics Data System (ADS)
Ovando Vázquez, Cesaré M.
2012-02-01
In this work, a methodology to classify people with Alzheimer's Disease (AD), Healthy Controls (HC), and people with Mild Cognitive Impairment (MCI) is presented. This methodology consists of an ensemble of Support Vector Machines (SVM) with hippocampal boxes (HB) as input data; these hippocampal zones are taken from Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) images. Two ways of constructing this ensemble are presented: the first consists of linear SVM models and the second of non-linear SVM models. Results demonstrate that the linear models classify HBs more accurately than the non-linear models for HC versus MCI, and that there are no differences between the two for HC versus AD.
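A hedged sketch of such an ensemble: one SVM per hippocampal box, combined here by majority vote. The combination rule and the 0/1 label encoding are assumptions for illustration; the paper's exact scheme may differ.

```python
import numpy as np
from sklearn.svm import SVC

def fit_hb_ensemble(box_features, labels, kernel="linear"):
    """Train one SVM per hippocampal box (HB).
    box_features: list of (n_subjects, n_features) arrays, one per HB."""
    return [SVC(kernel=kernel).fit(X, labels) for X in box_features]

def predict_hb_ensemble(models, box_features):
    """Majority vote across per-box SVMs, assuming labels in {0, 1}."""
    votes = np.stack([m.predict(X) for m, X in zip(models, box_features)])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```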
Quantification of Posterior Globe Flattening: Methodology Development and Validation
NASA Technical Reports Server (NTRS)
Lumpkins, S. B.; Garcia, K. M.; Sargsyan, A. E.; Hamilton, D. R.; Berggren, M. D.; Antonsen, E.; Ebert, D.
2011-01-01
Microgravity exposure affects visual acuity in a subset of astronauts, and mechanisms may include structural changes in the posterior globe and orbit. In particular, posterior globe flattening has been documented in several astronauts. This phenomenon is known to affect some terrestrial patient populations and has been shown to be associated with intracranial hypertension. It is commonly assessed by magnetic resonance imaging (MRI), computed tomography (CT), or B-mode ultrasound (US), without consistent objective criteria. NASA uses a semi-quantitative scale of 0-3 as part of eye/orbit MRI and US analysis for occupational monitoring purposes. The goal of this study was to initiate development of an objective quantification methodology for posterior globe flattening.
Automated analysis of clonal cancer cells by intravital imaging
Coffey, Sarah Earley; Giedt, Randy J; Weissleder, Ralph
2013-01-01
Longitudinal analyses of single cell lineages over prolonged periods have been challenging, particularly in processes characterized by high cell turnover such as inflammation, proliferation, or cancer. RGB marking has emerged as an elegant approach for enabling such investigations. However, methods for automated image analysis continue to be lacking. Here, to address this, we created a number of different multicolored poly- and monoclonal cancer cell lines for in vitro and in vivo use. To classify these cells in large-scale data sets, we subsequently developed and tested an automated algorithm based on hue selection. Our results showed that this method allows accurate analyses at a fraction of the computational time required by more complex color classification methods. Moreover, the methodology should be broadly applicable to both in vitro and in vivo analyses. PMID:24349895
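A minimal sketch of hue-based classification: each pixel is converted to HSV and gated by per-clone hue intervals. The gate values below are hypothetical, not the study's calibrated ranges.

```python
import colorsys
import numpy as np

def classify_by_hue(rgb_pixels, hue_gates):
    """Assign each RGB pixel (channel values in [0, 1]) to the first clone whose
    hue gate contains it; -1 = unclassified. hue_gates: list of (lo, hi) in [0, 1)."""
    labels = np.full(len(rgb_pixels), -1)
    for i, (r, g, b) in enumerate(rgb_pixels):
        h, _, _ = colorsys.rgb_to_hsv(r, g, b)
        for k, (lo, hi) in enumerate(hue_gates):
            if lo <= h < hi:
                labels[i] = k
                break
    return labels

# Hypothetical gates for three RGB-marked clones (red-ish, green-ish, blue-ish).
gates = [(0.95, 1.0), (0.28, 0.42), (0.55, 0.70)]
print(classify_by_hue([(0.9, 0.1, 0.1), (0.1, 0.8, 0.2)], gates))
```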
Goal-oriented rectification of camera-based document images.
Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J
2011-04-01
Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the appearance of the textual content in the document image, and incorporates a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology based on OCR accuracy together with a newly introduced measure obtained via a semi-automatic procedure.
Borri, Marco; Schmidt, Maria A; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M; Partridge, Mike; Bhide, Shreerang A; Nutting, Christopher M; Harrington, Kevin J; Newbold, Katie L; Leach, Martin O
2015-01-01
To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes. To evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes.
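A schematic version of the partitioning-plus-validation workflow, assuming a voxel-by-parameter feature matrix; the silhouette index below stands in for whichever cluster-validation criterion the authors used, and the data are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Hypothetical voxel matrix: rows = voxels, columns = DCE-MRI and DWI parameters.
rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 4))

scores = {}
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = silhouette_score(X, labels)   # cluster-validation index (assumed)

best_k = max(scores, key=scores.get)
print(f"optimal k by silhouette: {best_k}")
```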
Visualisation and quantitative analysis of the rodent malaria liver stage by real time imaging.
Ploemen, Ivo H J; Prudêncio, Miguel; Douradinha, Bruno G; Ramesar, Jai; Fonager, Jannik; van Gemert, Geert-Jan; Luty, Adrian J F; Hermsen, Cornelus C; Sauerwein, Robert W; Baptista, Fernanda G; Mota, Maria M; Waters, Andrew P; Que, Ivo; Lowik, Clemens W G M; Khan, Shahid M; Janse, Chris J; Franke-Fayard, Blandine M D
2009-11-18
The quantitative analysis of Plasmodium development in the liver, in laboratory animals and in cultured cells, is hampered by low parasite infection rates and the complicated methods required to monitor intracellular development. As a consequence, this important phase of the parasite's life cycle has been poorly studied compared to blood stages, for example in screening anti-malarial drugs. Here we report the use of a transgenic P. berghei parasite, PbGFP-Luc(con), expressing the bioluminescent reporter protein luciferase to visualize and quantify parasite development in liver cells, both in culture and in live mice, using real-time luminescence imaging. The reporter-parasite based quantification in cultured hepatocytes by real-time imaging or using a microplate reader correlates very well with established quantitative RT-PCR methods. For the first time, the liver stage of Plasmodium is visualized in whole bodies of live mice, and we were able to discriminate as few as 1-5 infected hepatocytes per liver in mice using 2D imaging and to identify individual infected hepatocytes by 3D imaging. The analysis of liver infections by whole-body imaging shows a good correlation with quantitative RT-PCR analysis of extracted livers. The luminescence-based analysis of the effects of various drugs on in vitro hepatocyte infection shows that this method can effectively be used for in vitro screening of compounds targeting Plasmodium liver stages. Furthermore, by analysing the effect of primaquine and tafenoquine in vivo we demonstrate the applicability of real-time imaging to assess parasite drug sensitivity in the liver. The simplicity and speed of quantitative analysis of liver-stage development by real-time imaging compared to the PCR methodologies, as well as the possibility to analyse liver development in live mice without surgery, opens up new possibilities for research on Plasmodium liver infections and for validating the effect of drugs and vaccines on the liver stage of Plasmodium.
Olsen, Timothy W.
2008-01-01
Purpose To establish a grading system of eye bank eyes using fundus autofluorescence (FAF) and identify a methodology that correlates FAF to age-related macular degeneration (AMD) with clinical correlation to the Age-Related Eye Disease Study (AREDS). Methods Two hundred sixty-two eye bank eyes were evaluated using a standardized analysis of FAF. Measurements were taken with the confocal scanning laser ophthalmoscope (cSLO). First, high-resolution, digital, stereoscopic, color images were obtained and graded according to AREDS criteria. With the neurosensory retina removed, mean FAF values were obtained from cSLO images using software analysis that excludes areas of atrophy and other artifact, generating an FAF value from a grading template. Age and AMD grade were compared to FAF values. An internal fluorescence reference standard was tested. Results Standardization of the cSLO machine demonstrated that reliable data could be acquired after a 1-hour warm-up. Images obtained prior to 1 hour had falsely elevated levels of FAF. In this initial analysis, there was no statistical correlation of age to mean FAF. There was a statistically significant decrease in FAF from AREDS grade 1, 2 to 3, 4 (P < .0001). An internal fluorescent standard may serve as a quantitative reference. Conclusions The Minnesota Grading System (MGS) of FAF (MGS-FAF) establishes a standardized methodology for grading eye bank tissue to quantify FAF compounds in the retinal pigment epithelium and correlate these findings to the AREDS. Future studies could then correlate specific FAF to the aging process, histopathology AMD phenotypes, and other maculopathies, as well as to analyze the biochemistry of autofluorescent fluorophores. PMID:19277247
MRI measurements of Blood-Brain Barrier function in dementia: A review of recent studies.
Raja, Rajikha; Rosenberg, Gary A; Caprihan, Arvind
2018-05-15
The blood-brain barrier (BBB) separates the systemic circulation and the brain, regulating transport of most molecules to protect the brain microenvironment. Multiple structural and functional components preserve the integrity of the BBB. Several imaging modalities are available to study disruption of the BBB. However, the subtle changes in BBB leakage that occur in vascular cognitive impairment and Alzheimer's disease have been less well studied. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is the most widely adopted non-invasive imaging technique for evaluating BBB breakdown. It is used as a significant marker across a wide range of diseases, from the large permeability leaks of brain tumors and multiple sclerosis to the more subtle disruption seen in chronic vascular disease and dementia. DCE-MRI analysis of the BBB includes both model-free parameters and quantitative parameters derived from pharmacokinetic modelling. We review MRI studies of BBB breakdown in dementia. The challenges in measuring subtle BBB changes and the state-of-the-art techniques are examined first. Subsequently, a systematic review comparing methodologies from recent in-vivo MRI studies is presented. Various factors related to subtle BBB permeability measurement, such as DCE-MRI acquisition parameters, arterial input function assessment, T1 mapping, and data analysis methods, are reviewed with the focus on finding the optimal technique. Finally, the reported BBB permeability values in dementia are compared across different studies and across various brain regions. We conclude that reliable measurement of low-level BBB permeability across sites remains a difficult problem and a standardization of the methodology for both data acquisition and quantitative analysis is required. This article is part of the Special Issue entitled 'Cerebral Ischemia'. Copyright © 2017 Elsevier Ltd. All rights reserved.
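One widely used quantitative model for subtle, low-leakage BBB permeability is the Patlak analysis; the sketch below shows its linear-fit form as a generic illustration, not the specific pipeline of any study reviewed, and it assumes the arterial input values are nonzero after the first time point.

```python
import numpy as np

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y(t), same length as t."""
    return np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))))

def patlak_fit(ct, cp, t):
    """Fit Ct(t)/Cp(t) = Ktrans * (integral of Cp)/Cp + vp by linear regression.
    ct: tissue concentration, cp: arterial input function, t: time in minutes.
    Returns (Ktrans per minute, plasma volume fraction vp)."""
    x = cumtrapz(cp, t)[1:] / cp[1:]   # Patlak time (skip t=0 to avoid 0/0)
    y = ct[1:] / cp[1:]
    ktrans, vp = np.polyfit(x, y, 1)
    return ktrans, vp
```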
Current target acquisition methodology in force on force simulations
NASA Astrophysics Data System (ADS)
Hixson, Jonathan G.; Miller, Brian; Mazz, John P.
2017-05-01
The U.S. Army RDECOM CERDEC NVESD MSD's target acquisition models have been used for many years by the military community in force on force simulations for training, testing, and analysis. There have been significant improvements to these models over the past few years. The most significant are the transition to the ACQUIRE TTP-TAS (ACQUIRE Targeting Task Performance Target Angular Size) methodology for all imaging sensors and the development of new discrimination criteria for urban environments and humans. This paper is intended to provide an overview of the current target acquisition modeling approach and provide data for the new discrimination tasks. This paper will discuss advances and changes to the models and methodologies used to: (1) design and compare sensors' performance, (2) predict expected target acquisition performance in the field, (3) predict target acquisition performance for combat simulations, and (4) conduct model data validation for combat simulations.
Vaccine Images on Twitter: Analysis of What Images are Shared.
Chen, Tao; Dredze, Mark
2018-04-03
Visual imagery plays a key role in health communication; however, there is little understanding of what aspects of vaccine-related images make them effective communication aids. Twitter, a popular venue for discussions related to vaccination, provides numerous images that are shared with tweets. The objectives of this study were to understand how images are used in vaccine-related tweets and provide guidance with respect to the characteristics of vaccine-related images that correlate with the higher likelihood of being retweeted. We collected more than one million vaccine image messages from Twitter and characterized various properties of these images using automated image analytics. We fit a logistic regression model to predict whether or not a vaccine image tweet was retweeted, thus identifying characteristics that correlate with a higher likelihood of being shared. For comparison, we built similar models for the sharing of vaccine news on Facebook and for general image tweets. Most vaccine-related images are duplicates (125,916/237,478; 53.02%) or taken from other sources, not necessarily created by the author of the tweet. Almost half of the images contain embedded text, and many include images of people and syringes. The visual content is highly correlated with a tweet's textual topics. Vaccine image tweets are twice as likely to be shared as nonimage tweets. The sentiment of an image and the objects shown in the image were the predictive factors in determining whether an image was retweeted. We are the first to study vaccine images on Twitter. Our findings suggest future directions for the study and use of vaccine imagery and may inform communication strategies around vaccination. Furthermore, our study demonstrates an effective study methodology for image analysis. ©Tao Chen, Mark Dredze. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 03.04.2018.
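A schematic of the modeling step described above: logistic regression of retweet status on per-image features. The feature set and the tiny dataset below are hypothetical stand-ins for the study's automated image analytics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-image features: [has_embedded_text, n_people, sentiment_score]
X = np.array([[1, 0,  0.4],
              [0, 2, -0.3],
              [1, 1,  0.9],
              [0, 0,  0.0],
              [1, 3, -0.8],
              [0, 1,  0.5]])
y = np.array([1, 0, 1, 0, 1, 0])   # 1 = retweeted at least once

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])   # per-feature effect on the odds of sharing
print(dict(zip(["embedded_text", "n_people", "sentiment"], odds_ratios.round(2))))
```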
Effects of and attention to graphic warning labels on cigarette packages.
Süssenbach, Philipp; Niemeier, Sarah; Glock, Sabine
2013-01-01
The present study investigates the effects of graphic cigarette warnings compared to text-only cigarette warnings on smokers' explicit (i.e. ratings of the packages, cognitions about smoking, perceived health risk, quit intentions) and implicit attitudes. In addition, participants' visual attention towards the graphic warnings was recorded using eye-tracking methodology. Sixty-three smokers participated in the present study and viewed either graphic cigarette warnings with aversive and non-aversive images or text-only warnings. Data were analysed using analysis of variance and correlation analysis. Graphic cigarette warnings with aversive content, in particular, drew attention and elicited high threat. However, whereas attention directed to the textual information of the graphic warnings predicted smokers' risk perceptions, attention directed to the images of the graphic warnings did not. Moreover, smokers in the graphic warning condition reported more positive cognitions about smoking, thus revealing cognitive dissonance. Smokers employ defensive psychological mechanisms when confronted with threatening warnings. Although aversive images attract attention, they do not promote health knowledge. Implications for graphic health warnings and the importance of taking their content (i.e. aversive vs. non-aversive images) into account are discussed.
Homogeneous Canine Chest Phantom Construction: A Tool for Image Quality Optimization.
Pavan, Ana Luiza Menegatti; Rosa, Maria Eugênia Dela; Giacomini, Guilherme; Bacchim Neto, Fernando Antonio; Yamashita, Seizo; Vulcano, Luiz Carlos; Duarte, Sergio Barbosa; Miranda, José Ricardo de Arruda; de Pina, Diana Rodrigues
2016-01-01
Digital radiographic imaging is increasing in veterinary practice. The use of radiation demands responsibility to maintain high image quality. Low doses are necessary because workers are requested to restrain the animal. Digital systems must be optimized to avoid the unnecessary exposure increases known as dose creep. Homogeneous phantoms are widely used to optimize image quality and dose. We developed an automatic computational methodology to classify and quantify tissues (i.e., lung tissue, adipose tissue, muscle tissue, and bone) in canine chest computed tomography exams. The thickness of each tissue was converted to simulator materials (i.e., Lucite, aluminum, and air). Dogs were separated into groups of 20 animals each according to weight. Mean weights were 6.5 ± 2.0 kg, 15.0 ± 5.0 kg, 32.0 ± 5.5 kg, and 50.0 ± 12.0 kg for the small, medium, large, and giant groups, respectively. One-way analysis of variance revealed significant differences between groups in all simulator material thicknesses (p < 0.05). As a result, four phantoms were constructed for dorsoventral and lateral views. In conclusion, the present methodology allows the development of phantoms of the canine chest and possibly other body regions and/or animals. The proposed phantom is a practical tool that may be employed in future work to optimize veterinary X-ray procedures.
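A sketch of the classify-and-sum idea behind the tissue quantification, assuming simple Hounsfield-unit gates per tissue; the HU ranges are representative textbook values, not necessarily the authors' thresholds.

```python
import numpy as np

# Representative Hounsfield-unit gates per tissue class (assumed values).
HU_RANGES = {
    "lung":    (-1000, -400),
    "adipose": (-150,  -30),
    "muscle":  (10,     60),
    "bone":    (300,   3000),
}

def tissue_thickness_mm(hu_ray, voxel_mm):
    """Classify the voxels along one CT ray and return per-tissue thickness."""
    hu_ray = np.asarray(hu_ray)
    return {name: float(((hu_ray >= lo) & (hu_ray <= hi)).sum() * voxel_mm)
            for name, (lo, hi) in HU_RANGES.items()}

print(tissue_thickness_mm([-900, -850, -60, 40, 45, 500], voxel_mm=1.0))
```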
NASA Astrophysics Data System (ADS)
Renaud, Olivier; Heintzmann, Rainer; Sáez-Cirión, Asier; Schnelle, Thomas; Mueller, Torsten; Shorte, Spencer
2007-02-01
Three-dimensional imaging provides high-content information from living intact biology, and can serve as a visual screening cue. In the case of single-cell imaging the current state of the art uses so-called "axial through-stacking". However, three-dimensional axial through-stacking requires that the object (i.e. a living cell) be adherently stabilized on an optically transparent surface, usually glass, evidently precluding use of cells in suspension. Aiming to overcome this limitation we present here the utility of dielectric field trapping of single cells in three-dimensional electrode cages. Our approach allows gentle and precise spatial orientation and vectored rotation of living, non-adherent cells in fluid suspension. Using various modes of widefield and confocal microscope imaging we show how so-called "microrotation" can provide a unique and powerful method for multiple point-of-view (three-dimensional) interrogation of intact living biological micro-objects (e.g. single cells, cell aggregates, and embryos). Further, we show how visual screening by microrotation imaging can be combined with micro-fluidic sorting, allowing selection of rare phenotype targets from small populations of cells in suspension, and subsequent one-step single-cell cloning (with high viability). Our methodology, combining high-content 3D visual screening with one-step single-cell cloning, will impact diverse paradigms, for example cytological and cytogenetic analysis of haematopoietic stem cells, blood cells including lymphocytes, and cancer cells.
NASA Astrophysics Data System (ADS)
Pietrzyk, Mariusz W.; Manning, David J.; Dix, Alan; Donovan, Tim
2009-02-01
Aim: The goal of the study is to determine the spatial frequency characteristics at image locations associated with observers' overt and covert decisions, and to find out whether there are similarities within observer groups of the same radiological experience or the same accuracy level. Background: The radiological task is described as a visual-search decision-making procedure involving visual perception and cognitive processing. Humans perceive the world through a number of spatial frequency channels, each sensitive to visual information carried by different spatial frequency ranges and orientations. Recent studies have shown that particular physical properties of local and global image-based elements are correlated with the performance and the level of experience of human observers in breast cancer and lung nodule detection. Neurological findings in visual perception were an inspiration for wavelet applications in vision research, because the methodology tries to mimic the brain's processing algorithms. Methods: A wavelet-based analysis of a set of postero-anterior chest radiographs was used to characterize the perceptual preferences of observers with different levels of experience in the radiological task. Psychophysical methodology was applied to track eye movements over the image, and particular ROIs related to the observers' fixation clusters were analysed in the space framed by Daubechies functions. Results: Significant differences were found between the spatial frequency characteristics at the locations of different decisions.
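A toy version of the wavelet step: a multilevel 2-D Daubechies decomposition of a fixation-centered ROI, summarized by detail-level energies. The 'db4' wavelet, the ROI size, and the random data are all assumptions for illustration.

```python
import numpy as np
import pywt

# Hypothetical 64x64 ROI centred on a fixation cluster.
roi = np.random.default_rng(3).standard_normal((64, 64))

# Three-level 2-D decomposition with an assumed Daubechies wavelet.
coeffs = pywt.wavedec2(roi, wavelet="db4", level=3)

# Energy per detail level as a simple spatial-frequency signature of the ROI.
energies = [sum(float((band ** 2).sum()) for band in detail)
            for detail in coeffs[1:]]
print(energies)   # coarse-to-fine detail energies
```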
Foundations for Measuring Volume Rendering Quality
NASA Technical Reports Server (NTRS)
Williams, Peter L.; Uselton, Samuel P.; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
The goal of this paper is to provide a foundation for objectively comparing volume rendered images. The key elements of the foundation are: (1) a rigorous specification of all the parameters needed to define the conditions under which a volume rendered image is generated; (2) a methodology for difference classification, including a suite of functions or metrics to quantify and classify the difference between two volume rendered images that will support an analysis of the relative importance of particular differences. The results of this method can be used to study the changes caused by modifying particular parameter values, to compare and quantify changes between images of similar data sets rendered in the same way, and even to detect errors in the design, implementation or modification of a volume rendering system. If one has a benchmark image, for example one created by a high accuracy volume rendering system, the method can be used to evaluate the accuracy of a given image.
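The kind of difference metrics such a suite might include can be sketched directly; the functions below are generic examples of image-comparison measures, not the paper's specific metric definitions.

```python
import numpy as np

def rmse(a, b):
    """Root-mean-square difference between two rendered images."""
    return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

def max_abs_diff(a, b):
    """Worst-case per-pixel difference."""
    return float(np.max(np.abs(a.astype(float) - b.astype(float))))

def difference_map(a, b):
    """Per-pixel difference image for localizing and classifying discrepancies."""
    return np.abs(a.astype(float) - b.astype(float))
```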
Towards adaptive, streaming analysis of x-ray tomography data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thomas, Mathew; Kleese van Dam, Kerstin; Marshall, Matthew J.
2015-03-04
Temporal and spatial resolution of chemical imaging methodologies such as x-ray tomography are rapidly increasing, leading to more complex experimental procedures and fast-growing data volumes. Automated analysis pipelines and big data analytics are becoming essential to effectively evaluate the results of such experiments. Offering those data techniques in an adaptive, streaming environment can further substantially improve the scientific discovery process, by enabling experimental control and steering based on the evaluation of emerging phenomena as they are observed by the experiment. Pacific Northwest National Laboratory's (PNNL) Chemical Imaging Initiative (CII - http://imaging.pnnl.gov/ ) has worked since 2011 towards developing a framework that allows users to rapidly compose and customize high-throughput experimental analysis pipelines for multiple instrument types. The framework, named ‘Rapid Experimental Analysis’ (REXAN) Framework [1], is based on the idea of reusable component libraries and utilizes the PNNL-developed collaborative data management and analysis environment ‘Velo’ to provide a user-friendly analysis and data management environment for experimental facilities. This article will discuss the capabilities established for X-ray tomography, discuss lessons learned, and provide an overview of our more recent work in the Analysis in Motion Initiative (AIM - http://aim.pnnl.gov/ ) at PNNL to provide REXAN capabilities in a streaming environment.
NASA Technical Reports Server (NTRS)
Martin, David; Borowski, Allan; Bungo, Michael W.; Dulchavsky, Scott; Gladding, Patrick; Greenberg, Neil; Hamilton, Doug; Levine, Benjamin D.; Norwoord, Kelly; Platts, Steven H.;
2011-01-01
Echocardiography is ideally suited for cardiovascular imaging in remote environments, but the expertise to perform it is often lacking. In 2001, an ATL HDI5000 was delivered to the International Space Station (ISS). The instrument is currently being used in a study to investigate the impact of long-term microgravity on cardiovascular function. The purpose of this report is to describe the methodology for remote guidance of echocardiography in space. Methods: In the year before launch of an ISS mission, potential astronaut echocardiographic operators participate in 5 sessions to train for echo acquisitions that occur roughly monthly during the mission, including one exercise echocardiogram. The focus of training is familiarity with the study protocol and remote guidance procedures. On-orbit, real-time guidance of in-flight acquisitions is provided by a sonographer in the Telescience Center of Mission Control. Physician investigators with remote access are able to relay comments on image optimization to the sonographer. A live video feed is relayed from the ISS to the ground via the Tracking and Data Relay Satellite System with a 2-second transmission delay. The expert sonographer uses these images along with two-way audio to provide instructions and feedback. Images are stored in non-compressed DICOM format for asynchronous relay to the ground for subsequent off-line analysis. Results: Since June 2009, a total of 19 resting echocardiograms and 4 exercise studies have been performed in-flight. Average acquisition time has been 45 minutes, reflecting 26,000 km of ISS travel per study. Image quality has been adequate in all studies, but remote guidance has proven imperative for fine-tuning imaging and prioritizing views when communication outages limit the study duration. Typical resting studies have included 12 video loops and 21 still-frame images requiring 750 MB of storage. Conclusions: Despite limited crew training, remote guidance allows research-quality echocardiography to be performed by non-experts aboard the ISS. Analysis is underway and additional subjects are being recruited to define the impact of microgravity on cardiac structure and systolic and diastolic function.
3D-Printed Tissue-Mimicking Phantoms for Medical Imaging and Computational Validation Applications
Shahmirzadi, Danial; Li, Ronny X.; Doyle, Barry J.; Konofagou, Elisa E.; McGloughlin, Tim M.
2014-01-01
Abdominal aortic aneurysm (AAA) is a permanent, irreversible dilation of the distal region of the aorta. Recent efforts have focused on improved AAA screening and biomechanics-based failure prediction. Idealized and patient-specific AAA phantoms are often employed to validate numerical models and imaging modalities. To produce such phantoms, the investment casting process is frequently used, reconstructing the 3D vessel geometry from computed tomography patient scans. In this study the alternative use of 3D printing to produce phantoms is investigated. The mechanical properties of flexible 3D-printed materials are benchmarked against proven elastomers. We demonstrate the utility of this process with particular application to the emerging imaging modality of ultrasound-based pulse wave imaging, a noninvasive diagnostic methodology being developed to obtain regional vascular wall stiffness properties, differentiating normal and pathologic tissue in vivo. Phantom wall displacements under pulsatile loading conditions were observed, showing good correlation to fluid–structure interaction simulations and regions of peak wall stress predicted by finite element analysis. 3D-printed phantoms show a strong potential to improve medical imaging and computational analysis, potentially helping bridge the gap between experimental and clinical diagnostic tools. PMID:28804733
Infrared Imaging of Carbon and Ceramic Composites: Data Reproducibility
NASA Astrophysics Data System (ADS)
Knight, B.; Howard, D. R.; Ringermacher, H. I.; Hudson, L. D.
2010-02-01
Infrared NDE techniques have proven to be superior for imaging of flaws in ceramic matrix composites (CMC) and carbon silicon carbide composites (C/SiC). Not only can one obtain accurate depth gauging of flaws such as delaminations and layered porosity in complex-shaped components such as airfoils and other aeronautical components, but excellent reproducibility of image data is also obtainable using the STTOF (Synthetic Thermal Time-of-Flight) methodology. The imaging of large complex shapes is fast and reliable. The application of this methodology to large C/SiC flight components at the NASA Dryden Flight Research Center is described.
WE-B-BRC-00: Concepts in Risk-Based Assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
Prospective quality management techniques, long used by engineering and industry, have become a growing aspect of efforts to improve quality management and safety in healthcare. These techniques are of particular interest to medical physics as the scope and complexity of clinical practice continue to grow, thus making the prescriptive methods we have used harder to apply and potentially less effective for our interconnected and highly complex healthcare enterprise, especially in imaging and radiation oncology. An essential part of most prospective methods is the need to assess the various risks associated with problems, failures, errors, and design flaws in our systems. We therefore begin with an overview of risk assessment methodologies used in healthcare and industry and discuss their strengths and weaknesses. The rationale for use of process mapping, failure modes and effects analysis (FMEA), and fault tree analysis (FTA) by TG-100 will be described, as well as suggestions for the way forward. This is followed by discussion of radiation oncology-specific risk assessment strategies and issues, including the TG-100 effort to evaluate IMRT and other ways to think about risk in the context of radiotherapy. Incident learning systems, local as well as the ASTRO/AAPM ROILS system, can also be useful in the risk assessment process. Finally, risk in the context of medical imaging will be discussed. Radiation (and other) safety considerations, as well as lack of quality and certainty, all contribute to the potential risks associated with suboptimal imaging. The goal of this session is to summarize a wide variety of risk analysis methods and issues to give the medical physicist access to tools which can better define risks (and their importance) which we work to mitigate with both prescriptive and prospective risk-based quality management methods. Learning Objectives: description of risk assessment methodologies used in healthcare and industry; discussion of radiation oncology-specific risk assessment strategies and issues; evaluation of risk in the context of medical imaging and image quality. E. Samei: Research grants from Siemens and GE.
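FMEA, as adopted by TG-100, ranks failure modes by a risk priority number, the product of occurrence, severity, and lack-of-detectability scores; a one-line sketch with hypothetical scores:

```python
def rpn(occurrence, severity, detectability):
    """FMEA risk priority number; each factor is typically scored on a 1-10
    scale, with detectability scored so that hard-to-detect failures rank high."""
    return occurrence * severity * detectability

# Hypothetical failure mode in an IMRT planning step.
print(rpn(occurrence=3, severity=8, detectability=4))   # RPN = 96
```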
Population-based imaging biobanks as source of big data.
Gatidis, Sergios; Heber, Sophia D; Storz, Corinna; Bamberg, Fabian
2017-06-01
Advances in the computational sciences over the last decades have enabled the introduction of novel methodological approaches in biomedical research. Acquiring extensive and comprehensive data about a research subject and subsequently extracting significant information has opened new possibilities for gaining insight into biological and medical processes. This so-called big data approach has recently found its way into medical imaging, and numerous epidemiological studies have been implementing advanced imaging to identify imaging biomarkers that provide information about physiological processes, including normal development and aging, but also about the development of pathological disease states. The purpose of this article is to present existing epidemiological imaging studies and to discuss opportunities, methodological and organizational aspects, and challenges that population imaging poses to the field of big data research.
2014-01-01
Background The ability to walk independently is a primary goal for rehabilitation after stroke. Gait analysis provides a great amount of valuable information, while functional magnetic resonance imaging (fMRI) offers a powerful approach to define networks involved in motor control. The present study reports a new methodology based on both fMRI and gait analysis outcomes in order to investigate the ability of fMRI to reflect the phases of motor learning before/after electromyographic biofeedback treatment: the preliminary fMRI results of a post-stroke subject’s brain activation, during passive and active ankle dorsal/plantarflexion, before and after biofeedback (BFB) rehabilitation are reported and their correlation with gait analysis data investigated. Methods A control subject and a post-stroke patient with chronic hemiparesis were studied. Functional magnetic resonance images were acquired during a block-design protocol on both subjects while performing passive and active ankle dorsal/plantarflexion. fMRI and gait analysis were assessed on the patient before and after electromyographic biofeedback rehabilitation treatment during gait activities. Lower limb three-dimensional kinematics, kinetics, and surface electromyography were evaluated. Correlation between fMRI and gait analysis categorical variables was assessed: agreement/disagreement was assigned to each variable according to whether the value was inside/outside the normative range (gait analysis), or according to the presence of normal/diffuse/no activation of motor areas (fMRI). Results Altered fMRI activity was found in the post-stroke patient before biofeedback rehabilitation with respect to the control subject. Meanwhile, the patient showed a diffuse but more limited brain activation after treatment (fewer voxels). The post-stroke gait data showed a trend towards the normal range: speed, stride length, ankle power, and ankle positive work increased. Preliminary correlation analysis revealed that consistent changes were observed both for the fMRI data and the gait analysis data after treatment (R > 0.89): this could be related to the possible effects BFB might have on the central as well as on the peripheral nervous system. Conclusions Our findings showed that this methodology allows evaluation of the relationship between alterations in gait and brain activation of a post-stroke patient. Such a methodology, if applied to a larger sample of subjects, could provide information about the specific motor area involved in a rehabilitation treatment. PMID:24716475
GUIDOS: tools for the assessment of pattern, connectivity, and fragmentation
NASA Astrophysics Data System (ADS)
Vogt, Peter
2013-04-01
Pattern, connectivity, and fragmentation can be considered pillars of a quantitative analysis of digital landscape images. The free software toolbox GUIDOS (http://forest.jrc.ec.europa.eu/download/software/guidos) includes a variety of dedicated methodologies for the quantitative assessment of these features. Amongst others, Morphological Spatial Pattern Analysis (MSPA) is used for an intuitive description of image pattern structures and the automatic detection of connectivity pathways. GUIDOS includes tools for the detection and quantitative assessment of key nodes and links, as well as to define connectedness in raster images and to set up appropriate input files for an enhanced network analysis using Conefor Sensinode. Finally, fragmentation is usually defined from a species point of view, but a generic and quantifiable indicator is needed to measure fragmentation and its changes. Some preliminary results for different conceptual approaches will be shown for a sample dataset. Complemented by pre- and post-processing routines and a complete GIS environment, the portable GUIDOS Toolbox may facilitate a holistic assessment in risk assessment studies, landscape planning, and conservation/restoration policies. Alternatively, individual analysis components may contribute to or enhance studies conducted with other software packages in landscape ecology.
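A minimal illustration of the core/edge split at the heart of MSPA-style pattern analysis, using scipy's binary morphology. This is a two-class toy under stated assumptions, not the GUIDOS implementation, which distinguishes many more classes (islet, bridge, loop, branch, perforation):

```python
import numpy as np
from scipy import ndimage

def mspa_core_edge(mask, edge_width=1):
    """Split a binary landscape mask into 'core' pixels (those that
    survive erosion by edge_width) and 'edge' pixels (the rest of the
    foreground), a simplified flavor of the MSPA idea."""
    structure = ndimage.generate_binary_structure(2, 2)  # 8-connectivity
    core = ndimage.binary_erosion(mask, structure, iterations=edge_width)
    edge = mask & ~core
    return core, edge

# Tiny synthetic forest mask: two patches joined by a thin corridor.
mask = np.zeros((20, 30), dtype=bool)
mask[3:12, 2:10] = True          # patch A
mask[4:11, 20:28] = True         # patch B
mask[7, 10:20] = True            # 1-pixel corridor
core, edge = mspa_core_edge(mask)
print("core px:", core.sum(), "edge px:", edge.sum())
```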
Corkhill, Claire L; Bridge, Jonathan W; Chen, Xiaohui C; Hillel, Phil; Thornton, Steve F; Romero-Gonzalez, Maria E; Banwart, Steven A; Hyatt, Neil C
2013-12-03
We present a novel methodology for determining the transport of technetium-99m, a γ-emitting metastable isomer of (99)Tc, through quartz sand and porous media relevant to the disposal of nuclear waste in a geological disposal facility (GDF). Quartz sand is utilized as a model medium, and the applicability of the methodology to determine radionuclide transport in engineered backfill cement is explored using the UK GDF candidate backfill cement, Nirex Reference Vault Backfill (NRVB), in a model system. Two-dimensional distributions in (99m)Tc activity were collected at millimeter-resolution using decay-corrected gamma camera images. Pulse-inputs of ~20 MBq (99m)Tc were introduced into short (<10 cm) water-saturated columns at a constant flow of 0.33 mL min(-1). Changes in calibrated mass distribution of (99m)Tc at 30 s intervals, over a period of several hours, were quantified by spatial moments analysis. Transport parameters were fitted to the experimental data using a one-dimensional convection-dispersion equation, yielding transport properties for this radionuclide in a model GDF environment. These data demonstrate that (99)Tc in the pertechnetate form (Tc(VII)O4(-)) does not sorb to cement backfill during transport under model conditions, resulting in closely conservative transport behavior. This methodology represents a quantitative development of radiotracer imaging and offers the opportunity to conveniently and rapidly characterize transport of gamma-emitting isotopes in opaque media, relevant to the geological disposal of nuclear waste and potentially to a wide variety of other subsurface environments.
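The spatial-moments step lends itself to a compact sketch: for a conservative tracer obeying the 1D convection-dispersion equation, the centroid of the imaged mass distribution drifts at velocity v and its spatial variance grows at rate 2D. A hedged numpy example with purely illustrative profiles, not the paper's data:

```python
import numpy as np

def moments(x, c):
    """Zeroth/first/second spatial moments of a 1-D mass profile c(x)."""
    m0 = np.trapz(c, x)                        # total mass
    mu = np.trapz(x * c, x) / m0               # centroid position
    var = np.trapz((x - mu) ** 2 * c, x) / m0  # spatial variance
    return m0, mu, var

# Hypothetical axial 99mTc profiles integrated from two gamma-camera
# frames taken dt seconds apart (values are illustrative only).
x = np.linspace(0, 0.10, 200)                  # column axis, m
c1 = np.exp(-((x - 0.02) ** 2) / (2 * 0.004 ** 2))
c2 = np.exp(-((x - 0.05) ** 2) / (2 * 0.006 ** 2))
dt = 600.0                                     # s between frames

_, mu1, var1 = moments(x, c1)
_, mu2, var2 = moments(x, c2)
v = (mu2 - mu1) / dt           # mean pore-water velocity, m/s
D = (var2 - var1) / (2 * dt)   # longitudinal dispersion coefficient, m^2/s
print(f"v = {v:.2e} m/s, D = {D:.2e} m^2/s")
```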
NASA Astrophysics Data System (ADS)
Neriani, Kelly E.; Herbranson, Travis J.; Reis, George A.; Pinkus, Alan R.; Goodyear, Charles D.
2006-05-01
While vast numbers of image enhancing algorithms have already been developed, the majority of these algorithms have not been assessed in terms of their visual performance-enhancing effects using militarily relevant scenarios. The goal of this research was to apply a visual performance-based assessment methodology to evaluate six algorithms that were specifically designed to enhance the contrast of digital images. The image enhancing algorithms used in this study included three different histogram equalization algorithms, the Autolevels function, the Recursive Rational Filter technique described in Marsi, Ramponi, and Carrato [1], and the multiscale Retinex algorithm described in Rahman, Jobson and Woodell [2]. The methodology used in the assessment has been developed to acquire objective human visual performance data as a means of evaluating the contrast enhancement algorithms. Objective performance metrics, response time and error rate, were used to compare algorithm-enhanced images versus two baseline conditions: original non-enhanced images and contrast-degraded images. Observers completed a visual search task using a spatial forced-choice paradigm. Observers searched images for a target (a military vehicle) hidden among foliage and then indicated in which quadrant of the screen the target was located. Response time and percent correct were measured for each observer. Results of the study and future directions are discussed.
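For readers wanting to experiment with the algorithm families named above, a small sketch using scikit-image's generic histogram-equalization implementations; these are stand-ins, not the study's exact six algorithms, and RMS contrast is a crude objective proxy for the human performance metrics the study actually measured:

```python
import numpy as np
from skimage import data, exposure

img = data.camera() / 255.0               # stand-in for a scene image

# Global histogram equalization and CLAHE, two of the contrast-
# enhancement families evaluated in visual-performance studies.
eq_global = exposure.equalize_hist(img)
eq_clahe = exposure.equalize_adapthist(img, clip_limit=0.02)

for name, im in [("original", img), ("global", eq_global), ("CLAHE", eq_clahe)]:
    print(f"{name:9s} RMS contrast = {im.std():.3f}")
```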
7T MRI subthalamic nucleus atlas for use with 3T MRI.
Milchenko, Mikhail; Norris, Scott A; Poston, Kathleen; Campbell, Meghan C; Ushe, Mwiza; Perlmutter, Joel S; Snyder, Abraham Z
2018-01-01
Deep brain stimulation (DBS) of the subthalamic nucleus (STN) reduces motor symptoms in most patients with Parkinson disease (PD), yet may produce untoward effects. Investigation of DBS effects requires accurate localization of the STN, which can be difficult to identify on magnetic resonance images collected with clinically available 3T scanners. The goal of this study is to develop a high-quality STN atlas that can be applied to standard 3T images. We created a high-definition STN atlas derived from seven older participants imaged at 7T. This atlas was nonlinearly registered to a standard template representing 56 patients with PD imaged at 3T. This process required development of methodology for nonlinear multimodal image registration. We demonstrate mm-scale STN localization accuracy by comparison of our 3T atlas with a publicly available 7T atlas. We also demonstrate less agreement with an earlier histological atlas. STN localization error in the 56 patients imaged at 3T was less than 1 mm on average. Our methodology enables accurate STN localization in individuals imaged at 3T. The STN atlas and underlying 3T average template in MNI space are freely available to the research community. The image registration methodology developed in the course of this work may be generally applicable to other datasets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coolens, Catherine, E-mail: catherine.coolens@rmp.uhn.on.ca; Department of Radiation Oncology, University of Toronto, Toronto, Ontario; Institute of Biomaterials and Biomedical Engineering, University of Toronto, Toronto, Ontario
2015-01-01
Objectives: Development of perfusion imaging as a biomarker requires more robust methodologies for quantification of tumor physiology that allow assessment of volumetric tumor heterogeneity over time. This study proposes a parametric method for automatically analyzing perfused tissue from volumetric dynamic contrast-enhanced (DCE) computed tomography (CT) scans and assesses whether this 4-dimensional (4D) DCE approach is more robust and accurate than conventional, region-of-interest (ROI)-based CT methods in quantifying tumor perfusion, with preliminary evaluation in metastatic brain cancer. Methods and Materials: Functional parameter reproducibility and analysis of sensitivity to imaging resolution and arterial input function were evaluated in image sets acquired from a 320-slice CT with a controlled flow phantom and patients with brain metastases, whose treatments were planned for stereotactic radiation surgery and who consented to a research ethics board-approved prospective imaging biomarker study. A voxel-based temporal dynamic analysis (TDA) methodology was used at baseline, at day 7, and at day 20 after treatment. The ability to detect changes in kinetic parameter maps in clinical data sets was investigated for both 4D TDA and conventional 2D ROI-based analysis methods. Results: A total of 7 brain metastases in 3 patients were evaluated over the 3 time points. The 4D TDA method showed improved spatial efficacy and accuracy of perfusion parameters compared to ROI-based DCE analysis (P<.005), with a reproducibility error of less than 2% when tested with DCE phantom data. Clinically, changes in the transfer constant from the blood plasma into the extracellular extravascular space (Ktrans) were seen when using TDA, with substantially smaller errors than the 2D method on both day 7 post radiation surgery (±13%; P<.05) and day 20 (±12%; P<.04). Standard methods showed a decrease in Ktrans but with large uncertainty (111.6 ± 150.5)%. Conclusions: Parametric voxel-based analysis of 4D DCE CT data resulted in greater accuracy and reliability in measuring changes in perfusion CT-based kinetic metrics, which have the potential to be used as biomarkers in patients with metastatic brain cancer.
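Voxelwise kinetic fitting of the kind TDA performs can be sketched with the standard Tofts model; the example below fits Ktrans and ve to one simulated tissue curve and is an assumption-laden illustration (toy arterial input function, one voxel), not the authors' implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 300, 61)                 # s, DCE time axis
cp = 6.0 * t * np.exp(-t / 60.0) / 60.0     # toy arterial input function

def tofts(t, ktrans, ve):
    """Standard Tofts model, Ct(t) = Ktrans * [Cp * exp(-Ktrans/ve * t)](t),
    evaluated by discrete convolution on a uniform time grid."""
    dt = t[1] - t[0]
    irf = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(cp, irf)[: len(t)] * dt

# Simulated noisy voxel tissue curve; in practice ct comes from one
# voxel of the 4D DCE-CT volume, and the fit is looped over all voxels.
rng = np.random.default_rng(0)
ct = tofts(t, 0.12 / 60, 0.3) + rng.normal(0, 0.01, t.size)

(ktrans, ve), _ = curve_fit(tofts, t, ct, p0=[0.05 / 60, 0.2],
                            bounds=([1e-6, 1e-3], [1.0, 1.0]))
print(f"Ktrans = {ktrans*60:.3f} /min, ve = {ve:.2f}")
```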
Sanz-Requena, Roberto; Moratal, David; García-Sánchez, Diego Ramón; Bodí, Vicente; Rieta, José Joaquín; Sanchis, Juan Manuel
2007-03-01
Intravascular ultrasound (IVUS) imaging is used along with X-ray coronary angiography to detect vessel pathologies. Manual analysis of IVUS images is slow and time-consuming and is not feasible for clinical purposes. A semi-automated method is proposed to generate 3D reconstructions from IVUS video sequences, so that a fast diagnosis can easily be made, quantifying plaque length and severity as well as plaque volume of the vessels under study. The methodology described in this work has four steps: pre-processing of IVUS images, segmentation of the media-adventitia contour, detection of intima and plaque, and 3D reconstruction of the vessel. Pre-processing is intended to remove noise from the images without blurring the edges. Segmentation of the media-adventitia contour is achieved using active contours (snakes). In particular, we use the gradient vector flow (GVF) as the external force for the snakes. The detection of the lumen border is obtained taking into account gray-level information of the inner part of the previously detected contours. A knowledge-based approach is used to determine which gray level corresponds statistically to the different regions of interest: intima, plaque and lumen. The catheter region is automatically discarded. An estimate of plaque type is also given. Finally, 3D reconstruction of all detected regions is made. The suitability of this methodology has been verified for the analysis and visualization of plaque length, stenosis severity, automatic detection of the most problematic regions, calculation of plaque volumes and a preliminary estimation of plaque type, obtaining for automatic measures of lumen and vessel area an average error smaller than 1 mm² (approximately 10% of the average measure), for calculation of plaque and lumen volume errors smaller than 0.5 mm³ (approximately 20% of the average measure), and for plaque type estimates a mismatch of less than 8% in the analysed frames.
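As a hedged illustration of the snake-based segmentation step, the sketch below runs scikit-image's active_contour (a classic snake driven by image-gradient forces, not the GVF variant the authors use) on a synthetic ring-shaped frame standing in for a vessel wall:

```python
import numpy as np
from skimage import filters, segmentation

# Synthetic IVUS-like frame: a bright ring (vessel wall) around a dark lumen.
yy, xx = np.mgrid[0:128, 0:128]
r = np.hypot(yy - 64, xx - 64)
img = np.exp(-((r - 40) ** 2) / 60).astype(float)

# Circular initial snake slightly outside the wall; the snake relaxes
# onto the bright media-adventitia-like boundary.
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([64 + 55 * np.sin(s), 64 + 55 * np.cos(s)])
snake = segmentation.active_contour(filters.gaussian(img, 3),
                                    init, alpha=0.015, beta=10, gamma=0.001)
print("mean snake radius:", np.hypot(*(snake - 64).T).mean())
```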
Pharmacokinetics Application in Biophysics Experiments
NASA Astrophysics Data System (ADS)
Millet, Philippe; Lemoigne, Yves
Among the available computerised tomography devices, positron emission tomography (PET) has the advantage of being sensitive to picomolar concentrations of radiotracers inside living matter. Devices adapted to small animal imaging are now commercially available and allow us to study the function rather than the structure of living tissues by in vivo analysis. PET methodology, from the physics of electron-positron annihilation to the biophysics involved in tracers, is treated by other authors in this book. The basics of coincidence detection, image reconstruction, spatial resolution and sensitivity are discussed in the paper by R. Ott. The use of compartment analysis combined with pharmacokinetics is described here to illustrate an application to neuroimaging and to show how parametric imaging can bring insight into the in vivo bio-distribution of a radioactive tracer with small animal PET scanners. After reporting on the use of an intracerebral β+ radiosensitive probe (βP), we describe a small animal PET experiment used to measure the density of 5-HT1A receptors in rat brain.
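The compartment-analysis idea can be made concrete with the simplest case, a one-tissue compartment model; receptor-density studies such as the 5-HT1A work typically need richer multi-compartment or reference-tissue models, so treat this as a toy under assumed parameter values:

```python
import numpy as np
from scipy.integrate import odeint

# Toy plasma input function Cp(t) (kBq/mL); in a real study this is
# measured, e.g., with the intracerebral beta+ probe or blood samples.
t = np.linspace(0, 60, 241)                  # min
cp = 50 * t * np.exp(-t / 2.0)

def one_tissue(ct, ti, K1, k2):
    """dCt/dt = K1*Cp(t) - k2*Ct  (one-tissue compartment model)."""
    return K1 * np.interp(ti, t, cp) - k2 * ct

K1, k2 = 0.15, 0.10                          # mL/cm^3/min, 1/min (assumed)
ct = odeint(one_tissue, 0.0, t, args=(K1, k2)).ravel()
print("peak tissue activity:", ct.max())
print("distribution volume K1/k2 =", K1 / k2)
```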
Widefield quantitative multiplex surface enhanced Raman scattering imaging in vivo
NASA Astrophysics Data System (ADS)
McVeigh, Patrick Z.; Mallia, Rupananda J.; Veilleux, Israel; Wilson, Brian C.
2013-04-01
In recent years numerous studies have shown the potential advantages of molecular imaging in vitro and in vivo using contrast agents based on surface enhanced Raman scattering (SERS); however, the low throughput of traditional point-scanned imaging methodologies has limited their use in biological imaging. In this work we demonstrate that direct widefield Raman imaging based on a tunable filter is capable of quantitative multiplex SERS imaging in vivo, and that this imaging is possible with acquisition times orders of magnitude shorter than those achievable with comparable point-scanned methodologies. The system, designed for small animal imaging, has a linear response from 0.01 to 100 pM, acquires typical in vivo images in <10 s, and with suitable SERS reporter molecules is capable of multiplex imaging without compensation for spectral overlap. To demonstrate the utility of widefield Raman imaging in biological applications, we show quantitative imaging of four simultaneous SERS reporter molecules in vivo with resulting probe quantification in excellent agreement with known quantities (R² > 0.98).
Burton, Kirsteen R; Perlis, Nathan; Aviv, Richard I; Moody, Alan R; Kapral, Moira K; Krahn, Murray D; Laupacis, Andreas
2014-03-01
This study reviews the quality of economic evaluations of imaging after acute stroke and identifies areas for improvement. We performed full-text searches of electronic databases that included Medline, Econlit, the National Health Service Economic Evaluation Database, and the Tufts Cost Effectiveness Analysis Registry through July 2012. Search strategy terms included the following: stroke*; cost*; or cost-benefit analysis*; and imag*. Inclusion criteria were empirical studies published in any language that reported the results of economic evaluations of imaging interventions for patients with stroke symptoms. Study quality was assessed by a commonly used checklist (with a score range of 0% to 100%). Of 568 unique potential articles identified, 5 were included in the review. Four of 5 articles were explicit in their analysis perspectives, which included healthcare system payers, hospitals, and stroke services. Two studies reported results during a 5-year time horizon, and 3 studies reported lifetime results. All included the modified Rankin Scale score as an outcome measure. The median quality score was 84.4% (range=71.9%-93.5%). Most studies did not consider the possibility that patients could not tolerate contrast media or could incur contrast-induced nephropathy. Three studies compared perfusion computed tomography with unenhanced computed tomography but assumed that outcomes guided by the results of perfusion computed tomography were equivalent to outcomes guided by the results of magnetic resonance imaging or noncontrast computed tomography. Economic evaluations of imaging modalities after acute ischemic stroke were generally of high methodological quality. However, important radiology-specific clinical components were missing from all of these analyses.
Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS
NASA Astrophysics Data System (ADS)
Sofina, N.; Ehlers, M.
2012-08-01
High resolution remotely sensed images provide current, detailed, and accurate information for large areas of the Earth's surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for a successful analysis it is desirable to take images from the same sensor, acquired at the same time of season, at the same time of day, and - for electro-optical sensors - in cloudless conditions. Thus, a change detection analysis can be problematic, especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout, which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search for destroyed buildings in the aftermath of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is a newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.
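A DPC-like feature can be sketched as the fraction of a building's GIS contour that overlaps edges detected in the post-event image; the paper's exact definition may differ, so the following is an assumption-labeled toy:

```python
import numpy as np
from scipy import ndimage
from skimage import feature

def detected_part_of_contour(footprint, image, tol=1):
    """Toy DPC-like feature: fraction of a building's vector contour
    (rasterized here as the footprint's boundary ring) coinciding with
    Canny edges in the image, within a tolerance band of tol pixels."""
    contour = footprint & ~ndimage.binary_erosion(footprint)
    edges = ndimage.binary_dilation(feature.canny(image, sigma=2.0),
                                    iterations=tol)
    return (contour & edges).sum() / max(contour.sum(), 1)

rng = np.random.default_rng(1)
img = rng.normal(0.5, 0.05, (60, 60))
footprint = np.zeros((60, 60), dtype=bool)
footprint[20:40, 20:40] = True
img[20:40, 20:40] += 0.4        # intact building: strong roof edges
print("DPC =", round(detected_part_of_contour(footprint, img), 2))
```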
NanoSIMS for biological applications: Current practices and analyses
Nunez, Jamie R.; Renslow, Ryan S.; Cliff, III, John B.; ...
2017-09-27
Secondary ion mass spectrometry (SIMS) has become an increasingly utilized tool in biologically-relevant studies. Of these, high lateral resolution methodologies using the NanoSIMS 50/50L have been especially powerful within many biological fields over the past decade. Here, we provide a review of this technology, sample preparation and analysis considerations, examples of recent biological studies, data analysis, and current outlooks. Specifically, we offer an overview of SIMS and development of the NanoSIMS. We describe the major experimental factors that should be considered prior to NanoSIMS analysis and then provide information on best practices for data analysis and image generation, which includes an in-depth discussion of appropriate colormaps. Additionally, we provide an open-source method for data representation that allows simultaneous visualization of secondary electron and ion information within a single image. Lastly, we present a perspective on the future of this technology and where we think it will have the greatest impact in the near future.
Methodology for cost analysis of film-based and filmless portable chest systems
NASA Astrophysics Data System (ADS)
Melson, David L.; Gauvain, Karen M.; Beardslee, Brian M.; Kraitsik, Michael J.; Burton, Larry; Blaine, G. James; Brink, Gary S.
1996-05-01
Many studies analyzing the costs of film-based and filmless radiology have focused on multi-modality, hospital-wide solutions. Yet due to the enormous cost of converting an entire large radiology department or hospital to a filmless environment all at once, institutions often choose to eliminate film one area at a time. Narrowing the focus of cost analysis may be useful in making such decisions. This presentation will outline a methodology for analyzing the cost per exam of film-based and filmless solutions for providing portable chest exams to Intensive Care Units (ICUs). The methodology, unlike most in the literature, is based on parallel data collection from existing filmless and film-based ICUs, and is currently being utilized at our institution. Direct costs, taken from the perspective of the hospital, for portable computed radiography chest exams in one filmless and two film-based ICUs are identified. The major cost components are labor, equipment, materials, and storage. Methods for gathering and analyzing each of the cost components are discussed, including FTE-based and time-based labor analysis, incorporation of equipment depreciation, lease, and maintenance costs, and estimation of materials costs. Extrapolation of data from three ICUs to model hypothetical, hospital-wide film-based and filmless ICU imaging systems is described. Performance of sensitivity analysis on the filmless model to assess the impact of anticipated reductions in specific labor, equipment, and archiving costs is detailed. A number of indirect costs, which are not explicitly included in the analysis, are identified and discussed.
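The per-exam roll-up over the four cost components reduces to simple arithmetic; a minimal sketch with placeholder numbers, not figures from the study:

```python
def cost_per_exam(annual_exams, labor_cost_per_min, minutes_per_exam,
                  equipment_price, useful_life_yr, annual_maintenance,
                  materials, storage):
    """Direct cost per portable chest exam = labor + straight-line
    equipment depreciation and maintenance share + materials + storage."""
    labor = labor_cost_per_min * minutes_per_exam
    equipment = (equipment_price / useful_life_yr + annual_maintenance) / annual_exams
    return labor + equipment + materials + storage

# Illustrative inputs only: exam volume, labor rate, handling time,
# capital cost, life, maintenance, materials, archiving per exam.
film = cost_per_exam(6000, 0.60, 18, 90_000, 7, 6_000, 4.50, 1.20)
cr   = cost_per_exam(6000, 0.60, 12, 250_000, 7, 20_000, 0.40, 0.80)
print(f"film ${film:.2f}/exam vs filmless ${cr:.2f}/exam")
```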
Spatio-temporal alignment of pedobarographic image sequences.
Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S
2011-07-01
This article presents a methodology to align plantar pressure image sequences simultaneously in time and space. The spatial position and orientation of a foot in a sequence are changed to match the foot represented in a second sequence. Simultaneously with the spatial alignment, the temporal scale of the first sequence is transformed with the aim of synchronizing the two input footsteps. Consequently, the spatial correspondence of the foot regions along the sequences as well as the temporal synchronizing is automatically attained, making the study easier and more straightforward. In terms of spatial alignment, the methodology can use one of four possible geometric transformation models: rigid, similarity, affine, or projective. In the temporal alignment, a polynomial transformation up to the 4th degree can be adopted in order to model linear and curved time behaviors. Suitable geometric and temporal transformations are found by minimizing the mean squared error (MSE) between the input sequences. The methodology was tested on a set of real image sequences acquired from a common pedobarographic device. When used in experimental cases generated by applying geometric and temporal control transformations, the methodology revealed high accuracy. In addition, the intra-subject alignment tests from real plantar pressure image sequences showed that the curved temporal models produced better MSE results (P < 0.001) than the linear temporal model. This article represents an important step forward in the alignment of pedobarographic image data, since previous methods can only be applied on static images.
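The temporal half of the optimization can be sketched compactly: find polynomial warp coefficients that minimize the MSE between a reference pressure curve and a time-distorted copy. The spatial (rigid/similarity/affine/projective) part of the method is omitted from this toy:

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 1, 200)
f = np.sin(np.pi * t) ** 2                        # reference footstep curve
g = np.sin(np.pi * np.clip(t ** 1.3, 0, 1)) ** 2  # time-distorted copy

def mse(coeffs):
    # Cubic warp s(t) with s(0) = 0; resample g at s(t) and compare to f.
    s = np.clip(np.polyval(np.append(coeffs, 0.0), t), 0, 1)
    return np.mean((np.interp(s, t, g) - f) ** 2)

res = minimize(mse, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
print("warp coefficients (cubic..linear):", np.round(res.x, 3),
      "MSE:", round(res.fun, 6))
```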
Classical Statistics and Statistical Learning in Imaging Neuroscience
Bzdok, Danilo
2017-01-01
Brain-imaging research has predominantly generated insight by means of classical statistics, including regression-type analyses and null-hypothesis testing using the t-test and ANOVA. In recent years, statistical learning methods have enjoyed increasing popularity, especially for applications to rich and complex data, including cross-validated out-of-sample prediction using pattern classification and sparsity-inducing regression. This concept paper discusses the implications of inferential justifications and algorithmic methodologies in common data analysis scenarios in neuroimaging. It retraces how classical statistics and statistical learning originated from different historical contexts, build on different theoretical foundations, make different assumptions, and evaluate different outcome metrics to permit differently nuanced conclusions. The present considerations should help reduce current confusion between model-driven classical hypothesis testing and data-driven learning algorithms for investigating the brain with imaging techniques. PMID:29056896
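The contrast drawn here is easy to demonstrate on synthetic data: a null-hypothesis test on one feature versus cross-validated out-of-sample prediction across all features. All data below are simulated:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic "voxel" data for two groups: classical inference asks
# whether group means differ; statistical learning asks how well
# group membership is predicted out of sample.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, (50, 100)),
               rng.normal(0.3, 1.0, (50, 100))])   # 100 subjects x 100 voxels
y = np.repeat([0, 1], 50)

t, p = ttest_ind(X[y == 0, 0], X[y == 1, 0])       # one voxel, NHST
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"voxel 1 t-test p = {p:.3f}; 5-fold CV accuracy = {acc:.2f}")
```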
Determination of Local Densities in Accreted Ice Samples Using X-Rays and Digital Imaging
NASA Technical Reports Server (NTRS)
Broughton, Howard; Sims, James; Vargas, Mario
1996-01-01
At the NASA Lewis Research Center's Icing Research Tunnel, ice shapes similar to those which develop under in-flight icing conditions were formed on an airfoil. Under cold room conditions these experimental samples were carefully removed from the airfoil, sliced into thin sections, and x-rayed. The resulting microradiographs were developed and the film digitized using a high resolution scanner to extract fine detail in the radiographs. A procedure was devised to calibrate the scanner and to maintain repeatability during the experiment. The techniques of image acquisition and analysis provide accurate local density measurements and reveal the internal characteristics of the accreted ice in greater detail. This paper discusses the methodology by which these samples were prepared, with emphasis on the digital imaging techniques.
Fornasaro, Stefano; Vicario, Annalisa; De Leo, Luigina; Bonifacio, Alois; Not, Tarcisio; Sergo, Valter
2018-05-14
Raman hyperspectral imaging is an emerging practice in biological and biomedical research for label-free analysis of tissues and cells. Using this method, both spatial distribution and spectral information of the analyzed samples can be obtained. The current study reports the first Raman microspectroscopic characterisation of colon tissues from patients with Coeliac Disease (CD). The aim was to assess whether Raman imaging coupled with hyperspectral multivariate image analysis is capable of detecting the alterations in the biochemical composition of intestinal tissues associated with CD. The analytical approach was based on a multi-step methodology: duodenal biopsies from healthy and coeliac patients were measured and processed with Multivariate Curve Resolution Alternating Least Squares (MCR-ALS). Based on the distribution maps and the pure spectra of the image constituents obtained from MCR-ALS, interesting biochemical differences between healthy and coeliac patients were derived. Noticeably, a reduced distribution of complex lipids in the pericryptic space and a different distribution and abundance of proteins rich in beta-sheet structures were found in CD patients. The output of the MCR-ALS analysis was then used as a starting point for two clustering algorithms (k-means and hierarchical clustering). Both methods converged to similar results, providing precise segmentation over multiple Raman images of the studied tissues.
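A hedged sketch of the unmixing-then-clustering pipeline, substituting sklearn's NMF for MCR-ALS: both factor the data matrix into non-negative concentrations and spectra, though MCR-ALS supports the additional constraints the authors rely on. All spectra below are synthetic:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
h, w, n_wn = 30, 30, 200                      # pixels and wavenumbers
S = np.abs(rng.normal(size=(3, n_wn)))        # 3 synthetic pure spectra
C = rng.dirichlet([1, 1, 1], size=h * w)      # per-pixel concentrations
D = C @ S + 0.01 * rng.random((h * w, n_wn))  # unfolded hyperspectral matrix

model = NMF(n_components=3, init="nndsvda", max_iter=500)
C_hat = model.fit_transform(D)                # concentration maps (h*w x 3)
S_hat = model.components_                     # resolved spectra   (3 x n_wn)

# Cluster pixels on the resolved concentrations, mirroring the
# MCR-ALS -> k-means step described in the abstract.
labels = KMeans(n_clusters=2, n_init=10).fit_predict(C_hat)
print("cluster sizes:", np.bincount(labels))
```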
Eberhardt, S H; Marone, F; Stampanoni, M; Büchi, F N; Schmidt, T J
2014-11-01
Synchrotron-based X-ray tomographic microscopy is investigated for imaging the local distribution and concentration of phosphoric acid in high-temperature polymer electrolyte fuel cells. Phosphoric acid fills the pores of the macro- and microporous fuel cell components. Its concentration in the fuel cell varies over a wide range (40-100 wt% H3PO4). This renders the quantification and concentration determination challenging. The problem is solved by using propagation-based phase contrast imaging and a referencing method. Fuel cell components with known acid concentrations were used to correlate greyscale values and acid concentrations. Thus calibration curves were established for the gas diffusion layer, catalyst layer and membrane in a non-operating fuel cell. The non-destructive imaging methodology was verified by comparing image-based values for acid content and concentration in the gas diffusion layer with those from chemical analysis.
NASA Astrophysics Data System (ADS)
Costaridou, Lena
Although a wide variety of Computer-Aided Diagnosis (CADx) schemes have been proposed across breast imaging modalities, and especially in mammography, research is still ongoing to meet the high-performance requirements of CADx. In this chapter, methodological contributions to CADx in mammography and adjunct breast imaging modalities are reviewed, as they play a major role in early detection, diagnosis and clinical management of breast cancer. First, basic terms and definitions are provided. Then, emphasis is given to lesion content derivation, both anatomical and functional, considering only quantitative image features of micro-calcification clusters and masses across modalities. Additionally, two CADx application examples are provided. The first example investigates the effect of segmentation accuracy on micro-calcification cluster morphology derivation in X-ray mammography. The second one demonstrates the efficiency of texture analysis in quantification of enhancement kinetics, related to vascular heterogeneity, for mass classification in dynamic contrast-enhanced magnetic resonance imaging.
Characterisation of Feature Points in Eye Fundus Images
NASA Astrophysics Data System (ADS)
Calvo, D.; Ortega, M.; Penedo, M. G.; Rouco, J.
The retinal vessel tree adds decisive knowledge in the diagnosis of numerous ophthalmologic pathologies such as hypertension or diabetes. One of the problems in the analysis of the retinal vessel tree is the lack of information on vessel depth, as image acquisition usually yields a 2D image. This leads to scenarios where two different vessels coinciding at a point could be interpreted as a single vessel forking into a bifurcation. That is why, for tracking and labelling the retinal vascular tree, bifurcations and crossovers of vessels are considered feature points. In this work a novel method for the detection and classification of these retinal vessel tree feature points is introduced. The method applies image-processing techniques such as filtering and thinning to obtain a structure adequate for detecting the points, and classifies these points by studying their local environment. The methodology is tested using a standard database and the results show high classification capabilities.
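Skeleton-based detection of bifurcation and crossover candidates can be sketched by counting the 8-neighbours of each skeleton pixel; the paper's classification by local-environment analysis is more refined than this toy, which flags every pixel near a junction:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def vessel_feature_points(vessel_mask):
    """Candidate feature points on a binary vessel map: skeletonize,
    then count 8-neighbours of each skeleton pixel. Exactly 3
    neighbours suggests a bifurcation, 4 or more a crossover."""
    skel = skeletonize(vessel_mask).astype(np.uint8)
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    nb = convolve(skel, kernel, mode="constant") * skel
    return np.argwhere(nb == 3), np.argwhere(nb >= 4)

mask = np.zeros((40, 40), dtype=bool)
mask[20, 5:35] = True            # horizontal vessel
mask[5:35, 20] = True            # vertical vessel crossing it
bifs, crosses = vessel_feature_points(mask)
print("bifurcations:", len(bifs), "crossover candidates:", len(crosses))
```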
On a methodology for robust segmentation of nonideal iris images.
Schmid, Natalia A; Zuo, Jinyu
2010-06-01
Iris biometric is one of the most reliable biometrics with respect to performance. However, this reliability is a function of the ideality of the data. One of the most important steps in processing nonideal data is reliable and precise segmentation of the iris pattern from remaining background. In this paper, a segmentation methodology that aims at compensating various nonidealities contained in iris images during segmentation is proposed. The virtue of this methodology lies in its capability to reliably segment nonideal imagery that is simultaneously affected with such factors as specular reflection, blur, lighting variation, occlusion, and off-angle images. We demonstrate the robustness of our segmentation methodology by evaluating ideal and nonideal data sets, namely, the Chinese Academy of Sciences iris data version 3 interval subdirectory, the iris challenge evaluation data, the West Virginia University (WVU) data, and the WVU off-angle data. Furthermore, we compare our performance to that of our implementation of Camus and Wildes's algorithm and Masek's algorithm. We demonstrate considerable improvement in segmentation performance over the formerly mentioned algorithms.
Plyku, Donika; Loeb, David M.; Prideaux, Andrew R.; Baechler, Sébastien; Wahl, Richard L.; Sgouros, George
2015-01-01
Abstract Purpose: Dosimetric accuracy depends directly upon the accuracy of the activity measurements in tumors and organs. The authors present the methods and results of a retrospective tumor dosimetry analysis in 14 patients with a total of 28 tumors treated with high activities of 153Sm-ethylenediaminetetramethylenephosphonate (153Sm-EDTMP) for therapy of metastatic osteosarcoma using planar images, and compare the results with three-dimensional dosimetry. Materials and Methods: Analysis of phantom data provided a complete set of parameters for dosimetric calculations, including buildup factor, attenuation coefficient, and camera dead-time compensation. The latter was obtained using a previously developed methodology that accounts for the relative motion of the camera and patient during whole-body (WB) imaging. Tumor activity values calculated from the anterior and posterior views of WB planar images of patients treated with 153Sm-EDTMP for pediatric osteosarcoma were compared with the geometric mean value. The mean activities were integrated over time and tumor-absorbed doses were calculated using the software package OLINDA/EXM. Results: The authors found it necessary to employ the dead-time correction algorithm to prevent measured tumor activity half-lives from often exceeding the physical decay half-life of 153Sm; half-lives that long are unquestionably in error. Tumor-absorbed doses varied between 0.0022 and 0.27 cGy/MBq with an average of 0.065 cGy/MBq; however, a comparison with absorbed dose values derived from a three-dimensional analysis for the same tumors showed no correlation; moreover, the ratio of the three-dimensional absorbed dose value to the planar absorbed dose value was 2.19. From the anterior and posterior activity comparisons, the order of clinical uncertainty for activity and dose calculations from WB planar images, with the present methodology, is hypothesized to be about 70%. Conclusion: The dosimetric results from clinical patient data indicate that absolute planar dosimetry is unreliable and that dosimetry using three-dimensional imaging is preferable, particularly for tumors, except perhaps for the most sophisticated planar methods. The relative activity and patient kinetics derived from planar imaging show a greater level of reliability than the dosimetry. PMID:26560193
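The anterior/posterior geometric-mean step is the classic conjugate-view method; a sketch with an assumed non-paralyzable dead-time model and illustrative numbers (the paper uses its own WB-scan-specific dead-time correction):

```python
import numpy as np

def deadtime_correct(measured_rate, tau):
    """Non-paralyzable dead-time model (an assumption here):
    true rate n = m / (1 - m * tau) for measured rate m."""
    return measured_rate / (1.0 - measured_rate * tau)

def conjugate_view_activity(I_ant, I_post, mu, L, cal):
    """Classic geometric-mean (conjugate-view) activity estimate:
    A = sqrt(I_a * I_p / exp(-mu * L)) / cal, with mu the effective
    linear attenuation coefficient, L patient thickness, and cal the
    camera sensitivity (counts/s per MBq)."""
    return np.sqrt(I_ant * I_post / np.exp(-mu * L)) / cal

ant = deadtime_correct(9.0e4, 1.0e-6)   # counts/s, illustrative
post = deadtime_correct(6.0e4, 1.0e-6)
A = conjugate_view_activity(ant, post, mu=0.12, L=20.0, cal=300.0)  # mu in 1/cm, L in cm
print(f"tumour activity ~ {A:.0f} MBq")
```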
Automated motion artifact removal for intravital microscopy, without a priori information.
Lee, Sungon; Vinegoni, Claudio; Sebas, Matthew; Weissleder, Ralph
2014-03-28
Intravital fluorescence microscopy, through extended penetration depth and imaging resolution, provides the ability to image at cellular and subcellular resolution in live animals, presenting an opportunity for new insights into in vivo biology. Unfortunately, physiological induced motion components due to respiration and cardiac activity are major sources of image artifacts and impose severe limitations on the effective imaging resolution that can be ultimately achieved in vivo. Here we present a novel imaging methodology capable of automatically removing motion artifacts during intravital microscopy imaging of organs and orthotopic tumors. The method is universally applicable to different laser scanning modalities including confocal and multiphoton microscopy, and offers artifact free reconstructions independent of the physiological motion source and imaged organ. The methodology, which is based on raw data acquisition followed by image processing, is here demonstrated for both cardiac and respiratory motion compensation in mice heart, kidney, liver, pancreas and dorsal window chamber.
Backscattering analysis of high frequency ultrasonic imaging for ultrasound-guided breast biopsy
NASA Astrophysics Data System (ADS)
Cummins, Thomas; Akiyama, Takahiro; Lee, Changyang; Martin, Sue E.; Shung, K. Kirk
2017-03-01
A new ultrasound-guided breast biopsy technique is proposed. The technique utilizes conventional ultrasound guidance coupled with a high frequency ultrasound array embedded within the biopsy needle to improve the accuracy of breast cancer diagnosis [1]. The array within the needle is intended to detect microcalcifications indicative of early breast cancers such as ductal carcinoma in situ (DCIS). Backscattering analysis has the potential to characterize tissues and thereby improve localization of lesions. This paper describes initial results of the application of backscattering analysis to breast biopsy tissue specimens and shows the usefulness of high frequency ultrasound for the new biopsy-related technique. Ultrasound echoes of ex-vivo breast biopsy tissue specimens were acquired using a single-element transducer with a bandwidth from 41 MHz to 88 MHz following a UBM methodology, and the backscattering coefficients were calculated. These values as well as B-mode image data were mapped in 2D and matched with each pathology image for the identification of tissue type, allowing comparison to the pathology images corresponding to each plane. Microcalcifications were significantly distinguished from normal tissue. Adenocarcinoma was also successfully differentiated from adipose tissue. These results indicate that backscattering analysis is able to quantitatively distinguish normal from abnormal tissue, which should help radiologists locate abnormal areas during the proposed ultrasound-guided breast biopsy with high frequency ultrasound.
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing, in near real-time, the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Due to the simplicity of its principle of operation, this method has the potential to circumvent potential biases of current real-time particle analyzers (e.g. Time of Flight analysis), while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we conducted benchmark experiments in which aerodynamic particle size distributions were obtained from several commercially-available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) that are closely in line with those obtained from Time of Flight analysis and cascade impactors, suggesting that our imaging-based method may be an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique.
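Converting a measured settling velocity into an aerodynamic diameter follows directly from Stokes' law; the sketch below neglects the Cunningham slip correction, a reasonable first pass for particles above roughly 1 µm:

```python
import numpy as np

def aerodynamic_diameter(v_settle, mu=1.81e-5, rho0=1000.0, g=9.81):
    """Aerodynamic diameter from terminal settling velocity via Stokes'
    law, d_ae = sqrt(18 * mu * v_s / (rho0 * g)), with air viscosity mu
    (Pa*s), unit density rho0 (kg/m^3) and g (m/s^2)."""
    return np.sqrt(18.0 * mu * v_settle / (rho0 * g))

# Settling velocities (m/s) as would come from per-particle tracking.
v = np.array([2.8e-4, 1.1e-3, 4.4e-3])
d_um = aerodynamic_diameter(v) * 1e6
print("d_ae [um]:", np.round(d_um, 2))
# MMAD and GSD then follow from the mass-weighted distribution of d_ae.
```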
Newlander, Shawn M; Chu, Alan; Sinha, Usha S; Lu, Po H; Bartzokis, George
2014-02-01
To identify regional differences in apparent diffusion coefficient (ADC) and fractional anisotropy (FA) using customized preprocessing before voxel-based analysis (VBA) in 14 normal subjects with the specific genes that decrease (apolipoprotein [APO] E ε2) and increase (APOE ε4) the risk of Alzheimer's disease. Diffusion tensor images (DTI) acquired at 1.5 Tesla were denoised with a total variation tensor regularization algorithm before affine and nonlinear registration to generate a common reference frame for the image volumes of all subjects. Anisotropic and isotropic smoothing with varying kernel sizes was applied to the aligned data before VBA to determine regional differences between cohorts segregated by allele status. VBA on the denoised tensor data identified regions of reduced FA in APOE ε4 carriers compared with the APOE ε2 healthy older carriers. The most consistent results were obtained using the denoised tensor and anisotropic smoothing before statistical testing. In contrast, isotropic smoothing identified regional differences for small filter sizes alone, emphasizing that this method introduces bias in FA values at larger kernel sizes. Voxel-based DTI analysis can be performed on low signal-to-noise-ratio images to detect subtle regional differences in cohorts using the proposed preprocessing techniques.
Wu, L C; D'Amelio, F; Fox, R A; Polyakov, I; Daunton, N G
1997-06-06
The present report describes a desktop computer-based method for the quantitative assessment of the area occupied by immunoreactive terminals in close apposition to nerve cells in relation to the perimeter of the cell soma. This method is based on Fast Fourier Transform (FFT) routines incorporated in NIH-Image public domain software. Pyramidal cells of layer V of the somatosensory cortex outlined by GABA immunolabeled terminals were chosen for our analysis. A Leitz Diaplan light microscope was employed for the visualization of the sections. A Sierra Scientific Model 4030 CCD camera was used to capture the images into a Macintosh Centris 650 computer. After preprocessing, filtering was performed on the power spectrum in the frequency domain produced by the FFT operation. An inverse FFT with filter procedure was employed to restore the images to the spatial domain. Pasting of the original image to the transformed one using a Boolean logic operation called 'AND'ing produced an image with the terminals enhanced. This procedure allowed the creation of a binary image using a well-defined threshold of 128. Thus, the terminal area appears in black against a white background. This methodology provides an objective means of measurement of area by counting the total number of pixels occupied by immunoreactive terminals in light microscopic sections in which the difficulties of labeling intensity, size, shape and numerical density of terminals are avoided.
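The pipeline maps naturally onto numpy's FFT routines; the sketch below substitutes a pixel-wise minimum for NIH Image's bitwise 'AND'ing and is otherwise a hedged approximation of the described steps, run on a synthetic image:

```python
import numpy as np

def enhance_terminals(img, r_low=4):
    """Sketch of the FFT pipeline described above: suppress low spatial
    frequencies (coarse shading), invert the FFT, combine with the
    original image, binarize at 128, and count foreground pixels."""
    F = np.fft.fftshift(np.fft.fft2(img))
    ky, kx = np.indices(img.shape)
    cy, cx = np.array(img.shape) // 2
    F[(ky - cy) ** 2 + (kx - cx) ** 2 < r_low ** 2] = 0   # high-pass filter
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(F)))
    filtered = np.clip(filtered - filtered.min(), 0, None)
    filtered *= 255.0 / filtered.max()
    combined = np.minimum(img, filtered)    # stand-in for the AND step
    binary = combined >= 128                # fixed threshold, as in the text
    return binary.sum()                     # terminal area in pixels

rng = np.random.default_rng(3)
img = rng.integers(0, 80, (128, 128)).astype(float)
img[60:68, 20:100] += 150                   # bright "terminal" band
print("terminal area (px):", enhance_terminals(np.clip(img, 0, 255)))
```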
Noninvasive Test Detects Cardiovascular Disease
NASA Technical Reports Server (NTRS)
2007-01-01
At NASA's Jet Propulsion Laboratory (JPL), NASA-developed Video Imaging Communication and Retrieval (VICAR) software laid the groundwork for analyzing images of all kinds. A project seeking to use imaging technology for health care diagnosis began when the imaging team considered using the VICAR software to analyze X-ray images of soft tissue. With marginal success using X-rays, the team applied the same methodology to ultrasound imagery, which was already digitally formatted. The new approach proved successful for assessing amounts of plaque build-up and arterial wall thickness, direct predictors of heart disease, and the result was a noninvasive diagnostic system with the ability to accurately predict heart health. Medical Technologies International Inc. (MTI) further developed and then submitted the technology to a rigorous review process at the FDA, which cleared the software for public use. The software, patented under the name Prowin, is being used in MTI's patented ArterioVision, a carotid intima-media thickness (CIMT) test that uses ultrasound image-capturing and analysis software to noninvasively identify the risk for the major cause of heart attack and strokes: atherosclerosis. ArterioVision provides a direct measurement of atherosclerosis by safely and painlessly measuring the thickness of the first two layers of the carotid artery wall using an ultrasound procedure and advanced image-analysis software. The technology is now in use in all 50 states and in many countries throughout the world.
Computational Burden Resulting from Image Recognition of High Resolution Radar Sensors
López-Rodríguez, Patricia; Fernández-Recio, Raúl; Bravo, Ignacio; Gardel, Alfredo; Lázaro, José L.; Rufo, Elena
2013-01-01
This paper presents a methodology for high resolution radar image generation and automatic target recognition, emphasizing the computational cost involved in the process. In order to obtain focused inverse synthetic aperture radar (ISAR) images, certain signal processing algorithms must be applied to the information sensed by the radar. From actual data collected by radar, the stages and algorithms needed to obtain ISAR images are revised, including high resolution range profile generation, motion compensation and ISAR formation. Target recognition is achieved by comparing the generated set of actual ISAR images with a database of ISAR images generated by electromagnetic software. High resolution radar image generation and target recognition are burdensome and time-consuming processes, so an analysis of their computational complexity is of great interest for determining the most suitable implementation platform. To this end, and since target identification must be completed in real time, the computational burden of both processes, image generation and database comparison, is analyzed separately. Conclusions are drawn about implementation platforms and calculation efficiency in order to reduce time consumption in a possible future implementation. PMID:23609804
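The range-profile and image-formation stages reduce, in the idealized turntable case, to two FFTs; motion compensation, which dominates the real computational burden, is omitted from this toy:

```python
import numpy as np

# Minimal range-Doppler ISAR formation on synthetic stepped-frequency
# data: an IFFT over frequency gives range profiles; an FFT over the
# pulse (slow-time) axis gives the cross-range dimension.
n_freq, n_pulse = 128, 64
f = np.linspace(9e9, 9.5e9, n_freq)[:, None]          # Hz
m = np.arange(n_pulse)[None, :]
c = 3e8
# One point scatterer whose range drifts slowly across pulses (toy model).
R = 12.0 + 0.02 * m
echoes = np.exp(-1j * 4 * np.pi * f * R / c)

profiles = np.fft.ifft(echoes, axis=0)                # range profiles
isar = np.fft.fftshift(np.fft.fft(profiles, axis=1), axes=1)
peak = np.unravel_index(np.abs(isar).argmax(), isar.shape)
print("peak at (range bin, Doppler bin):", peak)
```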
High-throughput high-volume nuclear imaging for preclinical in vivo compound screening.
Macholl, Sven; Finucane, Ciara M; Hesterman, Jacob; Mather, Stephen J; Pauplis, Rachel; Scully, Deirdre; Sosabowski, Jane K; Jouannot, Erwan
2017-12-01
Preclinical single-photon emission computed tomography (SPECT)/CT imaging studies are hampered by low throughput, hence are typically found within small-volume feasibility studies. Here, imaging and image analysis procedures are presented that allow profiling of a large volume of radiolabelled compounds within a reasonably short total study time. Particular emphasis was put on quality control (QC) and on fast and unbiased image analysis. Two to three His-tagged proteins were simultaneously radiolabelled by the 99mTc-tricarbonyl methodology and injected intravenously (20 nmol/kg; 100 MBq; n = 3) into patient-derived xenograft (PDX) mouse models. Whole-body SPECT/CT images of 3 mice simultaneously were acquired 1, 4, and 24 h post-injection, extended to 48 h and/or by 0-2 h dynamic SPECT for pre-selected compounds. Organ uptake was quantified by automated multi-atlas and manual segmentations. Data were plotted automatically, quality controlled and stored on a collaborative image management platform. Ex vivo uptake data were collected semi-automatically and analysis performed as for imaging data. >500 single-animal SPECT images were acquired for 25 proteins over 5 weeks, eventually generating >3500 ROIs and >1000 items of tissue data. SPECT/CT images clearly visualized uptake in tumour and other tissues even at 48 h post-injection. Intersubject uptake variability was typically 13% (coefficient of variation, COV). Imaging results correlated well with ex vivo data. The large data set of tumour, background and systemic uptake/clearance data from 75 mice for 25 compounds allows identification of compounds of interest. The number of animals required was reduced considerably by longitudinal imaging compared to dissection experiments. All experimental work and analyses were accomplished within 3 months, a timescale expected to be compatible with drug development programmes. QC along all workflow steps, blinding of the imaging contract research organization to compound properties, and automation provide confidence in the data set. Additional ex vivo data were useful as a control but could be omitted from future studies in the same centre. For even larger compound libraries, radiolabelling could be expedited and the number of imaging time points adapted to increase weekly throughput. Multi-atlas segmentation could be expanded via SPECT/MRI; however, this would require an MRI-compatible mouse hotel. Finally, analysis of nuclear images of radiopharmaceuticals in clinical trials may benefit from the automated analysis procedures developed.
NASA Astrophysics Data System (ADS)
Pérez Peña, José Vicente; Baldó, Mane; Acosta, Yarci; Verschueren, Laurent; Thibaud, Kenmognie; Bilivogui, Pépé; Jean-Paul Ngandu, Alain; Beavogui, Maoro
2017-04-01
In the last decade the increasing interest in public health has promoted specific regulations for the transport, storage, transformation and/or elimination of potentially toxic waste. A special concern should focus on the effective management of biomedical waste, due to the environmental and health risks associated with it. The first stage in the effective management of these wastes is the selection of the best sites for facilities for their storage and/or elimination. Best-site selection is accomplished by means of multi-criteria decision analyses (MCDA) that aim to minimize the social and environmental impact and to maximize management efficiency. In this work we present a methodology that uses open-source software and data to analyze the best location for the implementation of a centralized waste management system in a developing country (Guinea, Conakry). We applied an analytical hierarchy process (AHP) using different thematic layers such as land use, soil type, distance to and type of roads, hydrography, distance to densely populated areas, etc. Land-use data were derived from up-to-date Sentinel-2 remote sensing images, whereas roads and hydrography were obtained from the OpenStreetMap database and later validated against administrative data. We performed the AHP analysis with the aid of the open-source QGIS Geographic Information System. This methodology is very effective for developing countries as it uses open-source software and data for the MCDA analysis, thus reducing costs in these first stages of the integrated analysis.
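The AHP step itself is compact: criterion weights come from the principal eigenvector of a pairwise-comparison matrix, checked with Saaty's consistency ratio. The matrix below is illustrative, not the study's actual judgements:

```python
import numpy as np

def ahp_weights(P):
    """Criterion weights from an AHP pairwise-comparison matrix: the
    normalized principal eigenvector, plus Saaty's consistency ratio."""
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = P.shape[0]
    ci = (vals[k].real - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random indices
    return w, ci / ri

# Illustrative pairwise judgements for land use, road distance and
# population distance.
P = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(P)
print("weights:", np.round(w, 3), "CR:", round(cr, 3))  # CR < 0.1 is acceptable
```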
Pansharpening Techniques to Detect Mass Monument Damaging in Iraq
NASA Astrophysics Data System (ADS)
Baiocchi, V.; Bianchi, A.; Maddaluno, C.; Vidale, M.
2017-05-01
The recent mass destruction of monuments in Iraq cannot be monitored with terrestrial survey methodologies, for obvious safety reasons. For the same reasons the use of classical aerial photogrammetry is not advisable, so the use of multispectral Very High Resolution (VHR) satellite imagery is the natural choice. Nowadays the resolution of VHR satellite images is very close to that of airborne photogrammetric images, and they are usually acquired in multispectral mode. The combination of the various bands of the images is called pan-sharpening, and it can be carried out using different algorithms and strategies. The correct pansharpening methodology for a specific image must be chosen considering the multispectral characteristics of the satellite used and the particular application. In this paper a first definition of guidelines for the use of VHR multispectral imagery to detect monument destruction in unsafe areas is reported. The proposed methodology, agreed with UNESCO and soon to be used in Libya for the coastal area, has produced a first report delivered to the Iraqi authorities. Some of the most evident examples are reported to show the capabilities of damage identification using VHR images.
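One widely used pan-sharpening algorithm, the Brovey transform, fits in a few lines; it is shown here as a representative candidate, not as the method the guidelines recommend:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Brovey-transform pansharpening: each multispectral band, already
    resampled to the panchromatic grid, is scaled by pan / intensity,
    where intensity is the per-pixel mean over the bands."""
    intensity = ms.mean(axis=0) + eps
    return ms * (pan / intensity)[None, :, :]

rng = np.random.default_rng(7)
ms = rng.random((4, 64, 64))     # 4 MS bands on the pan grid (synthetic)
pan = ms.mean(axis=0) + 0.05 * rng.random((64, 64))
sharp = brovey_pansharpen(ms, pan)
print("output shape:", sharp.shape)
```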
NASA Astrophysics Data System (ADS)
Alexakis, Dimitrios D.; Sarris, Apostolos; Papadopoulos, Nikos; Soupios, Pantelis; Doula, Maria; Cavvadias, Victor
2014-08-01
The olive-oil industry is one of the most important sectors of agricultural production in Greece, the third-largest olive-oil-producing country worldwide. Olive oil mill wastes (OOMW) are a major source of pollution in olive-growing regions and an important problem to be solved for the agricultural industry. These wastes are normally deposited in tanks, directly in the soil, or even in adjacent torrents, rivers and lakes, posing a high risk of environmental pollution and harm to community health. The GEODIAMETRIS project aims to develop integrated geoinformatic methodologies for monitoring land pollution from the disposal of OOMW on the island of Crete, Greece. These methodologies integrate GPS surveys, satellite remote sensing and risk assessment analysis in a GIS environment, the application of in situ and laboratory geophysical methods, and soil and water physicochemical analysis. Among the project's preliminary results, all operating OOMW areas located in Crete have already been registered through extensive GPS field campaigns. Their spatial and attribute information has been stored in an integrated GIS database, and an overall OOMW spectral signature database has been constructed through the analysis of multi-temporal Landsat-8 OLI satellite images. In addition, a specific OOMW area located in Alikianos village (Chania, Crete) has been selected as one of the main case study areas. Various geophysical methods, such as Electrical Resistivity Tomography, Induced Polarization, multifrequency electromagnetics, Self Potential measurements and Ground Penetrating Radar, have already been applied there. Soil and liquid samples have been collected for physico-chemical analysis. The preliminary results have already contributed to the gradual development of an integrated environmental monitoring tool for studying and understanding environmental degradation from the disposal of OOMW.
Poveda, Ferran; Gil, Debora; Martí, Enric; Andaluz, Albert; Ballester, Manel; Carreras, Francesc
2013-10-01
Deeper understanding of the myocardial structure linking the morphology and function of the heart would unravel crucial knowledge for medical and surgical clinical procedures and studies. Several conceptual models of myocardial fiber organization have been proposed, but the lack of an automatic and objective methodology has prevented an agreement. We sought to deepen this knowledge through advanced computer graphical representations of the myocardial fiber architecture obtained by diffusion tensor magnetic resonance imaging. We performed automatic tractography reconstruction of unsegmented diffusion tensor magnetic resonance imaging datasets of a canine heart from the public database of the Johns Hopkins University. Full-scale tractographies were built with 200 seeds and are composed of streamlines computed on the vector field of primary eigenvectors of the diffusion tensor volumes. We also introduced a novel multiscale visualization technique in order to obtain a simplified tractography. This methodology retains the main geometric features of the fiber tracts, making it easier to decipher the main properties of the architectural organization of the heart. Analysis of our tractographic representations showed exact correlation with low-level details of myocardial architecture, but also with the more abstract conceptualization of a continuous helical ventricular myocardial fiber array. Objective analysis of myocardial architecture by an automated method, including the entire myocardium and using several 3-dimensional levels of complexity, reveals a continuous helical myocardial fiber arrangement of both right and left ventricles, supporting the anatomical model of the helical ventricular myocardial band described by F. Torrent-Guasp. Copyright © 2013 Sociedad Española de Cardiología. Published by Elsevier España. All rights reserved.
Molina-Viedma, Ángel Jesús; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A; Rodríguez-Ahlquist, Javier; Iglesias-Vallejo, Manuel
2018-02-02
In real aircraft structures, the comfort and occupational performance of crewmembers and passengers are affected by the presence of noise. In this sense, special attention is focused on mechanical and material design for isolation and vibration control. Experimental characterization and, in particular, experimental modal analysis provide information for adequate cabin noise control. Traditional sensors employed in the aircraft industry for this purpose are invasive and provide low spatial resolution. This paper presents a methodology for experimental modal characterization of a front fuselage full-scale demonstrator using high-speed 3D digital image correlation, which is non-invasive, ensuring that the structural response is unperturbed by the instrumentation mass. Specifically, full-field measurements on the passenger window area were conducted while the structure was excited using an electrodynamic shaker. The spectral analysis of the measured time-domain displacements made it possible to identify natural frequencies and full-field operational deflection shapes. Changes in the modal parameters due to cabin pressurization and the behavior of different local structural modifications were assessed using this methodology. The proposed full-field methodology allowed the characterization of relevant dynamic response patterns, complementing the capabilities provided by accelerometers.
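The spectral step can be sketched with NumPy/SciPy (signal values and sampling rate are invented): natural frequencies appear as peaks in the spectrum of the displacement measured at a point by the DIC system.

    import numpy as np
    from scipy.fft import rfft, rfftfreq
    from scipy.signal import find_peaks

    fs = 2000.0                              # hypothetical sampling rate (Hz)
    t = np.arange(0, 2.0, 1.0 / fs)
    # Hypothetical displacement of one DIC point: two modes plus noise.
    u = (np.sin(2*np.pi*87*t) + 0.5*np.sin(2*np.pi*143*t)
         + 0.1*np.random.randn(t.size))

    spec = np.abs(rfft(u))
    freqs = rfftfreq(t.size, 1.0 / fs)
    peaks, _ = find_peaks(spec, height=0.3 * spec.max())
    print("natural frequency estimates (Hz):", freqs[peaks])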
Human perception testing methodology for evaluating EO/IR imaging systems
NASA Astrophysics Data System (ADS)
Graybeal, John J.; Monfort, Samuel S.; Du Bosq, Todd W.; Familoni, Babajide O.
2018-04-01
The U.S. Army's RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) Perception Lab is tasked with supporting the development of sensor systems for the U.S. Army by evaluating human performance with emerging technologies. Typical research questions involve detection, recognition and identification as a function of range, blur, noise, spectral band, image processing techniques, image characteristics, and human factors. NVESD's Perception Lab provides an essential bridge between the physics of the imaging systems and the performance of the human operator. In addition to quantifying sensor performance, perception test results can also be used to generate models of human performance and to drive future sensor requirements. The Perception Lab seeks to develop and employ scientifically valid and efficient perception testing procedures within the practical constraints of Army research, including rapid development timelines for critical technologies, unique guidelines for ethical testing of Army personnel, and limited resources. The purpose of this paper is to describe NVESD Perception Lab capabilities and recent methodological improvements designed to align our methodology more closely with scientific best practice, and to discuss goals for future improvements and expanded capabilities. Specifically, we discuss modifying our methodology to improve training, to account for human fatigue, to improve assessments of human performance, and to increase the experimental design consultation provided by research psychologists. Ultimately, this paper outlines a template for assessing human perception and overall system performance related to EO/IR imaging systems.
Quantification of Posterior Globe Flattening: Methodology Development and Validation
NASA Technical Reports Server (NTRS)
Lumpkins, Sarah B.; Garcia, Kathleen M.; Sargsyan, Ashot E.; Hamilton, Douglas R.; Berggren, Michael D.; Ebert, Douglas
2012-01-01
Microgravity exposure affects visual acuity in a subset of astronauts, and mechanisms may include structural changes in the posterior globe and orbit. In particular, posterior globe flattening has been implicated in the eyes of several astronauts. This phenomenon is known to affect some terrestrial patient populations and has been shown to be associated with intracranial hypertension. It is commonly assessed by magnetic resonance imaging (MRI), computed tomography (CT) or B-mode ultrasound (US), without consistent objective criteria. NASA uses a semiquantitative scale of 0-3 as part of eye/orbit MRI and US analysis for occupational monitoring purposes. The goal of this study was to initiate development of an objective quantification methodology to monitor small changes in posterior globe flattening.
Jeong, Seok Hoo; Yoon, Hyun Hwa; Kim, Eui Joo; Kim, Yoon Jae; Kim, Yeon Suk; Cho, Jae Hee
2017-01-01
Endoscopic ultrasound-guided fine needle aspiration (EUS-FNA) is an accurate diagnostic method for pancreatic masses, and its accuracy is affected by the FNA method and the EUS equipment used. We therefore aimed to elucidate the instrumental and methodologic factors determining the diagnostic yield of EUS-FNA for pancreatic solid masses without an on-site cytopathology evaluation. We retrospectively reviewed the medical records of 260 patients (265 pancreatic solid masses) who underwent EUS-FNA, compared a historical conventional-imaging EUS group with a high-resolution-imaging group, and analyzed the various factors affecting EUS-FNA accuracy. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of EUS-FNA for pancreatic solid masses without on-site cytopathology evaluation were 83.4%, 81.8%, 100.0%, 100.0%, and 34.3%, respectively. In comparison with the conventional-imaging group, the high-resolution-imaging group showed increased accuracy and sensitivity of EUS-FNA (71.3% vs 92.7% and 68.9% vs 91.9%, respectively), with specificity unchanged at 100%. On multivariate analysis of the instrumental and methodologic factors, high-resolution imaging (P = 0.040, odds ratio = 3.28) and 3 or more needle passes (P = 0.039, odds ratio = 2.41) were significant factors affecting diagnostic yield. High-resolution imaging and 3 or more passes were the most significant factors influencing the diagnostic yield of EUS-FNA in patients with pancreatic solid masses without an on-site cytopathologist. PMID:28079803
Borotikar, Bhushan S.; Sheehan, Frances T.
2017-01-01
Objectives: To establish an in vivo, normative patellofemoral cartilage contact mechanics database acquired during voluntary muscle control using a novel dynamic magnetic resonance (MR) imaging-based computational methodology, and to validate the sensitivity of the contact mechanics to the known sub-millimeter methodological inaccuracies. Design: Dynamic cine phase-contrast (cine-PC) and multi-plane cine images were acquired while female subjects (n=20, sample of convenience) performed an open kinetic chain (knee flexion-extension) exercise inside a 3-Tesla MR scanner. Static cartilage models were created from high-resolution three-dimensional static MR data and accurately placed in their dynamic pose at each time frame based on the cine-PC data. Cartilage contact parameters were calculated based on the surface overlap. Statistical analysis was performed using paired t-tests and a one-sample repeated measures ANOVA. The sensitivity of the contact parameters to the known errors in the patellofemoral kinematics was determined. Results: Peak mean patellofemoral contact area was 228.7 ± 173.6 mm² at a 40° knee angle. During extension, contact centroid and peak strain locations tracked medially on the femoral and patellar cartilage and were not significantly different from each other. At 30°, 35°, and 40° of knee extension, contact area was significantly different. Contact area and centroid locations were insensitive to rotational and translational perturbations. Conclusion: This study is a first step towards unfolding the biomechanical pathways to anterior patellofemoral pain and osteoarthritis using dynamic, in vivo, and accurate methodologies. The database provides crucial data for future studies and for validation of, or as an input to, computational models. PMID:24012620
NASA Astrophysics Data System (ADS)
Mwaniki, M. W.; Kuria, D. N.; Boitt, M. K.; Ngigi, T. G.
2017-04-01
Image enhancements lead to improved performance and increased accuracy of feature extraction, recognition, identification and classification, and hence of change detection. This increases the utility of remote sensing for environmental applications and aids disaster monitoring of geohazards involving large areas. The main aim of this study was to compare the effect of image enhancement applied to synthetic aperture radar (SAR) data and Landsat 8 imagery for landslide identification and mapping. The methodology involved pre-processing the Landsat 8 imagery, image co-registration and despeckling of the SAR data, after which the Landsat 8 imagery was enhanced by Principal and Independent Component Analysis (PCA and ICA), a spectral index involving bands 7 and 4, and a False Colour Composite (FCC) of the components bearing the most geologic information. The SAR data were processed using textural and edge filters and computation of SAR incoherence. The enhanced spatial, textural and edge information from the SAR data was incorporated into the spectral information from the Landsat 8 imagery during the knowledge-based classification. The methodology was tested in the central highlands of Kenya, characterized by rugged terrain and frequent rainfall-induced landslides. The results showed that the SAR data complemented the Landsat 8 data, whose spectral information was enriched by the FCC with enhanced geologic information. The SAR classification depicted landslides along ridges and lineaments, important information lacking in the Landsat 8 image classification. The success of landslide identification and classification was attributed to the enhancement of geologic features through spectral, textural and roughness properties.
Fast and Accurate Cell Tracking by a Novel Optical-Digital Hybrid Method
NASA Astrophysics Data System (ADS)
Torres-Cisneros, M.; Aviña-Cervantes, J. G.; Pérez-Careta, E.; Ambriz-Colín, F.; Tinoco, Verónica; Ibarra-Manzano, O. G.; Plascencia-Mora, H.; Aguilera-Gómez, E.; Ibarra-Manzano, M. A.; Guzman-Cabrera, R.; Debeir, Olivier; Sánchez-Mondragón, J. J.
2013-09-01
An innovative methodology to detect and track cells in microscope images enhanced by optical cross-correlation techniques is proposed in this paper. In order to increase the tracking sensitivity, image pre-processing has been implemented as a morphological operator on the microscope image. Results show that the pre-processing step allows additional frames of cell tracking, thereby increasing robustness. The proposed methodology can be used to analyze different problems such as mitosis, cell collisions, and cell overlapping, and is ultimately intended to help identify and treat illnesses and malignancies.
Global-Context Based Salient Region Detection in Nature Images
NASA Astrophysics Data System (ADS)
Bao, Hong; Xu, De; Tang, Yingjun
Visual saliency detection provides an alternative methodology to image description in many applications such as adaptive content delivery and image retrieval. One of the main aims of visual attention in computer vision is to detect and segment the salient regions in an image. In this paper, we employ matrix decomposition to detect salient objects in natural images. To efficiently eliminate high-contrast noise regions in the background, we integrate global context information into saliency detection. The most salient region can then be selected as the one that is globally most isolated. The proposed approach intrinsically provides an alternative methodology for modelling attention, with low implementation complexity. Experiments show that our approach achieves much better performance than existing state-of-the-art methods.
ACR Appropriateness Criteria® Chronic Ankle Pain.
Chang, Eric Y; Tadros, Anthony S; Amini, Behrang; Bell, Angela M; Bernard, Stephanie A; Fox, Michael G; Gorbachova, Tetyana; Ha, Alice S; Lee, Kenneth S; Metter, Darlene F; Mooar, Pekka A; Shah, Nehal A; Singer, Adam D; Smith, Stacy E; Taljanovic, Mihra S; Thiele, Ralf; Kransdorf, Mark J
2018-05-01
Chronic ankle pain is a common clinical problem whose cause is often elucidated by imaging. The ACR Appropriateness Criteria for chronic ankle pain define best practices of image ordering. Clinical scenarios are followed by the imaging choices and their appropriateness. The information is in ordered tables with an accompanying narrative explanation to guide physicians to order the right test. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer-reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. Copyright © 2018 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Sterling, N W; Lewis, M M; Du, G; Huang, X
2016-05-27
Parkinson's disease (PD) is a progressive age-related neurodegenerative disorder. Although the pathological hallmark of PD is dopaminergic cell death in the substantia nigra pars compacta, widespread neurodegenerative changes occur throughout the brain as disease progresses. Postmortem studies, for example, have demonstrated the presence of Lewy pathology, apoptosis, and loss of neurotransmitters and interneurons in both cortical and subcortical regions of PD patients. Many in vivo structural imaging studies have attempted to gauge PD-related pathology, particularly in gray matter, with the hope of identifying an imaging biomarker. Reports of brain atrophy in PD, however, have been inconsistent, most likely due to differences in the studied populations (i.e. different disease stages and/or clinical subtypes), experimental designs (i.e. cross-sectional vs. longitudinal), and image analysis methodologies (i.e. automatic vs. manual segmentation). This review attempts to summarize the current state of gray matter structural imaging research in PD in relationship to disease progression, reconciling some of the differences in reported results, and to identify challenges and future avenues.
ACR Appropriateness Criteria® Chronic Hip Pain.
Mintz, Douglas N; Roberts, Catherine C; Bencardino, Jenny T; Baccei, Steven J; Caird, Michelle S; Cassidy, R Carter; Chang, Eric Y; Fox, Michael G; Gyftopoulos, Soterios; Kransdorf, Mark J; Metter, Darlene F; Morrison, William B; Rosenberg, Zehava S; Shah, Nehal A; Small, Kirstin M; Subhas, Naveen; Tambar, Siddharth; Towers, Jeffrey D; Yu, Joseph S; Weissman, Barbara N
2017-05-01
Chronic hip pain is a common clinical problem whose cause is often elucidated by imaging. The ACR Appropriateness Criteria for chronic hip pain define best practices of image ordering. Clinical scenarios are followed by the imaging choices and their appropriateness. The information is in ordered tables with an accompanying narrative explanation to guide physicians to order the right test. The American College of Radiology Appropriateness Criteria are evidence-based guidelines for specific clinical conditions that are reviewed annually by a multidisciplinary expert panel. The guideline development and revision include an extensive analysis of current medical literature from peer-reviewed journals and the application of well-established methodologies (RAND/UCLA Appropriateness Method and Grading of Recommendations Assessment, Development, and Evaluation or GRADE) to rate the appropriateness of imaging and treatment procedures for specific clinical scenarios. In those instances where evidence is lacking or equivocal, expert opinion may supplement the available evidence to recommend imaging or treatment. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
Katchergin, Ofer
2017-09-01
The neurocentric worldview that identifies the essence of the human being with the material brain has become a central paradigm in current academic discourse. Israeli researchers also seek to understand educational principles and processes via neuroscientific models. Against this background, the article uncovers the central role that visual brain images play in the learning-disabilities field in Israel. It examines the place brain images have in the professional imagination of didactic-diagnosticians as well as their influence on the diagnosticians' clinical attitudes. It relies on two theoretical fields: sociology and anthropology of the body, and sociology of neuromedical knowledge. The research consists of three methodologies: ethnographic observations, in-depth interviews, and rhetorical analysis of visual and verbal texts. It uncovers the various rhetorical and ideological functions of brain images in the field. It also charts the repertoire of rhetorical devices which are utilized to strengthen the neuroreductionist messages contained in the images.
2017-06-09
…primary question. This thesis used the case study research methodology with a Capability-Based Assessment (CBA) approach. My engagement in this… protected by more restrictions in their home countries, in which case further publication or sale of copyrighted images is not permissible… effective coordinating mechanism. The research follows the case study method, utilizing the Capability-Based Assessment (CBA) approach to scrutinize the…
NASA Astrophysics Data System (ADS)
Ye, Su; Pontius, Robert Gilmore; Rakshit, Rahul
2018-07-01
Object-based image analysis (OBIA) has gained widespread popularity for creating maps from remotely sensed data. Researchers routinely claim that OBIA procedures outperform pixel-based procedures; however, it is not immediately obvious how to evaluate the degree to which an OBIA map agrees with reference information in a manner that accounts for the fact that the OBIA map consists of objects that vary in size and shape. Our study reviews 209 journal articles concerning OBIA published between 2003 and 2017. We focus on the three stages of accuracy assessment: (1) sampling design, (2) response design and (3) accuracy analysis. First, we report the literature's overall characteristics concerning OBIA accuracy assessment. Simple random sampling was the most used method among probability sampling strategies, slightly more than stratified sampling. Office-interpreted remotely sensed data were the dominant reference source. The literature reported accuracies ranging from 42% to 96%, with an average of 85%. A third of the articles failed to give sufficient information concerning accuracy methodology, such as sampling scheme and sample size. We found few studies that focused specifically on the accuracy of the segmentation. Second, we identify a recent increase of OBIA articles using per-polygon rather than per-pixel approaches for accuracy assessment. We clarify the impacts of the per-pixel versus the per-polygon approaches on sampling, response design and accuracy analysis, respectively. Our review defines the technical and methodological needs of current per-polygon approaches, such as polygon-based sampling, analysis of mixed polygons, matching of mapped with reference polygons, and assessment of segmentation accuracy. Our review summarizes and discusses the current issues in object-based accuracy assessment to provide guidance for improved accuracy assessments for OBIA.
Borri, Marco; Schmidt, Maria A.; Powell, Ceri; Koh, Dow-Mu; Riddell, Angela M.; Partridge, Mike; Bhide, Shreerang A.; Nutting, Christopher M.; Harrington, Kevin J.; Newbold, Katie L.; Leach, Martin O.
2015-01-01
Purpose: To describe a methodology, based on cluster analysis, to partition multi-parametric functional imaging data into groups (or clusters) of similar functional characteristics, with the aim of characterizing functional heterogeneity within head and neck tumour volumes, and to evaluate the performance of the proposed approach on a set of longitudinal MRI data, analysing the evolution of the obtained sub-sets with treatment. Material and Methods: The cluster analysis workflow was applied to a combination of dynamic contrast-enhanced and diffusion-weighted MRI data from a cohort of patients with squamous cell carcinoma of the head and neck. Cumulative distributions of voxels, containing pre- and post-treatment data and including both primary tumours and lymph nodes, were partitioned into k clusters (k = 2, 3 or 4). Principal component analysis and cluster validation were employed to investigate data composition and to independently determine the optimal number of clusters. The evolution of the resulting sub-regions with induction chemotherapy treatment was assessed relative to the number of clusters. Results: The clustering algorithm was able to separate clusters which significantly reduced in voxel number following induction chemotherapy from clusters with a non-significant reduction. Partitioning with the optimal number of clusters (k = 4), determined with cluster validation, produced the best separation between reducing and non-reducing clusters. Conclusion: The proposed methodology was able to identify tumour sub-regions with distinct functional properties, independently separating clusters which were affected differently by treatment. This work demonstrates that unsupervised cluster analysis, with no prior knowledge of the data, can be employed to provide a multi-parametric characterization of functional heterogeneity within tumour volumes. PMID:26398888
Optimized Design and Analysis of Sparse-Sampling fMRI Experiments
Perrachione, Tyler K.; Ghosh, Satrajit S.
2013-01-01
Sparse-sampling is an important methodological advance in functional magnetic resonance imaging (fMRI), in which silent delays are introduced between MR volume acquisitions, allowing for the presentation of auditory stimuli without contamination by acoustic scanner noise and for overt vocal responses without motion-induced artifacts in the functional time series. As such, the sparse-sampling technique has become a mainstay of principled fMRI research into the cognitive and systems neuroscience of speech, language, hearing, and music. Despite being in use for over a decade, there has been little systematic investigation of the acquisition parameters, experimental design considerations, and statistical analysis approaches that bear on the results and interpretation of sparse-sampling fMRI experiments. In this report, we examined how design and analysis choices related to the duration of repetition time (TR) delay (an acquisition parameter), stimulation rate (an experimental design parameter), and model basis function (an analysis parameter) act independently and interactively to affect the neural activation profiles observed in fMRI. First, we conducted a series of computational simulations to explore the parameter space of sparse design and analysis with respect to these variables; second, we validated the results of these simulations in a series of sparse-sampling fMRI experiments. Overall, these experiments suggest the employment of three methodological approaches that can, in many situations, substantially improve the detection of neurophysiological response in sparse fMRI: (1) Sparse analyses should utilize a physiologically informed model that incorporates hemodynamic response convolution to reduce model error. (2) The design of sparse fMRI experiments should maintain a high rate of stimulus presentation to maximize effect size. (3) TR delays of short to intermediate length can be used between acquisitions of sparse-sampled functional image volumes to increase the number of samples and improve statistical power. PMID:23616742
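Point (1) above can be sketched by convolving a sparse stimulus train with a canonical double-gamma hemodynamic response function (a common approximation; the parameters below follow the widely used peak-at-6-s, undershoot-at-16-s convention, and the timing values are hypothetical) and sampling the result only at the sparse acquisition times:

    import numpy as np
    from scipy.stats import gamma

    dt = 0.1                                   # model resolution (s)
    t = np.arange(0, 32, dt)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # double-gamma HRF

    run_len, tr = 240.0, 10.0                  # hypothetical sparse TR of 10 s
    time = np.arange(0, run_len, dt)
    stim = np.zeros_like(time)
    stim[(np.arange(0, run_len, 2.0) / dt).astype(int)] = 1.0  # 0.5 Hz stimuli

    bold = np.convolve(stim, hrf)[:time.size]  # predicted BOLD time course
    acq = bold[(np.arange(0, run_len, tr) / dt).astype(int)]   # sparse samples
    print("sparse-sampled regressor:", np.round(acq[:5], 3))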
NASA Astrophysics Data System (ADS)
Buckley, J.; Wilkinson, D.; Malaroda, A.; Metcalfe, P.
2017-01-01
Three alternative methodologies to the Computed Tomography Dose Index (CTDI) for the evaluation of Cone-Beam Computed Tomography dose are compared: the Cone-Beam Dose Index (CBDI), the methodology recommended in IAEA Human Health Report No. 5, and the methodology recommended by AAPM Task Group 111 (TG-111). The protocols were evaluated for Pelvis and Thorax scan modes on Varian® On-Board Imager (OBI) and Truebeam kV XI imaging systems. The weighted planar average dose was highest for the AAPM methodology across all scans, with the CBDI the second highest overall. For the XI system, decreases of 17.96% and 1.14% from the TG-111 protocol to the IAEA and CBDI protocols, respectively, were observed in Pelvis mode, and decreases of 18.15% and 13.10% in Thorax mode. For the OBI system, the corresponding variations were 16.46% and 7.14% in Pelvis mode, and 15.93% relative to the CBDI protocol in Thorax mode.
Smith, Joseph P; Smith, Frank C; Booksh, Karl S
2018-03-01
Lunar meteorites provide a more random sampling of the surface of the Moon than do the returned lunar samples, and they provide valuable information to help estimate the chemical composition of the lunar crust, the lunar mantle, and the bulk Moon. As of July 2014, ∼96 lunar meteorites had been documented, ten of which are unbrecciated mare basalts. Using Raman imaging with multivariate curve resolution-alternating least squares (MCR-ALS), we investigated portions of polished thin sections of paired, unbrecciated, mare-basalt lunar meteorites collected from the LaPaz Icefield (LAP) of Antarctica: LAP 02205 and LAP 04841. Polarized light microscopy shows that both meteorites are heterogeneous, consisting of particles of varied size, shape, and chemical composition. For two distinct probed areas within each meteorite, the individual chemical species and associated chemical maps were elucidated using MCR-ALS applied to Raman hyperspectral images. For LAP 02205, spatially and spectrally resolved clinopyroxene, ilmenite, substrate-adhesive epoxy, and diamond polish were observed within the probed areas. Similarly, for LAP 04841, spatially resolved chemical images with corresponding resolved Raman spectra of clinopyroxene, troilite, a high-temperature polymorph of anorthite, substrate-adhesive epoxy, and diamond polish were generated. In both LAP 02205 and LAP 04841, substrate-adhesive epoxy and diamond polish were more readily observed within fracture/veinlet features. Spectrally diverse clinopyroxenes were resolved in LAP 04841. Factors that allow these resolved clinopyroxenes to be differentiated include crystal orientation, spatially distinct chemical zoning of pyroxene crystals, and/or chemical and molecular composition. The minerals identified using this analytical methodology, clinopyroxene, anorthite, ilmenite, and troilite, are consistent with the results of previous studies of the two meteorites using electron microprobe analysis. To our knowledge, this is the first report of MCR-ALS with Raman imaging used for the investigation of lunar or other types of meteorites. We have demonstrated the use of multivariate analysis methods, namely MCR-ALS, with Raman imaging to investigate heterogeneous lunar meteorites. Our analytical methodology can be used to elucidate the chemical, molecular, and structural characteristics of phases in a host of complex, heterogeneous geological, geochemical, and extraterrestrial materials.
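The MCR-ALS decomposition itself can be sketched with NIST's open-source pyMCR package (an assumption; the authors' actual software is not named in the abstract). The hyperspectral cube is unfolded into a pixels × wavenumbers matrix and factored into resolved spectra and per-pixel concentration maps:

    import numpy as np
    from pymcr.mcr import McrAR
    from pymcr.constraints import ConstraintNonneg

    # Hypothetical Raman cube: 64 x 64 pixels, 500 wavenumber channels.
    cube = np.random.rand(64, 64, 500)
    D = cube.reshape(-1, 500)                # unfold to pixels x channels

    # Initial guesses for 4 component spectra (e.g. from purest pixels).
    ST_guess = np.random.rand(4, 500)

    mcr = McrAR(max_iter=50,
                c_constraints=[ConstraintNonneg()],
                st_constraints=[ConstraintNonneg()])
    mcr.fit(D, ST=ST_guess)

    maps = mcr.C_opt_.reshape(64, 64, -1)    # chemical map per resolved species
    spectra = mcr.ST_opt_                    # resolved component spectra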
Chain of evidence generation for contrast enhancement in digital image forensics
NASA Astrophysics Data System (ADS)
Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela
2010-01-01
The quality of the images obtained by digital cameras has improved considerably since the early days of digital photography. Unfortunately, it is not unusual in image forensics to encounter wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also to backlight conditions. To recover details that would otherwise remain invisible, a stretch of the image contrast is required. Forensic rules for producing evidence require complete documentation of the processing steps, enabling replication of the entire process. The automation of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents an automatic procedure to find contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a pre-processing step which extracts the features of the image and selects correction parameters. The parameters are then saved in JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being widely used in image forensics analysis), thus permitting replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
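A minimal sketch of the two-step idea, assuming a simple percentile-based linear stretch as the correction model (the paper's actual feature extraction is more elaborate, and the file names here are hypothetical): the first pass derives the parameters and logs them to disk, so a second pass, or a generated Photoshop script, can replay the identical correction.

    import json
    import numpy as np
    from PIL import Image

    def extract_params(img, low_pct=1.0, high_pct=99.0):
        """Step 1: derive stretch parameters from the image histogram."""
        g = np.asarray(img.convert("L"), dtype=np.float64)
        lo, hi = np.percentile(g, [low_pct, high_pct])
        return {"low": float(lo), "high": float(hi)}

    def apply_stretch(img, params):
        """Step 2: replay the documented linear contrast stretch."""
        a = np.asarray(img, dtype=np.float64)
        out = (a - params["low"]) * 255.0 / (params["high"] - params["low"])
        return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

    img = Image.open("evidence.jpg")
    params = extract_params(img)
    with open("enhancement_log.json", "w") as f:
        json.dump(params, f)                 # documents the chain of evidence
    apply_stretch(img, params).save("evidence_enhanced.jpg")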
Semi-automatic mapping of linear-trending bedforms using 'Self-Organizing Maps' algorithm
NASA Astrophysics Data System (ADS)
Foroutan, M.; Zimbelman, J. R.
2017-09-01
Increased availability of high-resolution spatial data, such as high-resolution satellite or Unmanned Aerial Vehicle (UAV) images of Earth as well as High Resolution Imaging Science Experiment (HiRISE) images of Mars, increases the need for automated techniques capable of extracting detailed geomorphologic elements from such large data sets. Model validation by repeated imaging in environmental management studies, such as studies of climate-related change, together with increasing access to high-resolution satellite images, underlines the demand for detailed automatic image-processing techniques in remote sensing. This study presents a methodology based on an unsupervised Artificial Neural Network (ANN) algorithm, known as the Self-Organizing Map (SOM), to achieve semi-automatic extraction of linear features with small footprints from satellite images. SOM is based on competitive learning and is efficient for handling huge data sets. We applied the SOM algorithm to high-resolution satellite images of Earth and Mars (QuickBird, WorldView and HiRISE) in order to facilitate and speed up image analysis and to improve the accuracy of the results. An overall accuracy of about 98% and a quantization error of 0.001 in the recognition of small linear-trending bedforms demonstrate a promising framework.
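A sketch of the SOM stage using the MiniSom package (an assumption; the study's implementation is not named), clustering hypothetical per-pixel feature vectors and reporting the quantization error used as a quality figure above:

    import numpy as np
    from minisom import MiniSom

    # Hypothetical per-pixel feature vectors (e.g. bands plus texture), 0-1.
    pixels = np.random.rand(10000, 4)

    som = MiniSom(8, 8, input_len=4, sigma=1.0, learning_rate=0.5,
                  random_seed=42)
    som.random_weights_init(pixels)
    som.train_random(pixels, num_iteration=5000)

    # Each pixel maps to its best-matching unit on the 8 x 8 grid; units can
    # then be labelled as bedform vs. background to extract linear features.
    bmus = np.array([som.winner(p) for p in pixels])
    print("quantization error:", som.quantization_error(pixels))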
Detecting large-scale networks in the human brain using high-density electroencephalography.
Liu, Quanying; Farahibozorg, Seyedehrezvan; Porcaro, Camillo; Wenderoth, Nicole; Mantini, Dante
2017-09-01
High-density electroencephalography (hdEEG) is an emerging brain imaging technique that can be used to investigate fast dynamics of electrical activity in the healthy and the diseased human brain. Its applications are however currently limited by a number of methodological issues, among which is the difficulty in obtaining accurate source localizations. In particular, these issues have so far prevented EEG studies from reporting brain networks similar to those previously detected by functional magnetic resonance imaging (fMRI). Here, we report for the first time a robust detection of brain networks from resting state (256-channel) hdEEG recordings. Specifically, we obtained 14 networks previously described in fMRI studies by means of realistic 12-layer head models and exact low-resolution brain electromagnetic tomography (eLORETA) source localization, together with independent component analysis (ICA) for functional connectivity analysis. Our analyses revealed three important methodological aspects. First, brain network reconstruction can be improved by performing source localization using the gray matter as source space, instead of the whole brain. Second, conducting EEG connectivity analyses in individual space rather than on concatenated datasets may be preferable, as it makes it possible to incorporate realistic information on head modeling and electrode positioning. Third, the use of a wide frequency band leads to an unbiased and generally accurate reconstruction of several network maps, whereas filtering data in a narrow frequency band may enhance the detection of specific networks and penalize that of others. We hope that our methodological work will contribute to the rise of hdEEG as a powerful tool for brain research. Hum Brain Mapp 38:4631-4643, 2017. © 2017 Wiley Periodicals, Inc.
Banzato, T; Bonsembiante, F; Aresu, L; Gelain, M E; Burti, S; Zotti, A
2018-03-01
The aim of this methodological study was to develop a deep convolutional neural network (DNN) to detect degenerative hepatic disease from ultrasound images of the liver in dogs, and to compare the diagnostic accuracy of the newly developed DNN with that of serum biochemistry and cytology on the same samples, using histopathology as the reference standard. Dogs with suspected hepatic disease that had no prior history of neoplastic disease, no hepatic nodular pathology, no ascites, and ultrasonography performed 24 h prior to death were included in the study (n=52). Ultrasonography and serum biochemistry were performed as part of the routine clinical evaluation. On the basis of histopathology, dogs were categorised as 'normal' (n=8), or as having 'vascular abnormalities' (n=8), 'inflammatory' (n=0), 'neoplastic' (n=4) or 'degenerative' (n=32) disease; dogs with 'neoplastic' disease were excluded from further analysis. On cytological evaluation, dogs were categorised as 'normal' (n=11), or as having 'inflammatory' (n=0), 'neoplastic' (n=4) or 'degenerative' (n=37) disease. Due to the limited sample size, dogs were grouped as having 'degenerative' (n=32) or 'non-degenerative' (n=16) liver disease for analysis. The DNN was developed using a transfer-learning methodology on a pre-trained neural network that was retrained and fine-tuned on our data set. The resultant DNN had a high diagnostic accuracy for degenerative liver disease (area under the curve 0.91; sensitivity 100%; specificity 82.8%). Cytology and serum biochemical markers (alanine transaminase and aspartate transaminase) had poor diagnostic accuracy in the detection of degenerative liver disease. The DNN outperformed all the other non-invasive diagnostic tests in the detection of degenerative liver disease. Copyright © 2018 Elsevier Ltd. All rights reserved.
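The transfer-learning recipe can be sketched in PyTorch (an assumption; the paper names neither the framework nor the backbone): an ImageNet pre-trained network gets a new two-class head and is fine-tuned at a low learning rate.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Pre-trained backbone; the authors' actual architecture is not specified.
    net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    net.fc = nn.Linear(net.fc.in_features, 2)  # degenerative vs. non-degenerative

    optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    # One fine-tuning step on a hypothetical batch of ultrasound crops.
    x = torch.randn(8, 3, 224, 224)            # 8 liver-image crops
    y = torch.randint(0, 2, (8,))              # histopathology-derived labels
    loss = criterion(net(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()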
Laurinavicius, Arvydas; Plancoulaine, Benoit; Rasmusson, Allan; Besusparis, Justinas; Augulis, Renaldas; Meskauskas, Raimundas; Herlin, Paulette; Laurinaviciene, Aida; Abdelhadi Muftah, Abir A; Miligy, Islam; Aleskandarany, Mohammed; Rakha, Emad A; Green, Andrew R; Ellis, Ian O
2016-04-01
Proliferative activity, assessed by Ki67 immunohistochemistry (IHC), is an established prognostic and predictive biomarker of breast cancer (BC). However, it remains under-utilized due to a lack of standardized, robust measurement methodologies and the significant intratumor heterogeneity of its expression. A recently proposed methodology for IHC biomarker assessment in whole slide images (WSI), based on systematic subsampling of tissue information extracted by digital image analysis (DIA) into hexagonal tiling arrays, enables computation of a comprehensive set of Ki67 indicators, including intratumor variability. In this study, the tiling methodology was applied to assess Ki67 expression in WSI of 152 surgically removed Ki67-stained (on full-face sections) BC specimens and to test which, if any, Ki67 indicators can predict overall survival (OS). Visual Ki67 IHC estimates and conventional clinico-pathologic parameters were also included in the study. Analysis revealed linearly independent intrinsic factors of the Ki67 IHC variance: proliferation (level of expression), disordered texture (entropy), tumor size and Nottingham Prognostic Index, bimodality, and correlation. All visual and DIA-generated indicators of the level of Ki67 expression provided significant cutoff values as single predictors of OS. However, only bimodality indicators (Ashman's D, in particular) were independent predictors of OS in the context of hormone receptor and HER2 status. From this, we conclude that the spatial heterogeneity of proliferative tumor activity, measured by DIA of Ki67 IHC expression and analyzed by the hexagonal tiling approach, can serve as an independent prognostic indicator of OS in BC patients, outperforming the prognostic power of the level of proliferative activity.
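Ashman's D, the bimodality indicator singled out above, is D = |μ1 − μ2| / sqrt((σ1² + σ2²)/2) for a two-component mixture, with D > 2 conventionally taken to indicate clear separation. A sketch using scikit-learn on hypothetical per-tile Ki67 values:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical Ki67 positivity fractions from the hexagonal tiles.
    tiles = np.concatenate([np.random.normal(0.15, 0.05, 300),
                            np.random.normal(0.55, 0.08, 200)]).reshape(-1, 1)

    gm = GaussianMixture(n_components=2, random_state=0).fit(tiles)
    mu1, mu2 = gm.means_.ravel()
    s1, s2 = np.sqrt(gm.covariances_.ravel())

    D = abs(mu1 - mu2) / np.sqrt((s1**2 + s2**2) / 2.0)  # Ashman's D
    print(f"Ashman's D = {D:.2f}")          # > 2 suggests bimodal expression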
Fozooni, Tahereh; Ravan, Hadi; Sasan, Hosseinali
2017-12-01
Due to their unique properties, such as programmability, ligand-binding capability, and flexibility, nucleic acids can serve as analytes and/or recognition elements for biosensing. To improve the sensitivity of nucleic acid-based biosensing, and hence allow detection of a few copies of a target molecule, different modern amplification methodologies, namely target- and signal-based amplification strategies, have been developed. The recent signal amplification technologies, which are capable of amplifying the signal intensity without changing the target's copy number, have resulted in fast, reliable, and sensitive methods for nucleic acid detection. Working in cell-free settings, researchers have been able to optimize a variety of complex and quantitative methods suitable for deployment in live-cell conditions. In this study, a comprehensive review of signal amplification technologies for the detection of nucleic acids is provided. We classify the signal amplification methodologies into enzymatic and non-enzymatic strategies, with a primary focus on methods that enable the shift from in vitro detection to in vivo imaging. Finally, the future challenges and limitations of detection in cellular conditions are discussed.
The use of microtomography in structural geology: A new methodology to analyse fault faces
NASA Astrophysics Data System (ADS)
Jacques, Patricia D.; Nummer, Alexis Rosa; Heck, Richard J.; Machado, Rômulo
2014-09-01
This paper describes a new methodology for kinematic analysis of faults at the microscale (voxel size = 40 μm), using images obtained by X-ray computed microtomography (μCT) with a GE MS8x-130 scanner. The methodology was developed using rock samples from Santa Catarina State, Brazil, and constructing micro Digital Elevation Models (μDEMs) of the fault surfaces for analysing microscale brittle structures, including striations, roughness and steps. Shaded-relief images were created from the μDEMs, enabling the generation of profiles to classify the secondary structures associated with the main fault surface. In the case of a sample with mineral growth covering the fault surface, it is possible to detect the kinematic geometry even with the mineral cover. This technique proved useful for determining the sense of movement of faults, especially when striations cannot be determined in macro- or microscopic analysis. When the sample has a mineral deposit on the surface (mineral cover), the technique also allows a relative chronology and geometric characterization of faults with and without cover.
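The shaded-relief step can be sketched directly from a μDEM array; a standard hillshade formula is assumed here (the paper does not state its exact formulation), combining surface slope and aspect with an illumination direction:

    import numpy as np

    def hillshade(dem, cellsize, azimuth_deg=315.0, altitude_deg=45.0):
        """Standard hillshade of a DEM array, returned as 0-255 grey levels."""
        az, alt = np.radians(azimuth_deg), np.radians(altitude_deg)
        dzdy, dzdx = np.gradient(dem, cellsize)
        slope = np.arctan(np.hypot(dzdx, dzdy))
        aspect = np.arctan2(dzdy, -dzdx)
        shade = (np.sin(alt) * np.cos(slope) +
                 np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
        return np.clip(255 * shade, 0, 255).astype(np.uint8)

    # Hypothetical fault-surface uDEM at 40 um voxel size (cell size in mm).
    dem = np.cumsum(np.random.rand(256, 256), axis=1)  # synthetic sloping surface
    relief = hillshade(dem, cellsize=0.04)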
de Sousa Costa, Robherson Wector; da Silva, Giovanni Lucca França; de Carvalho Filho, Antonio Oseas; Silva, Aristófanes Corrêa; de Paiva, Anselmo Cardoso; Gattass, Marcelo
2018-05-23
Lung cancer is the leading cause of cancer death worldwide and has one of the lowest survival rates after diagnosis. This study therefore proposes a methodology for the diagnosis of lung nodules as benign or malignant based on image processing and pattern recognition techniques. Mean phylogenetic distance (MPD) and the taxonomic diversity index (Δ) were used as texture descriptors. Finally, a genetic algorithm in conjunction with a support vector machine was applied to select the best training model. The proposed methodology was tested on computed tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), achieving a best sensitivity of 93.42%, specificity of 91.21%, accuracy of 91.81%, and area under the ROC curve of 0.94. The results demonstrate the promising performance of texture extraction techniques using mean phylogenetic distance and the taxonomic diversity index combined with phylogenetic trees.
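The classification stage can be sketched with scikit-learn; note that a plain grid search is used below as a stand-in for the paper's genetic-algorithm model selection, and random features stand in for the phylogenetic texture descriptors:

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # Hypothetical texture descriptors (MPD, taxonomic indices) per nodule.
    X = np.random.rand(200, 12)
    y = np.random.randint(0, 2, 200)         # 0 = benign, 1 = malignant

    search = GridSearchCV(SVC(kernel="rbf"),
                          {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]},
                          cv=5, scoring="roc_auc")
    search.fit(X, y)
    print("best params:", search.best_params_,
          " CV AUC: %.2f" % search.best_score_)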
Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D
2016-06-01
A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features of both while demonstrating the limitations of each imaging modality. This semi-automatic methodology registered each modality using a coarse-graining technique to speed the registration of 2D histology sections to high-resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also demonstrated the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D-printed titanium lattice implants.
NASA Astrophysics Data System (ADS)
Lapuebla-Ferri, Andrés; Cegoñino-Banzo, José; Jiménez-Mocholí, Antonio-José; Pérez del Palomar, Amaya
2017-11-01
In breast cancer screening or diagnosis, it is usual to combine different images in order to locate a lesion as accurately as possible. These images are generated using one or several imaging techniques. As x-ray-based mammography is widely used, a breast lesion is located in the plane of the image (mammogram), but tracking it across mammograms corresponding to different views is a challenging task for medical physicians. Accordingly, simulation tools and methodologies that use patient-specific numerical models can facilitate the task of fusing information from different images. Additionally, these tools need to be as straightforward as possible to facilitate their translation to the clinical setting. This paper presents a patient-specific, finite-element-based and semi-automated simulation methodology to track breast lesions across mammograms. A realistic three-dimensional computer model of a patient's breast was generated from magnetic resonance imaging to simulate mammographic compressions in the cranio-caudal (CC, head-to-toe) and medio-lateral oblique (MLO, shoulder-to-opposite-hip) directions. For each simulated compression, a virtual mammogram was obtained and subsequently superimposed on the corresponding real mammogram, using the nipple as a common feature. Two-dimensional rigid-body transformations were applied, and the error distance measured between the centroids of the tumors previously located on each image was 3.84 mm for CC and 2.41 mm for MLO compression. Considering that the scope of this work is to conceive a methodology translatable to clinical practice, the results indicate that it could be helpful in supporting the tracking of breast lesions.
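The fusion step reduces to a 2D rigid-body alignment; a minimal translation-only sketch (the coordinates are invented, and the paper applies full rigid-body transformations) mapping the virtual nipple onto the real one and then measuring the residual distance between tumour centroids, as reported above:

    import numpy as np

    # Hypothetical 2D coordinates (mm) on the real and virtual mammograms.
    nipple_real = np.array([61.0, 40.0])
    nipple_virtual = np.array([58.5, 43.2])
    tumour_real = np.array([84.0, 72.0])
    tumour_virtual = np.array([82.1, 76.9])

    # Translation that brings the virtual nipple onto the real one.
    t = nipple_real - nipple_virtual
    tumour_virtual_aligned = tumour_virtual + t

    # Residual error distance between tumour centroids after alignment.
    err = np.linalg.norm(tumour_virtual_aligned - tumour_real)
    print(f"centroid error distance: {err:.2f} mm")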
Simulating Colour Vision Deficiency from a Spectral Image.
Shrestha, Raju
2016-01-01
People with colour vision deficiency (CVD) have difficulty seeing full colour contrast and can miss some of the features in a scene. As part of universal design, researchers have been working on how to modify and enhance the colours of images in order to let such viewers see the scene with good contrast. For this, it is important to know how the original colour image is seen by individuals with different types of CVD. This paper proposes a methodology to simulate accurate colour-deficient images from a spectral image using the cone sensitivities of different cases of deficiency. As the method enables the generation of accurate colour-deficient images, it is believed to help better understand the limitations imposed by colour vision deficiency, which in turn can lead to the design and development of more effective imaging technologies for better and wider accessibility in the context of universal design.
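A highly simplified sketch of the projection step, assuming a reflectance cube, an equal-energy illuminant and placeholder cone sensitivities (real work would use measured cone fundamentals); protanopia is imitated here by deleting the L-cone signal, whereas accurate simulations remap the missing cone response along dichromat confusion lines:

    import numpy as np

    H, W, N = 32, 32, 31                  # hypothetical 400-700 nm, 10 nm steps
    reflectance = np.random.rand(H, W, N) # spectral image cube
    illuminant = np.ones(N)               # equal-energy illuminant

    # Placeholder cone sensitivity curves (rows: L, M, S).
    lms_sens = np.random.rand(3, N)

    radiance = reflectance * illuminant   # per-pixel spectral radiance
    lms = radiance @ lms_sens.T           # cone responses, shape (H, W, 3)

    lms_protan = lms.copy()
    lms_protan[..., 0] = 0.0              # crude protanopia: no L-cone signal

A faithful simulation would instead replace the missing cone response with a combination of the two remaining ones before rendering back to a displayable colour image.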
Laser scatter in clinical applications
NASA Astrophysics Data System (ADS)
Luther, Ed; Geddie, William
2008-02-01
Brightfield Laser Scanning Imaging (BLSI) is available on Laser Scanning Cytometers (LSCs) from CompuCyte Corporation. Briefly, digitization of the photodetector outputs is coordinated with the combined motion of a small-diameter laser beam (typically 2 to 10 microns) scanning the specimen in the Y direction (directed by a galvanometer-driven scanning mirror) and the microscope stage moving in the X direction. The output measurements are assembled into a two-dimensional array to provide a "non-real" digital image, where each pixel value reports the amount of laser-scattered light obtained when the laser beam is centered on that location. Depending on the detector positions, these images are analogous to Differential Interference Contrast or Phase Contrast microscopy. We report the incorporation of the new laser scattering capabilities into the workflow of a high-volume clinical cytology laboratory at University Health Network, Toronto, Canada. The laboratory has been employing LSC technology since 2003 for immunophenotypic fluorescence analysis of approximately 1200 cytological specimens per year, using the Clatch methodology. The new BLSI component allows visualization of cellular morphology at higher resolution than is possible with standard brightfield microscopic evaluation of unstained cells. BLSI is incorporated into the triage phase, where evaluation of unstained samples is combined with fluorescence evaluation to obtain specimen background levels. Technical details of the imaging methodology are presented, along with illustrative examples from current studies and comparisons to detailed, but obscure, historical studies of cytology specimens based on phase contrast microscopy.
ERIC Educational Resources Information Center
Lewis, Elise C.
2011-01-01
This study was designed to explore the relationships between users and interactive images. Three factors were identified and provided different perspectives on how users interact with images: image utility, information-need, and images with varying levels of interactivity. The study used a mixed methodology to gain a more comprehensive…
A method to perform a fast fourier transform with primitive image transformations.
Sheridan, Phil
2007-05-01
The Fourier transform is one of the most important transformations in image processing. A major component of this influence comes from the ability to implement it efficiently on a digital computer. This paper describes a new methodology to perform a fast Fourier transform (FFT). The methodology emerges from consideration of the natural physical constraints imposed by image capture devices (camera/eye). The novel aspects of the specific FFT method described are: (1) the bit-wise reversal regrouping operation of the conventional FFT is replaced by lossless image rotation and scaling; and (2) the usual arithmetic operations of complex multiplication are replaced with integer addition. The significance of the FFT presented in this paper is introduced by extending a discrete and finite image algebra, named Spiral Honeycomb Image Algebra (SHIA), to a continuous version, named SHIAC.
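For context, the conventional regrouping step that this method replaces with image rotation and scaling is the bit-reversal permutation of the radix-2 FFT; a small Python sketch of that standard step:

    import numpy as np

    def bit_reverse_permute(x):
        """Reorder a length-2^m array by bit-reversed index (radix-2 FFT step)."""
        n = len(x)
        m = n.bit_length() - 1
        idx = [int(format(i, f"0{m}b")[::-1], 2) for i in range(n)]
        return np.asarray(x)[idx]

    print(bit_reverse_permute(np.arange(8)))   # [0 4 2 6 1 5 3 7]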
Efficient testing methodologies for microcameras in a gigapixel imaging system
NASA Astrophysics Data System (ADS)
Youn, Seo Ho; Marks, Daniel L.; McLaughlin, Paul O.; Brady, David J.; Kim, Jungsang
2013-04-01
Multiscale parallel imaging, based on a monocentric optical design, promises revolutionary advances in diverse imaging applications by enabling high-resolution, real-time image capture over a wide field-of-view (FOV), including sports broadcasting, wide-field microscopy, astronomy, and security surveillance. The recently demonstrated AWARE-2 is a gigapixel camera consisting of an objective lens and 98 microcameras spherically arranged to capture an image over a FOV of 120° by 50°, using computational image processing to form a composite image of 0.96 gigapixels. Since the microcameras are capable of individually adjusting exposure, gain, and focus, true parallel imaging is achieved with a high dynamic range. From the integration perspective, manufacturing microcameras of consistent, verified quality is key to the successful realization of AWARE cameras. We have developed an efficient testing methodology that utilizes a precisely fabricated dot grid chart as a calibration target to extract critical optical properties, such as optical distortion, veiling glare index, and modulation transfer function, to validate the imaging performance of microcameras. This approach utilizes an AWARE objective lens simulator which mimics the actual objective lens but operates with a short object distance, suitable for a laboratory environment. Here we describe the principles of the methodologies developed for AWARE microcameras and discuss the experimental results with our prototype microcameras. Reference: Brady, D. J., Gehm, M. E., Stack, R. A., Marks, D. L., Kittle, D. S., Golish, D. R., Vera, E. M., and Feller, S. D., "Multiscale gigapixel photography," Nature 486, 386-389 (2012).
Multi-observation PET image analysis for patient follow-up quantitation and therapy assessment
NASA Astrophysics Data System (ADS)
David, S.; Visvikis, D.; Roux, C.; Hatt, M.
2011-09-01
In positron emission tomography (PET) imaging, an early therapeutic response is usually characterized by variations of semi-quantitative parameters restricted to the maximum SUV measured in PET scans during the treatment. Such measurements do not reflect overall tumor volume and radiotracer uptake variations. The proposed approach is based on multi-observation image analysis, merging several PET acquisitions to assess tumor metabolic volume and uptake variations. The fusion algorithm is based on iterative estimation using a stochastic expectation maximization (SEM) algorithm. The proposed method was applied to simulated and clinical follow-up PET images. We compared the multi-observation fusion performance to threshold-based methods proposed for the assessment of the therapeutic response based on functional volumes. On simulated datasets, the adaptive threshold applied independently on both images led to higher errors than the ASEM fusion, and on clinical datasets it failed to provide coherent measurements for four patients out of seven due to aberrant delineations. The ASEM method demonstrated improved and more robust estimation, leading to more pertinent measurements. Future work will consist of extending the methodology and applying it to clinical multi-tracer datasets in order to evaluate its potential impact on biological tumor volume definition for radiotherapy applications.
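For contrast with the fusion approach, a minimal sketch of the kind of fixed-threshold functional volume delineation such methods are compared against; the 41% of SUVmax fraction and the voxel volume are commonly used illustrative values, not the adaptive scheme evaluated in the paper.

```python
import numpy as np

def threshold_volume(suv, frac=0.41, voxel_ml=0.016):
    """Fixed-fraction delineation: keep voxels with SUV >= frac * SUVmax.

    frac:     0.41 is a commonly cited fraction (assumed here for illustration)
    voxel_ml: volume of one voxel in mL (assumed)
    Returns the binary tumor mask and the metabolic volume in mL.
    """
    mask = suv >= frac * suv.max()
    return mask, float(mask.sum() * voxel_ml)

# Toy 3D uptake map: a hot sphere (SUV ~10) in a warm background (SUV ~1)
z, y, x = np.ogrid[-16:16, -16:16, -16:16]
suv = 1.0 + 9.0 * (x**2 + y**2 + z**2 < 8**2)
mask, volume_ml = threshold_volume(suv)
print(volume_ml)
```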
Hyperspectral imaging of polymer banknotes for building and analysis of spectral library
NASA Astrophysics Data System (ADS)
Lim, Hoong-Ta; Murukeshan, Vadakke Matham
2017-11-01
The use of counterfeit banknotes increases crime rates and cripples the economy. New countermeasures are required to stop counterfeiters who use advancing technologies with criminal intent. Many countries have started adopting polymer banknotes to replace paper notes, as polymer notes are more durable and of better quality. Research on authenticating such banknotes is of much interest to forensic investigators. Hyperspectral imaging can be employed to build a spectral library of polymer notes, which can then be used for classification to authenticate these notes. This is, however, not widely reported and has become a research interest in forensic identification. This paper focuses on the use of hyperspectral imaging on polymer notes to build spectral libraries, using a previously reported pushbroom hyperspectral imager. As an initial study, a spectral library is built from three arbitrarily chosen regions of interest on five circulated genuine polymer notes. Principal component analysis is used for dimension reduction and to convert the information in the spectral library to principal components. A 99% confidence ellipse is formed around the cluster of principal component scores of each class and then used as the classification criterion. The potential of the adopted methodology is demonstrated by the classification of the imaged regions as training samples.
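A hedged sketch of the classification rule described, under a Gaussian assumption: PCA for dimension reduction, then a 99% confidence ellipse per class implemented as a chi-square bound on the Mahalanobis distance in PC space. The synthetic spectra stand in for the banknote reflectance data, which are not available here.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import PCA

# spectra: (n_samples, n_bands) reflectance matrix; labels: class per sample
rng = np.random.default_rng(0)
spectra = rng.normal(size=(60, 200)) + np.repeat(np.arange(3), 20)[:, None] * 0.5
labels = np.repeat(np.arange(3), 20)

pca = PCA(n_components=2).fit(spectra)
scores = pca.transform(spectra)

# 99% confidence ellipse per class: Mahalanobis distance within the
# chi-square bound for 2 degrees of freedom
bound = chi2.ppf(0.99, df=2)
ellipses = {}
for c in np.unique(labels):
    pts = scores[labels == c]
    mu, cov = pts.mean(axis=0), np.cov(pts, rowvar=False)
    ellipses[c] = (mu, np.linalg.inv(cov))

def classify(spectrum):
    """Assign a spectrum to every class whose 99% ellipse contains it."""
    s = pca.transform(spectrum.reshape(1, -1))[0]
    return [c for c, (mu, icov) in ellipses.items()
            if (s - mu) @ icov @ (s - mu) <= bound]

print(classify(spectra[0]))
```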
Evaluation of color grading impact in restoration process of archive films
NASA Astrophysics Data System (ADS)
Fliegel, Karel; Vítek, Stanislav; Páta, Petr; Janout, Petr; Myslík, Jiří; Pecák, Josef; Jícha, Marek
2016-09-01
Color grading of archive films is a very particular task in the process of their restoration. The ultimate goal of color grading here is to achieve the same look of the movie as intended at the time of its first presentation. The role of the expert restorer, expert group, and digital colorist in this complicated process is to find the optimal settings of the digital color grading system so that the resulting image look is as close as possible to the estimate of the original reference release print adjusted by the expert group of cinematographers. A methodology for subjective assessment of perceived differences between the outcomes of color grading is introduced, and results of a subjective study are presented. Techniques for objective assessment of perceived differences are discussed, and their performance is evaluated using ground truth obtained from the subjective experiment. In particular, a solution based on a calibrated digital single-lens reflex camera and subsequent analysis of image features captured from the projection screen is described. The system, based on our previous work, is further developed so that it can be used for the analysis of projected images. It allows assessing color differences in these images and predicting their impact on the perceived difference in image look.
Towards the use of computationally inserted lesions for mammographic CAD assessment
NASA Astrophysics Data System (ADS)
Ghanian, Zahra; Pezeshk, Aria; Petrick, Nicholas; Sahiner, Berkman
2018-03-01
Computer-aided detection (CADe) devices used for breast cancer detection on mammograms are typically first developed and assessed for a specific "original" acquisition system, e.g., a specific image detector. When CADe developers are ready to apply their CADe device to a new mammographic acquisition system, they typically assess the CADe device with images acquired using the new system. Collecting large repositories of clinical images containing verified cancer locations and acquired by the new image acquisition system is costly and time consuming. Our goal is to develop a methodology to reduce the clinical data burden in the assessment of a CADe device for use with a different image acquisition system. We are developing an image blending technique that allows users to seamlessly insert lesions imaged using an original acquisition system into normal images or regions acquired with a new system. In this study, we investigated the insertion of microcalcification clusters imaged using an original acquisition system into normal images acquired with that same system utilizing our previously-developed image blending technique. We first performed a reader study to assess whether experienced observers could distinguish between computationally inserted and native clusters. For this purpose, we applied our insertion technique to clinical cases taken from the University of South Florida Digital Database for Screening Mammography (DDSM) and the Breast Cancer Digital Repository (BCDR). Regions of interest containing microcalcification clusters from one breast of a patient were inserted into the contralateral breast of the same patient. The reader study included 55 native clusters and their 55 inserted counterparts. Analysis of the reader ratings using receiver operating characteristic (ROC) methodology indicated that inserted clusters cannot be reliably distinguished from native clusters (area under the ROC curve, AUC=0.58±0.04). Furthermore, CADe sensitivity was evaluated on mammograms with native and inserted microcalcification clusters using a commercial CADe system. For this purpose, we used full field digital mammograms (FFDMs) from 68 clinical cases, acquired at the University of Michigan Health System. The average sensitivities for native and inserted clusters were equal, 85.3% (58/68). These results demonstrate the feasibility of using the inserted microcalcification clusters for assessing mammographic CAD devices.
Measuring saliency in images: which experimental parameters for the assessment of image quality?
NASA Astrophysics Data System (ADS)
Fredembach, Clement; Woolfe, Geoff; Wang, Jue
2012-01-01
Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with the ones of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a set of parameters, tasks and images that can be used to compare the various saliency prediction methods in a manner that is meaningful for image quality assessment.
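One common way the literature scores a saliency model against gaze-tracking ground truth, referred to above as "correlation techniques", is the Pearson correlation coefficient between the predicted map and a fixation density map. A minimal sketch, with random maps standing in for real model output and gaze data:

```python
import numpy as np

def saliency_cc(saliency_map, fixation_density):
    """Pearson correlation between a predicted saliency map and a
    ground-truth fixation density map (both 2D arrays, same shape)."""
    s = (saliency_map - saliency_map.mean()) / saliency_map.std()
    f = (fixation_density - fixation_density.mean()) / fixation_density.std()
    return float((s * f).mean())

# Example with two random maps (replace with model output and gaze density)
rng = np.random.default_rng(0)
a, b = rng.random((48, 64)), rng.random((48, 64))
print(saliency_cc(a, a), saliency_cc(a, b))   # ~1.0 vs ~0.0
```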
Fuller, John A; Berlinicke, Cynthia A; Inglese, James; Zack, Donald J
2016-01-01
High content analysis (HCA) has become a leading methodology in phenotypic drug discovery efforts. Typical HCA workflows include imaging cells using an automated microscope and analyzing the data using algorithms designed to quantify one or more specific phenotypes of interest. Due to the richness of high content data, unappreciated phenotypic changes may be discovered in existing image sets using interactive machine-learning based software systems. Primary postnatal day four retinal cells from the photoreceptor (PR) labeled QRX-EGFP reporter mice were isolated, seeded, treated with a set of 234 profiled kinase inhibitors and then cultured for 1 week. The cells were imaged with an Acumen plate-based laser cytometer to determine the number and intensity of GFP-expressing, i.e. PR, cells. Wells displaying intensities and counts above threshold values of interest were re-imaged at a higher resolution with an INCell2000 automated microscope. The images were analyzed with an open source HCA analysis tool, PhenoRipper (Rajaram et al., Nat Methods 9:635-637, 2012), to identify the high GFP-inducing treatments that additionally resulted in diverse phenotypes compared to the vehicle control samples. The pyrimidinopyrimidone kinase inhibitor CHEMBL-1766490, a pan kinase inhibitor whose major known targets are p38α and the Src family member lck, was identified as an inducer of photoreceptor neuritogenesis by using the open-source HCA program PhenoRipper. This finding was corroborated using a cell-based method of image analysis that measures quantitative differences in the mean neurite length in GFP expressing cells. Interacting with data using machine learning algorithms may complement traditional HCA approaches by leading to the discovery of small molecule-induced cellular phenotypes in addition to those upon which the investigator is initially focusing.
NASA Astrophysics Data System (ADS)
Chávez, G. Moreno; Sarocchi, D.; Santana, E. Arce; Borselli, L.
2015-12-01
The study of grain size distribution is fundamental for understanding sedimentological environments. Through these analyses, clast erosion, transport, and deposition processes can be interpreted and modeled. However, grain size distribution analysis can be difficult in some outcrops due to the number and complexity of the arrangement of clasts and matrix and their physical size. Despite various technological advances, it is almost impossible to obtain the full grain size distribution (blocks to sand grain size) with a single method or instrument of analysis. For this reason, development in this area continues to be fundamental. In recent years, various methods of particle size analysis by automatic image processing have been developed, due to their potential advantages with respect to classical ones: speed and detailed final content of information (virtually for each analyzed particle). In this framework, we have developed a novel algorithm and software for grain size distribution analysis, based on color image segmentation using an entropy-controlled quadratic Markov measure field algorithm and the Rosiwal method for counting intersections between clasts and linear transects in the images. We tested the novel algorithm on different sedimentary deposit types from 14 varieties of sedimentological environments. The results of the new algorithm were compared with grain counts performed manually by experts using the same Rosiwal method. The new algorithm has the same accuracy as a classical manual count process, but the application of this innovative methodology is much easier and dramatically less time-consuming. The final productivity of the new software for analysis of clast deposits after recording field outcrop images can be increased significantly.
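A hedged sketch of the Rosiwal lineal-intercept idea on an already-segmented binary clast/matrix image: walk horizontal transects and record the intercept (chord) lengths of clast runs, the raw material for a grain size distribution. The segmentation itself (the entropy-controlled Markov measure field step) is not reproduced here, and the transect count and pixel size are assumptions.

```python
import numpy as np

def rosiwal_intercepts(mask, n_transects=20, pixel_size_mm=0.1):
    """Chord (intercept) lengths of clasts along horizontal transects.

    mask: 2D boolean array, True where a pixel belongs to a clast
    Returns intercept lengths in mm.
    """
    rows = np.linspace(0, mask.shape[0] - 1, n_transects).astype(int)
    lengths = []
    for r in rows:
        line = mask[r].astype(int)
        # run-length encode: boundaries where clast runs start and end
        diff = np.diff(np.concatenate(([0], line, [0])))
        starts, ends = np.where(diff == 1)[0], np.where(diff == -1)[0]
        lengths.extend((ends - starts) * pixel_size_mm)
    return np.array(lengths)

# Synthetic check: a vertical clast band 15 px wide -> 1.5 mm intercepts
mask = np.zeros((100, 100), bool)
mask[:, 20:35] = True
print(rosiwal_intercepts(mask).mean())
```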
Quantitative analysis of peel-off degree for printed electronics
NASA Astrophysics Data System (ADS)
Park, Janghoon; Lee, Jongsu; Sung, Ki-Hak; Shin, Kee-Hyun; Kang, Hyunkyoo
2018-02-01
We suggest a facile methodology for peel-off degree evaluation by image processing for printed electronics. The quantification of peeled and printed areas was performed using open-source programs. To verify the accuracy of the method, we manually removed areas from the printed circuit that was measured, resulting in 96.3% accuracy. The sintered patterns showed a decreasing tendency with increasing energy density of the infrared lamp, while the peel-off degree increased; a comparison between both results is presented. Finally, the correlation between performance characteristics was determined by quantitative analysis.
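A minimal sketch of the kind of open-source area quantification the abstract describes: Otsu thresholding of before/after images and a peeled-to-printed area ratio. The assumption that ink is darker than the substrate, and the synthetic test pattern, are illustrative.

```python
import numpy as np
from skimage.filters import threshold_otsu

def peel_off_degree(before, after):
    """Fraction of the printed area lost after tape peeling.

    before/after: grayscale 2D arrays of the printed pattern before and
    after the peel test; ink is assumed darker than the substrate.
    """
    printed = before < threshold_otsu(before)
    remaining = after < threshold_otsu(after)
    peeled = printed & ~remaining
    return peeled.sum() / printed.sum()

# Synthetic check: remove a known 25% of a printed square
before = np.ones((100, 100)); before[20:80, 20:80] = 0.1
after = before.copy(); after[20:80, 20:35] = 1.0   # peel off a strip
print(peel_off_degree(before, after))              # -> 0.25
```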
Robinson, César Leyton; Caballero, Andrés Díaz
2007-01-01
This article is an experimental methodological reflection on the use of medical images as useful documents for constructing the history of medicine. A method is used that is based on guidelines or analysis topics that include different ways of viewing documents, from aesthetic, technical, social and political theories to historical and medical thinking. Some exercises are also included that enhance the proposal for the reader: rediscovering the worlds in society that harbor these medical photographical archives to obtain a new theoretical approach to the construction of the history of medical science.
Failure prediction in ceramic composites using acoustic emission and digital image correlation
NASA Astrophysics Data System (ADS)
Whitlow, Travis; Jones, Eric; Przybyla, Craig
2016-02-01
The objective of the work performed here was to develop a methodology for linking in-situ detection of localized matrix cracking to the final failure location in continuous fiber reinforced CMCs. First, the initiation and growth of matrix cracking are measured and triangulated via acoustic emission (AE) detection. High amplitude events at relatively low static loads can be associated with initiation of large matrix cracks. When there is a localization of high amplitude events, a measurable effect on the strain field can be observed. Full field surface strain measurements were obtained using digital image correlation (DIC). An analysis using the combination of the AE and DIC data was able to predict the final failure location.
Plaque formation and removal assessed in vivo in a novel repeated measures imaging methodology.
White, Donald J; Kozak, Kathy M; Baker, Rob; Saletta, Lisa
2006-01-01
A repeated measures digital imaging technique (Digital Plaque Image Analysis) was used to assess variations in plaque formation, including levels of plaque developed following evening and morning tooth brushing with a standard dentifrice, to establish a baseline for future assessments of antimicrobial formulations. Following a rigorous oral hygiene period, subjects were provided with a standard commercial (non-antibacterial) dentifrice and manual toothbrush and instructed to brush b.i.d., as normal. On six separate days over two weeks, subjects reported at three times for a daily plaque assessment: in the morning before oral hygiene, post-brushing, and in the afternoon post-brushing. Morning plaque levels covered approximately 10% of the measured dentition, and morning tooth brushing removed 75% of this plaque. Plaque underwent rapid regrowth during the day and averaged approximately 7% coverage by the afternoon. These results support the value of Digital Plaque Image Analysis in recording diurnal plaque variations and treatment effects, and suggest that assessment of oral hygiene efficacy (either mechanical or chemopreventive) should account for diurnal variations in plaque formation. In addition, the results suggest that plaque regrowth and virulence activity overnight are a significant target for oral hygiene interventions.
NASA Technical Reports Server (NTRS)
Kruse, Fred A.; Dwyer, John L.
1993-01-01
The Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) measures reflected light in 224 contiguous spectral bands in the 0.4 to 2.45 micron region of the electromagnetic spectrum. Numerous studies have used these data for mineralogic identification and mapping based on the presence of diagnostic spectral features. Quantitative mapping requires conversion of the AVIRIS data to physical units (usually reflectance) so that analysis results can be compared and validated with field and laboratory measurements. This study evaluated two different techniques for calibrating AVIRIS data to ground reflectance, an empirically based method and an atmospheric-model-based method, to determine their effects on quantitative scientific analyses. Expert system analysis and linear spectral unmixing were applied to both calibrated data sets to determine the effect of the calibration on mineral identification and quantitative mapping results. Comparison of the image-map results and image reflectance spectra indicates that the model-calibrated data can be used with automated mapping techniques to produce accurate maps showing the spatial distribution and abundance of surface mineralogy. This has positive implications for future operational mapping using AVIRIS or similar imaging spectrometer data sets without requiring a priori knowledge.
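A hedged sketch of linear spectral unmixing, one of the two analyses applied to the calibrated data: per-pixel endmember abundances solved by non-negative least squares. The two synthetic Gaussian endmembers are placeholders for a real spectral library.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel_spectrum, endmembers):
    """Non-negative least-squares abundance estimates for one pixel.

    pixel_spectrum: (n_bands,) reflectance
    endmembers:     (n_bands, n_endmembers) library of pure spectra
    """
    abundances, residual = nnls(endmembers, pixel_spectrum)
    return abundances / abundances.sum(), residual  # sum-to-one normalization

# Illustrative: two synthetic endmembers mixed 70/30 across 224 bands
bands = np.linspace(0.4, 2.45, 224)
E = np.column_stack([np.exp(-(bands - 1.0)**2), np.exp(-(bands - 2.0)**2)])
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]
print(unmix(pixel, E)[0])   # ~[0.7, 0.3]
```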
NASA Astrophysics Data System (ADS)
Zhao, Yan-Ru; Yu, Ke-Qiang; Li, Xiaoli; He, Yong
2016-12-01
Infected petals are often regarded as the source for the spread of fungi Sclerotinia sclerotiorum in all growing process of rapeseed (Brassica napus L.) plants. This research aimed to detect fungal infection of rapeseed petals by applying hyperspectral imaging in the spectral region of 874-1734 nm coupled with chemometrics. Reflectance was extracted from regions of interest (ROIs) in the hyperspectral image of each sample. Firstly, principal component analysis (PCA) was applied to conduct a cluster analysis with the first several principal components (PCs). Then, two methods including X-loadings of PCA and random frog (RF) algorithm were used and compared for optimizing wavebands selection. Least squares-support vector machine (LS-SVM) methodology was employed to establish discriminative models based on the optimal and full wavebands. Finally, area under the receiver operating characteristics curve (AUC) was utilized to evaluate classification performance of these LS-SVM models. It was found that LS-SVM based on the combination of all optimal wavebands had the best performance with AUC of 0.929. These results were promising and demonstrated the potential of applying hyperspectral imaging in fungus infection detection on rapeseed petals.
Held, Christian; Nattkemper, Tim; Palmisano, Ralf; Wittenberg, Thomas
2013-01-01
Research and diagnosis in medicine and biology often require the assessment of a large amount of microscopy image data. Although digital pathology and new bioimaging technologies are finding their way into clinical practice and pharmaceutical research, some general methodological issues in automated image analysis are still open. In this study, we address the problem of fitting the parameters in a microscopy image segmentation pipeline. We propose to fit the parameters of the pipeline's modules with optimization algorithms, such as genetic algorithms or coordinate descent, and show how visual exploration of the parameter space can help to identify sub-optimal parameter settings that need to be avoided. This is of significant help in the design of our automatic parameter fitting framework, which enables us to tune the pipeline for large sets of micrographs. The underlying parameter spaces pose a challenge for manual as well as automated parameter optimization, as they can show several local performance maxima. Hence, optimization strategies that are not able to jump out of local performance maxima, like the hill climbing algorithm, often end in a local maximum.
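A minimal sketch of coordinate descent over a segmentation pipeline's parameters, of the kind discussed above: sweep one parameter at a time over a discrete grid and keep improvements. The score function here is a toy stand-in for, e.g., mean Dice against annotated ground truth, and it shares the local-maximum vulnerability the study discusses.

```python
import numpy as np

def coordinate_descent(score, params, grids, sweeps=3):
    """Tune pipeline parameters one at a time on discrete grids.

    score:  callable mapping a parameter dict to a quality value
            (higher is better)
    params: dict of starting values
    grids:  dict mapping each parameter name to candidate values
    Note: like hill climbing, this can stall in a local maximum.
    """
    best = dict(params)
    best_score = score(best)
    for _ in range(sweeps):
        for name, grid in grids.items():
            for value in grid:
                trial = dict(best, **{name: value})
                s = score(trial)
                if s > best_score:
                    best, best_score = trial, s
    return best, best_score

# Toy objective with two parameters (stands in for a real pipeline score)
f = lambda p: -(p["sigma"] - 2.0) ** 2 - (p["thresh"] - 0.5) ** 2
opt, val = coordinate_descent(f, {"sigma": 0.0, "thresh": 0.0},
                              {"sigma": np.linspace(0, 4, 41),
                               "thresh": np.linspace(0, 1, 21)})
print(opt, val)
```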
Cortical Signal Analysis and Advances in Functional Near-Infrared Spectroscopy Signal: A Review.
Kamran, Muhammad A; Mannan, Malik M Naeem; Jeong, Myung Yung
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging modality that measures the concentration changes of oxy-hemoglobin (HbO) and de-oxy-hemoglobin (HbR) at the same time. It is an emerging cortical imaging modality with a temporal resolution acceptable for brain-computer interface applications. Researchers have developed several methods in the last two decades to extract the neuronal-activation-related waveform from the observed fNIRS time series, but there is still no standard method for the analysis of fNIRS data. This article presents a brief review of existing methodologies to model and analyze the activation signal. Its purpose is to give a general overview of the variety of existing methodologies to extract useful information from measured fNIRS data, including pre-processing steps, effects of the differential path length factor (DPF), variations and attributes of the hemodynamic response function (HRF), extraction of the evoked response, removal of physiological, instrumentation, and environmental noise, and resting/activation-state functional connectivity. Finally, the challenges in the analysis of the fNIRS signal are summarized.
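To make one reviewed step concrete, a hedged sketch of a GLM analysis of an HbO time series: convolve the stimulus train with a canonical double-gamma HRF and fit by least squares. The gamma parameters, sampling rate, and block design are conventional SPM-style assumptions, not values from the review.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt, duration=30.0):
    """Double-gamma canonical HRF (peak ~6 s, undershoot ~16 s)."""
    t = np.arange(0, duration, dt)
    h = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return h / h.max()

fs = 10.0                               # fNIRS sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)
stim = ((t % 30) < 10).astype(float)    # 10 s task / 20 s rest blocks
X = np.convolve(stim, canonical_hrf(1 / fs))[: t.size]

# GLM: HbO = beta * regressor + intercept + noise
rng = np.random.default_rng(1)
hbo = 0.8 * X + 0.2 + rng.normal(0, 0.3, t.size)
A = np.column_stack([X, np.ones_like(X)])
beta, *_ = np.linalg.lstsq(A, hbo, rcond=None)
print(beta)   # ~[0.8, 0.2]
```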
Amariti, M L; Restori, M; De Ferrari, F; Paganelli, C; Faglia, R; Legnani, G
1999-06-01
Age determination by teeth examination is one of the main means of personal identification. Current studies have suggested different techniques for determining the age of a subject by means of the analysis of microscopic and macroscopic structural modifications of the tooth with ageing. The histological approach is useful among the various methodologies utilized for this purpose. It is still unclear which technique is best, as almost all authors advocate the approach they themselves have tested. In the present study, age determination by means of microscopic techniques has been based on the quantitative analysis of three parameters, all well recognized in the specialized literature: (1) dentinal tubule density/sclerosis, (2) tooth translucency, and (3) cementum thickness. After a description of the three methodologies (with automatic image processing of the dentinal sclerosis using an appropriate computer program developed by the authors), the results obtained on cases using the three different approaches are presented, and the merits and failings of each technique are identified with the intention of finding the one offering the least degree of error in age determination.
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Jackson, Jennifer N.; McCreedy, Evan S.; Gandler, William; Eijkenboom, J. J. F. A.; van Middelkoop, M.; McAuliffe, Matthew J.; Sheehan, Frances T.
2016-03-01
The paper presents an automatic segmentation methodology for the patellar bone, based on 3D gradient recalled echo and gradient recalled echo with fat suppression magnetic resonance images. Constricted search space outlines are incorporated into recursive ray-tracing to segment the outer cortical bone. A statistical analysis based on the dependence of information in adjacent slices is used to limit the search in each image to between an outer and inner search region. A section-based recursive ray-tracing mechanism is used to skip inner noise regions and detect the edge boundary. The proposed method achieves higher segmentation accuracy (0.23 mm) than current state-of-the-art methods, with an average Dice similarity coefficient of 96.0% (SD 1.3%) agreement between the auto-segmentation and ground truth surfaces.
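The agreement metric reported above, the Dice similarity coefficient, in a minimal sketch (the shifted-square example is illustrative):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Two squares offset by 4 pixels -> partial overlap
a = np.zeros((64, 64), bool); a[16:48, 16:48] = True
b = np.roll(a, 4, axis=0)
print(dice(a, b))   # 0.875
```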
A feature-based approach to combine functional MRI, structural MRI and EEG brain imaging data.
Calhoun, V; Adali, T; Liu, J
2006-01-01
The acquisition of multiple brain imaging types for a given study is a very common practice. However these data are typically examined in separate analyses, rather than in a combined model. We propose a novel methodology to perform joint independent component analysis across image modalities, including structural MRI data, functional MRI activation data and EEG data, and to visualize the results via a joint histogram visualization technique. Evaluation of which combination of fused data is most useful is determined by using the Kullback-Leibler divergence. We demonstrate our method on a data set composed of functional MRI data from two tasks, structural MRI data, and EEG data collected on patients with schizophrenia and healthy controls. We show that combining data types can improve our ability to distinguish differences between groups.
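A hedged sketch of the two evaluation ingredients named above: a joint histogram of two co-registered component images and the Kullback-Leibler divergence between two such histograms. The bin count, value range, and epsilon smoothing are assumptions.

```python
import numpy as np

def joint_histogram(x, y, bins=32, rng=((-4, 4), (-4, 4))):
    """Normalized 2D histogram of two co-registered component images."""
    h, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins, range=rng)
    return h / h.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two normalized joint histograms."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# Illustrative: compare a reference joint histogram with a shifted one
rng_gen = np.random.default_rng(0)
p = joint_histogram(rng_gen.normal(size=10000), rng_gen.normal(size=10000))
q = joint_histogram(rng_gen.normal(size=10000), rng_gen.normal(1.0, 1.0, 10000))
print(kl_divergence(p, q))
```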
The image of the nurse on the internet.
Kalisch, Beatrice J; Begeny, Suzanne; Neumann, Sue
2007-01-01
The media image of the nurse is a source of concern because of its impact on: recruitment into the profession; the decisions of policy makers who enact legislation that defines the scope and financing of nursing services; the use of nursing services by consumers; and the self-image of the nurse. This article reports the results of a study of the image of nursing on the Internet utilizing content analysis methodology. A total of 144 Websites were content-analyzed in 2001 and 152 in 2004. Approximately 70% of the Internet sites showed nurses as intelligent and educated, and 60% as respected, accountable, committed, competent, and trustworthy. Nurses were also shown as having specialized knowledge and skills in 70% (2001) and 62% (2004) of the Websites. Depictions of nurses as scientific/research-oriented, competent, sexually promiscuous, powerful, and creative/innovative increased from 2001 to 2004, while committed, attractive/well-groomed, and authoritative images decreased. Doctorally prepared nurses were evident in 19% of the Websites in 2001, a share that doubled by 2004. The results of this study suggest that there are important opportunities to use the Internet to improve the image of the nurse.
Review of optical breast imaging and spectroscopy
NASA Astrophysics Data System (ADS)
Grosenick, Dirk; Rinneberg, Herbert; Cubeddu, Rinaldo; Taroni, Paola
2016-09-01
Diffuse optical imaging and spectroscopy of the female breast is an area of active research. We review the present status of this field and discuss the broad range of methodologies and applications. Starting with a brief overview on breast physiology, the remodeling of vasculature and extracellular matrix caused by solid tumors is highlighted that is relevant for contrast in optical imaging. Then, the various instrumental techniques and the related methods of data analysis and image generation are described and compared including multimodality instrumentation, fluorescence mammography, broadband spectroscopy, and diffuse correlation spectroscopy. We review the clinical results on functional properties of malignant and benign breast lesions compared to host tissue and discuss the various methods to improve contrast between healthy and diseased tissue, such as enhanced spectroscopic information, dynamic variations of functional properties, pharmacokinetics of extrinsic contrast agents, including the enhanced permeability and retention effect. We discuss research on monitoring neoadjuvant chemotherapy and on breast cancer risk assessment as potential clinical applications of optical breast imaging and spectroscopy. Moreover, we consider new experimental approaches, such as photoacoustic imaging and long-wavelength tissue spectroscopy.
A Novel Imaging Analysis Method for Capturing Pharyngeal Constriction During Swallowing.
Schwertner, Ryan W; Garand, Kendrea L; Pearson, William G
2016-01-01
Videofluoroscopic imaging of swallowing, known as the Modified Barium Study (MBS), is the standard of care for assessing swallowing difficulty. While the clinical purpose of this radiographic imaging is primarily to assess aspiration risk, valuable biomechanical data are embedded in these studies. Computational analysis of swallowing mechanics (CASM) is an established research methodology for assessing multiple interactions of swallowing mechanics based on coordinates mapping muscle function, including hyolaryngeal movement, pharyngeal shortening, tongue base retraction, and extension of the head and neck; however, coordinates characterizing pharyngeal constriction are undeveloped. The aim of this study was to establish a method for locating the superior and middle pharyngeal constrictors using hard landmarks as guides on MBS videofluoroscopic imaging, and to test the reliability of this new method. Twenty de-identified, normal MBS videos were randomly selected from a database. Two raters annotated landmarks for the superior and middle pharyngeal constrictors frame-by-frame using a semi-automated MATLAB tracker tool at two time points. Intraclass correlation coefficients were used to assess test-retest reliability between the two raters, with an ICC of 0.99 or greater for all coordinates on the retest measurement. MorphoJ integrated software was used to perform a discriminant function analysis to visualize how all 12 coordinates interact with each other in normal swallowing. The addition of the superior and middle pharyngeal constrictor coordinates to CASM allows for a robust analysis of the multiple components of swallowing mechanics interacting with a wide range of variables in both patient-specific and cohort studies derived from common-use imaging data.
3D Image Analysis of Geomaterials using Confocal Microscopy
NASA Astrophysics Data System (ADS)
Mulukutla, G.; Proussevitch, A.; Sahagian, D.
2009-05-01
Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in the biological sciences, but its application to geomaterials lingers due to a number of technical problems. Potentially, the technique can perform non-invasive testing of a laser-illuminated, fluorescing sample using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials this method cannot be used, because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the image and edge contrast necessary to apply any commonly used segmentation technique for quantitative study of features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology for quantitative 3D analysis of images of geomaterials collected using a confocal microscope with a minimal amount of prior sample preparation and no added fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions, are used to assess the feasibility of the method. The step-by-step image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the shapes of the segmented vesicles, vapor bubbles, and void spaces due to the optical measurements, so corrective actions are being explored. This will establish a practical and reliable framework for an adaptive 3D image processing technique for the analysis of geomaterials using confocal microscopy.
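A hedged sketch of the geodesic-active-contour step using scikit-image's morphological variant on a synthetic bright feature; this stands in for, rather than reproduces, the authors' pipeline, and the seed box, iteration count, and balloon force are assumptions.

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)
from skimage.draw import disk

# Synthetic stand-in for one confocal slice: a bright vesicle on noise
img = np.random.rand(128, 128) * 0.2
rr, cc = disk((64, 64), 20)
img[rr, cc] += 0.8

gimg = inverse_gaussian_gradient(img)      # edge-stopping function
init = np.zeros_like(img, dtype=np.int8)   # seed level set inside the feature
init[54:74, 54:74] = 1
segmentation = morphological_geodesic_active_contour(
    gimg, 120, init_level_set=init, smoothing=1, balloon=1)
print(segmentation.sum())                  # segmented pixel count
```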
NASA Astrophysics Data System (ADS)
Silva-Rodríguez, J.; Cortés, J.; Rodríguez-Osorio, X.; López-Urdaneta, J.; Pardo-Montero, J.; Aguiar, P.; Tsoumpas, C.
2016-10-01
Structural Functional Synergistic Resolution Recovery (SFS-RR) is a technique that uses supplementary structural information from MR or CT to improve the spatial resolution of PET or SPECT images. This wavelet-based method may have a potential impact on clinical decision-making for focal brain disorders such as refractory epilepsy, since it can produce images with better quantitative accuracy and enhanced detectability. In this work, a method for the iterative application of SFS-RR (iSFS-RR) was first developed and optimized in terms of convergence and input voxel size, and the corrected images were used for the diagnosis of 18 patients with refractory epilepsy. To this end, PET/MR images were clinically evaluated through visual inspection, atlas-based asymmetry indices (AIs), and SPM (Statistical Parametric Mapping) analysis, using uncorrected images and images corrected with SFS-RR and iSFS-RR. Our results showed that the sensitivity can be increased from 78% for uncorrected images to 84% for SFS-RR and 94% for the proposed iSFS-RR. Thus, the proposed methodology has demonstrated the potential to improve the management of refractory epilepsy patients in the clinical routine.
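The abstract does not define its asymmetry indices, so as an assumption this sketch uses the common atlas-based form AI = 200·(L − R)/(L + R) over homologous left/right regions:

```python
import numpy as np

def asymmetry_index(left_uptake, right_uptake):
    """Percent asymmetry between homologous left/right atlas regions.

    AI = 200 * (L - R) / (L + R); an |AI| above a chosen cutoff flags a
    candidate epileptogenic region. The formula is a common convention,
    assumed here rather than taken from the paper.
    """
    L, R = float(np.mean(left_uptake)), float(np.mean(right_uptake))
    return 200.0 * (L - R) / (L + R)

# Illustrative: a 10% hypometabolic left region gives AI ~ -10.5
print(asymmetry_index([0.9, 0.92, 0.88], [1.0, 1.02, 0.98]))
```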
Imaging characteristics of photogrammetric camera systems
Welch, R.; Halliday, J.
1973-01-01
In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were assessed, yielding procedures for analyzing image quality and for predicting and comparing performance capabilities.
NASA Astrophysics Data System (ADS)
Korte, Andrew R.; Lee, Young Jin
2013-06-01
We have recently developed a multiplex mass spectrometry imaging (MSI) method which incorporates high mass resolution imaging and MS/MS and MS3 imaging of several compounds in a single data acquisition utilizing a hybrid linear ion trap-Orbitrap mass spectrometer (Perdian and Lee, Anal. Chem. 82, 9393-9400, 2010). Here we extend this capability to obtain positive and negative ion MS and MS/MS spectra in a single MS imaging experiment through polarity switching within spiral steps of each raster step. This methodology was demonstrated for the analysis of various lipid class compounds in a section of mouse brain. This allows for simultaneous imaging of compounds that are readily ionized in positive mode (e.g., phosphatidylcholines and sphingomyelins) and those that are readily ionized in negative mode (e.g., sulfatides, phosphatidylinositols and phosphatidylserines). MS/MS imaging was also performed for a few compounds in both positive and negative ion mode within the same experimental set-up. Insufficient stabilization time for the Orbitrap high voltage leads to slight deviations in observed masses, but these deviations are systematic and were easily corrected with a two-point calibration to background ions.
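A minimal sketch of a two-point linear m/z recalibration of the kind described for correcting the systematic mass deviations; the calibrant masses below (a phosphocholine head-group fragment and a PC lipid) are illustrative choices, not the ions used in the study.

```python
def two_point_calibration(observed, reference):
    """Linear m/z correction from two known background ions.

    observed:  (m1_obs, m2_obs) measured m/z of the two calibrant ions
    reference: (m1_ref, m2_ref) their theoretical m/z
    Returns a function mapping observed m/z to corrected m/z.
    """
    (o1, o2), (r1, r2) = observed, reference
    slope = (r2 - r1) / (o2 - o1)
    intercept = r1 - slope * o1
    return lambda mz: slope * mz + intercept

# Hypothetical calibrant pair drifted by a few ppm
correct = two_point_calibration((760.5889, 184.0742), (760.5851, 184.0733))
print(correct(734.5700))
```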
Modeling Of Object- And Scene-Prototypes With Hierarchically Structured Classes
NASA Astrophysics Data System (ADS)
Ren, Z.; Jensch, P.; Ameling, W.
1989-03-01
The success of knowledge-based image analysis methodology and implementation tools depends largely on an appropriately and efficiently built model in which the domain-specific context information about, and the inherent structure of, the observed image scene have been encoded. To identify an object in an application environment, a computer vision system needs to know, first, the description of the object to be found in an image or in an image sequence and, second, the corresponding relationships between object descriptions within the image sequence. This paper presents models of image objects and scenes by means of hierarchically structured classes. Using the topovisual formalism of graphs and higraphs, we are currently studying principally the relational aspects and data abstraction of the modeling, in order to visualize the structural nature resident in image objects and scenes and to formalize their descriptions. The goal is to expose the structure of the image scene and the correspondence of image objects in the low-level image interpretation process. The object-based system design approach has been applied to build the model base. We utilize the object-oriented programming language C++ for designing, testing, and implementing the abstracted entity classes and the operation structures which have been modeled topovisually. The reference images used for modeling prototypes of objects and scenes are from industrial environments as well as medical applications.
a Metadata Based Approach for Analyzing Uav Datasets for Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Dhanda, A.; Remondino, F.; Santana Quintero, M.
2018-05-01
This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
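A hedged sketch of the overlap computation such an algorithm can derive from image metadata, assuming a nadir-looking camera: ground footprint from altitude, focal length, and sensor size, then end overlap from the along-track spacing between consecutive exposures. All numeric values are illustrative, not taken from the paper.

```python
def ground_footprint(altitude_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground coverage (width, height in m) of a nadir image."""
    scale = altitude_m / (focal_mm / 1000.0)   # image scale factor
    return scale * sensor_w_mm / 1000.0, scale * sensor_h_mm / 1000.0

def end_overlap(footprint_h_m, along_track_spacing_m):
    """Fractional end overlap between consecutive images in a strip."""
    return max(0.0, 1.0 - along_track_spacing_m / footprint_h_m)

# Illustrative values: 80 m AGL, 8.8 mm focal length, 13.2 x 8.8 mm sensor,
# 20 m between exposures -> 120 x 80 m footprint, 75% end overlap
w, h = ground_footprint(80, 8.8, 13.2, 8.8)
print(w, h, end_overlap(h, 20))
```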
Hyperspectral imaging and multivariate analysis in the dried blood spots investigations
NASA Astrophysics Data System (ADS)
Majda, Alicja; Wietecha-Posłuszny, Renata; Mendys, Agata; Wójtowicz, Anna; Łydżba-Kopczyńska, Barbara
2018-04-01
The aim of this study was to apply a new methodology combining hyperspectral imaging with dried blood spot (DBS) collection. Hyperspectral imaging is fast and non-destructive, and the DBS method offers the additional advantages of micro-invasive blood collection and a low required sample volume. During the experimental step, the reflected light was recorded by two hyperspectral systems, collecting 776 spectral bands in the VIS-NIR range (400-1000 nm) and 256 spectral bands in the SWIR range (970-2500 nm). The pixel size is 8 × 8 µm and 30 × 30 µm for the VIS-NIR and SWIR cameras, respectively. The obtained data, in the form of hyperspectral cubes, were treated with chemometric methods, i.e., minimum noise fraction and principal component analysis. It has been shown that applying these methods to this type of data and analyzing the scatter plots allows a rapid analysis of the homogeneity of the DBS and the selection of representative areas for further analysis. It also gives the possibility of tracking the dynamics of changes occurring in biological traces applied to a surface. For the 28 blood samples analyzed, the described method made it possible to distinguish the blood stains by time of application.
McIntire, Patrick J; Irshaid, Lina; Liu, Yifang; Chen, Zhengming; Menken, Faith; Nowak, Eugene; Shin, Sandra J; Ginter, Paula S
2018-05-07
CD8+ tumor-infiltrating lymphocytes (TILs) have emerged as a prognostic indicator in triple-negative breast cancer (TNBC). There is debate surrounding the prognostic value of hot spots for CD8+ TIL enumeration. We compared hot spot versus whole-tumor CD8+ TIL enumeration in prognosticating TNBC using immunohistochemistry on whole tissue sections and quantification by digital image analysis (Halo image analysis software; Indica Labs, Corrales, NM). A wide range of clinically relevant hot spot sizes was evaluated. CD8+ TIL enumeration was independently statistically significant for all hot spot sizes and whole-tumor annotations for disease-free survival by multivariate analysis. A 10× objective (2.2 mm diameter) hot spot was found to correlate significantly with overall survival (P = .04), while the remaining hot spots and whole-tumor CD8+ TIL enumeration did not (P > .05). Statistical significance was not demonstrated when comparing between hot spots and whole-tumor annotations, as the groups had overlapping confidence intervals. CD8+ TIL hot spot enumeration is equivalent to whole-tumor enumeration for prognostication in TNBC and may serve as a good alternative methodology in future studies and clinical practice.
A study and evaluation of image analysis techniques applied to remotely sensed data
NASA Technical Reports Server (NTRS)
Atkinson, R. J.; Dasarathy, B. V.; Lybanon, M.; Ramapriyan, H. K.
1976-01-01
An analysis of phenomena causing nonlinearities in the transformation from Landsat multispectral scanner coordinates to ground coordinates is presented. Experimental results comparing rms errors at ground control points indicated a slight improvement when a nonlinear (8-parameter) transformation was used instead of an affine (6-parameter) transformation. Using a preliminary ground truth map of a test site in Alabama covering the Mobile Bay area and six Landsat images of the same scene, several classification methods were assessed. A methodology was developed for automatic change detection using classification/cluster maps. A coding scheme was employed for generation of change depiction maps indicating specific types of changes. Inter- and intraseasonal data of the Mobile Bay test area were compared to illustrate the method. A beginning was made in the study of data compression by applying a Karhunen-Loeve transform technique to a small section of the test data set. The second part of the report provides a formal documentation of the several programs developed for the analysis and assessments presented.
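For the transformation comparison described above, a hedged sketch fitting both a 6-parameter affine model and an 8-parameter model by linear least squares over ground control points, reporting rms error. The bilinear form (adding an xy cross term) is taken here as one common 8-parameter choice, since the abstract does not specify the model.

```python
import numpy as np

def fit_transform(src, dst, model="affine"):
    """Least-squares fit mapping scanner coordinates to ground coordinates.

    src, dst: (n, 2) arrays of control point coordinates
    model: 'affine' (6 parameters) or 'bilinear' (8 parameters, adds an
           xy cross term -- one common 8-parameter choice, assumed here)
    Returns predicted dst coordinates and rms error at the control points.
    """
    x, y = src[:, 0], src[:, 1]
    cols = [np.ones_like(x), x, y]
    if model == "bilinear":
        cols.append(x * y)
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    pred = A @ coeffs
    rms = np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1)))
    return pred, rms

# Synthetic control points with a small amount of measurement noise
rng = np.random.default_rng(0)
src = rng.uniform(0, 1000, (12, 2))
dst = src @ np.array([[0.9, 0.1], [-0.1, 0.9]]) + 50 + rng.normal(0, 0.5, (12, 2))
for m in ("affine", "bilinear"):
    print(m, fit_transform(src, dst, m)[1])
```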
Abad, Sergi; Pérez, Xavier; Planas, Antoni; Turon, Xavier
2014-04-01
Recently, the need for crude glycerol valorisation from the biodiesel industry has generated many studies for practical and economic applications. Amongst them, fermentations based on glycerol media for the production of high-value metabolites are prominent applications. This has generated a need to develop analytical techniques which allow fast and simple glycerol monitoring during fermentation. The methodology should be fast and inexpensive to be adopted in research as well as in industrial applications. In this study three different methods were analysed and compared: two common methodologies based on liquid chromatography and enzymatic kits, and a new method based on a DotBlot assay coupled with image analysis. The new methodology is faster and cheaper than the other conventional methods, with comparable performance. Good linearity, precision and accuracy were achieved in the lower range (10 or 15 g/L to depletion), the most common range of glycerol concentrations for monitoring fermentations in terms of growth kinetics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cha, Sangwon
2008-01-01
Matrix-assisted laser desorption/ionization (MALDI) mass spectrometry (MS) has been widely used for analysis of biological molecules, especially macromolecules such as proteins. However, MALDI MS has a problem in small molecule (less than 1 kDa) analysis because of the signal saturation by organic matrixes in the low mass region. In imaging MS (IMS), inhomogeneous surface formation due to the co-crystallization process by organic MALDI matrixes limits the spatial resolution of the mass spectral image. Therefore, to make laser desorption/ionization (LDI) MS more suitable for mass spectral profiling and imaging of small molecules directly from raw biological tissues, LDI MS protocols with various alternative assisting materials were developed and applied to many biological systems of interest. Colloidal graphite was used as a matrix for IMS of small molecules for the first time, and methodologies for analyses of small metabolites in rat brain tissues, fruits, and plant tissues were developed. With rat brain tissues, signal enhancement for cerebroside species by colloidal graphite was observed, and images of cerebrosides were successfully generated by IMS. In addition, separation of isobaric lipid ions was performed by imaging tandem MS. Directly from Arabidopsis flowers, flavonoids were successfully profiled, and heterogeneous distribution of flavonoids in petals was observed for the first time by graphite-assisted LDI (GALDI) IMS.
NASA Astrophysics Data System (ADS)
Ma, Kevin; Liu, Joseph; Zhang, Xuejun; Lerner, Alex; Shiroishi, Mark; Amezcua, Lilyana; Liu, Brent
2016-03-01
We have designed and developed a multiple sclerosis eFolder system for patient data storage, image viewing, and automatic lesion quantification, with results stored in DICOM-SR format. The web-based system aims to be integrated into DICOM-compliant clinical and research environments to aid clinicians in patient treatment and data analysis. The system needs to quantify lesion volumes and to identify and register lesion locations in order to track shifts in the volume and number of lesions in a longitudinal study. To perform lesion registration, we have developed a brain warping and normalizing methodology using the Statistical Parametric Mapping (SPM) MATLAB toolkit for brain MRI. Patients' brain MR images are processed via SPM's normalization processes, and the brain images are analyzed and warped according to the tissue probability map. Lesion identification and contouring are completed by neuroradiologists, and lesion volume quantification is completed by the eFolder's CAD program. Lesion comparison results over the longitudinal study show key growth and active regions and demonstrate successful lesion registration and tracking. Lesion change results are graphically represented in the web-based user interface, and users are able to correlate patient progress with changes in the MRI images. The completed lesion and disease tracking tool would enable the eFolder to provide complete patient profiles, improve the efficiency of patient care, and perform comprehensive data analysis through an integrated imaging informatics system.
Girelli, Carlos Magno Alves
2016-05-01
Fingerprints present in false identity documents were found on the web. In some cases, laterally reversed (mirrored) images of the same fingerprint were observed in different documents. In the present work, 100 fingerprint images downloaded from the web, as well as their reversals obtained by image editing, were compared with one another and against the database of the Brazilian Federal Police AFIS, in order to better understand trends in this kind of forgery in Brazil. Several image editing effects were observed in the analyzed fingerprints: addition of artifacts (such as watermarks), image rotation, image stylization, lateral reversal, and tonal reversal. A discussion of lateral reversal detection is presented in this article, as well as suggestions to reduce errors due to missed HIT decisions between reversed fingerprints. The present work aims to highlight the importance of fingerprint analysis when performing document examination, especially when only copies of documents are available, something very common in Brazil. Besides the intrinsic features of the fingermarks considered in three levels of detail by the ACE-V methodology, some visual features of the fingerprint images can be helpful for identifying sources of forgeries and modus operandi, such as: limits and image contours, gaps in the friction ridges caused by excess or lack of inking, and the presence of watermarks and artifacts arising from the background. Based on the agreement of such features in fingerprints present in different identity documents, and also on the analysis of the time and location where the documents were seized, it is possible to highlight potential links between apparently unconnected crimes. Therefore, fingerprints have the potential to reduce linkage blindness, and the present work suggests the analysis of fingerprints when profiling false identity documents, as well as the inclusion of fingerprint features in the profile of the documents.
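A hedged sketch of one way to flag a laterally reversed copy: compare the peak normalized cross-correlation of two prints with and without flipping one of them. The crude global normalization and random test images are illustrative only; an operational system would rely on minutiae-based matching.

```python
import numpy as np
from scipy.signal import fftconvolve

def mirror_match_score(img_a, img_b):
    """Peak normalized cross-correlation of img_a against img_b and
    against its lateral reversal; a dominant mirrored score suggests
    one print is a flipped copy of the other."""
    def peak_ncc(a, b):
        a = (a - a.mean()) / (a.std() * a.size)   # crude global normalization
        b = (b - b.mean()) / b.std()
        # correlation via convolution with the doubly reversed template
        return float(fftconvolve(a, b[::-1, ::-1], mode="same").max())
    return peak_ncc(img_a, img_b), peak_ncc(img_a, np.fliplr(img_b))

# Illustrative check with a random texture and its mirror image
rng = np.random.default_rng(0)
print_a = rng.random((64, 64))
same, mirrored = mirror_match_score(print_a, np.fliplr(print_a))
print(same, mirrored)   # the mirrored score should dominate
```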
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dolly, S; University of Missouri, Columbia, MO; Chen, H
Purpose: Local noise power spectrum (NPS) properties are significantly affected by calculation variables and CT acquisition and reconstruction parameters, but a thoughtful analysis of these effects is absent. In this study, we performed a complete analysis of the effects of calculation and imaging parameters on the NPS. Methods: The uniformity module of a Catphan phantom was scanned with a Philips Brilliance 64-slice CT simulator using various scanning protocols. Images were reconstructed using both FBP and iDose4 reconstruction algorithms. From these images, local NPS were calculated for regions of interest (ROI) of varying locations and sizes, using four image background removal methods. Additionally, using a predetermined ground truth, NPS calculation accuracy for various calculation parameters was compared for computer-simulated ROIs. A complete analysis of the effects of calculation, acquisition, and reconstruction parameters on the NPS was conducted. Results: The local NPS varied with ROI size and image background removal method, particularly at low spatial frequencies. The image subtraction method was the most accurate according to the computer simulation study, and was also the most effective at removing low-frequency background components in the acquired data. However, first-order polynomial fitting using residual sum of squares and principal component analysis provided comparable accuracy under certain situations. Similar general trends were observed when comparing the NPS for FBP to that of iDose4 while varying other calculation and scanning parameters. However, while iDose4 reduces the noise magnitude compared to FBP, this reduction is spatial-frequency dependent, further affecting NPS variations at low spatial frequencies. Conclusion: The local NPS varies significantly depending on calculation parameters, image acquisition parameters, and reconstruction techniques. Appropriate local NPS calculation should be performed to capture spatial variations of noise; calculation methodology should be selected with consideration of image reconstruction effects and the desired purpose of CT simulation for radiotherapy tasks.
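A minimal sketch of the standard 2D NPS estimate using the image-pair subtraction background removal found most accurate above: NPS(u,v) = (Δx·Δy / N²)·⟨|DFT of ΔROI|²⟩ / 2, where the factor 1/2 compensates for the variance doubling introduced by subtraction. ROI shapes and pixel spacing are assumptions.

```python
import numpy as np

def nps_2d(rois_a, rois_b, pixel_mm):
    """2D noise power spectrum from matched ROI pairs of two repeated
    uniform-phantom scans, using image subtraction to remove background.

    rois_a, rois_b: (n_roi, N, N) arrays of co-located square ROIs
    pixel_mm:       pixel spacing in mm
    """
    diff = rois_a.astype(float) - rois_b.astype(float)   # structure cancels
    n_roi, N, _ = diff.shape
    spectra = np.abs(np.fft.fft2(diff)) ** 2              # per-ROI periodograms
    nps = spectra.mean(axis=0) * (pixel_mm ** 2) / (N * N)
    return np.fft.fftshift(nps) / 2.0   # /2 for the subtraction variance doubling

# Illustrative: white noise ROIs; the NPS should be approximately flat
rng = np.random.default_rng(0)
a, b = rng.normal(0, 10, (32, 64, 64)), rng.normal(0, 10, (32, 64, 64))
print(nps_2d(a, b, pixel_mm=0.5).mean())
```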
The Handbook of Medical Image Perception and Techniques
NASA Astrophysics Data System (ADS)
Samei, Ehsan; Krupinski, Elizabeth
2014-07-01
1. Medical image perception Ehsan Samei and Elizabeth Krupinski; Part I. Historical Reflections and Theoretical Foundations: 2. A short history of image perception in medical radiology Harold Kundel and Calvin Nodine; 3. Spatial vision research without noise Arthur Burgess; 4. Signal detection theory, a brief history Arthur Burgess; 5. Signal detection in radiology Arthur Burgess; 6. Lessons from dinners with the giants of modern image science Robert Wagner; Part II. Science of Image Perception: 7. Perceptual factors in reading medical images Elizabeth Krupinski; 8. Cognitive factors in reading medical images David Manning; 9. Satisfaction of search in traditional radiographic imaging Kevin Berbaum, Edmund Franken, Robert Caldwell and Kevin Schartz; 10. The role of expertise in radiologic image interpretation Calvin Nodine and Claudia Mello-Thoms; 11. A primer of image quality and its perceptual relevance Robert Saunders and Ehsan Samei; 12. Beyond the limitations of human vision Maria Petrou; Part III. Perception Metrology: 13. Logistical issues in designing perception experiments Ehsan Samei and Xiang Li; 14. ROC analysis: basic concepts and practical applications Georgia Tourassi; 15. Multi-reader ROC Steve Hillis; 16. Recent developments in FROC methodology Dev Chakraborty; 17. Observer models as a surrogate to perception experiments Craig Abbey and Miguel Eckstein; 18. Implementation of observer models Matthew Kupinski; Part IV. Decision Support and Computer Aided Detection: 19. CAD: an image perception perspective Maryellen Giger and Weijie Chen; 20. Common designs of CAD studies Yulei Jiang; 21. Perceptual effect of CAD in reading chest images Matthew Freedman and Teresa Osicka; 22. Perceptual issues in mammography and CAD Michael Ulissey; 23. How perceptual factors affect the use and accuracy of CAD for interpretation of CT images Ronald Summers; 24. CAD: risks and benefits for radiologists' decisions Eugenio Alberdi, Andrey Povyakalo, Lorenzo Strigini and Peter Ayton; Part V. Optimization and Practical Issues: 25. Optimization of 2D and 3D radiographic systems Jeff Siewerdsen; 26. Applications of AFC methodology in optimization of CT imaging systems Kent Ogden and Walter Huda; 27. Perceptual issues in reading mammograms Margarita Zuley; 28. Perceptual optimization of display processing techniques Richard Van Metter; 29. Optimization of display systems Elizabeth Krupinski and Hans Roehrig; 30. Ergonomic radiologist workplaces in the PACS environment Carl Zylak; Part VI. Epilogue: 31. Future prospects of medical image perception Ehsan Samei and Elizabeth Krupinski; Index.
Studying depression using imaging and machine learning methods.
Patel, Meenal J; Khalaf, Alexander; Aizenstein, Howard J
2016-01-01
Depression is a complex clinical entity that poses challenges for clinicians regarding both accurate diagnosis and effective, timely treatment. These challenges have prompted the development of multiple machine learning methods to help improve the management of this disease. These methods utilize anatomical and physiological data acquired from neuroimaging to create models that can distinguish depressed patients from non-depressed patients and predict treatment outcomes. This article (1) presents a background on depression, imaging, and machine learning methodologies; (2) reviews methodologies of past studies that have used imaging and machine learning to study depression; and (3) suggests directions for future depression-related studies.
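As a hedged illustration of the modeling pattern these studies share, here is a minimal sketch assuming scikit-learn; the synthetic feature matrix stands in for per-subject neuroimaging measures (e.g., regional volumes or connectivity values), and no reviewed study used exactly this setup.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: one row per subject, one column per
# imaging-derived feature; real studies extract these from MRI.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))    # 60 subjects, 20 features
y = rng.integers(0, 2, size=60)  # 1 = depressed, 0 = non-depressed

# Linear SVM with feature standardization, evaluated by 5-fold
# cross-validation, mirroring the patient-vs-control designs reviewed.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```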
Cancer Biotechnology | Center for Cancer Research
Biotechnology advances continue to underscore the need to educate NCI fellows in new methodologies. The Cancer Biotechnology course will be held on the NCI-Frederick campus on January 29, 2016 (Bldg. 549, Main Auditorium) and the course will be repeated on the Bethesda campus on February 9, 2016 (Natcher Balcony C). The latest advances in DNA, protein and image analysis will be presented. Clinical and postdoctoral fellows who want to learn about new biotechnology advances are encouraged to attend this course.
CMOS Active Pixel Sensor Technology and Reliability Characterization Methodology
NASA Technical Reports Server (NTRS)
Chen, Yuan; Guertin, Steven M.; Pain, Bedabrata; Kayali, Sammy
2006-01-01
This paper describes the technology, design features and reliability characterization methodology of a CMOS Active Pixel Sensor. Both overall chip reliability and pixel reliability are projected for the imagers.
[Sick bodies textualized through images: what contemporary intensive care units can teach us].
Vargas, Mara Ambrosina de Oliveira; Meyer, Dagmar Estermann
2003-01-01
This paper is part of a dissertation that discusses what we have called the "cyborgization" of nursing workers in intensive care. The discussion draws on cultural theorizing and on Donna Haraway's understanding of the cyborg. Manuals and care protocols were used for the corpus under analysis. The methodological approach is cultural analysis as practiced in poststructuralist cultural studies, used to describe and analyze the production of images of sick bodies and their decoding through a complex monitoring system in which sick bodies, machines, and nursing practitioners connect in different ways and with different effects. We conclude that it is useful and relevant to consider and discuss these instances of production and diffusion of meanings attributed to the body, to life, and to nursing care.
Lykins, Amy D; Meana, Marta; Kambe, Gretchen
2006-10-01
As a first step in the investigation of the role of visual attention in the processing of erotic stimuli, eye-tracking methodology was employed to measure eye movements during erotic scene presentation. Because eye-tracking is a novel methodology in sexuality research, we attempted to determine whether the eye-tracker could detect differences (should they exist) in visual attention to erotic and non-erotic scenes. A total of 20 men and 20 women were presented with a series of erotic and non-erotic images, and their eye movements were tracked during image presentation. Comparisons between the erotic and non-erotic image groups showed significant differences on two of three dependent measures of visual attention (number of fixations and total time) in both men and women. As hypothesized, there was a significant Stimulus x Scene Region interaction, indicating that participants visually attended to the body more in the erotic stimuli than in the non-erotic stimuli, as evidenced by a greater number of fixations and longer total time devoted to that region. These findings support the application of eye-tracking methodology as a measure of visual attentional capture in sexuality research. Future applications of this methodology to expand our knowledge of the role of cognition in sexuality are suggested.
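A minimal sketch of the two dependent measures reported here, fixation count and total dwell time per scene region, assuming fixations have already been segmented from the raw gaze stream; the region rectangles, data layout, and names are illustrative, not taken from the study.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float       # gaze position in screen pixels
    y: float
    dur_ms: float  # fixation duration in milliseconds

def region_measures(fixations, regions):
    """Tally fixation count and total dwell time per named rectangular
    scene region (x0, y0, x1, y1). Each fixation is credited to the
    first region containing it, so list regions in priority order."""
    stats = {name: {"count": 0, "total_ms": 0.0} for name in regions}
    for f in fixations:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= f.x <= x1 and y0 <= f.y <= y1:
                stats[name]["count"] += 1
                stats[name]["total_ms"] += f.dur_ms
                break
    return stats

# Illustrative regions and fixations for one 1024x768 stimulus image.
regions = {"body": (300, 200, 700, 700), "rest_of_scene": (0, 0, 1024, 768)}
fixes = [Fixation(450, 400, 220.0), Fixation(100, 50, 180.0)]
print(region_measures(fixes, regions))
```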
Automated CT Scan Scores of Bronchiectasis and Air Trapping in Cystic Fibrosis
Swiercz, Waldemar; Heltshe, Sonya L.; Anthony, Margaret M.; Szefler, Paul; Klein, Rebecca; Strain, John; Brody, Alan S.; Sagel, Scott D.
2014-01-01
Background: Computer analysis of high-resolution CT (HRCT) scans may improve the assessment of structural lung injury in children with cystic fibrosis (CF). The goal of this cross-sectional pilot study was to validate automated, observer-independent image analysis software to establish objective, simple criteria for bronchiectasis and air trapping. Methods: HRCT scans of the chest were performed in 35 children with CF and compared with scans from 12 disease control subjects. Automated image analysis software was developed to count visible airways on inspiratory images and to measure a low attenuation density (LAD) index on expiratory images. Among the children with CF, relationships among automated measures, Brody HRCT scanning scores, lung function, and sputum markers of inflammation were assessed. Results: The number of total, central, and peripheral airways on inspiratory images and LAD (%) on expiratory images were significantly higher in children with CF compared with control subjects. Among subjects with CF, peripheral airway counts correlated strongly with Brody bronchiectasis scores by two raters (r = 0.86, P < .0001; r = 0.91, P < .0001), correlated negatively with lung function, and were positively associated with sputum free neutrophil elastase activity. LAD (%) correlated with Brody air trapping scores (r = 0.83, P < .0001; r = 0.69, P < .0001) but did not correlate with lung function or sputum inflammatory markers. Conclusions: Quantitative airway counts and LAD (%) on HRCT scans appear to be useful surrogates for bronchiectasis and air trapping in children with CF. Our automated methodology provides objective quantitative measures of bronchiectasis and air trapping that may serve as end points in CF clinical trials. PMID:24114359
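As a sketch of how a low attenuation density (LAD) index can be computed, assuming an expiratory CT image in Hounsfield units and a precomputed lung mask as numpy arrays; the -856 HU cutoff is a conventional air-trapping threshold, used here only because the abstract does not state the study's exact criterion.

```python
import numpy as np

def lad_index(hu_slice, lung_mask, threshold_hu=-856):
    """Percentage of lung voxels below a low-attenuation cutoff on an
    expiratory image; -856 HU is a commonly used air-trapping
    threshold (an assumption, not the study's stated value)."""
    lung_voxels = hu_slice[lung_mask]
    if lung_voxels.size == 0:
        return 0.0
    return 100.0 * np.mean(lung_voxels < threshold_hu)

# Illustrative 2x2 example: the mask selects all four voxels,
# two of which fall below -856 HU, giving LAD = 50.0%.
hu = np.array([[-900.0, -700.0], [-880.0, -400.0]])
mask = np.ones_like(hu, dtype=bool)
print(f"LAD = {lad_index(hu, mask):.1f}%")
```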
Giger, Maryellen L.; Chan, Heang-Ping; Boone, John
2008-01-01
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists’ goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. Since the 1950s, the potential use of computers has been considered for the analysis of radiographic abnormalities. In the mid-1980s, however, medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists—as opposed to a completely automatic computer interpretation—focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous—from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. CAD research by medical physicists includes many aspects—collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies with which to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and developing imaging techniques, and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future includes the combined research efforts from physicists working in CAD with those working on quantitative imaging systems to readily yield information on morphology, function, molecular structure, and more—from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis. PMID:19175137
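The performance-assessment step described above typically reduces to ROC analysis of per-case CAD scores; here is a minimal sketch assuming scikit-learn, with placeholder scores and labels rather than data from any actual CAD evaluation.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical CAD output: one suspicion score per case, with
# ground-truth labels (1 = abnormal, 0 = normal) from a reference
# standard, as in the rigorous database evaluations described above.
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.1, 0.8, 0.6])

auc = roc_auc_score(labels, scores)          # area under the ROC curve
fpr, tpr, thresholds = roc_curve(labels, scores)
print(f"AUC = {auc:.2f}")
```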