Computerized PET/CT image analysis in the evaluation of tumour response to therapy
Wang, J; Zhang, H H
2015-01-01
Current cancer therapy strategies are mostly population based; however, tumour response varies widely among patients, so it is important for treating physicians to know the individual tumour response. In recent years, many studies have proposed computerized positron emission tomography/CT image analysis for the evaluation of tumour response. Results showed that computerized analysis overcomes some major limitations of current qualitative and semiquantitative analyses and improves accuracy. In this review, we summarize these studies across the four steps of the analysis: image registration, tumour segmentation, image feature extraction and response evaluation. Future work is proposed and challenges are described. PMID:25723599
Neutron imaging data processing using the Mantid framework
NASA Astrophysics Data System (ADS)
Pouzols, Federico M.; Draper, Nicholas; Nagella, Sri; Yang, Erica; Sajid, Ahmed; Ross, Derek; Ritchie, Brian; Hill, John; Burca, Genoveva; Minniti, Triestino; Moreton-Smith, Christopher; Kockelmann, Winfried
2016-09-01
Several imaging instruments are currently being constructed at neutron sources around the world. The Mantid software project provides an extensible framework that supports high-performance computing for data manipulation, analysis and visualisation of scientific data. At ISIS, IMAT (Imaging and Materials Science & Engineering) will offer unique time-of-flight neutron imaging techniques which impose several software requirements to control the data reduction and analysis. Here we outline the extensions currently being added to Mantid to provide specific support for neutron imaging requirements.
Image analysis library software development
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Bryant, J.
1977-01-01
The Image Analysis Library consists of a collection of general purpose mathematical/statistical routines and special purpose data analysis/pattern recognition routines basic to the development of image analysis techniques for support of current and future Earth Resources Programs. Work was done to provide a collection of computer routines and associated documentation which form a part of the Image Analysis Library.
Wu, Kuo-Tsai; Hwang, Sheng-Jye; Lee, Huei-Huang
2017-05-02
Image sensors are the core components of computer, communication, and consumer electronic products. Complementary metal oxide semiconductor (CMOS) image sensors have become the mainstay of image-sensing developments, but are prone to leakage current. In this study, we simulate the CMOS image sensor (CIS) film stacking process by finite element analysis. To elucidate the relationship between the leakage current and stack architecture, we compare the simulated and measured leakage currents in the elements. Based on the analysis results, we further improve the performance by optimizing the architecture of the film stacks or changing the thin-film material. The material parameters are then corrected to improve the accuracy of the simulation results. The simulated and experimental results confirm a positive correlation between measured leakage current and stress. This trend is attributed to the structural defects induced by high stress, which generate leakage. Using this relationship, we can change the structure of the thin-film stack to reduce the leakage current and thereby improve the component life and reliability of the CIS components. PMID:28468324
Noise distribution and denoising of current density images
Beheshti, Mohammadali; Foomany, Farbod H.; Magtibay, Karl; Jaffray, David A.; Krishnan, Sridhar; Nanthakumar, Kumaraswamy; Umapathy, Karthikeyan
2015-01-01
Abstract. Current density imaging (CDI) is a magnetic resonance (MR) imaging technique that can be used to study current pathways inside tissue. The current distribution is measured indirectly as phase changes. The noise inherent in MR imaging degrades the accuracy of the phase measurements, leading to imprecise estimates of current variations. The outcome can be affected significantly, especially at low signal-to-noise ratio (SNR). We show that the residual noise distribution of the phase is Gaussian-like and that the noise in CDI images can be approximated as Gaussian; this finding matches experimental results. We further investigated this finding by performing a comparative analysis of denoising techniques, using two CDI datasets with two different currents (20 and 45 mA). We found that the block-matching and three-dimensional (BM3D) technique outperforms the other techniques when applied to the current density (J). The minimum gain in noise power by BM3D applied to J, compared with the next-best technique in the analysis, was found to be around 2 dB per pixel. We characterize the noise profile in CDI images and provide insights into the performance of different denoising techniques applied at two different stages of current density reconstruction. PMID:26158100
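The "2 dB per pixel" figure above is a comparison of residual noise power between two denoising methods. As a hedged illustration (not the authors' code; the function names and the availability of a noise-free reference are our assumptions), such a gain can be computed from residuals against a known phantom:

```python
import numpy as np

def noise_power_db(estimate, truth):
    """Mean residual noise power of a denoised image, in dB."""
    residual = estimate.astype(float) - truth.astype(float)
    return 10.0 * np.log10(np.mean(residual ** 2))

def gain_db(method_a, method_b, truth):
    """dB of noise power removed by method A beyond method B (positive = A better)."""
    return noise_power_db(method_b, truth) - noise_power_db(method_a, truth)
```

For example, a method that halves the residual amplitude everywhere gains 20·log10(2) ≈ 6 dB over the baseline.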
CALIPSO: an interactive image analysis software package for desktop PACS workstations
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1990-07-01
The purpose of this project is to develop a low-cost workstation for quantitative analysis of multimodality images using a Macintosh II personal computer. In the current configuration the Macintosh operates as a stand-alone workstation where images are imported either from a central PACS server through a standard Ethernet network or recorded through a video digitizer board. The CALIPSO software developed contains a large variety of basic image display and manipulation tools. We focused our effort, however, on the design and implementation of quantitative analysis methods that can be applied to images from different imaging modalities. Analysis modules currently implemented include: geometric and densitometric volume and ejection fraction calculation from radionuclide and cine-angiograms; Fourier analysis of cardiac wall motion; vascular stenosis measurement; color-coded parametric display of regional flow distribution from dynamic coronary angiograms; and automatic analysis of myocardial distribution of radiolabelled tracers from tomoscintigraphic images. Several of these analysis tools were selected because they use similar color-coded and parametric display methods to communicate quantitative data extracted from the images. Developments of Picture Archiving and Communication Systems (PACS) in the clinical environment allow physicians and radiologists to assess radiographic images directly through imaging workstations. This convenient access to the images is often limited by the number of workstations available, due in part to their high cost. There is also an increasing need for quantitative analysis of the images.
Failure Analysis of CCD Image Sensors Using SQUID and GMR Magnetic Current Imaging
NASA Technical Reports Server (NTRS)
Felt, Frederick S.
2005-01-01
During electrical testing of a Full Field CCD Image Sensor, electrical shorts were detected on three of six devices. These failures occurred after the parts were soldered to the PCB. Failure analysis was performed to determine the cause and locations of these failures on the devices. After removing the fiber optic faceplate, optical inspection was performed on the CCDs to understand the design and package layout. Optical inspection revealed that the device had a light shield ringing the CCD array; this structure complicated the failure analysis. Alternate methods of analysis were considered, including liquid crystal, light and thermal emission, LT/A, TT/A, SQUID, and MP. Of these, the SQUID and MP techniques were pursued for further analysis. Magnetoresistive current imaging technology is also discussed and compared to SQUID.
Digital Radiographic Image Processing and Analysis.
Yoon, Douglas C; Mol, André; Benn, Douglas K; Benavides, Erika
2018-07-01
This article describes digital radiographic imaging and analysis from the basics of image capture to examples of some of the most advanced digital technologies currently available. The principles underlying the imaging technologies are described to provide a better understanding of their strengths and limitations. Copyright © 2018 Elsevier Inc. All rights reserved.
Imaging flow cytometry for phytoplankton analysis.
Dashkova, Veronika; Malashenkov, Dmitry; Poulton, Nicole; Vorobjev, Ivan; Barteneva, Natasha S
2017-01-01
This review highlights the concepts and instrumentation of imaging flow cytometry and, in particular, its use for phytoplankton analysis. Imaging flow cytometry, a hybrid technology combining the speed and statistical capabilities of flow cytometry with the imaging features of microscopy, is rapidly advancing as a cell imaging platform that overcomes many of the limitations of current techniques and has contributed significantly to the advancement of phytoplankton analysis in recent years. This review presents the various instruments relevant to the field and currently used to assess the composition and abundance of complex phytoplankton communities, determine size structure, estimate biovolume, detect harmful algal bloom species, evaluate viability and metabolic activity, and other applications. We also present our data on viability and metabolic assessment of Aphanizomenon sp. cyanobacteria using the ImageStream X Mark II imaging cytometer. Herein, we highlight the immense potential of imaging flow cytometry for microalgal research, but also discuss limitations and future developments. Copyright © 2016 Elsevier Inc. All rights reserved.
Chen, Jia-Mei; Li, Yan; Xu, Jun; Gong, Lei; Wang, Lin-Wei; Liu, Wen-Lou; Liu, Juan
2017-03-01
With the advance of digital pathology, image analysis has begun to show its advantages in extracting information from hematoxylin and eosin (H&E) histopathology images. Generally, histological features in H&E images are measured to evaluate tumor grade and prognosis for breast cancer. This review summarizes recent work in image analysis of H&E histopathology images for breast cancer prognosis. First, prognostic factors for breast cancer based on H&E histopathology images are summarized. Then, the typical procedures of image analysis for breast cancer prognosis are systematically reviewed: image acquisition, image preprocessing, image detection and segmentation, and feature extraction. Finally, the prognostic value of image features and of image-feature-based prognostic models is evaluated. We also discuss issues with current analyses and some directions for future research.
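The procedure the review describes (preprocessing, detection/segmentation, feature extraction) can be sketched end to end. This is a generic illustration with hypothetical thresholds and simplistic stand-ins for each stage, not a method from any of the reviewed papers:

```python
import numpy as np

def preprocess(rgb):
    """Grayscale conversion plus min-max contrast normalization."""
    gray = rgb.astype(float) @ np.array([0.299, 0.587, 0.114])
    return (gray - gray.min()) / (gray.max() - gray.min() + 1e-12)

def segment_nuclei(gray, threshold=0.5):
    """Hematoxylin-stained nuclei appear dark: keep pixels below the threshold."""
    return gray < threshold

def extract_features(mask):
    """One prognosis-oriented feature: fraction of tissue area occupied by nuclei."""
    return {"nuclear_area_fraction": float(mask.mean())}
```

In practice the extracted features (morphometry, texture, spatial arrangement) would then feed a prognostic model such as a survival regression.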
Low Cost Desktop Image Analysis Workstation With Enhanced Interactive User Interface
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Huang, H. K.
1989-05-01
A multimodality picture archiving and communication system (PACS) is in routine clinical use in the UCLA Radiology Department. Several types of workstations are currently implemented for this PACS. Among them, the Apple Macintosh II personal computer was recently chosen to serve as a desktop workstation for the display and analysis of radiological images. This personal computer was selected mainly because of its extremely friendly user interface, its popularity among the academic and medical community, and its low cost. In comparison to other microcomputer-based systems, the Macintosh II offers the following advantages: the extreme standardization of its user interface, file system and networking, and the availability of a very large variety of commercial software packages. In the current configuration the Macintosh II operates as a stand-alone workstation where images are imported from a centralized PACS server through an Ethernet network using the standard TCP/IP protocol, and stored locally on magnetic disk. The use of high-resolution screens (1024×768 pixels × 8 bits) offers sufficient performance for image display and analysis. We focused our project on the design and implementation of a variety of image analysis algorithms, ranging from automated structure and edge detection to sophisticated dynamic analysis of sequential images. Specific analysis programs were developed for ultrasound images, digitized angiograms, MRI and CT tomographic images, and scintigraphic images.
Volumetric image interpretation in radiology: scroll behavior and cognitive processes.
den Boer, Larissa; van der Schaaf, Marieke F; Vincken, Koen L; Mol, Chris P; Stuijfzand, Bobby G; van der Gijp, Anouk
2018-05-16
The interpretation of medical images is a primary task for radiologists. Besides two-dimensional (2D) images, current imaging technologies allow for volumetric display of medical images. Whereas current radiology practice increasingly uses volumetric images, the majority of studies on medical image interpretation are conducted on 2D images. The current study aimed to gain deeper insight into the volumetric image interpretation process by examining this process in twenty radiology trainees who each completed four volumetric image cases. Two types of data were obtained: scroll behavior and think-aloud data. The types of scroll behavior were oscillations, half runs, full runs, image manipulations, and interruptions. Think-aloud data were coded using a framework of knowledge and skills in radiology covering three cognitive processes: perception, analysis, and synthesis. Relating scroll behavior to cognitive processes showed that oscillations and half runs coincided more often with analysis and synthesis than full runs did, whereas full runs coincided more often with perception than oscillations and half runs did. Interruptions were characterized by synthesis, and image manipulations by perception. In addition, we investigated relations between cognitive processes and found an overall bottom-up way of reasoning with dynamic interactions between cognitive processes, especially between perception and analysis. In sum, our results highlight the dynamic interactions between these processes and the grounding of cognitive processes in scroll behavior. This suggests that the types of scroll behavior are relevant for describing how radiologists interact with and manipulate volumetric images.
V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis
NASA Astrophysics Data System (ADS)
Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.
2011-09-01
In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL), being developed at the Indian Institute of Technology Bombay, is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by the Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, a bibliography, links to useful internet resources, and user feedback. Users can upload their own images for the experiments and can also reuse the outputs of one experiment in another where applicable. Experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be opened to selected educational institutions in India for evaluation.
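Experiments such as Image Contrast Enhancement typically begin with a percentile-based linear stretch. A minimal sketch of that building block (an assumption about the experiment's content, not v-SIPAL's actual implementation):

```python
import numpy as np

def linear_stretch(band, lower_pct=2.0, upper_pct=98.0):
    """Clip a satellite band to its 2nd-98th percentile range and rescale to 8 bits,
    so a few extreme pixels do not compress the display range of the rest."""
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    scaled = (band.astype(float) - lo) / max(hi - lo, 1e-12)
    return (np.clip(scaled, 0.0, 1.0) * 255.0).astype(np.uint8)
```

The same pattern generalizes to the other point operations in the list (e.g. image indices are per-pixel band arithmetic followed by such a rescale).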
Muralidhar, Gautam S; Channappayya, Sumohana S; Slater, John H; Blinka, Ellen M; Bovik, Alan C; Frey, Wolfgang; Markey, Mia K
2008-11-06
Automated analysis of fluorescence microscopy images of endothelial cells labeled for actin is important for quantifying changes in the actin cytoskeleton. The current manual approach is laborious and inefficient. The goal of our work is to develop automated image analysis methods, thereby increasing cell analysis throughput. In this study, we present preliminary results on comparing different algorithms for cell segmentation and image denoising.
Finite element analysis of gradient z-coil induced eddy currents in a permanent MRI magnet.
Li, Xia; Xia, Ling; Chen, Wufan; Liu, Feng; Crozier, Stuart; Xie, Dexin
2011-01-01
In permanent magnetic resonance imaging (MRI) systems, pulsed gradient fields induce strong eddy currents in the conducting structures of the magnet body. The gradient field for image encoding is perturbed by these eddy currents leading to MR image distortions. This paper presents a comprehensive finite element (FE) analysis of the eddy current generation in the magnet conductors. In the proposed FE model, the hysteretic characteristics of ferromagnetic materials are considered and a scalar Preisach hysteresis model is employed. The developed FE model was applied to study gradient z-coil induced eddy currents in a 0.5 T permanent MRI device. The simulation results demonstrate that the approach could be effectively used to investigate eddy current problems involving ferromagnetic materials. With the knowledge gained from this eddy current model, our next step is to design a passive magnet structure and active gradient coils to reduce the eddy current effects. Copyright © 2010 Elsevier Inc. All rights reserved.
Duarte, Cristiana; Pinto-Gouveia, José
2017-12-01
This study examined the phenomenology of shame experiences from childhood and adolescence in a sample of women with binge eating disorder. Moreover, a path analysis tested whether the association between shame-related memories that are traumatic and central to identity and the severity of binge eating symptoms is mediated by current external shame, body image shame and body image cognitive fusion. Participants were 114 patients, assessed through the Eating Disorder Examination and the Shame Experiences Interview, and through self-report measures of external shame, body image shame, body image cognitive fusion and binge eating symptoms. Shame experiences in which physical appearance was negatively commented on or criticized by others were the most frequently recalled. The path analysis showed a good fit between the hypothesised mediational model and the data. The traumatic and centrality qualities of shame-related memories predicted current external shame, especially body image shame. Current shame feelings were associated with body image cognitive fusion, which, in turn, predicted levels of binge eating symptomatology. Findings support the relevance of addressing early shame-related memories and negative affective and self-evaluative experiences, namely those related to body image, in the understanding and management of binge eating. Copyright © 2017 Elsevier B.V. All rights reserved.
Design Criteria For Networked Image Analysis System
NASA Astrophysics Data System (ADS)
Reader, Cliff; Nitteberg, Alan
1982-01-01
Image systems design is currently undergoing a metamorphosis from the conventional computing systems of the past into a new generation of special-purpose designs. This change is motivated by several factors, notable among which is the increased opportunity for high performance at low cost offered by advances in semiconductor technology. Another key issue is a maturing understanding of the problems and of the applicability of digital processing techniques. These factors allow the design of cost-effective systems that are functionally dedicated to specific applications and used in a utilitarian fashion. Following an overview of these issues, the paper presents a top-down approach to the design of networked image analysis systems. The requirements for such a system are presented, with an orientation toward the hospital environment. The three main areas are image database management, viewing of image data, and image data processing. This is followed by a survey of the current state of the art, covering image display systems, database techniques, communications networks and software systems control. The paper concludes with a description of the functional subsystems and architectural framework for networked image analysis in a production environment.
An Analysis of Web Image Queries for Search.
ERIC Educational Resources Information Center
Pu, Hsiao-Tieh
2003-01-01
Examines the differences between Web image and textual queries, and attempts to develop an analytic model to investigate their implications for Web image retrieval systems. Provides results that give insight into Web image searching behavior and suggests implications for improvement of current Web image search engines. (AEF)
Kruse, Christian
2018-06-01
To review current practices and technologies within the scope of "Big Data" that can further our understanding of diabetes mellitus and osteoporosis from large volumes of data. "Big Data" techniques involving supervised machine learning, unsupervised machine learning, and deep learning image analysis are presented with examples of current literature. Supervised machine learning can allow us to better predict diabetes-induced osteoporosis and understand relative predictor importance of diabetes-affected bone tissue. Unsupervised machine learning can allow us to understand patterns in data between diabetic pathophysiology and altered bone metabolism. Image analysis using deep learning can allow us to be less dependent on surrogate predictors and use large volumes of images to classify diabetes-induced osteoporosis and predict future outcomes directly from images. "Big Data" techniques herald new possibilities to understand diabetes-induced osteoporosis and ascertain our current ability to classify, understand, and predict this condition.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is an important image preprocessing technology, especially when an image is captured under poor imaging conditions or has a high bit depth. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore natural to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. This knowledge holds that humans' perception of and response to an intensity fluctuation δu of a visual signal are weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law and Stevens's law. This paper integrates these three laws of human vision psychology and perception into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit-depth star images captured at night and on infrared images, both static images and video streams. For video streams, the algorithm reduces jitter by using the difference between the current frame's plateau value and the previous frame's plateau value to correct the current frame's plateau value. To account for random noise, the pixel-value mapping depends not only on the current pixel but also on the pixels in a window (usually 3×3) surrounding it. The results of the improved algorithm are evaluated by entropy analysis and visual perception analysis.
The experiments showed that the improved APE algorithm improves image quality: the target and the surrounding assistant targets can be identified easily, and noise is not amplified much. For low-quality images, the improved algorithm increases the information entropy and improves the aesthetic quality of the image and video stream, while for high-quality images it does not degrade quality.
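The plateau equalization and the frame-to-frame plateau correction described above might look like the following sketch (our own illustration of the idea, not the paper's code; parameter names and the blending weight are assumptions):

```python
import numpy as np

def plateau_equalize(image, plateau, n_bins=65536):
    """Histogram equalization with bin counts clipped at a plateau value, so a
    dominant background (e.g. night sky) cannot flatten the stars' contrast."""
    hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
    clipped = np.minimum(hist, plateau)      # the plateau caps each bin's weight
    cdf = np.cumsum(clipped).astype(float)
    cdf /= cdf[-1]                           # normalize mapping to [0, 1]
    # map each (integer) pixel value through the clipped CDF to 8 bits
    return (cdf[np.clip(image, 0, n_bins - 1)] * 255.0).astype(np.uint8)

def corrected_plateau(current, previous, alpha=0.5):
    """Blend this frame's plateau with the previous frame's to suppress the
    frame-to-frame jitter mentioned in the abstract."""
    return alpha * current + (1.0 - alpha) * previous
```

For a video stream, `corrected_plateau` is applied before each call to `plateau_equalize`, damping abrupt changes of the plateau value between consecutive frames.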
Sun glitter imaging analysis of submarine sand waves in HJ-1A/B satellite CCD images
NASA Astrophysics Data System (ADS)
Zhang, Huaguo; He, Xiekai; Yang, Kang; Fu, Bin; Guan, Weibing
2014-11-01
Submarine sand waves are a widespread bed-form in tidal environments. They induce current convergence and divergence that affect sea surface roughness, and thus become visible in sun glitter images; such images have been employed for mapping sand wave topography. However, many factors affect sun glitter imaging of submarine sand waves, such as the imaging geometry and the dynamic environmental conditions. In this paper, several sun glitter images of the Taiwan Banks from HJ-1A/B are selected. These satellite images are used to discuss sun glitter imaging characteristics under different sensor parameters and dynamic environmental conditions. To interpret the imaging characteristics, the sun glitter radiance is calculated and the spatial characteristics of the sand waves are analyzed across the different images. A simulation model based on sun glitter radiative transfer is adopted to further verify the imaging analysis. Several results are drawn from the study. First, the sun glitter radiance is mainly determined by the sensor view angle. Second, the current is another key factor for sun glitter: an opposite current direction causes the bright and dark stripes to exchange. Third, brightness reversal occurs at the critical angle. Therefore, when using sun glitter images for depth inversion, one is advised to take advantage of the image properties of the sand waves and to pay attention to the key dynamic environmental conditions and to brightness reversal.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fertig, Fabian, E-mail: fabian.fertig@ise.fraunhofer.de; Greulich, Johannes; Rein, Stefan
Spatially resolved determination of solar cell parameters is beneficial for loss analysis and optimization of conversion efficiency. One key parameter that has been challenging to access by an imaging technique at the solar cell level is short-circuit current density. This work discusses the robustness of a recently suggested approach to determine short-circuit current density in a spatially resolved manner from a series of lock-in thermography images, and options for a simplified image acquisition procedure. For an accurate result, one or two emissivity-corrected illuminated lock-in thermography images and one dark lock-in thermography image have to be recorded. The dark lock-in thermography image can be omitted if local shunts are negligible. Furthermore, it is shown that omitting the correction of lock-in thermography images for local emissivity variations leads to only minor distortions for standard silicon solar cells. Hence, adequate acquisition of one image only is sufficient to generate a meaningful map of short-circuit current density. Beyond that, this work illustrates the underlying physics of the recently proposed method and demonstrates its robustness with respect to varying excitation conditions and locally increased series resistance. Experimentally obtained short-circuit current density images are validated for monochromatic illumination against the reference method of light-beam-induced current.
Kinetic Simulation and Energetic Neutral Atom Imaging of the Magnetosphere
NASA Technical Reports Server (NTRS)
Fok, Mei-Ching H.
2011-01-01
Advanced simulation tools and measurement techniques have been developed to study the dynamic magnetosphere and its response to drivers in the solar wind. The Comprehensive Ring Current Model (CRCM) is a kinetic code that solves the 3D distribution in space, energy and pitch angle of energetic ions and electrons. Energetic Neutral Atom (ENA) imagers have been carried on past and current satellite missions, and the observed ENA images have revealed the global morphology of energetic ions. We have combined simulation and ENA analysis techniques to study the development of ring current ions during magnetic storms and substorms. We identify the timing and location of particle injection and loss, and examine the evolution of ion energy and pitch-angle distributions during different phases of a storm. In this talk we will discuss the findings from our ring current studies and how our simulation and ENA analysis tools can be applied to the upcoming TRIO-CINAMA mission.
Digital image analysis techniques for fiber and soil mixtures : technical summary.
DOT National Transportation Integrated Search
1999-05-01
This project used two innovative technologies of digital image analysis for the characterization of a material currently being considered for broad use at DOTD. The material under consideration is a mixture of fiber and soil for use in the stabilizati...
An onboard data analysis method to track the seasonal polar caps on Mars
Wagstaff, K.L.; Castano, R.; Chien, S.; Ivanov, A.B.; Pounders, E.; Titus, T.N.; ,
2005-01-01
The Martian seasonal CO2 ice caps advance and retreat each year. They are currently studied using instruments such as the THermal EMission Imaging System (THEMIS), a visible and infra-red camera on the Mars Odyssey spacecraft [1]. However, each image must be downlinked to Earth prior to analysis. In contrast, we have developed the Bimodal Image Temperature (BIT) histogram analysis method for onboard detection of the cap edge, before transmission. In downlink-limited scenarios when the entire image cannot be transmitted, the location of the cap edge can still be identified and sent to Earth. In this paper, we evaluate our method on uncalibrated THEMIS data and find 1) agreement with manual cap edge identifications to within 28.2 km, and 2) high accuracy even with a smaller analysis window, yielding large reductions in memory requirements. This algorithm is currently being considered as a capability enhancement for the Odyssey second extended mission, beginning in fall 2006.
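The core of a bimodal-histogram edge detector is choosing the temperature that best separates the histogram's two modes. One standard choice, shown here purely as an illustration (Otsu's between-class variance criterion, not necessarily the BIT method's exact rule), is:

```python
import numpy as np

def bimodal_threshold(temps, n_bins=64):
    """Split a bimodal brightness-temperature histogram: return the bin edge that
    maximizes between-class variance (cold CO2 ice vs. warm defrosted ground)."""
    hist, edges = np.histogram(temps, bins=n_bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    total = float(hist.sum())
    best_edge, best_var = edges[1], -1.0
    for k in range(1, n_bins):
        n0, n1 = hist[:k].sum(), hist[k:].sum()
        if n0 == 0 or n1 == 0:
            continue  # all mass on one side: not a valid split
        mu0 = (hist[:k] * centers[:k]).sum() / n0   # mean of the cold class
        mu1 = (hist[k:] * centers[k:]).sum() / n1   # mean of the warm class
        between = (n0 / total) * (n1 / total) * (mu0 - mu1) ** 2
        if between > best_var:
            best_var, best_edge = between, edges[k]
    return best_edge
```

Pixels colder than the returned threshold would be flagged as seasonal cap, and the cap edge located where that mask meets defrosted ground; only the edge location, not the full image, then needs downlinking.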
Image Analysis in Plant Sciences: Publish Then Perish.
Lobet, Guillaume
2017-07-01
Image analysis has become a powerful technique for most plant scientists. In recent years dozens of image analysis tools have been published in plant science journals. These tools cover the full spectrum of plant scales, from single cells to organs and canopies. However, the field of plant image analysis remains in its infancy. It still has to overcome important challenges, such as the lack of robust validation practices or the absence of long-term support. In this Opinion article, I: (i) present the current state of the field, based on data from the plant-image-analysis.org database; (ii) identify the challenges faced by its community; and (iii) propose workable ways of improvement. Copyright © 2017 Elsevier Ltd. All rights reserved.
Ultrasonic image analysis and image-guided interventions.
Noble, J Alison; Navab, Nassir; Becher, H
2011-08-06
The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and a better understanding of how to design algorithms that exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, one of the most advanced areas of clinical application of US image analysis, and by describing some probable future trends in this important area of ultrasonic imaging research.
PICASSO: an end-to-end image simulation tool for space and airborne imaging systems
NASA Astrophysics Data System (ADS)
Cota, Steve A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Chris J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Willkinson, Timothy S.
2008-08-01
The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.
PICASSO: an end-to-end image simulation tool for space and airborne imaging systems
NASA Astrophysics Data System (ADS)
Cota, Stephen A.; Bell, Jabin T.; Boucher, Richard H.; Dutton, Tracy E.; Florio, Christopher J.; Franz, Geoffrey A.; Grycewicz, Thomas J.; Kalman, Linda S.; Keller, Robert A.; Lomheim, Terrence S.; Paulson, Diane B.; Wilkinson, Timothy S.
2010-06-01
The design of any modern imaging system is the end result of many trade studies, each seeking to optimize image quality within real world constraints such as cost, schedule and overall risk. Image chain analysis - the prediction of image quality from fundamental design parameters - is an important part of this design process. At The Aerospace Corporation we have been using a variety of image chain analysis tools for many years, the Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) among them. In this paper we describe our PICASSO tool, showing how, starting with a high quality input image and hypothetical design descriptions representative of the current state of the art in commercial imaging satellites, PICASSO can generate standard metrics of image quality in support of the decision processes of designers and program managers alike.
Zarb, Francis; McEntee, Mark F; Rainford, Louise
2015-06-01
To evaluate visual grading characteristics (VGC) and ordinal regression analysis during head CT optimisation as a potential alternative to visual grading assessment (VGA), traditionally employed to score anatomical visualisation. Patient images (n = 66) were obtained using current and optimised imaging protocols from two CT suites: a 16-slice scanner at the national Maltese centre for trauma and a 64-slice scanner in a private centre. Local resident radiologists (n = 6) performed VGA followed by VGC and ordinal regression analysis. VGC alone indicated that the optimised protocols had image quality similar to that of the current protocols. Ordinal logistic regression analysis provided an in-depth evaluation, criterion by criterion, allowing the selective implementation of the protocols. The local radiology review panel supported the implementation of optimised protocols for brain CT examinations (including trauma) in one centre, achieving radiation dose reductions ranging from 24 % to 36 %. In the second centre a 29 % reduction in radiation dose was achieved for follow-up cases. The combined use of VGC and ordinal logistic regression analysis led to clinical decisions being taken on the implementation of the optimised protocols. This improved method of image quality analysis provided the evidence to support imaging protocol optimisation, resulting in significant radiation dose savings. • There is a need for scientifically based image quality evaluation during CT optimisation. • VGC and ordinal regression analysis in combination led to better informed clinical decisions. • VGC and ordinal regression analysis led to dose reductions without compromising diagnostic efficacy.
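The VGC area mentioned above is, in the nonparametric case, equivalent to a Mann-Whitney statistic on the two sets of ordinal ratings. The sketch below (a minimal illustration, not the study's actual analysis code) computes that area directly: a value near 0.5 means the two protocols yield similar perceived image quality.

```python
import numpy as np

def vgc_auc(scores_current, scores_optimised):
    """Nonparametric area under the VGC curve.

    Equals the probability that a randomly chosen optimised-protocol
    rating exceeds a randomly chosen current-protocol rating, with ties
    counting one half (the Mann-Whitney U statistic, normalised).
    """
    a = np.asarray(scores_optimised, float)[:, None]
    b = np.asarray(scores_current, float)[None, :]
    wins = (a > b).sum() + 0.5 * (a == b).sum()
    return wins / (a.shape[0] * b.shape[1])

# Hypothetical 5-point ordinal ratings for the same criterion.
auc = vgc_auc([3, 3, 4, 2, 3], [3, 4, 3, 3, 2])
```

An ordinal logistic regression on the same ratings (e.g. with protocol as a covariate) would then give the criterion-by-criterion evaluation the abstract describes.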
Dual-Energy CT: New Horizon in Medical Imaging
Goo, Jin Mo
2017-01-01
Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151
Assessment of cluster yield components by image analysis.
Diago, Maria P; Tardaguila, Javier; Aleixos, Nuria; Millan, Borja; Prats-Montalban, Jose M; Cubero, Sergio; Blasco, Jose
2015-04-01
Berry weight, berry number and cluster weight are key parameters for yield estimation in the wine and table grape industries. Current yield prediction methods are destructive, labour-demanding and time-consuming. In this work, a new methodology based on image analysis was developed to determine cluster yield components in a fast and inexpensive way. Clusters of seven different red varieties of grapevine (Vitis vinifera L.) were photographed under laboratory conditions and their cluster yield components manually determined after image acquisition. Two algorithms based on the Canny and the logarithmic image processing approaches were tested to find the contours of the berries in the images prior to berry detection performed by means of the Hough Transform. Results were obtained in two ways: by analysing either a single image of the cluster or four images per cluster from different orientations. The best results (R² between 69% and 95% in berry detection and between 65% and 97% in cluster weight estimation) were achieved using four images and the Canny algorithm. The capability of the image-based model to predict berry weight was 84%. The new and low-cost methodology presented here enabled the assessment of cluster yield components, saving time and providing inexpensive information in comparison with current manual methods. © 2014 Society of Chemical Industry.
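The core of the berry-detection step, a circular Hough transform applied to edge contours, can be sketched as follows. This is a hypothetical, numpy-only illustration for a single fixed radius (the study's pipeline, with Canny edge detection and multi-radius search, is more involved): each edge point votes for all candidate centres one radius away, and the accumulator maximum is the most likely berry centre.

```python
import numpy as np

def hough_circle_centre(edge_points, shape, radius, n_angles=360):
    """Vote for circle centres of a fixed radius given edge coordinates.

    edge_points: iterable of (y, x) edge locations (e.g. from Canny).
    Returns the (row, col) of the accumulator maximum.
    """
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in edge_points:
        # Candidate centres lie on a circle of the same radius around
        # the edge point; accumulate one vote per candidate.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), shape)

# Synthetic "berry": edge points on a circle of radius 10 centred at (32, 40).
angles = np.linspace(0, 2 * np.pi, 120, endpoint=False)
pts = [(32 + 10 * np.sin(a), 40 + 10 * np.cos(a)) for a in angles]
centre = hough_circle_centre(pts, (64, 64), radius=10)
```

Counting accumulator peaks then gives the berry number, from which weight models such as the one above can be fitted.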
Okumura, Miwa; Ota, Takamasa; Kainuma, Kazuhisa; Sayre, James W.; McNitt-Gray, Michael; Katada, Kazuhiro
2008-01-01
Objective. For the multislice CT (MSCT) systems with a larger number of detector rows, it is essential to employ dose-reduction techniques. As reported in previous studies, edge-preserving adaptive image filters, which selectively eliminate only the noise elements that are increased when the radiation dose is reduced without affecting the sharpness of images, have been developed. In the present study, we employed receiver operating characteristic (ROC) analysis to assess the effects of the quantum denoising system (QDS), which is an edge-preserving adaptive filter that we have developed, on low-contrast resolution, and to evaluate to what degree the radiation dose can be reduced while maintaining acceptable low-contrast resolution. Materials and Methods. The low-contrast phantoms (Catphan 412) were scanned at various tube current settings, and ROC analysis was then performed for the groups of images obtained with/without the use of QDS at each tube current to determine whether or not a target could be identified. The tube current settings for which the area under the ROC curve (Az value) was approximately 0.7 were determined for both groups of images with/without the use of QDS. Then, the radiation dose reduction ratio when QDS was used was calculated by converting the determined tube current to the radiation dose. Results. The use of the QDS edge-preserving adaptive image filter allowed the radiation dose to be reduced by up to 38%. Conclusion. The QDS was found to be useful for reducing the radiation dose without affecting the low-contrast resolution in MSCT studies. PMID:19043565
NASA Astrophysics Data System (ADS)
Saxena, Nishank; Hows, Amie; Hofmann, Ronny; Alpak, Faruk O.; Freeman, Justin; Hunter, Sander; Appel, Matthias
2018-06-01
This study defines the optimal operating envelope of the Digital Rock technology from the perspective of imaging and numerical simulations of transport properties. Imaging larger volumes of rocks for Digital Rock Physics (DRP) analysis improves the chances of achieving a Representative Elementary Volume (REV) at which flow-based simulations (1) do not vary with change in rock volume, and (2) is insensitive to the choice of boundary conditions. However, this often comes at the expense of image resolution. This trade-off exists due to the finiteness of current state-of-the-art imaging detectors. Imaging and analyzing digital rocks that sample the REV and still sufficiently resolve pore throats is critical to ensure simulation quality and robustness of rock property trends for further analysis. We find that at least 10 voxels are needed to sufficiently resolve pore throats for single phase fluid flow simulations. If this condition is not met, additional analyses and corrections may allow for meaningful comparisons between simulation results and laboratory measurements of permeability, but some cases may fall outside the current technical feasibility of DRP. On the other hand, we find that the ratio of field of view and effective grain size provides a reliable measure of the REV for siliciclastic rocks. If this ratio is greater than 5, the coefficient of variation for single-phase permeability simulations drops below 15%. These imaging considerations are crucial when comparing digitally computed rock flow properties with those measured in the laboratory. We find that the current imaging methods are sufficient to achieve both REV (with respect to numerical boundary conditions) and required image resolution to perform digital core analysis for coarse to fine-grained sandstones.
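The two rules of thumb reported above (at least 10 voxels across a pore throat, and a field-of-view to grain-size ratio greater than 5 for an REV in siliciclastics) translate into a simple feasibility screen. The helper below is a minimal sketch with hypothetical parameter names, not part of the study itself:

```python
def drp_feasibility(fov_um, grain_size_um, throat_um, voxel_um):
    """Screen a proposed Digital Rock scan against the paper's criteria.

    Returns (throat_resolved, rev_reached):
    - throat_resolved: narrowest pore throat spans >= 10 voxels, so
      single-phase flow simulations are trustworthy.
    - rev_reached: field of view exceeds 5 effective grain diameters,
      where the permeability coefficient of variation drops below ~15%.
    """
    throat_resolved = throat_um / voxel_um >= 10
    rev_reached = fov_um / grain_size_um > 5
    return throat_resolved, rev_reached
```

A scan failing the first test may still be usable with corrections, while failing both places the sample outside the operating envelope the study defines.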
Leischik, Roman; Littwitz, Henning; Dworrak, Birgit; Garg, Pankaj; Zhu, Meihua; Sahn, David J; Horlitz, Marc
2015-01-01
Left atrial (LA) functional analysis has an established role in assessing left ventricular diastolic function. The current standard echocardiographic parameters used to study left ventricular diastolic function include pulsed-wave Doppler mitral inflow analysis, tissue Doppler imaging measurements, and LA dimension estimation. However, the above-mentioned parameters do not directly quantify LA performance. Deformation studies using strain and strain-rate imaging to assess LA function were validated in previous research, but this technique is not currently used in routine clinical practice. This review discusses the history, importance, and pitfalls of strain technology for the analysis of LA mechanics.
Roles of universal three-dimensional image analysis devices that assist surgical operations.
Sakamoto, Tsuyoshi
2014-04-01
The circumstances surrounding medical image analysis have undergone rapid evolution. In such a situation, it can be said that "imaging" obtained through medical imaging modalities and the "analysis" that we employ have become amalgamated. Recently, we feel the distance between "imaging" and "analysis" has become closer in the imaging analysis of any organ system, as if both terms mentioned above have become integrated. The history of medical image analysis started with the appearance of the computer. The invention of multi-planar reconstruction (MPR) used in the helical scan had a significant impact and became the basis for recent image analysis. Subsequently, curved MPR (CPR) and other methods were developed, and the 3D diagnostic imaging and image analysis of the human body started on a full scale. Volume rendering: the development of a new rendering algorithm and the significant improvement of memory and CPUs contributed to the development of "volume rendering," which allows 3D views with retained internal information. A new value was created by this development; computed tomography (CT) images that used to be for "diagnosis" before that time have become "applicable to treatment." In the past, before the development of volume rendering, a clinician had to mentally reconstruct an image reconfigured for diagnosis into a 3D image, but these developments have allowed the depiction of a 3D image on a monitor. Current technology: Currently, in Japan, the estimation of the liver volume and the perfusion areas of the portal vein and hepatic vein is vigorously being adopted during preoperative planning for hepatectomy. Such a circumstance seems to have been brought about by the substantial improvement of these basic techniques and by upgrading of the user interface, allowing doctors easy manipulation by themselves. The following describes the specific techniques.
Future of post-processing technology: It is expected, in terms of the role of image analysis, for better or worse, that computer-aided diagnosis (CAD) will develop to a highly advanced level in every diagnostic field. Further, it is also expected in the treatment field that a technique coordinating various devices will be strongly required as a surgery navigator. Actually, surgery using an image navigator is being widely studied, and coordination with hardware, including robots, will also be developed. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
In, Myung-Ho; Posnansky, Oleg; Speck, Oliver
2016-05-01
To accurately correct diffusion-encoding direction-dependent eddy-current-induced geometric distortions in diffusion-weighted echo-planar imaging (DW-EPI) and to minimize the calibration time at 7 Tesla (T). A point spread function (PSF) mapping based eddy-current calibration method is newly presented to determine eddy-current-induced geometric distortions even including nonlinear eddy-current effects within the readout acquisition window. To evaluate the temporal stability of eddy-current maps, calibration was performed four times within 3 months. Furthermore, spatial variations of measured eddy-current maps versus their linear superposition were investigated to enable correction in DW-EPIs with arbitrary diffusion directions without direct calibration. For comparison, an image-based eddy-current correction method was additionally applied. Finally, this method was combined with a PSF-based susceptibility-induced distortion correction approach proposed previously to correct both susceptibility and eddy-current-induced distortions in DW-EPIs. Very fast eddy-current calibration in a three-dimensional volume is possible with the proposed method. The measured eddy-current maps are very stable over time and very similar maps can be obtained by linear superposition of principal-axes eddy-current maps. High resolution in vivo brain results demonstrate that the proposed method allows more efficient eddy-current correction than the image-based method. The combination of both PSF-based approaches allows distortion-free images, which permit reliable analysis in diffusion tensor imaging applications at 7T. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Ward, T.; Fleming, J. S.; Hoffmann, S. M. A.; Kemp, P. M.
2005-11-01
Simulation is useful in the validation of functional image analysis methods, particularly when considering the number of analysis techniques currently available that lack thorough validation. Problems exist with current simulation methods due to long run times or unrealistic results, which make it problematic to generate complete datasets. A method is presented for simulating known abnormalities within normal brain SPECT images using a measured point spread function (PSF), and incorporating a stereotactic atlas of the brain for anatomical positioning. This allows for the simulation of realistic images through the use of prior information regarding disease progression. SPECT images of cerebral perfusion have been generated consisting of a control database and a group of simulated abnormal subjects that are to be used in a UK audit of analysis methods. The abnormality is defined in the stereotactic space, then transformed to the individual subject space, convolved with a measured PSF and removed from the normal subject image. The dataset was analysed using SPM99 (Wellcome Department of Imaging Neuroscience, University College, London) and the MarsBaR volume of interest (VOI) analysis toolbox. The results were evaluated by comparison with the known ground truth. The analysis showed improvement when using a smoothing kernel equal to the system resolution over the slightly larger kernel used routinely. Significant correlation was found between the effective volume of a simulated abnormality and the detected size using SPM99. Improvements in VOI analysis sensitivity were found when using the region median over the region mean. The method and dataset provide an efficient methodology for use in the comparison and cross validation of semi-quantitative analysis methods in brain SPECT, and allow the optimization of analysis parameters.
Mansoor, Awais; Foster, Brent; Xu, Ziyue; Papadakis, Georgios Z.; Folio, Les R.; Udupa, Jayaram K.; Mollura, Daniel J.
2015-01-01
The computer-based process of identifying the boundaries of lung from surrounding thoracic tissue on computed tomographic (CT) images, which is called segmentation, is a vital first step in radiologic pulmonary image analysis. Many algorithms and software platforms provide image segmentation routines for quantification of lung abnormalities; however, nearly all of the current image segmentation approaches apply well only if the lungs exhibit minimal or no pathologic conditions. When moderate to high amounts of disease or abnormalities with a challenging shape or appearance exist in the lungs, computer-aided detection systems may be highly likely to fail to depict those abnormal regions because of inaccurate segmentation methods. In particular, abnormalities such as pleural effusions, consolidations, and masses often cause inaccurate lung segmentation, which greatly limits the use of image processing methods in clinical and research contexts. In this review, a critical summary of the current methods for lung segmentation on CT images is provided, with special emphasis on the accuracy and performance of the methods in cases with abnormalities and cases with exemplary pathologic findings. The currently available segmentation methods can be divided into five major classes: (a) thresholding-based, (b) region-based, (c) shape-based, (d) neighboring anatomy–guided, and (e) machine learning–based methods. The feasibility of each class and its shortcomings are explained and illustrated with the most common lung abnormalities observed on CT images. In an overview, practical applications and evolving technologies combining the presented approaches for the practicing radiologist are detailed. ©RSNA, 2015 PMID:26172351
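The first of the five classes, thresholding-based segmentation, is simple enough to sketch, and the sketch also shows exactly why it fails in the abnormal cases the review emphasises. A minimal illustration (hypothetical threshold value):

```python
import numpy as np

def threshold_lung_mask(ct_hu, air_hu=-500.0):
    """Thresholding-based lung segmentation on a CT slice in Hounsfield units.

    Aerated lung is far less dense than soft tissue, so voxels below
    roughly -500 HU are labelled lung. Real pipelines then discard the
    background air component and fill vessels. Dense pathology such as
    effusion or consolidation rises above the threshold and is lost,
    which is the failure mode discussed in the review.
    """
    return ct_hu < air_hu

# Toy slice: soft tissue (~40 HU) with an aerated lung region (~-800 HU).
ct = np.full((10, 10), 40.0)
ct[2:8, 2:8] = -800.0
mask = threshold_lung_mask(ct)
```

The region-, shape-, anatomy-guided and machine-learning classes exist largely to recover the pathological voxels this one-line rule misses.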
Analyses of S-Box in Image Encryption Applications Based on Fuzzy Decision Making Criterion
NASA Astrophysics Data System (ADS)
Rehman, Inayatur; Shah, Tariq; Hussain, Iqtadar
2014-06-01
In this manuscript, we put forward a standard based on a fuzzy decision-making criterion to examine current substitution boxes and study their strengths and weaknesses in order to decide their appropriateness for image encryption applications. The proposed standard utilizes the results of correlation analysis, entropy analysis, contrast analysis, homogeneity analysis, energy analysis, and mean of absolute deviation analysis. These analyses are applied to well-known substitution boxes. The outcomes of these analyses are then examined further, and a fuzzy soft set decision-making criterion is used to decide the suitability of an S-box for image encryption applications.
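Three of the statistical measures named in the abstract can be computed directly from an S-box-encrypted image. The sketch below is a simplified illustration (the contrast here is a neighbour-difference proxy for the GLCM contrast used in such studies, and the fuzzy soft-set aggregation step is not shown):

```python
import numpy as np

def image_analyses(img):
    """Entropy, contrast and energy of an 8-bit image.

    entropy: Shannon entropy of the grey-level histogram (bits; near 8
             for a well-encrypted image).
    contrast: mean squared difference of horizontal neighbour pairs,
              a simple stand-in for GLCM contrast.
    energy: sum of squared grey-level probabilities (near 1/256 for a
            uniform histogram).
    """
    p = np.bincount(img.ravel(), minlength=256) / img.size
    entropy = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    diffs = img[:, 1:].astype(int) - img[:, :-1].astype(int)
    contrast = float(np.mean(diffs ** 2))
    energy = float(np.sum(p ** 2))
    return entropy, contrast, energy

flat = np.zeros((8, 8), dtype=np.uint8)            # degenerate "image"
rng = np.random.default_rng(1)
noisy = rng.integers(0, 256, (64, 64), dtype=np.uint8)  # cipher-like image
ent_f, con_f, ene_f = image_analyses(flat)
ent_n, con_n, ene_n = image_analyses(noisy)
```

A fuzzy soft-set criterion would then weight such scores across several S-boxes to rank their suitability, as the manuscript proposes.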
Isse, Kumiko; Lesniak, Andrew; Grama, Kedar; Roysam, Badrinath; Minervini, Martha I.; Demetris, Anthony J
2013-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. “-Omics” analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: a) spatial-temporal relationships; b) rare events/cells; c) complex structural context; and d) integration into a “systems” model. Nevertheless, except for immunostaining, no transformative advancements have “modernized” routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology - global “–omic” analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. PMID:22053785
Probing of multiple magnetic responses in magnetic inductors using atomic force microscopy.
Park, Seongjae; Seo, Hosung; Seol, Daehee; Yoon, Young-Hwan; Kim, Mi Yang; Kim, Yunseok
2016-02-08
Even though nanoscale analysis of magnetic properties is of significant interest, probing methods remain relatively underdeveloped given the significance of the technique, which has multiple potential applications. Here, we demonstrate an approach for probing various magnetic properties associated with eddy current, coil current and magnetic domains in magnetic inductors (MIs) using multidimensional magnetic force microscopy (MMFM). The MMFM images provide combined magnetic responses from the three different origins; however, each contribution to the MMFM response can be differentiated through analysis based on the bias dependence of the response. In particular, the bias-dependent MMFM images show locally different eddy current behavior, with values dependent on the type of materials that comprise the MI. This approach for probing magnetic responses can be further extended to the analysis of local physical features.
NASA Astrophysics Data System (ADS)
Iltis, G.; Caswell, T. A.; Dill, E.; Wilkins, S.; Lee, W. K.
2014-12-01
X-ray tomographic imaging of porous media has proven to be a valuable tool for investigating and characterizing the physical structure and state of both natural and synthetic porous materials, including glass bead packs, ceramics, soil and rock. Given that most synchrotron facilities have user programs which grant academic researchers access to facilities and x-ray imaging equipment free of charge, a key limitation or hindrance for small research groups interested in conducting x-ray imaging experiments is the financial cost associated with post-experiment data analysis. While the cost of high performance computing hardware continues to decrease, expenses associated with licensing commercial software packages for quantitative image analysis continue to increase, with current prices being as high as $24,000 USD, for a single user license. As construction of the Nation's newest synchrotron accelerator nears completion, a significant effort is being made here at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory (BNL), to provide an open-source, experiment-to-publication toolbox that reduces the financial and technical 'activation energy' required for performing sophisticated quantitative analysis of multidimensional porous media data sets, collected using cutting-edge x-ray imaging techniques. Implementation focuses on leveraging existing open-source projects and developing additional tools for quantitative analysis. We will present an overview of the software suite that is in development here at BNL including major design decisions, a demonstration of several test cases illustrating currently available quantitative tools for analysis and characterization of multidimensional porous media image data sets and plans for their future development.
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Fok, M.-C.; Fuselier, S.; Gladstone, G. R.; Green, J. L.; Fung, S. F.; Perez, J.; Reiff, P.; Roelof, E. C.; Wilson, G.
1998-01-01
Simultaneous, global measurement of major magnetospheric plasma systems will be performed for the first time with the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) Mission. The ring current, plasmasphere, and auroral systems will be imaged using energetic neutral and ultraviolet cameras. Quantitative remote measurement of the magnetosheath, plasmaspheric, and magnetospheric densities will be obtained through radio sounding by the Radio Plasma Imager. The IMAGE Mission will open a new era in global magnetospheric physics, while bringing with it new challenges in data analysis. An overview of the IMAGE Theory and Modeling team efforts will be presented, including the state of development of Internet tools that will be available to the science community for access and analysis of IMAGE observations.
A survey of MRI-based medical image analysis for brain tumor studies
NASA Astrophysics Data System (ADS)
Bauer, Stefan; Wiest, Roland; Nolte, Lutz-P.; Reyes, Mauricio
2013-07-01
MRI-based medical image analysis for brain tumor studies is gaining attention in recent times due to an increased need for efficient and objective evaluation of large amounts of data. While the pioneering approaches applying automated methods for the analysis of brain tumor images date back almost two decades, the current methods are becoming more mature and coming closer to routine clinical application. This review aims to provide a comprehensive overview by giving a brief introduction to brain tumors and imaging of brain tumors first. Then, we review the state of the art in segmentation, registration and modeling related to tumor-bearing brain images with a focus on gliomas. The objective in the segmentation is outlining the tumor including its sub-compartments and surrounding tissues, while the main challenge in registration and modeling is the handling of morphological changes caused by the tumor. The qualities of different approaches are discussed with a focus on methods that can be applied on standard clinical imaging protocols. Finally, a critical assessment of the current state is performed and future developments and trends are addressed, giving special attention to recent developments in radiological tumor assessment guidelines.
Multiple view image analysis of freefalling U.S. wheat grains for damage assessment
USDA-ARS?s Scientific Manuscript database
Currently, inspection of wheat in the United States for grade and class is performed by human visual analysis. This is a time-consuming operation, typically taking several minutes for each sample. Digital imaging research has addressed this issue over the past two decades, with success in recognition...
Optical disk processing of solar images.
NASA Astrophysics Data System (ADS)
Title, A.; Tarbell, T.
The current generation of space and ground-based experiments in solar physics produces many megabyte-sized image data arrays. Optical disk technology is the leading candidate for convenient analysis, distribution, and archiving of these data. The authors have been developing data analysis procedures which use both analog and digital optical disks for the study of solar phenomena.
NASA Astrophysics Data System (ADS)
Bianchetti, Raechel Anne
Remotely sensed images have become a ubiquitous part of our daily lives. From novice users, aiding in search and rescue missions using tools such as TomNod, to trained analysts, synthesizing disparate data to address complex problems like climate change, imagery has become central to geospatial problem solving. Expert image analysts are continually faced with rapidly developing sensor technologies and software systems. In response to these cognitively demanding environments, expert analysts develop specialized knowledge and analytic skills to address increasingly complex problems. This study identifies the knowledge, skills, and analytic goals of expert image analysts tasked with identification of land cover and land use change. Analysts participating in this research are currently working as part of a national level analysis of land use change, and are well versed with the use of TimeSync, forest science, and image analysis. The results of this study benefit current analysts as it improves their awareness of their mental processes used during the image interpretation process. The study also can be generalized to understand the types of knowledge and visual cues that analysts use when reasoning with imagery for purposes beyond land use change studies. Here a Cognitive Task Analysis framework is used to organize evidence from qualitative knowledge elicitation methods for characterizing the cognitive aspects of the TimeSync image analysis process. Using a combination of content analysis, diagramming, semi-structured interviews, and observation, the study highlights the perceptual and cognitive elements of expert remote sensing interpretation. Results show that image analysts perform several standard cognitive processes, but flexibly employ these processes in response to various contextual cues. Expert image analysts' ability to think flexibly during their analysis process was directly related to their amount of image analysis experience. 
Additionally, results show that the basic Image Interpretation Elements continue to be important despite technological augmentation of the interpretation process. These results are used to derive a set of design guidelines for developing geovisual analytic tools and training to support image analysis.
Retinal imaging analysis based on vessel detection.
Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila
2017-07-01
With the increasing advancement of digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmology care and treatment. In the current research, Retina Image Analysis (RIA) is developed for optometrists at the Eye Care Center in Management and Science University. This research aims to analyze the retina through vessel detection. The RIA assists in the analysis of retinal images, and specialists are served with various options such as saving, processing and analyzing retinal images through its advanced interface layout. Additionally, RIA assists in the selection of vessel segments, processing these vessels by calculating their diameter, standard deviation and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the methodology in developing this research. To conclude, Retina Image Analysis might help optometrists gain a better understanding when analyzing a patient's retina. Finally, the Retina Image Analysis procedure was developed using MATLAB (R2011b). Promising results are attained that are comparable to the state of the art. © 2017 Wiley Periodicals, Inc.
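As a small illustration of the kind of per-vessel measurements RIA reports (length, diameter, standard deviation), here is a hypothetical Python sketch; the actual tool is implemented in MATLAB and its internals are not described in the abstract, so the function, its inputs, and the sample values below are invented.

```python
import math

def vessel_stats(centreline, widths):
    """centreline: list of (x, y) points along a vessel segment;
    widths: vessel width (pixels) measured at each point.
    Returns (length, mean_diameter, std_diameter)."""
    # Segment length: sum of Euclidean distances between consecutive points.
    length = sum(math.dist(p, q) for p, q in zip(centreline, centreline[1:]))
    n = len(widths)
    mean_d = sum(widths) / n
    # Population standard deviation of the measured diameters.
    std_d = math.sqrt(sum((w - mean_d) ** 2 for w in widths) / n)
    return length, mean_d, std_d

# Hypothetical segment: two straight 5-pixel steps, diameters 4-6 px.
points = [(0, 0), (3, 4), (6, 8)]
diameters = [4.0, 5.0, 6.0]
length, mean_d, std_d = vessel_stats(points, diameters)
```

A real implementation would obtain the centreline and widths from a segmented vessel mask; the summary statistics themselves reduce to the simple formulas above.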
Automatic classification of minimally invasive instruments based on endoscopic image sequences
NASA Astrophysics Data System (ADS)
Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger
2009-02-01
Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation-techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects, the anatomical structures and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.
Emerging imaging tools for use with traumatic brain injury research.
Hunter, Jill V; Wilde, Elisabeth A; Tong, Karen A; Holshouser, Barbara A
2012-03-01
This article identifies emerging neuroimaging measures considered by the inter-agency Pediatric Traumatic Brain Injury (TBI) Neuroimaging Workgroup. This article attempts to address some of the potential uses of more advanced forms of imaging in TBI as well as highlight some of the current considerations and unresolved challenges of using them. We summarize emerging elements likely to gain more widespread use in the coming years, because of 1) their utility in diagnosis, prognosis, and understanding the natural course of degeneration or recovery following TBI, and potential for evaluating treatment strategies; 2) the ability of many centers to acquire these data with scanners and equipment that are readily available in existing clinical and research settings; and 3) advances in software that provide more automated, readily available, and cost-effective analysis methods for large scale data image analysis. These include multi-slice CT, volumetric MRI analysis, susceptibility-weighted imaging (SWI), diffusion tensor imaging (DTI), magnetization transfer imaging (MTI), arterial spin tag labeling (ASL), functional MRI (fMRI), including resting state and connectivity MRI, MR spectroscopy (MRS), and hyperpolarization scanning. However, we also include brief introductions to other specialized forms of advanced imaging that currently do require specialized equipment, for example, single photon emission computed tomography (SPECT), positron emission tomography (PET), electroencephalography (EEG), and magnetoencephalography (MEG)/magnetic source imaging (MSI). Finally, we identify some of the challenges that users of the emerging imaging CDEs may wish to consider, including quality control, performing multi-site and longitudinal imaging studies, and MR scanning in infants and children.
Barlow, Anders J; Portoles, Jose F; Sano, Naoko; Cumpson, Peter J
2016-10-01
The development of the helium ion microscope (HIM) enables the imaging of both hard, inorganic materials and soft, organic or biological materials. Advantages include outstanding topographical contrast, superior resolution down to <0.5 nm at high magnification, high depth of field, and no need for conductive coatings. The instrument relies on helium atom adsorption and ionization at a cryogenically cooled tip that is atomically sharp. Under ideal conditions this arrangement provides a beam of ions that is stable for days to weeks, with beam currents in the order of picoamperes. Over time, however, this stability is lost as gaseous contamination builds up in the source region, leading to adsorbed atoms of species other than helium, which ultimately results in beam current fluctuations. This manifests itself as horizontal stripe artifacts in HIM images. We investigate post-processing methods to remove these artifacts from HIM images, such as median filtering, Gaussian blurring, fast Fourier transforms, and principal component analysis. We arrive at a simple method for completely removing beam current fluctuation effects from HIM images while maintaining the full integrity of the information within the image.
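A minimal post-processing sketch of the stripe-removal idea discussed above, assuming a purely additive per-row offset; the paper compares several methods (median filtering, Gaussian blurring, FFTs, PCA), and this illustrates only the simplest median-based variant, not the authors' final pipeline.

```python
def remove_row_stripes(image):
    """image: list of rows (lists of pixel values).
    Beam-current fluctuations are modelled as a constant additive offset per
    row; each row is shifted so its median matches the global median,
    preserving overall brightness. Returns a corrected copy."""
    def median(vals):
        s = sorted(vals)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0

    row_medians = [median(row) for row in image]
    global_median = median(row_medians)
    return [[px - rm + global_median for px in row]
            for row, rm in zip(image, row_medians)]

# Example: a flat field of 10s with one corrupted (brighter) row.
img = [[10, 10, 10], [14, 14, 14], [10, 10, 10]]
fixed = remove_row_stripes(img)
```

Median-based offsets are preferred over means here because a bright feature crossing a row would otherwise bias the estimated stripe level.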
Oliver, A; Mendizabal, J A; Ripoll, G; Albertí, P; Purroy, A
2010-04-01
The SEUROP system is currently in use for carcass classification in Europe. Image analysis and other new technologies are being developed to enhance and supplement this classification system. After slaughtering, 91 carcasses of local Spanish beef breeds were weighed and classified according to the SEUROP system. Two digital photographs (a side and a dorsal view) were taken of the left carcass sides, and a total of 33 morphometric measurements (lengths, perimeters, areas) were made. Commercial butchering of these carcasses took place 24 h postmortem, and the different cuts were grouped according to four commercial meat cut quality categories: extra, first, second, and third. Multiple regression analysis of carcass weight and the SEUROP conformation score (x variables) on meat yield and the four commercial cut quality category yields (y variables) was performed as a measure of the accuracy of the SEUROP system. Stepwise regression analysis of carcass weight and the 33 morphometric image analysis measurements (x variables) and meat yield and yields of the four commercial cut quality categories (y variables) was carried out. Higher accuracy was achieved using image analysis than using only the current SEUROP conformation score. The regression coefficient values were between R² = 0.66 and R² = 0.93 (P < 0.001) for the SEUROP system and between R² = 0.81 and R² = 0.94 (P < 0.001) for the image analysis method. These results suggest that the image analysis method should be helpful as a means of supplementing and enhancing the SEUROP system for grading beef carcasses. © 2009 Elsevier Ltd. All rights reserved.
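As an illustration of the regression analyses described above, the following Python sketch fits an ordinary least-squares line and reports R²; the carcass weights and meat yields are invented, not the study's data, and the study itself used multiple and stepwise regression over many predictors.

```python
def fit_simple_regression(xs, ys):
    """Ordinary least squares y = a + b*x; returns (a, b, r_squared)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx                      # slope
    a = mean_y - b * mean_x            # intercept
    # R²: proportion of variance in y explained by the fitted line.
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    r_squared = 1.0 - ss_res / ss_tot
    return a, b, r_squared

# Hypothetical data: carcass weight (kg) vs meat yield (kg).
weights = [250, 280, 300, 320, 350]
yields_ = [175, 195, 212, 222, 246]
a, b, r2 = fit_simple_regression(weights, yields_)
```

Comparing R² across predictor sets, as the study does between conformation score and morphometric measurements, is then just a matter of refitting with different x variables.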
NASA Astrophysics Data System (ADS)
Driscoll, Brandon; Jaffray, David; Coolens, Catherine
2014-03-01
Purpose: To provide clinicians and researchers participating in multi-centre clinical trials with a central repository for large-volume dynamic imaging data as well as a set of tools for providing end-to-end testing and image analysis standards of practice. Methods: There are three main pieces to the data archiving and analysis system: the PACS server, the data analysis computer(s) and the high-speed networks that connect them. Each clinical trial is anonymized using a customizable anonymizer and is stored on a PACS only accessible by AE title access control. The remote analysis station consists of a single virtual machine per trial running on a powerful PC supporting multiple simultaneous instances. Imaging data management and analysis is performed within ClearCanvas Workstation® using custom designed plug-ins for kinetic modelling (The DCE-Tool®), quality assurance (The DCE-QA Tool) and RECIST. Results: A framework has been set up currently serving seven clinical trials spanning five hospitals, with three more trials to be added over the next six months. After initial rapid image transfer (>2 MB/s), all data analysis is done server-side, making it robust and rapid. This has provided the ability to perform computationally expensive operations such as voxel-wise kinetic modelling on very large data archives (>20 GB/50k images per patient) remotely with minimal end-user hardware. Conclusions: This system is currently in its proof-of-concept stage but has been used successfully to send and analyze data from remote hospitals. Next steps will involve scaling up the system with a more powerful PACS and multiple high-powered analysis machines as well as adding real-time review capabilities.
Structural MRI and Cognitive Correlates in Pest-control Personnel from Gulf War I
2009-04-01
Medicine where they will be reconstructed for morphometric analyses by the study imaging expert, Dr. Killiany. All the images will be transferred to... geometric design; assess ability to organize and construct Raw Score...MRI and morphometric analysis of the images. The results of the current study will be able to compare whether brain imaging differences exist
Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal
Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal
2013-01-01
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433
Wavelet analysis enables system-independent texture analysis of optical coherence tomography images.
Lingley-Papadopoulos, Colleen A; Loew, Murray H; Zara, Jason M
2009-01-01
Texture analysis for tissue characterization is a current area of optical coherence tomography (OCT) research. We discuss some of the differences between OCT systems and the effects those differences have on the resulting images and subsequent image analysis. In addition, as an example, two algorithms for the automatic recognition of bladder cancer are compared: one that was developed on a single system with no consideration for system differences, and one that was developed to address the issues associated with system differences. The first algorithm had a sensitivity of 73% and specificity of 69% when tested using leave-one-out cross-validation on data taken from a single system. When tested on images from another system with a different central wavelength, however, the method classified all images as cancerous regardless of the true pathology. By contrast, with the use of wavelet analysis and the removal of system-dependent features, the second algorithm reported sensitivity and specificity values of 87 and 58%, respectively, when trained on images taken with one imaging system and tested on images taken with another.
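As a rough illustration of wavelet-based texture features, here is a hand-rolled one-level 2-D Haar decomposition (not the authors' implementation, whose wavelet choice and feature set are not given in the abstract). Subband energies of this kind depend on local intensity *structure* rather than on absolute intensity calibration, which is why they are less tied to a particular OCT system.

```python
def haar_subband_energies(image):
    """One-level 2D Haar decomposition of an even-sized grayscale image
    (list of equal-length rows). Returns the mean energy (mean squared
    coefficient) of the LL/LH/HL/HH subbands as a texture feature set."""
    rows, cols = len(image), len(image[0])
    bands = {"LL": [], "LH": [], "HL": [], "HH": []}
    for i in range(0, rows, 2):
        for j in range(0, cols, 2):
            a, b = image[i][j], image[i][j + 1]
            c, d = image[i + 1][j], image[i + 1][j + 1]
            bands["LL"].append((a + b + c + d) / 4.0)  # local average
            bands["LH"].append((a - b + c - d) / 4.0)  # left-right detail
            bands["HL"].append((a + b - c - d) / 4.0)  # top-bottom detail
            bands["HH"].append((a - b - c + d) / 4.0)  # diagonal detail
    return {k: sum(v * v for v in vals) / len(vals)
            for k, vals in bands.items()}

# Vertical stripes: strong left-right (LH) detail, no top-bottom detail.
stripes = [[0, 9, 0, 9], [0, 9, 0, 9], [0, 9, 0, 9], [0, 9, 0, 9]]
features = haar_subband_energies(stripes)
```

In practice a library such as PyWavelets and deeper decompositions would be used, and features judged system-dependent would be discarded, as the second algorithm in the abstract does.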
Wavelet analysis enables system-independent texture analysis of optical coherence tomography images
NASA Astrophysics Data System (ADS)
Lingley-Papadopoulos, Colleen A.; Loew, Murray H.; Zara, Jason M.
2009-07-01
Texture analysis for tissue characterization is a current area of optical coherence tomography (OCT) research. We discuss some of the differences between OCT systems and the effects those differences have on the resulting images and subsequent image analysis. In addition, as an example, two algorithms for the automatic recognition of bladder cancer are compared: one that was developed on a single system with no consideration for system differences, and one that was developed to address the issues associated with system differences. The first algorithm had a sensitivity of 73% and specificity of 69% when tested using leave-one-out cross-validation on data taken from a single system. When tested on images from another system with a different central wavelength, however, the method classified all images as cancerous regardless of the true pathology. By contrast, with the use of wavelet analysis and the removal of system-dependent features, the second algorithm reported sensitivity and specificity values of 87 and 58%, respectively, when trained on images taken with one imaging system and tested on images taken with another.
IDIMS/GEOPAK: Users manual for a geophysical data display and analysis system
NASA Technical Reports Server (NTRS)
Libert, J. M.
1982-01-01
The application of an existing image analysis system to the display and analysis of geophysical data is described, and the potential for expanding the capabilities of such a system toward more advanced computer analytic and modeling functions is investigated. The major features of the IDIMS (Interactive Display and Image Manipulation System) and its applicability to image-type analysis of geophysical data are described. A basic geophysical data processing system was developed to permit the image representation, coloring, interdisplay and comparison of geophysical data sets using existing IDIMS functions and to provide for the production of hard copies of processed images. An instruction manual and documentation for the GEOPAK subsystem were produced. A training course for personnel in the use of IDIMS/GEOPAK was conducted. The effectiveness of the current IDIMS/GEOPAK system for geophysical data analysis was evaluated.
LANDSAT-4 image data quality analysis for energy related applications. [nuclear power plant sites
NASA Technical Reports Server (NTRS)
Wukelic, G. E. (Principal Investigator)
1983-01-01
No usable LANDSAT 4 TM data were obtained for the Hanford site in the Columbia Plateau region, but TM simulator data for a Virginia Electric Company nuclear power plant were used to test image processing algorithms. Principal component analyses of this data set clearly indicated that thermal plumes in surface waters used for reactor cooling would be discernible. Image processing and analysis programs were successfully tested using the 7-band Arkansas test scene, and preliminary analysis of TM data for the Savannah River Plant shows that current interactive image enhancement, analysis and integration techniques can be effectively used for LANDSAT 4 data. Thermal band data appear adequate for gross estimates of thermal changes occurring near operating nuclear facilities, especially in surface water bodies being used for reactor cooling purposes. Additional image processing software was written and tested which provides for more rapid and effective analysis of the 7-band TM data.
Breast MRI radiogenomics: Current status and research implications.
Grimm, Lars J
2016-06-01
Breast magnetic resonance imaging (MRI) radiogenomics is an emerging area of research that has the potential to directly influence clinical practice. Clinical MRI scanners today are capable of providing excellent temporal and spatial resolution, which allows extraction of numerous imaging features via human extraction approaches or complex computer vision algorithms. Meanwhile, advances in breast cancer genetics research have resulted in the identification of promising genes associated with cancer outcomes. In addition, validated genomic signatures have been developed that allow categorization of breast cancers into distinct molecular subtypes as well as prediction of the risk of cancer recurrence and response to therapy. Current radiogenomics research has been directed towards exploratory analysis of individual genes, understanding tumor biology, and developing imaging surrogates to genetic analysis, with the long-term goal of developing a meaningful tool for clinical care. The background of breast MRI radiogenomics research, image feature extraction techniques, approaches to radiogenomics research, and promising areas of investigation are reviewed. J. Magn. Reson. Imaging 2016;43:1269-1278. © 2015 Wiley Periodicals, Inc.
Isse, K; Lesniak, A; Grama, K; Roysam, B; Minervini, M I; Demetris, A J
2012-01-01
Conventional histopathology is the gold standard for allograft monitoring, but its value proposition is increasingly questioned. "-Omics" analysis of tissues, peripheral blood and fluids and targeted serologic studies provide mechanistic insights into allograft injury not currently provided by conventional histology. Microscopic biopsy analysis, however, provides valuable and unique information: (a) spatial-temporal relationships; (b) rare events/cells; (c) complex structural context; and (d) integration into a "systems" model. Nevertheless, except for immunostaining, no transformative advancements have "modernized" routine microscopy in over 100 years. Pathologists now team with hardware and software engineers to exploit remarkable developments in digital imaging, nanoparticle multiplex staining, and computational image analysis software to bridge the traditional histology-global "-omic" analyses gap. Included are side-by-side comparisons, objective biopsy finding quantification, multiplexing, automated image analysis, and electronic data and resource sharing. Current utilization for teaching, quality assurance, conferencing, consultations, research and clinical trials is evolving toward implementation for low-volume, high-complexity clinical services like transplantation pathology. Cost, complexities of implementation, fluid/evolving standards, and unsettled medical/legal and regulatory issues remain as challenges. Regardless, challenges will be overcome and these technologies will enable transplant pathologists to increase information extraction from tissue specimens and contribute to cross-platform biomarker discovery for improved outcomes. ©Copyright 2011 The American Society of Transplantation and the American Society of Transplant Surgeons.
NASA Technical Reports Server (NTRS)
Mendenhall, J. A.
2001-01-01
The stability of the EO-1 Advanced Land Imager dark current levels over the period of one-half orbit is investigated. A series of two-second dark current collections, over the course of 40 minutes, was performed during the first sixty days the instrument was in orbit. Analysis of these data indicates that only two dark current reference periods, obtained entering and exiting eclipse, are required to remove ALI dark current offsets for 99.9% of the focal plane to within 1.5 digital numbers for any observation on the solar-illuminated portion of the orbit.
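The two-reference-period correction described above can be sketched as follows. The linear interpolation between the eclipse-entry and eclipse-exit dark frames, and all pixel values and timings, are illustrative assumptions, not the actual ALI calibration procedure.

```python
def subtract_dark_current(frame, dark_in, dark_out, t, t_in, t_out):
    """Linearly interpolate the dark reference to observation time t and
    subtract it pixel-by-pixel. Frames are lists of rows; t_in and t_out
    are the times of the entry/exit dark collections."""
    w = (t - t_in) / (t_out - t_in)    # interpolation weight, 0..1
    return [[px - ((1 - w) * d_in + w * d_out)
             for px, d_in, d_out in zip(row, row_in, row_out)]
            for row, row_in, row_out in zip(frame, dark_in, dark_out)]

# One detector row: dark level drifts from 100 DN to 104 DN across the orbit.
dark_in = [[100.0, 100.0]]
dark_out = [[104.0, 104.0]]
observation = [[152.0, 153.0]]         # raw signal midway through the orbit
corrected = subtract_dark_current(observation, dark_in, dark_out,
                                  t=0.5, t_in=0.0, t_out=1.0)
```

The abstract's finding is that a scheme of this general shape, anchored at just two reference periods, keeps the residual offset within 1.5 DN for 99.9% of the focal plane.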
Nativ, Nir I; Chen, Alvin I; Yarmush, Gabriel; Henry, Scot D; Lefkowitch, Jay H; Klein, Kenneth M; Maguire, Timothy J; Schloss, Rene; Guarrera, James V; Berthiaume, Francois; Yarmush, Martin L
2014-02-01
Large-droplet macrovesicular steatosis (ld-MaS) in more than 30% of liver graft hepatocytes is a major risk factor for liver transplantation. An accurate assessment of the ld-MaS percentage is crucial for determining liver graft transplantability, which is currently based on pathologists' evaluations of hematoxylin and eosin (H&E)-stained liver histology specimens, with the predominant criteria being the relative size of the lipid droplets (LDs) and their propensity to displace a hepatocyte's nucleus to the cell periphery. Automated image analysis systems aimed at objectively and reproducibly quantifying ld-MaS do not accurately differentiate large LDs from small-droplet macrovesicular steatosis and do not take into account LD-mediated nuclear displacement; this leads to a poor correlation with pathologists' assessments. Here we present an improved image analysis method that incorporates nuclear displacement as a key image feature for segmenting and classifying ld-MaS from H&E-stained liver histology slides. 52,000 LDs in 54 digital images from 9 patients were analyzed, and the performance of the proposed method was compared against the performance of current image analysis methods and the ld-MaS percentage evaluations of 2 trained pathologists from different centers. We show that combining nuclear displacement and LD size information significantly improves the separation between large and small macrovesicular LDs (specificity = 93.7%, sensitivity = 99.3%) and the correlation with pathologists' ld-MaS percentage assessments (linear regression coefficient of determination = 0.97). This performance vastly exceeds that of other automated image analyzers, which typically underestimate or overestimate pathologists' ld-MaS scores. This work demonstrates the potential of automated ld-MaS analysis in monitoring the steatotic state of livers. 
The image analysis principles demonstrated here may help to standardize ld-MaS scores among centers and ultimately help in the process of determining liver graft transplantability. © 2013 American Association for the Study of Liver Diseases.
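A toy illustration of the paper's key idea, combining droplet size with nuclear displacement when labelling a droplet; the threshold, feature representation, and sample values are invented for illustration and are not the published algorithm, which works on segmented H&E image regions.

```python
def classify_droplet(droplet_area, cell_area, nucleus_displaced,
                     size_ratio_threshold=0.5):
    """Label a lipid droplet as large-droplet macrovesicular steatosis
    (ld-MaS) when it fills much of the hepatocyte AND pushes the nucleus
    to the cell periphery; otherwise call it small-droplet (sd-MaS).
    The 0.5 area-ratio threshold is a made-up value."""
    large = droplet_area / cell_area >= size_ratio_threshold
    return "ld-MaS" if (large and nucleus_displaced) else "sd-MaS"

def ld_mas_percentage(droplets):
    """droplets: list of (droplet_area, cell_area, nucleus_displaced)."""
    labels = [classify_droplet(*d) for d in droplets]
    return 100.0 * labels.count("ld-MaS") / len(labels)

# Four hypothetical hepatocytes; only the 1st and 3rd satisfy both cues.
cells = [(60, 100, True), (10, 100, False), (55, 100, True), (70, 100, False)]
pct = ld_mas_percentage(cells)
```

The point of the rule is visible in the fourth cell: a large droplet that does not displace the nucleus is not counted, which is exactly the distinction size-only analyzers miss.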
Plasmonic Imaging of Electrochemical Reactions of Single Nanoparticles.
Fang, Yimin; Wang, Hui; Yu, Hui; Liu, Xianwei; Wang, Wei; Chen, Hong-Yuan; Tao, N J
2016-11-15
Electrochemical reactions are involved in many natural phenomena, and are responsible for various applications, including energy conversion and storage, material processing and protection, and chemical detection and analysis. An electrochemical reaction is accompanied by electron transfer between a chemical species and an electrode. For this reason, it has been studied by measuring current, charge, or related electrical quantities. This approach has led to the development of various electrochemical methods, which have played an essential role in the understanding and applications of electrochemistry. While powerful, most of the traditional methods lack spatial and temporal resolutions desired for studying heterogeneous electrochemical reactions on electrode surfaces and in nanoscale materials. To overcome the limitations, scanning probe microscopes have been invented to map local electrochemical reactions with nanometer resolution. Examples include the scanning electrochemical microscope and scanning electrochemical cell microscope, which directly image local electrochemical reaction current using a scanning electrode or pipet. The use of a scanning probe in these microscopes provides high spatial resolution, but at the expense of temporal resolution and throughput. This Account discusses an alternative approach to study electrochemical reactions. Instead of measuring electron transfer electrically, it detects the accompanying changes in the reactant and product concentrations on the electrode surface optically via surface plasmon resonance (SPR). SPR is highly surface sensitive, and it provides quantitative information on the surface concentrations of reactants and products vs time and electrode potential, from which local reaction kinetics can be analyzed and quantified. 
The plasmonic approach allows imaging of local electrochemical reactions with high temporal resolution and sensitivity, making it attractive for studying electrochemical reactions in biological systems and nanoscale materials with high throughput. The plasmonic approach has two imaging modes: electrochemical current imaging and interfacial impedance imaging. The former images local electrochemical current associated with electrochemical reactions (faradaic current), and the latter maps local interfacial impedance, including nonfaradaic contributions (e.g., double layer charging). The plasmonic imaging technique can perform voltammetry (cyclic or square wave) in an analogous manner to the traditional electrochemical methods. It can also be integrated with bright field, dark field, and fluorescence imaging capabilities in one optical setup to provide additional capabilities. To date, the plasmonic imaging technique has found various applications, including mapping of heterogeneous surface reactions, analysis of trace substances, detection of catalytic reactions, and measurement of graphene quantum capacitance. The plasmonic and other emerging optical imaging techniques (e.g., dark field and fluorescence microscopy), together with the scanning probe-based electrochemical imaging and single nanoparticle analysis techniques, provide new capabilities for one to study single nanoparticle electrochemistry with unprecedented spatial and temporal resolutions. In this Account, we focus on imaging of electrochemical reactions at single nanoparticles.
Setting Standards for Reporting and Quantification in Fluorescence-Guided Surgery.
Hoogstins, Charlotte; Burggraaf, Jan Jaap; Koller, Marjory; Handgraaf, Henricus; Boogerd, Leonora; van Dam, Gooitzen; Vahrmeijer, Alexander; Burggraaf, Jacobus
2018-05-29
Intraoperative fluorescence imaging (FI) is a promising technique that could potentially guide oncologic surgeons toward more radical resections and thus improve clinical outcome. Despite the increase in the number of clinical trials, fluorescent agents and imaging systems for intraoperative FI, a standardized approach for imaging system performance assessment and post-acquisition image analysis is currently unavailable. We conducted a systematic, controlled comparison between two commercially available imaging systems using a novel calibration device for FI systems and various fluorescent agents. In addition, we analyzed fluorescence images from previous studies to evaluate signal-to-background ratio (SBR) and determinants of SBR. Using the calibration device, imaging system performance could be quantified and compared, exposing relevant differences in sensitivity. Image analysis demonstrated a profound influence of background noise and the selection of the background on SBR. In this article, we suggest clear approaches for the quantification of imaging system performance assessment and post-acquisition image analysis, attempting to set new standards in the field of FI.
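The influence of background selection on SBR noted above can be seen in a minimal sketch; all intensity values and ROI choices here are hypothetical, and real analyses operate on pixel regions selected in the acquired fluorescence image.

```python
def sbr(signal_pixels, background_pixels):
    """Signal-to-background ratio: mean signal-ROI intensity divided by
    mean background-ROI intensity."""
    mean_sig = sum(signal_pixels) / len(signal_pixels)
    mean_bg = sum(background_pixels) / len(background_pixels)
    return mean_sig / mean_bg

# Same tumour ROI, two candidate background ROIs (made-up intensities).
tumour = [200, 220, 210, 230]
dark_background = [10, 12, 11, 9]      # quiet region far from the tumour
bright_background = [50, 55, 45, 60]   # region with autofluorescence
```

Because the denominator changes severalfold between the two background choices, the reported SBR does too, which is why the article argues for standardizing how the background region is defined.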
Image improvement and three-dimensional reconstruction using holographic image processing
NASA Technical Reports Server (NTRS)
Stroke, G. W.; Halioua, M.; Thon, F.; Willasch, D. H.
1977-01-01
Holographic computing principles make possible image improvement and synthesis in many cases of current scientific and engineering interest. Examples are given for the improvement of resolution in electron microscopy and 3-D reconstruction in electron microscopy and X-ray crystallography, following an analysis of optical versus digital computing in such applications.
Application of He ion microscopy for material analysis
NASA Astrophysics Data System (ADS)
Altmann, F.; Simon, M.; Klengel, R.
2009-05-01
Helium ion beam microscopy (HIM) is a new high-resolution imaging technique. The use of helium ions instead of electrons enables non-destructive imaging combined with contrasts quite similar to those from gallium ion beam imaging. The use of very low probe currents and convenient charge compensation using low-energy electrons allows imaging of non-conductive samples without a conductive coating. A microelectronic sample with gold/aluminum interconnects and polymer electronic devices were chosen to evaluate HIM in comparison with scanning electron microscopy (SEM). The aim was to identify key applications of HIM in material analysis. The main focus was on complementary contrast mechanisms and imaging of non-conductive samples.
Automated X-ray image analysis for cargo security: Critical review and future promise.
Rogers, Thomas W; Jaccard, Nicolas; Morton, Edward J; Griffin, Lewis D
2017-01-01
We review the relatively immature field of automated image analysis for X-ray cargo imagery. There is increasing demand for automated analysis methods that can assist in the inspection and selection of containers, due to the ever-growing volumes of traded cargo and the increasing concerns that customs- and security-related threats are being smuggled across borders by organised crime and terrorist networks. We split the field into the classical pipeline of image preprocessing and image understanding. Preprocessing includes: image manipulation; quality improvement; Threat Image Projection (TIP); and material discrimination and segmentation. Image understanding includes: Automated Threat Detection (ATD); and Automated Contents Verification (ACV). We identify several gaps in the literature that need to be addressed and propose ideas for future research. Where the current literature is sparse we borrow from the single-view, multi-view, and CT X-ray baggage domains, which have some characteristics in common with X-ray cargo.
[Present status and trend of heart fluid mechanics research based on medical image analysis].
Gan, Jianhong; Yin, Lixue; Xie, Shenghua; Li, Wenhua; Lu, Jing; Luo, Anguo
2014-06-01
After introducing the current main methods in heart fluid mechanics research, we examine the characteristics and weaknesses of three primary analysis methods based on magnetic resonance imaging, color Doppler ultrasound, and grayscale ultrasound images, respectively. It is pointed out that particle image velocimetry (PIV), speckle tracking, and block matching share the same nature: all three algorithms adopt block correlation. Further analysis shows that, with the development of information technology and sensors, future research in heart fluid mechanics will focus on the energy transfer process of cardiac blood flow, the characteristics of the chamber wall in relation to blood flow, and fluid-structure interaction.
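The block-correlation idea shared by PIV, speckle tracking, and block matching can be sketched as a template search between two frames. This toy version uses sum-of-squared-differences in place of the normalised correlation measures used in practice, and all frame contents are invented.

```python
def match_block(frame1, frame2, top, left, size, search):
    """Return the (dy, dx) displacement of the size x size block at
    (top, left) in frame1 that best matches frame2, searched over a
    (2*search+1)^2 window. Frames are lists of rows of intensities."""
    def ssd(dy, dx):
        # Sum of squared differences between the block and its shifted copy.
        return sum((frame1[top + i][left + j]
                    - frame2[top + dy + i][left + dx + j]) ** 2
                   for i in range(size) for j in range(size))
    candidates = [(dy, dx) for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)]
    return min(candidates, key=lambda d: ssd(*d))

# A bright 2x2 blob that moves down by exactly one row between frames.
frame1 = [[0] * 8 for _ in range(8)]
frame2 = [[0] * 8 for _ in range(8)]
for j in (2, 3):
    frame1[2][j] = frame1[3][j] = 9    # blob in frame 1
    frame2[3][j] = frame2[4][j] = 9    # same blob, shifted one row down
shift = match_block(frame1, frame2, top=2, left=2, size=2, search=1)
```

Repeating the search over a grid of blocks yields a velocity field, which is the common core of the three methods the abstract compares.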
Deep Learning in Nuclear Medicine and Molecular Imaging: Current Perspectives and Future Directions.
Choi, Hongyoon
2018-04-01
Recent advances in deep learning have impacted various scientific and industrial fields. Due to the rapid application of deep learning to biomedical data, molecular imaging has also started to adopt this technique. In this regard, it is expected that deep learning will potentially affect the roles of molecular imaging experts as well as clinical decision making. This review first offers a basic overview of deep learning, particularly for image data analysis, to give nuclear medicine physicians and researchers the necessary background. Because of the unique characteristics and distinctive aims of various types of molecular imaging, deep learning applications can differ from those in other fields. In this context, the review deals with current perspectives on deep learning in molecular imaging, particularly in terms of the development of biomarkers. Finally, future challenges of deep learning applications for molecular imaging and the future roles of experts in molecular imaging are discussed.
Surface analysis of lipids by mass spectrometry: more than just imaging.
Ellis, Shane R; Brown, Simon H; In Het Panhuis, Marc; Blanksby, Stephen J; Mitchell, Todd W
2013-10-01
Mass spectrometry is now an indispensable tool for lipid analysis and is arguably the driving force in the renaissance of lipid research. In its various forms, mass spectrometry is uniquely capable of resolving the extensive compositional and structural diversity of lipids in biological systems. Furthermore, it provides the ability to accurately quantify molecular-level changes in lipid populations associated with changes in metabolism and environment; bringing lipid science to the "omics" age. The recent explosion of mass spectrometry-based surface analysis techniques is fuelling further expansion of the lipidomics field. This is evidenced by the numerous papers published on the subject of mass spectrometric imaging of lipids in recent years. While imaging mass spectrometry provides new and exciting possibilities, it is but one of the many opportunities direct surface analysis offers the lipid researcher. In this review we describe the current state-of-the-art in the direct surface analysis of lipids with a focus on tissue sections, intact cells and thin-layer chromatography substrates. The suitability of these different approaches towards analysis of the major lipid classes along with their current and potential applications in the field of lipid analysis are evaluated. Copyright © 2013 Elsevier Ltd. All rights reserved.
Emerging Imaging Tools for Use with Traumatic Brain Injury Research
Wilde, Elisabeth A.; Tong, Karen A.; Holshouser, Barbara A.
2012-01-01
Abstract This article identifies emerging neuroimaging measures considered by the inter-agency Pediatric Traumatic Brain Injury (TBI) Neuroimaging Workgroup. This article attempts to address some of the potential uses of more advanced forms of imaging in TBI as well as highlight some of the current considerations and unresolved challenges of using them. We summarize emerging elements likely to gain more widespread use in the coming years, because of 1) their utility in diagnosis, prognosis, and understanding the natural course of degeneration or recovery following TBI, and potential for evaluating treatment strategies; 2) the ability of many centers to acquire these data with scanners and equipment that are readily available in existing clinical and research settings; and 3) advances in software that provide more automated, readily available, and cost-effective analysis methods for large scale data image analysis. These include multi-slice CT, volumetric MRI analysis, susceptibility-weighted imaging (SWI), diffusion tensor imaging (DTI), magnetization transfer imaging (MTI), arterial spin tag labeling (ASL), functional MRI (fMRI), including resting state and connectivity MRI, MR spectroscopy (MRS), and hyperpolarization scanning. However, we also include brief introductions to other specialized forms of advanced imaging that currently do require specialized equipment, for example, single photon emission computed tomography (SPECT), positron emission tomography (PET), electroencephalography (EEG), and magnetoencephalography (MEG)/magnetic source imaging (MSI). Finally, we identify some of the challenges that users of the emerging imaging CDEs may wish to consider, including quality control, performing multi-site and longitudinal imaging studies, and MR scanning in infants and children. PMID:21787167
Digital image processing and analysis for activated sludge wastewater treatment.
Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed
2015-01-01
Activated sludge systems are generally used in wastewater treatment plants for processing domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These laboratory tests take many hours to yield a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state. Characterization by image processing and analysis is done by correlating the time evolution of parameters extracted by image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in the specific context of activated sludge. In the latter part, additional procedures such as z-stacking and image stitching, not previously used in the context of activated sludge, are introduced for wastewater image preprocessing. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, image analysis based morphological parameters and their correlation with monitoring and prediction of activated sludge are discussed. It is observed that image analysis can play a very useful role in the monitoring of activated sludge wastewater treatment plants.
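The segmentation and morphological-parameter step described above can be sketched roughly as follows (assuming scipy; the function name and the simple global threshold are illustrative, not the chapter's method):

```python
import numpy as np
from scipy import ndimage

def floc_morphology(gray, threshold):
    """Segment bright flocs by a global threshold, then report the
    object count, per-object areas (pixels) and equivalent diameters."""
    binary = gray > threshold
    labels, n = ndimage.label(binary)            # 4-connected components
    areas = np.asarray(ndimage.sum(binary, labels, index=range(1, n + 1)))
    eq_diam = np.sqrt(4.0 * areas / np.pi)       # circle of equal area
    return n, areas, eq_diam
```

Parameters such as equivalent diameter, aspect ratio and filament length derived this way are the ones typically correlated against TSSol, SVI and COD time series.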
A quantitative comparison of two methods to correct eddy current-induced distortions in DT-MRI.
Muñoz Maniega, Susana; Bastin, Mark E; Armitage, Paul A
2007-04-01
Eddy current-induced geometric distortions of single-shot, diffusion-weighted, echo-planar (DW-EP) images are a major confounding factor to the accurate determination of water diffusion parameters in diffusion tensor MRI (DT-MRI). Previously, it has been suggested that these geometric distortions can be removed from brain DW-EP images using affine transformations determined from phantom calibration experiments using iterative cross-correlation (ICC). Since this approach was first described, a number of image-based registration methods have become available that can also correct eddy current-induced distortions in DW-EP images. However, as yet no study has investigated whether separate eddy current calibration or image-based registration provides the most accurate way of removing these artefacts from DT-MRI data. Here we compare how ICC phantom calibration and affine FLIRT (http://www.fmrib.ox.ac.uk), a popular image-based multi-modal registration method that can correct both eddy current-induced distortions and bulk subject motion, perform when registering DW-EP images acquired with different slice thicknesses (2.8 and 5 mm) and b-values (1000 and 3000 s/mm(2)). With the use of consistency testing, it was found that ICC was a more robust algorithm for correcting eddy current-induced distortions than affine FLIRT, especially at high b-value and small slice thickness. In addition, principal component analysis demonstrated that the combination of ICC phantom calibration (to remove eddy current-induced distortions) with rigid body FLIRT (to remove bulk subject motion) provided a more accurate registration of DT-MRI data than that achieved by affine FLIRT.
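The affine model underlying both ICC calibration and affine FLIRT treats each DW-EP slice as scaled, sheared and translated along the phase-encode axis. A minimal sketch of applying such a correction (assuming scipy; `undistort_epi` and its parameterization are hypothetical, not either tool's implementation):

```python
import numpy as np
from scipy import ndimage

def undistort_epi(image, scale, shear, translation):
    """Undo an eddy-current distortion modelled, per slice, as a scale,
    shear and translation along the phase-encode (row) axis."""
    matrix = np.array([[scale, shear],
                       [0.0,   1.0]])
    offset = np.array([translation, 0.0])
    # affine_transform maps each output coordinate through (matrix,
    # offset) into the input image, i.e. it pulls the image back
    # through the forward distortion model.
    return ndimage.affine_transform(image, matrix, offset, order=1)
```

In ICC-style calibration the three parameters are estimated once per diffusion direction from phantom scans; image-based methods instead estimate a full affine per volume by optimizing a similarity measure.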
A novel image database analysis system for maintenance of transportation facilities.
DOT National Transportation Integrated Search
2009-01-01
The current project was funded by MIOH-UTC in the Spring of 2008 to investigate efficient maintenance methods for transportation facilities. To achieve the objectives of the project, the PIs undertook the research of various technologies of image...
a Cognitive Approach to Teaching a Graduate-Level Geobia Course
NASA Astrophysics Data System (ADS)
Bianchetti, Raechel A.
2016-06-01
Remote sensing image analysis training occurs both in the classroom and the research lab. Classroom education for traditional pixel-based image analysis has been standardized across college curricula. However, with the increasing interest in Geographic Object-Based Image Analysis (GEOBIA), there is a need to develop classroom instruction for this method of image analysis. While traditional remote sensing courses emphasize the expansion of skills and knowledge related to the use of computer-based analysis, GEOBIA courses should also examine the cognitive factors underlying visual interpretation. The current paper provides an initial analysis of the development, implementation, and outcomes of a GEOBIA course that considers not only the computational methods of GEOBIA but also the cognitive factors of expertise that such software attempts to replicate. Finally, a reflection on the first instantiation of this course is presented, in addition to plans for the development of an open-source repository for course materials.
NASA Astrophysics Data System (ADS)
Thompson, John D.; Chakraborty, Dev P.; Szczepura, Katy; Vamvakas, Ioannis; Tootell, Andrew; Manning, David J.; Hogg, Peter
2015-03-01
Purpose: To investigate the dose saving potential of iterative reconstruction (IR) in a computed tomography (CT) examination of the thorax. Materials and Methods: An anthropomorphic chest phantom containing various configurations of simulated lesions (5, 8, 10 and 12 mm; +100, -630 and -800 Hounsfield Units, HU) was imaged on a modern CT system over a tube current range (20, 40, 60 and 80 mA). Images were reconstructed with IR and filtered back projection (FBP). An ATOM 701D (CIRS, Norfolk, VA) dosimetry phantom was used to measure organ dose. Effective dose was calculated. Eleven observers (15.11 +/- 8.75 years of experience) completed a free-response study, localizing lesions in 544 single CT image slices. A modified jackknife alternative free-response receiver operating characteristic (JAFROC) analysis was completed to look for a significant effect of two factors: reconstruction method and tube current. Alpha was set at 0.05 to control the Type I error in this study. Results: For modified JAFROC analysis of reconstruction method there was no statistically significant difference in lesion detection performance between FBP and IR when figures-of-merit were averaged over tube current (F(1,10) = 0.08, p = 0.789). For tube current analysis, significant differences were revealed between multiple pairs of tube current settings (F(3,10) = 16.96, p < 0.001) when averaged over image reconstruction method. Conclusion: The free-response study suggests that lesion detection can be optimized at 40 mA in this phantom model, a measured effective dose of 0.97 mSv. In high-contrast regions the diagnostic value of IR, compared to FBP, is less clear.
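The effective-dose calculation mentioned above is, in ICRP style, a tissue-weighted sum of organ equivalent doses. A minimal sketch (the weights in the example below are illustrative placeholders, not the values used in this study):

```python
def effective_dose(organ_dose_msv, tissue_weight):
    """Effective dose as the tissue-weighted sum of organ equivalent
    doses; for photons the radiation weighting factor is 1, so organ
    absorbed dose in mGy maps directly to equivalent dose in mSv."""
    return sum(tissue_weight[t] * organ_dose_msv[t] for t in tissue_weight)
```

In practice the ICRP tissue weighting factors (which sum to 1 over all tissues) are used with organ doses measured by dosimeters placed in the anthropomorphic phantom.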
ERIC Educational Resources Information Center
Morin, Alexandre J. S.; Maiano, Christophe; Marsh, Herbert W.; Janosz, Michel; Nagengast, Benjamin
2011-01-01
Self-esteem and body image are central to coping successfully with the developmental challenges of adolescence. However, the current knowledge surrounding self-esteem and body image is fraught with controversy. This study attempts to clarify some of them by addressing three questions: (1) Are the intraindividual developmental trajectories of…
Imaging fast electrical activity in the brain with electrical impedance tomography
Aristovich, Kirill Y.; Packham, Brett C.; Koo, Hwan; Santos, Gustavo Sato dos; McEvoy, Andy; Holder, David S.
2016-01-01
Imaging of neuronal depolarization in the brain is a major goal in neuroscience, but no technique currently exists that could image neural activity over milliseconds throughout the whole brain. Electrical impedance tomography (EIT) is an emerging medical imaging technique which can produce tomographic images of impedance changes with non-invasive surface electrodes. We report EIT imaging of impedance changes in rat somatosensory cerebral cortex with a resolution of 2 ms and < 200 μm during evoked potentials using epicortical arrays with 30 electrodes. Images were validated with local field potential recordings and current source-sink density analysis. Our results demonstrate that EIT can image neural activity in a volume 7 × 5 × 2 mm in somatosensory cerebral cortex with reduced invasiveness, greater resolution and imaging volume than other methods. Modeling indicates similar resolutions are feasible throughout the entire brain so this technique, uniquely, has the potential to image functional connectivity of cortical and subcortical structures. PMID:26348559
2013-09-01
existing MR scanning systems providing the ability to visualize structures that are impossible with current methods. Using techniques to concurrently...and unique system for analysis of affected brain regions and coupled with other imaging techniques and molecular measurements holds significant...
Kinoshita, Manabu; Sakai, Mio; Arita, Hideyuki; Shofuda, Tomoko; Chiba, Yasuyoshi; Kagawa, Naoki; Watanabe, Yoshiyuki; Hashimoto, Naoya; Fujimoto, Yasunori; Yoshimine, Toshiki; Nakanishi, Katsuyuki; Kanemura, Yonehiro
2016-01-01
Reports have suggested that tumor textures presented on T2-weighted images correlate with the genetic status of glioma. Therefore, development of an image analysis framework capable of objective, high-throughput image texture analysis for large-scale image data collections is needed. The current study aimed to address the development of such a framework by introducing two novel parameters for image texture on T2-weighted images, i.e., Shannon entropy and Prewitt filtering. Twenty-two WHO grade 2 and 28 grade 3 glioma patients with available pre-surgical MRI and known IDH1 mutation status were collected. Heterogeneous lesions showed statistically higher Shannon entropy than homogeneous lesions (p = 0.006), and ROC curve analysis proved that Shannon entropy on T2WI was a reliable indicator for discrimination of homogeneous and heterogeneous lesions (p = 0.015, AUC = 0.73). Lesions with well-defined borders exhibited statistically higher Edge mean and Edge median values using Prewitt filtering than those with vague lesion borders (p = 0.0003 and p = 0.0005, respectively). ROC curve analysis also proved that both Edge mean and median values were promising indicators for discrimination of lesions with vague and well-defined borders, and both performed in a comparable manner (p = 0.0002, AUC = 0.81 and p < 0.0001, AUC = 0.83, respectively). Finally, IDH1 wild type gliomas showed statistically lower Shannon entropy on T2WI than IDH1 mutated gliomas (p = 0.007), but no difference was observed between IDH1 wild type and mutated gliomas in Edge median values using Prewitt filtering. The current study introduced two image metrics that reflect lesion texture described on T2WI. These two metrics were validated against readings of a neuro-radiologist who was blinded to the results. This observation will facilitate further use of this technique in future large-scale image analysis of glioma.
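The two metrics can be sketched as follows (assuming numpy/scipy; the histogram bin count is an illustrative choice, and this is a reconstruction from the abstract, not the authors' code):

```python
import numpy as np
from scipy import ndimage

def t2_texture_metrics(image, bins=64):
    """Shannon entropy of the grey-level histogram (heterogeneity) and
    mean/median Prewitt gradient magnitude (border sharpness)."""
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))                 # in bits
    gx = ndimage.prewitt(image.astype(float), axis=0)
    gy = ndimage.prewitt(image.astype(float), axis=1)
    mag = np.hypot(gx, gy)
    return entropy, mag.mean(), np.median(mag)
```

A perfectly homogeneous lesion has zero histogram entropy and zero gradient magnitude; heterogeneous texture raises the entropy, and sharp borders raise the Prewitt edge statistics.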
Image Analysis via Fuzzy-Reasoning Approach: Prototype Applications at NASA
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A.; Klinko, Steven J.
2004-01-01
A set of imaging techniques based on the Fuzzy Reasoning (FR) approach was built for NASA at Kennedy Space Center (KSC) to perform complex real-time visual-related safety prototype tasks, such as detection and tracking of moving Foreign Object Debris (FOD) during NASA Space Shuttle liftoff and visual anomaly detection on slidewires used in the emergency egress system for the Space Shuttle at the launch pad. The system has also proved its potential in enhancing X-ray images used to screen hard-covered items, leading to better visualization. The system's capability was used as well during the image analysis of the Space Shuttle Columbia accident. These FR-based imaging techniques include novel proprietary adaptive image segmentation, image edge extraction, and image enhancement. A Probabilistic Neural Network (PNN) scheme available from the NeuroShell(TM) Classifier and optimized via a Genetic Algorithm (GA) was also used along with this set of novel imaging techniques to add powerful learning and image classification capabilities. Prototype applications built using these techniques have received NASA Space Awards, including a Board Action Award, and are currently being filed for patents by NASA; they are being offered for commercialization through the Research Triangle Institute (RTI), an internationally recognized corporation in scientific research and technology development. Companies from different fields, including security, medical, text digitization, and aerospace, are currently in the process of licensing these technologies from NASA.
Lee, Byung Il; Oh, Suk Hoon; Woo, Eung Je; Lee, Soo Yeol; Cho, Min Hyoung; Kwon, Ohin; Seo, Jin Keun; Lee, June-Yub; Baek, Woon Sik
2003-07-07
In magnetic resonance electrical impedance tomography (MREIT), we try to reconstruct a cross-sectional resistivity (or conductivity) image of a subject. When we inject a current through surface electrodes, it generates a magnetic field. Using a magnetic resonance imaging (MRI) scanner, we can obtain the induced magnetic flux density from MR phase images of the subject. We use recessed electrodes to avoid undesirable artefacts near electrodes in measuring magnetic flux densities. An MREIT image reconstruction algorithm produces cross-sectional resistivity images utilizing the measured internal magnetic flux density in addition to boundary voltage data. In order to develop such an image reconstruction algorithm, we need a three-dimensional forward solver. Given injection currents as boundary conditions, the forward solver described in this paper computes voltage and current density distributions using the finite element method (FEM). Then, it calculates the magnetic flux density within the subject using the Biot-Savart law and FEM. The performance of the forward solver is analysed and found to be sufficient for use in MREIT for resistivity image reconstruction as well as experimental design and validation. The forward solver may find other applications where one needs to compute voltage, current density and magnetic flux density distributions all within a volume conductor.
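The Biot-Savart step of such a forward solver can be sketched for a current path discretized into straight segments (assuming numpy; the names are illustrative, and the paper's actual solver combines this with FEM over a volume conductor):

```python
import numpy as np

def biot_savart_bz(points, seg_mid, seg_dl, current):
    """z-component of B at each field point, from a current path
    discretized into straight segments (midpoints seg_mid, vectors
    seg_dl), via dB = mu0*I/(4*pi) * dl x r / |r|^3."""
    mu0 = 4e-7 * np.pi
    bz = np.empty(len(points))
    for i, p in enumerate(points):
        r = p - seg_mid                          # segment -> field point
        cross_z = np.cross(seg_dl, r)[:, 2]
        bz[i] = mu0 * current / (4 * np.pi) * np.sum(
            cross_z / np.linalg.norm(r, axis=1) ** 3)
    return bz
```

A quick sanity check is the analytic field at the center of a unit circular loop, mu0*I/(2a), which the discretized sum should approach as the segment count grows.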
NASA Technical Reports Server (NTRS)
Masuoka, E.; Rose, J.; Quattromani, M.
1981-01-01
Recent developments related to microprocessor-based personal computers have made low-cost digital image processing systems a reality. Image analysis systems built around these microcomputers provide color image displays for images as large as 256 by 240 pixels in sixteen colors. Descriptive statistics can be computed for portions of an image, and supervised image classification can be obtained. The systems support Basic, Fortran, Pascal, and assembler language. A description is provided of a system which is representative of the new microprocessor-based image processing systems currently on the market. While small systems may never be truly independent of larger mainframes, because they lack 9-track tape drives, the independent processing power of the microcomputers will help alleviate some of the turn-around time problems associated with image analysis and display on the larger multiuser systems.
Automated processing of zebrafish imaging data: a survey.
Mikut, Ralf; Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A; Kausler, Bernhard X; Ledesma-Carbayo, María J; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine
2013-09-01
Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines.
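As one concrete example of the image-based read-outs surveyed here, heartbeat detection can be reduced to finding the dominant frequency of a frame-wise mean-intensity trace over the heart region (a minimal sketch, assuming numpy; not a specific tool from the survey):

```python
import numpy as np

def heartbeat_bpm(mean_intensity, fps):
    """Heart rate from a periodic frame-wise mean-intensity trace,
    taken as the dominant peak of its amplitude spectrum."""
    x = np.asarray(mean_intensity, dtype=float)
    x -= x.mean()
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
    return 60.0 * peak
```

Real pipelines add motion compensation and region selection before this step, but the frequency-domain reduction is the common core.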
Automated Processing of Zebrafish Imaging Data: A Survey
Dickmeis, Thomas; Driever, Wolfgang; Geurts, Pierre; Hamprecht, Fred A.; Kausler, Bernhard X.; Ledesma-Carbayo, María J.; Marée, Raphaël; Mikula, Karol; Pantazis, Periklis; Ronneberger, Olaf; Santos, Andres; Stotzka, Rainer; Strähle, Uwe; Peyriéras, Nadine
2013-01-01
Abstract Due to the relative transparency of its embryos and larvae, the zebrafish is an ideal model organism for bioimaging approaches in vertebrates. Novel microscope technologies allow the imaging of developmental processes in unprecedented detail, and they enable the use of complex image-based read-outs for high-throughput/high-content screening. Such applications can easily generate Terabytes of image data, the handling and analysis of which becomes a major bottleneck in extracting the targeted information. Here, we describe the current state of the art in computational image analysis in the zebrafish system. We discuss the challenges encountered when handling high-content image data, especially with regard to data quality, annotation, and storage. We survey methods for preprocessing image data for further analysis, and describe selected examples of automated image analysis, including the tracking of cells during embryogenesis, heartbeat detection, identification of dead embryos, recognition of tissues and anatomical landmarks, and quantification of behavioral patterns of adult fish. We review recent examples for applications using such methods, such as the comprehensive analysis of cell lineages during early development, the generation of a three-dimensional brain atlas of zebrafish larvae, and high-throughput drug screens based on movement patterns. Finally, we identify future challenges for the zebrafish image analysis community, notably those concerning the compatibility of algorithms and data formats for the assembly of modular analysis pipelines. PMID:23758125
Quantitative Pulmonary Imaging Using Computed Tomography and Magnetic Resonance Imaging
Washko, George R.; Parraga, Grace; Coxson, Harvey O.
2011-01-01
Measurements of lung function, including spirometry and body plethysmography, are easy to perform and are the current clinical standard for assessing disease severity. However, these lung functional techniques do not adequately explain the observed variability in clinical manifestations of disease and offer little insight into the relationship of lung structure and function. Lung imaging and the image-based assessment of lung disease has matured to the extent that it is common for clinical, epidemiologic, and genetic investigation to have a component dedicated to image analysis. There are several exciting imaging modalities currently being used for the non-invasive study of lung anatomy and function. In this review we will focus on two of them, x-ray computed tomography and magnetic resonance imaging. Following a brief introduction of each method we detail some of the most recent work being done to characterize smoking-related lung disease and the clinical applications of such knowledge. PMID:22142490
Lin, Nan; Jiang, Junhai; Guo, Shicheng; Xiong, Momiao
2015-01-01
Due to advancements in sensor technology, growing large-scale medical image data make it possible to visualize anatomical changes in biological tissues. As a consequence, medical images have the potential to enhance the diagnosis of disease, the prediction of clinical outcomes and the characterization of disease progression. In the meantime, however, the growing data dimensions pose great methodological and computational challenges for the representation and selection of features in image cluster analysis. To address these challenges, we first extend functional principal component analysis (FPCA) from one dimension to two dimensions to fully capture the spatial variation of the image signals. The image signals contain a large number of redundant features which provide no additional information for clustering analysis. The widely used methods for removing irrelevant features are sparse clustering algorithms using a lasso-type penalty to select the features. However, the accuracy of clustering using a lasso-type penalty depends on the selection of the penalty parameters and the threshold value, which are difficult to determine in practice. Recently, randomized algorithms have received a great deal of attention in big data analysis. This paper presents a randomized algorithm for accurate feature selection in image clustering analysis. The proposed method is applied to both liver and kidney cancer histology image data from the TCGA database. The results demonstrate that the randomized feature selection method coupled with functional principal component analysis substantially outperforms current sparse clustering algorithms in image cluster analysis. PMID:26196383
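A generic randomized range-finder PCA illustrates the flavor of randomized algorithms the abstract refers to (assuming numpy; this is a Halko-style sketch, not the paper's specific feature-selection algorithm):

```python
import numpy as np

def randomized_pca(X, k, oversample=10, seed=0):
    """Top-k principal components of row-observations X via a
    randomized range finder followed by an SVD of the small
    projected matrix."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    sketch = Xc @ rng.standard_normal((X.shape[1], k + oversample))
    Q, _ = np.linalg.qr(sketch)                  # orthonormal range basis
    _, s, Vt = np.linalg.svd(Q.T @ Xc, full_matrices=False)
    return Vt[:k], s[:k] ** 2 / (len(X) - 1)     # components, variances
```

The appeal for large image data is that the expensive SVD is performed on a (k+p)-by-d matrix rather than on the full n-by-d data matrix.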
Jana, Tanima; Shroff, Jennifer; Bhutani, Manoop S.
2015-01-01
Pancreatic cystic lesions are being detected with increasing frequency, largely due to advances in cross-sectional imaging. The most common neoplasms include serous cystadenomas, mucinous cystic neoplasms, intraductal papillary mucinous neoplasms, solid pseudopapillary neoplasms, and cystic pancreatic endocrine neoplasms. Computed tomography (CT), magnetic resonance imaging (MRI), and endoscopic ultrasound (EUS) are currently used as imaging modalities. EUS-guided fine needle aspiration has proved to be a useful diagnostic tool, and enables an assessment of tumor markers, cytology, chemistries, and DNA analysis. Here, we review the current literature on pancreatic cystic neoplasms, including classification, diagnosis, treatment, and recommendations for surveillance. Data for this manuscript was acquired via searching the literature from inception to December 2014 on PubMed and Ovid MEDLINE. PMID:25821410
Zevenhoven, Koos C J; Busch, Sarah; Hatridge, Michael; Oisjöen, Fredrik; Ilmoniemi, Risto J; Clarke, John
2014-03-14
Eddy currents induced by applied magnetic-field pulses have been a common issue in ultra-low-field magnetic resonance imaging. In particular, a relatively large prepolarizing field, applied before each signal acquisition sequence to increase the signal, induces currents in the walls of the surrounding conductive shielded room. The magnetic-field transient generated by the eddy currents may cause severe image distortions and signal loss, especially with the large prepolarizing coils designed for in vivo imaging. We derive a theory of eddy currents in thin conducting structures and enclosures to provide intuitive understanding and efficient computations. We present detailed measurements of the eddy-current patterns and their time evolution in a previous-generation shielded room. The analysis led to the design and construction of a new shielded room with symmetrically placed 1.6-mm-thick aluminum sheets that were weakly coupled electrically. The currents flowing around the entire room were heavily damped, resulting in a decay time constant of about 6 ms for both the measured and computed field transients. The measured eddy-current vector maps were in excellent agreement with predictions based on the theory, suggesting that both the experimental methods and the theory were successful and could be applied to a wide variety of thin conducting structures.
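The quoted ~6 ms decay can be characterized by fitting an exponential to the measured field transient; a minimal sketch (assuming numpy; a log-linear fit, not necessarily the authors' procedure):

```python
import numpy as np

def decay_time_constant(t, b):
    """Time constant tau of B(t) = B0*exp(-t/tau), from a log-linear
    least-squares fit to a measured field transient."""
    slope, _ = np.polyfit(t, np.log(np.abs(b)), 1)
    return -1.0 / slope
```

For noisy multi-exponential transients a nonlinear fit or a fit restricted to the late, single-mode tail would be more appropriate.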
Zevenhoven, Koos C. J.; Busch, Sarah; Hatridge, Michael; Öisjöen, Fredrik; Ilmoniemi, Risto J.; Clarke, John
2014-01-01
Eddy currents induced by applied magnetic-field pulses have been a common issue in ultra-low-field magnetic resonance imaging. In particular, a relatively large prepolarizing field—applied before each signal acquisition sequence to increase the signal—induces currents in the walls of the surrounding conductive shielded room. The magnetic-field transient generated by the eddy currents may cause severe image distortions and signal loss, especially with the large prepolarizing coils designed for in vivo imaging. We derive a theory of eddy currents in thin conducting structures and enclosures to provide intuitive understanding and efficient computations. We present detailed measurements of the eddy-current patterns and their time evolution in a previous-generation shielded room. The analysis led to the design and construction of a new shielded room with symmetrically placed 1.6-mm-thick aluminum sheets that were weakly coupled electrically. The currents flowing around the entire room were heavily damped, resulting in a decay time constant of about 6 ms for both the measured and computed field transients. The measured eddy-current vector maps were in excellent agreement with predictions based on the theory, suggesting that both the experimental methods and the theory were successful and could be applied to a wide variety of thin conducting structures. PMID:24753629
Mor-Avi, Victor; Lang, Roberto M; Badano, Luigi P; Belohlavek, Marek; Cardim, Nuno Miguel; Derumeaux, Genevieve; Galderisi, Maurizio; Marwick, Thomas; Nagueh, Sherif F; Sengupta, Partho P; Sicari, Rosa; Smiseth, Otto A; Smulevitz, Beverly; Takeuchi, Masaaki; Thomas, James D; Vannan, Mani; Voigt, Jens-Uwe; Zamorano, Jose Luis
2011-03-01
Echocardiographic imaging is ideally suited for the evaluation of cardiac mechanics because of its intrinsically dynamic nature. Because for decades, echocardiography has been the only imaging modality that allows dynamic imaging of the heart, it is only natural that new, increasingly automated techniques for sophisticated analysis of cardiac mechanics have been driven by researchers and manufacturers of ultrasound imaging equipment. Several such techniques have emerged over the past decades to address the issue of reader's experience and inter-measurement variability in interpretation. Some were widely embraced by echocardiographers around the world and became part of the clinical routine, whereas others remained limited to research and exploration of new clinical applications. Two such techniques have dominated the research arena of echocardiography: (1) Doppler-based tissue velocity measurements, frequently referred to as tissue Doppler or myocardial Doppler, and (2) speckle tracking on the basis of displacement measurements. Both types of measurements lend themselves to the derivation of multiple parameters of myocardial function. The goal of this document is to focus on the currently available techniques that allow quantitative assessment of myocardial function via image-based analysis of local myocardial dynamics, including Doppler tissue imaging and speckle-tracking echocardiography, as well as integrated backscatter analysis. This document describes the current and potential clinical applications of these techniques and their strengths and weaknesses, briefly surveys a selection of the relevant published literature while highlighting normal and abnormal findings in the context of different cardiovascular pathologies, and summarizes the unresolved issues, future research priorities, and recommended indications for clinical use.
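Displacement measurements from tissue Doppler or speckle tracking are typically reduced to deformation parameters; the simplest is Lagrangian strain of a tracked segment (an illustrative sketch, not taken from the document):

```python
def lagrangian_strain(length_t, length_0):
    """Lagrangian strain of a tracked myocardial segment: fractional
    length change relative to its reference (end-diastolic) length."""
    return (length_t - length_0) / length_0
```

Negative strain corresponds to segmental shortening (e.g. -0.2 for a segment that shortens by 20%); strain rate is its time derivative.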

Multiplexed 3D FRET imaging in deep tissue of live embryos
Zhao, Ming; Wan, Xiaoyang; Li, Yu; Zhou, Weibin; Peng, Leilei
2015-01-01
Current deep tissue microscopy techniques are mostly restricted to intensity mapping of fluorophores, which significantly limit their applications in investigating biochemical processes in vivo. We present a deep tissue multiplexed functional imaging method that probes multiple Förster resonant energy transfer (FRET) sensors in live embryos with high spatial resolution. The method simultaneously images fluorescence lifetimes in 3D with multiple excitation lasers. Through quantitative analysis of triple-channel intensity and lifetime images, we demonstrated that Ca2+ and cAMP levels of live embryos expressing dual FRET sensors can be monitored simultaneously at microscopic resolution. The method is compatible with a broad range of FRET sensors currently available for probing various cellular biochemical functions. It opens the door to imaging complex cellular circuitries in whole live organisms. PMID:26387920
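In lifetime-based FRET readouts of this kind, sensor activity is typically quantified through the drop in donor lifetime, E = 1 - tau_DA / tau_D. A generic sketch of that relation (the standard formula, not the authors' pipeline; the lifetime values below are invented):

```python
def fret_efficiency(tau_donor, tau_donor_acceptor):
    """FRET efficiency from the donor fluorescence lifetime measured
    without (tau_donor) and with (tau_donor_acceptor) the acceptor present."""
    if tau_donor <= 0:
        raise ValueError("donor lifetime must be positive")
    return 1.0 - tau_donor_acceptor / tau_donor

# Example: a donor lifetime dropping from 2.5 ns to 1.5 ns implies E = 0.4.
E = fret_efficiency(2.5, 1.5)
```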
The continuing education course on "Developmental Neurotoxicity Testing" (DNT) was designed to communicate current practices for DNT neuropathology, describe promising innovations in quantitative analysis and non-invasive imaging, and facilitate a discussion among experienced neu...
An Ibm PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis
NASA Astrophysics Data System (ADS)
Kim, Yongmin; Alexander, Thomas
1986-06-01
In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.
Analysis of electroluminescence images in small-area circular CdTe solar cells
NASA Astrophysics Data System (ADS)
Bokalič, Matevž; Raguse, John; Sites, James R.; Topič, Marko
2013-09-01
The electroluminescence (EL) imaging process of small area solar cells is investigated in detail to expose optical and electrical effects that influence image acquisition and corrupt the acquired image. An approach to correct the measured EL images and to extract the exact EL radiation as emitted from the photovoltaic device is presented. EL images of circular cadmium telluride (CdTe) solar cells are obtained under different conditions. The power-law relationship between forward injection current and EL emission and a negative temperature coefficient of EL radiation are observed. The distributed Simulation Program with Integrated Circuit Emphasis (SPICE®) model of the circular CdTe solar cell is used to simulate the dark J-V curve and current distribution under the conditions used during EL measurements. Simulation results are presented as circularly averaged EL intensity profiles, which clearly show that the ratio between resistive parameters determines the current distribution in thin-film solar cells. The exact resistance values for front and back contact layers and for CdTe bulk layer are determined at different temperatures, and a negative temperature coefficient for the CdTe bulk resistance is observed.
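The power-law relationship between forward injection current and EL emission reported above is conventionally extracted as the slope of a log-log fit. A short numpy sketch with synthetic data (the exponent and prefactor are invented for illustration):

```python
import numpy as np

def el_exponent(current, el_signal):
    """Fit the power law el_signal = c * current**m in log-log space;
    return the exponent m (the slope of the linear fit)."""
    m, _log_c = np.polyfit(np.log(current), np.log(el_signal), 1)
    return m

# Synthetic camera counts following phi = 3.0 * J**2 exactly.
J = np.linspace(5e-3, 5e-2, 20)      # injection current (A)
phi = 3.0 * J**2                     # EL signal (arbitrary units)
m = el_exponent(J, phi)              # recovers m close to 2
```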
Applicability of interferometric SAR technology to ground movement and pipeline monitoring
NASA Astrophysics Data System (ADS)
Grivas, Dimitri A.; Bhagvati, Chakravarthy; Schultz, B. C.; Trigg, Alan; Rizkalla, Moness
1998-03-01
This paper summarizes the findings of a cooperative effort between NOVA Gas Transmission Ltd. (NGTL), the Italian Natural Gas Transmission Company (SNAM), and Arista International, Inc., to determine whether current remote sensing technologies can be utilized to monitor small-scale ground movements over vast geographical areas. This topic is of interest due to the potential for small ground movements to cause strain accumulation in buried pipeline facilities. Ground movements are difficult to monitor continuously, but their cumulative effect over time can have a significant impact on the safety of buried pipelines. Interferometric synthetic aperture radar (InSAR or SARI) is identified as the most promising technique of those considered. InSAR analysis involves combining multiple images from consecutive passes of a radar imaging platform. The resulting composite image can detect changes as small as 2.5 to 5.0 centimeters (based on current analysis methods and radar satellite data of 5 centimeter wavelength). Research currently in progress shows potential for measuring ground movements as small as a few millimeters. Data needed for InSAR analysis is currently commercially available from four satellites, and additional satellites are planned for launch in the near future. A major conclusion of the present study is that InSAR technology is potentially useful for pipeline integrity monitoring. A pilot project is planned to test operational issues.
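The centimeter-scale sensitivity quoted above follows from converting interferometric phase change to line-of-sight displacement, d = lambda * delta_phi / (4 pi), the factor of 4 pi reflecting the two-way radar path. A minimal sketch using the roughly 5 cm wavelength cited in the text:

```python
import math

def los_displacement(delta_phase, wavelength):
    """Line-of-sight ground displacement from an interferometric phase
    change; a full 2*pi cycle corresponds to wavelength/2 of motion."""
    return wavelength * delta_phase / (4.0 * math.pi)

# One full fringe at the ~5 cm radar wavelength cited in the text:
d = los_displacement(2.0 * math.pi, 0.05)   # 0.025 m
```

One 2*pi fringe at a 5 cm wavelength thus corresponds to 2.5 cm of line-of-sight motion, consistent with the detection limit stated in the abstract.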
NASA Astrophysics Data System (ADS)
Vievering, J. T.; Glesener, L.; Panchapakesan, S. A.; Ryan, D.; Krucker, S.; Christe, S.; Buitrago-Casas, J. C.; Inglis, A. R.; Musset, S.
2017-12-01
Observations of the Sun in hard x-rays can provide insight into many solar phenomena which are not currently well-understood, including the mechanisms behind particle acceleration in flares. RHESSI is the only solar-dedicated imager currently operating in the hard x-ray regime. Though RHESSI has greatly added to our knowledge of flare particle acceleration, the indirect imaging method of rotating collimating optics is fundamentally limited in sensitivity and dynamic range. By instead using a direct imaging technique, the structure and evolution of even small flares and active regions can be investigated in greater depth. FOXSI (Focusing Optics X-ray Solar Imager), a hard x-ray instrument flown on two sounding rocket campaigns, seeks to achieve these improved capabilities by using focusing optics for solar observations in the 4-20 keV range. During the second of the FOXSI flights, flown on December 11, 2014, two microflares were observed, estimated as GOES class A0.5 and A2.5 (upper limits). Here we present current imaging and spectral analyses of these microflares, exploring the nature of energy release and comparing to observations from other instruments. Additionally, we feature the first analysis of data from the FOXSI-2 CdTe strip detectors, which provide improved efficiency above 10 keV. Through this analysis, we investigate the capabilities of FOXSI in enhancing our knowledge of smaller-scale solar events.
Radiation dose reduction for CT lung cancer screening using ASIR and MBIR: a phantom study.
Mathieu, Kelsey B; Ai, Hua; Fox, Patricia S; Godoy, Myrna Cobos Barco; Munden, Reginald F; de Groot, Patricia M; Pan, Tinsu
2014-03-06
The purpose of this study was to reduce the radiation dosage associated with computed tomography (CT) lung cancer screening while maintaining overall diagnostic image quality and definition of ground-glass opacities (GGOs). A lung screening phantom and a multipurpose chest phantom were used to quantitatively assess the performance of two iterative image reconstruction algorithms (adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR)) used in conjunction with reduced tube currents relative to a standard clinical lung cancer screening protocol (51 effective mAs (3.9 mGy) and filtered back-projection (FBP) reconstruction). To further assess the algorithms' performances, qualitative image analysis was conducted (in the form of a reader study) using the multipurpose chest phantom, which was implanted with GGOs of two densities. Our quantitative image analysis indicated that tube current, and thus radiation dose, could be reduced by 40% or 80% from ASIR or MBIR, respectively, compared with conventional FBP, while maintaining similar image noise magnitude and contrast-to-noise ratio. The qualitative portion of our study, which assessed reader preference, yielded similar results, indicating that dose could be reduced by 60% (to 20 effective mAs (1.6 mGy)) with either ASIR or MBIR, while maintaining GGO definition. Additionally, the readers' preferences (as indicated by their ratings) regarding overall image quality were equal or better (for a given dose) when using ASIR or MBIR, compared with FBP. In conclusion, combining ASIR or MBIR with reduced tube current may allow for lower doses while maintaining overall diagnostic image quality, as well as GGO definition, during CT lung cancer screening.
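The contrast-to-noise comparisons described above reduce to simple region-of-interest statistics. A hedged numpy sketch (synthetic pixel values, not the study's phantom data):

```python
import numpy as np

def contrast_to_noise(roi, background):
    """CNR between a target ROI and background: absolute mean difference
    divided by the background standard deviation (the noise estimate)."""
    return abs(roi.mean() - background.mean()) / background.std()

# Synthetic ROIs: target at 60 HU, background at 0 HU, sigma = 10 HU noise.
rng = np.random.default_rng(0)
target = 60.0 + 10.0 * rng.standard_normal(10_000)
bg = 10.0 * rng.standard_normal(10_000)
cnr = contrast_to_noise(target, bg)   # close to 6
```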
Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F
2010-01-01
An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
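The kinetic models such a tool manages are compartment models; the simplest, the one-tissue model, obeys dCt/dt = K1*Cp(t) - k2*Ct(t). A generic Euler-integration sketch, emphatically not COMKAT's actual API (the rate constants and input function are invented):

```python
import numpy as np

def one_tissue_model(cp, dt, K1, k2):
    """Euler integration of dCt/dt = K1*Cp(t) - k2*Ct(t),
    the one-tissue compartment model, starting from Ct(0) = 0."""
    ct = np.zeros_like(cp)
    for i in range(1, len(cp)):
        ct[i] = ct[i - 1] + dt * (K1 * cp[i - 1] - k2 * ct[i - 1])
    return ct

# Constant plasma input: the tissue curve approaches K1/k2 * Cp = 2.0.
dt = 0.01                                  # minutes
cp = np.ones(60_000)                       # 600 min of constant input
ct = one_tissue_model(cp, dt, K1=0.1, k2=0.05)
```

Parameter estimation as described in the abstract then amounts to adjusting K1 and k2 until the modeled Ct matches the measured time-activity curve.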
Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok
2016-05-01
To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz ) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding nonlinear relation between signal phase and Bz . A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis on the signal-to-noise ratio of Bz was given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method in conductivity estimation. Given all SSFP variants herein, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging. © 2015 Wiley Periodicals, Inc.
Imaging ac losses in superconducting films via scanning Hall probe microscopy
NASA Astrophysics Data System (ADS)
Dinner, Rafael B.; Moler, Kathryn A.; Feldmann, D. Matthew; Beasley, M. R.
2007-04-01
Various local probes have been applied to understanding current flow through superconducting films, which are often surprisingly inhomogeneous. Here, we show that magnetic imaging allows quantitative reconstruction of both current density J and electric field E resolved in time and space in a film carrying subcritical ac current. Current reconstruction entails inversion of the Biot-Savart law, while electric fields are reconstructed using Faraday’s law. We describe the corresponding numerical procedures, largely adapting existing work to the case of a strip carrying ac current, but including other methods of obtaining the complete electric field from the inductive portion determined by Faraday’s law. We also delineate the physical requirements behind the mathematical transformations. We then apply the procedures to images of a strip of YBa2Cu3O7-δ carrying an ac current at 400 Hz. Our scanning Hall probe microscope produces a time series of magnetic images of the strip with 1 μm spatial resolution and 25 μs time resolution. Combining the reconstructed J and E, we obtain a complete characterization including local critical current density, E-J curves, and power losses. This analysis has a range of applications from fundamental studies of vortex dynamics to practical coated conductor development.
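The Biot-Savart inversion step described above is conveniently performed in Fourier space, where the convolution becomes a per-frequency division that must be regularized because the kernel attenuates high spatial frequencies with imaging height. A 1D toy version for a sheet current imaged at height z (the geometry and numbers are invented, and this is simplified relative to the full 2D procedure in the paper):

```python
import numpy as np

MU0 = 4e-7 * np.pi

def biot_savart_kernel(x, z, dx):
    """Discrete 1D Biot-Savart kernel: Bz at height z above a sheet current,
    Bz(x) = (mu0 / 2 pi) * integral J(x') (x - x') / ((x - x')**2 + z**2) dx'."""
    return MU0 / (2.0 * np.pi) * x / (x**2 + z**2) * dx

def forward(J, kernel):
    """Circular convolution of the current profile with the kernel."""
    return np.real(np.fft.ifft(np.fft.fft(J) * np.fft.fft(np.fft.ifftshift(kernel))))

def invert(Bz, kernel, reg=1e-12):
    """Tikhonov-regularized Fourier-space deconvolution of the Biot-Savart law."""
    K = np.fft.fft(np.fft.ifftshift(kernel))
    Jhat = np.fft.fft(Bz) * np.conj(K) / (np.abs(K)**2 + reg * np.max(np.abs(K))**2)
    return np.real(np.fft.ifft(Jhat))

# Gaussian current profile on a 512-point, 512 um periodic grid, imaged 2 um above.
N, dx, z = 512, 1e-6, 2e-6
x = (np.arange(N) - N // 2) * dx
J = np.exp(-x**2 / (2 * (10e-6)**2))            # sheet current (A/m)
Bz = forward(J, biot_savart_kernel(x, z, dx))
J_rec = invert(Bz, biot_savart_kernel(x, z, dx))
```

In practice the regularization strength is set by the noise level of the magnetic images; the nearly odd kernel carries almost no DC information, so the mean current level is the least well constrained part of the reconstruction.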
Detection of rip current using camera monitoring techniques
NASA Astrophysics Data System (ADS)
Kim, T.
2016-02-01
Rip currents are strong, localized, and rather narrow seaward flows oriented approximately shore-normal: water stacked against the shore by longshore currents suddenly flows back out to sea as a rip current. They are transient phenomena whose generation time and location are unpredictable, and they play significant roles in offshore sediment transport and beach erosion. Rip currents can be very hazardous to swimmers and floaters because of their strong seaward flows and the sudden depth changes produced by these narrow, strong flows. Because of their importance for safety, shoreline evolution, and pollutant transport, many studies have attempted to identify their mechanisms. However, rip currents are still not understood well enough to warn people in the water by predicting their location and timing. This paper investigates the development of rip currents using camera images. Since rip currents develop from longshore currents, observed variations of the longshore current in space and time can be used to detect rip current generation; most of the time, the convergence of two opposing longshore currents marks the outbreak of a rip current. To observe longshore currents, the optical current meter (OCM) technique proposed by Chickadel et al. (2003) is used. The relationship between rip current generation time and the longshore current velocity variation observed by the OCM is analyzed from images taken on the shore. Direct measurement of rip current velocity is also tested using image analysis techniques, and the rip current strength is estimated quantitatively from the average and variance images of the rip current area. These efforts will contribute to reducing the hazards to swimmers through prediction and warning of rip current generation.
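The convergence criterion described above, two longshore currents flowing toward each other, can be located directly in an OCM-derived velocity transect. A toy sketch (the sign convention and velocity values are invented):

```python
import numpy as np

def convergence_points(v_longshore):
    """Indices where the alongshore velocity switches from positive to
    negative between neighboring samples, i.e. where two opposing
    longshore currents meet: a candidate rip current location."""
    v = np.asarray(v_longshore)
    return np.flatnonzero((v[:-1] > 0) & (v[1:] < 0))

# Transect with opposing flows meeting near the middle of the array.
v = np.array([0.4, 0.3, 0.2, 0.1, -0.1, -0.2, -0.3])   # m/s
idx = convergence_points(v)
```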
NASA Astrophysics Data System (ADS)
Vievering, J. T.; Glesener, L.; Krucker, S.; Christe, S.; Buitrago-Casas, J. C.; Ishikawa, S. N.; Ramsey, B.; Takahashi, T.; Watanabe, S.
2016-12-01
Observations of the Sun in hard x-rays can provide insight into many solar phenomena which are not currently well-understood, including the mechanisms behind particle acceleration in flares. Currently, RHESSI is the only solar-dedicated spacecraft observing in the hard x-ray regime. Though RHESSI has greatly added to our knowledge of flare particle acceleration, the method of rotation modulation collimators is limited in sensitivity and dynamic range. By instead using a direct imaging technique, the structure and evolution of even small flares and active regions can be investigated in greater depth. FOXSI (Focusing Optics X-ray Solar Imager), a hard x-ray instrument flown on two sounding rocket campaigns, seeks to achieve these improved capabilities by using focusing optics for solar observations in the 4-20 keV range. During the second of the FOXSI flights, flown on December 11, 2014, two microflares were observed, estimated as GOES class A0.5 and A2.5 (upper limits). Preliminary analysis of these two flares will be presented, including imaging spectroscopy, light curves, and photon spectra. Through this analysis, we investigate the capabilities of FOXSI in enhancing our knowledge of smaller-scale solar events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrie, G.M.; Perry, E.M.; Kirkham, R.R.
1997-09-01
This report describes the work performed at the Pacific Northwest National Laboratory (PNNL) for the U.S. Department of Energy's Office of Nonproliferation and National Security, Office of Research and Development (NN-20). The work supports the NN-20 Broad Area Search and Analysis, a program initiated by NN-20 to improve the detection and classification of undeclared weapons facilities. Ongoing PNNL research activities are described in three main components: image collection, information processing, and change analysis. The Multispectral Airborne Imaging System, which was developed to collect georeferenced imagery in the visible through infrared regions of the spectrum, and flown on a light aircraft platform, will supply current land use conditions. The image information extraction software (dynamic clustering and end-member extraction) uses imagery, like the multispectral data collected by the PNNL multispectral system, to efficiently generate landcover information. The advanced change detection uses a priori (benchmark) information, current landcover conditions, and user-supplied rules to rank suspect areas by probable risk of undeclared facilities or proliferation activities. These components, both separately and combined, provide important tools for improving the detection of undeclared facilities.
Athanasiou, Lambros; Sakellarios, Antonis I; Bourantas, Christos V; Tsirka, Georgia; Siogkas, Panagiotis; Exarchos, Themis P; Naka, Katerina K; Michalis, Lampros K; Fotiadis, Dimitrios I
2014-07-01
Optical coherence tomography and intravascular ultrasound are the most widely used methodologies in clinical practice as they provide high resolution cross-sectional images that allow comprehensive visualization of the lumen and plaque morphology. Several methods have been developed in recent years to process the output of these imaging modalities, which allow fast, reliable and reproducible detection of the luminal borders and characterization of plaque composition. These methods have proven useful in the study of the atherosclerotic process as they have facilitated analysis of a vast amount of data. This review presents currently available intravascular ultrasound and optical coherence tomography processing methodologies for segmenting and characterizing the plaque area, highlighting their advantages and disadvantages, and discusses the future trends in intravascular imaging.
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
This research presents an image quality analysis and enhancement proposals for the biophotonics area. The sources of image quality problems are reviewed and analyzed, and those with the greatest impact are examined in the context of a specific biophotonic task: skin cancer diagnostics. The results indicate that the main problem for skin cancer analysis is uneven skin illumination. Since illumination problems often cannot be prevented at acquisition, the paper proposes an image post-processing algorithm based on low-frequency filtering. Practical results show improved diagnostic outcomes after applying the proposed filter; moreover, the filter does not degrade diagnostic quality for images without illumination defects. The current filtering algorithm requires empirical tuning of its parameters; further work is needed to test the algorithm in other biophotonic applications and to propose automatic parameter selection.
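Low-frequency filtering for illumination correction is commonly realized by estimating the illumination as the low-frequency image content and dividing it out. The paper's exact filter is not reproduced here, so the numpy sketch below is one common realization with invented parameters:

```python
import numpy as np

def gaussian_lowpass(img, sigma):
    """Low-frequency content of img via a Gaussian filter applied in the
    frequency domain (sigma is the spatial-domain std. dev. in pixels)."""
    fy = np.fft.fftfreq(img.shape[0])
    fx = np.fft.fftfreq(img.shape[1])
    # Fourier transform of a spatial Gaussian with standard deviation sigma:
    H = np.exp(-2.0 * np.pi**2 * sigma**2 * (fy[:, None]**2 + fx[None, :]**2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

def correct_illumination(img, sigma=8.0):
    """Divide out the smooth illumination estimate, preserving the mean level."""
    illum = gaussian_lowpass(img, sigma)
    return img / np.maximum(illum, 1e-6) * img.mean()

# Synthetic skin-like image: fine texture under a strong left-to-right ramp.
rng = np.random.default_rng(1)
ramp = np.linspace(0.5, 1.5, 128)[None, :]              # uneven illumination
texture = 1.0 + 0.05 * rng.standard_normal((128, 128))
img = ramp * texture
flat = correct_illumination(img)
```

On the synthetic image the illumination ramp is largely removed while the fine texture survives; as the abstract notes, the cutoff (here `sigma`) still needs empirical tuning per application.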
Retinal Imaging Techniques for Diabetic Retinopathy Screening
Goh, James Kang Hao; Cheung, Carol Y.; Sim, Shaun Sebastian; Tan, Pok Chien; Tan, Gavin Siew Wei; Wong, Tien Yin
2016-01-01
Due to the increasing prevalence of diabetes mellitus, demand for diabetic retinopathy (DR) screening platforms is steeply increasing. Early detection and treatment of DR are key public health interventions that can greatly reduce the likelihood of vision loss. Current DR screening programs typically employ retinal fundus photography, which relies on skilled readers for manual DR assessment. However, this is labor-intensive and suffers from inconsistency across sites. Hence, there has been a recent proliferation of automated retinal image analysis software that may potentially alleviate this burden cost-effectively. Furthermore, current screening programs based on 2-dimensional fundus photography do not effectively screen for diabetic macular edema (DME). Optical coherence tomography is becoming increasingly recognized as the reference standard for DME assessment and can potentially provide a cost-effective solution for improving DME detection in large-scale DR screening programs. Current screening techniques are also unable to image the peripheral retina and require pharmacological pupil dilation; ultra-widefield imaging and confocal scanning laser ophthalmoscopy, which address these drawbacks, possess great potential. In this review, we summarize the current DR screening methods using various retinal imaging techniques, and also outline future possibilities. Advances in retinal imaging techniques can potentially transform the management of patients with diabetes, providing savings in health care costs and resources. PMID:26830491
Datta, Niladri Sekhar; Dutta, Himadri Sekhar; Majumder, Koushik
2016-01-01
Contrast enhancement of retinal images plays a vital role in the detection of microaneurysms (MAs), an early sign of diabetic retinopathy. A retinal image contrast enhancement method is presented to improve MA detection, and its success rate on low-contrast, noisy retinal images shows the importance of the proposed method. Overall, 587 retinal input images were tested for performance analysis. The average sensitivity and specificity are 95.94% and 99.21%, respectively, and the area under the curve for the receiver operating characteristic analysis is 0.932. Classification of diabetic retinopathy disease is also performed. The experimental results show that the overall MA detection method performs better than current state-of-the-art MA detection algorithms.
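The reported figures are the standard detection metrics computed from confusion-matrix counts. A trivial sketch (the counts below are illustrative, not the study's 587-image data):

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity (recall on diseased cases) and specificity (recall on
    healthy cases) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Illustrative counts only: 95/100 diseased and 99/100 healthy correctly flagged.
sens, spec = detection_metrics(tp=95, fp=1, tn=99, fn=5)
```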
Image processing and analysis using neural networks for optometry area
NASA Astrophysics Data System (ADS)
Netto, Antonio V.; Ferreira de Oliveira, Maria C.
2002-11-01
In this work we describe the framework of a functional system for processing and analyzing images of the human eye acquired by the Hartmann-Shack technique (HS), in order to extract information to formulate a diagnosis of eye refractive errors (astigmatism, hypermetropia and myopia). The analysis is to be carried out using an Artificial Intelligence system based on Neural Nets, Fuzzy Logic and Classifier Combination. The major goal is to establish the basis of a new technology to effectively measure ocular refractive errors that is based on methods alternative to those adopted in current patented systems. Moreover, analysis of images acquired with the Hartmann-Shack technique may enable the extraction of additional information on the health of the eye under examination from the same image used to detect refraction errors.
Open source tools for fluorescent imaging.
Hamilton, Nicholas A
2012-01-01
As microscopy becomes increasingly automated and imaging expands in the spatial and time dimensions, quantitative analysis tools for fluorescent imaging are becoming critical to remove both bottlenecks in throughput as well as fully extract and exploit the information contained in the imaging. In recent years there has been a flurry of activity in the development of bio-image analysis tools and methods with the result that there are now many high-quality, well-documented, and well-supported open source bio-image analysis projects with large user bases that cover essentially every aspect from image capture to publication. These open source solutions are now providing a viable alternative to commercial solutions. More importantly, they are forming an interoperable and interconnected network of tools that allow data and analysis methods to be shared between many of the major projects. Just as researchers build on, transmit, and verify knowledge through publication, open source analysis methods and software are creating a foundation that can be built upon, transmitted, and verified. Here we describe many of the major projects, their capabilities, and features. We also give an overview of the current state of open source software for fluorescent microscopy analysis and the many reasons to use and develop open source methods. Copyright © 2012 Elsevier Inc. All rights reserved.
A high-level 3D visualization API for Java and ImageJ.
Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin
2010-05-21
Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.
Quantitative image processing in fluid mechanics
NASA Technical Reports Server (NTRS)
Hesselink, Lambertus; Helman, James; Ning, Paul
1992-01-01
The current status of digital image processing in fluid flow research is reviewed. In particular, attention is given to a comprehensive approach to the extraction of quantitative data from multivariate databases and examples of recent developments. The discussion covers numerical simulations and experiments, data processing, generation and dissemination of knowledge, traditional image processing, hybrid processing, fluid flow vector field topology, and isosurface analysis using Marching Cubes.
He, Wenjing; Zhu, Yuanzhong; Wang, Wenzhou; Zou, Kai; Zhang, Kai; He, Chao
2017-04-01
Pulsed magnetic field gradients generated by gradient coils are widely used for signal localization in magnetic resonance imaging (MRI). However, gradient coils also induce eddy currents in nearby conducting structures, which perturb the final magnetic field and lead to distortion and artifacts in images, misguiding clinical diagnosis. In our laboratory we measured the magnetic field of gradient-induced eddy currents in a 1.5 T superconducting MRI device and extracted key parameters, including the amplitudes and time constants of the exponential terms, according to an inductance-resistance (L-R) series model. These parameters, for both the self-induced and cross components, are useful for designing digital filters that implement pulse pre-emphasis to reshape the gradient waveform. A measurement device, a base equipped with phantoms and receiving coils, was designed and placed at the isocenter of the magnetic field. By applying a test sequence, contrast experiments were carried out in a superconducting magnet before and after eddy-current compensation. Sets of one-dimensional signals were obtained as raw data from which to calculate the gradient-induced eddy currents. Least-squares curve fitting was performed to match the inductance-resistance series model. The results illustrated that pre-emphasis with a digital filter was correct and effective in reducing eddy-current effects. The pre-emphasis waveform was developed based on the system function. The usefulness of pre-emphasis in reducing eddy currents was confirmed and the improvement presented. All of this is valuable for reducing artifacts in MRI devices.
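The fitting step this abstract describes can be sketched in a few lines: a measured eddy-current decay is fit to a sum of exponential terms, the form implied by an L-R series model, to recover the amplitudes and time constants used for pre-emphasis. This is an illustrative reconstruction with synthetic data, not the authors' code; the two-term model and all numeric values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def lr_decay(t, a1, tau1, a2, tau2):
    """Two-term L-R series model of the eddy-current field decay."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic "measured" decay: a fast and a slow component plus noise.
t = np.linspace(0.0, 0.5, 500)  # seconds
rng = np.random.default_rng(0)
b_meas = lr_decay(t, 1.0, 0.005, 0.3, 0.120) + rng.normal(0, 0.002, t.size)

# Least-squares fit (the curve-fitting step the abstract describes).
p0 = [0.5, 0.01, 0.1, 0.1]  # initial guesses for amplitudes and time constants
popt, _ = curve_fit(lr_decay, t, b_meas, p0=p0)
a1, tau1, a2, tau2 = popt
print(f"fast term: A={a1:.3f}, tau={tau1*1e3:.1f} ms")
print(f"slow term: A={a2:.3f}, tau={tau2*1e3:.1f} ms")
```

The recovered amplitude/time-constant pairs are exactly the parameters a pre-emphasis filter would be built from.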
Booth, T C; Jackson, A; Wardlaw, J M; Taylor, S A; Waldman, A D
2010-01-01
Incidental findings found in “healthy” volunteers during research imaging are common and have important implications for study design and performance, particularly in the areas of informed consent, subjects' rights, clinical image analysis and disclosure. In this study, we aimed to determine current practice and regulations concerning information that should be given to research subjects when obtaining consent, reporting of research images, who should be informed about any incidental findings and the method of disclosure. We reviewed all UK, European and international humanitarian, legal and ethical agencies' guidance. We found that the guidance on what constitutes incidental pathology, how to recognise it and what to do about it is inconsistent between agencies, difficult to find and less complete in the UK than elsewhere. Where given, guidance states that volunteers should be informed during the consent process about how research images will be managed, whether a mechanism exists for identifying incidental findings, arrangements for their disclosure, the potential benefit or harm and therapeutic options. The effects of incidentally discovered pathology on the individual can be complex and far-reaching. Radiologist involvement in analysis of research images varies widely; many incidental findings might therefore go unrecognised. In conclusion, guidance on the management of research imaging is inconsistent, limited and does not address the interests of volunteers. Improved standards to guide management of research images and incidental findings are urgently required. PMID:20335427
Physical activity and body image among men and boys: A meta-analysis.
Bassett-Gunter, Rebecca; McEwan, Desmond; Kamarhie, Aria
2017-09-01
Three meta-analytic reviews have concluded that physical activity is positively related to body image. Historically, research regarding physical activity and body image has been disproportionately focused on female samples. For example, the most recent meta-analysis (2009) extracted 56 effect sizes for women and only 12 for men. The current paper provides an update to the literature regarding the relationship between physical activity and body image among men and boys across 84 individual effect sizes. The analysis also provides insight regarding moderator variables including participant age, and physical activity type and intensity. Overall, physical activity was positively related to body image among men and boys with various moderator variables warranting further investigation. Pragmatic implications are discussed as well as the limitations within existing research and need for additional research to further understand moderator and mediator variables. Copyright © 2017 Elsevier Ltd. All rights reserved.
An Automatic Phase-Change Detection Technique for Colloidal Hard Sphere Suspensions
NASA Technical Reports Server (NTRS)
McDowell, Mark; Gray, Elizabeth; Rogers, Richard B.
2005-01-01
Colloidal suspensions of monodisperse spheres are used as physical models of thermodynamic phase transitions and as precursors to photonic band gap materials. However, current image analysis techniques are not able to distinguish between densely packed phases within conventional microscope images, which are mainly characterized by degrees of randomness or order with similar grayscale value properties. Current techniques for identifying the phase boundaries involve manually identifying the phase transitions, which is very tedious and time consuming. We have developed an intelligent machine vision technique that automatically identifies colloidal phase boundaries. The algorithm utilizes intelligent image processing techniques that accurately identify and track phase changes vertically or horizontally for a sequence of colloidal hard sphere suspension images. This technique is readily adaptable to any imaging application where regions of interest are distinguished from the background by differing patterns of motion over time.
The Analysis of Image Segmentation Hierarchies with a Graph-based Knowledge Discovery System
NASA Technical Reports Server (NTRS)
Tilton, James C.; Cook, Diane J.; Ketkar, Nikhil; Aksoy, Selim
2008-01-01
Currently available pixel-based analysis techniques do not effectively extract the information content from the increasingly available high spatial resolution remotely sensed imagery data. A general consensus is that object-based image analysis (OBIA) is required to effectively analyze this type of data. OBIA is usually a two-stage process: image segmentation followed by an analysis of the segmented objects. We are exploring an approach to OBIA in which hierarchical image segmentations provided by the Recursive Hierarchical Segmentation (RHSEG) software developed at NASA GSFC are analyzed by the Subdue graph-based knowledge discovery system developed by a team at Washington State University. In this paper we discuss our initial approach to representing the RHSEG-produced hierarchical image segmentations in a graphical form understandable by Subdue, and provide results on real and simulated data. We also discuss planned improvements designed to more effectively and completely convey the hierarchical segmentation information to Subdue and to improve processing efficiency.
A survey on deep learning in medical image analysis.
Litjens, Geert; Kooi, Thijs; Bejnordi, Babak Ehteshami; Setio, Arnaud Arindra Adiyoso; Ciompi, Francesco; Ghafoorian, Mohsen; van der Laak, Jeroen A W M; van Ginneken, Bram; Sánchez, Clara I
2017-12-01
Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research. Copyright © 2017 Elsevier B.V. All rights reserved.
Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis.
Guerra-Rosas, Esperanza; Álvarez-Borrego, Josué
2015-10-01
In this paper a new methodology for the diagnosis of skin cancer on images of dermatologic spots using image processing is presented. Skin cancer is currently one of the most frequent diseases in humans. The methodology is based on Fourier spectral analysis using classical, inverse, and k-law nonlinear filters. The sample images were obtained by a medical specialist, and a new spectral technique was developed to obtain a quantitative measurement of the complex pattern found in cancerous skin spots. Finally, a spectral index is calculated to obtain a range of spectral indices defined for skin cancer. Our results show a confidence level of 95.4%.
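The k-law nonlinear filtering named in the abstract can be sketched as follows. The filter form is standard (the Fourier magnitude is raised to a power k, 0 < k < 1, while the phase is preserved, emphasizing weak high-frequency structure), but the "spectral index" below is a simple hypothetical summary statistic, not the authors' published definition.

```python
import numpy as np

def klaw_filter(image, k=0.3):
    """k-law nonlinear filter: |F|^k * exp(i*arg F)."""
    spec = np.fft.fft2(image)
    mag, phase = np.abs(spec), np.angle(spec)
    return (mag ** k) * np.exp(1j * phase)

def spectral_index(image, k=0.3, r_frac=0.25):
    """Hypothetical index: fraction of k-law-filtered spectral magnitude
    lying above a radial frequency threshold."""
    mag = np.abs(klaw_filter(image, k))
    h, w = mag.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    high = np.hypot(fy, fx) > r_frac  # high-frequency band mask
    return mag[high].sum() / mag.sum()

rng = np.random.default_rng(1)
smooth = rng.normal(0, 1, (64, 64)).cumsum(0).cumsum(1)  # low-frequency field
noisy = rng.normal(0, 1, (64, 64))                       # broadband texture
print(spectral_index(smooth), spectral_index(noisy))
```

A complex, textured spot yields a larger index than a smooth one, which is the kind of separation a diagnostic spectral index relies on.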
Self-adaptive relevance feedback based on multilevel image content analysis
NASA Astrophysics Data System (ADS)
Gao, Yongying; Zhang, Yujin; Fu, Yu
2001-01-01
In current content-based image retrieval systems, it is generally accepted that obtaining high-level image features is key to improving querying. Among related techniques, relevance feedback has become an active research topic because it uses information from the user to refine query results. Many methods have been proposed to implement relevance feedback. In this paper, a new scheme for relevance feedback is proposed. Unlike previous methods, our scheme operates self-adaptively. First, based on multi-level image content analysis, the images the user marks as relevant are automatically analyzed at different levels, and the query is modified according to the analysis results. Second, for the user's convenience, the relevance-feedback procedure can be conducted with or without memory. To test the performance of the proposed method, a practical semantic-based image retrieval system was built, and the query results obtained with our self-adaptive relevance feedback are reported.
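The abstract does not give its query-update rule, so the sketch below uses the classic Rocchio formula as a stand-in for how relevance feedback refines a query feature vector from user-marked relevant and non-relevant images; the weights and feature vectors are invented.

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Move the query feature vector toward relevant examples and away
    from non-relevant ones (classic Rocchio update)."""
    q = alpha * query
    if len(relevant):
        q = q + beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q = q - gamma * np.mean(nonrelevant, axis=0)
    return q

query = np.array([0.2, 0.8, 0.0])                 # hypothetical image features
rel = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]])
nonrel = np.array([[0.0, 0.0, 1.0]])
print(rocchio(query, rel, nonrel))  # → [0.8375, 0.9125, -0.1125]
```

A "with memory" variant would simply accumulate the relevant/non-relevant sets across feedback rounds instead of resetting them.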
Wang, Chen; Brancusi, Flavia; Valivullah, Zaheer M; Anderson, Michael G; Cunningham, Denise; Hedberg-Buenz, Adam; Power, Bradley; Simeonov, Dimitre; Gahl, William A; Zein, Wadih M; Adams, David R; Brooks, Brian
2018-01-01
To develop a sensitive scale of iris transillumination suitable for clinical and research use, with the capability of either quantitative analysis or visual matching of images. Iris transillumination photographic images were used from 70 study subjects with ocular or oculocutaneous albinism. Subjects represented a broad range of ocular pigmentation. A subset of images was subjected to image analysis and ranking by both expert and nonexpert reviewers. Quantitative ordering of images was compared with ordering by visual inspection. Images were binned to establish an 8-point scale. Ranking consistency was evaluated using the Kendall rank correlation coefficient (Kendall's tau). Visual ranking results were assessed using Kendall's coefficient of concordance (Kendall's W) analysis. There was a high degree of correlation among the image analysis, expert-based and non-expert-based image rankings. Pairwise comparisons of the quantitative ranking with each reviewer generated an average Kendall's tau of 0.83 ± 0.04 (SD). Inter-rater correlation was also high with Kendall's W of 0.96, 0.95, and 0.95 for nonexpert, expert, and all reviewers, respectively. The current standard for assessing iris transillumination is expert assessment of clinical exam findings. We adapted an image-analysis technique to generate quantitative transillumination values. Quantitative ranking was shown to be highly similar to a ranking produced by both expert and nonexpert reviewers. This finding suggests that the image characteristics used to quantify iris transillumination do not require expert interpretation. Inter-rater rankings were also highly similar, suggesting that varied methods of transillumination ranking are robust in terms of producing reproducible results.
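The two agreement statistics this abstract reports can be computed as follows: Kendall's tau between the quantitative ranking and one reviewer, and Kendall's coefficient of concordance (W) across several reviewers. The rankings below are invented for illustration.

```python
import numpy as np
from scipy.stats import kendalltau

quantitative = [1, 2, 3, 4, 5, 6, 7, 8]      # image-analysis ranking
reviewer_a   = [1, 2, 4, 3, 5, 6, 8, 7]      # one reviewer's ranking
tau, _ = kendalltau(quantitative, reviewer_a)
print(f"Kendall's tau: {tau:.2f}")

def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m raters ranking n items."""
    r = np.asarray(rankings, dtype=float)
    m, n = r.shape
    totals = r.sum(axis=0)                    # rank total per item
    s = ((totals - totals.mean()) ** 2).sum() # sum of squared deviations
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

ratings = [quantitative, reviewer_a, [2, 1, 3, 4, 6, 5, 7, 8]]
print(f"Kendall's W:  {kendalls_w(ratings):.2f}")
```

Values of tau and W near 1, as in the study (tau ≈ 0.83, W ≈ 0.95-0.96), indicate that the quantitative and visual orderings largely agree.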
Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1984-01-01
Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.
NASA Astrophysics Data System (ADS)
Suzuki, Yuki; Fung, George S. K.; Shen, Zeyang; Otake, Yoshito; Lee, Okkyun; Ciuffo, Luisa; Ashikaga, Hiroshi; Sato, Yoshinobu; Taguchi, Katsuyuki
2017-03-01
Cardiac motion (or functional) analysis has shown promise not only for non-invasive diagnosis of cardiovascular diseases but also for prediction of cardiac future events. Current imaging modalities has limitations that could degrade the accuracy of the analysis indices. In this paper, we present a projection-based motion estimation method for x-ray CT that estimates cardiac motion with high spatio-temporal resolution using projection data and a reference 3D volume image. The experiment using a synthesized digital phantom showed promising results for motion analysis.
Presence of muscle dysmorphia symptomology among male weightlifters.
Hildebrandt, Tom; Schlundt, David; Langenbucher, James; Chung, Tammy
2006-01-01
Limited research exists on muscle dysmorphia (MD) in men and in nonclinical populations. The current study evaluated types of body image disturbance among 237 male weightlifters. Latent class analysis of 8 measures of body image disturbance revealed 5 independent types of respondents: Dysmorphic, Muscle Concerned, Fat Concerned, Normal Behavioral, and Normal. One-way analysis of variance of independent measures of body image disturbance and associated psychopathology confirmed significant differences between groups. The Dysmorphic group reported a pattern of body image disturbance consistent with MD by displaying a high overall level of body image disturbance, symptoms of associated psychopathology, steroid use, and appearance-controlling behavior. Findings generally supported classifying MD as a subtype of body dysmorphic disorder and an obsessive-compulsive spectrum disorder. Implications for studying body image disturbance in male weightlifters, and further evaluation of the MD diagnostic criteria are discussed.
Computer assisted analysis of auroral images obtained from high altitude polar satellites
NASA Technical Reports Server (NTRS)
Samadani, Ramin; Flynn, Michael
1993-01-01
Automatic techniques that allow the extraction of physically significant parameters from auroral images were developed. This allows the processing of a much larger number of images than is currently possible with manual techniques. Our techniques were applied to diverse auroral image datasets. These results were made available to geophysicists at NASA and at universities in the form of a software system that performs the analysis. After some feedback from users, an upgraded system was transferred to NASA and to two universities. The feasibility of user-trained search and retrieval of large amounts of data using our automatically derived parameter indices was demonstrated. Techniques based on classification and regression trees (CART) were developed and applied to broaden the types of images to which the automated search and retrieval may be applied. Our techniques were tested with DE-1 auroral images.
ERIC Educational Resources Information Center
Otto, Stacy
2005-01-01
Within this paper the author examines the current nostalgia for a never-present past through critical analysis of images of the mid 20th century American classroom in media culture. The author uses theories of nostalgia and the history of the photographic image to trouble the numerous equity issues surrounding the unchallenged canonization of the…
Lee, E J; Lee, S K; Agid, R; Howard, P; Bae, J M; terBrugge, K
2009-10-01
The combined automatic tube current modulation (ATCM) technique adapts and modulates the x-ray tube current in the x-y-z axes according to the patient's individual anatomy. We compared image quality and radiation dose of the combined ATCM technique with those of a fixed tube current (FTC) technique in craniocervical CT angiography performed with a 64-section multidetector row CT (MDCT) system. A retrospective review of craniocervical CT angiograms (CTAs) obtained using combined ATCM (n = 25) and FTC (n = 25) techniques was performed. Other CTA parameters, such as kilovolt (peak), matrix size, FOV, section thickness, pitch, contrast agent, and contrast injection technique, were held constant. We recorded objective image noise in the muscles at 2 anatomic levels; radiation exposure doses (CT dose index volume and dose-length product); and subjective image quality parameters, such as vascular delineation of various arterial vessels, visibility of small arterial detail, image artifacts, and certainty of diagnosis. The Mann-Whitney U test was used for statistical analysis. No significant difference was detected in subjective image quality parameters between the FTC and combined ATCM techniques. Most subjects in both study groups (49/50, 98%) had acceptable subjective artifacts. The objective image noise values at shoulder level did not show a significant difference, but the noise value at the upper neck was higher with the combined ATCM technique (P < .05). A significant reduction in radiation dose (18%) was noted with the combined ATCM technique (P < .05). The combined ATCM technique for craniocervical CTA performed at 64-section MDCT substantially reduced radiation exposure dose while maintaining diagnostic image quality.
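The statistical test used throughout this study, in miniature: comparing objective noise measurements between the two protocol groups with the Mann-Whitney U test. The noise values below are invented, not the study's data.

```python
from scipy.stats import mannwhitneyu

noise_ftc  = [6.1, 5.8, 6.4, 6.0, 5.9, 6.3, 6.2, 6.0]  # HU, fixed tube current
noise_atcm = [7.0, 7.4, 6.9, 7.2, 7.5, 7.1, 6.8, 7.3]  # HU, combined ATCM

u_stat, p_value = mannwhitneyu(noise_ftc, noise_atcm, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")  # p < .05 -> significant difference
```

Because the test is rank-based it makes no normality assumption, which suits small per-group samples like the n = 25 groups in the study.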
Quality assessment of digital X-ray chest images using an anthropomorphic chest phantom
NASA Astrophysics Data System (ADS)
Vodovatov, A. V.; Kamishanskaya, I. G.; Drozdov, A. A.; Bernhardsson, C.
2017-02-01
The current study focuses on determining the optimal tube voltage for conventional digital X-ray chest screening examinations, using a visual grading analysis method. Chest images of an anthropomorphic phantom were acquired in the posterior-anterior projection on four digital X-ray units with different detector types. The X-ray images of the anthropomorphic phantom were accepted by the radiologists as corresponding to normal human anatomy, allowing phantoms to be used in image quality trials without limitations.
DWI-based neural fingerprinting technology: a preliminary study on stroke analysis.
Ye, Chenfei; Ma, Heather Ting; Wu, Jun; Yang, Pengfei; Chen, Xuhui; Yang, Zhengyi; Ma, Jingbo
2014-01-01
Stroke is a common neural disorder in neurology clinics. Magnetic resonance imaging (MRI) has become an important tool to assess neural physiological changes under stroke, through techniques such as diffusion-weighted imaging (DWI) and diffusion tensor imaging (DTI). Quantitative analysis of MRI images would help medical doctors localize the stroke area in diagnosis in terms of structural information and physiological characterization. However, current quantitative approaches can only localize the disorder rather than measure the physiological variation of subtypes of ischemic stroke. In the current study, we hypothesize that each kind of neural disorder has its own unique physiological characteristics, which could be reflected by DWI images acquired at different gradients. Based on this hypothesis, a DWI-based neural fingerprinting technology is proposed to classify subtypes of ischemic stroke. The neural fingerprint was constructed from the signal intensity of the region of interest (ROI) on DWI images under different gradients. The fingerprint derived from the manually drawn ROI classified the subtypes with 100% accuracy. However, classification accuracy was worse when semiautomatic and automatic methods were used for ROI segmentation. These preliminary results show the promising potential of DWI-based neural fingerprinting technology in stroke subtype classification. Further studies will be carried out to enhance fingerprinting accuracy and extend its application to other clinical practices.
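A minimal sketch of the fingerprinting idea: each ROI yields a vector of mean DWI signal intensities across gradient settings, and an unknown case is assigned the subtype of the nearest reference fingerprint. The reference values, subtype names, and distance metric are all invented stand-ins; the authors' actual pipeline is more involved.

```python
import numpy as np

# Hypothetical reference fingerprints: mean ROI intensity per gradient setting.
reference = {
    "subtype_A": np.array([920.0, 610.0, 400.0, 260.0]),
    "subtype_B": np.array([880.0, 700.0, 560.0, 450.0]),
}

def classify(fingerprint):
    """Nearest-neighbour match on Euclidean distance between fingerprints."""
    return min(reference, key=lambda k: np.linalg.norm(reference[k] - fingerprint))

case = np.array([900.0, 620.0, 410.0, 270.0])
print(classify(case))  # closest to subtype_A's decay profile
```

The abstract's observation that semiautomatic/automatic segmentation hurt accuracy corresponds here to noise in `case`: a poorly placed ROI shifts the intensity vector toward the wrong reference.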
Radiation dose reduction for CT lung cancer screening using ASIR and MBIR: a phantom study
Mathieu, Kelsey B.; Ai, Hua; Fox, Patricia S.; Godoy, Myrna Cobos Barco; Munden, Reginald F.; de Groot, Patricia M.
2014-01-01
The purpose of this study was to reduce the radiation dosage associated with computed tomography (CT) lung cancer screening while maintaining overall diagnostic image quality and definition of ground‐glass opacities (GGOs). A lung screening phantom and a multipurpose chest phantom were used to quantitatively assess the performance of two iterative image reconstruction algorithms (adaptive statistical iterative reconstruction (ASIR) and model‐based iterative reconstruction (MBIR)) used in conjunction with reduced tube currents relative to a standard clinical lung cancer screening protocol (51 effective mAs (3.9 mGy) and filtered back‐projection (FBP) reconstruction). To further assess the algorithms' performances, qualitative image analysis was conducted (in the form of a reader study) using the multipurpose chest phantom, which was implanted with GGOs of two densities. Our quantitative image analysis indicated that tube current, and thus radiation dose, could be reduced by 40% or 80% from ASIR or MBIR, respectively, compared with conventional FBP, while maintaining similar image noise magnitude and contrast‐to‐noise ratio. The qualitative portion of our study, which assessed reader preference, yielded similar results, indicating that dose could be reduced by 60% (to 20 effective mAs (1.6 mGy)) with either ASIR or MBIR, while maintaining GGO definition. Additionally, the readers' preferences (as indicated by their ratings) regarding overall image quality were equal or better (for a given dose) when using ASIR or MBIR, compared with FBP. In conclusion, combining ASIR or MBIR with reduced tube current may allow for lower doses while maintaining overall diagnostic image quality, as well as GGO definition, during CT lung cancer screening. PACS numbers: 87.57.Q‐, 87.57.nf PMID:24710436
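The two quantitative metrics this phantom study tracks, image noise (the standard deviation in a uniform region) and contrast-to-noise ratio between a lesion ROI and background, reduce to a few lines. The pixel data below are synthetic and the ROI layout is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
background = rng.normal(40.0, 12.0, (64, 64))  # HU-like uniform background patch
nodule = rng.normal(100.0, 12.0, (16, 16))     # ground-glass-like ROI

noise = background.std()                       # objective image noise magnitude
cnr = abs(nodule.mean() - background.mean()) / noise
print(f"noise = {noise:.1f}, CNR = {cnr:.1f}")
```

Holding noise magnitude and CNR constant while lowering tube current is exactly the equivalence the study uses to justify the 40-80% dose reductions with ASIR/MBIR.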
Computer-Assisted Digital Image Analysis of Plus Disease in Retinopathy of Prematurity.
Kemp, Pavlina S; VanderVeen, Deborah K
2016-01-01
The objective of this study is to review the current state and role of computer-assisted analysis in diagnosis of plus disease in retinopathy of prematurity. Diagnosis and documentation of retinopathy of prematurity are increasingly being supplemented by digital imaging. The incorporation of computer-aided techniques has the potential to add valuable information and standardization regarding the presence of plus disease, an important criterion in deciding the necessity of treatment of vision-threatening retinopathy of prematurity. A review of literature found that several techniques have been published examining the process and role of computer aided analysis of plus disease in retinopathy of prematurity. These techniques use semiautomated image analysis techniques to evaluate retinal vascular dilation and tortuosity, using calculated parameters to evaluate presence or absence of plus disease. These values are then compared with expert consensus. The study concludes that computer-aided image analysis has the potential to use quantitative and objective criteria to act as a supplemental tool in evaluating for plus disease in the setting of retinopathy of prematurity.
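The semi-automated methods this review describes quantify retinal vascular dilation and tortuosity; a standard tortuosity index (one common choice, not necessarily the one each reviewed system uses) is the vessel path length divided by the straight-line chord between its endpoints. The centerline points below are invented.

```python
import numpy as np

def tortuosity(points):
    """Arc length over chord length for an ordered vessel centerline."""
    pts = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()  # summed segment lengths
    chord = np.linalg.norm(pts[-1] - pts[0])                  # endpoint distance
    return arc / chord

straight = [(0, 0), (1, 0), (2, 0), (3, 0)]
wavy = [(0, 0), (1, 1), (2, -1), (3, 0)]
print(tortuosity(straight), tortuosity(wavy))  # 1.0 for a straight vessel
```

An index near 1 indicates a straight vessel; plus disease is associated with elevated tortuosity and dilation, which these computed parameters are compared against expert consensus to detect.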
NASA Astrophysics Data System (ADS)
Harris, C. T.; Haw, D. W.; Handler, W. B.; Chronik, B. A.
2013-06-01
The time-varying magnetic fields created by the gradient coils in magnetic resonance imaging can produce negative effects on image quality and the system itself. Additionally, they can be a limiting factor to the introduction of non-MR devices such as cardiac pacemakers, orthopedic implants, and surgical robotics. The ability to model the induced currents produced by the switching gradient fields is key to developing methods for reducing these unwanted interactions. In this work, a framework for the calculation of induced currents on conducting surface geometries is summarized. This procedure is then compared to two separate experiments: (1) the analysis of the decay of currents induced upon a conducting cylinder by an insert gradient set within a head only 7 T MR scanner; and (2) analysis of the heat deposited into a small conductor by a uniform switching magnetic field at multiple frequencies and two distinct conductor thicknesses. The method was shown to allow the accurate modeling of the induced time-varying field decay in the first case, and was able to provide accurate estimation of the rise in temperature in the second experiment to within 30% when the skin depth was greater than or equal to the thickness of the conductor.
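The paper's second experiment hinges on the conductor thickness relative to the electromagnetic skin depth. A quick check of that ratio uses the standard formula δ = sqrt(2/(ω μ σ)); copper values are used here as an illustrative choice, not the paper's conductor.

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
SIGMA_CU = 5.8e7       # copper conductivity, S/m (textbook value)

def skin_depth(freq_hz, sigma=SIGMA_CU, mu=MU0):
    """Classical skin depth delta = sqrt(2 / (omega * mu * sigma))."""
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * mu * sigma))

for f in (100.0, 1e3, 10e3):  # representative gradient switching frequencies, Hz
    print(f"{f:8.0f} Hz -> skin depth {skin_depth(f) * 1e3:.2f} mm")
```

When the skin depth drops below the conductor thickness, the induced-current distribution becomes strongly depth-dependent, consistent with the paper's finding that its surface-current model holds only when the skin depth is at least the conductor thickness.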
Chen, Wenjin; Wong, Chung; Vosburgh, Evan; Levine, Arnold J; Foran, David J; Xu, Eugenia Y
2014-07-08
The increasing number of applications of three-dimensional (3D) tumor spheroids as an in vitro model for drug discovery requires their adaptation to large-scale screening formats in every step of a drug screen, including large-scale image analysis. Currently there is no ready-to-use, free image analysis software that meets this large-scale format. Most existing methods involve manually drawing the length and width of the imaged 3D spheroids, which is a tedious and time-consuming process. This study presents a high-throughput image analysis software application, SpheroidSizer, which measures the major and minor axial lengths of imaged 3D tumor spheroids automatically and accurately, calculates the volume of each individual 3D tumor spheroid, and outputs the results in two different forms in spreadsheets for easy manipulation in subsequent data analysis. The main advantage of this software is its powerful image analysis pipeline adapted for large numbers of images, providing a high-throughput computation and quality-control workflow. The estimated time to process 1,000 images is about 15 min on a minimally configured laptop, or around 1 min on a multi-core performance workstation. The graphical user interface (GUI) is also designed for easy quality control, and users can manually override the computer results. The key method used in this software is adapted from the active contour algorithm, also known as Snakes, which is especially suitable for images with the uneven illumination and noisy backgrounds that often plague automated image processing in high-throughput screens. The complementary "Manual Initialize" and "Hand Draw" tools give SpheroidSizer the flexibility to deal with various types of spheroids and images of diverse quality. This high-throughput image analysis software remarkably reduces labor and speeds up the analysis process. Implementing this software helps make 3D tumor spheroids a routine in vitro model for drug screens in industry and academia.
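SpheroidSizer's contour step is beyond a short sketch, but the volume calculation the abstract mentions is simple: with major axis L and minor axis W from the fitted contour, a spheroid's volume is commonly approximated as V = π/6 · L · W², treating it as an ellipsoid of revolution. Whether this is SpheroidSizer's exact formula is an assumption.

```python
import math

def spheroid_volume(major_um, minor_um):
    """Volume in cubic micrometres from axial lengths in micrometres,
    using the common ellipsoid-of-revolution approximation pi/6 * L * W^2."""
    return math.pi / 6.0 * major_um * minor_um ** 2

v = spheroid_volume(500.0, 400.0)  # hypothetical spheroid: 500 x 400 um
print(f"{v:.3e} um^3")
```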
Carotid plaque characterization using CT and MRI scans for synergistic image analysis
NASA Astrophysics Data System (ADS)
Getzin, Matthew; Xu, Yiqin; Rao, Arhant; Madi, Saaussan; Bahadur, Ali; Lennartz, Michelle R.; Wang, Ge
2014-09-01
Noninvasive determination of plaque vulnerability has been a holy grail of medical imaging. Despite advances in tomographic technologies, there is currently no effective way to identify vulnerable atherosclerotic plaques with high sensitivity and specificity. Computed tomography (CT) and magnetic resonance imaging (MRI) are widely used, but neither alone provides sufficient information on plaque properties. Thus, we are motivated to combine CT and MRI to determine whether the composite information can better reflect the histological determination of plaque vulnerability. Two human endarterectomy specimens (1 symptomatic carotid and 1 stable femoral) were imaged using Scanco Medical Viva CT40 and Bruker Pharmascan 16 cm 7 T horizontal MRI/MRS systems. μCT scans were done at 55 kVp and a tube current of 70 mA. Samples underwent RARE-VTR and MSME pulse sequences to measure T1 and T2 values and proton density. The specimens were processed for histology and scored for vulnerability using the American Heart Association criteria. Single-modality analyses were performed through segmentation of key imaging biomarkers (i.e., calcification and lumen), image registration, measurement of the fibrous capsule, and multi-component T1 and T2 decay modeling. Feature differences were analyzed between the unstable (symptomatic carotid) and stable (femoral) plaques. By building on the techniques used in this study, synergistic CT+MRI analysis may provide a promising solution for plaque characterization in vivo.
Oddy, M H; Santiago, J G
2004-01-01
We have developed a method for measuring the electrophoretic mobility of submicrometer, fluorescently labeled particles and the electroosmotic mobility of a microchannel. We derive explicit expressions for the unknown electrophoretic and the electroosmotic mobilities as a function of particle displacements resulting from alternating current (AC) and direct current (DC) applied electric fields. Images of particle displacements are captured using an epifluorescent microscope and a CCD camera. A custom image-processing code was developed to determine image streak lengths associated with AC measurements, and a custom particle tracking velocimetry (PTV) code was devised to determine DC particle displacements. Statistical analysis was applied to relate mobility estimates to measured particle displacement distributions.
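The separation of the two mobilities can be illustrated as a two-equation linear system. The measurement model below is a toy stand-in (the paper derives its own explicit expressions): the DC drift is assumed to reflect the sum of electrophoretic and electroosmotic mobilities, while the AC streak is assumed to carry an attenuated electroosmotic contribution through a hypothetical factor f:

```python
import numpy as np

# Hypothetical measurement model (illustrative; not the paper's expressions):
#   DC: v_dc = (mu_ep + mu_eo) * E_dc           -- particle drifts with EOF plus EP
#   AC: x_ac = (mu_ep + f*mu_eo) * E0 / omega   -- streak amplitude; f < 1 models an
#       attenuated electroosmotic response at the AC frequency
def solve_mobilities(v_dc, E_dc, x_ac, E0, omega, f):
    """Invert the 2x2 linear system for (mu_ep, mu_eo)."""
    A = np.array([[E_dc, E_dc],
                  [E0 / omega, f * E0 / omega]])
    mu_ep, mu_eo = np.linalg.solve(A, np.array([v_dc, x_ac]))
    return mu_ep, mu_eo

# Forward-generate synthetic measurements from assumed mobilities, then invert:
mu_ep_true, mu_eo_true, f = 2.0e-8, 4.0e-8, 0.3     # m^2/(V*s); invented values
E_dc, E0, omega = 1.0e4, 1.0e4, 2 * np.pi * 10.0    # V/m, V/m, rad/s
v_dc = (mu_ep_true + mu_eo_true) * E_dc
x_ac = (mu_ep_true + f * mu_eo_true) * E0 / omega
est = solve_mobilities(v_dc, E_dc, x_ac, E0, omega, f)
```

The point is structural: two independent displacement measurements in different field regimes suffice to determine both unknowns.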
Malyarenko, Dariya; Newitt, David; Wilmes, Lisa; Tudorica, Alina; Helmer, Karl G.; Arlinghaus, Lori R.; Jacobs, Michael A.; Jajamovich, Guido; Taouli, Bachir; Yankeelov, Thomas E.; Huang, Wei; Chenevert, Thomas L.
2015-01-01
Purpose: To characterize system-specific bias across common magnetic resonance imaging (MRI) platforms for quantitative diffusion measurements in multicenter trials. Methods: Diffusion-weighted imaging (DWI) was performed on an ice-water phantom along the superior-inferior (SI) and right-left (RL) orientations spanning ±150 mm. The same scanning protocol was implemented on 14 MRI systems at seven imaging centers. The bias was estimated as the deviation of measured from known apparent diffusion coefficient (ADC) along individual DWI directions. The relative contributions of gradient nonlinearity, shim errors, imaging gradients and eddy currents were assessed independently. The observed bias errors were compared to numerical models. Results: The measured systematic ADC errors scaled quadratically with offset from isocenter, and ranged between −55% (SI) and 25% (RL). Nonlinearity bias was dependent on system design and diffusion gradient direction. Consistent with numerical models, minor ADC errors (±5%) due to shim, imaging and eddy currents were mitigated by double echo DWI and image co-registration of individual gradient directions. Conclusion: The analysis confirms gradient nonlinearity as a major source of spatial DW bias and variability in off-center ADC measurements across MRI platforms, with minor contributions from shim, imaging gradients and eddy currents. The developed protocol enables empiric description of systematic bias in multicenter quantitative DWI studies. PMID:25940607
Malyarenko, Dariya I; Newitt, David; J Wilmes, Lisa; Tudorica, Alina; Helmer, Karl G; Arlinghaus, Lori R; Jacobs, Michael A; Jajamovich, Guido; Taouli, Bachir; Yankeelov, Thomas E; Huang, Wei; Chenevert, Thomas L
2016-03-01
The purpose was to characterize system-specific bias across common magnetic resonance imaging (MRI) platforms for quantitative diffusion measurements in multicenter trials. Diffusion weighted imaging (DWI) was performed on an ice-water phantom along the superior-inferior (SI) and right-left (RL) orientations spanning ± 150 mm. The same scanning protocol was implemented on 14 MRI systems at seven imaging centers. The bias was estimated as a deviation of measured from known apparent diffusion coefficient (ADC) along individual DWI directions. The relative contributions of gradient nonlinearity, shim errors, imaging gradients, and eddy currents were assessed independently. The observed bias errors were compared with numerical models. The measured systematic ADC errors scaled quadratically with offset from isocenter, and ranged between -55% (SI) and 25% (RL). Nonlinearity bias was dependent on system design and diffusion gradient direction. Consistent with numerical models, minor ADC errors (± 5%) due to shim, imaging and eddy currents were mitigated by double echo DWI and image coregistration of individual gradient directions. The analysis confirms gradient nonlinearity as a major source of spatial DW bias and variability in off-center ADC measurements across MRI platforms, with minor contributions from shim, imaging gradients and eddy currents. The developed protocol enables empiric description of systematic bias in multicenter quantitative DWI studies. © 2015 Wiley Periodicals, Inc.
Sakamoto, Ryo; Yakami, Masahiro; Fujimoto, Koji; Nakagomi, Keita; Kubo, Takeshi; Emoto, Yutaka; Akasaka, Thai; Aoyama, Gakuto; Yamamoto, Hiroyuki; Miller, Michael I; Mori, Susumu; Togashi, Kaori
2017-11-01
Purpose: To determine the improvement in radiologist efficiency and performance in the detection of bone metastases at serial follow-up computed tomography (CT) by using a temporal subtraction (TS) technique based on an advanced nonrigid image registration algorithm. Materials and Methods: This retrospective study was approved by the institutional review board, and informed consent was waived. CT image pairs (previous and current scans of the torso) in 60 patients with cancer (primary lesion location: prostate, n = 14; breast, n = 16; lung, n = 20; liver, n = 10) were included. These consisted of 30 positive cases with a total of 65 bone metastases depicted only on current images and confirmed by two radiologists who had access to additional imaging examinations and clinical courses, and 30 matched negative control cases (no bone metastases). Previous CT images were semiautomatically registered to current CT images by the algorithm, and TS images were created. Seven radiologists independently interpreted the CT image pairs to identify newly developed bone metastases, without and with TS images, with an interval of at least 30 days. Jackknife free-response receiver operating characteristic (JAFROC) analysis was conducted to assess observer performance. Reading time was recorded, and usefulness was evaluated with subjective scores of 1-5, with 5 being extremely useful and 1 being useless. The significance of these values was tested with the Wilcoxon signed-rank test. Results: The subtraction images depicted various types of bone metastases (osteolytic, n = 28; osteoblastic, n = 26; mixed osteolytic and blastic, n = 11) as temporal changes. The average reading time was significantly reduced (384.3 vs 286.8 seconds; Wilcoxon signed-rank test, P = .028). The average figure-of-merit value increased from 0.758 to 0.835; however, this difference was not significant (JAFROC analysis, P = .092).
The subjective usefulness survey showed a median score of 5 for use of the technique (range, 3-5). Conclusion: TS images obtained from serial CT scans using nonrigid registration successfully depicted newly developed bone metastases and showed promise for their efficient detection. © RSNA, 2017. Online supplemental material is available for this article.
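Once the previous scan has been nonrigidly registered to the current one, the subtraction step itself is a voxelwise difference. A minimal sketch (real pipelines add smoothing, bone masking, and display windowing; the toy values below are invented):

```python
import numpy as np

def temporal_subtraction(current, previous_registered):
    """Subtract the already-registered previous CT volume from the current one.
    Positive values highlight newly appeared density (e.g. osteoblastic
    metastases); negative values highlight lysis. Widened to int32 so
    differences of int16 HU values cannot overflow."""
    return current.astype(np.int32) - previous_registered.astype(np.int32)

prev = np.zeros((5, 5), dtype=np.int16)     # toy 2-D slice, HU-like units
curr = prev.copy()
curr[2, 2] = 300                            # a new sclerotic (dense) voxel
ts = temporal_subtraction(curr, prev)
```

In the TS image, everything unchanged cancels to zero, so the new focus is the only structure left for the reader to notice.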
NASA Technical Reports Server (NTRS)
Buzulukova, N.; Fok, M.-C.; Goldstein, J.; Valek, P.; McComas, D. J.; Brandt, P. C.
2010-01-01
We present a comparative study of ring current dynamics during strong and moderate storms. The ring current during the strong storm is studied with IMAGE/HENA data near the solar cycle maximum in 2000. The ring current during the moderate storm is studied using energetic neutral atom (ENA) data from the Two Wide-Angle Imaging Neutral-Atom Spectrometers (TWINS) mission during the solar minimum in 2008. For both storms, the local time distributions of ENA emissions show signatures of postmidnight enhancement (PME) during the main phases. To model the ring current and ENA emissions, we use the Comprehensive Ring Current Model (CRCM). CRCM results show that the main-phase ring current pressure peaks in the premidnight-dusk sector, while the most intense CRCM-simulated ENA emissions show PME signatures. We analyze two factors to explain this difference: the dependence of the charge-exchange cross section on energy, and the pitch-angle distributions of the ring current. We find that the IMF By effect (twisting of the convection pattern due to By) is not needed to form the PME. Additionally, the PME is more pronounced for the strong storm, although relative shielding, and hence electric field skewing, is well developed for both events.
Particle sizing of pharmaceutical aerosols via direct imaging of particle settling velocities.
Fishler, Rami; Verhoeven, Frank; de Kruijf, Wilbur; Sznitman, Josué
2018-02-15
We present a novel method for characterizing in near real-time the aerodynamic particle size distributions from pharmaceutical inhalers. The proposed method is based on direct imaging of airborne particles followed by a particle-by-particle measurement of settling velocities using image analysis and particle tracking algorithms. Owing to the simplicity of its principle of operation, this method has the potential to circumvent biases of current real-time particle analyzers (e.g. time-of-flight analysis), while offering a cost-effective solution. The simple device can also be constructed in laboratory settings from off-the-shelf materials for research purposes. To demonstrate the feasibility and robustness of the measurement technique, we have conducted benchmark experiments whereby aerodynamic particle size distributions are obtained from several commercially available dry powder inhalers (DPIs). Our measurements yield size distributions (i.e. MMAD and GSD) closely in line with those obtained from time-of-flight analysis and cascade impactors, suggesting that our imaging-based method may embody an attractive methodology for rapid inhaler testing and characterization. In a final step, we discuss some of the ongoing limitations of the current prototype and conceivable routes for improving the technique. Copyright © 2017 Elsevier B.V. All rights reserved.
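The measurement rests on Stokes settling: a particle of aerodynamic diameter d_a falls at v = ρ₀ g d_a² / (18 μ), where ρ₀ is unit density, so each tracked velocity inverts to a diameter. A sketch using standard constants (the slip correction is neglected, which biases sub-micron sizes; the constants are textbook values, not the paper's):

```python
import math

MU_AIR = 1.81e-5    # Pa*s, dynamic viscosity of air near 20 C
RHO_0 = 1000.0      # kg/m^3, unit density convention for aerodynamic diameter
G = 9.81            # m/s^2

def settling_velocity(d_a):
    """Stokes terminal settling velocity (m/s) for aerodynamic diameter d_a (m)."""
    return RHO_0 * G * d_a ** 2 / (18.0 * MU_AIR)

def aerodynamic_diameter(v_settle):
    """Invert Stokes' law: d_a = sqrt(18*mu*v / (rho0*g)).
    No Cunningham slip correction, so sub-micron particles are under-resolved."""
    return math.sqrt(18.0 * MU_AIR * v_settle / (RHO_0 * G))
```

Applying the inversion to every tracked particle yields the diameter distribution from which MMAD and GSD follow as the geometric median and geometric standard deviation.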
Promising critical current density characteristics of Ag-sheathed (Sr,Na)Fe2As2 tape
NASA Astrophysics Data System (ADS)
Suwa, Takahiro; Pyon, Sunseng; Tamegai, Tsuyoshi; Awaji, Satoshi
2018-06-01
We report the fabrication of (Sr,Na)Fe2As2 superconducting tapes by the powder-in-tube technique and their characteristics, including the transport critical current density Jc at 4.2 K up to 140 kOe, the magnetic Jc estimated from magnetic hysteresis curves, magneto-optical (MO) images, and scanning electron microscopy images. In a tape sintered at 875 °C for 1 h, the transport Jc reaches 26 kA/cm² at 4.2 K and 100 kOe for a field perpendicular to the tape surface. When the field is parallel to the tape surface, the magnetic Jc exceeds the practical level of 100 kA/cm² at 4.2 K below 25 kOe. Analysis of the MO images reveals clear current discontinuity lines in the core, indicating that the current flows homogeneously and the connections between grains are strong in the core.
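Estimating a "magnetic Jc" from hysteresis curves conventionally uses the extended Bean critical-state model. A sketch with the standard CGS-practical formula for a rectangular sample (the dimensions and hysteresis width below are invented, and the paper does not state which exact variant it used):

```python
def bean_jc(delta_m, a, b):
    """Extended Bean critical-state estimate for a rectangular cross-section
    with in-plane dimensions a <= b (cm); delta_m is the magnetization
    hysteresis width in emu/cm^3. Returns Jc in A/cm^2:
        Jc = 20 * delta_m / (a * (1 - a/(3*b)))"""
    assert a <= b, "convention: a is the shorter in-plane dimension"
    return 20.0 * delta_m / (a * (1.0 - a / (3.0 * b)))

# Square cross-section limit reduces to Jc = 30 * delta_m / a:
jc = bean_jc(1000.0, 0.1, 0.1)   # A/cm^2, invented inputs
```

Because the formula scales inversely with the current-loop dimension, the MO-imaged discontinuity lines matter: they confirm current circulates over the whole core rather than within isolated grains, validating the use of the sample dimensions in the denominator.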
NASA Astrophysics Data System (ADS)
Khansari, Maziyar M.; O'Neill, William; Penn, Richard; Blair, Norman P.; Chau, Felix; Shahidi, Mahnaz
2017-03-01
The conjunctiva is a densely vascularized tissue of the eye that provides an opportunity for imaging of human microcirculation. In the current study, automated fine structure analysis of conjunctival microvasculature images was performed to discriminate stages of diabetic retinopathy (DR). The study population consisted of one group of nondiabetic control subjects (NC) and three groups of diabetic subjects, with no clinical DR (NDR), non-proliferative DR (NPDR), or proliferative DR (PDR). Ordinary least squares regression and Fisher linear discriminant analyses were performed to automatically discriminate images between group pairs of subjects. Human observers who were masked to the grouping of subjects performed image discrimination between group pairs. Over 80% and 70% of images of subjects with clinical and non-clinical DR, respectively, were correctly discriminated by the automated method. The discrimination rates of the automated method were higher than those of the human observers. The fine structure analysis of conjunctival microvasculature images provided discrimination of DR stages and can be potentially useful for DR screening and monitoring.
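Of the two classifiers named, Fisher's linear discriminant has a closed form: project onto w = S_w⁻¹(m₁ − m₂), the direction maximizing between-class over within-class scatter. A self-contained sketch on synthetic two-group data (the two-dimensional "vessel features" and group separations are invented for illustration):

```python
import numpy as np

def fisher_direction(X1, X2):
    """Fisher linear discriminant direction w = Sw^-1 (m1 - m2),
    where Sw is the pooled within-class scatter matrix."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
          + np.cov(X2, rowvar=False) * (len(X2) - 1))
    return np.linalg.solve(Sw, m1 - m2)

rng = np.random.default_rng(0)
A = rng.normal([0.0, 0.0], 0.5, size=(50, 2))   # e.g. control (NC) features
B = rng.normal([2.0, 2.0], 0.5, size=(50, 2))   # e.g. PDR features
w = fisher_direction(A, B)

# Classify by thresholding the 1-D projection at the midpoint of the group means:
thresh = (A @ w).mean() / 2 + (B @ w).mean() / 2
acc = ((A @ w > thresh).mean() + (B @ w < thresh).mean()) / 2
```

With well-separated groups the projected accuracy approaches 100%, which is the regime the reported >80% discrimination rates suggest for the clinical-DR comparisons.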
Crimp, Martin A
2006-05-01
The imaging and characterization of dislocations is commonly carried out by thin foil transmission electron microscopy (TEM) using diffraction contrast imaging. However, the thin foil approach is limited by difficult sample preparation, thin foil artifacts, relatively small viewable areas, and constraints on carrying out in situ studies. Electron channeling contrast imaging (ECCI) offers an alternative approach for imaging crystalline defects, including dislocations. Because ECCI is carried out with a field emission gun scanning electron microscope (FEG-SEM) using bulk specimens, many of the limitations of TEM thin foil analysis are overcome. This paper outlines the development of electron channeling patterns and channeling imaging to the current state of the art. The experimental parameters and setup necessary to carry out routine channeling imaging are reviewed. A number of examples that illustrate some of the advantages of ECCI over thin foil TEM are presented, along with a discussion of some of the limitations on carrying out channeling contrast analysis of defect structures. Copyright (c) 2006 Wiley-Liss, Inc.
Contrast enhancement of bite mark images using the grayscale mixer in ACR in Photoshop®.
Evans, Sam; Noorbhai, Suzanne; Lawson, Zoe; Stacey-Jones, Seren; Carabott, Romina
2013-05-01
Enhanced images may improve bite mark edge definition, assisting forensic analysis. Current contrast enhancement involves color extraction, viewing layered images by channel. A novel technique, producing a single enhanced image using the grayscale mix panel within Adobe Camera Raw®, has been developed and assessed here, allowing adjustment of multiple color channels simultaneously. Stage 1 measured RGB values in 72 versions of a color chart image; eight sliders in Photoshop® were adjusted in 25% increments, and all corresponding colors were affected. Stage 2 used a bite mark image, and found that only the red, orange, and yellow sliders had discernible effects. Stage 3 assessed modality preference between color, grayscale, and enhanced images; on average, the 22 survey participants chose the enhanced image as better defined for nine out of 10 bite marks. The study has shown potential benefits for this new technique. However, further research is needed before use in the analysis of bite marks. © 2013 American Academy of Forensic Sciences.
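The underlying operation of a grayscale mixer is a weighted sum of color channels collapsed into one image. A minimal numpy sketch of the idea (ACR's actual mixer works in eight perceptual color ranges such as orange and yellow, not raw RGB, so this is a simplification):

```python
import numpy as np

def grayscale_mix(rgb, w_r, w_g, w_b):
    """Collapse an RGB image to grayscale with per-channel weights,
    mimicking the grayscale-mix idea of boosting chosen colors.
    Weights may exceed 1 or go negative; output is clipped to 8-bit range."""
    gray = w_r * rgb[..., 0] + w_g * rgb[..., 1] + w_b * rgb[..., 2]
    return np.clip(gray, 0, 255).astype(np.uint8)

img = np.zeros((2, 2, 3), dtype=np.float64)
img[..., 0] = 200.0                       # a reddish patch, as in bruising
out = grayscale_mix(img, 1.0, 0.0, 0.0)   # boost red, suppress green/blue
```

Raising the red-family weights while zeroing the others is the numeric analogue of the finding that only the red, orange, and yellow sliders affected the bite mark images.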
Light Microscopy at Maximal Precision
NASA Astrophysics Data System (ADS)
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
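The gain from fitting a model rather than marking pixels can be seen in one dimension: for a Gaussian "particle", log-intensity is exactly quadratic, so a polynomial fit recovers the center to sub-pixel precision. PERI itself fits a full physical optics model in 3-D; this is only the idea in miniature, with invented parameters:

```python
import numpy as np

# Synthetic 1-D "particle" with a deliberately sub-pixel center:
x = np.arange(32, dtype=float)
x0_true, sigma = 15.37, 3.0
y = np.exp(-(x - x0_true) ** 2 / (2 * sigma ** 2))

# Naive marking: take the brightest pixel (integer-limited accuracy).
pixel_guess = float(x[np.argmax(y)])

# Model fitting: log of a Gaussian is a parabola, so fit a quadratic and
# read the center off its vertex, -c1 / (2*c2).
c2, c1, c0 = np.polyfit(x, np.log(y), 2)
x0_fit = -c1 / (2 * c2)
```

The fitted center lands on 15.37 essentially exactly, while the brightest pixel can only ever answer 15; with noise the same principle holds, the fit degrading gracefully toward the information-theoretic limit rather than the pixel grid.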
Internet (WWW) based system of ultrasonic image processing tools for remote image analysis.
Zeng, Hong; Fei, Ding-Yu; Fu, Cai-Ting; Kraft, Kenneth A
2003-07-01
Ultrasonic Doppler color imaging can provide anatomic information and simultaneously render flow information within blood vessels for diagnostic purposes. Many researchers are currently developing ultrasound image processing algorithms in order to provide physicians with accurate clinical parameters from the images. Because researchers use a variety of computer languages and work on different computer platforms to implement their algorithms, it is difficult for other researchers and physicians to access those programs. A system has been developed using World Wide Web (WWW) technologies and HTTP communication protocols to publish our ultrasonic Angle Independent Doppler Color Image (AIDCI) processing algorithm and several general measurement tools on the Internet, where authorized researchers and physicians can easily access the program using web browsers to carry out remote analysis of their local ultrasonic images or images from the database. In order to overcome potential incompatibility between programs and users' computer platforms, ActiveX technology was used in this project. The technique developed may also be used in other research fields.
NASA Astrophysics Data System (ADS)
Miller, C. J.; Gasson, D.; Fuentes, E.
2007-10-01
The NOAO NVO Portal is a web application for one-stop discovery, analysis, and access to VO-compliant imaging data and services. The current release allows for GUI-based discovery of nearly a half million images from archives such as the NOAO Science Archive, the Hubble Space Telescope WFPC2 and ACS instruments, XMM-Newton, Chandra, and ESO's INT Wide-Field Survey, among others. The NOAO Portal allows users to view image metadata, footprint wire-frames, FITS image previews, and provides one-click access to science quality imaging data throughout the entire sky via the Firefox web browser (i.e., no applet or code to download). Users can stage images from multiple archives at the NOAO NVO Portal for quick and easy bulk downloads. The NOAO NVO Portal also provides simplified and direct access to VO analysis services, such as the WESIX catalog generation service. We highlight the features of the NOAO NVO Portal (http://nvo.noao.edu).
Imaging and Modeling of Myocardial Metabolism
Jamshidi, Neema; Karimi, Afshin; Birgersdotter-Green, Ulrika; Hoh, Carl
2010-01-01
Current imaging methods have focused on evaluation of myocardial anatomy and function. However, since myocardial metabolism and function are interrelated, metabolic myocardial imaging techniques, such as positron emission tomography, single photon emission tomography, and magnetic resonance spectroscopy, present novel opportunities for probing myocardial pathology and developing new therapeutic approaches. Potential clinical applications of metabolic imaging include hypertensive and ischemic heart disease, heart failure, cardiac transplantation, as well as cardiomyopathies. Furthermore, response to therapeutic intervention can be monitored using metabolic imaging. Analysis of metabolic data in the past has been limited, focusing primarily on isolated metabolites. Models of myocardial metabolism, however, such as the oxygen transport and cellular energetics model and constraint-based metabolic network modeling, offer opportunities for evaluating interactions among greater numbers of metabolites in the heart. In this review, the roles of metabolic myocardial imaging and of analyzing metabolic data with modeling methods for expanding our understanding of cardiac pathology are discussed. PMID:20559785
Image Segmentation Analysis for NASA Earth Science Applications
NASA Technical Reports Server (NTRS)
Tilton, James C.
2010-01-01
NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region-growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region-growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region-growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
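The recursive subdivision at the heart of the divide-and-conquer step can be sketched in a few lines: quarter the image until each tile is small enough for the quadratic-cost pairwise-dissimilarity stage. This shows only the splitting half of RHSEG; the segment-and-merge-back-up phase, where the real work happens, is omitted:

```python
def recursive_tiles(shape, max_pixels):
    """Recursively quarter an image of the given (height, width) until each
    tile has at most max_pixels pixels; returns the list of leaf tile shapes.
    A sketch of RHSEG's divide step only -- RHSEG then segments each tile
    and merges the results back up the recursion."""
    h, w = shape
    if h * w <= max_pixels:
        return [shape]
    tiles = []
    for hh in (h - h // 2, h // 2):        # split height (handles odd sizes)
        for ww in (w - w // 2, w // 2):    # split width
            tiles += recursive_tiles((hh, ww), max_pixels)
    return tiles

tiles = recursive_tiles((512, 512), 64 * 64)   # -> 64 tiles of 64 x 64
```

Capping the tile size caps the number of pixel pairs per tile, which is exactly the combinatorial explosion the abstract describes.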
Flightspeed Integral Image Analysis Toolkit
NASA Technical Reports Server (NTRS)
Thompson, David R.
2009-01-01
The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image, and it facilitates a wide range of fast image-processing functions. The toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order of magnitude speed increase over alternative software libraries currently in use by the research community. FIIAT can commercially support intelligent video cameras used in surveillance. It is also useful for object recognition by robots or other autonomous vehicles.
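The integral image FIIAT builds on is a summed-area table: one cumulative pass over the image makes any rectangle sum a four-lookup, constant-time operation. A numpy sketch of the structure (FIIAT itself is C, and its exact API is not shown here):

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row/column, so
    ii[y, x] = sum of img[:y, :x] and rectangle sums need no edge cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in four lookups, independent of the area."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
```

Because each rectangle costs the same regardless of size, box filters and subwindow descriptors over many scales become cheap, which is what makes real-time texture analysis feasible on integer-only hardware.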
Image analysis tools and emerging algorithms for expression proteomics
English, Jane A.; Lisacek, Frederique; Morris, Jeffrey S.; Yang, Guang-Zhong; Dunn, Michael J.
2012-01-01
Since their origins in academic endeavours in the 1970s, computational analysis tools have matured into a number of established commercial packages that underpin research in expression proteomics. In this paper we describe the image analysis pipeline for the established 2-D Gel Electrophoresis (2-DE) technique of protein separation, and by first covering signal analysis for Mass Spectrometry (MS), we also explain the current image analysis workflow for the emerging high-throughput ‘shotgun’ proteomics platform of Liquid Chromatography coupled to MS (LC/MS). The bioinformatics challenges for both methods are illustrated and compared, whilst existing commercial and academic packages and their workflows are described from both a user’s and a technical perspective. Attention is given to the importance of sound statistical treatment of the resultant quantifications in the search for differential expression. Despite wide availability of proteomics software, a number of challenges have yet to be overcome regarding algorithm accuracy, objectivity and automation, generally due to deterministic spot-centric approaches that discard information early in the pipeline, propagating errors. We review recent advances in signal and image analysis algorithms in 2-DE, MS, LC/MS and Imaging MS. Particular attention is given to wavelet techniques, automated image-based alignment and differential analysis in 2-DE, Bayesian peak mixture models and functional mixed modelling in MS, and group-wise consensus alignment methods for LC/MS. PMID:21046614
Reduction and analysis techniques for infrared imaging data
NASA Technical Reports Server (NTRS)
Mccaughrean, Mark
1989-01-01
Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories, and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focused on the business of turning raw data into scientifically significant information: turning pictures into papers or, equivalently, astronomy into astrophysics, both accurately and efficiently. Discussed here are some of the factors that can be considered at each of three major stages, acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near-infrared imaging.
A comparative study of new and current methods for dental micro-CT image denoising
Lashgari, Mojtaba; Qin, Jie; Swain, Michael
2016-01-01
Objectives: The aim of the current study was to evaluate the application of two advanced noise-reduction algorithms for dental micro-CT images and to implement a comparative analysis of the performance of new and current denoising algorithms. Methods: Denoising was performed using Gaussian and median filters as the current filtering approaches, and the block-matching and 3-D filtering (BM3D) method and the total variation method as the proposed new filtering techniques. The performance of the denoising methods was evaluated quantitatively using contrast-to-noise ratio (CNR), edge preserving index (EPI) and blurring indexes, as well as qualitatively using the double-stimulus continuous quality scale procedure. Results: The BM3D method had the best performance with regard to preservation of fine textural features (CNR-edge), non-blurring of the whole image (blurring index), the clinical visual score in images with very fine features and the overall visual score for all types of images. On the other hand, the total variation method provided the best results with regard to smoothing of images in texture-free areas (CNR-tex-free) and to preserving the edges and borders of image features (EPI). Conclusions: The BM3D method is the most reliable technique for denoising dental micro-CT images with very fine textural details, such as shallow enamel lesions, in which the preservation of the texture and fine features is of the greatest importance. On the other hand, the total variation method is the technique of choice for denoising images without very fine textural details, in which the clinician or researcher is interested mainly in anatomical features and structural measurements. PMID:26764583
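The CNR figure of merit is, in one common form, the feature/background mean difference over the background noise. A sketch of that definition (the paper's CNR-edge and CNR-tex-free variants choose the regions differently, and its exact formula is not given in the abstract):

```python
import numpy as np

def cnr(img, fg_mask, bg_mask):
    """Contrast-to-noise ratio: |mean(fg) - mean(bg)| / std(bg).
    One common definition; higher means the feature stands further
    above the noise floor after denoising."""
    fg, bg = img[fg_mask], img[bg_mask]
    return abs(fg.mean() - bg.mean()) / bg.std()

# Toy 1-D "image": a bright feature at 100 against a noisy background at 50 +/- 5.
img = np.array([100.0, 100.0, 45.0, 55.0, 45.0, 55.0])
fg = np.array([True, True, False, False, False, False])
value = cnr(img, fg, ~fg)
```

A good denoiser raises CNR by shrinking the background standard deviation without eroding the feature mean, which is exactly the trade-off the EPI and blurring indexes then check from the other side.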
Radar image enhancement and simulation as an aid to interpretation and training
NASA Technical Reports Server (NTRS)
Frost, V. S.; Stiles, J. A.; Holtzman, J. C.; Dellwig, L. F.; Held, D. N.
1980-01-01
Greatly increased activity in the field of radar image applications in the coming years demands that techniques of radar image analysis, enhancement, and simulation be developed now. Since the statistical nature of radar imagery differs from that of photographic imagery, one finds that the required digital image processing algorithms (e.g., for improved viewing and feature extraction) differ from those currently existing. This paper addresses these problems and discusses work at the Remote Sensing Laboratory in image simulation and processing, especially for systems comparable to the formerly operational SEASAT synthetic aperture radar.
Contrast-enhanced endoscopic ultrasonography: advance and current status
2014-01-01
Endoscopic ultrasonography (EUS) technology has progressed greatly, incorporating color and power Doppler imaging, three-dimensional imaging, electronic scanning, tissue harmonic imaging, and elastography; one of the most important developments is the ability to acquire contrast-enhanced images. The blood flow in small vessels and the parenchymal microvasculature of the target lesion can be observed non-invasively by contrast-enhanced EUS (CE-EUS). Through hemodynamic analysis, CE-EUS permits the diagnosis of various gastrointestinal diseases and the differential diagnosis between benign and malignant tumors. Recently, mechanical innovations and the development of contrast agents have increased the use of CE-EUS in the diagnostic field, as well as for the assessment of the efficacy of therapeutic agents. The advances in and the current status of CE-EUS are discussed in this review. PMID:25038805
Tsipouras, Markos G; Giannakeas, Nikolaos; Tzallas, Alexandros T; Tsianou, Zoe E; Manousou, Pinelopi; Hall, Andrew; Tsoulos, Ioannis; Tsianos, Epameinondas
2017-03-01
Collagen proportional area (CPA) extraction in liver biopsy images provides the degree of fibrosis expansion in liver tissue, which is the most characteristic histological alteration in hepatitis C virus (HCV) infection. Assessment of the fibrotic tissue is currently based on semiquantitative staging scores such as Ishak and Metavir. Since its introduction as a fibrotic tissue assessment technique, CPA calculation based on image analysis has proven to be more accurate than semiquantitative scores. However, CPA has yet to reach everyday clinical practice, since the lack of standardized and robust computerized image analysis methods for CPA assessment has proven to be a major limitation. The current work introduces a three-stage, fully automated methodology for CPA extraction based on machine learning techniques. Specifically, clustering algorithms have been employed for background-tissue separation and for fibrosis detection in liver tissue regions, in the first and the third stage of the methodology, respectively. Because several types of tissue regions can appear in the image (such as blood clots, muscle tissue, structural collagen, etc.), classification algorithms have been employed to identify liver tissue regions and exclude all other non-liver tissue regions from CPA computation. For the evaluation of the methodology, 79 liver biopsy images have been employed, obtaining 1.31% mean absolute CPA error with a concordance correlation coefficient of 0.923. The proposed methodology is designed to (i) avoid the manual threshold-based and region selection processes widely used in similar approaches presented in the literature, and (ii) minimize CPA calculation time. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
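The clustering stages can be sketched with a tiny one-dimensional two-means on stain intensity. This is illustrative only: the actual pipeline clusters richer color features and adds a classification stage to exclude non-liver regions before the ratio is computed:

```python
import numpy as np

def two_means(values, iters=20):
    """Minimal 1-D k-means (k=2): returns a boolean mask of the
    higher-mean cluster. Stands in for the clustering stages of
    the CPA pipeline."""
    c = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        hi = np.abs(values - c[1]) < np.abs(values - c[0])
        c = np.array([values[~hi].mean(), values[hi].mean()])
    return hi

def cpa_percent(stain_intensity, tissue_mask):
    """CPA = collagen pixels / tissue pixels * 100, with collagen found
    by clustering stain intensity inside the tissue mask."""
    vals = stain_intensity[tissue_mask]
    collagen = two_means(vals)
    return 100.0 * collagen.sum() / tissue_mask.sum()

# Toy biopsy: 20% of the tissue stains strongly (0.9) against a 0.1 background.
img = np.full((10, 10), 0.1)
img[:2, :] = 0.9
mask = np.ones((10, 10), dtype=bool)
```

Replacing the manual threshold with a data-driven cluster boundary is what makes the ratio reproducible across differently stained slides, which is the standardization gap the paper targets.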
Mass spectrometry imaging for visualizing organic analytes in food.
Handberg, Eric; Chingin, Konstantin; Wang, Nannan; Dai, Ximo; Chen, Huanwen
2015-01-01
The demand for rapid chemical imaging of food products is steadily increasing. Mass spectrometry (MS) offers excellent molecular specificity of analysis and is, therefore, a very attractive method for chemical profiling. The use of MS for food imaging has increased significantly over the past decade, aided by the emergence of various ambient ionization techniques that allow direct and rapid analysis in the ambient environment. In this article, the current status of food imaging with mass spectrometry imaging (MSI) is reviewed. The described approaches include matrix-assisted laser desorption/ionization (MALDI), but emphasize desorption atmospheric pressure photoionization (DAPPI), electrospray-assisted laser desorption/ionization (ELDI), probe electrospray ionization (PESI), surface desorption atmospheric pressure chemical ionization (SDAPCI), and laser ablation flowing atmospheric pressure afterglow (LA-FAPA). The methods are compared with regard to spatial resolution; analysis speed and time; limit of detection; and technical aspects. The performance of each method is illustrated with the description of a related application. Specific requirements in food imaging are discussed. © 2014 Wiley Periodicals, Inc.
Rapid development of medical imaging tools with open-source libraries.
Caban, Jesus J; Joshi, Alark; Nagy, Paul
2007-11-01
Rapid prototyping is an important element in researching new imaging analysis techniques and developing custom medical applications. In the last ten years, the open source community and the number of open source libraries and freely available frameworks for biomedical research have grown significantly. Many of the tools they offer are now considered standards in medical image analysis, computer-aided diagnosis, and medical visualization. A cursory review of the peer-reviewed literature in imaging informatics (indeed, in almost any information technology-dependent scientific discipline) indicates the current reliance on open source libraries to accelerate development and validation of processes and techniques. In this survey paper, we review and compare a few of the most successful open source libraries and frameworks for medical application development. Our dual intentions are to provide evidence that these approaches already constitute a vital and essential part of medical image analysis, diagnosis, and visualization and to motivate the reader to use open source libraries and software for rapid prototyping of medical applications and tools.
Photogrammetric analysis of horizon panoramas: The Pathfinder landing site in Viking orbiter images
Oberst, J.; Jaumann, R.; Zeitler, W.; Hauber, E.; Kuschel, M.; Parker, T.; Golombek, M.; Malin, M.; Soderblom, L.
1999-01-01
Tiepoint measurements, block adjustment techniques, and sunrise/sunset pictures were used to obtain precise pointing data with respect to north for a set of 33 IMP horizon images. Azimuth angles for five prominent topographic features seen at the horizon were measured and correlated with locations of these features in Viking orbiter images. Based on this analysis, the Pathfinder line/sample coordinates in two raw Viking images were determined with approximate errors of 1 pixel, or 40 m. Identification of the Pathfinder location in orbit imagery yields geological context for surface studies of the landing site. Furthermore, the precise determination of coordinates in images together with the known planet-fixed coordinates of the lander make the Pathfinder landing site the most important anchor point in current control point networks of Mars. Copyright 1999 by the American Geophysical Union.
NASA Technical Reports Server (NTRS)
Conel, J. E.; Lang, H. R.; Paylor, E. D.; Alley, R. E.
1985-01-01
A Landsat-4 Thematic Mapper (TM) image of the Wind River Basin area in Wyoming is currently under analysis for stratigraphic and structural mapping and for assessment of spectral and spatial characteristics using visible, near infrared, and short wavelength infrared bands. To estimate the equivalent Lambertian surface reflectance, TM radiance data were calibrated to remove atmospheric and instrumental effects. Reflectance measurements for homogeneous natural and cultural targets were acquired about one year after data acquisition. Calibration data obtained during the analysis were used to calculate new gains and offsets to improve scanner response for earth science applications. It is shown that the principal component images calculated from the TM data were the result of linear transformations of ground reflectance. In images prepared from this transform, the separation of spectral classes was independent of systematic atmospheric and instrumental factors. Several examples of the processed images are provided.
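The calibration step described above, which uses field reflectance measurements of homogeneous ground targets to remove atmospheric and instrumental effects, is in the spirit of empirical line calibration. A hedged single-band sketch; the target DN and reflectance values are invented for illustration:

```python
def empirical_line(dn_pair, reflectance_pair):
    # Fit reflectance = gain * DN + offset for one band from two
    # homogeneous ground targets (one dark, one bright) measured in situ.
    (d1, d2), (r1, r2) = dn_pair, reflectance_pair
    gain = (r2 - r1) / (d2 - d1)
    offset = r1 - gain * d1
    return gain, offset

# Invented example: dark target DN 20 -> reflectance 0.05,
# bright target DN 120 -> reflectance 0.55.
gain, offset = empirical_line((20, 120), (0.05, 0.55))
```

Because the fitted transform is linear, principal components computed from calibrated data are themselves linear transformations of ground reflectance, consistent with the abstract's observation that spectral-class separation becomes independent of systematic atmospheric and instrumental factors.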
Jo, Javier A.; Fang, Qiyin; Marcu, Laura
2007-01-01
We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
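The claimed speed comes from expanding every pixel's decay on one fixed temporal basis, so the expansion coefficients of all pixels fall out of a single linear solve. The sketch below uses a simple multi-exponential basis as a stand-in for the discrete Laguerre basis; time points, lifetimes, and pixel count are invented:

```python
import numpy as np

# Fixed temporal basis (32 time bins, 3 basis functions). A real
# implementation would use discrete Laguerre functions here.
t = np.arange(32, dtype=float)
basis = np.stack([np.exp(-t / tau) for tau in (2.0, 8.0, 20.0)], axis=1)  # (32, 3)

rng = np.random.default_rng(0)
true_coeffs = rng.uniform(0.5, 1.5, size=(100, 3))   # one coefficient row per pixel
decays = true_coeffs @ basis.T                       # (100, 32) noiseless decays

# One least-squares solve recovers the coefficients of ALL pixels at
# once -- no per-pixel iterative fitting, hence the speed advantage.
coeffs = np.linalg.lstsq(basis, decays.T, rcond=None)[0].T
```

Maps of the fitted coefficients are then themselves images, matching the abstract's point that the coefficient maps provide a new domain for representing FLIM information.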
Sea Surface Wakes Observed by Spaceborne SAR in the Offshore Wind Farms
NASA Astrophysics Data System (ADS)
Li, Xiaoming; Lehner, Susanne; Jacobsen, Sven
2014-11-01
In this paper, we present X-band spaceborne synthetic aperture radar (SAR) TerraSAR-X (TS-X) images acquired over offshore wind farms in the North Sea and the East China Sea. The high-spatial-resolution SAR images show different sea surface wake patterns downstream of the offshore wind turbines. The analysis suggests that there are two major types of wakes among the observed cases. Wind turbine wakes, generated by the flow of wind around the turbines, are the most frequently observed. In addition, at near-shore wind farm sites with strong local tidal currents, tidal current wakes induced by the current impinging on the wind turbine piles are also observed in the high-spatial-resolution TS-X images. The discrimination of the two types of wakes observed in the offshore wind farms is also described.
A methodology for image quality evaluation of advanced CT systems.
Wilson, Joshua M; Christianson, Olav I; Richard, Samuel; Samei, Ehsan
2013-03-01
This work involved the development of a phantom-based method to quantify the performance of tube current modulation and iterative reconstruction in modern computed tomography (CT) systems. The quantification included resolution, HU accuracy, noise, and noise texture accounting for the impact of contrast, prescribed dose, reconstruction algorithm, and body size. A 42-cm-long, 22.5-kg polyethylene phantom was designed to model four body sizes. Each size was represented by a uniform section, for the measurement of the noise-power spectrum (NPS), and a feature section containing various rods, for the measurement of HU and the task-based modulation transfer function (TTF). The phantom was scanned on a clinical CT system (GE, 750HD) using a range of tube current modulation settings (NI levels) and reconstruction methods (FBP and ASIR30). An image quality analysis program was developed to process the phantom data to calculate the targeted image quality metrics as a function of contrast, prescribed dose, and body size. The phantom fabrication closely followed the design specifications. In terms of tube current modulation, the tube current and resulting image noise varied as a function of phantom size as expected based on the manufacturer specification: From the 16- to 37-cm section, the HU contrast for each rod was inversely related to phantom size, and noise was relatively constant (<5% change). With iterative reconstruction, the TTF exhibited a contrast dependency with better performance for higher contrast objects. At low noise levels, TTFs of iterative reconstruction were better than those of FBP, but at higher noise, that superiority was not maintained at all contrast levels. Relative to FBP, the NPS of iterative reconstruction exhibited an ~30% decrease in magnitude and a 0.1 mm(-1) shift in the peak frequency. Phantom and image quality analysis software were created for assessing CT image quality over a range of contrasts, doses, and body sizes. 
The testing platform enabled robust NPS, TTF, HU, and pixel noise measurements as a function of body size capable of characterizing the performance of reconstruction algorithms and tube current modulation techniques.
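The noise-power spectrum measurement from the uniform phantom section can be sketched as the ensemble-averaged squared DFT of mean-subtracted ROIs. This is the textbook estimator, not necessarily the authors' exact implementation:

```python
import numpy as np

def nps_2d(rois, pixel_size_mm):
    # rois: (n_rois, N, N) stack of square patches from the uniform
    # section; each patch is detrended by subtracting its own mean.
    n_rois, N, _ = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    dft = np.fft.fft2(detrended)
    # NPS(u, v) = (pixel area / N^2) * ensemble mean of |DFT|^2
    return (pixel_size_mm ** 2 / (N * N)) * np.mean(np.abs(dft) ** 2, axis=0)
```

A useful sanity check: integrating the NPS over spatial frequency (frequency bin width 1/(N * pixel size) per axis) recovers the pixel variance, which is how the ~30% magnitude decrease reported for iterative reconstruction translates into lower image noise.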
Infrared spectroscopic imaging: Label-free biochemical analysis of stroma and tissue fibrosis.
Nazeer, Shaiju S; Sreedhar, Hari; Varma, Vishal K; Martinez-Marin, David; Massie, Christine; Walsh, Michael J
2017-11-01
Infrared spectroscopic tissue imaging is a potentially powerful adjunct tool to current histopathology techniques. By coupling the biochemical signature obtained through infrared spectroscopy to the spatial information offered by microscopy, this technique can selectively analyze the chemical composition of different features of unlabeled, unstained tissue sections. In the past, the tissue features that have received the most interest were parenchymal and epithelial cells, chiefly due to their involvement in dysplasia and progression to carcinoma; however, the field has recently turned its focus toward stroma and areas of fibrotic change. These components of tissue present an untapped source of biochemical information that can shed light on many diverse disease processes, and potentially hold useful predictive markers for these same pathologies. Here we review the recent applications of infrared spectroscopic imaging to stromal and fibrotic regions of diseased tissue, and explore the potential of this technique to advance current capabilities for tissue analysis. Copyright © 2017 Elsevier Ltd. All rights reserved.
IQM: An Extensible and Portable Open Source Application for Image and Signal Analysis in Java
Kainz, Philipp; Mayrhofer-Reinhartshuber, Michael; Ahammer, Helmut
2015-01-01
Image and signal analysis applications are substantial in scientific research. Both open source and commercial packages provide a wide range of functions for image and signal analysis, which are sometimes supported very well by the communities in the corresponding fields. Commercial software packages have the major drawback of being expensive and having undisclosed source code, which hampers extending the functionality if there is no plugin interface or similar option available. However, both variants cannot cover all possible use cases and sometimes custom developments are unavoidable, requiring open source applications. In this paper we describe IQM, a completely free, portable and open source (GNU GPLv3) image and signal analysis application written in pure Java. IQM does not depend on any natively installed libraries and is therefore runnable out-of-the-box. Currently, a continuously growing repertoire of 50 image and 16 signal analysis algorithms is provided. The modular functional architecture based on the three-tier model is described along the most important functionality. Extensibility is achieved using operator plugins, and the development of more complex workflows is provided by a Groovy script interface to the JVM. We demonstrate IQM’s image and signal processing capabilities in a proof-of-principle analysis and provide example implementations to illustrate the plugin framework and the scripting interface. IQM integrates with the popular ImageJ image processing software and is aiming at complementing functionality rather than competing with existing open source software. Machine learning can be integrated into more complex algorithms via the WEKA software package as well, enabling the development of transparent and robust methods for image and signal analysis. PMID:25612319
NanoSIMS for biological applications: Current practices and analyses
Nunez, Jamie R.; Renslow, Ryan S.; Cliff, III, John B.; ...
2017-09-27
Secondary ion mass spectrometry (SIMS) has become an increasingly utilized tool in biologically-relevant studies. Of these, high lateral resolution methodologies using the NanoSIMS 50/50L have been especially powerful within many biological fields over the past decade. Here, we provide a review of this technology, sample preparation and analysis considerations, examples of recent biological studies, data analysis, and current outlooks. Specifically, we offer an overview of SIMS and development of the NanoSIMS. We describe the major experimental factors that should be considered prior to NanoSIMS analysis and then provide information on best practices for data analysis and image generation, which includes an in-depth discussion of appropriate colormaps. Additionally, we provide an open-source method for data representation that allows simultaneous visualization of secondary electron and ion information within a single image. Lastly, we present a perspective on the future of this technology and where we think it will have the greatest impact in the near future.
Hiesgen, Renate; Helmly, Stefan; Galm, Ines; Morawietz, Tobias; Handl, Michael; Friedrich, K. Andreas
2012-01-01
The conductivity of fuel cell membranes as well as their mechanical properties at the nanometer scale were characterized using advanced tapping-mode atomic force microscopy (AFM) techniques. AFM produces high-resolution images of the conductive structure at the membrane surface under continuous current flow and provides some insight into the bulk conducting network in Nafion membranes. The correlation of conductivity with other mechanical properties, such as adhesion force, deformation and stiffness, was measured simultaneously with the current and provided an indication of subsurface phase separation and phase distribution at the surface of the membrane. The distribution of conductive pores at the surface was identified by the formation of water droplets. A comparison of nanostructure models with high-resolution current images is discussed in detail. PMID:24958429
Image analysis and machine learning for detecting malaria.
Poostchi, Mahdieh; Silamut, Kamolrat; Maude, Richard J; Jaeger, Stefan; Thoma, George
2018-04-01
Malaria remains a major burden on global health, with roughly 200 million cases worldwide and more than 400,000 deaths per year. Besides biomedical research and political efforts, modern information technology is playing a key role in many attempts at fighting the disease. One of the barriers toward a successful mortality reduction has been inadequate malaria diagnosis in particular. To improve diagnosis, image analysis software and machine learning methods have been used to quantify parasitemia in microscopic blood slides. This article gives an overview of these techniques and discusses the current developments in image analysis and machine learning for microscopic malaria diagnosis. We organize the different approaches published in the literature according to the techniques used for imaging, image preprocessing, parasite detection and cell segmentation, feature computation, and automatic cell classification. Readers will find the different techniques listed in tables, with the relevant articles cited next to them, for both thin and thick blood smear images. We also discuss the latest developments in sections devoted to deep learning and smartphone technology for future malaria diagnosis. Published by Elsevier Inc.
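Preprocessing pipelines of the kind surveyed here often begin with a global intensity threshold to separate stained cells from background; Otsu's method is a common choice. A self-contained sketch, not tied to any specific paper in the survey:

```python
import numpy as np

def otsu_threshold(gray):
    # Global threshold maximizing between-class variance for 8-bit
    # intensities; `gray` is any array of integers in [0, 255].
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = int(hist.sum())
    sum_all = float(np.dot(np.arange(256), hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 += int(hist[t])
        sum0 += t * float(hist[t])
        if w0 == 0 or w0 == total:
            continue
        m0 = sum0 / w0                          # mean of the lower class
        m1 = (sum_all - sum0) / (total - w0)    # mean of the upper class
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

In a full pipeline this would be only the first stage, followed by the parasite detection, feature computation, and classification steps the survey catalogues.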
[Sub-field imaging spectrometer design based on Offner structure].
Wu, Cong-Jun; Yan, Chang-Xiang; Liu, Wei; Dai, Hu
2013-08-01
To satisfy imaging spectrometers' requirements of miniaturization, light weight and large field of view in space applications, the current optical design of imaging spectrometers with the Offner structure was analyzed, and a simple method to design an imaging spectrometer with a concave grating based on current approaches was given. Using the method offered, a sub-field imaging spectrometer was designed with a 400 km orbital altitude, 0.4-1.0 microm wavelength range, an F-number of 5, a 720 mm focal length and a 4.3 degree total field of view. Optical fiber was used to transfer the image at the telescope's focal plane to three slits arranged in the same plane so as to achieve sub-field imaging. A 1024 x 1024 CCD detector with 18 microm x 18 microm pixels was used to receive the image of the three slits after dispersion. Using ZEMAX software optimization and tolerance analysis, the system can satisfy 5 nm spectral resolution and 5 m spatial resolution, and the MTF is over 0.62 at 28 lp x mm(-1).
Lee, Alex Pui-Wai; Fang, Fang; Jin, Chun-Na; Kam, Kevin Ka-Ho; Tsui, Gary K W; Wong, Kenneth K Y; Looi, Jen-Li; Wong, Randolph H L; Wan, Song; Sun, Jing Ping; Underwood, Malcolm J; Yu, Cheuk-Man
2014-01-01
The mitral valve (MV) has complex 3-dimensional (3D) morphology and motion. Advances in real-time 3D echocardiography (RT3DE) have revolutionized clinical imaging of the MV by providing clinicians with realistic visualization of the valve. Thus far, RT3DE of the MV structure and dynamics has adopted an approach that depends largely on subjective and qualitative interpretation of the 3D images of the valve, rather than objective and reproducible measurement. RT3DE combined with image-processing computer techniques provides precise segmentation and reliable quantification of the complex 3D morphology and rapid motion of the MV. This new approach to imaging may provide additional quantitative descriptions that are useful in diagnostic and therapeutic decision-making. Quantitative analysis of the MV using RT3DE has increased our understanding of the pathologic mechanism of degenerative, ischemic, functional, and rheumatic MV disease. Most recently, 3D morphologic quantification has entered into clinical use to provide more accurate diagnosis of MV disease and for planning surgery and transcatheter interventions. Current limitations of this quantitative approach to MV imaging include labor-intensiveness during image segmentation and lack of a clear definition of the clinical significance of many of the morphologic parameters. This review summarizes the current development and applications of quantitative analysis of the MV morphology using RT3DE.
Near-infrared hyperspectral imaging for quality analysis of agricultural and food products
NASA Astrophysics Data System (ADS)
Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.
2010-04-01
Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure high-quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and chemometric tools used in spectral analyses. The hyperspectral imaging technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and the development of prediction and classification algorithms, and it presents a thorough review of the current applications of hyperspectral imaging in the analysis of agricultural and food products.
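The "each pixel has its own spectrum" property means a hyperspectral cube can be unfolded into a (pixels x bands) matrix and handed to standard chemometric tools. A hedged sketch using PCA via SVD as a representative example; the cube shape and spectra are invented:

```python
import numpy as np

def unfold(cube):
    # (height, width, bands) cube -> (pixels, bands) matrix
    h, w, bands = cube.shape
    return cube.reshape(h * w, bands)

def pca_scores(cube, n_components=2):
    # Mean-center the per-pixel spectra, then project onto the leading
    # principal components; scores fold back into score images.
    X = unfold(cube).astype(float)
    X = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ vt[:n_components].T
    return scores.reshape(cube.shape[0], cube.shape[1], n_components)
```

Score images of this kind are one common way hyperspectral systems highlight defects or contaminants that are spectrally, but not visually, distinct.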
Submillimeter video imaging with a superconducting bolometer array
NASA Astrophysics Data System (ADS)
Becker, Daniel Thomas
Millimeter wavelength radiation holds promise for detection of security threats at a distance, including suicide bombers and maritime threats in poor weather. The high sensitivity of superconducting Transition Edge Sensor (TES) bolometers makes them ideal for passive imaging of thermal signals at millimeter and submillimeter wavelengths. I have built a 350 GHz video-rate imaging system using an array of feedhorn-coupled TES bolometers. The system operates at standoff distances of 16 m to 28 m with a measured spatial resolution of 1.4 cm (at 17 m). It currently contains one 251-detector sub-array, and can be expanded to contain four sub-arrays for a total of 1004 detectors. The system has been used to take video images that reveal the presence of weapons concealed beneath a shirt in an indoor setting. This dissertation describes the design, implementation and characterization of this system. It presents an overview of the challenges associated with standoff passive imaging and how these problems can be overcome through the use of large-format TES bolometer arrays. I describe the design of the system and cover the results of detector and optical characterization. I explain the procedure used to generate video images using the system, and present a noise analysis of those images. This analysis indicates that the Noise Equivalent Temperature Difference (NETD) of the video images is currently limited by artifacts of the scanning process. More sophisticated image processing algorithms can eliminate these artifacts and reduce the NETD to 100 mK, which is the target value for the most demanding passive imaging scenarios. I finish with an overview of future directions for this system.
Kawata, Masaaki; Sato, Chikara
2007-06-01
In determining the three-dimensional (3D) structure of macromolecular assemblies in single particle analysis, a large representative dataset of two-dimensional (2D) average images from a huge number of raw images is key for high resolution. Because alignments prior to averaging are computationally intensive, currently available multireference alignment (MRA) software does not survey every possible alignment. This leads to misaligned images, creating blurred averages and reducing the quality of the final 3D reconstruction. We present a new method, in which multireference alignment is harmonized with classification (multireference multiple alignment: MRMA). This method enables a statistical comparison of multiple alignment peaks, reflecting the similarities between each raw image and a set of reference images. Among the selected alignment candidates for each raw image, misaligned images are statistically excluded, based on the principle that aligned raw images of similar projections have a dense distribution around the correctly aligned coordinates in image space. This newly developed method was examined for accuracy and speed using model image sets with various signal-to-noise ratios, and with electron microscope images of the Transient Receptor Potential C3 and the sodium channel. In every data set, the newly developed method outperformed conventional methods in robustness against noise and in speed, creating 2D average images of higher quality. This statistically harmonized alignment-classification combination should greatly improve the quality of single particle analysis.
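The translational part of aligning a raw image to a reference is commonly done via FFT cross-correlation; the peak location gives the best shift. An illustrative sketch of that single sub-step only (rotation search and MRMA's statistical comparison of multiple peaks are omitted):

```python
import numpy as np

def best_shift(image, reference):
    # The peak of the circular cross-correlation gives the integer
    # translation that, applied with np.roll, best aligns `image`
    # to `reference`.
    cc = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(image))).real
    idx = np.unravel_index(np.argmax(cc), cc.shape)
    # wrap unsigned indices to signed shifts in [-N/2, N/2)
    return tuple(int((i + s // 2) % s - s // 2) for i, s in zip(idx, cc.shape))
```

In the MRMA scheme described above, each raw image would produce one such candidate peak per reference, and the cluster of candidates around the correct coordinates is what allows misaligned images to be excluded statistically.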
On-line 3-dimensional confocal imaging in vivo.
Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M
2000-09-01
In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between the CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed on-line in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, on-line, interactive features should help to improve the clinical utility of this technology.
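A CMTF thickness measurement rests on locating intensity peaks along the depth axis of the image stack: bright peaks mark reflective sublayer interfaces. A minimal illustration with an invented profile; real CMTF data would need smoothing and a calibrated z-step:

```python
def cmtf_thickness(intensity, depth_step_um):
    # Local maxima of the intensity-vs-depth profile mark sublayer
    # interfaces (e.g., epithelial surface and endothelium);
    # thickness ~ distance between the first and last peak.
    peaks = [i for i in range(1, len(intensity) - 1)
             if intensity[i] > intensity[i - 1] and intensity[i] >= intensity[i + 1]]
    if len(peaks) < 2:
        return 0.0
    return (peaks[-1] - peaks[0]) * depth_step_um
```

This also shows why separating the odd/even video fields helps: it halves the effective depth step between samples, doubling the theoretical precision of the peak-to-peak distance.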
Analysis of recoverable current from one component of magnetic flux density in MREIT and MRCDI.
Park, Chunjae; Lee, Byung Il; Kwon, Oh In
2007-06-07
Magnetic resonance current density imaging (MRCDI) provides a current density image by measuring the induced magnetic flux density within the subject with a magnetic resonance imaging (MRI) scanner. Magnetic resonance electrical impedance tomography (MREIT) focuses on extracting useful information about the current density and conductivity distribution in the subject Omega using measured B(z), one component of the magnetic flux density B. In this paper, we analyze the map Tau from current density vector field J to one component of magnetic flux density B(z) without any assumption on the conductivity. The map Tau provides an orthogonal decomposition J = J(P) + J(N) of the current J where J(N) belongs to the null space of the map Tau. We explicitly describe the projected current density J(P) from measured B(z). Based on the decomposition, we prove that B(z) data due to one injection current guarantee a unique determination of the isotropic conductivity under assumptions that the current is two-dimensional and the conductivity value on the surface is known. For a two-dimensional dominating current case, the projected current density J(P) provides a good approximation of the true current J without accumulating noise effects. Numerical simulations show that J(P) from measured B(z) is quite similar to the target J. Biological tissue phantom experiments compare J(P) with the reconstructed J via the reconstructed isotropic conductivity using the harmonic B(z) algorithm.
Informatics methods to enable sharing of quantitative imaging research data.
Levy, Mia A; Freymann, John B; Kirby, Justin S; Fedorov, Andriy; Fennessy, Fiona M; Eschrich, Steven A; Berglund, Anders E; Fenstermacher, David A; Tan, Yongqiang; Guo, Xiaotao; Casavant, Thomas L; Brown, Bartley J; Braun, Terry A; Dekker, Andre; Roelofs, Erik; Mountz, James M; Boada, Fernando; Laymon, Charles; Oborski, Matt; Rubin, Daniel L
2012-11-01
The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable sharing data and to promote reuse of quantitative imaging data in the community. We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers. Copyright © 2012 Elsevier Inc. All rights reserved.
Influence of orographically steered winds on Mutsu Bay surface currents
NASA Astrophysics Data System (ADS)
Yamaguchi, Satoshi; Kawamura, Hiroshi
2005-09-01
Effects of a spatially dependent sea surface wind field on currents in Mutsu Bay, located at the northern end of Japan's Honshu Island, are investigated using winds derived from synthetic aperture radar (SAR) images and a numerical model. A characteristic wind pattern over the bay emerged from analysis of 118 SAR images and was consistent with in situ observations. The wind is topographically steered, with easterly winds entering the bay through a terrestrial gap and stronger winds blowing over the central water toward the bay mouth. Nearshore winds are weaker due to terrestrial blockage. Using the Princeton Ocean Model, we investigated currents forced by the observed spatially dependent wind field. The predicted current pattern agrees well with available observations. For a uniform wind field of equal magnitude and average direction, the circulation pattern departs from observations, demonstrating that vorticity input due to spatially dependent wind stress is essential to the generation of the wind-driven current in Mutsu Bay.
Eddy current imaging for electrical characterization of silicon solar cells and TCO layers
NASA Astrophysics Data System (ADS)
Hwang, Byungguk; Hillmann, Susanne; Schulze, Martin; Klein, Marcus; Heuer, Henning
2015-03-01
Eddy Current Testing has mainly been used to detect defects in conductive materials and to measure wall thicknesses in heavy industries such as construction and aerospace. Recently, high-frequency Eddy Current imaging technology was developed. It enables the acquisition of information from different depth levels in conductive thin-film structures by realizing an appropriate standard penetration depth. In this paper, we summarize state-of-the-art applications focusing on the PV industry and extend the analysis by applying spatially resolved Eddy Current Testing. Specific choices of frequency and complex phase-angle rotation reveal diverse defects from the front to the back side of silicon solar cells and characterize the homogeneity of sheet resistance in Transparent Conductive Oxide (TCO) layers. To verify technical feasibility, measurement results from the Multi Parameter Eddy Current Scanner (MPECS) are compared to results from electroluminescence imaging.
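The "standard penetration depth" that determines which depth level a given excitation frequency probes follows the classical skin-depth relation. A minimal sketch; the conductivity and test frequencies below are illustrative assumptions, not values from the paper:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def skin_depth(freq_hz: float, sigma_s_per_m: float, mu_r: float = 1.0) -> float:
    """Standard penetration depth: delta = 1 / sqrt(pi * f * mu * sigma), in metres.
    Higher frequency (or conductivity) confines the eddy currents nearer the surface."""
    return 1.0 / math.sqrt(math.pi * freq_hz * MU0 * mu_r * sigma_s_per_m)

# Hypothetical doped-silicon layer (sigma ~ 1e4 S/m) probed at two frequencies.
for f in (10e6, 100e6):
    print(f"{f/1e6:.0f} MHz -> delta = {skin_depth(f, 1e4)*1e3:.2f} mm")
```

Sweeping the frequency in this way is what lets a high-frequency eddy current scanner separate front-side from back-side information in thin-film stacks.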
NASA Technical Reports Server (NTRS)
Thompson, Rodger I.
1997-01-01
The Near Infrared Camera and Multi-Object Spectrometer (NICMOS) has been in orbit for about 8 months. This is a report on its current status and future plans. Also included are comments on particular aspects of data analysis concerning dark subtraction, shading, and removal of cosmic rays. At present NICMOS provides excellent images of high scientific content. Most observations utilize cameras 1 and 2, which are in excellent focus. Camera 3 is not yet within the range of the focus adjustment mechanism, but its current images are nevertheless of high quality. In this paper we present the status of various aspects of the NICMOS instrument.
Buck, Thomas; Hwang, Shawn M; Plicht, Björn; Mucci, Ronald A; Hunold, Peter; Erbel, Raimund; Levine, Robert A
2008-06-01
Cardiac ultrasound imaging systems are limited in the noninvasive quantification of valvular regurgitation due to indirect measurements and inaccurate hemodynamic assumptions. We recently demonstrated that the principle of integration of backscattered acoustic Doppler power times velocity can be used for flow quantification in valvular regurgitation directly at the vena contracta of a regurgitant flow jet. We now aimed to implement automated Doppler power flow analysis software on a standard cardiac ultrasound system utilizing novel matrix-array transducer technology, with a detailed description of system requirements, components and software contributing to the system. This system, based on a 3.5 MHz matrix-array cardiac ultrasound scanner (Sonos 5500, Philips Medical Systems), was validated by means of comprehensive experimental signal generator trials, in vitro flow phantom trials and in vivo testing in 48 patients with mitral regurgitation of different severity and etiology, using magnetic resonance imaging (MRI) for reference. All measurements displayed good correlation to the reference values, indicating successful implementation of automated Doppler power flow analysis on a matrix-array ultrasound imaging system. Systematic underestimation of effective regurgitant orifice areas >0.65 cm(2) and volumes >40 ml was found due to currently limited Doppler beam width, which could be readily overcome by the use of new generation 2D matrix-array technology. Automated flow quantification in valvular heart disease based on backscattered Doppler power can be fully implemented on board a routinely used matrix-array ultrasound imaging system. Such automated Doppler power flow analysis quantifies valvular regurgitant flow directly, noninvasively, and user-independently, overcoming the practical limitations of current techniques.
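The power-times-velocity principle can be sketched numerically: backscattered Doppler power in each velocity bin is proportional to the cross-sectional flow area moving at that velocity, so the power-weighted velocity sum yields a flow rate. The reference power per unit area and the bin values below are hypothetical, not taken from the study:

```python
import numpy as np

def power_velocity_flow(bin_powers, bin_velocities, power_per_unit_area):
    """Doppler power-velocity integration: power in each velocity bin is
    proportional to the area moving at that velocity, so
    Q = sum(P_i * v_i) / P_unit_area (cm^3/s for cm/s bins and cm^2 areas)."""
    p = np.asarray(bin_powers, dtype=float)
    v = np.asarray(bin_velocities, dtype=float)
    return float(np.sum(p * v) / power_per_unit_area)

# Uniform-flow sanity check: 5 cm^2 of orifice moving at 10 cm/s,
# split across two velocity bins, gives Q = 50 cm^3/s.
print(power_velocity_flow([2.0, 3.0], [10.0, 10.0], 1.0))  # 50.0
```

The appeal of the method is visible here: no orifice geometry or hemodynamic assumption enters, only measured powers and velocities.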
Cardiac CT for myocardial ischaemia detection and characterization--comparative analysis.
Bucher, A M; De Cecco, C N; Schoepf, U J; Wang, R; Meinel, F G; Binukrishnan, S R; Spearman, J V; Vogl, T J; Ruzsics, B
2014-11-01
The assessment of patients presenting with symptoms of myocardial ischaemia remains one of the most common and challenging clinical scenarios faced by physicians. Current imaging modalities are capable of three-dimensional, functional and anatomical views of the heart and as such offer a unique contribution to understanding and managing the pathology involved. Evidence has accumulated that visual anatomical coronary evaluation does not adequately predict haemodynamic relevance and should be complemented by physiological evaluation, highlighting the importance of functional assessment. Technical advances in CT technology over the past decade have progressively moved cardiac CT imaging into the clinical workflow. In addition to anatomical evaluation, cardiac CT is capable of providing myocardial perfusion parameters. A variety of CT techniques can be used to assess the myocardial perfusion. The single energy first-pass CT and dual energy first-pass CT allow static assessment of myocardial blood pool. Dynamic cardiac CT imaging allows quantification of myocardial perfusion through time-resolved attenuation data. CT-based myocardial perfusion imaging (MPI) is showing promising diagnostic accuracy compared with the current reference modalities. The aim of this review is to present currently available myocardial perfusion techniques with a focus on CT imaging in light of recent clinical investigations. This article provides a comprehensive overview of currently available CT approaches of static and dynamic MPI and presents the results of corresponding clinical trials.
Paintdakhi, Ahmad; Parry, Bradley; Campos, Manuel; Irnov, Irnov; Elf, Johan; Surovtsev, Ivan; Jacobs-Wagner, Christine
2016-01-01
With the realization that bacteria display phenotypic variability among cells and exhibit complex subcellular organization critical for cellular function and behavior, microscopy has re-emerged as a primary tool in bacterial research during the last decade. However, the bottleneck in today's single-cell studies is the quantitative image analysis of cells and fluorescent signals. Here, we address current limitations through the development of Oufti, a stand-alone, open-source software package for automated measurements of microbial cells and fluorescence signals from microscopy images. Oufti provides computational solutions for tracking touching cells in confluent samples, handles various cell morphologies, offers algorithms for quantitative analysis of both diffraction and non-diffraction-limited fluorescence signals, and is scalable for high-throughput analysis of massive datasets, all with subpixel precision. All functionalities are integrated in a single package. The graphical user interface, which includes interactive modules for segmentation, image analysis, and post-processing analysis, makes the software broadly accessible to users irrespective of their computational skills. PMID:26538279
Mohiyeddini, Changiz
2017-09-01
Repressive coping, as a means of preserving a positive self-image, has been widely explored in the context of dealing with self-evaluative cues. The current study extends this research by exploring whether repressive coping is associated with lower levels of body image concerns, drive for thinness, bulimic symptoms, and higher positive rational acceptance. A sample of 229 female college students was recruited in South London. Repressive coping was measured via the interaction between trait anxiety and defensiveness. The results of moderated regression analysis with simple slope analysis show that compared to non-repressors, repressors reported lower levels of body image concerns, drive for thinness, and bulimic symptoms while exhibiting a higher use of positive rational acceptance. These findings, in line with previous evidence, suggest that repressive coping may be adaptive particularly in the context of body image. Copyright © 2017 Elsevier Ltd. All rights reserved.
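Operationalizing repressive coping as the trait anxiety x defensiveness interaction, and probing it with simple slopes, can be sketched as follows. All coefficients and the data-generating model are invented for illustration; only the sample size mirrors the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 229  # sample size matching the study; the data here are synthetic

# Hypothetical standardized predictors: trait anxiety (A) and defensiveness (D).
A = rng.standard_normal(n)
D = rng.standard_normal(n)
# Synthetic outcome with a built-in A x D interaction (coefficients made up).
y = 0.5 * A - 0.3 * D - 0.4 * A * D + rng.standard_normal(n)

# Moderated regression: y ~ b0 + b1*A + b2*D + b3*(A*D)
X = np.column_stack([np.ones(n), A, D, A * D])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Simple slope of anxiety on the outcome at low/high defensiveness (+/- 1 SD):
# repressors (high defensiveness) show a weaker anxiety-outcome link here.
for level in (-1.0, 1.0):
    print(f"simple slope of A at D={level:+.0f} SD: {b[1] + b[3] * level:.2f}")
```

A significant b3 with diverging simple slopes is the statistical signature the abstract refers to when comparing repressors with non-repressors.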
Software Toolbox for Low-Frequency Conductivity and Current Density Imaging Using MRI.
Sajib, Saurav Z K; Katoch, Nitish; Kim, Hyung Joong; Kwon, Oh In; Woo, Eung Je
2017-11-01
Low-frequency conductivity and current density imaging using MRI includes magnetic resonance electrical impedance tomography (MREIT), diffusion tensor MREIT (DT-MREIT), conductivity tensor imaging (CTI), and magnetic resonance current density imaging (MRCDI). MRCDI and MREIT provide current density and isotropic conductivity images, respectively, using current-injection phase MRI techniques. DT-MREIT produces anisotropic conductivity tensor images by incorporating diffusion weighted MRI into MREIT. These current-injection techniques are finding clinical applications in diagnostic imaging and also in transcranial direct current stimulation (tDCS), deep brain stimulation (DBS), and electroporation, where treatment currents can function as imaging currents. To avoid adverse effects of nerve and muscle stimulation due to injected currents, conductivity tensor imaging (CTI) utilizes B1 mapping and multi-b diffusion weighted MRI to produce low-frequency anisotropic conductivity tensor images without injecting current. This paper describes numerical implementations of several key mathematical functions for conductivity and current density image reconstructions in MRCDI, MREIT, DT-MREIT, and CTI. To facilitate experimental studies of clinical applications, we developed a software toolbox for these low-frequency conductivity and current density imaging methods. This MR-based conductivity imaging (MRCI) toolbox includes 11 toolbox functions which can be used in the MATLAB environment. The MRCI toolbox is available at http://iirc.khu.ac.kr/software.html. Its functions were tested using several experimental datasets, which are provided together with the toolbox. Users of the toolbox can focus on experimental designs and interpretations of reconstructed images instead of developing their own image reconstruction software. We expect more toolbox functions to be added from future research outcomes.
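The current-injection phase MRI step common to MRCDI and MREIT can be sketched as follows: the injected current's field component Bz is recovered from the phase difference between positive and negative current injections. This is a minimal numpy sketch of the underlying relation, not one of the toolbox's actual functions; the field strength and injection time are illustrative:

```python
import numpy as np

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad/(s*T)

def bz_from_phase(phase_rad: np.ndarray, tc_s: float) -> np.ndarray:
    """Recover the current-induced field component Bz from the phase
    difference of positive/negative current injections:
    phase = 2 * gamma * Bz * Tc  =>  Bz = phase / (2 * gamma * Tc).
    The factor of 2 comes from subtracting the two injection polarities."""
    return phase_rad / (2.0 * GAMMA * tc_s)

# Synthetic check: a uniform 10 nT field and a 30 ms injection
# accumulate about 0.16 rad of phase, which inverts back to 10 nT.
bz_true = 10e-9 * np.ones((4, 4))
phase = 2.0 * GAMMA * bz_true * 30e-3
print(np.allclose(bz_from_phase(phase, 30e-3), bz_true))  # True
```

Reconstruction of conductivity or full current density from Bz maps requires further steps (e.g. harmonic Bz algorithms), which is what the toolbox functions implement.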
Imaging characteristics of photogrammetric camera systems
Welch, R.; Halliday, J.
1973-01-01
In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were determined, along with procedures for analyzing image quality and predicting and comparing performance capabilities. © 1973.
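Response functions of the kind used in such evaluations are commonly summarized by the modulation transfer function (MTF), the Fourier magnitude of the system's line-spread function. A sketch with an assumed Gaussian blur (the 10-micron sigma is hypothetical, chosen only to make the numbers concrete):

```python
import numpy as np

# A Gaussian line-spread function (LSF) of width sigma gives a Gaussian MTF:
# MTF(f) = exp(-2 * (pi * sigma * f)**2). We verify this numerically.
sigma_mm = 0.01                       # assumed 10-micron blur
x = np.linspace(-0.2, 0.2, 4001)      # mm; well beyond the LSF support
dx = x[1] - x[0]
lsf = np.exp(-x**2 / (2 * sigma_mm**2))
lsf /= lsf.sum()                      # normalize so MTF(0) = 1

freqs = np.fft.rfftfreq(x.size, d=dx)  # spatial frequencies, cycles/mm
mtf = np.abs(np.fft.rfft(lsf))

# Compare numeric and analytic MTF near 20 cycles/mm.
idx = np.argmin(np.abs(freqs - 20.0))
print(f"MTF({freqs[idx]:.1f} c/mm) = {mtf[idx]:.3f}")
```

Resolving power is then often read off as the frequency where the MTF drops below a contrast threshold appropriate to the film and target contrast.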
Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio
2018-01-01
Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Many computer-based image analysis methods of chest computed tomography (CT) used in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary functions, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologist for correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis of IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Goward, Samuel N.; Townshend, John R.; Zanoni, Vicki; Policelli, Fritz; Stanley, Tom; Ryan, Robert; Holekamp, Kara; Underwood, Lauren; Pagnutti, Mary; Fletcher, Rose
2003-01-01
In an effort to more fully explore the potential of commercial remotely sensed land data sources, the NASA Earth Science Enterprise (ESE) implemented an experimental Scientific Data Purchase (SDP) that solicited bids from the private sector to meet ESE-user data needs. The images from the Space Imaging IKONOS system provided a particularly good match to current ESE missions such as Terra and Landsat 7 and therefore serve as a focal point in this analysis.
Structure of Dilute Pyroclastic Density Currents During Transport, Buoyancy Reversal and Liftoff
NASA Astrophysics Data System (ADS)
Andrews, B. J.
2014-12-01
Scaled laboratory experiments provide insight into structure, entrainment and liftoff in pyroclastic density currents (PDCs). Experiments are conducted in a 8.5×6.1×2.6 m air-filled tank and comprise turbulently suspended mixtures of heated 20-μm talc particles introduced to the tank at steady and sustained rates; the tank is large enough that the currents are effectively unconfined. Experiments are scaled with bulk (densimetric and thermal Richardson numbers, Froude number) and turbulent (Stokes and settling numbers) parameters dynamically similar to natural currents. The Reynolds numbers of experiments are smaller than those of natural PDCs, but analysis of the experiments demonstrates that they are fully turbulent. Red, green, and blue laser sheets illuminate orthogonal planes within the currents for imaging and recording with HD video cameras; those data are reprojected into cross-sectional and map-view planes for analysis of turbulent velocity fields and fluctuations in particle concentration. A green laser sheet can be swept through the tank at 60 Hz and imaged with a high-speed CCD camera at up to 3000 fps; sequences of 60-300 images are used to make 3D volumetric reconstructions of the currents at up to 10 Hz. Currents typically comprise a lower "bypass" region and an upper entraining region that turbulently mixes with the ambient air. The bypass region is generally about half of the total current thickness and moves faster than the overlying, entraining region. The bypass region controls runout distance and steadiness of currents. If turbulent structures in the entraining region penetrate through the bypass region, the trailing portion of the current can stall before resuming forward progress; thus a single, "steady" current can generate multiple currents. When a current lifts off, it focuses along a narrow axis beneath the rising (coignimbrite) plume. At that time, ambient air entrainment occurs primarily through the lateral margins of the narrow bypass region. Eddies that entrain air through the lateral margins grow in size with transport distance such that at the maximum runout distance, eddies have lengthscales comparable to the current width. The largest structures within the rising plumes have lengthscales comparable to the cross-stream plume width.
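The bulk scaling invoked above rests on matching dimensionless numbers between tank and volcano. A sketch of the densimetric Froude number; the current thickness, speed and density excess are invented lab-scale values, not the experimental parameters:

```python
import math

def reduced_gravity(rho_current, rho_ambient, g=9.81):
    """g' = g * (rho_c - rho_a) / rho_a for a current denser than ambient."""
    return g * (rho_current - rho_ambient) / rho_ambient

def densimetric_froude(u, h, rho_current, rho_ambient):
    """Fr = U / sqrt(g' * h). Matching Fr (equivalently the densimetric
    Richardson number Ri = 1/Fr^2) between experiment and nature is one
    of the bulk similarity criteria mentioned above."""
    return u / math.sqrt(reduced_gravity(rho_current, rho_ambient) * h)

# Illustrative values: a 0.3 m-thick lab current moving at 0.2 m/s
# with a 1% excess density over ambient air (1.2 kg/m^3).
fr = densimetric_froude(0.2, 0.3, 1.212, 1.2)
print(f"Fr = {fr:.2f}")
```

Because Fr depends only on the ratio of inertia to buoyancy, a slow dilute lab current can be dynamically similar to a fast, hot natural PDC even though the Reynolds numbers differ.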
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Kuangcai
The goal of this study is to help with future data analysis and experimental design in rotational dynamics research using the DIC-based SPORT technique. Most current studies using DIC-based SPORT techniques are technical demonstrations. Understanding the mechanisms behind the observed rotational behaviors of the imaging probes should be the focus of future SPORT studies. More effort is still needed in the development of new imaging probes, particle tracking methods, instrumentation, and advanced data analysis methods to further extend the potential of the DIC-based SPORT technique.
The Open Microscopy Environment: open image informatics for the biological sciences
NASA Astrophysics Data System (ADS)
Blackburn, Colin; Allan, Chris; Besson, Sébastien; Burel, Jean-Marie; Carroll, Mark; Ferguson, Richard K.; Flynn, Helen; Gault, David; Gillen, Kenneth; Leigh, Roger; Leo, Simone; Li, Simon; Lindner, Dominik; Linkert, Melissa; Moore, Josh; Moore, William J.; Ramalingam, Balaji; Rozbicki, Emil; Rustici, Gabriella; Tarkowska, Aleksandra; Walczysko, Petr; Williams, Eleanor; Swedlow, Jason R.
2016-07-01
Despite significant advances in biological imaging and analysis, major informatics challenges remain unsolved: file formats are proprietary, storage and analysis facilities are lacking, as are standards for sharing image data and results. While the open FITS file format is ubiquitous in astronomy, astronomical imaging shares many challenges with biological imaging, including the need to share large image sets using secure, cross-platform APIs, and the need for scalable applications for processing and visualization. The Open Microscopy Environment (OME) is an open-source software framework developed to address these challenges. OME tools include: an open data model for multidimensional imaging (OME Data Model); an open file format (OME-TIFF) and library (Bio-Formats) enabling free access to images (5D+) written in more than 145 formats from many imaging domains, including FITS; and a data management server (OMERO). The Java-based OMERO client-server platform comprises an image metadata store, an image repository, visualization and analysis by remote access, allowing sharing and publishing of image data. OMERO provides a means to manage the data through a multi-platform API. OMERO's model-based architecture has enabled its extension into a range of imaging domains, including light and electron microscopy, high content screening, digital pathology and recently into applications using non-image data from clinical and genomic studies. This is made possible using the Bio-Formats library. The current release includes a single mechanism for accessing image data of all types, regardless of original file format, via Java, C/C++ and Python and a variety of applications and environments (e.g. ImageJ, Matlab and R).
A forensic science perspective on the role of images in crime investigation and reconstruction.
Milliet, Quentin; Delémont, Olivier; Margot, Pierre
2014-12-01
This article presents a global vision of images in forensic science. The proliferation of perspectives on the use of images throughout criminal investigations and the increasing demand for research on this topic seem to demand a forensic science-based analysis. In this study, the definitions of and concepts related to material traces are revisited and applied to images, and a structured approach is used to persuade the scientific community to extend and improve the use of images as traces in criminal investigations. Current research efforts focus on technical issues and evidence assessment. This article provides a sound foundation for rationalising and explaining the processes involved in the production of clues from trace images. For example, the mechanisms through which these visual traces become clues of presence or action are described. An extensive literature review of forensic image analysis emphasises the existing guidelines and knowledge available for answering investigative questions (who, what, where, when and how). However, complementary developments are still necessary to demystify many aspects of image analysis in forensic science, including how to review and select images or use them to reconstruct an event or assist intelligence efforts. The hypothetico-deductive reasoning pathway used to discover unknown elements of an event or crime can also help scientists understand the underlying processes involved in their decision making. An analysis of a single image in an investigative or probative context is used to demonstrate the highly informative potential of images as traces and/or clues. Research efforts should be directed toward formalising the extraction and combination of clues from images. An appropriate methodology is key to expanding the use of images in forensic science. Copyright © 2014 Forensic Science Society. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Coddington, Odele; Platnick, Steven; Pilewskie, Peter; Schmidt, Sebastian
2016-04-01
The NASA Pre-Aerosol, Cloud and ocean Ecosystem (PACE) Science Definition Team (SDT) report released in 2012 defined imager stability requirements for the Ocean Color Instrument (OCI) at the sub-percent level. While the instrument suite and measurement requirements are currently being determined, the PACE SDT report provided details on imager options and spectral specifications. The options for a threshold instrument included a hyperspectral imager from 350-800 nm, two near-infrared (NIR) channels, and three short wave infrared (SWIR) channels at 1240, 1640, and 2130 nm. Other instrument options include a variation of the threshold instrument with 3 additional spectral channels at 940, 1378, and 2250 nm and the inclusion of a spectral polarimeter. In this work, we present cloud retrieval information content studies of optical thickness, droplet effective radius, and thermodynamic phase to quantify the potential for continuing the low cloud climate data record established by the Moderate Resolution Imaging Spectroradiometer (MODIS) and Visible Infrared Imaging Radiometer Suite (VIIRS) missions with the PACE OCI instrument (i.e., non-polarized cloud reflectances and in the absence of midwave and longwave infrared channels). The information content analysis is performed using the GEneralized Nonlinear Retrieval Analysis (GENRA) methodology and the Collection 6 simulated cloud reflectance data for the common MODIS/VIIRS algorithm (MODAWG) for Cloud Mask, Cloud-Top, and Optical Properties. We show that using both channels near 2 microns improves the probability of cloud phase discrimination with shortwave-only cloud reflectance retrievals. Ongoing work will extend the information content analysis, currently performed for dark ocean surfaces, to different land surface types.
NASA Astrophysics Data System (ADS)
Kolekar, Sadhu; Patole, Shashikant P.; Yoo, Ji-Beom; Dharmadhikari, Chandrakant V.
2018-03-01
Field emission from nanostructured films is known to be dominated by only a small number of localized spots, which vary with voltage, electric field and heat treatment. It is important to develop processing methods that produce stable and uniform emitting sites. In this paper we report a novel approach involving analysis of Proximity Field Emission Microscopy (PFEM) images using a Scanning Probe Image Processing technique. Vertically aligned carbon nanotube emitters were deposited on tungsten foil by water-assisted chemical vapor deposition. Prior to the field electron emission studies, these films were characterized by scanning electron microscopy, transmission electron microscopy, and Atomic Force Microscopy (AFM). AFM images of the samples show a bristle-like structure, with bristle sizes varying from 80 to 300 nm. The topography images were found to exhibit strong correlation with current images. Current-Voltage (I-V) measurements from both Scanning Tunneling Microscopy and conducting-AFM mode suggest that the electron transport mechanism when imaging vertically grown CNTs is ballistic rather than the usual tunneling or field emission, with a junction resistance of 10 kΩ. It was found that I-V curves in field emission mode in the PFEM geometry initially vary with the number of I-V cycles until reproducible I-V curves are obtained. Even for reasonably stable I-V behavior, the number of spots was found to increase with voltage, leading to a modified Fowler-Nordheim (F-N) behavior. A plot of ln(I/V^3) versus 1/V was found to be linear. Current-versus-time data exhibit large fluctuations, with the power spectral density obeying a 1/f^2 law. It is suggested that an analogue of the F-N equation of the form ln(I/V^α) versus 1/V may be used for the analysis of field emission data, where α may depend on the nanostructure configuration and can be determined from the dependence of the number of emitting spots on the voltage.
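The modified Fowler-Nordheim analysis described above, ln(I/V^α) versus 1/V, reduces to a linear fit once α is fixed. A sketch on synthetic data (the parameters b and c are arbitrary; only α = 3 echoes the reported plot):

```python
import numpy as np

# Synthetic field-emission data following the modified F-N form
# I = c * V**alpha * exp(-b / V), with alpha = 3 as in the reported plot.
alpha, b, c = 3.0, 5000.0, 1e-12   # illustrative parameters only
V = np.linspace(800.0, 2000.0, 50)  # volts
I = c * V**alpha * np.exp(-b / V)

# Modified F-N plot: ln(I / V**alpha) against 1/V is a straight line
# with slope -b and intercept ln(c).
x = 1.0 / V
y = np.log(I / V**alpha)
slope, intercept = np.polyfit(x, y, 1)
print(f"recovered slope = {slope:.1f} (expect {-b:.1f})")
```

In practice α would be treated as a free parameter, with the value giving the best straight line taken as characteristic of the nanostructure configuration.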
Bae, Youngwoo; Son, Taeyoon; Nelson, J. Stuart; Kim, Jae-Hong; Choi, Eung Ho; Jung, Byungjo
2010-01-01
Background/Purpose Digital color image analysis is currently considered as a routine procedure in dermatology. In our previous study, a multimodal facial color imaging modality (MFCIM), which provides a conventional, parallel- and cross-polarization, and fluorescent color image, was introduced for objective evaluation of various facial skin lesions. This study introduces a commercial version of MFCIM, DermaVision-PRO, for routine clinical use in dermatology and demonstrates its dermatological feasibility for cross-evaluation of skin lesions. Methods/Results Sample images of subjects with actinic keratosis or non-melanoma skin cancers were obtained at four different imaging modes. Various image analysis methods were applied to cross-evaluate the skin lesion and, finally, extract valuable diagnostic information. DermaVision-PRO is potentially a useful tool as an objective macroscopic imaging modality for quick prescreening and cross-evaluation of facial skin lesions. Conclusion DermaVision-PRO may be utilized as a useful tool for cross-evaluation of widely distributed facial skin lesions and an efficient database management of patient information. PMID:20923462
Image segmentation evaluation for very-large datasets
NASA Astrophysics Data System (ADS)
Reeves, Anthony P.; Liu, Shuang; Xie, Yiting
2016-03-01
With the advent of modern machine learning methods and fully automated image analysis, there is a need for very large image datasets with documented segmentations for both computer algorithm training and evaluation. Current approaches of visual inspection and manual marking do not scale well to big data. We present a new approach that depends on fully automated algorithm outcomes for segmentation documentation, requires no manual marking, and provides quantitative evaluation for computer algorithms. The documentation of new image segmentations and new algorithm outcomes is achieved by visual inspection. The burden of visual inspection on large datasets is minimized by (a) customized visualizations for rapid review and (b) reducing the number of cases to be reviewed through analysis of quantitative segmentation evaluation. This method has been applied to a dataset of 7,440 whole-lung CT images for 6 different segmentation algorithms designed to facilitate fully automated measurement of a number of very important quantitative image biomarkers. The results indicate 93% to 99% successful segmentation for these algorithms on this relatively large image database. The presented evaluation method may be scaled to much larger image databases.
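The triage idea, using quantitative metrics to shrink the set of cases needing visual review, can be sketched with a robust outlier rule. The metric, thresholds, and synthetic cohort below are illustrative, not the paper's actual criteria:

```python
import numpy as np

def flag_for_review(metric_values: np.ndarray, n_sigma: float = 3.0) -> np.ndarray:
    """Return indices of cases whose segmentation metric (e.g. lung volume)
    deviates strongly from the cohort, so only those need visual inspection.
    Uses a median/MAD robust z-score; the threshold is an assumption."""
    med = np.median(metric_values)
    mad = np.median(np.abs(metric_values - med)) * 1.4826  # ~sigma if normal
    z = np.abs(metric_values - med) / mad
    return np.where(z > n_sigma)[0]

# Synthetic cohort: 100 plausible lung volumes (litres) plus two failures
# (a near-empty mask and a gross over-segmentation).
rng = np.random.default_rng(1)
volumes = np.concatenate([rng.normal(5.0, 0.5, 100), [0.4, 12.0]])
print(flag_for_review(volumes))
```

Only the flagged cases would go to the customized rapid-review visualizations, which is how the review burden stays bounded as the dataset grows.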
Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka
2017-01-01
Considering the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning and image capturing and processing devices and algorithms, and advances in the development of novel stationary phases as well as various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on the example of HPTLC chromatograms of plant resins. The obtained variables, such as gray intensities of pixels along the solvent front, peak areas and mean peak values, were used as input data and compared to obtain the best classification models. Important steps in image analysis (baseline removal, denoising, target peak alignment and normalization) were pointed out. A numerical data set based on the mean value of selected bands and intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and could be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.
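A minimal version of the image-processing chain described, collapsing a lane to an intensity profile, removing baseline, and normalizing, might look like the following. The rolling-minimum baseline and the synthetic lane are simplifications; real HPTLC workflows use more sophisticated estimators:

```python
import numpy as np

def densitogram(lane_gray: np.ndarray) -> np.ndarray:
    """Collapse a grayscale lane (rows = migration axis) into a 1-D
    intensity profile by averaging across the lane width."""
    return lane_gray.mean(axis=1)

def remove_baseline(profile: np.ndarray, window: int = 51) -> np.ndarray:
    """Crude rolling-minimum baseline estimate, subtracted from the profile."""
    pad = window // 2
    padded = np.pad(profile, pad, mode="edge")
    baseline = np.array([padded[i:i + window].min() for i in range(len(profile))])
    return profile - baseline

def normalize(profile: np.ndarray) -> np.ndarray:
    """Scale to unit maximum so lanes from different plates are comparable."""
    return profile / profile.max()

# Synthetic 200 x 20 px lane: one Gaussian band at row 120 on a sloping baseline.
x = np.arange(200)
lane = (np.exp(-((x - 120) ** 2) / 50.0) + 0.002 * x)[:, None] * np.ones((1, 20))
clean = normalize(remove_baseline(densitogram(lane)))
print(int(clean.argmax()))  # band position preserved at row 120
```

Profiles processed this way (per lane, then peak-aligned across lanes) are the kind of numerical variables that feed the chemometric classification models discussed in the abstract.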
Quantitative Medical Image Analysis for Clinical Development of Therapeutics
NASA Astrophysics Data System (ADS)
Analoui, Mostafa
There has been significant progress in the development of therapeutics for the prevention and management of several disease areas in recent years, leading to increased average life expectancy, as well as quality of life, globally. However, due to the complexity of addressing a number of medical needs and the financial burden of developing new classes of therapeutics, there is a need for better tools for decision making and for validation of the efficacy and safety of new compounds. Numerous biological markers (biomarkers) have been proposed either as adjuncts to current clinical endpoints or as surrogates. Imaging biomarkers are among the most rapidly growing of these, being examined to expedite effective and rational drug development. Clinical imaging often involves a complex set of multi-modality data sets that require rapid and objective analysis, independent of the reviewer's bias and training. In this chapter, an overview of imaging biomarkers for drug development is offered, along with the challenges that necessitate quantitative and objective image analysis. Examples of automated and semi-automated analysis approaches are provided, along with a technical review of such methods. These examples include the use of 3D MRI for osteoarthritis, ultrasound vascular imaging, and dynamic contrast-enhanced MRI for oncology. Additionally, a brief overview of regulatory requirements is discussed. In conclusion, this chapter highlights key challenges and future directions in this area.
NDE scanning and imaging of aircraft structure
NASA Astrophysics Data System (ADS)
Bailey, Donald; Kepler, Carl; Le, Cuong
1995-07-01
The Science and Engineering Lab at McClellan Air Force Base, Sacramento, Calif. has been involved in the development and use of computer-based scanning systems for NDE (nondestructive evaluation) since 1985. This paper describes the history leading up to our current applications, which employ eddy current and ultrasonic scanning of aircraft structures that contain both metallics and advanced composites. The scanning is performed using industrialized computers interfaced to proprietary acquisition equipment and software. Examples are shown that image several types of damage, such as exfoliation and fuselage lap joint corrosion in aluminum, and impact damage, embedded foreign material, and porosity in Kevlar and graphite epoxy composites. Image analysis techniques are reported that are performed using consumer-oriented computer hardware and software that are not NDE-specific and not expensive.
Automated segmentation of pulmonary structures in thoracic computed tomography scans: a review
NASA Astrophysics Data System (ADS)
van Rikxoort, Eva M.; van Ginneken, Bram
2013-09-01
Computed tomography (CT) is the modality of choice for imaging the lungs in vivo. Sub-millimeter isotropic images of the lungs can be obtained within seconds, allowing the detection of small lesions and detailed analysis of disease processes. The high resolution of thoracic CT and the high prevalence of lung diseases require a high degree of automation in the analysis pipeline. The automated segmentation of pulmonary structures in thoracic CT has been an important research topic for over a decade now. This systematic review provides an overview of current literature. We discuss segmentation methods for the lungs, the pulmonary vasculature, the airways, including airway tree construction and airway wall segmentation, the fissures, the lobes and the pulmonary segments. For each topic, the current state of the art is summarized, and topics for future research are identified.
Proceedings of the Airborne Imaging Spectrometer Data Analysis Workshop
NASA Technical Reports Server (NTRS)
Vane, G. (Editor); Goetz, A. F. H. (Editor)
1985-01-01
The Airborne Imaging Spectrometer (AIS) Data Analysis Workshop was held at the Jet Propulsion Laboratory on April 8 to 10, 1985. It was attended by 92 people who heard reports on 30 investigations currently under way using AIS data that have been collected over the past two years. Written summaries of 27 of the presentations are in these Proceedings. Many of the results presented at the Workshop are preliminary because most investigators have been working with this fundamentally new type of data for only a relatively short time. Nevertheless, several conclusions can be drawn from the Workshop presentations concerning the value of imaging spectrometry to Earth remote sensing. First, work with AIS has shown that direct identification of minerals through high spectral resolution imaging is a reality for a wide range of materials and geological settings. Second, there are strong indications that high spectral resolution remote sensing will enhance the ability to map vegetation species. There are also good indications that imaging spectrometry will be useful for biochemical studies of vegetation. Finally, there are a number of new data analysis techniques under development which should lead to more efficient and complete information extraction from imaging spectrometer data. The results of the Workshop indicate that as experience is gained with this new class of data, and as new analysis methodologies are developed and applied, the value of imaging spectrometry should increase.
Cutting-edge analysis of extracellular microparticles using ImageStream(X) imaging flow cytometry.
Headland, Sarah E; Jones, Hefin R; D'Sa, Adelina S V; Perretti, Mauro; Norling, Lucy V
2014-06-10
Interest in extracellular vesicle biology has exploded in the past decade, since these microstructures seem endowed with multiple roles, from blood coagulation to inter-cellular communication in pathophysiology. In order for microparticle research to evolve as a preclinical and clinical tool, accurate quantification of microparticle levels is a fundamental requirement, but their size and the complexity of sample fluids present major technical challenges. Flow cytometry is commonly used, but suffers from low sensitivity and accuracy. Use of the Amnis ImageStream(X) Mk II imaging flow cytometer afforded accurate analysis of calibration beads ranging from 1 μm down to 20 nm, and of microparticles, which could be observed and quantified in whole blood, platelet-rich and platelet-free plasma and in leukocyte supernatants. Another advantage was the minimal sample preparation and volume required. Use of this high-throughput analyzer allowed simultaneous phenotypic definition of the parent cells and offspring microparticles along with real-time microparticle generation kinetics. With the current paucity of reliable techniques for the analysis of microparticles, we propose that the ImageStream(X) could be used effectively to advance this scientific field.
NASA Technical Reports Server (NTRS)
Qin, J. X.; Shiota, T.; Thomas, J. D.
2000-01-01
Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.
Qin, J X; Shiota, T; Thomas, J D
2000-11-01
Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.
A Stochastic-Variational Model for Soft Mumford-Shah Segmentation
2006-01-01
In contemporary image and vision analysis, stochastic approaches demonstrate great flexibility in representing and modeling complex phenomena, while variational-PDE methods gain enormous computational advantages over Monte Carlo or other stochastic algorithms. In combination, the two can lead to much more powerful novel models and efficient algorithms. In the current work, we propose a stochastic-variational model for soft (or fuzzy) Mumford-Shah segmentation of mixture image patterns. Unlike the classical hard Mumford-Shah segmentation, the new model allows each pixel to belong to each image pattern with some probability. Soft segmentation could lead to hard segmentation, and hence is more general. The modeling procedure, mathematical analysis on the existence of optimal solutions, and computational implementation of the new model are explored in detail, and numerical examples of both synthetic and natural images are presented. PMID:23165059
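The core idea above, that each pixel belongs to each pattern with some probability rather than a single hard label, can be illustrated with a minimal sketch. The two-pattern toy image, the squared-distance misfit, and the softmax sharpness parameter are illustrative assumptions, not the paper's actual variational model:

```python
import numpy as np

def soft_memberships(image, means, beta=1.0):
    """Soft segmentation: probability that each pixel belongs to each
    pattern, from the squared distance to the pattern means (softmax).
    Larger beta gives sharper memberships, approaching hard labels."""
    d = (image[..., None] - np.asarray(means)) ** 2   # per-pattern misfit
    w = np.exp(-beta * d)
    return w / w.sum(axis=-1, keepdims=True)          # rows sum to 1

image = np.array([[0.1, 0.9],
                  [0.5, 0.8]])
p = soft_memberships(image, means=[0.0, 1.0], beta=5.0)

# Probabilities over patterns sum to one at every pixel.
assert np.allclose(p.sum(axis=-1), 1.0)

# Hard segmentation is recovered as the argmax of the soft memberships,
# which is the sense in which the soft model is more general.
hard = p.argmax(axis=-1)
```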
NASA Astrophysics Data System (ADS)
Leighs, J. A.; Halling-Brown, M. D.; Patel, M. N.
2018-03-01
The UK currently has a national breast cancer-screening program and images are routinely collected from a number of screening sites, representing a wealth of invaluable data that is currently under-used. Radiologists evaluate screening images manually and recall suspicious cases for further analysis such as biopsy. Histological testing of biopsy samples confirms the malignancy of the tumour, along with other diagnostic and prognostic characteristics such as disease grade. Machine learning is becoming increasingly popular for clinical image classification problems, as it is capable of discovering patterns in data that would otherwise remain invisible. This is particularly true when applied to medical imaging features; however, clinical datasets are often relatively small. A texture feature extraction toolkit has been developed to mine a wide range of features from medical images such as mammograms. This study analysed a dataset of 1,366 radiologist-marked, biopsy-proven malignant lesions obtained from the OPTIMAM Medical Image Database (OMI-DB). Exploratory data analysis methods were employed to better understand the extracted features. Machine learning techniques including Classification and Regression Trees (CART), ensemble methods (e.g. random forests), and logistic regression were applied to the data to predict the disease grade of the analysed lesions. Prediction scores of up to 83% were achieved; the sensitivity and specificity of the trained models are discussed to put the results into a clinical context. The results show promise in the ability to predict prognostic indicators from the texture features extracted and thus enable prioritisation of care for patients at greatest risk.
Dynamics of hemispheric dominance for language assessed by magnetoencephalographic imaging.
Findlay, Anne M; Ambrose, Josiah B; Cahn-Weiner, Deborah A; Houde, John F; Honma, Susanne; Hinkley, Leighton B N; Berger, Mitchel S; Nagarajan, Srikantan S; Kirsch, Heidi E
2012-05-01
The goal of the current study was to examine the dynamics of language lateralization using magnetoencephalographic (MEG) imaging, to determine the sensitivity and specificity of MEG imaging, and to determine whether MEG imaging can become a viable alternative to the intracarotid amobarbital procedure (IAP), the current gold standard for preoperative language lateralization in neurosurgical candidates. MEG was recorded during an auditory verb generation task and imaging analysis of oscillatory activity was initially performed in 21 subjects with epilepsy, brain tumor, or arteriovenous malformation who had undergone IAP and MEG. Time windows and brain regions of interest that best discriminated between IAP-determined left or right dominance for language were identified. Parameters derived in the retrospective analysis were applied to a prospective cohort of 14 patients and healthy controls. Power decreases in the beta frequency band were consistently observed following auditory stimulation in inferior frontal, superior temporal, and parietal cortices; similar power decreases were also seen in inferior frontal cortex prior to and during overt verb generation. Language lateralization was clearly observed to be a dynamic process that is bilateral for several hundred milliseconds during periods of auditory perception and overt speech production. Correlation with the IAP was seen in 13 of 14 (93%) prospective patients, with the test demonstrating a sensitivity of 100% and specificity of 92%. Our results demonstrate excellent correlation between MEG imaging findings and the IAP for language lateralization, and provide new insights into the spatiotemporal dynamics of cortical speech processing. Copyright © 2012 American Neurological Association.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, T; Sun, B; Li, H
Purpose: The current standard for calculation of photon and electron dose requires conversion of Hounsfield Units (HU) to Electron Density (ED) by applying a calibration curve specifically constructed for the corresponding CT tube voltage. This practice limits the use of the CT scanner to a single tube voltage and hinders freedom in the selection of the optimal tube voltage for better image quality. The objective of this study is to report a prototype CT reconstruction algorithm that provides direct ED images from the raw CT data independently of the tube voltage used during acquisition. Methods: A tissue substitute phantom was scanned for Stoichiometric CT calibrations at tube voltages of 70kV, 80kV, 100kV, 120kV and 140kV respectively. HU images and direct ED images were acquired sequentially on a thoracic anthropomorphic phantom at the same tube voltages. Electron densities converted from the HU images were compared to EDs obtained from the direct ED images. A 7-field treatment plan was made on all HU and ED images. Gamma analysis was performed to quantitatively demonstrate the dosimetric difference between the two schemes for acquiring ED. Results: The average deviation of EDs obtained from the direct ED images was −1.5% ± 2.1% from the EDs derived from HU images with the corresponding CT calibration curves applied. Gamma analysis of dose calculated on the direct ED images and the HU images acquired at the same tube voltage indicated negligible difference, with the lowest passing rate at 99.9%. Conclusion: Direct ED images require no CT calibration while demonstrating dosimetry equivalent to that obtained from standard HU images. The ability to acquire direct ED images simplifies current practice and makes it safer by eliminating CT calibration and HU conversion from commissioning and treatment planning respectively. Furthermore, it unlocks a wider range of tube voltages in the CT scanner for better imaging quality while maintaining similar dosimetric accuracy.
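The conventional scheme that this prototype algorithm replaces, converting HU to ED through a kV-specific calibration curve, can be sketched as follows. The calibration breakpoints below are hypothetical; real stoichiometric curves are scanner- and tube-voltage-specific:

```python
import numpy as np

# Hypothetical stoichiometric calibration curve for one tube voltage:
# (HU, relative electron density) pairs measured on a tissue-substitute
# phantom. A clinic would hold one such curve per tube voltage.
CAL_120KV = [(-1000, 0.00), (0, 1.00), (100, 1.07), (1500, 1.70)]

def hu_to_ed(hu, calibration):
    """Convert Hounsfield Units to relative electron density by
    piecewise-linear interpolation along the calibration curve."""
    hu_pts, ed_pts = zip(*calibration)
    return np.interp(hu, hu_pts, ed_pts)

# Air, water, and a soft-tissue-like voxel.
eds = hu_to_ed(np.array([-1000.0, 0.0, 50.0]), CAL_120KV)
```

A direct-ED reconstruction, as described in the abstract, removes this per-voltage lookup entirely, which is what frees the scanner to use any tube voltage.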
microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling
NASA Astrophysics Data System (ADS)
Comi, Troy J.; Neumann, Elizabeth K.; Do, Thanh D.; Sweedler, Jonathan V.
2017-09-01
Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
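A central step in any image-guided MS workflow of this kind is mapping the pixel coordinates of targets selected in the whole-slide image to instrument stage coordinates. A minimal sketch of such a mapping, using a least-squares affine fit to fiducial points, is shown below; the fiducial values are invented, and the exact registration procedure used by microMS may differ:

```python
import numpy as np

def fit_affine(pixels, stage):
    """Least-squares affine transform mapping image pixel coordinates
    to instrument stage coordinates from >= 3 fiducial points."""
    px = np.asarray(pixels, float)
    A = np.hstack([px, np.ones((len(px), 1))])   # rows of [x, y, 1]
    coef, *_ = np.linalg.lstsq(A, np.asarray(stage, float), rcond=None)
    return coef                                  # 3x2 coefficient matrix

def apply_affine(coef, pixels):
    """Map pixel coordinates into stage coordinates."""
    px = np.asarray(pixels, float)
    return np.hstack([px, np.ones((len(px), 1))]) @ coef

# Three fiducials defining a pure scale-and-offset mapping (0.02 mm/px).
pixels = [(0, 0), (100, 0), (0, 100)]
stage = [(10.0, 20.0), (12.0, 20.0), (10.0, 22.0)]   # in mm
coef = fit_affine(pixels, stage)

# A target picked at pixel (50, 50) lands midway between the fiducials.
target = apply_affine(coef, [(50, 50)])
```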
microMS: A Python Platform for Image-Guided Mass Spectrometry Profiling.
Comi, Troy J; Neumann, Elizabeth K; Do, Thanh D; Sweedler, Jonathan V
2017-09-01
Image-guided mass spectrometry (MS) profiling provides a facile framework for analyzing samples ranging from single cells to tissue sections. The fundamental workflow utilizes a whole-slide microscopy image to select targets of interest, determine their spatial locations, and subsequently perform MS analysis at those locations. Improving upon prior reported methodology, a software package was developed for working with microscopy images. microMS, for microscopy-guided mass spectrometry, allows the user to select and profile diverse samples using a variety of target patterns and mass analyzers. Written in Python, the program provides an intuitive graphical user interface to simplify image-guided MS for novice users. The class hierarchy of instrument interactions permits integration of new MS systems while retaining the feature-rich image analysis framework. microMS is a versatile platform for performing targeted profiling experiments using a series of mass spectrometers. The flexibility in mass analyzers greatly simplifies serial analyses of the same targets by different instruments. The current capabilities of microMS are presented, and its application for off-line analysis of single cells on three distinct instruments is demonstrated. The software has been made freely available for research purposes.
Survey of contemporary trends in color image segmentation
NASA Astrophysics Data System (ADS)
Vantaram, Sreenath Rao; Saber, Eli
2012-10-01
In recent years, the acquisition of image and video information for processing, analysis, understanding, and exploitation of the underlying content in various applications, ranging from remote sensing to biomedical imaging, has grown at an unprecedented rate. Analysis by human observers is quite laborious, tiresome, and time consuming, if not infeasible, given the large and continuously rising volume of data. Hence the need for systems capable of automatically and effectively analyzing the aforementioned imagery for a variety of uses that span the spectrum from homeland security to elderly care. In order to achieve the above, tools such as image segmentation provide the appropriate foundation for expediting and improving the effectiveness of subsequent high-level tasks by providing a condensed and pertinent representation of image information. We provide a comprehensive survey of color image segmentation strategies adopted over the last decade, though notable contributions in the gray scale domain will also be discussed. Our taxonomy of segmentation techniques is sampled from a wide spectrum of spatially blind (or feature-based) approaches such as clustering and histogram thresholding as well as spatially guided (or spatial domain-based) methods such as region growing/splitting/merging, energy-driven parametric/geometric active contours, supervised/unsupervised graph cuts, and watersheds, to name a few. In addition, qualitative and quantitative results of prominent algorithms on several images from the Berkeley segmentation dataset are shown in order to furnish a fair indication of the current quality of the state of the art. Finally, we provide a brief discussion on our current perspective of the field as well as its associated future trends.
Quantitative real-time analysis of collective cancer invasion and dissemination
NASA Astrophysics Data System (ADS)
Ewald, Andrew J.
2015-05-01
A grand challenge in biology is to understand the cellular and molecular basis of tissue and organ level function in mammals. The ultimate goals of such efforts are to explain how organs arise in development from the coordinated actions of their constituent cells and to determine how molecularly regulated changes in cell behavior alter the structure and function of organs during disease processes. Two major barriers stand in the way of achieving these goals: the relative inaccessibility of cellular processes in mammals and the daunting complexity of the signaling environment inside an intact organ in vivo. To overcome these barriers, we have developed a suite of tissue isolation, three dimensional (3D) culture, genetic manipulation, nanobiomaterials, imaging, and molecular analysis techniques to enable the real-time study of cell biology within intact tissues in physiologically relevant 3D environments. This manuscript introduces the rationale for 3D culture, reviews challenges to optical imaging in these cultures, and identifies current limitations in the analysis of complex experimental designs that could be overcome with improved imaging, imaging analysis, and automated classification of the results of experimental interventions.
Nanoscale imaging of magnetization reversal driven by spin-orbit torque
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, Ian; Chen, P. J.; Gopman, Daniel B.
We use scanning electron microscopy with polarization analysis to image deterministic, spin-orbit torque-driven magnetization reversal of in-plane magnetized CoFeB rectangles in zero applied magnetic field. The spin-orbit torque is generated by running a current through heavy metal microstrips, either Pt or Ta, upon which the CoFeB rectangles are deposited. We image the CoFeB magnetization before and after a current pulse to see the effect of spin-orbit torque on the magnetic nanostructure. The observed changes in magnetic structure can be complex, deviating significantly from a simple macrospin approximation, especially in larger elements. Overall, however, the directions of the magnetization reversal in the Pt and Ta devices are opposite, consistent with the opposite signs of the spin Hall angles of these materials. Lastly, our results elucidate the effects of current density, geometry, and magnetic domain structure on magnetization switching driven by spin-orbit torque.
Nanoscale imaging of magnetization reversal driven by spin-orbit torque
Gilbert, Ian; Chen, P. J.; Gopman, Daniel B.; ...
2016-09-23
We use scanning electron microscopy with polarization analysis to image deterministic, spin-orbit torque-driven magnetization reversal of in-plane magnetized CoFeB rectangles in zero applied magnetic field. The spin-orbit torque is generated by running a current through heavy metal microstrips, either Pt or Ta, upon which the CoFeB rectangles are deposited. We image the CoFeB magnetization before and after a current pulse to see the effect of spin-orbit torque on the magnetic nanostructure. The observed changes in magnetic structure can be complex, deviating significantly from a simple macrospin approximation, especially in larger elements. Overall, however, the directions of the magnetization reversal in the Pt and Ta devices are opposite, consistent with the opposite signs of the spin Hall angles of these materials. Lastly, our results elucidate the effects of current density, geometry, and magnetic domain structure on magnetization switching driven by spin-orbit torque.
Three-dimensional head anthropometric analysis
NASA Astrophysics Data System (ADS)
Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James
2003-05-01
Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care, but they are subject to errors of perspective and projection and lack metric, three-dimensional information. One can find in the literature a variety of methods to generate 3-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging and even CT; however, each of these methods contains inherent limitations, and as such no system is in common clinical use. In this paper we will focus on the development of indirect 3-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We will statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We will also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh), including linear distances currently used in anthropometry. The measurements will be tested against a validated 3-dimensional digitizer (MicroScribe 3DX).
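The linear distances mentioned above reduce, for a pair of 3-D landmarks on the mesh, to Euclidean distances. A minimal sketch, with hypothetical landmark names and coordinates chosen only for illustration:

```python
import numpy as np

def landmark_distance(p, q):
    """Linear (Euclidean) distance between two 3-D landmarks, in the
    same units as the coordinates (here mm)."""
    return float(np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)))

# Hypothetical soft-tissue landmarks on a head mesh, in mm: the outer
# eye corners (right and left exocanthion).
exocanthion_r = (42.0, 61.0, 55.0)
exocanthion_l = (-41.0, 60.0, 54.0)

# Biocular width, one of the linear distances used in anthropometry.
width = landmark_distance(exocanthion_r, exocanthion_l)
```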
Methods for the analysis of ordinal response data in medical image quality assessment.
Keeble, Claire; Baxter, Paul D; Gislason-Lee, Amber J; Treadgold, Laura A; Davies, Andrew G
2016-07-01
The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
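The weakness of averaging consecutive category codes can be seen in a small sketch: two hypothetical observers with identical mean scores but very different rating distributions, which an ordinal summary (the cumulative proportions underlying a proportional-odds model) keeps distinct. The data are invented for illustration:

```python
import numpy as np

# Two observers' Likert ratings (1 = poor ... 5 = excellent) of the
# same six images. Hypothetical data.
obs_a = np.array([3, 3, 3, 3, 3, 3])
obs_b = np.array([1, 5, 1, 5, 1, 5])

# Assigning consecutive numbers and averaging hides the disagreement:
# both observers get a mean score of exactly 3.0.
assert obs_a.mean() == obs_b.mean() == 3.0

# A cumulative-proportion summary, the quantity a proportional-odds
# model is built on, preserves the category structure and separates
# the two observers.
cats = np.arange(1, 6)
cum_a = [(obs_a <= c).mean() for c in cats]
cum_b = [(obs_b <= c).mean() for c in cats]
```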
Rip current evidence by hydrodynamic simulations, bathymetric surveys and UAV observation
NASA Astrophysics Data System (ADS)
Benassai, Guido; Aucelli, Pietro; Budillon, Giorgio; De Stefano, Massimo; Di Luccio, Diana; Di Paola, Gianluigi; Montella, Raffaele; Mucerino, Luigi; Sica, Mario; Pennetta, Micla
2017-09-01
The prediction of the formation, spacing and location of rip currents is a scientific challenge that can be addressed by means of different complementary methods. In this paper the analysis of numerical and experimental data, including RPAS (remotely piloted aircraft systems) observations, allowed us to detect the presence of rip currents and rip channels at the mouth of the Sele River, in the Gulf of Salerno, southern Italy. The dataset used to analyze these phenomena consisted of two different bathymetric surveys, a detailed sediment analysis and a set of high-resolution numerical wave simulations, complemented with Google Earth™ images and RPAS observations. The grain size trend analysis and the numerical simulations allowed us to identify the occurrence of rip currents, forced by topographically constrained channels incised in the seabed, which were compared with observations.
FFDM image quality assessment using computerized image texture analysis
NASA Astrophysics Data System (ADS)
Berger, Rachelle; Carton, Ann-Katherine; Maidment, Andrew D. A.; Kontos, Despina
2010-04-01
Quantitative measures of image quality (IQ) are routinely obtained during the evaluation of imaging systems. These measures, however, do not necessarily correlate with the IQ of the actual clinical images, which can also be affected by factors such as patient positioning. No quantitative method currently exists to evaluate clinical IQ. Therefore, we investigated the potential of using computerized image texture analysis to quantitatively assess IQ. Our hypothesis is that image texture features can be used to assess IQ as a measure of the image signal-to-noise ratio (SNR). To test feasibility, the "Rachel" anthropomorphic breast phantom (Model 169, Gammex RMI) was imaged with a Senographe 2000D FFDM system (GE Healthcare) using 220 unique exposure settings (target/filter, kV, and mAs combinations). The mAs were varied from 10% to 300% of that required for an average glandular dose (AGD) of 1.8 mGy. A 2.5 cm² retroareolar region of interest (ROI) was segmented from each image. The SNR was computed from the ROIs segmented from images linear with dose (i.e., raw images) after flat-field and offset correction. Image texture features of skewness, coarseness, contrast, energy, homogeneity, and fractal dimension were computed from the Premium View™ postprocessed image ROIs. Multiple linear regression demonstrated a strong association between the computed image texture features and SNR (R² = 0.92, p ≤ 0.001). When including kV, target and filter as additional predictor variables, a stronger association with SNR was observed (R² = 0.95, p ≤ 0.001). The strong associations indicate that computerized image texture analysis can be used to measure image SNR and potentially aid in automating IQ assessment as a component of the clinical workflow. Further work is underway to validate our findings in larger clinical datasets.
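Assuming SNR is defined as the mean over the standard deviation of the dose-linear ROI (the abstract does not give the exact definition, so this and the simulated data are assumptions), the measurement step can be sketched as:

```python
import numpy as np

def roi_snr(roi):
    """Signal-to-noise ratio of a region of interest in a raw,
    dose-linear image: mean signal over its standard deviation."""
    roi = np.asarray(roi, float)
    return roi.mean() / roi.std()

# Simulated flat-field- and offset-corrected ROI: a uniform signal of
# 100 with additive Gaussian noise of standard deviation 5, so the
# expected SNR is about 20.
rng = np.random.default_rng(0)
roi = 100.0 + rng.normal(0.0, 5.0, size=(64, 64))
snr = roi_snr(roi)
```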
Catalog of microscopic organisms of the Everglades, Part 1—The cyanobacteria
Rosen, Barry H.; Mareš, Jan
2016-07-27
The microscopic organisms of the Everglades include numerous prokaryotic organisms, including the eubacteria, such as the cyanobacteria and non-photosynthetic bacteria, as well as several eukaryotic algae and protozoa that form the base of the food web. This report is part 1 in a series of reports that describe microscopic organisms encountered during the examination of several hundred samples collected in the southern Everglades. Part 1 describes the cyanobacteria and includes a suite of images and the most current taxonomic treatment of each taxon. The majority of the images are of live organisms, allowing their true color to be represented. A number of potential new species are illustrated; however, corroborating evidence from a genetic analysis of the morphological characteristics is needed to confirm these designations as new species. Part 1 also includes images of eubacteria that resemble cyanobacteria. Additional parts of the report on microscopic organisms of the Everglades are currently underway, such as the green algae and diatoms. The report also serves as the basis for a taxonomic image database that will provide a digital record of the Everglades microscopic flora and fauna. It is anticipated that these images will facilitate current and future ecological studies on the Everglades, such as understanding food-web dynamics, sediment formation and accumulation, the effects of nutrients and flow, and climate change.
Nondestructive Testing Information Analysis Center, 1979.
1980-09-01
…transmission and reflectometry; ultrasonic imaging; spectrum analysis; acoustic emission. LIQUID PENETRANT TESTING: dye penetrants, fluorescent penetrants… OPTICAL TESTING: visual testing, optical reflectometry and transmission, holography. THERMAL TESTING: infrared radiometry, thermography. The present… on our surveillance effectiveness, we also scan Current Contents, NASA/SCAN, and the monthly Engineering Index and Science Abstracts. New books…
NASA Astrophysics Data System (ADS)
El Bekri, Nadia; Angele, Susanne; Ruckhäberle, Martin; Peinsipp-Byma, Elisabeth; Haelke, Bruno
2015-10-01
This paper introduces an interactive recognition assistance system for imaging reconnaissance. The system supports aerial image analysts on missions during two main tasks: object recognition and infrastructure analysis. Object recognition concentrates on the classification of a single object. Infrastructure analysis deals with the description of the components of an infrastructure and the recognition of the infrastructure type (e.g. military airfield). Based on satellite or aerial images, aerial image analysts are able to extract single object features and thereby recognize different object types. This is one of the most challenging tasks in imaging reconnaissance. Currently, no high-potential ATR (automatic target recognition) applications are available; as a consequence, the human observer cannot be replaced entirely. State-of-the-art ATR applications cannot match human perception and interpretation in equal measure. Why is this still such a critical issue? First, cluttered and noisy images make it difficult to automatically extract, classify and identify object types. Second, due to changed warfare and the rise of asymmetric threats, it is nearly impossible to create an underlying data set containing all features, objects or infrastructure types. Many other factors, such as environmental parameters or aspect angles, further complicate the application of ATR. Due to the lack of suitable ATR procedures, the human factor is still important and so far irreplaceable. In order to use the potential benefits of human perception and computational methods in a synergistic way, both are unified in an interactive assistance system. RecceMan® (Reconnaissance Manual) offers two different modes for aerial image analysts on missions: the object recognition mode and the infrastructure analysis mode. The aim of the object recognition mode is to recognize a certain object type based on the object features that originated from the image signatures.
The infrastructure analysis mode aims to analyze the function of the infrastructure. The image analyst visually extracts certain target object signatures, assigns them to corresponding object features and is finally able to recognize the object type. The system offers the possibility to assign the image signatures to features given by sample images. The underlying data set contains a wide range of object features and object types for different domains such as ships or land vehicles. Each domain has its own feature tree developed by expert aerial image analysts. By selecting the corresponding features, the possible solution set of objects is automatically reduced to only the objects that contain the selected features. Moreover, we give an outlook on current research in the field of ground target analysis, in which we deal with partly automated methods to extract image signatures and assign them to the corresponding features. This research includes methods for automatically determining the orientation of an object and geometric features such as the width and length of the object. This step makes it possible to automatically reduce the set of object types offered to the image analyst by the interactive recognition assistance system.
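The feature-based filtering described above can be sketched as a set operation: the analyst selects features, and the candidate set shrinks to the object types whose feature lists contain every selected feature. The object types and features below are made-up illustrations, not RecceMan's actual feature trees:

```python
# Hypothetical miniature feature catalogue for two domains (ships, vehicles).
object_types = {
    "frigate":    {"hull", "superstructure", "gun_turret"},
    "cargo_ship": {"hull", "superstructure", "deck_cranes"},
    "tank":       {"tracks", "gun_turret"},
}

def candidates(selected_features, catalogue=object_types):
    """Return the object types whose feature set contains every selected
    feature, mimicking how the assistance system reduces the solution set."""
    return {name for name, feats in catalogue.items()
            if selected_features <= feats}
```

Selecting "gun_turret" alone keeps both frigate and tank; adding "hull" narrows the set to frigate, mirroring how each extra feature the analyst assigns prunes the candidates.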
Effects of 99mTc-TRODAT-1 drug template on image quantitative analysis
Yang, Bang-Hung; Chou, Yuan-Hwa; Wang, Shyh-Jen; Chen, Jyh-Cheng
2018-01-01
99mTc-TRODAT-1 is a drug that binds to dopamine transporters in living organisms and is often used in SPECT imaging to observe changes in dopamine activity uptake in the striatum. It is therefore currently widely used in studies on the clinical diagnosis of Parkinson's disease (PD) and movement-related disorders. In conventional 99mTc-TRODAT-1 SPECT image evaluation, visual inspection or manual selection of ROIs for semiquantitative analysis is mainly used to observe and evaluate the degree of striatal defects. However, these methods depend on the subjective opinions of observers, which leads to human error; they are also time-consuming, labor-intensive, and poorly reproducible. To solve this problem, this study aimed to establish an automatic semiquantitative analytical method for 99mTc-TRODAT-1. This method combines three drug templates (one built-in SPECT template in the SPM software and two self-generated MRI-based and HMPAO-based TRODAT-1 templates) for the semiquantitative analysis of striatal phantom and clinical images. The results of the automatic analysis with the three templates were compared with results from a conventional manual analysis to examine the feasibility of automatic analysis and the effects of drug templates on the automatic semiquantitative results. The comparison showed that the MRI-based TRODAT-1 template generated from MRI images is the most suitable template for 99mTc-TRODAT-1 automatic semiquantitative analysis. PMID:29543874
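A common semiquantitative index for 99mTc-TRODAT-1 studies is the specific uptake ratio of the striatum relative to a non-specific reference region (typically occipital or cerebellar). Whether this exact index is the one computed by the study above is an assumption; the formula itself is standard:

```python
def specific_uptake_ratio(striatal_mean, reference_mean):
    """Specific uptake ratio (SUR) = (striatal - reference) / reference.
    striatal_mean and reference_mean are mean counts in the respective ROIs;
    the choice of reference region (occipital, cerebellar) is an assumption."""
    return (striatal_mean - reference_mean) / reference_mean
```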
Shen, Junlin; Du, Xiangying; Guo, Daode; Cao, Lizhen; Gao, Yan; Yang, Qi; Li, Pengyu; Liu, Jiabin; Li, Kuncheng
2013-01-01
Objectives To evaluate the clinical value of noise-based tube current reduction method with iterative reconstruction for obtaining consistent image quality with dose optimization in prospective electrocardiogram (ECG)-triggered coronary CT angiography (CCTA). Materials and Methods We performed a prospective randomized study evaluating 338 patients undergoing CCTA with prospective ECG-triggering. Patients were randomly assigned to fixed tube current with filtered back projection (Group 1, n = 113), noise-based tube current with filtered back projection (Group 2, n = 109) or noise-based tube current with iterative reconstruction (Group 3, n = 116). Tube voltage was fixed at 120 kV. Qualitative image quality was rated on a 5-point scale (1 = impaired, to 5 = excellent, with 3–5 defined as diagnostic). Image noise and signal intensity were measured; signal-to-noise ratio was calculated; radiation dose parameters were recorded. Statistical analyses included one-way analysis of variance, chi-square test, Kruskal-Wallis test and multivariable linear regression. Results Image noise was maintained at the target value of 35 HU with a small interquartile range for Group 2 (35.00–35.03 HU) and Group 3 (34.99–35.02 HU), while it ranged from 28.73 to 37.87 HU for Group 1. All images in the three groups were acceptable for diagnosis. Relative reductions in effective dose of 20% and 51% were achieved for Group 2 (2.9 mSv) and Group 3 (1.8 mSv), respectively, compared with Group 1 (3.7 mSv). After adjustment for scan characteristics, iterative reconstruction was associated with a 26% reduction in effective dose. Conclusion Noise-based tube current reduction method with iterative reconstruction maintains image noise precisely at the desired level and achieves consistent image quality. Meanwhile, effective dose can be reduced by more than 50%. PMID:23741444
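The idea of noise-based tube current selection rests on the textbook relation that quantum image noise scales roughly as one over the square root of the tube current-time product. A minimal sketch of the implied mAs adjustment follows; this is the generic physics relation, not the scanner vendor's proprietary algorithm:

```python
def mas_for_target_noise(mas_ref, noise_ref, noise_target):
    """Quantum noise scales approximately as 1/sqrt(mAs), so holding noise
    at a target value implies mAs_target = mAs_ref * (noise_ref/noise_target)^2.
    mas_ref and noise_ref come from a reference acquisition (e.g. the scout
    or a prior scan of a similar patient); a simplifying assumption."""
    return mas_ref * (noise_ref / noise_target) ** 2
```

For example, halving the noise from 70 HU to the 35 HU target quadruples the required mAs, which is why precise per-patient tuning pays off in dose.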
Temporal and spatial variations of sea surface temperature in the East China Sea
NASA Astrophysics Data System (ADS)
Tseng, Chente; Lin, Chiyuan; Chen, Shihchin; Shyu, Chungzen
2000-03-01
Sea surface temperatures (SST) of the East China Sea (ECS) were analyzed using NOAA/AVHRR SST images. These satellite images reveal surface features of the ECS, mainly including the Kuroshio Current, Kuroshio Branch Current, Taiwan Warm Current, China coastal water, Changjiang diluted water and Yellow Sea mixed cold water. The SST of the ECS ranges from 27 to 29°C in summer; some cold eddies were found off northeast Taiwan and to the south of the Changjiang mouth. SST anomalies at the center of these eddies were about 2-5°C. The strongest front usually occurs in May each year, with a temperature gradient of about 5-6°C over a cross-shelf distance of 30 nautical miles. The Yellow Sea mixed cold water also provides a contrast with the China coastal waters shoreward of the 50 m isobath; the cross-shore temperature gradient is about 6-8°C over 30 nautical miles. The Kuroshio intrudes into the ECS at two preferred locations. The first is off northeast Taiwan, where the subsurface water of the Kuroshio is upwelled onto the shelf while the main current is deflected seaward. The second site is located at 31°N and 128°E, which is generally considered the origin of the Tsushima Warm Current. More quantitatively, a 2-year time series of monthly SST images is examined using EOF analysis to determine the spatial and temporal variations in the northwestern portion of the ECS. The first spatial EOF mode accounts for 47.4% of total spatial variance and reveals the Changjiang plume and coastal cold waters off China. The second and third EOF modes account for 16.4 and 9.6% of total variance, respectively, and their eigenvector images show the intrusion of Yellow Sea mixed cold waters and the China coastal water. The fourth EOF mode accounts for 5.4% of total variance and reveals cold eddies around the Chusan Islands. The temporal variance EOF analysis is less revealing in this study area.
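An EOF decomposition like the one above is commonly computed as an SVD of the anomaly matrix (time along rows, grid points along columns); the per-mode variance fractions correspond to the percentages the abstract reports. A minimal sketch, not the authors' exact processing chain:

```python
import numpy as np

def eof_analysis(sst, n_modes=4):
    """EOF decomposition of a (time x space) SST matrix via SVD of the
    temporal anomalies. Returns the leading spatial modes (eigenvector
    images), their time coefficients, and the fraction of total variance
    each mode explains."""
    anom = sst - sst.mean(axis=0)                 # remove temporal mean per pixel
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    variance = s ** 2 / (s ** 2).sum()            # variance fraction per mode
    return vt[:n_modes], u[:, :n_modes] * s[:n_modes], variance[:n_modes]
```

On a field dominated by one oscillating pattern, the first mode captures nearly all the variance, analogous to the 47.4% leading mode found for the Changjiang plume region.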
Fully automated analysis of multi-resolution four-channel micro-array genotyping data
NASA Astrophysics Data System (ADS)
Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.
2006-03-01
We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron with sizes up to 80 MBs per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining genotypes of multiple genetic markers in individuals. It plays an important role in the state-of-the-art trend in which traditional medical treatments are to be replaced by personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.
Texture classification of lung computed tomography images
NASA Astrophysics Data System (ADS)
Pheng, Hang See; Shamsuddin, Siti M.
2013-03-01
Current development of algorithms in computer-aided diagnosis (CAD) schemes is growing rapidly to assist radiologists in medical image interpretation. Texture analysis of computed tomography (CT) scans is one of the important preliminary stages in computerized detection systems and classification for lung cancer. Among the different types of image feature analysis, Haralick texture with a variety of statistical measures has been widely used in image texture description. The extraction of texture feature values is essential for use by a CAD system, especially in the classification of normal and abnormal tissue on cross-sectional CT images. This paper compares experimental results using texture extraction and different machine learning methods in the classification of normal and abnormal tissues in lung CT images. The machine learning methods involved in this assessment are Artificial Immune Recognition System (AIRS), Naive Bayes, Decision Tree (J48) and Backpropagation Neural Network. AIRS is found to provide high accuracy (99.2%) and sensitivity (98.0%) in the assessment. For experiment and testing purposes, publicly available datasets in the Reference Image Database to Evaluate Therapy Response (RIDER) are used as study cases.
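Haralick features are derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch for one pixel offset follows, computing two of the classic measures (contrast and energy); library implementations such as scikit-image's graycomatrix support multiple offsets, angles and symmetrization:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized grey-level co-occurrence matrix for a single (dx, dy)
    offset, after quantizing a uint8 image to `levels` grey levels."""
    q = (img.astype(float) / 256 * levels).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def haralick_contrast_energy(img):
    """Two classic Haralick measures from the GLCM: contrast (weighted by
    squared grey-level difference) and energy (sum of squared entries)."""
    p = glcm(img)
    i, j = np.indices(p.shape)
    return ((i - j) ** 2 * p).sum(), (p ** 2).sum()
```

Feature vectors built from such measures are what classifiers like Naive Bayes, J48 or AIRS consume in the comparison the abstract describes.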
Cunefare, David; Cooper, Robert F; Higgins, Brian; Katz, David F; Dubra, Alfredo; Carroll, Joseph; Farsiu, Sina
2016-05-01
Quantitative analysis of the cone photoreceptor mosaic in the living retina is potentially useful for early diagnosis and prognosis of many ocular diseases. Non-confocal split detector based adaptive optics scanning light ophthalmoscope (AOSLO) imaging reveals the cone photoreceptor inner segment mosaics often not visualized on confocal AOSLO imaging. Despite recent advances in automated cone segmentation algorithms for confocal AOSLO imagery, quantitative analysis of split detector AOSLO images is currently a time-consuming manual process. In this paper, we present the fully automatic adaptive filtering and local detection (AFLD) method for detecting cones in split detector AOSLO images. We validated our algorithm on 80 images from 10 subjects, showing an overall mean Dice's coefficient of 0.95 (standard deviation 0.03), when comparing our AFLD algorithm to an expert grader. This is comparable to the inter-observer Dice's coefficient of 0.94 (standard deviation 0.04). To the best of our knowledge, this is the first validated, fully-automated segmentation method which has been applied to split detector AOSLO images.
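Dice's coefficient, the agreement measure reported above, is 2|A∩B| / (|A| + |B|). The sketch below applies it to binary masks; the paper applies it to matched cone detections, but the formula is the same:

```python
import numpy as np

def dice(a, b):
    """Dice's coefficient between two binary masks: 2*|A∩B| / (|A|+|B|).
    Returns 1.0 for two empty masks by convention."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A value of 0.95 between algorithm and grader, against 0.94 between two graders, is how the abstract argues the automated method matches human performance.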
Exploitation of SAR data for measurement of ocean currents and wave velocities
NASA Technical Reports Server (NTRS)
Shuchman, R. A.; Lyzenga, D. R.; Klooster, A., Jr.
1981-01-01
Methods of extracting information on ocean currents and wave orbital velocities from SAR data by an analysis of the Doppler frequency content of the data are discussed. The theory and data analysis methods are described, and results are presented for both aircraft and satellite (SEASAT) data sets. A method of measuring the phase velocity of a gravity wave field is also described. This method uses the shift in position of the wave crests on two images generated from the same data set using two separate Doppler bands. Results of the current measurements are presented for 11 aircraft data sets and 4 SEASAT data sets.
NASA Astrophysics Data System (ADS)
Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.
2015-12-01
According to changes in aircraft certifications rules, instrumentation has to be developed to alert the flight crews of potential icing conditions. The technique developed needs to measure in real time the amount of ice and liquid water encountered by the plane. Interferometric imaging offers an interesting solution: It is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing needs to be speeded up to be compatible with the real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The algorithm proposed is based on the detection of each interferogram with the use of the gradient pair vector method. This method is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed phase clouds, and finally tested and validated in laboratory conditions. This algorithm should have important applications in the size measurement of droplets and ice particles for aircraft safety, cloud microphysics investigation, and more generally in the real-time analysis of triphasic flows using interferometric particle imaging.
Hyperspectral Raman imaging of bone growth and regrowth chemistry
NASA Astrophysics Data System (ADS)
Pezzuti, Jerilyn A.; Morris, Michael D.; Bonadio, Jeffrey F.; Goldstein, Steven A.
1998-06-01
Hyperspectral Raman microscopic imaging of carbonated hydroxyapatite (HAP) is used to follow the chemistry of bone growth and regrowth. Deep red excitation is employed to minimize protein fluorescence interference. A passive line generator based on Powell lens optics and a motorized translation stage provide the imaging capabilities. Raman image contrast is generated from several lines of the HAP Raman spectrum, primarily the PO4^3- band. Factor analysis is used to minimize the integration time needed for acceptable contrast and to explore the chemical species within the bone. Bone age is visualized as variations in image intensity. High definition, high resolution images of newly formed bone and mature bone are compared qualitatively. The technique is currently under evaluation for study of experimental therapies for fracture repair.
Two-dimensional PCA-based human gait identification
NASA Astrophysics Data System (ADS)
Chen, Jinyan; Wu, Rongteng
2012-11-01
It is necessary to automatically recognize people through visual surveillance for public security reasons. Human gait based identification focuses on automatically recognizing humans from walking videos using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper a human gait identification method based on two-dimensional principal component analysis and temporal-space analysis is proposed. Using background estimation and image subtraction we can get a binary image sequence from the surveillance video. By comparing the difference of two adjacent images in the gait image sequence, we can get a difference binary image sequence. Every binary difference image indicates the body's moving mode while a person is walking. We use the following steps to extract the temporal-space features from the difference binary image sequence: projecting one difference image onto the Y axis or X axis yields two vectors; projecting every difference image in the sequence onto the Y axis or X axis yields two matrices. These two matrices characterize one walking sequence. Two-dimensional principal component analysis (2DPCA) is then used to transform these two matrices into two vectors while keeping the maximum separability. Finally, the similarity of two human gait image sequences is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
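The projection and 2DPCA steps above can be sketched as follows; this is a minimal reading of the abstract (axis projections stacked into a frames-by-pixels matrix, then 2DPCA via the image covariance matrix), not the authors' exact implementation:

```python
import numpy as np

def axis_projection_matrix(diff_images, axis=0):
    """Stack the per-frame axis projections of binary difference images
    into one (frames x width or height) matrix, as described above."""
    return np.array([d.sum(axis=axis) for d in diff_images], dtype=float)

def two_dpca(mats, k=2):
    """2DPCA: eigenvectors of the image covariance
    G = mean((A - Abar)^T (A - Abar)); each matrix is projected onto the
    top-k eigenvectors to get a compact feature matrix."""
    mean = np.mean(mats, axis=0)
    G = np.mean([(a - mean).T @ (a - mean) for a in mats], axis=0)
    w, v = np.linalg.eigh(G)
    proj = v[:, np.argsort(w)[::-1][:k]]          # top-k eigenvectors
    return [a @ proj for a in mats], proj

def gait_distance(f1, f2):
    """Euclidean distance between flattened feature matrices, used as the
    similarity measure between two gait sequences."""
    return np.linalg.norm(np.ravel(f1) - np.ravel(f2))
```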
Utilization of a multimedia PACS workstation for surgical planning of epilepsy
NASA Astrophysics Data System (ADS)
Soo Hoo, Kent; Wong, Stephen T.; Hawkins, Randall A.; Knowlton, Robert C.; Laxer, Kenneth D.; Rowley, Howard A.
1997-05-01
Surgical treatment of temporal lobe epilepsy requires the localization of the epileptogenic zone for surgical resection. Currently, clinicians utilize electroencephalography, various neuroimaging modalities, and psychological tests together to determine the location of this zone. We investigate how a multimedia neuroimaging workstation built on top of the UCSF Picture Archiving and Communication System can be used to aid surgical planning of epilepsy and related brain diseases. This usage demonstrates the ability of the workstation to retrieve image and textual data from PACS and other image sources, register multimodality images, visualize and render 3D data sets, analyze images, generate new image and text data from the analysis, and organize all data in a relational database management system.
NASA Astrophysics Data System (ADS)
Shatravin, V.; Shashev, D. V.
2018-05-01
Currently, robots are increasingly being used in every industry. One of the most high-tech areas is the creation of completely autonomous robotic devices, including vehicles. The results of various global research efforts prove the efficiency of vision systems in autonomous robotic devices. However, the use of these systems is limited by the computational and energy resources available in the robotic device. The paper describes the results of applying an original approach to image processing on reconfigurable computing environments, using the example of morphological operations over grayscale images. This approach is promising for realizing complex image processing algorithms and real-time image analysis in autonomous robotic devices.
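For reference, a grayscale morphological operation such as erosion replaces each pixel with the minimum over a structuring element (dilation uses the maximum). A reconfigurable computing environment evaluates such local operators for all pixels in parallel; the serial sketch below only illustrates the semantics, not the paper's hardware mapping:

```python
import numpy as np

def grey_erode(img, k=3):
    """Grayscale erosion with a flat k x k structuring element: each output
    pixel is the minimum over its k x k neighbourhood (edge-padded)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = p[y:y + k, x:x + k].min()
    return out
```

Erosion suppresses isolated bright pixels, which is why erosion/dilation pairs are the usual building blocks for noise cleanup before higher-level analysis.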
Towards real-time medical diagnostics using hyperspectral imaging technology
NASA Astrophysics Data System (ADS)
Bjorgan, Asgeir; Randeberg, Lise L.
2015-07-01
Hyperspectral imaging provides non-contact, high-resolution spectral images which have substantial diagnostic potential. This can be used, for example, for diagnosis and early detection of arthritis in finger joints. Processing speed is currently a limitation for clinical use of the technique. A real-time system for analysis and visualization using GPU processing and threaded CPU processing is presented. Images showing blood oxygenation, blood volume fraction and vessel-enhanced images are among the data calculated in real time. This study shows the potential of real-time processing in this context. A combination of the processing modules will be used in the detection of arthritic finger joints from hyperspectral reflectance and transmittance data.
NASA Technical Reports Server (NTRS)
Adrian, M. L.; Gallagher, D. L.; Khazanov, G. V.; Chsang, S. W.; Liemohn, M. W.; Perez, J. D.; Green, J. L.; Sandel, B. R.; Mitchell, D. G.; Mende, S. B.;
2002-01-01
During a geomagnetic storm on 24 May 2000, the IMAGE Extreme Ultraviolet (EUV) camera observed a plasmaspheric density trough in the evening sector at L-values inside the plasmapause. Forward modeling of this feature has indicated that plasmaspheric densities beyond the outer wall of the trough are well below model expectations. This diminished plasma condition suggests the presence of an erosion process due to the interaction of the plasmasphere with ring current plasmas. We present an overview of EUV, energetic neutral atom (ENA), and Far Ultraviolet (FUV) camera observations associated with the plasmaspheric density trough of 24 May 2000, as well as forward modeling evidence of the existence of a plasmaspheric erosion process during this period. FUV proton aurora image analysis, convolution of ENA observations, and ring current modeling are then presented in an effort to associate the observed erosion with coupling between the plasmasphere and ring-current plasmas.
Kashiha, Mohammad Amin; Green, Angela R; Sales, Tatiana Glogerley; Bahr, Claudia; Berckmans, Daniel; Gates, Richard S
2014-10-01
Image processing systems have been widely used in monitoring livestock for many applications, including identification, tracking, behavior analysis, occupancy rates, and activity calculations. The primary goal of this work was to quantify image processing performance when monitoring laying hens by comparing length of stay in each compartment as detected by the image processing system with the actual occurrences registered by human observations. In this work, an image processing system was implemented and evaluated for use in an environmental animal preference chamber to detect hen navigation between 4 compartments of the chamber. One camera was installed above each compartment to produce top-view images of the whole compartment. An ellipse-fitting model was applied to captured images to detect whether the hen was present in a compartment. During a choice-test study, mean ± SD success detection rates of 95.9 ± 2.6% were achieved when considering total duration of compartment occupancy. These results suggest that the image processing system is currently suitable for determining the response measures for assessing environmental choices. Moreover, the image processing system offered a comprehensive analysis of occupancy while substantially reducing data processing time compared with the time-intensive alternative of manual video analysis. The above technique was used to monitor ammonia aversion in the chamber. As a preliminary pilot study, different levels of ammonia were applied to different compartments while hens were allowed to navigate between compartments. Using the automated monitor tool to assess occupancy, a negative trend of compartment occupancy with ammonia level was revealed, though further examination is needed. ©2014 Poultry Science Association Inc.
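The detection step described above (top-view camera, per-compartment presence decision, ellipse fitting) can be sketched with background subtraction and image moments. This is a toy reading of the pipeline, with the threshold and minimum-area values chosen arbitrarily:

```python
import numpy as np

def detect_presence(frame, background, thresh=30, min_area=200):
    """Decide whether a hen is present in one compartment image: subtract
    the background, threshold, and require a large enough foreground blob.
    If present, approximate an ellipse from the blob's second-order moments
    (centre and half-axis lengths). Threshold and min_area are illustrative."""
    fg = np.abs(frame.astype(int) - background.astype(int)) > thresh
    ys, xs = np.nonzero(fg)
    if xs.size < min_area:
        return False, None
    cx, cy = xs.mean(), ys.mean()
    cov = np.cov(np.vstack([xs - cx, ys - cy]))   # second-order moments
    axes = 2.0 * np.sqrt(np.linalg.eigvalsh(cov)) # approximate half-axes
    return True, (cx, cy, axes)
```

Running this per camera and per frame, and accumulating the per-compartment True durations, yields the occupancy times that the study compared against human observations.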
Habitable Exoplanet Imager Optical-Mechanical Design and Analysis
NASA Technical Reports Server (NTRS)
Gaskins, Jonathan; Stahl, H. Philip
2017-01-01
The Habitable Exoplanet Imager (HabEx) is a space telescope currently in development whose mission includes finding and spectroscopically characterizing exoplanets. Effective high-contrast imaging requires tight stability requirements of the mirrors to prevent issues such as line of sight and wavefront errors. PATRAN and NASTRAN were used to model updates in the design of the HabEx telescope and find how those updates affected stability. Most of the structural modifications increased first mode frequencies and improved line of sight errors. These studies will be used to help define the baseline HabEx telescope design.
Davatzikos, Christos; Rathore, Saima; Bakas, Spyridon; Pati, Sarthak; Bergman, Mark; Kalarot, Ratheesh; Sridharan, Patmaa; Gastounioti, Aimilia; Jahani, Nariman; Cohen, Eric; Akbari, Hamed; Tunc, Birkan; Doshi, Jimit; Parker, Drew; Hsieh, Michael; Sotiras, Aristeidis; Li, Hongming; Ou, Yangming; Doot, Robert K; Bilello, Michel; Fan, Yong; Shinohara, Russell T; Yushkevich, Paul; Verma, Ragini; Kontos, Despina
2018-01-01
The growth of multiparametric imaging protocols has paved the way for quantitative imaging phenotypes that predict treatment response and clinical outcome, reflect underlying cancer molecular characteristics and spatiotemporal heterogeneity, and can guide personalized treatment planning. This growth has underlined the need for efficient quantitative analytics to derive high-dimensional imaging signatures of diagnostic and predictive value in this emerging era of integrated precision diagnostics. This paper presents cancer imaging phenomics toolkit (CaPTk), a new and dynamically growing software platform for analysis of radiographic images of cancer, currently focusing on brain, breast, and lung cancer. CaPTk leverages the value of quantitative imaging analytics along with machine learning to derive phenotypic imaging signatures, based on two-level functionality. First, image analysis algorithms are used to extract comprehensive panels of diverse and complementary features, such as multiparametric intensity histogram distributions, texture, shape, kinetics, connectomics, and spatial patterns. At the second level, these quantitative imaging signatures are fed into multivariate machine learning models to produce diagnostic, prognostic, and predictive biomarkers. Results from clinical studies in three areas are shown: (i) computational neuro-oncology of brain gliomas for precision diagnostics, prediction of outcome, and treatment planning; (ii) prediction of treatment response for breast and lung cancer, and (iii) risk assessment for breast cancer.
Zehri, Aqib H.; Ramey, Wyatt; Georges, Joseph F.; Mooney, Michael A.; Martirosyan, Nikolay L.; Preul, Mark C.; Nakaji, Peter
2014-01-01
Background: The clinical application of fluorescent contrast agents (fluorescein, indocyanine green, and aminolevulinic acid) with intraoperative microscopy has led to advances in intraoperative brain tumor imaging. Their properties, mechanism of action, history of use, and safety are analyzed in this report along with a review of current laser scanning confocal endomicroscopy systems. Additional imaging modalities with potential neurosurgical utility are also analyzed. Methods: A comprehensive literature search was performed utilizing PubMed and key words: In vivo confocal microscopy, confocal endomicroscopy, fluorescence imaging, in vivo diagnostics/neoplasm, in vivo molecular imaging, and optical imaging. Articles were reviewed that discussed clinically available fluorophores in neurosurgery, confocal endomicroscopy instrumentation, confocal microscopy systems, and intraoperative cancer diagnostics. Results: Current clinically available fluorescent contrast agents have specific properties that provide microscopic delineation of tumors when imaged with laser scanning confocal endomicroscopes. Other imaging modalities such as coherent anti-Stokes Raman scattering (CARS) microscopy, confocal reflectance microscopy, fluorescent lifetime imaging (FLIM), two-photon microscopy, and second harmonic generation may also have potential in neurosurgical applications. Conclusion: In addition to guiding tumor resection, intraoperative fluorescence and microscopy have the potential to facilitate tumor identification and complement frozen section analysis during surgery by providing real-time histological assessment. Further research, including clinical trials, is necessary to test the efficacy of fluorescent contrast agents and optical imaging instrumentation in order to establish their role in neurosurgery. PMID:24872922
Comparison and evaluation on image fusion methods for GaoFen-1 imagery
NASA Astrophysics Data System (ADS)
Zhang, Ningyu; Zhao, Junqing; Zhang, Ling
2016-10-01
Currently, many research works focus on finding the best fusion method for satellite images from SPOT, QuickBird, Landsat and other platforms, but only a few discuss the application to GaoFen-1 satellite images. This paper compares four fusion methods, the principal component analysis (PCA) transform, the Brovey transform, the hue-saturation-value (HSV) transform, and the Gram-Schmidt transform, from the perspective of preserving the original image's spectral information. The experimental results showed that the images produced by the four fusion methods not only retain the high spatial resolution of the panchromatic band but also keep abundant spectral information. Through comparison and evaluation, the Brovey transform integrates well, but its color fidelity is not the best. The brightness and color distortion in the HSV-transformed image is the largest. The PCA transform does a good job in color fidelity, but its clarity still needs improvement. The Gram-Schmidt transform works best in color fidelity, the edges of vegetation are the most obvious, and the sharpness of the fused image is higher than that of the PCA transform. The Brovey transform is suitable for distinguishing vegetation from non-vegetation areas, while the Gram-Schmidt transform is the most appropriate for GaoFen-1 satellite images overall. In brief, different fusion methods have different advantages in image quality and class extraction, and should be chosen according to the actual application and the information required.
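Of the four methods compared, the Brovey transform is the simplest to state: each multispectral band is scaled by the ratio of the panchromatic band to the sum of the multispectral bands, injecting the pan's spatial detail while keeping the band ratios. A minimal sketch (assuming the multispectral bands are already resampled to the pan grid):

```python
import numpy as np

def brovey_fusion(ms, pan):
    """Brovey transform pan-sharpening.
    ms:  (bands, H, W) multispectral stack, upsampled to the pan grid.
    pan: (H, W) panchromatic band.
    Each band is multiplied by pan / sum(bands), preserving band ratios."""
    total = ms.sum(axis=0)
    total = np.where(total == 0, 1e-6, total)   # avoid division by zero
    return ms * (pan / total)
```

The preserved ratios explain the abstract's observation that Brovey separates vegetation from non-vegetation well, while its absolute colors can drift (the color-fidelity weakness noted above).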
A CAD system and quality assurance protocol for bone age assessment utilizing digital hand atlas
NASA Astrophysics Data System (ADS)
Gertych, Arakadiusz; Zhang, Aifeng; Ferrara, Benjamin; Liu, Brent J.
2007-03-01
Bone age assessment (BAA) in pediatric radiology is a task based on detailed analysis of a patient's left-hand X-ray. The current standard used in clinical practice relies on a subjective comparison of the hand with patterns in a book atlas. The computerized approach to BAA (CBAA) utilizes automatic analysis of the regions of interest in the hand image. This procedure is followed by extraction of quantitative features sensitive to skeletal development, which are then converted to a bone age value using knowledge from the digital hand atlas (DHA). This also allows BAA results to resemble the current clinical approach. All developed methodologies have been combined into one CAD module with a graphical user interface (GUI). CBAA can also improve statistical and analytical accuracy based on a clinical workflow analysis. For this purpose a quality assurance protocol (QAP) has been developed. Implementation of the QAP helped make the CAD more robust and identify images that cannot meet the conditions required by DHA standards. Moreover, the entire CAD-DHA system may gain further benefits if the clinical acquisition protocol is modified. The goal of this study is to present the performance improvement of the overall CAD-DHA system with the QAP and the comparison of the CAD results with the chronological age of 1390 normal subjects from the DHA. The CAD workstation can process images from a local image database or from a PACS server.
Construction of negative images of menstruation in Indian TV commercials.
Yagnik, Arpan Shailesh
2012-01-01
Menstruation is a perfectly normal physiological process; however, it is problematized in TV commercials. In the current study, a thematic analysis of 50 Indian TV commercials was conducted to identify the latent themes. Social captivity, restrictions, professional inefficiency, and physical and mental discomfort emerged as major themes after the analysis. The knowledge that manufacturers use such themes for image building and creating a conducive buying environment may prevent the reinforcement of menstrual taboos in Indian society. It can also guide the manufacturers in ideating and creating positive and healthier ways of advertising female hygiene products.
NASA Astrophysics Data System (ADS)
Asano, Takanori; Takaishi, Riichiro; Oda, Minoru; Sakuma, Kiwamu; Saitoh, Masumi; Tanaka, Hiroki
2018-04-01
We visualize the grain structures for individual nanosized thin film transistors (TFTs), which are electrically characterized, with an improved data processing technique for the dark-field image reconstruction of nanobeam electron diffraction maps. Our individual crystal analysis gives the one-to-one correspondence of TFTs with different grain boundary structures, such as random and coherent boundaries, to the characteristic degradations of ON-current and threshold voltage. Furthermore, the local crystalline uniformity inside a single grain is detected as the difference in diffraction intensity distribution.
NASA Astrophysics Data System (ADS)
Fritz, H. M.; Phillips, D. A.; Okayasu, A.; Shimozono, T.; Liu, H.; Takeda, S.; Mohammed, F.; Skanavis, V.; Synolakis, C. E.; Takahashi, T.
2012-12-01
The March 11, 2011, magnitude Mw 9.0 earthquake off the coast of the Tohoku region caused catastrophic damage and loss of life in Japan. The mid-afternoon tsunami arrival, combined with survivors equipped with cameras on top of vertical evacuation buildings, provided spontaneous, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April 2011. A follow-up survey in June 2011 focused on terrestrial laser scanning (TLS) at locations with high-quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images. Integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In the second step, the video image motion induced by the panning of the video camera is determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure.
Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents of up to 11 m/s were measured in Kesennuma Bay, making navigation impossible. Tsunami hydrographs are derived from the videos based on water surface elevations at surface-piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest, the hydrograph at Kamaishi also reveals a subsequent drawdown to -10 m, exposing the harbor bottom. In some cases ship moorings resisted the main tsunami crest only to be broken by the extreme drawdown, setting vessels adrift for hours. Furthermore, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities.
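The third step above, mapping image coordinates to world coordinates with a DLT, amounts to estimating a planar homography from ground control points. A generic least-squares sketch (not the authors' code; the point correspondences below are a synthetic sanity check representing a pure scale-by-2 mapping):

```python
import numpy as np

def dlt_homography(img_pts, world_pts):
    """Estimate the 3x3 homography H mapping image coordinates to
    world coordinates from >= 4 correspondences via the standard
    direct linear transformation (DLT) and an SVD null-space solve."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, world_pts):
        A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
        A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # right singular vector of smallest
    return H / H[2, 2]             # singular value, normalized

def apply_h(H, pt):
    """Map one image point through H (homogeneous divide)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Synthetic ground control points: world = 2 * image
img = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
world = [(0, 0), (2, 0), (0, 2), (2, 2), (4, 2)]
H = dlt_homography(img, world)
```

With real eyewitness video, the control points would come from the TLS point cloud, and the recovered H rectifies each video frame before the PIV velocity analysis.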
Fish-Eye Observing with Phased Array Radio Telescopes
NASA Astrophysics Data System (ADS)
Wijnholds, S. J.
The radio astronomical community is currently developing and building several new radio telescopes based on phased array technology. These telescopes provide a large field of view that may in principle span a full hemisphere. This makes calibration and imaging very challenging tasks due to the complex source structures and direction-dependent radio wave propagation effects. In this thesis, calibration and imaging methods are developed based on least squares estimation of instrument and source parameters. Monte Carlo simulations and actual observations with several prototypes show that this model-based approach provides statistically and computationally efficient solutions. The error analysis provides a rigorous mathematical framework to assess the imaging performance of current and future radio telescopes in terms of the effective noise, which is the combined effect of propagated calibration errors, noise in the data and source confusion.
The Focusing Optics X-ray Solar Imager: Second Flight and Recent Results
NASA Astrophysics Data System (ADS)
Christe, S.; Krucker, S.; Glesener, L.; Ishikawa, S. N.; Ramsey, B.; Buitrago Casas, J. C.; Foster, N.
2014-12-01
Solar flares accelerate particles up to high energies through various acceleration mechanisms which are not currently understood. Hard X-rays are the most direct diagnostic of flare-accelerated electrons. However, past and current hard X-ray observations lack the sensitivity and dynamic range necessary to observe the faint signature of accelerated electrons in the acceleration region, the solar corona. These limitations can be easily overcome through the use of HXR focusing optics coupled with solid-state pixelated detectors. We present recent updates on the FOXSI sounding rocket program. During its first flight, FOXSI imaged a microflare with simultaneous observations by RHESSI. We present recent imaging analysis of the FOXSI observations and a detailed comparison with RHESSI. New detector calibration results are also presented and, time permitting, preliminary results from the second launch of FOXSI scheduled for December 2014.
LORETA imaging of P300 in schizophrenia with individual MRI and 128-channel EEG.
Pae, Ji Soo; Kwon, Jun Soo; Youn, Tak; Park, Hae-Jeong; Kim, Myung Sun; Lee, Boreom; Park, Kwang Suk
2003-11-01
We investigated the characteristics of P300 generators in schizophrenics by using voxel-based statistical parametric mapping of current density images. P300 generators, produced by a rare target tone of 1500 Hz (15%) under a frequent nontarget tone of 1000 Hz (85%), were measured in 20 right-handed schizophrenics and 21 controls. Low-resolution electromagnetic tomography (LORETA), using a realistic head model of the boundary element method based on individual MRI, was applied to the 128-channel EEG. Three-dimensional current density images were reconstructed from the LORETA intensity maps that covered the whole cortical gray matter. Spatial normalization and intensity normalization of the smoothed current density images were used to reduce anatomical variance and subject-specific global activity, and statistical parametric mapping (SPM) was applied for the statistical analysis. We found that the sources of P300 were consistently localized at the left superior parietal area in normal subjects, while those of schizophrenics were diversely distributed. Upon statistical comparison, schizophrenics, with globally reduced current densities, showed a significant P300 current density reduction in the left medial temporal area and in the left inferior parietal area, while both left prefrontal and right orbitofrontal areas were relatively activated. The left parietotemporal area was found to correlate negatively with Positive and Negative Syndrome Scale total scores of schizophrenic patients. In conclusion, the areas of reduced and increased current density in schizophrenic patients suggest that the medial temporal and frontal areas, and in particular an abnormality of the frontotemporal circuitry, contribute to the pathophysiology of schizophrenia.
Imaging of current distributions in superconducting thin film structures
NASA Astrophysics Data System (ADS)
Dönitz, Dietmar
2006-10-01
Local analysis plays an important role in many fields of scientific research. However, imaging methods are not very common in the investigation of superconductors. For more than 20 years, Low Temperature Scanning Electron Microscopy (LTSEM) has been successfully used at the University of Tübingen for the study of condensed matter phenomena, especially superconductivity. In this thesis, LTSEM was used for imaging current distributions in different superconducting thin film structures: - Imaging of current distributions in Josephson junctions with a ferromagnetic interlayer, also known as SIFS junctions, showed inhomogeneous current transport over the junctions, which directly led to an improvement in the fabrication process. An investigation of improved samples showed a very homogeneous current distribution without any trace of magnetic domains; either such domains were not present or they were too small to image with the LTSEM. - An investigation of Nb/YBCO zigzag Josephson junctions yielded important information on signal formation in the LTSEM, both for Josephson junctions in the short and in the long limit. Using a reference junction, our signal formation model could be verified, thus confirming earlier results on short zigzag junctions. These results, which were reproduced in this work, support the theory of d-wave symmetry in the superconducting order parameter of YBCO. Furthermore, investigations of the quasiparticle tunneling in the zigzag junctions showed the existence of Andreev bound states, which is another indication of the d-wave symmetry in YBCO. - The LTSEM study of Hot Electron Bolometers (HEB) allowed the first successful imaging of a stable 'hot spot', a self-heating region in HEB structures. Moreover, the electron beam was used to induce an otherwise unstable hot spot. Both investigations yielded information on the homogeneity of the samples.
- An entirely new method of imaging the current distribution in superconducting quantum interference devices (SQUIDs) was developed. It is based on vortex imaging by LTSEM, which had been established several years earlier. The vortex signals can be used as local detectors for the vortex-free circulating sheet-current distribution J. Compared to previous inversion methods that infer J from the measured magnetic field, this method gives a more direct measurement of the current distribution. The experimental results were in very good agreement with numerical calculations of J. The presented investigations show how versatile and useful Low Temperature Scanning Electron Microscopy can be for studying superconducting thin film structures. Thus one may expect that many more important results can be obtained with this method.
Hyperspectral imaging using the single-pixel Fourier transform technique
NASA Astrophysics Data System (ADS)
Jin, Senlin; Hui, Wangwei; Wang, Yunlong; Huang, Kaicheng; Shi, Qiushuai; Ying, Cuifeng; Liu, Dongqi; Ye, Qing; Zhou, Wenyuan; Tian, Jianguo
2017-03-01
Hyperspectral imaging technology is playing an increasingly important role in the fields of food analysis, medicine and biotechnology. To improve the speed of operation and increase the light throughput in a compact instrument, a Fourier transform hyperspectral imaging system based on a single-pixel technique is proposed in this study. Compared with current imaging spectrometry approaches, the proposed system has a wider spectral range (400-1100 nm), better spectral resolution (1 nm) and requires less measurement data (a sample rate of 6.25%). The performance of this system was verified by its application to the non-destructive testing of potatoes.
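The measurement-saving idea, acquiring only a small fraction of low-spatial-frequency Fourier coefficients and inverting them, can be mimicked offline with an FFT (a conceptual sketch only; the real system measures the coefficients optically with structured patterns and a single-pixel detector):

```python
import numpy as np

def fourier_subsample_reconstruct(img, sample_rate=0.0625):
    """Keep only the lowest-spatial-frequency fraction of Fourier
    coefficients (standing in for the reduced number of single-pixel
    measurements) and reconstruct the image by inverse FFT."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r2 = (yy - h // 2) ** 2 + (xx - w // 2) ** 2   # squared frequency radius
    k = int(sample_rate * h * w)                    # coefficients kept
    keep = r2 <= np.sort(r2.ravel())[k - 1]         # lowest frequencies
    return np.fft.ifft2(np.fft.ifftshift(F * keep)).real

# A simple bright square, reconstructed from ~6.25% of its spectrum
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
rec = fourier_subsample_reconstruct(img)
```

Low-frequency sampling preserves the coarse structure and mean intensity while discarding fine detail, which is why a 6.25% sample rate can still yield a usable image.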
Development and Current Status of Skull-Image Superimposition - Methodology and Instrumentation.
Lan, Y
1992-12-01
This article presents a review of the literature and an evaluation of the development and application of skull-image superimposition technology - both instrumentation and methodology - contributed by a number of scholars since 1935. Along with a comparison of the methodologies involved in the two superimposition techniques - photographic and video - the author characterizes the techniques in action and the recent advances in computer image superimposition processing technology. The major disadvantage of conventional approaches is their reliance on subjective interpretation. Through painstaking comparison and analysis, computer image processing technology can make identifications more conclusive by directly testing and evaluating the various programmed indices. Copyright © 1992 Central Police University.
NASA Technical Reports Server (NTRS)
Trolinger, J. D.; Lal, R. B.; Batra, A. K.; Mcintosh, D.
1991-01-01
The first International Microgravity Laboratory (IML-1), scheduled for spaceflight in early 1992, includes a crystal-growth-from-solution experiment equipped with an array of optical diagnostics instrumentation comprising transmission and reflection holography, tomography, schlieren imaging, and particle image displacement velocimetry. During the course of preparation for this spaceflight experiment we have performed both experimentation and analysis for each of these diagnostics. In this paper we describe the work performed in the development of holographic particle image displacement velocimetry for microgravity application, which will be employed primarily to observe and quantify minute convective currents in the Spacelab environment and also to measure the value of g. Additionally, the experiment offers a unique opportunity to examine physical phenomena which are normally negligible and not observable. A preliminary analysis of the motion of particles in fluid was performed and supporting experiments were carried out. The results of the analysis and the experiments are reported.
Design and Application of Hybrid Magnetic Field-Eddy Current Probe
NASA Technical Reports Server (NTRS)
Wincheski, Buzz; Wallace, Terryl; Newman, Andy; Leser, Paul; Simpson, John
2013-01-01
The incorporation of magnetic field sensors into eddy current probes can result in novel probe designs with unique performance characteristics. One such example is a recently developed electromagnetic probe consisting of a two-channel magnetoresistive sensor with an embedded single-strand eddy current inducer. Magnetic flux leakage maps of ferrous materials are generated from the DC sensor response while high-resolution eddy current imaging is simultaneously performed at frequencies up to 5 megahertz. In this work the design and optimization of this probe will be presented, along with an application toward analysis of sensory materials with embedded ferromagnetic shape-memory alloy (FSMA) particles. The sensory material is designed to produce a paramagnetic to ferromagnetic transition in the FSMA particles under strain. Mapping of the stray magnetic field and eddy current response of the sample with the hybrid probe can thereby image locations in the structure which have experienced an overstrain condition. Numerical modeling of the probe response is performed with good agreement with experimental results.
NASA Astrophysics Data System (ADS)
Alvarez, J.; Boutchich, M.; Kleider, J. P.; Teraji, T.; Koide, Y.
2014-09-01
The origin of the high leakage current measured in several vertical-type diamond Schottky devices is conjointly investigated by conducting probe atomic force microscopy and confocal micro-Raman/photoluminescence imaging analysis. Local areas characterized by a strong decrease of the local resistance (5-6 orders of magnitude drop) with respect to their close surrounding have been identified in several different regions of the sample surface. The same local areas, also referenced as electrical hot-spots, reveal a slightly constrained diamond lattice and three dominant Raman bands in the low-wavenumber region (590, 914 and 1040 cm-1). These latter bands are usually assigned to the vibrational modes involving boron impurities and its possible complexes that can electrically act as traps for charge carriers. Local current-voltage measurements performed at the hot-spots point out a trap-filled-limited current as the main conduction mechanism favouring the leakage current in the Schottky devices.
Kalra, Mannudeep K; Maher, Michael M; Blake, Michael A; Lucey, Brian C; Karau, Kelly; Toth, Thomas L; Avinash, Gopal; Halpern, Elkan F; Saini, Sanjay
2004-09-01
To assess the effect of noise reduction filters on detection and characterization of lesions on low-radiation-dose abdominal computed tomographic (CT) images. Low-dose CT images of abdominal lesions in 19 consecutive patients (11 women, eight men; age range, 32-78 years) were obtained at reduced tube currents (120-144 mAs). These baseline low-dose CT images were postprocessed with six noise reduction filters; the resulting postprocessed images were then randomly assorted with baseline images. Three radiologists performed independent evaluation of randomized images for presence, number, margins, attenuation, conspicuity, calcification, and enhancement of lesions, as well as image noise. Side-by-side comparison of baseline images with postprocessed images was performed by using a five-point scale for assessing lesion conspicuity and margins, image noise, beam hardening, and diagnostic acceptability. Quantitative noise and contrast-to-noise ratio were obtained for all liver lesions. Statistical analysis was performed by using the Wilcoxon signed rank test, Student t test, and kappa test of agreement. Significant reduction of noise was observed in images postprocessed with filter F compared with the noise in baseline nonfiltered images (P =.004). Although the number of lesions seen on baseline images and that seen on postprocessed images were identical, lesions were less conspicuous on postprocessed images than on baseline images. A decrease in quantitative image noise and contrast-to-noise ratio for liver lesions was noted with all noise reduction filters. There was good interobserver agreement (kappa = 0.7). Although the use of currently available noise reduction filters improves image noise and ameliorates beam-hardening artifacts at low-dose CT, such filters are limited by a compromise in lesion conspicuity and appearance in comparison with lesion conspicuity and appearance on baseline low-dose CT images. Copyright RSNA, 2004
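The quantitative noise and contrast-to-noise ratio (CNR) used above are typically computed from region-of-interest statistics. A common definition, sketched with synthetic attenuation values (the study's exact formula and HU levels are not stated in the abstract; the numbers below are illustrative):

```python
import numpy as np

def contrast_to_noise(lesion_roi, background_roi):
    """CNR as the absolute difference of mean ROI attenuations divided
    by the background noise (standard deviation). One common
    definition; not necessarily the study's exact formula."""
    noise = background_roi.std(ddof=1)
    return abs(lesion_roi.mean() - background_roi.mean()) / noise

# Synthetic ROIs: a hypodense liver lesion against parenchyma,
# both with ~5 HU of image noise
rng = np.random.default_rng(0)
lesion = 60.0 + 5.0 * rng.standard_normal(500)   # ~60 HU lesion
liver = 100.0 + 5.0 * rng.standard_normal(500)   # ~100 HU parenchyma
cnr = contrast_to_noise(lesion, liver)
```

A noise filter that lowers the background standard deviation without preserving the lesion-to-liver contrast can reduce CNR, which is the trade-off the study reports.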
NASA Astrophysics Data System (ADS)
Li, Zhenjiang; Wang, Weilan
2018-04-01
Thangka is a treasure of Tibetan culture. In its digital protection, most current research focuses on the content of Thangka images, not the fabrication process. For the silk embroidered Thangka of "Guo Tang", there are two craft methods, namely weave embroidery and pile embroidery. The local texture of weave embroidered Thangka is rough, while that of pile embroidered Thangka is smoother. In order to distinguish these two fabrication processes from images, an effective color-block segmentation algorithm is designed first, and the obtained color blocks contain the local texture patterns of the Thangka image. Second, the local texture features of the color blocks are extracted and screened. Finally, the selected features are analyzed experimentally. The experimental analysis shows that the proposed features reflect well the difference between the weave embroidered and pile embroidered methods.
Crowdsourcing and Automated Retinal Image Analysis for Diabetic Retinopathy.
Mudie, Lucy I; Wang, Xueyang; Friedman, David S; Brady, Christopher J
2017-09-23
As the number of people with diabetic retinopathy (DR) in the USA is expected to increase threefold by 2050, the need to reduce health care costs associated with screening for this treatable disease is ever present. Crowdsourcing and automated retinal image analysis (ARIA) are two areas where new technology has been applied to reduce costs in screening for DR. This paper reviews the current literature surrounding these new technologies. Crowdsourcing has high sensitivity for normal vs abnormal images; however, when multiple categories for severity of DR are added, specificity is reduced. ARIAs have higher sensitivity and specificity, and some commercial ARIA programs are already in use. Deep learning enhanced ARIAs appear to offer even more improvement in ARIA grading accuracy. The utilization of crowdsourcing and ARIAs may be a key to reducing the time and cost burden of processing images from DR screening.
Deserno, Thomas M; Haak, Daniel; Brandenburg, Vincent; Deserno, Verena; Classen, Christoph; Specht, Paula
2014-12-01
Especially for investigator-initiated research at universities and academic institutions, Internet-based rare disease registries (RDR) are required that integrate electronic data capture (EDC) with automatic image analysis or manual image annotation. We propose a modular framework merging alpha-numerical and binary data capture. In concordance with the Office of Rare Diseases Research recommendations, a requirement analysis was performed based on several RDR databases currently hosted at Uniklinik RWTH Aachen, Germany. With respect to the study management tool already operating successfully at the Clinical Trial Center Aachen, the Google Web Toolkit was chosen, with Hibernate and Gilead connecting a MySQL database management system. Image and signal data integration and processing are supported by the Apache Commons FileUpload library and ImageJ-based Java code, respectively. As a proof of concept, the framework is instantiated for the German Calciphylaxis Registry. The framework is composed of five mandatory core modules: (1) Data Core, (2) EDC, (3) Access Control, (4) Audit Trail, and (5) Terminology, as well as six optional modules: (6) Binary Large Object (BLOB), (7) BLOB Analysis, (8) Standard Operation Procedure, (9) Communication, (10) Pseudonymization, and (11) Biorepository. Modules 1-7 are implemented in the German Calciphylaxis Registry. The proposed RDR framework is easily instantiated and directly integrates image management and analysis. As open source software, it may assist in improved data collection and analysis of rare diseases in the near future.
Using Deep Learning Algorithm to Enhance Image-review Software for Surveillance Cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Yonggang; Thomas, Maikael A.
We propose the development of proven deep learning algorithms to flag objects and events of interest in Next Generation Surveillance System (NGSS) surveillance to make IAEA image review more efficient. Video surveillance is one of the core monitoring technologies used by the IAEA Department of Safeguards when implementing safeguards at nuclear facilities worldwide. The current image review software GARS has limited automated functions, such as scene-change detection, black image detection and missing scene analysis, but struggles with highly cluttered backgrounds. A cutting-edge algorithm to be developed in this project will enable efficient and effective searches in images and video streams by identifying and tracking safeguards-relevant objects and detecting anomalies in their vicinity. In this project, we will develop the algorithm, test it with the IAEA surveillance cameras and data sets collected at simulated nuclear facilities at BNL and SNL, and implement it in a software program for potential integration into the IAEA's IRAP (Integrated Review and Analysis Program).
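The scene-change detection mentioned among GARS's automated functions can be illustrated by a minimal frame-differencing rule (a stand-in sketch only; the actual algorithm and threshold used by the IAEA software are not public):

```python
import numpy as np

def scene_changes(frames, threshold=0.1):
    """Flag frame indices where the mean absolute difference to the
    previous frame exceeds `threshold` (as a fraction of the 8-bit
    full scale). A deliberately simple stand-in for real
    scene-change detection."""
    changes = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float))
        if diff.mean() / 255.0 > threshold:
            changes.append(i)
    return changes

# Three identical frames followed by an abrupt brightness change
static = np.full((8, 8), 120, dtype=np.uint8)
changed = np.full((8, 8), 200, dtype=np.uint8)
idx = scene_changes([static, static, static, changed])
```

Such global-difference rules are exactly what fails on highly cluttered backgrounds, motivating the object-level deep learning approach proposed here.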
2014-01-01
Current musculoskeletal imaging techniques usually target the macro-morphology of articular cartilage or use histological analysis. These techniques are able to reveal advanced osteoarthritic changes in articular cartilage but fail to give detailed information to distinguish early osteoarthritis from healthy cartilage, and this necessitates high-resolution imaging techniques that measure cells and the extracellular matrix within the multilayer structure of articular cartilage. This review provides a comprehensive exploration of the cellular components and extracellular matrix of articular cartilage as well as high-resolution imaging techniques, including magnetic resonance imaging, electron microscopy, confocal laser scanning microscopy, second harmonic generation microscopy, and laser scanning confocal arthroscopy, in the measurement of multilayer ultra-structures of articular cartilage. This review also provides an overview of the micro-structural analysis of the main components of normal or osteoarthritic cartilage and discusses the potential and challenges associated with developing non-invasive high-resolution imaging techniques for both research and the clinical diagnosis of early to late osteoarthritis. PMID:24946278
Coastal modification of a scene employing multispectral images and vector operators.
Lira, Jorge
2017-05-01
Changes in sea level, wind patterns, sea current patterns, and tide patterns have produced morphologic transformations in the coastline area of Tamaulipas State in northeast Mexico. Such changes generated a modification of the coastline and variations in the texture-relief and texture of the continental area of Tamaulipas. Two high-resolution multispectral Satellite Pour l'Observation de la Terre (SPOT) images were employed to quantify the morphologic change of this continental area. The images cover a time span of close to 10 years. A variant of principal component analysis was used to delineate the modification of the land-water line. To quantify changes in texture-relief and texture, principal component analysis was applied to the multispectral images. The first principal components of each image were modeled as a discrete two-dimensional vector field. The divergence and Laplacian vector operators were applied to the discrete vector field. The divergence provided the change of texture, while the Laplacian produced the change of texture-relief in the area of study.
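The vector operators applied to the principal-component field can be sketched with finite differences (a generic implementation; the synthetic field below has a known divergence and Laplacian and is not the paper's data, in which the two components would be the first principal components of the two image dates):

```python
import numpy as np

def divergence(fx, fy):
    """Divergence of a discrete 2D vector field (fx, fy):
    d(fx)/dx + d(fy)/dy via central finite differences."""
    return np.gradient(fx, axis=1) + np.gradient(fy, axis=0)

def laplacian(f):
    """Laplacian of a scalar field as the divergence of its gradient."""
    gy, gx = np.gradient(f)
    return np.gradient(gx, axis=1) + np.gradient(gy, axis=0)

# Sanity-check fields with analytically known results:
# (fx, fy) = (x, y) has divergence 2; x^2 + y^2 has Laplacian 4
y, x = np.mgrid[0:32, 0:32].astype(float)
div = divergence(x, y)
lap = laplacian(x**2 + y**2)
```

High-magnitude divergence then marks texture change between the two dates, and the Laplacian highlights texture-relief change, as described above.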
Single-Image Super-Resolution Based on Rational Fractal Interpolation.
Zhang, Yunfeng; Fan, Qinglan; Bao, Fangxun; Liu, Yifang; Zhang, Caiming
2018-08-01
This paper presents a novel single-image super-resolution (SR) procedure, which upscales a given low-resolution (LR) input image to a high-resolution image while preserving the textural and structural information. First, we construct a new type of bivariate rational fractal interpolation model and investigate its analytical properties. This model has different forms of expression with various values of the scaling factors and shape parameters; thus, it can be employed to better describe image features than current interpolation schemes. Furthermore, this model combines the advantages of rational interpolation and fractal interpolation, and its effectiveness is validated through theoretical analysis. Second, we develop a single-image SR algorithm based on the proposed model. The LR input image is divided into texture and non-texture regions, and then, the image is interpolated according to the characteristics of the local structure. Specifically, in the texture region, the scaling factor calculation is the critical step. We present a method to accurately calculate scaling factors based on local fractal analysis. Extensive experiments and comparisons with the other state-of-the-art methods show that our algorithm achieves competitive performance, with finer details and sharper edges.
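The first stage of the algorithm, dividing the LR image into texture and non-texture regions, can be illustrated with a simple block-wise rule (an assumption for illustration: the paper's actual criterion is based on local fractal analysis, not the plain local variance used here):

```python
import numpy as np

def texture_mask(img, block=8, thresh=50.0):
    """Label each block of the input image as texture (True) or
    non-texture (False) by its local variance -- a simple stand-in
    for the paper's local fractal analysis."""
    h, w = img.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            patch = img[i * block:(i + 1) * block,
                        j * block:(j + 1) * block]
            mask[i, j] = patch.var() > thresh
    return mask

# Synthetic LR image: flat left half, noisy (textured) right half
rng = np.random.default_rng(2)
img = np.zeros((32, 32))
img[:, 16:] = 40.0 * rng.random((32, 16))
mask = texture_mask(img)
```

In the full algorithm, blocks flagged as texture would be upscaled with the rational fractal model (with scaling factors estimated per block), while non-texture blocks use plain rational interpolation.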
Retinal fundus images for glaucoma analysis: the RIGA dataset
NASA Astrophysics Data System (ADS)
Almazroa, Ahmed; Alodhayb, Sami; Osman, Essameldin; Ramadan, Eslam; Hummadi, Mohammed; Dlaim, Mohammed; Alkatee, Muhannad; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2018-03-01
Glaucoma neuropathy is a major cause of irreversible blindness worldwide. Current models of chronic care will not be able to close the gap between the growing prevalence of glaucoma and the challenges of access to healthcare services. Tele-ophthalmology is being developed to close this gap. In order to develop automated techniques for glaucoma detection that can be used in tele-ophthalmology, we have developed a large retinal fundus dataset. A de-identified dataset of retinal fundus images for glaucoma analysis (RIGA) was derived from three sources for a total of 750 images. The optic cup and disc boundaries for each image were marked and annotated manually by six experienced ophthalmologists, and the cup-to-disc ratio (CDR) estimates were included. Six parameters were extracted and assessed (the disc area and centroid, cup area and centroid, and horizontal and vertical cup-to-disc ratios) among the ophthalmologists. The inter-observer annotations were compared by calculating the standard deviation (SD) for every image between the six ophthalmologists, in order to determine the outliers among the six and to filter the corresponding images. The dataset will be made available to the research community to crowdsource analyses from other research groups, so that analysis algorithms appropriate for tele-glaucoma assessment can be developed, validated and implemented. The RIGA dataset can be freely accessed online through the University of Michigan Deep Blue website (doi:10.7302/Z23R0R29).
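The inter-observer comparison, a per-image SD over the six ophthalmologists for each parameter, can be sketched as follows (the 2x-median outlier rule and the CDR values below are assumptions for illustration; the paper only states that the SD was used to find outliers):

```python
import numpy as np

def interobserver_sd(annotations):
    """Per-image standard deviation across observers for one parameter.
    annotations: (n_images, n_observers) array, e.g. vertical CDR."""
    return annotations.std(axis=1, ddof=1)

def flag_outlier_images(annotations, factor=2.0):
    """Flag images whose observer disagreement exceeds `factor` times
    the median disagreement (this specific rule is a hypothetical
    stand-in for the paper's filtering criterion)."""
    sd = interobserver_sd(annotations)
    return sd > factor * np.median(sd)

# Hypothetical vertical CDR annotations: six observers, four images,
# with clearly poor agreement on the last image
cdr = np.array([
    [0.30, 0.32, 0.31, 0.29, 0.30, 0.31],
    [0.45, 0.44, 0.46, 0.45, 0.43, 0.47],
    [0.60, 0.58, 0.61, 0.59, 0.62, 0.60],
    [0.20, 0.55, 0.40, 0.70, 0.25, 0.60],
])
flags = flag_outlier_images(cdr)
```

Images flagged this way would be reviewed or excluded before the dataset is used to train or validate cup/disc segmentation algorithms.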
Assessment of Sentinel Node Biopsies With Full-Field Optical Coherence Tomography.
Grieve, Kate; Mouslim, Karima; Assayag, Osnath; Dalimier, Eugénie; Harms, Fabrice; Bruhat, Alexis; Boccara, Claude; Antoine, Martine
2016-04-01
Current techniques for the intraoperative analysis of sentinel lymph nodes during breast cancer surgery present drawbacks such as time and tissue consumption. Full-field optical coherence tomography is a novel noninvasive, high-resolution, fast imaging technique. This study investigated the use of full-field optical coherence tomography as an alternative technique for the intraoperative analysis of sentinel lymph nodes. Seventy-one axillary lymph nodes from 38 patients at Tenon Hospital were imaged minutes after excision with full-field optical coherence tomography in the pathology laboratory, before being handled for histological analysis. A pathologist performed a blind diagnosis (benign/malignant), based on the full-field optical coherence tomography images alone, which resulted in a sensitivity of 92% and a specificity of 83% (n = 65 samples). Regular feedback was given during the blind diagnosis, with thorough analysis of the images, such that features of normal and suspect nodes were identified in the images and compared with histology. A nonmedically trained imaging expert also performed a blind diagnosis aided by the reading criteria defined by the pathologist, which resulted in 85% sensitivity and 90% specificity (n = 71 samples). The number of false positives of the pathologist was reduced by 3 in a second blind reading a few months later. These results indicate that following adequate training, full-field optical coherence tomography can be an effective noninvasive diagnostic tool for extemporaneous sentinel node biopsy qualification. © The Author(s) 2015.
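The sensitivity and specificity figures reported above follow from a 2x2 diagnostic table; a minimal computation (the counts below are hypothetical, chosen only to show the arithmetic, not the study's actual tallies):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts for a blind reading session
sens, spec = sens_spec(tp=46, fn=4, tn=40, fp=10)
```

For intraoperative use, sensitivity (not missing a metastatic node) is usually the figure of merit, which is why the follow-up blind reading focused on reducing false positives without sacrificing it.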
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images in pre- and post-surgery operations is necessary for beginning and speeding up the recovery process. Partial differential equations-based models have become a powerful and well-known tool in different areas of image processing, such as denoising, multiscale image analysis, edge detection and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. In this regard, the current paper introduces two strategies: utilizing the efficient explicit method, with a software technique that effectively solves the anisotropic diffusion filter despite its mathematical instability; and proposing an automatic stopping criterion that, unlike other stopping criteria, takes into consideration only the input image, in addition to offering good denoised-image quality, simplicity and speed. Various medical images are examined to confirm the claim.
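The explicit anisotropic diffusion scheme the paper builds on is the Perona-Malik filter; a minimal sketch (a fixed iteration count stands in for the paper's automatic stopping criterion, and periodic borders are used for brevity; a small time step keeps the explicit update stable, which is the instability issue discussed above):

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, dt=0.2):
    """Explicit Perona-Malik anisotropic diffusion: smooths within
    regions while the conductance g suppresses diffusion across
    strong edges. dt <= 0.25 keeps the 2D explicit scheme stable."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # four-neighbour differences (periodic borders via np.roll)
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

# Noisy step edge: noise is smoothed, the 100-unit edge is preserved
rng = np.random.default_rng(1)
noisy = np.zeros((32, 32))
noisy[:, 16:] = 100.0
noisy += 5.0 * rng.standard_normal(noisy.shape)
smoothed = anisotropic_diffusion(noisy)
```

An automatic stopping criterion such as the one proposed in the paper would replace the fixed `n_iter` with a data-driven halt, avoiding both under- and over-smoothing.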
Thermally-induced voltage alteration for analysis of microelectromechanical devices
Walraven, Jeremy A.; Cole, Jr., Edward I.
2002-01-01
A thermally-induced voltage alteration (TIVA) apparatus and method are disclosed for analyzing a microelectromechanical (MEM) device with or without on-board integrated circuitry. One embodiment of the TIVA apparatus uses constant-current biasing of the MEM device while scanning a focused laser beam over electrically-active members therein to produce localized heating which alters the power demand of the MEM device and thereby changes the voltage of the constant-current source. This changing voltage of the constant-current source can be measured and used in combination with the position of the focused and scanned laser beam to generate an image of any short-circuit defects in the MEM device (e.g. due to stiction or fabrication defects). In another embodiment of the TIVA apparatus, an image can be generated directly from a thermoelectric potential produced by localized laser heating at the location of any short-circuit defects in the MEM device, without any need for supplying power to the MEM device. The TIVA apparatus can be formed, in part, from a scanning optical microscope, and has applications for qualification testing or failure analysis of MEM devices.
Quantitative Detection of Cracks in Steel Using Eddy Current Pulsed Thermography.
Shi, Zhanqun; Xu, Xiaoyu; Ma, Jiaojiao; Zhen, Dong; Zhang, Hao
2018-04-02
Small cracks are common defects in steel and often lead to catastrophic accidents in industrial applications. Various nondestructive testing methods have been investigated for crack detection; however, most current methods focus on qualitative crack identification and image processing. In this study, eddy current pulsed thermography (ECPT) was applied for quantitative crack detection based on derivative analysis of temperature variation. The effects of the excitation parameters on the temperature variation were analyzed in a simulation study. The crack profile and position are identified in the thermal image using the Canny edge detection algorithm. One or more trajectories are then traced across the crack profile, and the temperature distribution along each trajectory is used to determine the crack boundary. The slope curve along the trajectory is obtained, and quantitative analysis of the crack sizes is performed by analyzing the features of the slope curves. Experimental verification showed that crack sizes could be quantitatively detected with errors of less than 1%. The proposed ECPT method was therefore demonstrated to be a feasible and effective nondestructive approach for quantitative crack detection.
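The slope-curve idea above can be sketched numerically: along a trajectory crossing the crack, the steepest temperature rise and fall bracket the heated crack zone. A minimal sketch, with illustrative names (not the paper's exact algorithm):

```python
import numpy as np

def crack_boundaries(position, temperature):
    """Estimate crack edges along one trajectory across the thermal image
    as the extrema of the temperature slope: the steepest rise and the
    steepest fall bracket the heated crack zone."""
    slope = np.gradient(temperature, position)
    left = position[int(np.argmax(slope))]   # steepest temperature rise
    right = position[int(np.argmin(slope))]  # steepest temperature fall
    return left, right, abs(right - left)    # width estimate
```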
Novel methods of imaging and analysis for the thermoregulatory sweat test.
Carroll, Michael Sean; Reed, David W; Kuntz, Nancy L; Weese-Mayer, Debra Ellyn
2018-06-07
The thermoregulatory sweat test (TST) can be central to the identification and management of disorders affecting sudomotor function and small sensory and autonomic nerve fibers, but the cumbersome nature of the standard testing protocol has prevented its widespread adoption. A high resolution, quantitative, clean and simple assay of sweating could significantly improve identification and management of these disorders. Images from 89 clinical TSTs were analyzed retrospectively using two novel techniques. First, using the standard indicator powder, skin surface sweat distributions were determined algorithmically for each patient. Second, a fundamentally novel method using thermal imaging of forced evaporative cooling was evaluated through comparison with the standard technique. Correlation and receiver operating characteristic analyses were used to determine the degree of match between these methods, and the potential limits of thermal imaging were examined through cumulative analysis of all studied patients. Algorithmic encoding of sweating and non-sweating regions produces a more objective analysis for clinical decision making. Additionally, results from the forced cooling method correspond well with those from indicator powder imaging, with a correlation across spatial regions of -0.78 (CI: -0.84 to -0.71). The method works similarly across body regions, and frame-by-frame analysis suggests the ability to identify sweating regions within about 1 second of imaging. While algorithmic encoding can enhance the standard sweat testing protocol, thermal imaging with forced evaporative cooling can dramatically improve the TST by making it less time-consuming and more patient-friendly than the current approach.
Magnetic force microscopy method and apparatus to detect and image currents in integrated circuits
Campbell, Ann. N.; Anderson, Richard E.; Cole, Jr., Edward I.
1995-01-01
A magnetic force microscopy method and improved magnetic tip for detecting and quantifying internal magnetic fields resulting from currents in integrated circuits. Detection of the current is used for failure analysis, design verification, and model validation. The interaction of the current in the integrated chip with a magnetic field can be detected using a cantilevered magnetic tip. Enhanced sensitivity for both ac and dc current and voltage detection is achieved using ac coupling or a heterodyne technique. The techniques can be used to extract information from analog circuits.
Processing and analysis of cardiac optical mapping data obtained with potentiometric dyes
Laughner, Jacob I.; Ng, Fu Siong; Sulkin, Matthew S.; Arthur, R. Martin
2012-01-01
Optical mapping has become an increasingly important tool to study cardiac electrophysiology in the past 20 years. Multiple methods are used to process and analyze cardiac optical mapping data, and no consensus currently exists regarding the optimum methods. The specific methods chosen to process optical mapping data are important because inappropriate data processing can affect the content of the data and thus alter the conclusions of the studies. Details of the different steps in processing optical imaging data, including image segmentation, spatial filtering, temporal filtering, and baseline drift removal, are provided in this review. We also provide descriptions of the common analyses performed on data obtained from cardiac optical imaging, including activation mapping, action potential duration mapping, repolarization mapping, conduction velocity measurements, and optical action potential upstroke analysis. Optical mapping is often used to study complex arrhythmias, and we also discuss dominant frequency analysis and phase mapping techniques used for the analysis of cardiac fibrillation. PMID:22821993
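Of the analyses listed above, action potential duration (APD) mapping is the most mechanical: per pixel, activation is the steepest upstroke and the APD ends at a fixed repolarization level. A minimal single-pixel sketch, with illustrative names and thresholds:

```python
import numpy as np

def action_potential_duration(t, signal, level=0.8):
    """APD at a given repolarization level (APD80 by default) from one
    optical action potential: activation = steepest upstroke; end = first
    sample after the peak falling to (1 - level) of normalized amplitude."""
    sig = (signal - signal.min()) / (signal.max() - signal.min())
    act = int(np.argmax(np.gradient(sig, t)))         # activation index
    peak = act + int(np.argmax(sig[act:]))            # optical AP peak
    repol = np.nonzero(sig[peak:] <= 1.0 - level)[0]  # repolarized samples
    return t[peak + repol[0]] - t[act] if repol.size else float("nan")
```

Mapping applies this per pixel after the segmentation and filtering steps described in the review.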
ERIC Educational Resources Information Center
Mathis, Janelle B.
2015-01-01
International children's literature has the potential to create global experiences and cultural insights for young people confronted with limited and biased images of the world offered by media. The current inquiry was designed to explore, through a critical content analysis approach, international children's literature in which characters…
Mostaço-Guidolin, Leila; Rosin, Nicole L.; Hackett, Tillie-Louise
2017-01-01
The ability to respond to injury with tissue repair is a fundamental property of all multicellular organisms. The extracellular matrix (ECM), composed of fibrillar collagens as well as a number of other components, is dysregulated during repair in many organs. In many tissues, scarring results when the balance between ECM synthesis and degradation is lost. Investigating what disrupts this balance, and what effect this can have on tissue function, remains an active area of research. Recent advances in the imaging of fibrillar collagen using second harmonic generation (SHG) imaging have proven useful in enhancing our understanding of the supramolecular changes that occur during scar formation and disease progression. Here, we review the physical properties of SHG and the current nonlinear optical microscopy imaging (NLOM) systems that are used for SHG imaging. We provide an extensive review of studies that have used SHG in skin, lung, cardiovascular, tendon and ligament, and eye tissue to understand alterations in fibrillar collagens in scar tissue. Lastly, we review the current methods of image analysis that are used to extract important information about the role of fibrillar collagens in scar formation. PMID:28809791
Establishing imaging sensor specifications for digital still cameras
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2007-02-01
Digital still cameras (DSCs) have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a full-frame CCD, an interline CCD, a CMOS sensor, or one of the newer Foveon buried-photodiode sensors. There is a strong tendency for consumers to consider only the number of megapixels in a camera and not the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper provides a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full-well capacity in terms of electrons per square centimeter). Examples are given for consumer, prosumer, and professional camera systems. Where possible, these results are compared to imaging systems currently on the market.
Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin
2017-12-01
Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
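The core of the SLICE measurement is simple: Lagrangian strain per cine frame from segment length relative to a reference frame. A minimal sketch under that assumption (function name illustrative):

```python
def segment_strain(lengths, ref=0):
    """Lagrangian strain (%) per cine frame from measured segment lengths,
    relative to a reference frame (typically end-diastole). Negative
    values indicate circumferential shortening."""
    l0 = float(lengths[ref])
    return [100.0 * (l - l0) / l0 for l in lengths]
```

For example, a segment shortening from 50 mm to 40 mm at end-systole yields a peak strain of -20%.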
Drew, Benjamin T.; Bowes, Michael A.; Redmond, Anthony C.; Dube, Bright; Kingsbury, Sarah R.; Conaghan, Philip G.
2017-01-01
Objectives: Current structural associations of patellofemoral pain (PFP) are based on 2D imaging methodology with inherent measurement uncertainty due to positioning and rotation. This study employed novel technology to create 3D measures of commonly described patellofemoral joint imaging features and compared these features in people with and without PFP in a large cohort. Methods: We compared two groups from the Osteoarthritis Initiative: one with localized PFP and pain on stairs, and a control group with no knee pain; both groups had no radiographic OA. MRI bone surfaces were automatically segmented and aligned using active appearance models. We applied t-tests, logistic regression and linear discriminant analysis to compare 13 imaging features (including patella position, trochlear morphology, facet area and tilt) converted into 3D equivalents, and a measure of overall 3D shape. Results: One hundred and fifteen knees with PFP (mean age 59.7, BMI 27.5 kg/m2, female 58.2%) and 438 without PFP (mean age 63.6, BMI 26.9 kg/m2, female 52.9%) were included. After correction for multiple testing, no statistically significant differences were found between groups for any of the 3D imaging features or their combinations. A statistically significant discrimination was noted for overall 3D shape between genders, confirming the validity of the 3D measures. Conclusion: Challenging current perceptions, no differences in patellofemoral morphology were found between older people with and without PFP using 3D quantitative imaging analysis. Further work is needed to see if these findings are replicated in a younger PFP population. PMID:28968747
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims to comprehensively demonstrate the recent trends and developments. Four articles are selected for biomedical data acquisition, covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis; an application to cochlear implants, where time-frequency analysis is applied to control the replacement system; recent trends in the fusion of different modalities; and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are highlighted. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics, with respect to the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration into routine applications are future challenges in the field of sensor, signal and image informatics.
The FOXSI sounding rocket: Latest analysis and results
NASA Astrophysics Data System (ADS)
Buitrago-Casas, Juan Camilo; Glesener, Lindsay; Christe, Steven; Krucker, Sam; Ishikawa, Shin-Nosuke; Takahashi, Tadayuki; Ramsey, Brian; Han, Raymond
2016-05-01
Hard X-ray (HXR) observations are a linchpin for studying particle acceleration and hot thermal plasma emission in the solar corona. Current and past indirect imaging instruments lack the sensitivity and dynamic range needed to observe faint HXR signatures, especially in the presence of brighter sources. These limitations are overcome by using HXR direct focusing optics coupled with semiconductor detectors. The Focusing Optics X-ray Solar Imager (FOXSI) sounding rocket experiment is a state-of-the-art solar telescope that develops and applies these capabilities. The FOXSI sounding rocket has successfully flown twice, observing active regions, microflares, and areas of the quiet Sun. Thanks to its far superior imaging dynamic range, FOXSI performs cleaner hard X-ray imaging spectroscopy than previous instruments that use indirect imaging methods, like RHESSI. We present a description of the FOXSI rocket payload, paying attention to the calibration of the optics and semiconductor detectors, as well as the upgrades made for the second flight. We also introduce some of the latest FOXSI data analysis, including imaging spectroscopy of microflares and active regions observed during the two flights, and the differential emission measure distribution of the nonflaring corona.
Comparison of breast density measurements made using ultrasound tomography and mammography
NASA Astrophysics Data System (ADS)
Sak, Mark; Duric, Neb; Littrup, Peter; Bey-Knight, Lisa; Krycia, Mark; Sherman, Mark E.; Boyd, Norman; Gierach, Gretchen L.
2015-03-01
Women with elevated mammographic percent density, defined as the ratio of fibroglandular tissue area to total breast area on a mammogram, are at an increased risk of developing breast cancer. Ultrasound tomography (UST) is an imaging modality that can create tomographic sound speed images of a patient's breast, which can then be used to measure breast density. These sound speed images are useful because physical tissue density is directly proportional to sound speed. The work presented here updates previous results that compared mammographic breast density measurements with UST breast density measurements within an ongoing study. The current analysis has been expanded to include 158 women with negative digital mammographic screens who then underwent a breast UST scan. Breast density was measured for both imaging modalities, and preliminary analysis demonstrated a strong positive correlation (Spearman correlation coefficient rs = 0.703). Additional mammographic and UST related imaging characteristics were also analyzed and used to compare the behavior of both imaging modalities. Results suggest that UST can be used among women with negative mammographic screens as a quantitative marker of breast density that may avert shortcomings of mammography.
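The percent-density definition above (dense area over total breast area) reduces to a pixel count once a breast mask and a density threshold are available. A minimal sketch; the thresholding step is taken as given and the names are illustrative:

```python
import numpy as np

def percent_density(image, breast_mask, dense_threshold):
    """Mammographic percent density: fibroglandular (dense) pixel area
    divided by total breast area, as a percentage. Choosing the threshold
    is the hard part in practice and is assumed solved here."""
    breast = image[breast_mask]
    return 100.0 * np.count_nonzero(breast >= dense_threshold) / breast.size
```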
Burns, Clare L; Keir, Benjamin; Ward, Elizabeth C; Hill, Anne J; Farrell, Anna; Phillips, Nick; Porter, Linda
2015-08-01
High-quality fluoroscopy images are required for accurate interpretation of videofluoroscopic swallow studies (VFSS) by speech pathologists and radiologists. Consequently, integral to developing any system to conduct VFSS remotely via telepractice is ensuring that the quality of the VFSS images transferred via the telepractice system is optimized. This study evaluates the extent of change observed in image quality when videofluoroscopic images are transmitted from a digital fluoroscopy system to (a) current clinical equipment (a KayPentax Digital Swallowing Workstation) and (b) four different telepractice system configurations. The telepractice system configurations consisted of either a local C20 or C60 Cisco TelePresence System (codec unit) connected to the digital fluoroscopy system and linked to a second remote C20 or C60 Cisco TelePresence System via a network running at speeds of 2, 4 or 6 megabits per second (Mbit/s). Image quality was tested using the NEMA XR 21 Phantom, and results demonstrated some loss in spatial resolution, low contrast detectability and temporal resolution for all transferred images when compared to the fluoroscopy source. When using higher capacity codec units and/or the highest bandwidths to support data transmission, image quality transmitted through the telepractice system was found to be comparable if not better than the current clinical system. This study confirms that telepractice systems can be designed to support fluoroscopy image transfer and highlights important considerations when developing telepractice systems for VFSS analysis to ensure high-quality radiological image reproduction.
Moore, David Steven
2015-05-10
This second edition of "Infrared and Raman Spectroscopic Imaging" propels practitioners in that wide-ranging field, as well as other readers, to the current state of the art in a well-produced, full-color, completely revised and updated volume. This new edition chronicles the expanded application of vibrational spectroscopic imaging, from yesterday's time-consuming point-by-point buildup of a hyperspectral image cube, through the improvements afforded by the addition of focal plane arrays and line-scan imaging, to methods applicable beyond the diffraction limit. It instructs the reader on the improved instrumentation and image and data analysis methods, and expounds on their application to fundamental biomedical knowledge, food and agricultural surveys, materials science, process and quality control, and many other areas.
NASA Technical Reports Server (NTRS)
Schmahl, Edward J.; Kundu, Mukul R.
1998-01-01
We have continued our previous efforts in studies of Fourier imaging methods applied to hard X-ray flares. We have performed physical and theoretical analysis of rotating collimator grids submitted to GSFC (Goddard Space Flight Center) for the High Energy Solar Spectroscopic Imager (HESSI). We have produced simulation algorithms which are currently being used to test imaging software and hardware for HESSI. We have developed Maximum-Entropy, Maximum-Likelihood, and "CLEAN" methods for reconstructing HESSI images from count-rate profiles. This work is expected to continue through the launch of HESSI in July 2000. Section 1 shows a poster presentation, "Image Reconstruction from HESSI Photon Lists", given at the Solar Physics Division Meeting, June 1998; Section 2 shows the text and viewgraphs prepared for "Imaging Simulations" at HESSI's Preliminary Design Review on July 30, 1998.
Multi-scale image segmentation method with visual saliency constraints and its application
NASA Astrophysics Data System (ADS)
Chen, Yan; Yu, Jie; Sun, Kaimin
2018-03-01
Object-based image analysis has many advantages over pixel-based methods, so it is one of the current research hotspots. Obtaining image objects through multi-scale image segmentation is an essential prerequisite for object-based image analysis. Current popular image segmentation methods mainly share the bottom-up segmentation principle, which is simple to realize and yields accurate object boundaries. However, the macro statistical characteristics of image areas are difficult to take into account, and fragmented (over-segmented) results are difficult to avoid. In addition, when it comes to information extraction, target recognition and other applications, image targets are not equally important; some specific targets or target groups with particular features deserve more attention than the others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain the visual saliency information, especially the macroscopic information to be analyzed. The visual saliency information is used as a distribution map of homogeneity weight, where each pixel is given a weight. This weight acts as one of the merging constraints in the multi-scale image segmentation. As a result, pixels that macroscopically belong to the same object but differ locally are more likely to be assigned to the same object. In addition, owing to the constraint of the visual saliency model, the balance between local and macroscopic characteristics can be well controlled during the segmentation of different objects. These controls improve the completeness of visually salient areas in the segmentation results while diluting the controlling effect for non-salient background areas. Experiments show that this method works better for texture image segmentation than traditional multi-scale image segmentation methods, and can give priority control to the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, sparse forest extraction and other tasks to verify its validity. All applications showed good results.
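One plausible way to realize the saliency weight as a merging constraint is to scale the classical size-weighted homogeneity cost by the regions' saliency, so that pixels of one salient object merge more readily. This is a hedged sketch of the idea, not the paper's exact formulation; all names and the weighting form are illustrative:

```python
def merge_cost(mean_a, mean_b, n_a, n_b, sal_a, sal_b, alpha=0.5):
    """Merging cost between two adjacent regions. The first factor is the
    classical size-weighted colour-homogeneity term; the saliency factor
    lowers the cost when both regions are visually salient (saliency
    values assumed to lie in [0, 1])."""
    homogeneity = (n_a * n_b) / (n_a + n_b) * (mean_a - mean_b) ** 2
    saliency_weight = 1.0 - alpha * min(sal_a, sal_b)
    return homogeneity * saliency_weight
```

Region pairs with the lowest cost are merged first at each scale, so salient pairs merge before equally homogeneous background pairs.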
Images Encryption Method using Steganographic LSB Method, AES and RSA algorithm
NASA Astrophysics Data System (ADS)
Moumen, Abdelkader; Sissaoui, Hocine
2017-03-01
Vulnerability of communication of digital images is an extremely important issue nowadays, particularly when the images are communicated through insecure channels. To improve communication security, many cryptosystems have been presented in the image encryption literature. This paper proposes a novel image encryption technique based on an algorithm that is faster than current methods. The proposed algorithm eliminates the step in which the secret key is shared during the encryption process. It is formulated based on symmetric encryption, asymmetric encryption and steganography theories. The image is encrypted using a symmetric algorithm; then the secret key is encrypted by means of an asymmetric algorithm and hidden in the ciphered image using a least-significant-bit steganographic scheme. The analysis results show that, while enjoying faster computation, our method performs close to optimal in terms of accuracy.
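The least-significant-bit step above is the most mechanical part of the scheme: each bit of the (encrypted) key replaces the LSB of one pixel, changing each carrier pixel by at most 1. A minimal sketch of embedding and extraction, omitting the AES/RSA stages and capacity/key management (names illustrative):

```python
import numpy as np

def embed_lsb(image, payload):
    """Hide payload bytes in the least significant bits of a uint8 image,
    one bit per pixel in scan order; the input array is left untouched."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = image.flatten()  # flatten() returns a copy
    if bits.size > flat.size:
        raise ValueError("payload too large for cover image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image, n_bytes):
    """Recover n_bytes previously embedded with embed_lsb."""
    bits = image.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```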
Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios
2017-02-01
Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
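The pairing of Euclidean metrics with Monte Carlo simulation described above can be illustrated with a common pattern: compare an observed nearest-neighbour distance statistic against its expectation under complete spatial randomness in the same volume. This is a generic sketch of that pattern, not the study's exact algorithm; names and the uniform null model are illustrative:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean nearest-neighbour distance within a 3-D point cloud."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # ignore self-distances
    return d.min(axis=1).mean()

def clustering_score(points, lo, hi, n_sim=100, seed=0):
    """Observed mean NN distance over its Monte Carlo expectation under
    uniform randomness in the box [lo, hi]^3; values well below 1
    indicate spatial clustering."""
    rng = np.random.default_rng(seed)
    sims = [mean_nn_distance(rng.uniform(lo, hi, size=points.shape))
            for _ in range(n_sim)]
    return mean_nn_distance(points) / float(np.mean(sims))
```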
Pointing and control system performance and improvement strategies for the SOFIA Airborne Telescope
NASA Astrophysics Data System (ADS)
Graf, Friederike; Reinacher, Andreas; Jakob, Holger; Lampater, Ulrich; Pfueller, Enrico; Wiedemann, Manuel; Wolf, Jürgen; Fasoulas, Stefanos
2016-07-01
The Stratospheric Observatory for Infrared Astronomy (SOFIA) has already successfully conducted over 300 flights. In its early science phase, SOFIA's pointing requirements and especially the image jitter requirements of less than 1 arcsec rms have driven the design of the control system. Since the first observation flights, the image jitter has been gradually reduced by various control mechanisms. During smooth flight conditions, the current pointing and control system allows us to achieve the standards set for early science on SOFIA. However, the increasing demands on the image size require an image jitter of less than 0.4 arcsec rms during light turbulence to reach SOFIA's scientific goals. The major portion of the remaining image motion is caused by deformation and excitation of the telescope structure in a wide range of frequencies due to aircraft motion and aerodynamic and aeroacoustic effects. Therefore the so-called Flexible Body Compensation system (FBC) is used, a set of fixed-gain filters to counteract the structural bending and deformation. Thorough testing of the current system under various flight conditions has revealed a variety of opportunities for further improvements. The currently applied filters have solely been developed based on a FEM analysis. By implementing the inflight measurements in a simulation and optimization, an improved fixed-gain compensation method was identified. This paper will discuss promising results from various jitter measurements recorded with sampling frequencies of up to 400 Hz using the fast imaging tracking camera.
Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Preprint)
2011-11-01
[Abstract not available; only fragmentary reference-list text survives in this record, citing works on fluxgate magnetometers (Primdahl 1979; Ripka 1992) and a superconducting quantum interference device (SQUID) magnetometer system for quantitative analysis and imaging of hidden corrosion activity in aircraft aluminum.]
Introduction of A New Toolbox for Processing Digital Images From Multiple Camera Networks: FMIPROT
NASA Astrophysics Data System (ADS)
Melih Tanis, Cemal; Nadir Arslan, Ali
2017-04-01
Webcam networks intended for scientific monitoring of ecosystems provide digital images and other environmental data for various studies. Other types of camera networks can also be used for scientific purposes, e.g. traffic webcams for phenological studies, or camera networks monitoring ski tracks and avalanches in the mountains for hydrological studies. To harness the potential of these camera networks efficiently, easy-to-use software that can obtain and handle images from different networks with different protocols and standards is necessary. Numerous software packages for analyzing images from webcam networks are freely available. These packages have different strengths, not only for analyzing but also for post-processing digital images; however, for ease of use, applicability and scalability, a different set of features could still be added. Thus, a more customized approach would be of high value, not only for analyzing images from comprehensive camera networks, but also for building operational data extraction and processing with an easy-to-use toolbox. In this paper, we introduce a new toolbox, the Finnish Meteorological Institute Image PROcessing Tool (FMIPROT), in which such a customized approach is followed. FMIPROT currently has the following features: • straightforward installation, • no software dependencies that require extra installations, • communication with multiple camera networks, • automatic downloading and handling of images, • a user-friendly and simple user interface, • data filtering, • visualization of results on customizable plots, • plugins, which allow users to add their own algorithms. Current image analyses in FMIPROT include "Color Fraction Extraction" and "Vegetation Indices". Color fraction extraction calculates the fractions of red, green and blue in a region of interest, along with brightness and luminance parameters.
The vegetation indices analysis comprises a collection of indices used in vegetation phenology, including "Green Fraction" (green chromatic coordinate), "Green-Red Vegetation Index" and "Green Excess Index". A "Snow Cover Fraction" analysis, which detects snow-covered pixels in the images and georeferences them on a geospatial plane to calculate the snow cover fraction, is currently being implemented. FMIPROT is being developed within the EU Life+ MONIMET project, for which we mounted 28 cameras at 14 sites in Finland as the MONIMET camera network. In this paper, we present details of FMIPROT and analysis results from the MONIMET camera network, and discuss planned future developments of FMIPROT.
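The indices named above have standard closed forms: green chromatic coordinate GCC = G/(R+G+B), green-red vegetation index GRVI = (G−R)/(G+R), and excess green ExG = 2G−R−B. A minimal sketch of computing them over a region of interest (function and variable names are illustrative, not FMIPROT's API):

```python
import numpy as np

def vegetation_indices(roi):
    """Mean phenology indices over a region of interest.

    roi: float array of shape (H, W, 3) holding R, G, B channels.
    """
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    total = r + g + b
    gcc = np.mean(g / total)           # green chromatic coordinate (green fraction)
    grvi = np.mean((g - r) / (g + r))  # green-red vegetation index
    exg = np.mean(2 * g - r - b)       # green excess index
    return gcc, grvi, exg
```

In practice the ROI would be masked from each downloaded camera frame before the per-pixel ratios are averaged.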
Bonny, Jean Marie; Boespflug-Tanguly, Odile; Zanca, Michel; Renou, Jean Pierre
2003-03-01
A solution for discrete multi-exponential analysis of T(2) relaxation decay curves obtained under current multi-echo imaging protocol conditions is described. We propose a preprocessing step to improve the signal-to-noise ratio and thus lower the signal-to-noise ratio threshold above which a high percentage of true multi-exponential decays is detected. It consists of a multispectral non-linear edge-preserving filter that takes into account the signal-dependent Rician distribution of noise affecting magnitude MR images. Discrete multi-exponential decomposition, which requires no a priori knowledge, is performed by a non-linear least-squares procedure initialized with estimates obtained from a total least-squares linear prediction algorithm. This approach was validated and optimized experimentally on simulated data sets of normal human brains.
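As a rough illustration of the decomposition step only (a bi-exponential non-linear least-squares fit; the authors' total-least-squares linear-prediction initialization and Rician-aware filtering are not reproduced here, and the starting values are assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, t2_1, a2, t2_2):
    """Discrete two-component T2 decay model."""
    return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

def fit_biexp(t, signal, p0=(0.5, 30.0, 0.5, 100.0)):
    """Non-linear least-squares fit of a bi-exponential decay curve."""
    popt, _ = curve_fit(biexp, t, signal, p0=p0, maxfev=10000)
    return popt
```

With noise-free synthetic echoes the fit recovers the component T2 values; on real multi-echo data the quality of the initial estimates dominates, which is why the paper's linear-prediction initialization matters.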
Spatial and spectral analysis of corneal epithelium injury using hyperspectral images
NASA Astrophysics Data System (ADS)
Md Noor, Siti Salwa; Michael, Kaleena; Marshall, Stephen; Ren, Jinchang
2017-12-01
Eye assessment is essential in preventing blindness. Currently, the existing methods for assessing corneal epithelium injury are complex and require expert knowledge. Hence, we have introduced a non-invasive technique using hyperspectral imaging (HSI) and an image analysis algorithm for corneal epithelium injury. Three groups of images were compared and analyzed: healthy eyes, injured eyes, and injured eyes with stain. Dimensionality reduction using principal component analysis (PCA) was applied to reduce the massive data volume and its redundancy. The first 10 principal components (PCs) were selected for further processing. The mean vectors of the 10 PCs, over all 45 pairwise combinations, were computed and sent to two classifiers. A quadratic Bayes normal classifier (QDC) and a support vector classifier (SVC) were used in this study to classify the eleven eyes into the three groups. The combined QDC and SVC classifier showed optimal performance with 2D PCA features (2DPCA-QDSVC) and was utilized to classify normal and abnormal tissues using color image segmentation. The result was compared with human segmentation. The outcome showed that the proposed algorithm produced extremely promising results to assist the clinician in quantifying a cornea injury.
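The PCA step described above, projecting each pixel spectrum onto a handful of leading components, can be sketched with a plain SVD (variable names are illustrative; the paper's 2D PCA variant and classifiers are not reproduced):

```python
import numpy as np

def pca_scores(spectra, n_components=10):
    """Project hyperspectral pixel spectra onto leading principal components.

    spectra: (n_pixels, n_bands) array, one spectrum per row.
    Returns an (n_pixels, n_components) score matrix.
    """
    centered = spectra - spectra.mean(axis=0)
    # Right singular vectors are the principal axes, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```

The reduced scores would then feed a downstream classifier in place of the raw hyperspectral bands.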
NASA Astrophysics Data System (ADS)
Peller, Joseph; Thompson, Kyle J.; Siddiqui, Imran; Martinie, John; Iannitti, David A.; Trammell, Susan R.
2017-02-01
Pancreatic cancer is the fourth leading cause of cancer death in the US. Currently, surgery is the only treatment that offers a chance of cure; however, accurately identifying tumor margins in real time is difficult. Research has demonstrated that optical spectroscopy can be used to distinguish between healthy and diseased tissue. The design of a single-pixel imaging system for cancer detection is discussed. The system differentiates between healthy and diseased tissue based on differences in the optical reflectance spectra of these regions. In this study, pancreatic tissue samples from 6 patients undergoing Whipple procedures were imaged with the system (N = 11 tissue samples in total). Regions of healthy and unhealthy tissue were determined based on SAM analysis of these spectral images. Hyperspectral imaging results were then compared to white-light imaging and histological analysis. Cancerous regions were clearly visible in the hyperspectral images. Margins determined via spectral imaging were in good agreement with margins identified by histology, indicating that the hyperspectral imaging system can differentiate between healthy and diseased tissue. Overall, the system detected cancerous regions with a sensitivity of 74.50±5.89% and a specificity of 75.53±10.81%. Possible applications of this imaging system include determination of tumor margins during surgery/biopsy and assistance with cancer diagnosis and staging.
A Survey of FDG- and Amyloid-PET Imaging in Dementia and GRADE Analysis
Daniela, Perani; Orazio, Schillaci; Alessandro, Padovani; Mariano, Nobili Flavio; Leonardo, Iaccarino; Pasquale Anthony, Della Rosa; Giovanni, Frisoni; Carlo, Caltagirone
2014-01-01
PET-based tools can improve the early diagnosis of Alzheimer's disease (AD) and the differential diagnosis of dementia. The importance of identifying individuals at risk of developing dementia among people with subjective cognitive complaints or mild cognitive impairment has clinical, social, and therapeutic implications. Within the two major classes of AD biomarkers currently identified, namely markers of pathology and of neurodegeneration, amyloid- and FDG-PET imaging represent decisive tools for their measurement. As a consequence, these PET tools have been recognized as being of crucial value in the recent guidelines for the early diagnosis of AD and other dementia conditions. The literature-based recommendations, however, rest on a large body of PET imaging studies that use visual methods, which greatly reduce sensitivity and specificity and lack a clear cut-off between normal and pathological findings. PET imaging can instead be assessed using parametric or voxel-wise analyses that compare the subject's scan with a normative data set, significantly increasing the diagnostic accuracy. This paper is a survey of the relevant literature on FDG- and amyloid-PET imaging aimed at establishing the value of quantification for the early and differential diagnosis of AD. This allowed a meta-analysis and GRADE analysis, revealing high values for PET imaging that might be useful in framing recommendations. PMID:24772437
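The voxel-wise comparison against a normative data set that the survey credits with increased diagnostic accuracy is, at its core, a z-score map of the patient scan relative to controls. A minimal sketch, assuming spatially normalised images of matching shape (names and shapes are illustrative):

```python
import numpy as np

def zscore_map(scan, normals, eps=1e-12):
    """Voxel-wise z-scores of a patient scan against a normative data set.

    scan: (X, Y, Z) spatially normalised patient image.
    normals: (N, X, Y, Z) stack of control images in the same space.
    """
    mu = normals.mean(axis=0)
    sd = normals.std(axis=0, ddof=1)
    # Strongly negative z-scores flag voxels with abnormally low uptake
    # (e.g. hypometabolism on FDG-PET) relative to the normative set.
    return (scan - mu) / (sd + eps)
```

Real pipelines additionally apply smoothing, global-intensity scaling and cluster-level thresholding before a map is read clinically.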
Advanced NDE research in electromagnetic, thermal, and coherent optics
NASA Technical Reports Server (NTRS)
Skinner, S. Ballou
1992-01-01
A new inspection technology called magneto-optic/eddy current imaging was investigated. The magneto-optic imager makes irregularities and inconsistencies in airframe components readily visible. Other research observed in electromagnetics included (1) disbond detection via resonant modal analysis; (2) AC magnetic field frequency dependence of magnetoacoustic emission; and (3) multi-view magneto-optic imaging. Research observed in the thermal group included (1) thermographic detection and characterization of corrosion in aircraft aluminum; (2) a multipurpose infrared imaging system for thermoelastic stress detection; (3) thermal diffusivity imaging of stress induced damage in composites; and (4) detection and measurement of ice formation on the space shuttle main fuel tank. Research observed in the optics group included advancements in optical nondestructive evaluation (NDE).
Radiology and Enterprise Medical Imaging Extensions (REMIX).
Erdal, Barbaros S; Prevedello, Luciano M; Qian, Songyue; Demirer, Mutlu; Little, Kevin; Ryu, John; O'Donnell, Thomas; White, Richard D
2018-02-01
Radiology and Enterprise Medical Imaging Extensions (REMIX) is a platform originally designed to support both the clinical and the clinical-research medical-imaging operational needs of the Department of Radiology of The Ohio State University Wexner Medical Center. REMIX accommodates the storage and handling of "big imaging data," as needed for large multi-disciplinary cancer-focused programs. The evolving REMIX platform contains an array of integrated tools/software packages for the following: (1) server and storage management; (2) image reconstruction; (3) digital pathology; (4) de-identification; (5) business intelligence; (6) texture analysis; and (7) artificial intelligence. These capabilities, along with documentation and guidance explaining how to interact with a commercial system (e.g., PACS, EHR, commercial database) that currently exists in clinical environments, are to be made freely available.
Samei, Ehsan; Buhr, Egbert; Granfors, Paul; Vandenbroucke, Dirk; Wang, Xiaohui
2005-08-07
The modulation transfer function (MTF) is well established as a metric to characterize the resolution performance of a digital radiographic system. Implemented by various laboratories, the edge technique is currently the most widespread approach to measure the MTF. However, there can be differences in the results attributed to differences in the analysis technique employed. The objective of this study was to determine whether comparable results can be obtained from different algorithms processing identical images representative of those of current digital radiographic systems. Five laboratories participated in a round-robin evaluation of six different algorithms including one prescribed in the International Electrotechnical Commission (IEC) 62220-1 standard. The algorithms were applied to two synthetic and 12 real edge images from different digital radiographic systems including CR, and direct- and indirect-conversion detector systems. The results were analysed in terms of variability as well as accuracy of the resulting presampled MTFs. The results indicated that differences between the individual MTFs and the mean MTF were largely below 0.02. In the case of the two simulated edge images, all algorithms yielded similar results within 0.01 of the expected true MTF. The findings indicated that all algorithms tested in this round-robin evaluation, including the IEC-prescribed algorithm, were suitable for accurate MTF determination from edge images, provided the images are not excessively noisy. The agreement of the MTF results was judged sufficient for the measurement of the MTF necessary for the determination of the DQE.
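The edge technique compared in the round-robin follows a common pipeline: build an oversampled edge-spread function (ESF), differentiate it to a line-spread function (LSF), Fourier-transform, and normalise at zero frequency. A hedged numpy sketch of that pipeline only (not any participating laboratory's algorithm, and omitting the IEC 62220-1 binning and windowing details):

```python
import numpy as np

def mtf_from_esf(esf, dx):
    """Presampled MTF estimate from an oversampled edge-spread function.

    esf: 1-D edge profile sampled at spacing dx (e.g. in mm).
    Returns (frequencies, mtf) with the MTF normalised to 1 at f = 0.
    """
    lsf = np.gradient(esf, dx)               # differentiate ESF -> LSF
    mtf = np.abs(np.fft.rfft(lsf))           # magnitude spectrum of the LSF
    mtf /= mtf[0]                            # normalise at zero frequency
    freqs = np.fft.rfftfreq(esf.size, d=dx)  # cycles per unit of dx
    return freqs, mtf
```

The study's caveat applies directly here: differentiation amplifies noise, so excessively noisy edge images degrade this estimate regardless of the algorithm.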
Biomarkers and Surrogate Endpoints in Uveitis: The Impact of Quantitative Imaging.
Denniston, Alastair K; Keane, Pearse A; Srivastava, Sunil K
2017-05-01
Uveitis is a major cause of sight loss across the world. The reliable assessment of intraocular inflammation in uveitis ('disease activity') is essential in order to score disease severity and response to treatment. In this review, we describe how 'quantitative imaging', the approach of using automated analysis and measurement algorithms across both standard and emerging imaging modalities, can develop objective instrument-based measures of disease activity. This is a narrative review based on searches of the current world literature using terms related to quantitative imaging techniques in uveitis, supplemented by clinical trial registry data, and expert knowledge of surrogate endpoints and outcome measures in ophthalmology. Current measures of disease activity are largely based on subjective clinical estimation, and are relatively insensitive, with poor discrimination and reliability. The development of quantitative imaging in uveitis is most established in the use of optical coherence tomographic (OCT) measurement of central macular thickness (CMT) to measure severity of macular edema (ME). The transformative effect of CMT in clinical assessment of patients with ME provides a paradigm for the development and impact of other forms of quantitative imaging. Quantitative imaging approaches are now being developed and validated for other key inflammatory parameters such as anterior chamber cells, vitreous haze, retinovascular leakage, and chorioretinal infiltrates. As new forms of quantitative imaging in uveitis are proposed, the uveitis community will need to evaluate these tools against the current subjective clinical estimates and reach a new consensus for how disease activity in uveitis should be measured. The development, validation, and adoption of sensitive and discriminatory measures of disease activity is an unmet need that has the potential to transform both drug development and routine clinical care for the patient with uveitis.
Laser speckle imaging of rat retinal blood flow with hybrid temporal and spatial analysis method
NASA Astrophysics Data System (ADS)
Cheng, Haiying; Yan, Yumei; Duong, Timothy Q.
2009-02-01
Noninvasive monitoring of blood flow (BF) in the retinal circulation can reveal the progression and treatment response of ocular disorders such as diabetic retinopathy, age-related macular degeneration and glaucoma. A non-invasive, direct BF measurement technique with high spatio-temporal resolution is needed for retinal imaging, and laser speckle imaging (LSI) is such a method. Currently, there are two analysis methods for LSI: spatial statistics LSI (SS-LSI) and temporal statistics LSI (TS-LSI). Comparing the two, SS-LSI has a higher signal-to-noise ratio (SNR), while TS-LSI is less susceptible to artifacts from stationary speckle. We propose a hybrid temporal and spatial analysis method (HTS-LSI) to measure retinal blood flow. A gas challenge experiment was performed and the images were analyzed with HTS-LSI. Results showed that HTS-LSI not only removes the stationary speckle but also increases the SNR. Under 100% O2, retinal BF decreased by 20-30%, consistent with results observed with the laser Doppler technique. As retinal blood flow is a critical physiological parameter whose perturbation has been implicated in the early stages of many retinal diseases, HTS-LSI will be an efficient method for early detection of retinal diseases.
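Speckle contrast is the ratio K = σ/μ of intensity: spatial analysis computes it over a pixel neighbourhood within one frame, temporal analysis over the same pixel across frames. A minimal sketch of one hybrid scheme, computing temporal statistics over short frame groups and then averaging the resulting contrast maps (the grouping is an assumption for illustration, not the authors' exact HTS-LSI formulation):

```python
import numpy as np

def hybrid_speckle_contrast(stack, group=5):
    """Hybrid temporal/spatial speckle contrast from a raw frame stack.

    stack: (T, H, W) raw speckle frames.
    Returns an (H, W) contrast map; lower contrast indicates faster flow.
    """
    t = stack.shape[0] // group * group
    groups = stack[:t].reshape(-1, group, *stack.shape[1:])
    # Per-pixel temporal contrast within each short group of frames.
    k_temporal = groups.std(axis=1) / groups.mean(axis=1)
    # Average the contrast maps across groups to raise the SNR.
    return k_temporal.mean(axis=0)
```

A static (stationary-speckle) region has nearly constant intensity in time, so its temporal contrast, and hence its contribution, stays near zero.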
Falahati, Farshad; Westman, Eric; Simmons, Andrew
2014-01-01
Machine learning algorithms and multivariate data analysis methods have been widely utilized in the field of Alzheimer's disease (AD) research in recent years. Advances in medical imaging and medical image analysis have provided a means to generate and extract valuable neuroimaging information. Automatic classification techniques provide tools to analyze this information and observe inherent disease-related patterns in the data. In particular, these classifiers have been used to discriminate AD patients from healthy control subjects and to predict conversion from mild cognitive impairment to AD. In this paper, recent studies are reviewed that have used machine learning and multivariate analysis in the field of AD research. The main focus is on studies that used structural magnetic resonance imaging (MRI), but studies that included positron emission tomography and cerebrospinal fluid biomarkers in addition to MRI are also considered. A wide variety of materials and methods has been employed in different studies, resulting in a range of different outcomes. Influential factors such as classifiers, feature extraction algorithms, feature selection methods, validation approaches, and cohort properties are reviewed, as well as key MRI-based and multi-modal based studies. Current and future trends are discussed.
Chemical Applications of a Programmable Image Acquisition System
NASA Astrophysics Data System (ADS)
Ogren, Paul J.; Henry, Ian; Fletcher, Steven E. S.; Kelly, Ian
2003-06-01
Image analysis is widely used in chemistry, both for rapid qualitative evaluations using techniques such as thin layer chromatography (TLC) and for quantitative purposes such as well-plate measurements of analyte concentrations or fragment-size determinations in gel electrophoresis. This paper describes a programmable system for image acquisition and processing that is currently used in the laboratories of our organic and physical chemistry courses. It has also been used in student research projects in analytical chemistry and biochemistry. The potential range of applications is illustrated by brief presentations of four examples: (1) using well-plate optical transmission data to construct a standard concentration absorbance curve; (2) the quantitative analysis of acetaminophen in Tylenol and acetylsalicylic acid in aspirin using TLC with fluorescence detection; (3) the analysis of electrophoresis gels to determine DNA fragment sizes and amounts; and, (4) using color change to follow reaction kinetics. The supplemental material in JCE Online contains information on two additional examples: deconvolution of overlapping bands in protein gel electrophoresis, and the recovery of data from published images or graphs. The JCE Online material also presents additional information on each example, on the system hardware and software, and on the data analysis methodology.
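Example (1) above rests on the Beer-Lambert relation A = −log10(I/I0) applied to per-well transmitted intensity. A minimal sketch of turning well-plate transmission readings into a standard concentration-absorbance curve (function and variable names are illustrative, not part of the described system):

```python
import numpy as np

def absorbance(i_sample, i_blank):
    """Absorbance from transmitted intensities (Beer-Lambert): A = -log10(I/I0)."""
    return -np.log10(i_sample / i_blank)

def standard_curve(concentrations, absorbances):
    """Least-squares slope and intercept of the standard curve A vs. c."""
    slope, intercept = np.polyfit(concentrations, absorbances, 1)
    return slope, intercept
```

An unknown sample's concentration is then read off as (A − intercept) / slope, exactly as with a cuvette spectrophotometer.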
NASA Astrophysics Data System (ADS)
Popescu, Florin; Ayache, Stephane; Escalera, Sergio; Baró Solé, Xavier; Capponi, Cecile; Panciatici, Patrick; Guyon, Isabelle
2016-04-01
The big data transformation currently revolutionizing science and industry forges novel possibilities in multi-modal analysis scarcely imaginable only a decade ago. One of the important economic and industrial problems that stand to benefit from the recent expansion of data availability and computational prowess is the prediction of electricity demand and renewable energy generation. Both are correlates of human activity: spatiotemporal energy consumption patterns in society are a factor of both demand (weather dependent) and supply, which determine cost - a relation expected to strengthen along with increasing renewable energy dependence. One of the main drivers of European weather patterns is the activity of the Atlantic Ocean and in particular its dominant Northern Hemisphere current: the Gulf Stream. We choose this particular current as a test case partly due to the larger amount of relevant data and scientific literature available for refinement of analysis techniques. This data richness is due not only to its economic importance but also to its size, which makes it clearly visible in radar and infrared satellite imagery and therefore easier to detect using Computer Vision (CV). The power of CV techniques makes the basic analysis thus developed scalable to other smaller and less known, but still influential, currents, which are not just curves on a map, but complex, evolving, moving branching trees in 3D projected onto a 2D image. We investigate means of extracting, from several image modalities (including recently available Copernicus radar and earlier infrared satellites), a parameterized representation of the state of the Gulf Stream and its environment that is useful as a feature-space representation in a machine learning context, here within the EC's H2020-sponsored 'See.4C' project, in which data scientists may find novel predictors of spatiotemporal energy flow.
Although automated extractors of the Gulf Stream position exist, they differ in methodology and results. We shall attempt to extract a more complex feature representation including branching points, eddies and parameterized changes in transport and velocity. Other related predictive features will be similarly developed, such as inference of deep-water flux along the current path and wider-spatial-scale features such as the Hough transform, surface turbulence indicators and temperature gradient indexes, along with multi-time-scale analysis of ocean height and temperature dynamics. The geospatial imaging and ML community may therefore benefit from a baseline of open-source techniques useful for, and expandable to, other related prediction and/or scientific analysis tasks.
Hyperspectral small animal fluorescence imaging: spectral selection imaging
NASA Astrophysics Data System (ADS)
Leavesley, Silas; Jiang, Yanan; Patsekin, Valery; Hall, Heidi; Vizard, Douglas; Robinson, J. Paul
2008-02-01
Molecular imaging is a rapidly growing area of research, fueled by needs in pharmaceutical drug development for high-throughput screening methods, by pre-clinical and clinical screening for visualizing tumor growth and drug targeting, and by a growing number of applications in the molecular biology fields. Small animal fluorescence imaging employs fluorescent probes to target molecular events in vivo, with a large number of molecular targeting probes readily available. The ease with which new targeting compounds can be developed, the short acquisition times, and the low cost (compared to microCT, MRI, or PET) make fluorescence imaging attractive. However, small animal fluorescence imaging suffers from high optical scattering, absorption, and autofluorescence. Many of these problems can be overcome through multispectral imaging techniques, which collect images at different fluorescence emission wavelengths, followed by analysis, classification, and spectral deconvolution methods to isolate signals from fluorescence emission. We present an alternative to the current method, using hyperspectral excitation scanning (spectral selection imaging), a technique that allows excitation at any wavelength in the visible and near-infrared range. In many cases, excitation imaging may be more effective at identifying specific fluorescence signals because of the higher complexity of the fluorophore excitation spectrum. Because the excitation is filtered and not the emission, the resolution limit and image shift imposed by acousto-optic tunable filters have no effect on imager performance. We will discuss the design of the imager, its optimization for small animal fluorescence imaging, and the application of spectral analysis and classification methods for identifying specific fluorescence signals.
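The spectral deconvolution step mentioned above is commonly posed as linear unmixing: each measured spectrum is modelled as a weighted sum of known reference (endmember) spectra, e.g. a probe plus autofluorescence. A generic least-squares sketch (the reference spectra and names are hypothetical, and this is not the authors' specific method):

```python
import numpy as np

def unmix(measured, endmembers):
    """Linear spectral unmixing of one pixel by least squares.

    measured: (n_bands,) spectrum of a single pixel.
    endmembers: (n_sources, n_bands) reference emission/excitation spectra.
    Returns the estimated abundance of each source.
    """
    # Solve endmembers.T @ coeffs ~= measured in the least-squares sense.
    coeffs, *_ = np.linalg.lstsq(endmembers.T, measured, rcond=None)
    return coeffs
```

Practical variants constrain the abundances to be non-negative; plain least squares suffices to show the idea.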
Rubel, Oliver; Bowen, Benjamin P
2018-01-01
Mass spectrometry imaging (MSI) is a transformative imaging method that supports the untargeted, quantitative measurement of the chemical composition and spatial heterogeneity of complex samples, with broad applications in life sciences, bioenergy, and health. While MSI data can be routinely collected, its broad application is currently limited by the lack of easily accessible analysis methods that can process data of the size, volume, diversity, and complexity generated by MSI experiments. The development and application of cutting-edge analytical methods is a core driver in MSI research for new scientific discoveries, medical diagnostics, and commercial innovation. However, the lack of means to share, apply, and reproduce analyses hinders the broad application, validation, and use of novel MSI analysis methods. To address this central challenge, we introduce the Berkeley Analysis and Storage Toolkit (BASTet), a novel framework for shareable and reproducible data analysis that supports standardized data and analysis interfaces, integrated data storage, data provenance, workflow management, and a broad set of integrated tools. Based on BASTet, we describe the extension of the OpenMSI mass spectrometry imaging science gateway to enable web-based sharing, reuse, analysis, and visualization of data analyses and derived data products. We demonstrate the application of BASTet and OpenMSI in practice to identify and compare characteristic substructures in the mouse brain based on their chemical composition measured via MSI.
Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women
Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.
2017-01-01
Image-based dietary records could lower the participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated the relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days, once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758
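The agreement analysis reported above pairs Pearson correlation with Bland-Altman limits of agreement, i.e. the mean paired difference (bias) ± 1.96 standard deviations. A minimal sketch of the Bland-Altman computation (generic, not the study's statistical software):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()                 # systematic difference between methods
    sd = diff.std(ddof=1)              # spread of the paired differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

The companion correlation is simply `np.corrcoef(method_a, method_b)[0, 1]`; the plot itself charts each pair's difference against its mean with these three horizontal lines.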
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.; Kuncic, Zdenka; Keall, Paul J.
2014-01-01
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK. 
In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage. PMID:24694143
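Of the binning methods compared, equal-projection-density displacement binning assigns each projection to an amplitude bin defined by quantiles of the respiratory signal, so every bin receives (approximately) the same number of projections. A sketch under that definition (variable names are illustrative, not from the study's software):

```python
import numpy as np

def displacement_bins(resp_signal, n_bins):
    """Equal-projection-density displacement binning of CBCT projections.

    resp_signal: per-projection respiratory amplitude (1-D array).
    Returns an integer bin index per projection; bins are equally populated
    when the signal has no large runs of tied values.
    """
    edges = np.quantile(resp_signal, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.searchsorted(edges, resp_signal, side="right") - 1
    return np.clip(idx, 0, n_bins - 1)
```

Plain displacement binning would instead use equally spaced amplitude edges, which is what produces the large inter-bin population differences and angular gaps the study observed.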
Brain imaging and behavioral outcome in traumatic brain injury.
Bigler, E D
1996-09-01
Brain imaging studies have become an essential diagnostic assessment procedure in evaluating the effects of traumatic brain injury (TBI). Such imaging studies provide a wealth of information about structural and functional deficits following TBI. But how pathologic changes identified by brain imaging methods relate to neurobehavioral outcome is not as well known. Thus, the focus of this article is on brain imaging findings and outcome following TBI. The article starts with an overview of current research dealing with the cellular pathology associated with TBI. Understanding the cellular elements of pathology permits extrapolation to what is observed with brain imaging. Next, this article reviews the relationship of brain imaging findings to underlying pathology and how that pathology relates to neurobehavioral outcome. The brain imaging techniques of magnetic resonance imaging, computerized tomography, and single photon emission computed tomography are reviewed. Various image analysis procedures, and how such findings relate to neuropsychological testing, are discussed. The importance of brain imaging in evaluating neurobehavioral deficits following brain injury is stressed.
Fritscher, Karl; Grunerbl, Agnes; Hanni, Markus; Suhm, Norbert; Hengg, Clemens; Schubert, Rainer
2009-10-01
Currently, conventional X-ray and CT images, as well as invasive methods performed during the surgical intervention, are used to judge the local quality of a fractured proximal femur. However, these approaches are either dependent on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. Therefore, in this work a method for the individual analysis of local bone quality in the proximal femur, based on model-based analysis of CT and X-ray images of femur specimens, is proposed. A combined representation of the shape and spatial intensity distribution of an object, together with different statistical approaches for dimensionality reduction, is used to create a statistical appearance model in order to assess the local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. It will be shown that the tools and algorithms presented herein are highly adequate to automatically and objectively predict bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.
DIGITAL IMAGE ANALYSIS OF ZOSTERA MARINA LEAF INJURY
Current methods for assessing leaf injury in Zostera marina (eelgrass) utilize subjective indexes for desiccation injury and wasting disease. Because of the subjective nature of these measures, they are inherently imprecise, making them difficult to use in quantifying complex leaf...
Automatic discrimination of fine roots in minirhizotron images.
Zeng, Guang; Birchfield, Stanley T; Wells, Christina E
2008-01-01
Minirhizotrons provide detailed information on the production, life history and mortality of fine roots. However, manual processing of minirhizotron images is time-consuming, limiting the number and size of experiments that can reasonably be analysed. Previously, an algorithm was developed to automatically detect and measure individual roots in minirhizotron images. Here, species-specific root classifiers were developed to discriminate detected roots from bright background artifacts. Classifiers were developed from training images of peach (Prunus persica), Freeman maple (Acer x freemanii) and sweetbay magnolia (Magnolia virginiana) using the Adaboost algorithm. True- and false-positive rates for classifiers were estimated using receiver operating characteristic curves. Classifiers gave true positive rates of 89-94% and false positive rates of 3-7% when applied to nontraining images of the species for which they were developed. The application of a classifier trained on one species to images from another species resulted in little or no reduction in accuracy. These results suggest that a single root classifier can be used to distinguish roots from background objects across multiple minirhizotron experiments. By incorporating root detection and discrimination algorithms into an open-source minirhizotron image analysis application, many analysis tasks that are currently performed by hand can be automated.
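The Adaboost step described above can be sketched with one-dimensional threshold stumps. This is a minimal illustration, not the study's classifier: the single brightness-like feature, the toy values, and the round count are all assumptions made for the example.

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost over 1-D threshold stumps; ys are +1 (root) / -1 (artifact)."""
    n = len(xs)
    w = [1.0 / n] * n                       # uniform sample weights
    stumps = []                             # (threshold, polarity, alpha)
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):           # candidate thresholds
            for pol in (1, -1):             # stump: pol if x >= t else -pol
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if (pol if x >= t else -pol) != y)
                if best is None or err < best[0]:
                    best = (err, t, pol)
        err, t, pol = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        stumps.append((t, pol, alpha))
        # up-weight misclassified samples for the next round
        w = [wi * math.exp(-alpha * y * (pol if x >= t else -pol))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return stumps

def predict(stumps, x):
    score = sum(a * (p if x >= t else -p) for t, p, a in stumps)
    return 1 if score >= 0 else -1

# toy feature: mean brightness of a detected region (roots brighter here)
xs = [0.9, 0.8, 0.75, 0.3, 0.2, 0.1]
ys = [1, 1, 1, -1, -1, -1]
model = train_adaboost(xs, ys)
print([predict(model, x) for x in xs])  # [1, 1, 1, -1, -1, -1]
```

In the study, each detected root region would contribute a richer feature vector; the boosting loop itself is unchanged.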
Cryo-imaging of fluorescently labeled single cells in a mouse
NASA Astrophysics Data System (ADS)
Steyer, Grant J.; Roy, Debashish; Salvado, Olivier; Stone, Meredith E.; Wilson, David L.
2009-02-01
We developed a cryo-imaging system to provide single-cell detection of fluorescently labeled cells in mouse, with particular applicability to stem cells and metastatic cancer. The Case cryoimaging system consists of a fluorescence microscope, robotic imaging positioner, customized cryostat, PC-based control system, and visualization/analysis software. The system alternates between sectioning (10-40 μm) and imaging, collecting color brightfield and fluorescent blockface image volumes >60 GB. In mouse experiments, we imaged quantum-dot labeled stem cells, GFP-labeled cancer and stem cells, and cell-size fluorescent microspheres. To remove subsurface fluorescence, we used a simplified model of light-tissue interaction whereby the next image was scaled, blurred, and subtracted from the current image. We estimated scaling and blurring parameters by minimizing entropy of subtracted images. Tissue-specific attenuation parameters were found [μT: heart (267 ± 47.6 μm), liver (218 ± 27.1 μm), brain (161 ± 27.4 μm)] to be within the range of estimates in the literature. "Next image" processing removed subsurface fluorescence equally well across multiple tissues (brain, kidney, liver, adipose tissue, etc.), and analysis of 200 microsphere images in the brain gave 97 ± 2% reduction of subsurface fluorescence. Fluorescent signals were determined to arise from single cells based upon geometric and integrated intensity measurements. Next image processing greatly improved axial resolution, enabled high quality 3D volume renderings, and improved enumeration of single cells with connected component analysis by up to 24%. Analysis of image volumes identified metastatic cancer sites, found homing of stem cells to injury sites, and showed microsphere distribution correlated with blood flow patterns. We developed and evaluated cryo-imaging to provide single-cell detection of fluorescently labeled cells in mouse.
Our cryo-imaging system provides extreme (>60GB), micron-scale, fluorescence, and bright field image data. Here we describe our image preprocessing, analysis, and visualization techniques. Processing improves axial resolution, reduces subsurface fluorescence by 97%, and enables single cell detection and counting. High quality 3D volume renderings enable us to evaluate cell distribution patterns. Applications include the myriad of biomedical experiments using fluorescent reporter gene and exogenous fluorophore labeling of cells in applications such as stem cell regenerative medicine, cancer, tissue engineering, etc.
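The "next image" correction described above can be sketched as scale-blur-subtract. This is a minimal sketch under stated assumptions: a box blur stands in for the estimated point-spread blur, and the scale factor is fixed, whereas the system estimates both by minimizing the entropy of the subtracted image.

```python
import numpy as np

def box_blur(img, k=1):
    """Mean filter with half-width k (a stand-in for the estimated PSF blur)."""
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1].mean()
    return out

def next_image_subtract(current, nxt, scale=0.5, k=1):
    """Subtract a scaled, blurred copy of the next (deeper) section image."""
    return np.clip(current - scale * box_blur(nxt, k), 0, None)

# toy case: the current section contains only bleed-through from a bright
# cell in the next section, so subtraction should remove everything
nxt = np.zeros((9, 9))
nxt[4, 4] = 100.0
current = 0.5 * box_blur(nxt, 1)          # pure subsurface fluorescence
cleaned = next_image_subtract(current, nxt, scale=0.5, k=1)
print(float(cleaned.sum()))  # 0.0
```

On real data the residual is nonzero, which is why the parameters are fitted per tissue rather than fixed as here.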
Hologlyphics: volumetric image synthesis performance system
NASA Astrophysics Data System (ADS)
Funk, Walter
2008-02-01
This paper describes a novel volumetric image synthesis system and artistic technique, which generate moving volumetric images in real-time, integrated with music. The system, called the Hologlyphic Funkalizer, is performance based, wherein the images and sound are controlled by a live performer, for the purposes of entertaining a live audience and creating a performance art form unique to volumetric and autostereoscopic images. While currently configured for a specific parallax barrier display, the Hologlyphic Funkalizer's architecture is completely adaptable to various volumetric and autostereoscopic display technologies. Sound is distributed through a multi-channel audio system; currently a quadraphonic speaker setup is implemented. The system controls volumetric image synthesis, production of music and spatial sound via acoustic analysis and human gestural control, using a dedicated control panel, motion sensors, and multiple musical keyboards. Music can be produced by external acoustic instruments, pre-recorded sounds or custom audio synthesis integrated with the volumetric image synthesis. Aspects of the sound can control the evolution of images and vice versa. Sounds can be associated and interact with images; for example, voice synthesis can be combined with an animated volumetric mouth, where nuances of generated speech modulate the mouth's expressiveness. Different images can be sent to up to 4 separate displays. The system applies many novel volumetric special effects, and extends several film and video special effects into the volumetric realm. Extensive and various content has been developed and shown to live audiences by a live performer. Real world applications will be explored, with feedback on the human factors.
NASA Astrophysics Data System (ADS)
Cheng, Mao-Hsun; Zhao, Chumin; Kanicki, Jerzy
2017-05-01
Current-mode active pixel sensor (C-APS) circuits based on amorphous indium-tin-zinc-oxide thin-film transistors (a-ITZO TFTs) are proposed for indirect X-ray imagers. The proposed C-APS circuits include a combination of a hydrogenated amorphous silicon (a-Si:H) p+-i-n+ photodiode (PD) and a-ITZO TFTs. Source-output (SO) and drain-output (DO) C-APS are investigated and compared. Acceptable signal linearity and high gains are realized for SO C-APS. APS circuit characteristics, including voltage gain, charge gain, signal linearity, charge-to-current conversion gain, and electron-to-voltage conversion gain, are evaluated. The impact of the a-ITZO TFT threshold voltage shifts on C-APS is also considered. A layout for a pixel pitch of 50 μm and an associated fabrication process are suggested. Data line loadings for 4k-resolution X-ray imagers are computed and their impact on circuit performances is taken into consideration. Noise analysis is performed, showing a total input-referred noise of 239 e-.
Discrimination of herbicide-resistant kochia with hyperspectral imaging
NASA Astrophysics Data System (ADS)
Nugent, Paul W.; Shaw, Joseph A.; Jha, Prashant; Scherrer, Bryan; Donelick, Andrew; Kumar, Vipan
2018-01-01
A hyperspectral imager was used to differentiate herbicide-resistant from herbicide-susceptible biotypes of the agronomic weed kochia, in different crops in the field at the Southern Agricultural Research Center in Huntley, Montana. Controlled greenhouse experiments showed that enough information was captured by the imager to classify plants as crop, herbicide-susceptible kochia, or herbicide-resistant kochia. Current work is developing an algorithm that will work in less controlled outdoor situations. In overcast conditions, the algorithm correctly identified dicamba-resistant kochia, glyphosate-resistant kochia, and glyphosate- and dicamba-susceptible kochia with 67%, 76%, and 80% success rates, respectively.
The Role of Magnetic Forces in Biology and Medicine
Roth, Bradley J
2011-01-01
The Lorentz force (the force acting on currents in a magnetic field) plays an increasingly large role in techniques to image current and conductivity. This review will summarize several applications involving the Lorentz force, including 1) magneto-acoustic imaging of current, 2) “Hall effect” imaging, 3) ultrasonically-induced Lorentz force imaging of conductivity, 4) magneto-acoustic tomography with magnetic induction, and 5) Lorentz force imaging of action currents using magnetic resonance imaging. PMID:21321309
ERIC Educational Resources Information Center
Massie, Keith R.
2011-01-01
The current study examined over 3000 visual images on the homepages of 234 national universities to determine how power relations are depicted. Using a hybrid methodology of grounded theory, critical discursive analysis, and facial prominence scoring, the work culminates in a theory: The (Im)Balanced Theory of College Identity Formation Online. The…
Aslam, Tariq Mehmood; Shakir, Savana; Wong, James; Au, Leon; Ashworth, Jane
2012-12-01
Mucopolysaccharidoses (MPS) can cause corneal opacification that is currently difficult to objectively quantify. With newer treatments for MPS comes an increased need for a more objective, valid and reliable index of disease severity for clinical and research use. Clinical evaluation by slit lamp is very subjective and techniques based on colour photography are difficult to standardise. In this article the authors present evidence for the utility of dedicated image analysis algorithms applied to images obtained by a highly sophisticated iris recognition camera that is small, manoeuvrable and adapted to achieve rapid, reliable and standardised objective imaging in a wide variety of patients while minimising artefactual interference in image quality.
Stålhammar, Gustav; Robertson, Stephanie; Wedlund, Lena; Lippert, Michael; Rantalainen, Mattias; Bergh, Jonas; Hartman, Johan
2018-05-01
During pathological examination of breast tumours, proliferative activity is routinely evaluated by a count of mitoses. Adding immunohistochemical stains of Ki67 provides extra prognostic and predictive information. However, the currently used methods for these evaluations suffer from imperfect reproducibility. It is still unclear whether analysis of Ki67 should be performed in hot spots, in the tumour periphery, or as an average of the whole tumour section. The aim of this study was to compare the clinical relevance of mitoses, Ki67 and phosphohistone H3 in two cohorts of primary breast cancer specimens (total n = 294). Both manual and digital image analysis scores were evaluated for sensitivity and specificity for luminal B versus A subtype as defined by PAM50 gene expression assays, for high versus low transcriptomic grade, for axillary lymph node status, and for prognostic value in terms of prediction of overall and relapse-free survival. Digital image analysis of Ki67 outperformed the other markers, especially in hot spots. Tumours with high Ki67 expression and high numbers of phosphohistone H3-positive cells had significantly increased hazard ratios for all-cause mortality within 10 years from diagnosis. Replacing manual mitotic counts with digital image analysis of Ki67 in hot spots increased the differences in overall survival between the highest and lowest histological grades, and added significant prognostic information. Digital image analysis of Ki67 in hot spots is the marker of choice for routine analysis of proliferation in breast cancer. © 2017 John Wiley & Sons Ltd.
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Editor); Schenker, Paul (Editor)
1987-01-01
The papers presented in this volume provide an overview of current research in both optical and digital pattern recognition, with a theme of identifying overlapping research problems and methodologies. Topics discussed include image analysis and low-level vision, optical system design, object analysis and recognition, real-time hybrid architectures and algorithms, high-level image understanding, and optical matched filter design. Papers are presented on synthetic estimation filters for a control system; white-light correlator character recognition; optical AI architectures for intelligent sensors; interpreting aerial photographs by segmentation and search; and optical information processing using a new photopolymer.
NASA Technical Reports Server (NTRS)
Castruccio, P. A.; Loats, H. L., Jr.
1975-01-01
An analysis of current computer usage by major water resources users was made to determine the trends of usage and costs for the principal hydrologic users/models. The laws and empirical relationships governing the growth of the data processing loads were described and applied to project the future data loads. Data loads for ERTS CCT image processing were computed and projected through the 1985 era. The analysis shows a significant impact due to the utilization and processing of ERTS CCT data.
Temperature Diffusion Distribution of Electric Wire Deteriorated by Overcurrent
NASA Astrophysics Data System (ADS)
Choi, Chung-Seog; Kim, Hyang-Kon; Kim, Dong-Woo; Lee, Ki-Yeon
This study presents the thermal diffusion distribution of electric wires when overcurrent is supplied to copper wires, and aims to provide a basis of knowledge for analyzing the causes of electric accidents through hybrid technology. In the thermal image distribution analysis of the electric wire to which fusing current was supplied, less heat accumulated in the thin wires because of easier heat dispersion, while more heat accumulated in the thicker wires. The 3-dimensional thermal image analysis showed that heat distribution was concentrated at the center of the wire and that the gradient of heat distribution was steep in the thicker wires. When 81 A was supplied to a 1.6 mm copper wire for 500 seconds, the surface temperature of the wire reached a maximum of 46.68°C and a minimum of 30.87°C. This revealed the initial characteristics of insulation deterioration, which generates white smoke without external deformation. In the analysis with a stereoscopic microscope, the surface turned dark brown and rough with the increase of fusing current, and exfoliation occurred when the wire melted down at 2 times the fusing current. With the increase of current, the number of primary arms of the dendrite structure increased while the numbers of secondary and tertiary arms decreased. Also, when the overcurrent reached twice the fusing current, a columnar composition, observed in the cross-sectional structure of the molten wire, appeared and formed a regular directivity. As described above, the burning pattern and the changes in the characteristics of the insulation and conductor were presented quantitatively. Combining this information not only minimizes analysis error but also provides a scientific basis for analyzing the causes of electric accidents and for mediating disputes on product liability concerning electric products.
Predictive spectroscopy and chemical imaging based on novel optical systems
NASA Astrophysics Data System (ADS)
Nelson, Matthew Paul
1998-10-01
This thesis describes two futuristic optical systems designed to surpass contemporary spectroscopic methods for predictive spectroscopy and chemical imaging. These systems are advantageous over current techniques in a number of ways, including lower cost, enhanced portability, shorter analysis time, and improved S/N. First, a novel optical approach to predicting chemical and physical properties based on principal component analysis (PCA) is proposed and evaluated. A regression vector produced by PCA is designed into the structure of a set of paired optical filters. Light passing through the paired filters produces an analog detector signal directly proportional to the chemical/physical property for which the regression vector was designed. Second, a novel optical system is described which takes a single-shot approach to chemical imaging with high spectroscopic resolution using a dimension-reduction fiber-optic array. Images are focused onto a two-dimensional matrix of optical fibers which are drawn into a linear distal array with specific ordering. The distal end is imaged with a spectrograph equipped with an ICCD camera for spectral analysis. Software is used to extract the spatial/spectral information contained in the ICCD images and deconvolute them into wavelength-specific reconstructed images or position-specific spectra which span a multi-wavelength space. This thesis includes a description of the fabrication of two dimension-reduction arrays as well as an evaluation of the system for spatial and spectral resolution, throughput, image brightness, resolving power, depth of focus, and channel cross-talk. PCA is performed on the images by treating rows of the ICCD images as spectra and plotting the scores of each PC as a function of reconstruction position. In addition, iterative target transformation factor analysis (ITTFA) is performed on the spectroscopic images to generate "true" chemical maps of samples.
Univariate zero-order images, univariate first-order spectroscopic images, bivariate first-order spectroscopic images, and multivariate first-order spectroscopic images of the temporal development of laser-induced plumes are presented and interpreted. Reconstructed chemical images generated using bivariate and trivariate wavelength techniques, bimodal and trimodal PCA methods, and bimodal and trimodal ITTFA approaches are also included.
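The paired-filter idea from the first system can be sketched numerically: splitting a PCA/PCR regression vector into non-negative "positive" and "negative" transmission filters reproduces the dot-product prediction as the difference of two detector signals. The spectra and regression vector below are synthetic stand-ins, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = 50
b = rng.normal(size=wavelengths)             # regression vector (e.g. from PCR)
spectra = rng.normal(size=(5, wavelengths))  # five sample spectra

# electronic prediction: a plain dot product per spectrum
y_dot = spectra @ b

# "optical" prediction: split b into a pair of non-negative filters, measure
# the light through each, and subtract the two detector signals
b_pos = np.clip(b, 0, None)   # transmission profile of the positive filter
b_neg = np.clip(-b, 0, None)  # transmission profile of the negative filter
y_opt = spectra @ b_pos - spectra @ b_neg

print(np.allclose(y_dot, y_opt))  # True: the filter pair realizes the dot product
```

The split is needed because a physical filter cannot have negative transmission; the subtraction restores the signed weights.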
NASA Technical Reports Server (NTRS)
Oswald, Hayden; Molthan, Andrew L.
2011-01-01
Satellite remote sensing has gained widespread use in the field of operational meteorology. Although raw satellite imagery is useful, several techniques exist which can convey multiple types of data in a more efficient way. One of these techniques is multispectral compositing. The NASA Short-term Prediction Research and Transition (SPoRT) Center has developed two multispectral satellite imagery products which utilize data from the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard NASA's Terra and Aqua satellites, based upon products currently generated and used by the European Organization for the Exploitation of Meteorological Satellites (EUMETSAT). The nighttime microphysics product allows users to identify clouds occurring at different altitudes, but emphasizes fog and low cloud detection. This product improves upon current spectral difference and single channel infrared techniques. Each of the current products has its own set of advantages for nocturnal fog detection, but each also has limiting drawbacks which can hamper the analysis process. The multispectral product combines each current product with a third channel difference. Since the final image is enhanced with color, it simplifies the fog identification process. Analysis has shown that the nighttime microphysics imagery product represents a substantial improvement to conventional fog detection techniques, as well as provides a preview of future satellite capabilities to forecasters.
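A multispectral composite of this kind maps channels or channel differences onto the red, green, and blue guns. The sketch below shows the general mechanics only; the channel choices and value ranges are illustrative assumptions, not the operational SPoRT/EUMETSAT nighttime microphysics recipe.

```python
import numpy as np

def scale(band, lo, hi):
    """Map physical values in [lo, hi] to [0, 1] for one color gun."""
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)

# tiny uniform test scene of brightness temperatures (K); values are made up
h = w = 4
ir_039 = np.full((h, w), 278.0)   # shortwave IR channel
ir_108 = np.full((h, w), 280.0)   # longwave IR window channel
ir_120 = np.full((h, w), 279.0)

rgb = np.dstack([
    scale(ir_120 - ir_108, -4.0, 2.0),   # red: a channel difference
    scale(ir_108 - ir_039, 0.0, 10.0),   # green: the fog/low-cloud difference
    scale(ir_108, 243.0, 293.0),         # blue: single-channel temperature
])
print(rgb.shape)  # (4, 4, 3)
```

Because each physical signal controls one color, a forecaster reads fog as a characteristic hue rather than comparing three grayscale images.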
3D temporal subtraction on multislice CT images using nonlinear warping technique
NASA Astrophysics Data System (ADS)
Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio
2007-03-01
The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interest (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value reached its maximum in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of shift vectors of VOIs, and then the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on subtraction CT images.
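The local matching step can be sketched in 2D: a VOI from the previous image is slid over a search window in the current image, and the shift maximizing normalized cross-correlation becomes the local shift vector. This is a pure-translation toy version of the paper's 3D translation-and-rotation search, with synthetic data.

```python
import numpy as np

def best_shift(region, template):
    """Shift (dy, dx) of `template` inside `region` that maximizes normalized
    cross-correlation; (0, 0) means the template sits at the region center."""
    th, tw = template.shape
    search = (region.shape[0] - th) // 2
    t = (template - template.mean()) / (template.std() + 1e-12)
    best, best_ncc = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = region[search + dy:search + dy + th,
                           search + dx:search + dx + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-12)
            ncc = float((t * p).mean())
            if ncc > best_ncc:
                best_ncc, best = ncc, (dy, dx)
    return best

rng = np.random.default_rng(1)
prev = rng.normal(size=(16, 16))
template = prev[5:10, 5:10]                          # VOI in the previous scan
current = np.roll(prev, shift=(1, 2), axis=(0, 1))   # simulated patient motion
region = current[2:13, 2:13]                         # search window around the VOI
print(best_shift(region, template))  # (1, 2): the recovered local shift vector
```

In the full method these per-VOI vectors are interpolated to every voxel before warping and subtracting the previous image.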
Waldenberg, Christian; Hebelka, Hanna; Brisby, Helena; Lagerstrand, Kerstin Magdalena
2018-05-01
Magnetic resonance imaging (MRI) is the best diagnostic imaging method for low back pain. However, the technique is currently not utilized in its full capacity, often failing to depict painful intervertebral discs (IVDs), potentially due to the rough degeneration classification system used clinically today. MR image histograms, which reflect the IVD heterogeneity, may offer sensitive imaging biomarkers for IVD degeneration classification. This study investigates the feasibility of using histogram analysis as a means of objective and continuous grading of IVD degeneration. Forty-nine IVDs in ten low back pain patients (six males, 25-69 years) were examined with MRI (T2-weighted images and T2-maps). Each IVD was semi-automatically segmented on three mid-sagittal slices. Histogram features of the IVD were extracted from the defined regions of interest and correlated to Pfirrmann grade. Both T2-weighted images and T2-maps displayed similar histogram features. Histograms of well-hydrated IVDs displayed two separate peaks, representing annulus fibrosus and nucleus pulposus. Degenerated IVDs displayed decreased peak separation, where the separation was shown to correlate strongly with Pfirrmann grade (P < 0.05). In addition, some degenerated IVDs within the same Pfirrmann grade displayed diametrically different histogram appearances. Histogram features correlated well with IVD degeneration, suggesting that IVD histogram analysis is a suitable tool for objective and continuous IVD degeneration classification. As histogram analysis revealed IVD heterogeneity, it may be a clinical tool for characterization of regional IVD degeneration effects. To elucidate the usefulness of histogram analysis in patient management, IVD histogram features between asymptomatic and symptomatic individuals need to be compared.
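The peak-separation feature can be sketched on synthetic T2 values: a well-hydrated disc yields two histogram modes whose distance shrinks with degeneration. The two-Gaussian pixel model and the simple guarded-argmax peak finder below are illustrative assumptions, not the study's segmentation or feature definitions.

```python
import numpy as np

def peak_separation(values, bins=40, guard=5):
    """Distance between the two dominant histogram peaks; the second peak is
    the tallest bin outside a guard band around the first."""
    hist, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    i1 = int(np.argmax(hist))
    masked = hist.copy()
    masked[max(0, i1 - guard):i1 + guard + 1] = 0
    i2 = int(np.argmax(masked))
    return abs(float(centers[i2] - centers[i1]))

rng = np.random.default_rng(0)
healthy = np.concatenate([rng.normal(60, 5, 2000),      # annulus T2 pool (ms)
                          rng.normal(120, 10, 2000)])   # nucleus T2 pool (ms)
degenerated = np.concatenate([rng.normal(70, 8, 2000),
                              rng.normal(85, 8, 2000)])  # pools converging
print(peak_separation(healthy) > peak_separation(degenerated))  # True
```

The feature is continuous, which is the point of the paper: it grades degeneration on a scale rather than into a few Pfirrmann bins.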
Development of High Resolution Eddy Current Imaging Using an Electro-Mechanical Sensor (Postprint)
2011-08-01
A Content Analysis of Television Ads: Does Current Practice Maximize Cognitive Processing?
2008-12-11
ads with arousing content such as sexual imagery and fatty/sweet food imagery have the potential to stress the cognitive processing system. When the...to examine differences in content arousal, this study included variables shown to elicit arousal—loved brands, sexual images, and fatty/sweet food...loved brands as well as ads with sexual and fatty/food images are not all the same—they are not likely to be equally arousing. Initially, brands were
Remote Sensing and Imaging Physics
2012-03-07
[Briefing excerpt] Model analysis process: a wire-frame shape model with assumed a priori knowledge; no material BRDF library employed in retrieval. ... imaging estimation problems: allows properties of local maxima to be derived from the Kolmogorov model of atmospheric turbulence (each speckle ...)
Note: Eddy current displacement sensors independent of target conductivity.
Wang, Hongbo; Li, Wei; Feng, Zhihua
2015-01-01
Eddy current sensors (ECSs) are widely used for non-contact displacement measurement. In this note, the quantitative error of an ECS caused by target conductivity was analyzed using a complex image method. The response curves (L-x) of the ECS with different targets were similar and could be overlapped by shifting the curves along the x direction by √2δ/2. Both finite element analysis and experiments match well with the theoretical analysis, which indicates that the measured error of high-precision ECSs caused by target conductivity can be completely eliminated, and the ECSs can measure different materials precisely without calibration.
Electromigration Mechanism of Failure in Flip-Chip Solder Joints Based on Discrete Void Formation.
Chang, Yuan-Wei; Cheng, Yin; Helfen, Lukas; Xu, Feng; Tian, Tian; Scheel, Mario; Di Michiel, Marco; Chen, Chih; Tu, King-Ning; Baumbach, Tilo
2017-12-20
In this investigation, SnAgCu and SN100C solders were electromigration (EM) tested, and the 3D laminography imaging technique was employed for in-situ observation of the microstructure evolution during testing. We found that discrete voids nucleate, grow and coalesce along the intermetallic compound/solder interface during EM testing. A systematic analysis yields quantitative information on the number, volume, and growth rate of voids, and the EM parameter DZ*. We observe that fast intrinsic diffusion in SnAgCu solder causes void growth and coalescence, while in the SN100C solder this coalescence was not significant. To deduce the current density distribution, finite-element models were constructed on the basis of the laminography images. The discrete voids do not change the global current density distribution, but they induce local current crowding around the voids: this local current crowding enhances the lateral void growth and coalescence. The correlation between the current density and the probability of void formation indicates that a threshold current density exists for the activation of void formation. There is a significant increase in the probability of void formation when the current density exceeds half of the maximum value.
Directional spatial frequency analysis of lipid distribution in atherosclerotic plaque
NASA Astrophysics Data System (ADS)
Korn, Clyde; Reese, Eric; Shi, Lingyan; Alfano, Robert; Russell, Stewart
2016-04-01
Atherosclerosis is characterized by the growth of fibrous plaques due to the retention of cholesterol and lipids within the artery wall, which can lead to vessel occlusion and cardiac events. One way to evaluate arterial disease is to quantify the amount of lipid present in these plaques, since a higher disease burden is characterized by a higher concentration of lipid. Although therapeutic stimulation of reverse cholesterol transport to reduce cholesterol deposits in plaque has not produced significant results, this may be due to current image analysis methods which use averaging techniques to calculate the total amount of lipid in the plaque without regard to spatial distribution, thereby discarding information that may have significance in marking response to therapy. Here we use Directional Fourier Spatial Frequency (DFSF) analysis to generate a characteristic spatial frequency spectrum for atherosclerotic plaques from C57 Black 6 mice both treated and untreated with a cholesterol scavenging nanoparticle. We then use the Cauchy product of these spectra to classify the images with a support vector machine (SVM). Our results indicate that treated plaque can be distinguished from untreated plaque using this method, where no difference is seen using the spatial averaging method. This work has the potential to increase the effectiveness of current in-vivo methods of plaque detection that also use averaging methods, such as laser speckle imaging and Raman spectroscopy.
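A directional spatial-frequency spectrum of the kind used above can be sketched by sampling the 2D Fourier power of an image along one direction through the origin. The stripe image below is synthetic, and the radial sampling scheme is a simplified stand-in for the DFSF method, not its published definition.

```python
import numpy as np

def directional_spectrum(img, angle_deg, n=16):
    """Sample the 2D power spectrum along one direction from the origin."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    cy, cx = f.shape[0] // 2, f.shape[1] // 2
    th = np.deg2rad(angle_deg)
    r = np.arange(1, n + 1)
    ys = np.clip(np.round(cy + r * np.sin(th)).astype(int), 0, f.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(th)).astype(int), 0, f.shape[1] - 1)
    return f[ys, xs]

x = np.arange(64)
stripes = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))  # variation along x only
horiz = directional_spectrum(stripes, 0)   # along kx: hits the stripe peak
vert = directional_spectrum(stripes, 90)   # along ky: essentially empty
print(horiz.max() > 1e5, vert.max() < 1e-6)  # True True
```

A full analysis would repeat this over many angles and compare the per-angle spectra (e.g. via their Cauchy product) before handing them to the classifier.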
Nonlinear, non-stationary image processing technique for eddy current NDE
NASA Astrophysics Data System (ADS)
Yang, Guang; Dib, Gerges; Kim, Jaejoon; Zhang, Lu; Xin, Junjun; Udpa, Lalita
2012-05-01
Automatic analysis of eddy current (EC) data has facilitated the analysis of large volumes of data generated in the inspection of steam generator tubes in nuclear power plants. The traditional procedure for analysis of EC data includes data calibration, pre-processing, region of interest (ROI) detection, feature extraction and classification. Accurate ROI detection has been enhanced by pre-processing, which involves reducing noise and other undesirable components as well as enhancing defect indications in the raw measurement. This paper presents the Hilbert-Huang Transform (HHT) for feature extraction and a support vector machine (SVM) for classification. The performance is shown to be significantly better than that of the existing rule-based classification approach used in industry.
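The shape of the pipeline (feature extraction followed by classification) can be sketched as below. Both pieces are deliberate stand-ins and labeled as such: crude energy/peak/zero-crossing features replace the Hilbert-Huang features, and a nearest-centroid rule replaces the SVM; the signals are synthetic.

```python
import numpy as np

def features(sig):
    """Crude segment features: energy, peak amplitude, zero-crossing rate."""
    zc = float(np.mean(np.abs(np.diff(np.sign(sig)))) / 2)
    return np.array([np.mean(sig ** 2), np.max(np.abs(sig)), zc])

def fit_centroids(X, y):
    """One mean feature vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(cent, x):
    """Assign the class with the nearest centroid (stand-in for the SVM)."""
    return min(cent, key=lambda c: np.linalg.norm(x - cent[c]))

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
defect = [np.sin(40 * t) * np.exp(-8 * (t - 0.5) ** 2)
          + 0.05 * rng.normal(size=200) for _ in range(10)]   # burst-like flaw
clean = [0.05 * rng.normal(size=200) for _ in range(10)]      # noise only
X = np.array([features(s) for s in defect + clean])
y = np.array([1] * 10 + [0] * 10)
cent = fit_centroids(X, y)
acc = float(np.mean([classify(cent, x) == c for x, c in zip(X, y)]))
print(acc >= 0.9)  # True: the toy classes separate cleanly
```

Swapping the stand-ins for HHT intrinsic-mode features and a trained SVM changes the two functions, not the pipeline structure.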
Advanced image based methods for structural integrity monitoring: Review and prospects
NASA Astrophysics Data System (ADS)
Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.
2018-02-01
There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiments. The current review manuscript describes advanced image-based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.
Multispectral image fusion for target detection
NASA Astrophysics Data System (ADS)
Leviner, Marom; Maltz, Masha
2009-09-01
Various methods to perform multi-spectral image fusion have been suggested, mostly on the pixel level. However, the jury is still out on the benefits of a fused image compared to its source images. We present here a new multi-spectral image fusion method, multi-spectral segmentation fusion (MSSF), which uses a feature level processing paradigm. To test our method, we compared human observer performance in an experiment using MSSF against two established methods, averaging and Principal Components Analysis (PCA), and against its two source bands, visible and infrared. The task that we studied was target detection in a cluttered environment. MSSF proved superior to the other fusion methods. Based on these findings, current speculation about the circumstances in which multi-spectral image fusion in general and specific fusion methods in particular would be superior to using the original image sources can be further addressed.
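The PCA baseline can be sketched at the pixel level: each pixel's (visible, infrared) pair is projected onto the first principal component of the two-band covariance, so the fused band keeps as much of the joint variance as a single image can. The two synthetic bands below are assumptions for illustration.

```python
import numpy as np

def pca_fuse(a, b):
    """Fuse two registered bands by projecting each pixel's (a, b) pair onto
    the first principal component of the two-band covariance."""
    X = np.stack([a.ravel(), b.ravel()], axis=1).astype(float)
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / len(Xc)
    vals, vecs = np.linalg.eigh(cov)
    w = vecs[:, np.argmax(vals)]          # first principal component
    return (Xc @ w).reshape(a.shape), w

rng = np.random.default_rng(3)
visible = rng.normal(size=(8, 8))
infrared = 2.0 * visible + 0.1 * rng.normal(size=(8, 8))  # correlated band
fused, w = pca_fuse(visible, infrared)
var_ratio = fused.var() / (visible.var() + infrared.var())
print(var_ratio > 0.95)  # True: the fused band keeps nearly all the variance
```

MSSF, by contrast, operates on segmented features rather than raw pixel pairs, which is the distinction the experiment tests.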
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, W; Wang, J; Zhang, H
Purpose: To review the literature on using computerized PET/CT image analysis for the evaluation of tumor response to therapy. Methods: We reviewed and summarized more than 100 papers that used computerized image analysis techniques for the evaluation of tumor response with PET/CT. This review mainly covered four aspects: image registration, tumor segmentation, image feature extraction, and response evaluation. Results: Although rigid image registration is straightforward, it has been shown to achieve good alignment between baseline and evaluation scans. Deformable image registration has been shown to improve the alignment when complex deformable distortions occur due to tumor shrinkage, weight loss or gain, and motion. Many semi-automatic tumor segmentation methods have been developed on PET. A comparative study revealed benefits of high levels of user interaction with simultaneous visualization of CT images and PET gradients. On CT, semi-automatic methods have been developed only for tumors that show a marked difference in CT attenuation between the tumor and the surrounding normal tissues. Quite a few multi-modality segmentation methods have been shown to improve accuracy compared to single-modality algorithms. Advanced PET image features that consider spatial information, such as tumor volume, tumor shape, total glycolytic volume, histogram distance, and texture features, have been found more informative than the traditional SUVmax for the prediction of tumor response. Advanced CT features, including volumetric, attenuation, morphologic, structure, and texture descriptors, have also been found advantageous over the traditional RECIST and WHO criteria in certain tumor types. Predictive models based on machine learning techniques have been constructed to correlate selected image features with response. These models showed improved performance compared to current methods that use a cutoff value of a single measurement for tumor response.
Conclusion: This review showed that computerized PET/CT image analysis holds great potential to improve the accuracy of tumor response evaluation. This work was supported in part by the National Cancer Institute Grant R01CA172638.
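The PET feature families named in the review (SUV statistics, total glycolytic volume, histogram distance) can be sketched in a few lines. A minimal NumPy illustration follows; the helper names `suv_features` and `histogram_distance`, the unit-voxel TLG approximation and the SUV range are our own illustrative assumptions, not taken from any of the reviewed papers.

```python
import numpy as np

def suv_features(suv_volume):
    """A few spatial-information PET features from a tumour SUV volume."""
    v = np.asarray(suv_volume, dtype=float)
    return {
        "suv_max": float(v.max()),
        "suv_mean": float(v.mean()),
        "tlg": float(v.mean() * v.size),   # total lesion glycolysis, unit voxel volume
    }

def histogram_distance(baseline_suv, followup_suv, bins=32, value_range=(0.0, 20.0)):
    """L1 distance between normalised SUV histograms of two scans."""
    h1, _ = np.histogram(baseline_suv, bins=bins, range=value_range)
    h2, _ = np.histogram(followup_suv, bins=bins, range=value_range)
    return float(np.abs(h1 / h1.sum() - h2 / h2.sum()).sum())
```

Comparing the baseline histogram with a follow-up histogram then gives a single scalar change measure, in the spirit of the histogram-distance features the review describes.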
Ultrafast current imaging by Bayesian inversion
Somnath, Suhas; Law, Kody J. H.; Morozovska, Anna; Maksymovych, Petro; Kim, Yunseok; Lu, Xiaoli; Alexe, Marin; Archibald, Richard K; Kalinin, Sergei V; Jesse, Stephen; Vasudevan, Rama K
2016-01-01
Spectroscopic measurements of current-voltage curves in scanning probe microscopy are the earliest and among the most common methods for characterizing local energy-dependent electronic properties, providing insight into superconductive, semiconductor, and memristive behaviors. However, the quasistatic nature of these measurements renders them extremely slow. Here, we demonstrate a fundamentally new approach for dynamic spectroscopic current imaging via full information capture and Bayesian inference analysis. This "general-mode I-V" method allows rates three orders of magnitude faster than presently possible. The technique is demonstrated by acquiring I-V curves in ferroelectric nanocapacitors, yielding >100,000 I-V curves in <20 minutes. This allows detection of switching currents in the nanoscale capacitors, as well as determination of the dielectric constant. These experiments show the potential of full information capture and Bayesian inference for extracting physics from rapid I-V measurements, and the approach can be used for transport measurements in both atomic force and scanning tunneling microscopy. The data were analyzed using pycroscopy - an open-source python package available at https://github.com/pycroscopy/pycroscopy
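As a toy analogue of the Bayesian inference step, a conjugate Gaussian model gives a closed-form posterior over polynomial I-V coefficients. This sketch is not the authors' general-mode method; `bayesian_iv_fit` and its noise/prior variances are hypothetical assumptions for illustration.

```python
import numpy as np

def bayesian_iv_fit(v, i, degree=3, noise_var=1e-4, prior_var=1.0):
    """Closed-form Gaussian posterior over polynomial I-V coefficients.

    Bayesian linear regression with a zero-mean isotropic Gaussian prior:
    posterior covariance A^-1 and mean A^-1 X'i / noise_var.
    """
    X = np.vander(v, degree + 1)                      # powers of voltage, highest first
    A = X.T @ X / noise_var + np.eye(degree + 1) / prior_var
    cov = np.linalg.inv(A)                            # posterior covariance
    mean = cov @ (X.T @ i) / noise_var                # posterior mean
    return mean, cov
```

The posterior covariance quantifies the uncertainty of each coefficient, which is the kind of information a point estimate from a quasistatic sweep does not provide.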
Current and future trends in marine image annotation software
NASA Astrophysics Data System (ADS)
Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.
2016-12-01
Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation - the process of transposing objects or events represented in a video or still image to the semantic level - may involve human interactions and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically graphical user interfaces with a video player or image browser that recognizes a specific time code or image code, allowing users to log events in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software in their capability of integrating data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS have three main characteristics: annotating events in real time, annotating after acquisition, and interacting with a database. These tools range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow users to input and display data from multiple sensors or multiple annotators via intranet or internet. Post hoc human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation and point counting, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post processing, for stable platforms or still images.
Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed and future trends are presented.
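A minimal sketch of the time-stamped, optionally geo-referenced event log that such annotation interfaces maintain. The `AnnotationEvent`/`DiveLog` schema below is our own illustrative construction, not taken from any of the 23 reviewed tools.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class AnnotationEvent:
    """One time-stamped, optionally geo-referenced annotation."""
    timecode: float                # seconds into the video
    label: str                     # e.g. taxon or event name
    lat: Optional[float] = None    # platform position, if available
    lon: Optional[float] = None

@dataclass
class DiveLog:
    """A queryable collection of annotation events for one survey."""
    events: List[AnnotationEvent] = field(default_factory=list)

    def log(self, timecode, label, lat=None, lon=None):
        self.events.append(AnnotationEvent(timecode, label, lat, lon))

    def query(self, label):
        return [e for e in self.events if e.label == label]
```

A database-backed tool adds persistence and cross-survey joins on top of essentially this structure.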
On an image reconstruction method for ECT
NASA Astrophysics Data System (ADS)
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image produced by eddy current testing (ECT) is blurred relative to the original flaw shape. In order to reconstruct a finer flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method based on deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200x200x10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc). These data were analyzed by the proposed method. The proposed method restored a sharp discrete multiple-hole image from data in which multiple holes interfered, and the estimated width of the line flaw was much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for holes and line flaws has been shown by many results in which a much finer image than the original was reconstructed.
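The convolution assumption above implies a direct frequency-domain inverse. A minimal 1-D sketch with a Wiener-style regularizer follows; the stabilizing constant `eps` and the circular-convolution setting are our assumptions, not necessarily the paper's implementation.

```python
import numpy as np

def deconvolve(measured, psf, eps=1e-3):
    """Recover the source from measured = source (*) psf (circular convolution).

    eps regularizes the division where the PSF spectrum is near zero.
    """
    n = len(measured)
    H = np.fft.fft(psf, n)          # PSF spectrum, zero-padded to n
    Y = np.fft.fft(measured, n)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(X))
```

For a point-like flaw blurred by a smooth PSF, the recovered signal concentrates back at the flaw location, which is the sharpening effect the abstract describes for holes and notches.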
VoxelStats: A MATLAB Package for Multi-Modal Voxel-Wise Brain Image Analysis.
Mathotaarachchi, Sulantha; Wang, Seqian; Shin, Monica; Pascoal, Tharick A; Benedet, Andrea L; Kang, Min Su; Beaudry, Thomas; Fonov, Vladimir S; Gauthier, Serge; Labbe, Aurélie; Rosa-Neto, Pedro
2016-01-01
In healthy individuals, behavioral outcomes are highly associated with variability in regional brain structure or neurochemical phenotypes. Similarly, in the context of neurodegenerative conditions, neuroimaging reveals that cognitive decline is linked to the magnitude of atrophy, neurochemical declines, or concentrations of abnormal protein aggregates across brain regions. However, modeling the effects of multiple regional abnormalities as determinants of cognitive decline at the voxel level remains largely unexplored by multimodal imaging research, given the high computational cost of estimating regression models for every single voxel from various imaging modalities. VoxelStats is a voxel-wise framework developed to overcome these computational limitations and to perform statistical operations on multiple scalar variables and imaging modalities at the voxel level. The VoxelStats package has been developed in Matlab(®) and supports imaging formats such as Nifti-1, ANALYZE, and MINC v2. Prebuilt functions in VoxelStats enable the user to perform voxel-wise general and generalized linear models and mixed effects models with multiple volumetric covariates. Importantly, VoxelStats can recognize scalar values or image volumes as response variables and can accommodate volumetric statistical covariates as well as their interaction effects with other variables. Furthermore, this package includes built-in functionality to perform voxel-wise receiver operating characteristic analysis and paired and unpaired group contrast analysis. Validation of VoxelStats was conducted by comparing its linear regression functionality with existing toolboxes such as glim_image and RMINC. The validation results were identical to those of existing methods, and the additional functionality was demonstrated by generating feature case assessments (t-statistics, odds ratio, and true positive rate maps).
In summary, VoxelStats expands the current methods for multimodal imaging analysis by allowing the estimation of advanced regional association metrics at the voxel level.
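The computational trick behind voxel-wise frameworks of this kind is to fit all voxel regressions in one linear-algebra call rather than looping. A minimal NumPy sketch follows (VoxelStats itself is Matlab; `voxelwise_lm` is our own illustrative helper).

```python
import numpy as np

def voxelwise_lm(X, Y):
    """Ordinary least squares fitted independently at every voxel, in one call.

    X : (n_subjects, n_covariates) design matrix
    Y : (n_subjects, n_voxels) response, one column per voxel
    Returns beta (n_covariates, n_voxels) and per-voxel t-statistics.
    """
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)      # all voxels at once
    resid = Y - X @ beta
    sigma2 = (resid ** 2).sum(axis=0) / (n - p)       # residual variance per voxel
    se = np.sqrt(np.outer(np.diag(np.linalg.inv(X.T @ X)), sigma2))
    return beta, beta / se
```

Because the design matrix is shared across voxels, one `lstsq` solve replaces hundreds of thousands of per-voxel fits; this is the vectorization that makes voxel-level modeling tractable.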
Gilhodes, Jean-Claude; Julé, Yvon; Kreuz, Sebastian; Stierstorfer, Birgit; Stiller, Detlef; Wollin, Lutz
2017-01-01
Current literature on pulmonary fibrosis induced in animal models highlights the need for an accurate, reliable and reproducible quantitative histological analysis. One of the major limits of histological scoring is that it is observer-dependent and consequently subject to variability, which may preclude comparative studies between different laboratories. To achieve a reliable and observer-independent quantification of lung fibrosis, we developed automated software for histological image analysis performed on digital images of entire lung sections. This automated analysis was compared to standard evaluation methods with regard to its validation as an end-point measure of fibrosis. Lung fibrosis was induced in mice by intratracheal administration of bleomycin (BLM) at 0.25, 0.5, 0.75 and 1 mg/kg. A detailed characterization of BLM-induced fibrosis was performed 14 days after BLM administration using lung function testing, micro-computed tomography and Ashcroft scoring analysis. Quantification of fibrosis by automated analysis was assessed based on pulmonary tissue density measured from thousands of micro-tiles processed from digital images of entire lung sections. Prior to analysis, large bronchi and vessels were manually excluded from the original images. Fibrosis was expressed by two indexes: the mean pulmonary tissue density and the high pulmonary tissue density frequency. We showed that the tissue density indexes gave access to a very accurate and reliable quantification of morphological changes induced by BLM, even at the lowest concentration used (0.25 mg/kg). A reconstructed 2D image of the entire lung section at high resolution (3.6 μm/pixel) was generated from the tissue density values, allowing visualization of their distribution throughout fibrotic and non-fibrotic regions. A significant correlation (p<0.0001) was found between the automated analysis and the above standard evaluation methods.
This correlation establishes automated analysis as a novel end-point measure of BLM-induced lung fibrosis in mice, which will be very valuable for future preclinical drug explorations. PMID:28107543
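The two tissue-density indexes can be sketched directly from micro-tiles of a section image. In the sketch below the tile size, the tissue threshold and the high-density cutoff are illustrative placeholders, not the study's values.

```python
import numpy as np

def tile_density_indexes(image, tile=32, tissue_threshold=0.5, high_density=0.8):
    """Mean tissue density and high-density tile frequency from a section image.

    image: 2D array scaled to [0, 1], with 1 = stained tissue.
    """
    h, w = image.shape
    densities = []
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            patch = image[r:r + tile, c:c + tile]
            densities.append((patch > tissue_threshold).mean())  # tissue fraction per tile
    densities = np.array(densities)
    return float(densities.mean()), float((densities > high_density).mean())
```

Mapping each tile's density back to its position is what produces the reconstructed 2D density image mentioned in the abstract.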
NASA Astrophysics Data System (ADS)
Fritz, Hermann M.; Phillips, David A.; Okayasu, Akio; Shimozono, Takenori; Liu, Haijiang; Takeda, Seiichi; Mohammed, Fahad; Skanavis, Vassilis; Synolakis, Costas E.; Takahashi, Tomoyuki
2013-04-01
The March 11, 2011, magnitude Mw 9.0 earthquake off the Tohoku coast of Japan caused catastrophic damage and loss of life to a tsunami-aware population. The mid-afternoon tsunami arrival, combined with survivors equipped with cameras on top of vertical evacuation buildings, provided fragmented, spatially and temporally resolved inundation recordings. This report focuses on the surveys at 9 tsunami eyewitness video recording locations in Miyako, Kamaishi, Kesennuma and Yoriisohama along Japan's Sanriku coast and the subsequent video image calibration, processing, tsunami hydrograph and flow velocity analysis. Selected tsunami video recording sites were explored, eyewitnesses interviewed and some ground control points recorded during the initial tsunami reconnaissance in April 2011. A follow-up survey in June 2011 focused on terrestrial laser scanning (TLS) at locations with high-quality eyewitness videos. We acquired precise topographic data using TLS at the video sites, producing a 3-dimensional "point cloud" dataset. A camera mounted on the Riegl VZ-400 scanner yields photorealistic 3D images, and integrated GPS measurements allow accurate georeferencing. The original video recordings were recovered from eyewitnesses and the Japanese Coast Guard (JCG). The analysis of the tsunami videos follows an adapted four-step procedure originally developed for the analysis of 2004 Indian Ocean tsunami videos at Banda Aceh, Indonesia (Fritz et al., 2006). The first step requires the calibration of the sector of view present in the eyewitness video recording based on ground control points measured in the LiDAR data. In a second step, the video image motion induced by the panning of the video camera was determined from subsequent images by particle image velocimetry (PIV) applied to fixed objects. The third step involves the transformation of the raw tsunami video images from image coordinates to world coordinates with a direct linear transformation (DLT) procedure.
Finally, the instantaneous tsunami surface current and flooding velocity vector maps are determined by applying the digital PIV analysis method to the rectified tsunami video images with floating debris clusters. Tsunami currents up to 11 m/s were measured in Kesennuma Bay, making navigation impossible (Fritz et al., 2012). Tsunami hydrographs are derived from the videos based on water surface elevations at surface-piercing objects identified in the acquired topographic TLS data. Apart from a dominant tsunami crest, the hydrograph at Kamaishi also reveals a subsequent drawdown to minus 10 m, exposing the harbor bottom. In some cases ship moorings resisted the main tsunami crest only to be broken by the extreme drawdown, setting vessels adrift for hours. Furthermore, we discuss the complex effects of coastal structures on inundation and outflow hydrographs and flow velocities. Lastly, a perspective on the recovery and reconstruction process is provided based on numerous revisits of identical sites between April 2011 and July 2012.
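The PIV step estimates the displacement between successive frames from the cross-correlation peak of an interrogation window. A minimal integer-pixel sketch using FFT cross-correlation follows; sub-pixel peak fitting and window overlap, which real PIV uses, are omitted, and `piv_displacement` is our own illustrative helper.

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Integer-pixel displacement of frame_b relative to frame_a.

    Computed from the peak of the circular FFT cross-correlation of one
    interrogation window.
    """
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap circular shifts into the signed range [-n/2, n/2]
    return tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape))
```

Dividing the displacement by the frame interval, after the DLT rectification to world coordinates, yields the surface current vectors reported in the study.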
NASA Astrophysics Data System (ADS)
Goss, Tristan M.
2016-05-01
With 640x512 pixel format IR detector arrays having been on the market for the past decade, Standard Definition (SD) thermal imaging sensors have been developed and deployed across the world. Now, with 1280x1024 pixel format IR detector arrays becoming readily available, designers of thermal imager systems face new challenges as pixel sizes shrink and the demand for, and applications of, High Definition (HD) thermal imaging sensors increase. In many instances, upgrading an existing under-sampled SD thermal imaging sensor into a more optimally sampled or oversampled HD thermal imaging sensor is a more cost-effective and faster-to-market option than designing and developing a completely new sensor. This paper presents the analysis and rationale behind the selection of the best-suited HD pixel format MWIR detector for the upgrade of an existing SD thermal imaging sensor to a higher-performing HD thermal imaging sensor. Several commercially available and "soon to be" commercially available HD small-pixel IR detector options are included in the analysis and considered for this upgrade. The impact the proposed detectors have on the sensor's overall sensitivity, noise and resolution is analyzed, and the improved range performance is predicted. Furthermore, with reduced dark currents due to the smaller pixel sizes, the candidate HD MWIR detectors can be operated at higher temperatures than their SD predecessors. Therefore, as an additional constraint and design goal, the feasibility of achieving the upgraded performance without any increase in the size, weight and power consumption of the thermal imager is discussed herein.
[Analysis of body image perception of university students in navarra].
Soto Ruiz, Ma Nelia; Marin Fernández, Blanca; Aguinaga Ontoso, Inés; Guillén-Grima, Francisco; Serrano Mozó, Inmaculada; Canga Armayor, Navidad; Hermoso de Mendoza Cantón, Juana; Stock, Christiane; Kraemer, Alexander; Annan, James
2015-05-01
Current models of beauty represent extreme thinness in women and a muscular body in men. Body image perception conditions the pursuit of this ideal of beauty through different behaviors and can develop into eating disorders. University students, facing the changes typical of youth and the transition to university, are a vulnerable group. The purpose of this study was to evaluate the body image perception of university students in Navarra. The study included 1162 subjects, of which 64.2% were female. Students completed a self-administered questionnaire, and their weight and height were measured to calculate the body mass index (BMI). Their body image perception was obtained by asking the students to select, from nine different silhouettes for men and women, the picture which corresponded to their perceived current body image. Their measured BMI was then compared with their perceived BMI. 43.03% of students overestimated their body image (10.65% of males and 59.69% of females) and 10.20% underestimated it. For 46.75% of students the BMI and the body image perception were concordant. Alterations in body image perception were more frequent in women. In general, women saw themselves as fatter than they really were, while men saw themselves as thinner than they really were. The results show that women were more worried about their weight and body image than men. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
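The comparison of measured and perceived body size rests on the BMI formula, BMI = weight / height². A minimal sketch follows; the standard WHO adult cut-offs are used, and the `perception_bias` helper mapping silhouette categories to an over/underestimation score is our own illustrative construction, not the study's scoring.

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(b):
    """Standard WHO adult BMI categories."""
    if b < 18.5:
        return "underweight"
    if b < 25.0:
        return "normal"
    if b < 30.0:
        return "overweight"
    return "obese"

ORDER = ("underweight", "normal", "overweight", "obese")

def perception_bias(perceived_category, actual_category):
    """Positive = overestimation (perceives self fatter than measured)."""
    return ORDER.index(perceived_category) - ORDER.index(actual_category)
```

Counting subjects with positive, zero and negative bias reproduces the over-, concordant- and underestimation percentages the study reports.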
Supervised learning of tools for content-based search of image databases
NASA Astrophysics Data System (ADS)
Delanoy, Richard L.
1996-03-01
A computer environment, called the Toolkit for Image Mining (TIM), is being developed with the goal of enabling users with diverse interests and varied computer skills to create search tools for content-based image retrieval and other pattern matching tasks. Search tools are generated using a simple paradigm of supervised learning that is based on the user pointing at mistakes of classification made by the current search tool. As mistakes are identified, a learning algorithm uses the identified mistakes to build up a model of the user's intentions, construct a new search tool, apply the search tool to a test image, display the match results as feedback to the user, and accept new inputs from the user. Search tools are constructed in the form of functional templates, which are generalized matched filters capable of knowledge-based image processing. The ability of this system to learn the user's intentions from experience contrasts with other existing approaches to content-based image retrieval that base searches on the characteristics of a single input example or on a predefined and semantically-constrained textual query. Currently, TIM is capable of learning spectral and textural patterns, but should be adaptable to the learning of shapes, as well. Possible applications of TIM include not only content-based image retrieval, but also quantitative image analysis, the generation of metadata for annotating images, data prioritization or data reduction in bandwidth-limited situations, and the construction of components for larger, more complex computer vision algorithms.
Method for stitching microbial images using a neural network
NASA Astrophysics Data System (ADS)
Semenishchev, E. A.; Voronin, V. V.; Marchuk, V. I.; Tolstova, I. V.
2017-05-01
Currently the analog microscope is widely used in the following fields: medicine, animal husbandry, monitoring of technological objects, oceanography, agriculture and others. An automatic method is preferred because it greatly reduces the work involved. Stepper motors are used to move the microscope slide and allow the focus to be adjusted in semi-automatic or automatic mode, while images of microbiological objects are transferred from the eyepiece of the microscope to the computer screen. Scene analysis makes it possible to locate regions with pronounced abnormalities so as to focus the specialist's attention. This paper considers a method for stitching microbial images obtained from a semi-automatic microscope. The method preserves the boundaries of objects located in the capture area of the optical system. Object search is based on the analysis of the data located in the camera's field of view. We propose to use a neural network for boundary searching. The stitching boundary is obtained from the analysis of the object borders. For auto-focus, we use the criterion of minimum thickness of an object's boundary lines, analyzing the object located on the focal axis of the camera. We use a method of object border recovery and a projective transform for the boundaries of objects that are shifted relative to the focal axis. Several examples considered in this paper show the effectiveness of the proposed approach on several test images.
Jeong, Y J; Oh, T I; Woo, E J; Kim, K J
2017-07-01
Recently, highly flexible and soft pressure distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied to reconstruct pressure distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from the boundary measurements of the fabric sensors.
Accumulation of electric currents driving jetting events in the solar atmosphere
NASA Astrophysics Data System (ADS)
Vargas Domínguez, S.; Guo, Y.; Demoulin, P.; Schmieder, B.; Ding, M.; Liu, Y.
2013-12-01
The solar atmosphere is populated with a wide variety of structures and phenomena at different spatial and temporal scales. Explosive phenomena are of particular interest due to their contribution to the atmosphere's energy budget and their implications, e.g. for coronal heating. Recent instrumental developments have provided important observations and therefore new insights for tracking the dynamic evolution of the solar atmosphere. Jets of plasma are frequently observed in the solar corona and are thought to be a consequence of magnetic reconnection; however, the physics involved is not fully understood. Unprecedented observations (EUV and vector magnetic fields) are used to study solar jetting events, from which we derive the magnetic flux evolution, the photospheric velocity field, and the vertical electric current evolution. The evolution of magnetic parasitic polarities displaying diverging flows is detected to trigger recurrent jets in a solar region on 17 September 2010. The interaction drives the build-up of electric currents, and the observed diverging flows are proposed to build such currents continuously. Magnetic reconnection is proposed to occur periodically in the current layer created between the emerging bipole and the large-scale active region field. [Figure: SDO/AIA EUV composite images. Upper: SDO/AIA 171 Å image overlaid with the line-of-sight magnetic field observed at the same time. Lower: map of photospheric transverse velocities derived from LCT analysis of the HMI magnetograms.]
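The vertical electric current density is obtained from the transverse components of a vector magnetogram via Ampère's law, J_z = (1/μ0)(∂B_y/∂x − ∂B_x/∂y). A minimal finite-difference sketch, assuming a regular pixel grid in image convention (axis 0 = y, axis 1 = x):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, H/m

def vertical_current_density(bx, by, dx, dy):
    """J_z = (1/mu0) (dBy/dx - dBx/dy) from a photospheric vector magnetogram.

    bx, by in tesla on a regular grid; dx, dy pixel sizes in metres.
    """
    dby_dx = np.gradient(by, dx, axis=1)   # x varies along axis 1
    dbx_dy = np.gradient(bx, dy, axis=0)   # y varies along axis 0
    return (dby_dx - dbx_dy) / MU0
```

Tracking maps like this over successive magnetograms is what reveals the build-up of currents between the emerging bipole and the active region field.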
NASA Astrophysics Data System (ADS)
Ichino, Shinya; Mawaki, Takezo; Teramoto, Akinobu; Kuroda, Rihito; Park, Hyeonwoo; Wakashima, Shunichi; Goto, Tetsuya; Suwa, Tomoyuki; Sugawa, Shigetoshi
2018-04-01
Random telegraph noise (RTN), which occurs in in-pixel source follower (SF) transistors, has become one of the most critical problems in high-sensitivity CMOS image sensors (CIS) because it is a limiting factor of dark random noise. In this paper, the behaviors of RTN toward changes in SF drain current conditions were analyzed using a low-noise array test circuit measurement system with a floor noise of 35 µV rms. In addition to statistical analysis by measuring a large number of transistors (18048 transistors), we also analyzed the behaviors of RTN parameters such as amplitude and time constants in the individual transistors. It is demonstrated that the appearance probability of RTN becomes small under a small drain current condition, although large-amplitude RTN tends to appear in a very small number of cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campbell, A.N.; Cole, E.I. Jr.; Dodd, B.A.
This invited paper describes recently reported work on the application of magnetic force microscopy (MFM) to image currents in IC conductors [1]. A computer model for MFM imaging of IC currents and experimental results demonstrating the ability to determine current direction and magnitude with a resolution of ~1 mA dc and ~1 µA ac are presented. The physics of MFM signal generation and applications to current imaging and measurement are described.
NASA Astrophysics Data System (ADS)
Liu, Limei; Trakic, Adnan; Sanchez-Lopez, Hector; Liu, Feng; Crozier, Stuart
2014-01-01
MRI-LINAC is a new image-guided radiotherapy treatment system that combines magnetic resonance imaging (MRI) with a linear accelerator (LINAC) in a single unit. One drawback is that the pulsing of the split gradient coils of the system induces an electric field and currents in the patient, which need to be predicted and evaluated for patient safety. In this novel numerical study, the in situ electric fields and associated current densities were evaluated inside tissue-accurate male and female human voxel models when a number of different split-geometry gradient coils were operated. The body models were located in the MRI-LINAC system along the axial and radial directions in three different body positions. Each model had a region of interest (ROI) suitable for image-guided radiotherapy. The simulation results show that the amplitudes and distributions of the field and current density induced by different split x-gradient coils were similar to one another in the ROI of the body model, but varied outside of the region. The fields and current densities induced by a split classic coil with the surface unconnected showed the largest deviation from those given by the conventional non-split coils. Another finding was that the distributions of the peak current densities varied when the body position, orientation or gender changed, while the peak electric fields mainly occurred in the skin and fat tissues.
Medical imaging: examples of clinical applications
NASA Astrophysics Data System (ADS)
Meinzer, H. P.; Thorn, M.; Vetter, M.; Hassenpflug, P.; Hastenteufel, M.; Wolf, I.
Clinical routine is currently producing a multitude of diagnostic digital images but only a few are used in therapy planning and treatment. Medical imaging is involved in both diagnosis and therapy. Using a computer, existing 2D images can be transformed into interactive 3D volumes and results from different modalities can be merged. Furthermore, it is possible to calculate functional areas that were not visible in the primary images. This paper presents examples of clinical applications that are integrated into clinical routine and are based on medical imaging fundamentals. In liver surgery, the importance of virtual planning is increasing because surgery is still the only possible curative procedure. Visualisation and analysis of heart defects are also gaining in significance due to improved surgery techniques. Finally, an outlook is provided on future developments in medical imaging using navigation to support the surgeon's work. The paper intends to give an impression of the wide range of medical imaging that goes beyond the mere calculation of medical images.
Quantitative analysis of brain magnetic resonance imaging for hepatic encephalopathy
NASA Astrophysics Data System (ADS)
Syh, Hon-Wei; Chu, Wei-Kom; Ong, Chin-Sing
1992-06-01
High intensity lesions around the ventricles have recently been observed in T1-weighted brain magnetic resonance images of patients suffering from hepatic encephalopathy. The exact etiology that causes magnetic resonance imaging (MRI) gray scale changes is not fully understood. The objective of our study was to investigate, through quantitative means, (1) the amount of change to brain white matter due to the disease process, and (2) the extent and distribution of these high intensity lesions, since it is believed that the abnormality may not be entirely limited to the white matter only. Eleven patients with proven hepatic encephalopathy and three normal persons without any evidence of liver abnormality constituted our current data base. Trans-axial, sagittal, and coronal brain MRI were obtained on a 1.5 Tesla scanner. All processing was carried out on a microcomputer-based image analysis system in an off-line manner. Histograms were decomposed into regular brain tissues and lesions. Gray scale ranges coded as lesion were then brought back to the original images to identify the distribution of abnormality. Our results indicated the disease process involved the pallidus, mesencephalon, and subthalamic regions.
High Throughput Multispectral Image Processing with Applications in Food Science.
Tsakanikas, Panagiotis; Pavlidis, Dimitris; Nychas, George-John
2015-01-01
Recently, machine vision has been gaining attention in food science as well as in the food industry concerning food quality assessment and monitoring. Within the framework of implementing Process Analytical Technology (PAT) in the food industry, image processing can be used not only in the estimation and even prediction of food quality but also in the detection of adulteration. Toward these applications in food science, we present here a novel methodology for automated image analysis of several kinds of food products, e.g. meat, vanilla crème and table olives, so as to increase objectivity, data reproducibility, low-cost information extraction and faster quality assessment, without human intervention. The outcome of image processing is then propagated to the downstream analysis. The developed multispectral image processing method is based on an unsupervised machine learning approach (Gaussian Mixture Models) and a novel unsupervised scheme of spectral band selection for segmentation process optimization. Through the evaluation we demonstrate its efficiency and robustness against the currently available semi-manual software, showing that the developed method is a high-throughput approach appropriate for massive data extraction from food samples.
Thali, M J; Dirnhofer, R; Becker, R; Oliver, W; Potter, K
2004-10-01
The study aimed to validate magnetic resonance microscopy (MRM) studies of forensic tissue specimens (skin samples with electric injury patterns) against the results from routine histology. Computed tomography and magnetic resonance imaging are fast becoming important tools in clinical and forensic pathology. This study is the first forensic application of MRM to the analysis of electric injury patterns in human skin. Three-dimensional high-resolution MRM images of fixed skin specimens provided a complete 3D view of the damaged tissues at the site of an electric injury as well as in neighboring tissues, consistent with histologic findings. The image intensity of the dermal layer in T2-weighted MRM images was reduced in the central zone due to carbonization or coagulation necrosis and increased in the intermediate zone because of dermal edema. A subjacent blood vessel with an intravascular occlusion supports the hypothesis that current traveled through the vascular system before arcing to ground. High-resolution imaging offers a noninvasive alternative to conventional histology in forensic wound analysis and can be used to perform 3D virtual histology.
Noncontact optical motion sensing for real-time analysis
NASA Astrophysics Data System (ADS)
Fetzer, Bradley R.; Imai, Hiromichi
1990-08-01
The adaptation of an image dissector tube (IDT) within the OPTFOLLOW system provides high resolution displacement measurement of a light discontinuity. Due to the high speed response of the IDT and the advanced servo loop circuitry, the system is capable of real time analysis of the object under test. The image of the discontinuity may be contoured by direct or reflected light and ranges spectrally within the field of visible light. The image is monitored to 500 kHz through a lens configuration which transposes the optical image upon the photocathode of the IDT. The photoelectric effect accelerates the resultant electrons through a photomultiplier and an enhanced current is emitted from the anode. A servo loop controls the electron beam, continually centering it within the IDT using magnetic focusing of deflection coils. The output analog voltage from the servo amplifier is thereby proportional to the displacement of the target. The system is controlled by a microprocessor with a 32kbyte memory and provides a digital display as well as instructional readout on a color monitor allowing for offset image tracking and automatic system calibration.
Imaging intratumor heterogeneity: role in therapy response, resistance, and clinical outcome.
O'Connor, James P B; Rose, Chris J; Waterton, John C; Carano, Richard A D; Parker, Geoff J M; Jackson, Alan
2015-01-15
Tumors exhibit genomic and phenotypic heterogeneity, which has prognostic significance and may influence response to therapy. Imaging can quantify the spatial variation in architecture and function of individual tumors through quantifying basic biophysical parameters such as CT density or MRI signal relaxation rate; through measurements of blood flow, hypoxia, metabolism, cell death, and other phenotypic features; and through mapping the spatial distribution of biochemical pathways and cell signaling networks using PET, MRI, and other emerging molecular imaging techniques. These methods can establish whether one tumor is more or less heterogeneous than another and can identify subregions with differing biology. In this article, we review the image analysis methods currently used to quantify spatial heterogeneity within tumors. We discuss how analysis of intratumor heterogeneity can provide benefit over more simple biomarkers such as tumor size and average function. We consider how imaging methods can be integrated with genomic and pathology data, instead of being developed in isolation. Finally, we identify the challenges that must be overcome before measurements of intratumoral heterogeneity can be used routinely to guide patient care. ©2014 American Association for Cancer Research.
Visual analytics for semantic queries of TerraSAR-X image content
NASA Astrophysics Data System (ADS)
Espinoza-Molina, Daniela; Alonso, Kevin; Datcu, Mihai
2015-10-01
With the continuous image product acquisition of satellite missions, the size of the image archives is increasing considerably every day, as are the variety and complexity of their content, surpassing the end-user's capacity to analyse and exploit them. Advances in the image retrieval field have contributed to the development of tools for interactive exploration and extraction of images from huge archives using different parameters like metadata, keywords, and basic image descriptors. Even though we count on more powerful tools for automated image retrieval and data analysis, we still face the problem of understanding and analyzing the results. Thus, a systematic computational analysis of these results is required in order to provide the end-user with a summary of the archive content in comprehensible terms. In this context, visual analytics combines automated analysis with interactive visualization techniques for effective understanding, reasoning and decision making on the basis of very large and complex datasets. Moreover, several current research efforts focus on associating the content of the images with semantic definitions that describe the data in a format easily understood by the end-user. In this paper, we present our approach for computing visual analytics and semantically querying the TerraSAR-X archive. Our approach is composed of four main steps: 1) the generation of a data model that explains the information contained in a TerraSAR-X product, formed by primitive descriptors and metadata entries; 2) the storage of this model in a database system; 3) the semantic definition of the image content based on machine learning algorithms and relevance feedback; and 4) querying the image archive using semantic descriptors as query parameters and computing the statistical analysis of the query results.
The experimental results show that, with the help of visual analytics and semantic definitions, we are able to explain the image content using semantic terms and the relations between them, answering questions such as "what is the percentage of urban area in a region?" or "what is the distribution of water bodies in a city?"
Kamalian, Shervin; Atkinson, Wendy L; Florin, Lauren A; Pomerantz, Stuart R; Lev, Michael H; Romero, Javier M
2014-06-01
Evaluation of the posterior fossa (PF) on 5-mm-thick helical CT images (the current default) has improved diagnostic accuracy compared to 5-mm sequential CT images; however, 5-mm-thick images may not be ideal for PF pathology due to volume averaging of rapid changes in anatomy in the Z-direction. Therefore, we sought to determine if routine review of 1.25-mm-thin helical CT images has superior accuracy in screening for nontraumatic PF pathology. MRI proof of diagnosis was obtained within 6 h of helical CT acquisition for 90 consecutive ED patients with, and 88 without, posterior fossa lesions. Helical CT images were post-processed at 1.25- and 5-mm-axial slice thickness. Two neuroradiologists blinded to the clinical/MRI findings reviewed both image sets. Interobserver agreement and accuracy were rated using Kappa statistics and ROC analysis, respectively. Of the 90/178 (51 %) who were MR positive, 60/90 (66 %) had stroke and 30/90 (33 %) had other etiologies. There was excellent interobserver agreement (κ > 0.97) for both thick and thin slice assessments. The accuracy, sensitivity, and specificity for 1.25-mm images were 65, 44, and 84 %, respectively, and for 5-mm images were 67, 45, and 85 %, respectively. The diagnostic accuracy was not significantly different (p > 0.5). In this cohort of patients with nontraumatic neurological symptoms referable to the posterior fossa, 1.25-mm-thin slice CT reformatted images do not have superior accuracy compared to 5-mm-thick images. This information has implications for optimizing resource utilization and efficiency in a busy emergency room. Review of 1.25-mm-thin images may improve diagnostic accuracy only when review of the default 5-mm-thick images is inconclusive.
NASA Technical Reports Server (NTRS)
Wiegman, E. J.; Evans, W. E.; Hadfield, R.
1975-01-01
Measurements of snow coverage during the snow-melt seasons of 1973 and 1974 from LANDSAT imagery are examined for three Columbia River subbasins. Satellite-derived snow cover inventories for the three test basins were obtained as an alternative to inventories performed with the current operational practice of using small aircraft flights over selected snow fields. The accuracy and precision versus cost of several different interactive image analysis procedures were investigated using a display device, the Electronic Satellite Image Analysis Console. Single-band radiance thresholding was the principal technique employed in the snow detection, although this technique was supplemented by an editing procedure involving reference to hand-generated elevation contours. For each date and view measured, a binary thematic map or "mask" depicting the snow cover was generated by a combination of objective and subjective procedures. Photographs of data analysis equipment (displays) are shown.
Review of the current state of whole slide imaging in pathology
Pantanowitz, Liron; Valenstein, Paul N.; Evans, Andrew J.; Kaplan, Keith J.; Pfeifer, John D.; Wilbur, David C.; Collins, Laura C.; Colgan, Terence J.
2011-01-01
Whole slide imaging (WSI), or “virtual” microscopy, involves the scanning (digitization) of glass slides to produce “digital slides”. WSI has been advocated for diagnostic, educational and research purposes. When used for remote frozen section diagnosis, WSI requires a thorough implementation period coupled with trained support personnel. Adoption of WSI for rendering pathologic diagnoses on a routine basis has been shown to be successful in only a few “niche” applications. Wider adoption will most likely require full integration with the laboratory information system, continuous automated scanning, high-bandwidth connectivity, massive storage capacity, and more intuitive user interfaces. Nevertheless, WSI has been reported to enhance specific pathology practices, such as scanning slides received in consultation or of legal cases, of slides to be used for patient care conferences, for quality assurance purposes, to retain records of slides to be sent out or destroyed by ancillary testing, and for performing digital image analysis. In addition to technical issues, regulatory and validation requirements related to WSI have yet to be adequately addressed. Although limited validation studies have been published using WSI there are currently no standard guidelines for validating WSI for diagnostic use in the clinical laboratory. This review addresses the current status of WSI in pathology related to regulation and validation, the provision of remote and routine pathologic diagnoses, educational uses, implementation issues, and the cost-benefit analysis of adopting WSI in routine clinical practice. PMID:21886892
Current Controversies in Diagnosis and Management of Cleft Palate and Velopharyngeal Insufficiency
Ysunza, Pablo Antonio; Repetto, Gabriela M.; Pamplona, Maria Carmen; Calderon, Juan F.; Shaheen, Kenneth; Chaiyasate, Konkgrit; Rontal, Matthew
2015-01-01
Background. One of the most controversial topics concerning cleft palate is the diagnosis and treatment of velopharyngeal insufficiency (VPI). Objective. This paper reviews current genetic aspects of cleft palate, imaging diagnosis of VPI, the planning of operations for restoring velopharyngeal function during speech, and strategies for speech pathology treatment of articulation disorders in patients with cleft palate. Materials and Methods. An updated review of the scientific literature concerning genetic aspects of cleft palate was carried out. Current strategies for assessing and treating articulation disorders associated with cleft palate were analyzed. Imaging procedures for assessing velopharyngeal closure during speech were reviewed, including a recent method for performing intraoperative videonasopharyngoscopy. Results. Conclusions from the analysis of genetic aspects of syndromic and nonsyndromic cleft palate and their use in its diagnosis and management are presented. Strategies for classifying and treating articulation disorders in patients with cleft palate are presented. Preliminary results of the use of multiplanar videofluoroscopy as an outpatient procedure and intraoperative endoscopy for the planning of operations which aimed to correct VPI are presented. Conclusion. This paper presents current aspects of the diagnosis and management of patients with cleft palate and VPI including 3 main aspects: genetics and genomics, speech pathology and imaging diagnosis, and surgical management. PMID:26273595
King, Andy J
2015-01-01
Researchers and practitioners have an increasing interest in visual components of health information and health communication messages. This study contributes to this evolving body of research by providing an account of the visual images and information featured in printed cancer communication materials. Using content analysis, 147 pamphlets and 858 images were examined to determine how frequently images are used in printed materials, what types of images are used, what information is conveyed visually, and whether or not current recommendations for the inclusion of visual content were being followed. Although visual messages were found to be common in printed health materials, existing recommendations about the inclusion of visual content were only partially followed. Results are discussed in terms of how relevant theoretical frameworks in the areas of behavior change and visual persuasion seem to be used in these materials, as well as how more theory-oriented research is necessary in visual messaging efforts.
Imaging energy landscapes with concentrated diffusing colloidal probes
NASA Astrophysics Data System (ADS)
Bahukudumbi, Pradipkumar; Bevan, Michael A.
2007-06-01
The ability to locally interrogate interactions between particles and energetically patterned surfaces provides essential information to design, control, and optimize template directed self-assembly processes. Although numerous techniques are capable of characterizing local physicochemical surface properties, no current method resolves interactions between colloids and patterned surfaces on the order of the thermal energy kT, which is the inherent energy scale of equilibrium self-assembly processes. Here, the authors describe video microscopy measurements and an inverse Monte Carlo analysis of diffusing colloidal probes as a means to image three dimensional free energy and potential energy landscapes due to physically patterned surfaces. In addition, they also develop a consistent analysis of self-diffusion in inhomogeneous fluids of concentrated diffusing probes on energy landscapes, which is important to the temporal imaging process and to self-assembly kinetics. Extension of the concepts developed in this work suggests a general strategy to image multidimensional and multiscale physical, chemical, and biological surfaces using a variety of diffusing probes (i.e., molecules, macromolecules, nanoparticles, and colloids).
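The core mapping from a measured probe-position distribution to an energy landscape is the Boltzmann relation U(x) = −kT ln p(x). The following is a minimal 1D sketch of that inversion (not the authors' full inverse Monte Carlo analysis; the histogramming choices and synthetic data are illustrative):

```python
import numpy as np

def boltzmann_invert(positions, bins=50, kT=1.0):
    """Estimate a 1D potential energy landscape (in units of kT)
    from sampled probe positions via U(x) = -kT * ln p(x)."""
    hist, edges = np.histogram(positions, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = hist > 0                      # avoid log(0) in empty bins
    u = -kT * np.log(hist[mask])
    u -= u.min()                         # reference the minimum to zero
    return centers[mask], u

# Synthetic probes equilibrated in a harmonic well: p(x) ~ exp(-x^2/2),
# so the recovered landscape should be roughly U(x) = x^2/2
rng = np.random.default_rng(0)
x, u = boltzmann_invert(rng.normal(0.0, 1.0, 200_000))
```

In practice, dilute-limit histograms like this are biased at high probe concentration, which is why the paper develops a dedicated inverse Monte Carlo analysis for concentrated probes.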
Nagel, S. R.; Benedetti, L. R.; Bradley, D. K.; ...
2016-08-05
The dilation x-ray imager (DIXI) is a high-speed x-ray framing camera that uses the pulse-dilation technique to achieve a temporal resolution of less than 10 ps. This is a 10× improvement over conventional framing cameras currently employed on the National Ignition Facility (NIF) (100 ps resolution), and otherwise only achievable with 1D streaked imaging. A side effect of the dramatically reduced gate width is the comparatively lower detected signal level. Therefore we implement a Poisson noise reduction with a non-local principal component analysis method to improve the robustness of the DIXI data analysis. Furthermore, we present results on ignition-relevant experiments at the NIF using DIXI. In particular we focus on establishing whether and when DIXI gives reliable shape metrics (P0, P2 and P4 Legendre modes, and their temporal evolution/swings).
Park, Chunjae; Kwon, Ohin; Woo, Eung Je; Seo, Jin Keun
2004-03-01
In magnetic resonance electrical impedance tomography (MREIT), we try to visualize cross-sectional conductivity (or resistivity) images of a subject. We inject electrical currents into the subject through surface electrodes and measure the z component Bz of the induced internal magnetic flux density using an MRI scanner. Here, z is the direction of the main magnetic field of the MRI scanner. We formulate the conductivity image reconstruction problem in MREIT from a careful analysis of the relationship between the injection current and the induced magnetic flux density Bz. Based on the novel mathematical formulation, we propose the gradient Bz decomposition algorithm to reconstruct conductivity images. This new algorithm needs to differentiate Bz only once, in contrast to the previously developed harmonic Bz algorithm, where the numerical computation of ∇²Bz is required. The new algorithm, therefore, has the important advantage of much improved noise tolerance. Numerical simulations with added random noise of realistic amounts show the feasibility of the algorithm in practical applications and also its robustness against measurement noise.
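The relationship such algorithms exploit can be sketched from standard magnetostatics (an illustrative derivation from Ampère's law, not reproduced from the paper): with ∇×B = μ₀J and ∇·B = 0,

```latex
\nabla^{2}\mathbf{B} \;=\; \nabla(\nabla\cdot\mathbf{B}) - \nabla\times(\nabla\times\mathbf{B}) \;=\; -\mu_{0}\,\nabla\times\mathbf{J},
\qquad\text{so}\qquad
\nabla^{2}B_{z} \;=\; \mu_{0}\left(\frac{\partial J_{x}}{\partial y} - \frac{\partial J_{y}}{\partial x}\right).
```

The harmonic Bz algorithm must evaluate the Laplacian ∇²Bz numerically, which amplifies measurement noise; working with only first derivatives of Bz, as the gradient Bz decomposition does, avoids that second differentiation.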
Optimization of exposure factors for X-ray radiography non-destructive testing of pearl oyster
NASA Astrophysics Data System (ADS)
Susilo; Yulianti, I.; Addawiyah, A.; Setiawan, R.
2018-03-01
One of the processes in pearl oyster cultivation is detecting the pearl nucleus to determine whether it is still attached inside the shell or has been vomited. The common tool used to detect the pearl nucleus is an X-ray machine. However, a conventional X-ray machine has the drawback that the energy used is higher than that used by digital radiography, and the high energy makes the resulting image difficult to analyse. One of the advantages of digital radiography is that the energy can be adjusted so that the resulting image can be analysed easily. To obtain a high-quality pearl image using digital radiography, the exposure factors should be optimized. In this work, optimization was done by varying the voltage, current, and exposure time. The radiography images were then analysed using the Contrast-to-Noise Ratio (CNR). From the analysis, the optimum exposure factors were determined to be a voltage of 60 kV, a current of 16 mA, and an exposure time of 0.125 s, which result in a CNR of 5.71.
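A common CNR definition used for this kind of image-quality scoring is (mean signal − mean background) / background noise; the function and synthetic ROI masks below are illustrative assumptions, not the paper's exact convention:

```python
import numpy as np

def contrast_to_noise(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between two regions of interest.
    Uses the common definition (mean_s - mean_b) / std_b; other
    CNR conventions (e.g. pooled noise) also exist."""
    signal = image[signal_mask]
    background = image[background_mask]
    return (signal.mean() - background.mean()) / background.std()

# Hypothetical radiograph: a nucleus region ~120 grey levels above a
# noisy background (sigma = 10), so CNR should come out near 12
rng = np.random.default_rng(1)
img = rng.normal(60.0, 10.0, (64, 64))
img[20:40, 20:40] += 120.0
sig = np.zeros((64, 64), bool)
sig[20:40, 20:40] = True
cnr = contrast_to_noise(img, sig, ~sig)
```

Sweeping voltage, current and exposure time and picking the setting with the highest CNR is the optimization loop the abstract describes.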
The Role of Laser Speckle Imaging in Port-Wine Stain Research: Recent Advances and Opportunities
Choi, Bernard; Tan, Wenbin; Jia, Wangcun; White, Sean M.; Moy, Wesley J.; Yang, Bruce Y.; Zhu, Jiang; Chen, Zhongping; Kelly, Kristen M.; Nelson, J. Stuart
2016-01-01
Here, we review our current knowledge on the etiology and treatment of port-wine stain (PWS) birthmarks. Current treatment options have significant limitations in terms of efficacy. With the combination of 1) a suitable preclinical microvascular model, 2) laser speckle imaging (LSI) to evaluate blood-flow dynamics, and 3) a longitudinal experimental design, rapid preclinical assessment of new phototherapies can be translated from the lab to the clinic. The combination of photodynamic therapy (PDT) and pulsed-dye laser (PDL) irradiation achieves a synergistic effect that reduces the required radiant exposures of the individual phototherapies to achieve persistent vascular shutdown. PDL combined with anti-angiogenic agents is a promising strategy to achieve persistent vascular shutdown by preventing reformation and reperfusion of photocoagulated blood vessels. Integration of LSI into the clinical workflow may lead to surgical image guidance that maximizes acute photocoagulation and is expected to improve PWS therapeutic outcomes. Continued integration of noninvasive optical imaging technologies and biochemical analysis is expected to lead to more robust treatment strategies. PMID:27013846
On the SAR derived alert in the detection of oil spills according to the analysis of the EGEMP.
Ferraro, Guido; Baschek, Björn; de Montpellier, Geraldine; Njoten, Ove; Perkovic, Marko; Vespe, Michele
2010-01-01
Satellite services that deliver information about possible oil spills at sea currently use different labels of "confidence" to describe the detections based on radar image processing. A common approach is to use a classification differentiating between low, medium and high levels of confidence. There is an ongoing discussion on the suitability of the existing classification systems of possible oil spills detected by radar satellite images with regard to the relevant significance and correspondence to user requirements. This paper contains a basic analysis of user requirements, current technical possibilities of satellite services as well as proposals for a redesign of the classification system as an evolution towards a more structured alert system. This research work offers a first review of implemented methodologies for the categorisation of detected oil spills, together with the proposal of explorative ideas evaluated by the European Group of Experts on satellite Monitoring of sea-based oil Pollution (EGEMP). Copyright 2009 Elsevier Ltd. All rights reserved.
White blood cell counting analysis of blood smear images using various segmentation strategies
NASA Astrophysics Data System (ADS)
Safuan, Syadia Nabilah Mohd; Tomari, Razali; Zakaria, Wan Nurshazwani Wan; Othman, Nurmiza
2017-09-01
In white blood cell (WBC) diagnosis, the most crucial measurement parameter is the WBC count. Such information is widely used to evaluate the effectiveness of cancer therapy and to diagnose several hidden infections within the human body. The current practice of manual WBC counting is laborious and highly subjective, which has led to the development of computer-aided systems (CAS) with rigorous image processing solutions. In CAS counting work, segmentation is the crucial step to ensure the accuracy of the counted cells. An optimal segmentation strategy that can work under various blood smear image acquisition conditions remains a great challenge. In this paper, a comparison between different segmentation methods based on color space analysis is elaborated to obtain the best counting outcome. Initially, color space correction is applied to the original blood smear image to standardize the image color intensity level. Next, white blood cell segmentation is performed using a combination of several color analysis subtractions (RGB, CMYK and HSV) and Otsu thresholding. Noise and unwanted regions that remain after the segmentation process are eliminated by applying a combination of morphological and Connected Component Labelling (CCL) filters. Eventually, the Circle Hough Transform (CHT) is applied to the segmented image to estimate the number of WBCs, including those in clumped regions. From the experiment, it is found that G-S yields the best performance.
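The thresholding-plus-labelling core of such a counting pipeline can be sketched as follows (a simplified stand-in: Otsu's method and connected-component area filtering in NumPy/SciPy on a single grey channel; the color-space subtractions and CHT clump-splitting steps of the paper are omitted, and "bright nuclei" is an assumption about the channel used):

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    w0 = np.cumsum(p)                       # class-0 probability up to each level
    mu = np.cumsum(p * np.arange(256))      # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return int(np.nanargmax(sigma_b))

def count_cells(gray, min_area=20):
    """Segment bright cells and count connected components, discarding
    specks below min_area (a stand-in for the morphological/CCL filtering)."""
    mask = gray > otsu_threshold(gray)
    labels, n = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))
```

A full reimplementation would run this per color-subtraction channel and then apply the Circle Hough Transform to split touching cells.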
Automatic cloud coverage assessment of Formosat-2 image
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2011-11-01
The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operated on a daily-revisiting mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. Firstly, an unsupervised K-means method is used to automatically estimate the cloud statistics of a Formosat-2 image. Secondly, the cloud coverage estimate is refined by manual examination of the image. Apparently, a more accurate Automatic Cloud Coverage Assessment (ACCA) method with a good prediction of cloud statistics would increase the efficiency of the second step. In this paper, based mainly on the research results of Chang et al, Irish, and Gotoh, we propose a modified Formosat-2 ACCA method comprising pre-processing and post-processing analysis. For the pre-processing analysis, cloud statistics are determined using unsupervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel reexamination, and a cross-band filter method. The Box-Counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis, increasing the efficiency of the manual examination.
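The box-counting post-processing check can be sketched as follows (a minimal estimator of the box-counting dimension of a binary cloud mask; the scale set and the any-pixel occupancy rule are illustrative choices, and a square mask is assumed):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a square binary mask:
    count occupied s x s boxes N(s) at several scales and fit the
    slope of log N(s) against log(1/s)."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        # trim to a multiple of s, then reduce block-wise:
        # a box is "occupied" if any pixel inside it is set
        trimmed = mask[: n - n % s, : n - n % s]
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, -1, s).any(axis=(1, 3))
        counts.append(max(int(boxes.sum()), 1))
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A mask whose estimated dimension is far from the roughly fractal geometry expected of cloud boundaries would flag the pre-processing result for closer manual examination.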
Excitation-scanning hyperspectral imaging as a means to discriminate various tissues types
NASA Astrophysics Data System (ADS)
Deal, Joshua; Favreau, Peter F.; Lopez, Carmen; Lall, Malvika; Weber, David S.; Rich, Thomas C.; Leavesley, Silas J.
2017-02-01
Little is currently known about the fluorescence excitation spectra of disparate tissues and how these spectra change with pathological state. Current imaging diagnostic techniques have limited capacity to investigate fluorescence excitation spectral characteristics. This study utilized excitation-scanning hyperspectral imaging to perform a comprehensive assessment of fluorescence spectral signatures of various tissues. Immediately following tissue harvest, a custom inverted microscope (TE-2000, Nikon Instruments) with a Xe arc lamp and thin-film tunable filter array (VersaChrome, Semrock, Inc.) was used to acquire hyperspectral image data from each sample. Scans utilized excitation wavelengths from 340 nm to 550 nm in 5 nm increments. Hyperspectral images were analyzed with custom Matlab scripts including linear spectral unmixing (LSU), principal component analysis (PCA), and Gaussian mixture modeling (GMM). Spectra were examined for potential characteristic features such as consistent intensity peaks at specific wavelengths or intensity ratios among significant wavelengths. The resultant spectral features were conserved among tissues of similar molecular composition. Additionally, excitation spectra appear to be a mixture of pure endmembers with commonalities across tissues of varied molecular composition, potentially identifiable through GMM. These results suggest the presence of common autofluorescent molecules in most tissues and that excitation-scanning hyperspectral imaging may serve as an approach for characterizing tissue composition as well as pathologic state. Future work will test the feasibility of excitation-scanning hyperspectral imaging as a contrast mode for discriminating normal and pathological tissues.
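Of the analyses mentioned, PCA is the most compact to sketch. A minimal NumPy version operating on an (H, W, bands) excitation-scan cube might look like this (an illustrative SVD-based implementation, not the authors' Matlab scripts):

```python
import numpy as np

def pca_scores(cube, n_components=3):
    """Project an (H, W, B) excitation-scan cube onto its first
    principal components; returns (H, W, n_components) score maps."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(float)
    x -= x.mean(axis=0)                   # center each excitation band
    # right singular vectors of the centered data are the PC directions
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return (x @ vt[:n_components].T).reshape(h, w, n_components)
```

Score maps like these localize where in a sample each spectral component dominates, which is the kind of spatial evidence used when comparing tissue types.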
Prioritizing Scientific Data for Transmission
NASA Technical Reports Server (NTRS)
Castano, Rebecca; Anderson, Robert; Estlin, Tara; DeCoste, Dennis; Gaines, Daniel; Mazzoni, Dominic; Fisher, Forest; Judd, Michele
2004-01-01
A software system has been developed for prioritizing newly acquired geological data onboard a planetary rover. The system has been designed to enable efficient use of limited communication resources by transmitting the data likely to have the most scientific value. This software operates onboard a rover by analyzing collected data, identifying potential scientific targets, and then using that information to prioritize data for transmission to Earth. Currently, the system is focused on the analysis of acquired images, although the general techniques are applicable to a wide range of data modalities. Image prioritization is performed using two main steps. In the first step, the software detects features of interest from each image. In its current application, the system is focused on visual properties of rocks. Thus, rocks are located in each image and rock properties, such as shape, texture, and albedo, are extracted from the identified rocks. In the second step, the features extracted from a group of images are used to prioritize the images using three different methods: (1) identification of key target signature (finding specific rock features the scientist has identified as important), (2) novelty detection (finding rocks we haven't seen before), and (3) representative rock sampling (finding the most average sample of each rock type). These methods use techniques such as K-means unsupervised clustering and a discrimination-based kernel classifier to rank images based on their interest level.
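The novelty and representativeness rankings can be illustrated with a deliberately simplified stand-in for the clustering step: rank each image's feature vector by its distance from the mean of the collection, farthest-first for novelty and closest-first for representativeness. The feature columns are hypothetical and this is not the onboard system's code.

```python
import numpy as np

def prioritize(features):
    """Rank images by (a) novelty and (b) representativeness.

    features: (n_images, n_features) array of per-image rock descriptors
              (hypothetical columns, e.g. shape, texture, albedo).
    Returns index arrays sorted most-interesting-first for each criterion.
    """
    mean = features.mean(axis=0)
    dist = np.linalg.norm(features - mean, axis=1)
    novelty_order = np.argsort(-dist)        # farthest from the mean first
    representative_order = np.argsort(dist)  # closest to the mean first
    return novelty_order, representative_order

# Three ordinary rocks and one outlier (index 2).
feats = np.array([[0.10, 0.20], [0.12, 0.19], [0.90, 0.80], [0.11, 0.21]])
nov, rep = prioritize(feats)
```

With the toy data, the outlier rock at index 2 ranks first for novelty, while a rock near the collection mean ranks first for representativeness.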
Applications of artificial intelligence V; Proceedings of the Meeting, Orlando, FL, May 18-20, 1987
NASA Technical Reports Server (NTRS)
Gilmore, John F. (Editor)
1987-01-01
The papers contained in this volume focus on current trends in applications of artificial intelligence. Topics discussed include expert systems, image understanding, artificial intelligence tools, knowledge-based systems, heuristic systems, manufacturing applications, and image analysis. Papers are presented on expert system issues in automated, autonomous space vehicle rendezvous; traditional versus rule-based programming techniques; applications to the control of optional flight information; methodology for evaluating knowledge-based systems; and real-time advisory system for airborne early warning.
NASA Stennis Space Center Test Technology Branch Activities
NASA Technical Reports Server (NTRS)
Solano, Wanda M.
2000-01-01
This paper provides a short history of NASA Stennis Space Center's Test Technology Laboratory and briefly describes the variety of engine test technology activities and developmental project initiatives. Theoretical rocket exhaust plume modeling, acoustic monitoring and analysis, hand-held fire imaging, heat flux radiometry, thermal imaging and exhaust plume spectroscopy are all examples of current and past test activities that are briefly described. In addition, recent efforts and visions focused on accommodating second, third, and fourth generation flight vehicle engine test requirements are discussed.
Verbalization and imagery in the process of formation of operator labor skills
NASA Technical Reports Server (NTRS)
Mistyuk, V. V.
1975-01-01
Sensorimotor control tests show that mastering operational skills occurs under conditions that stimulate the operator to independent active analysis and summarization of current information with the goal of clarifying the signs and the integral images that are a model of the situation. Goal directed determination of such an image requires inner and external speech, activates and improves the thinking of the operator, accelerates the training process, increases its effectiveness, and enables the formation of strategies in anticipating the course of events.
Mącik, Dorota; Ziółkowska, Patrycja; Kowalska, Monika
2012-01-01
Analysis of changes in self-perception in post-mastectomy patients and its comparison with self-perception of healthy women. The subjects of this study were 50 women. The main group was post-mastectomy patients involved in the meetings of the Amazons Club (25 women). The reference group consisted of 25 healthy women. The method used in the study was the ACL (Adjective Check List) test, identifying 37 dimensions of self-image. Oncological patients completed the test twice (for current and pre-cancer self-image), and healthy women once, for current self. Both groups were matched for education level to ensure a similar level of insight. Retrospective self-image and the current one in the Amazon women group were highly convergent. Existing differences include a reduced need for achievement and dominance, and a lower level of self-confidence. However, the comparison of current self-images in both groups showed a large discrepancy in the results. The Amazon women assess themselves in a much more negative way. Also, their self-image is self-contradictory in certain characteristics. Mastectomy is a difficult experience requiring one to re-adapt and to accept oneself thereafter. The way of thinking about oneself is a defence mechanism helping to cope. Programmes of work with these patients must, therefore, focus on identifying their emotions and thoughts, especially those they do not want to accept because of the perceived pressure from the environment to deal with this difficult situation quickly and effectively. Increasing acceptance of personal limitations may help the affected women to adjust psychologically faster and more easily.
NASA Astrophysics Data System (ADS)
Vyas, N.; Sammons, R. L.; Addison, O.; Dehghani, H.; Walmsley, A. D.
2016-09-01
Biofilm accumulation on biomaterial surfaces is a major health concern, and significant research efforts are directed towards producing biofilm-resistant surfaces and developing biofilm removal techniques. To evaluate biofilm growth and disruption on surfaces, accurate methods that give quantitative information on biofilm area are needed, as current methods are indirect and imprecise. We demonstrate the use of machine learning algorithms to segment biofilm from scanning electron microscopy images. A case study showing disruption of biofilm from rough dental implant surfaces using cavitation bubbles from an ultrasonic scaler is used to validate the imaging and analysis protocol developed. Streptococcus mutans biofilm was disrupted from sandblasted, acid-etched (SLA) Ti discs and polished Ti discs. Significant biofilm removal occurred due to cavitation from ultrasonic scaling (p < 0.001). The mean sensitivity and specificity values for segmentation of the SLA surface images were 0.80 ± 0.18 and 0.62 ± 0.20 respectively, and 0.74 ± 0.13 and 0.86 ± 0.09 respectively for polished surfaces. Cavitation has potential to be used as a novel way to clean dental implants. This imaging and analysis method will be of value to other researchers and manufacturers wishing to study biofilm growth and removal.
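The sensitivity and specificity figures quoted above follow the standard definitions computed per image from the confusion counts of a predicted mask against a ground-truth mask, e.g.:

```python
import numpy as np

def sensitivity_specificity(pred, truth):
    """Per-image sensitivity and specificity for binary segmentation masks.

    pred, truth: boolean arrays of the same shape (True = biofilm pixel).
    """
    tp = np.logical_and(pred, truth).sum()    # biofilm correctly detected
    tn = np.logical_and(~pred, ~truth).sum()  # background correctly rejected
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

truth = np.array([[1, 1, 0, 0]], dtype=bool)
pred  = np.array([[1, 0, 0, 0]], dtype=bool)
sens, spec = sensitivity_specificity(pred, truth)  # 0.5, 1.0
```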
Specimen preparation, imaging, and analysis protocols for knife-edge scanning microscopy.
Choe, Yoonsuck; Mayerich, David; Kwon, Jaerock; Miller, Daniel E; Sung, Chul; Chung, Ji Ryang; Huffman, Todd; Keyser, John; Abbott, Louise C
2011-12-09
Major advances in high-throughput, high-resolution, 3D microscopy techniques have enabled the acquisition of large volumes of neuroanatomical data at submicrometer resolution. One of the first such instruments producing whole-brain-scale data is the Knife-Edge Scanning Microscope (KESM), developed and hosted in the authors' lab. KESM has been used to section and image whole mouse brains at submicrometer resolution, revealing the intricate details of the neuronal networks (Golgi), vascular networks (India ink), and cell body distribution (Nissl). The use of KESM is restricted neither to the mouse nor to the brain. We have successfully imaged the octopus brain, mouse lung, and rat brain. We are currently working on whole zebrafish embryos. Data like these can greatly contribute to connectomics research; to microcirculation and hemodynamic research; and to stereology research by providing an exact ground truth. In this article, we will describe the pipeline, including specimen preparation (fixing, staining, and embedding), KESM configuration and setup, sectioning and imaging with the KESM, image processing, data preparation, and data visualization and analysis. The emphasis will be on specimen preparation and visualization/analysis of obtained KESM data. We expect the detailed protocol presented in this article to help broaden access to KESM and increase its utilization.
Study of robot landmark recognition with complex background
NASA Astrophysics Data System (ADS)
Huang, Yuqing; Yang, Jia
2007-12-01
Perceiving and recognising environmental characteristics is of great importance in assisting a robot with path planning, position navigation and task performance. To solve the problem of monocular-vision landmark recognition for a mobile intelligent robot moving against a complex background, a nested region-growing algorithm is proposed that fuses a priori colour information and grows from the current maximum convergence centre, providing invariance to changes in position, scale, rotation, jitter and weather conditions. Firstly, a novel experimental threshold based on the RGB colour model is used for a first image segmentation, in which some objects and partial scenes with colours similar to the landmarks are detected together with the landmarks. Secondly, with the current maximum convergence centre of the segmented image as the seed point, the region-growing algorithm establishes several regions of interest (ROIs) in order. Based on shape characteristics, a quick and effective contour analysis using primitive elements decides whether the current ROI is retained or deleted after each region growing; each ROI is thus judged initially and positioned. When this position information is fed back to the grey-scale image, the complete landmarks are extracted accurately by a second segmentation restricted to the local landmark area. Finally, landmarks are recognised by a Hopfield neural network. Experiments on a large number of images with both campus and urban-district backgrounds show the effectiveness of the proposed algorithm.
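A seeded region-growing step of the kind described (grow a region from a seed point, then test the resulting ROI) can be sketched as follows. The 4-connectivity, intensity tolerance, and toy image are illustrative assumptions, not the paper's algorithm, which seeds from the maximum convergence centre and fuses colour information.

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose intensity
    differs from the seed intensity by at most `tol`."""
    h, w = img.shape
    seed_val = int(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask

img = np.array([[10, 11, 50],
                [12, 10, 52],
                [51, 53, 55]], dtype=np.uint8)
roi = region_grow(img, seed=(0, 0), tol=5)  # grows over the 4 dark pixels
```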
Robbins, Lorraine B; Ling, Jiying; Resnicow, Kenneth
2017-12-06
Understanding factors related to girls' body image discrepancy, which is the difference between self-perceived current or actual and ideal body size, is important for addressing body-related issues and preventing adverse sequelae. Two aims were to: 1) examine demographic differences in body image discrepancy; and 2) determine the association of body image discrepancy with weight status, percent body fat, physical activity, sedentary behavior, and cardiovascular (CV) fitness among young adolescent girls. The cross-sectional study included a secondary analysis of baseline data from a group randomized controlled trial including 1519 5th-8th grade girls in 24 U.S. schools. Girls completed physical activity and sedentary behavior surveys. To indicate perceived current/actual and ideal body image, girls selected from nine body figures the one that represented how they look now and another showing how they want to look. Girls wore accelerometers measuring physical activity. Height, weight, and percent body fat were assessed. The Progressive Aerobic CV Endurance Run was used to estimate CV fitness. Independent t-test, one- and two-way ANOVA, correlational analyses, and hierarchical linear regressions were performed. The majority (67.5%; n = 1023) chose a smaller ideal than current/actual figure. White girls had higher body image discrepancy than Black girls (p = .035). Body image discrepancy increased with increasing weight status (F(3,1506) = 171.32, p < .001). Moderate-to-vigorous physical activity (MVPA) and vigorous physical activity were negatively correlated with body image discrepancy (r = -.10, p < .001; r = -.14, p < .001, respectively), but correlations were not significant after adjusting for race and body mass index (BMI), respectively. Body image discrepancy was moderately correlated with CV fitness (r = -.55, p < .001). After adjusting for demographics, percent body fat, but not CV fitness or MVPA, influenced body image discrepancy. 
Girls with higher percent body fat had higher body image discrepancy (p < .001). This study provided important information to guide interventions for promoting a positive body image among girls. ClinicalTrials.gov Identifier NCT01503333 , registration date: January 4, 2012.
Engineering the Ideal Gigapixel Image Viewer
NASA Astrophysics Data System (ADS)
Perpeet, D.; Wassenberg, J.
2011-09-01
Despite improvements in automatic processing, analysts are still faced with the task of evaluating gigapixel-scale mosaics or images acquired by telescopes such as Pan-STARRS. Displaying such images in ‘ideal’ form is a major challenge even today, and the amount of data will only increase as sensor resolutions improve. In our opinion, the ideal viewer has several key characteristics. Lossless display - down to individual pixels - ensures all information can be extracted from the image. Support for all relevant pixel formats (integer or floating point) allows displaying data from different sensors. Smooth zooming and panning in the high-resolution data enables rapid screening and navigation in the image. High responsiveness to input commands avoids frustrating delays. Instantaneous image enhancement, e.g. contrast adjustment and image channel selection, helps with analysis tasks. Modest system requirements allow viewing on regular workstation computers or even laptops. To the best of our knowledge, no such software product is currently available. Meeting these goals requires addressing certain realities of current computer architectures. GPU hardware accelerates rendering and allows smooth zooming without high CPU load. Programmable GPU shaders enable instant channel selection and contrast adjustment without any perceptible slowdown or changes to the input data. Relatively low disk transfer speeds suggest the use of compression to decrease the amount of data to transfer. Asynchronous I/O allows decompressing while waiting for previous I/O operations to complete. The slow seek times of magnetic disks motivate optimizing the order of the data on disk. Vectorization and parallelization allow significant increases in computational capacity. Limited memory requires streaming and caching of image regions. We develop a viewer that takes the above issues into account. 
Its awareness of the computer architecture enables previously unattainable features such as smooth zooming and image enhancement within high-resolution data. We describe our implementation, disclosing its novel file format and lossless image codec whose decompression is faster than copying the raw data in memory. Both provide crucial performance boosts compared to conventional approaches. Usability tests demonstrate the suitability of our viewer for rapid analysis of large SAR datasets, multispectral satellite imagery and mosaics.
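The streaming-and-caching requirement described above is commonly met with a least-recently-used (LRU) tile cache keyed by pyramid level and tile position. The sketch below is a generic illustration under that assumption, not the authors' implementation; the `loader` callback and key layout are hypothetical.

```python
from collections import OrderedDict

class TileCache:
    """Least-recently-used cache for decoded image tiles, keyed by
    (level, row, col). `loader` is called on a miss, e.g. to read and
    decompress a tile from disk."""

    def __init__(self, loader, capacity):
        self.loader = loader
        self.capacity = capacity
        self._tiles = OrderedDict()

    def get(self, key):
        if key in self._tiles:
            self._tiles.move_to_end(key)     # mark as recently used
            return self._tiles[key]
        tile = self.loader(key)
        self._tiles[key] = tile
        if len(self._tiles) > self.capacity:
            self._tiles.popitem(last=False)  # evict least recently used
        return tile

loads = []  # record which keys actually hit the loader
cache = TileCache(loader=lambda k: loads.append(k) or k, capacity=2)
cache.get((0, 0, 0))
cache.get((0, 0, 1))
cache.get((0, 0, 0))  # hit: no new load
cache.get((0, 0, 2))  # evicts (0, 0, 1)
cache.get((0, 0, 1))  # miss again
```

Keeping the cache capacity bounded keeps memory use modest while panning, at the cost of re-decoding tiles that fall out of the working set.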
Larue, Ruben T H M; Defraene, Gilles; De Ruysscher, Dirk; Lambin, Philippe; van Elmpt, Wouter
2017-02-01
Quantitative analysis of tumour characteristics based on medical imaging is an emerging field of research. In recent years, quantitative imaging features derived from CT, positron emission tomography and MR scans were shown to be of added value in the prediction of outcome parameters in oncology, in what is called the radiomics field. However, results might be difficult to compare owing to a lack of standardized methodologies to conduct quantitative image analyses. In this review, we aim to present an overview of the current challenges, technical routines and protocols that are involved in quantitative imaging studies. The first issue that should be overcome is the dependency of several features on the scan acquisition and image reconstruction parameters. Adopting consistent methods in the subsequent target segmentation step is equally crucial. To further establish robust quantitative image analyses, standardization or at least calibration of imaging features based on different feature extraction settings is required, especially for texture- and filter-based features. Several open-source and commercial software packages to perform feature extraction are currently available, all with slightly different functionalities, which makes benchmarking quite challenging. The number of imaging features calculated is typically larger than the number of patients studied, which emphasizes the importance of proper feature selection and prediction model-building routines to prevent overfitting. Even though many of these challenges still need to be addressed before quantitative imaging can be brought into daily clinical practice, radiomics is expected to be a critical component for the integration of image-derived information to personalize treatment in the future.
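The feature-selection concern raised above (far more features than patients) can be illustrated with a minimal univariate ranking: keep only the features most correlated with the outcome before any model building. Real radiomics pipelines use more elaborate selection and cross-validated model building; the data below are synthetic.

```python
import numpy as np

def rank_features(X, y, n_keep):
    """Rank features by absolute Pearson correlation with the outcome and
    keep the top `n_keep` -- a minimal stand-in for the feature-selection
    step needed when imaging features outnumber patients."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    corr = np.abs(Xc.T @ yc) / denom
    return np.argsort(-corr)[:n_keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))             # 50 features, only 20 "patients"
y = X[:, 7] + 0.1 * rng.normal(size=20)   # outcome driven by feature 7
keep = rank_features(X, y, n_keep=5)      # feature 7 ranks first
```

Even this simple filter must be applied inside, not before, any cross-validation loop; selecting features on the full dataset and then validating on the same data is a classic source of the overfitting the review warns about.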
Integration of Optical Coherence Tomography Scan Patterns to Augment Clinical Data Suite
NASA Technical Reports Server (NTRS)
Mason, S.; Patel, N.; Van Baalen, M.; Tarver, W.; Otto, C.; Samuels, B.; Koslovsky, M.; Schaefer, C.; Taiym, W.; Wear, M.;
2018-01-01
Vision changes identified in long-duration spaceflight astronauts have led Space Medicine at NASA to adopt a more comprehensive clinical monitoring protocol. Optical Coherence Tomography (OCT) was recently implemented at NASA, including on board the International Space Station in 2013. NASA is collaborating with Heidelberg Engineering to increase the fidelity of the current OCT data set by integrating the traditional circumpapillary OCT image with radial and horizontal block images at the optic nerve head. The retinal nerve fiber layer was segmented by two experienced individuals. Intra-rater (N=4 subjects and 70 images) and inter-rater (N=4 subjects and 221 images) agreement analysis was performed. The results of this analysis and the potential benefits will be presented.
Non-Contact EDDY Current Hole Eccentricity and Diameter Measurement
NASA Technical Reports Server (NTRS)
Chern, E. James
1998-01-01
Precision holes are among the most critical features of a mechanical component. Deviations from permissible tolerances can impede operation and result in unexpected failure. We have developed an automated non-contact eddy current hole diameter and eccentricity measuring system. The operating principle is based on the eddy current lift-off effect, which is the coil impedance as a function of the distance between the coil and the test object. An absolute eddy current probe rotates in the hole. The impedance of each angular position is acquired and input to the computer for integration and analysis. The eccentricity of the hole is the profile of the impedance as a function of angular position as compared to a straight line, an ideal hole. The diameter of the hole is the sum of the diameter of the probe and twice the distance-calibrated impedance. An eddy current image is generated by integrating angular scans for a plurality of depths between the top and bottom to display the eccentricity profile. This system can also detect and image defects in the hole. The method for non-contact eddy current hole diameter and eccentricity measurement has been granted a patent by the U.S. Patent and Trademark Office.
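The diameter and eccentricity computation described above (probe diameter plus twice the distance-calibrated impedance; deviation of the angular impedance profile from an ideal hole) can be sketched as below. The linear calibration function and the numbers are hypothetical, for illustration only.

```python
import numpy as np

def hole_profile(impedance, probe_diameter, calib):
    """Convert an angular impedance scan to a hole diameter and an
    eccentricity estimate, following the lift-off principle.

    impedance: (n_angles,) impedance magnitudes at equally spaced angles.
    calib: function mapping impedance to probe-to-wall distance (from a
           calibration curve; assumed linear here for illustration).
    """
    gaps = calib(impedance)                  # distance-calibrated impedance
    diameter = probe_diameter + 2.0 * gaps.mean()
    eccentricity = gaps.max() - gaps.min()   # deviation from an ideal hole
    return diameter, eccentricity

# One angular scan of 8 positions; calibration: 0.5 mm of gap per unit impedance.
z = np.array([1.00, 1.02, 1.04, 1.02, 1.00, 0.98, 0.96, 0.98])
d, e = hole_profile(z, probe_diameter=4.0, calib=lambda z: 0.5 * z)
```

Repeating the scan at several depths and stacking the profiles yields the eddy-current image of the hole wall mentioned in the abstract.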
Schalk, Stefan G; Demi, Libertario; Smeenge, Martijn; Mills, David M; Wallace, Kirk D; de la Rosette, Jean J M C H; Wijkstra, Hessel; Mischi, Massimo
2015-05-01
Currently, nonradical treatment for prostate cancer is hampered by the lack of reliable diagnostics. Contrast-ultrasound dispersion imaging (CUDI) has recently shown great potential as a prostate cancer imaging technique. CUDI estimates the local dispersion of intravenously injected contrast agents, imaged by transrectal dynamic contrast-enhanced ultrasound (DCE-US), to detect angiogenic processes related to tumor growth. The best CUDI results have so far been obtained by similarity analysis of the contrast kinetics in neighboring pixels. To date, CUDI has been investigated in 2-D only. In this paper, an implementation of 3-D CUDI based on spatiotemporal similarity analysis of 4-D DCE-US is described. Different from 2-D methods, 3-D CUDI permits analysis of the entire prostate using a single injection of contrast agent. To perform 3-D CUDI, a new strategy was designed to estimate the similarity in the contrast kinetics at each voxel, and data processing steps were adjusted to the characteristics of 4-D DCE-US images. The technical feasibility of 4-D DCE-US in 3-D CUDI was assessed and confirmed. Additionally, in a preliminary validation in two patients, dispersion maps by 3-D CUDI were quantitatively compared with those by 2-D CUDI and with 12-core systematic biopsies with promising results.
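The similarity analysis of contrast kinetics in neighbouring voxels can be illustrated with a plain correlation measure over time-intensity curves. The 6-connected neighbourhood and Pearson correlation below are illustrative assumptions, not necessarily the authors' spatiotemporal similarity measure.

```python
import numpy as np

def neighbor_similarity(tic):
    """Mean correlation of each voxel's time-intensity curve with its
    6-connected neighbours in a 4-D DCE-US volume.

    tic: (nz, ny, nx, nt) array of time-intensity curves per voxel.
    Returns an (nz, ny, nx) similarity map (NaN at the borders).
    """
    nz, ny, nx, nt = tic.shape
    sim = np.full((nz, ny, nx), np.nan)
    for z in range(1, nz - 1):
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                c = tic[z, y, x]
                corrs = [np.corrcoef(c, tic[z + dz, y + dy, x + dx])[0, 1]
                         for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                                            (0, -1, 0), (0, 0, 1), (0, 0, -1))]
                sim[z, y, x] = np.mean(corrs)
    return sim

# Toy volume in which every voxel shares identical kinetics.
curve = np.array([0.0, 1.0, 2.0, 1.0])
tic = np.tile(curve, (3, 3, 3, 1))
sim = neighbor_similarity(tic)  # centre voxel similarity is 1
```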
Feedback circuit design of an auto-gating power supply for low-light-level image intensifier
NASA Astrophysics Data System (ADS)
Yang, Ye; Yan, Bo; Zhi, Qiang; Ni, Xiao-bing; Li, Jun-guo; Wang, Yu; Yao, Ze
2015-11-01
This paper introduces the basic principle of an auto-gating power supply that uses a hybrid automatic brightness control scheme. From an analysis of the image intensifier's special requirements for an auto-gating power supply, a feedback circuit of the auto-gating power supply is analysed, and the cause of screen flicker after the auto-gating power supply is assembled with the image intensifier is identified. A feedback circuit is then designed that shortens the response time of the auto-gating power supply and reduces the slight screen flicker that the human eye can distinguish under high illumination intensity.
X-ray online detection for laser welding T-joint of Al-Li alloy
NASA Astrophysics Data System (ADS)
Zhan, Xiaohong; Bu, Xing; Qin, Tao; Yu, Haisong; Chen, Jie; Wei, Yanhong
2017-05-01
In order to detect weld defects in laser welding T-joints of Al-Li alloy, a real-time X-ray imaging system is set up for quality inspection. Experiments on the real-time radiography procedure of the weldment are conducted using this system. A twin fillet welding seam radiographic arrangement is designed according to the structural characteristics of the weldment. The critical parameters, including magnification, focal length, tube current and tube voltage, are studied to acquire high-quality weld images. Through theoretical and data analysis, optimum parameters are determined and the expected digital images are captured, which is conducive to automatic defect detection.
Non-destructive, high-content analysis of wheat grain traits using X-ray micro computed tomography.
Hughes, Nathan; Askew, Karen; Scotson, Callum P; Williams, Kevin; Sauze, Colin; Corke, Fiona; Doonan, John H; Nibau, Candida
2017-01-01
Wheat is one of the most widely grown crops in temperate climates for food and animal feed. In order to meet the demands of the predicted population increase in an ever-changing climate, wheat production needs to increase dramatically. Spike and grain traits are critical determinants of final yield, and grain uniformity is a commercially desired trait, but their analysis is laborious and often requires destructive harvest. One of the current challenges is to develop an accurate, non-destructive method for spike and grain trait analysis capable of handling large populations. In this study we describe the development of a robust method for the accurate extraction and measurement of spike and grain morphometric parameters from images acquired by X-ray micro-computed tomography (μCT). The image analysis pipeline developed automatically identifies plant material of interest in μCT images, performs image analysis, and extracts morphometric data. As a proof of principle, this integrated methodology was used to analyse the spikes from a population of wheat plants subjected to high temperatures under two different water regimes. Temperature has a negative effect on spike height and grain number, with the middle of the spike being the most affected region. The data also confirmed that increased grain volume was correlated with the decrease in grain number under mild stress. Being able to quickly measure plant phenotypes in a non-destructive manner is crucial to advance our understanding of gene function and the effects of the environment. We report on the development of an image analysis pipeline capable of accurately and reliably extracting spike and grain traits from crops without the loss of positional information. This methodology was applied to the analysis of wheat spikes and can be readily applied to other economically important crop species.
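The segment-and-measure core of such a μCT pipeline can be sketched with thresholding plus connected-component labelling; per-grain volumes then follow from voxel counts. The threshold, voxel size, and toy volume below are illustrative assumptions, not the published pipeline.

```python
import numpy as np
from scipy import ndimage

def grain_stats(volume, threshold, voxel_volume=1.0):
    """Segment grains from a uCT intensity volume by thresholding, label
    connected components, and return the grain count and per-grain volumes."""
    binary = volume > threshold
    labels, n_grains = ndimage.label(binary)           # 6-connected in 3-D
    sizes = ndimage.sum(binary, labels, index=range(1, n_grains + 1))
    return n_grains, np.asarray(sizes) * voxel_volume  # voxel counts -> volumes

vol = np.zeros((5, 5, 5))
vol[1, 1, 1] = 1.0     # one single-voxel "grain"
vol[3, 1:4, 3] = 1.0   # one three-voxel "grain"
n, volumes = grain_stats(vol, threshold=0.5)
```

Because labelling preserves each grain's coordinates, positional information along the spike is retained, which is the property the abstract highlights.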
Giger, Maryellen L.; Chan, Heang-Ping; Boone, John
2008-01-01
The roles of physicists in medical imaging have expanded over the years, from the study of imaging systems (sources and detectors) and dose to the assessment of image quality and perception, the development of image processing techniques, and the development of image analysis methods to assist in detection and diagnosis. The latter is a natural extension of medical physicists’ goals in developing imaging techniques to help physicians acquire diagnostic information and improve clinical decisions. Studies indicate that radiologists do not detect all abnormalities on images that are visible on retrospective review, and they do not always correctly characterize abnormalities that are found. Since the 1950s, the potential use of computers had been considered for analysis of radiographic abnormalities. In the mid-1980s, however, medical physicists and radiologists began major research efforts for computer-aided detection or computer-aided diagnosis (CAD), that is, using the computer output as an aid to radiologists—as opposed to a completely automatic computer interpretation—focusing initially on methods for the detection of lesions on chest radiographs and mammograms. Since then, extensive investigations of computerized image analysis for detection or diagnosis of abnormalities in a variety of 2D and 3D medical images have been conducted. The growth of CAD over the past 20 years has been tremendous—from the early days of time-consuming film digitization and CPU-intensive computations on a limited number of cases to its current status in which developed CAD approaches are evaluated rigorously on large clinically relevant databases. 
CAD research by medical physicists includes many aspects—collecting relevant normal and pathological cases; developing computer algorithms appropriate for the medical interpretation task including those for segmentation, feature extraction, and classifier design; developing methodology for assessing CAD performance; validating the algorithms using appropriate cases to measure performance and robustness; conducting observer studies with which to evaluate radiologists in the diagnostic task without and with the use of the computer aid; and ultimately assessing performance with a clinical trial. Medical physicists also have an important role in quantitative imaging, by validating the quantitative integrity of scanners and developing imaging techniques, and image analysis tools that extract quantitative data in a more accurate and automated fashion. As imaging systems become more complex and the need for better quantitative information from images grows, the future includes the combined research efforts from physicists working in CAD with those working on quantitative imaging systems to readily yield information on morphology, function, molecular structure, and more—from animal imaging research to clinical patient care. A historical review of CAD and a discussion of challenges for the future are presented here, along with the extension to quantitative image analysis. PMID:19175137
Computer analysis of digital sky surveys using citizen science and manual classification
NASA Astrophysics Data System (ADS)
Kuminski, Evan; Shamir, Lior
2015-01-01
As current and future digital sky surveys such as SDSS, LSST, DES, Pan-STARRS and Gaia create increasingly massive databases containing millions of galaxies, there is a growing need to be able to efficiently analyze these data. An effective way to do this is through manual analysis; however, this may be insufficient considering the extremely vast pipelines of astronomical images generated by the present and future surveys. Some efforts have been made to use citizen science to classify galaxies by their morphology on a larger scale than individual or small groups of scientists can. While these citizen science efforts such as Zooniverse have helped obtain reasonably accurate morphological information about large numbers of galaxies, they cannot scale to provide complete analysis of the billions of galaxy images that will be collected by future ventures such as LSST. Since current forms of manual classification cannot scale to the masses of data collected by digital sky surveys, it is clear that in order to keep up with the growing databases some form of automation of the data analysis will be required, working either independently or in combination with human analysis such as citizen science. Here we describe a computer vision method that can automatically analyze galaxy images and deduce galaxy morphology. Experiments using Galaxy Zoo 2 data show that the performance of the method increases as the degree of agreement between the citizen scientists gets higher, providing a cleaner dataset. For several morphological features, such as the spirality of the galaxy, the algorithm agreed with the citizen scientists on around 95% of the samples. However, the method failed to analyze some of the morphological features, such as the number of spiral arms, and provided accuracy of just ~36%.
Digital mammography, cancer screening: Factors important for image compression
NASA Technical Reports Server (NTRS)
Clarke, Laurence P.; Blaine, G. James; Doi, Kunio; Yaffe, Martin J.; Shtern, Faina; Brown, G. Stephen; Winfield, Daniel L.; Kallergi, Maria
1993-01-01
The use of digital mammography for breast cancer screening poses several novel problems, such as the development of digital sensors; computer-assisted diagnosis (CAD) methods for image noise suppression, enhancement, and pattern recognition; and compression algorithms for image storage, transmission, and remote diagnosis. X-ray digital mammography using novel direct digital detection schemes or film digitizers results in large data sets and, therefore, image compression methods will play a significant role in the image processing and analysis by CAD techniques. In view of the extensive compression required, the relative merit of 'virtually lossless' versus lossy methods should be determined. A brief overview is presented here of the developments of digital sensors, CAD, and compression methods currently proposed and tested for mammography. The objective of the NCI/NASA Working Group on Digital Mammography is to stimulate the interest of the image processing and compression scientific community in this medical application and to identify possible dual-use technologies within the NASA centers.
NASA Technical Reports Server (NTRS)
2008-01-01
This image, and many like it, is one way NASA's Phoenix Mars Lander is measuring trace amounts of water vapor in the atmosphere over far-northern Mars. Phoenix's Surface Stereo Imager (SSI) uses solar filters, or filters designed to image the sun, to make these images. The camera is aimed at the sky for long exposures. SSI took this image as a test on June 9, 2008, which was the Phoenix mission's 15th Martian day, or sol, since landing, at 5:20 p.m. local solar time. The camera was pointed about 38 degrees above the horizon. The white dots in the sky are detector dark current that will be removed during image processing and analysis. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space.
Anatale, Katharine; Kelly, Sarah
2015-03-01
Adolescence is a tumultuous and challenging time period in life. Sexual risk behavior among adolescents is a widespread topic of interest in the current literature. Two common factors that influence increased sexual risk behavior are symptoms of depression and negative body image. The purpose of this study was to investigate the effect of body image and symptoms of depression upon sexual risk-taking in an adolescent female population. A secondary data analysis of the 2011 Youth Risk Behavior Survey (YRBS) was used to explore girls' sexual activity, body image, and mental health. There were 7,708 high-school girls who participated in this study. Three questions were used to represent the constructs under investigation. There were significant correlations between sexual activity, body image, and symptoms of depression; only symptoms of depression were significant predictors of both sexual activity and condom usage. Body image was a predictor of sexual activity, but not condom use. Our findings support previous studies that suggested that people with depressive symptoms were more likely to engage in risky sexual behaviors. Our study also supports the idea that a negative body image decreases sexual activity; however, other researchers have reported that negative body image leads to an increase in sexual activity.
Zimmerman, Stefan L; Kim, Woojin; Boonn, William W
2011-01-01
Quantitative and descriptive imaging data are a vital component of the radiology report and are frequently of paramount importance to the ordering physician. Unfortunately, current methods of recording these data in the report are both inefficient and error prone. In addition, the free-text, unstructured format of a radiology report makes aggregate analysis of data from multiple reports difficult or even impossible without manual intervention. A structured reporting work flow has been developed that allows quantitative data created at an advanced imaging workstation to be seamlessly integrated into the radiology report with minimal radiologist intervention. As an intermediary step between the workstation and the reporting software, quantitative and descriptive data are converted into an extensible markup language (XML) file in a standardized format specified by the Annotation and Image Markup (AIM) project of the National Institutes of Health Cancer Biomedical Informatics Grid. The AIM standard was created to allow image annotation data to be stored in a uniform machine-readable format. These XML files containing imaging data can also be stored on a local database for data mining and analysis. This structured work flow solution has the potential to improve radiologist efficiency, reduce errors, and facilitate storage of quantitative and descriptive imaging data for research. Copyright © RSNA, 2011.
Huang, Hui; Liu, Li; Ngadi, Michael O; Gariépy, Claude; Prasher, Shiv O
2014-01-01
Marbling is an important quality attribute of pork. Detection of pork marbling usually involves subjective scoring, which raises efficiency costs for the processor. In this study, the ability to predict pork marbling using near-infrared (NIR) hyperspectral imaging (900-1700 nm) and appropriate image processing techniques was studied. Near-infrared images were collected from pork after marbling evaluation according to the current standard chart from the National Pork Producers Council. Image analysis techniques (Gabor filter, wide line detector, and spectral averaging) were applied to extract texture, line, and spectral features, respectively, from NIR images of pork. Samples were grouped into calibration and validation sets. Wavelength selection was performed on the calibration set by a stepwise regression procedure. Prediction models of pork marbling scores were built using multiple linear regressions based on derivatives of mean spectra and line features at key wavelengths. The results showed that the derivatives of both texture and spectral features produced good results, with correlation coefficients of validation of 0.90 and 0.86, respectively, using wavelengths of 961, 1186, and 1220 nm. The results revealed the great potential of the Gabor filter for analyzing NIR images of pork for the effective and efficient objective evaluation of pork marbling.
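The Gabor filtering step described in this abstract can be sketched in a few lines of numpy. This is an illustrative sketch only: the paper's actual kernel parameters, orientations, and feature definitions are not given in the abstract, so all parameter values and function names below are assumptions.

```python
import numpy as np

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, wavelength=8.0):
    """Real part of a 2-D Gabor kernel (illustrative parameter values)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def convolve_fft(image, kernel):
    """Circular convolution via the FFT (adequate for a texture sketch)."""
    k = np.fft.fft2(kernel, s=image.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * k))

def gabor_texture_features(band_image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """One texture feature per orientation: mean absolute Gabor response
    of a single key-wavelength band image (e.g. the 961 nm band)."""
    return [float(np.abs(convolve_fft(band_image, gabor_kernel(theta=t))).mean())
            for t in thetas]
```

Features such as these, computed per band, could then feed the multiple linear regression step mentioned in the abstract.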
Kim, Hyun Gi; Lee, Young Han; Choi, Jin-Young; Park, Mi-Suk; Kim, Myeong-Jin; Kim, Ki Whang
2015-01-01
Purpose To investigate the optimal blending percentage of adaptive statistical iterative reconstruction (ASIR) at a reduced radiation dose while preserving a degree of image quality and texture similar to that of standard-dose computed tomography (CT). Materials and Methods The CT performance phantom was scanned with standard and dose-reduction protocols, including reduced mAs or kVp. Image quality parameters including noise, spatial resolution, and low-contrast resolution, as well as image texture, were quantitatively evaluated after applying various blending percentages of ASIR. The optimal blending percentage of ASIR that preserved image quality and texture compared to standard-dose CT was investigated for each radiation dose reduction protocol. Results As the percentage of ASIR increased, noise and spatial resolution decreased, whereas low-contrast resolution increased. In the texture analysis, an increasing percentage of ASIR resulted in an increase of angular second moment, inverse difference moment, and correlation and in a decrease of contrast and entropy. The 20% and 40% dose reduction protocols with 20% and 40% ASIR blending, respectively, resulted in an optimal quality of images with preservation of the image texture. Conclusion Blending 40% ASIR with a 40% reduction of the tube current-time product can maximize radiation dose reduction while preserving adequate image quality and texture. PMID:25510772
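The five texture measures named in this abstract (angular second moment, inverse difference moment, correlation, contrast, entropy) are the classic grey-level co-occurrence matrix (GLCM) features. A minimal numpy sketch, assuming a single horizontal neighbour offset and a coarse quantisation (the paper's exact GLCM settings are not stated in the abstract):

```python
import numpy as np

def glcm(image, levels=8):
    """Symmetric, normalised grey-level co-occurrence matrix for the
    horizontal neighbour offset (0, 1)."""
    q = (image.astype(float) / image.max() * (levels - 1)).astype(int)
    m = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[a, b] += 1
        m[b, a] += 1            # symmetric counting
    return m / m.sum()

def texture_features(p):
    """The five Haralick-style features named in the abstract."""
    i, j = np.indices(p.shape)
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt((((i - mu_i) ** 2) * p).sum())
    sd_j = np.sqrt((((j - mu_j) ** 2) * p).sum())
    nz = p[p > 0]
    return {
        "angular_second_moment": float((p ** 2).sum()),
        "inverse_difference_moment": float((p / (1.0 + (i - j) ** 2)).sum()),
        "correlation": float(((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)),
        "contrast": float((((i - j) ** 2) * p).sum()),
        "entropy": float(-(nz * np.log(nz)).sum()),
    }
```

Production code would normally use an established implementation (e.g. a library GLCM routine) rather than this loop, but the definitions above show what each reported feature measures.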
NASA Astrophysics Data System (ADS)
Berthon, Beatrice; Dansette, Pierre-Marc; Tanter, Mickaël; Pernot, Mathieu; Provost, Jean
2017-07-01
Direct imaging of the electrical activation of the heart is crucial to better understand and diagnose diseases linked to arrhythmias. This work presents an ultrafast acoustoelectric imaging (UAI) system for direct and non-invasive ultrafast mapping of propagating current densities using the acoustoelectric effect. Acoustoelectric imaging is based on the acoustoelectric effect, the modulation of the medium’s electrical impedance by a propagating ultrasonic wave. UAI triggers this effect with plane wave emissions to image current densities. An ultrasound research platform was fitted with electrodes connected to high common-mode rejection ratio amplifiers and sampled by up to 128 independent channels. The sequences developed allow for both real-time display of acoustoelectric maps and long ultrafast acquisition with fast off-line processing. The system was evaluated by injecting controlled currents into a saline pool via copper wire electrodes. Sensitivity to low current and low acoustic pressure were measured independently. Contrast and spatial resolution were measured for varying numbers of plane waves and compared to line per line acoustoelectric imaging with focused beams at equivalent peak pressure. Temporal resolution was assessed by measuring time-varying current densities associated with sinusoidal currents. Complex intensity distributions were also imaged in 3D. Electrical current densities were detected for injected currents as low as 0.56 mA. UAI outperformed conventional focused acoustoelectric imaging in terms of contrast and spatial resolution when using 3 and 13 plane waves or more, respectively. Neighboring sinusoidal currents with opposed phases were accurately imaged and separated. Time-varying currents were mapped and their frequency accurately measured for imaging frame rates up to 500 Hz. Finally, a 3D image of a complex intensity distribution was obtained. The results demonstrated the high sensitivity of the UAI system proposed. 
The plane-wave-based approach provides a highly flexible trade-off between frame rate, resolution, and contrast. In conclusion, the UAI system shows promise for non-invasive, direct, and accurate real-time imaging of electrical activation in vivo.
Texture analysis of medical images for radiotherapy applications
Rizzo, Giovanna
2017-01-01
The high-throughput extraction of quantitative information from medical images, known as radiomics, has grown in interest due to the current need to quantitatively characterize tumour heterogeneity. In this context, texture analysis, comprising a variety of mathematical techniques that describe the grey-level patterns of an image, plays an important role in assessing the spatial organization of different tissues and organs. For these reasons, the potential of texture analysis in the context of radiotherapy has been widely investigated in several studies, especially for the prediction of the treatment response of tumour and normal tissues. Nonetheless, many different factors can affect the robustness, reproducibility and reliability of textural features, thus limiting the impact of this technique. In this review, an overview of the most recent works that have applied texture analysis in the context of radiotherapy is presented, with particular focus on the assessment of tumour and tissue response to radiation. As a preliminary step, the main factors that influence feature estimation are discussed, highlighting the need for more standardized image acquisition and reconstruction protocols and more accurate methods for region-of-interest identification. Despite all these limitations, texture analysis is increasingly demonstrating its ability to improve the characterization of intratumour heterogeneity and the prediction of clinical outcome, although prospective studies and clinical trials are required to draw a more complete picture of the full potential of this technique. PMID:27885836
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shieh, Chun-Chien; Kipritidis, John; O’Brien, Ricky T.
Purpose: Respiratory signal, binning method, and reconstruction algorithm are three major controllable factors affecting image quality in thoracic 4D cone-beam CT (4D-CBCT), which is widely used in image guided radiotherapy (IGRT). Previous studies have investigated each of these factors individually, but no integrated sensitivity analysis has been performed. In addition, projection angular spacing is also a key factor in reconstruction, but how it affects image quality is not obvious. An investigation of the impacts of these four factors on image quality can help determine the most effective strategy in improving 4D-CBCT for IGRT. Methods: Fourteen 4D-CBCT patient projection datasets with various respiratory motion features were reconstructed with the following controllable factors: (i) respiratory signal (real-time position management, projection image intensity analysis, or fiducial marker tracking), (ii) binning method (phase, displacement, or equal-projection-density displacement binning), and (iii) reconstruction algorithm [Feldkamp–Davis–Kress (FDK), McKinnon–Bates (MKB), or adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS)]. The image quality was quantified using signal-to-noise ratio (SNR), contrast-to-noise ratio, and edge-response width in order to assess noise/streaking and blur. The SNR values were also analyzed with respect to the maximum, mean, and root-mean-squared-error (RMSE) projection angular spacing to investigate how projection angular spacing affects image quality. Results: The choice of respiratory signals was found to have no significant impact on image quality. Displacement-based binning was found to be less prone to motion artifacts compared to phase binning in more than half of the cases, but was shown to suffer from large interbin image quality variation and large projection angular gaps. Both MKB and ASD-POCS resulted in noticeably improved image quality almost 100% of the time relative to FDK.
In addition, SNR values were found to increase with decreasing RMSE values of projection angular gaps with strong correlations (r ≈ −0.7) regardless of the reconstruction algorithm used. Conclusions: Based on the authors’ results, displacement-based binning methods, better reconstruction algorithms, and the acquisition of even projection angular views are the most important factors to consider for improving thoracic 4D-CBCT image quality. In view of the practical issues with displacement-based binning and the fact that projection angular spacing is not currently directly controllable, development of better reconstruction algorithms represents the most effective strategy for improving image quality in thoracic 4D-CBCT for IGRT applications at the current stage.
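The SNR-versus-angular-gap analysis above can be sketched in numpy. The exact definition of the "RMSE projection angular spacing" is not spelled out in the abstract; the version below (deviation of the gaps from uniform spacing over the same span) is one plausible reading, and the function names are illustrative.

```python
import numpy as np

def rmse_angular_gap(angles_deg):
    """RMSE of projection angular gaps relative to uniform spacing over the
    same angular span (one plausible reading of the abstract's metric)."""
    a = np.sort(np.asarray(angles_deg, float))
    gaps = np.diff(a)
    ideal = (a[-1] - a[0]) / (len(a) - 1)   # gap size if perfectly uniform
    return float(np.sqrt(np.mean((gaps - ideal) ** 2)))

def pearson_r(x, y):
    """Pearson correlation, e.g. between per-bin SNR and RMSE gap values."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float((x @ y) / np.sqrt((x @ x) * (y @ y)))
```

Evaluating `pearson_r` on SNR values against the corresponding RMSE gaps is how a correlation such as the reported r ≈ −0.7 would be obtained.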
The usefulness of videomanometry for studying pediatric esophageal motor disease.
Kawahara, Hisayoshi; Kubota, Akio; Okuyama, Hiroomi; Oue, Takaharu; Tazuke, Yuko; Okada, Akira
2004-12-01
Abnormalities in esophageal motor function underlie various symptoms in the pediatric population. Manometry remains an important tool for studying esophageal motor function, whereas its analyses have been conducted with considerable subjective interpretation. The usefulness of videomanometry with topographic analysis was examined in the current study. Videomanometry was conducted in 5 patients with primary gastroesophageal reflux disease (GERD), 4 with postoperative esophageal atresia (EA), 1 with congenital esophageal stenosis (CES), and 1 with diffuse esophageal spasms (DES). Digitized videofluoroscopic images were recorded synchronously with manometric digital data in a personal computer. Manometric analysis was conducted with a view of concurrent esophageal contour and bolus transit. Primary GERD patients showed esophageal flow proceeding into the stomach during peristaltic contractions recorded manometrically, whereas patients with EA/CES frequently showed impaired esophageal transit during defective esophageal peristaltic contractions. A characteristic corkscrew appearance and esophageal flow in a to-and-fro fashion were seen with high-amplitude synchronous esophageal contractions in a DES patient. The topographic analysis showed distinctive images characteristic of each pathological condition. Videomanometry is helpful in interpreting manometric data by analyzing concurrent fluoroscopic images. Topographic analyses provide characteristic images reflecting motor abnormalities in pediatric esophageal disease.
Use of Visible Satellite Imagery to Determine Velocity in Tidal Rivers
NASA Astrophysics Data System (ADS)
Mied, R. P.; Donato, T. F.; Chen, W.
2006-05-01
In the open ocean and on the continental shelf, current velocities have traditionally been calculated remotely using the Maximum Correlation Coefficient (MCC) technique to track features between sequential sea surface temperature image scenes. These images are obtained from NOAA polar orbiters having an effective ground pixel size of 1.47 km. In contrast to this relatively large distance, spatial scales over which current velocities can vary in rivers and estuaries are hundreds of meters; associated temporal scales vary from tens of minutes to hours. Traditional in-situ measurements can be instructive in determining some aspects of the flow, but truly synoptic overviews are possible only with remote sensing, provided high-resolution imagery is available. With the advent of a constellation of moderate- to high-resolution imaging systems (e.g., Landsat, ASTER, SPOT, Quickbird, Ikonos, and Orbview-3) it is now possible to extend current velocity estimation to these areas. For instance, Landsat-7 and ASTER produce imagery with spatial resolutions on the order of 30 m or less and within 30 min of each other. This is sufficient to spatially resolve a wide variety of surface features and to maintain feature integrity over time for tracking purposes. We apply this approach to a portion of the tidal Potomac River by using pairs of co-registered, sequential, multi-spectral Landsat-7 and ASTER images. The final data set used in the analysis contains three spectral bands (green, red, and near-infrared) and has a ground sample distance (GSD) of 30 m. The time step between each Landsat-7 and ASTER pair is approximately 29 minutes. Two image sets are used in the present study, one from 5 October 2001 and the other from 2 April 2003. We show current maps derived from both image pairs and discuss the results in the light of model and
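The MCC tracking idea in this abstract reduces to a template search: for each template patch in the first scene, find the displacement into the second scene that maximises the correlation coefficient, then convert pixels to velocity using the pixel size and scene separation. A minimal numpy sketch (template size and search radius are illustrative choices, not values from the paper):

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation coefficient of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def mcc_velocity(img1, img2, row, col, tsize=8, search=5,
                 dt_s=29 * 60.0, gsd_m=30.0):
    """Displacement of the template at (row, col) that maximises the
    correlation coefficient in the second scene, converted to m/s using the
    ~29 min scene separation and 30 m pixels quoted in the abstract."""
    tpl = img1[row:row + tsize, col:col + tsize]
    best_cc, best = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            r, c = row + dy, col + dx
            if r < 0 or c < 0:
                continue
            cand = img2[r:r + tsize, c:c + tsize]
            if cand.shape != tpl.shape:
                continue
            cc = ncc(tpl, cand)
            if cc > best_cc:
                best_cc, best = cc, (dy, dx)
    return tuple(d * gsd_m / dt_s for d in best)   # (v_row, v_col) in m/s
```

Repeating this over a grid of template positions yields the kind of current map the abstract describes.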
Gap-free segmentation of vascular networks with automatic image processing pipeline.
Hsu, Chih-Yang; Ghaffari, Mahsa; Alaraj, Ali; Flannery, Michael; Zhou, Xiaohong Joe; Linninger, Andreas
2017-03-01
Current image processing techniques capture large vessels reliably but often fail to preserve connectivity in bifurcations and small vessels. Imaging artifacts and noise can create gaps and discontinuities of intensity that hinder segmentation of vascular trees. However, topological analysis of vascular trees requires proper connectivity, without gaps, loops or dangling segments. Proper tree connectivity is also important for high-quality rendering of surface meshes for scientific visualization or 3D printing. We present a fully automated vessel enhancement pipeline with automated parameter settings for enhancing tree-like structures from customary imaging sources, including 3D rotational angiography, magnetic resonance angiography, magnetic resonance venography, and computed tomography angiography. The output of the filter pipeline is a vessel-enhanced image which is ideal for generating anatomically consistent network representations of the cerebral angioarchitecture for further topological or statistical analysis. The filter pipeline combined with computational modeling can potentially improve computer-aided diagnosis of cerebrovascular diseases by delivering biometrics and anatomy of the vasculature. It may serve as the first step in fully automatic epidemiological analysis of large clinical datasets. The automatic analysis would enable rigorous statistical comparison of biometrics in subject-specific vascular trees. The robust and accurate image segmentation using a validated filter pipeline would also eliminate the operator dependency that has been observed in manual segmentation. Moreover, manual segmentation is time-prohibitive, given that vascular trees have thousands of segments and bifurcations, so that interactive segmentation consumes excessive human resources. Subject-specific trees are a first step toward patient-specific hemodynamic simulations for assessing treatment outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.
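The abstract does not detail the filter pipeline itself, but its central goal of bridging small gaps in a vessel mask is classically achieved with morphological closing (dilation followed by erosion). A numpy-only sketch of that one step, under the stated assumption that this is not the paper's actual pipeline; note that `np.roll` wraps at image edges, which is acceptable for an illustration:

```python
import numpy as np

def _shift_or(mask):
    """Binary dilation with a 3x3 square structuring element."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def _shift_and(mask):
    """Binary erosion with a 3x3 square structuring element."""
    out = mask.copy()
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def binary_close(mask, reach=1):
    """Morphological closing: dilation then erosion, which bridges gaps of
    up to about 2*reach pixels in a tentative vessel mask."""
    m = mask.copy()
    for _ in range(reach):
        m = _shift_or(m)
    for _ in range(reach):
        m = _shift_and(m)
    return m
```

A real pipeline would combine a vesselness filter (e.g. Hessian-based enhancement) with steps like this before extracting the network topology.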
Object-Based Change Detection Using High-Resolution Remotely Sensed Data and GIS
NASA Astrophysics Data System (ADS)
Sofina, N.; Ehlers, M.
2012-08-01
High-resolution remotely sensed images provide current, detailed, and accurate information for large areas of the Earth's surface which can be used for change detection analyses. Conventional methods of image processing permit detection of changes by comparing remotely sensed multitemporal images. However, for a successful analysis it is desirable to take images from the same sensor, acquired at the same time of season, at the same time of day, and, for electro-optical sensors, in cloudless conditions. Thus, a change detection analysis could be problematic, especially for sudden catastrophic events. A promising alternative is the use of vector-based maps containing information about the original urban layout, which can be related to a single image obtained after the catastrophe. The paper describes a methodology for an object-based search for destroyed buildings as a consequence of a natural or man-made catastrophe (e.g., earthquakes, flooding, civil war). The analysis is based on remotely sensed and vector GIS data. It includes three main steps: (i) generation of features describing the state of buildings; (ii) classification of building conditions; and (iii) data import into a GIS. One of the proposed features is the newly developed 'Detected Part of Contour' (DPC). Additionally, several features based on the analysis of textural information corresponding to the investigated vector objects are calculated. The method is applied to remotely sensed images of areas that have been subjected to an earthquake. The results show the high reliability of the DPC feature as an indicator for change.
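The 'Detected Part of Contour' idea can be sketched simply: compare a building's contour from the vector map against an edge map extracted from the post-event image, and measure what fraction of the contour is still detected. This is a hedged illustration of the concept; the paper's exact DPC definition and tolerance may differ.

```python
import numpy as np

def detected_part_of_contour(contour_pixels, edge_mask, tol=1):
    """Fraction of a building's vector-map contour pixels that lie within
    `tol` pixels of a detected image edge. An intact building should score
    near 1.0; a destroyed one should score much lower."""
    hits = 0
    for r, c in contour_pixels:
        r0, c0 = max(r - tol, 0), max(c - tol, 0)
        if edge_mask[r0:r + tol + 1, c0:c + tol + 1].any():
            hits += 1
    return hits / len(contour_pixels)
```

The resulting score, together with textural features, would feed the building-condition classification step.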
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
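For the planar-document case mentioned in this abstract, rectification amounts to estimating a homography from the distorted page to a frontal view. A numpy sketch of the standard direct linear transform (DLT); the paper's full method additionally estimates curved 3D shape from texture flow, which is not shown here.

```python
import numpy as np

def homography(src, dst):
    """Direct linear transform: least-squares 3x3 homography mapping the
    four (or more) src points onto dst."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)   # null-space vector, reshaped

def apply_h(h, pt):
    """Map one 2-D point through the homography (homogeneous divide)."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Warping every pixel of the camera image through the inverse of such a homography produces the frontal-flat view that OCR engines expect.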
Mining biomedical images towards valuable information retrieval in biomedical and life sciences
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the amount of heterogeneous biomedical image production and publication, which results in a need for bioimaging platforms for feature extraction and analysis of text and content in biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, used methodologies, produced results, achieved accuracies and limitations. Our comparative conclusions include current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. PMID:27538578
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gang, G; Stayman, J; Ouadah, S
2015-06-15
Purpose: This work introduces a task-driven imaging framework that utilizes a patient-specific anatomical model, a mathematical definition of the imaging task, and a model of the imaging system to prospectively design acquisition and reconstruction techniques that maximize task-based imaging performance. Utility of the framework is demonstrated in the joint optimization of tube current modulation and view-dependent reconstruction kernel in filtered-backprojection reconstruction and non-circular orbit design in model-based reconstruction. Methods: The system model is based on a cascaded systems analysis of cone-beam CT capable of predicting the spatially varying noise and resolution characteristics as a function of the anatomical model and a wide range of imaging parameters. The detectability index for a non-prewhitening observer model is used as the objective function in a task-driven optimization. The combination of tube current and reconstruction kernel modulation profiles was identified through an alternating optimization algorithm where tube current was updated analytically, followed by a gradient-based optimization of the reconstruction kernel. The non-circular orbit is first parameterized as a linear combination of basis functions, and the coefficients were then optimized using an evolutionary algorithm. The task-driven strategy was compared with conventional acquisitions without modulation, using automatic exposure control, and in a circular orbit. Results: The task-driven strategy outperformed conventional techniques in all tasks investigated, improving the detectability of a spherical lesion detection task by an average of 50% in the interior of a pelvis phantom. The non-circular orbit design successfully mitigated photon starvation effects arising from a dense embolization coil in a head phantom, improving the conspicuity of an intracranial hemorrhage proximal to the coil.
Conclusion: The task-driven imaging framework leverages knowledge of the imaging task within a patient-specific anatomical model to optimize image acquisition and reconstruction techniques, thereby improving imaging performance beyond that achievable with conventional approaches. 2R01-CA-112163; R01-EB-017226; U01-EB-018758; Siemens Healthcare (Forcheim, Germany)
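The objective function named in this abstract, the detectability index for a non-prewhitening (NPW) observer, has a standard discrete form in cascaded systems analysis, sketched below. The abstract does not give the exact expression used, so this is the textbook form, with illustrative inputs.

```python
import numpy as np

def d_prime_npw(mtf, nps, w_task, df):
    """Non-prewhitening observer detectability index on a discrete 2-D
    frequency grid with spacing df:
    d'^2 = [sum (MTF*W)^2 df^2]^2 / [sum NPS*(MTF*W)^2 df^2]."""
    s = (mtf * w_task) ** 2
    num = (s.sum() * df ** 2) ** 2
    den = (nps * s).sum() * df ** 2
    return float(np.sqrt(num / den))
```

In a task-driven optimization, acquisition parameters (e.g. a tube current profile) are varied so as to maximise this index for the specified task function.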
Cheng, Cynthia; Lee, Chadd W; Daskalakis, Constantine
2015-10-27
Capillaroscopy is a non-invasive, efficient, relatively inexpensive and easy-to-learn methodology for directly visualizing the microcirculation. The capillaroscopy technique can provide insight into a patient's microvascular health, leading to a variety of potentially valuable dermatologic, ophthalmologic, rheumatologic and cardiovascular clinical applications. In addition, tumor growth may be dependent on angiogenesis, which can be quantitated by measuring microvessel density within the tumor. However, there is currently little to no standardization of techniques, and only one publication to date reports the reliability of currently available, complex computer-based algorithms for quantitating capillaroscopy data.(1) This paper describes a new, simpler, reliable, standardized capillary counting algorithm for quantitating nailfold capillaroscopy data. A simple, reproducible computerized capillaroscopy algorithm such as this would facilitate more widespread use of the technique among researchers and clinicians. Many researchers currently analyze capillaroscopy images by hand, promoting user fatigue and subjectivity of the results. This paper describes a novel, easy-to-use automated image processing algorithm in addition to a reproducible, semi-automated counting algorithm. This algorithm enables analysis of images in minutes while reducing subjectivity; only a minimal amount of training time (in our experience, less than 1 hr) is needed to learn the technique. PMID:26554744
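The counting step of such an algorithm can be illustrated very simply: capillary loops appear as dark features along the distal capillary row, so local minima in an intensity profile taken along that row can serve as a count. This sketch is a simplified stand-in for the published semi-automated algorithm, whose details are not given in the abstract; the threshold parameter is an assumption.

```python
import numpy as np

def count_capillaries(profile, depth=0.2):
    """Count capillaries as local minima of a normalised intensity profile
    sampled along the distal capillary row, keeping only minima that dip at
    least `depth` below the bright background."""
    p = np.asarray(profile, float)
    p = (p - p.min()) / (p.max() - p.min())   # normalise to [0, 1]
    count = 0
    for i in range(1, len(p) - 1):
        if p[i] < p[i - 1] and p[i] <= p[i + 1] and p[i] < 1.0 - depth:
            count += 1
    return count
```

Dividing such a count by the imaged field width gives the capillary density figure used clinically.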
Eddy current compensation for delta relaxation enhanced MR by dynamic reference phase modulation.
Hoelscher, Uvo Christoph; Jakob, Peter M
2013-04-01
Eddy current compensation by dynamic reference phase modulation (eDREAM) is a compensation method for eddy current fields induced by B0 field-cycling which occur in delta relaxation enhanced MR (dreMR) imaging. The presented method is based on a dynamic frequency adjustment and prevents eddy current related artifacts. It is easy to implement and can be completely realized in software for any imaging sequence. In this paper, the theory of eDREAM is derived and two applications are demonstrated. The theory describes how to model the behavior of the eddy currents and how to implement the compensation. Phantom and in vivo measurements are carried out and demonstrate the benefits of eDREAM. A comparison of images acquired with and without eDREAM shows a significant improvement in dreMR image quality. Images without eDREAM suffer from severe artifacts and do not allow proper interpretation while images with eDREAM are artifact free. In vivo experiments demonstrate that dreMR imaging without eDREAM is not feasible as artifacts completely change the image contrast. eDREAM is a flexible eddy current compensation for dreMR. It is capable of completely removing the influence of eddy currents such that the dreMR images do not suffer from artifacts.
Three-dimensional Imaging and Scanning: Current and Future Applications for Pathology
Farahani, Navid; Braun, Alex; Jutt, Dylan; Huffman, Todd; Reder, Nick; Liu, Zheng; Yagi, Yukako; Pantanowitz, Liron
2017-01-01
Imaging is vital for the assessment of physiologic and phenotypic details. In the past, biomedical imaging was heavily reliant on analog, low-throughput methods, which would produce two-dimensional images. However, newer, digital, and high-throughput three-dimensional (3D) imaging methods, which rely on computer vision and computer graphics, are transforming the way biomedical professionals practice. 3D imaging has been useful in diagnostic, prognostic, and therapeutic decision-making for the medical and biomedical professions. Herein, we summarize current imaging methods that enable optimal 3D histopathologic reconstruction: Scanning, 3D scanning, and whole slide imaging. Briefly mentioned are emerging platforms, which combine robotics, sectioning, and imaging in their pursuit to digitize and automate the entire microscopy workflow. Finally, both current and emerging 3D imaging methods are discussed in relation to current and future applications within the context of pathology. PMID:28966836
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimizu, Y; Yoon, Y; Iwase, K
Purpose: We are trying to develop an image-searching technique to identify misfiled images in a picture archiving and communication system (PACS) server by using five biological fingerprints: the whole lung field, cardiac shadow, superior mediastinum, lung apex, and right lower lung. Each biological fingerprint in a chest radiograph includes distinctive anatomical structures to identify misfiled images. The whole lung field was less effective for evaluating the similarity between two images than the other biological fingerprints. This was mainly due to the variation in the positioning for chest radiographs. The purpose of this study is to develop new biological fingerprints that could reduce the influence of differences in positioning for chest radiography. Methods: Two hundred patients were selected randomly from our database (36,212 patients). These patients had two images each (current and previous images). Current images were used as the misfiled images in this study. A circumscribed rectangular area of the lung and the upper half of the rectangle were selected automatically as new biological fingerprints. These biological fingerprints were matched to all previous images in the database. The degrees of similarity between the two images were calculated for the same and different patients. The usefulness of the new biological fingerprints for automated patient recognition was examined in terms of receiver operating characteristic (ROC) analysis. Results: The areas under the ROC curve (AUCs) for the circumscribed rectangle of the lung, upper half of the rectangle, and whole lung field were 0.980, 0.994, and 0.950, respectively. The new biological fingerprints showed better performance in identifying the patients correctly than the whole lung field. Conclusion: We have developed new biological fingerprints: circumscribed rectangle of the lung and upper half of the rectangle.
These new biological fingerprints would be useful for an automated patient identification system because they are less affected by positioning differences during imaging.
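The ROC analysis used to compare the fingerprints can be sketched as follows. This is a minimal illustration with made-up similarity scores (the study's actual matching scores are not given); the AUC is computed with the rank-sum (Mann-Whitney) formulation, which equals the area under the empirical ROC curve.

```python
def auc(same_patient_scores, other_patient_scores):
    """Area under the ROC curve via the Mann-Whitney statistic: the
    probability that a genuine (same-patient) match scores higher than
    an impostor (different-patient) match; ties count one half."""
    wins = 0.0
    for s in same_patient_scores:
        for o in other_patient_scores:
            if s > o:
                wins += 1.0
            elif s == o:
                wins += 0.5
    return wins / (len(same_patient_scores) * len(other_patient_scores))

# Made-up similarity scores between a "misfiled" current image and
# stored previous images.
same = [0.95, 0.90, 0.88, 0.70]   # same patient
other = [0.60, 0.55, 0.80, 0.40]  # different patients
print(auc(same, other))  # 0.9375
```

An AUC of 1.0 would mean every same-patient pair outscores every different-patient pair; 0.5 is chance level, so reported values such as 0.994 indicate near-perfect separation.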
Performance evaluation of infrared imaging system in field test
NASA Astrophysics Data System (ADS)
Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie
2014-11-01
Infrared imaging systems have been applied widely in both military and civilian fields. Since infrared imagers come in various types with different parameters, system manufacturers and customers have a great demand for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first-generation IR imager was developed, the standard method of assessing performance has been the MRTD or related improved methods, which are not well adapted to current linear scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangular orientation discrimination (TOD) metric, which is considered an effective and emerging way to evaluate the overall performance of EO systems. To enable evaluation in field tests, an experimental instrument was developed. Given the importance of the operational environment, the field test was carried out in a practical atmospheric environment. The tested imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experimental setup, the experimental results are presented and the target range performance is analyzed and discussed. In the data analysis, the article compares the range prediction values obtained from the TOD method, the MRTD method, and practical experiments. The experimental results prove the effectiveness of this evaluation tool, which can serve as a platform providing a uniform performance prediction reference.
Machine Vision-Based Measurement Systems for Fruit and Vegetable Quality Control in Postharvest.
Blasco, José; Munera, Sandra; Aleixos, Nuria; Cubero, Sergio; Molto, Enrique
Individual items of any agricultural commodity are different from each other in terms of colour, shape or size. Furthermore, as they are living things, they change their quality attributes over time, thereby making the development of accurate automatic inspection machines a challenging task. Machine vision-based systems and new optical technologies make it feasible to create non-destructive control and monitoring tools for quality assessment to ensure adequate accomplishment of food standards. Such systems are much faster than any manual non-destructive examination of fruit and vegetable quality, thus allowing the whole production to be inspected with objective and repeatable criteria. Moreover, current technology makes it possible to inspect the fruit in spectral ranges beyond the sensitivity of the human eye, for instance in the ultraviolet and near-infrared regions. Machine vision-based applications require the use of multiple technologies and knowledge, ranging from those related to image acquisition (illumination, cameras, etc.) to the development of algorithms for spectral image analysis. Machine vision-based systems for inspecting fruit and vegetables are targeted towards different purposes, from in-line sorting into commercial categories to the detection of contaminants or the distribution of specific chemical compounds on the product's surface. This chapter summarises the current state of the art in these techniques, starting with systems based on colour images for the inspection of conventional colour, shape or external defects, and then goes on to consider recent developments in spectral image analysis for internal quality assessment or contaminant detection.
Min, Jee-Eun; Ryu, Joo-Hyung; Lee, Seok; Son, Seunghyun
2012-02-01
Suspended sediment concentration (SS) is an important indicator of marine environmental changes due to natural causes such as tides, tidal currents, and river discharges, as well as human activities such as construction in coastal regions. In the Saemangeum area on the west coast of Korea, construction of a huge tidal dyke for land reclamation has strongly influenced the coastal environment. This study used remotely sensed data to analyze the SS changes in coastal waters caused by the dyke construction. Landsat and MODIS satellite images were used for the spatial analysis of finer patterns and for the detailed temporal analysis, respectively. Forty Landsat scenes and 105 monthly composite MODIS images observed during 1985-2010 were employed, and four field campaigns (from 2005 to 2006) were performed to verify the image-derived SS. The results of the satellite data analyses showed that the seawater was clear before the dyke construction, with SS values lower than 20 g/m³. These values increased continuously as the dyke construction progressed. The maximum SS values appeared just before completion of the fourth dyke. Values decreased to below 5 g/m³ after dyke construction. These changes indicated tidal current modification. Some eddies and plumes were observed in the images generated from Landsat data. Landsat and MODIS data revealed that coastal water turbidity was greatly reduced after completion of the construction. Copyright © 2011 Elsevier Ltd. All rights reserved.
Serša, Igor; Kranjc, Matej; Miklavčič, Damijan
2015-01-01
Electroporation is gaining importance in everyday clinical practice of cancer treatment. For its success it is extremely important that the electric field coverage of the target tissue, i.e., the treated tumor, is within the specified range. Therefore, an efficient tool for monitoring the electric field in the tumor during delivery of electroporation pulses is needed. The electric field can be reconstructed from current density distribution data by the magnetic resonance electric impedance tomography method. In this study, the use of current density imaging with MRI for monitoring current density distribution during delivery of irreversible electroporation pulses was demonstrated. Using a modified single-shot RARE sequence, where four 3000 V, 100 μs long pulses were included at the start, current distribution between a pair of electrodes inserted in a liver tissue sample was imaged. Two repetitions of the sequence, with the phases of the refocusing radiofrequency pulses 90° apart, were needed to acquire one current density image. For each sample, a total of 45 current density images were acquired to follow a standard protocol for irreversible electroporation in which 90 electric pulses are delivered at 1 Hz. The acquired current density images showed that the current density in the middle of the sample increased by about 60% from the first to the last pulse, i.e., from 8 kA/m² to 13 kA/m², and that the direction of the current path did not change significantly with repeated pulses. The presented single-shot RARE-based current density imaging sequence was used successfully to image current distribution during delivery of short high-voltage electric pulses. The method has the potential to enable monitoring of tumor coverage by the electric field during irreversible electroporation tissue ablation.
Quantitative assessment of dynamic PET imaging data in cancer imaging.
Muzi, Mark; O'Sullivan, Finbarr; Mankoff, David A; Doot, Robert K; Pierce, Larry A; Kurland, Brenda F; Linden, Hannah M; Kinahan, Paul E
2012-11-01
Clinical imaging in positron emission tomography (PET) is often performed using single-time-point estimates of tracer uptake or static imaging that provides a spatial map of regional tracer concentration. However, dynamic tracer imaging can provide considerably more information about in vivo biology by delineating both the temporal and spatial pattern of tracer uptake. In addition, several potential sources of error that occur in static imaging can be mitigated. This review focuses on the application of dynamic PET imaging to measuring regional cancer biologic features and especially in using dynamic PET imaging for quantitative therapeutic response monitoring for cancer clinical trials. Dynamic PET imaging output parameters, particularly transport (flow) and overall metabolic rate, have provided imaging end points for clinical trials at single-center institutions for years. However, dynamic imaging poses many challenges for multicenter clinical trial implementations from cross-center calibration to the inadequacy of a common informatics infrastructure. Underlying principles and methodology of PET dynamic imaging are first reviewed, followed by an examination of current approaches to dynamic PET image analysis with a specific case example of dynamic fluorothymidine imaging to illustrate the approach. Copyright © 2012 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Camp, Jon J.; Holmes, David R.; Huddleston, Paul M.; Lu, Lichun; Yaszemski, Michael J.; Robb, Richard A.
2012-03-01
Failure of the spine's structural integrity from metastatic disease can lead to both pain and neurologic deficit. Fractures that require treatment occur in over 30% of bony metastases. Our objective is to use computed tomography (CT) in conjunction with analytic techniques that have been previously developed to predict fracture risk in cancer patients with metastatic disease to the spine. Current clinical practice for cancer patients with spine metastasis often requires an empirical decision regarding spinal reconstructive surgery. Early image-based software systems used for CT analysis are time-consuming and poorly suited for clinical application. The Biomedical Image Resource (BIR) at Mayo Clinic, Rochester has developed an image analysis computer program that calculates, from CT scans, the residual load-bearing capacity of a vertebra with metastatic cancer. The Spine Cancer Assessment (SCA) program is built on a platform designed for clinical practice, with a workflow format that allows for rapid selection of patient CT exams, followed by guided image analysis tasks, resulting in a fracture risk report. The analysis features allow the surgeon to quickly isolate a single vertebra and obtain an immediate pre-surgical multiple parallel section composite beam fracture risk analysis based on algorithms developed at Mayo Clinic. The analysis software is undergoing clinical validation studies. We expect this approach will facilitate patient management and the utilization of reliable guidelines for selecting among various treatment options based on fracture risk.
FDG-PET imaging in mild traumatic brain injury: a critical review
Byrnes, Kimberly R.; Wilson, Colin M.; Brabazon, Fiona; von Leden, Ramona; Jurgens, Jennifer S.; Oakes, Terrence R.; Selwyn, Reed G.
2013-01-01
Traumatic brain injury (TBI) affects an estimated 1.7 million people in the United States and is a contributing factor to one third of all injury related deaths annually. According to the CDC, approximately 75% of all reported TBIs are concussions or considered mild in form, although the number of unreported mild TBIs (mTBI) and patients not seeking medical attention is unknown. Currently, classification of mTBI or concussion is a clinical assessment since diagnostic imaging is typically inconclusive due to subtle, obscure, or absent changes in anatomical or physiological parameters measured using standard magnetic resonance (MR) or computed tomography (CT) imaging protocols. Molecular imaging techniques that examine functional processes within the brain, such as measurement of glucose uptake and metabolism using [18F]fluorodeoxyglucose and positron emission tomography (FDG-PET), have the ability to detect changes after mTBI. Recent technological improvements in the resolution of PET systems, the integration of PET with magnetic resonance imaging (MRI), and the availability of normal healthy human databases and commercial image analysis software contribute to the growing use of molecular imaging in basic science research and advances in clinical imaging. This review will discuss the technological considerations and limitations of FDG-PET, including differentiation between glucose uptake and glucose metabolism and the significance of these measurements. In addition, the current state of FDG-PET imaging in assessing mTBI in clinical and preclinical research will be considered. Finally, this review will provide insight into potential critical data elements and recommended standardization to improve the application of FDG-PET to mTBI research and clinical practice. PMID:24409143
Autonomous Image Analysis for Future Mars Missions
NASA Technical Reports Server (NTRS)
Gulick, V. C.; Morris, R. L.; Ruzon, M. A.; Bandari, E.; Roush, T. L.
1999-01-01
To explore high priority landing sites and to prepare for eventual human exploration, future Mars missions will involve rovers capable of traversing tens of kilometers. However, the current process by which scientists interact with a rover does not scale to such distances. Specifically, numerous command cycles are required to complete even simple tasks, such as, pointing the spectrometer at a variety of nearby rocks. In addition, the time required by scientists to interpret image data before new commands can be given and the limited amount of data that can be downlinked during a given command cycle constrain rover mobility and achievement of science goals. Experience with rover tests on Earth supports these concerns. As a result, traverses to science sites as identified in orbital images would require numerous science command cycles over a period of many weeks, months or even years, perhaps exceeding rover design life and other constraints. Autonomous onboard science analysis can address these problems in two ways. First, it will allow the rover to preferentially transmit "interesting" images, defined as those likely to have higher science content. Second, the rover will be able to anticipate future commands. For example, a rover might autonomously acquire and return spectra of "interesting" rocks along with a high-resolution image of those rocks in addition to returning the context images in which they were detected. Such approaches, coupled with appropriate navigational software, help to address both the data volume and command cycle bottlenecks that limit both rover mobility and science yield. We are developing fast, autonomous algorithms to enable such intelligent on-board decision making by spacecraft. Autonomous algorithms developed to date have the ability to identify rocks and layers in a scene, locate the horizon, and compress multi-spectral image data. 
We are currently investigating the possibility of reconstructing a 3D surface from a sequence of images acquired by a robotic arm camera. This would then allow the return of a single, completely in-focus image constructed only from those portions of individual images that lie within the camera's depth of field. Output from these algorithms could be used to autonomously obtain rock spectra, determine which images should be transmitted to the ground, or to aid in image compression. We will discuss these algorithms and their performance during a recent rover field test.
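The all-in-focus reconstruction described here can be sketched as a per-pixel sharpness selection across the image stack. A minimal sketch, assuming tiny grayscale images as nested lists and a discrete Laplacian as the sharpness measure; the actual onboard algorithms are not described at this level of detail.

```python
def sharpness(img, r, c):
    """Absolute discrete Laplacian as a simple local sharpness measure
    (border pixels are replicated)."""
    rows, cols = len(img), len(img[0])
    def px(i, j):
        return img[min(max(i, 0), rows - 1)][min(max(j, 0), cols - 1)]
    return abs(4 * px(r, c) - px(r - 1, c) - px(r + 1, c)
               - px(r, c - 1) - px(r, c + 1))

def focus_stack(stack):
    """Fuse grayscale frames taken at different focus settings: at each
    pixel, keep the value from the frame whose neighbourhood is sharpest."""
    rows, cols = len(stack[0]), len(stack[0][0])
    return [[max(stack, key=lambda img: sharpness(img, r, c))[r][c]
             for c in range(cols)] for r in range(rows)]

# Two toy frames, each sharp on one side; the fused image keeps the
# in-focus edge from each.
near = [[10, 0, 0], [10, 0, 0], [10, 0, 0]]
far = [[0, 0, 10], [0, 0, 10], [0, 0, 10]]
fused = focus_stack([near, far])
```

A flight implementation would of course use a smoothed sharpness map and handle registration between frames; this sketch only shows the selection principle.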
Machine learning for a Toolkit for Image Mining
NASA Technical Reports Server (NTRS)
Delanoy, Richard L.
1995-01-01
A prototype user environment is described that enables a user with very limited computer skills to collaborate with a computer algorithm to develop search tools (agents) that can be used for image analysis, creating metadata for tagging images, searching for images in an image database on the basis of image content, or as a component of computer vision algorithms. Agents are learned in an ongoing, two-way dialogue between the user and the algorithm. The user points to mistakes made in classification. The algorithm, in response, attempts to discover which image attributes are discriminating between objects of interest and clutter. It then builds a candidate agent and applies it to an input image, producing an 'interest' image highlighting features that are consistent with the set of objects and clutter indicated by the user. The dialogue repeats until the user is satisfied. The prototype environment, called the Toolkit for Image Mining (TIM), is currently capable of learning spectral and textural patterns. Learning exhibits rapid convergence to reasonable levels of performance and, when thoroughly trained, appears to be competitive in discrimination accuracy with other classification techniques.
NASA Astrophysics Data System (ADS)
Liu, Brent; Documet, Jorge; McNitt-Gray, Sarah; Requejo, Phil; McNitt-Gray, Jill
2011-03-01
Clinical decisions, both for improving motor function in patients with disabilities and for improving athletic performance, are made through clinical and movement analysis. Currently, this analysis facilitates identifying abnormalities in a patient's motor function across a large number of neuro-musculoskeletal pathologies. However, definitively identifying the underlying cause or long-term consequences of a specific abnormality in the patient's movement pattern is difficult, since this requires information from multiple sources and formats across different times and currently relies on the experience and intuition of the expert clinician. In addition, this data must be persistent for longitudinal outcomes studies. Therefore a multimedia ePR system integrating imaging informatics data could have a significant impact on decision support within this clinical workflow. We present the design and architecture of such an ePR system as well as the data types that need integration in order to develop relevant decision support tools. Specifically, we will present two data model examples: 1) a performance improvement project involving volleyball athletes and 2) wheelchair propulsion evaluation of patients with disabilities. The end result is a new frontier area of imaging informatics research within rehabilitation engineering and biomechanics.
A top-down manner-based DCNN architecture for semantic image segmentation.
Qiao, Kai; Chen, Jian; Wang, Linyuan; Zeng, Lei; Yan, Bin
2017-01-01
Given their powerful feature representation for recognition, deep convolutional neural networks (DCNNs) have been driving rapid advances in high-level computer vision tasks. However, their performance in semantic image segmentation is still not satisfactory. Based on an analysis of the visual mechanism, we conclude that DCNNs operating in a purely bottom-up manner are not enough, because the semantic image segmentation task requires not only recognition but also visual attention capability. In this study, superpixels containing visual attention information are introduced in a top-down manner, and an extensible architecture is proposed to improve the segmentation results of current DCNN-based methods. We employ the current state-of-the-art fully convolutional network (FCN) and FCN with conditional random field (DeepLab-CRF) as baselines to validate our architecture. Experimental results on the PASCAL VOC segmentation task qualitatively show that coarse edges and erroneous segmentation results are well improved. We also quantitatively obtain about 2%-3% intersection over union (IOU) accuracy improvement on the PASCAL VOC 2011 and 2012 test sets.
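The top-down use of superpixels can be illustrated with a simplified stand-in for the proposed architecture: reassigning each pixel the majority DCNN label within its superpixel, so that labels respect superpixel boundaries. The label and superpixel maps below are made up for illustration and are not from the paper.

```python
from collections import Counter

def refine_with_superpixels(labels, superpixels):
    """Reassign every pixel the majority DCNN class of its superpixel.
    Both arguments are same-shape 2D lists: per-pixel class ids and
    superpixel membership ids."""
    votes = {}
    for label_row, sp_row in zip(labels, superpixels):
        for label, sp in zip(label_row, sp_row):
            votes.setdefault(sp, Counter())[label] += 1
    majority = {sp: counts.most_common(1)[0][0] for sp, counts in votes.items()}
    return [[majority[sp] for sp in sp_row] for sp_row in superpixels]

# Made-up 3x3 label map with a stray pixel (class 2) that the
# superpixel vote smooths away.
labels = [[1, 1, 0], [1, 1, 2], [0, 0, 0]]
superpixels = [[0, 0, 0], [0, 0, 1], [1, 1, 1]]
refined = refine_with_superpixels(labels, superpixels)  # [[1, 1, 1], [1, 1, 0], [0, 0, 0]]
```

Because superpixels follow image edges, this kind of vote sharpens the coarse object boundaries that per-pixel DCNN output tends to produce.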
Mahapatra, Dwarikanath; Schueffler, Peter; Tielbeek, Jeroen A W; Buhmann, Joachim M; Vos, Franciscus M
2013-10-01
Increasing incidence of Crohn's disease (CD) in the Western world has made its accurate diagnosis an important medical challenge. The current reference standard for diagnosis, colonoscopy, is time-consuming and invasive while magnetic resonance imaging (MRI) has emerged as the preferred noninvasive procedure over colonoscopy. Current MRI approaches assess rate of contrast enhancement and bowel wall thickness, and rely on extensive manual segmentation for accurate analysis. We propose a supervised learning method for the identification and localization of regions in abdominal magnetic resonance images that have been affected by CD. Low-level features like intensity and texture are used with shape asymmetry information to distinguish between diseased and normal regions. Particular emphasis is laid on a novel entropy-based shape asymmetry method and higher-order statistics like skewness and kurtosis. Multi-scale feature extraction renders the method robust. Experiments on real patient data show that our features achieve a high level of accuracy and perform better than two competing methods.
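The higher-order statistics mentioned above (skewness and kurtosis) are computed from standardized central moments. A minimal sketch over a flat list of intensity values; the paper's multi-scale extraction, shape-asymmetry features, and classifier are not reproduced here.

```python
import math

def skewness_kurtosis(values):
    """Skewness and excess kurtosis from the standardized third and
    fourth central moments (population normalization, n in the
    denominator)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    skew = sum(((v - mean) / sd) ** 3 for v in values) / n
    kurt = sum(((v - mean) / sd) ** 4 for v in values) / n - 3.0
    return skew, kurt

# A symmetric intensity sample has zero skewness; its flat shape gives
# negative excess kurtosis.
skew, kurt = skewness_kurtosis([1, 2, 3, 4, 5])
```

In a feature-based pipeline such as this paper's, these two numbers would be appended to the intensity and texture descriptors of each candidate region before classification.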
Lower-Dark-Current, Higher-Blue-Response CMOS Imagers
NASA Technical Reports Server (NTRS)
Pain, Bedabrata; Cunningham, Thomas; Hancock, Bruce
2008-01-01
Several improved designs for complementary metal oxide/semiconductor (CMOS) integrated-circuit image detectors have been developed, primarily to reduce dark currents (leakage currents) and secondarily to increase responses to blue light and increase signal-handling capacities, relative to those of prior CMOS imagers. The main conclusion that can be drawn from a study of the causes of dark currents in prior CMOS imagers is that dark currents could be reduced by relocating p/n junctions away from Si/SiO2 interfaces. In addition to reflecting this conclusion, the improved designs include several other features to counteract dark-current mechanisms and enhance performance.
NASA Astrophysics Data System (ADS)
Ravkin, Ilya; Temov, Vladimir
1998-04-01
The detection and genetic analysis of fetal cells in maternal blood will permit noninvasive prenatal screening for genetic defects. Applied Imaging has developed and is currently evaluating a system for semiautomatic detection of fetal nucleated red blood cells on slides and acquisition of their DNA probe FISH images. The specimens are blood smears from pregnant women (9 - 16 weeks gestation) enriched for nucleated red blood cells (NRBC). The cells are identified by using labeled monoclonal antibodies directed to different types of hemoglobin chains (gamma, epsilon); the nuclei are stained with DAPI. The Applied Imaging system has been implemented with both Olympus BX and Nikon Eclipse series microscopes which were equipped with transmission and fluorescence optics. The system includes the following motorized components: stage, focus, transmission, and fluorescence filter wheels. A video camera with light integration (COHU 4910) permits low light imaging. The software capabilities include scanning, relocation, autofocusing, feature extraction, facilities for operator review, and data analysis. Detection of fetal NRBCs is achieved by employing a combination of brightfield and fluorescence images of nuclear and cytoplasmic markers. The brightfield and fluorescence images are all obtained with a single multi-bandpass dichroic mirror. A Z-stack of DNA probe FISH images is acquired by moving focus and switching excitation filters. This stack is combined to produce an enhanced image for presentation and spot counting.
Backhausen, Lea L.; Herting, Megan M.; Buse, Judith; Roessner, Veit; Smolka, Michael N.; Vetter, Nora C.
2016-01-01
In structural magnetic resonance imaging motion artifacts are common, especially when not scanning healthy young adults. It has been shown that motion affects the analysis with automated image-processing techniques (e.g., FreeSurfer). This can bias results. Several developmental and adult studies have found reduced volume and thickness of gray matter due to motion artifacts. Thus, quality control is necessary in order to ensure an acceptable level of quality and to define exclusion criteria of images (i.e., determine participants with most severe artifacts). However, information about the quality control workflow and image exclusion procedure is largely lacking in the current literature and the existing rating systems differ. Here, we propose a stringent workflow of quality control steps during and after acquisition of T1-weighted images, which enables researchers dealing with populations that are typically affected by motion artifacts to enhance data quality and maximize sample sizes. As an underlying aim we established a thorough quality control rating system for T1-weighted images and applied it to the analysis of developmental clinical data using the automated processing pipeline FreeSurfer. This hands-on workflow and quality control rating system will aid researchers in minimizing motion artifacts in the final data set, and therefore enhance the quality of structural magnetic resonance imaging studies. PMID:27999528
Current trends in gamma radiation detection for radiological emergency response
NASA Astrophysics Data System (ADS)
Mukhopadhyay, Sanjoy; Guss, Paul; Maurer, Richard
2011-09-01
Passive and active detection of gamma rays from shielded radioactive materials, including special nuclear materials, is an important task for any radiological emergency response organization. This article reports on the current trends and status of gamma radiation detection objectives and measurement techniques as applied to nonproliferation and radiological emergencies. In recent years, since the establishment of the Domestic Nuclear Detection Office by the Department of Homeland Security, a tremendous amount of progress has been made in detection materials (scintillators, semiconductors), imaging techniques (Compton imaging, use of active masking and hybrid imaging), data acquisition systems with digital signal processing, field programmable gate arrays and embedded isotopic analysis software (viz. gamma detector response and analysis software [GADRAS]1), fast template matching, and data fusion (merging radiological data with geo-referenced maps and digital imagery to provide better situational awareness). Alongside this progress, a significant amount of interdisciplinary research and development has taken place, drawing on techniques and spin-offs from medical science (such as x-ray radiography and tomography) and materials engineering (systematic planned studies on scintillators to optimize the qualities of a good scintillator, nanoparticle applications, quantum dots, and photonic crystals, to name a few). No trend analysis of radiation detection systems would be complete without mentioning the unprecedented strategic position taken by the National Nuclear Security Administration (NNSA) to deter, detect, and interdict illicit trafficking in nuclear and other radioactive materials across international borders and through global maritime transportation, the so-called second line of defense.
Trends in Library and Information Science: 1989. ERIC Digest.
ERIC Educational Resources Information Center
Eisenberg, Michael B.
Based on a content analysis of professional journals, conference proceedings, ERIC documents, annuals, and dissertations in library and information science, the following current trends in the field are discussed: (1) there are important emerging roles and responsibilities for information professionals; (2) the status and image of librarians…
Dance Technology. Current Applications and Future Trends.
ERIC Educational Resources Information Center
Gray, Judith A., Ed.
Original research is reported on image digitizing, robot choreography, movement analysis, databases for dance, computerized dance notation, and computerized lightboards for dance performance. Articles in this publication are as follows: (1) "The Evolution of Dance Technology" (Judith A. Gray); (2) "Toward a Language for Human Movement" (Thomas W.…
Balkan Identity: Changing Self-Images of the South Slavs
ERIC Educational Resources Information Center
Saric, Ljiljana
2004-01-01
This paper provides an analysis of texts containing the noun "the Balkans" and the adjective "Balkan" in a small corpus of approximately 80 journalistic texts from different south Slavic regions currently available online. The texts were published over the last 10 years. The term "the Balkans" and its derivations…
MO-C-BRCD-03: The Role of Informatics in Medical Physics and Vice Versa.
Andriole, K
2012-06-01
Like Medical Physics, Imaging Informatics encompasses concepts touching every aspect of the imaging chain from image creation, acquisition, management and archival, to image processing, analysis, display and interpretation. The two disciplines are in fact quite complementary, with similar goals to improve the quality of care provided to patients using an evidence-based approach, to assure safety in the clinical and research environments, to facilitate efficiency in the workplace, and to accelerate knowledge discovery. Use-cases describing several areas of informatics activity will be given to illustrate current limitations that would benefit from medical physicist participation, and conversely areas in which informaticists may contribute to the solution. Topics to be discussed include radiation dose monitoring, process management and quality control, display technologies, business analytics techniques, and quantitative imaging. Quantitative imaging is increasingly becoming an essential part of biomedical research as well as being incorporated into clinical diagnostic activities. Referring clinicians are asking for more objective information to be gleaned from the imaging tests that they order so that they may make the best clinical management decisions for their patients. Medical Physicists may be called upon to identify existing issues as well as develop, validate and implement new approaches and technologies to help move the field further toward quantitative imaging methods for the future. Biomedical imaging informatics tools and techniques such as standards, integration, data mining, cloud computing and new systems architectures, ontologies and lexicons, data visualization and navigation tools, and business analytics applications can be used to overcome some of the existing limitations. Learning objectives: 1. Describe what is meant by Medical Imaging Informatics and understand why the medical physicist should care. 2. Identify existing limitations in information technologies with respect to Medical Physics, and conversely see how Informatics may assist the medical physicist in filling some of the current gaps in their activities. 3. Understand general informatics concepts and areas of investigation including imaging and workflow standards, systems integration, computing architectures, ontologies, data mining and business analytics, data visualization and human-computer interface tools, and the importance of quantitative imaging for the future of Medical Physics and Imaging Informatics. 4. Become familiar with on-going efforts to address current challenges facing future research into and clinical implementation of quantitative imaging applications. © 2012 American Association of Physicists in Medicine.
Husarik, Daniela B; Marin, Daniele; Samei, Ehsan; Richard, Samuel; Chen, Baiyu; Jaffe, Tracy A; Bashir, Mustafa R; Nelson, Rendon C
2012-08-01
The aim of this study was to compare the image quality of abdominal computed tomography scans in an anthropomorphic phantom acquired at different radiation dose levels where each raw data set is reconstructed with both a standard convolution filtered back projection (FBP) and a full model-based iterative reconstruction (MBIR) algorithm. An anthropomorphic phantom in 3 sizes was used with a custom-built liver insert simulating late hepatic arterial enhancement and containing hypervascular liver lesions of various sizes. Imaging was performed on a 64-section multidetector-row computed tomography scanner (Discovery CT750 HD; GE Healthcare, Waukesha, WI) at 3 different tube voltages for each patient size and 5 incrementally decreasing tube current-time products for each tube voltage. Quantitative analysis consisted of contrast-to-noise ratio calculations and image noise assessment. Qualitative image analysis was performed by 3 independent radiologists rating subjective image quality and lesion conspicuity. Contrast-to-noise ratio was significantly higher and mean image noise was significantly lower on MBIR images than on FBP images in all patient sizes, at all tube voltage settings, and all radiation dose levels (P < 0.05). Overall image quality and lesion conspicuity were rated higher for MBIR images compared with FBP images at all radiation dose levels. Image quality and lesion conspicuity on 25% to 50% dose MBIR images were rated equal to full-dose FBP images. This phantom study suggests that depending on patient size, clinically acceptable image quality of the liver in the late hepatic arterial phase can be achieved with MBIR at approximately 50% lower radiation dose compared with FBP.
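The quantitative comparison in the study rests on the contrast-to-noise ratio. As an illustrative sketch only (not the authors' code, and with hypothetical HU values), CNR is conventionally computed from ROI means and the background noise standard deviation:

```python
import statistics

def contrast_to_noise_ratio(lesion_roi, background_roi):
    """Conventional CNR: (mean lesion - mean background) / background noise.

    `lesion_roi` and `background_roi` are flat lists of pixel values (e.g. HU);
    the study's exact ROI placement and noise definition may differ.
    """
    mean_lesion = statistics.fmean(lesion_roi)
    mean_background = statistics.fmean(background_roi)
    noise = statistics.stdev(background_roi)  # sample std of background pixels
    return (mean_lesion - mean_background) / noise

# Hypothetical hypervascular liver lesion ROI vs. parenchyma background.
lesion = [120, 125, 118, 122, 121]
background = [80, 82, 78, 81, 79]
print(round(contrast_to_noise_ratio(lesion, background), 2))
```

Lower image noise (as reported for MBIR) enters the denominator directly, which is why CNR rises even when lesion contrast is unchanged.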
Chen, Xiaoxia; Zhao, Jing; Chen, Tianshu; Gao, Tao; Zhu, Xiaoli; Li, Genxi
2018-01-01
Comprehensive analysis of the expression level and location of tumor-associated membrane proteins (TMPs) is of vital importance for the profiling of tumor cells. Currently, two kinds of independent techniques, i.e. ex situ detection and in situ imaging, are usually required for the quantification and localization of TMPs respectively, resulting in some inevitable problems. Methods: Herein, based on a well-designed and fluorophore-labeled DNAzyme, we develop an integrated and facile method, in which imaging and quantification of TMPs in situ are achieved simultaneously in a single system. The labeled DNAzyme not only produces localized fluorescence for the visualization of TMPs but also catalyzes the cleavage of a substrate to produce quantitative fluorescent signals that can be collected from solution for the sensitive detection of TMPs. Results: Results from the DNAzyme-based in situ imaging and quantification of TMPs match well with traditional immunofluorescence and western blotting. In addition to this two-in-one advantage, the DNAzyme-based method is highly sensitive, allowing the detection of TMPs in only 100 cells. Moreover, the method is nondestructive: cells retain their physiological activity after analysis and can be cultured for other applications. Conclusion: The integrated system provides solid results for both imaging and quantification of TMPs, making it a competitive method over some traditional techniques for the analysis of TMPs and offering potential application as a toolbox in the future.
Ohno, Yoshiharu; Koyama, Hisanobu; Kono, Astushi; Terada, Mari; Inokawa, Hiroyasu; Matsumoto, Sumiaki; Sugimura, Kazuro
2007-12-01
The purpose of the present study was to determine the influence of detector collimation and beam pitch on the identification and image quality of ground-glass attenuation (GGA) and nodules on 16- and 64-detector row CTs, by using a commercially available chest phantom. A chest CT phantom including simulated GGAs and nodules was scanned with different detector collimations, beam pitches and tube currents. The probability and image quality of each simulated abnormality was visually assessed with a five-point scoring system. ROC analysis and ANOVA were then performed to compare the identification and image quality of either protocol with standard values. Detection rates of low-dose CTs were significantly reduced when tube currents were set at 40mA or less by using detector collimation 16 and 64x0.5mm and 16 and 32x1.0mm for low pitch, and at 100mA or less by using detector collimation 16 and 64x0.5mm and 16 and 32x1.0mm for high pitch (p<0.05). Image qualities of low-dose CTs deteriorated significantly when tube current was set at 100mA or less by using detector collimation 16 and 64x0.5mm and 16 and 32x1.0mm for low pitch, and at 150mA or less by using detector collimation 16 and 64x0.5mm and 16 and 32x1.0mm for high pitch (p<0.05). Detector collimation and beam pitch were important factors for the image quality and identification of GGA and nodules by 16- and 64-detector row CT.
Advanced Test Reactor National Scientific User Facility (ATR NSUF) Monthly Report December 2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renae Soelberg
2014-12-01
• PNNL has completed sectioning of the U.C. Berkeley hydride fuel rodlet 1 (highest burn-up) and is currently polishing samples in preparation for optical metallography. • A disk was successfully sectioned from rodlet 1 at the location of the internal thermocouple tip as desired. The transition from annular pellet to solid pellet is verified by the eutectic-filled inner cavity located on the back face of this disk (top left) and the solid front face (bottom left). Preliminary low-resolution images indicate interesting sample characteristics in the eutectic surrounding the rodlet at the location of the outer thermocouple tip (right). This sample has been potted and is currently being polished for high-resolution optical microscopy and subsequent SEM analysis. (See images.)
[Bioimpedance means of skin condition monitoring during therapeutic and cosmetic procedures].
Alekseenko, V A; Kus'min, A A; Filist, S A
2008-01-01
Engineering and technological problems of bioimpedance skin surface mapping are considered. A typical design of a device based on a PIC 16F microcontroller is suggested. It includes a keyboard, LCD indicator, probing current generator with programmed frequency tuning, and units for probing current monitoring and bioimpedance measurement. The electrode matrix of the device is constructed using nanotechnology. A microcontroller-controlled multiplexor provides scanning of interelectrode impedance, which makes it possible to obtain the impedance image of the skin surface under the electrode matrix. The microcontroller controls the probing signal generator frequency and allows layer-by-layer images of skin under the electrode matrix to be obtained. This makes it possible to use reconstruction tomography methods for analysis and monitoring of the skin condition during therapeutic and cosmetic procedures.
Intra-coil interactions in split gradient coils in a hybrid MRI-LINAC system
NASA Astrophysics Data System (ADS)
Tang, Fangfang; Freschi, Fabio; Sanchez Lopez, Hector; Repetto, Maurizio; Liu, Feng; Crozier, Stuart
2016-04-01
An MRI-LINAC system combines a magnetic resonance imaging (MRI) system with a medical linear accelerator (LINAC) to provide image-guided radiotherapy for targeting tumors in real-time. In an MRI-LINAC system, a set of split gradient coils is employed to produce orthogonal gradient fields for spatial signal encoding. Owing to this unconventional gradient configuration, eddy currents induced by switching gradient coils on and off may be of particular concern. It is expected that strong intra-coil interactions in the set will be present due to the constrained return paths, leading to potential degradation of the gradient field linearity and image distortion. In this study, a series of gradient coils with different track widths have been designed and analyzed to investigate the electromagnetic interactions between coils in a split gradient set. A driving current, with frequencies from 100 Hz to 10 kHz, was applied to study the inductive coupling effects with respect to conductor geometry and operating frequency. It was found that the eddy currents induced in the un-energized coils (hereafter referred to as passive coils) positively correlated with track width and frequency. The magnetic field induced by the eddy currents in the passive coils with wide tracks was several times larger than that induced by eddy currents in the cold shield of the cryostat. The power loss in the passive coils increased with the track width. Therefore, intra-coil interactions should be included in the coil design and analysis process.
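The inductive coupling described above can be caricatured with a simple transformer model: the drive coil induces an EMF of magnitude ωMI_d in each passive coil, which drives an eddy current limited by the loop's resistance and self-inductance. The sketch below is purely illustrative, with made-up inductance and resistance values; wider tracks are represented only by a lower loop resistance:

```python
import math

def passive_coil_current(freq_hz, drive_amp, mutual_h, resistance, inductance):
    """Magnitude of the eddy current induced in an un-energized (passive) coil,
    modeled as a series R-L loop driven through mutual inductance M:
        |I_p| = omega * M * I_d / sqrt(R^2 + (omega * L)^2)
    Illustrative lumped-element model only, not the paper's field analysis."""
    omega = 2 * math.pi * freq_hz
    return omega * mutual_h * drive_amp / math.hypot(resistance, omega * inductance)

# Hypothetical parameters: wider tracks -> lower resistance -> larger eddy current,
# and the induced current also grows with frequency over 100 Hz - 10 kHz.
narrow_track = [passive_coil_current(f, 100, 1e-6, 0.05, 1e-4) for f in (100, 1e3, 1e4)]
wide_track = [passive_coil_current(f, 100, 1e-6, 0.01, 1e-4) for f in (100, 1e3, 1e4)]
print([round(i, 3) for i in narrow_track])
print([round(i, 3) for i in wide_track])
```

Both trends reported in the abstract (growth with frequency and with track width) fall out of this toy model.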
A high resolution Passive Flux Meter approach based on colorimetric responses
NASA Astrophysics Data System (ADS)
Chardi, K.; Dombrowski, K.; Cho, J.; Hatfield, K.; Newman, M.; Annable, M. D.
2016-12-01
Subsurface water and contaminant mass flux measurements are critical in determining risk, optimizing remediation strategies, and monitoring contaminant attenuation. The standard Passive Flux Meter, hereafter known as the PFM, is a well-developed device used for determining and monitoring rates of groundwater and contaminant mass flux in screened wells. The current PFM is a permeable device containing granular activated carbon impregnated with alcohol tracers, which is deployed in a flow field for a designated period of time. Once extracted, sampling requires laboratory analysis to quantify Darcy flux, which can be time-consuming and costly. To expedite test results, a modified PFM based on the image analysis of colorimetric responses, herein referred to as a colorimetric Passive Flux Meter (cPFM), was developed. Various dyes and sorbents were selected and evaluated to determine colorimetric response to water flow. Rhodamine, fluorescent yellow, fluorescent orange, and turmeric were the dye candidates while 100% wool and a 35% wool blend with 65% rayon were the sorbent candidates selected for use in the cPFM. Ultraviolet light image analysis was used to calculate average color intensity using ImageJ, a Java-based image processing program. These results were then used to quantify Darcy flux. Error ranges evaluated for Darcy flux using the cPFM are comparable to those with the standard, activated carbon based, PFM. The cPFM has the potential to accomplish the goal of obtaining high resolution Darcy flux data while eliminating high costs and analysis time. Implications of groundwater characteristics, such as pH and contaminant concentrations, on image analysis are to be tested through laboratory analysis followed by field testing of the cPFM.
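The image-analysis step (average color intensity, as ImageJ's Measure would report, then conversion to Darcy flux) could be sketched as follows. The linear calibration coefficients here are hypothetical; a real cPFM calibration curve depends on the dye, sorbent, and deployment time:

```python
def mean_intensity(image):
    """Average pixel intensity over a region of interest.
    `image` is a list of rows of grayscale values (0-255), standing in for
    the ImageJ measurement of a UV-light photograph of the cPFM sorbent."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def darcy_flux_from_intensity(intensity, slope=-0.004, intercept=1.0):
    """Map color loss to Darcy flux via an assumed linear calibration:
    more water flow elutes more dye, lowering intensity. The coefficients
    are hypothetical placeholders, not the study's calibration."""
    return max(0.0, intercept + slope * intensity)

# Toy UV-image region of the dyed sorbent after deployment.
uv_region = [[200, 198, 205], [199, 201, 202]]
print(round(mean_intensity(uv_region), 1))
```

The real workflow would batch this over many well-depth intervals to produce the high-resolution flux profile the abstract describes.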
Integrating DICOM structure reporting (SR) into the medical imaging informatics data grid
NASA Astrophysics Data System (ADS)
Lee, Jasper; Le, Anh; Liu, Brent
2008-03-01
The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis tools (CAD) that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are being developed in the radiology field, the generated DICOM Structure Reports (SR) holding key radiological findings and measurements that are not part of the DICOM image need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance and method involved in adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is a MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file into the existing MI2 Data Grid will be shown.
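Extracting findings from a DICOM-SR amounts to walking its nested content tree and collecting the numeric measurement items for grid indexing. The sketch below uses plain dictionaries as stand-ins for parsed SR content items; the key names are illustrative, not DICOM attribute names, and a production implementation would operate on real DICOM datasets via a DICOM toolkit:

```python
def extract_measurements(content_item, found=None):
    """Recursively collect (concept name, value, unit) triples from an
    SR-like content tree. Containers are traversed; NUM items are recorded."""
    if found is None:
        found = []
    if content_item.get("value_type") == "NUM":
        found.append((content_item["concept"], content_item["value"], content_item["unit"]))
    for child in content_item.get("children", []):
        extract_measurements(child, found)
    return found

# Toy CAD report: a root container holding one finding with one measurement.
report = {
    "value_type": "CONTAINER", "concept": "CAD Report",
    "children": [
        {"value_type": "CONTAINER", "concept": "Finding",
         "children": [
             {"value_type": "NUM", "concept": "Nodule diameter",
              "value": 6.2, "unit": "mm", "children": []},
         ]},
    ],
}
print(extract_measurements(report))
```

The extracted triples are what a data-grid service could index and categorize alongside the corresponding imaging study.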
Speckle imaging with the MAMA detector: Preliminary results
NASA Technical Reports Server (NTRS)
Horch, E.; Heanue, J. F.; Morgan, J. S.; Timothy, J. G.
1994-01-01
We report on the first successful speckle imaging studies using the Stanford University speckle interferometry system, an instrument that uses a multianode microchannel array (MAMA) detector as the imaging device. The method of producing high-resolution images is based on the analysis of so-called 'near-axis' bispectral subplanes and follows the work of Lohmann et al. (1983). In order to improve the signal-to-noise ratio in the bispectrum, the frame-oversampling technique of Nakajima et al. (1989) is also employed. We present speckle imaging results of binary stars and other objects from V magnitude 5.5 to 11, and the quality of these images is studied. While the Stanford system is capable of good speckle imaging results, it is limited by the overall quantum efficiency of the current MAMA detector (which is due to the response of the photocathode at visible wavelengths and other detector properties) and by channel saturation of the microchannel plate. Both affect the signal-to-noise ratio of the power spectrum and bispectrum.
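The bispectral analysis referred to above combines Fourier components as B(u,v) = F(u)F(v)F*(u+v), which preserves the phase information that the power spectrum discards. Below is a minimal single-frame sketch with a naive DFT; real speckle pipelines average the bispectrum over many short-exposure frames and, as in the near-axis approach, use only subplanes with small u or v where the signal-to-noise ratio is best:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * u * k / n) for k in range(n))
            for u in range(n)]

def bispectrum(x):
    """Single-frame bispectrum B(u, v) = F(u) * F(v) * conj(F(u+v)).
    Its phase is invariant to image shifts, which is what lets speckle
    imaging recover object phase despite atmospheric tilt."""
    F = dft(x)
    n = len(F)
    return [[F[u] * F[v] * F[(u + v) % n].conjugate() for v in range(n)]
            for u in range(n)]

frame = [0.0, 1.0, 0.0, 0.0]  # toy 1-D "speckle frame": a shifted point source
B = bispectrum(frame)
print(abs(B[1][1]))
```

Shifting the toy point source leaves B unchanged, illustrating the shift invariance that makes bispectral averaging robust.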
Development of an Improved Magneto-Optic/Eddy-Current Imager
DOT National Transportation Integrated Search
1997-04-01
Magneto-optic/eddy-current imaging technology has been developed and approved for inspection of cracks in aging aircraft. This relatively new nondestructive test method gives the inspector the ability to quickly generate real-time eddy-current images...
DrishtiCare: a telescreening platform for diabetic retinopathy powered with fundus image analysis.
Joshi, Gopal Datt; Sivaswamy, Jayanthi
2011-01-01
Diabetic retinopathy is the leading cause of blindness in urban populations. Early diagnosis through regular screening and timely treatment has been shown to prevent visual loss and blindness. It is very difficult to cater to this vast set of diabetes patients, primarily because of high costs in reaching out to patients and a scarcity of skilled personnel. Telescreening offers a cost-effective solution to reach out to patients but is still inadequate due to an insufficient number of experts who serve the diabetes population. Developments toward fundus image analysis have shown promise in addressing the scarcity of skilled personnel for large-scale screening. This article aims at addressing the underlying issues in traditional telescreening to develop a solution that leverages the developments carried out in fundus image analysis. We propose a novel Web-based telescreening solution (called DrishtiCare) integrating various value-added fundus image analysis components. A Web-based platform on the software as a service (SaaS) delivery model is chosen to make the service cost-effective, easy to use, and scalable. A server-based prescreening system is employed to scrutinize the fundus images of patients and to refer them to the experts. An automatic quality assessment module ensures transfer of fundus images that meet grading standards. An easy-to-use interface, enabled with new visualization features, is designed for case examination by experts. Three local primary eye hospitals have participated and used DrishtiCare's telescreening service. A preliminary evaluation of the proposed platform was performed on a set of 119 patients, of whom 23% were identified with sight-threatening retinopathy. Currently, evaluation at a larger scale is underway, and a total of 450 patients have been enrolled.
The proposed approach provides an innovative way of integrating automated fundus image analysis in the telescreening framework to address well-known challenges in large-scale disease screening. It offers a low-cost, effective, and easily adoptable screening solution to primary care providers. © 2010 Diabetes Technology Society.
The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH).
García-Rojo, Marcial; Gonçalves, Luís; Blobel, Bernd
2012-01-01
The COST Action IC0604 "Telepathology Network in Europe" (EURO-TELEPATH) is a European COST Action that ran from 2007 to 2011. COST Actions are funded by the COST (European Cooperation in the field of Scientific and Technical Research) Agency, supported by the Seventh Framework Programme for Research and Technological Development (FP7), of the European Union. EURO-TELEPATH's main objectives were evaluating and validating the common technological framework and communication standards required to access, transmit and manage digital medical records by pathologists and other medical professionals in a networked environment. The project was organized into four working groups. Working Group 1 "Business modeling in pathology" has designed main pathology processes - Frozen Study, Formalin Fixed Specimen Study, Telepathology, Cytology, and Autopsy - using Business Process Modeling Notation (BPMN). Working Group 2 "Informatics standards in pathology" has been dedicated to promoting the development and application of informatics standards in pathology, collaborating with Integrating the Healthcare Enterprise (IHE), Digital Imaging and Communications in Medicine (DICOM), Health Level Seven (HL7), and other standardization bodies. Working Group 3 "Images: Analysis, Processing, Retrieval and Management" worked on the use of virtual or digital slides that are fostering the use of image processing and analysis in pathology not only for research purposes, but also in daily practice. Working Group 4 "Technology and Automation in Pathology" was focused on studying the adequacy of current existing technical solutions, including, e.g., the quality of images obtained by slide scanners, or the efficiency of image analysis applications. 
Major outcomes of this action are the collaboration with international health informatics standardization bodies to foster the development of standards for digital pathology, and a new approach for workflow analysis based on business process modeling. Health terminology standardization research has become a topic of high interest. Future research work should focus on standardization of automatic image analysis and tissue microarray imaging.
Velmurugan, Jeyavel; Kalinin, Sergei V.; Kolmakov, Andrei; ...
2016-02-11
Here, noninvasive in situ nanoscale imaging in liquid environments is a current imperative in the analysis of delicate biomedical objects and electrochemical processes at reactive liquid–solid interfaces. Microwaves of a few gigahertz frequencies offer photons with energies of ≈10 μeV, which can affect neither electronic states nor chemical bonds in condensed matter. Here, we describe an implementation of scanning near-field microwave microscopy for imaging in liquids using ultrathin molecularly impermeable membranes separating scanning probes from samples enclosed in environmental cells. We imaged a model electroplating reaction as well as individual live cells. Through a side-by-side comparison of the microwave imaging with scanning electron microscopy, we demonstrate the advantage of microwaves for artifact-free imaging.
Signal intensity analysis and optimization for in vivo imaging of Cherenkov and excited luminescence
NASA Astrophysics Data System (ADS)
LaRochelle, Ethan P. M.; Shell, Jennifer R.; Gunn, Jason R.; Davis, Scott C.; Pogue, Brian W.
2018-04-01
During external beam radiotherapy (EBRT), in vivo Cherenkov optical emissions can be used as a dosimetry tool or to excite luminescence, termed Cherenkov-excited luminescence (CEL) with microsecond-level time-gated cameras. The goal of this work was to develop a complete theoretical foundation for the detectable signal strength, in order to provide guidance on optimization of the limits of detection and how to optimize near real time imaging. The key parameters affecting photon production, propagation and detection were considered and experimental validation with both tissue phantoms and a murine model are shown. Both the theoretical analysis and experimental data indicate that the detection level is near a single photon-per-pixel for the detection geometry and frame rates commonly used, with the strongest factor being the signal decrease with the square of distance from tissue to camera. Experimental data demonstrates how the SNR improves with increasing integration time, but only up to the point where the dominance of camera read noise is overcome by stray photon noise that cannot be suppressed. For the current camera in a fixed geometry, the signal to background ratio limits the detection of light signals, and the observed in vivo Cherenkov emission is on the order of 100× stronger than CEL signals. As a result, imaging signals from depths <15 mm is reasonable for Cherenkov light, and depths <3 mm is reasonable for CEL imaging. The current investigation modeled Cherenkov and CEL imaging of two oxygen sensing phosphorescent compounds, but the modularity of the code allows for easy comparison of different agents or alternative cameras, geometries or tissues.
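The read-noise/stray-photon trade-off described above follows from a standard camera noise model, SNR(t) = St / sqrt(St + Bt + σ_read²): while read noise dominates, doubling the integration time nearly doubles the SNR, but once stray-photon shot noise takes over, the gain flattens toward √2. The numbers below are illustrative, not the study's camera parameters:

```python
import math

def snr(t, signal_rate, stray_rate, read_noise):
    """Time-gated camera SNR under a shot + read noise model:
    SNR(t) = S*t / sqrt(S*t + B*t + sigma_read^2), with S and B in
    photons per unit integration time. Illustrative parameters only."""
    signal, stray = signal_rate * t, stray_rate * t
    return signal / math.sqrt(signal + stray + read_noise ** 2)

# The SNR gain from doubling integration time starts near 2x (read-noise
# limited) and decays toward sqrt(2) once stray-photon noise dominates.
doubling_gain = [snr(2 * t, 1.0, 5.0, 10.0) / snr(t, 1.0, 5.0, 10.0)
                 for t in (1, 10, 100, 1000)]
print([round(g, 2) for g in doubling_gain])
```

This matches the abstract's observation that longer integration helps only until stray-photon noise, which cannot be suppressed, becomes dominant.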
Current approaches and future role of high content imaging in safety sciences and drug discovery.
van Vliet, Erwin; Daneshian, Mardas; Beilmann, Mario; Davies, Anthony; Fava, Eugenio; Fleck, Roland; Julé, Yvon; Kansy, Manfred; Kustermann, Stefan; Macko, Peter; Mundy, William R; Roth, Adrian; Shah, Imran; Uteng, Marianne; van de Water, Bob; Hartung, Thomas; Leist, Marcel
2014-01-01
High content imaging combines automated microscopy with image analysis approaches to simultaneously quantify multiple phenotypic and/or functional parameters in biological systems. The technology has become an important tool in the fields of safety sciences and drug discovery, because it can be used for mode-of-action identification, determination of hazard potency and the discovery of toxicity targets and biomarkers. In contrast to conventional biochemical endpoints, high content imaging provides insight into the spatial distribution and dynamics of responses in biological systems. This allows the identification of signaling pathways underlying cell defense, adaptation, toxicity and death. Therefore, high content imaging is considered a promising technology to address the challenges for the "Toxicity testing in the 21st century" approach. Currently, high content imaging technologies are frequently applied in academia for mechanistic toxicity studies and in pharmaceutical industry for the ranking and selection of lead drug compounds or to identify/confirm mechanisms underlying effects observed in vivo. A recent workshop gathered scientists working on high content imaging in academia, pharmaceutical industry and regulatory bodies with the objective to compile the state-of-the-art of the technology in the different institutions. Together they defined technical and methodological gaps, proposed quality control measures and performance standards, highlighted cell sources and new readouts and discussed future requirements for regulatory implementation. This review summarizes the discussion, proposed solutions and recommendations of the specialists contributing to the workshop.
NASA Astrophysics Data System (ADS)
Butler, M. L.; Rainford, L.; Last, J.; Brennan, P. C.
2009-02-01
Introduction: The American Association of Physicists in Medicine is currently standardizing the exposure index (EI) value. Recent studies have questioned whether the EI value offered by manufacturers is optimal. This current work establishes optimum EIs for the antero-posterior (AP) projections of a pelvis and knee on a Carestream Health (Kodak) CR system and compares these with manufacturers' recommended EI values from a patient dose and image quality perspective. Methodology: Human cadavers were used to produce images of clinically relevant standards. Several exposures were taken to achieve various EI values, and corresponding entrance surface doses (ESD) were measured using thermoluminescent dosimeters. Image quality was assessed by 5 experienced clinicians using anatomical criteria judged against a reference image. Visualization of image-specific common abnormalities was also analyzed to establish diagnostic efficacy. Results: A rise in ESD for both examinations, consistent with increasing EI, was shown. Anatomic image quality was deemed acceptable at an EI of 1560 for the AP pelvis and 1590 for the AP knee. Relative to manufacturers' recommended values, a significant reduction in ESD (p=0.02) of 38% and 33% for the pelvis and knee respectively was noted. Initial pathological analysis suggests that diagnostic efficacy at lower EI values may be projection-specific. Conclusion: The data in this study emphasize the need for clinical centres to consider establishing their own EI guidelines, and not necessarily rely on manufacturers' recommendations. Normal and abnormal images must be used in this process.
A picture tells a thousand words: A content analysis of concussion-related images online.
Ahmed, Osman H; Lee, Hopin; Struik, Laura L
2016-09-01
Recently, image-sharing social media platforms have become a popular medium for sharing health-related images and associated information. However, within the field of sports medicine, and more specifically sports-related concussion, the content of images and meta-data shared through these popular platforms has not been investigated. The aim of this study was to analyse the content of concussion-related images and their accompanying meta-data on image-sharing social media platforms. We retrieved 300 images from Pinterest, Instagram and Flickr by using a standardised search strategy. All images were screened and duplicate images were removed. We excluded images if they were: non-static images; illustrations; animations; or screenshots. The content and characteristics of each image were evaluated using a customised coding scheme to determine major content themes, and images were referenced to the current international concussion management guidelines. Of the 300 potentially relevant images, 176 were included for analysis; 70 from Pinterest, 63 from Flickr, and 43 from Instagram. Most images were of another person or a scene (64%), with the primary content depicting injured individuals (39%). The primary purposes of the images were to share a concussion-related incident (33%) and to dispense education (19%). For those images where it could be evaluated, the majority (91%) were found to reflect the Sport Concussion Assessment Tool 3 (SCAT3) guidelines. The ability to rapidly disseminate rich information through photos, images, and infographics to a wide-reaching audience suggests that image-sharing social media platforms could be used as an effective communication tool for sports concussion. Public health strategies could direct educative content to targeted populations via the use of image-sharing platforms. 
Further research is required to understand how image-sharing platforms can be used to effectively relay evidence-based information to patients and sports medicine clinicians. Copyright © 2016 Elsevier Ltd. All rights reserved.
Rosen, Eyal; Taschieri, Silvio; Del Fabbro, Massimo; Beitlitum, Ilan; Tsesis, Igor
2015-07-01
The aim of this study was to evaluate the diagnostic efficacy of cone-beam computed tomographic (CBCT) imaging in endodontics based on a systematic search and analysis of the literature using an efficacy model. A systematic search of the literature was performed to identify studies evaluating the use of CBCT imaging in endodontics. The identified studies were subjected to strict inclusion criteria followed by an analysis using a hierarchical model of efficacy (model) designed for appraisal of the literature on the levels of efficacy of a diagnostic imaging modality. Initially, 485 possible relevant articles were identified. After title and abstract screening and a full-text evaluation, 58 articles (12%) that met the inclusion criteria were analyzed and allocated to levels of efficacy. Most eligible articles (n = 52, 90%) evaluated technical characteristics or the accuracy of CBCT imaging, which was defined in this model as low levels of efficacy. Only 6 articles (10%) proclaimed to evaluate the efficacy of CBCT imaging to support the practitioner's decision making; treatment planning; and, ultimately, the treatment outcome, which was defined as higher levels of efficacy. The expected ultimate benefit of CBCT imaging to the endodontic patient as evaluated by its level of diagnostic efficacy is unclear and is mainly limited to its technical and diagnostic accuracy efficacies. Even for these low levels of efficacy, current knowledge is limited. Therefore, a cautious and rational approach is advised when considering CBCT imaging for endodontic purposes. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
[Research applications in digital radiology. Big data and co].
Müller, H; Hanbury, A
2016-02-01
Medical imaging produces increasingly complex images (e.g. thinner slices and higher resolution) with more protocols, so that image reading has also become much more complex. More information needs to be processed and usually the number of radiologists available for these tasks has not increased to the same extent. The objective of this article is to present current research results from projects on the use of image data for clinical decision support. An infrastructure that can allow large volumes of data to be accessed is presented. In this way the best performing tools can be identified without the medical data having to leave secure servers. The text presents the results of the VISCERAL and Khresmoi EU-funded projects, which allow the analysis of previous cases from institutional archives to support decision-making and for process automation. The results also represent a secure evaluation environment for medical image analysis. This allows the use of data extracted from past cases to solve information needs occurring when diagnosing new cases. The presented research prototypes allow direct extraction of knowledge from the visual data of the images and to use this for decision support or process automation. Real clinical use has not been tested but several subjective user tests showed the effectiveness and efficiency of the process. The future in radiology will clearly depend on better use of the important knowledge in clinical image archives to automate processes and aid decision-making via big data analysis. This can help concentrate the work of radiologists towards the most important parts of diagnostics.
Evaluation of nucleus segmentation in digital pathology images through large scale image synthesis
NASA Astrophysics Data System (ADS)
Zhou, Naiyun; Yu, Xiaxia; Zhao, Tianhao; Wen, Si; Wang, Fusheng; Zhu, Wei; Kurc, Tahsin; Tannenbaum, Allen; Saltz, Joel; Gao, Yi
2017-03-01
Digital histopathology images exceeding 1 gigapixel are drawing increasing attention in the clinical, biomedical research, and computer vision fields. Among the observable features spanning multiple scales in pathology images, nuclear morphology is one of the central criteria for diagnosis and grading, and as a result it is also the most studied target in image computing. A large number of research papers have been devoted to the problem of extracting nuclei from digital pathology images, which is the foundation of any further correlation study. However, the validation and evaluation of nucleus extraction have not yet been formulated rigorously and systematically. Some studies report a human-verified segmentation with thousands of nuclei, whereas a single whole-slide image may contain up to a million. The main obstacle lies in the difficulty of obtaining such a large number of validated nuclei, which is essentially an impossible task for a pathologist. We propose a systematic validation and evaluation approach based on large-scale image synthesis. This could facilitate more quantitatively validated studies in the current and future histopathology image analysis field.
Reeves, Anthony P; Xie, Yiting; Liu, Shuang
2017-04-01
With the advent of fully automated image analysis and modern machine learning methods, there is a need for very large image datasets having documented segmentations for both computer algorithm training and evaluation. This paper presents a method and implementation for facilitating such datasets that addresses the critical issue of size scaling for algorithm validation and evaluation; current evaluation methods that are usually used in academic studies do not scale to large datasets. This method includes protocols for the documentation of many regions in very large image datasets; the documentation may be incrementally updated by new image data and by improved algorithm outcomes. This method has been used for 5 years in the context of chest health biomarkers from low-dose chest CT images that are now being used with increasing frequency in lung cancer screening practice. The lung scans are segmented into over 100 different anatomical regions, and the method has been applied to a dataset of over 20,000 chest CT images. Using this framework, the computer algorithms have been developed to achieve over 90% acceptable image segmentation on the complete dataset.
NASA Astrophysics Data System (ADS)
Rahmes, Mark; Fagan, Dean; Lemieux, George
2017-03-01
The capability of a software algorithm to automatically align same-patient dental bitewing and panoramic x-rays over time is complicated by differences in collection perspectives. We successfully used image correlation with an affine transform for each pixel to discover common image borders, followed by a non-linear homography perspective adjustment to closely align the images. However, significant improvements in image registration could be realized if images were collected from the same perspective, thus facilitating change analysis. The perspective differences due to current dental image collection devices are so significant that straightforward change analysis is not possible. To address this, a new custom dental tray could be used to provide the standard reference needed for consistent positioning of a patient's mouth. Similar to sports mouth guards, the dental tray could be fabricated in standard sizes from plastic and use integrated electronics that have been miniaturized. In addition, the x-ray source needs to be consistently positioned in order to collect images with similar angles and scales. Solving this pose correction is similar to solving for collection angle in aerial imagery for change detection. A standard collection system would provide a method for consistent source positioning using real-time sensor position feedback from a digital x-ray image reference. Automated, robotic sensor positioning could replace manual adjustments. Given an image set from a standard collection, a disparity map between images can be created using parallax from overlapping viewpoints to enable change detection. This perspective data can be rectified and used to create a three-dimensional dental model reconstruction.
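The two-stage alignment described above ends with a non-linear homography (perspective) adjustment. As a minimal illustration of that step only, the sketch below fits a 3x3 homography from control-point pairs using the direct linear transform (DLT) in plain NumPy; the function names and the use of explicit point correspondences (rather than the paper's correlation-derived image borders) are illustrative assumptions.

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 homography H with dst ~ H @ src (homogeneous coordinates),
    using the direct linear transform (DLT) on >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, dtype=float)
    # The homography is the null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]          # normalise so that H[2, 2] == 1

def apply_homography(H, pts):
    """Map 2-D points through H, dividing out the projective scale."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]
```

Given exact correspondences, the recovered matrix matches the true perspective transform up to numerical precision; with noisy correspondences the SVD gives the least-squares solution.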
O'Brien, Kieran; Daducci, Alessandro; Kickler, Nils; Lazeyras, Francois; Gruetter, Rolf; Feiweier, Thorsten; Krueger, Gunnar
2013-08-01
Clinical use of Stejskal-Tanner diffusion-weighted images is hampered by the geometric distortions that result from the large residual 3-D eddy current field induced by the diffusion-weighting gradients. In this work, we aimed to predict, using linear response theory, the residual 3-D eddy current field required for geometric distortion correction, based on phantom eddy current field measurements. The predicted 3-D eddy current field reduced the root mean square error of the residual eddy current field to ~1 Hz. The model's performance was tested on diffusion-weighted images of four normal volunteers; following distortion correction, the Stejskal-Tanner diffusion-weighted images were found to be of comparable quality to image-registration-based corrections (FSL) at low b-values. Unlike registration techniques, the correction was not hindered by low SNR at high b-values, and it resulted in improved image quality relative to FSL. Characterization of the 3-D eddy current field with linear response theory enables the prediction of the eddy current field required to correct eddy-current-induced geometric distortions for a wide range of clinical and high b-value protocols.
NASA Astrophysics Data System (ADS)
Schlueter, S.; Sheppard, A.; Wildenschild, D.
2013-12-01
Imaging of fluid interfaces in three-dimensional porous media via x-ray microtomography is an efficient means to test thermodynamically derived predictions on the relationship between capillary pressure, fluid saturation and specific interfacial area (Pc-Sw-Anw) in partially saturated porous media. Various experimental studies to date validate the uniqueness of the Pc-Sw-Anw relationship under static conditions, and with current technological progress, direct imaging of moving interfaces under dynamic conditions is also becoming available. Image acquisition and subsequent image processing currently involve many steps, each prone to operator bias, such as merging different scans of the same sample obtained at different beam energies into a single image, or generating isosurfaces from the segmented multiphase image on which the interface properties are usually calculated. We demonstrate that with recent advancements in (i) image enhancement methods, (ii) multiphase segmentation methods and (iii) methods of structural analysis, we can considerably decrease the time and cost of image acquisition and the uncertainty associated with the measurement of interfacial properties. In particular, we highlight three notorious problems in multiphase image processing and provide efficient solutions for each: (i) Due to noise, partial volume effects, and imbalanced volume fractions, automated histogram-based threshold detection methods frequently fail. However, these impairments can be mitigated with modern denoising methods, special treatment of gray value edges and adaptive histogram equalization, such that most of the standard methods for threshold detection (Otsu, fuzzy c-means, minimum error, maximum entropy) coincide at the same set of values. (ii) Partial volume effects due to blur may produce apparent water films around solid surfaces that alter the specific fluid-fluid interfacial area (Anw) considerably.
In a synthetic test image, local segmentation methods such as Bayesian Markov random fields, converging active contours and watershed segmentation reduced the error in Anw associated with apparent water films from 21% to 6-11%. (iii) The generation of isosurfaces from the segmented data usually requires extensive postprocessing to smooth the surface and check for consistency errors. This can be avoided by calculating specific interfacial areas directly on the segmented voxel image by means of Minkowski functionals, which is highly efficient and less error-prone.
NASA Astrophysics Data System (ADS)
O'Shea, Tuathan; Bamber, Jeffrey; Fontanarosa, Davide; van der Meer, Skadi; Verhaegen, Frank; Harris, Emma
2016-04-01
Imaging has become an essential tool in modern radiotherapy (RT), being used to plan dose delivery prior to treatment and verify target position before and during treatment. Ultrasound (US) imaging is cost-effective in providing excellent contrast at high resolution for depicting soft tissue targets apart from those shielded by the lungs or cranium. As a result, it is increasingly used in RT setup verification for the measurement of inter-fraction motion, the subject of Part I of this review (Fontanarosa et al 2015 Phys. Med. Biol. 60 R77-114). The combination of rapid imaging and zero ionising radiation dose makes US highly suitable for estimating intra-fraction motion. The current paper (Part II of the review) covers this topic. The basic technology for US motion estimation, and its current clinical application to the prostate, is described here, along with recent developments in robust motion-estimation algorithms, and three dimensional (3D) imaging. Together, these are likely to drive an increase in the number of future clinical studies and the range of cancer sites in which US motion management is applied. Also reviewed are selections of existing and proposed novel applications of US imaging to RT. These are driven by exciting developments in structural, functional and molecular US imaging and analytical techniques such as backscatter tissue analysis, elastography, photoacoustography, contrast-specific imaging, dynamic contrast analysis, microvascular and super-resolution imaging, and targeted microbubbles. Such techniques show promise for predicting and measuring the outcome of RT, quantifying normal tissue toxicity, improving tumour definition and defining a biological target volume that describes radiation sensitive regions of the tumour. US offers easy, low cost and efficient integration of these techniques into the RT workflow. 
US contrast technology also has potential to be used actively to assist RT by manipulating the tumour cell environment and by improving the delivery of radiosensitising agents. Finally, US imaging offers various ways to measure dose in 3D. If technical problems can be overcome, these hold potential for wide-dissemination of cost-effective pre-treatment dose verification and in vivo dose monitoring methods. It is concluded that US imaging could eventually contribute to all aspects of the RT workflow.
Phenotype detection in morphological mutant mice using deformation features.
Roy, Sharmili; Liang, Xi; Kitamoto, Asanobu; Tamura, Masaru; Shiroishi, Toshihiko; Brown, Michael S
2013-01-01
Large-scale global efforts are underway to knock out each of the approximately 25,000 mouse genes and interpret their roles in shaping the mammalian embryo. Given the tremendous amount of data generated by imaging mutated prenatal mice, high-throughput image analysis systems are indispensable for characterizing mammalian development and diseases. Current state-of-the-art computational systems offer only differential volumetric analysis of pre-defined anatomical structures between various gene-knockout mouse strains. For subtle anatomical phenotypes, embryo phenotyping still relies on laborious histological techniques that are clearly unsuitable in such a big-data environment. This paper presents a system that automatically detects known phenotypes and assists in discovering novel phenotypes in μCT images of mutant mice. Deformation features obtained from non-linear registration of a mutant embryo to a normal consensus average image are extracted and analyzed to compute phenotypic and candidate phenotypic areas. The presented system is evaluated using C57BL/10 embryo images. All cases of ventricular septal defect and polydactyly, well known to be present in this strain, are successfully detected. The system predicts potential phenotypic areas in the liver that are under active histological evaluation for a possible phenotype of this mouse line.
Optic disc detection and boundary extraction in retinal images.
Basit, A; Fraz, Muhammad Moazam
2015-04-10
With the development of digital image processing, analysis and modeling techniques, automatic retinal image analysis is emerging as an important screening tool for early detection of ophthalmologic disorders such as diabetic retinopathy and glaucoma. In this paper, a robust method for optic disc detection and extraction of the optic disc boundary is proposed to support computer-assisted diagnosis and treatment of such ophthalmic diseases. The proposed method is based on morphological operations, smoothing filters, and the marker-controlled watershed transform. Internal and external markers are used to first modify the gradient magnitude image, and then the watershed transformation is applied to this modified gradient magnitude image for boundary extraction. This method shows significant improvement over existing methods in terms of detection and boundary extraction of the optic disc. The proposed method achieves optic disc detection success rates of 100%, 100%, 100% and 98.9% for the DRIVE, Shifa, CHASE_DB1, and DIARETDB1 databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 61.88%, 70.96%, 45.61%, and 54.69% for these databases, respectively, which is higher than current methods.
NASA Astrophysics Data System (ADS)
Fu, Yan; Guo, Pei-yuan; Xiang, Ling-zi; Bao, Man; Chen, Xing-hai
2013-08-01
As hyperspectral imaging technology matures, its application to nondestructive detection and identification of meat has become a current research focus. In this study of marine and freshwater fish, the collected spectral curve data were preprocessed and features extracted; combined with BP and LVQ network structures, a predictive model of hyperspectral image data for marine and freshwater fish was established, realizing qualitative analysis and identification of fish quality. The results show that hyperspectral imaging technology combined with BP and LVQ artificial neural network models can be used for the identification of marine and freshwater fish. Hyperspectral data acquisition can be carried out without any pretreatment of the samples, making hyperspectral imaging a lossless, high-accuracy and rapid detection method for fish quality. Only 30 samples were used in this exploratory qualitative study; although the results are encouraging, we will increase the sample size to pursue quantitative identification and verify the feasibility of the approach.
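As an illustration of the LVQ network structure mentioned in the abstract above, the following is a minimal NumPy sketch of the classic LVQ1 prototype-update rule for two-class feature vectors; the function names, hyperparameters and synthetic data are assumptions for illustration, not the paper's fish-spectra setup.

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=10):
    """LVQ1: for each sample, pull the nearest prototype toward it if the
    class labels match, push it away otherwise."""
    P = np.asarray(prototypes, float).copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(((P - xi) ** 2).sum(axis=1))  # best matching unit
            sign = 1.0 if proto_labels[j] == yi else -1.0
            P[j] += sign * lr * (xi - P[j])
    return P

def lvq_predict(X, prototypes, proto_labels):
    """Assign each sample the label of its nearest prototype."""
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return np.asarray(proto_labels)[np.argmin(d, axis=1)]
```

In the paper's setting the feature vectors would be the preprocessed spectral curves and the labels the marine/freshwater classes; here a two-cluster toy dataset stands in for them.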
Could MRI Be Used To Image Kidney Fibrosis? A Review of Recent Advances and Remaining Barriers.
Leung, General; Kirpalani, Anish; Szeto, Stephen G; Deeb, Maya; Foltz, Warren; Simmons, Craig A; Yuen, Darren A
2017-06-07
A key contributor to the progression of nearly all forms of CKD is fibrosis, a largely irreversible process that drives further kidney injury. Despite its importance, clinicians currently have no means of noninvasively assessing renal scar, and thus have historically relied on percutaneous renal biopsy to assess fibrotic burden. Although helpful in the initial diagnostic assessment, renal biopsy remains an imperfect test for fibrosis measurement, limited not only by its invasiveness, but also, because of the small amounts of tissue analyzed, its susceptibility to sampling bias. These concerns have limited not only the prognostic utility of biopsy analysis and its ability to guide therapeutic decisions, but also the clinical translation of experimental antifibrotic agents. Recent advances in imaging technology have raised the exciting possibility of magnetic resonance imaging (MRI)-based renal scar analysis, by capitalizing on the differing physical features of fibrotic and nonfibrotic tissue. In this review, we describe two key fibrosis-induced pathologic changes (capillary loss and kidney stiffening) that can be imaged by MRI techniques, and the potential for these new MRI-based technologies to noninvasively image renal scar. Copyright © 2017 by the American Society of Nephrology.
Multi-Image Registration for an Enhanced Vision System
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2002-01-01
An Enhanced Vision System (EVS) utilizing multi-sensor image fusion is currently under development at the NASA Langley Research Center. The EVS will provide enhanced images of the flight environment to assist pilots in poor visibility conditions. Multi-spectral images obtained from a short wave infrared (SWIR), a long wave infrared (LWIR), and a color visible band CCD camera, are enhanced and fused using the Retinex algorithm. The images from the different sensors do not have a uniform data structure: the three sensors not only operate at different wavelengths, but they also have different spatial resolutions, optical fields of view (FOV), and bore-sighting inaccuracies. Thus, in order to perform image fusion, the images must first be co-registered. Image registration is the task of aligning images taken at different times, from different sensors, or from different viewpoints, so that all corresponding points in the images match. In this paper, we present two methods for registering multiple multi-spectral images. The first method performs registration using sensor specifications to match the FOVs and resolutions directly through image resampling. In the second method, registration is obtained through geometric correction based on a spatial transformation defined by user selected control points and regression analysis.
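The second registration method above performs geometric correction from a spatial transformation defined by user-selected control points and regression analysis. A hedged NumPy sketch of one such regression is a least-squares fit of a 2-D affine transform; the affine model and the function names are illustrative assumptions, since the exact transform family used by the EVS is not specified here.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform from control-point pairs:
    dst ≈ [x, y, 1] @ A.T, with A a 2x3 matrix (needs >= 3 non-collinear pairs)."""
    src_h = np.hstack([np.asarray(src, float), np.ones((len(src), 1))])
    coef, *_ = np.linalg.lstsq(src_h, np.asarray(dst, float), rcond=None)
    return coef.T                     # rows: [a, b, tx] and [c, d, ty]

def apply_affine(A, pts):
    """Map 2-D points through the fitted affine transform."""
    pts_h = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    return pts_h @ A.T
```

With noise-free control points the transform is recovered exactly; with noisy user-selected points the least-squares solution averages out the selection error, which is the role regression plays in the method described.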
Fornasaro, Stefano; Vicario, Annalisa; De Leo, Luigina; Bonifacio, Alois; Not, Tarcisio; Sergo, Valter
2018-05-14
Raman hyperspectral imaging is an emerging practice in biological and biomedical research for label-free analysis of tissues and cells. Using this method, both spatial distribution and spectral information of analyzed samples can be obtained. The current study reports the first Raman microspectroscopic characterisation of colon tissues from patients with Coeliac Disease (CD). The aim was to assess whether Raman imaging coupled with hyperspectral multivariate image analysis is capable of detecting the alterations in the biochemical composition of intestinal tissues associated with CD. The analytical approach was based on a multi-step methodology: duodenal biopsies from healthy and coeliac patients were measured and processed with Multivariate Curve Resolution Alternating Least Squares (MCR-ALS). Based on the distribution maps and the pure spectra of the image constituents obtained from MCR-ALS, interesting biochemical differences between healthy and coeliac patients were derived. Notably, a reduced distribution of complex lipids in the pericryptic space, and a different distribution and abundance of proteins rich in beta-sheet structures, were found in CD patients. The output of the MCR-ALS analysis was then used as a starting point for two clustering algorithms (k-means clustering and hierarchical clustering). Both methods converged on similar results, providing precise segmentation over multiple Raman images of the studied tissues.
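The k-means clustering step applied to the MCR-ALS output can be illustrated with a plain Lloyd's-algorithm sketch in NumPy; the initialization scheme, function name and synthetic data below are assumptions, not the study's settings.

```python
import numpy as np

def kmeans(X, k, init=None, iters=100, seed=0):
    """Lloyd's k-means on row vectors (e.g. per-pixel MCR-ALS concentration
    profiles); returns integer labels and final centroids."""
    rng = np.random.default_rng(seed)
    centroids = (np.asarray(init, float) if init is not None
                 else X[rng.choice(len(X), size=k, replace=False)])
    for _ in range(iters):
        # Squared distances of every sample to every centroid.
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):   # converged
            break
        centroids = new
    return labels, centroids
```

Reshaping the label vector back to the image grid yields the kind of segmentation map the study compares against hierarchical clustering.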
Evaluating minerals of environmental concern using spectroscopy
Swayze, G.A.; Clark, R.N.; Higgins, C.T.; Kokaly, R.F.; Livo, K. Eric; Hoefen, T.M.; Ong, C.; Kruse, F.A.
2006-01-01
Imaging spectroscopy has been successfully used to aid researchers in characterizing potential environmental impacts posed by acid-rock drainage, ore-processing dust on mangroves, and asbestos in serpentine mineral deposits and urban dust. Many of these applications synergistically combine field spectroscopy with remote sensing data, thus allowing more-precise data calibration, spectral analysis of the data, and verification of mapping. The increased accuracy makes these environmental evaluation tools efficient because they can be used to focus field work on those areas most critical to the research effort. The use of spectroscopy to evaluate minerals of environmental concern pushes current imaging spectrometer technology to its limits; we present laboratory results that indicate the direction for future designs of imaging spectrometers.
NASA Astrophysics Data System (ADS)
Gloe, Thomas; Borowka, Karsten; Winkler, Antje
2010-01-01
The analysis of lateral chromatic aberration forms another ingredient in a well-equipped toolbox for the image forensic investigator. Previous work proposed its application to forgery detection [1] and image source identification [2]. This paper takes a closer look at the current state-of-the-art method for analysing lateral chromatic aberration and presents a new approach to estimate lateral chromatic aberration in a runtime-efficient way. Employing a set of 11 different camera models comprising 43 devices, the characteristics of lateral chromatic aberration are investigated on a large scale. The reported results point to general difficulties that have to be considered in real-world investigations.
NASA Astrophysics Data System (ADS)
Rodriguez, M.; Jones, S.; Mentzell, E.; Gill, N.
2011-12-01
The Thermospheric Temperature Imager (TTI) on Fast, Affordable, Science and Technology SATellite (FASTSAT) measures the upper atmospheric atomic oxygen emission at 135.6 nm and the molecular nitrogen LBH emission at 135.4 nm to determine the atmospheric O/N2 density ratio. Observations of variations in this thermospheric ratio correspond to electron density variations in the ionosphere. The TTI design makes use of a Fabry-Perot interferometer to measure Doppler widened atmospheric emissions to determine neutral atmospheric temperature from low Earth orbit. FASTSAT launched November 10, 2010 and TTI is currently observing geomagnetic signatures in the aurora and airglow. This work is supported by NASA.
Images of the future - Two decades in astronomy
NASA Technical Reports Server (NTRS)
Weistrop, D.
1982-01-01
Future instruments for the 100-10,000 Å UV-wavelength region will require detectors with greater quantum efficiency, smaller picture elements, a greater wavelength range, and greater active area than those currently available. After assessing the development status and performance characteristics of vidicons, image tubes, electronographic cameras, digicons, silicon arrays and microchannel plate intensifiers presently employed by astronomical spacecraft, attention is given to such next-generation detectors as the Mosaicked Optical Self-scanned Array Imaging Camera, which consists of a photocathode deposited on the input side of a microchannel plate intensifier. The problems posed by the signal processing and data analysis requirements of the devices foreseen for the 21st century are noted.
Current status of nuclear cardiology: a limited review
DOE Office of Scientific and Technical Information (OSTI.GOV)
Botvinick, E.H.; Dae, M.; Hattner, R.S.
1985-11-01
To summarize the current status of nuclear cardiology, the authors focus on areas that emphasize the specific advantages of nuclear cardiology methods: (a) their benign, noninvasive nature, (b) their pathophysiologic nature, and (c) the ease of their computer manipulation and analysis, permitting quantitative evaluation. The areas covered include: (a) blood pool scintigraphy and parametric imaging, (b) pharmacologic intervention for the diagnosis of ischemic heart disease, (c) scintigraphic studies for the diagnosis and prognosis of coronary artery disease, and (d) considerations of cost effectiveness.
Bidinosti, C P; Kravchuk, I S; Hayden, M E
2005-11-01
We provide an exact expression for the magnetic field produced by cylindrical saddle-shaped coils and their ideal shield currents in the low-frequency limit. The stream function associated with the shield surface current is also determined. The results of the analysis are useful for the design of actively shielded radio-frequency (RF) coils. Examples pertinent to very low field nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI) are presented and discussed.
Scaling of Counter-Current Imbibition Process in Low-Permeability Porous Media, TR-121
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kvoscek, A.R.; Zhou, D.; Jia, L.
2001-01-17
This project presents recent work on imaging imbibition in low-permeability porous media (diatomite) with X-ray computed tomography. The viscosity ratio between nonwetting and wetting fluids is varied over several orders of magnitude, yielding different levels of imbibition performance. Also performed is a mathematical analysis of counter-current imbibition processes and the development of a modified scaling group incorporating the mobility ratio. This modified group is physically based and appears to improve the scaling accuracy of counter-current imbibition significantly.
Study of sea ice in the Sea of Okhotsk and its influence on the Oyashio current
NASA Technical Reports Server (NTRS)
Watanabe, K.; Kuroda, R.; Hata, K.; Akagawa, M. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Two photographic techniques were applied to Skylab S190A multispectral pictures for extracting oceanic patterns at the sea surface separately from cloud patterns. One is an image-masking technique and the other a stereographic analysis. The extracted oceanic patterns were interpreted as areas where the amount, or concentration, of phytoplankton was high, by utilizing surface data on water temperature, ocean currents measured by GEK, and microplankton.
Semiautomated Workflow for Clinically Streamlined Glioma Parametric Response Mapping
Keith, Lauren; Ross, Brian D.; Galbán, Craig J.; Luker, Gary D.; Galbán, Stefanie; Zhao, Binsheng; Guo, Xiaotao; Chenevert, Thomas L.; Hoff, Benjamin A.
2017-01-01
Management of glioblastoma multiforme remains a challenging problem despite recent advances in targeted therapies. Timely assessment of therapeutic agents is hindered by the lack of standard quantitative imaging protocols for determining targeted response. Clinical response assessment for brain tumors is determined by volumetric changes assessed at 10 weeks post-treatment initiation. Further, current clinical criteria fail to use advanced quantitative imaging approaches, such as diffusion and perfusion magnetic resonance imaging. Development of parametric response mapping (PRM) applied to diffusion-weighted magnetic resonance imaging has provided a sensitive and early biomarker of successful cytotoxic therapy in brain tumors while maintaining a spatial context within the tumor. Although PRM provides an earlier readout than volumetry, and sometimes greater sensitivity than traditional whole-tumor diffusion statistics, it is not routinely used for patient management; automated, standardized software for performing the analysis and generating a clinical report is required for this. We present a semiautomated and seamless workflow for image coregistration, segmentation, and PRM classification of glioblastoma multiforme diffusion-weighted magnetic resonance imaging scans. The software solution can be integrated using local hardware or performed remotely in the cloud while providing connectivity to existing picture archive and communication systems. This is an important step toward implementing PRM analysis of solid tumors in routine clinical practice. PMID:28286871
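The voxel-wise PRM classification described above can be sketched as a simple three-way thresholding of the change in a diffusion parameter map between coregistered pre- and post-treatment scans; the 0.55 cutoff and the `prm_classify` name below are illustrative assumptions, not necessarily the threshold used in this workflow.

```python
import numpy as np

def prm_classify(adc_pre, adc_post, threshold=0.55):
    """Voxel-wise parametric response map: label each tumour voxel by
    whether its apparent diffusion coefficient increased (+1), decreased
    (-1), or stayed within +/- threshold (0). Units are assumed to be
    1e-3 mm^2/s; the default cutoff is illustrative only."""
    diff = np.asarray(adc_post, float) - np.asarray(adc_pre, float)
    prm = np.zeros(diff.shape, dtype=int)   # 0 = unchanged
    prm[diff > threshold] = 1               # increased diffusion
    prm[diff < -threshold] = -1             # decreased diffusion
    return prm
```

The fraction of voxels in each class (e.g. `np.mean(prm == 1)`) is the kind of scalar summary a PRM report would carry, while the map itself preserves the spatial context within the tumour.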
Khan, Bilal; Chand, Pankaj; Alexandrakis, George
2011-01-01
Functional near infrared (fNIR) imaging was used to identify spatiotemporal relations between spatially distinct cortical regions activated during various hand and arm motion protocols. Imaging was performed over a field of view (FOV, 12 x 8.4 cm) including the secondary motor, primary sensorimotor, and the posterior parietal cortices over a single brain hemisphere. This is a more extended FOV than typically used in current fNIR studies. Three subjects performed four motor tasks that induced activation over this extended FOV. The tasks included card flipping (pronation and supination) that, to our knowledge, has not been performed in previous functional magnetic resonance imaging (fMRI) or fNIR studies. An earlier rise and a longer duration of the hemodynamic activation response were found in tasks requiring increased physical or mental effort. Additionally, analysis of activation images by cluster component analysis (CCA) demonstrated that cortical regions can be grouped into clusters, which can be adjacent or distant from each other, that have similar temporal activation patterns depending on whether the performed motor task is guided by visual or tactile feedback. These analyses highlight the future potential of fNIR imaging to tackle clinically relevant questions regarding the spatiotemporal relations between different sensorimotor cortex regions, e.g. ones involved in the rehabilitation response to motor impairments. PMID:22162826
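The idea of grouping cortical regions whose time courses share a temporal activation pattern can be illustrated with a simple correlation-based grouping. This is a deliberately crude stand-in for the cluster component analysis (CCA) used in the study; the correlation threshold and function name are illustrative:

```python
import numpy as np

def correlation_clusters(timecourses, r_thresh=0.8):
    """Greedily group channel time courses: channel j joins channel i's
    cluster when their Pearson correlation exceeds r_thresh.

    `timecourses`: (n_channels, n_samples) array. Returns a list of
    cluster labels, one per channel. Threshold is illustrative.
    """
    n = timecourses.shape[0]
    r = np.corrcoef(timecourses)  # pairwise Pearson correlations
    cluster = [-1] * n
    next_id = 0
    for i in range(n):
        if cluster[i] == -1:       # start a new cluster at channel i
            cluster[i] = next_id
            next_id += 1
        for j in range(i + 1, n):  # attach strongly correlated channels
            if r[i, j] > r_thresh and cluster[j] == -1:
                cluster[j] = cluster[i]
    return cluster
```

Channels in the same cluster need not be spatially adjacent, which mirrors the abstract's observation that clusters can be adjacent or distant cortical regions.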
Ultrafast Microfluidic Cellular Imaging by Optical Time-Stretch.
Lau, Andy K S; Wong, Terence T W; Shum, Ho Cheung; Wong, Kenneth K Y; Tsia, Kevin K
2016-01-01
There is an unmet need in biomedicine for measuring a multitude of parameters of individual cells (i.e., high content) in a large population efficiently (i.e., high throughput). This is particularly driven by the emerging interest in bringing Big-Data analysis into this arena, encompassing pathology, drug discovery, rare cancer cell detection, and emulsion microdroplet assays, to name a few. This momentum is particularly evident in recent advancements in flow cytometry, including scaling of the number of measurable colors from the labeled cells and incorporation of imaging capability to access the morphological information of the cells. However, an unspoken predicament appears in the current technologies: higher content comes at the expense of lower throughput, and vice versa. For example, to access additional spatial information on individual cells, imaging flow cytometers only achieve an imaging throughput of ~1,000 cells/s, orders of magnitude slower than non-imaging flow cytometers. In this chapter, we introduce an entirely new imaging platform, namely optical time-stretch microscopy, for ultrahigh-speed, high-contrast, label-free single-cell imaging and analysis (in an ultrafast microfluidic flow up to 10 m/s) with an imaging line-scan rate as high as tens of MHz. Based on this technique, not only can morphological information of the individual cells be obtained in an ultrafast manner, but quantitative evaluation of cellular information (e.g., cell volume, mass, refractive index, stiffness, membrane tension) at the nanometer scale, based on the optical phase, is also possible. The technology can also be integrated with conventional fluorescence measurements widely adopted in non-imaging flow cytometers. Therefore, these two combinatorial and complementary measurement capabilities in the long run make this an attractive platform for addressing the pressing need to expand the "parameter space" in high-throughput single-cell analysis.
This chapter provides the general guidelines of constructing the optical system for time stretch imaging, fabrication and design of the microfluidic chip for ultrafast fluidic flow, as well as the image acquisition and processing.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent
2016-01-01
A new class of high-contrast image analysis algorithms that empirically fit and subtract systematic noise has led to recent discoveries of faint exoplanet/substellar companions and scattered-light images of circumstellar disks. The consensus emerging in the community is that these methods are extremely efficient at enhancing the detectability of faint astrophysical signals, but generally create systematic biases in their observed properties. This poster provides a solution to this outstanding problem. We present an analytical derivation of a linear expansion that captures the impact of astrophysical over/self-subtraction in current image analysis techniques. We examine the general case for which the reference images of the astrophysical scene move azimuthally and/or radially across the field of view as a result of the observation strategy. Our new method is based on perturbing the covariance matrix underlying any least-squares speckle-fitting problem and propagating this perturbation through the data analysis algorithm. This work is presented in the framework of Karhunen-Loeve Image Processing (KLIP), but it can be easily generalized to methods relying on linear combinations of images (instead of eigen-modes). Based on this linear expansion, obtained in the most general case, we then demonstrate practical applications of this new algorithm. We first consider the case of the spectral extraction of faint point sources in IFS data and illustrate, using public Gemini Planet Imager commissioning data, that our novel perturbation-based Forward Modeling (which we named KLIP-FM) can indeed alleviate algorithmic biases. We then apply KLIP-FM to the detection of point sources and show how it decreases the rate of false negatives while keeping the rate of false positives unchanged when compared to classical KLIP. This can potentially have important consequences for the design of follow-up strategies of ongoing direct imaging surveys.
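The KLIP step that the forward modeling above perturbs is a projection of a science frame onto the leading Karhunen-Loeve modes of a reference stack, followed by subtraction of that projection. A minimal numpy sketch of that baseline step (the KLIP-FM perturbation itself is not shown; function name and per-frame mean subtraction convention are ours):

```python
import numpy as np

def klip_subtract(science, references, n_modes):
    """Basic KLIP speckle subtraction on flattened frames.

    `science`: 1-D pixel vector; `references`: (n_ref, n_pix) array.
    Projects the mean-subtracted science frame onto the first `n_modes`
    Karhunen-Loeve modes of the reference stack and subtracts the
    projection, returning the residual frame.
    """
    R = references - references.mean(axis=1, keepdims=True)  # zero-mean refs
    s = science - science.mean()
    # Rows of Vt are an orthonormal basis of the reference row space,
    # ordered by decreasing singular value (i.e., the KL modes).
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    modes = Vt[:n_modes]
    projection = modes.T @ (modes @ s)
    return s - projection
```

Any astrophysical signal that correlates with the reference stack is partially absorbed by the projection, which is exactly the over/self-subtraction bias the abstract's linear expansion models.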
Responding mindfully to distressing psychosis: A grounded theory analysis.
Abba, Nicola; Chadwick, Paul; Stevenson, Chris
2008-01-01
This study investigates the psychological process involved when people with current distressing psychosis learned to respond mindfully to unpleasant psychotic sensations (voices, thoughts, and images). Sixteen participants were interviewed on completion of a mindfulness group program. Grounded theory methodology was used to generate a theory of the core psychological process, using a systematically applied set of methods linking analysis with data collection. The theory induced describes the experience of relating differently to psychosis through a three-stage process: centering in awareness of psychosis; allowing voices, thoughts, and images to come and go without reacting or struggling; and reclaiming power through acceptance of psychosis and the self. The conceptual and clinical applications of the theory and its limits are discussed.
NASA Technical Reports Server (NTRS)
Meyer, Peter; Green, Robert O.; Staenz, Karl; Itten, Klaus I.
1994-01-01
A geocoding procedure for remotely sensed data of airborne systems in rugged terrain is affected by several factors: buffeting of the aircraft by turbulence, variations in ground speed, changes in altitude, attitude variations, and surface topography. The current investigation was carried out with an Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) scene of central Switzerland (Rigi) from NASA's Multi Aircraft Campaign (MAC) in Europe (1991). The parametric approach reconstructs for every pixel the observation geometry based on the flight line, aircraft attitude, and surface topography. To utilize the data for analysis of materials on the surface, the AVIRIS data are corrected to apparent reflectance using algorithms based on MODTRAN (moderate-resolution atmospheric transmission code).
StreakDet data processing and analysis pipeline for space debris optical observations
NASA Astrophysics Data System (ADS)
Virtanen, Jenni; Flohrer, Tim; Muinonen, Karri; Granvik, Mikael; Torppa, Johanna; Poikonen, Jonne; Lehti, Jussi; Santti, Tero; Komulainen, Tuomo; Naranen, Jyri
We describe a novel data processing and analysis pipeline for optical observations of space debris. The monitoring of space object populations requires reliable acquisition of observational data to support the development and validation of space debris environment models and the build-up and maintenance of a catalogue of orbital elements. In addition, data are needed for the assessment of conjunction events and for the support of contingency situations or launches. The currently available, mature image processing algorithms for detection and astrometric reduction of optical data cover objects that cross the sensor field-of-view comparably slowly, and within a rather narrow, predefined range of angular velocities. By applying specific tracking techniques, the objects appear point-like or as short trails in the exposures. However, the general survey scenario is always a “track before detect” problem, resulting in streaks, i.e., object trails of arbitrary lengths, in the images. The scope of the ESA-funded StreakDet (Streak detection and astrometric reduction) project is to investigate solutions for detecting and reducing streaks from optical images, particularly in the low signal-to-noise ratio (SNR) domain, where algorithms are not readily available yet. For long streaks, the challenge is to extract position information and the related registered epochs with sufficient precision. Although some considerations for low-SNR processing of streak-like features are available in the current image processing and computer vision literature, there is a need to discuss and compare these approaches for space debris analysis in order to develop and evaluate prototype implementations. In the StreakDet project, we develop algorithms applicable to single images (as compared to consecutive frames of the same field) obtained with any observing scenario, including space-based surveys and both low- and high-altitude populations.
The proposed processing pipeline starts from the segmentation of the acquired image (i.e., the extraction of all sources), followed by the astrometric and photometric characterization of the candidate streaks, and ends with orbital validation of the detected streaks. A central concept of the pipeline is streak classification which guides the actual characterization process by aiming to identify the interesting sources and to filter out the uninteresting ones, as well as by allowing the tailoring of algorithms for specific streak classes (e.g. point-like vs. long, disintegrated streaks). To validate the single-image detections, the processing is finalized by orbital analysis, resulting in preliminary orbital classification (Earth-bound vs. non-Earth-bound orbit) for the detected streaks.
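The first two pipeline stages above, segmentation of sources and classification into point-like versus elongated (streak-like) detections, can be illustrated with a crude sketch. This is not StreakDet's actual algorithm: the robust threshold, connectivity, and moment-based elongation criterion are all illustrative choices of ours (assuming numpy and scipy):

```python
import numpy as np
from scipy import ndimage

def detect_streaks(image, k_sigma=3.0, min_elongation=3.0):
    """Segment bright sources and split them into streaks and points.

    Pixels above a robust background threshold (median + k_sigma * MAD-based
    sigma) are grouped into connected components; each component is called a
    streak when the ratio of its principal second moments is large.
    All thresholds are illustrative.
    """
    bkg = np.median(image)
    sigma = 1.4826 * np.median(np.abs(image - bkg))  # MAD noise estimate
    labels, n = ndimage.label(image > bkg + k_sigma * sigma)
    streaks, points = [], []
    for lab in range(1, n + 1):
        ys, xs = np.nonzero(labels == lab)
        coords = np.stack([ys, xs]).astype(float)
        if coords.shape[1] < 2:          # single pixel: treat as point-like
            points.append(lab)
            continue
        evals = np.sort(np.linalg.eigvalsh(np.cov(coords)))[::-1]
        elong = np.sqrt(evals[0] / evals[1]) if evals[1] > 0 else np.inf
        (streaks if elong >= min_elongation else points).append(lab)
    return labels, streaks, points
```

A real survey pipeline would follow this with astrometric and photometric characterization of each streak and the orbital validation described above.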
Myocardial perfusion imaging with PET
Nakazato, Ryo; Berman, Daniel S; Alexanderson, Erick; Slomka, Piotr
2013-01-01
PET-myocardial perfusion imaging (MPI) allows accurate measurement of myocardial perfusion, absolute myocardial blood flow and function at stress and rest in a single study session performed in approximately 30 min. Various PET tracers are available for MPI, and rubidium-82 or nitrogen-13-ammonia is most commonly used. In addition, a new fluorine-18-based PET-MPI tracer is currently being evaluated. Relative quantification of PET perfusion images shows very high diagnostic accuracy for detection of obstructive coronary artery disease. Dynamic myocardial blood flow analysis has demonstrated additional prognostic value beyond relative perfusion imaging. Patient radiation dose can be reduced and image quality can be improved with latest advances in PET/CT equipment. Simultaneous assessment of both anatomy and perfusion by hybrid PET/CT can result in improved diagnostic accuracy. Compared with SPECT-MPI, PET-MPI provides higher diagnostic accuracy, using lower radiation doses during a shorter examination time period for the detection of coronary artery disease. PMID:23671459
2017-03-21
This is an odd-looking image. It shows gullies during the winter while entirely in the shadow of the crater wall. Illumination comes only from the winter skylight. We acquire such images because gullies on Mars actively form in the winter when there is carbon dioxide frost on the ground, so we image them in the winter, even though not well illuminated, to look for signs of activity. The dark streaks might be signs of current activity, removing the frost, but further analysis is needed. NB: North is down in the cutout, and the terrain slopes towards the bottom of the image. The map is projected here at a scale of 50 centimeters (19.7 inches) per pixel. [The original image scale is 62.3 centimeters (24.5 inches) per pixel (with 2 x 2 binning); objects on the order of 187 centimeters (73.6 inches) across are resolved.] North is up. http://photojournal.jpl.nasa.gov/catalog/PIA21568
Ultrasound tissue analysis and characterization
NASA Astrophysics Data System (ADS)
Kaufhold, John; Chan, Ray C.; Karl, William C.; Castanon, David A.
1999-07-01
On the battlefield of the future, it may become feasible for medics to perform, via application of new biomedical technologies, more sophisticated diagnoses and surgery than is currently practiced. Emerging biomedical technology may enable the medic to perform laparoscopic surgical procedures to remove, for example, shrapnel from injured soldiers. Battlefield conditions constrain the types of medical image acquisition and interpretation that can be performed. Ultrasound is the only viable biomedical imaging modality appropriate for deployment on the battlefield, which leads to image interpretation issues because of the poor quality of ultrasound imagery. To help overcome these issues, we develop and implement a method of image enhancement that could aid non-experts in the rapid interpretation and use of ultrasound imagery. We describe an energy minimization approach to finding boundaries in medical images and show how prior information on edge orientation can be incorporated into this framework to detect tissue boundaries oriented at a known angle.
Mining biomedical images towards valuable information retrieval in biomedical and life sciences.
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2016-01-01
Biomedical images are helpful sources for scientists and practitioners in drawing significant hypotheses, exemplifying approaches and describing experimental results in the published biomedical literature. In recent decades, there has been an enormous increase in the production and publication of heterogeneous biomedical images, creating a need for bioimaging platforms that extract and analyze the text and content of biomedical images in order to implement effective information retrieval systems. In this review, we summarize technologies related to data mining of figures. We describe and compare the potential of different approaches in terms of their developmental aspects, methodologies used, results produced, accuracies achieved and limitations. Our comparative conclusions cover current challenges for bioimaging software with selective image mining, embedded text extraction and processing of complex natural language queries. © The Author(s) 2016. Published by Oxford University Press.
[Medical image compression: a review].
Noreña, Tatiana; Romero, Eduardo
2013-01-01
Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds storage availability in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to medical activity needs. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.
Object localization in handheld thermal images for fireground understanding
NASA Astrophysics Data System (ADS)
Vandecasteele, Florian; Merci, Bart; Jalalvand, Azarakhsh; Verstockt, Steven
2017-05-01
Despite the broad application of handheld thermal imaging cameras in firefighting, their usage is mostly limited to subjective interpretation by the person carrying the device. To overcome this limitation, object localization and classification mechanisms could assist fireground understanding and help with the automated localization, characterization and spatio-temporal (spreading) analysis of the fire. An automated understanding of thermal images can enrich conventional knowledge-based firefighting techniques by providing information from data- and sensing-driven approaches. In this work, transfer learning is applied to multi-labeling convolutional neural network architectures for object localization and recognition in monocular visual, infrared and multispectral dynamic images. Furthermore, the possibility of analyzing fire scene images is studied and the current limitations are discussed. Finally, the understanding of the room configuration (i.e., object locations) for indoor localization in reduced-visibility environments, and the linking with Building Information Models (BIM), are investigated.
Liu, Tao; Jung, HaeWon; Liu, Jianfei; Droettboom, Michael; Tam, Johnny
2017-10-01
The retinal pigment epithelial (RPE) cells contain intrinsic fluorophores that can be visualized using infrared autofluorescence (IRAF). Although IRAF is routinely utilized in the clinic for visualizing retinal health and disease, currently, it is not possible to discern cellular details using IRAF due to limits in resolution. We demonstrate that the combination of adaptive optics (AO) with IRAF (AO-IRAF) enables higher-resolution imaging of the IRAF signal, revealing the RPE mosaic in the living human eye. Quantitative analysis of visualized RPE cells in 10 healthy subjects across various eccentricities demonstrates the possibility for in vivo density measurements of RPE cells, which range from 6505 to 5388 cells/mm^2 for the areas measured (peaking at the fovea). We also identified cone photoreceptors in relation to underlying RPE cells, and found that RPE cells support on average up to 18.74 cone photoreceptors in the fovea down to an average of 1.03 cone photoreceptors per RPE cell at an eccentricity of 6 mm. Clinical application of AO-IRAF to a patient with retinitis pigmentosa illustrates the potential for AO-IRAF imaging to become a valuable complementary approach to the current landscape of high resolution imaging modalities.
Malinen, Eirik; Rødal, Jan; Knudtsen, Ingerid Skjei; Søvik, Åste; Skogmo, Hege Kippenes
2011-08-01
Molecular and functional imaging techniques such as dynamic positron emission tomography (DPET) and dynamic contrast enhanced computed tomography (DCECT) may provide improved characterization of tumors compared to conventional anatomic imaging. The purpose of the current work was to compare spatiotemporal uptake patterns in DPET and DCECT images. A PET/CT protocol comprising DCECT with an iodine based contrast agent and DPET with (18)F-fluorodeoxyglucose was set up. The imaging protocol was used for examination of three dogs with spontaneous tumors of the head and neck at sessions prior to and after fractionated radiotherapy. Software tools were developed for downsampling the DCECT image series to the PET image dimensions, for segmentation of tracer uptake patterns in the tumors and for spatiotemporal correlation analysis of DCECT and DPET images. DCECT images evaluated one minute post injection qualitatively resembled the DPET images at most imaging sessions. Segmentation by region growing gave similar tumor extensions in DCECT and DPET images, with a median Dice similarity coefficient of 0.81. A relatively high correlation (median 0.85) was found between temporal tumor uptake patterns from DPET and DCECT. The heterogeneity in tumor uptake was not significantly different in the DPET and DCECT images. The median of the spatial correlation was 0.72. DCECT and DPET gave similar temporal wash-in characteristics, and the images also showed a relatively high spatial correlation. Hence, if the limited spatial resolution of DPET is considered adequate, a single DPET scan alone may be considered for assessing both tumor perfusion and metabolic activity. However, further work on a larger number of cases is needed to verify the correlations observed in the present study.
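The Dice similarity coefficient used above to compare the DCECT and DPET tumor segmentations is simple to compute from two binary masks; a minimal numpy sketch (function name is ours):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

A value of 0.81, as reported above, indicates substantial but not perfect overlap between the two segmented tumor volumes.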
Lim, Kyungjae; Kwon, Heejin; Cho, Jinhan; Oh, Jongyoung; Yoon, Seongkuk; Kang, Myungjin; Ha, Dongho; Lee, Jinhwa; Kang, Eunju
2015-01-01
The purpose of this study was to assess the image quality of a novel advanced iterative reconstruction (IR) method called "adaptive statistical IR V" (ASIR-V) by comparing its image noise, contrast-to-noise ratio (CNR), and spatial resolution with those of filtered back projection (FBP) and adaptive statistical IR (ASIR) on computed tomography (CT) phantom images. We performed CT scans at 5 different tube currents (50, 70, 100, 150, and 200 mA) using 3 types of CT phantoms. Scanned images were subsequently reconstructed with 7 different settings: FBP, and 3 levels each of ASIR and ASIR-V (30%, 50%, and 70%). The image noise was measured in the first study using a body phantom, the CNR was measured in the second study using a contrast phantom, and the spatial resolution was measured in the third study using a high-resolution phantom. We compared the image noise, CNR, and spatial resolution among the 7 reconstruction settings to determine whether noise reduction, high CNR, and high spatial resolution could be achieved with ASIR-V. Quantitative analysis of the first and second studies showed that the images reconstructed using ASIR-V had reduced image noise and improved CNR compared with those of FBP and ASIR (P < 0.001). Qualitative analysis of the third study also showed that the images reconstructed using ASIR-V had significantly improved spatial resolution compared with those of FBP and ASIR (P < 0.001). Our phantom studies showed that ASIR-V provides a significant reduction in image noise and a significant improvement in CNR as well as spatial resolution. Therefore, this technique has the potential to further reduce the radiation dose without compromising image quality.
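The noise and CNR measurements described above reduce to simple region-of-interest (ROI) statistics on the reconstructed images. A minimal numpy sketch (ROI placement and function names are ours, not the study's exact measurement protocol):

```python
import numpy as np

def roi_noise(image, roi):
    """Image noise: standard deviation of pixel values (e.g., HU) in a
    uniform ROI of the phantom."""
    return image[roi].std()

def cnr(image, roi_obj, roi_bkg):
    """Contrast-to-noise ratio:
    |mean(object ROI) - mean(background ROI)| / sd(background ROI)."""
    contrast = abs(image[roi_obj].mean() - image[roi_bkg].mean())
    return contrast / image[roi_bkg].std()
```

Comparing these values across the 7 reconstruction settings at each tube current is how noise reduction and CNR improvement are quantified in studies like the one above.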