Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models a sampled imaging system in the laboratory; the imaging system parameters are measured, and the digital images serving as image-processing input are produced by this imaging system under those same parameters. The gathered optical sampled images are then processed by three digital image processes: calibration pre-processing, lossy compression at different compression ratios, and image post-processing with different kernels. The image quality assessment method is just noticeable difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be determined. The six JND subjective assessment data sets can be cross-validated against each other. The main conclusions are: image post-processing can improve image quality; it can do so even with lossy compression, although image quality at higher compression ratios improves less than at lower ratios; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.
Comparison of breast percent density estimation from raw versus processed digital mammograms
NASA Astrophysics Data System (ADS)
Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina
2011-03-01
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
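The area-based percent density measure above (a Cumulus-style semi-automated threshold) can be sketched in a few lines. This is a minimal illustration, not the Cumulus implementation: the function name and the flat-list input are my own simplification, and in the real tool the two thresholds are chosen interactively by the reader.

```python
def percent_density(pixels, breast_thresh, dense_thresh):
    """Area-based breast percent density (PD%) from pixel intensities.

    breast_thresh separates breast tissue from background; dense_thresh
    (set interactively in Cumulus-style tools) marks dense tissue.
    Returns the percentage of breast-area pixels classified as dense.
    """
    breast = [p for p in pixels if p >= breast_thresh]
    if not breast:
        return 0.0
    dense = [p for p in breast if p >= dense_thresh]
    return 100.0 * len(dense) / len(breast)
```

With this sketch, comparing raw versus processed images amounts to running the same thresholding on both and correlating the two PD% series.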
He, Longjun; Xu, Lang; Ming, Xing; Liu, Qian
2015-02-01
Three-dimensional post-processing operations on the volume data generated by a series of CT or MR images are of great significance for image reading and diagnosis. As part of the DICOM standard, the WADO service defines how to access DICOM objects on the Web, but it does not cover three-dimensional post-processing operations on image series. This paper analyzes the technical features of three-dimensional post-processing operations on volume data, and then designs and implements a web service system for three-dimensional post-processing of medical images based on the WADO protocol. To improve the scalability of the proposed system, business tasks and calculation operations were separated into two modules. The results showed that the proposed system could serve three-dimensional post-processing of medical images to multiple clients at the same time, meeting the demand for web access to three-dimensional post-processing operations on volume data.
Li, Haobo; Chen, Yanxi; Qiang, Minfei; Zhang, Kun; Jiang, Yuchen; Zhang, Yijie; Jia, Xiaoyang
2017-06-14
The objective of this study was to evaluate the value of computed tomography (CT) post-processing images in the postoperative assessment of Lisfranc injuries compared with plain radiographs. A total of 79 cases of closed Lisfranc injury treated with conventional open reduction and internal fixation from January 2010 to June 2016 were analyzed. Postoperative assessment was performed by two independent orthopedic surgeons using both plain radiographs and CT post-processing images. Inter- and intra-observer agreement were analyzed by kappa statistics, while the differences between the two postoperative imaging assessments were assessed using the χ2 test (McNemar's test). Significance was assumed when p < 0.05. Inter- and intra-observer agreement of CT post-processing images was much higher than that of plain radiographs. Non-anatomic reduction was more easily identified in patients with injuries of Myerson classifications A, B1, B2, and C1 using CT post-processing images across all groups (p < 0.05), and poor internal fixation was also more easily detected in patients with injuries of Myerson classifications A, B1, B2, and C2 using CT post-processing images across all groups (p < 0.05). CT post-processing images can be more reliable than plain radiographs in the postoperative assessment of reduction and implant placement for Lisfranc injuries.
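The two statistics used above, kappa for observer agreement and McNemar's test for paired assessments, can be sketched as follows. This is a hedged illustration: the function names are mine, and the McNemar form shown is the simple chi-square without continuity correction, which may differ from the exact variant the authors used.

```python
def cohen_kappa(a, b):
    """Cohen's kappa for two raters' labels over the same cases.

    po is observed agreement; pe is chance agreement from the
    marginal label frequencies of each rater.
    """
    assert len(a) == len(b) and a, "need paired, non-empty ratings"
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)

def mcnemar_statistic(b, c):
    """McNemar chi-square (no continuity correction) from the two
    discordant cell counts b and c of a paired 2x2 table."""
    return (b - c) ** 2 / (b + c) if (b + c) else 0.0
```

Compared against a chi-square distribution with 1 degree of freedom, a statistic above ~3.84 corresponds to p < 0.05.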
Spot restoration for GPR image post-processing
Paglieroni, David W; Beer, N. Reginald
2014-05-20
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
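The final step of the pipeline above, identifying energy peaks in the post-processed image frame, can be sketched as a local-maximum search. This is a minimal stand-in under my own assumptions (strict 8-neighbor maxima above a threshold), not the patented detector:

```python
def find_peaks_2d(img, threshold):
    """Return (row, col) positions of strict local maxima at or above
    threshold in a 2-D energy image given as a list of lists."""
    peaks = []
    rows, cols = len(img), len(img[0])
    for r in range(rows):
        for c in range(cols):
            v = img[r][c]
            if v < threshold:
                continue
            # 8-connected neighborhood, clipped at the image border
            neigh = [img[rr][cc]
                     for rr in range(max(0, r - 1), min(rows, r + 2))
                     for cc in range(max(0, c - 1), min(cols, c + 2))
                     if (rr, cc) != (r, c)]
            if all(v > n for n in neigh):
                peaks.append((r, c))
    return peaks
```

Each returned peak would then be reported as a candidate subsurface object location.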
A service protocol for post-processing of medical images on the mobile device
NASA Astrophysics Data System (ADS)
He, Longjun; Ming, Xing; Xu, Lang; Liu, Qian
2014-03-01
With growing computing capability and display size, the mobile device has become a tool that helps clinicians view patient information and medical images anywhere and anytime. However, transferring medical images with a large data size from the picture archiving and communication system to a mobile client is difficult and time-consuming, since the wireless network is unstable and limited in bandwidth. Moreover, limited by computing capability, memory and battery endurance, mobile devices can hardly provide a satisfactory quality of experience for radiologists performing complex post-processing of medical images, such as real-time interactive three-dimensional visualization. In this work, remote rendering technology is employed to implement post-processing of medical images instead of local rendering, and a service protocol is developed to standardize communication between the render server and the mobile client. To allow mobile devices on different platforms to access medical image post-processing, the protocol is described in Extensible Markup Language and contains four main parts: user authentication, medical image query/retrieval, 2D post-processing (e.g. window leveling, pixel value readout) and 3D post-processing (e.g. maximum intensity projection, multi-planar reconstruction, curved planar reformation and direct volume rendering). An instance was then implemented to verify the protocol; it allows a mobile device to access post-processing services on the render server via a client application or a web page.
Spatially assisted down-track median filter for GPR image post-processing
Paglieroni, David W; Beer, N Reginald
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
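The title of this record names a down-track median filter. A plain down-track median (without the patent's spatial assistance) can be sketched as a 1-D median applied along the travel direction of each cross-track column; isolated bright returns that persist for only one down-track sample are suppressed. Function name and edge handling here are my own simplifications:

```python
def downtrack_median(image, k=3):
    """Length-k median filter along the down-track axis (rows) of each
    cross-track column. Border rows keep their original values.

    Sketch only: the patented filter additionally uses spatial
    assistance, which is not modeled here.
    """
    rows, cols = len(image), len(image[0])
    half = k // 2
    out = [row[:] for row in image]
    for c in range(cols):
        for r in range(half, rows - half):
            window = sorted(image[r + d][c] for d in range(-half, half + 1))
            out[r][c] = window[half]
    return out
```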
STAMPS: Software Tool for Automated MRI Post-processing on a supercomputer.
Bigler, Don C; Aksu, Yaman; Miller, David J; Yang, Qing X
2009-08-01
This paper describes a Software Tool for Automated MRI Post-processing (STAMP) of multiple types of brain MRIs on a workstation and for parallel processing on a supercomputer (STAMPS). This software tool enables the automation of nonlinear registration for a large image set and for multiple MR image types. The tool uses standard brain MRI post-processing tools (such as SPM, FSL, and HAMMER) for multiple MR image types in a pipeline fashion. It also contains novel MRI post-processing features. The STAMP image outputs can be used to perform brain analysis using Statistical Parametric Mapping (SPM) or single-/multi-image modality brain analysis using Support Vector Machines (SVMs). Since STAMPS is PBS-based, the supercomputer may be a multi-node computer cluster or one of the latest multi-core computers.
Non-linear Post Processing Image Enhancement
NASA Technical Reports Server (NTRS)
Hunt, Shawn; Lopez, Alex; Torres, Angel
1997-01-01
A non-linear filter for image post-processing based on the feedforward neural network topology is presented. This study was undertaken to investigate the usefulness of "smart" filters in image post-processing. The filter has been shown to be useful in recovering high frequencies, such as those lost during the JPEG compression-decompression process. The filtered images have a higher signal-to-noise ratio and a higher perceived image quality. Simulation studies comparing the proposed filter with the optimum mean square non-linear filter, examples of the high-frequency recovery, and the statistical properties of the filter are given.
Documentation of procedures for textural/spatial pattern recognition techniques
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Bryant, W. F.
1976-01-01
A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatial post-processing of a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatial post-processing of an image classified with combined spectral and textural features produces less error than post-processing of an image classified with spectral features only, and (3) classification without spatial post-processing using the combined spectral-textural features tends to produce about the same error rate as classification without spatial post-processing using spectral features only.
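A common form of the spatial post-processing described above is a modal (majority-vote) filter over the classified label image: each pixel is reassigned the most frequent class in its neighborhood, which removes isolated misclassifications. This is a generic sketch of that idea, not necessarily the exact technique of the 1976 report:

```python
from collections import Counter

def majority_filter(labels):
    """3x3 majority-vote filter over a classified label image
    (list of lists of class labels), clipped at the borders."""
    rows, cols = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for r in range(rows):
        for c in range(cols):
            window = [labels[rr][cc]
                      for rr in range(max(0, r - 1), min(rows, r + 2))
                      for cc in range(max(0, c - 1), min(cols, c + 2))]
            out[r][c] = Counter(window).most_common(1)[0][0]
    return out
```

A lone pixel of one timber class surrounded by another is flipped to the majority class, which is exactly the error-reduction effect reported in finding (1).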
Imaging through Fog Using Polarization Imaging in the Visible/NIR/SWIR Spectrum
2017-01-11
(Fragmentary extracted text. Recoverable content: a post-processing step is applied to the image to complete image dehazing; Figure 6, "Basic architecture of the…" (caption truncated); Figure 7, "Basic architecture of post-processing techniques to recover an image dehazed from a raw image"; this first study was limited in scope.)
Content standards for medical image metadata
NASA Astrophysics Data System (ADS)
d'Ornellas, Marcos C.; da Rocha, Rafael P.
2003-12-01
Medical images are at the heart of healthcare diagnostic procedures. They provide not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from image generation to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image through various analyses and post-processing. A medical image data repository for healthcare information systems is becoming a critical need. This repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data, as well as the diversified user access requirements, implementation of a medical image archive system is a complex and challenging task. This paper discusses content standards for medical image metadata. It also focuses on evaluation of image metadata content and metadata quality management.
Buried object detection in GPR images
Paglieroni, David W; Chambers, David H; Bond, Steven W; Beer, W. Reginald
2014-04-29
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Radar signal pre-processing to suppress surface bounce and multipath
Paglieroni, David W; Mast, Jeffrey E; Beer, N. Reginald
2013-12-31
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived and image manipulation steps are reported, scientists not only follow good laboratory practice, but also avoid the ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Likewise, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction are Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, converting pure colors to grayscale, converting grayscale to the colors typically used in fluorescence imaging, correcting uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
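One of the corrections named above, flatfield correction of uneven illumination, follows the classic formula (raw − dark) / (flat − dark). The sketch below is a minimal version under my own assumptions (pure-Python lists, rescaling by the mean gain so intensities stay near the original range), not the procedure from any specific application:

```python
def flatfield_correct(image, flat, dark=None):
    """Flat-field correction: (raw - dark) / (flat - dark), rescaled by
    the mean gain so corrected intensities stay in the original range.

    image, flat, dark are 2-D lists of equal shape; dark defaults to 0.
    """
    rows, cols = len(image), len(image[0])
    if dark is None:
        dark = [[0] * cols for _ in range(rows)]
    gain = [[flat[r][c] - dark[r][c] for c in range(cols)]
            for r in range(rows)]
    mean_gain = sum(sum(row) for row in gain) / (rows * cols)
    return [[(image[r][c] - dark[r][c]) / gain[r][c] * mean_gain
             for c in range(cols)] for r in range(rows)]
```

A pixel that received twice the illumination (flat = 2 versus 1) is divided back down, so equal specimen signal comes out equal after correction.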
Spatially adaptive migration tomography for multistatic GPR imaging
Paglieroni, David W; Beer, N. Reginald
2013-08-13
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Synthetic aperture integration (SAI) algorithm for SAR imaging
Chambers, David H; Mast, Jeffrey E; Paglieroni, David W; Beer, N. Reginald
2013-07-09
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Zero source insertion technique to account for undersampling in GPR imaging
Chambers, David H; Mast, Jeffrey E; Paglieroni, David W
2014-02-25
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Real-time system for imaging and object detection with a multistatic GPR array
Paglieroni, David W; Beer, N Reginald; Bond, Steven W; Top, Philip L; Chambers, David H; Mast, Jeffrey E; Donetti, John G; Mason, Blake C; Jones, Steven M
2014-10-07
A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.
Ahn, Hye Shin; Kim, Sun Mi; Jang, Mijung; Yun, Bo La; Kim, Bohyoung; Ko, Eun Sook; Han, Boo-Kyung; Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung; Choi, Hye Young
2014-01-01
To compare new full-field digital mammography (FFDM) with and without an advanced post-processing algorithm in terms of image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM (Brestige®), performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". The two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of SMB were significantly superior to those of SMA (p < 0.05). SMB was also significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality and image preference in FFDM compared with images produced without the software.
Smet, M H; Breysem, L; Mussen, E; Bosmans, H; Marshall, N W; Cockmartin, L
2018-07-01
To evaluate the impact of digital detector, dose level and post-processing on neonatal chest phantom X-ray image quality (IQ). A neonatal phantom was imaged using four different detectors: a CR powder phosphor (PIP), a CR needle phosphor (NIP) and two wireless CsI DR detectors (DXD and DRX). Five different dose levels were studied for each detector and two post-processing algorithms were evaluated for each vendor. Three paediatric radiologists scored the images using European quality criteria plus additional questions on vascular lines, noise and disease simulation. Visual grading characteristics and ordinal regression statistics were used to evaluate the effect of detector type, post-processing and dose on the visual grading analysis score (VGAS). No significant differences were found between the NIP, DXD and DRX detectors (p > 0.05), whereas the PIP detector had a significantly lower VGAS (p < 0.0001). Processing did not influence VGAS (p = 0.819). Increasing dose resulted in significantly higher VGAS (p < 0.0001). Visual grading analysis (VGA) identified a detector air kerma/image (DAK/image) of ~2.4 μGy as an ideal working point for the NIP, DXD and DRX detectors. VGAS tracked IQ differences between detectors and dose levels but not image post-processing changes. VGA showed a DAK/image value above which perceived IQ did not improve, potentially useful for commissioning. • A VGA study detects IQ differences between detectors and dose levels. • The NIP detector matched the VGAS of the CsI DR detectors. • VGA data are useful in setting the initial detector air kerma level. • Differences in NNPS were consistent with changes in VGAS.
Semi-automated camera trap image processing for the detection of ungulate fence crossing events.
Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija
2017-09-27
Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring a substantial investment of time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone, semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera-trap images. The program was developed for an ungulate fence-crossing project and tested against an image dataset that had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence-crossing event. This reduced by 54.8% the number of images requiring further human characterization while retaining 72.6% of the known fence-crossing events. This program can give researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
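The background-subtraction idea above can be sketched as a simple change-pixel count: a frame is flagged as a candidate event when enough pixels differ from a reference background image. Function name and thresholds are illustrative assumptions, not the authors' rules:

```python
def flags_crossing(background, frame, diff_thresh=20, count_thresh=4):
    """Flag a camera-trap frame as a candidate event when at least
    count_thresh pixels differ from the background image by more than
    diff_thresh. background and frame are 2-D lists of intensities."""
    changed = sum(1 for b_row, f_row in zip(background, frame)
                  for b, f in zip(b_row, f_row)
                  if abs(f - b) > diff_thresh)
    return changed >= count_thresh
```

In a full system, frames flagged this way would be grouped into sequences and assigned a confidence category before human review, which is where the reported 54.8% reduction comes from.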
Henderson, Fiona; Hart, Philippa J; Pradillo, Jesus M; Kassiou, Michael; Christie, Lidan; Williams, Kaye J; Boutin, Herve; McMahon, Adam
2018-05-15
Stroke is a leading cause of disability worldwide. Understanding the recovery process post-stroke is essential; however, longer-term recovery studies are lacking. In vivo positron emission tomography (PET) can image biological recovery processes, but is limited by spatial resolution and its targeted nature. Untargeted mass spectrometry imaging offers high spatial resolution, providing an ideal ex vivo tool for brain recovery imaging. Magnetic resonance imaging (MRI) was used to image a rat brain 48 h after ischaemic stroke to locate the infarcted regions of the brain. PET was carried out 3 months post-stroke using the tracers [18F]DPA-714 for TSPO and [18F]IAM6067 for sigma-1 receptors to image neuroinflammation and neurodegeneration, respectively. The rat brain was flash-frozen immediately after PET scanning and sectioned for matrix-assisted laser desorption/ionisation mass spectrometry (MALDI-MS) imaging. Three months post-stroke, PET imaging shows minimal detection of neurodegeneration and neuroinflammation, indicating that the brain has stabilised. However, MALDI-MS images reveal distinct differences in lipid distributions (e.g. phosphatidylcholine and sphingomyelin) between the scar and the healthy brain, suggesting that recovery processes are still in play. It is currently not known whether the altered lipids in the scar will change on a longer time scale, or whether they are stabilised products of the brain post-stroke. The data demonstrate the ability to combine MALDI-MS with in vivo PET to image different aspects of stroke recovery.
False colors removal on the YCr-Cb color space
NASA Astrophysics Data System (ADS)
Tomaselli, Valeria; Guarnera, Mirko; Messina, Giuseppe
2009-01-01
Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit or correct false colors and other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving this purpose, mainly because the input to a post-processing algorithm is a fully restored RGB color image. Moreover, post-processing can be applied more than once in order to meet quality criteria. In this paper we propose an effective technique for reducing the color artifacts generated by conventional color interpolation algorithms, operating in the YCrCb color space. This solution efficiently removes false colors and can be executed while performing the edge emphasis process.
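A common reason to work in YCrCb for this task is that false colors appear as isolated spikes in the chroma channels while real edges live mostly in luma, so chroma can be smoothed without blurring the image. The 1-D sketch below illustrates that idea under my own assumptions (a plain median on Cb/Cr scanlines); it is not the paper's specific algorithm:

```python
def suppress_false_colors(y, cb, cr, k=3):
    """Median-filter only the chroma channels (Cb, Cr) of a YCbCr
    scanline, leaving luma (Y) untouched. Isolated chroma spikes
    (false colors) are removed; edges carried by Y are preserved."""
    def med1d(ch):
        half = k // 2
        out = ch[:]  # border samples keep their original values
        for i in range(half, len(ch) - half):
            out[i] = sorted(ch[i - half:i + half + 1])[half]
        return out
    return y, med1d(cb), med1d(cr)
```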
UCXp camera imaging principle and key technologies of data post-processing
NASA Astrophysics Data System (ADS)
Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao
2014-03-01
The large-format digital aerial camera UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 μm. The UCXp camera has many advantages over cameras of the same generation, with multiple lenses exposed almost simultaneously and no oblique lens. The camera has a complex imaging process whose principle is detailed in this paper. The UCXp image post-processing method, including data pre-processing and orthophoto production, is then emphasized. Based on data from the new Beichuan County, this paper describes the data processing and its effects.
Assessment of the performance of electrode arrays using an image processing technique
NASA Astrophysics Data System (ADS)
Usman, N.; Khiruddin, A.; Nawawi, Mohd
2017-08-01
Interpreting an inverted resistivity section is time-consuming and tedious, and requires other sources of information to be geologically relevant. An image processing technique was used to perform post-inversion processing, which makes geophysical data interpretation easier. The inverted data sets were imported into PCI Geomatica 9.0.1 for further processing. The data sets were clipped and merged in order to match the coordinates of the three layers and permit pixel-to-pixel analysis. The dipole-dipole array is more sensitive to resistivity variation with depth than the Wenner-Schlumberger and pole-dipole arrays. Image processing serves as a good post-inversion tool in geophysical data processing.
Parallel workflow tools to facilitate human brain MRI post-processing
Cui, Zaixu; Zhao, Chenxi; Gong, Gaolang
2015-01-01
Multi-modal magnetic resonance imaging (MRI) techniques are widely applied in human brain studies. To obtain specific brain measures of interest from MRI datasets, a number of complex image post-processing steps are typically required. Parallel workflow tools have recently been developed, concatenating individual processing steps and enabling fully automated processing of raw MRI data to obtain the final results. These workflow tools are also designed to make optimal use of available computational resources and to support the parallel processing of different subjects or of independent processing steps for a single subject. Automated, parallel MRI post-processing tools can greatly facilitate relevant brain investigations and are being increasingly applied. In this review, we briefly summarize these parallel workflow tools and discuss relevant issues. PMID:26029043
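The core idea reviewed above, chaining per-subject processing steps and running independent subjects in parallel, can be sketched with the standard library. Step names here are placeholders for real tools such as SPM or FSL, and the concurrency choice (threads rather than a cluster scheduler) is my own simplification:

```python
from concurrent.futures import ThreadPoolExecutor

def skull_strip(subject):
    # Placeholder step; a real workflow would invoke an external tool here.
    return subject + ":stripped"

def register(subject):
    # Placeholder nonlinear-registration step.
    return subject + ":registered"

def pipeline(subject):
    """Concatenated per-subject post-processing chain."""
    return register(skull_strip(subject))

def run_parallel(subjects, workers=2):
    """Process independent subjects concurrently; map preserves the
    input order of subjects in its results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(pipeline, subjects))
```

Real workflow tools additionally parallelize independent steps within a single subject and dispatch work to cluster nodes, which this sketch does not model.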
A Rotor Tip Vortex Tracing Algorithm for Image Post-Processing
NASA Technical Reports Server (NTRS)
Overmeyer, Austin D.
2015-01-01
A neurite tracing algorithm, originally developed for medical image processing, was used to trace the location of the rotor tip vortex in density gradient flow visualization images. The tracing algorithm was applied to several representative test images to form case studies. The accuracy of the tracing algorithm was compared to two current methods including a manual point and click method and a cross-correlation template method. It is shown that the neurite tracing algorithm can reduce the post-processing time to trace the vortex by a factor of 10 to 15 without compromising the accuracy of the tip vortex location compared to other methods presented in literature.
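The cross-correlation template method that the tracing algorithm is compared against can be illustrated with a normalized cross-correlation search. This is a generic NumPy sketch, not the paper's implementation: slide a template over the image and report the offset with the highest correlation score.

```python
# Generic normalized cross-correlation (NCC) template matching sketch.
import numpy as np

def ncc_match(image, template):
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(image.shape[0] - th + 1):
        for j in range(image.shape[1] - tw + 1):
            w = image[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * t_norm
            if denom == 0:          # skip flat windows
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Synthetic check: embed the template in a flat background.
patch = np.array([[1, 2, 1], [2, 5, 2], [1, 2, 1]], float)
img = np.zeros((20, 20))
img[5:8, 9:12] = patch
pos, score = ncc_match(img, patch)
print(pos, round(score, 3))
```

A vortex tracker based on this idea re-runs the search frame by frame, which is why a tracing algorithm that follows the vortex continuously can be so much faster.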
Image Corruption Detection in Diffusion Tensor Imaging for Post-Processing and Real-Time Monitoring
Li, Yue; Shea, Steven M.; Lorenz, Christine H.; Jiang, Hangyi; Chou, Ming-Chung; Mori, Susumu
2013-01-01
Due to the high sensitivity of diffusion tensor imaging (DTI) to physiological motion, clinical DTI scans often suffer a significant amount of artifacts. Tensor-fitting-based, post-processing outlier rejection is often used to reduce the influence of motion artifacts. Although it is an effective approach, when there are multiple corrupted data, this method may no longer correctly identify and reject the corrupted data. In this paper, we introduce a new criterion called “corrected Inter-Slice Intensity Discontinuity” (cISID) to detect motion-induced artifacts. We compared the performance of algorithms using cISID and other existing methods with regard to artifact detection. The experimental results show that the integration of cISID into fitting-based methods significantly improves the retrospective detection performance at post-processing analysis. The performance of the cISID criterion, if used alone, was inferior to the fitting-based methods, but cISID could effectively identify severely corrupted images with a rapid calculation time. In the second part of this paper, an outlier rejection scheme was implemented on a scanner for real-time monitoring of image quality and reacquisition of the corrupted data. The real-time monitoring, based on cISID and followed by post-processing, fitting-based outlier rejection, could provide a robust environment for routine DTI studies. PMID:24204551
Sex Differences in Hookah-Related Images Posted on Tumblr: A Content Analysis.
Primack, Brian A; Carroll, Mary V; Shensa, Ariel; Davis, Wesley; Levine, Michele D
2016-01-01
Hookah tobacco smoking is prevalent, widespread, and associated with large amounts of toxicants. Hookah tobacco smoking may be viewed differently by males and females. For example, females have been drawn to types of tobacco that are flavored, milder, and marketed as more social and exotic. Individuals often use the growing segment of anonymous social networking sites, such as Tumblr, to learn about potentially dangerous or harmful behaviors. We used a systematic process involving stratification by time of day, day of week, and search term to gather a sample of 140 Tumblr posts related to hookah tobacco smoking. After a structured codebook development process, 2 coders independently assessed all posts in their entirety, and all disagreements were easily adjudicated. When data on poster sex and age were available, we found that 77% of posts were posted by females and 35% were posted by individuals younger than 18. The most prominent features displayed in all posts were references to or images of hookahs themselves, sexuality, socializing, alcohol, hookah smoke, and tricks performed with hookah smoke. Compared with females, males more frequently posted images of hookahs and alcohol-related images or references. This information may help guide future research in this area and the development of targeted interventions to curb this behavior.
Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.
Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F
2012-04-01
This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast to noise ratio (CNR) enhancement, or reformatting to standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual echo T2 and proton density (PD) images at term equivalent (38-42 weeks' gestational age). 15 hippocampi were manually traced four times on ten infant images by 2 independent raters on the original T2 image, as well as images processed by: a) combining T2 and PD images (T2-PD) to enhance CNR; then b) reformatting T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICC) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). Original T2 volumes' ICC was 0.87 for rater 1 and 0.84 for rater 2, whereas T2-PD images' ICC was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can improve CNR and hence reliability of hippocampal segmentation in neonate MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation for various brain structures or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
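The CNR figure used above follows the standard definition: the absolute difference of mean intensities between two regions divided by the noise (background) standard deviation. A minimal sketch, with illustrative synthetic region values rather than the study's data:

```python
# Contrast-to-noise ratio between a signal ROI and a background ROI.
import numpy as np

def cnr(signal_roi, background_roi):
    """CNR = |mean(signal) - mean(background)| / SD(background)."""
    return abs(np.mean(signal_roi) - np.mean(background_roi)) / np.std(background_roi)

rng = np.random.default_rng(0)
hippocampus = rng.normal(120.0, 10.0, 500)   # synthetic signal intensities
surround = rng.normal(100.0, 10.0, 500)      # synthetic background
print(round(cnr(hippocampus, surround), 2))
```

Combining T2 and PD images raises this ratio when the tissues of interest differ consistently across both contrasts while the noise is partly independent.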
Shortcomings of low-cost imaging systems for viewing computed radiographs.
Ricke, J; Hänninen, E L; Zielinski, C; Amthauer, H; Stroszczynski, C; Liebig, T; Wolf, M; Hosten, N
2000-01-01
To assess potential advantages of a new PC-based viewing tool featuring image post-processing for viewing computed radiographs on low-cost hardware (a PC with a common display card and color monitor), and to evaluate the effect of using color versus monochrome monitors. Computed radiographs of a statistical phantom were viewed on a PC, with and without post-processing (spatial frequency and contrast processing), on a monochrome or a color monitor. Findings were compared with viewing on a radiological workstation and evaluated with ROC analysis. Image post-processing significantly improved the perception of low-contrast details irrespective of the monitor used. No significant difference in perception was observed between monochrome and color monitors. Review at the radiological workstation was superior to review on the PC with image processing. The lower-quality hardware (graphics card and monitor) used in low-cost PCs negatively affects the perception of low-contrast details in computed radiographs. In this situation, spatial frequency and contrast processing are highly recommended. No significant quality gain was observed for the high-end monochrome monitor compared with the color display; however, the color monitor was more strongly affected by high ambient illumination.
Kakakhel, M B; Jirasek, A; Johnston, H; Kairn, T; Trapp, J V
2017-03-01
This study evaluated the feasibility of combining 'zero-scan' (ZS) X-ray computed tomography (CT) polymer gel dosimeter (PGD) readout with adaptive mean (AM) filtering to improve the signal-to-noise ratio (SNR), and compared these results with the available average-scan (AS) X-ray CT readout techniques. NIPAM PGDs were manufactured, irradiated with 6 MV photons, CT imaged, and processed in Matlab. An AM filter with two iterations and kernel sizes of 3 × 3 and 5 × 5 pixels was used in two scenarios: (a) the CT images were subjected to AM filtering (pre-processing) and then used to generate AS and ZS gel images, and (b) the AS and ZS images were first reconstructed from the CT images and then AM filtered (post-processing). SNR was computed in a 30 × 30 pixel ROI for the different pre- and post-processing cases. Results showed that the ZS technique combined with AM filtering improved the SNR. Using the previously recommended 25 images for reconstruction, the ZS pre-processed protocol gave increases of 44% and 80% in SNR for the 3 × 3 and 5 × 5 kernel sizes, respectively. However, post-processing with both techniques and filter sizes introduced blur and reduced the spatial resolution. Based on this work, the ZS method may be recommended in combination with pre-processed AM filtering using an appropriate kernel size, to produce a large increase in the SNR of the reconstructed PGD images.
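Adaptive mean filtering of the kind used above can be sketched with a Lee-type formulation: average strongly where the local variance looks noise-like, and leave pixels alone where the local variance exceeds the assumed noise variance. This is a generic formulation under that assumption, not the authors' exact implementation.

```python
# Lee-type adaptive mean filter sketch: smooth flat regions, keep edges.
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_mean(img, kernel=3, noise_var=1.0):
    mean = uniform_filter(img, kernel)
    sq_mean = uniform_filter(img ** 2, kernel)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    # Weight toward the local mean wherever local variance is noise-like.
    gain = np.maximum(var - noise_var, 0.0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

rng = np.random.default_rng(1)
noisy = 50.0 + rng.normal(0.0, 2.0, (64, 64))      # flat phantom + noise
smoothed = adaptive_mean(noisy, kernel=5, noise_var=4.0)
print(noisy.std() > smoothed.std())                 # noise is reduced
```

Applying such a filter before reconstruction (pre-processing) versus after (post-processing) changes how the residual noise propagates, which is the comparison made in the study.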
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2012-11-01
Formosat-2 imagery is high-spatial-resolution (2 m GSD) remote sensing satellite data comprising one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is estimating the cloud statistic of an image using an Automatic Cloud Coverage Assessment (ACCA) algorithm; this statistic is subsequently recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. In the pre-processing analysis, un-supervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are applied in sequence to determine the cloud statistic. In the post-processing analysis, the box-counting fractal method is applied. In other words, the cloud statistic is first determined via the pre-processing analysis, and the correctness of the cloud statistic across the different spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, we first conducted a series of experiments comparing clustering-based and spatial thresholding methods, including Otsu's, local entropy (LE), joint entropy (JE), global entropy (GE), and global relative entropy (GRE) methods. The results show that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method as the thresholding method, successfully extracted the cloudy pixels of Formosat-2 images for accurate cloud statistic estimation.
Nishiyama, Megumi; Kawaguchi, Jun
2014-11-01
To clarify the relationship between visual long-term memory (VLTM) and online visual processing, we investigated whether and how VLTM involuntarily affects the performance of a one-shot change detection task using images consisting of six meaningless geometric objects. In the study phase, participants observed pre-change (Experiment 1), post-change (Experiment 2), or both pre- and post-change (Experiment 3) images appearing in the subsequent change detection phase. In the change detection phase, one object always changed between pre- and post-change images and participants reported which object was changed. Results showed that VLTM of pre-change images enhanced the performance of change detection, while that of post-change images decreased accuracy. Prior exposure to both pre- and post-change images did not influence performance. These results indicate that pre-change information plays an important role in change detection, and that information in VLTM related to the current task does not always have a positive effect on performance. Copyright © 2014 Elsevier Inc. All rights reserved.
Nissan, Noam; Furman-Haran, Edna; Shapiro-Feinberg, Myra; Grobgeld, Dov; Degani, Hadassa
2017-09-01
Lactation and the return to the pre-conception state during post-weaning are regulated by hormonally induced processes that modify the microstructure of the mammary gland, leading to changes in the features of the ductal/glandular tissue, the stroma and the fat tissue. These changes create a challenge in the radiological workup of breast disorders during lactation and early post-weaning. Here we present non-invasive MRI protocols designed to record in vivo high-spatial-resolution, T2-weighted images and diffusion tensor images of the entire mammary gland. Advanced image processing tools enabled tracking the changes in the anatomical and microstructural features of the mammary gland from the time of lactation to post-weaning. Specifically, by using diffusion tensor imaging (DTI) it was possible to quantitatively distinguish between the ductal/glandular tissue distention during lactation and the post-weaning involution. The application of T2-weighted imaging and DTI is completely safe and non-invasive, using intrinsic contrast based on differences in transverse relaxation rates and in water diffusion rates in various directions, respectively. This study provides a basis for further in-vivo monitoring of changes during the mammary developmental stages, as well as for identifying changes due to malignant transformation in patients with pregnancy-associated breast cancer (PABC).
Effects of image processing on the detective quantum efficiency
NASA Astrophysics Data System (ADS)
Park, Hye-Suk; Kim, Hee-Joung; Cho, Hyo-Min; Lee, Chang-Lae; Lee, Seung-Wan; Choi, Yu-Na
2010-04-01
Digital radiography has gained popularity in many areas of clinical practice. This transition brings interest in advancing methodologies for image quality characterization. However, because such methodologies have not been standardized, the results of these studies cannot be directly compared. The primary objective of this study was to standardize methodologies for image quality characterization. The secondary objective was to evaluate how the image processing algorithm affects the modulation transfer function (MTF), noise power spectrum (NPS), and detective quantum efficiency (DQE). Image performance parameters such as MTF, NPS, and DQE were evaluated using the International Electrotechnical Commission (IEC 62220-1)-defined RQA5 radiographic techniques. Computed radiography (CR) images of a hand in the posterior-anterior (PA) projection for measuring the signal-to-noise ratio (SNR), a slit image for measuring the MTF, and a uniform (white) image for measuring the NPS were obtained, and various Multi-Scale Image Contrast Amplification (MUSICA) parameters were applied to each of the acquired images. All of the modified images considerably influenced the evaluated SNR, MTF, NPS, and DQE. Images modified by the post-processing had a higher DQE than the MUSICA=0 image. This suggests that MUSICA values, as a post-processing step, affect the image when image quality is evaluated. In conclusion, the control parameters of image processing should be accounted for when characterizing image quality in a consistent way. The results of this study can serve as a baseline for evaluating imaging systems and their imaging characteristics by measuring the MTF, NPS, and DQE.
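The three quantities above are linked by the standard frequency-dependent relation DQE(f) = MTF(f)^2 / (q · NNPS(f)), where NNPS is the normalized noise power spectrum and q is the incident photon fluence. A sketch with purely illustrative toy curves, not measured data:

```python
# DQE(f) = MTF(f)^2 / (q * NNPS(f)); all values below are illustrative.
import numpy as np

def dqe(mtf, nnps, q):
    return mtf ** 2 / (q * nnps)

f = np.linspace(0.0, 2.5, 6)              # spatial frequency, cycles/mm
mtf = np.exp(-f / 2.0)                     # toy MTF curve
nnps = np.full_like(f, 5.0e-6)             # toy flat NNPS, mm^2
q = 3.0e5                                  # toy fluence, photons/mm^2
d = dqe(mtf, nnps, q)
print(np.round(d, 3))
```

This makes the study's observation concrete: post-processing that boosts MTF more than it boosts NNPS raises the measured DQE, which is why the processing parameters must be reported alongside the result.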
Diagnostic value of radiological imaging pre- and post-drainage of pleural effusions.
Corcoran, John P; Acton, Louise; Ahmed, Asia; Hallifax, Robert J; Psallidas, Ioannis; Wrightson, John M; Rahman, Najib M; Gleeson, Fergus V
2016-02-01
Patients with an unexplained pleural effusion often require urgent investigation. Clinical practice varies due to uncertainty as to whether an effusion should be drained completely before diagnostic imaging. We performed a retrospective study of patients undergoing medical thoracoscopy for an unexplained effusion. In 110 patients with paired (pre- and post-drainage) chest X-rays and 32 patients with paired computed tomography scans, post-drainage imaging did not provide additional information that would have influenced the clinical decision-making process. © 2015 Asian Pacific Society of Respirology.
Sechopoulos, Ioannis
2013-01-01
Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms, computer aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part will review the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127
NASA Astrophysics Data System (ADS)
Gnyawali, Surya C.; Blum, Kevin; Pal, Durba; Ghatak, Subhadip; Khanna, Savita; Roy, Sashwati; Sen, Chandan K.
2017-01-01
Cutaneous microvasculopathy complicates wound healing. Functional assessment of gated individual dermal microvessels is therefore of outstanding interest. The functional performance of laser speckle contrast imaging (LSCI) systems is compromised by motion artefacts. To address this weakness, post-processing of stacked images is reported. We report the first post-processing of binary raw data from a high-resolution LSCI camera. Sharp images of low-flowing microvessels were enabled by introducing inverse variance in conjunction with speckle contrast in Matlab-based program code. Extended moving-window averaging enhanced the signal-to-noise ratio. A functional quantitative study of blood flow kinetics was performed on single gated microvessels using a freehand tool. Based on detection of flow in low-flow microvessels, a new sharp contrast image was derived. Thus, this work presents the first distinct image with quantitative microperfusion data from gated human foot microvasculature. This versatile platform is applicable to the study of a wide range of tissue systems, including the fine vascular network in the murine brain without craniotomy as well as that in the murine dorsal skin. Importantly, the algorithm reported herein is hardware agnostic and is capable of post-processing binary raw data from any camera source to improve the sensitivity of functional flow data above and beyond the standard limits of the optical system. PMID:28106129
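The quantity underlying LSCI is the sliding-window speckle contrast K = σ/mean: moving blood blurs the speckle pattern, so high flow gives low K. The sketch below is a generic NumPy/SciPy illustration of that computation, not the camera vendor's or the authors' Matlab code.

```python
# Sliding-window speckle contrast: K = local SD / local mean.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(img, window=7):
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img ** 2, window)
    var = np.maximum(sq_mean - mean ** 2, 0.0)
    return np.sqrt(var) / np.maximum(mean, 1e-12)

rng = np.random.default_rng(3)
static = rng.normal(100.0, 25.0, (64, 64))   # sharp speckle (low flow)
flowing = rng.normal(100.0, 5.0, (64, 64))   # blurred speckle (high flow)
k_static = speckle_contrast(static).mean()
k_flow = speckle_contrast(flowing).mean()
print(k_static > k_flow)
```

Weighting K maps by the inverse of their local variance before averaging across a frame stack, as the abstract describes, suppresses the frames or regions where motion has corrupted the speckle statistics.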
NASA Astrophysics Data System (ADS)
Bolan, Jeffrey; Hall, Elise; Clifford, Chris; Thurow, Brian
The Light-Field Imaging Toolkit (LFIT) is a collection of MATLAB functions designed to facilitate the rapid processing of raw light field images captured by a plenoptic camera. An included graphical user interface streamlines the necessary post-processing steps associated with plenoptic images. The generation of perspective shifted views and computationally refocused images is supported, in both single image and animated formats. LFIT performs necessary calibration, interpolation, and structuring steps to enable future applications of this technology.
Evaluation of skin pathologies by RGB autofluorescence imaging
NASA Astrophysics Data System (ADS)
Lihachev, Alexey; Plorina, Emilija V.; Derjabo, Alexander; Lange, Marta; Lihacova, Ilze
2017-12-01
A clinical trial on autofluorescence imaging of malignant and non-malignant skin pathologies was performed, comprising 32 basal cell carcinomas (BCC), 4 malignant melanomas (MM), 1 squamous cell carcinoma (SCC), 89 nevi, 14 dysplastic nevi, 20 hemangiomas, 23 seborrheic keratoses, 4 hyperkeratoses, 3 actinic keratoses, 3 psoriasis, 1 dermatitis, 2 dermatofibromas, 5 papillofibromas, 12 lupus erythematosus, 7 purpura, 6 bruises, 5 freckles, 3 fungal infections, 1 burn, 1 tattoo, 1 age spot, 1 vitiligo, 32 postoperative scars, 8 post-cream-therapy BCCs, 4 post-radiation-therapy scars, 2 post-laser-therapy scars, and 1 post-freezing scar, as well as 114 reference images of healthy skin. The sequences of autofluorescence images of skin pathologies were recorded with a smartphone RGB camera under continuous 405 nm LED excitation over 20 seconds at 0.5 fps. The obtained image sequences were further processed, with subsequent extraction of autofluorescence intensity and photobleaching parameters.
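One common way to extract a photobleaching parameter from such a sequence is to fit an exponential decay I(t) = A·exp(-t/τ) + C to the per-frame intensity. The model and parameter names below are conventional choices, not necessarily the authors' exact parameterization; the data are synthetic.

```python
# Photobleaching rate extraction by exponential decay fitting (sketch).
import numpy as np
from scipy.optimize import curve_fit

def decay(t, amp, tau, offset):
    return amp * np.exp(-t / tau) + offset

# Synthetic 20 s sequence sampled at 0.5 fps (one frame every 2 s).
t = np.arange(0.0, 20.0, 2.0)
intensity = decay(t, amp=80.0, tau=6.0, offset=20.0)
params, _ = curve_fit(decay, t, intensity, p0=(50.0, 5.0, 10.0))
amp, tau, offset = params
print(round(tau, 2))
```

The fitted time constant τ then serves as a quantitative photobleaching parameter that can be compared between pathological and healthy skin regions.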
Lam, D L; Mitsumori, L M; Neligan, P C; Warren, B H; Shuman, W P; Dubinsky, T J
2012-12-01
Autologous breast reconstruction with deep inferior epigastric artery (DIEA) perforator flaps has become the mainstay of breast reconstructive surgery. CT angiography and three-dimensional image post-processing can depict the number, size, course, and location of the DIEA perforating arteries for pre-operative selection of the best artery to use for the tissue flap. Knowledge of the location and selection of the optimal perforating artery shortens operative times and decreases patient morbidity.
Digital image modification detection using color information and its histograms.
Zhou, Haoyu; Shen, Yue; Zhu, Xinghui; Liu, Bo; Fu, Zigang; Fan, Na
2016-09-01
The rapid development of many open source and commercial image editing software packages makes the authenticity of digital images questionable. Copy-move forgery is one of the most widely used tampering techniques to create desirable objects or conceal undesirable objects in a scene. Existing techniques reported in the literature to detect such tampering aim to improve robustness against the use of JPEG compression, blurring, noise, or other types of post-processing operations, which are frequently applied with the intention of concealing tampering and reducing tampering clues. A robust method based on color moments and five other image descriptors is proposed in this paper. The method divides the image into fixed-size overlapping blocks. A clustering operation divides the entire search space into smaller pieces with similar color distributions. Blocks from the tampered regions will reside within the same cluster, since both the copied and moved regions have similar color distributions. Five image descriptors are used to extract block features, which makes the method more robust to post-processing operations. An ensemble of deep compositional pattern-producing neural networks is trained with these extracted features. Similarity among feature vectors in clusters indicates possible forged regions. Experimental results show that the proposed method can detect copy-move forgery even if an image was distorted by gamma correction, additive white Gaussian noise, JPEG compression, or blurring. Copyright © 2016. Published by Elsevier Ireland Ltd.
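The color-moment block feature mentioned above can be sketched directly: the first three statistical moments (mean, standard deviation, skewness) of each channel, giving nine numbers per block. The block size is illustrative; the key property is that a copied-and-moved block yields the identical feature vector.

```python
# First three color moments per channel of an image block (9-D feature).
import numpy as np

def color_moments(block):
    """block: H x W x 3 array -> 9 moments (mean, SD, skew per channel)."""
    feats = []
    for c in range(block.shape[2]):
        ch = block[:, :, c].astype(float).ravel()
        mean = ch.mean()
        std = ch.std()
        skew = ((ch - mean) ** 3).mean() / (std ** 3 + 1e-12)
        feats.extend([mean, std, skew])
    return np.array(feats)

rng = np.random.default_rng(4)
block = rng.integers(0, 256, (16, 16, 3))
f1 = color_moments(block)
f2 = color_moments(block.copy())      # a copied-and-moved block
print(np.allclose(f1, f2))            # identical feature vectors
```

Clustering such features groups candidate source/destination block pairs together, shrinking the search space before the more expensive descriptors are compared.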
A new image enhancement algorithm with applications to forestry stand mapping
NASA Technical Reports Server (NTRS)
Kan, E. P. F. (Principal Investigator); Lo, J. K.
1975-01-01
The author has identified the following significant results. The new algorithm produced cleaner classification maps in which holes of small, predesignated sizes were eliminated and significant boundary information was preserved. These cleaner post-processed maps better resemble real-life timber stand maps and are thus more usable products than the pre-post-processing ones. Compared to an accepted neighbor-checking post-processing technique, the new algorithm is more appropriate for timber stand mapping.
Automatic cloud coverage assessment of Formosat-2 image
NASA Astrophysics Data System (ADS)
Hsu, Kuo-Hsien
2011-11-01
The Formosat-2 satellite is equipped with a high-spatial-resolution (2 m ground sampling distance) remote sensing instrument. It has been operating on a daily-revisiting mission orbit by the National Space Organization (NSPO) of Taiwan since May 21, 2004. NSPO also serves as one of the ground receiving stations, processing the received Formosat-2 images daily. The current cloud coverage assessment of Formosat-2 images in the NSPO Image Processing System generally consists of two major steps. First, an un-supervised K-means method is used to automatically estimate the cloud statistic of a Formosat-2 image. Second, cloud coverage is estimated from the Formosat-2 image by manual examination. Clearly, a more accurate Automatic Cloud Coverage Assessment (ACCA) method would increase the efficiency of the second step by providing a good prediction of the cloud statistic. In this paper, based mainly on the research results of Chang et al., Irish, and Gotoh, we propose a modified Formosat-2 ACCA method comprising pre-processing and post-processing analysis. In the pre-processing analysis, the cloud statistic is determined using un-supervised K-means classification, Sobel's method, Otsu's method, non-cloudy pixel reexamination, and a cross-band filter method. The box-counting fractal method is used as a post-processing tool to double-check the results of the pre-processing analysis, increasing the efficiency of the manual examination.
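The un-supervised K-means first step can be illustrated with a toy two-cluster version on pixel intensities: bright pixels gravitate to the "cloud" center, dark pixels to the "ground" center, and the cloud fraction follows from the label counts. A minimal NumPy sketch, not NSPO's operational implementation:

```python
# Toy two-cluster K-means on pixel intensities for a cloud statistic.
import numpy as np

def kmeans_1d(values, iters=20):
    centers = np.array([values.min(), values.max()], dtype=float)
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = values[labels == k].mean()
    return labels, centers

rng = np.random.default_rng(5)
pixels = np.concatenate([rng.normal(40, 5, 4000),    # ground
                         rng.normal(220, 5, 1000)])  # cloud
labels, centers = kmeans_1d(pixels)
cloud_fraction = (labels == centers.argmax()).mean()
print(round(cloud_fraction, 2))
```

On real scenes the clusters are far less separable (snow, bright sand, thin cirrus), which is exactly why the edge, threshold, and fractal checks are layered on top.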
NASA Astrophysics Data System (ADS)
Nishimaru, Eiji; Ichikawa, Katsuhiro; Okita, Izumi; Ninomiya, Yuuji; Tomoshige, Yukihiro; Kurokawa, Takehiro; Ono, Yutaka; Nakamura, Yuko; Suzuki, Masayuki
2008-03-01
Recently, several post-processing image filters that reduce the noise of computed tomography (CT) images have been proposed. However, these filters are mostly designed for adults; because they are not very effective in small (< 20 cm) display fields of view (FOV), they cannot be used for pediatric body images (e.g., premature babies and infant children). We have developed a new noise reduction filter algorithm for pediatric body CT images. The algorithm is based on 3D post-processing in which the output pixel values are calculated by nonlinear interpolation in the z-direction on the original volumetric data sets. The algorithm requires no in-plane (axial) processing, so the in-plane spatial resolution does not change. In phantom studies, our algorithm reduced the SD by up to 40% without affecting the spatial resolution in the x-y plane or along the z-axis, and improved the CNR by up to 30%. This newly developed filter algorithm should be useful for diagnosis and radiation dose reduction in pediatric body CT.
Hurst, Megan; Dittmar, Helga; Banerjee, Robin; Bond, Rod
2017-03-01
Appearance goals for exercise are consistently associated with negative body image, but research has yet to consider the processes that link these two variables. Self-determination theory offers one such process: introjected (guilt-based) regulation of exercise behavior. Study 1 investigated these relationships within a cross-sectional sample of female UK students (n=215, 17-30 years). Appearance goals were indirectly, negatively associated with body image due to links with introjected regulation. Study 2 experimentally tested this pathway, manipulating guilt relating to exercise and appearance goals independently and assessing post-test guilt and body anxiety (n=165, 18-27 years). The guilt manipulation significantly increased post-test feelings of guilt, and these increases were associated with increased post-test body anxiety, but only for participants in the guilt condition. The implications of these findings for self-determination theory and the importance of guilt for the body image literature are discussed. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
A Robust Post-Processing Workflow for Datasets with Motion Artifacts in Diffusion Kurtosis Imaging
Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X.; Wan, Mingxi
2014-01-01
Purpose: The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). Materials and methods: The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts, together with information on gradient directions and b values, on the parameter estimation was investigated by using the mean square error (MSE). The variance of the noise was used as the criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and by measurements in regions of interest on 36 DKI datasets, including 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). Results: The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05), indicating that LPCC is more sensitive in detecting motion artifacts. MSEs of all derived parameters from the data retained after artifact rejection were smaller than the variance of the noise, suggesting that the influence of rejected artifacts on the precision of the derived parameters was less than the influence of noise. The proposed workflow improved the image quality and significantly reduced the measurement biases on motion-corrupted datasets (p<0.05).
Conclusion: The proposed post-processing workflow reliably improved the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets, providing an effective post-processing method for clinical applications of DKI in subjects with involuntary movements. PMID:24727862
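A local Pearson correlation criterion of the kind described can be sketched as follows: correlate two images window-by-window and average the scores, so that a localized motion artifact drags the mean down more than it would a single global correlation. This is a generic NumPy illustration under that interpretation, not the authors' exact LPCC implementation.

```python
# Windowed (local) Pearson correlation between two slices (sketch).
import numpy as np

def local_pearson(a, b, win=8):
    scores = []
    for i in range(0, a.shape[0] - win + 1, win):
        for j in range(0, a.shape[1] - win + 1, win):
            x = a[i:i + win, j:j + win].ravel()
            y = b[i:i + win, j:j + win].ravel()
            if x.std() == 0 or y.std() == 0:   # skip degenerate windows
                continue
            scores.append(np.corrcoef(x, y)[0, 1])
    return float(np.mean(scores))

rng = np.random.default_rng(6)
ref = rng.normal(100.0, 10.0, (32, 32))
clean = ref + rng.normal(0.0, 1.0, ref.shape)          # artifact-free copy
corrupted = clean.copy()
corrupted[8:16, :] = rng.normal(100.0, 10.0, (8, 32))  # corrupted band
print(local_pearson(ref, clean) > local_pearson(ref, corrupted))
```

Thresholding such a score per diffusion-weighted volume gives a simple rejection rule: volumes whose local correlation with a reference drops below the threshold are discarded before tensor fitting.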
A robust post-processing workflow for datasets with motion artifacts in diffusion kurtosis imaging.
Li, Xianjun; Yang, Jian; Gao, Jie; Luo, Xue; Zhou, Zhenyu; Hu, Yajie; Wu, Ed X; Wan, Mingxi
2014-01-01
The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI). The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifacts rejection, spatial smoothing and tensor estimation. Rigid registration was utilized to correct misalignments. Motion artifacts were rejected by using local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifacts and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejected artifacts with information of gradient directions and b values for the parameter estimation was investigated by using mean square error (MSE). The variance of noise was used as the criterion for MSEs. The clinical practicality of the proposed workflow was evaluated by the image quality and measurements in regions of interest on 36 DKI datasets, including 18 artifact-free (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients). The relative difference between artifacts and artifact-free images calculated by LPCC was larger than that of the conventional correlation coefficient (p<0.05). It indicated that LPCC was more sensitive in detecting motion artifacts. MSEs of all derived parameters from the reserved data after the artifacts rejection were smaller than the variance of the noise. It suggested that influence of rejected artifacts was less than influence of noise on the precision of derived parameters. The proposed workflow improved the image quality and reduced the measurement biases significantly on motion-corrupted datasets (p<0.05). The proposed post-processing workflow was reliable to improve the image quality and the measurement precision of the derived parameters on motion-corrupted DKI datasets. 
The workflow provided an effective post-processing method for clinical applications of DKI in subjects with involuntary movements.
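To make the artifact-rejection idea above concrete, the following is a minimal sketch of a patch-wise local correlation score in Python. The helper name and the patch-averaging scheme are assumptions for illustration; the paper's exact LPCC definition is not reproduced here.

```python
import numpy as np

def local_pearson_cc(img, ref, patch=8):
    """Mean of patch-wise Pearson correlations between a slice and a
    reference slice; a low score flags a motion-corrupted image.
    (Illustrative only; the paper's exact LPCC definition may differ.)"""
    h = (img.shape[0] // patch) * patch
    w = (img.shape[1] // patch) * patch
    scores = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            a = img[i:i + patch, j:j + patch].ravel()
            b = ref[i:i + patch, j:j + patch].ravel()
            if a.std() > 0 and b.std() > 0:
                scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

# A clean slice correlates strongly with the reference, while a
# shifted, noisy one scores much lower and would be rejected.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
clean = ref + 0.05 * rng.standard_normal((64, 64))
corrupt = np.roll(ref, 10, axis=0) + 0.5 * rng.standard_normal((64, 64))
assert local_pearson_cc(clean, ref) > local_pearson_cc(corrupt, ref)
```

Because the correlation is computed patch by patch, a localized artifact depresses the score even when most of the slice still matches the reference, which is the property that makes a local coefficient more sensitive than a global one.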
VIP: Vortex Image Processing Package for High-contrast Direct Imaging
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, Carlos Alberto; Wertz, Olivier; Absil, Olivier; Christiaens, Valentin; Defrère, Denis; Mawet, Dimitri; Milli, Julien; Absil, Pierre-Antoine; Van Droogenbroeck, Marc; Cantalloube, Faustine; Hinz, Philip M.; Skemer, Andrew J.; Karlsson, Mikael; Surdej, Jean
2017-07-01
We present the Vortex Image Processing (VIP) library, a Python package dedicated to astronomical high-contrast imaging. Our package relies on the extensive Python stack of scientific libraries and aims to provide a flexible framework for high-contrast data and image processing. In this paper, we describe the capabilities of VIP related to processing image sequences acquired using the angular differential imaging (ADI) observing technique. VIP implements functionalities for building high-contrast data processing pipelines, encompassing pre- and post-processing algorithms, potential source position and flux estimation, and sensitivity curve generation. Among the reference point-spread function subtraction techniques for ADI post-processing, VIP includes several flavors of principal component analysis (PCA) based algorithms, such as annular PCA and incremental PCA algorithms capable of processing big datacubes (of several gigabytes) on a computer with limited memory. We also present a novel ADI algorithm based on non-negative matrix factorization, which comes from the same family of low-rank matrix approximations as PCA and provides fairly similar results. We showcase the ADI capabilities of the VIP library using a deep sequence on HR 8799 taken with the LBTI/LMIRCam and its recently commissioned L-band vortex coronagraph. Using VIP, we investigated the presence of additional companions around HR 8799 and did not find any significant additional point source beyond the four known planets. VIP is available at http://github.com/vortex-exoplanet/VIP and is accompanied by Jupyter notebook tutorials illustrating the main functionalities of the library.
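The core of the PCA-based speckle subtraction mentioned above can be sketched in a few lines of NumPy. This is a generic low-rank projection, not VIP's actual API; the function name and the toy cube are assumptions for illustration, and a real pipeline would also derotate and combine the residual frames.

```python
import numpy as np

def pca_psf_subtract(cube, ncomp=2):
    """Subtract a low-rank PSF model from an ADI cube (frames x H x W):
    project each frame onto the leading principal components of the
    frame set and keep the residual. (Minimal sketch of the classical
    PCA speckle-subtraction step, not VIP's implementation.)"""
    nfr = cube.shape[0]
    mat = cube.reshape(nfr, -1)
    centered = mat - mat.mean(axis=0)
    # Principal components of the frame set via SVD
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:ncomp]                      # (ncomp, npix)
    model = centered @ basis.T @ basis      # projection onto PC subspace
    return (centered - model).reshape(cube.shape)

# A quasi-static speckle pattern that only rescales from frame to frame
# is captured by the first components and almost entirely removed.
rng = np.random.default_rng(1)
speckles = rng.random((32, 32))
cube = np.stack([speckles * (1 + 0.01 * k) for k in range(10)])
res = pca_psf_subtract(cube, ncomp=2)
assert np.abs(res).max() < 1e-8 * np.abs(cube).max()
```

The incremental PCA variant mentioned in the abstract exists precisely because the `mat` matrix above does not fit in memory for multi-gigabyte cubes; the projection is then built from batches of frames.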
Enhancing the science of the WFIRST coronagraph instrument with post-processing.
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; WFIRST CGI data analysis and post-processing WG
2018-01-01
We summarize the results of a three-year effort investigating how to apply modern image analysis methods, now routinely used with ground-based coronagraphs, to the WFIRST coronagraph instrument (CGI). In this work we quantify the gain associated with post-processing for WFIRST-CGI observing scenarios simulated between 2013 and 2017. We also show, based on simulations, that the spectrum of a planet can be confidently retrieved using these processing tools with an Integral Field Spectrograph. We then discuss our work using CGI experimental data and quantify coronagraph post-processing testbed gains. Finally, we introduce stability metrics that are simple to define and measure, and that place useful lower and upper bounds on the achievable RDI post-processing contrast gain. We show that our bounds hold in the case of the testbed data.
Rosnell, Tomi; Honkavaara, Eija
2012-01-01
The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter-type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one of the methods is based on BAE Systems' SOCET SET classical commercial photogrammetric software and another is built using Microsoft®'s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, largely because the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation. PMID:22368479
Thin layer imaging process for microlithography using radiation at strongly attenuated wavelengths
Wheeler, David R.
2004-01-06
A method for patterning of resist surfaces which is particularly advantageous for systems having low photon flux and highly energetic, strongly attenuated radiation. A thin imaging layer is created with uniform silicon distribution in a bilayer format. An image is formed by exposing selected regions of the silylated imaging layer to radiation. The radiation incident upon the silylated resist material results in acid generation which either catalyzes cleavage of Si-O bonds to produce moieties that are volatile enough to be driven off in a post-exposure bake step or produces a resist material in which the exposed portions of the imaging layer are soluble in a basic solution, thereby desilylating the exposed areas of the imaging layer. The process is self-limiting due to the limited quantity of silyl groups within each region of the pattern. Following the post-exposure bake step, an etching step, generally an oxygen plasma etch, removes the resist material from the desilylated areas of the imaging layer.
A Web simulation of medical image reconstruction and processing as an educational tool.
Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos
2015-02-01
Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical imaging reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee to the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. An initial feedback from a group of graduate medical students showed that the developed course was considered effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.
NASA Astrophysics Data System (ADS)
Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.
2012-05-01
The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.
Jadidi, Masoud; Båth, Magnus; Nyrén, Sven
2018-04-09
To compare the quality of images obtained with two protocols with different acquisition times, and the influence of image post-processing, in a chest digital tomosynthesis (DTS) system. 20 patients with suspected lung cancer were imaged with a chest X-ray equipment with tomosynthesis option. Two examination protocols with different acquisition times (6.3 and 12 s) were performed on each patient. Each protocol was presented with two different image post-processings (standard DTS processing and a more advanced processing optimised for chest radiography). Thus, 4 series from each patient, altogether 80 series, were presented anonymously and in a random order. Five observers rated the quality of the reconstructed section images according to predefined quality criteria in three different classes. Visual grading characteristics (VGC) was used to analyse the data and the area under the VGC curve (AUC_VGC) was used as figure-of-merit. The 12 s protocol and the standard DTS processing were used as references in the analyses. The protocol with 6.3 s acquisition time had a statistically significant advantage over the vendor-recommended protocol with 12 s acquisition time for the classes of criteria Demarcation (AUC_VGC = 0.56, p = 0.009) and Disturbance (AUC_VGC = 0.58, p < 0.001). A similar value of AUC_VGC was found for the class Structure (definition of bone structures in the spine) (0.56), but it could not be statistically separated from 0.5 (p = 0.21). For the image processing, the VGC analysis showed a small but statistically significant advantage for the standard DTS processing over the more advanced processing for the classes of criteria Demarcation (AUC_VGC = 0.45, p = 0.017) and Disturbance (AUC_VGC = 0.43, p = 0.005). A similar value of AUC_VGC was found for the class Structure (0.46), but it could not be statistically separated from 0.5 (p = 0.31).
The study indicates that the protocol with 6.3 s acquisition time yields slightly better image quality than the vendor-recommended protocol with 12 s acquisition time for several anatomical structures. Furthermore, the standard gradation processing (the vendor-recommended post-processing for DTS) yields some advantage over the gradation processing/multiobjective frequency processing/flexible noise control processing in terms of image quality for all classes of criteria. Advances in knowledge: The study shows that image quality may be strongly affected by the selection of DTS protocol and that the vendor-recommended protocol may not always be the optimal choice.
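The area under the VGC curve used as the figure-of-merit above can be estimated from two sets of ordinal ratings much like a non-parametric ROC area. The sketch below is a simplified trapezoidal estimate with invented rating data; published VGC analyses use more elaborate curve fitting and confidence-interval estimation.

```python
def auc_vgc(ratings_a, ratings_b):
    """Trapezoidal estimate of the area under the visual grading
    characteristics (VGC) curve: cumulative rating proportions of
    condition A plotted against condition B. 0.5 = equal perceived
    quality; above 0.5 favours A. (Simplified illustration only.)"""
    def frac_at_least(ratings, t):
        return sum(r >= t for r in ratings) / len(ratings)
    thresholds = sorted(set(ratings_a) | set(ratings_b), reverse=True)
    xs, ys = [0.0], [0.0]
    for t in thresholds:
        xs.append(frac_at_least(ratings_b, t))
        ys.append(frac_at_least(ratings_a, t))
    xs.append(1.0)
    ys.append(1.0)
    # trapezoidal area under the (x, y) polyline
    return sum((xs[k] - xs[k - 1]) * (ys[k] + ys[k - 1]) / 2
               for k in range(1, len(xs)))

# Hypothetical ratings: condition A scored consistently higher than B
a = [3, 4, 4, 5, 5, 5]
b = [2, 2, 3, 3, 4, 4]
assert auc_vgc(a, b) > 0.5           # A preferred
assert abs(auc_vgc(a, a) - 0.5) < 1e-9   # identical ratings: no preference
```

An AUC_VGC of 0.56 with p = 0.009, as reported above, therefore means the cumulative rating curve sits slightly but reliably above the diagonal of no preference.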
MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, R.
This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children's hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion on the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy. To become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging. To learn solutions for consistent post-processing quality in pediatric digital radiography. To understand the key components of an effective MRI safety and quality program for the pediatric practice.
A post-processing system for automated rectification and registration of spaceborne SAR imagery
NASA Technical Reports Server (NTRS)
Curlander, John C.; Kwok, Ronald; Pang, Shirley S.
1987-01-01
An automated post-processing system has been developed that interfaces with the raw image output of the operational digital SAR correlator. This system is designed for optimal efficiency by using advanced signal processing hardware and an algorithm that requires no operator interaction, such as the determination of ground control points. The standard output is a geocoded image product (i.e. resampled to a specified map projection). The system is capable of producing multiframe mosaics for large-scale mapping by combining images in both the along-track direction and adjacent cross-track swaths from ascending and descending passes over the same target area. The output products have absolute location uncertainty of less than 50 m and relative distortion (scale factor and skew) of less than 0.1 per cent relative to local variations from the assumed geoid.
Whitlock, J; Dixon, J; Sherlock, C; Tucker, R; Bolt, D M; Weller, R
2016-05-21
Since the 1950s, veterinary practitioners have included two separate dorsoproximal-palmarodistal oblique (DPr-PaDiO) radiographs as part of a standard series of the equine foot. One image is obtained to visualise the distal phalanx and the other to visualise the navicular bone. However, the rapid development of computed radiography and digital radiography and their post-processing capabilities could mean that this practice is no longer required. The aim of this study was to determine differences in perceived image quality between DPr-PaDiO radiographs that were acquired with a computerised radiography system with exposures, centring and collimation recommended for the navicular bone versus images acquired for the distal phalanx but subsequently manipulated post-acquisition to highlight the navicular bone. Thirty images were presented to four clinicians for quality assessment and graded using a 1-3 scale (1=textbook quality, 2=diagnostic quality, 3=non-diagnostic image). No significant difference in diagnostic quality was found between the original navicular bone images and the manipulated distal phalanx images. This finding suggests that a single DPr-PaDiO image of the distal phalanx is sufficient for an equine foot radiographic series, with appropriate post-processing and manipulation. This change in protocol will result in reduced radiographic study time and decreased patient/personnel radiation exposure. British Veterinary Association.
NASA Astrophysics Data System (ADS)
Al-Ansary, Mariam Luay Y.
Ultrasound imaging has been favored by clinicians for its safety, affordability, accessibility, and speed compared to other imaging modalities. However, the trade-offs for these benefits are a relatively lower image quality and interpretability, which can be addressed by, for example, post-processing methods. One particularly difficult imaging case is associated with the presence of a barrier, such as a human skull, with significantly different acoustical properties than the brain tissue as the target medium. Some methods have been proposed in the literature to account for this structure if the skull's geometry is known. Measuring the skull's geometry is therefore an important task that requires attention. In this work, a new edge detection method for accurate human skull profile extraction via post-processing of ultrasonic A-scans is introduced. This method, referred to as the Selective Echo Extraction (SEE) algorithm, processes each A-scan separately and determines the outermost and innermost boundaries of the skull by means of adaptive filtering. The method can also be used to determine the average attenuation coefficient of the skull. When applied to simulated B-mode images of the skull profile, promising results were obtained. The profiles obtained from the proposed process in simulations were within 0.15λ ± 0.11λ (0.09 ± 0.07 mm) of the actual profiles. Experiments were also performed to test SEE on skull-mimicking phantoms with major acoustical properties similar to those of the actual human skull. With experimental data, the profiles obtained with the proposed process were within 0.32λ ± 0.25λ (0.19 ± 0.15 mm) of the actual profile.
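The outer/inner boundary extraction described above can be illustrated with a crude stand-in: locate the first and last strong echoes in a synthetic A-scan. The fixed-threshold envelope detector below is a hypothetical simplification; the actual SEE algorithm uses adaptive filtering, not a fixed threshold.

```python
import numpy as np

def skull_boundaries(ascan, fs, thresh_ratio=0.3):
    """Return (outer, inner) echo times in seconds: the first and last
    samples whose smoothed envelope exceeds a fraction of the maximum.
    (Hypothetical illustration of the boundary-picking idea only.)"""
    envelope = np.abs(ascan)
    kernel = np.ones(16) / 16              # short moving-average smoothing
    smooth = np.convolve(envelope, kernel, mode="same")
    strong = np.where(smooth > thresh_ratio * smooth.max())[0]
    return strong[0] / fs, strong[-1] / fs

# Synthetic A-scan: two echoes at 10 us and 14 us in light noise,
# standing in for the outer and inner skull surfaces.
fs = 50e6
t = np.arange(int(20e-6 * fs)) / fs
sig = 0.02 * np.random.default_rng(2).standard_normal(t.size)
for t0 in (10e-6, 14e-6):
    sig += np.exp(-((t - t0) / 0.2e-6) ** 2) * np.sin(2 * np.pi * 5e6 * t)
outer, inner = skull_boundaries(sig, fs)
assert abs(outer - 10e-6) < 1e-6 and abs(inner - 14e-6) < 1e-6
```

Repeating this per A-scan across a B-mode sweep yields the two boundary profiles, from which a thickness and (with echo amplitudes) an average attenuation estimate can follow.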
Embedded, real-time UAV control for improved, image-based 3D scene reconstruction
Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul
2016-01-01
Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...
Imaging has enormous untapped potential to improve cancer research through software to extract and process morphometric and functional biomarkers. In the era of non-cytotoxic treatment agents, multi-modality image-guided ablative therapies and rapidly evolving computational resources, quantitative imaging software can be transformative in enabling minimally invasive, objective and reproducible evaluation of cancer treatment response. Post-processing algorithms are integral to high-throughput analysis and fine-grained differentiation of multiple molecular targets.
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame-selection technique, and multi-frame blind deconvolution is then performed. No prior knowledge is required except for the positivity constraint in the blind deconvolution. The use of multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
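A minimal sketch of the frame-selection step is shown below, ranking frames by gradient energy and keeping the sharpest fraction before deconvolution. The metric and function names are assumptions for illustration; the paper's actual selection criterion is not specified here.

```python
import numpy as np

def select_frames(frames, keep=0.5):
    """Rank recorded AO closed-loop frames by a simple sharpness metric
    (gradient energy) and keep the best fraction for multi-frame blind
    deconvolution. (Illustrative sketch; the paper's criterion may differ.)"""
    def sharpness(f):
        gy, gx = np.gradient(f)
        return float(np.sum(gx ** 2 + gy ** 2))
    order = sorted(range(len(frames)),
                   key=lambda i: sharpness(frames[i]), reverse=True)
    n_keep = max(1, int(len(frames) * keep))
    return [frames[i] for i in order[:n_keep]]

# A point source versus a 3x3 box-blurred copy: the sharp frame wins.
sharp = np.zeros((32, 32))
sharp[16, 16] = 1.0
blurred = np.zeros_like(sharp)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        blurred += np.roll(np.roll(sharp, dy, 0), dx, 1) / 9
kept = select_frames([blurred, sharp], keep=0.5)
assert np.array_equal(kept[0], sharp)
```

Discarding the worst-seeing frames before blind deconvolution is what keeps the joint estimate of object and PSFs well conditioned, which is the stability benefit the abstract describes.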
Temporally flickering nanoparticles for compound cellular imaging and super resolution
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev
2016-03-01
This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity and enables the detection of overlapping types of gold nanoparticles (GNPs) at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. It uses a lock-in technique in which the sample is imaged with laser beams time-modulated at as many distinct frequencies as there are GNP types labeling the sample, exciting temporal flickering of the scattered light at known frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing that extracts the spectral components corresponding to the different modulation frequencies. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition algorithm as a further post-processing step can improve the performance of the proposed approach.
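The lock-in separation step described above amounts to a per-pixel Fourier amplitude readout at each modulation frequency. The sketch below is a conceptual illustration with synthetic data, not the authors' code; the function name and frequencies are assumptions.

```python
import numpy as np

def demodulate(stack, fs, freqs):
    """Per-pixel Fourier amplitude of a time stack (frames x H x W) at
    each laser modulation frequency; each output image isolates one
    label type. (Conceptual sketch of the lock-in idea only.)"""
    n = stack.shape[0]
    spectrum = np.fft.rfft(stack, axis=0)
    fft_freqs = np.fft.rfftfreq(n, d=1 / fs)
    out = []
    for f in freqs:
        k = int(np.argmin(np.abs(fft_freqs - f)))
        out.append(np.abs(spectrum[k]) / n)
    return out

# Two "GNP types" flickering at 5 Hz and 12 Hz at different pixels.
fs, n = 100.0, 200
t = np.arange(n) / fs
stack = np.zeros((n, 8, 8))
stack[:, 2, 2] = 1 + np.sin(2 * np.pi * 5 * t)    # type-1 particle
stack[:, 6, 6] = 1 + np.sin(2 * np.pi * 12 * t)   # type-2 particle
img5, img12 = demodulate(stack, fs, [5.0, 12.0])
assert img5[2, 2] > 10 * img5[6, 6]               # 5 Hz image: type 1 only
assert img12[6, 6] > 10 * img12[2, 2]             # 12 Hz image: type 2 only
```

Because each label only contributes power at its own modulation frequency, two particles closer than the diffraction limit still end up in different demodulated images, which is what enables the localization step.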
NASA Astrophysics Data System (ADS)
Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom
2015-02-01
Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
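The mean-to-Gaussian conversion described above can be illustrated with a square-root (Anscombe-like) transform, which stabilizes noise whose variance grows linearly with the mean. This is a sketch of the stabilization step only, with hypothetical helper names; the paper additionally denormalizes by per-voxel sensitivity and applies block-matching 4-D filtering before inverting the transform.

```python
import numpy as np

def stabilize(counts):
    """If variance grows linearly with the mean (as for EM-reconstructed
    PET voxels described above), 2*sqrt(x) yields roughly constant,
    Gaussian-like noise. (Illustrative stabilisation step only.)"""
    return 2.0 * np.sqrt(np.maximum(counts, 0.0))

def unstabilize(x):
    """Inverse of the stabilising transform."""
    return (x / 2.0) ** 2

rng = np.random.default_rng(4)
lo = stabilize(rng.poisson(20.0, 20000))
hi = stabilize(rng.poisson(500.0, 20000))
# Raw Poisson std scales as sqrt(mean); after stabilisation both intensity
# levels have spread close to 1, so a Gaussian denoiser becomes applicable.
assert abs(lo.std() - 1.0) < 0.2
assert abs(hi.std() - 1.0) < 0.1
assert np.allclose(unstabilize(stabilize(np.array([0.0, 4.0, 9.0]))),
                   [0.0, 4.0, 9.0])
```

After denoising in the stabilized domain, the result is mapped back with the inverse transform and renormalized by the sensitivity, mirroring the order of operations in the abstract.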
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-Ur; Woodell, Glenn A.; Hines, Glenn D.
2004-01-01
Noise is the primary visibility limit in the process of non-linear image enhancement, and is no longer a statistically stable additive noise in the post-enhancement image. Therefore novel approaches are needed to both assess and reduce spatially variable noise at this stage in overall image processing. Here we will examine the use of edge pattern analysis both for automatic assessment of spatially variable noise and as a foundation for new noise reduction methods.
Biederer, Juergen; Gottwald, Tobias; Bolte, Hendrik; Riedel, Christian; Freitag, Sandra; Van Metter, Richard; Heller, Martin
2007-04-01
To evaluate increased image latitude post-processing of digital projection radiograms for the detection of pulmonary nodules. 20 porcine lungs were inflated inside a chest phantom, prepared with 280 solid nodules of 4-8 mm in diameter and examined with direct radiography (3.0x2.5 k detector, 125 kVp, 4 mAs). Nodule position and size were documented by CT controls and dissection. Four intact lungs served as negative controls. Image post-processing included standard tone scales and increased latitude with detail contrast enhancement (log-factors 1.0, 1.5 and 2.0). 1280 sub-images (512x512 pixel) were centred on nodules or controls, behind the diaphragm and over free parenchyma, randomized and presented to six readers. Confidence in the decision was recorded with a scale of 0-100%. Sensitivity and specificity for nodules behind the diaphragm were 0.87/0.97 at standard tone scale and 0.92/0.92 with increased latitude (log factor 2.0). The fraction of "not diagnostic" readings was reduced (from 208/1920 to 52/1920). As an indicator of increased detection confidence, the median of the ratings behind the diaphragm approached 100 and 0, respectively, and the inter-quartile width decreased (controls: p<0.001, nodules: p=0.239) at higher image latitude. Above the diaphragm, accuracy and detection confidence remained unchanged. Here, the sensitivity for nodules was 0.94 with a specificity from 0.96 to 0.97 (all p>0.05). Increased latitude post-processing has minimal effects on the overall accuracy, but improves the detection confidence for sub-centimeter nodules in the posterior recesses of the lung.
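The paired sensitivity/specificity figures quoted above come directly from reader-study counts; a tiny helper makes the arithmetic explicit. The counts below are hypothetical round numbers for illustration, not the study's actual tallies.

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    (Illustrative helper with invented counts, not the study's data.)"""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 87 of 100 nodules detected, 97 of 100 controls correctly rejected
# would reproduce the 0.87/0.97 pair reported for the standard tone scale.
sens, spec = sens_spec(tp=87, fn=13, tn=97, fp=3)
assert (sens, spec) == (0.87, 0.97)
```

Note how the reported shift from 0.87/0.97 to 0.92/0.92 with increased latitude trades a few false positives for extra detected nodules behind the diaphragm.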
Effect of color coding and subtraction on the accuracy of contrast echocardiography
NASA Technical Reports Server (NTRS)
Pasquet, A.; Greenberg, N.; Brunken, R.; Thomas, J. D.; Marwick, T. H.
1999-01-01
BACKGROUND: Contrast echocardiography may be used to assess myocardial perfusion. However, gray scale assessment of myocardial contrast echocardiography (MCE) is difficult because of variations in regional backscatter intensity, difficulties in distinguishing varying shades of gray, and artifacts or attenuation. We sought to determine whether the assessment of rest myocardial perfusion by MCE could be improved with subtraction and color coding. METHODS AND RESULTS: MCE was performed in 31 patients with previous myocardial infarction with a 2nd generation agent (NC100100, Nycomed AS), using harmonic triggered or continuous imaging; gain settings were kept constant throughout the study. Digitized images were post-processed by subtracting baseline from contrast data and colorized to reflect the intensity of myocardial contrast. Gray scale MCE alone, MCE images combined with baseline, and subtracted colorized images were scored independently using a 16-segment model. The presence and severity of myocardial contrast abnormalities were compared with perfusion defined by rest MIBI-SPECT. Segments that were not visualized by continuous (17%) or triggered imaging (14%) after color processing were excluded from further analysis. The specificity of gray scale MCE alone (56%) or MCE combined with baseline 2D (47%) was significantly enhanced by subtraction and color coding (76%, p<0.001) of triggered images. The accuracy of the gray scale approaches (52% and 47%, respectively) was increased to 70% (p<0.001). Similarly, for continuous images, the specificity of gray scale MCE with and without baseline comparison was 23% and 42%, respectively, compared with 60% after post-processing (p<0.001). The accuracy of colorized images (59%) was also significantly greater than that of gray scale MCE (43% and 29%, p<0.001). The sensitivity of MCE for both acquisitions was not altered by subtraction.
CONCLUSION: Post-processing with subtraction and color coding significantly improves the accuracy and specificity of MCE for detection of perfusion defects.
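The subtraction-and-color-coding step described above can be sketched as a baseline subtraction followed by a simple two-channel colormap. The color scheme and function name below are assumptions for illustration; clinical systems use calibrated colormaps.

```python
import numpy as np

def colorize_contrast(baseline, contrast):
    """Subtract the baseline frame from the contrast frame and map the
    enhancement to RGB (red = strong enhancement, blue = none).
    (Minimal sketch of the subtraction/colour-coding idea above.)"""
    diff = np.clip(contrast.astype(float) - baseline.astype(float), 0, None)
    norm = diff / diff.max() if diff.max() > 0 else diff
    rgb = np.zeros(diff.shape + (3,))
    rgb[..., 0] = norm          # red channel tracks enhancement
    rgb[..., 2] = 1.0 - norm    # blue channel tracks its absence
    return rgb

# A perfused region gains intensity after contrast injection and is
# rendered red; non-enhancing myocardium stays blue.
baseline = np.full((4, 4), 50.0)
contrast = baseline.copy()
contrast[1:3, 1:3] += 100.0
rgb = colorize_contrast(baseline, contrast)
assert rgb[2, 2, 0] == 1.0 and rgb[0, 0, 0] == 0.0
```

Subtracting the baseline removes the regional backscatter variation that makes gray scale reading difficult, which is why specificity improved while sensitivity was unchanged.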
Advanced imaging programs: maximizing a multislice CT investment.
Falk, Robert
2008-01-01
Advanced image processing has moved from a luxury to a necessity in the practice of medicine. A hospital's adoption of sophisticated 3D imaging entails several important steps with many factors to consider in order to be successful. Like any new hospital program, 3D post-processing should be introduced through a strategic planning process that includes administrators, physicians, and technologists to design, implement, and market a program that is scalable: one that minimizes up-front costs while providing top-level service. This article outlines the steps for planning, implementation, and growth of an advanced imaging program.
Computerized image analysis for acetic acid induced intraepithelial lesions
NASA Astrophysics Data System (ADS)
Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.
2008-03-01
Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid-induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope. One image is captured before the acetic acid application, and the other is captured after the acetic acid application. The spatial change of the acetowhitening is extracted using color and texture information in the post-acetic-acid image; the temporal change is extracted from the intensity and color changes between the post- and pre-acetic-acid images with an automatic alignment. The imaging and data analysis system was evaluated with a total of 99 human subjects and demonstrated its potential for screening underserved women where access to skilled colposcopists is limited.
In vivo terahertz reflection imaging of human scars during and after the healing process.
Fan, Shuting; Ung, Benjamin S Y; Parrott, Edward P J; Wallace, Vincent P; Pickwell-MacPherson, Emma
2017-09-01
We use terahertz imaging to measure four human skin scars in vivo. Clear contrast between the refractive index of the scar and surrounding tissue was observed for all of the scars, despite some being difficult to see with the naked eye. Additionally, we monitored the healing process of a hypertrophic scar. We found that the contrast in the absorption coefficient became less prominent after a few months post-injury, but that the contrast in the refractive index was still significant even months post-injury. Our results demonstrate the capability of terahertz imaging to quantitatively measure subtle changes in skin properties and this may be useful for improving scar treatment and management. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
Simulations for Improved Imaging of Faint Objects at Maui Space Surveillance Site
NASA Astrophysics Data System (ADS)
Holmes, R.; Roggemann, M.; Werth, M.; Lucas, J.; Thompson, D.
A detailed wave-optics simulation is used in conjunction with advanced post-processing algorithms to explore the trade space between image post-processing and adaptive optics for improved imaging of low signal-to-noise ratio (SNR) targets. Target-based guidestars are required for imaging of most active Earth-orbiting satellites because of restrictions on using laser-backscatter-based guidestars in the direction of such objects. With such target-based guidestars and Maui conditions, it is found that significant reductions in adaptive optics actuator and subaperture density can result in improved imaging of fainter objects. Simulation indicates that elimination of adaptive optics produces sub-optimal results for all of the faint-object cases considered. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh
2009-01-01
Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data-processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. Without the dynamic adaptation afforded by SyFT, identifying and tracking such objects or events would require post-processing an image-data space of terabytes. Such post-processing would be time-consuming and could therefore miss significant events entirely, because of their time evolution, or fail to observe them at the required fidelity without real-time adaptations such as adjusting focal-plane operating conditions or re-aiming the focal plane to track them. The basic concept of foveal imaging is straightforward: in imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. This basic concept is not new in itself; image sensors based on it have been described in several previous NASA Tech Briefs articles, including active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand.
What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.
2015-01-01
Accurate separation of air and bone is critical for creating synthetic CT from MRI to support Radiation Oncology workflow. This study compares two different ultrashort echo-time sequences in the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired on 12 patients under an institution review board-approved prospective protocol. The two MRI sequences tested were ultra-short TE imaging using 3D radial acquisition (UTE), and using pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity-gradient at air-tissue boundaries, spatial dilations, from 0 to 4 mm, were applied to threshold-defined air regions from MR images. Receiver operating characteristic (ROC) analyses, by comparing predicted (defined by MR images) versus “true” regions of air and bone (defined by CT images), were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared to without corrections. When expanding the threshold-defined air volumes, as expected, sensitivity of air identification decreased with an increase in specificity of bone discrimination, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images. 
Post-processing strategies improved the discriminatory power of air from bone for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both post-processed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205
Noise reduction techniques for Bayer-matrix images
NASA Astrophysics Data System (ADS)
Kalevo, Ossi; Rantanen, Henry
2002-04-01
In this paper, arrangements for applying noise reduction (NR) techniques to images captured by a single-sensor digital camera are studied. Usually, the NR filter processes full three-color-component image data. This requires that the raw Bayer-matrix image data available from the image sensor first be interpolated using a Color Filter Array Interpolation (CFAI) method. Another choice is to process the raw Bayer-matrix image data directly. The advantages and disadvantages of both processing orders, before (pre-) CFAI and after (post-) CFAI, are studied with linear, multistage median, multistage median hybrid, and median-rational filters. The comparison is based on the quality of the output image, the processing power requirements, and the amount of memory needed. A solution that improves the preservation of details when NR filtering is performed before the CFAI is also proposed.
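The pre-CFAI ordering can be sketched by filtering each Bayer sub-plane independently, so the filter never mixes samples of different colors. A minimal numpy illustration with a 3x3 median follows; an RGGB layout is an assumption here, and the paper's filters are considerably more elaborate:

```python
import numpy as np

def median3x3(plane):
    """3x3 median filter on one Bayer color plane (edges left unfiltered)."""
    out = plane.copy()
    stacked = np.stack([plane[i:plane.shape[0] - 2 + i, j:plane.shape[1] - 2 + j]
                        for i in range(3) for j in range(3)])
    out[1:-1, 1:-1] = np.median(stacked, axis=0)
    return out

def denoise_bayer_rggb(raw):
    """Pre-CFAI denoising: filter each of the four RGGB sub-planes
    separately so that unlike-colored neighbors are never mixed."""
    out = raw.astype(np.float64).copy()
    for di in (0, 1):
        for dj in (0, 1):
            out[di::2, dj::2] = median3x3(out[di::2, dj::2])
    return out

raw = np.full((8, 8), 100.0)
raw[4, 4] = 255.0                 # an impulse-noise hit on one sample
denoised = denoise_bayer_rggb(raw)
```

The impulse at (4, 4) is replaced by the median of its same-color neighbors, while border samples pass through unchanged.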
Comprehensive Study of Plasma-Wall Sheath Transport Phenomena
2016-10-26
function of the applied thermo-mechanical stress. An experiment was designed to test whether and how the process of plasma erosion might depend on ... of exposed surface; a, b) pretest height and laser image, c, d) post-test height and laser image. For the following analysis, a curve fit of the ... normal to the ion beam. However, even with a one-dimensional simulation, features of a similar depth and profile to the post-test surface develop
Grid Computing Application for Brain Magnetic Resonance Image Processing
NASA Astrophysics Data System (ADS)
Valdivia, F.; Crépeault, B.; Duchesne, S.
2012-02-01
This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options, and execution parameters, and performs a single task such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results from system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
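The process-and-pipeline structure described above can be sketched as a chain of named stages, each with one input and one output port. This is a toy illustration of the pattern only, not the PHP application's actual API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Process:
    """One pipeline stage: a named task with a single input/output port."""
    name: str
    task: Callable[[Any], Any]

@dataclass
class Pipeline:
    processes: list = field(default_factory=list)

    def add(self, name, task):
        self.processes.append(Process(name, task))
        return self                       # allow chained construction

    def run(self, data):
        log = []                          # record of executed stages
        for p in self.processes:
            data = p.task(data)
            log.append(p.name)
        return data, log

pipe = (Pipeline()
        .add("scale", lambda x: [v * 2 for v in x])
        .add("offset", lambda x: [v + 1 for v in x]))
result, executed = pipe.run([1, 2, 3])
```

Each stage's output port feeds the next stage's input port, mirroring how the described pipelines chain registration and quality-control tasks.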
Comparison of ring artifact removal methods using flat panel detector based CT images
2011-01-01
Background Ring artifacts are concentric rings superimposed on tomographic images, often caused by defective or insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat panel detector. They may also be generated by objects that attenuate X-rays very differently in different projection directions. Ring artifact reduction techniques reported in the literature so far can be broadly classified into two groups. One category of approaches is based on sinogram processing, also known as pre-processing techniques; the other category performs processing on the 2-D reconstructed images and is recognized as post-processing techniques in the literature. The strengths and weaknesses of these categories of approaches are yet to be explored from a common platform. Method In this paper, a comparative study of the two categories of ring artifact reduction techniques designed for multi-slice CT instruments is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A very recently reported state-of-the-art sinogram-domain ring artifact correction method, which classifies the ring artifacts according to their strength and then corrects them using class-adaptive correction schemes, is also included in this comparative study. The first sinogram-domain correction method uses a wavelet-based technique to detect the corrupted pixels and then estimates the responses of the bad pixels using simple linear interpolation. The second sinogram-based correction method performs all filtering operations in the transform domain, i.e., in the wavelet and Fourier domains. On the other hand, the two post-processing-based correction techniques operate on the polar transform domain of the reconstructed CT images.
The first method extracts the ring artifact template vector using a homogeneity test and then corrects the CT images by subtracting the artifact template vector from the uncorrected images. The second post-processing-based correction technique performs median and mean filtering on the reconstructed images to produce the corrected images. Results The performances of the compared algorithms have been tested using both quantitative and perceptual measures. For quantitative analysis, two different numerical performance indices are chosen. In addition, different types of artifact patterns, e.g., single/band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured objects and also in hard objects, and rings from different flat-panel detectors, are analyzed to perceptually investigate the strengths and weaknesses of the five methods. An investigation has also been carried out to compare the efficacy of these algorithms in correcting volume images from a cone beam CT with the parameters determined from one particular slice. Finally, the capability of each correction technique in accurately retaining image information (e.g., a small object at the iso-center) in the corrected CT image has also been tested. Conclusions The results show that the performances of the algorithms are limited and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion to the image structure. To achieve diagnostic quality in the corrected slices, a combination of the two approaches (sinogram- and post-processing) can be used. The compared methods are also not suitable for correcting volume images from a cone beam flat-panel-detector-based CT. PMID:21846411
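In the polar domain a ring becomes a stripe of constant radius, which is why template-subtraction methods of the kind compared above work there. A simplified numpy sketch of the template idea follows; the homogeneity test is replaced here by a plain angular mean with a moving-average radial trend, which is an assumption for illustration:

```python
import numpy as np

def remove_rings_polar(polar_img):
    """Estimate a ring template as the angular mean minus a smooth radial
    trend, then subtract that template from every angular row."""
    radial_profile = polar_img.mean(axis=0)            # mean over angles
    pad = np.pad(radial_profile, 2, mode="edge")
    trend = np.convolve(pad, np.ones(5) / 5.0, mode="valid")
    template = radial_profile - trend                  # the ring signature
    return polar_img - template[None, :]

# a flat polar image (angle x radius) with one bright ring at radius 5
polar = np.full((8, 16), 50.0)
polar[:, 5] += 20.0
corrected = remove_rings_polar(polar)
```

On this toy input the ring amplitude drops from 20 to 4 and is spread across neighboring radii, showing both the strength of the approach and the residual distortion the comparison paper warns about.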
Technique of diffusion weighted imaging and its application in stroke
NASA Astrophysics Data System (ADS)
Li, Enzhong; Tian, Jie; Han, Ying; Wang, Huifang; Li, Wu; He, Huiguang
2003-05-01
To study the application of diffusion weighted imaging and image post-processing in the diagnosis of stroke, especially acute stroke, 205 patients were examined with a 1.5 T or 1.0 T MRI scanner, and T1-, T2- and diffusion-weighted images were obtained. Image post-processing was done with the "3D Med System" developed by our lab to analyze the data and compute the apparent diffusion coefficient (ADC) map. In the acute and subacute stages of stroke, the signal in cerebral infarction areas became hyperintense in T2- and diffusion-weighted images, and normal or hypointense in T1-weighted images. In the hyperacute stage, however, the signal was hyperintense only in the diffusion weighted images; the others were normal. In the chronic stage, the signal in T1- and diffusion-weighted imaging was hypointense, and hyperintense in T2-weighted imaging. Because the ADC declines markedly in the acute and subacute stages of stroke, the lesion area is hypointense in the ADC map. As the disease develops, the ADC gradually recovers and then becomes hyperintense in the ADC map in the chronic stage. Diffusion weighted imaging and ADC mapping can be used to diagnose stroke, especially in the hyperacute stage, and can differentiate acute from chronic stroke.
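The ADC map described above follows from the mono-exponential diffusion model, S_b = S_0 exp(-b * ADC), so ADC = -ln(S_b / S_0) / b. A minimal numpy sketch with an assumed b-value of 1000 s/mm² (the signal values below are illustrative, not patient data):

```python
import numpy as np

def adc_map(s0, sb, b=1000.0):
    """Apparent diffusion coefficient from two diffusion weightings:
    S_b = S_0 * exp(-b * ADC)  =>  ADC = -ln(S_b / S_0) / b
    (b in s/mm^2, ADC in mm^2/s)."""
    s0 = np.asarray(s0, dtype=np.float64)
    sb = np.asarray(sb, dtype=np.float64)
    ratio = np.clip(sb / np.maximum(s0, 1e-12), 1e-12, None)
    return -np.log(ratio) / b

# restricted diffusion (acute infarct) loses less signal at high b,
# which yields the LOWER ADC seen as hypointensity on the ADC map:
normal = adc_map(1000.0, 410.0)   # ~0.89e-3 mm^2/s
acute = adc_map(1000.0, 700.0)    # ~0.36e-3 mm^2/s
```

This is why an acute lesion is bright on the diffusion weighted image yet dark on the ADC map.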
Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network
Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian
2018-01-01
Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods. PMID:29652838
Siri, Sangeeta K; Latte, Mrityunjaya V
2017-11-01
Many different diseases can occur in the liver, including infections such as hepatitis, as well as cirrhosis, cancer, and damage from medication or toxins. The foremost stage of computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver image from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. Existing liver segmentation algorithms try to extract the exact liver image from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient, and the presence of noise. A novel approach is proposed to meet the challenges of extracting the exact liver image from abdominal CT scan images. The proposed approach consists of three phases: (1) pre-processing, (2) CT scan image transformation to a Neutrosophic Set (NS), and (3) post-processing. In pre-processing, noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, where it is expressed using three membership subsets: the true subset (T), the false subset (F), and the indeterminacy subset (I). This transform approximately extracts the liver image structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I) and the Chan-Vese (C-V) model is applied, with detection of an initial contour within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust, and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
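A neutrosophic transform of the general kind used above maps each pixel to (T, I, F) memberships. The sketch below uses one common formulation (T from the normalized local mean, I from each pixel's deviation from that mean, F = 1 - T); it is a simplified stand-in, not the paper's "new structure":

```python
import numpy as np

def neutrosophic_transform(img, w=3):
    """Map a grayscale image into (T, I, F) neutrosophic subsets using a
    w x w local mean. T: normalized local mean; I: normalized deviation
    of the pixel from its local mean; F: complement of T."""
    img = img.astype(np.float64)
    pad = np.pad(img, w // 2, mode="edge")
    h, wd = img.shape
    local_mean = np.zeros_like(img)
    for i in range(w):                    # box filter via shifted sums
        for j in range(w):
            local_mean += pad[i:i + h, j:j + wd]
    local_mean /= w * w
    t_span = local_mean.max() - local_mean.min() or 1.0
    T = (local_mean - local_mean.min()) / t_span
    delta = np.abs(img - local_mean)      # indeterminacy source
    d_span = delta.max() - delta.min() or 1.0
    I = (delta - delta.min()) / d_span
    F = 1.0 - T
    return T, I, F

img = np.full((4, 4), 10.0)
img[2, 2] = 200.0                         # one "indeterminate" outlier
T, I, F = neutrosophic_transform(img)
```

The outlier pixel dominates the indeterminacy subset I, which is the subset the paper's post-processing (morphology plus Chan-Vese) operates on.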
NASA Astrophysics Data System (ADS)
Yang, Yanlong; Zhou, Xing; Li, Runze; Van Horn, Mark; Peng, Tong; Lei, Ming; Wu, Di; Chen, Xun; Yao, Baoli; Ye, Tong
2015-03-01
Bessel beams have been used in many applications due to their unique optical property of maintaining an unchanged intensity profile during propagation. In imaging applications, Bessel beams have been successfully used to provide extended focuses for volumetric imaging and a uniform illumination plane in light-sheet microscopy. Coupled with two-photon excitation, Bessel beams have been successfully used to realize fluorescence projected volumetric imaging. We previously demonstrated a stereoscopic solution, two-photon fluorescence stereomicroscopy (TPFSM), for recovering depth information in volumetric imaging with Bessel beams. In TPFSM, tilted Bessel beams are used to generate stereoscopic images on a laser scanning two-photon fluorescence microscope; after post image processing, we could provide 3D perception of acquired volume images viewed through anaglyph 3D glasses. However, tilted Bessel beams were generated by shifting either an axicon or an objective laterally; the slow imaging speed and severe aberrations made this hard to use in real-time volume imaging. In this article, we report recent improvements of TPFSM with a newly designed scanner and imaging software, which allow 3D stereoscopic imaging without moving any optical components of the setup. These improvements have dramatically increased focusing quality and imaging speed, so that TPFSM can potentially be performed in real time to provide 3D visualization in scattering media without post image processing.
NASA Astrophysics Data System (ADS)
Panda, Kalpataru; Sundaravel, B.; Panigrahi, B. K.; Chen, H.-C.; Huang, P.-C.; Shih, W.-C.; Lo, S.-C.; Lin, L.-J.; Lee, C.-Y.; Lin, I.-N.
2013-03-01
A thin layer of iron coating and subsequent post-annealing (Fe-coating/post-annealing) is seen to significantly enhance the electron field emission (EFE) properties of ultrananocrystalline diamond (UNCD) films. The best EFE properties, with a turn-on field (E0) of 1.98 V/μm and a current density (Je) of 705 μA/cm2 at 7.5 V/μm, are obtained for films Fe-coated/post-annealed at 900 °C in an H2 atmosphere. The mechanism behind the enhanced EFE properties of Fe-coated/post-annealed UNCD films is explained by microstructural analysis, which shows formation of a nanographitic phase surrounding the Fe (or Fe3C) nanoparticles. The role of the nanographitic phase in improving the emission sites of Fe-coated/post-annealed UNCD films is clearly revealed by current imaging tunneling spectroscopy (CITS) images. The CITS images show a significant increase in emission sites in Fe-coated/post-annealed UNCD films compared to the as-deposited one. Enhanced emission sites are mostly seen around the boundaries of the Fe (or Fe3C) nanoparticles formed by the Fe-coating/post-annealing processes. Moreover, the Fe-coating/post-annealing processes enhance the EFE properties of UNCD films more than those of microcrystalline diamond films. This behavior is attributed to the unique granular structure of the UNCD films: their nano-sized and uniformly distributed grains result in markedly smaller and more densely populated Fe clusters, which in turn induce finer and more densely populated nanographite clusters.
Tan, A C; Richards, R
1989-01-01
Three-dimensional (3D) medical graphics is becoming popular in clinical use on tomographic scanners. Research work in 3D reconstructive display of computerized tomography (CT) and magnetic resonance imaging (MRI) scans on conventional computers has produced many so-called pseudo-3D images. The quality of these images depends on the rendering algorithm, the coarseness of the digitized object, the number of grey levels and the image screen resolution. CT and MRI data are fundamentally voxel based and they produce images that are coarse because of the resolution of the data acquisition system. 3D images produced by the Z-buffer depth shading technique suffer loss of detail when complex objects with fine textural detail need to be displayed. Attempts have been made to improve the display of voxel objects, and existing techniques have shown the improvement possible using these post-processing algorithms. The improved rendering technique works on the Z-buffer image to generate a shaded image using a single light source in any direction. The effectiveness of the technique in generating a shaded image has been shown to be a useful means of presenting 3D information for clinical use.
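The improved rendering described above can be sketched as gradient-based shading of the Z-buffer: estimate surface normals from depth gradients and apply a single Lambertian light source. This is a generic simplification of that family of post-processing algorithms, not the paper's exact method:

```python
import numpy as np

def shade_zbuffer(z, light=(0.0, 0.0, 1.0)):
    """Gradient shading of a Z-buffer image: estimate surface normals
    from the depth gradients and apply Lambertian shading with one
    light source in an arbitrary direction."""
    gy, gx = np.gradient(z.astype(np.float64))
    # the surface normal of z = f(x, y) is (-dz/dx, -dz/dy, 1), normalized
    n = np.stack([-gx, -gy, np.ones_like(z, dtype=np.float64)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    l = np.asarray(light, dtype=np.float64)
    l /= np.linalg.norm(l)
    return np.clip((n * l).sum(axis=-1), 0.0, 1.0)

flat_shade = shade_zbuffer(np.zeros((4, 4)))             # facing the light
ramp_shade = shade_zbuffer(np.tile(np.arange(4.0), (4, 1)))  # tilted plane
```

A flat depth map facing an overhead light shades to full brightness, while a tilted plane is uniformly dimmed by the cosine of its slope, which is what restores the perception of shape that plain depth shading loses.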
NASA Astrophysics Data System (ADS)
Wangerin, Kristen A.; Muzi, Mark; Peterson, Lanell M.; Linden, Hannah M.; Novakova, Alena; Mankoff, David A.; E Kinahan, Paul
2017-05-01
We developed a method to evaluate variations in the PET imaging process in order to characterize the relative ability of static and dynamic metrics to measure breast cancer response to therapy in a clinical trial setting. We performed a virtual clinical trial by generating 540 independent and identically distributed PET imaging study realizations for each of 22 original dynamic fluorodeoxyglucose (18F-FDG) breast cancer patient studies pre- and post-therapy. Each noise realization accounted for known sources of uncertainty in the imaging process, such as biological variability and SUV uptake time. Four definitions of SUV were analyzed: SUVmax, SUVmean, SUVpeak, and SUV50%. We performed a ROC analysis on the resulting SUV and kinetic parameter uncertainty distributions to assess the impact of the variability on the measurement capabilities of each metric. The kinetic macroparameter Ki showed more variability than SUV (mean CV: Ki = 17%, SUV = 13%), but the Ki pre- and post-therapy distributions also showed increased separation compared to the SUV pre- and post-therapy distributions (mean normalized difference: Ki = 0.54, SUV = 0.27). For the patients who did not show perfect separation between the pre- and post-therapy parameter uncertainty distributions (ROC AUC < 1), dynamic imaging outperformed SUV in distinguishing metabolic change in response to therapy, ranging from 12 to 14 of 16 patients over all SUV definitions and uptake time scenarios (p < 0.05). For the patient cohort in this study, which is comprised of non-high-grade ER+ tumors, Ki outperformed SUV in an ROC analysis of the parameter uncertainty distributions pre- and post-therapy. This methodology can be applied to different scenarios with the ability to inform the design of clinical trials using PET imaging.
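The per-patient ROC analysis above reduces to asking how well the pre- and post-therapy uncertainty distributions are ordered, which equals the normalized Mann-Whitney U statistic. A small sketch with synthetic numbers (not trial data):

```python
import numpy as np

def roc_auc(pre, post):
    """ROC AUC for separating pre- from post-therapy parameter
    distributions, computed as the normalized Mann-Whitney U statistic:
    the probability that a random post-therapy value falls below a
    random pre-therapy value (ties counted as half)."""
    pre = np.asarray(pre, dtype=np.float64)
    post = np.asarray(post, dtype=np.float64)
    wins = (post[:, None] < pre[None, :]).sum()
    ties = (post[:, None] == pre[None, :]).sum()
    return (wins + 0.5 * ties) / (pre.size * post.size)

# perfectly separated distributions give AUC 1; identical ones give 0.5
perfect = roc_auc([5.0, 6.0, 7.0], [1.0, 2.0, 3.0])
chance = roc_auc([1.0, 2.0], [1.0, 2.0])
```

Patients with AUC = 1 under every noise realization are the "perfect separation" cases the abstract sets aside before comparing Ki with SUV.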
Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image
NASA Astrophysics Data System (ADS)
Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.
2016-03-01
A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image is formed through a series of signal processing steps: beamforming, followed by envelope detection, and ending with log compression. However, the image is defocused when PA signals are the input, because the delay function is incorrect for PA reception. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA rebeamforming algorithm can then be applied. Taking the B-mode image as input, we first recovered the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier frequency information. The US post-beamformed RF data is then used as pre-beamformed RF data for the adaptive PA beamforming algorithm, and a new delay function is applied, taking into account that the focal depth in US beamforming is at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and demonstrated experimentally using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, due to information loss during envelope detection and convolution of the RF information.
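The first recovery step, log decompression, can be sketched as mapping each 8-bit B-mode value back onto an assumed display dynamic range. The 60 dB range below is an assumption; real scan-converter parameters are vendor-specific and generally unpublished:

```python
import numpy as np

def log_decompress(bmode, dynamic_range_db=60.0):
    """Invert the scanner's log compression (sketch): a B-mode pixel in
    [0, 255] maps back to envelope amplitude over `dynamic_range_db`,
    with 255 corresponding to full scale (0 dB)."""
    db = (bmode.astype(np.float64) / 255.0 - 1.0) * dynamic_range_db
    return 10.0 ** (db / 20.0)   # amplitude relative to full scale

envelope = log_decompress(np.array([255.0, 127.5, 0.0]))
```

Full scale maps back to amplitude 1, the mid-gray value to -30 dB, and black to -60 dB; the subsequent convolution with an impulse response (not shown) would restore the missing carrier before rebeamforming.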
Correspondence between fiber post and drill dimensions for post canal preparation.
Portigliatti, Ricardo Pablo; Tumini, José Luis; Bertoldi Hepburn, Alejandro Daniel; Aromando, Romina Flavia; Olmos, Jorge Lorenzo
2017-12-01
To compare fiber posts of several calibers and trademarks to their corresponding root canal preparation drills. Three widely used endodontic post brands and their drills were evaluated: Exacto, ParaPost Taper Lux, and Macro-Lock Illusion X-RO. Fiber posts and drills were microphotographed with a scanning electron microscope and the images were analyzed using the ImageJ image processing software. Fiber post diameter at the apical extreme (Pd0), fiber post diameter at 5 mm from the apical extreme (Pd5), drill diameter at the apical extreme (Dd0), and drill diameter at 5 mm from the apical extreme (Dd5) were analyzed. The data were statistically analyzed using Student's t-test. Exacto posts 0.5 showed larger dimensions than their corresponding drills (P < 0.05) at Pd0. Macro-Lock posts showed no significant differences vs. their drills at Pd0 in any of the studied groups. ParaPost drills 4.5, 5, and 5.5 were statistically significantly larger than their posts at Dd0 (P < 0.05). Exacto posts 0.5 and 1 showed larger dimensions than their drills measured at Pd5 (P < 0.05). Exacto posts number 2 showed smaller calibers than their corresponding drills at Pd5 (P < 0.05). Macro-Lock drills number 4 and ParaPost drills number 5 were larger than their posts at Dd5 (P < 0.05). Poor spatial correspondence between post and drill dimensions can adversely affect the film thickness of the resin cement, diminishing bond strength due to polymerization shrinkage. The lack of correspondence in size between posts and drills may lead to the formation of empty chambers between the post and the endodontic obturation, with excessive luting cement thickness, thus inducing critical C-factor stresses.
Acharya, Rajendra Udyavara; Yu, Wenwei; Zhu, Kuanyi; Nayak, Jagadish; Lim, Teik-Cheng; Chan, Joey Yiptong
2010-08-01
The human eye is a highly sophisticated organ, with interrelated subsystems such as the retina, pupil, iris, cornea, lens, and optic nerve. Eye disorders such as cataract are a major health problem in old age. A cataract is formed by clouding of the lens; it is painless and develops slowly over a long period. A cataract slowly diminishes vision, leading to blindness. It is most common at an average age of 65, and one third of people of this age worldwide have a cataract in one or both eyes. A system for detecting cataract and testing the efficacy of post-cataract surgery using optical images is proposed using artificial intelligence techniques. Image processing and the fuzzy K-means clustering algorithm are applied to the raw optical images to detect features specific to the three classes to be classified. The backpropagation algorithm (BPA) is then used for classification. In this work, we used 140 optical images belonging to the three classes. The ANN classifier showed an average accuracy of 93.3% in detecting normal, cataract, and post-cataract optical images. The proposed system exhibited 98% sensitivity and 100% specificity, which indicates that the results are clinically significant. This system can also be used to test the efficacy of cataract surgery by testing post-surgery optical images.
Image enhancement in positron emission mammography
NASA Astrophysics Data System (ADS)
Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.
2017-02-01
Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of breast images previously reconstructed by a commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. The method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Results: In all of the patients' breast studies, the post-processed images proved to have higher resolution and lower noise compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations, with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging iterative deconvolution algorithm with a novel resolution-subsets-based approach, RSEMD, that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial for facilitating diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).
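RSEMD is described above as an extended Richardson-Lucy algorithm, so the classic update it builds on is worth spelling out. The 1-D sketch below is the generic Richardson-Lucy iteration, not the RSEMD implementation, and omits the resolution-subset extension:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=20):
    """Classic Richardson-Lucy deconvolution (1-D): multiplicatively
    update the estimate by the back-projected ratio of observed data
    to the current blurred estimate."""
    psf = psf / psf.sum()                 # normalize the point spread
    psf_flip = psf[::-1]                  # adjoint of the blur operator
    est = np.full_like(observed, observed.mean(), dtype=np.float64)
    for _ in range(iterations):
        blurred = np.convolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

# a point source blurred by a 3-tap PSF, then restored
x = np.zeros(21)
x[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(x, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=20)
```

After 20 iterations the blurred peak (0.5) is sharpened back toward the original delta, the same resolution-recovery behavior reported for the PEM images.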
High Resolution Near Real Time Image Processing and Support for MSSS Modernization
2012-09-01
[Report documentation page fragment; recoverable content: the current CONOPS for PCID/ASPIRE high-resolution post-processing is depicted in Fig. 4; experiments were performed and subsequently addressed in papers and presentations [3, 4] that demonstrated system behavior.]
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images in pre- and post-surgery operations is necessary for beginning and speeding up the recovery process. Partial differential equation-based models have become a powerful and well-known tool in different areas of image processing and computer vision, such as denoising, multiscale image analysis, and edge detection. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. The paper introduces two strategies: an efficient software implementation of the explicit scheme, attractive for its simplicity but mathematically unstable unless the step size is constrained, for solving the anisotropic diffusion filter; and an automatic stopping criterion that, unlike other stopping criteria, takes into consideration only the input image, offering advantages in denoised image quality, ease of use, and run time. Various medical images are examined to confirm the claim.
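The filter underlying such algorithms is the classic Perona-Malik anisotropic diffusion, usually solved with the explicit scheme mentioned above, which is stable only for small time steps. A minimal NumPy sketch follows; the relative-change stopping rule here is a simple stand-in for illustration, not the input-image-based criterion proposed in the paper.

```python
import numpy as np

def perona_malik(img, kappa=30.0, dt=0.2, max_iter=100, tol=1e-4):
    """Explicit Perona-Malik anisotropic diffusion.

    dt <= 0.25 keeps the explicit 2-D scheme stable; periodic boundaries
    (np.roll) are used for brevity. The loop stops when the mean update
    falls below tol times the mean intensity -- a simple stand-in stopping
    rule, not the criterion proposed in the paper.
    """
    u = np.asarray(img, dtype=float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # conductance: small across strong edges
    for _ in range(max_iter):
        # differences to the four nearest neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        update = dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        u += update
        if np.abs(update).mean() < tol * np.abs(u).mean():
            break
    return u
```

Because the flux between each pixel pair is antisymmetric, the scheme conserves the image mean while reducing noise variance in homogeneous regions.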
Quantitative subsurface analysis using frequency modulated thermal wave imaging
NASA Astrophysics Data System (ADS)
Subhani, S. K.; Suresh, B.; Ghali, V. S.
2018-01-01
Quantitative depth analysis of subsurface anomalies with enhanced depth resolution is a challenging task in thermographic depth estimation. Frequency modulated thermal wave imaging, introduced earlier, provides complete depth scanning of the object by stimulating it with a suitable band of frequencies and then analyzing the resulting thermal response with a suitable post-processing approach to resolve subsurface details. Conventional Fourier-transform-based post-processing, however, unscrambles the frequencies with limited frequency resolution and therefore yields finite depth resolution. The spectral zooming provided by the chirp z-transform offers enhanced frequency resolution, which in turn improves the depth resolution for axially exploring the finest subsurface features. Quantitative depth analysis with this augmented depth resolution is proposed to provide a close estimate of the actual depth of a subsurface anomaly. This manuscript experimentally validates the enhanced depth resolution using non-stationary thermal wave imaging and offers a first, unique solution for quantitative depth estimation in frequency modulated thermal wave imaging.
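The spectral zooming that the chirp z-transform provides can be illustrated with a direct zoom-DFT: evaluating the spectrum on a dense frequency grid restricted to a narrow band of interest. The CZT computes the same samples efficiently via Bluestein's algorithm; the direct O(N·m) form below keeps the sketch self-contained. Function name and parameters are illustrative.

```python
import numpy as np

def zoom_dft(x, fs, f_lo, f_hi, m):
    """Evaluate the DTFT of x on m points in [f_lo, f_hi] Hz.

    Direct O(N*m) evaluation; the chirp z-transform returns the same
    samples in O(N log N) via Bluestein's algorithm. The zoomed grid
    spacing (f_hi - f_lo)/(m - 1) can be far finer than the plain
    FFT bin width fs/N, which is the point of spectral zooming.
    """
    n = np.arange(len(x))
    freqs = np.linspace(f_lo, f_hi, m)
    kernel = np.exp(-2j * np.pi * np.outer(freqs, n) / fs)
    return freqs, kernel @ x
```

Here a tone can be localized to well below the FFT bin width, which in the thermal-wave setting translates into finer depth resolution.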
Gao, Bo-Cai; Liu, Ming
2013-01-01
Surface reflectance spectra retrieved from remotely sensed hyperspectral imaging data using radiative transfer models often contain residual atmospheric absorption and scattering effects. The reflectance spectra may also contain minor artifacts due to errors in radiometric and spectral calibrations. We have developed a fast smoothing technique for post-processing of retrieved surface reflectance spectra. In the present spectral smoothing technique, model-derived reflectance spectra are first fit using moving filters derived with a cubic spline smoothing algorithm. A common gain curve, which captures the minor artifacts present in the model-derived reflectance spectra, is then derived. This gain curve is finally applied to all of the reflectance spectra in a scene to obtain the spectrally smoothed surface reflectance spectra. Results from analysis of hyperspectral imaging data collected with the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) are given. Comparisons between the smoothed spectra and those derived with the empirical line method are also presented. PMID:24129022
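The smoothing step can be sketched with a cubic smoothing spline. This is only an illustration of the idea: SciPy's `UnivariateSpline` stands in for the paper's moving cubic-spline filters, the smoothing budget `s` is illustrative, and a per-spectrum gain is computed here, whereas the paper combines many pixels into a single common gain curve applied scene-wide.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_gain(wavelengths, spectrum, s):
    """Fit a cubic smoothing spline to one retrieved reflectance spectrum
    and return the multiplicative gain (smoothed / raw) that removes
    high-frequency residual artifacts. `s` is the spline's residual
    budget (illustrative; tuned to the artifact energy in practice)."""
    spl = UnivariateSpline(wavelengths, spectrum, k=3, s=s)
    return spl(wavelengths) / np.maximum(spectrum, 1e-6)
```

In the paper's scheme, gains derived this way would be averaged over many pixels into one common gain curve and then applied to every spectrum in the scene.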
Achieving superresolution with illumination-enhanced sparsity.
Yu, Jiun-Yann; Becker, Stephen R; Folberth, James; Wallin, Bruce F; Chen, Simeng; Cogswell, Carol J
2018-04-16
Recent advances in superresolution fluorescence microscopy have been limited by a belief that surpassing two-fold resolution enhancement of the Rayleigh resolution limit requires stimulated emission or the fluorophore to undergo state transitions. Here we demonstrate a new superresolution method that requires only image acquisitions with a focused illumination spot and computational post-processing. The proposed method utilizes the focused illumination spot to effectively reduce the object size and enhance the object sparsity, and consequently increases the resolution and accuracy through nonlinear image post-processing. This method clearly resolves 70 nm resolution test objects emitting ~530 nm light with a 1.4 numerical aperture (NA) objective, and, when imaging through a 0.5 NA objective, exhibits high spatial frequencies comparable to a 1.4 NA widefield image, both demonstrating a resolution enhancement above two-fold of the Rayleigh resolution limit. More importantly, we examine how the resolution increases with photon numbers, and show that the more-than-two-fold enhancement is achievable with realistic photon budgets.
NASA Astrophysics Data System (ADS)
Nagai, H.; Ohki, M.; Abe, T.
2017-12-01
Urgent crisis response to a hurricane-induced flood requires rapid delivery of a flood map covering a broad region. However, there are no standard threshold values for automatic flood identification from pre- and post-event images obtained by satellite-based synthetic aperture radars (SARs), which can hamper prompt data delivery for operational use. Furthermore, a single pre-flood SAR image does not always represent potential water surfaces and river flows, especially in tropical flat lands that are strongly influenced by the seasonal precipitation cycle. We are therefore developing a new method of flood mapping using PALSAR-2, an L-band SAR, which is less affected by temporal surface changes. Specifically, a mean-value image and a standard-deviation image are calculated from a series of pre-flood SAR images. These are combined with a post-flood SAR image to obtain the normalized backscatter amplitude difference (NoBADi): the difference between the post-flood image and the mean-value image is divided by the standard-deviation image to emphasize anomalous water extents. Flooded areas are then obtained automatically from the NoBADi images as lower-value pixels, avoiding potential water surfaces. We applied this method to PALSAR-2 images acquired on Sept. 8, 10, and 12, 2017, covering flooded areas in the central Dominican Republic and in west Florida, U.S., affected by Hurricane Irma. The resulting flood outlines were validated against flooded areas manually delineated from high-resolution optical satellite images, showing higher consistency and less uncertainty than previous methods (i.e., a simple pre/post flood difference and pre/post coherence changes). The NoBADi method has great potential to provide a reliable flood map for future flood hazards, unhampered by cloud cover, seasonal surface changes, or "casual" thresholds in the flood identification process.
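The NoBADi index as described (post-flood amplitude minus the pre-flood temporal mean, divided by the pre-flood temporal standard deviation) maps directly onto array operations. The sketch below follows that definition; the −2σ threshold is illustrative, not the paper's operational value.

```python
import numpy as np

def nobadi(post, pre_stack, eps=1e-6):
    """Normalized backscatter amplitude difference: how many pre-flood
    standard deviations the post-flood amplitude deviates from the
    pre-flood temporal mean, per pixel."""
    mean = pre_stack.mean(axis=0)
    std = pre_stack.std(axis=0)
    return (post - mean) / np.maximum(std, eps)

def flood_mask(post, pre_stack, threshold=-2.0):
    """Flooded open water backscatters weakly, so flooding shows up as
    strongly negative NoBADi values (threshold is illustrative)."""
    return nobadi(post, pre_stack) < threshold
```

Normalizing by the pre-flood standard deviation suppresses pixels that are always variable (e.g., seasonal water surfaces), which is the stated advantage over a simple pre/post difference.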
Poro-elastic Rebound Along the Landers 1992 Earthquake Surface Rupture
NASA Technical Reports Server (NTRS)
Peltzer, G.; Rosen, P.; Rogez, F.; Hudnut, K.
1998-01-01
Maps of post-seismic surface displacement after the 1992, Landers, California earthquake, generated by interferometric processing of ERS-1 Synthetic Aperture Radar (SAR) images, reveal effects of various deformation processes near the 1992 surface rupture.
Nuts and Bolts of CEST MR imaging
Liu, Guanshu; Song, Xiaolei; Chan, Kannie W.Y.
2013-01-01
Chemical Exchange Saturation Transfer (CEST) has emerged as a novel MRI contrast mechanism that is well suited for molecular imaging studies. This mechanism can be used to detect small amounts of contrast agent through saturation of rapidly exchanging protons on these agents, allowing a wide range of applications. CEST technology has a number of indispensable features, such as the possibility of simultaneously detecting multiple "colors" of agents and detecting changes in their environment (e.g., pH, metabolites) through MR contrast. A large number of new imaging schemes and techniques have now been developed to improve temporal resolution and specificity and to correct for the influence of B0 and B1 inhomogeneities. In this review, the techniques developed over the last decade are summarized, with the different imaging strategies and post-processing methods discussed from a practical point of view, including their relative merits for detecting CEST agents. The goal of the present work is to provide the reader with a fundamental understanding of the techniques developed, and to provide guidance to help refine future applications of this technology. This review is organized into three main sections, Basics of CEST Contrast, Implementation, and Post-Processing, and also includes a brief Introduction and Summary. The Basics of CEST Contrast section contains a description of the relevant background theory for saturation transfer and frequency-labeled transfer, and a brief discussion of methods to determine exchange rates. The Implementation section contains a description of the practical considerations in conducting CEST MRI studies, including the choice of magnetic field, pulse sequence, saturation pulse, imaging scheme, and strategies to separate MT and CEST.
The Post-Processing section contains a description of the typical image processing employed for B0/B1 correction, Z-spectral interpolation, frequency selective detection, and improving CEST contrast maps. PMID:23303716
New techniques for fluorescence background rejection in microscopy and endoscopy
NASA Astrophysics Data System (ADS)
Ventalon, Cathie
2009-03-01
Confocal microscopy is a popular technique in the bioimaging community, mainly because it provides optical sectioning. However, its standard implementation requires three-dimensional scanning of focused illumination throughout the sample. Efficient non-scanning alternatives have been implemented, among them the simple and well-established incoherent structured illumination microscopy (SIM) [1]. We recently proposed a similar technique, called Dynamic Speckle Illumination (DSI) microscopy, wherein the incoherent grid illumination pattern is replaced with a coherent speckle illumination pattern from a laser, taking advantage of the fact that speckle contrast is well maintained in a scattering medium, which makes the technique well adapted to tissue imaging [2]. DSI microscopy relies on illuminating a sample with a sequence of dynamic speckle patterns and on an image processing algorithm based only on a priori knowledge of speckle statistics. The choice of this post-processing algorithm is crucial for obtaining good sectioning strength: in particular, we developed a novel post-processing algorithm based on wavelet pre-filtering of the raw images and obtained near-confocal fluorescence sectioning in a mouse brain labeled with GFP, with good image quality maintained throughout a depth of ~100 μm [3]. With the aim of imaging fluorescent tissue at greater depth, we recently applied structured illumination to endoscopy. We used a similar set-up wherein the illumination pattern (a one-dimensional grid) is transported to the sample with an imaging fiber bundle with a miniaturized objective, and the fluorescence image is collected through the same bundle. Using a post-processing algorithm similar to the one previously described [3], we obtained high-quality images of fluorescein-labeled rat colonic mucosa [4], establishing the potential of our endomicroscope for bioimaging applications.
Ref: [1] M. A. A. Neil et al., Opt. Lett. 22, 1905 (1997); [2] C. Ventalon et al., Opt. Lett. 30, 3350 (2005); [3] C. Ventalon et al., Opt. Lett. 32, 1417 (2007); [4] N. Bozinovic et al., Opt. Express 16, 8016 (2008)
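The speckle statistics that DSI exploits can be shown with the most basic estimator: in-focus fluorescence fluctuates with the speckle sequence, while out-of-focus background is washed out and stays nearly constant, so a pixelwise standard deviation over the stack yields an optically sectioned image. The wavelet-prefiltered algorithm of [3] is considerably more refined; this sketch captures only the core idea.

```python
import numpy as np

def dsi_section(stack):
    """Basic DSI sectioning estimate from a (n_frames, ny, nx) stack:
    the pixelwise standard deviation over the speckle sequence keeps
    the in-focus, speckle-modulated signal and rejects the nearly
    constant out-of-focus background."""
    return stack.std(axis=0)
```

A widefield (mean) image would retain both components; the fluctuation image is what provides the sectioning.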
Computational method for multi-modal microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2017-02-01
In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system that simultaneously yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging based on the transport of intensity equation (TIE). We then provide an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable-lens-based TIE system, combined with the appropriate post-processing algorithms, can achieve a variety of promising imaging modalities in parallel with quantitative phase images for the dynamic study of cellular processes.
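For the uniform-intensity case, the TIE reduces to a Poisson equation, dI/dz = -(I0/k)∇²φ, which can be inverted with an FFT-based inverse Laplacian. The sketch below illustrates only this core phase-retrieval step (with periodic boundaries and an arbitrary DC phase), not the authors' tunable-lens system or the multi-modal post-processing built on top of it.

```python
import numpy as np

def tie_phase(dIdz, I0, k, dx):
    """Solve the uniform-intensity TIE, dI/dz = -(I0/k) * laplacian(phi),
    for phi via an FFT-based inverse Laplacian (periodic boundaries).
    The mean (DC) phase is undetermined and is set to zero."""
    ny, nx = dIdz.shape
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    KY, KX = np.meshgrid(ky, kx, indexing="ij")
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # avoid division by zero at DC
    lap_phi_hat = np.fft.fft2(-(k / I0) * dIdz)   # F[laplacian(phi)]
    phi_hat = lap_phi_hat / (-k2)        # inverse Laplacian in Fourier space
    phi_hat[0, 0] = 0.0
    phi = np.fft.ifft2(phi_hat).real
    return phi - phi.mean()
```

In practice the axial derivative dI/dz is estimated from a through-focus intensity stack (here, from the tunable lens); non-uniform intensity requires the full TIE rather than this Poisson reduction.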
NASA Astrophysics Data System (ADS)
Gaffney, Kevin P.; Aghaei, Faranak; Battiste, James; Zheng, Bin
2017-03-01
Detection of residual brain tumor is important to evaluate the efficacy of brain cancer surgery, determine the optimal strategy for further radiation therapy if needed, and assess the ultimate prognosis of the patient. Brain MRI is a commonly used imaging modality for this task. In order to distinguish between residual tumor and surgery-induced scar tissue, two sets of MRI scans are acquired, pre- and post-gadolinium contrast injection; residual tumors are enhanced only in the post-contrast-injection images. However, subjectively reading and quantifying this type of brain MR image makes it difficult to detect real residual tumor regions and measure total residual tumor volume. To help address this clinical difficulty, we developed and tested a new interactive computer-aided detection scheme, which consists of three consecutive image processing steps: 1) segmentation of the intracranial region, 2) image registration and subtraction, and 3) tumor segmentation and refinement. The scheme also includes a specially designed and implemented graphical user interface (GUI) platform. When using this scheme, the two sets of pre- and post-contrast-injection images are first automatically processed to detect and quantify residual tumor volume. A user can then visually examine the segmentation results and conveniently guide the scheme to correct any detection or segmentation errors. The scheme has been tested repeatedly on five cases. Given the high performance and robustness observed in these tests, the scheme is ready for clinical studies to help clinicians investigate the association between this quantitative image marker and patient outcome.
Erberich, Stephan G; Bhandekar, Manasee; Chervenak, Ann; Kesselman, Carl; Nelson, Marvin D
2007-01-01
Functional MRI is successfully being used in clinical and research applications including preoperative planning, language mapping, and outcome monitoring. However, clinical use of fMRI is less widespread due to the complexity of imaging, image workflow, and post-processing, and a lack of algorithmic standards hindering result comparability. As a consequence, widespread adoption of fMRI as a clinical tool is low, contributing to uncertainty among community physicians about how to integrate fMRI into practice. In addition, training of physicians in fMRI is in its infancy and requires both clinical and technical understanding. Therefore, many institutions that perform fMRI maintain a team of basic researchers and physicians to operate fMRI as a routine imaging tool. To provide fMRI as an advanced diagnostic tool for the benefit of a larger patient population, image acquisition and image post-processing must be streamlined, standardized, and made available to any institution that does not have these resources in-house. Here we describe a software architecture, the functional imaging laboratory (funcLAB/G), which addresses (i) standardized image processing using Statistical Parametric Mapping and (ii) its extension to secure sharing and availability for the community using standards-based Grid technology (Globus Toolkit). funcLAB/G has the potential to overcome the limitations of fMRI in clinical use and thus make standardized fMRI available to the broader healthcare enterprise through the Internet and HealthGrid Web Services technology.
NASA Astrophysics Data System (ADS)
Turola, Massimo; Meah, Chris J.; Marshall, Richard J.; Styles, Iain B.; Gruppetta, Stephen
2015-06-01
A plenoptic imaging system simultaneously records the intensity and the direction of the rays of light. This additional information enables many post-processing features, such as 3D imaging, synthetic refocusing, and potentially the evaluation of wavefront aberrations. In this paper the effects of low-order aberrations on a simple plenoptic imaging system are investigated using a wave-optics simulation approach.
Automated knot detection with visual post-processing of Douglas-fir veneer images
C.L. Todoroki; Eini C. Lowell; Dennis Dykstra
2010-01-01
Knots on digital images of 51 full veneer sheets, obtained from nine peeler blocks crosscut from two 35-foot (10.7 m) long logs and one 18-foot (5.5 m) log from a single Douglas-fir tree, were detected using a two-phase algorithm. The algorithm was developed using one image, the Development Sheet, refined on five other images, the Training Sheets, and then applied to...
Real-time catheter localization and visualization using three-dimensional echocardiography
NASA Astrophysics Data System (ADS)
Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil
2017-03-01
Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, called Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beam-formed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD beam-formed image on a clinical ultrasound scanner simultaneously. A frame rate of 15 FPS was achieved.
Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Don J.
2010-01-01
Signal- and image-processing methods are commonly needed to extract information from the waveforms, improve image resolution, and highlight defects in an image. Since some similarity exists among all waveform-based nondestructive evaluation (NDE) methods, a common software platform containing multiple signal- and image-processing techniques makes sense where multiple techniques, scientists, engineers, and organizations are involved. NDE Wave & Image Processor Version 2.0 software provides a single, integrated signal- and image-processing and analysis environment for total NDE data processing and analysis. It brings some of the most useful algorithms developed for NDE over the past 20 years into a commercial-grade product. The software can import signal/spectroscopic data, image data, and image series data, and offers the user hundreds of basic and advanced signal- and image-processing capabilities, including 1D and 2D wavelet-based de-noising, de-trending, and filtering. Batch processing is included so that an optimized sequence of processing operations can be applied to entire folders of signals, spectra, and images. Additionally, an extensive interactive model-based curve-fitting facility has been included to allow fitting of spectroscopy data, such as that from Raman spectroscopy. An extensive joint time-frequency module is included for analysis of non-stationary or transient data, such as that from acoustic emission, vibration, or earthquake measurements.
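The core of wavelet-based de-noising of the kind listed above is coefficient thresholding: transform, shrink the detail coefficients, and invert. A self-contained one-level Haar soft-threshold sketch follows; this is not the toolbox's implementation, and the threshold value is illustrative.

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar wavelet soft-threshold denoising (1-D, even length).

    Smooth signals concentrate in the approximation band, while white
    noise splits evenly between bands, so shrinking the detail
    coefficients removes noise with little signal loss.
    """
    x = np.asarray(signal, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)                   # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y
```

Multi-level decompositions (as in production NDE software) repeat the split on the approximation band and threshold each detail level, typically with a noise-adaptive threshold.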
Extended depth of field imaging for high speed object analysis
NASA Technical Reports Server (NTRS)
Frost, Keith (Inventor); Ortyn, William (Inventor); Basiji, David (Inventor); Bauer, Richard (Inventor); Liang, Luchuan (Inventor); Hall, Brian (Inventor); Perry, David (Inventor)
2011-01-01
A high speed, high-resolution flow imaging system is modified to achieve extended depth of field imaging. An optical distortion element is introduced into the flow imaging system. Light from an object, such as a cell, is distorted by the distortion element, such that the point spread function (PSF) of the imaging system is invariant across an extended depth of field. The distorted light is spectrally dispersed, and the dispersed light is used to simultaneously generate a plurality of images. The images are detected, and image processing is used to enhance the detected images by compensating for the distortion, to achieve extended depth of field images of the object. The post-acquisition image processing preferably involves deconvolution and requires knowledge of the PSF of the imaging system, as modified by the optical distortion element.
Automatic detection of the inner ears in head CT images using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Zhang, Dongqing; Noble, Jack H.; Dawant, Benoit M.
2018-03-01
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to stimulate nerve endings to replace the natural electro-mechanical transduction mechanism and restore hearing for patients with profound hearing loss. Post-operatively, the CI needs to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies on the patient's subjective response to stimuli. This is a trial-and-error process that can be frustratingly long (dozens of programming sessions are not unusual). To assist audiologists, we have proposed what we call IGCIP for image-guided cochlear implant programming. In IGCIP, we use image processing algorithms to segment the intra-cochlear anatomy in pre-operative CT images and to localize the electrode arrays in post-operative CTs. We have shown that programming strategies informed by image-derived information significantly improve hearing outcomes for both adults and pediatric populations. We are now aiming at deploying these techniques clinically, which requires full automation. One challenge we face is the lack of standard image acquisition protocols. The content of the image volumes we need to process thus varies greatly and visual inspection and labelling is currently required to initialize processing pipelines. In this work we propose a deep learning-based approach to automatically detect if a head CT volume contains two ears, one ear, or no ear. Our approach has been tested on a data set that contains over 2,000 CT volumes from 153 patients and we achieve an overall 95.97% classification accuracy.
Spatio-temporal diffusion of dynamic PET images
NASA Astrophysics Data System (ADS)
Tauber, C.; Stute, S.; Chau, M.; Spiteri, P.; Chalon, S.; Guilloteau, D.; Buvat, I.
2011-10-01
Positron emission tomography (PET) images are corrupted by noise. This is especially true in dynamic PET imaging, where short frames are required to capture the peak of activity concentration after the radiotracer injection. High noise can bias quantification, as the compartmental models used to estimate the kinetic parameters are sensitive to noise. This paper describes a new post-reconstruction filter to increase the signal-to-noise ratio in dynamic PET imaging. It consists of a robust spatio-temporal diffusion of the 4D image based on the time activity curve (TAC) in each voxel. It reduces the noise in homogeneous areas while preserving the distinct kinetics in regions of interest corresponding to different underlying physiological processes. Neither anatomical priors nor the kinetic model are required. We propose an automatic selection of the scale parameter involved in the diffusion process, based on a robust statistical analysis of the distances between TACs. The method is evaluated using Monte Carlo simulations of brain activity distributions. We demonstrate the usefulness of the method and its superior performance over two other post-reconstruction spatial and temporal filters. Our simulations suggest that the proposed method can significantly increase the signal-to-noise ratio in dynamic PET imaging.
Ober, Christopher P
Second-year veterinary students are often challenged by concepts in veterinary radiology, including the fundamentals of image quality and generation of differential lists. Four card games were developed to provide veterinary students with a supplemental means of learning about radiographic image quality and differential diagnoses in urogenital imaging. Students played these games and completed assessments of their subject knowledge before and after playing. The hypothesis was that playing each game would improve students' understanding of the topic area. For each game, students who played the game performed better on the post-test than students who did not play that game (all p<.01). For three of the four games, students who played each respective game demonstrated significant improvement in scores between the pre-test and the post-test (p<.002). The majority of students expressed that the games were both helpful and enjoyable. Educationally focused games can help students learn classroom and laboratory material. However, game design is important, as the game using the most passive learning process also demonstrated the weakest results. In addition, based on participants' comments, the games were very useful in improving student engagement in the learning process. Thus, use of games in the classroom and laboratory setting seems to benefit the learning process.
An efficient system for reliably transmitting image and video data over low bit rate noisy channels
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Huang, Y. F.; Stevenson, Robert L.
1994-01-01
This research project is intended to develop an efficient system for reliably transmitting image and video data over low bit rate noisy channels. The basic ideas behind the proposed approach are the following: employ statistical-based image modeling to facilitate pre- and post-processing and error detection, use spare redundancy that the source compression did not remove to add robustness, and implement coded modulation to improve bandwidth efficiency and noise rejection. Over the last six months, progress has been made on various aspects of the project. Through our studies of the integrated system, a list-based iterative Trellis decoder has been developed. The decoder accepts feedback from a post-processor which can detect channel errors in the reconstructed image. The error detection is based on the Huber Markov random field image model for the compressed image. The compression scheme used here is that of JPEG (Joint Photographic Experts Group). Experiments were performed and the results are quite encouraging. The principal ideas here are extendable to other compression techniques. In addition, research was also performed on unequal error protection channel coding, subband vector quantization as a means of source coding, and post processing for reducing coding artifacts. Our studies on unequal error protection (UEP) coding for image transmission focused on examining the properties of the UEP capabilities of convolutional codes. The investigation of subband vector quantization employed a wavelet transform with special emphasis on exploiting interband redundancy. The outcome of this investigation included the development of three algorithms for subband vector quantization. The reduction of transform coding artifacts was studied with the aid of a non-Gaussian Markov random field model. This results in improved image decompression. These studies are summarized and the technical papers included in the appendices.
Orthographic Stereo Correlator on the Terrain Model for Apollo Metric Images
NASA Technical Reports Server (NTRS)
Kim, Taemin; Husmann, Kyle; Moratto, Zachary; Nefian, Ara V.
2011-01-01
A stereo correlation method in the object domain is proposed to generate accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce high-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. Given the camera parameters of an image pair from bundle adjustment in ASP, a correlation window is defined on the terrain at each post, with a predefined surface normal, rather than in the image domain. The squared error between the back-projected images on the local terrain is minimized with respect to the post elevation. This one-dimensional optimization is solved efficiently and improves the accuracy of the elevation estimate.
Comparison of different phantoms used in digital diagnostic imaging
NASA Astrophysics Data System (ADS)
Bor, Dogan; Unal, Elif; Uslu, Anil
2015-09-01
The extremity, chest, skull, and lumbar organs were physically simulated using uniform PMMA slabs of different thicknesses, both alone and together with aluminum plates and air gaps (ANSI phantoms). The variation of entrance surface air kerma and scatter fraction with X-ray beam quality was investigated for these phantoms, and the results were compared with those measured from anthropomorphic phantoms. A flat-panel digital radiographic system was used for all experiments. Considerable variations in entrance surface air kerma were found for the same organs across different phantom designs, with the highest doses measured for the PMMA slabs. A low-contrast test tool and a contrast-detail test object (CDRAD) were used together with each organ simulation of PMMA slabs and ANSI phantoms in order to test clinical image quality. Digital images of these phantom combinations and of the anthropomorphic phantoms were acquired in raw and clinically processed formats. The variation of image quality with kVp and post-processing was evaluated using the numerical metrics of these test tools and the contrast values measured from the anthropomorphic phantoms. Our results indicate that the design of some phantoms may not be adequate to reveal the expected performance of post-processing algorithms.
A Software Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.
2007-01-01
Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.
Leading Marines in a Digital World
2013-03-01
[Front-matter excerpt: table-of-contents entries ("2. Empathy ... 25; 3. Healing ...") and an acronym list: fMRI, Functional Magnetic Resonance Imaging; LMX, Leader-Member Exchange; MCPP, Marine Corps Planning Process; MRI, Magnetic Resonance Imaging; NCO, Non-Commissioned Officer; OCS, Officer Candidate School; PTSD, Post-Traumatic Stress Disorder; U.S., United States.]
Image Navigation and Registration Performance Assessment Evaluation Tools for GOES-R ABI and GLM
NASA Technical Reports Server (NTRS)
Houchin, Scott; Porter, Brian; Graybill, Justin; Slingerland, Philip
2017-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period, for performance evaluation and long-term monitoring. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24-hour evaluation period. This paper describes the software design and implementation of IPATS and provides preliminary test results.
A novel pre-processing technique for improving image quality in digital breast tomosynthesis.
Kim, Hyeongseok; Lee, Taewon; Hong, Joonpyo; Sabir, Sohail; Lee, Jung-Ryun; Choi, Young Wook; Kim, Hak Hee; Chae, Eun Young; Cho, Seungryong
2017-02-01
Nonlinear pre-reconstruction processing of projection data is usually discouraged in computed tomography (CT), where accurate recovery of the CT numbers is important for diagnosis, as such processing would violate the physics of image formation. However, one can devise a pre-processing step to enhance lesion detectability in digital breast tomosynthesis (DBT), where accurate recovery of the CT numbers is fundamentally impossible due to the incompleteness of the scanned data. Since the purpose of DBT is the detection of lesions such as micro-calcifications and masses in the breast, a technique that produces higher lesion detectability is justified. A histogram modification technique was developed in the projection data domain. The histogram of the raw projection data was first divided into two parts: one for the breast projection data and the other for the background. Background pixel values were set to a single value representing the boundary between breast and background. Both histogram parts were then shifted by an appropriate offset, and the histogram-modified projection data were log-transformed. A filtered-backprojection (FBP) algorithm was used for DBT image reconstruction. To evaluate the performance of the proposed method, we computed the detectability index for images reconstructed from clinically acquired data. Typical breast-border enhancement artifacts were greatly suppressed, and the detectability of calcifications and masses was increased by use of the proposed method. Compared to a global threshold-based post-reconstruction processing technique, the proposed method produced images of higher contrast without introducing additional image artifacts. In this work, we report a novel pre-processing technique that improves lesion detectability in DBT and has potential advantages over the global threshold-based post-reconstruction processing technique.
The proposed method not only increased the lesion detectability but also reduced typical image artifacts pronounced in conventional FBP-based DBT. © 2016 American Association of Physicists in Medicine.
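The histogram-modification step described above (split the projection histogram into breast and background, collapse the background onto the boundary value, shift both parts, then log-transform) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the boundary and offset values, and the normalization by the boundary value are all assumptions.

```python
import math

def preprocess_projection(pixels, boundary, offset):
    """Sketch of the histogram-modification pre-processing step.

    pixels   : raw projection pixel values (transmission counts)
    boundary : value separating breast (below) from background (above);
               background pixels are collapsed onto this single value
    offset   : shift applied to both histogram parts before the log transform
    """
    out = []
    for p in pixels:
        # Background (high-transmission) pixels become the boundary value
        v = boundary if p >= boundary else p
        # Shift, then log-transform as in FBP-style attenuation pipelines
        out.append(-math.log((v + offset) / (boundary + offset)))
    return out

proj = [900, 400, 250, 980, 120]   # hypothetical raw counts; >= 800 is background
att = preprocess_projection(proj, boundary=800, offset=50)
```

Collapsing the background to a single value is what suppresses the breast-border enhancement artifact: after the log transform, every background pixel maps to exactly zero attenuation instead of a noisy halo.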
A Wearable Real-Time and Non-Invasive Thoracic Cavity Monitoring System
NASA Astrophysics Data System (ADS)
Salman, Safa
A surgery-free on-body monitoring system is proposed to evaluate the dielectric constant of internal body tissues (especially lung and heart) and effectively determine irregularities in real time. The proposed system includes a sensor, a post-processing technique, and an automated data collection circuit. Data are automatically collected from the sensor electrodes and then post-processed to extract the electrical properties of the underlying biological tissue(s). To demonstrate the imaging concept, planar and wrap-around sensors are devised. These sensors are designed to detect changes in the dielectric constant of inner tissues (lung and heart). The planar sensor focuses on a single organ, while the wrap-around sensor allows imaging of the thoracic cavity's cross section. Moreover, post-processing techniques are proposed to complement the sensors for a more complete on-body monitoring system. The idea behind the post-processing technique is to suppress interference from the outer layers (skin, fat, muscle, and bone). The sensors and post-processing techniques yield a high ratio of signal (from the inner layers) to noise (from the outer layers). Additionally, data collection circuits are proposed for a more robust, stand-alone system. The circuit design sequentially activates each port of the sensor, and portions of the propagating signal are received at all passive ports in the form of a voltage at the probes. The voltages are converted to scattering parameters, which are then used in the post-processing technique to obtain the relative permittivity εr. The concept of wearability is also considered through the use of electrically conductive fibers (E-fibers). These fibers show performance matching that of copper, especially at low frequencies, making them a viable substitute. For the cases considered, the proposed sensors show promising results in recovering the permittivity of deep tissues with a maximum error of 13.5%.
These sensors provide a way for a new class of medical sensors through accuracy improvements and avoidance of inverse scattering techniques.
A back-illuminated megapixel CMOS image sensor
NASA Technical Reports Server (NTRS)
Pain, Bedabrata; Cunningham, Thomas; Nikzad, Shouleh; Hoenk, Michael; Jones, Todd; Wrigley, Chris; Hancock, Bruce
2005-01-01
In this paper, we present the test and characterization results for a back-illuminated megapixel CMOS imager. The imager pixel consists of a standard junction photodiode coupled to a three-transistor-per-pixel switched source-follower readout [1]. The imager also includes integrated timing, control, and bias-generation circuits, and provides analog output. The analog column-scan circuits were implemented in such a way that the imager could be configured to run in off-chip correlated double-sampling (CDS) mode. The imager was originally designed for normal front-illuminated operation, and was fabricated in a commercially available 0.5 µm triple-metal CMOS-imager-compatible process. For backside illumination, the imager was thinned by etching away the substrate in a post-fabrication processing step.
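Off-chip correlated double sampling, which the column-scan circuits here make possible, amounts to subtracting a reset-level sample from a signal-level sample for every pixel, cancelling fixed offsets and reset (kTC) noise. A minimal sketch with hypothetical ADC codes, not measured data:

```python
def correlated_double_sampling(reset_frame, signal_frame):
    """Off-chip CDS sketch: the per-pixel difference between the signal
    sample and the reset sample removes fixed pattern offsets and reset
    (kTC) noise common to both samples."""
    return [[s - r for r, s in zip(rrow, srow)]
            for rrow, srow in zip(reset_frame, signal_frame)]

reset = [[10, 12], [11, 10]]     # hypothetical reset-level ADC codes
signal = [[110, 62], [11, 210]]  # signal-level samples from the same pixels
cds = correlated_double_sampling(reset, signal)
```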
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batin, E; Depauw, N; MacDonald, S
Purpose: Historically, the set-up for proton post-mastectomy chestwall irradiation at our institution started with positioning the patient using tattoos and lasers. One or more rounds of orthogonal X-rays at gantry 0° and a beamline X-ray at the treatment gantry angle were then taken to finalize the set-up position. As chestwall targets are shallow and superficial, surface imaging is a promising tool for set-up and needs to be investigated. Methods: The orthogonal imaging was entirely replaced by AlignRT™ (ART) images. The beamline X-ray image is kept as a confirmation, based primarily on three opaque markers placed on the skin surface instead of bony anatomy. In the first phase of the process, ART gated images were used to set up the patient, and the same specific point of the breathing curve was used every day. The moves (translations and rotations) computed for each point of the breathing curve during the first five fractions were analyzed for ten patients. In a second phase of the study, ART gated images were replaced by ART non-gated images combined with real-time monitoring. In both cases, ART images were acquired just before treatment to assess the patient position compared to the non-gated CT. Results: The average difference between the maximum and minimum moves, depending on the chosen breathing-curve point, was less than 1.7 mm for all translations and less than 0.7° for all rotations. The average position discrepancy over the course of treatment obtained with ART non-gated images combined with real-time monitoring was smaller than that obtained using ART gated images. The X-ray validation images showed similar results with both ART imaging processes. Conclusion: The use of ART non-gated images combined with real-time monitoring allows positioning post-mastectomy chestwall patients to within 3 mm / 1°.
Surgical approaches to chronic pancreatitis: indications and imaging findings.
Hafezi-Nejad, Nima; Singh, Vikesh K; Johnson, Stephen I; Makary, Martin A; Hirose, Kenzo; Fishman, Elliot K; Zaheer, Atif
2016-10-01
Chronic pancreatitis (CP) is an irreversible inflammatory process characterized by progressive fibrosis of the pancreas that can result in abdominal pain, exocrine insufficiency, and diabetes. Inadequate pain relief using medical and/or endoscopic therapies is an indication for surgery. The surgical management of CP centers on three main operations: pancreaticoduodenectomy (PD), duodenum-preserving pancreatic head resection (DPPHR) and drainage procedures, and total pancreatectomy with islet autotransplantation (TPIAT). PD is the method of choice when there is high suspicion for malignancy. Combined drainage and resection procedures are associated with pain relief, higher quality of life, and superior short-term and long-term survival in comparison with PD. TPIAT is a reemerging treatment that may be promising in subjects with intractable pain and impaired quality of life. Imaging examinations have an extensive role in the pre-operative and post-operative evaluation of CP patients. Pre-operative advanced imaging, including CT and MRI, can detect hallmarks of CP such as calcifications, pancreatic duct dilatation, chronic pseudocysts, focal pancreatic enlargement, and biliary ductal dilatation. Post-operative findings may include periportal hepatic edema, pneumobilia, perivascular cuffing, and mild pancreatic duct dilation. Imaging can also be useful in the detection of post-operative complications, including obstructions, anastomotic leaks, and vascular lesions. Imaging helps identify unique post-operative findings associated with TPIAT and may aid in predicting the viability and function of the transplanted islet cells. In this review, we explore surgical indications as well as pre-operative and post-operative imaging findings associated with the surgical options typically performed for CP patients.
Characterizing challenged Minnesota ballots
NASA Astrophysics Data System (ADS)
Nagy, George; Lopresti, Daniel; Barney Smith, Elisa H.; Wu, Ziyan
2011-01-01
Photocopies of the ballots challenged in the 2008 Minnesota elections, which constitute a public record, were scanned on a high-speed scanner and made available on a public radio website. The PDF files were downloaded, converted to TIF images, and posted on the PERFECT website. Based on a review of relevant image-processing aspects of paper-based election machinery and on additional statistics and observations on the posted sample data, robust tools were developed for determining the underlying grid of the targets on these ballots regardless of skew, clipping, and other degradations caused by high-speed copying and digitization. The accuracy and robustness of a method based on both index-marks and oval targets are demonstrated on 13,435 challenged ballot page images.
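One simple way to recover an underlying grid pitch from detected target centres despite a few missing or degraded marks is to snap inter-mark gaps to integer multiples of a median gap. This is an illustrative stand-in, not the paper's actual index-mark and oval-target method; the function name and sample coordinates are hypothetical.

```python
def estimate_grid_pitch(mark_coords):
    """Estimate the target-grid pitch along one axis from detected mark
    centres, tolerating missing targets (gaps that span several cells)."""
    xs = sorted(mark_coords)
    gaps = [b - a for a, b in zip(xs, xs[1:]) if b - a > 0]
    gaps.sort()
    base = gaps[len(gaps) // 2]                 # median gap ~ one grid pitch
    # Snap each gap to an integer number of grid cells, then refine the pitch
    units = [max(1, round(g / base)) for g in gaps]
    return sum(gaps) / sum(units)

cols = [12, 52, 92, 172, 212]   # one column centre missing near x = 132
pitch = estimate_grid_pitch(cols)
```

Because the missing target's double-width gap is counted as two cells, the refined pitch stays correct even with dropouts, which is the kind of robustness the degraded photocopies demand.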
Observing vegetation phenology through social media.
Silva, Sam J; Barbieri, Lindsay K; Thomer, Andrea K
2018-01-01
The widespread use of social media has created a valuable but underused source of data for the environmental sciences. We demonstrate the potential for images posted to the website Twitter to capture variability in vegetation phenology across United States National Parks. We process a subset of images posted to Twitter within eight U.S. National Parks, with the aim of understanding the amount of green vegetation in each image. Analysis of the relative greenness of the images shows statistically significant seasonal cycles across most National Parks at the 95% confidence level, consistent with springtime green-up and fall senescence. Additionally, these social media-derived greenness indices correlate with monthly mean satellite NDVI (r = 0.62), reinforcing the potential value these data could provide in constraining models and observing regions with limited high-quality scientific monitoring.
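A common relative-greenness measure for RGB photos is the green chromatic coordinate G/(R+G+B). The sketch below assumes that index and uses toy pixel values, since the paper's exact metric is not specified here.

```python
def relative_greenness(pixels):
    """Green chromatic coordinate G/(R+G+B), averaged over an image given
    as (R, G, B) tuples. An assumed index; the study's may differ."""
    g_sum, total = 0.0, 0.0
    for r, g, b in pixels:
        g_sum += g
        total += r + g + b
    return g_sum / total if total else 0.0

# Hypothetical pixels: a leafy spring scene versus a brown winter scene
spring = [(60, 140, 50), (70, 150, 60)]
winter = [(120, 110, 100), (130, 120, 110)]
```

A seasonal cycle then appears as this index rising through green-up and falling through senescence when computed on images binned by month.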
Post-image acquisition processing approaches for coherent backscatter validation
NASA Astrophysics Data System (ADS)
Smith, Christopher A.; Belichki, Sara B.; Coffaro, Joseph T.; Panich, Michael G.; Andrews, Larry C.; Phillips, Ronald L.
2014-10-01
When a laser beam is reflected by a retro-reflector at a target point, the irradiance traveling back toward the transmitting point contains a peak of intensity known as the enhanced backscatter (EBS) phenomenon. EBS depends on the strength regime of atmospheric turbulence as the beam propagates out and back. In order to capture and analyze this phenomenon so that it may be compared to theory, an imaging system is integrated into the optical setup. With proper imaging established, we are able to implement various post-image acquisition techniques to help determine the detection and position of EBS, which can then be validated against theory by inspection of certain dependent meteorological parameters such as the refractive index structure parameter, Cn2, and wind speed.
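A minimal sketch of one post-acquisition step: average the captured irradiance frames over time and locate the brightest pixel as the candidate EBS position. This assumes the peak survives temporal averaging; the actual pipeline is more involved.

```python
def ebs_peak(frames):
    """Average a stack of irradiance frames (lists of rows), then return
    the (row, col) of the brightest pixel in the time-averaged image."""
    rows, cols = len(frames[0]), len(frames[0][0])
    mean = [[sum(f[r][c] for f in frames) / len(frames)
             for c in range(cols)] for r in range(rows)]
    best = max((mean[r][c], r, c) for r in range(rows) for c in range(cols))
    return best[1], best[2]

# Two hypothetical frames with a fluctuating but persistent central peak
f1 = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
f2 = [[1, 2, 1], [1, 7, 1], [1, 1, 1]]
peak = ebs_peak([f1, f2])
```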
Hughes, Brianna H; Greenberg, Neil J; Yang, Tom C; Skonberg, Denise I
2015-01-01
High-pressure processing (HPP) is used to increase meat safety and shelf-life, with conflicting quality effects depending on rigor status during HPP. In the seafood industry, HPP is used to shuck and pasteurize oysters, but its use on abalones has only been minimally evaluated and the effect of rigor status during HPP on abalone quality has not been reported. Farm-raised abalones (Haliotis rufescens) were divided into 12 HPP treatments and 1 unprocessed control treatment. Treatments were processed pre-rigor or post-rigor at 2 pressures (100 and 300 MPa) and 3 processing times (1, 3, and 5 min). The control was analyzed post-rigor. Uniform plugs were cut from adductor and foot meat for texture profile analysis, shear force, and color analysis. Subsamples were used for scanning electron microscopy of muscle ultrastructure. Texture profile analysis revealed that post-rigor processed abalone was significantly (P < 0.05) less firm and chewy than pre-rigor processed irrespective of muscle type, processing time, or pressure. L values increased with pressure to 68.9 at 300 MPa for pre-rigor processed foot, 73.8 for post-rigor processed foot, 90.9 for pre-rigor processed adductor, and 89.0 for post-rigor processed adductor. Scanning electron microscopy images showed fraying of collagen fibers in processed adductor, but did not show pressure-induced compaction of the foot myofibrils. Post-rigor processed abalone meat was more tender than pre-rigor processed meat, and post-rigor processed foot meat was lighter in color than pre-rigor processed foot meat, suggesting that waiting for rigor to resolve prior to processing abalones may improve consumer perceptions of quality and market value. © 2014 Institute of Food Technologists®
Digital image processing of vascular angiograms
NASA Technical Reports Server (NTRS)
Selzer, R. H.; Beckenbach, E. S.; Blankenhorn, D. H.; Crawford, D. W.; Brooks, S. H.
1975-01-01
The paper discusses the estimation of the degree of atherosclerosis in the human femoral artery through the use of a digital image processing system for vascular angiograms. The film digitizer uses an electronic image dissector camera to scan the angiogram and convert the recorded optical density information into a numerical format. Another processing step involves locating the vessel edges from the digital image. The computer has been programmed to estimate vessel abnormality through a series of measurements, some derived primarily from the vessel edge information and others from optical density variations within the lumen shadow. These measurements are combined into an atherosclerosis index, which is found in a post-mortem study to correlate well with both visual and chemical estimates of atherosclerotic disease.
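As a toy stand-in for the edge-derived measurements (the paper's atherosclerosis index combines several measurements, including optical-density ones within the lumen shadow), one could quantify lumen-width variability from the detected vessel edges:

```python
def roughness_index(left_edge, right_edge):
    """Toy vessel-abnormality measure (an assumption, not the paper's
    index): coefficient of variation of lumen width along the vessel."""
    widths = [r - l for l, r in zip(left_edge, right_edge)]
    mean = sum(widths) / len(widths)
    var = sum((w - mean) ** 2 for w in widths) / len(widths)
    return (var ** 0.5) / mean

# Hypothetical edge x-coordinates, one pair per scan line along the vessel
smooth = roughness_index([10] * 5, [30] * 5)
diseased = roughness_index([10, 12, 15, 12, 10], [30, 28, 24, 28, 30])
```

A uniform lumen scores zero; focal narrowing raises the score, mimicking how edge irregularity feeds an abnormality index.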
“Lucky Averaging”: Quality improvement on Adaptive Optics Scanning Laser Ophthalmoscope Images
Huang, Gang; Zhong, Zhangyi; Zou, Weiyao; Burns, Stephen A.
2012-01-01
Adaptive optics (AO) has greatly improved retinal image resolution. However, even with AO, temporal and spatial variations in image quality still occur due to wavefront fluctuations, intra-frame focus shifts, and other factors. As a result, aligning and averaging images can produce a mean image that has lower resolution or contrast than the best images within a sequence. To address this, we propose an image post-processing scheme called “lucky averaging”, analogous to lucky imaging (Fried, 1978), based on computing the best local contrast over time. Results from eye data demonstrate improvements in image quality. PMID:21964097
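A simplified version of the scheme might look like the following. Note the simplification: this sketch ranks and selects whole frames by RMS contrast, whereas the published method selects the best data per local region over time.

```python
def lucky_average(frames, keep=0.5):
    """Simplified 'lucky averaging': rank aligned frames by global RMS
    contrast and average only the best fraction, so low-quality frames do
    not dilute the mean image."""
    def rms_contrast(img):
        vals = [v for row in img for v in row]
        m = sum(vals) / len(vals)
        return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5
    ranked = sorted(frames, key=rms_contrast, reverse=True)
    best = ranked[:max(1, int(len(frames) * keep))]
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in best) / len(best) for c in range(cols)]
            for r in range(rows)]

sharp = [[0, 100], [100, 0]]     # hypothetical high-contrast frame
blurred = [[50, 50], [50, 50]]   # washed-out frame that would dilute the mean
best_mean = lucky_average([blurred, sharp], keep=0.5)
```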
Impact of post-processing methods on apparent diffusion coefficient values.
Zeilinger, Martin Georg; Lell, Michael; Baltzer, Pascal Andreas Thomas; Dörfler, Arnd; Uder, Michael; Dietzel, Matthias
2017-03-01
The apparent diffusion coefficient (ADC) is increasingly used as a quantitative biomarker in oncological imaging. ADC calculation is based on raw diffusion-weighted imaging (DWI) data, and multiple post-processing methods (PPMs) have been proposed for this purpose. We investigated whether the PPM has an impact on final ADC values. Sixty-five lesions scanned with a standardized whole-body DWI protocol at 3 T served as input data (EPI-DWI; b-values: 50, 400 and 800 s/mm^2). Using exactly the same ROI coordinates, four different PPMs (ADC_1-ADC_4) were executed to calculate corresponding ADC values, given in units of 10^-3 mm^2/s, for each lesion. Statistical analysis was performed to intra-individually compare ADC values stratified by PPM (Wilcoxon signed-rank tests: α = 1 %; descriptive statistics; relative difference ∆; coefficient of variation CV). Stratified by PPM, mean ADCs ranged from 1.136-1.206 ×10^-3 mm^2/s (∆ = 7.0 %). Variances between PPMs were pronounced in the upper range of ADC values (maximum: 2.540-2.763 ×10^-3 mm^2/s, ∆ = 8 %). Pairwise comparisons identified significant differences between all PPMs (P ≤ 0.003; mean CV = 7.2 %), reaching 0.137 ×10^-3 mm^2/s within the 25th-75th percentile. Altering the PPM had a significant impact on the ADC value. This should be considered if ADC values from different post-processing methods are compared in patient studies. • Post-processing methods significantly influenced ADC values. • The mean coefficient of ADC variation due to PPM was 7.2 %. • To achieve reproducible ADC values, standardization of post-processing is recommended.
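The ADC calculation itself is well defined: under the mono-exponential model S(b) = S0·exp(-b·ADC), the ADC is minus the slope of ln S versus b. A minimal least-squares sketch at this protocol's b-values, using synthetic signals rather than study data (post-processing methods differ in steps around this fit, e.g. noise handling and pixel masking):

```python
import math

def adc_fit(b_values, signals):
    """Mono-exponential ADC estimate: least-squares slope of ln(S) vs b,
    negated. Result is in mm^2/s when b is given in s/mm^2."""
    y = [math.log(s) for s in signals]
    n = len(b_values)
    mx, my = sum(b_values) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(b_values, y))
             / sum((xi - mx) ** 2 for xi in b_values))
    return -slope

# Synthetic lesion with ADC = 1.2e-3 mm^2/s at the protocol's b-values
b = [50, 400, 800]
s = [1000 * math.exp(-bi * 1.2e-3) for bi in b]
adc = adc_fit(b, s)
```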
Recovering the fine structures in solar images
NASA Technical Reports Server (NTRS)
Karovska, Margarita; Habbal, S. R.; Golub, L.; Deluca, E.; Hudson, Hugh S.
1994-01-01
Several examples are presented of the capability of the blind iterative deconvolution (BID) technique to recover the real point spread function when only limited a priori information about its characteristics is available. To demonstrate the potential of image post-processing for probing the fine-scale structure and temporal variability of the solar atmosphere, the BID technique is applied to different samples of solar observations from space. The BID technique was originally proposed for correcting the effects of atmospheric turbulence on optical images. The processed images provide a detailed view of the spatial structure of the solar atmosphere at different heights, in regions with different large-scale magnetic field structures.
Multislice CT perfusion imaging of the lung in detection of pulmonary embolism
NASA Astrophysics Data System (ADS)
Hong, Helen; Lee, Jeongjin
2006-03-01
We propose a new subtraction technique for accurately imaging lung perfusion and efficiently detecting pulmonary embolism in chest MDCT angiography. Our method is composed of five stages. First, an optimal segmentation technique extracts the same volume of the lungs, major airway, and vascular structures from pre- and post-contrast images with different lung density. Second, an initial registration based on the apex, hilar point, and center of inertia (COI) of each unilateral lung corrects the gross translational mismatch. Third, the initial alignment is refined by iterative surface registration; for fast and robust convergence of the distance measure to the optimal value, a 3D distance map is generated by narrow-band distance propagation. Fourth, a 3D nonlinear filter is applied to the lung parenchyma to compensate for residual spiral artifacts and artifacts caused by heart motion. Fifth, enhanced vessels are visualized by subtracting the registered pre-contrast images from the post-contrast images. To facilitate visualization of parenchymal enhancement, color-coded mapping and image fusion are used. Our method has been successfully applied to pre- and post-contrast images from ten patients in chest MDCT angiography. Experimental results show that the performance of our method is very promising compared with conventional methods in terms of visual inspection, accuracy, and processing time.
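The final subtraction stage (stage five) reduces, per voxel, to differencing the registered pre-contrast image from the post-contrast image. A minimal sketch with hypothetical intensity values, clipping negative differences so that only enhancement remains:

```python
def perfusion_map(pre, post):
    """Stage-five sketch: per-voxel subtraction of the registered
    pre-contrast image from the post-contrast image; negative differences
    (e.g. residual artifacts) are clipped to zero."""
    return [[max(0, q - p) for p, q in zip(prow, qrow)]
            for prow, qrow in zip(pre, post)]

pre = [[40, 40], [40, 40]]   # hypothetical registered pre-contrast slice
post = [[90, 40], [40, 30]]  # enhancement at one voxel, artifact at another
pmap = perfusion_map(pre, post)
```

In the full pipeline this difference map is what gets color-coded and fused with the anatomical image.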
Morawski, Markus; Kirilina, Evgeniya; Scherf, Nico; Jäger, Carsten; Reimann, Katja; Trampel, Robert; Gavriilidis, Filippos; Geyer, Stefan; Biedermann, Bernd; Arendt, Thomas; Weiskopf, Nikolaus
2017-11-28
Recent breakthroughs in magnetic resonance imaging (MRI) enabled quantitative relaxometry and diffusion-weighted imaging with sub-millimeter resolution. Combined with biophysical models of MR contrast the emerging methods promise in vivo mapping of cyto- and myelo-architectonics, i.e., in vivo histology using MRI (hMRI) in humans. The hMRI methods require histological reference data for model building and validation. This is currently provided by MRI on post mortem human brain tissue in combination with classical histology on sections. However, this well established approach is limited to qualitative 2D information, while a systematic validation of hMRI requires quantitative 3D information on macroscopic voxels. We present a promising histological method based on optical 3D imaging combined with a tissue clearing method, Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging compatible Tissue hYdrogel (CLARITY), adapted for hMRI validation. Adapting CLARITY to the needs of hMRI is challenging due to poor antibody penetration into large sample volumes and high opacity of aged post mortem human brain tissue. In a pilot experiment we achieved transparency of up to 8 mm-thick and immunohistochemical staining of up to 5 mm-thick post mortem brain tissue by a combination of active and passive clearing, prolonged clearing and staining times. We combined 3D optical imaging of the cleared samples with tailored image processing methods. We demonstrated the feasibility for quantification of neuron density, fiber orientation distribution and cell type classification within a volume with size similar to a typical MRI voxel. The presented combination of MRI, 3D optical microscopy and image processing is a promising tool for validation of MRI-based microstructure estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Initial Navigation Alignment of Optical Instruments on GOES-R
NASA Technical Reports Server (NTRS)
Isaacson, Peter J.; DeLuccia, Frank J.; Reth, Alan D.; Igli, David A.; Carter, Delano R.
2016-01-01
Post-launch alignment errors for the Advanced Baseline Imager (ABI) and Geospatial Lightning Mapper (GLM) on GOES-R may be too large for the image navigation and registration (INR) processing algorithms to function without an initial adjustment to calibration parameters. We present an approach that leverages a combination of user-selected image-to-image tie points and image correlation algorithms to estimate this initial launch-induced offset and calculate adjustments to the Line of Sight Motion Compensation (LMC) parameters. We also present an approach to generate synthetic test images, to which shifts and rotations of known magnitude are applied. Results of applying the initial alignment tools to a subset of these synthetic test images are presented. The results for both ABI and GLM are within the specifications established for these tools, and indicate that application of these tools during the post-launch test (PLT) phase of GOES-R operations will enable the automated INR algorithms for both instruments to function as intended.
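Estimating an image-to-image offset by correlation, as the tie-point approach requires, can be sketched as a brute-force search over integer shifts that maximizes a normalized overlap product. This is a toy stand-in for the actual correlation algorithms; the landmark images and shift are hypothetical.

```python
def best_shift(ref, img, max_shift=3):
    """Brute-force search for the integer (dy, dx) offset that maximizes
    the mean product of ref and the shifted img over their overlap."""
    rows, cols = len(ref), len(ref[0])
    def score(dy, dx):
        s = n = 0
        for r in range(rows):
            for c in range(cols):
                rr, cc = r + dy, c + dx
                if 0 <= rr < rows and 0 <= cc < cols:
                    s += ref[r][c] * img[rr][cc]
                    n += 1
        return s / n if n else 0.0
    shifts = [(dy, dx) for dy in range(-max_shift, max_shift + 1)
                       for dx in range(-max_shift, max_shift + 1)]
    return max(shifts, key=lambda t: score(*t))

# Hypothetical landmark images: the feature moves by (1, 2) between them
ref = [[0] * 5 for _ in range(5)]
img = [[0] * 5 for _ in range(5)]
ref[2][2] = 1
img[3][4] = 1
shift = best_shift(ref, img)
```

Real INR tools refine such integer estimates to sub-pixel precision and accumulate many of them into line-of-sight corrections.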
Video enhancement method with color-protection post-processing
NASA Astrophysics Data System (ADS)
Kim, Youn Jin; Kwak, Youngshin
2015-01-01
This study proposes a post-processing method for video enhancement that adopts a color-protection technique. Color protection attenuates perceptible artifacts due to over-enhancement in visually sensitive image regions such as low-chroma colors, including skin and gray objects. In addition, it reduces the loss of color texture caused by out-of-color-gamut signals. Consequently, the color reproducibility of video sequences can be remarkably enhanced while undesirable visual exaggerations are minimized.
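A hedged sketch of the color-protection idea: scale the enhancement gain down for low-chroma pixels so that skin and gray regions are left nearly untouched. The linear weighting and the chroma knee value are illustrative assumptions, not the paper's method.

```python
def protection_weight(r, g, b, chroma_knee=30.0):
    """Weight in [0, 1]: near 0 for low-chroma (protected) pixels, 1 for
    saturated ones. The knee value is an assumed tuning parameter."""
    chroma = max(r, g, b) - min(r, g, b)
    return min(1.0, chroma / chroma_knee)

def protected_enhance(pixel, enhanced):
    """Blend the original pixel toward its enhanced version according to
    the protection weight."""
    w = protection_weight(*pixel)
    return tuple(round(o + w * (e - o)) for o, e in zip(pixel, enhanced))

# A gray pixel is fully protected; a saturated one is fully enhanced
gray = protected_enhance((128, 128, 128), (160, 160, 160))
```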
On the influence of zero-padding on the nonlinear operations in Quantitative Susceptibility Mapping
Eskreis-Winkler, Sarah; Zhou, Dong; Liu, Tian; Gupta, Ajay; Gauthier, Susan A.; Wang, Yi; Spincemaille, Pascal
2016-01-01
Purpose: Zero padding is a well-studied interpolation technique that improves image visualization without increasing image resolution. This interpolation is often performed as a last step before images are displayed on clinical workstations. Here, we seek to demonstrate the importance of zero padding before, rather than after, performing nonlinear post-processing algorithms such as Quantitative Susceptibility Mapping (QSM). To do so, we evaluate apparent spatial resolution, relative error, and depiction of multiple sclerosis (MS) lesions on images that were zero padded prior to, in the middle of, and after the application of the QSM algorithm. Materials and Methods: High resolution gradient echo (GRE) data were acquired on twenty MS patients, from which low resolution data were derived using k-space cropping. Pre-, mid-, and post-zero padded QSM images were reconstructed from these low resolution data by zero padding prior to field mapping, after field mapping, and after susceptibility mapping, respectively. Using high resolution QSM as the gold standard, apparent spatial resolution, relative error, and image quality of the pre-, mid-, and post-zero padded QSM images were measured and compared. Results: Both the accuracy and apparent spatial resolution of the pre-zero padded QSM were higher than those of mid-zero padded QSM (p < 0.001; p < 0.001), which were in turn higher than those of post-zero padded QSM (p < 0.001; p < 0.001). The image quality of pre-zero padded reconstructions was higher than that of mid- and post-zero padded reconstructions (p = 0.004; p < 0.001). Conclusion: Zero padding of the complex GRE data prior to nonlinear susceptibility mapping improves image accuracy and apparent resolution compared to zero padding afterwards. It also provides better delineation of MS lesion geometry, which may improve lesion subclassification and disease monitoring in MS patients. PMID:27587225
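The operation at issue, zero padding k-space, is standard Fourier interpolation: extending the spectrum with zeros at high spatial frequencies upsamples the image grid without adding true resolution. A toy 1-D sketch with a naive DFT; an odd-length input is used so no Nyquist-bin splitting is needed (real FFT libraries handle that detail):

```python
import cmath

def dft(x, inverse=False):
    """Naive O(N^2) discrete Fourier transform, enough for a toy demo."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(xk * cmath.exp(sign * 2j * cmath.pi * k * m / n)
               for k, xk in enumerate(x)) for m in range(n)]
    return [v / n for v in out] if inverse else out

def zero_pad_kspace(signal, factor):
    """Interpolate by zero padding in k-space: transform, insert zeros at
    the high-frequency positions (between the positive- and
    negative-frequency halves), and transform back."""
    spec = dft(signal)
    n = len(spec)
    half = (n + 1) // 2   # low positive frequencies stay at the front
    padded = spec[:half] + [0j] * (n * (factor - 1)) + spec[half:]
    # Rescale so the original samples keep their values on the finer grid
    return [v.real * factor for v in dft(padded, inverse=True)]

fine = zero_pad_kspace([0.0, 1.0, 1.0], 2)   # 3 samples -> 6 samples
```

The original samples reappear at every second position of the finer grid, illustrating why the order of this linear step relative to nonlinear QSM processing matters: the nonlinear steps do not commute with it.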
Onofrey, John A.; Staib, Lawrence H.; Papademetris, Xenophon
2015-01-01
This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods. PMID:26900569
Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh
2017-01-01
The detection of abdomino-pelvic tumors embedded in or near radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scans. In this study, we propose and validate a suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image; one hundred such frames are generated and averaged to produce the final image. The method was implemented in MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. To verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic (input) image, 26 images were created (at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 in increments of 0.1) and visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 of 25 images, the tumor was successfully detected; in the five control images, no false positives were reported. Thus, the empirical probability of detecting abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scans with a high probability of success and no false positives.
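The described procedure maps directly to code: replace each pixel by a Poisson variate of its count, binarize at a threshold, repeat many times, and average the binary frames. A small pure-Python sketch with toy counts and a stdlib-only Poisson sampler; the threshold sweep and visual matching against the post-diuretic image are omitted.

```python
import random

def ssr_enhance(image, threshold, frames=100, seed=0):
    """Suprathreshold stochastic resonance sketch: each frame draws a
    Poisson variate per pixel and binarises it at the threshold; averaging
    the frames yields a graded detection map in [0, 1]."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's multiplication algorithm; fine for small toy counts
        limit, k, p = 2.718281828459045 ** -lam, 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    rows, cols = len(image), len(image[0])
    acc = [[0] * cols for _ in range(rows)]
    for _ in range(frames):
        for r in range(rows):
            for c in range(cols):
                acc[r][c] += 1 if poisson(image[r][c]) >= threshold else 0
    return [[acc[r][c] / frames for c in range(cols)] for r in range(rows)]

img = [[2, 4], [8, 12]]      # toy counts; the bright pixel plays the tumour
out = ssr_enhance(img, threshold=9)
```

Pixels well above the threshold cross it in most frames and average toward 1, while background pixels rarely cross it, which is how the averaging separates tumor from urine-background activity.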
In-Vivo Imaging of Cell Migration Using Contrast Enhanced MRI and SVM Based Post-Processing.
Weis, Christian; Hess, Andreas; Budinsky, Lubos; Fabry, Ben
2015-01-01
The migration of cells within a living organism can be observed with magnetic resonance imaging (MRI) in combination with iron oxide nanoparticles as an intracellular contrast agent. This method, however, suffers from low sensitivity and specificity. Here, we developed a quantitative non-invasive in-vivo cell localization method using contrast-enhanced multiparametric MRI and support vector machine (SVM)-based post-processing. Imaging phantoms consisting of agarose with compartments containing different concentrations of cancer cells labeled with iron oxide nanoparticles were used to train and evaluate the SVM for cell localization. From the magnitude and phase data acquired with a series of T2*-weighted gradient-echo scans at different echo times, we extracted features that are characteristic of the presence of superparamagnetic nanoparticles, in particular hyper- and hypointensities, relaxation rates, short-range phase perturbations, and perturbation dynamics. High detection quality was achieved by SVM analysis of the multiparametric feature space. The in-vivo applicability was validated in animal studies. The SVM detected the presence of iron oxide nanoparticles in the imaging phantoms with high specificity and sensitivity, with a detection limit of 30 labeled cells per mm3, corresponding to 19 μM of iron oxide. As a proof of concept, we applied the method to follow the migration of labeled cancer cells injected in rats. The combination of iron-oxide-labeled cells, multiparametric MRI, and SVM-based post-processing provides high spatial resolution, specificity, and sensitivity, and is therefore suitable for non-invasive in-vivo cell detection and cell migration studies over prolonged time periods.
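A minimal sketch of the classification idea, assuming hypothetical two-dimensional feature vectors (stand-ins for the magnitude- and phase-derived features described above) and scikit-learn's SVC:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical multiparametric feature vectors per voxel; the study's
# features (relaxation rates, phase perturbations, etc.) are replaced
# here by two synthetic dimensions.
rng = np.random.default_rng(0)
n = 200
labeled = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(n, 2))    # voxels near labeled cells
unlabeled = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(n, 2))  # background tissue
X = np.vstack([labeled, unlabeled])
y = np.array([1] * n + [0] * n)

clf = SVC(kernel="rbf").fit(X, y)               # train on phantom-like data
pred = clf.predict([[2.1, 1.4], [0.1, -0.2]])   # classify new voxels
```

In the paper's setting, the phantom compartments play the role of the synthetic clusters, and each voxel of an in-vivo scan is classified the same way.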
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These metrics, e.g., DSCSI, MDSIs, MDSIm, and HPSI, achieve the highest correlation coefficients with MOS in tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPEG and JPEG2000. Statistical analysis of the correlation coefficients, based on the Friedman test and post-hoc procedures, showed that the differences between the four new perceptual metrics are not statistically significant.
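The Friedman test used for the statistical comparison can be run with SciPy; the correlation coefficients below are made-up placeholders for the paper's per-database results:

```python
from scipy.stats import friedmanchisquare

# Hypothetical correlation coefficients of four metrics on six databases
# (one list per metric, one entry per database).
dscsi = [0.90, 0.88, 0.91, 0.89, 0.92, 0.90]
mdsis = [0.91, 0.89, 0.90, 0.90, 0.91, 0.91]
mdsim = [0.89, 0.90, 0.92, 0.88, 0.90, 0.89]
hpsi  = [0.90, 0.91, 0.89, 0.91, 0.89, 0.92]

# Friedman test: do the metrics rank differently across databases?
stat, p = friedmanchisquare(dscsi, mdsis, mdsim, hpsi)
significant = p < 0.05   # the paper reports differences are NOT significant
```

If the Friedman test were significant, post-hoc pairwise procedures (as in the paper) would identify which metrics differ.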
Ciulla, Carlo; Veljanovski, Dimitar; Rechkoska Shikoska, Ustijana; Risteski, Filip A
2015-11-01
This research presents signal-image post-processing techniques called Intensity-Curvature Measurement Approaches, with application to the diagnosis of human brain tumors detected through Magnetic Resonance Imaging (MRI). Post-processing of MRI of the human brain encompasses the following model functions: (i) bivariate cubic polynomial, (ii) bivariate cubic Lagrange polynomial, (iii) monovariate sinc, and (iv) bivariate linear. The following Intensity-Curvature Measurement Approaches were used: (i) classic-curvature, (ii) signal resilient to interpolation, (iii) intensity-curvature measure, and (iv) intensity-curvature functional. The results revealed that the classic-curvature, the signal resilient to interpolation, and the intensity-curvature functional are able to add information useful to the diagnosis carried out with MRI. The contributions of our study to MRI diagnosis are: (i) the enhanced gray-level scale of the tumor mass and the well-behaved representation of the tumor provided through the signal resilient to interpolation, and (ii) the visually perceptible third dimension perpendicular to the image plane provided through the classic-curvature and the intensity-curvature functional.
PMID:26644943
Color Imaging management in film processing
NASA Astrophysics Data System (ADS)
Tremeau, Alain; Konik, Hubert; Colantoni, Philippe
2003-12-01
The latest research projects in the LIGIV laboratory concern the capture, processing, archiving, and display of color images, considering the trichromatic nature of the Human Visual System (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimize the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimizing consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Region of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display medium changes. This requires, first, the definition of a reference color space and of bidirectional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera color primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and image post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
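As a minimal sketch of a reference-space transform of the kind described, the code below uses the published linear sRGB-to-XYZ (D65) matrix as the reference color space; a real film workflow would instead use measured ICC-style profiles per device.

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65) matrix (standard published values).
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_linear_to_xyz(rgb):
    """Forward transform: device (linear sRGB) to reference space."""
    return SRGB_TO_XYZ @ np.asarray(rgb, dtype=float)

def xyz_to_srgb_linear(xyz):
    """Inverse transform: reference space back to the device."""
    return np.linalg.solve(SRGB_TO_XYZ, np.asarray(xyz, dtype=float))

white_xyz = srgb_linear_to_xyz([1.0, 1.0, 1.0])   # D65 white point
round_trip = xyz_to_srgb_linear(white_xyz)
```

The bidirectional pair illustrates what an ICC-style profile provides: every device gets a forward and inverse mapping to one shared reference space, so content can move between devices with differing gamuts.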
Surface topography analysis and performance on post-CMP images (Conference Presentation)
NASA Astrophysics Data System (ADS)
Lee, Jusang; Bello, Abner F.; Kakita, Shinichiro; Pieniazek, Nicholas; Johnson, Timothy A.
2017-03-01
Surface topography in post-CMP processing can be measured with white light interference microscopy to determine planarity. Results are used to avoid under- or over-polishing and to decrease dishing. The numerical output of the surface topography is the RMS (root-mean-square) height. Beyond RMS, the topography image is visually examined but not further quantified. Subjective comparisons of the height maps are used to determine optimum CMP process conditions. While visual comparison of height maps can reveal excursions, it does so only through manual inspection of the images. In this work we describe methods of quantifying post-CMP surface topography characteristics that are used in other technical fields, such as geography and facial recognition. The topography image is divided into small surface patches of 7x7 pixels. Each surface patch is fitted to an analytic surface equation, in this case a third-order polynomial, from which the gradient, directional derivatives, and other characteristics are calculated. Based on these characteristics, the surface patch is labeled as peak, ridge, flat, saddle, ravine, pit, or hillside. The count of each label, and thus the associated histogram, is then used as a quantified characteristic of the surface topography and could be used as a parameter for SPC (statistical process control) charting. In addition, the gradient for each surface patch is calculated, so the average, maximum, and other characteristics of the gradient distribution can be used for SPC. Repeatability measurements indicate high confidence, with individual label counts repeatable to below 2% relative standard deviation. When the histogram is considered, an associated chi-squared value can be defined for comparing measurements. The chi-squared value of the histogram is a very sensitive and quantifiable parameter for determining within-wafer and wafer-to-wafer topography non-uniformity.
As for the gradient histogram distribution, the chi-squared value can again be calculated and used as yet another quantifiable parameter for SPC. In this work we measured the post-Cu-CMP topography of a die designed for 14 nm technology. A region of interest (ROI) known to be indicative of the CMP processing was chosen for the topography analysis. The ROI, of size 1800 x 2500 pixels where each pixel represents 2 μm, was measured repeatedly. We show the sensitivity based on these measurements and the comparison between center and edge die measurements. The topography measurements and surface patch analysis were applied to hundreds of images representing the periodic process qualification runs required to control and verify CMP performance and tool matching. The analysis is shown to be sensitive to process conditions that vary in polishing time, type of slurry, CMP tool manufacturer, and CMP pad lifetime. Keywords: CMP, topography, image processing, metrology, interference microscopy, surface processing
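A sketch of the patch-labeling step under stated assumptions: a least-squares bivariate cubic fit to a 7x7 height patch, then a label from the gradient and Hessian at the patch center. The label set here is a simplified subset of the one described.

```python
import numpy as np

def classify_patch(patch, eps=1e-6):
    """Fit a bivariate cubic to a square height patch and label it from
    the gradient and Hessian at the patch center (simplified labels)."""
    n = patch.shape[0]
    yy, xx = np.mgrid[0:n, 0:n] - (n - 1) / 2.0
    x, y, z = xx.ravel(), yy.ravel(), np.asarray(patch, float).ravel()
    # Design matrix for a full bivariate cubic (10 terms)
    A = np.stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                  x**3, x*x*y, x*y*y, y**3], axis=1)
    c, *_ = np.linalg.lstsq(A, z, rcond=None)
    gx, gy = c[1], c[2]                        # gradient at the center
    hxx, hxy, hyy = 2 * c[3], c[4], 2 * c[5]   # Hessian at the center
    if np.hypot(gx, gy) > eps:
        return "hillside"
    e1, e2 = np.linalg.eigvalsh(np.array([[hxx, hxy], [hxy, hyy]]))
    if e1 > eps and e2 > eps:
        return "pit"
    if e1 < -eps and e2 < -eps:
        return "peak"
    if e1 < -eps and e2 > eps:
        return "saddle"
    return "flat"
```

Counting the labels over all patches of an image yields the histogram that the text proposes as an SPC parameter.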
Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation
NASA Astrophysics Data System (ADS)
Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas
2013-03-01
High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR combines a sequence of images taken at different exposures. Single-frame HDR applies histogram-equalization post-processing to a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is desirable to enhance only small regions of an original image; for example, to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
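The selective blending itself reduces to a per-pixel convex combination; a minimal sketch, with a made-up weighting surface standing in for the user's touch input:

```python
import numpy as np

def blend_hdr(non_hdr, hdr, weight):
    """Per-pixel blend of HDR and non-HDR renditions of the same shot.
    `weight` is the user-painted surface in [0, 1] (1 = fully HDR);
    shapes are assumed to match."""
    w = np.clip(np.asarray(weight, dtype=float), 0.0, 1.0)
    return w * hdr + (1.0 - w) * non_hdr

# Toy example: "touch" only the left half of the frame
non_hdr = np.full((4, 8), 0.2)
hdr = np.full((4, 8), 0.8)
weight = np.zeros((4, 8))
weight[:, :4] = 1.0            # region selected by the user's touches
out = blend_hdr(non_hdr, hdr, weight)
```

In practice the weighting surface would be smoothed around the touched region so the HDR/non-HDR transition is not visible as a hard seam.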
In-situ Planetary Subsurface Imaging System
NASA Astrophysics Data System (ADS)
Song, W.; Weber, R. C.; Dimech, J. L.; Kedar, S.; Neal, C. R.; Siegler, M.
2017-12-01
Geophysical and seismic instruments are considered the most effective tools for studying the detailed global structures of planetary interiors. A planet's interior bears the geochemical markers of its evolutionary history, as well as its present state of activity, which has direct implications for habitability. On Earth, subsurface imaging often involves massive data collection from hundreds to thousands of geophysical sensors (seismic, acoustic, etc.) followed by transfer over wired or wireless links to a central location for post-processing and computing, which will not be possible in planetary environments due to imposed mission constraints on mass, power, and bandwidth. Emerging opportunities for geophysical exploration of the solar system from Venus to the icy Ocean Worlds of Jupiter and Saturn dictate that subsurface imaging of the deep interior will require substantial data reduction and processing in-situ. The Real-time In-situ Subsurface Imaging (RISI) technology is a mesh network that senses and processes geophysical signals. Instead of data collection followed by post-processing, the mesh network performs distributed data processing and computing in-situ, and generates an evolving 3D subsurface image in real time that can be transmitted under bandwidth and resource constraints. Seismic imaging algorithms (including traveltime tomography, ambient noise imaging, and microseismic imaging) have been successfully developed and validated using both synthetic and real-world terrestrial seismic data sets. The prototype hardware system has been implemented and can be extended as a general field instrumentation platform tailored specifically for a wide variety of planetary uses, including crustal mapping, ice and ocean structure, and geothermal systems. The team is applying the RISI technology to real off-world seismic datasets.
For example, the Lunar Seismic Profiling Experiment (LSPE) deployed during the Apollo 17 Moon mission consisted of four geophone instruments spaced up to 100 meters apart, which in essence forms a small aperture seismic network. A pattern recognition technique based on Hidden Markov Models was able to characterize this dataset, and we are exploring how the RISI technology can be adapted for this dataset.
Levy, Andrew E; Shah, Nishant R; Matheny, Michael E; Reeves, Ruth M; Gobbel, Glenn T; Bradley, Steven M
2018-04-25
Reporting standards promote clarity and consistency of stress myocardial perfusion imaging (MPI) reports, but do not require an assessment of post-test risk. Natural Language Processing (NLP) tools could potentially help estimate this risk, yet it is unknown whether reports contain adequate descriptive data to use NLP. Among VA patients who underwent stress MPI and coronary angiography between January 1, 2009 and December 31, 2011, 99 stress test reports were randomly selected for analysis. Two reviewers independently categorized each report for the presence of critical data elements essential to describing post-test ischemic risk. Few stress MPI reports provided a formal assessment of post-test risk within the impression section (3%) or the entire document (4%). In most cases, risk was determinable by combining critical data elements (74% impression, 98% whole). If ischemic risk was not determinable (25% impression, 2% whole), inadequate description of systolic function (9% impression, 1% whole) and inadequate description of ischemia (5% impression, 1% whole) were most commonly implicated. Post-test ischemic risk was determinable but rarely reported in this sample of stress MPI reports. This supports the potential use of NLP to help clarify risk. Further study of NLP in this context is needed.
NASA Astrophysics Data System (ADS)
Jagadale, Basavaraj N.; Udupa, Jayaram K.; Tong, Yubing; Wu, Caiyun; McDonough, Joseph; Torigian, Drew A.; Campbell, Robert M.
2018-02-01
General surgeons, orthopedists, and pulmonologists individually treat patients with thoracic insufficiency syndrome (TIS). The benefits of growth-sparing procedures such as Vertical Expandable Prosthetic Titanium Rib (VEPTR) insertion for treating patients with TIS have been demonstrated. However, at present there is no objective assessment metric to examine the different thoracic structural components individually as to their roles in the syndrome, in contributing to dynamics and function, and in influencing treatment outcome. Using thoracic dynamic MRI (dMRI), we have been developing a methodology to overcome this problem. In this paper, we extend this methodology from our previous structural analysis approaches to examining lung tissue properties. We process the T2-weighted dMRI images through a series of steps involving 4D image construction from the acquired dMRI images, intensity non-uniformity correction and standardization of the 4D image, lung segmentation, and estimation of the parameters describing lung tissue intensity distributions in the 4D image. Based on pre- and post-operative dMRI data sets from 25 TIS patients (predominantly neuromuscular and congenital conditions), we demonstrate how lung tissue can be characterized by the estimated distribution parameters. Our results show that standardized T2-weighted image intensity values decrease from the pre- to the post-operative condition, likely reflecting improved lung aeration post-operatively. In both pre- and post-operative conditions, the intensity values also decrease from end-expiration to end-inspiration, supporting the basic premise of our approach.
High-speed spectral domain optical coherence tomography using non-uniform fast Fourier transform
Chan, Kenny K. H.; Tang, Shuo
2010-01-01
The useful imaging range in spectral domain optical coherence tomography (SD-OCT) is often limited by the depth-dependent sensitivity fall-off. Processing SD-OCT data with the non-uniform fast Fourier transform (NFFT) can improve the sensitivity fall-off at maximum depth by more than 5 dB, concurrently with a 30-fold decrease in processing time compared to the fast Fourier transform with cubic spline interpolation. NFFT can also improve the local signal-to-noise ratio (SNR) and reduce image artifacts introduced in post-processing. Combined with parallel processing, NFFT is shown to be able to process up to 90k A-lines per second. High-speed SD-OCT imaging is demonstrated at a camera-limited 100 frames per second on an ex-vivo squid eye. PMID:21258551
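For context, the baseline pipeline that NFFT is compared against can be sketched as follows: resample the wavelength-uniform spectrum to a uniform wavenumber grid with a cubic spline, then FFT to obtain the A-line. All values below are illustrative, not from the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# The spectrometer samples uniformly in wavelength, hence non-uniformly
# in wavenumber k = 2*pi/lambda. Simulate fringes from one reflector.
n = 2048
lam = np.linspace(800e-9, 880e-9, n)        # wavelengths (m), made up
k = 2 * np.pi / lam                          # non-uniform wavenumber grid
z0 = 0.5e-3                                  # single reflector depth (m)
spectrum = 1.0 + np.cos(2 * k * z0)          # interference fringes

# Conventional reconstruction: cubic-spline resample to uniform k, then FFT.
k_uniform = np.linspace(k.min(), k.max(), n)
resampled = CubicSpline(k[::-1], spectrum[::-1])(k_uniform)  # k decreases with lambda
a_line = np.abs(np.fft.fft(resampled - resampled.mean()))
depth_axis = np.fft.fftfreq(n, d=(k_uniform[1] - k_uniform[0])) * np.pi
peak_depth = abs(depth_axis[np.argmax(a_line[: n // 2])])    # recovered depth
```

The interpolation step is what degrades deep-depth sensitivity and dominates processing time; NFFT replaces it by evaluating the transform directly on the non-uniform k samples.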
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure the quantitative surface color information of agricultural products along with the ambient information during cultivation, a color calibration method for digital camera images and a Web-based remote monitoring system for color imaging were developed. Single-lens reflex and web digital cameras were used for image acquisition. Tomato images through the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with a standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system through the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using a color parameter calculated from the acquired and calibrated color images, along with the ambient atmospheric record. This study is an important step in developing surface color analysis both for simple and rapid evaluation of crop vigor in the field and for constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
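One common way to realize chart-based color calibration of the kind described is a least-squares color correction matrix estimated from the chart patches; the patch values below are hypothetical:

```python
import numpy as np

# Known reference RGB values of chart patches (made up for illustration)
reference = np.array([[0.9, 0.1, 0.1],
                      [0.1, 0.8, 0.2],
                      [0.2, 0.1, 0.9],
                      [0.8, 0.8, 0.8]])

# Simulated camera measurements under a field-illumination color cast
cast = np.array([[1.1, 0.1, 0.0],
                 [0.0, 0.9, 0.1],
                 [0.1, 0.0, 1.2]])
measured = reference @ cast.T

# Least-squares 3x3 correction matrix: measured @ M ~= reference
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
calibrated = measured @ M    # apply to measured colors
```

Because the chart is photographed next to the fruit, the same matrix can then be applied to the fruit pixels, removing the illumination dependence as the abstract describes.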
Sajn, Luka; Kukar, Matjaž
2011-12-01
The paper presents results of our long-term study on using image processing and data mining methods in medical imaging. Since evaluation of modern medical images is becoming increasingly complex, advanced analytical and decision support tools are involved in the integration of partial diagnostic results. Such partial results, frequently obtained from tests with substantial imperfections, are integrated into an ultimate diagnostic conclusion about the probability of disease for a given patient. We study various topics such as improving the predictive power of clinical tests by utilizing pre-test and post-test probabilities, texture representation, multi-resolution feature extraction, feature construction, and data mining algorithms that significantly outperform medical practice. Our long-term study reveals three significant milestones. The first improvement was achieved by significantly increasing post-test diagnostic probabilities with respect to expert physicians. The second, even more significant improvement utilizes multi-resolution image parametrization. Machine learning methods in conjunction with feature subset selection on these parameters significantly improve diagnostic performance. Further feature construction with principal component analysis on these features elevates results to an even higher accuracy level, which represents the third milestone. With the proposed approach, clinical results are significantly improved throughout the study. The most significant result of our study is the improvement in the diagnostic power of the whole diagnostic process. Our compound approach aids, but does not replace, the physician's judgment and may assist in decisions on the cost effectiveness of tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
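The pre-test/post-test probability update mentioned above is Bayes' rule in odds form (post-test odds = pre-test odds × likelihood ratio); a minimal sketch with illustrative numbers:

```python
def post_test_probability(pre_test_prob, sensitivity, specificity, positive=True):
    """Update disease probability after a test result using Bayes' rule
    in odds form: post-test odds = pre-test odds * likelihood ratio."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    # LR+ for a positive result, LR- for a negative result
    lr = sensitivity / (1.0 - specificity) if positive else (1.0 - sensitivity) / specificity
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)

# Illustrative numbers (not from the study): 30% pre-test probability,
# test with 85% sensitivity and 90% specificity, positive result.
p = post_test_probability(0.30, sensitivity=0.85, specificity=0.90, positive=True)
```

A positive result raises the probability well above the 30% prior, while a negative result would push it below, which is the mechanism the study exploits to chain imperfect tests into a stronger conclusion.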
A fast automatic target detection method for detecting ships in infrared scenes
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2016-05-01
Automatic target detection in infrared scenes is a vital task for many application areas such as defense, security, and border surveillance. For anti-ship missiles, having a fast and robust ship detection algorithm is crucial for overall system performance. In this paper, a straightforward yet effective ship detection method for infrared scenes is introduced. First, morphological grayscale reconstruction is applied to the input image, followed by automatic thresholding of the suppressed image. For the segmentation step, connected component analysis is employed to obtain target candidate regions. At this point, the detection is still vulnerable to outliers such as small objects with relatively high intensity values or clouds. To deal with this drawback, a post-processing stage is introduced, in which two different methods are used. First, noisy detection results are rejected with respect to target size. Second, the waterline is detected using the Hough transform, and detection results located above the waterline by more than a small margin are rejected. After the post-processing stage, undesired holes still remain, which can cause one object to be detected as multiple objects or prevent an object from being detected as a whole. To improve the detection performance, another automatic thresholding is applied only to the target candidate regions. Finally, the two detection results are fused and the post-processing stage is repeated to obtain the final detection result. The performance of the overall methodology is tested with real-world infrared test data.
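Two of the steps above, connected component analysis and size-based rejection, can be sketched as follows; the morphological reconstruction and Hough waterline steps are omitted, and the scene and size gate are synthetic:

```python
import numpy as np
from scipy import ndimage

# Synthetic IR-like scene: an elongated "ship" plus a one-pixel outlier
scene = np.zeros((60, 80))
scene[30:34, 10:40] = 1.0    # ship-sized bright region
scene[5, 60] = 1.0           # small bright outlier (e.g. glint or cloud pixel)

binary = scene > 0.5                               # stand-in for automatic thresholding
labels, n = ndimage.label(binary)                  # connected component analysis
sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
min_target_size = 20                               # hypothetical size gate
ships = [lab for lab, s in zip(range(1, n + 1), sizes) if s >= min_target_size]
```

The size gate discards the one-pixel outlier while keeping the ship-sized component, mirroring the first rejection rule of the post-processing stage.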
Automation of a Wave-Optics Simulation and Image Post-Processing Package on Riptide
NASA Astrophysics Data System (ADS)
Werth, M.; Lucas, J.; Thompson, D.; Abercrombie, M.; Holmes, R.; Roggemann, M.
Detailed wave-optics simulations and image post-processing algorithms are computationally expensive and benefit from the massively parallel hardware available at supercomputing facilities. We created an automated system that interfaces with the Maui High Performance Computing Center (MHPCC) Distributed MATLAB® Portal interface to submit massively parallel waveoptics simulations to the IBM iDataPlex (Riptide) supercomputer. This system subsequently postprocesses the output images with an improved version of physically constrained iterative deconvolution (PCID) and analyzes the results using a series of modular algorithms written in Python. With this architecture, a single person can simulate thousands of unique scenarios and produce analyzed, archived, and briefing-compatible output products with very little effort. This research was developed with funding from the Defense Advanced Research Projects Agency (DARPA). The views, opinions, and/or findings expressed are those of the author(s) and should not be interpreted as representing the official views or policies of the Department of Defense or the U.S. Government.
NASA Astrophysics Data System (ADS)
Arhatari, Benedicta D.; Abbey, Brian
2018-01-01
Ross filter pairs have recently been demonstrated as a highly effective means of producing quasi-monoenergetic beams from polychromatic X-ray sources. They have found applications both in X-ray spectroscopy and in elemental separation in X-ray computed tomography (XCT). Here we explore whether they can be applied to the problem of metal artefact reduction (MAR) for applications in medical imaging. Metal artefacts are a common problem in X-ray imaging of metal implants embedded in bone and soft tissue. A number of data post-processing approaches to MAR have been proposed in the literature; however, these can be time-consuming and sometimes have limited efficacy. Here we describe and demonstrate an alternative approach based on beam conditioning using Ross filter pairs. This approach obviates the need for complex post-processing of the data and enables MAR and segmentation of the implant from the surrounding tissue by exploiting its absorption-edge contrast.
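The principle behind a Ross filter pair can be illustrated with a toy step-edge attenuation model; the edge energies and attenuation coefficients below are made up, not measured data:

```python
import numpy as np

E = np.linspace(10, 100, 901)                 # energy grid in keV (illustrative)

def transmission(E, edge, mu_low=0.2, mu_high=1.0, t=1.0):
    """Toy filter: weak attenuation below its K-edge, strong above."""
    mu = np.where(E < edge, mu_low, mu_high)
    return np.exp(-mu * t)

t_low = transmission(E, edge=40.0)    # hypothetical lower-edge filter
t_high = transmission(E, edge=50.0)   # hypothetical higher-edge filter

# Image through the higher-edge filter minus the lower-edge filter:
# the two transmissions match outside the band bracketed by the edges,
# so the difference is a quasi-monoenergetic passband.
passband = t_high - t_low
```

In practice the two filter materials are chosen so their K-edges bracket the energy band of interest and their thicknesses are balanced to match transmission outside the band; the idealized step model above only conveys the cancellation idea.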
NASA Astrophysics Data System (ADS)
Faber, Tracy L.; Garcia, Ernest V.; Lalush, David S.; Segars, W. Paul; Tsui, Benjamin M.
2001-05-01
The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to produce single photon emission computed tomography (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring the accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected by standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on the results of standard perfusion quantification. The new LV is translated and rotated to fit within the existing atrial and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. The shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, the size, shape, and motion of the simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating the efficacy of processing algorithms.
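The spatial interpolation of detected surface points into a B-spline model can be sketched in 2D with SciPy's parametric spline routines; a closed synthetic contour stands in for the detected LV surface points (the real phantom interpolates full surfaces in space and time):

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic closed contour standing in for detected LV boundary points
theta = np.linspace(0, 2 * np.pi, 20, endpoint=False)
r = 30 + 2 * np.sin(3 * theta)                   # "detected" boundary radii
pts = np.stack([r * np.cos(theta), r * np.sin(theta)])
pts = np.concatenate([pts, pts[:, :1]], axis=1)  # close the contour

# Periodic interpolating B-spline through the points (s=0: no smoothing)
tck, _ = splprep(pts, per=True, s=0)
dense = np.array(splev(np.linspace(0, 1, 200), tck))  # densely resampled contour
```

Once the sparse detected points are captured as spline coefficients, the model can be evaluated at any resolution and deformed over time, which is what makes the spline representation convenient for a dynamic phantom.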
On detection of median filtering in digital images
NASA Astrophysics Data System (ADS)
Kirchner, Matthias; Fridrich, Jessica
2010-01-01
In digital image forensics, it is generally accepted that intentional manipulations of the image content are most critical, and hence numerous forensic methods focus on the detection of such 'malicious' post-processing. However, it is also beneficial to know as much as possible about the general processing history of an image, including content-preserving operations, since they can affect the reliability of forensic methods in various ways. In this paper, we present a simple yet effective technique to detect median filtering in digital images, a widely used denoising and smoothing operator. As a great variety of forensic methods relies on some kind of linearity assumption, detection of non-linear median filtering is of particular interest. The effectiveness of our method is backed with experimental evidence on a large image database.
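A simple indicator in the spirit of such detectors (a simplified stand-in, not the paper's method): median filtering creates runs of equal neighboring pixels, so the fraction of zero first-order differences rises sharply.

```python
import numpy as np
from scipy.ndimage import median_filter

def zero_diff_ratio(img):
    """Fraction of zero horizontal first-order pixel differences."""
    d = np.diff(img.astype(int), axis=1)
    return float(np.mean(d == 0))

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(128, 128))   # synthetic noisy image
filtered = median_filter(original, size=3)          # 3x3 median filtering

r_orig = zero_diff_ratio(original)    # low for an unfiltered noisy image
r_filt = zero_diff_ratio(filtered)    # markedly higher after median filtering
```

Thresholding such a statistic gives a crude median-filtering detector; the paper's method is more refined but exploits the same non-linear fingerprint.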
Onboard spectral imager data processor
NASA Astrophysics Data System (ADS)
Otten, Leonard J.; Meigs, Andrew D.; Franklin, Abraham J.; Sears, Robert D.; Robison, Mark W.; Rafert, J. Bruce; Fronterhouse, Donald C.; Grotbeck, Ronald L.
1999-10-01
Previous papers have described the concept behind the MightySat II.1 program, the satellite's Fourier Transform imaging spectrometer's optical design, the design for the spectral imaging payload, and its initial qualification testing. This paper discusses the on board data processing designed to reduce the amount of downloaded data by an order of magnitude and provide a demonstration of a smart spaceborne spectral imaging sensor. Two custom components, a spectral imager interface 6U VME card that moves data at over 30 MByte/sec, and four TI C-40 processors mounted to a second 6U VME and daughter card, are used to adapt the sensor to the spacecraft and provide the necessary high speed processing. A system architecture that offers both on board real time image processing and high-speed post data collection analysis of the spectral data has been developed. In addition to the on board processing of the raw data into a usable spectral data volume, one feature extraction technique has been incorporated. This algorithm operates on the basic interferometric data. The algorithm is integrated within the data compression process to search for uploadable feature descriptions.
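The core spectral step for a Fourier-transform imaging spectrometer is turning each pixel's interferogram into a spectrum via an FFT; a single-pixel sketch with synthetic data (not the MightySat II.1 processing chain):

```python
import numpy as np

n = 512
opd = np.arange(n)                    # optical path difference samples
sigma = 0.12                           # wavenumber in cycles/sample (made up)
# Monochromatic source: constant bias plus a cosine fringe pattern
interferogram = 1.0 + np.cos(2 * np.pi * sigma * opd)

# Remove the bias and transform: the spectrum peaks at the source wavenumber
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
peak_bin = int(np.argmax(spectrum))
recovered_sigma = peak_bin / n         # should be close to 0.12
```

On the real payload this transform runs per pixel over the full data cube, which is why dedicated DSP hardware and on-board feature extraction were needed to cut the downlinked volume.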
Precise Aperture-Dependent Motion Compensation with Frequency Domain Fast Back-Projection Algorithm.
Zhang, Man; Wang, Guanyong; Zhang, Lei
2017-10-26
Precise azimuth-variant motion compensation (MOCO) is an essential and difficult task for high-resolution synthetic aperture radar (SAR) imagery. In conventional post-filtering approaches, residual azimuth-variant motion errors are generally compensated through a set of spatial post-filters, where the coarse-focused image is segmented into overlapped blocks according to the azimuth-dependent residual errors. However, the robustness of image-domain post-filtering approaches, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA), declines when strong motion errors are present in the coarse-focused image. In this case, in order to capture the complete motion blurring function within each image block, both the block size and the overlap need to be extended, inevitably degrading efficiency and robustness. Herein, a frequency-domain fast back-projection algorithm (FDFBPA) is introduced to deal with strong azimuth-variant motion errors. FDFBPA compensates the azimuth-variant motion errors based on a precise azimuth spectrum expression in the azimuth wavenumber domain. First, a wavenumber-domain sub-aperture processing strategy is introduced to accelerate computation. The azimuth wavenumber spectrum is then partitioned into a set of wavenumber blocks, and each block is formed into a sub-aperture coarse-resolution image via the back-projection integral. The sub-aperture images are then fused together in the azimuth wavenumber domain to obtain a full-resolution image. Moreover, the chirp-Z transform (CZT) is introduced to implement the sub-aperture back-projection integral, increasing the efficiency of the algorithm. By avoiding the image-domain post-filtering strategy, the robustness of the proposed algorithm is improved. Both simulation and real-measured data experiments demonstrate the effectiveness and superiority of the proposed method.
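The role of the CZT here is to evaluate the spectrum on a zoomed frequency band; a direct (non-fast) evaluation of the same zoomed DFT shows what the CZT computes efficiently, with made-up signal and band:

```python
import numpy as np

n, m = 256, 64
t = np.arange(n)
x = np.exp(2j * np.pi * 0.203 * t)     # synthetic tone near f = 0.203

# Zoom band around the tone (the CZT would evaluate exactly these
# frequencies, but in O(n log n) instead of the O(n*m) matrix product)
f = np.linspace(0.18, 0.22, m)
# Zoomed DFT: X[k] = sum_n x[n] * exp(-2j*pi*f[k]*n)
X = np.exp(-2j * np.pi * np.outer(f, t)) @ x
peak_f = f[np.argmax(np.abs(X))]
```

The zoom grid is much finer than the standard FFT bin spacing of 1/n, which is the property the sub-aperture back-projection integral exploits.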
Pertuz, Said; McDonald, Elizabeth S; Weinstein, Susan P; Conant, Emily F; Kontos, Despina
2016-04-01
To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board-approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration-cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging-based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment.
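The confidence intervals quoted for the correlation coefficients are consistent with the standard Fisher z-transformation; a minimal sketch of that computation (assuming a two-sided 95% normal approximation, not the authors' exact software):

```python
import math

def pearson_ci95(r, n):
    """Approximate 95% CI for a Pearson correlation (Fisher z-transform)."""
    z = math.atanh(r)                # variance-stabilizing transform
    se = 1.0 / math.sqrt(n - 3)      # standard error in z-space
    half = 1.959963984540054 * se    # two-sided 95% normal quantile
    return math.tanh(z - half), math.tanh(z + half)
```

For r = 0.83 and n = 68 this gives roughly (0.74, 0.89), in line with the reported interval.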
Computer imaging and workflow systems in the business office.
Adams, W T; Veale, F H; Helmick, P M
1999-05-01
Computer imaging and workflow technology automates many business processes that currently are performed using paper processes. Documents are scanned into the imaging system and placed in electronic patient account folders. Authorized users throughout the organization, including preadmission, verification, admission, billing, cash posting, customer service, and financial counseling staff, have online access to the information they need when they need it. Such streamlining of business functions can increase collections and customer satisfaction while reducing labor, supply, and storage costs. Because the costs of a comprehensive computer imaging and workflow system can be considerable, healthcare organizations should consider implementing parts of such systems that can be cost-justified or include implementation as part of a larger strategic technology initiative.
Alferova, V V; Mayorova, L A; Ivanova, E G; Guekht, A B; Shklovskij, V M
2017-01-01
The introduction of non-invasive functional neuroimaging techniques, such as functional magnetic resonance imaging (fMRI), into scientific and clinical research can increase our knowledge about the organization of cognitive processes, including language, in the normal brain and about the reorganization of these cognitive functions in post-stroke aphasia. The article discusses the results of fMRI studies of the functional organization of the cortex of a healthy adult brain during the processing of various voice information, as well as the main types of speech reorganization in post-stroke aphasia in different stroke periods. The concepts of 'effective' and 'ineffective' brain plasticity in post-stroke aphasia are considered. It was concluded that there is an urgent need for further comprehensive studies, including neuropsychological testing and several complementary methods of functional neuroimaging, to develop a phased plan of treatment and neurorehabilitation for patients with post-stroke aphasia.
Post-Processing of Low Dose Mammography Images
2002-05-01
…method of restoring images in the presence of blur as well as noise" (12:276). The deblurring and denoising characteristics make Wiener filtering … independent noise. The signal-dependent scatter noise can be modeled as blur in the mammography image. A Wiener filter with deblurring characteristics can … centered on. This method is used to eradicate noise impulses with high pixel values (2:7). For the research at hand, the median filter would …
Post Launch Calibration and Testing of the Advanced Baseline Imager on the GOES-R Satellite
NASA Technical Reports Server (NTRS)
Lebair, William; Rollins, C.; Kline, John; Todirita, M.; Kronenwetter, J.
2016-01-01
The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States' National Oceanic and Atmospheric Administration. The first launch of the GOES-R series is planned for October 2016. The GOES-R series satellites and instruments are being developed by the National Aeronautics and Space Administration (NASA). One of the key instruments on the GOES-R series is the Advanced Baseline Imager (ABI). The ABI is a multi-channel, visible through infrared, passive imaging radiometer. The ABI will provide moderate spatial and spectral resolution at high temporal and radiometric resolution to accurately monitor rapidly changing weather. Initial on-orbit calibration and performance characterization is crucial to establishing the baseline used to maintain performance throughout mission life. A series of tests has been planned to establish the post-launch performance and establish the parameters needed to process the data in the Ground Processing Algorithm. The large number of detectors for each channel required to provide the needed temporal coverage presents unique challenges for accurately calibrating ABI and minimizing striping. This paper discusses the planned tests to be performed on ABI over the six-month Post Launch Test period and the expected performance as it relates to ground tests.
NASA Astrophysics Data System (ADS)
Park, Minsuk; Kang, Jeeun; Lee, Gunho; Kim, Min; Song, Tai-Kyong
2016-04-01
Recently, portable ultrasound (US) imaging systems using smart devices have drawn attention for enhancing the portability of diagnosis. In particular, such a combination can improve the user experience throughout the US diagnostic procedure by exploiting the advanced wireless communication technologies integrated in smart devices, e.g., WiFi and Bluetooth. In this paper, an effective post-phase-rotation-based dynamic receive beamforming (PRBF-POST) method is presented for a wireless US imaging device integrating a US probe system and a commercial smart device. The frame rate of the conventional PRBF (PRBF-CON) method suffers from the large amount of computation in the bifurcated processing paths for the in-phase and quadrature signal components as the number of channels increases. In contrast, the proposed PRBF-POST method preserves the frame rate regardless of the number of channels by first aggregating the baseband IQ data across channels whose phase quantization levels are identical, ahead of the phase rotation and summation procedures on the smart device. To evaluate the proposed PRBF-POST method, the point-spread functions of the PRBF-CON and PRBF-POST methods were compared with each other. The frame rate of each PRBF method was also measured 20 times to calculate the average frame rate and its standard deviation. The PRBF-CON and PRBF-POST methods showed identical beamforming performance in a Field-II simulation (correlation coefficient = 1). Moreover, the proposed PRBF-POST method maintained a consistent frame rate for a varying number of channels (44.25, 44.32, and 44.35 fps for 16, 64, and 128 channels, respectively), while the frame rate of the PRBF-CON method decreased as the number of channels increased (39.73, 13.19, and 3.8 fps). These results indicate that the proposed PRBF-POST method is more advantageous than PRBF-CON for implementing a wireless US imaging system.
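The channel-aggregation idea can be illustrated in a few lines: channels whose delay maps to the same phase quantization level are summed first, so only one complex rotation per occupied level is needed instead of one per channel. A hedged NumPy sketch (the phase model and level count are assumptions, not the paper's parameters):

```python
import numpy as np

def beamsum_per_channel(iq, level, n_levels):
    """Conventional path: one phase rotation per channel, then sum."""
    return np.sum(iq * np.exp(1j * 2 * np.pi * level / n_levels))

def beamsum_post_rotation(iq, level, n_levels):
    """Post-rotation path: aggregate channels per quantized phase level
    first, then apply a single rotation per level."""
    acc = np.zeros(n_levels, dtype=complex)
    np.add.at(acc, level, iq)                      # per-level accumulation
    rot = np.exp(1j * 2 * np.pi * np.arange(n_levels) / n_levels)
    return np.sum(acc * rot)
```

Both paths produce identical sums, but the second touches the rotation table `n_levels` times regardless of the channel count, which is the claimed source of the channel-independent frame rate.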
Supervised detection of exoplanets in high-contrast imaging sequences
NASA Astrophysics Data System (ADS)
Gomez Gonzalez, C. A.; Absil, O.; Van Droogenbroeck, M.
2018-06-01
Context. Post-processing algorithms play a key role in pushing the detection limits of high-contrast imaging (HCI) instruments. State-of-the-art image processing approaches for HCI enable the production of science-ready images relying on unsupervised learning techniques, such as low-rank approximations, for generating a model point spread function (PSF) and subtracting the residual starlight and speckle noise. Aims: In order to maximize the detection rate of HCI instruments and survey campaigns, advanced algorithms with higher sensitivities to faint companions are needed, especially for the speckle-dominated innermost region of the images. Methods: We propose a reformulation of the exoplanet detection task (for ADI sequences) that builds on well-established machine learning techniques to take HCI post-processing from an unsupervised to a supervised learning context. In this new framework, we present algorithmic solutions using two different discriminative models: SODIRF (random forests) and SODINN (neural networks). We test these algorithms on real ADI datasets from VLT/NACO and VLT/SPHERE HCI instruments. We then assess their performances by injecting fake companions and using receiver operating characteristic analysis. This is done in comparison with state-of-the-art ADI algorithms, such as ADI principal component analysis (ADI-PCA). Results: This study shows the improved sensitivity versus specificity trade-off of the proposed supervised detection approach. At the diffraction limit, SODINN improves the true positive rate by a factor ranging from 2 to 10 (depending on the dataset and angular separation) with respect to ADI-PCA when working at the same false-positive level. Conclusions: The proposed supervised detection framework outperforms state-of-the-art techniques in the task of discriminating planet signal from speckles. 
In addition, it offers the possibility of re-processing existing HCI databases to maximize their scientific return and potentially improve the demographics of directly imaged exoplanets.
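The ROC-based assessment with injected fake companions can be sketched generically: detection scores at companion-injection sites are compared with scores at speckle-only sites. A toy NumPy example under an assumed Gaussian score model with SNR 3 (not the SODINN/SODIRF pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Detection scores under an assumed Gaussian model: speckle-only locations
# versus locations where a fake companion of SNR 3 was injected.
scores_neg = rng.standard_normal(2000)
scores_pos = rng.standard_normal(2000) + 3.0

def tpr_at_fpr(scores_pos, scores_neg, fpr):
    """True-positive rate at a fixed false-positive rate (one ROC point)."""
    thresh = np.quantile(scores_neg, 1.0 - fpr)    # threshold giving that FPR
    return (scores_pos >= thresh).mean()

# Area under the ROC curve via the rank (Mann-Whitney) identity
auc = (scores_pos[:, None] > scores_neg[None, :]).mean()
```

Working at a fixed false-positive level, as in the abstract, corresponds to reading off `tpr_at_fpr` at that level for each algorithm under comparison.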
Benítez, Alfredo; Santiago, Ulises; Sanchez, John E; Ponce, Arturo
2018-01-01
In this work, an innovative cathodoluminescence (CL) system is coupled to a scanning electron microscope and synchronized with a Raspberry Pi computer running a custom signal post-processing routine. The post-processing is based on a Python algorithm that correlates the CL and secondary electron (SE) images with a precise dwell-time correction. For CL imaging, the emission signal is collected through an optical fiber and transduced to an electrical signal via a photomultiplier tube (PMT). CL images are registered in panchromatic mode and can be filtered using a monochromator connected between the optical fiber and the PMT to produce monochromatic CL images. The designed system has been employed to study ZnO samples prepared by electrical arc discharge and microwave methods. CL images are compared with SE images and chemical elemental mapping images to correlate the emission regions of the sample.
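Correlating the simultaneously acquired CL and SE signals to estimate a timing offset can be sketched with a 1-D cross-correlation. This is a hypothetical NumPy sketch, since the paper's actual Python routine is not shown:

```python
import numpy as np

def dwell_lag(se_line, cl_line):
    """Estimate the sample lag between simultaneously acquired SE and CL
    scan lines by full cross-correlation (e.g. PMT-chain delay shifts CL)."""
    se = se_line - se_line.mean()
    cl = cl_line - cl_line.mean()
    corr = np.correlate(cl, se, mode="full")
    # index (len(se) - 1) corresponds to zero lag in 'full' mode
    return int(np.argmax(corr)) - (len(se) - 1)
```

The estimated lag would then be used to shift the CL line before forming the correlated CL/SE image pair.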
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castillo, S; Castillo, R; Castillo, E
2014-06-15
Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over the available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly, by 27% and 28% for the first and second cohorts, respectively. In cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross-scan average at the 0.05 significance level. Artifact presence for both clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p<0.032 clinical; ρ=0.296, p<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p<0.02), indicating decreased artifact presence with increased breathing cycles per scan location.
Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase-sorted clinical acquisition.
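The optimized sorting can be illustrated as a shortest-path search through a layered graph: one layer per couch position, one node per candidate image, and edge weights given by an artifact (mismatch) metric between adjacent selections. A generic Dijkstra sketch with a stand-in absolute-difference metric (the study's actual metric is correlation-based over image data):

```python
import heapq

def optimized_sort(candidates, mismatch):
    """Choose one candidate per position so the summed pairwise artifact
    metric between adjacent positions is minimal (Dijkstra)."""
    n_pos = len(candidates)
    # heap entries: (accumulated cost, position index, candidate index, path)
    heap = [(0.0, 0, j, (j,)) for j in range(len(candidates[0]))]
    heapq.heapify(heap)
    settled = set()
    while heap:
        cost, i, j, path = heapq.heappop(heap)
        if (i, j) in settled:
            continue
        settled.add((i, j))
        if i == n_pos - 1:
            return list(path), cost          # chosen indices and total cost
        for k, nxt in enumerate(candidates[i + 1]):
            if (i + 1, k) not in settled:
                step = mismatch(candidates[i][j], nxt)
                heapq.heappush(heap, (cost + step, i + 1, k, path + (k,)))
    raise ValueError("no candidates given")
```

With `candidates = [[0, 5], [1, 9], [2, 8]]` and an absolute-difference metric, the smoothest chain (values 0, 1, 2) is selected at total cost 2.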
Improved associative recall of binary data in volume holographic memories
NASA Astrophysics Data System (ADS)
Betzos, George A.; Laisné, Alexandre; Mitkas, Pericles A.
1999-11-01
A new technique is presented that improves the results of associative recall in a volume holographic memory system. A background is added to the normal search argument to increase the amount of optical power that is used to reconstruct the reference beams in the crystal. This is combined with post-processing of the captured image of the reference beams. The use of both the background and post-processing greatly improves the results by allowing associative recall using small arguments. In addition, the number of false hits is reduced and misses are virtually eliminated.
Automatic small target detection in synthetic infrared images
NASA Astrophysics Data System (ADS)
Yardımcı, Ozan; Ulusoy, İlkay
2017-05-01
Automatic detection of targets from far distances is a very challenging problem. Background clutter and small target size are the main difficulties that must be overcome to reach high detection performance at a low computational load. The pre-processing, detection, and post-processing stages all strongly affect the final results. In this study, various methods from the literature were first evaluated separately for each of these stages using simulated test scenarios. Then, a full detection pipeline was constructed from the solutions that performed best. Although a precision of 100% was reached, recall remained low, around 25-45%. Finally, a post-processing method was proposed that increases recall while keeping precision at 100%. The proposed post-processing method, which is based on local operations, increased recall to 65-95% in all test scenarios.
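The paper does not spell out its local operations, but a two-threshold (hysteresis-style) scheme shows how recall can rise without hurting precision: faint responses are accepted only in the neighbourhood of a confident detection. A NumPy sketch under that assumption:

```python
import numpy as np

def hysteresis_detect(score, t_high, t_low, radius=1):
    """Seeds above t_high are trusted detections; responses above t_low are
    kept only inside a (2*radius+1)^2 window around a seed (local operation)."""
    seeds = score >= t_high
    n_rows, n_cols = seeds.shape
    grown = np.zeros_like(seeds)
    for dr in range(-radius, radius + 1):          # square dilation of seeds
        for dc in range(-radius, radius + 1):
            shifted = np.zeros_like(seeds)
            src = seeds[max(0, -dr):n_rows - max(0, dr),
                        max(0, -dc):n_cols - max(0, dc)]
            shifted[max(0, dr):n_rows - max(0, -dr),
                    max(0, dc):n_cols - max(0, -dc)] = src
            grown |= shifted
    return seeds | (grown & (score >= t_low))
```

Isolated faint clutter never reaches the low threshold path, so precision is preserved, while faint target pixels adjacent to a strong response are recovered.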
Fission gas bubble identification using MATLAB's image processing toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.; King, J.; Keiser, Jr., D.
2016-06-08
Automated image processing routines have the potential to aid the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB-based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium-molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
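The Sauvola rule mentioned above sets a local threshold t = m·(1 + k·(s/R − 1)) from the window mean m and standard deviation s. A NumPy sketch using integral images (the parameter values are conventional defaults, not the study's settings):

```python
import numpy as np

def sauvola_threshold(img, window=15, k=0.2, R=128.0):
    """Sauvola local threshold map t = m * (1 + k*(s/R - 1)), with local
    mean m and standard deviation s over a square window (integral images)."""
    img = img.astype(float)
    pad = window // 2
    P = np.pad(img, pad, mode="edge")
    S = np.zeros((P.shape[0] + 1, P.shape[1] + 1))
    S2 = np.zeros_like(S)
    S[1:, 1:] = P.cumsum(0).cumsum(1)                # integral image
    S2[1:, 1:] = (P ** 2).cumsum(0).cumsum(1)        # integral of squares
    w = window
    def box(A):                                      # window sums via corners
        return A[w:, w:] - A[:-w, w:] - A[w:, :-w] + A[:-w, :-w]
    n = w * w
    mean = box(S) / n
    var = box(S2) / n - mean ** 2
    std = np.sqrt(np.maximum(var, 0.0))
    return mean * (1.0 + k * (std / R - 1.0))
```

Pixels above the returned map are segmented as foreground; in a flat region (s ≈ 0) the threshold drops to m·(1 − k), which suppresses spurious segmentation of noise.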
ERIC Educational Resources Information Center
Torsti, Pilvi
2007-01-01
This study examines the national division of history teaching in Bosnia and Herzegovina in the war and post-war period. The process of division of schooling into three curricula (Bosnian Serb, Bosnian Croat, and Bosniak) is presented. Representations of other national groups are central in 8th-grade history textbooks used by the three national…
An adaptive optics imaging system designed for clinical use.
Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R; Rossi, Ethan A
2015-06-01
Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide-FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2-3 arc minutes (arcmin), 2) ~0.5-0.8 arcmin, and 3) ~0.05-0.07 arcmin for normal eyes. Performance in eyes with poor fixation was: 1) ~3-5 arcmin, 2) ~0.7-1.1 arcmin, and 3) ~0.07-0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology, and real-time averaging of registered images to eliminate image post-processing.
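The registration stages rest on estimating frame-to-frame shifts; the integer-pixel part of such an estimate can be sketched with phase correlation (a generic sketch recovering whole-pixel shifts only; AOSLO registration also handles sub-pixel offsets and intra-frame distortion, which are omitted here):

```python
import numpy as np

def phase_correlate(ref, frame):
    """Integer (dy, dx) shift of `frame` relative to `ref` via the
    normalized cross-power spectrum (phase correlation)."""
    X = np.fft.fft2(frame) * np.conj(np.fft.fft2(ref))
    X /= np.maximum(np.abs(X), 1e-12)              # keep phase only
    corr = np.fft.ifft2(X).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the midpoint around to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

Normalizing away the magnitude makes the correlation peak sharp and largely insensitive to slow illumination changes between frames.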
Post-modelling of images from a laser-induced wavy boiling front
NASA Astrophysics Data System (ADS)
Matti, R. S.; Kaplan, A. F. H.
2015-12-01
Processes like laser keyhole welding, remote fusion laser cutting or laser drilling are governed by a highly dynamic wavy boiling front that was recently recorded by ultra-high speed imaging. A new approach has now been established by post-modelling of the high speed images. Based on the image greyscale and on a cavity model the three-dimensional front topology is reconstructed. As a second step the Fresnel absorptivity modulation across the wavy front is calculated, combined with the local projection of the laser beam. Frequency polygons enable additional analysis of the statistical variations of the properties across the front. Trends like shadow formation and time dependency can be studied, locally and for the whole front. Despite strong topology modulation in space and time, for lasers with 1 μm wavelength and steel the absorptivity is bounded to a narrow range of 35-43%, owing to its Fresnel characteristics.
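The bounded absorptivity range follows from the Fresnel equations for a metal. A sketch of the unpolarized absorptivity for a complex refractive index n + ik (the optical constants used below are illustrative values for iron near 1 μm, not the paper's):

```python
import math

def fresnel_absorptivity(n, k, theta_deg):
    """Absorptivity A = 1 - (Rs + Rp)/2 for unpolarized light incident from
    vacuum on a metal with complex refractive index n + ik."""
    theta = math.radians(theta_deg)
    N2 = complex(n, k) ** 2
    cos_t = math.cos(theta)
    sin2 = math.sin(theta) ** 2
    root = (N2 - sin2) ** 0.5            # complex refraction term
    rs = (cos_t - root) / (cos_t + root)
    rp = (N2 * cos_t - root) / (N2 * cos_t + root)
    return 1.0 - (abs(rs) ** 2 + abs(rp) ** 2) / 2.0
```

At normal incidence with n = 3.6, k = 5.0 this gives an absorptivity near 0.31; the exact values in the paper depend on its optical constants and on the local projection of the beam onto the wavy front.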
NASA Astrophysics Data System (ADS)
Amit, S. N. K.; Saito, S.; Sasaki, S.; Kiyoki, Y.; Aoki, Y.
2015-04-01
Google Earth, with its high-resolution imagery, generally takes months to process new images before they appear online. This is a slow, time-consuming process, especially for post-disaster applications. The objective of this research is to develop a fast and effective method of updating maps by detecting local differences that occur across time series; only regions with differences are updated. In our system, aerial images from the Massachusetts road and building open datasets and the Saitama district datasets are used as input images. Semantic segmentation, a pixel-wise classification of images, is then applied to the input images using a deep neural network. A deep neural network is used because it is not only efficient at learning highly discriminative image features such as roads and buildings, but also partially robust to incomplete and poorly registered target maps. Aerial images containing semantic information are then stored as a database in the 5D World Map, which serves as ground truth. This system visualises multimedia data in five dimensions: three spatial dimensions, one temporal dimension, and one degenerated dimension combining semantics and colour. Next, a ground truth image chosen from the 5D World Map database and a new aerial image with the same spatial information but a different time stamp are compared via difference extraction. The map is updated only where local changes have occurred. Hence, map updating becomes cheaper, faster, and more effective, especially for post-disaster applications, by leaving unchanged regions alone and updating only changed regions.
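The "update only changed regions" step can be sketched as a tile-wise comparison of two co-registered semantic label maps (the tile size and change threshold here are arbitrary illustrative choices):

```python
import numpy as np

def changed_regions(labels_old, labels_new, tile=16, min_frac=0.05):
    """Compare two co-registered semantic label maps and return the offsets
    of tiles whose fraction of changed pixels exceeds min_frac — only these
    tiles would be re-rendered in the map update."""
    diff = labels_old != labels_new
    out = []
    for r in range(0, diff.shape[0], tile):
        for c in range(0, diff.shape[1], tile):
            block = diff[r:r + tile, c:c + tile]
            if block.mean() > min_frac:
                out.append((r, c))
    return out
```

Everything outside the returned tiles is left untouched, which is where the claimed speed-up over whole-map reprocessing comes from.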
Shelmerdine, Susan C; Simcock, Ian C; Hutchinson, John Ciaran; Aughwane, Rosalind; Melbourne, Andrew; Nikitichev, Daniil I; Ong, Ju-Ling; Borghi, Alessandro; Cole, Garrard; Kingham, Emilia; Calder, Alistair D; Capelli, Claudio; Akhtar, Aadam; Cook, Andrew C; Schievano, Silvia; David, Anna; Ourselin, Sebastian; Sebire, Neil J; Arthurs, Owen J
2018-06-14
Microfocus CT (micro-CT) is an imaging method that provides three-dimensional digital data sets with comparable resolution to light microscopy. Although it has traditionally been used for non-destructive testing in engineering, aerospace industries and in preclinical animal studies, new applications are rapidly becoming available in the clinical setting including post-mortem fetal imaging and pathological specimen analysis. Printing three-dimensional models from imaging data sets for educational purposes is well established in the medical literature, but typically using low resolution (0.7 mm voxel size) data acquired from CT or MR examinations. With higher resolution imaging (voxel sizes below 1 micron, <0.001 mm) at micro-CT, smaller structures can be better characterised, and data sets post-processed to create accurate anatomical models for review and handling. In this review, we provide examples of how three-dimensional printing of micro-CT imaged specimens can provide insight into craniofacial surgical applications, developmental cardiac anatomy, placental imaging, archaeological remains and high-resolution bone imaging. We conclude with other potential future usages of this emerging technique.
Optimization of a fast optical CT scanner for nPAG gel dosimetry
NASA Astrophysics Data System (ADS)
Vandecasteele, Jan; DeDeene, Yves
2009-05-01
A fast laser-scanning optical CT scanner was constructed and optimized at Ghent University. The first images acquired were contaminated with several imaging artifacts, whose origins were investigated. Performance characteristics of different components were measured, such as the laser spot size, light attenuation by the lenses, and the dynamic range of the photo-detector. The need for a differential measurement using a second photo-detector was investigated. Post-processing strategies to compensate for hardware-related errors were developed. Drift of the laser and of the detector was negligible. Incorrect refractive index matching was dealt with by developing an automated matching process. Scratches on the water bath and phantom container, when present, pose a post-processing challenge in eliminating the resulting artifacts from the reconstructed images. Secondary laser spots due to multiple reflections need to be investigated further. The time delay in the control of the galvanometer and detector was dealt with using black strips that serve as markers of the projection position; still, some residual ringing artifacts are present. Several small volumetric test phantoms were constructed to obtain an overall picture of the accuracy.
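The value of the second photo-detector can be shown in two lines: a differential measurement divides out source drift before taking the optical density. A minimal sketch (variable names are illustrative):

```python
import math

def optical_density(i_signal, i_ref, dark_signal=0.0, dark_ref=0.0):
    """Differential optical-density measurement: dividing by the reference
    detector cancels laser power drift; dark offsets are subtracted first."""
    s = i_signal - dark_signal
    r = i_ref - dark_ref
    if s <= 0 or r <= 0:
        raise ValueError("non-positive corrected intensity")
    return -math.log10(s / r)
```

If the laser dims by half, both detectors see it equally, so the computed optical density of the gel is unchanged.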
Post-processing for improving hyperspectral anomaly detection accuracy
NASA Astrophysics Data System (ADS)
Wu, Jee-Cheng; Jiang, Chi-Ming; Huang, Chen-Liang
2015-10-01
Anomaly detection is an important topic in the exploitation of hyperspectral data. Based on the Reed-Xiaoli (RX) detector and a morphology operator, this research proposes a novel technique for improving the accuracy of hyperspectral anomaly detection. First, the RX-based detector is used to process a given input scene. Then, a post-processing scheme using a morphology operator is employed to detect those pixels around high-scoring anomaly pixels. Tests were conducted using two real hyperspectral images with ground truth information, and the results, based on receiver operating characteristic curves, illustrate that the proposed method reduces the false alarm rate of the RX-based detector.
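The RX score underlying the detector is the Mahalanobis distance of each pixel spectrum from the scene statistics. A global-RX NumPy sketch (the paper's morphology-based post-processing step is a separate stage and is omitted here):

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: Mahalanobis distance of each pixel
    spectrum from the scene mean (cube shape: rows x cols x bands)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    icov = np.linalg.inv(np.cov(X, rowvar=False))
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, icov, d)   # d_i^T C^-1 d_i per pixel
    return scores.reshape(h, w)
```

Thresholding this score map yields the high-scoring anomaly pixels around which the morphological post-processing then operates.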
Large-field-of-view wide-spectrum artificial reflecting superposition compound eyes
NASA Astrophysics Data System (ADS)
Huang, Chi-Chieh
The study of the imaging principles of natural compound eyes has become an active area of research and has fueled the advancement of modern optics with many attractive design features beyond those available with conventional technologies. Most prominent among all compound eyes are the reflecting superposition compound eyes (RSCEs) found in some decapods. They are extraordinary imaging systems with numerous optical features such as minimal chromatic aberration, a wide-angle field of view (FOV), high sensitivity to light, and superb acuity to motion. Inspired by this remarkable visual system, we implemented its unique lens-free, reflection-based imaging mechanism in a miniaturized, large-FOV optical imaging device operating across the wide visible spectrum to minimize chromatic aberration without any additional post-image processing. First, two micro-transfer printing methods, a multiple and a shear-assisted transfer printing technique, were studied and discussed to realize life-sized artificial RSCEs. The processes exploited the differential adhesive tendencies of the microstructures formed between a donor and a transfer substrate to accomplish an efficient release and transfer process. These techniques enabled conformal wrapping of three-dimensional (3-D) microstructures, initially fabricated in two-dimensional (2-D) layouts with standard fabrication technology, onto a wide range of surfaces with complex and curvilinear shapes. The final part of this dissertation focused on implementing the key operational features of the natural RSCEs in large-FOV, wide-spectrum artificial RSCEs as an optical imaging device suitable for the wide visible spectrum. Our devices can form real, clear images based on reflection rather than refraction, hence avoiding chromatic aberration due to dispersion by the optical materials.
Compared to conventional refractive lenses of comparable size, our devices demonstrated minimal chromatic aberration, an exceptional FOV of up to 165° without distortion, modest spherical aberration, and comparable imaging quality without any post-image processing. Together with an augmenting cruciform pattern surrounding each focused image, our devices possessed enhanced, dynamic motion-tracking capability ideal for diverse applications in military, security, search and rescue, night navigation, medical imaging, and astronomy. Owing to its reflection-based operating principle, the approach can in the future be extended into the mid- and far-infrared for more demanding applications.
Stent deployment protocol for optimized real-time visualization during endovascular neurosurgery.
Silva, Michael A; See, Alfred P; Dasenbrock, Hormuzdiyar H; Ashour, Ramsey; Khandelwal, Priyank; Patel, Nirav J; Frerichs, Kai U; Aziz-Sultan, Mohammad A
2017-05-01
Successful application of endovascular neurosurgery depends on high-quality imaging to define the pathology and the devices as they are being deployed. This is especially challenging in the treatment of complex cases, particularly in proximity to the skull base or in patients who have undergone prior endovascular treatment. The authors sought to optimize real-time image guidance using a simple algorithm that can be applied to any existing fluoroscopy system. Exposure management (exposure level, pulse management) and image post-processing parameters (edge enhancement) were modified from traditional fluoroscopy to improve visualization of device position and material density during deployment. Examples include the deployment of coils in small aneurysms, coils in giant aneurysms, the Pipeline embolization device (PED), the Woven EndoBridge (WEB) device, and carotid artery stents. The authors report on the development of the protocol and their experience using representative cases. The stent deployment protocol is an image capture and post-processing algorithm that can be applied to existing fluoroscopy systems to improve real-time visualization of device deployment without hardware modifications. Improved image guidance facilitates aneurysm coil packing and proper positioning and deployment of carotid artery stents, flow diverters, and the WEB device, especially in the context of complex anatomy and an obscured field of view.
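One common form of the edge-enhancement post-processing mentioned in the protocol is unsharp masking; the vendor's actual algorithm is not described in the abstract, so the box-blur kernel size and gain below are purely illustrative.

```python
import numpy as np

def unsharp_mask(img, k=3, amount=1.0):
    """Edge enhancement by unsharp masking: add back a scaled difference
    between the image and a local-mean blur, which accentuates device
    edges against the background. k is the box-blur size (odd)."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    # Simple box blur built from shifted copies of the padded image.
    blur = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
               for dy in range(-pad, pad + 1)
               for dx in range(-pad, pad + 1)) / (k * k)
    blur = blur[pad:-pad, pad:-pad]
    return img + amount * (img - blur)

# A step edge gains overshoot on both sides, making it easier to see.
img = np.zeros((10, 10))
img[:, 5:] = 100.0
sharp = unsharp_mask(img, k=3, amount=1.0)
```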
Gambling, Tina S; Long, Andrew F
2013-03-01
To explore the experiences of young women with developmental dysplasia of the hip, explicating the impact of peri-acetabular osteotomy surgery and recovery in the short and longer term. Postings of five selected women on an active online message board aimed at women with developmental dysplasia of the hip were analysed. Interest lay in their postings after they had undergone peri-acetabular osteotomy surgery. Data analysis was performed through the approach of interpretive phenomenological analysis. The time span of the postings for the cases ranged from 1 year to 6 years, and the number of postings varied substantially, from 48 to 591. Two major concepts were prominent across participants' accounts. The first concept, 'body image', centred on effects on the women's self-esteem and body image. The second, 'the long road to recovery', highlighted 'the emotional and physical battle of learning to walk' and concerns with 'saving my joints'. Developmental dysplasia of the hip potentially provides a critical case for exploring how a disability can affect confidence, self-esteem and body image. Recovery from this condition requires enormous effort, resilience and commitment from the women.
Plenoptic Imager for Automated Surface Navigation
NASA Technical Reports Server (NTRS)
Zollar, Byron; Milder, Andrew; Mayo, Michael
2010-01-01
An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprising a main aperture lens, a mechanical structure that holds an array of micro lenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the micro lenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.
Shanmugam, Akshaya; Usmani, Mohammad; Mayberry, Addison; Perkins, David L; Holcomb, Daniel E
2018-01-01
Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples.
Fischer, Michael A; Leidner, Bertil; Kartalis, Nikolaos; Svensson, Anders; Aspelin, Peter; Albiin, Nils; Brismar, Torkel B
2014-01-01
To assess feasibility and image quality (IQ) of a new post-processing algorithm for retrospective extraction of an optimised multi-phase CT (time-resolved CT) of the liver from volumetric perfusion imaging. Sixteen patients underwent clinically indicated perfusion CT using 4D spiral mode of dual-source 128-slice CT. Three image sets were reconstructed: motion-corrected and noise-reduced (MCNR) images derived from 4D raw data; maximum and average intensity projections (time MIP/AVG) of the arterial/portal/portal-venous phases and all phases (total MIP/AVG) derived from retrospective fusion of dedicated MCNR split series. Two readers assessed the IQ, detection rate and evaluation time; one reader assessed image noise and lesion-to-liver contrast. Time-resolved CT was feasible in all patients. Each post-processing step yielded a significant reduction of image noise and evaluation time, maintaining lesion-to-liver contrast. Time MIPs/AVGs showed the highest overall IQ without relevant motion artefacts and best depiction of arterial and portal/portal-venous phases respectively. Time MIPs demonstrated a significantly higher detection rate for arterialised liver lesions than total MIPs/AVGs and the raw data series. Time-resolved CT allows data from volumetric perfusion imaging to be condensed into an optimised multi-phase liver CT, yielding a superior IQ and higher detection rate for arterialised liver lesions than the raw data series. • Four-dimensional computed tomography is limited by motion artefacts and poor image quality. • Time-resolved-CT facilitates 4D-CT data visualisation, segmentation and analysis by condensing raw data. • Time-resolved CT demonstrates better image quality than raw data images. • Time-resolved CT improves detection of arterialised liver lesions in cirrhotic patients.
SU-D-209-03: Radiation Dose Reduction Using Real-Time Image Processing in Interventional Radiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanal, K; Moirano, J; Zamora, D
Purpose: To characterize changes in radiation dose after introducing a new real-time image processing technology in interventional radiology systems. Methods: Interventional radiology (IR) procedures are increasingly complex, at times requiring substantial time and radiation dose. The risk of inducing tissue reactions as well as long-term stochastic effects such as radiation-induced cancer is not trivial. To reduce this risk, IR systems are increasingly equipped with dose reduction technologies. Recently, ClarityIQ (Philips Healthcare) technology was installed in our existing neuroradiology IR (NIR) and vascular IR (VIR) suites, respectively. ClarityIQ includes real-time image processing that reduces noise/artifacts, enhances images, and sharpens edges while also reducing radiation dose rates. We reviewed 412 NIR (175 pre- and 237 post-ClarityIQ) procedures and 329 VIR (156 pre- and 173 post-ClarityIQ) procedures performed at our institution pre- and post-ClarityIQ implementation. NIR procedures were primarily classified as interventional or diagnostic. VIR procedures included drain port, drain placement, tube change, mesenteric, and implanted venous procedures. Air Kerma (AK in units of mGy) was documented for all the cases using a commercial radiation exposure management system. Results: When considering all NIR procedures, median AK decreased from 1194 mGy to 561 mGy. When considering all VIR procedures, median AK decreased from 49 to 14 mGy. Both NIR and VIR exhibited a decrease in AK exceeding 50% after ClarityIQ implementation, a statistically significant (p<0.05) difference. Of the 5 most common VIR procedures, all median AK values decreased, but significance (p<0.05) was only reached in venous access (N=53), angio mesenteric (N=41), and drain placement procedures (N=31). Conclusion: ClarityIQ can reduce dose significantly for both NIR and VIR procedures. Image quality was not assessed in conjunction with the dose reduction.
Medical image analysis with artificial neural networks.
Jiang, J; Trundle, P; Ren, J
2010-12-01
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.
Near Real-Time Image Reconstruction
NASA Astrophysics Data System (ADS)
Denker, C.; Yang, G.; Wang, H.
2001-08-01
In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieving diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
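The frame-selection step can be illustrated with a simple sharpness proxy; the speckle-masking reconstruction itself (bispectrum phase recovery) is far more involved and is not shown. The RMS-contrast criterion and the kept fraction below are assumptions for illustration.

```python
import numpy as np

def frame_select(frames, keep_frac=0.1):
    """Rank short-exposure frames by RMS intensity contrast (a common
    sharpness proxy for solar granulation) and keep the best fraction
    for the subsequent speckle reconstruction."""
    contrasts = [f.std() / f.mean() for f in frames]
    order = np.argsort(contrasts)[::-1]          # best (highest contrast) first
    n_keep = max(1, int(len(frames) * keep_frac))
    return [frames[i] for i in order[:n_keep]]

# Usage: a high-contrast frame should win over low-contrast ones.
rng = np.random.default_rng(1)
sharp = 100 + 20 * rng.standard_normal((8, 8))
blurry = 100 + 2 * rng.standard_normal((8, 8))
best = frame_select([blurry, sharp, blurry], keep_frac=0.34)
```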
Using modern imaging techniques to old HST data: a summary of the ALICE program.
NASA Astrophysics Data System (ADS)
Choquet, Elodie; Soummer, Remi; Perrin, Marshall; Pueyo, Laurent; Hagan, James Brendan; Zimmerman, Neil; Debes, John Henry; Schneider, Glenn; Ren, Bin; Milli, Julien; Wolff, Schuyler; Stark, Chris; Mawet, Dimitri; Golimowski, David A.; Hines, Dean C.; Roberge, Aki; Serabyn, Eugene
2018-01-01
Direct imaging of extrasolar systems is a powerful technique to study the physical properties of exoplanetary systems and understand their formation and evolution mechanisms. The detection and characterization of these objects are challenged by their high contrast with their host star. Several observing strategies and post-processing algorithms have been developed for ground-based high-contrast imaging instruments, enabling the discovery of directly-imaged and spectrally-characterized exoplanets. The Hubble Space Telescope (HST), a pioneer in directly imaging extrasolar systems, has however often been limited to the detection of bright debris disk systems, with sensitivity limited by the difficulty of implementing an optimal PSF subtraction strategy, which is readily offered on ground-based telescopes in pupil tracking mode. The Archival Legacy Investigations of Circumstellar Environments (ALICE) program is a consistent re-analysis of the 10-year-old coronagraphic archive of HST's NICMOS infrared imager. Using post-processing methods developed for ground-based observations, we used the whole archive to calibrate PSF temporal variations and improve NICMOS's detection limits. We have now delivered ALICE-reprocessed science products for the whole NICMOS archival data back to the community. These science products, as well as the ALICE pipeline, were used to prototype the JWST coronagraphic data and reduction pipeline. The ALICE program has enabled the detection of 10 faint debris disk systems never imaged before in the near-infrared and several substellar companion candidates, all of which we are in the process of characterizing through follow-up observations with both ground-based facilities and HST-STIS coronagraphy. In this publication, we provide a summary of the results of the ALICE program, advertise its science products and discuss the prospects of the program.
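The core of PSF subtraction can be illustrated with a classical reference-library scheme; ALICE itself uses KLIP-style (PCA-based) subtraction, so this least-squares median-reference sketch is a simplified stand-in, with all numbers synthetic.

```python
import numpy as np

def subtract_reference_psf(target, library):
    """Classical reference-star PSF subtraction: scale the median of a
    reference PSF library to the target flux (least-squares match) and
    subtract, leaving faint off-axis sources in the residual."""
    ref = np.median(library, axis=0)
    scale = np.sum(target * ref) / np.sum(ref * ref)
    return target - scale * ref

# Synthetic demo: a Gaussian stellar PSF plus a faint companion.
yy, xx = np.mgrid[0:32, 0:32]
psf = np.exp(-((xx - 16)**2 + (yy - 16)**2) / 8.0)
library = np.stack([psf * s for s in (0.9, 1.0, 1.1)])   # reference frames
companion = 0.05 * np.exp(-((xx - 24)**2 + (yy - 16)**2) / 2.0)
residual = subtract_reference_psf(psf + companion, library)
```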
NASA Astrophysics Data System (ADS)
Daye, Dania; Bobo, Ezra; Baumann, Bethany; Ioannou, Antonios; Conant, Emily F.; Maidment, Andrew D. A.; Kontos, Despina
2011-03-01
Mammographic parenchymal texture patterns have been shown to be related to breast cancer risk. Yet, little is known about the biological basis underlying this association. Here, we investigate the potential of mammographic parenchymal texture patterns as an inherent phenotypic imaging marker of endogenous hormonal exposure of the breast tissue. Digital mammographic (DM) images in the cranio-caudal (CC) view of the unaffected breast from 138 women diagnosed with unilateral breast cancer were retrospectively analyzed. Menopause status was used as a surrogate marker of endogenous hormonal activity. Retroareolar 2.5 cm² ROIs were segmented from the post-processed DM images using an automated algorithm. Parenchymal texture features of skewness, coarseness, contrast, energy, homogeneity, grey-level spatial correlation, and fractal dimension were computed. Receiver operating characteristic (ROC) curve analysis was performed to evaluate feature classification performance in distinguishing between 72 pre- and 66 post-menopausal women. Logistic regression was performed to assess the independent effect of each texture feature in predicting menopause status. ROC analysis showed that texture features have inherent capacity to distinguish between pre- and post-menopausal statuses (AUC>0.5, p<0.05). Logistic regression including all texture features yielded an ROC curve with an AUC of 0.76. Addition of age at menarche, ethnicity, contraception use and hormonal replacement therapy (HRT) use led to a modest model improvement (AUC=0.78) while texture features maintained a significant contribution (p<0.05). The observed differences in parenchymal texture features between pre- and post-menopausal women suggest that mammographic texture can potentially serve as a surrogate imaging marker of endogenous hormonal activity.
Palm, Christoph; Axer, Markus; Gräßel, David; Dammers, Jürgen; Lindemeyer, Johannes; Zilles, Karl; Pietrzyk, Uwe; Amunts, Katrin
2009-01-01
Polarised light imaging (PLI) utilises the birefringence of the myelin sheaths in order to visualise the orientation of nerve fibres in microtome sections of adult human post-mortem brains at ultra-high spatial resolution. The preparation of post-mortem brains for PLI involves fixation, freezing and cutting into 100-μm-thick sections. Hence, geometrical distortions of histological sections are inevitable and have to be removed for 3D reconstruction and subsequent fibre tracking. Here we present a processing pipeline for 3D reconstruction of these sections using PLI-derived multimodal images of post-mortem brains. Blockface images of the brains were obtained during cutting; they serve as reference data for alignment and elimination of distortion artefacts. In addition to the spatial image transformation, fibre orientation vectors were reoriented using the transformation fields, which consider both affine and subsequent non-linear registration. The application of this registration and reorientation approach results in a smooth fibre vector field, which reflects brain morphology. PLI combined with 3D reconstruction and fibre tracking is a powerful tool for human brain mapping. It can also serve as an independent method for evaluating in vivo fibre tractography. PMID:20461231
NASA Astrophysics Data System (ADS)
Made, Pertiwi Jaya Ni; Miura, Fusanori; Besse Rimba, A.
2016-06-01
Large-scale earthquakes and tsunamis affect thousands of people and cause serious damage worldwide every year. Quick observation of disaster damage is extremely important for planning effective rescue operations. In the past, acquiring damage information was limited to field surveys or aerial photographs. In the last decade, space-borne images have been used in many disaster studies, such as tsunami damage detection. In this study, SAR data from ALOS/PALSAR satellite images were used to estimate tsunami damage in the form of inundation areas in Talcahuano, the area near the epicentre of the 2010 Chile earthquake. The image processing consisted of three stages, i.e. pre-processing, analysis processing, and post-processing, and was conducted using multi-temporal images from before and after the disaster. In the analysis processing, inundation areas were extracted through masking. This consisted of water masking, using a high-resolution optical image from ALOS/AVNIR-2, and elevation masking based on the inundation height, using an ASTER-GDEM DEM image. The resulting area was 8.77 km². It showed a good result and corresponded well to the inundation map of Talcahuano. A future study in another area is needed to strengthen the estimation method.
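The two-mask extraction described above can be sketched as follows. The change-detection threshold, maximum run-up height, and pixel size are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def inundation_mask(sar_change, water_mask, dem, max_runup_m=10.0):
    """Combine a SAR change map with two masks: exclude permanent water
    (from the optical image) and terrain above a plausible inundation
    height (from the DEM). The 0.5 change threshold is an assumption."""
    candidate = sar_change > 0.5
    return candidate & ~water_mask & (dem <= max_runup_m)

def area_km2(mask, pixel_m=12.5):
    """Total masked area in km^2 for a given square pixel size in metres."""
    return mask.sum() * (pixel_m ** 2) / 1e6

# Tiny synthetic scene: 4 changed pixels, one on water, one on high ground.
change = np.zeros((4, 4)); change[:2, :2] = 1.0
water = np.zeros((4, 4), bool); water[0, 0] = True
dem = np.full((4, 4), 5.0); dem[1, 1] = 50.0
flooded = inundation_mask(change, water, dem)
```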
NASA Astrophysics Data System (ADS)
Tonbul, H.; Kavzoglu, T.
2017-12-01
Forest fires are among the most important natural disasters, damaging natural habitats and human life. Mapping fire-damaged forests is crucial for assessing the ecological effects of fire, monitoring land cover changes, and modeling the atmospheric and climatic effects of fire. In this context, satellite data give users a great advantage by enabling rapid detection of burning areas and determination of fire damage severity. Mediterranean ecosystems in particular provide suitable conditions for forest fires. In this study, the burnt areas of the forest fire that occurred in the Pedrógão Grande region of Portugal in June 2017 were determined using Landsat 8 OLI and Sentinel-2A satellite images. The Pedrógão Grande fire was one of the largest fires in Portugal: more than 60 people were killed and thousands of hectares were ravaged. Four pairs of pre-fire and post-fire top-of-atmosphere (TOA) and atmospherically corrected images were utilized. The red and near-infrared (NIR) spectral bands of the pre-fire and post-fire images were stacked, and the multiresolution segmentation algorithm was applied. In the segmentation process, image objects were generated with estimated optimum homogeneity criteria. Using eCognition software, rule sets were created to distinguish unburned areas from burned areas. In constructing the rule sets, NDVI threshold values were determined pre- and post-fire, and areas of vegetation loss were detected using the NDVI difference image. The results showed that both satellite images yielded successful burned-area discrimination with a very high degree of consistency in terms of spatial overlap and total burned area (over 93%). Object-based image analysis (OBIA) was found to be highly effective in delineating burnt areas.
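The NDVI-difference step at the heart of the rule sets can be sketched at the pixel level (the study works on segmented objects in eCognition; the 0.2 drop threshold here is illustrative, not the study's calibrated value):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized difference vegetation index from red and NIR bands."""
    red, nir = red.astype(float), nir.astype(float)
    return (nir - red) / (nir + red + 1e-12)   # epsilon avoids divide-by-zero

def burned_mask(red_pre, nir_pre, red_post, nir_post, dthresh=0.2):
    """Flag pixels whose NDVI dropped by more than dthresh after the
    fire, i.e. the NDVI difference image thresholded for vegetation loss."""
    return (ndvi(red_pre, nir_pre) - ndvi(red_post, nir_post)) > dthresh

# Pixel 0 loses vegetation after the fire; pixel 1 is unchanged.
red_pre = np.array([[0.1, 0.1]]); nir_pre = np.array([[0.5, 0.5]])
red_post = np.array([[0.4, 0.1]]); nir_post = np.array([[0.2, 0.5]])
mask = burned_mask(red_pre, nir_pre, red_post, nir_post)
```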
The importance of ray pathlengths when measuring objects in maximum intensity projection images.
Schreiner, S; Dawant, B M; Paschal, C B; Galloway, R L
1996-01-01
It is important to understand any process that affects medical data. Once the data have changed from the original form, one must consider the possibility that the information contained in the data has also changed. In general, false negative and false positive diagnoses caused by this post-processing must be minimized. Medical imaging is one area in which post-processing is commonly performed, but there is often little or no discussion of how these algorithms affect the data. This study uncovers some interesting properties of maximum intensity projection (MIP) algorithms, which are commonly used in the post-processing of magnetic resonance (MR) and computed tomography (CT) angiographic data. The apparent width of vessels and the extent of malformations such as aneurysms are of interest to clinicians. This study shows how MIP algorithms interact with the shape of the object being projected. MIPs can make objects appear thinner in the projection than in the original data set and can also alter the shape of the object's profile as seen in the original data. These effects have consequences for width-measuring algorithms, which are discussed. Each projected intensity depends upon the pathlength of the ray from which the projected pixel arises. The morphology (shape and intensity profile) of an object changes the pathlength that each ray experiences. This is termed the pathlength effect. In order to demonstrate the pathlength effect, simple computer models of an imaged vessel were created. Additionally, a static MR phantom verified that the derived equation for the projection-plane probability density function (pdf) predicts the projection-plane intensities well (R²=0.96). Finally, examples of projections through in vivo MR angiography and CT angiography data are presented.
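The pathlength effect can be demonstrated numerically in the spirit of the paper's vessel models: rays through the center of a noisy vessel sample more voxels, so their maximum is biased upward relative to short edge rays, distorting the projected profile. Intensity, noise level, and diameter below are arbitrary, and this sketch is not the authors' model.

```python
import numpy as np

def mip_profile(diameter_vox, intensity=100.0, noise_sd=10.0,
                n_trials=2000, seed=0):
    """Mean MIP value vs lateral position for rays crossing a circular
    vessel cross-section of constant intensity plus Gaussian noise.
    Longer rays take the max over more samples, so central rays project
    brighter than edge rays (the pathlength effect)."""
    rng = np.random.default_rng(seed)
    radius = diameter_vox / 2.0
    offsets = np.arange(-radius, radius + 1)   # ray positions across the vessel
    profile = []
    for x in offsets:
        # chord length of the ray through the circle, at least 1 voxel
        pathlen = max(1, int(round(2 * np.sqrt(max(radius**2 - x**2, 0.25)))))
        samples = intensity + noise_sd * rng.standard_normal((n_trials, pathlen))
        profile.append(samples.max(axis=1).mean())
    return np.array(profile)

profile = mip_profile(9)
```

The central value exceeds the edge values even though the underlying vessel intensity is uniform, which is exactly why MIP-based width measurements need care.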
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nosrati, R; Sunnybrook Health Sciences Centre, Toronto, Ontario; Soliman, A
Purpose: This study aims at developing an MRI-only workflow for post-implant dosimetry of prostate LDR brachytherapy seeds. The specific goal here is to develop a post-processing algorithm to produce positive contrast for the seeds and prostatic calcifications and differentiate between them on MR images. Methods: An agar-based phantom incorporating four dummy seeds (I-125) and five calcifications of different sizes (from sheep cortical bone) was constructed. Seeds were placed arbitrarily in the coronal plane. The phantom was scanned with a 3T Philips Achieva MR scanner using an 8-channel head coil array. Multi-echo turbo spin echo (ME-TSE) and multi-echo gradient recalled echo (ME-GRE) sequences were acquired. Due to minimal susceptibility artifacts around seeds, the ME-GRE sequence (flip angle=15; TR/TE=20/2.3/2.3; resolution=0.7×0.7×2 mm³) was further processed. The induced field inhomogeneity due to the presence of titanium-encapsulated seeds was corrected using a B0 field map. The B0 map was calculated from the ME-GRE sequence by computing the phase difference at two different echo times. Initially, the product of the first echo and the B0 map was calculated. The features corresponding to the seeds were then extracted in three steps: 1) the edge pixels were isolated using the "Prewitt" operator; 2) the Hough transform was employed to detect ellipses approximately matching the dimensions of the seeds; and 3) at the position and orientation of the detected ellipses, an ellipse was drawn on the B0-corrected image. Results: The proposed B0-correction process produced positive contrast for the seeds and calcifications. The Hough transform based on the Prewitt edge operator successfully identified all the seeds according to their ellipsoidal shape and dimensions in the edge image. Conclusion: The proposed post-processing algorithm successfully visualized the seeds and calcifications with positive contrast and differentiated between them according to their shapes.
Further assessments on more realistic phantoms and patient studies are required to validate the outcome.
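The phase-difference field-map computation described above reduces to a simple relation, Δφ = 2π·Δf·(TE2−TE1). The sketch below assumes echoes at 2.3 and 4.6 ms (inferred from the stated TR/TE, an assumption) and omits phase unwrapping beyond wrapping to (−π, π].

```python
import numpy as np

def b0_map(phase1, phase2, te1_ms, te2_ms):
    """Off-resonance field map (Hz) from the phase difference of two
    gradient-echo images. Full spatial phase unwrapping is omitted;
    np.angle only wraps the difference into (-pi, pi]."""
    dphi = np.angle(np.exp(1j * (phase2 - phase1)))
    dte_s = (te2_ms - te1_ms) / 1000.0
    return dphi / (2 * np.pi * dte_s)

# Round-trip check: simulate a 50 Hz off-resonance voxel.
df_true = 50.0
te1, te2 = 2.3, 4.6                      # ms, assumed echo times
p1 = 2 * np.pi * df_true * te1 / 1000.0  # phase accrued at TE1
p2 = 2 * np.pi * df_true * te2 / 1000.0  # phase accrued at TE2
estimate = b0_map(np.array([p1]), np.array([p2]), te1, te2)
```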
Ovesný, Martin; Křížek, Pavel; Borkovec, Josef; Švindrych, Zdeněk; Hagen, Guy M.
2014-01-01
Summary: ThunderSTORM is an open-source, interactive and modular plug-in for ImageJ designed for automated processing, analysis and visualization of data acquired by single-molecule localization microscopy methods such as photo-activated localization microscopy and stochastic optical reconstruction microscopy. ThunderSTORM offers an extensive collection of processing and post-processing methods so that users can easily adapt the process of analysis to their data. ThunderSTORM also offers a set of tools for creation of simulated data and quantitative performance evaluation of localization algorithms using Monte Carlo simulations. Availability and implementation: ThunderSTORM and the online documentation are both freely accessible at https://code.google.com/p/thunder-storm/ Contact: guy.hagen@lf1.cuni.cz Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24771516
Classifying magnetic resonance image modalities with convolutional neural networks
NASA Astrophysics Data System (ADS)
Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis
2018-02-01
Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)-based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre- vs post-contrast T1, (3) identify pre- vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, J; Son, J; Arun, B
Purpose: To develop and demonstrate a short breast (sb) MRI protocol that acquires both T2-weighted and dynamic contrast-enhanced T1-weighted images in approximately ten minutes. Methods: The sb-MRI protocol consists of two novel pulse sequences. The first is a flexible fast spin-echo triple-echo Dixon (FTED) sequence for high-resolution fat-suppressed T2-weighted imaging, and the second is a 3D fast dual-echo spoiled gradient sequence (FLEX) for volumetric fat-suppressed T1-weighted imaging before and after contrast agent injection. The flexible FTED sequence replaces each single readout during every echo-spacing period of FSE with three fast-switching bipolar readouts to produce three raw images in a single acquisition. These three raw images are then post-processed using a Dixon algorithm to generate separate water-only and fat-only images. The FLEX sequence acquires two echoes using a dual-echo readout after each RF excitation, and the corresponding images are post-processed using a similar Dixon algorithm to yield water-only and fat-only images. The sb-MRI protocol was implemented on a 3T MRI scanner and used for patients who had undergone concurrent clinical MRI for breast cancer screening. Results: With the same scan parameters (e.g., spatial coverage, field of view, spatial and temporal resolution) as the clinical protocol, the total scan time of the sb-MRI protocol (including the localizer, bilateral T2-weighted, and dynamic contrast-enhanced T1-weighted images) was 11 minutes. In comparison, the clinical breast MRI protocol took 43 minutes. Uniform fat suppression and high image quality were consistently achieved by sb-MRI. Conclusion: We demonstrated that an sb-MRI protocol comprising both T2-weighted and dynamic contrast-enhanced T1-weighted images can be performed in approximately ten minutes. The spatial and temporal resolution of the images easily satisfies the current breast MRI accreditation guidelines of the American College of Radiology.
The protocol has the potential to make breast MRI more widely accessible and better tolerated by patients. JMA is the inventor of United States patents that are owned by the University of Texas Board of Regents and currently licensed to GE Healthcare and Siemens GmbH.
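The Dixon water-fat arithmetic underlying both sequences reduces, in the basic two-point case, to sums and differences of in-phase and opposed-phase images. The FTED/FLEX sequences use three or dual echoes with phase-error correction, which this sketch deliberately omits.

```python
import numpy as np

def two_point_dixon(in_phase, out_phase):
    """Basic two-point Dixon separation: with water and fat signals in
    phase at one echo and opposed at the other,
        IP = W + F,  OP = W - F  =>  W = (IP+OP)/2,  F = (IP-OP)/2.
    Real sequences also estimate and remove B0-induced phase errors."""
    water = 0.5 * (in_phase + out_phase)
    fat = 0.5 * (in_phase - out_phase)
    return water, fat

# A voxel with water signal 3 and fat signal 1.
water, fat = two_point_dixon(np.array([4.0]), np.array([2.0]))
```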
An all-optronic synthetic aperture lidar
NASA Astrophysics Data System (ADS)
Turbide, Simon; Marchese, Linda; Terroux, Marc; Babin, François; Bergeron, Alain
2012-09-01
Synthetic Aperture Radar (SAR) is a mature technology that overcomes the diffraction limit of an imaging system's real aperture by taking advantage of the platform motion to coherently sample multiple sections of an aperture much larger than the physical one. Synthetic Aperture Lidar (SAL) is the extension of SAR to much shorter wavelengths (1.5 μm vs 5 cm). This new technology can offer higher-resolution images in daytime or nighttime as well as in certain adverse conditions. It could be a powerful tool for Earth monitoring (ship detection, frontier surveillance, ocean monitoring) from aircraft, unmanned aerial vehicle (UAV) or spatial platforms. A continuous flow of high-resolution images covering large areas would, however, produce a large amount of data involving a high cost in terms of post-processing computational time. This paper presents a laboratory demonstration of a SAL system complete with image reconstruction based on optronic processing. This differs from the more traditional digital approach by its real-time processing capability. The SAL system is discussed and images obtained from a non-metallic diffuse target at ranges up to 3 m are shown, these images being processed by a real-time optronic SAR processor originally designed to reconstruct SAR images from ENVISAT/ASAR data.
Stereo matching and view interpolation based on image domain triangulation.
Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce
2013-09-01
This paper presents a new approach to the stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes, which require some kind of post-processing procedure to fill holes.
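The piecewise linear disparity map described above amounts to barycentric interpolation of the per-vertex disparities inside each triangle; a sketch with hypothetical vertex coordinates and disparities:

```python
import numpy as np

def barycentric(p, a, b, c):
    # Barycentric coordinates of point p with respect to triangle (a, b, c).
    T = np.array([[a[0] - c[0], b[0] - c[0]],
                  [a[1] - c[1], b[1] - c[1]]])
    w = np.linalg.solve(T, np.array([p[0] - c[0], p[1] - c[1]]))
    return w[0], w[1], 1.0 - w[0] - w[1]

# Triangle vertices (x, y) and per-vertex disparities from the refinement stage.
a, b, c = (0.0, 0.0), (4.0, 0.0), (0.0, 4.0)
d_a, d_b, d_c = 10.0, 14.0, 12.0

u, v, w = barycentric((1.0, 1.0), a, b, c)
disparity = u * d_a + v * d_b + w * d_c   # linear interpolation inside the triangle
```

Rendering the triangle with the GPU's built-in attribute interpolation computes exactly this for every covered pixel, which is why view synthesis becomes a plain mesh-rendering pass.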
Container Surface Evaluation by Function Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wendelberger, James G.
Container images are analyzed for specific surface features such as pits, cracks, and corrosion. The detection of these features is confounded by complicating features, including shape/curvature, welds, edges, scratches, and foreign objects, among others. A method is provided to discriminate between the various features. The method consists of estimating the image background, computing a residual image, and post-processing the residual to determine the features present. The methodology is not finalized, but it demonstrates the feasibility of determining the kind and size of the features present.
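One way to realize the background-estimation/residual idea (the report does not fix a specific basis, so the polynomial surface and robust threshold below are assumptions) is a least-squares surface fit followed by thresholding of the residual:

```python
import numpy as np

rng = np.random.default_rng(0)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]

# Smooth curved "container" background plus noise, with a small dark pit.
image = 100.0 + 0.05 * x + 0.03 * y + 0.001 * x * y + rng.normal(0.0, 1.0, (ny, nx))
image[30:33, 30:33] -= 25.0

# Function estimation: least-squares fit of a low-order polynomial surface.
A = np.column_stack([np.ones(x.size), x.ravel(), y.ravel(), (x * y).ravel()])
coef, *_ = np.linalg.lstsq(A, image.ravel(), rcond=None)
background = (A @ coef).reshape(ny, nx)

# Residual image and robust thresholding to flag candidate features.
residual = image - background
sigma = np.median(np.abs(residual)) / 0.6745   # robust noise estimate (MAD)
mask = np.abs(residual) > 5.0 * sigma
```

The curvature and weld features mentioned in the report would require a richer basis (or separate models), but the fit/residual/threshold pipeline is the same.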
Enhanced FIB-SEM systems for large-volume 3D imaging.
Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F
2017-05-13
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generate continuously imaged volumes >10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.
Feature Visibility Limits in the Non-Linear Enhancement of Turbid Images
NASA Technical Reports Server (NTRS)
Jobson, Daniel J.; Rahman, Zia-ur; Woodell, Glenn A.
2003-01-01
The advancement of non-linear processing methods for generic automatic clarification of turbid imagery has led us from extensions of entirely passive multiscale Retinex processing to a new framework of active measurement and control of the enhancement process called the Visual Servo. In the process of testing this new non-linear computational scheme, we have identified that feature visibility limits in the post-enhancement image now simplify to a single signal-to-noise figure of merit: a feature is visible if the feature-background signal difference is greater than the RMS noise level. In other words, a signal-to-noise limit of approximately unity constitutes a lower limit on feature visibility.
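The visibility criterion reduces to a one-line test: a feature is visible when the feature-background signal difference exceeds the RMS noise, i.e. a signal-to-noise figure of merit of about unity. A sketch with made-up signal levels:

```python
# Feature visibility limit after enhancement: visible iff |feature - background| > RMS noise.
def is_visible(feature_level, background_level, rms_noise):
    return abs(feature_level - background_level) > rms_noise

# Hypothetical post-enhancement levels: difference of 2 counts against 1.5 counts RMS noise.
snr = abs(52.0 - 50.0) / 1.5
visible = is_visible(52.0, 50.0, 1.5)
```

A difference of 0.4 counts against the same noise floor would fall below the unity-SNR limit and be invisible.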
Social computing for image matching
Rivas, Alberto; Sánchez-Torres, Ramiro; Rodríguez, Sara
2018-01-01
One of the main technological trends of the last five years is mass data analysis. This trend is due in part to the emergence of concepts such as social networks, which generate a large volume of data that can provide added value through its analysis. This article is focused on a business- and employment-oriented social network. More specifically, it focuses on the analysis of information provided by different users in image form. The images are analyzed to detect whether other existing users have posted or talked about the same image, even if the image has undergone some type of modification such as watermarks or color filters. This makes it possible to establish new connections among unknown users by detecting what they are posting or whether they are talking about the same images. The proposed solution consists of an image matching algorithm, which is based on the rapid calculation and comparison of hashes. However, the component responsible for undoing possible image transformations is computationally expensive. As a result, the image matching process is supported by a distributed forecasting system that enables or disables nodes to serve all the possible requests. The proposed system has shown promising results for matching modified images, especially when compared with other existing systems. PMID:29813082
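The hash-comparison core of such a matcher can be sketched with a simple average hash and Hamming distance (the article does not specify its hash function, so this is an illustrative stand-in): perceptually similar images produce hashes a few bits apart, while unrelated images differ in roughly half of their bits.

```python
import numpy as np

def average_hash(img, hash_size=8):
    # Downsample by block averaging, then threshold at the mean: a 64-bit perceptual hash.
    h, w = img.shape
    small = img[:h - h % hash_size, :w - w % hash_size]
    small = small.reshape(hash_size, h // hash_size,
                          hash_size, w // hash_size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(h1, h2):
    # Number of differing hash bits; small distance => likely the same image.
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(1)
original = rng.integers(0, 256, (64, 64)).astype(float)
filtered = np.clip(original * 1.2 + 10, 0, 255)      # mild "color filter"-style change
other    = rng.integers(0, 256, (64, 64)).astype(float)

distance = hamming(average_hash(original), average_hash(filtered))
```

A threshold on the Hamming distance then decides whether two posts show the same image.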
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandoval, D; Mlady, G; Selwyn, R
Purpose: To bring together radiologists, technologists, and physicists to utilize post-processing techniques in digital radiography (DR) in order to optimize image acquisition and improve image quality. Methods: Sub-optimal images acquired on a new General Electric (GE) DR system were flagged for follow-up by radiologists and reviewed by technologists and medical physicists. Various exam types from adult musculoskeletal (n=35), adult chest (n=4), and pediatric (n=7) studies were chosen for review, for a total of 673 images. These images were processed using five customized algorithms provided by GE. An image score sheet was created allowing the radiologist to assign a numeric score to each of the processed images, which allowed for objective comparison to the original images. Each image was scored on seven properties: 1) overall image look, 2) soft tissue contrast, 3) high contrast, 4) latitude, 5) tissue equalization, 6) edge enhancement, 7) visualization of structures. Additional space allowed for comments not captured in the scoring categories. Radiologists scored the images from 1 to 10, with 1 being non-diagnostic quality and 10 being superior diagnostic quality. Scores for each custom algorithm for each image set were summed, and the algorithm with the highest score for each image set was then set as the default processing. Results: The number of images placed into the PACS “QC folder” for image-processing reasons decreased. Overall feedback from radiologists was that image quality for these studies had improved. All default processing for these image types was changed to the new algorithm. Conclusion: This work is an example of the collaboration between radiologists, technologists, and physicists at the University of New Mexico to add value to the radiology department.
The significant amount of work required to prepare the processing algorithms and to reprocess and score the images was eagerly taken on by all team members in order to produce better quality images and improve patient care.
Glemser, Philip A; Pfleiderer, Michael; Heger, Anna; Tremper, Jan; Krauskopf, Astrid; Schlemmer, Heinz-Peter; Yen, Kathrin; Simons, David
2017-03-01
The aim of this multi-reader feasibility study was to evaluate new post-processing CT imaging tools for rib fracture assessment in forensic cases by analyzing detection time and diagnostic accuracy. Thirty autopsy cases (20 with and 10 without rib fractures at autopsy) were randomly selected and included in this study. All cases received a native whole-body CT scan prior to the autopsy procedure, which included dissection and careful evaluation of each rib. In addition to standard transverse sections (modality A), CT images were subjected to a reconstruction algorithm to compute axial labelling of the ribs (modality B) as well as "unfolding" visualizations of the rib cage (modality C, "eagle tool"). Three radiologists with different levels of clinical and forensic experience, blinded to the autopsy results, evaluated all cases in random order of modality and case. Each reader's rib fracture assessment was evaluated against autopsy and a CT consensus read as the radiologic reference. A detailed evaluation of relevant test parameters revealed better agreement with the CT consensus read than with the autopsy. Modality C was significantly the quickest modality for rib fracture detection, despite slightly reduced statistical test parameters compared to modalities A and B. Modern CT post-processing software is able to shorten reading time and to increase sensitivity and specificity compared to standard autopsy alone. The easy-to-use eagle tool is suited for an initial rib fracture screening prior to autopsy and can therefore be beneficial for forensic pathologists.
Saba, Luca; Atzeni, Matteo; Ribuffo, Diego; Mallarini, Giorgio; Suri, Jasjit S
2012-08-01
Our purpose was to compare two post-processing techniques, Maximum-Intensity-Projection (MIP) and Volume Rendering (VR), for the study of perforator arteries. Thirty patients who underwent Multi-Detector-Row CT Angiography (MDCTA) between February 2010 and May 2010 were retrospectively analyzed. For each patient and for each reconstruction method, the image quality was evaluated and the inter- and intra-observer agreement was calculated according to the Cohen statistic. The Hounsfield Unit (HU) value in the common femoral artery was quantified and the correlation (Pearson statistic) between image quality and HU value was explored. The Pearson r between the right and left common femoral artery was excellent (r=0.955). The highest image quality score was obtained using MIP for both observers (total value of 75, with a mean value of 2.67, for observer 1; total value of 79, with a mean value of 2.82, for observer 2). The highest agreement between the two observers was obtained using the MIP protocol, with a Cohen kappa value of 0.856. The ROC area under the curve (Az) for VR was 0.786 (0.086 SD; p value=0.0009), whereas the Az for MIP was 0.0928 (0.051 SD; p value=0.0001). MIP showed the best inter- and intra-observer agreement and the highest quality scores, and therefore should be used as the post-processing technique in the analysis of perforating arteries. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
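The Cohen kappa statistic used above for inter-observer agreement corrects the observed agreement for the agreement expected by chance, κ = (p_o − p_e)/(1 − p_e); a minimal sketch with made-up binary quality ratings:

```python
# Cohen's kappa for two observers rating the same images on a binary scale.
def cohen_kappa(a, b, labels):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n                      # observed agreement
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)   # chance agreement
    return (p_obs - p_exp) / (1.0 - p_exp)

# Hypothetical ratings (1 = acceptable quality, 0 = not) for ten images.
obs1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
obs2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
kappa = cohen_kappa(obs1, obs2, labels=[0, 1])
```

Here p_o = 0.8 and p_e = 0.58, so κ ≈ 0.52, i.e. moderate agreement beyond chance.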
Tanaka, R; Nakamura, T
2001-09-01
Myocardial perfusion imaging with 99mTc-labeled agents immediately after reperfusion therapy can underestimate myocardial salvage. It is also conceivable that delayed imaging is useful for assessing the risk area. However, to our knowledge, very few studies have sequentially evaluated these image changes. We conducted 99mTc-tetrofosmin (TF) and 123I-beta-methyl-p-iodophenylpentadecanoic acid (BMIPP) SPECT before and after reperfusion to treat acute myocardial infarction and quantified changes in TF myocardial accumulation and reverse redistribution. Seventeen patients with a first myocardial infarction underwent successful reperfusion. We examined SPECT images obtained at the onset (preimage), those acquired 30 min (early image) and 6 h (delayed image) after TF injection, and images acquired 1, 4, 7, and 20 d after reperfusion (post-1-d, post-4-d, post-7-d, and post-20-d image, respectively). We also examined BMIPP SPECT images after 7 +/- 1.8 d (BMIPP image). Polar maps were divided into 48 segments to calculate percentage uptake, and time course changes in segment numbers below 60% were observed as the abnormal area. Moreover, cardiac function was analyzed by gated TF SPECT 1 and 20 d after reperfusion. In reference to the abnormal area on the early images, the post-1-d image was significantly improved compared with the preimage (P < 0.01), as was the post-7-d image compared with the post-1-d and post-4-d images (P < 0.05, respectively). However, post-20-d and post-7-d images did not significantly differ. Therefore, the improvement in myocardial accumulation reached a plateau 7 d after reperfusion. On the other hand, the abnormal area on the delayed images was significantly greater (P < 0.01) than that on the early images from 4 to 20 d after reperfusion, and this value was essentially constant over that period.
The correlations of the abnormal area between the preimage and the post-7-d delayed image, the preimage and the BMIPP image, and the post-7-d delayed image and the BMIPP image were very close (r = 0.963, r = 0.981, and r = 0.975, respectively). Gated TF SPECT revealed that the left ventricular ejection fraction was not significantly different (P = not significant) between 1 and 20 d after reperfusion, but regional wall motion was significantly different after reperfusion (P < 0.05). These results suggest that the interval between reperfusion therapy and TF SPECT should be 7 d to evaluate the salvage effect and that TF delayed and BMIPP images are both useful in estimation of risk area.
Mayberry, Addison; Perkins, David L.; Holcomb, Daniel E.
2018-01-01
Miniaturized imaging devices have pushed the boundaries of point-of-care imaging, but existing mobile-phone-based imaging systems do not exploit the full potential of smart phones. This work demonstrates the use of simple imaging configurations to deliver superior image quality and the ability to handle a wide range of biological samples. Results presented in this work are from analysis of fluorescent beads under fluorescence imaging, as well as helminth eggs and freshwater mussel larvae under white light imaging. To demonstrate versatility of the systems, real time analysis and post-processing results of the sample count and sample size are presented in both still images and videos of flowing samples. PMID:29509786
An open architecture for medical image workstation
NASA Astrophysics Data System (ADS)
Liang, Liang; Hu, Zhiqiang; Wang, Xiangyun
2005-04-01
Dealing with the difficulties of integrating various medical image viewing and processing technologies with a variety of clinical and departmental information systems, while overcoming the performance constraints of transferring and processing large-scale and ever-increasing image data in a healthcare enterprise, we design and implement a flexible, usable and high-performance architecture for medical image workstations. This architecture is not developed for radiology only, but for workstations in any application environment that may need medical image retrieving, viewing, and post-processing. The architecture contains an infrastructure named Memory PACS and different kinds of image applications built on it. The Memory PACS is in charge of image data caching, pre-fetching and management. It provides image applications with high-speed image data access and very reliable DICOM network I/O. For the image applications, we use dynamic component technology to separate the performance-constrained modules from the flexibility-constrained modules, so that different image viewing or processing technologies can be developed and maintained independently. We also develop a weakly coupled collaboration service, through which these image applications can communicate with each other or with third-party applications. We applied this architecture in developing our product line, and it works well. In our clinical sites, the architecture is applied not only in the Radiology Department, but also in Ultrasound, Surgery, Clinics, and the Consultation Center. Given that each department has its own particular requirements and business routines, along with the fact that they all have different image processing technologies and image display devices, our workstations are still able to maintain high performance and high usability.
Geostationary Lightning Mapper: Lessons Learned from Post Launch Test
NASA Astrophysics Data System (ADS)
Edgington, S.; Tillier, C. E.; Demroff, H.; VanBezooijen, R.; Christian, H. J., Jr.; Bitzer, P. M.
2017-12-01
Pre-launch calibration and algorithm design for the GOES Geostationary Lightning Mapper resulted in a successful and trouble-free on-orbit activation and post-launch test sequence. Within minutes of opening the GLM aperture door on January 4th, 2017, lightning was detected across the entire field of view. During the six-month post-launch test period, numerous processing parameters on board the instrument and in the ground processing algorithms were fine-tuned. Demonstrated on-orbit performance exceeded pre-launch predictions. We provide an overview of the ground calibration sequence, on-orbit tuning of the instrument, and tuning of the ground processing algorithms (event filtering and navigation). We also touch on new insights obtained from analysis of a large and growing archive of raw GLM data, containing 3×10⁸ flash detections derived from over 10¹⁰ full-disk images of the Earth.
Automatic joint alignment measurements in pre- and post-operative long leg standing radiographs.
Goossen, A; Weber, G M; Dries, S P M
2012-01-01
For diagnosis or treatment assessment of knee joint osteoarthritis it is required to measure bone morphometry from radiographic images. We propose a method for automatic measurement of joint alignment from pre-operative as well as post-operative radiographs. In a two step approach we first detect and segment any implants or other artificial objects within the image. We exploit physical characteristics and avoid prior shape information to cope with the vast amount of implant types. Subsequently, we exploit the implant delineations to adapt the initialization and adaptation phase of a dedicated bone segmentation scheme using deformable template models. Implant and bone contours are fused to derive the final joint segmentation and thus the alignment measurements. We evaluated our method on clinical long leg radiographs and compared both the initialization rate, corresponding to the number of images successfully processed by the proposed algorithm, and the accuracy of the alignment measurement. Ground truth has been generated by an experienced orthopedic surgeon. For comparison a second reader reevaluated the measurements. Experiments on two sets of 70 and 120 digital radiographs show that 92% of the joints could be processed automatically and the derived measurements of the automatic method are comparable to a human reader for pre-operative as well as post-operative images with a typical error of 0.7° and correlations of r = 0.82 to r = 0.99 with the ground truth. The proposed method allows deriving objective measures of joint alignment from clinical radiographs. Its accuracy and precision are on par with a human reader for all evaluated measurements.
End-to-end performance analysis using engineering confidence models and a ground processor prototype
NASA Astrophysics Data System (ADS)
Kruse, Klaus-Werner; Sauer, Maximilian; Jäger, Thomas; Herzog, Alexandra; Schmitt, Michael; Huchler, Markus; Wallace, Kotska; Eisinger, Michael; Heliere, Arnaud; Lefebvre, Alain; Maher, Mat; Chang, Mark; Phillips, Tracy; Knight, Steve; de Goeij, Bryan T. G.; van der Knaap, Frits; Van't Hof, Adriaan
2015-10-01
The European Space Agency (ESA) and the Japan Aerospace Exploration Agency (JAXA) are co-operating to develop the EarthCARE satellite mission with the fundamental objective of improving the understanding of the processes involving clouds, aerosols and radiation in the Earth's atmosphere. The EarthCARE Multispectral Imager (MSI) is relatively compact for a space borne imager. As a consequence, the immediate point-spread function (PSF) of the instrument will be mainly determined by the diffraction caused by the relatively small optical aperture. In order to still achieve a high contrast image, de-convolution processing is applied to remove the impact of diffraction on the PSF. A Lucy-Richardson algorithm has been chosen for this purpose. This paper will describe the system setup and the necessary data pre-processing and post-processing steps applied in order to compare the end-to-end image quality with the L1b performance required by the science community.
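The Lucy-Richardson step can be sketched as the standard multiplicative update x ← x · Kᵀ(y / Kx), where K is convolution with the PSF; the tiny PSF and point-source image below are illustrative, not the MSI's measured PSF:

```python
import numpy as np

def convolve2d_same(img, k):
    # Same-size 2D correlation with zero padding (adequate for a symmetric PSF).
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def richardson_lucy(blurred, psf, iterations=50):
    # Multiplicative Lucy-Richardson update: x <- x * K^T(y / Kx).
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(blurred, 0.5)     # flat non-negative initial guess
    for _ in range(iterations):
        denom = np.maximum(convolve2d_same(estimate, psf), 1e-12)
        estimate *= convolve2d_same(blurred / denom, psf_mirror)
    return estimate

psf = np.array([[1.0, 2.0, 1.0], [2.0, 4.0, 2.0], [1.0, 2.0, 1.0]])
psf /= psf.sum()                              # normalized diffraction-like kernel
true_img = np.zeros((16, 16))
true_img[8, 8] = 1.0                          # point source
blurred = convolve2d_same(true_img, psf)      # simulated diffraction-limited image
restored = richardson_lucy(blurred, psf)      # de-convolved estimate
```

The iteration preserves non-negativity and progressively re-concentrates the flux spread by diffraction, which is why it is a common choice for PSF compensation in imagers with small apertures.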
NASA Astrophysics Data System (ADS)
Zou, Liang; Fu, Zhuang; Zhao, YanZheng; Yang, JunYan
2010-07-01
This paper proposes a pipelined electric circuit architecture implemented in an FPGA, a very large scale integrated circuit (VLSI), which efficiently executes the real-time non-uniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPA). Dual Nios II soft-core processors and a DSP with a 64+ core together constitute this image system, with each processor undertaking its own systematic task and coordinating its work with the others. The system on programmable chip (SOPC) in the FPGA works steadily at a global clock frequency of 96 MHz. Adequate timing margin lets the FPGA perform the NUC image pre-processing algorithm with ease, providing a favorable guarantee for the post-processing work in the DSP. In the meantime, this paper presents a hardware (HW) and software (SW) co-design in the FPGA. This architecture thus yields a multiprocessor image processing system and a smart solution that satisfies the system's performance requirements.
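A common form of NUC, and an assumption here since the paper does not detail its algorithm, is the two-point correction, which derives per-pixel gain and offset from two uniform calibration scenes and then applies them to every frame:

```python
import numpy as np

# Fixed-pattern non-uniformity of a toy 4x4 focal plane array (hypothetical values).
rng = np.random.default_rng(2)
gain_true = rng.normal(1.0, 0.05, (4, 4))      # per-pixel responsivity spread
offset_true = rng.normal(0.0, 2.0, (4, 4))     # per-pixel dark offset

def detector(scene):
    # Raw IRFPA response model: each pixel applies its own gain and offset.
    return gain_true * scene + offset_true

# Two uniform calibration scenes (e.g. cold and hot blackbody views).
low = detector(np.full((4, 4), 20.0))
high = detector(np.full((4, 4), 200.0))

gain = (200.0 - 20.0) / (high - low)           # per-pixel correction gain
offset = 20.0 - gain * low                     # per-pixel correction offset

# Corrected frame for an intermediate uniform scene: should be flat at 120.
corrected = gain * detector(np.full((4, 4), 120.0)) + offset
```

In the FPGA pipeline, the per-pixel multiply-accumulate of this correction maps naturally onto one pipeline stage per pixel stream.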
Efficiency of the spectral-spatial classification of hyperspectral imaging data
NASA Astrophysics Data System (ADS)
Borzov, S. M.; Potaturkin, O. I.
2017-01-01
The efficiency of methods of the spectral-spatial classification of similarly looking types of vegetation on the basis of hyperspectral data of remote sensing of the Earth, which take into account local neighborhoods of analyzed image pixels, is experimentally studied. Algorithms that involve spatial pre-processing of the raw data and post-processing of pixel-based spectral classification maps are considered. Results obtained both for a large-size hyperspectral image and for its test fragment with different methods of training set construction are reported. The classification accuracy in all cases is estimated through comparisons of ground-truth data and classification maps formed by using the compared methods. The reasons for the differences in these estimates are discussed.
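A simple example of spatial post-processing of a pixel-based classification map is a majority (mode) filter over each pixel's local neighborhood; the methods compared in the study are more elaborate, so this is only a minimal illustration of the idea:

```python
import numpy as np

def majority_filter(labels, radius=1):
    # Spatial post-processing: replace each pixel's class with the most frequent
    # class in its (2*radius+1) x (2*radius+1) neighborhood.
    padded = np.pad(labels, radius, mode='edge')
    out = np.empty_like(labels)
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.bincount(window.ravel()).argmax()
    return out

noisy = np.ones((5, 5), dtype=int)
noisy[2, 2] = 0          # isolated misclassified pixel in a homogeneous region
smoothed = majority_filter(noisy)
```

Isolated misclassifications inside homogeneous vegetation patches are removed, which is exactly the effect such post-processing aims for on similarly looking classes.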
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity.
Napoletano, Paolo; Piccoli, Flavio; Schettini, Raimondo
2018-01-12
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art.
Neural network post-processing of grayscale optical correlator
NASA Technical Reports Server (NTRS)
Lu, Thomas T; Hughlett, Casey L.; Zhoua, Hanying; Chao, Tien-Hsin; Hanan, Jay C.
2005-01-01
In this paper we present the use of a radial basis function neural network (RBFNN) as a post-processor to assist the optical correlator in identifying objects and rejecting false alarms. Image plane features near the correlation peaks are extracted and fed to the neural network for analysis. The approach is capable of handling a large number of object variations and filter sets. Preliminary experimental results are presented and the performance is analyzed.
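An RBFNN post-processor of this kind can be sketched as Gaussian basis activations followed by linear output weights fit to labeled peak features; the feature values, centers, and width below are hypothetical, not the paper's trained network:

```python
import numpy as np

def rbf_features(x, centers, width):
    # Gaussian radial basis activations of one feature vector against all centers.
    d2 = ((x[None, :] - centers) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy training set: 2-D features near correlation peaks (peak height, sharpness).
targets_feat = np.array([[0.90, 0.80], [0.85, 0.90]])   # true targets
clutter_feat = np.array([[0.40, 0.30], [0.50, 0.20]])   # false alarms
X = np.vstack([targets_feat, clutter_feat])
y = np.array([1.0, 1.0, 0.0, 0.0])

centers, width = X, 0.3                  # one RBF per training sample
Phi = np.vstack([rbf_features(x, centers, width) for x in X])
weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output layer

def score(x):
    # Network output; a peak is accepted as a target if the score exceeds 0.5.
    return float(rbf_features(x, centers, width) @ weights)

score_target = score(np.array([0.88, 0.85]))   # feature vector near the targets
score_false  = score(np.array([0.45, 0.25]))   # feature vector near the clutter
```

With one center per sample the network interpolates the training labels exactly, and nearby unseen peaks score close to their class label.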
Saindane, A M; Qiu, D; Oshinski, J N; Newman, N J; Biousse, V; Bruce, B B; Holbrook, J F; Dale, B M; Zhong, X
2018-02-01
Intracranial pressure is estimated invasively by using lumbar puncture with CSF opening pressure measurement. This study evaluated displacement encoding with stimulated echoes (DENSE), an MR imaging technique highly sensitive to brain motion, as a noninvasive means of assessing intracranial pressure status. Nine patients with suspected elevated intracranial pressure and 9 healthy control subjects were included in this prospective study. Controls underwent DENSE MR imaging through the midsagittal brain. Patients underwent DENSE MR imaging followed immediately by lumbar puncture with opening pressure measurement, CSF removal, closing pressure measurement, and immediate repeat DENSE MR imaging. Phase-reconstructed images were processed producing displacement maps, and pontine displacement was calculated. Patient data were analyzed to determine the effects of measured pressure on pontine displacement. Patient and control data were analyzed to assess the effects of clinical status (pre-lumbar puncture, post-lumbar puncture, or control) on pontine displacement. Patients demonstrated imaging findings suggesting chronically elevated intracranial pressure, whereas healthy control volunteers demonstrated no imaging abnormalities. All patients had elevated opening pressure (median, 36.0 cm water), decreased by the removal of CSF to a median closing pressure of 17.0 cm water. Patients pre-lumbar puncture had significantly smaller pontine displacement than they did post-lumbar puncture after CSF pressure reduction (P = .001) and compared with controls (P = .01). Post-lumbar puncture patients had statistically similar pontine displacements to controls. Measured CSF pressure in patients pre- and post-lumbar puncture correlated significantly with pontine displacement (r = 0.49; P = .04).
This study establishes a relationship between pontine displacement from DENSE MR imaging and measured pressure obtained contemporaneously by lumbar puncture, providing a method to noninvasively assess intracranial pressure status in idiopathic intracranial hypertension. © 2018 by American Journal of Neuroradiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mashouf, S; Merino, T; Ravi, A
Purpose: There is strong evidence relating post-implant dosimetry for low-dose-rate (LDR) prostate seed brachytherapy to local control rates. The delineation of the prostate on CT images, however, represents a challenge due to the lack of soft tissue contrast in identifying the prostate borders. This study aims at quantifying the sensitivity of clinically relevant dosimetric parameters to uncertainty in the contouring of the prostate. Methods: CT images, post-op plans and contours of a cohort of patients (n=43) (low risk=55.8%, intermediate risk=39.5%, high risk=4.7%), who had received prostate seed brachytherapy, were imported into the MIM Symphony treatment planning system. The prostate contours in post-implant CT images were expanded/contracted uniformly by margins of ±1.00 mm, ±2.00 mm, ±3.00 mm, ±4.00 mm and ±5.00 mm. The values for V100 and D90 were extracted from dose-volume histograms for each contour and compared. Results: Significant changes were observed in the values of D90 and V100, as well as in the number of suboptimal plans, for expansion or contraction margins of only a few millimeters. Evaluation of coverage based on D90 was found to be less sensitive to expansion errors than V100. D90 led to a lower number of implants incorrectly identified as having insufficient coverage for expanded contours, which increases the accuracy of post-implant QA using CT images compared to V100. Conclusion: In order to establish a successful post-implant QA process for LDR prostate seed brachytherapy, it is necessary to identify the low and high thresholds of important dose metrics of the target volume such as D90 and V100. Since these parameters are sensitive to target volume definition, accurate identification of the prostate borders would help to improve the accuracy and predictive value of the post-implant QA process. In this respect, use of imaging modalities such as MRI, where the prostate is well delineated, should prove useful.
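The two dose metrics are easy to state precisely: V100 is the fraction of the target volume receiving at least 100% of the prescription dose, and D90 is the dose covering 90% of the volume, i.e. the 10th percentile of voxel doses. A sketch on synthetic dose samples (the distribution below is made up, not patient data):

```python
import numpy as np

# Hypothetical post-implant dose values (% of prescription) at voxels inside the contour.
rng = np.random.default_rng(3)
doses = rng.normal(130.0, 25.0, 5000)

v100 = 100.0 * np.mean(doses >= 100.0)   # % of target volume receiving >= 100% dose
d90 = np.percentile(doses, 10.0)         # dose (in %) received by 90% of the volume
```

Recomputing these two numbers for each expanded or contracted contour, as in the study, shows directly how strongly they depend on the delineation.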
NASA Astrophysics Data System (ADS)
Wuhrer, R.; Moran, K.
2014-03-01
Quantitative X-ray mapping with silicon drift detectors and multi-EDS detector systems has become an invaluable analysis technique and one of the most useful methods of X-ray microanalysis today. The time to perform an X-ray map has reduced considerably, with the ability to map minor and trace elements very accurately due to the larger detector area and higher count rate detectors. Live X-ray imaging can now be performed, with a significant amount of data collected in a matter of minutes. A great deal of information can be obtained from X-ray maps, including elemental relationship or scatter diagram creation, elemental ratio mapping, chemical phase mapping (CPM) and quantitative X-ray maps. In obtaining quantitative X-ray maps, we are able to easily generate atomic number (Z), absorption (A), fluorescence (F), theoretical back scatter coefficient (η), and quantitative total maps from each pixel in the image. This allows us to generate an image corresponding to each factor (for each element present). These images allow users to predict and verify where they are likely to have problems in their images, and are especially helpful for examining possible interface artefacts. This paper covers post-processing techniques to improve the quantitation of X-ray map data and the development of post-processing techniques for improved characterisation.
Hybrid wavefront sensing and image correction algorithm for imaging through turbulent media
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Robertson Rzasa, John; Ko, Jonathan; Davis, Christopher C.
2017-09-01
It is well known that passive image correction of turbulence distortions often involves using geometry-dependent deconvolution algorithms. On the other hand, active imaging techniques using adaptive optic correction should use the distorted wavefront information for guidance. Our work shows that a hybrid hardware-software approach is possible to obtain accurate and highly detailed images through turbulent media. The processing algorithm also takes much fewer iteration steps in comparison with conventional image processing algorithms. In our proposed approach, a plenoptic sensor is used as a wavefront sensor to guide post-stage image correction on a high-definition zoomable camera. Conversely, we show that given the ground truth of the highly detailed image and the plenoptic imaging result, we can generate an accurate prediction of the blurred image on a traditional zoomable camera. Similarly, the ground truth combined with the blurred image from the zoomable camera would provide the wavefront conditions. In application, our hybrid approach can be used as an effective way to conduct object recognition in a turbulent environment where the target has been significantly distorted or is even unrecognizable.
Pertuz, Said; McDonald, Elizabeth S.; Weinstein, Susan P.; Conant, Emily F.
2016-01-01
Purpose To assess a fully automated method for volumetric breast density (VBD) estimation in digital breast tomosynthesis (DBT) and to compare the findings with those of full-field digital mammography (FFDM) and magnetic resonance (MR) imaging. Materials and Methods Bilateral DBT images, FFDM images, and sagittal breast MR images were retrospectively collected from 68 women who underwent breast cancer screening from October 2011 to September 2012 with institutional review board–approved, HIPAA-compliant protocols. A fully automated computer algorithm was developed for quantitative estimation of VBD from DBT images. FFDM images were processed with U.S. Food and Drug Administration–cleared software, and the MR images were processed with a previously validated automated algorithm to obtain corresponding VBD estimates. Pearson correlation and analysis of variance with Tukey-Kramer post hoc correction were used to compare the multimodality VBD estimates. Results Estimates of VBD from DBT were significantly correlated with FFDM-based and MR imaging–based estimates with r = 0.83 (95% confidence interval [CI]: 0.74, 0.90) and r = 0.88 (95% CI: 0.82, 0.93), respectively (P < .001). The corresponding correlation between FFDM and MR imaging was r = 0.84 (95% CI: 0.76, 0.90). However, statistically significant differences after post hoc correction (α = 0.05) were found among VBD estimates from FFDM (mean ± standard deviation, 11.1% ± 7.0) relative to MR imaging (16.6% ± 11.2) and DBT (19.8% ± 16.2). Differences between VBD estimates from DBT and MR imaging were not significant (P = .26). Conclusion Fully automated VBD estimates from DBT, FFDM, and MR imaging are strongly correlated but show statistically significant differences. Therefore, absolute differences in VBD between FFDM, DBT, and MR imaging should be considered in breast cancer risk assessment. © RSNA, 2015 Online supplemental material is available for this article. PMID:26491909
A stereoscopic lens for digital cinema cameras
NASA Astrophysics Data System (ADS)
Lipton, Lenny; Rupkalvis, John
2015-03-01
Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.
An Integrative Object-Based Image Analysis Workflow for Uav Images
NASA Astrophysics Data System (ADS)
Yu, Huai; Yan, Tianheng; Yang, Wen; Zheng, Hong
2016-06-01
In this work, we propose an integrative framework to process UAV images. The overall process can be viewed as a pipeline consisting of geometric and radiometric corrections, subsequent panoramic mosaicking, and hierarchical image segmentation for later Object-Based Image Analysis (OBIA). More precisely, we first introduce an efficient image stitching algorithm, applied after geometric calibration and radiometric correction, which employs fast feature extraction and matching by combining the local difference binary descriptor with locality-sensitive hashing. We then use a Binary Partition Tree (BPT) representation for the large mosaicked panoramic image, which starts from an initial partition obtained by an over-segmentation algorithm, i.e., simple linear iterative clustering (SLIC). Finally, we build an object-based hierarchical structure by fully considering the spectral and spatial information of the superpixels and their topological relationships. Moreover, an optimal segmentation is obtained by filtering the complex hierarchies into simpler ones according to criteria such as uniform homogeneity and semantic consistency. Experimental results on processing post-seismic UAV images of the 2013 Ya'an earthquake demonstrate the effectiveness and efficiency of our proposed method.
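The over-segmentation stage that seeds the BPT can be illustrated with a minimal SLIC-style clustering. This is a sketch, not the authors' implementation: cluster seeding, the compactness weighting, and the iteration count are assumptions, and real SLIC restricts the search to a local window for speed.

```python
import numpy as np

def slic_like(img, n_seg_per_axis=4, compactness=0.1, n_iter=5):
    """Minimal SLIC-style over-segmentation of a grayscale image.

    Pixels are clustered in a joint (intensity, y, x) feature space;
    `compactness` weights spatial proximity against intensity similarity.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Scale coordinates to [0, compactness] so intensity and space are comparable.
    feats = np.stack([img.ravel().astype(float),
                      compactness * ys.ravel() / h,
                      compactness * xs.ravel() / w], axis=1)
    # Seed cluster centres on a regular grid.
    gy = np.linspace(0, h - 1, n_seg_per_axis).astype(int)
    gx = np.linspace(0, w - 1, n_seg_per_axis).astype(int)
    centres = feats[[iy * w + ix for iy in gy for ix in gx]].copy()
    for _ in range(n_iter):
        # Assign every pixel to its nearest centre, then recompute centres.
        d = ((feats[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centres)):
            members = feats[labels == k]
            if len(members):
                centres[k] = members.mean(axis=0)
    return labels.reshape(h, w)

# Two flat regions separated by a vertical edge: no superpixel should straddle it.
img = np.zeros((32, 32)); img[:, 16:] = 1.0
labels = slic_like(img)
```

On this toy image the intensity term dominates, so each resulting superpixel lies entirely on one side of the edge, which is exactly the property that makes such a partition a good starting point for a BPT.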
Laule, Cornelia; Vavasour, Irene M; Shahinfard, Elham; Mädler, Burkhard; Zhang, Jing; Li, David K B; MacKay, Alex L; Sirrs, Sandra M
2018-05-01
Late-onset adult Krabbe disease is a very rare demyelinating leukodystrophy, affecting less than 1 in a million people. Hematopoietic stem cell transplantation (HSCT) strategies can stop the accumulation of toxic metabolites that damage myelin-producing cells. We used quantitative advanced imaging metrics to longitudinally assess the impact of HSCT on brain abnormalities in adult-onset Krabbe disease. A 42-year-old female with late-onset Krabbe disease and an age/sex-matched healthy control underwent annual 3T MRI (baseline was immediately prior to HSCT for the Krabbe subject). Imaging included conventional scans, myelin water imaging, diffusion tensor imaging, and magnetic resonance spectroscopy. Brain abnormalities far beyond those visible on conventional imaging were detected, suggesting a global pathological process occurs in Krabbe disease with adult-onset etiology, with myelin being more affected than axons, and evidence of wide-spread gliosis. After HSCT, our patient showed clinical stability in all measures, as well as improvement in gait, dysarthria, and pseudobulbar affect at 7.5 years post-transplant. No MRI evidence of worsening demyelination and axonal loss was observed up to 4 years post-allograft. Clinical evidence and stability of advanced MR measures related to myelin and axons supports HSCT as an effective treatment strategy for stopping progression associated with late-onset Krabbe disease. Copyright © 2018 by the American Society of Neuroimaging.
High Resolution Imaging of the Sun with CORONAS-1
NASA Technical Reports Server (NTRS)
Karovska, Margarita
1998-01-01
We applied several image restoration and enhancement techniques to CORONAS-I images. We carried out characterization of the Point Spread Function (PSF) using the unique capability of the Blind Iterative Deconvolution (BID) technique, which recovers the actual PSF at a given location and time of observation when only limited a priori information is available on its characteristics. We also applied image enhancement techniques to extract the small-scale structure embedded in bright large-scale structures on the disk and on the limb. The results demonstrate the capability of image post-processing to substantially increase the yield of space observations by improving the resolution and reducing noise in the images.
Wan Ismail, W Z; Sim, K S; Tso, C P; Ting, H Y
2011-01-01
To reduce undesirable charging effects in scanning electron microscope images, Rayleigh contrast stretching is developed and employed. First, re-scaling is performed on the input image histograms with the Rayleigh algorithm. Then, contrast stretching or contrast adjustment is implemented to improve the images while reducing the contrast charging artifacts. This technique is compared with several existing histogram equalization (HE) extension techniques: recursive sub-image HE, contrast stretching dynamic HE, multipeak HE, and recursive mean-separate HE. Other post-processing methods, such as the wavelet approach, spatial filtering, and exponential contrast stretching, are compared as well. Overall, the proposed method produces better image compensation in reducing charging artifacts. Copyright © 2011 Wiley Periodicals, Inc.
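The two-step scheme — histogram re-scaling toward a Rayleigh distribution, followed by a linear contrast stretch — can be sketched as below. The rank-based CDF mapping and the `sigma` value are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def rayleigh_stretch(img, sigma=0.4):
    """Sketch of Rayleigh-based histogram re-scaling followed by a
    linear contrast stretch to the full 8-bit range."""
    flat = img.ravel().astype(float)
    # Empirical CDF rank of every pixel, mapped into (0, 1).
    ranks = flat.argsort().argsort()
    cdf = (ranks + 1) / (flat.size + 1)
    # Push the ranks through the inverse Rayleigh CDF:
    # F^-1(p) = sigma * sqrt(-2 ln(1 - p))
    ray = sigma * np.sqrt(-2.0 * np.log(1.0 - cdf))
    # Linear contrast stretch to [0, 255].
    out = (ray - ray.min()) / (ray.max() - ray.min()) * 255.0
    return out.reshape(img.shape).astype(np.uint8)

# A low-contrast input confined to a narrow gray band.
img = np.random.default_rng(0).integers(80, 120, size=(64, 64)).astype(np.uint8)
enhanced = rayleigh_stretch(img)
```

The output occupies the full dynamic range regardless of how compressed the input histogram was, which is the property the charging-artifact compensation relies on.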
Ghonge, Nitin P; Gadanayak, Satyabrat; Rajakumari, Vijaya
2014-01-01
As Laparoscopic Donor Nephrectomy (LDN) offers several advantages for the donor, such as less post-operative pain, fewer cosmetic concerns and faster recovery time, there is a growing global trend towards LDN as compared to open nephrectomy. Comprehensive pre-LDN donor evaluation includes assessment of renal morphology, including the pelvi-calyceal and vascular systems. Apart from donor selection, evaluation of the regional anatomy allows precise surgical planning. Due to limited visualization during laparoscopic renal harvesting, detailed pre-transplant evaluation of the regional anatomy, including the renal venous anatomy, is of utmost importance. MDCT is the modality of choice for pre-LDN evaluation of potential renal donors. Apart from an appropriate scan protocol and post-processing methods, a detailed understanding of the surgical techniques is essential for the radiologist for accurate image interpretation during pre-LDN MDCT evaluation of potential renal donors. This review article describes MDCT evaluation of the potential living renal donor prior to LDN, with emphasis on scan protocol, post-processing methods and image interpretation. The article lays special emphasis on the surgical perspectives of pre-LDN MDCT evaluation and addresses important points that transplant surgeons want to know. PMID:25489130
Retrieval of land cover information under thin fog in Landsat TM image
NASA Astrophysics Data System (ADS)
Wei, Yuchun
2008-04-01
Thin fog, which often appears in remote sensing images of subtropical climate regions, results in low image quality and poor image mapping. Therefore, it is necessary to develop image processing methods to retrieve land cover information under thin fog. In this paper, a Landsat TM image near Taihu Lake, which lies in the subtropical climate zone of China, was used as an example, and a workflow and method to retrieve land cover information under thin fog were built based on ENVI software and a single TM image. The basic workflow comprises three steps: 1) isolating the thin fog area in the image according to the spectral differences between bands; 2) retrieving the visible-band information of different land cover types under thin fog from the near-infrared bands, according to the relationships between the near-infrared and visible bands of those land cover types in the fog-free area; 3) image post-processing. The results showed that the method is simple and practical, and can be used to improve the quality of TM image mapping effectively.
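Step 2 of the workflow can be sketched as a band-to-band regression: fit the visible band against a near-infrared band over fog-free pixels, then predict the visible values inside the fog mask. This is a simplified illustration; the paper fits such relationships per land cover type, which is omitted here, and the synthetic band relation below is an assumption.

```python
import numpy as np

def restore_visible(vis, nir, fog_mask):
    """Regress a visible band against a near-infrared band over fog-free
    pixels, then predict visible values for pixels flagged as foggy."""
    clear = ~fog_mask
    a, b = np.polyfit(nir[clear].ravel(), vis[clear].ravel(), 1)
    restored = vis.astype(float).copy()
    restored[fog_mask] = a * nir[fog_mask] + b
    return restored

rng = np.random.default_rng(1)
nir = rng.uniform(0.2, 0.6, size=(40, 40))
vis_true = 0.5 * nir + 0.05              # assumed clear-sky band relation
vis = vis_true.copy()
fog = np.zeros_like(vis, dtype=bool); fog[:10] = True
vis[fog] += 0.3                          # fog brightens the visible band
restored = restore_visible(vis, nir, fog)
```

In this noise-free toy case the regression recovers the clear-sky visible values in the foggy strip almost exactly; with real TM data the fit would carry residual error and should be done per land cover class.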
Image velocimetry for clouds with relaxation labeling based on deformation consistency
NASA Astrophysics Data System (ADS)
Horinouchi, Takeshi; Murakami, Shin-ya; Kouyama, Toru; Ogohara, Kazunori; Yamazaki, Atsushi; Yamada, Manabu; Watanabe, Shigeto
2017-08-01
Correlation-based cloud tracking has been extensively used to measure atmospheric winds, but difficulties remain. In this study, aiming at developing a cloud tracking system for Akatsuki, an artificial satellite orbiting Venus, a formulation is developed for improving the relaxation labeling technique to select appropriate peaks of cross-correlation surfaces, which tend to have multiple peaks. The formulation makes explicit use of a consistency inherent in the type of cross-correlation method where template sub-images are slid without deformation: if the resultant motion vectors indicate too large a deformation, they contradict the assumption of the method. This deformation consistency is exploited further to develop two post-processes; one clusters the motion vectors into groups within each of which the consistency is perfect, and the other extends the groups using the original candidate lists. These processes are useful for eliminating erroneous vectors, distinguishing motion vectors at different altitudes, and detecting phase velocities of waves in fluids such as atmospheric gravity waves. As a basis for the relaxation labeling and the post-processes, as well as for uncertainty estimation, the necessity of finding isolated (well-separated) peaks of cross-correlation surfaces is argued, and an algorithm to realize this is presented. All the methods are implemented, and their effectiveness is demonstrated with initial images obtained by the ultraviolet imager onboard Akatsuki. Since the deformation consistency reflects a logical consistency inherent in template matching methods, it should have broad application beyond cloud tracking.
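The underlying template matching step — sliding a template sub-image without deformation and taking the peak of the normalized cross-correlation surface as the motion vector — can be sketched as follows. This is only the basic building block; the paper's contributions (multi-peak candidate lists, relaxation labeling, deformation-consistency post-processes) are not shown.

```python
import numpy as np

def track(template, search):
    """Slide `template` over `search` without deformation and return the
    displacement of the best match (peak of the normalized
    cross-correlation surface) together with its score."""
    th, tw = template.shape
    sh, sw = search.shape
    t = template - template.mean()
    best, best_dy, best_dx = -np.inf, 0, 0
    for dy in range(sh - th + 1):
        for dx in range(sw - tw + 1):
            win = search[dy:dy + th, dx:dx + tw]
            w = win - win.mean()
            denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
            score = (t * w).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_dy, best_dx = score, dy, dx
    return best_dy, best_dx, best

rng = np.random.default_rng(2)
cloud = rng.uniform(size=(12, 12))
scene = np.zeros((30, 30))
scene[7:19, 9:21] = cloud        # the cloud pattern advected to offset (7, 9)
dy, dx, score = track(cloud, scene)
# → (dy, dx) == (7, 9), score ≈ 1.0
```

Real cross-correlation surfaces have multiple comparable peaks; it is exactly the ambiguity among such peaks that the paper's relaxation labeling with deformation consistency resolves.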
2015-04-01
Current routine MRI examinations rely on the acquisition of qualitative images that have a contrast "weighted" for a mixture of (magnetic) tissue properties. Recently, a novel approach was introduced, namely MR Fingerprinting (MRF), with a completely different approach to data acquisition, post-processing and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudo-randomized acquisition that causes the signals from different tissues to have a unique signal evolution, or 'fingerprint', that is simultaneously a function of the multiple material properties under investigation. The processing after acquisition involves a pattern recognition algorithm to match the fingerprints to a predefined dictionary of predicted signal evolutions. These can then be translated into quantitative maps of the magnetic parameters of interest. MRF is a technique that could theoretically be applied to most traditional qualitative MRI methods, replacing them with the acquisition of truly quantitative tissue measures. MRF is thereby expected to be much more accurate and reproducible than traditional MRI, and should improve multi-center studies and significantly reduce reader bias when diagnostic imaging is performed. Key Points • MR fingerprinting (MRF) is a new approach to data acquisition, post-processing and visualization. • MRF provides highly accurate quantitative maps of T1, T2, proton density, and diffusion. • MRF may offer multiparametric imaging with high reproducibility, and high potential for multicenter/multivendor studies.
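The dictionary matching step can be sketched as a normalized inner-product search: each measured signal evolution is matched to the dictionary entry it correlates with most strongly, and that entry's tissue parameters are reported. The dictionary here is random data standing in for Bloch-simulated evolutions, and the (T1, T2) values are purely illustrative.

```python
import numpy as np

def match_fingerprints(signals, dictionary, params):
    """Match each measured signal evolution (row of `signals`) to the
    dictionary entry with the largest normalized inner product and
    return that entry's parameter vector."""
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    s = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    idx = np.abs(s @ d.T).argmax(axis=1)
    return params[idx]

rng = np.random.default_rng(3)
n_entries, n_tr = 50, 200
dictionary = rng.normal(size=(n_entries, n_tr))           # stand-in evolutions
params = np.stack([rng.uniform(300, 2000, n_entries),     # hypothetical T1 (ms)
                   rng.uniform(20, 300, n_entries)], 1)   # hypothetical T2 (ms)
truth = 7
signal = dictionary[truth] * 3.0 + rng.normal(scale=0.05, size=n_tr)
est = match_fingerprints(signal[None, :], dictionary, params)
```

Because the match uses a normalized inner product, the overall signal scale (here a factor of 3, standing in for proton density and coil sensitivity) does not affect which entry wins.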
An adaptive optics imaging system designed for clinical use
Zhang, Jie; Yang, Qiang; Saito, Kenichi; Nozato, Koji; Williams, David R.; Rossi, Ethan A.
2015-01-01
Here we demonstrate a new imaging system that addresses several major problems limiting the clinical utility of conventional adaptive optics scanning light ophthalmoscopy (AOSLO), including its small field of view (FOV), reliance on patient fixation for targeting imaging, and substantial post-processing time. We previously showed an efficient image-based eye tracking method for real-time optical stabilization and image registration in AOSLO. However, in patients with poor fixation, eye motion causes the FOV to drift substantially, causing this approach to fail. We solve that problem here by tracking eye motion at multiple spatial scales simultaneously by optically and electronically integrating a wide-FOV SLO (WFSLO) with an AOSLO. This multi-scale approach, implemented with fast tip/tilt mirrors, has a large stabilization range of ± 5.6°. Our method consists of three stages implemented in parallel: 1) coarse optical stabilization driven by a WFSLO image, 2) fine optical stabilization driven by an AOSLO image, and 3) sub-pixel digital registration of the AOSLO image. We evaluated system performance in normal eyes and diseased eyes with poor fixation. Residual image motion with incremental compensation after each stage was: 1) ~2–3 arc minutes (arcmin), 2) ~0.5–0.8 arcmin, and 3) ~0.05–0.07 arcmin for normal eyes. Performance in eyes with poor fixation was: 1) ~3–5 arcmin, 2) ~0.7–1.1 arcmin, and 3) ~0.07–0.14 arcmin. We demonstrate that this system is capable of reducing image motion by a factor of ~400, on average. This new optical design provides additional benefits for clinical imaging, including a steering subsystem for AOSLO that can be guided by the WFSLO to target specific regions of interest such as retinal pathology, and real-time averaging of registered images to eliminate image post-processing. PMID:26114033
High Resolution Near Real Time Image Processing and Support for MSSS Modernization
NASA Astrophysics Data System (ADS)
Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.
2012-09-01
This paper describes image enhancement software applications engineering development work that has been performed in support of Maui Space Surveillance System (MSSS) Modernization. It also includes R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities. This includes Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award, and our selection and planned use of an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data, along with an overview and status summary of high-resolution image enhancement and near-real-time processing. We then describe recent image enhancement applications development and support for MSSS Modernization and results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image enhancement processing have been realized over the past several years, including a key application that has achieved more than a 10,000-times speedup compared to the original R&D code, and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low-latency, near-real-time processing of data collected by the ground-based sensor during overhead passes of space objects.
Social-Cognitive Biases in Simulated Airline Luggage Screening
NASA Technical Reports Server (NTRS)
Brown, Jeremy R.; Madhavan, Poomima
2011-01-01
This study illustrated how social cognitive biases affect the decision-making process of airline luggage screeners. Participants (n = 96) performed a computer-simulated task to detect hidden weapons in 200 x-ray images of passenger luggage. Participants saw each image for two (high time pressure) or six seconds (low time pressure). Participants observed pictures of the "passenger" who owned the luggage. The "pre-anchor" group answered questions about the passenger before the luggage image appeared, the "post-anchor" group answered questions after the luggage appeared, and the "no-anchor" group answered no questions. Participants either stopped or did not stop the bag, and rated their confidence in their decision. Participants under high time pressure had lower hit rates and higher false alarm rates. Significant differences between the pre-, no-, and post-anchor groups were based on the gender and race of the passengers. Participants had higher false alarm rates in response to male than female passengers.
Seismic imaging of post-glacial sediments - test study before Spitsbergen expedition
NASA Astrophysics Data System (ADS)
Szalas, Joanna; Grzyb, Jaroslaw; Majdanski, Mariusz
2017-04-01
This work presents results of the analysis of reflection seismic data acquired from a testing area in central Poland. For this experiment we used a total of 147 vertical-component seismic stations (DATA-CUBE and Reftek "Texan") with an accelerated weight drop (PEG-40). The profile was 350 metres long. It is part of a pilot study for a future research project on Spitsbergen. The purpose of the study is to recognise the characteristics of the seismic response of post-glacial sediments in order to design the most adequate survey acquisition parameters and processing sequence for the data from Spitsbergen. Multiple tests and comparisons have been performed to obtain the best possible quality of the seismic image. In this research we examine the influence of receiver interval size, front mute application, and surface-wave attenuation attempts. Although seismic imaging is the main technique, we plan to support this analysis with additional data from traveltime tomography, MASW and other a priori information.
NASA Astrophysics Data System (ADS)
Lesage, F.; Castonguay, A.; Tardif, P. L.; Lefebvre, J.; Li, B.
2015-09-01
A combined serial OCT/confocal scanner was designed to image large sections of biological tissue at microscopic resolution. Serial imaging of organs embedded in agarose blocks is performed by cutting through the tissue with a vibratome, which sequentially removes slices to reveal new tissue to image, overcoming the limited light penetration encountered in microscopy. Two linear stages allow moving the tissue with respect to the microscope objective, acquiring a 2D grid of volumes (1x1x0.3 mm) with OCT and a 2D grid of images (1x1 mm) with the confocal arm. This process is repeated automatically until the entire sample is imaged. The raw data is then post-processed to re-stitch each individual acquisition and obtain a reconstructed volume of the imaged tissue. This design is being used to investigate correlations of white matter and microvasculature changes with aging and with the increase in pulse pressure following transaortic constriction in mice. The dual imaging capability of the system reveals complementary contrast information: OCT imaging reveals changes in refractive index, giving contrast between white and grey matter in the mouse brain, while transcardial perfusion of FITC or pre-sacrifice injection of Evans Blue shows microvasculature properties in the brain with confocal imaging.
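The re-stitching step can be sketched as placing each tile of the 2D acquisition grid at its stage position. This is a deliberately simplified sketch: real tiles overlap and require registration and intensity blending, which are omitted here, and the tile size in pixels is an assumption.

```python
import numpy as np

def stitch(tiles, tile_px=100):
    """Place each tile of a 2D acquisition grid at its stage position to
    rebuild the full slice (tiles assumed to abut exactly)."""
    ny, nx = tiles.shape[:2]
    mosaic = np.zeros((ny * tile_px, nx * tile_px))
    for iy in range(ny):
        for ix in range(nx):
            mosaic[iy * tile_px:(iy + 1) * tile_px,
                   ix * tile_px:(ix + 1) * tile_px] = tiles[iy, ix]
    return mosaic

rng = np.random.default_rng(4)
tiles = rng.uniform(size=(3, 4, 100, 100))   # a 3x4 grid of 1x1 mm tiles
mosaic = stitch(tiles)
# → mosaic.shape == (300, 400)
```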
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakowatz, C.V. Jr.; Wahl, D.E.; Thompson, P.A.
1996-12-31
Wavefront curvature defocus effects can occur in spotlight-mode SAR imagery when reconstructed via the well-known polar formatting algorithm (PFA) under certain scenarios that include imaging at close range, use of very low center frequency, and/or imaging of very large scenes. The range migration algorithm (RMA), also known as seismic migration, was developed to accommodate these wavefront curvature effects. However, the along-track upsampling of the phase history data required of the original version of range migration can in certain instances represent a major computational burden. A more recent version of migration processing, the Frequency Domain Replication and Downsampling (FReD) algorithm, obviates the need to upsample, and is accordingly more efficient. In this paper the authors demonstrate that the combination of traditional polar formatting with appropriate space-variant post-filtering for refocus can be as efficient or even more efficient than FReD under some imaging conditions, as demonstrated by the computer-simulated results in this paper. The post-filter can be pre-calculated from a theoretical derivation of the curvature effect. The conclusion is that the new polar formatting with post-filtering algorithm (PF2) should be considered as a viable candidate for a spotlight-mode image formation processor when curvature effects are present.
NASA Astrophysics Data System (ADS)
Gurov, I. P.; Kozlov, S. A.
2014-09-01
The first international scientific school "Methods of Digital Image Processing in Optics and Photonics" was held with a view to developing cooperation between world-class experts, young scientists, students and post-graduate students, and to exchanging information on the current status and directions of research in the field of digital image processing in optics and photonics. The International Scientific School was managed by: Saint Petersburg National Research University of Information Technologies, Mechanics and Optics (ITMO University) - Saint Petersburg (Russia); Chernyshevsky Saratov State University - Saratov (Russia); National Research Nuclear University "MEPhI" (NRNU MEPhI) - Moscow (Russia). The school was held with the participation of the local chapters of the Optical Society of America (OSA), the Society of Photo-Optical Instrumentation Engineers (SPIE) and the IEEE Photonics Society. Further details, including topics, committees and conference photos, are available in the PDF.
Wavelet Filter Banks for Super-Resolution SAR Imaging
NASA Technical Reports Server (NTRS)
Sheybani, Ehsan O.; Deshpande, Manohar; Memarsadeghi, Nargess
2011-01-01
This paper discusses innovative wavelet-based filter banks designed to enhance the analysis of super-resolution Synthetic Aperture Radar (SAR) images using parametric spectral methods and signal classification algorithms. SAR finds applications in many of NASA's earth science fields, such as deformation, ecosystem structure, and dynamics of ice, snow and cold land processes, and surface water and ocean topography. Traditionally, standard methods such as the Fast Fourier Transform (FFT) and Inverse Fast Fourier Transform (IFFT) have been used to extract images from SAR radar data. Due to the non-parametric nature of these methods, their resolution limitations, and their observation-time dependence, the use of spectral estimation and wavelet-based signal pre- and post-processing techniques to process SAR radar data has been proposed. Multi-resolution wavelet transforms and advanced spectral estimation techniques have proven to offer efficient solutions to this problem.
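One level of a wavelet analysis filter bank can be sketched with the Haar basis: the image is split into an approximation sub-band (LL) and three detail sub-bands (LH, HL, HH). This is a minimal stand-in for the multi-resolution decompositions the paper applies to SAR data; the actual filter banks discussed there are more sophisticated than Haar.

```python
import numpy as np

def haar2d(img):
    """One level of a separable 2D Haar analysis filter bank, returning the
    approximation (LL) and detail (LH, HL, HH) sub-bands at half resolution."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # low-pass over rows
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # high-pass over rows
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-pass over columns
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.ones((8, 8))
ll, lh, hl, hh = haar2d(img)
# a constant image puts all of its energy in LL; the detail bands are zero
```

Recursing on LL yields the multi-resolution pyramid; speckle and high-frequency clutter concentrate in the detail bands, which is what makes such decompositions useful for SAR pre- and post-processing.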
Dependence of quantitative accuracy of CT perfusion imaging on system parameters
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2017-03-01
Deconvolution is a popular method to calculate parametric perfusion parameters from four-dimensional CT perfusion (CTP) source images. During the deconvolution process, the four-dimensional space is squeezed into three-dimensional space by removing the temporal dimension, and prior knowledge is often used to suppress noise associated with the process. These additional complexities complicate the understanding of the deconvolution-based CTP imaging system and of how its quantitative accuracy depends on the parameters and sub-operations involved in the image formation process. Meanwhile, there has been a strong clinical need for answering this question, as physicians often rely heavily on the quantitative values of perfusion parameters to make diagnostic decisions, particularly during an emergent clinical situation (e.g. diagnosis of acute ischemic stroke). The purpose of this work was to develop a theoretical framework that quantitatively relates the quantification accuracy of parametric perfusion parameters to the CTP acquisition and post-processing parameters. This goal was achieved with the help of a cascaded systems analysis for deconvolution-based CTP imaging systems. Based on the cascaded systems analysis, the quantitative relationship between regularization strength, source image noise, arterial input function, and the quantification accuracy of perfusion parameters was established. The theory could potentially be used to guide developments of CTP imaging technology for better quantification accuracy and lower radiation dose.
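The role of regularization strength can be illustrated with a minimal Tikhonov-regularized SVD deconvolution: the tissue concentration curve is modeled as the convolution of the arterial input function (AIF) with a flow-scaled residue function, and the cerebral blood flow (CBF) estimate is the peak of the deconvolved residue. This sketch is not the paper's cascaded systems analysis; the synthetic AIF, residue function, and the regularization form are assumptions.

```python
import numpy as np

def deconvolve_cbf(ctc, aif, dt, lam=0.01):
    """Solve ctc = dt * A r for the flow-scaled residue function r, where A
    is the lower-triangular convolution matrix built from the AIF; Tikhonov
    filter factors damp small singular values (`lam` sets the regularization
    strength relative to the largest singular value). Returns peak(r) = CBF."""
    n = len(aif)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, :i + 1] = aif[i::-1]        # A[i, k] = aif[i - k]
    A *= dt
    U, s, Vt = np.linalg.svd(A)
    f = s / (s ** 2 + (lam * s.max()) ** 2)   # Tikhonov filter factors
    r = Vt.T @ (f * (U.T @ ctc))
    return r.max()

t = np.arange(0, 30, 1.0)
aif = np.exp(-(t - 8) ** 2 / 8.0)              # synthetic arterial input
cbf_true = 0.6
r_true = cbf_true * np.exp(-t / 6.0)           # flow-scaled residue function
ctc = np.convolve(aif, r_true)[:len(t)]        # noise-free tissue curve, dt = 1
cbf = deconvolve_cbf(ctc, aif, dt=1.0, lam=0.01)
```

Even with noise-free data the regularization biases the CBF estimate slightly; increasing `lam` suppresses noise at the cost of more bias, which is exactly the accuracy trade-off the paper's framework quantifies.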
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
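The core convolution step — a coronagraphic PSF applied to an astrophysical scene — can be sketched with a single static PSF via FFT. This is an illustration of the operation only: the function name, the Gaussian PSF, and the scene are assumptions, and crispy itself handles time-varying, wavelength-dependent PSFs.

```python
import numpy as np

def convolve_scene(scene, psf):
    """'Same'-size FFT convolution of a scene with a (single, static) PSF."""
    sy, sx = scene.shape
    py, px = psf.shape
    fy, fx = sy + py - 1, sx + px - 1
    out = np.fft.irfft2(np.fft.rfft2(scene, (fy, fx)) *
                        np.fft.rfft2(psf, (fy, fx)), (fy, fx))
    y0, x0 = py // 2, px // 2           # crop back to the scene size
    return out[y0:y0 + sy, x0:x0 + sx]

scene = np.zeros((64, 64)); scene[32, 32] = 1.0          # single point source
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(yy ** 2 + xx ** 2) / 4.0); psf /= psf.sum()  # toy Gaussian PSF
img = convolve_scene(scene, psf)
```

A point source simply reproduces the (normalized) PSF at its location with total flux preserved, which is a convenient sanity check before convolving a full scene of planets, dust and background objects.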
A Pipeline for 3D Digital Optical Phenotyping Plant Root System Architecture
NASA Astrophysics Data System (ADS)
Davis, T. W.; Shaw, N. M.; Schneider, D. J.; Shaff, J. E.; Larson, B. G.; Craft, E. J.; Liu, Z.; Kochian, L. V.; Piñeros, M. A.
2017-12-01
This work presents a new pipeline for digital optical phenotyping the root system architecture of agricultural crops. The pipeline begins with a 3D root-system imaging apparatus for hydroponically grown crop lines of interest. The apparatus acts as a self-containing dark room, which includes an imaging tank, motorized rotating bearing and digital camera. The pipeline continues with the Plant Root Imaging and Data Acquisition (PRIDA) software, which is responsible for image capturing and storage. Once root images have been captured, image post-processing is performed using the Plant Root Imaging Analysis (PRIA) command-line tool, which extracts root pixels from color images. Following the pre-processing binarization of digital root images, 3D trait characterization is performed using the next-generation RootReader3D software. RootReader3D measures global root system architecture traits, such as total root system volume and length, total number of roots, and maximum rooting depth and width. While designed to work together, the four stages of the phenotyping pipeline are modular and stand-alone, which provides flexibility and adaptability for various research endeavors.
Random bits, true and unbiased, from atmospheric turbulence
Marangon, Davide G.; Vallone, Giuseppe; Villoresi, Paolo
2014-01-01
Random numbers are a fundamental ingredient for secure communications and numerical simulation, as well as for games and, in general, for information science. Physical processes with intrinsic unpredictability may be exploited to generate genuine random numbers. Optical propagation in strong atmospheric turbulence is exploited here for this purpose, by observing a laser beam after a 143 km free-space path. In addition, we developed an algorithm to extract the randomness of the beam images at the receiver without post-processing. The numbers passed very selective randomness tests for qualification as genuine random numbers. The extraction algorithm can be easily generalized to random images generated by different physical processes. PMID:24976499
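For context, a classic way to turn biased raw bits from a physical source into unbiased output is the von Neumann extractor. This is explicitly not the paper's algorithm (whose point is to avoid such post-processing); it is shown only to illustrate what unbiased extraction from a physical bit stream means.

```python
import numpy as np

def von_neumann_extract(bits):
    """Von Neumann extractor: from each non-overlapping pair of raw bits,
    emit 1 for (0,1), 0 for (1,0), and nothing for equal pairs. The output
    is unbiased whenever the raw bits are i.i.d., whatever their bias."""
    pairs = bits[:len(bits) // 2 * 2].reshape(-1, 2)
    keep = pairs[:, 0] != pairs[:, 1]
    return pairs[keep, 1]            # second bit of each unequal pair

rng = np.random.default_rng(5)
raw = (rng.uniform(size=200000) < 0.7).astype(np.uint8)  # biased source: 70% ones
out = von_neumann_extract(raw)
```

The cost is throughput: with bias p, only 2p(1-p) of the pairs produce an output bit, which is one motivation for extractors that work directly on the physical signal, as in the paper.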
Visualization of the post-Golgi vesicle-mediated transportation of TGF-β receptor II by quasi-TIRFM.
Luo, Wangxi; Xia, Tie; Xu, Li; Chen, Ye-Guang; Fang, Xiaohong
2014-10-01
Transforming growth factor β receptor II (TβRII) is synthesized in the cytoplasm and then transported to the plasma membrane of cells to fulfil its signalling duty. Here, we applied live-cell fluorescence imaging techniques, in particular quasi-total internal reflection fluorescence microscopy, to image fluorescent protein-tagged TβRII and monitor its secretion process. We observed punctate TβRII-containing post-Golgi vesicles formed in MCF7 cells. Single-particle tracking showed that these vesicles travelled along the microtubules at an average speed of 0.51 μm/s. When stimulated by TGF-β ligand, these receptor-containing vesicles tended to move towards the plasma membrane. We also identified several factors that could inhibit the formation of such post-Golgi vesicles. Although the inhibitory mechanisms remain unknown, the observed characteristics of TβRII-containing vesicles provide new information on intracellular TβRII transportation. They also render TβRII a good model system for studying post-Golgi vesicle trafficking and protein transportation. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abt, Nicholas B.; Lehar, Mohamed; Guajardo, Carolina Trevino; Penninger, Richard T.; Ward, Bryan K.; Pearl, Monica S.; Carey, John P.
2016-01-01
Hypothesis Whether the RWM is permeable to iodine-based contrast agents (IBCA) is unknown; therefore, our goal was to determine if IBCAs could diffuse through the RWM using CT volume acquisition imaging. Introduction Imaging of hydrops in the living human ear has attracted recent interest. Intratympanic (IT) injection has shown gadolinium's ability to diffuse through the round window membrane (RWM), enhancing the perilymphatic space. Methods Four unfixed human cadaver temporal bones underwent intratympanic IBCA injection using three sequentially studied methods. The first method was direct IT injection. The second method used direct RWM visualization via tympanomeatal flap for IBCA-soaked absorbable gelatin pledget placement. In the third method, the middle ear was filled with contrast after flap elevation. Volume acquisition CT images were obtained immediately post-exposure, and at 1, 6, and 24 hour intervals. Post-processing was accomplished using color ramping and subtraction imaging. Results Following the third method, positive RWM and perilymphatic enhancement were seen with endolymph sparing. Gray scale and color ramp multiplanar reconstructions displayed increased signal within the cochlea compared to pre-contrast imaging. The cochlea was measured for attenuation differences compared to pure water, revealing a pre-injection average of −1,103 HU and a post-injection average of 338 HU. Subtraction imaging shows enhancement remaining within the cochlear space, Eustachian tube, middle ear epithelial lining, and mastoid. Conclusions Iohexol iodine contrast is able to diffuse across the RWM. Volume acquisition CT imaging was able to detect perilymphatic enhancement at 0.5mm slice thickness. The clinical application of IBCA IT injection appears promising but requires further safety studies. PMID:26859543
A colour image reproduction framework for 3D colour printing
NASA Astrophysics Data System (ADS)
Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie
2016-10-01
In this paper, the current technologies in full colour 3D printing technology were introduced. A framework of colour image reproduction process for 3D colour printing is proposed. A special focus was put on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral based colour reproduction are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations are described subsequently. Results are clear shown that applying proposed colour image reproduction framework, performance of colour reproduction can be significantly enhanced. With post colour corrections, a further improvement in colour process are achieved for 3D printed objects.
Lobster eye X-ray optics: Data processing from two 1D modules
NASA Astrophysics Data System (ADS)
Nentvich, O.; Urban, M.; Stehlikova, V.; Sieger, L.; Hudec, R.
2017-07-01
X-ray imaging is usually performed by Wolter I telescopes, which are suitable for imaging a small part of the sky but not for all-sky monitoring. Such monitoring could be done by Lobster eye optics, which can theoretically have a field of view of up to 360 degrees. An all-sky monitoring system enables quick identification of a source and its direction. This paper describes the possibility of using two independent one-dimensional Lobster Eye modules for this purpose instead of Wolter I optics, and the post-processing of their data into a 2D image. This arrangement allows scanning with less energy loss compared to Wolter I or two-dimensional Lobster Eye optics, and it is especially suitable for very weak sources.
Anomaly Detection in Nanofibrous Materials by CNN-Based Self-Similarity
Schettini, Raimondo
2018-01-01
Automatic detection and localization of anomalies in nanofibrous materials help to reduce the cost of the production process and the time of the post-production visual inspection process. Amongst all the monitoring methods, those exploiting Scanning Electron Microscope (SEM) imaging are the most effective. In this paper, we propose a region-based method for the detection and localization of anomalies in SEM images, based on Convolutional Neural Networks (CNNs) and self-similarity. The method evaluates the degree of abnormality of each subregion of an image under consideration by computing a CNN-based visual similarity with respect to a dictionary of anomaly-free subregions belonging to a training set. The proposed method outperforms the state of the art. PMID:29329268
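The dictionary-matching step described above can be illustrated with a small numerical sketch. Plain random vectors stand in for the CNN feature embeddings, and each region's anomaly score is one minus its maximum cosine similarity to the anomaly-free dictionary; all sizes and names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def anomaly_scores(features, dictionary):
    """Score each subregion by 1 - max cosine similarity to any
    anomaly-free dictionary entry (rows stand in for CNN embeddings)."""
    F = features / np.linalg.norm(features, axis=1, keepdims=True)
    D = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    sim = F @ D.T                    # cosine similarities, (n_regions, n_dict)
    return 1.0 - sim.max(axis=1)

rng = np.random.default_rng(6)
# Anomaly-free dictionary: noisy copies of a few "normal texture" directions
normal_dir = rng.normal(size=(5, 32))
dictionary = normal_dir[rng.integers(0, 5, 200)] + 0.05 * rng.normal(size=(200, 32))
test_feats = np.vstack([
    normal_dir[0] + 0.05 * rng.normal(size=32),   # normal-looking region
    rng.normal(size=32),                          # anomalous region
])
scores = anomaly_scores(test_feats, dictionary)
```

The anomalous region, being dissimilar to every dictionary entry, receives the higher score.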
Fundamental techniques for resolution enhancement of average subsampled images
NASA Astrophysics Data System (ADS)
Shen, Day-Fann; Chiu, Chui-Wen
2012-07-01
Although single image resolution enhancement, otherwise known as super-resolution, is widely regarded as an ill-posed inverse problem, we re-examine the fundamental relationship between a high-resolution (HR) image acquisition module and its low-resolution (LR) counterpart. Analysis shows that partial HR information is attenuated, but still exists, in the LR version through the fundamental averaging-and-subsampling process. As a result, we propose a modified Laplacian filter (MLF) and an intensity correction process (ICP) as the pre- and post-process, respectively, with an interpolation algorithm to partially restore the attenuated information in a super-resolution (SR) enhanced image. Experiments show that the proposed MLF and ICP provide significant and consistent quality improvements on all 10 test images with three well-known interpolation methods, including bilinear, bi-cubic, and the SR graphical user interface program provided by Ecole Polytechnique Federale de Lausanne. The proposed MLF and ICP are simple to implement and generally applicable to all average-subsampled LR images. MLF and ICP, separately or together, can be integrated into most interpolation methods that attempt to restore the original HR contents. Finally, the idea of MLF and ICP can also be applied to average-subsampled one-dimensional signals.
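The averaging-and-subsampling acquisition model, together with a sharpening pre-filter in the spirit of the MLF, can be sketched as follows. This is a minimal numpy/scipy illustration under assumed parameters; the kernel and strength are not the paper's actual filter, and the ICP post-process is omitted.

```python
import numpy as np
from scipy.ndimage import convolve, zoom

def average_subsample(hr, factor=2):
    """Model LR acquisition: average each factor-by-factor block, then subsample."""
    h, w = hr.shape
    return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def modified_laplacian_filter(lr, strength=0.5):
    """Hypothetical sharpening pre-filter (a stand-in for the paper's MLF):
    boost the high frequencies attenuated by block averaging."""
    lap = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=float)
    return lr + strength * convolve(lr, lap, mode='nearest')

def upscale(lr, factor=2):
    """Interpolate back to the HR grid (cubic spline via scipy.ndimage.zoom)."""
    return zoom(lr, factor, order=3)

hr = np.random.default_rng(0).random((32, 32))
lr = average_subsample(hr)            # simulated low-resolution observation
sr_plain = upscale(lr)                # interpolation only
sr_mlf = upscale(modified_laplacian_filter(lr))   # pre-filtered, then interpolated
```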
In-flight edge response measurements for high-spatial-resolution remote sensing systems
NASA Astrophysics Data System (ADS)
Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie
2002-09-01
In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.
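The core measurement, differentiating an oversampled edge response to obtain a line spread function and taking its full width at half maximum, can be sketched as follows (synthetic Gaussian-blurred edge; the blur level is an assumption for illustration):

```python
import numpy as np
from math import erf

def lsf_fwhm(edge_response, dx=1.0):
    """Differentiate an oversampled edge response (ESF) to get the line
    spread function (LSF), then measure its full width at half maximum."""
    lsf = np.abs(np.gradient(edge_response, dx))
    peak = lsf.max()
    above = np.where(lsf >= peak / 2.0)[0]
    # FWHM as the extent of samples at or above half maximum
    return (above[-1] - above[0]) * dx

# Synthetic edge blurred by a Gaussian system PSF (sigma is an assumed value)
sigma = 1.5
x = np.linspace(-10, 10, 2001)
esf = np.array([0.5 * (1 + erf(v / (sigma * np.sqrt(2)))) for v in x])
fwhm = lsf_fwhm(esf, dx=x[1] - x[0])
# For a Gaussian LSF, FWHM = 2*sqrt(2*ln 2)*sigma, here about 3.53
```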
TU-AB-204-01: Device Approval Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delfino, J.
The responsibilities of the Food and Drug Administration (FDA) have increased since the inception of the Food and Drugs Act in 1906. Medical devices first came under comprehensive regulation with the passage of the 1938 Food, Drug, and Cosmetic Act. In 1971 FDA also took on the responsibility for consumer protection against unnecessary exposure to radiation-emitting devices for home and occupational use. However, it was not until 1976, under the Medical Device Regulation Act, that the FDA was responsible for the safety and effectiveness of medical devices. This session will be presented by the Division of Radiological Health (DRH) and the Division of Imaging, Diagnostics, and Software Reliability (DIDSR) from the Center for Devices and Radiological Health (CDRH) at the FDA. The symposium will discuss how we protect and promote public health with a focus on medical physics applications organized into four areas: pre-market device review, post-market surveillance, device compliance, current regulatory research efforts and partnerships with other organizations. The pre-market session will summarize the pathways FDA uses to regulate the investigational use and commercialization of diagnostic imaging and radiation therapy medical devices in the US, highlighting resources available to assist investigators and manufacturers. The post-market session will explain the post-market surveillance and compliance activities FDA performs to monitor the safety and effectiveness of devices on the market. The third session will describe research efforts that support the regulatory mission of the Agency. An overview of our regulatory research portfolio to advance our understanding of medical physics and imaging technologies and approaches to their evaluation will be discussed.
Lastly, mechanisms that FDA uses to seek public input and promote collaborations with professional, government, and international organizations, such as AAPM, International Electrotechnical Commission (IEC), Image Gently, and the Quantitative Imaging Biomarkers Alliance (QIBA) among others, to fulfill FDA’s mission will be discussed. Learning Objectives: Understand FDA’s pre-market and post-market review processes for medical devices Understand FDA’s current regulatory research activities in the areas of medical physics and imaging products Understand how being involved with AAPM and other organizations can also help to promote innovative, safe and effective medical devices J. Delfino, nothing to disclose.
NASA Astrophysics Data System (ADS)
Paulsson, Adisa; Xing, Kezhao; Fosshaug, Hans; Lundvall, Axel; Bjoernberg, Charles; Karlsson, Johan
2005-05-01
A continuing improvement in the resist process is a necessity for high-end photomask fabrication. In advanced chemically amplified resist systems, lithographic performance is strongly influenced by the diffusion of acid and acid quencher (i.e. bases). Besides the resist properties, e.g. the size and volatility of the photoacid, the process conditions play important roles in diffusion control. Understanding and managing these properties influences lithographic characteristics on the photomask such as CD uniformity, CD and pitch linearity, resolution, substrate contamination, clear-dark bias and iso-dense bias. In this paper we have investigated effects on the lithographic characteristics with respect to post-exposure bake conditions, when using the chemically amplified resist FEP-171. We used commercially available mask blanks from the Hoya Mask Blank Division with NTAR7 chrome and an optimized resist thickness of 3200 Å for the 248 nm laser tool. The photomasks were exposed on the optical DUV (248 nm) Sigma7300 pattern generator. Additionally, we investigated the image stability between exposure and post-exposure bake. Unlike in wafer fabrication, photomask writing requires several hours, making the resist susceptible to image blur and acid latent image degradation.
Fission gas bubble identification using MATLAB's image processing toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.
Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. - Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
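The Sauvola adaptive threshold mentioned above has a standard closed form, T = m·(1 + k·(s/R − 1)), where m and s are the local mean and standard deviation inside a window. A minimal Python sketch on a toy micrograph follows; the window size, k and R are assumed values, not those of the study.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sauvola_threshold(img, window=15, k=0.2, R=128.0):
    """Sauvola local threshold: T = m * (1 + k*(s/R - 1)) with local
    mean m and standard deviation s computed over the window."""
    img = img.astype(float)
    mean = uniform_filter(img, window)
    sq_mean = uniform_filter(img * img, window)
    std = np.sqrt(np.clip(sq_mean - mean * mean, 0, None))
    T = mean * (1.0 + k * (std / R - 1.0))
    return img > T   # True = bright (matrix), False = dark (void)

# Toy "micrograph": bright matrix with a brightness gradient and noise
rng = np.random.default_rng(1)
img = 180 + np.linspace(0, 40, 64)[None, :] + rng.normal(0, 3, (64, 64))
img[20:30, 20:30] = 40                      # a fission-gas-void-like dark region
mask = ~sauvola_threshold(img, window=15)   # True where a void is detected
porosity = mask.mean()                      # average porosity of the toy image
```

Because the threshold follows the local mean, the void is segmented correctly despite the background gradient.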
A three-image algorithm for hard x-ray grating interferometry.
Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia
2013-08-12
A three-image method to extract absorption, refraction and scattering information in hard x-ray grating interferometry is presented. The method comprises a post-processing approach alternative to the conventional phase stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with phase stepping. The method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction enhanced imaging technique.
Complications of rotator cuff surgery—the role of post-operative imaging in patient care
Thakkar, R S; Thakkar, S C; Srikumaran, U; Fayad, L M
2014-01-01
When pain or disability occurs after rotator cuff surgery, post-operative imaging is frequently performed. Post-operative complications and expected post-operative imaging findings in the shoulder are presented, with a focus on MRI, MR arthrography (MRA) and CT arthrography. MR and CT techniques are available to reduce image degradation secondary to surgical distortions of native anatomy and implant-related artefacts and to define complications after rotator cuff surgery. A useful approach to imaging the shoulder after surgery is standard radiography, followed by MRI/MRA for patients with a low “metal presence” and CT for patients who have a higher metal presence. However, for the assessment of patients who have undergone surgery for rotator cuff injuries, imaging findings should always be correlated with the clinical presentation, because post-operative imaging abnormalities do not necessarily correlate with symptoms. PMID:24734935
Low-count PET image restoration using sparse representation
NASA Astrophysics Data System (ADS)
Li, Tao; Jiang, Changhui; Gao, Juan; Yang, Yongfeng; Liang, Dong; Liu, Xin; Zheng, Hairong; Hu, Zhanli
2018-04-01
In the field of positron emission tomography (PET), reconstructed images are often blurry and contain noise. These problems are primarily caused by the low resolution of projection data. Solving this problem by improving hardware is an expensive solution, and therefore, we attempted to develop a solution based on optimizing several related algorithms in both the reconstruction and image post-processing domains. As sparse technology is widely used, sparse prediction is increasingly applied to solve this problem. In this paper, we propose a new sparse method to process low-resolution PET images. Two dictionaries (D1 for low-resolution PET images and D2 for high-resolution PET images) are learned from a group of real PET image data sets. Of these two dictionaries, D1 is used to obtain a sparse representation for each patch of the input PET image. Then, a high-resolution PET image is generated from this sparse representation using D2. Experimental results indicate that the proposed method exhibits a stable and superior ability to enhance image resolution and recover image details. Quantitatively, this method achieves better performance than traditional methods. This proposed strategy is a new and efficient approach for improving the quality of PET images.
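The coupled-dictionary idea, encode the LR patch over D1 and reuse the sparse code with D2, can be sketched with a greedy orthogonal-matching-pursuit encoder. The dictionaries here are random toy stand-ins, not the learned PET dictionaries.

```python
import numpy as np

def omp(D, y, n_nonzero=3):
    """Greedy orthogonal matching pursuit: pick the atom of D most
    correlated with the residual, refit coefficients by least squares."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(2)
# Coupled dictionaries sharing sparse codes (toy stand-ins for learned D1/D2)
D1 = rng.normal(size=(16, 40)); D1 /= np.linalg.norm(D1, axis=0)   # LR patches
D2 = rng.normal(size=(64, 40)); D2 /= np.linalg.norm(D2, axis=0)   # HR patches

true_code = np.zeros(40); true_code[[3, 17, 29]] = [1.0, -0.7, 0.5]
lr_patch = D1 @ true_code                  # observed low-resolution patch
code = omp(D1, lr_patch, n_nonzero=3)      # sparse code from D1
hr_patch = D2 @ code                       # high-resolution estimate via D2
```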
Enhanced FIB-SEM systems for large-volume 3D imaging
Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F
2017-01-01
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology. DOI: http://dx.doi.org/10.7554/eLife.25916.001 PMID:28500755
Post-processing images from the WFIRST-AFTA coronagraph testbed
NASA Astrophysics Data System (ADS)
Zimmerman, Neil T.; Ygouf, Marie; Pueyo, Laurent; Soummer, Remi; Perrin, Marshall D.; Mennesson, Bertrand; Cady, Eric; Mejia Prada, Camilo
2016-01-01
The concept for the exoplanet imaging instrument on WFIRST-AFTA relies on the development of mission-specific data processing tools to reduce the speckle noise floor. No instruments have yet functioned on the sky in the planet-to-star contrast regime of the proposed coronagraph (1E-8). Therefore, starlight subtraction algorithms must be tested on a combination of simulated and laboratory data sets to give confidence that the scientific goals can be reached. The High Contrast Imaging Testbed (HCIT) at the Jet Propulsion Laboratory has carried out several technology demonstrations for the instrument concept, demonstrating 1E-8 raw (absolute) contrast. Here, we have applied a mock reference differential imaging strategy to HCIT data sets, treating one subset of images as a reference star observation and another subset as a science target observation. We show that algorithms like KLIP (Karhunen-Loève Image Projection), by suppressing residual speckles, enable the recovery of exoplanet signals at contrasts of order 2E-9.
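A minimal reference-differential KLIP sketch: the KL modes are the principal components of the mean-subtracted reference images, and the science frame's projection onto them is subtracted. Image sizes, mode count and noise levels below are illustrative assumptions, not testbed values.

```python
import numpy as np

def klip_subtract(science, references, n_modes=5):
    """KLIP: build Karhunen-Loeve modes from mean-subtracted reference
    images, project the science frame onto them, subtract the projection."""
    R = references.reshape(len(references), -1).astype(float)
    mean = R.mean(axis=0)
    R = R - mean
    s = science.ravel().astype(float) - mean
    # KL modes = principal components of the reference set (via SVD)
    _, _, Vt = np.linalg.svd(R, full_matrices=False)
    Z = Vt[:n_modes]                  # (n_modes, n_pixels), orthonormal rows
    model = Z.T @ (Z @ s)             # speckle model in the KL basis
    return (s - model).reshape(science.shape)

rng = np.random.default_rng(3)
npix = 32
# Quasi-static speckle pattern shared by reference and science frames
speckles = rng.normal(size=(npix, npix))
refs = np.stack([speckles + 0.05 * rng.normal(size=(npix, npix)) for _ in range(10)])
planet = np.zeros((npix, npix)); planet[20, 8] = 1.0   # faint injected source
science = speckles + planet + 0.05 * rng.normal(size=(npix, npix))

residual = klip_subtract(science, refs, n_modes=5)
```

After subtraction, the injected source dominates the residual frame while the shared speckle pattern is suppressed.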
A comparison between different coronagraphic data reduction techniques
NASA Astrophysics Data System (ADS)
Carolo, E.; Vassallo, D.; Farinato, J.; Bergomi, M.; Bonavita, M.; Carlotti, A.; D'Orazi, V.; Greggio, D.; Magrin, D.; Mesa, D.; Pinna, E.; Puglisi, A.; Stangalini, M.; Verinaud, C.; Viotto, V.
2016-07-01
A robust post-processing technique is mandatory for analysing coronagraphic high-contrast imaging data. Angular Differential Imaging (ADI) and Principal Component Analysis (PCA) are the most used approaches to suppress the quasi-static structure present in the Point Spread Function (PSF), revealing planets at different separations from the host star. In this work, we present a comparison between ADI and PCA applied to the System of coronagraphy with High order Adaptive optics from R to K band (SHARK-NIR), which will be implemented at the Large Binocular Telescope (LBT). The comparison has been carried out using as a starting point the simulated wavefront residuals of the LBT Adaptive Optics (AO) system, in different observing conditions. Accurate tests for tuning the post-processing parameters to obtain the best performance from each technique were performed in various seeing conditions (0.4"-1") for star magnitudes ranging from 8 to 12, with particular care in finding the best compromise between quasi-static speckle subtraction and planet detection.
NASA Astrophysics Data System (ADS)
Zhang, Yanjun; Jiang, Li; Wang, Chunru
2015-07-01
A porous Sn@C nanocomposite was prepared via a facile hydrothermal method combined with a simple post-calcination process, using stannous octoate as the Sn source and glucose as the C source. The as-prepared Sn@C nanocomposite exhibited excellent electrochemical behavior with a high reversible capacity, long cycle life and good rate capability when used as an anode material for lithium ion batteries. Electronic supplementary information (ESI) available: Detailed experimental procedure and additional characterization, including a Raman spectrum, TGA curve, N2 adsorption-desorption isotherm, TEM images and SEM images. See DOI: 10.1039/c5nr03093e
Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution
NASA Astrophysics Data System (ADS)
Tian, Y.; Rao, C. H.; Wei, K.
2008-10-01
Adaptive optics can only partially compensate images blurred by atmospheric turbulence, due to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by a frame selection technique, are deconvolved with no a priori knowledge except a positivity constraint. The method has been applied to the restoration of images of celestial bodies observed by the 1.2 m telescope equipped with a 61-element adaptive optical system at Yunnan Observatory. The results showed that the method can effectively improve images partially corrected by adaptive optics.
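The frame-selection step can be sketched with a simple gradient-energy sharpness metric. The paper does not specify its selection criterion, so the metric and the kept fraction below are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sharpness(frame):
    """Image-gradient energy: a simple frame-quality metric
    (an assumed stand-in for the paper's selection criterion)."""
    gy, gx = np.gradient(frame.astype(float))
    return float((gx**2 + gy**2).mean())

def select_frames(frames, keep=0.3):
    """Keep the sharpest fraction of short-exposure frames for deconvolution."""
    scores = [sharpness(f) for f in frames]
    order = np.argsort(scores)[::-1]          # sharpest first
    n = max(1, int(round(keep * len(frames))))
    return [frames[i] for i in order[:n]]

# Simulate frames with varying residual blur after partial AO correction
truth = np.zeros((16, 16)); truth[8, 8] = 1.0
frames = [gaussian_filter(truth, sigma=s) for s in [0.8, 2.5, 1.0, 3.0, 0.9, 2.8]]
best = select_frames(frames, keep=0.3)        # only the least-blurred frames
```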
Evaluation of Spontaneous Spinal Cerebrospinal Fluid Leaks Disease by Computerized Image Processing.
Yıldırım, Mustafa S; Kara, Sadık; Albayram, Mehmet S; Okkesim, Şükrü
2016-05-17
Spontaneous Spinal Cerebrospinal Fluid Leaks (SSCFL) is a disease based on tears in the dura mater. Due to widespread symptoms and the low frequency of the disease, diagnosis is problematic. Diagnostic lumbar puncture is commonly used for diagnosing SSCFL, though it is invasive and may cause pain, inflammation or new leakages. T2-weighted MR imaging is also used for diagnosis; however, the literature on T2-weighted MRI states that findings for the diagnosis of SSCFL can be erroneous when differentiating diseased from control subjects. Another technique for diagnosis is CT-myelography, but this has been suggested to be less successful than T2-weighted MRI, and it needs an initial lumbar puncture. This study aimed to develop an objective, computerized numerical analysis method using noninvasive routine Magnetic Resonance Images that can be used in the evaluation and diagnosis of SSCFL disease. Brain boundaries were automatically detected using methods of mathematical morphology, and a distance transform was employed. According to normalized distances, the average densities of certain sites were proportioned and a numerical criterion related to cerebrospinal fluid distribution was calculated. The developed method was able to differentiate significantly between 14 patients and 14 control subjects (p = 0.0088, d = 0.958). Also, pre- and post-treatment MRI of four patients was obtained and analyzed; the results differed statistically (p = 0.0320, d = 0.853). An original, noninvasive and objective diagnostic test based on computerized image processing has been developed for the evaluation of SSCFL. To our knowledge, this is the first computerized image processing method for evaluation of the disease. Discrimination between patients and controls shows the validity of the method. Also, the post-treatment changes observed in four patients support this finding.
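The distance-transform step, normalizing each pixel's distance from the brain boundary and proportioning the mean densities of deep versus peripheral sites, can be sketched on a toy mask. The site thresholds below are assumptions, not those of the study.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy "brain mask": a disk, with intensities standing in for MRI densities
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 28 ** 2
img = np.where(mask, 100.0, 0.0)
img[(yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2] = 160.0   # brighter core

# Distance from the brain boundary, normalized to [0, 1]
dist = distance_transform_edt(mask)
ndist = dist / dist.max()

# Ratio of mean intensity at a deep site vs. a peripheral site --
# a stand-in for the paper's CSF-distribution criterion (thresholds assumed)
deep = img[(ndist > 0.66) & mask].mean()
peripheral = img[(ndist > 0.05) & (ndist < 0.33) & mask].mean()
criterion = deep / peripheral
```

On this toy image the deep site lies entirely in the bright core, so the criterion is 160/100 = 1.6.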
Echegaray, Sebastian; Bakr, Shaimaa; Rubin, Daniel L; Napel, Sandy
2017-10-06
The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to Github, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.
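Object-level parallelization of the kind benchmarked above can be sketched with a worker pool, since each tumor is processed independently. This uses a thread pool for simplicity, and the feature function is a hypothetical stand-in, not the engine's API.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_features(tumor_id):
    """Hypothetical stand-in for one object's (tumor's) feature computation;
    the real engine runs full 3D radiomics feature code per segmented tumor."""
    return {"id": tumor_id, "volume": float(tumor_id) * 3.0}

tumor_ids = list(range(8))

# Object-level parallelization: tumors are independent, so they can be
# dispatched to a worker pool (four workers, mirroring the 4-core benchmark)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(compute_features, tumor_ids))
```

`map` preserves input order, so results line up with the tumor list regardless of completion order.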
Healthcare provider and patient perspectives on diagnostic imaging investigations.
Makanjee, Chandra R; Bergh, Anne-Marie; Hoffmann, Willem A
2015-05-20
Much has been written about the patient-centred approach in doctor-patient consultations. Little is known about interactions and communication processes regarding healthcare providers' and patients' perspectives on expectations and experiences of diagnostic imaging investigations within the medical encounter. Patients journey through the health system from the point of referral to the imaging investigation itself and then to the post-imaging consultation. AIM AND SETTING: To explore healthcare provider and patient perspectives on interaction and communication processes during diagnostic imaging investigations as part of their clinical journey through a healthcare complex. A qualitative study was conducted, with two phases of data collection. Twenty-four patients were conveniently selected at a public district hospital complex and were followed throughout their journey in the hospital system, from admission to discharge. The second phase entailed focus group interviews conducted with providers in the district hospital and adjacent academic hospital (medical officers and family physicians, nurses, radiographers, radiology consultants and registrars). Two main themes guided our analysis: (1) provider perspectives; and (2) patient dispositions and reactions. Golden threads that cut across these themes are interactions and communication processes in the context of expectations, experiences of the imaging investigations and the outcomes thereof. Insights from this study provide a better understanding of the complexity of the processes and interactions between providers and patients during the imaging investigations conducted as part of their clinical pathway. The interactions and communication processes are provider-patient centred when a referral for a diagnostic imaging investigation is included.
MO-FG-204-02: Reference Image Selection in the Presence of Multiple Scan Realizations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruan, D; Dou, T; Thomas, D
Purpose: Fusing information from multiple correlated realizations (e.g., 4DCT) can improve image quality. This process often involves ill-conditioned and asymmetric nonlinear registration, and the proper selection of a reference image is important. This work proposes to examine post-registration variation indirectly for such selection, and develops further insights to reduce the number of cross-registrations needed. Methods: We consider each individual scan as a noisy point in the vicinity of an image manifold, related by motion. Nonrigid registration “transports” a scan along the manifold to the reference neighborhood, and the residual is a surrogate for local variation. To test this conjecture, 10 thoracic scans from the same session were reconstructed from a recently developed low-dose helical 4DCT protocol. Pairwise registration was repeated bi-directionally (81 times) and fusion was performed with each candidate reference. The fused image quality was assessed with SNR and CNR. Registration residuals in SSD, harmonic energy, and deformation Jacobian behavior were examined. The semi-symmetry is further utilized to reduce the number of registrations needed. Results: The comparison of image quality between single and fused images identified the reduction of local intensity variance as the major contributor to image quality, boosting SNR and CNR five- to seven-fold. This observation further suggests the criticality of good agreement across post-registration images. The triangle inequality on the SSD metric provides a proficient upper bound and surrogate for such disagreement. Empirical observation also confirms that fused images with high residual SSD have lower SNR and CNR than the ones with low or intermediate SSDs. Registration SSD is structurally close enough to symmetry for reduced computation. Conclusion: Registration residual is shown to be a good predictor of post-fusion image quality and can be used to identify good reference centers.
Semi-symmetry of the registration residual further reduces computation cost. Supported in part by NIH R01 CA096679.
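The reference-selection idea, picking the scan with the smallest total residual SSD against all other scans, can be sketched as follows. Here images are compared directly, skipping the nonrigid registration step, and SSD's symmetry is used so each pair is computed only once (cf. the semi-symmetry argument above).

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two (already aligned) images."""
    d = a.astype(float) - b.astype(float)
    return float((d * d).sum())

def pick_reference(images):
    """Choose the scan with the smallest total SSD to all other scans --
    a surrogate for the post-registration residual criterion."""
    n = len(images)
    totals = np.zeros(n)
    for i in range(n):
        for j in range(i + 1, n):
            d = ssd(images[i], images[j])
            totals[i] += d   # SSD is symmetric: one computation serves both
            totals[j] += d
    return int(np.argmin(totals))

rng = np.random.default_rng(5)
base = rng.normal(size=(16, 16))
# Ten noisy "phases"; phase 7 is made an outlier
scans = [base + 0.1 * rng.normal(size=(16, 16)) for _ in range(10)]
scans[7] = base + 1.0 * rng.normal(size=(16, 16))
ref = pick_reference(scans)   # the outlier should never be chosen
```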
Inselect: Automating the Digitization of Natural History Collections
Hudson, Lawrence N.; Blagoderov, Vladimir; Heaton, Alice; Holtzhausen, Pieter; Livermore, Laurence; Price, Benjamin W.; van der Walt, Stéfan; Smith, Vincent S.
2015-01-01
The world’s natural history collections constitute an enormous evidence base for scientific research on the natural world. To facilitate these studies and improve access to collections, many organisations are embarking on major programmes of digitization. This requires automated approaches to mass-digitization that support rapid imaging of specimens and associated data capture, in order to process the tens of millions of specimens common to most natural history collections. In this paper we present Inselect—a modular, easy-to-use, cross-platform suite of open-source software tools that supports the semi-automated processing of specimen images generated by natural history digitization programmes. The software is made up of a Windows, Mac OS X, and Linux desktop application, together with command-line tools that are designed for unattended operation on batches of images. Blending image visualisation algorithms that automatically recognise specimens together with workflows to support post-processing tasks such as barcode reading, label transcription and metadata capture, Inselect fills a critical gap to increase the rate of specimen digitization. PMID:26599208
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina
2012-03-01
Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh M.; Meir, Rinat; Zalevsky, Zeev
2017-02-01
Optical sectioning microscopy can provide highly detailed three-dimensional (3D) images of biological samples. However, it requires the acquisition of many images per volume, is therefore time consuming, and may not be suitable for live-cell 3D imaging. We propose the use of the modified Gerchberg-Saxton phase retrieval algorithm to enable full 3D imaging of a gold nanoparticle-tagged sample using only two images. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because we apply the phase retrieval to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. The proposed concept is then further extended to the tracking of single fluorescent particles within a three-dimensional (3D) cellular environment, based on image processing algorithms that significantly increase the localization accuracy of the 3D point spread function with respect to regular Gaussian fitting. All proposed concepts are validated both on simulated data and experimentally.
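A minimal sketch of the classic two-plane Gerchberg-Saxton iteration (the paper uses a modified variant; this is the textbook form, shown for illustration on synthetic data): amplitudes measured in the object and Fourier planes are enforced alternately while the running phase estimate is carried between them.

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_four, n_iter=100):
    """Textbook Gerchberg-Saxton: alternately enforce the measured
    amplitude in the object and Fourier planes, keeping the running phase."""
    field = amp_obj.astype(complex)  # start from zero phase
    for _ in range(n_iter):
        F = np.fft.fft2(field)
        F = amp_four * np.exp(1j * np.angle(F))         # Fourier-plane constraint
        field = np.fft.ifft2(F)
        field = amp_obj * np.exp(1j * np.angle(field))  # object-plane constraint
    return field

# Synthetic test object: known amplitude, smooth unknown phase
amp = np.ones((32, 32))
phase = np.outer(np.linspace(0, 1, 32), np.linspace(0, 2, 32))
amp_four = np.abs(np.fft.fft2(amp * np.exp(1j * phase)))

def fourier_err(f):
    return np.linalg.norm(np.abs(np.fft.fft2(f)) - amp_four)

err0 = fourier_err(amp.astype(complex))
rec = gerchberg_saxton(amp, amp_four)
assert fourier_err(rec) < err0  # error-reduction property of the iteration
```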
In utero imaging of mouse embryonic development with optical coherence tomography
NASA Astrophysics Data System (ADS)
Syed, Saba H.; Dickinson, Mary E.; Larin, Kirill V.; Larina, Irina V.
2011-03-01
Studying the progression of congenital diseases in animal models can greatly benefit from live embryonic imaging. Mice have long served as a model of mammalian embryonic developmental processes; however, due to the intra-uterine nature of mammalian development, live imaging is challenging. In this report we present results on live mouse embryonic imaging in utero with optical coherence tomography (OCT). Embryos from 12.5 through 17.5 days post-coitus (dpc) were studied through the uterine wall. In longitudinal studies, the same embryos were imaged at developmental stages 13.5, 15.5, and 17.5 dpc. This study suggests that OCT can serve as a powerful tool for live mouse embryo imaging. Potentially, this technique can contribute to our understanding of developmental abnormalities associated with mutations and toxic drugs.
An automatic panoramic image reconstruction scheme from dental computed tomography images
Papakosta, Thekla K; Savva, Antonis D; Economopoulos, Theodore L; Gröhndal, H G
2017-01-01
Objectives: Panoramic images of the jaws are extensively used for dental examinations and/or surgical planning because they provide a general overview of the patient's maxillary and mandibular regions. Panoramic images are two-dimensional projections of three-dimensional (3D) objects. Therefore, it should be possible to reconstruct them from 3D radiographic representations of the jaws produced by CBCT scanning, obviating the need for additional exposure to X-rays should there be a need for panoramic views. The aim of this article is to present an automated method for reconstructing panoramic dental images from CBCT data. Methods: The proposed methodology consists of a series of sequential processing stages for detecting a fitting dental arch, which is used for projecting the 3D information of the CBCT data onto the two-dimensional plane of the panoramic image. The detection is based on a template polynomial constructed from a training data set. Results: A total of 42 CBCT data sets of real clinical pre-operative and post-operative representations from 21 patients were used. Eight data sets were used for training the system and the rest for testing. Conclusions: The proposed methodology was successfully applied to the CBCT data sets, producing corresponding panoramic images suitable for pre-operative and post-operative examination of the patients' maxillary and mandibular regions. PMID:28112548
OCT image segmentation of the prostate nerves
NASA Astrophysics Data System (ADS)
Chitchian, Shahab; Weldon, Thomas P.; Fried, Nathaniel M.
2009-08-01
The cavernous nerves course along the surface of the prostate and are responsible for erectile function. Improvements in identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery may improve nerve preservation and postoperative sexual potency. In this study, 2-D OCT images of the rat prostate were segmented to differentiate the cavernous nerves from the prostate gland. Three image features were employed: Gabor filter, Daubechies wavelet, and Laws filter. The features were segmented using a nearest-neighbor classifier. N-ary morphological post-processing was used to remove small voids. The cavernous nerves were differentiated from the prostate gland with a segmentation error rate of only 0.058 ± 0.019.
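The classification step can be sketched as follows. This is a toy stand-in: local standard deviation replaces the Gabor/wavelet/Laws features of the study, the data are synthetic, and the window size and training-set size are arbitrary choices.

```python
import numpy as np

def nn_classify(train_feats, train_labels, feats):
    """Nearest-neighbor classifier: each pixel takes the label of the
    closest training feature vector (Euclidean distance)."""
    d = np.linalg.norm(feats[:, None, :] - train_feats[None, :, :], axis=2)
    return train_labels[d.argmin(axis=1)]

rng = np.random.default_rng(2)
img = np.hstack([rng.normal(0, 0.05, (16, 16)),   # low-texture region
                 rng.normal(0, 1.00, (16, 16))])  # high-texture region

# One texture feature per pixel: local 3x3 standard deviation
pad = np.pad(img, 1, mode="edge")
feats = np.array([[pad[i:i+3, j:j+3].std() for j in range(32)]
                  for i in range(16)]).reshape(-1, 1)

truth = np.tile(np.arange(32) >= 16, (16, 1)).astype(int).ravel()
train = rng.choice(feats.shape[0], 40, replace=False)  # labeled training pixels
pred = nn_classify(feats[train], truth[train], feats)
error_rate = float(np.mean(pred != truth))  # errors concentrate at the boundary
assert error_rate < 0.25
```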
Rahman, Zia Ur; Sethi, Pooja; Murtaza, Ghulam; Virk, Hafeez Ul Hassan; Rai, Aitzaz; Mahmod, Masliza; Schoondyke, Jeffrey; Albalbissi, Kais
2017-01-01
Cardiovascular disease is a leading cause of morbidity and mortality globally. Early diagnostic markers are gaining popularity for improving patient care and disease outcomes. There is increasing interest in noninvasive cardiac imaging biomarkers to diagnose subclinical cardiac disease. Feature tracking cardiac magnetic resonance imaging is a novel post-processing technique that is increasingly being employed to assess global and regional myocardial function. This technique has numerous applications in structural and functional diagnostics. It has been validated in multiple studies, although there is still a long way to go for it to become the routine standard of care. PMID:28515849
An embedded processor for real-time atmospheric compensation
NASA Astrophysics Data System (ADS)
Bodnar, Michael R.; Curt, Petersen F.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.
2009-05-01
Imaging over long distances is crucial to a number of defense and security applications, such as homeland security and launch tracking. However, the image quality obtained from current long-range optical systems can be severely degraded by the turbulent atmosphere in the path between the region under observation and the imager. While this obscured image information can be recovered using post-processing techniques, the computational complexity of such approaches has prohibited deployment in real-time scenarios. To overcome this limitation, we have coupled a state-of-the-art atmospheric compensation algorithm, the average-bispectrum speckle method, with a powerful FPGA-based embedded processing board. The end result is a lightweight, low-power image processing system that improves the quality of long-range imagery in real-time, and uses modular video I/O to provide a flexible interface to most common digital and analog video transport methods. By leveraging the custom, reconfigurable nature of the FPGA, a 20x speed increase over a modern desktop PC was achieved in a form-factor that is compact, low-power, and field-deployable.
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Additive manufacturing of reflective optics: evaluating finishing methods
NASA Astrophysics Data System (ADS)
Leuteritz, G.; Lachmayer, R.
2018-02-01
Individually shaped light distributions are becoming increasingly important in lighting technologies, and thus the importance of additively manufactured reflectors increases significantly; the field of applications ranges from automotive lighting to medical imaging. However, the surfaces of additively manufactured reflectors suffer from insufficient optical properties even when manufactured using optimized process parameters for the Selective Laser Melting (SLM) process. Therefore, post-process treatments of reflectors are necessary in order to further enhance their optical quality. This work concentrates on the effectiveness of post-process procedures for reflective optics. Based on already optimized aluminum reflectors manufactured with an SLM machine, the parts are machined differently after the SLM process. Selected finishing methods such as laser polishing, sputtering, and sand blasting are applied and their effects quantified and compared. The post-process procedures are investigated for their impact on surface roughness and reflectance as well as geometrical precision. For each finishing method a demonstrator is created and compared to a fully milled sample and to the other demonstrators. Ultimately, guidelines are developed to determine the optimal treatment of additively manufactured reflectors regarding their optical and geometrical properties. Simulations of the light distributions are validated with the developed demonstrators.
Using artificial intelligence to automate remittance processing.
Adams, W T; Snow, G M; Helmick, P M
1998-06-01
The consolidated business office of the Allegheny Health Education Research Foundation (AHERF), a large integrated healthcare system based in Pittsburgh, Pennsylvania, sought to improve its cash-related business office activities by implementing an automated remittance processing system that uses artificial intelligence. The goal was to create a completely automated system whereby all monies it processed would be tracked, automatically posted, analyzed, monitored, controlled, and reconciled through a central database. Using a phased approach, the automated payment system has become the central repository for all of the remittances for seven of the hospitals in the AHERF system and has allowed for the complete integration of these hospitals' existing billing systems, document imaging system, and intranet, as well as the new automated payment posting, and electronic cash tracking and reconciling systems. For such new technology, which is designed to bring about major change, factors contributing to the project's success were adequate planning, clearly articulated objectives, marketing, end-user acceptance, and post-implementation plan revision.
NASA Astrophysics Data System (ADS)
White, Joshua S.; Matthews, Jeanna N.; Stacy, John L.
2012-06-01
Phishing website analysis is largely still a time-consuming manual process of discovering potential phishing sites, verifying whether suspicious sites truly are malicious spoofs, and, if so, distributing their URLs to the appropriate blacklisting services. Attackers increasingly use sophisticated systems for bringing phishing sites up and down rapidly at new locations, making automated response essential. In this paper, we present a method for rapid, automated detection and analysis of phishing websites. Our method relies on near-real-time gathering and analysis of URLs posted on social media sites. We fetch the pages pointed to by each URL and characterize each page with a set of easily computed values such as the number of images and links. We also capture a screenshot of the rendered page image, compute a hash of the image, and use the Hamming distance between these image hashes as a form of visual comparison. We provide initial results demonstrating the feasibility of our techniques by comparing legitimate sites to known fraudulent versions from Phishtank.com, by actively introducing a series of minor changes to a phishing toolkit captured in a local honeypot, and by performing some initial analysis on a set of over 2.8 million URLs posted to Twitter over 4 days in August 2011. We discuss the issues encountered during our testing, such as the resolvability and legitimacy of URLs posted on Twitter, the data sets used, the characteristics of the phishing sites we discovered, and our plans for future work.
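The screenshot-hash comparison can be sketched with a simple average hash. This is an assumption for illustration: the paper does not specify which perceptual hash it computes, and real screenshots would be decoded images rather than random arrays.

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Downsample to hash_size x hash_size by block averaging, then
    threshold at the mean: a compact perceptual fingerprint of the image."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(h1, h2):
    """Number of differing hash bits; small distance = visually similar."""
    return int(np.sum(h1 != h2))

rng = np.random.default_rng(3)
page = rng.random((64, 64))                   # rendered page screenshot
clone = page + 0.01 * rng.random((64, 64))    # near-identical spoof
other = rng.random((64, 64))                  # unrelated page

h_page = average_hash(page)
# A close visual copy should be much nearer in Hamming distance
assert hamming(h_page, average_hash(clone)) < hamming(h_page, average_hash(other))
```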
Space Telescope Design to Directly Image the Habitable Zone of Alpha Centauri
NASA Technical Reports Server (NTRS)
Bendek, Eduardo A.; Belikov, Ruslan; Lozi, Julien; Thomas, Sandrine; Males, Jared; Weston, Sasha; McElwain, Michael
2015-01-01
The scientific interest in directly imaging and identifying Earth-like planets within the Habitable Zone (HZ) around nearby stars is driving the design of specialized direct imaging missions such as ACESAT, EXO-C, EXO-S and AFTA-C. The inner edges of the Alpha Cen A and B Habitable Zones are found at exceptionally large angular separations of 0.7" and 0.4", respectively. This enables direct imaging of the system with a 0.3m-class telescope. Contrast ratios on the order of 10(exp 10) are needed to image Earth-brightness planets. Low-resolution (5-band) spectra of all planets may allow establishing the presence and amount of an atmosphere. This star system configuration is optimal for a specialized small and stable space telescope that can achieve high contrast but has limited resolution. This paper describes an innovative instrument design and a mission concept based on a full Silicon Carbide off-axis telescope, which has a Phase Induced Amplitude Apodization coronagraph embedded in the telescope. This architecture maximizes stability and throughput. A Multi-Star Wave Front algorithm is implemented to drive a deformable mirror, simultaneously controlling diffracted light from the on-axis star and the binary companion. The instrument has a Focal Plane Occulter to reject starlight into a high-precision pointing control camera. Finally, we utilize an Orbital Differential Imaging (ODI) post-processing method that takes advantage of a highly stable environment (Earth-trailing orbit) and a continuous sequence of images spanning 2 years to reduce the final noise floor in post-processing to approximately 2e-11 levels, enabling high-confidence detections of Earth-like planets with at least 90% completeness.
NASA Astrophysics Data System (ADS)
Denker, Carsten; Kuckein, Christoph; Verma, Meetu; González Manrique, Sergio J.; Diercke, Andrea; Enke, Harry; Klar, Jochen; Balthasar, Horst; Louis, Rohan E.; Dineva, Ekaterina
2018-05-01
In high-resolution solar physics, the volume and complexity of photometric, spectroscopic, and polarimetric ground-based data significantly increased in the last decade, reaching data acquisition rates of terabytes per hour. This is driven by the desire to capture fast processes on the Sun and the necessity for short exposure times “freezing” the atmospheric seeing, thus enabling ex post facto image restoration. Consequently, large-format and high-cadence detectors are nowadays used in solar observations to facilitate image restoration. Based on our experience during the “early science” phase with the 1.5 m GREGOR solar telescope (2014–2015) and the subsequent transition to routine observations in 2016, we describe data collection and data management tailored toward image restoration and imaging spectroscopy. We outline our approaches regarding data processing, analysis, and archiving for two of GREGOR’s post-focus instruments (see http://gregor.aip.de), i.e., the GREGOR Fabry–Pérot Interferometer (GFPI) and the newly installed High-Resolution Fast Imager (HiFI). The heterogeneous and complex nature of multidimensional data arising from high-resolution solar observations provides an intriguing but also a challenging example for “big data” in astronomy. The big data challenge has two aspects: (1) establishing a workflow for publishing the data for the whole community and beyond and (2) creating a collaborative research environment (CRE), where computationally intense data and postprocessing tools are colocated and collaborative work is enabled for scientists of multiple institutes. This requires either collaboration with a data center or frameworks and databases capable of dealing with huge data sets based on virtual observatory (VO) and other community standards and procedures.
Holland, Grace; Tiggemann, Marika
2017-01-01
Fitspiration is a recent Internet trend designed to motivate people to eat healthily and to exercise. The aim of the study was to investigate disordered eating and exercise in women who post fitspiration on Instagram. Participants were 101 women who post fitspiration images on Instagram and a comparison group of 102 women who post travel images. Both groups completed measures of disordered eating and compulsive exercise. Women who post fitspiration images scored significantly higher on drive for thinness, bulimia, drive for muscularity, and compulsive exercise. Almost a fifth (17.5%) of these women were at risk for diagnosis of a clinical eating disorder, compared to 4.3% of the travel group. Compulsive exercise was related to disordered eating in both groups, but the relationship was significantly stronger for women who post fitspiration images. For some women, posting fitspiration images on Instagram may signify maladaptive eating and exercise behaviors. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2017; 50:76-79).
NASA Astrophysics Data System (ADS)
Bielski, Conrad; Lemoine, Guido; Syryczynski, Jacek
2009-09-01
High Performance Computing (HPC) hardware solutions such as grid computing and General Processing on a Graphics Processing Unit (GPGPU) are now accessible to users with general computing needs. Grid computing infrastructures in the form of computing clusters or blades are becoming common place and GPGPU solutions that leverage the processing power of the video card are quickly being integrated into personal workstations. Our interest in these HPC technologies stems from the need to produce near real-time maps from a combination of pre- and post-event satellite imagery in support of post-disaster management. Faster processing provides a twofold gain in this situation: 1. critical information can be provided faster and 2. more elaborate automated processing can be performed prior to providing the critical information. In our particular case, we test the use of the PANTEX index which is based on analysis of image textural measures extracted using anisotropic, rotation-invariant GLCM statistics. The use of this index, applied in a moving window, has been shown to successfully identify built-up areas in remotely sensed imagery. Built-up index image masks are important input to the structuring of damage assessment interpretation because they help optimise the workload. The performance of computing the PANTEX workflow is compared on two different HPC hardware architectures: (1) a blade server with 4 blades, each having dual quad-core CPUs and (2) a CUDA enabled GPU workstation. The reference platform is a dual CPU-quad core workstation and the PANTEX workflow total computing time is measured. Furthermore, as part of a qualitative evaluation, the differences in setting up and configuring various hardware solutions and the related software coding effort is presented.
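The texture measure at the core of the PANTEX workflow can be sketched in simplified form: GLCM contrast minimized over displacement directions for rotation invariance. This sketch omits the moving-window application, gray-level quantization of real imagery, and the exact anisotropic offset set used by PANTEX.

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Gray-level co-occurrence matrix for one pixel displacement (dx, dy)."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for i in range(max(0, -dy), min(h, h - dy)):
        for j in range(max(0, -dx), min(w, w - dx)):
            P[img[i, j], img[i + dy, j + dx]] += 1
    return P / P.sum()

def pantex_like(img, levels=8):
    """Min-over-directions GLCM contrast: a rotation-invariant texture cue
    in the spirit of the PANTEX built-up index (simplified)."""
    offsets = [(1, 0), (0, 1), (1, 1), (1, -1)]          # (dx, dy) pairs
    idx = np.arange(levels)
    weights = (idx[:, None] - idx[None, :]) ** 2         # contrast weighting
    return min(float((glcm(img, dx, dy, levels) * weights).sum())
               for dx, dy in offsets)

rng = np.random.default_rng(4)
built_up = (rng.random((24, 24)) * 8).astype(int)  # heterogeneous, built-up-like
smooth = np.full((24, 24), 3, dtype=int)           # homogeneous field
assert pantex_like(built_up) > pantex_like(smooth)
```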
Automatic Image Processing Workflow for the Keck/NIRC2 Vortex Coronagraph
NASA Astrophysics Data System (ADS)
Xuan, Wenhao; Cook, Therese; Ngo, Henry; Zawol, Zoe; Ruane, Garreth; Mawet, Dimitri
2018-01-01
The Keck/NIRC2 camera, equipped with the vortex coronagraph, is an instrument targeted at the high contrast imaging of extrasolar planets. To uncover a faint planet signal from the overwhelming starlight, we utilize the Vortex Image Processing (VIP) library, which carries out principal component analysis to model and remove the stellar point spread function. To bridge the gap between data acquisition and data reduction, we implement a workflow that 1) downloads, sorts, and processes data with VIP, 2) stores the analysis products into a database, and 3) displays the reduced images, contrast curves, and auxiliary information on a web interface. Both angular differential imaging and reference star differential imaging are implemented in the analysis module. A real-time version of the workflow runs during observations, allowing observers to make educated decisions about time distribution on different targets, hence optimizing science yield. The post-night version performs a standardized reduction after the observation, building up a valuable database that not only helps uncover new discoveries, but also enables a statistical study of the instrument itself. We present the workflow, and an examination of the contrast performance of the NIRC2 vortex with respect to factors including target star properties and observing conditions.
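The PCA step at the heart of VIP's PSF modeling can be sketched as follows. This is a bare-bones illustration on synthetic data, not VIP's actual API: the stellar PSF is modeled by the leading principal components of the frame cube and the projection is subtracted from every frame, leaving a faint off-axis source in the residuals.

```python
import numpy as np

def pca_psf_subtract(cube, ncomp):
    """Model the quasi-static stellar PSF as the first ncomp principal
    components of the image cube and subtract the projection per frame."""
    flat = cube.reshape(cube.shape[0], -1)
    X = flat - flat.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:ncomp]
    resid = X - X @ basis.T @ basis
    return resid.reshape(cube.shape)

rng = np.random.default_rng(5)
y, x = np.mgrid[:32, :32]
psf = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 20.0)      # star at the center
frames = np.stack([(1 + 0.05 * rng.normal()) * psf for _ in range(20)])
frames[0, 8, 24] += 0.05                                    # faint companion

resid = pca_psf_subtract(frames, ncomp=1)
# The bright star is largely removed; the companion pixel dominates frame 0
peak = np.unravel_index(np.abs(resid[0]).argmax(), resid[0].shape)
assert peak == (8, 24)
```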
Spatial imaging in color and HDR: prometheus unchained
NASA Astrophysics Data System (ADS)
McCann, John J.
2013-03-01
The Human Vision and Electronic Imaging Conferences (HVEI) at the IS&T/SPIE Electronic Imaging meetings have brought together research in the fundamentals of both vision and digital technology. This conference has incorporated many color disciplines that have contributed to the theory and practice of today's imaging: color constancy, models of vision, digital output, high-dynamic-range imaging, and the understanding of perceptual mechanisms. Before digital imaging, silver halide color was a pixel-based mechanism. Color films are closely tied to colorimetry, the science of matching pixels in a black surround. The quanta catch of the sensitized silver salts determines the amount of colored dyes in the final print. The rapid expansion of digital imaging over the past 25 years has eliminated the limitations of using small local regions in forming images. Spatial interactions can now generate images more like vision. Since the 1950's, neurophysiology has shown that post-receptor neural processing is based on spatial interactions. These results reinforced the findings of 19th century experimental psychology. This paper reviews the role of HVEI in color, emphasizing the interaction of research on vision and the new algorithms and processes made possible by electronic imaging.
Reducing the Requirements and Cost of Astronomical Telescopes
NASA Technical Reports Server (NTRS)
Smith, W. Scott; Whitakter, Ann F. (Technical Monitor)
2002-01-01
Limits on astronomical telescope apertures are being rapidly approached. These limits result from logistics, increasing complexity, and finally budgetary constraints. In historical perspective, great strides have been made in the areas of aperture, adaptive optics, wavefront sensors, detectors, stellar interferometers, and image reconstruction. What will be the next advances? Emerging data analysis techniques based on communication theory hold the promise of yielding more information from observational data through significant computer post-processing. This paper explores some of the current telescope limitations and ponders the possibility of increasing the yield of scientific data through the migration of computer post-processing techniques to higher dimensions. Some of these processes hold the promise of reducing the requirements on the basic telescope hardware, making the next generation of instruments more affordable.
NASA Astrophysics Data System (ADS)
Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.
2011-10-01
In the context of augmented-integrity Inertial Navigation Systems (INS), recent technological developments have been focusing on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. The article puts forward a processing chain that can automatically detect linear landmarks in high-resolution SAR images and can be successfully exploited also in the context of augmented-integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio-of-averages (RoA) edge detector detects object boundaries more effectively than the Student's t-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violation of the assumptions which underlie their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm useful for removing the main false alarms, selecting the most probable edge position, reconstructing broken edges, and finally vectorizing them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
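The ratio-of-averages detector the chain builds on can be sketched for a single vertical edge. This is a simplified 1-D column scan on synthetic speckle-like data; the real detector evaluates 2-D windows over several orientations.

```python
import numpy as np

def roa_edge(img, half=4):
    """Ratio-of-averages edge strength at each column split: compare the
    mean intensity of the left and right windows. min(r, 1/r) near 1 means
    no edge; near 0 means a strong edge. The ratio form suits SAR's
    multiplicative speckle, unlike simple differencing."""
    h, w = img.shape
    strength = np.ones(w)
    for j in range(half, w - half):
        left = img[:, j - half:j].mean()
        right = img[:, j:j + half].mean()
        r = left / right
        strength[j] = min(r, 1.0 / r)
    return strength

rng = np.random.default_rng(6)
# Two regions of different mean reflectivity under gamma (speckle-like) noise
img = np.hstack([rng.gamma(4, 1.0, (32, 16)), rng.gamma(4, 3.0, (32, 16))])
s = roa_edge(img)
assert 14 <= s.argmin() <= 18   # edge detected near column 16
```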
A Computer-Aided Type-II Fuzzy Image Processing for Diagnosis of Meniscus Tear.
Zarandi, M H Fazel; Khadangi, A; Karimi, F; Turksen, I B
2016-12-01
Meniscal tear is one of the prevalent knee disorders among young athletes and the aging population, and requires correct diagnosis and surgical intervention, if necessary. Not only the errors that accompany human intervention but also the obstacles of manual meniscal tear detection highlight the need for automatic detection techniques. This paper presents a type-2 fuzzy expert system for meniscal tear diagnosis using PD magnetic resonance images (MRI). The scheme of the proposed type-2 fuzzy image processing model is composed of three distinct modules: pre-processing, segmentation, and classification. A λ-enhancement algorithm is used to perform the pre-processing step. For the segmentation step, Interval Type-2 Fuzzy C-Means (IT2FCM) is first applied to the images, the outputs of which are then employed by Interval Type-2 Possibilistic C-Means (IT2PCM) for post-processing. The second stage concludes with re-estimation of the η value to enhance IT2PCM. Finally, a perceptron neural network with two hidden layers is used for the classification stage. The results of the proposed type-2 expert system have been compared with a well-known segmentation algorithm, confirming the superiority of the proposed system in meniscal tear recognition.
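The clustering core can be sketched with standard (type-1) fuzzy c-means; the paper's interval type-2 variants (IT2FCM/IT2PCM) generalize these memberships with upper and lower fuzzifiers, which this base-algorithm sketch does not attempt.

```python
import numpy as np

def fcm(X, c=2, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-means: soft memberships U (n x c) and centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)          # random membership init
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

# Two well-separated 1-D clusters as a sanity check
X = np.concatenate([np.full((20, 1), 0.0), np.full((20, 1), 10.0)])
X = X + np.random.default_rng(7).normal(0, 0.3, X.shape)
U, centers = fcm(X)
labels = U.argmax(axis=1)
assert labels[0] != labels[20]   # the two clusters get distinct labels
```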
Tell me more: Can a memory test reduce analogue traumatic intrusions?
Krans, Julie; Näring, Gérard; Holmes, Emily A; Becker, Eni S
2009-05-01
Information processing theories of post-traumatic stress disorder (PTSD) state that intrusive images emerge due to a lack of integration of perceptual trauma representations in autobiographical memory. To test this hypothesis experimentally, participants were shown an aversive film to elicit intrusive images. After viewing, they received a recognition test for just one part of the film. The test contained neutrally formulated items to rehearse information from the film. Participants reported intrusive images for the film in an intrusion diary during one week after viewing. In line with expectations, the number of intrusive images decreased only for the part of the film for which the recognition test was given. Furthermore, deliberate cued-recall memory after one week was selectively enhanced for the film part that was in the recognition test a week before. The findings provide new evidence supporting information processing models of PTSD and have potential implications for early interventions after trauma.
Grover, Ginni; DeLuca, Keith; Quirin, Sean; DeLuca, Jennifer; Piestun, Rafael
2012-01-01
Super-resolution imaging with photo-activatable or photo-switchable probes is a promising tool in biological applications to reveal previously unresolved intra-cellular details with visible light. This field benefits from developments in the areas of molecular probes, optical systems, and computational post-processing of the data. The joint design of optics and reconstruction processes using double-helix point spread functions (DH-PSF) provides high resolution three-dimensional (3D) imaging over a long depth-of-field. We demonstrate for the first time a method integrating a Fisher information efficient DH-PSF design, a surface relief optical phase mask, and an optimal 3D localization estimator. 3D super-resolution imaging using photo-switchable dyes reveals the 3D microtubule network in mammalian cells with localization precision approaching the information theoretical limit over a depth of 1.2 µm. PMID:23187521
The image of mathematics held by Irish post-primary students
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-08-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for 'image of mathematics' was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research focused on students studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. Students were aged between 15 and 18 years. A questionnaire was constructed with both quantitative and qualitative aspects. The questionnaire survey was completed by 356 post-primary students. Responses were analysed quantitatively using Statistical Package for the Social Sciences (SPSS) and qualitatively using the constant comparative method of analysis and by reviewing individual responses. Findings provide an insight into Irish post-primary students' images of mathematics and offer a means for constructing a theoretical model of image of mathematics which could be beneficial for future research.
Lamb, Kalina M; Nogg, Kelsey A; Safren, Steven A; Blashill, Aaron J
2018-05-11
Body image disturbance is a common problem reported among sexual minority men living with HIV, and is associated with poor antiretroviral therapy (ART) adherence. Recently, a novel integrated intervention (cognitive behavioral therapy for body image and self-care; CBT-BISC) was developed and pilot tested to simultaneously improve body image and ART adherence in this population. Although CBT-BISC has demonstrated preliminary efficacy in improving ART adherence, the mechanisms of change are unknown. Utilizing data from a two-armed randomized controlled trial (N = 44 sexual minority men living with HIV) comparing CBT-BISC to an enhanced treatment as usual (ETAU) condition, sequential process mediation via latent difference scores was assessed, with changes in body image disturbance entered as the mechanism between treatment condition and changes in ART adherence. Participants assigned to CBT-BISC reported statistically significant reductions in body image disturbance post-intervention, which subsequently predicted changes in ART adherence from post-intervention to long-term follow-up (b = 20.01, SE = 9.11, t = 2.19, p = 0.028). One pathway by which CBT-BISC positively impacts ART adherence is through reductions in body image disturbance. Body image disturbance represents one of likely several mechanisms that prospectively predict ART adherence among sexual minority men living with HIV.
Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.
2008-01-01
Remote sensing techniques have been shown effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. This paper compares the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce a principal component composite image and a set of image-difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image and commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection, while PCA and image differencing show comparable outcomes. Selected PCs can improve detection accuracy by 5 to 10%, but the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
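The Kappa accuracy assessment described above can be sketched in a few lines; the 2-class damage/no-damage error matrix below is illustrative only, not taken from the study.

```python
import numpy as np

def kappa_coefficient(confusion):
    """Cohen's Kappa from a square error (confusion) matrix."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total        # overall agreement
    row = confusion.sum(axis=1) / total           # reference marginals
    col = confusion.sum(axis=0) / total           # classification marginals
    expected = np.sum(row * col)                  # chance agreement
    return (observed - expected) / (1.0 - expected)

# Hypothetical 2-class (damage / no-damage) error matrix
m = [[80, 10],
     [5, 105]]
print(round(kappa_coefficient(m), 3))  # → 0.848
```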
Quantitative imaging methods in osteoporosis.
Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G
2016-12-01
Osteoporosis is characterized by decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD), as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA), are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis, including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS), information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.
Contribution to the benchmark for ternary mixtures: Transient analysis in microgravity conditions.
Ahadi, Amirhossein; Ziad Saghir, M
2015-04-01
We present a transient experimental analysis of the DCMIX1 project conducted onboard the International Space Station for a ternary tetrahydronaphthalene, isobutylbenzene, n-dodecane mixture. Raw images taken in the microgravity environment using the SODI (Selectable Optical Diagnostic) apparatus, which is equipped with a two-wavelength diagnostic, were processed and the results analyzed in this work. We measured the concentration profile of the mixture containing 80% THN, 10% IBB and 10% nC12 during the entire experiment using an advanced image processing technique, and accordingly determined the Soret coefficients using an advanced curve-fitting and post-processing technique. The experiment was repeated five times to ensure repeatability.
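The curve-fitting step can be illustrated with a minimal sketch: fitting an exponential separation transient to a synthetic concentration signal. The amplitude, time constant, noise level and the Soret relation shown in the closing comment are illustrative assumptions, not values or code from the experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

def relaxation(t, dc_inf, tau):
    """Separation transient: concentration change approaching steady state."""
    return dc_inf * (1.0 - np.exp(-t / tau))

# Synthetic transient standing in for a measured THN concentration signal;
# amplitude, time constant and noise level are illustrative only.
t = np.linspace(0.0, 5000.0, 200)                      # seconds
rng = np.random.default_rng(1)
measured = relaxation(t, 0.02, 1200.0) + rng.normal(0.0, 5e-4, t.size)

(dc_inf, tau), _ = curve_fit(relaxation, t, measured, p0=(0.01, 1000.0))
# Given the applied temperature difference dT and mean mass fraction c0,
# the Soret coefficient follows as S_T = -dc_inf / (c0 * (1 - c0) * dT).
```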
NASA Astrophysics Data System (ADS)
Rahman, Md M.; Antani, Sameer K.; Demner-Fushman, Dina; Thoma, George R.
2015-03-01
This paper presents a novel approach to biomedical image retrieval that maps image regions to local concepts and represents images in a weighted entropy-based concept feature space. The term concept refers to perceptually distinguishable visual patches that are identified locally in image regions and can be mapped to a glossary of imaging terms. The visual significance (e.g., visualness) of concepts is measured as the Shannon entropy of pixel values in image patches and is used to refine the feature vector. Moreover, the system can assist the user in interactively selecting a Region-Of-Interest (ROI) and searching for similar image ROIs. Further, a spatial verification step is used as a post-processing step to improve retrieval results based on location information. The hypothesis that such approaches would improve biomedical image retrieval is validated through experiments on a data set of 450 lung CT images extracted from journal articles from four different collections.
Plenoptic Ophthalmoscopy: A Novel Imaging Technique.
Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason
2016-11-01
This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.
Characterization of fission gas bubbles in irradiated U-10Mo fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Casella, Andrew M.; Burkes, Douglas E.; MacFarlan, Paul J.
2017-09-01
Irradiated U-10Mo fuel samples were prepared with traditional mechanical potting and polishing methods within a hot cell. They were then removed and imaged with an SEM located outside the hot cell. The images were then processed with basic imaging techniques from three separate software packages. The results were compared, and a baseline method for characterization of fission gas bubbles in the samples is proposed. It is hoped that, through adoption of or comparison to this baseline method, sample characterization can be somewhat standardized across the field of post-irradiation examination of metal fuels.
NASA Technical Reports Server (NTRS)
2004-01-01
This image of the martian sundial onboard the Mars Exploration Rover Spirit was processed by students in the Red Rover Goes to Mars program to impose hour markings on the face of the dial. The position of the shadow of the sundial's post within the markings indicates the time of day and the season, which in this image is 12:17 p.m. local solar time, late summer. A team of 16 students from 12 countries were selected by the Planetary Society to participate in this program. This image was taken on Mars by the rover's panoramic camera.
[Application of computed tomography (CT) examination for forensic medicine].
Urbanik, Andrzej; Chrzan, Robert
2013-01-01
The aim of the study is to present our own experience in the use of post mortem CT examination for forensic medicine. With the help of a 16-slice CT scanner, 181 corpses were examined. The imaging data obtained during acquisition were later processed with dedicated programs. Analyzed images included axial sections, multiplanar reconstructions and 3D reconstructions. The information gained helped greatly when classical autopsy was performed, by making it more accurate. CT scan images recorded digitally make it possible to evaluate corpses at any time, despite processes of putrefaction or cremation. If possible, CT examination should precede classical autopsy.
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. 
The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
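The two operations described above, reflection subtraction and fiduciary-mark alignment, can be sketched with numpy/scipy; the function names and the threshold-based centroid detection are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def remove_reflection(frames, reflection):
    """Subtract the reflection-only image from every PIV frame (clipped at
    zero), leaving only the seeded particles."""
    return [np.clip(f.astype(float) - reflection, 0.0, None) for f in frames]

def mark_centroid(image, threshold):
    """Centroid (row, col) of the bright fiduciary-mark pixels."""
    return np.array(ndimage.center_of_mass(image > threshold))

def align_to_background(image, background, threshold):
    """Shift an image so its fiduciary-mark centroid coincides with the
    centroid found in the no-flow background image."""
    offset = mark_centroid(background, threshold) - mark_centroid(image, threshold)
    return ndimage.shift(image, offset, order=1)
```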
NASA Astrophysics Data System (ADS)
Yuan, X.; Wang, X.; Dou, A.; Ding, X.
2014-12-01
As the UAV is widely used in earthquake disaster prevention and mitigation, the efficiency of UAV image processing determines the effectiveness of its application to pre-earthquake disaster prevention, post-earthquake emergency rescue, and disaster assessment. Because of bad weather conditions after a destructive earthquake, wide-field cameras capture images with a serious vignetting phenomenon, which can significantly affect the speed and efficiency of image mosaicking, especially the extraction of pre-earthquake building and geological structure information, as well as the accuracy of post-earthquake quantitative damage extraction. In this paper, an improved radial gradient correction method (IRGCM) was developed, based on the radial gradient correction method (RGCM, Y. Zheng, 2008; 2013), to reduce the influence of the random distribution of land surface objects on the images. First, a mean-value image is obtained by averaging the serial UAV images. It is used as the calibration image, instead of single images, to obtain the comprehensive vignetting function using RGCM. Each UAV image is then corrected by the comprehensive vignetting function. A case study was carried out to correct the UAV image sequence obtained in Lushan County after the Ms7.0 Lushan, Sichuan, China earthquake of April 20, 2013. The results show that the comprehensive vignetting function generated by IRGCM is more robust and accurate in expressing the specific optical response of the camera in a particular setting. It is thus particularly useful for correcting a mass of UAV images with non-uniform illumination. The correction process is also simplified and faster than conventional methods. After correction, the images have better radial homogeneity and clearer details, which to a certain extent reduces the difficulty of image mosaicking and provides a better basis for further analysis and damage information extraction.
Further tests show that good results were also obtained by applying the comprehensive vignetting function to other UAV image sequences from different regions. The research was supported by projects NO.2012BAK15B02 and 2013IES010106.
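The mean-image calibration idea can be sketched as follows; the polynomial radial model stands in for the paper's comprehensive vignetting function and is an assumption for illustration.

```python
import numpy as np

def mean_image(stack):
    """Average a sequence of UAV frames into one calibration image."""
    return np.mean(np.asarray(stack, dtype=float), axis=0)

def radial_gain(mean_img, deg=4):
    """Fit a radial polynomial to the mean image as a stand-in for the
    comprehensive vignetting function (normalised so the centre gain is 1)."""
    h, w = mean_img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    r = r / r.max()
    coeffs = np.polyfit(r.ravel(), mean_img.ravel(), deg)
    model = np.polyval(coeffs, r)
    return model / model.max()

def devignette(img, gain):
    """Divide out the fitted gain to flatten the image."""
    return np.asarray(img, dtype=float) / np.maximum(gain, 1e-6)
```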
The New History School Textbooks in the Russian Federation: 1992-2004
ERIC Educational Resources Information Center
Zajda, Joseph
2007-01-01
This article examines the ideologically-articulated shifts, and the images of transformation, and nation-building process presented in the new generation of school history textbooks in Russia. The article analyses the new content of post-Soviet history textbooks used in Russian secondary schools that represent various transformations from…
Chang, C N; Inouye, H; Model, P; Beckwith, J
1980-01-01
An inner membrane preparation co-translationally cleaved both the alkaline phosphatase and bacteriophage f1 coat protein precursors to the mature proteins. Post-translational outer membrane proteolysis of pre-alkaline phosphatase generated a protein smaller than the authentic monomer. PMID:6991486
Online high-speed NIR diffuse-reflectance imaging spectroscopy in food quality monitoring
NASA Astrophysics Data System (ADS)
Driver, Richard D.; Didona, Kevin
2009-05-01
The use of hyperspectral technology in the NIR for food quality monitoring is discussed. An example of the use of hyperspectral diffuse reflectance scanning and post-processing with a chemometric model shows discrimination between four pharmaceutical samples comprising Aspirin, Acetaminophen, Vitamin C and Vitamin D.
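Chemometric discrimination of this kind is often built on principal component scores; the sketch below, with synthetic Gaussian absorption bands standing in for measured NIR spectra, is illustrative only.

```python
import numpy as np

def pca_scores(spectra, n_components=2):
    """Scores of mean-centred spectra on their leading principal components,
    computed via SVD (a minimal stand-in for a chemometric model)."""
    X = np.asarray(spectra, dtype=float)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Two synthetic "compounds" with Gaussian absorption bands at different
# wavelengths (band centres, widths and offsets are illustrative)
wl = np.linspace(1000.0, 2500.0, 300)     # nm
def band(centre):
    return np.exp(-((wl - centre) / 60.0) ** 2)

samples = [band(1400.0) + 0.01 * i for i in range(4)] + \
          [band(2000.0) + 0.01 * i for i in range(4)]
scores = pca_scores(samples)
# The first PC separates the two compound groups by the sign of the score.
```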
Hahn, Paul; Migacz, Justin; O'Connell, Rachelle; Izatt, Joseph A; Toth, Cynthia A
2013-01-01
We have recently developed a microscope-integrated spectral-domain optical coherence tomography (MIOCT) device for intrasurgical cross-sectional imaging of surgical maneuvers. In this report, we explore the capability of MIOCT to acquire real-time video imaging of vitreoretinal surgical maneuvers without post-processing modifications. Standard 3-port vitrectomy was performed in humans during scheduled surgery as well as in cadaveric porcine eyes. MIOCT imaging of human subjects was performed in healthy normal volunteers and intraoperatively at a normal pause immediately following surgical manipulations, under an Institutional Review Board-approved protocol, with informed consent from all subjects. Video MIOCT imaging of live surgical manipulations was performed in cadaveric porcine eyes by carefully aligning B-scans with instrument orientation and movement. Inverted imaging was performed by lengthening the reference arm to a position beyond the choroid. Unprocessed MIOCT imaging was successfully obtained in healthy human volunteers and in human patients undergoing surgery, with visualization of post-surgical changes in unprocessed single B-scans. Real-time, unprocessed MIOCT video imaging was successfully obtained in cadaveric porcine eyes during brushing of the retina with the Tano scraper, peeling of superficial retinal tissue with intraocular forceps, and separation of the posterior hyaloid face. Real-time inverted imaging enabled imaging without complex conjugate artifacts. MIOCT is capable of unprocessed imaging of the macula in human patients undergoing surgery and of unprocessed, real-time video imaging of surgical maneuvers in model eyes. These capabilities represent an important step towards development of MIOCT for efficient, real-time imaging of manipulations during human surgery.
Alizadeh, Mahdi; Conklin, Chris J; Middleton, Devon M; Shah, Pallav; Saksena, Sona; Krisa, Laura; Finsterbusch, Jürgen; Faro, Scott H; Mulcahey, M J; Mohamed, Feroze B
2018-04-01
Ghost artifacts are a major contributor to degradation of spinal cord diffusion tensor images. A multi-stage post-processing pipeline was designed, implemented and validated to automatically remove ghost artifacts arising from reduced field-of-view diffusion tensor imaging (DTI) of the pediatric spinal cord. A total of 12 pediatric subjects were studied, including 7 healthy subjects (mean age=11.34 years) with no evidence of spinal cord injury or pathology and 5 patients (mean age=10.96 years) with cervical spinal cord injury. Ghost/true cords, labeled as regions of interest (ROIs), in non-diffusion-weighted b0 images were segmented automatically using mathematical morphological processing. Initially, 21 texture features were extracted from each segmented ROI, including 5 first-order features based on the histogram of the image (mean, variance, skewness, kurtosis and entropy) and 16 second-order feature vector elements incorporating four statistical measures (contrast, correlation, homogeneity and energy) calculated from co-occurrence matrices in the directions of 0°, 45°, 90° and 135°. Next, ten features with a high value of mutual information (MI) relative to the pre-defined target class and among the features were selected as final features, which were input to a trained classifier (adaptive neuro-fuzzy inference system) to separate the true cord from the ghost cord. The implemented pipeline was successfully able to separate the ghost artifacts from true cord structures. The results obtained from the classifier showed a sensitivity of 91%, specificity of 79%, and accuracy of 84% in separating the true cord from ghost artifacts. The results show that the proposed method is promising for the automatic detection of ghost cords present in DTI images of the spinal cord. This step is crucial towards the development of accurate, automatic DTI spinal cord post-processing pipelines. Copyright © 2017 Elsevier Inc. All rights reserved.
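The five first-order histogram features used in the pipeline can be computed directly; this numpy sketch (the bin count and dictionary layout are assumptions) mirrors that step, and the second-order co-occurrence features would follow the same pattern.

```python
import numpy as np

def first_order_features(roi, bins=32):
    """First-order texture features of an ROI: mean, variance, skewness,
    kurtosis, and histogram entropy (bin count is an illustrative choice)."""
    x = np.asarray(roi, dtype=float).ravel()
    mean = x.mean()
    var = x.var()
    std = np.sqrt(var) + 1e-12                  # guard against constant ROIs
    skew = np.mean(((x - mean) / std) ** 3)
    kurt = np.mean(((x - mean) / std) ** 4)
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                                # drop empty bins before log
    entropy = -np.sum(p * np.log2(p))
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}
```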
Additive Manufacturing Infrared Inspection
NASA Technical Reports Server (NTRS)
Gaddy, Darrell; Nettles, Mindy
2015-01-01
The Additive Manufacturing Infrared Inspection Task started the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using infrared camera imaging and processing techniques. This project will benefit additive manufacturing by providing real-time inspection of internal geometry that is not currently possible, and by reducing the time and cost of additively manufactured parts through automated real-time dimensional inspections that eliminate post-production inspections.
NASA Astrophysics Data System (ADS)
Cucchiaro, S.; Maset, E.; Fusiello, A.; Cazorzi, F.
2018-05-01
In recent years, the combination of Structure-from-Motion (SfM) algorithms and UAV-based aerial images has revolutionised 3D topographic surveys for natural environment monitoring, offering low-cost, fast and high-quality data acquisition and processing. Continuous monitoring of morphological changes through multi-temporal (4D) SfM surveys allows, e.g., analysis of torrent dynamics even in complex topographic environments such as debris-flow catchments, provided that appropriate tools and procedures are employed in the data processing steps. In this work we test two different software packages (3DF Zephyr Aerial and Agisoft Photoscan) on a dataset composed of both UAV and terrestrial images acquired on a debris-flow reach (Moscardo torrent - North-eastern Italian Alps). Unlike other papers in the literature, we evaluate the results not only on the raw point clouds generated by the Structure-from-Motion and Multi-View Stereo algorithms, but also on the Digital Terrain Models (DTMs) created after post-processing. Outcomes show differences between the DTMs that can be considered irrelevant for the geomorphological phenomena under analysis. This study confirms that SfM photogrammetry can be a valuable tool for monitoring sediment dynamics, but accurate point cloud post-processing is required to reliably localize geomorphological changes.
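Localizing geomorphological change between two post-processed DTMs typically ends in a DEM of Difference (DoD) with a level-of-detection threshold; the sketch below is a generic illustration, not the workflow of either software package.

```python
import numpy as np

def dem_of_difference(dtm_new, dtm_old, lod=0.1):
    """Elevation change between co-registered DTMs; changes below the
    level-of-detection (LoD, metres) are masked as noise."""
    dod = np.asarray(dtm_new, dtype=float) - np.asarray(dtm_old, dtype=float)
    dod[np.abs(dod) < lod] = np.nan
    return dod

def erosion_deposition(dod, cell_area=1.0):
    """Erosion and deposition volumes from a thresholded DoD."""
    deposition = np.nansum(np.where(dod > 0, dod, 0.0)) * cell_area
    erosion = np.nansum(np.where(dod < 0, -dod, 0.0)) * cell_area
    return erosion, deposition
```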
NASA Astrophysics Data System (ADS)
Jünger, Felix; Olshausen, Philipp V.; Rohrbach, Alexander
2016-07-01
Living cells are highly dynamic systems with cellular structures being often below the optical resolution limit. Super-resolution microscopes, usually based on fluorescence cell labelling, are usually too slow to resolve small, dynamic structures. We present a label-free microscopy technique, which can generate thousands of super-resolved, high contrast images at a frame rate of 100 Hertz and without any post-processing. The technique is based on oblique sample illumination with coherent light, an approach believed to be not applicable in life sciences because of too many interference artefacts. However, by circulating an incident laser beam by 360° during one image acquisition, relevant image information is amplified. By combining total internal reflection illumination with dark-field detection, structures as small as 150 nm become separable through local destructive interferences. The technique images local changes in refractive index through scattered laser light and is applied to living mouse macrophages and helical bacteria revealing unexpected dynamic processes.
Shoulder Arthroplasty Imaging: What’s New
Gregory, T.M
2017-01-01
Background: Shoulder arthroplasty, in its different forms (hemiarthroplasty, total shoulder arthroplasty and reverse total shoulder arthroplasty), has transformed the clinical outcomes of shoulder disorders. Improvement of general clinical outcome is the result of stronger adequacy of the treatment to the diagnosis, enhanced surgical techniques, specific implanted materials, and more accurate follow-up. Imaging is an important tool in each step of these processes. Method: This article is a review of recent imaging processes for shoulder arthroplasty. Results: Shoulder imaging is important for shoulder arthroplasty pre-operative planning but also for post-operative monitoring of the prosthesis, and this article focuses on the validity of plain radiographs for detecting radiolucent lines and on a new computed tomography scan method established to eliminate the prosthesis metallic artefacts that obscure visualisation of component fixation. Conclusion: The number of shoulder arthroplasties implanted has grown rapidly over the past decade, leading to an increase in the number of complications. In parallel, new imaging systems have been established to monitor these complications, especially component loosening. PMID:29152007
Image quality enhancement for skin cancer optical diagnostics
NASA Astrophysics Data System (ADS)
Bliznuks, Dmitrijs; Kuzmina, Ilona; Bolocko, Katrina; Lihachev, Alexey
2017-12-01
The research presents image quality analysis and enhancement proposals in the biophotonics area. The sources of image problems are reviewed and analyzed, and the problems with the greatest impact are examined in terms of a specific biophotonic task - skin cancer diagnostics. The results point out that the main problem for skin cancer analysis is uneven skin illumination. Since it is often not possible to prevent illumination problems, the paper proposes an image post-processing algorithm - low-frequency filtering. Practical results show an improvement in diagnostic results after using the proposed filter; moreover, the filter does not reduce the quality of diagnostic results for images without illumination defects. The current filtering algorithm requires empirical tuning of the filter parameters. Further work is needed to test the algorithm in other biophotonic applications and to propose automatic filter parameter selection.
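A low-frequency illumination correction of this kind can be sketched with a broad Gaussian low-pass filter as the illumination estimate; the filter choice and sigma value are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy import ndimage

def correct_illumination(img, sigma=30.0):
    """Estimate the slowly varying illumination field with a broad Gaussian
    low-pass filter and divide it out, rescaling to the original mean level.
    The filter type and sigma are illustrative choices."""
    img = np.asarray(img, dtype=float)
    illumination = np.maximum(ndimage.gaussian_filter(img, sigma), 1e-6)
    return img / illumination * illumination.mean()
```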
New technique for real-time distortion-invariant multiobject recognition and classification
NASA Astrophysics Data System (ADS)
Hong, Rutong; Li, Xiaoshun; Hong, En; Wang, Zuyi; Wei, Hongan
2001-04-01
A real-time hybrid distortion-invariant optical pattern recognition (OPR) system was established to perform 3D multiobject distortion-invariant automatic pattern recognition. A wavelet transform technique was used for digital preprocessing of the input scene, to suppress the noisy background and enhance the recognized object. A three-layer backpropagation artificial neural network was used in correlation signal post-processing to perform multiobject distortion-invariant recognition and classification. The C-80 and NOA real-time processing capability and multithread programming technology were used to perform high-speed parallel multitask processing and speed up the post-processing of ROIs. The reference filter library was constructed for the distorted versions of 3D object model images based on distortion parameter tolerance measurements for rotation, azimuth and scale. Real-time optical correlation recognition testing of this OPR system demonstrates that, by using the preprocessing, the post-processing, the nonlinear algorithm of optimum filtering, the RFL construction technique and the multithread programming technology, a high probability of recognition and a high recognition rate were obtained for the real-time multiobject distortion-invariant OPR system. Recognition reliability and rate were improved greatly. These techniques are very useful for automatic target recognition.
NASA Astrophysics Data System (ADS)
Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.
2017-02-01
Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 x 1 x 0.6 mm3, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.
Statistical distributions of ultra-low dose CT sinograms and their fundamental limits
NASA Astrophysics Data System (ADS)
Lee, Tzu-Cheng; Zhang, Ruoqiao; Alessio, Adam M.; Fu, Lin; De Man, Bruno; Kinahan, Paul E.
2017-03-01
Low-dose CT imaging is typically constrained to be diagnostic. However, there are applications for even lower-dose CT imaging, including image registration across multi-frame CT images and attenuation correction for PET/CT imaging. We define this as the ultra-low-dose (ULD) CT regime, where the exposure level is a factor of 10 lower than current low-dose CT technique levels. In the ULD regime it is possible to use statistically-principled image reconstruction methods that make full use of the raw data information. Since most statistical iterative reconstruction methods are based on the assumption that the post-log noise distribution is close to Poisson or Gaussian, our goal is to understand the statistical distribution of ULD CT data with different non-positivity correction methods, and to understand when iterative reconstruction methods may be effective in producing images that are useful for image registration or attenuation correction in PET/CT imaging. We first used phantom measurements and calibrated simulation to reveal how the noise distribution deviates from the normal assumption under the ULD CT flux environment. In summary, our results indicate that there are three general regimes: (1) Diagnostic CT, where post-log data are well modeled by a normal distribution. (2) Low-dose CT, where the normal distribution remains a reasonable approximation and statistically-principled (post-log) methods that assume a normal distribution have an advantage. (3) An ULD regime that is photon-starved, where the quadratic approximation is no longer effective. For instance, a total integral density of 4.8 (ideal pi for 24 cm of water) for a 120 kVp, 0.5 mAs radiation source is the maximum pi value at which a definitive maximum likelihood value could be found. This leads to fundamental limits in the estimation of ULD CT data when using a standard data processing stream.
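The deviation of post-log data from normality at ULD flux is easy to reproduce in simulation; the sketch below uses a clamp-to-one non-positivity correction and illustrative count levels, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)

def post_log_samples(mean_counts, n=200_000):
    """Post-log noise samples from Poisson transmission counts, with zero
    counts clamped to one (a simple non-positivity correction)."""
    counts = np.maximum(rng.poisson(mean_counts, size=n), 1)
    return np.log(mean_counts) - np.log(counts)  # deviation from true line integral

def skewness(x):
    """Sample skewness, zero for a symmetric (e.g. Gaussian) distribution."""
    x = x - x.mean()
    return np.mean(x ** 3) / np.mean(x ** 2) ** 1.5

# Diagnostic-level flux gives nearly Gaussian post-log noise; photon-starved
# ULD flux gives a markedly skewed, non-Gaussian distribution.
s_diag = skewness(post_log_samples(10_000))
s_uld = skewness(post_log_samples(3))
```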
Choi, Heejin; Wadduwage, Dushan; Matsudaira, Paul T.; So, Peter T.C.
2014-01-01
A depth-resolved hyperspectral imaging spectrometer can provide depth-resolved imaging in both the spatial and the spectral domain. Images acquired through a standard imaging Fourier transform spectrometer do not have depth resolution. By post-processing the spectral cubes (x, y, λ) obtained through a Sagnac interferometer under uniform illumination and structured illumination, spectrally resolved images with depth resolution can be recovered using structured-illumination algorithms such as the HiLo method. The proposed scheme is validated with in vitro specimens, including fluorescent solution and fluorescent beads with known spectra. The system is further demonstrated in quantifying spectra from 3D-resolved features in biological specimens. The system has demonstrated a depth resolution of 1.8 μm and a spectral resolution of 7 nm. PMID:25360367
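The HiLo fusion step mentioned above can be sketched per spectral band as follows. This is a generic, simplified rendering of the HiLo idea, not the authors' implementation; the cutoff, weighting, and contrast estimator are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """HiLo sketch: high frequencies of the uniform-illumination image are
    already optically sectioned; sectioned low frequencies are recovered by
    weighting the uniform image with the local contrast of the structured
    image (in-focus regions retain the illumination pattern)."""
    mean_s = gaussian_filter(structured, sigma)
    var_s = gaussian_filter((structured - mean_s) ** 2, sigma)
    contrast = np.sqrt(var_s) / (mean_s + 1e-9)
    lo = gaussian_filter(contrast * uniform, sigma)   # sectioned low-pass
    hi = uniform - gaussian_filter(uniform, sigma)    # sectioned high-pass
    return hi + eta * lo
```

Applied band by band across the (x, y, λ) cube, this suppresses out-of-focus background, where the structured pattern has washed out, while preserving in-focus signal.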
Post Launch Calibration and Testing of the Geostationary Lightning Mapper on the GOES-R Satellite
NASA Technical Reports Server (NTRS)
Rafal, Marc D.; Clarke, Jared T.; Cholvibul, Ruth W.
2016-01-01
The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States National Oceanic and Atmospheric Administration (NOAA). The National Aeronautics and Space Administration (NASA) is procuring the GOES-R spacecraft and instruments with the first launch of the GOES-R series planned for October 2016. Included in the GOES-R Instrument suite is the Geostationary Lightning Mapper (GLM). GLM is a single-channel, near-infrared optical detector that can sense extremely brief (800 microseconds) transient changes in the atmosphere, indicating the presence of lightning. GLM will measure total lightning activity continuously over the Americas and adjacent ocean regions with near-uniform spatial resolution of approximately 10 km. Due to its large CCD (1372x1300 pixels), high frame rate, sensitivity and onboard event filtering, GLM will require extensive post launch characterization and calibration. Daytime and nighttime images will be used to characterize both image quality criteria inherent to GLM as a space-based optic system (focus, stray light, crosstalk, solar glint) and programmable image processing criteria (dark offsets, gain, noise, linearity, dynamic range). In addition ground data filtering will be adjusted based on lightning-specific phenomenology (coherence) to isolate real from false transients with their own characteristics. These parameters will be updated, as needed, on orbit in an iterative process guided by pre-launch testing. This paper discusses the planned tests to be performed on GLM over the six-month Post Launch Test period to optimize and demonstrate GLM performance.
Kupferschmidt, David A.; Cody, Patrick A.; Lovinger, David M.; Davis, Margaret I.
2015-01-01
Optogenetic constructs have revolutionized modern neuroscience, but the ability to accurately and efficiently assess their expression in the brain and associate it with prior functional measures remains a challenge. High-resolution imaging of thick, fixed brain sections would make such post-hoc assessment and association possible; however, thick sections often display autofluorescence that limits their compatibility with fluorescence microscopy. We describe and evaluate a method we call “Brain BLAQ” (Block Lipids and Aldehyde Quench) to rapidly reduce autofluorescence in thick brain sections, enabling efficient axon-level imaging of neurons and their processes in conventional tissue preparations using standard epifluorescence microscopy. Following viral-mediated transduction of optogenetic constructs and fluorescent proteins in mouse cortical pyramidal and dopaminergic neurons, we used BLAQ to assess innervation patterns in the striatum, a region in which autofluorescence often obscures the imaging of fine neural processes. After BLAQ treatment of 250–350 μm-thick brain sections, axons and puncta of labeled afferents were visible throughout the striatum using a standard epifluorescence stereomicroscope. BLAQ histochemistry confirmed that motor cortex (M1) projections preferentially innervated the matrix component of lateral striatum, whereas medial prefrontal cortex projections terminated largely in dorsal striosomes and distinct nucleus accumbens subregions. Ventral tegmental area dopaminergic projections terminated in a similarly heterogeneous pattern within nucleus accumbens and ventral striatum. Using a minimal number of easily manipulated and visualized sections, and microscopes available in most neuroscience laboratories, BLAQ enables simple, high-resolution assessment of virally transduced optogenetic construct expression, and post-hoc association of this expression with molecular markers, physiology and behavior. PMID:25698938
Post launch calibration and testing of the Geostationary Lightning Mapper on GOES-R satellite
NASA Astrophysics Data System (ADS)
Rafal, Marc; Clarke, Jared T.; Cholvibul, Ruth W.
2016-05-01
The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States National Oceanic and Atmospheric Administration (NOAA). The National Aeronautics and Space Administration (NASA) is procuring the GOES-R spacecraft and instruments with the first launch of the GOES-R series planned for October 2016. Included in the GOES-R Instrument suite is the Geostationary Lightning Mapper (GLM). GLM is a single-channel, near-infrared optical detector that can sense extremely brief (800 μs) transient changes in the atmosphere, indicating the presence of lightning. GLM will measure total lightning activity continuously over the Americas and adjacent ocean regions with near-uniform spatial resolution of approximately 10 km. Due to its large CCD (1372x1300 pixels), high frame rate, sensitivity and onboard event filtering, GLM will require extensive post launch characterization and calibration. Daytime and nighttime images will be used to characterize both image quality criteria inherent to GLM as a space-based optic system (focus, stray light, crosstalk, solar glint) and programmable image processing criteria (dark offsets, gain, noise, linearity, dynamic range). In addition ground data filtering will be adjusted based on lightning-specific phenomenology (coherence) to isolate real from false transients with their own characteristics. These parameters will be updated, as needed, on orbit in an iterative process guided by pre-launch testing. This paper discusses the planned tests to be performed on GLM over the six-month Post Launch Test period to optimize and demonstrate GLM performance.
Post Launch Calibration and Testing of the Geostationary Lightning Mapper on GOES-R Satellite
NASA Technical Reports Server (NTRS)
Rafal, Marc; Cholvibul, Ruth; Clarke, Jared
2016-01-01
The Geostationary Operational Environmental Satellite R (GOES-R) series is the planned next generation of operational weather satellites for the United States National Oceanic and Atmospheric Administration (NOAA). The National Aeronautics and Space Administration (NASA) is procuring the GOES-R spacecraft and instruments with the first launch of the GOES-R series planned for October 2016. Included in the GOES-R Instrument suite is the Geostationary Lightning Mapper (GLM). GLM is a single-channel, near-infrared optical detector that can sense extremely brief (800 μs) transient changes in the atmosphere, indicating the presence of lightning. GLM will measure total lightning activity continuously over the Americas and adjacent ocean regions with near-uniform spatial resolution of approximately 10 km. Due to its large CCD (1372x1300 pixels), high frame rate, sensitivity and onboard event filtering, GLM will require extensive post launch characterization and calibration. Daytime and nighttime images will be used to characterize both image quality criteria inherent to GLM as a space-based optic system (focus, stray light, crosstalk, solar glint) and programmable image processing criteria (dark offsets, gain, noise, linearity, dynamic range). In addition ground data filtering will be adjusted based on lightning-specific phenomenology (coherence) to isolate real from false transients with their own characteristics. These parameters will be updated, as needed, on orbit in an iterative process guided by pre-launch testing. This paper discusses the planned tests to be performed on GLM over the six-month Post Launch Test period to optimize and demonstrate GLM performance.
Surface roughness analysis of fiber post conditioning processes.
Mazzitelli, C; Ferrari, M; Toledano, M; Osorio, E; Monticelli, F; Osorio, R
2008-02-01
The chemo-mechanical surface treatment of fiber posts increases their bonding properties. The combined use of atomic force and confocal microscopy allows for the assessment and quantification of the changes on surface roughness that justify this behavior. Quartz fiber posts were conditioned with different chemicals, as well as by sandblasting, and by an industrial silicate/silane coating. We analyzed post surfaces by atomic force microscopy, recording average roughness (Ra) measurements of fibers and resin matrix. A confocal image profiler allowed for the quantitative assessment of the average superficial roughness (Ra). Hydrofluoric acid, potassium permanganate, sodium ethoxide, and sandblasting increased post surface roughness. Modifications of the epoxy resin matrix occurred after the surface pre-treatments. Hydrofluoric acid affected the superficial texture of quartz fibers. Surface-conditioning procedures that selectively react with the epoxy-resin matrix of the fiber post enhance roughness and improve the surface area available for adhesion by creating micro-retentive spaces without affecting the post's inner structure.
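The average roughness Ra reported by AFM and confocal profilers is the mean absolute deviation of the height profile from its mean line. A minimal sketch with synthetic profiles (the height values are illustrative, not the study's data):

```python
import numpy as np

def average_roughness(heights):
    """Ra: mean absolute deviation of the height profile from its mean."""
    z = np.asarray(heights, dtype=float)
    return np.abs(z - z.mean()).mean()

smooth = np.zeros(1000)                      # idealized untreated surface
rng = np.random.default_rng(1)
etched = rng.normal(0.0, 50.0, size=1000)    # e.g. nm-scale relief after etching
```

Comparing Ra before and after conditioning is exactly how the roughness increase from etching or sandblasting is quantified.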
Hyperspectral Fluorescence and Reflectance Imaging Instrument
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; O'Neal, S. Duane; Lanoue, Mark; Russell, Jeffrey
2008-01-01
The system is a single hyperspectral imaging instrument that has the unique capability to acquire both fluorescence and reflectance high-spatial-resolution data that is inherently spatially and spectrally registered. Potential uses of this instrument include plant stress monitoring, counterfeit document detection, biomedical imaging, forensic imaging, and general materials identification. Until now, reflectance and fluorescence spectral imaging have been performed by separate instruments. Neither a reflectance spectral image nor a fluorescence spectral image alone yields as much information about a target surface as does a combination of the two modalities. Before this system was developed, to benefit from this combination, analysts needed to perform time-consuming post-processing efforts to co-register the reflective and fluorescence information. With this instrument, the inherent spatial and spectral registration of the reflectance and fluorescence images minimizes the need for this post-processing step. The main challenge for this technology is to detect the fluorescence signal in the presence of a much stronger reflectance signal. To meet this challenge, the instrument modulates artificial light sources from the ultraviolet through the visible to the near-infrared part of the spectrum; in this way, both the reflective and fluorescence signals can be measured through differencing processes to optimize fluorescence and reflectance spectra as needed. The main functional components of the instrument are a hyperspectral imager, an illumination system, and an image-plane scanner. The hyperspectral imager is a one-dimensional (line) imaging spectrometer that includes a spectrally dispersive element and a two-dimensional focal plane detector array. The spectral range of the current imaging spectrometer is from 400 to 1,000 nm, and the wavelength resolution is approximately 3 nm.
The illumination system consists of narrowband blue, ultraviolet, and other discrete wavelength light-emitting-diode (LED) sources and white-light LED sources designed to produce consistently spatially stable light. White LEDs provide illumination for the measurement of reflectance spectra, while narrowband blue and UV LEDs are used to excite fluorescence. Each spectral type of LED can be turned on or off depending on the specific remote-sensing process being performed. Uniformity of illumination is achieved by using an array of LEDs and/or an integrating sphere or other diffusing surface. The image plane scanner uses a fore optic with a field of view large enough to provide an entire scan line on the image plane. It builds up a two-dimensional image in pushbroom fashion as the target is scanned across the image plane either by moving the object or moving the fore optic. For fluorescence detection, spectral filtering of a narrowband light illumination source is sometimes necessary to minimize the interference of the source spectrum wings with the fluorescence signal. Spectral filtering is achieved with optical interference filters and absorption glasses. This dual spectral imaging capability will enable the optimization of reflective, fluorescence, and fused datasets as well as a cost-effective design for multispectral imaging solutions. This system has been used in plant stress detection studies and in currency analysis.
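The differencing process described above can be sketched in a few lines. This is an assumed, simplified model: it treats fluorescence as the difference between a white-plus-UV exposure and a white-only exposure, ignoring detector noise and any reflectance change between frames.

```python
import numpy as np

def isolate_fluorescence(white_only, white_plus_uv):
    """Fluorescence emission = (white+UV cube) - (white-only cube),
    assuming the reflectance contribution is identical in both exposures.
    Cubes are (rows, cols, bands); negatives from noise are clipped."""
    return np.clip(white_plus_uv - white_only, 0.0, None)
```

Because both cubes come from the same instrument on the same scan, they are already co-registered, which is what makes this simple subtraction meaningful.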
Medical image enhancement using resolution synthesis
NASA Astrophysics Data System (ADS)
Wong, Tak-Shing; Bouman, Charles A.; Thibault, Jean-Baptiste; Sauer, Ken D.
2011-03-01
We introduce a post-processing approach to improve the quality of CT reconstructed images. The scheme is adapted from the resolution-synthesis (RS) interpolation algorithm. In this approach, we consider the input image, scanned at a particular dose level, as a degraded version of a high-quality image scanned at a high dose level. Image enhancement is achieved by predicting the high-quality image through classification-based linear regression. To improve the robustness of our scheme, we also apply the minimum description length principle to determine the optimal number of predictors to use in the scheme, and ridge regression to regularize the design of the predictors. Experimental results show that our scheme is effective in reducing the noise in images reconstructed from filtered back projection without significant loss of image details. Alternatively, our scheme can also be applied to reduce dose while maintaining image quality at an acceptable level.
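The ridge-regularized predictor design at the heart of the scheme can be sketched as follows. This is a simplified stand-in for the RS predictor: the closed-form ridge solution is standard, but the patch classification step (one predictor per class) is omitted here.

```python
import numpy as np

def fit_ridge_predictor(X, y, lam=1e-3):
    """Closed-form ridge regression: w = (X^T X + lam I)^(-1) X^T y.
    X holds degraded-image patch vectors; y holds the corresponding
    high-quality target pixels for one patch class."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict(X, w):
    """Enhanced pixel estimates: linear prediction from degraded patches."""
    return X @ w
```

In the full scheme one such predictor is fit per patch class, and the regularization weight keeps the design stable when a class has few training patches.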
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-03-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. © 2013 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc.
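The local contrast thresholding step can be sketched as below. This is a simplified stand-in for the published PHANTAST algorithm (window size and threshold are assumed values, and the halo-correction step is omitted): cells are detected where they disturb the otherwise flat PCM background.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_segment(img, size=15, threshold=0.05):
    """Mark pixels whose local coefficient of variation of intensity
    exceeds a threshold; flat background regions stay unmarked."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    std = np.sqrt(np.maximum(sq_mean - mean ** 2, 0.0))
    contrast = std / (mean + 1e-9)
    return contrast > threshold
```

Confluency then follows directly as the mean of the binary mask, which is how segmentation quality translates into the culture characteristics discussed above.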
Jaccard, Nicolas; Griffin, Lewis D; Keser, Ana; Macown, Rhys J; Super, Alexandre; Veraitch, Farlan S; Szita, Nicolas
2014-01-01
The quantitative determination of key adherent cell culture characteristics such as confluency, morphology, and cell density is necessary for the evaluation of experimental outcomes and to provide a suitable basis for the establishment of robust cell culture protocols. Automated processing of images acquired using phase contrast microscopy (PCM), an imaging modality widely used for the visual inspection of adherent cell cultures, could enable the non-invasive determination of these characteristics. We present an image-processing approach that accurately detects cellular objects in PCM images through a combination of local contrast thresholding and post hoc correction of halo artifacts. The method was thoroughly validated using a variety of cell lines, microscope models and imaging conditions, demonstrating consistently high segmentation performance in all cases and very short processing times (<1 s per 1,208 × 960 pixels image). Based on the high segmentation performance, it was possible to precisely determine culture confluency, cell density, and the morphology of cellular objects, demonstrating the wide applicability of our algorithm for typical microscopy image processing pipelines. Furthermore, PCM image segmentation was used to facilitate the interpretation and analysis of fluorescence microscopy data, enabling the determination of temporal and spatial expression patterns of a fluorescent reporter. We created a software toolbox (PHANTAST) that bundles all the algorithms and provides an easy to use graphical user interface. Source-code for MATLAB and ImageJ is freely available under a permissive open-source license. Biotechnol. Bioeng. 2014;111: 504–517. © 2013 Wiley Periodicals, Inc. PMID:24037521
2011-04-01
fractional anisotropy measures of axonal tracts derived from diffusion tensor imaging (DTI). Nine soldiers who incurred a blast-related mTBI during...nauseous for 24 to 36 h, blurred vision, tingling in legs, poor coordination for 3 h. Yes, for unknown period None 5 Subject was a gunner in a Humvee...pairs of distant electrodes in all frequency bands. DTI acquisition and processing Diffusion-weighted images were acquired on a 1.5T Philips Achieva
[Design and development of the DSA digital subtraction workstation].
Peng, Wen-Xian; Peng, Tian-Zhou; Xia, Shun-Ren; Jin, Guang-Bo
2008-05-01
According to the patient examination criteria and the demands of all related departments, a DSA digital subtraction workstation has been successfully designed; this paper introduces it by analyzing the characteristics of the video source of an early GE DSA system, which has no DICOM standard interface. The workstation includes an image-capturing gateway and post-processing software. With the developed workstation, all images from this early DSA equipment are transformed into DICOM format and shared among different machines.
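The gateway's core task, wrapping captured frames with the attributes a DICOM object requires, can be sketched in outline. The function below is purely illustrative (the paper's software is not public); a real implementation would hand this attribute set to a DICOM toolkit for serialization.

```python
import numpy as np

def wrap_frame(frame, patient_id, study_uid):
    """Attach the minimum pixel-module attributes a DICOM Secondary
    Capture object needs to an 8-bit grayscale video frame. Attribute
    names follow the DICOM standard; the values here are examples."""
    assert frame.dtype == np.uint8 and frame.ndim == 2
    return {
        "PatientID": patient_id,
        "StudyInstanceUID": study_uid,
        "Modality": "XA",                  # X-ray angiography (DSA)
        "Rows": frame.shape[0],
        "Columns": frame.shape[1],
        "BitsAllocated": 8,
        "SamplesPerPixel": 1,
        "PixelData": frame.tobytes(),
    }
```

Once frames carry consistent DICOM attributes, sharing across machines reduces to standard DICOM networking or file exchange.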
Fast epi-detected broadband multiplex CARS and SHG imaging of mouse skull cells
Capitaine, Erwan; Moussa, Nawel Ould; Louot, Christophe; Bardet, Sylvia M.; Kano, Hideaki; Duponchel, Ludovic; Lévêque, Philippe; Couderc, Vincent; Leproux, Philippe
2017-01-01
We present a bimodal imaging system able to obtain epi-detected multiplex coherent anti-Stokes Raman scattering (M-CARS) and second harmonic generation (SHG) signals from biological samples. We studied a fragment of mouse parietal bone and could detect broadband anti-Stokes and SHG responses originating from bone cells and collagen, respectively. In addition, we compared two post-processing methods to retrieve the imaginary part of the third-order nonlinear susceptibility, which is related to spontaneous Raman scattering. PMID:29359100
Advantages and Disadvantages in Image Processing with Free Software in Radiology.
Mujika, Katrin Muradas; Méndez, Juan Antonio Juanes; de Miguel, Andrés Framiñan
2018-01-15
Currently, there are sophisticated applications that make it possible to visualize medical images and even to manipulate them. These software applications are of great interest, both from a teaching and a radiological perspective. In addition, some of these applications are known as Free Open Source Software because they are free and their source code is freely available, so they can easily be obtained even on personal computers. Two examples of free open source software are Osirix Lite® and 3D Slicer®. However, these free applications have limitations in their use. For the radiological field, manipulating and post-processing images is increasingly important. Consequently, sophisticated computing tools that combine software and hardware to process medical images are needed. In radiology, graphic workstations allow their users to process, review, analyse, communicate and exchange multidimensional digital images acquired with different image-capturing radiological devices. These devices are basically CT (Computerised Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), etc. Nevertheless, the programs included in these workstations have a high cost, which depends on the software provider and is subject to its norms and requirements. With this study, we aim to present the advantages and disadvantages of these radiological image visualization systems in the advanced management of radiological studies. We compare the features of the VITREA2® and AW VolumeShare 5® radiology workstations with free open source software applications like OsiriX® and 3D Slicer®, with examples from specific studies.
NASA Astrophysics Data System (ADS)
Longmore, S. P.; Bikos, D.; Szoke, E.; Miller, S. D.; Brummer, R.; Lindsey, D. T.; Hillger, D.
2014-12-01
The increasing use of mobile phones equipped with digital cameras and the ability to post images and information to the Internet in real time has significantly improved the ability to report events almost instantaneously. In the context of severe weather reports, a representative digital image conveys significantly more information than a simple text or phone-relayed report to a weather forecaster issuing severe weather warnings. It also allows the forecaster to reasonably discern the validity and quality of a storm report. Posting geo-located, time-stamped storm report photographs to NWS social media weather forecast office pages via a mobile phone application has generated recent positive feedback from forecasters. Building upon this feedback, this discussion advances the concept, development, and implementation of a formalized Photo Storm Report (PSR) mobile application, a processing and distribution system, and Advanced Weather Interactive Processing System II (AWIPS-II) plug-in display software. The PSR system would be composed of three core components: i) a mobile phone application, ii) a processing and distribution software and hardware system, and iii) AWIPS-II data, exchange and visualization plug-in software. i) The mobile phone application would allow web-registered users to send geo-location, view direction, and time-stamped PSRs along with severe weather type and comments to the processing and distribution servers. ii) The servers would receive PSRs, convert images and information to NWS network bandwidth-manageable sizes in an AWIPS-II data format, distribute them on the NWS data communications network, and archive the original PSRs for possible future research datasets. iii) The AWIPS-II data and exchange plug-ins would archive PSRs, and the visualization plug-in would display PSR locations, times and directions by hour, similar to surface observations.
Hovering over individual PSRs would reveal photo thumbnails, and clicking on them would display the full-resolution photograph. Here, we present initial NWS forecaster feedback received from social-media-posted PSRs, motivating the possible advantages of PSRs within AWIPS-II, the details of developing and implementing a PSR system, and possible future applications beyond severe weather reports and AWIPS-II.
Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David
2013-08-01
A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.
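An annotation produced against QIBO's upper classes can be sketched as a simple data structure. The upper-class names come from the abstract; the term values and the validation helper are hypothetical, not QIBO's API.

```python
# The eight upper classes named in the abstract.
QIBO_UPPER_CLASSES = {
    "experimental subject", "biological intervention", "imaging agent",
    "imaging instrument", "image post-processing algorithm",
    "biological target", "indicated biology", "biomarker application",
}

def annotate(experiment_id, annotations):
    """Attach (upper_class, term) pairs to an imaging experiment,
    rejecting classes outside the ontology's upper level."""
    for upper_class, term in annotations:
        if upper_class not in QIBO_UPPER_CLASSES:
            raise ValueError(f"unknown upper class: {upper_class}")
    return {"experiment": experiment_id, "annotations": list(annotations)}
```

Even this toy validation illustrates the payoff claimed above: experiments annotated against a shared class hierarchy can be stored, retrieved, and compared with consistent terminology.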
The Image of Mathematics Held by Irish Post-Primary Students
ERIC Educational Resources Information Center
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2014-01-01
The image of mathematics held by Irish post-primary students was examined and a model for the image found was constructed. Initially, a definition for "image of mathematics" was adopted with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. Research…
Andrews, Natalie; Ramel, Marie-Christine; Kumar, Sunil; Alexandrov, Yuriy; Kelly, Douglas J; Warren, Sean C; Kerry, Louise; Lockwood, Nicola; Frolov, Antonina; Frankel, Paul; Bugeon, Laurence; McGinty, James; Dallman, Margaret J; French, Paul M W
2016-04-01
Fluorescence lifetime imaging (FLIM) combined with optical projection tomography (OPT) has the potential to map Förster resonant energy transfer (FRET) readouts in space and time in intact transparent or near transparent live organisms such as zebrafish larvae, thereby providing a means to visualise cell signalling processes in their physiological context. Here the first application of FLIM OPT to read out biological function in live transgenic zebrafish larvae using a genetically expressed FRET biosensor is reported. Apoptosis, or programmed cell death, is mapped in 3-D by imaging the activity of a FRET biosensor that is cleaved by Caspase 3, which is a key effector of apoptosis. Although apoptosis is a naturally occurring process during development, it can also be triggered in a variety of ways, including through gamma irradiation. FLIM OPT is shown here to enable apoptosis to be monitored over time, in live zebrafish larvae via changes in Caspase 3 activation following gamma irradiation at 24 hours post fertilisation. Significant apoptosis was observed at 3.5 hours post irradiation, predominantly in the head region. © 2016 The Authors. Journal of Biophotonics published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Zboray, Robert; Dangendorf, Volker; Mor, Ilan; Bromberger, Benjamin; Tittelmeier, Kai
2015-07-01
In a previous work, we demonstrated the feasibility of high-frame-rate, fast-neutron radiography of generic air-water two-phase flows in a 1.5 cm thick, rectangular flow channel. The experiments were carried out at the high-intensity, white-beam facility of the Physikalisch-Technische Bundesanstalt, Germany, using a multi-frame, time-resolved detector developed for fast-neutron resonance radiography. The results were, however, not fully optimal, and we therefore modified the detector and optimized it for the given application, as described in the present work. Furthermore, we improved the image post-processing methodology and the noise suppression. Using the tailored detector and the improved post-processing, a significant increase in image quality and an order-of-magnitude reduction in exposure time, down to 3.33 ms, have been achieved with minimized motion artifacts. As in the previous study, different two-phase flow regimes such as bubbly, slug, and churn flows have been examined. The enhanced imaging quality enables an improved prediction of two-phase flow parameters such as the instantaneous volumetric gas fraction, bubble size, and bubble velocities. Instantaneous velocity fields around the gas enclosures can also be predicted more robustly using optical flow methods.
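The instantaneous volumetric gas fraction mentioned above can be estimated from a binarized radiograph in one line, since the thin rectangular channel has constant depth. A sketch with an assumed threshold and synthetic attenuation values, not the authors' pipeline:

```python
import numpy as np

def gas_fraction(attenuation_image, threshold):
    """Classify pixels with attenuation below `threshold` as gas
    (gas attenuates neutrons far less than water), then take the
    gas-pixel fraction as the volumetric gas fraction of the thin,
    constant-depth channel."""
    gas_mask = attenuation_image < threshold
    return gas_mask.mean()
```

Tracking this quantity frame by frame at ms-scale exposures is what distinguishes bubbly, slug, and churn regimes in the time series.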
3D MEMS in Standard Processes: Fabrication, Quality Assurance, and Novel Measurement Microstructures
NASA Technical Reports Server (NTRS)
Lin, Gisela; Lawton, Russell A.
2000-01-01
Three-dimensional MEMS microsystems that are commercially fabricated require minimal post-processing and are easily integrated with CMOS signal processing electronics. Measurements to evaluate the fabrication process (such as cross-sectional imaging and device performance characterization) provide much needed feedback in terms of reliability and quality assurance. MEMS technology is bringing a new class of microscale measurements to fruition. The relatively small size of MEMS microsystems offers the potential for higher fidelity recordings compared to macrosize counterparts, as illustrated in the measurement of muscle cell forces.
Point Cloud Based Change Detection - an Automated Approach for Cloud-based Services
NASA Astrophysics Data System (ADS)
Collins, Patrick; Bahr, Thomas
2016-04-01
The fusion of stereo photogrammetric point clouds with LiDAR data or terrain information derived from SAR interferometry has a significant potential for 3D topographic change detection. In the present case study latest point cloud generation and analysis capabilities are used to examine a landslide that occurred in the village of Malin in Maharashtra, India, on 30 July 2014, and affected an area of ca. 44,000 m2. It focuses on Pléiades high resolution satellite imagery and the Airbus DS WorldDEM™ as a product of the TanDEM-X mission. This case study was performed using the COTS software package ENVI 5.3. Integration of custom processes and automation is supported by IDL (Interactive Data Language). Thus, ENVI analytics is running via the object-oriented and IDL-based ENVITask API. The pre-event topography is represented by the WorldDEM™ product, delivered with a raster of 12 m x 12 m and based on the EGM2008 geoid (called pre-DEM). For the post-event situation a Pléiades 1B stereo image pair of the affected AOI was obtained. The ENVITask "GeneratePointCloudsByDenseImageMatching" was implemented to extract passive point clouds in LAS format from the panchromatic stereo datasets: • A dense image-matching algorithm is used to identify corresponding points in the two images. • A block adjustment is applied to refine the 3D coordinates that describe the scene geometry. • Additionally, the WorldDEM™ was input to constrain the range of heights in the matching area, and subsequently the length of the epipolar line. The "PointCloudFeatureExtraction" task was executed to generate the post-event digital surface model from the photogrammetric point clouds (called post-DEM). Post-processing consisted of the following steps: • Adding the geoid component (EGM 2008) to the post-DEM. • Pre-DEM reprojection to the UTM Zone 43N (WGS-84) coordinate system and resizing. • Subtraction of the pre-DEM from the post-DEM.
• Filtering and threshold-based classification of the DEM difference to analyze the surface changes in 3D.
The automated point cloud generation and analysis introduced here can be embedded in virtually any existing geospatial workflow for operational applications. Three integration options were implemented in this case study:
• Integration within any ArcGIS environment, whether deployed on the desktop, in the cloud, or online. Execution uses a customized ArcGIS script tool: a Python script file retrieves the parameters from the user interface and runs precompiled IDL code, which interfaces between the Python script and the relevant ENVITasks.
• Publishing the point cloud processing tasks as services via the ENVI Services Engine (ESE). ESE is a cloud-based image analysis solution for publishing and deploying advanced ENVI image and data analytics to existing enterprise infrastructures. For this purpose the entire IDL code can be encapsulated in a single ENVITask.
• Integration into an existing geospatial workflow using the Python-to-IDL Bridge, a mechanism that allows IDL code to be called from within Python on a user-defined platform.
The results of this case study allow a 3D estimation of the topographic changes within the tectonically active and anthropogenically modified Malin area after the landslide event. Accordingly, the point cloud analysis correlated successfully with modelled displacement contours of the slope. Based on optical satellite imagery, such point clouds of high precision and density can be obtained in a few minutes to support the operational monitoring of landslide processes.
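The core of the change-detection step above (post-DEM minus pre-DEM, followed by threshold-based classification) can be sketched in a few lines of Python; the toy arrays and the ±2 m threshold are illustrative stand-ins for the actual Pléiades/WorldDEM™ rasters and the study's filtering parameters:

```python
import numpy as np

def classify_dem_change(pre_dem, post_dem, threshold=2.0):
    """Subtract the pre-event DEM from the post-event DEM and classify change.

    Returns the difference raster and an int array:
    +1 accumulation, -1 depletion, 0 no significant change.
    Both DEMs must already share the same grid and vertical datum.
    """
    diff = post_dem - pre_dem
    classes = np.zeros(diff.shape, dtype=int)
    classes[diff > threshold] = 1    # material deposited (e.g. landslide toe)
    classes[diff < -threshold] = -1  # material removed (e.g. scarp)
    return diff, classes

# Toy 3x3 example: 5 m of deposition in one cell, 4 m of loss in another
pre = np.array([[100.0, 101.0, 102.0],
                [100.0, 101.0, 102.0],
                [100.0, 101.0, 102.0]])
post = pre.copy()
post[0, 0] += 5.0
post[2, 2] -= 4.0
diff, classes = classify_dem_change(pre, post)
print(classes[0, 0], classes[2, 2], classes[1, 1])  # 1 -1 0
```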
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zoberi, J.
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives:
• Review prostate HDR techniques based on the imaging modality
• Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy
• Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
Deblurring adaptive optics retinal images using deep convolutional neural networks.
Fei, Xiao; Zhao, Junlei; Zhao, Haoxin; Yun, Dai; Zhang, Yudong
2017-12-01
Adaptive optics (AO) can be used to compensate for ocular aberrations to achieve near diffraction-limited, high-resolution retinal images. However, many factors, such as the limited aberration measurement and correction accuracy of AO, intraocular scatter, and imaging noise, degrade the quality of retinal images. Image post-processing is an indispensable and economical way to make up for the limitations of the AO retinal imaging procedure. In this paper, we proposed a deep learning method to restore degraded retinal images for the first time. The method directly learned an end-to-end mapping between the blurred and restored retinal images. The mapping was represented as a deep convolutional neural network that was trained to output high-quality images directly from blurry inputs without any preprocessing. This network was validated on synthetically generated retinal images as well as real AO retinal images. The assessment of the restored retinal images demonstrated that the image quality had been significantly improved.
Post-determined emotion: motor action retrospectively modulates emotional valence of visual images
Sasaki, Kyoshiro; Yamada, Yuki; Miura, Kayo
2015-01-01
Upward and downward motor actions influence subsequent and ongoing emotional processing in accordance with a space–valence metaphor: positive is up/negative is down. In this study, we examined whether upward and downward motor actions could also affect previous emotional processing. Participants were shown an emotional image on a touch screen. After the image disappeared, they were required to drag a centrally located dot towards a cued area, which was either in the upper or lower portion of the screen. They were then asked to rate the emotional valence of the image using a 7-point scale. We found that the emotional valence of the image was more positive when the cued area was located in the upper portion of the screen. However, this was the case only when the dragging action was required immediately after the image had disappeared. Our findings suggest that when somatic information that is metaphorically associated with an emotion is linked temporally with a visual event, retrospective emotional integration between the visual and somatic events occurs. PMID:25808884
Barlow, Anders J; Portoles, Jose F; Sano, Naoko; Cumpson, Peter J
2016-10-01
The development of the helium ion microscope (HIM) enables the imaging of both hard, inorganic materials and soft, organic or biological materials. Advantages include outstanding topographical contrast, superior resolution down to <0.5 nm at high magnification, high depth of field, and no need for conductive coatings. The instrument relies on helium atom adsorption and ionization at a cryogenically cooled tip that is atomically sharp. Under ideal conditions this arrangement provides a beam of ions that is stable for days to weeks, with beam currents in the order of picoamperes. Over time, however, this stability is lost as gaseous contamination builds up in the source region, leading to adsorbed atoms of species other than helium, which ultimately results in beam current fluctuations. This manifests itself as horizontal stripe artifacts in HIM images. We investigate post-processing methods to remove these artifacts from HIM images, such as median filtering, Gaussian blurring, fast Fourier transforms, and principal component analysis. We arrive at a simple method for completely removing beam current fluctuation effects from HIM images while maintaining the full integrity of the information within the image.
Deformable registration of x-ray to MRI for post-implant dosimetry in prostate brachytherapy
NASA Astrophysics Data System (ADS)
Park, Seyoun; Song, Danny Y.; Lee, Junghoon
2016-03-01
Post-implant dosimetric assessment in prostate brachytherapy is typically performed using CT as the standard imaging modality. However, poor soft tissue contrast in CT causes significant variability in target contouring, resulting in incorrect dose calculations for organs of interest. A CT-MR fusion-based approach has been advocated to take advantage of the complementary capabilities of CT (seed identification) and MRI (soft tissue visibility), and has been shown to provide more accurate dosimetry calculations. However, seed segmentation in CT requires manual review, and the accuracy is limited by the reconstructed voxel resolution. In addition, CT delivers a considerable radiation dose to the patient. In this paper, we propose an X-ray- and MRI-based post-implant dosimetry approach. Implanted seeds are localized using three X-ray images by solving a combinatorial optimization problem, and the identified seeds are registered to MR images by an intensity-based points-to-volume registration. We pre-process the MR images using geometric and Gaussian filtering. To accommodate potential soft tissue deformation, our registration is performed in two steps: an initial affine transformation and a local deformable registration. An evolutionary optimizer in conjunction with a points-to-volume similarity metric is used for the affine registration. Local prostate deformation and seed migration are then adjusted by the deformable registration step with external and internal force constraints. We tested our algorithm on six patient data sets, achieving a registration error of 1.2 ± 0.8 mm in under 30 s. Our proposed approach has the potential to be a fast and cost-effective solution for post-implant dosimetry, with accuracy equivalent to that of the CT-MR fusion-based approach.
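The points-to-volume idea behind the affine step can be illustrated with a toy sketch: sample the MR volume at transformed seed positions and score how seed-like (dark) those samples are. The nearest-neighbour sampling, toy volume, and identity transform below are assumptions for illustration; the authors' actual metric and evolutionary optimizer are more elaborate:

```python
import numpy as np

def points_to_volume_similarity(points, volume, affine):
    """Mean image intensity sampled at affinely transformed point locations.

    `points` is (N, 3), `affine` a 4x4 matrix mapping points into voxel
    coordinates. Seeds appear as dark voids in MRI, so a registration
    optimizer would *minimize* this value. Nearest-neighbour sampling
    keeps the sketch short (real metrics would interpolate).
    """
    homog = np.hstack([points, np.ones((len(points), 1))])
    vox = (homog @ affine.T)[:, :3]
    idx = np.clip(np.rint(vox).astype(int), 0, np.array(volume.shape) - 1)
    return volume[idx[:, 0], idx[:, 1], idx[:, 2]].mean()

# Toy volume: bright background with dark "seed" voxels
vol = np.full((20, 20, 20), 100.0)
seeds_vox = np.array([[5, 5, 5], [10, 10, 10], [15, 15, 15]])
for s in seeds_vox:
    vol[tuple(s)] = 0.0
identity = np.eye(4)
aligned = points_to_volume_similarity(seeds_vox.astype(float), vol, identity)
shifted = points_to_volume_similarity(seeds_vox + 2.0, vol, identity)
print(aligned < shifted)  # True: aligned seeds sit in the dark voids
```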
Three-dimensional contrasted visualization of pancreas in rats using clinical MRI and CT scanners.
Yin, Ting; Coudyzer, Walter; Peeters, Ronald; Liu, Yewei; Cona, Marlein Miranda; Feng, Yuanbo; Xia, Qian; Yu, Jie; Jiang, Yansheng; Dymarkowski, Steven; Huang, Gang; Chen, Feng; Oyen, Raymond; Ni, Yicheng
2015-01-01
The purpose of this work was to visualize the pancreas in post-mortem rats with local contrast medium infusion by three-dimensional (3D) magnetic resonance imaging (MRI) and computed tomography (CT) using clinical imagers. A total of 16 Sprague Dawley rats of about 300 g were used for pancreas visualization. Following baseline imaging, a mixed contrast medium dye called GadoIodo-EB, containing optimized concentrations of Gd-DOTA, iomeprol, and Evans blue, was infused into the distally obstructed common bile duct (CBD) for post-contrast imaging with 3.0 T MRI and 128-slice CT scanners. Images were post-processed with the MeVisLab software package. MRI findings were co-registered with CT scans and validated with histomorphology, and relative contrast ratios were quantified. Without contrast enhancement, the pancreas was indiscernible. After infusion of the GadoIodo-EB solution, the pancreatic region alone became strikingly visible, as shown by 3D-rendered MRI and CT and proven by colored dissection and histological examinations. The measured volume of the pancreas averaged 1.12 ± 0.04 cm³ after standardization. Relative contrast ratios were 93.28 ± 34.61% and 26.45 ± 5.29% for MRI and CT, respectively. We have developed a multifunctional contrast medium dye to help clearly visualize and delineate the rat pancreas in situ using clinical MRI and CT scanners. The topographic landmarks thus created, with 3D demonstration, may help to provide guidelines for future in vivo pancreatic MRI research in rodents. Copyright © 2015 John Wiley & Sons, Ltd.
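The paper does not spell out its contrast-ratio formula, so the sketch below uses a commonly assumed definition (target ROI mean relative to background ROI mean, in percent) with hypothetical signal values:

```python
import numpy as np

def relative_contrast_ratio(roi_target, roi_background):
    """Percent contrast of a target ROI over background.

    Assumed definition (target mean minus background mean, normalized to
    the background mean); illustrative only, as the paper's exact formula
    is not given in the abstract.
    """
    t = np.mean(roi_target)
    b = np.mean(roi_background)
    return 100.0 * (t - b) / b

pancreas = np.array([580.0, 600.0, 620.0])     # hypothetical post-contrast signal
background = np.array([290.0, 300.0, 310.0])   # hypothetical surrounding tissue
rcr = relative_contrast_ratio(pancreas, background)
print(rcr)  # 100.0
```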
Open source software in a practical approach for post processing of radiologic images.
Valeri, Gianluca; Mazza, Francesco Antonino; Maggi, Stefania; Aramini, Daniele; La Riccia, Luigi; Mazzoni, Giovanni; Giovagnoni, Andrea
2015-03-01
The purpose of this paper is to evaluate the use of open source software (OSS) to process DICOM images. We selected 23 programs for Windows and 20 programs for Mac from 150 possible OSS programs, including DICOM viewers and various tools (converters, DICOM header editors, etc.). The programs selected all meet basic requirements such as free availability, stand-alone operation, a graphical user interface, ease of installation, and advanced features beyond simple image display. The data import, data export, metadata, 2D viewer, 3D viewer, supported platforms, and usability of each selected program were evaluated on a scale ranging from 1 to 10 points. Twelve programs received a score of 8 or higher. Among them, five obtained a score of 9: 3D Slicer, MedINRIA, MITK 3M3, VolView, and VR Render, while OsiriX received 10. OsiriX appears to be the only program able to perform all the operations taken into consideration, similar to a workstation equipped with proprietary software, allowing the analysis and interpretation of images in a simple and intuitive way. OsiriX is a DICOM PACS workstation for medical imaging and software for image processing for medical research, functional imaging, 3D imaging, confocal microscopy, and molecular imaging. This application is also a good tool for teaching activities because it facilitates the attainment of learning objectives among students and other specialists.
Selections from 2017: Image Processing with AstroImageJ
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-12-01
Editor's note: In these last two weeks of 2017, we'll be looking at a few selections that we haven't yet discussed on AAS Nova from among the most-downloaded papers published in AAS journals this year. The usual posting schedule will resume in January.
AstroImageJ: Image Processing and Photometric Extraction for Ultra-Precise Astronomical Light Curves
Published January 2017
The AIJ image display. A wide range of astronomy-specific image display options and image analysis tools are available from the menus, quick access icons, and interactive histogram. [Collins et al. 2017]
Main takeaway: AstroImageJ is a new integrated software package presented in a publication led by Karen Collins (Vanderbilt University, Fisk University, and University of Louisville). It enables new users, even at the level of undergraduate student, high school student, or amateur astronomer, to quickly start processing, modeling, and plotting astronomical image data.
Why it's interesting: Science doesn't just happen the moment a telescope captures a picture of a distant object. Instead, astronomical images must first be carefully processed to clean up the data, and this data must then be systematically analyzed to learn about the objects within it.
As a GUI-driven, easily installed, public-domain tool, AstroImageJ is uniquely accessible for this processing and analysis, allowing even non-specialist users to explore and visualize astronomical data.
Some features of AstroImageJ (as reported by Astrobites):
• Image calibration: generate master flat, dark, and bias frames
• Image arithmetic: combine images via subtraction, addition, division, multiplication, etc.
• Stack editing: easily perform operations on a series of images
• Image stabilization and image alignment features
• Precise coordinate converters: calculate Heliocentric and Barycentric Julian Dates
• WCS coordinates: determine precisely where a telescope was pointed for an image by plate solving using Astrometry.net
• Macro and plugin support: write your own macros
• Multi-aperture photometry with interactive light curve fitting: plot light curves of a star in real time
Citation: Karen A. Collins et al 2017 AJ 153 77. doi:10.3847/1538-3881/153/2/77
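The image-calibration feature listed first can be sketched as the standard CCD reduction steps (bias and dark subtraction, division by a normalized flat). The master frames below are toy arrays; real pipelines, AstroImageJ included, also handle details such as exposure-time scaling of the dark:

```python
import numpy as np

def calibrate_frame(raw, master_bias, master_dark, master_flat):
    """Standard CCD image calibration: subtract bias and dark,
    then divide by the median-normalized flat field.

    The master frames are assumed to be already combined (e.g. median
    stacks) and the dark already scaled to the science exposure time.
    """
    flat_norm = master_flat / np.median(master_flat)
    return (raw - master_bias - master_dark) / flat_norm

# Toy frames: 100 ADU bias, 10 ADU dark, uniform flat, 500 ADU of signal
bias = np.full((4, 4), 100.0)
dark = np.full((4, 4), 10.0)
flat = np.full((4, 4), 2.0)
raw = np.full((4, 4), 610.0)
cal = calibrate_frame(raw, bias, dark, flat)
print(cal[0, 0])  # 500.0
```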
Towards collaboration between unmanned aerial and ground vehicles for precision agriculture
NASA Astrophysics Data System (ADS)
Bhandari, Subodh; Raheja, Amar; Green, Robert L.; Do, Dat
2017-05-01
This paper presents the work being conducted at Cal Poly Pomona on collaboration between unmanned aerial and ground vehicles for precision agriculture. The unmanned aerial vehicles (UAVs), equipped with multispectral/hyperspectral cameras and RGB cameras, take images of the crops while flying autonomously. The images are post-processed on the ground or can be processed onboard, and the processed images are used in the detection of unhealthy plants. Aerial data can be used by the UAVs and unmanned ground vehicles (UGVs) for various purposes, including care of crops, harvest estimation, etc. The images can also be useful for optimized harvesting by isolating low-yielding plants. These vehicles can be operated autonomously with limited or no human intervention, thereby reducing cost and limiting human exposure to agricultural chemicals. The paper discusses the autonomous UAV and UGV platforms used for the research, sensor integration, and experimental testing. Methods for ground-truthing the results obtained from the UAVs are also presented. The paper also discusses equipping the UGV with a robotic arm for removing unhealthy plants and/or weeds.
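Unhealthy-plant detection from multispectral imagery is commonly based on vegetation indices such as NDVI; the paper does not specify its exact algorithm, so the following is a generic sketch with an illustrative threshold:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from multispectral bands.

    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, giving NDVI near +1; stressed plants score lower.
    """
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids divide-by-zero

def flag_unhealthy(nir, red, threshold=0.4):
    """Flag pixels whose NDVI falls below a crop-specific threshold.
    The 0.4 default here is purely illustrative."""
    return ndvi(nir, red) < threshold

nir = np.array([[0.8, 0.3]])   # left pixel healthy, right pixel stressed
red = np.array([[0.1, 0.25]])
flags = flag_unhealthy(nir, red)
print(flags)  # [[False  True]]
```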
NASA Astrophysics Data System (ADS)
Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian
2018-02-01
This work presents the coregistered, orthorectified, and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered GeoTIFF image, a corresponding footprint, and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of static and dynamic features of Mars. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require extensive human effort.
All-CMOS night vision viewer with integrated microdisplay
NASA Astrophysics Data System (ADS)
Goosen, Marius E.; Venter, Petrus J.; du Plessis, Monuko; Faure, Nicolaas M.; Janse van Rensburg, Christo; Rademeyer, Pieter
2014-02-01
The unrivalled integration potential of CMOS has made it the dominant technology for digital integrated circuits. With the advent of visible light emission from silicon through hot carrier electroluminescence, several applications arose, all of which rely upon the advantages of mature CMOS technologies for a competitive edge in a very active and attractive market. In this paper we present a low-cost night vision viewer which employs only standard CMOS technologies. A commercial CMOS imager is utilized for near-infrared image capturing, with a 128 × 96 pixel all-CMOS microdisplay implemented to convey the image to the user. The display is implemented in a standard 0.35 μm CMOS process, with no process alterations or post-processing. The display features a 25 μm pixel pitch and a 3.2 mm × 2.4 mm active area, which through magnification presents a virtual image to the user equivalent to a 19-inch display viewed from a distance of 3 meters. This work represents the first application of a CMOS microdisplay in a low-cost consumer product.
NASA Astrophysics Data System (ADS)
Curt, Petersen F.; Bodnar, Michael R.; Ortiz, Fernando E.; Carrano, Carmen J.; Kelmelis, Eric J.
2009-02-01
While imaging over long distances is critical to a number of security and defense applications, such as homeland security and launch tracking, current optical systems are limited in resolving power. This is largely a result of the turbulent atmosphere in the path between the region under observation and the imaging system, which can severely degrade captured imagery. There are a variety of post-processing techniques capable of recovering this obscured image information; however, the computational complexity of such approaches has prohibited real-time deployment and hampers the usability of these technologies in many scenarios. To overcome this limitation, we have designed and manufactured an embedded image processing system based on commodity hardware which can compensate for these atmospheric disturbances in real-time. Our system consists of a reformulation of the average bispectrum speckle method coupled with a high-end FPGA processing board, and employs modular I/O capable of interfacing with most common digital and analog video transport methods (composite, component, VGA, DVI, SDI, HD-SDI, etc.). By leveraging the custom, reconfigurable nature of the FPGA, we have achieved performance twenty times faster than a modern desktop PC, in a form-factor that is compact, low-power, and field-deployable.
A scientific operations plan for the large space telescope. [ground support system design
NASA Technical Reports Server (NTRS)
West, D. K.
1977-01-01
The paper describes an LST ground system which is compatible with the operational requirements of the LST. The goal of the approach is to minimize the cost of post launch operations without seriously compromising the quality and total throughput of LST science. Attention is given to cost constraints and guidelines, the telemetry operations processing systems (TELOPS), the image processing facility, ground system planning and data flow, and scientific interfaces.
[Progress in Application of Measuring Skeleton by CT in Forensic Anthropology Research].
Miao, C Y; Xu, L; Wang, N; Zhang, M; Li, Y S; Lü, J X
2017-02-01
Individual identification by measuring the human skeleton is an important research topic in the field of forensic anthropology. Computed tomography (CT) can provide high-resolution images of the skeleton, and skeleton images can be reformatted by software in the post-processing workstation. Different anthropological skeleton measurement indexes, such as diameter, angle, area, and volume, can be measured on sectional and reformatted images, and the measurement process is barely affected by human factors. This paper reviews the domestic and international literature on the application of CT-based skeleton measurement in forensic anthropology research for individual identification in four aspects: sex determination, stature estimation, facial soft tissue thickness measurement, and age estimation. The major technologies and applications of CT in forensic anthropology research are compared and discussed. Copyright© by the Editorial Department of Journal of Forensic Medicine.
NASA Technical Reports Server (NTRS)
Aldcroft, T.; Karovska, M.; Cresitello-Dittmar, M.; Cameron, R.
2000-01-01
The aspect system of the Chandra Observatory plays a key role in realizing the full potential of Chandra's x-ray optics and detectors. To achieve the highest spatial and spectral resolution (for grating observations), an accurate post-facto time history of the spacecraft attitude and internal alignment is needed. The CXC has developed a suite of tools which process sensor data from the aspect camera assembly and gyroscopes, and produce the spacecraft aspect solution. In this poster, the design of the aspect pipeline software is briefly described, followed by details of aspect system performance during the first eight months of flight. The two key metrics of aspect performance are: image reconstruction accuracy, which measures the x-ray image blurring introduced by aspect; and celestial location, which is the accuracy of detected source positions in absolute sky coordinates.
Optimizing Performance of Scientific Visualization Software to Support Frontier-Class Computations
2015-08-01
fMRI paradigm designing and post-processing tools
James, Jija S; Rajesh, PG; Chandran, Anuvitha VS; Kesavadas, Chandrasekharan
2014-01-01
In this article, we first review some aspects of functional magnetic resonance imaging (fMRI) paradigm design for major cognitive functions using stimulus delivery systems such as Cogent, E-Prime, and Presentation, along with their technical aspects. We also review the stimulus presentation possibilities (block, event-related) for visual or auditory paradigms and their advantages in both clinical and research settings. The second part mainly focuses on various fMRI data post-processing tools such as Statistical Parametric Mapping (SPM) and Brain Voyager, and discusses the particulars of the various preprocessing steps involved (realignment, co-registration, normalization, smoothing) in these software packages, as well as the statistical analysis principles of General Linear Modeling for the final interpretation of a functional activation result. PMID:24851001
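The General Linear Model at the core of SPM-style analysis can be sketched as an ordinary least-squares fit of the BOLD signal against a task regressor convolved with a haemodynamic response function. This is a heavily simplified toy example; real pipelines add filtering, autocorrelation modelling, and multiple-comparison control:

```python
import math
import numpy as np

def glm_fit(bold, design):
    """Ordinary least-squares fit of the General Linear Model Y = X @ beta."""
    beta, *_ = np.linalg.lstsq(design, bold, rcond=None)
    return beta

# Block-design task regressor (20 s on/off at TR = 1 s), convolved with
# a simple gamma-shaped HRF, plus a constant column for the baseline.
n = 100
task = np.tile(np.r_[np.ones(20), np.zeros(20)], 3)[:n]
t = np.arange(0, 30, 1.0)
hrf = t ** 6 * np.exp(-t) / math.factorial(6)   # toy HRF, peak near 6 s
regressor = np.convolve(task, hrf)[:n]
X = np.column_stack([regressor, np.ones(n)])

# Synthetic voxel: baseline 100, task effect 2.5, no noise
bold = 100.0 + 2.5 * regressor
beta = glm_fit(bold, X)
print(round(beta[0], 3), round(beta[1], 3))  # 2.5 100.0
```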
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jutras, Jean-David
MRI-only Radiation Treatment Planning (RTP) is becoming increasingly popular because of a simplified work-flow and less inconvenience to the patient, who avoids multiple scans. The advantages of MRI-based RTP over traditional CT-based RTP lie in its superior soft-tissue contrast and absence of ionizing radiation dose. The lack of electron-density information in MRI can be addressed by automatic tissue classification. To distinguish bone from air, which both appear dark in MRI, an ultra-short echo time (UTE) pulse sequence may be used. Quantitative MRI parametric maps can provide improved tissue segmentation/classification and better sensitivity in monitoring disease progression and treatment outcome than standard weighted images. Superior tumor contrast can be achieved on pure T1 images compared to conventional T1-weighted images acquired in the same scan duration and voxel resolution. In this study, we have developed a robust and fast quantitative MRI acquisition and post-processing work-flow that integrates these latest advances into the MRI-based RTP of brain lesions. Using 3D multi-echo FLASH images at two different optimized flip angles (both acquired in under 9 min at 1 mm isotropic resolution), parametric maps of T1, proton density (M0), and T2* are obtained with high contrast-to-noise ratio and negligible geometrical distortions, water-fat shifts, and susceptibility effects. An additional 3D UTE MRI dataset is acquired (in under 4 min) and post-processed to classify tissues for dose simulation. The pipeline was tested on four healthy volunteers, and a clinical trial on brain cancer patients is underway.
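One common way to obtain T1 and M0 maps from two optimized-flip-angle FLASH acquisitions is the DESPOT1 (variable flip angle) linearization; the flip angles, TR, and tissue values below are illustrative, and the authors' exact fitting procedure may differ:

```python
import numpy as np

def flash_signal(m0, t1, alpha, tr):
    """Spoiled gradient-echo (FLASH) steady-state signal model."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1 - e1) / (1 - e1 * np.cos(alpha))

def despot1_fit(s1, s2, alpha1, alpha2, tr):
    """Estimate T1 and M0 from two FLASH signals at different flip angles,
    via the DESPOT1 linearization:
        S/sin(a) = E1 * S/tan(a) + M0 * (1 - E1),  E1 = exp(-TR/T1)
    """
    x = np.array([s1 / np.tan(alpha1), s2 / np.tan(alpha2)])
    y = np.array([s1 / np.sin(alpha1), s2 / np.sin(alpha2)])
    e1 = (y[1] - y[0]) / (x[1] - x[0])        # slope gives E1
    m0 = (y[0] - e1 * x[0]) / (1.0 - e1)      # intercept gives M0 * (1 - E1)
    t1 = -tr / np.log(e1)
    return t1, m0

# Round trip: simulate signals for known T1/M0, then recover them
tr, t1_true, m0_true = 0.009, 1.2, 1000.0     # seconds, arbitrary units
a1, a2 = np.deg2rad(4.0), np.deg2rad(18.0)    # hypothetical optimized angles
s1 = flash_signal(m0_true, t1_true, a1, tr)
s2 = flash_signal(m0_true, t1_true, a2, tr)
t1_est, m0_est = despot1_fit(s1, s2, a1, a2, tr)
print(round(t1_est, 3), round(m0_est, 1))  # 1.2 1000.0
```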
GIFTS SM EDU Radiometric and Spectral Calibrations
NASA Technical Reports Server (NTRS)
Tian, J.; Reisse, R. a.; Johnson, D. G.; Gazarik, J. J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.
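The radiometric part of the calibration stage is conventionally a two-point blackbody calibration, mapping raw spectra linearly onto the Planck radiances of hot and cold references. The sketch below is that textbook version; real FTS pipelines, GIFTS included, work with complex spectra and additional correction stages:

```python
import numpy as np

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavenumber_cm, temp_k):
    """Planck spectral radiance at a wavenumber (cm^-1) and temperature (K)."""
    nu = wavenumber_cm * 100.0  # convert to m^-1
    return 2 * H * C**2 * nu**3 / (np.exp(H * C * nu / (KB * temp_k)) - 1)

def two_point_calibrate(s_scene, s_hot, s_cold, wavenumber_cm, t_hot, t_cold):
    """Two-point blackbody calibration for one spectral channel: linearly
    map the raw scene spectrum onto the radiances of the hot and cold
    reference blackbodies."""
    b_hot = planck(wavenumber_cm, t_hot)
    b_cold = planck(wavenumber_cm, t_cold)
    gain = (b_hot - b_cold) / (s_hot - s_cold)
    return (s_scene - s_cold) * gain + b_cold

# Synthetic check: a scene halfway between the reference signals should
# map halfway between the two blackbody radiances.
s_cold, s_hot = 1000.0, 3000.0
L = two_point_calibrate(2000.0, s_hot, s_cold, 900.0, 300.0, 260.0)
mid = 0.5 * (planck(900.0, 300.0) + planck(900.0, 260.0))
print(np.isclose(L, mid))  # True
```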
Ridgway, Jessica L; Clayton, Russell B
2016-01-01
The purpose of this study was to examine the predictors and consequences associated with Instagram selfie posting. Thus, this study explored whether body image satisfaction predicts Instagram selfie posting and whether Instagram selfie posting is then associated with Instagram-related conflict and negative romantic relationship outcomes. A total of 420 Instagram users aged 18 to 62 years (M = 29.3, SD = 8.12) completed an online survey questionnaire. Analysis of a serial multiple mediator model using bootstrapping methods indicated that body image satisfaction was sequentially associated with increased Instagram selfie posting and Instagram-related conflict, which related to increased negative romantic relationship outcomes. These findings suggest that when Instagram users promote their body image satisfaction in the form of Instagram selfie posts, risk of Instagram-related conflict and negative romantic relationship outcomes might ensue. Findings from the current study provide a baseline understanding to potential and timely trends regarding Instagram selfie posting.
NASA Astrophysics Data System (ADS)
Montanini, R.; Quattrocchi, A.; Piccolo, S. A.
2016-09-01
Alphanumeric marking is a common technique employed in industrial applications for the identification of products. However, the realised mark can undergo deterioration, either through extensive use or deliberate deletion (e.g. removal of identification numbers from weapons or vehicles). Many destructive and non-destructive techniques have been attempted for recovery of the lost data, but all present several restrictions. In this paper, active infrared thermography has been exploited for the first time in order to assess its effectiveness in restoring paint-covered and abraded labels made by different manufacturing processes (laser, dot peen, impact, cold press and scribe). Optical excitation of the target surface has been achieved using pulse (PT), lock-in (LT) and step heating (SHT) thermography. Raw infrared images were analysed with dedicated image processing software originally developed in Matlab™, exploiting several methods, including thermographic signal reconstruction (TSR), guided filtering (GF), block guided filtering (BGF) and logarithmic transformation (LN). Proper processing of the raw infrared images resulted in superior contrast and enhanced readability. In particular, for deeply abraded marks, good outcomes were obtained by applying the logarithmic transformation to raw PT images and block guided filtering to raw phase LT images. With PT and LT it was relatively easy to recover labels covered by paint, with the latter providing better thermal contrast for all the examined targets. Step heating thermography, by contrast, never led to adequate label identification.
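The logarithmic transformation (LN) step can be sketched as follows; the toy image and the rescaling convention are assumptions, not the authors' exact implementation:

```python
import numpy as np

def log_enhance(img):
    """Logarithmic transformation used to boost contrast in weakly varying
    thermal images: compresses the bright range, expands the dark range,
    then rescales the result to [0, 1].
    """
    img = img.astype(float)
    shifted = img - img.min() + 1.0   # ensure strictly positive log input
    out = np.log(shifted)
    return (out - out.min()) / (out.max() - out.min())

# A faint mark (value 12) within a bright 10..11 background band
raw = np.array([[10.0, 10.5, 11.0],
                [10.0, 12.0, 11.0]])
enhanced = log_enhance(raw)
print(enhanced.min(), enhanced.max())  # 0.0 1.0
```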
Mennecke, Angelika; Svergun, Stanislav; Scholz, Bernhard; Royalty, Kevin; Dörfler, Arnd; Struffert, Tobias
2017-01-01
Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. • After coiling subarachnoid haemorrhage, metal artefacts seriously reduce FD-CT image quality. • This new metal artefact reduction algorithm is feasible for flat-detector CT. • After coiling, MAR is necessary for diagnostic quality of affected slices. • Slice-wise Pearson correlation is introduced to evaluate improvement of MAR in future studies. • Metal-unaffected parts of image are not modified by this MAR algorithm.
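The slice-wise Pearson correlation metric the authors introduce can be sketched directly, using toy volumes with one synthetic metal-affected slice:

```python
import numpy as np

def slicewise_pearson(vol_a, vol_b):
    """Pearson correlation computed per axial slice between two volumes,
    quantifying how closely each post-treatment slice matches the
    pre-treatment reference."""
    return np.array([np.corrcoef(a.ravel(), b.ravel())[0, 1]
                     for a, b in zip(vol_a, vol_b)])

rng = np.random.default_rng(1)
pre = rng.normal(40, 10, size=(3, 16, 16))            # pre-treatment reference
post = pre + rng.normal(0, 1, size=pre.shape)         # faithful reconstruction
post_artefact = post.copy()
post_artefact[1] += rng.normal(0, 40, size=(16, 16))  # one metal-affected slice

r_clean = slicewise_pearson(pre, post)
r_bad = slicewise_pearson(pre, post_artefact)
print(r_bad[1] < r_clean[1])  # True: the corrupted slice correlates worse
```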
NASA Astrophysics Data System (ADS)
Trefonas, Peter, III; Allen, Mary T.
1992-06-01
Shannon's information theory is adapted to analyze the photolithographic process, defining the mask pattern as the prior state. Definitions and constraints to the general theory are developed so that the information content at various stages of the lithographic process can be described. Its application is illustrated by exploring the information content within projected aerial images and resultant latent images. Next, a 3-dimensional molecular scale model of exposure, acid diffusion, and catalytic crosslinking in acid-hardened resists (AHR) is presented. In this model, initial positions of photogenerated acids are determined by probability functions generated from the aerial images and the local light intensity in the film. In order to simulate post-exposure baking processes, acids are diffused in a random walk manner, for which the catalytic chain length and the average distance between crosslinks can be set. Crosslink locations are defined in terms of the topologically minimized number required to link different chains. The size and location of polymer chains involved in a larger scale crosslinked network is established and related to polymer solubility. In this manner, the nature of the crosslinked latent image can be established. Good correlation with experimental data is found for the calculated percent insolubilization as a function of dose when the rms acid diffusion length is about 500 angstroms. Information analysis is applied in detail to the specific example of AHR chemistry. The information contained within the 3-D crosslinked latent image is explored as a function of exposure dose, catalytic chain length, and average distance between crosslinks. Eopt (the exposure dose which optimizes the information contained within the latent image) was found to vary with catalytic chain length in a manner similar to that observed experimentally in a plot of E90 versus post-exposure bake time.
Surprisingly, the information content of the crosslinked latent image remains high even when rms diffusion lengths are as long as 1500 angstroms. The information content of a standing wave is shown to decrease with increasing diffusion length, with essentially all standing wave information being lost at diffusion lengths greater than 450 angstroms. A unique mechanism for self-contrast enhancement and high resolution in AHR resist is proposed.
Apodized RFI filtering of synthetic aperture radar images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doerry, Armin Walter
2014-02-01
Fine resolution Synthetic Aperture Radar (SAR) systems necessarily require wide bandwidths that often overlap spectrum utilized by other wireless services. These other emitters pose a source of Radio Frequency Interference (RFI) to the SAR echo signals that degrades SAR image quality. Filtering, or excising, the offending spectral contaminants will mitigate the interference, but at a cost of often degrading the SAR image in other ways, notably by raising offensive sidelobe levels. This report proposes borrowing an idea from nonlinear sidelobe apodization techniques to suppress interference without the attendant increase in sidelobe levels. The simple post-processing technique is termed Apodized RFI Filtering (ARF).
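For context, the conventional excision step that ARF improves upon can be sketched as a hard spectral notch: bins dominated by narrowband interference are zeroed before inverting the FFT. This is a simplified stand-in for illustration only, not Doerry's ARF algorithm (the median-based threshold rule is our assumption); it is exactly this hard notching that raises the sidelobe levels ARF is designed to avoid.

```python
import numpy as np

def notch_excise(echo, thresh_db=20.0):
    """Naive RFI excision: zero spectral bins whose power exceeds the
    median spectrum level by `thresh_db` dB, then invert the FFT.
    The abrupt notches this creates are what raise image sidelobes."""
    spec = np.fft.fft(echo)
    power_db = 20 * np.log10(np.abs(spec) + 1e-12)
    mask = power_db < np.median(power_db) + thresh_db   # True = keep bin
    return np.fft.ifft(spec * mask), mask
```

Applied to a wideband chirp-like echo contaminated by a strong narrowband tone, the tone's bin is excised while most of the signal band is retained.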
MO-B-BRC-00: Prostate HDR Treatment Planning - Considering Different Imaging Modalities
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities in either an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
NASA Astrophysics Data System (ADS)
Gorczynska, Iwona; Migacz, Justin; Zawadzki, Robert J.; Sudheendran, Narendran; Jian, Yifan; Tiruveedhula, Pavan K.; Roorda, Austin; Werner, John S.
2015-07-01
We tested and compared the capability of multiple optical coherence tomography (OCT) angiography methods: phase variance, amplitude decorrelation and speckle variance, with application of the split-spectrum technique, to image the chorioretinal complex of the human eye. To test the possibility of improving OCT imaging stability we utilized a real-time tracking scanning laser ophthalmoscopy (TSLO) system combined with a swept source OCT setup. In addition, we implemented a post-processing volume averaging method for improved angiographic image quality and reduction of motion artifacts. The OCT system operated at a central wavelength of 1040 nm to enable sufficient depth penetration into the choroid. Imaging was performed in the eyes of healthy volunteers and patients diagnosed with age-related macular degeneration.
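Of the angiography methods compared, speckle variance is the simplest to state: flowing blood decorrelates the speckle pattern between repeated B-scans at the same location, so per-pixel intensity variance highlights vasculature. A minimal sketch of the computation (our own illustration, not the authors' implementation):

```python
import numpy as np

def speckle_variance(bscans):
    """Speckle-variance angiography: per-pixel intensity variance across
    N repeated B-scans acquired at the same location
    (input shape: N x depth x width). Flow decorrelates the speckle,
    producing high variance; static tissue stays near zero."""
    return np.var(np.abs(np.asarray(bscans)), axis=0)
```

The post-processing volume averaging mentioned in the abstract would then average several such variance maps to suppress motion artifacts.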
NASA Astrophysics Data System (ADS)
Mulaveesala, Ravibabu; Dua, Geetika; Arora, Vanita; Siddiqui, Juned A.; Muniyappa, Amarnath
2017-05-01
In recent years, aperiodic, transient, pulse-compression-favourable infrared imaging methodologies have been demonstrated to be reliable, quantitative, remote characterization and evaluation techniques for testing various biomaterials. The present work demonstrates one such pulse-compression-favourable aperiodic technique, frequency modulated thermal wave imaging, for bone diagnostics, especially considering bone with tissue, skin and muscle over-layers. In order to assess the capability of the proposed frequency modulated thermal wave imaging technique to detect density variations in a multi-layered skin-fat-muscle-bone structure, finite element modeling and simulation studies have been carried out. Further, frequency- and time-domain post-processing approaches have been applied to the temporal temperature data in order to improve the detection capability.
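The pulse-compression step that frequency modulated (chirped) excitation enables is, at its core, a matched-filter cross-correlation of the recorded temperature response with the excitation reference; the correlation peak concentrates the swept-frequency energy at the signal's delay. A hedged sketch (illustrative only; the sweep band, duration and sampling rate are our assumptions, not the paper's parameters):

```python
import numpy as np

def pulse_compress(response, reference):
    """Matched filtering: cross-correlate the recorded thermal response
    with the frequency-modulated excitation reference. The correlation
    peak concentrates the swept-frequency energy at the signal's delay."""
    return np.correlate(response, reference, mode='full')

# assumed excitation: linear chirp swept 0.01 -> 0.1 Hz over 100 s
fs, T, f0, f1 = 10.0, 100.0, 0.01, 0.1
t = np.arange(0, T, 1 / fs)
chirp = np.sin(2 * np.pi * (f0 + (f1 - f0) * t / (2 * T)) * t)
```

Correlating a delayed copy of the excitation with the reference recovers the delay at the correlation peak, which is the basis of the time-domain post-processing mentioned above.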
Brady, Ryan J; Hampton, Robert R
2018-06-01
Working memory is a system by which a limited amount of information can be kept available for processing after the cessation of sensory input. Because working memory resources are limited, it is adaptive to focus processing on the most relevant information. We used a retro-cue paradigm to determine the extent to which monkey working memory possesses control mechanisms that focus processing on the most relevant representations. Monkeys saw a sample array of images, and shortly after the array disappeared, they were visually cued to a location that had been occupied by one of the sample images. The cue indicated which image should be remembered for the upcoming recognition test. By determining whether the monkeys were more accurate and quicker to respond to cued images compared to un-cued images, we tested the hypothesis that monkey working memory focuses processing on relevant information. We found a memory benefit for the cued image in terms of accuracy and retrieval speed with a memory load of two images. With a memory load of three images, we found a benefit in retrieval speed but only after shortening the onset latency of the retro-cue. Our results demonstrate previously unknown flexibility in the cognitive control of memory in monkeys, suggesting that control mechanisms in working memory likely evolved in a common ancestor of humans and monkeys more than 32 million years ago. Future work should be aimed at understanding the interaction between memory load and the ability to control memory resources, and the role of working memory control in generating differences in cognitive capacity among primates. Copyright © 2018 Elsevier B.V. All rights reserved.
Three-dimensional imaging using phase retrieval with two focus planes
NASA Astrophysics Data System (ADS)
Ilovitsh, Tali; Ilovitsh, Asaf; Weiss, Aryeh; Meir, Rinat; Zalevsky, Zeev
2016-03-01
This work presents a technique for full 3D imaging of biological samples tagged with gold nanoparticles (GNPs) using only two images, rather than the many images per volume currently needed for 3D optical sectioning microscopy. The proposed approach is based on the Gerchberg-Saxton (GS) phase retrieval algorithm. The reconstructed field is free-space propagated to all other focus planes in post-processing, and the 2D z-stack is merged to create a 3D image of the sample with high fidelity. Because the phase retrieval is applied to nanoparticles, the ambiguities typical of the Gerchberg-Saxton algorithm are eliminated. In addition, since the method requires the capture of only two images, it can be suitable for 3D live cell imaging. The proposed concept is presented and validated both on simulated data and experimentally.
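The two-plane idea can be sketched with an angular-spectrum free-space propagator inside a standard Gerchberg-Saxton loop: iterate between the two measured intensity planes, enforcing the measured amplitude in each while keeping the evolving phase. This is a generic GS implementation for illustration, not the authors' code; grid size, pixel pitch and wavelength below are arbitrary assumptions.

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, dx):
    """Free-space propagate a complex field by distance dz
    (angular spectrum method; evanescent components are cut off)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def gerchberg_saxton(I1, I2, dz, wavelength, dx, iters=50):
    """Recover the complex field in plane 1 from intensities I1, I2
    measured in two planes separated by dz, by iterating the amplitude
    constraints; the retrieved field can then be propagated to any
    other focus plane."""
    field = np.sqrt(I1).astype(complex)
    for _ in range(iters):
        f2 = angular_spectrum(field, dz, wavelength, dx)
        f2 = np.sqrt(I2) * np.exp(1j * np.angle(f2))
        f1 = angular_spectrum(f2, -dz, wavelength, dx)
        field = np.sqrt(I1) * np.exp(1j * np.angle(f1))
    return field
```

Once the field in one plane is retrieved, the whole z-stack is generated numerically by repeated calls to the propagator, which is the free-space propagation step the abstract describes.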
NASA Astrophysics Data System (ADS)
Bredfeldt, Jeremy S.; Liu, Yuming; Pehlke, Carolyn A.; Conklin, Matthew W.; Szulczewski, Joseph M.; Inman, David R.; Keely, Patricia J.; Nowak, Robert D.; Mackie, Thomas R.; Eliceiri, Kevin W.
2014-01-01
Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.
Dynamic image fusion and general observer preference
NASA Astrophysics Data System (ADS)
Burks, Stephen D.; Doe, Joshua M.
2010-04-01
Recent developments in image fusion give the user community many options for presenting imagery to an end-user. Individuals at the US Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate have developed an electronic system that allows users to quickly and efficiently determine optimal image fusion algorithms and color parameters based upon collected imagery and videos from environments that are typical for observers in a military setting. After performing multiple multi-band data collections in a variety of military-like scenarios, different waveband, fusion algorithm, image post-processing, and color choices are presented to observers as an output of the fusion system. The observer preferences can give guidelines as to how specific scenarios should affect the presentation of fused imagery.
Thermal imaging of plasma with a phased array antenna in QUEST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Kishore, E-mail: mishra@triam.kyushu-u.ac.jp; Nagata, K.; Akimoto, R.
2014-11-15
A thermal imaging system to measure plasma Electron Bernstein Emission (EBE) emanating from the mode conversion region in overdense plasma is discussed. Unlike conventional ECE/EBE imaging, this diagnostic does not employ any active mechanical scanning mirrors or focusing optics to scan for the emission cones in plasma. Instead, a standard 3 × 3 waveguide array antenna is used as a passive receiver to collect emission from plasma, and image reconstruction is done by accurate measurement of the phase and intensity of these signals with a heterodyne detection technique. A broadband noise source simulating the EBE is installed near the expected mode conversion region, and its position is successfully reconstructed in post-processing using a phased-array technique.
Kamran, Mudassar; Fowler, Kathryn J; Mellnick, Vincent M; Sicard, Gregorio A; Narra, Vamsi R
2016-06-01
Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other more common aortic processes on surveillance imaging. Radiologists are often unfamiliar with this rare entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to illustrate the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.
Development of methods for the analysis of multi-mode TFM images
NASA Astrophysics Data System (ADS)
Sy, K.; Bredif, P.; Iakovleva, E.; Roy, O.; Lesselier, D.
2018-05-01
TFM (Total Focusing Method) is an advanced post-processing imaging algorithm for ultrasonic array data that shows good potential in defect detection and characterization. It can be employed using any of an effectively unlimited number of paths between transducer and focusing point. Depending on the geometry and the characteristics of the defect in a given part, different modes are appropriate for the defect reconstruction. Furthermore, non-physical indications can be observed that are prone to misinterpretation. These imaging artifacts are due to the coexistence of several contributions involving several modes of propagation and interactions with possible defects and/or the geometry of the part. Two methods for filtering artifacts and reducing the number of TFM images are developed and illustrated.
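The core TFM delay-and-sum for the simplest direct (single-mode, longitudinal-longitudinal) path can be sketched as follows. This is a minimal illustration, not the authors' multi-mode implementation; the function names, the nearest-sample time-of-flight rounding, and the 2D contact-array geometry are our choices.

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """Direct-path TFM: for every image pixel, sum the full-matrix-
    capture signals fmc[tx, rx, t] at the time of flight
    transmitter -> pixel -> receiver, then take the magnitude."""
    img = np.zeros((grid_z.size, grid_x.size))
    nt = fmc.shape[2]
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            d = np.sqrt((elem_x - x) ** 2 + z ** 2)   # element-pixel distances
            tof = (d[:, None] + d[None, :]) / c        # tx -> pixel -> rx
            idx = np.round(tof * fs).astype(int)
            valid = idx < nt
            tx, rx = np.nonzero(valid)
            img[iz, ix] = np.abs(fmc[tx, rx, idx[valid]].sum())
    return img
```

Each additional mode (e.g. paths involving mode conversion or backwall skips) would add another such summation with its own time-of-flight law; the coexistence of those contributions is what produces the artifacts the paper's filtering methods address.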
Bender, Stephan; Rellum, Thomas; Freitag, Christine; Resch, Franz; Rietschel, Marcella; Treutlein, Jens; Jennen-Steinmetz, Christine; Brandeis, Daniel; Banaschewski, Tobias; Laucht, Manfred
2012-01-01
Background Dopamine plays an important role in orienting and the regulation of selective attention to relevant stimulus characteristics. Thus, we examined the influences of functional variants related to dopamine inactivation in the dopamine transporter (DAT1) and catechol-O-methyltransferase genes (COMT) on the time-course of visual processing in a contingent negative variation (CNV) task. Methods 64-channel EEG recordings were obtained from 195 healthy adolescents of a community-based sample during a continuous performance task (A-X version). Early and late CNV as well as preceding visual evoked potential components were assessed. Results Significant additive main effects of DAT1 and COMT on the occipito-temporal early CNV were observed. In addition, there was a trend towards an interaction between the two polymorphisms. Source analysis showed early CNV generators in the ventral visual stream and in frontal regions. There was a strong negative correlation between occipito-temporal visual post-processing and the frontal early CNV component. The early CNV time interval 500–1000 ms after the visual cue was specifically affected while the preceding visual perception stages were not influenced. Conclusions Late visual potentials allow the genomic imaging of dopamine inactivation effects on visual post-processing. The same specific time-interval has been found to be affected by DAT1 and COMT during motor post-processing but not motor preparation. We propose the hypothesis that similar dopaminergic mechanisms modulate working memory encoding in both the visual and motor and perhaps other systems. PMID:22844499
NASA Astrophysics Data System (ADS)
Shemansky, D. E.; Liu, X.; Melin, H.
2009-12-01
Images of the Saturn atmosphere and magnetosphere in H Lyα emission during the Cassini spacecraft pre- and post-Saturn orbit insertion (SOI) event, obtained using the UVIS experiment FUV spectrograph, have revealed definitive evidence for the escape of H I atoms from the top of the thermosphere. An image at 0.1 × 0.1 Saturn equatorial radii (RS) pixel resolution with an edge-on view of the rings shows a distinctive structure (plume) with full width at half maximum (FWHM) of 0.56 RS at the exobase sub-solar limb at ~ -13.5° latitude as part of the distributed outflow of H I from the sunlit hemisphere, with a counterpart on the antisolar side peaking near the equator above the exobase limb. The structure of the image indicates that part of the outflowing population is sub-orbital and re-enters the thermosphere on an approximately 5 h time scale. An evident larger, more broadly distributed component fills the magnetosphere to beyond 45 RS in the orbital plane in a distribution asymmetric in local time, similar to an image obtained at Voyager 1 post-encounter in a different observational geometry. It has been found that H2 singlet ungerade Rydberg EUV/FUV emission spectra collected with the H Lyα into the image mosaic show a distinctive resonance property correlated with the H Lyα plume. The inferred approximate globally averaged energy deposition at the top of the thermosphere from the production of the hot atomic hydrogen accounts for the measured atmospheric temperature. The only known process capable of producing the atoms at the required few eV/atom kinetic energy appears to be direct electron excitation of non-LTE H2 X¹Σg⁺(v,J) into the repulsive H2 b³Σu⁺ state, although details of the processes need to be examined under the constraints imposed by the observations to determine compatibility with current knowledge of hydrogen rate processes.
Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C
2014-01-01
Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology as well as its first elements are presented. Simulations are performed using the well-validated open-source GATE toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The created database will be freely available, and our current work is towards its extension by simulating additional clinical pathologies.
Chen, Baoying; Wang, Wei; Huang, Jin; Zhao, Ming; Cui, Guangbin; Xu, Jing; Guo, Wei; Du, Pang; Li, Pei; Yu, Jun
2010-10-01
To retrospectively evaluate the diagnostic abilities of two post-processing methods provided by the GE Senographe DS system, tissue equalization (TE) and premium view (PV), in full-field digital mammography (FFDM). In accordance with the ethical standards of the World Medical Association, this study was approved by the regional ethics committee and signed informed patient consents were obtained. We retrospectively reviewed digital mammograms from 101 women (mean age, 47 years; range, 23-81 years) in the TE and PV modes, respectively. Three radiologists, fully blinded to the post-processing methods, all patient clinical information and histologic results, read the images using objective image interpretation criteria for diagnostic information end points such as lesion border delineation, definition of disease extent, and visualization of internal and surrounding morphologic features of the lesions. Overall diagnostic impression in terms of lesion conspicuity, detectability and diagnostic confidence was also assessed. Between-group comparisons were performed with the Wilcoxon signed rank test. Readers 1, 2, and 3 demonstrated a significantly better overall impression of PV in 29, 27, and 24 patients, respectively, compared with TE in 12, 13, and 11 patients (p<0.05). A significantly (p<0.05) better impression of PV was also demonstrated for the diagnostic information end points. Importantly, PV proved more sensitive than TE in detecting malignant lesions in dense breasts, rather than benign lesions or malignancy in non-dense breasts (p<0.01). PV compared with TE provides markedly better diagnostic information in FFDM, particularly for patients with malignancy in dense breasts. Copyright © 2009 Elsevier Ireland Ltd. All rights reserved.
Galderisi, Maurizio; Mele, Donato; Marino, Paolo Nicola
2005-01-01
Tissue Doppler (TD) is an ultrasound tool providing a quantitative assessment of left ventricular regional myocardial function in different modalities. Spectral pulsed wave (PW) TD, performed online during the examination, measures instantaneous myocardial velocities. By means of color TD, velocity images are digitally stored for subsequent off-line analysis and mean myocardial velocities are measured. An implementation of color TD is strain rate imaging (SRI), based on post-processing conversion of regional velocities into local myocardial deformation rate (strain rate) and percent deformation (strain). These three modalities have been applied to stress echocardiography for quantitative evaluation of regional left ventricular function and detection of ischemia and viability. They present advantages and limitations. PW TD does not permit the simultaneous assessment of multiple walls and is therefore not compatible with clinical stress echocardiography, although it could be used in a laboratory setting. Color TD provides a spatial map of velocity throughout the myocardium, but its results are strongly affected by the frame rate. Both color TD and PW TD are also influenced by overall cardiac motion and tethering from adjacent segments, and require reference velocity values for interpretation of regional left ventricular function. High frame rate (i.e. >150 frames/s) post-processing-derived SRI can potentially overcome these limitations, since measurements of myocardial deformation do not show any significant apex-to-base gradient. Preliminary studies have shown encouraging results regarding the ability of SRI to detect ischemia and viability, in terms of strain rate changes and/or evidence of post-systolic thickening. SRI is, however, Doppler-dependent and time-consuming. Further technical refinements are needed to improve its application and to introduce new ultrasound modalities that overcome the limitations of Doppler-derived deformation analysis.
NASA Astrophysics Data System (ADS)
Roggemann, M.; Soehnel, G.; Archer, G.
Atmospheric turbulence degrades the resolution of images of space objects far beyond that predicted by diffraction alone. Adaptive optics telescopes have been widely used to compensate these effects, but as users seek to extend the operating envelopes of adaptive optics telescopes to more demanding conditions, such as daylight operation and operation at low elevation angles, the level of compensation provided will degrade. We have been investigating the use of advanced wavefront reconstructors and post-detection image reconstruction to overcome the effects of turbulence on imaging systems in these more demanding scenarios. In this paper we show results comparing the optical performance of the exponential reconstructor, the least squares reconstructor, and two versions of a reconstructor based on the stochastic parallel gradient descent algorithm in a closed-loop adaptive optics system using a conventional continuous-facesheet deformable mirror and a Hartmann sensor. The performance of these reconstructors has been evaluated under a range of source visual magnitudes and zenith angles up to 70 degrees. We have also simulated satellite images and applied speckle imaging, multi-frame blind deconvolution algorithms, and deconvolution algorithms that presume the average point spread function is known to compute object estimates. Our work thus far indicates that the combination of adaptive optics and post-detection image processing will extend the useful envelope of the current generation of adaptive optics telescopes.
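The stochastic parallel gradient descent (SPGD) principle behind two of the compared reconstructors can be illustrated on a toy metric: perturb all actuator commands simultaneously with a random sign pattern, measure the resulting metric change, and step along the estimated gradient. This is the generic SPGD update, not the authors' closed-loop reconstructor; the "Strehl-like" metric, gains and actuator count below are invented for illustration.

```python
import numpy as np

def spgd(metric, n_act, iters=2000, delta=0.05, gain=2.0, seed=0):
    """Stochastic parallel gradient descent: apply a random simultaneous
    +/-delta perturbation to all actuator commands, measure the metric
    change, and update along the estimated gradient. `metric` is a
    scalar figure of merit to be maximized."""
    rng = np.random.default_rng(seed)
    u = np.zeros(n_act)
    for _ in range(iters):
        du = delta * rng.choice([-1.0, 1.0], size=n_act)
        dj = metric(u + du) - metric(u - du)   # two-sided metric probe
        u += gain * dj * du                    # step along estimated gradient
    return u

# toy "Strehl-like" metric peaked at an unknown aberration vector
target = np.array([0.3, -0.2, 0.5, 0.1])
metric = lambda u: np.exp(-np.sum((u - target) ** 2))
```

The appeal in adaptive optics is that only the scalar metric is measured, with no explicit wavefront sensing; all actuators are updated in parallel each iteration.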
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) addresses a long-standing problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744
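The non-local means component can be illustrated in its basic single-scale form: each pixel becomes a weighted average of pixels in a search window, with weights derived from patch similarity rather than spatial distance. This is a textbook NLM sketch for illustration, not the paper's multiscale variant; the patch size, search window and filtering parameter h are illustrative.

```python
import numpy as np

def nlm_filter(img, patch=3, search=7, h=0.1):
    """Basic (single-scale) non-local means denoising. Weights come
    from the mean squared difference between the patch around the
    target pixel and patches around candidate pixels in the search
    window; similar patches contribute more to the average."""
    p, s = patch // 2, search // 2
    pad = np.pad(img.astype(float), p + s, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + p + s, j + p + s
            ref = pad[ci - p:ci + p + 1, cj - p:cj + p + 1]
            wsum = vsum = 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = pad[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    wsum += w
                    vsum += w * pad[ni, nj]
            out[i, j] = vsum / wsum
    return out
```

A multiscale version would apply this filter across resolution levels of the LR inputs, which is how it enters the regularized SR pipeline described above.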
Rapid Disaster Damage Estimation
NASA Astrophysics Data System (ADS)
Vu, T. T.
2012-07-01
The experiences from recent disaster events showed that detailed information derived from high-resolution satellite images could accommodate the requirements from damage analysts and disaster management practitioners. Richer information contained in such high-resolution images, however, increases the complexity of image analysis. As a result, few image analysis solutions can be practically used under time pressure in the context of post-disaster and emergency responses. To fill the gap in employment of remote sensing in disaster response, this research develops a rapid high-resolution satellite mapping solution built upon a dual-scale contextual framework to support damage estimation after a catastrophe. The target objects are building (or building blocks) and their condition. On the coarse processing level, statistical region merging deployed to group pixels into a number of coarse clusters. Based on majority rule of vegetation index, water and shadow index, it is possible to eliminate the irrelevant clusters. The remaining clusters likely consist of building structures and others. On the fine processing level details, within each considering clusters, smaller objects are formed using morphological analysis. Numerous indicators including spectral, textural and shape indices are computed to be used in a rule-based object classification. Computation time of raster-based analysis highly depends on the image size or number of processed pixels in order words. Breaking into 2 level processing helps to reduce the processed number of pixels and the redundancy of processing irrelevant information. In addition, it allows a data- and tasks- based parallel implementation. The performance is demonstrated with QuickBird images captured a disaster-affected area of Phanga, Thailand by the 2004 Indian Ocean tsunami are used for demonstration of the performance. 
The developed solution will be implemented in different platforms as well as a web processing service for operational uses.
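The coarse-level majority-rule elimination described above can be sketched in a few lines. The NDVI threshold (0.4) and the 50% majority criterion below are illustrative values, not taken from the paper; water and shadow indices would be handled the same way.

```python
import numpy as np

def keep_candidate_clusters(ndvi, labels, veg_thresh=0.4):
    """Majority-rule elimination sketch: drop a cluster if most of its
    pixels look like vegetation by NDVI; the surviving clusters are the
    candidate building clusters passed to the fine processing level."""
    kept = []
    for lab in np.unique(labels):
        pixels = ndvi[labels == lab]
        if np.mean(pixels > veg_thresh) <= 0.5:   # majority NOT vegetation
            kept.append(int(lab))
    return kept
```

Because whole clusters are discarded at once, the fine level only ever touches a fraction of the image, which is the source of the speed-up the abstract describes.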
Chun, Ji-Won; Park, Hae-Jeong; Kim, Dai Jin; Kim, Eosu; Kim, Jae-Jin
2017-07-01
Conflict processing mediated by fronto-striatal regions may be influenced by emotional properties of stimuli. This study aimed to examine the effects of emotion repetition on cognitive control in a conflict-provoking situation. Twenty-one healthy subjects were scanned using functional magnetic resonance imaging while performing a sequential cognitive conflict task composed of emotional stimuli. The regional effects were analyzed according to the repetition or non-repetition of cognitive congruency and emotional valence between the preceding and current trials. Post-incongruence interference in error rate and reaction time was significantly smaller than post-congruence interference, particularly under the repeated-positive and non-repeated-positive conditions, respectively, and post-incongruence interference, compared to post-congruence interference, increased activity in the ACC, DLPFC, and striatum. ACC and DLPFC activities were significantly correlated with error rate or reaction time in some conditions, and fronto-striatal connections were related to the conflict processing heightened by negative emotion. These findings suggest that the repetition of emotional stimuli adaptively regulates cognitive control and that the fronto-striatal circuit may engage in the conflict adaptation process induced by emotion repetition. Both repetition enhancement and repetition suppression of prefrontal activity may underlie the relationship between emotion and conflict adaptation. Copyright © 2017 Elsevier B.V. All rights reserved.
2011-01-01
Background Segmentation is the most crucial part of computer-aided bone age assessment. A well-known type of segmentation performed in such systems is adaptive segmentation. While providing better results than global thresholding, adaptive segmentation produces a lot of unwanted noise that can affect the later process of epiphysis extraction. Methods A method with anisotropic diffusion as pre-processing and a novel Bounded Area Elimination (BAE) post-processing algorithm is proposed to improve the ossification site localization technique, with the intent of improving the adaptive segmentation result and the region-of-interest (ROI) localization accuracy. Results The results are evaluated by quantitative analysis and qualitative analysis using texture feature evaluation. The results indicate that image homogeneity after anisotropic diffusion improved by an average of 17.59% for each age group. Experiments showed that smoothness improved by an average of 35% after the BAE algorithm, and ROI localization accuracy improved by an average of 8.19%. The MSSIM improved by an average of 10.49% after performing the BAE algorithm on the adaptively segmented hand radiographs. Conclusions The results indicate that hand radiographs which have undergone anisotropic diffusion show greatly reduced noise in the segmented image, and that the proposed BAE algorithm is capable of removing the artifacts generated in adaptive segmentation. PMID:21952080
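As a sketch of the anisotropic-diffusion pre-processing step, the classic Perona-Malik scheme with an exponential edge-stopping function can be written as follows. The iteration count and parameters are illustrative, and edges wrap around for brevity; the paper's exact configuration is not specified here.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, lam=0.2):
    """Perona-Malik anisotropic diffusion: smooth within regions while
    the conduction coefficient c = exp(-(grad/kappa)^2) suppresses
    diffusion across strong edges. lam <= 0.25 keeps the scheme stable."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences to the four neighbours (periodic boundary via roll)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # edge-stopping conduction coefficients
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += lam * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

On a noisy but otherwise flat radiograph region, the gradients stay well below kappa, so the filter behaves like ordinary diffusion and steadily reduces the noise variance, which is exactly the homogeneity improvement the abstract reports.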
Automated processing pipeline for neonatal diffusion MRI in the developing Human Connectome Project.
Bastiani, Matteo; Andersson, Jesper L R; Cordero-Grande, Lucilio; Murgasova, Maria; Hutter, Jana; Price, Anthony N; Makropoulos, Antonios; Fitzgibbon, Sean P; Hughes, Emer; Rueckert, Daniel; Victor, Suresh; Rutherford, Mary; Edwards, A David; Smith, Stephen M; Tournier, Jacques-Donald; Hajnal, Joseph V; Jbabdi, Saad; Sotiropoulos, Stamatios N
2018-05-28
The developing Human Connectome Project is set to create and make available to the scientific community a 4-dimensional map of functional and structural cerebral connectivity from 20 to 44 weeks post-menstrual age, to allow exploration of the genetic and environmental influences on brain development, and the relation between connectivity and neurocognitive function. A large set of multi-modal MRI data from fetuses and newborn infants is currently being acquired, along with genetic, clinical and developmental information. In this overview, we describe the neonatal diffusion MRI (dMRI) image processing pipeline and the structural connectivity aspect of the project. Neonatal dMRI data poses specific challenges, and standard analysis techniques used for adult data are not directly applicable. We have developed a processing pipeline that deals directly with neonatal-specific issues, such as severe motion and motion-related artefacts, small brain sizes, high brain water content and reduced anisotropy. This pipeline allows automated analysis of in-vivo dMRI data, probes tissue microstructure, reconstructs a number of major white matter tracts, and includes an automated quality control framework that identifies processing issues or inconsistencies. We here describe the pipeline and present an exemplar analysis of data from 140 infants imaged at 38-44 weeks post-menstrual age. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Real-Time and Post-Processed Georeferencing for Hyperspectral Drone Remote Sensing
NASA Astrophysics Data System (ADS)
Oliveira, R. A.; Khoramshahi, E.; Suomalainen, J.; Hakala, T.; Viljanen, N.; Honkavaara, E.
2018-05-01
The use of drones and photogrammetric technologies is increasing rapidly in different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and forestry. Recent developments in miniaturized and low-cost inertial measurement systems and GNSS sensors, together with real-time kinematic (RTK) position data, are offering new perspectives for comprehensive remote sensing applications. The combination of these sensors with light-weight and low-cost multi- or hyperspectral frame sensors on drones provides the opportunity to create near real-time or real-time remote sensing data of target objects. We have developed a system with direct georeferencing onboard the drone, to be used with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate real-time georeferencing in comparison with post-processing solutions. Experimental data sets were captured at agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results showed that real-time remote sensing is promising and feasible at both test sites.
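The essence of direct georeferencing, computing ground coordinates from the onboard GNSS position and attitude without ground control points, can be sketched for the simplified near-nadir, flat-terrain case. Full roll/pitch handling and boresight calibration are omitted, and all names and parameters below are illustrative.

```python
import numpy as np

def georeference_nadir(cam_pos, yaw_deg, px, py, focal_px, ground_z=0.0):
    """Hedged sketch of direct georeferencing for a near-nadir frame
    camera: rotate the image ray by the platform yaw and intersect it
    with a flat ground plane at height ground_z (metres)."""
    # Camera-frame ray for pixel offset (px, py) from the principal point
    ray = np.array([px, py, -focal_px], dtype=float)
    # Rotate by yaw about the vertical axis
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    ray = R @ ray
    # Scale the ray so it reaches the ground plane
    t = (ground_z - cam_pos[2]) / ray[2]
    return cam_pos + t * ray
```

The onboard real-time solution and the post-processed solution differ only in the quality of `cam_pos` and the attitude angles fed into this projection, which is why the two can be compared directly in ground coordinates.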
Yaman, Şengül; Ayaz, Sultan
2015-01-01
Objective: To evaluate the effect of information provided before surgery on the self-esteem and body image of women undergoing hysterectomy. Materials and Methods: The study had a semi-experimental design with pre-post tests. A total of 60 women were included in the study and divided into two groups, the intervention group (n=30) and control group (n=30). A questionnaire, the Rosenberg self-esteem scale, and the body image scale were used to collect data. Results: The pre- and post-test body image scores were similar in the intervention group patients, but the post-test scores were significantly higher in the control group (p<0.05). The pre- and post-test self-esteem scores were again similar in the intervention group, but the post-test scores were significantly lower in the control group (p<0.05). Conclusion: This study revealed that health education given to patients prior to hysterectomy protects body image and consequently self-esteem. PMID:28913071
Adaptive hyperspectral imager: design, modeling, and control
NASA Astrophysics Data System (ADS)
McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine
2015-08-01
An adaptive hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution independent of spectral resolution. The system makes it possible to define a variety of acquisition schemes, in particular near-snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach, achieving near-snapshot spectral acquisitions without resorting to any computationally heavy post-processing or cumbersome calibration.
Digitizing the KSO white light images
NASA Astrophysics Data System (ADS)
Pötzi, W.
From 1989 to 2007 the Sun was observed at the Kanzelhöhe Observatory in white light on photographic film. The images are on transparent sheet films and are currently not available to the scientific community. The films are now being scanned with a photo scanner for transparent film material and then prepared for scientific use. The post-processing programs are already complete and produce FITS and JPEG files as output. Scanning should be finished by the end of 2011, and the data should then be available via our homepage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ochs, R.
The responsibilities of the Food and Drug Administration (FDA) have increased since the inception of the Food and Drugs Act in 1906. Medical devices first came under comprehensive regulation with the passage of the 1938 Food, Drug, and Cosmetic Act. In 1971 FDA also took on the responsibility for consumer protection against unnecessary exposure to radiation-emitting devices for home and occupational use. However, it was not until 1976, under the Medical Device Regulation Act, that the FDA was responsible for the safety and effectiveness of medical devices. This session will be presented by the Division of Radiological Health (DRH) and the Division of Imaging, Diagnostics, and Software Reliability (DIDSR) from the Center for Devices and Radiological Health (CDRH) at the FDA. The symposium will discuss how we protect and promote public health, with a focus on medical physics applications organized into four areas: pre-market device review, post-market surveillance, device compliance, and current regulatory research efforts and partnerships with other organizations. The pre-market session will summarize the pathways FDA uses to regulate the investigational use and commercialization of diagnostic imaging and radiation therapy medical devices in the US, highlighting resources available to assist investigators and manufacturers. The post-market session will explain the post-market surveillance and compliance activities FDA performs to monitor the safety and effectiveness of devices on the market. The third session will describe research efforts that support the regulatory mission of the Agency. An overview of our regulatory research portfolio to advance our understanding of medical physics and imaging technologies and approaches to their evaluation will be discussed.
Lastly, mechanisms that FDA uses to seek public input and promote collaborations with professional, government, and international organizations, such as AAPM, the International Electrotechnical Commission (IEC), Image Gently, and the Quantitative Imaging Biomarkers Alliance (QIBA), among others, to fulfill FDA’s mission will be discussed. Learning Objectives: Understand FDA’s pre-market and post-market review processes for medical devices. Understand FDA’s current regulatory research activities in the areas of medical physics and imaging products. Understand how being involved with AAPM and other organizations can help to promote innovative, safe and effective medical devices. J. Delfino, nothing to disclose.
Tamagnini, Francesco; Jeynes, J. Charles G.; Mattana, Sara; Swift, Imogen; Nallala, Jayakrupakar; Hancock, Jane; Brown, Jonathan T.; Randall, Andrew D.; Stone, Nick
2018-01-01
Recent work using micro-Fourier transform infrared (μFTIR) imaging has revealed that a lipid-rich layer surrounds many plaques in post-mortem Alzheimer's brain. However, the origin of this lipid layer is not known, nor is its role in the pathogenesis of Alzheimer's disease (AD). Here, we studied the biochemistry of plaques in situ using a model of AD. We combined FTIR, Raman and immunofluorescence images, showing that astrocyte processes co-localise with the lipid ring surrounding many plaques. We used μFTIR imaging to rapidly measure chemical signatures of plaques over large fields of view, and selected plaques for higher resolution analysis with Raman microscopy. Raman maps showed similar lipid rings and dense protein cores as in FTIR images, but also revealed cell bodies. We confirmed the presence of plaques using amylo-glo staining, and detected astrocytes using immunohistochemistry, revealing astrocyte co-localisation with lipid rings. This work is important because it correlates biochemical changes surrounding the plaque with the biological process of astrogliosis. PMID:29230441
Breast cancer mitosis detection in histopathological images with spatial feature extraction
NASA Astrophysics Data System (ADS)
Albayrak, Abdülkadir; Bilgin, Gökhan
2013-12-01
In this work, cellular mitosis detection in histopathological images has been investigated. Mitosis detection is a very expensive and time-consuming process. The development of digital imaging in pathology has enabled a reasonable and effective solution to this problem. Segmentation of digital images provides easier analysis of cell structures in histopathological data. To differentiate normal and mitotic cells in histopathological images, the feature extraction step is crucial for system accuracy. A mitotic cell has more distinctive textural dissimilarities than other normal cells. Hence, it is important to incorporate spatial information in the feature extraction or post-processing steps. As a main part of this study, the Haralick texture descriptor has been applied with different spatial window sizes in the RGB and La*b* color spaces, so that spatial dependencies of normal and mitotic cellular pixels can be evaluated within different pixel neighborhoods. Extracted features are compared with various sample sizes by support vector machines using k-fold cross validation. The results show that separation accuracy on mitotic and non-mitotic cellular pixels improves with increasing spatial window size.
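Haralick features are derived from a grey-level co-occurrence matrix (GLCM). A minimal sketch for the horizontal-neighbour GLCM and two of the classic features (contrast and homogeneity) is shown below; the paper's sliding windows and La*b* colour space are not modelled, and the 8-level quantization is an illustrative choice.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Build a grey-level co-occurrence matrix for the (0, +1) horizontal
    neighbour from an image with values in [0, 1], then compute two
    Haralick-style features: contrast and homogeneity."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1                       # count neighbour pairs
    p = glcm / glcm.sum()                     # normalize to probabilities
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)       # high for busy texture
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, homogeneity
```

A perfectly uniform patch yields zero contrast and unit homogeneity; mitotic regions, with their stronger textural dissimilarity, push contrast up, which is what makes these features discriminative for the SVM classifier.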
NASA Astrophysics Data System (ADS)
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the causes of EUV-specific imaging artifacts and to devise illumination and feature dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
ERIC Educational Resources Information Center
Alexiadis, D. S.; Mitianoudis, N.
2013-01-01
Digital signal processing (DSP) has been an integral part of most electrical, electronic, and computer engineering curricula. The applications of DSP in multimedia (audio, image, video) storage, transmission, and analysis are also widely taught at both the undergraduate and post-graduate levels, as digital multimedia can be encountered in most…
Technical Note: Asteroid Detection Demonstration from SkySat-3 - B612 Data Using Synthetic Tracking
NASA Technical Reports Server (NTRS)
Zhai, C.; Shao, M.; Lai, S.; Boerner, P.; Dyer, J.; Lu, E.; Reitsema, H.; Buie, M.
2018-01-01
We report results from analyzing the data taken by the sCMOS cameras on board SkySat-3 using the synthetic tracking technique. The analysis demonstrates the expected improvement in the signal-to-noise ratio of faint asteroids from properly stacking the short-exposure images in post-processing.
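Synthetic tracking amounts to shifting each short-exposure frame against an assumed sky motion and summing, so a moving point source adds coherently (signal grows as N) while background noise grows only as the square root of N. A minimal sketch with integer per-frame shifts:

```python
import numpy as np

def shift_and_stack(frames, vx, vy):
    """Stack short exposures along one trial velocity (vx, vy) in
    pixels/frame. A real pipeline searches a grid of trial velocities
    and uses sub-pixel shifts; this sketch uses integer np.roll shifts."""
    stacked = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames):
        shift = (-int(round(k * vy)), -int(round(k * vx)))
        stacked += np.roll(frame, shift, axis=(0, 1))
    return stacked
```

When the trial velocity matches the asteroid's motion, its flux from every frame lands on the same pixel of the stack, producing the SNR gain the abstract refers to.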
NASA Astrophysics Data System (ADS)
Goossens, Bart; Aelterman, Jan; Luong, Hiep; Pizurica, Aleksandra; Philips, Wilfried
2013-02-01
In digital cameras and mobile phones, there is an ongoing trend to increase the image resolution, decrease the sensor size and to use lower exposure times. Because smaller sensors inherently lead to more noise and a worse spatial resolution, digital post-processing techniques are required to resolve many of the artifacts. Color filter arrays (CFAs), which use alternating patterns of color filters, are very popular because of price and power consumption reasons. However, color filter arrays require the use of a post-processing technique such as demosaicing to recover full resolution RGB images. Recently, there has been some interest in techniques that jointly perform the demosaicing and denoising. This has the advantage that the demosaicing and denoising can be performed optimally (e.g. in the MSE sense) for the considered noise model, while avoiding artifacts introduced when using demosaicing and denoising sequentially. In this paper, we will continue the research line of the wavelet-based demosaicing techniques. These approaches are computationally simple and very suited for combination with denoising. Therefore, we will derive Bayesian minimum mean squared error (MMSE) joint demosaicing and denoising rules in the complex wavelet packet domain, taking local adaptivity into account. As an image model, we will use Gaussian Scale Mixtures, thereby taking advantage of the directionality of the complex wavelets. Our results show that this technique is well capable of reconstructing fine details in the image, while removing all of the noise, at a relatively low computational cost. In particular, the complete reconstruction (including color correction, white balancing etc) of a 12 megapixel RAW image takes 3.5 sec on a recent mid-range GPU.
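For contrast with the paper's joint wavelet-domain MMSE approach, the naive baseline it improves on, plain bilinear demosaicing of an RGGB Bayer mosaic, can be sketched as follows. Periodic boundaries (np.roll) are used only for brevity, and the RGGB layout is an illustrative assumption.

```python
import numpy as np

def _interp(values, mask, offsets):
    """Weighted average of known samples around each pixel."""
    num = np.zeros_like(values)
    den = np.zeros_like(values)
    for dy, dx, w in offsets:
        num += w * np.roll(values, (dy, dx), axis=(0, 1))
        den += w * np.roll(mask, (dy, dx), axis=(0, 1))
    return num / den

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic: each missing colour
    sample is the average of its nearest same-colour neighbours."""
    h, w = raw.shape
    r = np.zeros((h, w)); r[0::2, 0::2] = 1.0   # red sites
    b = np.zeros((h, w)); b[1::2, 1::2] = 1.0   # blue sites
    g = 1.0 - r - b                             # green sites
    full = [(0, 0, 1.0), (0, 1, 0.5), (0, -1, 0.5), (1, 0, 0.5),
            (-1, 0, 0.5), (1, 1, 0.25), (1, -1, 0.25),
            (-1, 1, 0.25), (-1, -1, 0.25)]
    cross = [(0, 0, 1.0), (0, 1, 0.25), (0, -1, 0.25),
             (1, 0, 0.25), (-1, 0, 0.25)]
    return np.dstack([_interp(raw * r, r, full),
                      _interp(raw * g, g, cross),
                      _interp(raw * b, b, full)])
```

Applied to a noisy mosaic, this interpolation smears noise across channels and blurs edges, which is precisely why jointly optimized demosaicing-denoising rules like the paper's outperform the sequential pipeline.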
Zweerink, Alwin; Allaart, Cornelis P; Kuijer, Joost P A; Wu, LiNa; Beek, Aernout M; van de Ven, Peter M; Meine, Mathias; Croisille, Pierre; Clarysse, Patrick; van Rossum, Albert C; Nijveldt, Robin
2017-12-01
Although myocardial strain analysis is a potential tool to improve patient selection for cardiac resynchronization therapy (CRT), there is currently no validated clinical approach to derive segmental strains. We evaluated the novel segment length in cine (SLICE) technique to derive segmental strains from standard cardiovascular MR (CMR) cine images in CRT candidates. Twenty-seven patients with left bundle branch block underwent CMR examination including cine imaging and myocardial tagging (CMR-TAG). SLICE was performed by measuring segment length between anatomical landmarks throughout all phases on short-axis cines. This measure of frame-to-frame segment length change was compared to CMR-TAG circumferential strain measurements. Subsequently, conventional markers of CRT response were calculated. Segmental strains showed good to excellent agreement between SLICE and CMR-TAG (septum strain, intraclass correlation coefficient (ICC) 0.76; lateral wall strain, ICC 0.66). Conventional markers of CRT response also showed close agreement between both methods (ICC 0.61-0.78). Reproducibility of SLICE was excellent for intra-observer testing (all ICC ≥0.76) and good for interobserver testing (all ICC ≥0.61). The novel SLICE post-processing technique on standard CMR cine images offers both accurate and robust segmental strain measures compared to the 'gold standard' CMR-TAG technique, and has the advantage of being widely available. • Myocardial strain analysis could potentially improve patient selection for CRT. • Currently a well validated clinical approach to derive segmental strains is lacking. • The novel SLICE technique derives segmental strains from standard CMR cine images. • SLICE-derived strain markers of CRT response showed close agreement with CMR-TAG. • Future studies will focus on the prognostic value of SLICE in CRT candidates.
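The core of the SLICE measure, frame-to-frame segment-length change expressed as strain relative to a reference phase, reduces to a one-line formula. This is a sketch of the idea only, not the authors' full implementation (which includes landmark tracking on the cine images).

```python
import numpy as np

def slice_strain(segment_lengths):
    """Segmental strain per cine phase from measured segment lengths
    (arbitrary units), relative to the first (reference) phase:
    strain = (L - L0) / L0, in percent. Negative values = shortening."""
    L = np.asarray(segment_lengths, dtype=float)
    return (L - L[0]) / L[0] * 100.0
```

For example, a septal segment measuring 20 mm at the reference phase and 18 mm at peak contraction has a peak strain of -10%, the kind of value compared against CMR-TAG circumferential strain in the study.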
Hortolà, Policarp
2010-01-01
When dealing with microscopic still images of some kinds of samples, the out-of-focus problem represents a particularly serious limiting factor for the subsequent generation of fully sharp 3D animations. In order to produce fully-focused 3D animations of strongly uneven surface microareas, a vertical stack of six digital secondary-electron SEM micrographs of a human bloodstain microarea was acquired. Afterwards, single combined images were generated using macrophotography and light-microscope image post-processing software. Subsequently, 3D animations of texture and topography were obtained in different formats using a combination of software tools. Finally, a 3D-like animation of a texture-topography composite was obtained in different formats using another combination of software tools. On the one hand, the results indicate that the use of image post-processing software not primarily designed for electron micrographs makes it easy to obtain fully-focused images of strongly uneven surface microareas of bloodstains from small series of partially out-of-focus digital SEM micrographs. On the other hand, the results also indicate that such small series of electron micrographs can be used to generate 3D and 3D-like animations that can subsequently be converted into different formats, using certain user-friendly software facilities, not originally designed for use in SEM, that are easily available from the Internet. Although the focus of this study was on bloodstains, the methods used are probably also relevant for studying the surface microstructures of other organic or inorganic materials that are difficult to display sharply in a single SEM micrograph.
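The combination of a focus stack into a single fully-focused image can be sketched generically: per pixel, keep the value from the Z-stack slice with the highest local gradient energy, a simple sharpness proxy. The commercial software's actual algorithm is not published, so this is illustrative only.

```python
import numpy as np

def focus_stack(slices):
    """Naive focus stacking for a Z-stack of greyscale micrographs:
    per pixel, select the slice whose local gradient energy
    (a crude in-focus measure) is largest."""
    stack = np.stack([s.astype(float) for s in slices])
    sharpness = []
    for img in stack:
        gy, gx = np.gradient(img)
        sharpness.append(gx ** 2 + gy ** 2)    # high where detail is sharp
    best = np.argmax(np.stack(sharpness), axis=0)
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

With six slices, as in the study, each pixel of the composite comes from whichever micrograph was in focus at that point of the uneven surface, yielding a fully sharp texture image for the subsequent 3D animation.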
Sanganahalli, Basavaraju G.; Rebello, Michelle R.; Herman, Peter; Papademetris, Xenophon; Shepherd, Gordon M.; Verhagen, Justus V.; Hyder, Fahmeed
2015-01-01
Functional imaging signals arise from distinct metabolic and hemodynamic events at the neuropil, but how these processes are influenced by pre- and post-synaptic activities need to be understood for quantitative interpretation of stimulus-evoked mapping data. The olfactory bulb (OB) glomeruli, spherical neuropil regions with well-defined neuronal circuitry, can provide insights into this issue. Optical calcium-sensitive fluorescent dye imaging (OICa2+) reflects dynamics of pre-synaptic input to glomeruli, whereas high-resolution functional magnetic resonance imaging (fMRI) using deoxyhemoglobin contrast reveals neuropil function within the glomerular layer where both pre- and post-synaptic activities contribute. We imaged odor-specific activity patterns of the dorsal OB in the same anesthetized rats with fMRI and OICa2+ and then co-registered the respective maps to compare patterns in the same space. Maps by each modality were very reproducible as trial-to-trial patterns for a given odor, overlapping by ~80%. Maps evoked by ethyl butyrate and methyl valerate for a given modality overlapped by ~80%, suggesting activation of similar dorsal glomerular networks by these odors. Comparison of maps generated by both methods for a given odor showed ~70% overlap, indicating similar odor-specific maps by each method. These results suggest that odor-specific glomerular patterns by high-resolution fMRI primarily tracks pre-synaptic input to the OB. Thus combining OICa2+ and fMRI lays the framework for studies of OB processing over a range of spatiotemporal scales, where OICa2+ can feature the fast dynamics of dorsal glomerular clusters and fMRI can map the entire glomerular sheet in the OB. PMID:26631819
NASA Astrophysics Data System (ADS)
Zacharek, M.; Delis, P.; Kedzierski, M.; Fryskowska, A.
2017-05-01
These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were tested. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using these applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even with open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.
Santarossa, Sara; Coyne, Paige; Lisinski, Carly; Woodruff, Sarah J
2016-11-01
The #fitspo 'tag' is a recent trend on Instagram, which is used on posts to motivate others towards a healthy lifestyle through exercise/eating habits. This study used a mixed-methods approach consisting of text and network analysis via the Netlytic program ( N = 10,000 #fitspo posts), and content analysis of #fitspo images ( N = 122) was used to examine author and image characteristics. Results suggest that #fitspo posts may motivate through appearance-mediated themes, as the largest content categories (based on the associated text) were 'feeling good' and 'appearance'. Furthermore, #fitspo posts may create peer influence/support as personal (opposed to non-personal) accounts were associated with higher popularity of images (i.e. number of likes/followers). Finally, most images contained posed individuals with some degree of objectification.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adams, Evan; Goodale, Wing; Burns, Steve
There is a critical need to develop monitoring tools to track aerofauna (birds and bats) in three dimensions around wind turbines. New monitoring systems will reduce permitting uncertainty by increasing the understanding of how birds and bats are interacting with wind turbines, which will improve the accuracy of impact predictions. Biodiversity Research Institute (BRI), The University of Maine Orono School of Computing and Information Science (UMaine SCIS), HiDef Aerial Surveying Limited (HiDef), and SunEdison, Inc. (formerly First Wind) responded to this need by using stereo-optic cameras with near-infrared (nIR) technology to investigate new methods for documenting aerofauna behavior around wind turbines. The stereo-optic camera system used two synchronized high-definition video cameras with fisheye lenses and processing software that detected moving objects, which could be identified in post-processing. The stereo-optic imaging system offered the ability to extract 3-D position information from pairs of images captured from different viewpoints. Fisheye lenses allowed for a greater field of view, but required more complex image rectification to contend with fisheye distortion. The ability to obtain 3-D positions provided crucial data on the trajectory (speed and direction) of a target, which, when the technology is fully developed, will provide data on how animals are responding to and interacting with wind turbines. This project was focused on testing the performance of the camera system, improving video review processing time, advancing the 3-D tracking technology, and moving the system from Technology Readiness Level 4 to 5. To achieve these objectives, we determined the size and distance at which aerofauna (particularly eagles) could be detected and identified, created efficient data management systems, improved the video post-processing viewer, and attempted refinement of 3-D modeling with respect to fisheye lenses.
The 29-megapixel camera system successfully captured 16,173 five-minute video segments in the field. During nighttime field trials using nIR, we found that bat-sized objects could not be detected more than 60 m from the camera system. This led to a decision to focus research efforts exclusively on daytime monitoring and to redirect resources towards improving the video post-processing viewer. We redesigned the bird event post-processing viewer, which substantially decreased the review time necessary to detect and identify flying objects. During daytime field trials, we determined that eagles could be detected up to 500 m away using the fisheye wide-angle lenses, and eagle-sized targets could be identified to species within 350 m of the camera system. We used distance sampling survey methods to describe the probability of detecting and identifying eagles and other aerofauna as a function of distance from the system. The previously developed 3-D algorithm for object isolation and tracking was tested, but the image rectification (flattening) required to obtain accurate distance measurements with fisheye lenses was determined to be insufficient for distant eagles. We used MATLAB and OpenCV to improve fisheye lens rectification towards the center of the image, but accurate measurements towards the image corners could not be achieved. We believe that changing the fisheye lens to a rectilinear lens would greatly improve position estimation, but doing so would result in a decrease in viewing angle and depth of field. Finally, we generated simplified shape profiles of birds to look for similarities between unknown animals and known species. With further development, this method could provide a mechanism for filtering large numbers of shapes to reduce data storage and processing. These advancements further refined the camera system and brought this new technology closer to market.
Once commercialized, the stereo-optic camera system technology could be used to: a) research how different species interact with wind turbines in order to refine collision risk models and inform mitigation solutions; and b) monitor aerofauna interactions with terrestrial and offshore wind farms, replacing costly human observers and allowing for long-term monitoring in the offshore environment. The camera system will provide developers and regulators with data on the risk that wind turbines present to aerofauna, which will reduce uncertainty in the environmental permitting process.
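The 3-D position extraction from a synchronized image pair rests on the textbook rectified-stereo relation Z = f·B/d. The sketch below assumes already-rectified (pinhole-equivalent) images; achieving that rectification with fisheye lenses is exactly the difficulty the project ran into. Parameter values in the usage example are illustrative.

```python
import numpy as np

def stereo_point(xl, yl, xr, focal_px, baseline_m):
    """Triangulate a 3-D point (camera frame, metres) from matched pixel
    coordinates (relative to the principal point) in a rectified stereo
    pair with horizontal baseline: range Z = f * B / d."""
    d = xl - xr                              # disparity in pixels
    z = focal_px * baseline_m / d            # range from disparity
    return np.array([xl * z / focal_px, yl * z / focal_px, z])
```

Because range error grows with range squared for a fixed disparity error, accurate rectification out to the image corners matters most for exactly the distant eagle targets the report describes.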
Simulation based mask defect repair verification and disposition
NASA Astrophysics Data System (ADS)
Guo, Eric; Zhao, Shirley; Zhang, Skin; Qian, Sandy; Cheng, Guojie; Vikram, Abhishek; Li, Ling; Chen, Ye; Hsiang, Chingyun; Zhang, Gary; Su, Bo
2009-10-01
As the industry moves towards sub-65nm technology nodes, mask inspection, with increased sensitivity and shrinking critical defect size, catches more and more nuisance and false defects. Increased defect counts pose great challenges in post-inspection defect classification and disposition: which defect is a real defect, and among the real defects, which defect should be repaired, and how to verify the post-repair defects. In this paper, we address the challenges in mask defect verification and disposition, in particular in post-repair defect verification, by an efficient methodology using SEM mask defect images and optical inspection mask defect images (the latter only for verification of phase and transmission related defects). We demonstrate the flow using programmed mask defects in a sub-65nm technology node design. In total, 20 types of defects were designed, including defects found in typical real circuit environments, with 30 different sizes designed for each type. The SEM image was taken for each programmed defect after the test mask was made. Selected defects were repaired and SEM images from the test mask were taken again. Wafers were printed with the test mask before and after repair as defect printability references. A software tool, SMDD (Simulation based Mask Defect Disposition), has been used in this study. The software is used to extract edges from the mask SEM images and convert them into polygons saved in GDSII format. Then, the converted polygons from the SEM images were filled with the correct tone to form mask patterns and were merged back into the original GDSII design file. This merge is for the purpose of contour simulation, since the SEM images normally cover only a small area (~1 μm) and accurate simulation requires including a larger area to capture optical proximity effects. With a lithography process model, the resist contour of the area of interest (AOI, the area surrounding a mask defect) can be simulated.
If such a complicated model is not available, a simple optical model can be used to obtain the simulated aerial image intensity in the AOI. With built-in contour analysis functions, the SMDD software can easily compare the contour (or intensity) differences between the defect pattern and the normal pattern. With user-provided judging criteria, the software can then easily disposition the defect based on the contour comparison. In addition, process sensitivity properties, like MEEF and NILS, can be readily obtained in the AOI with a lithography model, which makes the mask defect disposition criteria more intelligent.
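The disposition step described above can be sketched in a few lines. The helper below is a hypothetical illustration, not the SMDD tool's API: it compares simulated aerial-image intensity in the AOI against a defect-free reference and applies a user-provided tolerance (the 10% default is an assumption).

```python
def disposition_defect(defect_aoi, reference_aoi, tolerance=0.10):
    """Flag 'repair' if the peak relative intensity deviation in the AOI
    exceeds the user-provided tolerance, else 'ignore'."""
    peak_dev = max(
        abs(d - r) / r for d, r in zip(defect_aoi, reference_aoi)
    )
    return ("repair" if peak_dev > tolerance else "ignore"), peak_dev

# Toy example: a defect that locally darkens the simulated aerial image by 25%
reference = [1.00, 0.95, 0.90, 0.95, 1.00]
defect = [1.00, 0.95, 0.675, 0.95, 1.00]
verdict, dev = disposition_defect(defect, reference)
```

In practice the judging criterion would be richer (edge-placement error of the simulated resist contour rather than raw intensity), but the threshold-and-flag structure is the same.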
MO-B-BRC-01: Introduction [Brachytherapy]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prisciandaro, J.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities either in an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
MO-B-BRC-04: MRI-Based Prostate HDR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mourtada, F.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities either in an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
MO-B-BRC-02: Ultrasound Based Prostate HDR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Z.
2016-06-15
Brachytherapy has proven to be an effective treatment option for prostate cancer. Initially, prostate brachytherapy was delivered through permanently implanted low dose rate (LDR) radioactive sources; however, high dose rate (HDR) temporary brachytherapy for prostate cancer is gaining popularity. Needle insertion during prostate brachytherapy is most commonly performed under ultrasound (U/S) guidance; however, treatment planning may be performed utilizing several imaging modalities either in an intra- or post-operative setting. During intra-operative prostate HDR, the needles are imaged during implantation, and planning may be performed in real time. At present, the most common imaging modality utilized for intra-operative prostate HDR is U/S. Alternatively, in the post-operative setting, following needle implantation, patients may be simulated with computed tomography (CT) or magnetic resonance imaging (MRI). Each imaging modality and workflow provides its share of benefits and limitations. Prostate HDR has been adopted in a number of cancer centers across the nation. In this educational session, we will explore the role of U/S, CT, and MRI in HDR prostate brachytherapy. Example workflows and operational details will be shared, and we will discuss how to establish a prostate HDR program in a clinical setting. Learning Objectives: (1) Review prostate HDR techniques based on the imaging modality; (2) Discuss the challenges and pitfalls introduced by the three image-based options for prostate HDR brachytherapy; (3) Review the QA process and learn about the development of clinical workflows for these imaging options at different institutions.
Extending RTM Imaging With a Focus on Head Waves
NASA Astrophysics Data System (ADS)
Holicki, Max; Drijkoningen, Guy
2016-04-01
Conventional industry seismic imaging predominantly focuses on pre-critical reflections, muting post-critical arrivals in the process. This standard approach neglects much of the information present in the recorded wavefield. The omission has been partially remedied by the inclusion of head waves in more advanced imaging techniques, like Full Waveform Inversion (FWI). We would like to see post-critical information move beyond labour-intensive travel-time picking and tomographic inversion towards full migration, to improve subsurface imaging and parameter estimation. We present a novel seismic imaging approach aimed at exploiting post-critical information, using the constant travel path of head waves between shots. To this end, we propose to generalize conventional Reverse Time Migration (RTM) to scenarios where the sources of the forward- and backward-propagated wavefields do not coincide. RTM rests on the principle that the backward-propagated receiver data, due to a source at some location, must overlap with the forward-propagated source wavefield, from the same source location, at subsurface scatterers. Where the wavefields overlap in the subsurface, the zero-lag cross-correlation peaks, and this peak is used for imaging. To include head waves, we propose to relax the condition of coincident sources. Wavefields from non-coincident sources will then no longer overlap properly in the subsurface. We can restore the overlap by time-shifting either the forward- or backward-propagated wavefield until the two coincide. This is equivalent to imaging at non-zero cross-correlation lags, where the lag is the travel-time difference between the two wavefields for a given event. This allows us to steer which arrivals are used for imaging.
In the simplest case we could use Eikonal travel times to generate our migration image, or exclusively image the subsurface with the head wave from the nth layer. To illustrate the method we apply it to a layered Earth model with five layers and compare it to conventional RTM. We show that conventional RTM highlights interfaces, while our head-wave-based images highlight layers, producing fundamentally different images. We also demonstrate that the proposed imaging scheme is more sensitive to the velocity model than conventional RTM, which is important for improved velocity model building in the future.
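The relaxed imaging condition can be illustrated with a 1-D toy: conventional RTM correlates the two wavefields at zero lag, while the scheme described above scans non-zero lags to compensate the travel-time difference between non-coincident shots. The fields, lengths, and the 3-sample lag below are synthetic assumptions, not the authors' data.

```python
def xcorr_at_lag(src_field, rcv_field, lag):
    """Cross-correlation of two traces at a given lag (the imaging condition)."""
    n = len(src_field)
    return sum(
        src_field[t] * rcv_field[t + lag]
        for t in range(n) if 0 <= t + lag < n
    )

# A spike arriving at t=4 in the forward-propagated source field and at t=7
# in the backward-propagated receiver field: a 3-sample travel-time difference
# between the two non-coincident shots.
src = [0.0] * 10
rcv = [0.0] * 10
src[4] = 1.0
rcv[7] = 1.0

zero_lag = xcorr_at_lag(src, rcv, 0)  # conventional RTM: no overlap, no image
best_lag = max(range(-5, 6), key=lambda k: xcorr_at_lag(src, rcv, k))
```

The lag at which the correlation peaks is exactly the travel-time shift needed to make the two wavefields overlap, which is what lets the method select a particular head-wave arrival.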
Alho, A T D L; Hamani, C; Alho, E J L; da Silva, R E; Santos, G A B; Neves, R C; Carreira, L L; Araújo, C M M; Magalhães, G; Coelho, D B; Alegro, M C; Martin, M G M; Grinberg, L T; Pasqualucci, C A; Heinsen, H; Fonoff, E T; Amaro, E
2017-08-01
The pedunculopontine nucleus (PPN) has been proposed as target for deep brain stimulation (DBS) in patients with postural instability and gait disorders due to its involvement in muscle tonus adjustments and control of locomotion. However, it is a deep-seated brainstem nucleus without clear imaging or electrophysiological markers. Some studies suggested that diffusion tensor imaging (DTI) may help guiding electrode placement in the PPN by showing the surrounding fiber bundles, but none have provided a direct histological correlation. We investigated DTI fractional anisotropy (FA) maps from in vivo and in situ post-mortem magnetic resonance images (MRI) compared to histological evaluations for improving PPN targeting in humans. A post-mortem brain was scanned in a clinical 3T MR system in situ. Thereafter, the brain was processed with a special method ideally suited for cytoarchitectonic analyses. Also, nine volunteers had in vivo brain scanning using the same MRI protocol. Images from volunteers were compared to those obtained in the post-mortem study. FA values of the volunteers were obtained from PPN, inferior colliculus, cerebellar crossing fibers and medial lemniscus using histological data and atlas information. FA values in the PPN were significantly lower than in the surrounding white matter region and higher than in areas with predominantly gray matter. In Nissl-stained histologic sections, the PPN extended for more than 10 mm in the rostro-caudal axis being closely attached to the lateral parabrachial nucleus. Our DTI analyses and the spatial correlation with histological findings proposed a location for PPN that matched the position assigned to this nucleus in the literature. Coregistration of neuroimaging and cytoarchitectonic features can add value to help establishing functional architectonics of the PPN and facilitate neurosurgical targeting of this extended nucleus.
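The FA comparisons above follow the standard fractional-anisotropy formula computed from the three diffusion-tensor eigenvalues. A minimal sketch (the eigenvalues below are illustrative, not the study's measurements):

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Standard FA from diffusion-tensor eigenvalues (0 = isotropic, 1 = linear)."""
    num = (l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0.0:
        return 0.0
    return math.sqrt(0.5 * num / den)

fa_gray = fractional_anisotropy(1.0, 0.9, 0.8)    # near-isotropic: low FA
fa_tract = fractional_anisotropy(1.7, 0.3, 0.2)   # fiber-like: high FA
```

This ordering (gray matter low, white-matter tracts high) is what places the PPN's FA between the two in the study's maps.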
Leigh, Richard; Jen, Shyian S.; Hillis, Argye E.; Krakauer, John W.; Barker, Peter B.
2014-01-01
Background and Purpose: Early blood-brain barrier (BBB) damage after acute ischemic stroke (AIS) has previously been qualitatively linked to subsequent intracranial hemorrhage (ICH). In this quantitative study, we investigated whether the amount of BBB damage evident on pre-tPA MRI scans was related to the degree of post-tPA ICH in patients with AIS. Methods: Analysis was performed on a database of patients with AIS provided by the STIR and VISTA Imaging Investigators. Patients with perfusion-weighted imaging (PWI) lesions >10 mL and negative gradient-recalled echo (GRE) imaging prior to IV tPA were included. Post-processing of the PWI source images was performed to estimate changes in BBB permeability within the perfusion deficit relative to the unaffected hemisphere. Follow-up GRE images were reviewed for evidence of ICH and divided into three groups according to ECASS criteria: no hemorrhage (NH), hemorrhagic infarction (HI), and parenchymal hematoma (PH). Results: 75 patients from the database met the inclusion criteria, 28 of whom experienced ICH; 19 were classified as HI and nine as PH. The mean permeability (± standard deviation), expressed as an index of contrast leakage, was 17.0%±8.8 in the NH group, 19.4%±4.0 in the HI group, and 24.6%±4.5 in the PH group. Permeability was significantly correlated with ICH grade in univariate (p=0.007) and multivariate (p=0.008) linear regression modeling. Conclusions: A PWI-derived index of BBB damage measured prior to IV tPA is associated with the severity of ICH after treatment in patients with AIS. PMID:24876245
NASA Astrophysics Data System (ADS)
Mitri, George H.; Gitas, Ioannis Z.
2013-02-01
Careful evaluation of forest regeneration and vegetation recovery after a fire event provides vital information useful in land management. The use of remotely sensed data is considered to be especially suitable for monitoring ecosystem dynamics after fire. The aim of this work was to map post-fire forest regeneration and vegetation recovery on the Mediterranean island of Thasos by using a combination of very high spatial (VHS) resolution (QuickBird) and hyperspectral (EO-1 Hyperion) imagery and by employing object-based image analysis. More specifically, the work focused on (1) the separation and mapping of three major post-fire classes (forest regeneration, other vegetation recovery, unburned vegetation) existing within the fire perimeter, and (2) the differentiation and mapping of the two main forest regeneration classes, namely, Pinus brutia regeneration and Pinus nigra regeneration. The data used in this study consisted of satellite images and field observations of homogeneous regenerated and revegetated areas. The methodology followed two main steps: a three-level image segmentation, and a classification of the segmented images. The process resulted in the separation of classes related to the aforementioned objectives. The overall accuracy assessment revealed very promising results (approximately 83.7% overall accuracy, with a Kappa Index of Agreement of 0.79). The achieved accuracy was 8% higher when compared to the results reported in a previous work in which only the EO-1 Hyperion image was employed to map the same classes. Some classification confusions involving the classes of P. brutia regeneration and P. nigra regeneration were observed. This could be attributed to the absence of large and dense homogeneous areas of regenerated pine trees in the study area.
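The reported Kappa Index of Agreement can be computed from a classification confusion matrix in a few lines; the 2×2 matrix below is illustrative, not the study's accuracy data.

```python
def kappa(confusion):
    """Cohen's kappa from a square confusion matrix (rows = reference classes)."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / total ** 2
    return (observed - expected) / (1.0 - expected)

# Illustrative 2-class matrix: 85% overall accuracy, kappa 0.70
conf_matrix = [[45, 5],
               [10, 40]]
k = kappa(conf_matrix)
```

Kappa discounts the agreement expected by chance, which is why it is lower than raw overall accuracy (here 0.70 versus 0.85, mirroring the 0.79 versus 83.7% pattern in the abstract).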
An evolution of image source camera attribution approaches.
Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul
2016-05-01
Camera attribution plays an important role in digital image forensics by providing evidence of, and distinguishing characteristics for, the origin of a digital image. It allows the forensic analyser to find the possible source camera that captured the image under investigation. In real-world applications, however, these approaches face many challenges, because the large set of multimedia data publicly available through photo-sharing and social-network sites is captured under uncontrolled conditions and undergoes a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews techniques of source camera attribution more comprehensively in the domain of image forensics, in conjunction with a presentation of classifying ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts, and the methods used to extract such artifacts.
The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four classes, namely optical aberrations based, sensor camera fingerprints based, processing statistics based, and processing regularities based. Furthermore, this paper investigates the challenging problems, and the proposed strategies of such schemes based on the suggested taxonomy, to plot an evolution of source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
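Of the four classes above, the sensor-fingerprint family is the simplest to sketch: estimate a noise residual from a photo, then match it against a camera's reference pattern by normalized correlation. The crude moving-average "denoiser" and the synthetic data below are assumptions for illustration; practical PRNU pipelines use wavelet denoising and maximum-likelihood fingerprint estimation.

```python
import random

def residual(img, k=3):
    """Noise residual: pixel value minus a crude moving-average 'denoised' value."""
    n = len(img)
    out = []
    for i in range(n):
        lo, hi = max(0, i - k), min(n, i + k + 1)
        out.append(img[i] - sum(img[lo:hi]) / (hi - lo))
    return out

def ncc(a, b):
    """Normalized cross-correlation, used to match a residual to a fingerprint."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    da = [x - ma for x in a]
    db = [y - mb for y in b]
    num = sum(x * y for x, y in zip(da, db))
    den = (sum(x * x for x in da) * sum(y * y for y in db)) ** 0.5
    return num / den

random.seed(7)
fingerprint = [random.gauss(0.0, 1.0) for _ in range(512)]   # assumed known PRNU
scene = [random.gauss(128.0, 5.0) for _ in range(512)]
photo_same = [s * (1.0 + 0.02 * f) for s, f in zip(scene, fingerprint)]
photo_other = [random.gauss(128.0, 5.0) for _ in range(512)]

score_same = ncc(residual(photo_same), fingerprint)
score_other = ncc(residual(photo_other), fingerprint)
```

The multiplicative `1 + 0.02*f` term models the pixel-wise sensitivity variation that survives denoising, which is why the residual of `photo_same` correlates with the fingerprint while `photo_other` does not.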
Research on detection method of UAV obstruction based on binocular vision
NASA Astrophysics Data System (ADS)
Zhu, Xiongwei; Lei, Xusheng; Sui, Zhehao
2018-04-01
For autonomous obstacle positioning and ranging during UAV (unmanned aerial vehicle) flight, a system based on binocular vision is constructed. A three-stage image preprocessing method is proposed to address the noise and brightness differences in the actual captured images. The distance to the nearest obstacle is calculated using the disparity map generated by binocular vision. The contour of the obstacle is then extracted by post-processing of the disparity map, and a color-based adaptive parameter adjustment algorithm is designed to extract obstacle contours automatically. Finally, safety distance measurement and obstacle positioning during UAV flight are achieved. Based on a series of tests, the distance measurement error remains within 2.24% over the measuring range from 5 m to 20 m.
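For a rectified binocular rig, the range computation from the disparity map reduces to Z = f·B/d (focal length in pixels, baseline, disparity). A toy sketch with assumed intrinsics, not the paper's calibration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo range for a rectified pair: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def nearest_obstacle(depths):
    """Nearest obstacle distance = minimum valid depth from the disparity map."""
    return min(depths)

f_px, base_m = 800.0, 0.25           # illustrative focal length and baseline
disparities = [8.0, 10.0, 16.0]      # pixels, from a toy disparity map
depths = [depth_from_disparity(f_px, base_m, d) for d in disparities]
nearest = nearest_obstacle(depths)
```

Note the inverse relation: the largest disparity (16 px) corresponds to the nearest obstacle, which is why the paper takes the nearest distance directly from the disparity map.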
Narayan, Sreenath; Kalhan, Satish C.; Wilson, David L.
2012-01-01
Purpose: To reduce swaps in fat-water separation methods, a particular issue on 7T small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Materials and Methods: Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Results: Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Conclusion: Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. PMID:23023815
Narayan, Sreenath; Kalhan, Satish C; Wilson, David L
2013-05-01
To reduce swaps in fat-water separation methods, a particular issue on 7 Tesla (T) small animal scanners due to field inhomogeneity, using image postprocessing innovations that detect and correct errors in the B0 field map. Fat-water decompositions and B0 field maps were computed for images of mice acquired on a 7T Bruker BioSpec scanner, using a computationally efficient method for solving the Markov Random Field formulation of the multi-point Dixon model. The B0 field maps were processed with a novel hole-filling method, based on edge strength between regions, and a novel k-means method, based on field-map intensities, which were iteratively applied to automatically detect and reinitialize error regions in the B0 field maps. Errors were manually assessed in the B0 field maps and chemical parameter maps both before and after error correction. Partial swaps were found in 6% of images when processed with FLAWLESS. After REFINED correction, only 0.7% of images contained partial swaps, resulting in an 88% decrease in error rate. Complete swaps were not problematic. Ex post facto error correction is a viable supplement to a priori techniques for producing globally smooth B0 field maps, without partial swaps. With our processing pipeline, it is possible to process image volumes rapidly, robustly, and almost automatically. Copyright © 2012 Wiley Periodicals, Inc.
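The intensity-based k-means step can be sketched in 1-D: cluster the field-map values into two groups and treat the outlying cluster as a candidate error region to reinitialize. The data and the two-cluster choice below are illustrative only, not the FLAWLESS/REFINED implementation.

```python
def kmeans_1d(values, iters=25):
    """Two-cluster 1-D k-means (Lloyd's algorithm) on field-map intensities."""
    c0, c1 = min(values), max(values)   # deterministic init at the extremes
    for _ in range(iters):
        a = [v for v in values if abs(v - c0) <= abs(v - c1)]
        b = [v for v in values if abs(v - c0) > abs(v - c1)]
        c0, c1 = sum(a) / len(a), sum(b) / len(b)
    return c0, c1

# Field map (Hz): smooth background near 20 Hz plus a suspect region near 220 Hz
field_map = [18.0, 19.5, 20.0, 21.0, 22.5, 215.0, 220.0, 225.0]
bg_center, err_center = kmeans_1d(field_map)
error_region = [v for v in field_map if abs(v - err_center) < abs(v - bg_center)]
```

Voxels assigned to the outlying cluster would then be reinitialized and re-solved, which is the "detect and reinitialize error regions" loop the abstract describes.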
Visual Typo Correction by Collocative Optimization: A Case Study on Merchandize Images.
Wei, Xiao-Yong; Yang, Zhen-Qun; Ngo, Chong-Wah; Zhang, Wei
2014-02-01
Near-duplicate retrieval (NDR) in merchandize images is of great importance to many online applications on e-Commerce websites. In applications where response time is critical, however, the conventional techniques developed for general-purpose NDR are limited, because expensive post-processing like spatial verification or hashing is usually employed to compensate for the quantization errors among the visual words used for the images. In this paper, we argue that most of the errors are introduced by the quantization process, in which the visual words are considered individually, ignoring the contextual relations among words. We propose a "spelling or phrase correction"-like process for NDR, which extends the concept of collocations to the visual domain for modeling these contextual relations. Binary quadratic programming is used to enforce the contextual consistency of the words selected for an image, so that the errors (typos) are eliminated and the quality of the quantization process is improved. The experimental results show that the proposed method can improve the efficiency of NDR by reducing the vocabulary size by 1000%, and under the scenario of merchandize image NDR, the expensive local interest point feature used in conventional approaches can be replaced by the color-moment feature, which reduces the time cost by 9202% while maintaining performance comparable to state-of-the-art methods.
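The collocative selection can be illustrated on a tiny instance: choose one visual word per local feature so that the pairwise collocation scores of the chosen words are maximal. The brute-force search below stands in for the paper's binary quadratic programming solver; the candidate words and scores are made up.

```python
from itertools import product

# Two candidate visual words per feature (quantization ambiguity)
candidates = [["w1", "w2"], ["w3", "w4"], ["w5", "w6"]]

# Collocation scores between word pairs (higher = more contextually consistent)
colloc = {
    ("w1", "w3"): 0.9, ("w1", "w4"): 0.1, ("w2", "w3"): 0.2, ("w2", "w4"): 0.3,
    ("w3", "w5"): 0.8, ("w3", "w6"): 0.1, ("w4", "w5"): 0.2, ("w4", "w6"): 0.4,
    ("w1", "w5"): 0.7, ("w1", "w6"): 0.1, ("w2", "w5"): 0.2, ("w2", "w6"): 0.2,
}

def consistency(assign):
    """Sum of pairwise collocation scores for one word choice per feature."""
    return sum(
        colloc.get((assign[i], assign[j]), 0.0)
        for i in range(len(assign))
        for j in range(i + 1, len(assign))
    )

best = max(product(*candidates), key=consistency)
```

Picking each word independently could easily select an inconsistent "typo" like w2 or w4; the joint objective prefers the mutually collocated triple, which is the correction effect the paper formalizes as a BQP.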
Public sentiment and discourse about Zika virus on Instagram.
Seltzer, E K; Horst-Martz, E; Lu, M; Merchant, R M
2017-09-01
Social media have strongly influenced the awareness and perceptions of public health emergencies, and a considerable amount of social media content is now shared through images rather than text alone. This content can impact preparedness and response due to the popularity and real-time nature of social media platforms. We sought to explore how the image-sharing platform Instagram is used for information dissemination and conversation during the current Zika outbreak. This was a retrospective review of publicly posted images about Zika on Instagram. Using the keyword '#zika' we identified 500 images posted on Instagram from May to August 2016. Images were coded by three reviewers, and contextual information was collected for each image about sentiment, image type, content, audience, geography, reliability, and engagement. Of 500 images tagged with #zika, 342 (68%) contained content actually related to Zika. Of the 342 Zika-specific images, 299 were coded as 'health' and 193 were coded as 'public interest'. Some images had multiple 'health' and 'public interest' codes. Health images tagged with #zika were primarily related to transmission (43%, 129/299) and prevention (48%, 145/299). Transmission-related posts more often concerned mosquito-human transmission (73%, 94/129) than human-human transmission (27%, 35/129). Mosquito bite prevention posts outnumbered safe sex prevention posts (84%, 122/145 versus 16%, 23/145). Images with a target audience were primarily aimed at women (95%, 36/38). Many posts (60%, 61/101) included misleading, incomplete, or unclear information about the virus. Additionally, many images expressed fear and negative sentiment (51%, 79/156). Instagram can be used to characterize public sentiment and highlight areas of focus for public health, such as correcting misleading or incomplete information or expanding messages to reach diverse audiences. Copyright © 2017 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Avdelidis, N. P.; Kappatos, V.; Georgoulas, G.; Karvelis, P.; Deli, C. K.; Theodorakeas, P.; Giakas, G.; Tsiokanos, A.; Koui, M.; Jamurtas, A. Z.
2017-04-01
Exercise-induced muscle damage (EIMD) is usually experienced by (i) people who have been physically inactive for prolonged periods of time and then begin sudden training trials, and (ii) athletes who train beyond their normal limits. EIMD is not easy to detect and quantify with common measurement tools and methods. Thermography has been used successfully as a research detection tool in medicine for the last six decades, but very limited work has been reported in the EIMD area. The main purpose of this research is to assess and characterize EIMD using thermography and image processing techniques. The first step towards that goal is to develop a reliable segmentation technique to isolate the region of interest (ROI). A semi-automatic image processing software was designed, and regions of the left and right leg based on superpixels were segmented. The image is segmented into a number of regions, and the user is able to intervene by indicating which regions belong to each of the two legs. To validate the image processing software, an extensive experimental investigation was carried out, acquiring thermographic images of the rectus femoris muscle before, immediately post, and 24, 48 and 72 hours after an acute bout of eccentric exercise (5 sets of 15 maximum repetitions) in males and females (20-30 years old). Results indicate that the semi-automated approach provides an excellent benchmark that can be used as a reliable clinical tool.
NASA Astrophysics Data System (ADS)
Rahman, Mir Mustafizur
In collaboration with The City of Calgary 2011 Sustainability Direction and as part of the HEAT (Heat Energy Assessment Technologies) project, the focus of this research is to develop a semi/automated 'protocol' to post-process large volumes of high-resolution (H-res) airborne thermal infrared (TIR) imagery to enable accurate urban waste heat mapping. HEAT is a free GeoWeb service, designed to help Calgary residents improve their home energy efficiency by visualizing the amount and location of waste heat leaving their homes and communities, as easily as clicking on their house in Google Maps. HEAT metrics are derived from 43 flight lines of TABI-1800 (Thermal Airborne Broadband Imager) data acquired on May 13-14, 2012 at night (11:00 pm-5:00 am) over The City of Calgary, Alberta (~825 km²) at a 50 cm spatial resolution and 0.05°C thermal resolution. At present, the only way to generate a large-area, high-spatial-resolution TIR scene is to acquire separate airborne flight lines and mosaic them together. However, the ambient sensed temperature within, and between, flight lines naturally changes during acquisition (due to varying atmospheric and local micro-climate conditions), resulting in mosaicked images with different temperatures for the same scene components (e.g. roads, buildings), while mosaic join-lines arbitrarily bisect many thousands of homes. In combination these effects result in reduced utility and classification accuracy, including poorly defined HEAT metrics, inaccurate hotspot detection, and raw imagery that is difficult to interpret. In an effort to minimize these effects, three new semi/automated post-processing algorithms (the protocol) are described, which are then used to generate a 43-flight-line mosaic of TABI-1800 data from which accurate Calgary waste heat maps and HEAT metrics can be generated.
These algorithms (presented as four peer-reviewed papers) are: (a) Thermal Urban Road Normalization (TURN), used to mitigate the microclimatic variability within a thermal flight line based on varying road temperatures; (b) Automated Polynomial Relative Radiometric Normalization (RRN), which mitigates the between-flight-line radiometric variability; and (c) Object Based Mosaicking (OBM), which minimizes the geometric distortion along the mosaic edge between each flight line. A modified Emissivity Modulation technique is also described to correct H-res TIR images for emissivity. This combined radiometric and geometric post-processing protocol (i) increases the visual agreement between TABI-1800 flight lines, (ii) improves radiometric agreement within/between flight lines, (iii) produces a visually seamless mosaic, (iv) improves hot-spot detection and landcover classification accuracy, and (v) provides accurate data for thermal-based HEAT energy models. Keywords: Thermal Infrared, Post-Processing, High Spatial Resolution, Airborne, Thermal Urban Road Normalization (TURN), Relative Radiometric Normalization (RRN), Object Based Mosaicking (OBM), TABI-1800, HEAT, and Automation.
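The RRN idea can be sketched in its first-order form: fit a gain/offset from overlap pixels that maps the subject flight line onto the reference line, then apply it line-wide. The polynomial order and the synthetic temperatures below are assumptions for illustration, not the thesis's fitted model.

```python
def linear_rrn(subject, reference):
    """Least-squares gain/offset such that gain*subject + offset ~= reference."""
    n = len(subject)
    sx, sy = sum(subject), sum(reference)
    sxx = sum(x * x for x in subject)
    sxy = sum(x * y for x, y in zip(subject, reference))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset

# Overlap pixels: the subject line reads 0.8x the reference plus a 2-degree drift
ref_overlap = [10.0, 12.0, 14.0, 16.0, 18.0]
sub_overlap = [0.8 * t + 2.0 for t in ref_overlap]

gain, offset = linear_rrn(sub_overlap, ref_overlap)
normalized = [gain * v + offset for v in sub_overlap]
```

After normalization the same scene components read the same temperature in both lines, which is the precondition for a seamless mosaic and consistent HEAT metrics.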
Improving depth estimation from a plenoptic camera by patterned illumination
NASA Astrophysics Data System (ADS)
Marshall, Richard J.; Meah, Chris J.; Turola, Massimo; Claridge, Ela; Robinson, Alex; Bongs, Kai; Gruppetta, Steve; Styles, Iain B.
2015-05-01
Plenoptic (light-field) imaging is a technique that allows a simple CCD-based imaging device to acquire both spatially and angularly resolved information about the light field from a scene. It requires a microlens array to be placed between the objective lens and the sensor of the imaging device, and the images under each microlens (which typically span many pixels) can be computationally post-processed to shift perspective, digitally refocus, extend the depth of field, manipulate the aperture synthetically, and generate a depth map from a single image. Some of these capabilities are rigid functions that do not depend upon the scene and work by manipulating and combining a well-defined set of pixels in the raw image. However, depth mapping requires specific features in the scene to be identified and registered between consecutive microimages. This process requires that the image has sufficient features for the registration; in the absence of such features the algorithms become less reliable and incorrect depths are generated. The aim of this study is to investigate the generation of depth maps from light-field images of scenes with insufficient features for accurate registration, using projected patterns to impose a texture on the scene that provides sufficient landmarks for the registration methods.
Khan, Arif Ul Maula; Torelli, Angelo; Wolf, Ivo; Gretz, Norbert
2018-05-08
In biological assays, automated cell/colony segmentation and counting is imperative owing to huge image sets. Problems arising from drifting image acquisition conditions, background noise, and high variation in colony features demand a user-friendly, adaptive and robust image processing/analysis method. We present AutoCellSeg (based on MATLAB), which implements a supervised, automatic and robust image segmentation method. AutoCellSeg utilizes multi-thresholding aided by a feedback-based watershed algorithm that takes segmentation plausibility criteria into account. It is usable in different operation modes and intuitively enables the user to select object features interactively for supervised image segmentation. It allows the user to correct results with a graphical interface. This publicly available tool outperforms tools like OpenCFU and CellProfiler in terms of accuracy and provides many additional useful features for end-users.
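A core ingredient of such multi-thresholding is an automatic histogram cut. The plain Otsu threshold below is a hedged stand-in for AutoCellSeg's feedback-aided multi-threshold stage (the 8-level histogram is synthetic):

```python
def otsu_threshold(hist):
    """Gray level t maximizing between-class variance; levels <= t are background."""
    total = sum(hist)
    weighted_sum = sum(g * h for g, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = m0 = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]
        m0 += t * hist[t]
        w1 = total - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = m0 / w0
        mu1 = (weighted_sum - m0) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_t, best_var = t, between_var
    return best_t

# Bimodal toy histogram: background peak at level 1, colony peak at level 6
hist = [10, 30, 10, 2, 2, 10, 30, 10]
thr = otsu_threshold(hist)
foreground_levels = [g for g in range(len(hist)) if g > thr]
```

Running several such cuts at different levels, and feeding the resulting masks to a watershed pass that checks segmentation plausibility, gives the multi-threshold-plus-feedback structure the abstract describes.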
Applied high-speed imaging for the icing research program at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Slater, Howard; Owens, Jay; Shin, Jaiwon
1992-01-01
The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment in which to simulate natural icing events. The closed-loop, low-speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists, as well as to research personnel, about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.
A median-Gaussian filtering framework for Moiré pattern noise removal from X-ray microscopy image.
Wei, Zhouping; Wang, Jian; Nichol, Helen; Wiebe, Sheldon; Chapman, Dean
2012-02-01
Moiré pattern noise in Scanning Transmission X-ray Microscopy (STXM) imaging introduces significant errors in qualitative and quantitative image analysis. Because of the complex origin of the noise, it is difficult to avoid Moiré pattern noise during the image data acquisition stage. In this paper, we introduce a post-processing method for filtering Moiré pattern noise from STXM images. The method includes semi-automatic detection of the spectral peaks in the Fourier amplitude spectrum using a local median filter, and elimination of the spectral noise peaks using a Gaussian notch filter. The proposed median-Gaussian filtering framework shows good results for STXM images whose dimensions are powers of two, provided that parameters such as the threshold, the sizes of the median and Gaussian filters, and the size of the low-frequency window are properly selected. Copyright © 2011 Elsevier Ltd. All rights reserved.
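The described pipeline (detect spectral peaks that stand out against a local median of the Fourier amplitude spectrum, then attenuate them with Gaussian notches) can be sketched as follows. This is an illustrative numpy reconstruction with assumed parameter defaults, not the authors' code:

```python
import numpy as np

def gaussian_notch_filter(image, win=3, thresh=5.0, sigma=2.0):
    """Suppress periodic (Moire-like) noise: flag bins of the Fourier
    amplitude spectrum exceeding `thresh` times their local median,
    then attenuate each flagged bin with a Gaussian notch."""
    F = np.fft.fftshift(np.fft.fft2(image))
    amp = np.abs(F)
    H, W = amp.shape
    # local median of the amplitude spectrum (brute-force window)
    med = np.empty_like(amp)
    r = win // 2
    for i in range(H):
        for j in range(W):
            med[i, j] = np.median(
                amp[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1])
    peaks = amp > thresh * np.maximum(med, 1e-12)
    peaks[H // 2, W // 2] = False        # never notch the DC term
    # multiplicative notch mask: product of inverted Gaussians
    yy, xx = np.mgrid[0:H, 0:W]
    mask = np.ones((H, W))
    for (pi, pj) in zip(*np.nonzero(peaks)):
        d2 = (yy - pi) ** 2 + (xx - pj) ** 2
        mask *= 1.0 - np.exp(-d2 / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

A protected low-frequency window around DC, as the paper mentions, would be added by clearing `peaks` inside that window as well.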
Flow-gated radial phase-contrast imaging in the presence of weak flow.
Peng, Hsu-Hsia; Huang, Teng-Yi; Wang, Fu-Nien; Chung, Hsiao-Wen
2013-01-01
To implement a flow-gating method to acquire phase-contrast (PC) images of carotid arteries without use of an electrocardiography (ECG) signal to synchronize the acquisition of imaging data with pulsatile arterial flow. The flow-gating method was realized through radial scanning and sophisticated post-processing methods including downsampling, complex difference, and correlation analysis to improve the evaluation of flow-gating times in radial phase-contrast scans. Quantitatively comparable results (R = 0.92-0.96, n = 9) of flow-related parameters, including mean velocity, mean flow rate, and flow volume, with conventional ECG-gated imaging demonstrated that the proposed method is highly feasible. The radial flow-gating PC imaging method is applicable in carotid arteries. The proposed flow-gating method can potentially avoid the setting up of ECG-related equipment for brain imaging. This technique has potential use in patients with arrhythmia or weak ECG signals.
Applied high-speed imaging for the icing research program at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Slater, Howard; Owens, Jay; Shin, Jaiwon
1991-01-01
The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment in which to simulate natural icing events. The closed-loop, low-speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information is provided to scientific, technical, and industrial imaging specialists, as well as to research personnel, about the high-speed and conventional imaging systems used in the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.
Camera processing with chromatic aberration.
Korneliussen, Jan Tore; Hirakawa, Keigo
2014-10-01
Since the refractive index of the materials commonly used for lenses depends on the wavelength of light, practical camera optics fail to converge light to a single point on the image plane. Known as chromatic aberration, this phenomenon distorts image details by introducing magnification error, defocus blur, and color fringes. Though achromatic and apochromatic lens designs reduce chromatic aberration to a degree, they are complex and expensive, and they do not offer a perfect correction. In this paper, we propose a new post-capture processing scheme designed to overcome these problems computationally. Specifically, the proposed solution comprises a chromatic aberration-tolerant demosaicking algorithm and a post-demosaicking chromatic aberration correction. Experiments with simulated and real sensor data verify that the chromatic aberration is effectively corrected.
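The magnification-error component of lateral chromatic aberration can, in the simplest case, be reduced by radially rescaling the red and blue channels relative to green. The sketch below is a crude nearest-neighbour illustration of that idea only; it is not the demosaicking-integrated correction the paper proposes, and the per-channel scale factors are assumed inputs:

```python
import numpy as np

def rescale_channel(channel, scale):
    """Nearest-neighbour radial rescale about the image centre, a crude
    correction for the magnification error between colour channels."""
    H, W = channel.shape
    cy, cx = (H - 1) / 2.0, (W - 1) / 2.0
    yy, xx = np.mgrid[0:H, 0:W]
    # inverse mapping: sample the source at coordinates scaled about centre
    sy = np.clip(np.round(cy + (yy - cy) / scale), 0, H - 1).astype(int)
    sx = np.clip(np.round(cx + (xx - cx) / scale), 0, W - 1).astype(int)
    return channel[sy, sx]

def correct_lateral_ca(rgb, scale_r=1.0, scale_b=1.0):
    """Align the R and B channels to G by per-channel magnification."""
    out = rgb.copy()
    out[..., 0] = rescale_channel(rgb[..., 0], scale_r)
    out[..., 2] = rescale_channel(rgb[..., 2], scale_b)
    return out
```

Defocus blur and color fringes are not addressed by this geometric step, which is why the paper operates jointly with demosaicking.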
Detection of rebars in concrete using advanced ultrasonic pulse compression techniques.
Laureti, S; Ricci, M; Mohamed, M N I B; Senni, L; Davis, L A J; Hutchins, D A
2018-04-01
A pulse compression technique has been developed for the non-destructive testing of concrete samples. Scattering of signals from aggregate has historically been a problem in such measurements. Here, it is shown that a combination of piezocomposite transducers, pulse compression and post processing can lead to good images of a reinforcement bar at a cover depth of 55 mm. This has been achieved using a combination of wide bandwidth operation over the 150-450 kHz range, and processing based on measuring the cumulative energy scattered back to the receiver. Results are presented in the form of images of a 20 mm rebar embedded within a sample containing 10 mm aggregate. Copyright © 2017 Elsevier B.V. All rights reserved.
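Pulse compression itself is matched filtering: cross-correlating the received trace with the transmitted sweep collapses each echo into a sharp peak, raising the signal above aggregate scatter. A minimal numpy sketch with an illustrative linear chirp spanning the stated 150-450 kHz band (all parameter values are ours, chosen only to echo the abstract):

```python
import numpy as np

def linear_chirp(f0, f1, duration, fs):
    """Linear frequency sweep from f0 to f1 (Hz), sampled at fs."""
    t = np.arange(0, duration, 1.0 / fs)
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t ** 2)
    return np.sin(phase)

def pulse_compress(received, chirp):
    """Matched-filter pulse compression: cross-correlate the received
    trace with the transmitted chirp; echoes collapse to sharp peaks."""
    return np.correlate(received, chirp, mode="same")

fs = 2_000_000                                   # 2 MHz sampling
chirp = linear_chirp(150e3, 450e3, 100e-6, fs)   # 150-450 kHz sweep
trace = np.zeros(4000)
delay = 1500                                     # echo arrival sample
trace[delay:delay + chirp.size] += 0.5 * chirp   # buried echo
trace += np.random.default_rng(0).normal(0, 0.05, trace.size)
compressed = pulse_compress(trace, chirp)
peak = int(np.argmax(np.abs(compressed)))        # near delay + chirp.size // 2
```

The cumulative-energy imaging described in the paper would then integrate such compressed traces over scan positions.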
Particle accelerators in the hot spots of radio galaxy 3C 445, imaged with the VLT.
Prieto, M Almudena; Brunetti, Gianfranco; Mack, Karl-Heinz
2002-10-04
Hot spots (HSs) are regions of enhanced radio emission produced by supersonic jets at the tip of the radio lobes of powerful radio sources. Obtained with the Very Large Telescope (VLT), images of the HSs in the radio galaxy 3C 445 show bright knots embedded in diffuse optical emission distributed along the post-shock region created by the impact of the jet into the intergalactic medium. The observations reported here confirm that relativistic electrons are accelerated by Fermi-I acceleration processes in HSs. Furthermore, both the diffuse emission tracing the rims of the front shock and the multiple knots demonstrate the presence of additional continuous re-acceleration processes of electrons (Fermi-II).
Point target detection utilizing super-resolution strategy for infrared scanning oversampling system
NASA Astrophysics Data System (ADS)
Wang, Longguang; Lin, Zaiping; Deng, Xinpu; An, Wei
2017-11-01
To improve the resolution of remote sensing infrared images, an infrared scanning oversampling system is employed, quadrupling the amount of information and thereby aiding target detection. Generally, the image data from the double-line detector of an infrared scanning oversampling system are shuffled into a whole oversampled image for post-processing, but aliasing between neighboring pixels degrades the image, with a great impact on target detection. This paper presents a point target detection method utilizing a super-resolution (SR) strategy for infrared scanning oversampling systems, with an accelerated SR scheme proposed to realize fast de-aliasing of the oversampled image and an adaptive MRF-based regularization designed to preserve and aggregate target energy. Extensive experiments demonstrate the superior detection performance, robustness, and efficiency of the proposed method compared with other state-of-the-art approaches.
A novel methodology for litho-to-etch pattern fidelity correction for SADP process
NASA Astrophysics Data System (ADS)
Chen, Shr-Jia; Chang, Yu-Cheng; Lin, Arthur; Chang, Yi-Shiang; Lin, Chia-Chi; Lai, Jun-Cheng
2017-03-01
For 2x nm node semiconductor devices and beyond, more aggressive resolution enhancement techniques (RETs) such as source-mask co-optimization (SMO), litho-etch-litho-etch (LELE), and self-aligned double patterning (SADP) are utilized for low-k1-factor lithography processes. In the SADP process, pattern fidelity is extremely critical, since a slight photoresist (PR) top-loss or profile roughness may impact the later core trim process owing to its sensitivity to the environment. During the subsequent sidewall formation and core removal processes, weaknesses in the core trim profile may worsen and induce serious defects that affect the final electrical performance. To predict PR top-loss, a rigorous lithography simulation can provide a reference for modifying mask layouts, but it requires a much longer run time and is not capable of full-field mask data preparation. In this paper, we first present an algorithm that utilizes multiple intensity levels from conventional aerial image simulation to assess the physical profile from lithography through the core trim etching steps. Subsequently, a novel correction method is utilized to improve post-etch pattern fidelity without sacrificing the lithography process window. The results not only matched PR top-loss in rigorous lithography simulation, but also agreed with post-etch wafer data. Furthermore, this methodology can be incorporated with OPC and post-OPC verification to improve the core trim profile and final pattern fidelity at an early stage.
A data-management system using sensor technology and wireless devices for port security
NASA Astrophysics Data System (ADS)
Saldaña, Manuel; Rivera, Javier; Oyola, Jose; Manian, Vidya
2014-05-01
Sensor technologies such as infrared sensors, hyperspectral imaging, and video camera surveillance have proven viable in port security. Drawing from sources such as infrared sensor data, digital camera images, and processed hyperspectral images, this article explores the implementation of a real-time data delivery system. In an effort to improve the manner in which anomaly detection data are delivered to interested parties in port security, this system explores how a client-server architecture can provide protected access to data, reports, and device status. Sensor data and hyperspectral image data are kept in a monitored directory, where the system links them to existing users in the database. Since this system renders processed hyperspectral images that are dynamically added to the server - which often occupy a large amount of space - the resolution of these images is trimmed down to around 1024×768 pixels. Any change to an image, or any data modification originating from a sensor, triggers a message to all users associated with it. These messages are sent to the corresponding users through automatic email generation and through push notifications using Google Cloud Messaging for Android. Moreover, this paper presents the complete architecture for data reception from the sensors, processing, and storage, and discusses how users of this system, such as port security personnel, can benefit from this service to receive secure real-time notifications when their designated sensors have detected anomalies and/or to gain remote access to results from processed hyperspectral imagery relevant to their assigned posts.
Accelerated speckle imaging with the ATST visible broadband imager
NASA Astrophysics Data System (ADS)
Wöger, Friedrich; Ferayorni, Andrew
2012-09-01
The Advanced Technology Solar Telescope (ATST), a 4 meter class telescope for observations of the solar atmosphere currently in its construction phase, will generate data at rates of the order of 10 TB/day with its state-of-the-art instrumentation. The high-priority ATST Visible Broadband Imager (VBI) instrument alone will create two data streams with a bandwidth of 960 MB/s each. Because of the related data handling issues, these data will be post-processed with speckle interferometry algorithms in near-real time at the telescope using the cost-effective Graphics Processing Unit (GPU) technology that is supported by the ATST Data Handling System. In this contribution, we lay out the VBI-specific approach to its image processing pipeline, put this into the context of the underlying ATST Data Handling System infrastructure, and finally describe the details of how the algorithms were redesigned to exploit data parallelism in the speckle image reconstruction. An algorithm redesign is often required to efficiently speed up an application using GPU technology; we have chosen NVIDIA's CUDA language as the basis for our implementation. We present preliminary results on algorithm performance obtained with our test facilities, and use these results to derive a conservative estimate of the requirements for a full system that could achieve near-real-time performance at the ATST.
Optimization of a hardware implementation for pulse coupled neural networks for image applications
NASA Astrophysics Data System (ADS)
Gimeno Sarciada, Jesús; Lamela Rivera, Horacio; Warde, Cardinal
2010-04-01
Pulse Coupled Neural Networks (PCNNs) are a very useful tool for image processing and visual applications, since they have the advantage of being invariant to image changes such as rotation, scaling, or certain distortions. Among other characteristics, a PCNN transforms a given image input into a temporal representation that can easily be analyzed later for pattern recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully for it to function optimally, so that the responses to the kinds of inputs to which it will be subjected are clearly discriminated, allowing easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process. In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed, and a similar circuital model is also designed. Both are then used to determine the optimal values of the several parameters of a PCNN - gain, threshold, and the time constants for feed-in, threshold, and linking - leading to an optimal design for image recognition. The results are compared for usefulness, accuracy, and speed, as well as for the performance and time requirements of fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
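For reference, a commonly used simplified PCNN iteration (feeding input, linking input, internal activity, pulse output, dynamic threshold) can be written in a few lines of numpy. The parameter names and default values below are illustrative only, not the optimal values the paper sets out to determine:

```python
import numpy as np

def neighbor_sum(Y):
    """Sum of the 8-connected neighbours (the linking-field kernel)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2]               + P[1:-1, 2:] +
            P[2:, :-2]  + P[2:, 1:-1]  + P[2:, 2:])

def pcnn(S, steps=10, beta=0.2, vF=0.1, vL=1.0, vT=20.0,
         aF=0.1, aL=1.0, aT=0.2):
    """Simplified pulse-coupled neural network on stimulus image S.
    Returns the per-step firing counts: the temporal signature that
    is later analyzed for pattern recognition."""
    F = np.zeros_like(S); L = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.ones_like(S)
    counts = []
    for _ in range(steps):
        ns = neighbor_sum(Y)
        F = np.exp(-aF) * F + S + vF * ns      # feeding input
        L = np.exp(-aL) * L + vL * ns          # linking input
        U = F * (1.0 + beta * L)               # internal activity
        Y = (U > T).astype(float)              # pulse output
        T = np.exp(-aT) * T + vT * Y           # dynamic threshold
        counts.append(int(Y.sum()))
    return np.array(counts)
```

The sensitivity of this firing signature to the decay constants and gains is exactly why the paper compares a mathematical and a circuital model for parameter tuning.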
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamran, Mudassar, E-mail: kamranm@mir.wustl.edu; Fowler, Kathryn J., E-mail: fowlerk@mir.wustl.edu; Mellnick, Vincent M., E-mail: mellnickv@mir.wustl.edu
Primary aortic neoplasms are rare. Aortic sarcoma arising after endovascular aneurysm repair (EVAR) is a scarce subset of primary aortic malignancies, reports of which are infrequent in the published literature. The diagnosis of aortic sarcoma is challenging due to its non-specific clinical presentation, and the prognosis is poor due to delayed diagnosis, rapid proliferation, and propensity for metastasis. Post-EVAR, aortic sarcomas may mimic other more common aortic processes on surveillance imaging. Radiologists are often unfamiliar with this rare entity, for which multimodality imaging and awareness are invaluable in early diagnosis. A series of three pathologically confirmed cases is presented to display the multimodality imaging features and clinical presentations of aortic sarcoma arising after EVAR.
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-03-01
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A software toolkit, DICOM IQSC, has been developed to implement the SC-centered information integration of quantitative analysis for the routine practice of nuclear medicine. Primary experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
High performance embedded system for real-time pattern matching
NASA Astrophysics Data System (ADS)
Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.
2017-02-01
In this paper we present an innovative and high performance embedded system for real-time pattern matching. This system is based on the evolution of hardware and algorithms developed for the field of High Energy Physics and more specifically for the execution of extremely fast pattern matching for tracking of particles produced by proton-proton collisions in hadron collider experiments. A miniaturized version of this complex system is being developed for pattern matching in generic image processing applications. The system works as a contour identifier able to extract the salient features of an image. It is based on the principles of cognitive image processing, which means that it executes fast pattern matching and data reduction mimicking the operation of the human brain. The pattern matching can be executed by a custom designed Associative Memory chip. The reference patterns are chosen by a complex training algorithm implemented on an FPGA device. Post processing algorithms (e.g. pixel clustering) are also implemented on the FPGA. The pattern matching can be executed on a 2D or 3D space, on black and white or grayscale images, depending on the application and thus increasing exponentially the processing requirements of the system. We present the firmware implementation of the training and pattern matching algorithm, performance and results on a latest generation Xilinx Kintex Ultrascale FPGA device.
Diminishing incidence of Internet child pornographic images.
Bagley, Christopher
2003-08-01
Indecent images of children posted to web sites and newsgroups over a 4-yr. period were sampled. A significant decline in the number of such images posted was observed, probably accounted for by the pressure of groups opposed to the distribution of such exploitive material.
PANDA: a pipeline toolbox for analyzing brain diffusion images.
Cui, Zaixu; Zhong, Suyu; Xu, Pengfei; He, Yong; Gong, Gaolang
2013-01-01
Diffusion magnetic resonance imaging (dMRI) is widely used in both scientific research and clinical practice for in-vivo studies of the human brain. While a number of post-processing packages have been developed, fully automated processing of dMRI datasets remains challenging. Here, we developed a MATLAB toolbox named "Pipeline for Analyzing braiN Diffusion imAges" (PANDA) for fully automated processing of brain diffusion images. The processing modules of several established packages, including the FMRIB Software Library (FSL), Pipeline System for Octave and Matlab (PSOM), Diffusion Toolkit, and MRIcron, were employed in PANDA. Given any number of raw dMRI datasets from different subjects, in either DICOM or NIfTI format, PANDA can automatically perform a series of steps to process DICOM/NIfTI into diffusion metrics [e.g., fractional anisotropy (FA) and mean diffusivity (MD)] that are ready for statistical analysis at the voxel level, the atlas level, and the Tract-Based Spatial Statistics (TBSS) level, and can complete the construction of anatomical brain networks for all subjects. In particular, PANDA can process different subjects in parallel, using multiple cores either in a single computer or in a distributed computing environment, thus greatly reducing the time cost when dealing with a large number of datasets. In addition, PANDA has a friendly graphical user interface (GUI), allowing the user to interactively adjust the input/output settings as well as the processing parameters. As an open-source package, PANDA is freely available at http://www.nitrc.org/projects/panda/. This novel toolbox is expected to substantially simplify the image processing of dMRI datasets and facilitate human structural connectome studies.
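Two of the diffusion metrics PANDA produces, fractional anisotropy (FA) and mean diffusivity (MD), are simple functions of the three diffusion tensor eigenvalues. A minimal sketch of the standard formulas (not PANDA code, which delegates this to FSL/Diffusion Toolkit):

```python
import numpy as np

def fa_md(eigenvalues):
    """Fractional anisotropy (FA) and mean diffusivity (MD) from the
    three eigenvalues of a diffusion tensor.

    MD = (l1 + l2 + l3) / 3
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||
    """
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()
    norm = np.sqrt((lam ** 2).sum())
    if norm == 0:
        return 0.0, 0.0
    fa = np.sqrt(1.5 * ((lam - md) ** 2).sum()) / norm
    return float(fa), float(md)
```

Isotropic diffusion (equal eigenvalues) gives FA = 0, while a single dominant eigenvalue drives FA toward 1, which is what makes FA a useful white-matter tract marker.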
3D reconstruction techniques made easy: know-how and pictures.
Luccichenti, Giacomo; Cademartiri, Filippo; Pezzella, Francesca Romana; Runza, Giuseppe; Belgrano, Manuel; Midiri, Massimo; Sabatini, Umberto; Bastianello, Stefano; Krestin, Gabriel P
2005-10-01
Three-dimensional reconstructions represent a visual tool for illustrating the basis of three-dimensional post-processing, such as interpolation, ray-casting, segmentation, percentage classification, gradient calculation, shading, and illumination. Knowledge of the optimal scanning and reconstruction parameters facilitates the use of three-dimensional reconstruction techniques in clinical practice. The aim of this article is to explain the principles of multidimensional image processing in a pictorial way, along with the advantages and limitations of the different possibilities of 3D visualisation.
The value of magnetic resonance imaging in the diagnosis of penile fracture.
Guler, Ibrahim; Ödev, Kemal; Kalkan, Havva; Simsek, Cihan; Keskin, Suat; Kilinç, Mehmet
2015-01-01
We studied the use of magnetic resonance imaging in the diagnosis of penile fracture. Between 1997 and 2012, fifteen patients (age range 17-48 years, mean age 37 years) with suspected penile fracture underwent MRI examinations. Ten patients were injured during sexual intercourse, four patients were traumatized by non-physiological bending of the penis during self-manipulation, and one patient was injured falling from bed. Investigations were performed with a 1.5 T MR unit. With the patient in the supine position, the penis was taped against the abdominal wall and a surface coil was placed on the penis. All patients were studied with axial, coronal, and sagittal pre-contrast and post-contrast T1-weighted TSE (TR/TE: 538/13 ms) and T2-weighted TSE (5290/110 ms) sequences. All patients underwent surgical exploration. The follow-up ranged from 3 months to 72 months. Clinically, all patients showed a normal healing process without complications. In 11 patients, shortening and thickening of the tunica albuginea was observed. Three patients had post-traumatic erectile dysfunction. In all patients, corpus cavernosum fractures were clearly depicted as a discontinuity of the low-signal-intensity tunica albuginea. These findings were most evident on T1-weighted images and were also depicted on T2-weighted sequences. Images obtained shortly after contrast medium administration showed considerable enhancement only at the rupture site. Subcutaneous extratunical haematomas in all patients were also recognizable on T2-weighted images. MRI findings were confirmed at surgery. Magnetic resonance imaging is of great value in the diagnosis of penile fracture. Furthermore, this method is well suited for visualising the post-operative healing process.
A content analysis of thinspiration images and text posts on Tumblr.
Wick, Madeline R; Harriger, Jennifer A
2018-03-01
Thinspiration is content advocating extreme weight loss by means of images and/or text posts. While past content analyses have examined thinspiration content on social media and other websites, no research to date has examined thinspiration content on Tumblr. Over the course of a week, 222 images and text posts were collected after entering the keyword 'thinspiration' into the Tumblr search bar. These images were then rated on a variety of characteristics. The majority of thinspiration images included a thin woman adhering to culturally based beauty ideals, often posing in a manner that accentuated her thinness or sexuality. The most common themes for thinspiration text posts included dieting/restraint, weight loss, food guilt, and body guilt. The thinspiration content on Tumblr appears to be consistent with that on other mediums. Future research should utilize experimental methods to examine the potential effects of consuming thinspiration content on Tumblr. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirose, K; Takai, Y; Southern Tohoku BNCT Research Center, Koriyama
2016-06-15
Purpose: The purpose of this study was to prospectively assess the reproducibility of positioning errors due to a temporarily indwelled catheter in urethra-sparing image-guided (IG) IMRT. Methods: Ten patients received urethra-sparing prostate IG-IMRT with implanted fiducials. After the first CT scan was performed in the supine position, a 6-Fr catheter was indwelled into the urethra, and the second CT images were taken for planning. While the PTV received 80 Gy, a 5% dose reduction was applied to the urethral PRV along the catheter. Additional CT scans were also performed at the 5th and 30th fractions. Positions of interest (POIs) were set on the posterior edge of the prostate at the beam isocenter level (POI1) and on the cranial and caudal edges of the prostatic urethra on the post-indwelled CT images. POIs were copied onto the pre-indwelled, 5th- and 30th-fraction CT images after fiducial matching on these images. The deviation of each POI between the pre- and post-indwelled CT and the reproducibility of prostate displacement due to the catheter were evaluated. Results: The deviation of POI1 caused by the indwelled catheter in the RL/AP/SI directions (mm) was 0.20±0.27/−0.64±2.43/1.02±2.31, respectively, and the absolute distance (mm) was 3.15±1.41. The deviation tends to be larger closer to the caudal edge of the prostate. Compared with the pre-indwelled CT scan, the median displacements of all POIs (mm) were 0.3±0.2/2.2±1.1/2.0±2.6 in the post-indwelled, 0.4±0.4/3.4±2.1/2.3±2.6 in the 5th-, and 0.5±0.5/1.7±2.2/1.9±3.1 in the 30th-fraction CT scan, with a similar data distribution. There were 6 patients with over-5-mm displacement in the AP and/or CC directions. Conclusion: Reproducibility of positioning errors due to the temporarily indwelled catheter was observed. Especially for patients with unusually large shifts caused by the indwelled catheter at the planning stage, treatment planning should be performed using the pre-indwelled CT images with a transferred contour of the urethra identified on the post-indwelled CT images.
Toward image phylogeny forests: automatically recovering semantically similar image relationships.
Dias, Zanoni; Goldenstein, Siome; Rocha, Anderson
2013-09-10
In the past few years, several near-duplicate detection methods have appeared in the literature to identify the cohabiting versions of a given document online. Following this trend, there have been some initial attempts to go beyond the detection task and look into the structure of evolution within a set of related images over time. In this paper, we aim to automatically identify the structure of relationships underlying the images, correctly reconstruct their past history and ancestry information, and group them into distinct trees of processing history. We introduce a new algorithm that automatically handles sets comprising different related images and outputs the phylogeny trees (also known as a forest) associated with them. Image phylogeny algorithms have many applications, such as finding the first image within a set posted online (useful for tracking copyright infringement perpetrators), hinting at child pornography content creators, and narrowing down a list of suspects for online harassment using photographs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Ibrahim, Reham S; Fathy, Hoda
2018-03-30
Tracking the impact of commonly applied post-harvesting and industrial processing practices on the compositional integrity of ginger rhizome was implemented in this work. Untargeted metabolite profiling was performed using a digitally-enhanced HPTLC method, in which the chromatographic fingerprints were extracted using ImageJ software and then analysed with multivariate Principal Component Analysis (PCA) for pattern recognition. A targeted approach was applied using a new, validated, simple, and fast HPTLC image analysis method for simultaneous quantification of the officially recognized markers 6-, 8-, and 10-gingerol and 6-shogaol, in conjunction with chemometric Hierarchical Clustering Analysis (HCA). The results of both targeted and untargeted metabolite profiling revealed that the peeling, drying, and storage employed during processing have a great influence on the ginger chemo-profile; the different forms of processed ginger should not be used interchangeably. Moreover, it is deemed necessary to consider the holistic metabolic profile for comprehensive evaluation of ginger during processing. Copyright © 2018. Published by Elsevier B.V.
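The untargeted step, PCA over densitometric fingerprints, can be sketched with an SVD in a few lines of numpy. The function below is a generic illustration, assuming one fingerprint per row; it is not the software used in the paper (ImageJ plus a chemometrics package):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Principal Component Analysis via SVD of the mean-centred data.
    Rows of X are samples (densitometric fingerprints); columns are
    variables (positions along the HPTLC track)."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # sample scores
    explained = (s ** 2) / (s ** 2).sum()             # variance ratios
    return scores, explained[:n_components]
```

Plotting the first two score columns is the usual pattern-recognition view: fingerprints from differently processed ginger batches would form separate clusters if processing alters the chemo-profile.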
Tampered Region Localization of Digital Color Images Based on JPEG Compression Noise
NASA Astrophysics Data System (ADS)
Wang, Wei; Dong, Jing; Tan, Tieniu
With the availability of various digital image editing tools, seeing is no longer believing. In this paper, we focus on tampered region localization for image forensics. We propose an algorithm that can locate tampered region(s) in a losslessly compressed tampered image when its unchanged region is the output of a JPEG decompressor. We find that the tampered region and the unchanged region respond differently to JPEG compression: the tampered region has stronger high-frequency quantization noise than the unchanged region. We employ PCA to separate quantization noise at different spatial frequencies, i.e. low-, medium-, and high-frequency quantization noise, and extract the high-frequency quantization noise for tampered region localization. Post-processing is involved to obtain the final localization result. The experimental results prove the effectiveness of our proposed method.
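The underlying intuition is that a once-JPEG-compressed region has had its high-frequency content quantized away, while a pasted, never-compressed region has not. The sketch below illustrates this with a blockwise DCT high-frequency energy map; it is a crude stand-in for the paper's PCA-based noise separation, and the frequency cutoff is our own assumed parameter:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows = frequency indices)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def hf_noise_map(image, cutoff=4):
    """Per-8x8-block energy in high-frequency DCT coefficients.
    Regions decompressed from JPEG tend to have suppressed HF energy,
    so pasted (never-compressed) regions stand out as high-energy."""
    C = dct_matrix(8)
    H, W = image.shape
    bh, bw = H // 8, W // 8
    energy = np.zeros((bh, bw))
    u, v = np.mgrid[0:8, 0:8]
    hf = (u + v) >= cutoff                 # high-frequency mask
    for i in range(bh):
        for j in range(bw):
            block = image[i*8:(i+1)*8, j*8:(j+1)*8]
            coeffs = C @ block @ C.T       # 2-D DCT of the block
            energy[i, j] = (coeffs[hf] ** 2).sum()
    return energy
```

Thresholding and morphological clean-up of such a map would correspond to the post-processing step the abstract mentions.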
Guo, Lu; Wang, Gang; Feng, Yuanming; Yu, Tonggang; Guo, Yu; Bai, Xu; Ye, Zhaoxiang
2016-09-21
Accurate target volume delineation is crucial for the radiotherapy of tumors. Diffusion and perfusion magnetic resonance imaging (MRI) can provide functional information about brain tumors, and they are able to detect tumor volume and physiological changes beyond the lesions shown on conventional MRI. This review examines recent studies that utilized diffusion and perfusion MRI for tumor volume definition in radiotherapy of brain tumors, and it presents the opportunities and challenges in the integration of multimodal functional MRI into clinical practice. The results indicate that specialized and robust post-processing algorithms and tools are needed for the precise alignment of targets on the images, and comprehensive validations with more clinical data are important for the improvement of the correlation between histopathologic results and MRI parameter images.
No filter: A characterization of #pharmacist posts on Instagram.
Hindman, F Mark; Bukowitz, Alison E; Reed, Brent N; Mattingly, T Joseph
The primary objective was to characterize the underlying intent of Instagram posts using the hashtag metadata term "#pharmacist" over a 1-year period. The secondary objective was to determine whether statistically significant relationships existed between the categories and the 2 dichotomous variables tested, self-portrayed images, and relation to health care. Retrospective, cross-sectional, mixed methods, exploratory, descriptive study. A review of available Instagram posts using the hashtag metadata "#pharmacist" from November 4, 2014, to November 3, 2015. Data were collected using software provided by NEXT Analytics. A sample of 14 random days was selected. Six hundred sixty-one Instagram posts containing "#pharmacist" in the caption. Categorization of post (including both picture and primary caption), self-portrayed images (i.e., "selfie"), and health care-related images. One thousand three hundred thirty-eight posts were collected from the 14-day sample. Of the posts, 661 (49.4%) were analyzed; the remainder were excluded for being written in a non-English language or containing "#pharmacist" in the comments of the post, rather than the primary caption; 19.7% of all posts fell into the Celebration category, followed by Work Experience and Advertisement with 18.6% and 12.6%, respectively. The remainder of the categories contained 10% or fewer posts. Less than 25% of posts were self-portrayed images, and 88% of posts were deemed health care-related. Instagram is an emerging social media platform that can be used to expand patient education, professional advocacy, and public health outreach. In this study, the majority of #pharmacist posts were celebratory in nature, and the majority were determined to be related to health care. Posts containing #pharmacist may provide the opportunity to educate the public regarding the knowledge and capabilities of pharmacists. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.
2016-05-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinate system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
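The "3-sigma" metrics above are percentile estimates rather than fitted Gaussian parameters. A minimal sketch of that computation follows, with synthetic errors standing in for a day of IPATS registration measurements (nothing here is IPATS code):

```python
import numpy as np

# Hypothetical illustration of a "3-sigma" INR metric: the 99.73rd
# percentile of absolute registration errors pooled over an evaluation
# period. The error samples are synthetic, not real IPATS measurements.
def three_sigma_metric(errors):
    """Estimate the 99.73rd percentile of |error|."""
    return float(np.percentile(np.abs(errors), 99.73))

rng = np.random.default_rng(0)
errors = rng.normal(0.0, 1.0, size=100_000)   # zero-mean, unit-sigma errors
metric = three_sigma_metric(errors)
# For Gaussian errors, the 99.73rd percentile of |error| is close to 3*sigma,
# which is why the percentile is conventionally called a "3-sigma" error.
print(metric)
```

For non-Gaussian error distributions the percentile and 3× the standard deviation diverge, which is why the percentile definition is the operationally meaningful one.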
Fundamentals of image acquisition and processing in the digital era.
Farman, A G
2003-01-01
To review the historic context for digital imaging in dentistry and to outline the fundamental issues related to digital imaging modalities. Digital dental X-ray images can be achieved by scanning analog film radiographs (secondary capture), with photostimulable phosphors, or using solid-state detectors (e.g. charge-coupled device and complementary metal oxide semiconductor). Four characteristics are basic to all digital image detectors: size of the active area, signal-to-noise ratio, contrast resolution and spatial resolution. To perceive structure in a radiographic image, there needs to be a sufficient difference between contrasting densities. This primarily depends on differences in the attenuation of the X-ray beam by adjacent tissues. It also depends on the signal received; therefore, contrast tends to increase with increased exposure. Given an adequate signal and sufficient differences in radiodensity, contrast will be sufficient to differentiate between adjacent structures, irrespective of the recording modality and processing used. Where contrast is not sufficient, digital images can sometimes be post-processed to disclose details that would otherwise go undetected. For example, cephalogram isodensity mapping can improve soft tissue detail. It is concluded that it could be a further decade or two before three-dimensional digital imaging systems entirely replace two-dimensional analog films. Such systems need not only to produce prettier images, but also to provide a demonstrable, evidence-based higher standard of care at a cost that is not economically prohibitive for the practitioner or society, and which allows efficient and effective workflow within the business of dental practice.
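A common primitive behind such contrast post-processing is window/level remapping, sketched below. The 12-bit ramp image and the window settings are illustrative assumptions, not values from the article:

```python
import numpy as np

# Minimal sketch of window/level contrast post-processing, the kind of
# operation that can disclose low-contrast detail in a digital radiograph.
# The 12-bit ramp image and the window settings are illustrative assumptions.
def window_level(image, center, width):
    lo, hi = center - width / 2.0, center + width / 2.0
    out = np.clip((image - lo) / (hi - lo), 0.0, 1.0)  # map window to [0, 1]
    return (out * 255).astype(np.uint8)                # 8-bit display range

image = np.linspace(0, 4095, 16).reshape(4, 4)   # synthetic 12-bit ramp
display = window_level(image, center=2048, width=1024)
print(display.min(), display.max())              # values outside the window saturate
```

Narrowing the window stretches a small band of radiodensities across the full display range, which is how subtle density differences become visible.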
NASA Technical Reports Server (NTRS)
DeLuccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.
2016-01-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
NASA Astrophysics Data System (ADS)
Lane, Ciara; Stynes, Martin; O'Donoghue, John
2016-10-01
A questionnaire survey was carried out as part of a PhD research study to investigate the image of mathematics held by post-primary students in Ireland. The study focused on students in fifth year of post-primary education studying ordinary level mathematics for the Irish Leaving Certificate examination - the final examination for students in second-level or post-primary education. At the time this study was conducted, ordinary level mathematics students constituted approximately 72% of Leaving Certificate students. Students were aged between 15 and 18 years. A definition for 'image of mathematics' was adapted from Lim and Wilson, with image of mathematics hypothesized as comprising attitudes, beliefs, self-concept, motivation, emotions and past experiences of mathematics. A questionnaire was composed incorporating 84 fixed-response items chosen from eight pre-established scales by Aiken, Fennema and Sherman, Gourgey and Schoenfeld. This paper focuses on the findings from the questionnaire survey. Students' images of mathematics are compared with regard to gender, type of post-primary school attended and prior mathematical achievement.
NASA Astrophysics Data System (ADS)
Kim, Youngmi; Choi, Jae-Young; Choi, Kwangseon; Choi, Jung-Hoe; Lee, Sooryong
2011-04-01
As IC design complexity keeps increasing, it is more and more difficult to ensure pattern transfer after optical proximity correction (OPC) due to the continuous reduction of layout dimensions and the lithographic limitation imposed by the k1 factor. To guarantee imaging fidelity, resolution enhancement technologies (RET) such as off-axis illumination (OAI), different types of phase-shift masks and OPC techniques have been developed. For model-based OPC, to cross-confirm the contour image against the target layout, post-OPC verification solutions continue to be developed: methods for generating contours and matching them to target structures, and methods for filtering and sorting patterns to eliminate false errors and duplicate patterns. Detecting only real errors while excluding false ones is the most important requirement for an accurate and fast verification process, saving not only review time and engineering resources but also overall wafer process time. In the general case of post-OPC verification for metal-contact/via coverage (CC) checking, the verification solution outputs a huge number of errors due to borderless design, so it is impractical to review and correct all of them. This can cause OPC engineers to miss real defects and, at a minimum, delay time to market. In this paper, we studied methods for increasing the efficiency of post-OPC verification, especially for the CC check. For metal layers, the final CD after the etch process shows varying bias depending on the distance to neighboring patterns, so it is more reasonable to consider the final metal shape when confirming contact/via coverage. Through optimization of the biasing rule for different pitches and shapes of metal lines, we obtained more accurate and efficient verification results and decreased the time needed to review and find real errors.
In summary, we present a suggestion for increasing the efficiency of the OPC verification process by applying a simple biasing rule to the metal layout instead of applying an etch model.
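The idea of a pitch-dependent biasing rule can be sketched as a simple lookup; all thresholds and bias values below are invented for illustration, since the abstract does not give the authors' actual rules:

```python
# Toy illustration (not the authors' production rules) of a pitch-dependent
# biasing table for metal lines prior to the contact/via coverage check:
# tighter pitches receive a smaller etch-compensation bias. All thresholds
# and bias values below are invented for illustration.
BIAS_RULES = [
    (100.0, 2.0),          # pitch < 100 nm  -> +2 nm per edge
    (200.0, 5.0),          # pitch < 200 nm  -> +5 nm per edge
    (float("inf"), 8.0),   # isolated lines  -> +8 nm per edge
]

def metal_bias(pitch_nm):
    """Return the per-edge bias (nm) for a metal line at the given pitch."""
    for limit, bias in BIAS_RULES:
        if pitch_nm < limit:
            return bias
    return 0.0

print(metal_bias(80), metal_bias(150), metal_bias(500))  # 2.0 5.0 8.0
```

The appeal of such a rule table over a full etch model is speed: it can be applied to the entire metal layout in a single pass before the coverage check.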
The Use of Social Tags in Text and Image Searching on the Web
ERIC Educational Resources Information Center
Kim, Yong-Mi
2011-01-01
In recent years, tags have become a standard feature on a diverse range of sites on the Web, accompanying blog posts, photos, videos, and online news stories. Tags are descriptive terms attached to Internet resources. Despite the rapid adoption of tagging, how people use tags during the search process is not well understood. There is little…
ExoSOFT: Exoplanet Simple Orbit Fitting Toolbox
NASA Astrophysics Data System (ADS)
Mede, Kyle; Brandt, Timothy D.
2017-08-01
ExoSOFT provides orbital analysis of exoplanets and binary star systems. It fits any combination of astrometric and radial velocity data, and offers four parameter space exploration techniques, including MCMC. It is packaged with an automated set of post-processing and plotting routines to summarize results, and is suitable for performing orbital analysis during surveys with new radial velocity and direct imaging instruments.
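ExoSOFT itself is the reference implementation; purely to illustrate the kind of MCMC parameter-space exploration it offers, here is a bare-bones Metropolis sampler fitting the semi-amplitude of a synthetic circular-orbit radial-velocity curve (all names and values are illustrative assumptions):

```python
import numpy as np

# Not ExoSOFT: a minimal Metropolis MCMC fitting the semi-amplitude K of a
# circular-orbit radial-velocity curve, to illustrate the style of
# parameter-space exploration. All values below are synthetic.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 2.0, 50)              # time in units of orbital periods
K_true, sigma = 30.0, 2.0                  # m/s; assumed "truth" and noise
rv = K_true * np.sin(2 * np.pi * t) + rng.normal(0, sigma, t.size)

def log_like(K):
    resid = rv - K * np.sin(2 * np.pi * t)
    return -0.5 * np.sum((resid / sigma) ** 2)

K, chain = 10.0, []
for _ in range(5000):
    K_new = K + rng.normal(0, 1.0)         # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_like(K_new) - log_like(K):
        K = K_new                          # accept the proposed step
    chain.append(K)
K_est = float(np.mean(chain[1000:]))       # posterior mean after burn-in
print(abs(K_est - K_true) < 2.0)
```

A real fit explores the full orbital-element space (period, eccentricity, inclination, ...) jointly over combined astrometric and radial-velocity likelihoods, but the accept/reject core is the same.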
LIBRA is a fully-automatic breast density estimation software solution based on a published algorithm that works on either raw (i.e., “FOR PROCESSING”) or vendor post-processed (i.e., “FOR PRESENTATION”) digital mammography images. LIBRA has been applied to over 30,000 screening exams and is being increasingly utilized in larger studies.
Exploring the Early Organization and Maturation of Linguistic Pathways in the Human Infant Brain.
Dubois, Jessica; Poupon, Cyril; Thirion, Bertrand; Simonnet, Hina; Kulikova, Sofya; Leroy, François; Hertz-Pannier, Lucie; Dehaene-Lambertz, Ghislaine
2016-05-01
Linguistic processing is based on a close collaboration between temporal and frontal regions connected by two pathways: the "dorsal" and "ventral pathways" (assumed to support phonological and semantic processing, respectively, in adults). We investigated here the development of these pathways at the onset of language acquisition, during the first post-natal weeks, using cross-sectional diffusion imaging in 21 healthy infants (6-22 weeks of age) and 17 young adults. We compared the bundle organization and microstructure at these two ages using tractography and original clustering analyses of diffusion tensor imaging parameters. We observed structural similarities between both groups, especially concerning the dorsal/ventral pathway segregation and the arcuate fasciculus asymmetry. We further highlighted the developmental tempos of the linguistic bundles: The ventral pathway maturation was more advanced than the dorsal pathway maturation, but the latter catches up during the first post-natal months. Its fast development during this period might relate to the learning of speech cross-modal representations and to the first combinatorial analyses of the speech input. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Dubois, J; Dehaene-Lambertz, G; Kulikova, S; Poupon, C; Hüppi, P S; Hertz-Pannier, L
2014-09-12
Studying how the healthy human brain develops is important to understand early pathological mechanisms and to assess the influence of fetal or perinatal events on later life. Brain development relies on complex and intermingled mechanisms especially during gestation and first post-natal months, with intense interactions between genetic, epigenetic and environmental factors. Although the baby's brain is organized early on, it is not a miniature adult brain: regional brain changes are asynchronous and protracted, i.e. sensory-motor regions develop early and quickly, whereas associative regions develop later and slowly over decades. Concurrently, the infant/child gradually achieves new performances, but how brain maturation relates to changes in behavior is poorly understood, requiring non-invasive in vivo imaging studies such as magnetic resonance imaging (MRI). Two main processes of early white matter development are reviewed: (1) establishment of connections between brain regions within functional networks, leading to adult-like organization during the last trimester of gestation, (2) maturation (myelination) of these connections during infancy to provide efficient transfers of information. Current knowledge from post-mortem descriptions and in vivo MRI studies is summed up, focusing on T1- and T2-weighted imaging, diffusion tensor imaging, and quantitative mapping of T1/T2 relaxation times, myelin water fraction and magnetization transfer ratio. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Delouche, Aurélie; Attyé, Arnaud; Heck, Olivier; Grand, Sylvie; Kastler, Adrian; Lamalle, Laurent; Renard, Felix; Krainik, Alexandre
2016-01-01
Mild traumatic brain injury (mTBI) is a leading cause of disability in adults, many of whom report a distressing combination of physical, emotional and cognitive symptoms, collectively known as post-concussion syndrome, that persist after the injury. Significant developments in magnetic resonance diffusion imaging, involving voxel-based quantitative analysis through the measurement of fractional anisotropy or mean diffusivity, have enhanced our knowledge on the different stages of mTBI pathophysiology. Other diffusion imaging-derived techniques, including diffusion kurtosis imaging with multi-shell diffusion and high-order tractography models, have recently demonstrated their usefulness in mTBI. Our review starts by briefly outlining the physical basis of diffusion tensor imaging including the pitfalls for use in brain trauma, before discussing findings from diagnostic trials testing its usefulness in assessing brain structural changes in patients with mTBI. Use of different post-processing techniques for the diffusion imaging data, identified the corpus callosum as the most frequently injured structure in mTBI, particularly at sub-acute and chronic stages, and a crucial location for evaluating functional outcome. However, structural changes appear too subtle for identification using traditional diffusion biomarkers, thus disallowing expansion of these techniques into clinical practice. In this regard, more advanced diffusion techniques are promising in the assessment of this complex disease. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
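The two voxel-wise biomarkers named above have standard closed forms in terms of the three diffusion-tensor eigenvalues; a small sketch (the example eigenvalues are illustrative, not data from the study):

```python
import numpy as np

# Standard closed forms for mean diffusivity (MD) and fractional anisotropy
# (FA) from the three diffusion-tensor eigenvalues. Example eigenvalues are
# illustrative, in units of 1e-3 mm^2/s (not data from the study).
def md_fa(l1, l2, l3):
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()                                   # MD = (l1 + l2 + l3) / 3
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return float(md), float(fa)

md, fa = md_fa(1.7, 0.3, 0.3)   # elongated tensor, e.g. coherent white matter
print(round(md, 3), round(fa, 3))
```

FA ranges from 0 (isotropic diffusion) to 1 (diffusion along a single axis), which is why subtle white-matter injury in mTBI is sought as small FA reductions in tracts such as the corpus callosum.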
Automated data processing architecture for the Gemini Planet Imager Exoplanet Survey
NASA Astrophysics Data System (ADS)
Wang, Jason J.; Perrin, Marshall D.; Savransky, Dmitry; Arriaga, Pauline; Chilcote, Jeffrey K.; De Rosa, Robert J.; Millar-Blanchaer, Maxwell A.; Marois, Christian; Rameau, Julien; Wolff, Schuyler G.; Shapiro, Jacob; Ruffio, Jean-Baptiste; Maire, Jérôme; Marchis, Franck; Graham, James R.; Macintosh, Bruce; Ammons, S. Mark; Bailey, Vanessa P.; Barman, Travis S.; Bruzzone, Sebastian; Bulger, Joanna; Cotten, Tara; Doyon, René; Duchêne, Gaspard; Fitzgerald, Michael P.; Follette, Katherine B.; Goodsell, Stephen; Greenbaum, Alexandra Z.; Hibon, Pascale; Hung, Li-Wei; Ingraham, Patrick; Kalas, Paul; Konopacky, Quinn M.; Larkin, James E.; Marley, Mark S.; Metchev, Stanimir; Nielsen, Eric L.; Oppenheimer, Rebecca; Palmer, David W.; Patience, Jennifer; Poyneer, Lisa A.; Pueyo, Laurent; Rajan, Abhijith; Rantakyrö, Fredrik T.; Schneider, Adam C.; Sivaramakrishnan, Anand; Song, Inseok; Soummer, Remi; Thomas, Sandrine; Wallace, J. Kent; Ward-Duong, Kimberly; Wiktorowicz, Sloane J.
2018-01-01
The Gemini Planet Imager Exoplanet Survey (GPIES) is a multiyear direct imaging survey of 600 stars to discover and characterize young Jovian exoplanets and their environments. We have developed an automated data architecture to process and index all data related to the survey uniformly. An automated and flexible data processing framework, which we term the Data Cruncher, combines multiple data reduction pipelines (DRPs) together to process all spectroscopic, polarimetric, and calibration data taken with GPIES. With no human intervention, fully reduced and calibrated data products are available less than an hour after the data are taken to expedite follow up on potential objects of interest. The Data Cruncher can run on a supercomputer to reprocess all GPIES data in a single day as improvements are made to our DRPs. A backend MySQL database indexes all files, which are synced to the cloud, and a front-end web server allows for easy browsing of all files associated with GPIES. To help observers, quicklook displays show reduced data as they are processed in real time, and chatbots on Slack post observing information as well as reduced data products. Together, the GPIES automated data processing architecture reduces our workload, provides real-time data reduction, optimizes our observing strategy, and maintains a homogeneously reduced dataset to study planet occurrence and instrument performance.
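The indexing idea can be sketched in a few lines, with SQLite standing in for the MySQL backend described above; the table and column names are invented, since the real GPIES schema is not given in the abstract:

```python
import sqlite3

# Minimal sketch of the indexing idea, with SQLite standing in for the MySQL
# backend described above. The table and column names are invented; the real
# GPIES schema is not given in the abstract.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE products (
    filename TEXT PRIMARY KEY, target TEXT, mode TEXT, reduced INTEGER)""")
files = [
    ("S20180101S0042.fits", "HR 8799", "spec", 1),   # already reduced
    ("S20180101S0043.fits", "HR 8799", "pol", 0),    # awaiting the pipeline
]
conn.executemany("INSERT INTO products VALUES (?, ?, ?, ?)", files)
pending = conn.execute(
    "SELECT filename FROM products WHERE reduced = 0").fetchall()
print(pending)   # files a Data-Cruncher-style pipeline still has to process
```

A database of this shape is what lets a front-end web server browse all files for a target and lets the pipeline discover which raw frames still need reduction.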
Filho, Mercedes; Ma, Zhen; Tavares, João Manuel R S
2015-11-01
In recent years, the incidence of skin cancer has risen worldwide, mainly due to prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through improvements in instrumentation and detection technology and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has fueled the need to create real-time processing algorithms that may provide a likelihood of malignancy. This last possibility allows even non-specialists to monitor and follow up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices.
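One concrete example of the "processing" stage is global Otsu thresholding to separate a dark pigmented lesion from lighter surrounding skin; the NumPy sketch below uses synthetic data and is not taken from the review:

```python
import numpy as np

# One concrete example of the "processing" stage: global Otsu thresholding
# to separate a dark pigmented lesion from lighter surrounding skin. Pure
# NumPy on synthetic data; real pipelines add pre-processing such as hair
# and artifact removal first.
def otsu_threshold(gray):
    """Return the gray level maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b))

img = np.full((64, 64), 200, dtype=np.uint8)   # light "skin"
img[20:40, 20:40] = 40                         # dark 20x20 "lesion"
t = otsu_threshold(img)
mask = img <= t                                # lesion mask
print(t, int(mask.sum()))
```

The resulting mask is the input to the quantification step: area, border irregularity and color statistics are all computed over the segmented lesion region.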
NASA Astrophysics Data System (ADS)
Khan, F.; Enzmann, F.; Kersten, M.
2015-12-01
In X-ray computed microtomography (μXCT) image processing is the most important operation prior to image analysis. Such processing mainly involves artefact reduction and image segmentation. We propose a new two-stage post-reconstruction procedure of an image of a geological rock core obtained by polychromatic cone-beam μXCT technology. In the first stage, the beam-hardening (BH) is removed applying a best-fit quadratic surface algorithm to a given image data set (reconstructed slice), which minimizes the BH offsets of the attenuation data points from that surface. The final BH-corrected image is extracted from the residual data, or the difference between the surface elevation values and the original grey-scale values. For the second stage, we propose using a least square support vector machine (a non-linear classifier algorithm) to segment the BH-corrected data as a pixel-based multi-classification task. A combination of the two approaches was used to classify a complex multi-mineral rock sample. The Matlab code for this approach is provided in the Appendix. A minor drawback is that the proposed segmentation algorithm may become computationally demanding in the case of a high dimensional training data set.
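The first stage can be sketched as an ordinary least-squares fit of a quadratic surface to the slice, with the residual taken as the BH-corrected image; the synthetic data below is an illustration, not the authors' Matlab code:

```python
import numpy as np

# Sketch of the first stage: least-squares fit of a quadratic surface
# z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2 to a reconstructed slice, with
# the residual taken as the beam-hardening-corrected image. The synthetic
# slice below (two "minerals" plus a quadratic cupping offset) is an
# illustration, not the authors' data or Matlab code.
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
xc, yc = x - nx / 2.0, y - ny / 2.0
bh = 0.002 * (xc ** 2 + yc ** 2)                       # cupping-like BH offset
truth = np.where(xc ** 2 + yc ** 2 < 200, 1.0, 0.2)    # two-phase rock slice
slice_ = truth + bh

A = np.column_stack([np.ones(xc.size), xc.ravel(), yc.ravel(),
                     (xc ** 2).ravel(), (xc * yc).ravel(), (yc ** 2).ravel()])
coef, *_ = np.linalg.lstsq(A, slice_.ravel(), rcond=None)
surface = (A @ coef).reshape(ny, nx)
corrected = slice_ - surface                           # BH-corrected residual
print(corrected.std() < slice_.std())                  # quadratic trend removed
```

The corrected (residual) image is then what goes to the second-stage classifier, here replaced in the paper by a least-squares support vector machine for per-pixel mineral labels.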
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require such an imaging system to have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed to exploit the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
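The differential-focus principle can be illustrated with a first-difference detail filter standing in for the wavelet analysis; the synthetic image below is an assumption for illustration, as the actual system works on wavelet coefficients of the projected pattern:

```python
import numpy as np

# Illustration of the differential-focus idea: with astigmatic projection
# optics, horizontal and vertical pattern features blur at different depths,
# so the ratio of horizontal to vertical detail energy encodes depth.
# A first-difference filter stands in for the wavelet analysis; the
# synthetic image has a sharp vertical edge and a blurred horizontal edge,
# mimicking one particular object depth.
def detail_energy(img, axis):
    d = np.diff(img.astype(float), axis=axis)   # first difference ~ detail band
    return float(np.sum(d ** 2))

rows = np.arange(64)
img = np.zeros((64, 64))
img[:, 32:] += 1.0                                           # sharp vertical edge
img += (1.0 / (1.0 + np.exp(-(rows - 32) / 4.0)))[:, None]   # blurred horizontal edge

h_energy = detail_energy(img, axis=1)   # responds to vertical features
v_energy = detail_energy(img, axis=0)   # responds to horizontal features
ratio = h_energy / v_energy             # > 1 here: vertical features are sharper
print(ratio > 1.0)
```

Calibrating this ratio against known target distances is what converts it into an absolute depth estimate at each transverse position.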
THELMA: a mobile app for crowdsourcing environmental data
NASA Astrophysics Data System (ADS)
Hintz, Kenneth J.; Hintz, Christopher J.; Almomen, Faris; Adounvo, Christian; D'Amato, Michael
2014-06-01
The collection of environmental light pollution data related to sea turtle nesting sites is a laborious and time consuming effort entailing the use of several pieces of measurement equipment, their transportation and calibration, the manual logging of results in the field, and subsequent transfer of the data to a computer for post-collection analysis. Serendipitously, the current generation of mobile smart phones (e.g., iPhone® 5) contains the requisite measurement capability, namely location data in aided GPS coordinates, magnetic compass heading, and elevation at the time an image is taken, image parameter data, and the image itself. The Turtle Habitat Environmental Light Measurement App (THELMA) is a mobile phone app whose graphical user interface (GUI) guides an untrained user through the image acquisition process in order to capture 360° of images with pointing guidance. It subsequently uploads the user-tagged images, all of the associated image parameters, and position, azimuth, elevation metadata to a central internet repository. Provision is also made for the capture of calibration images and the review of images before upload. THELMA allows for inexpensive, highly-efficient, worldwide crowdsourcing of calibratable beachfront lighting/light pollution data collected by untrained volunteers. This data can be later processed, analyzed, and used by scientists conducting sea turtle conservation in order to identify beach locations with hazardous levels of light pollution that may alter sea turtle behavior and necessitate human intervention after hatchling emergence.
Textured digital elevation model formation from low-cost UAV LADAR/digital image data
NASA Astrophysics Data System (ADS)
Bybee, Taylor C.; Budge, Scott E.
2015-05-01
Textured digital elevation models (TDEMs) have valuable uses in precision agriculture, situational awareness, and disaster response. However, scientific-quality models are expensive to obtain using conventional aircraft-based methods. The cost of creating an accurate textured terrain model can be reduced by using a low-cost (<$20k) UAV system fitted with ladar and electro-optical (EO) sensors. A texel camera fuses calibrated ladar and EO data upon simultaneous capture, creating a texel image. This eliminates the problem of fusing the data in a post-processing step and enables both 2D- and 3D-image registration techniques to be used. This paper describes the formation of TDEMs using simulated data from a small UAV gathering swaths of texel images of the terrain below. Because the UAV is low-cost, only coarse knowledge of position and attitude is available, and thus both 2D- and 3D-image registration techniques must be used to register adjacent swaths of texel imagery to create a TDEM. The process of creating an aggregate texel image (a TDEM) from many smaller texel image swaths is described. The algorithm is seeded with a rough estimate of the position and attitude of each capture. Details such as the required amount of texel image overlap, registration models, simulated flight patterns (level and turbulent), and texture image formation are presented. In addition, examples of such TDEMs are shown and analyzed for accuracy.
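One standard building block for the 3D-registration step is recovering the rigid transform between matched point sets by the SVD (Kabsch) method; the sketch below uses synthetic points standing in for matched texel-image features, and is not the paper's registration pipeline:

```python
import numpy as np

# Sketch of one 3D-registration building block: recover the rigid rotation
# R and translation t between matched 3D point sets with the SVD (Kabsch)
# method. Synthetic points stand in for matched texel-image features; this
# is not the paper's full registration pipeline.
def kabsch(P, Q):
    """R, t minimizing ||R @ P + t - Q|| for 3xN matched point sets."""
    p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - p0) @ (Q - q0).T                    # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q0 - R @ p0
    return R, t

rng = np.random.default_rng(2)
P = rng.uniform(-10, 10, (3, 30))                # points seen in swath A
a = np.deg2rad(10)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([[1.0], [2.0], [0.5]])
Q = R_true @ P + t_true                          # same points in swath B
R, t = kabsch(P, Q)
print(np.allclose(R, R_true) and np.allclose(t, t_true))
```

In practice the rough position/attitude seed limits the correspondence search between adjacent swaths, and a step like this refines the transform from the matched points.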
Loehfelm, Thomas W; Prater, Adam B; Debebe, Tequam; Sekhar, Aarti K
2017-02-01
We digitized the radiography teaching file at Black Lion Hospital (Addis Ababa, Ethiopia) during a recent trip, using a standard digital camera and a fluorescent light box. Our goal was to photograph every radiograph in the existing library while optimizing the final image size to the maximum resolution of a high quality tablet computer, preserving the contrast resolution of the radiographs, and minimizing total library file size. A secondary important goal was to minimize the cost and time required to take and process the images. Three workers were able to efficiently remove the radiographs from their storage folders, hang them on the light box, operate the camera, catalog the image, and repack the radiographs back to the storage folder. Zoom, focal length, and film speed were fixed, while aperture and shutter speed were manually adjusted for each image, allowing for efficiency and flexibility in image acquisition. Keeping zoom and focal length fixed, which kept the view box at the same relative position in all of the images acquired during a single photography session, allowed unused space to be batch-cropped, saving considerable time in post-processing, at the expense of final image resolution. We present an analysis of the trade-offs in workflow efficiency and final image quality, and demonstrate that a few people with minimal equipment can efficiently digitize a teaching file library.
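Because the view box stayed at a fixed pixel position, the whole session shares one crop box; a trivial sketch with arrays standing in for photographs (the crop coordinates are invented for illustration):

```python
import numpy as np

# Because zoom and focal length were fixed, the light box occupied the same
# pixel region in every frame, so one crop box can be applied to the whole
# session ("batch-cropping"). NumPy arrays stand in for the photographs;
# the crop coordinates are invented for illustration.
CROP = (slice(100, 1900), slice(400, 2600))      # rows, cols of the light box

def batch_crop(frames, crop):
    return [frame[crop] for frame in frames]

session = [np.zeros((2000, 3000), dtype=np.uint8) for _ in range(3)]
cropped = batch_crop(session, CROP)
print(cropped[0].shape, len(cropped))
```

The trade-off noted in the abstract is visible here: a fixed crop discards the same margin from every frame, trading some final resolution for a one-step batch operation instead of per-image cropping.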
Object detection in cinematographic video sequences for automatic indexing
NASA Astrophysics Data System (ADS)
Stauder, Jurgen; Chupeau, Bertrand; Oisel, Lionel
2003-06-01
This paper presents an object detection framework applied to cinematographic post-processing of video sequences. Post-processing is done after production and before editing. At the beginning of each shot of a video, a slate (also called a clapperboard) is shown. The slate notably contains an electronic audio timecode that is necessary for audio-visual synchronization. This paper presents an object detection framework to detect slates in video sequences for automatic indexing and post-processing. It is based on five steps. The first two steps aim to drastically reduce the video data to be analyzed. They ensure a high recall rate but have low precision. The first step detects images at the beginning of a shot that may show a slate, while the second step searches these images for candidate regions with a color distribution similar to slates. The objective is to miss no slates while eliminating long parts of video without slate appearances. The third and fourth steps use statistical classification and pattern matching to detect and precisely locate slates in candidate regions. These steps ensure a high recall rate and high precision. The objective is to detect slates with very few false alarms, to minimize interactive corrections. In a last step, electronic timecodes are read from the slates to automate audio-visual synchronization. The presented slate detector has a recall rate of 89% and a precision of 97.5%. By temporal integration, much more than 89% of shots in dailies are detected. By timecode coherence analysis, the precision can be raised as well. Issues for future work are to accelerate the system to be faster than real time and to extend the framework to several slate types.
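The candidate-region step can be sketched as histogram-intersection matching against a reference slate distribution; the synthetic grayscale patches and the similarity measure below are illustrative, since the paper's actual color features and thresholds are not given in the abstract:

```python
import numpy as np

# Sketch of the candidate-region step: score regions by how closely their
# gray-level histogram matches a reference slate histogram, using histogram
# intersection. Patches and the similarity measure are illustrative; the
# paper's actual color features and thresholds are not given in the abstract.
def gray_hist(img, bins=8):
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / h.sum()

def intersection(h1, h2):
    return float(np.minimum(h1, h2).sum())   # 1.0 means identical distributions

rng = np.random.default_rng(3)
slate_ref = rng.integers(0, 64, (32, 32))        # dark reference slate patch
candidate = rng.integers(0, 64, (32, 32))        # similarly dark region
background = rng.integers(128, 256, (32, 32))    # bright non-slate region

h_ref = gray_hist(slate_ref)
print(intersection(h_ref, gray_hist(candidate)) >
      intersection(h_ref, gray_hist(background)))
```

Because a histogram test is cheap and orientation-invariant, it suits an early high-recall/low-precision filter, leaving the expensive classification and pattern matching for the few surviving regions.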
NASA Astrophysics Data System (ADS)
Dutta, P. K.; Mishra, O. P.
2012-04-01
Satellite imagery of the 2011 earthquake off the Pacific coast of Tohoku has provided an opportunity to conduct image transformation analyses employing multi-temporal image retrieval techniques. In this study, we used a new image segmentation algorithm, based on a graph-cut energy-minimization framework, to image coastline deformation. Comprehensive analysis of available InSAR images using coastline deformation analysis helped extract disaster information for the affected region of the 2011 Tohoku tsunamigenic earthquake source zone. We attempted to correlate fractal analysis of seismic clustering behavior with image processing analogies, and our observations suggest that an increase in the fractal dimension distribution is associated with clustering of events that may determine the level of devastation of the region. The implementation of the graph-cut-based image registration technique helps us detect the devastation across the coastline of Tohoku through changes in pixel intensity, carrying out regional segmentation of the change in the coastal boundary after the tsunami. The study applies transformation parameters to remotely sensed images by manually segmenting the images to recover the translation parameters from two images that differ by a rotation. Based on the satellite image analysis through image segmentation, an area of 0.997 sq km in the Honshu region was found to be the maximum damage zone, localized in the coastal belt of the NE Japan forearc region. The analysis, implemented in MATLAB, suggests that the proposed graph-cut algorithm is robust and more accurate than other image registration methods, and that the method can give a realistic estimate of recovered deformation fields, in pixels, corresponding to coastline change. This may help formulate strategies for post-disaster needs assessment for coastal belts damaged by strong shaking and tsunamis worldwide under disaster risk mitigation programs.
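A much-simplified stand-in for the coastline change analysis is pixel-wise differencing of co-registered pre- and post-event land/water maps; the data below are synthetic and this is not the graph-cut implementation:

```python
import numpy as np

# Much-simplified stand-in for the coastline change analysis: pixel-wise
# change detection between co-registered pre- and post-event land/water
# maps, thresholded to flag inundated coastline pixels. Synthetic data;
# the actual study uses graph-cut energy minimization, not differencing.
pre = np.zeros((100, 100))
pre[:, 60:] = 1.0                  # land east of the pre-event coastline
post = np.zeros((100, 100))
post[:, 70:] = 1.0                 # coastline retreated by 10 pixels

change = np.abs(post - pre) > 0.5  # pixels that flipped land <-> water
area_px = int(change.sum())
print(area_px)                     # 100 rows x 10 columns = 1000 pixels
```

Converting the flagged pixel count to ground area via the sensor's pixel footprint is what yields damage-zone figures of the kind reported above.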
SU-E-J-225: CEST Imaging in Head and Neck Cancer Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Hwang, K; Fuller, C
Purpose: Chemical Exchange Saturation Transfer (CEST) imaging is an MRI technique that enables the detection and imaging of metabolically active compounds in vivo. It has been used to differentiate tumor types and metabolic characteristics. Unlike PET/CT, CEST imaging does not use isotopes, so it can be used on patients repeatedly. This study reports the preliminary results of CEST imaging in head and neck cancer (HNC) patients. Methods: A CEST imaging sequence and the associated post-processing software were developed on a 3T clinical MRI scanner. Ten patients with human papilloma virus positive oropharyngeal cancer were imaged in their immobilized treatment position. A 5 mm slice CEST image was acquired (128×128, FOV=20-24 cm) to encompass the maximum dimension of the tumor. Twenty-nine offset frequencies (from −7.8 ppm to +7.8 ppm) were acquired to obtain the Z-spectrum. Asymmetry analysis was used to extract the CEST contrasts. ROIs at the tumor, nodes and surrounding tissues were measured. Results: CEST images were successfully acquired, and Z-spectrum asymmetry analysis demonstrated clear CEST contrasts in the tumor as well as the surrounding tissues. A 3-5% CEST contrast in the range of 1 to 4 ppm was noted in tumors as well as grossly involved nodes. Injection of glucose produced a marked increase of CEST contrast in the tumor region (~10%). Motion and pulsation artifacts tend to smear the CEST contrast, making interpretation of the image contrast difficult. Field nonuniformity, pulsation in blood vessels and susceptibility artifacts caused by air cavities were also problematic for CEST imaging. Conclusion: We have demonstrated successful CEST acquisition and Z-spectrum reconstruction in HNC patients on a clinical scanner. MRI acquisition in the immobilized treatment position is critical for image quality as well as the success of CEST image acquisition.
CEST images provide novel contrast of metabolites in HNC and present great potential in the pre- and post-treatment assessment of patients undergoing radiation therapy.
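The Z-spectrum asymmetry analysis referred to above is conventionally computed as MTRasym(Δω) = (S(−Δω) − S(+Δω)) / S0. A minimal NumPy sketch of that standard calculation (the abstract does not provide the authors' code), assuming the Z-spectrum is sampled symmetrically about 0 ppm and normalized by an unsaturated reference signal S0:

```python
import numpy as np

def mtr_asymmetry(offsets_ppm, z_spectrum, s0):
    """CEST asymmetry at each positive offset:
    MTRasym(dw) = (S(-dw) - S(+dw)) / S0."""
    asym = {}
    for dw in offsets_ppm[offsets_ppm > 0]:
        s_pos = z_spectrum[np.isclose(offsets_ppm, dw)][0]
        s_neg = z_spectrum[np.isclose(offsets_ppm, -dw)][0]
        asym[float(dw)] = (s_neg - s_pos) / s0
    return asym
```

Applied pixel-wise, this yields the CEST contrast maps described in the Results.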
Wang, Monan; Zhang, Kai; Yang, Ning
2018-04-09
To help doctors base treatment decisions on mechanical analysis, this work built a computer-assisted optimization system for the treatment of femoral neck fracture, oriented to clinical application. The system comprises three parts: a pre-processing module, a finite element mechanical analysis module, and a post-processing module. The pre-processing module includes parametric modeling of the bone, the fracture face, and the fixation screws and their positions, as well as input and transmission of the model parameters. The finite element mechanical analysis module includes mesh generation, element type setting, material property setting, contact setting, constraint and load setting, analysis method setting, and batch processing. The post-processing module includes extraction and display of batch processing results, image generation from those results, execution of the optimization program and display of the optimal result. The system implements the whole workflow from the input of fracture parameters to the output of an optimal fixation plan according to a specific patient's fracture parameters and the optimization rules, demonstrating its effectiveness. The system also has a friendly interface and simple operation, and its functionality can be extended quickly by modifying individual modules.
Sahul, Zakir H.; Mukherjee, Rupak; Song, James; McAteer, Jarod; Stroud, Robert E.; Dione, Donald P.; Staib, Lawrence; Papademetris, Xenophon; Dobrucki, Lawrence W.; Duncan, James S.; Spinale, Francis G.; Sinusas, Albert J.
2011-01-01
Background Matrix metalloproteinases (MMPs) are known to modulate left ventricular (LV) remodeling after a myocardial infarction (MI). However, the temporal and spatial variation of MMP activation and its relationship to mechanical dysfunction post MI remain undefined. Methods and Results MI was surgically induced in pigs (n=23), and cine MR and dual isotope hybrid SPECT/CT imaging were obtained using thallium-201 (201Tl) and a technetium-99m labeled MMP targeted tracer (99mTc-RP805) at 1, 2 and 4 weeks post MI, along with controls (n=5). Regional myocardial strain was computed from MR images and related to MMP zymography and ex vivo myocardial 99mTc-RP805 retention. MMP activation as assessed by in vivo and ex vivo 99mTc-RP805 imaging/retention studies was increased nearly 5-fold within the infarct region at 1 week post-MI and remained elevated up to 1 month post-MI. The post-MI change in LV end-diastolic volumes was correlated with MMP activity (y = 31.34e^(0.48x), p=0.04). MMP activity was increased within the border and remote regions early post-MI, but declined over 1 month. There was a high concordance between regional 99mTc-RP805 uptake and ex vivo MMP-2 activity. Conclusions A novel, multimodality non-invasive hybrid SPECT/CT imaging approach was validated and applied for in vivo evaluation of MMP activation in combination with cine MR analysis of LV deformation. Increased 99mTc-RP805 retention was seen throughout the heart early post-MI and was not purely a reciprocal of 201Tl perfusion. 99mTc-RP805 SPECT/CT imaging may provide unique information regarding regional myocardial MMP activation and predict late post-MI LV remodeling. PMID:21505092
Post-Mortem Magnetic Resonance Imaging Appearances of Feticide in Perinatal Deaths.
Shelmerdine, Susan C; Hickson, Melissa; Sebire, Neil J; Arthurs, Owen J
2018-06-06
The aim of this study was to characterise the imaging features seen in fetuses having undergone feticide by intracardiac potassium chloride injection compared to those of non-terminated fetuses at post-mortem magnetic resonance imaging (PMMRI). A case-control study was performed comparing PMMRI findings between two groups of patients: those having undergone feticide were matched to a control group of miscarried/stillborn fetuses. The groups were matched according to gestational age, weight, and time since death. Two independent readers reviewed the PMMRI for thoracic, abdominal, and musculoskeletal imaging features. Fisher's exact test was used to assess differences between the patient groups. Twenty-six cases of feticide (mean gestation 25 weeks [20-36]) and 75 non-terminated fetuses (mean gestation 26.7 weeks [19-36]) were compared. There was a higher proportion of feticide cases demonstrating pneumothorax (23.1 vs. 1.3%, p = 0.001), haemothorax (42.3 vs. 4%, p = 0.001), pneumopericardium (30.8 vs. 5.3%, p = 0.002), and haemopericardium (34.6 vs. 0%, p = 0.0001). Intracardiac gas and intra-abdominal findings were more frequent in the feticide group, but the differences were not statistically significant. Characteristic PMMRI features of feticide can help improve reporter confidence in differentiating iatrogenic from physiological/pathological processes. © 2018 S. Karger AG, Basel.
MRA of the skin: mapping for advanced breast reconstructive surgery.
Thimmappa, N D; Vasile, J V; Ahn, C Y; Levine, J L; Prince, M R
2018-02-27
Autologous breast reconstruction using muscle-sparing free flaps is becoming increasingly popular, although microvascular free flap reconstruction has been used for autologous breast reconstruction for more than 20 years. This microsurgical technique involves meticulous dissection of the artery-vein bundles (perforators) responsible for perfusion of the subcutaneous fat and skin of the flap; because of unpredictable anatomical variation, preoperative imaging of the donor site to select appropriate perforators has become routine. Preoperative imaging also reduces operating time and enhances the surgeon's confidence in choosing the appropriate donor site for harvesting flaps. Although computed tomography angiography has been widely used for preoperative imaging, concerns over exposure to ionising radiation and poor iodinated contrast enhancement of the intramuscular perforator course have made magnetic resonance angiography the first-choice imaging modality in our centre. Magnetic resonance angiography with specific post-processing of the images has established itself as a reliable method for mapping tiny perforator vessels. Multiple donor sites can be imaged in a single session without concern for ionising radiation exposure. This provides anatomical information on more donor-site options, so that a surgeon can design a flap of tissue centred on the best perforator, with a back-up perforator and even a back-up flap option located on a different region of the body. This information is especially helpful in patients with scar tissue from previous surgery, where the primary-choice perforator may prove damaged or unsuitable intraoperatively. In addition, chest magnetic resonance angiography evaluates recipient-site vessel suitability, including vessel diameters, course, and branching patterns.
In this article we provide a broad overview of various skin flaps, clinical indications, advantages and disadvantages of each of these flaps, basic imaging technique, along with advanced sequences for visualising tiny arteries in the groin and in the chest. Post-processing techniques, structure of the report and how automation of the reporting system improves workflow is described. We also describe applications of magnetic resonance angiography in postoperative imaging. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure for UltraCam aerial camera systems. Results from laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. In this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and from the UltraCam Osprey, our oblique camera system, which offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details of that workflow as well. The first part is the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.
Application of High Speed Digital Image Correlation in Rocket Engine Hot Fire Testing
NASA Technical Reports Server (NTRS)
Gradl, Paul R.; Schmidt, Tim
2016-01-01
Hot fire testing of rocket engine components and rocket engine systems is a critical part of the development process for understanding performance, reliability and system interactions. Ground testing provides the opportunity for highly instrumented development testing to validate analytical model predictions and determine necessary design changes and process improvements. To properly obtain discrete measurements for model validation, instrumentation must survive the highly dynamic and extreme-temperature environment of hot fire testing. Digital Image Correlation has been investigated and is being evaluated as a technique to augment traditional instrumentation during component and engine testing, providing additional data for performance improvements and cost savings. The feasibility of digital image correlation techniques was demonstrated in subscale and full-scale hot fire testing, using a pair of high-speed cameras, installed and operated under the extreme environments present on the test stand, to measure three-dimensional, real-time displacements and strains. The development process, setup and calibration, data collection during hot fire testing, and post-test analysis and results are presented in this paper.
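At its core, digital image correlation tracks small subsets of a speckle pattern from a reference image into a deformed image. The paper describes a commercial stereo system; as a generic illustration only, here is a minimal integer-pixel subset matcher using zero-normalized cross-correlation (real DIC systems add sub-pixel refinement and stereo triangulation, omitted here; all names are ours):

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a**2).sum() * (b**2).sum()) + 1e-12))

def match_subset(ref, deformed, top_left, size, search=5):
    """Find the integer-pixel displacement of a reference subset by
    maximizing ZNCC over a small search window in the deformed image."""
    r, c = top_left
    template = ref[r:r + size, c:c + size]
    best_score, best_disp = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            rr, cc = r + dr, c + dc
            if (rr < 0 or cc < 0 or
                    rr + size > deformed.shape[0] or
                    cc + size > deformed.shape[1]):
                continue
            score = zncc(template, deformed[rr:rr + size, cc:cc + size])
            if score > best_score:
                best_score, best_disp = score, (dr, dc)
    return best_disp, best_score
```

Repeating this over a grid of subsets yields the full-field displacement map from which strains are computed.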
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that need minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generate continuously imaged volumes > 10^6 µm^3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.
Automatic segmentation of cortical vessels in pre- and post-tumor resection laser range scan images
NASA Astrophysics Data System (ADS)
Ding, Siyi; Miga, Michael I.; Thompson, Reid C.; Garg, Ishita; Dawant, Benoit M.
2009-02-01
Measurement of intra-operative cortical brain movement is necessary to drive mechanical models developed to predict sub-cortical shift. At our institution, this is done with a tracked laser range scanner, which acquires both 3D range data and 2D photographic images. 3D cortical brain movement can be estimated if the 2D photographic images acquired over time can be registered. Previously, we developed a method that permits this registration using vessels visible in the images, but vessel segmentation required localization of the starting and ending points of each vessel segment. Here, we propose a method that further automates the segmentation process. The method involves several steps: (1) correction of lighting artifacts, (2) vessel enhancement, and (3) extraction of vessel centerlines. Results obtained on five images acquired in the operating room suggest that our method is robust and able to segment vessels reliably.
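The three-step pipeline above can be illustrated with standard library tools. The sketch below substitutes generic choices for the authors' unspecified implementations: Gaussian background subtraction for lighting correction, the Frangi vesselness filter for vessel enhancement, and Otsu thresholding plus morphological skeletonization for centerline extraction:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import frangi, threshold_otsu
from skimage.morphology import skeletonize

def extract_vessel_centerlines(gray_image, background_sigma=15):
    """Illustrative three-step pipeline: lighting correction,
    vessel enhancement, centerline extraction."""
    img = gray_image.astype(float)
    # (1) Correct slowly varying illumination by subtracting a smooth background
    corrected = img - gaussian_filter(img, sigma=background_sigma)
    # (2) Enhance dark elongated structures with the Frangi vesselness filter
    vesselness = frangi(corrected)
    # (3) Threshold the vesselness map and thin it to one-pixel centerlines
    mask = vesselness > threshold_otsu(vesselness)
    return skeletonize(mask)
```

On a synthetic image with a single dark line, the returned skeleton concentrates along that line.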
Crater monitoring through social media observations
NASA Astrophysics Data System (ADS)
Gialampoukidis, I.; Vrochidis, S.; Kompatsiaris, I.
2017-09-01
We have collected, on average, more than one lunar image every two days from social media observations. Each of the collected images has been clustered into one of two main groups of lunar images, with an additional cluster (noise) for pictures that were not assigned to any cluster. The proposed lunar image clustering process yields two classes of lunar pictures at different zoom levels: the first shows a clear view of craters grouped into one cluster, and the second a complete view of the Moon at various phases, which are correlated with the crawling date. The clustering stage is unsupervised, so new topics can be detected on the fly. We have provided additional sources of planetary images using crowdsourced information, which is associated with metadata such as time, text, location, links to other users and other related posts. This content carries crater information that can be fused with other planetary data to enhance crater monitoring.
Magnetic Resonance Imaging of Surgical Implants Made from Weak Magnetic Materials
NASA Astrophysics Data System (ADS)
Gogola, D.; Krafčík, A.; Štrbák, O.; Frollo, I.
2013-08-01
Materials with high magnetic susceptibility cause local inhomogeneities in the main field of a magnetic resonance (MR) tomograph. These inhomogeneities lead to loss of phase coherence, and thus to rapid signal loss in the image. In our research we investigated the inhomogeneous field of magnetic implants such as magnetic fibers designed for internal sutures in surgery. The magnetic field inhomogeneities were studied with a weakly magnetic planar phantom made from four thin strips of magnetic tape arranged grid-wise. We optimized the imaging sequence properties to find the best setup for visualizing the magnetic fibers. These fibers could potentially be used in surgery for internal stitches, which could then be visualized by magnetic resonance imaging (MRI) after surgery. This study shows that imaging of such magnetic implants is possible using low-field MRI systems, without complicated post-processing techniques (e.g., IDEAL).
Introducing PLIA: Planetary Laboratory for Image Analysis
NASA Astrophysics Data System (ADS)
Peralta, J.; Hueso, R.; Barrado, N.; Sánchez-Lavega, A.
2005-08-01
We present a graphical software tool developed in IDL to navigate, process and analyze planetary images. The software has a complete graphical user interface and is cross-platform; it can also run under the IDL Virtual Machine without the need to own an IDL license. The included tools allow image navigation (orientation, centring and automatic limb determination), dynamical and photometric atmospheric measurements (winds and cloud albedos), cylindrical and polar projections, and image treatment with several procedures. Being written in IDL, it is modular and easy to modify and extend with new capabilities. We show several examples of the software's capabilities with Galileo-Venus observations: image navigation, photometric corrections, wind profiles obtained by cloud tracking, cylindrical projections and cloud photometric measurements. Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.
Schlieren technique in soap film flows
NASA Astrophysics Data System (ADS)
Auliel, M. I.; Hebrero, F. Castro; Sosa, R.; Artana, G.
2017-05-01
We propose the use of the Schlieren technique as a tool to analyse the flows in soap film tunnels. The technique enables visualization of perturbations of the film produced by the interposition of an object in the flow. The variations in image intensity are produced by deviations of the light beam traversing the deformed surfaces of the film. The quality of the Schlieren image is compared with images produced by the conventional interferometric technique. Analysis of Schlieren images of a cylinder wake flow indicates that the technique enables easy visualization of vortex centers. Post-processing of pairs of successive images of a grid turbulent flow with a dense motion estimator is used to derive the velocity fields. The results obtained with this self-seeded flow show good agreement with the statistical properties of 2D turbulent flows reported in the literature.
Fast image interpolation for motion estimation using graphics hardware
NASA Astrophysics Data System (ADS)
Kelly, Francis; Kokaram, Anil
2004-05-01
Motion estimation and compensation are key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and in post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding; however, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms, and it can be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
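Sub-pixel block matching needs image samples between pixel centres; bilinear interpolation is the usual choice, and is exactly the operation graphics hardware accelerates in its texture units. A plain CPU reference in NumPy (the paper's contribution is mapping this operation to the GPU, not this code):

```python
import numpy as np

def bilinear(image, y, x):
    """Sample a 2D image at fractional coordinates (y, x) by
    bilinear interpolation, clamping at the borders."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, image.shape[0] - 1)
    x1 = min(x0 + 1, image.shape[1] - 1)
    fy, fx = y - y0, x - x0
    # Blend horizontally on the two bracketing rows, then vertically
    top = (1 - fx) * image[y0, x0] + fx * image[y0, x1]
    bot = (1 - fx) * image[y1, x0] + fx * image[y1, x1]
    return (1 - fy) * top + fy * bot
```

Evaluating a candidate motion vector at, say, half-pixel precision means computing this for every pixel of the block, which is why interpolation dominates the cost of sub-pixel full-search matching.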
CBCT Post-Processing Tools to Manage the Progression of Invasive Cervical Resorption: A Case Report.
Vasconcelos, Karla de Faria; de-Azevedo-Vaz, Sergio Lins; Freitas, Deborah Queiroz; Haiter-Neto, Francisco
2016-01-01
This case report aimed to highlight the usefulness of cone beam computed tomography (CBCT) and its post-processing tools for the diagnosis, follow-up and treatment planning of invasive cervical resorption (ICR). A 16-year-old female patient was referred for periapical radiographic examination, which revealed an irregular but well demarcated radiolucency in the mandibular right central incisor. In addition, CBCT scanning was performed to distinguish between ICR and internal root resorption. After the diagnosis of ICR, the patient was advised to return shortly but did so only six years later. At that time, another CBCT scan was performed and CBCT registration and subtraction were done to document lesion progress. These imaging tools were able to show lesion progress and extent clearly and were fundamental for differential diagnosis and treatment decision.
Gonzalez, Jean; Roman, Manuela; Hall, Michael; Godavarty, Anuradha
2012-01-01
Hand-held near-infrared (NIR) optical imagers have been developed by various researchers for non-invasive clinical breast imaging. Unlike existing imagers that can perform only reflectance imaging, a generation-2 (Gen-2) hand-held optical imager has recently been developed to perform both reflectance and transillumination imaging. The unique forked design of the hand-held probe heads allows for reflectance imaging (as in ultrasound) and transillumination or compressed imaging (as in X-ray mammography). Phantom studies were performed to demonstrate two-dimensional (2D) target detection via reflectance and transillumination imaging at various target depths (1-5 cm) using a simultaneous multiple-point illumination approach. Targets of 0.45 cc were detected up to 5 cm deep during transillumination, but only up to 2.5 cm deep during reflectance imaging. Additionally, applying appropriate data post-processing techniques along with a polynomial fitting approach to plot 2D surface contours of the detected signal yields distinct target detectability and localization. The ability of the Gen-2 imager to perform both reflectance and transillumination imaging allows its direct comparison with ultrasound and X-ray mammography results, respectively, in future clinical breast imaging studies.
Tang, Tien T.; Rendon, David A.; Zawaski, Janice A.; Afshar, Solmaz F.; Kaffes, Caterina K.; Sabek, Omaima M.
2017-01-01
Positron emission tomography using 18F-fluoro-deoxy-glucose (18F-FDG) is a useful tool to detect regions of inflammation in patients. We utilized this imaging technique to investigate the kinetics of gastrointestinal recovery after radiation exposure and the role of bone marrow in the recovery process. Male Sprague-Dawley rats were either sham irradiated, irradiated with their upper half body shielded (UHBS) at a dose of 7.5 Gy, or whole body irradiated (WBI) with 4 or 7.5 Gy. Animals were imaged using 18F-FDG PET/CT at 5, 10 and 35 days post-radiation exposure. The gastrointestinal tract and bone marrow were analyzed for 18F-FDG uptake. Tissue was collected at all time points for histological analysis. Following 7.5 Gy irradiation, there was a significant increase in inflammation in the gastrointestinal tract, as indicated by the significantly higher 18F-FDG uptake compared to sham. UHBS animals had a significantly higher activity compared to 7.5 Gy WBI at 5 days post-exposure. Animals that received 4 Gy WBI did not show any significant increase in uptake compared to sham. Analysis of the bone marrow showed a significant decrease in uptake in the 7.5 Gy animals 5 days post-irradiation, which was not observed in the 4 Gy group. Interestingly, as the metabolic activity of the gastrointestinal tract returned to sham levels in UHBS animals, it was accompanied by an increase in metabolic activity in the bone marrow. At 35 days post-exposure, both gastrointestinal tract and bone marrow 18F-FDG uptake had returned to sham levels. 18F-FDG imaging is a tool that can be used to study the inflammatory response of the gastrointestinal tract and changes in bone marrow metabolism caused by radiation exposure. The recovery of the gastrointestinal tract coincides with an increase in bone marrow metabolism in partially shielded animals.
These findings further demonstrate the relationship between the gastrointestinal syndrome and bone marrow recovery, and that this interaction can be studied using non-invasive imaging modalities. PMID:28052129
Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.
Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter
Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images with various tools and techniques to generate 3D renderings for applications such as surgical planning, medical education, and volumetric measurement. To save and store these visualizations, current systems use snapshots or video export, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of storing parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering, and the important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases that require multiple segmentations and visualizations during the radiologists' workflow. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
The Quanta Image Sensor: Every Photon Counts
Fossum, Eric R.; Ma, Jiaju; Masoodian, Saleh; Anzagira, Leo; Zizza, Rachel
2016-01-01
The Quanta Image Sensor (QIS) was conceived when contemplating shrinking pixel sizes and storage capacities, and the steady increase in digital processing power. In the single-bit QIS, the output of each field is a binary bit plane, where each bit represents the presence or absence of at least one photoelectron in a photodetector. A series of bit planes is generated through high-speed readout, and a kernel or “cubicle” of bits (x, y, t) is used to create a single output image pixel. The size of the cubicle can be adjusted post-acquisition to optimize image quality. The specialized sub-diffraction-limit photodetectors in the QIS are referred to as “jots” and a QIS may have a gigajot or more, read out at 1000 fps, for a data rate exceeding 1 Tb/s. Basically, we are trying to count photons as they arrive at the sensor. This paper reviews the QIS concept and its imaging characteristics. Recent progress towards realizing the QIS for commercial and scientific purposes is discussed. This includes implementation of a pump-gate jot device in a 65 nm CIS BSI process yielding read noise as low as 0.22 e− r.m.s. and conversion gain as high as 420 µV/e−, power efficient readout electronics, currently as low as 0.4 pJ/b in the same process, creating high dynamic range images from jot data, and understanding the imaging characteristics of single-bit and multi-bit QIS devices. The QIS represents a possible major paradigm shift in image capture. PMID:27517926
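The cubicle summation that turns single-bit jot readouts into an output image pixel can be sketched directly. This is a NumPy illustration of the (x, y, t) binning described above, assuming a stack of binary bit planes ordered (t, y, x); the function name and trimming policy are ours, not the paper's:

```python
import numpy as np

def cubicle_sum(bit_planes, kx, ky, kt):
    """Sum single-bit jot data over (kx, ky, kt) cubicles to form
    output pixels. bit_planes has shape (t, y, x) with values 0/1."""
    t, h, w = bit_planes.shape
    # Trim so the stack tiles evenly into cubicles
    trimmed = bit_planes[:t - t % kt, :h - h % ky, :w - w % kx]
    return trimmed.reshape(trimmed.shape[0] // kt, kt,
                           trimmed.shape[1] // ky, ky,
                           trimmed.shape[2] // kx, kx).sum(axis=(1, 3, 5))
```

Because the kernel sizes are ordinary function arguments, the cubicle can be re-chosen after acquisition to trade spatial and temporal resolution for signal, which is the post-acquisition flexibility the abstract highlights.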
NASA Astrophysics Data System (ADS)
Zhu, Yunqiang; Zhu, Huazhong; Lu, Heli; Ni, Jianguang; Zhu, Shaoxia
2005-10-01
Remote sensing dynamic monitoring of land use can detect change information and update the current land-use map, which is important for the rational utilization and scientific management of land resources. This paper discusses the technological procedure of remote sensing dynamic monitoring of land use, including the processing of remote sensing images, the extraction of annual land-use change information, field survey, indoor post-processing and accuracy assessment. In particular, we emphasize comparative research on the choice of remote sensing rectification models, image fusion algorithms and accuracy assessment methods. Taking Anning district in Lanzhou as an example, we extract the land-use change information of the district during 2002-2003, assess the monitoring accuracy and analyze the reasons for land-use change.
Examining Students' Intended Image on Facebook: "What Were They Thinking?!"
ERIC Educational Resources Information Center
Peluchette, Joy; Karl, Katherine
2010-01-01
The present article examines factors that influence why students post information on their social network profiles that employers would find inappropriate. Results show that many students make a conscious attempt to portray a particular image and, as predicted, their intended image was related to whether they posted inappropriate information.…
NASA Astrophysics Data System (ADS)
Dumpuri, Prashanth; Clements, Logan W.; Li, Rui; Waite, Jonathan M.; Stefansic, James D.; Geller, David A.; Miga, Michael I.; Dawant, Benoit M.
2009-02-01
Preoperative planning combined with image-guidance has shown promise towards increasing the accuracy of liver resection procedures. The purpose of this study was to validate one such preoperative planning tool for four patients undergoing hepatic resection. Preoperative computed tomography (CT) images acquired before surgery were used to identify tumor margins and to plan the surgical approach for resection of these tumors. Surgery was then performed with intraoperative digitization data acquired by an FDA-approved image-guided liver surgery system (Pathfinder Therapeutics, Inc., Nashville, TN). Within 5-7 days after surgery, post-operative CT image volumes were acquired. Registration of data within a common coordinate reference was achieved and preoperative plans were compared to the postoperative volumes. Semi-quantitative comparisons are presented in this work, and preliminary results indicate that significant liver regeneration/hypertrophy may be present in the postoperative CT images. This could challenge pre-/post-operative CT volume-change comparisons as a means to evaluate the accuracy of preoperative surgical plans.
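The volume-change comparison that regeneration/hypertrophy complicates reduces to counting segmented voxels scaled by voxel spacing. A minimal sketch, assuming binary segmentation masks and isotropic 1 mm spacing (both hypothetical; the study's registration and comparison pipeline is more involved):

```python
import numpy as np

def volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation in millilitres, given voxel
    spacing (dz, dy, dx) in millimetres."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.sum()) * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Toy pre- and post-operative liver masks on a 10^3 grid.
pre = np.zeros((10, 10, 10), dtype=bool);  pre[2:8, 2:8, 2:8] = True
post = np.zeros((10, 10, 10), dtype=bool); post[2:9, 2:8, 2:8] = True
spacing = (1.0, 1.0, 1.0)  # mm, assumed isotropic
growth = (volume_ml(post, spacing) - volume_ml(pre, spacing)) / volume_ml(pre, spacing)
```

If `growth` is substantially positive, apparent "resection error" in a pre/post volume comparison may actually be regeneration, which is the confound the abstract points out.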
2018-01-01
Background Structural and functional brain images are essential imaging modalities for medical experts to study brain anatomy. These images are typically visually inspected by experts. To analyze images without any bias, they must be first converted to numeric values. Many software packages are available to process the images, but they are complex and difficult to use. The software packages are also hardware intensive. The results obtained after processing vary depending on the native operating system used and its associated software libraries; data processed in one system cannot typically be combined with data on another system. Objective The aim of this study was to fulfill the neuroimaging community’s need for a common platform to store, process, explore, and visualize their neuroimaging data and results using Neuroimaging Web Services Interface: a series of processing pipelines designed as a cyber-physical system for neuroimaging and clinical data in brain research. Methods Neuroimaging Web Services Interface accepts magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and functional magnetic resonance imaging. These images are processed using existing and custom software packages. The output is then stored as image files, tabulated files, and MySQL tables. The system, made up of a series of interconnected servers, is password-protected, is securely accessible through a Web interface, and allows (1) visualization of results and (2) downloading of tabulated data. Results All results were obtained using our processing servers in order to maintain data validity and consistency. The design is responsive and scalable. The processing pipeline started from a FreeSurfer reconstruction of structural magnetic resonance imaging images.
The FreeSurfer and regional standardized uptake value ratio calculations were validated using Alzheimer’s Disease Neuroimaging Initiative input images, and the results were posted at the Laboratory of Neuro Imaging data archive. Leading researchers in the field of Alzheimer’s Disease and epilepsy have used the interface to access and process the data and visualize the results. Tabulated results with unique visualization mechanisms help guide more informed diagnosis and expert rating, providing a truly unique multimodal imaging platform that combines magnetic resonance imaging, positron emission tomography, diffusion tensor imaging, and resting-state functional magnetic resonance imaging. A quality control component was reinforced through expert visual rating involving at least 2 experts. Conclusions To our knowledge, there is no validated Web-based system offering all the services that Neuroimaging Web Services Interface offers. The intent of Neuroimaging Web Services Interface is to create a tool for clinicians and researchers with a keen interest in multimodal neuroimaging. More importantly, Neuroimaging Web Services Interface significantly augments the Alzheimer’s Disease Neuroimaging Initiative data, especially since our data contain a large cohort of Hispanic normal controls and Alzheimer’s Disease patients. The obtained results could be scrutinized visually or through the tabulated forms, informing researchers on subtle changes that characterize the different stages of the disease. PMID:29699962
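The regional standardized uptake value ratio (SUVR) mentioned above is, in its simplest form, the mean PET uptake in a target region divided by the mean uptake in a reference region, with regions taken from a label image such as a FreeSurfer parcellation. A minimal sketch (the label IDs and reference choice here are hypothetical, not the platform's actual configuration):

```python
import numpy as np

def regional_suvr(pet, labels, target_ids, reference_id):
    """Mean PET uptake over the target region(s) divided by mean uptake
    in a reference region (e.g., cerebellar gray), both selected from a
    co-registered integer label image."""
    target = np.isin(labels, target_ids)
    reference = labels == reference_id
    return float(pet[target].mean() / pet[reference].mean())

# Toy co-registered PET and label arrays (flattened voxels).
pet = np.array([2.0, 2.2, 1.8, 1.0, 1.0, 1.0])
labels = np.array([1, 1, 1, 99, 99, 99])  # 1 = target cortex, 99 = reference
suvr = regional_suvr(pet, labels, target_ids=[1], reference_id=99)
```

Tabulating such ratios per region and per subject is what turns the images into the numeric values the abstract says are needed for unbiased analysis.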
Kim, Tae Kyoung; Khalili, Korosh; Jang, Hyun-Jung
2015-01-01
A successful program for local ablation therapy for hepatocellular carcinoma (HCC) requires extensive imaging support for diagnosis and localization of HCC, imaging guidance for the ablation procedures, and post-treatment monitoring. Contrast-enhanced ultrasonography (CEUS) has several advantages over computed tomography/magnetic resonance imaging (CT/MRI), including real-time imaging capability, sensitive detection of arterial-phase hypervascularity and washout, no renal excretion, no ionizing radiation, repeatability, excellent patient compliance, and relatively low cost. CEUS is useful for image guidance for isoechoic lesions. While contrast-enhanced CT/MRI is the standard method for the diagnosis of HCC and post-ablation monitoring, CEUS is useful when CT/MRI findings are indeterminate or CT/MRI is contraindicated. This article provides a practical review of the role of CEUS in imaging algorithms for pre- and post-ablation therapy for HCC. PMID:26169081
Cell-phone-based platform for biomedical device development and education applications.
Smith, Zachary J; Chu, Kaiqin; Espenson, Alyssa R; Rahimzadeh, Mehdi; Gryshuk, Amy; Molinaro, Marco; Dwyre, Denis M; Lane, Stephen; Matthews, Dennis; Wachsmann-Hogiu, Sebastian
2011-03-02
In this paper we report the development of two attachments to a commercial cell phone that transform the phone's integrated lens and image sensor into a 350× microscope and visible-light spectrometer. The microscope is capable of transmission and polarized microscopy modes and is shown to have 1.5 micron resolution and a usable field-of-view of 150×150 with no image processing, and approximately 350×350 when post-processing is applied. The spectrometer has a 300 nm bandwidth with a limiting spectral resolution of close to 5 nm. We show applications of the devices to medically relevant problems. In the case of the microscope, we image both stained and unstained blood-smears showing the ability to acquire images of similar quality to commercial microscope platforms, thus allowing diagnosis of clinical pathologies. With the spectrometer we demonstrate acquisition of a white-light transmission spectrum through diffuse tissue as well as the acquisition of a fluorescence spectrum. We also envision the devices to have immediate relevance in the educational field. PMID:21399693
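The reported 1.5 micron resolution can be sanity-checked against the Rayleigh criterion, d = 0.61λ/NA. The wavelength and numerical aperture below are assumed values chosen for illustration, not figures from the paper:

```python
def rayleigh_resolution_um(wavelength_nm, numerical_aperture):
    """Rayleigh lateral resolution d = 0.61 * lambda / NA, in microns."""
    return 0.61 * wavelength_nm / numerical_aperture / 1000.0

# Assuming green light (550 nm) and an effective NA of ~0.22 for the
# cell-phone lens attachment, the diffraction limit works out to ~1.5 um,
# consistent with the resolution the paper reports.
d = rayleigh_resolution_um(550, 0.22)
```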