Graphical user interface for image acquisition and processing
Goldberg, Kenneth A.
2002-01-01
An event-driven, GUI-based image acquisition interface for the IDL programming environment is presented, designed for CCD camera control and image acquisition directly into the IDL environment, where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge-coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.
Accelerated dynamic EPR imaging using fast acquisition and compressive recovery
NASA Astrophysics Data System (ADS)
Ahmad, Rizwan; Samouilov, Alexandre; Zweier, Jay L.
2016-12-01
Electron paramagnetic resonance (EPR) allows quantitative imaging of tissue redox status, which provides important information about ischemic syndromes, cancer and other pathologies. For continuous wave EPR imaging, however, poor signal-to-noise ratio and low acquisition efficiency limit its ability to image dynamic processes in vivo including tissue redox, where conditions can change rapidly. Here, we present a data acquisition and processing framework that couples fast acquisition with compressive sensing-inspired image recovery to enable EPR-based redox imaging with high spatial and temporal resolutions. The fast acquisition (FA) allows collecting more, albeit noisier, projections in a given scan time. The composite-regularization-based processing method, called spatio-temporal adaptive recovery (STAR), not only exploits sparsity in multiple representations of the spatio-temporal image but also adaptively adjusts the regularization strength for each representation based on its inherent level of sparsity. As a result, STAR adjusts to the disparity in the level of sparsity across multiple representations, without introducing any tuning parameter. Our simulation and phantom imaging studies indicate that a combination of fast acquisition and STAR (FASTAR) enables high-fidelity recovery of volumetric image series, with each volumetric image employing less than 10 s of scan time. In addition to image fidelity, the time constants derived from FASTAR also match the ground truth closely even when a small number of projections are used for recovery. This development will enhance the capability of EPR to study fast dynamic processes that cannot be investigated using existing EPR imaging techniques.
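The STAR recovery above is a composite-regularization method tailored to EPR projections; as a rough illustration of the underlying idea only (sparsity-promoting recovery from few noisy measurements), the sketch below solves a generic compressed-sensing problem with iterative soft thresholding (ISTA). The forward matrix, the single l1 penalty and the fixed weight lam are simplifying assumptions and are not part of FASTAR.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Recover a sparse x from y ~= A @ x via iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-fidelity gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of 0.5 * ||A x - y||^2
        z = x - grad / L                    # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# toy example: 40 random projections of a 10-sparse, length-100 signal
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[rng.choice(100, 10, replace=False)] = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y)
print("relative reconstruction error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```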
Combined Acquisition/Processing For Data Reduction
NASA Astrophysics Data System (ADS)
Kruger, Robert A.
1982-01-01
Digital image processing systems necessarily consist of three components: acquisition, storage/retrieval and processing. The acquisition component requires the greatest data handling rates. By coupling together the acquisition with some online hardwired processing, data rates and capacities for short-term storage can be reduced. Furthermore, long-term storage requirements can be reduced further by appropriate processing and editing of image data contained in short-term memory. The net result could be reduced performance requirements for mass storage, processing and communication systems. Reduced amounts of data should also speed later data analysis and diagnostic decision making.
A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer
NASA Astrophysics Data System (ADS)
Luckman, Adrian J.; Allinson, Nigel M.
1989-03-01
A low-cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn RISC Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly through pop-up menus.
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1990-01-01
Researchers are concerned with the end-to-end performance of image gathering, coding, and processing. The applications range from high-resolution television to vision-based robotics, wherever the resolution, efficiency and robustness of visual information acquisition and processing are critical. For the presentation at this workshop, it is convenient to divide research activities into the following two overlapping areas: The first is the development of focal-plane processing techniques and technology to effectively combine image gathering with coding, with an emphasis on low-level vision processing akin to the retinal processing in human vision. The approach includes the familiar Laplacian pyramid, the new intensity-dependent spatial summation, and parallel sensing/processing networks. Three-dimensional image gathering is attained by combining laser ranging with sensor-array imaging. The second is the rigorous extension of information theory and optimal filtering to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing.
Research on remote sensing image pixel attribute data acquisition method in AutoCAD
NASA Astrophysics Data System (ADS)
Liu, Xiaoyang; Sun, Guangtong; Liu, Jun; Liu, Hui
2013-07-01
The remote sensing image has been widely used in AutoCAD, but AutoCAD lack of the function of remote sensing image processing. In the paper, ObjectARX was used for the secondary development tool, combined with the Image Engine SDK to realize remote sensing image pixel attribute data acquisition in AutoCAD, which provides critical technical support for AutoCAD environment remote sensing image processing algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castillo, S; Castillo, R; Castillo, E
2014-06-15
Purpose: Artifacts arising from the 4D CT acquisition and post-processing methods add systematic uncertainty to the treatment planning process. We propose an alternate cine 4D CT acquisition and post-processing method to consistently reduce artifacts, and explore patient parameters indicative of image quality. Methods: In an IRB-approved protocol, 18 patients with primary thoracic malignancies received a standard cine 4D CT acquisition followed by an oversampling 4D CT that doubled the number of images acquired. A second cohort of 10 patients received the clinical 4D CT plus 3 oversampling scans for intra-fraction reproducibility. The clinical acquisitions were processed by the standard phase sorting method. The oversampling acquisitions were processed using Dijkstra's algorithm to optimize an artifact metric over available image data. Image quality was evaluated with a one-way mixed ANOVA model using a correlation-based artifact metric calculated from the final 4D CT image sets. Spearman correlations and a linear mixed model tested the association between breathing parameters, patient characteristics, and image quality. Results: The oversampling 4D CT scans reduced artifact presence significantly by 27% and 28%, for the first cohort and second cohort respectively. From cohort 2, the inter-replicate deviation for the oversampling method was within approximately 13% of the cross scan average at the 0.05 significance level. Artifact presence for both clinical and oversampling methods was significantly correlated with breathing period (ρ=0.407, p-value<0.032 clinical, ρ=0.296, p-value<0.041 oversampling). Artifact presence in the oversampling method was significantly correlated with the amount of data acquired (ρ=-0.335, p-value<0.02), indicating decreased artifact presence with increased breathing cycles per scan location. Conclusion: The 4D CT oversampling acquisition with optimized sorting reduced artifact presence significantly and reproducibly compared to the phase-sorted clinical acquisition.
Jadidi, Masoud; Båth, Magnus; Nyrén, Sven
2018-04-09
To compare the quality of images obtained with two protocols with different acquisition times, and the influence of image post-processing, in a chest digital tomosynthesis (DTS) system. 20 patients with suspected lung cancer were imaged with chest X-ray equipment with a tomosynthesis option. Two examination protocols with different acquisition times (6.3 and 12 s) were performed on each patient. Both protocols were presented with two different image post-processing options (standard DTS processing and more advanced processing optimised for chest radiography). Thus, 4 series from each patient, altogether 80 series, were presented anonymously and in a random order. Five observers rated the quality of the reconstructed section images according to predefined quality criteria in three different classes. Visual grading characteristics (VGC) was used to analyse the data, and the area under the VGC curve (AUC_VGC) was used as the figure-of-merit. The 12 s protocol and the standard DTS processing were used as references in the analyses. The protocol with 6.3 s acquisition time had a statistically significant advantage over the vendor-recommended protocol with 12 s acquisition time for the classes of criteria Demarcation (AUC_VGC = 0.56, p = 0.009) and Disturbance (AUC_VGC = 0.58, p < 0.001). A similar value of AUC_VGC was found also for the class Structure (definition of bone structures in the spine) (0.56) but it could not be statistically separated from 0.5 (p = 0.21). For the image processing, the VGC analysis showed a small but statistically significant advantage for the standard DTS processing over the more advanced processing for the classes of criteria Demarcation (AUC_VGC = 0.45, p = 0.017) and Disturbance (AUC_VGC = 0.43, p = 0.005). A similar value of AUC_VGC was found also for the class Structure (0.46), but it could not be statistically separated from 0.5 (p = 0.31). The study indicates that the protocol with 6.3 s acquisition time yields slightly better image quality than the vendor-recommended protocol with 12 s acquisition time for several anatomical structures. Furthermore, the standard gradation processing (the vendor-recommended post-processing for DTS) yields a slight advantage over the gradation processing/multiobjective frequency processing/flexible noise control processing in terms of image quality for all classes of criteria. Advances in knowledge: The study shows that image quality may be strongly affected by the selection of DTS protocol and that the vendor-recommended protocol may not always be the optimal choice.
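The AUC_VGC figure-of-merit used above can be computed directly from the ordinal ratings of the two conditions through the rank-sum identity AUC = U/(n1·n2); a minimal sketch on made-up ratings is shown below. The ratings, the 5-point scale and the tie handling are assumptions for illustration, not the study's data or its exact VGC software.

```python
import numpy as np

def auc_vgc(ratings_test, ratings_ref):
    """Area under the visual grading characteristics curve.

    Equals the probability that a randomly chosen rating of the test
    condition exceeds one of the reference condition (ties count 1/2).
    """
    t = np.asarray(ratings_test)[:, None]
    r = np.asarray(ratings_ref)[None, :]
    wins = (t > r).sum() + 0.5 * (t == r).sum()
    return wins / (t.size * r.size)

# hypothetical 5-point image-quality ratings for two acquisition protocols
ratings_6s = [3, 4, 4, 5, 3, 4, 5, 4]
ratings_12s = [3, 3, 4, 4, 3, 4, 4, 3]
print("AUC_VGC =", auc_vgc(ratings_6s, ratings_12s))  # > 0.5 favours the 6.3 s protocol
```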
Aldridge, Matthew D; Waddington, Wendy W; Dickson, John C; Prakash, Vineet; Ell, Peter J; Bomanji, Jamshed B
2013-11-01
A three-dimensional model-based resolution recovery (RR) reconstruction algorithm that compensates for collimator-detector response, resulting in an improvement in reconstructed spatial resolution and signal-to-noise ratio of single-photon emission computed tomography (SPECT) images, was tested. The software is said to retain image quality even with reduced acquisition time. Clinically, any improvement in patient throughput without loss of quality is to be welcomed. Furthermore, future restrictions in radiotracer supplies may add value to this type of data analysis. The aims of this study were to assess improvement in image quality using the software and to evaluate the potential of performing reduced time acquisitions for bone and parathyroid SPECT applications. Data acquisition was performed using the local standard SPECT/CT protocols for 99mTc-hydroxymethylene diphosphonate bone and 99mTc-methoxyisobutylisonitrile parathyroid SPECT imaging. The principal modification applied was the acquisition of an eight-frame gated data set acquired using an ECG simulator with a fixed signal as the trigger. This had the effect of partitioning the data such that the effect of reduced time acquisitions could be assessed without conferring additional scanning time on the patient. The set of summed data sets was then independently reconstructed using the RR software to permit a blinded assessment of the effect of acquired counts upon reconstructed image quality as adjudged by three experienced observers. Data sets reconstructed with the RR software were compared with the local standard processing protocols; filtered back-projection and ordered-subset expectation-maximization. Thirty SPECT studies were assessed (20 bone and 10 parathyroid). The images reconstructed with the RR algorithm showed improved image quality for both full-time and half-time acquisitions over local current processing protocols (P<0.05). The RR algorithm improved image quality compared with local processing protocols and has been introduced into routine clinical use. SPECT acquisitions are now acquired at half of the time previously required. The method of binning the data can be applied to any other camera system to evaluate the reduction in acquisition time for similar processes. The potential for dose reduction is also inherent with this approach.
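The gating trick described above (a fixed ECG-simulator trigger partitioning the acquisition into eight frames so that reduced-time scans can be emulated by summing subsets) can be mimicked on any gated projection data; a minimal numpy sketch with synthetic counts follows. The array shape and the Poisson count model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic gated SPECT projections: 8 gates x 64 angles x 128 x 128 pixels
gated = rng.poisson(lam=2.0, size=(8, 64, 128, 128)).astype(np.float32)

def emulate_fraction(gated_frames, n_gates_used):
    """Sum the first n gated frames to emulate a reduced-time acquisition."""
    return gated_frames[:n_gates_used].sum(axis=0)

full_time = emulate_fraction(gated, 8)   # all 8 gates -> full acquisition time
half_time = emulate_fraction(gated, 4)   # 4 of 8 gates -> half acquisition time
print(full_time.sum() / half_time.sum())  # roughly 2: twice the counts in twice the gated time
```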
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-19
... assist the office in processing your requests. See the SUPPLEMENTARY INFORMATION section for electronic... considerations for standardization of image acquisition, image interpretation methods, and other procedures to help ensure imaging data quality. The draft guidance describes two categories of image acquisition and...
Afshar, Yaser; Sbalzarini, Ivo F.
2016-01-01
Modern fluorescence microscopy modalities, such as light-sheet microscopy, are capable of acquiring large three-dimensional images at high data rates. This creates a bottleneck in computational processing and analysis of the acquired images, as the rate of acquisition outpaces the speed of processing. Moreover, images can be so large that they do not fit the main memory of a single computer. We address both issues by developing a distributed parallel algorithm for segmentation of large fluorescence microscopy images. The method is based on the versatile Discrete Region Competition algorithm, which has previously proven useful in microscopy image segmentation. The present distributed implementation decomposes the input image into smaller sub-images that are distributed across multiple computers. Using network communication, the computers orchestrate the collective solving of the global segmentation problem. This not only enables segmentation of large images (we test images of up to 10^10 pixels), but also accelerates segmentation to match the time scale of image acquisition. Such acquisition-rate image segmentation is a prerequisite for the smart microscopes of the future and enables online data compression and interactive experiments. PMID:27046144
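As a rough illustration of the decomposition idea above (splitting a large image into sub-images processed on separate workers), the sketch below tiles an image with an overlap halo and segments each tile in parallel processes, with a plain threshold standing in for Discrete Region Competition. The tile size, halo and threshold are assumptions, and the cross-border label merging that a real distributed solver performs over the network is only noted in a comment.

```python
import numpy as np
from multiprocessing import Pool

TILE, HALO = 256, 8  # tile size and overlap halo in pixels (illustrative values)

def segment_tile(args):
    """Segment one padded tile; a fixed threshold stands in for Region Competition."""
    (r0, c0), tile = args
    mask = tile > 1.5                      # placeholder for the real energy minimisation
    return (r0, c0), mask[HALO:-HALO, HALO:-HALO]   # strip the halo, keep the interior

def tiles(image):
    padded = np.pad(image, HALO, mode="reflect")
    for r0 in range(0, image.shape[0], TILE):
        for c0 in range(0, image.shape[1], TILE):
            yield (r0, c0), padded[r0:r0 + TILE + 2 * HALO, c0:c0 + TILE + 2 * HALO]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.normal(size=(1024, 1024))
    image[300:500, 300:500] += 3.0                      # one bright object
    result = np.zeros(image.shape, dtype=bool)
    with Pool() as pool:                                 # workers model the separate computers
        for (r0, c0), mask in pool.map(segment_tile, tiles(image)):
            result[r0:r0 + mask.shape[0], c0:c0 + mask.shape[1]] = mask
    # a real distributed solver would additionally merge object labels across tile borders
    print("foreground pixels:", int(result.sum()))
```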
Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M
1999-01-01
To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.
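The indexing strategy described above uses coded acquisition-context descriptors as retrieval keys; a minimal, purely in-memory sketch of such an index is given below. The class design and the example coding-scheme/code pairs are illustrative assumptions, not actual SNOMED DICOM microglossary content.

```python
from collections import defaultdict

class AcquisitionContextIndex:
    """Index images by (coding scheme, code value) descriptors from their headers."""

    def __init__(self):
        self._index = defaultdict(set)

    def add(self, image_id, coded_descriptors):
        for scheme, code in coded_descriptors:
            self._index[(scheme, code)].add(image_id)

    def query(self, *descriptors):
        """Return images carrying all of the given coded descriptors."""
        sets = [self._index.get(d, set()) for d in descriptors]
        return set.intersection(*sets) if sets else set()

# hypothetical coded descriptors attached to two studies
index = AcquisitionContextIndex()
index.add("img-001", [("SNM3", "T-04000"), ("DCM", "MG"), ("99LOCAL", "CC-VIEW")])
index.add("img-002", [("SNM3", "T-04000"), ("DCM", "US")])
print(index.query(("SNM3", "T-04000"), ("DCM", "MG")))  # -> {'img-001'}
```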
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2012 CFR
2012-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2013 CFR
2013-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2014 CFR
2014-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
21 CFR 892.1715 - Full-field digital mammography system.
Code of Federal Regulations, 2011 CFR
2011-04-01
... planar digital x-ray images of the entire breast. This generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component...
Koprowski, Robert
2014-07-04
Dedicated, automatic algorithms for image analysis and processing are becoming more and more common in medical diagnosis. When creating dedicated algorithms, many factors must be taken into consideration. They are associated with selecting the appropriate algorithm parameters and taking into account the impact of data acquisition on the results obtained. An important feature of algorithms is the possibility of their use in other medical units by other operators. This problem, namely operator's (acquisition) impact on the results obtained from image analysis and processing, has been shown on a few examples. The analysed images were obtained from a variety of medical devices such as thermal imaging, tomography devices and those working in visible light. The objects of imaging were cellular elements, the anterior segment and fundus of the eye, postural defects and others. In total, almost 200'000 images coming from 8 different medical units were analysed. All image analysis algorithms were implemented in C and Matlab. For various algorithms and methods of medical imaging, the impact of image acquisition on the results obtained is different. There are different levels of algorithm sensitivity to changes in the parameters, for example: (1) for microscope settings and the brightness assessment of cellular elements there is a difference of 8%; (2) for the thyroid ultrasound images there is a difference in marking the thyroid lobe area which results in a brightness assessment difference of 2%. The method of image acquisition in image analysis and processing also affects: (3) the accuracy of determining the temperature in the characteristic areas on the patient's back for the thermal method - error of 31%; (4) the accuracy of finding characteristic points in photogrammetric images when evaluating postural defects - error of 11%; (5) the accuracy of performing ablative and non-ablative treatments in cosmetology - error of 18% for the nose, 10% for the cheeks, and 7% for the forehead. Similarly, when: (7) measuring the anterior eye chamber - there is an error of 20%; (8) measuring the tooth enamel thickness - error of 15%; (9) evaluating the mechanical properties of the cornea during pressure measurement - error of 47%. The paper presents vital, selected issues occurring when assessing the accuracy of designed automatic algorithms for image analysis and processing in bioengineering. The impact of acquisition of images on the problems arising in their analysis has been shown on selected examples. It has also been indicated to which elements of image analysis and processing special attention should be paid in their design.
Code of Federal Regulations, 2013 CFR
2013-10-01
... radiography (CR) is the term for digital X-ray image acquisition systems that detect X-ray signals using a... stimulating laser beam to convert the latent radiographic image to electronic signals which are then processed... image acquisition systems in which the X-ray signals received by the image detector are converted nearly...
Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael
1999-01-01
Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program which processes the images of particles into the velocity field in the illuminated plane.
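PIVACQ handles acquisition only; the downstream PIV processing (PIVPROC's job) is essentially a windowed cross-correlation between the two frame-straddled exposures. The sketch below shows that core step for one interrogation window using FFT-based cross-correlation on synthetic particle images; it illustrates the principle and is not the PIVPROC implementation.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a via FFT cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap indices above N/2 to negative shifts
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

# synthetic particle images: second exposure shifted by (3, -2) pixels
rng = np.random.default_rng(0)
frame = (rng.random((256, 256)) < 0.02).astype(float)   # random "particles"
win_a = frame[64:128, 64:128]
win_b = np.roll(frame, (3, -2), axis=(0, 1))[64:128, 64:128]
print(window_displacement(win_a, win_b))                 # -> (3, -2)
```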
Sechopoulos, Ioannis
2013-01-01
Many important post-acquisition aspects of breast tomosynthesis imaging can impact its clinical performance. Chief among them is the reconstruction algorithm that generates the representation of the three-dimensional breast volume from the acquired projections. But even after reconstruction, additional processes, such as artifact reduction algorithms, computer aided detection and diagnosis, among others, can also impact the performance of breast tomosynthesis in the clinical realm. In this two part paper, a review of breast tomosynthesis research is performed, with an emphasis on its medical physics aspects. In the companion paper, the first part of this review, the research performed relevant to the image acquisition process is examined. This second part will review the research on the post-acquisition aspects, including reconstruction, image processing, and analysis, as well as the advanced applications being investigated for breast tomosynthesis. PMID:23298127
Design of area array CCD image acquisition and display system based on FPGA
NASA Astrophysics Data System (ADS)
Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming
2014-09-01
With the development of science and technology, CCDs (charge-coupled devices) have been widely applied in various fields and play an important role in modern sensing systems; therefore, research into a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions are put forward. The FPGA works as the core processing unit of the system and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) processes the CCD image signal, including amplification, filtering, noise elimination and correlated double sampling (CDS); an AD9945 produced by ADI Corporation converts the analog signal to a digital signal. A Camera Link high-speed data transmission circuit was developed, the PC-side software for image acquisition was designed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its performance meets the actual project requirements.
ERIC Educational Resources Information Center
Smolík, Filip; Kríž, Adam
2015-01-01
Imageability is the ability of words to elicit mental sensory images of their referents. Recent research has suggested that imageability facilitates the processing and acquisition of inflected word forms. The present study examined whether inflected word forms are acquired earlier in highly imageable words in Czech children. Parents of 317…
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
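As a compact illustration of the two steps described above (subtracting a reflection-only frame, then shifting each image so a fiduciary mark's centroid matches the background image), the sketch below uses scipy.ndimage on synthetic arrays. The mark positions, threshold and clipping choice are assumptions, not the authors' algorithms.

```python
import numpy as np
from scipy import ndimage

def remove_reflection(image, reflection_image):
    """Subtract the reflection-only frame; negative values are clipped to zero."""
    return np.clip(image - reflection_image, 0.0, None)

def fiducial_centroid(image, threshold):
    """Centroid (row, col) of the bright fiduciary mark above an intensity threshold."""
    return np.array(ndimage.center_of_mass(image > threshold))

def align_to_reference(image, ref_centroid, threshold):
    """Shift the image so its fiduciary centroid coincides with the reference centroid."""
    shift = ref_centroid - fiducial_centroid(image, threshold)
    return ndimage.shift(image, shift, order=1, mode="nearest")

# synthetic example: a fiduciary mark whose position drifts by a few pixels
background = np.zeros((200, 200)); background[100:104, 50:54] = 1.0
frame = np.zeros((200, 200)); frame[103:107, 47:51] = 1.0     # vibrated frame
reflection = np.zeros((200, 200)); reflection[:, 190:] = 0.5  # fake model glare
frame = remove_reflection(frame + reflection, reflection)
aligned = align_to_reference(frame, fiducial_centroid(background, 0.5), 0.5)
print(fiducial_centroid(aligned, 0.5))  # ~ (101.5, 51.5), matching the background mark
```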
Hybrid cardiac imaging with MR-CAT scan: a feasibility study.
Hillenbrand, C; Sandstede, J; Pabst, T; Hahn, D; Haase, A; Jakob, P M
2000-06-01
We demonstrate the feasibility of a new versatile hybrid imaging concept, the combined acquisition technique (CAT), for cardiac imaging. The cardiac CAT approach, which combines new methodology with existing technology, essentially integrates fast low-angle shot (FLASH) and echoplanar imaging (EPI) modules in a sequential fashion, whereby each acquisition module is employed with independently optimized imaging parameters. One important CAT sequence optimization feature is the ability to use different bandwidths for different acquisition modules. Twelve healthy subjects were imaged using three cardiac CAT acquisition strategies: a) CAT was used to reduce breath-hold duration times while maintaining constant spatial resolution; b) CAT was used to increase spatial resolution in a given breath-hold time; and c) single-heart beat CAT imaging was performed. The results obtained demonstrate the feasibility of cardiac imaging using the CAT approach and the potential of this technique to accelerate the imaging process with almost conserved image quality. Copyright 2000 Wiley-Liss, Inc.
Acquiring 4D Thoracic CT Scans Using Ciné CT Acquisition
NASA Astrophysics Data System (ADS)
Low, Daniel
One method for acquiring 4D thoracic CT scans is to use ciné acquisition. Ciné acquisition is conducted by rotating the gantry and acquiring x-ray projections while keeping the couch stationary. After a complete rotation, a single set of CT slices, the number corresponding to the number of CT detector rows, is produced. The rotation period is typically sub-second, so each image set corresponds to a single point in time. The ciné image acquisition is repeated for at least one breathing cycle to acquire images throughout the breathing cycle. Once the images are acquired at a single couch position, the couch is moved to the abutting position and the acquisition is repeated. Post-processing of the image sets typically resorts the sets into breathing phases, stacking images from a specific phase to produce a thoracic CT scan at that phase. Benefits of the ciné acquisition protocol include the ability to precisely identify the phase with respect to the acquired image, the ability to resort images after reconstruction, and the ability to acquire images over arbitrarily long times and for arbitrarily many images (within dose constraints).
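The resorting step described above (binning ciné images by breathing phase and stacking one image per phase per couch position) can be illustrated with a short sketch. The timestamps, the phase definition relative to inhalation peaks and the data layout are assumptions for the example.

```python
import numpy as np

def breathing_phase(t, peak_times):
    """Phase in [0, 1): fraction of the breathing cycle elapsed since the last inhalation peak."""
    peak_times = np.asarray(peak_times)
    i = np.searchsorted(peak_times, t, side="right") - 1
    return (t - peak_times[i]) / (peak_times[i + 1] - peak_times[i])

def sort_into_phases(acq_times_per_couch, peak_times, n_bins=10):
    """For each couch position, pick the ciné image whose phase is closest to each bin centre."""
    bin_centres = (np.arange(n_bins) + 0.5) / n_bins
    sorted_idx = []
    for times in acq_times_per_couch:                 # one list of timestamps per couch position
        phases = np.array([breathing_phase(t, peak_times) for t in times])
        sorted_idx.append([int(np.argmin(np.abs(phases - c))) for c in bin_centres])
    return sorted_idx   # sorted_idx[couch][phase_bin] -> index of the image to stack

# hypothetical example: 2 couch positions, ~4 s breathing period, 0.5 s between ciné images
peaks = [0.0, 4.1, 8.0, 12.2, 16.0, 20.1]
acq_times = [np.arange(0.2, 5.0, 0.5), np.arange(5.2, 10.0, 0.5)]
print(sort_into_phases(acq_times, peaks, n_bins=5))
```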
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Sparsity based target detection for compressive spectral imagery
NASA Astrophysics Data System (ADS)
Boada, David Alberto; Arguello Fuentes, Henry
2016-09-01
Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, in contrast to traditional architectures that need a complete set of measurements of the data cube for image acquisition, thereby addressing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm in order to be processed, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
Image acquisition in the Pi-of-the-Sky project
NASA Astrophysics Data System (ADS)
Jegier, M.; Nawrocki, K.; Poźniak, K.; Sokołowski, M.
2006-10-01
Modern astronomical image acquisition systems dedicated to sky surveys provide a large amount of data in a single measurement session. During one session that lasts a few hours it is possible to get as much as 100 GB of data. This large amount of data needs to be transferred from the camera and processed. This paper presents some aspects of image acquisition in a sky survey image acquisition system. It describes a dedicated USB Linux driver for the first version of the "Pi of The Sky" CCD camera (later versions also have an Ethernet interface) and the test program for the camera together with a driver-wrapper providing core device functionality. Finally, the paper contains a description of an algorithm for matching several images based on image features, i.e. star positions and their brightness.
Imaging system design and image interpolation based on CMOS image sensor
NASA Astrophysics Data System (ADS)
Li, Yu-feng; Liang, Fei; Guo, Rui
2009-11-01
An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control of the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high-speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use the edge-oriented adaptive interpolation algorithm for the edge pixels and the bilinear interpolation algorithm for the non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, decreases the computational complexity, and effectively preserves the image edges.
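The interpolation step described above fills in the two missing colour samples at every pixel of the Bayer mosaic; the sketch below implements the plain bilinear case for an assumed RGGB layout using mask-normalised convolutions, and notes in a comment where the edge-oriented variant would differ. The OV9620's actual mosaic layout should be checked before reuse.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(raw):
    """Bilinear demosaicing of a single-channel RGGB Bayer image to an RGB image."""
    h, w = raw.shape
    rows, cols = np.indices((h, w))
    masks = {
        "R": (rows % 2 == 0) & (cols % 2 == 0),
        "G": (rows % 2) != (cols % 2),
        "B": (rows % 2 == 1) & (cols % 2 == 1),
    }
    kernel = np.array([[1.0, 2.0, 1.0],
                       [2.0, 4.0, 2.0],
                       [1.0, 2.0, 1.0]])
    rgb = np.empty((h, w, 3), dtype=float)
    for i, ch in enumerate("RGB"):
        m = masks[ch].astype(float)
        # weighted average of the available neighbours of this colour at every pixel;
        # an edge-oriented scheme would instead pick the horizontal or vertical
        # neighbour pair with the smaller intensity gradient at edge pixels
        rgb[..., i] = convolve(raw * m, kernel, mode="mirror") / convolve(m, kernel, mode="mirror")
        rgb[..., i][masks[ch]] = raw[masks[ch]]        # keep the measured samples untouched
    return rgb

raw = np.random.default_rng(0).random((8, 8))   # stand-in for OV9620 sensor data
print(bilinear_demosaic_rggb(raw).shape)        # (8, 8, 3)
```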
2015-04-01
Current routine MRI examinations rely on the acquisition of qualitative images that have a contrast "weighted" for a mixture of (magnetic) tissue properties. Recently, a novel approach was introduced, namely MR Fingerprinting (MRF), which takes a completely different approach to data acquisition, post-processing and visualization. Instead of using a repeated, serial acquisition of data for the characterization of individual parameters of interest, MRF uses a pseudo-randomized acquisition that causes the signals from different tissues to have a unique signal evolution or 'fingerprint' that is simultaneously a function of the multiple material properties under investigation. The processing after acquisition involves a pattern recognition algorithm to match the fingerprints to a predefined dictionary of predicted signal evolutions. These can then be translated into quantitative maps of the magnetic parameters of interest. MR Fingerprinting (MRF) is a technique that could theoretically be applied to most traditional qualitative MRI methods and replace them with acquisition of truly quantitative tissue measures. MRF is, thereby, expected to be much more accurate and reproducible than traditional MRI and should improve multi-center studies and significantly reduce reader bias when diagnostic imaging is performed. Key Points: • MR fingerprinting (MRF) is a new approach to data acquisition, post-processing and visualization. • MRF provides highly accurate quantitative maps of T1, T2, proton density and diffusion. • MRF may offer multiparametric imaging with high reproducibility, and high potential for multicenter/multivendor studies.
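The pattern-recognition step described above amounts to finding the dictionary entry most similar to each measured fingerprint; a minimal sketch of normalized inner-product matching over a synthetic dictionary follows. The exponential signal model and parameter grids are illustrative assumptions, not a real MRF sequence simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.01, 3.0, 200)                       # 200 time points of the "sequence"

# illustrative dictionary: one decaying-exponential fingerprint per (T1, T2) pair
T1s, T2s = np.meshgrid(np.arange(0.2, 2.1, 0.1), np.arange(0.02, 0.31, 0.02), indexing="ij")
params = np.column_stack([T1s.ravel(), T2s.ravel()])
dictionary = np.array([(1 - np.exp(-t / t1)) * np.exp(-t / t2) for t1, t2 in params])
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

def match(fingerprint):
    """Return (T1, T2) of the best-matching atom and the proton-density scale factor."""
    f = fingerprint / np.linalg.norm(fingerprint)
    best = int(np.argmax(np.abs(dictionary @ f)))
    pd = fingerprint @ dictionary[best]               # projection onto the unit-norm atom
    return params[best], pd

# simulate one voxel with T1 = 1.0, T2 = 0.1, proton density 0.8, plus noise
truth = 0.8 * (1 - np.exp(-t / 1.0)) * np.exp(-t / 0.1)
measured = truth + 0.002 * rng.standard_normal(t.size)
print(match(measured))   # should recover (T1, T2) close to (1.0, 0.1)
```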
An acquisition system for CMOS imagers with a genuine 10 Gbit/s bandwidth
NASA Astrophysics Data System (ADS)
Guérin, C.; Mahroug, J.; Tromeur, W.; Houles, J.; Calabria, P.; Barbier, R.
2012-12-01
This paper presents a high data throughput acquisition system for pixel detector readout such as CMOS imagers. This CMOS acquisition board offers a genuine 10 Gbit/s bandwidth to the workstation and can provide an on-line and continuous high frame rate imaging capability. On-line processing can be implemented either on the Data Acquisition Board or on the multi-cores workstation depending on the complexity of the algorithms. The different parts composing the acquisition board have been designed to be used first with a single-photon detector called LUSIPHER (800×800 pixels), developed in our laboratory for scientific applications ranging from nano-photonics to adaptive optics. The architecture of the acquisition board is presented and the performances achieved by the produced boards are described. The future developments (hardware and software) concerning the on-line implementation of algorithms dedicated to single-photon imaging are tackled.
Matovic, Milovan; Jankovic, Milica; Barjaktarovic, Marko; Jeremic, Marija
2017-01-01
After radioiodine therapy of differentiated thyroid cancer (DTC) patients, whole body scintigraphy (WBS) is a standard procedure before releasing the patient from the hospital. A common problem is the precise localization of regions where the iodine-avid tissue is located. Sometimes it is practically impossible to perform precise topographic localization of such regions. In order to address this problem, we have developed a low-cost Vision-Fusion system for web-camera image acquisition simultaneously with routine scintigraphic whole-body acquisition, including an algorithm for fusion of the images obtained from both cameras. For image acquisition in the gamma part of the spectrum we used an e.cam dual-head gamma camera (Siemens, Erlangen, Germany) in WBS modality, with a matrix size of 256×1024 pixels and a bed speed of 6 cm/min, equipped with a high-energy collimator. For optical image acquisition in the visible part of the spectrum we used a web-camera model C905 (Logitech, USA) with Carl Zeiss® optics, native resolution of 1600×1200 pixels, 34° field of view, 30 g weight, with the autofocus option turned "off" and auto white balance turned "on". The web-camera is connected to the upper head of the gamma camera (GC) by a holder made of a lightweight aluminum rod and a plexiglas adapter. Our own Vision-Fusion software for image acquisition and coregistration was developed using the NI LabVIEW 2015 programming environment (National Instruments, Texas, USA) and two additional LabVIEW modules: NI Vision Acquisition Software (VAS) and NI Vision Development Module (VDM). The Vision Acquisition Software enables communication and control between the laptop computer and the web-camera. The Vision Development Module is an image processing library used for image preprocessing and fusion. The software starts the web-camera image acquisition before starting image acquisition on the GC and stops it when the GC completes the acquisition. The web-camera is in continuous acquisition mode with frame rate f depending on the speed of patient bed movement v (f = v/∆cm, where ∆cm is a displacement step that can be changed in the Settings option of the Vision-Fusion software; by default, ∆cm is set to 1 cm, corresponding to ∆p = 15 pixels). All images captured while the patient's bed is moving are processed. Movement of the patient's bed is checked using cross-correlation of two successive images. After each image capture, the algorithm extracts the central region of interest (ROI) of the image, with the same width as the captured image (1600 pixels) and a height equal to the displacement ∆p in pixels. All extracted central ROIs are placed next to each other in the overall whole-body image. Stacking of narrow central ROIs introduces negligible distortion in the overall whole-body image. The first step for fusion of the scintigram and the optical image was the determination of the spatial transformation between them. We performed an experiment with two markers (1 MBq point sources of 99mTc pertechnetate) visible in both images (WBS and optical) to find the coordinate transformation between the images. The distance between the point markers is used for spatial coregistration of the gamma and optical images. At the end of the coregistration process, the gamma image is rescaled in the spatial domain and added to the optical image (green or red channel, with amplification changeable from the user interface). We tested our system on 10 patients with DTC who received radioiodine therapy (8 women and 2 men, with an average age of 50.10±12.26 years). Five patients received 5.55 GBq, three 3.70 GBq and two 1.85 GBq.
Whole-body scintigraphy and optical image acquisition were performed 72 hours after application of the radioiodine therapy. Based on the first results of clinical testing, we can conclude that our system can improve the diagnostic capability of whole-body scintigraphy to detect remnant thyroid tissue in patients with DTC after radioiodine therapy.
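Two steps of the Vision-Fusion acquisition described above lend themselves to a compact illustration: checking that the bed has moved between successive web-camera frames (from the location of the cross-correlation peak) and building the whole-body optical image by stacking the central strip of height ∆p from each frame. The sketch below does both on synthetic frames; the frame contents, strip placement and motion threshold are assumptions.

```python
import numpy as np

DELTA_P = 15  # vertical displacement per captured frame, in pixels (assumed value)

def vertical_shift(prev, curr):
    """Row shift of curr relative to prev from the FFT cross-correlation peak."""
    a, b = prev - prev.mean(), curr - curr.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    peak_row = np.unravel_index(np.argmax(corr), corr.shape)[0]
    rows = corr.shape[0]
    return peak_row - rows if peak_row > rows // 2 else peak_row

def stitch(frames):
    """Stack the central strip (full width, DELTA_P rows) of every frame."""
    strips = []
    for prev, curr in zip(frames, frames[1:]):
        if abs(vertical_shift(prev, curr)) < 1:      # bed did not move: skip this frame
            continue
        mid = curr.shape[0] // 2
        strips.append(curr[mid - DELTA_P // 2: mid - DELTA_P // 2 + DELTA_P])
    return np.vstack(strips)

# synthetic "patient" texture scrolling past a fixed camera by DELTA_P rows per frame
rng = np.random.default_rng(0)
scene = rng.random((1200, 320))
frames = [scene[i * DELTA_P: i * DELTA_P + 240] for i in range(20)]
panorama = stitch(frames)
print(panorama.shape)   # (19 * 15, 320): stitched central strips
```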
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-05
... exposure control, image processing and reconstruction programs, patient and equipment supports, component..., acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and... may include was revised by adding automatic exposure control, image processing and reconstruction...
Plant phenomics: an overview of image acquisition technologies and image data analysis algorithms.
Perez-Sanz, Fernando; Navarro, Pedro J; Egea-Cortines, Marcos
2017-11-01
The study of phenomes or phenomics has been a central part of biology. The field of automatic phenotype acquisition technologies based on images has seen an important advance in the last years. As with other high-throughput technologies, it addresses a common set of problems, including data acquisition and analysis. In this review, we give an overview of the main systems developed to acquire images. We give an in-depth analysis of image processing with its major issues and the algorithms that are being used or emerging as useful to obtain data out of images in an automatic fashion. © The Author 2017. Published by Oxford University Press.
Sensor, signal, and image informatics - state of the art and current topics.
Lehmann, T M; Aach, T; Witte, H
2006-01-01
The number of articles published annually in the fields of biomedical signal and image acquisition and processing is increasing. Based on selected examples, this survey aims at comprehensively demonstrating the recent trends and developments. Four articles are selected for biomedical data acquisition covering topics such as dose saving in CT, C-arm X-ray imaging systems for volume imaging, and the replacement of dose-intensive CT-based diagnostics with harmonic ultrasound imaging. Regarding biomedical signal analysis (BSA), the four selected articles discuss the equivalence of different time-frequency approaches for signal analysis, an application to cochlear implants, where time-frequency analysis is applied for controlling the replacement system, recent trends in the fusion of different modalities, and the role of BSA as part of brain-machine interfaces. To cover the broad spectrum of publications in the field of biomedical image processing, six papers are highlighted. Important topics are content-based image retrieval in medical applications, automatic classification of tongue photographs from traditional Chinese medicine, brain perfusion analysis in single photon emission computed tomography (SPECT), model-based visualization of vascular trees, and virtual surgery, where enhanced visualization and haptic feedback techniques are combined with a sphere-filled model of the organ. The selected papers emphasize the five fields forming the chain of biomedical data processing: (1) data acquisition, (2) data reconstruction and pre-processing, (3) data handling, (4) data analysis, and (5) data visualization. Fields 1 and 2 form sensor informatics, while fields 2 to 5 form signal or image informatics, depending on the nature of the data considered. Biomedical data acquisition and pre-processing, as well as data handling, analysis and visualization, aim at providing reliable tools for decision support that improve the quality of health care. Comprehensive evaluation of the processing methods and their reliable integration in routine applications are future challenges in the field of sensor, signal and image informatics.
Quick acquisition and recognition method for the beacon in deep space optical communications.
Wang, Qiang; Liu, Yuefei; Ma, Jing; Tan, Liying; Yu, Siyuan; Li, Changjiang
2016-12-01
In deep space optical communications, it is very difficult to acquire the beacon given the long communication distance. Acquisition efficiency is essential for establishing and holding the optical communication link. Here we propose a quick acquisition and recognition method for the beacon in deep space optical communications based on the characteristics of the deep space optical link. To identify the beacon from the background light efficiently, we utilized the maximum similarity between the collected image and the reference image for accurate recognition and acquisition of the beacon in the area of uncertainty. First, the collected image and the reference image were processed with the Fourier-Mellin transform. Second, image sampling and image matching were applied for the accurate positioning of the beacon. Finally, a field-programmable gate array (FPGA)-based system was used to verify and realize this method. The experimental results showed that the acquisition time for the beacon was as fast as 8.1 s. Future application of this method in the system design of deep space optical communication will be beneficial.
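The matching step described above measures the similarity between the collected image and the reference image after a Fourier-Mellin transform; as a simplified illustration of Fourier-domain matching, the sketch below performs plain phase correlation, which recovers only the translation between two images from the normalized cross-power spectrum (the full Fourier-Mellin approach adds a log-polar resampling so that rotation and scale also become translations). The synthetic beacon scene is an assumption.

```python
import numpy as np

def phase_correlation(reference, collected):
    """Translation (row, col) of `collected` relative to `reference`."""
    F1, F2 = np.fft.fft2(reference), np.fft.fft2(collected)
    cross_power = np.conj(F1) * F2
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, corr.shape))

# synthetic uncertainty area: a Gaussian beacon spot plus faint background light
rng = np.random.default_rng(0)
y, x = np.mgrid[0:256, 0:256]
reference = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 16.0)
collected = np.roll(reference, (9, -14), axis=(0, 1)) + 0.02 * rng.random((256, 256))
print(phase_correlation(reference, collected))       # -> (9, -14)
```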
Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans.
Bresch, Erik; Nielsen, Jon; Nayak, Krishna; Narayanan, Shrikanth
2006-10-01
This letter describes a data acquisition setup for recording, and processing, running speech from a person in a magnetic resonance imaging (MRI) scanner. The main focus is on ensuring synchronicity between image and audio acquisition, and in obtaining good signal to noise ratio to facilitate further speech analysis and modeling. A field-programmable gate array based hardware design for synchronizing the scanner image acquisition to other external data such as audio is described. The audio setup itself features two fiber optical microphones and a noise-canceling filter. Two noise cancellation methods are described including a novel approach using a pulse sequence specific model of the gradient noise of the MRI scanner. The setup is useful for scientific speech production studies. Sample results of speech and singing data acquired and processed using the proposed method are given.
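One of the two cancellation methods above uses a model of the scanner's gradient noise as a reference; a generic adaptive (LMS) noise canceller, which subtracts whatever part of the primary microphone signal is predictable from such a reference, is sketched below on synthetic signals. The filter length, step size and signal model are assumptions, and this is not the authors' pulse-sequence-specific model.

```python
import numpy as np

def lms_cancel(primary, reference, n_taps=32, mu=0.01):
    """Subtract the reference-correlated component from the primary channel (LMS)."""
    w = np.zeros(n_taps)
    out = np.zeros_like(primary)
    for n in range(n_taps - 1, len(primary)):
        x = reference[n - n_taps + 1:n + 1][::-1]   # r[n], r[n-1], ..., r[n-n_taps+1]
        e = primary[n] - w @ x                      # error = cleaned speech sample
        w += 2 * mu * e * x                         # LMS weight update
        out[n] = e
    return out

# synthetic example: "speech" corrupted by filtered gradient-like noise
rng = np.random.default_rng(0)
t = np.arange(0, 2.0, 1 / 8000)                                  # 2 s at 8 kHz
speech = 0.3 * np.sin(2 * np.pi * 220 * t) * (np.sin(2 * np.pi * 3 * t) > 0)
reference = rng.standard_normal(t.size)                          # gradient-noise model output
noise_in_mic = np.convolve(reference, [0.6, 0.3, 0.1])[:t.size]  # acoustic path to the mic
cleaned = lms_cancel(speech + noise_in_mic, reference)
print("noise power before/after:",
      np.var(noise_in_mic), np.var(cleaned[8000:] - speech[8000:]))
```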
Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.
2014-01-01
Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
USDA-ARS?s Scientific Manuscript database
Using unmanned aircraft systems (UAS) as remote sensing platforms offers the unique ability for repeated deployment for acquisition of high temporal resolution data at very high spatial resolution. Most image acquisitions from UAS have been in the visible bands, while multispectral remote sensing ap...
Optimizing hippocampal segmentation in infants utilizing MRI post-acquisition processing.
Thompson, Deanne K; Ahmadzai, Zohra M; Wood, Stephen J; Inder, Terrie E; Warfield, Simon K; Doyle, Lex W; Egan, Gary F
2012-04-01
This study aims to determine the most reliable method for infant hippocampal segmentation by comparing magnetic resonance (MR) imaging post-acquisition processing techniques: contrast to noise ratio (CNR) enhancement, or reformatting to standard orientation. MR scans were performed with a 1.5 T GE scanner to obtain dual echo T2 and proton density (PD) images at term equivalent (38-42 weeks' gestational age). 15 hippocampi were manually traced four times on ten infant images by 2 independent raters on the original T2 image, as well as images processed by: a) combining T2 and PD images (T2-PD) to enhance CNR; then b) reformatting T2-PD images perpendicular to the long axis of the left hippocampus. CNRs and intraclass correlation coefficients (ICC) were calculated. T2-PD images had 17% higher CNR (15.2) than T2 images (12.6). Original T2 volumes' ICC was 0.87 for rater 1 and 0.84 for rater 2, whereas T2-PD images' ICC was 0.95 for rater 1 and 0.87 for rater 2. Reliability of hippocampal segmentation on T2-PD images was not improved by reformatting images (rater 1 ICC = 0.88, rater 2 ICC = 0.66). Post-acquisition processing can improve CNR and hence reliability of hippocampal segmentation in neonate MR scans when tissue contrast is poor. These findings may be applied to enhance boundary definition in infant segmentation for various brain structures or in any volumetric study where image contrast is sub-optimal, enabling hippocampal structure-function relationships to be explored.
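The post-acquisition step described above combines the dual-echo T2 and PD images to raise the contrast-to-noise ratio before manual tracing; a minimal sketch of the CNR calculation and a simple additive T2-PD combination on synthetic data follows. The ROI definitions, the plain addition and the noise model are assumptions; the study's exact combination method may differ.

```python
import numpy as np

def cnr(image, roi_a, roi_b, background):
    """Contrast-to-noise ratio between two ROIs, with noise estimated from a uniform ROI."""
    return abs(image[roi_a].mean() - image[roi_b].mean()) / image[background].std()

# synthetic "hippocampus" (slightly brighter) against "surrounding tissue"
rng = np.random.default_rng(0)
shape = (128, 128)
structure = np.zeros(shape, dtype=bool); structure[50:70, 50:70] = True
tissue = ~structure

def simulate(contrast, sigma):
    img = np.where(structure, 1.0 + contrast, 1.0)
    return img + rng.normal(0.0, sigma, shape)

t2 = simulate(contrast=0.20, sigma=0.08)      # T2-weighted echo
pd = simulate(contrast=0.15, sigma=0.08)      # proton-density echo (independent noise)
combined = t2 + pd                            # simple T2-PD combination

bg = np.zeros(shape, dtype=bool); bg[:20, :20] = True   # uniform-signal corner for noise estimation
for name, img in [("T2", t2), ("PD", pd), ("T2+PD", combined)]:
    print(name, "CNR =", round(cnr(img, structure, tissue, bg), 2))
```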
Adaptive hyperspectral imager: design, modeling, and control
NASA Astrophysics Data System (ADS)
McGregor, Scot; Lacroix, Simon; Monmayrant, Antoine
2015-08-01
An adaptive hyperspectral imager is presented. We propose a system with easily adaptable spectral resolution, adjustable acquisition time, and high spatial resolution that is independent of spectral resolution. The system makes it possible to define a variety of acquisition schemes, in particular near-snapshot acquisitions that may be used to measure the spectral content of given or automatically detected regions of interest. The proposed system is modelled and simulated, and tests on a first prototype validate the approach for achieving near-snapshot spectral acquisitions without resorting to any computationally heavy post-processing or cumbersome calibration.
Cerenkov luminescence imaging: physics principles and potential applications in biomedical sciences.
Ciarrocchi, Esther; Belcari, Nicola
2017-12-01
Cerenkov luminescence imaging (CLI) is a novel imaging modality to study charged particles with optical methods by detecting the Cerenkov luminescence produced in tissue. This paper first describes the physical processes that govern the production and transport in tissue of Cerenkov luminescence. The detectors used for CLI and their most relevant specifications to optimize the acquisition of the Cerenkov signal are then presented, and CLI is compared with the other optical imaging modalities sharing the same data acquisition and processing methods. Finally, the scientific work related to CLI and the applications for which CLI has been proposed are reviewed. The paper ends with some considerations about further perspectives for this novel imaging modality.
Raspberry Pi-powered imaging for plant phenotyping.
Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A
2018-03-01
Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
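As a minimal sketch of the kind of low-cost capture loop described above, the following Python snippet uses the Raspberry Pi `picamera` package to take a time-lapse series of stills that could later be analyzed with open-source tools such as PlantCV. The resolution, interval and file naming are illustrative assumptions, not part of the published protocol.

```python
# Time-lapse capture sketch for a Raspberry Pi camera (assumes the `picamera`
# package and an attached camera module; values are illustrative).
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (2592, 1944)

def capture_series(n_images=10, interval_s=600, prefix="plant"):
    """Capture n_images stills, one every interval_s seconds."""
    for i in range(n_images):
        camera.capture(f"{prefix}_{i:04d}.jpg")   # e.g. plant_0003.jpg
        time.sleep(interval_s)

if __name__ == "__main__":
    capture_series()
```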
Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S
2012-02-23
We present the image processing upgrades implemented on a Graphics Processing Unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one does not depend upon the result from the operation on the other, allowing the entire image to be processed in parallel. GPU hardware was developed for this kind of massive parallel processing implementation. Thus for an algorithm which has a high amount of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat field correction, temporal filtering, image subtraction, roadmap mask generation and display window and leveling. A comparison between the previous and the upgraded version of CAPIDS has been presented, to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements (with respect to timing or frame rate) have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure and automatic image windowing and leveling during each frame.
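Flat-field correction is a good example of the pixel-independent operations listed above: every output pixel is an element-wise function of the corresponding input pixels, so the computation maps naturally onto one GPU thread per pixel. The sketch below shows the standard gain/offset form of the correction in NumPy under that assumption; the exact CAPIDS formulation may differ.

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard gain/offset correction. Each output pixel depends only on the
    corresponding input pixels, so the whole image can be processed in
    parallel (one GPU thread per pixel, or with a GPU array library such as
    CuPy as a drop-in for NumPy)."""
    raw = raw.astype(np.float32)
    gain = flat.astype(np.float32) - dark.astype(np.float32)
    gain[gain == 0] = 1.0                 # guard against division by zero
    return (raw - dark) * gain.mean() / gain
```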
Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation
NASA Astrophysics Data System (ADS)
Hakim, P. R.; Permala, R.; Jayani, A. P. S.
2018-05-01
LAPAN-A3/IPB satellite was launched in June 2016 and its multispectral imager has been producing Indonesian coverage images. In order to improve its support for remote sensing applications, the imager should produce images of high quality and in high quantity. To increase the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over the western Indonesia region. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under the assumption of several on-board and ground segment limitations. Results show that with the real-time operation mode, the LAPAN-A3/IPB multispectral imager can produce twice as much image coverage compared to recorded mode. However, the images produced in real-time mode have slightly degraded quality due to the image compression process involved. Based on the analyses performed in this research, it is recommended to use the real-time acquisition mode whenever possible, except in circumstances that strictly do not allow any quality degradation of the images produced.
Development of image analysis software for quantification of viable cells in microchips.
Georg, Maximilian; Fernández-Cabada, Tamara; Bourguignon, Natalia; Karp, Paola; Peñaherrera, Ana B; Helguera, Gustavo; Lerner, Betiana; Pérez, Maximiliano S; Mertelsmann, Roland
2018-01-01
Over the past few years, image analysis has emerged as a powerful tool for analyzing various cell biology parameters in an unprecedented and highly specific manner. The amount of data generated requires automated methods for processing and analyzing all of the resulting information. The software available so far is suitable for processing fluorescence and phase contrast images, but often does not give good results on transmission light microscopy images, due to the intrinsic variability of the image acquisition technique itself (adjustment of brightness/contrast, for instance) and the variability between acquisitions introduced by operators and equipment. In this contribution, we present an image processing software package, Python-based image analysis for cell growth (PIACG), that is able to calculate, in a highly efficient way, the total area of the well occupied by cells with fusiform and rounded morphology in response to different concentrations of fetal bovine serum in microfluidic chips, from transmission light microscopy images.
NASA Technical Reports Server (NTRS)
Chien, Steve A.
1996-01-01
A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.
3D Reconstruction with a Collaborative Approach Based on Smartphones and a Cloud-Based Server
NASA Astrophysics Data System (ADS)
Nocerino, E.; Poiesi, F.; Locher, A.; Tefera, Y. T.; Remondino, F.; Chippendale, P.; Van Gool, L.
2017-11-01
The paper presents a collaborative image-based 3D reconstruction pipeline that performs image acquisition with a smartphone and geometric 3D reconstruction on a server during concurrent or disjoint acquisition sessions. Images are selected from the video feed of the smartphone's camera based on their quality and novelty. The smartphone's app provides on-the-fly reconstruction feedback to users co-involved in the acquisitions. The server runs an incremental SfM algorithm that processes the received images by seamlessly merging them into a single sparse point cloud using bundle adjustment. A dense image matching algorithm can be launched to derive denser point clouds. The reconstruction details, experiments and performance evaluation are presented and discussed.
An IBM PC/AT-Based Image Acquisition And Processing System For Quantitative Image Analysis
NASA Astrophysics Data System (ADS)
Kim, Yongmin; Alexander, Thomas
1986-06-01
In recent years, a large number of applications have been developed for image processing systems in the area of biological imaging. We have already finished the development of a dedicated microcomputer-based image processing and analysis system for quantitative microscopy. The system's primary function has been to facilitate and ultimately automate quantitative image analysis tasks such as the measurement of cellular DNA contents. We have recognized from this development experience, and interaction with system users, biologists and technicians, that the increasingly widespread use of image processing systems, and the development and application of new techniques for utilizing the capabilities of such systems, would generate a need for some kind of inexpensive general purpose image acquisition and processing system specially tailored for the needs of the medical community. We are currently engaged in the development and testing of hardware and software for a fairly high-performance image processing computer system based on a popular personal computer. In this paper, we describe the design and development of this system. Biological image processing computer systems have now reached a level of hardware and software refinement where they could become convenient image analysis tools for biologists. The development of a general purpose image processing system for quantitative image analysis that is inexpensive, flexible, and easy-to-use represents a significant step towards making the microscopic digital image processing techniques more widely applicable not only in a research environment as a biologist's workstation, but also in clinical environments as a diagnostic tool.
1979-11-01
a generalized cooccurrence matrix. Describing image texture is an important problem in the design of image understanding systems. Applications as...display system design optimization and video signal processing. Based on a study by Southern Research Institute, a number of options were identified...Specification for Target Acquisition Designation System (U), RFP # AMC-DP-AAH-H4020, 12 Apr 77. 4. Terminal Homing Applications of Solid State Image
2010-10-01
Studenski et al., Acquisition and Processing Methods for a Bedside Cardiac SPECT Imaging System, IEEE Transactions on Nuclear Science, vol. 57, no. 1, February 2010.
General-purpose interface bus for multiuser, multitasking computer system
NASA Technical Reports Server (NTRS)
Generazio, Edward R.; Roth, Don J.; Stang, David B.
1990-01-01
The architecture of a multiuser, multitasking, virtual-memory computer system intended for the use by a medium-size research group is described. There are three central processing units (CPU) in the configuration, each with 16 MB memory, and two 474 MB hard disks attached. CPU 1 is designed for data analysis and contains an array processor for fast-Fourier transformations. In addition, CPU 1 shares display images viewed with the image processor. CPU 2 is designed for image analysis and display. CPU 3 is designed for data acquisition and contains 8 GPIB channels and an analog-to-digital conversion input/output interface with 16 channels. Up to 9 users can access the third CPU simultaneously for data acquisition. Focus is placed on the optimization of hardware interfaces and software, facilitating instrument control, data acquisition, and processing.
ScanImage: flexible software for operating laser scanning microscopes.
Pologruto, Thomas A; Sabatini, Bernardo L; Svoboda, Karel
2003-05-17
Laser scanning microscopy is a powerful tool for analyzing the structure and function of biological specimens. Although numerous commercial laser scanning microscopes exist, some of the more interesting and challenging applications demand custom design. A major impediment to custom design is the difficulty of building custom data acquisition hardware and writing the complex software required to run the laser scanning microscope. We describe a simple, software-based approach to operating a laser scanning microscope without the need for custom data acquisition hardware. Data acquisition and control of laser scanning are achieved through standard data acquisition boards. The entire burden of signal integration and image processing is placed on the CPU of the computer. We quantitate the effectiveness of our data acquisition and signal conditioning algorithm under a variety of conditions. We implement our approach in an open source software package (ScanImage) and describe its functionality. We present ScanImage, software to run a flexible laser scanning microscope that allows easy custom design.
Data Visualization and Animation Lab (DVAL) overview
NASA Technical Reports Server (NTRS)
Stacy, Kathy; Vonofenheim, Bill
1994-01-01
The general capabilities of the Langley Research Center Data Visualization and Animation Laboratory are described. These capabilities include digital image processing, 3-D interactive computer graphics, data visualization and analysis, video-rate acquisition and processing of video images, photo-realistic modeling and animation, video report generation, and color hardcopies. A specialized video image processing system is also discussed.
Bravo-Zanoguera, Miguel E; Laris, Casey A; Nguyen, Lam K; Oliva, Mike; Price, Jeffrey H
2007-01-01
Efficient image cytometry of a conventional microscope slide means rapid acquisition and analysis of 20 gigapixels of image data (at 0.3-µm sampling). The voluminous data motivate increased acquisition speed to enable many biomedical applications. Continuous-motion time-delay-and-integrate (TDI) scanning has the potential to speed image acquisition while retaining sensitivity, but the challenge of implementing high-resolution autofocus operating simultaneously with acquisition has limited its adoption. We develop a dynamic autofocus system for this need using: 1. a "volume camera," consisting of nine fiber optic imaging conduits to charge-coupled device (CCD) sensors, that acquires images in parallel from different focal planes, 2. an array of mixed analog-digital processing circuits that measure the high spatial frequencies of the multiple image streams to create focus indices, and 3. a software system that reads and analyzes the focus data streams and calculates best focus for closed feedback loop control. Our system updates autofocus at 56 Hz (or once every 21 µm of stage travel) to collect sharply focused images sampled at 0.3 × 0.3 µm²/pixel at a stage speed of 2.3 mm/s. The system, tested by focusing in phase contrast and imaging long fluorescence strips, achieves high-performance closed-loop image-content-based autofocus in continuous scanning for the first time.
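The hardware focus indices described above are built from the high-spatial-frequency content of each focal-plane stream. A software analogue of that measurement (an assumption for illustration, not the authors' analog circuitry) is sketched below: each plane is scored by its high-frequency energy and the sharpest plane is selected.

```python
import numpy as np

def focus_index(img):
    """High-spatial-frequency energy of an image (sum of squared finite
    differences); in-focus images score higher than defocused ones."""
    img = img.astype(np.float32)
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    return float((dx ** 2).sum() + (dy ** 2).sum())

def best_focal_plane(planes):
    """planes: images from the parallel focal-plane sensors; returns the
    index of the sharpest plane, which drives the focus feedback loop."""
    return int(np.argmax([focus_index(p) for p in planes]))
```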
Design and DSP implementation of star image acquisition and star point fast acquiring and tracking
NASA Astrophysics Data System (ADS)
Zhou, Guohui; Wang, Xiaodong; Hao, Zhihang
2006-02-01
A star sensor is a special high-accuracy photoelectric sensor, and attitude acquisition time is an important performance index of a star sensor. In this paper, the design target is a dynamic performance of 10 samples per second. Based on an analysis of the CCD signal timing and of star image processing, a new design and a special parallel architecture for improving star image processing are presented. In the design, the operation of moving the data in expanded windows containing the stars to the on-chip memory of the DSP is scheduled during the invalid period of the CCD frame signal. While the CCD saves the star image to memory, the DSP processes the data already in its on-chip memory. This parallelism greatly improves the processing efficiency, and the scheme results in large savings of the memory normally required. In the scheme, the DSP HOLD mode and CPLD technology are used to create a memory shared between the CCD and the DSP. The processing efficiency is examined in numerical tests. The five brightest stars are acquired in only 3.5 ms in the star acquisition stage. In 43 µs, the data in the five expanded windows containing stars are moved into the internal memory of the DSP, and in 1.6 ms, five star coordinates are obtained in the star tracking stage.
Xu, Jing; Wong, Kevin; Jian, Yifan; Sarunic, Marinko V
2014-02-01
In this report, we describe a graphics processing unit (GPU)-accelerated processing platform for real-time acquisition and display of flow contrast images with Fourier domain optical coherence tomography (FDOCT) in mouse and human eyes in vivo. Motion contrast from blood flow is processed using the speckle variance OCT (svOCT) technique, which relies on the acquisition of multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-time mouse and human retinal imaging using two different custom-built OCT systems with processing and display performed on GPU are presented with an in-depth analysis of performance metrics. The display output included structural OCT data, en face projections of the intensity data, and the svOCT en face projections of retinal microvasculature; these results compare projections with and without speckle variance in the different retinal layers to reveal significant contrast improvements. As a demonstration, videos of real-time svOCT for in vivo human and mouse retinal imaging are included in our results. The capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool in a clinical environment for monitoring disease-related pathological changes in the microcirculation such as diabetic retinopathy.
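Speckle variance contrast is computed per pixel as the variance of intensity across the N repeated B-scans acquired at the same location; static tissue keeps a stable speckle pattern while flowing blood decorrelates it. A minimal NumPy sketch of that computation is given below; in the system described above the same operation runs on the GPU, and the frame count is whatever the acquisition provides.

```python
import numpy as np

def speckle_variance(bscans):
    """bscans: array of shape (N, depth, width) holding N repeated B-scans
    from the same location. Returns the per-pixel intensity variance across
    the N frames, which is high where blood flow decorrelates the speckle."""
    bscans = np.asarray(bscans, dtype=np.float32)
    return bscans.var(axis=0)
```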
Optimization of oncological ¹⁸F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in ¹⁸F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in their analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
A design of camera simulator for photoelectric image acquisition system
NASA Astrophysics Data System (ADS)
Cai, Guanghui; Liu, Wen; Zhang, Xin
2015-02-01
During the development of photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved to NAND flash through a USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and a high-bandwidth bus technique are applied in the design to improve the storage rate. The FPGA control logic reads image data out of the flash and outputs them separately on three different interfaces, Camera Link, LVDS and PAL, which provides image data for debugging the photoelectric image acquisition equipment and validating its algorithms. However, because the standard PAL image resolution is 720*576, the PAL image resolution differs from that of the input image, so the image is output after a resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, which can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time and improve test efficiency.
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, the methods share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically-motivated reconstruction algorithms, however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more-advanced methods of image acquisition and reconstruction. The proposed frame theoretic framework for single-pixel imaging results in improved noise robustness, decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single element detector can be developed to realize the full potential associated with single-pixel imaging.
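A generic single-pixel measurement model is y_k = &lt;phi_k, x&gt;, where the phi_k are the structured illumination patterns and x is the vectorized scene. The sketch below, with random ±1 patterns and a pseudo-inverse (least-squares) reconstruction, is the baseline case that the frame-theoretic analysis above generalizes; pattern choice and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pix = 32 * 32                    # vectorized scene size (illustrative)
n_meas = n_pix                     # fully determined case for simplicity

# Each row of Phi is one structured-light pattern displayed on the scene.
Phi = rng.choice([-1.0, 1.0], size=(n_meas, n_pix))

x_true = rng.random(n_pix)         # stand-in for the unknown scene
y = Phi @ x_true                   # bucket-detector measurements, one per pattern

# Least-squares / pseudo-inverse reconstruction (the canonical dual-frame
# synthesis when the patterns form a frame for R^n).
x_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
print(np.allclose(x_hat, x_true, atol=1e-6))
```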
Automated system for acquisition and image processing for the control and monitoring of boned nopal
NASA Astrophysics Data System (ADS)
Luevano, E.; de Posada, E.; Arronte, M.; Ponce, L.; Flores, T.
2013-11-01
This paper describes the design and fabrication of an acquisition and image processing system to control the removal of thorns from the nopal vegetable (Opuntia ficus indica) in an automated machine that uses pulses from an Nd:YAG laser. The areolas, the areas where thorns grow on the bark of the nopal, are located by applying segmentation algorithms to the images obtained by a CCD. Once the positions of the areolas are known, their coordinates are sent to a motor system that steers the laser to interact with all areolas and remove the thorns from the nopal. The electronic system comprises a video decoder, memory for image and software storage, and a digital signal processor for system control. The firmware implements the tasks of acquisition, preprocessing, segmentation, recognition and interpretation of the areolas. The system succeeds in identifying the areolas and generating a table of their coordinates, which is then sent to the galvo motor system that controls the laser for thorn removal.
Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs).
McGarry, C K; Grattan, M W D; Cosgrove, V P
2007-12-07
This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than images being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min(-1) and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at d(max) in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min(-1) low-dose acquisition mode. Routine EPID QC contrast tolerance (+/-10) and action (+/-20) levels using the PIPSPro phantom based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D) have been introduced. The dose at dmax from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.
Loudos, George K; Papadimitroulas, Panagiotis G; Kagadis, George C
2014-01-01
Monte Carlo (MC) simulations play a crucial role in nuclear medical imaging since they can provide the ground truth for clinical acquisitions by integrating and quantifying all physical parameters that affect image quality. Over the last decade a number of realistic computational anthropomorphic models have been developed to serve imaging as well as other biomedical engineering applications. The combination of MC techniques with realistic computational phantoms can provide a powerful tool for pre- and post-processing in imaging, data analysis and dosimetry. This work aims to create a global database of simulated Single Photon Emission Computed Tomography (SPECT) and Positron Emission Tomography (PET) exams; the methodology as well as the first elements are presented. Simulations are performed using the well-validated GATE open-source toolkit, standard anthropomorphic phantoms and activity distributions of various radiopharmaceuticals derived from the literature. The resulting images, projections and sinograms of each study are provided in the database and can be further exploited to evaluate processing and reconstruction algorithms. Patient studies with different characteristics are included in the database, and different computational phantoms were tested for the same acquisitions. These include the XCAT, Zubal and Virtual Family phantoms, some of which are used for the first time in nuclear imaging. The database will be freely available, and our current work is directed towards extending it by simulating additional clinical pathologies.
Full-field wrist pulse signal acquisition and analysis by 3D Digital Image Correlation
NASA Astrophysics Data System (ADS)
Xue, Yuan; Su, Yong; Zhang, Chi; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan; Zhang, Qingchuan; Wu, Xiaoping
2017-11-01
Pulse diagnosis is an essential part of the four basic diagnostic methods (inspection, listening, inquiring and palpation) in traditional Chinese medicine, and it depends on long training and rich experience, so computerized pulse acquisition has been proposed and studied to ensure objectivity. To imitate the process in which doctors use three fingertips with different pressures to feel fluctuations in certain areas containing three acupoints, we established a five-dimensional pulse signal acquisition system adopting a non-contact optical metrology method, 3D digital image correlation, to record the full-field displacements of skin fluctuations under different pressures. The system achieves real-time full-field vibration mode observation at 10 FPS, and the maximum sampling frequency is 472 Hz for detailed post-processing. After acquisition, the signals are analyzed according to amplitude, pressure, and pulse wave velocity. The proposed system provides a novel optical approach for digitizing pulse diagnosis and for massive pulse signal data acquisition for various types of patients.
NASA Astrophysics Data System (ADS)
Rosu-Hamzescu, Mihnea; Polonschii, Cristina; Oprea, Sergiu; Popescu, Dragos; David, Sorin; Bratu, Dumitru; Gheorghiu, Eugen
2018-06-01
Electro-optical measurements, i.e., optical waveguides and plasmonic based electrochemical impedance spectroscopy (P-EIS), are based on the sensitive dependence of refractive index of electro-optical sensors on surface charge density, modulated by an AC electrical field applied to the sensor surface. Recently, P-EIS has emerged as a new analytical tool that can resolve local impedance with high, optical spatial resolution, without using microelectrodes. This study describes a high speed image acquisition and processing system for electro-optical measurements, based on a high speed complementary metal-oxide semiconductor (CMOS) sensor and a field-programmable gate array (FPGA) board. The FPGA is used to configure CMOS parameters, as well as to receive and locally process the acquired images by performing Fourier analysis for each pixel, deriving the real and imaginary parts of the Fourier coefficients for the AC field frequencies. An AC field generator, for single or multi-sine signals, is synchronized with the high speed acquisition system for phase measurements. The system was successfully used for real-time angle-resolved electro-plasmonic measurements from 30 Hz up to 10 kHz, providing results consistent to ones obtained by a conventional electrical impedance approach. The system was able to detect amplitude variations with a relative variation of ±1%, even for rather low sampling rates per period (i.e., 8 samples per period). The PC (personal computer) acquisition and control software allows synchronized acquisition for multiple FPGA boards, making it also suitable for simultaneous angle-resolved P-EIS imaging.
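Extracting the real and imaginary Fourier coefficients of each pixel's time series at the drive frequency is essentially per-pixel lock-in detection. The NumPy sketch below shows the equivalent of what the abstract attributes to the FPGA, for a single frequency; array shapes and names are illustrative assumptions.

```python
import numpy as np

def pixel_fourier_coeffs(frames, f_ac, fs):
    """frames: (T, H, W) stack of camera frames; f_ac: AC drive frequency in
    Hz; fs: frame rate in Hz. Returns a complex (H, W) array whose real and
    imaginary parts are the per-pixel Fourier coefficients at f_ac."""
    frames = np.asarray(frames, dtype=np.float32)
    t = np.arange(frames.shape[0]) / fs
    ref = np.exp(-2j * np.pi * f_ac * t)          # complex reference signal
    # Project every pixel's time series onto the reference (lock-in detection).
    return np.tensordot(ref, frames, axes=(0, 0)) * 2.0 / frames.shape[0]
```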
Tudela, Raúl; Muñoz-Moreno, Emma; López-Gil, Xavier; Soria, Guadalupe
2017-01-01
Diffusion-weighted imaging (DWI) quantifies water molecule diffusion within tissues and is becoming an increasingly used technique. However, it is very challenging as correct quantification depends on many different factors, ranging from acquisition parameters to a long pipeline of image processing. In this work, we investigated the influence of voxel geometry on diffusion analysis, comparing different acquisition orientations as well as isometric and anisometric voxels. Diffusion-weighted images of one rat brain were acquired with four different voxel geometries (one isometric and three anisometric in different directions) and three different encoding orientations (coronal, axial and sagittal). Diffusion tensor scalar measurements, tractography and the brain structural connectome were analyzed for each of the 12 acquisitions. The acquisition direction with respect to the main magnetic field orientation affected the diffusion results. When the acquisition slice-encoding direction was not aligned with the main magnetic field, there were more artifacts and a lower signal-to-noise ratio that led to less anisotropic tensors (lower fractional anisotropic values), producing poorer quality results. The use of anisometric voxels generated statistically significant differences in the values of diffusion metrics in specific regions. It also elicited differences in tract reconstruction and in different graph metric values describing the brain networks. Our results highlight the importance of taking into account the geometric aspects of acquisitions, especially when comparing diffusion data acquired using different geometries.
On-line 3-dimensional confocal imaging in vivo.
Li, J; Jester, J V; Cavanagh, H D; Black, T D; Petroll, W M
2000-09-01
In vivo confocal microscopy through focusing (CMTF) can provide a 3-D stack of high-resolution corneal images and allows objective measurements of corneal sublayer thickness and backscattering. However, current systems require time-consuming off-line image processing and analysis on multiple software platforms. Furthermore, there is a trade-off between CMTF speed and measurement precision. The purpose of this study was to develop a novel on-line system for in vivo corneal imaging and analysis that overcomes these limitations. A tandem scanning confocal microscope (TSCM) was used for corneal imaging. The TSCM video camera was interfaced directly to a PC image acquisition board to implement real-time digitization. Software was developed to allow in vivo 2-D imaging, CMTF image acquisition, interactive 3-D reconstruction, and analysis of CMTF data to be performed online in a single user-friendly environment. A procedure was also incorporated to separate the odd/even video fields, thereby doubling the CMTF sampling rate and theoretically improving the precision of CMTF thickness measurements by a factor of two. In vivo corneal examinations of a normal human and a photorefractive keratectomy patient are presented to demonstrate the capabilities of the new system. Improvements in the convenience, speed, and functionality of in vivo CMTF image acquisition, display, and analysis are demonstrated. This is the first full-featured software package designed for in vivo TSCM imaging of the cornea, which performs both 2-D and 3-D image acquisition, display, and processing as well as CMTF analysis. The use of a PC platform and the incorporation of easy-to-use, online, and interactive features should help to improve the clinical utility of this technology.
High efficient optical remote sensing images acquisition for nano-satellite-framework
NASA Astrophysics Data System (ADS)
Li, Feng; Xin, Lei; Liu, Yang; Fu, Jie; Liu, Yuhong; Guo, Yi
2017-09-01
It is more difficult and challenging to implement nano-satellite (NanoSat) based optical Earth observation missions than conventional satellite missions because of the limitations on volume, weight and power consumption. In general, an image compression unit is a necessary onboard module to save data transmission bandwidth and disk space, since it removes redundant information from the captured images. In this paper, a new image acquisition framework is proposed for NanoSat-based optical Earth observation applications. The entire image acquisition and compression process can be integrated into the photodetector array chip; that is, the output data of the chip are already compressed. An extra image compression unit is therefore no longer needed, so the power, volume, and weight consumed by common onboard image compression units can be largely saved. The advantages of the proposed framework are: image acquisition and image compression are combined into a single step; it can easily be built in a CMOS architecture; a quick view can be provided without reconstruction; and, for a given compression ratio, the reconstructed image quality is much better than that of compressive sensing (CS) based methods. The framework holds promise for wide use in the future.
An evaluation on CT image acquisition method for medical VR applications
NASA Astrophysics Data System (ADS)
Jang, Seong-wook; Ko, Junho; Yoo, Yon-sik; Kim, Yoonsang
2017-02-01
Recent medical virtual reality (VR) applications aimed at minimizing re-operations are being studied to improve surgical efficiency and reduce operation errors. A CT image acquisition method that takes three-dimensional (3D) modeling into account is important for medical VR applications, because a realistic model of the actual human organ is required. However, research on medical VR applications has focused on 3D modeling techniques and on the use of 3D models; research on a CT image acquisition method that considers 3D modeling has not been reported. The conventional CT image acquisition method involves scanning a limited area around the lesion once or twice for the physician's diagnosis. A medical VR application, in contrast, requires CT images that cover patients' various postures and a wider area than the lesion. A wider area than the lesion is required because of the need to compare bilateral sides for dyskinesia diagnosis of the shoulder, pelvis, and leg, and various postures are required because of their different effects on the musculoskeletal system. Therefore, in this paper, we perform a comparative experiment on acquired CT images considering the image area (unilateral/bilateral) and the patient's posture (neutral/abducted). CT images are acquired from 10 patients for the experiments, and the acquired CT images are evaluated based on the length per pixel and the morphological deviation. Finally, by comparing the experimental results, we evaluate the CT image acquisition method for medical VR applications.
Enhanced FIB-SEM systems for large-volume 3D imaging.
Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F
2017-05-13
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ µm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.
Concrete thawing studied by single-point ramped imaging.
Prado, P J; Balcom, B J; Beyea, S D; Armstrong, R L; Bremner, T W
1997-12-01
A series of two-dimensional images of the proton distribution in a hardened concrete sample has been obtained during the thawing process (from -50 degrees C up to 11 degrees C). The SPRITE sequence is optimal for this study given the characteristically short relaxation times of water in this porous medium (T2* < 200 µs and T1 < 3.6 ms). The relaxation parameters of the sample were determined in order to optimize the time efficiency of the sequence, permitting a 4-scan 64 x 64 acquisition in under 3 min. The image acquisition is fast on the time scale of the temperature evolution of the specimen. The frozen water distribution is quantified through a position-based study of the image contrast. A multiple-point acquisition method is presented and the signal sensitivity improvement is discussed.
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by the interposition of the soft-tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing and require less invasive equipment. The system consists of a single-transducer US probe that is optically tracked, a pulser/receiver, an FPGA-based board responsible for generating the logic control commands and for real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
Classifying magnetic resonance image modalities with convolutional neural networks
NASA Astrophysics Data System (ADS)
Remedios, Samuel; Pham, Dzung L.; Butman, John A.; Roy, Snehashis
2018-02-01
Magnetic Resonance (MR) imaging allows the acquisition of images with different contrast properties depending on the acquisition protocol and the magnetic properties of tissues. Many MR brain image processing techniques, such as tissue segmentation, require multiple MR contrasts as inputs, and each contrast is treated differently. Thus it is advantageous to automate the identification of image contrasts for various purposes, such as facilitating image processing pipelines, and managing and maintaining large databases via content-based image retrieval (CBIR). Most automated CBIR techniques focus on a two-step process: extracting features from data and classifying the image based on these features. We present a novel 3D deep convolutional neural network (CNN)- based method for MR image contrast classification. The proposed CNN automatically identifies the MR contrast of an input brain image volume. Specifically, we explored three classification problems: (1) identify T1-weighted (T1-w), T2-weighted (T2-w), and fluid-attenuated inversion recovery (FLAIR) contrasts, (2) identify pre vs postcontrast T1, (3) identify pre vs post-contrast FLAIR. A total of 3418 image volumes acquired from multiple sites and multiple scanners were used. To evaluate each task, the proposed model was trained on 2137 images and tested on the remaining 1281 images. Results showed that image volumes were correctly classified with 97.57% accuracy.
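A heavily simplified sketch of a 3D convolutional classifier for this task is shown below in PyTorch; the layer widths, input size and pooling scheme are illustrative assumptions and not the architecture evaluated in the paper.

```python
import torch
import torch.nn as nn

class ContrastCNN(nn.Module):
    """Toy 3D CNN: an MR volume in, a 3-way contrast label out
    (e.g. T1-w / T2-w / FLAIR). Sizes are illustrative only."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(16 * 8 * 8 * 8, n_classes)

    def forward(self, x):                  # x: (batch, 1, 32, 32, 32)
        return self.classifier(self.features(x).flatten(1))

logits = ContrastCNN()(torch.randn(2, 1, 32, 32, 32))   # shape (2, 3)
```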
A CMOS high speed imaging system design based on FPGA
NASA Astrophysics Data System (ADS)
Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui
2015-10-01
CMOS sensors have several advantages over traditional CCD sensors, and CMOS-based imaging systems have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of this system is the XC6SL75T, and we use the Camera Link interface and the AM41V4 CMOS image sensor to acquire and transmit image data. The AM41V4 is a 4-megapixel, high-speed, 500 frames-per-second CMOS image sensor with a global shutter and 4/3" optical format; the sensor uses column-parallel A/D converters to digitize the images. The Camera Link interface adopts the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The light reflected from objects is captured by the CMOS detector, which converts it to electronic signals and sends them to the FPGA. The FPGA processes the received data and transmits them, through the Camera Link interface configured in the full configuration, to a host computer equipped with acquisition cards, where the images are stored, visualized and processed. The paper explains the structure and principle of the system and introduces its hardware and software design. The FPGA provides the drive clock for the CMOS sensor; the CMOS data are converted to LVDS signals and then transmitted to the data acquisition cards. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system achieves real-time image acquisition and external control.
High efficiency multishot interleaved spiral-in/out: acquisition for high-resolution BOLD fMRI.
Jung, Youngkyoo; Samsonov, Alexey A; Liu, Thomas T; Buracas, Giedrius T
2013-08-01
Growing demand for high spatial resolution blood oxygenation level dependent (BOLD) functional magnetic resonance imaging faces a challenge of the spatial resolution versus coverage or temporal resolution tradeoff, which can be addressed by methods that afford increased acquisition efficiency. Spiral acquisition trajectories have been shown to be superior to currently prevalent echo-planar imaging in terms of acquisition efficiency, and high spatial resolution can be achieved by employing multiple-shot spiral acquisition. The interleaved spiral in/out trajectory is preferred over spiral-in due to increased BOLD signal contrast-to-noise ratio (CNR) and higher acquisition efficiency than that of spiral-out or noninterleaved spiral in/out trajectories (Law & Glover. Magn Reson Med 2009; 62:829-834.), but to date applicability of the multishot interleaved spiral in/out for high spatial resolution imaging has not been studied. Herein we propose multishot interleaved spiral in/out acquisition and investigate its applicability for high spatial resolution BOLD functional magnetic resonance imaging. Images reconstructed from interleaved spiral-in and -out trajectories possess artifacts caused by differences in T2 decay, off-resonance, and k-space errors associated with the two trajectories. We analyze the associated errors and demonstrate that application of conjugate phase reconstruction and spectral filtering can substantially mitigate these image artifacts. After applying these processing steps, the multishot interleaved spiral in/out pulse sequence yields high BOLD CNR images at in-plane resolution below 1 × 1 mm while preserving acceptable temporal resolution (4 s) and brain coverage (15 slices of 2 mm thickness). Moreover, this method yields sufficient BOLD CNR at 1.5 mm isotropic resolution for detection of activation in hippocampus associated with cognitive tasks (Stern memory task). The multishot interleaved spiral in/out acquisition is a promising technique for high spatial resolution BOLD functional magnetic resonance imaging applications. © 2012 Wiley Periodicals, Inc.
NIR hyperspectral compressive imager based on a modified Fabry–Perot resonator
NASA Astrophysics Data System (ADS)
Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Stern, Adrian
2018-04-01
The acquisition of hyperspectral (HS) image datacubes with available 2D sensor arrays involves a time consuming scanning process. In the last decade, several compressive sensing (CS) techniques were proposed to reduce the HS acquisition time. In this paper, we present a method for near-infrared (NIR) HS imaging which relies on our rapid CS resonator spectroscopy technique. Within the framework of CS, and by using a modified Fabry–Perot resonator, a sequence of spectrally modulated images is used to recover NIR HS datacubes. Owing to the innovative CS design, we demonstrate the ability to reconstruct NIR HS images with hundreds of spectral bands from an order of magnitude fewer measurements, i.e. with a compression ratio of about 10:1. This high compression ratio, together with the high optical throughput of the system, facilitates fast acquisition of large HS datacubes.
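The abstract does not specify the recovery algorithm; as a generic illustration of compressive recovery at roughly the stated 10:1 ratio, the sketch below uses iterative soft thresholding (ISTA) to recover a sparse spectrum from far fewer measurements than spectral bands. The sensing matrix, sparsity level and parameters are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # shrinkage
    return x

rng = np.random.default_rng(1)
n_bands, n_meas = 200, 20                  # roughly the 10:1 compression ratio
A = rng.standard_normal((n_meas, n_bands)) / np.sqrt(n_meas)
x_true = np.zeros(n_bands)
x_true[rng.choice(n_bands, 5, replace=False)] = 1.0   # sparse spectrum
x_rec = ista(A, A @ x_true)                # recover from 20 measurements
```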
Design and implementation of non-linear image processing functions for CMOS image sensor
NASA Astrophysics Data System (ADS)
Musa, Purnawarman; Sudiro, Sunny A.; Wibowo, Eri P.; Harmanto, Suryadi; Paindavoine, Michel
2012-11-01
Today, solid-state image sensors are used in many applications such as mobile phones, video surveillance systems, embedded medical imaging and industrial vision systems. These image sensors require the integration of complex image processing algorithms in (or near) the focal plane. Such devices must meet constraints related to the quality of acquired images, the speed and performance of embedded processing, and low power consumption. To achieve these objectives, low-level analog processing allows the useful information in the scene to be extracted directly. For example, an edge detection step followed by local maxima extraction facilitates high-level processing such as object pattern recognition in a visual scene. Our goal was to design an intelligent image sensor prototype achieving high-speed image acquisition and non-linear image processing (such as local minima and maxima calculations). For this purpose, we present in this article the design and test of a 64×64 pixel image sensor built in a standard 0.35 µm CMOS technology and including non-linear image processing. The architecture of our sensor, named nLiRIC (non-Linear Rapid Image Capture), is based on the implementation of an analog Minima/Maxima Unit (MMU). This MMU calculates the minimum and maximum values (non-linear functions), in real time, in a 2×2 pixel neighbourhood. Each MMU needs 52 transistors, the pixel pitch is 40×40 µm, and the total area of the 64×64 pixel array is 12.5 mm². Our tests have shown the validity of the main functions of our new image sensor, such as fast image acquisition (10K frames per second) and minima/maxima calculation in less than 1 ms.
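The analog Minima/Maxima Unit computes the minimum and maximum over a 2×2 pixel neighbourhood; the NumPy sketch below reproduces that non-linear operation digitally so the reader can see what the in-pixel circuit outputs. Treating the neighbourhoods as non-overlapping 2×2 blocks is an assumption made for brevity.

```python
import numpy as np

def minmax_2x2(img):
    """Digital equivalent of the analog MMU: the minimum and maximum of every
    2x2 neighbourhood (taken here as non-overlapping blocks)."""
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.min(axis=(1, 3)), blocks.max(axis=(1, 3))
```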
UAV Photogrammetric Workflows: A Best Practice Guideline
NASA Astrophysics Data System (ADS)
Federman, A.; Santana Quintero, M.; Kretz, S.; Gregg, J.; Lengies, M.; Ouimet, C.; Laliberte, J.
2017-08-01
The increasing commercialization of unmanned aerial vehicles (UAVs) has opened the possibility of performing low-cost aerial image acquisition for the documentation of cultural heritage sites through UAV photogrammetry. The flying of UAVs in Canada is regulated by Transport Canada and requires a Special Flight Operations Certificate (SFOC). Various image acquisition techniques are explored in this review, as well as the software used to register the data. A general workflow procedure has been formulated based on the literature reviewed. A case study of using UAV photogrammetry at Prince of Wales Fort is discussed, specifically in relation to data acquisition and processing. Some gaps in the literature reviewed highlight the need for streamlining the SFOC application process and for incorporating UAVs into cultural heritage documentation courses.
Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects.
Amaro, Edson; Williams, Steve C R; Shergill, Sukhi S; Fu, Cynthia H Y; MacSweeney, Mairead; Picchioni, Marco M; Brammer, Michael J; McGuire, Philip K
2002-11-01
Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during the image acquisition process, which is a problem for studies of the auditory pathway and of language generally. The scanner acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reduction of the acoustic noise at its source would be ideal, it would require substantial hardware modifications to the current base of installed MRI systems. Therefore, the most common strategy employed to minimize the problem involves software modifications. In this work we consider three main types of acquisition: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data, using a silent event-related (SER) design, which demonstrate a higher blood oxygen level-dependent (BOLD) response to a simple auditory cue when compared to a conventional image acquisition. Copyright 2002 Wiley-Liss, Inc.
NASA Astrophysics Data System (ADS)
Iqbal, Asim; Farooq, Umar; Mahmood, Hassan; Asad, Muhammad Usman; Khan, Akrama; Atiq, Hafiz Muhammad
2010-02-01
A self-teaching system based on image processing and voice recognition is developed to educate visually impaired children, chiefly in their primary education. The system comprises a computer, a vision camera, an ear speaker and a microphone. The camera, attached to the computer, is mounted on the ceiling at the required angle opposite the desk on which the book is placed. Sample images and voices, in the form of instructions and commands for English and Urdu alphabets, numeric digits, operators and shapes, are already stored in a database. A blind child first reads the embossed character (object) with the fingers and then speaks the answer, the name of the character, shape, etc., into the microphone. When the microphone receives the child's voice command, an image is taken by the camera and processed by a MATLAB® program developed with the Image Acquisition and Image Processing toolboxes, which generates a response or the required set of instructions to the child via the ear speaker, resulting in the self-education of a visually impaired child. A speech recognition program is also developed in MATLAB® with the Data Acquisition and Signal Processing toolboxes, which records and processes the command of the blind child.
NASA Astrophysics Data System (ADS)
Kwee, Edward; Peterson, Alexander; Stinson, Jeffrey; Halter, Michael; Yu, Liya; Majurski, Michael; Chalfoun, Joe; Bajcsy, Peter; Elliott, John
2018-02-01
Induced pluripotent stem cells (iPSCs) are reprogrammed cells that can have heterogeneous biological potential. Quality assurance metrics of reprogrammed iPSCs will be critical to ensure reliable use in cell therapies and personalized diagnostic tests. We present a quantitative phase imaging (QPI) workflow which includes acquiring, processing, and stitching multiple adjacent image tiles across a large field of view (LFOV) of a culture vessel. Low-magnification image tiles (10x) were acquired with a Phasics SID4BIO camera on a Zeiss microscope. iPSC cultures were maintained using a custom stage incubator on an automated stage. We implement an image acquisition strategy that compensates for non-flat illumination wavefronts to enable imaging of an entire well plate, including the meniscus region normally obscured in Zernike phase contrast imaging. Polynomial fitting and background mode correction were implemented to enable comparability and stitching between multiple tiles. LFOV imaging of reference materials indicated that the image acquisition and processing strategies did not affect quantitative phase measurements across the LFOV. Analysis of iPSC colony images demonstrated that mass doubling time was significantly different from area doubling time. These measurements were benchmarked with prototype microsphere beads and etched-glass gratings with specified spatial dimensions, designed to be QPI reference materials with optical pathlength shifts suitable for cell microscopy. This QPI workflow and the use of reference materials can provide a non-destructive, traceable imaging method for novel iPSC heterogeneity characterization.
Image reconstruction by domain-transform manifold learning.
Zhu, Bo; Liu, Jeremiah Z; Cauley, Stephen F; Rosen, Bruce R; Rosen, Matthew S
2018-03-21
Image reconstruction is essential for imaging applications across the physical and life sciences, including optical and radar systems, magnetic resonance imaging, X-ray computed tomography, positron emission tomography, ultrasound imaging and radio astronomy. During image acquisition, the sensor encodes an intermediate representation of an object in the sensor domain, which is subsequently reconstructed into an image by an inversion of the encoding function. Image reconstruction is challenging because analytic knowledge of the exact inverse transform may not exist a priori, especially in the presence of sensor non-idealities and noise. Thus, the standard reconstruction approach involves approximating the inverse function with multiple ad hoc stages in a signal processing chain, the composition of which depends on the details of each acquisition strategy, and often requires expert parameter tuning to optimize reconstruction performance. Here we present a unified framework for image reconstruction-automated transform by manifold approximation (AUTOMAP)-which recasts image reconstruction as a data-driven supervised learning task that allows a mapping between the sensor and the image domain to emerge from an appropriate corpus of training data. We implement AUTOMAP with a deep neural network and exhibit its flexibility in learning reconstruction transforms for various magnetic resonance imaging acquisition strategies, using the same network architecture and hyperparameters. We further demonstrate that manifold learning during training results in sparse representations of domain transforms along low-dimensional data manifolds, and observe superior immunity to noise and a reduction in reconstruction artefacts compared with conventional handcrafted reconstruction methods. In addition to improving the reconstruction performance of existing acquisition methodologies, we anticipate that AUTOMAP and other learned reconstruction approaches will accelerate the development of new acquisition strategies across imaging modalities.
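As a rough illustration of the domain-transform idea, the following PyTorch sketch builds an AUTOMAP-style network in which fully connected layers map flattened sensor-domain data to the image domain and convolutional layers refine the result; the layer sizes and activations are illustrative and do not reproduce the published architecture or its training.

```python
"""A small PyTorch sketch of an AUTOMAP-style domain-transform network:
fully connected layers map flattened sensor-domain data (e.g. real and
imaginary k-space) to the image domain, followed by convolutional refinement."""
import torch
import torch.nn as nn

class AutomapLike(nn.Module):
    def __init__(self, n=64):                      # n x n image
        super().__init__()
        d = n * n
        self.fc = nn.Sequential(
            nn.Linear(2 * d, d), nn.Tanh(),        # 2*d = real + imaginary parts
            nn.Linear(d, d), nn.Tanh(),
        )
        self.conv = nn.Sequential(
            nn.Conv2d(1, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 1, 7, padding=3),
        )
        self.n = n

    def forward(self, sensor_flat):                # (batch, 2*n*n)
        x = self.fc(sensor_flat).view(-1, 1, self.n, self.n)
        return self.conv(x)                        # reconstructed image

net = AutomapLike()
recon = net(torch.randn(4, 2 * 64 * 64))           # random stand-in sensor data
```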
Tu, Li-ping; Chen, Jing-bo; Hu, Xiao-juan; Zhang, Zhi-feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods to tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied. However, the outcomes are difficult to generalize because of a lack of color reproducibility and image standardization. Our study aims to explore tongue color classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts were chosen to identify the selected tongue pictures, taken by the TDA-1 tongue imaging device in TIFF format and corrected using an ICC profile. We then compare the mean L*a*b* values of the different tongue colors and evaluate the performance of tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. The random forest method performs better than SVM in classification. The SMOTE algorithm can increase classification accuracy by addressing the imbalance among the color samples. Conclusions. Given standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible. PMID:28050555
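A minimal sketch of the color-feature pipeline implied by this abstract is shown below: mean CIE L*a*b* values are extracted from (already color-corrected) tongue images and fed to a random forest. Image loading, segmentation masks and the ICC and SMOTE steps are assumed to happen elsewhere.

```python
"""Sketch of mean-L*a*b* feature extraction plus random forest classification;
the demo images and labels are synthetic stand-ins."""
import numpy as np
from skimage.color import rgb2lab
from sklearn.ensemble import RandomForestClassifier

def lab_features(rgb_image, mask=None):
    """Mean L*, a*, b* over the tongue region (mask is a boolean array)."""
    lab = rgb2lab(rgb_image)
    if mask is None:
        mask = np.ones(rgb_image.shape[:2], dtype=bool)
    return lab[mask].mean(axis=0)                  # (L*, a*, b*)

def train_classifier(images, labels):
    X = np.array([lab_features(im) for im in images])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf

rng = np.random.default_rng(0)
demo_images = [rng.random((64, 64, 3)) for _ in range(10)]   # stand-in RGB tiles
demo_labels = ["pale", "red"] * 5                            # stand-in expert labels
clf = train_classifier(demo_images, demo_labels)
```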
Compressive Sensing Image Sensors-Hardware Implementation
Dadkhah, Mohammadreza; Deen, M. Jamal; Shirani, Shahram
2013-01-01
The compressive sensing (CS) paradigm uses simultaneous sensing and compression to provide an efficient image acquisition technique. The main advantages of the CS method include high resolution imaging using low resolution sensor arrays and faster image acquisition. Since the imaging philosophy in CS imagers is different from conventional imaging systems, new physical structures have been developed for cameras that use the CS technique. In this paper, a review of different hardware implementations of CS encoding in optical and electrical domains is presented. Considering the recent advances in CMOS (complementary metal–oxide–semiconductor) technologies and the feasibility of performing on-chip signal processing, important practical issues in the implementation of CS in CMOS sensors are emphasized. In addition, the CS coding for video capture is discussed. PMID:23584123
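The measurement-and-recovery principle underlying CS imagers can be illustrated with a toy numpy example: a sparse signal is sampled with a random measurement matrix and recovered with a simple iterative soft-thresholding (ISTA) loop. This is a conceptual sketch only, not any of the hardware encoders reviewed in the paper.

```python
"""Toy compressive sensing example: m random measurements y = Phi @ x of a
k-sparse signal x, recovered by iterative soft-thresholding (ISTA)."""
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                          # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
y = Phi @ x                                   # compressed measurements

# ISTA: x <- soft(x + t * Phi^T (y - Phi x), t * lam)
t = 1.0 / np.linalg.norm(Phi, 2) ** 2         # step size from the spectral norm
lam = 0.01
xr = np.zeros(n)
for _ in range(500):
    g = xr + t * Phi.T @ (y - Phi @ xr)
    xr = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)

print("relative error:", np.linalg.norm(xr - x) / np.linalg.norm(x))
```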
MRI Superresolution Using Self-Similarity and Image Priors
Manjón, José V.; Coupé, Pierrick; Buades, Antonio; Collins, D. Louis; Robles, Montserrat
2010-01-01
In typical clinical settings of Magnetic Resonance Imaging, both low- and high-resolution images of different types are routinely acquired. In some cases, the acquired low-resolution images have to be upsampled to match other high-resolution images for subsequent analysis or postprocessing such as registration or multimodal segmentation. However, classical interpolation techniques are not able to recover the high-frequency information lost during the acquisition process. In the present paper, a new superresolution method is proposed to reconstruct high-resolution images from low-resolution ones using information from coplanar high-resolution images acquired from the same subject. Furthermore, the reconstruction process is constrained to be physically consistent with the MR acquisition model, which allows a meaningful interpretation of the results. Experiments on synthetic and real data are presented to show the effectiveness of the proposed approach. A comparison with classical state-of-the-art interpolation techniques is presented to demonstrate the improved performance of the proposed methodology. PMID:21197094
Zhang, Hao; Zeng, Dong; Zhang, Hua; Wang, Jing; Liang, Zhengrong
2017-01-01
Low-dose X-ray computed tomography (LDCT) imaging is highly recommended for use in the clinic because of growing concerns over excessive radiation exposure. However, the CT images reconstructed by the conventional filtered back-projection (FBP) method from low-dose acquisitions may be severely degraded with noise and streak artifacts due to excessive X-ray quantum noise, or with view-aliasing artifacts due to insufficient angular sampling. In 2005, the nonlocal means (NLM) algorithm was introduced as a non-iterative edge-preserving filter to denoise natural images corrupted by additive Gaussian noise, and showed superior performance. It has since been adapted and applied to many other image types and various inverse problems. This paper specifically reviews the applications of the NLM algorithm in LDCT image processing and reconstruction, and explicitly demonstrates its improving effects on the reconstructed CT image quality from low-dose acquisitions. The effectiveness of these applications on LDCT and their relative performance are described in detail. PMID:28303644
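For reference, the baseline NLM filter is available in scikit-image; the short sketch below denoises a Gaussian-noise-corrupted test image, whereas the reviewed LDCT work adapts the same idea to CT-specific noise models and iterative reconstruction, which is not shown here.

```python
"""Baseline non-local means denoising of a noisy 2-D test image (skimage)."""
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, estimate_sigma

image = img_as_float(data.camera())                      # stand-in image
noisy = image + np.random.default_rng(0).normal(scale=0.08, size=image.shape)

sigma = float(np.mean(estimate_sigma(noisy)))            # noise level estimate
denoised = denoise_nl_means(noisy, h=1.15 * sigma, sigma=sigma,
                            patch_size=5, patch_distance=6, fast_mode=True)
```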
Electrophoresis gel image processing and analysis using the KODAK 1D software.
Pizzonia, J
2001-06-01
The present article reports on the performance of the KODAK 1D Image Analysis Software for the acquisition of information from electrophoresis experiments and highlights the utility of several mathematical functions for subsequent image processing, analysis, and presentation. Digital images of Coomassie-stained polyacrylamide protein gels containing molecular weight standards and ethidium bromide stained agarose gels containing DNA mass standards are acquired using the KODAK Electrophoresis Documentation and Analysis System 290 (EDAS 290). The KODAK 1D software is used to optimize lane and band identification using features such as isomolecular weight lines. Mathematical functions for mass standard representation are presented, and two methods for estimation of unknown band mass are compared. Given the progressive transition of electrophoresis data acquisition and daily reporting in peer-reviewed journals to digital formats ranging from 8-bit systems such as EDAS 290 to more expensive 16-bit systems, the utility of algorithms such as Gaussian modeling, which can correct geometric aberrations such as clipping due to signal saturation common at lower bit depth levels, is discussed. Finally, image-processing tools that can facilitate image preparation for presentation are demonstrated.
NASA Astrophysics Data System (ADS)
Wu, Kaihua; Shao, Zhencheng; Chen, Nian; Wang, Wenjie
2018-01-01
The wear of the wheel set tread is one of the main factors that influence the safety and stability of a running train. The geometrical parameters of interest mainly include flange thickness and flange height. A line-structured laser was projected onto the wheel tread surface, and the geometrical parameters can be deduced from the profile image. An online image acquisition system was designed based on asynchronous reset of the CCD and a CUDA parallel processing unit; image acquisition is performed in a hardware-interrupt mode. A high-efficiency parallel segmentation algorithm based on CUDA is proposed. The algorithm first divides the image into smaller squares and then extracts the squares belonging to the target by fusing the k-means and STING clustering segmentation algorithms. The segmentation time is less than 0.97 ms, a considerable acceleration compared with serial CPU computation, which greatly improves the real-time image processing capacity. When a wheel set passes at limited speed, the system, placed alongside the railway line, can measure the geometrical parameters automatically. The maximum measuring speed is 120 km/h.
Reduced exposure using asymmetric cone beam processing for wide area detector cardiac CT
Bedayat, Arash; Kumamaru, Kanako; Powers, Sara L.; Signorelli, Jason; Steigner, Michael L.; Steveson, Chloe; Soga, Shigeyoshi; Adams, Kimberly; Mitsouras, Dimitrios; Clouse, Melvin; Mather, Richard T.
2011-01-01
The purpose of this study was to estimate dose reduction after implementation of asymmetrical cone beam processing, using exposure differences measured in a water phantom and a small cohort of clinical coronary CTA patients. Two separate 320 × 0.5 mm detector row scans of a water phantom used identical cardiac acquisition parameters before and after software modifications from symmetric to asymmetric cone beam acquisition and processing. Exposure was measured at the phantom surface with Optically Stimulated Luminescence (OSL) dosimeters at 12 equally spaced angular locations. To assess image quality, mean HU and standard deviation (SD) for both approaches were compared using ROI measurements obtained at the center plus four peripheral locations in the water phantom. Retrospective evaluation of 64 patients (37 symmetric; 27 asymmetric acquisition) included clinical data, scanning parameters, quantitative and qualitative image assessment, and estimated radiation dose. In the water phantom, asymmetric cone beam processing reduced exposure by approximately 20% with no change in image quality. The clinical coronary CTA patient groups had comparable demographics. The estimated dose reduction after implementation of the asymmetric approach was roughly 24%, with no significant difference between the symmetric and asymmetric approaches with respect to objective measures of image quality or subjective assessment using a four-point scale. When compared to a symmetric approach, the decreased exposure, subsequent lower patient radiation dose, and similar image quality from asymmetric cone beam processing support its routine clinical use. PMID:21336552
Advanced technology development for image gathering, coding, and processing
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1990-01-01
Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-10-20
Radiographic Image Acquisition & Processing Software for Security Markets. Used in operation of commercial x-ray scanners and manipulation of x-ray images for emergency responders including State, Local, Federal, and US Military bomb technicians and analysts.
Rohlfing, Torsten; Schaupp, Frank; Haddad, Daniel; Brandt, Robert; Haase, Axel; Menzel, Randolf; Maurer, Calvin R
2005-01-01
Confocal microscopy (CM) is a powerful image acquisition technique that is well established in many biological applications. It provides 3-D acquisition with high spatial resolution and can acquire several different channels of complementary image information. Due to the specimen extraction and preparation process, however, the shapes of imaged objects may differ considerably from their in vivo appearance. Magnetic resonance microscopy (MRM) is an evolving variant of magnetic resonance imaging which achieves microscopic resolutions using a high magnetic field and strong magnetic gradients. Compared to CM imaging, MRM allows for in situ imaging and is virtually free of geometrical distortions. We propose to combine the advantages of both methods by unwarping CM images using an MRM reference image. Our method incorporates a sequence of image processing operators applied to the MRM image, followed by a two-stage intensity-based registration to compute a nonrigid coordinate transformation between the CM images and the MRM image. We present results obtained using CM images from the brains of 20 honey bees and an MRM image of an in situ bee brain. Copyright 2005 Society of Photo-Optical Instrumentation Engineers.
Real-Time Intravascular Ultrasound and Photoacoustic Imaging
VanderLaan, Donald; Karpiouk, Andrei; Yeager, Doug; Emelianov, Stanislav
2018-01-01
Combined intravascular ultrasound and photoacoustic imaging (IVUS/IVPA) is an emerging hybrid modality being explored as a means of improving the characterization of atherosclerotic plaque anatomical and compositional features. While initial demonstrations of the technique have been encouraging, they have been limited by catheter rotation and by data acquisition, display and processing rates on the order of several seconds per frame, as well as by the use of off-line image processing. Herein, we present a complete IVUS/IVPA imaging system and method capable of real-time IVUS/IVPA imaging, with online data acquisition, image processing and display of both IVUS and IVPA images. The integrated IVUS/IVPA catheter is fully contained within a 1 mm outer diameter torque cable coupled on the proximal end to a custom-designed spindle enabling optical and electrical coupling to the system hardware, including a nanosecond-pulsed laser with a controllable pulse repetition frequency greater than 10 kHz, a motor and servo drive, an ultrasound pulser/receiver, and a 200 MHz digitizer. The system performance is characterized and demonstrated on a vessel-mimicking phantom with an embedded coronary stent intended to provide IVPA contrast within the context of an IVUS image. PMID:28092507
2011-07-01
radar [e.g., synthetic aperture radar (SAR)]. EO/IR includes multi- and hyperspectral imaging. Signal processing of data from nonimaging sensors, such ... enhanced recognition ability. Other non-image-based techniques, such as category theory, hierarchical systems, and gradient index flow, are possible ... the battlefield. There is a plethora of imaging and nonimaging sensors on the battlefield that are being networked together for transmission of
Information Acquisition, Analysis and Integration
2016-08-03
... sensing and processing, theory, applications, signal processing, image and video processing, machine learning, technology transfer ... Elegantly solved old problems such as image and video deblurring, introducing revolutionary new approaches ... Polatkan, G. Sapiro, D. Blei, D. B. Dunson, and L. Carin, "Deep learning with hierarchical convolution factor analysis," IEEE
Hardware Timestamping for an Image Acquisition System Based on FlexRIO and IEEE 1588 v2 Standard
NASA Astrophysics Data System (ADS)
Esquembri, S.; Sanz, D.; Barrera, E.; Ruiz, M.; Bustos, A.; Vega, J.; Castro, R.
2016-02-01
Current fusion devices usually implement distributed acquisition systems for the multiple diagnostics of their experiments. However, each diagnostic is composed of hundreds or even thousands of signals, including images from the vessel interior. These signals and images must be correctly timestamped, because all the information will be analyzed to identify plasma behavior using temporal correlations. For acquisition devices without synchronization mechanisms, the timestamp is provided by another device with timing capabilities when signaled by the acquiring device. Each data sample must then be associated with its timestamp, usually in software; this critical step becomes unfeasible for software applications when sampling rates are high. To solve this problem, this paper presents the implementation of an image acquisition system with a real-time hardware timestamping mechanism, synchronized with a master clock using the IEEE 1588 v2 Precision Time Protocol (PTP). The synchronization, image acquisition and processing, and timestamping mechanisms are implemented using a Field Programmable Gate Array (FPGA) and a PTP v2-synchronized timing card. The system has been validated using a camera simulator streaming videos from fusion databases. The developed architecture is fully compatible with ITER Fast Controllers and has been integrated with EPICS to control and monitor the whole system.
Image acquisition device of inspection robot based on adaptive rotation regulation of polarizer
NASA Astrophysics Data System (ADS)
Dong, Maoqi; Wang, Xingguang; Liang, Tao; Yang, Guoqing; Zhang, Chuangyou; Gao, Faqin
2017-12-01
An image acquisition device for an inspection robot with adaptive polarization adjustment is proposed. The device includes the inspection robot body, the image acquisition mechanism, the polarizer and an automatic polarizer actuating device. The image acquisition mechanism is mounted at the front of the inspection robot body to collect images of equipment in the substation. The polarizer is fixed on the automatic actuating device and installed in front of the image acquisition mechanism, so that the optical axis of the camera passes perpendicularly through the polarizer and the polarizer rotates about the optical axis of the visible camera as its central axis. Simulation results show that the system resolves image blurring of the equipment caused by glare, reflections and shadows, so the robot can observe details of the operating status of electrical equipment. Full coverage of the observation targets by the substation inspection robot is achieved, which helps ensure the safe operation of the substation equipment.
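One plausible reading of the adaptive rotation strategy is a search over polarizer angles that minimizes glare in the acquired image; the Python sketch below illustrates that idea with OpenCV, using a hypothetical rotate_polarizer() actuator interface and a simple saturated-pixel glare score. It is not the authors' control algorithm.

```python
"""Sketch of glare-minimizing polarizer selection (assumed strategy)."""
import cv2
import numpy as np

def rotate_polarizer(angle_deg):
    # Placeholder for the actuator command on the real robot.
    pass

def glare_score(gray):
    return float(np.mean(gray > 250))       # fraction of blown-out pixels

camera = cv2.VideoCapture(0)
best_angle, best_score = None, np.inf
for angle in range(0, 180, 10):             # polarization is 180-degree periodic
    rotate_polarizer(angle)
    ok, frame = camera.read()
    if not ok:
        continue
    score = glare_score(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    if score < best_score:
        best_angle, best_score = angle, score
if best_angle is not None:
    rotate_polarizer(best_angle)            # keep the least-glare orientation
```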
UWGSP7: a real-time optical imaging workstation
NASA Astrophysics Data System (ADS)
Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.
1995-04-01
With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illumination of a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into the subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images. To accommodate different input devices, the camera interface circuitry is designed in a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image-processing board using a direct-connect port which provides a 66 Mbytes/s channel independent of the system bus. The image processing board utilizes the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second, providing the UWGSP7 with the capability to perform real-time image processing functions such as median filtering, convolution and contrast enhancement. This processing power allows interactive analysis of experiments, as compared to the current practice of off-line processing and analysis. Due to its flexibility and programmability, the UWGSP7 can be adapted to various research needs in intraoperative optical imaging.
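The control-subtract processing chain described above (frame averaging, subtraction of the pre-injection control image, contrast stretching and pseudo-color display) can be sketched in a few lines of Python; the camera input is replaced by synthetic frames and the color map choice is arbitrary.

```python
"""Sketch of averaging, control subtraction and pseudo-color display."""
import numpy as np
import cv2

def average_frames(frames):
    return np.mean(np.stack(frames).astype(np.float32), axis=0)

def pseudo_color(diff):
    lo, hi = np.percentile(diff, [1, 99])               # robust contrast stretch
    scaled = np.clip((diff - lo) / (hi - lo + 1e-9), 0, 1)
    return cv2.applyColorMap((scaled * 255).astype(np.uint8), cv2.COLORMAP_JET)

rng = np.random.default_rng(0)
control = average_frames([rng.normal(100, 5, (256, 256)) for _ in range(16)])
data_img = average_frames([rng.normal(100, 5, (256, 256)) for _ in range(16)])
overlay = pseudo_color(data_img - control)               # change in reflectance
```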
Technical aspects of CT imaging of the spine.
Tins, Bernhard
2010-11-01
This review article discusses technical aspects of computed tomography (CT) imaging of the spine. Patient positioning, and its influence on image quality and movement artefact, is discussed. Particular emphasis is placed on the choice of scan parameters and their relation to image quality and the radiation burden to the patient. Strategies to reduce the radiation burden and artefact from metal implants are outlined. Data acquisition, processing, image display and steps to reduce artefact are reviewed. CT imaging of the spine is put into context with other imaging modalities for specific clinical indications or problems. The article aims to explain the underlying principles of image acquisition and to provide a rough guide for clinical problems without being prescriptive. Individual practice will always vary and reflect differences in local experience, technical provisions and clinical requirements.
NASA Astrophysics Data System (ADS)
Benalcazar, Wladimir A.; Jiang, Zhi; Marks, Daniel L.; Geddes, Joseph B.; Boppart, Stephen A.
2009-02-01
We validate a molecular imaging technique called Nonlinear Interferometric Vibrational Imaging (NIVI) by comparing vibrational spectra with those acquired by Raman microscopy. This broadband coherent anti-Stokes Raman scattering (CARS) technique uses heterodyne detection and OCT acquisition and design principles to interfere a CARS signal generated by a sample with a local oscillator signal generated separately by a four-wave mixing process. These are mixed and demodulated by spectral interferometry. The confocal configuration allows the acquisition of 3D images based on endogenous molecular signatures. Images of both phantoms and mammary tissue have been acquired with this instrument, and the measured spectra are compared with spontaneous Raman signatures.
Research of aerial imaging spectrometer data acquisition technology based on USB 3.0
NASA Astrophysics Data System (ADS)
Huang, Junze; Wang, Yueming; He, Daogang; Yu, Yanan
2016-11-01
With the emergence of UAV (unmanned aerial vehicle) platforms for aerial imaging spectrometers, research on the aerial imaging spectrometer DAS (data acquisition system) faces new challenges. Owing to the limitations of the platform and other factors, the DAS must be compact, lightweight, low-cost and universal. Traditional PCIe-based DAS systems are expensive, bulky and non-universal, do not support plug-and-play, and therefore cannot meet the needs of widespread application of aerial imaging spectrometers. To solve these problems, the new data acquisition scheme is based on a USB 3.0 interface, whose technology enables a compact, lightweight, low-cost and universal design. The theoretical USB 3.0 transfer rate is up to 5 Gbps, and the GPIF programming interface achieves an effective theoretical data bandwidth of 3.2 Gbps, which fully meets the data transmission rate required by the aerial imaging spectrometer. The scheme uses the slave-FIFO asynchronous data transfer mode between the FPGA and the USB3014 interface chip. First, the system collects spectral data through the TLK2711 high-speed serial interface chip. The FPGA then buffers the data in DDR2 memory after ping-pong data processing. Finally, the USB3014 interface chip transfers the data via an automatic DMA approach and uploads it to the PC over a USB 3.0 cable. During manufacture of the aerial imaging spectrometer, the DAS performs image acquisition, transmission, storage and display, providing the test and verification functions needed for the instrument. Tests show that the system runs stably with no data loss; the average transfer speed and the speed of writing to SSD storage are stable at 1.28 Gbps. Consequently, this data acquisition system meets the application requirements of aerial imaging spectrometers.
High Efficiency Multi-shot Interleaved Spiral-In/Out Acquisition for High Resolution BOLD fMRI
Jung, Youngkyoo; Samsonov, Alexey A.; Liu, Thomas T.; Buracas, Giedrius T.
2012-01-01
Growing demand for high spatial resolution BOLD functional MRI faces a challenge of the spatial resolution vs. coverage or temporal resolution tradeoff, which can be addressed by methods that afford increased acquisition efficiency. Spiral acquisition trajectories have been shown to be superior to currently prevalent echo-planar imaging in terms of acquisition efficiency, and high spatial resolution can be achieved by employing multiple-shot spiral acquisition. The interleaved spiral in-out trajectory is preferred over spiral-in due to increased BOLD signal CNR and higher acquisition efficiency than that of spiral-out or non-interleaved spiral in/out trajectories (1), but to date applicability of the multi-shot interleaved spiral in-out for high spatial resolution imaging has not been studied. Herein we propose multi-shot interleaved spiral in-out acquisition and investigate its applicability for high spatial resolution BOLD fMRI. Images reconstructed from interleaved spiral-in and -out trajectories possess artifacts caused by differences in T2* decay, off-resonance and k-space errors associated with the two trajectories. We analyze the associated errors and demonstrate that application of conjugate phase reconstruction and spectral filtering can substantially mitigate these image artifacts. After applying these processing steps, the multishot interleaved spiral in-out pulse sequence yields high BOLD CNR images at in-plane resolution below 1x1 mm while preserving acceptable temporal resolution (4 s) and brain coverage (15 slices of 2 mm thickness). Moreover, this method yields sufficient BOLD CNR at 1.5 mm isotropic resolution for detection of activation in hippocampus associated with cognitive tasks (Stern memory task). The multi-shot interleaved spiral in-out acquisition is a promising technique for high spatial resolution BOLD fMRI applications. PMID:23023395
Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI.
Ramskill, N P; Bush, I; Sederman, A J; Mantle, M D; Benning, M; Anger, B C; Appel, M; Gladden, L F
2016-09-01
Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of v_i = 1.89 ± 0.03 ft day^-1, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock, in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution that has been achieved using the CS-RARE approach enables dynamic transport processes pertinent to laboratory core floods to be investigated in 3D on a time-scale and with a spatial resolution that, until now, has not been possible. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Enhanced FIB-SEM systems for large-volume 3D imaging
Xu, C Shan; Hayworth, Kenneth J; Lu, Zhiyuan; Grob, Patricia; Hassan, Ahmed M; García-Cerdán, José G; Niyogi, Krishna K; Nogales, Eva; Weinberg, Richard J; Hess, Harald F
2017-01-01
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10^6 µm^3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology. DOI: http://dx.doi.org/10.7554/eLife.25916.001 PMID:28500755
In-flight edge response measurements for high-spatial-resolution remote sensing systems
NASA Astrophysics Data System (ADS)
Blonski, Slawomir; Pagnutti, Mary A.; Ryan, Robert; Zanoni, Vickie
2002-09-01
In-flight measurements of spatial resolution were conducted as part of the NASA Scientific Data Purchase Verification and Validation process. Characterization included remote sensing image products with ground sample distance of 1 meter or less, such as those acquired with the panchromatic imager onboard the IKONOS satellite and the airborne ADAR System 5500 multispectral instrument. Final image products were used to evaluate the effects of both the image acquisition system and image post-processing. Spatial resolution was characterized by full width at half maximum of an edge-response-derived line spread function. The edge responses were analyzed using the tilted-edge technique that overcomes the spatial sampling limitations of the digital imaging systems. As an enhancement to existing algorithms, the slope of the edge response and the orientation of the edge target were determined by a single computational process. Adjacent black and white square panels, either painted on a flat surface or deployed as tarps, formed the ground-based edge targets used in the tests. Orientation of the deployable tarps was optimized beforehand, based on simulations of the imaging system. The effects of such factors as acquisition geometry, temporal variability, Modulation Transfer Function compensation, and ground sample distance on spatial resolution were investigated.
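The final analysis step, deriving the line spread function from an edge response and reporting its full width at half maximum, can be sketched as follows; the tilted-edge re-projection that produces the oversampled edge spread function is assumed to have been done upstream, and the synthetic edge here merely stands in for measured data.

```python
"""Sketch: differentiate an oversampled edge spread function (ESF) to get the
line spread function (LSF) and report its full width at half maximum (FWHM)."""
import numpy as np

def lsf_fwhm(positions, esf):
    lsf = np.gradient(esf, positions)               # LSF = derivative of the ESF
    lsf = np.abs(lsf) / np.max(np.abs(lsf))
    above = np.where(lsf >= 0.5)[0]                 # samples above half maximum
    i0, i1 = above[0], above[-1]
    # Linear interpolation of the half-maximum crossings on both flanks
    left = np.interp(0.5, [lsf[i0 - 1], lsf[i0]], [positions[i0 - 1], positions[i0]])
    right = np.interp(0.5, [lsf[i1 + 1], lsf[i1]], [positions[i1 + 1], positions[i1]])
    return right - left

x = np.linspace(-5, 5, 501)                         # ground-sample-distance units
esf = 0.5 * (1 + np.tanh(x / 0.8))                  # synthetic edge response
print("FWHM:", lsf_fwhm(x, esf))
```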
Diagnostic Radiology--The Impact of New Technology.
ERIC Educational Resources Information Center
Harrison, R. M.
1989-01-01
Discussed are technological advances applying computer techniques for image acquisition and processing, including digital radiography, computed tomography, and nuclear magnetic resonance imaging. Several diagrams and pictures showing the use of each technique are presented. (YP)
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for ensuring that originals are archived, and image manipulation steps are reported, scientists not only follow good laboratory practices, but also avoid ethical issues associated with post-processing and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post-processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG). When post-processing is necessary, the commonly used applications for correction include Photoshop and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
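The flatfield correction mentioned above can be sketched with the usual model corrected = (raw - dark) / normalized(flat - dark); the dark and flat reference frames are assumed to have been acquired with the same optics and exposure, and the synthetic data below only illustrates the arithmetic.

```python
"""Sketch of flatfield (uneven illumination) correction."""
import numpy as np

def flatfield_correct(raw, flat, dark):
    raw = raw.astype(np.float64)
    gain = flat.astype(np.float64) - dark
    gain /= gain.mean()                      # normalise so the mean gain is 1
    return (raw - dark) / np.clip(gain, 1e-6, None)

dark = np.full((512, 512), 100.0)
flat = dark + 1000.0 * np.linspace(0.7, 1.3, 512)[None, :]   # uneven illumination
raw = dark + 800.0 * np.linspace(0.7, 1.3, 512)[None, :]
corrected = flatfield_correct(raw, flat, dark)               # ~uniform 800 counts
```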
NASA Astrophysics Data System (ADS)
Hwang, Sunghwan; Han, Chang Wan; Venkatakrishnan, Singanallur V.; Bouman, Charles A.; Ortalan, Volkan
2017-04-01
Scanning transmission electron microscopy (STEM) has been successfully utilized to investigate the atomic structure and chemistry of materials with atomic resolution. However, STEM's focused electron probe, with its high current density, causes electron-beam damage, including radiolysis and knock-on damage, when it is exposed onto electron-beam-sensitive materials. Therefore, it is highly desirable to decrease the electron dose used in STEM for the investigation of biological/organic molecules, soft materials and nanomaterials in general. With the recent emergence of novel sparse signal processing theories, such as compressive sensing and model-based iterative reconstruction, possibilities have opened up for operating STEM under a sparse acquisition scheme to reduce the electron dose. In this paper, we report our recent approach to implementing sparse acquisition in STEM mode, executed by a random sparse scan and a signal processing algorithm called model-based iterative reconstruction (MBIR). In this method, a small portion, such as 5%, of randomly chosen unit sampling areas (i.e., electron probe positions, corresponding to pixels of a STEM image) within the region of interest (ROI) of the specimen is scanned with the electron probe to obtain a sparse image. Sparse images are then reconstructed using the MBIR inpainting algorithm to produce an image of the specimen at the original resolution that is consistent with an image obtained using conventional scanning methods. Experimental results for sampling down to 5% show consistency with the full STEM image acquired by the conventional scanning method. Although practical limitations of conventional STEM instruments, such as internal delays of the STEM control electronics and the continuous electron gun emission, currently hinder achieving the full potential of sparse-acquisition STEM in realizing the low-dose imaging conditions required for the investigation of beam-sensitive materials, our experimental results demonstrate that sparse-acquisition STEM imaging is potentially capable of reducing the electron dose by at least 20 times, expanding the frontiers of our characterization capabilities for the investigation of biological/organic molecules, polymers, soft materials and nanostructures in general.
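The sparse-acquisition idea can be illustrated with a toy example in which roughly 5% of probe positions are sampled at random and the missing pixels are filled in; plain interpolation is used here as a simple stand-in for the MBIR inpainting algorithm used in the paper.

```python
"""Toy sketch of sparse (5%) sampling followed by inpainting; griddata
interpolation is a stand-in for MBIR, not the algorithm used by the authors."""
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
ny, nx = 128, 128
yy, xx = np.mgrid[0:ny, 0:nx]
truth = np.sin(xx / 6.0) * np.cos(yy / 9.0)          # stand-in STEM image

mask = rng.random((ny, nx)) < 0.05                   # 5% of probe positions
sampled_points = np.column_stack([yy[mask], xx[mask]])
sampled_values = truth[mask]

recon = griddata(sampled_points, sampled_values, (yy, xx),
                 method="cubic", fill_value=0.0)
print("RMS error:", np.sqrt(np.mean((recon - truth) ** 2)))
```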
Embedded, real-time UAV control for improved, image-based 3D scene reconstruction
Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul
2016-01-01
Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...
Colony image acquisition and segmentation
NASA Astrophysics Data System (ADS)
Wang, W. X.
2007-12-01
For the counting of both colonies and plaques, there is a large number of applications including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have worked on such systems, and investigation shows that some existing systems have problems, mainly in image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed as follows: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, the lighting can be made uniform and the colony dish can be placed in the same position every time, which makes image processing easier. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
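A basic version of the delineation and counting steps listed above (grey-level thresholding followed by size-based exclusion) can be sketched as follows; the boundary-tracing and shape-based refinements are not reproduced, and dark colonies on a bright back-lit dish are assumed.

```python
"""Sketch of threshold-and-count colony detection with size filtering."""
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def count_colonies(gray, min_area=20):
    """gray: 2-D array, dark colonies on a bright back-lit dish (assumed)."""
    binary = gray < threshold_otsu(gray)              # grey-level similarity step
    labels, n = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    return int(np.sum(areas >= min_area))             # exclude specks and debris

rng = np.random.default_rng(0)
img = rng.normal(200, 5, (300, 300))                  # bright dish background
img[100:110, 100:110] = 80                            # two synthetic colonies
img[200:215, 50:65] = 90
print(count_colonies(img))                            # expected: 2
```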
Image processing tools dedicated to quantification in 3D fluorescence microscopy
NASA Astrophysics Data System (ADS)
Dieterlen, A.; De Meyer, A.; Colicchio, B.; Le Calvez, S.; Haeberlé, O.; Jacquey, S.
2006-05-01
3-D optical fluorescence microscopy has now become an efficient tool for the volume investigation of living biological samples. Developments in instrumentation have made it possible to beat the conventional Abbe limit. In any case, the recorded image can be described by the convolution equation between the original object and the Point Spread Function (PSF) of the acquisition system. Because of the finite resolution of the instrument, the original object is recorded with distortion and blurring, and is contaminated by noise. As a consequence, relevant biological information cannot be extracted directly from the raw data stacks. If the goal is 3-D quantitative analysis, characterization of the system is mandatory in order to assess the optimal performance of the instrument and to ensure the reproducibility of the data acquisition. The PSF represents the properties of the image acquisition system; we have proposed the use of statistical tools and Zernike moments to describe a 3-D system PSF and to quantify its variation. This first step toward standardization helps define an acquisition protocol that makes optimal use of the microscope for the biological sample under study. Before extracting geometrical information and/or quantifying intensities, data restoration is mandatory. Reduction of out-of-focus light is carried out computationally by a deconvolution process. However, other phenomena occur during acquisition, such as fluorescence photodegradation ("bleaching"), which alter the information needed for restoration. We have therefore developed a protocol to pre-process the data before applying deconvolution algorithms. A large number of deconvolution methods have been described and are now available in commercial packages. One major difficulty in using this software is that the user must supply the "best" regularization parameters. We have shown that automating the choice of the regularization level greatly improves the reliability of the measurements while also making the software easier to use. Furthermore, pre-filtering the images improves the stability of the deconvolution process and thereby increases the quality and repeatability of quantitative measurements; in the same way, pre-filtering the PSF stabilizes the deconvolution. Finally, we have shown that Zernike polynomials can be used to reconstruct the experimental PSF, preserving the system characteristics while removing the noise contained in the PSF.
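Because the abstract does not commit to a specific deconvolution algorithm, the sketch below uses Richardson-Lucy from scikit-image as one common choice, with a synthetic 2-D Gaussian PSF standing in for the measured 3-D PSF described in the text.

```python
"""Sketch of deconvolution under the convolution-plus-noise model above,
using Richardson-Lucy as one common algorithm choice."""
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def gaussian_psf(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()

rng = np.random.default_rng(0)
obj = np.zeros((128, 128))
obj[40:44, 60:64] = 1.0                                    # stand-in object
psf = gaussian_psf()
blurred = fftconvolve(obj, psf, mode="same")               # acquisition model
blurred += rng.normal(scale=1e-3, size=blurred.shape)      # acquisition noise
restored = richardson_lucy(np.clip(blurred, 0, None), psf, 30)
```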
Synthetic schlieren—application to the visualization and characterization of air convection
NASA Astrophysics Data System (ADS)
Taberlet, Nicolas; Plihon, Nicolas; Auzémery, Lucile; Sautel, Jérémy; Panel, Grégoire; Gibaud, Thomas
2018-05-01
Synthetic schlieren is a digital image processing optical method relying on variations of the optical index to visualize the flow of a transparent fluid. In this article, we present a step-by-step, easy-to-implement and affordable experimental realization of this technique. The method is applied to air convection caused by a warm surface. We show that the velocity of rising convection plumes can be linked to the temperature of the warm surface and propose a simple physical argument to explain this dependence. Moreover, using this method, one can reveal the tenuous convection plumes rising from one's hand, a phenomenon invisible to the naked eye. This spectacular result may help students to realize the power of careful data acquisition combined with astute image processing techniques (refer to the video abstract).
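The core synthetic-schlieren step, mapping the apparent displacement of a textured background seen through the convecting air relative to a still-air reference, can be sketched with OpenCV; Farneback optical flow is used here as one convenient displacement estimator, and the "distorted" image is fabricated so the example is self-contained rather than taken from the authors' setup.

```python
"""Sketch of synthetic schlieren: estimate apparent background displacement."""
import cv2
import numpy as np

rng = np.random.default_rng(0)
reference = (rng.random((240, 320)) * 255).astype(np.uint8)   # random-dot background

# Fabricate a smooth apparent displacement, standing in for refraction by a plume
yy, xx = np.mgrid[0:240, 0:320].astype(np.float32)
dx = 2.0 * np.exp(-((xx - 160) ** 2 + (yy - 120) ** 2) / (2 * 40.0 ** 2))
distorted = cv2.remap(reference, (xx + dx).astype(np.float32), yy, cv2.INTER_LINEAR)

flow = cv2.calcOpticalFlowFarneback(reference, distorted, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
schlieren = np.linalg.norm(flow, axis=2)         # |apparent displacement| per pixel
```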
Delpiano, J; Pizarro, L; Peddie, C J; Jones, M L; Griffin, L D; Collinson, L M
2018-04-26
Integrated array tomography combines fluorescence and electron imaging of ultrathin sections in one microscope, and enables accurate high-resolution correlation of fluorescent proteins to cell organelles and membranes. Large numbers of serial sections can be imaged sequentially to produce aligned volumes from both imaging modalities, thus producing enormous amounts of data that must be handled and processed using novel techniques. Here, we present a scheme for automated detection of fluorescent cells within thin resin sections, which could then be used to drive automated electron image acquisition from target regions via 'smart tracking'. The aim of this work is to aid in optimization of the data acquisition process through automation, freeing the operator to work on other tasks and speeding up the process, while reducing data rates by only acquiring images from regions of interest. This new method is shown to be robust against noise and able to deal with regions of low fluorescence. © 2018 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.
MRXCAT: Realistic numerical phantoms for cardiovascular magnetic resonance
2014-01-01
Background Computer simulations are important for validating novel image acquisition and reconstruction strategies. In cardiovascular magnetic resonance (CMR), numerical simulations need to combine anatomical information and the effects of cardiac and/or respiratory motion. To this end, a framework for realistic CMR simulations is proposed and its use for image reconstruction from undersampled data is demonstrated. Methods The extended Cardiac-Torso (XCAT) anatomical phantom framework with various motion options was used as a basis for the numerical phantoms. Different tissue, dynamic contrast and signal models, multiple receiver coils and noise are simulated. Arbitrary trajectories and undersampled acquisition can be selected. The utility of the framework is demonstrated for accelerated cine and first-pass myocardial perfusion imaging using k-t PCA and k-t SPARSE. Results MRXCAT phantoms allow for realistic simulation of CMR including optional cardiac and respiratory motion. Example reconstructions from simulated undersampled k-t parallel imaging demonstrate the feasibility of simulated acquisition and reconstruction using the presented framework. Myocardial blood flow assessment from simulated myocardial perfusion images highlights the suitability of MRXCAT for quantitative post-processing simulation. Conclusion The proposed MRXCAT phantom framework enables versatile and realistic simulations of CMR including breathhold and free-breathing acquisitions. PMID:25204441
Parallel asynchronous systems and image processing algorithms
NASA Technical Reports Server (NTRS)
Coon, D. D.; Perera, A. G. U.
1989-01-01
A new hardware approach to implementation of image processing algorithms is described. The approach is based on silicon devices which would permit an independent analog processing channel to be dedicated to every pixel. A laminar architecture consisting of a stack of planar arrays of the device would form a two-dimensional array processor with a 2-D array of inputs located directly behind a focal plane detector array. A 2-D image data stream would propagate in neuronlike asynchronous pulse-coded form through the laminar processor. Such systems would integrate image acquisition and image processing. Acquisition and processing would be performed concurrently, as in natural vision systems. The research is aimed at implementation of algorithms, such as the intensity-dependent summation algorithm and pyramid processing structures, which are motivated by the operation of natural vision systems. Implementation of natural vision algorithms would benefit from the use of neuronlike information coding and the laminar, 2-D parallel, vision-system-type architecture. Besides providing a neural network framework for implementation of natural vision algorithms, a 2-D parallel approach could eliminate the serial bottleneck of conventional processing systems. Conversion to serial format would occur only after raw intensity data has been substantially processed. An interesting challenge arises from the fact that the mathematical formulation of natural vision algorithms does not specify the means of implementation, so that hardware implementation poses intriguing questions involving vision science.
42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...
42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... image acquisition, digitization, processing, compression, transmission, display, archiving, and... quality digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object... digital radiographic image files from six or more sample chest radiographs that are of acceptable quality...
Computer system for scanning tunneling microscope automation
NASA Astrophysics Data System (ADS)
Aguilar, M.; García, A.; Pascual, P. J.; Presa, J.; Santisteban, A.
1987-03-01
A computerized system for the automation of a scanning tunneling microscope is presented. It is based on an IBM personal computer (PC) either an XT or an AT, which performs the control, data acquisition and storage operations, displays the STM "images" in real time, and provides image processing tools for the restoration and analysis of data. It supports different data acquisition and control cards and image display cards. The software has been designed in a modular way to allow the replacement of these cards and other equipment improvements as well as the inclusion of user routines for data analysis.
Target-locking acquisition with real-time confocal (TARC) microscopy.
Lu, Peter J; Sims, Peter A; Oki, Hidekazu; Macarthur, James B; Weitz, David A
2007-07-09
We present a real-time target-locking confocal microscope that follows an object moving along an arbitrary path, even as it simultaneously changes its shape, size and orientation. This Target-locking Acquisition with Realtime Confocal (TARC) microscopy system integrates fast image processing and rapid image acquisition using a Nipkow spinning-disk confocal microscope. The system acquires a 3D stack of images, performs a full structural analysis to locate a feature of interest, moves the sample in response, and then collects the next 3D image stack. In this way, data collection is dynamically adjusted to keep a moving object centered in the field of view. We demonstrate the system's capabilities by target-locking freely diffusing clusters of attractive colloidal particles, and actively transported quantum dots (QDs) endocytosed into live cells that are free to move in three dimensions, for several hours. During this time, both the colloidal clusters and the live cells move distances several times the length of the imaging volume.
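The target-locking loop described above can be sketched as: acquire a stack, locate the feature of interest, and re-center the stage before the next stack. In the sketch below, acquire_stack() and move_stage() are hypothetical hardware placeholders and the feature is localized by a simple thresholded center of mass rather than the full structural analysis used by the authors.

```python
"""Sketch of a target-locking acquisition loop (hardware calls are stubbed)."""
import numpy as np
from scipy import ndimage

def acquire_stack():
    # Placeholder for the spinning-disk confocal acquisition of a z-stack.
    stack = np.zeros((32, 128, 128))
    stack[16, 60:70, 70:80] = 1.0           # fake fluorescent cluster
    return stack

def move_stage(dz, dy, dx):
    # Placeholder for the motorized-stage command (offsets in slices/pixels here).
    print(f"move stage by dz={dz:+.1f}, dy={dy:+.1f}, dx={dx:+.1f}")

for _ in range(3):                           # a few target-locked iterations
    stack = acquire_stack()
    mask = stack > 0.5 * stack.max()         # crude feature segmentation
    cz, cy, cx = ndimage.center_of_mass(mask)
    center = (np.array(stack.shape) - 1) / 2.0
    move_stage(center[0] - cz, center[1] - cy, center[2] - cx)
```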
Cui, Yang; Hanley, Luke
2015-06-01
ChiMS is an open-source data acquisition and control software program written within LabVIEW for high speed imaging and depth profiling mass spectrometers. ChiMS can also transfer large datasets from a digitizer to computer memory at high repetition rate, save data to hard disk at high throughput, and perform high speed data processing. The data acquisition mode generally simulates a digital oscilloscope, but with peripheral devices integrated for control as well as advanced data sorting and processing capabilities. Customized user-designed experiments can be easily written based on several included templates. ChiMS is additionally well suited to non-laser-based mass spectrometer imaging and various other experiments in laser physics, physical chemistry, and surface science.
NASA Astrophysics Data System (ADS)
Hewawasam, Kuravi; Mendillo, Christopher B.; Howe, Glenn A.; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya
2017-09-01
The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. The PICTURE-C low-order wavefront control (LOWC) system will be used to correct time-varying low-order aberrations due to pointing jitter, gravity sag, thermal deformation, and the gondola pendulum motion. We present the hardware and software implementation of the low-order Shack-Hartmann and reflective Lyot stop sensors. Development of the high-speed image acquisition and processing system is discussed with the emphasis on the reduction of hardware and computational latencies through the use of a real-time operating system and optimized data handling. By characterizing all of the LOWC latencies, we describe techniques to achieve a framerate of 200 Hz with a mean latency of ~378 μs.
Respiratory motion correction in emission tomography image reconstruction.
Reyes, Mauricio; Malandain, Grégoire; Koulibaly, Pierre Malick; González Ballester, Miguel A; Darcourt, Jacques
2005-01-01
In Emission Tomography imaging, respiratory motion causes artifacts in reconstructed lung and cardiac images, which lead to misinterpretations and imprecise diagnosis. Solutions like respiratory gating, correlated dynamic PET techniques, list-mode data based techniques and others have been tested with improvements over the spatial activity distribution in lung lesions, but with the disadvantages of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion correction directly into the image reconstruction process, without any additional acquisition protocol consideration. To this end, we propose an extension to the Maximum Likelihood Expectation Maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by the respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data.
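A compact way to see how a motion model enters MLEM is to fold a per-phase warping operator into the system matrix of the standard update. The sketch below is a dense-matrix illustration under assumed inputs (per-phase projections `y`, system matrices `A_list`, warp matrices `W_list`); it shows the general structure of motion-compensated MLEM, not the authors' implementation.

```python
import numpy as np

def mlem_motion(y, A_list, W_list, n_iter=20):
    """
    MLEM with a respiratory motion model.

    y       : measured projections, shape (n_phases, n_bins)
    A_list  : per-phase system matrices (n_bins x n_voxels)
    W_list  : per-phase warp matrices mapping the reference image
              to each respiratory phase (n_voxels x n_voxels)
    Returns the reference-phase activity estimate.
    """
    n_vox = A_list[0].shape[1]
    lam = np.ones(n_vox)
    # sensitivity image: back-projection of ones through warp + system model
    sens = sum((A @ W).T @ np.ones(A.shape[0]) for A, W in zip(A_list, W_list))
    for _ in range(n_iter):
        back = np.zeros(n_vox)
        for yp, A, W in zip(y, A_list, W_list):
            AW = A @ W
            ratio = yp / np.maximum(AW @ lam, 1e-12)   # measured / estimated projections
            back += AW.T @ ratio
        lam *= back / np.maximum(sens, 1e-12)          # multiplicative MLEM update
    return lam
```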
A role for the CAMKK pathway in visual object recognition memory.
Tinsley, Chris J; Narduzzo, Katherine E; Brown, Malcolm W; Warburton, E Clea
2012-03-01
The role of the CAMKK pathway in object recognition memory was investigated. Rats' performance in a preferential object recognition test was examined after local infusion into the perirhinal cortex of the CAMKK inhibitor STO-609. STO-609 infused either before or immediately after acquisition impaired memory tested after a 24 h but not a 20-min delay. Memory was not impaired when STO-609 was infused 20 min after acquisition. The expression of a downstream reaction product of CAMKK was measured by immunohistochemical staining for phospho-CAMKI(Thr177) at 10, 40, 70, and 100 min following the viewing of novel and familiar images of objects. Processing familiar images resulted in more pCAMKI stained neurons in the perirhinal cortex than processing novel images at the 10- and 40-min delays. Prior infusion of STO-609 caused a reduction in pCAMKI stained neurons in response to viewing either novel or familiar images, consistent with its role as an inhibitor of CAMKK. The results establish that the CAMKK pathway within the perirhinal cortex is important for the consolidation of object recognition memory. The activation of pCAMKI after acquisition is earlier than previously reported for pCAMKII. Copyright © 2011 Wiley Periodicals, Inc.
Molecular brain imaging in the multimodality era
Price, Julie C
2012-01-01
Multimodality molecular brain imaging encompasses in vivo visualization, evaluation, and measurement of cellular/molecular processes. Instrumentation and software developments over the past 30 years have fueled advancements in multimodality imaging platforms that enable acquisition of multiple complementary imaging outcomes by either combined sequential or simultaneous acquisition. This article provides a general overview of multimodality neuroimaging in the context of positron emission tomography as a molecular imaging tool and magnetic resonance imaging as a structural and functional imaging tool. Several image examples are provided and general challenges are discussed to exemplify complementary features of the modalities, as well as important strengths and weaknesses of combined assessments. Alzheimer's disease is highlighted, as this clinical area has been strongly impacted by multimodality neuroimaging findings that have improved understanding of the natural history of disease progression, early disease detection, and informed therapy evaluation. PMID:22434068
Plenoptic Ophthalmoscopy: A Novel Imaging Technique.
Adam, Murtaza K; Aenchbacher, Weston; Kurzweg, Timothy; Hsu, Jason
2016-11-01
This prospective retinal imaging case series was designed to establish feasibility of plenoptic ophthalmoscopy (PO), a novel mydriatic fundus imaging technique. A custom variable intensity LED array light source adapter was created for the Lytro Gen1 light-field camera (Lytro, Mountain View, CA). Initial PO testing was performed on a model eye and rabbit fundi. PO image acquisition was then performed on dilated human subjects with a variety of retinal pathology and images were subjected to computational enhancement. The Lytro Gen1 light-field camera with custom LED array captured fundus images of eyes with diabetic retinopathy, age-related macular degeneration, retinal detachment, and other diagnoses. Post-acquisition computational processing allowed for refocusing and perspective shifting of retinal PO images, resulting in improved image quality. The application of PO to image the ocular fundus is feasible. Additional studies are needed to determine its potential clinical utility. [Ophthalmic Surg Lasers Imaging Retina. 2016;47:1038-1043.]. Copyright 2016, SLACK Incorporated.
SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glitzner, M; Lagendijk, J; Raaymakers, B
Recent developments made MRI guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality on image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of (2.5 mm³) for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. In kidney-liver boundaries and the region around stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the resolution. This can be employed to greatly reduce image acquisition times for interventional applications in real-time. This work was funded by the SoRTS consortium, which includes the industry partners Elekta, Philips and Technolution.
Evaluation of security algorithms used for security processing on DICOM images
NASA Astrophysics Data System (ADS)
Chen, Xiaomeng; Shuai, Jie; Zhang, Jianguo; Huang, H. K.
2005-04-01
In this paper, we developed a security approach to provide security measures and features in PACS image acquisition and tele-radiology image transmission. The security processing on medical images was based on public key infrastructure (PKI) and included digital signature and data encryption to achieve the security features of confidentiality, privacy, authenticity, integrity, and non-repudiation. There are many algorithms which can be used in PKI for data encryption and digital signature. In this research, we selected several algorithms to perform security processing on different DICOM images in a PACS environment, evaluated the security processing performance of these algorithms, and found the relationship between performance and image types, sizes, and implementation methods.
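As a rough illustration of the kind of PKI-based processing evaluated here (not the specific algorithms benchmarked in the paper), the following Python sketch signs a DICOM file's bytes with RSA-PSS and encrypts them with a hybrid scheme using the `cryptography` package; the file path and key sizes are illustrative assumptions.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Key pairs for the sending and receiving PACS nodes (illustrative only).
sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

dicom_bytes = open("image.dcm", "rb").read()   # hypothetical DICOM file path

# Digital signature (authenticity, integrity, non-repudiation).
signature = sender_key.sign(
    dicom_bytes,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Hybrid encryption (confidentiality): symmetric key wrapped with RSA-OAEP.
sym_key = Fernet.generate_key()
ciphertext = Fernet(sym_key).encrypt(dicom_bytes)
wrapped_key = receiver_key.public_key().encrypt(
    sym_key,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
```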
NASA Astrophysics Data System (ADS)
Dubuque, Shaun; Coffman, Thayne; McCarley, Paul; Bovik, A. C.; Thomas, C. William
2009-05-01
Foveated imaging has been explored for compression and tele-presence, but gaps exist in the study of foveated imaging applied to acquisition and tracking systems. Results are presented from two sets of experiments comparing simple foveated and uniform resolution targeting (acquisition and tracking) algorithms. The first experiments measure acquisition performance when locating Gabor wavelet targets in noise, with fovea placement driven by a mutual information measure. The foveated approach is shown to have lower detection delay than a notional uniform resolution approach when using video that consumes equivalent bandwidth. The second experiments compare the accuracy of target position estimates from foveated and uniform resolution tracking algorithms. A technique is developed to select foveation parameters that minimize error in Kalman filter state estimates. Foveated tracking is shown to consistently outperform uniform resolution tracking on an abstract multiple target task when using video that consumes equivalent bandwidth. Performance is also compared to uniform resolution processing without bandwidth limitations. In both experiments, superior performance is achieved at a given bandwidth by foveated processing because limited resources are allocated intelligently to maximize operational performance. These findings indicate the potential for operational performance improvements over uniform resolution systems in both acquisition and tracking tasks.
NASA Astrophysics Data System (ADS)
Keller, Brad M.; Nathan, Diane L.; Conant, Emily F.; Kontos, Despina
2012-03-01
Breast percent density (PD%), as measured mammographically, is one of the strongest known risk factors for breast cancer. While the majority of studies to date have focused on PD% assessment from digitized film mammograms, digital mammography (DM) is becoming increasingly common, and allows for direct PD% assessment at the time of imaging. This work investigates the accuracy of a generalized linear model-based (GLM) estimation of PD% from raw and postprocessed digital mammograms, utilizing image acquisition physics, patient characteristics and gray-level intensity features of the specific image. The model is trained in a leave-one-woman-out fashion on a series of 81 cases for which bilateral, mediolateral-oblique DM images were available in both raw and post-processed format. Baseline continuous and categorical density estimates were provided by a trained breast-imaging radiologist. Regression analysis is performed and Pearson's correlation, r, and Cohen's kappa, κ, are computed. The GLM PD% estimation model performed well on both processed (r=0.89, p<0.001) and raw (r=0.75, p<0.001) images. Model agreement with radiologist assigned density categories was also high for processed (κ=0.79, p<0.001) and raw (κ=0.76, p<0.001) images. Model-based prediction of breast PD% could allow for a reproducible estimation of breast density, providing a rapid risk assessment tool for clinical practice.
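A simplified stand-in for the modeling step described above: leave-one-out regression of radiologist-assigned PD% on per-image features, scored with Pearson's r. The sketch uses ordinary least squares (a Gaussian GLM with identity link) on synthetic data; the paper's actual feature set and GLM specification are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut
from scipy.stats import pearsonr

# X: per-image features (acquisition physics, patient characteristics,
#    gray-level statistics); y: radiologist PD% -- both synthetic here.
rng = np.random.default_rng(0)
X = rng.normal(size=(81, 6))
y = 30 + X @ rng.normal(size=6) + rng.normal(scale=2, size=81)

pred = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    model = LinearRegression().fit(X[train], y[train])   # Gaussian GLM, identity link
    pred[test] = model.predict(X[test])

r, p = pearsonr(pred, y)
print(f"leave-one-woman-out correlation r = {r:.2f} (p = {p:.3g})")
```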
FPGA-Based Reconfigurable Processor for Ultrafast Interlaced Ultrasound and Photoacoustic Imaging
Alqasemi, Umar; Li, Hai; Aguirre, Andrés; Zhu, Quing
2016-01-01
In this paper, we report, to the best of our knowledge, a unique field-programmable gate array (FPGA)-based reconfigurable processor for real-time interlaced co-registered ultrasound and photoacoustic imaging and its application in imaging tumor dynamic response. The FPGA is used to control, acquire, store, delay-and-sum, and transfer the data for real-time co-registered imaging. The FPGA controls the ultrasound transmission and ultrasound and photoacoustic data acquisition process of a customized 16-channel module that contains all of the necessary analog and digital circuits. The 16-channel module is one of multiple modules plugged into a motherboard; their beamformed outputs are made available for a digital signal processor (DSP) to access using an external memory interface (EMIF). The FPGA performs a key role through ultrafast reconfiguration and adaptation of its structure to allow real-time switching between the two imaging modes, including transmission control, laser synchronization, internal memory structure, beamforming, and EMIF structure and memory size. It performs another role by parallel accessing of internal memories and multi-thread processing to reduce the transfer of data and the processing load on the DSP. Furthermore, because the laser will be pulsing even during ultrasound pulse-echo acquisition, the FPGA ensures that the laser pulses are far enough from the pulse-echo acquisitions by appropriate time-division multiplexing (TDM). A co-registered ultrasound and photoacoustic imaging system consisting of four FPGA modules (64-channels) is constructed, and its performance is demonstrated using phantom targets and in vivo mouse tumor models. PMID:22828830
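The delay-and-sum step that the FPGA performs in streaming hardware can be written compactly in software. The NumPy sketch below beamforms one-way (photoacoustic) channel data onto a pixel grid; the array geometry and sampling parameters are placeholders, and the fixed-point, real-time aspects of the FPGA design are not represented.

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, image_x, image_z):
    """
    Simple delay-and-sum beamformer for photoacoustic (one-way) channel data.

    rf        : channel data, shape (n_elements, n_samples)
    element_x : lateral positions of the array elements (m)
    fs        : sampling frequency (Hz); c : speed of sound (m/s)
    image_x, image_z : 1D grids of lateral/axial pixel positions (m)
    """
    n_elem, n_samp = rf.shape
    image = np.zeros((len(image_z), len(image_x)))
    for iz, z in enumerate(image_z):
        for ix, x in enumerate(image_x):
            # one-way time of flight from the pixel to each element
            delays = np.sqrt((element_x - x) ** 2 + z ** 2) / c
            idx = np.round(delays * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = rf[np.arange(n_elem)[valid], idx[valid]].sum()
    return image
```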
THz near-field spectral encoding imaging using a rainbow metasurface.
Lee, Kanghee; Choi, Hyun Joo; Son, Jaehyeon; Park, Hyun-Sung; Ahn, Jaewook; Min, Bumki
2015-09-24
We demonstrate a fast image acquisition technique in the terahertz range via spectral encoding using a metasurface. The metasurface is composed of spatially varying units of mesh filters that exhibit bandpass features. Each mesh filter is arranged such that the centre frequencies of the mesh filters are proportional to their position within the metasurface, similar to a rainbow. For imaging, the object is placed in front of the rainbow metasurface, and the image is reconstructed by measuring the transmitted broadband THz pulses through both the metasurface and the object. The 1D image information regarding the object is linearly mapped into the spectrum of the transmitted wave of the rainbow metasurface. Thus, 2D images can be successfully reconstructed using simple 1D data acquisition processes.
Application of Structure-from-Motion photogrammetry in laboratory flumes
NASA Astrophysics Data System (ADS)
Morgan, Jacob A.; Brogan, Daniel J.; Nelson, Peter A.
2017-01-01
Structure-from-Motion (SfM) photogrammetry has become widely used for topographic data collection in field and laboratory studies. However, the relative performance of SfM against other methods of topographic measurement in a laboratory flume environment has not been systematically evaluated, and there is a general lack of guidelines for SfM application in flume settings. As the use of SfM in laboratory flume settings becomes more widespread, it is increasingly critical to develop an understanding of how to acquire and process SfM data for a given flume size and sediment characteristics. In this study, we: (1) compare the resolution and accuracy of SfM topographic measurements to terrestrial laser scanning (TLS) measurements in laboratory flumes of varying physical dimensions containing sediments of varying grain sizes; (2) explore the effects of different image acquisition protocols and data processing methods on the resolution and accuracy of topographic data derived from SfM techniques; and (3) provide general guidance for image acquisition and processing for SfM applications in laboratory flumes. To investigate the effects of flume size, sediment size, and photo overlap on the density and accuracy of SfM data, we collected topographic data using both TLS and SfM in five flumes with widths ranging from 0.22 to 6.71 m, lengths ranging from 9.14 to 30.48 m, and median sediment sizes ranging from 0.2 to 31 mm. Acquisition time, image overlap, point density, elevation data, and computed roughness parameters were compared to evaluate the performance of SfM against TLS. We also collected images of a pan of gravel where we varied the distance and angle between the camera and sediment in order to explore how photo acquisition affects the ability to capture grain-scale microtopographic features in SfM-derived point clouds. A variety of image combinations and SfM software package settings were also investigated to determine optimal processing techniques. Results from this study suggest that SfM provides topographic data of similar accuracy to TLS, at higher resolution and lower cost. We found that about 100 pixels per grain are required to resolve grain-scale topography. We suggest protocols for image acquisition and SfM software settings to achieve best results when using SfM in laboratory settings. In general, convergent imagery, taken from a higher angle, with at least several overlapping images for each desired point in the flume will result in an acceptable point cloud.
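The roughly 100-pixels-per-grain guideline can be turned into a quick planning calculation for camera distance. The helper below assumes a pinhole camera model and interprets the guideline as an areal pixel count (about 10 pixels across a grain); the lens and sensor values in the example are illustrative, not from the study.

```python
import math

def max_camera_distance(grain_size_m, focal_length_m, pixel_pitch_m,
                        pixels_per_grain=100):
    """
    Largest camera-to-bed distance at which a grain of the given size still
    covers roughly `pixels_per_grain` pixels (pinhole model).
    Linear pixels across a grain = grain_size * f / (distance * pixel_pitch).
    """
    pixels_across = math.sqrt(pixels_per_grain)   # ~10 px across for 100 px per grain
    return grain_size_m * focal_length_m / (pixels_across * pixel_pitch_m)

# Example: 2 mm gravel, 24 mm lens, 4.5 micron pixels -> roughly 1.1 m standoff
print(max_camera_distance(0.002, 0.024, 4.5e-6))
```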
Cardio-PACs: a new opportunity
NASA Astrophysics Data System (ADS)
Heupler, Frederick A., Jr.; Thomas, James D.; Blume, Hartwig R.; Cecil, Robert A.; Heisler, Mary
2000-05-01
It is now possible to replace film-based image management in the cardiac catheterization laboratory with a Cardiology Picture Archiving and Communication System (Cardio-PACS) based on digital imaging technology. The first step in the conversion process is installation of a digital image acquisition system that is capable of generating high-quality DICOM-compatible images. The next three steps, which are the subject of this presentation, involve image display, distribution, and storage. Clinical requirements and associated cost considerations for these three steps are listed below: Image display: (1) Image quality equal to film, with DICOM format, lossless compression, image processing, desktop PC-based with color monitor, and physician-friendly imaging software; (2) Performance specifications include: acquire 30 frames/sec; replay 15 frames/sec; access to file server 5 seconds, and to archive 5 minutes; (3) Compatibility of image file, transmission, and processing formats; (4) Image manipulation: brightness, contrast, gray scale, zoom, biplane display, and quantification; (5) User-friendly control of image review. Image distribution: (1) Standard IP-based network between cardiac catheterization laboratories, file server, long-term archive, review stations, and remote sites; (2) Non-proprietary formats; (3) Bidirectional distribution. Image storage: (1) CD-ROM vs disk vs tape; (2) Verification of data integrity; (3) User-designated storage capacity for catheterization laboratory, file server, long-term archive. Costs: (1) Image acquisition equipment, file server, long-term archive; (2) Network infrastructure; (3) Review stations and software; (4) Maintenance and administration; (5) Future upgrades and expansion; (6) Personnel.
Image processing and 3D visualization in the interpretation of patterned injury of the skin
NASA Astrophysics Data System (ADS)
Oliver, William R.; Altschuler, Bruce R.
1995-09-01
The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations in data-acquisition problems for performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.
Unified Ultrasonic/Eddy-Current Data Acquisition
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1993-01-01
Imaging station for detecting cracks and flaws in solid materials developed combining both ultrasonic C-scan and eddy-current imaging. Incorporation of both techniques into one system eliminates duplication of computers and of mechanical scanners; unifies acquisition, processing, and storage of data; reduces setup time for repetitious ultrasonic and eddy-current scans; and increases efficiency of system. Same mechanical scanner used to maneuver either ultrasonic or eddy-current probe over specimen and acquire point-by-point data. For ultrasonic scanning, probe linked to ultrasonic pulser/receiver circuit card, while, for eddy-current imaging, probe linked to impedance-analyzer circuit card. Both ultrasonic and eddy-current imaging subsystems share same desktop-computer controller, containing dedicated plug-in circuit boards for each.
Global Pressure- and Temperature-Measurements in 1.27-m JAXA Hypersonic Wind Tunnel
NASA Astrophysics Data System (ADS)
Yamada, Y.; Miyazaki, T.; Nakagawa, M.; Tsuda, S.; Sakaue, H.
The pressure-sensitive paint (PSP) technique has been widely used in aerodynamic measurements. A PSP is a global optical sensor, which consists of a luminophore and a binding material. The luminophore emits luminescence whose intensity depends on the local oxygen concentration through oxygen quenching. In an aerodynamic measurement, the oxygen concentration is related to the partial pressure of oxygen and hence to the static pressure, so the luminescent signal can be related to a static pressure [1]. The PSP measurement system consists of a PSP coated model, an image acquisition unit, and an image processing unit (Fig. 1). For the image acquisition, an illumination source and a photo-detector are required. To separate the illumination and the PSP emission detected by the photo-detector, appropriate band-pass filters are placed in front of the illumination and the photo-detector. The image processing unit includes the calibration and computation. The calibration relates the luminescent signal to pressures and temperatures. Based on these calibrations, luminescent images are converted to a pressure map.
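The calibration step that relates luminescent signal to pressure is commonly expressed with a Stern-Volmer-type relation, I_ref/I = A(T) + B(T)·(P/P_ref). The sketch below fits that relation to illustrative calibration points and applies it to wind-off/wind-on image ratios; the numbers and the specific calibration form are assumptions, not values from this paper.

```python
import numpy as np

# Calibration data (illustrative): intensity ratio I_ref/I vs pressure ratio P/P_ref
p_ratio = np.array([0.2, 0.5, 0.8, 1.0, 1.2, 1.5])
i_ratio = np.array([0.41, 0.62, 0.83, 0.97, 1.11, 1.32])

# Fit I_ref/I = A + B * (P/P_ref)  (Stern-Volmer form at a fixed temperature)
B, A = np.polyfit(p_ratio, i_ratio, 1)

def pressure_map(i_ref_image, i_image, p_ref):
    """Convert wind-off/wind-on luminescent images to a static pressure map."""
    return p_ref * (i_ref_image / i_image - A) / B
```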
A study of quantification of aortic compliance in mice using radial acquisition phase contrast MRI
NASA Astrophysics Data System (ADS)
Zhao, Xuandong
Spatiotemporal changes in blood flow velocity measured using phase contrast Magnetic Resonance Imaging (MRI) can be used to quantify Pulse Wave Velocity (PWV) and Wall Shear Stress (WSS), well known indices of vessel compliance. A study was conducted to measure the PWV in the aortic arch in young healthy children using conventional phase contrast MRI and a post-processing algorithm that automatically tracks the peak velocity in phase contrast images. It is shown that the PWV calculated using peak velocity-time data has less variability compared to that using mean velocity and flow. Conventional MR data acquisition techniques lack both the spatial and temporal resolution needed to accurately calculate PWV and WSS in in vivo studies using transgenic animal models of arterial diseases. Radial k-space acquisition can improve both spatial and temporal resolution. A major part of this thesis was devoted to developing technology for Radial Phase Contrast Magnetic Resonance (RPCMR) cine imaging on a 7 Tesla animal scanner. A pulse sequence with asymmetric radial k-space acquisition was designed and implemented. Software developed to reconstruct the RPCMR images includes gridding, density compensation and centering of k-space that corrects the image ghosting introduced by hardware response time. Image processing software was developed to automatically segment the vessel lumen and correct for phase offset due to eddy currents. Finally, in vivo and ex vivo aortic compliance measurements were conducted in a well-established mouse model of atherosclerosis: Apolipoprotein E-knockout (ApoE-KO). Using the RPCMR technique, a significantly higher PWV value as well as a higher average WSS was detected in 9-month-old ApoE-KO mice compared to wild-type mice. A follow-up ex vivo test of tissue elasticity confirmed the impaired distensibility of the aortic arteries in ApoE-KO mice.
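PWV estimation from velocity-time curves reduces to a transit-time calculation between two measurement planes. The sketch below uses the waveform foot (first crossing of a fraction of the peak velocity) as the timing marker; the thesis tracks the peak velocity instead, so this is only a simplified illustration of the arithmetic, not the exact method.

```python
import numpy as np

def waveform_foot(t, v, frac=0.2):
    """Time at which velocity first rises above `frac` of its peak (waveform foot)."""
    idx = np.argmax(v >= frac * v.max())
    return t[idx]

def pulse_wave_velocity(t, v_proximal, v_distal, path_length_m):
    """PWV = aortic path length / transit time between the two waveform feet."""
    dt = waveform_foot(t, v_distal) - waveform_foot(t, v_proximal)
    return path_length_m / dt
```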
NASA Astrophysics Data System (ADS)
Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.
2017-06-01
This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of different numbers of sequential images on the accuracy of 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed and the overall result of the analysis is summarized for the prototype imaging platform.
Sharif, Behzad; Derbyshire, J. Andrew; Faranesh, Anthony Z.; Bresler, Yoram
2010-01-01
MR imaging of the human heart without explicit cardiac synchronization promises to extend the applicability of cardiac MR to a larger patient population and potentially expand its diagnostic capabilities. However, conventional non-gated imaging techniques typically suffer from low image quality or inadequate spatio-temporal resolution and fidelity. Patient-Adaptive Reconstruction and Acquisition in Dynamic Imaging with Sensitivity Encoding (PARADISE) is a highly-accelerated non-gated dynamic imaging method that enables artifact-free imaging with high spatio-temporal resolutions by utilizing novel computational techniques to optimize the imaging process. In addition to using parallel imaging, the method gains acceleration from a physiologically-driven spatio-temporal support model; hence, it is doubly accelerated. The support model is patient-adaptive, i.e., its geometry depends on dynamics of the imaged slice, e.g., subject’s heart-rate and heart location within the slice. The proposed method is also doubly adaptive as it adapts both the acquisition and reconstruction schemes. Based on the theory of time-sequential sampling, the proposed framework explicitly accounts for speed limitations of gradient encoding and provides performance guarantees on achievable image quality. The presented in-vivo results demonstrate the effectiveness and feasibility of the PARADISE method for high resolution non-gated cardiac MRI during a short breath-hold. PMID:20665794
Milchenko, Mikhail; Snyder, Abraham Z; LaMontagne, Pamela; Shimony, Joshua S; Benzinger, Tammie L; Fouke, Sarah Jost; Marcus, Daniel S
2016-07-01
Neuroimaging research often relies on clinically acquired magnetic resonance imaging (MRI) datasets that can originate from multiple institutions. Such datasets are characterized by high heterogeneity of modalities and variability of sequence parameters. This heterogeneity complicates the automation of image processing tasks such as spatial co-registration and physiological or functional image analysis. Given this heterogeneity, conventional processing workflows developed for research purposes are not optimal for clinical data. In this work, we describe an approach called Heterogeneous Optimization Framework (HOF) for developing image analysis pipelines that can handle the high degree of clinical data non-uniformity. HOF provides a set of guidelines for configuration, algorithm development, deployment, interpretation of results and quality control for such pipelines. At each step, we illustrate the HOF approach using the implementation of an automated pipeline for Multimodal Glioma Analysis (MGA) as an example. The MGA pipeline computes tissue diffusion characteristics of diffusion tensor imaging (DTI) acquisitions, hemodynamic characteristics using a perfusion model of susceptibility contrast (DSC) MRI, and spatial cross-modal co-registration of available anatomical, physiological and derived patient images. Developing MGA within HOF enabled the processing of neuro-oncology MR imaging studies to be fully automated. MGA has been successfully used to analyze over 160 clinical tumor studies to date within several research projects. Introduction of the MGA pipeline improved image processing throughput and, most importantly, effectively produced co-registered datasets that were suitable for advanced analysis despite high heterogeneity in acquisition protocols.
Video Information Communication and Retrieval/Image Based Information System (VICAR/IBIS)
NASA Technical Reports Server (NTRS)
Wherry, D. B.
1981-01-01
The acquisition, operation, and planning stages of installing a VICAR/IBIS system are described. The system operates in an IBM mainframe environment, and provides image processing of raster data. System support problems with software and documentation are discussed.
Ji, Jim; Wright, Steven
2005-01-01
Parallel imaging using multiple phased-array coils and receiver channels has become an effective approach to high-speed magnetic resonance imaging (MRI). To obtain high spatiotemporal resolution, the k-space is subsampled and later interpolated using multiple channel data. Higher subsampling factors result in faster image acquisition. However, the subsampling factors are upper-bounded by the number of parallel channels. Phase constraints have been previously proposed to overcome this limitation with some success. In this paper, we demonstrate that in certain applications it is possible to obtain acceleration factors potentially up to twice the channel numbers by using a real image constraint. Data acquisition and processing methods to manipulate and estimate the image phase information are presented for improving image reconstruction. In-vivo brain MRI experimental results show that accelerations up to 6 are feasible with 4-channel data.
Hsu, Shu-Hui; Cao, Yue; Lawrence, Theodore S.; Tsien, Christina; Feng, Mary; Grodzki, David M.; Balter, James M.
2015-01-01
Accurate separation of air and bone is critical for creating synthetic CT from MRI to support Radiation Oncology workflow. This study compares two different ultrashort echo-time sequences in the separation of air from bone, and evaluates post-processing methods that correct intensity nonuniformity of images and account for intensity gradients at tissue boundaries to improve this discriminatory power. CT and MRI scans were acquired on 12 patients under an institutional review board-approved prospective protocol. The two MRI sequences tested were ultra-short TE imaging using 3D radial acquisition (UTE), and using pointwise encoding time reduction with radial acquisition (PETRA). Gradient nonlinearity correction was applied to both MR image volumes after acquisition. MRI intensity nonuniformity was corrected by vendor-provided normalization methods, and then further corrected using the N4itk algorithm. To overcome the intensity gradient at air-tissue boundaries, spatial dilations, from 0 to 4 mm, were applied to threshold-defined air regions from MR images. Receiver operating characteristic (ROC) analyses, by comparing predicted (defined by MR images) versus "true" regions of air and bone (defined by CT images), were performed with and without residual bias field correction and local spatial expansion. The post-processing corrections increased the areas under the ROC curves (AUC) from 0.944 ± 0.012 to 0.976 ± 0.003 for UTE images, and from 0.850 ± 0.022 to 0.887 ± 0.012 for PETRA images, compared to without corrections. When expanding the threshold-defined air volumes, as expected, sensitivity of air identification decreased with an increase in specificity of bone discrimination, but in a non-linear fashion. A 1-mm air mask expansion yielded AUC increases of 1% and 4% for UTE and PETRA images, respectively. UTE images had significantly greater discriminatory power in separating air from bone than PETRA images. Post-processing strategies improved the discriminatory power of air from bone for both UTE and PETRA images, and reduced the difference between the two imaging sequences. Both post-processed UTE and PETRA images demonstrated sufficient power to discriminate air from bone to support synthetic CT generation from MRI data. PMID:25776205
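The ROC analysis and air-mask dilation described above can be prototyped with standard tools. In the sketch below, the CT labels for air and bone use assumed HU thresholds (< -400 HU for air, > 300 HU for bone) that are not taken from the paper, and SciPy/scikit-learn provide the 3D dilation and the AUC.

```python
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure
from sklearn.metrics import roc_auc_score

def evaluate_air_bone(mr, ct, air_threshold, dilation_mm=1, voxel_mm=1.0):
    """
    CT defines 'truth' (air < -400 HU, bone > 300 HU); MR intensity is the
    discriminator. Returns the ROC AUC plus sensitivity/specificity of a
    threshold-defined, spatially dilated air mask.
    """
    air_ct, bone_ct = ct < -400, ct > 300
    mask = air_ct | bone_ct
    labels = air_ct[mask].astype(int)                 # 1 = air, 0 = bone
    auc = roc_auc_score(labels, -mr[mask])            # darker MR voxels -> more air-like

    air_mr = mr < air_threshold
    if dilation_mm > 0:
        air_mr = binary_dilation(air_mr, generate_binary_structure(3, 1),
                                 iterations=max(1, int(round(dilation_mm / voxel_mm))))
    pred = air_mr[mask]
    sensitivity = (pred & (labels == 1)).sum() / max(labels.sum(), 1)
    specificity = (~pred & (labels == 0)).sum() / max((labels == 0).sum(), 1)
    return auc, sensitivity, specificity
```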
Diffusion-Weighted Imaging Outside the Brain: Consensus Statement From an ISMRM-Sponsored Workshop
Taouli, Bachir; Beer, Ambros J.; Chenevert, Thomas; Collins, David; Lehman, Constance; Matos, Celso; Padhani, Anwar R.; Rosenkrantz, Andrew B.; Shukla-Dave, Amita; Sigmund, Eric; Tanenbaum, Lawrence; Thoeny, Harriet; Thomassin-Naggara, Isabelle; Barbieri, Sebastiano; Corcuera-Solano, Idoia; Orton, Matthew; Partridge, Savannah C.; Koh, Dow-Mu
2016-01-01
The significant advances in magnetic resonance imaging (MRI) hardware and software, sequence design, and postprocessing methods have made diffusion-weighted imaging (DWI) an important part of body MRI protocols and have fueled extensive research on quantitative diffusion outside the brain, particularly in the oncologic setting. In this review, we summarize the most up-to-date information on DWI acquisition and clinical applications outside the brain, as discussed in an ISMRM-sponsored symposium held in April 2015. We first introduce recent advances in acquisition, processing, and quality control; then review scientific evidence in major organ systems; and finally describe future directions. PMID:26892827
42 CFR 37.44 - Approval of radiographic facilities that use digital radiography systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... effective management, safety, and proper performance of chest image acquisition, digitization, processing... digital chest radiographs by submitting to NIOSH digital radiographic image files of a test object (e.g... radiographic image files from six or more sample chest radiographs that are of acceptable quality to one or...
Living in a digital world: features and applications of FPGA in photon detection
NASA Astrophysics Data System (ADS)
Arnesano, Cosimo
Optical spectroscopy and imaging outcomes rely upon many factors; one of the most critical is the photon acquisition and processing method employed. For some types of measurements it may be crucial to acquire every single photon quickly with temporal resolution, but in other cases it is important to acquire as many photons as possible, regardless of the time information about each of them. Fluorescence Lifetime Imaging Microscopy belongs to the first case, where the information of the time of arrival of every single photon in every single pixel is fundamental in obtaining the desired information. Spectral tissue imaging belongs to the second case, where high photon density is needed in order to calculate the optical parameters necessary to build the spectral image. In both cases, the current instrumentation suffers from limitations in terms of acquisition time, duty cycle, cost, and radio-frequency interference and emission. We developed the Digital Frequency-Domain approach for photon acquisition and processing purposes using new digital technology. This approach is based on the use of photon detectors in photon counting mode and the digital heterodyning method to acquire data, which are analyzed in the frequency domain to provide the time of arrival of the photons. In conjunction with the use of pulsed laser sources, this method allows the determination of the time of arrival of the photons using the harmonic content of the frequency domain analysis. The parallel digital FD design is a powerful approach that offers the possibility to implement a variety of different applications in fluorescence spectroscopy and microscopy. It can be applied to fluorometry, Fluorescence Lifetime Imaging (FLIM), and Fluorescence Correlation Spectroscopy (FCS), as well as multi-frequency and multi-wavelength tissue imaging in compact portable medical devices. It dramatically reduces the acquisition time from the several minutes scale to the seconds scale, performs signal processing in a digital fashion avoiding RF emission, and is extremely inexpensive. This development is the result of a systematic study carried out on a previous design known as the FLIMBox, developed as part of a thesis of another graduate student. The extensive work done in maximizing the performance of the original FLIMBox led us to develop a new hardware solution with exciting and promising results and potential that were not possible in the previous hardware realization, where the signal harmonic content was limited by the FPGA technology. The new design permits acquisition of a much larger harmonic content of the sample response when it is excited with a pulsed light source in one single measurement using the digital mixing principle that was developed in the original design. Furthermore, we used the parallel digital FD principle to perform tissue imaging through Diffuse Optical Spectroscopy (DOS) measurements. We integrated the FLIMBox in a new system that uses a supercontinuum white laser with high brightness as a single light source and photomultipliers with large detection area, both allowing a high penetration depth with extremely low power at the sample. The parallel acquisition, achieved by using the FLIMBox, decreases the time required for standard serial systems that scan through all modulation frequencies.
Furthermore, the all-digital acquisition avoids analog noise, removes the analog mixer of the conventional frequency domain approach, and it does not generate radio-frequencies, normally present in current analog systems. We are able to obtain a very sensitive acquisition due to the high signal to noise ratio (S/N). The successful results obtained by utilizing digital technology in photon acquisition and processing, prompted us to extend the use of FPGA to other applications, such as phosphorescence detection. Using the FPGA concept we proposed possible solutions to outstanding problems with the current technology. In this thesis I discuss new possible scenarios where new FPGA chips are applied to spectral tissue imaging.
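The frequency-domain relations underlying this approach can be illustrated in a few lines of NumPy: take the harmonic content of a photon-arrival histogram and convert phase and modulation into lifetimes via τ_φ = tan(φ)/ω and τ_m = sqrt(1/m² - 1)/ω. The sketch assumes an ideal (delta-like) pulsed excitation and is not the FLIMBox firmware itself.

```python
import numpy as np

def fd_lifetimes(histogram, rep_rate_hz, harmonic=1):
    """
    Phase and modulation lifetimes from the harmonic content of a
    photon-arrival histogram covering one laser period (uniform bins).
    """
    spectrum = np.fft.rfft(histogram)
    omega = 2 * np.pi * harmonic * rep_rate_hz
    phase = -np.angle(spectrum[harmonic])              # phase lag of the emission
    modulation = np.abs(spectrum[harmonic]) / spectrum[0].real
    tau_phase = np.tan(phase) / omega
    tau_mod = np.sqrt(max(1.0 / modulation**2 - 1.0, 0.0)) / omega
    return tau_phase, tau_mod
```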
SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bednarz, B; Culberson, W; Bassetti, M
Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during the treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real-time from an ultrasound acquisition. Results: The completion of this work will result in several innovations including: a (2D) patch-like, MR and LINAC compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.
NASA Astrophysics Data System (ADS)
Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.
2018-05-01
Traffic monitoring on roads requires counting the number of vehicles that pass, which is particularly important for highway transportation management. It is therefore necessary to develop a system that is able to count the number of vehicles automatically, and video processing methods make such automatic counting possible. This research developed a vehicle counting system for a toll road. The system includes video acquisition, frame extraction, and image processing for each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on grayscale images for vehicle counting. The best vehicle counting results were obtained in the morning, with a counting accuracy of 86.36 %, whereas the lowest accuracy, 21.43 %, occurred in the evening. The difference between the morning and evening results is caused by the different illumination conditions, which change the pixel values in the images.
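A minimal OpenCV sketch of the background-subtraction-and-morphology pipeline described here is shown below. The virtual counting line, the area threshold, and the naive line-crossing logic (no per-vehicle tracking) are illustrative assumptions, not details from the paper.

```python
import cv2

def count_vehicles(video_path, line_y=400, min_area=1500):
    """Count blobs crossing a virtual line using background subtraction + morphology."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=40)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    count, prev_centroids = 0, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        mask = subtractor.apply(gray)                      # foreground (moving) pixels
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centroids = []
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue
            x, y, w, h = cv2.boundingRect(c)
            cy = y + h // 2
            centroids.append(cy)
            # count a vehicle when a blob centroid first moves past the virtual line
            if any(p < line_y <= cy for p in prev_centroids):
                count += 1
        prev_centroids = centroids
    cap.release()
    return count
```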
NASA Astrophysics Data System (ADS)
Gorpas, D.; Yova, D.
2009-07-01
One of the major challenges in biomedical imaging is the extraction of quantified information from the acquired images. Light-tissue interaction leads to the acquisition of images that present inconsistent intensity profiles, and thus the accurate identification of the regions of interest is a rather complicated process. On the other hand, the complex geometries and tangent objects that are very often present in the acquired images lead to either false detections or to the merging, shrinkage or expansion of the regions of interest. In this paper an algorithm, based on alternating sequential filtering and watershed transformation, is proposed for the segmentation of biomedical images. This algorithm has been tested in two applications, each based on a different acquisition system, and the results illustrate its accuracy in segmenting the regions of interest.
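A simplified scikit-image rendition of the combination named above (an alternating sequential filter followed by a marker-based watershed) is sketched below. The filter radii, Otsu thresholding, and distance-transform markers are generic choices for illustration and are not claimed to match the authors' algorithm.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import morphology, segmentation, feature, filters

def asf_watershed_segment(image, asf_radius=3):
    """Alternating sequential filtering followed by a marker-based watershed."""
    # Alternating sequential filter: openings and closings with growing disks
    smoothed = image.astype(float)
    for r in range(1, asf_radius + 1):
        selem = morphology.disk(r)
        smoothed = morphology.opening(smoothed, selem)
        smoothed = morphology.closing(smoothed, selem)
    # Threshold, then split touching objects with a distance-transform watershed
    binary = smoothed > filters.threshold_otsu(smoothed)
    distance = ndi.distance_transform_edt(binary)
    regions, _ = ndi.label(binary)
    coords = feature.peak_local_max(distance, min_distance=5, labels=regions)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return segmentation.watershed(-distance, markers, mask=binary)
```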
Kawakami, Shogo; Ishiyama, Hiromichi; Satoh, Takefumi; Tsumura, Hideyasu; Sekiguchi, Akane; Takenaka, Kouji; Tabata, Ken-Ichi; Iwamura, Masatsugu; Hayakawa, Kazushige
2017-08-01
To compare prostate contours on conventional stepping transverse image acquisitions with those on twister-based sagittal image acquisitions. Twenty prostate cancer patients who were planned to have permanent interstitial prostate brachytherapy were prospectively accrued. A transrectal ultrasonography probe was inserted, with the patient in the lithotomy position. Transverse images were obtained with stepping movement of the transverse transducer. In the same patient, sagittal images were also obtained through rotation of the sagittal transducer using the "Twister" mode. The differences in prostate size between the two types of image acquisitions were compared. The relationships among the differences between the two types of image acquisitions, dose-volume histogram (DVH) parameters on the post-implant computed tomography (CT) analysis, and other factors were analyzed. The sagittal image acquisitions showed a larger prostate size compared to the transverse image acquisitions, especially in the anterior-posterior (AP) direction (p < 0.05). Interestingly, the relative size of the prostate apex in the AP direction in sagittal image acquisitions compared to that in transverse image acquisitions was correlated with DVH parameters such as D90 (R = 0.518, p = 0.019) and V100 (R = 0.598, p = 0.005). There were small but significant differences in the prostate contours between the transverse and the sagittal planning image acquisitions. Furthermore, our study suggests that the differences between the two types of image acquisitions might be correlated with the dosimetric results on CT analysis.
Yan, Xu; Zhou, Minxiong; Ying, Lingfang; Yin, Dazhi; Fan, Mingxia; Yang, Guang; Zhou, Yongdi; Song, Fan; Xu, Dongrong
2013-01-01
Diffusion kurtosis imaging (DKI) is a new method of magnetic resonance imaging (MRI) that provides non-Gaussian information that is not available in conventional diffusion tensor imaging (DTI). DKI requires data acquisition at multiple b-values for parameter estimation; this process is usually time-consuming. Therefore, fewer b-values are preferable to expedite acquisition. In this study, we carefully evaluated various acquisition schemas using different numbers and combinations of b-values. Acquisition schemas that sample b-values concentrated toward the two ends of the range proved optimal. Compared to conventional schemas using equally spaced b-values (ESB), optimized schemas require fewer b-values to minimize fitting errors in parameter estimation and may thus significantly reduce scanning time. Following a ranked list of optimized schemas resulting from the evaluation, we recommend the 3b schema based on its estimation accuracy and time efficiency, which needs data from only 3 b-values at 0, around 800 and around 2600 s/mm2, respectively. Analyses using voxel-based analysis (VBA) and region-of-interest (ROI) analysis with human DKI datasets support the use of the optimized 3b (0, 1000, 2500 s/mm2) DKI schema in practical clinical applications. PMID:23735303
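With only three b-values, the scalar DKI signal model ln S(b) = ln S0 - b·D + (1/6)·b²·D²·K can be solved exactly per voxel by linear least squares. The sketch below demonstrates this on a synthetic voxel using a (0, 1000, 2500 s/mm²) schema; it is a per-voxel scalar fit for illustration, not the full directional DKI estimation.

```python
import numpy as np

def fit_dki_3b(signals, bvals=(0.0, 1000.0, 2500.0)):
    """
    Estimate apparent diffusivity D and kurtosis K from signals at three b-values
    using  ln S(b) = ln S0 - b*D + (1/6) * b^2 * D^2 * K.
    """
    b = np.asarray(bvals, dtype=float)
    y = np.log(np.asarray(signals, dtype=float))
    # Linear in the parameters (ln S0, c1 = D, c2 = D^2 * K / 6)
    A = np.column_stack([np.ones_like(b), -b, b ** 2])
    ln_s0, c1, c2 = np.linalg.lstsq(A, y, rcond=None)[0]
    D = c1
    K = 6.0 * c2 / (D ** 2) if D > 0 else 0.0
    return D, K

# Synthetic voxel with D = 1.0e-3 mm^2/s and K = 1.0
b = np.array([0.0, 1000.0, 2500.0])
s = np.exp(-b * 1.0e-3 + (b ** 2) * (1.0e-3 ** 2) * 1.0 / 6.0)
print(fit_dki_3b(s, b))   # ~ (1.0e-3, 1.0)
```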
Code of Federal Regulations, 2012 CFR
2012-10-01
..., Morgantown, WV 26505. (j) Preemployment physical examination means any medical examination which includes a... image acquisition systems that detect X-ray signals using a cassette-based photostimulable storage... radiographic image to electronic signals which are then processed and stored so they can be displayed. (2...
Sinkó, József; Kákonyi, Róbert; Rees, Eric; Metcalf, Daniel; Knight, Alex E.; Kaminski, Clemens F.; Szabó, Gábor; Erdélyi, Miklós
2014-01-01
Localization-based super-resolution microscopy image quality depends on several factors such as dye choice and labeling strategy, microscope quality and user-defined parameters such as frame rate and number as well as the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, dye and acquisition parameters. Example results are shown and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments. PMID:24688813
The new frontiers of multimodality and multi-isotope imaging
NASA Astrophysics Data System (ADS)
Behnam Azad, Babak; Nimmagadda, Sridhar
2014-06-01
Technological advances in imaging systems and the development of target-specific imaging tracers have grown rapidly over the past two decades. Recent progress in "all-in-one" imaging systems that allow for automated image coregistration has significantly added to the growth of this field. These developments include ultra high resolution PET and SPECT scanners that can be integrated with CT or MR, resulting in PET/CT, SPECT/CT, SPECT/PET and PET/MRI scanners for simultaneous high resolution, high sensitivity anatomical and functional imaging. These technological developments have also resulted in drastic enhancements in image quality and acquisition time while eliminating cross-compatibility issues between modalities. Furthermore, the most cutting edge technology, though mostly preclinical, also allows for simultaneous multimodality multi-isotope image acquisition and image reconstruction based on radioisotope decay characteristics. These scientific advances, in conjunction with the explosion in the development of highly specific multimodality molecular imaging agents, may aid in realizing simultaneous imaging of multiple biological processes and pave the way towards more efficient diagnosis and improved patient care.
High-energy proton imaging for biomedical applications
NASA Astrophysics Data System (ADS)
Prall, M.; Durante, M.; Berger, T.; Przybyla, B.; Graeff, C.; Lang, P. M.; Latessa, C.; Shestov, L.; Simoniello, P.; Danly, C.; Mariam, F.; Merrill, F.; Nedrow, P.; Wilde, C.; Varentsov, D.
2016-06-01
The charged particle community is looking for techniques exploiting proton interactions instead of X-ray absorption for creating images of human tissue. Due to multiple Coulomb scattering inside the measured object it has shown to be highly non-trivial to achieve sufficient spatial resolution. We present imaging of biological tissue with a proton microscope. This device relies on magnetic optics, distinguishing it from most published proton imaging methods. For these methods reducing the data acquisition time to a clinically acceptable level has turned out to be challenging. In a proton microscope, data acquisition and processing are much simpler. This device even allows imaging in real time. The primary medical application will be image guidance in proton radiosurgery. Proton images demonstrating the potential for this application are presented. Tomographic reconstructions are included to raise awareness of the possibility of high-resolution proton tomography using magneto-optics.
A Metadata-Based Approach for Analyzing UAV Datasets for Photogrammetric Applications
NASA Astrophysics Data System (ADS)
Dhanda, A.; Remondino, F.; Santana Quintero, M.
2018-05-01
This paper proposes a methodology for pre-processing and analysing Unmanned Aerial Vehicle (UAV) datasets before photogrammetric processing. In cases where images are gathered without a detailed flight plan and at regular acquisition intervals, the datasets can be quite large and time-consuming to process. This paper proposes a method to calculate the image overlap and filter out images to reduce large block sizes and speed up photogrammetric processing. The Python-based algorithm that implements this methodology leverages the metadata in each image to determine the end and side overlap of grid-based UAV flights. Utilizing user input, the algorithm filters out images that are unneeded for photogrammetric processing. The result is an algorithm that can speed up photogrammetric processing and provide valuable information to the user about the flight path.
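To make the metadata-driven filtering concrete, the sketch below estimates forward (end) overlap from already-parsed image metadata and drops redundant frames. It is a minimal illustration under assumed field names (lat, lon, alt, focal, sensor_w, sensor_h) and a nadir-looking camera flying along the sensor-height direction; it is not the authors' tool, which reads this information directly from the image EXIF data.

```python
import math

def ground_footprint(alt_m, focal_mm, sensor_w_mm, sensor_h_mm):
    """Ground footprint (width, height) in metres for a nadir-looking camera."""
    return (alt_m * sensor_w_mm / focal_mm, alt_m * sensor_h_mm / focal_mm)

def metres_between(lat1, lon1, lat2, lon2):
    """Approximate ground distance between two WGS84 points (equirectangular)."""
    r = 6371000.0
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return r * math.hypot(x, y)

def forward_overlap(img_a, img_b):
    """End (forward) overlap between two consecutive images along a flight line."""
    _, footprint_h = ground_footprint(img_a["alt"], img_a["focal"],
                                      img_a["sensor_w"], img_a["sensor_h"])
    d = metres_between(img_a["lat"], img_a["lon"], img_b["lat"], img_b["lon"])
    return max(0.0, 1.0 - d / footprint_h)

def filter_images(images, target_overlap=0.75):
    """Keep the first image, then keep an image only once overlap drops below target."""
    kept = [images[0]]
    for img in images[1:]:
        if forward_overlap(kept[-1], img) < target_overlap:
            kept.append(img)
    return kept
```

Side overlap between adjacent flight lines could be computed the same way, using the footprint width and the cross-track distance between the lines.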
NASA Astrophysics Data System (ADS)
Martinez, J. D.; Benlloch, J. M.; Cerda, J.; Lerche, Ch. W.; Pavon, N.; Sebastia, A.
2004-06-01
This paper is framed within the Positron Emission Mammography (PEM) project, whose aim is to develop an innovative gamma ray sensor for early breast cancer diagnosis. Currently, breast cancer is detected using low-energy X-ray screening. However, functional imaging techniques such as PET/FDG could be employed to detect breast cancer and track disease changes with greater sensitivity. Furthermore, a small and less expensive PET camera can be utilized, minimizing the main problems of whole-body PET. To accomplish these objectives, we are developing a new gamma ray sensor based on a newly released photodetector. However, a dedicated PEM detector requires an adequate data acquisition (DAQ) and processing system. The characterization of gamma events needs a free-running analog-to-digital converter (ADC) with sampling rates of more than 50 MS/s and must achieve event count rates up to 10 MHz. Moreover, comprehensive data processing must be carried out to obtain event parameters necessary for performing the image reconstruction. A new generation digital signal processor (DSP) has been used to comply with these requirements. This device enables us to manage the DAQ system at up to 80 MS/s and to execute intensive calculations on the detector signals. This paper describes our designed DAQ and processing architecture, whose main features are: very high-speed data conversion, multichannel synchronized acquisition with zero dead time, a digital triggering scheme, and high throughput of data with an extensive optimization of the signal processing algorithms.
V S, Unni; Mishra, Deepak; Subrahmanyam, G R K S
2016-12-01
The need for image fusion in current image processing systems is increasing mainly due to the increased number and variety of image acquisition techniques. Image fusion is the process of combining substantial information from several sensors using mathematical techniques in order to create a single composite image that will be more comprehensive and thus more useful for a human operator or other computer vision tasks. This paper presents a new approach to multifocus image fusion based on sparse signal representation. Block-based compressive sensing integrated with a projection-driven compressive sensing (CS) recovery that encourages sparsity in the wavelet domain is used as a method to get the focused image from a set of out-of-focus images. Compression is achieved during the image acquisition process using a block compressive sensing method. An adaptive thresholding technique within the smoothed projected Landweber recovery process reconstructs high-resolution focused images from low-dimensional CS measurements of out-of-focus images. Discrete wavelet transform and dual-tree complex wavelet transform are used as the sparsifying basis for the proposed fusion. The main finding lies in the fact that sparsification enables a better selection of the fusion coefficients and hence better fusion. A Laplacian mixture model fit is done in the wavelet domain, and estimation of the probability density function (pdf) parameters by expectation maximization leads us to the proper selection of the coefficients of the fused image. Compared with a fusion scheme that does not employ the projected Landweber (PL) recovery and with other existing CS-based fusion approaches, the proposed method is observed to perform better even with fewer samples.
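For orientation, the kind of wavelet-domain coefficient selection the authors improve upon can be sketched with a conventional baseline (average the approximation band, keep the larger-magnitude detail coefficient at each location) using PyWavelets. This is only a simplified baseline for illustration; the CS acquisition, projected Landweber recovery, and Laplacian-mixture coefficient selection described above are not reproduced here.

```python
import numpy as np
import pywt

def fuse_multifocus(img_a, img_b, wavelet="db2", levels=3):
    """Baseline multifocus fusion: average approximation coefficients,
    keep the detail coefficient with the larger magnitude at each location."""
    ca = pywt.wavedec2(np.asarray(img_a, dtype=float), wavelet, level=levels)
    cb = pywt.wavedec2(np.asarray(img_b, dtype=float), wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation band
    for bands_a, bands_b in zip(ca[1:], cb[1:]):         # (H, V, D) per level
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)
```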
Robust Depth Image Acquisition Using Modulated Pattern Projection and Probabilistic Graphical Models
Kravanja, Jaka; Žganec, Mario; Žganec-Gros, Jerneja; Dobrišek, Simon; Štruc, Vitomir
2016-01-01
Depth image acquisition with structured light approaches in outdoor environments is a challenging problem due to external factors, such as ambient sunlight, which commonly affect the acquisition procedure. This paper presents a novel structured light sensor designed specifically for operation in outdoor environments. The sensor exploits a modulated sequence of structured light projected onto the target scene to counteract environmental factors and estimate a spatial distortion map in a robust manner. The correspondence between the projected pattern and the estimated distortion map is then established using a probabilistic framework based on graphical models. Finally, the depth image of the target scene is reconstructed using a number of reference frames recorded during the calibration process. We evaluate the proposed sensor on experimental data in indoor and outdoor environments and present comparative experiments with other existing methods, as well as commercial sensors. PMID:27775570
NASA Astrophysics Data System (ADS)
Dijk, J.; Bijl, P.; Oppeneer, M.; ten Hove, R. J. M.; van Iersel, M.
2017-10-01
The Electro-Optical Signal Transmission and Ranging (EOSTAR) model is an image-based Tactical Decision Aid (TDA) for thermal imaging systems (MWIR/LWIR) developed for a sea environment with an extensive atmosphere model. The Triangle Orientation Discrimination (TOD) Target Acquisition model calculates the sensor and signal processing effects on a set of input triangle test pattern images, judges their orientation using humans or a Human Visual System (HVS) model and derives the system image quality and operational field performance from the correctness of the responses. Combining the TOD model and EOSTAR provides the possibility to model Target Acquisition (TA) performance over the exact path from scene to observer. In this method, ship-representative TOD test patterns are placed at the position of the real target; subsequently, the combined effects of the environment (atmosphere, background, etc.), sensor and signal processing on the image are calculated using EOSTAR, and finally the results are judged by humans. The thresholds are converted into Detection-Recognition-Identification (DRI) ranges of the real target. Experiments show that combining the TOD model and the EOSTAR model is indeed possible. The resulting images look natural and provide insight into the possibilities of combining the two models. The TOD observation task can be done well by humans, and the measured TOD is consistent with analytical TOD predictions for the same camera that was modeled in the ECOMOS project.
Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber
NASA Technical Reports Server (NTRS)
Bales, John W.
1996-01-01
The F64 frame grabber is a high performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the ISA 16 bit bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has a simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.
Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph
2018-06-01
Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult. Second, most pipelines are implemented in special hardware, resulting in limited flexibility of implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and regarding its run time. The pipeline shows image quality comparable to that of a clinical system and, backed by point-spread-function measurements, a comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
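As a small illustration of the final stages such a pipeline must include (the work above covers the full chain from beamforming to B-mode output), the sketch below converts beamformed RF scanlines to a log-compressed B-mode image. This is a generic textbook formulation, not SUPRA's implementation or API; the dynamic range value is an arbitrary choice.

```python
import numpy as np
from scipy.signal import hilbert

def rf_to_bmode(rf_lines, dynamic_range_db=60.0):
    """Convert beamformed RF scanlines (samples x lines) to a B-mode image.

    Envelope detection via the analytic signal, then log compression to the
    requested dynamic range, scaled to [0, 1] for display.
    """
    envelope = np.abs(hilbert(rf_lines, axis=0))      # per-scanline envelope
    envelope /= envelope.max() + 1e-12                # normalise to the peak
    bmode_db = 20.0 * np.log10(envelope + 1e-12)      # convert to decibels
    bmode = (bmode_db + dynamic_range_db) / dynamic_range_db
    return np.clip(bmode, 0.0, 1.0)
```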
MMX-I: A data-processing software for multi-modal X-ray imaging and tomography
NASA Astrophysics Data System (ADS)
Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.
2017-06-01
Scanning hard X-ray imaging allows simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil makes it possible to perform multimodal imaging and tomography during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the very large (several hundred gigabytes) multimodal data sets generated. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of biology, life sciences, geology and geobiology), much of which has no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. Therefore we have developed a multi-platform (Mac, Windows and Linux 64-bit) data processing tool, which is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in handling such large data sets.
NeuroPG: open source software for optical pattern generation and data acquisition
Avants, Benjamin W.; Murphy, Daniel B.; Dapello, Joel A.; Robinson, Jacob T.
2015-01-01
Patterned illumination using a digital micromirror device (DMD) is a powerful tool for optogenetics. Compared to a scanning laser, DMDs are inexpensive and can easily create complex illumination patterns. Combining these complex spatiotemporal illumination patterns with optogenetics allows DMD-equipped microscopes to probe neural circuits by selectively manipulating the activity of many individual cells or many subcellular regions at the same time. To use DMDs to study neural activity, scientists must develop specialized software to coordinate optical stimulation patterns with the acquisition of electrophysiological and fluorescence data. To meet this growing need we have developed open source optical pattern generation software for neuroscience—NeuroPG—that combines DMD control, sample visualization, and data acquisition in one application. Built on a MATLAB platform, NeuroPG can also process, analyze, and visualize data. The software is designed specifically for the Mightex Polygon400; however, as an open source package, NeuroPG can be modified to incorporate any data acquisition, imaging, or illumination equipment that is compatible with MATLAB’s Data Acquisition and Image Acquisition toolboxes. PMID:25784873
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demos, Stavros; Levenson, Richard
The present disclosure relates to a method for analyzing tissue specimens. In one implementation the method involves obtaining a tissue sample and exposing the sample to one or more fluorophores as contrast agents to enhance contrast of subcellular compartments of the tissue sample. The tissue sample is illuminated by an ultraviolet (UV) light having a wavelength between about 200 nm to about 400 nm, with the wavelength being selected to result in penetration to only a specified depth below a surface of the tissue sample. Inter-image operations between images acquired under different imaging parameters allow for improvement of the image quality via removal of unwanted image components. A microscope may be used to image the tissue sample and provide the image to an image acquisition system that makes use of a camera. The image acquisition system may create a corresponding image that is transmitted to a display system for processing and display.
SU-E-I-09: The Impact of X-Ray Scattering On Image Noise for Dedicated Breast CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, K; Gazi, P; Boone, J
2015-06-15
Purpose: To quantify the impact of detected x-ray scatter on image noise in flat panel based dedicated breast CT systems and to determine the optimal scanning geometry given practical trade-offs between radiation dose and scatter reduction. Methods: Four different uniform polyethylene cylinders (104, 131, 156, and 184 mm in diameter) were scanned as the phantoms on a dedicated breast CT scanner developed in our laboratory. Both stationary projection imaging and rotational cone-beam CT imaging were performed. For each acquisition type, three different x-ray beam collimations were used (12, 24, and 109 mm measured at isocenter). The aim was to quantify image noise properties (pixel variance, SNR, and image NPS) under different levels of x-ray scatter, in order to optimize the scanning geometry. For both projection images and reconstructed CT images, individual pixel variance and NPS were determined and compared. Noise measurements from the CT images were also performed with different detector binning modes and reconstruction matrix sizes. Noise propagation was also tracked throughout the intermediate steps of cone-beam CT reconstruction, including the inverse-logarithmic process and Fourier filtering before backprojection. Results: Image noise was lower in the presence of higher scatter levels. For the 184 mm polyethylene phantom, the image noise (measured in pixel variance) was ∼30% lower with full cone-beam acquisition compared to a narrow (12 mm) fan-beam acquisition. This trend is consistent across all phantom sizes and throughout all steps of CT image reconstruction. Conclusion: From purely a noise perspective, the cone-beam geometry (i.e. the full cone-angle acquisition) produces lower image noise compared to the lower-scatter fan-beam acquisition for breast CT. While these results are relevant in homogeneous phantoms, the full impact of scatter on noise in bCT should involve contrast-to-noise-ratio measurements in heterogeneous phantoms if the goal is to optimize the scanning geometry for dedicated breast CT. This work was supported by a grant from the National Institute for Biomedical Imaging and Bioengineering (R01 EB002138).
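A generic way to obtain the reported noise metrics from a uniform phantom slice is sketched below: pixel variance and a 2-D noise power spectrum averaged over non-overlapping ROIs. The ROI size, pixel size, and the simple mean-subtraction detrending are assumptions for illustration rather than the authors' exact analysis pipeline.

```python
import numpy as np

def noise_metrics(uniform_slice, roi=64, px=0.1):
    """Pixel variance and 2-D NPS averaged over non-overlapping ROIs.

    uniform_slice : 2-D array taken from a uniform region of the phantom
    roi           : ROI side length in pixels
    px            : pixel size (e.g. in mm), used in the NPS normalisation
    """
    ny, nx = uniform_slice.shape
    nps_accum, variances, n = np.zeros((roi, roi)), [], 0
    for y in range(0, ny - roi + 1, roi):
        for x in range(0, nx - roi + 1, roi):
            patch = uniform_slice[y:y + roi, x:x + roi].astype(float)
            patch -= patch.mean()                      # remove the DC component
            variances.append(patch.var())
            nps_accum += np.abs(np.fft.fft2(patch)) ** 2
            n += 1
    nps = nps_accum / n * (px * px) / (roi * roi)      # standard NPS normalisation
    return np.mean(variances), np.fft.fftshift(nps)
```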
Velocity fields and spectrum peculiarities in Beta Cephei stars
NASA Technical Reports Server (NTRS)
Lesh, J. R.
1980-01-01
The acquisition of short wavelength spectra of Beta Cephei variable stars from the International Ultraviolet Explorer is reported. A total of 122 images of 10 variable stars and 3 comparison stars were obtained. All of the images were observed in the high dispersion mode through a small aperture. The development of image processing methods is also briefly discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernstein, Dr. Ira
This grant was awarded in support of Phase 2 of the University of Vermont Center for Biomedical Imaging. Phase 2 outlined several specific aims, including: the development of expertise in MRI and fMRI imaging and their applications; the acquisition of peer-reviewed extramural funding in support of the Center; and the development of a Core Imaging Advisory Board, fee structure, and protocol review and approval process.
NASA Astrophysics Data System (ADS)
Lin, Qingyang; Andrew, Matthew; Thompson, William; Blunt, Martin J.; Bijeljic, Branko
2018-05-01
Non-invasive laboratory-based X-ray microtomography has been widely applied in many industrial and research disciplines. However, the main barrier to the use of laboratory systems compared to a synchrotron beamline is its much longer image acquisition time (hours per scan compared to seconds to minutes at a synchrotron), which results in limited application for dynamic in situ processes. Therefore, the majority of existing laboratory X-ray microtomography is limited to static imaging; relatively fast imaging (tens of minutes per scan) can only be achieved by sacrificing imaging quality, e.g. reducing exposure time or number of projections. To alleviate this barrier, we introduce an optimized implementation of a well-known iterative reconstruction algorithm that allows users to reconstruct tomographic images with reasonable image quality, but requires lower X-ray signal counts and fewer projections than conventional methods. Quantitative analysis and comparison between the iterative and the conventional filtered back-projection reconstruction algorithm was performed using a sandstone rock sample with and without liquid phases in the pore space. Overall, by implementing the iterative reconstruction algorithm, the required image acquisition time for samples such as this, with sparse object structure, can be reduced by a factor of up to 4 without measurable loss of sharpness or signal to noise ratio.
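The flavour of such an iterative scheme can be conveyed with a toy SIRT-style loop built from rotation-based parallel-beam projectors. This is only an illustrative sketch (the projectors, relaxation factor, and non-negativity constraint are placeholders), not the optimized implementation evaluated in the study.

```python
import numpy as np
from scipy.ndimage import rotate

def project(img, angles_deg):
    """Simple parallel-beam forward projector: rotate, then sum along columns."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sino, angles_deg, shape):
    """Adjoint-like operator: smear each projection and rotate it back."""
    bp = np.zeros(shape)
    for a, p in zip(angles_deg, sino):
        bp += rotate(np.tile(p, (shape[0], 1)), -a, reshape=False, order=1)
    return bp / len(angles_deg)

def sirt(sino, angles_deg, shape, n_iter=50, relax=1.0):
    """Iterative reconstruction: repeatedly backproject the normalised residual."""
    x = np.zeros(shape)
    ray_norm = project(np.ones(shape), angles_deg)                 # row sums
    pix_norm = backproject(np.ones_like(sino), angles_deg, shape)  # column sums
    for _ in range(n_iter):
        residual = (sino - project(x, angles_deg)) / (ray_norm + 1e-12)
        x += relax * backproject(residual, angles_deg, shape) / (pix_norm + 1e-12)
        np.clip(x, 0.0, None, out=x)   # simple non-negativity constraint
    return x
```

With noisy, sparsely sampled sinograms this kind of loop trades reconstruction time for robustness, which is the trade-off the study exploits to shorten acquisition.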
Motion artifact detection in four-dimensional computed tomography images
NASA Astrophysics Data System (ADS)
Bouilhol, G.; Ayadi, M.; Pinho, R.; Rit, S.; Sarrut, D.
2014-03-01
Motion artifacts appear in four-dimensional computed tomography (4DCT) images because of suboptimal acquisition parameters or patient breathing irregularities. The frequency of motion artifacts is high, and they may introduce errors in radiation therapy treatment planning. Motion artifact detection can be useful for image quality assessment and 4D reconstruction improvement, but manual detection in many images is a tedious process. We propose a novel method to evaluate the quality of 4DCT images by automatic detection of motion artifacts. The method was used to evaluate the impact of the optimization of acquisition parameters on image quality at our institute. 4DCT images of 114 lung cancer patients were analyzed. Acquisitions were performed with a rotation period of 0.5 seconds and a pitch of 0.1 (74 patients) or 0.081 (40 patients). A sensitivity of 0.70 and a specificity of 0.97 were observed. End-exhale phases were less prone to motion artifacts. In phases where motion speed is high, the number of detected artifacts was systematically reduced with a pitch of 0.081 instead of 0.1, and the mean reduction was 0.79. The increase in the number of patients with no artifact detected was statistically significant for the 10%, 70% and 80% respiratory phases, indicating a substantial image quality improvement.
PDT - PARTICLE DISPLACEMENT TRACKING SOFTWARE
NASA Technical Reports Server (NTRS)
Wernet, M. P.
1994-01-01
Particle Imaging Velocimetry (PIV) is a quantitative velocity measurement technique for measuring instantaneous planar cross sections of a flow field. The technique offers very high precision (1%) directionally resolved velocity vector estimates, but its use has been limited by high equipment costs and complexity of operation. Particle Displacement Tracking (PDT) is an all-electronic PIV data acquisition and reduction procedure which is simple, fast, and easily implemented. The procedure uses a low power, continuous wave laser and a Charged Coupled Device (CCD) camera to electronically record the particle images. A frame grabber board in a PC is used for data acquisition and reduction processing. PDT eliminates the need for photographic processing, system costs are moderately low, and reduced data are available within seconds of acquisition. The technique results in velocity estimate accuracies on the order of 5%. The software is fully menu-driven from the acquisition to the reduction and analysis of the data. Options are available to acquire a single image or 5- or 25-field series of images separated in time by multiples of 1/60 second. The user may process each image, specifying its boundaries to remove unwanted glare from the periphery and adjusting its background level to clearly resolve the particle images. Data reduction routines determine the particle image centroids and create time history files. PDT then identifies the velocity vectors which describe the particle movement in the flow field. Graphical data analysis routines are included which allow the user to graph the time history files and display the velocity vector maps, interpolated velocity vector grids, iso-velocity vector contours, and flow streamlines. The PDT data processing software is written in FORTRAN 77 and the data acquisition routine is written in C-Language for 80386-based IBM PC compatibles running MS-DOS v3.0 or higher. Machine requirements include 4 MB RAM (3 MB Extended), a single or multiple frequency RGB monitor (EGA or better), a math co-processor, and a pointing device. The printers supported by the graphical analysis routines are the HP Laserjet+, Series II, and Series III with at least 1.5 MB memory. The data acquisition routines require the EPIX 4-MEG video board and optional 12.5MHz oscillator, and associated EPIX software. Data can be acquired from any CCD or RS-170 compatible video camera with pixel resolution of 600hX400v or better. PDT is distributed on one 5.25 inch 360K MS-DOS format diskette. Due to the use of required proprietary software, executable code is not provided on the distribution media. Compiling the source code requires the Microsoft C v5.1 compiler, Microsoft QuickC v2.0, the Microsoft Mouse Library, EPIX Image Processing Libraries, the Microway NDP-Fortran-386 v2.1 compiler, and the Media Cybernetics HALO Professional Graphics Kernal System. Due to the complexities of the machine requirements, COSMIC strongly recommends the purchase and review of the documentation prior to the purchase of the program. The source code, and sample input and output files are provided in PKZIP format; the PKUNZIP utility is included. PDT was developed in 1990. All trade names used are the property of their respective corporate owners.
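The core matching step of a particle displacement tracker can be illustrated independently of the original FORTRAN/C package: pair particle centroids from two successive exposures by nearest-neighbour search and convert the displacements to velocity vectors. The function name, rejection threshold, and units below are illustrative assumptions; the actual software also handles time-coded multi-exposure images, background adjustment, and the graphical analysis described above.

```python
import numpy as np
from scipy.spatial import cKDTree

def track_particles(centroids_t0, centroids_t1, dt, max_disp=10.0):
    """Match centroids between two exposures and return velocity vectors.

    centroids_t0, centroids_t1 : (N, 2) arrays of particle positions (pixels)
    dt                         : time between exposures (seconds)
    max_disp                   : reject matches farther apart than this (pixels)
    """
    tree = cKDTree(centroids_t1)
    dist, idx = tree.query(centroids_t0, k=1)   # nearest neighbour in frame t1
    valid = dist <= max_disp
    displacement = centroids_t1[idx[valid]] - centroids_t0[valid]
    velocity = displacement / dt                # pixels per second
    return centroids_t0[valid], velocity
```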
Functional imaging of conditioned aversive emotional responses in antisocial personality disorder.
Schneider, F; Habel, U; Kessler, C; Posse, S; Grodd, W; Müller-Gärtner, H W
2000-01-01
Individuals with antisocial personality disorder (n = 12) and healthy controls (n = 12) were examined for cerebral regional activation involved in the processing of negative affect. A differential aversive classical conditioning paradigm was applied with odors as unconditioned stimuli and faces as conditioned stimuli. Functional magnetic resonance imaging (fMRI) based on echo-planar imaging was used while cerebral activity was studied during habituation, acquisition, and extinction. Individually defined cerebral regions were analyzed. Both groups indicated behavioral conditioning following subjective ratings of emotional valence to conditioned stimuli. Differential effects were found during acquisition in the amygdala and dorsolateral prefrontal cortex. Controls showed signal decreases, patients signal increases. These preliminary results revealed unexpected signal increases in cortical/subcortical areas of patients. The increases may result from an additional effort put in by these individuals to form negative emotional associations, a pattern of processing that may correspond to their characteristic deviant emotional behavior. Copyright 2000 S. Karger AG, Basel.
Diagnostic report acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Brooks, Everett G.; Rothman, Melvyn L.
1991-07-01
The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle on/off the report on the display screen. This report describes process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.
Optronic System Imaging Simulator (OSIS): imager simulation tool of the ECOMOS project
NASA Astrophysics Data System (ADS)
Wegner, D.; Repasi, E.
2018-04-01
ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defense and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses two approaches to calculate Target Acquisition (TA) ranges, the analytical TRM4 model and the image-based Triangle Orientation Discrimination model (TOD). In this paper the IR imager simulation tool, Optronic System Imaging Simulator (OSIS), is presented. It produces virtual camera imagery required by the TOD approach. Pristine imagery is degraded by various effects caused by atmospheric attenuation, optics, detector footprint, sampling, fixed pattern noise, temporal noise and digital signal processing. Resulting images might be presented to observers or could be further processed for automatic image quality calculations. For convenience OSIS incorporates camera descriptions and intermediate results provided by TRM4. For input OSIS uses pristine imagery tied with meta information about scene content, its physical dimensions, and gray level interpretation. These images represent planar targets placed at specified distances to the imager. Furthermore, OSIS is extended by a plugin functionality that enables integration of advanced digital signal processing techniques in ECOMOS such as compression, local contrast enhancement, digital turbulence mitigation, to name but a few. By means of this image-based approach image degradations and image enhancements can be investigated, which goes beyond the scope of the analytical TRM4 model.
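A minimal sketch of the kind of degradation chain such a simulator applies (optics blur, detector footprint and sampling, fixed-pattern and temporal noise) is given below. The PSF width, detector pitch, and noise levels are arbitrary placeholders, not ECOMOS or TRM4 parameters, and the atmospheric and signal-processing stages are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def degrade(pristine, psf_sigma=1.5, det_pitch=4,
            fpn_sigma=0.01, read_sigma=0.02, seed=0):
    """Apply a simplified imager model to a pristine scene (2-D array in [0, 1])."""
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(pristine, psf_sigma)               # optics blur (PSF)
    footprint = uniform_filter(blurred, det_pitch)               # detector footprint
    sampled = footprint[::det_pitch, ::det_pitch]                # detector sampling
    fpn = 1.0 + fpn_sigma * rng.standard_normal(sampled.shape)   # fixed-pattern gain
    noisy = sampled * fpn + read_sigma * rng.standard_normal(sampled.shape)
    return np.clip(noisy, 0.0, 1.0)
```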
Accessible biometrics: A frustrated total internal reflection approach to imaging fingerprints.
Smith, Nathan D; Sharp, James S
2017-05-01
Fingerprints are widely used as a means of identifying persons of interest because of the highly individual nature of the spatial distribution and types of features (or minutiae) found on the surface of a finger. This individuality has led to their wide application in the comparison of fingerprints found at crime scenes with those taken from known offenders and suspects in custody. However, despite recent advances in machine vision technology and image processing techniques, fingerprint evidence is still widely being collected using outdated practices involving ink and paper - a process that can be both time consuming and expensive. Reduction of forensic service budgets increasingly requires that evidence be gathered and processed more rapidly and efficiently. However, many of the existing digital fingerprint acquisition devices have proven too expensive to roll out on a large scale. As a result, new low-cost imaging technologies are required to increase the quality and throughput of the processing of fingerprint evidence. Here we describe an inexpensive approach to digital fingerprint acquisition that is based upon frustrated total internal reflection imaging. The quality and resolution of the images produced are shown to be as good as those currently acquired using ink and paper based methods. The same imaging technique is also shown to be capable of imaging powdered fingerprints that have been lifted from a crime scene using adhesive tape or gel lifters. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Qing; Lin, Haibo; Xiu, Yu-Feng; Wang, Ruixue; Yi, Chuijie
A test platform for wheat precision seeding based on image processing techniques is designed to support the development of a wheat precision seed metering device with high efficiency and precision. Using image processing techniques, this platform gathers images of seeds (wheat) on the conveyer belt as they fall from the seed metering device. These data are then processed and analyzed to calculate the qualified rate, reseeding rate, leakage sowing rate, etc. This paper introduces the overall structure and design parameters of the platform and the hardware and software of the image acquisition system, as well as the method of seed identification and seed-spacing measurement based on image thresholding and locating each seed's center. Analysis of the experimental results shows that the measurement error is less than ±1 mm.
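A simplified version of such threshold-based seed detection and spacing measurement might look as follows; the threshold, minimum blob area, and pixel-to-millimetre scale are assumptions for illustration, not the platform's calibrated values.

```python
import numpy as np
from scipy import ndimage

def seed_spacing(gray_image, threshold=0.5, min_area=20, mm_per_pixel=0.2):
    """Detect seed centroids by thresholding and return gaps between neighbours (mm)."""
    mask = gray_image > threshold                        # seeds brighter than the belt
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_area]   # drop small specks
    if len(keep) < 2:
        return np.array([])
    centroids = np.array(ndimage.center_of_mass(mask, labels, keep))
    order = np.argsort(centroids[:, 1])                  # sort along the belt direction
    gaps_px = np.diff(centroids[order, 1])
    return gaps_px * mm_per_pixel
```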
Software manual for operating particle displacement tracking data acquisition and reduction system
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
The software manual is presented. The steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large memory buffer frame-grabber board to record low velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single-exposure images is time-coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
Li, Xueming; Zheng, Shawn; Agard, David A.; Cheng, Yifan
2015-01-01
Newly developed direct electron detection cameras have a high image output frame rate that enables recording dose fractionated image stacks of frozen hydrated biological samples by electron cryomicroscopy (cryoEM). Such novel image acquisition schemes provide opportunities to analyze cryoEM data in ways that were previously impossible. The file size of a dose fractionated image stack is 20 ~ 60 times larger than that of a single image. Thus, efficient data acquisition and on-the-fly analysis of a large number of dose-fractionated image stacks become a serious challenge to any cryoEM data acquisition system. We have developed a computer-assisted system, named UCSFImage4, for semi-automated cryo-EM image acquisition that implements an asynchronous data acquisition scheme. This facilitates efficient acquisition, on-the-fly motion correction, and CTF analysis of dose fractionated image stacks with a total time of ~60 seconds/exposure. Here we report the technical details and configuration of this system. PMID:26370395
NASA Astrophysics Data System (ADS)
Masi, Matteo; Ferdos, Farzad; Losito, Gabriella; Solari, Luca
2016-04-01
Electrical Impedance Tomography (EIT) is a technique for the imaging of the electrical properties of conductive materials. In EIT, the spatial distribution of the electrical resistivity or electrical conductivity within a domain is reconstructed using measurements made with electrodes placed at the boundaries of the domain. Data acquisition is typically performed by applying an electrical current to the object under investigation using a set of electrodes, and measuring the developed voltage between the other electrodes. The tomographic image is then obtained using an inversion algorithm. This work describes the implementation of a simple and low cost 3D EIT measurement system suitable for laboratory-scale studies. The system was specifically developed for the time-lapse imaging of soil samples subjected to erosion processes during laboratory tests. The tests reproduce the process of internal erosion of soil particles by water flow within granular media; this process is one of the most common causes of failure of earthen levees and embankment dams. The measurements had to meet strict requirements of speed and accuracy due to the varying time scale and magnitude of these processes. The developed EIT system consists of a PC which controls I/O cards (multiplexers) through the Arduino micro-controller, an external current generator, a digital acquisition device (DAQ), a power supply and the electrodes. The ease of programming of the Arduino interface greatly helped the implementation of custom acquisition software, increasing the overall flexibility of the system and enabling the creation of specific acquisition schemes and configurations. The system works with a multi-electrode configuration of up to 48 channels, but it was designed to be upgraded to an arbitrarily large number of electrodes by connecting additional multiplexer cards (> 96 electrodes). The acquisition was optimized for multi-channel measurements so that the overall time of acquisition is dramatically reduced compared to single-channel instrumentation. The accuracy and operation were tested under different conditions. The results from preliminary tests show that the system is able to clearly identify objects discriminated by different resistivity. Furthermore, measurements carried out during internal erosion simulations demonstrate that even small variations in the electrical resistivity can be captured, and that these changes can be related to the erosion processes.
Digital video system for on-line portal verification
NASA Astrophysics Data System (ADS)
Leszczynski, Konrad W.; Shalev, Shlomo; Cosby, N. Scott
1990-07-01
A digital system has been developed for on-line acquisition, processing and display of portal images during radiation therapy treatment. A metal/phosphor screen combination is the primary detector, where the conversion from high-energy photons to visible light takes place. A mirror angled at 45 degrees reflects the primary image to a low-light-level camera, which is removed from the direct radiation beam. The image registered by the camera is digitized, processed and displayed on a CRT monitor. Advanced digital techniques for processing of on-line images have been developed and implemented to enhance image contrast and suppress the noise. Some elements of automated radiotherapy treatment verification have been introduced.
Yang, Xu; Tang, Songyuan; Tasciotti, Ennio; Righetti, Raffaella
2018-01-17
Ultrasound (US) imaging has long been considered as a potential aid in orthopedic surgeries. US technologies are safe, portable and do not use radiations. This would make them a desirable tool for real-time assessment of fractures and to monitor fracture healing. However, image quality of US imaging methods in bone applications is limited by speckle, attenuation, shadow, multiple reflections and other imaging artifacts. While bone surfaces typically appear in US images as somewhat 'brighter' than soft tissue, they are often not easily distinguishable from the surrounding tissue. Therefore, US imaging methods aimed at segmenting bone surfaces need enhancement in image contrast prior to segmentation to improve the quality of the detected bone surface. In this paper, we present a novel acquisition/processing technique for bone surface enhancement in US images. Inspired by elastography and Doppler imaging methods, this technique takes advantage of the difference between the mechanical and acoustic properties of bones and those of soft tissues to make the bone surface more easily distinguishable in US images. The objective of this technique is to facilitate US-based bone segmentation methods and improve the accuracy of their outcomes. The newly proposed technique is tested both in in vitro and in vivo experiments. The results of these preliminary experiments suggest that the use of the proposed technique has the potential to significantly enhance the detectability of bone surfaces in noisy ultrasound images.
NASA Astrophysics Data System (ADS)
Yang, Xu; Tang, Songyuan; Tasciotti, Ennio; Righetti, Raffaella
2018-01-01
Ultrasound (US) imaging has long been considered as a potential aid in orthopedic surgeries. US technologies are safe, portable and do not use radiations. This would make them a desirable tool for real-time assessment of fractures and to monitor fracture healing. However, image quality of US imaging methods in bone applications is limited by speckle, attenuation, shadow, multiple reflections and other imaging artifacts. While bone surfaces typically appear in US images as somewhat ‘brighter’ than soft tissue, they are often not easily distinguishable from the surrounding tissue. Therefore, US imaging methods aimed at segmenting bone surfaces need enhancement in image contrast prior to segmentation to improve the quality of the detected bone surface. In this paper, we present a novel acquisition/processing technique for bone surface enhancement in US images. Inspired by elastography and Doppler imaging methods, this technique takes advantage of the difference between the mechanical and acoustic properties of bones and those of soft tissues to make the bone surface more easily distinguishable in US images. The objective of this technique is to facilitate US-based bone segmentation methods and improve the accuracy of their outcomes. The newly proposed technique is tested both in in vitro and in vivo experiments. The results of these preliminary experiments suggest that the use of the proposed technique has the potential to significantly enhance the detectability of bone surfaces in noisy ultrasound images.
NASA Astrophysics Data System (ADS)
Dostálová, Alena; Naeimi, Vahid; Wagner, Wolfgang; Elefante, Stefano; Cao, Senmao; Persson, Henrik
2016-10-01
One of the major advantages of the Sentinel-1 data is its capability to provide very high spatio-temporal coverage allowing the mapping of large areas as well as creation of dense time-series of the Sentinel-1 acquisitions. The SGRT software developed at TU Wien aims at automated processing of Sentinel-1 data for global and regional products. The first step of the processing consists of the Sentinel-1 data geocoding with the help of S1TBX software and their resampling to a common grid. These resampled images serve as an input for the product derivation. Thus, it is very important to select the most reliable processing settings and assess the geocoding uncertainty for both backscatter and projected local incidence angle images. Within this study, a selection of Sentinel-1 acquisitions over three test areas in Europe was processed manually in the S1TBX software, testing multiple software versions, processing settings and digital elevation models (DEMs), and the accuracy of the resulting geocoded images was assessed. Secondly, all available Sentinel-1 data over the areas were processed using the selected settings and a detailed quality check was performed. Overall, a strong influence of the DEM used on geocoding quality was confirmed, with differences of up to 80 meters in areas with higher terrain variations. In flat areas, the geocoding accuracy of backscatter images was overall good, with observed shifts between 0 and 30 m. Larger systematic shifts were identified in the case of projected local incidence angle images. These results encourage the automated processing of large volumes of Sentinel-1 data.
Post-image acquisition processing approaches for coherent backscatter validation
NASA Astrophysics Data System (ADS)
Smith, Christopher A.; Belichki, Sara B.; Coffaro, Joseph T.; Panich, Michael G.; Andrews, Larry C.; Phillips, Ronald L.
2014-10-01
Utilizing a retro-reflector from a target point, the reflected irradiance of a laser beam traveling back toward the transmitting point contains a peak point of intensity known as the enhanced backscatter (EBS) phenomenon. EBS is dependent on the strength regime of turbulence currently occurring within the atmosphere as the beam propagates across and back. In order to capture and analyze this phenomenon so that it may be compared to theory, an imaging system is integrated into the optical set up. With proper imaging established, we are able to implement various post-image acquisition techniques to help determine detection and positioning of EBS which can then be validated with theory by inspection of certain dependent meteorological parameters such as the refractive index structure parameter, Cn2 and wind speed.
ASPRS Digital Imagery Guideline Image Gallery Discussion
NASA Technical Reports Server (NTRS)
Ryan, Robert
2002-01-01
The objectives of the image gallery are to 1) give users and providers a simple means of identifying appropriate imagery for a given application/feature extraction; and 2) define imagery sufficiently to be described in engineering and acquisition terms. This viewgraph presentation includes a discussion of edge response and aliasing for image processing, and a series of images illustrating the effects of signal to noise ratio (SNR) on images. Another series of images illustrates how images are affected by varying the ground sample distances (GSD).
Age/Order of Acquisition Effects and the Cumulative Learning of Foreign Words: A Word Training Study
ERIC Educational Resources Information Center
Izura, Cristina; Perez, Miguel A.; Agallou, Elizabeth; Wright, Victoria C.; Marin, Javier; Stadthagen-Gonzalez, Hans; Ellis, Andrew W.
2011-01-01
Early acquired words are processed faster than later acquired words in lexical and semantic tasks. Demonstrating such age of acquisition (AoA) effects beyond reasonable doubt, and then investigating those effects empirically, is complicated by the natural correlation between AoA and other word properties such as frequency and imageability. In an…
Resiliency of the Multiscale Retinex Image Enhancement Algorithm
NASA Technical Reports Server (NTRS)
Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.
1998-01-01
The multiscale retinex with color restoration (MSRCR) continues to prove itself in extensive testing to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition. However, issues remain with regard to the resiliency of the MSRCR to different image sources and to arbitrary image manipulations which may have been applied prior to retinex processing. In this paper we define these areas of concern, provide experimental results, and examine the effects of commonly occurring image manipulations on retinex performance. In virtually all cases the MSRCR is highly resilient to the effects of both image source variations and commonly encountered prior image processing. Significant artifacts are primarily observed for the case of selective color channel clipping in large dark zones in an image. These issues are of concern for the processing of digital image archives and other applications where there is neither control over the image acquisition process nor knowledge about any processing done on the data beforehand.
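For reference, a standard single-channel multiscale retinex (without the color restoration step) can be written compactly as below. The surround scales and the final display stretch are common textbook choices and may differ from the authors' MSRCR implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(channel, sigmas=(15, 80, 250), eps=1.0):
    """Multiscale retinex for one image channel (non-negative float array).

    For each scale: log(I) - log(Gaussian-blurred I); the outputs are averaged.
    The color restoration step (the 'CR' in MSRCR) is omitted for brevity.
    """
    channel = channel.astype(float) + eps
    msr = np.zeros_like(channel)
    for sigma in sigmas:
        surround = gaussian_filter(channel, sigma) + eps
        msr += np.log(channel) - np.log(surround)
    msr /= len(sigmas)
    # simple gain/offset stretch to [0, 255] for display
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-12)
    return (255 * msr).astype(np.uint8)
```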
NASA Astrophysics Data System (ADS)
Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee
2018-04-01
In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.
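A compact genetic-algorithm skeleton for this kind of task selection, maximizing a weighted objective of priority and acquisition time under a shared resource budget, is sketched below. The bitstring encoding, operators, and weights are illustrative assumptions, and the visibility-conflict and resource-restoration handling described above is deliberately omitted.

```python
import random

def fitness(bits, tasks, budget, w_priority=1.0, w_time=0.01):
    """Weighted usability score; infeasible selections are penalised."""
    used = sum(t["resource"] for t, b in zip(tasks, bits) if b)
    if used > budget:
        return -1.0
    return sum(b * (w_priority * t["priority"] - w_time * t["duration"])
               for t, b in zip(tasks, bits))

def ga_select(tasks, budget, pop_size=40, generations=200, p_mut=0.02, seed=1):
    """Evolve a bitstring marking which imaging tasks to schedule."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in tasks] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, tasks, budget), reverse=True)
        parents = pop[: pop_size // 2]                      # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(tasks))
            child = a[:cut] + b[cut:]                       # one-point crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]   # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, tasks, budget))
```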
Ring artifact reduction in synchrotron x-ray tomography through helical acquisition
NASA Astrophysics Data System (ADS)
Pelt, Daniël M.; Parkinson, Dilworth Y.
2018-03-01
In synchrotron x-ray tomography, systematic defects in certain detector elements can result in arc-shaped artifacts in the final reconstructed image of the scanned sample. These ring artifacts are commonly found in many applications of synchrotron tomography, and can make it difficult or impossible to use the reconstructed image in further analyses. The severity of ring artifacts is often reduced in practice by applying pre-processing on the acquired data, or post-processing on the reconstructed image. However, such additional processing steps can introduce additional artifacts as well, and rely on specific choices of hyperparameter values. In this paper, a different approach to reducing the severity of ring artifacts is introduced: a helical acquisition mode. By moving the sample parallel to the rotation axis during the experiment, the sample is detected at different detector positions in each projection, reducing the effect of systematic errors in detector elements. Alternatively, helical acquisition can be viewed as a way to transform ring artifacts to helix-like artifacts in the reconstructed volume, reducing their severity. We show that data acquired with the proposed mode can be transformed to data acquired with a virtual circular trajectory, enabling further processing of the data with existing software packages for circular data. Results for both simulated data and experimental data show that the proposed method is able to significantly reduce ring artifacts in practice, even compared with popular existing methods, without introducing additional artifacts.
NASA Astrophysics Data System (ADS)
Wu, Yichen; Zhang, Yibo; Luo, Wei; Ozcan, Aydogan
2017-03-01
Digital holographic on-chip microscopy achieves large space-bandwidth-products (e.g., >1 billion) by making use of pixel super-resolution techniques. To synthesize a digital holographic color image, one can take three sets of holograms representing the red (R), green (G) and blue (B) parts of the spectrum and digitally combine them to synthesize a color image. The data acquisition efficiency of this sequential illumination process can be improved by 3-fold using wavelength-multiplexed R, G and B illumination that simultaneously illuminates the sample, and using a Bayer color image sensor with known or calibrated transmission spectra to digitally demultiplex these three wavelength channels. This demultiplexing step is conventionally used with interpolation-based Bayer demosaicing methods. However, because the pixels of different color channels on a Bayer image sensor chip are not at the same physical location, conventional interpolation-based demosaicing process generates strong color artifacts, especially at rapidly oscillating hologram fringes, which become even more pronounced through digital wave propagation and phase retrieval processes. Here, we demonstrate that by merging the pixel super-resolution framework into the demultiplexing process, such color artifacts can be greatly suppressed. This novel technique, termed demosaiced pixel super-resolution (D-PSR) for digital holographic imaging, achieves very similar color imaging performance compared to conventional sequential R,G,B illumination, with 3-fold improvement in image acquisition time and data-efficiency. We successfully demonstrated the color imaging performance of this approach by imaging stained Pap smears. The D-PSR technique is broadly applicable to high-throughput, high-resolution digital holographic color microscopy techniques that can be used in resource-limited-settings and point-of-care offices.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
Pursley, Randall H.; Salem, Ghadi; Devasahayam, Nallathamby; Subramanian, Sankaran; Koscielniak, Janusz; Krishna, Murali C.; Pohida, Thomas J.
2006-01-01
The integration of modern data acquisition and digital signal processing (DSP) technologies with Fourier transform electron paramagnetic resonance (FT-EPR) imaging at radiofrequencies (RF) is described. The FT-EPR system operates at a Larmor frequency (Lf) of 300 MHz to facilitate in vivo studies. This relatively low frequency Lf, in conjunction with our ~10 MHz signal bandwidth, enables the use of direct free induction decay time-locked subsampling (TLSS). This particular technique provides advantages by eliminating the traditional analog intermediate frequency downconversion stage along with the corresponding noise sources. TLSS also results in manageable sample rates that facilitate the design of DSP-based data acquisition and image processing platforms. More specifically, we utilize a high-speed field programmable gate array (FPGA) and a DSP processor to perform advanced real-time signal and image processing. The migration to a DSP-based configuration offers the benefits of improved EPR system performance, as well as increased adaptability to various EPR system configurations (i.e., software configurable systems instead of hardware reconfigurations). The required modifications to the FT-EPR system design are described, with focus on the addition of DSP technologies including the application-specific hardware, software, and firmware developed for the FPGA and DSP processor. The first results of using real-time DSP technologies in conjunction with direct detection bandpass sampling to implement EPR imaging at RF frequencies are presented. PMID:16243552
New developments in electron microscopy for serial image acquisition of neuronal profiles.
Kubota, Yoshiyuki
2015-02-01
Recent developments in electron microscopy largely automate the continuous acquisition of serial electron micrographs (EMGs), previously achieved by laborious manual serial ultrathin sectioning using an ultramicrotome and ultrastructural image capture process with transmission electron microscopy. The new systems cut thin sections and capture serial EMGs automatically, allowing for acquisition of large data sets in a reasonably short time. The new methods are focused ion beam/scanning electron microscopy, ultramicrotome/serial block-face scanning electron microscopy, automated tape-collection ultramicrotome/scanning electron microscopy and transmission electron microscope camera array. In this review, their positive and negative aspects are discussed. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Arrigoni, Simone; Turra, Giovanni; Signoroni, Alberto
2017-09-01
With the rapid diffusion of Full Laboratory Automation systems, Clinical Microbiology is currently experiencing a new digital revolution. The ability to capture and process large amounts of visual data from microbiological specimen processing enables the definition of completely new objectives. These include the direct identification of pathogens growing on culturing plates, with expected improvements in rapid definition of the right treatment for patients affected by bacterial infections. In this framework, the synergies between light spectroscopy and image analysis, offered by hyperspectral imaging, are of prominent interest. This leads us to assess the feasibility of a reliable and rapid discrimination of pathogens through the classification of their spectral signatures extracted from hyperspectral image acquisitions of bacteria colonies growing on blood agar plates. We designed and implemented the whole data acquisition and processing pipeline and performed a comprehensive comparison among 40 combinations of different data preprocessing and classification techniques. High discrimination performance has been achieved also thanks to improved colony segmentation and spectral signature extraction. Experimental results reveal the high accuracy and suitability of the proposed approach, driving the selection of most suitable and scalable classification pipelines and stimulating clinical validations. Copyright © 2017 Elsevier Ltd. All rights reserved.
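The final classification stage can be illustrated as follows, assuming colony segmentation has already produced per-colony masks: mean spectral signatures are extracted from the hypercube and fed to a standardised RBF-kernel SVM. This is just one plausible preprocessing/classifier combination, not necessarily the best of the 40 compared in the study.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def colony_signatures(hypercube, colony_masks):
    """Mean spectrum of each segmented colony from a (rows, cols, bands) cube.

    colony_masks is a list of 2-D boolean masks, one per colony.
    """
    return np.array([hypercube[mask].mean(axis=0) for mask in colony_masks])

def train_classifier(signatures, species_labels):
    """Pipeline: per-band standardisation followed by an RBF-kernel SVM."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(signatures, species_labels)
    return clf
```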
A new scanning electron microscopy approach to image aerogels at the nanoscale
NASA Astrophysics Data System (ADS)
Solá, F.; Hurwitz, F.; Yang, J.
2011-04-01
A new scanning electron microscopy (SEM) technique to image poorly electrically conductive aerogels is presented. The process can be performed by non-expert SEM users. We showed that negative charging effects on aerogels can be minimized significantly by inserting dry nitrogen gas close to the region of interest. The process involves the local recombination of accumulated negative charges with positive ions generated from ionization processes. This new technique made possible the acquisition of images of aerogels with pores down to approximately 3 nm in diameter using a positively biased Everhart-Thornley (ET) detector.
Supervised restoration of degraded medical images using multiple-point geostatistics.
Pham, Tuan D
2012-06-01
Reducing noise in medical images has been an important issue of research and development for medical diagnosis, patient treatment, and validation of biomedical hypotheses. Noise inherently exists in medical and biological images due to the acquisition and transmission in any imaging devices. Unlike image enhancement, image restoration is the process of removing noise from a degraded image in order to recover as much of its original version as possible. This paper presents a statistically supervised approach for medical image restoration using the concept of multiple-point geostatistics. Experimental results have shown the effectiveness of the proposed technique, which has potential as a new methodology for medical and biological image processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zaborowicz, M.; Przybył, J.; Koszela, K.; Boniecki, P.; Mueller, W.; Raba, B.; Lewicki, A.; Przybył, K.
2014-04-01
The aim of the project was to develop software that extracts the characteristics of greenhouse tomatoes from their images. Data gathered during image analysis and processing were used to build learning sets for artificial neural networks. The program can process pictures in JPEG format, acquire statistical information from each picture, and export it to an external file. The software is intended for batch analysis of the collected research material, with the obtained information saved as a CSV file. The program analyzes 33 independent parameters to describe each tested image. The application is dedicated to the processing and image analysis of greenhouse tomatoes, but it can also be used for the analysis of other fruits and vegetables of spherical shape.
Trimodal low-dose X-ray tomography
Zanette, I.; Bech, M.; Rack, A.; Le Duc, G.; Tafforeau, P.; David, C.; Mohr, J.; Pfeiffer, F.; Weitkamp, T.
2012-01-01
X-ray grating interferometry is a coherent imaging technique that bears tremendous potential for three-dimensional tomographic imaging of soft biological tissue and other specimens whose details exhibit very weak absorption contrast. It is intrinsically trimodal, delivering phase contrast, absorption contrast, and scattering (“dark-field”) contrast. Recently reported acquisition strategies for grating-interferometric phase tomography constitute a major improvement of dose efficiency and speed. In particular, some of these techniques eliminate the need for scanning of one of the gratings (“phase stepping”). This advantage, however, comes at the cost of other limitations. These can be a loss in spatial resolution, or the inability to fully separate the three imaging modalities. In the present paper we report a data acquisition and processing method that optimizes dose efficiency but does not share the main limitations of other recently reported methods. Although our method still relies on phase stepping, it effectively uses only down to a single detector frame per projection angle and yields images corresponding to all three contrast modalities. In particular, this means that dark-field imaging remains accessible. The method is also compliant with data acquisition over an angular range of only 180° and with a continuous rotation of the specimen. PMID:22699500
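For context, a sketch of the conventional phase-stepping retrieval that such acquisition schemes build on (not the authors' optimized method): the stepping curve in each pixel is analyzed by a Fourier transform along the step axis, and the three contrast channels follow from its zeroth and first coefficients.

```python
# `sample` and `flat` are stacks of detector frames of shape (n_steps, rows, cols)
# acquired while stepping one grating over one period.
import numpy as np

def retrieve(stack):
    c = np.fft.fft(stack, axis=0)            # FFT along the phase-step axis
    a = np.abs(c[0]) / stack.shape[0]        # mean intensity (absorption channel)
    b = 2.0 * np.abs(c[1]) / stack.shape[0]  # modulation amplitude
    phi = np.angle(c[1])                     # phase of the stepping curve
    return a, b, phi

def three_contrasts(sample, flat):
    a_s, b_s, phi_s = retrieve(sample)
    a_f, b_f, phi_f = retrieve(flat)
    transmission = a_s / a_f                       # absorption contrast
    dpc = np.angle(np.exp(1j * (phi_s - phi_f)))   # differential phase, wrapped
    darkfield = (b_s / a_s) / (b_f / a_f)          # visibility reduction (dark field)
    return transmission, dpc, darkfield
```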
NASA Astrophysics Data System (ADS)
Schäfer, D.; Lin, M.; Rao, P. P.; Loffroy, R.; Liapi, E.; Noordhoek, N.; Eshuis, P.; Radaelli, A.; Grass, M.; Geschwind, J.-F. H.
2012-03-01
C-arm based tomographic 3D imaging is applied in an increasing number of minimally invasive procedures. Due to the limited acquisition speed for a complete projection data set required for tomographic reconstruction, breathing motion is a potential source of artifacts. This is the case for patients who cannot comply with breathing commands (e.g. due to anesthesia). Intra-scan motion estimation and compensation is required. Here, a scheme for projection based local breathing motion estimation is combined with an anatomy adapted interpolation strategy and subsequent motion compensated filtered back projection. The breathing motion vector is measured as a displacement vector on the projections of a tomographic short scan acquisition using the diaphragm as a landmark. Scaling of the displacement to the acquisition iso-center and anatomy adapted volumetric motion vector field interpolation delivers a 3D motion vector per voxel. Motion compensated filtered back projection incorporates this motion vector field in the image reconstruction process. This approach is applied in animal experiments on a flat panel C-arm system delivering improved image quality (lower artifact levels, improved tumor delineation) in 3D liver tumor imaging.
NASA Astrophysics Data System (ADS)
Mickevicius, Nikolai J.; Paulson, Eric S.
2017-04-01
The purpose of this work is to investigate the effects of undersampling and reconstruction algorithm on the total processing time and image quality of respiratory phase-resolved 4D MRI data. Specifically, the goal is to obtain quality 4D-MRI data with a combined acquisition and reconstruction time of five minutes or less, which we reasoned would be satisfactory for pre-treatment 4D-MRI in online MRI-gRT. A 3D stack-of-stars, self-navigated, 4D-MRI acquisition was used to scan three healthy volunteers at three image resolutions and two scan durations. The NUFFT, CG-SENSE, SPIRiT, and XD-GRASP reconstruction algorithms were used to reconstruct each dataset on a high performance reconstruction computer. The overall image quality, reconstruction time, artifact prevalence, and motion estimates were compared. The CG-SENSE and XD-GRASP reconstructions provided superior image quality over the other algorithms. The combination of a 3D SoS sequence and parallelized reconstruction algorithms, running on computing hardware more advanced than that typically found on product MRI scanners, can yield acquisition and reconstruction of high-quality respiratory-correlated 4D-MRI images in less than five minutes.
The image acquisition system design of floor grinder
NASA Astrophysics Data System (ADS)
Wang, Yang-jiang; Liu, Wei; Liu, Hui-qin
2018-01-01
A high-resolution, real-time image acquisition system based on a linear CCD was designed for a floor grinder, with the optical imaging system dimensioned by calculation. The image acquisition system collects images of the floor before and after the grinder has worked; the data are transmitted via Bluetooth to a computer and compared, enabling real-time monitoring of the machine's working condition. The system provides technical support for the design of unmanned floor grinders.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes > 10⁶ μm³. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.
Planning applications in image analysis
NASA Technical Reports Server (NTRS)
Boddy, Mark; White, Jim; Goldman, Robert; Short, Nick, Jr.
1994-01-01
We describe two interim results from an ongoing effort to automate the acquisition, analysis, archiving, and distribution of satellite earth science data. Both results are applications of Artificial Intelligence planning research to the automatic generation of processing steps for image analysis tasks. First, we have constructed a linear conditional planner (CPed), used to generate conditional processing plans. Second, we have extended an existing hierarchical planning system to make use of durations, resources, and deadlines, thus supporting the automatic generation of processing steps in time and resource-constrained environments.
Sun, Xiaofei; Shi, Lin; Luo, Yishan; Yang, Wei; Li, Hongpeng; Liang, Peipeng; Li, Kuncheng; Mok, Vincent C T; Chu, Winnie C W; Wang, Defeng
2015-07-28
Intensity normalization is an important preprocessing step in brain magnetic resonance image (MRI) analysis. During MR image acquisition, different scanners or parameters would be used for scanning different subjects or the same subject at a different time, which may result in large intensity variations. This intensity variation will greatly undermine the performance of subsequent MRI processing and population analysis, such as image registration, segmentation, and tissue volume measurement. In this work, we proposed a new histogram normalization method to reduce the intensity variation between MRIs obtained from different acquisitions. In our experiment, we scanned each subject twice on two different scanners using different imaging parameters. With noise estimation, the image with the lower noise level was determined and treated as the high-quality reference image. Then the histogram of the low-quality image was normalized to the histogram of the high-quality image. The normalization algorithm includes two main steps: (1) intensity scaling (IS), where, for the high-quality reference image, the intensities of the image are first rescaled to a range between the low intensity region (LIR) value and the high intensity region (HIR) value; and (2) histogram normalization (HN), where the histogram of the low-quality image as input image is stretched to match the histogram of the reference image, so that the intensity range in the normalized image will also lie between LIR and HIR. We performed three sets of experiments to evaluate the proposed method, i.e., image registration, segmentation, and tissue volume measurement, and compared it with the existing intensity normalization method. The results validate that our histogram normalization framework achieves better results in all the experiments. It is also demonstrated that the brain template with normalization preprocessing is of higher quality than the template with no normalization processing. We have proposed a histogram-based MRI intensity normalization method. The method can normalize scans which were acquired on different MRI units. We have validated that the method can greatly improve the image analysis performance. Furthermore, it is demonstrated that with the help of our normalization method, we can create a higher quality Chinese brain template.
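A compact sketch of the two-step normalization described above, assuming the reference and low-quality images are already co-registered numpy arrays; the LIR/HIR values are placeholders, not the authors' settings:

```python
import numpy as np

def intensity_scale(ref, lir=0.0, hir=4095.0):
    """Step 1 (IS): rescale reference intensities to the [LIR, HIR] range."""
    r = ref.astype(np.float64)
    return (r - r.min()) / (r.max() - r.min()) * (hir - lir) + lir

def histogram_normalize(low_q, ref_scaled):
    """Step 2 (HN): map the low-quality image's histogram onto the scaled reference's."""
    src = low_q.ravel().astype(np.float64)
    tmpl = ref_scaled.ravel()
    src_vals, src_idx, src_cnt = np.unique(src, return_inverse=True, return_counts=True)
    tmpl_vals, tmpl_cnt = np.unique(tmpl, return_counts=True)
    src_cdf = np.cumsum(src_cnt) / src.size
    tmpl_cdf = np.cumsum(tmpl_cnt) / tmpl.size
    mapped = np.interp(src_cdf, tmpl_cdf, tmpl_vals)   # CDF-to-CDF lookup table
    return mapped[src_idx].reshape(low_q.shape)
```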
Neutron Tomography at the Los Alamos Neutron Science Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, William Riley
Neutron imaging is an incredibly powerful tool for non-destructive sample characterization and materials science. Neutron tomography is one technique that results in a three-dimensional model of the sample, representing the interaction of the neutrons with the sample. This relies both on reliable data acquisition and on image processing after acquisition. Over the course of the project, the focus has changed from the former to the latter, culminating in a large-scale reconstruction of a meter-long fossilized skull. The full reconstruction is not yet complete, though tools have been developed to improve the speed and accuracy of the reconstruction. This project helps to improve the capabilities of LANSCE and LANL with regards to imaging large or unwieldy objects.
Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S
2008-01-01
A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user friendly implementation of the interface along with the high framerate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
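As an aside, the recursive temporal filtering mentioned above is commonly implemented as a first-order IIR filter over successive frames; a hedged sketch (not the LabVIEW implementation itself) is:

```python
# Illustrative first-order recursive temporal filter for fluoroscopy noise reduction.
# `alpha` trades noise suppression against temporal lag.
import numpy as np

def recursive_temporal_filter(frames, alpha=0.25):
    """frames: iterable of 2-D arrays; yields filtered frames one at a time."""
    accum = None
    for frame in frames:
        f = frame.astype(np.float64)
        accum = f if accum is None else alpha * f + (1.0 - alpha) * accum
        yield accum
```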
Abt, Nicholas B.; Lehar, Mohamed; Guajardo, Carolina Trevino; Penninger, Richard T.; Ward, Bryan K.; Pearl, Monica S.; Carey, John P.
2016-01-01
Hypothesis: Whether the RWM is permeable to iodine-based contrast agents (IBCA) is unknown; therefore, our goal was to determine if IBCAs could diffuse through the RWM using CT volume acquisition imaging. Introduction: Imaging of hydrops in the living human ear has attracted recent interest. Intratympanic (IT) injection has shown gadolinium's ability to diffuse through the round window membrane (RWM), enhancing the perilymphatic space. Methods: Four unfixed human cadaver temporal bones underwent intratympanic IBCA injection using three sequentially studied methods. The first method was direct IT injection. The second method used direct RWM visualization via tympanomeatal flap for IBCA-soaked absorbable gelatin pledget placement. In the third method, the middle ear was filled with contrast after flap elevation. Volume acquisition CT images were obtained immediately post-exposure, and at 1, 6, and 24 hour intervals. Post-processing was accomplished using color ramping and subtraction imaging. Results: Following the third method, positive RWM and perilymphatic enhancement were seen with endolymph sparing. Gray scale and color ramp multiplanar reconstructions displayed increased signal within the cochlea compared to pre-contrast imaging. The cochlea was measured for attenuation differences compared to pure water, revealing a pre-injection average of −1,103 HU and a post-injection average of 338 HU. Subtraction imaging shows enhancement remaining within the cochlear space, Eustachian tube, middle ear epithelial lining, and mastoid. Conclusions: Iohexol iodine contrast is able to diffuse across the RWM. Volume acquisition CT imaging was able to detect perilymphatic enhancement at 0.5 mm slice thickness. The clinical application of IBCA IT injection appears promising but requires further safety studies. PMID:26859543
Multislice spiral CT simulator for dynamic cardiopulmonary studies
NASA Astrophysics Data System (ADS)
De Francesco, Silvia; Ferreira da Silva, Augusto M.
2002-04-01
We've developed a Multi-slice Spiral CT Simulator modeling the acquisition process of a real tomograph over a 4-dimensional phantom (4D MCAT) of the human thorax. The simulator allows us to visually characterize artifacts due to insufficient temporal sampling and to evaluate a priori the quality of the images obtained in cardio-pulmonary studies (both with single-/multi-slice and ECG-gated acquisition processes). The simulating environment allows both for conventional and spiral scanning modes and includes a model of noise in the acquisition process. In case of spiral scanning, reconstruction facilities include longitudinal interpolation methods (360LI and 180LI, both for single and multi-slice). The reconstruction of the section is then performed through FBP. The reconstructed images/volumes are affected by distortion due to insufficient temporal sampling of the moving object. The developed simulating environment allows us to investigate the nature of the distortion, characterizing it qualitatively and quantitatively (using, for example, Herman's measures). Much of our work is focused on the determination of adequate temporal sampling and sinogram regularization techniques. At the moment, the simulator is limited to the case of a multi-slice tomograph; extension to cone-beam or area detectors is planned as the next step of development.
Estimation of urinary stone composition by automated processing of CT images.
Chevreau, Grégoire; Troccaz, Jocelyne; Conort, Pierre; Renard-Penna, Raphaëlle; Mallet, Alain; Daudon, Michel; Mozer, Pierre
2009-10-01
The objective of this article was to develop an automated tool for routine clinical practice to estimate urinary stone composition from CT images based on the density of all constituent voxels. A total of 118 stones for which the composition had been determined by infrared spectroscopy were placed in a helical CT scanner. Standard, low-dose, and high-dose acquisitions were performed. All voxels constituting each stone were automatically selected. A dissimilarity index evaluating variations of density around each voxel was created in order to minimize partial volume effects: stone composition was established on the basis of voxel density of homogeneous zones. Stone composition was determined in 52% of cases. Sensitivities for each compound were: uric acid: 65%, struvite: 19%, cystine: 78%, carbapatite: 33.5%, calcium oxalate dihydrate: 57%, calcium oxalate monohydrate: 66.5%, brushite: 75%. Low-dose acquisition did not lower performance (P < 0.05). This entirely automated approach eliminates manual intervention on the images by the radiologist while providing identical performances including for low-dose protocols.
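A hedged sketch of the general idea of a per-voxel dissimilarity index: measure local density variation, keep only homogeneous voxels, and read off their mean attenuation. The exact index and thresholds used by the authors are not reproduced here; the helper below is illustrative only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def homogeneous_mean_hu(volume_hu, stone_mask, size=3, max_dissimilarity=30.0):
    """Mean HU over voxels whose local density variation is small (homogeneous zones)."""
    v = volume_hu.astype(np.float64)
    mean = uniform_filter(v, size=size)
    mean_sq = uniform_filter(v * v, size=size)
    local_std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))  # dissimilarity proxy per voxel
    keep = stone_mask & (local_std < max_dissimilarity)       # exclude partial-volume voxels
    return v[keep].mean() if keep.any() else np.nan
```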
Acousto-optic RF signal acquisition system
NASA Astrophysics Data System (ADS)
Bloxham, Laurence H.
1990-09-01
This paper describes the architecture and performance of a prototype Acousto-Optic RF Signal Acquisition System designed to intercept, automatically identify, and track communication signals in the VHF band. The system covers 28.0 to 92.0 MHz with five manually selectable, dual-conversion, 12.8 MHz bandwidth front ends. An acousto-optic spectrum analyzer (AOSA) implemented using a tellurium dioxide (TeO2) Bragg cell is used to channelize the 12.8 MHz pass band into 512 25-kHz channels. Polarization switching is used to suppress optical noise. Excellent isolation and dynamic range are achieved by using a linear array of 512 custom 40/50 micron fiber optic cables to collect the light at the focal plane of the AOSA and route the light to individual photodetectors. The photodetectors are operated in the photovoltaic mode to compress the greater than 60 dB input optical dynamic range into an easily processed electrical signal. The 512 signals are multiplexed and processed as a line in a video image by a customized digital image processing system. The image processor simultaneously analyzes the channelized signal data and produces a classical waterfall display.
Colony image acquisition and genetic segmentation algorithm and colony analyses
NASA Astrophysics Data System (ADS)
Wang, W. X.
2012-01-01
Colony analysis is used in a large number of fields such as food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, and sterility testing. In order to reduce labor and increase analysis accuracy, many researchers and developers have put effort into image analysis systems. The main problems in such systems are image acquisition, image segmentation, and image analysis. In this paper, to acquire colony images with good quality, an illumination box was constructed. In the box, the distances between the lights and the dish, the camera lens and the lights, and the camera lens and the dish are adjusted optimally. Image segmentation is based on a genetic approach that allows one to treat the segmentation problem as a global optimization. After image pre-processing and image segmentation, the colony analyses are performed. The colony image analysis consists of (1) basic colony parameter measurements; (2) colony size analysis; (3) colony shape analysis; and (4) colony surface measurements. All the above visual colony parameters can be selected and combined to form new engineering parameters. The colony analysis can be applied to different applications.
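As a toy illustration of treating segmentation as a global optimization with a genetic approach, the sketch below evolves a single gray-level threshold against Otsu's between-class variance; the paper's actual genetic segmentation is more elaborate.

```python
import numpy as np

def fitness(threshold, image):
    """Otsu-style between-class variance for a candidate threshold."""
    fg, bg = image[image >= threshold], image[image < threshold]
    if fg.size == 0 or bg.size == 0:
        return 0.0
    w_fg, w_bg = fg.size / image.size, bg.size / image.size
    return w_fg * w_bg * (fg.mean() - bg.mean()) ** 2

def ga_threshold(image, pop_size=20, generations=40, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    lo, hi = float(image.min()), float(image.max())
    pop = rng.uniform(lo, hi, pop_size)
    for _ in range(generations):
        scores = np.array([fitness(t, image) for t in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]              # keep the fittest half
        children = rng.choice(parents, pop_size - parents.size) \
                   + rng.normal(0.0, 0.02 * (hi - lo), pop_size - parents.size)  # mutate copies
        pop = np.concatenate([parents, np.clip(children, lo, hi)])
    return pop[np.argmax([fitness(t, image) for t in pop])]
```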
Advances in diffusion MRI acquisition and processing in the Human Connectome Project
Sotiropoulos, Stamatios N; Jbabdi, Saad; Xu, Junqian; Andersson, Jesper L; Moeller, Steen; Auerbach, Edward J; Glasser, Matthew F; Hernandez, Moises; Sapiro, Guillermo; Jenkinson, Mark; Feinberg, David A; Yacoub, Essa; Lenglet, Christophe; Van Essen, David C; Ugurbil, Kamil; Behrens, Timothy EJ
2013-01-01
The Human Connectome Project (HCP) is a collaborative 5-year effort to map human brain connections and their variability in healthy adults. A consortium of HCP investigators will study a population of 1200 healthy adults using multiple imaging modalities, along with extensive behavioral and genetic data. In this overview, we focus on diffusion MRI (dMRI) and the structural connectivity aspect of the project. We present recent advances in acquisition and processing that allow us to obtain very high-quality in-vivo MRI data, while enabling scanning of a very large number of subjects. These advances result from 2 years of intensive efforts in optimising many aspects of data acquisition and processing during the piloting phase of the project. The data quality and methods described here are representative of the datasets and processing pipelines that will be made freely available to the community at quarterly intervals, beginning in 2013. PMID:23702418
Fiberfox: facilitating the creation of realistic white matter software phantoms.
Neher, Peter F; Laun, Frederik B; Stieltjes, Bram; Maier-Hein, Klaus H
2014-11-01
Phantom-based validation of diffusion-weighted image processing techniques is an important key to innovation in the field and is widely used. Openly available and user friendly tools for the flexible generation of tailor-made datasets for the specific tasks at hand can greatly facilitate the work of researchers around the world. We present an open-source framework, Fiberfox, that enables (1) the intuitive definition of arbitrary artificial white matter fiber tracts, (2) signal generation from those fibers by means of the most recent multi-compartment modeling techniques, and (3) simulation of the actual MR acquisition that allows for the introduction of realistic MRI-related effects into the final image. We show that real acquisitions can be closely approximated by simulating the acquisition of the well-known FiberCup phantom. We further demonstrate the advantages of our framework by evaluating the effects of imaging artifacts and acquisition settings on the outcome of 12 tractography algorithms. Our findings suggest that experiments on a realistic software phantom might change the conclusions drawn from earlier hardware phantom experiments. Fiberfox may find application in validating and further developing methods such as tractography, super-resolution, diffusion modeling or artifact correction. Copyright © 2013 Wiley Periodicals, Inc.
Chhatbar, Pratik Y.; Kara, Prakash
2013-01-01
Neural activity leads to hemodynamic changes which can be detected by functional magnetic resonance imaging (fMRI). The determination of blood flow changes in individual vessels is an important aspect of understanding these hemodynamic signals. Blood flow can be calculated from the measurements of vessel diameter and blood velocity. When using line-scan imaging, the movement of blood in the vessel leads to streaks in space-time images, where streak angle is a function of the blood velocity. A variety of methods have been proposed to determine blood velocity from such space-time image sequences. Of these, the Radon transform is relatively easy to implement and has fast data processing. However, the precision of the velocity measurements is dependent on the number of Radon transforms performed, which creates a trade-off between the processing speed and measurement precision. In addition, factors like image contrast, imaging depth, image acquisition speed, and movement artifacts especially in large mammals, can potentially lead to data acquisition that results in erroneous velocity measurements. Here we show that pre-processing the data with a Sobel filter and iterative application of Radon transforms address these issues and provide more accurate blood velocity measurements. Improved signal quality of the image as a result of Sobel filtering increases the accuracy and the iterative Radon transform offers both increased precision and an order of magnitude faster implementation of velocity measurements. This algorithm does not use a priori knowledge of angle information and therefore is sensitive to sudden changes in blood flow. It can be applied on any set of space-time images with red blood cell (RBC) streaks, commonly acquired through line-scan imaging or reconstructed from full-frame, time-lapse images of the vasculature. PMID:23807877
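A sketch of the Sobel-plus-iterative-Radon idea described above, using standard scipy/scikit-image building blocks; the conversion from streak angle to velocity (via the spatial and temporal pixel sizes) is left as a comment, and the parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import sobel
from skimage.transform import radon

def streak_angle(spacetime, n_iter=3, n_angles=90):
    """Estimate the RBC streak angle in a space-time (line-scan) image."""
    img = sobel(spacetime.astype(np.float64), axis=0)      # emphasize streak edges
    img -= img.mean()
    lo, hi = 0.0, 180.0
    for _ in range(n_iter):
        angles = np.linspace(lo, hi, n_angles, endpoint=False)
        sino = radon(img, theta=angles, circle=False)
        best = angles[np.argmax(sino.var(axis=0))]          # streaks align where projection variance peaks
        span = (hi - lo) / n_angles
        lo, hi = best - span, best + span                   # refine the search around the best angle
    return best

# velocity follows from tan(angle) together with the spatial pixel size (mm/pixel)
# and the line-scan period (s/line) of the acquisition.
```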
Identifying regions of interest in medical images using self-organizing maps.
Teng, Wei-Guang; Chang, Ping-Lin
2012-10-01
Advances in data acquisition, processing and visualization techniques have had a tremendous impact on medical imaging in recent years. However, the interpretation of medical images is still almost always performed by radiologists. Developments in artificial intelligence and image processing have shown the increasingly great potential of computer-aided diagnosis (CAD). Nevertheless, it has remained challenging to develop a general approach to process various commonly used types of medical images (e.g., X-ray, MRI, and ultrasound images). To facilitate diagnosis, we recommend the use of image segmentation to discover regions of interest (ROI) using self-organizing maps (SOM). We devise a two-stage SOM approach that can be used to precisely identify the dominant colors of a medical image and then segment it into several small regions. In addition, by appropriately conducting the recursive merging steps to merge smaller regions into larger ones, radiologists can usually identify one or more ROIs within a medical image.
Optical method for measuring the surface area of a threaded fastener
Douglas Rammer; Samuel Zelinka
2010-01-01
This article highlights major aspects of a new optical technique to determine the surface area of a threaded fastener; the theoretical framework has been reported elsewhere. Specifically, this article describes general surface area expressions used in the analysis, details of image acquisition system, and major image processing steps contained within the measurement...
Cathodoluminescence | Materials Science | NREL
(Fragmentary web-page content; the recoverable details describe acquiring an entire cathodoluminescence spectrum series in about five minutes and processing it to reconstruct energy-resolved photon-emission images, to map the photon energy and full-width-half-maximum of selected transitions, and to export quantitative ASCII output.)
Cell-Phone-Based Platform for Biomedical Device Development and Education Applications
Smith, Zachary J.; Chu, Kaiqin; Espenson, Alyssa R.; Rahimzadeh, Mehdi; Gryshuk, Amy; Molinaro, Marco; Dwyre, Denis M.; Lane, Stephen; Matthews, Dennis; Wachsmann-Hogiu, Sebastian
2011-01-01
In this paper we report the development of two attachments to a commercial cell phone that transform the phone's integrated lens and image sensor into a 350× microscope and visible-light spectrometer. The microscope is capable of transmission and polarized microscopy modes and is shown to have 1.5 micron resolution and a usable field-of-view of 150×150 with no image processing, and approximately 350×350 when post-processing is applied. The spectrometer has a 300 nm bandwidth with a limiting spectral resolution of close to 5 nm. We show applications of the devices to medically relevant problems. In the case of the microscope, we image both stained and unstained blood-smears showing the ability to acquire images of similar quality to commercial microscope platforms, thus allowing diagnosis of clinical pathologies. With the spectrometer we demonstrate acquisition of a white-light transmission spectrum through diffuse tissue as well as the acquisition of a fluorescence spectrum. We also envision the devices to have immediate relevance in the educational field. PMID:21399693
NASA Technical Reports Server (NTRS)
Cramer, K. Elliott; Syed, Hazari I.
1995-01-01
This user's manual describes the installation and operation of TIA, the Thermal-Imaging acquisition and processing Application, developed by the Nondestructive Evaluation Sciences Branch at NASA Langley Research Center, Hampton, Virginia. TIA is a user-friendly graphical interface application for the Macintosh II and higher series computers. The software has been developed to interface with the Perceptics/Westinghouse Pixelpipe(TM) and PixelStore(TM) NuBus cards and the GW Instruments MacADIOS(TM) input-output (I/O) card for the Macintosh for imaging thermal data. The software is also capable of performing generic image-processing functions.
The Hico Image Processing System: A Web-Accessible Hyperspectral Remote Sensing Toolbox
NASA Astrophysics Data System (ADS)
Harris, A. T., III; Goodman, J.; Justice, B.
2014-12-01
As the quantity of Earth-observation data increases, the use-case for hosting analytical tools in geospatial data centers becomes increasingly attractive. To address this need, HySpeed Computing and Exelis VIS have developed the HICO Image Processing System, a prototype cloud computing system that provides online, on-demand, scalable remote sensing image processing capabilities. The system provides a mechanism for delivering sophisticated image processing analytics and data visualization tools into the hands of a global user community, who will only need a browser and internet connection to perform analysis. Functionality of the HICO Image Processing System is demonstrated using imagery from the Hyperspectral Imager for the Coastal Ocean (HICO), an imaging spectrometer located on the International Space Station (ISS) that is optimized for acquisition of aquatic targets. Example applications include a collection of coastal remote sensing algorithms that are directed at deriving critical information on water and habitat characteristics of our vulnerable coastal environment. The project leverages the ENVI Services Engine as the framework for all image processing tasks, and can readily accommodate the rapid integration of new algorithms, datasets and processing tools.
A Sub-Sampling Approach for Data Acquisition in Gamma Ray Emission Tomography
NASA Astrophysics Data System (ADS)
Fysikopoulos, Eleftherios; Kopsinis, Yannis; Georgiou, Maria; Loudos, George
2016-06-01
State of the art data acquisition systems for small animal imaging gamma ray detectors often rely on free running Analog to Digital Converters (ADCs) and high density Field Programmable Gate Arrays (FPGA) devices for digital signal processing. In this work, a sub-sampling acquisition approach, which exploits a priori information regarding the shape of the obtained detector pulses is proposed. Output pulses shape depends on the response of the scintillation crystal, photodetector's properties and amplifier/shaper operation. Using these known characteristics of the detector pulses prior to digitization, one can model the voltage pulse derived from the shaper (a low-pass filter, last in the front-end electronics chain), in order to reduce the desirable sampling rate of ADCs. Fitting with a small number of measurements, pulse shape estimation is then feasible. In particular, the proposed sub-sampling acquisition approach relies on a bi-exponential modeling of the pulse shape. We show that the properties of the pulse that are relevant for Single Photon Emission Computed Tomography (SPECT) event detection (i.e., position and energy) can be calculated by collecting just a small fraction of the number of samples usually collected in data acquisition systems used so far. Compared to the standard digitization process, the proposed sub-sampling approach allows the use of free running ADCs with sampling rate reduced by a factor of 5. Two small detectors consisting of Cerium doped Gadolinium Aluminum Gallium Garnet (Gd3Al2Ga3O12 : Ce or GAGG:Ce) pixelated arrays (array elements: 2 × 2 × 5 mm3 and 1 × 1 × 10 mm3 respectively) coupled to a Position Sensitive Photomultiplier Tube (PSPMT) were used for experimental evaluation. The two detectors were used to obtain raw images and energy histograms under 140 keV and 661.7 keV irradiation respectively. The sub-sampling acquisition technique (10 MHz sampling rate) was compared with a standard acquisition method (52 MHz sampling rate), in terms of energy resolution and image signal to noise ratio for both gamma ray energies. The Levenberg-Marquardt (LM) non-linear least-squares algorithm was used, in post processing, in order to fit the acquired data with the proposed model. The results showed that analog pulses prior to digitization are being estimated with high accuracy after fitting with the bi-exponential model.
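A hedged sketch of the bi-exponential pulse model and the Levenberg-Marquardt fit mentioned above; the parameterization, initial guesses, and energy read-out are illustrative, not the authors' exact formulation.

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, amplitude, t0, tau_rise, tau_decay):
    """Bi-exponential shaper output starting at t0 (assumed model)."""
    dt = np.clip(t - t0, 0.0, None)
    return amplitude * (np.exp(-dt / tau_decay) - np.exp(-dt / tau_rise))

def fit_pulse(t, samples):
    """Fit a sparsely sampled pulse; method='lm' selects Levenberg-Marquardt."""
    p0 = (samples.max(), t[np.argmax(samples)] - 0.1, 0.05, 0.5)  # rough initial guess (time in µs)
    popt, _ = curve_fit(pulse, t, samples, p0=p0, method="lm")
    amplitude, t0, tau_rise, tau_decay = popt
    energy_proxy = amplitude * (tau_decay - tau_rise)   # analytic pulse area as an energy estimate
    return popt, energy_proxy
```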
Image sequence analysis workstation for multipoint motion analysis
NASA Astrophysics Data System (ADS)
Mostafavi, Hassan
1990-08-01
This paper describes an application-specific engineering workstation designed and developed to analyze motion of objects from video sequences. The system combines the software and hardware environment of a modern graphics-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increase in throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters is used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from either live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement database management; and 7) offline analysis software for trajectory plotting and statistical analysis.
NASA Astrophysics Data System (ADS)
Yague-Martinez, N.; Fielding, E. J.; Haghshenas-Haghighi, M.; Cong, X.; Motagh, M.
2014-12-01
This presentation will address the 24 September 2013 Mw 7.7 Balochistan Earthquake in western Pakistan from the point of view of interferometric processing algorithms of wide-swath TerraSAR-X ScanSAR images. The algorithms are also valid for TOPS acquisition mode, the operational mode of the Sentinel-1A ESA satellite that was successfully launched in April 2014. Spectral properties of burst-mode data and an overview of the interferometric processing steps of burst-mode acquisitions, emphasizing the importance of the co-registration stage, will be provided. A co-registration approach based on incoherent cross-correlation will be presented and applied to seismic scenarios. Moreover geodynamic corrections due to differential atmospheric path delay and differential solid Earth tides are considered to achieve accuracy in the order of several centimeters. We previously derived a 3D displacement map using cross-correlation techniques applied to optical images from Landsat-8 satellite and TerraSAR-X ScanSAR amplitude images. The Landsat-8 cross-correlation measurements cover two horizontal directions, and the TerraSAR-X displacements include both horizontal along-track and slant-range (radar line-of-sight) measurements that are sensitive to vertical and horizontal deformation. It will be justified that the co-seismic displacement map from TerraSAR-X ScanSAR data may be contaminated by postseismic deformation due to the fact that the post-seismic acquisition took place one month after the main shock, confirmed in part by a TerraSAR-X stripmap interferogram (processed with conventional InSAR) covering part of the area starting on 27 September 2013. We have arranged the acquisition of a burst-synchronized stack of TerraSAR-X ScanSAR images over the affected area after the earthquake. It will be possible to apply interferometry to these data to measure the lower magnitude of the expected postseismic displacements. The processing of single interferograms will be discussed. A quicklook of the wrapped differential TerraSAR-X ScanSAR co-seismic interferogram is provided in the attachment (range coverage is 100 km by using 4 subswaths).
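A simplified sketch of patch-wise offset estimation between master and slave amplitude images, using scikit-image's phase correlation as a stand-in for the incoherent cross-correlation step; patch size, grid, and upsampling factor are illustrative.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def patch_offsets(master_amp, slave_amp, patch=256, step=256, upsample=32):
    """Sub-pixel (dy, dx) offsets on a grid of patches between two amplitude images."""
    offsets = []
    rows, cols = master_amp.shape
    for r in range(0, rows - patch, step):
        for c in range(0, cols - patch, step):
            ref = master_amp[r:r + patch, c:c + patch]
            mov = slave_amp[r:r + patch, c:c + patch]
            shift, error, _ = phase_cross_correlation(ref, mov, upsample_factor=upsample)
            offsets.append((r, c, shift[0], shift[1], error))   # (row, col, dy, dx, fit error)
    return np.array(offsets)
```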
Ramos Giraldo, Paula Jimena; Guerrero Aguirre, Álvaro; Muñoz, Carlos Mario; Prieto, Flavio Augusto; Oliveros, Carlos Eugenio
2017-01-01
Smartphones show potential for controlling and monitoring variables in agriculture. Their processing capacity, instrumentation, connectivity, low cost, and accessibility allow farmers (among other users in rural areas) to operate them easily with applications adjusted to their specific needs. In this investigation, the integration of inertial sensors, a GPS, and a camera are presented for the monitoring of a coffee crop. An Android-based application was developed with two operating modes: (i) Navigation: for georeferencing trees, which can be as close as 0.5 m from each other; and (ii) Acquisition: control of video acquisition, based on the movement of the mobile device over a branch, and measurement of image quality, using clarity indexes to select the most appropriate frames for application in future processes. The integration of inertial sensors in navigation mode, shows a mean relative error of ±0.15 m, and total error ±5.15 m. In acquisition mode, the system correctly identifies the beginning and end of mobile phone movement in 99% of cases, and image quality is determined by means of a sharpness factor which measures blurriness. With the developed system, it will be possible to obtain georeferenced information about coffee trees, such as their production, nutritional state, and presence of plagues or diseases. PMID:28383494
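A simple sharpness factor of the kind described above can be computed as the variance of the Laplacian; the clarity index actually used by the app may differ, so the sketch below is only illustrative.

```python
import numpy as np
from scipy.ndimage import laplace

def sharpness(gray_frame):
    """Variance-of-Laplacian sharpness score (higher = less blurry)."""
    return float(laplace(gray_frame.astype(np.float64)).var())

def pick_sharpest(frames, top_k=5):
    """Return the indices of the top_k sharpest frames from a video clip."""
    scores = [sharpness(f) for f in frames]
    return sorted(np.argsort(scores)[-top_k:].tolist())
```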
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theodorakis, M.C.; Simpson, D.R.; Leung, D.M.
1983-02-01
A new method for monitoring tablet disintegration in vivo was developed. In this method, the tablets were labeled with a short-lived radionuclide, technetium 99m, and monitored by a gamma camera. Several innovations were introduced with this method. First, computer reconstruction algorithms were used to enhance the scintigraphic images of the disintegrating tablet in vivo. Second, the use of a four-pinhole collimator to acquire multiple views of the tablet resulted in high count rates and reduced acquisition times of the scintigraphic images. Third, the magnification of the scintigraphic images achieved by pinhole collimation led to significant improvement in resolution. Fourth, the radionuclide was incorporated into the granulation so that the whole mass of the tablet was uniformly labeled with high levels of activity. This technique allowed the continuous monitoring of the disintegration process of tablets in vivo in experimental animals. Multiple pinhole collimation and the labeling process permitted the acquisition of quality scintigraphic images of the labeled tablet every 30 sec. The resolution of the method was tested in vitro and in vivo.
NASA Technical Reports Server (NTRS)
Frye, Stuart; Mandl, Dan; Cappelaere, Pat
2016-01-01
This presentation describes the closed-loop satellite autonomy methods used to connect users and the assets on Earth Observing-1 (EO-1) and similar satellites. The base layer is a distributed architecture based on the Goddard Mission Services Evolution Center (GMSEC), with each asset still under independent control. Situational awareness is provided by a middleware layer through a common Application Programmer Interface (API) to GMSEC components developed at GSFC. Users set up their own tasking requests and receive views into immediate past acquisitions in their area of interest and into future feasibilities for acquisition across all assets. Automated notifications via pub/sub feeds are returned to users containing published links to image footprints, algorithm results, and full data sets. Theme-based algorithms are available on demand for processing.
Using the ATL HDI 1000 to collect demodulated RF data for monitoring HIFU lesion formation
NASA Astrophysics Data System (ADS)
Anand, Ajay; Kaczkowski, Peter J.; Daigle, Ron E.; Huang, Lingyun; Paun, Marla; Beach, Kirk W.; Crum, Lawrence A.
2003-05-01
The ability to accurately track and monitor the progress of lesion formation during HIFU (High Intensity Focused Ultrasound) therapy is important for the success of HIFU-based treatment protocols. To aid in the development of algorithms for accurately targeting and monitoring formation of HIFU induced lesions, we have developed a software system to perform RF data acquisition during HIFU therapy using a commercially available clinical ultrasound scanner (ATL HDI 1000, Philips Medical Systems, Bothell, WA). The HDI 1000 scanner functions on a software dominant architecture, permitting straightforward external control of its operation and relatively easy access to quadrature demodulated RF data. A PC running a custom developed program sends control signals to the HIFU module via GPIB and to the HDI 1000 via Telnet, alternately interleaving HIFU exposures and RF frame acquisitions. The system was tested during experiments in which HIFU lesions were created in excised animal tissue. No crosstalk between the HIFU beam and the ultrasound imager was detected, thus demonstrating synchronization. Newly developed acquisition modes allow greater user control in setting the image geometry and scanline density, and enables high frame rate acquisition. This system facilitates rapid development of signal-processing based HIFU therapy monitoring algorithms and their implementation in image-guided thermal therapy systems. In addition, the HDI 1000 system can be easily customized for use with other emerging imaging modalities that require access to the RF data such as elastographic methods and new Doppler-based imaging and tissue characterization techniques.
Wiens, Curtis N.; Artz, Nathan S.; Jang, Hyungseok; McMillan, Alan B.; Reeder, Scott B.
2017-01-01
Purpose: To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. Theory and Methods: A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Results: Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. Conclusion: A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. PMID:27403613
Integration of High-resolution Data for Temporal Bone Surgical Simulations
Wiet, Gregory J.; Stredney, Don; Powell, Kimerly; Hittle, Brad; Kerwin, Thomas
2016-01-01
Purpose: To report on the state of the art in obtaining high-resolution 3D data of the microanatomy of the temporal bone and to process that data for integration into a surgical simulator. Specifically, we report on our experience in this area and discuss the issues involved to further the field. Data Sources: Current temporal bone image acquisition and image processing established in the literature as well as in-house methodological development. Review Methods: We reviewed the current English literature for the techniques used in computer-based temporal bone simulation systems to obtain and process anatomical data for use within the simulation. Search terms included “temporal bone simulation, surgical simulation, temporal bone.” Articles were chosen and reviewed that directly addressed data acquisition and processing/segmentation and enhancement with emphasis given to computer-based systems. We present the results from this review in relationship to our approach. Conclusions: High-resolution CT imaging (≤100 μm voxel resolution), along with unique image processing and rendering algorithms and structure-specific enhancement, is needed for high-level training and assessment using temporal bone surgical simulators. Higher resolution clinical scanning and automated processes that run in efficient time frames are needed before these systems can routinely support pre-surgical planning. Additionally, protocols such as that provided in this manuscript need to be disseminated to increase the number and variety of virtual temporal bones available for training and performance assessment. PMID:26762105
Complete information acquisition in scanning probe microscopy
Belianinov, Alex; Kalinin, Sergei V.; Jesse, Stephen
2015-03-13
In the last three decades, scanning probe microscopy (SPM) has emerged as a primary tool for exploring and controlling the nanoworld. A critical part of the SPM measurements is the information transfer from the tip-surface junction to a macroscopic measurement system. This process reduces the many degrees of freedom of a vibrating cantilever to relatively few parameters recorded as images. Similarly, the details of dynamic cantilever response at sub-microsecond time scales of transients, higher-order eigenmodes and harmonics are averaged out by transitioning to millisecond time scale of pixel acquisition. Hence, the amount of information available to the external observer is severely limited, and its selection is biased by the chosen data processing method. Here, we report a fundamentally new approach for SPM imaging based on information theory-type analysis of the data stream from the detector. This approach allows full exploration of complex tip-surface interactions, spatial mapping of multidimensional variability of material's properties and their mutual interactions, and SPM imaging at the information channel capacity limit.
NASA Astrophysics Data System (ADS)
Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus
2017-11-01
At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR) the High Resolution Stereo Camera (HRSC) has been designed for international missions to planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D-capabilities and high resolution with multispectral data acquisition. Variable resolutions depending on the camera control settings can be generated. A high-end GPS/INS system in combination with the multi-angle image information yields precise and high-frequency orientation data for the acquired image lines. In order to handle these data a completely automated photogrammetric processing system has been developed, which allows the generation of multispectral 3D-image products for large areas with accuracies for planimetry and height in the decimeter range. This accuracy has been confirmed by detailed investigations.
Control method for video guidance sensor system
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Book, Michael L. (Inventor); Bryan, Thomas C. (Inventor)
2005-01-01
A method is provided for controlling operations in a video guidance sensor system wherein images of laser output signals transmitted by the system and returned from a target are captured and processed by the system to produce data used in tracking of the target. Six modes of operation are provided as follows: (i) a reset mode; (ii) a diagnostic mode; (iii) a standby mode; (iv) an acquisition mode; (v) a tracking mode; and (vi) a spot mode wherein captured images of returned laser signals are processed to produce data for all spots found in the image. The method provides for automatic transition to the standby mode from the reset mode after integrity checks are performed and from the diagnostic mode to the reset mode after diagnostic operations are carried out. Further, acceptance of reset and diagnostic commands is permitted only when the system is in the standby mode. The method also provides for automatic transition from the acquisition mode to the tracking mode when an acceptable target is found.
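A hedged sketch of the mode logic spelled out in the abstract (reset, diagnostic, standby, acquisition, tracking, spot), showing the automatic transitions and the rule that reset and diagnostic commands are accepted only in standby; class and method names are invented for illustration.

```python
from enum import Enum, auto

class Mode(Enum):
    RESET = auto(); DIAGNOSTIC = auto(); STANDBY = auto()
    ACQUISITION = auto(); TRACKING = auto(); SPOT = auto()

class VideoGuidanceSensor:
    def __init__(self):
        self.mode = Mode.RESET

    def step(self, integrity_ok=False, diagnostics_done=False, target_found=False):
        """Advance the automatic transitions described in the abstract."""
        if self.mode is Mode.RESET and integrity_ok:
            self.mode = Mode.STANDBY          # reset -> standby after integrity checks
        elif self.mode is Mode.DIAGNOSTIC and diagnostics_done:
            self.mode = Mode.RESET            # diagnostic -> reset once diagnostics finish
        elif self.mode is Mode.ACQUISITION and target_found:
            self.mode = Mode.TRACKING         # acquisition -> tracking on an acceptable target
        return self.mode

    def command(self, requested):
        """Reset and diagnostic commands are accepted only from standby."""
        if requested in (Mode.RESET, Mode.DIAGNOSTIC) and self.mode is not Mode.STANDBY:
            return False
        self.mode = requested
        return True
```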
Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion
NASA Astrophysics Data System (ADS)
Qiao, Tiezhu; Chen, Lulu; Pang, Yusong; Yan, Gaowei
2018-06-01
Infrared and visible light image fusion has been a hot spot in multi-sensor fusion research in recent years. Existing infrared and visible light fusion systems require image registration before fusion because they use two separate cameras, and the performance of this registration step still leaves room for improvement. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: by using a beam splitter prism, the coaxial light entering through a single lens is projected onto an infrared charge coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
Image-plane processing of visual information
NASA Technical Reports Server (NTRS)
Huck, F. O.; Fales, C. L.; Park, S. K.; Samms, R. W.
1984-01-01
Shannon's theory of information is used to optimize the optical design of sensor-array imaging systems which use neighborhood image-plane signal processing for enhancing edges and compressing dynamic range during image formation. The resultant edge-enhancement, or band-pass-filter, response is found to be very similar to that of human vision. Comparisons of traits in human vision with results from information theory suggest that: (1) Image-plane processing, like preprocessing in human vision, can improve visual information acquisition for pattern recognition when resolving power, sensitivity, and dynamic range are constrained. Improvements include reduced sensitivity to changes in light levels, reduced signal dynamic range, reduced data transmission and processing, and reduced aliasing and photosensor noise degradation. (2) Information content can be an appropriate figure of merit for optimizing the optical design of imaging systems when visual information is acquired for pattern recognition. The design trade-offs involve spatial response, sensitivity, and sampling interval.
The Research on Denoising of SAR Image Based on Improved K-SVD Algorithm
NASA Astrophysics Data System (ADS)
Tan, Linglong; Li, Changkai; Wang, Yueqin
2018-04-01
SAR images often suffer from noise interference during acquisition and transmission, which can greatly reduce image quality and cause great difficulties for image processing. The existing complete DCT dictionary algorithm is fast, but its denoising effect is poor. To address this problem of poor denoising, this paper applies the K-SVD (K-means and singular value decomposition) algorithm to image noise suppression. Firstly, the sparse dictionary structure is introduced in detail; the dictionary has a compact representation and can effectively train on the image signal. Then, the dictionary is trained with the K-SVD algorithm according to the sparse representation. The algorithm has advantages in high-dimensional data processing. Experimental results show that the proposed algorithm removes speckle noise more effectively than the complete DCT dictionary and better retains edge details.
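As a rough illustration of the patch-based dictionary denoising pipeline described above, the sketch below uses scikit-learn's MiniBatchDictionaryLearning as a stand-in for a true K-SVD update (an assumption, not the authors' implementation); the patch size, number of atoms and sparsity level are illustrative, and for multiplicative SAR speckle a log transform of the image is commonly applied beforehand.

    import numpy as np
    from sklearn.decomposition import MiniBatchDictionaryLearning
    from sklearn.feature_extraction.image import (extract_patches_2d,
                                                  reconstruct_from_patches_2d)

    def dictionary_denoise(noisy, patch_size=(8, 8), n_atoms=128, sparsity=5):
        """noisy: 2-D float image (for SAR speckle, log-transform first)."""
        # Learn the dictionary on a random subset of patches.
        train = extract_patches_2d(noisy, patch_size, max_patches=5000, random_state=0)
        train = train.reshape(train.shape[0], -1).astype(float)
        train -= train.mean(axis=1, keepdims=True)
        dico = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           transform_algorithm='omp',
                                           transform_n_nonzero_coefs=sparsity,
                                           random_state=0)
        dico.fit(train)

        # Sparse-code every overlapping patch and average the reconstructions.
        patches = extract_patches_2d(noisy, patch_size)
        flat = patches.reshape(patches.shape[0], -1).astype(float)
        means = flat.mean(axis=1, keepdims=True)
        codes = dico.transform(flat - means)
        recon = (codes @ dico.components_ + means).reshape(patches.shape)
        return reconstruct_from_patches_2d(recon, noisy.shape)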
NASA Astrophysics Data System (ADS)
Alqasemi, Umar; Li, Hai; Aguirre, Andres; Zhu, Quing
2011-03-01
Co-registering ultrasound (US) and photoacoustic (PA) imaging is a logical extension to conventional ultrasound because both modalities provide complementary information on tumor morphology, tumor vasculature and hypoxia for cancer detection and characterization. In addition, both modalities are capable of providing real-time images for clinical applications. In this paper, a Field Programmable Gate Array (FPGA) and Digital Signal Processor (DSP) module-based real-time US/PA imaging system is presented. The system provides real-time US/PA data acquisition and image display at up to 5 fps using the currently implemented DSP board. It can be upgraded to 15 fps, the maximum pulse repetition rate of the laser used, by implementing an advanced DSP module. Additionally, the photoacoustic RF data for each frame is saved for further off-line processing. The system frontend consists of eight 16-channel modules made of commercial and customized circuits. Each 16-channel module consists of two commercial 8-channel receiving circuitry boards and one FPGA board from Analog Devices. Each receiving board contains an IC that combines 8-channel low-noise amplifiers, variable-gain amplifiers, anti-aliasing filters, and ADCs in a single chip with a sampling frequency of 40 MHz. The FPGA board captures the LVDS Double Data Rate (DDR) digital output of the receiving board and performs data conditioning and sub-beamforming. A customized 16-channel transmission circuit is connected to the two receiving boards for US pulse-echo (PE) mode data acquisition. A DSP module uses an External Memory Interface (EMIF) to interface with the eight 16-channel modules through a customized adaptor board. The DSP transfers either sub-beamformed data (US pulse-echo mode or PA imaging mode) or raw data from the FPGA boards to its DDR-2 memory through the EMIF link, performs additional processing, and then transfers the data to the PC for further image processing. The PC code performs image processing including demodulation, beam envelope detection and scan conversion. Additionally, the PC code pre-calculates the delay coefficients used for transmission focusing and receiving dynamic focusing for different types of transducers to speed up the imaging process. To further speed up imaging, a multi-threading technique is implemented so that formation of the previous image frame and acquisition of the next frame proceed simultaneously. The system is also capable of semi-real-time automated SO2 imaging at 10 seconds per frame by automatically turning the wavelength knob of the laser with a stepper motor controlled by the system. Initial in vivo experiments were performed on animal tumors to map their vasculature and hypoxia level, which were superimposed on co-registered US images. The real-time system allows capturing co-registered US/PA images free of motion artifacts and also provides dynamic information when contrast agents are used.
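The PC-side steps named above (envelope detection and conversion to a displayable, log-compressed image) can be sketched as follows; the function name, dynamic range and use of the Hilbert transform are illustrative assumptions rather than the authors' code.

    import numpy as np
    from scipy.signal import hilbert

    def envelope_image(beamformed_rf, dynamic_range_db=50.0):
        """beamformed_rf: 2-D array, axial samples x scan lines."""
        env = np.abs(hilbert(beamformed_rf, axis=0))   # envelope detection along depth
        env /= env.max() + 1e-12                       # normalise to the peak
        img = 20.0 * np.log10(env + 1e-12)             # log compression
        return np.clip(img, -dynamic_range_db, 0.0)    # restrict to display dynamic range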
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
NASA Astrophysics Data System (ADS)
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components, known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, for example when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is computationally very efficient, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease of the error energy after processing, where "error energy" is defined as the total power of the difference to a 256-data-line reference image. The elimination of ringing artefacts then appears almost complete.
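For readers who want to reproduce the effect being corrected, the sketch below simulates the reduced-acquisition setting: only a fraction of the k-space lines is kept (a centred band is used here for simplicity) and the rest is zero-filled before inverse transformation, producing the Gibbs ringing discussed above. The error energy of a reconstruction can then be computed exactly as defined in the abstract.

    import numpy as np

    def truncate_kspace(image, fraction=0.70):
        # Forward transform, keep only a centred `fraction` of the phase-encode lines.
        k = np.fft.fftshift(np.fft.fft2(image))
        n = image.shape[0]
        keep = int(round(fraction * n))
        start = (n - keep) // 2
        k_trunc = np.zeros_like(k)
        k_trunc[start:start + keep, :] = k[start:start + keep, :]
        return np.abs(np.fft.ifft2(np.fft.ifftshift(k_trunc)))

    def error_energy(recon, reference):
        # "Error energy": total power of the difference to the reference image.
        return float(np.sum((recon - reference) ** 2))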
Enhanced FIB-SEM systems for large-volume 3D imaging
Xu, C. Shan; Hayworth, Kenneth J.; Lu, Zhiyuan; ...
2017-05-13
Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) can automatically generate 3D images with superior z-axis resolution, yielding data that needs minimal image registration and related post-processing. Obstacles blocking wider adoption of FIB-SEM include slow imaging speed and lack of long-term system stability, which caps the maximum possible acquisition volume. Here, we present techniques that accelerate image acquisition while greatly improving FIB-SEM reliability, allowing the system to operate for months and generating continuously imaged volumes greater than 10^6 μm^3. These volumes are large enough for connectomics, where the excellent z resolution can help in tracing of small neuronal processes and accelerate the tedious and time-consuming human proofreading effort. Even higher resolution can be achieved on smaller volumes. We present example data sets from mammalian neural tissue, Drosophila brain, and Chlamydomonas reinhardtii to illustrate the power of this novel high-resolution technique to address questions in both connectomics and cell biology.
Parametric color coding of digital subtraction angiography.
Strother, C M; Bender, F; Deuerling-Zheng, Y; Royalty, K; Pulfer, K A; Baumgart, J; Zellerhoff, M; Aagaard-Kienitz, B; Niemann, D B; Lindstrom, M L
2010-05-01
Color has been shown to facilitate both visual search and recognition tasks. It was our purpose to examine the impact of a color-coding algorithm on the interpretation of 2D-DSA acquisitions by experienced and inexperienced observers. Twenty-six 2D-DSA acquisitions obtained as part of routine clinical care from subjects with a variety of cerebrovascular disease processes were selected from an internal data base so as to include a variety of disease states (aneurysms, AVMs, fistulas, stenosis, occlusions, dissections, and tumors). Three experienced and 3 less experienced observers were each shown the acquisitions on a prerelease version of a commercially available double-monitor workstation (XWP, Siemens Healthcare). Acquisitions were presented first as a subtracted image series and then as a single composite color-coded image of the entire acquisition. Observers were then asked a series of questions designed to assess the value of the color-coded images for the following purposes: 1) to enhance their ability to make a diagnosis, 2) to have confidence in their diagnosis, 3) to plan a treatment, and 4) to judge the effect of a treatment. The results were analyzed by using 1-sample Wilcoxon tests. Color-coded images enhanced the ease of evaluating treatment success in >40% of cases (P < .0001). They also had a statistically significant impact on treatment planning, making planning easier in >20% of the cases (P = .0069). In >20% of the examples, color-coding made diagnosis and treatment planning easier for all readers (P < .0001). Color-coding also increased the confidence of diagnosis compared with the use of DSA alone (P = .056). The impact of this was greater for the naïve readers than for the expert readers. At no additional cost in x-ray dose or contrast medium, color-coding of DSA enhanced the conspicuity of findings on DSA images. It was particularly useful in situations in which there was a complex flow pattern and in evaluation of pre- and posttreatment acquisitions. Its full potential remains to be defined.
Satellite land use acquisition and applications to hydrologic planning models
NASA Technical Reports Server (NTRS)
Algazi, V. R.; Suk, M.
1977-01-01
A developing operational procedure for use by the Corps of Engineers in the acquisition of land use information for hydrologic planning purposes was described. The operational conditions preclude the use of dedicated, interactive image processing facilities. Given the constraints, an approach to land use classification based on clustering seems promising and was explored in detail. The procedure is outlined and examples of its application to two watersheds are given.
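A minimal sketch of the clustering idea mentioned above, assuming multispectral pixel vectors (e.g. several Landsat bands) and an illustrative number of land-use classes; this is a generic unsupervised formulation, not the Corps of Engineers procedure itself.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_land_use(bands, n_classes=6):
        """bands: array of shape (rows, cols, n_bands) of multispectral reflectances."""
        rows, cols, nb = bands.shape
        pixels = bands.reshape(-1, nb).astype(float)
        labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(pixels)
        return labels.reshape(rows, cols)    # per-pixel cluster (land-use class) map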
NASA Astrophysics Data System (ADS)
Jarvis, Jan; Haertelt, Marko; Hugger, Stefan; Butschek, Lorenz; Fuchs, Frank; Ostendorf, Ralf; Wagner, Joachim; Beyerer, Juergen
2017-04-01
In this work we present data analysis algorithms for the detection of hazardous substances in hyperspectral observations acquired using active mid-infrared (MIR) backscattering spectroscopy. We present a novel background extraction algorithm, called the adaptive background generation process (ABGP) and based on the adaptive target generation process proposed by Ren and Chang, that generates a robust and physically meaningful set of background spectra for operation of the well-known adaptive matched subspace detection (AMSD) algorithm. It is shown that the resulting AMSD-ABGP detection algorithm competes well with other widely used detection algorithms. The method is demonstrated on measurement data obtained with two fundamentally different active MIR hyperspectral data acquisition devices: a hyperspectral image sensor applicable to static scenes, which takes a wavelength-sequential approach to hyperspectral data acquisition, and a rapid wavelength-scanning single-element detector variant of the same principle, which uses spatial scanning to generate the hyperspectral observation. It is shown that the measurement timescale of the latter is sufficient for the application of the data analysis algorithms even in dynamic scenarios.
NASA Astrophysics Data System (ADS)
Czermak, A.; Zalewska, A.; Dulny, B.; Sowicki, B.; Jastrząb, M.; Nowak, L.
2004-07-01
The need for real-time monitoring of the hadrontherapy beam intensity and profile, as well as the requirements for fast dosimetry using Monolithic Active Pixel Sensors (MAPS), led the SUCIMA collaboration to design a dedicated Data Acquisition System (DAQ SUCIMA Imager). The DAQ system has been developed on one of the most advanced XILINX Field Programmable Gate Array chips, the Virtex-II. A dedicated multifunctional electronic board for capturing the detector's analogue signals, processing them digitally in parallel, compressing the final data and transmitting it through a high-speed USB 2.0 port has been prototyped and tested.
Development of an imaging system for single droplet characterization using a droplet generator.
Minov, S Vulgarakis; Cointault, F; Vangeyte, J; Pieters, J G; Hijazi, B; Nuyttens, D
2012-01-01
The spray droplets generated by agricultural nozzles play an important role in the application accuracy and efficiency of plant protection products. The limitations of non-imaging techniques and recent improvements in digital image acquisition and processing have increased the interest in using high-speed imaging techniques for pesticide spray characterisation. The goal of this study was to develop an imaging technique to evaluate the characteristics of a single spray droplet using a piezoelectric single droplet generator and a high-speed imaging technique. Tests were done with different camera settings, lenses, diffusers and light sources. The experiments showed the necessity of a good image acquisition and processing system. Image analysis results contributed to selecting the optimal set-up for measuring droplet size and velocity, which consisted of a high-speed camera with a 6 μs exposure time, a microscope lens at a working distance of 43 cm resulting in a field of view of 1.0 cm x 0.8 cm, and a Xenon light source without diffuser used as a backlight. For measuring macro-spray characteristics such as the droplet trajectory, the spray angle and the spray shape, a Macro Video Zoom lens at a working distance of 14.3 cm with a larger field of view of 7.5 cm x 9.5 cm, in combination with a halogen spotlight with a diffuser and the high-speed camera, can be used.
PACS 2000: quality control using the task allocation chart
NASA Astrophysics Data System (ADS)
Norton, Gary S.; Romlein, John R.; Lyche, David K.; Richardson, Ronald R., Jr.
2000-05-01
Medical imaging's technological evolution in the next century will continue to include Picture Archive and Communication Systems (PACS) and teleradiology. It is difficult to predict radiology's future in the new millennium, with both computed radiography and direct digital capture competing as the primary image acquisition methods for routine radiography. Changes in Computed Axial Tomography (CT) and Magnetic Resonance Imaging (MRI) continue to amaze the healthcare community. No matter how the acquisition, display, and archive functions change, Quality Control (QC) of the radiographic imaging chain will remain an important step in the imaging process. The Task Allocation Chart (TAC) is a tool that can be used in a medical facility's QC process to indicate the testing responsibilities of the image stakeholders and the medical informatics department. The TAC shows a grid of equipment to be serviced, tasks to be performed, and the organization assigned to perform each task. Additionally, skills, tasks, time, and references for each task can be provided. QC of the PACS must be stressed as a primary element of a PACS implementation. The TAC can be used to clarify responsibilities during warranty and paid maintenance periods. Establishing a TAC as part of a PACS implementation has a positive effect on patient care and clinical acceptance.
Automating High-Precision X-Ray and Neutron Imaging Applications with Robotics
Hashem, Joseph Anthony; Pryor, Mitch; Landsberger, Sheldon; ...
2017-03-28
Los Alamos National Laboratory and the University of Texas at Austin recently implemented a robotically controlled nondestructive testing (NDT) system for X-ray and neutron imaging. This system is intended to address the need for accurate measurements for a variety of parts, to track measurement geometry at every imaging location, and to support high-throughput applications. This system was deployed in a beam port at a nuclear research reactor and in an operational inspection X-ray bay. The nuclear research reactor system consisted of a precision industrial seven-axis robot, a 1.1-MW TRIGA research reactor, and a scintillator-mirror-camera-based imaging system. The X-ray bay system incorporated the same robot, a 225-keV microfocus X-ray source, and a custom flat panel digital detector. The robotic positioning arm is programmable and allows imaging in multiple configurations, including planar, cylindrical, and other user-defined geometries that provide enhanced engineering evaluation capability. The imaging acquisition device is coupled with the robot for automated image acquisition. The robot can achieve target positional repeatability within 17 μm in 3-D space. Flexible automation with nondestructive imaging saves costs, reduces dosage, adds imaging techniques, and achieves better quality results in less time. Specifics regarding the robotic system and the imaging acquisition and evaluation processes are presented. In conclusion, this paper reviews the comprehensive testing and system evaluation to affirm the feasibility of robotic NDT, presents the system configuration, and reviews results for both X-ray and neutron radiography imaging applications.
Fast, low-dose patient localization on TomoTherapy via topogram registration.
Moore, Kevin L; Palaniswaamy, Geethpriya; White, Benjamin; Goddu, S Murty; Low, Daniel A
2010-08-01
To investigate a protocol which efficiently localizes TomoTherapy patients with a scout imaging (topogram) mode that can be used with or instead of 3D megavoltage computed tomography (MVCT) imaging. The process presented here is twofold: (a) the acquisition of the topogram using the TomoTherapy MV imaging system and (b) the generation of a digitally reconstructed topogram (DRT) derived from a standard kV CT simulation data set. The unique geometric characteristics of the current TomoTherapy imaging system were explored both theoretically and by acquiring topograms of anthropomorphic phantoms and comparing these images to DRT images. The performance of the MV topogram imaging system in terms of image quality, dose incurred to the patient, and acquisition time was investigated using ionization chamber and radiographic film measurements. The time required to acquire a clinically usable topogram, limited by the maximum couch speed of 4.0 cm/s, was 12.5 s for a 50 cm long field. The patient dose was less than 1% of that delivered by a helical MVCT scan. Further refinements within the current TomoTherapy system, most notably decreasing the imaging beam repetition rate during MV topogram acquisition, would further reduce the topogram dose to less than 25 μGy per scan without compromising image quality. Topogram localization on TomoTherapy is a fast and low-dose alternative to 3D MVCT localization. A protocol designed to exclusively utilize MV topograms would result in a 30-fold reduction in imaging time and a 100-fold reduction in dose from localization scans using the current TomoTherapy workflow.
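The quoted 12.5 s topogram time follows directly from the stated geometry, as the short calculation below shows.

    # Scan length divided by the maximum couch speed gives the acquisition time.
    field_length_cm = 50.0            # scan length along the couch direction
    couch_speed_cm_per_s = 4.0        # maximum couch speed
    topogram_time_s = field_length_cm / couch_speed_cm_per_s
    print(topogram_time_s)            # 12.5 s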
Increasing circular synthetic aperture sonar resolution via adapted wave atoms deconvolution.
Pailhas, Yan; Petillot, Yvan; Mulgrew, Bernard
2017-04-01
Circular Synthetic Aperture Sonar (CSAS) processing coherently combines Synthetic Aperture Sonar (SAS) data acquired along a circular trajectory. This approach has a number of advantages; in particular, it maximises the aperture length of a SAS system, producing very high resolution sonar images. CSAS image reconstruction using back-projection algorithms, however, introduces a dissymmetry in the impulse response as the imaged point moves away from the centre of the acquisition circle. This paper proposes a sampling scheme for CSAS image reconstruction which allows every point within the full field of view of the system to be considered as the centre of a virtual CSAS acquisition. As a direct consequence of using the proposed resampling scheme, the point spread function (PSF) is uniform over the full CSAS image. Closed-form solutions for the CSAS PSF are derived analytically, both in the image and the Fourier domain. This thorough knowledge of the PSF leads naturally to the proposed adapted wave atom basis for CSAS image decomposition. The wave atom deconvolution is successfully applied to simulated data, increasing the image resolution by reducing the PSF energy leakage.
Takeshima, Hidenori; Saitoh, Kanako; Nitta, Shuhei; Shiodera, Taichiro; Takeguchi, Tomoyuki; Bannae, Shuhei; Kuhara, Shigehide
2018-03-13
Dynamic MR techniques, such as cardiac cine imaging, benefit from shorter acquisition times. The goal of the present study was to develop a method that achieves short acquisition times, while maintaining a cost-effective reconstruction, for dynamic MRI. k-t sensitivity encoding (SENSE) was identified as the base method to be enhanced to meet these two requirements. The proposed method achieves a reduction in acquisition time by estimating the spatiotemporal (x-f) sensitivity without requiring the acquisition of the alias-free signals typical of the k-t SENSE technique. The cost-effective reconstruction, in turn, is achieved by a computationally efficient estimation of the x-f sensitivity from the band-limited signals of the aliased inputs. Such band-limited signals are suitable for sensitivity estimation because the strongly aliased signals have been removed. For the same nominal reduction factor of 4, the net reduction factor of 4 for the proposed method was significantly higher than the factor of 2.29 achieved by k-t SENSE. The processing time is reduced from 4.1 s for k-t SENSE to 1.7 s for the proposed method. The image quality obtained using the proposed method proved to be superior (mean squared error [MSE] ± standard deviation [SD] = 6.85 ± 2.73) compared to the k-t SENSE case (MSE ± SD = 12.73 ± 3.60) for the vertical long-axis (VLA) view, as well as other views. In the present study, k-t SENSE was identified as a suitable base method to be improved, achieving both short acquisition times and a cost-effective reconstruction. To enhance these characteristics of the base method, a novel implementation is proposed, estimating the x-f sensitivity without the need for an explicit scan of the reference signals. Experimental results showed that the acquisition and computational times and the image quality for the proposed method were improved compared to the standard k-t SENSE method.
Meyer, Celine; Weinmann, Pierre
2017-08-01
Cadmium-zinc-telluride (CZT) cameras allow a significant decrease in the acquisition time of myocardial perfusion imaging (MPI), but the duration of the examination is still long. Therefore, this study was performed to test the feasibility of early imaging following injection of Tc-99m sestamibi using a CZT camera. Seventy patients underwent both an early and a delayed image acquisition after an exercise stress test (n = 30), a dipyridamole stress test (n = 20), and at rest (n = 20). After injection of Tc-99m sestamibi, the early image acquisition started on average within 5 minutes for the exercise and rest groups, and within 3 minutes 30 seconds for the dipyridamole group. Two independent observers evaluated image quality and extracardiac uptake on four-point scales. The difference between early and later images for each patient was scored on a five-point scale. The image quality and extracardiac uptake of early and delayed image acquisitions were not different for the three groups (P > .05). There was no significant difference between early and delayed image acquisitions in the exercise, dipyridamole, and rest groups in 63%, 40%, and 80% of cases, respectively. In the exercise group and rest group, a defect was present only in early MPI in 13% and 20% of cases, respectively. A defect was present only in delayed images in 10% of cases in the exercise group and in 45% of cases in the dipyridamole group. There was no difference between early and later image acquisitions in terms of quality. This protocol reduces the length of the procedure for the patient. Beginning with early image acquisitions may help to overcome artifacts that are observed at the delayed time point.
Iris recognition via plenoptic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J.; Boehnen, Chris Bensing; Bolme, David S.
Iris recognition can be accomplished for a wide variety of eye images by using plenoptic imaging. Using plenoptic technology, it is possible to correct focus after image acquisition. One example technology reconstructs images having different focus depths and stitches them together, resulting in a fully focused image, even in an off-angle gaze scenario. Another example technology determines three-dimensional data for an eye and incorporates it into an eye model used for iris recognition processing. Another example technology detects contact lenses. Application of the technologies can result in improved iris recognition under a wide variety of scenarios.
Poster — Thur Eve — 55: An automated XML technique for isocentre verification on the Varian TrueBeam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asiev, Krum; Mullins, Joel; DeBlois, François
2014-08-15
Isocentre verification tests, such as the Winston-Lutz (WL) test, have gained popularity in recent years as techniques such as stereotactic radiosurgery/radiotherapy (SRS/SRT) treatments are more commonly performed on radiotherapy linacs. These highly conformal treatments require frequent monitoring of the geometrical accuracy of the isocentre to ensure proper radiation delivery. At our clinic, the WL test is performed by acquiring with the EPID a collection of 8 images of a WL phantom fixed on the couch at various couch/gantry angles. This set of images is later analyzed to determine the isocentre size. The current work addresses the acquisition process. A manual WL test acquisition performed by an experienced physicist takes on average 25 minutes and is prone to user manipulation errors. We have automated this acquisition on a Varian TrueBeam STx linac (Varian, Palo Alto, USA). The Varian developer mode allows the execution of custom-made XML script files to control all aspects of the linac operation. We have created an XML-WL script that cycles through each couch/gantry combination, taking an EPID image at each position. This automated acquisition is done in less than 4 minutes. The reproducibility of the method was verified by repeating the execution of the XML file 5 times. The analysis of the images showed variations of the isocentre size of less than 0.1 mm along the X, Y and Z axes, which compares favorably to a manual acquisition, for which we typically observe variations of up to 0.5 mm.
A digital-signal-processor-based optical tomographic system for dynamic imaging of joint diseases
NASA Astrophysics Data System (ADS)
Lasker, Joseph M.
Over the last decade, optical tomography (OT) has emerged as a viable biomedical imaging modality. Various imaging systems have been developed and are employed in preclinical as well as clinical studies, mostly targeting breast imaging, brain imaging, and cancer-related studies. Of particular interest are so-called dynamic imaging studies, where one attempts to image changes in optical properties and/or physiological parameters as they occur during a system perturbation. To successfully perform dynamic imaging studies, great effort is put towards system development that offers increasingly enhanced signal-to-noise performance at ever shorter data acquisition times, thus capturing high-fidelity tomographic data within narrower time periods. Towards this goal, I have developed in this thesis a dynamic optical tomography system that is, unlike currently available analog instrumentation, based on digital data acquisition and filtering techniques. At the core of this instrument is a digital signal processor (DSP) that collects, collates, and processes the digitized data set. Complementary protocols between the DSP and a complex programmable logic device synchronize the sampling process and organize data flow. Instrument control is implemented through a comprehensive graphical user interface which integrates automated calibration, data acquisition, and signal post-processing. Real-time data is generated at frame rates as high as 140 Hz. An extensive dynamic range (~190 dB) accommodates a wide scope of measurement geometries and tissue types. Performance analysis demonstrates very low system noise (~1 pW rms noise equivalent power), excellent signal precision (~0.04%-0.2%) and long-term system stability (~1% over 40 min). Experiments on tissue phantoms validate the spatial and temporal accuracy of the system. As a potential new application of dynamic optical imaging, I present the first application of this method to use vascular hemodynamics as a means of characterizing joint diseases, especially the effects of rheumatoid arthritis (RA) in the proximal interphalangeal finger joints. Using a dual-wavelength tomographic imaging system and a previously implemented reconstruction scheme, I have performed initial dynamic imaging case studies on healthy volunteers and patients diagnosed with RA. These studies support our hypothesis that differences in vascular and metabolic reactivity exist between affected and unaffected joints and can be used for diagnostic purposes.
Rong, Xing; Du, Yong; Frey, Eric C
2012-06-21
Quantitative Yttrium-90 ((90)Y) bremsstrahlung single photon emission computed tomography (SPECT) imaging has shown great potential to provide reliable estimates of (90)Y activity distribution for targeted radionuclide therapy dosimetry applications. One factor that potentially affects the reliability of the activity estimates is the choice of the acquisition energy window. In contrast to imaging conventional gamma photon emitters, where the acquisition energy windows are usually placed around photopeaks, there has been great variation in the choice of the acquisition energy window for (90)Y imaging due to the continuous and broad energy distribution of the bremsstrahlung photons. In quantitative imaging of conventional gamma photon emitters, previous methods for optimizing the acquisition energy window assumed unbiased estimators and used the variance of the estimates as a figure of merit (FOM). However, in situations such as (90)Y imaging, where there are errors in the modeling of the image formation process used in the reconstruction, there will be bias in the activity estimates. In (90)Y bremsstrahlung imaging this is especially important due to the high levels of scatter, multiple scatter, and collimator septal penetration and scatter. Thus variance is not a complete measure of the reliability of the estimates and hence not a complete FOM. To address this, we first aimed to develop a new method to optimize the energy window that accounts for both the bias due to model-mismatch and the variance of the activity estimates. We applied this method to optimize the acquisition energy window for quantitative (90)Y bremsstrahlung SPECT imaging in microsphere brachytherapy. Since absorbed dose is defined as the energy absorbed from the radiation per unit mass of tissue, in this new method we proposed a mass-weighted root mean squared error of the volume of interest (VOI) activity estimates as the FOM. To calculate this FOM, two analytical expressions were derived for calculating the bias due to model-mismatch and the variance of the VOI activity estimates, respectively. To obtain the optimal acquisition energy window for general situations of interest in clinical (90)Y microsphere imaging, we generated phantoms with multiple tumors of various sizes and various tumor-to-normal activity concentration ratios using a digital phantom that realistically simulates human anatomy, simulated (90)Y microsphere imaging with a clinical SPECT system and typical imaging parameters using a previously validated Monte Carlo simulation code, and used a previously proposed method for modeling the image-degrading effects in quantitative SPECT reconstruction. The obtained optimal acquisition energy window was 100-160 keV. The values of the proposed FOM were much larger than those of an FOM taking into account only the variance of the activity estimates, demonstrating in our experiment that the bias of the activity estimates due to model-mismatch was a more important factor than the variance in limiting the reliability of the activity estimates.
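A minimal sketch of the figure of merit described above, combining the model-mismatch bias and the variance of the VOI activity estimates into a mass-weighted root mean squared error; the array names and the exact weight normalisation are assumptions for illustration, not the paper's derived expressions.

    import numpy as np

    def mass_weighted_rmse(bias, variance, masses):
        """bias, variance, masses: 1-D arrays, one entry per VOI."""
        mse = bias ** 2 + variance            # per-VOI mean squared error (bias-variance split)
        weights = masses / masses.sum()       # mass weighting (normalisation assumed)
        return float(np.sqrt(np.sum(weights * mse)))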
Markiewicz, Pawel J; Ehrhardt, Matthias J; Erlandsson, Kjell; Noonan, Philip J; Barnes, Anna; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sebastien
2018-01-01
We present a standalone, scalable and high-throughput software platform for PET image reconstruction and analysis. We focus on high fidelity modelling of the acquisition processes to provide high accuracy and precision quantitative imaging, especially for large axial field of view scanners. All the core routines are implemented using parallel computing available from within the Python package NiftyPET, enabling easy access, manipulation and visualisation of data at any processing stage. The pipeline of the platform starts from MR and raw PET input data and is divided into the following processing stages: (1) list-mode data processing; (2) accurate attenuation coefficient map generation; (3) detector normalisation; (4) exact forward and back projection between sinogram and image space; (5) estimation of reduced-variance random events; (6) high accuracy fully 3D estimation of scatter events; (7) voxel-based partial volume correction; (8) region- and voxel-level image analysis. We demonstrate the advantages of this platform using an amyloid brain scan where all the processing is executed from a single and uniform computational environment in Python. The high accuracy acquisition modelling is achieved through span-1 (no axial compression) ray tracing for true, random and scatter events. Furthermore, the platform offers uncertainty estimation of any image derived statistic to facilitate robust tracking of subtle physiological changes in longitudinal studies. The platform also supports the development of new reconstruction and analysis algorithms through restricting the axial field of view to any set of rings covering a region of interest and thus performing fully 3D reconstruction and corrections using real data significantly faster. All the software is available as open source with the accompanying wiki-page and test data.
Popescu, Dan; Ichim, Loretta; Stoican, Florin
2017-02-23
Floods are the natural disasters that cause the most economic damage at the global level. Therefore, flood monitoring and damage estimation are very important for the population, authorities and insurance companies. The paper proposes an original solution to this problem, based on a hybrid network and complex image processing. As a first novelty, a multilevel system with two components, terrestrial and aerial, was proposed and designed by the authors as support for image acquisition over a delimited region. The terrestrial component contains a Ground Control Station, acting as a remote coordinator, which communicates via the internet with several Ground Data Terminals forming a fixed-node network for data acquisition and communication. The aerial component contains mobile nodes (fixed-wing UAVs). In order to evaluate flood damage, two tasks must be accomplished by the network: area coverage and image processing. The second novelty of the paper consists of texture analysis in a deep neural network, taking into account new criteria for feature selection and patch classification. Color and spatial information extracted from the chromatic co-occurrence matrix and the mass fractal dimension were used as well. Finally, the experimental results from a real mission demonstrate the validity of the proposed methodologies and the performance of the algorithms.
Advancements of labelled radio-pharmaceutics imaging with the PIM-MPGD
NASA Astrophysics Data System (ADS)
Donnard, J.; Arlicot, N.; Berny, R.; Carduner, H.; Leray, P.; Morteau, E.; Servagent, N.; Thers, D.
2009-11-01
Beta autoradiography is widely used in pharmacology and in biological fields to study the response of an organism to a certain kind of molecule. The image of the distribution is processed by studying the concentration of radioactivity in different organs. We report on the development of an integrated apparatus based on a PIM device (Parallel Ionization Multiplier) able to image 10 microscope slides at the same time over an area of 18 x 18 cm2. Thanks to a vacuum pump and a gas regulation circuit, 5 minutes are sufficient to begin an acquisition. All the electronics and the gas distribution are included in the structure, leading to a transportable device. Special software has been developed to process data in real time with image visualization. Biological samples can be labelled with low-energy β emitters like 3H/14C or Auger electrons of 125I/99mTc. The measured spatial resolution is 30 μm for 3H, and the trigger and charge rates are constant over more than 6 days of acquisition, showing good stability of the device. Moreover, a collaboration with doctors and biologists of INSERM (National Institute for Medical Research in France) has started in order to demonstrate that MPGDs can be readily used outside a physics laboratory.
Optical image acquisition system for colony analysis
NASA Astrophysics Data System (ADS)
Wang, Weixing; Jin, Wenbiao
2006-02-01
For the counting of both colonies and plaques there is a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, AMES testing, pharmaceuticals, paints, sterile fluids and fungal contamination. Recently, many researchers and developers have made efforts to develop this kind of system. Our investigation showed that some existing systems still have problems, since they are a relatively new class of product. One of the main problems is image acquisition. In order to acquire colony images with good quality, an illumination box was constructed that provides both front lighting and back lighting, which can be selected by the user based on the properties of the colony dishes. With the illumination box the lighting is uniform, and the colony dish can be placed in the same position every time, which makes image processing easy. A digital camera at the top of the box is connected to a PC with a USB cable, and all camera functions are controlled by the computer.
Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B
2017-06-01
To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Precision of computer vision systems for real-time inspection of contact wire wear in railways
NASA Astrophysics Data System (ADS)
Borromeo, Susana; Aparicio, Jose L.
2005-02-01
This paper studies techniques to improve the precision of systems for measuring contact wire wear in railways. The problem of wear measurement, characterized by important factors such as the sampling rate and the auscultation conditions, is studied in detail, and the different solutions for resolving it successfully are examined. Issues related to image acquisition and image processing are discussed, including the type of illumination and sensors employed, the image processing hardware, and the image processing algorithms. After analysing each factor that influences the precision of the measurement system, a set of solutions is proposed to optimize the conditions under which the inspection can be carried out.
Comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Sullivan, Malcolm N.; Chan, Kam Wai Clifford; Boyd, Robert W.
2010-11-15
We present a theoretical comparison of the signal-to-noise characteristics of quantum versus thermal ghost imaging. We first calculate the signal-to-noise ratio of each process in terms of its controllable experimental conditions. We show that a key distinction is that a thermal ghost image always resides on top of a large background; the fluctuations in this background constitute an intrinsic noise source for thermal ghost imaging. In contrast, there is a negligible intrinsic background to a quantum ghost image. However, for practical reasons involving achievable illumination levels, acquisition times for thermal ghost images are often much shorter than those for quantum ghost images. We provide quantitative predictions for the conditions under which each process provides superior performance. Our conclusion is that each process can provide useful functionality, although under complementary conditions.
Litwiller, Daniel V.; Saranathan, Manojkumar; Vasanawala, Shreyas S.
2017-01-01
Purpose To assess image quality and speed improvements for single-shot fast spin-echo (SSFSE) with variable refocusing flip angles and full-Fourier acquisition (vrfSSFSE) pelvic imaging via a prospective trial performed in the context of uterine leiomyoma evaluation. Materials and Methods Institutional review board approval and informed consent were obtained. vrfSSFSE and conventional SSFSE sagittal and coronal oblique acquisitions were performed in 54 consecutive female patients referred for 3-T magnetic resonance (MR) evaluation of known or suspected uterine leiomyomas. Two radiologists who were blinded to the image acquisition technique semiquantitatively scored images on a scale from −2 to 2 for noise, image contrast, sharpness, artifacts, and perceived ability to evaluate uterine, ovarian, and musculoskeletal structures. The null hypothesis of no significant difference between pulse sequences was assessed with a Wilcoxon signed rank test by using a Holm-Bonferroni correction for multiple comparisons. Results Because of reductions in specific absorption rate, vrfSSFSE imaging demonstrated significantly increased speed (more than twofold, P < .0001), with mean repetition times compared with conventional SSFSE imaging decreasing from 1358 to 613 msec for sagittal acquisitions and from 1494 to 621 msec for coronal oblique acquisitions. Almost all assessed image quality and perceived diagnostic capability parameters were significantly improved with vrfSSFSE imaging. These improvements included noise, sharpness, and ability to evaluate the junctional zone, myometrium, and musculoskeletal structures for both sagittal acquisitions (mean values of 0.56, 0.63, 0.42, 0.56, and 0.80, respectively; all P values < .0001) and coronal oblique acquisitions (mean values of 0.81, 1.09, 0.65, 0.93, and 1.12, respectively; all P values < .0001). For evaluation of artifacts, there was an insufficient number of cases with differences to allow statistical testing. Conclusion Compared with conventional SSFSE acquisition, vrfSSFSE acquisition increases 3-T imaging speed via reduced specific absorption rate and leads to significant improvements in perceived image quality and perceived diagnostic capability when evaluating pelvic structures. © RSNA, 2016 Online supplemental material is available for this article. PMID:27564132
Comparison of satellite reflectance algorithms for estimating ...
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop simple proxies for algal blooms and to facilitate portability between multispectral satellite imagers for regional algal bloom monitoring. Narrow band hyperspectral aircraft images were upscaled spectrally and spatially to simulate 5 current and near future satellite imaging systems. Established and new Chl-a algorithms were then applied to the synthetic satellite images and then compared to calibrated Chl-a water truth measurements collected from 44 sites within one hour of aircraft acquisition of the imagery. Masks based on the spatial resolution of the synthetic satellite imagery were then applied to eliminate mixed pixels including vegetated shorelines. Medium-resolution Landsat and finer resolution data were evaluated against 29 coincident water truth sites. Coarse-resolution MODIS and MERIS-like data were evaluated against 9 coincident water truth sites. Each synthetic satellite data set was then evaluated for the performance of a variety of spectrally appropriate algorithms with regard to the estimation of Chl-a concentrations against the water truth data set. The goal is to inform water resource decisions on the appropriate satellite data acquisition and processing for the es
Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.
Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos
2016-05-05
Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple; however, data handling and analysis are not as well developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to automate phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully, but they may require different ML algorithms for segmentation.
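A minimal sketch of the kNN pixel-classification step reported above, using scikit-learn; the feature layout (per-pixel channel values), the normalisation and the number of neighbours are assumptions for illustration, not the authors' exact configuration.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.preprocessing import StandardScaler

    def train_pixel_classifier(pixels, labels, k=5):
        """pixels: (n_samples, n_channels) RGB or NIR values; labels: e.g. plant vs background."""
        scaler = StandardScaler().fit(pixels)
        clf = KNeighborsClassifier(n_neighbors=k).fit(scaler.transform(pixels), labels)
        return scaler, clf

    def segment(image, scaler, clf):
        # Classify every pixel and return a 2-D label map.
        h, w, c = image.shape
        flat = scaler.transform(image.reshape(-1, c).astype(float))
        return clf.predict(flat).reshape(h, w)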
Fukao, Mari; Kawamoto, Kiyosumi; Matsuzawa, Hiroaki; Honda, Osamu; Iwaki, Takeshi; Doi, Tsukasa
2015-01-01
We aimed to optimize the exposure conditions for the acquisition of soft-tissue images using dual-energy subtraction chest radiography with a direct-conversion flat-panel detector system. Two separate chest images were acquired at high- and low-energy exposures with standard or thick chest phantoms. The high-energy exposure was fixed at 120 kVp with the use of an auto-exposure control technique. For the low-energy exposure, the tube voltages ranged from 40 to 80 kVp and the entrance surface doses from 20% to 100% of the dose required for the high-energy exposure. Further, a repetitive processing algorithm was used to reduce the image noise generated by the subtraction process. Seven radiology technicians ranked the soft-tissue images, and these results were analyzed using the normalized-rank method. Images acquired at 60 kVp were of acceptable quality regardless of the entrance surface dose and phantom size. Using the repetitive processing algorithm, the minimum acceptable doses were reduced from 75% to 40% for the standard phantom and to 50% for the thick phantom. We determined that the optimum low-energy exposure was 60 kVp at 50% of the dose required for the high-energy exposure. This allowed the simultaneous acquisition of standard radiographs and soft-tissue images at 1.5 times the dose required for a standard radiograph, which is significantly lower than the values reported previously.
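For context, the soft-tissue image in dual-energy subtraction is commonly formed as a weighted log subtraction of the high- and low-energy exposures, as in the sketch below; the weight w is an assumed tuning parameter chosen to suppress the bone signal, and this is a generic formulation rather than the study's exact algorithm.

    import numpy as np

    def soft_tissue_image(high_kvp, low_kvp, w=0.5, eps=1e-6):
        # Weighted log subtraction; w is tuned so that the bone signal cancels.
        return np.log(high_kvp + eps) - w * np.log(low_kvp + eps)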
Markl, Michael; Harloff, Andreas; Bley, Thorsten A; Zaitsev, Maxim; Jung, Bernd; Weigang, Ernst; Langer, Mathias; Hennig, Jürgen; Frydrychowicz, Alex
2007-04-01
To evaluate an improved image acquisition and data-processing strategy for assessing aortic vascular geometry and 3D blood flow at 3T. In a study with five normal volunteers and seven patients with known aortic pathology, prospectively ECG-gated cine three-dimensional (3D) MR velocity mapping with improved navigator gating, real-time adaptive k-space ordering and dynamic adjustment of the navigator acceptance criteria was performed. In addition to morphological information and three-directional blood flow velocities, phase-contrast (PC)-MRA images were derived from the same data set, which permitted 3D isosurface rendering of vascular boundaries in combination with visualization of blood-flow patterns. Analysis of navigator performance and image quality revealed improved scan efficiencies of 63.6%+/-10.5% and temporal resolution (<50 msec) compared to previous implementations. Semiquantitative evaluation of image quality by three independent observers demonstrated excellent general image appearance with moderate blurring and minor ghosting artifacts. Results from volunteer and patient examinations illustrate the potential of the improved image acquisition and data-processing strategy for identifying normal and pathological blood-flow characteristics. Navigator-gated time-resolved 3D MR velocity mapping at 3T in combination with advanced data processing is a powerful tool for performing detailed assessments of global and local blood-flow characteristics in the aorta to describe or exclude vascular alterations. Copyright (c) 2007 Wiley-Liss, Inc.
Learning the manifold of quality ultrasound acquisition.
El-Zehiry, Noha; Yan, Michelle; Good, Sara; Fang, Tong; Zhou, S Kevin; Grady, Leo
2013-01-01
Ultrasound acquisition is a challenging task that requires simultaneous adjustment of several acquisition parameters (the depth, the focus, the frequency and its operation mode). If the acquisition parameters are not properly chosen, the resulting image will have poor quality and will degrade the patient diagnosis and treatment workflow. Several hardware-based systems for autotuning the acquisition parameters have been proposed previously, but these solutions were largely abandoned because they failed to properly account for tissue inhomogeneity and other patient-specific characteristics. Consequently, in routine practice the clinician either uses population-based parameter presets or manually adjusts the acquisition parameters for each patient during the scan. In this paper, we revisit the problem of autotuning the acquisition parameters by taking a completely novel approach and producing a solution based on image analytics. Our solution is inspired by the autofocus capability of conventional digital cameras, but is significantly more challenging because the number of acquisition parameters is large and "good quality" images are more difficult to assess. Surprisingly, we show that the sets of acquisition parameters which produce images favored by clinicians comprise a 1D manifold, allowing for a real-time optimization to maximize image quality. We demonstrate our method for acquisition parameter autotuning on several live patients, showing that our system can start with a poor initial set of parameters and automatically optimize the parameters to produce high quality images.
GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy
NASA Astrophysics Data System (ADS)
Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.
2012-06-01
Phase microscopy techniques have regained interest because they allow observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy which permits 3D observations with a finer resolution than incoherent light microscopes. Specimens are imaged through a series of 2D holograms: their accumulation progressively fills the range of frequencies of the specimen in Fourier space. A 3D inverse FFT eventually provides a spatial image of the specimen. Consequently, acquisition and then reconstruction must both complete before an image is produced, which precludes real-time control of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed of no less than one minute per acquisition. Afterwards, a high-end PC reconstructs the 3D image in 20 seconds. We now aim for an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on the CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. Then we present a prototype dispatching some reconstruction tasks to the GPU in order to take advantage of SIMD parallelization for the FFT and higher bandwidth for filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to about 1-2 seconds depending on the GPU class. This opens opportunities for 4D imaging of living organisms or crystallization processes. We also consider the relevance of GPUs for 3D image interaction in our specific conditions.
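As an illustration of the producer-consumer preview idea described above, the following Python sketch runs an acquisition thread and a reconstruction thread that share a queue in CPU memory; the array sizes, timing, and the way holograms are mapped into Fourier space are placeholders, not the MIPS Laboratory implementation.

```python
# Hypothetical sketch of a producer-consumer preview scheme: an acquisition
# thread deposits simulated 2D holograms in a queue, while a reconstruction
# thread accumulates their spectra in Fourier space and periodically runs a
# 3D inverse FFT to refresh a preview image.
import threading, queue, time
import numpy as np

N = 64                      # placeholder volume size (real systems are larger)
holograms = queue.Queue(maxsize=16)

def acquire(n_frames=32):
    """Producer: simulate hologram capture (random data stands in for the camera)."""
    rng = np.random.default_rng(0)
    for k in range(n_frames):
        holograms.put((k, rng.standard_normal((N, N))))
        time.sleep(0.01)    # stand-in for camera exposure/readout time
    holograms.put(None)     # sentinel: acquisition finished

def reconstruct(preview_every=8):
    """Consumer: fill Fourier space and emit periodic 3D previews."""
    fourier_volume = np.zeros((N, N, N), dtype=complex)
    count = 0
    while True:
        item = holograms.get()
        if item is None:
            break
        k, frame = item
        # Placeholder mapping: each hologram spectrum fills one Fourier plane.
        fourier_volume[:, :, k % N] += np.fft.fft2(frame)
        count += 1
        if count % preview_every == 0:
            preview = np.abs(np.fft.ifftn(fourier_volume))
            print(f"preview after {count} holograms, max={preview.max():.2f}")

producer = threading.Thread(target=acquire)
consumer = threading.Thread(target=reconstruct)
producer.start(); consumer.start()
producer.join(); consumer.join()
```

In a GPU variant, the FFT and filtering calls inside the consumer would be dispatched to the device through a GPU FFT library, while the queue structure between acquisition and reconstruction stays the same.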
Normalized Temperature Contrast Processing in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast involves a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, tape, and test object are imaged simultaneously. Methods of assessing other quantitative parameters such as the emissivity of the object, afterglow heat flux, reflection temperature change, and surface temperature during flash thermography are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
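The abstract does not give the contrast formulas themselves; the sketch below illustrates one common convention for a normalized pixel-intensity contrast (a defect pixel referenced to a sound-material pixel, with the pre-flash level subtracted from both), purely as an assumption-laden example with synthetic data.

```python
# Minimal sketch of a normalized image-contrast calculation of the general kind
# discussed above. The exact normalization used in the paper is not reproduced
# here; this shows one common convention, applied to synthetic cool-down data.
import numpy as np

def normalized_contrast(frames, defect_px, sound_px, pre_flash_frame=0):
    """frames: (T, H, W) flash-thermography image sequence.
    defect_px, sound_px: (row, col) pixels over a defect and over sound material.
    Returns the contrast of the defect pixel relative to the sound pixel,
    with the pre-flash level subtracted from both (an assumed convention)."""
    d = frames[:, defect_px[0], defect_px[1]].astype(float)
    s = frames[:, sound_px[0], sound_px[1]].astype(float)
    d -= d[pre_flash_frame]           # remove pre-flash offset
    s -= s[pre_flash_frame]
    eps = 1e-12                       # avoid division by zero before the flash
    return (d - s) / (s + eps)

# Synthetic example: exponential-like cool-down, defect cooling more slowly.
t = np.arange(1, 101)
sound = 1000.0 * t ** -0.5
defect = 1000.0 * t ** -0.4
frames = np.zeros((101, 4, 4))
frames[1:, 1, 1] = defect             # "defect" pixel
frames[1:, 2, 2] = sound              # "sound" pixel
print(normalized_contrast(frames, (1, 1), (2, 2))[1:6])
```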
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas (STS) by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice (‘span’). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of the variations of PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 +/- 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 +/- 0.01. Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
Quantitative analysis of geomorphic processes using satellite image data at different scales
NASA Technical Reports Server (NTRS)
Williams, R. S., Jr.
1985-01-01
When aerial and satellite photographs and images are used in the quantitative analysis of geomorphic processes, either through direct observation of active processes or by analysis of landforms resulting from inferred active or dormant processes, a number of limitations in the use of such data must be considered. Active geomorphic processes work at different scales and rates. Therefore, the capability of imaging an active or dormant process depends primarily on the scale of the process and the spatial-resolution characteristic of the imaging system. Scale is an important factor in recording continuous and discontinuous active geomorphic processes, because what is not recorded will not be considered or even suspected in the analysis of orbital images. If the geomorphic process, or the landform change caused by the process, is less than 200 m in the x or y dimension, then it will not be recorded. Although the scale factor is critical in the recording of discontinuous active geomorphic processes, the repeat interval of orbital-image acquisition of a planetary surface is also a consideration in order to capture a recurring short-lived geomorphic process or to record changes caused by either a continuous or a discontinuous geomorphic process.
Quantitative evaluation of phase processing approaches in susceptibility weighted imaging
NASA Astrophysics Data System (ADS)
Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.
2012-03-01
Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility induced image contrast. Because of global susceptibility variations across the image, the rate of phase accumulation varies widely across the image resulting in phase wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminate this global phase variation. However, filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps but additional computational steps are required. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
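To make the homodyne-filtering step concrete, here is a minimal numpy sketch in which the complex image is divided by a low-pass-filtered copy of itself and the remaining phase is taken; the filter size and the synthetic test image are illustrative choices, not the settings evaluated in the study.

```python
# Hedged sketch of homodyne high-pass phase filtering: dividing the complex
# image by a low-pass-filtered copy cancels the slowly varying background
# phase, leaving only the local phase variations used for SWI contrast.
import numpy as np

def homodyne_highpass_phase(complex_image, k=32):
    """Return the high-pass-filtered phase of a complex MR image.
    k is the side length of the central k-space window kept by the low-pass."""
    ny, nx = complex_image.shape
    kspace = np.fft.fftshift(np.fft.fft2(complex_image))
    window = np.zeros_like(kspace)
    cy, cx = ny // 2, nx // 2
    window[cy - k // 2:cy + k // 2, cx - k // 2:cx + k // 2] = \
        kspace[cy - k // 2:cy + k // 2, cx - k // 2:cx + k // 2]
    lowpass = np.fft.ifft2(np.fft.ifftshift(window))
    return np.angle(complex_image / (lowpass + 1e-12))

# Synthetic example: smooth background phase ramp plus a small local feature.
ny, nx = 128, 128
y, x = np.mgrid[0:ny, 0:nx]
background = 2 * np.pi * (x / nx) * 3          # global phase ramp (wraps)
local = 0.5 * np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / 50.0)
img = np.exp(1j * (background + local))
hp_phase = homodyne_highpass_phase(img)
print("high-pass phase range:", hp_phase.min(), hp_phase.max())
```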
High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications
NASA Astrophysics Data System (ADS)
Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.
2012-07-01
The recording of high-resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment like handheld laser scanners. We present an image-based approach, where techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industry cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus an efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high-resolution point cloud with high accuracy for each shot. For the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-11-17
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that (1) the use of unprocessed image data did not improve the results of image analyses; (2) vignetting had a significant effect, especially for the modified camera; and (3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol, and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
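The exact correction chain used in the study is not reproduced here; the sketch below only illustrates, on synthetic data, the two generic operations mentioned: division by a normalized flat-field to remove vignetting, and scaling against an invariant target to normalize between acquisition dates.

```python
# Illustrative sketch (not the authors' processing chain) of two generic
# radiometric corrections: flat-field division for vignetting, and scaling
# against an invariant target to normalize between acquisition dates.
import numpy as np

def correct_vignetting(image, flat_field):
    """Divide by the flat-field image normalized to its own mean."""
    flat = flat_field.astype(float)
    return image.astype(float) / (flat / flat.mean())

def normalize_between_dates(image, invariant_mask, reference_level):
    """Scale the image so an invariant target (e.g. a calibration tarp)
    reaches the same mean level on every acquisition date."""
    return image * (reference_level / image[invariant_mask].mean())

# Synthetic example: radial vignetting applied to a flat scene.
y, x = np.mgrid[-100:100, -100:100]
vignette = 1.0 - 0.4 * (np.sqrt(x ** 2 + y ** 2) / 140.0) ** 2
observed = 120.0 * vignette
corrected = correct_vignetting(observed, flat_field=vignette)

# Normalize so that a fixed "invariant" patch reads the same on every date.
invariant = np.zeros_like(corrected, dtype=bool)
invariant[:10, :10] = True
normalized = normalize_between_dates(corrected, invariant, reference_level=100.0)
print("std before:", round(observed.std(), 2), "after:", round(corrected.std(), 2),
      "| invariant patch:", round(normalized[invariant].mean(), 2))
```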
NASA Astrophysics Data System (ADS)
Westphal, Volker
Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying the image acquisition. Furthermore, the fiber-optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow, bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, namely in ophthalmology, gastroenterology, dermatology and cardiology. First, this dissertation will demonstrate that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract. A careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Besides structural imaging, OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo. A sensitivity of 0.5 mm/s was achieved. Optical coherence microscopy (OCM) is a variation of OCT, improving the resolution even further to a few micrometers. Advances made in the OCT scan engine for the Doppler setup enabled real-time imaging in vivo with OCM. In order to generate geometrically correct images for all the previous applications in real time, extensive image processing algorithms were developed. Algorithms for correction of distortions due to non-telecentric scanning, nonlinear scan mirror movements, and refraction were developed and demonstrated. This has led to interesting new applications, for example in imaging of the anterior segment of the eye.
NASA Astrophysics Data System (ADS)
Hashimoto, Atsushi; Suehara, Ken-Ichiro; Kameoka, Takaharu
To measure the quantitative surface color information of agricultural products along with the ambient information during cultivation, a color calibration method for digital camera images and a remote monitoring system for color imaging using the Web were developed. Single-lens reflex and web digital cameras were used for the image acquisitions. Tomato images throughout the post-ripening process were taken by the digital camera both in the standard image acquisition system and under field conditions from morning to evening. Several kinds of images were acquired with the standard RGB color chart set up just behind the tomato fruit on a black matte, and a color calibration was carried out. The influence of the sunlight could be experimentally eliminated, and the calibrated color information consistently agreed with the standard values acquired in the system throughout the post-ripening process. Furthermore, the surface color change of the tomato on the tree in a greenhouse was remotely monitored during maturation using the digital cameras equipped with the Field Server. The acquired digital color images were sent from the Farm Station to the BIFE Laboratory of Mie University via VPN. The time behavior of the tomato surface color change during the maturing process could be measured using the color parameter calculated from the obtained and calibrated color images along with the ambient atmospheric record. This study is an important step both in developing surface color analysis for simple and rapid evaluation of crop vigor in the field and in constructing an ambient, networked remote monitoring system for food security, precision agriculture, and agricultural research.
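As a hedged illustration of chart-based color calibration of the kind described, the following sketch fits a linear (affine) transform from measured chart-patch RGB values to their reference values by least squares and applies it to an image; the patch values are synthetic and the method is a generic stand-in, not the authors' procedure.

```python
# Generic chart-based color calibration sketch: fit a 3x4 affine transform from
# camera RGB of chart patches to reference RGB, then apply it to image pixels.
import numpy as np

def fit_color_transform(measured_rgb, reference_rgb):
    """measured_rgb, reference_rgb: (n_patches, 3). Returns a (4, 3) matrix M
    such that [r g b 1] @ M approximates the reference color."""
    A = np.hstack([measured_rgb, np.ones((measured_rgb.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)
    return M

def apply_color_transform(image, M):
    """Apply the fitted transform to an (H, W, 3) image."""
    h, w, _ = image.shape
    A = np.hstack([image.reshape(-1, 3), np.ones((h * w, 1))])
    return (A @ M).reshape(h, w, 3)

# Synthetic example: a known gain/offset distortion recovered from 24 patches.
rng = np.random.default_rng(1)
reference = rng.uniform(0, 1, (24, 3))
measured = reference * [0.8, 1.1, 0.9] + [0.05, -0.02, 0.03]
M = fit_color_transform(measured, reference)
test_image = measured[:4].reshape(2, 2, 3)
residual = np.abs(apply_color_transform(test_image, M) - reference[:4].reshape(2, 2, 3))
print("max calibration residual:", residual.max())
```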
NASA Technical Reports Server (NTRS)
Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)
2011-01-01
A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
NASA Astrophysics Data System (ADS)
Tauro, Flavia; Grimaldi, Salvatore
2017-04-01
Recently, several efforts have been devoted to the design and development of innovative, and often unintended, approaches for the acquisition of hydrological data. Among such pioneering techniques, this presentation reports recent advancements towards the establishment of a novel, noninvasive, and potentially continuous methodology based on the acquisition and analysis of images for spatially distributed observations of the kinematics of surface waters. The approach aims at enabling rapid, affordable, and accurate surface flow monitoring of natural streams. Flow monitoring is an integral part of hydrological sciences and is essential for disaster risk reduction and the comprehension of natural phenomena. However, water processes are inherently complex to observe: they are characterized by multiscale and highly heterogeneous phenomena which have traditionally demanded sophisticated and costly measurement techniques. Challenges in the implementation of such techniques have also resulted in a lack of hydrological data during extreme events, in difficult-to-access environments, and at high temporal resolution. By combining low-cost yet high-resolution images and several velocimetry algorithms, noninvasive flow monitoring has been successfully conducted at highly heterogeneous scales, spanning from rills to highly turbulent streams and medium-scale rivers, with minimal supervision by external users. Noninvasive image data acquisition has also afforded observations in high-flow conditions. The latest developments towards continuous flow monitoring at the catchment scale have entailed the development of a remote gauge-cam station on the Tiber River and the integration of flow monitoring through image analysis with unmanned aerial systems (UASs) technology. The gauge-cam station and the UAS platform both afford noninvasive image acquisition and calibration through an innovative laser-based setup. Compared to traditional point-based instrumentation, images allow for generating surface flow velocity maps which fully describe the kinematics of the velocity field in natural streams. Also, continuous observations provide a close picture of the evolving dynamics of natural water bodies. Despite such promising achievements, dealing with images also involves coping with adverse illumination, massive data handling and storage, and data-intensive computing. Most importantly, establishing a novel observational technique requires estimation of the uncertainty associated with measurements and thorough comparison to existing benchmark approaches. In this presentation, we provide answers to some of these issues and perspectives for future research.
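One widely used building block behind such image-based velocimetry is cross-correlation of interrogation windows between consecutive frames; the sketch below shows that step only, with a synthetic tracer pattern and assumed frame rate and pixel-to-metre calibration values.

```python
# Generic PIV-style sketch: the shift of a surface pattern between two frames is
# estimated from the peak of their cross-correlation, computed via FFT.
import numpy as np

def window_displacement(win_a, win_b):
    """Estimate the integer-pixel shift of win_b relative to win_a."""
    fa = np.fft.fft2(win_a - win_a.mean())
    fb = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.fftshift(np.real(np.fft.ifft2(fa * np.conj(fb))))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return center - np.array(peak)   # (dy, dx) in pixels

# Synthetic example: a tracer pattern advected by 3 px/frame in x.
rng = np.random.default_rng(0)
frame1 = rng.standard_normal((64, 64))
frame2 = np.roll(frame1, shift=3, axis=1)
dy, dx = window_displacement(frame1, frame2)
frame_rate_hz, metres_per_px = 25.0, 0.01   # assumed calibration values
print("surface velocity approx.", dx * metres_per_px * frame_rate_hz, "m/s")
```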
[Lymphoscintigrams with anatomical landmarks obtained with vector graphics].
Rubini, Giuseppe; Antonica, Filippo; Renna, Maria Antonia; Ferrari, Cristina; Iuele, Francesca; Stabile Ianora, Antonio Amato; Losco, Matteo; Niccoli Asabella, Artor
2012-11-01
Nuclear medicine images are difficult to interpret because they do not include anatomical details. The aim of this study was to obtain lymphoscintigrams with anatomical landmarks that could be easily interpreted by General Physicians. Traditional lymphoscintigrams were processed with Adobe© Photoshop® CS6 and converted into vector images created by Illustrator®. The combination with a silhouette vector improved image interpretation, without resulting in longer radiation exposure or acquisition times.
ERIC Educational Resources Information Center
Blackman, Graham A.; Hall, Deborah A.
2011-01-01
Purpose: The intense sound generated during functional magnetic resonance imaging (fMRI) complicates studies of speech and hearing. This experiment evaluated the benefits of using active noise cancellation (ANC), which attenuates the level of the scanner sound at the participant's ear by up to 35 dB around the peak at 600 Hz. Method: Speech and…
A Software Platform for Post-Processing Waveform-Based NDE
NASA Technical Reports Server (NTRS)
Roth, Donald J.; Martin, Richard E.; Seebo, Jeff P.; Trinh, Long B.; Walker, James L.; Winfree, William P.
2007-01-01
Ultrasonic, microwave, and terahertz nondestructive evaluation imaging systems generally require the acquisition of waveforms at each scan point to form an image. For such systems, signal and image processing methods are commonly needed to extract information from the waves and improve resolution of, and highlight, defects in the image. Since some similarity exists for all waveform-based NDE methods, it would seem a common software platform containing multiple signal and image processing techniques to process the waveforms and images makes sense where multiple techniques, scientists, engineers, and organizations are involved. This presentation describes NASA Glenn Research Center's approach in developing a common software platform for processing waveform-based NDE signals and images. This platform is currently in use at NASA Glenn and at Lockheed Martin Michoud Assembly Facility for processing of pulsed terahertz and ultrasonic data. Highlights of the software operation will be given. A case study will be shown for use with terahertz data. The authors also request scientists and engineers who are interested in sharing customized signal and image processing algorithms to contribute to this effort by letting the authors code up and include these algorithms in future releases.
NASA Astrophysics Data System (ADS)
Tang, Jing; Rahmim, Arman; Lautamäki, Riikka; Lodge, Martin A.; Bengel, Frank M.; Tsui, Benjamin M. W.
2009-05-01
The purpose of this study is to optimize the dynamic Rb-82 cardiac PET acquisition and reconstruction protocols for maximum myocardial perfusion defect detection using realistic simulation data and task-based evaluation. Time activity curves (TACs) of different organs under both rest and stress conditions were extracted from dynamic Rb-82 PET images of five normal patients. Combined SimSET-GATE Monte Carlo simulation was used to generate nearly noise-free cardiac PET data from a time series of 3D NCAT phantoms with organ activities modeling different pre-scan delay times (PDTs) and total acquisition times (TATs). Poisson noise was added to the nearly noise-free projections and the OS-EM algorithm was applied to generate noisy reconstructed images. The channelized Hotelling observer (CHO) with 32 × 32 spatial templates corresponding to four octave-wide frequency channels was used to evaluate the images. The area under the ROC curve (AUC) was calculated from the CHO rating data as an index for image quality in terms of myocardial perfusion defect detection. The 0.5 cycle/cm Butterworth post-filtering on OS-EM (with 21 subsets) reconstructed images generates the highest AUC values, while iteration numbers 1 to 4 do not show different AUC values. The optimized PDTs for both rest and stress conditions are found to be close to the cross points of the left ventricular chamber and myocardium TACs, which may promote an individualized PDT for patient data processing and image reconstruction. Shortening the TATs by up to ~3 min from the clinically employed acquisition time does not affect the myocardial perfusion defect detection significantly for both rest and stress studies.
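For readers unfamiliar with the observer model, the following sketch implements a generic channelized Hotelling observer with simple octave-wide frequency channels and computes a nonparametric AUC on synthetic 32 × 32 images; the channel profiles, defect, and noise model are illustrative assumptions, not the study's templates or phantom data.

```python
# Hedged sketch of a channelized Hotelling observer (CHO): images are reduced to
# a few frequency-channel outputs, a Hotelling template is trained on those
# outputs, and AUC is computed from the resulting test statistics.
import numpy as np

def octave_channels(n=32, n_channels=4):
    """Octave-wide, rotationally symmetric frequency bands as spatial templates,
    returned as an (n*n, n_channels) matrix."""
    fy, fx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    rho = np.sqrt(fx ** 2 + fy ** 2)
    U = np.zeros((n * n, n_channels))
    for c in range(n_channels):
        lo, hi = 0.5 / 2 ** (c + 1), 0.5 / 2 ** c          # octave band edges
        band = ((rho >= lo) & (rho < hi)).astype(float)
        U[:, c] = np.real(np.fft.ifft2(band)).ravel()
    return U

def cho_statistics(signal_imgs, noise_imgs, U):
    """Channelize both classes, train the Hotelling template, return test statistics."""
    v1 = signal_imgs.reshape(len(signal_imgs), -1) @ U
    v0 = noise_imgs.reshape(len(noise_imgs), -1) @ U
    S = 0.5 * (np.cov(v1.T) + np.cov(v0.T))                # pooled channel covariance
    w = np.linalg.solve(S, v1.mean(0) - v0.mean(0))        # Hotelling template
    return v1 @ w, v0 @ w

def auc(t_signal, t_noise):
    """Nonparametric AUC (proportion of correctly ordered pairs)."""
    return float(np.mean(t_signal[:, None] > t_noise[None, :]))

# Synthetic example: a faint Gaussian "defect" in white noise.
rng = np.random.default_rng(0)
n, n_img = 32, 200
y, x = np.mgrid[0:n, 0:n]
defect = 1.0 * np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
signal_imgs = rng.standard_normal((n_img, n, n)) + defect
noise_imgs = rng.standard_normal((n_img, n, n))
ts, tn = cho_statistics(signal_imgs, noise_imgs, U=octave_channels(n))
print("AUC ~", round(auc(ts, tn), 3))
```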
Movement measurement of isolated skeletal muscle using imaging microscopy
NASA Astrophysics Data System (ADS)
Elias, David; Zepeda, Hugo; Leija, Lorenzo S.; Sossa, Humberto; de la Rosa, Jose I.
1997-05-01
An imaging-microscopy methodology to measure contraction movement in chemically stimulated crustacean skeletal muscle, whose movement speed is about 0.02 mm/s, is presented. For this, a CCD camera coupled to a microscope and a high-speed digital image acquisition system, which together allow the capture of 960 images per second, are used. The images are digitally processed on a PC and displayed on a video monitor. A maximal field of 0.198 × 0.198 mm² and a spatial resolution of 3.5 micrometers are obtained.
Image reconstruction: an overview for clinicians.
Hansen, Michael S; Kellman, Peter
2015-03-01
Image reconstruction plays a critical role in the clinical use of magnetic resonance imaging (MRI). The MRI raw data is not acquired in image space and the role of the image reconstruction process is to transform the acquired raw data into images that can be interpreted clinically. This process involves multiple signal processing steps that each have an impact on the image quality. This review explains the basic terminology used for describing and quantifying image quality in terms of signal-to-noise ratio and point spread function. In this context, several commonly used image reconstruction components are discussed. The image reconstruction components covered include noise prewhitening for phased array data acquisition, interpolation needed to reconstruct square pixels, raw data filtering for reducing Gibbs ringing artifacts, Fourier transforms connecting the raw data with image space, and phased array coil combination. The treatment of phased array coils includes a general explanation of parallel imaging as a coil combination technique. The review is aimed at readers with no signal processing experience and should enable them to understand what role basic image reconstruction steps play in the formation of clinical images and how the resulting image quality is described. © 2014 Wiley Periodicals, Inc.
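As a minimal illustration of two of the steps covered (the Fourier transform connecting raw data with image space, and root-sum-of-squares coil combination), here is a hedged numpy sketch on a synthetic multi-coil phantom; it is not drawn from the review itself.

```python
# Illustrative sketch of basic MRI reconstruction steps: a centered inverse FFT
# from k-space to image space per coil, then root-sum-of-squares combination.
import numpy as np

def ifft2c(kspace):
    """Centered 2D inverse FFT for one coil's k-space data."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))

def rss_combine(coil_images):
    """Root-sum-of-squares combination over the coil dimension (axis 0)."""
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))

# Synthetic example: a square phantom seen by 4 coils with different sensitivities.
ny, nx, n_coils = 128, 128, 4
phantom = np.zeros((ny, nx)); phantom[48:80, 48:80] = 1.0
rng = np.random.default_rng(0)
y, x = np.mgrid[0:ny, 0:nx]
images = []
for c in range(n_coils):
    sens = np.exp(-((x - 32 * c) ** 2 + (y - 64) ** 2) / (2 * 60.0 ** 2))
    kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom * sens)))
    kspace += 0.01 * (rng.standard_normal(kspace.shape)
                      + 1j * rng.standard_normal(kspace.shape))
    images.append(ifft2c(kspace))
combined = rss_combine(np.array(images))
print("peak combined intensity:", round(float(combined.max()), 3))
```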
COLLABORATIVE RESEARCH AND DEVELOPMENT (CR&D) III, Task Order 0090: Image Processing Framework: From... (AFRL-RX-WP-TR-2013-0210)
2013-05-01
... contract or a PhD dissertation typically are a "proof-of-concept" code base that can only read a single set of inputs and are not designed ...
Image acquisition system for traffic monitoring applications
NASA Astrophysics Data System (ADS)
Auty, Glen; Corke, Peter I.; Dunn, Paul; Jensen, Murray; Macintyre, Ian B.; Mills, Dennis C.; Nguyen, Hao; Simons, Ben
1995-03-01
An imaging system for monitoring traffic on multilane highways is discussed. The system, named Safe-T-Cam, is capable of operating 24 hours per day in all but extreme weather conditions and can capture still images of vehicles traveling at up to 160 km/h. Systems operating at different remote locations are networked to allow transmission of images and data to a control center. A remote-site facility comprises a vehicle detection and classification module (VCDM), an image acquisition module (IAM) and a license plate recognition module (LPRM). The remote site is connected to the central site by an ISDN communications network. The remote-site system is discussed in this paper. The VCDM consists of a video camera, a specialized exposure control unit to maintain consistent image characteristics, and a 'real-time' image processing system that processes 50 images per second. The VCDM can detect and classify vehicles (e.g., distinguishing cars from trucks). The vehicle class is used to determine what data should be recorded. The VCDM uses a vehicle tracking technique to allow optimum triggering of the high-resolution camera of the IAM. The IAM camera combines the features necessary to operate consistently in the harsh environment encountered when imaging a vehicle 'head-on' in both day and night conditions. The image clarity obtained is ideally suited for automatic location and recognition of the vehicle license plate. This paper discusses the camera geometry, sensor characteristics and the image processing methods which permit consistent vehicle segmentation from a cluttered background, allowing object-oriented pattern recognition to be used for vehicle classification. The capture of high-resolution images and the image characteristics required for the LPRM's automatic reading of vehicle license plates are also discussed. The results of field tests presented demonstrate that the vision-based Safe-T-Cam system, currently installed on open highways, is capable of automatically classifying vehicles and recording vehicle number plates with a success rate of around 90 percent over a 24-hour period.
GPU-accelerated regularized iterative reconstruction for few-view cone beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matenine, Dmitri, E-mail: dmitri.matenine.1@ulaval.ca; Goussard, Yves, E-mail: yves.goussard@polymtl.ca; Després, Philippe, E-mail: philippe.despres@phy.ulaval.ca
2015-04-15
Purpose: The present work proposes an iterative reconstruction technique designed for x-ray transmission computed tomography (CT). The main objective is to provide a model-based solution to the cone-beam CT reconstruction problem, yielding accurate low-dose images via few-view acquisitions in clinically acceptable time frames. Methods: The proposed technique combines a modified ordered subsets convex (OSC) algorithm and the total variation minimization (TV) regularization technique and is called OSC-TV. The number of subsets of each OSC iteration follows a reduction pattern in order to ensure the best performance of the regularization method. Considering the high computational cost of the algorithm, it is implemented on a graphics processing unit, using parallelization to accelerate computations. Results: The reconstructions were performed on computer-simulated as well as human pelvic cone-beam CT projection data and image quality was assessed. In terms of convergence and image quality, OSC-TV performs well in reconstruction of low-dose cone-beam CT data obtained via a few-view acquisition protocol. It compares favorably to the few-view TV-regularized projections onto convex sets (POCS-TV) algorithm. It also appears to be a viable alternative to full-dataset filtered backprojection. Execution times are 1–2 min and are compatible with the typical clinical workflow for nonreal-time applications. Conclusions: Considering the image quality and execution times, this method may be useful for reconstruction of low-dose clinical acquisitions. It may be of particular benefit to patients who undergo multiple acquisitions by reducing the overall imaging radiation dose and associated risks.
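The OSC data-fit update is not reproduced here; the sketch below shows only a generic smoothed-TV gradient descent step of the kind that such schemes interleave with the tomographic updates, applied to a noisy 2D phantom as an assumption-laden illustration.

```python
# Hedged sketch of the total-variation (TV) regularization component only: a few
# explicit descent steps on a smoothed isotropic TV term, applied to a noisy
# piecewise-constant phantom. Step sizes and weights are arbitrary choices.
import numpy as np

def tv_gradient(img, eps=1e-3):
    """Gradient of the (smoothed, isotropic) total variation of a 2D image."""
    dx = np.diff(img, axis=1, append=img[:, -1:])
    dy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
    px, py = dx / mag, dy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div                      # derivative of sum |grad(img)| w.r.t. img

def tv_regularization_steps(img, n_steps=20, step=0.1, weight=0.2):
    """A few explicit TV descent steps (the regularization half of such schemes)."""
    x = img.copy()
    for _ in range(n_steps):
        x -= step * weight * tv_gradient(x)
    return x

# Synthetic example: piecewise-constant phantom plus noise.
rng = np.random.default_rng(0)
phantom = np.zeros((128, 128)); phantom[32:96, 32:96] = 1.0
noisy = phantom + 0.2 * rng.standard_normal(phantom.shape)
smoothed = tv_regularization_steps(noisy)
print("RMSE before:", round(float(np.sqrt(np.mean((noisy - phantom) ** 2))), 3),
      "after:", round(float(np.sqrt(np.mean((smoothed - phantom) ** 2))), 3))
```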
A Method to Recognize Anatomical Site and Image Acquisition View in X-ray Images.
Chang, Xiao; Mazur, Thomas; Li, H Harold; Yang, Deshan
2017-12-01
A method was developed to automatically recognize the anatomical site and image acquisition view in 2D X-ray images that are used in image-guided radiation therapy. The purpose is to enable site- and view-dependent automation and optimization in image processing tasks including 2D-2D image registration, 2D image contrast enhancement, and independent treatment site confirmation. The X-ray images of 180 patients across six disease sites (brain, head-neck, breast, lung, abdomen, and pelvis) were included in this study, with 30 patients per site and two images of orthogonal views per patient. A hierarchical multiclass recognition model was developed to recognize the general site first and then the specific site. Each node of the hierarchical model recognized the images using a feature extraction step based on principal component analysis followed by a binary classification step based on a support vector machine. Given two images in known orthogonal views, the site recognition model achieved a 99% average F1 score across the six sites. If the views were unknown in the images, the average F1 score was 97%. If only one image was taken, either with or without view information, the average F1 score was 94%. The accuracy of the site-specific view recognition models was 100%.
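A minimal sketch of the per-node recognition step (PCA feature extraction followed by a binary SVM) is given below using scikit-learn on synthetic two-class "images"; the data, component count, and kernel are assumptions for illustration, not the study's configuration.

```python
# Hedged sketch of one node of a hierarchical recognizer: PCA features feeding a
# binary support vector machine, trained and evaluated on synthetic 32x32 images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# Two synthetic classes of 32x32 "radiographs" differing in a coarse pattern.
rng = np.random.default_rng(0)
n_per_class, n = 60, 32
y_grid, x_grid = np.mgrid[0:n, 0:n]
pattern_a = np.exp(-((x_grid - 10) ** 2 + (y_grid - 10) ** 2) / 40.0)
pattern_b = np.exp(-((x_grid - 22) ** 2 + (y_grid - 22) ** 2) / 40.0)
X = np.vstack([
    (pattern_a + 0.5 * rng.standard_normal((n_per_class, n, n))).reshape(n_per_class, -1),
    (pattern_b + 0.5 * rng.standard_normal((n_per_class, n, n))).reshape(n_per_class, -1),
])
y = np.array([0] * n_per_class + [1] * n_per_class)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
node = make_pipeline(PCA(n_components=10), SVC(kernel="linear"))  # one node of the hierarchy
node.fit(X_train, y_train)
print("F1 score:", round(f1_score(y_test, node.predict(X_test)), 3))
```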
Zheng, Xiaoming
2017-12-01
The purpose of this work was to examine the effects of relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliAmpere second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used in guiding clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.
A Pipeline Tool for CCD Image Processing
NASA Astrophysics Data System (ADS)
Bell, Jon F.; Young, Peter J.; Roberts, William H.; Sebo, Kim M.
MSSSO is part of a collaboration developing a wide field imaging CCD mosaic (WFI). As part of this project, we have developed a GUI based pipeline tool that is an integrated part of MSSSO's CICADA data acquisition environment and processes CCD FITS images as they are acquired. The tool is also designed to run as a stand alone program to process previously acquired data. IRAF tasks are used as the central engine, including the new NOAO mscred package for processing multi-extension FITS files. The STScI OPUS pipeline environment may be used to manage data and process scheduling. The Motif GUI was developed using SUN Visual Workshop. C++ classes were written to facilitate launching of IRAF and OPUS tasks. While this first version implements calibration processing up to and including flat field corrections, there is scope to extend it to other processing.
New Approach to Image Aerogels by Scanning Electron Microscopy
NASA Astrophysics Data System (ADS)
Solá, Francisco; Hurwitz, Frances; Yang, Jijing
2011-03-01
A new scanning electron microscopy (SEM) technique to image poorly electrically conductive aerogels is presented. The process can be performed by non-expert SEM users. We show that negative charging effects on aerogels can be minimized significantly by inserting dry nitrogen gas close to the region of interest. The process involves the local recombination of accumulated negative charges with positive ions generated from ionization processes. This new technique made possible the acquisition of images of aerogels with pores down to approximately 3 nm in diameter using a positively biased Everhart-Thornley (E-T) detector. Well-founded concepts based on known models will also be presented with the aim of explaining the results qualitatively.
Fringe image processing based on structured light series
NASA Astrophysics Data System (ADS)
Gai, Shaoyan; Da, Feipeng; Li, Hongyan
2009-11-01
The code analysis of the fringe image plays a vital role in the data acquisition of structured light systems, affecting the precision, computational speed and reliability of the measurement processing. Based on the self-normalizing characteristic, a fringe image processing method using structured light is proposed. In this method, a series of projected patterns is used when detecting the fringe order of the image pixels. The structured light system geometry is presented, which consists of a white light projector and a digital camera; the former projects sinusoidal fringe patterns upon the object, and the latter acquires the fringe patterns that are deformed by the object's shape. Then binary images with distinct white and black stripes can be obtained, and the ability to resist image noise is improved greatly. The proposed method can be implemented easily and applied for profile measurement based on a special binary code in a wide field.
An efficient approach to integrated MeV ion imaging.
Nikbakht, T; Kakuee, O; Solé, V A; Vosuoghi, Y; Lamehi-Rachti, M
2018-03-01
An ionoluminescence (IL) spectral imaging system, besides the common MeV ion imaging facilities such as µ-PIXE and µ-RBS, is implemented at the Van de Graaff laboratory of Tehran. A versatile processing software is required to handle the large amount of data concurrently collected in µ-IL and common MeV ion imaging measurements through the respective methodologies. The open-source freeware PyMca, with image processing and multivariate analysis capabilities, is employed to simultaneously process common MeV ion imaging and µ-IL data. Herein, the program was adapted to support the OM_DAQ listmode data format. The appropriate performance of the µ-IL data acquisition system is confirmed through a case study. Moreover, the capabilities of the software for simultaneous analysis of µ-PIXE and µ-RBS experimental data are presented. Copyright © 2017 Elsevier B.V. All rights reserved.
Recognition of Roasted Coffee Bean Levels using Image Processing and Neural Network
NASA Astrophysics Data System (ADS)
Nasution, T. H.; Andayani, U.
2017-03-01
Coffee beans at different roast levels have distinct characteristics; however, some people cannot recognize the roast level. In this research, we propose a method to recognize the roast level of coffee beans from digital images by processing the images and classifying them with a backpropagation neural network. The steps consist of collecting the image data through image acquisition, pre-processing, feature extraction using the Gray Level Co-occurrence Matrix (GLCM) method, and finally normalization of the extracted features using decimal scaling. The decimal-scaled feature values become the input to the backpropagation neural network classifier. We use the backpropagation method to recognize the coffee bean roast levels. The results show that the proposed method is able to identify the coffee bean roast level with an accuracy of 97.5%.
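As a hedged illustration of the feature-extraction and normalization steps named above, the sketch below computes GLCM texture properties with scikit-image and applies decimal scaling to the resulting feature vector; the image patch is synthetic and the backpropagation classifier itself is omitted.

```python
# GLCM texture features plus decimal-scaling normalization on a synthetic patch.
# (scikit-image >= 0.19 is assumed for the graycomatrix/graycoprops names.)
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(gray_image, distances=(1,), angles=(0, np.pi / 2)):
    """Contrast, homogeneity, energy and correlation averaged over offsets."""
    glcm = graycomatrix(gray_image, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p).mean() for p in props])

def decimal_scaling(features):
    """Normalize each feature by the smallest power of 10 that brings |x| below 1."""
    j = np.ceil(np.log10(np.maximum(np.abs(features), 1e-12)))
    return features / 10 ** np.maximum(j, 0)

# Synthetic "roasted bean" patch: dark base with speckle texture.
rng = np.random.default_rng(0)
patch = np.clip(80 + 40 * rng.standard_normal((64, 64)), 0, 255).astype(np.uint8)
features = glcm_features(patch)
print("raw features:", features.round(3))
print("decimal-scaled:", decimal_scaling(features).round(3))
```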
Globe Browsing: Contextualized Spatio-Temporal Planetary Surface Visualization.
Bladin, Karl; Axelsson, Emil; Broberg, Erik; Emmart, Carter; Ljung, Patric; Bock, Alexander; Ynnerman, Anders
2017-08-29
Results of planetary mapping are often shared openly for use in scientific research and mission planning. In its raw format, however, the data is not accessible to non-experts due to the difficulty in grasping the context and the intricate acquisition process. We present work on tailoring and integration of multiple data processing and visualization methods to interactively contextualize geospatial surface data of celestial bodies for use in science communication. As our approach handles dynamic data sources, streamed from online repositories, we are significantly shortening the time between discovery and dissemination of data and results. We describe the image acquisition pipeline, the pre-processing steps to derive a 2.5D terrain, and a chunked level-of-detail, out-of-core rendering approach to enable interactive exploration of global maps and high-resolution digital terrain models. The results are demonstrated for three different celestial bodies. The first case addresses high-resolution map data on the surface of Mars. A second case is showing dynamic processes, such as concurrent weather conditions on Earth that require temporal datasets. As a final example we use data from the New Horizons spacecraft which acquired images during a single flyby of Pluto. We visualize the acquisition process as well as the resulting surface data. Our work has been implemented in the OpenSpace software [8], which enables interactive presentations in a range of environments such as immersive dome theaters, interactive touch tables, and virtual reality headsets.
ERIC Educational Resources Information Center
Makita, Kai; Yamazaki, Mika; Tanabe, Hiroki C.; Koike, Takahiko; Kochiyama, Takanori; Yokokawa, Hirokazu; Yoshida, Haruyo; Sadato, Norihiro
2013-01-01
Psychological research suggests that foreign-language vocabulary acquisition recruits the phonological loop for verbal working memory. To depict the neural underpinnings and shed light on the process of foreign language learning, we conducted functional magnetic resonance imaging of Japanese participants without previous exposure to the Uzbek…
(abstract) A High Throughput 3-D Inner Product Processor
NASA Technical Reports Server (NTRS)
Daud, Tuan
1996-01-01
A particularly challenging image processing application is real-time scene acquisition and object discrimination. It requires spatio-temporal recognition of point and resolved objects at high speeds with parallel processing algorithms. Neural network paradigms provide fine-grain parallelism and, when implemented in hardware, offer orders-of-magnitude speed-up. However, neural networks implemented on a VLSI chip are planar architectures capable of efficient processing of linear vector signals rather than 2-D images. Therefore, for processing of images, a 3-D stack of neural-net ICs receiving planar inputs and consuming minimal power is required. Details of the circuits and chip architectures will be described, along with the need to develop ultralow-power electronics. Further, use of the architecture in a system for high-speed processing will be illustrated.
Research on the underwater target imaging based on the streak tube laser lidar
NASA Astrophysics Data System (ADS)
Cui, Zihao; Tian, Zhaoshuo; Zhang, Yanchao; Bi, Zongjie; Yang, Gang; Gu, Erdan
2018-03-01
A high-frame-rate streak tube imaging lidar (STIL) for real-time 3D imaging of underwater targets is presented in this paper. The system uses a 532 nm pulsed laser as the light source, with a maximum repetition rate of 120 Hz and a pulse width of 8 ns. The LabVIEW platform is used in the system; the system control, synchronous image acquisition, and 3D data processing and display are realized through a PC. A 3D imaging experiment on underwater targets is carried out in a flume with an attenuation coefficient of 0.2, and images of targets at different depths and of different materials are obtained; the imaging frame rate is 100 Hz, and the maximum detection depth is 31 m. For an underwater target at a distance of 22 m, real-time acquisition of high-resolution 3D images is realized with a range resolution of 1 cm and a spatial resolution of 0.3 cm, and the spatial relationship of the targets can be clearly identified from the image. The experimental results show that STIL has good application prospects in underwater terrain detection, underwater search and rescue, and other fields.
The sky is the limit: reconstructing physical geography fieldwork from an aerial perspective
NASA Astrophysics Data System (ADS)
Williams, R.; Tooth, S.; Gibson, M.; Barrett, B.
2017-12-01
In an era of rapid geographical data acquisition, interpretations of remote sensing products (e.g. aerial photographs, satellite images, digital elevation models) are an integral part of many undergraduate geography degree schemes but there are fewer opportunities for collection and processing of primary remote sensing data. Unmanned aerial vehicles (UAVs) provide a relatively cheap opportunity to introduce the principles and practice of airborne remote sensing into fieldcourses, enabling students to learn about image acquisition, data processing and interpretation of derived products. Three case studies illustrate how a low cost DJI Phantom UAV can be used by students to acquire images that can be processed using off the shelf Structure-from-Motion photogrammetry software. Two case studies are drawn from an international fieldcourse that takes students to field sites that are the focus of current funded research whilst a third case study is from a course in topographic mapping. Results from a student questionnaire and analysis of assessed student reports showed that using UAVs in fieldwork enhanced student engagement with themes on their fieldcourse and equipped them with data processing skills. The derivation of bespoke orthophotos and Digital Elevation Models also provided students with opportunities to gain insight into the various data quality issues that are associated with aerial imagery acquisition and topographic reconstruction, although additional training is required to maximise this potential. Recognition of the successes and limitations of this teaching intervention provides scope for improving exercises that use UAVs and other technologies in future fieldcourses. UAVs are enabling both a reconstruction of how we measure the Earth's surface and a reconstruction of how students do fieldwork.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds its application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras, etc. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time and aperture for low-light image capture results in noise amplification, motion blur and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposure images, image fusion, artifact removal and saturation detection. The algorithm does not need more than a single JPEG macroblock to be kept in memory, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
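The paper's JPEG-domain, single-pass implementation is not reproduced here; the following pixel-domain sketch only illustrates the two ideas of sigmoidal boosting of the short exposure and well-exposedness-weighted fusion, with synthetic image data and made-up curve parameters.

```python
# Hedged pixel-domain sketch of exposure fusion: boost the short exposure with
# an S-shaped tone curve, then average the exposures with weights that favor
# well-exposed (mid-gray) pixels.
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.35):
    """Brighten a short-exposure image with an S-shaped tone curve."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse_exposures(images, sigma=0.25):
    """Weight each pixel by how close it is to mid-gray, then average."""
    weights = [np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-6 for im in images]
    total = np.sum(weights, axis=0)
    return np.sum([w * im for w, im in zip(weights, images)], axis=0) / total

# Synthetic example: the same scene at short and long exposure (values in [0, 1]).
rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, (64, 64))
short = np.clip(scene * 0.3 + 0.02 * rng.standard_normal(scene.shape), 0, 1)   # dark, sharp
long_ = np.clip(scene * 1.6, 0, 1)                                             # bright, clipped
fused = fuse_exposures([sigmoid_boost(short), long_])
print("clipped pixels: long =", int((long_ >= 1).sum()), ", fused =", int((fused >= 1).sum()))
```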
A Bayesian Model for Highly Accelerated Phase-Contrast MRI
Rich, Adam; Potter, Lee C.; Jin, Ning; Ash, Joshua; Simonetti, Orlando P.; Ahmad, Rizwan
2015-01-01
Purpose Phase-contrast magnetic resonance imaging (PC-MRI) is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to 4D flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to PC-MRI. Theory and Methods ReVEAL models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. Results ReVEAL is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R ≤ 10. For SV, Pearson r ≥ 0.996 for phantom imaging (n = 24) and r ≥ 0.956 for prospectively accelerated in vivo imaging (n = 10) for R ≤ 10. Conclusion ReVEAL enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to 4D flow imaging, where higher acceleration may be possible due to additional redundancy. PMID:26444911
Stable image acquisition for mobile image processing applications
NASA Astrophysics Data System (ADS)
Henning, Kai-Fabian; Fritze, Alexander; Gillich, Eugen; Mönks, Uwe; Lohweg, Volker
2015-02-01
Today, mobile devices (smartphones, tablets, etc.) are widespread and of high importance for their users. Their performance as well as versatility increases over time. This leads to the opportunity to use such devices for more specific tasks like image processing in an industrial context. For the analysis of images requirements like image quality (blur, illumination, etc.) as well as a defined relative position of the object to be inspected are crucial. Since mobile devices are handheld and used in constantly changing environments the challenge is to fulfill these requirements. We present an approach to overcome the obstacles and stabilize the image capturing process such that image analysis becomes significantly improved on mobile devices. Therefore, image processing methods are combined with sensor fusion concepts. The approach consists of three main parts. First, pose estimation methods are used to guide a user moving the device to a defined position. Second, the sensors data and the pose information are combined for relative motion estimation. Finally, the image capturing process is automated. It is triggered depending on the alignment of the device and the object as well as the image quality that can be achieved under consideration of motion and environmental effects.
Smartphone based hemispherical photography for canopy structure measurement
NASA Astrophysics Data System (ADS)
Wan, Xuefen; Cui, Jian; Jiang, Xueqin; Zhang, Jingwen; Yang, Yi; Zheng, Tao
2018-01-01
The canopy is the most direct and active interface layer of the interaction between plants and the environment, and has an important influence on energy exchange, biodiversity, ecosystem matter and climate change. The measurement of the canopy structure of plants is an important foundation for analyzing the pattern, process and operation mechanism of forest ecosystems. Through the study of plant canopy structure, solar radiation, ambient wind speed, air temperature and humidity, soil evaporation, soil temperature and other forest environmental climate characteristics can be evaluated. Because of its accuracy and effectiveness, canopy structure measurement based on hemispherical photography has been widely studied. However, the traditional method of hemispherical photography for canopy structure is based on an SLR camera and fisheye lens. This method is expensive and difficult to use in some low-cost settings. In recent years, smartphone technology has been developing rapidly. The smartphone not only has excellent image acquisition ability, but also considerable computational processing ability. In addition, the gyroscope and positioning functions of the smartphone also help in measuring the structure of the canopy. In this paper, we present a smartphone-based hemispherical photography system. The system consists of a smartphone, a low-cost fisheye lens and a PMMA adapter. We designed an Android-based App to acquire canopy hemisphere images through the low-cost fisheye lens and provide horizontal collimation information. In addition, after acquisition of a canopy-structure hemisphere image, the App adds an acquisition-location tag, obtained by GPS and an auxiliary positioning method, to the hemisphere image information. The system was tested in an urban forest after it was completed. The test results show that the smartphone-based hemispherical photography system can effectively collect high-resolution canopy structure images of plants.
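One common way to turn such a hemispherical image into a canopy-structure quantity is to threshold it into sky and canopy pixels within the fisheye circle and compute the gap fraction; the sketch below shows that generic step on synthetic data and is not necessarily the analysis performed by the system described above.

```python
# Generic gap-fraction estimate from a hemispherical canopy image: threshold
# into sky/canopy inside the fisheye circle and take the sky share of the circle.
import numpy as np

def gap_fraction(gray_image, centre, radius, threshold=128):
    """Fraction of sky pixels (above threshold) within the fisheye image circle."""
    y, x = np.ogrid[:gray_image.shape[0], :gray_image.shape[1]]
    inside = (x - centre[0]) ** 2 + (y - centre[1]) ** 2 <= radius ** 2
    sky = gray_image > threshold
    return (sky & inside).sum() / inside.sum()

# Synthetic example: bright sky disc crossed by dark "branches".
img = np.full((400, 400), 200, dtype=np.uint8)
img[::15, :] = 30                      # horizontal dark stripes as mock canopy
print("gap fraction ~", round(gap_fraction(img, centre=(200, 200), radius=180), 3))
```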
Johnson, Heath E; Haugh, Jason M
2013-12-02
This unit focuses on the use of total internal reflection fluorescence (TIRF) microscopy and image analysis methods to study the dynamics of signal transduction mediated by class I phosphoinositide 3-kinases (PI3Ks) in mammalian cells. The first four protocols cover live-cell imaging experiments, image acquisition parameters, and basic image processing and segmentation. These methods are generally applicable to live-cell TIRF experiments. The remaining protocols outline more advanced image analysis methods, which were developed in our laboratory for the purpose of characterizing the spatiotemporal dynamics of PI3K signaling. These methods may be extended to analyze other cellular processes monitored using fluorescent biosensors. Copyright © 2013 John Wiley & Sons, Inc.
Sentinel 2 global reference image
NASA Astrophysics Data System (ADS)
Dechoz, C.; Poulain, V.; Massera, S.; Languille, F.; Greslou, D.; de Lussy, F.; Gaudel, A.; L'Helguen, C.; Picard, C.; Trémas, T.
2015-10-01
Sentinel-2 is a multispectral, high-resolution, optical imaging mission, developed by the European Space Agency (ESA) in the frame of the Copernicus program of the European Commission. In cooperation with ESA, the Centre National d'Etudes Spatiales (CNES) is responsible for the image quality of the project, and will ensure the CAL/VAL commissioning phase. The Sentinel-2 mission is devoted to the operational monitoring of land and coastal areas, and will provide continuity of SPOT- and Landsat-type data. Sentinel-2 will also deliver information for emergency services. With launches in 2015 and 2016, there will be a constellation of two satellites on a polar sun-synchronous orbit, systematically imaging terrestrial surfaces with a revisit time of 5 days, in 13 spectral bands in the visible and shortwave infrared. Therefore, multi-temporal series of images, taken under the same viewing conditions, will be available. To ensure the multi-temporal registration of the products, specified to be better than 0.3 pixels at 2σ, a Global Reference Image (GRI) will be produced during the CAL/VAL period. This GRI is composed of a set of Sentinel-2 acquisitions whose geometry has been corrected by bundle block adjustment. During L1B processing, ground control points will be taken between this reference image and the Sentinel-2 acquisition being processed, and the geometric model of the image will be corrected, so as to ensure good multi-temporal registration. This paper first details the production of the reference during the CAL/VAL period, and then details the qualification and geolocation performance assessment of the GRI. It finally presents its use in the Level-1 processing chain and gives a first assessment of the multi-temporal registration.
Lundell, Henrik; Alexander, Daniel C; Dyrby, Tim B
2014-08-01
Stimulated echo acquisition mode (STEAM) diffusion MRI can be advantageous over pulsed-gradient spin-echo (PGSE) for diffusion times that are long compared with T2 . It therefore has potential for biomedical diffusion imaging applications at 7T and above where T2 is short. However, gradient pulses other than the diffusion gradients in the STEAM sequence contribute much greater diffusion weighting than in PGSE and lead to a disrupted experimental design. Here, we introduce a simple compensation to the STEAM acquisition that avoids the orientational bias and disrupted experiment design that these gradient pulses can otherwise produce. The compensation is simple to implement by adjusting the gradient vectors in the diffusion pulses of the STEAM sequence, so that the net effective gradient vector including contributions from diffusion and other gradient pulses is as the experiment intends. High angular resolution diffusion imaging (HARDI) data were acquired with and without the proposed compensation. The data were processed to derive standard diffusion tensor imaging (DTI) maps, which highlight the need for the compensation. Ignoring the other gradient pulses, a bias in DTI parameters from STEAM acquisition is found, due both to confounds in the analysis and the experiment design. Retrospectively correcting the analysis with a calculation of the full B matrix can partly correct for these confounds, but an acquisition that is compensated as proposed is needed to remove the effect entirely. © 2014 The Authors. NMR in Biomedicine published by John Wiley & Sons, Ltd.
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real-time prior to SV calculations in order to reduce decorrelation from stationary structures induced by the bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
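Speckle variance contrast is computed from the interframe intensity variance at each pixel over a small gate of N structural frames. A minimal NumPy sketch of that calculation (frame count, array layout and normalization are assumptions, not taken from the paper) is:

```python
import numpy as np

def speckle_variance(frames):
    """Per-pixel interframe variance over a gate of structural OCT frames.

    frames : ndarray of shape (N, H, W) -- N co-located B-scans (intensity).
    Returns an (H, W) map; higher variance indicates decorrelating (moving) scatterers.
    """
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    return ((frames - mean) ** 2).mean(axis=0)

# Example with a 4-frame gate (as in the n = 4 SV images reported above).
rng = np.random.default_rng(0)
gate = rng.random((4, 512, 512))
sv_map = speckle_variance(gate)
```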
Range and Panoramic Image Fusion Into a Textured Range Image for Culture Heritage Documentation
NASA Astrophysics Data System (ADS)
Bila, Z.; Reznicek, J.; Pavelka, K.
2013-07-01
This paper deals with the fusion of range and panoramic images, where the range image is acquired by a 3D laser scanner and the panoramic image is acquired with a digital still camera mounted on a panoramic head and tripod. The fused resulting dataset, called a "textured range image", provides more reliable information about the investigated object for conservators and historians than using both datasets separately. A simple example of fusion of range and panoramic images, both obtained in St. Francis Xavier Church in the town of Opařany, is given here. Firstly, we describe the process of data acquisition, then the processing of both datasets into a proper format for the following fusion, and the process of fusion itself. The process of fusion can be divided into two main parts: transformation and remapping. In the first, transformation, part, both images are related by matching similar features detected in both images with a proper detector, which results in a transformation matrix enabling transformation of the range image onto the panoramic image. Then, the range data are remapped from the range image space into the panoramic image space and stored as an additional "range" channel. The process of image fusion is validated by comparing similar features extracted from both datasets.
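The transformation step described above (matching features between the range and panoramic images and estimating a mapping) can be sketched with standard feature-matching tools. This is only an illustrative outline, not the authors' pipeline; the choice of SIFT, a homography model, and 8-bit grayscale inputs are assumptions:

```python
import cv2
import numpy as np

def estimate_range_to_pano_transform(range_img, pano_img):
    """Match features between a range-intensity image and a panoramic image
    (both 8-bit grayscale) and estimate a 3x3 transform into panorama space."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(range_img, None)
    kp_p, des_p = sift.detectAndCompute(pano_img, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_r, des_p)

    src = np.float32([kp_r[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_p[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def remap_range_channel(range_data, H, pano_shape):
    """Warp the per-pixel range values into panorama space as an extra channel."""
    h, w = pano_shape[:2]
    return cv2.warpPerspective(range_data, H, (w, h))
```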
Regionally adaptive histogram equalization of the chest.
Sherrier, R H; Johnson, G A
1987-01-01
Advances in the area of digital chest radiography have resulted in the acquisition of high-quality images of the human chest. With these advances, there arises a genuine need for image processing algorithms specific to the chest, in order to fully exploit this digital technology. We have implemented the well-known technique of histogram equalization, noting the problems encountered when it is adapted to chest images. These problems have been successfully solved with our regionally adaptive histogram equalization method. With this technique histograms are calculated locally and then modified according to both the mean pixel value of that region as well as certain characteristics of the cumulative distribution function. This process, which has allowed certain regions of the chest radiograph to be enhanced differentially, may also have broader implications for other image processing tasks.
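A bare-bones sketch of regionally adaptive equalization in the spirit described above (tiling the image, building a local histogram, and modulating the mapping by the regional mean) might look like the following. The tile size and the mean-dependent blending rule are assumptions for illustration, not the authors' parameters:

```python
import numpy as np

def regional_hist_eq(img, tile=64, max_levels=256):
    """Tile-wise histogram equalization of an 8-bit image, attenuated in dark
    regions by blending toward the identity mapping (illustrative rule)."""
    img = np.asarray(img)
    out = np.empty_like(img)
    for y0 in range(0, img.shape[0], tile):
        for x0 in range(0, img.shape[1], tile):
            block = img[y0:y0 + tile, x0:x0 + tile]
            hist, _ = np.histogram(block, bins=max_levels, range=(0, max_levels))
            cdf = hist.cumsum() / hist.sum()
            eq = cdf[block] * (max_levels - 1)
            # Weight by the regional mean so darker regions are equalized less.
            w = block.mean() / (max_levels - 1)
            out[y0:y0 + tile, x0:x0 + tile] = (w * eq + (1 - w) * block).astype(img.dtype)
    return out
```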
Raup, B.H.; Kieffer, H.H.; Hare, T.M.; Kargel, J.S.
2000-01-01
The advanced spaceborne thermal emission and reflection radiometer (ASTER) instrument is scheduled to be launched on the EOS Terra platform in 1999. The Global Land Ice Measurements from Space project has planned to acquire ASTER images of most of the world's land ice annually during the six-year ASTER mission. This article describes the process of creating the data acquisition requests needed to cover approximately 170,000 glacier targets.
Multiscale image processing and antiscatter grids in digital radiography.
Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D
2009-01-01
Scatter radiation is a source of noise and results in decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hodek-Wuerz, Roman; Martin, Jean-Baptiste; Wilhelm, Kai
Percutaneous vertebroplasty (PVP) is carried out under fluoroscopic control in most centers. The exclusion of implant leakage and the assessment of implant distribution might be difficult based on two-dimensional radiographic projection images only. We evaluated the feasibility of performing a follow-up examination after PVP with rotational acquisitions and volumetric reconstructions in the angio suite. Twenty consecutive patients underwent standard PVP procedures under fluoroscopic control. Immediate postprocedure evaluation of the implant distribution in the angio suite (BV 3000; Philips, The Netherlands) was performed using rotational acquisitions (typical parameters for the image acquisition included a 17-cm field-of-view and 200 acquired images for a total angular range of 180°). Postprocessing of acquired volumetric datasets included multiplanar reconstruction (MPR), maximum intensity projection (MIP), and volume rendering technique (VRT) images that were displayed as two-dimensional slabs or as entire three-dimensional volumes. Image evaluation included lesion and implant assessment with special attention given to implant leakage. Findings from rotational acquisitions were compared to findings from postinterventional CT. The time to perform and to postprocess the rotational acquisitions was in all cases less than 10 min. Assessment of implant distribution after PVP using rotational image acquisition methods and volumetric reconstructions was possible in all patients. Cement distribution and potential leakage sites were visualized best on MIP images presented as slabs. From a total of 33 leakages detected with CT, 30 could be correctly detected by rotational image acquisition. Rotational image acquisitions and volumetric reconstruction methods provided a fast method to control radiographically the result of PVP in our cases.
Correlation processing for correction of phase distortions in subaperture imaging.
Tavh, B; Karaman, M
1999-01-01
Ultrasonic subaperture imaging combines synthetic aperture and phased array approaches and permits low-cost systems with improved image quality. In subaperture processing, a large array is synthesized using echo signals collected from a number of receive subapertures by multiple firings of a phased transmit subaperture. Tissue inhomogeneities and displacements in subaperture imaging may cause significant phase distortions on received echo signals. Correlation processing on reference echo signals can be used for correction of the phase distortions, for which the accuracy and robustness are critically limited by the signal correlation. In this study, we explore correlation processing techniques for adaptive subaperture imaging with phase correction for motion and tissue inhomogeneities. The proposed techniques use new subaperture data acquisition schemes to produce reference signal sets with improved signal correlation. The experimental test results were obtained using raw radio frequency (RF) data acquired from two different phantoms with a 3.5 MHz, 128-element transducer array. The results show that phase distortions can effectively be compensated by the proposed techniques in real-time adaptive subaperture imaging.
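The core of correlation-based phase correction is estimating the relative delay between reference echo signals from adjacent firings at the cross-correlation peak and applying the opposite delay before coherent summation. A hedged, single-channel sketch (sampling rate, pulse shape and the integer-sample resolution are assumptions; practical systems interpolate to sub-sample precision) follows:

```python
import numpy as np

def estimate_delay(ref_a, ref_b):
    """Integer-sample delay of ref_b relative to ref_a from the cross-correlation peak."""
    xc = np.correlate(ref_b, ref_a, mode="full")
    return int(np.argmax(xc)) - (len(ref_a) - 1)

def align(signal, delay):
    """Shift a signal by -delay samples to undo the estimated lag."""
    return np.roll(signal, -delay)

# Example: a 3.5 MHz RF echo delayed by 7 samples due to motion/inhomogeneity.
fs = 40e6
t = np.arange(2048) / fs
rf = np.sin(2 * np.pi * 3.5e6 * t) * np.exp(-((t - 10e-6) ** 2) / (2 * (1e-6) ** 2))
rf_delayed = np.roll(rf, 7)
print(estimate_delay(rf, rf_delayed))  # -> 7
```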
Mobile Ultrasound Plane Wave Beamforming on iPhone or iPad using Metal- based GPU Processing
NASA Astrophysics Data System (ADS)
Hewener, Holger J.; Tretbar, Steffen H.
Mobile and cost-effective ultrasound devices are being used in point-of-care scenarios and the trauma room. To reduce the costs of such devices, we have already presented the possibilities of consumer devices like the Apple iPad for full signal processing of raw data for ultrasound image generation. Using technologies like plane wave imaging to generate a full image with only one excitation/reception event, the acquisition times and power consumption of ultrasound imaging can be reduced for low-power mobile devices based on consumer electronics, realizing the transition from FPGA- or ASIC-based beamforming to more flexible software beamforming. The massively parallel beamforming processing can be done with the Apple framework "Metal" for advanced graphics and general-purpose GPU processing on the iOS platform. We were able to integrate the beamforming reconstruction into our mobile ultrasound processing application with imaging rates up to 70 Hz on iPad Air 2 hardware.
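Software beamforming for plane-wave imaging reduces to a delay-and-sum over channels, which is the kind of operation the Metal-based GPU processing parallelizes. The CPU-side NumPy sketch below shows the arithmetic only; it is not the authors' Metal code, and the element pitch, sound speed and zero-degree plane wave are assumptions:

```python
import numpy as np

def das_plane_wave(rf, fs, pitch, c=1540.0, depth_samples=None):
    """Delay-and-sum beamforming of one zero-angle plane-wave transmission.

    rf : (n_channels, n_samples) received RF data, with t = 0 at transmit.
    Returns an (n_depth, n_channels) beamformed image (one line per element position).
    """
    n_ch, n_samp = rf.shape
    if depth_samples is None:
        depth_samples = n_samp // 2
    x_elem = (np.arange(n_ch) - (n_ch - 1) / 2) * pitch      # element x-positions
    z = np.arange(depth_samples) * c / (2 * fs)              # image depths
    img = np.zeros((depth_samples, n_ch))
    for li, x_pix in enumerate(x_elem):                      # one image line per element x
        # Two-way time of flight: plane wave down (z/c) plus echo back to each element.
        t_rx = (z[:, None] + np.sqrt(z[:, None] ** 2 + (x_pix - x_elem[None, :]) ** 2)) / c
        idx = np.clip(np.round(t_rx * fs).astype(int), 0, n_samp - 1)
        img[:, li] = rf[np.arange(n_ch)[None, :], idx].sum(axis=1)
    return img
```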
NASA Astrophysics Data System (ADS)
Heisler, Morgan; Lee, Sieun; Mammo, Zaid; Jian, Yifan; Ju, Myeong Jin; Miao, Dongkai; Raposo, Eric; Wahl, Daniel J.; Merkur, Andrew; Navajas, Eduardo; Balaratnasingam, Chandrakumar; Beg, Mirza Faisal; Sarunic, Marinko V.
2017-02-01
High quality visualization of the retinal microvasculature can improve our understanding of the onset and development of retinal vascular diseases, which are a major cause of visual morbidity and are increasing in prevalence. Optical Coherence Tomography Angiography (OCT-A) images are acquired over multiple seconds and are particularly susceptible to motion artifacts, which are more prevalent when imaging patients with pathology whose ability to fixate is limited. The acquisition of multiple OCT-A images sequentially can be performed for the purpose of removing motion artifact and increasing the contrast of the vascular network through averaging. Due to the motion artifacts, a robust registration pipeline is needed before feature preserving image averaging can be performed. In this report, we present a novel method for a GPU-accelerated pipeline for acquisition, processing, segmentation, and registration of multiple, sequentially acquired OCT-A images to correct for the motion artifacts in individual images for the purpose of averaging. High performance computing, blending CPU and GPU, was introduced to accelerate processing in order to provide high quality visualization of the retinal microvasculature and to enable a more accurate quantitative analysis in a clinically useful time frame. Specifically, image discontinuities caused by rapid micro-saccadic movements and image warping due to smoother reflex movements were corrected by strip-wise affine registration estimated using Scale Invariant Feature Transform (SIFT) keypoints and subsequent local similarity-based non-rigid registration. These techniques improve the image quality, increasing the value for clinical diagnosis and increasing the range of patients for whom high quality OCT-A images can be acquired.
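The strip-wise affine step can be illustrated with standard tools: detect SIFT keypoints in a strip of the moving en face image and in the reference image, then estimate a RANSAC-filtered partial-affine transform. This is a schematic outline only; the strip handling, matching parameters, and 8-bit grayscale inputs are assumptions, and the authors' GPU pipeline and subsequent non-rigid step are not reproduced:

```python
import cv2
import numpy as np

def register_strip(ref_img, mov_strip):
    """Estimate a partial-affine transform aligning one motion-artifact strip
    of an OCT-A en face image to the reference image."""
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(ref_img, None)
    kp_m, des_m = sift.detectAndCompute(mov_strip, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des_m, des_r)
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    A, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    return A

def apply_affine(strip, A, out_shape):
    h, w = out_shape[:2]
    return cv2.warpAffine(strip, A, (w, h))
```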
Characterizing probe performance in the aberration corrected STEM.
Batson, P E
2006-01-01
Sub-Angstrom imaging using the 120 kV IBM STEM is now routine if the probe optics are carefully controlled and fully characterized. However, multislice simulation using at least a frozen phonon approximation is required to understand the Annular Dark Field image contrast. Analysis of silicon dumbbell structures in the [110] and [211] projections illustrates this finding. Using fast image acquisition, atomic movement appears ubiquitous under the electron beam, and may be useful to illuminate atomic-level processes.
Large Scale Textured Mesh Reconstruction from Mobile Mapping Images and LIDAR Scans
NASA Astrophysics Data System (ADS)
Boussaha, M.; Vallet, B.; Rives, P.
2018-05-01
The representation of 3D geometric and photometric information of the real world is one of the most challenging and extensively studied research topics in the photogrammetry and robotics communities. In this paper, we present a fully automatic framework for 3D high quality large scale urban texture mapping using oriented images and LiDAR scans acquired by a terrestrial Mobile Mapping System (MMS). First, the acquired points and images are sliced into temporal chunks ensuring a reasonable size and time consistency between geometry (points) and photometry (images). Then, a simple, fast and scalable 3D surface reconstruction relying on the sensor space topology is performed on each chunk after an isotropic sampling of the point cloud obtained from the raw LiDAR scans. Finally, the algorithm proposed in (Waechter et al., 2014) is adapted to texture the reconstructed surface with the images acquired simultaneously, ensuring a high quality texture with no seams and global color adjustment. We evaluate our full pipeline on a dataset of 17 km of acquisition in Rouen, France resulting in nearly 2 billion points and 40000 full HD images. We are able to reconstruct and texture the whole acquisition in less than 30 computing hours, the entire process being highly parallel as each chunk can be processed independently in a separate thread or computer.
A Bayesian model for highly accelerated phase-contrast MRI.
Rich, Adam; Potter, Lee C; Jin, Ning; Ash, Joshua; Simonetti, Orlando P; Ahmad, Rizwan
2016-08-01
Phase-contrast magnetic resonance imaging is a noninvasive tool to assess cardiovascular disease by quantifying blood flow; however, low data acquisition efficiency limits the spatial and temporal resolutions, real-time application, and extensions to four-dimensional flow imaging in clinical settings. We propose a new data processing approach called Reconstructing Velocity Encoded MRI with Approximate message passing aLgorithms (ReVEAL) that accelerates the acquisition by exploiting data structure unique to phase-contrast magnetic resonance imaging. The proposed approach models physical correlations across space, time, and velocity encodings. The proposed Bayesian approach exploits the relationships in both magnitude and phase among velocity encodings. A fast iterative recovery algorithm is introduced based on message passing. For validation, prospectively undersampled data are processed from a pulsatile flow phantom and five healthy volunteers. The proposed approach is in good agreement, quantified by peak velocity and stroke volume (SV), with reference data for acceleration rates R≤10. For SV, Pearson r≥0.99 for phantom imaging (n = 24) and r≥0.96 for prospectively accelerated in vivo imaging (n = 10) for R≤10. The proposed approach enables accurate quantification of blood flow from highly undersampled data. The technique is extensible to four-dimensional flow imaging, where higher acceleration may be possible due to additional redundancy. Magn Reson Med 76:689-701, 2016. © 2015 Wiley Periodicals, Inc.
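In phase-contrast MRI, velocity at each voxel is encoded in the phase difference between two velocity encodings, while the magnitude images share structure across encodings; that shared structure is the kind of redundancy the Bayesian model above exploits. A minimal worked relation (the standard phase-contrast convention, not anything specific to ReVEAL) is:

```python
import numpy as np

def velocity_from_encodings(img_enc0, img_enc1, venc):
    """Per-voxel velocity (same units as venc) from two complex velocity encodings.

    Standard phase-contrast relation: v = venc * angle(conj(x0) * x1) / pi,
    so a phase difference of +/- pi maps to +/- venc.
    """
    dphi = np.angle(np.conj(img_enc0) * img_enc1)
    return venc * dphi / np.pi

# Illustrative example: a voxel moving at 40 cm/s with venc = 150 cm/s.
venc = 150.0
true_v = 40.0
x0, x1 = 1.0 + 0j, np.exp(1j * np.pi * true_v / venc)
print(velocity_from_encodings(np.array(x0), np.array(x1), venc))  # ~40.0
```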
[Detection of lung nodules. New opportunities in chest radiography].
Pötter-Lang, S; Schalekamp, S; Schaefer-Prokop, C; Uffmann, M
2014-05-01
Chest radiography still represents the most commonly performed X-ray examination because it is readily available, requires low radiation doses and is relatively inexpensive. However, as previously published, many initially undetected lung nodules are retrospectively visible in chest radiographs. The great improvements in detector technology with the increasing dose efficiency and improved contrast resolution provide a better image quality and reduced dose needs. The dual energy acquisition technique and advanced image processing methods (e.g. digital bone subtraction and temporal subtraction) reduce the anatomical background noise by reduction of overlapping structures in chest radiography. Computer-aided detection (CAD) schemes increase the awareness of radiologists for suspicious areas. The advanced image processing methods show clear improvements for the detection of pulmonary lung nodules in chest radiography and strengthen the role of this method in comparison to 3D acquisition techniques, such as computed tomography (CT). Many of these methods will probably be integrated into standard clinical treatment in the near future. Digital software solutions offer advantages as they can be easily incorporated into radiology departments and are often more affordable as compared to hardware solutions.
Tinsley, C J; Narduzzo, K E; Ho, J W; Barker, G R; Brown, M W; Warburton, E C
2009-09-01
The aim was to investigate the role of calcium-calmodulin-dependent protein kinase (CAMK)II in object recognition memory. The performance of rats in a preferential object recognition test was examined after local infusion of the CAMKII inhibitors KN-62 or autocamtide-2-related inhibitory peptide (AIP) into the perirhinal cortex. KN-62 or AIP infused after acquisition impaired memory tested at 24 h, indicating an involvement of CAMKII in the consolidation of recognition memory. Memory was impaired when KN-62 was infused at 20 min after acquisition or when AIP was infused at 20, 40, 60 or 100 min after acquisition. The time-course of CAMKII activation in rats was further examined by immunohistochemical staining for phospho-CAMKII(Thre286)alpha at 10, 40, 70 and 100 min following the viewing of novel and familiar images. At 70 min, processing novel images resulted in more phospho-CAMKII(Thre286)alpha-stained neurons in the perirhinal cortex than did the processing of familiar images, consistent with the viewing of novel images increasing the activity of CAMKII at this time. This difference was eliminated by prior infusion of AIP. These findings establish that CAMKII is active within the perirhinal region between approximately 20 and 100 min following learning and then returns to baseline. Thus, increased CAMKII activity is essential for the consolidation of long-term object recognition memory but continuation of that increased activity throughout the 24 h memory delay is not necessary for maintenance of the memory.
NASA Astrophysics Data System (ADS)
Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.
2009-07-01
The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be performed simultaneously; consequently, statistical analysis of tissue is achievable in real-time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images from the prototype system of real excised breast tissue, where the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were experimentally acquired and the statistical analysis of the data was done off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation using the statistical information extracted from the breast as part of a feedback mechanism is beneficial and foreseeable in the near future.
Physics of fractional imaging in biomedicine.
Sohail, Ayesha; Bég, O A; Li, Zhiwu; Celik, Sebahattin
2018-03-12
The mathematics of imaging is a growing field of research and is evolving rapidly in parallel with evolution in the field of imaging. Imaging, which is a sub-field of biomedical engineering, considers novel approaches to visualize biological tissues with the general goal of improving health. Medical imaging research provides improved diagnostic tools in clinical settings and supports the development of drugs and other therapies. Data acquisition and diagnostic interpretation with minimum error are the important technical aspects of medical imaging. Image quality and resolution are critically important in portraying the internal aspects of a patient's body. Although there are several user-friendly resources for processing image features, such as enhancement, colour manipulation and compression, the development of new processing methods is still worthy of effort. In this article we aim to present the role of fractional calculus in imaging with the aid of practical examples. Copyright © 2018 Elsevier Ltd. All rights reserved.
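As one concrete example of fractional calculus in image processing (our illustration, not an example taken from the article), a fractional-order derivative along an image row can be approximated with Grünwald–Letnikov coefficients; masks built from these weights are commonly used for texture and edge enhancement:

```python
import numpy as np

def gl_coefficients(alpha, n_terms):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed via the
    recursion w_k = w_{k-1} * (k - 1 - alpha) / k."""
    w = np.empty(n_terms)
    w[0] = 1.0
    for k in range(1, n_terms):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def fractional_derivative_rows(img, alpha=0.5, n_terms=8):
    """Approximate the order-alpha derivative of each image row (unit pixel spacing)."""
    w = gl_coefficients(alpha, n_terms)
    img = np.asarray(img, dtype=float)
    out = np.zeros_like(img)
    for k, wk in enumerate(w):
        shifted = np.roll(img, k, axis=1)
        shifted[:, :k] = img[:, :1]          # crude edge handling: repeat the first column
        out += wk * shifted
    return out
```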
Scanning transmission electron microscopy through-focal tilt-series on biological specimens.
Trepout, Sylvain; Messaoudi, Cédric; Perrot, Sylvie; Bastin, Philippe; Marco, Sergio
2015-10-01
Since scanning transmission electron microscopy can produce high signal-to-noise ratio bright-field images of thick (≥500 nm) specimens, this tool is emerging as the method of choice to study thick biological samples via tomographic approaches. However, in a convergent-beam configuration, the depth of field is limited because only a thin portion of the specimen (from a few nanometres to tens of nanometres depending on the convergence angle) can be imaged in focus. A method known as through-focal imaging enables recovery of the full depth of information by combining images acquired at different levels of focus. In this work, we compare tomographic reconstruction with the through-focal tilt-series approach (a multifocal series of images per tilt angle) with reconstruction with the classic tilt-series acquisition scheme (one single-focus image per tilt angle). We visualised the base of the flagellum in the protist Trypanosoma brucei via an acquisition and image-processing method tailored to obtain quantitative and qualitative descriptors of reconstruction volumes. Reconstructions using through-focal imaging contained more contrast and more details for thick (≥500 nm) biological samples. Copyright © 2015 Elsevier Ltd. All rights reserved.
Dual-Energy CT: New Horizon in Medical Imaging
Goo, Jin Mo
2017-01-01
Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151
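Many of the material-specific images listed above (iodine maps, virtual non-contrast images) come from a two-material decomposition: at each voxel, the low- and high-energy measurements are expressed as a linear combination of two basis materials and the resulting 2×2 system is inverted. The sketch below is schematic only; the basis attenuation values are placeholders, not calibrated scanner data:

```python
import numpy as np

def two_material_decomposition(mu_low, mu_high, basis):
    """Solve, per voxel, [mu_low; mu_high] = basis @ [c1; c2] for material fractions.

    mu_low, mu_high : arrays of attenuation at the low/high energy.
    basis : 2x2 matrix [[mu1_low, mu2_low], [mu1_high, mu2_high]] for the two
            basis materials (e.g. water and iodine); calibration-dependent.
    """
    b_inv = np.linalg.inv(np.asarray(basis, dtype=float))
    meas = np.stack([np.ravel(mu_low), np.ravel(mu_high)])     # shape (2, n_voxels)
    c = b_inv @ meas
    return c[0].reshape(np.shape(mu_low)), c[1].reshape(np.shape(mu_low))

# Placeholder basis values (illustrative only): "water" and "iodine" at low/high kVp.
basis = [[0.20, 5.0],
         [0.15, 2.0]]
water_map, iodine_map = two_material_decomposition(np.array([[0.25]]), np.array([[0.17]]), basis)
```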
Cheung, Chris C P; Yu, Alfred C H; Salimi, Nazila; Yiu, Billy Y S; Tsang, Ivan K H; Kerby, Benjamin; Azar, Reza Zahiri; Dickie, Kris
2012-02-01
The lack of open access to the pre-beamformed data of an ultrasound scanner has limited the research of novel imaging methods to a few privileged laboratories. To address this need, we have developed a pre-beamformed data acquisition (DAQ) system that can collect data over 128 array elements in parallel from the Ultrasonix series of research-purpose ultrasound scanners. Our DAQ system comprises three system-level blocks: 1) a connector board that interfaces with the array probe and the scanner through a probe connector port; 2) a main board that triggers DAQ and controls data transfer to a computer; and 3) four receiver boards that are each responsible for acquiring 32 channels of digitized raw data and storing them to the on-board memory. This system can acquire pre-beamformed data with 12-bit resolution when using a 40-MHz sampling rate. It houses a 16 GB RAM buffer that is sufficient to store 128 channels of pre-beamformed data for 8000 to 25 000 transmit firings, depending on imaging depth; corresponding to nearly a 2-s period in typical imaging setups. Following the acquisition, the data can be transferred through a USB 2.0 link to a computer for offline processing and analysis. To evaluate the feasibility of using the DAQ system for advanced imaging research, two proof-of-concept investigations have been conducted on beamforming and plane-wave B-flow imaging. Results show that adaptive beamforming algorithms such as the minimum variance approach can generate sharper images of a wire cross-section whose diameter is equal to the imaging wavelength (150 μm in our example). Also, planewave B-flow imaging can provide more consistent visualization of blood speckle movement given the higher temporal resolution of this imaging approach (2500 fps in our example).
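The roughly 2-s acquisition window quoted above follows directly from the channel count, sampling rate and buffer size. A back-of-the-envelope check (assuming the 12-bit samples are stored packed at 1.5 bytes each, which is our assumption rather than a documented detail of the system) is:

```python
# Back-of-the-envelope buffer duration for 128 channels sampled at 40 MHz, 12 bits.
channels = 128
fs = 40e6                      # samples per second per channel
bytes_per_sample = 1.5         # 12 bits packed (assumption)
buffer_bytes = 16 * 1024**3    # 16 GiB on-board RAM

rate = channels * fs * bytes_per_sample          # aggregate data rate
seconds = buffer_bytes / rate
print(f"{rate/1e9:.1f} GB/s -> {seconds:.1f} s of continuous capture")  # ~7.7 GB/s -> ~2.2 s
```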
Damewood, Sara; Jeanmonod, Donald; Cadigan, Beth
2011-04-01
This study compared the effectiveness of a multimedia ultrasound (US) simulator to normal human models during the practical portion of a course designed to teach the skills of both image acquisition and image interpretation for the Focused Assessment with Sonography for Trauma (FAST) exam. This was a prospective, blinded, controlled education study using medical students as an US-naïve population. After a standardized didactic lecture on the FAST exam, trainees were separated into two groups to practice image acquisition on either a multimedia simulator or a normal human model. Four outcome measures were then assessed: image interpretation of prerecorded FAST exams, adequacy of image acquisition on a standardized normal patient, perceived confidence of image adequacy, and time to image acquisition. Ninety-two students were enrolled and separated into two groups, a multimedia simulator group (n = 44), and a human model group (n = 48). Bonferroni adjustment factor determined the level of significance to be p = 0.0125. There was no difference between those trained on the multimedia simulator and those trained on a human model in image interpretation (median 80 of 100 points, interquartile range [IQR] 71-87, vs. median 78, IQR 62-86; p = 0.16), image acquisition (median 18 of 24 points, IQR 12-18 points, vs. median 16, IQR 14-20; p = 0.95), trainee's confidence in obtaining images on a 1-10 visual analog scale (median 5, IQR 4.1-6.5, vs. median 5, IQR 3.7-6.0; p = 0.36), or time to acquire images (median 3.8 minutes, IQR 2.7-5.4 minutes, vs. median = 4.5 minutes, IQR = 3.4-5.9 minutes; p = 0.044). There was no difference in teaching the skills of image acquisition and interpretation to novice FAST examiners using the multimedia simulator or normal human models. These data suggest that practical image acquisition skills learned during simulated training can be directly applied to human models. © 2011 by the Society for Academic Emergency Medicine.
MR CAT scan: a modular approach for hybrid imaging.
Hillenbrand, C; Hahn, D; Haase, A; Jakob, P M
2000-07-01
In this study, a modular concept for NMR hybrid imaging is presented. This concept essentially integrates different imaging modules in a sequential fashion and is therefore called CAT (combined acquisition technique). CAT is not a single specific measurement sequence, but rather a sequence design concept whereby distinct acquisition techniques with varying imaging parameters are employed in rapid succession in order to cover k-space. The power of the CAT approach is that it provides a high flexibility toward the acquisition optimization with respect to the available imaging time and the desired image quality. Important CAT sequence optimization steps include the appropriate choice of the k-space coverage ratio and the application of mixed bandwidth technology. Details of both the CAT methodology and possible CAT acquisition strategies, such as FLASH/EPI-, RARE/EPI- and FLASH/BURST-CAT are provided. Examples from imaging experiments in phantoms and healthy volunteers including mixed bandwidth acquisitions are provided to demonstrate the feasibility of the proposed CAT concept.
NASA Astrophysics Data System (ADS)
Enomoto, Ayano; Hirata, Hiroshi
2014-02-01
This article describes a feasibility study of parallel image-acquisition using a two-channel surface coil array in continuous-wave electron paramagnetic resonance (CW-EPR) imaging. Parallel EPR imaging was performed by multiplexing of EPR detection in the frequency domain. The parallel acquisition system consists of two surface coil resonators and radiofrequency (RF) bridges for EPR detection. To demonstrate the feasibility of this method of parallel image-acquisition with a surface coil array, three-dimensional EPR imaging was carried out using a tube phantom. Technical issues in the multiplexing method of EPR detection were also clarified. We found that degradation in the signal-to-noise ratio due to the interference of RF carriers is a key problem to be solved.
A cultural side effect: learning to read interferes with identity processing of familiar objects
Kolinsky, Régine; Fernandes, Tânia
2014-01-01
Based on the neuronal recycling hypothesis (Dehaene and Cohen, 2007), we examined whether reading acquisition has a cost for the recognition of non-linguistic visual materials. More specifically, we checked whether the ability to discriminate between mirror images, which develops through literacy acquisition, interferes with object identity judgments, and whether interference strength varies as a function of the nature of the non-linguistic material. To these aims we presented illiterate, late literate (who learned to read at adult age), and early literate adults with an orientation-independent, identity-based same-different comparison task in which they had to respond “same” to both physically identical and mirrored or plane-rotated images of pictures of familiar objects (Experiment 1) or of geometric shapes (Experiment 2). Interference from irrelevant orientation variations was stronger with plane rotations than with mirror images, and stronger with geometric shapes than with objects. Illiterates were the only participants almost immune to mirror variations, but only for familiar objects. Thus, the process of unlearning mirror-image generalization, necessary to acquire literacy in the Latin alphabet, has a cost for a basic function of the visual ventral object recognition stream, i.e., identification of familiar objects. This demonstrates that neural recycling is not just an adaptation to multi-use but a process of at least partial exaptation. PMID:25400605
Remote canopy hemispherical image collection system
NASA Astrophysics Data System (ADS)
Wan, Xuefen; Liu, Bingyu; Yang, Yi; Han, Fang; Cui, Jian
2016-11-01
Canopies are a major part of plant photosynthesis and have distinct architectural elements such as tree crowns, whorls, branches, shoots, etc. By measuring canopy structural parameters, the solar radiation interception, photosynthesis effects and the spatio-temporal distribution of solar radiation under the canopy can be evaluated. Among canopy structure parameters, Leaf Area Index (LAI) is the key one. Leaf area index is a crucial variable in agronomic and environmental studies because of its importance for estimating the amount of radiation intercepted by the canopy and the crop water requirements. LAI can be estimated from hemispherical images obtained below the canopy with high accuracy and effectiveness. However, the existing hemispherical-image technique for canopy LAI measurement is based on a digital SLR camera with a fisheye lens. Users need to collect hemispherical images manually, the SLR camera with a fisheye lens is not suited for long-term outdoor canopy-LAI measurement, and its high cost limits wider deployment. In recent years, with the development of embedded systems and image processing technology, low-cost remote acquisition of canopy hemispherical images is becoming possible. In this paper, we present a remote hemispherical canopy image acquisition system with an in-field/host configuration. The in-field node, based on an embedded platform, a low-cost image sensor and a fisheye lens, is designed to acquire hemispherical images of the plant canopy remotely at low cost. Solar radiation and temperature/humidity data, which are important for validating the image data, are also collected to eliminate invalid hemispherical images and to support node maintenance. The host computer interacts with the in-field node over a 3G network. Hemispherical image calibration and super-resolution are used on the host computer to improve image quality. Results show that the system performs low-cost remote canopy image acquisition for LAI measurement effectively and is a potential candidate technology for low-cost remote collection of canopy hemispherical images to measure canopy LAI.
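Once a hemispherical image has been thresholded into sky and canopy pixels, LAI can be estimated from the gap fraction with a Beer–Lambert-type inversion. The single-angle form below (with an assumed extinction coefficient of 0.5 for a spherical leaf-angle distribution) is only the simplest variant of what dedicated analyzers compute; it is an illustration, not the system's algorithm:

```python
import numpy as np

def lai_from_gap_fraction(binary_sky_mask, zenith_deg=57.5, k=0.5):
    """Estimate LAI from a thresholded region of a hemispherical image.

    binary_sky_mask : boolean array, True where the pixel is sky (gap).
    zenith_deg : view zenith angle of the analysed annulus; ~57.5 deg is the angle
                 at which the extinction coefficient is nearly leaf-angle independent.
    k : extinction coefficient (0.5 assumes a spherical leaf-angle distribution).
    """
    gap_fraction = np.mean(binary_sky_mask)
    # Beer-Lambert inversion of P(theta) = exp(-k * LAI / cos(theta)).
    return -np.cos(np.radians(zenith_deg)) * np.log(gap_fraction) / k

# Example: 30% visible sky in the 57.5-degree annulus -> LAI of roughly 1.3.
mask = np.random.default_rng(1).random((500, 500)) < 0.30
print(lai_from_gap_fraction(mask))
```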
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
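The feedback idea can be reduced to a small loop: run the segmentation with the current parameters, score the result against a quality criterion, and let an optimizer propose the next parameter set. The sketch below uses a generic score function and a greedy coordinate search purely for illustration; it is not the authors' framework:

```python
def adapt_parameters(segment, score, params, step=0.1, iters=50):
    """Greedy feedback loop: perturb one parameter at a time and keep changes
    that improve the segmentation quality score.

    segment(params) -> segmentation result; score(result) -> float (higher is better).
    params : dict of float-valued parameters.
    """
    best = dict(params)
    best_score = score(segment(best))
    for _ in range(iters):
        improved = False
        for name in list(best):
            for delta in (+step, -step):
                trial = dict(best)
                trial[name] += delta
                s = score(segment(trial))
                if s > best_score:
                    best, best_score, improved = trial, s, True
        if not improved:
            step /= 2                     # refine the search when no move helps
    return best, best_score
```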
NASA Astrophysics Data System (ADS)
Dangi, Shusil; Ben-Zikri, Yehuda K.; Cahill, Nathan; Schwarz, Karl Q.; Linte, Cristian A.
2015-03-01
Two-dimensional (2D) ultrasound (US) has been the clinical standard for over two decades for monitoring and assessing cardiac function and providing support via intra-operative visualization and guidance for minimally invasive cardiac interventions. Developments in three-dimensional (3D) image acquisition and transducer design and technology have revolutionized echocardiography imaging, enabling both real-time 3D trans-esophageal and intra-cardiac image acquisition. However, in most cases the clinicians do not access the entire 3D image volume when analyzing the data; rather, they focus on several key views that render the cardiac anatomy of interest during the US imaging exam. This approach enables image acquisition at a much higher spatial and temporal resolution. Two such common approaches are the bi-plane and tri-plane data acquisition protocols; as their names state, the former comprises two orthogonal image views, while the latter depicts the cardiac anatomy based on three co-axially intersecting views spaced at 60° to one another. Since cardiac anatomy is continuously changing, the intra-operative anatomy depicted using real-time US imaging also needs to be updated by tracking the key features of interest and endocardial left ventricle (LV) boundaries. Therefore, rapid automatic feature tracking in US images is critical for three reasons: 1) to perform cardiac function assessment; 2) to identify the location of surgical targets for accurate tool-to-target navigation and on-target instrument positioning; and 3) to enable pre- to intra-op image registration as a means to fuse pre-op CT or MR images used during planning with intra-operative images for enhanced guidance. In this paper we utilize monogenic filtering, graph-cut based segmentation and robust spline smoothing in a combined work flow to process the acquired tri-plane TEE time series US images and demonstrate robust and accurate tracking of the LV endocardial features. We reconstruct the endocardial LV geometry using the tri-plane contours and spline interpolation, and assess the accuracy of the proposed work flow against gold-standard results from the GE Echopac PC clinical software according to quantitative clinical LV characterization parameters, such as the length, circumference, area and volume. Our proposed combined work flow leads to consistent, rapid and automated identification of the LV endocardium, suitable for intra-operative applications and "on-the-fly" computer-assisted assessment of ejection fraction for cardiac function monitoring.
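For the quantitative comparison above, planar measures such as the area enclosed by an endocardial contour in one of the tri-plane views reduce to simple polygon formulas. A hedged example (not the GE Echopac software or the authors' code) is the shoelace computation below:

```python
import numpy as np

def contour_area(points):
    """Area enclosed by a closed 2-D contour (shoelace formula).

    points : (N, 2) array of in-plane endocardial boundary coordinates, in order.
    """
    x, y = np.asarray(points, dtype=float).T
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

# Example: a 30-point circle of radius 25 mm -> area close to pi * 25^2.
theta = np.linspace(0, 2 * np.pi, 30, endpoint=False)
circle = np.column_stack([25 * np.cos(theta), 25 * np.sin(theta)])
print(contour_area(circle))   # ~1949 mm^2 (pi * 625 = 1963; small polygonization error)
```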
Korzynska, Anna; Roszkowiak, Lukasz; Pijanowska, Dorota; Kozlowski, Wojciech; Markiewicz, Tomasz
2014-01-01
The aim of this study is to compare digital images of tissue biopsies captured with an optical microscope using the bright-field technique under various light conditions. The range of colour variation in tissue samples immunohistochemically stained with 3,3'-Diaminobenzidine and Haematoxylin is immense and comes from various sources. One of them is an inadequate setting of the camera's white balance relative to the microscope's light colour temperature. Although this type of error can easily be handled at the image acquisition stage, it can also be eliminated afterwards with colour adjustment algorithms. The dependence of colour variation on the microscope's light temperature and the camera settings is examined as introductory research for the process of automatic colour standardization. Six fields of view with empty space among the tissue samples have been selected for analysis. Each field of view has been acquired 225 times with various microscope light temperatures and camera white balance settings. Fourteen randomly chosen images have been corrected and compared with the reference image using the following methods: Mean Square Error, Structural SIMilarity and visual assessment by a viewer. For two types of backgrounds and two types of objects, the statistical image descriptors (range, median, mean and standard deviation of chromaticity on the a and b channels of the CIELab colour space, luminance L, and local colour variability over object-specific areas) have been calculated. The results have been averaged over the 6 images acquired with the same light conditions and camera settings for each sample. The analysis of the results leads to the following conclusions: (1) images collected with the white balance setting adjusted to the light colour temperature cluster in a certain area of the chromatic space; (2) white balance correction of images collected with camera white balance settings not matched to the light temperature moves the image descriptors into the proper chromatic space but simultaneously changes the luminance. Thus, image unification in the sense of colour fidelity can be handled in a separate introductory stage before automatic image analysis.
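The image-comparison step described above (Mean Square Error and Structural SIMilarity against a reference acquisition, plus chromaticity statistics in CIELab) can be sketched with standard scientific-Python tools; treat this as an illustrative outline under the assumption of 8-bit RGB inputs, not the authors' analysis code:

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.metrics import structural_similarity, mean_squared_error

def compare_to_reference(img_rgb, ref_rgb):
    """MSE and SSIM of a corrected image against the reference acquisition (uint8 RGB)."""
    mse = mean_squared_error(ref_rgb, img_rgb)
    ssim = structural_similarity(ref_rgb, img_rgb, channel_axis=-1)
    return mse, ssim

def chromaticity_stats(img_rgb):
    """Median, mean and standard deviation of L, a and b in CIELab space."""
    lab = rgb2lab(img_rgb)
    channels = {"L": lab[..., 0], "a": lab[..., 1], "b": lab[..., 2]}
    return {name: (np.median(v), v.mean(), v.std()) for name, v in channels.items()}
```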
Advanced imaging techniques for the study of plant growth and development.
Sozzani, Rosangela; Busch, Wolfgang; Spalding, Edgar P; Benfey, Philip N
2014-05-01
A variety of imaging methodologies are being used to collect data for quantitative studies of plant growth and development from living plants. Multi-level data, from macroscopic to molecular, and from weeks to seconds, can be acquired. Furthermore, advances in parallelized and automated image acquisition enable the throughput to capture images from large populations of plants under specific growth conditions. Image-processing capabilities allow for 3D or 4D reconstruction of image data and automated quantification of biological features. These advances facilitate the integration of imaging data with genome-wide molecular data to enable systems-level modeling. Copyright © 2013 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Bisconti, Silvia; Shulkin, Masha; Hu, Xiaosu; Basura, Gregory J.; Kileny, Paul R.; Kovelman, Ioulia
2016-01-01
Purpose: The aim of this study was to examine how the brains of individuals with cochlear implants (CIs) respond to spoken language tasks that underlie successful language acquisition and processing. Method: During functional near-infrared spectroscopy imaging, CI recipients with hearing impairment (n = 10, mean age: 52.7 ± 17.3 years) and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, G; Yin, F; Ren, L
Purpose: In order to track tumor movement for patient positioning verification during arc treatment delivery or in between 3D/IMRT beams for stereotactic body radiation therapy (SBRT), the acquisition of limited-angle kV projections simultaneously during arc treatment delivery, or in between static treatment beams as the gantry moves to the next beam angle, was proposed. The purpose of this study is to estimate the additional imaging dose resulting from multiple tomosynthesis acquisitions in between static treatment beams and to compare it with that of a conventional kV-CBCT acquisition. Methods: The kV imaging system integrated into Varian TrueBeam accelerators was modeled using the EGSnrc Monte Carlo user code BEAMnrc, and the DOSXYZnrc code was used in dose calculations. The simulated realistic kV beams from the Varian TrueBeam OBI 1.5 system were used to calculate dose to the patient based on CT images. Organ doses were analyzed using DVHs. The imaging dose to the patient resulting from realistic multiple tomosynthesis acquisitions, each with a 25–30 degree kV source rotation, between 6 treatment beam gantry angles was studied. Results: For a typical lung SBRT treatment delivery, much lower (20–50%) kV imaging doses were observed from the sum of six realistic tomosynthesis acquisitions, each with a 25–30 degree x-ray source rotation between the six treatment beam gantry angles, compared to that from a single CBCT image acquisition. Conclusion: This work indicates that the kV imaging in this proposed Limited-angle Intra-fractional Verification (LIVE) System for SBRT treatments has a negligible imaging dose increase. It is worth noting that the MV imaging dose caused by MV projection acquisition in between static beams in LIVE can be minimized by restricting the imaging to the target region and reducing the number of projections acquired. For arc treatments, MV imaging acquisition in LIVE does not add additional imaging dose as the MV images are acquired directly from the treatment beams during the treatment.
Method and apparatus for implementing material thermal property measurement by flash thermal imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Jiangang
A method and apparatus are provided for implementing measurement of material thermal properties, including measurement of the thermal effusivity of a coating and/or film or of a bulk material of uniform properties. The test apparatus includes an infrared camera; a data acquisition and processing computer coupled to the infrared camera for acquiring and processing thermal image data; a flash lamp providing an input of heat onto the surface of a two-layer sample; and an enhanced optical filter covering the flash lamp that attenuates the entire infrared wavelength range, while a series of thermal images is taken of the surface of the two-layer sample.
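For a one-sided flash test on a thermally thick (semi-infinite) sample, the surface temperature rise decays as T(t) = Q / (e·√(π t)), so the effusivity e can be estimated from the slope of T versus (π t)^(-1/2) once the absorbed energy Q is known. The sketch below is a hedged illustration of that idealized model, not the patented apparatus or its processing:

```python
import numpy as np

def effusivity_from_flash(t, dT, Q):
    """Estimate thermal effusivity e from a one-sided flash test.

    Model (semi-infinite body, instantaneous flash): dT(t) = Q / (e * sqrt(pi * t)).
    t  : time samples after the flash [s], t > 0
    dT : surface temperature rise from the thermal images [K]
    Q  : absorbed flash energy per unit area [J/m^2]
    """
    slope = np.polyfit(1.0 / np.sqrt(np.pi * t), dT, 1)[0]   # dT vs (pi*t)^(-1/2)
    return Q / slope

# Synthetic check with e = 1600 J m^-2 K^-1 s^-1/2 (roughly a polymer-like value).
t = np.linspace(0.01, 1.0, 200)
dT = 5000.0 / (1600.0 * np.sqrt(np.pi * t))
print(effusivity_from_flash(t, dT, 5000.0))   # ~1600
```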
An automated system for whole microscopic image acquisition and analysis.
Bueno, Gloria; Déniz, Oscar; Fernández-Carrobles, María Del Milagro; Vállez, Noelia; Salido, Jesús
2014-09-01
The field of anatomic pathology has experienced major changes over the last decade. Virtual microscopy (VM) systems have allowed experts in pathology and other biomedical areas to work in a safer and more collaborative way. VMs are automated systems capable of digitizing microscopic samples that were traditionally examined one by one. The possibility of having digital copies reduces the risk of damaging original samples, and also makes it easier to distribute copies among other pathologists. This article describes the development of an automated high-resolution whole slide imaging (WSI) system tailored to the needs and problems encountered in digital imaging for pathology, from hardware control to the full digitization of samples. The system has been built with an additional digital monochromatic camera together with the default color camera and LED transmitted illumination (RGB). Monochrome cameras are the preferred method of acquisition for fluorescence microscopy. The system is able to digitize samples correctly and form large high-resolution microscope images for both brightfield and fluorescence. The quality of the digital images has been quantified using three metrics based on sharpness, contrast and focus. The system has been evaluated on 150 tissue samples of brain autopsies, prostate biopsies and lung cytologies, at five magnifications: 2.5×, 10×, 20×, 40×, and 63×. The article is focused on the hardware set-up and the acquisition software, although results of the implemented image processing techniques included in the software and applied to the different tissue samples are also presented. © 2014 Wiley Periodicals, Inc.
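The article quantifies digital image quality with sharpness, contrast and focus metrics without specifying them here. As an illustration of the kind of measure typically used in autofocus and WSI quality control (our example, not necessarily the authors' metrics), the variance of the Laplacian and the RMS contrast can be computed as follows:

```python
import numpy as np
from scipy.ndimage import laplace

def focus_measure(gray):
    """Variance of the Laplacian: higher values indicate a sharper, in-focus field."""
    return float(laplace(gray.astype(float)).var())

def rms_contrast(gray):
    """Root-mean-square contrast of a grayscale field (std normalized by the mean)."""
    g = gray.astype(float)
    return float(g.std() / (g.mean() + 1e-12))
```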
Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test
Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno
2008-01-01
The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
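Flat-field (vignetting) correction and a normalized vegetation index are the two operations at the core of the corrections tested above. A minimal sketch follows; the flat-field image and the band layout are assumptions, and this is not the authors' processing chain:

```python
import numpy as np

def devignette(img, flat_field):
    """Divide out the lens/sensor vignetting pattern measured on a uniform target."""
    flat = flat_field / flat_field.mean()          # normalize the flat-field image
    return img / np.clip(flat, 1e-6, None)

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red bands."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / np.clip(nir + red, 1e-6, None)
```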
Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P
2014-07-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.
2010-10-01
A Publication of the Defense Acquisition University (http://www.dau.mil). Keywords: Acceleration Test... Generally speaking, medical devices are designed to... However, because the devices are designed for a controlled environment, concerns they may adversely affect the operation of aircraft systems must be
Image processing and Quality Control for the first 10,000 brain imaging datasets from UK Biobank.
Alfaro-Almagro, Fidel; Jenkinson, Mark; Bangerter, Neal K; Andersson, Jesper L R; Griffanti, Ludovica; Douaud, Gwenaëlle; Sotiropoulos, Stamatios N; Jbabdi, Saad; Hernandez-Fernandez, Moises; Vallee, Emmanuel; Vidaurre, Diego; Webster, Matthew; McCarthy, Paul; Rorden, Christopher; Daducci, Alessandro; Alexander, Daniel C; Zhang, Hui; Dragonu, Iulius; Matthews, Paul M; Miller, Karla L; Smith, Stephen M
2018-02-01
UK Biobank is a large-scale prospective epidemiological study with all data accessible to researchers worldwide. It is currently in the process of bringing back 100,000 of the original participants for brain, heart and body MRI, carotid ultrasound and low-dose bone/fat x-ray. The brain imaging component covers 6 modalities (T1, T2 FLAIR, susceptibility weighted MRI, Resting fMRI, Task fMRI and Diffusion MRI). Raw and processed data from the first 10,000 imaged subjects has recently been released for general research access. To help convert this data into useful summary information we have developed an automated processing and QC (Quality Control) pipeline that is available for use by other researchers. In this paper we describe the pipeline in detail, following a brief overview of UK Biobank brain imaging and the acquisition protocol. We also describe several quantitative investigations carried out as part of the development of both the imaging protocol and the processing pipeline. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Near-infrared hyperspectral imaging for quality analysis of agricultural and food products
NASA Astrophysics Data System (ADS)
Singh, C. B.; Jayas, D. S.; Paliwal, J.; White, N. D. G.
2010-04-01
Agricultural and food processing industries are always looking to implement real-time quality monitoring techniques as a part of good manufacturing practices (GMPs) to ensure the high quality and safety of their products. Near-infrared (NIR) hyperspectral imaging is gaining popularity as a powerful non-destructive tool for quality analysis of several agricultural and food products. This technique has the ability to analyse spectral data in a spatially resolved manner (i.e., each pixel in the image has its own spectrum) by applying both conventional image processing and the chemometric tools used in spectral analyses. The technique has demonstrated potential in detecting defects and contaminants in meats, fruits, cereals, and processed food products. This paper discusses the methodology of hyperspectral imaging in terms of hardware, software, calibration, data acquisition and compression, and the development of prediction and classification algorithms, and it presents a thorough review of the current applications of hyperspectral imaging in the analyses of agricultural and food products.
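As a hedged illustration of the "each pixel has its own spectrum" idea, the sketch below reshapes a hypothetical NIR hypercube into a pixel-by-band matrix and applies a simple chemometric projection (PCA); the cube dimensions, wavelengths, and data are invented for the example and do not reproduce any specific study pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical NIR hypercube: rows x cols x bands (e.g. spectra sampled in ~10 nm steps).
cube = np.random.rand(100, 120, 75)

# Treat every pixel as one spectrum, as described above.
pixels = cube.reshape(-1, cube.shape[-1])

# A simple chemometric step: project the spectra onto a few principal components.
pca = PCA(n_components=3)
scores = pca.fit_transform(pixels)

# Reshape the scores back into images; each "score image" can feed a classifier
# (e.g. sound vs. damaged kernels) or a defect-detection threshold.
score_images = scores.reshape(cube.shape[0], cube.shape[1], 3)
print(score_images.shape, pca.explained_variance_ratio_)
```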
2011-01-01
When applying echo-Doppler imaging for either clinical or research purposes, it is very important to select the most adequate modality/technology and to choose the most reliable and reproducible measurements. Quality control is a mainstay of reducing variability among institutions and operators and must be achieved by using appropriate procedures for the acquisition, storage and interpretation of echo-Doppler data. This goal can be achieved by employing an echo core laboratory (ECL), with the responsibility for standardizing image acquisition processes (performed at the peripheral echo-labs) and analysis (by monitoring and optimizing the internal intra- and inter-reader variability of measurements). Accordingly, the Working Group of Echocardiography of the Italian Society of Cardiology decided to design standardized procedures for image acquisition in peripheral laboratories and for reading procedures, and to propose a methodological approach to assess the reproducibility of echo-Doppler parameters of cardiac structure and function using both standard and advanced technologies. A number of cardiologists experienced in cardiac ultrasound were involved to set up an ECL available for future studies involving complex imaging or including echo-Doppler measures as primary or secondary efficacy or safety end-points. The present manuscript describes the methodology of the procedures (image acquisition and measurement reading) and provides the documentation of the work done so far to test the reproducibility of the different echo-Doppler modalities (standard and advanced). These procedures can also be suggested for use in non-referral echocardiographic laboratories as an "in-house" quality check, with the aim of optimizing the clinical consistency of echo-Doppler data. PMID:21943283
Schmidt, Mark E; Chiao, Ping; Klein, Gregory; Matthews, Dawn; Thurfjell, Lennart; Cole, Patricia E; Margolin, Richard; Landau, Susan; Foster, Norman L; Mason, N Scott; De Santi, Susan; Suhy, Joyce; Koeppe, Robert A; Jagust, William
2015-09-01
In vivo imaging of amyloid burden with positron emission tomography (PET) provides a means for studying the pathophysiology of Alzheimer's and related diseases. Measurement of subtle changes in amyloid burden requires quantitative analysis of image data. Reliable quantitative analysis of amyloid PET scans acquired at multiple sites and over time requires rigorous standardization of acquisition protocols, subject management, tracer administration, image quality control, and image processing and analysis methods. We review critical points in the acquisition and analysis of amyloid PET, identify ways in which technical factors can contribute to measurement variability, and suggest methods for mitigating these sources of noise. Improved quantitative accuracy could reduce the sample size necessary to detect intervention effects when amyloid PET is used as a treatment end point and allow more reliable interpretation of change in amyloid burden and its relationship to clinical course. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Umeda, Takuro; Miwa, Kenta; Murata, Taisuke; Miyaji, Noriaki; Wagatsuma, Kei; Motegi, Kazuki; Terauchi, Takashi; Koizumi, Mitsuru
2017-12-01
The present study aimed to qualitatively and quantitatively evaluate PET images as a function of acquisition time for various leg sizes, and to optimize a shorter variable-acquisition time protocol for legs to achieve better qualitative and quantitative accuracy of true whole-body PET/CT images. The diameters of the legs to be modeled as phantoms were defined based on data derived from 53 patients. This study analyzed PET images of a NEMA phantom and three plastic bottle phantoms (diameter, 5.68, 8.54 and 10.7 cm) that simulated the human body and legs, respectively. The phantoms comprised two spheres (diameters, 10 and 17 mm) containing fluorine-18 fluorodeoxyglucose solution with sphere-to-background ratios of 4 at a background radioactivity level of 2.65 kBq/mL. All PET data were reconstructed with acquisition times ranging from 10 to 180 s, plus 1200 s. We visually evaluated image quality, determined the coefficient of variance (CV) of the background, the contrast and the quantitative %error of the hot spheres, and then determined two shorter variable-acquisition protocols for legs. Lesion detectability and quantitative accuracy, determined from maximum standardized uptake values (SUVmax) in PET images of a patient using the proposed protocols, were also evaluated. A larger phantom and a shorter acquisition time resulted in increased background noise on images and decreased contrast in the hot spheres. A visual score of ≥ 1.5 was obtained when the acquisition time was ≥ 30 s for the three leg phantoms, and ≥ 120 s for the NEMA phantom. The quantitative %errors of the 10- and 17-mm spheres in the leg phantoms were ± 15 and ± 10%, respectively, in PET images with a high CV (scan < 30 s). The mean SUVmax of three lesions using the current fixed-acquisition and the two proposed variable-acquisition time protocols in the clinical study were 3.1, 3.1 and 3.2, respectively, which did not significantly differ. A leg acquisition time per bed position of even 30-90 s allows axial equalization, uniform image noise and a maximum ± 15% quantitative accuracy for the smallest lesion. The overall acquisition time was reduced by 23-42% using the proposed shorter variable-acquisition time protocols compared with the current fixed-acquisition time for imaging legs, indicating that this is a useful and practical protocol for routine qualitative and quantitative PET/CT assessment in the clinical setting.
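The background CV, hot-sphere contrast and quantitative %error used in the phantom analysis above can be written down compactly. The sketch below uses commonly used forms of these metrics (the study's exact NEMA-style definitions may differ) and purely illustrative numbers; none of the values are taken from the paper.

```python
import numpy as np

def background_cv(bg_values):
    """Coefficient of variance of background ROI values (SD / mean)."""
    bg = np.asarray(bg_values, dtype=float)
    return bg.std(ddof=1) / bg.mean()

def percent_error(measured, true):
    """Quantitative %error of a hot-sphere measurement relative to the true value."""
    return 100.0 * (measured - true) / true

def contrast_recovery(hot_mean, bg_mean, true_ratio=4.0):
    """One common contrast definition for a hot sphere with a known
    sphere-to-background ratio (4:1 here, as in the phantoms above)."""
    return 100.0 * (hot_mean / bg_mean - 1.0) / (true_ratio - 1.0)

# Illustrative numbers only (not taken from the study).
print(background_cv([2.7, 2.5, 2.6, 2.8, 2.4]))
print(percent_error(measured=9.2, true=10.6))
print(contrast_recovery(hot_mean=8.1, bg_mean=2.65))
```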
A new method of SC image processing for confluence estimation.
Soleimani, Sajjad; Mirzaei, Mohsen; Toncu, Dana-Cristina
2017-10-01
Stem cell images are a strong instrument for estimating confluency during culturing for therapeutic processes. Various laboratory conditions, such as lighting, cell container support and image acquisition equipment, affect the image quality and, subsequently, the estimation efficiency. This paper describes an efficient image processing method for cell pattern recognition and morphological analysis of images that were affected by an uneven background. The proposed algorithm for enhancing the image is based on coupling a novel image denoising method, using the BM3D filter, with an adaptive thresholding technique for correcting the uneven background. This algorithm works well to provide a faster, easier, and more reliable method than manual measurement for the confluency assessment of stem cell cultures. The present scheme proves to be valid for the prediction of the confluency and growth of stem cells at early stages for tissue engineering in reparatory clinical surgery. The method used in this paper is capable of processing images of cells that already contain various defects due to either personnel mishandling or microscope limitations. Therefore, it provides proper information even from the worst original images available. Copyright © 2017 Elsevier Ltd. All rights reserved.
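A rough sketch of the denoise-then-adaptive-threshold idea is shown below. It substitutes non-local means for the BM3D filter (so it is not the authors' algorithm) and uses scikit-image's local thresholding; all parameter values and the random test image are arbitrary.

```python
import numpy as np
from skimage import filters, restoration

def estimate_confluence(image):
    """Rough confluence estimate: denoise, adaptively threshold the uneven
    background, and report the fraction of pixels classified as cells."""
    img = image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-9)

    # Denoising step (non-local means here as a stand-in for BM3D).
    den = restoration.denoise_nl_means(img, h=0.05, fast_mode=True)

    # A local (adaptive) threshold compensates for uneven illumination.
    local_thresh = filters.threshold_local(den, block_size=51, offset=-0.01)
    cell_mask = den > local_thresh

    return cell_mask.mean()  # fraction of the field covered by cells

confluence = estimate_confluence(np.random.rand(256, 256))
print(f"estimated confluence: {confluence:.1%}")
```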
Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto
2014-11-01
The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples[1]. Acquiring a tilt series for electron tomography is laborious work, and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS)[2,3] for UHVEM tomography tilt-series acquisition. In this method, five images with different defocus values are first acquired and their image sharpness values are calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value[3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is needed to acquire five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K.K. C9721S) for fast acquisition of images[4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than that of the SS-CCD camera (4096×4096 pixels); however, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image can be determined with a lower fitting error. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between differently defocused images. By using the HD video camera for the autofocus process, the time consumed for each autofocus procedure was reduced to about six seconds. Correction of the image position took one second, for a total correction time of seven seconds, which is shorter by one order of magnitude than that using the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image, and we can obtain a tilt series of 61 images within 30 minutes. Accuracy and repeatability were good enough for practical use (Fig. 1: Objective lens current change with tilt angle during acquisition of a tomography series; sample: a rat hepatocyte, thickness: 2 μm, magnification: 4k, acc. voltage: 2 MV. Tilt angle range is ±60 degrees with a 2-degree step angle. Two series were acquired in the same area; both datasets were almost the same and the deviation was smaller than the minimum manual step, so the auto-focus worked well.) We successfully reduced the total acquisition time of a tomography tilt series to half of what it was before. We also developed computer-aided three-dimensional (3D) visualization and analysis software for electron tomography, "HawkC", which can sectionalize the 3D data semi-automatically[5,6]. If this auto-acquisition system is used with the IMOD reconstruction software[7] and the HawkC software, we will be able to do on-line UHVEM tomography. The system would help pathology examination in the future. This work was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan, under a Grant-in-Aid for Scientific Research (Grant No. 23560024, 23560786), and SENTAN, Japan Science and Technology Agency, Japan. © The Author 2014. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
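The sharpness-based autofocus loop described above (acquire a few defocused images, score their sharpness, fit a peaked function, take its maximum) can be sketched as follows. The gradient-energy sharpness metric, the plain Gaussian model and the synthetic-blur demonstration are stand-ins under assumption, not the AFS implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import curve_fit

def sharpness(image):
    """Simple gradient-energy sharpness score (one of several possible metrics)."""
    gy, gx = np.gradient(image.astype(float))
    return np.mean(gx ** 2 + gy ** 2)

def gaussian(x, amp, mu, sigma, offset):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + offset

def best_focus(defocus_values, images):
    """Fit sharpness vs. defocus with a (quasi-)Gaussian and return its peak."""
    s = np.array([sharpness(im) for im in images])
    p0 = [s.max() - s.min(), defocus_values[np.argmax(s)],
          np.ptp(defocus_values) / 4, s.min()]
    popt, _ = curve_fit(gaussian, defocus_values, s, p0=p0, maxfev=5000)
    return popt[1]  # mu: estimated in-focus lens setting

# Synthetic demonstration: blur grows with |defocus|, so sharpness peaks near zero.
rng = np.random.default_rng(0)
defocus = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
base = rng.random((128, 128))
images = [gaussian_filter(base, sigma=abs(d) + 0.1) for d in defocus]
print(f"estimated best focus: {best_focus(defocus, images):+.2f}")
```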
High dynamic range fringe acquisition: A novel 3-D scanning technique for high-reflective surfaces
NASA Astrophysics Data System (ADS)
Jiang, Hongzhi; Zhao, Huijie; Li, Xudong
2012-10-01
This paper presents a novel 3-D scanning technique for highly reflective surfaces based on the phase-shifting fringe projection method. A high dynamic range fringe acquisition (HDRFA) technique is developed to process the fringe images reflected from shiny surfaces; it generates a synthetic fringe image by fusing raw fringe patterns acquired with different camera exposure times and projector illumination fringe intensities. A fringe image fusion algorithm is introduced to avoid saturation and under-illumination phenomena by choosing the pixels in the raw fringes with the highest fringe modulation intensity. A method for auto-selection of the HDRFA parameters is developed, which largely increases measurement automation. The synthetic fringes have a higher signal-to-noise ratio (SNR) under ambient light when the HDRFA parameters are optimized. Experimental results show that the proposed technique can successfully measure objects with highly reflective surfaces and is insensitive to ambient light.
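A minimal version of the fusion rule, choosing, per pixel, the exposure with the highest fringe modulation while rejecting saturated pixels, might look like the following. The N-step modulation formula is the standard phase-shifting one, and the array sizes, saturation level and random test data are assumptions rather than the paper's settings.

```python
import numpy as np

def fringe_modulation(shifted):
    """Fringe modulation for an N-step phase-shift set, shape (N, H, W)."""
    n = shifted.shape[0]
    phases = 2 * np.pi * np.arange(n) / n
    s = np.tensordot(np.sin(phases), shifted, axes=1)
    c = np.tensordot(np.cos(phases), shifted, axes=1)
    return 2.0 / n * np.sqrt(s ** 2 + c ** 2)

def fuse_exposures(stacks, saturation=250):
    """stacks: array (E, N, H, W) of N-step fringe sets at E exposures.
    For each pixel, keep the fringes from the exposure with the highest
    modulation among exposures that are not saturated at that pixel."""
    stacks = stacks.astype(float)
    modulation = np.stack([fringe_modulation(s) for s in stacks])      # (E, H, W)
    saturated = (stacks >= saturation).any(axis=1)                     # (E, H, W)
    modulation[saturated] = -1.0                                       # never pick clipped pixels
    best = modulation.argmax(axis=0)                                   # (H, W)
    fused = np.take_along_axis(stacks, best[None, None], axis=0)[0]    # (N, H, W)
    return fused

# Example: 3 exposures of a 4-step phase-shift sequence on a 480x640 sensor.
rng = np.random.default_rng(1)
stacks = rng.integers(0, 256, size=(3, 4, 480, 640))
fused = fuse_exposures(stacks)
print(fused.shape)
```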
Quantitative nanoscopy: Tackling sampling limitations in (S)TEM imaging of polymers and composites.
Gnanasekaran, Karthikeyan; Snel, Roderick; de With, Gijsbertus; Friedrich, Heiner
2016-01-01
Sampling limitations in electron microscopy question whether the analysis of a bulk material is representative, especially when analyzing hierarchical morphologies that extend over multiple length scales. We tackled this problem by automatically acquiring a large series of partially overlapping (S)TEM images with sufficient resolution, which are subsequently stitched together to generate a large-area map using an in-house developed acquisition toolbox (TU/e Acquisition ToolBox) and stitching module (TU/e Stitcher). In addition, we show that quantitative image analysis of the large-scale maps provides representative information that can be related to the synthesis and process conditions of hierarchical materials, which moves electron microscopy analysis towards becoming a bulk characterization tool. We demonstrate the power of such an analysis by examining two different multi-phase materials that are structured over multiple length scales. Copyright © 2015 Elsevier B.V. All rights reserved.
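Stitching partially overlapping tiles hinges on estimating the translation between neighbours. A common way to do this (not necessarily what TU/e Stitcher does) is phase correlation, sketched here with scikit-image on a synthetic pair of tiles with a known offset.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

# Synthetic "specimen" cut into two overlapping 512x512 tiles with a known offset.
rng = np.random.default_rng(2)
scene = rng.random((600, 600))
tile_a = scene[0:512, 0:512]
tile_b = scene[7:519, 40:552]   # tile_b starts (7, 40) pixels away from tile_a

# Shift required to register the moving tile (tile_b) onto the reference (tile_a).
shift, error, _ = phase_cross_correlation(tile_a, tile_b)
print("estimated offset:", shift)   # close to the known (7, 40) offset
```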
Image analysis of multiple moving wood pieces in real time
NASA Astrophysics Data System (ADS)
Wang, Weixing
2006-02-01
This paper presents algorithms for image processing and image analysis of wood piece materials. The algorithms were designed for auto-detection of wood piece materials on a moving conveyor belt or a truck. When the wood objects are moving, the hard task is to trace the contours of the objects in an optimal way. To make the algorithms work efficiently in the plant, a flexible online system was designed and developed, which mainly consists of image acquisition, image processing, object delineation and analysis. A number of newly developed algorithms can delineate wood objects with high accuracy and high speed, and in the wood piece analysis part, each wood piece can be characterized by a number of visual parameters which can also be used for constructing experimental models directly in the system.
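A generic version of the segment-and-delineate step, thresholding moving wood pieces against the belt and extracting contours plus simple shape parameters, could be sketched with OpenCV as below. The file name and parameter values are placeholders, and the paper's own algorithms are not reproduced.

```python
import cv2
import numpy as np

def delineate_wood_pieces(frame_bgr, min_area=500):
    """Segment wood pieces against a darker conveyor belt and return their
    contours plus a few simple visual parameters per piece."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    pieces = []
    for c in contours:
        area = cv2.contourArea(c)
        if area < min_area:
            continue
        (x, y), (w, h), angle = cv2.minAreaRect(c)
        pieces.append({"area": area, "length": max(w, h), "width": min(w, h), "angle": angle})
    return contours, pieces

frame = cv2.imread("conveyor_frame.png")      # hypothetical file name
if frame is not None:
    _, pieces = delineate_wood_pieces(frame)
    print(pieces[:3])
```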
Logic design and implementation of FPGA for a high frame rate ultrasound imaging system
NASA Astrophysics Data System (ADS)
Liu, Anjun; Wang, Jing; Lu, Jian-Yu
2002-05-01
Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-gain control (TGC), Doppler imaging, harmonic imaging, as well as coded transmissions] is designed. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan-II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of the SDRAMs to reduce peak power, transmission and imaging modality selections, ECG data acquisition and synchronization, a 160 MHz delay locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang
2010-01-01
The acquisition of high quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially for brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening the data acquisition time. In addition, a subject-specific atlas based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter for the generation of a cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical-enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population atlas based segmentation methods. Results show that the proposed method is capable of segmenting the neonatal brain with the highest accuracy, compared to the other two methods. PMID:20862268
Robust sliding-window reconstruction for Accelerating the acquisition of MR fingerprinting.
Cao, Xiaozhi; Liao, Congyu; Wang, Zhixing; Chen, Ying; Ye, Huihui; He, Hongjian; Zhong, Jianhui
2017-10-01
To develop a method for accelerated and robust MR fingerprinting (MRF) with improved image reconstruction and parameter matching processes. A sliding-window (SW) strategy was applied to MRF, in which signal and dictionary matching was conducted between fingerprints consisting of mixed-contrast image series reconstructed from consecutive data frames segmented by a sliding window, and a precalculated mixed-contrast dictionary. The effectiveness and performance of this new method, dubbed SW-MRF, were evaluated in both phantom and in vivo experiments. Error quantifications were conducted on results obtained with various settings of the SW reconstruction parameters. Compared with the original MRF strategy, the results of both phantom and in vivo experiments demonstrate that the proposed SW-MRF strategy either provided similar accuracy with reduced acquisition time, or improved accuracy with equal acquisition time. Parametric maps of T1, T2, and proton density of comparable quality could be achieved with a two-fold or more reduction in acquisition time. The effect of the sliding-window width on dictionary sensitivity was also estimated. The novel SW-MRF recovers high quality image frames from highly undersampled MRF data, which enables more robust dictionary matching with reduced numbers of data frames. This time efficiency may facilitate MRF applications in time-critical clinical settings. Magn Reson Med 78:1579-1588, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
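The two core operations, forming sliding-window composites from consecutive frames and matching each voxel's fingerprint against a dictionary by normalized inner product, can be sketched as follows. The toy data, window length and dictionary are invented, and the actual SW-MRF reconstruction of undersampled k-space is not shown.

```python
import numpy as np

def sliding_window_frames(frames, window=8):
    """Average each run of `window` consecutive frames into one composite frame.
    frames: complex array (T, H, W) of per-time-point (undersampled) images."""
    t = frames.shape[0] - window + 1
    return np.stack([frames[i:i + window].mean(axis=0) for i in range(t)])

def match_dictionary(fingerprints, dictionary, params):
    """Match each voxel fingerprint to the dictionary atom with the largest
    normalized inner product and return the corresponding (T1, T2) entries.
    fingerprints: (H, W, T'), dictionary: (D, T'), params: (D, 2)."""
    fp = fingerprints.reshape(-1, fingerprints.shape[-1])
    fp = fp / (np.linalg.norm(fp, axis=1, keepdims=True) + 1e-12)
    dic = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + 1e-12)
    best = np.abs(fp @ dic.conj().T).argmax(axis=1)
    return params[best].reshape(fingerprints.shape[0], fingerprints.shape[1], 2)

# Toy example with random data in place of reconstructed MRF image frames.
rng = np.random.default_rng(3)
frames = rng.standard_normal((400, 32, 32)) + 1j * rng.standard_normal((400, 32, 32))
sw = sliding_window_frames(frames, window=8)             # (393, 32, 32)
dictionary = rng.standard_normal((1000, sw.shape[0]))    # precomputed SW dictionary (assumed)
params = rng.uniform(0.1, 3.0, size=(1000, 2))           # (T1, T2) per atom
maps = match_dictionary(np.moveaxis(sw, 0, -1), dictionary, params)
print(maps.shape)
```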
Automated video-microscopic imaging and data acquisition system for colloid deposition measurements
Abdel-Fattah, Amr I.; Reimus, Paul W.
2004-12-28
A video microscopic visualization system and image processing and data extraction and processing method for in situ detailed quantification of the deposition of sub-micrometer particles onto an arbitrary surface and determination of their concentration across the bulk suspension. The extracted data includes (a) surface concentration and flux of deposited, attached and detached colloids, (b) surface concentration and flux of arriving and departing colloids, (c) distribution of colloids in the bulk suspension in the direction perpendicular to the deposition surface, and (d) spatial and temporal distributions of deposited colloids.
SU-E-J-225: CEST Imaging in Head and Neck Cancer Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Hwang, K; Fuller, C
Purpose: Chemical Exchange Saturation Transfer (CEST) imaging is an MRI technique that enables the detection and imaging of metabolically active compounds in vivo. It has been used to differentiate tumor types and metabolic characteristics. Unlike PET/CT, CEST imaging does not use isotopes, so it can be used on patients repeatedly. This study reports the preliminary results of CEST imaging in Head and Neck cancer (HNC) patients. Methods: A CEST imaging sequence and the post-processing software were developed on a 3T clinical MRI scanner. Ten patients with human papilloma virus-positive oropharyngeal cancer were imaged in their immobilized treatment position. A 5 mm slice CEST image was acquired (128×128, FOV = 20∼24 cm) to encompass the maximum dimension of the tumor. Twenty-nine offset frequencies (from −7.8 ppm to +7.8 ppm) were acquired to obtain the Z-spectrum. Asymmetry analysis was used to extract the CEST contrasts. ROIs at the tumor, nodes and surrounding tissues were measured. Results: CEST images were successfully acquired, and Z-spectrum asymmetry analysis demonstrated clear CEST contrasts in the tumor as well as the surrounding tissues. A 3∼5% CEST contrast in the range of 1 to 4 ppm was noted in tumors as well as grossly involved nodes. Injection of glucose produced a marked increase of CEST contrast in the tumor region (∼10%). Motion and pulsation artifacts tend to smear the CEST contrast, making the interpretation of the image contrast difficult. Field nonuniformity, pulsation in blood vessels and susceptibility artifacts caused by air cavities were also problematic for CEST imaging. Conclusion: We have demonstrated successful CEST acquisition and Z-spectrum reconstruction in HNC patients with a clinical scanner. MRI acquisition in the immobilized treatment position is critical for image quality as well as for the success of CEST image acquisition. CEST images provide novel contrast of metabolites in HNC and present great potential in the pre- and post-treatment assessment of patients undergoing radiation therapy.
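The asymmetry analysis referred to above is conventionally the magnetization transfer ratio asymmetry, MTRasym(Δω) = (S(−Δω) − S(+Δω)) / S0. A small sketch with a synthetic Z-spectrum is given below; the offsets match the ±7.8 ppm range mentioned, but every numerical value is illustrative only.

```python
import numpy as np

def mtr_asymmetry(offsets_ppm, z_spectrum, s0):
    """MTRasym(Δω) = (S(-Δω) - S(+Δω)) / S0, evaluated at the positive offsets.
    offsets_ppm: symmetric, increasing offsets; z_spectrum: same length."""
    offsets = np.asarray(offsets_ppm, dtype=float)
    z = np.asarray(z_spectrum, dtype=float)
    pos = offsets > 0
    asym = []
    for dw in offsets[pos]:
        s_pos = np.interp(dw, offsets, z)
        s_neg = np.interp(-dw, offsets, z)
        asym.append((s_neg - s_pos) / s0)
    return offsets[pos], np.array(asym)

# Illustrative Z-spectrum (29 offsets from -7.8 to +7.8 ppm), values for demonstration only.
offsets = np.linspace(-7.8, 7.8, 29)
z = 1 - 0.8 * np.exp(-(offsets / 1.0) ** 2) - 0.04 * np.exp(-((offsets - 3.5) / 0.8) ** 2)
dw, asym = mtr_asymmetry(offsets, z, s0=1.0)
print(f"peak MTRasym ≈ {asym.max():.1%} at {dw[asym.argmax()]:.1f} ppm")
```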
Wardlaw, Joanna M; Smith, Eric E; Biessels, Geert J; Cordonnier, Charlotte; Fazekas, Franz; Frayne, Richard; Lindley, Richard I; O'Brien, John T; Barkhof, Frederik; Benavente, Oscar R; Black, Sandra E; Brayne, Carol; Breteler, Monique; Chabriat, Hugues; DeCarli, Charles; de Leeuw, Frank-Erik; Doubal, Fergus; Duering, Marco; Fox, Nick C; Greenberg, Steven; Hachinski, Vladimir; Kilimann, Ingo; Mok, Vincent; Oostenbrugge, Robert van; Pantoni, Leonardo; Speck, Oliver; Stephan, Blossom C M; Teipel, Stefan; Viswanathan, Anand; Werring, David; Chen, Christopher; Smith, Colin; van Buchem, Mark; Norrving, Bo; Gorelick, Philip B; Dichgans, Martin
2013-01-01
Summary Cerebral small vessel disease (SVD) is a common accompaniment of ageing. Features seen on neuroimaging include recent small subcortical infarcts, lacunes, white matter hyperintensities, perivascular spaces, microbleeds, and brain atrophy. SVD can present as a stroke or cognitive decline, or can have few or no symptoms. SVD frequently coexists with neurodegenerative disease, and can exacerbate cognitive deficits, physical disabilities, and other symptoms of neurodegeneration. Terminology and definitions for imaging the features of SVD vary widely, which is also true for protocols for image acquisition and image analysis. This lack of consistency hampers progress in identifying the contribution of SVD to the pathophysiology and clinical features of common neurodegenerative diseases. We are an international working group from the Centres of Excellence in Neurodegeneration. We completed a structured process to develop definitions and imaging standards for markers and consequences of SVD. We aimed to achieve the following: first, to provide a common advisory about terms and definitions for features visible on MRI; second, to suggest minimum standards for image acquisition and analysis; third, to agree on standards for scientific reporting of changes related to SVD on neuroimaging; and fourth, to review emerging imaging methods for detection and quantification of preclinical manifestations of SVD. Our findings and recommendations apply to research studies, and can be used in the clinical setting to standardise image interpretation, acquisition, and reporting. This Position Paper summarises the main outcomes of this international effort to provide the STandards for ReportIng Vascular changes on nEuroimaging (STRIVE). PMID:23867200
Acquisition of Stereoscopic Particle Image Velocimetry System for Investigation of Unsteady Flows
2016-04-30
Final Report: Acquisition of Stereoscopic Particle Image Velocimetry (S-PIV) System for Investigation of Unsteady Flows. Reporting period: 1-Feb-2015 to 31-Jan-2016 (report dated 30-04-2016). Sponsor: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709-2211. Distribution Unlimited.
Prazeres, Carlos Eduardo Elias Dos; Magalhães, Tiago Augusto; de Castro Carneiro, Adriano Camargo; Cury, Roberto Caldeira; de Melo Moreira, Valéria; Bello, Juliana Hiromi Silva Matsumoto; Rochitte, Carlos Eduardo
The aim of this study was to compare image quality and radiation dose of coronary computed tomography (CT) angiography performed with dual-source CT scanner using 2 different protocols in patients with atrial fibrillation. Forty-seven patients with AF underwent 2 different acquisition protocols: double high-pitch (DHP) spiral acquisition and retrospective spiral acquisition. The image quality was ranked according to a qualitative score by 2 experts: 1, no evident motion; 2, minimal motion not influencing coronary artery luminal evaluation; and 3, motion with impaired luminal evaluation. A third expert solved any disagreement. A total of 732 segments were evaluated. The DHP group (24 patients, 374 segments) showed more segments classified as score 1 than the retrospective spiral acquisition group (71.3% vs 37.4%). Image quality evaluation agreement was high between observers (κ = 0.8). There was significantly lower radiation exposure for the DHP group (3.65 [1.29] vs 23.57 [10.32] mSv). In this original direct comparison, a DHP spiral protocol for coronary CT angiography acquisition in patients with atrial fibrillation resulted in lower radiation exposure and superior image quality compared with conventional spiral retrospective acquisition.
Comparison of breast percent density estimation from raw versus processed digital mammograms
NASA Astrophysics Data System (ADS)
Li, Diane; Gavenonis, Sara; Conant, Emily; Kontos, Despina
2011-03-01
We compared breast percent density (PD%) measures obtained from raw and post-processed digital mammographic (DM) images. Bilateral raw and post-processed medio-lateral oblique (MLO) images from 81 screening studies were retrospectively analyzed. Image acquisition was performed with a GE Healthcare DS full-field DM system. Image post-processing was performed using the PremiumViewTM algorithm (GE Healthcare). Area-based breast PD% was estimated by a radiologist using a semi-automated image thresholding technique (Cumulus, Univ. Toronto). Comparison of breast PD% between raw and post-processed DM images was performed using the Pearson correlation (r), linear regression, and Student's t-test. Intra-reader variability was assessed with a repeat read on the same data-set. Our results show that breast PD% measurements from raw and post-processed DM images have a high correlation (r=0.98, R2=0.95, p<0.001). Paired t-test comparison of breast PD% between the raw and the post-processed images showed a statistically significant difference equal to 1.2% (p = 0.006). Our results suggest that the relatively small magnitude of the absolute difference in PD% between raw and post-processed DM images is unlikely to be clinically significant in breast cancer risk stratification. Therefore, it may be feasible to use post-processed DM images for breast PD% estimation in clinical settings. Since most breast imaging clinics routinely use and store only the post-processed DM images, breast PD% estimation from post-processed data may accelerate the integration of breast density in breast cancer risk assessment models used in clinical practice.
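The statistical comparison described above (Pearson correlation, linear regression and a paired t-test on per-study PD% values) can be reproduced in a few lines with SciPy. The paired readings below are invented solely to show the calls, not data from the study.

```python
import numpy as np
from scipy import stats

# Hypothetical paired PD% readings (percent) from raw and processed images of the same breasts.
pd_raw = np.array([12.5, 33.1, 18.7, 41.0, 25.2, 9.8, 30.4, 22.6])
pd_processed = np.array([13.4, 34.6, 19.5, 42.8, 26.0, 10.9, 31.1, 24.0])

r, p_r = stats.pearsonr(pd_raw, pd_processed)          # agreement between the two measures
slope, intercept, *_ = stats.linregress(pd_raw, pd_processed)
t, p_t = stats.ttest_rel(pd_raw, pd_processed)         # paired t-test on the mean difference

print(f"r = {r:.3f} (p = {p_r:.3g}); fit: y = {slope:.2f} x + {intercept:.2f}")
print(f"paired t-test: t = {t:.2f}, p = {p_t:.3g}, "
      f"mean diff = {np.mean(pd_processed - pd_raw):.2f}%")
```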
A robust close-range photogrammetric target extraction algorithm for size and type variant targets
NASA Astrophysics Data System (ADS)
Nyarko, Kofi; Thomas, Clayton; Torres, Gilbert
2016-05-01
The Photo-G program conducted by Naval Air Systems Command at the Atlantic Test Range in Patuxent River, Maryland, uses photogrammetric analysis of large amounts of real-world imagery to characterize the motion of objects in a 3-D scene. Current approaches involve several independent processes including target acquisition, target identification, 2-D tracking of image features, and 3-D kinematic state estimation. Each process has its own inherent complications and corresponding degrees of both human intervention and computational complexity. One approach being explored for automated target acquisition relies on exploiting the pixel intensity distributions of photogrammetric targets, which tend to be patterns with bimodal intensity distributions. The bimodal distribution partitioning algorithm utilizes this distribution to automatically deconstruct a video frame into regions of interest (ROI) that are merged and expanded to target boundaries, from which ROI centroids are extracted to mark target acquisition points. This process has proved to be scale, position and orientation invariant, as well as fairly insensitive to global uniform intensity disparities.
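One simple way to exploit a bimodal intensity distribution for automated target acquisition is Otsu thresholding followed by connected-component centroids, sketched below on a synthetic frame. This is a generic stand-in under assumption, not the Photo-G bimodal distribution partitioning algorithm itself.

```python
import numpy as np
from skimage import filters, measure, morphology

def acquire_target_centroids(frame, min_size=25):
    """Exploit the (near) bimodal intensity distribution of photogrammetric
    targets: threshold, clean up, and return region-of-interest centroids."""
    img = frame.astype(float)
    thresh = filters.threshold_otsu(img)      # splits a bimodal histogram
    mask = img > thresh
    mask = morphology.remove_small_objects(mask, min_size=min_size)
    labels = measure.label(mask)
    return [region.centroid for region in measure.regionprops(labels)]

# Synthetic frame: dark background with a few bright circular targets.
frame = np.zeros((240, 320))
yy, xx = np.mgrid[0:240, 0:320]
for cy, cx in [(60, 80), (150, 200), (200, 60)]:
    frame[(yy - cy) ** 2 + (xx - cx) ** 2 < 100] = 1.0
frame += 0.05 * np.random.default_rng(4).standard_normal(frame.shape)

print(acquire_target_centroids(frame))   # approximately the three seeded centres
```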
Torres-Sánchez, Jorge; López-Granados, Francisca; De Castro, Ana Isabel; Peña-Barragán, José Manuel
2013-01-01
A new aerial platform has risen recently for image acquisition, the Unmanned Aerial Vehicle (UAV). This article describes the technical specifications and configuration of a UAV used to capture remote images for early season site- specific weed management (ESSWM). Image spatial and spectral properties required for weed seedling discrimination were also evaluated. Two different sensors, a still visible camera and a six-band multispectral camera, and three flight altitudes (30, 60 and 100 m) were tested over a naturally infested sunflower field. The main phases of the UAV workflow were the following: 1) mission planning, 2) UAV flight and image acquisition, and 3) image pre-processing. Three different aspects were needed to plan the route: flight area, camera specifications and UAV tasks. The pre-processing phase included the correct alignment of the six bands of the multispectral imagery and the orthorectification and mosaicking of the individual images captured in each flight. The image pixel size, area covered by each image and flight timing were very sensitive to flight altitude. At a lower altitude, the UAV captured images of finer spatial resolution, although the number of images needed to cover the whole field may be a limiting factor due to the energy required for a greater flight length and computational requirements for the further mosaicking process. Spectral differences between weeds, crop and bare soil were significant in the vegetation indices studied (Excess Green Index, Normalised Green-Red Difference Index and Normalised Difference Vegetation Index), mainly at a 30 m altitude. However, greater spectral separability was obtained between vegetation and bare soil with the index NDVI. These results suggest that an agreement among spectral and spatial resolutions is needed to optimise the flight mission according to every agronomical objective as affected by the size of the smaller object to be discriminated (weed plants or weed patches). PMID:23483997
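The three vegetation indices evaluated above have simple per-pixel definitions; a sketch is given below, using one common formulation of the Excess Green Index. The band names and the toy data are placeholders for the orthomosaicked UAV bands, and no calibration steps are shown.

```python
import numpy as np

def vegetation_indices(red, green, blue, nir=None):
    """Per-pixel indices used for weed/crop/soil discrimination.
    Inputs are float reflectance (or DN) bands of identical shape."""
    eps = 1e-9
    exg = 2.0 * green - red - blue                      # Excess Green Index (one common form)
    ngrdi = (green - red) / (green + red + eps)         # Normalised Green-Red Difference Index
    out = {"ExG": exg, "NGRDI": ngrdi}
    if nir is not None:
        out["NDVI"] = (nir - red) / (nir + red + eps)   # Normalised Difference Vegetation Index
    return out

# Toy example with random bands standing in for the mosaicked imagery.
rng = np.random.default_rng(5)
r, g, b, n = (rng.random((100, 100)) for _ in range(4))
indices = vegetation_indices(r, g, b, n)
print({k: float(v.mean()) for k, v in indices.items()})
```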
Ozaki, Yuichi; Kitabata, Hironori; Tsujioka, Hiroto; Hosokawa, Seiki; Kashiwagi, Manabu; Ishibashi, Kohei; Komukai, Kenichi; Tanimoto, Takashi; Ino, Yasushi; Takarada, Shigeho; Kubo, Takashi; Kimura, Keizo; Tanaka, Atsushi; Hirata, Kumiko; Mizukoshi, Masato; Imanishi, Toshio; Akasaka, Takashi
2012-01-01
Although an intracoronary frequency-domain optical coherence tomography (FD-OCT) system overcomes several limitations of the time-domain OCT (TD-OCT) system, the former requires injection of contrast media for image acquisition. The increased total amount of contrast media for FD-OCT image acquisition may lead to the impairment of renal function. The safety and usefulness of the non-occlusion method with low-molecular-weight dextran L (LMD-L) via a guiding catheter for TD-OCT image acquisition have been reported previously. The aim of the present study was to compare the image quality and quantitative measurements between contrast media and LMD-L for FD-OCT image acquisition in coronary stented lesions. Twenty-two patients with 25 coronary stented lesions were enrolled in this study. FD-OCT was performed with the continuous-flushing method via a guiding catheter. Both contrast media and LMD-L were infused at a rate of 4 ml/s by an autoinjector. With regard to image quality, the prevalence of clear image segments was comparable between contrast media and LMD-L (97.9% vs. 96.5%, P=0.90). Furthermore, excellent correlations were observed between both flushing solutions in terms of minimum lumen area, mean lumen area, and mean stent area. The total volumes of contrast media and of LMD-L needed for OCT image acquisition were similar. FD-OCT image acquisition with LMD-L has the potential to reduce the total amount of contrast media without loss of image quality.
Phased array ghost elimination
Kellman, Peter; McVeigh, Elliot R.
2007-01-01
Parallel imaging may be applied to cancel ghosts caused by a variety of distortion mechanisms, including distortions such as off-resonance or local flow, which are space variant. Phased array combining coefficients may be calculated that null ghost artifacts at known locations based on a constrained optimization, which optimizes SNR subject to the nulling constraint. The resultant phased array ghost elimination (PAGE) technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The phased array method for ghost elimination may result in greater flexibility in designing acquisition strategies. For example, in multi-shot EPI applications ghosts are typically mitigated by the use of an interleaved phase encode acquisition order. An alternative strategy is to use a sequential, non-interleaved phase encode order and cancel the resultant ghosts using PAGE parallel imaging. Cancellation of ghosts by means of phased array processing makes sequential, non-interleaved phase encode acquisition order practical, and permits a reduction in repetition time, TR, by eliminating the need for echo-shifting. Sequential, non-interleaved phase encode order has benefits of reduced distortion due to off-resonance, in-plane flow and EPI delay misalignment. Furthermore, the use of EPI with PAGE has inherent fat-water separation and has been used to provide off-resonance correction using a technique referred to as lipid elimination with an echo-shifting N/2-ghost acquisition (LEENA), and may be further generalized using the multi-point Dixon method. Other applications of PAGE include cancelling ghosts which arise due to amplitude or phase variation during the approach to steady state. Parallel imaging requires estimates of the complex coil sensitivities. In vivo estimates may be derived by temporally varying the phase encode ordering to obtain a full k-space dataset in a scheme similar to the autocalibrating TSENSE method. This scheme is a generalization of the UNFOLD method used for removing aliasing in undersampled acquisitions. The more general scheme may be used to modulate each EPI ghost image to a separate temporal frequency as described in this paper. PMID:16705636
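A minimal SENSE-style sketch of the nulling idea, solving for combining weights that pass the signal location with unit gain while cancelling the ghost location, is shown below for a single pixel. With white, uncorrelated coil noise this pseudo-inverse solution coincides with the SNR-optimal constrained combination described above (noise-correlation weighting is omitted); it is an illustration rather than the authors' PAGE implementation.

```python
import numpy as np

def ghost_nulling_weights(s_true, s_ghost):
    """Coil-combining weights that pass the signal location with unit gain
    while nulling the N/2-ghost contribution (SENSE-style unfolding).
    s_true, s_ghost: complex coil-sensitivity vectors of length Ncoils."""
    S = np.column_stack([s_true, s_ghost])          # Ncoils x 2 encoding matrix
    unmix = np.linalg.pinv(S)                       # 2 x Ncoils
    return unmix[0]                                 # first row: keeps signal, cancels ghost

# Toy example: 8 coils, one pixel whose measured values mix signal and ghost.
rng = np.random.default_rng(6)
s_true = rng.standard_normal(8) + 1j * rng.standard_normal(8)
s_ghost = rng.standard_normal(8) + 1j * rng.standard_normal(8)
signal, ghost = 1.7 + 0.3j, -0.9 + 1.1j
coil_pixels = signal * s_true + ghost * s_ghost     # per-coil measurements at this pixel

w = ghost_nulling_weights(s_true, s_ghost)
print(np.round(w @ coil_pixels, 6))                 # recovers `signal`; the ghost is nulled
```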
Optimal focal-plane restoration
NASA Technical Reports Server (NTRS)
Reichenbach, Stephen E.; Park, Stephen K.
1989-01-01
Image restoration can be implemented efficiently by calculating the convolution of the digital image and a small kernel during image acquisition. Processing the image in the focal-plane in this way requires less computation than traditional Fourier-transform-based techniques such as the Wiener filter and constrained least-squares filter. Here, the values of the convolution kernel that yield the restoration with minimum expected mean-square error are determined using a frequency analysis of the end-to-end imaging system. This development accounts for constraints on the size and shape of the spatial kernel and all the components of the imaging system. Simulation results indicate the technique is effective and efficient.
Quantitative fluorescence microscopy and image deconvolution.
Swedlow, Jason R
2013-01-01
Quantitative imaging and image deconvolution have become standard techniques for the modern cell biologist because they can form the basis of an increasing number of assays for molecular function in a cellular context. There are two major types of deconvolution approaches--deblurring and restoration algorithms. Deblurring algorithms remove blur but treat a series of optical sections as individual two-dimensional entities and therefore sometimes mishandle blurred light. Restoration algorithms determine an object that, when convolved with the point-spread function of the microscope, could produce the image data. The advantages and disadvantages of these methods are discussed in this chapter. Image deconvolution in fluorescence microscopy has usually been applied to high-resolution imaging to improve contrast and thus detect small, dim objects that might otherwise be obscured. Their proper use demands some consideration of the imaging hardware, the acquisition process, fundamental aspects of photon detection, and image processing. This can prove daunting for some cell biologists, but the power of these techniques has been proven many times in the works cited in the chapter and elsewhere. Their usage is now well defined, so they can be incorporated into the capabilities of most laboratories. A major application of fluorescence microscopy is the quantitative measurement of the localization, dynamics, and interactions of cellular factors. The introduction of green fluorescent protein and its spectral variants has led to a significant increase in the use of fluorescence microscopy as a quantitative assay system. For quantitative imaging assays, it is critical to consider the nature of the image-acquisition system and to validate its response to known standards. Any image-processing algorithms used before quantitative analysis should preserve the relative signal levels in different parts of the image. A very common image-processing algorithm, image deconvolution, is used to remove blurred signal from an image. Copyright © 1998 Elsevier Inc. All rights reserved.
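As a small, hedged example of the restoration class of algorithms discussed above, the snippet below runs Richardson-Lucy deconvolution from scikit-image on a synthetic blurred object. The PSF, object and iteration count are arbitrary, and the parameter name (num_iter) may differ slightly between scikit-image versions.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import restoration

# Hypothetical 2-D example: a point-spread function blurs the "true" object,
# and a restoration algorithm estimates an object consistent with that PSF.
rng = np.random.default_rng(8)
true_obj = np.zeros((96, 96))
true_obj[30, 40] = 1.0            # a dim, point-like structure
true_obj[60:63, 60:63] = 0.5

psf = np.ones((5, 5)) / 25.0      # simple box PSF for illustration
observed = convolve2d(true_obj, psf, mode="same") + 1e-3 * rng.random((96, 96))

# Richardson-Lucy is one widely used restoration (not deblurring) algorithm.
estimate = restoration.richardson_lucy(observed, psf, num_iter=30)
print(float(estimate[30, 40]), float(observed[30, 40]))   # the restored peak is sharper/brighter
```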
A versatile nondestructive evaluation imaging workstation
NASA Technical Reports Server (NTRS)
Chern, E. James; Butler, David W.
1994-01-01
Ultrasonic C-scan and eddy current imaging systems are of the pointwise type evaluation systems that rely on a mechanical scanner to physically maneuver a probe relative to the specimen point by point in order to acquire data and generate images. Since the ultrasonic C-scan and eddy current imaging systems are based on the same mechanical scanning mechanisms, the two systems can be combined using the same PC platform with a common mechanical manipulation subsystem and integrated data acquisition software. Based on this concept, we have developed an IBM PC-based combined ultrasonic C-scan and eddy current imaging system. The system is modularized and provides capacity for future hardware and software expansions. Advantages associated with the combined system are: (1) eliminated duplication of the computer and mechanical hardware, (2) unified data acquisition, processing and storage software, (3) reduced setup time for repetitious ultrasonic and eddy current scans, and (4) improved system efficiency. The concept can be adapted to many engineering systems by integrating related PC-based instruments into one multipurpose workstation such as dispensing, machining, packaging, sorting, and other industrial applications.
A programmable light engine for quantitative single molecule TIRF and HILO imaging.
van 't Hoff, Marcel; de Sars, Vincent; Oheim, Martin
2008-10-27
We report on a simple yet powerful implementation of objective-type total internal reflection fluorescence (TIRF) and highly inclined and laminated optical sheet (HILO, a type of dark-field) illumination. Instead of focusing the illuminating laser beam to a single spot close to the edge of the microscope objective, we scan the focused spot in a circular orbit during the acquisition of a fluorescence image, thereby illuminating the sample from various directions. We measure parameters relevant for quantitative image analysis during fluorescence image acquisition by capturing an image of the excitation light distribution in an equivalent objective back focal plane (BFP). Operating at scan rates above 1 MHz, our programmable light engine allows directional averaging by circularly spinning the spot even for sub-millisecond exposure times. We show that restoring the symmetry of TIRF/HILO illumination reduces scattering and produces an evenly lit field-of-view that affords on-line analysis of evanescent-field excited fluorescence without pre-processing. Utilizing crossed acousto-optical deflectors, our device generates arbitrary intensity profiles in the BFP, permitting variable-angle, multi-color illumination, or objective lenses to be rapidly exchanged.
Galindo, Enrique; Larralde-Corona, C Patricia; Brito, Teresa; Córdova-Aguilar, Ma Soledad; Taboada, Blanca; Vega-Alvarado, Leticia; Corkidi, Gabriel
2005-03-30
Fermentation bioprocesses typically involve two liquid phases (i.e. water and organic compounds) and one gas phase (air), together with suspended solids (i.e. biomass), which are the components to be dispersed. Characterization of multiphase dispersions is required as it determines mass transfer efficiency and bioreactor homogeneity. It is also needed for the appropriate design of contacting equipment, helping in establishing optimum operational conditions. This work describes the development of image analysis-based techniques, with advantages in terms of data acquisition and processing, for the characterization of oil drop and bubble diameters in complex simulated fermentation broths. The system consists of fully digital acquisition of in situ images obtained from the inside of a mixing tank using a CCD camera synchronized with a stroboscopic light source, which are processed with versatile commercial software. To improve the automation of particle recognition and counting, the Hough transform (HT) was used, so bubbles and oil drops were automatically detected and the processing time was reduced by 55% without losing accuracy with respect to a fully manual analysis. The system has been used for the detailed characterization of a number of operational conditions, including oil content, biomass morphology, presence of surfactants (such as proteins) and viscosity of the aqueous phase.
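A circular Hough transform step similar in spirit to the one described (though not the authors' commercial-software pipeline) can be sketched with OpenCV. The image file name and all detector parameters below are placeholders that would need tuning to the actual strobed images.

```python
import cv2
import numpy as np

def detect_bubbles_and_drops(gray, min_r=5, max_r=60):
    """Circular Hough transform to detect bubbles/oil drops and return their
    centres and diameters (in pixels)."""
    blurred = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
        param1=100, param2=30, minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return []
    return [(float(x), float(y), 2.0 * float(r)) for x, y, r in circles[0]]

# Hypothetical dispersion image captured with the strobed CCD setup described above.
img = cv2.imread("dispersion_frame.png", cv2.IMREAD_GRAYSCALE)
if img is not None:
    detections = detect_bubbles_and_drops(img)
    print(f"{len(detections)} circular objects; first few diameters:",
          [round(d, 1) for *_xy, d in detections[:5]])
```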
Underwater Photogrammetry and 3d Reconstruction of Marble Cargos Shipwreck
NASA Astrophysics Data System (ADS)
Balletti, C.; Beltrame, C.; Costa, E.; Guerra, F.; Vernier, P.
2015-04-01
Nowadays, archaeological and architectural surveys are based on the acquisition and processing of point clouds, allowing a high metric precision, which is an essential prerequisite for good documentation. Digital image processing and laser scanning have changed the archaeological survey campaign from a manual, direct survey to a digital one, and multi-image photogrammetry is currently a good solution for underwater archaeology. This technical documentation cannot operate alone: it has to be supported by a topographical survey to georeference all the finds in the same reference system. In recent years the Ca' Foscari and IUAV Universities of Venice have been conducting research on integrated survey techniques to support underwater metric documentation. The paper explains all the phases of the survey design, image acquisition, topographic measurement and data processing for two Roman shipwrecks in southern Sicily. The cargos of the shipwrecks are composed of huge marble blocks, but the sites differ in their morphological characteristics, depth and the distribution of the blocks on the seabed. The photogrammetric and topographical surveys were organized in two distinct ways, especially for the second site, whose depth allowed experimentation with GPS RTK measurements on one shipwreck. Moreover, this kind of three-dimensional documentation is useful for educational and dissemination purposes, owing to the ease with which it can be understood by a wide public.
Mirion--a software package for automatic processing of mass spectrometric images.
Paschke, C; Leisner, A; Hester, A; Maass, K; Guenther, S; Bouschen, W; Spengler, B
2013-08-01
Mass spectrometric imaging (MSI) techniques are of growing interest for the Life Sciences. In recent years, the development of new instruments employing ion sources that are tailored for spatial scanning allowed the acquisition of large data sets. A subsequent data processing, however, is still a bottleneck in the analytical process, as a manual data interpretation is impossible within a reasonable time frame. The transformation of mass spectrometric data into spatial distribution images of detected compounds turned out to be the most appropriate method to visualize the results of such scans, as humans are able to interpret images faster and easier than plain numbers. Image generation, thus, is a time-consuming and complex yet very efficient task. The free software package "Mirion," presented in this paper, allows the handling and analysis of data sets acquired by mass spectrometry imaging. Mirion can be used for image processing of MSI data obtained from many different sources, as it uses the HUPO-PSI-based standard data format imzML, which is implemented in the proprietary software of most of the mass spectrometer companies. Different graphical representations of the recorded data are available. Furthermore, automatic calculation and overlay of mass spectrometric images promotes direct comparison of different analytes for data evaluation. The program also includes tools for image processing and image analysis.
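Mirion itself is not scripted here, but because it reads the open imzML format, the same basic operation, turning a scan into an ion distribution image, can be illustrated with the open-source pyimzML reader (a separate library, assumed available). The file path, m/z values and tolerance below are assumptions for the example.

```python
import numpy as np
from pyimzml.ImzMLParser import ImzMLParser, getionimage

# Open an imzML file (path is hypothetical) and build the spatial distribution
# image of one m/z value -- the core operation that Mirion automates and displays.
parser = ImzMLParser("tissue_section.imzML")
ion_image = getionimage(parser, 760.58, tol=0.25)   # e.g. a lipid ion; values are placeholders

# Overlay/compare two analytes, in the spirit of Mirion's image-overlay feature.
second = getionimage(parser, 885.55, tol=0.25)
ratio = ion_image / np.clip(second, 1e-9, None)
print(ion_image.shape, float(np.nanmax(ratio)))
```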
Single shot laser speckle based 3D acquisition system for medical applications
NASA Astrophysics Data System (ADS)
Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young
2018-06-01
The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts, such as laser line profiling or structured light scanning, require a series of images/frames. Movement of the patients during the scanning process often leads to inaccurate measurements due to the sequential image acquisition. Single-shot structured techniques are robust to motion, but the prevalent challenges in single-shot structured light methods are low density and algorithm complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems, which realize stereo correspondence by digital correlation of the projected speckle patterns, the proposed system employs the KLT tracking method to locate the corresponding points. The 3D point cloud contains no outliers and a sufficient quality of 3D reconstruction is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
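A skeletal version of the KLT-plus-triangulation pipeline is sketched below with OpenCV. The projection matrices, file names and tracker parameters are placeholders (real values come from stereo calibration), so this shows the general data flow rather than the authors' system.

```python
import cv2
import numpy as np

def speckle_stereo_points(img_left, img_right, P_left, P_right, n_features=2000):
    """Track speckle features from the left to the right image with KLT and
    triangulate them into a 3-D point cloud (single shot, one image pair)."""
    pts_left = cv2.goodFeaturesToTrack(img_left, maxCorners=n_features,
                                       qualityLevel=0.01, minDistance=5)
    pts_right, status, _ = cv2.calcOpticalFlowPyrLK(img_left, img_right,
                                                    pts_left, None,
                                                    winSize=(21, 21), maxLevel=3)
    ok = status.ravel() == 1
    pl = pts_left[ok].reshape(-1, 2).T.astype(np.float64)   # 2 x N
    pr = pts_right[ok].reshape(-1, 2).T.astype(np.float64)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)    # 4 x N homogeneous
    return (X_h[:3] / X_h[3]).T                              # N x 3 points

# P_left / P_right are 3x4 projection matrices from prior stereo calibration
# (hypothetical values shown; real ones come from cv2.stereoCalibrate).
P_left = np.hstack([np.eye(3), np.zeros((3, 1))])
P_right = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
left = cv2.imread("speckle_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
right = cv2.imread("speckle_right.png", cv2.IMREAD_GRAYSCALE)
if left is not None and right is not None:
    cloud = speckle_stereo_points(left, right, P_left, P_right)
    print(cloud.shape)
```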
Froeling, Martijn; Tax, Chantal M W; Vos, Sjoerd B; Luijten, Peter R; Leemans, Alexander
2017-05-01
In this work, we present the MASSIVE (Multiple Acquisitions for Standardization of Structural Imaging Validation and Evaluation) brain dataset of a single healthy subject, which is intended to facilitate diffusion MRI (dMRI) modeling and methodology development. MRI data of one healthy subject (female, 25 years) were acquired on a clinical 3 Tesla system (Philips Achieva) with an eight-channel head coil. In total, the subject was scanned on 18 different occasions with a total acquisition time of 22.5 h. The dMRI data were acquired with an isotropic resolution of 2.5 mm³ and distributed over five shells with b-values up to 4000 s/mm² and two Cartesian grids with b-values up to 9000 s/mm². The final dataset consists of 8000 dMRI volumes, corresponding B0 field maps and noise maps for subsets of the dMRI scans, and ten three-dimensional FLAIR, T1-, and T2-weighted scans. The average signal-to-noise ratio of the non-diffusion-weighted images was roughly 35. This unique set of in vivo MRI data will provide a robust framework to evaluate novel diffusion processing techniques and to reliably compare different approaches for diffusion modeling. The MASSIVE dataset is made publicly available (both unprocessed and processed) on www.massive-data.org. Magn Reson Med 77:1797-1809, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
3D acquisition and modeling for flint artefacts analysis
NASA Astrophysics Data System (ADS)
Loriot, B.; Fougerolle, Y.; Sestier, C.; Seulin, R.
2007-07-01
In this paper, we are interested in accurate acquisition and modeling of flint artefacts. Archaeologists need accurate geometry measurements to refine their understanding of the flint artefact manufacturing process. Current techniques require several operations. First, a copy of a flint artefact is reproduced. The copy is then sliced, and a picture is taken of each slice. Finally, geometric information is manually determined from the pictures. Such a technique is very time consuming, and the processing applied to the original, as well as to the reproduced object, introduces several measurement errors (prototyping approximations, slicing, image acquisition, and measurement). By using 3D scanners, we significantly reduce the number of operations related to data acquisition and completely eliminate the prototyping step, obtaining an accurate 3D model. The 3D models are segmented into sliced parts that are then analyzed. Each slice is then automatically fitted with a mathematical representation. Such a representation offers several interesting properties: geometric features can be characterized (e.g., shapes, curvature, sharp edges), and the shape of the original piece of stone can be extrapolated. The contributions of this paper are an acquisition technique using 3D scanners that strongly reduces human intervention, acquisition time and measurement errors, and the representation of flint artefacts as mathematical 2D sections that enables accurate analysis.
NASA Astrophysics Data System (ADS)
Mang, Ou-Yang; Ko, Mei Lan; Tsai, Yi-Chun; Chiou, Jin-Chern; Huang, Ting-Wei
2016-03-01
The pupil response to light can reflect various diseases related to physiological health. Pupillary abnormalities may be caused by autonomic neuropathy, glaucoma, diabetes, genetic diseases, and high myopia. In its early stage, neuropathy is often asymptomatic and difficult for ophthalmologists to detect. In addition, the position of the injured nerve can lead to unsynchronized pupil responses of the two eyes. In our study, we designed a pupillometer to measure the binocular pupil response simultaneously. It uses LEDs of different colors, such as white, red, green and blue, to stimulate the pupil and records the process. The pupillometer therefore mainly contains two systems. One is the image acquisition system, which uses two camera modules driven by the same external trigger signal to capture images of both pupils simultaneously. The other is the illumination system, which uses boost converter ICs and LED driver ICs to supply a constant current to the LEDs, maintaining consistent luminance in each experiment and reducing experimental error. Furthermore, four infrared LEDs are arranged near the stimulating LEDs to illuminate the eyes and increase image contrast for image processing. With this design, we successfully implemented synchronized image acquisition at a sampling rate of 30 fps and a stable illumination system for precise measurement.
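As a sketch of the per-frame measurement such a system might perform (the paper does not describe its image processing at this level), the following Python/OpenCV snippet estimates the pupil diameter in one infrared-illuminated eye image with a Hough circle transform; all parameter values are illustrative assumptions.

    import cv2

    def pupil_diameter(frame_gray):
        """Sketch: pupil diameter in pixels from one grayscale eye frame,
        assuming the pupil appears as a dark, roughly circular region."""
        blur = cv2.medianBlur(frame_gray, 5)            # suppress sensor noise
        circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1.5, minDist=100,
                                   param1=80, param2=30, minRadius=10, maxRadius=120)
        if circles is None:
            return None                                  # no circular candidate found
        x, y, r = circles[0, 0]                          # first detected circle
        return 2.0 * r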
SNR-optimized phase-sensitive dual-acquisition turbo spin echo imaging: a fast alternative to FLAIR.
Lee, Hyunyeol; Park, Jaeseok
2013-07-01
Phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo imaging was recently introduced, producing high-resolution isotropic cerebrospinal-fluid-attenuated brain images without a long inversion recovery preparation. Despite its advantages, the weighted-averaging-based technique suffers from noise amplification resulting from different levels of cerebrospinal fluid signal modulation over the two acquisitions. The purpose of this work is to develop a signal-to-noise-ratio-optimized version of the phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo. Variable refocusing flip angles in the first acquisition are calculated using a three-step prescribed signal evolution, while those in the second acquisition are calculated using a two-step pseudo-steady-state signal transition with a high-flip-angle pseudo-steady state at a later portion of the echo train, balancing the levels of cerebrospinal fluid signal in both acquisitions. Low spatial frequency signals are sampled during the high-flip-angle pseudo-steady state to further suppress noise. Numerical simulations of the Bloch equations were performed to evaluate the signal evolution of brain tissues along the echo train and to optimize imaging parameters. In vivo studies demonstrate that, compared with conventional phase-sensitive dual-acquisition single-slab three-dimensional turbo spin echo, the proposed optimization yields a 74% increase in apparent signal-to-noise ratio for gray matter and a 32% decrease in imaging time. The proposed method can be a potential alternative to conventional fluid-attenuated imaging. Copyright © 2012 Wiley Periodicals, Inc.
Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti
2013-01-01
Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...
Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions
Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner
2016-01-01
In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human error. In recent years, research has therefore focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity, and reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time kinematic GPS system for positioning, and hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered from the classification results, yielding the number of grape bunches and berries and the berry diameter. PMID:27983669
Towards Automated Large-Scale 3D Phenotyping of Vineyards under Field Conditions.
Rose, Johann Christian; Kicherer, Anna; Wieland, Markus; Klingbeil, Lasse; Töpfer, Reinhard; Kuhlmann, Heiner
2016-12-15
In viticulture, phenotypic data are traditionally collected directly in the field via visual and manual means by an experienced person. This approach is time consuming, subjective and prone to human error. In recent years, research has therefore focused strongly on developing automated and non-invasive sensor-based methods to increase data acquisition speed, enhance measurement accuracy and objectivity, and reduce labor costs. While many 2D methods based on image processing have been proposed for field phenotyping, only a few 3D solutions are found in the literature. A track-driven vehicle consisting of a camera system, a real-time kinematic GPS system for positioning, and hardware for vehicle control, image storage and acquisition is used to visually capture a whole vine row canopy with georeferenced RGB images. In the first post-processing step, these images were used within a multi-view-stereo software to reconstruct a textured 3D point cloud of the whole grapevine row. A classification algorithm is then used in the second step to automatically classify the raw point cloud data into the semantic plant components, grape bunches and canopy. In the third step, phenotypic data for the semantic objects are gathered from the classification results, yielding the number of grape bunches and berries and the berry diameter.
Vulgarakis Minov, Sofija; Cointault, Frédéric; Vangeyte, Jürgen; Pieters, Jan G; Nuyttens, David
2016-01-01
Accurate spray characterization helps to better understand the pesticide spray application process. The goal of this research was to present the proof of principle of a droplet size and velocity measuring technique for different types of hydraulic spray nozzles using a high speed backlight image acquisition and analysis system. As only part of the drops of an agricultural spray can be in focus at any given moment, an in-focus criterion based on the gray level gradient was proposed to decide whether a given droplet is in focus or not. In a first experiment, differently sized droplets were generated with a piezoelectric generator and studied to establish the relationship between size and in-focus characteristics. In a second experiment, it was demonstrated that droplet sizes and velocities from a real sprayer could be measured reliably in a non-intrusive way using the newly developed image acquisition set-up and image processing. Measured droplet sizes ranged from 24 μm to 543 μm, depending on the nozzle type and size. Droplet velocities ranged from around 0.5 m/s to 12 m/s. The droplet size and velocity results were compared and related well with the results obtained with a Phase Doppler Particle Analyzer (PDPA). PMID:26861338
Minov, Sofija Vulgarakis; Cointault, Frédéric; Vangeyte, Jürgen; Pieters, Jan G; Nuyttens, David
2016-02-06
Accurate spray characterization helps to better understand the pesticide spray application process. The goal of this research was to present the proof of principle of a droplet size and velocity measuring technique for different types of hydraulic spray nozzles using a high speed backlight image acquisition and analysis system. As only part of the drops of an agricultural spray can be in focus at any given moment, an in-focus criterion based on the gray level gradient was proposed to decide whether a given droplet is in focus or not. In a first experiment, differently sized droplets were generated with a piezoelectric generator and studied to establish the relationship between size and in-focus characteristics. In a second experiment, it was demonstrated that droplet sizes and velocities from a real sprayer could be measured reliably in a non-intrusive way using the newly developed image acquisition set-up and image processing. Measured droplet sizes ranged from 24 μm to 543 μm, depending on the nozzle type and size. Droplet velocities ranged from around 0.5 m/s to 12 m/s. The droplet size and velocity results were compared and related well with the results obtained with a Phase Doppler Particle Analyzer (PDPA).
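A minimal sketch of a gray-level-gradient in-focus criterion of the kind described above might look as follows in Python/OpenCV; the gradient measure and the threshold value are illustrative assumptions, not the relationship actually calibrated in the paper.

    import cv2
    import numpy as np

    def in_focus(droplet_roi, threshold=25.0):
        """Sketch: decide whether a segmented droplet region is in focus from
        the mean gray-level gradient magnitude (Sobel) over the region."""
        gx = cv2.Sobel(droplet_roi, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(droplet_roi, cv2.CV_64F, 0, 1, ksize=3)
        grad = np.sqrt(gx ** 2 + gy ** 2)   # sharp (in-focus) edges give large gradients
        return grad.mean() >= threshold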
Age of Acquisition and Imageability: A Cross-Task Comparison
ERIC Educational Resources Information Center
Ploetz, Danielle M.; Yates, Mark
2016-01-01
Previous research has reported an imageability effect on visual word recognition. Words that are high in imageability are recognised more rapidly than are those lower in imageability. However, later researchers argued that imageability was confounded with age of acquisition. In the current research, these two factors were manipulated in a…
Development of integrated control system for smart factory in the injection molding process
NASA Astrophysics Data System (ADS)
Chung, M. J.; Kim, C. Y.
2018-03-01
In this study, we propose an integrated control system for automation of the injection molding process required for the construction of a smart factory. The injection molding process consists of heating, tool close, injection, cooling, tool open, and take-out. A take-out robot controller, an image processing module, and a process data acquisition interface module were developed and assembled into the integrated control system. By adopting the integrated control system, the injection molding process can be simplified and the cost of constructing a smart factory can be kept low.
Zhang, Jian; Niu, Xin; Yang, Xue-zhi; Zhu, Qing-wen; Li, Hai-yan; Wang, Xuan; Zhang, Zhi-guo; Sha, Hong
2014-09-01
To design a pulse information acquisition and analysis system with dynamic recognition covering the parameters of pulse position, pulse number, pulse shape and pulse force, and to research the digitalization and visualization of common cardiovascular mechanisms of the single pulse. Flexible sensors were used to capture the radial artery pressure pulse wave, and high-frequency B-mode ultrasound scanning was used to synchronously obtain information on radial extension and axial movement in the form of dynamic images; the gathered information was then analyzed and processed together with the ECG. Finally, a pulse information acquisition and analysis system with visualization and dynamic recognition was established and applied to ten healthy adults. The new system overcomes the disadvantages of the one-dimensional pulse information acquisition and processing methods commonly used in current research on pulse diagnosis in traditional Chinese medicine, and initiates a new approach to pulse diagnosis featuring dynamic recognition, two-dimensional information acquisition, multiplex signal combination and deep data mining. The newly developed system can translate pulse signals into digital, visual and measurable motion information of the vessel.
Image processing system for the measurement of timber truck loads
NASA Astrophysics Data System (ADS)
Carvalho, Fernando D.; Correia, Bento A. B.; Davies, Roger; Rodrigues, Fernando C.; Freitas, Jose C. A.
1993-01-01
The paper industry uses wood as its raw material. To determine the quantity of wood in a pile of sawn tree trunks, every truck load entering the plant is measured for volume; the objective of this procedure is to know the solid volume of wood stocked in the plant. Weighing the tree trunks is problematic because of their high capacity for absorbing water. Image processing techniques were therefore used to evaluate the volume of a truck load of logs. The system is based on a PC equipped with an image processing board using data-flow processors. Three cameras allow image acquisition of the sides and rear of the truck: the lateral images contain information about the sectional area of the logs, and the rear image contains information about their length. The machine vision system and the implemented algorithms are described. The results being obtained with the industrial prototype now installed in a paper mill are also presented.
NASA Astrophysics Data System (ADS)
Park, Joong Yong; Tuell, Grady
2010-04-01
The Data Processing System (DPS) of the Coastal Zone Mapping and Imaging Lidar (CZMIL) has been designed to automatically produce a number of novel environmental products through the fusion of Lidar, spectrometer, and camera data in a single software package. These new products significantly transcend use of the system as a bathymeter, and support use of CZMIL as a complete coastal and benthic mapping tool. The DPS provides a spinning globe capability for accessing data files; automated generation of combined topographic and bathymetric point clouds; a fully-integrated manual editor and data analysis tool; automated generation of orthophoto mosaics; automated generation of reflectance data cubes from the imaging spectrometer; a coupled air-ocean spectral optimization model producing images of chlorophyll and CDOM concentrations; and a fusion based capability to produce images and classifications of the shallow water seafloor. Adopting a multitasking approach, we expect to achieve computation of the point clouds, DEMs, and reflectance images at a 1:1 processing to acquisition ratio.
Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F
2011-04-01
To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging in multiple-coil acquisitions. Without a separate field map acquisition, a phase estimate was generated from each echo in the multiple-echo train. When a multiple-channel coil is used, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast three-dimensional MRI data acquisition. Copyright © 2011 Wiley-Liss, Inc.
Jung, Youngkyoo; Samsonov, Alexey A; Bydder, Mark; Block, Walter F.
2011-01-01
Purpose To remove phase inconsistencies between multiple echoes, an algorithm using a radial acquisition to provide inherent phase and magnitude information for self-correction was developed. The information also allows simultaneous support for parallel imaging in multiple-coil acquisitions. Materials and Methods Without a separate field map acquisition, a phase estimate was generated from each echo in the multiple-echo train. When a multiple-channel coil is used, magnitude and phase estimates from each echo provide in vivo coil sensitivities. An algorithm based on the conjugate gradient method uses these estimates to simultaneously remove phase inconsistencies between echoes and, in the case of multiple-coil acquisition, simultaneously provides parallel imaging benefits. The algorithm is demonstrated on single-channel, multiple-channel, and undersampled data. Results Substantial image quality improvements were demonstrated. Signal dropouts were completely removed and undersampling artifacts were well suppressed. Conclusion The suggested algorithm is able to remove phase cancellation and undersampling artifacts simultaneously and to improve the image quality of multiecho radial imaging, an important technique for fast 3D MRI data acquisition. PMID:21448967
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing, from data acquisition to obtaining 3D images. In order to optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources, we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium, to determine an initial-order approximation of the photon fluence; we subsequently applied a novel iterative deconvolution method to obtain the final reconstruction result. We find that the reconstruction techniques successfully used the depth-dependent light transport approach and semi-automated image processing to provide a realistic 3D model of the lung tumor. Our image processing software can optimize and decrease the time required for volumetric imaging and quantitative assessment. The data obtained from the light phantom and lung mouse tumor images demonstrate the utility of the image reconstruction algorithms and the semi-automated approach to the bioluminescent image processing procedure. We suggest that the developed image processing approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
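The paper's iterative deconvolution method is not specified in detail here; as a generic stand-in, the sketch below implements a standard Richardson-Lucy iteration in Python for a 2D image and a known point spread function, which illustrates the general idea of iteratively refining an estimate against the measured data.

    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
        """Sketch: generic Richardson-Lucy iterative deconvolution (2D),
        standing in for the paper's (unspecified) iterative method."""
        estimate = np.full_like(observed, observed.mean(), dtype=float)
        psf_flip = psf[::-1, ::-1]                       # mirrored PSF for the correction step
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode='same')
            ratio = observed / (blurred + eps)           # data / model mismatch
            estimate *= fftconvolve(ratio, psf_flip, mode='same')
        return estimate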
Free-running ADC- and FPGA-based signal processing method for brain PET using GAPD arrays
NASA Astrophysics Data System (ADS)
Hu, Wei; Choi, Yong; Hong, Key Jo; Kang, Jihoon; Jung, Jin Ho; Huh, Youn Suk; Lim, Hyun Keong; Kim, Sang Su; Kim, Byung-Tae; Chung, Yonghyun
2012-02-01
Currently, for most photomultiplier tube (PMT)-based PET systems, constant fraction discriminators (CFD) and time-to-digital converters (TDC) have been employed to detect the gamma ray signal arrival time, whereas Anger logic circuits and peak-detection analog-to-digital converters (ADCs) have been implemented to acquire position and energy information of detected events. Compared to PMTs, Geiger-mode avalanche photodiodes (GAPDs) have a variety of advantages, such as compactness, low bias voltage requirement and MRI compatibility. Furthermore, the individual read-out method using a GAPD array coupled 1:1 with an array scintillator can provide better image uniformity than can be achieved using PMTs and Anger logic circuits. Recently, a brain PET using 72 GAPD arrays (4×4 array, pixel size: 3 mm×3 mm) coupled 1:1 with LYSO scintillators (4×4 array, pixel size: 3 mm×3 mm×20 mm) has been developed for simultaneous PET/MRI imaging in our laboratory. Eighteen 64:1 position decoder circuits (PDCs) were used to reduce the number of GAPD channels, and three off-the-shelf free-running ADC and field-programmable gate array (FPGA) combined data acquisition (DAQ) cards were used for data acquisition and processing. In this study, a free-running ADC- and FPGA-based signal processing method was developed to detect the gamma ray signal arrival time, energy and position information together for each GAPD channel. In the developed method, three DAQ cards continuously acquire 18 channels of pre-amplified analog gamma ray signals and 108-bit digital addresses from the 18 PDCs. In the FPGA, the digitized gamma ray pulses and digital addresses are processed to generate data packages containing the pulse arrival time, baseline value, energy value and GAPD channel ID. Finally, these data packages are saved to a 128 Mbyte on-board synchronous dynamic random access memory (SDRAM) and then transferred to a host computer for coincidence sorting and image reconstruction. In order to evaluate the functionality of the developed signal processing method, the energy and timing resolutions of the brain PET were measured by placing a 6 μCi 22Na point source at the center of the PET scanner. Furthermore, the PET image of a hot rod phantom (rod diameter: 2.5 mm to 6.5 mm) with an activity of 1 mCi was simulated, and an image acquisition experiment was then performed using the brain PET. The measured average energy resolution for the 1152 GAPD channels and the system timing resolution were 19.5% (FWHM) and 2.7 ns (FWHM), respectively. In the acquired hot rod phantom image, rods could be resolved down to a diameter of 2.5 mm, similar to the simulated results. The experimental results demonstrate that the signal processing method developed herein was successfully implemented for brain PET. This reduces the complexity, cost and development time of the PET system relative to conventional PET electronics, and it will be useful for the development of high-performance investigational PET systems.
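As a simplified illustration of the kind of per-channel pulse processing such an FPGA pipeline performs (baseline, energy and arrival-time extraction from a digitized trace), the Python sketch below operates on one free-running ADC pulse; the pre-trigger sample count and the constant-fraction value are assumptions, not the authors' firmware parameters.

    import numpy as np

    def pulse_features(samples, sample_period_ns, n_baseline=20, frac=0.5):
        """Sketch: baseline, energy and arrival time from one digitized GAPD pulse."""
        baseline = samples[:n_baseline].mean()          # pre-trigger baseline estimate
        pulse = samples - baseline
        energy = pulse.sum()                            # pulse integral as energy proxy
        peak = pulse.max()
        # Leading-edge time pick-off at a constant fraction of the peak height,
        # with linear interpolation between samples for sub-sample precision.
        above = np.nonzero(pulse >= frac * peak)[0][0]
        if above == 0:
            t_cross = 0.0
        else:
            y0, y1 = pulse[above - 1], pulse[above]
            t_cross = (above - 1) + (frac * peak - y0) / max(y1 - y0, 1e-12)
        return baseline, energy, t_cross * sample_period_ns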
NASA Astrophysics Data System (ADS)
Jackson, Edward F.
2016-04-01
Over the past decade, there has been an increasing focus on quantitative imaging biomarkers (QIBs), which are defined as "objectively measured characteristics derived from in vivo images as indicators of normal biological processes, pathogenic processes, or response to a therapeutic intervention"1. To evolve qualitative imaging assessments to the use of QIBs requires the development and standardization of data acquisition, data analysis, and data display techniques, as well as appropriate reporting structures. As such, successful implementation of QIB applications relies heavily on expertise from the fields of medical physics, radiology, statistics, and informatics as well as collaboration from vendors of imaging acquisition, analysis, and reporting systems. When successfully implemented, QIBs will provide image-derived metrics with known bias and variance that can be validated with anatomically and physiologically relevant measures, including treatment response (and the heterogeneity of that response) and outcome. Such non-invasive quantitative measures can then be used effectively in clinical and translational research and will contribute significantly to the goals of precision medicine. This presentation will focus on 1) outlining the opportunities for QIB applications, with examples to demonstrate applications in both research and patient care, 2) discussing key challenges in the implementation of QIB applications, and 3) providing overviews of efforts to address such challenges from federal, scientific, and professional organizations, including, but not limited to, the RSNA, NCI, FDA, and NIST. 1Sullivan, Obuchowski, Kessler, et al. Radiology, epub August 2015.
Anatomic modeling using 3D printing: quality assurance and optimization.
Leng, Shuai; McGee, Kiaran; Morris, Jonathan; Alexander, Amy; Kuhlmann, Joel; Vrieze, Thomas; McCollough, Cynthia H; Matsumoto, Jane
2017-01-01
The purpose of this study is to provide a framework for the development of a quality assurance (QA) program for use in medical 3D printing applications. An interdisciplinary QA team was built with expertise from all aspects of 3D printing. A systematic QA approach was established to assess the accuracy and precision of each step during the 3D printing process, including: image data acquisition, segmentation and processing, and 3D printing and cleaning. Validation of printed models was performed by qualitative inspection and quantitative measurement. The latter was achieved by scanning the printed model with a high resolution CT scanner to obtain images of the printed model, which were registered to the original patient images and the distance between them was calculated on a point-by-point basis. A phantom-based QA process, with two QA phantoms, was also developed. The phantoms went through the same 3D printing process as that of the patient models to generate printed QA models. Physical measurement, fit tests, and image based measurements were performed to compare the printed 3D model to the original QA phantom, with its known size and shape, providing an end-to-end assessment of errors involved in the complete 3D printing process. Measured differences between the printed model and the original QA phantom ranged from -0.32 mm to 0.13 mm for the line pair pattern. For a radial-ulna patient model, the mean distance between the original data set and the scanned printed model was -0.12 mm (ranging from -0.57 to 0.34 mm), with a standard deviation of 0.17 mm. A comprehensive QA process from image acquisition to completed model has been developed. Such a program is essential to ensure the required accuracy of 3D printed models for medical applications.
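A minimal sketch of the point-by-point comparison step, assuming the scanned printed model and the original patient-derived model have already been registered into a common coordinate frame, could use nearest-neighbour distances between the two point sets; the study reports signed distances, so this unsigned version is only illustrative.

    import numpy as np
    from scipy.spatial import cKDTree

    def point_to_model_distances(reference_pts, printed_pts):
        """Sketch: per-point distances from the scanned printed-model surface points
        (printed_pts, M x 3) to the original model surface points (reference_pts, N x 3)."""
        tree = cKDTree(reference_pts)          # spatial index over the reference surface
        d, _ = tree.query(printed_pts)         # nearest-neighbour distance for each printed point
        return d.mean(), d.std(), d.min(), d.max()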
PScan 1.0: flexible software framework for polygon based multiphoton microscopy
NASA Astrophysics Data System (ADS)
Li, Yongxiao; Lee, Woei Ming
2016-12-01
Multiphoton laser scanning microscopes exhibit highly localized nonlinear optical excitation and are powerful instruments for in-vivo deep tissue imaging. Customized multiphoton microscopy offers significantly superior performance for in-vivo imaging because of precise control over the scanning and detection system. To date, several flexible software platforms have catered to custom-built microscopy systems, e.g. ScanImage, HelioScan and MicroManager, which perform at imaging speeds of 30-100 fps. In this paper, we describe a flexible software framework for high-speed imaging systems capable of operating from 5 fps to 1600 fps. The software is based on the MATLAB image processing toolbox. It can communicate directly with a high-performance imaging card (Matrox Solios eA/XA), thus retaining high-speed acquisition. The program is also designed to communicate with LabVIEW and Fiji for instrument control and image processing. PScan 1.0 can handle high imaging rates and offers sufficient flexibility for users to adapt it to their high-speed imaging systems.
Yazbek, Sandrine; Prabhu, Sanjay P; Connaughton, Pauline; Grant, Patricia E; Gagoski, Borjan
2015-08-01
Single-voxel spectroscopy (SVS) is usually used in the pediatric population when a short acquisition time is crucial. To overcome the long acquisition time of 3-D phase-encoded chemical shift imaging (CSI) and the lack of spatial coverage of single-voxel spectroscopy, efficient encoding schemes using spiral k-space trajectories have been successfully deployed, enabling acquisition of volumetric CSI in <5 min. We assessed the feasibility of using the 3-D spiral CSI sequence routinely in pediatric clinical settings by comparing its reconstructed spectra against SVS spectra. Volumetric spiral CSI obtained spectra from 2-cc isotropic voxels over a 16×16×10-cm region. The SVS acquisition encoded a 3.4-cc (1.5-cm) isotropic voxel. Acquisition time was 3 min for each technique. Data were gathered prospectively from 11 random pediatric patients. Spectra from the left basal ganglia were obtained using both techniques and were processed with post-processing software. The following metabolite ratios were calculated: N-acetylaspartate/creatine (NAA/Cr), choline/creatine (Cho/Cr), lactate/creatine (Lac/Cr) and N-acetylaspartate/choline (NAA/Cho). We collected data on 11 children aged 4 days to 10 years. In 10/11 cases, the spectral quality of both methods was acceptable. Considering these 10/11 cases, we found a statistically significant difference between SVS and 3-D spiral CSI for all three ratios. However, this difference was fixed and was probably caused by a fixed bias. This means that 3-D spiral CSI can be used instead of SVS by removing the mean difference between the methods for each ratio. Accelerated 3-D CSI is feasible in pediatric patients and can potentially substitute for SVS.
Using Fourier transform IR spectroscopy to analyze biological materials
Baker, Matthew J; Trevisan, Júlio; Bassan, Paul; Bhargava, Rohit; Butler, Holly J; Dorling, Konrad M; Fielden, Peter R; Fogarty, Simon W; Fullwood, Nigel J; Heys, Kelly A; Hughes, Caryn; Lasch, Peter; Martin-Hirsch, Pierre L; Obinaju, Blessing; Sockalingum, Ganesh D; Sulé-Suso, Josep; Strong, Rebecca J; Walsh, Michael J; Wood, Bayden R; Gardner, Peter; Martin, Francis L
2015-01-01
IR spectroscopy is an excellent method for biological analyses. It enables the nonperturbative, label-free extraction of biochemical information and images toward diagnosis and the assessment of cell functionality. Although not strictly microscopy in the conventional sense, it allows the construction of images of tissue or cell architecture by the passing of spectral data through a variety of computational algorithms. Because such images are constructed from fingerprint spectra, the notion is that they can be an objective reflection of the underlying health status of the analyzed sample. One of the major difficulties in the field has been determining a consensus on spectral pre-processing and data analysis. This manuscript brings together as coauthors some of the leaders in this field to allow the standardization of methods and procedures for adapting a multistage approach to a methodology that can be applied to a variety of cell biological questions or used within a clinical setting for disease screening or diagnosis. We describe a protocol for collecting IR spectra and images from biological samples (e.g., fixed cytology and tissue sections, live cells or biofluids) that assesses the instrumental options available, appropriate sample preparation, different sampling modes as well as important advances in spectral data acquisition. After acquisition, data processing consists of a sequence of steps including quality control, spectral pre-processing, feature extraction and classification of the supervised or unsupervised type. A typical experiment can be completed and analyzed within hours. Example results are presented on the use of IR spectra combined with multivariate data processing. PMID:24992094
Pandit, Prachi; Rivoire, Julien; King, Kevin; Li, Xiaojuan
2016-03-01
Quantitative T1ρ imaging is beneficial for early detection of osteoarthritis but has seen limited clinical use due to long scan times. In this study, we evaluated the feasibility of accelerated T1ρ mapping for knee cartilage quantification using a combination of compressed sensing (CS) and data-driven parallel imaging (ARC: Autocalibrating Reconstruction for Cartesian sampling). A sequential combination of ARC and CS, both during data acquisition and reconstruction, was used to accelerate the acquisition of T1ρ maps. Phantom, ex vivo (porcine knee), and in vivo (human knee) imaging was performed on a GE 3T MR750 scanner. T1ρ quantification after CS-accelerated acquisition was compared with non-CS-accelerated acquisition for various cartilage compartments. Accelerating image acquisition using CS did not introduce major deviations in quantification. The coefficient of variation of the root mean squared error increased with increasing acceleration, but for in vivo measurements it stayed under 5% for a net acceleration factor up to 2, where the acquisition was 25% faster than the reference (ARC only). To the best of our knowledge, this is the first implementation of CS for in vivo T1ρ quantification. These early results show that this technique holds great promise in making quantitative imaging techniques more accessible for clinical applications. © 2015 Wiley Periodicals, Inc.
A Method to Measure and Estimate Normalized Contrast in Infrared Flash Thermography
NASA Technical Reports Server (NTRS)
Koshti, Ajay M.
2016-01-01
The paper presents further development of the normalized contrast processing used in the flash infrared thermography method. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, along with methods of converting image contrast to temperature contrast and vice versa. Normalized contrast processing in flash thermography is useful in quantitative analysis of flash thermography data, including flaw characterization and comparison of experimental results with simulation. Computation of normalized temperature contrast requires a flash thermography data acquisition set-up with a high-reflectivity foil and high-emissivity tape such that the foil, tape and test object are imaged simultaneously. Methods of assessing other quantitative parameters, such as the emissivity of the object, afterglow heat flux, reflection temperature change and surface temperature during flash thermography, are also provided. Temperature imaging and normalized temperature contrast processing provide certain advantages over normalized image contrast processing by reducing the effect of reflected energy in images and measurements, therefore providing better quantitative data. Examples of incorporating afterglow heat flux and reflection temperature evolution in flash thermography simulation are also discussed.
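One common way to compute a normalized pixel-intensity contrast from a flash-thermography image stack is sketched below in Python; the exact definition used in the paper may differ, and the pre-flash frame count and reference-pixel choice are assumptions for illustration.

    import numpy as np

    def normalized_pixel_contrast(frames, defect_px, ref_px, n_preflash):
        """Sketch: normalized pixel-intensity contrast over time.

        frames: (time, rows, cols) image stack; defect_px / ref_px: (row, col) indices
        of a pixel over the suspected flaw and a sound (reference) region."""
        i_def = frames[:, defect_px[0], defect_px[1]].astype(float)
        i_ref = frames[:, ref_px[0], ref_px[1]].astype(float)
        i_def -= i_def[:n_preflash].mean()     # subtract pre-flash (cold) level
        i_ref -= i_ref[:n_preflash].mean()
        return (i_def - i_ref) / (i_ref + 1e-12)   # contrast normalized by the sound-region signal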
Automated segmentation of three-dimensional MR brain images
NASA Astrophysics Data System (ADS)
Park, Jonggeun; Baek, Byungjun; Ahn, Choong-Il; Ku, Kyo Bum; Jeong, Dong Kyun; Lee, Chulhee
2006-03-01
Brain segmentation is a challenging problem due to the complexity of the brain. In this paper, we propose an automated brain segmentation method for 3D magnetic resonance (MR) brain images, which are represented as a sequence of 2D brain images. The proposed method consists of three steps: pre-processing, removal of non-brain regions (e.g., the skull, meninges, other organs, etc.), and spinal cord restoration. In pre-processing, we perform adaptive thresholding that takes into account the variable intensities of MR brain images acquired under various image acquisition conditions. In the segmentation process, we iteratively apply 2D morphological operations and masking to the sequences of 2D sagittal, coronal, and axial planes in order to remove non-brain tissues. Next, the final 3D brain regions are obtained by applying an OR operation to the segmentation results of the three planes. Finally, we reconstruct the spinal cord truncated during the previous processes. Experiments were performed with fifteen 3D MR brain image sets with 8-bit gray scale. The experimental results show that the proposed algorithm is fast and provides robust and satisfactory results.
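A minimal per-slice sketch of the adaptive-thresholding and 2D-morphology idea (using scikit-image, with a per-slice Otsu threshold and illustrative structuring-element sizes; not the authors' implementation) is shown below.

    from skimage import filters, morphology

    def slice_brain_mask(slice_2d):
        """Sketch: rough brain mask for one 2D MR slice via slice-adaptive
        thresholding followed by 2D morphological clean-up."""
        thresh = filters.threshold_otsu(slice_2d)                     # adapts to each slice's intensities
        mask = slice_2d > thresh
        mask = morphology.binary_opening(mask, morphology.disk(3))    # detach thin skull/meninges links
        mask = morphology.remove_small_objects(mask, min_size=500)    # keep only large components
        mask = morphology.binary_closing(mask, morphology.disk(5))    # fill small gaps in the brain region
        return mask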
Vehicle counting system using real-time video processing
NASA Astrophysics Data System (ADS)
Crisóstomo-Romero, Pedro M.
2006-02-01
Transit studies are important for planning a road network with optimal vehicular flow, and a vehicle count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail that can be obtained, such as the shape, size and speed of vehicles. The system uses a video camera placed above the street to image traffic in real time. The camera must be placed at least 6 meters above street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation and other techniques make it possible to identify and count all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
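As a rough sketch of the per-frame processing chain described above (background subtraction, thresholding, morphological clean-up and connected-component counting), the Python/OpenCV snippet below counts vehicle-sized blobs in one frame; the threshold and minimum-area values are illustrative, and tracking across frames would still be needed for a cumulative count.

    import cv2
    import numpy as np

    def count_vehicles(frame_gray, background_gray, min_area=400):
        """Sketch: count vehicle-sized foreground blobs in one grayscale frame."""
        diff = cv2.absdiff(frame_gray, background_gray)              # foreground = |frame - background|
        _, fg = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)            # remove speckle noise
        fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)           # merge fragmented blobs
        n_labels, _, stats, _ = cv2.connectedComponentsWithStats(fg)
        areas = stats[1:, cv2.CC_STAT_AREA]                          # skip background label 0
        return int(np.sum(areas >= min_area))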
Motion-gated acquisition for in vivo optical imaging
Gioux, Sylvain; Ashitate, Yoshitomo; Hutteman, Merlijn; Frangioni, John V.
2009-01-01
Wide-field continuous wave fluorescence imaging, fluorescence lifetime imaging, frequency domain photon migration, and spatially modulated imaging have the potential to provide quantitative measurements in vivo. However, most of these techniques have not yet been successfully translated to the clinic due to challenging environmental constraints. In many circumstances, cardiac and respiratory motion greatly impair image quality and/or quantitative processing. To address this fundamental problem, we have developed a low-cost, field-programmable gate array–based, hardware-only gating device that delivers a phase-locked acquisition window of arbitrary delay and width that is derived from an unlimited number of pseudo-periodic and nonperiodic input signals. All device features can be controlled manually or via USB serial commands. The working range of the device spans the extremes of mouse electrocardiogram (1000 beats per minute) to human respiration (4 breaths per minute), with timing resolution ⩽0.06%, and jitter ⩽0.008%, of the input signal period. We demonstrate the performance of the gating device, including dramatic improvements in quantitative measurements, in vitro using a motion simulator and in vivo using near-infrared fluorescence angiography of a beating pig heart. This gating device should help to enable the clinical translation of promising new optical imaging technologies. PMID:20059276
Holtrop, Joseph L.; Sutton, Bradley P.
2016-01-01
A diffusion weighted imaging (DWI) approach that is signal-to-noise ratio (SNR) efficient and can be applied to achieve sub-mm resolutions on clinical 3 T systems was developed. The sequence combined a multislab, multishot pulsed gradient spin echo diffusion scheme with spiral readouts for imaging data and navigators. Long data readouts were used to keep the number of shots, and hence total imaging time, for the three-dimensional acquisition short. Image quality was maintained by incorporating a field-inhomogeneity-corrected image reconstruction to remove distortions associated with long data readouts. Additionally, multiple shots were required for the high-resolution images, necessitating motion induced phase correction through the use of efficiently integrated navigator data. The proposed approach is compared with two-dimensional (2-D) acquisitions that use either a spiral or a typical echo-planar imaging (EPI) acquisition to demonstrate the improved SNR efficiency. The proposed technique provided 71% higher SNR efficiency than the standard 2-D EPI approach. The adaptability of the technique to achieve high spatial resolutions is demonstrated by acquiring diffusion tensor imaging data sets with isotropic resolutions of 1.25 and 0.8 mm. The proposed approach allows for SNR-efficient sub-mm acquisitions of DWI data on clinical 3 T systems. PMID:27088107
Use of Vertical Aerial Images for Semi-Oblique Mapping
NASA Astrophysics Data System (ADS)
Poli, D.; Moe, K.; Legat, K.; Toschi, I.; Lago, F.; Remondino, F.
2017-05-01
The paper proposes a methodology for the use of the oblique sections of images from large-format photogrammetric cameras, exploiting the central perspective geometry in the lateral parts of nadir images ("semi-oblique" images). The point of origin of the investigation was the execution of a photogrammetric flight over Norcia (Italy), which was seriously damaged by the earthquake of 30/10/2016. Contrary to the original plan of oblique acquisitions, the flight was executed on 15/11/2016 using an UltraCam Eagle camera with a focal length of 80 mm and combining two flight plans rotated by 90° ("crisscross" flight). The images (GSD 5 cm) were used to extract a 2.5D DSM sampled to an XY-grid size of 2 GSD, a 3D point cloud with a mean spatial resolution of 1 GSD, and a 3D mesh model at a resolution of 10 cm of the historic centre of Norcia for a quantitative assessment of the damage. From the acquired nadir images, the "semi-oblique" images (forward, backward, left and right views) could be extracted and processed in a modified version of the GEOBLY software for measurement and restitution purposes. The potential of such semi-oblique image acquisitions from nadir-view cameras is shown and discussed below.
NASA Astrophysics Data System (ADS)
Patel, M. N.; Looney, P.; Young, K.; Halling-Brown, M. D.
2014-03-01
Radiological imaging is fundamental within the healthcare industry and has become routinely adopted for diagnosis, disease monitoring and treatment planning. Over the past two decades both diagnostic and therapeutic imaging have undergone rapid growth, and the ability to harness this large influx of medical images can provide an essential resource for research and training. Traditionally, the systematic collection of medical images for research from heterogeneous sites has not been commonplace within the NHS and is fraught with challenges, including data acquisition, storage, secure transfer and correct anonymisation. Here, we describe a semi-automated system which comprehensively oversees the collection of both unprocessed and processed medical images from acquisition to a centralised database. The provision of unprocessed images within our repository enables a multitude of potential research possibilities that utilise the images. Furthermore, we have developed systems and software to integrate these data with their associated clinical data and annotations, providing a centralised dataset for research. Currently we regularly collect digital mammography images from two sites and partially collect from a further three, with efforts to expand into other modalities and sites ongoing. At present we have collected 34,014 2D images from 2623 individuals. In this paper we describe our medical image collection system for research and discuss the wide spectrum of challenges faced during the design and implementation of such systems.
Rahim, Sarni Suhaila; Palade, Vasile; Shuttleworth, James; Jayne, Chrisina
2016-12-01
Digital retinal imaging is a challenging screening method for which effective, robust and cost-effective approaches are still to be developed. Regular screening for diabetic retinopathy and diabetic maculopathy diseases is necessary in order to identify the group at risk of visual impairment. This paper presents a novel automatic detection of diabetic retinopathy and maculopathy in eye fundus images by employing fuzzy image processing techniques. The paper first introduces the existing systems for diabetic retinopathy screening, with an emphasis on the maculopathy detection methods. The proposed medical decision support system consists of four parts, namely: image acquisition, image preprocessing including four retinal structures localisation, feature extraction and the classification of diabetic retinopathy and maculopathy. A combination of fuzzy image processing techniques, the Circular Hough Transform and several feature extraction methods are implemented in the proposed system. The paper also presents a novel technique for the macula region localisation in order to detect the maculopathy. In addition to the proposed detection system, the paper highlights a novel online dataset and it presents the dataset collection, the expert diagnosis process and the advantages of our online database compared to other public eye fundus image databases for diabetic retinopathy purposes.
Reproducible high-resolution multispectral image acquisition in dermatology
NASA Astrophysics Data System (ADS)
Duliu, Alexandru; Gardiazabal, José; Lasser, Tobias; Navab, Nassir
2015-07-01
Multispectral image acquisitions are increasingly popular in dermatology, due to their improved spectral resolution which enables better tissue discrimination. Most applications however focus on restricted regions of interest, imaging only small lesions. In this work we present and discuss an imaging framework for high-resolution multispectral imaging on large regions of interest.
Automated Geo/Co-Registration of Multi-Temporal Very-High-Resolution Imagery.
Han, Youkyung; Oh, Jaehong
2018-05-17
For time-series analysis using very-high-resolution (VHR) multi-temporal satellite images, both accurate georegistration to the map coordinates and subpixel-level co-registration among the images should be conducted. However, applying well-known matching methods, such as scale-invariant feature transform and speeded up robust features for VHR multi-temporal images, has limitations. First, they cannot be used for matching an optical image to heterogeneous non-optical data for georegistration. Second, they produce a local misalignment induced by differences in acquisition conditions, such as acquisition platform stability, the sensor's off-nadir angle, and relief displacement of the considered scene. Therefore, this study addresses the problem by proposing an automated geo/co-registration framework for full-scene multi-temporal images acquired from a VHR optical satellite sensor. The proposed method comprises two primary steps: (1) a global georegistration process, followed by (2) a fine co-registration process. During the first step, two-dimensional multi-temporal satellite images are matched to three-dimensional topographic maps to assign the map coordinates. During the second step, a local analysis of registration noise pixels extracted between the multi-temporal images that have been mapped to the map coordinates is conducted to extract a large number of well-distributed corresponding points (CPs). The CPs are finally used to construct a non-rigid transformation function that enables minimization of the local misalignment existing among the images. Experiments conducted on five Kompsat-3 full scenes confirmed the effectiveness of the proposed framework, showing that the georegistration performance resulted in an approximately pixel-level accuracy for most of the scenes, and the co-registration performance further improved the results among all combinations of the georegistered Kompsat-3 image pairs by increasing the calculated cross-correlation values.
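The cross-correlation check mentioned at the end of the abstract can be illustrated with a simple zero-mean normalized cross-correlation between two co-located patches of the registered image pair; this generic formulation is a sketch, not necessarily the authors' exact metric.

    import numpy as np

    def normalized_cross_correlation(patch_a, patch_b):
        """Sketch: zero-mean normalized cross-correlation between two co-located
        image patches, used here as an alignment-quality score in [-1, 1]."""
        a = patch_a.astype(float) - patch_a.mean()
        b = patch_b.astype(float) - patch_b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0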
Hsieh, K S; Lin, C C; Liu, W S; Chen, F L
1996-01-01
Two-dimensional echocardiography has long been a standard diagnostic modality for congenital heart disease. Attempts at three-dimensional reconstruction using two-dimensional echocardiographic images to visualize the stereotypic structure of cardiac lesions have been successful only recently. So far, very few studies have displayed the three-dimensional anatomy of the heart through two-dimensional image acquisition because of the complex procedures involved. This study introduces a recently developed image acquisition and processing system for dynamic three-dimensional visualization of various congenital cardiac lesions. From December 1994 to April 1995, 35 cases were selected in our Echo Laboratory from about 3000 completed echocardiographic examinations. Each image was acquired on-line with a specially designed high-resolution image grabber using EKG and respiratory gating. Off-line image processing with a window-based interactive software package includes conversion of 2-D echocardiographic pixels to 3-D voxels with conversion of the orthogonal to the rotational axial system, interpolation, extraction of the region of interest, segmentation, shading and, finally, 3-D rendering. The three-dimensional anatomy of various congenital cardiac defects was shown, including four cases with ventricular septal defects, two cases with atrial septal defects, and two cases with aortic stenosis. Dynamic reconstruction of a "beating heart" is recorded on video tape through a video interface. The potential application of 3D display of reconstructions from 2D echocardiographic images for the diagnosis of various congenital heart defects has been shown. The 3D display was able to improve the diagnostic ability of echocardiography, with clear-cut display of the various congenital cardiac defects and valvular stenosis. Reinforcement of current techniques will expand future application of 3D display of conventional 2D images.
Head motion during MRI acquisition reduces gray matter volume and thickness estimates.
Reuter, Martin; Tisdall, M Dylan; Qureshi, Abid; Buckner, Randy L; van der Kouwe, André J W; Fischl, Bruce
2015-02-15
Imaging biomarkers derived from magnetic resonance imaging (MRI) data are used to quantify normal development, disease, and the effects of disease-modifying therapies. However, motion during image acquisition introduces image artifacts that, in turn, affect derived markers. A systematic effect can be problematic since factors of interest like age, disease, and treatment are often correlated with both a structural change and the amount of head motion in the scanner, confounding the ability to distinguish biology from artifact. Here we evaluate the effect of head motion during image acquisition on morphometric estimates of structures in the human brain using several popular image analysis software packages (FreeSurfer 5.3, VBM8 SPM, and FSL Siena 5.0.7). Within-session repeated T1-weighted MRIs were collected on 12 healthy volunteers while performing different motion tasks, including two still scans. We show that volume and thickness estimates of the cortical gray matter are biased by head motion with an average apparent volume loss of roughly 0.7%/mm/min of subject motion. Effects vary across regions and remain significant after excluding scans that fail a rigorous quality check. In view of these results, the interpretation of reported morphometric effects of movement disorders or other conditions with increased motion tendency may need to be revisited: effects may be overestimated when not controlling for head motion. Furthermore, drug studies with hypnotic, sedative, tranquilizing, or neuromuscular-blocking substances may contain spurious "effects" of reduced atrophy or brain growth simply because they affect motion distinct from true effects of the disease or therapeutic process. Copyright © 2014 Elsevier Inc. All rights reserved.
Laser-induced acoustic imaging of underground objects
NASA Astrophysics Data System (ADS)
Li, Wen; DiMarzio, Charles A.; McKnight, Stephen W.; Sauermann, Gerhard O.; Miller, Eric L.
1999-02-01
This paper introduces a new demining technique based on the photo-acoustic interaction, together with results from photo-acoustic experiments. We have buried different types of targets (metal, rubber and plastic) in different media (sand, soil and water) and imaged them by measuring reflection of acoustic waves generated by irradiation with a CO2 laser. Research has been focused on the signal acquisition and signal processing. A deconvolution method using Wiener filters is utilized in data processing. Using a uniform spatial distribution of laser pulses at the ground's surface, we obtained 3D images of buried objects. The images give us a clear representation of the shapes of the underground objects. The quality of the images depends on the mismatch of acoustic impedance of the buried objects, the bandwidth and center frequency of the acoustic sensors and the selection of filter functions.
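A minimal 1D Wiener deconvolution of the kind referred to above can be sketched in Python as follows, assuming the source/sensor impulse response is known; the noise-to-signal ratio is an illustrative constant rather than the filter actually tuned in the paper.

    import numpy as np

    def wiener_deconvolve(trace, kernel, nsr=0.05):
        """Sketch: frequency-domain Wiener deconvolution of one acoustic trace
        with a known impulse response; nsr is the noise-to-signal power ratio."""
        n = len(trace)
        H = np.fft.rfft(kernel, n)                  # transfer function of the impulse response
        Y = np.fft.rfft(trace, n)                   # spectrum of the recorded trace
        G = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener inverse filter
        return np.fft.irfft(G * Y, n)               # deconvolved reflectivity estimate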
Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A
2015-01-01
The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used via the Renderscript API for high-performance frame-by-frame image acquisition and computing in order to obtain the PPG signal and the PP interval time series. The relative error of the mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices has been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found, the SDE being lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time processing of Android camera frames.
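As an illustrative sketch (not the app's actual GPU/Renderscript pipeline), the following Python snippet derives pulse-to-pulse intervals from a per-frame mean-intensity series sampled at the 30 Hz frame rate mentioned above; the sign convention and peak-detection settings are assumptions.

    import numpy as np
    from scipy.signal import find_peaks

    def pp_intervals(frame_means, fs=30.0):
        """Sketch: PP intervals (ms) from the per-frame mean camera intensity (PPG proxy)."""
        # Assume each pulse appears as a dip in mean intensity, so invert the signal.
        ppg = -(np.asarray(frame_means, dtype=float) - np.mean(frame_means))
        peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))   # ~0.4 s refractory period
        return np.diff(peaks) / fs * 1000.0                  # inter-beat intervals in ms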
Identification and restoration in 3D fluorescence microscopy
NASA Astrophysics Data System (ADS)
Dieterlen, Alain; Xu, Chengqi; Haeberle, Olivier; Hueber, Nicolas; Malfara, R.; Colicchio, B.; Jacquey, Serge
2004-06-01
3-D optical fluorescence microscopy has become an efficient tool for volumetric investigation of living biological samples. The 3-D data can be acquired by optical sectioning microscopy, which is performed by axial stepping of the object relative to the objective. For any instrument, each recorded image can be described by a convolution equation between the original object and the point spread function (PSF) of the acquisition system. To assess performance and ensure data reproducibility, as for any 3-D quantitative analysis, system identification is mandatory. The PSF describes the properties of the image acquisition system; it can be computed or acquired experimentally. Statistical tools and Zernike moments are shown to be appropriate and complementary for describing a 3-D system PSF and for quantifying the variation of the PSF as a function of the optical parameters. Some critical experimental parameters can be identified with these tools, which helps biologists define an acquisition protocol that makes optimal use of the system. Reduction of out-of-focus light is the central task of 3-D microscopy; it is carried out computationally by a deconvolution process. Pre-filtering the images improves the stability of the deconvolution results, making them less dependent on the regularization parameter and helping biologists use the restoration process.
Stereo Image Ranging For An Autonomous Robot Vision System
NASA Astrophysics Data System (ADS)
Holten, James R.; Rogers, Steven K.; Kabrisky, Matthew; Cross, Steven
1985-12-01
The principles of stereo vision for three-dimensional data acquisition are well known and can be applied to the problem of an autonomous robot vehicle. Corresponding points in the two images are located, and then the location of each point in three-dimensional space can be calculated using the offset (disparity) of the points and knowledge of the camera positions and geometry. This research investigates the application of artificial intelligence knowledge representation techniques as a means to apply heuristics that relieve the computational intensity of the low-level image processing tasks. Specifically, a new technique for image feature extraction is presented. This technique, the Queen Victoria Algorithm, uses formal language productions to process the image and characterize its features. These characterized features are then used for stereo image feature registration to obtain the required ranging information. The results can be used by an autonomous robot vision system for environmental modeling and path finding.
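To make the triangulation step concrete, a minimal sketch of depth from disparity for a rectified stereo pair is given below, assuming a known focal length `f_px` (in pixels) and baseline `baseline_m`; this is textbook stereo geometry, not the Queen Victoria Algorithm itself.

```python
def stereo_depth(x_left, x_right, f_px, baseline_m):
    """Depth (meters) of a point seen at column x_left in the left image and
    x_right in the right image of a rectified stereo pair.
    f_px: focal length in pixels, baseline_m: camera separation in meters."""
    disparity = x_left - x_right          # horizontal offset of the matched point
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return f_px * baseline_m / disparity
```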
Joint MR-PET reconstruction using a multi-channel image regularizer
Koesters, Thomas; Otazo, Ricardo; Bredies, Kristian; Sodickson, Daniel K
2016-01-01
While current state-of-the-art MR-PET scanners enable simultaneous MR and PET measurements, the acquired data sets are still usually reconstructed separately. We propose a new multi-modality reconstruction framework using second-order Total Generalized Variation (TGV) as a dedicated multi-channel regularization functional that jointly reconstructs images from both modalities. In this way, information about the underlying anatomy is shared during the image reconstruction process while unique differences are preserved. Results from numerical simulations and in-vivo experiments using a range of accelerated MR acquisitions and different MR image contrasts demonstrate improved PET image quality, resolution, and quantitative accuracy. PMID:28055827
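The abstract does not spell out the cost function; schematically, a second-order-TGV-regularized joint reconstruction of the two-channel image u = (u_MR, u_PET) can be written as below, where A is the (possibly undersampled) MR encoding operator, P the PET projector with expected randoms/scatter r, D_KL the Kullback-Leibler fidelity for count data, and E the symmetrized gradient. The symbols and the weights alpha_0, alpha_1 are assumptions for illustration, not the paper's exact notation.

$$
\min_{u_{\mathrm{MR}},\,u_{\mathrm{PET}},\,v}\;
\tfrac{1}{2}\,\lVert A\,u_{\mathrm{MR}} - d_{\mathrm{MR}}\rVert_2^2
\;+\; D_{\mathrm{KL}}\!\left(P\,u_{\mathrm{PET}} + r,\; d_{\mathrm{PET}}\right)
\;+\; \alpha_1 \lVert \nabla u - v \rVert_{2,1}
\;+\; \alpha_0 \lVert \mathcal{E}(v) \rVert_{2,1}
$$

Here the mixed norms couple the two channels so that shared anatomical edges are reinforced while modality-specific features remain admissible.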
A framework for analysis of large database of old art paintings
NASA Astrophysics Data System (ADS)
Da Rugna, Jérôme; Chareyron, Gaël; Pillay, Ruven; Joly, Morwena
2011-03-01
For many years, many museums and countries have organized high-definition digitization of their collections, generating massive amounts of data for each object. In this paper, we focus only on art painting collections. Nevertheless, we face a very large database with heterogeneous data. Indeed, the image collection includes very old and recent scans of negative photos, digital photos, multi- and hyperspectral acquisitions, X-ray acquisitions, and also front, back and lateral photos. Moreover, we have noted that art paintings suffer from much degradation: cracks, softening, artifacts, human damage and deterioration over time. Considering that, it appears necessary to develop specific approaches and methods dedicated to digital art painting analysis. Consequently, this paper presents a complete framework devoted to evaluating, comparing and benchmarking image processing algorithms.
BOREAS Level 3-b AVHRR-LAC Imagery: Scaled At-sensor Radiance in LGSOWG Format
NASA Technical Reports Server (NTRS)
Hall, Forrest G. (Editor); Nickeson, Jaime; Newcomer, Jeffrey A.; Cihlar, Josef
2000-01-01
The BOREAS Staff Science Satellite Data Acquisition Program focused on providing the research teams with the remotely sensed satellite data products they needed to compare and spatially extend point results. Data acquired from the AVHRR instrument on the NOAA-9, -11, -12, and -14 satellites were processed and archived for the BOREAS region by the MRSC and BORIS. The data were acquired by CCRS and were provided for use by BOREAS researchers. A few winter acquisitions are available, but the archive contains primarily growing season imagery. These gridded, at-sensor radiance image data cover the period of 30-Jan-1994 to 18-Sep-1996. Geographically, the data cover the entire 1,000-km x 1,000-km BOREAS region. The data are stored in binary image format files.
LETTER TO THE EDITOR: Combined optical and single photon emission imaging: preliminary results
NASA Astrophysics Data System (ADS)
Boschi, Federico; Spinelli, Antonello E.; D'Ambrosio, Daniela; Calderan, Laura; Marengo, Mario; Sbarbati, Andrea
2009-12-01
In vivo optical imaging instruments are generally devoted to the acquisition of light coming from fluorescence or bioluminescence processes. Recently, an instrument was conceived with radioisotopic detection capabilities (Kodak in Vivo Multispectral System F) based on the conversion of x-rays from a phosphor screen. The goal of this work is to demonstrate that an optical imager (IVIS 200, Xenogen Corp., Alameda, USA), designed for in vivo acquisitions of small animals in bioluminescent and fluorescent modalities, can even be employed to detect signals due to radioactive tracers. Our system is based on scintillator crystals for the conversion of high-energy rays and a collimator. No hardware modifications are required. Crystals alone permit the acquisition of photons coming from an in vivo 20 g nude mouse injected with a solution of technetium-99m methylene diphosphonate (Tc99m-MDP). With scintillator crystals and collimators, a set of measurements aimed at fully characterizing the system resolution was carried out. More precisely, the system point spread function and modulation transfer function were measured at different source depths. Results show that the system resolution is always better than 1.3 mm when the source depth is less than 10 mm. The resolution of the images obtained with radioactive tracers is comparable with the resolution achievable with dedicated techniques. Moreover, it is possible to detect both optical and nuclear tracers, or bi-modal tracers, with only one instrument.
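Since the abstract reports PSF and MTF measurements at several source depths, the following sketch shows the standard way an MTF curve can be derived from a measured 1-D spread profile; the sampling pitch `pixel_mm` and the normalization are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def mtf_from_psf(psf_1d, pixel_mm):
    """MTF as the normalized magnitude of the Fourier transform of a
    1-D point/line spread profile sampled every `pixel_mm` millimeters."""
    psf = np.asarray(psf_1d, float)
    psf = psf / psf.sum()                          # unit area
    mtf = np.abs(np.fft.rfft(psf))
    mtf = mtf / mtf[0]                             # 1.0 at zero spatial frequency
    freqs = np.fft.rfftfreq(psf.size, d=pixel_mm)  # cycles per mm
    return freqs, mtf
```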
An image is worth a thousand words: why nouns tend to dominate verbs in early word learning.
McDonough, Colleen; Song, Lulu; Hirsh-Pasek, Kathy; Golinkoff, Roberta Michnick; Lannon, Robert
2011-03-01
Nouns are generally easier to learn than verbs (e.g., Bornstein, 2005; Bornstein et al., 2004; Gentner, 1982; Maguire, Hirsh-Pasek, & Golinkoff, 2006). Yet, verbs appear in children's earliest vocabularies, creating a seeming paradox. This paper examines one hypothesis about the difference between noun and verb acquisition. Perhaps the advantage nouns have is not a function of grammatical form class but rather related to a word's imageability. Here, word imageability ratings and form class (nouns and verbs) were correlated with age of acquisition according to the MacArthur-Bates Communicative Development Inventory (CDI) (Fenson et al., 1994). CDI age of acquisition was negatively correlated with words' imageability ratings. Further, a word's imageability contributes to the variance of the word's age of acquisition above and beyond form class, suggesting that at the beginning of word learning, imageability might be a driving factor.
Abe, Hiroyuki; Mori, Naoko; Tsuchiya, Keiko; Schacht, David V; Pineda, Federico D; Jiang, Yulei; Karczmar, Gregory S
2016-11-01
The purposes of this study were to evaluate diagnostic parameters measured with ultrafast MRI acquisition and with standard acquisition and to compare diagnostic utility for differentiating benign from malignant lesions. Ultrafast acquisition is a high-temporal-resolution (7 seconds) imaging technique for obtaining 3D whole-breast images. The dynamic contrast-enhanced 3-T MRI protocol consists of an unenhanced standard and an ultrafast acquisition that includes eight contrast-enhanced ultrafast images and four standard images. Retrospective assessment was performed for 60 patients with 33 malignant and 29 benign lesions. A computer-aided detection system was used to obtain initial enhancement rate and signal enhancement ratio (SER) by means of identification of a voxel showing the highest signal intensity in the first phase of standard imaging. From the same voxel, the enhancement rate at each time point of the ultrafast acquisition and the AUC of the kinetic curve from zero to each time point of ultrafast imaging were obtained. There was a statistically significant difference between benign and malignant lesions in enhancement rate and kinetic AUC for ultrafast imaging and also in initial enhancement rate and SER for standard imaging. ROC analysis showed no significant differences between enhancement rate in ultrafast imaging and SER or initial enhancement rate in standard imaging. Ultrafast imaging is useful for discriminating benign from malignant lesions. The differential utility of ultrafast imaging is comparable to that of standard kinetic assessment in a shorter study time.
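The kinetic parameters named above are not defined in the abstract; assuming the conventional definitions used in breast DCE-MRI, with S0 the pre-contrast signal, S1 the first post-contrast signal and Slast the final post-contrast signal in the same voxel, they take roughly the following form (a sketch of common usage, not necessarily the exact formulas of this study):

$$
\text{initial enhancement rate} = \frac{S_1 - S_0}{S_0},
\qquad
\mathrm{SER} = \frac{S_1 - S_0}{S_{\mathrm{last}} - S_0},
\qquad
\mathrm{AUC}_{0\rightarrow t} = \sum_{k:\,t_k \le t} \bigl(S(t_k) - S_0\bigr)\,\Delta t_k .
$$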
Modeling human faces with multi-image photogrammetry
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2002-03-01
Modeling and measurement of the human face have been increasing in importance for various purposes. Laser scanning, coded-light range digitizers, image-based approaches and digital stereo photogrammetry are the methods currently employed in medical applications, computer animation, video surveillance, teleconferencing and virtual reality to produce three-dimensional computer models of the human face. The requirements differ depending on the application; ours are primarily high measurement accuracy and automation of the process. The method presented in this paper is based on multi-image photogrammetry. The equipment, the method and the results achieved with this technique are described here. The process is composed of five steps: acquisition of multi-images, calibration of the system, establishment of corresponding points in the images, computation of their 3-D coordinates and generation of a surface model. The images captured by five CCD cameras arranged in front of the subject are digitized by a frame grabber. The complete system is calibrated using a reference object with coded target points, which can be measured fully automatically. To facilitate the establishment of correspondences in the images, texture in the form of random patterns can be projected from two directions onto the face. The multi-image matching process, based on a geometrically constrained least squares matching algorithm, produces a dense set of corresponding points in the five images. Neighborhood filters are then applied to the matching results to remove errors. After filtering the data, the three-dimensional coordinates of the matched points are computed by forward intersection using the results of the calibration process; the achieved mean accuracy is about 0.2 mm in the sagittal direction and about 0.1 mm in the lateral direction. The last step of data processing is the generation of a surface model from the point cloud and the application of smoothing filters. Moreover, a color texture image can be draped over the model to achieve a photorealistic visualization. The advantage of the presented method over laser scanning and coded-light range digitizers is the acquisition of the source data in a fraction of a second, allowing the measurement of human faces with higher accuracy and the possibility to measure dynamic events such as the speech of a person.
Space infrared telescope pointing control system. Automated star pattern recognition
NASA Technical Reports Server (NTRS)
Powell, J. D.; Vanbezooijen, R. W. H.
1985-01-01
The Space Infrared Telescope Facility (SIRTF) is a free-flying spacecraft carrying a 1-meter-class cryogenically cooled infrared telescope nearly three orders of magnitude more sensitive than the current generation of infrared telescopes. Three automatic target acquisition methods are presented that are based on the use of an imaging star tracker. The methods are distinguished by the number of guide stars that are required per target, the amount of computational capability necessary, and the time required for the complete acquisition process. Each method is described in detail.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lanekoff, Ingela T.; Heath, Brandi S.; Liyu, Andrey V.
2012-10-02
An automated platform has been developed for acquisition and visualization of mass spectrometry imaging (MSI) data using nanospray desorption electrospray ionization (nano-DESI). The new system enables robust operation of the nano-DESI imaging source over many hours. This is achieved by controlling the distance between the sample and the probe by mounting the sample holder onto an automated XYZ stage and defining the tilt of the sample plane. This approach is useful for imaging of relatively flat samples such as thin tissue sections. Custom software called MSI QuickView was developed for visualization of large data sets generated in imaging experiments. MSI QuickView enables fast visualization of the imaging data during data acquisition and detailed processing after the entire image is acquired. The performance of the system is demonstrated by imaging rat brain tissue sections. High resolution mass analysis combined with MS/MS experiments enabled identification of lipids and metabolites in the tissue section. In addition, the high dynamic range and sensitivity of the technique allowed us to generate ion images of low-abundance isobaric lipids. A high-spatial-resolution image acquired over a small region of the tissue section revealed the spatial distribution of an abundant brain metabolite, creatine, in the white and gray matter that is consistent with literature data obtained using magnetic resonance spectroscopy.
NASA Technical Reports Server (NTRS)
Andrefeouet, Serge; Robinson, Julie
2000-01-01
Coral reefs worldwide are suffering from severe and rapid degradation (Bryant et al., 1998; Hoegh-Guldberg, 1999). Quick, consistent, large-scale assessment is required to assess and monitor their status (e.g., USDOC/NOAA NESDIS et al., 1999). On-going systematic collection of high-resolution digital satellite data will exhaustively complement the relatively small number of SPOT, Landsat 4-5, and IRS scenes acquired for coral reefs over the last 20 years. The workhorse for current image acquisition is the Landsat 7 ETM+ Long Term Acquisition Plan (Gasch et al. 2000). Coral reefs are encountered in tropical areas, and cloud contamination in satellite images is frequently a problem (Benner and Curry 1998), despite new automated techniques of cloud cover avoidance (Gasch and Campana 2000). Fusion of multidate acquisitions is a classical solution to the cloud problem. Though elegant, this solution is costly since multiple images must be purchased for one location; the cost may be prohibitive for institutions in developing countries. There are other difficulties associated with fusing multidate images as well. For example, water quality or surface state can change significantly through time in coral reef areas, making the bathymetric processing of a mosaiced image strenuous. Therefore, another strategy must be selected to detect clouds and improve coral reef mapping. Other supplemental data could be helpful and cost-effective for distinguishing clouds and generating the best possible reef maps in the shortest amount of time. Photographs taken from the 1960s to the present from the Space Shuttle and other human-occupied spacecraft are one under-used source of alternative multitemporal data (Lulla et al. 1996). Nearly 400,000 photographs have been acquired during this period, and an estimated 28,000 of those taken to date are of potential value for reef remote sensing (Robinson et al. 2000a). The photographic images can be digitized into three bands (red, green and blue) and processed for various applications (e.g., Benner and Curry 1998, Nedeltchev 1999, Glasser and Lulla 2000, Robinson et al. 2000c, Webb et al., in press).
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragoda, Deepa K.; Matcher, Stephen J.
2011-03-01
We compare true 8- and 14-bit-depth imaging of SS-OCT and polarization-sensitive SS-OCT (PS-SS-OCT) at 1.3 μm wavelength by using two hardware-synchronized high-speed data acquisition (DAQ) boards. The two DAQ boards read exactly the same imaging data for comparison. The measured system sensitivity at 8-bit depth is comparable to that for 14-bit acquisition when using the more sensitive of the available full analog input voltage ranges of the ADC. Ex-vivo structural and birefringence images of an equine tendon sample indicate no significant differences between images acquired by the two DAQ boards, suggesting that 8-bit DAQ boards can be employed to increase imaging speeds and reduce storage in clinical SS-OCT/PS-SS-OCT systems. We also compare the resulting image quality when the image data sampled with the 14-bit DAQ from human finger skin are artificially bit-reduced during post-processing. In agreement with the results reported previously, we observe that in our system the real-world 8-bit image shows more artifacts than the image obtained by numerically truncating the raw 14-bit image data to 8 bits, especially in low-intensity image areas. This is due to the higher noise floor and reduced dynamic range of the 8-bit DAQ. One possible disadvantage is therefore a reduced imaging dynamic range, which can manifest itself as an increase in image artifacts due to strong Fresnel reflections.
System Matrix Analysis for Computed Tomography Imaging
Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo
2015-01-01
In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
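The abstract refers to the Siddon method for generating system-matrix elements; the sketch below traces a single 2-D ray through a pixel grid in the Siddon style (parametric crossings with the grid lines, then per-pixel intersection lengths). The grid conventions, variable names and the midpoint-based pixel lookup are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

def ray_pixel_lengths(p0, p1, nx, ny, dx=1.0, dy=1.0, origin=(0.0, 0.0)):
    """Siddon-style tracing of the ray p0 -> p1 through an nx-by-ny pixel grid.
    Returns (flat_pixel_index, intersection_length) pairs, i.e. one row of
    the system matrix for this ray."""
    p0 = np.asarray(p0, float)
    p1 = np.asarray(p1, float)
    d = p1 - p0
    # parametric values (0..1) where the ray crosses vertical / horizontal grid lines
    ax = ((origin[0] + np.arange(nx + 1) * dx) - p0[0]) / d[0] if d[0] != 0 else np.empty(0)
    ay = ((origin[1] + np.arange(ny + 1) * dy) - p0[1]) / d[1] if d[1] != 0 else np.empty(0)
    alphas = np.concatenate(([0.0, 1.0], ax, ay))
    alphas = np.unique(alphas[(alphas >= 0.0) & (alphas <= 1.0)])
    ray_len = np.linalg.norm(d)
    weights = []
    for a0, a1 in zip(alphas[:-1], alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d            # midpoint of this segment
        ix = int((mid[0] - origin[0]) // dx)      # column of the pixel containing it
        iy = int((mid[1] - origin[1]) // dy)      # row of the pixel containing it
        if 0 <= ix < nx and 0 <= iy < ny:
            weights.append((iy * nx + ix, (a1 - a0) * ray_len))
    return weights
```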
New methods of MR image intensity standardization via generalized scale
NASA Astrophysics Data System (ADS)
Madabhushi, Anant; Udupa, Jayaram K.
2005-04-01
Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
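For contrast, the landmark-based standardization that the paper improves upon can be sketched in a few lines: intensity percentiles of the input scan are mapped piecewise-linearly onto reference landmarks learned from a training set. The percentile choice and function names are illustrative assumptions; the g-scale and gB-scale methods proposed in the paper are not reproduced here.

```python
import numpy as np

def standardize_intensities(img, ref_landmarks, pcs=(1, 10, 25, 50, 75, 90, 99)):
    """Piecewise-linear histogram-landmark standardization of an MR volume.
    img: array of intensities; ref_landmarks: target values for the same percentiles."""
    src = np.percentile(img, pcs)                 # landmarks of this acquisition
    return np.interp(img, src, ref_landmarks)     # map them onto the reference scale
```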
ERIC Educational Resources Information Center
Stowe, Laurie A.; Sabourin, Laura
2005-01-01
In this paper we discuss recent neuroimaging evidence on three issues: (1) whether the same "language" areas are used to process a second language (L2) as the first language (L1); (2) the extent to which this depends on age of acquisition; and (3) to the extent that the same areas of the brain are used, are they used in the same way? The results…
Illuminating Asset Value through New Seismic Technology
NASA Astrophysics Data System (ADS)
Brandsberg-Dahl, S.
2007-05-01
The ability to reduce risk and uncertainty across the full life cycle of an asset is directly correlated with creating an accurate subsurface image that enhances our understanding of the geology. This presentation focuses on this objective in areas of complex overburden in deepwater. Marine 3D seismic surveys have been acquired in essentially the same way for the past decade. This configuration of towed-streamer acquisition, where the boat acquires data in one azimuth, has been very effective in imaging areas in fairly benign geologic settings. As the industry has moved into more complicated geologic settings, these surveys no longer meet the imaging objectives for risk reduction in exploration through production. In shallow water, we have seen increasing use of ocean bottom cables to meet this challenge. For deepwater, new breakthroughs in technology were required. This will be highlighted through examples of imaging below large salt bodies in the deepwater Gulf of Mexico. GoM - Mad Dog: The Mad Dog field is located approximately 140 miles south of the Louisiana coastline in the southern Green Canyon area in water depths between 4100 feet and 6000 feet. The complex salt canopy overlying a large portion of the field results in generally poor seismic data quality. Advanced processing techniques improved the image, but gaps still remained even after several years of effort. We concluded that wide-azimuth acquisition was required to illuminate the field in a new way. Results from the Wide Azimuth Towed Streamer (WATS) survey deployed at Mad Dog demonstrated the anticipated improvement in the subsalt image. GoM - Atlantis Field: An alternative approach to wide-azimuth acquisition, ocean bottom seismic (OBS) node technology, was developed and tested. In 2001, practical deepwater experience was limited to a few nodes owned by academic institutions, and there were no commercial solutions either available or in development. BP embarked on a program of sea trials designed to both evaluate technologies and subsequently encourage vendor activity to develop and deploy a commercial system. The 3D seismic method exploded into general usage in the 1990s. Our industry delivered 3D cheaper and faster, improving quality through improved acquisition specifications and new processing technology. The need to mitigate business risks in highly material subsalt plays led BP to explore the technical limits of the seismic method, testing novel acquisition techniques to improve illumination and signal-to-noise ratio. These were successful and are applicable to analogous seismic quality problems globally, providing breakthroughs in illuminating previously hidden geology and hydrocarbon reservoirs. A focused business challenge, smart risk taking, investment in people and computing capability, partnerships, and rapid implementation are key themes that will be touched on throughout the talk.
Computer vision for microscopy diagnosis of malaria.
Tek, F Boray; Dempster, Andrew G; Kale, Izzet
2009-07-13
This paper reviews computer vision and image analysis studies aiming at automated diagnosis or screening of malaria infection in microscope images of thin blood film smears. Existing works interpret the diagnosis problem differently or propose partial solutions to the problem. A critique of these works is furnished. In addition, a general pattern recognition framework to perform diagnosis, which includes image acquisition, pre-processing, segmentation, and pattern classification components, is described. The open problems are addressed and a perspective of the future work for realization of automated microscopy diagnosis of malaria is provided.
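As a toy illustration of the acquisition, pre-processing, segmentation and classification framework described above (not any specific method reviewed in the paper), the following sketch thresholds a thin-smear image and extracts per-cell regions whose properties could then be fed to a classifier; parameter values are placeholders.

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops

def segment_cells(rgb_image, min_area=100):
    """Very simple segmentation of a stained thin blood film image."""
    gray = gaussian(rgb2gray(rgb_image), sigma=1.0)   # pre-processing: smooth
    mask = gray < threshold_otsu(gray)                # cells are darker than background
    regions = regionprops(label(mask))
    # keep plausible cell-sized blobs; their properties are candidate features
    return [r for r in regions if r.area >= min_area]
```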
Using the Optical Mouse Sensor as a Two-Euro Counterfeit Coin Detector
Tresanchez, Marcel; Pallejà, Tomàs; Teixidó, Mercè; Palacín, Jordi
2009-01-01
In this paper, the sensor of an optical mouse is presented as a counterfeit coin detector applied to the two-Euro case. The detection process is based on the short distance image acquisition capabilities of the optical mouse sensor where partial images of the coin under analysis are compared with some partial reference coin images for matching. Results show that, using only the vision sense, the counterfeit acceptance and rejection rates are very similar to those of a trained user and better than those of an untrained user. PMID:22399987
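The matching criterion used to compare the mouse-sensor patches with the reference coin patches is not given in the abstract; a common choice would be zero-mean normalized cross-correlation, sketched below for two equally sized grayscale patches (an illustrative assumption, not the authors' implementation).

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equally sized patches.
    Returns a value in [-1, 1]; values near 1 indicate a good match."""
    a = patch.astype(float) - patch.mean()
    b = template.astype(float) - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```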
Automatic 3D relief acquisition and georeferencing of road sides by low-cost on-motion SfM
NASA Astrophysics Data System (ADS)
Voumard, Jérémie; Bornemann, Perrick; Malet, Jean-Philippe; Derron, Marc-Henri; Jaboyedoff, Michel
2017-04-01
3D terrain relief acquisition is important for a large part of the geosciences. Several methods have been developed to digitize terrain, such as total stations, LiDAR, GNSS or photogrammetry. To digitize road (or rail track) sides over long sections, mobile spatial imaging systems or UAVs are commonly used. In this project, we compare a still fairly new method, the SfM on-motion technique, with some traditional terrain digitizing techniques (terrestrial laser scanning, traditional SfM, UAS imaging solutions, GNSS surveying systems and total stations). The SfM on-motion technique generates 3D spatial data by photogrammetric processing of images taken from a moving vehicle. Our mobile system consists of six action cameras placed on a vehicle. Four fisheye cameras mounted on a mast on the vehicle roof are placed 3.2 meters above the ground. Three of them have a GNSS chip providing geotagged images. Two pictures were acquired every second by each camera. 4K-resolution fisheye videos were also used to extract 8.3M non-geotagged pictures. All these pictures are then processed with the Agisoft PhotoScan Professional software. Results from the SfM on-motion technique are compared with results from classical SfM photogrammetry on a 500-meter-long alpine track. They were also compared with mobile laser scanning data on the same road section. First results seem to indicate that slope structures are well observable down to decimetric accuracy. For the georeferencing, the planimetric (XY) accuracy of a few meters is much better than the altimetric (Z) accuracy. There is indeed a Z-coordinate shift of a few tens of meters between the GoPro cameras and the Garmin camera. This makes it necessary to give greater freedom to the altimetric coordinates in the processing software. Benefits of this low-cost SfM on-motion method are: 1) a simple setup to use in the field (easy to switch between vehicle types such as car, train, bike, etc.), 2) a low cost and 3) automatic georeferencing of 3D point clouds. The main disadvantages are: 1) results that are less accurate than those from a LiDAR system, 2) heavy image processing and 3) a short acquisition distance.
Rotation covariant image processing for biomedical applications.
Skibbe, Henrik; Reisert, Marco
2013-01-01
With the advent of novel biomedical 3D image acquisition techniques, the efficient and reliable analysis of volumetric images has become more and more important. The amount of data is enormous and demands automated processing. The applications are manifold, ranging from image enhancement, image reconstruction, and image description to object/feature detection and high-level contextual feature extraction. In most scenarios, it is expected that geometric transformations alter the output in a mathematically well-defined manner. In this paper we focus on 3D translations and rotations. Many algorithms rely on intensity or low-order tensorial-like descriptions to fulfill this demand. This paper proposes a general mathematical framework based on concepts and theories transferred from mathematical physics and harmonic analysis into the domain of image analysis and pattern recognition. Based on two basic operations, spherical tensor differentiation and spherical tensor multiplication, we show how to design a variety of 3D image processing methods in an efficient way. The framework has already been applied to several biomedical applications ranging from feature and object detection tasks to image enhancement and image restoration techniques. In this paper, the proposed methods are applied to a variety of different 3D data modalities stemming from the medical and biological sciences.
3D image acquisition by fiber-based fringe projection
NASA Astrophysics Data System (ADS)
Pfeifer, Tilo; Driessen, Sascha
2005-02-01
In macroscopic production processes, several measuring methods are used to assure the quality of 3D parts. One of the most widespread techniques is fringe projection. It is a fast and accurate method for obtaining the topography of a part as a computer file which can be processed in further steps, e.g. to compare the measured part to a given CAD file. In this article it is shown how the fringe projection method is applied to a fiber-optic system. The fringes generated by a miniaturized fringe projector (MiniRot) are first projected onto the front end of an image guide using special optics. The image guide serves as a transmitter for the fringes in order to project them onto the surface of a micro part. A second image guide is used to observe the micro part. It is mounted at an angle relative to the illuminating image guide so that the triangulation condition is fulfilled. With a CCD camera connected to the second image guide, the projected fringes are recorded, and the data are analyzed by an image processing system.
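The abstract does not say how the recorded fringes are evaluated; one widely used option is four-step phase shifting, in which four fringe images I_1..I_4 with 90-degree phase offsets yield the wrapped phase (and, after unwrapping and calibration, the height) at every pixel. This is a generic formula given for illustration, not necessarily the analysis used with the MiniRot system:

$$
\varphi(x, y) \;=\; \arctan\!\frac{I_4(x, y) - I_2(x, y)}{I_1(x, y) - I_3(x, y)} .
$$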
Effects of Resolution, Range, and Image Contrast on Target Acquisition Performance.
Hollands, Justin G; Terhaar, Phil; Pavlovic, Nada J
2018-05-01
We sought to determine the joint influence of resolution, target range, and image contrast on the detection and identification of targets in simulated naturalistic scenes. Resolution requirements for target acquisition have been developed based on threshold values obtained using imaging systems, when target range was fixed, and image characteristics were determined by the system. Subsequent work has examined the influence of factors like target range and image contrast on target acquisition. We varied the resolution and contrast of static images in two experiments. Participants (soldiers) decided whether a human target was located in the scene (detection task) or whether a target was friendly or hostile (identification task). Target range was also varied (50-400 m). In Experiment 1, 30 participants saw color images with a single target exemplar. In Experiment 2, another 30 participants saw monochrome images containing different target exemplars. The effects of target range and image contrast were qualitatively different above and below 6 pixels per meter of target for both tasks in both experiments. Target detection and identification performance were a joint function of image resolution, range, and contrast for both color and monochrome images. The beneficial effects of increasing resolution for target acquisition performance are greater for closer (larger) targets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yin, Zhye, E-mail: yin@ge.com; De Man, Bruno; Yao, Yangyang
Purpose: Traditionally, 2D radiographic preparatory scan images (scout scans) are used to plan diagnostic CT scans. However, a 3D CT volume with a full 3D organ segmentation map could provide superior information for customized scan planning and other purposes. A practical challenge is to design the volumetric scout acquisition and processing steps to provide good image quality (at least good enough to enable 3D organ segmentation) while delivering a radiation dose similar to that of the conventional 2D scout. Methods: The authors explored various acquisition methods, scan parameters, postprocessing methods, and reconstruction methods through simulation and cadaver data studies to achieve an ultralow dose 3D scout while simultaneously reducing the noise and maintaining the edge strength around the target organ. Results: In a simulation study, the 3D scout with the proposed acquisition, preprocessing, and reconstruction strategy provided a similar level of organ segmentation capability as a traditional 240 mAs diagnostic scan, based on noise and normalized edge strength metrics. At the same time, the proposed approach delivers only 1.25% of the dose of a traditional scan. In a cadaver study, the authors' pictorial-structures based organ localization algorithm successfully located the major abdominal-thoracic organs from the ultralow dose 3D scout obtained with the proposed strategy. Conclusions: The authors demonstrated that images with a similar degree of segmentation capability (interpretability) as conventional dose CT scans can be achieved with an ultralow dose 3D scout acquisition and suitable postprocessing. Furthermore, the authors applied these techniques to real cadaver CT scans with a CTDI dose level of less than 0.1 mGy and successfully generated a 3D organ localization map.
Face Recognition and Processing in a Mini Brain
2007-09-28
Free-flying honeybees (Apis mellifera) were used as a model to understand how a non-mammalian brain learns to recognise human faces (J Exp Biol 2005 v208p4709). Individual free-flying honeybees were provided with differential conditioning to achromatic target and distractor face images. Bee acquisition reached >70% correct choices.
Total body photography for skin cancer screening.
Dengel, Lynn T; Petroni, Gina R; Judge, Joshua; Chen, David; Acton, Scott T; Schroen, Anneke T; Slingluff, Craig L
2015-11-01
Total body photography may aid in melanoma screening but is not widely applied due to time and cost. We hypothesized that a near-simultaneous automated skin photo-acquisition system would be acceptable to patients and could rapidly obtain total body images that enable visualization of pigmented skin lesions. From February to May 2009, a study of 20 volunteers was performed at the University of Virginia to test a prototype 16-camera imaging booth built by the research team and to guide development of special purpose software. For each participant, images were obtained before and after marking 10 lesions (five "easy" and five "difficult"), and images were evaluated to estimate visualization rates. Imaging logistical challenges were scored by the operator, and participant opinion was assessed by questionnaire. Average time for image capture was three minutes (range 2-5). All 55 "easy" lesions were visualized (sensitivity 100%, 90% CI 95-100%), and 54/55 "difficult" lesions were visualized (sensitivity 98%, 90% CI 92-100%). Operators and patients graded the imaging process favorably, with challenges identified regarding lighting and positioning. Rapid-acquisition automated skin photography is feasible with a low-cost system, with excellent lesion visualization and participant acceptance. These data provide a basis for employing this method in clinical melanoma screening. © 2014 The International Society of Dermatology.
Design of embedded endoscopic ultrasonic imaging system
NASA Astrophysics Data System (ADS)
Li, Ming; Zhou, Hao; Wen, Shijie; Chen, Xiodong; Yu, Daoyin
2008-12-01
The endoscopic ultrasonic imaging system is an important component of the endoscopic ultrasonography system (EUS). Through the ultrasonic probe, the characteristics of the fault histology features of the digestive organs are detected by the EUS and then received, in the form of ultrasonic echoes, by the reception circuit, which consists of amplification, gain compensation, filtering and A/D converter circuits. The endoscopic ultrasonic imaging system is the back-end processing system of the EUS, with the function of receiving the digital ultrasonic echo modulated by the digestive tract wall from the reception circuit, and of acquiring and showing the fault histology features in the form of images and characteristic data after digital signal processing such as demodulation. Traditional endoscopic ultrasonic imaging systems are mainly based on image acquisition and processing chips connected to a personal computer with a USB 2.0 circuit, with the drawbacks of high cost, complicated structure, poor portability, and difficulty of popularization. To address these shortcomings, this paper presents methods of digital signal acquisition and processing based on embedded technology, with a core hardware structure of ARM and FPGA, substituting for the traditional design with USB 2.0 and a personal computer. With a built-in FIFO and dual buffers, the FPGA implements ping-pong data storage while transferring the image data to the ARM through the EBI bus by DMA, which is controlled by the ARM to achieve high-speed transmission. The ARM system was chosen to handle image display each time a DMA transfer completes and to carry out system control, with the drivers and applications running on the embedded operating system Windows CE, which provides a stable, safe and reliable running platform for the embedded device software. Profiting from the excellent graphical user interface (GUI) and good performance of Windows CE, we can not only clearly show 511×511-pixel ultrasonic echo images through the application program, but also provide a simple and friendly operating interface with mouse and touch screen, which is more convenient than the traditional endoscopic ultrasonic imaging system. Comprising the core and peripheral circuits of the FPGA and ARM, the power network circuit and the LCD display circuit, the whole embedded system was designed and achieved the desired purpose, properly implementing ultrasonic image display after experimental verification and solving the problem of the bulkiness and complexity of the traditional endoscopic ultrasonic imaging system.
Yang, Pengfei; Niu, Kai; Wu, Yijing; Struffert, Tobias; Dorfler, Arnd; Schafer, Sebastian; Royalty, Kevin; Strother, Charles; Chen, Guang-Hong
2015-12-01
Multimodal imaging using cone beam C-arm computed tomography (CT) may shorten the delay from ictus to revascularization for acute ischemic stroke patients with a large vessel occlusion. Largely because of limited temporal resolution, reconstruction of time-resolved CT angiography (CTA) from these systems has not yielded satisfactory results. We evaluated the image quality and diagnostic value of time-resolved C-arm CTA reconstructed using novel image processing algorithms. Studies were done under an Institutional Review Board approved protocol. Postprocessing of data from 21 C-arm CT dynamic perfusion acquisitions from 17 patients with acute ischemic stroke was done to derive time-resolved C-arm CTA images. Two observers independently evaluated image quality and diagnostic content for each case. ICC and receiver-operating characteristic analyses were performed to evaluate interobserver agreement and the diagnostic value of this novel imaging modality. Time-resolved C-arm CTA images were successfully generated from 20 data sets (95.2%, 20/21). The two observers agreed well that the image quality for large cerebral arteries was good but was more limited for small cerebral arteries (distal to M1, A1, and P1). Receiver-operating characteristic curves demonstrated excellent diagnostic value for detecting large vessel occlusions (area under the curve=0.987-1). Time-resolved CTAs derived from C-arm CT perfusion acquisitions provide high quality images that allowed accurate diagnosis of large vessel occlusions. Although the image quality of smaller arteries in this study was not optimal, ongoing modifications of the postprocessing algorithm will likely remove this limitation. Adding time-resolved C-arm CTAs to the capabilities of the angiography suite further enhances its suitability as a one-stop shop for the care of patients with acute ischemic stroke. © 2015 American Heart Association, Inc.
Fuzzy Logic Enhanced Digital PIV Processing Software
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1999-01-01
Digital Particle Image Velocimetry (DPIV) is an instantaneous, planar velocity measurement technique that is ideally suited for studying transient flow phenomena in high speed turbomachinery. DPIV is being actively used at the NASA Glenn Research Center to study both stable and unstable operating conditions in a high speed centrifugal compressor. Commercial PIV systems are readily available which provide near real time feedback of the PIV image data quality. These commercial systems are well designed to facilitate the expedient acquisition of PIV image data. However, as with any general purpose system, these commercial PIV systems do not meet all of the data processing needs required for PIV image data reduction in our compressor research program. An in-house PIV PROCessing (PIVPROC) code has been developed for reducing PIV data. The PIVPROC software incorporates fuzzy logic data validation for maximum information recovery from PIV image data. PIVPROC enables combined cross-correlation/particle tracking wherein the highest possible spatial resolution velocity measurements are obtained.
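The core cross-correlation step that PIVPROC (and commercial PIV codes) build on can be sketched as follows: the displacement of the particle pattern between two interrogation windows is taken from the location of the cross-correlation peak. This is a generic sketch, not NASA's fuzzy-logic validation code; sub-pixel peak fitting and data validation are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(win_a, win_b):
    """Integer-pixel displacement of the particle pattern from window A to B,
    estimated from the peak of their cross-correlation (computed via FFT)."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(b, a[::-1, ::-1], mode='full')    # cross-correlation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak[0] - (win_a.shape[0] - 1)                  # zero shift lies at (Ha-1, Wa-1)
    dx = peak[1] - (win_a.shape[1] - 1)
    return dx, dy
```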
Image reconstructions from super-sampled data sets with resolution modeling in PET imaging.
Li, Yusheng; Matej, Samuel; Metzler, Scott D
2014-12-01
Spatial resolution in positron emission tomography (PET) is still a limiting factor in many imaging applications. To improve the spatial resolution for an existing scanner with fixed crystal sizes, mechanical movements such as scanner wobbling and object shifting have been considered for PET systems. Multiple acquisitions from different positions can provide complementary information and increased spatial sampling. The objective of this paper is to explore an efficient and useful reconstruction framework to reconstruct super-resolution images from super-sampled low-resolution data sets. The authors introduce a super-sampling data acquisition model based on the physical processes with tomographic, downsampling, and shifting matrices as its building blocks. Based on the model, we extend the MLEM and Landweber algorithms to reconstruct images from super-sampled data sets. The authors also derive a backprojection-filtration-like (BPF-like) method for the super-sampling reconstruction. Furthermore, they explore variant methods for super-sampling reconstructions: the separate super-sampling resolution-modeling reconstruction and the reconstruction without downsampling to further improve image quality at the cost of more computation. The authors use simulated reconstruction of a resolution phantom to evaluate the three types of algorithms with different super-samplings at different count levels. Contrast recovery coefficient (CRC) versus background variability, as an image-quality metric, is calculated at each iteration for all reconstructions. The authors observe that all three algorithms can significantly and consistently achieve increased CRCs at fixed background variability and reduce background artifacts with super-sampled data sets at the same count levels. For the same super-sampled data sets, the MLEM method achieves better image quality than the Landweber method, which in turn achieves better image quality than the BPF-like method. The authors also demonstrate that the reconstructions from super-sampled data sets using a fine system matrix yield improved image quality compared to the reconstructions using a coarse system matrix. Super-sampling reconstructions with different count levels showed that the more spatial-resolution improvement can be obtained with higher count at a larger iteration number. The authors developed a super-sampling reconstruction framework that can reconstruct super-resolution images using the super-sampling data sets simultaneously with known acquisition motion. The super-sampling PET acquisition using the proposed algorithms provides an effective and economic way to improve image quality for PET imaging, which has an important implication in preclinical and clinical region-of-interest PET imaging applications.
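The MLEM variant used for the super-sampled data is not detailed in the abstract; for reference, a minimal dense-matrix MLEM sketch for a generic count model y ≈ A x is given below, where A would stand for the combined tomographic/downsampling/shifting operator of the super-sampling model (the names and the dense representation are assumptions for illustration).

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic MLEM iterations for Poisson data y with system matrix A.
    A: (n_measurements, n_voxels) nonnegative matrix, y: measured counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + eps              # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x + eps                  # forward projection of current estimate
        x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
    return x
```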
Truong, Quynh A.; Thai, Wai-ee; Wai, Bryan; Cordaro, Kevin; Cheng, Teresa; Beaudoin, Jonathan; Xiong, Guanglei; Cheung, Jim W.; Altman, Robert; Min, James K.; Singh, Jagmeet P.; Barrett, Conor D.; Danik, Stephan
2015-01-01
Background Myocardial scar is a substrate for ventricular tachycardia and sudden cardiac death. Late enhancement computed tomography (CT) imaging can detect scar, but it remains unclear whether newer late enhancement dual-energy (LE-DECT) acquisition has benefit over standard single-energy late enhancement (LE-CT). Objective We aim to compare late enhancement CT using newer LE-DECT acquisition and single-energy LE-CT acquisitions to pathology and electroanatomical map (EAM) in an experimental chronic myocardial infarction (MI) porcine study. Methods In 8 chronic MI pigs (59±5 kg), we performed dual-source CT, EAM, and pathology. For CT imaging, we performed 3 acquisitions at 10 minutes post-contrast: LE-CT 80 kV, LE-CT 100 kV, and LE-DECT with two post-processing software settings. Results Of the sequences, LE-CT 100 kV provided the best contrast-to-noise ratio (all p≤0.03) and correlation to pathology for scar (ρ=0.88). While LE-DECT overestimated scar (both p=0.02), LE-CT images did not (both p=0.08). On a segment basis (n=136), all CT sequences had high specificity (87–93%) and modest sensitivity (50–67%), with LE-CT 100 kV having the highest specificity of 93% for scar detection compared to pathology and agreement with EAM (κ 0.69). Conclusions Standard single-energy LE-CT, particularly 100kV, matched better to pathology and EAM than dual-energy LE-DECT for scar detection. Larger human trials as well as more technical-based studies that optimize varying different energies with newer hardware and software are warranted. PMID:25977115
Fully phase-encoded MRI near metallic implants using ultrashort echo times and broadband excitation.
Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Koch, Kevin M; Reeder, Scott B
2018-04-01
To develop a fully phase-encoded MRI method for distortion-free imaging near metallic implants, in clinically feasible acquisition times. An accelerated 3D fully phase-encoded acquisition with broadband excitation and ultrashort echo times is presented, which uses a broadband radiofrequency pulse to excite the entire off-resonance induced by the metallic implant. Furthermore, fully phase-encoded imaging is used to prevent distortions caused by frequency encoding, and to obtain ultrashort echo times for rapidly decaying signal. Phantom and in vivo acquisitions were used to describe the relationship among excitation bandwidth, signal loss near metallic implants, and T1 weighting. Shorter radiofrequency pulses captured signal closer to the implant by improving spectral coverage and allowing shorter echo times, whereas longer pulses improved T1 weighting through larger maximum attainable flip angles. Comparisons of fully phase-encoded acquisition with broadband excitation and ultrashort echo times to T1-weighted multi-acquisition with variable resonance image combination selective were performed in phantoms and subjects with metallic knee and hip prostheses. These acquisitions had similar contrast and acquisition efficiency. Accelerated fully phase-encoded acquisitions with ultrashort echo times and broadband excitation can generate distortion-free images near metallic implants in clinically feasible acquisition times. Magn Reson Med 79:2156-2163, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
Quantitative DLA-based compressed sensing for T1-weighted acquisitions
NASA Astrophysics Data System (ADS)
Svehla, Pavel; Nguyen, Khieu-Van; Li, Jing-Rebecca; Ciobanu, Luisa
2017-08-01
High resolution Manganese Enhanced Magnetic Resonance Imaging (MEMRI), which uses manganese as a T1 contrast agent, has great potential for functional imaging of live neuronal tissue at single neuron scale. However, reaching high resolutions often requires long acquisition times which can lead to reduced image quality due to sample deterioration and hardware instability. Compressed Sensing (CS) techniques offer the opportunity to significantly reduce the imaging time. The purpose of this work is to test the feasibility of CS acquisitions based on Diffusion Limited Aggregation (DLA) sampling patterns for high resolution quantitative T1-weighted imaging. Fully encoded and DLA-CS T1-weighted images of Aplysia californica neural tissue were acquired on a 17.2T MRI system. The MR signal corresponding to single, identified neurons was quantified for both versions of the T1 weighted images. For a 50% undersampling, DLA-CS can accurately quantify signal intensities in T1-weighted acquisitions leading to only 1.37% differences when compared to the fully encoded data, with minimal impact on image spatial resolution. In addition, we compared the conventional polynomial undersampling scheme with the DLA and showed that, for the data at hand, the latter performs better. Depending on the image signal to noise ratio, higher undersampling ratios can be used to further reduce the acquisition time in MEMRI based functional studies of living tissues.
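The exact construction of the DLA sampling patterns is not given in the abstract; the sketch below grows a generic on-grid diffusion-limited-aggregation cluster (random walkers sticking to a seed at the k-space center) and returns it as a binary sampling mask. The launch radius, walker handling and parameter values are illustrative assumptions, not the authors' pattern generator.

```python
import numpy as np

def dla_mask(n=128, n_points=2000, seed=0):
    """Grow a DLA cluster on an n x n grid, seeded at the center, and return
    it as a boolean k-space sampling mask (True = phase encode acquired)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    c = n // 2
    mask[c, c] = True
    r_max = 1                                   # current cluster radius
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(n_points - 1):
        r0 = min(r_max + 5, n // 2 - 2)         # launch ring just outside the cluster
        theta = rng.uniform(0.0, 2.0 * np.pi)
        x = c + int(r0 * np.cos(theta))
        y = c + int(r0 * np.sin(theta))
        while True:
            dx, dy = steps[rng.integers(4)]
            x, y = x + dx, y + dy
            if abs(x - c) >= n // 2 - 1 or abs(y - c) >= n // 2 - 1:
                # walker escaped: relaunch it on the ring
                theta = rng.uniform(0.0, 2.0 * np.pi)
                x = c + int(r0 * np.cos(theta))
                y = c + int(r0 * np.sin(theta))
                continue
            if mask[x - 1, y] or mask[x + 1, y] or mask[x, y - 1] or mask[x, y + 1]:
                mask[x, y] = True               # stick next to the cluster
                r_max = max(r_max, int(np.hypot(x - c, y - c)) + 1)
                break
    return mask
```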
Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji
2006-02-01
We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and the interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system that we used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the frequency band for HDTV.
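The paper's specific color filter array and interpolation scheme are not described in the abstract; as a point of reference, a plain bilinear demosaic of a standard RGGB Bayer mosaic can be written as a normalized convolution per channel, as sketched below (the layout and kernels are the textbook ones, not the authors' design).

```python
import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic (H x W array) into H x W x 3 RGB."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(mask, kernel):
        # normalized convolution: average the known samples of one channel
        num = convolve(np.where(mask, raw.astype(float), 0.0), kernel, mode='mirror')
        den = convolve(mask.astype(float), kernel, mode='mirror')
        return num / den

    k_rb = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])  # sparse R and B sites
    k_g = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]])   # denser G sites
    return np.stack([interp(r_mask, k_rb), interp(g_mask, k_g), interp(b_mask, k_rb)], axis=-1)
```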
Architecture for a PACS primary diagnosis workstation
NASA Astrophysics Data System (ADS)
Shastri, Kaushal; Moran, Byron
1990-08-01
A major factor in determining the overall utility of a medical Picture Archiving and Communications System (PACS) is the functionality of the diagnostic workstation. Meyer-Ebrecht and Wendler [1] have proposed a modular picture computer architecture with high throughput, and Perry et al. [2] have defined performance requirements for radiology workstations. In order to be clinically useful, a primary diagnosis workstation must not only provide the functions of current viewing systems (e.g., mechanical alternators [3,4]), such as acceptable image quality, simultaneous viewing of multiple images, and rapid switching of image banks, but must also provide a diagnostic advantage over the current systems. This includes window-level functions on any image, simultaneous display of multi-modality images, rapid image manipulation, image processing, dynamic image display (cine), electronic image archival, hardcopy generation, image acquisition, network support, and an easy user interface. Implementation of such a workstation requires an underlying hardware architecture which provides high-speed image transfer channels, local storage facilities, and image processing functions. This paper describes the hardware architecture of the Siemens Diagnostic Reporting Console (DRC), which meets these requirements.
Pain related inflammation analysis using infrared images
NASA Astrophysics Data System (ADS)
Bhowmik, Mrinal Kanti; Bardhan, Shawli; Das, Kakali; Bhattacharjee, Debotosh; Nath, Satyabrata
2016-05-01
Medical Infrared Thermography (MIT) offers a potential non-invasive, non-contact and radiation-free imaging modality for the assessment of abnormal, painful inflammation in the human body. The assessment of inflammation mainly depends on the emission of heat from the skin surface. Arthritis is a disease of joint damage that generates inflammation in one or more anatomical joints of the body. Osteoarthritis (OA) is the most frequently occurring form of arthritis, and rheumatoid arthritis (RA) is the most threatening form. In this study, an inflammatory analysis has been performed on infrared images of patients suffering from RA and OA. For the analysis, a dataset of 30 bilateral knee thermograms has been captured from patients with RA and OA by following a thermogram acquisition standard. The thermograms are pre-processed, and areas of interest are extracted for further processing. The investigation of the spread of inflammation is performed along with a statistical analysis of the pre-processed thermograms. The objectives of the study include: i) generation of a novel thermogram acquisition standard for inflammatory pain disease, ii) analysis of the spread of the inflammation related to RA and OA using K-means clustering, and iii) first- and second-order statistical analysis of the pre-processed thermograms. The conclusion reflects that, in most of the cases, RA-related inflammation affects both knees, whereas inflammation related to OA is present in a single knee. Also, due to the spread of inflammation in OA, contralateral asymmetries are detected through the statistical analysis.
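As a minimal illustration of the K-means step mentioned in objective ii), the sketch below clusters thermogram pixels by temperature and flags the warmest cluster as a crude inflammation region; the number of clusters and the use of scikit-learn are assumptions, not the authors' processing chain.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_thermogram(temps, k=3, seed=0):
    """Cluster a 2-D thermogram (array of temperatures) into k intensity groups
    and return the label image plus a mask of the warmest cluster."""
    flat = temps.reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(flat)
    label_img = labels.reshape(temps.shape)
    cluster_means = [flat[labels == i].mean() for i in range(k)]
    hottest = int(np.argmax(cluster_means))
    return label_img, label_img == hottest
```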
Mechanisms of rule acquisition and rule following in inductive reasoning.
Crescentini, Cristiano; Seyed-Allaei, Shima; De Pisapia, Nicola; Jovicich, Jorge; Amati, Daniele; Shallice, Tim
2011-05-25
Despite the recent interest in the neuroanatomy of inductive reasoning processes, the regional specificity within prefrontal cortex (PFC) for the different mechanisms involved in induction tasks remains to be determined. In this study, we used fMRI to investigate the contribution of PFC regions to rule acquisition (rule search and rule discovery) and rule following. Twenty-six healthy young adult participants were presented with a series of images of cards, each consisting of a set of circles numbered in sequence with one colored blue. Participants had to predict the position of the blue circle on the next card. The rules that had to be acquired pertained to the relationship among succeeding stimuli. Responses given by subjects were categorized in a series of phases either tapping rule acquisition (responses given up to and including rule discovery) or rule following (correct responses after rule acquisition). Mid-dorsolateral PFC (mid-DLPFC) was active during rule search and remained active until successful rule acquisition. By contrast, rule following was associated with activation in temporal, motor, and medial/anterior prefrontal cortex. Moreover, frontopolar cortex (FPC) was active throughout the rule acquisition and rule following phases before a rule became familiar. We attributed activation in mid-DLPFC to hypothesis generation and in FPC to integration of multiple separate inferences. The present study provides evidence that brain activation during inductive reasoning involves a complex network of frontal processes and that different subregions respond during rule acquisition and rule following phases.
Improvements in Speed and Functionality of a 670-GHz Imaging Radar
NASA Technical Reports Server (NTRS)
Dengler, Robert J.; Cooper, Ken B.; Mehdi, Imran; Siegel, Peter H.; Tarsala, Jan A.; Bryllert, Thomas E.
2011-01-01
Significant improvements have been made in the instrument originally described in a prior NASA Tech Briefs article: Improved Speed and Functionality of a 580-GHz Imaging Radar (NPO-45156), Vol. 34, No. 7 (July 2010), p. 51. First, the wideband YIG oscillator has been replaced with a JPL-designed and -built phase-locked, low-noise chirp source. Second, the data acquisition and signal processing software has been further refined by moving critical code sections to C, compiling those sections to Windows DLLs, and invoking them from the main LabVIEW executive. This system is an active, single-pixel scanned imager operating at 670 GHz. The chirp signals for the RF and LO chains were generated by a pair of MITEQ 2.5-3.3 GHz chirp sources. Agilent benchtop synthesizers operating at fixed frequencies around 13 GHz were then used to up-convert the chirp sources to 15.5-16.3 GHz. The resulting signals were multiplied by a factor of 36 using a combination of off-the-shelf millimeter-wave components and JPL-built 200-GHz doublers and 300- and 600-GHz triplers. The power required to drive the submillimeter-wave multipliers was provided by JPL-built W-band amplifiers. The receive and transmit signal paths were combined using a thin, high-resistivity silicon wafer as a beam splitter. While the results at present are encouraging, the system still lacks sufficient speed to be usable for practical applications in contraband detection. Ideally, an image acquisition time of ten seconds, or a factor of 30 improvement, is desired. The system improvements to date have resulted in a factor-of-five increase in signal acquisition speed, as well as enhanced signal processing algorithms, permitting clearer imaging of contraband objects hidden underneath clothing. In particular, advances in three distinct areas have enabled these performance enhancements: base source phase-noise reduction, chirp rate, and signal processing. Additionally, a second pixel was added, reducing the imaging time by a factor of two. Although adding a second pixel doubles the amount of submillimeter-wave components required, some savings in microwave hardware can be realized by using a common low-noise source.
Implementation of sobel method to detect the seed rubber plant leaves
NASA Astrophysics Data System (ADS)
Suyanto; Munte, J.
2018-03-01
This research was conducted to develop a system that can identify and recognize the type of rubber tree based on the pattern of its leaves. The research steps start with image data acquisition, followed by image processing, image edge detection, and identification by template matching. Edge detection uses the Sobel operator. For pattern recognition, the input image is compared against other images stored in a database of templates. Experiments were carried out in a single phase, identification of the leaf edge, using 14 images of superior rubber-plant leaves and 5 test images for each type (clone) of the plant. The experimental results give a recognition rate of 91.79%.
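A hedged sketch of the pipeline described above, using OpenCV's Sobel operator with normalised cross-correlation as the template-matching step; the function names, the resize step, and the scoring rule are illustrative assumptions, not the authors' code.

import cv2
import numpy as np

def sobel_edges(gray):
    # Gradient magnitude from horizontal and vertical Sobel responses.
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return cv2.convertScaleAbs(cv2.magnitude(gx, gy))   # 8-bit edge map

def best_matching_clone(leaf_img, templates):
    # leaf_img: BGR image of the query leaf; templates: list of 8-bit edge maps.
    query = sobel_edges(cv2.cvtColor(leaf_img, cv2.COLOR_BGR2GRAY))
    scores = []
    for tpl in templates:
        tpl_r = cv2.resize(tpl, (query.shape[1], query.shape[0]))
        scores.append(float(cv2.matchTemplate(query, tpl_r,
                                              cv2.TM_CCOEFF_NORMED).max()))
    return int(np.argmax(scores)), max(scores)           # clone index, best score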
USDA-ARS?s Scientific Manuscript database
The acquisition of hyperspectral microscopic images containing both spatial and spectral data has shown potential for the early and rapid optical classification of foodborne pathogens. A hyperspectral microscope with a metal halide light source and acousto-optical tunable filter (AOTF) collects 89 ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arinilhaq,; Widita, Rena
2014-09-30
Optical Coherence Tomography (OCT) is often used in medical image acquisition to diagnose retinal changes because it is easy to use and relatively inexpensive. Unfortunately, this type of examination produces only a two-dimensional retinal image at the point of acquisition. Therefore, this study developed a method that combines and reconstructs 2-dimensional retinal images into a three-dimensional image to display the macular volume accurately. The system is built in three main stages: data acquisition, data extraction, and 3-dimensional reconstruction. At the data acquisition step, Optical Coherence Tomography produced six *.jpg images for each patient, which were further extracted with MATLAB 2010a software into six one-dimensional arrays. The six arrays are combined into a 3-dimensional matrix using a kriging interpolation method in SURFER9, resulting in 3-dimensional graphics of the macula. Finally, the system provides three-dimensional color graphs based on the data distribution of the normal macula. The reconstruction system produces three-dimensional images with a size of 481 × 481 × h (retinal thickness) pixels.
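An illustrative sketch of the reconstruction step only: the paper performs kriging interpolation in SURFER9, whereas this Python stand-in uses ordinary scattered-data interpolation (scipy griddata); the array shapes and the grid size are assumptions.

import numpy as np
from scipy.interpolate import griddata

def reconstruct_macula(scan_positions, thickness_values, grid_size=481):
    # scan_positions: (N, 2) x/y locations of the extracted profile points;
    # thickness_values: (N,) retinal thickness at those points.
    xi = np.linspace(scan_positions[:, 0].min(), scan_positions[:, 0].max(), grid_size)
    yi = np.linspace(scan_positions[:, 1].min(), scan_positions[:, 1].max(), grid_size)
    XI, YI = np.meshgrid(xi, yi)
    surface = griddata(scan_positions, thickness_values, (XI, YI), method='cubic')
    return XI, YI, surface                               # grid_size x grid_size thickness map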
NASA Astrophysics Data System (ADS)
Nitze, Ingmar; Barrett, Brian; Cawkwell, Fiona
2015-02-01
The analysis and classification of land cover is one of the principal applications in terrestrial remote sensing. Due to the seasonal variability of different vegetation types and land surface characteristics, the ability to discriminate land cover types changes over time. Multi-temporal classification can help to improve the classification accuracies, but different constraints, such as financial restrictions or atmospheric conditions, may impede their application. The optimisation of image acquisition timing and frequencies can help to increase the effectiveness of the classification process. For this purpose, the Feature Importance (FI) measure of the state-of-the-art machine learning method Random Forest was used to determine the optimal image acquisition periods for a general (Grassland, Forest, Water, Settlement, Peatland) and Grassland specific (Improved Grassland, Semi-Improved Grassland) land cover classification in central Ireland based on a 9-year time-series of MODIS Terra 16 day composite data (MOD13Q1). Feature Importances for each acquisition period of the Enhanced Vegetation Index (EVI) and Normalised Difference Vegetation Index (NDVI) were calculated for both classification scenarios. In the general land cover classification, the months December and January showed the highest, and July and August the lowest separability for both VIs over the entire nine-year period. This temporal separability was reflected in the classification accuracies, where the optimal choice of image dates outperformed the worst image date by 13% using NDVI and 5% using EVI on a mono-temporal analysis. With the addition of the next best image periods to the data input, the classification accuracies converged quickly to their limit at around 8-10 images. The binary classification schemes, using two classes only, showed a stronger seasonal dependency with a higher intra-annual, but lower inter-annual variation. Nonetheless, anomalous weather conditions, such as the cold winter of 2009/2010, can alter the temporal separability pattern significantly. Due to the extensive use of the NDVI for land cover discrimination, the findings of this study should be transferable to data from other optical sensors with a higher spatial resolution. However, the high impact of outliers from the general climatic pattern highlights the limitation of spatial transferability to locations with different climatic and land cover conditions. The use of high-temporal, moderate resolution data such as MODIS in conjunction with machine-learning techniques proved to be a good base for the prediction of image acquisition timing for optimal land cover classification results.
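A minimal sketch (not the authors' pipeline) of ranking acquisition periods by Random Forest Feature Importance; the feature matrix layout and the period labels are assumptions made for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def acquisition_period_ranking(X, y, period_labels, n_estimators=500):
    # X: one NDVI or EVI value per 16-day composite (columns) for each training
    # pixel (rows); y: land cover class labels; period_labels: one name per column.
    rf = RandomForestClassifier(n_estimators=n_estimators, oob_score=True,
                                random_state=0).fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]
    return [(period_labels[i], float(rf.feature_importances_[i])) for i in order]

The top-ranked periods returned here would correspond to the acquisition dates expected to give the best mono-temporal classification.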
NASA Technical Reports Server (NTRS)
Reiber, J. H. C.
1976-01-01
To automate the data acquisition procedure, a real-time contour detection and data acquisition system for the left ventricular outline was developed using video techniques. The X-ray image of the contrast-filled left ventricle is stored for subsequent processing on film (cineangiogram), video tape, or disc. The cineangiogram is converted into video format using a television camera. The video signal from either the TV camera, video tape, or disc is the input signal to the system. The contour detection is based on a dynamic thresholding technique. Since the left ventricular outline is a smooth continuous function, for each contour side a narrow expectation window is defined in which the next border point will be detected. A computer interface was designed and built for the online acquisition of the coordinates using a PDP-12 computer. The advantage of this system over other available systems is its potential for online, real-time acquisition of the left ventricular size and shape during angiocardiography.
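A hedged software sketch of the dynamic-thresholding idea with a per-line expectation window; the original system was implemented in video hardware, so the function below only illustrates the principle, and the window size and threshold fraction are assumptions.

import numpy as np

def trace_border(image, start_col, window=5, threshold_frac=0.5):
    # image: 2D array with one video line per row; returns one border column per row.
    border = [start_col]
    for row in image:
        lo = max(0, border[-1] - window)
        hi = min(len(row), border[-1] + window + 1)
        segment = row[lo:hi]
        # Dynamic threshold: a fraction of the local intensity range in the window.
        thr = segment.min() + threshold_frac * (segment.max() - segment.min())
        crossings = np.nonzero(segment > thr)[0]
        border.append(lo + int(crossings[0]) if crossings.size else border[-1])
    return np.array(border[1:])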
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
It has been five years since the last in-depth American College of Nuclear Physicians/Society of Nuclear Medicine Symposium on the subject of single photon emission computed tomography (SPECT) was held. Because this subject was nominated as the single most desired topic, we have selected SPECT imaging as the basis for this year's program. The objectives of this symposium are to survey the progress of SPECT clinical applications that have taken place over the last five years and to provide practical and timely guidelines to users of SPECT so that this exciting imaging modality can be fully integrated into the evaluation of pathologic processes. The first half was devoted to a consideration of technical factors important in SPECT acquisition and the second half was devoted to those organ systems about which sufficient clinical SPECT imaging data are available. With respect to the technical aspect of the program, we have selected the key areas which demand awareness and attention in order to make SPECT operational in clinical practice. These include selection of equipment, details of uniformity correction, utilization of phantoms for equipment acceptance and quality assurance, the major aspect of algorithms, an understanding of filtered back projection and appropriate choice of filters, and an awareness of the most commonly generated artifacts and how to recognize them. With respect to the acquisition and interpretation of organ images, the faculty will present information on the major aspects of hepatic, brain, cardiac, skeletal, and immunologic imaging techniques. Individual papers are processed separately for the data base. (TEM)
Dzyubachyk, Oleh; Khmelinskii, Artem; Plenge, Esben; Kok, Peter; Snoeks, Thomas J A; Poot, Dirk H J; Löwik, Clemens W G M; Botha, Charl P; Niessen, Wiro J; van der Weerd, Louise; Meijering, Erik; Lelieveldt, Boudewijn P F
2014-01-01
In small animal imaging studies, when the locations of the micro-structures of interest are unknown a priori, there is a simultaneous need for full-body coverage and high resolution. In MRI, additional requirements to image contrast and acquisition time will often make it impossible to acquire such images directly. Recently, a resolution enhancing post-processing technique called super-resolution reconstruction (SRR) has been demonstrated to improve visualization and localization of micro-structures in small animal MRI by combining multiple low-resolution acquisitions. However, when the field-of-view is large relative to the desired voxel size, solving the SRR problem becomes very expensive, in terms of both memory requirements and computation time. In this paper we introduce a novel local approach to SRR that aims to overcome the computational problems and allow researchers to efficiently explore both global and local characteristics in whole-body small animal MRI. The method integrates state-of-the-art image processing techniques from the areas of articulated atlas-based segmentation, planar reformation, and SRR. A proof-of-concept is provided with two case studies involving CT, BLI, and MRI data of bone and kidney tumors in a mouse model. We show that local SRR-MRI is a computationally efficient complementary imaging modality for the precise characterization of tumor metastases, and that the method provides a feasible high-resolution alternative to conventional MRI.
Molloy, Erin K; Meyerand, Mary E; Birn, Rasmus M
2014-02-01
Functional MRI blood oxygen level-dependent (BOLD) signal changes can be subtle, motivating the use of imaging parameters and processing strategies that maximize the temporal signal-to-noise ratio (tSNR) and thus the detection power of neuronal activity-induced fluctuations. Previous studies have shown that acquiring data at higher spatial resolutions results in greater percent BOLD signal changes, and furthermore that spatially smoothing higher resolution fMRI data improves tSNR beyond that of data originally acquired at a lower resolution. However, higher resolution images come at the cost of increased acquisition time, and the number of image volumes also influences detectability. The goal of our study is to determine how the detection power of neuronally induced BOLD fluctuations acquired at higher spatial resolutions and then spatially smoothed compares to data acquired at the lower resolutions with the same imaging duration. The number of time points acquired during a given amount of imaging time is a practical consideration given the limited ability of certain populations to lie still in the MRI scanner. We compare acquisitions at three different in-plane spatial resolutions (3.50×3.50 mm², 2.33×2.33 mm², 1.75×1.75 mm²) in terms of their tSNR, contrast-to-noise ratio, and the power to detect both task-related activation and resting-state functional connectivity. The impact of SENSE acceleration, which shortens acquisition time and thus increases the number of images collected, is also evaluated. Our results show that after spatially smoothing the data to the same intrinsic resolution, lower resolution acquisitions have a slightly higher detection power of task-activation in some, but not all, brain areas. There were no significant differences in functional connectivity as a function of resolution after smoothing. Similarly, the reduced tSNR of fMRI data acquired with a SENSE factor of 2 is offset by the greater number of images acquired, resulting in few significant differences in detection power of either functional activation or connectivity after spatial smoothing. © 2013.
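An illustrative Python sketch of the tSNR comparison, assuming a 4D fMRI array and in-plane Gaussian smoothing with scipy; it is not the authors' processing pipeline, and the FWHM-to-sigma conversion is the usual Gaussian relation.

import numpy as np
from scipy.ndimage import gaussian_filter

def tsnr(series):
    # series: 4D array (x, y, z, time); returns a 3D tSNR map (mean / std over time).
    return series.mean(axis=-1) / (series.std(axis=-1) + 1e-12)

def smoothed_tsnr(series, fwhm_vox):
    # In-plane Gaussian smoothing of each volume before recomputing tSNR.
    sigma = fwhm_vox / 2.355                             # FWHM = 2*sqrt(2*ln 2)*sigma
    smoothed = np.stack([gaussian_filter(series[..., t], sigma=(sigma, sigma, 0))
                         for t in range(series.shape[-1])], axis=-1)
    return tsnr(smoothed)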
WHOLE BODY NONRIGID CT-PET REGISTRATION USING WEIGHTED DEMONS.
Suh, J W; Kwon, Oh-K; Scheinost, D; Sinusas, A J; Cline, Gary W; Papademetris, X
2011-03-30
We present a new registration method for whole-body rat computed tomography (CT) image and positron emission tomography (PET) images using a weighted demons algorithm. The CT and PET images are acquired in separate scanners at different times and the inherent differences in the imaging protocols produced significant nonrigid changes between the two acquisitions in addition to heterogeneous image characteristics. In this situation, we utilized both the transmission-PET and the emission-PET images in the deformable registration process emphasizing particular regions of the moving transmission-PET image using the emission-PET image. We validated our results with nine rat image sets using M-Hausdorff distance similarity measure. We demonstrate improved performance compared to standard methods such as Demons and normalized mutual information-based non-rigid FFD registration.
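For orientation, a hedged sketch of a standard (unweighted) demons registration using SimpleITK; the paper's key contribution, weighting the demons update with the emission-PET image, is not reproduced here, and the iteration count and smoothing value are assumptions.

import SimpleITK as sitk

def demons_register(fixed, moving, iterations=100, smoothing_sigma=2.0):
    # Standard intensity-based demons: returns the moving image resampled onto
    # the fixed image grid through the estimated displacement field.
    fixed_f = sitk.Cast(fixed, sitk.sitkFloat32)
    moving_f = sitk.Cast(moving, sitk.sitkFloat32)
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(smoothing_sigma)        # smoothing of the field
    displacement = demons.Execute(fixed_f, moving_f)
    transform = sitk.DisplacementFieldTransform(displacement)
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)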
Flexible real-time magnetic resonance imaging framework.
Santos, Juan M; Wright, Graham A; Pauly, John M
2004-01-01
The extension of MR imaging to new applications has demonstrated the limitations of the architecture of current real-time systems. Traditional real-time implementations provide continuous acquisition of data and modification of basic sequence parameters on the fly. We have extended the concept of real-time MRI by designing a system that drives the examination from a real-time localizer and is then reconfigured for different imaging modes. Upon operator request or automatic feedback, the system can immediately generate a new pulse sequence or change fundamental aspects of the acquisition such as gradient waveforms, excitation pulses, and scan planes. This framework has been implemented by connecting a data processing and control workstation to a conventional clinical scanner. Key components in the design of this framework are the data communication and control mechanisms, reconstruction algorithms optimized for real-time operation and adaptability, a flexible user interface, and extensible user interaction. In this paper we describe the various components that comprise this system. Some of the applications implemented in this framework include real-time catheter tracking embedded in high frame rate real-time imaging, and immediate switching between a real-time localizer and high-resolution volume imaging for coronary angiography applications.
Design and implementation of a contactless multiple hand feature acquisition system
NASA Astrophysics Data System (ADS)
Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David
2012-06-01
In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner, that is, users need not touch any part of the device during capture. Palmprint is imaged under visible illumination, while palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. The capturing is controlled by computer and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted sum fusion rule. Experimental results show that although the verification accuracy of each uni-modality is not as high as that of the state-of-the-art, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is competent for the application of contactless multimodal hand-based biometrics.
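A minimal sketch of weighted-sum score fusion, assuming the per-modality match scores have already been normalised to a common range; the weights shown are illustrative, not the values used by the authors.

import numpy as np

def fuse_scores(scores, weights):
    # scores: dict modality -> normalised match score; weights: same keys.
    w = np.array([weights[m] for m in scores])
    s = np.array([scores[m] for m in scores])
    return float(np.dot(w, s) / w.sum())

# Example with hypothetical scores and weights (illustration only):
fused = fuse_scores(
    {'palmprint': 0.82, 'palm_vein': 0.74, 'dorsal_vein': 0.69,
     'finger_vein': 0.77, 'hand_geometry': 0.55},
    {'palmprint': 0.30, 'palm_vein': 0.25, 'dorsal_vein': 0.20,
     'finger_vein': 0.15, 'hand_geometry': 0.10})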
Bednarkiewicz, Artur; Whelan, Maurice P
2008-01-01
Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements.
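An illustrative sketch of the lifetime-fitting step only: a mono-exponential photon-counting decay fitted with scipy and seeded with global good initial parameters (GIP); the decay model and the GIP values are assumptions, not the authors' fitting code.

import numpy as np
from scipy.optimize import curve_fit

def decay(t, amplitude, tau, offset):
    # Mono-exponential fluorescence decay model.
    return amplitude * np.exp(-t / tau) + offset

def fit_lifetime(t_ns, counts, gip=(1000.0, 2.5, 10.0)):
    # gip: (amplitude, tau_ns, offset) measured globally beforehand and supplied
    # to the first iteration of the fit.
    popt, _ = curve_fit(decay, t_ns, counts, p0=gip, maxfev=5000)
    return float(popt[1])                                # fitted lifetime tau in ns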
Hosseinbor, A. Pasha; Chung, Moo K.; Wu, Yu-Chien; Alexander, Andrew L.
2012-01-01
The ensemble average propagator (EAP) describes the 3D average diffusion process of water molecules, capturing both its radial and angular contents. The EAP can thus provide richer information about complex tissue microstructure properties than the orientation distribution function (ODF), an angular feature of the EAP. Recently, several analytical EAP reconstruction schemes for multiple q-shell acquisitions have been proposed, such as diffusion propagator imaging (DPI) and spherical polar Fourier imaging (SPFI). In this study, a new analytical EAP reconstruction method is proposed, called Bessel Fourier orientation reconstruction (BFOR), whose solution is based on heat equation estimation of the diffusion signal for each shell acquisition, and is validated on both synthetic and real datasets. A significant portion of the paper is dedicated to comparing BFOR, SPFI, and DPI using hybrid, non-Cartesian sampling for multiple b-value acquisitions. Ways to mitigate the effects of Gibbs ringing on EAP reconstruction are also explored. In addition to analytical EAP reconstruction, the aforementioned modeling bases can be used to obtain rotationally invariant q-space indices of potential clinical value, an avenue which has not yet been thoroughly explored. Three such measures are computed: zero-displacement probability (Po), mean squared displacement (MSD), and generalized fractional anisotropy (GFA). PMID:22963853
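As a hedged illustration of one of the rotationally invariant indices mentioned above, the sketch below computes generalized fractional anisotropy (GFA) from ODF samples as the ratio of their standard deviation to their root-mean-square value (ignoring the small sample-size correction); Po and MSD require the full EAP and are not shown.

import numpy as np

def gfa(odf_samples):
    # odf_samples: ODF values sampled over a set of unit directions.
    psi = np.asarray(odf_samples, dtype=float)
    rms = np.sqrt(np.mean(psi ** 2))
    return float(np.std(psi) / rms) if rms > 0 else 0.0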
Acquiring skill at medical image inspection: learning localized in early visual processes
NASA Astrophysics Data System (ADS)
Sowden, Paul T.; Davies, Ian R. L.; Roling, Penny; Watt, Simon J.
1997-04-01
Acquisition of the skill of medical image inspection could be due to changes in visual search processes, 'low-level' sensory learning, and higher level 'conceptual learning.' Here, we report two studies that investigate the extent to which learning in medical image inspection involves low-level learning. Early in the visual processing pathway, cells are selective for direction of luminance contrast. We exploit this in the present studies by using transfer across direction of contrast as a 'marker' to indicate the level of processing at which learning occurs. In both studies, twelve observers trained for four days at detecting features in x-ray images (experiment one: discs in the Nijmegen phantom; experiment two: micro-calcification clusters in digitized mammograms). Half the observers examined negative luminance contrast versions of the images and the remainder examined positive contrast versions. On the fifth day, observers swapped to inspect their respective opposite contrast images. In both experiments, learning occurred across sessions. In experiment one, learning did not transfer across direction of luminance contrast, while in experiment two there was only partial transfer. These findings are consistent with the contention that some of the learning was localized early in the visual processing pathway. The implications of these results for current medical image inspection training schedules are discussed.
Experimental teaching and training system based on volume holographic storage
NASA Astrophysics Data System (ADS)
Jiang, Zhuqing; Wang, Zhe; Sun, Chan; Cui, Yutong; Wan, Yuhong; Zou, Rufei
2017-08-01
The experiment of volume holographic storage for teaching and training the practical abilities of senior students in Applied Physics is introduced. Through this experiment the students learn to use advanced optoelectronic devices and automatic control methods, and gain a deeper understanding of the theoretical knowledge of optical information processing and photonics covered in their courses. In the experiment, multiplexed holographic recording and readout are based on the Bragg selectivity of a volume holographic grating, in which the Bragg diffraction angle depends on the grating-recording angle. By using different interference angles between the reference and object beams, the holograms can be recorded into a photorefractive crystal, and the object images can then be read out from these holograms via angular addressing using the original reference beam. In this system, the experimental data acquisition and the control of the optoelectronic devices, such as shutter on/off switching, image loading on the SLM, and image acquisition with a CCD sensor, are automated using LabVIEW programming.
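A small illustrative calculation (not from the paper): in its simplest form, ignoring refraction into the recording medium, the Bragg condition 2·Λ·sin(θ_B) = λ links the grating period to the readout angle, which is why angular addressing with the original reference beam can select individual holograms.

import numpy as np

def bragg_angle_deg(wavelength_nm, grating_period_nm):
    # Bragg angle for a grating of period Lambda read out at wavelength lambda.
    return float(np.degrees(np.arcsin(wavelength_nm / (2.0 * grating_period_nm))))

# Example (hypothetical values): a 532 nm beam and a 1 um grating period give
# a Bragg angle of roughly 15.4 degrees.
print(bragg_angle_deg(532.0, 1000.0))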
2010-11-05
The Food and Drug Administration (FDA) is announcing the reclassification of the full-field digital mammography (FFDM) system from class III (premarket approval) to class II (special controls). The device type is intended to produce planar digital x-ray images of the entire breast; this generic type of device may include digital mammography acquisition software, full-field digital image receptor, acquisition workstation, automatic exposure control, image processing and reconstruction programs, patient and equipment supports, component parts, and accessories. The special control that will apply to the device is the guidance document entitled "Class II Special Controls Guidance Document: Full-Field Digital Mammography System." FDA is reclassifying the device into class II (special controls) because general controls along with special controls will provide a reasonable assurance of safety and effectiveness of the device. Elsewhere in this issue of the Federal Register, FDA is announcing the availability of the guidance document that will serve as the special control for this device.
Design method of ARM based embedded iris recognition system
NASA Astrophysics Data System (ADS)
Wang, Yuanbo; He, Yuqing; Hou, Yushi; Liu, Ting
2008-03-01
With the advantages of non-invasiveness, uniqueness, stability and a low false recognition rate, iris recognition has been successfully applied in many fields. Up to now, most iris recognition systems have been based on a PC. However, a PC is not portable and needs more power. In this paper, we propose an embedded iris recognition system based on ARM. Considering the requirements of iris image acquisition and the recognition algorithm, we analyzed the design of the iris image acquisition module, designed the ARM processing module and its peripherals, studied the Linux platform and the recognition algorithm running on it, and finally realized the design of an ARM-based iris imaging and recognition system. Experimental results show that the ARM platform we used is fast enough to run the iris recognition algorithm, and the data stream flows smoothly between the camera and the ARM chip on the embedded Linux system. It is an effective method of using ARM to realize a portable embedded iris recognition system.
Heuristic Enhancement of Magneto-Optical Images for NDE
NASA Astrophysics Data System (ADS)
Cacciola, Matteo; Megali, Giuseppe; Pellicanò, Diego; Calcagno, Salvatore; Versaci, Mario; Morabito, Francesco Carlo
2010-12-01
The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each different technique, like the magneto-optic imaging here treated, is affected by some special types of noise which are related to the specific device used for their acquisition. Therefore, the design of even more accurate image processing is often required by relevant applications, for instance, in implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. A comparison of the proposed approach with some other advanced image processing methodologies used for denoising magneto-optic images (MOIs) is carried out, in order to show advantages and weakness of ICA in improving the accuracy and performance of the rivets/flaw detection.
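A hedged sketch of ICA-based preprocessing on a stack of co-registered magneto-optic frames using scikit-learn's FastICA; the number of components and the choice of which components to keep are illustrative assumptions, not the authors' selection rule.

import numpy as np
from sklearn.decomposition import FastICA

def ica_denoise(frames, n_components=8, keep=(0, 1, 2)):
    # frames: 3D array (n_frames, rows, cols) of co-registered MO images;
    # n_components must not exceed the number of frames.
    n, r, c = frames.shape
    X = frames.reshape(n, -1)
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(X.T)                     # (pixels, components)
    mask = np.zeros(n_components, dtype=bool)
    mask[list(keep)] = True                              # keep presumed signal components
    cleaned = sources[:, mask] @ ica.mixing_[:, mask].T + ica.mean_
    return cleaned.T.reshape(n, r, c)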
Catchings, Rufus D.; Rymer, Michael J.; Goldman, Mark R.; Bawden, Gerald W.
2010-01-01
In a comment on our 2008 paper (Catchings, Gandhok, et al., 2008) on the Santa Monica fault in Los Angeles, California, Pratt and Dolan (2010) (herein referred to as P&D) cite numerous objections to our work, inferring that our study is flawed. However, as shown in our reply, their objections contradict their own published works, published works of others, and proven seismic methodologies. Rather than responding to each repeated invalid objection, we address their objections by topic in the subsequent sections. In Catchings, Gandhok, et al. (2008), we presented high-resolution seismic-reflection images that showed two near-surface faults in the upper 50 m beneath the grounds of the Wadsworth Veterans Administration Hospital (WVAH). Although P&D suggest we effectively duplicated their seismic acquisition, our survey was not a duplication of their efforts. Rather, we conducted a seismic-imaging survey over a similar profile as Pratt et al. (1998) but used a different data acquisition system and different data processing methods to evaluate methods of seismically imaging blind faults in the wake of the 17 January 1994 M 6.7 Northridge earthquake. We used an acquisition method that provides both tomographic seismic velocities and reflection images. Our combined-data approach allowed for shallower imaging (∼2.5 m minimum) than the ∼20-m minimum of Pratt et al. (1998), clearer images of the fault zone, and more accurate depth determinations (rather than time images). In processing the reflection images, we used prestack depth migration, which is generally accepted as the only proper imaging method for imaging subsurface structures with strong lateral velocity variations (Versteeg, 1993), a condition shown to exist at the WVAH site. We correlated our reflection images with refraction tomography images, borehole lithology and velocity data, Interferometric Synthetic Aperture Radar images, and changes in groundwater depths. Except for some minor differences, our seismic-reflection images coincide with previously published seismic-reflection images by Dolan and Pratt (1997) and Pratt et al. (1998), and a paleoseismic study by Dolan et al. (2000). Principal differences among our interpretations and those of Pratt et al. (1998) relate to the upper 20 m and the south side of the fault, which Pratt et al. (1998) did not clearly image. In contrast, our seismic images included structures on both sides of the fault zone from about 2.5 m depth to about 100 m depth at WVAH, allowing us to interpret more details.
Uav Photogrammetry with Oblique Images: First Analysis on Data Acquisition and Processing
NASA Astrophysics Data System (ADS)
Aicardi, I.; Chiabrando, F.; Grasso, N.; Lingua, A. M.; Noardo, F.; Spanò, A.
2016-06-01
In recent years, many studies have revealed the advantages of using airborne oblique images for obtaining improved 3D city models (e.g. including façades and building footprints). Such data are usually acquired by expensive airborne cameras installed on traditional aerial platforms. The purpose of this paper is to evaluate the possibility of acquiring and using oblique images for the 3D reconstruction of a historical building, obtained by a UAV (Unmanned Aerial Vehicle) and traditional COTS (Commercial Off-the-Shelf) digital cameras (more compact and lighter than the devices generally used), for the realization of a high-level-of-detail architectural survey. The critical issues of the acquisitions from a common UAV (flight planning strategies, ground control points, check point distribution and measurement, etc.) are described. Another important aspect considered was the evaluation of the possibility of using such systems as low-cost methods for obtaining complete information from an aerial point of view in emergency situations or, as in the present paper, in the cultural heritage application field. The data processing was realized using an SfM-based approach for point cloud generation: different dense image-matching algorithms implemented in some commercial and open source software packages were tested. The achieved results are analysed and the discrepancies from some reference LiDAR data are computed for a final evaluation. The system was tested on the S. Maria Chapel, a part of the Novalesa Abbey (Italy).
de Souza, John Kennedy Schettino; Pinto, Marcos Antonio da Silva; Vieira, Pedro Gabrielle; Baron, Jerome; Tierra-Criollo, Carlos Julio
2013-12-01
The dynamic, accurate measurement of pupil size is extremely valuable for studying a large number of neuronal functions and dysfunctions. Despite tremendous and well-documented progress in image processing techniques for estimating pupil parameters, comparatively little work has been reported on the practical hardware issues involved in designing image acquisition systems for pupil analysis. Here, we describe and validate the basic features of such a system, which is based on a relatively compact, off-the-shelf, low-cost FireWire digital camera. We successfully implemented two configurable modes of video recording: a continuous mode and an event-triggered mode. The interoperability of the whole system is guaranteed by a set of modular software components hosted on a personal computer and written in LabVIEW. An offline analysis suite of image processing algorithms for automatically estimating pupillary and eyelid parameters was assessed using data obtained in human subjects. Our benchmark results show that such measurements can be done in a temporally precise way at a sampling frequency of up to 120 Hz and with an estimated maximum spatial resolution of 0.03 mm. Our software is made available free of charge to the scientific community, allowing end users to either use the software as is or modify it to suit their own needs. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
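A hedged sketch of one common offline approach to pupil parameter estimation (dark-blob thresholding followed by ellipse fitting with OpenCV 4.x); the authors' LabVIEW analysis suite may use different algorithms, and the threshold value is an assumption.

import cv2

def pupil_parameters(gray, threshold=40):
    # Returns (centre_x, centre_y, major_axis, minor_axis) in pixels, or None.
    _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    pupil = max(contours, key=cv2.contourArea)           # largest dark blob
    if len(pupil) < 5:                                   # fitEllipse needs >= 5 points
        return None
    (cx, cy), (ax1, ax2), _ = cv2.fitEllipse(pupil)
    return cx, cy, max(ax1, ax2), min(ax1, ax2)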
Multimodal imaging of temporal processing in typical and atypical language development.
Kovelman, Ioulia; Wagley, Neelima; Hay, Jessica S F; Ugolini, Margaret; Bowyer, Susan M; Lajiness-O'Neill, Renee; Brennan, Jonathan
2015-03-01
New approaches to understanding language and reading acquisition propose that the human brain's ability to synchronize its neural firing rate to syllable-length linguistic units may be important to children's ability to acquire human language. Yet, little evidence from brain imaging studies has been available to support this proposal. Here, we summarize three recent brain imaging (functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), and magnetoencephalography (MEG)) studies from our laboratories with young English-speaking children (aged 6-12 years). In the first study (fNIRS), we used an auditory beat perception task to show that, in children, the left superior temporal gyrus (STG) responds preferentially to rhythmic beats at 1.5 Hz. In the second study (fMRI), we found correlations between children's amplitude rise-time sensitivity, phonological awareness, and brain activation in the left STG. In the third study (MEG), typically developing children outperformed children with autism spectrum disorder in extracting words from rhythmically rich foreign speech and displayed different brain activation during the learning phase. The overall findings suggest that the efficiency with which left temporal regions process slow temporal (rhythmic) information may be important for gains in language and reading proficiency. These findings carry implications for better understanding of the brain's mechanisms that support language and reading acquisition during both typical and atypical development. © 2014 New York Academy of Sciences.
NASA Astrophysics Data System (ADS)
Strocchi, S.; Ghielmi, M.; Basilico, F.; Macchi, A.; Novario, R.; Ferretti, R.; Binaghi, E.
2016-03-01
This work quantitatively evaluates the effects induced by the susceptibility characteristics of materials commonly used in dental practice on the quality of head MR images in a clinical 1.5T device. The proposed evaluation procedure measures the image artifacts induced by susceptibility in MR images by providing an index consistent with the global degradation as perceived by experts. Susceptibility artifacts were evaluated in a near-clinical setup, using a phantom with susceptibility and geometric characteristics similar to those of a human head. We tested different dental materials, namely PAL Keramit, Ti6Al4V-ELI, Keramit NP, ILOR F, and Zirconia, and used different clinical MR acquisition sequences, such as "classical" SE and fast, gradient, and diffusion sequences. The evaluation is designed as a matching process between reference and artifact-affected images recording the same scene. The extent of the degradation induced by susceptibility is then measured in terms of similarity with the corresponding reference image. The matching process involves a multimodal registration task and the use of an adequate, psychophysically validated similarity index based on the correlation coefficient. The proposed analyses are integrated within a computer-supported procedure that interactively guides the users in the different phases of the evaluation method. 2-dimensional and 3-dimensional indexes are used for each material and each acquisition sequence. From these, we drew a ranking of the materials by averaging the results obtained. Zirconia and ILOR F appear to be the best choice from the susceptibility artifact point of view, followed, in order, by PAL Keramit, Ti6Al4V-ELI and Keramit NP.
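A minimal sketch (not the authors' validated index) of a correlation-coefficient similarity measure between the registered reference image and the artifact-affected image; the optional mask argument is an assumption added for illustration.

import numpy as np

def correlation_index(reference, degraded, mask=None):
    # Pearson correlation between co-registered images; values near 1.0 indicate
    # little susceptibility-induced degradation, lower values more degradation.
    ref = reference[mask] if mask is not None else reference.ravel()
    deg = degraded[mask] if mask is not None else degraded.ravel()
    return float(np.corrcoef(ref, deg)[0, 1])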
Use of Low-Cost Acquisition Systems with an Embedded Linux Device for Volcanic Monitoring
Moure, David; Torres, Pedro; Casas, Benito; Toma, Daniel; Blanco, María José; Del Río, Joaquín; Manuel, Antoni
2015-01-01
This paper describes the development of a low-cost multiparameter acquisition system for volcanic monitoring that is applicable to gravimetry and geodesy, as well as to the visual monitoring of volcanic activity. The acquisition system was developed using a System on a Chip (SoC) Broadcom BCM2835 Linux operating system (based on DebianTM) that allows for the construction of a complete monitoring system offering multiple possibilities for storage, data-processing, configuration, and the real-time monitoring of volcanic activity. This multiparametric acquisition system was developed with a software environment, as well as with different hardware modules designed for each parameter to be monitored. The device presented here has been used and validated under different scenarios for monitoring ocean tides, ground deformation, and gravity, as well as for monitoring with images the island of Tenerife and ground deformation on the island of El Hierro. PMID:26295394
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e-/Å²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
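An illustrative sketch only: random (non-adaptive) sub-sampling of a STEM frame followed by a simple interpolation-based stand-in for inpainting; the actual work uses a more sophisticated reconstruction, and the sampling fraction is an assumption.

import numpy as np
from scipy.interpolate import griddata

def subsample_and_inpaint(image, fraction=0.2, seed=0):
    # Keep a random subset of pixels (the "scanned" positions) and estimate the
    # rest by linear interpolation; pixels outside the sampled convex hull stay NaN.
    rng = np.random.default_rng(seed)
    rows, cols = image.shape
    mask = rng.random((rows, cols)) < fraction
    yy, xx = np.nonzero(mask)
    grid_y, grid_x = np.mgrid[0:rows, 0:cols]
    inpainted = griddata((yy, xx), image[mask], (grid_y, grid_x), method='linear')
    return mask, inpainted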
NASA Astrophysics Data System (ADS)
Pape, Dennis R.
1990-09-01
The present conference discusses topics in optical image processing, optical signal processing, acoustooptic spectrum analyzer systems and components, and optical computing. Attention is given to tradeoffs in nonlinearly recorded matched filters, miniature spatial light modulators, detection and classification using higher-order statistics of optical matched filters, rapid traversal of an image data base using binary synthetic discriminant filters, wideband signal processing for emitter location, an acoustooptic processor for autonomous SAR guidance, and sampling of Fresnel transforms. Also discussed are an acoustooptic RF signal-acquisition system, scanning acoustooptic spectrum analyzers, the effects of aberrations on acoustooptic systems, fast optical digital arithmetic processors, information utilization in analog and digital processing, optical processors for smart structures, and a self-organizing neural network for unsupervised learning.