A Tentative Application Of Morphological Filters To Time-Varying Images
NASA Astrophysics Data System (ADS)
Billard, D.; Poquillon, B.
1989-03-01
In this paper, morphological filters, which are commonly used to process either 2D or multidimensional static images, are generalized to the analysis of time-varying image sequences. The introduction of the time dimension induces interesting properties when designing such spatio-temporal morphological filters. In particular, the specification of spatio-temporal structuring elements (equivalent to time-varying spatial structuring elements) can be adjusted according to the temporal variations of the image sequences to be processed: this allows deriving specific morphological transforms to perform noise filtering or moving-object discrimination on dynamic images viewed by a non-stationary sensor. First, a brief introduction to the basic principles underlying morphological filters is given. Then, a straightforward generalization of these principles to time-varying images is proposed. This leads us to define spatio-temporal opening and closing and to introduce some of their possible applications to the processing of dynamic images. Finally, preliminary results obtained using a natural forward-looking infrared (FLIR) image sequence are presented.
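A spatio-temporal opening of the kind defined above can be sketched by treating an image sequence as a 3-D array (time, row, column) and applying a grey-scale opening with a structuring element that spans several frames. This is an illustrative sketch, not the authors' implementation; the flat box structuring element and its 3-frame, 3x3 extent are assumptions.

```python
# Illustrative spatio-temporal grey-scale opening on a (T, H, W) sequence.
# The flat structuring element spans 3 frames and a 3x3 spatial neighbourhood,
# so erosion and dilation act jointly in space and time.
import numpy as np
from scipy import ndimage

def spatiotemporal_opening(seq, t_size=3, s_size=3):
    """Grey-scale opening with a flat box structuring element of shape
    (t_size, s_size, s_size) applied to a (T, H, W) image stack."""
    return ndimage.grey_opening(seq, size=(t_size, s_size, s_size))

# A bright single-frame "flash" (impulsive noise) is removed because it does
# not persist over the temporal extent of the structuring element.
seq = np.zeros((5, 8, 8))
seq[2, 4, 4] = 1.0                      # transient spike in frame 2 only
opened = spatiotemporal_opening(seq)
```

Because opening is anti-extensive, the output never exceeds the input; structures that are bright but shorter-lived than the temporal extent of the structuring element are suppressed, which is the basis for the noise-filtering application mentioned in the abstract.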
Bit-level plane image encryption based on coupled map lattice with time-varying delay
NASA Astrophysics Data System (ADS)
Lv, Xiupin; Liao, Xiaofeng; Yang, Bo
2018-04-01
Most existing image encryption algorithms have two basic properties, confusion and diffusion, in a pixel-level plane based on various chaotic systems. Permutation in a pixel-level plane cannot change the statistical characteristics of an image, and many existing color image encryption schemes use the same method to encrypt the R, G and B components, meaning that the three color components are processed three times independently. Additionally, the dynamical performance of a single chaotic system degrades greatly with finite precision in computer simulations. In this paper, a novel coupled map lattice with time-varying delay is therefore applied to bit-level plane encryption of color images to address these issues. The spatiotemporal chaotic system is recommended for its much longer period under digitization and its excellent cryptographic performance. The time-varying delay embedded in the coupled map lattice enhances the dynamical behavior of the system. The bit-level plane encryption algorithm greatly reduces the statistical characteristics of an image through scrambling, and the R, G and B components are crossed and mixed with one another, reducing the correlation among the three components. Finally, simulations are carried out, and the experimental results illustrate that the proposed image encryption algorithm is highly secure while also demonstrating superior performance.
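The chaotic generator at the heart of such a scheme can be sketched as a one-dimensional coupled map lattice with delayed neighbour coupling. The sketch below is generic and hedged: the logistic local map, the coupling strength, and the cyclic delay schedule are illustrative assumptions, not the paper's exact model or parameters.

```python
# Illustrative coupled map lattice (CML) with a time-varying coupling delay.
# Each site is updated from its own current state and its neighbours' states
# tau steps in the past, where tau changes every iteration.
import numpy as np

def logistic(x, mu=3.99):
    return mu * x * (1.0 - x)

def cml_delay(n_sites=8, n_steps=200, eps=0.3, max_delay=5, seed=1):
    rng = np.random.default_rng(seed)
    # keep a short history of lattice states so delayed neighbours can be read
    hist = rng.random((max_delay + 1, n_sites))
    out = []
    for t in range(n_steps):
        tau = 1 + (t % max_delay)          # assumed cyclic delay schedule
        cur, past = hist[-1], hist[-1 - tau]
        left, right = np.roll(past, 1), np.roll(past, -1)
        new = (1 - eps) * logistic(cur) + 0.5 * eps * (logistic(left) + logistic(right))
        hist = np.vstack([hist[1:], new])
        out.append(new)
    return np.array(out)

states = cml_delay()
```

For encryption use, the orbit would be quantized into a keystream (for example `(states * 256).astype(np.uint8)`) and combined with the bit planes of the image; that step is scheme-specific and is not reproduced here.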
NASA Astrophysics Data System (ADS)
Garcia-Belmonte, Germà
2017-06-01
Spatial visualization is a well-established topic of education research that has helped improve science and engineering students' skills in spatial relations. Connections have been established between visualization as a comprehension tool and instruction in several scientific fields. Learning about dynamic processes relies mainly upon static spatial representations or images. Visualization of time is inherently problematic because time can be conceptualized in terms of two opposite conceptual metaphors based on spatial relations, as inferred from conventional linguistic patterns. The situation is particularly demanding when time-varying signals are recorded using displaying electronic instruments and the image must be properly interpreted. This work deals with the interplay between linguistic metaphors, visual thinking and scientific-instrument mediation in the process of interpreting time-varying signals displayed by electronic instruments. The analysis draws on a simplified version of a communication system as an example of practical signal recording and image visualization in a physics and engineering laboratory experience. Instrumentation delivers meaningful signal representations because it is designed to incorporate a specific and culturally favored view of time. It is suggested that difficulties in interpreting time-varying signals are linked to the dual perception of conflicting time metaphors. The activation of a specific space-time conceptual mapping might allow for a proper signal interpretation. Instruments then play a central role as visualization mediators by yielding an image that matches specific perception abilities and practical purposes. Here I identify two ways of understanding time, as used in the different trajectories along which students are situated. Interestingly, specific displaying instruments belonging to different cultural traditions incorporate contrasting views of time.
One of them sees time in terms of a dynamic metaphor: a static observer watching passing events. This is a general and widespread practice in contemporary mass culture, and it lies behind the process of making sense of moving images, usually visualized by means of movie shots. In contrast, scientific culture has favored another conceptualization of time (the static time metaphor) that historically fostered the construction of graphs and the incorporation of time-dependent functions, as represented on the Cartesian plane, into displaying instruments. Both cultures, scientific and mass, are considered highly technological in the sense that complex instruments, apparatus or machines participate in their visual practices.
Comparison of turbulence mitigation algorithms
NASA Astrophysics Data System (ADS)
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
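As a point of reference for the algorithms compared above, the simplest turbulence-mitigation step is temporal averaging of a registered frame stack: zero-mean scintillation and warping partially cancel across frames. This minimal baseline is not any of the commercial or literature techniques evaluated in the paper; the synthetic scene and noise level are assumptions.

```python
# Minimal turbulence-mitigation baseline: temporal mean of a frame stack.
# Random zero-mean distortions cancel with averaging, at the cost of blurring
# any true scene motion.
import numpy as np

def temporal_mean(frames):
    """Average a (T, H, W) stack of co-registered frames."""
    return frames.mean(axis=0)

rng = np.random.default_rng(0)
truth = np.zeros((16, 16))
truth[8, 8] = 1.0                                   # static point target
frames = np.stack([truth + 0.2 * rng.standard_normal((16, 16))
                   for _ in range(64)])             # scintillation as noise
restored = temporal_mean(frames)
```

With 64 frames the residual error drops by roughly a factor of eight relative to a single frame, which is why frame-stacking appears as a building block inside more sophisticated mitigation pipelines.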
NASA Astrophysics Data System (ADS)
Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora
2017-09-01
Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts in a non-invasive way using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varying times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are applied to the input images, and the results of the two approaches are compared.
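The ordinary-fuzzy side of such a comparison is often built on the classical fuzzification/intensification scheme: map grey levels to memberships, then push memberships away from the crossover point 0.5 to sharpen contrast. The sketch below is the generic textbook (Pal-King style) INT operator, offered only as background; it is not the authors' fEEG-specific method.

```python
# Generic fuzzy contrast intensification: fuzzify grey levels to [0, 1]
# memberships, then apply the INT operator that pushes values away from 0.5.
import numpy as np

def fuzzy_intensify(img, n_iter=1):
    g = img.astype(float)
    mu = (g - g.min()) / (g.max() - g.min() + 1e-12)   # fuzzification
    for _ in range(n_iter):
        # INT operator: mu <= 0.5 is suppressed, mu > 0.5 is amplified
        mu = np.where(mu <= 0.5, 2 * mu**2, 1 - 2 * (1 - mu)**2)
    return mu   # defuzzify by rescaling to the original grey range if needed

img = np.array([[10, 40], [60, 90]], dtype=float)
enh = fuzzy_intensify(img)
```

The intuitionistic variant additionally carries a hesitation degree alongside each membership, which is where the two approaches compared in the paper diverge.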
Real-Time Symbol Extraction From Grey-Level Images
NASA Astrophysics Data System (ADS)
Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.
1988-04-01
A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real time is presented. This 3-giga-operations-per-second processor uses large-kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel-wide, noise-free contours without thresholding, even from grey-level images with widely varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.
Mori, S
2014-05-01
To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly positional differences between the patient and treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. A weighting factor was applied to each column of the DFPD image, because most anatomical structures, as well as the treatment couch and port cover edge, are aligned in the superior-inferior direction when the patient lies on the treatment couch. The weighting factor for each column was varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm produced substantial improvement in the quality of images, and the image contrast was increased. The treatment couch and irradiation port edge, which were not related to the patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves the image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
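The column-weighting idea can be sketched in a few lines. As a simplification, the weights below are computed in closed form so that every column mean matches the global mean, which removes column-aligned structure such as a couch edge; the authors instead searched for the weights that minimise the standard deviation within a region of interest, so this is an assumed stand-in, not their optimisation.

```python
# Per-column multiplicative weighting to suppress column-aligned artifacts
# (e.g. a treatment-couch edge running in the superior-inferior direction).
import numpy as np

def column_weights(img, eps=1e-12):
    """One weight per column: equalise column means to the global mean."""
    col_means = img.mean(axis=0)
    return img.mean() / (col_means + eps)

def apply_weights(img, w):
    return img * w[np.newaxis, :]

# synthetic fluoroscopy-like frame with a bright vertical "couch edge"
img = np.ones((32, 32))
img[:, 10:12] += 2.0
w = column_weights(img)
flat = apply_weights(img, w)
```

Because the weights depend only on column statistics, they can be computed once and then applied to every frame of a multiframe sequence, consistent with the millisecond-level processing time reported.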
Electro-Optical Imaging Fourier-Transform Spectrometer
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Zhou, Hanying
2006-01-01
An electro-optical (E-O) imaging Fourier-transform spectrometer (IFTS), now under development, is a prototype of improved imaging spectrometers to be used for hyperspectral imaging, especially in the infrared spectral region. Unlike both imaging and non-imaging traditional Fourier-transform spectrometers, the E-O IFTS does not contain any moving parts. Elimination of the moving parts and the associated actuator mechanisms and supporting structures would increase reliability while enabling reductions in size and mass, relative to traditional Fourier-transform spectrometers that offer equivalent capabilities. Elimination of moving parts would also eliminate the vibrations caused by the motions of those parts. Figure 1 schematically depicts a traditional Fourier-transform spectrometer, wherein a critical time delay is varied by translating one of the mirrors of a Michelson interferometer. The time-dependent optical output is a periodic representation of the input spectrum. Data characterizing the input spectrum are generated through fast-Fourier-transform (FFT) post-processing of the output in conjunction with the varying time delay.
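The FFT post-processing step described above follows from the basic Fourier-transform-spectrometer relation: the interferogram recorded as the delay varies is the cosine transform of the source spectrum, so transforming it back recovers the spectral lines. The numbers below are illustrative, not instrument parameters.

```python
# FTS principle: an interferogram built from two spectral lines, recovered by
# an FFT over the sampled delay axis.
import numpy as np

n = 512
delay = np.arange(n)                       # sampled optical path delay (a.u.)
k1, k2 = 20, 55                            # two spectral lines (cycles per n samples)
interferogram = (np.cos(2 * np.pi * k1 * delay / n)
                 + 0.5 * np.cos(2 * np.pi * k2 * delay / n))
spectrum = np.abs(np.fft.rfft(interferogram)) / (n / 2)   # normalised amplitudes
```

The recovered spectrum shows the two lines at their expected positions with amplitudes 1.0 and 0.5; in the E-O instrument the delay scan is produced electro-optically rather than by a moving mirror, but the reconstruction mathematics is the same.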
Hielscher, Andreas H; Bartel, Sebastian
2004-02-01
Optical tomography (OT) is a fast developing novel imaging modality that uses near-infrared (NIR) light to obtain cross-sectional views of optical properties inside the human body. A major challenge remains the time-consuming, computational-intensive image reconstruction problem that converts NIR transmission measurements into cross-sectional images. To increase the speed of iterative image reconstruction schemes that are commonly applied for OT, we have developed and implemented several parallel algorithms on a cluster of workstations. Static process distribution as well as dynamic load balancing schemes suitable for heterogeneous clusters and varying machine performances are introduced and tested. The resulting algorithms are shown to accelerate the reconstruction process to various degrees, substantially reducing the computation times for clinically relevant problems.
Enhancement of TIMS images for photointerpretation
NASA Technical Reports Server (NTRS)
Gillespie, A. R.
1986-01-01
The Thermal Infrared Multispectral Scanner (TIMS) images consist of six channels of data acquired in bands between 8 and 12 microns; thus they contain information about both temperature and emittance. Scene temperatures are controlled by the reflectivity of the surface, but also by its geometry with respect to the Sun, the time of day, and other factors unrelated to composition. Emittance is dependent upon composition alone. Thus the photointerpreter may wish to enhance emittance information selectively. Because thermal emittances in real scenes vary only slightly, the image data tend to be highly correlated across channels. Special image processing is required to make this information available to the photointerpreter. Processing includes noise removal, construction of model emittance images, and construction of false-color pictures enhanced by decorrelation techniques.
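The decorrelation enhancement named above can be sketched generically: rotate the highly correlated channels into principal components, equalise the component variances, and rotate back so the enhanced image keeps roughly its original hue relationships. This is the standard decorrelation-stretch technique, not the TIMS processing chain itself; the synthetic three-channel cube is an assumption.

```python
# Generic decorrelation stretch for a correlated multichannel image.
import numpy as np

def decorrelation_stretch(cube):
    """cube: (H, W, C) multichannel image; returns the whitened stretch."""
    h, w, c = cube.shape
    x = cube.reshape(-1, c).astype(float)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # equalise variances along the principal axes, then rotate back
    stretch = evecs @ np.diag(1.0 / np.sqrt(evals + 1e-12)) @ evecs.T
    y = (x - mean) @ stretch
    return y.reshape(h, w, c)

rng = np.random.default_rng(0)
base = rng.random((16, 16, 1))
# three nearly identical channels, mimicking highly correlated TIMS bands
cube = np.concatenate([base + 0.05 * rng.random((16, 16, 1)) for _ in range(3)],
                      axis=2)
out = decorrelation_stretch(cube)
```

After the stretch the inter-channel correlations are driven toward zero, so a false-color composite of the output channels exaggerates the small emittance differences that the raw correlated bands would hide.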
On-line monitoring of fluid bed granulation by photometric imaging.
Soppela, Ira; Antikainen, Osmo; Sandler, Niklas; Yliruusi, Jouko
2014-11-01
This paper introduces and discusses a photometric surface imaging approach for on-line monitoring of fluid bed granulation. Five granule batches consisting of paracetamol and varying amounts of lactose and microcrystalline cellulose were manufactured with an instrumented fluid bed granulator. Photometric images and NIR spectra were continuously captured on-line, and particle size information was extracted from them. Key process parameters were also recorded. The images provided direct real-time information on the growth, attrition and packing behaviour of the batches. Moreover, decreasing image brightness in the drying phase was found to indicate granule drying. The changes observed in the image data were also linked to the moisture and temperature profiles of the processes. Combined with complementary process analytical tools, photometric imaging opens up possibilities for improved real-time evaluation of fluid bed granulation. Furthermore, images can give valuable insight into the behaviour of excipients or formulations during product development.
Fast, Accurate and Shift-Varying Line Projections for Iterative Reconstruction Using the GPU
Pratx, Guillem; Chinn, Garry; Olcott, Peter D.; Levin, Craig S.
2013-01-01
List-mode processing provides an efficient way to deal with sparse projections in iterative image reconstruction for emission tomography. An issue often reported is the tremendous amount of computation required by such algorithms. Each recorded event requires several back- and forward line projections. We investigated the use of the programmable graphics processing unit (GPU) to accelerate the line-projection operations and implement fully-3D list-mode ordered-subsets expectation-maximization for positron emission tomography (PET). We designed a reconstruction approach that incorporates resolution kernels, which model the spatially-varying physical processes associated with photon emission, transport and detection. Our development is particularly suitable for applications where the projection data is sparse, such as high-resolution, dynamic, and time-of-flight PET reconstruction. The GPU approach runs more than 50 times faster than an equivalent CPU implementation while image quality and accuracy are virtually identical. This paper describes in detail how the GPU can be used to accelerate the line projection operations, even when the lines-of-response have arbitrary endpoint locations and shift-varying resolution kernels are used. A quantitative evaluation is included to validate the correctness of this new approach. PMID:19244015
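The core operation the paper moves to the GPU, a line projection between two arbitrary endpoints, can be sketched on the CPU with bilinear sampling along the line. Resolution kernels and the matching back-projection (which deposits a value along the same samples) are omitted for brevity; this is a simplified stand-in, not the paper's GPU kernel.

```python
# Forward line projection: approximate the line integral of an image between
# two arbitrary endpoints using bilinear interpolation along sample points.
import numpy as np
from scipy.ndimage import map_coordinates

def forward_project(img, p0, p1, n_samples=200):
    """Line integral of img from endpoint p0 to p1, given as (row, col)."""
    t = np.linspace(0.0, 1.0, n_samples)
    rows = p0[0] + t * (p1[0] - p0[0])
    cols = p0[1] + t * (p1[1] - p0[1])
    samples = map_coordinates(img, [rows, cols], order=1)  # bilinear
    length = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return samples.mean() * length

img = np.ones((64, 64))
lor = forward_project(img, (10.0, 10.0), (10.0, 50.0))   # horizontal line, length 40
```

In list-mode OSEM each recorded event contributes one such line-of-response per iteration, which is why batching millions of independent line projections maps so well onto GPU threads.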
A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading
NASA Astrophysics Data System (ADS)
Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.
2018-05-01
Image processing methods have been used in non-destructive tests of agricultural products. Compared to manual methods, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp decreases once its service time exceeds its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light level at which classification accuracy is affected. The study was conducted by varying the light level from minimum to maximum in the image capturing box and then investigating the effect on image characteristics. The results showed that light intensity affects two variables that are important for classification, namely the area and color of the captured image. The image processing program was able to determine the weight and classification of tomatoes correctly when the light level was between 30 lx and 140 lx.
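The dependence of the measured area on light level can be illustrated with a toy model: under a fixed segmentation threshold, dimming the illumination shrinks (and eventually eliminates) the set of pixels counted as the object. The scene, threshold, and illumination factors below are synthetic assumptions, not the grading machine's actual calibration.

```python
# Toy model: measured object area vs illumination under a fixed threshold.
import numpy as np

def measured_area(scene, illumination, threshold=20):
    """Scale scene reflectance by an illumination factor, clip to 8 bits,
    and count pixels above the fixed segmentation threshold."""
    img = np.clip(scene * illumination, 0, 255)
    return int((img > threshold).sum())

scene = np.zeros((40, 40))
scene[10:30, 10:30] = 1.0                 # 400-pixel "tomato"
bright = measured_area(scene, illumination=140)
dim = measured_area(scene, illumination=30)
too_dim = measured_area(scene, illumination=10)
```

In this toy setup the area reading is stable across the working illumination range but collapses below it, mirroring the study's finding that classification stays correct only down to a minimum light level.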
Learning random networks for compression of still and moving images
NASA Technical Reports Server (NTRS)
Gelenbe, Erol; Sungur, Mert; Cramer, Christopher
1994-01-01
Image compression for both still and moving images is an extremely important area of investigation, with numerous applications to videoconferencing, interactive education, home entertainment, and potential applications to earth observations, medical imaging, digital libraries, and many other areas. We describe work on a neural network methodology to compress/decompress still and moving images. We use the 'point-process' type neural network model which is closer to biophysical reality than standard models, and yet is mathematically much more tractable. We currently achieve compression ratios of the order of 120:1 for moving grey-level images, based on a combination of motion detection and compression. The observed signal-to-noise ratio varies from values above 25 to more than 35. The method is computationally fast so that compression and decompression can be carried out in real-time. It uses the adaptive capabilities of a set of neural networks so as to select varying compression ratios in real-time as a function of quality achieved. It also uses a motion detector which will avoid retransmitting portions of the image which have varied little from the previous frame. Further improvements can be achieved by using on-line learning during compression, and by appropriate compensation of nonlinearities in the compression/decompression scheme. We expect to go well beyond the 250:1 compression level for color images with good quality levels.
Narayanan, Shrikanth
2009-01-01
We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
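A much simpler stand-in for the dictionary-learning model conveys what a space-varying noise estimate looks like: subtract a median-filtered estimate of the clean image and measure the residual's standard deviation in a sliding window, giving a per-pixel noise-level map. The filter sizes and the synthetic half-noisy image are assumptions for illustration.

```python
# Space-varying noise estimation via local residual statistics.
import numpy as np
from scipy import ndimage

def local_noise_map(img, detail_size=3, window=9):
    """Per-pixel noise-level estimate: local std of the residual after a
    median filter removes most of the underlying image structure."""
    residual = img - ndimage.median_filter(img, size=detail_size)
    mean = ndimage.uniform_filter(residual, size=window)
    mean_sq = ndimage.uniform_filter(residual**2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean**2, 0.0))

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] += 0.5 * rng.standard_normal((64, 32))   # noisy right half only
noise_map = local_noise_map(img)
```

Thresholding or clustering such a map is one way to segment an image by noise characteristics, the forensics application the abstract mentions; the dictionary-learning approach replaces the crude median-filter signal model with a learned sparse model.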
Time-of-Day and Appendicitis: Impact on Management and Outcomes
Drake, Frederick Thurston; Mottey, Neli E.; Castelli, Anthony A.; Florence, Michael G.; Johnson, Morris G.; Steele, Scott R.; Thirlby, Richard C.; Flum, David R.
2017-01-01
Background Observational research has shown that delayed presentation is associated with perforation in appendicitis. Many factors that impact the ability to present for evaluation are influenced by time-of-day; for example, child care, work, transportation, and primary care office hours. Our objective was to evaluate for an association between care processes or clinical outcomes and presentation time. Methods Prospective cohort of 7,548 adults undergoing appendectomy at 56 hospitals across Washington State. Relative to presentation time, patient characteristics, time to surgery, imaging use, negative appendectomy (NA), and perforation were compared using univariate and multivariate methodologies. Results Overall, 63% of patients presented between noon and midnight. More men presented in the morning; however, race, insurance status, co-morbid conditions, and WBC count did not differ by presentation time. Daytime presenters (6AM-6PM) were less likely to undergo imaging (94% vs. 98% p<0.05) and had a nearly 50% decrease in median pre-operative time (6.0h vs. 8.7h p<0.001). Perforation significantly differed by time-of-day. Patients who presented during the workday (9AM-3PM) had a 30% increase in odds of perforation compared to early morning/late night presenters (adjusted OR 1.29, 95%CI 1.05–1.59). NA did not vary by time-of-day. Conclusions Most patients with appendicitis presented in afternoon/evening. Socioeconomic characteristics did not vary with time-of-presentation. Patients who presented during the workday more often had perforated appendicitis compared to those who presented early morning or late night. Processes of care differed (both time-to-surgery and imaging use). Time-of-day is associated with patient outcomes, process of care, and decisions to present for evaluation; this has implications for surgical workforce planning and quality improvement efforts. PMID:27592212
NASA Astrophysics Data System (ADS)
Berthon, Beatrice; Dansette, Pierre-Marc; Tanter, Mickaël; Pernot, Mathieu; Provost, Jean
2017-07-01
Direct imaging of the electrical activation of the heart is crucial to better understand and diagnose diseases linked to arrhythmias. This work presents an ultrafast acoustoelectric imaging (UAI) system for direct and non-invasive ultrafast mapping of propagating current densities using the acoustoelectric effect. Acoustoelectric imaging is based on the acoustoelectric effect, the modulation of the medium’s electrical impedance by a propagating ultrasonic wave. UAI triggers this effect with plane wave emissions to image current densities. An ultrasound research platform was fitted with electrodes connected to high common-mode rejection ratio amplifiers and sampled by up to 128 independent channels. The sequences developed allow for both real-time display of acoustoelectric maps and long ultrafast acquisition with fast off-line processing. The system was evaluated by injecting controlled currents into a saline pool via copper wire electrodes. Sensitivity to low current and low acoustic pressure were measured independently. Contrast and spatial resolution were measured for varying numbers of plane waves and compared to line per line acoustoelectric imaging with focused beams at equivalent peak pressure. Temporal resolution was assessed by measuring time-varying current densities associated with sinusoidal currents. Complex intensity distributions were also imaged in 3D. Electrical current densities were detected for injected currents as low as 0.56 mA. UAI outperformed conventional focused acoustoelectric imaging in terms of contrast and spatial resolution when using 3 and 13 plane waves or more, respectively. Neighboring sinusoidal currents with opposed phases were accurately imaged and separated. Time-varying currents were mapped and their frequency accurately measured for imaging frame rates up to 500 Hz. Finally, a 3D image of a complex intensity distribution was obtained. The results demonstrated the high sensitivity of the UAI system proposed. 
The plane wave based approach provides a highly flexible trade-off between frame rate, resolution and contrast. In conclusion, the UAI system shows promise for non-invasive, direct and accurate real-time imaging of electrical activation in vivo.
Kimme-Smith, C; Rothschild, P A; Bassett, L W; Gold, R H; Moler, C
1989-01-01
Six different combinations of film-processor temperature (33.3 degrees C, 35 degrees C), development time (22 sec, 44 sec), and chemistry (Du Pont medium contrast developer [MCD] and Kodak rapid process [RP] developer) were each evaluated by separate analyses with Hurter and Driffield curves, test images of plastic step wedges, noise variance analysis, and phantom images; each combination also was evaluated clinically. Du Pont MCD chemistry produced greater contrast than did Kodak RP chemistry. A change in temperature from 33.3 degrees C (92 degrees F) to 35 degrees C (95 degrees F) had the least effect on dose and image contrast. Temperatures of 36.7 degrees C (98 degrees F) and 38.3 degrees C (101 degrees F) also were tested with extended processing. The speed increased for 36.7 degrees C but decreased at 38.3 degrees C. Base plus fog increased, but contrast decreased for these higher temperatures. Increasing development time had the greatest effect on decreasing the dose required for equivalent film darkening when imaging BR12 breast equivalent test objects; ion chamber measurements showed a 32% reduction in dose when the development time was increased from 22 to 44 sec. Although noise variance doubled in images processed with the extended development time, diagnostic capability was not compromised. Extending the processing time for mammographic films was an effective method of dose reduction, whereas varying the processing temperature and chemicals had less effect on contrast and dose.
Image Processing for Educators in Global Hands-On Universe
NASA Astrophysics Data System (ADS)
Miller, J. P.; Pennypacker, C. R.; White, G. L.
2006-08-01
A method of image processing to find time-varying objects is being developed for the National Virtual Observatory as part of Global Hands-On Universe(tm) (Lawrence Hall of Science; University of California, Berkeley). Objects that vary in space or time are of prime importance in modern astronomy and astrophysics. Such objects include active galactic nuclei, variable stars, supernovae, or moving objects across a field of view such as an asteroid, comet, or extrasolar planet transiting its parent star. The search for these objects is undertaken by acquiring an image of the region of the sky where they occur followed by a second image taken at a later time. Ideally, both images are taken with the same telescope using the same filter and charge-coupled device. The two images are aligned and subtracted with the subtracted image revealing any changes in light during the time period between the two images. We have used a method of Christophe Alard using the image processing software IDL Version 6.2 (Research Systems, Inc.) with the exception of the background correction, which is done on the two images prior to the subtraction. Testing has been extensive, using images provided by a number of National Virtual Observatory and collaborating projects. They include the Supernovae Trace Cosmic Expansion (Cerro Tololo Inter-American Observatory), Supernovae/ Acceleration Program (Lawrence Berkeley National Laboratory), Lowell Observatory Near-Earth Object Search (Lowell Observatory), and the Centre National de la Recherche Scientifique (Paris, France). Further testing has been done with students, including a May 2006 two week program at the Lawrence Berkeley National Laboratory. Students from Hardin-Simmons University (Abilene, TX) and Jackson State University (Jackson, MS) used the subtraction method to analyze images from the Cerro Tololo Inter-American Observatory (CTIO) searching for new asteroids and Kuiper Belt objects. In October 2006 students from five U.S. 
high schools will use the subtraction method in an asteroid search campaign using CTIO images with 7-day follow-up images to be provided by the Las Cumbres Observatory (Santa Barbara, CA). During the Spring 2006 semester, students from Cape Fear High School used the method to search for near-Earth objects and supernovae. Using images from the Astronomical Research Institute (Charleston, IL) the method contributed to the original discovery of two supernovae, SN 2006al and SN 2006bi.
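The subtraction step at the heart of the search can be sketched in bare-bones form: subtract the reference image from the new image and flag pixels whose residual exceeds a significance threshold. Alignment, PSF matching, and the Alard-style kernel fitting used in the actual pipeline are omitted; the injected source and noise levels are synthetic assumptions.

```python
# Bare-bones image subtraction for transient detection: difference two
# co-registered frames and flag residuals above an n-sigma threshold.
import numpy as np

def detect_transients(new, ref, n_sigma=5.0):
    diff = new - ref
    # robust noise estimate of the difference image via the MAD
    sigma = np.median(np.abs(diff - np.median(diff))) * 1.4826 + 1e-12
    return np.argwhere(diff > n_sigma * sigma)

rng = np.random.default_rng(0)
ref = 100 + rng.normal(0.0, 1.0, (64, 64))
new = ref + rng.normal(0.0, 1.0, (64, 64))
new[20, 30] += 50.0                       # injected "supernova"
hits = detect_transients(new, ref)
```

Everything static cancels in the difference, so asteroids, Kuiper Belt objects, and supernovae all appear as isolated significant residuals; in practice the two epochs must first be registered and convolved to a common point-spread function.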
AnimalFinder: A semi-automated system for animal detection in time-lapse camera trap images
Price Tack, Jennifer L.; West, Brian S.; McGowan, Conor P.; Ditchkoff, Stephen S.; Reeves, Stanley J.; Keever, Allison; Grand, James B.
2017-01-01
Although the use of camera traps in wildlife management is well established, technologies to automate image processing have been much slower in development, despite their potential to drastically reduce personnel time and cost required to review photos. We developed AnimalFinder in MATLAB® to identify animal presence in time-lapse camera trap images by comparing individual photos to all images contained within the subset of images (i.e. photos from the same survey and site), with some manual processing required to remove false positives and collect other relevant data (species, sex, etc.). We tested AnimalFinder on a set of camera trap images and compared the presence/absence results with manual-only review with white-tailed deer (Odocoileus virginianus), wild pigs (Sus scrofa), and raccoons (Procyon lotor). We compared abundance estimates, model rankings, and coefficient estimates of detection and abundance for white-tailed deer using N-mixture models. AnimalFinder performance varied depending on a threshold value that affects program sensitivity to frequently occurring pixels in a series of images. Higher threshold values led to fewer false negatives (missed deer images) but increased manual processing time, but even at the highest threshold value, the program reduced the images requiring manual review by ~40% and correctly identified >90% of deer, raccoon, and wild pig images. Estimates of white-tailed deer were similar between AnimalFinder and the manual-only method (~1–2 deer difference, depending on the model), as were model rankings and coefficient estimates. Our results show that the program significantly reduced data processing time and may increase efficiency of camera trapping surveys.
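The comparison-against-the-image-subset idea can be sketched by building a per-pixel median background from the whole stack and flagging frames whose changed-pixel count exceeds a threshold, with that threshold playing the same sensitivity role described above. This Python sketch does not reproduce the MATLAB implementation; the tolerance values and synthetic stack are assumptions.

```python
# Simplified AnimalFinder-style frame flagging: compare each time-lapse frame
# to the per-pixel median background of the image subset.
import numpy as np

def flag_frames(stack, diff_tol=20, min_pixels=30):
    """Return a boolean per-frame flag for likely animal presence."""
    background = np.median(stack, axis=0)
    changed = (np.abs(stack - background) > diff_tol).sum(axis=(1, 2))
    return changed > min_pixels

rng = np.random.default_rng(0)
stack = rng.integers(90, 110, size=(10, 50, 50)).astype(float)  # empty scenes
stack[4, 10:20, 10:20] = 200.0            # an "animal" appears in frame 4
flags = flag_frames(stack)
```

Raising `diff_tol` or `min_pixels` reduces false positives at the cost of missed detections, which is exactly the sensitivity trade-off the paper reports for its threshold value.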
Maximizing Total QoS-Provisioning of Image Streams with Limited Energy Budget
NASA Astrophysics Data System (ADS)
Lee, Wan Yeon; Kim, Kyong Hoon; Ko, Young Woong
To fully utilize the limited battery energy of mobile electronic devices, we propose an adaptive adjustment method of processing quality for multiple image stream tasks running with widely varying execution times. This adjustment method completes the worst-case executions of the tasks with a given budget of energy, and maximizes the total reward value of processing quality obtained during their executions by exploiting the probability distribution of task execution times. The proposed method derives the maximum reward value for tasks executable with arbitrary processing quality, and a near-maximum value for tasks executable with a finite number of processing qualities. Our evaluation on a prototype system shows that the proposed method achieves reward values up to 57% larger than the previous method.
Computational Imaging in Demanding Conditions
2015-11-18
Detailed accomplishments include removing atmospheric turbulence via space-invariant deconvolution: given an image sequence distorted by atmospheric turbulence, the approach reduces the space- and time-varying deblurring problem to a shift-invariant one in a spatiotemporal domain where such blur is not present. Subject terms: image processing, computational imaging, turbulence, blur, enhancement.
Imaging synthetic aperture radar
Burns, Bryan L.; Cordaro, J. Thomas
1997-01-01
A linear-FM SAR imaging radar method and apparatus to produce a real-time image by first arranging the returned signals into a plurality of subaperture arrays, the columns of each subaperture array having samples of dechirped baseband pulses, and further including processing of each subaperture array to obtain coarse resolution in azimuth, then fine resolution in range, and lastly, combining the processed subapertures to obtain the final fine resolution in azimuth. Greater efficiency is achieved because both the transmitted signal and a local oscillator signal mixed with the returned signal can be varied on a pulse-to-pulse basis as a function of radar motion. Moreover, a novel circuit can adjust the sampling location and the A/D sample rate of the combined dechirped baseband signal, which greatly reduces processing time and hardware. The processing steps include implementing a window function, stabilizing either a central reference point and/or all other points of a subaperture with respect to Doppler frequency and/or range as a function of radar motion, and sorting and compressing the signals using standard Fourier transforms. The stabilization of each processing part is accomplished with vector multiplication using waveforms generated as a function of radar motion, wherein these waveforms may be synthesized in integrated circuits. Stabilization of range migration as a function of Doppler frequency by simple vector multiplication is a particularly useful feature of the invention, as is stabilization of azimuth migration by correcting for spatially varying phase errors prior to the application of an autofocus process.
Image encryption based on a delayed fractional-order chaotic logistic system
NASA Astrophysics Data System (ADS)
Wang, Zhen; Huang, Xia; Li, Ning; Song, Xiao-Na
2012-05-01
A new image encryption scheme is proposed based on a delayed fractional-order chaotic logistic system. In the process of generating a key stream, the time-varying delay and fractional derivative are embedded in the proposed scheme to improve the security. Such a scheme is described in detail with security analyses including correlation analysis, information entropy analysis, run statistic analysis, mean-variance gray value analysis, and key sensitivity analysis. Experimental results show that the newly proposed image encryption scheme possesses high security.
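To illustrate the key-stream structure of such schemes (not the authors' actual system), the sketch below substitutes an integer-order delayed logistic map for the paper's fractional-order system; the map form, delay seeding, and byte quantization are all assumptions chosen for illustration only.

```python
import numpy as np

def delayed_logistic_keystream(n, mu=3.99, tau=5, x0=0.3):
    """Generate a byte key stream from a time-delayed logistic map.

    Illustrative stand-in: the paper uses a *fractional-order* delayed
    chaotic logistic system; here an integer-order map with fixed delay
    tau shows the key-stream -> XOR encryption structure:
        x[k+1] = mu * x[k] * (1 - x[k - tau])  (mod 1)
    """
    x = [x0 * (1 + 0.01 * i) for i in range(tau + 1)]  # seed the delay history
    ks = []
    for _ in range(n):
        nxt = (mu * x[-1] * (1 - x[-1 - tau])) % 1.0   # keep state in [0, 1)
        x.append(nxt)
        ks.append(int(nxt * 256) % 256)                # quantize to a byte
    return np.array(ks, dtype=np.uint8)

def xor_encrypt(img_bytes, key):
    return np.bitwise_xor(img_bytes, key)  # XOR: decryption = same operation

img = np.arange(16, dtype=np.uint8)        # toy "image" as a byte array
key = delayed_logistic_keystream(img.size)
cipher = xor_encrypt(img, key)
plain = xor_encrypt(cipher, key)           # XOR is its own inverse
```

In the actual scheme the fractional derivative and time-varying delay enlarge the key space and lengthen the digital period; this sketch only shows where the key stream enters the encryption.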
NASA Astrophysics Data System (ADS)
Hewawasam, Kuravi; Mendillo, Christopher B.; Howe, Glenn A.; Martel, Jason; Finn, Susanna C.; Cook, Timothy A.; Chakrabarti, Supriya
2017-09-01
The Planetary Imaging Concept Testbed Using a Recoverable Experiment - Coronagraph (PICTURE-C) mission will directly image debris disks and exozodiacal dust around nearby stars from a high-altitude balloon using a vector vortex coronagraph. The PICTURE-C low-order wavefront control (LOWC) system will be used to correct time-varying low-order aberrations due to pointing jitter, gravity sag, thermal deformation, and the gondola pendulum motion. We present the hardware and software implementation of the low-order Shack-Hartmann and reflective Lyot stop sensors. Development of the high-speed image acquisition and processing system is discussed with emphasis on the reduction of hardware and computational latencies through the use of a real-time operating system and optimized data handling. By characterizing all of the LOWC latencies, we describe techniques to achieve a frame rate of 200 Hz with a mean latency of ~378 μs.
Volumetric breast density measurement: sensitivity analysis of a relative physics approach
Lau, Susie; Abdul Aziz, Yang Faridah
2016-01-01
Objective: To investigate the sensitivity and robustness of a volumetric breast density (VBD) measurement system to errors in the imaging physics parameters including compressed breast thickness (CBT), tube voltage (kVp), filter thickness, tube current-exposure time product (mAs), detector gain, detector offset and image noise. Methods: 3317 raw digital mammograms were processed with Volpara® (Matakina Technology Ltd, Wellington, New Zealand) to obtain fibroglandular tissue volume (FGV), breast volume (BV) and VBD. Errors in parameters including CBT, kVp, filter thickness and mAs were simulated by varying them in the Digital Imaging and Communications in Medicine (DICOM) tags of the images up to ±10% of the original values. Errors in detector gain and offset were simulated by varying them in the Volpara configuration file up to ±10% from their default values. For image noise, Gaussian noise was generated and introduced into the original images. Results: Errors in filter thickness, mAs, detector gain and offset had limited effects on FGV, BV and VBD. Significant effects in VBD were observed when CBT, kVp, detector offset and image noise were varied (p < 0.0001). Maximum shifts in the mean (1.2%) and median (1.1%) VBD of the study population occurred when CBT was varied. Conclusion: Volpara was robust to expected clinical variations, with errors in most investigated parameters giving limited changes in results, although extreme variations in CBT and kVp could lead to greater errors. Advances in knowledge: Despite Volpara's robustness, rigorous quality control is essential to keep the parameter errors within reasonable bounds. Volpara appears robust within those bounds, albeit for more advanced applications such as tracking density change over time, it remains to be seen how accurate the measures need to be. PMID:27452264
Mapping language to visual referents: Does the degree of image realism matter?
Saryazdi, Raheleh; Chambers, Craig G
2018-01-01
Studies of real-time spoken language comprehension have shown that listeners rapidly map unfolding speech to available referents in the immediate visual environment. This has been explored using various kinds of 2-dimensional (2D) stimuli, with convenience or availability typically motivating the choice of a particular image type. However, work in other areas has suggested that certain cognitive processes are sensitive to the level of realism in 2D representations. The present study examined the process of mapping language to depictions of objects that are more or less realistic, namely photographs versus clipart images. A custom stimulus set was first created by generating clipart images directly from photographs of real objects. Two visual world experiments were then conducted, varying whether referent identification was driven by noun or verb information. A modest benefit for clipart stimuli was observed during real-time processing, but only for noun-driven mappings. The results are discussed in terms of their implications for studies of visually situated language processing.
Quantitative imaging of mammalian transcriptional dynamics: from single cells to whole embryos.
Zhao, Ziqing W; White, Melanie D; Bissiere, Stephanie; Levi, Valeria; Plachta, Nicolas
2016-12-23
Probing dynamic processes occurring within the cell nucleus at the quantitative level has long been a challenge in mammalian biology. Advances in bio-imaging techniques over the past decade have enabled us to directly visualize nuclear processes in situ with unprecedented spatial and temporal resolution and single-molecule sensitivity. Here, using transcription as our primary focus, we survey recent imaging studies that specifically emphasize the quantitative understanding of nuclear dynamics in both time and space. These analyses not only inform on previously hidden physical parameters and mechanistic details, but also reveal a hierarchical organizational landscape for coordinating a wide range of transcriptional processes shared by mammalian systems of varying complexity, from single cells to whole embryos.
Parallel Processing Systems for Passive Ranging During Helicopter Flight
NASA Technical Reports Server (NTRS)
Sridhar, Bavavar; Suorsa, Raymond E.; Showman, Robert D. (Technical Monitor)
1994-01-01
The complexity of rotorcraft missions involving operations close to the ground results in high pilot workload. In order to allow a pilot time to perform mission-oriented tasks, sensor aiding and automation of some of the guidance and control functions are highly desirable. Images from an electro-optical sensor provide a covert way of detecting objects in the flight path of a low-flying helicopter. Passive ranging consists of processing a sequence of images using techniques based on optical flow computation and recursive estimation. The passive ranging algorithm has to extract obstacle information from imagery at rates varying from five to thirty or more frames per second depending on the helicopter speed. We have implemented and tested the passive ranging algorithm off-line using helicopter-collected images. However, the real-time data and computation requirements of the algorithm are beyond the capability of any off-the-shelf microprocessor or digital signal processor. This paper describes the computational requirements of the algorithm and uses parallel processing technology to meet these requirements. Various issues in the selection of a parallel processing architecture are discussed, and four different computer architectures are evaluated regarding their suitability to process the algorithm in real time. Based on this evaluation, we conclude that real-time passive ranging is a realistic goal and can be achieved within a short time.
Dynamic Black-Level Correction and Artifact Flagging for Kepler Pixel Time Series
NASA Technical Reports Server (NTRS)
Kolodziejczak, J. J.; Clarke, B. D.; Caldwell, D. A.
2011-01-01
Methods applied to the calibration stage of Kepler pipeline data processing [1] (CAL) do not currently use all of the information available to identify and correct several instrument-induced artifacts. These include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, as well as drifting moiré patterns that manifest as locally correlated nonstationary noise and rolling bands in the images, which find their way into the time series [2], [3]. As the Kepler Mission continues to improve the fidelity of its science data products, we are evaluating the benefits of adding pipeline steps to more completely model and dynamically correct the FGS crosstalk, and then to use the residuals from these model fits to detect and flag spatial regions and time intervals of strong time-varying black level that may complicate later processing or lead to misinterpretation of instrument behavior as stellar activity.
Meyer-Lindenberg, Andrea; Ebermaier, Christine; Wolvekamp, Pim; Tellhelm, Bernd; Meutstege, Freek J; Lang, Johann; Hartung, Klaus; Fehr, Michael; Nolte, Ingo
2008-01-01
In this study the quality of digital and analog radiography in dogs was compared. For this purpose, three conventional radiographs (varying in exposure) and three digital radiographs (varying in MUSI-contrast [MUSI = MUlti Scale Image Contrast], the main post-processing parameter) of six different body regions of the dog were evaluated (thorax, abdomen, skull, femur, hip joints, elbow). The quality of the radiographs was evaluated by eight veterinary specialists familiar with radiographic images using a questionnaire based on details of each body region significant in obtaining a radiographic diagnosis. In the first part of the study the overall quality of the radiographs was evaluated. Within one region, 89.5% (43/48) chose a digital radiograph as the best image. Divided into analog and digital groups, the digital image with the highest MUSI-contrast was most often considered the best, while the analog image considered the best varied between the one with the medium and the one with the longest exposure time. In the second part of the study, each image was rated for the visibility of specific, diagnostically important details. After summarisation of the scores for each criterion, divided into analog and digital imaging, the digital images were rated considerably superior to conventional images. The results of image comparison revealed that digital radiographs showed better image detail than radiographs taken with the analog technique in all six areas of the body.
An Automatic Image Processing Workflow for Daily Magnetic Resonance Imaging Quality Assurance.
Peltonen, Juha I; Mäkelä, Teemu; Sofiev, Alexey; Salli, Eero
2017-04-01
The performance of magnetic resonance imaging (MRI) equipment is typically monitored with a quality assurance (QA) program. The QA program includes various tests performed at regular intervals. Users may execute specific tests, e.g., daily, weekly, or monthly. The exact interval of these measurements varies according to the department policies, machine setup and usage, manufacturer's recommendations, and available resources. In our experience, a single image acquired before the first patient of the day offers a low-effort and effective system check. When this daily QA check is repeated with identical imaging parameters and phantom setup, the data can be used to derive various time series of the scanner performance. However, daily QA with manual processing can quickly become laborious in a multi-scanner environment. Fully automated image analysis and results output can positively impact the QA process by decreasing reaction time, improving repeatability, and by offering novel performance evaluation methods. In this study, we have developed a daily MRI QA workflow that can measure multiple scanner performance parameters with minimal manual labor required. The daily QA system is built around a phantom image taken by the radiographers at the beginning of the day. The image is acquired with a consistent phantom setup and standardized imaging parameters. Recorded parameters are processed into graphs available to everyone involved in the MRI QA process via a web-based interface. The presented automatic MRI QA system provides an efficient tool for following the short- and long-term stability of MRI scanners.
A New Feedback-Based Method for Parameter Adaptation in Image Processing Routines.
Khan, Arif Ul Maula; Mikut, Ralf; Reischl, Markus
2016-01-01
The parametrization of automatic image processing routines is time-consuming if a lot of image processing parameters are involved. An expert can tune parameters sequentially to get desired results. This may not be productive for applications with difficult image analysis tasks, e.g. when high noise and shading levels in an image are present or images vary in their characteristics due to different acquisition conditions. Parameters are required to be tuned simultaneously. We propose a framework to improve standard image segmentation methods by using feedback-based automatic parameter adaptation. Moreover, we compare algorithms by implementing them in a feedforward fashion and then adapting their parameters. This comparison is proposed to be evaluated by a benchmark data set that contains challenging image distortions in an increasing fashion. This promptly enables us to compare different standard image segmentation algorithms in a feedback vs. feedforward implementation by evaluating their segmentation quality and robustness. We also propose an efficient way of performing automatic image analysis when only abstract ground truth is present. Such a framework evaluates robustness of different image processing pipelines using a graded data set. This is useful for both end-users and experts.
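The feedback idea, letting a quality criterion computed from the segmentation drive the parameter update instead of an expert tuning by hand, can be sketched in a minimal form. This is an assumption-laden illustration, not the authors' framework: the single parameter, the bisection update rule, and the use of a target foreground fraction as "abstract ground truth" are all simplifications.

```python
import numpy as np

def adapt_threshold(img, target_frac, lo=0.0, hi=255.0, iters=30):
    """Feedback-based tuning of one segmentation parameter.

    The foreground area fraction of the thresholded image is compared
    against an abstract ground-truth value, and the mismatch drives the
    threshold update (here by bisection) until the criterion is met.
    (Sketch only; the published framework adapts many parameters and
    uses richer quality measures.)
    """
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        frac = (img > t).mean()   # feedback signal from the segmentation
        if frac > target_frac:
            lo = t                # too much foreground: raise threshold
        else:
            hi = t
    return 0.5 * (lo + hi)

rng = np.random.default_rng(4)
img = rng.uniform(0, 255, (64, 64))   # toy "image"
t = adapt_threshold(img, target_frac=0.25)
frac = (img > t).mean()
```

The same loop structure generalizes to several parameters tuned simultaneously by replacing bisection with a multivariate optimizer over the quality criterion.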
Effective image differencing with convolutional neural networks for real-time transient hunting
NASA Astrophysics Data System (ADS)
Sedaghat, Nima; Mahabal, Ashish
2018-06-01
Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done with a reference image that is deeper than individual images and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.
1995-01-01
Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible wavelength video camera. These data were processed frame by frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
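The described cascade (highpass, moving average, absolute value, threshold, plus the full-frame mean alternative) translates directly into code. The sketch below follows that structure, but the filter orders, coefficient values, and thresholds are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def leak_detect(frames, alpha=0.9, window=5, thresh=8.0):
    """Temporal leak detection, pixel by pixel.

    A first-order digital highpass isolates sudden intensity changes, a
    moving average sustains them, and the absolute value is compared to
    a threshold for the binary leak/no-leak decision at each point.
    Averaging the output over the full frame gives the single
    time-varying mean-value estimate. frames: (n, h, w) float array.
    """
    n = frames.shape[0]
    hp = np.zeros_like(frames)
    for k in range(1, n):  # highpass: y[k] = a*(y[k-1] + x[k] - x[k-1])
        hp[k] = alpha * (hp[k - 1] + frames[k] - frames[k - 1])
    kernel = np.ones(window) / window  # moving average along time
    ma = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, hp)
    leak_mask = np.abs(ma) > thresh            # binary decision per pixel
    mean_trace = np.abs(ma).mean(axis=(1, 2))  # full-frame mean vs. time
    return leak_mask, mean_trace

# A step change at frame 10 in one region simulates a leak onset.
frames = np.zeros((20, 16, 16))
frames[10:, 4:8, 4:8] = 50.0
mask, trace = leak_detect(frames)
```

The highpass responds only to the onset, while the moving average stretches that response over several frames, which is what makes a sudden but sustained change detectable against slowly varying backgrounds.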
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
Fractional domain varying-order differential denoising method
NASA Astrophysics Data System (ADS)
Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran
2014-10-01
Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove the noise from the corrupted image, while retaining the edges and other detailed features as much as possible. Recently, denoising in the fractional domain is a hot research topic. The fractional-order anisotropic diffusion method can bring a less blocky effect and preserve edges in image denoising, a method that has received much interest in the literature. Based on this method, we propose a new method for image denoising, in which fractional-varying-order differential, rather than constant-order differential, is used. The theoretical analysis and experimental results show that compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional-varying-order differential denoising model can preserve structure and texture well, while quickly removing noise, and yields good visual effects and better peak signal-to-noise ratio.
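The integer-order Perona-Malik anisotropic diffusion that the paper generalizes can be sketched as a baseline. This is the classical scheme only; the paper's contribution, replacing the integer-order differences with fractional differences whose order varies across the image, is not implemented here, and the parameter values are illustrative.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=15.0, dt=0.2):
    """Integer-order Perona-Malik anisotropic diffusion (baseline).

    Diffuses strongly in flat regions and weakly across edges via an
    edge-stopping conductance. The fractional-varying-order method
    described above generalizes exactly this structure.
    """
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (periodic borders via roll)
        dn = np.roll(u, 1, 0) - u
        ds = np.roll(u, -1, 0) - u
        de = np.roll(u, 1, 1) - u
        dw = np.roll(u, -1, 1) - u
        # conductance: near 1 in flat regions, near 0 across strong edges
        g = lambda d: np.exp(-(d / kappa) ** 2)
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[:, 16:] = 100.0                         # step edge
noisy = clean + rng.normal(0, 5, clean.shape)  # additive Gaussian noise
den = anisotropic_diffusion(noisy)
```

Because the conductance collapses where the local difference is large relative to `kappa`, the 0-to-100 step edge survives the iterations while the small noise fluctuations are smoothed away, which is the "less blocky, edge-preserving" behavior the abstract refers to.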
Experimental application of simulation tools for evaluating UAV video change detection
NASA Astrophysics Data System (ADS)
Saur, Günter; Bartelsen, Jan
2015-10-01
Change detection is one of the most important tasks when unmanned aerial vehicles (UAV) are used for video reconnaissance and surveillance. In this paper, we address changes on short time scale, i.e. the observations are taken within time distances of a few hours. Each observation is a short video sequence corresponding to the near-nadir overflight of the UAV above the interesting area and the relevant changes are e.g. recently added or removed objects. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples for non-relevant changes are versatile objects like trees and compression or transmission artifacts. To enable the usage of an automatic change detection within an interactive workflow of an UAV video exploitation system, an evaluation and assessment procedure has to be performed. Large video data sets which contain many relevant objects with varying scene background and altering influence parameters (e.g. image quality, sensor and flight parameters) including image metadata and ground truth data are necessary for a comprehensive evaluation. Since the acquisition of real video data is limited by cost and time constraints, from our point of view, the generation of synthetic data by simulation tools has to be considered. In this paper the processing chain of Saur et al. (2014) [1] and the interactive workflow for video change detection is described. We have selected the commercial simulation environment Virtual Battle Space 3 (VBS3) to generate synthetic data. For an experimental setup, an example scenario "road monitoring" has been defined and several video clips have been produced with varying flight and sensor parameters and varying objects in the scene. Image registration and change mask extraction, both components of the processing chain, are applied to corresponding frames of different video clips. 
For the selected examples, the images could be registered, the modelled changes could be extracted and the artifacts of the image rendering considered as noise (slight differences of heading angles, disparity of vegetation, 3D parallax) could be suppressed. We conclude that these image data could be considered to be realistic enough to serve as evaluation data for the selected processing components. Future work will extend the evaluation to other influence parameters and may include the human operator for mission planning and sensor control.
Mishra, Pankaj; Li, Ruijiang; Mak, Raymond H.; Rottmann, Joerg; Bryant, Jonathan H.; Williams, Christopher L.; Berbeco, Ross I.; Lewis, John H.
2014-01-01
Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculated through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) eight digital eXtended CArdiac-Torso (XCAT) phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is 0.73 ± 0.63 mm for the XCAT data and 0.90 ± 0.65 mm for the patient data. Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. 
This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and has the potential to provide volumetric information with no additional imaging dose to the patient. PMID:25086523
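The PCA decomposition at the heart of the motion model (spatial eigenvectors, temporal eigen-coefficients, reconstruction as mean plus weighted eigenvectors) can be illustrated on toy data. The dimensions and the synthetic two-mode motion below are assumptions for illustration; real DVFs are 3D vector fields and the eigen-coefficient update comes from optimizing agreement with the cine EPID projection, which is not shown.

```python
import numpy as np

# Toy stand-in for DVFs from DIR of 10 breathing phases, each flattened
# to a row of n_voxels displacement values, driven by two latent modes.
rng = np.random.default_rng(1)
n_phases, n_voxels = 10, 300
phase = np.linspace(0, 2 * np.pi, n_phases, endpoint=False)
basis = rng.normal(size=(2, n_voxels))            # two latent motion modes
dvfs = (np.cos(phase)[:, None] * basis[0]
        + np.sin(phase)[:, None] * basis[1])

mean_dvf = dvfs.mean(axis=0)
centered = dvfs - mean_dvf
# SVD performs the PCA: rows of Vt are spatial eigenvectors, and U * S
# gives the per-phase temporal eigen-coefficients.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 2                                             # keep the dominant modes
eigvecs = Vt[:k]
coeffs = (U * S)[:, :k]

# Multiplying updated eigen-coefficients with the eigenvectors yields an
# updated DVF; here the coefficients of phase 3 reproduce that phase.
recon = mean_dvf + coeffs[3] @ eigvecs
err = np.abs(recon - dvfs[3]).max()
```

With exactly two latent modes the rank-2 reconstruction is exact up to floating-point error, which shows why a handful of eigen-coefficients suffices to parametrize the full volumetric motion.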
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mishra, Pankaj, E-mail: pankaj.mishra@varian.com; Mak, Raymond H.; Rottmann, Joerg
2014-08-15
Purpose: In this work the authors develop and investigate the feasibility of a method to estimate time-varying volumetric images from individual MV cine electronic portal image device (EPID) images. Methods: The authors adopt a two-step approach to time-varying volumetric image estimation from a single cine EPID image. In the first step, a patient-specific motion model is constructed from 4DCT. In the second step, parameters in the motion model are tuned according to the information in the EPID image. The patient-specific motion model is based on a compact representation of lung motion represented in displacement vector fields (DVFs). DVFs are calculatedmore » through deformable image registration (DIR) of a reference 4DCT phase image (typically peak-exhale) to a set of 4DCT images corresponding to different phases of a breathing cycle. The salient characteristics in the DVFs are captured in a compact representation through principal component analysis (PCA). PCA decouples the spatial and temporal components of the DVFs. Spatial information is represented in eigenvectors and the temporal information is represented by eigen-coefficients. To generate a new volumetric image, the eigen-coefficients are updated via cost function optimization based on digitally reconstructed radiographs and projection images. The updated eigen-coefficients are then multiplied with the eigenvectors to obtain updated DVFs that, in turn, give the volumetric image corresponding to the cine EPID image. Results: The algorithm was tested on (1) Eight digital eXtended CArdiac-Torso phantom datasets based on different irregular patient breathing patterns and (2) patient cine EPID images acquired during SBRT treatments. The root-mean-squared tumor localization error is (0.73 ± 0.63 mm) for the XCAT data and (0.90 ± 0.65 mm) for the patient data. 
Conclusions: The authors introduced a novel method of estimating volumetric time-varying images from single cine EPID images and a PCA-based lung motion model. This is the first method to estimate volumetric time-varying images from single MV cine EPID images, and it has the potential to provide volumetric information with no additional imaging dose to the patient.
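The PCA step described above can be sketched in a few lines. This is a toy illustration with made-up array sizes and random stand-in DVFs, not the authors' implementation: spatial modes come out as eigenvectors, temporal weights as eigen-coefficients, and a new DVF is a coefficient-weighted sum of modes.

```python
import numpy as np

# Toy stand-in for DVFs from deformable registration: one flattened
# displacement vector field per breathing phase (rows = phases).
rng = np.random.default_rng(0)
n_phases, n_voxels = 10, 3 * 500          # 500 voxels x 3 displacement components
dvfs = rng.normal(size=(n_phases, n_voxels))

# PCA: mean-centre, then take the leading right-singular vectors.
mean_dvf = dvfs.mean(axis=0)
u, s, vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
n_modes = 3
eigenvectors = vt[:n_modes]                        # spatial modes
coeffs = (dvfs - mean_dvf) @ eigenvectors.T        # temporal eigen-coefficients

# A new DVF is synthesised from updated coefficients (here: phase 0's own),
# which in turn deforms the reference CT into the estimated volumetric image.
new_dvf = mean_dvf + coeffs[0] @ eigenvectors
```

In the actual method the coefficients would be updated by optimizing agreement between a digitally reconstructed radiograph and the EPID projection, rather than taken from a known phase.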
Fast Image Subtraction Using Multi-cores and GPUs
NASA Astrophysics Data System (ADS)
Hartung, Steven; Shukla, H.
2013-01-01
Many important image processing techniques in astronomy require a massive number of computations per pixel. Among them is an image differencing technique known as Optimal Image Subtraction (OIS), which is very useful for detecting and characterizing transient phenomena. Like many image processing routines, OIS computations increase proportionally with the number of pixels being processed, and the number of pixels in need of processing is increasing rapidly. Utilizing many-core graphics processing unit (GPU) technology in hybrid combination with multi-core CPU and computer clustering technologies, this work presents a new astronomy image processing pipeline architecture. The chosen OIS implementation focuses on the second-order spatially varying kernel with a Dirac delta function basis, a powerful image differencing method that has seen limited deployment in part because of the heavy computational burden. This tool can process standard image calibration and OIS differencing in a fashion that is scalable with the increasing data volume. It employs several parallel processing technologies in a hierarchical fashion in order to best utilize each of their strengths. The Linux/Unix-based application can operate on a single computer, or on an MPI-configured cluster, with or without GPU hardware. With GPU hardware available, even low-cost commercial video cards, the OIS convolution and subtraction times for large images can be accelerated by up to three orders of magnitude.
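The key property of the Dirac delta function basis mentioned above is that convolving the reference with the kernel becomes a weighted sum of shifted copies of the reference, so kernel fitting reduces to linear least squares. The sketch below demonstrates this on a synthetic constant-kernel pair; the actual OIS implementation fits a second-order spatially varying kernel, which this toy omits.

```python
import numpy as np

rng = np.random.default_rng(1)
ref = rng.normal(10.0, 1.0, size=(64, 64))        # reference image

# Synthetic "science" image: a shifted/weighted combination of the reference
# (i.e., the reference convolved with a simple two-pixel kernel).
sci = 0.8 * ref + 0.2 * np.roll(ref, 1, axis=1)

# Delta basis: one column per kernel pixel = one shifted copy of the reference.
shifts = [(dy, dx) for dy in range(-2, 3) for dx in range(-2, 3)]  # 5x5 kernel
A = np.stack([np.roll(ref, s, axis=(0, 1)).ravel() for s in shifts], axis=1)
coef, *_ = np.linalg.lstsq(A, sci.ravel(), rcond=None)
kernel = coef.reshape(5, 5)

diff = sci.ravel() - A @ coef                     # optimal image subtraction
```

Because every pixel contributes one row of the design matrix, the cost grows linearly with image size, which is exactly the per-pixel burden the GPU pipeline is built to absorb.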
Bailey, Rachel L
2017-10-01
From an ecological perception perspective (Gibson, 1977), the availability of perceptual information alters what behaviors are more and less likely at different times. This study examines how perceptual information delivered in food advertisements and packaging alters the time course of information processing and decision making. Participants categorized images of food that varied in information delivered in terms of color, glossiness, and texture (e.g., food cues) before and after being exposed to a set of advertisements that also varied in this way. In general, items with more direct cues enhanced appetitive motivational processes, especially if they were also advertised with direct food cues. Individuals also chose to eat products that were packaged with more available direct food cues compared to opaque packaging.
Light sensitometry of mammography films at varying development temperatures and times
Sharma, Reena; Sharma, Sunil Dutt; Mayya, Y. S.
2012-01-01
Kodak MinR-2000 mammography film is widely used for mammography imaging. The sensitometric indices of this film, namely base plus fog level (B + F), maximum optical density (ODmax), average gradient (AG) and speed, were evaluated at varying development temperatures and times using a light sensitometer. A total of 33 film strips were cut from a single Kodak MinR-2000 mammography film box and exposed in a light sensitometer operated in the green light spectrum to produce a 21-step sensitometric strip. These exposed film strips were processed at temperatures in the range of 32°C–37°C in steps of 1°C and at processing times in the range of 1–6 minutes in steps of 1 minute. The results of the present study show that the measured base plus fog level of the mammography film was not affected much, whereas significant changes were seen in the ODmax, AG and speed with varying development temperatures and times. The ODmax values of the film were found in the range of 3.67–3.76, AG values were in the range of 2.48–3.4 and speed values were in the range of 0.015–0.0236 when the processing temperature was varied from 32°C to 37°C. With processing time variation from 1 to 6 minutes, the observed changes in ODmax were in the range of 3.54–3.71, changes in AG were in the range of 2.66–3.27 and changes in speed were in the range of 0.011–0.025. Based on these observations, recommendations for optimum processing parameters to be used with this film are made. PMID:22363111
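The indices above can be computed from a digitized characteristic curve. The sketch below uses a synthetic S-shaped curve and a common convention for average gradient (the slope between net densities 0.25 and 2.00 above base plus fog); the paper's exact definition and step-wedge increments may differ.

```python
import numpy as np

# 21-step sensitometric strip: relative log exposure rises 0.15 per step
# (a typical wedge increment, assumed here).
log_exposure = np.arange(21) * 0.15
base_plus_fog = 0.20
# Toy S-shaped characteristic curve standing in for measured densities.
density = base_plus_fog + 3.5 / (1.0 + np.exp(-4.0 * (log_exposure - 1.5)))

od_max = density.max()

# Average gradient: slope between net densities 0.25 and 2.00 above B+F.
d1, d2 = base_plus_fog + 0.25, base_plus_fog + 2.00
le1 = np.interp(d1, density, log_exposure)   # density is monotonic increasing
le2 = np.interp(d2, density, log_exposure)
avg_gradient = (d2 - d1) / (le2 - le1)
```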
Radar signal analysis of ballistic missile with micro-motion based on time-frequency distribution
NASA Astrophysics Data System (ADS)
Wang, Jianming; Liu, Lihua; Yu, Hua
2015-12-01
The micro-motion of ballistic missile targets induces micro-Doppler modulation on the radar return signal, which is a unique feature for warhead discrimination during flight. In order to extract the micro-Doppler features of ballistic missile targets, time-frequency analysis is employed to process the micro-Doppler-modulated time-varying radar signal. Images of the time-frequency distribution (TFD) reveal the micro-Doppler modulation characteristic very well. However, many time-frequency analysis methods exist for generating time-frequency distribution images, including the short-time Fourier transform (STFT), the Wigner distribution (WD) and Cohen-class distributions. Against the background of ballistic missile defence, this paper aims to work out an effective time-frequency analysis method for discriminating ballistic missile warheads from decoys.
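The STFT route can be sketched directly: a body Doppler tone with a sinusoidal micro-Doppler modulation produces a TFD whose ridge traces the instantaneous frequency. All signal parameters below are illustrative, not taken from the paper.

```python
import numpy as np

fs = 1000.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)

# Toy radar return: body Doppler at 100 Hz plus sinusoidal micro-Doppler
# from warhead precession/spin. Instantaneous frequency: 100 + 30*cos(2*pi*2*t).
f_body, f_micro, m_depth = 100.0, 2.0, 30.0
phase = 2 * np.pi * f_body * t + (m_depth / f_micro) * np.sin(2 * np.pi * f_micro * t)
sig = np.exp(1j * phase)

# Short-time Fourier transform: magnitudes form the TFD image.
nperseg, hop = 128, 64
win = np.hanning(nperseg)
frames = np.asarray([sig[i:i + nperseg] * win
                     for i in range(0, sig.size - nperseg + 1, hop)])
tfd = np.abs(np.fft.fft(frames, axis=1)).T     # (freq bins, time frames)
freqs = np.fft.fftfreq(nperseg, 1.0 / fs)

# The TFD ridge tracks the micro-Doppler modulation.
ridge = freqs[np.argmax(tfd, axis=0)]
```

The WD and Cohen-class distributions trade the STFT's fixed resolution for sharper localization at the cost of cross-terms, which is the comparison the paper is concerned with.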
NASA Astrophysics Data System (ADS)
Kolekar, Sadhu; Patole, Shashikant P.; Yoo, Ji-Beom; Dharmadhikari, Chandrakant V.
2018-03-01
Field emission from nanostructured films is known to be dominated by only a small number of localized spots, which vary with the voltage, electric field and heat treatment. It is important to develop processing methods that will produce stable and uniform emitting sites. In this paper we report a novel approach involving analysis of Proximity Field Emission Microscopy (PFEM) images using a Scanning Probe Image Processing technique. Vertically aligned carbon nanotube emitters were deposited on tungsten foil by water-assisted chemical vapor deposition. Prior to the field electron emission studies, these films were characterized by scanning electron microscopy, transmission electron microscopy, and atomic force microscopy (AFM). AFM images of the samples show bristle-like structures, with bristle sizes varying from 80 to 300 nm. The topography images were found to exhibit strong correlation with the current images. Current-voltage (I-V) measurements in both scanning tunneling microscopy and conducting-AFM modes suggest that the electron transport mechanism when imaging vertically grown CNTs is ballistic rather than the usual tunneling or field emission, with a junction resistance of 10 kΩ. It was found that I-V curves in field emission mode in the PFEM geometry vary initially with the number of I-V cycles until reproducible I-V curves are obtained. Even for reasonably stable I-V behavior, the number of spots was found to increase with the voltage, leading to a modified Fowler-Nordheim (F-N) behavior. A plot of ln(I/V^3) versus 1/V was found to be linear. Current-versus-time data exhibit large fluctuations, with the power spectral density obeying a 1/f^2 law. It is suggested that an analogue of the F-N equation of the form ln(I/V^α) versus 1/V may be used for the analysis of field emission data, where α may depend on the nanostructure configuration and can be determined from the dependence of the number of emitting spots on the voltage.
Parallel algorithm of real-time infrared image restoration based on total variation theory
NASA Astrophysics Data System (ADS)
Zhu, Ran; Li, Miao; Long, Yunli; Zeng, Yaoyuan; An, Wei
2015-10-01
Image restoration is a necessary preprocessing step for infrared remote sensing applications. Traditional methods remove the noise but overly penalize the gradients corresponding to edges. Image restoration techniques based on variational approaches can solve this over-smoothing problem thanks to their well-defined mathematical modeling of the restoration procedure. The total variation (TV) of the infrared image is introduced as an L1 regularization term added to the objective energy functional. This converts the restoration process into an optimization problem over a functional involving a fidelity term to the image data plus a regularization term. Infrared image restoration with the TV-L1 model fully exploits the acquired remote sensing data and preserves edge information caused by clouds. The numerical implementation is presented in detail. Analysis indicates that the structure of this algorithm can easily be parallelized. Therefore, a parallel implementation of the TV-L1 filter based on a multicore architecture with shared memory is proposed for real-time infrared remote sensing systems. The massive computation over the image data is performed in parallel by cooperating threads running simultaneously on multiple cores. Several groups of synthetic infrared image data are used to validate the feasibility and effectiveness of the proposed parallel algorithm. A quantitative analysis measuring the restored image quality against the input image is presented. Experimental results show that the TV-L1 filter can restore the varying background image reasonably, and that its performance meets the requirements of real-time image processing.
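A minimal serial sketch of a TV-L1 restoration follows, using smoothed gradients and plain gradient descent; the paper's discretization, optimizer and parallel multicore decomposition are not specified here, so all parameters below are illustrative assumptions.

```python
import numpy as np

def tv_l1_restore(f, lam=1.0, step=0.02, iters=500, eps=1e-2):
    """Smoothed TV-L1 restoration by explicit gradient descent.
    Illustrative serial sketch only; not the paper's parallel scheme."""
    u = f.copy()
    for _ in range(iters):
        # Forward differences of u (zero at the far boundary).
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # Divergence of the (bounded) unit-gradient field: backward differences.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Smoothed L1 fidelity term.
        fid = (u - f) / np.sqrt((u - f) ** 2 + eps)
        u = u - step * (lam * fid - div)
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                   # a sharp "cloud edge" target
noisy = clean + 0.2 * rng.normal(size=clean.shape)
restored = tv_l1_restore(noisy)
```

Each iteration touches every pixel independently given its neighbours, which is why the update parallelizes cleanly across threads on a shared-memory multicore machine.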
Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses
Kim, Hyun Seok; Park, Kwang Suk
2017-01-01
Most of the retinal prostheses use a head-fixed camera and a video processing unit. Some studies proposed various image processing methods to improve visual perception for patients. However, previous studies only focused on using spatial information. The present study proposes a spatiotemporal pixelization method mimicking fixational eye movements to generate stimulation images for artificial retina arrays by combining spatial and temporal information. Input images were sampled with a resolution that was four times higher than the number of pixel arrays. We subsampled this image and generated four different phosphene images. We then evaluated the recognition scores of characters by sequentially presenting phosphene images with varying pixel array sizes (6 × 6, 8 × 8 and 10 × 10) and stimulus frame rates (10 Hz, 15 Hz, 20 Hz, 30 Hz, and 60 Hz). The proposed method showed the highest recognition score at a stimulus frame rate of approximately 20 Hz. The method also significantly improved the recognition score for complex characters. This method provides a new way to increase practical resolution over restricted spatial resolution by merging the higher resolution image into high-frame time slots. PMID:29073735
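The subsampling step described above can be sketched as follows: a high-resolution input is split into four phase-shifted subimages that are cycled over time, so that a temporal integrator effectively sees more spatial detail than one electrode frame carries. Sizes and frame counts are illustrative.

```python
import numpy as np

def spatiotemporal_frames(img, n_frames=8):
    """Split a (2H, 2W) input into four (H, W) phosphene frames by 2x2
    subsampling and cycle them over time, mimicking fixational eye
    movements (a sketch of the idea, not the authors' exact pipeline)."""
    subs = [img[dy::2, dx::2] for dy in (0, 1) for dx in (0, 1)]
    return [subs[i % 4] for i in range(n_frames)]

rng = np.random.default_rng(3)
hi_res = rng.random((20, 20))        # 4x the pixel count of a 10 x 10 array
seq = spatiotemporal_frames(hi_res)

# A perfect temporal integrator over one cycle sees the mean of the four
# subimages, i.e. a 2x2 box-filtered version of the high-resolution input.
percept = np.mean(seq[:4], axis=0)
```

Whether the visual system actually integrates the cycle depends on the stimulus frame rate, which is why the study sweeps rates from 10 Hz to 60 Hz and finds a peak near 20 Hz.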
Guyader, Jean-Marie; Bernardin, Livia; Douglas, Naomi H M; Poot, Dirk H J; Niessen, Wiro J; Klein, Stefan
2015-08-01
To evaluate the influence of image registration on apparent diffusion coefficient (ADC) images obtained from abdominal free-breathing diffusion-weighted MR images (DW-MRIs). A comprehensive pipeline based on automatic three-dimensional nonrigid image registrations is developed to compensate for misalignments in DW-MRI datasets obtained from five healthy subjects scanned twice. Motion is corrected both within each image and between images in a time series. ADC distributions are compared with and without registration in two abdominal volumes of interest (VOIs). The effects of interpolations and Gaussian blurring as alternative strategies to reduce motion artifacts are also investigated. Among the four considered scenarios (no processing, interpolation, blurring and registration), registration yields the best alignment scores. Median ADCs vary according to the chosen scenario: for the considered datasets, ADCs obtained without processing are 30% higher than with registration. Registration improves voxelwise reproducibility at least by a factor of 2 and decreases uncertainty (Fréchet-Cramér-Rao lower bound). Registration provides similar improvements in reproducibility and uncertainty as acquiring four times more data. Patient motion during image acquisition leads to misaligned DW-MRIs and inaccurate ADCs, which can be addressed using automatic registration. © 2014 Wiley Periodicals, Inc.
Time-Optimized High-Resolution Readout-Segmented Diffusion Tensor Imaging
Reishofer, Gernot; Koschutnig, Karl; Langkammer, Christian; Porter, David; Jehna, Margit; Enzinger, Christian; Keeling, Stephen; Ebner, Franz
2013-01-01
Readout-segmented echo planar imaging with 2D navigator-based reacquisition is an emerging technique enabling the sampling of high-resolution diffusion images with reduced susceptibility artifacts. However, low signal from the small voxels and long scan times hamper the clinical applicability. Therefore, we introduce a regularization algorithm based on total variation that is applied directly to the entire diffusion tensor. The spatially varying regularization parameter is determined automatically, depending on spatial variations in the signal-to-noise ratio, thus avoiding over- or under-regularization. Information about the noise distribution in the diffusion tensor is extracted from the diffusion-weighted images by means of complex independent component analysis. Moreover, the combination of these features enables fully user-independent processing of the diffusion data. Tractography from in vivo data and from a software phantom demonstrates the advantage of the spatially varying regularization compared to un-regularized data with respect to parameters relevant for fiber-tracking such as Mean Fiber Length, Track Count, Volume and Voxel Count. Specifically, for in vivo data the findings suggest that tractography from the regularized diffusion tensor based on one measurement (16 min) generates results comparable to the un-regularized data with three averages (48 min). This significant reduction in scan time renders high-resolution (1×1×2.5 mm³) diffusion tensor imaging of the entire brain applicable in a clinical context. PMID:24019951
Limiting Magnitude, τ, t_eff, and Image Quality in DES Year 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
H. Neilsen, Jr.; Bernstein, Gary; Gruendl, Robert
The Dark Energy Survey (DES) is an astronomical imaging survey being completed with the DECam imager on the Blanco telescope at CTIO. After each night of observing, the DES data management (DM) group performs an initial processing of that night's data, and uses the results to determine which exposures are of acceptable quality and which need to be repeated. The primary measure by which we declare an image of acceptable quality is τ, a scaling of the exposure time. This is the scale factor that needs to be applied to the open-shutter time to reach the same photometric signal-to-noise ratio for faint point sources under a set of canonical good conditions. These conditions are defined to be seeing resulting in a PSF full width at half maximum (FWHM) of 0.9" and a pre-defined sky brightness which approximates the zenith sky brightness under fully dark conditions. Point-source limiting magnitude and signal-to-noise ratio should therefore vary with τ in the same way they vary with exposure time. Measurements of point sources and τ in the first year of DES data confirm that they do. In the context of DES, the symbol t_eff and the expression "effective exposure time" usually refer to the scaling factor τ rather than the actual effective exposure time; that is, t_eff describes the effective duration of one second of open-shutter time rather than the effective duration of an entire exposure.
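A simplified form of this scaling can be written down from the background-limited point-source relation S/N² ∝ t / (FWHM² · B). The formula below is an assumed sketch of that relation, not the official DES definition, which may include additional terms such as atmospheric transparency.

```python
# Assumed simplified scaling: tau = (FWHM_ref / FWHM)^2 * (B_dark / B),
# equal to 1 under canonical conditions and smaller for worse seeing or
# brighter sky. Units of the sky brightness cancel in the ratio.
FWHM_REF = 0.9   # arcsec, canonical seeing

def tau(fwhm_arcsec, sky, sky_dark):
    return (FWHM_REF / fwhm_arcsec) ** 2 * (sky_dark / sky)

t_canonical = tau(0.9, sky=1.0, sky_dark=1.0)   # canonical conditions
t_poor = tau(1.8, sky=4.0, sky_dark=1.0)        # doubled seeing, 4x sky
```

Under this model an exposure with doubled seeing and four times the dark-sky background delivers only 1/16 of its open-shutter time's canonical depth, which is why such exposures would be flagged for repetition.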
State Space Modeling of Time-Varying Contemporaneous and Lagged Relations in Connectivity Maps
Molenaar, Peter C. M.; Beltz, Adriene M.; Gates, Kathleen M.; Wilson, Stephen J.
2017-01-01
Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. PMID:26546863
Karami, Ebrahim; Shehata, Mohamed S; Smith, Andrew
2018-05-04
Medical research suggests that the anterior-posterior (AP)-diameter of the inferior vena cava (IVC) and its associated temporal variation as imaged by bedside ultrasound is useful in guiding fluid resuscitation of the critically ill patient. Unfortunately, indistinct edges and gaps in vessel walls are frequently present, which impedes accurate estimation of the IVC AP-diameter for both human operators and segmentation algorithms. The majority of research involving use of the IVC to guide fluid resuscitation involves manual measurement of the maximum and minimum AP-diameter as it varies over time. This effort proposes using a time-varying circle fitted inside the typically ellipsoidal IVC as an efficient, consistent and novel approach to tracking and approximating the AP-diameter even in the context of poor image quality. In this active-circle algorithm, a novel evolution functional is proposed and shown to be a useful tool for ultrasound image processing. The proposed algorithm is compared with expert manual measurement and state-of-the-art relevant algorithms. It is shown that the algorithm outperforms the other techniques and performs very close to manual measurement. Copyright © 2018 Elsevier Ltd. All rights reserved.
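A static stand-in for one step of the circle-tracking idea is an algebraic least-squares (Kåsa) circle fit to candidate boundary points; the paper's active-circle evolution functional is different and handles missing wall segments, which this sketch does not.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit: solve
    x^2 + y^2 = 2*cx*x + 2*cy*y + c, then r = sqrt(c + cx^2 + cy^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# Noisy samples of a synthetic vessel cross-section, radius 9 (AP-diameter 18).
rng = np.random.default_rng(4)
theta = rng.uniform(0, 2 * np.pi, 200)
x = 5.0 + 9.0 * np.cos(theta) + 0.1 * rng.normal(size=200)
y = -3.0 + 9.0 * np.sin(theta) + 0.1 * rng.normal(size=200)
cx, cy, r = fit_circle(x, y)
ap_diameter = 2 * r
```

Re-fitting such a circle frame by frame yields a time series of AP-diameters, from which the maximum and minimum used clinically can be read off.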
Prevalence of Imaging Biomarkers to Guide the Planning of Acute Stroke Reperfusion Trials.
Jiang, Bin; Ball, Robyn L; Michel, Patrik; Jovin, Tudor; Desai, Manisha; Eskandari, Ashraf; Naqvi, Zack; Wintermark, Max
2017-06-01
Imaging biomarkers are increasingly used as selection criteria for stroke clinical trials. The goal of our study was to determine the prevalence of commonly studied imaging biomarkers in different time windows after acute ischemic stroke onset to better facilitate the design of stroke clinical trials using such biomarkers for patient selection. This retrospective study included 612 patients admitted with a clinical suspicion of acute ischemic stroke with symptom onset no more than 24 hours before completing baseline imaging. Patients with subacute/chronic/remote infarcts and hemorrhage were excluded from this study. Imaging biomarkers were extracted from baseline imaging, which included a noncontrast head computed tomography (CT), perfusion CT, and CT angiography. The prevalence of dichotomized versions of each of the imaging biomarkers in several time windows (time since symptom onset) was assessed and statistically modeled to assess time dependence (or lack thereof). We created tables showing the prevalence of the imaging biomarkers pertaining to the core, the penumbra and the arterial occlusion for different time windows. All continuous imaging features vary over time. The dichotomized imaging features that vary significantly over time include: noncontrast head computed tomography Alberta Stroke Program Early CT (ASPECT) score and dense artery sign, perfusion CT infarct volume, and CT angiography collateral score and visible clot. The dichotomized imaging features that did not vary significantly over time include the thresholded perfusion CT penumbra volumes. As part of the feasibility analysis in stroke clinical trials, this analysis and the resulting tables can help investigators determine sample size and the number needed to screen. © 2017 American Heart Association, Inc.
Bastin, M E; Armitage, P A
2000-07-01
The accurate determination of absolute measures of diffusion anisotropy in vivo using single-shot, echo-planar imaging techniques requires the acquisition of a set of high signal-to-noise ratio, diffusion-weighted images that are free from eddy current induced image distortions. Such geometric distortions can be characterized and corrected in brain imaging data using magnification (M), translation (T), and shear (S) distortion parameters derived from separate water phantom calibration experiments. Here we examine the practicalities of using separate phantom calibration data to correct high b-value diffusion tensor imaging data by investigating the stability of these distortion parameters, and hence the eddy currents, with time. It is found that M, T, and S vary only slowly with time (i.e., on the order of weeks), so that calibration scans need not be performed after every patient examination. This not only minimises the scan time required to collect the calibration data, but also the computational time needed to characterize these eddy current induced distortions. Examples of how measurements of diffusion anisotropy are improved using this post-processing scheme are also presented.
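The correction described above can be sketched as an affine coordinate map built from the calibrated magnification (M), translation (T) and shear (S). A common model applies these along the phase-encode axis; the parameter values below are illustrative, not the paper's phantom calibration results.

```python
# Eddy-current distortion model along the phase-encode axis y (assumption):
#   y_distorted = M*y + S*x + T
# and its inverse used to unwarp each diffusion-weighted image.
M, T, S = 1.02, 1.5, 0.01    # magnification, translation (pixels), shear

def distort(x, y):
    """Forward distortion of a coordinate (model assumption)."""
    return x, M * y + S * x + T

def correct(x, y):
    """Invert the distortion to recover undistorted coordinates."""
    return x, (y - T - S * x) / M

x0, y0 = 10.0, 50.0
xd, yd = distort(x0, y0)
xc, yc = correct(xd, yd)     # round-trips back to (x0, y0)
```

Because M, T and S drift only on a timescale of weeks, the same three numbers can be reused across many patient examinations before recalibration.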
Flight Results from the HST SM4 Relative Navigation Sensor System
NASA Technical Reports Server (NTRS)
Naasz, Bo; Eepoel, John Van; Queen, Steve; Southward, C. Michael; Hannah, Joel
2010-01-01
On May 11, 2009, Space Shuttle Atlantis roared off of Launch Pad 39A en route to the Hubble Space Telescope (HST) to undertake its final servicing of HST, Servicing Mission 4. Onboard Atlantis was a small payload called the Relative Navigation Sensor experiment, which included three cameras of varying focal ranges and avionics to record images and estimate, in real time, the relative position and attitude (aka "pose") of the telescope during rendezvous and deployment. The avionics package, known as SpaceCube and developed at the Goddard Space Flight Center, performed image processing using field programmable gate arrays to accelerate this process, and in addition executed two different pose algorithms in parallel: the Goddard Natural Feature Image Recognition and the ULTOR Passive Pose and Position Engine (P3E) algorithms.
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in small, fixed steps. Next, grey-consistency fusion and logarithmic demodulation are applied to obtain a complete, lower-noise projection with a high dynamic range (HDR). In addition, to address the noise-suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide a higher reconstruction quality relative to the fusion reconstruction method.
Novel ultrasonic real-time scanner featuring servo controlled transducers displaying a sector image.
Matzuk, T; Skolnick, M L
1978-07-01
This paper describes a new real-time servo-controlled sector scanner that produces high-resolution images and has functionally programmable features similar to phased-array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. The unique feature is the transducer head, which contains a single moving part--the transducer--enclosed within a light-weight, hand-held, and vibration-free case. The frame rate, sector width, and stop-action angle are all operator-programmable. The frame rate can be varied from 12 to 30 frames per second and the sector width from 0 degrees to 60 degrees. Conversion from sector to time-motion (T/M) mode is instantaneous, and two options are available: a freeze-position high-density T/M and a low-density T/M obtainable simultaneously during sector visualization. Unusual electronic features are: automatic gain control, electronic recording of images on video tape in rf format, and the ability to post-process images during video playback to extract the T/M display and to change the time gain control (tgc) and image size.
Aberration-free superresolution imaging via binary speckle pattern encoding and processing
NASA Astrophysics Data System (ADS)
Ben-Eliezer, Eyal; Marom, Emanuel
2007-04-01
We present an approach that provides superresolution beyond the classical limit as well as image restoration in the presence of aberrations; in particular, the ability to obtain superresolution while simultaneously extending the depth of field (DOF) is tested experimentally. It is based on a recently proposed approach shown to increase the resolution significantly for in-focus images by speckle encoding and decoding. In our approach, an object multiplied by a fine binary speckle pattern may be located anywhere along an extended DOF region. Since the exact magnification is not known in the presence of defocus aberration, the acquired low-resolution image is electronically processed via a parallel-branch decoding scheme, where in each branch the image is multiplied by the same high-resolution synchronized time-varying binary speckle but with a different magnification. Finally, a hard-decision algorithm chooses the branch that provides the highest-resolution output image, thus achieving insensitivity to aberrations as well as DOF variations. Simulation as well as experimental results are presented, exhibiting significant resolution-improvement factors.
A portable low-cost long-term live-cell imaging platform for biomedical research and education.
Walzik, Maria P; Vollmar, Verena; Lachnit, Theresa; Dietz, Helmut; Haug, Susanne; Bachmann, Holger; Fath, Moritz; Aschenbrenner, Daniel; Abolpour Mofrad, Sepideh; Friedrich, Oliver; Gilbert, Daniel F
2015-02-15
Time-resolved visualization and analysis of slow dynamic processes in living cells has revolutionized many aspects of in vitro cellular studies. However, existing technology applied to time-resolved live-cell microscopy is often immobile, costly and requires a high level of skill to use and maintain. These factors limit its utility to field research and educational purposes. The recent availability of rapid prototyping technology makes it possible to quickly and easily engineer purpose-built alternatives to conventional research infrastructure which are low-cost and user-friendly. In this paper we describe the prototype of a fully automated low-cost, portable live-cell imaging system for time-resolved label-free visualization of dynamic processes in living cells. The device is light-weight (3.6 kg), small (22 × 22 × 22 cm) and extremely low-cost (<€1250). We demonstrate its potential for biomedical use by long-term imaging of recombinant HEK293 cells at varying culture conditions and validate its ability to generate time-resolved data of high quality allowing for analysis of time-dependent processes in living cells. While this work focuses on long-term imaging of mammalian cells, the presented technology could also be adapted for use with other biological specimen and provides a general example of rapidly prototyped low-cost biosensor technology for application in life sciences and education. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Fast Image Restoration for Spatially Varying Defocus Blur of Imaging Sensor
Cheong, Hejin; Chae, Eunjung; Lee, Eunsung; Jo, Gwanghyun; Paik, Joonki
2015-01-01
This paper presents a fast adaptive image restoration method for removing spatially varying out-of-focus blur of a general imaging sensor. After estimating the parameters of space-variant point-spread-function (PSF) using the derivative in each uniformly blurred region, the proposed method performs spatially adaptive image restoration by selecting the optimal restoration filter according to the estimated blur parameters. Each restoration filter is implemented in the form of a combination of multiple FIR filters, which guarantees the fast image restoration without the need of iterative or recursive processing. Experimental results show that the proposed method outperforms existing space-invariant restoration methods in the sense of both objective and subjective performance measures. The proposed algorithm can be employed to a wide area of image restoration applications, such as mobile imaging devices, robot vision, and satellite image processing. PMID:25569760
Assessing the impact of graphical quality on automatic text recognition in digital maps
NASA Astrophysics Data System (ADS)
Chiang, Yao-Yi; Leyk, Stefan; Honarvar Nazari, Narges; Moghaddam, Sima; Tan, Tian Xiang
2016-08-01
Converting geographic features (e.g., place names) in map images into a vector format is the first step for incorporating cartographic information into a geographic information system (GIS). With the advancement in computational power and algorithm design, map processing systems have been considerably improved over the last decade. However, the fundamental map processing techniques such as color image segmentation, (map) layer separation, and object recognition are sensitive to minor variations in graphical properties of the input image (e.g., scanning resolution). As a result, most map processing results would not meet user expectations if the user does not "properly" scan the map of interest, pre-process the map image (e.g., using compression or not), and train the processing system, accordingly. These issues could slow down the further advancement of map processing techniques as such unsuccessful attempts create a discouraged user community, and less sophisticated tools would be perceived as more viable solutions. Thus, it is important to understand what kinds of maps are suitable for automatic map processing and what types of results and process-related errors can be expected. In this paper, we shed light on these questions by using a typical map processing task, text recognition, to discuss a number of map instances that vary in suitability for automatic processing. We also present an extensive experiment on a diverse set of scanned historical maps to provide measures of baseline performance of a standard text recognition tool under varying map conditions (graphical quality) and text representations (that can vary even within the same map sheet). Our experimental results help the user understand what to expect when a fully or semi-automatic map processing system is used to process a scanned map with certain (varying) graphical properties and complexities in map content.
Design of an automated imaging system for use in a space experiment
NASA Technical Reports Server (NTRS)
Hartz, William G.; Bozzolo, Nora G.; Lewis, Catherine C.; Pestak, Christopher J.
1991-01-01
An experiment conducted on an orbiting platform examines mass transfer across gas-liquid and liquid-liquid interfaces. It employs an imaging system with real-time image analysis. The design includes optical design, imager selection and integration, positioner control, image recording, software development for processing, and interfaces to telemetry. It addresses the constraints on weight, volume, and electric power associated with placing the experiment in the Space Shuttle cargo bay. Challenging elements of the design are: imaging and recording of a 200-micron-diameter bubble with a resolution of 2 microns to serve as a primary source of data; varying frame rates from 500 frames per second to 1 frame per second, depending on the experiment phase; and providing three-dimensional information to determine the shape of the bubble.
Dynamic electrical impedance imaging with the interacting multiple model scheme.
Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C
2005-04-01
In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
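The abstract's core idea, several Kalman filters that differ only in their process-noise covariance and are blended by the interacting multiple model (IMM) algorithm, can be sketched for a scalar random-walk resistivity state. The two-model setup, function name, and all numeric values below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def imm_step(x, P, mu, Qs, z, R=0.05, Pi=None):
    """One IMM cycle for a scalar random-walk state observed directly.

    x, P  : per-model state estimates and variances (length-M arrays)
    mu    : per-model probabilities
    Qs    : per-model process-noise variances (the models differ only here)
    z     : scalar measurement
    Pi    : Markov model-transition matrix (defaults to sticky transitions)
    """
    M = len(mu)
    if Pi is None:
        Pi = np.full((M, M), 0.05) + np.eye(M) * (1 - 0.05 * M)
    # 1) mixing: predicted model probabilities and mixed initial conditions
    c = Pi.T @ mu                                    # normalising constants
    w = (Pi * mu[:, None]) / c[None, :]              # mixing weights w[i, j]
    x0 = w.T @ x
    P0 = np.array([np.sum(w[:, j] * (P + (x - x0[j]) ** 2)) for j in range(M)])
    # 2) per-model Kalman filter (F = 1, H = 1 random-walk model)
    xp, Pp = x0, P0 + Qs
    S = Pp + R
    K = Pp / S
    xn = xp + K * (z - xp)
    Pn = (1 - K) * Pp
    # 3) model likelihoods and probability update
    L = np.exp(-0.5 * (z - xp) ** 2 / S) / np.sqrt(2 * np.pi * S)
    mu_n = c * L
    mu_n /= mu_n.sum()
    # 4) combined (output) estimate
    x_hat = mu_n @ xn
    return xn, Pn, mu_n, x_hat
```

When the resistivity jumps abruptly, the likelihood of the large-Q model dominates and its probability rises, which is the mechanism the abstract describes for reducing modeling uncertainty.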
Landsat image and sample design for water reservoirs (Rapel dam Central Chile).
Lavanderos, L; Pozo, M E; Pattillo, C; Miranda, H
1990-01-01
Spatial heterogeneity of the Rapel reservoir surface waters is analyzed through Landsat images. The image digital counts are used with the aim of developing an aprioristic quantitative sample design. Natural horizontal stratification of the Rapel Reservoir (Central Chile) is produced mainly by suspended solids. The spatial heterogeneity conditions of the reservoir for the Spring 86-Summer 87 period were determined by qualitative analysis and image processing of MSS Landsat bands 1 and 3. The space-time variations of the different observed strata were obtained with multitemporal image analysis. A random stratified sample design (r.s.s.d.) was developed, based on statistical analysis of the digital counts. Strata population size, as well as the average, variance and sampling size of the digital counts, were obtained by the r.s.s.d. method. The stratification determined by analysis of the satellite images was later correlated with ground data. Though the stratification of the reservoir is constant over time, the shape and size of the strata vary.
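A standard way to turn per-stratum digital-count statistics (sizes and variances) into sampling sizes is Neyman allocation. This is a generic sketch of that textbook rule, offered as a plausible analogue of the r.s.s.d. computation, not the paper's exact method:

```python
def neyman_allocation(N_h, S_h, n):
    """Allocate a total sample size n across strata in proportion to
    N_h * S_h (Neyman allocation), where N_h are stratum population sizes
    and S_h are stratum standard deviations of the digital counts."""
    weights = [N * S for N, S in zip(N_h, S_h)]
    total = sum(weights)
    return [round(n * w / total) for w in weights]
```

Strata with more pixels or more variable digital counts receive proportionally more sampling stations.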
Varying ultrasound power level to distinguish surgical instruments and tissue.
Ren, Hongliang; Anuraj, Banani; Dupont, Pierre E
2018-03-01
We investigate a new framework of surgical instrument detection based on power-varying ultrasound images with simple and efficient pixel-wise intensity processing. Without using complicated feature extraction methods, we identified the instrument at an estimated optimal power level by comparing pixel values across images acquired at varying transducer power levels. The proposed framework exploits the physics of the ultrasound imaging system by varying the transducer power level to effectively distinguish metallic surgical instruments from tissue. This power-varying image guidance is motivated by our observation that ultrasound imaging at different power levels exhibits different contrast enhancement capabilities between tissue and instruments in ultrasound-guided robotic beating-heart surgery. Using lower transducer power levels (ranging from 40 to 75% of the rated lowest ultrasound power levels of the two tested ultrasound scanners) can effectively suppress the strong imaging artifacts from metallic instruments and thus can be utilized together with images at normal transducer power levels to enhance the separability between instrument and tissue, improving intraoperative instrument tracking accuracy from the acquired noisy ultrasound volumetric images. We performed experiments in phantoms and ex vivo hearts in water tank environments. The proposed multi-level power-varying ultrasound imaging approach can identify robotic instruments of high acoustic impedance from low-signal-to-noise-ratio ultrasound images by power adjustments.
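The pixel-wise intensity comparison across power levels can be sketched as a simple ratio test: tissue backscatter drops sharply at low power while strong metallic reflectors stay bright. The function name and threshold below are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def instrument_mask(img_low, img_high, ratio_thresh=0.6):
    """Flag pixels whose intensity persists at low transducer power.

    img_low  : image acquired at a reduced power level
    img_high : image of the same scene at normal power
    Pixels whose low/high intensity ratio stays above the threshold are
    likely strong reflectors (metal instruments) rather than tissue.
    """
    eps = 1e-6  # avoid division by zero in dark regions
    ratio = img_low.astype(float) / (img_high.astype(float) + eps)
    return ratio > ratio_thresh
```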
High speed real-time wavefront processing system for a solid-state laser system
NASA Astrophysics Data System (ADS)
Liu, Yuan; Yang, Ping; Chen, Shanqiu; Ma, Lifang; Xu, Bing
2008-03-01
A high speed real-time wavefront processing system for a solid-state laser beam cleanup system has been built. The system consists of a Core 2 industrial PC (IPC) running Linux and the real-time Linux (RT-Linux) operating system (OS), a PCI image grabber, and a D/A card. More often than not, the phase aberrations of the output beam from solid-state lasers vary rapidly with intracavity thermal effects and environmental influences. To compensate for the phase aberrations of solid-state lasers successfully, a high speed real-time wavefront processing system is presented. Compared to former systems, this system improves processing speed substantially. In the new system, the acquisition of image data, the output of control voltage data and the implementation of the reconstructor control algorithm are treated as real-time tasks in kernel space, while the display of wavefront information and man-machine interaction are treated as non-real-time tasks in user space. Parallel processing of the real-time tasks in Symmetric Multi-Processor (SMP) mode is the main strategy for improving speed. In this paper, the performance and efficiency of this wavefront processing system are analyzed. Open-loop experimental results show that the sampling frequency of this system is up to 3300 Hz, and the system can deal well with phase aberrations from solid-state lasers.
Space station microscopy: Beyond the box
NASA Technical Reports Server (NTRS)
Hunter, N. R.; Pierson, Duane L.; Mishra, S. K.
1993-01-01
Microscopy aboard Space Station Freedom poses many unique challenges for in-flight investigations. Disciplines such as materials processing, plant and animal research, human research, environmental monitoring, health care, and biological processing have diverse microscope requirements. The typical microscope not only does not meet the comprehensive needs of these varied users, but also tends to require excessive crew time. To assess user requirements, a comprehensive survey was conducted among investigators with experiments requiring microscopy. The survey examined requirements such as light sources, objectives, stages, focusing systems, eyepieces, video accessories, etc. The results of this survey and the application of an Intelligent Microscope Imaging System (IMIS) may address these demands for efficient microscopy service in space. The proposed IMIS can accommodate multiple users with varied requirements, operate in several modes, reduce crew time needed for experiments, and take maximum advantage of the restrictive data/instruction transmission environment on Freedom.
State space modeling of time-varying contemporaneous and lagged relations in connectivity maps.
Molenaar, Peter C M; Beltz, Adriene M; Gates, Kathleen M; Wilson, Stephen J
2016-01-15
Most connectivity mapping techniques for neuroimaging data assume stationarity (i.e., network parameters are constant across time), but this assumption does not always hold true. The authors provide a description of a new approach for simultaneously detecting time-varying (or dynamic) contemporaneous and lagged relations in brain connectivity maps. Specifically, they use a novel raw data likelihood estimation technique (involving a second-order extended Kalman filter/smoother embedded in a nonlinear optimizer) to determine the variances of the random walks associated with state space model parameters and their autoregressive components. The authors illustrate their approach with simulated and blood oxygen level-dependent functional magnetic resonance imaging data from 30 daily cigarette smokers performing a verbal working memory task, focusing on seven regions of interest (ROIs). Twelve participants had dynamic directed functional connectivity maps: Eleven had one or more time-varying contemporaneous ROI state loadings, and one had a time-varying autoregressive parameter. Compared to smokers without dynamic maps, smokers with dynamic maps performed the task with greater accuracy. Thus, accurate detection of dynamic brain processes is meaningfully related to behavior in a clinical sample. Published by Elsevier Inc.
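The central modeling idea above, letting a connectivity parameter itself follow a random walk whose variance is estimated, can be illustrated with a scalar Kalman filter that tracks a time-varying AR(1) coefficient. This is a deliberately simplified stand-in for the paper's second-order extended Kalman filter/smoother; the noise variances are illustrative:

```python
import numpy as np

def track_ar_coefficient(y, q=1e-3, r=0.1):
    """Track a time-varying AR(1) coefficient a_t with a scalar Kalman filter.

    State model:  a_t = a_{t-1} + w_t,        Var(w) = q  (random walk)
    Observation:  y_t = a_t * y_{t-1} + v_t,  Var(v) = r  (linear in a_t)
    Returns the filtered estimates of a_t for t = 1..len(y)-1.
    """
    a, P = 0.0, 1.0          # initial coefficient estimate and variance
    est = []
    for t in range(1, len(y)):
        P += q                          # predict: random walk inflates variance
        H = y[t - 1]                    # time-varying observation "matrix"
        S = H * P * H + r               # innovation variance
        K = P * H / S                   # Kalman gain
        a += K * (y[t] - H * a)         # update with the prediction error
        P *= (1 - K * H)
        est.append(a)
    return np.array(est)
```

In the full state space setting, each lagged and contemporaneous loading gets such a random-walk state, and the walk variances q are themselves estimated from the raw data likelihood.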
Time-varying bispectral analysis of visually evoked multi-channel EEG
NASA Astrophysics Data System (ADS)
Chandran, Vinod
2012-12-01
Theoretical foundations of higher order spectral analysis are revisited to examine the use of time-varying bicoherence on non-stationary signals using a classical short-time Fourier approach. A methodology is developed to apply this to evoked EEG responses where a stimulus-locked time reference is available. Short-time windowed ensembles of the response at the same offset from the reference are considered as ergodic cyclostationary processes within a non-stationary random process. Bicoherence can be estimated reliably with known levels at which it is significantly different from zero and can be tracked as a function of offset from the stimulus. When this methodology is applied to multi-channel EEG, it is possible to obtain information about phase synchronization at different regions of the brain as the neural response develops. The methodology is applied to analyze the evoked EEG response to flash visual stimuli presented to the left and right eye separately. The EEG electrode array is segmented based on bicoherence evolution with time, using the mean absolute difference as a measure of dissimilarity. Segment maps confirm the importance of the occipital region in visual processing and demonstrate a link between the frontal and occipital regions during the response. Maps are constructed using bicoherence at bifrequencies that include the alpha band frequency of 8 Hz as well as 4 and 20 Hz. Differences are observed between responses from the left eye and the right eye, and also between subjects. The methodology shows potential as a neurological functional imaging technique that can be further developed for diagnosis and monitoring using scalp EEG, which is less invasive and less expensive than magnetic resonance imaging.
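A minimal direct (short-time Fourier) bicoherence estimator over a stimulus-locked ensemble of windows might look like the sketch below. The windowing, bin indexing, and normalisation follow one common convention; the paper's estimator and significance thresholds may differ in detail:

```python
import numpy as np

def bicoherence(segments, f1, f2):
    """Ensemble-averaged bicoherence at bifrequency (f1, f2).

    segments : 2-D array (n_trials, n_samples) of stimulus-locked windows
               taken at the same offset from the stimulus
    f1, f2   : FFT bin indices
    Returns |b(f1, f2)| in [0, 1]; values near 1 indicate consistent
    quadratic phase coupling between f1, f2 and f1 + f2 across trials.
    """
    X = np.fft.fft(segments * np.hanning(segments.shape[1]), axis=1)
    triple = X[:, f1] * X[:, f2] * np.conj(X[:, f1 + f2])
    num = np.abs(np.mean(triple))
    den = np.sqrt(np.mean(np.abs(X[:, f1] * X[:, f2]) ** 2) *
                  np.mean(np.abs(X[:, f1 + f2]) ** 2))
    return num / den
```

Tracking this quantity while sliding the window offset relative to the stimulus gives the time-varying bicoherence described in the abstract.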
Hybrid vision activities at NASA Johnson Space Center
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1990-01-01
NASA's Johnson Space Center in Houston, Texas, is active in several aspects of hybrid image processing. (The term hybrid image processing refers to a system that combines digital and photonic processing). The major thrusts are autonomous space operations such as planetary landing, servicing, and rendezvous and docking. By processing images in non-Cartesian geometries to achieve shift invariance to canonical distortions, researchers use certain aspects of the human visual system for machine vision. That technology flow is bidirectional; researchers are investigating the possible utility of video-rate coordinate transformations for human low-vision patients. Man-in-the-loop teleoperations are also supported by the use of video-rate image-coordinate transformations, as researchers plan to use bandwidth compression tailored to the varying spatial acuity of the human operator. Technological elements being developed in the program include upgraded spatial light modulators, real-time coordinate transformations in video imagery, synthetic filters that robustly allow estimation of object pose parameters, convolutionally blurred filters that have continuously selectable invariance to such image changes as magnification and rotation, and optimization of optical correlation done with spatial light modulators that have limited range and couple both phase and amplitude in their response.
Physics-based interactive volume manipulation for sharing surgical process.
Nakao, Megumi; Minato, Kotaro
2010-05-01
This paper presents a new set of techniques by which surgeons can interactively manipulate patient-specific volumetric models for sharing surgical process. To handle physical interaction between the surgical tools and organs, we propose a simple surface-constraint-based manipulation algorithm to consistently simulate common surgical manipulations such as grasping, holding and retraction. Our computation model is capable of simulating soft-tissue deformation and incision in real time. We also present visualization techniques in order to rapidly visualize time-varying, volumetric information on the deformed image. This paper demonstrates the success of the proposed methods in enabling the simulation of surgical processes, and the ways in which this simulation facilitates preoperative planning and rehearsal.
Multiple Acquisition InSAR Analysis: Persistent Scatterer and Small Baseline Approaches
NASA Astrophysics Data System (ADS)
Hooper, A.
2006-12-01
InSAR techniques that process data from multiple acquisitions enable us to form time series of deformation and also allow us to reduce error terms present in single interferograms. There are currently two broad categories of methods that deal with multiple images: persistent scatterer methods and small baseline methods. The persistent scatterer approach relies on identifying pixels whose scattering properties vary little with time and look angle. Pixels that are dominated by a single scatterer best meet these criteria; therefore, images are processed at full resolution both to increase the chance of there being only one dominant scatterer present and to reduce the contribution from other scatterers within each pixel. In images where most pixels contain multiple scatterers of similar strength, even at the highest possible resolution, the persistent scatterer approach is suboptimal, as the scattering characteristics of these pixels vary substantially with look angle. In this case, an approach that forms interferograms only from pairs of images for which the difference in look angle is small makes better sense, and resolution can be sacrificed to reduce the effects of the look angle difference by band-pass filtering. This is the small baseline approach. Existing small baseline methods depend on forming a series of multilooked interferograms and unwrapping each one individually. This approach fails to take advantage of two of the benefits of processing multiple acquisitions, however, which are usually embodied in persistent scatterer methods: the ability to find and extract the phase for single-look pixels with good signal-to-noise ratio that are surrounded by noisy pixels, and the ability to unwrap more robustly in three dimensions, the third dimension being that of time. We have therefore developed a new small baseline method to select individual single-look pixels that behave coherently in time, so that isolated stable pixels may be found.
After correction for various error terms, the phase values of the selected pixels are unwrapped using a new three-dimensional algorithm. We apply our small baseline method to an area in southern Iceland that includes Katla and Eyjafjallajökull volcanoes, and retrieve a time series of deformation that shows transient deformation due to intrusion of magma beneath Eyjafjallajökull. We also process the data using the Stanford method for persistent scatterers (StaMPS) for comparison.
Flagging and Correction of Pattern Noise in the Kepler Focal Plane Array
NASA Technical Reports Server (NTRS)
Kolodziejczak, Jeffery J.; Caldwell, Douglas A.; VanCleve, Jeffrey E.; Clarke, Bruce D.; Jenkins, Jon M.; Cote, Miles T.; Klaus, Todd C.; Argabright, Vic S.
2010-01-01
In order for Kepler to achieve its required less than 20 PPM photometric precision for magnitude 12 and brighter stars, instrument-induced variations in the CCD readout bias pattern (our "2D black image"), which are either fixed or slowly varying in time, must be identified and the corresponding pixels either corrected or removed from further data processing. The two principal sources of these readout bias variations are crosstalk between the 84 science CCDs and the 4 fine guidance sensor (FGS) CCDs, and a high frequency amplifier oscillation on less than 40% of the CCD readout channels. The crosstalk produces a synchronous pattern in the 2D black image with time-variation observed in less than 10% of individual pixel bias histories. We will describe a method of removing the crosstalk signal using continuously-collected data from masked and over-clocked image regions (our "collateral data"), and occasionally-collected full-frame images and reverse-clocked readout signals. We use this same set to detect regions affected by the oscillating amplifiers. The oscillations manifest as time-varying moiré patterns and rolling bands in the affected channels. Because this effect reduces the performance in only a small fraction of the array at any given time, we have developed an approach for flagging suspect data. The flags will provide the necessary means to resolve any potential ambiguity between instrument-induced variations and real photometric variations in a target time series. We will also evaluate the effectiveness of these techniques using flight data from background and selected target pixels.
Real-time correction of beamforming time delay errors in abdominal ultrasound imaging
NASA Astrophysics Data System (ADS)
Rigby, K. W.
2000-04-01
The speed of sound varies with tissue type, yet commercial ultrasound imagers assume a constant sound speed. Sound speed variation in abdominal fat and muscle layers is widely believed to be largely responsible for poor contrast and resolution in some patients. The simplest model of the abdominal wall assumes that it adds a spatially varying time delay to the ultrasound wavefront. The adequacy of this model is controversial. We describe an adaptive imaging system consisting of a GE LOGIQ 700 imager connected to a multi-processor computer. Arrival time errors for each beamforming channel, estimated by correlating each channel signal with the beamsummed signal, are used to correct the imager's beamforming time delays at the acoustic frame rate. A multi-row transducer provides two-dimensional sampling of arrival time errors. We observe significant improvement in abdominal images of healthy male volunteers: increased contrast of blood vessels, increased visibility of the renal capsule, and increased brightness of the liver.
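The per-channel arrival-time estimate described above, correlating each channel signal with the beamsummed reference, can be sketched as follows. This illustrative helper works at integer-sample resolution only; a real beamformer would interpolate the correlation peak to sub-sample precision:

```python
import numpy as np

def arrival_time_error(channel, beamsum, max_lag=20):
    """Estimate a channel's arrival-time error (in samples) by locating the
    peak of its cross-correlation with the beamsummed reference signal.

    A positive result means the channel signal arrives `result` samples
    later than the beamsum, i.e. its beamforming delay should be reduced
    by that amount.
    """
    lags = np.arange(-max_lag, max_lag + 1)
    xc = [np.dot(np.roll(channel, -l), beamsum) for l in lags]
    return lags[int(np.argmax(xc))]
```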
NASA Astrophysics Data System (ADS)
Barwick, Brett; Gronniger, Glen; Yuan, Lu; Liou, Sy-Hwang; Batelaan, Herman
2006-10-01
Electron diffraction from metal coated freestanding nanofabricated gratings is presented, with a quantitative path integral analysis of the electron-grating interactions. Electron diffraction out to the 20th order was observed, indicating the high quality of our nanofabricated gratings. The electron beam is collimated to its diffraction limit with ion-milled material slits. Our path integral analysis is first tested against single slit electron diffraction, and then further expanded with the same theoretical approach to describe grating diffraction. Rotation of the grating with respect to the incident electron beam varies the effective distance between the electron and the grating bars. This allows the measurement of the image charge potential between the electron and the grating bars. Image charge potentials of about 15% of the value expected for a pure electron-metal wall interaction were found. We varied the electron energy from 50 to 900 eV. The interaction time is of the order of typical metal image charge response times and in principle allows the investigation of image charge formation. In addition to the image charge interaction there is a dephasing process reducing the transverse coherence length of the electron wave. The dephasing process causes broadening of the diffraction peaks and is consistent with a model that ascribes the dephasing process to microscopic contact potentials. Surface structures with length scales of about 200 nm observed with a scanning tunneling microscope, and a dephasing interaction strength typical of contact potentials of 0.35 eV, support this claim. Such a dephasing model motivated the investigation of different metallic coatings, in particular Ni, Ti, Al, and different thicknesses of Au-Pd coatings. Improved quality of diffraction patterns was found for Ni. This coating made electron diffraction possible at energies as low as 50 eV. This energy was limited by our electron gun design.
These results are particularly relevant for the use of these gratings as coherent beam splitters in low energy electron interferometry.
Spectral Imaging from UAVs Under Varying Illumination Conditions
NASA Astrophysics Data System (ADS)
Hakala, T.; Honkavaara, E.; Saari, H.; Mäkynen, J.; Kaivosoja, J.; Pesonen, L.; Pölönen, I.
2013-08-01
Rapidly developing unmanned aerial vehicles (UAVs) have provided the remote sensing community with a new, rapidly deployable tool for small area monitoring. The progress of small payload UAVs has introduced greater demand for lightweight aerial payloads. For applications requiring aerial images, a simple consumer camera provides acceptable data. For applications requiring more detailed spectral information about the surface, a new Fabry-Perot interferometer based spectral imaging technology has been developed. This new technology produces tens of successive images of the scene at different wavelength bands in a very short time. These images can be assembled in spectral data cubes with stereoscopic overlaps. In the field, weather conditions vary, and the UAV operator often has to decide between flying in suboptimal conditions and not flying at all. Our objective was to investigate methods for quantitative radiometric processing of images taken under varying illumination conditions, thus expanding the range of weather conditions during which successful imaging flights can be made. A new method based on in situ measurement of irradiance, either on the UAV platform or on the ground, was developed. We tested the methods in a precision agriculture application using realistic data collected in difficult illumination conditions. Internal homogeneity of the original image data (average coefficient of variation in overlapping images) was 0.14-0.18. In the corrected data, the homogeneity was 0.10-0.12 with a correction based on broadband irradiance measured on the UAV, 0.07-0.09 with a correction based on spectral irradiance measured on the ground, and 0.05-0.08 with a radiometric block adjustment based on the image data. Our results were very promising, indicating that quantitative UAV based remote sensing could be operational in diverse conditions, which is a prerequisite for many environmental remote sensing applications.
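At its simplest, the irradiance-based correction amounts to scaling each frame's digital numbers by the ratio of a reference irradiance to the irradiance measured at exposure time. This is a minimal sketch assuming a linear sensor response; the paper's radiometric block adjustment is considerably more elaborate:

```python
def irradiance_correct(dn, e_meas, e_ref):
    """Normalise image digital numbers to a reference irradiance level.

    dn     : raw pixel value(s) from one frame
    e_meas : irradiance measured (on the UAV or on the ground) at exposure
    e_ref  : irradiance associated with the chosen reference frame
    Scaling by e_ref / e_meas removes frame-to-frame illumination changes,
    e.g. a cloud passing in front of the sun, under a linear-sensor model.
    """
    return dn * (e_ref / e_meas)
```

A frame exposed under half the reference irradiance has its digital numbers doubled, so overlapping frames become radiometrically comparable.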
NASA Astrophysics Data System (ADS)
Shaked, Natan T.; Girshovitz, Pinhas; Frenklach, Irena
2014-06-01
We present our recent advances in the development of compact, highly portable and inexpensive wide-field interferometric modules. By smart design of the interferometric system, including the use of low-coherence illumination sources and common-path off-axis geometry of the interferometers, spatial and temporal noise levels of the resulting quantitative thickness profile can be sub-nanometric, while the phase profile is processed in real time. In addition, due to novel experimentally implemented multiplexing methods, we can capture low-coherence off-axis interferograms with a significantly extended field of view and at faster acquisition rates. Using these techniques, we quantitatively imaged rapid dynamics of live biological cells including sperm cells and unicellular microorganisms. Then, we demonstrated dynamic profiling during lithography processes of microscopic elements, with thicknesses that may vary from several nanometers to hundreds of microns. Finally, we present new algorithms for fast reconstruction (including digital phase unwrapping) of off-axis interferograms, which allow real-time processing at more than video rate on regular single-core computers.
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi
2006-02-01
The Retinex theory was first proposed by Land, and deals with separation of irradiance from reflectance in an observed image. The separation problem is an ill-posed problem. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies previous Retinex algorithms such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm with the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, as to its extension to color images, we present two approaches to treating the color channels: an independent approach that treats each color channel separately, and a collective approach that treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's separation algorithm.
Mueller, R F; Characklis, W G; Jones, W L; Sears, J T
1992-05-01
The processes leading to bacterial colonization on solid-water interfaces are adsorption, desorption, growth, and erosion. These processes have been measured individually in situ in a flowing system in real time using image analysis. Four different substrata (copper, silicon, 316 stainless steel and glass) and 2 different bacterial species (Pseudomonas aeruginosa and Pseudomonas fluorescens) were used in the experiments. The flow was laminar (Re = 1.4) and the shear stress was kept constant during all experiments at 0.75 N m(-2). The surface roughness varied among the substrata from 0.002 microm (for silicon) to 0.015 microm (for copper). Surface free energies varied from 25.1 dynes cm(-1) for silicon to 31.2 dynes cm(-1) for copper. Cell surface hydrophobicity, reported as hydrocarbon partitioning values, ranged from 0.67 for Ps. fluorescens to 0.97 for Ps. aeruginosa. The adsorption rate coefficient varied by as much as a factor of 10 among the combinations of bacterial strain and substratum material, and was positively correlated with surface free energy, the surface roughness of the substratum, and the hydrophobicity of the cells. The probability of desorption decreased with increasing surface free energy and surface roughness of the substratum. Cell growth was inhibited on copper, but replication of cells overlying an initial cell layer was observed with increased exposure time to the cell-containing bulk water. A mathematical model describing cell accumulation on a substratum is presented.
Gintautas, Vadas; Ham, Michael I.; Kunsberg, Benjamin; Barr, Shawn; Brumby, Steven P.; Rasmussen, Craig; George, John S.; Nemenman, Ilya; Bettencourt, Luís M. A.; Kenyon, Garret T.
2011-01-01
Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas. PMID:21998562
Comparing multiple turbulence restoration algorithms performance on noisy anisoplanatic imagery
NASA Astrophysics Data System (ADS)
Rucci, Michael A.; Hardie, Russell C.; Dapore, Alexander J.
2017-05-01
In this paper, we compare the performance of multiple turbulence mitigation algorithms to restore imagery degraded by atmospheric turbulence and camera noise. In order to quantify and compare algorithm performance, imaging scenes were simulated by applying noise and varying levels of turbulence. For the simulation, a Monte-Carlo wave optics approach is used to simulate the spatially and temporally varying turbulence in an image sequence. A Poisson-Gaussian noise mixture model is then used to add noise to the observed turbulence image set. These degraded image sets are processed with three separate restoration algorithms: Lucky Look imaging, bispectral speckle imaging, and a block matching method with restoration filter. These algorithms were chosen because they incorporate different approaches and processing techniques. The results quantitatively show how well the algorithms are able to restore the simulated degraded imagery.
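The Poisson-Gaussian degradation step used in the simulation, photon shot noise that scales with the signal plus additive Gaussian read noise, can be sketched as follows. Parameter names and default values are illustrative, not the paper's settings:

```python
import numpy as np

def add_poisson_gaussian_noise(img, peak=1000.0, read_std=2.0, rng=None):
    """Apply a Poisson-Gaussian noise mixture to a clean image in [0, 1].

    img      : clean image, values in [0, 1]
    peak     : photon count corresponding to full scale (sets shot noise)
    read_std : sensor read noise standard deviation, in photon counts
    Photon shot noise is Poisson in the scaled signal; read noise is
    additive Gaussian, matching the mixture model in the text.
    """
    rng = rng or np.random.default_rng()
    shot = rng.poisson(img * peak) / peak          # signal-dependent part
    return shot + rng.normal(0.0, read_std / peak, img.shape)
```

Lower `peak` values yield noisier observations, which is how varying noise severity can be swept alongside the turbulence levels.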
The pupil's response to affective pictures: Role of image duration, habituation, and viewing mode
O'Farrell, Katherine R.; Burley, Daniel; Erichsen, Jonathan T.; Newton, Naomi V.; Gray, Nicola S.
2016-01-01
The pupil has been shown to be sensitive to the emotional content of stimuli. We examined this phenomenon by comparing fearful and neutral images carefully matched in the domains of luminance, image contrast, image color, and complexity of content. The pupil was more dilated after viewing affective pictures, and this effect was (a) shown to be independent of the presentation time of the images (from 100–3,000 ms), (b) not diminished by repeated presentations of the images, and (c) not affected by actively naming the emotion of the stimuli in comparison to passive viewing. Our results show that the emotional modulation of the pupil is present over a range of variables that typically vary from study to study (image duration, number of trials, free viewing vs. task), and encourage the use of pupillometry as a measure of emotional processing in populations where alternative techniques may not be appropriate. PMID:27172997
Automated measurement of pressure injury through image processing.
Li, Dan; Mathews, Carol
2017-11-01
To develop an image processing algorithm to automatically measure pressure injuries using electronic pressure injury images stored in nursing documentation. Photographing pressure injuries and storing the images in the electronic health record is standard practice in many hospitals. However, the manual measurement of pressure injury is time-consuming, challenging and subject to intra/inter-reader variability with complexities of the pressure injury and the clinical environment. A cross-sectional algorithm development study. A set of 32 pressure injury images were obtained from a western Pennsylvania hospital. First, we transformed the images from an RGB (i.e. red, green and blue) colour space to a YCbCr colour space to eliminate interference from varying light conditions and skin colours. Second, a probability map, generated by a skin colour Gaussian model, guided the pressure injury segmentation process using the Support Vector Machine classifier. Third, after segmentation, the reference ruler - included in each of the images - enabled perspective transformation and determination of pressure injury size. Finally, two nurses independently measured those 32 pressure injury images, and the intraclass correlation coefficient was calculated. An image processing algorithm was developed to automatically measure the size of pressure injuries. Both inter- and intra-rater analyses achieved a good level of reliability. Validation of the size measurement of the pressure injury (1) demonstrates that our image processing algorithm is a reliable approach to monitoring pressure injury progress through clinical pressure injury images and (2) offers new insight into pressure injury evaluation and documentation. Once our algorithm is further developed, clinicians can be provided with an objective, reliable and efficient computational tool for segmentation and measurement of pressure injuries.
With this, clinicians will be able to more effectively monitor the healing process of pressure injuries. © 2017 John Wiley & Sons Ltd.
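The colour-space step in this pipeline can be sketched with the standard ITU-R BT.601 full-range equations; the abstract does not state which YCbCr variant the authors used, so the coefficients below are a common assumption. Skin detection typically operates on the Cb/Cr channels, which are less sensitive to illumination than RGB.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image array (H x W x 3, values 0-255) to YCbCr
    using the ITU-R BT.601 full-range equations (an assumed variant)."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)
```

For a neutral gray pixel the chroma channels sit at 128, which is why shadows and lighting changes move Y while leaving Cb/Cr comparatively stable.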
Real-time, in situ monitoring of nanoporation using electric field-induced acoustic signal
NASA Astrophysics Data System (ADS)
Zarafshani, Ali; Faiz, Rowzat; Samant, Pratik; Zheng, Bin; Xiang, Liangzhong
2018-02-01
The use of nanoporation in reversible or irreversible electroporation, e.g. cancer ablation, is rapidly growing. This technique uses an ultra-short and intense electric pulse to increase membrane permeability, allowing non-permeant drugs and genes access to the cytosol via nanopores in the plasma membrane. It is vital to create a real-time in situ monitoring technique to characterize this process and meet the needs of successful electroporation procedures in cancer treatment. All currently suggested monitoring techniques for electroporation address pre- and post-stimulation exposure, with no real-time monitoring during electric field exposure. This study was aimed at developing an innovative technology for real-time in situ monitoring of electroporation based on the acoustic emissions induced by cell exposure. The acoustic signals are the result of the electric field, which itself can be used in real time to characterize the process of electroporation. We varied the electric field distribution by varying the electric pulse duration from 1 μs to 100 ns and the voltage from 0 to 1.2 kV to energize two electrodes in a bi-polar set-up. An ultrasound transducer was used for collecting acoustic signals around the subject under test. We determined the relative location of the acoustic signals by varying the position of the electrodes relative to the transducer and varying the electric field distribution between the electrodes to capture a variety of acoustic signals. Therefore, the electric field that is utilized in the nanoporation technique also produces a series of corresponding acoustic signals. This offers a novel imaging technique for the real-time in situ monitoring of electroporation that may directly improve treatment efficiency.
On-board multispectral classification study
NASA Technical Reports Server (NTRS)
Ewalt, D.
1979-01-01
The factors relating to onboard multispectral classification were investigated. The functions implemented in ground-based processing systems for current Earth observation sensors were reviewed. The Multispectral Scanner, Thematic Mapper, Return Beam Vidicon, and Heat Capacity Mapper were studied. The concept of classification was reviewed and extended from the ground-based image processing functions to an onboard system capable of multispectral classification. Eight different onboard configurations, each with varying amounts of ground-spacecraft interaction, were evaluated. Each configuration was evaluated in terms of turnaround time, onboard processing and storage requirements, geometric and classification accuracy, onboard complexity, and ancillary data required from the ground.
Zysset, S; Müller, K; Lehmann, C; Thöne-Otto, A I; von Cramon, D Y
2001-11-13
Previous studies have shown that reaction time in an item-recognition task with both short and long lists is a quadratic function of list length. This suggests that either different memory retrieval processes are employed for short and long lists or an adaptive process is involved. An event-related functional magnetic resonance imaging study with nine subjects and list lengths varying between 3 and 18 words was conducted to identify the underlying neuronal structures of retrieval from long and short lists. For the retrieval and processing of word-lists a single fronto-parietal network, including premotor, left prefrontal, left precuneal and left parietal regions, was activated. With increasing list length, no additional regions became involved in retrieving information from long-term memory, suggesting that not necessarily different, but highly adaptive retrieval processes are involved.
Real-time volume rendering of 4D image using 3D texture mapping
NASA Astrophysics Data System (ADS)
Hwang, Jinwoo; Kim, June-Sic; Kim, Jae Seok; Kim, In Young; Kim, Sun Il
2001-05-01
A four-dimensional image is 3D volume data that varies with time. It is used to express deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render 4D images with conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing stage required whenever the volume data change. Even when 3D texture mapping is used, repeatedly loading volumes is also time-consuming in 4D image rendering. In this study, we propose a method to reduce data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. Volume data are divided into small bricks, and each brick being loaded is tested for similarity to the one already loaded in memory. If the brick passes the test, it is defined as a 3D texture by OpenGL functions. Later, the texture slices of the brick are mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered in interactive time on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is currently available on PCs.
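The brick-coherence idea can be sketched as follows; the mean-absolute-difference similarity test, brick size, and tolerance are illustrative assumptions, since the abstract does not specify the actual test. Bricks flagged as changed are the ones that would be re-uploaded as 3D textures.

```python
import numpy as np

def update_bricks(volume, cache, brick=16, tol=2.0):
    """Split `volume` (a 3D array) into cubic bricks and return the indices
    of bricks that differ from the cached copy by more than `tol` (mean
    absolute difference) and therefore must be re-uploaded as 3D textures.
    `cache` (a dict) is updated in place; unchanged bricks keep their
    already-loaded texture."""
    dirty = []
    d, h, w = volume.shape
    for z in range(0, d, brick):
        for y in range(0, h, brick):
            for x in range(0, w, brick):
                key = (z, y, x)
                blk = volume[z:z+brick, y:y+brick, x:x+brick]
                old = cache.get(key)
                if old is None or np.abs(blk.astype(float) - old.astype(float)).mean() > tol:
                    cache[key] = blk.copy()
                    dirty.append(key)
    return dirty
```

On the first frame every brick is dirty; on subsequent frames only the bricks that actually deformed are reloaded, which is where the claimed loading-time reduction comes from.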
Phantom evaluation of the effect of film processing on mammographic screen-film combinations.
McLean, D; Rickard, M T
1994-08-01
Mammographic image quality should be optimal for diagnosis, and the film contrast can be manipulated by altering development parameters. In this study phantom test objects were radiographed and processed for a given range of developer temperatures and times for four film-screen systems. Radiologists scored the phantom test objects on the resultant films to evaluate the effect on diagnosis of varying image contrast. While for three film-screen systems processing led to appreciable contrast differences, for only one film system did maximum contrast correspond with optimal phantom test object scoring. The inability to show an effect on diagnosis in all cases is possibly due to the variation in radiologist responses found in this study and in normal clinical circumstances. Other technical factors such as changes in film fog, grain and mottle may contribute to the study findings.
Multisensor data fusion across time and space
NASA Astrophysics Data System (ADS)
Villeneuve, Pierre V.; Beaven, Scott G.; Reed, Robert A.
2014-06-01
Field measurement campaigns typically deploy numerous sensors having different sampling characteristics for spatial, temporal, and spectral domains. Data analysis and exploitation is made more difficult and time consuming because the sample data grids between sensors do not align. This report summarizes our recent effort to demonstrate the feasibility of a processing chain capable of "fusing" image data from multiple independent and asynchronous sensors into a form amenable to analysis and exploitation using commercially-available tools. Two important technical issues were addressed in this work: 1) Image spatial registration onto a common pixel grid, 2) Image temporal interpolation onto a common time base. The first step leverages existing image matching and registration algorithms. The second step relies upon a new and innovative use of optical flow algorithms to perform accurate temporal upsampling of slower frame rate imagery. Optical flow field vectors were first derived from high-frame-rate, high-resolution imagery, and then used as a basis for temporal upsampling of the slower frame rate sensor's imagery. Optical flow field values are computed using a multi-scale image pyramid, thus allowing for more extreme object motion. This involves preprocessing imagery to varying resolution scales and initializing new vector flow estimates using those from the previous coarser-resolution image. Overall performance of this processing chain is demonstrated using sample data involving complex motion observed by multiple sensors mounted to the same base, including a high-speed visible camera and a coarser-resolution LWIR camera.
Quality Assurance By Laser Scanning And Imaging Techniques
NASA Astrophysics Data System (ADS)
Schmalfuß, Harald J.; Schinner, Karl Ludwig
1989-03-01
Laser scanning systems are well established in the world of fast industrial in-process quality inspection. The materials inspected by laser scanning systems are, e.g., "endless" sheets of steel, paper, textile, film or foils. The web width varies from 50 mm up to 5000 mm or more. The web speed depends strongly on the production process and can reach several hundred meters per minute. The continuous data flow in each of the different channels of the optical receiving system exceeds ten megapixels/sec. It is therefore clear that the electronic evaluation system has to process these data streams in real time, and no image storage is possible. But sometimes (e.g. at first installation of the system, or on a change of the defect classification) it would be very helpful to have the possibility of a visual look at the original, i.e. not processed, sensor data. First we show the principal set-up of a standard laser scanning system. Then we introduce a large image memory especially designed for the needs of high-speed inspection sensors. This image memory co-operates with the standard on-line evaluation electronics and therefore provides an easy comparison between processed and non-processed data. We discuss the basic system structure and show the first industrial results.
NASA Technical Reports Server (NTRS)
May, C. E.; Philipp, W. H.; Marsik, S. J.
1972-01-01
Crystalline sodium hypophosphite was X-irradiated and then treated with an ammoniacal nickel hypophosphite solution. Treatment resulted in the precipitation of nickel metal. The yield of nickel metal varied directly with particle size, sample weight, X-ray voltage, target current, exposure time, and development time. These findings show the process to be potentially useful in X-ray type photography. The half-life for the latent image species was found to be relatively short; but this is not critical in most X-ray photography applications. Furthermore, the work can be interpreted on the basis that a hydrogen atom is involved in the mechanism and indicates that the autocatalytic development step may be self-poisoning.
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically-motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging improves noise robustness, decreases acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
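The frame-theoretic view described here (measurements as inner products of structured-light patterns with the scene, reconstruction via a dual frame) can be illustrated with a toy example. The random ±1 patterns and pseudoinverse reconstruction below are illustrative assumptions, not the paper's specific choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 8x8 scene, flattened to a vector (illustrative).
n = 64
scene = rng.random(n)

# Measurement patterns form the rows of Phi; here random +/-1 patterns.
# In the frame-theoretic view, any set of patterns spanning the scene
# space is a frame, and overcompleteness (m > n) buys noise robustness.
m = 96
Phi = rng.choice([-1.0, 1.0], size=(m, n))

# Each single-pixel measurement is one inner product: the bucket detector
# records the total light after the scene is modulated by one pattern.
y = Phi @ scene

# Linear reconstruction via the pseudoinverse (the canonical dual frame).
recon = np.linalg.pinv(Phi) @ y
```

With noisy measurements, the same least-squares reconstruction averages the noise across the redundant measurements, which is one concrete payoff of the frame formulation.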
NASA Technical Reports Server (NTRS)
Watkins, R. N.; Jolliff, B. L.; Lawrence, S. J.; Hayne, P. O.; Ghent, R. R.
2017-01-01
Understanding how the distribution of boulders on the lunar surface changes over time is key to understanding small-scale erosion processes and the rate at which rocks become regolith. Boulders degrade over time, primarily as a result of micrometeorite bombardment, so their residence time at the surface can inform the rate at which rocks become regolith or become buried within regolith. Because of the gradual degradation of exposed boulders, we expect that the boulder population around an impact crater will decrease as crater age increases. Boulder distributions around craters of varying ages are needed to understand regolith production rates, and Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) images provide one of the best tools for conducting these studies. Using NAC images to assess how the distribution of boulders varies as a function of crater age provides key constraints for boulder erosion processes. Boulders also represent a potential hazard that must be addressed in the planning of future lunar landings. A boulder under a landing leg can contribute to deck tilt, and boulders can damage spacecraft during landing. Using orbital data to characterize boulder populations at locations where landers have safely touched down (Apollo, Luna, Surveyor, Chang'e-3) provides validation for landed mission hazard avoidance planning. Additionally, counting boulders at legacy landing sites is useful because: 1) LROC has extensive coverage of these sites at high resolutions (approximately 0.5 meters per pixel). 2) Returned samples from craters at these sites have been radiometrically dated, allowing assessment of how boulder distributions vary as a function of crater age. 3) Surface photos at these sites can be used to correlate with remote sensing measurements.
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices like mobile phones, music players with cameras, digital cameras, etc. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings like ISO sensitivity, exposure time and aperture for low-light image capture results in noise amplification, motion blur and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal and saturation detection. The algorithm needs no more memory than a single JPEG macroblock to be kept in memory, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
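The sigmoidal-boosting step can be sketched as a tone curve applied to the shorter exposures before fusion, lifting midtones while pinning black and white. The gain and midpoint below are illustrative parameters, not the paper's values, and the paper applies the operation in the JPEG domain rather than to pixel arrays.

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.35):
    """Boost a short-exposure image (float values in [0, 1]) with a
    sigmoidal tone curve, rescaled so 0 maps to 0 and 1 maps to 1."""
    s = 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))
    lo = 1.0 / (1.0 + np.exp(gain * midpoint))          # curve value at 0
    hi = 1.0 / (1.0 + np.exp(-gain * (1.0 - midpoint)))  # curve value at 1
    return (s - lo) / (hi - lo)
```

Boosting before fusion brings the short exposure's brightness closer to the long exposure's, so the subsequent weighted blend can trade the short frame's sharpness against the long frame's SNR without a visible brightness seam.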
Yip, Hon Ming; Li, John C. S.; Cui, Xin; Gao, Qiannan; Leung, Chi Chiu
2014-01-01
As microfluidics has been applied extensively in many cell and biochemical applications, monitoring the related processes is an important requirement. In this work, we design and fabricate a high-throughput microfluidic device which contains 32 microchambers to perform automated parallel microfluidic operations and monitoring on an automated stage of a microscope. Images are captured at multiple spots on the device during the operations for monitoring samples in microchambers in parallel; yet the device positions may vary at different time points throughout operations as the device moves back and forth on a motorized microscopic stage. Here, we report an image-based positioning strategy to realign the chamber position before every recording of microscopic image. We fabricate alignment marks at defined locations next to the chambers in the microfluidic device as reference positions. We also develop image processing algorithms to recognize the chamber positions in real-time, followed by realigning the chambers to their preset positions in the captured images. We perform experiments to validate and characterize the device functionality and the automated realignment operation. Together, this microfluidic realignment strategy can be a platform technology to achieve precise positioning of multiple chambers for general microfluidic applications requiring long-term parallel monitoring of cell and biochemical activities. PMID:25133248
Standardizing Quality Assessment of Fused Remotely Sensed Images
NASA Astrophysics Data System (ADS)
Pohl, C.; Moellmann, J.; Fries, K.
2017-09-01
The multitude of available operational remote sensing satellites has led to the development of many image fusion techniques to provide high spatial, spectral and temporal resolution images. The comparison of different techniques is necessary to obtain an optimized image for the different applications of remote sensing. There are two approaches to assessing image quality: 1. qualitatively, by visual interpretation, and 2. quantitatively, using image quality indices. However, an objective comparison is difficult because a visual assessment is always subjective and a quantitative assessment is done by different criteria. Depending on the criteria and indices, the result varies. It is therefore necessary to standardize both processes (qualitative and quantitative assessment) in order to allow an objective evaluation of image fusion quality. Various studies have been conducted at the University of Osnabrueck (UOS) to establish a standardized process to objectively compare fused image quality. First, established image fusion quality assessment protocols, i.e. Quality with No Reference (QNR) and Khan's protocol, were compared on various fusion experiments. Second, the process of visual quality assessment was structured and standardized with the aim of providing an evaluation protocol. This manuscript reports on the results of the comparison and provides recommendations for future research.
Optimal Binarization of Gray-Scaled Digital Images via Fuzzy Reasoning
NASA Technical Reports Server (NTRS)
Dominguez, Jesus A. (Inventor); Klinko, Steven J. (Inventor)
2007-01-01
A technique for finding an optimal threshold for binarization of a gray scale image employs fuzzy reasoning. A triangular membership function is employed which is dependent on the degree to which the pixels in the image belong to either the foreground class or the background class. Use of a simplified linear fuzzy entropy factor function facilitates short execution times and use of membership values between 0.0 and 1.0 for improved accuracy. To improve accuracy further, the membership function employs lower and upper bound gray level limits that can vary from image to image and are selected to be equal to the minimum and the maximum gray levels, respectively, that are present in the image to be converted. To identify the optimal binarization threshold, an iterative process is employed in which different possible thresholds are tested and the one providing the minimum fuzzy entropy measure is selected.
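The iterative threshold search can be sketched as follows. The triangular membership around each class mean and the linear entropy factor S(mu) = 1 - |2*mu - 1| are simplified stand-ins consistent with the description above, not the patented formulation; the gray-level bounds come from the image's own minimum and maximum, as the text specifies.

```python
import numpy as np

def fuzzy_threshold(img):
    """Pick the binarization threshold minimizing a fuzzy entropy measure.
    Each pixel's membership in its class (foreground/background) falls off
    linearly with distance from the class mean, bounded by the image's own
    gray-level range. A sketch of the idea, not the patented formulation."""
    pixels = img.ravel().astype(np.float64)
    gmin, gmax = pixels.min(), pixels.max()
    span = max(gmax - gmin, 1.0)
    best_t, best_e = int(gmin), np.inf
    for t in range(int(gmin), int(gmax)):
        fg, bg = pixels[pixels > t], pixels[pixels <= t]
        if fg.size == 0 or bg.size == 0:
            continue
        mu = np.where(pixels > t,
                      1.0 - np.abs(pixels - fg.mean()) / span,
                      1.0 - np.abs(pixels - bg.mean()) / span)
        entropy = np.mean(1.0 - np.abs(2.0 * mu - 1.0))  # linear entropy factor
        if entropy < best_e:
            best_t, best_e = t, entropy
    return best_t
```

On a cleanly bimodal image the entropy is minimized by any threshold separating the two modes, since every pixel then sits near its class mean and the memberships approach 1.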
Universal Rim Thickness in Unsteady Sheet Fragmentation.
Wang, Y; Dandekar, R; Bustos, N; Poulain, S; Bourouiba, L
2018-05-18
Unsteady fragmentation of a fluid bulk into droplets is important for epidemiology as it governs the transport of pathogens from sneezes and coughs, or from contaminated crops in agriculture. It is also ubiquitous in industrial processes such as paint, coating, and combustion. Unsteady fragmentation is distinct from steady fragmentation on which most theoretical efforts have been focused thus far. We address this gap by studying a canonical unsteady fragmentation process: the breakup from a drop impact on a finite surface where the drop fluid is transferred to a free expanding sheet of time-varying properties and bounded by a rim of time-varying thickness. The continuous rim destabilization selects the final spray droplets, yet this process remains poorly understood. We combine theory with advanced image analysis to study the unsteady rim destabilization. We show that, at all times, the rim thickness is governed by a local instantaneous Bond number equal to unity, defined with the instantaneous, local, unsteady rim acceleration. This criterion is found to be robust and universal for a family of unsteady inviscid fluid sheet fragmentation phenomena, from impacts of drops on various surface geometries to impacts on films. We discuss under which viscous and viscoelastic conditions the criterion continues to govern the unsteady rim thickness.
Universal Rim Thickness in Unsteady Sheet Fragmentation
NASA Astrophysics Data System (ADS)
Wang, Y.; Dandekar, R.; Bustos, N.; Poulain, S.; Bourouiba, L.
2018-05-01
Unsteady fragmentation of a fluid bulk into droplets is important for epidemiology as it governs the transport of pathogens from sneezes and coughs, or from contaminated crops in agriculture. It is also ubiquitous in industrial processes such as paint, coating, and combustion. Unsteady fragmentation is distinct from steady fragmentation on which most theoretical efforts have been focused thus far. We address this gap by studying a canonical unsteady fragmentation process: the breakup from a drop impact on a finite surface where the drop fluid is transferred to a free expanding sheet of time-varying properties and bounded by a rim of time-varying thickness. The continuous rim destabilization selects the final spray droplets, yet this process remains poorly understood. We combine theory with advanced image analysis to study the unsteady rim destabilization. We show that, at all times, the rim thickness is governed by a local instantaneous Bond number equal to unity, defined with the instantaneous, local, unsteady rim acceleration. This criterion is found to be robust and universal for a family of unsteady inviscid fluid sheet fragmentation phenomena, from impacts of drops on various surface geometries to impacts on films. We discuss under which viscous and viscoelastic conditions the criterion continues to govern the unsteady rim thickness.
Using SAR satellite data time series for regional glacier mapping
NASA Astrophysics Data System (ADS)
Winsvold, Solveig H.; Kääb, Andreas; Nuth, Christopher; Andreassen, Liss M.; van Pelt, Ward J. J.; Schellenberger, Thomas
2018-03-01
With dense SAR satellite data time series it is possible to map surface and subsurface glacier properties that vary in time. Using Sentinel-1A and RADARSAT-2 backscatter time series images over mainland Norway and Svalbard, we outline how to map glaciers using descriptive methods. We present five application scenarios. The first shows potential for tracking transient snow lines with SAR backscatter time series, which correlate with both optical satellite images (Sentinel-2A and Landsat 8) and equilibrium line altitudes derived from in situ surface mass balance data. In the second application scenario, time series representation of glacier facies corresponding to SAR glacier zones shows potential for a more accurate delineation of the zones and of how they change in time. The third application scenario investigates the firn evolution using dense SAR backscatter time series together with a coupled energy balance and multilayer firn model. We find strong correlation between backscatter signals and both the modeled firn air content and the modeled wetness in the firn. In the fourth application scenario, we highlight how winter rain events can be detected in SAR time series, revealing important information about the areal extent of internal accumulation. In the last application scenario, averaged summer SAR images were found to have potential in assisting the process of mapping glacier outlines, especially in the presence of seasonal snow. Altogether we present examples of how to map glaciers and to further understand glaciological processes using the existing and future massive amount of multi-sensor time series data.
Ultrafast chirped optical waveform recording using referenced heterodyning and a time microscope
Bennett, Corey Vincent
2010-06-15
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Ultrafast chirped optical waveform recorder using referenced heterodyning and a time microscope
Bennett, Corey Vincent [Livermore, CA
2011-11-22
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Fast Time-Varying Volume Rendering Using Time-Space Partition (TSP) Tree
NASA Technical Reports Server (NTRS)
Shen, Han-Wei; Chiang, Ling-Jen; Ma, Kwan-Liu
1999-01-01
We present a new algorithm for rapid rendering of time-varying volumes. A new hierarchical data structure capable of capturing both temporal and spatial coherence is proposed. Conventional hierarchical data structures such as octrees are effective in characterizing the homogeneity of the field values existing in the spatial domain. However, when treating time merely as another dimension of a time-varying field, difficulties frequently arise due to the discrepancy between the field's spatial and temporal resolutions. In addition, treating spatial and temporal dimensions equally often prevents the possibility of detecting the coherence that is unique to the temporal domain. Using the proposed data structure, our algorithm meets the following goals. First, both spatial and temporal coherence are identified and exploited to accelerate the rendering process. Second, our algorithm allows the user to supply the desired error tolerances at run time for the purpose of an image-quality/rendering-speed trade-off. Third, the amount of data that must be loaded into main memory is reduced, and thus the I/O overhead is minimized. This low I/O overhead makes our algorithm suitable for out-of-core applications.
Leiva-Valenzuela, Gabriel A; Quilaqueo, Marcela; Lagos, Daniela; Estay, Danilo; Pedreschi, Franco
2018-04-01
The aim of this research was to determine the effect of composition (dietary fiber = DF, fat = F, and gluten = G) and baking time on the target microstructural parameters observed in images of potato and wheat starch biscuits. Microstructures were studied using a Scanning Electron Microscope (SEM). Non-enzymatic browning (NEB) was assessed using color image analysis. Texture and moisture analyses were performed to gain a better understanding of the baking process. Analysis of the images revealed that the starch granules retained their native form at the end of baking, suggesting their incomplete gelatinization. Granule size was similar at several different baking times, with an average equivalent diameter of 9 and 27 µm for wheat and potato starch, respectively. However, samples with different levels of DF and G increased in circularity during baking by more than 30%, with hardness also increasing. NEB developed during baking, with the maximum increase observed between 13 and 19 min. This was reflected in decreased luminosity (L*) values due to a decrease in moisture levels. After 19 min, luminosity did not vary significantly. The ingredients that are used, as well as their quantities, can affect sample L* values. Therefore, choosing the correct ingredients and quantities can lead to different microstructures in the biscuits, with varying amounts of NEB products.
A GPU-Accelerated Approach for Feature Tracking in Time-Varying Imagery Datasets.
Peng, Chao; Sahani, Sandip; Rushing, John
2017-10-01
We propose a novel parallel connected component labeling (CCL) algorithm along with efficient out-of-core data management to detect and track feature regions of large time-varying imagery datasets. Our approach contributes to the big data field with parallel algorithms tailored for GPU architectures. We remove the data dependency between frames and achieve pixel-level parallelism. Due to the large size, the entire dataset cannot fit into cached memory. Frames have to be streamed through the memory hierarchy (disk to CPU main memory and then to GPU memory), partitioned, and processed as batches, where each batch is small enough to fit into the GPU. To reconnect the feature regions that are separated due to data partitioning, we present a novel batch merging algorithm to extract the region connection information across multiple batches in a parallel fashion. The information is organized in a memory-efficient structure and supports fast indexing on the GPU. Our experiment uses a commodity workstation equipped with a single GPU. The results show that our approach can efficiently process a weather dataset composed of terabytes of time-varying radar images. The advantages of our approach are demonstrated by comparing to the performance of an efficient CPU cluster implementation which is being used by the weather scientists.
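A sequential reference version of connected component labeling illustrates what the paper parallelizes on the GPU. The BFS flood fill and 4-connectivity below are illustrative choices; the paper's algorithm is pixel-parallel, batched across frames, and merges regions across partition boundaries.

```python
import numpy as np
from collections import deque

def label_regions(mask):
    """Sequential 4-connected component labeling of a boolean mask using a
    simple BFS flood fill. Returns a label image (0 = background) and the
    number of regions found."""
    labels = np.zeros(mask.shape, dtype=np.int32)
    current = 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1
                labels[sy, sx] = current
                queue = deque([(sy, sx)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and labels[ny, nx] == 0:
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current
```

The data dependency visible here (each label spreads from its seed pixel) is exactly what the paper's pixel-level parallel formulation removes, and the cross-batch merge step plays the role of stitching labels that this sequential version gets for free.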
Panorama of acute diarrhoeal diseases in Mexico.
Cifuentes, E; Hernández, J E; Venczel, L; Hurtado, M
1999-09-01
We examined the recent panorama of ADD-related deaths in Mexico in an effort to assess the overall impact of control measures that may vary in space and time. We pay particular attention to mortality rates recorded between 1985 and 1995, that is, before and after the cholera emergency. The aim is to focus on the social groups at risk, using time series data represented in the form of images and produced by a geographic information system (GIS). We show the potential of such methods to define populations at risk and support the decision process.
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuations can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and a 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectra of the sub-blocks are studied, and an obvious difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on the 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low missing rate.
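The block-wise 2-D STFT amounts to a windowed 2-D FFT per sub-block. The sketch below uses a Hanning window and summarizes each block by its high-frequency energy fraction; the block size and this scalar statistic are illustrative stand-ins for the paper's per-frequency-point statistical model.

```python
import numpy as np

def block_stft_energy(img, block=32):
    """Apply a windowed 2-D FFT (a 2-D STFT) to each non-overlapping
    sub-block of a grayscale image and return, per block, the fraction of
    spectral magnitude outside a small neighbourhood of DC. Smooth sea
    texture concentrates near DC; ships and clutter spread energy wider."""
    win = np.hanning(block)
    window2d = np.outer(win, win)
    h, w = img.shape
    feats = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            tile = img[y:y+block, x:x+block].astype(np.float64) * window2d
            spec = np.abs(np.fft.fftshift(np.fft.fft2(tile)))
            c = block // 2
            low = spec[c-2:c+3, c-2:c+3].sum()  # 5x5 neighbourhood around DC
            feats[(y, x)] = 1.0 - low / max(spec.sum(), 1e-12)
    return feats
```

Thresholding such a per-block feature against a model fitted to known sea blocks is the general shape of the detection step the abstract describes.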
Ymeti, Irena; van der Werff, Harald; Shrestha, Dhruba Pikha; Jetten, Victor G.; Lievens, Caroline; van der Meer, Freek
2017-01-01
Remote sensing has shown its potential to assess soil properties and is a fast and non-destructive method for monitoring soil surface changes. In this paper, we monitor soil aggregate breakdown under natural conditions. From November 2014 to February 2015, images and weather data were collected on a daily basis from five soils susceptible to detachment (Silty Loam with various organic matter content, Loam and Sandy Loam). Three techniques that vary in image processing complexity and user interaction were tested for their ability to monitor aggregate breakdown. Because soil surface roughness casts shadows, the blue/red band ratio is used to observe soil aggregate changes. For the high-spatial-resolution images, image texture entropy, which reflects the process of soil aggregate breakdown, is used. In addition, the Huang thresholding technique, which allows estimation of the image area occupied by soil aggregates, is performed. Our results show that all three techniques indicate soil aggregate breakdown over time. The shadow ratio shows a gradual change over time with no details related to weather conditions. Both the entropy and the Huang thresholding technique show variations of soil aggregate breakdown responding to weather conditions. Using data obtained with a regular camera, we found that freezing–thawing cycles are the cause of soil aggregate breakdown. PMID:28556803
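A minimal version of the texture-entropy measure used to track aggregate breakdown might look like the following; computing the Shannon entropy of the gray-level histogram is an assumption, since the abstract does not state exactly how the entropy is derived:

```python
import numpy as np

def image_entropy(gray, bins=256):
    """Shannon entropy (bits) of an image's gray-level histogram.
    A sketch of a texture-entropy measure; the study may instead
    compute entropy over local or co-occurrence statistics."""
    hist, _ = np.histogram(gray, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0*log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```

A perfectly uniform surface yields zero entropy; as aggregates break down and the gray-level distribution spreads, the entropy rises.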
GEOMETRIC PROCESSING OF DIGITAL IMAGES OF THE PLANETS.
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformations of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases.
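The forward Sinusoidal Equal-Area projection the pipeline targets is simple enough to state directly; this is the standard textbook form on a spherical body, not the USGS software itself:

```python
import math

def sinusoidal_forward(lat_deg, lon_deg, radius=1.0, lon0_deg=0.0):
    """Forward Sinusoidal Equal-Area map projection on a sphere:
    x = R * (lon - lon0) * cos(lat), y = R * lat (angles in radians).
    The planetary radius and central meridian are parameters."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg - lon0_deg)
    return radius * lon * math.cos(lat), radius * lat
```

Because cos(lat) varies slowly where distortion is low, the projection of most pixels can indeed be interpolated from a sparse set of exactly computed points, which is the adaptive strategy the abstract describes.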
Defocus and magnification dependent variation of TEM image astigmatism.
Yan, Rui; Li, Kunpeng; Jiang, Wen
2018-01-10
Daily alignment of the microscope is a prerequisite to reaching optimal lens conditions for high resolution imaging in cryo-EM. In this study, we have investigated how image astigmatism varies with the imaging conditions (e.g. defocus, magnification). We have found that the large change of defocus/magnification between visual correction of astigmatism and subsequent data collection tasks, or during data collection, will inevitably result in undesirable astigmatism in the final images. The dependence of astigmatism on the imaging conditions varies significantly from time to time, so that it cannot be reliably compensated by pre-calibration of the microscope. Based on these findings, we recommend that the same magnification and the median defocus of the intended defocus range for final data collection are used in the objective lens astigmatism correction task during microscope alignment and in the focus mode of the iterative low-dose imaging. It is also desirable to develop a fast, accurate method that can perform dynamic correction of the astigmatism for different intended defocuses during automated imaging. Our findings also suggest that the slope of astigmatism changes caused by varying defocuses can be used as a convenient measurement of objective lens rotation symmetry and potentially an acceptance test of new electron microscopes.
FISH Finder: a high-throughput tool for analyzing FISH images
Shirley, James W.; Ty, Sereyvathana; Takebayashi, Shin-ichiro; Liu, Xiuwen; Gilbert, David M.
2011-01-01
Motivation: Fluorescence in situ hybridization (FISH) is used to study the organization and the positioning of specific DNA sequences within the cell nucleus. Analyzing the data from FISH images is a tedious process that invokes an element of subjectivity. Automated FISH image analysis offers savings in time as well as the benefit of objective data analysis. While several FISH image analysis software tools have been developed, they often use a threshold-based segmentation algorithm for nucleus segmentation. As fluorescence signal intensities can vary significantly from experiment to experiment, from cell to cell, and within a cell, threshold-based segmentation is inflexible and often insufficient for automatic image analysis, leading to additional manual segmentation and potential subjective bias. To overcome these problems, we developed a graphical software tool called FISH Finder to automatically analyze FISH images that vary significantly. By posing nucleus segmentation as a classification problem, a compound Bayesian classifier is employed so that contextual information is utilized, resulting in reliable classification and boundary extraction. This makes it possible to analyze FISH images efficiently and objectively without adjustment of input parameters. Additionally, FISH Finder was designed to analyze the distances between differentially stained FISH probes. Availability: FISH Finder is a standalone MATLAB application and platform-independent software. The program is freely available from: http://code.google.com/p/fishfinder/downloads/list Contact: gilbert@bio.fsu.edu PMID:21310746
Framework for Processing Videos in the Presence of Spatially Varying Motion Blur
2014-04-18
[Only fragments of this report survive extraction. The recoverable content addresses the related problems of image restoration, registration, dehazing, and superresolution, all in the presence of blurring; notes that real-time operation would be especially valuable for aerial surveillance applications; and cites "A unified approach to superresolution and multichannel blind deconvolution," Trans. Img. Proc., vol. 16, no. 9, pp. 2322–2332, Sept. 2007.]
Henderson, Rory; Day-Lewis, Frederick D.; Abarca, Elena; Harvey, Charles F.; Karam, Hanan N.; Liu, Lanbo; Lane, John W.
2010-01-01
Electrical resistivity imaging has been used in coastal settings to characterize fresh submarine groundwater discharge and the position of the freshwater/salt-water interface because of the relation of bulk electrical conductivity to pore-fluid conductivity, which in turn is a function of salinity. Interpretation of tomograms for hydrologic processes is complicated by inversion artifacts, uncertainty associated with survey geometry limitations, measurement errors, and choice of regularization method. Variation of seawater over tidal cycles poses unique challenges for inversion. The capabilities and limitations of resistivity imaging are presented for characterizing the distribution of freshwater and saltwater beneath a beach. The experimental results provide new insight into fresh submarine groundwater discharge at Waquoit Bay National Estuarine Research Reserve, East Falmouth, Massachusetts (USA). Tomograms from the experimental data indicate that fresh submarine groundwater discharge may shut down at high tide, whereas temperature data indicate that the discharge continues throughout the tidal cycle. Sensitivity analysis and synthetic modeling provide insight into resolving power in the presence of a time-varying saline water layer. In general, vertical electrodes and cross-hole measurements improve the inversion results regardless of the tidal level, whereas the resolution of surface arrays is more sensitive to time-varying saline water layer.
Magnusson, P; Bäck, S A; Olsson, L E
1999-11-01
MR image nonuniformity can vary significantly with the spin-echo pulse sequence repetition time. When MR images with different nonuniformity shapes are used in a T1 calculation, the resulting T1 image becomes nonuniform. As shown in this work, the uniformity TR-dependence of the spin-echo pulse sequence is a critical property for T1 measurements in general and for ferrous sulfate dosimeter gel (FeGel) applications in particular. The purpose was to study the characteristics of the MR image plane nonuniformity in FeGel evaluation. This included studies of the possibility of decreasing nonuniformities by selecting uniformity-optimized repetition times, of the transmitted and received RF fields, and of the effectiveness of two correction methods, background subtraction and quotient correction. A pronounced variation of MR image nonuniformity with repetition and T1 relaxation time was observed, and was found to originate from nonuniform RF transmission in combination with the inherent differences in T1 relaxation for different repetition times. Neither the T1 calculation itself, the uniformity-optimized repetition times, nor any of the correction methods studied could sufficiently correct the nonuniformities observed in the T1 images. The nonuniformities were found to vary considerably less with inversion time for the inversion-recovery pulse sequence than with repetition time for the spin-echo pulse sequence, resulting in considerably lower T1 image nonuniformity levels.
Imaging through turbulence using a plenoptic sensor
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.
2015-09-01
Atmospheric turbulence can significantly affect imaging through paths near the ground. Atmospheric turbulence is generally treated as a time-varying inhomogeneity of the refractive index of the air, which disrupts the propagation of optical signals from the object to the viewer. Under deep or strong turbulence, the object is hard to recognize through direct imaging, and conventional imaging methods cannot handle these problems efficiently: the required time for lucky imaging increases significantly, and image processing approaches require much more complex, iterative de-blurring algorithms. We propose an alternative approach using a plenoptic sensor to resample and analyze the image distortions. The plenoptic sensor uses a shared objective lens and a microlens array (MLA) to form a mini Keplerian telescope array. Therefore, the image obtained by a conventional method is separated into an array of images that contain multiple copies of the object's image with less correlated turbulence disturbances. A high-dimensional lucky imaging algorithm can then be performed on the video collected by the plenoptic sensor. The algorithm selects the most stable pixels from the various image cells and reconstructs the object's image as if only a weak turbulence effect were present. Then, by comparing the reconstructed image with the recorded images in each MLA cell, the difference can be regarded as the turbulence effect. As a result, retrieval of the object's image and extraction of the turbulence effect can be performed simultaneously.
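For contrast with the plenoptic approach, classical frame-selection lucky imaging can be sketched as below; the Laplacian-variance sharpness score and the keep fraction are our assumptions, not details from the paper:

```python
import numpy as np

def lucky_average(frames, keep_fraction=0.1):
    """Rank frames by a simple sharpness score (variance of a
    finite-difference Laplacian) and average the sharpest ones.
    A sketch of classical whole-frame lucky imaging, not the
    per-cell, per-pixel selection the plenoptic method performs."""
    def sharpness(f):
        lap = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f)
        return lap.var()
    scores = np.array([sharpness(f) for f in frames])
    k = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[-k:]          # indices of sharpest frames
    return np.mean([frames[i] for i in best], axis=0)
```

The plenoptic sensor's advantage is that this selection can run per image cell rather than per whole frame, so fewer frames are wasted in strong turbulence.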
Pécot, Thierry; Bouthemy, Patrick; Boulanger, Jérôme; Chessel, Anatole; Bardin, Sabine; Salamero, Jean; Kervrann, Charles
2015-02-01
Image analysis applied to fluorescence live cell microscopy has become a key tool in molecular biology, since it enables characterization of biological processes in space and time at the subcellular level. In fluorescence microscopy imaging, the moving tagged structures of interest, such as vesicles, appear as bright spots over a static or nonstatic background. In this paper, we consider the problem of vesicle segmentation and time-varying background estimation at the cellular scale. The main idea is to formulate the joint segmentation-estimation problem in the general conditional random field framework. Segmentation of vesicles and background estimation are then performed alternately by energy minimization using a min-cut/max-flow algorithm. The proposed approach relies on a detection measure computed from intensity contrasts between neighboring blocks in fluorescence microscopy images and permits analysis of either 2D + time or 3D + time data. We demonstrate the performance of the so-called C-CRAFT through an experimental comparison with state-of-the-art methods in fluorescence video-microscopy. We also use this method to characterize the spatial and temporal distribution of Rab6 transport carriers at the cell periphery for two different specific adhesion geometries.
Direction Dependent Effects In Widefield Wideband Full Stokes Radio Imaging
NASA Astrophysics Data System (ADS)
Jagannathan, Preshanth; Bhatnagar, Sanjay; Rau, Urvashi; Taylor, Russ
2015-01-01
Synthesis imaging in radio astronomy is affected by instrumental and atmospheric effects which introduce direction-dependent gains. The antenna power pattern varies as a function of both time and frequency, and its broadband, time-varying nature, when left uncorrected, leads to gross errors in full-Stokes imaging and flux estimation. In this poster we explore the errors that arise in image deconvolution when the time and frequency dependence of the antenna power pattern is not accounted for. Simulations were conducted with the wideband full-Stokes power pattern of the Very Large Array (VLA) antennas to demonstrate the level of errors arising from direction-dependent gains. Our estimate is that these errors will be significant in wide-band full-polarization mosaic imaging as well, and algorithms to correct them will be crucial for many upcoming large-area surveys (e.g., VLASS).
Application of Structure-from-Motion photogrammetry in laboratory flumes
NASA Astrophysics Data System (ADS)
Morgan, Jacob A.; Brogan, Daniel J.; Nelson, Peter A.
2017-01-01
Structure-from-Motion (SfM) photogrammetry has become widely used for topographic data collection in field and laboratory studies. However, the relative performance of SfM against other methods of topographic measurement in a laboratory flume environment has not been systematically evaluated, and there is a general lack of guidelines for SfM application in flume settings. As the use of SfM in laboratory flume settings becomes more widespread, it is increasingly critical to develop an understanding of how to acquire and process SfM data for a given flume size and sediment characteristics. In this study, we: (1) compare the resolution and accuracy of SfM topographic measurements to terrestrial laser scanning (TLS) measurements in laboratory flumes of varying physical dimensions containing sediments of varying grain sizes; (2) explore the effects of different image acquisition protocols and data processing methods on the resolution and accuracy of topographic data derived from SfM techniques; and (3) provide general guidance for image acquisition and processing for SfM applications in laboratory flumes. To investigate the effects of flume size, sediment size, and photo overlap on the density and accuracy of SfM data, we collected topographic data using both TLS and SfM in five flumes with widths ranging from 0.22 to 6.71 m, lengths ranging from 9.14 to 30.48 m, and median sediment sizes ranging from 0.2 to 31 mm. Acquisition time, image overlap, point density, elevation data, and computed roughness parameters were compared to evaluate the performance of SfM against TLS. We also collected images of a pan of gravel where we varied the distance and angle between the camera and sediment in order to explore how photo acquisition affects the ability to capture grain-scale microtopographic features in SfM-derived point clouds. A variety of image combinations and SfM software package settings were also investigated to determine optimal processing techniques. 
Results from this study suggest that SfM provides topographic data of accuracy similar to TLS, at higher resolution and lower cost. We found that about 100 pixels per grain are required to resolve grain-scale topography. We suggest protocols for image acquisition and SfM software settings to achieve the best results when using SfM in laboratory settings. In general, convergent imagery taken from a higher angle, with at least several overlapping images for each desired point in the flume, will result in an acceptable point cloud.
NASA Astrophysics Data System (ADS)
Mitra, Debasis; Boutchko, Rostyslav; Ray, Judhajeet; Nilsen-Hamilton, Marit
2015-03-01
In this work we present a time-lapsed confocal microscopy image analysis technique for an automated gene expression study of multiple single living cells. Fluorescence Resonance Energy Transfer (FRET) is a technology by which molecule-to-molecule interactions are visualized. We analyzed a dynamic series of ~102 images obtained using confocal microscopy of fluorescence in yeast cells containing RNA reporters that give a FRET signal when the gene promoter is activated. For each time frame, separate images are available for three spectral channels and the integrated intensity snapshot of the system. A large number of time-lapsed frames must be analyzed to identify each cell individually across time and space, as it moves in and out of the focal plane of the microscope, which makes this a difficult image processing problem. We propose an algorithm based on a scale-space technique that solves the problem satisfactorily and leaves multiple directions for further improvement. The ability to rapidly measure changes in gene expression simultaneously in many cells in a population will open the opportunity for real-time studies of the heterogeneity of genetic response in a living cell population and of the interactions between cells that occur in a mixed population, such as those found in the organs and tissues of multicellular organisms.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
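The core idea, interpolation uncertainty from a Gaussian-process posterior, can be illustrated in one dimension; this generic RBF-kernel sketch is not the paper's registration model, and the kernel and noise settings are our assumptions:

```python
import numpy as np

def gp_posterior(x_train, y_train, x_query, length_scale=1.0, noise=1e-6):
    """Gaussian-process posterior mean and variance with an RBF kernel,
    illustrating how interpolation uncertainty grows away from the
    sample grid (a generic 1-D sketch, not the paper's full model)."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return np.exp(-0.5 * (d / length_scale) ** 2)
    K = k(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = k(x_query, x_train)
    Kss = k(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)
```

At a sample location the posterior variance collapses toward the noise floor, while between samples it grows; this spatially varying variance is exactly the interpolation uncertainty the proposed similarity measure marginalizes over.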
NASA Technical Reports Server (NTRS)
Mcclain, Charles R.; Ishizaka, Joji; Hofmann, Eileen E.
1990-01-01
Five coastal-zone-color-scanner images from the southeastern U.S. continental shelf are combined with concurrent moored current meter measurements to assess the processes controlling the variability in chlorophyll concentration and distribution in this region. An equation governing the space and time distribution of a nonconservative quantity such as chlorophyll is used in the calculations. The terms of the equation, estimated from observations, show that advective, diffusive, and local processes contribute to the plankton distributions and vary with time and location. The results from this calculation are compared with similar results obtained using a numerical physical-biological model with circulation fields derived from an optimal interpolation of the current meter observations and it is concluded that the two approaches produce different estimates of the processes controlling phytoplankton variability.
Classification of pollen species using autofluorescence image analysis.
Mitsumoto, Kotaro; Yabusaki, Katsumi; Aoyagi, Hideki
2009-01-01
A new method to classify pollen species was developed by monitoring autofluorescence images of pollen grains. The pollens of nine species were selected, and their autofluorescence images were captured by a microscope equipped with a digital camera. The pollen size and the ratio of the blue to red pollen autofluorescence spectra (the B/R ratio) were calculated by image processing. The B/R ratios and pollen size varied among the species. Furthermore, the scatter-plot of pollen size versus the B/R ratio showed that pollen could be classified to the species level using both parameters. The pollen size and B/R ratio were confirmed by means of particle flow image analysis and the fluorescence spectra, respectively. These results suggest that a flow system capable of measuring both scattered light and the autofluorescence of particles could classify and count pollen grains in real time.
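The B/R ratio itself reduces to a masked channel ratio; this sketch assumes an RGB array with red in channel 0 and blue in channel 2 and a simple nonzero-red mask, whereas the study computes the ratio from autofluorescence spectra of each grain:

```python
import numpy as np

def blue_red_ratio(rgb, mask=None):
    """Mean blue/red intensity ratio over a (masked) fluorescence
    image, rgb shaped (H, W, 3). Channel layout and background
    masking are assumptions of this sketch."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    if mask is None:
        mask = r > 0          # crude foreground mask: nonzero red
    return float(b[mask].sum() / r[mask].sum())
```

Plotting this ratio against measured grain size reproduces the kind of two-parameter scatter plot the study uses to separate species.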
Optical diagnostics of turbulent mixing in explosively-driven shock tube
NASA Astrophysics Data System (ADS)
Anderson, James; Hargather, Michael
2016-11-01
Explosively-driven shock tube experiments were performed to investigate the turbulent mixing of explosive product gases and ambient air. A small detonator initiated Al / I2O5 thermite, which produced a shock wave and expanding product gases. Schlieren and imaging spectroscopy were applied simultaneously along a common optical path to identify correlations between turbulent structures and spatially-resolved absorbance. The schlieren imaging identifies flow features including shock waves and turbulent structures while the imaging spectroscopy identifies regions of iodine gas presence in the product gases. Pressure transducers located before and after the optical diagnostic section measure time-resolved pressure. Shock speed is measured from tracking the leading edge of the shockwave in the schlieren images and from the pressure transducers. The turbulent mixing characteristics were determined using digital image processing. Results show changes in shock speed, product gas propagation, and species concentrations for varied explosive charge mass. Funded by DTRA Grant HDTRA1-14-1-0070.
Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.
Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik
2011-01-01
Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.
Controlling cavitation-based image contrast in focused ultrasound histotripsy surgery.
Allen, Steven P; Hall, Timothy L; Cain, Charles A; Hernandez-Garcia, Luis
2015-01-01
To develop MRI feedback for cavitation-based, focused ultrasound, tissue erosion surgery (histotripsy), we investigate image contrast generated by transient cavitation events. Changes in GRE image intensity are observed while balanced pairs of field gradients are varied in the presence of an acoustically driven cavitation event. The amplitude of the acoustic pulse and the timing between a cavitation event and the start of these gradient waveforms are also varied. The magnitudes and phases of the cavitation site are compared with those of control images. An echo-planar sequence is used to evaluate histotripsy lesions in ex vivo tissue. Cavitation events in water cause localized attenuation when acoustic pulses exceed a pressure threshold. Attenuation increases with increasing gradient amplitude and gradient lobe separation times and is isotropic with gradient direction. This attenuation also depends upon the relative timing between the cavitation event and the start of the balanced gradients. These factors can be used to control the appearance of attenuation while imaging ex vivo tissue. By controlling the timing between cavitation events and the imaging gradients, MR images can be made alternately sensitive or insensitive to cavitation. During therapy, these images can be used to isolate contrast generated by cavitation. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Hirayama, Ryuji; Shiraki, Atsushi; Nakayama, Hirotaka; Kakue, Takashi; Shimobaba, Tomoyoshi; Ito, Tomoyoshi
2017-07-01
We designed and developed a control circuit for a three-dimensional (3-D) light-emitting diode (LED) array to be used in volumetric displays exhibiting full-color dynamic 3-D images. The circuit was implemented on a field-programmable gate array; therefore, pulse-width modulation, which requires high-speed processing, could be operated in real time. We experimentally evaluated the developed system by measuring the luminance of an LED with varying input and confirmed that the system works appropriately. In addition, we demonstrated that the volumetric display exhibits different full-color dynamic two-dimensional images in two orthogonal directions. Each of the exhibited images could be obtained only from the prescribed viewpoint. Such directional characteristics of the system are beneficial for applications, including digital signage, security systems, art, and amusement.
Dynamic image reconstruction: MR movies from motion ghosts.
Xiang, Q S; Henkelman, R M
1992-01-01
It has been previously shown that an image with motion ghost artifacts can be decomposed into a ghost mask superimposed over a ghost-free image. The present study demonstrates that the ghost components carry useful dynamic information and should not be discarded. Specifically, ghosts of different orders indicate the intensity and phase of the corresponding harmonics contained in the quasi-periodically varying spin-density distribution. A summation of the ghosts weighted by appropriate temporal phase factors can give a time-dependent dynamic image that is a movie of the object motion. This dynamic image reconstruction technique does not necessarily require monitoring of the motion and thus is easy to implement and operate. It also has a shorter imaging time than point-by-point imaging of temporal variation, because the periodic motion is more efficiently sampled with a limited number of harmonics recorded in the motion ghosts. This technique was tested in both moving phantoms and volunteers. It is believed to be useful for dynamic imaging of time-varying anatomic structures, such as in the cardiovascular system.
Subcellular real-time in vivo imaging of intralymphatic and intravascular cancer-cell trafficking
NASA Astrophysics Data System (ADS)
McElroy, M.; Hayashi, K.; Kaushal, S.; Bouvet, M.; Hoffman, Robert M.
2008-02-01
With the use of fluorescent cells labeled with green fluorescent protein (GFP) in the nucleus and red fluorescent protein (RFP) in the cytoplasm and a highly sensitive small animal imaging system with both macro-optics and micro-optics, we have developed subcellular real-time imaging of cancer cell trafficking in live mice. Dual-color cancer cells were injected by a vascular route in an abdominal skin flap in nude mice. The mice were imaged with an Olympus OV100 small animal imaging system with a sensitive CCD camera and four objective lenses, parcentered and parfocal, enabling imaging from macrocellular to subcellular. We observed the nuclear and cytoplasmic behavior of cancer cells in real time in blood vessels as they moved by various means or adhered to the vessel surface in the abdominal skin flap. During extravasation, real-time dual-color imaging showed that cytoplasmic processes of the cancer cells exited the vessels first, with nuclei following along the cytoplasmic projections. Both cytoplasm and nuclei underwent deformation during extravasation. Different cancer cell lines seemed to strongly vary in their ability to extravasate. We have also developed real-time imaging of cancer cell trafficking in lymphatic vessels. Cancer cells labeled with GFP and/or RFP were injected into the inguinal lymph node of nude mice. The labeled cancer cells trafficked through lymphatic vessels where they were imaged via a skin flap in real-time at the cellular level until they entered the axillary lymph node. The bright dual-color fluorescence of the cancer cells and the real-time microscopic imaging capability of the Olympus OV100 enabled imaging the trafficking cancer cells in both blood vessels and lymphatics. With the dual-color cancer cells and the highly sensitive imaging system described here, the subcellular dynamics of cancer metastasis can now be observed in live mice in real time.
Ashton, Gage P; Harding, Lindsay P; Parkes, Gareth M B
2017-12-19
This paper describes a new analytical instrument that combines a precisely temperature-controlled hot-stage with digital microscopy and Direct Analysis in Real Time-mass spectrometry (DART-MS) detection. The novelty of the instrument lies in its ability to monitor processes as a function of temperature through the simultaneous recording of images, quantitative color changes, and mass spectra. The capability of the instrument was demonstrated through successful application to four very varied systems including profiling an organic reaction, decomposition of silicone polymers, and the desorption of rhodamine B from an alumina surface. The multidimensional, real-time analytical data provided by this instrument allow for a much greater insight into thermal processes than could be achieved previously.
Video-guided calibration of an augmented reality mobile C-arm.
Chen, Xin; Naik, Hemal; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal
2014-11-01
The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. This "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time-consuming and radiation-intensive. An improved process was developed and tested for C-arm calibration. Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users having varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.
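The plane-induced homography between X-ray and video images that drives the guidance can be estimated from four or more point correspondences with the direct linear transform; this is the textbook DLT, not the authors' implementation, and it omits point normalization and robust estimation:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (N >= 4 point
    pairs, arrays of shape (N, 2)) with the direct linear transform.
    A textbook sketch of plane-to-plane homography estimation."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of the stacked constraints,
    # i.e. the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

Once estimated for the ring plane, such a homography lets ring positions be predicted in the video image, which is what removes most of the trial-and-error X-ray acquisitions.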
Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment
NASA Technical Reports Server (NTRS)
Schwartz, Richard J.; McCrea, Andrew C.
2009-01-01
The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.
An approach for characterising cellular polymeric foam structures using computed tomography
NASA Astrophysics Data System (ADS)
Chen, Youming; Das, Raj; Battley, Mark
2018-02-01
Global properties of foams depend on foam base materials and microstructures. Characterisation of foam microstructures is important for developing numerical foam models. In this study, the microstructures of four polymeric structural foams were imaged using a micro-CT scanner. Image processing and analysis methods were proposed to quantify the relative density, cell wall thickness and cell size of these foams from the captured CT images. Overall, the cells in these foams are fairly isotropic, and cell walls are rather straight. The measured average relative densities are in good agreement with the actual values. Relative density, cell size and cell wall thickness in these foams are found to vary through the thickness of the foam panels. Cell walls in two of these foams are found to be filled with secondary pores. In addition, it is found that the average cell wall thickness measured from 2D images is around 1.4 times that measured from 3D images, and the average cell size measured from 3D images is 1.16 times that measured from 2D images. The distributions of cell wall thickness and cell size measured from 2D images exhibit larger dispersion in comparison to those measured from 3D images.
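As a rough illustration of how two of these quantities are obtained from CT data, the sketch below computes relative density as a solid-voxel fraction and treats cell-wall thickness as run lengths of solid pixels along an image line. The global threshold and the run-length measure are simplifying assumptions; the paper's actual segmentation pipeline is more elaborate:

```python
import numpy as np

def relative_density(volume, threshold):
    """Relative density as the solid-voxel fraction of a thresholded CT volume."""
    return (np.asarray(volume) > threshold).mean()

def run_lengths(line):
    """Lengths of consecutive runs of solid pixels along one image line;
    their mean gives a crude 2D cell-wall-thickness estimate."""
    runs, n = [], 0
    for v in line:
        if v:
            n += 1
        elif n:
            runs.append(n)
            n = 0
    if n:
        runs.append(n)
    return runs
```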
Is it a bird? Is it a plane? Ultra-rapid visual categorisation of natural and artifactual objects.
VanRullen, R; Thorpe, S J
2001-01-01
Visual processing is known to be very fast in ultra-rapid categorisation tasks where the subject has to decide whether a briefly flashed image belongs to a target category or not. Human subjects can respond in under 400 ms, and event-related-potential studies have shown that the underlying processing can be done in less than 150 ms. Monkeys trained to perform the same task have proved even faster. However, most of these experiments have only been done with biologically relevant target categories such as animals or food. Here we performed the same study on human subjects, alternating between a task in which the target category was 'animal', and a task in which the target category was 'means of transport'. The natural images used in the 'means of transport' task contained clearly artificial targets as varied as cars, trucks, trains, boats, aircraft, and hot-air balloons. Nevertheless, the subjects performed almost identically in both tasks, with reaction times not significantly longer in the 'means of transport' task. These reaction times were much shorter than in any previous study on natural-image processing. We conclude that, at least for these two superordinate categories, the speed of ultra-rapid visual categorisation of natural scenes does not depend on the target category, and that this processing could rely primarily on feed-forward, automatic mechanisms.
A neuromorphic approach to satellite image understanding
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Perakakis, Manolis
2014-05-01
Remote sensing satellite imagery provides high altitude, top viewing aspects of large geographic regions, and as such the depicted features are not always easily recognizable. Nevertheless, geoscientists familiar with remote sensing data gradually gain experience and enhance their satellite image interpretation skills. The aim of this study is to devise a novel computational neuro-centered classification approach for feature extraction and image understanding. Object recognition through image processing practices is related to a series of known image/feature based attributes including size, shape, association, texture, etc. The objective of the study is to weight these attribute values towards the enhancement of feature recognition. The key cognitive experimentation concern is to define the point at which a user recognizes a feature as it varies in terms of the above mentioned attributes, and to relate that point to the corresponding attribute values. Towards this end, we have set up an experimentation methodology that utilizes cognitive data from brain signals (EEG) and eye gaze data (eye tracking) of subjects watching satellite images of varying attributes; this allows the collection of rich real-time data that will be used for designing the image classifier. Since the data are already labeled by users (using an input device), a first step is to compare the performance of various machine-learning algorithms on the collected data. In the long run, the aim of this work is to investigate the automatic classification of unlabeled images (unsupervised learning) based purely on image attributes. The outcome of this innovative process is twofold. First, in an abundance of remote sensing image datasets we may define the essential image specifications in order to collect the appropriate data for each application and improve processing and resource efficiency; for example, for a fault-extraction application at a given scale, a medium-resolution 4-band image may be more effective than costly, multispectral, very high resolution imagery. Second, we attempt to relate the experienced against the non-experienced user understanding in order to indirectly assess the possible limits of purely computational systems; in other words, to obtain the conceptual limits of computation vs. human cognition concerning feature recognition from satellite imagery. Preliminary results of this pilot study show relations between the collected data and the differentiation of image attributes, which indicates that our methodology can lead to important results.
Pulse Holography: Review Of Applications
NASA Astrophysics Data System (ADS)
Smigielski, Paul
1990-04-01
Pulse holography includes studies concerning time-varying phase objects as well as time-varying reflective objects involving the use of pulsed ruby and YAG lasers. The paper is divided into two parts. The first part concerns the direct use of 3-D images reconstructed from holograms, i.e. applications to particle size analysis, 3-D velocity measurements, 3-D cinematography, etc. The second part describes applications using holographic interferometry in the laboratory or in an industrial environment, i.e. applications to fluid mechanics, vibration analysis, non-destructive testing, etc. Recent developments, including interferometric cineholography, fiber optics, and the measurement of non-interferometric displacements, are also described. The future of holography depends to a great extent on data processing and interpretation of the information contained in holograms or holographic interferograms. Therefore, we give the state of the art in this field in Europe, illustrated with some industrial applications.
Dynamic Black-Level Correction and Artifact Flagging in the Kepler Data Pipeline
NASA Technical Reports Server (NTRS)
Clarke, B. D.; Kolodziejczak, J. J.; Caldwell, D. A.
2013-01-01
Instrument-induced artifacts in the raw Kepler pixel data include time-varying crosstalk from the fine guidance sensor (FGS) clock signals, and manifestations of drifting moiré pattern as locally correlated nonstationary noise and rolling bands in the images, which find their way into the calibrated pixel time series and ultimately into the calibrated target flux time series. Using a combination of raw science pixel data, full frame images, reverse-clocked pixel data and ancillary temperature data, the Kepler pipeline models and removes the FGS crosstalk artifacts by dynamically adjusting the black-level correction. By examining the residuals to the model fits, the pipeline detects and flags spatial regions and time intervals of strong time-varying black level (rolling bands) on a per-row, per-cadence basis. These flags are made available to downstream users of the data, since the uncorrected rolling band artifacts could complicate processing or lead to misinterpretation of instrument behavior as stellar variability. This model fitting and artifact flagging is performed within the new stand-alone pipeline module called Dynablack. We discuss the implementation of Dynablack in the Kepler data pipeline and present results regarding the improvement in calibrated pixels and the expected improvement in cotrending performance as a result of including FGS corrections in the calibration. We also discuss the effectiveness of the rolling band flagging for downstream users and illustrate with some affected light curves.
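The residual-based flagging idea can be sketched as follows. The window size, the MAD-based robust scale, and the threshold factor k are illustrative assumptions, not the pipeline's actual parameters:

```python
import numpy as np

def flag_rolling_bands(residuals, window=5, k=3.0):
    """Flag cadences whose local (windowed) RMS of model-fit residuals exceeds
    k times a robust global scale (median-absolute-deviation based)."""
    r = np.asarray(residuals, float)
    # Robust scale: MAD scaled to be consistent with a Gaussian sigma.
    scale = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12
    # Moving mean square via convolution, then RMS.
    local_ms = np.convolve(r ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(local_ms) > k * scale
```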
Video-Camera-Based Position-Measuring System
NASA Technical Reports Server (NTRS)
Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert
2005-01-01
A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying.
For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white squares to an object of interest (see Figure 2). For other situations, where circular symmetry is more desirable, circular targets also can be created. Such a target can readily be generated and modified by use of commercially available software and printed by use of a standard office printer. All three relative coordinates (x, y, and z) of each target can be determined by processing the video image of the target. Because of the unique design of corresponding image-processing filters and targets, the vision-based position-measurement system is extremely robust and tolerant of widely varying fields of view, lighting conditions, and varying background imagery.
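For a calibrated pinhole camera and a target of known physical size, depth and lateral position follow from similar triangles. The sketch below is a simplified model with hypothetical parameters (focal length in pixels f_px, principal point (cx, cy)); the system's actual image-processing filters are not described in the article:

```python
def target_3d_position(u, v, side_px, side_m, f_px, cx, cy):
    """Back-project a target centroid (u, v) in pixels to camera coordinates.

    Depth follows from the target's known physical size (similar triangles);
    all camera parameters here are hypothetical example values.
    """
    z = f_px * side_m / side_px   # apparent size vs. real size gives depth
    x = (u - cx) * z / f_px       # lateral offsets scale with depth
    y = (v - cy) * z / f_px
    return x, y, z
```

With the example values in the test, a 0.1 m target that appears 50 px wide to a camera with a 1000 px focal length sits 2 m away.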
A multiscale MDCT image-based breathing lung model with time-varying regional ventilation
Yin, Youbing; Choi, Jiwoong; Hoffman, Eric A.; Tawhai, Merryn H.; Lin, Ching-Long
2012-01-01
A novel algorithm is presented that links local structural variables (regional ventilation and deforming central airways) to global function (total lung volume) in the lung over three imaged lung volumes, to derive a breathing lung model for computational fluid dynamics simulation. The algorithm constitutes the core of an integrative, image-based computational framework for subject-specific simulation of the breathing lung. For the first time, the algorithm is applied to three multi-detector row computed tomography (MDCT) volumetric lung images of the same individual. A key technique in linking global and local variables over multiple images is an in-house mass-preserving image registration method. Throughout breathing cycles, cubic interpolation is employed to ensure C1 continuity in constructing time-varying regional ventilation at the whole lung level, flow rate fractions exiting the terminal airways, and airway deformation. The imaged exit airway flow rate fractions are derived from regional ventilation with the aid of a three-dimensional (3D) and one-dimensional (1D) coupled airway tree that connects the airways to the alveolar tissue. An in-house parallel large-eddy simulation (LES) technique is adopted to capture turbulent-transitional-laminar flows in both normal and deep breathing conditions. The results obtained by the proposed algorithm when using three lung volume images are compared with those using only one or two volume images. The three-volume-based lung model produces physiologically-consistent time-varying pressure and ventilation distribution. The one-volume-based lung model under-predicts pressure drop and yields un-physiological lobar ventilation. The two-volume-based model can account for airway deformation and non-uniform regional ventilation to some extent, but does not capture the non-linear features of the lung. PMID:23794749
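The PCA parameterization of deformation vector fields (DVFs) used above can be sketched as follows, assuming each DVF is flattened to a row vector. This is a generic PCA construction, not the authors' registration code:

```python
import numpy as np

def pca_dvf(dvfs, n_modes=2):
    """dvfs: (N, M) array of N deformation fields flattened to length M.
    Returns the mean field, the top PCA modes, and per-field coefficients."""
    mean = dvfs.mean(axis=0)
    X = dvfs - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:n_modes]          # principal deformation modes
    coeffs = X @ modes.T          # projection of each field onto the modes
    return mean, modes, coeffs

def synthesize_dvf(mean, modes, coeffs):
    """New DVF(s) from PCA coefficients: mean + sum_k c_k * mode_k."""
    return mean + coeffs @ modes
```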
CropEx Web-Based Agricultural Monitoring and Decision Support
NASA Technical Reports Server (NTRS)
Harvey, Craig; Lawhead, Joel
2011-01-01
CropEx is a Web-based agricultural Decision Support System (DSS) that monitors changes in crop health over time. It is designed to be used by a wide range of both public and private organizations, including individual producers and regional government offices with a vested interest in tracking vegetation health. The database and data management system automatically retrieve and ingest data for the area of interest. A second database stores results of the processing and supports the DSS. The processing engine allows server-side analysis of imagery with support for image sub-setting and a set of core raster operations for image classification, creation of vegetation indices, and change detection. The system includes the Web-based (CropEx) interface, data ingestion system, server-side processing engine, and a database processing engine. The Web-based interface has multi-tiered security profiles for multiple users. The interface provides the ability to identify areas of interest to specific users, user profiles, and methods of processing and data types for selected or created areas of interest. A compilation of programs is used to ingest available data into the system, classify that data, profile that data for quality, and make data available for the processing engine immediately upon the data's availability to the system (near real time). The processing engine consists of methods and algorithms used to process the data in a real-time fashion without copying, storing, or moving the raw data. The engine makes results available to the database processing engine for storage and further manipulation. The database processing engine ingests data from the image processing engine, distills those results into numerical indices, and stores each index for an area of interest. 
This process happens each time new data is ingested and processed for the area of interest, and upon subsequent database entries, the database processing engine qualifies each value for each area of interest and conducts a logical processing of results indicating when and where thresholds are exceeded. Reports are provided at regular, operator-determined intervals that include variances from thresholds and links to view raw data for verification, if necessary. The technology and method of development allow the code base to easily be modified for varied use in the real-time and near-real-time processing environments. In addition, the final product will be demonstrated as a means for rapid draft assessment of imagery.
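As a minimal sketch of the index-and-threshold workflow described above, the code below computes NDVI from hypothetical near-infrared and red bands and reports when a per-area index drops below a threshold. CropEx's actual indices and threshold logic are not specified in the abstract:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, guarded against zero denominators."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)
    return np.where(denom == 0, 0.0, (nir - red) / safe)

def exceedances(index_series, threshold):
    """Time indices at which a per-area index falls below the alert threshold."""
    return [t for t, v in enumerate(index_series) if v < threshold]
```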
NASA Astrophysics Data System (ADS)
Li, Jiqing; Huang, Jing; Li, Jianchang
2018-06-01
The time-varying design flood can make full use of the measured data, providing the reservoir with a basis for both flood control and operation scheduling. This paper adopts the peak-over-threshold method for flood sampling in unit periods, and a Poisson process with time-dependent parameters to simulate a reservoir's time-varying design flood. Considering the relationship between the model parameters and the underlying hypotheses, the paper takes the over-threshold intensity, the goodness of fit of the Poisson distribution, and the design flood parameters as the criteria for selecting the unit period and threshold of the time-varying design flood, and derives the time-varying design flood process of the Longyangxia reservoir at nine design frequencies. The time-varying design flood of the inflow is closer to the reservoir's actual inflow conditions, and can be used to adjust the operating water level in the flood season and to plan the utilization of flood water resources in the basin.
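Peak-over-threshold sampling can be sketched as below. Clustering consecutive exceedances and keeping one peak per cluster is one common declustering convention, and the rate estimate is the Poisson intensity per unit period; the flow data and threshold in the test are hypothetical:

```python
def pot_peaks(flow, threshold):
    """Peak-over-threshold sampling: cluster consecutive exceedances of the
    threshold and keep one peak (the maximum) per cluster."""
    peaks, cur = [], []
    for q in flow:
        if q > threshold:
            cur.append(q)
        elif cur:
            peaks.append(max(cur))
            cur = []
    if cur:
        peaks.append(max(cur))
    return peaks

def exceedance_rate(n_peaks, n_periods):
    """Mean number of exceedances per unit period (the Poisson intensity)."""
    return n_peaks / n_periods
```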
Exact reconstruction analysis/synthesis filter banks with time-varying filters
NASA Technical Reports Server (NTRS)
Arrowood, J. L., Jr.; Smith, M. J. T.
1993-01-01
This paper examines some of the analysis/synthesis issues associated with FIR time-varying filter banks in which the filter bank coefficients are allowed to change in response to the input signal. Several issues are identified as important for realizing performance gains from time-varying filter banks in image coding applications. These issues relate to the behavior of the filters as transitions from one set of filter banks to another occur. Lattice structure formulations for the time-varying filter bank problem are introduced and discussed in terms of their properties and transition characteristics.
Image Motion Detection And Estimation: The Modified Spatio-Temporal Gradient Scheme
NASA Astrophysics Data System (ADS)
Hsin, Cheng-Ho; Inigo, Rafael M.
1990-03-01
The detection and estimation of motion are generally involved in computing a velocity field of time-varying images. A completely new modified spatio-temporal gradient scheme to determine motion is proposed. This is derived by using gradient methods and properties of biological vision. A set of general constraints is proposed to derive motion constraint equations. The constraints are that the second directional derivatives of image intensity at an edge point in the smoothed image will be constant at times t and t+L. This scheme basically has two stages: spatio-temporal filtering, and velocity estimation. Initially, image sequences are processed by a set of oriented spatio-temporal filters which are designed using a Gaussian derivative model. The velocity is then estimated for these filtered image sequences based on the gradient approach. From a computational standpoint, this scheme offers at least three advantages over current methods. The greatest advantage of the modified spatio-temporal gradient scheme over the traditional ones is that an infinite number of motion constraint equations are derived instead of only one. Therefore, it solves the aperture problem without requiring any additional assumptions and is simply a local process. The second advantage is that because of the spatio-temporal filtering, the direct computation of image gradients (discrete derivatives) is avoided. Therefore the error in gradient measurement is reduced significantly. The third advantage is that during the processing of the motion detection and estimation algorithm, image features (edges) are produced concurrently with motion information. The reliable range of detected velocity is determined by parameters of the oriented spatio-temporal filters. 
Knowing the velocity sensitivity of a single motion detection channel, a multiple-channel mechanism for estimating image velocity, seldom addressed by other motion schemes in machine vision, can be constructed by appropriately choosing and combining different sets of parameters. By applying this mechanism, a great range of velocity can be detected. The scheme has been tested for both synthetic and real images. The results of simulations are very satisfactory.
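A minimal gradient-based velocity estimate, pooling the brightness-constancy constraint over a single aperture, can be sketched as follows. This is a generic least-squares formulation for illustration, not the paper's multi-channel Gaussian-derivative scheme:

```python
import numpy as np

def gradient_velocity(frame0, frame1):
    """Least-squares image velocity (u, v) from the brightness-constancy
    constraint Ix*u + Iy*v + It = 0, pooled over the whole frame."""
    f0 = frame0.astype(float)
    Iy, Ix = np.gradient(f0)                  # rows vary with y, columns with x
    It = frame1.astype(float) - f0            # temporal difference
    A = np.column_stack([Ix.ravel(), Iy.ravel()])
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```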
Real-time Mesoscale Visualization of Dynamic Damage and Reaction in Energetic Materials under Impact
NASA Astrophysics Data System (ADS)
Chen, Wayne; Harr, Michael; Kerschen, Nicholas; Maris, Jesus; Guo, Zherui; Parab, Niranjan; Sun, Tao; Fezzaa, Kamel; Son, Steven
Energetic materials may be subjected to impact and vibration loading. Under these dynamic loadings, local stress or strain concentrations may lead to the formation of hot spots and unintended reaction. To visualize the dynamic damage and reaction processes in polymer bonded energetic crystals under dynamic compressive loading, a high speed X-ray phase contrast imaging setup was synchronized with a Kolsky bar and a light gas gun. Controlled compressive loading was applied on PBX specimens with a single or multiple energetic crystal particles and impact-induced damage and reaction processes were captured using the high speed X-ray imaging setup. Impact velocities were systematically varied to explore the critical conditions for reaction. At lower loading rates, ultrasonic excitations were also applied to progressively damage the crystals, eventually leading to reaction. AFOSR, ONR.
Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D
2013-01-01
Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
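One standard diagnostic for the multicollinearity discussed above is the variance inflation factor (VIF), sketched below on synthetic regressors; the same idea applies to the correlated decision-variable columns of an fMRI design matrix. The data in the test are illustrative:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of design matrix X.
    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on all other columns (plus an intercept)."""
    X = np.asarray(X, float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([others, np.ones(len(y))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out
```

Values near 1 indicate an approximately orthogonal regressor; large values flag columns whose effects cannot be separated reliably.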
A technique for automatically extracting useful field of view and central field of view images.
Pandey, Anil Kumar; Sharma, Param Dev; Aheer, Deepak; Kumar, Jay Prakash; Sharma, Sanjay Kumar; Patel, Chetan; Kumar, Rakesh; Bal, Chandra Sekhar
2016-01-01
It is essential to ensure the uniform response of the single photon emission computed tomography gamma camera system before using it for clinical studies by exposing it to a uniform flood source. Vendor-specific acquisition and processing protocols provide for studying flood source images along with quantitative uniformity parameters such as integral and differential uniformity. However, a significant difficulty is that the time required to acquire a flood source image varies from 10 to 35 min, depending both on the activity of the Cobalt-57 flood source and on the counts prespecified in the vendor's protocol (usually 4000K-10,000K counts). If the acquired total counts are less than the prespecified total counts, then the vendor's uniformity processing protocol does not proceed with the computation of the quantitative uniformity parameters. In this study, we have developed and verified a technique for reading the flood source image, removing unwanted information, and automatically extracting and saving the useful field of view and central field of view images for the calculation of the uniformity parameters. This was implemented using MATLAB R2013b running on the Ubuntu operating system and was verified by subjecting it to simulated and real flood source images. The accuracy of the technique was found to be encouraging, especially in view of practical difficulties with vendor-specific protocols. It may be used as a preprocessing step while calculating uniformity parameters of the gamma camera in less time and with fewer constraints.
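For reference, the quantities involved can be sketched as follows: by the usual NEMA convention the central field of view (CFOV) is the central 75% of the useful field of view (UFOV) in each dimension, and integral uniformity is 100·(max − min)/(max + min) over the field. The pixel values below are hypothetical, and the crop assumes a rectangular UFOV already extracted from the raw image:

```python
import numpy as np

def crop_cfov(ufov, fraction=0.75):
    """Central field of view: the central `fraction` of the UFOV per dimension."""
    h, w = ufov.shape
    dh = int(round(h * (1 - fraction) / 2))
    dw = int(round(w * (1 - fraction) / 2))
    return ufov[dh:h - dh, dw:w - dw]

def integral_uniformity(fov):
    """Integral uniformity: 100 * (max - min) / (max + min) over the field."""
    mx, mn = float(fov.max()), float(fov.min())
    return 100.0 * (mx - mn) / (mx + mn)
```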
Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B
2010-06-01
To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17-0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
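The coefficient-fitting step can be illustrated with a linearized toy version in which the volume depends linearly on the PCA coefficients. The real method warps the reference image through PCA-parameterized DVFs (a nonlinear operation) and runs on a GPU, so this sketch only shows the least-squares matching idea with hypothetical operators:

```python
import numpy as np

def fit_pca_coeffs(P, v_ref, B, measured_proj):
    """Toy linearized version: volume v(c) = v_ref + B @ c, projection y = P @ v.
    Solve min_c || P (v_ref + B c) - measured ||^2 in closed form."""
    A = P @ B                       # maps PCA coefficients to projection changes
    b = measured_proj - P @ v_ref   # residual projection to explain
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```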
Implementation and optimization of ultrasound signal processing algorithms on mobile GPU
NASA Astrophysics Data System (ADS)
Kong, Woo Kyu; Lee, Wooyoul; Kim, Kyu Cheol; Yoo, Yangmo; Song, Tai-Kyong
2014-03-01
A general-purpose graphics processing unit (GPGPU) has been used for improving computing power in medical ultrasound imaging systems. Recently, mobile GPUs have become powerful enough to deal with 3D games and videos at high frame rates on Full HD or HD resolution displays. This paper proposes a method to implement ultrasound signal processing on a mobile GPU available in a high-end smartphone (Galaxy S4, Samsung Electronics, Seoul, Korea) with programmable shaders on the OpenGL ES 2.0 platform. To maximize the performance of the mobile GPU, optimization of the shader design and load sharing between the vertex and fragment shaders was performed. The beamformed data were captured from a tissue-mimicking phantom (Model 539 Multipurpose Phantom, ATS Laboratories, Inc., Bridgeport, CT, USA) by using a commercial ultrasound imaging system equipped with a research package (Ultrasonix Touch, Ultrasonix, Richmond, BC, Canada). The real-time performance was evaluated by frame rates while varying the range of signal processing blocks. The implementation of ultrasound signal processing on OpenGL ES 2.0 was verified by analyzing PSNR against a MATLAB gold standard with the same signal path. CNR was also analyzed to verify the method. From the evaluations, the proposed mobile GPU-based processing method has no significant difference from the processing using MATLAB (i.e., PSNR<52.51 dB). Comparable CNR results were obtained from both processing methods (i.e., 11.31). With the mobile GPU implementation, a frame rate of 57.6 Hz was achieved. The total execution time was 17.4 ms, which was faster than the acquisition time (i.e., 34.4 ms). These results indicate that the mobile GPU-based processing method can support real-time ultrasound B-mode processing on the smartphone.
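Two of the signal-processing blocks typically found in such a B-mode chain, envelope detection and log compression, can be sketched on the CPU as follows. The paper implements its blocks in OpenGL ES shaders; the FFT-based Hilbert envelope and the 60 dB dynamic range below are generic assumptions, not details taken from the abstract:

```python
import numpy as np

def envelope(rf):
    """Envelope of an RF line via the analytic signal (FFT-based Hilbert transform)."""
    n = len(rf)
    F = np.fft.fft(rf)
    h = np.zeros(n)
    h[0] = 1
    if n % 2 == 0:
        h[n // 2] = 1
        h[1:n // 2] = 2          # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2
    return np.abs(np.fft.ifft(F * h))

def log_compress(env, dynamic_range_db=60.0):
    """Map the envelope to [0, 255] display values over a fixed dynamic range."""
    env = np.maximum(env, 1e-12)
    db = 20.0 * np.log10(env / env.max())
    db = np.clip(db, -dynamic_range_db, 0.0)
    return (db / dynamic_range_db + 1.0) * 255.0
```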
A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.
Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H
2017-01-01
Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their practical adoption in clinical use is not straightforward due to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple-data (SIMD) computing approach that can be implemented on a graphics processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in the azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for a slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to the ones derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
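The Hankel-SVD filtering idea can be sketched as follows: embed the slow-time ensemble in a Hankel matrix, drop the largest singular components (the clutter), and reconstruct a signal by anti-diagonal averaging. The window length and the rank-1 clutter assumption are illustrative choices, and this plain-Python loop version ignores the SIMD/GPU aspects the paper is actually about:

```python
import numpy as np

def hankel_svd_filter(x, n_clutter=1):
    """Single-ensemble clutter filter: Hankel embedding, SVD truncation of the
    n_clutter largest components, then anti-diagonal averaging back to a signal."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    L = N // 2 + 1
    H = np.array([x[i:i + N - L + 1] for i in range(L)])   # L x (N-L+1) Hankel
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[:n_clutter] = 0                                      # remove clutter modes
    Hf = (U * s) @ Vt
    out = np.zeros(N, dtype=complex)
    cnt = np.zeros(N)
    for i in range(Hf.shape[0]):                           # average anti-diagonals
        for j in range(Hf.shape[1]):
            out[i + j] += Hf[i, j]
            cnt[i + j] += 1
    return out / cnt
```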
CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.
Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos
2013-12-31
Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the output as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of the classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way compared to the conventional approach.
Optimization of exposure factors for X-ray radiography non-destructive testing of pearl oyster
NASA Astrophysics Data System (ADS)
Susilo; Yulianti, I.; Addawiyah, A.; Setiawan, R.
2018-03-01
One of the processes in pearl oyster cultivation is detecting the pearl nucleus to determine whether it is still attached inside the shell or has been expelled. The common tool used to detect the pearl nucleus is an X-ray machine. However, a conventional X-ray machine has the drawback that the energy used is higher than that used in digital radiography, and the high energy makes the resulting image difficult to analyse. One of the advantages of digital radiography is that the energy can be adjusted so that the resulting image can be analysed easily. To obtain a high-quality pearl image using digital radiography, the exposure factors should be optimized. In this work, optimization was performed by varying the voltage, current, and exposure time. The radiographic images were then analysed using the contrast-to-noise ratio (CNR). From this analysis, the optimum exposure factors were determined to be a voltage of 60 kV, a current of 16 mA, and an exposure time of 0.125 s, which yield a CNR of 5.71.
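The CNR figure of merit used above can be sketched as follows. Note that several CNR definitions exist and the paper does not state which one it uses; this sketch takes a common form (ROI mean difference over background noise), and the ROI values are hypothetical:

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: ROI mean difference over background noise.
    (One common definition; other variants pool both ROI variances.)"""
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(1)
nucleus = rng.normal(120, 5, size=1000)  # hypothetical pearl-nucleus ROI pixels
shell = rng.normal(100, 5, size=1000)    # hypothetical background ROI pixels
print(cnr(nucleus, shell))               # population values imply (120-100)/5 = 4
```

Higher CNR means the nucleus is more clearly distinguishable from the surrounding shell in the radiograph.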
NASA Technical Reports Server (NTRS)
Ford, J. P.; Arvidson, R. E.
1989-01-01
The high sensitivity of imaging radars to slope at moderate to low incidence angles enhances the perception of linear topography on images. It reveals broad spatial patterns that are essential to landform mapping and interpretation. As radar responses are strongly directional, the ability to discriminate linear features on images varies with their orientation. Landforms that appear prominent on images where they are transverse to the illumination may be obscure to indistinguishable on images where they are parallel to it. Landform detection is also influenced by the spatial resolution of radar images. Seasat radar images of the Gran Desierto dune complex, Sonora, Mexico; the Appalachian Valley and Ridge Province; and accreted terranes in eastern interior Alaska were processed to simulate both Venera 15 and 16 images (1 to 3 km resolution) and the image data expected from the Magellan mission (120 to 300 m resolution). The Gran Desierto dunes are not discernible in the Venera simulation, whereas the higher-resolution Magellan simulation shows the dominant dune patterns produced by differential erosion of the rocks. The Magellan simulation also shows that fluvial processes have dominated the erosion and exposure of the folds.
Wiebe, Alex; Kersting, Anette; Suslow, Thomas
2017-06-01
Alexithymia is a multidimensional personality construct including the components difficulties identifying feelings (DIF), difficulties describing feelings (DDF), and externally oriented thinking (EOT). Different features of alexithymia are thought to reflect specific deficits in the cognitive processing and regulation of emotions. The aim of the present study was to examine for the first time patterns of deployment of attention as a function of alexithymia components in healthy persons by using eye-tracking technology. It was assumed that EOT is linked to avoidance of negative images. 99 healthy adults freely viewed pictures consisting of anxiety-related, depression-related, positive, and neutral images while gaze behavior was registered. Alexithymia was assessed by the 20-Item Toronto Alexithymia Scale. Measures of anxiety, depression, and (visual-perceptual) intelligence were also administered. A main effect of emotion condition on dwell times was observed. Viewing time was lowest for neutral images, longer for depression-related and happy images, and longest for anxiety-related images. Gender and EOT had significant effects on dwell times. EOT correlated negatively with dwell time on depression-related (but not anxiety-related) images. There were no correlations of dwell times with depression, trait anxiety, intelligence, DIF, or DDF. Alexithymia was assessed exclusively by self-report. Our results show that EOT but not DIF or DDF influences attention deployment to simultaneously presented emotional pictures. EOT may reduce attention allocation to dysphoric information. This attentional characteristic of EOT individuals might have mood protecting effects but also detrimental impacts on social relationships and coping competencies. Copyright © 2016 Elsevier Ltd. All rights reserved.
Elasticity Imaging of Polymeric Media
Sridhar, Mallika; Liu, Jie; Insana, Michael F.
2009-01-01
Viscoelastic properties of soft tissues and hydropolymers depend on the strength of molecular bonding forces connecting the polymer matrix and surrounding fluids. The basis for diagnostic imaging is that disease processes alter molecular-scale bonding in ways that vary the measurable stiffness and viscosity of the tissues. This paper reviews linear viscoelastic theory as applied to gelatin hydrogels for the purpose of formulating approaches to molecular-scale interpretation of elasticity imaging in soft biological tissues. Comparing measurements acquired under different geometries, we investigate the limitations of viscoelastic parameters acquired under various imaging conditions. Quasistatic (step-and-hold and low-frequency harmonic) stimuli applied to gels during creep and stress relaxation experiments in confined and unconfined geometries reveal continuous, bimodal distributions of respondance times. Within the linear range of responses, gelatin will behave more like a solid or fluid depending on the stimulus magnitude. Gelatin can be described statistically from a few parameters of low-order rheological models that form the basis of viscoelastic imaging. Unbiased estimates of imaging parameters are obtained only if creep data are acquired for greater than twice the highest retardance time constant and any steady-state viscous response has been eliminated. Elastic strain and retardance time images are found to provide the best combination of contrast and signal strength in gelatin. Retardance times indicate average behavior of fast (1–10 s) fluid flows and slow (50–400 s) matrix restructuring in response to the mechanical stimulus. Insofar as gelatin mimics other polymers, such as soft biological tissues, elasticity imaging can provide unique insights into complex structural and biochemical features of connective tissues affected by disease. PMID:17408331
Sensor image prediction techniques
NASA Astrophysics Data System (ADS)
Stenger, A. J.; Stone, W. R.; Berry, L.; Murray, T. J.
1981-02-01
The preparation of prediction imagery is a complex, costly, and time-consuming process. Image prediction systems that produce a detailed replica of the image area require the extensive Defense Mapping Agency data base. The purpose of this study was to analyze the use of image predictions in order to determine whether a reduced set of more compact image features contains enough information to produce acceptable navigator performance. A job analysis of the navigator's mission tasks was performed. It showed that the cognitive and perceptual tasks he performs during navigation are identical to those performed for the targeting mission function. In addition, the results of the analysis of his performance when using a particular sensor can be extended to the analysis of his mission tasks using any sensor. An experimental approach was used to determine the relationship between navigator performance and the type and amount of information in the prediction image. A number of subjects were given image predictions containing varying levels of scene detail and different image features, and were then asked to identify the predicted targets in corresponding dynamic flight sequences over scenes of cultural, terrain, and mixed (both cultural and terrain) content.
Iterative filtering decomposition based on local spectral evolution kernel
Wang, Yang; Wei, Guo-Wei; Yang, Siyang
2011-01-01
Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy, and possibly conflicting data sets are among the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a powerful new tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
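The core iterative-filtering idea, repeatedly subtracting a local low-pass trend from the signal to isolate one oscillatory mode, can be sketched with a plain moving average standing in for the LSEK low-pass filter. The window width, iteration count, and test signal below are assumptions for illustration, not the paper's method:

```python
import numpy as np

def moving_average(x, w):
    """Moving-average low-pass filter with edge padding (length-preserving)."""
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.convolve(xp, np.ones(w) / w, mode="valid")[: len(x)]

def iterative_filtering(x, w, n_iter=5):
    """Extract one fast mode by repeatedly removing a moving-average trend
    (a simple stand-in for the LSEK filter); returns (mode, residual)."""
    mode = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        mode = mode - moving_average(mode, w)
    return mode, x - mode

# 5 s of data at 400 Hz: a 50 Hz oscillation riding on a 3 Hz swell.
t = np.arange(2000) / 400.0
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 3 * t)
# w = 8 spans one full 50 Hz period, so the trend excludes the fast mode.
fast, slow = iterative_filtering(signal, w=8)
```

After a few iterations the fast mode captures the 50 Hz oscillation and the residual captures the 3 Hz swell, mimicking one sifting step of EMD-style decomposition.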
Single-shot ultrafast tomographic imaging by spectral multiplexing
NASA Astrophysics Data System (ADS)
Matlis, N. H.; Axley, A.; Leemans, W. P.
2012-10-01
Computed tomography has profoundly impacted science, medicine and technology by using projection measurements scanned over multiple angles to permit cross-sectional imaging of an object. The application of computed tomography to moving or dynamically varying objects, however, has been limited by the temporal resolution of the technique, which is set by the time required to complete the scan. For objects that vary on ultrafast timescales, traditional scanning methods are not an option. Here we present a non-scanning method capable of resolving structure on femtosecond timescales by using spectral multiplexing of a single laser beam to perform tomographic imaging over a continuous range of angles simultaneously. We use this technique to demonstrate the first single-shot ultrafast computed tomography reconstructions and obtain previously inaccessible structure and position information for laser-induced plasma filaments. This development enables real-time tomographic imaging for ultrafast science, and offers a potential solution to the challenging problem of imaging through scattering surfaces.
An NV-Diamond Magnetic Imager for Neuroscience
NASA Astrophysics Data System (ADS)
Turner, Matthew; Schloss, Jennifer; Bauch, Erik; Hart, Connor; Walsworth, Ronald
2017-04-01
We present recent progress towards imaging time-varying magnetic fields from neurons using nitrogen-vacancy centers in diamond. The diamond neuron imager is noninvasive, label-free, and achieves single-cell resolution and state-of-the-art broadband sensitivity. By imaging magnetic fields from injected currents in mammalian neurons, we will map functional neuronal network connections and illuminate biophysical properties of neurons invisible to traditional electrophysiology. Furthermore, through enhancing magnetometer sensitivity, we aim to demonstrate real-time imaging of action potentials from networks of mammalian neurons.
Using endmembers in AVIRIS images to estimate changes in vegetative biomass
NASA Technical Reports Server (NTRS)
Smith, Milton O.; Adams, John B.; Ustin, Susan L.; Roberts, Dar A.
1992-01-01
Field techniques for estimating vegetative biomass are labor intensive, and rarely are used to monitor changes in biomass over time. Remote sensing offers an attractive alternative to field measurements; however, because there is no simple correspondence between encoded radiance in multispectral images and biomass, it is not possible to measure vegetative biomass directly from AVIRIS images. Ways to estimate vegetative biomass by identifying community types and then applying biomass scalars derived from field measurements are investigated. Field measurements of community-scale vegetative biomass can be made, at least for local areas, but it is not always possible to identify vegetation communities unambiguously using remote measurements and conventional image-processing techniques. Furthermore, even when communities are well characterized in a single image, it typically is difficult to assess the extent and nature of changes in a time series of images, owing to uncertainties introduced by variations in illumination geometry, atmospheric attenuation, and instrumental responses. Our objective is to develop an improved method based on spectral mixture analysis to characterize and identify vegetative communities that can be applied to multi-temporal AVIRIS and other types of images. In previous studies, multi-temporal data sets (AVIRIS and TM) of Owens Valley, CA were analyzed and vegetation communities were defined in terms of fractions of reference (laboratory and field) endmember spectra. An advantage of converting an image to fractions of reference endmembers is that, although fractions in a given pixel may vary from image to image in a time series, the endmembers themselves typically are constant, thus providing a consistent frame of reference.
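The endmember-fraction idea behind spectral mixture analysis can be sketched as linear least-squares unmixing of each pixel against a matrix of reference spectra. The 4-band spectra below are hypothetical stand-ins for laboratory/field endmembers, and a full method would additionally enforce non-negativity and handle shade explicitly:

```python
import numpy as np

def unmix(pixel, endmembers):
    """Least-squares endmember fractions for one pixel.

    endmembers: (n_bands, n_endmembers) matrix of reference spectra.
    Fractions are normalized to sum to 1 (no non-negativity constraint
    here; practical unmixing usually adds one)."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return f / f.sum()

# Hypothetical 4-band reflectances for vegetation, soil, and shade endmembers
E = np.array([
    [0.05, 0.30, 0.02],
    [0.08, 0.35, 0.02],
    [0.45, 0.40, 0.03],
    [0.50, 0.45, 0.03],
])
true_fractions = np.array([0.6, 0.3, 0.1])
pixel = E @ true_fractions                 # a perfectly mixed pixel
print(np.round(unmix(pixel, E), 2))        # -> [0.6 0.3 0.1]
```

Because the endmember spectra stay fixed across a time series, the recovered fractions give a consistent frame of reference for change detection.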
Evaluating the Efficacy of Wavelet Configurations on Turbulent-Flow Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Shaomeng; Gruchalla, Kenny; Potter, Kristin
2015-10-25
I/O is increasingly becoming a significant constraint for simulation codes and visualization tools on modern supercomputers. Data compression is an attractive workaround, and, in particular, wavelets provide a promising solution. However, wavelets can be applied in multiple configurations, and the variations in configuration impact accuracy, storage cost, and execution time. While the variation in these factors over wavelet configurations has been explored in image processing, it is not well understood for visualization and analysis of scientific data. To illuminate this issue, we evaluate multiple wavelet configurations on turbulent-flow data. Our approach is to repeat established analysis routines on uncompressed and lossy-compressed versions of a data set, and then quantitatively compare their outcomes. Our findings show that accuracy varies greatly based on wavelet configuration, while storage cost and execution time vary less. Overall, our study provides new insights for simulation analysts and visualization experts, who need to make tradeoffs between accuracy, storage cost, and execution time.
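One configuration choice in wavelet compression, how many coefficients to keep, can be illustrated with a single-level Haar transform and hard thresholding. This is a didactic stand-in in NumPy, not one of the configurations evaluated in the study:

```python
import numpy as np

def haar_1d(x):
    """One level of the 1-D Haar transform: (averages, details)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def inv_haar_1d(a, d):
    """Exact inverse of one Haar level."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def compress(x, keep=0.25):
    """Zero all but the largest-magnitude fraction of Haar coefficients."""
    a, d = haar_1d(x)
    coeffs = np.concatenate([a, d])
    thresh = np.quantile(np.abs(coeffs), 1 - keep)
    coeffs[np.abs(coeffs) < thresh] = 0.0
    n = len(a)
    return inv_haar_1d(coeffs[:n], coeffs[n:])

# A smooth signal survives aggressive coefficient truncation fairly well
x = np.sin(np.linspace(0, 4 * np.pi, 256))
x_lossy = compress(x, keep=0.25)
rel_mse = np.mean((x - x_lossy) ** 2) / np.mean(x ** 2)
```

Varying `keep` trades storage cost against reconstruction accuracy, which is exactly the kind of configuration-dependent tradeoff the study quantifies for analysis outcomes.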
Realistic Simulations of Coronagraphic Observations with WFIRST
NASA Astrophysics Data System (ADS)
Rizzo, Maxime; Zimmerman, Neil; Roberge, Aki; Lincowski, Andrew; Arney, Giada; Stark, Chris; Jansen, Tiffany; Turnbull, Margaret; WFIRST Science Investigation Team (Turnbull)
2018-01-01
We present a framework to simulate observing scenarios with the WFIRST Coronagraphic Instrument (CGI). The Coronagraph and Rapid Imaging Spectrograph in Python (crispy) is an open-source package that can be used to create CGI data products for analysis and development of post-processing routines. The software convolves time-varying coronagraphic PSFs with realistic astrophysical scenes which contain a planetary architecture, a consistent dust structure, and a background field composed of stars and galaxies. The focal plane can be read out by a WFIRST electron-multiplying CCD model directly, or passed through a WFIRST integral field spectrograph model first. Several elementary post-processing routines are provided as part of the package.
Image interpolation by adaptive 2-D autoregressive modeling and soft-decision estimation.
Zhang, Xiangjun; Wu, Xiaolin
2008-06-01
The challenge of image interpolation is to preserve spatial details. We propose a soft-decision interpolation technique that estimates missing pixels in groups rather than one at a time. The new technique learns and adapts to varying scene structures using a 2-D piecewise autoregressive model. The model parameters are estimated in a moving window in the input low-resolution image. The pixel structure dictated by the learnt model is enforced by the soft-decision estimation process onto a block of pixels, including both observed and estimated. The result is equivalent to that of a high-order adaptive nonseparable 2-D interpolation filter. This new image interpolation approach preserves spatial coherence of interpolated images better than the existing methods, and it produces the best results so far over a wide range of scenes in both PSNR measure and subjective visual quality. Edges and textures are well preserved, and common interpolation artifacts (blurring, ringing, jaggies, zippering, etc.) are greatly reduced.
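The local AR-model fitting at the heart of such interpolators can be sketched as a least-squares fit over a moving window. The sketch below shows only the parameter estimation and prediction for a 4-coefficient diagonal-neighbour model, under an assumed neighbourhood layout; it omits the paper's soft-decision group estimation:

```python
import numpy as np

def fit_ar_2d(window):
    """Least-squares fit of a 4-coefficient 2-D AR model in a local window:
    each interior pixel is regressed on its four diagonal neighbours."""
    rows, cols = window.shape
    A, b = [], []
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            A.append([window[i - 1, j - 1], window[i - 1, j + 1],
                      window[i + 1, j - 1], window[i + 1, j + 1]])
            b.append(window[i, j])
    coef, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return coef

def ar_predict(window, coef):
    """Predict interior pixels from the fitted diagonal-neighbour model."""
    pred = window.astype(float).copy()
    rows, cols = window.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            pred[i, j] = (coef[0] * window[i - 1, j - 1] +
                          coef[1] * window[i - 1, j + 1] +
                          coef[2] * window[i + 1, j - 1] +
                          coef[3] * window[i + 1, j + 1])
    return pred

# On a smooth ramp patch the fitted model reproduces the interior exactly.
patch = np.add.outer(np.arange(8.0), np.arange(8.0))
coef = fit_ar_2d(patch)
recon = ar_predict(patch, coef)
```

Refitting the coefficients in each window is what lets the interpolator adapt to varying scene structure instead of using one fixed filter everywhere.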
Automated identification of cone photoreceptors in adaptive optics retinal images.
Li, Kaccie Y; Roorda, Austin
2007-05-01
In making noninvasive measurements of the human cone mosaic, the task of labeling each individual cone is unavoidable. Manual labeling is a time-consuming process, setting the motivation for the development of an automated method. An automated algorithm for labeling cones in adaptive optics (AO) retinal images is implemented and tested on real data. The optical fiber properties of cones aided the design of the algorithm. Out of 2153 manually labeled cones from six different images, the automated method correctly identified 94.1% of them. The agreement between the automated and the manual labeling methods varied from 92.7% to 96.2% across the six images. Results between the two methods disagreed for 1.2% to 9.1% of the cones. Voronoi analysis of large montages of AO retinal images confirmed the general hexagonal-packing structure of retinal cones as well as the general cone density variability across portions of the retina. The consistency of our measurements demonstrates the reliability and practicality of having an automated solution to this problem.
NASA Astrophysics Data System (ADS)
Vallières, Martin; Laberge, Sébastien; Diamant, André; El Naqa, Issam
2017-11-01
Texture-based radiomic models constructed from medical images have the potential to support cancer treatment management via personalized assessment of tumour aggressiveness. While the identification of stable texture features under varying imaging settings is crucial for the translation of radiomics analysis into routine clinical practice, we hypothesize in this work that a complementary optimization of image acquisition parameters prior to texture feature extraction could enhance the predictive performance of texture-based radiomic models. As a proof of concept, we evaluated the possibility of enhancing a model constructed for the early prediction of lung metastases in soft-tissue sarcomas (STS) by optimizing PET and MR image acquisition protocols via computerized simulations of image acquisitions with varying parameters. Simulated PET images from 30 STS patients were acquired by varying the extent of axial data combined per slice ('span'). Simulated T1-weighted and T2-weighted MR images were acquired by varying the repetition time and echo time in a spin-echo pulse sequence, respectively. We analyzed the impact of variations in the PET and MR image acquisition parameters on individual textures, and we investigated how these variations could enhance the global response and the predictive properties of a texture-based model. Our results suggest that it is feasible to identify an optimal set of image acquisition parameters to improve prediction performance. The model constructed with textures extracted from simulated images acquired with a standard clinical set of acquisition parameters reached an average AUC of 0.84 +/- 0.01 in bootstrap testing experiments. In comparison, the model performance significantly increased using an optimal set of image acquisition parameters (p = 0.04), with an average AUC of 0.89 +/- 0.01.
Ultimately, specific acquisition protocols optimized to generate superior radiomics measurements for a given clinical problem could be developed and standardized via dedicated computer simulations and thereafter validated using clinical scanners.
Varying face occlusion detection and iterative recovery for face recognition
NASA Astrophysics Data System (ADS)
Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei
2017-05-01
In most sparse representation methods for face recognition (FR), occlusion problems are usually handled by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results owing to the limitations of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect the occlusion area of facial images accurately, a combined image-processing and intersection-based clustering method is used; (2) according to the resulting occlusion map, new integrated facial images are recovered iteratively and passed to the recognition process; and (3) the effect of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods against partial contiguous occlusion.
Wu, Chunyan; Wang, Xuefeng
2017-01-01
This paper presents a survey of a system that uses digital image processing techniques to identify anthracnose and powdery mildew diseases of sandalwood from digital images. Our main objective is to find the most suitable identification technology for anthracnose and powdery mildew on sandalwood leaves, providing algorithmic support for real-time machine judgment of the health status and disease level of sandalwood. We conducted real-time monitoring of Hainan sandalwood leaves with varying severity levels of anthracnose and powdery mildew beginning in March 2014. We used image segmentation, feature extraction, and digital image classification and recognition technology to carry out a comparative experimental study of the image analysis of powdery mildew, anthracnose, and healthy leaves in the field. Testing on a large number of diseased leaves led to three conclusions: (1) Among the classical methods, the BP (Back Propagation) neural network distinguished sandalwood anthracnose and powdery mildew relatively well, and its estimated lesion areas were closest to the actual ones. (2) The differences between the two diseases are well captured by the shape, color, and texture features of the disease images. (3) An SVM based on a radial basis kernel function gave ideal identification and diagnosis results: the identification rate was 92% for both anthracnose and healthy leaves, and 84% for powdery mildew. Disease identification technology lays the foundation for remote disease-monitoring diagnosis and for remote transmission of disease images, and it is a useful guide and reference for further research on disease identification and diagnosis systems for sandalwood and other tree species. PMID:28749977
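A thresholding-style segmentation step of the kind described (isolating lesion pixels before feature extraction) can be sketched as follows. The "diseased tissue is less green than healthy tissue" rule, the threshold, and the synthetic leaf are hypothetical stand-ins, not the paper's trained classifiers:

```python
import numpy as np

def lesion_mask(rgb, greenness_thresh=0.9):
    """Flag pixels whose green dominance falls below a threshold.
    (Hypothetical rule: lesions are assumed browner than healthy leaf.)"""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    greenness = g / (r + b + 1e-6)
    return greenness < greenness_thresh

def lesion_area_fraction(rgb):
    """Fraction of pixels classified as lesion."""
    return lesion_mask(rgb).mean()

# Synthetic leaf: healthy green tissue with a brown lesion patch on top
leaf = np.tile(np.array([60, 160, 50], dtype=np.uint8), (10, 10, 1))
leaf[:3, :, :] = np.array([120, 60, 40], dtype=np.uint8)  # lesion rows
print(lesion_area_fraction(leaf))  # -> 0.3
```

The resulting mask area is the kind of lesion-size measurement against which the paper compares its classifiers' estimates.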
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shoaf, S.; APS Engineering Support Division
A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
Ong, Ta-Hsuan; Romanova, Elena V.; Roberts-Galbraith, Rachel H.; Yang, Ning; Zimmerman, Tyler A.; Collins, James J.; Lee, Ji Eun; Kelleher, Neil L.; Newmark, Phillip A.; Sweedler, Jonathan V.
2016-01-01
Tissue regeneration is a complex process that involves a mosaic of molecules that vary spatially and temporally. Insights into the chemical signaling underlying this process can be achieved with a multiplex and untargeted chemical imaging method such as mass spectrometry imaging (MSI), which can enable de novo studies of nervous system regeneration. A combination of MSI and multivariate statistics was used to differentiate peptide dynamics in the freshwater planarian flatworm Schmidtea mediterranea at different time points during cephalic ganglia regeneration. A protocol was developed to make S. mediterranea tissues amenable for MSI. MS ion images of planarian tissue sections allow changes in peptides and unknown compounds to be followed as a function of cephalic ganglia regeneration. In conjunction with fluorescence imaging, our results suggest that even though the cephalic ganglia structure is visible after 6 days of regeneration, the original chemical composition of these regenerated structures is regained only after 12 days. Differences were observed in many peptides, such as those derived from secreted peptide 4 and EYE53-1. Peptidomic analysis further identified multiple peptides from various known prohormones, histone proteins, and DNA- and RNA-binding proteins as being associated with the regeneration process. Mass spectrometry data also facilitated the identification of a new prohormone, which we have named secreted peptide prohormone 20 (SPP-20), and is up-regulated during regeneration in planarians. PMID:26884331
Geometric processing of digital images of the planets
NASA Technical Reports Server (NTRS)
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
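The Sinusoidal Equal-Area mapping itself is simple to state in code. This NumPy sketch shows only the forward and inverse projection on a sphere, not the adaptive scheme that varies the number of transformation calculations with distortion:

```python
import numpy as np

def sinusoidal_forward(lon_deg, lat_deg, lon0_deg=0.0, radius=1.0):
    """Forward Sinusoidal (equal-area) projection: lon/lat degrees -> x/y."""
    lat = np.radians(lat_deg)
    dlon = np.radians(np.asarray(lon_deg) - lon0_deg)
    return radius * dlon * np.cos(lat), radius * lat

def sinusoidal_inverse(x, y, lon0_deg=0.0, radius=1.0):
    """Inverse Sinusoidal projection: x/y -> lon/lat degrees."""
    lat = np.asarray(y) / radius
    lon = lon0_deg + np.degrees(np.asarray(x) / (radius * np.cos(lat)))
    return lon, np.degrees(lat)

# Round trip for a test point on the unit sphere
x, y = sinusoidal_forward(45.0, 30.0)
lon, lat = sinusoidal_inverse(x, y)
```

Because meridians converge with latitude, the x scale varies across the image, which is why an adaptive pixel-transformation strategy (exact computation where distortion is high, interpolation where it is low) pays off.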
Brain connectivity study of joint attention using frequency-domain optical imaging technique
NASA Astrophysics Data System (ADS)
Chaudhary, Ujwal; Zhu, Banghe; Godavarty, Anuradha
2010-02-01
Autism is a socio-communication brain development disorder. It is marked by degeneration in the ability to respond to joint attention skill tasks from as early as 12 to 18 months of age, a trait used to distinguish autistic from nonautistic populations. In this study, diffuse optical imaging is used for the first time to study brain connectivity in response to a joint attention experience in normal adults. The prefrontal region of the brain was non-invasively imaged using a frequency-domain optical imager. The imaging studies were performed on 11 normal right-handed adults, and optical measurements were acquired in response to joint-attention-based video clips. While the intensity-based optical data provide information about the hemodynamic response of the underlying neural process, the time-dependent phase-based optical data have the potential to reveal directional information about brain activation. Brain connectivity studies are thus performed by computing covariances/correlations between spatial units using these frequency-domain optical measurements. The preliminary results indicate that the extent of synchrony and the directional variation in the pattern of activation differ between the left and right frontal cortex. The results have significant implications for research on the neural pathways associated with autism, which may be mapped using diffuse optical imaging tools in the future.
Dragovic, A S; Stringer, A K; Campbell, L; Shaul, C; O'Leary, S J; Briggs, R J
2018-05-01
To investigate the clinical usefulness and practicality of co-registration of Cone Beam CT (CBCT) with preoperative Magnetic Resonance Imaging (MRI) for intracochlear localization of electrodes after cochlear implantation. Images of 20 adult patients who underwent CBCT after implantation were co-registered with preoperative MRI scans. Time taken for co-registration was recorded. The images were analysed by clinicians of varying levels of expertise to determine electrode position and ease of interpretation. After a short learning curve, the average co-registration time was 10.78 minutes (StdDev 2.37). All clinicians found the co-registered images easier to interpret than CBCT alone. The mean concordance of CBCT vs. co-registered image analysis between consultant otologists was 60% (17-100%) and 86% (60-100%), respectively. The sensitivity and specificity for CBCT to identify Scala Vestibuli insertion or translocation was 100 and 75%, respectively. The negative predictive value was 100%. CBCT should be performed following adult cochlear implantation for audit and quality control of surgical technique. If SV insertion or translocation is suspected, co-registration with preoperative MRI should be performed to enable easier analysis. There will be a learning curve for this process in terms of both the co-registration and the interpretation of images by clinicians.
Dynamic laser piercing of thick section metals
NASA Astrophysics Data System (ADS)
Pocorni, Jetro; Powell, John; Frostevarg, Jan; Kaplan, Alexander F. H.
2018-01-01
Before a contour can be laser cut, the laser first needs to pierce the material. The time taken to achieve piercing should be minimised to optimise productivity. One important aspect of laser piercing is the reliability of the process, because industrial laser cutting machines are programmed for the minimum reliable pierce time. In this work, piercing experiments were carried out in 15 mm thick stainless steel sheets, comparing a stationary laser and a laser moving along a circular trajectory at varying processing speeds. Results show that circular piercing can decrease the pierce duration by almost half compared to stationary piercing. High speed imaging (HSI) was employed during the piercing process to understand melt behaviour inside the pierce hole. HSI videos show that circular rotation of the laser beam forces melt to eject in the opposite direction to the beam movement, while in stationary piercing the melt ejects less efficiently, in random directions out of the hole.
An accelerated image matching technique for UAV orthoimage registration
NASA Astrophysics Data System (ADS)
Tsai, Chung-Hsien; Lin, Yu-Ching
2017-06-01
Using an Unmanned Aerial Vehicle (UAV) drone with an attached non-metric camera has become a popular low-cost approach for collecting geospatial data. A well-georeferenced orthoimage is a fundamental product for geomatics professionals. To achieve high positioning accuracy of orthoimages, precise sensor position and orientation data, or a number of ground control points (GCPs), are often required. Alternatively, image registration is a solution for improving the accuracy of a UAV orthoimage, as long as a historical reference image is available. This study proposes a registration scheme, including an Accelerated Binary Robust Invariant Scalable Keypoints (ABRISK) algorithm and spatial analysis of corresponding control points for image registration. To determine a match between two input images, feature descriptors from one image are compared with those from another image. A "Sorting Ring" is used to filter out incorrect feature pairs as early as possible in the stage of matching feature points, to speed up the matching process. The results demonstrate that the proposed ABRISK approach outperforms the vector-based Scale Invariant Feature Transform (SIFT) approach where radiometric variations exist. ABRISK is 19.2 times and 312 times faster than SIFT for image sizes of 1000 × 1000 pixels and 4000 × 4000 pixels, respectively. ABRISK is 4.7 times faster than Binary Robust Invariant Scalable Keypoints (BRISK). Furthermore, the positional accuracy of the UAV orthoimage after applying the proposed image registration scheme is improved by an average of root mean square error (RMSE) of 2.58 m for six test orthoimages whose spatial resolutions vary from 6.7 cm to 10.7 cm.
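The matching step that the "Sorting Ring" accelerates is nearest-neighbour search over binary descriptors under the Hamming distance. A minimal sketch of that underlying step (the Sorting Ring filter itself is not reproduced here, and the descriptor values are placeholders):

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two binary descriptors packed as integers."""
    return bin(a ^ b).count("1")

def match_descriptors(descs_a, descs_b, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors with a
    distance cutoff to reject weak correspondences."""
    matches = []
    for i, da in enumerate(descs_a):
        best = min(range(len(descs_b)), key=lambda j: hamming(da, descs_b[j]))
        d = hamming(da, descs_b[best])
        if d <= max_dist:  # keep only plausible feature pairs
            matches.append((i, best, d))
    return matches
```

A ring-style filter would discard candidate pairs before computing the full distance, which is where the reported speed-up comes from.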
NASA Astrophysics Data System (ADS)
Gallardo, Athena Marie
Past nuclear accidents, such as Chernobyl, resulted in a large release of radionuclides into the atmosphere. Radiological assessment of the vicinity of the site of the incident is vital to assess the exposure levels and dose received by the population and workers. Therefore, it is critical to thoroughly understand the situation and risks associated with a particular event in a timely manner in order to properly manage the event. Current atmospheric radiological assessments of alpha emitting radioisotopes include acquiring large quantities of air samples, chemical separation of radionuclides, sample mounting, counting through alpha spectrometry, and analysis of the data. The existing methodology is effective, but time consuming and labor intensive. Autoradiography, and the properties of phosphor imaging films, may be used as an additional technique to facilitate and expedite the alpha analysis process in these types of situations. Although autoradiography is not as sensitive to alpha radiation as alpha spectrometry, autoradiography may benefit alpha analysis by providing information about the activity as well as the spatial distribution of radioactivity in the sample under investigation. The objective for this research was to develop an efficient method for quantification and visualization of air filter samples taken in the aftermath of a nuclear emergency through autoradiography using 241Am and 239Pu tracers. Samples containing varying activities of either 241Am or 239Pu tracers were produced through microprecipitation and assayed by alpha spectroscopy. The samples were subsequently imaged and an activity calibration curve was produced by comparing the digital light units recorded from the image to the known activity of the source. The usefulness of different phosphor screens was examined by exposing each type of film to the same standard nuclide for varying quantities of time. 
Unknown activity samples created through microprecipiation containing activities of either 241Am or 239Pu as well as air filters doped with beta and alpha emitting nuclides were imaged and activities were determined by comparing the image to the activity calibration curve.
Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung
2017-07-08
A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.
NASA Astrophysics Data System (ADS)
Mulligan, Jeffrey A.; Adie, Steven G.
2017-02-01
Mechanobiology is an emerging field which seeks to link mechanical forces and properties to the behaviors of cells and tissues in cancer, stem cell growth, and other processes. Traction force microscopy (TFM) is an imaging technique that enables the study of traction forces exerted by cells on their environment as they migrate, sense, and manipulate their surroundings. To date, TFM research has been performed using incoherent imaging modalities and, until recently, has been largely confined to the study of cell-induced tractions in two dimensions using highly artificial and controlled environments. As the field of mechanobiology advances, and demand grows for research in physiologically relevant 3D culture and in vivo models, TFM will require imaging modalities that support such settings. Optical coherence microscopy (OCM) is an interferometric imaging modality which enables 3D cellular resolution imaging in highly scattering environments. Moreover, optical coherence elastography (OCE) enables the measurement of tissue mechanical properties. OCE relies on the principle of measuring material deformations in response to artificially applied stress. By extension, similar techniques can enable the measurement of cell-induced deformations, imaged with OCM. We propose traction force optical coherence microscopy (TF-OCM) as a natural extension and partner to existing OCM and OCE methods. We report the first use of OCM data and digital image correlation to track temporally varying displacement fields exhibited within a 3D culture setting. These results mark the first steps toward the realization of TF-OCM in 2D and 3D settings, bolstering OCM as a platform for advancing research in mechanobiology.
Qualitative and quantitative processing of side-scan sonar data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwan, F.S.; Anderson, A.L.; Hilde, T.W.C.
1990-06-01
Modern side-scan sonar systems allow vast areas of seafloor to be rapidly imaged and quantitatively mapped in detail. Remote sensing image processing techniques can be used to correct for various distortions inherent in raw sonography. Corrections are possible for water column, slant-range, aspect ratio, speckle and striping noise, multiple returns, power drop-off, and for georeferencing. The final products reveal seafloor features and patterns that are geometrically correct, georeferenced, and have improved signal/noise ratio. These products can be merged with other georeferenced data bases for further database management and information extraction. In order to compare data collected by different systems from a common area, and to ground truth measurements and geoacoustic models, quantitative correction must be made for calibrated sonar system and bathymetry effects. Such data inversion must account for system source level, beam pattern, time-varying gain, processing gain, transmission loss, absorption, insonified area, and grazing angle effects. Seafloor classification can then be performed on the calculated backscattering strength using Lambert's Law and regression analysis. Examples are given using both approaches: image analysis and inversion of data based on the sonar equation.
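The inversion the abstract describes amounts to solving the active sonar equation for bottom backscattering strength. A minimal sketch under simplifying assumptions (gains already removed from the received level; all values in dB, area in m²; the numbers in the test are illustrative, not from the paper):

```python
import math

def backscatter_strength(rl_db, sl_db, tl_db, area_m2):
    """Bottom backscattering strength (dB) from the active sonar equation:
    BS = RL - SL + 2*TL - 10*log10(insonified area).
    Assumes time-varying gain and processing gain have already been removed
    from the received level RL, and TL is one-way transmission loss."""
    return rl_db - sl_db + 2.0 * tl_db - 10.0 * math.log10(area_m2)
```

Lambert's Law fitting would then regress the resulting BS values against grazing angle.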
Polarimetric Calibration and Assessment of GF-3 Images in Steppe
NASA Astrophysics Data System (ADS)
Chang, Y.; Yang, J.; Li, P.; Shi, L.; Zhao, L.
2018-04-01
The GaoFen-3 (GF-3) satellite is the first fully polarimetric synthetic aperture radar (PolSAR) satellite in China. It has three fully polarimetric imaging modes and is available for many applications. Several calibration experiments have been carried out after launch in Inner Mongolia by the Institute of Electronics, Chinese Academy of Sciences (IECAS), and the polarimetric calibration (PolCAL) strategy of GF-3 has also been improved. It is therefore necessary to assess image quality before any further applications. In this paper, we evaluated the polarimetric residual errors of GF-3 images acquired in July 2017 at a steppe site. The results show that the crosstalk of these images varies from -36 dB to -46 dB, and the channel imbalance varies from -0.43 dB to 0.55 dB with the angle varying from -1.6 to 3.6 degrees. We also carried out a PolCAL experiment to restrain the polarimetric distortion afterwards, and the polarimetric quality of the images improved after PolCAL processing.
The effects of nuclear magnetic resonance on patients with cardiac pacemakers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavlicek, W.; Geisinger, M.; Castle, L.
1983-04-01
The effect of nuclear magnetic resonance (NMR) imaging on six representative cardiac pacemakers was studied. The results indicate that the threshold for initiating the asynchronous mode of a pacemaker is 17 gauss. Radiofrequency fields are present in an NMR unit and may confuse or possibly inhibit demand pacemakers, although sensing circuitry is normally provided with electromagnetic interference discrimination. Time-varying magnetic fields can generate pulse amplitudes and frequencies that mimic cardiac activity. A serious limitation on imaging patients with pacemakers would be the alteration of normal pulsing parameters by time-varying magnetic fields.
ASPRS Digital Imagery Guideline Image Gallery Discussion
NASA Technical Reports Server (NTRS)
Ryan, Robert
2002-01-01
The objectives of the image gallery are to 1) give users and providers a simple means of identifying appropriate imagery for a given application/feature extraction; and 2) define imagery sufficiently to be described in engineering and acquisition terms. This viewgraph presentation includes a discussion of edge response and aliasing for image processing, and a series of images illustrating the effects of signal to noise ratio (SNR) on images. Another series of images illustrates how images are affected by varying the ground sample distances (GSD).
Art and Design Blogs: A Socially-Wise Approach to Creativity
ERIC Educational Resources Information Center
Budge, Kylie
2012-01-01
Many images of the "artist" or "designer" pervade the media and popular consciousness. Contemporary images of the artist and creativity that focus solely on the individual offer a very narrow depiction of the varying ways creativity occurs for artists and designers. These images do not capture the variety of creative processes and myriad ways…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teskey, G.C.; Prato, F.S.; Ossenkopp, K.P.
The effects of exposure to clinical magnetic resonance imaging (MRI) on analgesia induced by the mu opiate agonist fentanyl were examined in mice. During the dark period, adult male mice were exposed for 23.2 min to the time-varying (0.6 T/sec) magnetic field (TVMF) component of the MRI procedure. Following this exposure, the analgesic potency of fentanyl citrate (0.1 mg/kg) was determined at 5, 10, 15, and 30 min post-injection, using a thermal test stimulus (hot plate, 50 degrees C). Exposure to the magnetic-field gradients attenuated the fentanyl-induced analgesia in a manner comparable to that previously observed with morphine. These results indicate that the time-varying magnetic fields associated with MRI have significant inhibitory effects on the analgesic effects of specific mu-opiate-directed ligands.
LANDSAT imagery of the Venetian Lagoon: A multitemporal analysis
NASA Technical Reports Server (NTRS)
Alberotanza, L.; Zandonella, A. (Principal Investigator)
1980-01-01
The use of LANDSAT multispectral scanner images from 1975 to 1979 to determine pollution dispersion in the central basin of the lagoon under varying tidal conditions is described. Images taken during the late spring and representing both short and long range tidal dynamics were processed for partial haze removal and removal of residual striping. Selected spectral bands were correlated to different types of turbid water. The multitemporal data was calibrated, classified considering sea truth data, and evaluated. The classification differentiated tide diffusion, algae belts, and industrial, agricultural, and urban turbidity distributions. Pollution concentration is derived during the short time interval between inflow and outflow and from the distance between the two lagoon inlets and the industrial zones. Increasing pollution of the lagoon is indicated.
Berns, G S; Song, A W; Mao, H
1999-07-15
Linear experimental designs have dominated the field of functional neuroimaging, but although successful at mapping regions of relative brain activation, the technique assumes that both cognition and brain activation are linear processes. To test these assumptions, we performed a continuous functional magnetic resonance imaging (fMRI) experiment of finger opposition. Subjects performed a visually paced bimanual finger-tapping task. The frequency of finger tapping was continuously varied between 1 and 5 Hz, without any rest blocks. After continuous acquisition of fMRI images, the task-related brain regions were identified with independent components analysis (ICA). When the time courses of the task-related components were plotted against tapping frequency, nonlinear "dose-response" curves were obtained for most subjects. Nonlinearities appeared in both the static and dynamic sense, with hysteresis being prominent in several subjects. The ICA decomposition also demonstrated the spatial dynamics, with different components active at different times. These results suggest that the brain response to tapping frequency does not scale linearly, and that it is history-dependent even after accounting for the hemodynamic response function. This implies that finger tapping, as measured with fMRI, is a nonstationary process. When analyzed with a conventional general linear model, a strong correlation to tapping frequency was identified, but the spatiotemporal dynamics were not apparent.
Method and apparatus of high dynamic range image sensor with individual pixel reset
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)
2001-01-01
A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
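The reset logic described above (a pixel clears only when its row and column control signals coincide) can be sketched as a toy simulation, purely to illustrate how per-pixel integration times arise; this is not the patented circuit, and the flux values and reset schedule are invented:

```python
import numpy as np

def simulate_individual_pixel_reset(flux, resets, n_steps):
    """Toy model of individual pixel reset: each step every pixel accumulates
    its flux; a pixel is cleared only when BOTH its row and column reset lines
    are asserted at the same step, so integration time varies per pixel.
    resets: dict mapping step -> (set of rows, set of cols) asserted then."""
    acc = np.zeros_like(flux, dtype=float)
    for t in range(n_steps):
        acc += flux
        rows, cols = resets.get(t, (set(), set()))
        for r in rows:
            for c in cols:
                acc[r, c] = 0.0  # logical AND of row and column control
    return acc
```

A bright pixel reset late in the frame integrates for a short time and so avoids saturation, while dim pixels keep the full integration period.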
Lam, Mie K; Huisman, Merel; Nijenhuis, Robbert J; van den Bosch, Maurice Aaj; Viergever, Max A; Moonen, Chrit Tw; Bartels, Lambertus W
2015-01-01
Magnetic resonance (MR)-guided high-intensity focused ultrasound has emerged as a clinical option for palliative treatment of painful bone metastases, with MR thermometry (MRT) used for treatment monitoring. In this study, the general image quality of the MRT was assessed in terms of signal-to-noise ratio (SNR) and apparent temperature variation. Also, MRT artifacts were scored for their occurrence and hampering of the treatment monitoring. Analyses were performed on 224 MRT datasets retrieved from 13 treatments. The SNR was measured per voxel over time in magnitude images, in the target lesion and surrounding muscle, and was averaged per treatment. The standard deviation over time of the measured temperature per voxel in MRT images, in the muscle outside the heated region, was defined as the apparent temperature variation and was averaged per treatment. The scored MRT artifacts originated from the following sources: respiratory and non-respiratory time-varying field inhomogeneities, arterial ghosting, and patient motion by muscle contraction and by gross body movement. Distinction was made between lesion type, location, and procedural sedation and analgesic (PSA). The average SNR was highest in and around osteolytic lesions (21 in lesions, 27 in surrounding muscle, n = 4) and lowest in the upper body (9 in lesions, 16 in surrounding muscle, n = 4). The average apparent temperature variation was lowest in osteolytic lesions (1.2°C, n = 4) and the highest in the upper body (1.7°C, n = 4). Respiratory time-varying field inhomogeneity MRT artifacts occurred in 85% of the datasets and hampered treatment monitoring in 81%. Non-respiratory time-varying field inhomogeneities and arterial ghosting MRT artifacts were most frequent (94% and 95%) but occurred only locally. Patient motion artifacts were highly variable and occurred less in treatments of osteolytic lesions and using propofol and esketamine as PSA. 
In this study, the general image quality of MRT was observed to be higher in osteolytic lesions and lower in the upper body. Respiratory time-varying field inhomogeneity was the most prominent MRT artifact. Patient motion occurrence varied between treatments and seemed to be related to lesion type and type of PSA. Clinicians should be aware of these observed characteristics when interpreting MRT images.
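The two image-quality metrics used in this study (per-voxel temporal SNR of magnitude images, and the temporal standard deviation of measured temperature in an unheated region) can be sketched as follows; the array shapes and values are illustrative, not the study's data:

```python
import numpy as np

def temporal_snr(mag):
    """Per-voxel SNR over time of magnitude images: mean over the time axis
    divided by the std over the time axis. mag has shape (T, ...)."""
    return mag.mean(axis=0) / mag.std(axis=0, ddof=1)

def apparent_temperature_variation(temp):
    """Std over time of the measured temperature per voxel, in an unheated
    ROI, averaged over the ROI. temp has shape (T, ...), in degrees C."""
    return temp.std(axis=0, ddof=1).mean()
```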
NASA Technical Reports Server (NTRS)
Simpson, James J.; Harkins, Daniel N.
1993-01-01
Historically, locating and browsing satellite data has been a cumbersome and expensive process. This has impeded the efficient and effective use of satellite data in the geosciences. SSABLE is a new interactive tool for the archive, browse, order, and distribution of satellite data based upon X Window, high bandwidth networks, and digital image rendering techniques. SSABLE provides for automatically constructing relational database queries to archived image datasets based on time, date, geographical location, and other selection criteria. SSABLE also provides a visual representation of the selected archived data for viewing on the user's X terminal. SSABLE is a near real-time system; for example, data are added to SSABLE's database within 10 min after capture. SSABLE is network and machine independent; it will run identically on any machine which satisfies the following three requirements: 1) has a bitmapped display (monochrome or greater); 2) is running the X Window system; and 3) is on a network directly reachable by the SSABLE system. SSABLE has been evaluated at over 100 international sites. Network response time in the United States and Canada varies between 4 and 7 s for browse image updates; reported transmission times to Europe and Australia typically are 20-25 s.
Stereoscopic Imaging in Hypersonic Boundary Layers using Planar Laser-Induced Fluorescence
NASA Technical Reports Server (NTRS)
Danehy, Paul M.; Bathel, Brett; Inman, Jennifer A.; Alderfer, David W.; Jones, Stephen B.
2008-01-01
Stereoscopic time-resolved visualization of three-dimensional structures in a hypersonic flow has been performed for the first time. Nitric Oxide (NO) was seeded into hypersonic boundary layer flows that were designed to transition from laminar to turbulent. A thick laser sheet illuminated and excited the NO, causing spatially-varying fluorescence. Two cameras in a stereoscopic configuration were used to image the fluorescence. The images were processed in a computer visualization environment to provide stereoscopic image pairs. Two methods were used to display these image pairs: a cross-eyed viewing method, which can be viewed with the naked eye, and red/blue anaglyphs, which require viewing through red/blue glasses. The images visualized three-dimensional information that would be lost if conventional planar laser-induced fluorescence imaging had been used. Two model configurations were studied in NASA Langley Research Center's 31-Inch Mach 10 Air Wind Tunnel. One model was a 10 degree half-angle wedge containing a small protuberance to force the flow to transition. The other model was a 1/3-scale, truncated Hyper-X forebody model with blowing through a series of holes to force the boundary layer flow to transition to turbulence. In the former case, low flow rates of pure NO seeded and marked the boundary layer fluid. In the latter, a trace concentration of NO was seeded into the injected N2 gas. The three-dimensional visualizations have an effective time resolution of about 500 ns, which is fast enough to freeze this hypersonic flow. The 512x512 resolution of the resulting images is much higher than high-speed laser-sheet scanning systems with similar time response, which typically measure 10-20 planes.
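Building a red/blue anaglyph from a stereo pair is a simple channel recombination: the red channel comes from the left view and the remaining channels from the right. A minimal sketch (array layout (H, W, 3) is an assumption; the paper's visualization pipeline is not reproduced):

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Red/cyan anaglyph from a stereoscopic image pair: red channel from the
    left view, green and blue channels from the right view."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out
```

Viewed through red/blue glasses, each eye then sees only its own view, recreating the depth impression.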
NASA Astrophysics Data System (ADS)
Zhou, Renjie; Jin, Di; Yaqoob, Zahid; So, Peter T. C.
2017-02-01
Due to their large number of mirrors, high patterning speed, low cost, and compactness, digital micromirror devices (DMDs) have been used extensively in biomedical imaging systems. Recently, DMDs have been brought to the quantitative phase microscopy (QPM) field to achieve synthetic-aperture imaging and tomographic imaging. Last year, our group demonstrated using a DMD for QPM, where the phase retrieval is based on a recently developed Fourier ptychography algorithm. In our previous system, the illumination angle was varied by coding the aperture plane of the illumination system, which makes inefficient use of the laser power. In our new DMD-based QPM system, we use Lee holograms, conjugated to the sample plane, to change the illumination angles with much higher power efficiency. Multiple-angle illumination can also be achieved with this method. With this versatile system, we can achieve FPM-based high-resolution phase imaging with 250 nm lateral resolution by the Rayleigh criterion. Due to the use of a powerful laser, the imaging speed is limited only by the camera acquisition speed. With a fast camera, we expect to achieve close to 100 fps phase imaging, a speed not yet achieved in current FPM imaging systems. By adding a reference beam, we also expect to achieve synthetic-aperture imaging while directly measuring the phase of the sample fields. This would reduce the phase-retrieval processing time to allow for real-time imaging applications in the future.
Ahmad, M D; Biggs, T; Turral, H; Scott, C A
2006-01-01
Evapotranspiration (ET) from irrigated land is one of the most useful indicators to explain whether the water is used as "intended". In this study, the Surface Energy Balance Algorithm for Land (SEBAL) was used to compute actual ET from a Landsat7 image of December 29, 2000 for diverse land use in the Krishna Basin in India. SEBAL ETa varies between 0 and 4.7 mm per day over the image and was quantified for identified land use classes. Seasonal/annual comparison of ETa from different land uses requires time series images, processed by SEBAL. In this study, the Landsat-derived snapshot SEBAL ETa result was interpreted using the cropping calendar and time series analysis of MODIS imagery. The wastewater irrigated area in the basin has the highest ETa in the image, partly due to its advanced growth stage compared to groundwater-irrigated rice. Shrub and forests in the senescence phase have similar ETa to vegetable/cash crops, and ETa from grasslands is a low 0.8 mm per day after the end of the monsoon. The results indicate that wastewater irrigation of fodder and rice is sufficient to meet crop water demand but there appears to be deficit irrigation of rice using groundwater.
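SEBAL ultimately obtains latent heat flux as the residual of the surface energy balance and converts it to a water depth. A sketch of just that closing step (the fluxes here are placeholder values; the full algorithm derives the sensible heat flux H from radiometric temperature, roughness, and anchor pixels, which is not reproduced):

```python
def daily_et_mm(net_radiation, soil_heat_flux, sensible_heat_flux,
                latent_heat_vaporization=2.45e6):
    """Latent heat flux as the energy-balance residual LE = Rn - G - H
    (all in W/m^2), converted to an equivalent evaporation depth in mm/day.
    Uses 1 kg/m^2 of water == 1 mm; lambda in J/kg."""
    le = net_radiation - soil_heat_flux - sensible_heat_flux
    return le / latent_heat_vaporization * 86400.0
```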
Real-time clinically oriented array-based in vivo combined photoacoustic and power Doppler imaging
NASA Astrophysics Data System (ADS)
Harrison, Tyler; Jeffery, Dean; Wiebe, Edward; Zemp, Roger J.
2014-03-01
Photoacoustic imaging has great potential for identifying vascular regions for clinical imaging. In addition to assessing angiogenesis in cancers, there are many other disease processes that result in increased vascularity that present novel targets for photoacoustic imaging. Doppler imaging can provide good localization of large vessels, but poor imaging of small or low flow speed vessels, and is susceptible to motion artifacts. Photoacoustic imaging can provide visualization of small vessels, but due to the filtering effects of ultrasound transducers, only shows the edges of large vessels. Thus, we have combined photoacoustic imaging with ultrasound power Doppler to provide contrast-agent-free vascular imaging. We use a research-oriented ultrasound array system to provide interlaced ultrasound, Doppler, and photoacoustic imaging. This system features real-time display of all three modalities with adjustable persistence, rejection, and compression. For ease of use in a clinical setting, display of each mode can be disabled. We verify the ability of this system to identify vessels with varying flow speeds using receiver operating characteristic curves, and find that as flow speed falls, photoacoustic imaging becomes a much better method for identifying blood vessels. We also present several in vivo images of the thyroid and several synovial joints to assess the practicality of this imaging for clinical applications.
Hierarchical process memory: memory as an integral component of information processing
Hasson, Uri; Chen, Janice; Honey, Christopher J.
2015-01-01
Models of working memory commonly focus on how information is encoded into and retrieved from storage at specific moments. However, in the majority of real-life processes, past information is used continuously to process incoming information across multiple timescales. Considering single unit, electrocorticography, and functional imaging data, we argue that (i) virtually all cortical circuits can accumulate information over time, and (ii) the timescales of accumulation vary hierarchically, from early sensory areas with short processing timescales (tens to hundreds of milliseconds) to higher-order areas with long processing timescales (many seconds to minutes). In this hierarchical systems perspective, memory is not restricted to a few localized stores, but is intrinsic to information processing that unfolds throughout the brain on multiple timescales. "The present contains nothing more than the past, and what is found in the effect was already in the cause." (Henri L. Bergson) PMID: 25980649
Crott, Ralph; Lawson, Georges; Nollevaux, Marie-Cécile; Castiaux, Annick; Krug, Bruno
2016-09-01
Head and neck cancer (HNC) is predominantly a locoregional disease. Sentinel lymph node (SLN) biopsy offers a minimally invasive means of accurately staging the neck. Value in healthcare is determined by both outcomes and the costs associated with achieving them. Time-driven activity-based costing (TDABC) may offer more precise estimates of the true cost. Process maps were developed for the nuclear medicine, operating room and pathology care phases. TDABC estimates the costs by combining information about the process with the unit cost of each resource used. Resource utilization is based on observation of care and staff interviews. Unit costs are calculated as a capacity cost rate, measured in Euros/min (2014), for each resource consumed. Multiplying together the unit costs and resource quantities and summing across all resources used produces the average cost for each phase of care. Three time equations with six different scenarios were modeled based on the type of camera, the number of SLNs and the type of staining used. Total times for the different SLN scenarios vary between 284 and 307 min, with total costs between 2794€ and 3541€. The unit costs vary between 788€/h for intraoperative evaluation with a gamma probe and 889€/h for preoperative imaging with a SPECT/CT. The unit costs for the lymphadenectomy and the pathological examination are 560€/h and 713€/h, respectively. A 10 % increase in time per individual activity generates only a 1 % change in the total cost. TDABC evaluates the cost of SLN biopsy in HNC. The total cost across all phases varied between 2761€ and 3744€ per standard case.
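The TDABC arithmetic itself is a sum of (capacity cost rate × observed duration) over the activities in the process map. A minimal sketch (the rates and durations in the test are placeholders, not the paper's measured values):

```python
def tdabc_cost(activities):
    """Time-driven activity-based cost of one care episode.
    activities: iterable of (capacity_cost_rate_eur_per_min, duration_min)
    pairs, one per resource-consuming activity in the process map."""
    return sum(rate * minutes for rate, minutes in activities)
```

Time equations then make the durations functions of scenario parameters (camera type, number of SLNs, staining), which is why a 10 % change in one activity's time moves the total cost only slightly.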
Wavelet domain image restoration with adaptive edge-preserving regularization.
Belge, M; Kilmer, M E; Miller, E L
2000-01-01
In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.
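As a rough illustration of wavelet-domain regularization whose strength varies by subband, here is a single-level Haar sketch in NumPy that applies a different soft threshold to each orientation subband. The threshold values are arbitrary placeholders, and this is far simpler than the paper's half-quadratic, locally adaptive scheme:

```python
import numpy as np

def haar2d(x):
    """One level of the 2D Haar transform: LL, LH, HL, HH subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d = np.empty_like(a)
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((2 * a.shape[0], a.shape[1]))
    x[0::2, :], x[1::2, :] = a + d, a - d
    return x

def denoise(x, lam=(0.05, 0.05, 0.1)):
    """Soft-threshold each orientation subband with its own weight
    (illustrative orientation-varying regularization, not the paper's)."""
    soft = lambda w, t: np.sign(w) * np.maximum(np.abs(w) - t, 0.0)
    ll, lh, hl, hh = haar2d(x)
    return ihaar2d(ll, soft(lh, lam[0]), soft(hl, lam[1]), soft(hh, lam[2]))
```

Because the transform is orthogonal up to scaling, zero thresholds reproduce the input exactly; increasing a subband's threshold smooths only features of that orientation, which is the sense in which the regularization is orientation-varying.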
FPGA implementation of image dehazing algorithm for real time applications
NASA Astrophysics Data System (ADS)
Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.
2017-09-01
Weather degradation such as haze, fog, mist, etc. severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem nontrivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, intelligent transportation systems, etc. However, these applications require low latency of the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein dark channel and airlight are estimated in the first stage, while the transmission map and intensity restoration are computed in the subsequent stages. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx zc702 development board, which contains an Artix7 equivalent Field Programmable Gate Array (FPGA) and an ARM Cortex A9 dual core processor. Additionally, a high definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second for an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K_BRAM, 97 DSP_48, 6508 FFs and 8159 LUTs.
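The underlying dark channel prior computation can be sketched in a few lines of NumPy. This is a naive, unoptimized reading of the standard algorithm (dark channel, airlight from the brightest dark-channel pixels, transmission, recovery), not the FPGA-ready modification described above; omega, t0 and the patch size are the usual defaults:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Min over color channels, then a min filter over a local patch."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    h, w = dc.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Recover J from hazy I via I = J*t + A*(1 - t)."""
    dc = dark_channel(img, patch)
    # Airlight A: mean color of the brightest 0.1% of dark-channel pixels.
    n = max(1, dc.size // 1000)
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0)
    # Transmission estimate, clamped below by t0 to avoid amplifying noise.
    t = np.maximum(1.0 - omega * dark_channel(img / A, patch), t0)
    return (img - A) / t[..., None] + A
```

The quadratic-time min filter here is exactly the part a hardware pipeline restructures; a streaming two-pass separable minimum is the usual FPGA-friendly replacement.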
Radar Imaging Using The Wigner-Ville Distribution
NASA Astrophysics Data System (ADS)
Boashash, Boualem; Kenny, Owen P.; Whitehouse, Harper J.
1989-12-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. This paper first discusses the radar equation in terms of the time-frequency representation of the signal received from a radar system. It then presents a method of tomographic reconstruction for time-frequency images to estimate the scattering function of the aircraft. An optical architecture is then discussed for the real-time implementation of the analysis method based on the WVD.
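For reference, a discrete Wigner-Ville distribution can be computed directly from its definition by taking an FFT of the instantaneous autocorrelation at each time sample. A minimal NumPy sketch (note the factor-of-two frequency scaling characteristic of the discrete WVD: a tone at normalized frequency f0 peaks at bin 2*f0*N):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of a (preferably analytic) signal.

    W[n, k] = sum_m x[n+m] * conj(x[n-m]) * exp(-j*2*pi*k*m/N),
    evaluated with an FFT over the lag index m at each time n.
    """
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)          # lags available at this time
        kernel = np.zeros(N, dtype=complex)
        for m in range(-taumax, taumax + 1):
            kernel[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(kernel).real      # WVD of an analytic signal is real
    return W
```

For a pure complex exponential the distribution concentrates on a single horizontal line in the time-frequency plane, which is the property the tomographic reconstruction above exploits.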
MAGNETIC FLUX SUPPLEMENT TO CORONAL BRIGHT POINTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mou, Chaozhou; Huang, Zhenghua; Xia, Lidong
Coronal bright points (BPs) are associated with magnetic bipolar features (MBFs) and magnetic cancellation. Here we investigate how BP-associated MBFs form and how the consequent magnetic cancellation occurs. We analyze longitudinal magnetograms from the Helioseismic and Magnetic Imager to investigate the photospheric magnetic flux evolution of 70 BPs. From images taken in the 193 Å passband of the Atmospheric Imaging Assembly (AIA) we determine that the BPs’ lifetimes vary from 2.7 to 58.8 hr. The formation of the BP MBFs is found to involve three processes, namely, emergence, convergence, and local coalescence of the magnetic fluxes. The formation of an MBF can involve more than one of these processes. Out of the 70 cases, flux emergence is the main process of an MBF buildup of 52 BPs, mainly convergence is seen in 28, and 14 cases are associated with local coalescence. For MBFs formed by bipolar emergence, the time difference between the flux emergence and the BP appearance in the AIA 193 Å passband varies from 0.1 to 3.2 hr with an average of 1.3 hr. While magnetic cancellation is found in all 70 BPs, it can occur in three different ways: (I) between an MBF and small weak magnetic features (in 33 BPs); (II) within an MBF with the two polarities moving toward each other from a large distance (34 BPs); (III) within an MBF whose two main polarities emerge in the same place simultaneously (3 BPs). While an MBF builds up the skeleton of a BP, we find that the magnetic activities responsible for the BP heating may involve small weak fields.
NASA Astrophysics Data System (ADS)
Johnson, T.; Hammond, G. E.; Versteeg, R. J.; Zachara, J. M.
2013-12-01
The Hanford 300 Area, located adjacent to the Columbia River in south-central Washington, USA, is the site of former research and uranium fuel rod fabrication facilities. Waste disposal practices at the site included discharging between 33 and 59 metric tons of uranium over a 40-year period into shallow infiltration galleries, resulting in persistent uranium contamination within the vadose and saturated zones. Uranium transport from the vadose zone to the saturated zone is intimately linked with water table fluctuations and river water intrusion driven by upstream dam operations. As river stage increases, the water table rises into the vadose zone and mobilizes contaminated pore water. At the same time, river water moves inland into the aquifer, and river water chemistry facilitates further mobilization by enabling uranium desorption from contaminated sediments. As river stage decreases, flow moves toward the river, ultimately discharging contaminated water at the river bed. River water specific conductance at the 300 Area varies around 0.018 S/m whereas groundwater specific conductance varies around 0.043 S/m. This contrast provides the opportunity to monitor groundwater/river water interaction by imaging changes in bulk conductivity within the saturated zone using time-lapse electrical resistivity tomography. Previous efforts have demonstrated this capability, but have also shown that disconnecting regularization constraints at the water table is critical for obtaining meaningful time-lapse images. Because the water table moves with time, the regularization constraints must also be transient to accommodate the water table boundary. This was previously accomplished with 2D time-lapse ERT imaging by using a finely discretized computational mesh within the water table interval, enabling a relatively smooth water table to be defined without modifying the mesh. However, in 3D this approach requires a computational mesh with an untenable number of elements.
In order to accommodate the water table boundary in 3D, we propose a time-lapse warping mesh inversion, whereby mesh elements that traverse the water table are modified to generate a smooth boundary at the known water table position, enabling regularization constraints to be accurately disconnected across the water table boundary at a given time. We demonstrate the approach using a surface ERT array installed adjacent to the Columbia River at the 300 Area, consisting of 352 electrodes and covering an area of approximately 350 m x 350 m. Using autonomous data collection, transmission, and filtering tools coupled with high performance computing resources, the 4D imaging process is automated and executed in real time. Each time-lapse survey consists of approximately 40,000 measurements, and 4 surveys are collected and processed per day from April 1, 2013 to September 30, 2013. The data are inverted on an unstructured tetrahedral mesh that honors LiDAR-based surface topography and comprises approximately 905,000 elements. Imaging results show the dynamic 4D extent of river water intrusion, and are validated with well-based fluid conductivity measurements at each monitoring well within the imaging domain.
Kim, Kio; Habas, Piotr A.; Rajagopalan, Vidya; Scott, Julia A.; Corbett-Detig, James M.; Rousseau, Francois; Barkovich, A. James; Glenn, Orit A.; Studholme, Colin
2012-01-01
A common solution to clinical MR imaging in the presence of large anatomical motion is to use fast multi-slice 2D studies to reduce slice acquisition time and provide clinically usable slice data. Recently, techniques have been developed which retrospectively correct large scale 3D motion between individual slices allowing the formation of a geometrically correct 3D volume from the multiple slice stacks. One challenge, however, in the final reconstruction process is the possibility of varying intensity bias in the slice data, typically due to the motion of the anatomy relative to imaging coils. As a result, slices which cover the same region of anatomy at different times may exhibit different sensitivity. This bias field inconsistency can induce artifacts in the final 3D reconstruction that can impact both clinical interpretation of key tissue boundaries and the automated analysis of the data. Here we describe a framework to estimate and correct the bias field inconsistency in each slice collectively across all motion corrupted image slices. Experiments using synthetic and clinical data show that the proposed method reduces intensity variability in tissues and improves the distinction between key tissue types. PMID:21511561
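A heavily simplified version of the slice-consistency idea, reducing each slice's bias to a single multiplicative gain estimated jointly against a consensus volume, can be sketched as follows. The paper estimates spatially varying bias fields across motion-corrected slices, so this is only a toy analogue under the assumption of one gain per slice and pre-aligned data:

```python
import numpy as np

def estimate_slice_gains(slices, n_iter=10):
    """Alternate between a gain-corrected consensus volume and per-slice
    least-squares gain fits. `slices` is an (S, H, W) stack assumed to be
    already motion-aligned; returns one multiplicative gain per slice,
    normalized to mean 1 (the overall scale is unobservable)."""
    slices = np.asarray(slices, dtype=float)
    gains = np.ones(len(slices))
    for _ in range(n_iter):
        ref = (slices / gains[:, None, None]).mean(axis=0)  # consensus image
        denom = (ref * ref).sum()
        gains = np.array([(s * ref).sum() / denom for s in slices])
        gains /= gains.mean()                               # fix scale ambiguity
    return gains
```

Dividing each slice by its estimated gain removes the inter-slice intensity inconsistency; the full method generalizes the scalar gain to a smooth field over each slice.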
Hybrid space-airborne bistatic SAR geometric resolutions
NASA Astrophysics Data System (ADS)
Moccia, Antonio; Renga, Alfredo
2009-09-01
Performance analysis of Bistatic Synthetic Aperture Radar (SAR) characterized by arbitrary geometric configurations is usually complex and time-consuming, since the system impulse response has to be evaluated by bistatic SAR processing. This approach does not allow derivation of general equations describing the behaviour of image resolutions as the observation geometry varies. It is well known that for an arbitrary bistatic SAR configuration the range and azimuth directions are not perpendicular, but the capability to produce an image is not prevented, as it depends only on the possibility of generating image pixels from time delay and Doppler measurements. However, even if range and Doppler resolutions are individually good, bistatic SAR geometries can exist in which imaging capabilities are very poor, when range and Doppler directions become locally parallel. The present paper aims to derive analytical tools for calculating the geometric resolutions of arbitrary bistatic SAR configurations. The method has been applied to a hybrid bistatic Synthetic Aperture Radar formed by a spaceborne illuminator and a receiving-only airborne forward-looking Synthetic Aperture Radar (F-SAR). It can take advantage of the spaceborne illuminator to dodge the limitations of monostatic F-SAR. Basic modeling and best illumination conditions are detailed in the paper.
Brodsky, Ethan K.; Klaers, Jessica L.; Samsonov, Alexey A.; Kijowski, Richard; Block, Walter F.
2014-01-01
Non-Cartesian imaging sequences and navigational methods can be more sensitive to scanner imperfections that have little impact on conventional clinical sequences, an issue which has repeatedly complicated the commercialization of these techniques by frustrating transitions to multi-center evaluations. One such imperfection is phase errors caused by resonant frequency shifts from eddy currents induced in the cryostat by time-varying gradients, a phenomenon known as B0 eddy currents. These phase errors can have a substantial impact on sequences that use ramp sampling, bipolar gradients, and readouts at varying azimuthal angles. We present a method for measuring and correcting phase errors from B0 eddy currents and examine the results on two different scanner models. This technique yields significant improvements in image quality for high-resolution joint imaging on certain scanners. The results suggest that correction of short time B0 eddy currents in manufacturer provided service routines would simplify adoption of non-Cartesian sampling methods. PMID:22488532
Effects of Convective Transport of Solute and Impurities on Defect-Causing Kinetics Instabilities
NASA Technical Reports Server (NTRS)
Vekilov, Peter G.; Higginbotham, Henry Keith (Technical Monitor)
2001-01-01
For in-situ studies of the formation and evolution of step patterns during the growth of protein crystals, we have designed and assembled an experimental setup based on Michelson interferometry with the surface of the growing protein crystal as one of the reflective surfaces. The crystallization part of the device allows optical monitoring of a face of a crystal growing at a temperature stable to within 0.05 °C in a developed solution flow of controlled direction and speed. The reference arm of the interferometer contains a liquid-crystal element that allows controlled shifts of the phase of the interferograms. We employ an image processing algorithm which combines five images with a pi/2 phase difference between each pair of images. The images are transferred to a computer by a camera capable of capturing 6-8 frames per second. The device allows collection of data regarding growth over a relatively large area (approximately 0.3 sq. mm) in-situ and in real time during growth. The estimated depth resolution of the phase-shifting interferometry is about 100 Å. The lateral resolution, depending on the zoom ratio, varies between 0.3 and 0.6 micrometers. We have now collected quantitative results on the onset, initial stages and development of instabilities in moving step trains on vicinal crystal surfaces at varying supersaturation, position on the facet, crystal size and temperature with the proteins ferritin, apoferritin and thaumatin. Comparisons with theory, especially with the AFM results on the molecular level processes, see below, allow tests of the rationale for the effects of convective flows and, as a particular case, the lack thereof, on step bunching.
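The five-frame, pi/2-step combination described above is commonly implemented with the Hariharan formula; assuming that standard algorithm (the text does not name one), the wrapped phase follows from simple frame arithmetic:

```python
import numpy as np

def phase_from_five_frames(i1, i2, i3, i4, i5):
    """Hariharan five-frame phase estimate.

    Frames are I_k = A + B*cos(phi + d_k) with phase shifts
    d = (-pi, -pi/2, 0, +pi/2, +pi). Then I2 - I4 = 2*B*sin(phi) and
    2*I3 - I1 - I5 = 4*B*cos(phi), so the wrapped phase in (-pi, pi] is:
    """
    return np.arctan2(2.0 * (i2 - i4), 2.0 * i3 - i1 - i5)
```

The arrays can be whole camera frames, giving a phase map per pixel; surface height then follows from the wrapped (and, if needed, unwrapped) phase via h = phi * lambda / (4*pi) for a reflection setup.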
LobeFinder: A Convex Hull-Based Method for Quantitative Boundary Analyses of Lobed Plant Cells
Wu, Tzu-Ching; Belteton, Samuel A.; Szymanski, Daniel B.; Umulis, David M.
2016-01-01
Dicot leaves are composed of a heterogeneous mosaic of jigsaw puzzle piece-shaped pavement cells that vary greatly in size and the complexity of their shape. Given the importance of the epidermis and this particular cell type for leaf expansion, there is a strong need to understand how pavement cells morph from a simple polyhedral shape into highly lobed and interdigitated cells. At present, it is still unclear how and when the patterns of lobing are initiated in pavement cells, and one major technological bottleneck to addressing the problem is the lack of a robust and objective methodology to identify and track lobing events during the transition from simple cell geometry to lobed cells. We developed a convex hull-based algorithm termed LobeFinder to identify lobes, quantify geometric properties, and create a useful graphical output of cell coordinates for further analysis. The algorithm was validated against manually curated images of pavement cells of widely varying sizes and shapes. The ability to objectively count and detect new lobe initiation events provides an improved quantitative framework to analyze mutant phenotypes, detect symmetry-breaking events in time-lapse image data, and quantify the time-dependent correlation between cell shape change and intracellular factors that may play a role in the morphogenesis process. PMID:27288363
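The core convex-hull idea can be sketched compactly: compute the hull of the cell boundary, then treat each maximal run of consecutive boundary points that touch the hull as one lobe tip. This is a rough proxy for LobeFinder-style lobe counting, not the published algorithm:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull over (x, y) points."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]

    return half(pts) + half(pts[::-1])

def count_lobes(boundary):
    """Count circular runs of boundary points lying on the hull; each run is
    taken as one lobe tip. (A fully convex outline has no transitions and
    returns 0, so a convex cell counts as unlobed here.)"""
    hull = set(convex_hull(boundary))
    on = [tuple(p) in hull for p in boundary]
    return sum(1 for i in range(len(on)) if on[i] and not on[i - 1])
```

A four-pointed star outline, for example, yields four isolated hull contacts and hence a count of four lobes; tracking this count across a time-lapse sequence is one way to flag new lobe-initiation events.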
NASA Astrophysics Data System (ADS)
Cardille, J. A.; Crowley, M.; Fortin, J. A.; Lee, J.; Perez, E.; Sleeter, B. M.; Thau, D.
2016-12-01
With the opening of the Landsat archive, researchers have a vast new data source teeming with imagery and potential. Beyond Landsat, data from other sensors is newly available as well: these include ALOS/PALSAR, Sentinel-1 and -2, MERIS, and many more. Google Earth Engine, developed to organize and provide analysis tools for these immense data sets, is an ideal platform for researchers trying to sift through huge image stacks. It offers nearly unlimited processing power and storage with a straightforward programming interface. Yet labeling land-cover change through time remains challenging given the current state of the art for interpreting remote sensing image sequences. Moreover, combining data from very different image platforms remains quite difficult. To address these challenges, we developed the BULC algorithm (Bayesian Updating of Land Cover), designed for the continuous updating of land-cover classifications through time in large data sets. The algorithm ingests data from any of the wide variety of earth-resources sensors; it maintains a running estimate of land-cover probabilities and the most probable class at all time points along a sequence of events. Here we compare BULC results from two study sites that witnessed considerable forest change in the last 40 years: the Pacific Northwest of the United States and the Mato Grosso region of Brazil. In Brazil, we incorporated rough classifications from more than 100 images of varying quality, mixing imagery from more than 10 different sensors. In the Pacific Northwest, we used BULC to identify forest changes due to logging and urbanization from 1973 to the present. Both regions had classification sequences that were better than many of the component days, effectively ignoring clouds and other unwanted noise while fusing the information contained on several platforms. 
As we leave remote sensing's data-poor era and enter a period with multiple looks at Earth's surface from multiple sensors over a short period of time, the BULC algorithm can help to sift through images of varying quality in Google Earth Engine to extract the most useful information for mapping the state and history of Earth's land cover.
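The running estimate of land-cover probabilities can be written as a standard Bayesian update in which each day's (possibly noisy) classification contributes a likelihood drawn from an accuracy table. This is a generic sketch of that recursion; the exact BULC internals may differ:

```python
import numpy as np

def bulc_update(prior, confusion, observed_class):
    """One Bayesian update of per-pixel class probabilities.

    prior:          (n_classes,) current probability of each land-cover class
    confusion:      (n_classes, n_classes) rows give P(classifier says j | true i),
                    e.g. a row-normalized accuracy/confusion table
    observed_class: class index reported by today's classification
    """
    likelihood = confusion[:, observed_class]
    post = prior * likelihood
    return post / post.sum()

# A pixel labeled "forest" (class 0) twice in a row by an 80%-accurate
# classifier, starting from an uninformative prior:
conf = np.array([[0.8, 0.2],
                 [0.2, 0.8]])
p = np.array([0.5, 0.5])
for _ in range(2):
    p = bulc_update(p, conf, observed_class=0)
```

Because low-accuracy inputs have flat likelihood rows, they barely move the posterior, which is how a recursion like this can ignore cloudy or poor-quality days while still fusing many sensors.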
NASA Astrophysics Data System (ADS)
Cardille, J. A.
2015-12-01
With the opening of the Landsat archive, researchers have a vast new data source teeming with imagery and potential. Beyond Landsat, data from other sensors is newly available as well: these include ALOS/PALSAR, Sentinel-1 and -2, MERIS, and many more. Google Earth Engine, developed to organize and provide analysis tools for these immense data sets, is an ideal platform for researchers trying to sift through huge image stacks. It offers nearly unlimited processing power and storage with a straightforward programming interface. Yet labeling forest change through time remains challenging given the current state of the art for interpreting remote sensing image sequences. Moreover, combining data from very different image platforms remains quite difficult. To address these challenges, we developed the BULC algorithm (Bayesian Updating of Land Cover), designed for the continuous updating of land-cover classifications through time in large data sets. The algorithm ingests data from any of the wide variety of earth-resources sensors; it maintains a running estimate of land-cover probabilities and the most probable class at all time points along a sequence of events. Here we compare BULC results from two study sites that witnessed considerable forest change in the last 40 years: the Pacific Northwest of the United States and the Mato Grosso region of Brazil. In Brazil, we incorporated rough classifications from more than 100 images of varying quality, mixing imagery from more than 10 different sensors. In the Pacific Northwest, we used BULC to identify forest changes due to logging and urbanization from 1973 to the present. Both regions had classification sequences that were better than many of the component days, effectively ignoring clouds and other unwanted signal while fusing the information contained on several platforms. 
As we leave remote sensing's data-poor era and enter a period with multiple looks at Earth's surface from multiple sensors over a short period of time, this algorithm may help to sift through images of varying quality in Google Earth Engine to extract the most useful information for mapping.
Ultrasonic imaging of textured alumina
NASA Technical Reports Server (NTRS)
Stang, David B.; Salem, Jonathan A.; Generazio, Edward R.
1989-01-01
Ultrasonic images representing the bulk attenuation and velocity of a set of alumina samples were obtained by a pulse-echo contact scanning technique. The samples were taken from larger bodies that were chemically similar but were processed by extrusion or isostatic processing. The crack growth resistance and fracture toughness of the larger bodies were found to vary with processing method and test orientation. The results presented here demonstrate that differences in texture that contribute to variations in structural performance can be revealed by analytic ultrasonic techniques.
Magellan radar to reveal secrets of enshrouded Venus
NASA Technical Reports Server (NTRS)
Saunders, R. Stephen
1990-01-01
Imaging Venus with a synthetic aperture radar (SAR) with 70 percent global coverage at 1-km optical line-pair resolution to provide a detailed global characterization of the volcanic landforms on Venus by an integration of image data with altimetry is discussed. The Magellan radar system uses navigation predictions to preset the radar data collection parameters. The data are collected in such a way as to preserve the Doppler signature of surface elements and later they are transmitted to the earth for processing into high-resolution radar images. To maintain high accuracy, a complex on-board filter algorithm allows the altitude control logic to respond only to a narrow range of expected photon intensity levels and only to signals that occur within a small predicted interval of time. Each mapping pass images a swath of the planet that varies in width from 20 to 25 km. Since the orbital plane of the spacecraft remains fixed in inertial space, the slow rotation of Venus continually brings new areas into view of the spacecraft.
Distance-Dependent Multimodal Image Registration for Agriculture Tasks
Berenstein, Ron; Hočevar, Marko; Godeša, Tone; Edan, Yael; Ben-Shahar, Ohad
2015-01-01
Image registration is the process of aligning two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. This research focuses on developing a practical method for automatic image registration for agricultural systems that use multimodal sensory systems and operate in natural environments. While not limited to any particular modalities, here we focus on systems with visual and thermal sensory inputs. Our approach is based on pre-calibrating a distance-dependent transformation matrix (DDTM) between the sensors, and representing it in a compact way by regressing the distance-dependent coefficients as distance-dependent functions. The DDTM is measured by calculating a projective transformation matrix for varying distances between the sensors and possible targets. To do so, we designed a unique experimental setup including unique Artificial Control Points (ACPs) and their detection algorithms for the two sensors. We demonstrate the utility of our approach using different experiments and evaluation criteria. PMID:26308000
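The compact representation described above amounts to fitting each homography coefficient as a function of distance. A minimal NumPy sketch under the assumption of a simple polynomial regression per coefficient (the paper's choice of regression function may differ):

```python
import numpy as np

def fit_ddtm(distances, homographies, degree=2):
    """Build a distance-dependent transformation matrix H(d).

    distances:    calibration distances d_1..d_n
    homographies: list of 3x3 projective matrices measured at those distances
    Each of the 9 coefficients is regressed against distance with a
    polynomial; evaluating the fits at a new distance yields H(d).
    """
    H = np.stack([np.asarray(h, dtype=float).ravel() for h in homographies])
    coeffs = [np.polyfit(distances, H[:, k], degree) for k in range(9)]

    def ddtm(d):
        Hd = np.array([np.polyval(c, d) for c in coeffs]).reshape(3, 3)
        return Hd / Hd[2, 2]          # homographies are defined up to scale
    return ddtm
```

Once calibrated, `ddtm(d)` gives the visual-to-thermal mapping at any target distance in the calibrated range without re-running feature matching in the field.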
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)
1991-01-01
A method and apparatus for detecting and tracking moving objects in a noise environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information commonly cancel each other out through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered. The observable time of the output signal is roughly equal to the formation time of the grating. If the optical path changing time is slower than the formation time, the change of optical path becomes unobservable, because the index grating can follow the change. Thus, objects traveling at speeds which result in a path changing time which is slower than the formation time are not observable and do not clutter the output image view.
Distinct Contributions of the Magnocellular and Parvocellular Visual Streams to Perceptual Selection
Denison, Rachel N.; Silver, Michael A.
2014-01-01
During binocular rivalry, conflicting images presented to the two eyes compete for perceptual dominance, but the neural basis of this competition is disputed. In interocular switch (IOS) rivalry, rival images periodically exchanged between the two eyes generate one of two types of perceptual alternation: 1) a fast, regular alternation between the images that is time-locked to the stimulus switches and has been proposed to arise from competition at lower levels of the visual processing hierarchy, or 2) a slow, irregular alternation spanning multiple stimulus switches that has been associated with higher levels of the visual system. The existence of these two types of perceptual alternation has been influential in establishing the view that rivalry may be resolved at multiple hierarchical levels of the visual system. We varied the spatial, temporal, and luminance properties of IOS rivalry gratings and found, instead, an association between fast, regular perceptual alternations and processing by the magnocellular stream and between slow, irregular alternations and processing by the parvocellular stream. The magnocellular and parvocellular streams are two early visual pathways that are specialized for the processing of motion and form, respectively. These results provide a new framework for understanding the neural substrates of binocular rivalry that emphasizes the importance of parallel visual processing streams, and not only hierarchical organization, in the perceptual resolution of ambiguities in the visual environment. PMID:21861685
Contrast-Enhanced Magnetic Resonance Imaging of Gastric Emptying and Motility in Rats.
Lu, Kun-Han; Cao, Jiayue; Oleson, Steven Thomas; Powley, Terry L; Liu, Zhongming
2017-11-01
The assessment of gastric emptying and motility in humans and animals typically requires radioactive imaging or invasive measurements. Here, we developed a robust strategy to image and characterize gastric emptying and motility in rats based on contrast-enhanced magnetic resonance imaging (MRI) and computer-assisted image processing. The animals were trained to naturally consume a gadolinium-labeled dietgel while bypassing any need for oral gavage. Following this test meal, the animals were scanned under low-dose anesthesia for high-resolution T1-weighted MRI in 7 Tesla, visualizing the time-varying distribution of the meal with greatly enhanced contrast against non-gastrointestinal (GI) tissues. Such contrast-enhanced images not only depicted the gastric anatomy, but also captured and quantified stomach emptying, intestinal filling, antral contraction, and intestinal absorption with fully automated image processing. Over four postingestion hours, the stomach emptied by 27%, largely attributed to the emptying of the forestomach rather than the corpus and the antrum, and most notable during the first 30 min. Stomach emptying was accompanied by intestinal filling for the first 2 h, whereas afterward intestinal absorption was observable as cumulative contrast enhancement in the renal medulla. The antral contraction was captured as a peristaltic wave propagating from the proximal to distal antrum. The frequency, velocity, and amplitude of the antral contraction were on average 6.34 ± 0.07 contractions per minute, 0.67 ± 0.01 mm/s, and 30.58 ± 1.03%, respectively. These results demonstrate an optimized MRI-based strategy to assess gastric emptying and motility in healthy rats, paving the way for using this technique to understand GI diseases, or test new therapeutics in rat models.
Image Discrimination Models With Stochastic Channel Selection
NASA Technical Reports Server (NTRS)
Ahumada, Albert J., Jr.; Beard, Bettina L.; Null, Cynthia H. (Technical Monitor)
1995-01-01
Many models of human image processing feature a large fixed number of channels representing cortical units varying in spatial position (visual field direction and eccentricity) and spatial frequency (radial frequency and orientation). The values of these parameters are usually sampled at fixed values selected to ensure adequate overlap given the bandwidth and/or spread parameters, which are usually fixed. Even high levels of overlap do not always ensure that the performance of the model will vary smoothly with image translation or scale changes. Physiological measurements of bandwidth and/or spread parameters result in a broad distribution of estimated parameter values, and the prediction of some psychophysical results is facilitated by the assumption that these parameters also take on a range of values. Selecting a sample of channels from a continuum of channels rather than using a fixed set can make model performance vary smoothly with changes in image position, scale, and orientation. It also facilitates the addition of spatial inhomogeneity, nonlinear feature channels, and focus of attention to channel models.
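A minimal sketch of stochastic channel selection, drawing each channel's parameters from continuous distributions rather than a fixed lattice; all the ranges and distributions below are illustrative placeholders, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_channels(n):
    """Draw a channel sample from a continuum instead of a fixed lattice.

    Position, preferred radial frequency (log-uniform), orientation, and
    bandwidth are all drawn from continuous distributions, so a translated
    or rescaled image cannot align systematically with any fixed grid.
    """
    return {
        "x": rng.uniform(0.0, 1.0, n),              # visual-field position
        "y": rng.uniform(0.0, 1.0, n),
        "freq": np.exp(rng.uniform(np.log(1.0), np.log(16.0), n)),  # cyc/deg
        "orient": rng.uniform(0.0, np.pi, n),       # radians
        "bw": rng.normal(1.0, 0.2, n).clip(0.5),    # octaves, broad spread
    }

ch = sample_channels(500)
```

Each simulated trial would then filter the stimulus with the Gabor-like units these parameters define; redrawing the sample per trial is what smooths performance over position and scale.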
Observing in space and time the ephemeral nucleation of liquid-to-crystal phase transitions.
Yoo, Byung-Kuk; Kwon, Oh-Hoon; Liu, Haihua; Tang, Jau; Zewail, Ahmed H
2015-10-19
The phase transition of crystalline ordering is a general phenomenon, but its evolution in space and time requires microscopic probes for visualization. Here we report direct imaging of the transformation of amorphous titanium dioxide nanofilm, from the liquid state, passing through the nucleation step and finally to the ordered crystal phase. Single-pulse transient diffraction profiles at different times provide the structural transformation and the specific degree of crystallinity (η) in the evolution process. It is found that the temporal behaviour of η exhibits unique 'two-step' dynamics, with a robust 'plateau' that extends over a microsecond; the rate constants vary by two orders of magnitude. Such behaviour reflects the presence of intermediate structure(s) that are the precursor of the ordered crystal state. Theoretically, we extend the well-known Johnson-Mehl-Avrami-Kolmogorov equation, which describes the isothermal process with a stretched-exponential function, but here over the range of times covering the melt-to-crystal transformation.
Recursive time-varying filter banks for subband image coding
NASA Technical Reports Server (NTRS)
Smith, Mark J. T.; Chung, Wilson C.
1992-01-01
Filter banks and wavelet decompositions that employ recursive filters have been considered previously and are recognized for their efficiency in partitioning the frequency spectrum. This paper presents an analysis of a new infinite impulse response (IIR) filter bank in which these computationally efficient filters may be changed adaptively in response to the input. The filter bank is presented and discussed in the context of finite-support signals with the intended application in subband image coding. In the absence of quantization errors, exact reconstruction can be achieved and by the proper choice of an adaptation scheme, it is shown that IIR time-varying filter banks can yield improvement over conventional ones.
Deep architecture neural network-based real-time image processing for image-guided radiotherapy.
Mori, Shinichiro
2017-08-01
To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was applied to the contrast-limited adaptive histogram equalization (CLAHE) method of image processing. Network models were trained to keep the quality of the output image close to that of the ground-truth image from the input image without image processing. For image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality. However, this did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. Use of our suggested network achieved real-time image processing for contrast enhancement and image denoising by the use of a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Ahmad, Sahar; Khan, Muhammad Faisal
2015-12-01
In this paper, we present a new non-rigid image registration method that imposes a topology preservation constraint on the deformation. We propose to incorporate the time-varying elasticity model into the deformable image matching procedure and constrain the Jacobian determinant of the transformation over the entire image domain. The motion of elastic bodies is governed by a hyperbolic partial differential equation, generally termed the elastodynamics wave equation, which we propose to use as a deformation model. We carried out clinical image registration experiments on 3D magnetic resonance brain scans from the IBSR database. The results of the proposed registration approach in terms of Kappa index and relative overlap computed over the subcortical structures were compared against the existing topology-preserving non-rigid image registration methods and a non-topology-preserving variant of our proposed registration scheme. The Jacobian determinant maps obtained with our proposed registration method were qualitatively and quantitatively analyzed. The results demonstrated that the proposed scheme provides good registration accuracy with smooth transformations, thereby guaranteeing the preservation of topology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tensor-based dynamic reconstruction method for electrical capacitance tomography
NASA Astrophysics Data System (ADS)
Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.
2017-03-01
Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
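The low-rank-plus-sparse split described above can be illustrated with a simplified alternating-projection sketch (hard rank truncation plus soft thresholding on the matricized frame stack); the paper's Tikhonov-regularized cost function and the ECT forward model are not reproduced here:

```python
import numpy as np

def lowrank_sparse_split(frames, rank=2, thresh=0.1, iters=20):
    """Alternating split of a frame stack into L (low rank) + S (sparse).

    frames: array (T, H, W), matricized to (pixels, T).  L models the
    slowly changing spatial distribution shared across frames; S captures
    the rapidly changing per-frame perturbations.
    """
    T, H, W = frames.shape
    M = frames.reshape(T, -1).T                   # (pixels, time)
    S = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r projection
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - thresh, 0.0)  # soft threshold
    return L.T.reshape(T, H, W), S.T.reshape(T, H, W)

# Toy sequence: a static background plus one bright moving pixel per frame.
T, H, W = 6, 8, 8
bg = np.outer(np.hanning(H), np.hanning(W))
frames = np.tile(bg, (T, 1, 1))
for t in range(T):
    frames[t, t, t] += 5.0
L, S = lowrank_sparse_split(frames, rank=1, thresh=0.5)
```

After a few iterations the shared background lands in L while the frame-specific spikes land in S, which is the qualitative behavior the tensor formulation exploits (the paper works with a third-order tensor rather than this matricized surrogate).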
An approach to real-time magnetic resonance imaging for speech production
NASA Astrophysics Data System (ADS)
Narayanan, Shrikanth; Nayak, Krishna; Byrd, Dani; Lee, Sungbok
2003-04-01
Magnetic resonance imaging has served as a valuable tool for studying primarily static postures in speech production. Now, recent improvements in imaging techniques, particularly in temporal resolution, are making it possible to examine the dynamics of vocal tract shaping during speech. Examples include Mady et al. (2001, 2002) (8 images/second, T1 fast gradient echo) and Demolin et al. (2000) (4-5 images/second, ultra fast turbo spin echo sequence). The present study uses a non 2D-FFT acquisition strategy (spiral k-space trajectory) on a GE Signa 1.5T CV/i scanner with a low-flip angle spiral gradient echo originally developed for cardiac imaging [Kerr et al. (1997), Nayak et al. (2001)] with reconstruction rates of 8-10 images/second. The experimental stimuli included English sentences varying the syllable position of /n, r, l/ (spoken by 2 subjects) and Tamil sentences varying among five liquids (spoken by one subject). The imaging parameters were the following: 15 deg flip angle, 20-interleaves, 6.7 ms TR, 1.88 mm resolution over a 20 cm FOV, 5 mm slice thickness, and 2.4 ms spiral readouts. Data show clear real-time movements of the lips, tongue and velum. Sample movies and data analysis strategies will be presented. Segmental durations, positions, and inter-articulator timing can all be quantitatively evaluated. [Work supported by NIH.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, K; Lu, Z; MacMahon, H
Purpose: To investigate the effect of varying system image processing parameters on lung nodule detectability in digital radiography. Methods: An anthropomorphic chest phantom was imaged in the posterior-anterior position using a GE Discovery XR656 digital radiography system. To simulate lung nodules, a polystyrene board with 6.35mm diameter PMMA spheres was placed adjacent to the phantom (into the x-ray path). Due to magnification, the projected simulated nodules had a diameter in the radiographs of approximately 7.5 mm. The images were processed using one of GE’s default chest settings (Factory3) and reprocessed by varying the “Edge” and “Tissue Contrast” processing parameters, which were the two user-configurable parameters for a single edge and contrast enhancement algorithm. For each parameter setting, the nodule signals were calculated by subtracting the chest-only image from the image with simulated nodules. Twenty nodule signals were averaged, Gaussian filtered, and radially averaged in order to generate an approximately noiseless signal. For each processing parameter setting, this noise-free signal and 180 background samples from across the lung were used to estimate ideal observer performance in a signal-known-exactly detection task. Performance was estimated using a channelized Hotelling observer with 10 Laguerre-Gauss channel functions. Results: The “Edge” and “Tissue Contrast” parameters each had an effect on the detectability as calculated by the model observer. The CHO-estimated signal detectability ranged from 2.36 to 2.93 and was highest for “Edge” = 4 and “Tissue Contrast” = −0.15. In general, detectability tended to decrease as “Edge” was increased and as “Tissue Contrast” was increased. A human observer study should be performed to validate the relation to human detection performance. Conclusion: Image processing parameters can affect lung nodule detection performance in radiography. While validation with a human observer study is needed, model observer detectability for common tasks could provide a means for optimizing image processing parameters.
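The detection pipeline in this abstract (Laguerre-Gauss channels feeding a channelized Hotelling observer) can be sketched as follows; the grid size, width parameter `a`, and the Gaussian-noise backgrounds are placeholders, not the study's phantom data:

```python
import numpy as np
from math import factorial

def laguerre_gauss_channels(size, n_channels=10, a=10.0):
    """2-D Laguerre-Gauss channel functions on a size x size grid.

    a is the spatial-width parameter (pixels); the value here is illustrative.
    """
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = 2 * np.pi * (x**2 + y**2) / a**2
    chans = []
    for j in range(n_channels):
        # Laguerre polynomial L_j via its series expansion.
        L = sum((-1)**k * factorial(j) / (factorial(k)**2 * factorial(j - k))
                * r2**k for k in range(j + 1))
        u = np.sqrt(2) / a * np.exp(-r2 / 2) * L
        chans.append(u.ravel())
    return np.array(chans)                        # (n_channels, size*size)

def cho_detectability(signal, backgrounds, channels):
    """SKE detectability d' from a noise-free signal and background samples."""
    v = channels @ backgrounds.T                  # channel outputs per sample
    K = np.cov(v)                                 # channel covariance
    s = channels @ signal.ravel()                 # channel-domain signal
    w = np.linalg.solve(K, s)                     # Hotelling template
    return float(np.sqrt(s @ w))                  # d' = sqrt(s' K^-1 s)

rng = np.random.default_rng(2)
size = 16
sig = np.exp(-(((np.mgrid[:size, :size] - size / 2)**2).sum(0)) / 8.0)
bgs = rng.normal(0.0, 1.0, (180, size * size))   # 180 background ROIs
U = laguerre_gauss_channels(size)
dprime = cho_detectability(sig, bgs, U)
```

With real radiographs, `bgs` would be the 180 lung-background samples and `sig` the radially averaged nodule signal for each processing setting; the d' values are then compared across settings.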
Towards real-time medical diagnostics using hyperspectral imaging technology
NASA Astrophysics Data System (ADS)
Bjorgan, Asgeir; Randeberg, Lise L.
2015-07-01
Hyperspectral imaging provides non-contact, high-resolution spectral images which have substantial diagnostic potential. This can be used, e.g., for diagnosis and early detection of arthritis in finger joints. Processing speed is currently a limitation for clinical use of the technique. A real-time system for analysis and visualization using GPU processing and threaded CPU processing is presented. Images showing blood oxygenation, blood volume fraction, and vessel-enhanced images are among the data calculated in real time. This study shows the potential of real-time processing in this context. A combination of the processing modules will be used in the detection of arthritic finger joints from hyperspectral reflectance and transmittance data.
Miller, Robyn L; Yaesoubi, Maziar; Turner, Jessica A; Mathalon, Daniel; Preda, Adrian; Pearlson, Godfrey; Adali, Tulay; Calhoun, Vince D
2016-01-01
Resting-state functional brain imaging studies of network connectivity have long assumed that functional connections are stationary on the timescale of a typical scan. Interest in moving beyond this simplifying assumption has emerged only recently. The great hope is that training the right lens on time-varying properties of whole-brain network connectivity will shed additional light on previously concealed brain activation patterns characteristic of serious neurological or psychiatric disorders. We present evidence that multiple explicitly dynamical properties of time-varying whole-brain network connectivity are strongly associated with schizophrenia, a complex mental illness whose symptomatic presentation can vary enormously across subjects. As with so much brain-imaging research, a central challenge for dynamic network connectivity lies in determining transformations of the data that both reduce its dimensionality and expose features that are strongly predictive of important population characteristics. Our paper introduces an elegant, simple method of reducing and organizing data around which a large constellation of mutually informative and intuitive dynamical analyses can be performed. This framework combines a discrete multidimensional data-driven representation of connectivity space with four core dynamism measures computed from large-scale properties of each subject's trajectory, i.e., properties not identifiable with any specific moment in time and therefore reasonable to employ in settings lacking inter-subject time-alignment, such as resting-state functional imaging studies. Our analysis exposes pronounced differences between schizophrenia patients (Nsz = 151) and healthy controls (Nhc = 163). Time-varying whole-brain network connectivity patterns are found to be markedly less dynamically active in schizophrenia patients, an effect that is even more pronounced in patients with high levels of hallucinatory behavior. 
To the best of our knowledge this is the first demonstration that high-level dynamic properties of whole-brain connectivity, generic enough to be commensurable under many decompositions of time-varying connectivity data, exhibit robust and systematic differences between schizophrenia patients and healthy controls.
Stamova, Ivanka; Stamov, Gani
2017-12-01
In this paper, we propose a fractional-order neural network system with time-varying delays and reaction-diffusion terms. We first develop a new Mittag-Leffler synchronization strategy for the controlled nodes via impulsive controllers. Using the fractional Lyapunov method, sufficient conditions are given. We also study the global Mittag-Leffler synchronization of two identical fractional impulsive reaction-diffusion neural networks using linear controllers, which was an open problem even for integer-order models. Since the Mittag-Leffler stability notion is a generalization of the exponential stability concept for fractional-order systems, our results extend and improve the exponential impulsive control theory of neural network systems with time-varying delays and reaction-diffusion terms to the fractional-order case. The fractional-order derivatives allow us to model long-term memory in the neural networks, and thus the present research provides a conceptually straightforward mathematical representation of rather complex processes. Illustrative examples are presented to show the validity of the obtained results. We show that by means of appropriate impulsive controllers we can achieve the stability goal and control the qualitative behavior of the states. An image encryption scheme is extended using fractional derivatives. Copyright © 2017 Elsevier Ltd. All rights reserved.
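For context, the Mittag-Leffler stability notion referenced above can be stated as follows (a standard textbook formulation, not quoted from the paper):

```latex
% One-parameter Mittag-Leffler function:
E_{\alpha}(z) \;=\; \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)},
\qquad \alpha > 0 .

% Mittag-Leffler stability of the origin: for some \lambda > 0,\; b > 0 and a
% locally Lipschitz, nonnegative function m with m(0) = 0,
\bigl\| x(t) \bigr\|
\;\le\;
\Bigl[\, m\bigl(x(t_0)\bigr)\, E_{\alpha}\!\bigl(-\lambda\, (t - t_0)^{\alpha}\bigr) \Bigr]^{b}.

% For \alpha = 1, E_{1}(-\lambda t) = e^{-\lambda t}, which recovers the
% classical exponential stability bound.
```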
Tomaszewski, Michał; Ruszczak, Bogdan; Michalski, Paweł
2018-06-01
Electrical insulators are elements of power lines that require periodical diagnostics. Due to their location on the components of high-voltage power lines, their imaging can be cumbersome and time-consuming, especially under varying lighting conditions. Insulator diagnostics with the use of visual methods may require localizing insulators in the scene. Studies focused on insulator localization in the scene apply a number of methods, including: texture analysis, MRF (Markov Random Field), Gabor filters or GLCM (Gray Level Co-Occurrence Matrix) [1], [2]. Some methods, e.g. those which localize insulators based on colour analysis [3], rely on object and scene illumination, which is why the images from the dataset are taken under varying lighting conditions. The dataset may also be used to compare the effectiveness of different methods of localizing insulators in images. This article presents high-resolution images depicting a long rod electrical insulator under varying lighting conditions and against different backgrounds: crops, forest and grass. The dataset contains images with visible laser spots (generated by a device emitting light at the wavelength of 532 nm) and images without such spots, as well as complementary data concerning the illumination level and insulator position in the scene, the number of registered laser spots, and their coordinates in the image. The laser spots may be used to support object-localizing algorithms, while the images without spots may serve as a source of information for those algorithms which do not need spots to localize an insulator.
Video Extrapolation Method Based on Time-Varying Energy Optimization and CIP.
Sakaino, Hidetomo
2016-09-01
Video extrapolation/prediction methods are often used to synthesize new videos from images. For fluid-like images and dynamic textures as well as moving rigid objects, most state-of-the-art video extrapolation methods use non-physics-based models that learn orthogonal bases from a number of images, but at high computation cost. Unfortunately, data truncation can cause image degradation, i.e., blur, artifacts, and insufficient motion changes. To extrapolate videos that more strictly follow physical rules, this paper proposes a physics-based method that needs only a few images and is truncation-free. We utilize physics-based equations with image intensity and velocity: the optical flow, Navier-Stokes, continuity, and advection equations. These allow us to use partial difference equations to deal with local image feature changes. Image degradation during extrapolation is minimized by updating model parameters with a novel time-varying energy balancer that uses energy-based image features, i.e., texture, velocity, and edge. Moreover, the advection equation is discretized by a high-order constrained interpolation profile (CIP) for lower quantization error than can be achieved by the previous finite difference method in long-term videos. Experiments show that the proposed energy-based video extrapolation method outperforms the state-of-the-art video extrapolation methods in terms of image quality and computation cost.
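The advection step at the core of such a model can be illustrated in one dimension. The paper discretizes advection with a high-order constrained interpolation profile (CIP); the sketch below uses first-order upwind differencing purely to show the structure of the update:

```python
import numpy as np

def advect_upwind(I, u, dt=1.0, dx=1.0):
    """One explicit step of dI/dt + u * dI/dx = 0 (1-D, first-order upwind).

    I: intensity samples on a periodic grid; u: velocity at each sample.
    The upwind side of the spatial difference is chosen by the sign of u.
    """
    Ix_back = (I - np.roll(I, 1)) / dx        # backward difference
    Ix_fwd = (np.roll(I, -1) - I) / dx        # forward difference
    Ix = np.where(u > 0, Ix_back, Ix_fwd)     # pick the upwind side
    return I - dt * u * Ix

I = np.zeros(16)
I[4] = 1.0
u = np.full(16, 1.0)        # uniform rightward velocity; CFL = u*dt/dx = 1
I1 = advect_upwind(I, u)    # the unit impulse moves one cell to the right
```

At CFL = 1 the upwind step shifts the profile exactly; at smaller CFL it smears it, which is precisely the numerical diffusion that motivates the higher-order CIP scheme for long-term extrapolation.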
Vorwerk, H; Zink, K; Schiller, R; Budach, V; Böhmer, D; Kampfer, S; Popp, W; Sack, H; Engenhart-Cabillic, R
2014-05-01
A number of national and international societies published recommendations regarding the required equipment and manpower assumed to be necessary to treat a number of patients with radiotherapy. None of these recommendations were based on actual time measurements needed for specific radiotherapy procedures. The German Society of Radiation Oncology (DEGRO) was interested in substantiating these recommendations by prospective evaluations of all important core procedures of radiotherapy in the most frequent cancers treated by radiotherapy. The results of the examinations of radiotherapy with intensity-modulated radiation therapy (IMRT) in patients with different tumor entities are presented in this manuscript. Four radiation therapy centers [University Hospital of Marburg, University Hospital of Giessen, University Hospital of Berlin (Charité), Klinikum rechts der Isar der Technischen Universität München] participated in this prospective study. The workload of the different occupational groups and room occupancies for the core procedures of radiotherapy were prospectively documented during a 2-month period per center and subsequently statistically analyzed. The time needed per patient varied considerably between individual patients and between centers for all the evaluated procedures. The technical preparation (contouring of target volume and organs at risk, treatment planning, and approval of treatment plan) was the most time-consuming process taking 3 h 54 min on average. The time taken by the medical physicists for this procedure amounted to about 57%. The training part of the preparation time was 87% of the measured time for the senior physician and resident. The total workload for all involved personnel comprised 74.9 min of manpower for the first treatment, 39.7 min for a routine treatment with image guidance, and 22.8 min without image guidance. 
The mean room occupancy varied between 10.6 min (routine treatment without image guidance) and 23.7 min (first treatment with image guidance). The prospective data presented here allow for an estimate of the required machine time and manpower needed for the core procedures of radiotherapy in an average radiation treatment with IMRT. However, one should be aware that a number of necessary and time-consuming activities were not evaluated in the present study.
Asou, Hiroya; Imada, N; Sato, T
2010-06-20
On coronary MR angiography (CMRA), cardiac motion worsens image quality. To improve image quality, detection of cardiac motion, especially the motion of individual coronary arteries, is very important. Usually, scan delay and duration are determined manually by the operator. We developed a new evaluation method to calculate the static time of an individual coronary artery. First, coronary cine MRI was acquired at a level about 3 cm below the aortic valve (80 images/R-R). Chronological signal changes were evaluated by Fourier transformation of each pixel of the images. Noise was reduced with subtraction and extraction processes. To extract strongly moving structures such as the coronary arteries, morphological filtering and labeling steps were added. Using these image-processing steps, individual coronary motion was extracted and the individual coronary static time was calculated automatically. We compared images obtained with the ordinary manual method and the new automated method in 10 healthy volunteers. Coronary static times calculated with our method were shorter than those of the ordinary manual method, and scan time became about 10% longer than with the ordinary method. Image quality was improved with our method. Our automated detection method for coronary static time, based on chronological Fourier transformation, has the potential to improve the image quality of CMRA with easy processing.
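The per-pixel temporal Fourier analysis described above can be sketched as follows; the frequency band and the toy cine series are illustrative stand-ins for the paper's subtraction, morphological filtering, and labeling steps:

```python
import numpy as np

def motion_map(cine, band=(2, None)):
    """Per-pixel temporal-frequency energy from a cine series (T, H, W).

    Pixels over a moving structure show strong non-DC temporal components;
    summing spectral magnitude above a low-frequency cutoff highlights them.
    """
    F = np.fft.rfft(cine, axis=0)             # temporal spectrum per pixel
    lo, hi = band
    return np.abs(F[lo:hi]).sum(axis=0)       # drop DC / slow drift bins

# Toy cine: static background plus one pixel oscillating over the cycle.
T, H, W = 80, 8, 8
t = np.arange(T)
cine = np.ones((T, H, W))
cine[:, 3, 3] += np.sin(2 * np.pi * 4 * t / T)   # "moving vessel" pixel
m = motion_map(cine)
```

Thresholding `m` and taking the connected component over the artery would give the region whose quiet interval defines the coronary static time.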
Visual Depth from Motion Parallax and Eye Pursuit
Stroyan, Keith; Nawrot, Mark
2012-01-01
A translating observer viewing a rigid environment experiences “motion parallax,” the relative movement upon the observer’s retina of variously positioned objects in the scene. This retinal movement of images provides a cue to the relative depth of objects in the environment; however, retinal motion alone cannot mathematically determine the relative depth of the objects. Visual perception of depth from lateral observer translation uses both retinal image motion and eye movement. In (Nawrot & Stroyan, 2009, Vision Res. 49, p.1969) we showed mathematically that the ratio of the rate of retinal motion over the rate of smooth eye pursuit determines depth relative to the fixation point in central vision. We also reported on psychophysical experiments indicating that this ratio is the important quantity for perception. Here we analyze the motion/pursuit cue for the more general, and more complicated, case when objects are distributed across the horizontal viewing plane beyond central vision. We show how the mathematical motion/pursuit cue varies with different points across the plane and with time as an observer translates. If the time-varying retinal motion and smooth eye pursuit are the only signals used for this visual process, it is important to know what is mathematically possible to derive about depth and structure. Our analysis shows that the motion/pursuit ratio determines an excellent description of depth and structure in these broader stimulus conditions, provides a detailed quantitative hypothesis of these visual processes for the perception of depth and structure from motion parallax, and provides a computational foundation to analyze the dynamic geometry of future experiments. PMID:21695531
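The central-vision case can be checked numerically from the viewing geometry alone: for a point beyond fixation along the line of sight, the finite-difference rates below give a motion/pursuit ratio of d/(f + d), from which d/f is recovered as ratio/(1 − ratio). Variable names and values are ours, chosen for illustration:

```python
import numpy as np

# A laterally translating observer (speed v) fixates F at distance f; a
# second point lies at distance f + d along the same line of sight.
v, f, d = 0.1, 1.0, 0.25
dt = 1e-6

def angle(x, z):                    # viewing direction from an eye at (x, 0)
    return np.arctan2(x, z)

# Rates at t = 0 by finite differences of the viewing angles.
pursuit = abs(angle(v * dt, f) - angle(0.0, f)) / dt          # eye rotation rate
distal = abs(angle(v * dt, f + d) - angle(0.0, f + d)) / dt   # distal point rate
motion = abs(distal - pursuit)                                # retinal slip rate
ratio = motion / pursuit                                      # = d / (f + d)
depth_over_f = ratio / (1.0 - ratio)                          # recovers d / f
```

The ratio is dimensionless, so neither the translation speed nor the absolute distances need to be known to recover relative depth, which is the point of the motion/pursuit cue.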
Lakshmanan, Shanmugam; Prakash, Mani; Lim, Chee Peng; Rakkiyappan, Rajan; Balasubramaniam, Pagavathigounder; Nahavandi, Saeid
2018-01-01
In this paper, synchronization of an inertial neural network with time-varying delays is investigated. Based on the variable transformation method, we transform the second-order differential equations into the first-order differential equations. Then, using suitable Lyapunov-Krasovskii functionals and Jensen's inequality, the synchronization criteria are established in terms of linear matrix inequalities. Moreover, a feedback controller is designed to attain synchronization between the master and slave models, and to ensure that the error model is globally asymptotically stable. Numerical examples and simulations are presented to indicate the effectiveness of the proposed method. Besides that, an image encryption algorithm is proposed based on the piecewise linear chaotic map and the chaotic inertial neural network. The chaotic signals obtained from the inertial neural network are utilized for the encryption process. Statistical analyses are provided to evaluate the effectiveness of the proposed encryption algorithm. The results ascertain that the proposed encryption algorithm is efficient and reliable for secure communication applications.
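The variable-transformation step (reducing the second-order inertial equation to a first-order system) can be sketched for a single scalar unit; the damping, gain, and tanh activation below are placeholder choices, not the paper's network:

```python
import numpy as np

def inertial_rhs(t, z, A=-1.0, damping=0.5):
    """First-order form of a scalar inertial-unit equation.

    The transformation y = x' + x turns
        x'' = -damping * x' + A * tanh(x)     (illustrative right-hand side)
    into the coupled first-order system returned below, so any standard
    first-order integrator (or LMI analysis) applies.
    """
    x, y = z
    xdot = y - x                                   # from y = x' + x
    ydot = xdot + (-damping * xdot + A * np.tanh(x))   # y' = x'' + x'
    return np.array([xdot, ydot])

# One explicit Euler step from (x, y) = (1, 0): the state advances without
# any second derivative appearing in the integrator.
z = np.array([1.0, 0.0])
z = z + 0.01 * inertial_rhs(0.0, z)
```

In the synchronization setting, the same transformation is applied to both master and slave models, and the feedback controller is designed on the resulting first-order error system.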
On processing of Ni-Cr3C2 based functionally graded clads through microwave heating
NASA Astrophysics Data System (ADS)
Kaushal, Sarbjeet; Gupta, Dheeraj; Bhowmick, Hiralal
2018-06-01
In the current study, functionally graded clads (FGC) of Ni-Cr3C2 based composite powders with varying percentage of Cr3C2 (0%–30% by weight) were developed on austenitic stainless steel (SS-304) substrate through microwave hybrid heating method. A domestic microwave oven working at 2.45 GHz and variable power level of 180–900 W was used to conduct the experimental trials. The exposure time was varied with compositional gradient and was optimized. Scanning electron microscopic (SEM) image of the FGC shows the uniform distribution of Cr3C2 particles inside the Ni matrix. Presence of Ni3C, Ni3Si, Ni3Cr2, and Cr3C2 phases was observed in the different layers of FGC. The top FGC layer exhibits the maximum value of microhardness of order 576 ± 25 HV which was 2.5 times more than that of the substrate.
1981-06-15
A Résumé of Stochastic, Time-Varying, Linear System Theory (SACLANTCEN SR-50). Only fragments survive extraction: a figure caption, "Normalized energy in ambiguity function"; the observation that the order in which systems are concatenated is unimportant, exactly analogous to the results of time-invariant linear system theory; and a reference to MEIER, L., A résumé of deterministic time-varying linear system theory with application to active sonar signal processing problems, SACLANTCEN.
NASA Astrophysics Data System (ADS)
Steinman, Joe; Koletar, Margaret; Stefanovic, Bojana; Sled, John G.
2016-03-01
This study evaluates two-photon fluorescence microscopy (2PFM) of in vivo and ex vivo cleared samples for visualizing cortical vasculature. Four mouse brains were imaged with in vivo 2PFM. Mice were then perfused with a FITC gel and cleared in fructose. The same regions imaged in vivo were imaged ex vivo. Vessels were segmented automatically in both images using an in-house developed algorithm that accounts for the anisotropic and spatially varying PSF ex vivo. Through non-linear warping, the ex vivo image and tracing were aligned to the in vivo image. The corresponding vessels were identified through a local search algorithm. This enabled comparison of identical vessels in vivo/ex vivo. A similar process was conducted on the in vivo tracing to determine the percentage of vessels perfused. Of all the vessels identified over the four brains in vivo, 98% were present ex vivo. There was a trend towards reduced vessel diameter ex vivo, by 12.7%, and the shrinkage varied between specimens (0% to 26%). Large-diameter surface vessels, through a process termed 'shadowing', attenuated in vivo signal from deeper cortical vessels by 40% at 300 μm below the cortical surface, which does not occur ex vivo. In summary, though there is a mean diameter shrinkage ex vivo, ex vivo imaging has a reduced shadowing artifact. Additionally, since imaging depths are only limited by the working distance of the microscope objective, ex vivo imaging is more suitable for imaging large portions of the brain.
Real-time digital signal processing for live electro-optic imaging.
Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro
2009-08-31
We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.
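The magnitude and phase detection of a signal downconverted to an intermediate frequency, as described in the record above, amounts to I/Q demodulation at each pixel. The sketch below is illustrative only; the actual DSP implementation and parameters of the LEI system are not given in the abstract:

```python
import numpy as np

def demodulate_magnitude_phase(signal, fs, f_if):
    """Recover magnitude and phase of a modulated signal by I/Q
    demodulation at the intermediate frequency f_if (Hz).
    signal: 1-D array of samples from one pixel; fs: sampling rate (Hz)."""
    t = np.arange(len(signal)) / fs
    # Mix with in-phase and quadrature references, then low-pass by
    # averaging; averaging over an integer number of IF periods removes
    # the 2*f_if component exactly.
    i = np.mean(signal * np.cos(2 * np.pi * f_if * t))
    q = np.mean(signal * np.sin(2 * np.pi * f_if * t))
    magnitude = 2.0 * np.hypot(i, q)
    phase = np.arctan2(-q, i)  # convention: s = A*cos(2*pi*f_if*t + phi)
    return magnitude, phase

# A test tone with known amplitude and phase is recovered exactly
fs, f_if = 100_000.0, 5_000.0
t = np.arange(1000) / fs                       # 50 full IF periods
s = 1.5 * np.cos(2 * np.pi * f_if * t + 0.7)
mag, ph = demodulate_magnitude_phase(s, fs, f_if)
```

Averaging over an integer number of IF periods is what makes the low-pass step exact here; a real sensor stream would use a proper filter instead.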
Liu, Xin; Li, Weiyi; Chong, Tzyy Haur; Fane, Anthony G
2017-03-01
Spacer design plays an important role in improving the performance of membrane processes for water/wastewater treatment. This work focused on a fundamental issue of spacer design, namely the effects of spacer orientation on fouling behavior during a membrane process. A series of fouling experiments with different spacer orientations were carried out to characterize in situ the formation of a cake layer in a spacer unit cell via 3D optical coherence tomography (OCT) imaging. The cake layers formed at different times were digitalized to quantitatively analyze the variation in cake morphology as a function of time. In particular, local deposition rates were evaluated to determine the active regions where instantaneous changes in deposit thickness were significant. The characterization results indicate that varying the spacer orientation could substantially change the evolution of membrane fouling by particulate foulants and thereby result in cake layers with various morphologies; the competition between growth and erosion at different locations would respond instantaneously to the micro-hydrodynamic environment, which might change with time. This work confirms that the OCT-based characterization method is a powerful tool for exploring novel spacer designs. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jamlongkul, P.; Wannawichian, S.
2017-12-01
Earth's aurora in the low-latitude region was studied via time variations of oxygen emission spectra, together with simultaneous solar wind data. The behavior of the spectrum intensity, in correspondence with solar wind conditions, could be a trace of aurora in the low-latitude region, including some effects of highly energetic auroral particles. Oxygen emission spectral lines were observed with the Medium Resolution Echelle Spectrograph (MRES) on the 2.4-m diameter telescope at the Thai National Observatory, Inthanon Mountain, Chiang Mai, Thailand, during 1-5 LT on 5 and 6 February 2017. The observed spectral lines were calibrated via the Dech95 2D image processing program and the Dech-Fits spectra processing program, for spectrum image processing and spectrum wavelength calibration, respectively. The variations of observed intensities each day were compared with solar wind parameters: the magnitude of the IMF (|BIMF|) including the IMF in RTN coordinates (BR, BT, BN), ion density (ρ), plasma flow pressure (P), and speed (v). The correlation coefficients between oxygen spectral emissions and different solar wind parameters were found to vary in both positive and negative senses.
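The correlation analysis described in this record reduces to computing Pearson coefficients between the paired intensity and solar wind time series. A minimal sketch, with hypothetical numbers rather than the observed data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two same-length series."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    xd, yd = x - x.mean(), y - y.mean()
    return float(np.sum(xd * yd) / np.sqrt(np.sum(xd**2) * np.sum(yd**2)))

# Hypothetical series: oxygen line intensity vs. solar wind speed v (km/s)
intensity = [1.0, 1.3, 1.1, 1.6, 1.8]
v = [350.0, 410.0, 380.0, 470.0, 520.0]
r = pearson_r(intensity, v)   # strongly positive for this toy data
```

The same function applied to each solar wind parameter (|BIMF|, BR, BT, BN, ρ, P, v) yields the per-parameter coefficients, positive or negative, that the study compares.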
Live-cell imaging to measure BAX recruitment kinetics to mitochondria during apoptosis
Maes, Margaret E.; Schlamp, Cassandra L.; Nickells, Robert W.
2017-01-01
The pro-apoptotic BCL2 gene family member, BAX, plays a pivotal role in the intrinsic apoptotic pathway. Under cellular stress, BAX recruitment to the mitochondria occurs when activated BAX forms dimers, then oligomers, to initiate mitochondria outer membrane permeabilization (MOMP), a process critical for apoptotic progression. The activation and recruitment of BAX to form oligomers has been studied for two decades using fusion proteins with a fluorescent reporter attached in-frame to the BAX N-terminus. We applied high-speed live cell imaging to monitor the recruitment of BAX fusion proteins in dying cells. Data from time-lapse imaging was validated against the activity of endogenous BAX in cells, and analyzed using sigmoid mathematical functions to obtain detail of the kinetic parameters of the recruitment process at individual mitochondrial foci. BAX fusion proteins behave like endogenous BAX during apoptosis. Kinetic studies show that fusion protein recruitment is also minimally affected in cells lacking endogenous BAK or BAX genes, but that the kinetics are moderately, but significantly, different with different fluorescent tags in the fusion constructs. In experiments testing BAX recruitment in 3 different cell lines, our results show that regardless of cell type, once activated, BAX recruitment initiates simultaneously within a cell, but exhibits varying rates of recruitment at individual mitochondrial foci. Very early during BAX recruitment, pro-apoptotic molecules are released in the process of MOMP, but different molecules are released at different times and rates relative to the time of BAX recruitment initiation. These results provide a method for BAX kinetic analysis in living cells and yield greater detail of multiple characteristics of BAX-induced MOMP in living cells that were initially observed in cell free studies. PMID:28880942
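The sigmoid characterization of recruitment kinetics at individual mitochondrial foci can be illustrated with a logistic model. The exact functional form and fitting procedure used by the authors are not given in the record, so the following is an assumed sketch:

```python
import numpy as np

def logistic(t, f_max, t_half, tau):
    """Sigmoid model of fluorescence at one mitochondrial focus.
    f_max: plateau intensity; t_half: time of half-maximal recruitment;
    tau: time constant (smaller = faster recruitment)."""
    return f_max / (1.0 + np.exp(-(t - t_half) / tau))

def estimate_t_half(t, f):
    """Half-maximal recruitment time, by linear interpolation at the
    first crossing of f.max()/2 (f assumed to be a monotonic sigmoid)."""
    half = 0.5 * f.max()
    i = int(np.argmax(f >= half))
    t0, t1, f0, f1 = t[i - 1], t[i], f[i - 1], f[i]
    return t0 + (half - f0) * (t1 - t0) / (f1 - f0)

# Synthetic time-lapse trace: recruitment half-time of 30 min
t = np.linspace(0.0, 60.0, 121)
f = logistic(t, 100.0, 30.0, 4.0)
t_half_est = estimate_t_half(t, f)   # close to 30
```

Comparing t_half and tau across foci within one cell is one way such per-focus kinetic parameters could be summarized.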
Graves, William W.; Binder, Jeffrey R.; Desai, Rutvik H.; Humphries, Colin; Stengel, Benjamin C.; Seidenberg, Mark S.
2014-01-01
Are there multiple ways to be a skilled reader? To address this longstanding, unresolved question, we hypothesized that individual variability in using semantic information in reading aloud would be associated with neuroanatomical variation in pathways linking semantics and phonology. Left-hemisphere regions of interest for diffusion tensor imaging analysis were defined based on fMRI results, including two regions linked with semantic processing – angular gyrus (AG) and inferior temporal sulcus (ITS) – and two linked with phonological processing – posterior superior temporal gyrus (pSTG) and posterior middle temporal gyrus (pMTG). Effects of imageability (a semantic measure) on response times varied widely among individuals and covaried with the volume of pathways through the ITS and pMTG, and through AG and pSTG, partially overlapping the inferior longitudinal fasciculus and the posterior branch of the arcuate fasciculus. These results suggest strategy differences among skilled readers associated with structural variation in the neural reading network. PMID:24735993
Speckle size in optical Fourier domain imaging
NASA Astrophysics Data System (ADS)
Lamouche, G.; Vergnole, S.; Bisaillon, C.-E.; Dufour, M.; Maciejko, R.; Monchalin, J.-P.
2007-06-01
As in conventional time-domain optical coherence tomography (OCT), speckle is inherent to any Optical Fourier Domain Imaging (OFDI) of biological tissue. OFDI is also known as swept-source OCT (SS-OCT). The axial speckle size is mainly determined by the OCT resolution length and the transverse speckle size by the focusing optics illuminating the sample. There is also a contribution from the sample related to the number of scatterers contained within the probed volume. In the OFDI data processing, there is some liberty in selecting the range of wavelengths used and this allows variation in the OCT resolution length. Consequently the probed volume can be varied. By performing measurements on an optical phantom with a controlled density of discrete scatterers and by changing the probed volume with different range of wavelengths in the OFDI data processing, there is an obvious change in the axial speckle size, but we show that there is also a less obvious variation in the transverse speckle size. This work contributes to a better understanding of speckle in OCT.
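The dependence of axial speckle size on the processed wavelength range follows from the standard Gaussian-spectrum coherence-length relation for OCT. A small numeric sketch; the specific source parameters below are assumed, not taken from the record:

```python
import math

def oct_axial_resolution_nm(lambda0_nm, delta_lambda_nm):
    """Axial resolution (coherence length) for a Gaussian spectrum:
    l_c = (2*ln2/pi) * lambda0**2 / delta_lambda.
    Restricting the processed wavelength range (smaller delta_lambda)
    enlarges the axial resolution length, and hence the axial speckle."""
    return (2.0 * math.log(2.0) / math.pi) * lambda0_nm**2 / delta_lambda_nm

# Assumed 1310 nm swept source: full 100 nm sweep vs. a restricted 50 nm range
full_um = oct_axial_resolution_nm(1310.0, 100.0) * 1e-3   # ~7.6 um
half_um = oct_axial_resolution_nm(1310.0, 50.0) * 1e-3    # exactly twice that
```

Halving the processed bandwidth doubles the axial resolution length, which is the mechanism the study exploits to vary the probed volume.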
NASA Technical Reports Server (NTRS)
1998-01-01
Under a Jet Propulsion Laboratory SBIR (Small Business Innovative Research) contract, Cambridge Research and Instrumentation Inc. developed a new class of filters for the construction of small, low-cost multispectral imagers. The VariSpec liquid crystal tunable filter enables users to obtain multispectral, ultra-high resolution images using a monochrome CCD (charge-coupled device) camera. Application areas include biomedical imaging, remote sensing, and machine vision.
Usability of a real-time tracked augmented reality display system in musculoskeletal injections
NASA Astrophysics Data System (ADS)
Baum, Zachary; Ungi, Tamas; Lasso, Andras; Fichtinger, Gabor
2017-03-01
PURPOSE: Image-guided needle interventions are seldom performed with augmented reality guidance in clinical practice due to many workspace and usability restrictions. We propose a real-time optically tracked image overlay system to make image-guided musculoskeletal injections more efficient and assess its usability in a bed-side clinical environment. METHODS: An image overlay system consisting of an optically tracked viewbox, tablet computer, and semitransparent mirror allows users to navigate scanned patient volumetric images in real-time using software built on the open-source 3D Slicer application platform. A series of experiments were conducted to evaluate the latency and screen refresh rate of the system using different image resolutions. To assess the usability of the system and software, five medical professionals were asked to navigate patient images while using the overlay and completed a questionnaire to assess the system. RESULTS: In assessing the latency of the system with scanned images of varying size, screen refresh rates were approximately 5 FPS. The study showed that participants found using the image overlay system easy, and found the table-mounted system was significantly more usable and effective than the handheld system. CONCLUSION: It was determined that the system performs comparably with scanned images of varying size when assessing the latency of the system. During our usability study, participants preferred the table-mounted system over the handheld. The participants also felt that the system itself was simple to use and understand. With these results, the image overlay system shows promise for use in a clinical environment.
1981-12-01
The advent of fast processors has led to the possibility of implementing a large number of image processing functions in near real time, a result which is essential to rapid image handling and near real-time interaction by a user at a display, for example with a large, high-resolution image.
NASA Astrophysics Data System (ADS)
Lewis, J. R.; Irwin, M.; Bunclark, P.
2010-12-01
The VISTA telescope is a 4 metre instrument which has recently been commissioned at Paranal, Chile. Equipped with an infrared camera, 16 2Kx2K Raytheon detectors and a 1.7 square degree field of view, VISTA represents a huge leap in infrared survey capability in the southern hemisphere. Pipeline processing of IR data is far more technically challenging than for optical data. IR detectors are inherently more unstable, while the sky emission is over 100 times brighter than most objects of interest, and varies in a complex spatial and temporal manner. To compensate for this, exposure times are kept short, leading to high nightly data rates. VISTA is expected to generate an average of 250 GB of data per night over the next 5-10 years, which far exceeds the current total data rate of all 8m-class telescopes. In this presentation we discuss the pipelines that have been developed to deal with IR imaging data from VISTA and discuss the primary issues involved in an end-to-end system capable of: robustly removing instrument and night sky signatures; monitoring data quality and system integrity; providing astrometric and photometric calibration; and generating photon noise-limited images and science-ready astronomical catalogues.
Guang, Huizhi; Cai, Chuangjian; Zuo, Simin; Cai, Wenjuan; Zhang, Jiulou; Luo, Jianwen
2017-03-01
Peripheral arterial disease (PAD) can further cause lower limb ischemia. Quantitative evaluation of vascular perfusion in the ischemic limb contributes to the diagnosis of PAD and to the preclinical development of new drugs. In vivo time-series indocyanine green (ICG) fluorescence imaging can noninvasively monitor blood flow and has deep tissue penetration. The perfusion rate estimated from time-series ICG images is not sufficient for the evaluation of hindlimb ischemia. Information relevant to vascular density is also important, because angiogenesis is an essential mechanism for post-ischemic recovery. In this paper, a multiparametric evaluation method is proposed for simultaneous estimation of multiple vascular perfusion parameters, including not only the perfusion rate but also the vascular perfusion density and the time-varying ICG concentration in veins. The proposed method is based on a mathematical model of ICG pharmacokinetics in the mouse hindlimb. Regression analysis was performed on time-series ICG images obtained from a dynamic reflectance fluorescence imaging system. The results demonstrate that the estimated parameters are effective for quantitatively evaluating vascular perfusion and distinguishing hypo-perfused from well-perfused tissues in the mouse hindlimb. The proposed multiparametric evaluation method could be useful for PAD diagnosis. Graphical abstract: the estimated perfusion rate and vascular perfusion density maps (left) and the time-varying ICG concentration in veins of the ankle region (right) of the normal and ischemic hindlimbs. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
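A perfusion-rate estimate of the kind described above can be sketched with a simple one-compartment model fitted by least squares. This is an assumed form for illustration, not the paper's actual hindlimb pharmacokinetic model:

```python
import numpy as np

def tissue_icg(t, c_art, k):
    """One-compartment perfusion model (assumed form):
    dC_t/dt = k * (C_a(t) - C_t(t)), integrated with forward Euler.
    k is the perfusion rate; c_art is the arterial input curve."""
    c_t = np.zeros_like(t)
    for i in range(1, t.size):
        dt = t[i] - t[i - 1]
        c_t[i] = c_t[i - 1] + dt * k * (c_art[i - 1] - c_t[i - 1])
    return c_t

# Synthetic data with a known perfusion rate, recovered by grid search
t = np.linspace(0.0, 10.0, 501)
c_a = np.exp(-t / 3.0)                       # decaying arterial input
c_obs = tissue_icg(t, c_a, k=0.8)
ks = np.linspace(0.1, 2.0, 191)
errs = [np.sum((tissue_icg(t, c_a, k) - c_obs) ** 2) for k in ks]
k_hat = float(ks[int(np.argmin(errs))])      # recovers k = 0.8
```

Fitting such a model pixel-wise over the time series is one way a perfusion-rate map could be produced; the paper's multiparametric regression additionally estimates vascular perfusion density and the venous ICG concentration.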
Interactive brain shift compensation using GPU based programming
NASA Astrophysics Data System (ADS)
van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf
2009-02-01
Processing large images files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphic process units (GPUs) has increased significantly. With the pixel shader model 4.0 the GPU can be used for image processing 10x faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain shift correction during navigated neurosurgery using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively raytraced 3D brain image. GPU based programming enables real-time processing of high definition image datasets and various applications can be developed in medicine, optics and image sciences.
Effect of Local TOF Kernel Miscalibrations on Contrast-Noise in TOF PET
NASA Astrophysics Data System (ADS)
Clementel, Enrico; Mollet, Pieter; Vandenberghe, Stefaan
2013-06-01
TOF PET imaging requires specific calibrations: accurate characterization of the system timing resolution and timing offset is required to achieve the full potential image quality. Current system models used in image reconstruction assume a spatially uniform timing resolution kernel. Furthermore, although timing offset errors are often pre-corrected, this correction becomes less accurate over time because, especially in older scanners, the timing offsets are often calibrated only during installation, as the procedure is time-consuming. In this study, we investigate and compare the effects of local mismatch of timing resolution, when a uniform kernel is applied to systems with local variations in timing resolution, and the effects of uncorrected timing offset errors on image quality. A ring-like phantom was acquired on a Philips Gemini TF scanner and timing histograms were obtained from coincidence events to measure timing resolution along all sets of LORs crossing the scanner center. In addition, multiple acquisitions of a cylindrical phantom, 20 cm in diameter with spherical inserts, and a point source were simulated. A location-dependent timing resolution was simulated, with a median value of 500 ps and increasingly large local variations; timing offset errors ranging from 0 to 350 ps were also simulated. Images were reconstructed with TOF MLEM with a uniform kernel corresponding to the effective timing resolution of the data, as well as with purposefully mismatched kernels. CRC vs. noise curves were measured over the simulated cylinder realizations, while the simulated point source was processed to generate timing histograms of the data. Results show that timing resolution is not uniform over the FOV of the considered scanner. The simulated phantom data indicate that CRC is moderately reduced in data sets with locally varying timing resolution reconstructed with a uniform kernel, while still performing better than non-TOF reconstruction.
On the other hand, uncorrected offset errors in our setup have a larger potential for decreasing image quality and can lead to a reduction of CRC of up to 15% and an increase in the measured timing resolution kernel up to 40%. However, in realistic conditions in frequently calibrated systems, using a larger effective timing kernel in image reconstruction can compensate uncorrected offset errors.
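The image-quality role of the timing kernel can be connected to spatial localization along the line of response through the standard relation dx = c*dt/2. A small numeric sketch using the 500 ps and 350 ps values quoted above:

```python
C_MM_PER_PS = 0.299792458  # speed of light, mm per picosecond

def tof_localization_fwhm_mm(timing_fwhm_ps):
    """Spatial localization FWHM along a LOR implied by the
    coincidence timing resolution: dx = c * dt / 2."""
    return C_MM_PER_PS * timing_fwhm_ps / 2.0

def tof_offset_shift_mm(offset_ps):
    """Position shift along a LOR caused by an uncorrected timing
    offset: same geometric factor, dx = c * dt_offset / 2."""
    return C_MM_PER_PS * offset_ps / 2.0

dx_500 = tof_localization_fwhm_mm(500.0)   # ~75 mm for the 500 ps median
shift_350 = tof_offset_shift_mm(350.0)     # ~52 mm for a 350 ps offset
```

A 350 ps offset thus displaces events by several centimetres along the LOR, which is consistent with offset errors degrading image quality more than a modest kernel mismatch.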
Human body motion capture from multi-image video sequences
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola
2003-01-01
This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of interior orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is thus determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed.
The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the last case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to the conventional motion capture systems: 3-D trajectories of key points which can be afterwards analyzed and used for animation or medical purposes.
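Forward ray intersection, used above to turn matched image points into 3-D coordinates, can be posed as a linear least-squares problem over the camera rays. A self-contained sketch with hypothetical camera geometry:

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares 3-D point closest to a set of rays (forward ray
    intersection). Each ray: x = o + s*d. Solves
    sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, float)
    return np.linalg.solve(A, b)

# Three hypothetical camera centers viewing the point (1, 2, 3)
target = np.array([1.0, 2.0, 3.0])
origins = [np.array([0.0, 0.0, 0.0]),
           np.array([5.0, 0.0, 0.0]),
           np.array([0.0, 5.0, 0.0])]
dirs = [target - o for o in origins]
p = intersect_rays(origins, dirs)      # recovers the target point
```

With noisy correspondences the same system yields the point minimizing the summed squared distances to the three rays, which is what triangulating a triplet match amounts to.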
Raben, Jaime S; Hariharan, Prasanna; Robinson, Ronald; Malinauskas, Richard; Vlachos, Pavlos P
2016-03-01
We present advanced particle image velocimetry (PIV) processing, post-processing, and uncertainty estimation techniques to support the validation of computational fluid dynamics analyses of medical devices. This work is an extension of a previous FDA-sponsored multi-laboratory study, which used a medical device mimicking geometry referred to as the FDA benchmark nozzle model. Experimental measurements were performed using time-resolved PIV at five overlapping regions of the model for Reynolds numbers in the nozzle throat of 500, 2000, 5000, and 8000. Images included a twofold increase in spatial resolution in comparison to the previous study. Data was processed using ensemble correlation, dynamic range enhancement, and phase correlations to increase signal-to-noise ratios and measurement accuracy, and to resolve flow regions with large velocity ranges and gradients, which is typical of many blood-contacting medical devices. Parameters relevant to device safety, including shear stress at the wall and in bulk flow, were computed using radial basis functions. In addition, in-field spatially resolved pressure distributions, Reynolds stresses, and energy dissipation rates were computed from PIV measurements. Velocity measurement uncertainty was estimated directly from the PIV correlation plane, and uncertainty analysis for wall shear stress at each measurement location was performed using a Monte Carlo model. Local velocity uncertainty varied greatly and depended largely on local conditions such as particle seeding, velocity gradients, and particle displacements. Uncertainty in low velocity regions in the sudden expansion section of the nozzle was greatly reduced by over an order of magnitude when dynamic range enhancement was applied. Wall shear stress uncertainty was dominated by uncertainty contributions from velocity estimations, which were shown to account for 90-99% of the total uncertainty. 
This study provides advancements in the PIV processing methodologies over the previous work through increased PIV image resolution, use of robust image processing algorithms for near-wall velocity measurements and wall shear stress calculations, and uncertainty analyses for both velocity and wall shear stress measurements. The velocity and shear stress analysis, with spatially distributed uncertainty estimates, highlights the challenges of flow quantification in medical devices and provides potential methods to overcome such challenges.
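Wall shear stress of the kind computed above follows from the near-wall velocity gradient, tau_w = mu * du/dy. A minimal sketch; the viscosity value and linear-fit approach are assumptions for illustration (the study used radial basis functions):

```python
import numpy as np

MU = 3.5e-3  # assumed blood-analog dynamic viscosity, Pa*s

def wall_shear_stress(y, u, mu=MU):
    """Wall shear stress from near-wall velocity samples:
    tau_w = mu * du/dy at the wall, using the slope of a linear fit
    through the points closest to the wall (y measured from the wall)."""
    dudy = np.polyfit(y, u, 1)[0]
    return mu * dudy

# Hypothetical near-wall profile with shear rate 400 1/s
y = np.array([0.1e-3, 0.2e-3, 0.3e-3, 0.4e-3])   # m
u = 400.0 * y                                     # m/s
tau = wall_shear_stress(y, u)                     # 3.5e-3 * 400 = 1.4 Pa
```

Because the fitted slope enters linearly, the velocity uncertainty at each near-wall point propagates directly into tau_w, which is why velocity estimation dominated the wall shear stress uncertainty budget.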
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Baad, M
Purpose: To use phantom and simulation experiments to relate technique factors, patient size and antiscatter grid use to image quality in portable digital radiography (DR), in light of advancements in detector design and image processing. Methods: Image contrast-to-noise ratio (CNR) on a portable DR system (MobileDaRt Evolution, Shimadzu) was measured by imaging four aluminum inserts of varying thickness, superimposed on a Lucite slab phantom using a pediatric abdominal protocol. Three thicknesses of Lucite were used: 6.1cm, 12cm, and 18.2cm, with both 55 and 65 kVp beams. The mAs was set so that detector entrance exposure (DEE) was matched between kVp values. Each technique and phantom was used with and without an antiscatter grid (focused linear grid embedded in aluminum with an 8:1 ratio). The CNR-improvement-factor was then used to determine the thickness- and technique-dependent appropriateness of grid use. Finally, the same experiment was performed via Monte Carlo simulation, integrating incident energy fluence at each detector pixel, so that effects of detector design and image processing could be isolated from physical factors upstream of the detector. Results: The physical phantom experiment demonstrated a clear improvement for the lower tube voltage (55kVp), along with substantial CNR benefits with grid use for 12–18cm phantoms. Neither trend was evident with Monte Carlo, suggesting that suboptimal quantum-detection-efficiency and automated grid-removal could explain trends in kVp and grid use, respectively. Conclusion: Physical experiments demonstrate marked improvement in CNR when using a grid for phantoms of 12 and 18cm Lucite thickness (above ∼10cm soft-tissue equivalent). This benefit is likely due to image processing, as this result was not seen with Monte Carlo.
The impact of image processing on image resolution should also be investigated, and the CNR benefit of low kVp and grid use should be weighed against the increased exposure time necessary to achieve adequate DEE.
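The CNR measurement described in this record is a ratio of ROI statistics, CNR = |mean_signal - mean_background| / std_background. A sketch on a synthetic image (an illustration, not the study's analysis code):

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between a signal ROI (e.g. an aluminum
    insert) and a background ROI: |mean_s - mean_b| / std_b."""
    s = image[signal_mask]
    b = image[background_mask]
    return float(abs(s.mean() - b.mean()) / b.std())

# Synthetic flat-field image with one higher-signal insert region
rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (64, 64))
img[16:32, 16:32] += 20.0                      # insert: +20 above background
sig = np.zeros(img.shape, bool); sig[16:32, 16:32] = True
bg = np.zeros(img.shape, bool);  bg[40:56, 40:56] = True
value = cnr(img, sig, bg)                      # roughly 20 / 5 = 4
```

The CNR-improvement factor for grid use is then simply the ratio of this quantity measured with and without the grid at matched DEE.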
Ice Processes and Growth History on Arctic and Sub-Arctic Lakes Using ERS-1 SAR Data
NASA Technical Reports Server (NTRS)
Morris, K.; Jeffries, M. O.; Weeks, W. F.
1995-01-01
A survey of ice growth and decay processes on a selection of shallow and deep sub-Arctic and Arctic lakes was conducted using radiometrically calibrated ERS-1 SAR images. Time series of radar backscatter data were compiled for selected sites on the lakes during the period of ice cover (September to June) for the years 1991-1992 and 1992-1993. A variety of lake-ice processes could be observed, and significant changes in backscatter occurred from the time of initial ice formation in autumn until the onset of the spring thaw. Backscatter also varied according to the location and depth of the lakes. The spatial and temporal changes in backscatter were most constant and predictable at the shallow lakes on the North Slope of Alaska. As a consequence, they represent the most promising sites for long-term monitoring and the detection of changes related to global warming and its effects on the polar regions.
Jones, Peter; Athaullah, Waheedah; Harper, Alana; Wells, Susan; LeFevre, James; Stewart, Joanna; Curtis, Elana; Reid, Papaarangi; Ameratunga, Shanthi
2018-05-21
A national health target for length of stay in emergency departments (ED) was introduced in 2009 to reduce crowding and improve quality of care. We aimed to determine whether the target was associated with changes in time to CT and appropriateness of CT imaging, as markers of care quality for suspected acute traumatic brain injury (TBI). We undertook a retrospective review of the case records of a random sample of people aged ≥15 years presenting to the ED with TBI from 2006 to 2013. General linear models were used to investigate changes in outcomes along with routine process times before and after the introduction of the target. Among 501 eligible cases the median (IQR) time to CT was 136 (76-247) minutes pre target versus 119 (59-209) minutes post target, p = 0.014. The proportion of appropriate imaging was similar between periods: 77.9% (95% CI 71-83%) versus 76.6% (95% CI 72-81%), p = 0.825. Interactions suggested that the time to CT and appropriateness of imaging before and after the introduction of the target varied by ethnicity, although the changes were not clinically important. Time to assessment and length of stay did not change importantly. We found no evidence of a clinically important change in time to CT or appropriateness of imaging for suspected TBI in association with the introduction of the Shorter Stays in ED (SSED) time target. Additional research with larger cohorts of Māori and Pacific participants is recommended to understand our observed patterns by ethnicity. Copyright © 2018 Elsevier Ltd. All rights reserved.
Effect of image quality on calcification detection in digital mammography
Warren, Lucy M.; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M.; Wallis, Matthew G.; Chakraborty, Dev P.; Dance, David R.; Bosmans, Hilde; Young, Kenneth C.
2012-01-01
Purpose: This study aims to investigate if microcalcification detection varies significantly when mammographic images are acquired using different image qualities, including: different detectors, dose levels, and different image processing algorithms. An additional aim was to determine how the standard European method of measuring image quality using threshold gold thickness measured with a CDMAM phantom and the associated limits in current EU guidelines relate to calcification detection. Methods: One hundred and sixty two normal breast images were acquired on an amorphous selenium direct digital (DR) system. Microcalcification clusters extracted from magnified images of slices of mastectomies were electronically inserted into half of the images. The calcification clusters had a subtle appearance. All images were adjusted using a validated mathematical method to simulate the appearance of images from a computed radiography (CR) imaging system at the same dose, from both systems at half this dose, and from the DR system at quarter this dose. The original 162 images were processed with both Hologic and Agfa (Musica-2) image processing. All other image qualities were processed with Agfa (Musica-2) image processing only. Seven experienced observers marked and rated any identified suspicious regions. Free response operating characteristic (FROC) and ROC analyses were performed on the data. The lesion sensitivity at a nonlesion localization fraction (NLF) of 0.1 was also calculated. Images of the CDMAM mammographic test phantom were acquired using the automatic setting on the DR system. These images were modified to the additional image qualities used in the observer study. The images were analyzed using automated software. In order to assess the relationship between threshold gold thickness and calcification detection a power law was fitted to the data. 
Results: There was a significant reduction in calcification detection using CR compared with DR: the alternative FROC (AFROC) area decreased from 0.84 to 0.63 and the ROC area decreased from 0.91 to 0.79 (p < 0.0001). This corresponded to a 30% drop in lesion sensitivity at a NLF equal to 0.1. Detection was also sensitive to the dose used. There was no significant difference in detection between the two image processing algorithms used (p > 0.05). It was additionally found that lower threshold gold thickness from CDMAM analysis implied better cluster detection. The measured threshold gold thickness passed the acceptable limit set in the EU standards for all image qualities except half dose CR. However, calcification detection varied significantly between image qualities. This suggests that the current EU guidelines may need revising. Conclusions: Microcalcification detection was found to be sensitive to detector and dose used. Standard measurements of image quality were a good predictor of microcalcification cluster detection. PMID:22755704
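The study's final analysis step, fitting a power law relating CDMAM threshold gold thickness to cluster detection, amounts to a straight-line fit in log-log space. A minimal sketch in Python; the data points and coefficients below are invented for illustration, since the abstract does not report the fitted values:

```python
import numpy as np

def fit_power_law(thickness, detection):
    """Fit detection = a * thickness**b by least squares in log-log space.

    `thickness` (threshold gold thickness from CDMAM analysis) and
    `detection` (an observer-study detection score) are hypothetical
    paired measurements; the paper fits a power law of this form but
    the coefficients here are not from the paper.
    """
    b, log_a = np.polyfit(np.log(thickness), np.log(detection), 1)
    return np.exp(log_a), b

# Illustrative synthetic data following detection ~ 0.3 * t**-0.5,
# consistent with "lower threshold thickness implies better detection".
t = np.array([0.08, 0.10, 0.13, 0.16])
d = 0.3 * t ** -0.5
a, b = fit_power_law(t, d)
```

A negative exponent `b` encodes the reported finding that lower threshold gold thickness implied better cluster detection.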
Effect of image quality on calcification detection in digital mammography.
Warren, Lucy M; Mackenzie, Alistair; Cooke, Julie; Given-Wilson, Rosalind M; Wallis, Matthew G; Chakraborty, Dev P; Dance, David R; Bosmans, Hilde; Young, Kenneth C
2012-06-01
Combustion behaviors of GO2/GH2 swirl-coaxial injector using non-intrusive optical diagnostics
NASA Astrophysics Data System (ADS)
GuoBiao, Cai; Jian, Dai; Yang, Zhang; NanJia, Yu
2016-06-01
This research evaluates the combustion behaviors of a single-element, swirl-coaxial injector in an atmospheric combustion chamber with gaseous oxygen and gaseous hydrogen (GO2/GH2) as the propellants. A brief simulated flow field schematic comparison between a shear-coaxial injector and the swirl-coaxial injector reveals the distribution characteristics of the temperature field and streamline patterns. Advanced optical diagnostics, i.e., OH planar laser-induced fluorescence and high-speed imaging, are simultaneously employed to determine the OH radical spatial distribution and flame fluctuations, respectively. The present study focuses on the flame structures under varying O/F mixing ratios and center oxygen swirl intensities. The combined use of several image-processing methods applied to the instantaneous OH images, including time-averaging, root-mean-square, and gradient transformations, provides detailed information regarding the distribution of the flow field. The results indicate that the shear layers anchored on the oxygen injector lip are the main zones of chemical heat release and that the O/F mixing ratio significantly affects the flame shape. Furthermore, high-speed imaging of the ignition process and of several consecutive steady states reveals that lean conditions readily drive combustion instabilities and that the center swirl intensity has a moderate influence on the flame oscillation strength. The results of this study provide a visual analysis for future optimal swirl-coaxial injector designs.
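The three image-processing operations named in this abstract (time-averaging, root-mean-square fluctuation, and gradient transformation of an image sequence) can be sketched generically with numpy. The frame contents below are synthetic; this is not the authors' code:

```python
import numpy as np

def process_stack(stack):
    """Compute time-averaged, root-mean-square (fluctuation), and gradient
    images from a stack of instantaneous frames of shape (T, H, W).
    A generic sketch of the processing steps named in the abstract."""
    mean = stack.mean(axis=0)
    rms = stack.std(axis=0)          # fluctuation intensity about the mean
    gy, gx = np.gradient(mean)       # spatial gradient of the mean image
    grad_mag = np.hypot(gx, gy)      # highlights reaction-zone boundaries
    return mean, rms, grad_mag

# Two constant synthetic frames: mean 2.0, fluctuation 1.0, zero gradient.
frames = np.stack([np.full((4, 4), v, float) for v in (1.0, 3.0)])
mean, rms, grad = process_stack(frames)
```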
Viability of imaging structures inside human dentin using dental transillumination
NASA Astrophysics Data System (ADS)
Grandisoli, C. L.; Alves-de-Souza, F. D.; Costa, M. M.; Castro, L.; Ana, P. A.; Zezell, D. M.; Lins, E. C.
2014-02-01
Dental Transillumination (DT) is a technique for imaging internal structures of teeth by detecting infrared radiation transmitted through the specimens. It has been used successfully to detect caries even though dental enamel and dentin scatter infrared radiation strongly. The literature reports that enamel's scattering coefficient is 10 to 30 times lower than dentin's; this explains why DT is useful for imaging pathologies in dental enamel, but it does not preclude its use for imaging structures or pathologies inside the dentin. There were no conclusive data in the literature about the limitations of using DT to access biomedical information of dentin. The goal of this study was to present an application of DT to imaging internal structures of dentin. Tooth slices were prepared in groups with thicknesses varying from 0.5 mm up to 2.5 mm. For imaging, an FPA InGaAs camera Xeva 1.7-320 (900-1700 nm; Xenics, Inc., Belgium) and a 3 W lamp-based broadband light source (Ocean Optics, Inc., USA) were used; bandpass optical filters at 1000+/-10 nm, 1100+/-10 nm, 1200+/-10 nm and 1300+/-50 nm were also applied for spectral selection. Images were captured at different camera exposure times, and computational processing was then applied. The best results revealed the viability of imaging dentin tissue with thickness up to 2.5 mm without a filter (900-1700 nm spectral range). Following these results, a pilot experiment using DT to detect the pulp chamber of a human incisor was performed. The new data showed the viability of imaging the pulp chamber of the specimen.
AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING
Sharif, Behzad; Bresler, Yoram
2013-01-01
We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159
Path Searching Based Crease Detection for Large Scale Scanned Document Images
NASA Astrophysics Data System (ADS)
Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun
2017-12-01
Since large-size documents are usually folded for preservation, creases occur in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. According to the imaging process of contactless scanners, the shading on the two sides of a crease usually differs considerably. Based on this observation, a convex hull based algorithm is adopted to extract the shading information of the scanned image. Then, a candidate crease path can be obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of the creases in large-size document images.
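The final Dijkstra step can be illustrated as a cheapest top-to-bottom path search over a cost image. In the paper the cost comes from the filtered shading image; the cost array and the assumption that crease pixels have low cost are illustrative here:

```python
import heapq
import numpy as np

def min_cost_vertical_path(cost):
    """Dijkstra search for the cheapest top-to-bottom path in a 2D cost
    image, moving one row down per step with optional sideways drift.
    Returns the minimum total path cost (inf if the image is empty)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    heap = [(cost[0, c], 0, c) for c in range(w)]  # start anywhere on top row
    for d, _, c in heap:
        dist[0, c] = d
    heapq.heapify(heap)
    while heap:
        d, r, c = heapq.heappop(heap)
        if d > dist[r, c]:
            continue  # stale queue entry
        if r == h - 1:
            return d  # first bottom-row pop is the global minimum
        for dc in (-1, 0, 1):  # down-left, down, down-right
            nc = c + dc
            if 0 <= nc < w:
                nd = d + cost[r + 1, nc]
                if nd < dist[r + 1, nc]:
                    dist[r + 1, nc] = nd
                    heapq.heappush(heap, (nd, r + 1, nc))
    return np.inf

img = np.ones((5, 5))
img[:, 2] = 0.1  # a dark column standing in for a crease
```

Running `min_cost_vertical_path(img)` traces the low-cost column, mimicking how the crease path is recovered from the shading image.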
Turbulent mixing induced by Richtmyer-Meshkov instability
NASA Astrophysics Data System (ADS)
Krivets, V. V.; Ferguson, K. J.; Jacobs, J. W.
2017-01-01
Richtmyer-Meshkov instability is studied in shock tube experiments with an Atwood number of 0.7. The interface is formed in a vertical shock tube using opposed gas flows, and three-dimensional random initial interface perturbations are generated by the vertical oscillation of the gas column, producing Faraday waves. Planar laser Mie scattering is used for flow visualization and for measurements of the mixing process. Experimental image sequences are recorded at a 6 kHz frame rate and processed to obtain the time-dependent variation of the integral mixing layer width. Measurements of the mixing layer width are compared with Mikaelian's [1] model in order to extract the growth exponent θ, where a fairly wide range of values is found, varying from θ ≈ 0.2 to 0.6.
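Extracting a growth exponent of this kind is commonly done by fitting the width history to a power law W(t) ≈ W0·t^θ in log-log coordinates. The sketch below reduces Mikaelian's model to a pure power law for illustration; the data are synthetic:

```python
import numpy as np

def growth_exponent(t, width):
    """Estimate theta in W(t) ~ W0 * t**theta from mixing-layer width
    measurements via a straight-line fit in log-log coordinates.
    An illustrative simplification of the model-based fit in the paper."""
    theta, _ = np.polyfit(np.log(t), np.log(width), 1)
    return theta

t = np.linspace(1.0, 5.0, 20)
w = 2.0 * t ** 0.4  # synthetic width history with theta = 0.4
theta = growth_exponent(t, w)
```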
Dielectric and piezoelectric properties of percolative three-phase piezoelectric polymer composites
NASA Astrophysics Data System (ADS)
Sundar, Udhay
Three-phase piezoelectric bulk composites were fabricated using a mix-and-cast method. The composites were comprised of lead zirconate titanate (PZT), aluminum (Al) and an epoxy matrix. The volume fractions of the PZT and Al were varied from 0.1 to 0.3 and 0.0 to 0.17, respectively. The influence of three factors on the piezoelectric and dielectric properties was observed: the inclusion of an electrically conductive filler (Al), the poling process (contact and corona), and Al surface treatment. The piezoelectric strain coefficient, d33, effective dielectric constant, εr, capacitance, C, and resistivity were measured and compared according to poling process, volume fraction of constituent phases and Al surface treatment. The maximum values of d33 were 3.475 and 1.0 pC/N for corona- and contact-poled samples respectively, for samples with volume fractions of 0.40 and 0.13 of PZT and Al (surface treated) respectively. Also, the maximum dielectric constant for the surface-treated Al samples was 411 for volume fractions of 0.40 and 0.13 for PZT and Al respectively. The percolation threshold was observed to occur at an Al volume fraction of 0.13. The composites achieved a percolated state for Al volume fractions >0.13 for both contact- and corona-poled samples. In addition, a comparative time study was conducted to examine the influence of surface treatment processing time of the Al particles. The effectiveness of the surface treatment, sample morphology and composition was observed with the aid of SEM and EDS images. These images were correlated with piezoelectric and dielectric properties. PZT-epoxy-aluminum thick films (200 μm) were also fabricated using a two-step spin coat deposition and annealing method. The PZT volume fraction was varied over 0.2, 0.3 and 0.4, and for each PZT volume fraction the aluminum volume fraction was varied from 0.1 to 0.17.
The two-step process included spin coating the first layer at 500 RPM for 30 seconds, and the second layer at 1000 RPM for 1 minute. The piezoelectric strain coefficients d33 and d31, the capacitance, and the dielectric constant were measured and studied as a function of aluminum volume fraction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bott-Suzuki, S. C.; Cordaro, S. W.; Caballero Bendixsen, L. S.
We present a study of the time-varying current density distribution in solid metallic liner experiments at the 1 MA level. Measurements are taken using an array of magnetic field probes which provide 2D triangulation of the average centroid of the drive current in the load at 3 discrete axial positions. These data are correlated with gated optical self-emission imaging which directly images the breakdown and plasma formation region. Results show that the current density is azimuthally non-uniform and changes significantly throughout the 100 ns experimental timescale. Magnetic field probes clearly show motion of the current density around the liner azimuth over 10 ns timescales. If breakdown is initiated at one azimuthal location, the current density remains non-uniform even over large spatial extents throughout the current drive. The evolution timescales are suggestive of a resistive diffusion process or uneven current distributions among simultaneously formed but discrete plasma conduction paths.
2016-09-01
The vectorization of a ray tracing program for image generation
NASA Technical Reports Server (NTRS)
Plunkett, D. J.; Cychosz, J. M.; Bailey, M. J.
1984-01-01
Ray tracing is a widely used method for producing realistic computer generated images. Ray tracing involves firing an imaginary ray from a view point, through a point on an image plane, into a three dimensional scene. The intersections of the ray with the objects in the scene determine what is visible at the point on the image plane. This process must be repeated many times, once for each point (commonly called a pixel) in the image plane. A typical image contains more than a million pixels, making this process computationally expensive. A traditional ray tracing program processes one ray at a time. In such a serial approach, as much as ninety percent of the execution time is spent computing the intersection of a ray with the surfaces in the scene. With the CYBER 205, many rays can be intersected with all the bodies in the scene with a single series of vector operations. Vectorization of this intersection process results in large decreases in computation time. The CADLAB's interest in ray tracing stems from the need to produce realistic images of mechanical parts. A high quality image of a part during the design process can increase the productivity of the designer by helping him visualize the results of his work. To be useful in the design process, these images must be produced in a reasonable amount of time. This discussion explains how the ray tracing process was vectorized and gives examples of the images obtained.
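The vectorization idea described here, intersecting many rays with a body in whole-array operations instead of a per-ray loop, can be sketched with numpy in place of CYBER 205 vector instructions. The ray-sphere quadratic below is the standard formulation, not the original program:

```python
import numpy as np

def intersect_rays_sphere(origins, dirs, center, radius):
    """Vectorized ray-sphere intersection: tests every ray against one
    sphere with a few whole-array operations. Directions must be unit
    length. Returns the nearest positive hit distance per ray (inf for
    a miss; origins inside the sphere are not handled in this sketch)."""
    oc = origins - center
    b = np.einsum("ij,ij->i", dirs, oc)           # per-ray dot products
    c = np.einsum("ij,ij->i", oc, oc) - radius**2
    disc = b * b - c                              # quadratic discriminant
    hit = disc >= 0
    t = np.full(len(origins), np.inf)
    t_near = -b - np.sqrt(np.where(hit, disc, 0.0))
    keep = hit & (t_near > 0)
    t[keep] = t_near[keep]
    return t

# One ray hits a unit sphere at the origin (t = 4); the other misses.
origins = np.array([[0.0, 0.0, -5.0], [0.0, 3.0, -5.0]])
dirs = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
t = intersect_rays_sphere(origins, dirs, np.zeros(3), 1.0)
```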
NASA Astrophysics Data System (ADS)
Islam, Atiq; Iftekharuddin, Khan M.; Ogg, Robert J.; Laningham, Fred H.; Sivakumar, Bhuvaneswari
2008-03-01
In this paper, we characterize the tumor texture in pediatric brain magnetic resonance images (MRIs) and exploit these features for automatic segmentation of posterior fossa (PF) tumors. We focus on PF tumor because of the prevalence of such tumor in pediatric patients. Due to varying appearance in MRI, we propose to model the tumor texture with a multi-fractal process, such as a multi-fractional Brownian motion (mBm). In mBm, the time-varying Holder exponent provides flexibility in modeling irregular tumor texture. We develop a detailed mathematical framework for mBm in two dimensions and propose a novel algorithm to estimate the multi-fractal structure of tissue texture in brain MRI based on wavelet coefficients. This wavelet-based multi-fractal feature, along with MR image intensity and a regular fractal feature obtained using our existing piecewise-triangular-prism-surface-area (PTPSA) method, are fused in segmenting PF tumor and non-tumor regions in brain T1, T2, and FLAIR MR images respectively. We also demonstrate a non-patient-specific automated tumor prediction scheme based on these image features. We experimentally show the tumor discriminating power of our novel multi-fractal texture along with intensity and fractal features in automated tumor segmentation and statistical prediction. To evaluate the performance of our tumor prediction scheme, we obtain ROCs and demonstrate that the curves approach a specificity of 1.0 while sacrificing minimal sensitivity. Experimental results show the effectiveness of our proposed techniques in automatic detection of PF tumors in pediatric MRIs.
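The Holder/Hurst exponent at the heart of the mBm model can be estimated from the scaling of increment variances across scales. The sketch below recovers a single global exponent from ordinary Brownian motion (H = 0.5); it is a simplified stand-in for the paper's wavelet-based estimator, which yields a time-varying exponent:

```python
import numpy as np

def hurst_from_increments(x, scales=(1, 2, 4, 8, 16)):
    """Estimate a global Holder/Hurst exponent H from the scaling law
    Var[x(t+s) - x(t)] ~ s**(2H), by regressing log-variance on log-scale.
    A simplified stand-in for the wavelet-coefficient estimator; mBm
    would require a local (time-varying) version of this estimate."""
    log_s, log_v = [], []
    for s in scales:
        inc = x[s:] - x[:-s]                  # increments at lag s
        log_s.append(np.log(s))
        log_v.append(np.log(inc.var()))
    slope, _ = np.polyfit(log_s, log_v, 1)    # slope = 2H
    return slope / 2.0

rng = np.random.default_rng(0)
bm = np.cumsum(rng.standard_normal(50_000))   # Brownian motion: H = 0.5
H = hurst_from_increments(bm)
```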
TH-CD-207B-03: How to Quantify Temporal Resolution in X-Ray MDCT Imaging?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budde, A; GE Healthcare Technologies, Madison, WI; Li, Y
Purpose: In modern CT scanners, a quantitative metric to assess temporal response, namely, to quantify the temporal resolution (TR), remains elusive. Rough surrogate metrics, such as half of the gantry rotation time for single source CT, a quarter of the gantry rotation time for dual source CT, or measurements of a motion artifact's size, shape, or intensity have previously been used. In this work, a rigorous framework which quantifies TR and a practical measurement method are developed. Methods: A motion phantom was simulated which consisted of a single rod that is in motion except during a static period at the temporal center of the scan, termed the TR window. If the image of the motion scan has negligible motion artifacts compared to an image from a totally static scan, then the system has a TR no worse than the TR window used. By repeating this comparison with varying TR windows, the TR of the system can be accurately determined. Motion artifacts were also visually assessed and the TR was measured across varying rod motion speeds, directions, and locations. Noiseless fan beam acquisitions were simulated and images were reconstructed with a short-scan image reconstruction algorithm. Results: The size, shape, and intensity of motion artifacts varied when the rod speed, direction, or location changed. TR measured using the proposed method, however, was consistent across rod speeds, directions, and locations. Conclusion: Since motion artifacts vary depending upon the motion speed, direction, and location, they are not suitable for measuring TR. In this work, a CT system with a specified TR is defined as having the ability to produce a static image with negligible motion artifacts, no matter what motion occurs outside of a static window of width TR. This framework allows for practical measurement of temporal resolution in clinical CT imaging systems. Funding support: GE Healthcare; Conflict of Interest: Employee, GE Healthcare.
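The measurement loop described in the Methods section (shrink the static window until the motion-scan image stops matching the fully static image) can be sketched as follows. The `reconstruct` callable and the toy artifact model are invented stand-ins for the simulated fan-beam acquisition and short-scan reconstruction:

```python
import numpy as np

def measure_tr(reconstruct, static_window_widths, tol=1e-3):
    """Return the smallest static-window width for which the motion-scan
    image matches the fully static image within `tol` (RMS difference).
    `reconstruct(width)` is a user-supplied stand-in for simulating the
    scan with the rod held still for `width` seconds at the scan center."""
    static_ref = reconstruct(np.inf)  # fully static acquisition
    for w in sorted(static_window_widths):
        motion_img = reconstruct(w)
        if np.sqrt(np.mean((motion_img - static_ref) ** 2)) <= tol:
            return w  # artifacts negligible: TR is no worse than w
    return None

# Toy model: artifacts vanish once the window covers a 0.25 s acquisition.
def toy_reconstruct(width, acq_time=0.25):
    base = np.zeros((8, 8))
    if width < acq_time:
        base += (acq_time - width)  # residual motion artifact
    return base

tr = measure_tr(toy_reconstruct, [0.1, 0.2, 0.3])
```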
Imaging as characterization techniques for thin-film cadmium telluride photovoltaics
NASA Astrophysics Data System (ADS)
Zaunbrecher, Katherine
The goal of increasing the efficiency of solar cell devices is a universal one. Increased photovoltaic (PV) performance means an increase in competition with other energy technologies. One way to improve PV technologies is to develop rapid, accurate characterization tools for quality control. Imaging techniques developed over the past decade are beginning to fill that role. Electroluminescence (EL), photoluminescence (PL), and lock-in thermography are three types of imaging implemented in this study to provide a multifaceted approach to studying imaging as applied to thin-film CdTe solar cells. Images provide spatial information about cell operation, which in turn can be used to identify defects that limit performance. This study began with developing EL, PL, and dark lock-in thermography (DLIT) for CdTe. Once imaging data were acquired, luminescence and thermography signatures of non-uniformities that disrupt the generation and collection of carriers were identified and cataloged. Additional data acquisition and analysis were used to determine luminescence response to varying operating conditions. This includes acquiring spectral data, varying excitation conditions, and correlating luminescence to device performance. EL measurements show variations in a cell's local voltage, which include inhomogeneities in the transparent-conductive oxide (TCO) front contact, CdS window layer, and CdTe absorber layer. EL signatures include large gradients, local reduction of luminescence, and local increases in luminescence on the interior of the device as well as bright spots located on the cell edges. The voltage bias and spectral response were analyzed to determine the response of these non-uniformities and surrounding areas. PL images of CdTe have not shown the same level of detail and features compared to their EL counterparts. 
Many of the signatures arise from reflections and severe inhomogeneities, but the technique is limited by the external illumination source used to excite carriers. Measurements on unfinished CdS and CdTe films reveal changes in signal after post-deposition processing treatments. DLIT images contained heat signatures arising from defect-related current crowding. Forward- and reverse-bias measurements revealed hot spots related to shunt and weak-diode defects. Modeling and previous studies done on Cu(In,Ga)Se2 thin-film solar cells aided in identifying the physical causes of these thermographic and luminescence signatures. Imaging data were also coupled with other characterization techniques to provide a more comprehensive examination of nonuniform features and their origins and effects on device performance. These techniques included light-beam-induced-current (LBIC) measurements, which provide spatial quantum efficiency maps of the cell at varying resolutions, as well as time-resolved photoluminescence and spectral PL mapping. Local drops in quantum efficiency seen in LBIC typically corresponded with reductions in EL signal while minority-carrier lifetime values acquired by time-resolved PL measurements correlate with PL intensity.
Coherent hybrid electromagnetic field imaging
Cooke, Bradly J [Jemez Springs, NM; Guenther, David C [Los Alamos, NM
2008-08-26
An apparatus and corresponding method for coherent hybrid electromagnetic field imaging of a target, where an energy source is used to generate a propagating electromagnetic beam, an electromagnetic beam splitting means to split the beam into two or more coherently matched beams of about equal amplitude, and where the spatial and temporal self-coherence between each of the two or more coherently matched beams is preserved. Two or more differential modulation means are employed to modulate each of the two or more coherently matched beams with a time-varying polarization, frequency, phase, and amplitude signal. An electromagnetic beam combining means is used to coherently combine said two or more coherently matched beams into a coherent electromagnetic beam. One or more electromagnetic beam controlling means are used for collimating, guiding, or focusing the coherent electromagnetic beam. One or more apertures are used for transmitting and receiving the coherent electromagnetic beam to and from the target. A receiver is used that is capable of square-law detection of the coherent electromagnetic beam. A waveform generator is used that is capable of generation and control of time-varying polarization, frequency, phase, or amplitude modulation waveforms and sequences. A means of synchronizing the time-varying waveforms is used between the energy source and the receiver. Finally, a means of displaying the images created by the interaction of the coherent electromagnetic beam with the target is employed.
Real-time hyperspectral imaging for food safety applications
USDA-ARS?s Scientific Manuscript database
Multispectral imaging systems with selected bands can commonly be used for real-time applications of food processing. Recent research has demonstrated several image processing methods including binning, noise removal filter, and appropriate morphological analysis in real-time mode can remove most fa...
Atmospheric imaging results from the Mars Exploration Rovers
NASA Astrophysics Data System (ADS)
Lemmon, M.; Athena Science Team
The Athena science payload of the Spirit and Opportunity Mars Exploration Rovers contains instruments capable of measuring radiometric properties of the Martian atmosphere in the visible and the thermal infrared. Remote sensing instruments include Pancam, a color panoramic camera covering 0.4-1.0 microns, and Mini-TES, a thermal infrared spectrometer covering 5-29 microns. Results from atmospheric imaging by Pancam will be covered here. Visible and near-infrared aerosol opacity is monitored by direct solar imaging. Early results show dust opacity near 1 when both rovers landed. Both Spirit and Opportunity have seen dust opacity fall with time, somewhat faster at Spirit's Gusev crater landing site. Diurnal variations are also being monitored at both sites. There is no direct probe of the dust's vertical distribution, but images of the Sun near the horizon and of the twilight will provide constraints on the dust distribution. Dust optical properties and a cross-section weighted aerosol size will be estimated from Pancam images of the sky at varying geometries and times of day. A series of sky imaging sequences has been run with varying illumination geometry. The observations are similar to those reported for Mars Pathfinder.
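The aerosol opacity monitoring described here, deriving dust optical depth from direct solar images, follows Beer's law: the measured solar intensity falls off as exp(-τ·m), where m is the airmass along the slant path. A minimal sketch; the calibration value and the plane-parallel airmass model are simplifying assumptions, not the mission's actual pipeline:

```python
import numpy as np

def optical_depth(measured, top_of_atmosphere, zenith_angle_deg):
    """Aerosol optical depth from a direct solar measurement via Beer's
    law, I = I0 * exp(-tau * m), with plane-parallel airmass
    m = 1/cos(zenith). `top_of_atmosphere` (I0) is an assumed
    calibration value; real retrievals use a calibrated solar flux
    and a more careful airmass model at large zenith angles."""
    m = 1.0 / np.cos(np.radians(zenith_angle_deg))
    return -np.log(measured / top_of_atmosphere) / m

# Airmass 2 (zenith 60 deg) and two e-foldings of extinction give tau = 1,
# comparable to the "dust opacity near 1" reported at landing.
tau = optical_depth(measured=np.exp(-2.0), top_of_atmosphere=1.0,
                    zenith_angle_deg=60.0)
```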
Spatio-temporal imaging of the hemoglobin in the compressed breast with diffuse optical tomography
NASA Astrophysics Data System (ADS)
Boverman, Gregory; Fang, Qianqian; Carp, Stefan A.; Miller, Eric L.; Brooks, Dana H.; Selb, Juliette; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.
2007-07-01
We develop algorithms for imaging the time-varying optical absorption within the breast given diffuse optical tomographic data collected over a time span that is long compared to the dynamics of the medium. Multispectral measurements allow for the determination of the time-varying total hemoglobin concentration and of oxygen saturation. To facilitate the image reconstruction, we decompose the hemodynamics in time into a linear combination of spatio-temporal basis functions, the coefficients of which are estimated using all of the data simultaneously, making use of a Newton-based nonlinear optimization algorithm. The solution of the extremely large least-squares problem which arises in computing the Newton update is obtained iteratively using the LSQR algorithm. A Laplacian spatial regularization operator is applied, and, in addition, we make use of temporal regularization which tends to encourage similarity between the images of the spatio-temporal coefficients. Results are shown for an extensive simulation, in which we are able to image and quantify localized changes in both total hemoglobin concentration and oxygen saturation. Finally, a breast compression study has been performed for a normal breast cancer screening subject, using an instrument which allows for highly accurate co-registration of multispectral diffuse optical measurements with an x-ray tomosynthesis image of the breast. We are able to quantify the global return of blood to the breast following compression, and, in addition, localized changes are observed which correspond to the glandular region of the breast.
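The core reconstruction idea, expanding the time-varying unknowns in a few temporal basis functions and estimating the coefficients from all time frames jointly with an iterative least-squares solver, can be sketched on a toy linearized problem. The sensitivity matrix, basis, and sizes below are illustrative stand-ins for the paper's nonlinear forward model and Laplacian regularizer; the `damp` term plays the role of regularization:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

# Toy linearized problem: data y(t) = J @ mu(t), with the time-varying
# absorption expanded as mu(t) = sum_k c_k * b_k(t).
rng = np.random.default_rng(1)
n_vox, n_meas, n_t, n_basis = 6, 10, 8, 2
J = rng.standard_normal((n_meas, n_vox))      # stand-in sensitivity matrix
t = np.linspace(0.0, 1.0, n_t)
B = np.stack([np.ones_like(t), t])            # constant + linear-drift basis
c_true = rng.standard_normal((n_basis, n_vox))

# Stack all time frames into one big linear system in the coefficients c,
# so every measurement constrains the basis coefficients simultaneously.
A = np.vstack([np.hstack([B[k, i] * J for k in range(n_basis)])
               for i in range(n_t)])
y = A @ c_true.ravel()

# Solve iteratively with LSQR; `damp` is a crude stand-in for the
# spatial/temporal regularization used in the paper.
c_hat = lsqr(A, y, damp=1e-8)[0].reshape(n_basis, n_vox)
```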
Doutsi, Effrosyni; Fillatre, Lionel; Antonini, Marc; Gaulmin, Julien
2018-07-01
This paper introduces a novel filter, which is inspired by the human retina. The human retina consists of three different layers: the Outer Plexiform Layer (OPL), the inner plexiform layer, and the ganglionic layer. Our inspiration is the linear transform which takes place in the OPL and has been mathematically described by the neuroscientific model "virtual retina." This model is the cornerstone used to derive the non-separable spatio-temporal OPL retina-inspired filter, referred to hereafter simply as the retina-inspired filter, studied in this paper. This filter is connected to the dynamic behavior of the retina, which enables the retina to increase the sharpness of the visual stimulus during filtering, before its transmission to the brain. We establish that this retina-inspired transform forms a group of spatio-temporal Weighted Difference of Gaussian (WDoG) filters when it is applied to a still image visible for a given time. We analyze the spatial frequency bandwidth of the retina-inspired filter with respect to time, and show that the WDoG spectrum varies from a lowpass filter to a bandpass filter. Therefore, as time increases, the retina-inspired filter makes it possible to extract different kinds of information from the input image. Finally, we discuss the benefits of using the retina-inspired filter in image processing applications such as edge detection and compression.
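A static difference of Gaussians illustrates the building block behind the spatio-temporal WDoG family (a sketch only; the weights `wc` and `ws` and the two scales are illustrative, not the paper's time-varying values):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at 3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur (zero padding at the borders via np.convolve)."""
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def wdog(img, sigma_center, sigma_surround, wc=1.0, ws=0.9):
    """Weighted difference of Gaussians: wc*G(sigma_c) - ws*G(sigma_s).
    Varying the weights and scales over time yields the lowpass-to-bandpass
    transition described in the abstract."""
    return wc * blur(img, sigma_center) - ws * blur(img, sigma_surround)

# Demo: the filter responds at a step edge and stays near zero in flat regions.
edge = np.zeros((21, 21))
edge[:, 10:] = 1.0
response = wdog(edge, 1.0, 2.0)
```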
Study of pipe thickness loss using a neutron radiography method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mohamed, Abdul Aziz; Wahab, Aliff Amiru Bin; Yazid, Hafizal B.
2014-02-12
The purpose of this preliminary work is to study thickness changes in objects using neutron radiography, and in doing so to establish the radiographic technique. The experiment was performed at the NUR-2 facility of the TRIGA research reactor at the Malaysian Nuclear Agency, Malaysia. Test samples of varying materials were radiographed using the direct technique, and the radiographic images were recorded on nitrocellulose film. The films obtained were digitized for processing and analysis. Digital processing was performed on the images using the Isee! software to produce better images for analysis. The thickness changes in the images were measured and compared with the real thickness of the objects. From the data collected, the percentage differences between measured and real thickness are below 2%, a very low deviation from the original values, which verifies the neutron radiography technique used in this project.
Color sensitivity of the multi-exposure HDR imaging process
NASA Astrophysics Data System (ADS)
Lenseigne, Boris; Jacobs, Valéry Ann; Withouck, Martijn; Hanselaer, Peter; Jonker, Pieter P.
2013-04-01
Multi-exposure high dynamic range (HDR) imaging builds HDR radiance maps by stitching together different views of the same scene with varying exposures. Practically, this process involves converting raw sensor data into low dynamic range (LDR) images, estimating the camera response curves, and using them to recover the irradiance for every pixel. During the export, white balance settings and image stitching are applied, both of which influence the color balance in the final image. In this paper, we use a calibrated quasi-monochromatic light source, an integrating sphere, and a spectrograph to evaluate and compare the average spectral response of the image sensor. We finally draw some conclusions about the color consistency of HDR imaging and the additional steps necessary to use multi-exposure HDR imaging as a tool to measure physical quantities such as radiance and luminance.
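The irradiance-recovery step can be sketched for a linear-response camera, where each exposure contributes a weighted radiance estimate (the Debevec-style hat weighting below is a common choice and an assumption here, not taken from this paper):

```python
import numpy as np

def merge_hdr(ldr_stack, exposure_times):
    """Recover a relative radiance map from multiple exposures of a static
    scene. Assumes a linear camera response on [0, 1]; a hat weight
    de-emphasizes pixels near the ends of the range, where clipping and
    noise dominate."""
    eps = 1e-6
    num = np.zeros_like(ldr_stack[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(ldr_stack, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weighting on [0, 1] values
        num += w * img / t                   # this exposure's radiance estimate
        den += w
    return num / (den + eps)

# Two exposures of the same scene, one at half the exposure time.
radiance = merge_hdr([np.array([[0.2, 0.4]]),
                      np.array([[0.1, 0.2]])], [1.0, 0.5])
```

With a non-linear response curve, the measured values would first be mapped back through the estimated inverse response before this weighted average, which is exactly where the color-balance effects studied in the paper enter.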
Application of Dynamic Speckle Techniques in Monitoring Biofilms Drying Process
NASA Astrophysics Data System (ADS)
Enes, Adilson M.; Júnior, Roberto A. Braga; Dal Fabbro, Inácio M.; da Silva, Washington A.; Pereira, Joelma
2008-04-01
Horticultural crops in Brazil exhibit losses far greater than grains; these losses are associated with inappropriate maturation, mechanical bruising, infestation by microorganisms, wilting, etc. Appropriate packing prevents excessive mass loss due to transpiration and respiration by controlling gas exchange with the outside environment. Common packing materials include plastic films, waxes and biofilms. Although research on edible films and biopolymers has increased in recent years to meet food-industry demands while avoiding environmental problems, little effort has been devoted to investigating biofilm physical properties. These properties, such as drying time and biofilm interactions with the environment, are of basic importance. This work aimed to contribute to the development of a methodology to evaluate the drying time of yucca (Manihot vulgaris) based biofilms, supported by a biospeckle technique. Biospeckle is a phenomenon generated by a laser beam scattered on a dynamically active surface, producing a time-varying pattern whose variation is proportional to the surface activity level. By capturing and processing the biospeckle image, it is possible to attribute a numerical quantity to the surface bioactivity. Materials exhibiting high moisture content will also show high activity, which supports the drying time determination. Tests were set up by placing biofilm samples on polyethylene plates and exposing them to the laser at four-hour intervals to capture the pattern images, generating the Intensities Dispersion Modulus. Results indicate that the proposed methodology is applicable for determining biofilm drying time as well as vapor losses to the environment.
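The abstract does not define the Intensities Dispersion Modulus; the generalized-differences measure below is a standard alternative biospeckle descriptor that captures the same idea, pixel-wise temporal variation of the speckle pattern:

```python
import numpy as np

def activity_map(frames):
    """Generalized-differences biospeckle activity: for each pixel, average the
    absolute intensity difference over all pairs of frames. High values
    indicate an active (e.g. still-wet) surface; near-zero values indicate a
    static, dried surface."""
    frames = np.asarray(frames, dtype=float)
    n = len(frames)
    act = np.zeros(frames.shape[1:])
    for i in range(n):
        for j in range(i + 1, n):
            act += np.abs(frames[i] - frames[j])
    return act / (n * (n - 1) / 2)

# Demo: a frozen speckle pattern scores zero; a changing one scores high.
static = [np.ones((4, 4))] * 3
moving = [np.ones((4, 4)) * i for i in range(3)]
```

Tracking the spatial mean of such a map across the four-hour captures gives a drying curve: activity falls as moisture, and hence surface bioactivity, decreases.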
NASA Astrophysics Data System (ADS)
Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.
2018-01-01
Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
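The core assumption, a time-invariant quiet-Sun centre-to-limb variation, suggests a calibration of the following shape: estimate an azimuthally averaged radial intensity profile and divide it out. This is a simplified sketch; the actual method's artefact handling and photometric calibration are considerably more involved:

```python
import numpy as np

def clv_normalize(img, cx, cy, r_sun, nbins=20):
    """Divide a solar disc image by its azimuthally averaged centre-to-limb
    profile. Assumes the disc centre (cx, cy) and radius r_sun are known; the
    median per radial bin approximates the quiet-Sun profile, since plage and
    network cover a small fraction of each annulus."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - cx, yy - cy) / r_sun
    bins = np.minimum((r * nbins).astype(int), nbins - 1)
    profile = np.array([np.median(img[(bins == b) & (r < 1)])
                        for b in range(nbins)])
    flat = img / profile[bins]
    flat[r >= 1] = 0.0   # ignore off-disc pixels
    return flat

# Demo: a synthetic disc with a linear limb darkening flattens to ~1.
yy, xx = np.indices((101, 101))
rr = np.hypot(xx - 50, yy - 50) / 45.0
synthetic = np.where(rr < 1, 1.0 - 0.5 * rr, 0.0)
flattened = clv_normalize(synthetic, 50, 50, 45.0)
```

On such a contrast image, plage and network stand out as values well above 1, which is what makes consistent area measurements across archives possible.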
NASA Technical Reports Server (NTRS)
Cheng, Li-Jen (Inventor); Liu, Tsuen-Hsi (Inventor)
1990-01-01
A method and apparatus are disclosed for detecting and tracking moving objects in a noisy environment cluttered with fast- and slow-moving objects and other time-varying background. A pair of phase conjugate light beams carrying the same spatial information cancel each other through an image subtraction process in a phase conjugate interferometer, wherein gratings are formed in a fast photorefractive phase conjugate mirror material. In the steady state, there is no output. When the optical path of one of the two phase conjugate beams is suddenly changed, the return beam loses its phase conjugate nature and the interferometer is out of balance, resulting in an observable output. The observable output lasts until the phase conjugate nature of the beam has recovered; the observable time of the output signal is roughly equal to the formation time of the grating. If the optical path changing time is slower than the formation time, the change of optical path becomes unobservable, because the index grating can follow the change. Thus, objects traveling at speeds which result in a path changing time slower than the formation time are not observable and do not clutter the output image view.
Performance enhancement of various real-time image processing techniques via speculative execution
NASA Astrophysics Data System (ADS)
Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.
1996-03-01
In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
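Control-flow speculation of the kind surveyed above can be illustrated with a processing stage started early under a predicted parameter value, with rollback on misprediction (illustrative only; the paper targets compression, edge detection, morphological filters, and blob recognition rather than this toy stage):

```python
from concurrent.futures import ThreadPoolExecutor

def process(frame, threshold):
    # Stand-in for an expensive image-processing stage.
    return [abs(v) > threshold for v in frame]

def speculative_process(frame, predicted_threshold, get_actual_threshold):
    """Start work early under an assumed control value; roll back (recompute)
    only if the assumption turns out to be wrong. On a correct prediction the
    stage's latency overlaps with obtaining the actual value."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        speculative = pool.submit(process, frame, predicted_threshold)
        actual = get_actual_threshold()   # meanwhile, the real value arrives
        if actual == predicted_threshold:
            return speculative.result()   # speculation paid off
    return process(frame, actual)         # mispredicted: discard and redo

result = speculative_process([1, -5, 3], 2, lambda: 2)
```

For real-time use, the safety condition is that the rollback path still fits the deadline, which is why speculation can improve average-case and only sometimes worst-case execution time.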
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Nikzad, Shouleh (Inventor); Monacos, Steve P. (Inventor); Hoenk, Michael E. (Inventor)
2013-01-01
Apparatuses and methods are disclosed that create a synthetic fovea in order to identify and highlight interesting portions of an image for further processing and rapid response. Synthetic foveal imaging implements a parallel processing architecture that uses reprogrammable logic to implement embedded, distributed, real-time foveal image processing from different sensor types while simultaneously allowing for lossless storage and retrieval of raw image data. Real-time, distributed, adaptive processing of multi-tap image sensors with coordinated processing hardware used for each output tap is enabled. In mosaic focal planes, a parallel-processing network can be implemented that treats the mosaic focal plane as a single ensemble rather than a set of isolated sensors. Various applications are enabled for imaging and robotic vision where processing and responding to enormous amounts of data quickly and efficiently is important.
Fission gas bubble identification using MATLAB's image processing toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.; King, J.; Keiser, Jr., D.
Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person to person or sample to sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods.
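The Sauvola rule mentioned above thresholds each pixel at T = m·(1 + k·(s/R − 1)), where m and s are the local mean and standard deviation. A direct, unoptimized sketch for images scaled to [0, 1], with illustrative values of k and R:

```python
import numpy as np

def sauvola_threshold(img, window=5, k=0.2, R=0.5):
    """Sauvola adaptive threshold: pixel is foreground if its value exceeds
    T = m * (1 + k * (s / R - 1)) over a local window. Plain loops for
    clarity; production code would use integral images for m and s."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=bool)
    r = window // 2
    for y in range(h):
        for x in range(w):
            patch = img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            m, s = patch.mean(), patch.std()
            out[y, x] = img[y, x] > m * (1 + k * (s / R - 1))
    return out

# Demo: a bright square on a dark background is segmented cleanly.
blob = np.zeros((15, 15))
blob[5:10, 5:10] = 1.0
binary = sauvola_threshold(blob)
```

In low-contrast regions s is small, so the threshold drops below the local mean, which is what makes the rule robust to the variable-contrast microstructure images described above.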
EPR oximetry in three spatial dimensions using sparse spin distribution
NASA Astrophysics Data System (ADS)
Som, Subhojit; Potter, Lee C.; Ahmad, Rizwan; Vikram, Deepti S.; Kuppusamy, Periannan
2008-08-01
A method is presented to use continuous wave electron paramagnetic resonance imaging for rapid measurement of oxygen partial pressure in three spatial dimensions. A particulate paramagnetic probe is employed to create a sparse distribution of spins in a volume of interest. Information encoding location and spectral linewidth is collected by varying the spatial orientation and strength of an applied magnetic gradient field. Data processing exploits the spatial sparseness of spins to detect voxels with nonzero spin and to estimate the spectral linewidth for those voxels. The parsimonious representation of spin locations and linewidths permits an order of magnitude reduction in data acquisition time, compared to four-dimensional tomographic reconstruction using traditional spectral-spatial imaging. The proposed oximetry method is experimentally demonstrated for a lithium octa-n-butoxy-naphthalocyanine (LiNc-BuO) probe using an L-band EPR spectrometer.
Fully automated motion correction in first-pass myocardial perfusion MR image sequences.
Milles, Julien; van der Geest, Rob J; Jerosch-Herold, Michael; Reiber, Johan H C; Lelieveldt, Boudewijn P F
2008-11-01
This paper presents a novel method for registration of cardiac perfusion magnetic resonance imaging (MRI). The presented method is capable of automatically registering perfusion data, using independent component analysis (ICA) to extract physiologically relevant features together with their time-intensity behavior. A time-varying reference image mimicking intensity changes in the data of interest is computed based on the results of that ICA. This reference image is used in a two-pass registration framework. Qualitative and quantitative validation of the method is carried out using 46 clinical quality, short-axis, perfusion MR datasets comprising 100 images each. Despite varying image quality and motion patterns in the evaluation set, validation of the method showed a reduction of the average left ventricle (LV) motion from 1.26+/-0.87 to 0.64+/-0.46 pixels. Time-intensity curves are also improved after registration, with an average error reduced from 2.65+/-7.89% to 0.87+/-3.88% between registered data and the manual gold standard. Comparison of clinically relevant parameters computed using registered data and the manual gold standard shows a good agreement. Additional tests with a simulated free-breathing protocol showed robustness against considerable deviations from a standard breathing protocol. We conclude that this fully automatic ICA-based method shows an accuracy, a robustness and a computation speed adequate for use in a clinical environment.
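The key idea, a time-varying reference image assembled from ICA component maps and their time-intensity curves, can be sketched as follows (the component maps and curves here are fabricated stand-ins for actual ICA output):

```python
import numpy as np

# Hypothetical ICA output: spatial component maps and their time-intensity
# curves. In the paper these come from ICA of the perfusion sequence itself.
h, w, n_frames = 8, 8, 5
rv_map = np.zeros((h, w)); rv_map[2:4, 2:4] = 1.0   # "right ventricle" region
lv_map = np.zeros((h, w)); lv_map[5:7, 5:7] = 1.0   # "left ventricle" region
rv_curve = np.array([0.0, 1.0, 0.4, 0.2, 0.1])      # contrast reaches RV first
lv_curve = np.array([0.0, 0.1, 0.5, 1.0, 0.6])      # ... then the LV

def reference_frame(t):
    """Reference image for frame t: component maps weighted by their curves,
    mimicking the contrast-driven intensity changes so that registration is
    not fooled by them."""
    return rv_curve[t] * rv_map + lv_curve[t] * lv_map

reference = np.stack([reference_frame(t) for t in range(n_frames)])
```

Each acquired frame is then registered against its matching reference frame, so residual misalignment reflects motion rather than contrast passage.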
A contemporary perspective on capitated reimbursement for imaging services.
Schwartz, H W
1995-01-01
Capitation ensures predictability of healthcare costs, requires acceptance of a premium in return for providing all required medical services and defines the actual dollar amount paid to a physician or hospital on a per member per month basis for a service or group of services. Capitation is expected to dramatically affect the marketplace in the near future, as private enterprise demands lower, more stable healthcare costs. Capitation requires detailed quantitative and financial data, including: eligibility and benefits determination, encounter processing, referral management, claims processing, case management, physician compensation, insurance management functions, outcomes reporting, performance management and cost accounting. It is important to understand actuarial risk and capitation marketing when considering a capitation contract. Also, capitated payment methodologies may vary to include modified fee-for-service, incentive pay, risk pool redistributions, merit, or a combination. Risk is directly related to the ability to predict utilization and unit cost of imaging services provided to a specific insured population. In capitated environments, radiologists will have even less control over referrals than they have today and will serve many more "covered lives"; long-term relationships with referring physicians will continue to evaporate; and services will be provided under exclusive, multi-year contracts. In addition to intensified use of technology for image transfer, telecommunications and sophisticated data processing and tracking systems, imaging departments must continue to provide the greatest amount of appropriate diagnostic information in a timely fashion at the lowest feasible cost and risk to the patient.
Digital Image Processing Overview For Helmet Mounted Displays
NASA Astrophysics Data System (ADS)
Parise, Michael J.
1989-09-01
Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.
Mirror Image Confusability in Adults.
ERIC Educational Resources Information Center
Wolff, Peter
Several studies have indicated that children have difficulty differentiating mirror-image stimuli. In the present study adults were required to classify pairs of horseshoe stimuli as same or different. Response times were compared for stimulus pairs that varied in orientation (left-right vs up-down) and spatial plane of the pair (horizontal vs.…
2016-12-14
This image, taken by the JunoCam imager on NASA's Juno spacecraft, highlights the seventh of eight features forming a 'string of pearls' on Jupiter -- massive counterclockwise rotating storms that appear as white ovals in the gas giant's southern hemisphere. Since 1986, these white ovals have varied in number from six to nine. There are currently eight white ovals visible. The image was taken on Dec. 11, 2016, at 9:27 a.m. PST (12:27 p.m. EST) as the Juno spacecraft performed its third close flyby of the planet. At the time the image was taken, the spacecraft was about 15,300 miles (24,600 kilometers) from Jupiter. http://photojournal.jpl.nasa.gov/catalog/PIA21219
Region of Interest Imaging for a General Trajectory with the Rebinned BPF Algorithm*
Bian, Junguo; Xia, Dan; Sidky, Emil Y; Pan, Xiaochuan
2010-01-01
The back-projection-filtration (BPF) algorithm has been applied to image reconstruction for cone-beam configurations with general source trajectories. The BPF algorithm can reconstruct 3-D region-of-interest (ROI) images from data containing truncations. However, like many other existing algorithms for cone-beam configurations, the BPF algorithm involves a back-projection with a spatially varying weighting factor, which can result in the non-uniform noise levels in reconstructed images and increased computation time. In this work, we propose a BPF algorithm to eliminate the spatially varying weighting factor by using a rebinned geometry for a general scanning trajectory. This proposed BPF algorithm has an improved noise property, while retaining the advantages of the original BPF algorithm such as minimum data requirement. PMID:20617122
Real-time computation of parameter fitting and image reconstruction using graphical processing units
NASA Astrophysics Data System (ADS)
Locans, Uldis; Adelmann, Andreas; Suter, Andreas; Fischer, Jannis; Lustermann, Werner; Dissertori, Günther; Wang, Qiulin
2017-06-01
In recent years graphical processing units (GPUs) have become a powerful tool in scientific computing. Their potential to speed up highly parallel applications brings the power of high performance computing to a wider range of users. However, programming these devices and integrating their use in existing applications is still a challenging task. In this paper we examined the potential of GPUs for two different applications. The first application, created at Paul Scherrer Institut (PSI), is used for parameter fitting during data analysis of μSR (muon spin rotation, relaxation and resonance) experiments. The second application, developed at ETH, is used for PET (Positron Emission Tomography) image reconstruction and analysis. Applications currently in use were examined to identify parts of the algorithms in need of optimization. Efficient GPU kernels were created in order to allow applications to use a GPU, to speed up the previously identified parts. Benchmarking tests were performed in order to measure the achieved speedup. During this work, we focused on single GPU systems to show that real time data analysis of these problems can be achieved without the need for large computing clusters. The results show that the currently used application for parameter fitting, which uses OpenMP to parallelize calculations over multiple CPU cores, can be accelerated around 40 times through the use of a GPU. The speedup may vary depending on the size and complexity of the problem. For PET image analysis, the obtained speedups of the GPU version were more than 40× compared to a single-core CPU implementation. The achieved results show that it is possible to improve the execution time by orders of magnitude.
First imaging Fourier-transform spectral measurements of detonation in an internal combustion engine
NASA Astrophysics Data System (ADS)
Gross, Kevin C.; Borel, Chris; White, Allen; Sakai, Stephen; DeVasher, Rebecca; Perram, Glen P.
2010-08-01
The Telops Hyper-Cam midwave (InSb, 1.5-5.5 μm) imaging Fourier-transform spectrometer (IFTS) observed repeated detonations in an ethanol-powered internal combustion (IC) engine. The IC engine is a Megatech Corporation MEG 150 with a 1 in. bore, 4 in. stroke, and a compression ratio of 3:1. The IC combustion cylinder is made from sapphire, permitting observation in the visible and infrared. From a distance of 3 m, the IFTS imaged the combustion cylinder on a 64×32 pixel array with each pixel covering a 0.1×0.1 cm² area. More than 14,000 interferograms were collected at a rate of 16 Hz. The maximum optical path difference of the interferograms was 0.017 cm, corresponding to an unapodized spectral resolution of 36 cm⁻¹. Engine speed was varied between 600-1200 RPM to de-correlate the observation time scale from the occurrence of detonations. A method is devised to process the ensemble of interferograms which takes advantage of the DC component so that the time history of the combustion spectrum can be recovered at each pixel location. Preliminary results of this analysis will be presented.
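Recovering a spectrum from each pixel's interferogram reduces to a Fourier transform over optical path difference (OPD). A single-line sketch (the line position and sampling are hypothetical; only the 0.017 cm maximum OPD is taken from the abstract):

```python
import numpy as np

# A Michelson-type interferogram of a single emission line: the AC part is a
# cosine in OPD; the DC part carries the total in-band intensity.
n = 512
max_opd = 0.017                      # cm, as in the instrument described above
opd = np.linspace(-max_opd, max_opd, n)
wavenumber = 2000.0                  # cm^-1, hypothetical line position
interferogram = 1.0 + np.cos(2 * np.pi * wavenumber * opd)

# Removing the DC term and Fourier transforming recovers the spectrum.
ac = interferogram - interferogram.mean()
spectrum = np.abs(np.fft.rfft(ac))
freqs = np.fft.rfftfreq(n, d=opd[1] - opd[0])   # cycles per cm = cm^-1
peak = freqs[np.argmax(spectrum)]
```

With ±0.017 cm of OPD, the bin spacing here is about 29 cm⁻¹, consistent with the 36 cm⁻¹ unapodized resolution quoted above; the DC component discarded in this sketch is exactly what the paper's method exploits to track fast combustion events between full scans.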
Hyperspectral imaging for food processing automation
NASA Astrophysics Data System (ADS)
Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.
2002-11-01
This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, and its potential application for real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line scan camera with a prism-grating-prism spectrograph, fiber optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically the band ratio of dual-wavelength (565 nm/517 nm) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system comprising a common aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated with the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time required for the multispectral images captured by the common aperture camera was approximately 251 ms, or 3.99 frames/sec. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%; however, many false-positive spots, which cause system errors, were also detected.
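The dual-wavelength band-ratio step can be sketched as follows (the ratio threshold is illustrative; the paper derives its decision rule from hyperspectral training data):

```python
import numpy as np

def contamination_mask(band_565, band_517, ratio_threshold=1.05):
    """Flag pixels whose 565 nm / 517 nm reflectance ratio exceeds a
    threshold. The threshold value here is a hypothetical placeholder."""
    eps = 1e-6                       # guard against division by zero
    ratio = band_565 / (band_517 + eps)
    return ratio > ratio_threshold

# Demo with tiny 2x2 reflectance "images": only the pixel whose 565 nm
# reflectance clearly exceeds its 517 nm reflectance is flagged.
b565 = np.array([[0.50, 0.30], [0.42, 0.20]])
b517 = np.array([[0.48, 0.31], [0.30, 0.21]])
mask = contamination_mask(b565, b517)
```

A per-pixel ratio-and-threshold rule like this is what makes the 251 ms frame budget quoted above plausible on line-scan hardware.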
A catalyzing phantom for reproducible dynamic conversion of hyperpolarized [1-¹³C]-pyruvate.
Walker, Christopher M; Lee, Jaehyuk; Ramirez, Marc S; Schellingerhout, Dawid; Millward, Steven; Bankson, James A
2013-01-01
In vivo real time spectroscopic imaging of hyperpolarized ¹³C labeled metabolites shows substantial promise for the assessment of physiological processes that were previously inaccessible. However, reliable and reproducible methods of measurement are necessary to maximize the effectiveness of imaging biomarkers that may one day guide personalized care for diseases such as cancer. Animal models of human disease serve as poor reference standards due to the complexity, heterogeneity, and transient nature of advancing disease. In this study, we describe the reproducible conversion of hyperpolarized [1-¹³C]-pyruvate to [1-¹³C]-lactate using a novel synthetic enzyme phantom system. The rate of reaction can be controlled and tuned to mimic normal or pathologic conditions of varying degree. Variations observed in the use of this phantom compare favorably against within-group variations observed in recent animal studies. This novel phantom system provides crucial capabilities as a reference standard for the optimization, comparison, and certification of quantitative imaging strategies for hyperpolarized tracers.
NASA Astrophysics Data System (ADS)
Blume, H.; Alexandru, R.; Applegate, R.; Giordano, T.; Kamiya, K.; Kresina, R.
1986-06-01
In a digital diagnostic imaging department, the majority of operations for handling and processing of images can be grouped into a small set of basic operations, such as image data buffering and storage, image processing and analysis, image display, image data transmission and image data compression. These operations occur in almost all nodes of the diagnostic imaging communications network of the department. An image processor architecture was developed in which each of these functions has been mapped into hardware and software modules. The modular approach has advantages in terms of economics, service, expandability and upgradeability. The architectural design is based on the principles of hierarchical functionality, distributed and parallel processing and aims at real time response. Parallel processing and real time response is facilitated in part by a dual bus system: a VME control bus and a high speed image data bus, consisting of 8 independent parallel 16-bit busses, capable of handling combined up to 144 MBytes/sec. The presented image processor is versatile enough to meet the video rate processing needs of digital subtraction angiography, the large pixel matrix processing requirements of static projection radiography, or the broad range of manipulation and display needs of a multi-modality diagnostic work station. Several hardware modules are described in detail. For illustrating the capabilities of the image processor, processed 2000 x 2000 pixel computed radiographs are shown and estimated computation times for executing the processing opera-tions are presented.
Effects of illumination on image reconstruction via Fourier ptychography
NASA Astrophysics Data System (ADS)
Cao, Xinrui; Sinzinger, Stefan
2017-12-01
The Fourier ptychographic microscopy (FPM) technique provides high-resolution images by combining a traditional imaging system, e.g. a microscope or a 4f-imaging system, with a multiplexing illumination system, e.g. an LED array, and numerical image processing for enhanced image reconstruction. In order to numerically combine images that are captured under varying illumination angles, an iterative phase-retrieval algorithm is often applied. However, in practice, the performance of the FPM algorithm degrades due to imperfections of the optical system, the image noise caused by the camera, etc. To eliminate the influence of the aberrations of the imaging system, an embedded pupil function recovery (EPRY)-FPM algorithm has been proposed [Opt. Express 22, 4960-4972 (2014)]. In this paper, we study how the performance of the FPM and EPRY-FPM algorithms is affected by imperfections of the illumination system, using both numerical simulations and experiments. The investigated imperfections include varying and non-uniform intensities, and wavefront aberrations. Our study shows that aberrations of the illumination system significantly affect the performance of both the FPM and EPRY-FPM algorithms; hence, in practice, imperfections of the illumination system have a significant influence on the resulting image quality.
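The FPM forward model underlying both algorithms can be sketched as pupil-limited sampling of a shifted object spectrum, one low-resolution image per LED angle (the object, pupil radius, and spectral shifts below are arbitrary stand-ins):

```python
import numpy as np

# Forward model sketch: each LED illumination angle shifts the object
# spectrum, and the microscope pupil low-pass filters it, yielding one
# low-resolution intensity image.
rng = np.random.default_rng(1)
obj = rng.random((64, 64))                    # hypothetical high-res object
spectrum = np.fft.fftshift(np.fft.fft2(obj))

def low_res_image(shift_x, shift_y, pupil_radius=8):
    """Intensity image for one LED: select a pupil-sized disc of the
    (shifted) spectrum and transform back. Aberrations would multiply a
    phase term onto this pupil; EPRY-FPM estimates that term jointly with
    the object, while illumination errors perturb the shifts themselves."""
    cy, cx = np.array(spectrum.shape) // 2
    yy, xx = np.indices(spectrum.shape)
    pupil = np.hypot(xx - (cx + shift_x), yy - (cy + shift_y)) <= pupil_radius
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum * pupil))) ** 2

# Images under different illumination angles sample different spectral regions.
center, oblique = low_res_image(0, 0), low_res_image(12, 0)
```

The reconstruction inverts this model by stitching the sampled discs back into one wide spectrum, which is why errors in the assumed illumination angles or intensities propagate directly into the recovered image.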
The impact of hunger on food cue processing: an event-related brain potential study.
Stockburger, Jessica; Schmälzle, Ralf; Flaisch, Tobias; Bublatzky, Florian; Schupp, Harald T
2009-10-01
The present study used event-related brain potentials to examine deprivation effects on visual attention to food stimuli at the level of distinct processing stages. Thirty-two healthy volunteers (16 females) were tested twice 1 week apart, either after 24 h of food deprivation or after normal food intake. Participants viewed a continuous stream of food and flower images while dense sensor ERPs were recorded. As revealed by distinct ERP modulations in relatively earlier and later time windows, deprivation affected the processing of food and flower pictures. Between 300 and 360 ms, food pictures were associated with enlarged occipito-temporal negativity and centro-parietal positivity in deprived compared to satiated state. Of main interest, in a later time window (approximately 450-600 ms), deprivation increased amplitudes of the late positive potential elicited by food pictures. Conversely, flower processing varied by motivational state with decreased positive potentials in the deprived state. Minimum-Norm analyses provided further evidence that deprivation enhanced visual attention to food cues in later processing stages. From the perspective of motivated attention, hunger may induce a heightened state of attention for food stimuli in a processing stage related to stimulus recognition and focused attention.
IMAGES: An interactive image processing system
NASA Technical Reports Server (NTRS)
Jensen, J. R.
1981-01-01
The IMAGES interactive image processing system was created specifically for undergraduate remote sensing education in geography. The system is interactive, relatively inexpensive to operate, almost hardware independent, and responsive to numerous users at one time in a time-sharing mode. Most important, it provides a medium whereby theoretical remote sensing principles discussed in lecture may be reinforced in laboratory as students perform computer-assisted image processing. In addition to its use in academic and short course environments, the system has also been used extensively to conduct basic image processing research. The flow of information through the system is discussed including an overview of the programs.
A customizable commercial miniaturized 320×256 indium gallium arsenide shortwave infrared camera
NASA Astrophysics Data System (ADS)
Huang, Shih-Che; O'Grady, Matthew; Groppe, Joseph V.; Ettenberg, Martin H.; Brubaker, Robert M.
2004-10-01
The design and performance of a commercial short-wave-infrared (SWIR) InGaAs microcamera engine is presented. The 0.9-to-1.7 micron SWIR imaging system consists of a room-temperature-TEC-stabilized, 320x256 (25 μm pitch) InGaAs focal plane array (FPA) and a high-performance, highly customizable image-processing set of electronics. The detectivity, D*, of the system is greater than 10^13 cm·√Hz/W at 1.55 μm, and this sensitivity may be adjusted in real-time over 100 dB. It features snapshot-mode integration with a minimum exposure time of 130 μs. The digital video processor provides real-time pixel-to-pixel, 2-point dark-current subtraction and non-uniformity compensation along with defective-pixel substitution. Other features include automatic gain control (AGC), gamma correction, 7 preset configurations, adjustable exposure time, external triggering, and windowing. The windowing feature is highly flexible; the region of interest (ROI) may be placed anywhere on the imager and can be varied at will. Windowing allows for high-speed readout enabling such applications as target acquisition and tracking; for example, a 32x32 ROI window may be read out at over 3500 frames per second (fps). Output video is provided as EIA170-compatible analog, or as 12-bit CameraLink-compatible digital. All the above features are accomplished in a small volume < 28 cm3, weight < 70 g, and with low power consumption < 1.3 W at room temperature using this new microcamera engine. Video processing is based on a field-programmable gate array (FPGA) platform with a soft-embedded processor that allows for ease of integration/addition of customer-specific algorithms, processes, or design requirements. The camera was developed with the high-performance, space-restricted, power-conscious application in mind, such as robotic or UAV deployment.
Tang, Hongying Lilian; Goh, Jonathan; Peto, Tunde; Ling, Bingo Wing-Kuen; Al Turk, Lutfiah Ismail; Hu, Yin; Wang, Su; Saleh, George Michael
2013-01-01
In any diabetic retinopathy screening program, about two-thirds of patients have no retinopathy. However, on average, it takes a human expert about one and a half times longer to decide an image is normal than to recognize an abnormal case with obvious features. In this work, we present an automated system for filtering out normal cases to facilitate a more effective use of grading time. The key aim with any such tool is to achieve high sensitivity and specificity to ensure patients' safety and service efficiency. There are many challenges to overcome, given the variation of images and characteristics to identify. The system combines computed evidence obtained from various processing stages, including segmentation of candidate regions, classification and contextual analysis through Hidden Markov Models. Furthermore, evolutionary algorithms are employed to optimize the Hidden Markov Models, feature selection and heterogeneous ensemble classifiers. In order to evaluate its capability of identifying normal images across diverse populations, a population-oriented study was undertaken comparing the software's output to grading by humans. In addition, population based studies collect large numbers of images on subjects expected to have no abnormality. These studies expect timely and cost-effective grading. Altogether 9954 previously unseen images taken from various populations were tested. All test images were masked so the automated system had not been exposed to them before. This system was trained using image subregions taken from about 400 sample images. Sensitivities of 92.2% and specificities of 90.4% were achieved varying between populations and population clusters. Of all images the automated system decided to be normal, 98.2% were true normal when compared to the manual grading results. These results demonstrate scalability and strong potential of such an integrated computational intelligence system as an effective tool to assist a grading service.
Zhao, Chenhui; Zhang, Guangcheng; Wu, Yibo
2012-01-01
The resin flow behavior in the vacuum assisted resin infusion molding (VARI) process of foam sandwich composites was studied by both visualization flow experiments and computer simulation. Both experimental and simulation results show that the distribution medium (DM) leads to a shorter mold filling time in grooved foam sandwich composites via the VARI process, and that the mold filling time decreases linearly as the DM/preform ratio increases. The pattern of the resin source has a significant influence on the resin filling time: a center source fills faster than an edge source, and a point source results in a longer filling time than a linear source. Short edge/center patterns need a longer time to fill the mold than long edge/center sources.
Time-frequency analysis of backscattered signals from diffuse radar targets
NASA Astrophysics Data System (ADS)
Kenny, O. P.; Boashash, B.
1993-06-01
The need for analysis of time-varying signals has led to the formulation of a class of joint time-frequency distributions (TFDs). One of these TFDs, the Wigner-Ville distribution (WVD), has useful properties which can be applied to radar imaging. The authors discuss time-frequency representation of the backscattered signal from a diffuse radar target. It is then shown that for point scatterers which are statistically dependent or for which the reflectivity coefficient has a nonzero mean value, reconstruction using time of flight positron emission tomography on time-frequency images is effective for estimating the scattering function of the target.
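For a discrete analytic signal, the WVD referenced above can be computed by Fourier-transforming the instantaneous autocorrelation over the lag variable. A minimal sketch (not the authors' implementation; frequency bin k maps to k/(2N) cycles per sample):

```python
import numpy as np

def wigner_ville(x):
    """Discrete Wigner-Ville distribution of an analytic signal x.
    Row n holds the spectrum at time n; bin k corresponds to frequency k/(2N)."""
    x = np.asarray(x, complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        taumax = min(n, N - 1 - n)            # largest symmetric lag at time n
        tau = np.arange(-taumax, taumax + 1)
        K = np.zeros(N, dtype=complex)        # instantaneous autocorrelation
        K[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.fft.fft(K).real
    return W
```

For a pure analytic tone at normalized frequency f0, the distribution concentrates its energy along the bin k = 2·f0·N at every time instant, which is the property that makes the WVD useful for tracking time-varying backscatter.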
Time-lapse contact microscopy of cell cultures based on non-coherent illumination
NASA Astrophysics Data System (ADS)
Gabriel, Marion; Balle, Dorothée; Bigault, Stéphanie; Pornin, Cyrille; Gétin, Stéphane; Perraut, François; Block, Marc R.; Chatelain, François; Picollet-D'Hahan, Nathalie; Gidrol, Xavier; Haguet, Vincent
2015-10-01
Video microscopy offers outstanding capabilities to investigate the dynamics of biological and pathological mechanisms in optimal culture conditions. Contact imaging is one of the simplest imaging architectures to digitally record images of cells due to the absence of any objective between the sample and the image sensor. However, in the framework of in-line holography, other optical components, e.g., an optical filter or a pinhole, are placed underneath the light source in order to illuminate the cells with a coherent or quasi-coherent incident light. In this study, we demonstrate that contact imaging with an incident light of both limited temporal and spatial coherences can be achieved with sufficiently high quality for most applications in cell biology, including monitoring of cell sedimentation, rolling, adhesion, spreading, proliferation, motility, death and detachment. Patterns of cells were recorded at various distances between 0 and 1000 μm from the pixel array of the image sensors. Cells in suspension, just deposited or at mitosis focalise light into photonic nanojets which can be visualised by contact imaging. Light refraction by cells significantly varies during the adhesion process, the cell cycle and among the cell population in connection with every modification in the tridimensional morphology of a cell.
Inference for local autocorrelations in locally stationary models.
Zhao, Zhibiao
2015-04-01
For non-stationary processes, the time-varying correlation structure provides useful insights into the underlying model dynamics. We study estimation and inference for the local autocorrelation process in locally stationary time series. Our constructed simultaneous confidence band can be used to address important hypothesis testing problems, such as whether the local autocorrelation process is indeed time-varying and whether the local autocorrelation is zero. In particular, our result provides an important generalization of the R function acf() to locally stationary Gaussian processes. Simulation studies and two empirical applications are presented. For the global temperature series, we find that the local autocorrelations are time-varying and have a "V" shape during 1910-1960. For the S&P 500 index, we conclude that the returns satisfy the efficient-market hypothesis whereas the magnitudes of returns show significant local autocorrelations.
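A crude stand-in for the local autocorrelation process described above is a rolling-window lag-1 autocorrelation; the window length and the simple moment estimator here are illustrative assumptions, not the paper's kernel-based estimator:

```python
import numpy as np

def local_acf(x, lag=1, window=50):
    """Rolling-window lag-k autocorrelation: a crude 'local acf' estimate.
    Returns NaN until a full window of observations is available."""
    x = np.asarray(x, float)
    out = np.full(len(x), np.nan)
    for t in range(window, len(x) + 1):
        seg = x[t - window:t]
        s0 = seg - seg.mean()               # demean within the window
        out[t - 1] = np.dot(s0[:-lag], s0[lag:]) / np.dot(s0, s0)
    return out
```

Plotting such a local estimate against time is the exploratory analogue of the paper's formal test of whether the autocorrelation is time-varying.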
Spectrally Adaptable Compressive Sensing Imaging System
2014-05-01
…signal recovering [?, ?]. The time-varying coded apertures can be implemented using micro-piezo motors [?] or through the use of Digital Micromirror Devices. … The feasibility of this testbed was demonstrated by developing a Digital-Micromirror-Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurements. … Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, "Development of a digital-micromirror-device-based multishot snapshot spectral imaging…"
3D localization of electrophysiology catheters from a single x-ray cone-beam projection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert, Normand, E-mail: normand.robert@sri.utoronto.ca; Polack, George G.; Sethi, Benu
2015-10-15
Purpose: X-ray images allow the visualization of percutaneous devices such as catheters in real time but inherently lack depth information. The provision of 3D localization of these devices from cone-beam x-ray projections would be advantageous for interventions such as electrophysiology (EP), whereby the operator needs to return a device to the same anatomical locations during the procedure. A method to achieve real-time 3D single view localization (SVL) of an object of known geometry from a single x-ray image is presented. SVL exploits the change in the magnification of an object as its distance from the x-ray source is varied. The x-ray projection of an object of interest is compared to a synthetic x-ray projection of a model of said object as its pose is varied. Methods: SVL was tested with a 3 mm spherical marker and an electrophysiology catheter. The effect of x-ray acquisition parameters on SVL was investigated. An independent reference localization method was developed to compare results when imaging a catheter translated via a computer-controlled three-axis stage. SVL was also performed on clinical fluoroscopy image sequences. A commercial navigation system was used in some clinical image sequences for comparison. Results: SVL estimates exhibited little change as x-ray acquisition parameters were varied. The reproducibility of catheter position estimates in phantoms was denoted by the standard deviations (σx, σy, σz) = (0.099 mm, 0.093 mm, 2.2 mm), where x and y are parallel to the detector plane and z is the distance from the x-ray source. Position estimates (x, y, z) exhibited a 4% systematic error (underestimation) when compared to the reference method. The authors demonstrated that EP catheters can be tracked in clinical fluoroscopic images. Conclusions: It has been shown that EP catheters can be localized in real time in phantoms and clinical images at fluoroscopic exposure rates. Further work is required to characterize performance in clinical images as well as the sensitivity to clinical image quality.
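The magnification cue that SVL exploits follows from simple cone-beam geometry: an object of known size projects larger the closer it sits to the x-ray source. A toy depth estimate (the 1200 mm source-to-detector distance is an assumed value for illustration, not taken from the paper):

```python
def depth_from_magnification(true_size_mm, projected_size_mm, sid_mm=1200.0):
    """Estimate source-to-object distance from the projected size of an
    object of known size.  Magnification m = SID / SOD, so SOD = SID / m.
    sid_mm (source-to-detector distance) is an assumed geometry parameter."""
    m = projected_size_mm / true_size_mm
    return sid_mm / m
```

For example, a 3 mm marker projecting to 4 mm on the detector under this geometry sits 900 mm from the source; the weaker sensitivity of magnification to depth is also why the depth (z) uncertainty above is much larger than the in-plane uncertainty.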
Estimation of stress relaxation time for normal and abnormal breast phantoms using optical technique
NASA Astrophysics Data System (ADS)
Udayakumar, K.; Sujatha, N.
2015-03-01
Many early micro-anomalies in the breast may later develop into deadly cancerous tumors. The probability of curing such abnormalities is higher if they are identified early and correctly. Even in mammography, considered the gold-standard technique for breast imaging, it is hard to pick up early changes in breast tissue, owing to the difference in mechanical behavior between normal and abnormal tissue when subjected to compression prior to x-ray or laser exposure. In this paper, an attempt has been made to estimate the stress relaxation time of normal and abnormal breast-mimicking phantoms using laser speckle image correlation. A phantom mimicking normal breast tissue was prepared and subjected to precise mechanical compression. The phantom was illuminated by a helium-neon laser and, using a CCD camera, a sequence of strained phantom speckle images was captured and correlated via the image mean intensity value at specific time intervals. From the relation between mean intensity and time, the tissue stress relaxation time was quantified. Experiments were repeated for phantoms with increased stiffness, mimicking abnormal tissue, over similar ranges of applied loading. Results show that the stiffer phantom, representing abnormal tissue, exhibits uniform relaxation for varying loads in the selected range, whereas the softer phantom, representing normal tissue, shows irregular behavior for varying loadings in the same range.
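Extracting a relaxation time from a decaying mean-intensity curve amounts to fitting an exponential. A log-linear least-squares sketch (assuming a single-exponential decay model, which is an illustrative simplification of the relation measured in the paper):

```python
import numpy as np

def relaxation_time(t, intensity):
    """Fit I(t) = I0 * exp(-t / tau) by least squares on log(I); return tau."""
    slope, _ = np.polyfit(t, np.log(intensity), 1)
    return -1.0 / slope
```

Comparing the fitted tau between stiff and soft phantoms under the same load would quantify the difference in relaxation behavior reported above.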
Mitry, Danny; Peto, Tunde; Hayat, Shabina; Morgan, James E; Khaw, Kay-Tee; Foster, Paul J
2013-01-01
Crowdsourcing is the process of outsourcing numerous tasks to many untrained individuals. Our aim was to assess the performance and repeatability of crowdsourcing for the classification of retinal fundus photographs. One hundred retinal fundus photograph images with pre-determined disease criteria were selected by experts from a large cohort study. After reading brief instructions and an example classification, knowledge workers (KWs) from a crowdsourcing platform were asked to classify each image as normal or abnormal with grades of severity. Each image was classified 20 times by different KWs. Four study designs were examined to assess the effect of varying incentive and KW experience on classification accuracy. All study designs were conducted twice to examine repeatability. Performance was assessed by comparing the sensitivity, specificity and area under the receiver operating characteristic curve (AUC). Without restriction on eligible participants, two thousand classifications of 100 images were received in under 24 hours at minimal cost. In trial 1, all study designs had an AUC (95%CI) of 0.701 (0.680-0.721) or greater for normal/abnormal classification. In trial 1, the highest AUC (95%CI) for normal/abnormal classification was 0.757 (0.738-0.776), for KWs with moderate experience. Comparable results were observed in trial 2. In trial 1, between 64% and 86% of abnormal images were correctly classified by over half of all KWs. In trial 2, this ranged between 74% and 97%. Sensitivity was ≥96% for normal versus severely abnormal detections across all trials. Sensitivity for normal versus mildly abnormal varied between 61% and 79% across trials. With minimal training, crowdsourcing represents an accurate, rapid and cost-effective method of retinal image analysis which demonstrates good repeatability. Larger studies with more comprehensive participant training are needed to explore the utility of this compelling technique in large-scale medical image analysis.
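The "over half of all KWs" criterion above is a majority vote, and sensitivity/specificity follow from the resulting confusion matrix. A minimal sketch (function names are illustrative):

```python
import numpy as np

def majority_vote(labels):
    """labels: (n_images, n_workers) array of 0/1 classifications.
    An image is called abnormal (1) when over half the workers say so."""
    return (np.asarray(labels).mean(axis=1) > 0.5).astype(int)

def sens_spec(pred, truth):
    """Sensitivity and specificity of predictions against expert truth."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    tp = np.sum((pred == 1) & (truth == 1))
    fn = np.sum((pred == 0) & (truth == 1))
    tn = np.sum((pred == 0) & (truth == 0))
    fp = np.sum((pred == 1) & (truth == 0))
    return tp / (tp + fn), tn / (tn + fp)
```

With 20 classifications per image, sweeping the vote threshold instead of fixing it at one half traces out the ROC curve whose area the study reports.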
Preferred Visuographic Images to Support Reading by People with Chronic Aphasia.
Knollman-Porter, Kelly; Brown, Jessica; Hux, Karen; Wallace, Sarah E; Uchtman, Elizabeth
2016-08-01
Written materials used both clinically and in everyday reading tasks can contain visuographic images that vary in content and attributes. People with aphasia may benefit from visuographic images to support reading comprehension. Understanding the image type and feature preferences of individuals with aphasia is an important first step when developing guidelines for selecting reading materials that motivate and support reading comprehension. The study purposes were to determine the preferences and explore the perceptions of and opinions provided by adults with chronic aphasia regarding various image features and types on facilitating the reading process. Six adults with chronic aphasia ranked visuographic materials varying in context, engagement, and content regarding their perceived degree of helpfulness in comprehending written materials. Then, they participated in semi-structured interviews that allowed them to elaborate on their choices and convey opinions about potential benefits and detriments associated with preferred and non-preferred materials. All participants preferred high-context photographs rather than iconic images or portraits as potential supports to facilitate reading activities. Differences in opinions emerged across participants regarding the amount of preferred content included in high context images.
Advances in interpretation of subsurface processes with time-lapse electrical imaging
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.
2015-01-01
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
Task difficulty modulates brain activation in the emotional oddball task.
Siciliano, Rachel E; Madden, David J; Tallman, Catherine W; Boylan, Maria A; Kirste, Imke; Monge, Zachary A; Packard, Lauren E; Potter, Guy G; Wang, Lihong
2017-06-01
Previous functional magnetic resonance imaging (fMRI) studies have reported that task-irrelevant, emotionally salient events can disrupt target discrimination, particularly when attentional demands are low, while others demonstrate alterations in the distracting effects of emotion in behavior and neural activation in the context of attention-demanding tasks. We used fMRI, in conjunction with an emotional oddball task, at different levels of target discrimination difficulty, to investigate the effects of emotional distractors on the detection of subsequent targets. In addition, we distinguished different behavioral components of target detection representing decisional, nondecisional, and response criterion processes. Results indicated that increasing target discrimination difficulty led to increased time required for both the decisional and nondecisional components of the detection response, as well as to increased target-related neural activation in frontoparietal regions. The emotional distractors were associated with activation in ventral occipital and frontal regions and dorsal frontal regions, but this activation was attenuated with increased difficulty. Emotional distraction did not alter the behavioral measures of target detection, but did lead to increased target-related frontoparietal activation for targets following emotional images as compared to those following neutral images. This latter effect varied with target discrimination difficulty, with an increased influence of the emotional distractors on subsequent target-related frontoparietal activation in the more difficult discrimination condition. This influence of emotional distraction was in addition associated specifically with the decisional component of target detection. These findings indicate that emotion-cognition interactions, in the emotional oddball task, vary depending on the difficulty of the target discrimination and the associated limitations on processing resources. 
NASA Astrophysics Data System (ADS)
Meng, R.; Wu, J.; Zhao, F. R.; Kathy, S. L.; Dennison, P. E.; Cook, B.; Hanavan, R. P.; Serbin, S.
2016-12-01
As a primary disturbance agent, fire significantly influences forest ecosystems, including the modification or resetting of vegetation composition and structure, which can then significantly impact landscape-scale plant function and carbon stocks. Most ecological processes associated with fire effects (e.g. tree damage, mortality, and vegetation recovery) display fine-scale, species-specific responses but can also vary spatially within the boundary of the perturbation. For example, both oak and pine species are fire-adapted, but fire can still induce changes in composition, structure, and dominance in a mixed pine-oak forest, mainly because of their varying degrees of fire adaptation. Evidence of post-fire shifts in dominance between oak and pine species has been documented in mixed pine-oak forests, but these processes have been poorly investigated in a spatially explicit manner. In addition, traditional field-based means of quantifying the response of partially damaged trees across space and time are logistically challenging. Here we show how combining high resolution satellite imagery (i.e. WorldView-2, WV-2) and airborne imaging spectroscopy and LiDAR (i.e. NASA Goddard's Lidar, Hyperspectral and Thermal airborne imager, G-LiHT) can be used effectively to remotely quantify spatial and temporal patterns of vegetation recovery following a top-killing fire that occurred in 2012 within mixed pine-oak forests in the Long Island Central Pine Barrens Region, New York. We explore the following questions: 1) what are the impacts of fire on species composition, dominance, plant health, and vertical structure; 2) what are the recovery trajectories of forest biomass, structure, and spectral properties for three years following the fire; and 3) to what extent can fire impacts be captured and characterized by multi-sensor remote sensing techniques from active and passive optical remote sensing?
Lu, Hoang D; Lim, Tristan L; Javitt, Shoshana; Heinmiller, Andrew; Prud'homme, Robert K
2017-06-12
Optical imaging is a rapidly progressing medical technique that can benefit from the development of new and improved optical imaging agents suitable for use in vivo. However, the molecular rules detailing which optical agents can be processed and encapsulated into in vivo presentable forms are not known. We here present the screening of a series of highly hydrophobic porphyrin, phthalocyanine, and naphthalocyanine dye macrocycles through a self-assembling Flash NanoPrecipitation process to form a series of water-dispersible dye nanoparticles (NPs). Ten out of 19 tested dyes could be formed into poly(ethylene glycol)-coated nanoparticles 60-150 nm in size, and these results shed insight on the dye structural criteria required to permit dye assembly into NPs. Dye NPs display a diverse range of absorbance profiles with absorbance maxima within the NIR region, and their absorbance can be tuned by varying dye choice or by doping bulking materials into the NP core. Particle properties such as dye core load and the composition of co-core dopants were varied, and the subsequent effects on photoacoustic and fluorescence signal intensities were measured. These results provide guidelines for designing NPs optimized for photoacoustic imaging and NPs optimized for fluorescence imaging. This work provides important details for dye NP engineering, and expands the optical imaging tools available for use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogg, P; Aland, T; West, M
Purpose: To investigate the effects of external surrogate and tumour motion by observing the reconstructed phases and AveCT in amplitude- and time-based 4DCT. Methods: Based on patient motion studies, cos6 and sinusoidal motions were simulated as external surrogate and tumour motions in a motion phantom. The diaphragm and tumour motions may or may not display the same waveform; therefore, both the same and different waveforms were programmed into the phantom, scanned and reconstructed based on amplitude and time. The AveCT and phases were investigated for these different scenarios. The AveCT phantom images were also compared with CBCT phantom images programmed with the same motions. Results: For the same surrogate and tumour sin motions, the phases (amplitude- and time-based) and AveCT indicated similar motions, based on the position of the BB at the slice and the displayed contrast values respectively. For cos6 motions, owing to the varied time the tumour spends at each position, the amplitude- and time-based phases differed. The AveCT images represented the actual tumour motions, and the time- and amplitude-based phases were represented by the surrogate with varied times. Conclusion: Different external surrogate and tumour motions may result in different displayed image motions when observing the AveCT and reconstructed phases. During the 4DCT, the surrogate motion is readily available for observation of the amplitude and time of the diaphragm position. Following image reconstruction, the user may need to observe the AveCT in addition to the reconstructed phases to comprehend the time weightings of the tumour motion during the scan. This may also apply to 3D CBCT images, where the displayed tumour position in the images is influenced by the long duration of the CBCT. Knowledge of the tumour motion represented by the greyscale of the AveCT may also assist in CBCT treatment beam verification matching.
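The difference between amplitude- and time-based 4DCT sorting can be illustrated with a toy respiratory trace: a cos6-shaped waveform dwells near one amplitude, so amplitude bins fill unevenly while time bins stay uniform. A hypothetical sketch (bin count and waveform are illustrative assumptions):

```python
import numpy as np

def time_bins(n_samples, n_bins=10):
    """Time-based sorting: one breathing cycle divided evenly in time."""
    return np.linspace(0, n_bins, n_samples, endpoint=False).astype(int)

def amplitude_bins(signal, n_bins=10):
    """Amplitude-based sorting: bin by position between the trace extremes."""
    s = np.asarray(signal, float)
    frac = (s - s.min()) / (s.max() - s.min())
    return np.minimum((frac * n_bins).astype(int), n_bins - 1)
```

Comparing the per-bin sample counts for a cos6 trace makes the time-weighting effect described in the abstract concrete: the amplitude bins where the waveform dwells collect far more samples than the rest.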
NASA Astrophysics Data System (ADS)
Oh, Mirae; Lee, Hoonsoo; Cho, Hyunjeong; Moon, Sang-Ho; Kim, Eun-Kyung; Kim, Moon S.
2016-05-01
Current meat inspection in slaughter plants, for food safety and quality attributes including potential fecal contamination, is conducted through visual examination by human inspectors. A handheld fluorescence-based imaging device (HFID) was developed as an assistive tool for human inspectors, highlighting contaminated food and food contact surfaces on a display monitor. It can be used under ambient lighting conditions in food processing plants. Critical components of the imaging device include four 405-nm 10-W LEDs for fluorescence excitation, a charge-coupled device (CCD) camera, an optical filter (670 nm used for this study), and a Wi-Fi transmitter for broadcasting real-time video/images to monitoring devices such as smartphones and tablets. This study aimed to investigate the effectiveness of the HFID in enhancing visual detection of fecal contamination on red meat, fat, and bone surfaces of beef under varying ambient luminous intensities (0, 10, 30, 50 and 70 foot-candles). Overall, diluted feces on fat, red meat and bone areas of beef surfaces were detectable in the 670-nm single-band fluorescence images when using the HFID under 0 to 50 foot-candle ambient lighting.
Automated x-ray/light field congruence using the LINAC EPID panel.
Polak, Wojciech; O'Doherty, Jim; Jones, Matt
2013-03-01
X-ray/light field alignment is a test described in many guidelines for the routine quality control of clinical linear accelerators (LINAC). Currently, the gold standard method for measuring alignment is through utilization of radiographic film. However, many modern LINACs are equipped with an electronic portal imaging device (EPID) that may be used to perform this test and thus subsequently reducing overall cost, processing, and analysis time, removing operator dependency and the requirement to sustain the departmental film processor. This work describes a novel method of utilizing the EPID together with a custom inhouse designed jig and automatic image processing software allowing measurement of the light field size, x-ray field size, and congruence between them. The authors present results of testing the method for aS1000 and aS500 Varian EPID detectors for six LINACs at a range of energies (6, 10, and 15 MV) in comparison with the results obtained from the use of radiographic film. Reproducibility of the software in fully automatic operation under a range of operating conditions for a single image showed a congruence of 0.01 cm with a coefficient of variation of 0. Slight variation in congruence repeatability was noted through semiautomatic processing by four independent operators due to manual marking of positions on the jig. Testing of the methodology using the automatic method shows a high precision of 0.02 mm compared to a maximum of 0.06 mm determined by film processing. Intraindividual examination of operator measurements of congruence was shown to vary as much as 0.75 mm. Similar congruence measurements of 0.02 mm were also determined for a lower resolution EPID (aS500 model), after rescaling of the image to the aS1000 image size. The designed methodology was proven to be time efficient, cost effective, and at least as accurate as using the gold standard radiographic film. 
Additionally, congruence testing can be easily performed for all four cardinal gantry angles which can be difficult when using radiographic film. Therefore, the authors propose it can be used as an alternative to the radiographic film method allowing decommissioning of the film processor.
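The field-size and congruence measurement described above can be sketched as a profile-based edge find: locate the 50% points of each field profile, then compare the field centres. This is an illustrative reconstruction, not the authors' software; the function names, the trapezoidal test profile, and the 0.4 mm pixel pitch are assumptions.

```python
import numpy as np

def field_edges(profile, pixel_mm=0.4):
    """Locate field edges at 50% of the profile maximum (a common
    field-size convention); linear interpolation between samples
    gives sub-pixel edge positions in mm."""
    half = 0.5 * profile.max()
    idx = np.flatnonzero(profile >= half)
    left, right = idx[0], idx[-1]

    def interp(i, j):
        # sub-pixel crossing of the 50% level between samples i and j
        return i + (half - profile[i]) / (profile[j] - profile[i]) * (j - i)

    e1 = interp(left - 1, left) if left > 0 else float(left)
    e2 = interp(right, right + 1) if right + 1 < len(profile) else float(right)
    return e1 * pixel_mm, e2 * pixel_mm

def congruence(xray_profile, light_profile, pixel_mm=0.4):
    """Offset (mm) between x-ray and light field centres along one axis."""
    xa, xb = field_edges(xray_profile, pixel_mm)
    la, lb = field_edges(light_profile, pixel_mm)
    return abs((xa + xb) / 2 - (la + lb) / 2)
```

With a synthetic trapezoidal profile whose 50% edges sit 40 pixels apart, a 2-pixel shift between the x-ray and light profiles reads out as a 0.8 mm congruence error at the assumed 0.4 mm pitch.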
Visualization of time-varying MRI data for MS lesion analysis
NASA Astrophysics Data System (ADS)
Tory, Melanie K.; Moeller, Torsten; Atkins, M. Stella
2001-05-01
Conventional methods to diagnose and follow treatment of Multiple Sclerosis require radiologists and technicians to compare current images with older images of a particular patient, on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color), or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
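Once the scans are co-registered, the voxel-wise difference idea in the second approach reduces to simple array arithmetic. A minimal sketch (hypothetical function name, NumPy arrays standing in for registered MRI volumes; the rendering itself is not shown):

```python
import numpy as np

def lesion_change_maps(volumes):
    """Given a time series of co-registered volumes, return (a) the
    voxel-wise change between consecutive time steps and (b) the
    cumulative change relative to the baseline scan."""
    vols = np.asarray(volumes, dtype=float)   # shape (time, ...)
    stepwise = np.diff(vols, axis=0)          # t -> t+1 differences
    cumulative = vols - vols[0]               # change since baseline
    return stepwise, cumulative
```

The stepwise maps drive per-interval displays; the cumulative maps are what an animation of "change since baseline" would render frame by frame.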
[Image processing system of visual prostheses based on digital signal processor DM642].
Xie, Chengcheng; Lu, Yanyu; Gu, Yun; Wang, Jing; Chai, Xinyu
2011-09-01
This paper employed a DSP platform to create a real-time, portable image processing system, and introduced a series of commonly used algorithms for visual prostheses. The results of the performance evaluation revealed that this platform could execute these image processing algorithms in real time.
Stroke-model-based character extraction from gray-level document images.
Ye, X; Cheriet, M; Suen, C Y
2001-01-01
Global gray-level thresholding techniques such as Otsu's method, and local gray-level thresholding techniques such as edge-based segmentation or the adaptive thresholding method are powerful in extracting character objects from simple or slowly varying backgrounds. However, they are found to be insufficient when the backgrounds include sharply varying contours or fonts in different sizes. A stroke-model is proposed to depict the local features of character objects as double-edges in a predefined size. This model enables us to detect thin connected components selectively, while ignoring relatively large backgrounds that appear complex. Meanwhile, since the stroke width restriction is fully factored in, the proposed technique can be used to extract characters in predefined font sizes. To process large volumes of documents efficiently, a hybrid method is proposed for character extraction from various backgrounds. Using the measurement of class separability to differentiate images with simple backgrounds from those with complex backgrounds, the hybrid method can process documents with different backgrounds by applying the appropriate methods. Experiments on extracting handwriting from a check image, as well as machine-printed characters from scene images demonstrate the effectiveness of the proposed model.
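Otsu's global threshold, which the paper uses as a baseline, picks the gray level that maximizes the between-class variance of the foreground/background split. A compact NumPy version (8-bit images assumed; this is the baseline technique, not the proposed stroke model):

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level t maximizing between-class variance;
    pixels <= t form one class, pixels > t the other."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))   # class-0 first moment up to t
    mu_total = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_total * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0   # empty-class splits score 0
    return int(np.argmax(sigma_b))
```

On a cleanly bimodal image the maximizer lies anywhere between the two modes, which is exactly why such global methods break down on sharply varying backgrounds.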
High-speed multislice T1 mapping using inversion-recovery echo-planar imaging.
Ordidge, R J; Gibbs, P; Chapman, B; Stehling, M K; Mansfield, P
1990-11-01
Tissue contrast in MR images is a strong function of spin-lattice (T1) and spin-spin (T2) relaxation times. However, the T1 relaxation time is rarely quantified because of the long scan time required to produce an accurate T1 map of the subject. In a standard 2D FT technique, this procedure may take up to 30 min. Modifications of the echo-planar imaging (EPI) technique which incorporate the principle of inversion recovery (IR) enable multislice T1 maps to be produced in total scan times varying from a few seconds up to a minute. Using IR-EPI, rapid quantification of T1 values may thus lead to better discrimination between tissue types in an acceptable scan time.
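The signal model behind such inversion-recovery T1 maps is S(TI) = S0(1 − 2e^(−TI/T1)). Given samples at several inversion times, T1 can be fitted per voxel; the sketch below is a simplified linearized fit (S0 approximated by the sample at a very long TI rather than fitted jointly, and magnitude/polarity restoration ignored), not the IR-EPI reconstruction itself.

```python
import numpy as np

def fit_t1_ir(ti_ms, signal):
    """Estimate T1 (ms) from ideal signed inversion-recovery samples
    S(TI) = S0 * (1 - 2*exp(-TI/T1)). S0 is approximated by the sample
    at the longest TI; the remaining points are linearized via
    log((S0 - S) / (2*S0)) = -TI / T1 and fitted through the origin."""
    ti = np.asarray(ti_ms, float)
    s = np.asarray(signal, float)
    s0 = s[-1]                                  # ~fully recovered signal
    y = np.log((s0 - s[:-1]) / (2.0 * s0))      # equals -TI / T1 ideally
    slope = np.dot(ti[:-1], y) / np.dot(ti[:-1], ti[:-1])
    return -1.0 / slope
```

A full 3-parameter fit (S0, T1, inversion efficiency) is more robust on real data; the linearized version conveys why only a handful of TI samples per slice are needed.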
Towards Portable Large-Scale Image Processing with High-Performance Computing.
Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A
2018-05-03
High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The initial deployment was natively deployed (i.e., direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments.
The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software development and expansion, and (3) scalable spider deployment compatible with HPC clusters and local workstations.
Skolnick, M L; Matzuk, T
1978-08-01
This paper describes a new real-time servo-controlled sector scanner that produces high-resolution images similar to phased-array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. Its unique feature is the transducer head, which contains a single moving part: the transducer. Frame rates vary from 0 to 30 frames per second and the sector angle from 0 to 60 degrees. Abdominal applications include differentiation of vascular structures, detection of small masses, imaging of diagonally oriented organs, survey scanning, and demonstration of regions difficult to image with contact scanners. Cardiac uses are also described.
Four-dimensional ultrasound current source density imaging of a dipole field
NASA Astrophysics Data System (ADS)
Wang, Z. H.; Olafsson, R.; Ingram, P.; Li, Q.; Qin, Y.; Witte, R. S.
2011-09-01
Ultrasound current source density imaging (UCSDI) potentially transforms conventional electrical mapping of excitable organs, such as the brain and heart. For this study, we demonstrate volume imaging of a time-varying current field by scanning a focused ultrasound beam and detecting the acoustoelectric (AE) interaction signal. A pair of electrodes produced an alternating current distribution in a special imaging chamber filled with a 0.9% NaCl solution. A pulsed 1 MHz ultrasound beam was scanned near the source and sink, while the AE signal was detected on remote recording electrodes, resulting in time-lapsed volume movies of the alternating current distribution.
Analysing the Image Building Effects of TV Advertisements Using Internet Community Data
NASA Astrophysics Data System (ADS)
Uehara, Hiroshi; Sato, Tadahiko; Yoshida, Kenichi
This paper proposes a method to measure the effects of TV advertisements using Internet bulletin boards. It aims to clarify how viewers' interest in TV advertisements is reflected in their images of the promoted products. Two kinds of time series data are generated based on the proposed method. The first represents the time series fluctuation of interest in the TV advertisements. The second represents the time series fluctuation of the images of the products. By analysing the correlations between these two time series, we try to clarify the implicit relationship between viewers' interest in the TV advertisements and their images of the promoted products. By applying the proposed method to an Internet bulletin board that deals with a certain cosmetic brand, we show that the images of the products vary depending on differences in the interest in each TV advertisement.
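The core of such an analysis, correlating the ad-interest series with the product-image series, can be sketched as a lagged Pearson correlation (synthetic series below; the paper derives its series from bulletin-board text, which is not reproduced here):

```python
import numpy as np

def lagged_correlation(interest, image, max_lag=5):
    """Pearson correlation between the ad-interest series and the
    product-image series with the image series delayed by 0..max_lag
    steps; a peak at a positive lag suggests interest leads image."""
    a = np.asarray(interest, float)
    b = np.asarray(image, float)
    corr = {}
    for lag in range(max_lag + 1):
        x = a[:len(a) - lag] if lag else a
        y = b[lag:]
        x = x - x.mean()
        y = y - y.mean()
        corr[lag] = float(np.dot(x, y) / np.sqrt(np.dot(x, x) * np.dot(y, y)))
    return corr
```

If the image series simply trails the interest series by two steps, the correlation peaks at lag 2.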
Rapid Material Appearance Acquisition Using Consumer Hardware
Filip, Jiří; Vávra, Radomír; Krupička, Mikuláš
2014-01-01
A photo-realistic representation of material appearance can be achieved by means of a bidirectional texture function (BTF) capturing a material’s appearance for varying illumination, viewing directions, and spatial pixel coordinates. BTF captures many non-local effects in material structure such as inter-reflections, occlusions, shadowing, or scattering. The acquisition of BTF data is usually time- and resource-intensive due to the high dimensionality of BTF data. This results in expensive, complex measurement setups and/or excessively long measurement times. We propose an approximate BTF acquisition setup based on a simple, affordable mechanical gantry containing a consumer camera and two LED lights. It captures a very limited subset of material surface images by shooting several video sequences. A psychophysical study comparing captured and reconstructed data with the reference BTFs of seven tested materials revealed that results of our method show a promising visual quality. The speed of the setup has been demonstrated on measurement of human skin and on measurement and modeling of a time-varying glue desiccation process. As it allows fast, inexpensive acquisition of approximate BTFs, this method can be beneficial to visualization applications demanding less accuracy, where BTF utilization has previously been limited. PMID:25340451
Slow-rotation dynamic SPECT with a temporal second derivative constraint.
Humphries, T; Celler, A; Trummer, M
2011-08-01
Dynamic tracer behavior in the human body arises as a result of continuous physiological processes. Hence, the change in tracer concentration within a region of interest (ROI) should follow a smooth curve. The authors propose a modification to an existing slow-rotation dynamic SPECT reconstruction algorithm (dSPECT) with the goal of improving the smoothness of time activity curves (TACs) and other properties of the reconstructed image. The new method, denoted d2EM, imposes a constraint on the second derivative (concavity) of the TAC in every voxel of the reconstructed image, allowing it to change sign at most once. Further constraints are enforced to prevent other nonphysical behaviors from arising. The new method is compared with dSPECT using digital phantom simulations and experimental dynamic 99mTc -DTPA renal SPECT data, to assess any improvement in image quality. In both phantom simulations and healthy volunteer experiments, the d2EM method provides smoother TACs than dSPECT, with more consistent shapes in regions with dynamic behavior. Magnitudes of TACs within an ROI still vary noticeably in both dSPECT and d2EM images, but also in images produced using an OSEM approach that reconstructs each time frame individually, based on much more complete projection data. TACs produced by averaging over a region are similar using either method, even for small ROIs. Results for experimental renal data show expected behavior in images produced by both methods, with d2EM providing somewhat smoother mean TACs and more consistent TAC shapes. The d2EM method is successful in improving the smoothness of time activity curves obtained from the reconstruction, as well as improving consistency of TAC shapes within ROIs.
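The d2EM constraint, that each voxel's TAC second derivative (concavity) changes sign at most once, is easy to state as a check on the discrete second difference. The helper below is illustrative only (it verifies the constraint; it is not the reconstruction algorithm):

```python
import numpy as np

def concavity_sign_changes(tac):
    """Count sign changes of the discrete second derivative of a time
    activity curve; the d2EM constraint allows at most one such change
    per voxel."""
    d2 = np.diff(np.asarray(tac, float), n=2)
    signs = np.sign(d2[np.abs(d2) > 1e-12])   # ignore numerically-zero terms
    return int(np.sum(signs[1:] != signs[:-1]))
```

A physiologically plausible wash-in/wash-out curve such as t·e^(−t/3) is concave then convex (one sign change), whereas a noisy saw-tooth TAC changes concavity at every sample and would be rejected.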
Rajaraman, Sivaramakrishnan; Rodriguez, Jeffery J.; Graff, Christian; Altbach, Maria I.; Dragovich, Tomislav; Sirlin, Claude B.; Korn, Ronald L.; Raghunand, Natarajan
2011-01-01
Dynamic Contrast-Enhanced MRI (DCE-MRI) is increasingly in use as an investigational biomarker of response in cancer clinical studies. Proper registration of images acquired at different time-points is essential for deriving diagnostic information from quantitative pharmacokinetic analysis of these data. Motion artifacts in the presence of time-varying intensity due to contrast-enhancement make this registration problem challenging. DCE-MRI of chest and abdominal lesions is typically performed during sequential breath-holds, which introduces misregistration due to inconsistent diaphragm positions, and also places constraints on temporal resolution vis-à-vis free-breathing. In this work, we have employed a computer-generated DCE-MRI phantom to compare the performance of two published methods, Progressive Principal Component Registration and Pharmacokinetic Model-Driven Registration, with Sequential Elastic Registration (SER) to register adjacent time-sample images using a published general-purpose elastic registration algorithm. In all 3 methods, a 3-D rigid-body registration scheme with a mutual information similarity measure was used as a pre-processing step. The DCE-MRI phantom images were mathematically deformed to simulate misregistration which was corrected using the 3 schemes. All 3 schemes were comparably successful in registering large regions of interest (ROIs) such as muscle, liver, and spleen. SER was superior in retaining tumor volume and shape, and in registering smaller but important ROIs such as tumor core and tumor rim. The performance of SER on clinical DCE-MRI datasets is also presented. PMID:21531108
Imaging of blood plasma coagulation at supported lipid membranes.
Faxälv, Lars; Hume, Jasmin; Kasemo, Bengt; Svedhem, Sofia
2011-12-15
The blood coagulation system relies on lipid membrane constituents to act as regulators of the coagulation process upon vascular trauma, and in particular the 2D configuration of the lipid membranes is known to efficiently catalyze enzymatic activity of blood coagulation factors. This work demonstrates a new application of a recently developed methodology to study blood coagulation at lipid membrane interfaces with the use of imaging technology. Lipid membranes with varied net charges were formed on silica supports by systematically using different combinations of lipids where neutral phosphocholine (PC) lipids were mixed with phospholipids having either positively charged ethylphosphocholine (EPC), or negatively charged phosphatidylserine (PS) headgroups. Coagulation imaging demonstrated that negatively charged SiO(2) and membrane surfaces exposing PS (obtained from liposomes containing 30% of PS) had coagulation times which were significantly shorter than those for plain PC membranes and EPC exposing membrane surfaces (obtained from liposomes containing 30% of EPC). Coagulation times decreased non-linearly with increasing negative surface charge for lipid membranes. A threshold value for shorter coagulation times was observed below a PS content of ∼6%. We conclude that the lipid membranes on solid support studied with the imaging setup as presented in this study offers a flexible and non-expensive solution for coagulation studies at biological membranes. It will be interesting to extend the present study towards examining coagulation on more complex lipid-based model systems. Copyright © 2011 Elsevier Inc. All rights reserved.
A Q-Ising model application for linear-time image segmentation
NASA Astrophysics Data System (ADS)
Bentrem, Frank W.
2010-10-01
A computational method is presented which efficiently segments digital grayscale images by directly applying the Q-state Ising (or Potts) model. Since the Potts model was first proposed in 1952, physicists have studied lattice models to gain deep insights into magnetism and other disordered systems. For some time, researchers have realized that digital images may be modeled in much the same way as these physical systems (i.e., as a square lattice of numerical values). A major drawback in using Potts model methods for image segmentation is that conventional methods process in exponential time. Advances have been made via certain approximations to reduce the segmentation process to power-law time. However, in many applications (such as for sonar imagery), real-time processing requires much greater efficiency. This article contains a description of an energy minimization technique that applies four Potts (Q-Ising) models directly to the image and processes in linear time. The result is analogous to partitioning the system into regions of four classes of magnetism. This direct Potts segmentation technique is demonstrated on photographic, medical, and acoustic images.
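A minimal version of the idea, a Q-state Potts energy with a gray-value data term and a 4-neighbor agreement term, minimized greedily with one vectorized pass per sweep, can be sketched as follows. This is an illustration of the Potts-energy formulation under assumed parameters (beta, class means from a linear spread), not the article's exact linear-time algorithm:

```python
import numpy as np

def potts_segment(image, q=4, beta=2.0, sweeps=5):
    """Greedy synchronous minimization of a Q-state Potts energy:
    data term (gray value vs class mean)^2 minus beta times the number
    of 4-neighbors already carrying the candidate label."""
    img = np.asarray(image, float)
    means = np.linspace(img.min(), img.max(), q)   # assumed class means
    labels = np.abs(img[..., None] - means).argmin(-1)
    for _ in range(sweeps):
        for lbl in range(q):
            # count agreeing 4-neighbors for this candidate label
            cand = (labels == lbl).astype(float)
            same = np.zeros_like(img)
            same[1:, :] += cand[:-1, :]; same[:-1, :] += cand[1:, :]
            same[:, 1:] += cand[:, :-1]; same[:, :-1] += cand[:, 1:]
            energy = (img - means[lbl]) ** 2 - beta * same
            if lbl == 0:
                best, labels_new = energy, np.zeros(img.shape, int)
            else:
                labels_new = np.where(energy < best, lbl, labels_new)
                best = np.minimum(energy, best)
        labels = labels_new
    return labels
```

Each sweep touches every pixel a constant number of times, which is the sense in which a direct per-pixel Potts update can run in linear time.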
Pattern formation in triboelectrically charged binary packings
NASA Astrophysics Data System (ADS)
Schella, Andre; Vincent, Thomas; Herminghaus, Stephan; Schröter, Matthias
2015-11-01
Electrostatic self-assembly is an interesting route towards creating well-defined microstructures. In this spirit, we study the process of self-assembly for vertically shaken granular materials. Our system consists of 1 to 400 plastic beads, 3 mm in size, made from Teflon and Nylon, in 2D and 3D geometries. We find self-organization with four-, five- and sixfold order, which is due to charging of the system via triboelectric effects between the grains. We observe that the binary system solidifies on a time scale of a few minutes. Image processing is used to extract the structural and dynamical properties of the assemblies. The mixture ratio is tuned from 1:5 to 5:1 and the humidity level is varied between 10% and 90%, leading to various transitions between the morphologies.
NASA Technical Reports Server (NTRS)
2003-01-01
[figure removed for brevity, see original site] (Released 28 June 2002) The Science This THEMIS visible image illustrates the complex terrains within Terra Meridiani. This general region is one of the more complex on Mars, with a rich array of sedimentary, volcanic, and impact surfaces that span a wide range of martian history. This image lies at the eastern edge of a unique geologic unit that was discovered by the Mars Global Surveyor Thermal Emission Spectrometer (TES) Science Team to have high concentrations of a unique mineral called grey (crystalline) hematite. As discussed by the TES Science Team, this mineral typically forms by processes associated with water, and this region appears to have undergone alteration by hydrothermal (hot water) or other water-related processes. As a result of this evidence for water activity, this region is a leading candidate for further exploration by one of NASA's upcoming Mars Exploration Rovers. The brightness and texture of the surface varies remarkably throughout this image. These differences are associated with different rock layers or 'units', and can be used to map the occurrence of these layers. The number of layers indicates that extensive deposition by volcanic and sedimentary processes has occurred in this region. Since that time, however, extensive erosion has occurred to produce the patchwork of different layers exposed across the surface. Several distinct layers can be seen within the 20 km diameter crater at the bottom (south) of the image, indicating that this crater once contained layers of sedimentary material that has since been removed. THEMIS infrared images of this region show that many of these rock layers have distinctly different temperatures, indicating that the physical properties vary from layer to layer. These differences suggest that the environment and the conditions under which these layers were deposited or solidified varied through time as these layers were formed.
The Story Mars exploration is all about following the signs of past or present water on the red planet. That's because water is the key to understanding the history of the Martian environment (climate and geology), the potential for life to have developed there, and the potential for human exploration some day far in the future. All of the missions in the Mars Exploration Program contribute something special to science investigations about water on Mars and complement each other nicely. For instance, take the above image. Given the contrasts, you can tell that this area is pretty complex. You've got a really old crater that's been eroded, and a rich array of volcanic surfaces and layers where material has been deposited through other processes. Now, that might make this area seem like any number of images you've already seen, but this terrain holds special appeal. A science instrument on the Mars Global Surveyor spacecraft recently discovered that this area has really high concentrations of a unique mineral called grey (crystalline) hematite. That discovery was REALLY exciting to scientists, because hematite found on Earth typically develops in the presence of water. So, did this region have water on the surface long enough for the mineral to have formed sometime in the past? And if so, could that water have been around long enough for life to have developed at some point? After all, if water was around long enough for this mineral to have formed, then maybe, just maybe . . . . Studies of this area by Odyssey and Mars Global Surveyor are helping to pave the way for the Mars Exploration Rovers, which are scheduled to land on Mars in 2004. This alluring, hematite-rich area above is called Terra Meridiani, and is one of the leading candidates among potential landing sites. At least one of the rovers may end up exploring this very terrain! 
While the rover won't have instruments for detecting signs of past or present life, it will be able to use its science instruments to study the rocks up close and to better determine under what environmental conditions they formed. By comparing the rover's surface data with the orbital data, scientists will be able to refine their understanding of the area. Depending on what a rover finds if it lands there, who knows what the long line of future missions to this area might look like? In the meantime, the above THEMIS image will give scientists more opportunities to study this exciting area right now. The brightness and texture of the surface varies remarkably throughout. That's because different rock layers settled on top of one another through a long history of changing environmental conditions before extensive erosion came along to strip layers unevenly away. That's what has produced the patchwork of different exposed layers seen above. Perhaps one layer formed during a wet period of history, and then another layer formed on top of it because of volcanic activity, and then another through wind deposits. Or some other combination. Any future rover fortunate enough to go here will have a field day, as it could potentially study them all! THEMIS's concurrent analyses in the infrared also help in understanding the sequence of layering events through time. THEMIS's infrared studies essentially measure the temperatures of all of the rock layers. Not surprisingly, it turns out that they all have varying temperatures, indicating that the physical properties also differ from layer to layer. By mapping what type of material occurs where, scientists can add to their knowledge of climatic and geologic change through time . . . and maybe have even more to say on the question of water!
Automated inspection of hot steel slabs
Martin, R.J.
1985-12-24
The disclosure relates to a real time digital image enhancement system for performing the image enhancement segmentation processing required for a real time automated system for detecting and classifying surface imperfections in hot steel slabs. The system provides for simultaneous execution of edge detection processing and intensity threshold processing in parallel on the same image data produced by a sensor device such as a scanning camera. The results of each process are utilized to validate the results of the other process and a resulting image is generated that contains only corresponding segmentation that is produced by both processes. 5 figs.
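The patent's central idea, running edge detection and intensity thresholding in parallel on the same frame and keeping only segmentation that both processes confirm, reduces to a logical AND of two masks. A toy NumPy sketch (the thresholds, the gradient detector, and the one-pixel dilation are illustrative choices, not the patented hardware pipeline):

```python
import numpy as np

def validated_defects(image, thresh, grad_thresh):
    """Keep a pixel as a defect only if BOTH the intensity threshold
    and a gradient-magnitude edge detector flag its neighborhood;
    each process validates the other's segmentation."""
    img = np.asarray(image, float)
    intensity_mask = img > thresh
    gy, gx = np.gradient(img)
    edge_mask = np.hypot(gx, gy) > grad_thresh
    # dilate the edge response by one pixel so edges border hot regions
    grown = edge_mask.copy()
    grown[1:, :] |= edge_mask[:-1, :]; grown[:-1, :] |= edge_mask[1:, :]
    grown[:, 1:] |= edge_mask[:, :-1]; grown[:, :-1] |= edge_mask[:, 1:]
    return intensity_mask & grown
```

A bright speck surrounded by strong gradients survives the AND; a smooth illumination gradient that trips only one detector does not.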
Dynamic laser speckle for non-destructive quality evaluation of bread
NASA Astrophysics Data System (ADS)
Stoykova, E.; Ivanov, B.; Shopova, M.; Lyubenova, T.; Panchev, I.; Sainov, V.
2010-10-01
Coherent illumination of a diffuse object yields a randomly varying interference pattern, which changes over time at any modification of the object. This phenomenon can be used for detection and visualization of physical or biological activity in various objects (e.g. fruits, seeds, coatings) through statistical description of laser speckle dynamics. The present report aims at non-destructive full-field evaluation of bread by spatial-temporal characterization of laser speckle. The main purpose of the conducted experiments was to prove the ability of the dynamic speckle method to indicate activity within the studied bread samples. In the set-up for acquisition and storage of dynamic speckle patterns, an expanded beam from a DPSS laser (532 nm, 100 mW) illuminated the sample through a ground glass diffuser. A CCD camera, adjusted to focus on the sample, regularly recorded a sequence of images (8 bits, 780 × 582 pixels, each 8.1 × 8.1 μm) at a sampling frequency of 0.25 Hz. A temporal structure function was calculated to evaluate activity of the bread samples in time using the full images in the sequence. In total, 7 samples of two types of bread were monitored during the chemical and physical process of bread staling. Segmentation of images into matrices of isometric fragments was also utilized. The results proved the potential of dynamic speckle as an effective means for monitoring the process of bread staling and the ability of this approach to differentiate between different types of bread.
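A temporal structure function of a speckle stack is, in its simplest first-order form, the mean absolute intensity change between frames a fixed lag apart, per pixel. The sketch below uses that simple form as an assumption (the report does not specify its exact estimator here):

```python
import numpy as np

def temporal_structure_function(stack, lag=1):
    """First-order temporal structure function of a speckle sequence:
    mean |I(t+lag) - I(t)| per pixel over the whole stack; larger
    values indicate higher activity at that pixel."""
    s = np.asarray(stack, float)          # shape (frames, rows, cols)
    return np.mean(np.abs(s[lag:] - s[:-lag]), axis=0)
```

Pixels over an active (fluctuating) region score high, while a static region scores zero, which is exactly the contrast used to map activity across the sample.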
Real time 3D structural and Doppler OCT imaging on graphics processing units
NASA Astrophysics Data System (ADS)
Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr
2013-03-01
In this report, the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flows in capillary vessels, is presented. Generally, processing FdOCT data on the computer's main processor (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem: exploiting them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows complete processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. The 3D imaging in the same mode, for volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper, the software architecture, organization of the threads, and the optimizations applied are described. For illustration, screenshots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.
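The Doppler step such pipelines parallelize is, per depth pixel, the phase difference between adjacent complex A-scans, with flow velocity proportional to Δφ = arg(A_{n+1} · conj(A_n)). A one-line NumPy sketch of that kernel (the GPU pipeline additionally performs the spectral FFTs, which are omitted here):

```python
import numpy as np

def doppler_phase(ascans):
    """Phase-resolved Doppler step: flow-induced phase shift between
    adjacent complex A-scans at each depth, in radians."""
    a = np.asarray(ascans)                # shape (n_ascans, depth), complex
    return np.angle(a[1:] * np.conj(a[:-1]))
```

For synthetic A-scans whose phase advances by a constant 0.3 rad per scan, every depth pixel reads out that phase step.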
Optical image encryption scheme with multiple light paths based on compressive ghost imaging
NASA Astrophysics Data System (ADS)
Zhu, Jinan; Yang, Xiulun; Meng, Xiangfeng; Wang, Yurong; Yin, Yongkai; Sun, Xiaowen; Dong, Guoyan
2018-02-01
An optical image encryption method with multiple light paths is proposed based on compressive ghost imaging. In the encryption process, M random phase-only masks (POMs) are generated by means of a logistic map algorithm, and these masks are then uploaded to the spatial light modulator (SLM). The collimated laser light is divided into several beams by beam splitters as it passes through the SLM, and the light beams illuminate the secret images, which are converted into sparse images by discrete wavelet transform beforehand. Thus, the secret images are simultaneously encrypted into intensity vectors by ghost imaging. The distances between the SLM and the secret images vary and can be used as the main keys, together with the original POMs and the logistic map coefficient, in the decryption process. In the proposed method, the storage space can be significantly decreased and the security of the system can be improved. The feasibility, security and robustness of the method are further analysed through computer simulations.
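The key-generation step, producing random phase-only masks from a logistic map, can be sketched directly; the initial value x0 and the coefficient mu act as secret keys. The specific parameter values below are arbitrary examples, and the paper's full scheme generates M such masks:

```python
import numpy as np

def logistic_pom(n_pixels, x0=0.37, mu=3.99, burn_in=100):
    """One random phase-only mask from the logistic map
    x_{k+1} = mu * x_k * (1 - x_k); iterates are mapped to unit-modulus
    phases exp(2*pi*i*x). x0 and mu serve as the secret keys."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = mu * x * (1 - x)
    vals = np.empty(n_pixels)
    for i in range(n_pixels):
        x = mu * x * (1 - x)
        vals[i] = x
    return np.exp(2j * np.pi * vals)      # phase-only: |mask| == 1
```

Because the map is deterministic, the receiver regenerates identical masks from the same keys, which is what makes them usable for decryption.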
Real-time optical image processing techniques
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang
1988-01-01
Nonlinear real-time optical processing based on spatial pulse frequency modulation has been pursued through the analysis, design, and fabrication of pulse frequency modulated halftone screens and the modification of micro-channel spatial light modulators (MSLMs). Micro-channel spatial light modulators are modified via the Fabry-Perot method to achieve the high gamma operation required for nonlinear operation. Real-time nonlinear processing was performed using the halftone screen and MSLM. The experiments showed the effectiveness of the thresholding and also the need for a higher space-bandwidth product (SBP) for image processing. The Hughes LCLV has been characterized and found to yield high gamma (about 1.7) when operated in low-frequency, low-bias mode. Cascading two LCLVs should also provide enough gamma for nonlinear processing. In this case, the SBP of the LCLV is sufficient but the uniformity of the LCLV needs improvement. Applications investigated include image correlation, computer generation of holograms, pseudo-color image encoding for image enhancement, and associative retrieval in neural processing. The discovery of the only known optical method for real-time dynamic range compression of an input image using GaAs photorefractive crystals is reported. Finally, a new architecture for nonlinear multiple-sensory neural processing has been suggested.
NASA Astrophysics Data System (ADS)
Park, Suhyung; Park, Jaeseok
2015-05-01
Accelerated dynamic MRI, which exploits spatiotemporal redundancies in k - t space and the coil dimension, has been widely used to reduce the number of signal encodings and thus increase imaging efficiency with minimal loss of image quality. Nonetheless, particularly in cardiac MRI it still suffers from artifacts and amplified noise in the presence of time-drifting coil sensitivity due to relative motion between coil and subject (e.g. free breathing). Furthermore, a substantial number of additional calibrating signals must be acquired to warrant accurate calibration of coil sensitivity. In this work, we propose a novel, accelerated dynamic cardiac MRI with sparse-Kalman-smoother self-calibration and reconstruction (k - t SPARKS), which is robust to time-varying coil sensitivity even with a small number of calibrating signals. The proposed k - t SPARKS incorporates Kalman-smoother self-calibration in k - t space and sparse signal recovery in x - f space into a single optimization problem, leading to iterative, joint estimation of time-varying convolution kernels and missing signals in k - t space. In the Kalman-smoother calibration, motion-induced uncertainties over the entire time frames were included in modeling the state transition, while a coil-dependent noise statistic was used in describing the measurement process. The sparse signal recovery iteratively alternates with the self-calibration to tackle the ill-conditioning problem potentially resulting from insufficient calibrating signals. Simulations and experiments were performed using both the proposed and conventional methods for comparison, revealing that the proposed k - t SPARKS yields higher signal-to-error ratio and superior temporal fidelity in both breath-hold and free-breathing cardiac applications over all reduction factors.
Image Processing Using a Parallel Architecture.
1987-12-01
ENG/87D-25 Abstract This study developed a set of low-level image processing tools on a parallel computer that allows concurrent processing of images...environment, the set of tools offers a significant reduction in the time required to perform some commonly used image processing operations...step toward developing these systems, a structured set of image processing tools was implemented using a parallel computer. More important than
NASA Astrophysics Data System (ADS)
Thomer, A.
2017-12-01
Data provenance - the record of the varied processes that went into the creation of a dataset, as well as the relationships between resulting data objects - is necessary to support the reusability, reproducibility and reliability of earth science data. In sUAS-based research, capturing provenance can be particularly challenging because of the breadth and distributed nature of the many platforms used to collect, process and analyze data. In any given project, multiple drones, controllers, computers, software systems, sensors, cameras, imaging processing algorithms and data processing workflows are used over sometimes long periods of time. These platforms and processing result in dozens - if not hundreds - of data products in varying stages of readiness-for-analysis and sharing. Provenance tracking mechanisms are needed to make the relationships between these many data products explicit, and therefore more reusable and shareable. In this talk, I discuss opportunities and challenges in tracking provenance in sUAS-based research, and identify gaps in current workflow-capture technologies. I draw on prior work conducted as part of the IMLS-funded Site-Based Data Curation project, in which we developed methods of documenting in and ex silico (that is, computational and non-computational) workflows, and demonstrate this approach's applicability to research with sUASes. I conclude with a discussion of ontologies and other semantic technologies that have potential application in sUAS research.
A small animal time-resolved optical tomography platform using wide-field excitation
NASA Astrophysics Data System (ADS)
Venugopal, Vivek
Small animal imaging plays a critical role in present day biomedical research by filling an important gap in the translation of research from the bench to the bedside. Optical techniques constitute an emerging imaging modality which has tremendous potential in preclinical applications. Optical imaging methods are capable of non-invasive assessment of the functional and molecular characteristics of biological tissue. The three-dimensional optical imaging technique, referred to as diffuse optical tomography, provides an approach for the whole-body imaging of small animal models and can provide volumetric maps of tissue functional parameters (e.g. blood volume, oxygen saturation etc.) and/or provide 3D localization and quantification of fluorescence-based molecular markers in vivo. However, the complex mathematical reconstruction problem associated with optical tomography and the cumbersome instrumental designs limit its adoption as a high-throughput quantitative whole-body imaging modality in current biomedical research. The development of new optical imaging paradigms is thus necessary for wide acceptance of this new technology. In this thesis, the design, development, characterization and optimization of a small animal optical tomography system is discussed. Specifically, the platform combines a highly sensitive time-resolved imaging paradigm with multi-spectral excitation capability and CCD-based detection to provide a system capable of generating spatially, spectrally and temporally dense measurement datasets. The acquisition of such datasets, however, can be lengthy and translate to often unrealistic acquisition times when using the classical point-source-based excitation scheme. The novel approach in the design of this platform is the adoption of a wide-field excitation scheme which employs extended excitation sources and in the process allows an estimated ten-fold reduction in the acquisition time.
The work described herein details the design of the imaging platform employing DLP-based excitation and time-gated intensified CCD detection, and the optimal system operation parameters are determined. The feasibility of this imaging approach and the accuracy of the system in reconstructing functional parameters and fluorescence markers based on lifetime contrast are established through phantom studies. As a part of the system characterization, the effect of noise in time-resolved optical tomography is investigated and the propagation of system noise in optical reconstructions is established. Furthermore, data processing and measurement calibration techniques aimed at reducing the effect of noise in reconstructions are defined. The optimization of excitation pattern selection is established through a novel measurement-guided iterative pattern correction scheme. This technique, referred to as Adaptive Full-Field Optical Tomography, was shown to improve reconstruction performance in murine models by reducing the dynamic range in photon flux measurements on the surface. Lastly, the application of the unique attributes of this platform to a biologically relevant imaging application, referred to as Förster Resonance Energy Transfer, is described. The tomographic imaging of FRET interaction in vivo on a whole-body scale is achieved using the wide-field imaging approach based on lifetime contrast. This technique represents the first demonstration of tomographic FRET imaging in small animals and has significant potential in the development of optical imaging techniques in varied applications ranging from drug discovery to the in vivo study of protein-protein interaction.
NASA Astrophysics Data System (ADS)
Languirand, Eric Robert
Chemical imaging is an important tool for providing insight into the function, role, and spatial distribution of analytes. This thesis describes the use of imaging fiber bundles (IFB) for super-resolution reconstruction using surface enhanced Raman scattering (SERS), showing improvement in resolution with arrayed bundles for the first time. Additionally, this thesis describes characteristics of the IFB with regard to cross-talk as a function of aperture size. The first part of this thesis characterizes the IFB for both tapered and untapered bundles in terms of cross-talk. Cross-talk is defined as the amount of light leaking from a central fiber element in the imaging fiber bundle to surrounding fiber elements. To make this measurement ubiquitous for all imaging bundles, quantum dots were employed. Untapered and tapered IFB possess cross-talk of 2% or less, with fiber elements down to 32 nm. The second part of this thesis employs a super-resolution reconstruction algorithm using projection onto convex sets for resolution improvement. When using IFB arrays, the point spread function (PSF) of the array can be known accurately if the fiber elements overfill the pixel detector array. Therefore, the use of the known PSF compared to a general blurring kernel was evaluated. Relative increases in resolution of 12% and 2% at the 95% confidence level are found, when compared to a reference image, for the general blurring kernel and PSF, respectively. The third part of this thesis shows for the first time the use of SERS with a dithered IFB array coupled with super-resolution reconstruction. The resolution improvement across a step-edge is shown to be approximately 20% when compared to a reference image. This provides an additional means of increasing the resolution of fiber bundles beyond that of just tapering. Furthermore, this provides a new avenue for nanoscale imaging using these bundles.
Lastly, synthetic data with varying degrees of signal-to-noise (S/N) were employed to explore the relationship S/N has with the reconstruction process. It is generally shown that increasing the number of images used in the reconstruction process and increasing the S/N will improve the reconstruction, providing larger increases in resolution.
Investigations of image fusion
NASA Astrophysics Data System (ADS)
Zhang, Zhong
1999-12-01
The objective of image fusion is to combine information from multiple images of the same scene. The result of image fusion is a single image which is more suitable for the purpose of human visual perception or further image processing tasks. In this thesis, a region-based fusion algorithm using the wavelet transform is proposed. The identification of important features in each image, such as edges and regions of interest, is used to guide the fusion process. The idea of multiscale grouping is also introduced and a generic image fusion framework based on multiscale decomposition is studied. The framework includes all of the existing multiscale-decomposition-based fusion approaches we found in the literature which did not assume a statistical model for the source images. Comparisons indicate that our framework includes some new approaches which outperform the existing approaches for the cases we consider. Registration must precede our fusion algorithms, so we proposed a hybrid scheme which uses both feature-based and intensity-based methods. The idea of robust estimation of optical flow from time-varying images is employed with a coarse-to-fine multi-resolution approach and feature-based registration to overcome some of the limitations of the intensity-based schemes. Experiments show that this approach is robust and efficient. Assessing image fusion performance in a real application is a complicated issue. In this dissertation, a mixture probability density function model is used in conjunction with the Expectation-Maximization algorithm to model histograms of edge intensity. Some new techniques are proposed for estimating the quality of a noisy image of a natural scene. Such quality measures can be used to guide the fusion. Finally, we study fusion of images obtained from several copies of a new type of camera developed for video surveillance.
Our techniques increase the capability and reliability of the surveillance system and provide an easy way to obtain 3-D information of objects in the space monitored by the system.
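The choose-max multiscale rule at the heart of such fusion frameworks can be sketched with a single-level Haar transform. This is a simplification for illustration only (the thesis considers general multiscale decompositions and region-based rules), not the author's algorithm:

```python
import numpy as np

def haar2d(x):
    # Single-level 2D Haar transform: approximation (LL) plus three detail bands.
    a = (x[0::2] + x[1::2]) / 2.0   # vertical average
    d = (x[0::2] - x[1::2]) / 2.0   # vertical difference
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    # Exact inverse of haar2d.
    h, w = LL.shape
    a = np.empty((h, 2 * w))
    d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((2 * h, 2 * w))
    x[0::2], x[1::2] = a + d, a - d
    return x

def fuse(img_a, img_b):
    # Average the approximation band; for each detail coefficient keep the
    # one with larger magnitude from either source (the choose-max rule).
    A, B = haar2d(img_a), haar2d(img_b)
    LL = (A[0] + B[0]) / 2.0
    details = [np.where(np.abs(p) >= np.abs(q), p, q) for p, q in zip(A[1:], B[1:])]
    return ihaar2d(LL, *details)
```

Large-magnitude detail coefficients tend to correspond to edges, so this rule preserves the sharper structure from whichever source image contains it.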
Design of an MR image processing module on an FPGA chip
NASA Astrophysics Data System (ADS)
Li, Limin; Wyrwicz, Alice M.
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that, through graphical coding, the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
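The row-column decomposition that makes a transpose-free design possible can be mimicked in software. In this NumPy sketch the second pass simply indexes the same buffer column-wise, the software analogue of what an address generation unit does in hardware (an illustration, not the FPGA design itself):

```python
import numpy as np

def fft2_no_transpose(img):
    # Row-column decomposition of the 2D FFT. Instead of materializing a
    # transposed copy between the two passes, the second pass addresses
    # the same buffer column-wise.
    buf = np.asarray(img, dtype=complex).copy()
    for r in range(buf.shape[0]):        # pass 1: 1D FFT along each row
        buf[r, :] = np.fft.fft(buf[r, :])
    for c in range(buf.shape[1]):        # pass 2: 1D FFT along each column, in place
        buf[:, c] = np.fft.fft(buf[:, c])
    return buf
```

Because the 2D DFT is separable, the result is identical to a direct 2D FFT; in hardware the column pass is realized purely by generating a different read/write address sequence.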
Programmable Iterative Optical Image And Data Processing
NASA Technical Reports Server (NTRS)
Jackson, Deborah J.
1995-01-01
Proposed method of iterative optical image and data processing overcomes limitations imposed by loss of optical power after repeated passes through many optical elements - especially, beam splitters. Involves selective, timed combination of optical wavefront phase conjugation and amplification to regenerate images in real time to compensate for losses in optical iteration loops; timing such that amplification turned on to regenerate desired image, then turned off so as not to regenerate other, undesired images or spurious light propagating through loops from unwanted reflections.
Real-time blind image deconvolution based on coordinated framework of FPGA and DSP
NASA Astrophysics Data System (ADS)
Wang, Ze; Li, Hang; Zhou, Hua; Liu, Hongjun
2015-10-01
Image restoration plays a crucial role in several important application domains. As algorithms become more complex and their computational requirements increase, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for a blind iterative deconvolution method based on the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm, and an image restoration processing system based on the coordinated framework of FPGA and DSP (CoFD) is presented. Single-precision floating-point processing units with small-scale cascade and special FFT/IFFT processing modules are adopted to guarantee the accuracy of the processing. Finally, comparative experiments were performed. The system can process a blurred image of 128×128 pixels within 32 milliseconds, and is up to three or four times faster than traditional multi-DSP systems.
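For orientation, the classical (non-blind) Richardson-Lucy update that such systems accelerate can be sketched in NumPy. This assumes periodic boundaries and a known PSF; the blind variant alternates a similar update between image and PSF estimates, and nothing here reflects the CoFD hardware mapping:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    # Classical Richardson-Lucy deconvolution with circular convolution
    # implemented via the FFT (matching the FFT/IFFT modules mentioned above).
    psf_pad = np.zeros_like(blurred, dtype=float)
    kh, kw = psf.shape
    psf_pad[:kh, :kw] = psf
    # Center the kernel at the origin for circular convolution.
    psf_pad = np.roll(psf_pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    Hc = np.conj(H)                       # correlation = convolution with flipped PSF
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        reblur = np.real(np.fft.ifft2(np.fft.fft2(est) * H))
        ratio = blurred / (reblur + eps)  # multiplicative correction term
        est *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * Hc))
    return est
```

Each iteration costs four FFTs, which is why dedicated FFT/IFFT pipelines dominate the hardware budget of R-L accelerators.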
Investigation of time-resolved proton radiography using x-ray flat-panel imaging system
NASA Astrophysics Data System (ADS)
Jee, K.-W.; Zhang, R.; Bentefour, E. H.; Doolan, P. J.; Cascio, E.; Sharp, G.; Flanz, J.; Lu, H.-M.
2017-03-01
Proton beam therapy benefits from the Bragg peak and delivers highly conformal dose distributions. However, the location of the end-of-range is subject to uncertainties related to the accuracy of the relative proton stopping power estimates and thereby the water-equivalent path length (WEPL) along the beam. To remedy the range uncertainty, an in vivo measurement of the WEPL through the patient, i.e. a proton-range radiograph, is highly desirable. Towards that goal, we have explored a novel method of proton radiography based on the time-resolved dose measured by a flat panel imager (FPI). A 226 MeV pencil beam and a custom-designed range modulator wheel (MW) were used to create a time-varying broad beam. The proton imaging technique used exploits this time dependency by looking at the dose rate at the imager as a function of time. This dose rate function (DRF) has a unique time-varying dose pattern at each depth of penetration. A relatively slow rotation of the MW (0.2 revolutions per second) and a fast image acquisition (30 frames per second, ~33 ms sampling) provided a sufficient temporal resolution for each DRF. Along with the high output of the CsI:Tl scintillator, imaging with pixel binning (2 × 2) generated high signal-to-noise data at a very low radiation dose (~0.1 cGy). Proton radiographs of a head phantom and a Gammex CT calibration phantom were taken with various configurations. The results of the phantom measurements show that the FPI can generate low noise and high spatial resolution proton radiographs. The WEPL values of the CT tissue surrogate inserts show that the measured relative stopping powers are accurate to ~2%. The panel did not show any noticeable radiation damage after the cumulative dose of approximately 3831 cGy. In summary, we have successfully demonstrated a highly practical method of generating proton radiography using an x-ray flat panel imager.
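One plausible way to turn the per-pixel dose rate function into a WEPL estimate is to match it against a calibration library of DRFs recorded at known depths; the normalized-correlation matcher below is our illustrative choice, not necessarily the matching step the authors used:

```python
import numpy as np

def wepl_from_drf(pixel_drf, calib_drfs, calib_wepls):
    # Normalize away overall dose scale and offset, then pick the
    # calibration depth whose dose-rate function best matches the
    # measured per-pixel DRF (hypothetical matching rule).
    def norm(v):
        v = v - v.mean()
        return v / (np.linalg.norm(v) + 1e-12)
    scores = [float(np.dot(norm(pixel_drf), norm(c))) for c in calib_drfs]
    return calib_wepls[int(np.argmax(scores))]
```

Because the DRF shape (not its absolute amplitude) encodes depth, normalization makes the lookup insensitive to per-pixel gain and fluence variations.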
Pickett, William; Kukaswadia, Atif; Thompson, Wendy; Frechette, Mylene; McFaull, Steven; Dowdall, Hilary; Brison, Robert J
2014-01-01
This study assessed the use and clinical yield of diagnostic imaging (radiography, computed tomography, and magnetic resonance imaging) ordered to assist in the diagnosis of acute neck injuries presenting to emergency departments (EDs) in Kingston, Ontario, from 2002-2003 to 2009-2010. Acute neck injury cases were identified using records from the Kingston sites of the Canadian National Ambulatory Care Reporting System. Use of radiography was analyzed over time and related to proportions of cases diagnosed with clinically significant cervical spine injuries. A total of 4,712 neck injury cases were identified. Proportions of cases referred for diagnostic imaging of the neck varied significantly over time, from 30.4% in 2002-2003 to 37.6% in 2009-2010 (p-trend = 0.02). The percentage of total cases that were positive for clinically significant cervical spine injury ("clinical yield") also varied, from a low of 5.8% in 2005-2006 to 9.2% in 2008-2009 (p-trend = 0.04), although the clinical yield of neck-imaged cases did not increase across the study years (p-trend = 0.23). Increased clinical yield was not observed in association with higher neck imaging rates, whether that yield was expressed as a percentage of total cases positive for clinically significant injury (p = 0.29) or as a percentage of neck-imaged cases that were positive (p = 0.77). We observed increases in the use of diagnostic imaging over time, reflecting a need to reinforce an existing clinical decision rule for cervical spine radiography. Temporal increases in the clinical yield for total cases may suggest a changing case mix or more judicious use of advanced types of diagnostic imaging.
Uusberg, Helen; Peet, Krista; Uusberg, Andero; Akkermann, Kirsti
2018-03-17
Appearance-related attention biases are thought to contribute to body image disturbances. We investigated how preoccupation with body image is associated with attention biases to body size, focusing on the role of social comparison processes and automaticity. Thirty-six women varying in self-reported preoccupation compared their actual body size to size-modified images of either themselves or a figure-matched peer. Amplification of earlier (N170, P2) and later (P3, LPP) ERP components recorded under low vs. high concurrent working memory load was analyzed. Women with high preoccupation exhibited an earlier bias to larger bodies of both self and peer. During later processing stages, they exhibited a stronger bias to enlarged as well as reduced self-images and a lack of sensitivity to size-modifications of the peer-image. Working memory load did not affect these biases systematically. Current findings suggest that preoccupation with body image involves an earlier attention bias to weight-increase cues and later over-engagement with one's own figure. Copyright © 2018 Elsevier B.V. All rights reserved.
Real-time satellite monitoring of Nornahraun lava flow NE Iceland
NASA Astrophysics Data System (ADS)
Jónsdóttir, Ingibjörg; Þórðarson, Þorvaldur; Höskuldsson, Ármann; Davis, Ashley; Schneider, David; Wright, Robert; Kestay, Laszlo; Hamilton, Christopher; Harris, Andrew; Coppola, Diego; Tumi Guðmundsson, Magnús; Durig, Tobias; Pedersen, Gro; Drouin, Vincent; Höskuldsson, Friðrik; Símonarson, Hreggviður; Örn Arnarson, Gunnar; Örn Einarsson, Magnús; Riishuus, Morten
2015-04-01
An effusive eruption started in Holuhraun, NE Iceland, on 31 August 2014, producing the Nornahraun lava flow field which had, by the beginning of 2015, covered over 83 km2. Throughout this event, various satellite images have been analyzed to monitor the development, identify active areas and map the lava extent in close collaboration with the field group, which involved regular exchange of direct observations and satellite-based data for ground truthing and suggesting possible sites for lava sampling. From the beginning, satellite images of low geometric but high temporal resolution (NOAA AVHRR, MODIS) were used to monitor the main regions of activity and position new vents to within 1 km accuracy. As they became available, multispectral images in higher resolution (LANDSAT 8, LANDSAT 7, ASTER, EO-1 ALI) were used to map the lava channels, study lava structures and classify regions of varying activity. Hyperspectral sensors (EO-1 HYPERION), though with limited area coverage, have given a good indication of vent and lava temperature and effusion rates. All available radar imagery (SENTINEL-1, RADARSAT, COSMO SKYMED, TERRASAR X) has been used for studying lava extent, landscape and roughness. The Icelandic Coast Guard has, on a number of occasions, provided high resolution radar and thermal images from reconnaissance flights. These data sources complement each other well and have improved analysis of events. Whilst classical TIR channels were utilized to map the temperature history of the lava, SWIR and NIR channels caught regions of highest temperature, allowing an estimate of the most active lava channels and even indicating potential changes in channel structure. Combining thermal images and radar images took this prediction a step further, improving interpretation of both image types and the study of the difference between open and closed lava channels.
Efforts are underway to compare different methods of estimating magma discharge and to improve the process for use in real time, as well as for understanding the different phases of the eruption. During the eruption, these efforts have supported mapping of the extent of the lava every 3-4 days on average and thus underpin the time series of magma discharge calculations. Emphasis has been on communicating all information to relevant authorities and the public. Geographic Information Systems (ArcGIS) have been important for comparing, storing and presenting data, but specialized image processing programs (ERDAS IMAGINE, ENVI) are crucial for analyzing image signatures. Collaboration with USGS and NASA proved essential for acquiring relevant data in real time.
Rapid 3D bioprinting from medical images: an application to bone scaffolding
NASA Astrophysics Data System (ADS)
Lee, Daniel Z.; Peng, Matthew W.; Shinde, Rohit; Khalid, Arbab; Hong, Abigail; Pennacchi, Sara; Dawit, Abel; Sipzner, Daniel; Udupa, Jayaram K.; Rajapakse, Chamith S.
2018-03-01
Bioprinting of tissue has applications throughout medicine. Recent advances in medical imaging allow the generation of 3-dimensional models that can then be 3D printed. However, the conventional method of converting medical images to 3D-printable G-Code instructions has several limitations, namely significant processing time for large, high resolution images, and the loss of microstructural surface information from surface triangulation and subsequent reslicing. We have overcome these issues by creating a Java program that skips the intermediate triangulation and reslicing steps and directly converts binary DICOM images into G-Code. In this study, we tested the two methods of G-Code generation on the application of synthetic bone graft scaffold generation. We imaged human cadaveric proximal femurs at an isotropic resolution of 0.03 mm using a high resolution peripheral quantitative computed tomography (HR-pQCT) scanner. These images, in the Digital Imaging and Communications in Medicine (DICOM) format, were then processed through two methods. In each method, slices and regions of print were selected, filtered to generate a smoothed image, and thresholded. In the conventional method, these processed images are converted to the STereoLithography (STL) format and then resliced to generate G-Code. In the new, direct method, these processed images are run through our Java program and directly converted to G-Code. File size, processing time, and print time were measured for each. We found that the new method produced a significant reduction in G-Code file size as well as processing time (92.23% reduction). This allows for more rapid 3D printing from medical images.
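The direct image-to-G-Code idea can be sketched for one binary slice: each run of foreground pixels in a row becomes a travel move plus a print move, with no STL or reslicing round-trip. The pixel pitch, feed rate, and move vocabulary here are placeholder assumptions, not the program's actual output format:

```python
import numpy as np

def slice_to_gcode(mask, px=0.03, z=0.0, feed=1200):
    # Emit raster G-Code moves directly from one thresholded binary slice.
    # Each run of foreground pixels in a row becomes a G0 travel to the run
    # start followed by a G1 move across the run (extrusion values omitted).
    lines = [f"G1 Z{z:.3f} F{feed}"]
    for r, row in enumerate(mask):
        padded = np.concatenate(([0], row.astype(int), [0]))
        starts = np.flatnonzero(np.diff(padded) == 1)   # run begins
        ends = np.flatnonzero(np.diff(padded) == -1)    # run ends (exclusive)
        y = r * px
        for s, e in zip(starts, ends):
            lines.append(f"G0 X{s * px:.3f} Y{y:.3f}")
            lines.append(f"G1 X{(e - 1) * px:.3f} Y{y:.3f}")
    return lines
```

Because the toolpath comes straight from pixel runs, the microstructure captured at the scanner's 0.03 mm resolution survives into the toolpath instead of being smoothed away by triangulation and reslicing.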
Dao, Lam; Glancy, Brian; Lucotte, Bertrand; Chang, Lin-Ching; Balaban, Robert S; Hsu, Li-Yueh
2015-01-01
This paper investigates a post-processing approach to correct spatial distortion in two-photon fluorescence microscopy images for vascular network reconstruction. It is aimed at in vivo imaging in large field-of-view, deep-tissue studies of vascular structures. Based on simple geometric modeling of the object-of-interest, a distortion function is directly estimated from the image volume by deconvolution analysis. This distortion function is then applied to sub-volumes of the image stack to adaptively adjust for spatially varying distortion and reduce image blurring through blind deconvolution. The proposed technique was first evaluated in phantom imaging of fluorescent microspheres that are comparable in size to the underlying capillary vascular structures. The effectiveness of restoring the three-dimensional spherical geometry of the microspheres using the estimated distortion function was compared with an empirically measured point-spread function. Next, the proposed approach was applied to in vivo vascular imaging of mouse skeletal muscle to reduce the image distortion of the capillary structures. We show that the proposed method effectively improves image quality and reduces spatially varying distortion in large field-of-view, deep-tissue vascular datasets. The proposed method will help in qualitative interpretation and quantitative analysis of vascular structures from fluorescence microscopy images. PMID:26224257
Streaming Multiframe Deconvolutions on GPUs
NASA Astrophysics Data System (ADS)
Lee, M. A.; Budavári, T.
2015-09-01
Atmospheric turbulence distorts all ground-based observations, which is especially detrimental to faint detections. The point spread function (PSF) defining this blur is unknown for each exposure and varies significantly over time, making image analysis difficult. Lucky imaging and traditional co-adding throw away lots of information. We developed blind deconvolution algorithms that can simultaneously obtain robust solutions for the background image and all the PSFs. It is done in a streaming setting, which makes it practical for a large number of big images. We implemented a new tool that runs on GPUs and achieves exceptional running times that can scale to the new time-domain surveys. Our code can quickly and effectively recover high-resolution images exceeding the quality of traditional co-adds. We demonstrate the power of the method on the repeated exposures in the Sloan Digital Sky Survey's Stripe 82.
Correlation time and diffusion coefficient imaging: application to a granular flow system.
Caprihan, A; Seymour, J D
2000-05-01
A parametric method for spatially resolved measurements of velocity autocorrelation functions, R_u(tau) = <u(t)u(t+tau)>, expressed as a sum of exponentials, is presented. The method is applied to a granular flow system of 2-mm oil-filled spheres rotated in a half-filled horizontal cylinder, which is an Ornstein-Uhlenbeck process with velocity autocorrelation function R_u(tau) = <u^2> e^(-|tau|/tau_c), where tau_c is the correlation time and D = <u^2> tau_c is the diffusion coefficient. The pulsed-field-gradient NMR method consists of applying three different gradient pulse sequences of varying motion sensitivity to distinguish the range of correlation times present for particle motion. Time-dependent apparent diffusion coefficients are measured for these three sequences, and tau_c and D are then calculated from the apparent diffusion coefficient images. For the cylinder rotation rate of 2.3 rad/s, the axial diffusion coefficient at the top center of the free surface was 5.5 x 10^-6 m^2/s, the correlation time was 3 ms, and the velocity fluctuation or granular temperature <u^2> was 1.8 x 10^-3 m^2/s^2. This method is also applicable to study transport in systems involving turbulence and porous media flows. Copyright 2000 Academic Press.
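The reported figures are mutually consistent under the Ornstein-Uhlenbeck relation D = <u^2> tau_c, which is easy to verify:

```python
# Consistency check: for an Ornstein-Uhlenbeck velocity process,
# the diffusion coefficient is D = <u^2> * tau_c.
u2 = 1.8e-3     # granular temperature <u^2>, in m^2/s^2
tau_c = 3.0e-3  # correlation time, in s
D = u2 * tau_c  # ~5.4e-6 m^2/s, matching the reported 5.5e-6 within rounding
```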
Rapid neural discrimination of communicative gestures
Carlson, Thomas A.
2015-01-01
Humans are biased toward social interaction. Behaviorally, this bias is evident in the rapid effects that self-relevant communicative signals have on attention and perceptual systems. The processing of communicative cues recruits a wide network of brain regions, including mentalizing systems. Relatively less work, however, has examined the timing of the processing of self-relevant communicative cues. In the present study, we used a multivariate pattern analysis (decoding) approach to the analysis of magnetoencephalography (MEG) data to study the processing dynamics of social-communicative actions. Twenty-four participants viewed images of a woman performing actions that varied on a continuum of communicative factors, including self-relevance (to the participant) and emotional valence, while their brain activity was recorded using MEG. Controlling for low-level visual factors, we found early discrimination of emotional valence (70 ms) and self-relevant communicative signals (100 ms). These data offer neural support for the robust and rapid effects of self-relevant communicative cues on behavior. PMID:24958087
NASA Astrophysics Data System (ADS)
Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.
2012-05-01
The present study proposes a fully integrated, semi-automatic, near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautic composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data are processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.
NASA Astrophysics Data System (ADS)
Bhat, M. R.; Binoy, M. P.; Surya, N. M.; Murthy, C. R. L.; Engelbart, R. W.
2012-05-01
In this work, an attempt is made to induce porosity of varied levels in carbon fiber reinforced epoxy polymer composite laminates fabricated from prepregs by varying fabrication parameters such as applied vacuum, autoclave pressure and curing temperature. Different NDE tools have been utilized to evaluate the porosity content and correlate it with measurable parameters of different NDE techniques. Primarily, ultrasonic imaging and real-time digital X-ray imaging have been tried to obtain a measurable parameter which can represent or reflect the amount of porosity contained in the composite laminate. The effect of varied porosity content on the mechanical properties of the CFRP composite materials is also investigated through a series of experiments. The outcome of this experimental approach has yielded interesting and encouraging trends as a first step towards developing an NDE tool for quantifying the effect of varied porosity in polymer composite materials.
Mihaylova, Milena; Manahilov, Velitchko
2010-11-24
Research has shown that the processing time for discriminating illusory contours is longer than for real contours. Little is known, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable us to capture images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution for capturing the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates 7.0 Gbytes/s continuously. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
NASA Astrophysics Data System (ADS)
Awumah, A.; Mahanti, P.; Robinson, M. S.
2017-12-01
Image fusion is often used in Earth-based remote sensing applications to merge spatial details from a high-resolution panchromatic (Pan) image with the color information from a lower-resolution multi-spectral (MS) image, resulting in a high-resolution multi-spectral image (HRMS). Previously, the performance of six well-known image fusion methods was compared using Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images (1). Results showed the Intensity-Hue-Saturation (IHS) method provided the best spatial performance, but deteriorated the spectral content. In general, there was a trade-off between spatial enhancement and spectral fidelity in the fusion process; the more spatial detail from the Pan fused with the MS image, the more spectrally distorted the final HRMS. In this work, we control the amount of spatial detail fused (from the LROC NAC images to WAC images) using a controlled IHS method (2), to investigate the spatial variation in spectral distortion on fresh crater ejecta. In the controlled IHS method (2), the percentage of the Pan component merged with the MS is varied. The percentage of spatial detail from the Pan that is used is determined by a control parameter whose value may be varied from 1 (no Pan utilized) to infinity (entire Pan utilized). An HRMS color composite image (red=415nm, green=321/415nm, blue=321/360nm (3)) was used to assess performance (via visual inspection and metric-based evaluations) at each tested value of the control parameter (1 to 10, after which spectral distortion saturates, in 0.01 increments) within three regions: crater interiors, ejecta blankets, and the background material surrounding the craters. Increasing the control parameter introduced increased spatial sharpness and spectral distortion in all regions, but to varying degrees. Crater interiors suffered the most color distortion, while ejecta experienced less color distortion.
The controlled IHS method is therefore desirable for resolution enhancement of fresh crater ejecta; larger values of the control parameter may be used to sharpen MS images of ejecta patterns with less color distortion than in the uncontrolled IHS fusion process. References: (1) Prasun et al. (2016) ISPRS. (2) Choi, Myungjin (2006) IEEE. (3) Denevi et al. (2014) JGR.
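The trade-off governed by the control parameter can be sketched in a few lines. This is a hedged illustration of the controlled-IHS idea (after reference (2)), not the authors' exact pipeline: the mean-based intensity, the function name, and the shortcut of adding the intensity change back to every band are our simplifications.

```python
import numpy as np

# Sketch of controlled IHS pan-sharpening: slide the intensity channel from
# the MS intensity (t = 1, no Pan detail) toward the Pan image (t -> infinity).
def controlled_ihs(ms, pan, t):
    """ms: (H, W, 3) multispectral, upsampled to Pan resolution;
    pan: (H, W) panchromatic; t in [1, inf): the control parameter."""
    intensity = ms.mean(axis=2)            # simple mean-based intensity
    new_i = pan - (pan - intensity) / t    # controlled intensity substitution
    # Classic IHS shortcut: add the intensity change to every band
    return ms + (new_i - intensity)[:, :, None]

# Toy example with random "images"
rng = np.random.default_rng(1)
ms = rng.random((8, 8, 3))
pan = rng.random((8, 8))
assert np.allclose(controlled_ihs(ms, pan, 1.0), ms)   # t = 1: MS unchanged
fused = controlled_ihs(ms, pan, 1e9)                   # large t: intensity ~ Pan
assert np.allclose(fused.mean(axis=2), pan)
```

The two assertions make the trade-off explicit: at t = 1 no Pan detail enters (no spectral distortion), while at large t the fused intensity equals the Pan (maximum sharpness, maximum distortion).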
Kaku, Hiroki; Inoue, Kanako; Muranaka, Yoshinori; Park, Pyoyun; Ikeda, Kenichi
2015-10-01
Uranyl salts are toxic and radioactive; therefore, several studies have been conducted to screen for substitute electron stains. In this regard, the contrast evaluation process is time consuming and the results obtained are inconsistent. In this study, we developed a novel contrast evaluation method using affinity beads and backscattered electron images (BSEI) obtained using scanning electron microscopy. The contrast ratios of BSEI for each electron stain treatment were correlated with those of transmission electron microscopic images. The affinity beads bound to cell components independently. Protein and DNA samples showed enhanced image contrast when treated with electron stains; however, this was not observed for sugars. Protein-conjugated beads showed an additive effect on image contrast when double-stained with lead; however, no such additive effect of double staining was observed in DNA-conjugated beads. Oligopeptides with varying chemical properties showed differences in image contrast when treated with each electron stain. This BSEI-based evaluation method not only enables screening for alternative electron stains, but also helps analyze the underlying mechanisms of electron staining of cellular structures. © The Author 2015. Published by Oxford University Press on behalf of The Japanese Society of Microscopy. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
In-Situ Imaging of Particles during Rapid Thermite Deflagrations
NASA Astrophysics Data System (ADS)
Grapes, Michael; Sullivan, Kyle; Reeves, Robert; Densmore, John; Willey, Trevor; van Buuren, Tony; Fezaa, Kamel
The dynamic behavior of rapidly deflagrating thermites is a highly complex process involving rapid decomposition, melting, and outgassing of intermediate and/or product gases. Few experimental techniques are capable of probing these phenomena in situ due to the small length and time scales associated with the reaction. Here we use a recently developed extended burn tube test, in which we initiate a small pile of thermite at the closed end of a clear acrylic tube. The length of the tube is sufficient to fully contain the reaction as it proceeds and flows entrained particles down the tube. This experiment was brought to the Advanced Photon Source, and the particle formation was X-ray imaged at various positions down the tube. Several formulations, as well as formulation parameters, were varied to investigate the size and morphology of the particles, and to look for dynamic behavior attributed to the reaction. In all cases, we see evidence of particle coalescence and condensed-phase interfacial reactions. The results improve our understanding of the progression from reactants to products in these systems. Funding provided by the LLNL LDRD program (PLS-16FS-028).
Different coding strategies for the perception of stable and changeable facial attributes.
Taubert, Jessica; Alais, David; Burr, David
2016-09-01
Perceptual systems face competing requirements: improving signal-to-noise ratios of noisy images, by integration; and maximising sensitivity to change, by differentiation. Both processes occur in human vision, under different circumstances: they have been termed priming, or serial dependence, leading to positive sequential effects; and adaptation, or habituation, leading to negative sequential effects. We reasoned that for stable attributes, such as the identity and gender of faces, the system should integrate; while for changeable attributes like facial expression, it should also engage contrast mechanisms to maximise sensitivity to change. Subjects viewed a sequence of images varying simultaneously in gender and expression, and scored each as male or female, and happy or sad. We found strong and consistent positive serial dependencies for gender, and negative dependencies for expression, showing that both processes can operate at the same time, on the same stimuli, depending on the attribute being judged. The results point to highly sophisticated mechanisms for optimizing the use of past information, either by integration or differentiation, depending on the permanence of the attribute.
A design of real time image capturing and processing system using Texas Instrument's processor
NASA Astrophysics Data System (ADS)
Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng
2007-09-01
In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, video camcorder or DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and displayed on the PC. In the second mode, the current captured image from the video camcorder (or DVD player) is processed on the board but displayed on an LCD monitor. The major difference between our system and existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further development on the board). The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. In order to have smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smoothing and sharpening filters in the time domain. In the 'Others' category, auto-contrast adjustment, edge detection, segmentation and sepia color are provided; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. The demonstration showed that our system is adequate for real-time image capturing.
Our system can be used or applied for applications such as medical imaging, video surveillance, etc.
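As a toy illustration of the Point Processing group named above, negation and mirroring reduce to one-line array operations on an 8-bit grayscale image (rotation is omitted for brevity). The tiny 2x2 image is invented, and this sketch stands in for the board-side C implementation described in the abstract:

```python
import numpy as np

# A 2x2 8-bit grayscale "image" for demonstration
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)

negated = 255 - img        # negation: invert gray levels
mirrored = img[:, ::-1]    # mirroring: flip left-right

assert negated.tolist() == [[255, 191], [127, 0]]
assert mirrored.tolist() == [[64, 0], [255, 128]]
```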
NASA Astrophysics Data System (ADS)
Emge, Darren K.; Adalı, Tülay
2014-06-01
As the availability and use of imaging methodologies continue to increase, there is a fundamental need to jointly analyze data collected from multiple modalities. This analysis is further complicated when the size or resolution of the images differ, implying that the observation lengths of the modalities can vary widely. To address this expanding landscape, we introduce the multiset singular value decomposition (MSVD), which can perform a joint analysis on any number of modalities regardless of their individual observation lengths. Through simulations, we show the inter-modal relationships across the different modalities that are revealed by the MSVD. We apply the MSVD to forensic fingerprint analysis, showing that MSVD joint analysis successfully identifies relevant similarities for further analysis, significantly reducing the processing time required. This reduction takes the technique from a laboratory method to a useful forensic tool with applications across the law enforcement and security regimes.
Robust Adaptive Thresholder For Document Scanning Applications
NASA Astrophysics Data System (ADS)
Hsing, To R.
1982-12-01
In document scanning applications, thresholding is used to obtain binary data from a scanner. However, due to: (1) a wide range of different color backgrounds; (2) density variations of printed text information; and (3) the shading effect caused by the optical systems, the use of adaptive thresholding to enhance the useful information is highly desired. This paper describes a new robust adaptive thresholder for obtaining valid binary images. It is basically a memory-type algorithm which can dynamically update the black and white reference levels to optimize a local adaptive threshold function. High image quality can be obtained by this algorithm for different types of simulated test patterns. The software algorithm is described and experimental results are presented to illustrate the procedure. Results also show that the techniques described here can be used for real-time signal processing in varied applications.
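The memory-type idea, tracking black and white reference levels and thresholding between them, can be sketched as follows. The specific update rule and the smoothing factor alpha are our illustrative assumptions, not the algorithm from the paper:

```python
# Sketch of a memory-type adaptive thresholder: binarize a 1-D scan line while
# dynamically updating running black/white reference levels.
def adaptive_threshold(samples, alpha=0.05):
    black, white = min(samples), max(samples)  # initial reference levels
    out = []
    for s in samples:
        thresh = (black + white) / 2.0         # threshold between references
        bit = 1 if s > thresh else 0
        out.append(bit)
        # Update the reference level the sample was assigned to, so the
        # threshold tracks slow shading/background drift across the page.
        if bit:
            white = (1 - alpha) * white + alpha * s
        else:
            black = (1 - alpha) * black + alpha * s
    return out

# Dark text (~20-30) on a background that shades from 200 down to 120:
line = [200, 195, 20, 190, 180, 25, 160, 150, 30, 130, 120, 28]
print(adaptive_threshold(line))  # -> [1, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 0]
```

Despite the shading drift, every text sample maps to 0 and every background sample to 1, which is the behavior the abstract's dynamic reference-level update is designed to achieve.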
Generating survival times to simulate Cox proportional hazards models with time-varying covariates.
Austin, Peter C
2012-12-20
Simulations and Monte Carlo methods serve an important role in modern statistical research. They allow for an examination of the performance of statistical procedures in settings in which analytic and mathematical derivations may not be feasible. A key element in any statistical simulation is the existence of an appropriate data-generating process: one must be able to simulate data from a specified statistical model. We describe data-generating processes for the Cox proportional hazards model with time-varying covariates when event times follow an exponential, Weibull, or Gompertz distribution. We consider three types of time-varying covariates: first, a dichotomous time-varying covariate that can change at most once from untreated to treated (e.g., organ transplant); second, a continuous time-varying covariate such as cumulative exposure at a constant dose to radiation or to a pharmaceutical agent used for a chronic condition; third, a dichotomous time-varying covariate with a subject being able to move repeatedly between treatment states (e.g., current compliance or use of a medication). In each setting, we derive closed-form expressions that allow one to simulate survival times so that survival times are related to a vector of fixed or time-invariant covariates and to a single time-varying covariate. We illustrate the utility of our closed-form expressions for simulating event times by using Monte Carlo simulations to estimate the statistical power to detect as statistically significant the effect of different types of binary time-varying covariates. This is compared with the statistical power to detect as statistically significant a binary time-invariant covariate. Copyright © 2012 John Wiley & Sons, Ltd.
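For the first setting above (exponential event times, one binary covariate that switches from untreated to treated at a known time), the closed-form inversion is simple enough to sketch. This is a hedged illustration based on the standard inversion argument, not the paper's notation: the names lam, beta and t0 are ours, and the Weibull and Gompertz cases require the paper's fuller expressions.

```python
import math
import random

# Hazard: h(t) = lam for t < t0 (untreated), lam*exp(beta) afterwards.
# Invert the cumulative hazard against a unit-exponential deviate.
def sim_event_time(lam, beta, t0, rng):
    e = -math.log(rng.random())       # unit-exponential deviate
    if e < lam * t0:                  # event occurs before the switch
        return e / lam
    # After t0, remaining cumulative hazard accrues at rate lam*exp(beta)
    return t0 + (e - lam * t0) / (lam * math.exp(beta))

rng = random.Random(42)
times = [sim_event_time(lam=0.5, beta=math.log(2), t0=1.0, rng=rng)
         for _ in range(100_000)]
# Sanity check: P(T > t0) should equal exp(-lam*t0) = exp(-0.5) ~ 0.607
frac = sum(t > 1.0 for t in times) / len(times)
print(round(frac, 3))
```

Because the pre-switch hazard is untouched by beta, the survival fraction at t0 depends only on lam, which gives a quick check that the inversion is wired correctly.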
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel counts. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by measurement of subjective quality, or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
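Photospace-weighting as described above amounts to averaging a quality surface against the usage frequency distribution. A minimal sketch, with entirely invented numbers for three (illumination, distance) bins:

```python
# Photospace-weighted quality: sum over conditions of
# (fraction of shots taken in that condition) * (quality in that condition).
# All numbers below are made-up illustrations, not measured data.
photospace = {  # (illumination [lux], subject distance [m]) -> fraction of shots
    (50, 1.0): 0.40,    # dim indoor snapshots dominate camera-phone use
    (500, 2.0): 0.35,
    (5000, 5.0): 0.25,  # bright outdoor scenes
}
quality = {     # quality score per condition (0..100)
    (50, 1.0): 35.0,    # low light: underexposed, noisy, blurred
    (500, 2.0): 60.0,
    (5000, 5.0): 85.0,
}

weighted_q = sum(f * quality[cond] for cond, f in photospace.items())
print(weighted_q)  # -> 56.25
```

A camera that excels only in bright light scores poorly here because the dim-indoor bin carries the most weight, which is exactly the point the abstract makes against pixel count alone.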
A real-time MTFC algorithm of space remote-sensing camera based on FPGA
NASA Astrophysics Data System (ADS)
Zhao, Liting; Huang, Gang; Lin, Zhe
2018-01-01
A real-time MTFC algorithm for a space remote-sensing camera based on an FPGA was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopted a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference operation, the normalized MTF and the MTFC parameters. The MTFC filtering module performs image filtering while effectively suppressing noise. The algorithm used System Generator to design the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, dot sharpness, edge contrast and mid-to-high frequencies were enhanced. The SNR of the restored image decreased by less than 1 dB compared to the original image. The image restoration system can be widely used in various fields.
Engraving Print Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoelck, Daniel; Barbe, Joaquim
2008-04-15
A print is a mark or drawing made in or upon a plate, stone, woodblock or other material, which is covered with ink and then pressed, usually onto paper, reproducing the image on the paper. Engraving prints are usually images composed of groups of binary lines, especially those made with relief and intaglio techniques. By varying the number and orientation of the lines, the drawing of the engraving print is formed. For this reason we propose an application based on image processing methods to classify engraving prints.
Mamede, Joao I.; Hope, Thomas J.
2016-01-01
Live cell imaging is a valuable technique that allows the characterization of the dynamic processes of the HIV-1 life cycle. Here, we present a method for the production and imaging of dual-labeled HIV viral particles that allows the visualization of two events with live wide-field fluorescent microscopy: release of the intravirion fluid-phase marker, which reveals virion fusion, and the loss of integrity of HIV viral cores. PMID:26714704
NASA Technical Reports Server (NTRS)
Conrad, A. R.; Lupton, W. F.
1992-01-01
Each Keck instrument presents a consistent software view to the user interface programmer. The view consists of a small library of functions, which are identical for all instruments, and a large set of keywords that vary from instrument to instrument. All knowledge of the underlying task structure is hidden from the application programmer by the keyword layer. Image capture software uses the same function library to collect data for the image header. Because the image capture software and the instrument control software are built on top of the same keyword layer, a given observation can be 'replayed' by extracting keyword-value pairs from the image header and passing them back to the control system. The keyword layer features non-blocking as well as blocking I/O. A non-blocking keyword write operation (such as setting a filter position) specifies a callback to be invoked when the operation is complete. A non-blocking keyword read operation specifies a callback to be invoked whenever the keyword changes state. The keyword-callback style meshes well with the widget-callback style commonly used in X window programs. The first keyword library was built for the two Keck optical instruments. More recently, keyword libraries have been developed for the infrared instruments and for telescope control. Although the underlying mechanisms used for inter-process communication by these systems vary widely (Lick MUSIC, Sun RPC, and direct socket I/O, respectively), a basic user interface has been written that can be used with any of them. Since the keyword libraries are bound to user interface programs dynamically at run time, only a single set of user interface executables is needed. For example, the same program, 'xshow', can be used to continuously display the telescope's position, the time left in an instrument's exposure, or both values simultaneously.
Less generic tools that operate on specific keywords, for example an X display that controls optical instrument exposures, have also been written using the keyword layer.
PSO-based methods for medical image registration and change assessment of pigmented skin
NASA Astrophysics Data System (ADS)
Kacenjar, Steve; Zook, Matthew; Balint, Michael
2011-03-01
There are various scientific and technological areas in which it is imperative to rapidly detect and quantify changes in imagery over time. In fields such as earth remote sensing, aerospace systems, and medical imaging, searching for time-dependent, regional changes across deformable topographies is complicated by varying camera acquisition geometries, lighting environments, background clutter conditions, and occlusion. Under these constantly fluctuating conditions, standard rigid-body registration approaches often fail to provide sufficient fidelity to overlay image scenes together. This is problematic because incorrect assessments of the underlying changes of high-level topography can result in systematic errors in the quantification and classification of areas of interest. For example, in current naked-eye detection strategies for melanoma, a dermatologist often uses static morphological attributes to identify suspicious skin lesions for biopsy. This approach does not incorporate the temporal changes which suggest malignant degeneration. By co-registering time-separated skin imagery, a dermatologist may more effectively detect and identify early morphological changes in pigmented lesions, enabling the physician to detect cancers at an earlier stage, resulting in decreased morbidity and mortality. This paper describes an image processing system which will be used to detect changes in the characteristics of skin lesions over time. The proposed system consists of three main functional elements: 1.) coarse alignment of time-sequenced imagery, 2.) refined alignment of local skin topographies, and 3.) assessment of local changes in lesion size. During the coarse alignment process, various approaches can be used to obtain a rough alignment, including: 1.) a manual landmark/intensity-based registration method [1], and 2.) several flavors of autonomous optical matched filter methods [2].
These procedures result in the rough alignment of a patient's back topography. Since the skin is a deformable membrane, this process only provides an initial condition for subsequent refinements in aligning the localized topography of the skin. To achieve this refinement, a Particle Swarm Optimizer (PSO) is used to optimally determine the local camera models associated with a generalized geometric transform. Here the optimization process is driven by minimizing the entropy between the multiple time-separated images. Once the camera models are corrected for local skin deformations, the images are compared using both pixel-based and region-based methods. Limits on the detectability of change are established by the fidelity with which the algorithm corrects for local skin deformation and background alterations. These limits provide essential information for establishing early-warning thresholds for melanoma detection. Key to this work is the development of a PSO alignment algorithm to perform the refined alignment of local skin topography between the time-sequenced imagery (TSI). Testing and validation of this alignment process is achieved using a forward model that introduces known geometric artifacts into the images, after which the PSO algorithm is used to demonstrate the ability to identify and correct for these artifacts. Specifically, the forward model introduces local translational, rotational, and magnification changes within the image. These geometric modifiers are expected during TSI acquisition because of the logistical difficulty of precisely aligning the patient to the image recording geometry, so correcting for them is of paramount importance to any viable image registration system. This paper shows that the PSO alignment algorithm is effective in autonomously determining and mitigating these geometric modifiers.
The degree of efficacy is measured by several statistically and morphologically based pre-image filtering operations applied to the TSI imagery before applying the PSO alignment algorithm. These trade studies show that global image threshold binarization provides rapid and superior convergence characteristics relative to that of morphologically based methods.
NASA Astrophysics Data System (ADS)
Jackson, Christopher Robert
"Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions from each image in a series of short-exposure frames and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle the real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm that achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other was developed using the graphics processing unit (GPU) of an NVIDIA GeForce GTX 690 video card. The novelty of the FPGA approach is the creation of a "black box" LRF video processing system with a general camera link input, a user controller interface, and a camera link video output. We also describe a custom hardware simulation environment we built to test the FPGA LRF implementation. The advantages of the GPU approach are significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.
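The core selection step of LRF can be sketched in a few lines: score each tile of each short-exposure frame with a local sharpness metric and keep the sharpest. The tile size and the gradient-energy metric here are our assumptions, and the real algorithm fuses overlapping regions more smoothly than this hard per-tile selection:

```python
import numpy as np

# Gradient energy as a simple local sharpness score (an assumption; other
# metrics such as Laplacian variance are equally common).
def sharpness(tile):
    gy, gx = np.gradient(tile.astype(float))
    return float(np.mean(gx * gx + gy * gy))

# Hard lucky-region selection: per tile, keep the sharpest frame's pixels.
def lucky_fuse(frames, tile=8):
    h, w = frames[0].shape
    out = np.zeros((h, w))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            tiles = [f[y:y+tile, x:x+tile] for f in frames]
            out[y:y+tile, x:x+tile] = max(tiles, key=sharpness)
    return out

# Toy test: a sharp checkerboard versus a flat (fully blurred) version of it
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 100.0
blurred = np.full((16, 16), sharp.mean())
fused = lucky_fuse([blurred, sharp])
assert np.array_equal(fused, sharp)   # every tile comes from the sharp frame
```

Both hardware implementations described above parallelize exactly this kind of per-region work: the sharpness score of each tile is independent of every other tile, which maps naturally onto FPGA pipelines and GPU thread blocks.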
Method and apparatus for optical encoding with compressible imaging
NASA Technical Reports Server (NTRS)
Leviton, Douglas B. (Inventor)
2006-01-01
The present invention provides an optical encoder with increased conversion rates. Improvement in the conversion rate is a result of combining changes in the pattern recognition encoder's scale pattern with an image sensor readout technique which takes full advantage of those changes, and lends itself to operation by modern, high-speed, ultra-compact microprocessors and digital signal processors (DSPs) or field programmable gate array (FPGA) logic elements which can process encoder scale images at the highest speeds. Through these improvements, all three components of conversion time (reciprocal conversion rate)--namely exposure time, image readout time, and image processing time--are minimized.
High Resolution Near Real Time Image Processing and Support for MSSS Modernization
NASA Astrophysics Data System (ADS)
Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.
2012-09-01
This paper describes image enhancement software applications engineering development work that has been performed in support of Maui Space Surveillance System (MSSS) Modernization. It also covers R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities, including Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award -- and our selection and planned use for an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data, along with a summary status overview of high-resolution image enhancement and near-real-time processing. We then describe recent image enhancement applications development and support for MSSS Modernization and results to date, and we end with a discussion of desired future development work and conclusions. Significant improvements to image processing enhancement have been realized over the past several years, including a key application that has realized more than a 10,000-times speedup compared to the original R&D code -- and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low-latency, near-real-time processing of data collected by the ground-based sensor during overhead passes of space objects.
Low bandwidth eye tracker for scanning laser ophthalmoscopy
NASA Astrophysics Data System (ADS)
Harvey, Zachary G.; Dubra, Alfredo; Cahill, Nathan D.; Lopez Alarcon, Sonia
2012-02-01
The incorporation of adaptive optics into scanning ophthalmoscopes (AOSOs) has allowed for in vivo, noninvasive imaging of the human rod and cone photoreceptor mosaics. Light safety restrictions and power limitations of the current low-coherence light sources available for imaging result in each individual raw image having a low signal-to-noise ratio (SNR). To date, the only approach used to increase the SNR has been to collect a large number of raw images (N > 50), to register them to remove the distortions due to involuntary eye motion, and then to average them. The large amplitude of involuntary eye motion with respect to the AOSO field of view (FOV) dictates that an even larger number of images must be collected at each retinal location to ensure adequate SNR over the feature of interest. Compensating for eye motion during image acquisition to keep the feature of interest within the FOV could reduce the number of raw frames required per retinal feature, thereby significantly reducing the imaging time, storage requirements, post-processing times and, more importantly, the subject's exposure to light. In this paper, we present a particular implementation of an AOSO, termed the adaptive optics scanning light ophthalmoscope (AOSLO), equipped with a simple eye tracking system capable of compensating for eye drift by estimating the eye motion from the raw frames and by using a tip-tilt mirror to compensate for it in a closed loop. Multiple control strategies were evaluated to minimize the image distortion introduced by the tracker itself. In addition, linear, quadratic and Kalman filter motion prediction algorithms were implemented and tested using both simulated motion (sinusoidal motion with varying frequencies) and human subjects. The residual displacement of the retinal features was used to compare the performance of the different correction strategies and prediction methods.
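The benefit of motion prediction over simply reusing the last motion estimate can be shown with a toy trace. The sketch below applies a linear (two-point extrapolation) predictor to a simulated sinusoidal drift; the amplitude, period, and frame rate are arbitrary assumptions, and the quadratic and Kalman variants evaluated in the paper are omitted.

```python
import numpy as np

t = np.arange(200)
drift = 50.0 * np.sin(2 * np.pi * t / 80.0)   # simulated eye drift (arbitrary units)

def linear_predict(x):
    # Predict the next position by extrapolating the last two estimates.
    pred = x.copy()
    pred[2:] = 2 * x[1:-1] - x[:-2]
    return pred

pred = linear_predict(drift)
hold = np.roll(drift, 1)            # zero-order hold: reuse the previous estimate
resid_pred = np.abs(drift - pred)[2:]
resid_hold = np.abs(drift - hold)[2:]
print(resid_pred.max(), resid_hold.max())   # prediction leaves a much smaller residual
```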
Synthetic Foveal Imaging Technology
NASA Technical Reports Server (NTRS)
Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh
2009-01-01
Synthetic Foveal Imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. In order to identify and track the objects or events without the means of dynamic adaptation to be afforded by SyFT, it would be necessary to post-process data from an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and, as a consequence, could result in missing significant events that could not be observed at all due to the time evolution of such events or could not be observed at required levels of fidelity without such real-time adaptations as adjusting focal-plane operating conditions or aiming of the focal plane in different directions to track such events. The basic concept of foveal imaging is straightforward: In imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. The aforementioned basic concept is not new in itself: indeed, image sensors based on these concepts have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example.
What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
An automated distinction of DICOM images for lung cancer CAD system
NASA Astrophysics Data System (ADS)
Suzuki, H.; Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nishitani, H.; Ohmatsu, H.; Eguchi, K.; Kaneko, M.; Moriyama, N.
2009-02-01
Automated distinction of medical images is an important preprocessing step in Computer-Aided Diagnosis (CAD) systems. CAD systems have been developed using medical image sets with specific scan conditions and body parts. However, varied examinations are performed at medical sites. The specification of each examination is contained in the DICOM textual meta-information. Most DICOM textual meta-information can be considered reliable; however, the body part information cannot always be trusted. In this paper, we describe an automated distinction of DICOM images as a preprocessing step for a lung cancer CAD system. Our approach uses DICOM textual meta-information and low-cost image processing. First, the textual meta-information, such as the scan conditions of the DICOM image, is examined. Second, the body part shown in the DICOM image is identified by image processing. The identification of body parts is based on anatomical structure, which is represented by features of three regions: body tissue, bone, and air. The method is effective for the practical use of a lung cancer CAD system at medical sites.
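The body-part identification step can be caricatured with synthetic data. In the sketch below a CT slice is summarized by its air/tissue/bone voxel fractions and a toy rule flags lung-bearing slices; the HU thresholds, the 0.3 air-fraction cutoff, and the two-class rule are illustrative assumptions (a real system would first parse the DICOM meta-information, e.g. with pydicom, and use a richer anatomical model).

```python
import numpy as np

def tissue_fractions(hu):
    """Fractions of air, soft tissue and bone voxels in a CT slice
    (HU thresholds are illustrative, not from the paper)."""
    air = float(np.mean(hu < -400))
    bone = float(np.mean(hu > 300))
    tissue = 1.0 - air - bone
    return air, tissue, bone

def classify_slice(hu):
    # Toy rule: lung-bearing slices contain a large internal air fraction.
    air, tissue, bone = tissue_fractions(hu)
    return "chest" if air > 0.3 else "abdomen"

# Synthetic slices (not real DICOM data): lungs modeled as large air pockets.
chest = np.full((64, 64), 40.0)     # soft-tissue baseline, ~40 HU
chest[16:48, 8:28] = -800           # left lung
chest[16:48, 36:56] = -800          # right lung
abdomen = np.full((64, 64), 40.0)
abdomen[30:34, 30:34] = -600        # a little bowel gas only

print(classify_slice(chest), classify_slice(abdomen))   # chest abdomen
```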
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Novo, E. M. L. M.
1983-01-01
The effects of seasonal variation of illumination on digital processing of LANDSAT images are evaluated. Two sets of LANDSAT data covering orbit 150, row 28 were selected, with illumination parameters varying from 43 deg to 64 deg in azimuth and from 30 deg to 36 deg in solar elevation. The IMAGE-100 system was used for digital processing of the LANDSAT data. The original images were transformed by digital filtering to enhance their spatial features. The resulting images were used to obtain an unsupervised classification of relief units. Topographic variables (declivity, altitude, relief range and slope length) were used to identify the true relief units existing on the ground. The LANDSAT overpass data show that digital processing is strongly affected by illumination geometry, and there is no correspondence between relief units as defined by spectral features and those resulting from topographic features.
Super-Resolution Imaging of Molecular Emission Spectra and Single Molecule Spectral Fluctuations
Mlodzianoski, Michael J.; Curthoys, Nikki M.; Gunewardene, Mudalige S.; Carter, Sean; Hess, Samuel T.
2016-01-01
Localization microscopy can image nanoscale cellular details. To address biological questions, the ability to distinguish multiple molecular species simultaneously is invaluable. Here, we present a new version of fluorescence photoactivation localization microscopy (FPALM) which detects the emission spectrum of each localized molecule, and can quantify changes in emission spectrum of individual molecules over time. This information can allow for a dramatic increase in the number of different species simultaneously imaged in a sample, and can create super-resolution maps showing how single molecule emission spectra vary with position and time in a sample. PMID:27002724
Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.
2014-01-01
Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
NASA Astrophysics Data System (ADS)
Coffey, Stephen; Connell, Joseph
2005-06-01
This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.
NASA Astrophysics Data System (ADS)
Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan
2017-08-01
The paper deals with the insufficient performance of existing computing resources for large-image processing, which does not meet the modern requirements posed by resource-intensive computing tasks in laser beam profiling. The research concentrated on one of the profiling problems, namely real-time processing of spot images of the laser beam profile. Development of a theory of parallel-hierarchical transformation made it possible to produce models of high-performance parallel-hierarchical processes, as well as algorithms and software for their implementation, based on a GPU-oriented architecture using GPGPU technologies. The analyzed performance of the suggested computerized tools for processing and classifying laser beam profile images shows that they allow real-time processing of dynamic images of various sizes.
Fast processing of microscopic images using object-based extended depth of field.
Intarapanich, Apichart; Kaewkamnerd, Saowaluck; Pannarut, Montri; Shaw, Philip J; Tongsima, Sissades
2016-12-22
Microscopic analysis requires that foreground objects of interest, e.g. cells, are in focus. In a typical microscopic specimen, the foreground objects may lie on different depths of field necessitating capture of multiple images taken at different focal planes. The extended depth of field (EDoF) technique is a computational method for merging images from different depths of field into a composite image with all foreground objects in focus. Composite images generated by EDoF can be applied in automated image processing and pattern recognition systems. However, current algorithms for EDoF are computationally intensive and impractical, especially for applications such as medical diagnosis where rapid sample turnaround is important. Since foreground objects typically constitute a minor part of an image, the EDoF technique could be made to work much faster if only foreground regions are processed to make the composite image. We propose a novel algorithm called object-based extended depth of field (OEDoF) to address this issue. The OEDoF algorithm consists of four major modules: 1) color conversion, 2) object region identification, 3) good contrast pixel identification and 4) detail merging. First, the algorithm employs color conversion to enhance contrast followed by identification of foreground pixels. A composite image is constructed using only these foreground pixels, which dramatically reduces the computational time. We used 250 images obtained from 45 specimens of confirmed malaria infections to test our proposed algorithm. The resulting composite images with all in-focus objects were produced using the proposed OEDoF algorithm. We measured the performance of OEDoF in terms of image clarity (quality) and processing time. The features of interest selected by the OEDoF algorithm are comparable in quality with equivalent regions in images processed by the state-of-the-art complex wavelet EDoF algorithm; however, OEDoF required four times less processing time.
This work presents a modification of the extended depth of field approach for efficiently enhancing microscopic images. This selective object processing scheme used in OEDoF can significantly reduce the overall processing time while maintaining the clarity of important image features. The empirical results from parasite-infected red cell images revealed that our proposed method efficiently and effectively produced in-focus composite images. With the speed improvement of OEDoF, this proposed algorithm is suitable for processing large numbers of microscope images, e.g., as required for medical diagnosis.
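The core OEDoF idea, restricting the focus-fusion step to foreground pixels, can be sketched as follows. The gradient-magnitude focus measure, the externally supplied foreground mask, and the use of frame 0 for the background are simplifying assumptions; the published algorithm identifies the foreground itself via color conversion and contrast analysis.

```python
import numpy as np

def focus_measure(img):
    # Per-pixel focus proxy: forward-difference gradient magnitude (wrap edges).
    return (np.abs(np.roll(img, -1, 0) - img)
            + np.abs(np.roll(img, -1, 1) - img))

def oedof(stack, fg_mask):
    """Object-based EDoF: fuse best-focus pixels only inside the foreground mask."""
    focus = np.stack([focus_measure(f) for f in stack])
    best = np.argmax(focus, axis=0)
    rows, cols = np.indices(best.shape)
    fused = stack[best, rows, cols]
    composite = stack[0].copy()          # background copied straight from frame 0
    composite[fg_mask] = fused[fg_mask]  # only foreground pixels are fused
    return composite

# Demo: an object (checkerboard patch) sharp in frame 1 but blurred in frame 0.
obj = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
frame1 = np.zeros((16, 16))
frame1[4:12, 4:12] = obj
frame0 = sum(np.roll(frame1, (dy, dx), (0, 1))
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0   # blurred copy
mask = np.zeros((16, 16), bool)
mask[4:12, 4:12] = True
out = oedof(np.stack([frame0, frame1]), mask)   # foreground taken from frame 1
```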
Zierler, R E; Phillips, D J; Beach, K W; Primozich, J F; Strandness, D E
1987-08-01
The combination of a B-mode imaging system and a single range-gate pulsed Doppler flow velocity detector (duplex scanner) has become the standard noninvasive method for assessing the extracranial carotid artery. However, a significant limitation of this approach is the small area of vessel lumen that can be evaluated at any one time. This report describes a new duplex instrument that displays blood flow as colors superimposed on a real-time B-mode image. Returning echoes from a linear array of transducers are continuously processed for amplitude and phase. Changes in phase are produced by tissue motion and are used to calculate Doppler shift frequency. This results in a color assignment: red and blue indicate direction of flow with respect to the ultrasound beam, and lighter shades represent higher velocities. The carotid bifurcations of 10 normal subjects were studied. Changes in flow velocities across the arterial lumen were clearly visualized as varying shades of red or blue during the cardiac cycle. A region of flow separation was observed in all proximal internal carotids as a blue area located along the outer wall of the bulb. Thus, it is possible to detect the localized flow patterns that characterize normal carotid arteries. Other advantages of color-flow imaging include the ability to rapidly identify the carotid bifurcation branches and any associated anatomic variations.
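The phase-to-velocity step described above is commonly implemented with a lag-one autocorrelation (Kasai) estimator; the sketch below is a minimal single-sample-volume version, with illustrative transmit frequency, pulse repetition frequency, and flow velocity (the instrument's actual processing chain is not detailed in the abstract). The sign of the estimate then drives the red/blue color assignment, and its magnitude the shade.

```python
import numpy as np

fs_prf = 5000.0    # pulse repetition frequency, Hz (illustrative)
f0 = 5e6           # transmit frequency, Hz (illustrative)
c = 1540.0         # speed of sound in tissue, m/s

def doppler_velocity(iq):
    """Kasai autocorrelation velocity estimate from an ensemble of I/Q echoes."""
    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))      # lag-1 autocorrelation
    f_d = np.angle(r1) * fs_prf / (2 * np.pi)    # mean Doppler shift, Hz
    return f_d * c / (2 * f0)                    # axial velocity, m/s

# Simulate echoes from blood moving at 0.3 m/s toward the transducer.
v_true = 0.3
f_d_true = 2 * v_true * f0 / c                   # Doppler shift, ~1948 Hz
n = np.arange(16)
iq = np.exp(2j * np.pi * f_d_true * n / fs_prf)  # phase advances pulse to pulse
v_est = doppler_velocity(iq)
print(round(v_est, 3))                           # 0.3
```

A positive estimate (flow toward the transducer) would map to red and a negative one to blue, with lighter shades for higher velocities, matching the color assignment described above.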
Total focusing method with correlation processing of antenna array signals
NASA Astrophysics Data System (ADS)
Kozhemyak, O. A.; Bortalevich, S. I.; Loginov, E. L.; Shinyakov, Y. A.; Sukhorukov, M. P.
2018-03-01
The article proposes a method of preliminary correlation processing of the complete set of antenna array signals used in the image reconstruction algorithm. The results of experimental studies of 3D reconstruction of various reflectors, with and without correlation processing, are presented. The software ‘IDealSystem3D’ by IDeal-Technologies was used for the experiments. Copper wires of different diameters located in a water bath were used as reflectors. Correlation processing makes it possible to obtain a more accurate reconstruction of the image of the reflectors and to increase the signal-to-noise ratio. The experimental results were processed using an original program that allows varying the parameters of the antenna array and the sampling frequency.
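The total focusing method itself, delay-and-sum over every transmit/receive pair of the full matrix capture, can be sketched as below; the array geometry, pulse shape, and water sound speed are illustrative assumptions, and the article's correlation preprocessing step is omitted.

```python
import numpy as np

c = 1480.0                              # sound speed in water, m/s
fs = 50e6                               # sampling rate, Hz
elems = np.linspace(-0.01, 0.01, 8)     # 8-element array x positions, m
target = (0.002, 0.015)                 # point reflector (x, z), m

# Full matrix capture: one Gaussian echo per tx/rx pair at the true delay.
t = np.arange(2048) / fs
fmc = np.zeros((8, 8, t.size))
for i, xt in enumerate(elems):
    for j, xr in enumerate(elems):
        d = (np.hypot(xt - target[0], target[1])
             + np.hypot(xr - target[0], target[1])) / c
        fmc[i, j] = np.exp(-(((t - d) * fs) / 20.0) ** 2)

def tfm(fmc, xs, zs):
    """Delay-and-sum every tx/rx pair for each image pixel."""
    img = np.zeros((zs.size, xs.size))
    for i, xt in enumerate(elems):
        for j, xr in enumerate(elems):
            for zi, z in enumerate(zs):
                d = (np.hypot(xt - xs, z) + np.hypot(xr - xs, z)) / c
                idx = np.clip((d * fs).astype(int), 0, t.size - 1)
                img[zi] += fmc[i, j][idx]
    return img

xs = np.linspace(-0.005, 0.005, 21)
zs = np.linspace(0.010, 0.020, 21)
img = tfm(fmc, xs, zs)
zi, xi = np.unravel_index(np.argmax(img), img.shape)
print(xs[xi], zs[zi])   # peak should land at the reflector position
```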
Processing, Cataloguing and Distribution of Uas Images in Near Real Time
NASA Astrophysics Data System (ADS)
Runkel, I.
2013-08-01
Why is there such hype around UAS? UAS make data capture flexible, fast and easy. For many applications this is more important than a perfect photogrammetric aerial image block. To ensure that the advantage of fast data capture remains valid up to the end of the processing chain, all intermediate steps, such as data processing and data dissemination to the customer, need to be flexible and fast as well. GEOSYSTEMS has established the whole processing workflow as a server/client solution. This is the focus of the presentation. Depending on the image acquisition system, the image data can be downlinked during the flight to the data processing computer, or it is stored on a mobile device and hooked up to the data processing computer after the flight campaign. The image project manager reads the data from the device and georeferences the images according to the position data. The meta data is converted into an ISO-conformant format, and subsequently all georeferenced images are catalogued in the raster data management system ERDAS APOLLO. APOLLO provides the data, i.e. the images, as OGC-conformant services to the customer. Within seconds the UAV images are ready to use for GIS applications, image processing or direct interpretation via web applications - wherever you want. The whole processing chain is built in a generic manner. It can be adapted to a multitude of applications. The UAV imagery can be processed and catalogued as single ortho images or as an image mosaic. Furthermore, image data from various cameras can be fused. By using WPS (web processing services), image enhancement and image analysis workflows such as change detection layers can be calculated and provided to the image analysts. The WPS processing runs directly on the raster data management server. The image analyst has no data and no software on his local computer. This workflow has proven to be fast, stable and accurate.
It is designed to support time-critical applications for security demands - the images can be checked and interpreted in near real time. For sensitive areas, it offers the possibility to inform remote decision makers or interpretation experts in order to provide them with situational awareness, wherever they are. For monitoring and inspection tasks, it speeds up the process of data capture and data interpretation. The fully automated workflow of data pre-processing, data georeferencing, data cataloguing and data dissemination in near real time was developed based on the Intergraph products ERDAS IMAGINE, ERDAS APOLLO and GEOSYSTEMS METAmorph!IT. It is offered as an adaptable solution by GEOSYSTEMS GmbH.
NASA Astrophysics Data System (ADS)
Chang, Ni-Bin; Xuan, Zhemin; Wimberly, Brent
2011-09-01
Soil moisture and evapotranspiration (ET) are affected by both the water and energy balances in the soil-vegetation-atmosphere system, involving many complex processes in the nexus of the water and thermal cycles at the surface of the Earth. These impacts may affect the recharge of the upper Floridan aquifer. The advent of urban hydrology and remote sensing technologies opens new and innovative means to undertake event-based assessment of ecohydrological effects in urban regions. For assessing these landfalls, multispectral Moderate Resolution Imaging Spectroradiometer (MODIS) remote sensing images can be used to estimate soil moisture change in connection with two other MODIS products, the Enhanced Vegetation Index (EVI) and Land Surface Temperature (LST). Supervised classification for soil moisture retrieval was performed for the Tampa Bay area on a 2 km × 2 km grid with MODIS images. A machine learning model based on genetic programming for soil moisture estimation shows advances in image processing, feature extraction, and change detection of soil moisture. ET data derived from Geostationary Operational Environmental Satellite (GOES) data and hydrologic models can be retrieved directly from the USGS web site. Overall, comparing the derived soil moisture with ET time series on a seasonal basis shows that the spatial and temporal variations of soil moisture and ET are confined within a defined region for each type of surface, showing clustered patterns in the feature-space scatter plots associated with the land use and cover map. These concomitant soil moisture patterns and ET fluctuations vary among patches, plant species, and, especially, location on the urban gradient. Time series plots of LST in association with ET, soil moisture and EVI reveal unique ecohydrological trends. Such ecohydrological assessment can be applied to support urban landscape management in hurricane-stricken regions.
Radionuclide-fluorescence Reporter Gene Imaging to Track Tumor Progression in Rodent Tumor Models
Volpe, Alessia; Man, Francis; Lim, Lindsay; Khoshnevisan, Alex; Blower, Julia; Blower, Philip J.; Fruhwirth, Gilbert O.
2018-01-01
Metastasis is responsible for most cancer deaths. Despite extensive research, the mechanistic understanding of the complex processes governing metastasis remains incomplete. In vivo models are paramount for metastasis research, but require refinement. Tracking spontaneous metastasis by non-invasive in vivo imaging is now possible, but remains challenging as it requires long-term observation and high sensitivity. We describe a longitudinal combined radionuclide and fluorescence whole-body in vivo imaging approach for tracking tumor progression and spontaneous metastasis. This reporter gene methodology employs the sodium iodide symporter (NIS) fused to a fluorescent protein (FP). Cancer cells are engineered to stably express NIS-FP, followed by selection based on fluorescence-activated cell sorting. Corresponding tumor models are established in mice. NIS-FP-expressing cancer cells are tracked non-invasively in vivo at the whole-body level by positron emission tomography (PET) using the NIS radiotracer [18F]BF4-. PET is currently the most sensitive in vivo imaging technology available at this scale and enables reliable and absolute quantification. Current methods either rely on large cohorts of animals that are euthanized for metastasis assessment at varying time points, or rely on barely quantifiable 2D imaging. The advantages of the described method are: (i) highly sensitive non-invasive in vivo 3D PET imaging and quantification, (ii) automated PET tracer production, (iii) a significant reduction in required animal numbers due to repeat imaging options, (iv) the acquisition of paired data from subsequent imaging sessions providing better statistical data, and (v) the intrinsic option for ex vivo confirmation of cancer cells in tissues by fluorescence microscopy or cytometry. In this protocol, we describe all steps required for routine NIS-FP-afforded non-invasive in vivo cancer cell tracking using PET/CT and ex vivo confirmation of in vivo results.
This protocol has applications beyond cancer research whenever the in vivo localization, expansion and long-term monitoring of a cell population are of interest. PMID:29608157
Dynamics of myosin II organization into contractile networks and fibers at the medial cell cortex
NASA Astrophysics Data System (ADS)
Nie, Wei
The cellular morphology of adherent cells depends crucially on the formation of a contractile meshwork of parallel and cross-linked stress fibers along the contacting surface. The motor activity and mini-filament assembly of non-muscle myosin II are important components of cell-level cytoskeletal remodeling during mechanosensing. To monitor the dynamics of non-muscle myosin II, we used confocal microscopy to image cultured HeLa cells that stably express myosin regulatory light chain tagged with GFP (MRLC-GFP). MRLC-GFP was monitored in time-lapse movies at steady state and during the response of cells to varying concentrations of blebbistatin (which disrupts actomyosin stress fibers). Using image correlation spectroscopy analysis, we quantified the kinetics of disassembly and reassembly of actomyosin networks and compared them to studies by other groups. This analysis suggested the following processes: myosin minifilament assembly and disassembly; alignment and contraction; and myosin filament stabilization upon increasing contractile tension. Numerical simulations that include these processes capture some of the main features observed in the experiments. This study provides a framework to help interpret how different cortical myosin remodeling kinetics may contribute to different cell shape and rigidity depending on substrate stiffness. We discuss methods to monitor myosin reorganization using non-linear imaging methods.
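Temporal image correlation analysis of the kind used to quantify assembly and disassembly kinetics can be sketched with an autocorrelation of simulated intensity traces; the AR(1) turnover model and the two correlation times below are illustrative assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_movie(tau, n_frames=400, n_px=256):
    # AR(1) per-pixel intensity traces with correlation time tau (in frames),
    # a toy model of fluorophore turnover at each location.
    a = np.exp(-1.0 / tau)
    x = np.zeros((n_frames, n_px))
    noise = rng.standard_normal((n_frames, n_px))
    for k in range(1, n_frames):
        x[k] = a * x[k - 1] + np.sqrt(1.0 - a * a) * noise[k]
    return x

def temporal_acf(movie, lag):
    """Normalized temporal intensity correlation, averaged over pixels."""
    d = movie - movie.mean(axis=0)
    return float(np.mean(d[:-lag] * d[lag:]) / np.mean(d * d))

fast = simulate_movie(tau=3.0)    # rapid remodeling (e.g. after disruption)
slow = simulate_movie(tau=30.0)   # stabilized filaments
a_fast = temporal_acf(fast, 10)
a_slow = temporal_acf(slow, 10)
print(a_fast, a_slow)   # the slow-turnover movie decorrelates less at lag 10
```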
Adaptive enhancement for nonuniform illumination images via nonlinear mapping
NASA Astrophysics Data System (ADS)
Wang, Yanfang; Huang, Qian; Hu, Jing
2017-09-01
Nonuniform illumination images suffer from degenerated details because of underexposure, overexposure, or a combination of both. To improve the visual quality of color images, underexposure regions should be lightened, whereas overexposure areas need to be dimmed properly. However, discriminating between underexposure and overexposure is troublesome. Compared with traditional methods that produce a fixed demarcation value throughout an image, the proposed demarcation changes as local luminance varies, and is thus suitable for manipulating complicated illumination. Based on this locally adaptive demarcation, a nonlinear modification is applied to the image luminance. Further, with the modified luminance, we propose a nonlinear process to reconstruct a luminance-enhanced color image. For every pixel, this nonlinear process takes the luminance change and the original chromaticity into account, thus trying to avoid exaggerated colors in dark areas and depressed colors in highly bright regions. Finally, to improve image contrast, a local and image-dependent exponential technique is designed and applied to the RGB channels of the obtained color image. Experimental results demonstrate that our method produces good contrast and vivid color for both nonuniform illumination images and images with normal illumination.
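A toy version of the locally adaptive demarcation: estimate local luminance with a box mean and derive a per-pixel gamma from it, so dark regions are lightened and bright regions dimmed. The box radius and the `0.5 + local` gamma rule are illustrative assumptions, not the authors' mapping, and the chromaticity-preserving color reconstruction and the contrast step are omitted.

```python
import numpy as np

def box_mean(img, r=4):
    # Local mean via shifted sums (wrap-around edges; sketch only).
    shifts = [(dy, dx) for dy in range(-r, r + 1) for dx in range(-r, r + 1)]
    return sum(np.roll(img, s, (0, 1)) for s in shifts) / len(shifts)

def adaptive_enhance(lum):
    """Per-pixel gamma driven by a local-luminance demarcation."""
    local = box_mean(lum)      # the demarcation varies across the image
    gamma = 0.5 + local        # dark areas: gamma < 1 (lighten); bright: > 1 (dim)
    return np.clip(lum, 1e-6, 1.0) ** gamma

# Underexposed left half (0.1) next to an overexposed right half (0.9).
lum = np.concatenate([np.full((16, 16), 0.1), np.full((16, 16), 0.9)], axis=1)
out = adaptive_enhance(lum)
print(out[8, 4], out[8, 27])   # left interior lightened, right interior dimmed
```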
A brain MRI bias field correction method created in the Gaussian multi-scale space
NASA Astrophysics Data System (ADS)
Chen, Mingsheng; Qin, Mingxin
2017-07-01
A pre-processing step is needed to correct for the bias field signal before submitting corrupted MR images to subsequent image-processing algorithms. This study presents a new bias field correction method. The method creates a Gaussian multi-scale space by convolving the inhomogeneous MR image with a two-dimensional Gaussian function. In the multi-Gaussian space, the method retrieves the image details from the difference between the original image and the convolved image. It then obtains an image whose inhomogeneity is eliminated by the weighted sum of the image details in each layer of the space. Next, the bias field-corrected MR image is retrieved after a γ (gamma) correction, which enhances the contrast and brightness of the inhomogeneity-eliminated MR image. We have tested the approach on T1 and T2 MRI with varying bias field levels and have achieved satisfactory results. Comparison experiments with popular software have demonstrated the superior performance of the proposed method in terms of quantitative indices, especially an improvement in subsequent image segmentation.
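The multi-scale construction can be sketched in a few lines: blur at several Gaussian scales, keep the details (original minus blur), sum them with weights to suppress the smooth bias, then apply a gamma correction. The scales, weights, gamma value, and the min-max normalization are illustrative assumptions, not the paper's tuned parameters.

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian convolution (zero-padded borders; sketch only).
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x * x / (2.0 * sigma * sigma))
    k /= k.sum()
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def correct_bias(img, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.3, 0.2), gamma=0.8):
    """Weighted sum of multi-scale details suppresses the smooth bias field;
    a gamma correction then restores brightness and contrast."""
    detail = sum(w * (img - gaussian_blur(img, s)) for w, s in zip(weights, sigmas))
    flat = (detail - detail.min()) / (np.ptp(detail) + 1e-9)   # normalize to [0, 1]
    return flat ** gamma

# Synthetic "MR" slice: a block pattern multiplied by a smooth left-to-right bias.
yy, xx = np.mgrid[0:64, 0:64]
pattern = 0.5 + 0.1 * (((yy // 8) + (xx // 8)) % 2)   # tissue-like blocks
bias = 0.5 + xx / 63.0                                # smooth bias field, 0.5 to 1.5
biased = pattern * bias
corrected = correct_bias(biased)    # left-to-right intensity trend flattened
```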
NASA Astrophysics Data System (ADS)
Harkrider, Curtis Jason
2000-08-01
The incorporation of gradient-index (GRIN) material into optical systems offers novel and practical solutions to lens design problems. However, widespread use of gradient-index optics has been limited by poor correlation between gradient-index designs and the refractive index profiles produced by ion exchange between glass and molten salt. Previously, a design-for-manufacture model was introduced that connected the design and fabrication processes through use of diffusion modeling linked with lens design software. This project extends the design-for-manufacture model into a time-varying boundary condition (TVBC) diffusion model. TVBC incorporates the time-dependent phenomenon of melt poisoning and introduces a new index profile control method, multiple-step diffusion. The ions displaced from the glass during the ion exchange fabrication process can reduce the total change in refractive index (Δn). Chemical equilibrium is used to model this melt poisoning process. Equilibrium experiments are performed in a titania silicate glass and chemically analyzed. The equilibrium model is fit to ion concentration data, which are used to calculate ion exchange boundary conditions. The boundary conditions are changed purposely to control the refractive index profile in multiple-step TVBC diffusion. The glass sample is alternated between ion exchange with a molten salt bath and annealing. The time of each diffusion step can be used to exert control on the index profile. The TVBC computer model is experimentally verified and incorporated into the design-for-manufacture subroutine that runs in lens design software. The TVBC design-for-manufacture model is useful for fabrication-based tolerance analysis of gradient-index lenses and for the design of manufacturable GRIN lenses. Several optical elements are designed and fabricated using multiple-step diffusion, verifying the accuracy of the model. The strength of the multiple-step diffusion process lies in its versatility.
An axicon, imaging lens, and curved radial lens, all with different index profile requirements, are designed out of a single glass composition.
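The multiple-step TVBC idea, alternating ion exchange (surface held at the bath concentration) with annealing (sealed surface), can be illustrated with a one-dimensional finite-difference diffusion sketch. All numbers here are illustrative, not the dissertation's glass parameters; the lowered surface concentration in the last step stands in for melt poisoning.

```python
import numpy as np

def diffuse_1d(c, D, dt, dx, steps, boundary=None):
    """Explicit FTCS diffusion. boundary=None seals both ends (annealing);
    otherwise the surface node is held at the bath concentration
    (ion exchange). Stable for D*dt/dx**2 <= 0.5."""
    c = c.copy()
    r = D * dt / dx ** 2
    for _ in range(steps):
        c[1:-1] += r * (c[2:] - 2.0 * c[1:-1] + c[:-2])
        if boundary is None:
            c[0], c[-1] = c[1], c[-2]     # zero-flux ends
        else:
            c[0] = boundary               # exchange surface
            c[-1] = c[-2]                 # far end sealed
    return c

# Hypothetical multiple-step TVBC schedule (all numbers illustrative):
# fresh-bath exchange, anneal, then exchange with a poisoned bath whose
# effective surface concentration is lower. Delta-n is taken as
# proportional to the exchanged-ion concentration profile c.
c = np.zeros(101)
c = diffuse_1d(c, D=1.0, dt=0.4, dx=1.0, steps=500, boundary=1.0)
c = diffuse_1d(c, D=1.0, dt=0.4, dx=1.0, steps=500)                # anneal
c = diffuse_1d(c, D=1.0, dt=0.4, dx=1.0, steps=500, boundary=0.6)
```

Changing the boundary condition between steps is what reshapes the profile: the annealing step drives ions inward with no fresh supply, and the poisoned-bath step can even pull ions back out near the surface, which is the kind of profile control a single continuous exchange cannot achieve.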
Juhasz, Barbara J
2016-11-14
Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.
Real-time computer treatment of THz passive device images with the high image quality
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2012-06-01
We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not restricted to one passive THz device: it can be applied to other such devices and to active THz imaging systems as well. We applied our code to images captured by four passive THz imaging devices manufactured by different companies. It should be stressed that processing images produced by different companies usually requires different spatial filters. The performance of the current version of the code is greater than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 output images simultaneously, corresponding to the various spatial filters. The code allows the number of pixels in processed images to be increased without noticeable reduction of image quality, and its performance can be increased many times by using parallel algorithms for processing the image. We developed original spatial filters that allow one to see objects with sizes less than 2 cm in imagery produced by passive THz devices viewing objects hidden under opaque clothes. For images with high noise we developed an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for the detection of hidden objects and are very promising for security applications.
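The one-input/many-outputs workflow (one captured frame producing ~20 filtered variants) can be sketched as below. The paper's actual spatial filters are not disclosed, so generic box blurs and unsharp masks stand in; the radii and amounts are assumed values chosen only to show the structure.

```python
import numpy as np

def box_blur(img, r):
    k = np.ones(2 * r + 1) / (2 * r + 1)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def filter_bank(img, radii=(1, 2, 3), amounts=(0.5, 1.0)):
    """Produce several filtered variants of one frame: smoothed versions
    for noise suppression plus unsharp-masked versions for detail
    enhancement (stand-ins for the paper's proprietary filters)."""
    variants = []
    for r in radii:
        low = box_blur(img, r)
        variants.append(low)                        # noise-suppressed
        for a in amounts:
            variants.append(img + a * (img - low))  # unsharp mask
    return variants

rng = np.random.default_rng(0)
frame = rng.random((64, 64))      # stand-in for a noisy THz frame
variants = filter_bank(frame)     # 9 variants from one input frame
```

Since each variant is independent of the others, the loop parallelizes trivially, which matches the paper's note that parallel algorithms can multiply throughput.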
High-precision tracking of brownian boomerang colloidal particles confined in quasi two dimensions.
Chakrabarty, Ayan; Wang, Feng; Fan, Chun-Zhen; Sun, Kai; Wei, Qi-Huo
2013-11-26
In this article, we present a high-precision image-processing algorithm for tracking the translational and rotational Brownian motion of boomerang-shaped colloidal particles confined in quasi-two-dimensional geometry. By measuring mean square displacements of an immobilized particle, we demonstrate that the positional and angular precision of our imaging and image-processing system can achieve 13 nm and 0.004 rad, respectively. By analyzing computer-simulated images, we demonstrate that the positional and angular accuracies of our image-processing algorithm can achieve 32 nm and 0.006 rad. Because of zero correlations between the displacements in neighboring time intervals, trajectories of different videos of the same particle can be merged into a very long time trajectory, allowing for long-time averaging of different physical variables. We apply this image-processing algorithm to measure the diffusion coefficients of boomerang particles of three different apex angles and discuss the angle dependence of these diffusion coefficients.
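A generic building block for this kind of tracking is the extraction of a centroid and a principal-axis orientation from intensity moments; the sketch below shows that baseline. The authors' boomerang-specific algorithm is more elaborate (a bent particle's centroid and symmetry axis need special treatment), so this is only an illustration of the underlying idea.

```python
import numpy as np

def track(frame):
    """Centroid and principal-axis orientation from intensity moments."""
    ys, xs = np.indices(frame.shape)
    m = frame.sum()
    cx = (frame * xs).sum() / m
    cy = (frame * ys).sum() / m
    # Central second moments give the principal-axis orientation.
    mxx = (frame * (xs - cx) ** 2).sum() / m
    myy = (frame * (ys - cy) ** 2).sum() / m
    mxy = (frame * (xs - cx) * (ys - cy)).sum() / m
    theta = 0.5 * np.arctan2(2.0 * mxy, mxx - myy)  # orientation in radians
    return cx, cy, theta

# Demo: a synthetic elongated blob centred at (20, 30), aligned with x.
ys, xs = np.indices((64, 64))
blob = np.exp(-((xs - 20.0) ** 2 / 50.0 + (ys - 30.0) ** 2 / 8.0))
cx, cy, theta = track(blob)
```

Because the moments are intensity-weighted sums over all pixels, the recovered position is sub-pixel, which is what makes the nanometre-scale precision reported in the abstract possible in principle.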
NASA Astrophysics Data System (ADS)
Montanari, Davide; Scolari, Enrica; Silvestri, Chiara; Jiang Graves, Yan; Yan, Hao; Cervino, Laura; Rice, Roger; Jiang, Steve B.; Jia, Xun
2014-03-01
Cone beam CT (CBCT) has been widely used for patient setup in image-guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are (1) to commission a graphics processing unit (GPU)-based Monte Carlo (MC) dose calculation package, gCTD, for the Varian On-Board Imaging (OBI) system and test its calculation accuracy, and (2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD are obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. Twenty-five brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in a pelvis protocol and a pelvis spotlight protocol. Mean dose to each organ is calculated. Mean dose to the 2% of voxels receiving the highest dose is also computed to quantify the maximum dose. It is found that the mean dose to an organ varies considerably among patients. Moreover, the dose distribution is highly non-homogeneous inside an organ. The maximum dose is found to be 1-3 times higher than the mean dose depending on the organ, and up to eight times higher for the entire body due to the very high dose region in bony structures. High computational efficiency has also been observed: MC dose calculation time is less than 5 min for a typical case.
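The two summary metrics used here, the mean organ dose and the mean of the hottest 2% of organ voxels as a robust surrogate for maximum dose, are simple to compute; a minimal sketch (variable names and the demo numbers are illustrative):

```python
import numpy as np

def dose_stats(dose, organ_mask):
    """Mean organ dose and the mean of the hottest 2% of organ voxels,
    the latter being the study's surrogate for maximum dose."""
    d = np.sort(dose[organ_mask])[::-1]          # descending dose values
    n_top = max(1, int(round(0.02 * d.size)))    # hottest 2% (at least 1)
    return d.mean(), d[:n_top].mean()

# Demo on synthetic data: 100 voxels with doses 0..99 (arbitrary units).
dose = np.arange(100.0)
mean_dose, d2 = dose_stats(dose, np.ones(100, dtype=bool))
```

Averaging over the top 2% rather than taking the single hottest voxel makes the maximum-dose estimate insensitive to Monte Carlo noise in individual voxels.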
Kiryu, Tohru; Yamada, Hiroshi; Jimbo, Masahiro; Bando, Takehiko
2004-01-01
Virtual reality (VR) is a promising technology in biomedical engineering, but at the same time it aggravates another problem, called cybersickness. Aiming at the suppression of cybersickness, we quantitatively investigate the influence of vection-inducing images on autonomic regulation. We used motion vectors to quantify image scenes and measured the electrocardiogram, blood pressure, and respiration to evaluate autonomic regulation. Using the estimated motion vectors, we further synthesized random-dot pattern images to survey which component of the global motion vectors most seriously affected autonomic regulation. The results showed that the zoom component within a specific frequency band (0.1-3.0 Hz) would induce sickness.
Towards a framework for agent-based image analysis of remote-sensing data
Hofmann, Peter; Lettmayer, Paul; Blaschke, Thomas; Belgiu, Mariana; Wegenkittl, Stefan; Graf, Roland; Lampoltshammer, Thomas Josef; Andrejchenko, Vera
2015-01-01
Object-based image analysis (OBIA) as a paradigm for analysing remotely sensed image data has in many cases led to spatially and thematically improved classification results in comparison to pixel-based approaches. Nevertheless, robust and transferable object-based solutions for automated image analysis capable of analysing sets of images or even large image archives without any human interaction are still rare. A major reason for this lack of robustness and transferability is the high complexity of image contents: Especially in very high resolution (VHR) remote-sensing data with varying imaging conditions or sensor characteristics, the variability of the objects’ properties in these varying images is hardly predictable. The work described in this article builds on so-called rule sets. While earlier work has demonstrated that OBIA rule sets bear a high potential of transferability, they need to be adapted manually, or classification results need to be adjusted manually in a post-processing step. In order to automate these adaptation and adjustment procedures, we investigate the coupling, extension and integration of OBIA with the agent-based paradigm, which is exhaustively investigated in software engineering. The aims of such integration are (a) autonomously adapting rule sets and (b) image objects that can adopt and adjust themselves according to different imaging conditions and sensor characteristics. This article focuses on self-adapting image objects and therefore introduces a framework for agent-based image analysis (ABIA). PMID:27721916
NASA Astrophysics Data System (ADS)
Menke, H. P.; Bijeljic, B.; Andrew, M. G.; Blunt, M. J.
2014-12-01
Sequestering carbon in deep geologic formations is one way of reducing anthropogenic CO2 emissions. When supercritical CO2 mixes with brine in a reservoir, the acid generated has the potential to dissolve the surrounding pore structure. However, the magnitude and type of dissolution are condition dependent. Understanding how small changes in the pore structure, chemistry, and flow properties affect dissolution is paramount for successful predictive modelling. Both 'Pink Beam' synchrotron radiation and a micro-CT lab source are used in dynamic X-ray microtomography to investigate the pore structure changes during supercritical CO2 injection in carbonate rocks of varying heterogeneity at high temperatures and pressures and various flow rates. Three carbonate rock types were studied: one with a homogeneous pore structure and two heterogeneous carbonates. All samples are practically pure calcium carbonate, but have widely varying rock structures. Flow rate was varied in three successive experiments by over an order of magnitude while keeping all other experimental conditions constant. A 4-mm carbonate core was injected with CO2-saturated brine at 10 MPa and 50 °C. Tomographic images were taken at 30-second to 20-minute time resolutions during a 2- to 4-hour injection period. A pore network was extracted using a topological analysis of the pore space, and pore-scale flow modelling was performed directly on the binarized images with connected pathways and used to track the altering velocity distributions. Significant differences in dissolution type and magnitude were found for each rock type and flow rate. At the highest flow rates, the homogeneous carbonate showed predominantly uniform dissolution, with minor differences in dissolution rate between the pores and pore throats. In contrast, the heterogeneous carbonates formed wormholes at high flow rates.
At low flow rates the homogeneous rock developed wormholes, while the heterogeneous samples showed evidence of compact dissolution. This study serves as a unique benchmark for pore-scale reactive transport modelling directly on the binarized Micro-CT images. Dynamic pore-scale imaging methods offer advantages in helping explain the dominant processes at the pore scale so that they may be up-scaled for accurate model prediction.
Progress on automated data analysis algorithms for ultrasonic inspection of composites
NASA Astrophysics Data System (ADS)
Aldrin, John C.; Forsyth, David S.; Welter, John T.
2015-03-01
Progress is presented on the development and demonstration of automated data analysis (ADA) software to address the burden in interpreting ultrasonic inspection data for large composite structures. The automated data analysis algorithm is presented in detail, which follows standard procedures for analyzing signals for time-of-flight indications and backwall amplitude dropout. New algorithms have been implemented to reliably identify indications in time-of-flight images near the front and back walls of composite panels. Adaptive call criteria have also been applied to address sensitivity to variation in backwall signal level, panel thickness variation, and internal signal noise. ADA processing results are presented for a variety of test specimens that include inserted materials and discontinuities produced under poor manufacturing conditions. Software tools have been developed to support both ADA algorithm design and certification, producing a statistical evaluation of indication results and false calls using a matching process with predefined truth tables. Parametric studies were performed to evaluate detection and false call results with respect to varying algorithm settings.
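The adaptive call criterion described above, a detection threshold that tracks slow variation in backwall signal level rather than using one fixed value, can be sketched as follows. The local-median window and the 50% dropout fraction are illustrative assumptions, not the ADA software's actual settings.

```python
import numpy as np

def backwall_dropout_calls(amp, win=5, drop=0.5):
    """Flag C-scan pixels whose backwall amplitude falls below `drop`
    times the local median, so the call threshold adapts to panel
    thickness variation and slow backwall-level drift."""
    h, w = amp.shape
    pad = np.pad(amp, win, mode="edge")
    calls = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            local = np.median(pad[i:i + 2 * win + 1, j:j + 2 * win + 1])
            calls[i, j] = amp[i, j] < drop * local
    return calls

# Demo: a uniform backwall map with one simulated amplitude dropout.
amp = np.ones((20, 20))
amp[10, 10] = 0.2
calls = backwall_dropout_calls(amp)
```

Because the threshold is a fraction of the local median, a gradual change in backwall level across the panel does not trigger false calls, while a localized dropout still does.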
Evaluative Processing of Food Images: A Conditional Role for Viewing in Preference Formation
Wolf, Alexandra; Ounjai, Kajornvut; Takahashi, Muneyoshi; Kobayashi, Shunsuke; Matsuda, Tetsuya; Lauwereyns, Johan
2018-01-01
Previous research suggested a role of gaze in preference formation, not merely as an expression of preference, but also as a causal influence. According to the gaze cascade hypothesis, the longer subjects look at an item, the more likely they are to develop a preference for it. However, to date the connection between viewing and liking has been investigated predominately with self-paced viewing conditions in which the subjects were required to select certain items from simultaneously presented stimuli on the basis of perceived visual attractiveness. Such conditions might promote a default, but non-mandatory connection between viewing and liking. To explore whether the connection is separable, we examined the evaluative processing of single naturalistic food images in a 2 × 2 design, conducted completely within subjects, in which we varied both the type of exposure (self-paced versus time-controlled) and the type of evaluation (non-exclusive versus exclusive). In the self-paced exclusive evaluation, longer viewing was associated with a higher likelihood of a positive evaluation. However, in the self-paced non-exclusive evaluation, the trend reversed such that longer viewing durations were associated with lesser ratings. Furthermore, in the time-controlled tasks, both with non-exclusive and exclusive evaluation, there was no significant relationship between the viewing duration and the evaluation. The overall pattern of results was consistent for viewing times measured in terms of exposure duration (i.e., the duration of stimulus presentation on the screen) and in terms of actual gaze duration (i.e., the amount of time the subject effectively gazed at the stimulus on the screen). The data indicated that viewing does not intrinsically lead to a higher evaluation when evaluating single food images; instead, the relationship between viewing duration and evaluation depends on the type of task. 
We suggest that self-determination of exposure duration may be a prerequisite for any influence from viewing time on evaluative processing, regardless of whether the influence is facilitative. Moreover, the purported facilitative link between viewing and liking appears to be limited to exclusive evaluation, when only a restricted number of items can be included in a chosen set. PMID:29942273
Temperature dependence of proton NMR relaxation times at earth's magnetic field
NASA Astrophysics Data System (ADS)
Niedbalski, Peter; Kiswandhi, Andhika; Parish, Christopher; Ferguson, Sarah; Cervantes, Eduardo; Oomen, Anisha; Krishnan, Anagha; Goyal, Aayush; Lumata, Lloyd
The theoretical description of relaxation processes for protons, well established and experimentally verified at conventional nuclear magnetic resonance (NMR) fields, has remained untested at low fields despite significant advances in low field NMR technology. In this study, proton spin-lattice relaxation (T1) times in pure water and water doped with varying concentrations of the paramagnetic agent copper chloride have been measured from 6 to 92oC at earth's magnetic field (1700 Hz). Results show a linear increase of T1 with temperature for each of the samples studied. Increasing the concentration of the copper chloride greatly reduced T1 and reduced dependence on temperature. The consistency of the results with theory is an important confirmation of past results, while the ability of an ultra-low field NMR system to do contrast-enhanced magnetic resonance imaging (MRI) is promising for future applicability to low-cost medical imaging and chemical identification. This work is supported by US Dept of Defense Award No. W81XWH-14-1-0048 and the Robert A. Welch Foundation Grant No. AT-1877.
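The reported linear T1-versus-temperature trend is the kind of result one would extract with an ordinary least-squares line fit. The sketch below uses made-up sample values (the assumed slope and intercept are not the paper's measured numbers) purely to show the fitting step.

```python
import numpy as np

# Illustrative T1-vs-temperature fit with synthetic data; the slope and
# intercept below are assumptions, not the paper's measured values.
temp_c = np.array([6.0, 20.0, 40.0, 60.0, 80.0, 92.0])   # degrees C
true_slope, true_intercept = 0.025, 1.5                   # s/degC, s
rng = np.random.default_rng(1)
t1_s = true_intercept + true_slope * temp_c + rng.normal(0.0, 0.02, 6)

slope, intercept = np.polyfit(temp_c, t1_s, 1)            # least-squares line
```

A positive fitted slope corresponds to the paper's observation that T1 increases linearly with temperature over the 6-92 °C range.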
Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi
2014-02-01
We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
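The squared-difference angiogram with bulk-tissue-motion correction by shift search, as the abstract describes, can be sketched as below. The exhaustive search range and the synthetic demo frames are illustrative; the real system selects the best-corrected image among many candidates on the GPU.

```python
import numpy as np

def angiogram(frame1, frame2, max_shift=2):
    """Squared difference of two sequential structural frames, with
    bulk-tissue-motion correction: choose the axial/lateral pixel shift
    of the second frame that minimizes the summed residual."""
    best_map, best_sum = None, np.inf
    for dz in range(-max_shift, max_shift + 1):       # axial shift
        for dx in range(-max_shift, max_shift + 1):   # lateral shift
            diff = (frame1 - np.roll(frame2, (dz, dx), axis=(0, 1))) ** 2
            if diff.sum() < best_sum:
                best_sum, best_map = diff.sum(), diff
    return best_map

# Demo: the second frame is the first rigidly shifted by one pixel, so
# motion correction should null the angiogram.
rng = np.random.default_rng(2)
f1 = rng.random((32, 32))
f2 = np.roll(f1, (1, 1), axis=(0, 1))
flow_map = angiogram(f1, f2)
```

In real data the residual after correction is dominated by decorrelation from moving blood rather than bulk tissue motion, which is exactly the contrast the angiogram is meant to isolate.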
Fabrication and characterization of polymer gel for MRI phantom with embedded lesion particles
NASA Astrophysics Data System (ADS)
In, Eunji; Naguib, Hani E.; Haider, Masoom
2012-04-01
Magnetic Resonance Imaging (MRI) is a medical imaging technique used in radiology to visualize detailed internal structure and soft body tissues in a complete 3D image. MRI performs best when optimal imaging parameters such as contrast, signal-to-noise ratio (SNR), spatial resolution and total scan time are utilized. However, because these imaging parameters differ between manufacturers, a calibration medium that allows control of these parameters is necessary. Therefore, a phantom that behaves similarly to human soft tissue is developed to stand in for a real human. Polymer gel is a novel material with great potential in medical imaging. Since very few studies have examined the behavior of polymer lesions, the motivation of this study is to develop a polymer gel phantom, especially for the liver, with embedded lesions. Both the phantom and the lesions should be capable of reflecting T1 and T2 relaxation values through various characterization processes. In this paper, the phantom and lesion particles were fabricated with carrageenan as a gelling agent by physical aggregation. Agar was used as a supplementary gelling agent and T2 modifier, and Gd-DTPA as a T1 modifier. The polymer gel samples were fabricated by varying the concentrations of the gelling agent and the T1 and T2 modifiers. The lesion particles were obtained by extracting molten polymer gel solution into a chilled oil bath to obtain a spherical shape. The polymer gel properties, including density, elastic modulus, dielectric constant and optical properties, were measured over a long period of time for comparison with human tissue values.
MRI technique for the snapshot imaging of quantitative velocity maps using RARE
NASA Astrophysics Data System (ADS)
Shiko, G.; Sederman, A. J.; Gladden, L. F.
2012-03-01
A quantitative PGSE-RARE pulse sequence was developed and successfully applied to the in situ dissolution of two pharmaceutical formulations dissolving over a range of timescales. The new technique was chosen over other existing fast velocity imaging techniques because it is T2 weighted, not T2∗ weighted, and is, therefore, robust for imaging time-varying interfaces and flow in magnetically heterogeneous systems. The complex signal was preserved intact by separating odd and even echoes to obtain two phase maps which are then averaged in post-processing. Initially, the validity of the technique was shown when imaging laminar flow in a pipe. Subsequently, the dissolution of two drugs was followed in situ, where the technique enables the imaging and quantification of changes in the form of the tablet and the flow field surrounding it at high spatial and temporal resolution. First, the complete 3D velocity field around an eroding salicylic acid tablet was acquired at a resolution of 98 × 49 μm2, within 20 min, and monitored over ˜13 h. The tablet was observed to experience a heterogeneous flow field and, hence a heterogeneous shear field, which resulted in the non-symmetric erosion of the tablet. Second, the dissolution of a fast dissolving immediate release tablet was followed using one-shot 2D velocity images acquired every 5.2 s at a resolution of 390 × 390 μm2. The quantitative nature of the technique and fast acquisition times provided invaluable information on the dissolution behaviour of this tablet, which had not been attainable previously with conventional quantitative MRI techniques.
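The post-processing step of averaging the two phase maps obtained from the separated odd and even echoes is best done on the complex unit circle, since PGSE phase maps are wrapped to (-pi, pi]. The sketch below shows that averaging step only; the conversion from phase to velocity (via the gradient strength and timing of the PGSE pair) is omitted.

```python
import numpy as np

def mean_phase(odd_phase, even_phase):
    """Average two wrapped phase maps (e.g. from odd and even echo
    trains) on the complex unit circle, avoiding the 2*pi wrap errors an
    arithmetic mean would introduce; velocity is proportional to the
    averaged phase."""
    return np.angle(np.exp(1j * odd_phase) + np.exp(1j * even_phase))

# Demo: near the +/-pi wrap an arithmetic mean would wrongly give ~0,
# while the complex average stays at the wrap point.
avg = mean_phase(np.array(3.1), np.array(-3.1))
```

For unwrapped phases the complex average reduces to the ordinary bisector, so nothing is lost in the common case.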
Real time display Fourier-domain OCT using multi-thread parallel computing with data vectorization
NASA Astrophysics Data System (ADS)
Eom, Tae Joong; Kim, Hoon Seop; Kim, Chul Min; Lee, Yeung Lak; Choi, Eun-Seo
2011-03-01
We demonstrate a real-time display of processed OCT images using multi-thread parallel computing with a quad-core CPU of a personal computer. The data of each A-line are treated as one vector to maximize the data transfer rate between the CPU cores and the image data stored in RAM. A display rate of 29.9 frames/sec for processed OCT data (4096-point FFT × 500 A-scans) is achieved in our system using a wavelength-swept source with a 52-kHz sweep frequency. The data processing times for an OCT image and for a Doppler OCT image with a four-fold average are 23.8 msec and 91.4 msec, respectively.
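The per-A-line vectorized processing parallelized across CPU threads can be sketched as below. The window function, log scaling, and thread count are assumed details; only the frame size (500 A-scans of 4096 samples) comes from the abstract.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_frame(fringes, workers=4):
    """Per-A-line spectral processing parallelized across CPU threads,
    treating each A-line as one contiguous vector."""
    win = np.hanning(fringes.shape[1])
    def one_aline(v):
        spec = np.fft.fft(v * win)                    # windowed FFT
        half = spec[: v.size // 2]                    # positive depths only
        return 20.0 * np.log10(np.abs(half) + 1e-12)  # log-magnitude A-line
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.array(list(ex.map(one_aline, fringes)))

# Demo with the paper's frame size: 500 A-scans of 4096 samples each.
rng = np.random.default_rng(3)
frame = process_frame(rng.random((500, 4096)))
```

Keeping each A-line contiguous means every worker streams one cache-friendly vector through the FFT, which is the locality argument behind the paper's data-vectorization strategy.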
Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun
2017-09-14
Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. The rich information that comes from the high sensitivity and spatial resolution of a single-cell microscopic image is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow and identifies the acquired images in real time with minimal hardware, consisting of a microscope and a high-speed camera. Experiments show that R-MOD achieves fast and reliable performance (500 fps and 93.3% mAP), and it is expected to serve as a powerful tool for biomedical and clinical applications.
Chen, Hui; Palmer, N; Dayton, M; Carpenter, A; Schneider, M B; Bell, P M; Bradley, D K; Claus, L D; Fang, L; Hilsabeck, T; Hohenberger, M; Jones, O S; Kilkenny, J D; Kimmel, M W; Robertson, G; Rochau, G; Sanchez, M O; Stahoviak, J W; Trotter, D C; Porter, J L
2016-11-01
A novel x-ray imager, which takes time-resolved gated images along a single line-of-sight, has been successfully implemented at the National Ignition Facility (NIF). This Gated Laser Entrance Hole diagnostic, G-LEH, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories to upgrade the existing Static X-ray Imager diagnostic at NIF. The new diagnostic is capable of capturing two laser-entrance-hole images per shot on its 1024 × 448 pixels photo-detector array, with integration times as short as 1.6 ns per frame. Since its implementation on NIF, the G-LEH diagnostic has successfully acquired images from various experimental campaigns, providing critical new information for understanding the hohlraum performance in inertial confinement fusion (ICF) experiments, such as the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross beam energy transfer, and plasma instability growth near the hohlraum wall.
LMI design method for network-based PID control
NASA Astrophysics Data System (ADS)
Souza, Fernando de Oliveira; Mozelli, Leonardo Amaral; de Oliveira, Maurício Carvalho; Palhares, Reinaldo Martinez
2016-10-01
In this paper, we propose a methodology for the design of networked PID controllers for second-order delayed processes using linear matrix inequalities. The proposed procedure takes into account time-varying delay on the plant, time-varying delays induced by the network, and packet dropouts. The design is carried out entirely using a continuous-time model of the closed-loop system, where time-varying delays are used to represent the sampling and holding occurring in a discrete-time digital PID controller.
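The control setup, a PID loop around a second-order process whose input arrives through a time-varying network delay, can be illustrated with a simple simulation. Everything here is an assumption for illustration: the plant 1/(s^2 + s + 1), the gains, and the cyclic delay sequence; the paper obtains the gains from an LMI feasibility problem, which is not reproduced.

```python
import numpy as np

def simulate(kp, ki, kd, delays, dt=0.01, n=2000):
    """Euler simulation of a PID loop on the plant 1/(s^2 + s + 1) with
    a time-varying network delay on the control input."""
    y, u = np.zeros(n), np.zeros(n)
    x1 = x2 = integ = prev_e = 0.0
    for k in range(1, n):
        e = 1.0 - y[k - 1]                 # unit-step reference
        integ += e * dt
        u[k] = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        d = delays[k % len(delays)]        # time-varying delay in samples
        u_net = u[max(k - d, 0)]           # control signal after the network
        x2 += (-x1 - x2 + u_net) * dt      # plant: x2' = -x1 - x2 + u
        x1 += x2 * dt                      # x1' = x2, output y = x1
        y[k] = x1
    return y

# Hypothetical gains and delay pattern (3-8 samples, i.e. 30-80 ms).
y = simulate(kp=2.0, ki=1.0, kd=0.5, delays=[3, 5, 8])
```

With these mild delays the loop still tracks the step; shrinking the delay margin (larger entries in `delays`) degrades and eventually destabilizes the response, which is the trade-off the LMI conditions are designed to certify.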
Measuring the complexity of design in real-time imaging software
NASA Astrophysics Data System (ADS)
Sangwan, Raghvinder S.; Vercellone-Smith, Pamela; Laplante, Phillip A.
2007-02-01
Due to the intricacies of the algorithms involved, the design of imaging software is considered to be more complex than that of non-image processing software (Sangwan et al., 2005). A recent investigation (Larsson and Laplante, 2006) examined the complexity of several image processing and non-image processing software packages along a wide variety of metrics, including those postulated by McCabe (1976), Chidamber and Kemerer (1994), and Martin (2003). This work found that it was not always possible to quantitatively compare the complexity between imaging applications and non-image processing systems. Newer research and an accompanying tool (Structure 101, 2006), however, provide a greatly simplified approach to measuring software complexity. Therefore it may be possible to definitively quantify the complexity differences between imaging and non-imaging software, between imaging and real-time imaging software, and between software programs of the same application type. In this paper, we review prior results and describe the methodology for measuring complexity in imaging systems. We then apply a new complexity measurement methodology to several sets of imaging and non-imaging code in order to compare the complexity differences between the two types of applications. The benefit of such quantification is far-reaching, for example, leading to more easily measured performance improvement and quality in real-time imaging code.
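One of the metrics mentioned above, McCabe's cyclomatic complexity, is easy to approximate by counting decision points in a program's syntax tree. The sketch below is a minimal illustration (production tools also weight boolean operators, comprehensions, and other constructs, and the papers cited measure C/C++ and Java systems rather than Python).

```python
import ast

# Cyclomatic complexity approximated as 1 + the number of decision
# points found in the abstract syntax tree.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic(source):
    tree = ast.parse(source)
    return 1 + sum(isinstance(n, DECISION_NODES) for n in ast.walk(tree))

sample = """
def f(x):
    if x > 0:
        for i in range(x):
            x -= 1
    return x
"""
score = cyclomatic(sample)   # 1 + (one if) + (one for)
```

Counting linearly independent paths this way is exactly why branch-heavy image-processing kernels tend to score higher than straight-line application code on the McCabe metric.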
2017-01-01
Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of “binaural sluggishness,” the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. 
Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct. PMID:28381487
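One standard measure of temporal coding in studies of this kind is vector strength: the length of the mean resultant vector of spike times mapped onto the phase of the modulation cycle. The spike trains below are synthetic, for illustration only.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Resultant length of spike times expressed as modulation phases.
    1.0 = perfect phase locking; values near 0 = no locking."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(5)
f = 16.0                                  # modulation frequency, Hz
# phase-locked spikes: one per cycle with small jitter
locked = (np.arange(200) + rng.normal(0, 0.05, 200)) / f
# unlocked spikes: uniformly distributed over the same duration
random = rng.uniform(0, 200 / f, 200)

vs_locked = vector_strength(locked, f)
vs_random = vector_strength(random, f)
```

A neuron that tracks the modulation yields a vector strength near 1; firing unrelated to the modulation yields a value near 1/sqrt(N).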
Phytoplankton pigment patterns and wind forcing off central California
NASA Technical Reports Server (NTRS)
Abbott, Mark R.; Barksdale, Brett
1991-01-01
Mesoscale variability in phytoplankton pigment distributions off central California during the spring-summer upwelling season is studied via a 4-yr time series of high-resolution coastal zone color scanner imagery. Empirical orthogonal functions are used to decompose the time series of spatial images into its dominant modes of variability. The coupling between wind forcing of the upper ocean and phytoplankton distribution on mesoscales is investigated. Wind forcing, in particular the curl of the wind stress, was found to play an important role in the distribution of phytoplankton pigment in the California Current. The spring transition varies in timing and intensity from year to year but appears to be a recurrent feature associated with the rapid onset of the upwelling-favorable winds. Although the underlying dynamics may be dominated by processes other than forcing by wind stress curl, it appears that curl may force the variability of the filaments and hence the pigment patterns.
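The EOF decomposition used above amounts to a singular value decomposition of the mean-removed data matrix whose rows are flattened images. A minimal sketch on synthetic data (the scene sizes and signal are invented stand-ins for the imagery):

```python
import numpy as np

# Empirical orthogonal function (EOF) analysis of an image time series:
# flatten each image into a row, remove the time mean, and take the SVD.
# Right singular vectors are spatial modes (EOFs), left singular vectors
# scaled by the singular values are their temporal amplitudes.
rng = np.random.default_rng(0)
nt, ny, nx = 48, 16, 20                 # synthetic stand-in for CZCS scenes
pattern = np.outer(np.hanning(ny), np.hanning(nx))
t = np.arange(nt)
data = (np.sin(2 * np.pi * t / 12)[:, None, None] * pattern
        + 0.05 * rng.standard_normal((nt, ny, nx)))

X = data.reshape(nt, -1)
X = X - X.mean(axis=0)                  # anomalies about the time mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)         # fraction of variance per mode
eof1 = Vt[0].reshape(ny, nx)            # dominant spatial mode
pc1 = U[:, 0] * s[0]                    # its temporal amplitude
print(f"mode 1 explains {explained[0]:.0%} of the variance")
```

Here a single oscillating spatial pattern dominates, so nearly all the variance lands in the first mode, which is the typical diagnostic for picking how many EOFs to interpret.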
A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer
NASA Astrophysics Data System (ADS)
Luckman, Adrian J.; Allinson, Nigel M.
1989-03-01
A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware that interfaces directly to the Archimedes memory, and software providing an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line, with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.
Lee, Kenneth K C; Mariampillai, Adrian; Yu, Joe X Z; Cadotte, David W; Wilson, Brian C; Standish, Beau A; Yang, Victor X D
2012-07-01
Advances in swept-source laser technology continue to increase the imaging speed of swept-source optical coherence tomography (SS-OCT) systems. These fast imaging speeds are ideal for microvascular detection schemes, such as speckle variance (SV), where interframe motion can cause severe imaging artifacts and loss of vascular contrast. However, full utilization of the laser scan speed has been hindered by the computationally intensive signal processing required by SS-OCT and SV calculations. Using a commercial graphics processing unit that has been optimized for parallel data processing, we report a complete high-speed SS-OCT platform capable of real-time data acquisition, processing, display, and saving at 108,000 lines per second. Subpixel image registration of structural images was performed in real time prior to SV calculations in order to reduce decorrelation from stationary structures induced by bulk tissue motion. The viability of the system was successfully demonstrated in a high bulk tissue motion scenario of human fingernail root imaging, where SV images (512 × 512 pixels, n = 4) were displayed at 54 frames per second.
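Speckle variance is commonly computed as the per-pixel intensity variance over N consecutive structural frames acquired at the same position (N = 4 here, matching the abstract). The frame data, vessel geometry, and detection threshold below are synthetic assumptions for the sketch.

```python
import numpy as np

# Illustrative speckle-variance (SV) computation: static tissue shows low
# interframe variance, while flowing blood decorrelates the speckle and
# shows high variance. Frames and the "vessel" are synthetic.
rng = np.random.default_rng(1)
n_frames, h, w = 4, 64, 64
frames = np.full((n_frames, h, w), 100.0)
frames += rng.normal(0, 1.0, frames.shape)        # static tissue: low SV
vessel = (slice(20, 28), slice(20, 44))
frames[(slice(None),) + vessel] += rng.normal(0, 12.0, (n_frames, 8, 24))

sv = frames.var(axis=0)            # per-pixel variance across the N frames
detected = sv > 25.0               # arbitrary threshold on the SV image
```

The variance over only four frames is a noisy estimate, which is one reason the paper registers frames first: bulk motion would otherwise raise the variance of static pixels and wash out the vascular contrast.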
Characteristics of nonlinear imaging of broadband laser stacked by chirped pulses
NASA Astrophysics Data System (ADS)
Wang, Youwen; You, Kaiming; Chen, Liezun; Lu, Shizhuan; Dai, Zhiping; Ling, Xiaohui
2014-11-01
Nanosecond-level pulses of specific shape are usually generated by stacking chirped pulses in high-power inertial confinement fusion drivers, in which nonlinear imaging of scatterers may damage precious optical elements. We present a numerical study of the characteristics of nonlinear imaging of scatterers in a broadband laser stacked from chirped pulses, to disclose the dependence of the location and intensity of the images on the parameters of the stacked pulse. It is shown that, for sub-nanosecond chirped sub-pulses or transform-limited sub-pulses, the time-mean intensity and location of images through normally dispersive and anomalously dispersive self-focusing medium slabs are almost identical, while for picosecond-level chirped sub-pulses the time-mean intensity of images for weak normal dispersion is slightly higher than that for weak anomalous dispersion through a thin nonlinear slab; the result is the opposite for strong dispersion in a thick nonlinear slab. Furthermore, for a given time delay between neighboring sub-pulses, the time-mean intensity of images varies periodically with increasing sub-pulse chirp; for a given sub-pulse width, the time-mean intensity of images decreases as the time delay between neighboring sub-pulses increases; additionally, there is little difference in the time-mean intensity of images for lasers stacked from different numbers of sub-pulses. Finally, physical explanations of the obtained results are given.
Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU
NASA Astrophysics Data System (ADS)
Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee
2013-02-01
3D microscopy images contain vast amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To work around this, many users crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI strongly depends on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and varied automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images. Users can select the algorithm to be applied. Further, the tool provides visualization of segmented volume data and allows setting the scale, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to yield information for biologists. To analyze 3D microscopic images, we need quantitative data about the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object. This information can be used as classification features. A user can select an object to be analyzed; our tool displays the selected object in a new window, so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times with matched specification and configuration.
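One automatic thresholding method such a tool might offer is Otsu's histogram-based method, sketched here on a synthetic volume; the class intensities and volume geometry are illustrative assumptions, not from the paper.

```python
import numpy as np

# Otsu's method: choose the threshold that maximizes the between-class
# variance of the intensity histogram. Because only the histogram is
# used, the same code works unchanged on 3D volumes.
def otsu_threshold(volume, nbins=256):
    hist, edges = np.histogram(volume, bins=nbins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                     # class-0 probability per threshold
    w1 = 1.0 - w0
    mu = np.cumsum(p * centers)           # cumulative first moment
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return centers[np.argmax(between)]

# synthetic two-class "volume": dim background, bright cell region
rng = np.random.default_rng(2)
vol = rng.normal(30, 5, (32, 64, 64))
vol[10:20, 20:40, 20:40] = rng.normal(120, 10, (10, 20, 20))
t = otsu_threshold(vol)
mask = vol > t
```

The threshold lands between the two intensity modes, so the bright region is segmented without any user-chosen cutoff.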
Design of an MR image processing module on an FPGA chip.
Li, Limin; Wyrwicz, Alice M
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that graphical coding can greatly simplify the design work. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases the portability of the core. Direct matrix transposition, usually required for execution of a 2D FFT, is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
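The 2D FFT at the heart of such a module separates into 1D passes along rows and then columns, which is what makes transpose-avoiding address generation possible in hardware. In NumPy the separation, and the reconstruction of an image from its k-space, can be checked directly (the k-space here is simulated from a random image):

```python
import numpy as np

# MR reconstruction sketch: the image is the inverse 2D FFT of k-space.
# The 2D transform factorizes into 1D transforms along each axis, so no
# explicit matrix transpose is needed between the two passes.
rng = np.random.default_rng(3)
image = rng.random((128, 128))
kspace = np.fft.fft2(image)               # simulated acquired k-space

rows = np.fft.ifft(kspace, axis=1)        # pass 1: 1D IFFT along rows
recon = np.fft.ifft(rows, axis=0)         # pass 2: along columns
recon = np.abs(recon)

print(np.allclose(recon, image))          # → True
```

In software the axis argument hides the data reordering; on an FPGA, the equivalent trick is an address generation unit that feeds the second pass in column order while the data stays put in block RAM.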
UWGSP7: a real-time optical imaging workstation
NASA Astrophysics Data System (ADS)
Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.
1995-04-01
With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments, with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illuminating a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While the target is continuously illuminated, a control image is acquired and stored. A dye is injected into the subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images.
To accommodate different input devices, the camera interface circuitry is designed in a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image- processing board using a direct connect port which provides a 66 Mbytes/s channel independent of the system bus. The image processing board utilizes the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second providing the UWGSP7 with the capability to perform real-time image processing functions like median filtering, convolution and contrast enhancement. This processing power allows interactive analysis of the experiments as compared to current practice of off-line processing and analysis. Due to its flexibility and programmability, the UWGSP7 can be adapted into various research needs in intraoperative optical imaging.
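The acquisition chain described above (frame-averaged control image, subtraction of later data images, pseudo-color display) can be sketched as follows. The frame counts, dye contrast, and LUT indexing are invented for illustration.

```python
import numpy as np

# Sketch of the cw reflectance processing chain: average frames into a
# control image, subtract it from later data images, and map the signed
# change to an 8-bit index into a hypothetical pseudo-color lookup table.
rng = np.random.default_rng(4)

def acquire(n_frames, dye=0.0):
    base = 100.0 + rng.normal(0, 2.0, (n_frames, 64, 64))
    base[:, 24:40, 24:40] -= dye          # dye absorbs light in the "tumor"
    return base

control = acquire(16).mean(axis=0)        # frame-averaged control image
data = acquire(16, dye=15.0).mean(axis=0) # data image after dye injection
delta = data - control                    # change in reflectance over time

# normalize the signed change to 0..255 for a pseudo-color LUT
lo, hi = delta.min(), delta.max()
index = np.round(255 * (delta - lo) / (hi - lo)).astype(np.uint8)
```

Frame averaging suppresses the acquisition noise before subtraction, which is why the hardware does it on the fly rather than storing every raw frame.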
van Atteveldt, Nienke; Musacchia, Gabriella; Zion-Golumbic, Elana; Sehatpour, Pejman; Javitt, Daniel C.; Schroeder, Charles
2015-01-01
The brain’s fascinating ability to adapt its internal neural dynamics to the temporal structure of the sensory environment is becoming increasingly clear. It is thought to be metabolically beneficial to align ongoing oscillatory activity to the relevant inputs in a predictable stream, so that they enter at optimal processing phases of the spontaneously occurring rhythmic excitability fluctuations. However, some contexts have a more predictable temporal structure than others. Here, we tested the hypothesis that the processing of rhythmic sounds is more efficient than the processing of irregularly timed sounds. To do this, we simultaneously measured functional magnetic resonance imaging (fMRI) and electro-encephalograms (EEG) while participants detected oddball target sounds in alternating blocks of rhythmic (i.e., with equal inter-stimulus intervals) or random (i.e., with randomly varied inter-stimulus intervals) tone sequences. Behaviorally, participants detected target sounds faster and more accurately when they were embedded in rhythmic streams. The fMRI response in the auditory cortex was stronger during random than during rhythmic tone sequence processing. Simultaneously recorded N1 responses showed larger peak amplitudes and longer latencies for tones in the random (vs. the rhythmic) streams. These results reveal complementary evidence for more efficient neural and perceptual processing during temporally predictable sensory contexts. PMID:26579044
Evaluation of clinical image processing algorithms used in digital mammography.
Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde
2009-03-01
Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although, to our knowledge, statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, namely that apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms, but at lower significance (F = 3.47, p = 0.0305) than JAFROC.
Both statistical analysis methods revealed that the same six pairs of modalities were significantly different, but the JAFROC confidence intervals were about 32% smaller than ROC confidence intervals. This study shows that image processing has a significant impact on the detection of microcalcifications in digital mammograms. Objective measurements, such as described here, should be used by the manufacturers to select the optimal image processing algorithm.
NASA Astrophysics Data System (ADS)
Lee, Hsiang-Chieh; Ahsen, Osman O.; Liu, Jonathan J.; Tsai, Tsung-Han; Huang, Qin; Mashimo, Hiroshi; Fujimoto, James G.
2017-07-01
Radiofrequency ablation (RFA) is widely used for the eradication of dysplasia and the treatment of early stage esophageal carcinoma in patients with Barrett's esophagus (BE). However, there are several factors, such as variation of BE epithelium (EP) thickness among individual patients and varying RFA catheter-tissue contact, which may compromise RFA efficacy. We used a high-speed optical coherence tomography (OCT) system to identify and monitor changes in the esophageal tissue architecture from RFA. Two different OCT imaging/RFA application protocols were performed using an ex vivo swine esophagus model: (1) post-RFA volumetric OCT imaging for quantitative analysis of the coagulum formation using RFA applications with different energy settings, and (2) M-mode OCT imaging for monitoring the dynamics of tissue architectural changes in real time during RFA application. Post-RFA volumetric OCT measurements showed an increase in the coagulum thickness with respect to the increasing RFA energies. Using a subset of the specimens, OCT measurements of coagulum and coagulum + residual EP thickness were shown to agree with histology, which accounted for specimen shrinkage during histological processing. In addition, we demonstrated the feasibility of OCT for real-time visualization of the architectural changes during RFA application with different energy settings. Results suggest feasibility of using OCT for RFA treatment planning and guidance.
Uzbekova, Svetlana; Elis, Sebastien; Teixeira-Gomes, Ana-Paula; Desmarchais, Alice; Maillard, Virginie; Labas, Valerie
2015-01-01
In mammals, oocytes develop inside the ovarian follicles; this process is strongly supported by the surrounding follicular environment consisting of cumulus, granulosa and theca cells, and follicular fluid. In the antral follicle, the final stages of oogenesis require large amounts of energy that is produced by follicular cells from substrates including glucose, amino acids and fatty acids (FAs). Since lipid metabolism plays an important role in acquiring oocyte developmental competence, the aim of this study was to investigate site-specificity of lipid metabolism in ovaries by comparing lipid profiles and expression of FA metabolism-related genes in different ovarian compartments. Using MALDI Mass Spectrometry Imaging, images of porcine ovary sections were reconstructed from lipid ion signals for the first time. Cluster analysis of ion spectra revealed differences in spatial distribution of lipid species among ovarian compartments, notably between the follicles and interstitial tissue. Inside the follicles, the analysis differentiated follicular fluid, granulosa, theca and the oocyte-cumulus complex. Moreover, by transcript quantification using real-time PCR, we showed that expression of five key genes in FA metabolism significantly varied between somatic follicular cells (theca, granulosa and cumulus) and the oocyte. In conclusion, lipid metabolism differs between ovarian and follicular compartments. PMID:25756245
Associative architecture for image processing
NASA Astrophysics Data System (ADS)
Adar, Rutie; Akerib, Avidan
1997-09-01
This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the XiumTM-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can be programmed to implement a wide range of color image processing, computer vision and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz and a 0.6 micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the XiumTM-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.
1983-06-01
system, provides a convenient, low-noise, fully parallel method of improving contrast and enhancing structural detail in an image prior to input to a...directed towards problems in deconvolution, reconstruction from projections, bandlimited extrapolation, and shift-varying deblurring of images...deconvolution algorithm has been studied with promising results [1] for simulated motion blurs. Future work will focus on noise effects and the extension
NASA Technical Reports Server (NTRS)
Tolliver, C. L.
1989-01-01
The quest for the highest resolution microwave imaging and the principles of time-domain imaging have been the primary motivations for recent developments in time-domain techniques. With present technology, fast time-varying signals can now be measured and recorded both in magnitude and in phase. This has also enhanced our ability to extract relevant details concerning the scattering object. In the past, the inference of object geometry or shape from scattered signals has received substantial attention in radar technology, and various scattering theories were proposed to develop analytical solutions to this problem. Furthermore, Radon inversion, frequency-swept holography, and synthetic radar imaging were also advanced; these have two things in common: (1) the physical-optics far-field approximation, and (2) the utilization of channels as an extra physical dimension. Despite the inherent vectorial nature of electromagnetic waves, these scalar treatments have brought forth some promising results in practice, with notable examples in subsurface and structure sounding. The development of time-domain techniques is studied through theoretical aspects as well as experimental verification. The use of time-domain imaging for space robotic vision applications has been suggested.
Rapid variability of the arcsec-scale X-ray jets of SS 433
NASA Astrophysics Data System (ADS)
Migliari, S.; Fender, R. P.; Blundell, K. M.; Méndez, M.; van der Klis, M.
2005-04-01
We present X-ray images of all the available Chandra observations of the galactic jet source SS 433. We have studied the morphology of the X-ray images and inspected the evolution of the arcsec X-ray jets, recently found to be manifestations of in situ reheating of the relativistic gas downstream in the jets. The Chandra images reveal that the arcsec X-ray jets are not steady long-term structures; the structure varies, indicating that the reheating processes have no preference for a particular precession phase or distance from the binary core. Three observations made within about five days in 2001 May, and a 60-ks observation made in 2003 July, show that the variability of the jets can be very rapid, from time-scales of days to (possibly) hours. The three 2001 May images show two resolved knots in the east jet getting brighter one after the other, suggesting that a common phenomenon might be at the origin of the sequential reheatings of the knots. We discuss possible scenarios and propose a model to interpret these brightenings in terms of a propagating shock wave, revealing a second, faster outflow in the jet.
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
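Plain SVD-based coil compression, without the per-location compression and alignment steps that distinguish the paper's method, can be sketched as follows. The channel counts match the abstract (32 physical, 6 virtual), but the calibration data is synthetic.

```python
import numpy as np

# SVD coil compression sketch: decompose calibration data from C physical
# channels and keep the top V left singular vectors, so every subsequent
# k-space sample can be projected from C down to V virtual coils.
rng = np.random.default_rng(6)
C, V, n_samples = 32, 6, 4000
# synthetic calibration data whose channels are mixtures of V sources
sources = rng.standard_normal((V, n_samples)) + 1j * rng.standard_normal((V, n_samples))
mixing = rng.standard_normal((C, V)) + 1j * rng.standard_normal((C, V))
calib = mixing @ sources + 0.01 * rng.standard_normal((C, n_samples))

U, s, Vt = np.linalg.svd(calib, full_matrices=False)
compress = U[:, :V].conj().T              # V x C compression matrix

data = mixing @ (rng.standard_normal((V, 100)) + 0j)   # new k-space samples
virtual = compress @ data                 # compressed to V virtual coils
recovered = U[:, :V] @ virtual            # project back to check retained energy
rel_err = np.linalg.norm(recovered - data) / np.linalg.norm(data)
```

Because the synthetic channels truly span a 6-dimensional subspace, almost no energy is lost; on real data the residual after V virtual coils quantifies the compression error.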
NASA Astrophysics Data System (ADS)
Vuori, Tero; Olkkonen, Maria
2006-01-01
The aim of the study is to test both customer image quality ratings (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. A methodological aim is to find out whether eye movements can be used quantitatively in image quality preference studies. In general, we want to map objective or physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters consistently change according to the instructions given to the user and according to physical image quality, e.g. saccade duration increased with increasing blur. Results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users have. Results also show that eye movements would help mapping between technological and subjective image quality. Furthermore, these results give some empirical emphasis to top-down processes in image quality perception and evaluation by showing differences between perceptual processes in situations where the cognitive task varies.
A method of fast mosaic for massive UAV images
NASA Astrophysics Data System (ADS)
Xiang, Ren; Sun, Min; Jiang, Cheng; Liu, Lei; Zheng, Hui; Li, Xiaodong
2014-11-01
With the development of UAV technology, UAVs are widely used in multiple fields such as agriculture, forest protection, mineral exploration, natural disaster management and surveillance of public security events. In contrast to traditional manned aerial remote sensing platforms, UAVs are cheaper and more flexible to use. Users can therefore obtain massive image data with UAVs, but processing that data takes a long time; for example, Pix4UAV needs approximately 10 hours to process 1000 images on a high-performance PC. Disaster management and many other fields, however, require a quick response, which is hard to achieve with massive image data. Aiming to overcome the disadvantages of high time consumption and manual interaction, this article presents a solution for fast UAV image stitching. GPS and POS data are used to pre-process the original images from the UAV; flight belts and the relations between belts and images are recognized automatically by the program, and useless images are discarded at the same time. This speeds up the search for match points between images. The Levenberg-Marquardt algorithm is improved so that parallel computing can be applied to notably shorten the time of global optimization. Besides the traditional mosaic result, the system can also generate a superoverlay result for Google Earth, which provides a fast and easy way to present the result data. To verify the feasibility of this method, a fast mosaic system for massive UAV images was developed; it is fully automated, and no manual interaction is needed after the original images and GPS data are provided. A test using 800 images of the Kelan River in Xinjiang Province shows that this system reduces time consumption by 35%-50% compared with traditional methods and greatly increases the response speed of UAV image processing.
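A cheap, parallelizable building block often used in such pipelines is phase correlation, which estimates the translation between overlapping frames from FFTs and can seed the match-point search that the GPS/POS pre-processing narrows down. This generic sketch is not the system described above.

```python
import numpy as np

def phase_correlation(a, b):
    """Return the (dy, dx) shift that maps image b onto image a,
    estimated from the peak of the normalized cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # fold peaks in the upper half of each axis to negative shifts
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(int(x) for x in shifts)

rng = np.random.default_rng(7)
ref = rng.random((256, 256))
moved = np.roll(ref, (17, -23), axis=(0, 1))   # known circular shift
print(phase_correlation(ref, moved))           # → (-17, 23)
```

Because the work is two FFTs and a peak search per pair, many image pairs can be processed in parallel, which fits the parallel-computing emphasis of the abstract.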
Jadidi, Masoud; Båth, Magnus; Nyrén, Sven
2018-04-09
To compare the quality of images obtained with two protocols with different acquisition times, and the influence of image post-processing, in a chest digital tomosynthesis (DTS) system. 20 patients with suspected lung cancer were imaged with a chest X-ray unit with a tomosynthesis option. Two examination protocols with different acquisition times (6.3 and 12 s) were performed on each patient. Both protocols were presented with two different image post-processings (standard DTS processing and more advanced processing optimised for chest radiography). Thus, 4 series from each patient, altogether 80 series, were presented anonymously and in random order. Five observers rated the quality of the reconstructed section images according to predefined quality criteria in three different classes. Visual grading characteristics (VGC) was used to analyse the data, and the area under the VGC curve (AUC_VGC) was used as the figure-of-merit. The 12 s protocol and the standard DTS processing were used as references in the analyses. The protocol with 6.3 s acquisition time had a statistically significant advantage over the vendor-recommended protocol with 12 s acquisition time for the classes of criteria Demarcation (AUC_VGC = 0.56, p = 0.009) and Disturbance (AUC_VGC = 0.58, p < 0.001). A similar value of AUC_VGC was found for the class Structure (definition of bone structures in the spine) (0.56), but it could not be statistically separated from 0.5 (p = 0.21). For the image processing, the VGC analysis showed a small but statistically significant advantage for the standard DTS processing over the more advanced processing for the classes of criteria Demarcation (AUC_VGC = 0.45, p = 0.017) and Disturbance (AUC_VGC = 0.43, p = 0.005). A similar value of AUC_VGC was found for the class Structure (0.46), but it could not be statistically separated from 0.5 (p = 0.31).
The study indicates that the protocol with 6.3 s acquisition time yields slightly better image quality than the vendor-recommended protocol with 12 s acquisition time for several anatomical structures. Furthermore, the standard gradation processing (the vendor-recommended post-processing for DTS) shows to some extent an advantage over the gradation processing/multiobjective frequency processing/flexible noise control processing in terms of image quality for all classes of criteria. Advances in knowledge: The study shows that image quality may be strongly affected by the selection of DTS protocol and that the vendor-recommended protocol may not always be the optimal choice.
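For readers unfamiliar with the figure-of-merit, the area under the VGC curve can be estimated nonparametrically as the probability that a randomly chosen rating from one condition exceeds one from the other, with ties counted as one half. A minimal sketch of that general idea (not the actual analysis software used in the study):

```python
import numpy as np

def auc_vgc(ratings_a, ratings_b):
    """Empirical AUC_VGC: P(a > b) + 0.5 * P(a == b) over all pairs
    of ordinal quality ratings from conditions A and B.
    0.5 means no preference; > 0.5 favours condition A."""
    a = np.asarray(ratings_a)[:, None]
    b = np.asarray(ratings_b)[None, :]
    return float(np.mean(a > b) + 0.5 * np.mean(a == b))
```

For example, identical rating distributions give 0.5, while uniformly higher ratings in condition A give 1.0.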
Community tools for cartographic and photogrammetric processing of Mars Express HRSC images
Kirk, Randolph L.; Howington-Kraus, Elpitha; Edmundson, Kenneth L.; Redding, Bonnie L.; Galuszka, Donna M.; Hare, Trent M.; Gwinner, K.; Wu, B.; Di, K.; Oberst, J.; Karachevtseva, I.
2017-01-01
The High Resolution Stereo Camera (HRSC) on the Mars Express orbiter (Neukum et al. 2004) is a multi-line pushbroom scanner that can obtain stereo and color coverage of targets in a single overpass, with pixel scales as small as 10 m at periapsis. Since commencing operations in 2004 it has imaged ~77% of Mars at 20 m/pixel or better. The instrument team uses the Video Image Communication And Retrieval (VICAR) software to produce and archive a range of data products from uncalibrated and radiometrically calibrated images to controlled digital topographic models (DTMs) and orthoimages and regional mosaics of DTM and orthophoto data (Gwinner et al. 2009; 2010b; 2016). Alternatives to this highly effective standard processing pipeline are nevertheless of interest to researchers who do not have access to the full VICAR suite and may wish to make topographic products or perform other (e.g., spectrophotometric) analyses prior to the release of the highest level products. We have therefore developed software to ingest HRSC images and model their geometry in the USGS Integrated Software for Imagers and Spectrometers (ISIS3), which can be used for data preparation, geodetic control, and analysis, and the commercial photogrammetric software SOCET SET (® BAE Systems; Miller and Walker 1993; 1995), which can be used for independent production of DTMs and orthoimages. The initial implementation of this capability utilized the then-current ISIS2 system and the generic pushbroom sensor model of SOCET SET, and was described in the DTM comparison of independent photogrammetric processing by different elements of the HRSC team (Heipke et al. 2007). A major drawback of this prototype was that neither software system then allowed for pushbroom images in which the exposure time changes from line to line. Except at periapsis, HRSC makes such timing changes every few hundred lines to accommodate changes of altitude and velocity in its elliptical orbit.
As a result, it was necessary to split observations into blocks of constant exposure time, greatly increasing the effort needed to control the images and collect DTMs. Here, we describe a substantially improved HRSC processing capability that incorporates sensor models with varying line timing in the current ISIS3 system (Sides 2017) and SOCET SET. This enormously reduces the work effort for processing most images and eliminates the artifacts that arose from segmenting them. In addition, the software takes advantage of the continuously evolving capabilities of ISIS3 and the improved image matching module NGATE (Next Generation Automatic Terrain Extraction, incorporating area and feature based algorithms, multi-image and multi-direction matching) of SOCET SET, thus greatly reducing the need for manual editing of DTM errors. We have also developed a procedure for geodetically controlling the images to Mars Orbiter Laser Altimeter (MOLA) data by registering a preliminary stereo topographic model to MOLA using the point cloud alignment (pc_align) function of the NASA Ames Stereo Pipeline (ASP; Moratto et al. 2010). This effectively converts inter-image tiepoints into ground control points in the MOLA coordinate system. The result is improved absolute accuracy and a significant reduction in work effort relative to manual measurement of ground control. The ISIS and ASP software used are freely available; SOCET SET is a commercial product. By the end of 2017 we expect to have ported our SOCET SET HRSC sensor model to the Community Sensor Model (CSM; Community Sensor Model Working Group 2010; Hare and Kirk 2017) standard utilized by the successor photogrammetric system SOCET GXP that is currently offered by BAE. We are also working with BAE to release the CSM source code under a BSD or MIT open source license in early 2018.
NASA Technical Reports Server (NTRS)
Wolfe, R. H., Jr.; Juday, R. D.
1982-01-01
Interimage matching is the process of determining the geometric transformation required to spatially conform one image to another. In principle, the parameters of that transformation are varied until some measure of the difference between the two images is minimized or some measure of sameness (e.g., cross-correlation) is maximized. The number of such parameters to vary is fairly large (six for merely an affine transformation), and it is customary either to attempt an a priori transformation reducing the complexity of the residual transformation, or to subdivide the image into match zones (control points or patches) small enough that a simple transformation (e.g., pure translation) is applicable, yet large enough to facilitate matching. In the latter case, a complex mapping function is fit to the results (e.g., translation offsets) in all the patches. The methods reviewed have all chosen one or both of the above options, ranging from a priori along-line correction for line-dependent effects (the high-frequency correction) to a full sensor-to-geobase transformation with subsequent subdivision into a grid of match points.
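The patch-based "pure translation" matching described above can be sketched as an exhaustive normalized cross-correlation search over integer offsets (an illustrative toy, not code from the reviewed systems):

```python
import numpy as np

def best_offset(ref, patch, search):
    """Find the integer (dy, dx) in [-search, search]^2 that maximizes
    the normalized cross-correlation between `patch` and the same-size
    window of `ref`. `ref` must exceed `patch` by 2*search per axis."""
    h, w = patch.shape
    p = (patch - patch.mean()) / (patch.std() + 1e-12)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = ref[search + dy:search + dy + h,
                      search + dx:search + dx + w]
            q = (win - win.mean()) / (win.std() + 1e-12)
            score = np.mean(p * q)  # normalized cross-correlation
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

In a real system this search runs once per control patch, and a smooth mapping function is then fit to the resulting field of offsets.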
ImageJ: Image processing and analysis in Java
NASA Astrophysics Data System (ADS)
Rasband, W. S.
2012-06-01
ImageJ is a public domain Java image processing program inspired by NIH Image. It can display, edit, analyze, process, save and print 8-bit, 16-bit and 32-bit images. It can read many image formats including TIFF, GIF, JPEG, BMP, DICOM, FITS and "raw". It supports "stacks", a series of images that share a single window. It is multithreaded, so time-consuming operations such as image file reading can be performed in parallel with other operations.
Image processing applications: From particle physics to society
NASA Astrophysics Data System (ADS)
Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.
2017-01-01
We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and a full-custom associative memory chip. The PU was developed for real-time tracking in particle physics experiments, but offers flexible features with potential application in a wide range of fields. It has been proposed for accelerated pattern matching in Magnetic Resonance Fingerprinting (biomedical applications), for real-time detection of space debris trails in astronomical images (space applications) and for brain emulation in image processing (cognitive image processing). We illustrate the potential of the PU for these new applications.
Telemedicine optoelectronic biomedical data processing system
NASA Astrophysics Data System (ADS)
Prosolovska, Vita V.
2010-08-01
The telemedicine optoelectronic biomedical data processing system is created to share medical information for health monitoring and for timely and rapid response to crises. The system includes the main blocks: a bioprocessor, an analog-to-digital converter for biomedical images, an optoelectronic module for image processing, an optoelectronic module for parallel recording and storage of biomedical images, and a matrix screen display of biomedical images. The rated temporal characteristics of the blocks are defined by the particular triggering optoelectronic couple in the analog-to-digital converters and by the imaging time of the matrix screen. The element base for the hardware implementation of the developed matrix screen is integrated optoelectronic couples produced by selective epitaxy.
An echolocation model for the restoration of an acoustic image from a single-emission echo
NASA Astrophysics Data System (ADS)
Matsuo, Ikuo; Yano, Masafumi
2004-12-01
Bats can form a fine acoustic image of an object using frequency-modulated echolocation sound. The acoustic image is an impulse response, known as a reflected-intensity distribution, which is composed of amplitude and phase spectra over a range of frequencies. However, bats detect only the amplitude spectrum due to the low time resolution of their peripheral auditory system, and the frequency range of emission is restricted. It is therefore necessary to restore the acoustic image from limited information. The amplitude spectrum varies with changes in the configuration of the reflected-intensity distribution, while the phase spectrum varies with changes in both its configuration and location. Here, by introducing some reasonable constraints, a method is proposed for restoring an acoustic image from the echo. The configuration is extrapolated from the amplitude spectrum of the restricted frequency range by using the continuity condition of the amplitude spectrum at the minimum frequency of the emission and the minimum phase condition. The determination of the location requires extracting the amplitude spectra, which vary with its location. For this purpose, Gaussian chirplets with a carrier frequency compatible with bat emission sweep rates were used. The location is estimated from the temporal changes of the amplitude spectra.
Resolution Enhancement In Ultrasonic Imaging By A Time-Varying Filter
NASA Astrophysics Data System (ADS)
Ching, N. H.; Rosenfeld, D.; Braun, M.
1987-09-01
The study reported here investigates the use of a time-varying filter to compensate for the spreading of ultrasonic pulses due to the frequency dependence of attenuation by tissues. The effect of this pulse spreading is to degrade progressively the axial resolution with increasing depth. The form of compensation required to correct for this effect is impossible to realize exactly. A novel time-varying filter utilizing a bank of bandpass filters is proposed as a realizable approximation of the required compensation. The performance of this filter is evaluated by means of a computer simulation. The limits of its application are discussed. Apart from improving the axial resolution, and hence the accuracy of axial measurements, the compensating filter could be used in implementing tissue characterization algorithms based on attenuation data.
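The bank-of-bandpass-filters idea described above can be sketched as follows: a few fixed bandpass filters are applied once, and their outputs are blended with time-varying (depth-dependent) weights so that the effective passband tracks the expected, attenuation-downshifted echo spectrum. All parameters and function names here are illustrative, not those of the reported simulation:

```python
import numpy as np

def gaussian_bandpass(signal, fs, f0, bw):
    """Zero-phase Gaussian bandpass via the FFT (one filter of the bank)."""
    f = np.fft.rfftfreq(len(signal), 1.0 / fs)
    h = np.exp(-0.5 * ((f - f0) / bw) ** 2)
    return np.fft.irfft(np.fft.rfft(signal) * h, len(signal))

def time_varying_filter(signal, fs, centers, bw, f_of_t):
    """Blend a bank of fixed bandpass outputs with time-varying weights
    so the effective passband follows the expected echo frequency
    f_of_t (a function of time/depth), approximating a time-varying
    filter with realizable time-invariant components."""
    outputs = np.stack([gaussian_bandpass(signal, fs, f0, bw)
                        for f0 in centers])
    t = np.arange(len(signal)) / fs
    target = f_of_t(t)                        # expected centre freq vs depth
    w = np.stack([np.exp(-0.5 * ((target - f0) / bw) ** 2)
                  for f0 in centers])         # (n_filters, n_samples)
    w = w / (w.sum(axis=0) + 1e-12)
    return np.sum(w * outputs, axis=0)
```

In practice `f_of_t` would decrease with depth to model the frequency-dependent attenuation the filter compensates for.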
Identification of Time-Varying Pilot Control Behavior in Multi-Axis Control Tasks
NASA Technical Reports Server (NTRS)
Zaal, Peter M. T.; Sweet, Barbara T.
2012-01-01
Recent developments in fly-by-wire control architectures for rotorcraft have introduced new interest in the identification of time-varying pilot control behavior in multi-axis control tasks. In this paper a maximum likelihood estimation method is used to estimate the parameters of a pilot model with time-dependent sigmoid functions to characterize time-varying human control behavior. An experiment was performed by 9 general aviation pilots, who performed a simultaneous roll and pitch control task with time-varying aircraft dynamics. In 8 different conditions, the axis containing the time-varying dynamics and the growth factor of the dynamics were varied, allowing for an analysis of the performance of the estimation method when estimating time-dependent parameter functions. In addition, a detailed analysis of pilots' adaptation to the time-varying aircraft dynamics in both the roll and pitch axes could be performed. Pilot control behavior in both axes was significantly affected by the time-varying aircraft dynamics in roll and pitch, and by the growth factor. The main effect was found in the axis that contained the time-varying dynamics. However, pilot control behavior also changed over time in the axis not containing the time-varying aircraft dynamics. This indicates that some cross coupling exists in the perception and control processes between the roll and pitch axes.
Ku, Yixuan; Zhao, Di; Bodner, Mark; Zhou, Yong-Di
2015-08-01
In the present study, causal roles of both the primary somatosensory cortex (SI) and the posterior parietal cortex (PPC) were investigated in a tactile unimodal working memory (WM) task. Individual magnetic resonance imaging-based single-pulse transcranial magnetic stimulation (spTMS) was applied, respectively, to the left SI (ipsilateral to tactile stimuli), right SI (contralateral to tactile stimuli) and right PPC (contralateral to tactile stimuli), while human participants were performing a tactile-tactile unimodal delayed matching-to-sample task. The time points of spTMS were 300, 600 and 900 ms after the onset of the tactile sample stimulus (duration: 200 ms). Compared with ipsilateral SI, application of spTMS over either contralateral SI or contralateral PPC at those time points significantly impaired the accuracy of task performance. Meanwhile, the deterioration in accuracy did not vary with the stimulating time points. Together, these results indicate that the tactile information is processed cooperatively by SI and PPC in the same hemisphere, starting from the early delay of the tactile unimodal WM task. This pattern of processing of tactile information is different from the pattern in tactile-visual cross-modal WM. In a tactile-visual cross-modal WM task, SI and PPC contribute to the processing sequentially, suggesting a process of sensory information transfer during the early delay between modalities. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Time-Varying Networks of Inter-Ictal Discharging Reveal Epileptogenic Zone.
Zhang, Luyan; Liang, Yi; Li, Fali; Sun, Hongbin; Peng, Wenjing; Du, Peishan; Si, Yajing; Song, Limeng; Yu, Liang; Xu, Peng
2017-01-01
Synchronous neuronal discharging may cause an epileptic seizure. Currently, most studies investigating the mechanism of epilepsy are based on EEGs or functional magnetic resonance imaging (fMRI) recorded during the ictal discharging or the resting state, and few studies have probed into the dynamic patterns during the inter-ictal discharging, which are much easier to record in clinical applications. Here, we propose a time-varying network analysis based on the adaptive directed transfer function to uncover the dynamic brain network patterns during the inter-ictal discharging. In addition, an algorithm based on the time-varying outflow of information derived from the network analysis is developed to detect the epileptogenic zone. The analysis revealed the time-varying network patterns during different stages of inter-ictal discharging; the epileptogenic zone was activated prior to the discharge onset and then worked as the source propagating the activity to other brain regions. Consistency between the epileptogenic zones detected by our proposed approach and the actual epileptogenic zones showed that time-varying network analysis can not only reveal the underlying neural mechanism of epilepsy, but also function as a useful tool for detecting the epileptogenic zone based on EEGs recorded during the inter-ictal discharging.
Applications of Fractal Analytical Techniques in the Estimation of Operational Scale
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Quattrochi, Dale A.
2000-01-01
The observational scale and the resolution of remotely sensed imagery are essential considerations in the interpretation process. Many atmospheric, hydrologic, and other natural and human-influenced spatial phenomena are inherently scale dependent and are governed by different physical processes at different spatial domains. This spatial and operational heterogeneity constrains the ability to compare interpretations of phenomena and processes observed in higher spatial resolution imagery to similar interpretations obtained from lower resolution imagery. This is a particularly acute problem, since long-term global change investigations will require high spatial resolution Earth Observing System (EOS), Landsat 7, or commercial satellite data to be combined with lower resolution imagery from older sensors such as Landsat TM and MSS. Fractal analysis is a useful technique for identifying the effects of scale changes on remotely sensed imagery. The fractal dimension of an image is a non-integer value between two and three which indicates the degree of complexity in the texture and shapes depicted in the image. A true fractal surface exhibits self-similarity, a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution, and the slope of the fractal dimension-resolution relationship would be zero. Most geographical phenomena, however, are not self-similar at all scales, but they can be modeled by a stochastic fractal in which the scaling properties of the image exhibit patterns that can be described by statistics such as area-perimeter ratios and autocovariances.
Stochastic fractal sets relax the self-similarity assumption and measure many scales and resolutions to represent the varying form of a phenomenon as the pixel size is increased in a convolution process. We have observed that for images of homogeneous land covers, the fractal dimension varies linearly with changes in resolution or pixel size over the range of past, current, and planned space-borne sensors. This relationship differs significantly in images of agricultural, urban, and forest land covers, with urban areas retaining the same level of complexity, forested areas growing smoother, and agricultural areas growing more complex as small pixels are aggregated into larger, mixed pixels. Images of scenes having a mixture of land covers have fractal dimensions that exhibit a non-linear, complex relationship to pixel size. Measuring the fractal dimension of a difference image derived from two images of the same area obtained on different dates showed that the fractal dimension increased steadily, then exhibited a sharp decrease at increasing levels of pixel aggregation. This breakpoint of the fractal dimension/resolution plot is related to the spatial domain or operational scale of the phenomenon exhibiting the predominant visible difference between the two images (in this case, mountain snow cover). The degree to which an image departs from a theoretical ideal fractal surface provides clues as to how much information is altered or lost in the processes of rescaling and rectification. The measured fractal dimension of complex, composite land covers such as urban areas also provides a useful textural index that can assist image classification of complex scenes.
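As a concrete illustration of measuring a fractal dimension across scales (the study itself uses estimators based on area-perimeter ratios and related statistics, not this one), a minimal box-counting estimate for a binary image is:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary image by box
    counting: count occupied s-by-s boxes at each box size s, then
    fit the slope of log N(s) versus log(1/s)."""
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        h, w = mask.shape
        # Trim so the image tiles exactly into s-by-s boxes.
        m = mask[:h - h % s, :w - w % s]
        boxes = m.reshape(m.shape[0] // s, s, m.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(counts), 1)
    return slope
```

A completely filled region gives a dimension of 2; plotting the estimate against increasing aggregation (pixel size) is the kind of dimension-resolution relationship discussed above.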
Realization of integral 3-dimensional image using fabricated tunable liquid lens array
NASA Astrophysics Data System (ADS)
Lee, Muyoung; Kim, Junoh; Kim, Cheol Joong; Lee, Jin Su; Won, Yong Hyub
2015-03-01
Electrowetting has been widely studied for various optical applications such as optical switches, sensors, prisms, and displays. In this study, a vari-focal liquid lens array based on the electrowetting principle is developed to construct integral 3-dimensional imaging. The electrowetting principle, which changes the surface tension by applying a voltage, has several advantages for active optical devices, such as fast response time, low electrical consumption, and no mechanical moving parts. Two immiscible liquids, water and oil, are used to form each lens. By applying a voltage to the water, the focal length of the lens can be tuned by changing the contact angle of the water. The fabricated electrowetting vari-focal liquid lens array consists of 1 mm diameter spherical lenses with a 1.6 mm distance between each lens. The number of lenses on the panel is 23x23, and the focal length of the lens array is tuned simultaneously from -125 to 110 diopters depending on the applied voltage. The fabricated lens array is implemented in an integral 3-dimensional imaging system. A 3D object is reconstructed by the fabricated liquid lens array with 23x23 elemental images generated by 3D max tools when the liquid lens array is tuned to the convex state. From this vari-focal liquid lens array integral imaging system, we expect that depth-enhanced integral imaging can be realized in the near future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, H; Lee, J; Pua, R
2014-06-01
Purpose: The purpose of our study is to reduce imaging radiation dose while maintaining image quality in the region of interest (ROI) in X-ray fluoroscopy. A low-dose real-time ROI fluoroscopic imaging technique which includes graphics-processing-unit- (GPU-) accelerated image processing for brightness compensation and noise filtering was developed in this study. Methods: In our ROI fluoroscopic imaging, a copper filter is placed in front of the X-ray tube. The filter contains a round aperture to reduce the radiation dose outside of the aperture. To equalize the brightness difference between the inner and outer ROI regions, brightness compensation was performed by use of a simple weighting method that applies selectively to the inner ROI, the outer ROI, and the boundary zone. Bilateral filtering was applied to the images to reduce the relatively high noise in the outer ROI images. To speed up the calculation of our technique for real-time application, GPU-acceleration was applied to the image processing algorithm. We performed a dosimetric measurement using an ion-chamber dosimeter to evaluate the amount of radiation dose reduction. The reduction of calculation time compared to a CPU-only computation was also measured, and an assessment of image quality in terms of image noise and spatial resolution was conducted. Results: More than 80% of the dose was reduced by use of the ROI filter. The reduction rate depended on the thickness of the filter and the size of the ROI aperture. The image noise outside the ROI was remarkably reduced by the bilateral filtering technique. The computation time for processing each frame image was reduced from 3.43 seconds with a single CPU to 9.85 milliseconds with GPU-acceleration. Conclusion: The proposed technique for X-ray fluoroscopy can substantially reduce imaging radiation dose to the patient while maintaining image quality, particularly in the ROI, in real-time.
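The selective-weighting brightness compensation described above can be sketched as follows: the copper-attenuated outer region is scaled so its mean brightness matches the inner ROI, with a linear blend across the boundary zone (a simplified CPU illustration of the idea, not the GPU implementation; all names and the blend shape are ours):

```python
import numpy as np

def compensate_roi(image, center, radius, boundary=5):
    """Equalize brightness between the unattenuated inner ROI and the
    copper-filtered outer region by scaling outer pixels, with a
    linear blend across a thin boundary zone."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    inner, outer = r <= radius, r > radius + boundary
    # Gain that brings the dimmer outer region up to the inner mean.
    gain = image[inner].mean() / (image[outer].mean() + 1e-12)
    # Weight ramps from 1 (inner, unchanged) to `gain` (outer).
    t = np.clip((r - radius) / boundary, 0.0, 1.0)
    return image * (1.0 + t * (gain - 1.0))
```

A noise filter (e.g. bilateral) would then be applied to the amplified outer region, where quantum noise is relatively high.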
Characterization of selected elementary motion detector cells to image primitives.
Benson, Leslie A; Barrett, Steven F; Wright, Cameron H G
2008-01-01
Developing a visual sensing system complete with motion processing hardware and software would have many applications to current technology. It could be mounted on many autonomous vehicles to provide information about the navigational environment, as well as obstacle avoidance features. Incorporating the motion processing capabilities into the sensor requires a new approach to the algorithm implementation. This research, like that of many others, has turned to nature for inspiration. Elementary motion detector (EMD) cells are involved in a biological preprocessing network that provides information to the motion processing lobes of the housefly Musca domestica. This paper describes the response of the photoreceptor inputs to the EMDs. The inputs to the EMD components are tested as they are stimulated with varying image primitives. This is the first of many steps in characterizing the EMD response to image primitives.
Semiautomated Segmentation of Polycystic Kidneys in T2-Weighted MR Images.
Kline, Timothy L; Edwards, Marie E; Korfiatis, Panagiotis; Akkus, Zeynettin; Torres, Vicente E; Erickson, Bradley J
2016-09-01
The objective of the present study is to develop and validate a fast, accurate, and reproducible method that will increase and improve institutional measurement of total kidney volume and thereby avoid the higher costs, increased operator processing time, and inherent subjectivity associated with manual contour tracing. We developed a semiautomated segmentation approach, known as the minimal interaction rapid organ segmentation (MIROS) method, which results in human interaction during measurement of total kidney volume on MR images being reduced to a few minutes. This software tool automatically steps through slices and requires rough definition of kidney boundaries supplied by the user. The approach was verified on T2-weighted MR images of 40 patients with autosomal dominant polycystic kidney disease of varying degrees of severity. The MIROS approach required less than 5 minutes of user interaction in all cases. When compared with the ground-truth reference standard, MIROS showed no significant bias and had low variability (mean ± 2 SD, 0.19% ± 6.96%). The MIROS method will greatly facilitate future research studies in which accurate and reproducible measurements of cystic organ volumes are needed.
An Intelligent Systems Approach to Automated Object Recognition: A Preliminary Study
Maddox, Brian G.; Swadley, Casey L.
2002-01-01
Attempts at fully automated object recognition systems have met with varying levels of success over the years. However, none of the systems have achieved high enough accuracy rates to be run unattended. One of the reasons for this may be that they are designed from the computer's point of view and rely mainly on image-processing methods. A better solution to this problem may be to make use of modern advances in computational intelligence and distributed processing to try to mimic how the human brain is thought to recognize objects. As humans combine cognitive processes with detection techniques, such a system would combine traditional image-processing techniques with computer-based intelligence to determine the identity of various objects in a scene.
Takayanagi, Isao; Yoshimura, Norio; Mori, Kazuya; Matsuo, Shinichiro; Tanaka, Shunsuke; Abe, Hirofumi; Yasuda, Naoto; Ishikawa, Kenichiro; Okura, Shunsuke; Ohsawa, Shinji; Otaka, Toshinori
2018-01-12
To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure high dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke-. Readout noise under the highest pixel gain condition is 1 e- with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single exposure high dynamic range (SEHDR) signal is obtained. Using this technology, a 1/2.7", 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach.
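The dual-gain merge at the heart of the SEHDR scheme can be sketched as follows (a deliberately simplified hard-switch model; the actual on-chip linearization and any blending near the switch point are not described here and would differ):

```python
import numpy as np

def merge_hdr(high, low, gain_ratio, sat):
    """Single-exposure HDR merge of two readouts of the SAME exposure:
    the high-gain signal gives low noise in the shadows; where it
    saturates, substitute the low-gain signal multiplied by the gain
    ratio so both halves lie on one linear scale."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    return np.where(high < sat, high, low * gain_ratio)
```

Because both readouts come from one exposure, moving objects cannot produce the ghosting that multi-exposure HDR is prone to.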
NASA Astrophysics Data System (ADS)
Leijenaar, Ralph T. H.; Nalbantov, Georgi; Carvalho, Sara; van Elmpt, Wouter J. C.; Troost, Esther G. C.; Boellaard, Ronald; Aerts, Hugo J. W. L.; Gillies, Robert J.; Lambin, Philippe
2015-08-01
FDG-PET-derived textural features describing intra-tumor heterogeneity are increasingly investigated as imaging biomarkers. As part of the process of quantifying heterogeneity, image intensities (SUVs) are typically resampled into a reduced number of discrete bins. We focused on the implications of the manner in which this discretization is implemented. Two methods were evaluated: (1) RD, dividing the SUV range into D equally spaced bins, where the intensity resolution (i.e. bin size) varies per image; and (2) RB, maintaining a constant intensity resolution B. Clinical feasibility was assessed on 35 lung cancer patients, imaged before and in the second week of radiotherapy. Forty-four textural features were determined for different D and B at both imaging time points. Feature values depended on the intensity resolution, and of the two assessed methods, RB was shown to allow for a meaningful inter- and intra-patient comparison of feature values. Overall, patients ranked differently according to feature values (which was used as a surrogate for textural feature interpretation) between the two discretization methods. Our study shows that the manner of SUV discretization has a crucial effect on the resulting textural features and the interpretation thereof, emphasizing the importance of standardized methodology in tumor texture analysis.
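The two discretization schemes compared above can be sketched as follows (illustrative only; the exact bin-edge conventions used in the study may differ):

```python
import numpy as np

def discretize_rd(suv, d):
    """R_D: divide THIS image's SUV range into d equal bins, so the
    bin size (intensity resolution) differs from image to image."""
    suv = np.asarray(suv, dtype=float)
    edges = np.linspace(suv.min(), suv.max(), d + 1)
    return np.digitize(suv, edges[1:-1]) + 1  # bin labels 1..d

def discretize_rb(suv, b):
    """R_B: fixed bin width b (in SUV units), so the same SUV always
    falls in the same bin across images and time points."""
    suv = np.asarray(suv, dtype=float)
    return np.floor(suv / b).astype(int) + 1
```

With R_D, a change in an image's SUV maximum changes every bin boundary, which is why feature values become hard to compare between patients or time points; R_B keeps bin meaning fixed.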
Bjorgan, Asgeir; Randeberg, Lise Lyngsnes
2015-01-01
Processing line-by-line and in real-time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, can be sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed. Incrementally-updated statistics enables the algorithm to denoise the image line-by-line. The denoising performance has been compared to conventional MNF and found to be equal. With a satisfying denoising performance and real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real-time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code can be found at http://www.github.com/ntnu-bioopt/mnf. This includes an implementation of conventional MNF denoising. PMID:25654717
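The incrementally-updated statistics at the core of such a line-by-line algorithm can be sketched as follows (a simplified illustration: noise is estimated here from neighboring-pixel differences, a common shift-difference assumption that may differ from the paper's estimator, and the MNF transform step itself is omitted):

```python
import numpy as np

class RunningMNFStats:
    """Incrementally accumulate the signal and noise covariance
    estimates needed by MNF for line-scan hyperspectral data, so
    denoising can proceed line-by-line instead of waiting for the
    full image."""
    def __init__(self, n_bands):
        self.n = 0
        self.sum = np.zeros(n_bands)
        self.outer = np.zeros((n_bands, n_bands))
        self.n_noise = 0
        self.noise_outer = np.zeros((n_bands, n_bands))

    def update(self, line):
        # line: (n_pixels, n_bands) spectra from one scan line.
        line = np.asarray(line, dtype=float)
        self.n += len(line)
        self.sum += line.sum(axis=0)
        self.outer += line.T @ line
        d = np.diff(line, axis=0) / np.sqrt(2.0)  # shift-difference noise
        self.n_noise += len(d)
        self.noise_outer += d.T @ d

    def covariances(self):
        """Current (biased) image covariance and noise covariance."""
        mean = self.sum / self.n
        cov = self.outer / self.n - np.outer(mean, mean)
        return cov, self.noise_outer / self.n_noise
```

After each update the two covariances feed the usual MNF eigendecomposition, so denoised lines can be emitted as soon as the statistics have stabilized.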
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Tze Yee
Purpose: For postimplant dosimetric assessment, computed tomography (CT) is commonly used to identify prostate brachytherapy seeds, at the expense of accurate anatomical contouring. Magnetic resonance imaging (MRI) is superior to CT for anatomical delineation, but identification of the negative-contrast seeds is challenging. Positive-contrast MRI markers were proposed to replace spacers to assist seed localization on MRI images. Visualization of these markers under varying scan parameters was investigated. Methods: To simulate a clinical scenario, a prostate phantom was implanted with 66 markers and 86 seeds, and imaged on a 3.0T MRI scanner using a 3D fast radiofrequency-spoiled gradient recalled echo acquisition with various combinations of scan parameters. Scan parameters, including flip angle, number of excitations, bandwidth, field-of-view, slice thickness, and encoding steps were systematically varied to study their effects on signal, noise, scan time, image resolution, and artifacts. Results: The effects of pulse sequence parameter selection on the marker signal strength and image noise were characterized. The authors also examined the tradeoff between signal-to-noise ratio, scan time, and image artifacts, such as the wraparound artifact, susceptibility artifact, chemical shift artifact, and partial volume averaging artifact. Given reasonable scan time and manageable artifacts, the authors recommended scan parameter combinations that can provide robust visualization of the MRI markers. Conclusions: The recommended MRI pulse sequence protocol allows for consistent visualization of the markers to assist seed localization, potentially enabling MRI-only prostate postimplant dosimetry.
Data Processing of LAPAN-A3 Thermal Imager
NASA Astrophysics Data System (ADS)
Hartono, R.; Hakim, P. R.; Syafrudin, AH
2018-04-01
As an experimental microsatellite, the LAPAN-A3/IPB satellite carries an experimental thermal imager, a microbolometer, to observe earth surface temperature for horizon observation. The imager data is transmitted from satellite to ground station by S-band analog video transmission, and then processed by the ground station into a sequence of 8-bit enhanced and contrasted images. Data processing of the LAPAN-A3/IPB thermal imager is more difficult than that of a visual digital camera, especially for mosaic and classification purposes. This research describes a simple mosaic and classification process for the LAPAN-A3/IPB thermal imager based on several videos produced by the imager. The results show that stitching using Adobe Photoshop produces excellent results but can only process a small area, while a manual approach using the ImageJ software can produce good results but requires considerable manual work and time. The mosaic process using image cross-correlation in Matlab offers an alternative solution, which can process a significantly bigger area in a significantly shorter processing time. However, the quality produced is not as good as the mosaic images of the other two methods. The simple classification process that has been performed shows that the thermal images can distinguish three distinct objects, i.e. clouds, sea, and land surface. However, the algorithm fails to classify other objects, which might be caused by distortions in the images. All of these results can be used as a reference for the development of the thermal imager on the LAPAN-A4 satellite.
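The cross-correlation step of such a mosaicking pipeline can be sketched with an FFT-based offset estimator (a generic sketch, not the authors' Matlab code; it recovers only integer pixel shifts between overlapping frames):

```python
import numpy as np

def xcorr_offset(a, b):
    """Estimate the integer (row, col) shift t such that
    b == np.roll(a, t), via FFT-based cross-correlation."""
    A = np.fft.fft2(a - a.mean())
    B = np.fft.fft2(b - b.mean())
    # Cross-correlation peaks at the displacement of b relative to a.
    corr = np.fft.ifft2(np.conj(A) * B).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around indices to signed shifts.
    shift = [d if d <= s // 2 else d - s for d, s in zip(idx, corr.shape)]
    return tuple(shift)
```

Once the shift between successive frames is known, each frame can be pasted into the mosaic at its estimated offset.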
Kiefer, Gundolf; Lehmann, Helko; Weese, Jürgen
2006-04-01
Maximum intensity projections (MIPs) are an important visualization technique for angiographic data sets. Efficient data inspection requires frame rates of at least five frames per second at preserved image quality. Despite the advances in computer technology, this task remains a challenge. On the one hand, the sizes of computed tomography and magnetic resonance images are increasing rapidly. On the other hand, rendering algorithms do not automatically benefit from the advances in processor technology, especially for large data sets. This is due to the faster evolving processing power and the slower evolving memory access speed, which is bridged by hierarchical cache memory architectures. In this paper, we investigate memory access optimization methods and use them for generating MIPs on general-purpose central processing units (CPUs) and graphics processing units (GPUs), respectively. These methods can work on any level of the memory hierarchy, and we show that properly combined methods can optimize memory access on multiple levels of the hierarchy at the same time. We present performance measurements to compare different algorithm variants and illustrate the influence of the respective techniques. On current hardware, the efficient handling of the memory hierarchy for CPUs improves the rendering performance by a factor of 3 to 4. On GPUs, we observed that the effect is even larger, especially for large data sets. The methods can easily be adjusted to different hardware specifics, although their impact can vary considerably. They can also be used for other rendering techniques than MIPs, and their use for more general image processing task could be investigated in the future.
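The basic operation being optimized can be sketched as an axial MIP that walks the volume slice-by-slice in memory order, which keeps accesses sequential and cache-friendly; this illustrates the data-access idea only, not the paper's actual multi-level optimization methods:

```python
import numpy as np

def mip_axial(volume, axis=0):
    """Maximum intensity projection along one axis.  Iterating over
    whole slices (rather than casting one ray at a time) streams
    through memory sequentially, which is the cache-friendly access
    pattern that CPU-side MIP optimizations aim for."""
    vol = np.asarray(volume)
    out = vol.take(0, axis=axis).copy()
    for i in range(1, vol.shape[axis]):
        # Element-wise running maximum, updated in place.
        np.maximum(out, vol.take(i, axis=axis), out=out)
    return out
```

For a C-ordered volume, projecting along axis 0 touches memory strictly sequentially; projecting along the last axis does not, which is one reason traversal order and data layout matter so much for large data sets.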
Trajectory-based modeling of fluid transport in a medium with smoothly varying heterogeneity
Vasco, D. W.; Pride, Steven R.; Commer, Michael
2016-03-04
Using an asymptotic methodology, valid in the presence of smoothly varying heterogeneity and prescribed boundaries, we derive a trajectory-based solution for tracer transport. The analysis produces a Hamilton-Jacobi partial differential equation for the phase of the propagating tracer front. The trajectories follow from the characteristic equations that are equivalent to the Hamilton-Jacobi equation. The paths are determined by the fluid velocity field, the total porosity, and the dispersion tensor. Due to their dependence upon the local hydrodynamic dispersion, they differ from conventional streamlines. This difference is borne out in numerical calculations for both uniform and dipole flow fields. In an application to the computational X-ray imaging of a saline tracer test, we illustrate that the trajectories may serve as the basis for a form of tracer tomography. In particular, we use the onset time of a change in attenuation for each volume element of the X-ray image as a measure of the arrival time of the saline tracer. In conclusion, the arrival times are used to image the spatial variation of the effective hydraulic conductivity within the laboratory sample.
Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1984-01-01
Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.
An all-optronic synthetic aperture lidar
NASA Astrophysics Data System (ADS)
Turbide, Simon; Marchese, Linda; Terroux, Marc; Babin, François; Bergeron, Alain
2012-09-01
Synthetic Aperture Radar (SAR) is a mature technology that overcomes the diffraction limit of an imaging system's real aperture by taking advantage of the platform motion to coherently sample multiple sections of an aperture much larger than the physical one. Synthetic Aperture Lidar (SAL) is the extension of SAR to much shorter wavelengths (1.5 μm vs 5 cm). This new technology can offer higher resolution images in day or night time as well as in certain adverse conditions. It could be a powerful tool for Earth monitoring (ship detection, frontier surveillance, ocean monitoring) from aircraft, unattended aerial vehicle (UAV) or spatial platforms. A continuous flow of high-resolution images covering large areas would however produce a large amount of data involving a high cost in terms of post-processing computational time. This paper presents a laboratory demonstration of a SAL system complete with image reconstruction based on optronic processing. This differs from the more traditional digital approach by its real-time processing capability. The SAL system is discussed and images obtained from a non-metallic diffuse target at ranges up to 3 m are shown, these images being processed by a real-time optronic SAR processor originally designed to reconstruct SAR images from ENVISAT/ASAR data.
Real-time model-based vision system for object acquisition and tracking
NASA Technical Reports Server (NTRS)
Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd
1987-01-01
A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
Fission gas bubble identification using MATLAB's image processing toolbox
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collette, R.
Automated image processing routines have the potential to aid in the fuel performance evaluation process by eliminating bias in human judgment that may vary from person-to-person or sample-to-sample. This study presents several MATLAB based image analysis routines designed for fission gas void identification in post-irradiation examination of uranium molybdenum (U–Mo) monolithic-type plate fuels. Frequency domain filtration, enlisted as a pre-processing technique, can eliminate artifacts from the image without compromising the critical features of interest. This process is coupled with a bilateral filter, an edge-preserving noise removal technique aimed at preparing the image for optimal segmentation. Adaptive thresholding proved to be the most consistent gray-level feature segmentation technique for U–Mo fuel microstructures. The Sauvola adaptive threshold technique segments the image based on histogram weighting factors in stable contrast regions and local statistics in variable contrast regions. Once all processing is complete, the algorithm outputs the total fission gas void count, the mean void size, and the average porosity. The final results demonstrate an ability to extract fission gas void morphological data faster, more consistently, and at least as accurately as manual segmentation methods. - Highlights: •Automated image processing can aid in the fuel qualification process. •Routines are developed to characterize fission gas bubbles in irradiated U–Mo fuel. •Frequency domain filtration effectively eliminates FIB curtaining artifacts. •Adaptive thresholding proved to be the most accurate segmentation method. •The techniques established are ready to be applied to large scale data extraction testing.
NASA Astrophysics Data System (ADS)
Torkildsen, H. E.; Hovland, H.; Opsahl, T.; Haavardsholm, T. V.; Nicolas, S.; Skauli, T.
2014-06-01
In some applications of multi- or hyperspectral imaging, it is important to have a compact sensor. The most compact spectral imaging sensors are based on spectral filtering in the focal plane. For hyperspectral imaging, it has been proposed to use a "linearly variable" bandpass filter in the focal plane, combined with scanning of the field of view. As the image of a given object in the scene moves across the field of view, it is observed through parts of the filter with varying center wavelength, and a complete spectrum can be assembled. However, if the radiance received from the object varies with viewing angle, or with time, then the reconstructed spectrum will be distorted. We describe a camera design where this hyperspectral functionality is traded for multispectral imaging with better spectral integrity. Spectral distortion is minimized by using a patterned filter with 6 bands arranged close together, so that a scene object is seen by each spectral band in rapid succession and with minimal change in viewing angle. The set of 6 bands is repeated 4 times so that the spectral data can be checked for internal consistency. Still, the total extent of the filter in the scan direction is small. Therefore the remainder of the image sensor can be used for conventional imaging with potential for using motion tracking and 3D reconstruction to support the spectral imaging function. We show detailed characterization of the point spread function of the camera, demonstrating the importance of such characterization as a basis for image reconstruction. A simplified image reconstruction based on feature-based image coregistration is shown to yield reasonable results. Elimination of spectral artifacts due to scene motion is demonstrated.
Ge, Jiajia; Zhu, Banghe; Regalado, Steven; Godavarty, Anuradha
2008-01-01
Hand-held based optical imaging systems are a recent development towards diagnostic imaging of breast cancer. To date, all the hand-held based optical imagers are used to perform only surface mapping and target localization, but are not capable of demonstrating tomographic imaging. Herein, a novel hand-held probe based optical imager is developed towards three-dimensional (3-D) optical tomography studies. The unique features of this optical imager, which primarily consists of a hand-held probe and an intensified charge coupled device detector, are its ability to: (i) image large tissue areas (5×10 sq. cm) in a single scan; (ii) perform simultaneous multiple-point illumination and collection, thus reducing the overall imaging time; and (iii) adapt to varying tissue curvatures through a flexible probe head design. Experimental studies are performed in the frequency domain on large slab phantoms (∼650 ml) using fluorescence target(s) under perfect uptake (1:0) contrast ratios, and varying target depths (1–2 cm) and X-Y locations. The effect of implementing simultaneous over sequential multiple-point illumination towards 3-D tomography is experimentally demonstrated. The feasibility of 3-D optical tomography studies has been demonstrated for the first time using a hand-held based optical imager. Preliminary fluorescence-enhanced optical tomography studies are able to reconstruct 0.45 ml target(s) located at different target depths (1–2 cm). However, the depth recovery was limited as the actual target depth increased, since only reflectance measurements were acquired. Extensive tomography studies are currently carried out to determine the resolution and performance limits of the imager on flat and curved phantoms. PMID:18697559
NASA Astrophysics Data System (ADS)
Sargent, Steven D.; Greenman, Mark E.; Hansen, Scott M.
1998-11-01
The Spatial Infrared Imaging Telescope (SPIRIT III) is the primary sensor aboard the Midcourse Space Experiment (MSX), which was launched 24 April 1996. SPIRIT III included a Fourier transform spectrometer that collected terrestrial and celestial background phenomenology data for the Ballistic Missile Defense Organization (BMDO). This spectrometer used a helium-neon reference laser to measure the optical path difference (OPD) in the spectrometer and to command the analog-to-digital conversion of the infrared detector signals, thereby ensuring the data were sampled at precise increments of OPD. Spectrometer data must be sampled at accurate increments of OPD to optimize the spectral resolution and spectral position of the transformed spectra. Unfortunately, a failure in the power supply preregulator at the MSX spacecraft/SPIRIT III interface early in the mission forced the spectrometer to be operated without the reference laser until a failure investigation was completed. During this time data were collected in a backup mode that used an electronic clock to sample the data. These data were sampled evenly in time, and because the scan velocity varied, at nonuniform increments of OPD. The scan velocity profile depended on scan direction and scan length, and varied over time, greatly degrading the spectral resolution and spectral and radiometric accuracy of the measurements. The Convert software used to process the SPIRIT III data was modified to resample the clock-sampled data at even increments of OPD, using scan velocity profiles determined from ground and on-orbit data, greatly improving the quality of the clock-sampled data. This paper presents the resampling algorithm, the characterization of the scan velocity profiles, and the results of applying the resampling algorithm to on-orbit data.
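The resampling step described above reduces to integrating the scan-velocity profile to obtain OPD as a function of time, then interpolating the clock-sampled signal onto a uniform OPD grid. A minimal sketch under stated assumptions (linear interpolation, a known per-sample velocity profile; the actual Convert software uses direction- and scan-length-dependent profiles):

```python
import numpy as np

def resample_to_opd(signal, velocity, dt, d_opd):
    """Resample a clock-sampled interferogram onto uniform OPD
    increments using a known scan-velocity profile.

    signal  : detector samples taken at a fixed time step dt
    velocity: OPD rate at each sample (same length as signal)
    d_opd   : desired uniform OPD increment
    """
    # OPD at each time sample = running integral of velocity over time.
    opd = np.concatenate(([0.0], np.cumsum(velocity[:-1] * dt)))
    opd_uniform = np.arange(0.0, opd[-1], d_opd)
    # Linear interpolation of the signal onto the uniform OPD grid.
    return np.interp(opd_uniform, opd, signal)
```

With the samples back on an even OPD grid, the standard FFT recovers spectra with the intended spectral resolution and line positions.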
Real-time near-IR imaging of laser-ablation crater evolution in dental enamel
NASA Astrophysics Data System (ADS)
Darling, Cynthia L.; Fried, Daniel
2007-02-01
We have shown that the enamel of the tooth is almost completely transparent near 1310-nm in the near-infrared and that near-IR (NIR) imaging has considerable potential for the optical discrimination of sound and demineralized tissue and for observing defects in the interior of the tooth. Lasers are now routinely used for many applications in dentistry including the ablation of dental caries. The objective of this study was to test the hypothesis that real-time NIR imaging can be used to monitor laser-ablation under varying conditions to assess peripheral thermal and transient-stress induced damage and to measure the rate and efficiency of ablation. Moreover, NIR imaging may have considerable potential for monitoring the removal of demineralized areas of the tooth during cavity preparations. Sound human tooth sections of approximately 3-mm thickness were irradiated by a CO2 laser under varying conditions with and without a water spray. The incision area in the interior of each sample was imaged using a tungsten-halogen lamp with a band-pass filter centered at 1310-nm combined with an InGaAs focal plane array with a NIR zoom microscope in transillumination. Due to the high transparency of enamel at 1310-nm, laser-incisions were clearly visible to the dentin-enamel junction, and crack formation, dehydration and irreversible thermal changes were observed during ablation. This study showed that there is great potential for near-IR imaging to monitor laser-ablation events in real-time to: assess safe laser operating parameters by imaging thermal and stress-induced damage, elaborate the mechanisms involved in ablation such as dehydration, and monitor the removal of demineralized enamel.
Clinical image processing engine
NASA Astrophysics Data System (ADS)
Han, Wei; Yao, Jianhua; Chen, Jeremy; Summers, Ronald
2009-02-01
Our group provides clinical image processing services to various institutes at NIH. We develop or adapt image processing programs for a variety of applications. However, each program requires a human operator to select a specific set of images and execute the program, as well as store the results appropriately for later use. To improve efficiency, we design a parallelized clinical image processing engine (CIPE) to streamline and parallelize our service. The engine takes DICOM images from a PACS server, sorts and distributes the images to different applications, multithreads the execution of applications, and collects results from the applications. The engine consists of four modules: a listener, a router, a job manager and a data manager. A template filter in XML format is defined to specify the image specification for each application. A MySQL database is created to store and manage the incoming DICOM images and application results. The engine achieves two important goals: reduce the amount of time and manpower required to process medical images, and reduce the turnaround time for responding. We tested our engine on three different applications with 12 datasets and demonstrated that the engine improved the efficiency dramatically.
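The job-manager module of such an engine can be sketched as a thread pool that routes each incoming dataset to the application registered for its type. All names and the structure below are illustrative, not the actual CIPE API:

```python
import queue
import threading

def run_jobs(jobs, handlers, n_workers=4):
    """Minimal multithreaded job manager: each job is a (kind, payload)
    pair; a router-style lookup dispatches it to the matching handler,
    and workers execute handlers in parallel."""
    q = queue.Queue()
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            item = q.get()
            if item is None:        # sentinel: no more work
                break
            kind, payload = item
            out = handlers[kind](payload)   # run the matched application
            with lock:
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for job in jobs:
        q.put(job)
    for _ in threads:               # one sentinel per worker
        q.put(None)
    for t in threads:
        t.join()
    return results
```

In the engine described above, the payloads would be DICOM series pulled from the PACS listener and the results would be written back through the data manager.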
Integrated filter and detector array for spectral imaging
NASA Technical Reports Server (NTRS)
Labaw, Clayton C. (Inventor)
1992-01-01
A spectral imaging system having an integrated filter and photodetector array is disclosed. The filter has narrow transmission bands which vary in frequency along the photodetector array. The frequency variation of the transmission bands is matched to, and aligned with, the frequency variation of a received spectral image. The filter is deposited directly on the photodetector array by a low temperature deposition process. By depositing the filter directly on the photodetector array, permanent alignment is achieved for all temperatures, spectral crosstalk is substantially eliminated, and a high signal to noise ratio is achieved.
Diagnostic value of radiological imaging pre- and post-drainage of pleural effusions.
Corcoran, John P; Acton, Louise; Ahmed, Asia; Hallifax, Robert J; Psallidas, Ioannis; Wrightson, John M; Rahman, Najib M; Gleeson, Fergus V
2016-02-01
Patients with an unexplained pleural effusion often require urgent investigation. Clinical practice varies due to uncertainty as to whether an effusion should be drained completely before diagnostic imaging. We performed a retrospective study of patients undergoing medical thoracoscopy for an unexplained effusion. In 110 patients with paired (pre- and post-drainage) chest X-rays and 32 patients with paired computed tomography scans, post-drainage imaging did not provide additional information that would have influenced the clinical decision-making process. © 2015 Asian Pacific Society of Respirology.
Raster Metafile and Raster Metafile Translator
NASA Technical Reports Server (NTRS)
Taylor, Nancy L.; Everton, Eric L.; Randall, Donald P.; Gates, Raymond L.; Skeens, Kristi M.
1989-01-01
This document presents an effort undertaken at NASA Langley Research Center to design a generic raster image format and to develop tools for processing images prepared in this format. Both the Raster Metafile (RM) format and the Raster Metafile Translator (RMT) are addressed. The document is intended to serve a varied audience including: users wishing to display and manipulate raster image data, programmers responsible for either interfacing the RM format with other raster formats or for developing new RMT device drivers, and programmers charged with installing the software on a host platform.
Voyager image processing at the Image Processing Laboratory
NASA Astrophysics Data System (ADS)
Jepsen, P. L.; Mosher, J. A.; Yagi, G. M.; Avis, C. C.; Lorre, J. J.; Garneau, G. W.
1980-09-01
This paper discusses new digital processing techniques as applied to the Voyager Imaging Subsystem and devised to explore atmospheric dynamics, spectral variations, and the morphology of Jupiter, Saturn and their satellites. Radiometric and geometric decalibration processes, the modulation transfer function, and processes to determine and remove photometric properties of the atmosphere and surface of Jupiter and its satellites are examined. It is exhibited that selected images can be processed into 'approach at constant longitude' time lapse movies which are useful in observing atmospheric changes of Jupiter. Photographs are included to illustrate various image processing techniques.
Time resolved optical diagnostics of ZnO plasma plumes in air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gupta, Shyam L.; Singh, Ravi Pratap; Thareja, Raj K.
2013-10-15
We report the dynamical evolution of laser-ablated ZnO plasma plumes using interferometry, shadowgraphy, 2-D fast imaging, and optical emission spectroscopy in ambient air at atmospheric pressure. Recorded interferograms using a Nomarski interferometer and shadowgram images at various time delays show the presence of electrons and neutrals in the ablated plumes. The inference drawn from the sign change of fringe shifts is consistent with two-dimensional images of the plume and optical emission spectra at varying time delays with respect to the ablating pulse. Zinc oxide plasma plumes are created by focusing 1.06 μm radiation onto a ZnO target in air, and 532 nm is used as the probe beam.
Results From the New NIF Gated LEH imager
NASA Astrophysics Data System (ADS)
Chen, Hui; Amendt, P.; Barrios, M.; Bradley, D.; Casey, D.; Hinkel, D.; Berzak Hopkins, L.; Kilkenny, J.; Kritcher, A.; Landen, O.; Jones, O.; Ma, T.; Milovich, J.; Michel, P.; Moody, J.; Ralph, J.; Pak, A.; Palmer, N.; Schneider, M.
2016-10-01
A novel ns-gated Laser Entrance Hole (G-LEH) diagnostic has been successfully implemented at the National Ignition Facility (NIF). This diagnostic has successfully acquired images from various experimental campaigns, providing critical information for inertial confinement fusion experiments. The G-LEH diagnostic, which takes time-resolved gated images along a single line-of-sight, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories into the existing Static X-ray Imager diagnostic at NIF. It is capable of capturing two laser-entrance-hole images per shot on its 1024x448 pixel photo-detector array, with integration times as short as 2 ns per frame. The results that will be presented include the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross beam energy transfer, and plasma instability growth near the hohlraum wall. This work was performed under the auspices of the U.S. Department of Energy by LLNS, LLC, under Contract No. DE-AC52-07NA27344.
Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method
NASA Astrophysics Data System (ADS)
Kuai, Ken Z.; Tsai, Christina W.
2012-02-01
Sediment transport processes vary at a variety of time scales - from seconds, hours, days to months and years. Multiple time scales exist in the system of flow, sediment transport and bed elevation change processes. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired details. Recognizing the importance of different varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform method (HHT) to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses the Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data with different time scales, and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method for data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport.
It can be demonstrated by the HHT analysis that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with an available time scale formula in the literature.
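The HSA step described above extracts instantaneous frequency from each IMF via the analytic signal. A minimal sketch of that step (the EMD decomposition itself is omitted; the input is assumed to be a single well-behaved mode):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(imf, fs):
    """Instantaneous frequency of one intrinsic mode function (IMF),
    computed from the phase of the analytic signal.

    imf: one mode from an EMD decomposition, sampled at rate fs (Hz)
    Returns frequency in Hz at each inter-sample midpoint."""
    analytic = hilbert(imf)                 # analytic signal x + i*H[x]
    phase = np.unwrap(np.angle(analytic))   # continuous phase
    # Phase derivative (rad/sample) converted to Hz.
    return np.diff(phase) * fs / (2.0 * np.pi)
```

Plotting each IMF's instantaneous frequency against time produces the Hilbert spectrum, which is what reveals the succession of time-varying time scales that a single FFT cannot show.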
Waldinger, Robert J; Kensinger, Elizabeth A; Schulz, Marc S
2011-09-01
This study examines whether differences in late-life well-being are linked to how older adults encode emotionally valenced information. Using fMRI with 39 older adults varying in life satisfaction, we examined how viewing positive and negative images would affect activation and connectivity of an emotion-processing network. Participants engaged most regions within this network more robustly for positive than for negative images, but within the PFC this effect was moderated by life satisfaction, with individuals higher in satisfaction showing lower levels of activity during the processing of positive images. Participants high in satisfaction showed stronger correlations among network regions-particularly between the amygdala and other emotion processing regions-when viewing positive, as compared with negative, images. Participants low in satisfaction showed no valence effect. Findings suggest that late-life satisfaction is linked with how emotion-processing regions are engaged and connected during processing of valenced information. This first demonstration of a link between neural recruitment and late-life well-being suggests that differences in neural network activation and connectivity may account for the preferential encoding of positive information seen in some older adults.
Zikmund, T; Kvasnica, L; Týč, M; Křížová, A; Colláková, J; Chmelík, R
2014-11-01
Transmitted-light holographic microscopy is used in particular for quantitative phase imaging of transparent microscopic objects such as living cells. Study of a cell is based on extracting dynamic data on cell behaviour from a time-lapse sequence of phase images. However, the phase images are affected by phase aberrations that make the analysis particularly difficult, because the phase deformation is prone to change during long-term experiments. Here, we present a novel algorithm for sequential processing of phase images of living cells in a time-lapse sequence. The algorithm compensates for the deformation of a phase image using weighted least-squares surface fitting, and it identifies and segments the individual cells in the phase image. All these procedures are performed automatically and applied immediately after every single phase image is obtained. This property of the algorithm is important for real-time quantitative phase imaging of cells and for instantaneous control of the course of the experiment by playback of the recorded sequence up to the actual time. Such operator intervention is a forerunner of process automation derived from image analysis. The efficiency of the proposed algorithm is demonstrated on images of rat fibrosarcoma cells acquired with an off-axis holographic microscope. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
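A weighted least-squares surface fit of the kind used for the phase-deformation compensation can be sketched as follows. This is a simplified illustration, not the published algorithm; the polynomial order and the weighting scheme are assumptions:

```python
import numpy as np

def compensate_phase(phase, weights, order=2):
    """Fit a weighted least-squares polynomial surface to a phase image and subtract it.

    phase:   2D phase image (radians)
    weights: 2D per-pixel weights (background pixels weighted high, cells low)
    """
    h, w = phase.shape
    yy, xx = np.mgrid[0:h, 0:w]
    x = xx.ravel() / w
    y = yy.ravel() / h
    # polynomial design matrix with all terms of total degree <= order
    terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.stack([x**i * y**j for i, j in terms], axis=1)
    wts = weights.ravel()
    coef, *_ = np.linalg.lstsq(A * wts[:, None], phase.ravel() * wts, rcond=None)
    surface = (A @ coef).reshape(h, w)
    return phase - surface
```

In practice the weights would down-weight pixels covered by cells (e.g. from the previous frame's segmentation), so that only the background drives the fit.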
CARS module for multimodal microscopy
NASA Astrophysics Data System (ADS)
Zadoyan, Ruben; Baldacchini, Tommaso; Carter, John; Kuo, Chun-Hung; Ocepek, David
2011-03-01
We describe a stand-alone CARS module that allows a two-photon microscope to be upgraded with a CARS modality. The Stokes beam is generated in a commercially available photonic crystal fiber (PCF) using a fraction of the power of the femtosecond excitation laser. The output of the fiber is optimized for broadband CARS at Stokes shifts in the 2900 cm-1 region. The spectral resolution of the CARS signal is 50 cm-1, achieved by introducing a bandpass filter in the pump beam. The timing between the pump and Stokes pulses is preset inside the module and can be varied. We demonstrate the utility of the device with examples of second-harmonic, two-photon fluorescence and CARS images of several biological and non-biological samples. We also present results of studies in which we used the CARS modality to monitor in real time the fabrication of microstructures by two-photon polymerization.
Hakala, Teemu; Markelin, Lauri; Honkavaara, Eija; Scott, Barry; Theocharous, Theo; Nevalainen, Olli; Näsi, Roope; Suomalainen, Juha; Viljanen, Niko; Greenwell, Claire; Fox, Nigel
2018-05-03
Drone-based remote sensing has evolved rapidly in recent years. Miniaturized hyperspectral imaging sensors are becoming more common, as they provide more abundant information about the object than traditional cameras. Reflectance is a physically defined object property and is therefore often the preferred output of remote sensing data capture for use in further processing. Absolute calibration of the sensor makes physical modelling of the imaging process possible and enables efficient procedures for reflectance correction. Our objective is to develop a method for direct reflectance measurement in drone-based remote sensing, based on an imaging spectrometer and an irradiance spectrometer. This approach is highly attractive for many practical applications because it does not require in situ reflectance panels for converting sensor radiance to ground reflectance factors. We performed SI-traceable spectral and radiance calibration of a tuneable Fabry-Pérot interferometer based (FPI) hyperspectral camera at the National Physical Laboratory (NPL, Teddington, UK). The camera represents novel technology in that it collects 2D-format hyperspectral image cubes using a time-sequential spectral scanning principle. The radiance accuracy of the different channels was within ±4% when evaluated with independent test data, and the linearity of the camera response was on average 0.9994. The spectral response calibration showed side peaks on several channels due to the multiple orders of interference of the FPI. The drone-based direct reflectance measurement system showed promising results with imagery collected over Wytham Forest (Oxford, UK).
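The direct reflectance output combines the calibrated at-sensor radiance L with the simultaneously measured downwelling irradiance E; under a Lambertian assumption the reflectance factor is R = πL/E. A minimal sketch (the band-wise array handling here is an assumption, not the authors' processing chain):

```python
import numpy as np

def reflectance_factor(radiance, irradiance):
    """R = pi * L / E per spectral band, assuming Lambertian reflection.

    radiance:   calibrated at-sensor radiance, shape (..., n_bands)
    irradiance: downwelling spectral irradiance, shape (n_bands,)
    """
    radiance = np.asarray(radiance, dtype=float)
    irradiance = np.asarray(irradiance, dtype=float)
    return np.pi * radiance / irradiance      # broadcasts over pixels
```

This is exactly the step that an in situ reference panel would otherwise provide: the panel's known reflectance and measured radiance stand in for the irradiance term.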
Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.
Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad
2017-01-01
The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., the superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over the two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass-filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects at each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that different spatial frequency information had different effects on object categorization at each level. In the absence of high-frequency information, subordinate- and basic-level categorization are performed less accurately, while superordinate-level categorization is performed well. This means that low-frequency information is sufficient for the superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high-frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid a ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect.
The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).
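The bandpass filtering of stimulus images by spatial frequency band, as used in the experiments above, can be sketched with a simple FFT-domain mask. This is an illustrative version with a hard-edged radial mask; the exact filter shape used in the study is not specified here:

```python
import numpy as np

def bandpass_filter(img, low, high):
    """Keep spatial frequencies with radius in [low, high) cycles per image."""
    h, w = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2)   # radial frequency, cycles/image
    mask = (r >= low) & (r < high)
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

A low-pass stimulus corresponds to `low = 0` with a small `high`; a high-pass stimulus keeps only large radii.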
Radiometric Correction of Multitemporal Hyperspectral Uas Image Mosaics of Seedling Stands
NASA Astrophysics Data System (ADS)
Markelin, L.; Honkavaara, E.; Näsi, R.; Viljanen, N.; Rosnell, T.; Hakala, T.; Vastaranta, M.; Koivisto, T.; Holopainen, M.
2017-10-01
Novel miniaturized multi- and hyperspectral imaging sensors on board unmanned aerial vehicles have recently shown great potential in various environmental monitoring and measuring tasks, such as precision agriculture and forest management. These systems can be used to collect dense 3D point clouds and spectral information over small areas such as single forest stands or sample plots. Accurate radiometric processing and atmospheric correction are required when data sets from different dates and sensors, collected under varying illumination conditions, are combined. The performance of a novel radiometric block adjustment method, developed at the Finnish Geospatial Research Institute, is evaluated on a multitemporal hyperspectral data set of seedling stands collected during spring and summer 2016. Illumination conditions during the campaigns varied from bright to overcast. We use two different methods to produce homogeneous image mosaics and hyperspectral point clouds: image-wise relative correction, and image-wise relative correction with BRDF. The radiometric datasets are converted to reflectance using reference panels, and changes in the reflectance spectra are analysed. The tested methods improved image mosaic homogeneity by 5 % to 25 %. The results show that the evaluated method can produce reflectance mosaics and reflectance spectra whose shape is consistent between different areas and dates.
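The idea behind image-wise relative correction can be illustrated with a toy version: solve one multiplicative correction per image so that radiometric tie points observed in overlapping images agree. This is a hypothetical simplification of the block adjustment (the real method also models BRDF and other terms):

```python
import numpy as np

def relative_gains(obs, n_iter=20):
    """obs[i, j]: radiance of tie point i observed in image j (NaN if unseen).
    Returns per-image multiplicative corrections a_j, with image 0 as reference."""
    n_pts, n_img = obs.shape
    a = np.ones(n_img)
    for _ in range(n_iter):
        corrected = obs * a                         # apply current gains
        ref = np.nanmean(corrected, axis=1)         # consensus tie-point values
        for j in range(n_img):
            m = ~np.isnan(obs[:, j])
            a[j] = np.sum(ref[m] * obs[m, j]) / np.sum(obs[m, j] ** 2)
        a /= a[0]                                   # fix gauge: a_0 = 1
    return a
```

With the gains solved, each image is scaled before mosaicking, which is what drives the 5-25 % homogeneity improvement reported above.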
Real-time blood flow visualization using the graphics processing unit
NASA Astrophysics Data System (ADS)
Yang, Owen; Cuccia, David; Choi, Bernard
2011-01-01
Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and embedded in a LabVIEW Virtual Instrument (VI) interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.
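The per-pixel computation that a GPU kernel of this kind parallelizes is compact: the local speckle contrast K = σ/μ over a sliding window, and a flow index proportional to 1/K². A CPU sketch (the window size and the exact SFI definition here are assumptions, not the paper's parameters):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_flow_index(raw, win=7, eps=1e-9):
    """Compute a simple speckle flow index map from one raw speckle image."""
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, win)                  # local mean (mu)
    mean_sq = uniform_filter(raw ** 2, win)
    var = np.maximum(mean_sq - mean ** 2, 0.0)       # local variance, clamped
    K = np.sqrt(var) / (mean + eps)                  # speckle contrast K = sigma/mu
    return 1.0 / (2.0 * K ** 2 + eps)                # SFI ~ 1/(2 K^2): high = fast flow
```

Because every output pixel depends only on a small neighbourhood, the computation maps naturally onto one CUDA thread per pixel, which is what makes ~10 fps display feasible.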
NASA Technical Reports Server (NTRS)
Buckner, J. D.; Council, H. W.; Edwards, T. R.
1974-01-01
Description of the hardware and software implementing the system of time-lapse reproduction of images through interactive graphics (TRIIG). The system produces a quality hard copy of processed images in a fast and inexpensive manner. This capability allows for optimal development of processing software through the rapid viewing of many image frames in an interactive mode. Three critical optical devices are used to reproduce an image: an Optronics photo reader/writer, the Adage Graphics Terminal, and Polaroid Type 57 high speed film. Typical sources of digitized images are observation satellites, such as ERTS or Mariner, computer coupled electron microscopes for high-magnification studies, or computer coupled X-ray devices for medical research.
Vision based tunnel inspection using non-rigid registration
NASA Astrophysics Data System (ADS)
Badshah, Amir; Ullah, Shan; Shahzad, Danish
2015-04-01
The growing number of long tunnels across the globe has increased the need for safety measurement and inspection of tunnels. To avoid serious damage, tunnel inspection is recommended at regular time intervals so that any deformations or cracks are found in time. While they comply with stringent safety and tunnel-accessibility standards, conventional geodetic surveying using civil engineering techniques and other manual and mechanical methods is time consuming and disrupts routine tunnel operation. We propose automatic tunnel inspection by image processing using non-rigid registration. Many other image processing methods are used for image registration. Most of them operate on images in the spatial domain, for example finding edges and corners with the Harris detector. These methods are quite time consuming and fail for blurred or noisy images; because they use image features directly, they are known as feature-based correlation. The alternative is featureless correlation, in which the images are converted to the frequency domain and then correlated with each other. A shift in the spatial domain corresponds to a phase shift in the frequency domain, and the processing is considerably faster than in the spatial domain. In the proposed method, a modified normalized phase correlation is used to find the shift between two images. As pre-processing, the tunnel images, i.e. reference and template, are divided into small patches, and all corresponding patches are registered by the proposed modified normalized phase correlation. Applying the proposed algorithm yields the pixel displacement between the images, and these pixel shifts are then converted to physical units such as mm or cm. After the complete process, any shift in the tunnel at the inspected points is located.
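The featureless registration at the heart of the method can be sketched as follows. This is the standard normalized phase correlation that the approach builds on, not the authors' modified variant:

```python
import numpy as np

def phase_correlation_shift(ref, tpl):
    """Estimate the integer (dy, dx) translation taking ref onto tpl."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(tpl)
    R = np.conj(F1) * F2
    R /= np.abs(R) + 1e-12             # normalized cross-power spectrum
    corr = np.fft.ifft2(R).real        # sharp peak at the translation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = list(peak)
    for k in range(2):                 # wrap shifts past the half-size to negative
        if shift[k] > ref.shape[k] // 2:
            shift[k] -= ref.shape[k]
    return tuple(shift)
```

Running this per patch pair gives the local pixel displacements, which a calibration factor then converts to physical units.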
An Optical Study of Processes in Hydrogen Flame in a Tube
2002-07-01
growth of the hydrogen-flame length with the hydrogen flow rate was observed, whereas for a turbulent hydrogen jet (Reynolds number Re > 10^4 [5]), the... flame length remained almost constant and varied only weakly with the flow rate of hydrogen. For a subsonic jet flow, flame images display an... There are some data in the literature which show how the diffusive-flame length varies with the rate of hydrogen flow [4, 7]. The length of a
Technical aspects of CT imaging of the spine.
Tins, Bernhard
2010-11-01
This review article discusses technical aspects of computed tomography (CT) imaging of the spine. Patient positioning and its influence on image quality and movement artefact are discussed. Particular emphasis is placed on the choice of scan parameters and their relation to image quality and the radiation burden to the patient. Strategies to reduce radiation burden and artefact from metal implants are outlined. Data acquisition, processing, image display and steps to reduce artefact are reviewed. CT imaging of the spine is put into context with other imaging modalities for specific clinical indications and problems. This article aims to outline the underlying principles of image acquisition and to provide a rough guide to clinical problems without being prescriptive. Individual practice will always vary, reflecting differences in local experience, technical provision and clinical requirements.