Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation
NASA Astrophysics Data System (ADS)
Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas
2013-03-01
High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
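The touch-controlled blending described above can be sketched as follows. This is a minimal illustration, not the patented method: the Gaussian model of the touch-weight surface and all function names are assumptions.

```python
import numpy as np

def touch_weight_surface(shape, touches, sigma=40.0):
    """Build a [0, 1] weighting surface from touch points.

    Hypothetical model: a clamped sum of Gaussians centered on each
    touch location (row, col); repeated touches widen the blended region.
    """
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    surface = np.zeros(shape, dtype=float)
    for (ty, tx) in touches:
        surface += np.exp(-((ys - ty) ** 2 + (xs - tx) ** 2) / (2 * sigma ** 2))
    return np.clip(surface, 0.0, 1.0)

def blend_hdr(non_hdr, hdr, weights):
    """Per-pixel blend: weight 1 -> HDR pixel, weight 0 -> original pixel."""
    return weights * hdr + (1.0 - weights) * non_hdr
```

A touch near a face would push the local weights toward 1, so only that region receives the HDR rendition while the background keeps its original tonality.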
500 x 1Byte x 136 images. So each 500 bytes from this dataset represents one scan line of the slice image. For example, using PBM: Get frame one: rawtopgm 256 256 < tomato.data > frame1 Get frames one to four into a single image: rawtopgm 256 1024 < tomato.data > frame1-4 Get frame two (skip
The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.
Park, Keunyeol; Song, Minkyu; Kim, Soo Youn
2018-02-24
This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixels) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage, and the maximum frame rate is 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
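The XOR edge-detection idea can be illustrated in software: on a binarized (single-bit) image, XOR-ing each pixel with its neighbors yields 1 exactly where the bit changes, i.e. at edges. This is a software analogue only; the paper's on-chip XOR circuit details differ, and the function name is an assumption.

```python
import numpy as np

def xor_edges(binary_img):
    """Edge map via XOR of each single-bit pixel with its right and
    lower neighbors: the XOR is 1 only where adjacent bits differ."""
    b = binary_img.astype(bool)
    right = np.zeros_like(b)
    right[:, :-1] = b[:, :-1] ^ b[:, 1:]   # horizontal bit transitions
    down = np.zeros_like(b)
    down[:-1, :] = b[:-1, :] ^ b[1:, :]    # vertical bit transitions
    return right | down
```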
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically-motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more-advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and decreased acquisition time, and it can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
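A toy instance of the frame-theoretic view (illustrative only; the binary patterns, sizes, and variable names are arbitrary assumptions, not the paper's constructions): each single-pixel measurement is an inner product of the scene with one pattern, and reconstruction applies the canonical dual frame, here computed as the pseudoinverse of the pattern matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                     # pixels in the reconstructed image
m = 2 * n                       # an overcomplete frame of patterns

# Rows of `patterns` are the structured-light patterns (the analysis frame).
patterns = rng.choice([0.0, 1.0], size=(m, n))

x_true = rng.random(n)          # unknown scene (flattened image)
y = patterns @ x_true           # single-pixel detector readings

# Synthesis with the canonical dual frame, i.e. the pseudoinverse:
x_hat = np.linalg.pinv(patterns) @ y
```

With a full-rank (overcomplete) frame the scene is recovered exactly in noiseless conditions; frame design then governs noise robustness and acquisition time, which is the trade-off the paper analyzes.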
Interference-free ultrasound imaging during HIFU therapy, using software tools
NASA Technical Reports Server (NTRS)
Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)
2010-01-01
Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
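The compositing step can be sketched as follows. This is a minimal illustration with hypothetical names: the per-frame interference masks are assumed to be given (e.g. from locating the band where the offset-frequency HIFU interference appears in each frame).

```python
import numpy as np

def composite(frames, interference_masks):
    """Build a composite image by averaging, at each pixel, only the
    frames in which that pixel is interference-free (mask == False)."""
    frames = np.asarray(frames, dtype=float)
    clean = ~np.asarray(interference_masks)        # True where usable
    counts = clean.sum(axis=0)                     # usable frames per pixel
    total = (frames * clean).sum(axis=0)
    # Pixels never interference-free are left at zero.
    return np.divide(total, counts, out=np.zeros_like(total),
                     where=counts > 0)
```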
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
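The multiple-capture idea for dynamic-range enhancement can be sketched as follows. This is a simplified software model under stated assumptions (linear sensor response, a hypothetical `full_well` saturation level, known exposure times), not the chip's actual on-chip algorithm.

```python
import numpy as np

def multiple_capture_hdr(captures, exposure_times, full_well=255):
    """Combine captures taken at increasing exposure times: for each
    pixel, keep the longest exposure that did not saturate, normalized
    by its exposure time to estimate per-pixel radiance."""
    captures = np.asarray(captures, dtype=float)
    times = np.asarray(exposure_times, dtype=float)
    best = captures[0] / times[0]                # shortest exposure baseline
    for img, t in zip(captures[1:], times[1:]):
        unsaturated = img < full_well
        best = np.where(unsaturated, img / t, best)
    return best
```

Dark pixels benefit from the long exposures (less quantization noise), while bright pixels fall back to a short exposure, extending dynamic range beyond a single capture.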
2015-07-01
IMAGE FRAME RATE (R-x\\IFR-n) PRE-TRIGGER FRAMES (R-x\\PTG-n) TOTAL FRAMES (R-x\\TOTF-n) EXPOSURE TIME (R-x\\EXP-n) SENSOR ROTATION (R-x...0” (Single frame). “1” (Multi-frame). “2” (Continuous). Allowed when: When R\\CDT is “IMGIN”. IMAGE FRAME RATE R-x\\IFR-n R/R Ch 10 Status: RO...the settings that the user wishes to modify. Return Value: A partial IHAL <configuration> element containing only the new settings for
Single-Frame Cinema. Three Dimensional Computer-Generated Imaging.
ERIC Educational Resources Information Center
Cheetham, Edward Joseph, II
This master's thesis provides a description of the proposed art form called single-frame cinema, which is a category of computer imagery that takes the temporal polarities of photography and cinema and unites them into a single visual vignette of time. Following introductory comments, individual chapters discuss (1) the essential physical…
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers prominent advantages of full-frame measurements using a single high-speed camera but without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
Improved grid-noise removal in single-frame digital moiré 3D shape measurement
NASA Astrophysics Data System (ADS)
Mohammadi, Fatemeh; Kofman, Jonathan
2016-11-01
A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the fast Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, discrete wavelet transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and lowest roughness index for all test objects, indicating best grid-noise removal in comparison to the other techniques.
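The FFT baseline that SWT-FFT is compared against can be sketched as a notch filter. This sketch assumes a vertical-line grid of known period and invented function names; the paper's SWT-FFT applies this kind of filtering to wavelet subbands rather than to the raw image.

```python
import numpy as np

def notch_filter_grid(img, grid_period, half_width=1):
    """Suppress a vertical-line grid of known period by zeroing the
    corresponding horizontal-frequency bins in the 2D FFT."""
    F = np.fft.fft2(img)
    h, w = img.shape
    fx = np.fft.fftfreq(w)             # horizontal frequencies (cycles/px)
    grid_freq = 1.0 / grid_period
    for k in (grid_freq, -grid_freq):  # notch both conjugate peaks
        cols = np.where(np.abs(fx - k) < half_width / w)[0]
        F[:, cols] = 0.0
    return np.real(np.fft.ifft2(F))
```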
Image Based Synthesis for Airborne Minefield Data
2005-12-01
Jia, and C-K. Tang, "Image repairing: robust image synthesis by adaptive ND tensor voting", Proceedings of the IEEE Computer Society Conference on...utility is capable of synthesizing single-frame data as well as a list of frames along a flight path. The application is developed in MATLAB 6.5 using the
Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao
2015-09-01
The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the features of the attitude transforms of adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in suppressing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
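The √N improvement quoted above is the familiar statistics of averaging independent errors. A toy simulation (not the ACF algorithm itself; it ignores the gyro-measured attitude alignment and uses arbitrary numbers) shows the error of an N-frame combination shrinking like σ/√N:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1        # per-frame random attitude error (arbitrary units)
n_frames = 16      # number of correlated frames combined
trials = 20000

# Each trial: average n_frames noisy single-frame estimates of the same
# (already transform-aligned) attitude; the error of the mean should
# shrink approximately as sigma / sqrt(n_frames).
errors = rng.normal(0.0, sigma, size=(trials, n_frames)).mean(axis=1)
print(errors.std(), sigma / np.sqrt(n_frames))
```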
Single-Frame Terrain Mapping Software for Robotic Vehicles
NASA Technical Reports Server (NTRS)
Rankin, Arturo L.
2011-01-01
This software is a component in an unmanned ground vehicle (UGV) perception system that builds compact, single-frame terrain maps for distribution to other systems, such as a world model or an operator control unit, over a local area network (LAN). Each cell in the map encodes an elevation value, terrain classification, object classification, terrain traversability, terrain roughness, and a confidence value into four bytes of memory. The input to this software component is a range image (from a lidar or stereo vision system), and optionally a terrain classification image and an object classification image, both registered to the range image. The single-frame terrain map generates estimates of the support surface elevation, ground cover elevation, and minimum canopy elevation; generates terrain traversability cost; detects low overhangs and high-density obstacles; and can perform geometry-based terrain classification (ground, ground cover, unknown). A new origin is automatically selected for each single-frame terrain map in global coordinates such that it coincides with the corner of a world map cell. That way, single-frame terrain maps correctly line up with the world map, facilitating the merging of map data into the world map. Instead of using 32 bits to store the floating-point elevation for a map cell, the map origin elevation is set to the vehicle elevation, and each cell reports the change in elevation (from the origin elevation) as a number of discrete steps. The single-frame terrain map elevation resolution is 2 cm. At that resolution, terrain elevation from -20.5 to 20.5 m (with respect to the vehicle's elevation) is encoded into 11 bits. For each four-byte map cell, bits are assigned to encode elevation, terrain roughness, terrain classification, object classification, terrain traversability cost, and a confidence value. The vehicle's current position and orientation, the map origin, and the map cell resolution are all included in a header for each map. The map is compressed into a vector prior to delivery to another system.
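The cell encoding can be sketched as bit packing. The 11-bit elevation field at 2 cm resolution follows from the abstract (2048 steps × 2 cm ≈ ±20.5 m); the exact bit positions and the remaining field widths below are assumptions, since the abstract lists the fields but not their layout.

```python
def pack_cell(delta_elev_m, terrain_class, traversability, confidence):
    """Pack one map cell into 32 bits. Hypothetical layout:
    bits 0-10  : elevation step count, 2 cm/step, offset so -20.48 m -> 0
    bits 11-13 : terrain classification (3 bits)
    bits 14-21 : traversability cost (8 bits)
    bits 22-29 : confidence (8 bits)"""
    steps = int(round((delta_elev_m + 20.48) / 0.02)) & 0x7FF
    return (steps
            | (terrain_class & 0x7) << 11
            | (traversability & 0xFF) << 14
            | (confidence & 0xFF) << 22)

def unpack_elevation(cell):
    """Recover the elevation change (m) from a packed cell."""
    return (cell & 0x7FF) * 0.02 - 20.48
```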
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCowan, P. M., E-mail: pmccowan@cancercare.mb.ca; McCurdy, B. M. C.; Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9
Purpose: The in vivo 3D dose delivered to a patient during volumetric modulated arc therapy (VMAT) delivery can be calculated using electronic portal imaging device (EPID) images. These images must be acquired in cine-mode (i.e., “movie” mode) in order to capture the time-dependent delivery information. The angle subtended by each cine-mode EPID image during an arc can be changed via the frame averaging number selected within the image acquisition software. A large frame average number will decrease the EPID’s angular resolution and will result in a decrease in the accuracy of the dose information contained within each image. Alternatively, fewer EPID images acquired per delivery will decrease the overall 3D patient dose calculation time, which is appealing for large-scale clinical implementation. Therefore, the purpose of this study was to determine the optimal frame average value per EPID image, defined as the highest frame averaging that can be used without an appreciable loss in 3D dose reconstruction accuracy for VMAT treatments. Methods: Six different VMAT plans and six different SBRT-VMAT plans were delivered to an anthropomorphic phantom. Delivery was carried out on a Varian 2300ix model linear accelerator (Linac) equipped with an aS1000 EPID running at a frame acquisition rate of 7.5 Hz. An additional PC was set up at the Linac console area, equipped with specialized frame-grabber hardware and software packages allowing continuous acquisition of all EPID frames during delivery. Frames were averaged into “frame-averaged” EPID images using MATLAB. Each frame-averaged data set was used to calculate the in vivo dose to the patient and then compared to the single EPID frame in vivo dose calculation (the single frame calculation represents the highest possible angular resolution per EPID image).
The mean percentage dose difference in low dose (<20% of prescription dose) and high dose (>80% of prescription dose) regions was calculated for each frame-averaged scenario for each plan. The authors defined the maximum acceptable loss of accuracy as a ±1% mean dose difference in the high dose region. Optimal frame average numbers were then determined as a function of the Linac’s average gantry speed and the dose per fraction. Results: The authors found that 9 and 11 frame averages were suitable for all VMAT and SBRT-VMAT treatments, respectively. This resulted in no more than a 1% loss to any of the dose region’s mean percentage difference when compared to the single frame reconstruction. The optimized number was dependent on the treatment’s dose per fraction and was determined to be as high as 14 for 12 Gy/fraction (fx), 15 for 8 Gy/fx, 11 for 6 Gy/fx, and 9 for 2 Gy/fx. Conclusions: The authors have determined an optimal EPID frame averaging number for multiple VMAT-type treatments. These are given as a function of the dose per fraction and average gantry speed. These optimized values are now used in the authors’ clinical, 3D, in vivo patient dosimetry program. This provides a reduction in calculation time while maintaining the authors’ required level of accuracy in the dose reconstruction.
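The trade-off between frame averaging and angular resolution is simple arithmetic: at the stated 7.5 Hz acquisition rate, an N-frame average spans N/7.5 seconds of gantry travel. The gantry speed below (4.8 deg/s) is an illustrative assumption, not a value from the study.

```python
# Angular span of one frame-averaged EPID image during an arc.
frame_rate_hz = 7.5          # EPID acquisition rate from the study
gantry_speed_deg_s = 4.8     # illustrative assumption only

def degrees_per_image(frame_average):
    """Gantry angle subtended by one N-frame-averaged EPID image."""
    return gantry_speed_deg_s * frame_average / frame_rate_hz

for n in (1, 9, 11):
    print(n, degrees_per_image(n))
```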
Backside-illuminated 6.6-μm pixel video-rate CCDs for scientific imaging applications
NASA Astrophysics Data System (ADS)
Tower, John R.; Levine, Peter A.; Hsueh, Fu-Lung; Patel, Vipulkumar; Swain, Pradyumna K.; Meray, Grazyna M.; Andrews, James T.; Dawson, Robin M.; Sudol, Thomas M.; Andreas, Robert
2000-05-01
A family of backside-illuminated CCD imagers with 6.6 μm pixels has been developed. The imagers feature full 12 bit (> 4,000:1) dynamic range with measured noise floor of < 10 e RMS at 5 MHz clock rates, and measured full well capacity of > 50,000 e. The modulation transfer function performance is excellent, with measured MTF at Nyquist of 46% for 500 nm illumination. Three device types have been developed. The first device is a 1 K X 1 K full frame device with a single output port, which can be run as a 1 K X 512 frame transfer device. The second device is a 512 X 512 frame transfer device with a single output port. The third device is a 512 X 512 split frame transfer device with four output ports. All feature the high quantum efficiency afforded by backside illumination.
High-speed multi-exposure laser speckle contrast imaging with a single-photon counting camera
Dragojević, Tanja; Bronzi, Danilo; Varma, Hari M.; Valdes, Claudia P.; Castellvi, Clara; Villa, Federica; Tosi, Alberto; Justicia, Carles; Zappa, Franco; Durduran, Turgut
2015-01-01
Laser speckle contrast imaging (LSCI) has emerged as a valuable tool for cerebral blood flow (CBF) imaging. We present a multi-exposure laser speckle imaging (MESI) method which uses a high-frame rate acquisition with a negligible inter-frame dead time to mimic multiple exposures in a single-shot acquisition series. Our approach takes advantage of the noise-free readout and high-sensitivity of a complementary metal-oxide-semiconductor (CMOS) single-photon avalanche diode (SPAD) array to provide real-time speckle contrast measurement with high temporal resolution and accuracy. To demonstrate its feasibility, we provide comparisons between in vivo measurements with both the standard and the new approach performed on a mouse brain, in identical conditions. PMID:26309751
Geiger-mode APD camera system for single-photon 3D LADAR imaging
NASA Astrophysics Data System (ADS)
Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir
2012-06-01
The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
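The single-frame time-stamp histogram mentioned above can be sketched as follows. This is an illustrative software analogue, not the camera's real-time implementation; the function name and the use of a negative value for "pixel not triggered" are assumptions.

```python
import numpy as np

def timestamp_histogram(timestamps, bits=13):
    """Histogram of per-pixel photon arrival time stamps from one frame.

    `timestamps`: one 13-bit time stamp per triggered pixel; pixels that
    recorded no event in the range gate are marked with a negative value.
    """
    stamps = np.asarray(timestamps).ravel()
    stamps = stamps[stamps >= 0]             # drop untriggered pixels
    return np.bincount(stamps, minlength=2 ** bits)
```

Accumulating such histograms across frames is the basic step for building range (depth) profiles from Geiger-mode detections.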
Adaptive foveated single-pixel imaging with dynamic supersampling
Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.
2017-01-01
In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
James W. Hoffman; Lloyd L. Coulter; Philip J Riggan
2005-01-01
The new FireMapper® 2.0 and OilMapper airborne, infrared imaging systems operate in a "snapshot" mode. Both systems feature the real time display of single image frames, in any selected spectral band, on a daylight readable tablet PC. These single frames are displayed to the operator with full temperature calibration in color or grayscale renditions. A rapid...
Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope
Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok
2017-01-01
Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243
SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.
Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A
2016-11-01
Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence and on the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.
Photon Counting Imaging with an Electron-Bombarded Pixel Image Sensor
Hirvonen, Liisa M.; Suhling, Klaus
2016-01-01
Electron-bombarded pixel image sensors, where a single photoelectron is accelerated directly into a CCD or CMOS sensor, allow wide-field imaging at extremely low light levels as they are sensitive enough to detect single photons. This technology allows the detection of up to hundreds or thousands of photon events per frame, depending on the sensor size, and photon event centroiding can be employed to recover resolution lost in the detection process. Unlike photon events from electron-multiplying sensors, the photon events from electron-bombarded sensors have a narrow, acceleration-voltage-dependent pulse height distribution. Thus a gain voltage sweep during exposure in an electron-bombarded sensor could allow photon arrival time determination from the pulse height with sub-frame exposure time resolution. We give a brief overview of our work with electron-bombarded pixel image sensor technology and recent developments in this field for single photon counting imaging, and examples of some applications. PMID:27136556
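The photon-event centroiding mentioned above can be sketched as an intensity-weighted centroid. This is a basic scheme with invented names; practical pipelines first detect local maxima and centroid a small window around each event rather than the whole frame.

```python
import numpy as np

def centroid_event(frame, threshold):
    """Locate a single photon event to sub-pixel precision as the
    intensity-weighted centroid (row, col) of all pixels above threshold."""
    ys, xs = np.nonzero(frame > threshold)
    weights = frame[ys, xs].astype(float)
    total = weights.sum()
    return (ys * weights).sum() / total, (xs * weights).sum() / total
```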
Pemp, Berthold; Kardon, Randy H; Kircher, Karl; Pernicka, Elisabeth; Schmidt-Erfurth, Ursula; Reitner, Andreas
2013-07-01
Automated detection of subtle changes in peripapillary retinal nerve fibre layer thickness (RNFLT) over time using optical coherence tomography (OCT) is limited by inherent image quality before layer segmentation, stabilization of the scan on the peripapillary retina and its precise placement on repeated scans. The present study evaluates image quality and reproducibility of spectral domain (SD)-OCT comparing different rates of automatic real-time tracking (ART). Peripapillary RNFLT was measured in 40 healthy eyes on six different days using SD-OCT with an eye-tracking system. Image brightness of OCT with unaveraged single frame B-scans was compared to images using ART of 16 B-scans and 100 averaged frames. Short-term and day-to-day reproducibility was evaluated by calculation of intraindividual coefficients of variation (CV) and intraclass correlation coefficients (ICC) for single measurements as well as for seven repeated measurements per study day. Image brightness, short-term reproducibility, and day-to-day reproducibility were significantly improved using ART of 100 frames compared to one and 16 frames. Short-term CV was reduced from 0.94 ± 0.31 % and 0.91 ± 0.54 % in scans of one and 16 frames to 0.56 ± 0.42 % in scans of 100 averaged frames (P ≤ 0.003 each). Day-to-day CV was reduced from 0.98 ± 0.86 % and 0.78 ± 0.56 % to 0.53 ± 0.43 % (P ≤ 0.022 each). The range of ICC was 0.94 to 0.99. Sample size calculations for detecting changes of RNFLT over time in the range of 2 to 5 μm were performed based on intraindividual variability. Image quality and reproducibility of mean peripapillary RNFLT measurements using SD-OCT is improved by averaging OCT images with eye-tracking compared to unaveraged single frame images. Further improvement is achieved by increasing the amount of frames per measurement, and by averaging values of repeated measurements per session. 
These strategies may allow a more accurate evaluation of RNFLT reduction in clinical trials observing optic nerve degeneration.
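The intraindividual coefficient of variation reported above is simply the within-eye standard deviation of repeated measurements divided by their mean. A minimal numpy sketch with hypothetical RNFLT values (illustrative only, not the study's data):

```python
import numpy as np

def coefficient_of_variation(measurements):
    """Intraindividual CV: within-subject SD over mean, in percent."""
    m = np.asarray(measurements, dtype=float)
    return 100.0 * m.std(ddof=1) / m.mean()

# Seven hypothetical repeated RNFLT measurements (micrometres) for one eye.
rnflt = [98.2, 97.9, 98.5, 98.1, 97.8, 98.3, 98.0]
cv = coefficient_of_variation(rnflt)  # sub-percent, as for the averaged scans
```

A CV computed this way per eye, then averaged across eyes, is what the 0.56 ± 0.42 % figures above summarize.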
Super-resolution imaging applied to moving object tracking
NASA Astrophysics Data System (ADS)
Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi
2017-10-01
Moving object tracking in a video detects and analyzes changes in an object under observation. Modern tracking systems demand both high visual quality and high precision of the tracked target. In practice, the tracked object is not always clearly visible, which degrades tracking precision; causes include low-quality video, system noise, and small object size. To improve the precision of tracking, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence, cropping either selected frames or all frames. The second step tracks the super-resolved images. Super-resolution is a technique for obtaining high-resolution images from low-resolution images. In this research, single-frame super-resolution, which has the advantage of fast computation, is used for the tracking approach. The tracking method is Camshift, whose advantage is a simple calculation based on the HSV color histogram that tolerates variation in object color. The computational complexity and large memory requirements of combined super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely across varied backgrounds, object shape changes, and good lighting conditions.
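A single-frame super-resolution step can be as simple as interpolation-based upscaling of the cropped region before it is handed to the tracker. The sketch below uses plain bilinear interpolation as a stand-in (the paper's actual single-frame method is not detailed here, so this is an assumption for illustration):

```python
import numpy as np

def upscale_bilinear(img, factor):
    """Naive single-frame upscaling by bilinear interpolation -- a stand-in
    for the single-frame super-resolution step applied before tracking."""
    h, w = img.shape
    ys = np.linspace(0.0, h - 1.0, h * factor)
    xs = np.linspace(0.0, w - 1.0, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

patch = np.arange(16, dtype=float).reshape(4, 4)  # cropped region around the target
enlarged = upscale_bilinear(patch, 2)             # fed to the tracker (e.g. Camshift)
```

In practice the enlarged patch would be passed to an HSV-histogram tracker such as OpenCV's CamShift.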
NASA Astrophysics Data System (ADS)
Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan
2015-12-01
In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame setting, since an adequately denoised image cannot be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, then denoises and averages them together to obtain a high signal-to-noise-ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
Multiple-Event, Single-Photon Counting Imaging Sensor
NASA Technical Reports Server (NTRS)
Zheng, Xinyu; Cunningham, Thomas J.; Sun, Chao; Wang, Kang L.
2011-01-01
The single-photon counting imaging sensor is typically an array of silicon Geiger-mode avalanche photodiodes that are monolithically integrated with CMOS (complementary metal oxide semiconductor) readout, signal processing, and addressing circuits located in each pixel and the peripheral area of the chip. The major problem is its single-event method of photon-count registration. A single-event single-photon counting imaging array only allows registration of at most one photon count in each of its pixels during a frame time, i.e., the interval between two successive pixel reset operations. Since the frame time cannot be made arbitrarily short, this leads to very low dynamic range and makes the sensor useful only in very-low-flux environments. The second problem of the prior technique is a limited fill factor resulting from consumption of chip area by the monolithically integrated CMOS readout in pixels. The resulting low photon collection efficiency substantially undermines any benefit gained from the very sensitive single-photon counting detection. The single-photon counting imaging sensor developed in this work has a novel multiple-event architecture, which allows each of its pixels to register one million or more photon-counting events during a frame time. Because of the consequently boosted dynamic range, the imaging array of the invention is capable of performing single-photon counting from ultra-low-light through high-flux environments. On the other hand, since the multiple-event architecture is implemented in a hybrid structure, back-illumination and a close-to-unity fill factor can be realized, and maximized quantum efficiency can also be achieved in the detector array.
Efficient use of bit planes in the generation of motion stimuli
NASA Technical Reports Server (NTRS)
Mulligan, Jeffrey B.; Stone, Leland S.
1988-01-01
The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
Alignment of cryo-EM movies of individual particles by optimization of image translations.
Rubinstein, John L; Brubaker, Marcus A
2015-11-01
Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWR, the starting and ending frequencies of a linear frequency modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera frame rates.
Temporal compressive imaging for video
NASA Astrophysics Data System (ADS)
Zhou, Qun; Zhang, Linxia; Ke, Jun
2018-01-01
In many situations, imagers are required to have higher imaging speed, for example in gunpowder blast analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by 8 times. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on the reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
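The TCI measurement model is easy to state: each camera frame is the sum of T=8 coded sub-frames. A small numpy sketch of the forward model, with a trivial back-projection baseline in place of the TwIST or GMM solvers used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

T, H, W = 8, 16, 16                                  # temporal compression ratio T = 8
video = rng.random((T, H, W))                        # the high-speed frames to recover
masks = rng.integers(0, 2, (T, H, W)).astype(float)  # per-sub-frame binary coded masks

# Forward model: one compressive frame integrates the 8 coded sub-frames.
measurement = (masks * video).sum(axis=0)

# Trivial baseline "reconstruction": mask-weighted back-projection.
# (TwIST or GMM inversion replaces this step in the paper.)
hits = np.maximum(masks.sum(axis=0), 1.0)
backprojection = masks * (measurement / hits)[None, :, :]
```

Real reconstruction exploits sparsity or learned patch statistics; the back-projection here only shows the dimensions involved.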
Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong
2010-01-01
With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video and results in significant distortions of the retina images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retina images. The Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT. A point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru
The frame-of-reference using the computed-tomography (CT) coordinate system in single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of this high-resolution frame-of-reference on xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image quality phantom were scanned using the SPECT/CT system. xSPECT reconstructions were performed with reference CT images at different display field-of-view (DFOV) and pixel sizes. The pixel sizes of the reconstructed xSPECT images were close to 2.4 mm, the size at which the projection data were originally acquired, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image quality phantom were independent of the DFOV size of the reference CT images. The results of this study revealed that the image quality of reconstructed xSPECT images is not influenced by the resolution of the frame-of-reference used in SPECT reconstruction.
NASA Astrophysics Data System (ADS)
Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben
2015-03-01
A system for dynamic mapping of broadband ultrasound fields has been designed, with high-frame-rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent-light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compressions as low as 10% was also demonstrated.
Research on compression performance of ultrahigh-definition videos
NASA Astrophysics Data System (ADS)
Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di
2017-11-01
With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), 4K (3840×2160) television signals and even 8K (8192×4320) ultrahigh-definition videos have appeared. The demand for HD images and videos is increasing continuously, along with the data volume. Storage and transmission cannot be properly addressed merely by expanding hard-disk capacity and upgrading transmission devices. Making full use of the High Efficiency Video Coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance for a single image and for I-frames. Then, using this idea together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, super-resolution reconstruction technology further improves the reconstructed video quality. Experiments show that the proposed compression method for a single image (I-frame) and for video sequences outperforms HEVC in low-bit-rate environments.
Dual-slit confocal light sheet microscopy for in vivo whole-brain imaging of zebrafish
Yang, Zhe; Mei, Li; Xia, Fei; Luo, Qingming; Fu, Ling; Gong, Hui
2015-01-01
In vivo functional imaging at single-neuron resolution is an important approach for visualizing biological processes in neuroscience. Light sheet microscopy (LSM) is a cutting-edge in vivo imaging technique that provides micron-scale spatial resolution at high frame rates. Due to the scattering and absorption of tissue, however, conventional LSM is inadequate to resolve cells because of the attenuated signal-to-noise ratio (SNR). Using dual-beam illumination and confocal dual-slit detection, a dual-slit confocal LSM is demonstrated here that obtains SNR-enhanced images at twice the frame rate of the line confocal LSM method. Through theoretical calculations and experiments, the correlation between the slit width and SNR was determined to optimize the image quality. In vivo whole-brain structural imaging stacks and functional imaging sequences of single slices were obtained for analysis of calcium activities at single-cell resolution. A two-fold increase in imaging speed over conventional confocal LSM makes it possible to capture the sequence of the neurons' activities and helps reveal the potential functional connections in the whole zebrafish brain. PMID:26137381
Statistical Deconvolution for Superresolution Fluorescence Microscopy
Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei
2012-01-01
Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
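DeconSTORM's per-frame update generalizes classic Poisson-model deconvolution. As a hedged illustration, here is plain Richardson-Lucy deconvolution, which lacks the sparsity prior and inter-frame correlation terms that deconSTORM adds:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Richardson-Lucy deconvolution under a Poisson noise model."""
    psf_f = np.fft.rfft2(np.fft.ifftshift(psf))
    est = np.full_like(blurred, blurred.mean())  # flat nonnegative start
    for _ in range(n_iter):
        conv = np.fft.irfft2(np.fft.rfft2(est) * psf_f, s=blurred.shape)
        ratio = blurred / np.maximum(conv, 1e-12)
        # Multiply by the correlation of the ratio with the PSF.
        est = est * np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(psf_f), s=blurred.shape)
    return est

# Demo: blur two point emitters with a Gaussian PSF, then deconvolve.
y, x = np.mgrid[:32, :32]
psf = np.exp(-((y - 16.0) ** 2 + (x - 16.0) ** 2) / 4.0)
psf /= psf.sum()
truth = np.zeros((32, 32))
truth[12, 12] = truth[20, 20] = 1.0
blurred = np.fft.irfft2(np.fft.rfft2(truth) * np.fft.rfft2(np.fft.ifftshift(psf)),
                        s=truth.shape)
recovered = richardson_lucy(blurred, psf)  # peaks sharpen back toward the emitters
```

DeconSTORM replaces the flat prior implicit here with activation sparsity and temporal continuity across movie frames, which is what lets it tolerate overlapping emitters.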
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
Stripe nonuniformity correction for infrared imaging system based on single image optimization
NASA Astrophysics Data System (ADS)
Hua, Weiping; Zhao, Jufeng; Cui, Guangmang; Gong, Xiaoli; Ge, Peng; Zhang, Jiang; Xu, Zhihai
2018-06-01
Infrared imaging is often disturbed by stripe nonuniformity noise. Scene-based correction methods can effectively reduce the impact of stripe noise. In this paper, a stripe nonuniformity correction method based on a differential constraint is proposed. First, the gray-level distribution of stripe nonuniformity is analyzed, and a penalty function is constructed from the difference between horizontal and vertical gradients. The penalty function is then optimized with a weighting function to obtain the corrected image. Compared with other single-frame approaches, experiments show that the proposed method performs better in both subjective and objective analysis, and does less damage to edges and details. Meanwhile, the proposed method runs faster. We also discuss the differences between the proposed idea and multi-frame methods. Finally, our method is successfully applied in a hardware system.
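The gradient-difference idea can be illustrated with an even simpler single-frame estimator: treat the stripes as per-column offsets, read them out of the horizontal-gradient statistics, and subtract them. This is a toy version under the assumption that the scene's horizontal gradients have zero median, not the paper's weighted penalty optimization:

```python
import numpy as np

def destripe(img):
    """Estimate per-column stripe offsets from the median horizontal
    gradient (scene gradients cancel in the median if their median is
    zero) and remove them."""
    d = np.diff(img, axis=1)                      # horizontal gradients
    col_step = np.median(d, axis=0)               # stripe-induced jump per column
    offsets = np.concatenate(([0.0], np.cumsum(col_step)))
    offsets -= offsets.mean()                     # preserve mean brightness
    return img - offsets[None, :]

# Demo: a stripe-free vertical ramp corrupted by random column offsets.
rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0.0, 1.0, 32)[:, None], (1, 16))
stripes = rng.normal(0.0, 0.2, 16)
corrected = destripe(clean + stripes[None, :])
```

On this synthetic frame the correction is exact up to a constant brightness shift; real scenes need the gradient-weighted penalty described above.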
Coincidence ion imaging with a fast frame camera
NASA Astrophysics Data System (ADS)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen
2014-12-01
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
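The real-time centroiding step can be sketched as: threshold the camera frame, group bright pixels into connected spots, and take each spot's intensity-weighted centroid. A minimal pure-Python/numpy version (the actual system does this on streaming frames at 1 kHz):

```python
import numpy as np

def centroid_spots(frame, threshold=0.5):
    """Threshold a camera frame, group bright pixels into 4-connected
    spots, and return each spot's intensity-weighted centroid (y, x)."""
    mask = frame > threshold
    seen = np.zeros_like(mask)
    centroids = []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        stack, pix = [(y, x)], []
        seen[y, x] = True
        while stack:                      # flood fill one connected spot
            cy, cx = stack.pop()
            pix.append((cy, cx))
            for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        w = np.array([frame[p] for p in pix])
        ys, xs = np.array(pix).T
        centroids.append((np.sum(w * ys) / w.sum(), np.sum(w * xs) / w.sum()))
    return centroids

# Demo: two ion spots on an otherwise dark frame.
img = np.zeros((16, 16))
img[2:4, 2:4] = 1.0          # a 2x2 spot centred at (2.5, 2.5)
img[10, 12] = 2.0            # a single bright pixel
spots = centroid_spots(img)
```

The spot intensities (`w.sum()` per spot) are what get correlated with PMT peak heights to resolve multiple hits.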
Real-time CT-video registration for continuous endoscopic guidance
NASA Astrophysics Data System (ADS)
Merritt, Scott A.; Rai, Lav; Higgins, William E.
2006-03-01
Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-Video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to a current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real-time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per sec. with minimal user-intervention.
Informative-frame filtering in endoscopy videos
NASA Astrophysics Data System (ADS)
An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny
2005-04-01
Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in this type of video, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden of further processing, such as computer-aided image analysis or examination by human experts, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify video frames into these two classes using a combination of the Discrete Fourier Transform (DFT), texture analysis, and k-means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (i.e., precision, sensitivity, specificity, and accuracy).
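The DFT portion of such a classifier can be sketched as a high-frequency energy ratio: defocused, non-informative frames concentrate their spectral energy near DC. This toy metric omits the texture features and k-means step that the paper combines it with, and the 1/8 radius cut-off is an illustrative assumption:

```python
import numpy as np

def focus_score(frame):
    """Ratio of high-frequency to total spectral energy; out-of-focus
    frames score low."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(frame)))
    h, w = frame.shape
    y, x = np.mgrid[:h, :w]
    r = np.hypot(y - h / 2, x - w / 2)      # radial frequency from DC
    high = spec[r > min(h, w) / 8].sum()    # energy outside the low-pass core
    return high / spec.sum()

# Demo: a finely textured (in-focus) frame vs. a featureless (defocused) one.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
defocused = np.full((32, 32), sharp.mean())
scores = (focus_score(sharp), focus_score(defocused))
```

Clustering such scores (together with texture statistics) is what removes the need for a fixed threshold in the method above.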
Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.
Ji, L; Danuser, G
2005-12-01
We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
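The central trick — summing correlation score maps over several consecutive frame pairs before picking a maximum — can be sketched with a block-matching toy. The exhaustive integer search and fixed block are illustrative simplifications; the actual method adapts the integration and works with normalized correlation:

```python
import numpy as np

def integrated_correlation(frames, top_left, size, search=3):
    """Correlate a template block against the next frame for every
    consecutive frame pair and sum the score maps; the summed map's
    maximum gives the block's per-frame displacement. Integration over
    pairs is what lets a weak, noisy signal produce a clear maximum."""
    y0, x0 = top_left
    scores = np.zeros((2 * search + 1, 2 * search + 1))
    for a, b in zip(frames[:-1], frames[1:]):
        tmpl = a[y0:y0 + size, x0:x0 + size]
        for i, dy in enumerate(range(-search, search + 1)):
            for j, dx in enumerate(range(-search, search + 1)):
                win = b[y0 + dy:y0 + dy + size, x0 + dx:x0 + dx + size]
                scores[i, j] += np.sum(tmpl * win)
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return i - search, j - search

# Demo: a single speckle drifting one row per frame (quasi-stationary flow).
base = np.zeros((32, 32))
base[10, 10] = 1.0
frames = [np.roll(base, (t, 0), axis=(0, 1)) for t in range(4)]
flow = integrated_correlation(frames, top_left=(6, 6), size=9)
```

With noise added to each frame, any single pair's score map can be ambiguous while the summed map still peaks at the true displacement, which is the behaviour the abstract describes.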
Toward real-time quantum imaging with a single pixel camera
Lawrie, B. J.; Pooser, R. C.
2013-03-19
In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four-wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum-correlated modes from each of the twin beams to a high-quantum-efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single-pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.
Kim, Daehyeok; Song, Minkyu; Choe, Byeongseong; Kim, Soo Youn
2017-06-25
In this paper, we present a multi-resolution mode CMOS image sensor (CIS) for intelligent surveillance system (ISS) applications. A low column fixed-pattern noise (CFPN) comparator is proposed in an 8-bit two-step single-slope analog-to-digital converter (TSSS ADC) for the CIS, which supports normal, 1/2, 1/4, 1/8, 1/16, 1/32, and 1/64 modes of pixel resolution. We show that the scaled-resolution images enable the CIS to reduce total power consumption while the images hold steady in the absence of events. A prototype sensor of 176 × 144 pixels has been fabricated with a 0.18 μm 1-poly 4-metal CMOS process. The area of the 4-shared 4T-active pixel sensor (APS) is 4.4 μm × 4.4 μm and the total chip size is 2.35 mm × 2.35 mm. The maximum power consumption is 10 mW (at full resolution) with supply voltages of 3.3 V (analog) and 1.8 V (digital), at a frame rate of 14 frames/s.
NASA Astrophysics Data System (ADS)
Perrin, Douglas P.; Bueno, Alejandra; Rodriguez, Andrea; Marx, Gerald R.; del Nido, Pedro J.
2017-03-01
In this paper we describe a pilot study, where machine learning methods are used to differentiate between congenital heart diseases. Our approach was to apply convolutional neural networks (CNNs) to echocardiographic images from five different pediatric populations: normal, coarctation of the aorta (CoA), hypoplastic left heart syndrome (HLHS), transposition of the great arteries (TGA), and single ventricle (SV). We used a single network topology that was trained in a pairwise fashion in order to evaluate the potential to differentiate between patient populations. In total we used 59,151 echo frames drawn from 1,666 clinical sequences. Approximately 80% of the data was used for training, and the remainder for validation. Data was split at sequence boundaries to avoid having related images in the training and validation sets. While training was done with echo images/frames, evaluation was performed for both single frame discrimination as well as sequence discrimination (by majority voting). In total 10 networks were generated and evaluated. Unlike other domains where this network topology has been used, in ultrasound there is low visual variation between classes. This work shows the potential for CNNs to be applied to this low-variation domain of medical imaging for disease discrimination.
Applying compressive sensing to TEM video: A substantial frame rate increase on any camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
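The coded-aperture forward model described above can be sketched in a few lines: each camera frame is the code-weighted sum of the sub-frames exposed during one readout period. The array sizes and random binary codes below are illustrative; the paper's statistical CS inversion (which exploits sparsity priors to undo this summation) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 16, 16                     # sub-frames coded into one camera frame
subframes = rng.random((T, H, W))       # the fast dynamics we want to recover
codes = rng.integers(0, 2, (T, H, W))   # per-pixel binary aperture codes

# Forward model: the detector integrates the T coded sub-frames
# into a single readout frame.
camera_frame = (codes * subframes).sum(axis=0)

# Per pixel this is 1 measurement of T unknowns -- underdetermined --
# which is why recovery requires a sparsity-based CS inversion.
```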
Extended image differencing for change detection in UAV video mosaics
NASA Astrophysics Data System (ADS)
Saur, Günter; Krüger, Wolfgang; Schumann, Arne
2014-03-01
Change detection is one of the most important tasks when using unmanned aerial vehicles (UAV) for video reconnaissance and surveillance. We address changes of short time scale, i.e. the observations are taken in time distances from several minutes up to a few hours. Each observation is a short video sequence acquired by the UAV in near-nadir view and the relevant changes are, e.g., recently parked or moved vehicles. In this paper we extend our previous approach of image differencing for single video frames to video mosaics. A precise image-to-image registration combined with a robust matching approach is needed to stitch the video frames to a mosaic. Additionally, this matching algorithm is applied to mosaic pairs in order to align them to a common geometry. The resulting registered video mosaic pairs are the input of the change detection procedure based on extended image differencing. A change mask is generated by an adaptive threshold applied to a linear combination of difference images of intensity and gradient magnitude. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed size of shadows, and compression or transmission artifacts. The special effects of video mosaicking, such as geometric distortions and artifacts at moving objects, have to be considered, too. In our experiments we analyze the influence of these effects on the change detection results by considering several scenes. The results show that for video mosaics this task is more difficult than for single video frames. Therefore, we extended the image registration by estimating an elastic transformation using a thin-plate spline approach. The results for mosaics are comparable to those for single video frames and are useful for interactive image exploitation due to the larger scene coverage.
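The change-mask step above (adaptive threshold on a linear combination of intensity and gradient-magnitude difference images) can be sketched as follows; the weights and the mean-plus-k-sigma threshold rule are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def change_mask(img_a, img_b, w_int=1.0, w_grad=1.0, k=2.0):
    """Binary change mask from a weighted sum of the intensity and
    gradient-magnitude difference images, thresholded adaptively."""
    def grad_mag(im):
        gy, gx = np.gradient(im.astype(float))
        return np.hypot(gx, gy)
    d = (w_int * np.abs(img_a.astype(float) - img_b.astype(float))
         + w_grad * np.abs(grad_mag(img_a) - grad_mag(img_b)))
    thresh = d.mean() + k * d.std()       # adaptive threshold
    return d > thresh

a = np.zeros((32, 32))
b = a.copy()
b[10:14, 10:14] = 50.0                    # a "recently parked vehicle"
mask = change_mask(a, b)
```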
Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing
NASA Technical Reports Server (NTRS)
Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.
1995-01-01
Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks against a simulated SSME background were produced and recorded from a visible wavelength video camera. This data was processed frame-by-frame over the time interval of interest using an image processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
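The filter cascade described above can be sketched per pixel as a first-difference highpass, a moving average, an absolute value, and a threshold; the first-difference form of the highpass, the window length, and the threshold value are illustrative assumptions.

```python
import numpy as np

def leak_detect(seq, win=5, thresh=0.5):
    """Temporal leak detector: highpass -> moving average -> |.| ->
    threshold, applied to every pixel's time series."""
    seq = np.asarray(seq, dtype=float)            # shape (T, H, W)
    hp = np.diff(seq, axis=0)                     # simple digital highpass
    kernel = np.ones(win) / win
    ma = np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode="same"), 0, hp)
    detect = np.abs(ma) > thresh                  # binary leak/no-leak map
    mean_trace = np.abs(ma).mean(axis=(1, 2))     # full-frame mean vs time
    return detect, mean_trace

# A sudden, sustained intensity step in one region mimics a leak.
frames = np.zeros((20, 8, 8))
frames[10:, 2:4, 2:4] = 5.0
detect, trace = leak_detect(frames)
```

The full-frame mean trace is the scalar alternative mentioned in the abstract: a single time series whose level indicates the intensity and extent of a leak.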
In vivo fluorescence imaging of primate retinal ganglion cells and retinal pigment epithelial cells
NASA Astrophysics Data System (ADS)
Gray, Daniel C.; Merigan, William; Wolfing, Jessica I.; Gee, Bernard P.; Porter, Jason; Dubra, Alfredo; Twietmeyer, Ted H.; Ahamd, Kamran; Tumbar, Remy; Reinholz, Fred; Williams, David R.
2006-08-01
The ability to resolve single cells noninvasively in the living retina has important applications for the study of normal retina, diseased retina, and the efficacy of therapies for retinal disease. We describe a new instrument for high-resolution, in vivo imaging of the mammalian retina that combines the benefits of confocal detection, adaptive optics, multispectral, and fluorescence imaging. The instrument is capable of imaging single ganglion cells and their axons, labeled by retrograde transport of fluorescent dyes injected into the monkey lateral geniculate nucleus (LGN). In addition, we demonstrate a method involving simultaneous imaging in two spectral bands that allows the integration of very weak signals across many frames despite inter-frame movement of the eye. With this method, we are also able to resolve the smallest retinal capillaries in fluorescein angiography and the mosaic of retinal pigment epithelium (RPE) cells with lipofuscin autofluorescence.
Faint Debris Detection by Particle Based Track-Before-Detect Method
NASA Astrophysics Data System (ADS)
Uetsuhara, M.; Ikoma, N.
2014-09-01
This study proposes a particle method to detect faint debris, hardly visible in a single frame, from an image sequence based on the concept of track-before-detect (TBD). The most widely used detection approach is detect-before-track (DBT), which first detects target signals in single frames by distinguishing intensity differences between foreground and background, and then associates the signals of each target across frames. DBT is capable of tracking bright targets, but its capability is limited: it must account for false signals and has difficulty recovering from false associations. TBD methods, on the other hand, track targets without explicitly detecting their signals, then evaluate the goodness of each track to obtain detection results. TBD has an advantage over DBT in detecting weak signals near the background level of a single frame. However, conventional TBD methods for debris detection apply a brute-force search over candidate tracks and then manually select the true one from the candidates. To remove these significant drawbacks of brute-force search and a not-fully-automated process, this study proposes a faint debris detection algorithm based on a particle TBD method consisting of sequential update of the target state and heuristic search of the initial state. The state consists of the position, velocity direction and magnitude, and size of the debris in the image at a single frame. The sequential update is implemented by a particle filter (PF), an optimal filtering technique that requires an initial distribution of the target state as prior knowledge. An evolutionary algorithm (EA) is utilized to search for this initial distribution: the EA iteratively applies propagation and likelihood evaluation of particles over the same image sequences, and the resulting set of particles is used as the initial distribution of the PF. This paper describes the algorithm of the proposed faint debris detection method.
The algorithm's performance is demonstrated on image sequences acquired during observation campaigns dedicated to GEO breakup fragments, which are expected to contain a sufficient number of faint debris images. The results indicate that the proposed method is capable of tracking faint debris at moderate computational cost at an operational level.
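A minimal particle-filter TBD loop in the spirit of the method above can be sketched as follows. The state here is reduced to (x, y, vx, vy), the synthetic target is given a generous signal level for the demo, and the particles are seeded around a rough initial guess standing in for the paper's EA-searched initial distribution; all of these are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sequence: a dim point target drifting right at ~1 px/frame.
T, H, W = 25, 32, 48
amp, sigma = 3.0, 0.5
truth = np.stack([5.0 + np.arange(T), np.full(T, 16.0)], axis=1)  # (x, y)
frames = rng.normal(0.0, sigma, (T, H, W))
for t in range(T):
    x, y = truth[t].astype(int)
    frames[t, y, x] += amp

# Particles seeded near a rough initial guess (stand-in for the EA search).
N = 500
p = np.empty((N, 4))
p[:, 0] = rng.normal(5, 2, N)     # x
p[:, 1] = rng.normal(16, 2, N)    # y
p[:, 2] = rng.normal(1, 0.3, N)   # vx
p[:, 3] = rng.normal(0, 0.3, N)   # vy

est = np.zeros((T, 2))
for t in range(T):
    p[:, :2] += p[:, 2:]                          # propagate by velocity
    p[:, :2] += rng.normal(0, 0.2, (N, 2))        # process noise
    xi = np.clip(p[:, 0].round().astype(int), 0, W - 1)
    yi = np.clip(p[:, 1].round().astype(int), 0, H - 1)
    w = np.exp(frames[t, yi, xi] / sigma)         # image-intensity likelihood
    w /= w.sum()
    est[t] = (p[:, :2] * w[:, None]).sum(axis=0)  # posterior mean track
    p = p[rng.choice(N, N, p=w)]                  # resample
```

No per-frame detection ever happens: the track accumulates evidence across frames, which is the TBD advantage over DBT for signals near the background level.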
4D multiple-cathode ultrafast electron microscopy
Baskin, John Spencer; Liu, Haihua; Zewail, Ahmed H.
2014-01-01
Four-dimensional multiple-cathode ultrafast electron microscopy is developed to enable the capture of multiple images at ultrashort time intervals for a single microscopic dynamic process. The dynamic process is initiated in the specimen by one femtosecond light pulse and probed by multiple packets of electrons generated by one UV laser pulse impinging on multiple, spatially distinct, cathode surfaces. Each packet is distinctly recorded, with timing and detector location controlled by the cathode configuration. In the first demonstration, two packets of electrons on each image frame (of the CCD) probe different times, separated by 19 picoseconds, in the evolution of the diffraction of a gold film following femtosecond heating. Future elaborations of this concept to extend its capabilities and expand the range of applications of 4D ultrafast electron microscopy are discussed. The proof-of-principle demonstration reported here provides a path toward the imaging of irreversible ultrafast phenomena of materials, and opens the door to studies involving the single-frame capture of ultrafast dynamics using single-pump/multiple-probe, embedded stroboscopic imaging. PMID:25006261
Image segmentation and dynamic lineage analysis in single-cell fluorescence microscopy.
Wang, Quanli; Niemi, Jarad; Tan, Chee-Meng; You, Lingchong; West, Mike
2010-01-01
An increasingly common component of studies in synthetic and systems biology is analysis of dynamics of gene expression at the single-cell level, a context that is heavily dependent on the use of time-lapse movies. Extracting quantitative data on the single-cell temporal dynamics from such movies remains a major challenge. Here, we describe novel methods for automating key steps in the analysis of single-cell fluorescent images (segmentation and lineage reconstruction) to recognize and track individual cells over time. The automated analysis iteratively combines a set of extended morphological methods for segmentation, and uses a neighborhood-based scoring method for frame-to-frame lineage linking. Our studies with bacteria, budding yeast and human cells demonstrate the portability and usability of these methods, whether using phase, bright-field or fluorescent images. These examples also demonstrate the utility of our integrated approach in facilitating analyses of engineered and natural cellular networks in diverse settings. The automated methods are implemented in freely available, open-source software.
Experimental single-chip color HDTV image acquisition system with 8M-pixel CMOS image sensor
NASA Astrophysics Data System (ADS)
Shimamoto, Hiroshi; Yamashita, Takayuki; Funatsu, Ryohei; Mitani, Kohji; Nojiri, Yuji
2006-02-01
We have developed an experimental single-chip color HDTV image acquisition system using an 8M-pixel CMOS image sensor. The sensor has 3840 × 2160 effective pixels and is progressively scanned at 60 frames per second. We describe the color filter array and interpolation method used to improve image quality with a high-pixel-count single-chip sensor. We also describe an experimental image acquisition system used to measure spatial frequency characteristics in the horizontal direction. The results indicate good prospects for achieving a high-quality single-chip HDTV camera that reduces pseudo signals and maintains high spatial frequency characteristics within the HDTV frequency band.
Particle displacement tracking applied to air flows
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640 × 480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 s. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.
Huang, Yan; Wang, Wei; Wang, Liang
2018-04-01
Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often shows high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN named bidirectional recurrent convolutional network for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.
Advances in indirect detector systems for ultra high-speed hard X-ray imaging with synchrotron light
NASA Astrophysics Data System (ADS)
Olbinado, M. P.; Grenzer, J.; Pradel, P.; De Resseguier, T.; Vagovic, P.; Zdora, M.-C.; Guzenko, V. A.; David, C.; Rack, A.
2018-04-01
We report on indirect X-ray detector systems for various full-field, ultra high-speed X-ray imaging methodologies, such as X-ray phase-contrast radiography, diffraction topography, grating interferometry and speckle-based imaging performed at the hard X-ray imaging beamline ID19 of the European Synchrotron—ESRF. Our work highlights the versatility of indirect X-ray detectors for multiple goals such as single synchrotron pulse isolation, multiple-frame recording at up to millions of frames per second, high efficiency, and high spatial resolution. Besides the technical advancements, potential applications are briefly introduced and discussed.
High-resolution depth profiling using a range-gated CMOS SPAD quanta image sensor.
Ren, Ximing; Connolly, Peter W R; Halimi, Abderrahim; Altmann, Yoann; McLaughlin, Stephen; Gyongy, Istvan; Henderson, Robert K; Buller, Gerald S
2018-03-05
A CMOS single-photon avalanche diode (SPAD) quanta image sensor is used to reconstruct depth and intensity profiles when operating in a range-gated mode used in conjunction with pulsed laser illumination. By designing the CMOS SPAD array to acquire photons within a pre-determined temporal gate, the need for timing circuitry was avoided and it was therefore possible to have an enhanced fill factor (61% in this case) and a frame rate (100,000 frames per second) that is more difficult to achieve in a SPAD array which uses time-correlated single-photon counting. When coupled with appropriate image reconstruction algorithms, millimeter resolution depth profiles were achieved by iterating through a sequence of temporal delay steps in synchronization with laser illumination pulses. For photon data with high signal-to-noise ratios, depth images with millimeter scale depth uncertainty can be estimated using a standard cross-correlation approach. To enhance the estimation of depth and intensity images in the sparse photon regime, we used a bespoke clustering-based image restoration strategy, taking into account the binomial statistics of the photon data and non-local spatial correlations within the scene. For sparse photon data with total exposure times of 75 ms or less, the bespoke algorithm can reconstruct depth images with millimeter scale depth uncertainty at a stand-off distance of approximately 2 meters. We demonstrate a new approach to single-photon depth and intensity profiling using different target scenes, taking full advantage of the high fill-factor, high frame rate and large array format of this range-gated CMOS SPAD array.
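The "standard cross-correlation approach" to depth estimation mentioned above can be sketched as follows: gated photon counts are collected while the gate delay is swept, correlated against an assumed-known instrument response, and the peak delay is converted to depth. The Gaussian pulse shape, delay-step size, and noise level are illustrative.

```python
import numpy as np

c = 3e8                                  # speed of light, m/s
true_depth = 2.0                         # stand-off distance, as in the paper
true_tof = 2 * true_depth / c            # round-trip time of flight

# Sweep the temporal gate through 64 delay steps bracketing the return.
gate_step = 50e-12
delays = true_tof + (np.arange(64) - 32) * gate_step

# Gated photon counts: a Gaussian pulse return plus a little background.
rng = np.random.default_rng(0)
counts = (np.exp(-0.5 * ((delays - true_tof) / 100e-12) ** 2)
          + 0.01 * rng.random(64))

# Cross-correlate with the instrument response, take the peak delay,
# convert delay to depth.
irf = np.exp(-0.5 * ((np.arange(64) - 32) * gate_step / 100e-12) ** 2)
xcorr = np.correlate(counts, irf, mode="same")
est_depth = c * delays[np.argmax(xcorr)] / 2
```

At a 50 ps delay step, one step corresponds to about 7.5 mm of depth, which is consistent with the millimeter-scale uncertainty the abstract reports.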
Keyhole imaging method for dynamic objects behind the occlusion area
NASA Astrophysics Data System (ADS)
Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong
2018-01-01
A method of keyhole imaging based on a camera array is realized to obtain the video image behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the images behind the keyhole from four directions. The multi-angle video images are saved in the form of frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to perform edge detection on the images and fill them. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm. In the two-image stitching algorithm, the SIFT method is adopted for the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is proposed in this paper. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the frame sequence in which every single frame is stitched. The results show that the video is clear and natural and the brightness transitions are smooth. There are no obvious artificial stitching marks in the video, and the method can be applied in different engineering environments.
Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera
NASA Astrophysics Data System (ADS)
Rahman, Samiur; Ullah, Sana; Ullah, Sehat
2018-01-01
Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment all unique floor types are considered and a single image is stored for each unique floor type. These floor images are considered reference images. The algorithm acquires an input image frame, then selects a region of interest and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame with the next frame and computes the mean square error between the two. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these errors is less than the threshold α, the floor has changed; otherwise, an obstacle exists. The proposed algorithm works in real time and achieves 96% accuracy.
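The decision logic above reduces to two MSE comparisons, which can be sketched directly; the threshold value and the synthetic frames are illustrative.

```python
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def classify(prev, curr, floor_refs, alpha=100.0):
    """Frame-to-frame decision: no change -> no obstacle; large change
    matching a stored floor -> floor changed; otherwise -> obstacle."""
    if mse(prev, curr) < alpha:
        return "no obstacle"
    if min(mse(curr, ref) for ref in floor_refs) < alpha:
        return "floor changed"
    return "obstacle"

tile = np.full((16, 16), 120.0)
wood = np.full((16, 16), 60.0)
floor_refs = [tile, wood]

obstacle_frame = tile.copy()
obstacle_frame[4:12, 4:12] = 200.0       # object on the tile floor

r1 = classify(tile, tile, floor_refs)
r2 = classify(tile, wood, floor_refs)
r3 = classify(tile, obstacle_frame, floor_refs)
```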
Coincidence ion imaging with a fast frame camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei
2014-12-15
A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
Imaging visible light with Medipix2.
Mac Raighne, Aaron; Brownlee, Colin; Gebert, Ulrike; Maneuski, Dzmitry; Milnes, James; O'Shea, Val; Rügheimer, Tilman K
2010-11-01
A need exists for high-speed single-photon counting optical imaging detectors. Single-photon counting high-speed detection of x rays is possible by using Medipix2 with pixelated silicon photodiodes. In this article, we report on a device that exploits the Medipix2 chip for optical imaging. The fabricated device is capable of imaging at >3000 frames/s over a 256×256 pixel matrix. The imaging performance of the detector device via the modulation transfer function is measured, and the presence of ion feedback and its degradation of the imaging properties are discussed.
Wavelet denoising of multiframe optical coherence tomography data
Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.
2012-01-01
We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain at about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103
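The core idea above, weighting wavelet detail coefficients by their consistency across frames before averaging, can be sketched with a single-level Haar transform standing in for the wavelet used in the paper; the consistency weight (|mean| over mean |.|) is one plausible choice, not the authors' exact estimator.

```python
import numpy as np

def haar2(im):
    """Single-level 2D Haar decomposition (even-sized input)."""
    s = im.reshape(im.shape[0] // 2, 2, im.shape[1] // 2, 2)
    a, b, c, d = s[:, 0, :, 0], s[:, 0, :, 1], s[:, 1, :, 0], s[:, 1, :, 1]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2(LL, LH, HL, HH):
    out = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def multiframe_denoise(frames, eps=1e-9):
    """Decompose each frame, weight detail coefficients by their
    consistency across frames (structure survives averaging, speckle
    does not), then average and reconstruct."""
    bands = [np.stack(b) for b in zip(*(haar2(f) for f in frames))]
    LL = bands[0].mean(axis=0)
    details = []
    for D in bands[1:]:
        w = np.abs(D.mean(axis=0)) / (np.abs(D).mean(axis=0) + eps)
        details.append(w * D.mean(axis=0))
    return ihaar2(LL, *details)

rng = np.random.default_rng(0)
clean = np.zeros((32, 32))
clean[:, 16:] = 10.0                      # a step edge ("real tissue")
frames = [clean + rng.normal(0, 1.0, clean.shape) for _ in range(8)]
denoised = multiframe_denoise(frames)
```

Where the underlying structure is consistent across frames the weight stays near one (the edge is preserved); where the detail coefficients are random speckle the weight collapses toward zero, which is why fewer input frames are needed than with plain averaging.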
Direct Estimation of Structure and Motion from Multiple Frames
1990-03-01
sequential frames in an image sequence. As a consequence, the information that can be extracted from a single optical flow field is limited to a snapshot of...researchers have developed techniques that extract motion and structure information without computation of the optical flow. Best known are the "direct...operated iteratively on a sequence of images to recover structure. It required feature extraction and matching. Broida and Chellappa [9] suggested the use of
Single-pixel imaging using balanced detection and a digital micromirror device
NASA Astrophysics Data System (ADS)
Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, Néstor; Andrés, P.; Lancis, J.
2018-02-01
Over the past decade, single-pixel imaging (SPI) has become established as a viable tool in scenarios where traditional imaging techniques struggle to provide images of acceptable quality in practicable times and at reasonable cost. However, SPI still has several limitations inherent to the technique, such as coping with spurious light and operating in real time. Here we present a novel approach using complementary measurements and a single balanced detector. By using balanced detection, we improve the frame rate of complementary measurement architectures by a factor of two. Furthermore, the use of a balanced detector provides the method with immunity to environmental light.
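The factor-of-two speedup can be seen in a small simulation: each ±1 Hadamard pattern is split into a binary pattern and its complement, and the balanced detector measures their difference in a single exposure rather than two. The pattern set, scene size, and noiseless detector are illustrative assumptions.

```python
import numpy as np

# Sylvester-construction Hadamard matrix: 64 patterns of 64 pixels.
H = np.array([[1.0]])
for _ in range(6):
    H = np.block([[H, H], [H, -H]])
N = H.shape[0]

x = np.random.default_rng(0).random(N)      # unknown 8x8 scene, flattened

# On the DMD, each +/-1 row becomes a binary pattern and its complement;
# the balanced detector outputs their difference in ONE exposure,
# i.e. one full H-row projection per frame.
pos = (1 + H) / 2
neg = (1 - H) / 2
d = pos @ x - neg @ x                       # balanced output equals H @ x

x_rec = H.T @ d / N                         # Hadamard inversion
```

Because any constant background illuminates both arms of the balanced detector equally, it cancels in the difference, which is the source of the environmental-light immunity claimed above.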
NASA Astrophysics Data System (ADS)
Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha
2012-09-01
Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly reduce the cost of the CIS. The method supports not only single-image capture, but also bracketing to capture three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M-pixel CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
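The histogram-matching step that builds the table can be sketched as CDF matching between the two previews; the 8-bit depth, the synthetic half-exposure model, and the nearest-CDF lookup rule are illustrative assumptions, not the ILUT construction itself.

```python
import numpy as np

def build_lut(short_prev, long_prev):
    """Match the short (under-exposed) preview's histogram to the
    long-exposure preview via CDFs; the resulting 256-entry table can
    then correct the full capture pixel-by-pixel, with no frame memory."""
    cdf_s = np.cumsum(np.bincount(short_prev.ravel(), minlength=256))
    cdf_l = np.cumsum(np.bincount(long_prev.ravel(), minlength=256))
    cdf_s = cdf_s / cdf_s[-1]
    cdf_l = cdf_l / cdf_l[-1]
    # For each short-exposure level, pick the long-exposure level with
    # the nearest (not smaller) CDF value.
    return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, (64, 64))
long_prev = scene.astype(np.uint8)                # well-exposed preview
short_prev = (scene // 2).astype(np.uint8)        # half-exposure preview
lut = build_lut(short_prev, long_prev)
corrected = lut[(scene // 2).astype(np.uint8)]    # per-pixel table lookup
```

Since the correction is a single table lookup per pixel, it maps naturally onto streaming hardware: the captured image never needs to be buffered.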
Image sensor with high dynamic range linear output
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Fossum, Eric R. (Inventor)
2007-01-01
Designs and operational methods to increase the dynamic range of image sensors, and APS devices in particular, by achieving more than one integration time for each pixel. An APS system with more than one column-parallel signal chain for readout is described for maintaining a high frame rate during readout. Each active pixel is sampled multiple times during a single frame readout, resulting in multiple integration times. The operational methods can also be used to obtain multiple integration times per pixel with an APS design having a single column-parallel signal chain for readout. Furthermore, high-speed, high-resolution analog-to-digital conversion can be implemented.
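The way two integration times extend the linear dynamic range can be sketched with a simple merge rule: keep the long-integration sample where it is unsaturated, otherwise scale up the short-integration sample by the exposure ratio. The full-well value, ratio, and saturation margin are illustrative numbers, not the patent's parameters.

```python
import numpy as np

FULL_WELL = 1000.0                        # saturation level (illustrative)
RATIO = 16.0                              # long/short integration-time ratio

def combine(long_sig, short_sig, sat=0.95 * FULL_WELL):
    """Merge two readouts of the same pixel taken with different
    integration times into one linear, extended-range output."""
    return np.where(long_sig < sat, long_sig, short_sig * RATIO)

# Photon flux spanning more range than one integration time can capture.
flux = np.array([1.0, 10.0, 100.0, 1000.0])      # arbitrary units
long_sig = np.clip(flux * 16.0, 0, FULL_WELL)    # long integration saturates
short_sig = np.clip(flux * 1.0, 0, FULL_WELL)    # short integration
merged = combine(long_sig, short_sig)
```

The merged output stays proportional to the incident flux across the whole range, which is the "high dynamic range linear output" of the title.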
New image compression scheme for digital angiocardiography application
NASA Astrophysics Data System (ADS)
Anastassopoulos, George C.; Lymberopoulos, Dimitris C.; Kotsopoulos, Stavros A.; Kokkinakis, George C.
1993-06-01
The present paper deals with the development and evaluation of a new compression scheme for angiocardiography images. This scheme provides considerable compression of the medical data file through two different stages: the first stage removes the redundancy within a single frame, while the second stage removes the redundancy among sequential frames. Within these stages the data compression ratio can be easily adjusted according to the needs of angiocardiography applications, where still or moving (slow- or full-motion) images are handled. The developed scheme has been tailored to the real needs of diagnosis-oriented conferencing and teleworking processes, where Unified Image Viewing facilities are required.
High-frame-rate full-vocal-tract 3D dynamic speech imaging.
Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P
2017-04-01
To achieve high temporal frame rate, high spatial resolution and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm³. Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution and high nominal frame rate to provide dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
Mattioli Della Rocca, Francescopaolo
2018-01-01
This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm. PMID:29641479
Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.
Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina
2011-10-01
Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.
A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications
NASA Astrophysics Data System (ADS)
Wurden, G. A.; Coffey, S. K.
2012-10-01
For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm² of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (~200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.
High speed fluorescence imaging with compressed ultrafast photography
NASA Astrophysics Data System (ADS)
Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.
2017-02-01
Fluorescent lifetime imaging is an optical technique that facilitates imaging molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescent lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescent lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescent lifetime imaging to overcome these acquisition rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x,y,t) for each readout image. Thus, application of compressed ultrafast photography allows us to acquire an entire fluorescent lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we demonstrate the ability of this technique to perform single-shot fluorescent lifetime imaging of cells and microspheres.
Vision-based object detection and recognition system for intelligent vehicles
NASA Astrophysics Data System (ADS)
Ran, Bin; Liu, Henry X.; Martono, Wilfung
1999-01-01
Recently, a proactive crash mitigation system was proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects by using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: the vehicle detection and recognition sub-system and the traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames. The analysis of the single image frame is performed every ten full-size images. The information model obtains information related to the object, such as time to collision for the object vehicle and relative distance from the traffic signs. Experimental results demonstrated a robust and accurate system in real-time object detection and recognition over thousands of image frames.
Comet Wild 2 Up Close and Personal
NASA Technical Reports Server (NTRS)
2004-01-01
On January 2, 2004 NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced 'Vilt-2'). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but 'stretched' so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter.
High-sensitivity, high-speed continuous imaging system
Watson, Scott A; Bender, III, Howard A
2014-11-18
A continuous imaging system for recording low levels of light, typically extending over small distances, with high frame rates and a large number of frames is described. Photodiode pixels disposed in an array having a chosen geometry, each pixel having a dedicated amplifier, analog-to-digital converter, and memory, provide parallel operation of the system. When combined with a plurality of scintillators responsive to a selected source of radiation in a scintillator array, the light from each scintillator being directed to a single corresponding photodiode in close proximity or lens-coupled thereto, embodiments of the present imaging system may provide images of x-ray, gamma-ray, proton, and neutron sources with high efficiency.
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute, controlled head motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates of 60 and 30 (reduced from 120) frames per second and of reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering requirements for systems measuring pulse rate over time windows of sufficient length.
Chen, Hui; Palmer, N; Dayton, M; Carpenter, A; Schneider, M B; Bell, P M; Bradley, D K; Claus, L D; Fang, L; Hilsabeck, T; Hohenberger, M; Jones, O S; Kilkenny, J D; Kimmel, M W; Robertson, G; Rochau, G; Sanchez, M O; Stahoviak, J W; Trotter, D C; Porter, J L
2016-11-01
A novel x-ray imager, which takes time-resolved gated images along a single line-of-sight, has been successfully implemented at the National Ignition Facility (NIF). This Gated Laser Entrance Hole diagnostic, G-LEH, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories to upgrade the existing Static X-ray Imager diagnostic at NIF. The new diagnostic is capable of capturing two laser-entrance-hole images per shot on its 1024 × 448 pixels photo-detector array, with integration times as short as 1.6 ns per frame. Since its implementation on NIF, the G-LEH diagnostic has successfully acquired images from various experimental campaigns, providing critical new information for understanding the hohlraum performance in inertial confinement fusion (ICF) experiments, such as the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross beam energy transfer, and plasma instability growth near the hohlraum wall.
Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan
2017-01-01
In this paper, an irregular-displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the low-resolution images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples, including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm². PMID:29657866
Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2016-02-22
In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
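The per-pixel recovery step behind this kind of coded-shutter compression can be sketched as a linear inversion (a hedged simplification: the sensor solves an underdetermined problem with image priors and disparity correction, whereas here K, T, and the ridge term are illustrative and the system is overdetermined):

```python
import numpy as np

def reconstruct_pixel(y, codes, reg=1e-3):
    """Recover a T-sample temporal signal x for one pixel from K coded
    measurements y[k] = sum_t codes[k, t] * x[t], via ridge-regularized
    least squares: x = (A^T A + reg*I)^-1 A^T y. Each row of 'codes' is
    one aperture's random binary shutter pattern over the T time slots."""
    A = np.asarray(codes, dtype=float)   # K x T shutter patterns
    T = A.shape[1]
    return np.linalg.solve(A.T @ A + reg * np.eye(T), A.T @ y)
```

With fewer measurements than time samples (as in the 15-frame, 32-image reconstruction the abstract reports), the same normal-equations core would be embedded in a prior-regularized solver rather than inverted directly.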
A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurden, G. A.; Coffey, S. K.
2012-10-15
For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm² of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (~200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.
Measuring single-cell gene expression dynamics in bacteria using fluorescence time-lapse microscopy
Young, Jonathan W; Locke, James C W; Altinok, Alphan; Rosenfeld, Nitzan; Bacarian, Tigran; Swain, Peter S; Mjolsness, Eric; Elowitz, Michael B
2014-01-01
Quantitative single-cell time-lapse microscopy is a powerful method for analyzing gene circuit dynamics and heterogeneous cell behavior. We describe the application of this method to imaging bacteria by using an automated microscopy system. This protocol has been used to analyze sporulation and competence differentiation in Bacillus subtilis, and to quantify gene regulation and its fluctuations in individual Escherichia coli cells. The protocol involves seeding and growing bacteria on small agarose pads and imaging the resulting microcolonies. Images are then reviewed and analyzed using our laboratory's custom MATLAB analysis code, which segments and tracks cells in a frame-to-frame method. This process yields quantitative expression data on cell lineages, which can illustrate dynamic expression profiles and facilitate mathematical models of gene circuits. With fast-growing bacteria, such as E. coli or B. subtilis, image acquisition can be completed in 1 d, with an additional 1–2 d for progressing through the analysis procedure. PMID:22179594
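The frame-to-frame linking step of such a pipeline can be sketched as greedy nearest-neighbor matching of cell centroids (a minimal stand-in for the laboratory's custom MATLAB segmentation-and-tracking code; the distance gate is an assumed parameter):

```python
import numpy as np

def link_frames(prev_centroids, curr_centroids, max_dist=5.0):
    """Greedily link each cell centroid in the previous frame to its
    nearest unclaimed centroid in the current frame, rejecting matches
    farther than max_dist (e.g. a cell that divided or left the field).
    Returns a dict: previous-frame index -> current-frame index."""
    links = {}
    taken = set()
    for i, p in enumerate(prev_centroids):
        d = np.linalg.norm(np.asarray(curr_centroids) - p, axis=1)
        for j in np.argsort(d):
            if d[j] > max_dist:
                break
            if j not in taken:
                links[i] = int(j)
                taken.add(j)
                break
    return links
```

Chaining these per-frame links over the whole movie yields the cell lineages from which single-cell expression traces are read out.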
Multiple signal classification algorithm for super-resolution fluorescence microscopy
Agarwal, Krishna; Macháň, Radek
2016-01-01
Single-molecule localization techniques are restricted by long acquisition and computational times, or the need of special fluorophores or biologically toxic photochemical environments. Here we propose a statistical super-resolution technique of wide-field fluorescence microscopy we call the multiple signal classification algorithm which has several advantages. It provides resolution down to at least 50 nm, requires fewer frames and lower excitation power and works even at high fluorophore concentrations. Further, it works with any fluorophore that exhibits blinking on the timescale of the recording. The multiple signal classification algorithm shows comparable or better performance in comparison with single-molecule localization techniques and four contemporary statistical super-resolution methods for experiments of in vitro actin filaments and other independently acquired experimental data sets. We also demonstrate super-resolution at timescales of 245 ms (using 49 frames acquired at 200 frames per second) in samples of live-cell microtubules and live-cell actin filaments imaged without imaging buffers. PMID:27934858
A novel snapshot polarimetric imager
NASA Astrophysics Data System (ADS)
Wong, Gerald; McMaster, Ciaran; Struthers, Robert; Gorman, Alistair; Sinclair, Peter; Lamb, Robert; Harvey, Andrew R.
2012-10-01
Polarimetric imaging (PI) is of increasing importance in determining additional scene information beyond that of conventional images. For very long-range surveillance, image quality is degraded by turbulence. Furthermore, the high magnification required to create images with sufficient spatial resolution for object recognition and identification requires long-focal-length optical systems, which are incompatible with the size and weight restrictions of aircraft. Techniques which allow detection and recognition of an object at the single-pixel level are therefore likely to provide advance warning of approaching threats or long-range object cueing. PI is a technique that has the potential to detect object signatures at the pixel level. Early attempts to develop PI used rotating polarisers (and spectral filters) which recorded sequential polarized images from which the complete Stokes matrix could be derived. This approach has built-in latency between frames and requires accurate registration of consecutive frames to analyze real-time video of moving objects. Alternatively, multiple optical systems and cameras have been demonstrated to remove latency, but this approach increases the cost and bulk of the imaging system. In our investigation we present a simplified imaging system that divides an image into two orthogonal polarimetric components which are then simultaneously projected onto a single detector array. Thus polarimetric data are recorded without latency in a single snapshot. We further show that, for pixel-level objects, the data derived from only two orthogonal states (H and V) are sufficient to increase the probability of detection whilst reducing false alarms compared to conventional unpolarised imaging.
Robust Small Target Co-Detection from Airborne Infrared Image Sequences.
Gao, Jingli; Wen, Chenglin; Liu, Meiqin
2017-09-29
In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. Firstly, a dense target extraction model based on nonlinear weights is proposed, which suppresses image backgrounds and enhances small targets better than weights of singular values. Secondly, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates structural priors in terms of local weighted entropy; thus, it can extract real targets accurately and suppress background clutter efficiently. Finally, the commonality of targets in the spatio-temporal domain is used to construct a target refinement model for false-alarm suppression and target confirmation. Since real targets appear in both the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association across consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame, together with tracklet association of the location correlation maps for successive frames, provides strong ability to discriminate between small targets and background clutter. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutter effectively, but also detect targets accurately even in the presence of target-like interference.
Miller, Brian W.; Furenlid, Lars R.; Moore, Stephen K.; Barber, H. Bradford; Nagarkar, Vivek V.; Barrett, Harrison H.
2010-01-01
FastSPECT III is a stationary single-photon emission computed tomography (SPECT) imager designed specifically for imaging and studying neurological pathologies in rodent brain, including Alzheimer's and Parkinson's disease. Twenty independent BazookaSPECT [1] gamma-ray detectors acquire projections of a spherical field of view with pinholes selected for desired resolution and sensitivity. Each BazookaSPECT detector comprises a columnar CsI(Tl) scintillator, image intensifier, optical lens, and fast-frame-rate CCD camera. Data streams back to processing computers via FireWire interfaces, and heavy use of graphics processing units (GPUs) ensures that each frame of data is processed in real time to extract the images of individual gamma-ray events. Details of the system design, imaging aperture fabrication methods, and preliminary projection images are presented. PMID:21218137
McMullan, G; Vinothkumar, K R; Henderson, R
2015-11-01
We have recorded dose-fractionated electron cryo-microscope images of thin films of pure flash-frozen amorphous ice and pre-irradiated amorphous carbon on a Falcon II direct electron detector using 300 keV electrons. We observe Thon rings [1] in both the power spectrum of the summed frames and the sum of power spectra from the individual frames. The Thon rings from amorphous carbon images are always more visible in the power spectrum of the summed frames whereas those of amorphous ice are more visible in the sum of power spectra from the individual frames. This difference indicates that while pre-irradiated carbon behaves like a solid during the exposure, amorphous ice behaves like a fluid with the individual water molecules undergoing beam-induced motion. Using the measured variation in the power spectra amplitude with number of electrons per image we deduce that water molecules are randomly displaced by a mean squared distance of ∼1.1 Å² for every incident 300 keV e⁻/Å². The induced motion leads to an optimal exposure with 300 keV electrons of 4.0 e⁻/Å² per image with which to observe Thon rings centred around the strong 3.7 Å scattering peak from amorphous ice. The beam-induced movement of the water molecules generates pseudo-Brownian motion of embedded macromolecules. The resulting blurring of single particle images contributes an additional term, on top of that from radiation damage, to the minimum achievable B-factor for macromolecular structure determination. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.
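Why motion suppresses rings in the power spectrum of the sum, but not in the sum of power spectra, can be illustrated with a toy model (random circular shifts stand in for beam-induced motion; the specimen, frame count, and shift range are arbitrary assumptions, not the experimental conditions):

```python
import numpy as np

rng = np.random.default_rng(2)

def power(img):
    return np.abs(np.fft.fft2(img)) ** 2

# Toy "specimen" with structure at all spatial frequencies
base = rng.standard_normal((64, 64))

# Dose-fractionated movie: each frame is the specimen displaced by a random
# shift, mimicking beam-induced motion of a fluid-like sample
frames = [np.roll(base, tuple(rng.integers(-4, 5, 2)), axis=(0, 1))
          for _ in range(20)]

ps_of_sum = power(np.sum(frames, axis=0))   # coherent sum: motion-sensitive
sum_of_ps = sum(power(f) for f in frames)   # shift-invariant: signal preserved

# Coherent gain per frequency: ~N where phases align (low freq),
# ~1 where the random shifts scramble the phases (high freq)
gain = ps_of_sum / (sum_of_ps + 1e-12)
```

Since a circular shift changes only the Fourier phase, `sum_of_ps` keeps the full signal at every frequency, while `ps_of_sum` loses the high-frequency coherence, matching the amorphous-ice observation in the abstract.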
Video Image Stabilization and Registration (VISAR) Software
NASA Technical Reports Server (NTRS)
1999-01-01
Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it also smooths jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also serve defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
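The stabilize-then-integrate principle behind adding frames can be sketched as follows (a toy with circular shifts and already-known offsets; VISAR itself estimates the camera motion, rotation, and zoom from the video):

```python
import numpy as np

def stabilize_and_sum(frames, shifts):
    """Toy version of the stabilize-and-integrate idea: undo each frame's
    known camera shift, then average the registered frames. Static scene
    content adds coherently while zero-mean noise shrinks ~1/sqrt(N)."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)
```

Averaging 25 registered frames cuts the noise standard deviation by about a factor of 5, which is the effect that turns "less than a second of videotape" into a clarified image.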
Comet Wild 2 Up Close and Personal
2004-01-02
On January 2, 2004 NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced "Vilt-2"). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but "stretched" so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter. http://photojournal.jpl.nasa.gov/catalog/PIA05571
Surface coil proton MR imaging at 2 T.
Röschmann, P; Tischler, R
1986-10-01
We describe the design and application of surface coils for magnetic resonance (MR) imaging at high resonance frequencies (85 MHz). Circular, rectangular-frame, and reflector-type surface coils were used in the transmit-and-receive mode. With these coils, the required radio frequency power is reduced by factors of two up to 100 with respect to head and body coils. With the small, circular coils, high-resolution images of a small region of interest can be obtained that are free of foldback and motion artifacts originating outside the field of interest. With the rectangular-frame and reflector coils, large fields of view are also accessible. As examples of applications, single- and multiple-section images of the eye, knee, head and shoulder, and spinal cord are provided.
Compact Kirkpatrick–Baez microscope mirrors for imaging laser-plasma x-ray emission
Marshall, F. J.
2012-07-18
Compact Kirkpatrick–Baez microscope mirror components for use in imaging laser-plasma x-ray emission have been manufactured, coated, and tested. A single mirror pair has dimensions of 14 × 7 × 9 mm and a best resolution of ~5 μm. The mirrors are coated with Ir, providing a useful energy range of 2-8 keV when operated at a grazing angle of 0.7°. The mirrors can be circularly arranged to provide 16 images of the target emission, a configuration best suited for use in combination with a custom framing camera. An alternative arrangement of the mirrors would allow alignment of the images with a four-strip framing camera.
Freeze frame analysis on high speed cinematography of Nd/YAG laser explosions in ocular tissues.
Vernon, S A; Cheng, H
1986-01-01
High-speed colour cinematography at 400 frames per second was used to photograph both single- and train-burst Nd/YAG laser applications in ox eyes at threshold energy levels. Measurements of the extent and speed of particle scatter and of tissue distortion from the acoustic transient were made from a sequential freeze-frame analysis of the films. Particles were observed to travel over 8 mm from the site of Nd/YAG application 20 milliseconds after a single pulse, at initial speeds in excess of 20 km/h. The use of train bursts of pulses was seen to increase the number of particles scattered and to project the wavefront of particles further from the point of laser application. PMID:3754458
High-contrast imaging in the cloud with klipReduce and Findr
NASA Astrophysics Data System (ADS)
Haug-Baltzell, Asher; Males, Jared R.; Morzinski, Katie M.; Wu, Ya-Lin; Merchant, Nirav; Lyons, Eric; Close, Laird M.
2016-08-01
Astronomical data sets are growing ever larger, and the area of high contrast imaging of exoplanets is no exception. With the advent of fast, low-noise detectors operating at 10 to 1000 Hz, huge numbers of images can be taken during a single hours-long observation. High frame rates offer several advantages, such as improved registration, frame selection, and improved speckle calibration. However, advanced image processing algorithms are computationally challenging to apply. Here we describe a parallelized, cloud-based data reduction system developed for the Magellan Adaptive Optics VisAO camera, which is capable of rapidly exploring tens of thousands of parameter sets affecting the Karhunen-Loève image processing (KLIP) algorithm to produce high-quality direct images of exoplanets. We demonstrate these capabilities with a visible wavelength high contrast data set of a hydrogen-accreting brown dwarf companion.
Chen, Yuling; Lou, Yang; Yen, Jesse
2017-07-01
During conventional ultrasound imaging, the need for multiple transmissions per image and the time of flight for the desired imaging depth limit the frame rate of the system. Using a single plane wave pulse during each transmission followed by parallel receive processing allows for high frame rate imaging. However, image quality is degraded because of the lack of transmit focusing. Beamforming by spatial matched filtering (SMF) is a promising method that focuses ultrasonic energy using spatial filters constructed from the transmit-receive impulse response of the system. Studies by other researchers have shown that SMF beamforming can provide dynamic transmit-receive focusing throughout the field of view. In this paper, we apply SMF beamforming to plane wave transmissions (PWTs) to achieve both dynamic transmit-receive focusing at all imaging depths and a high imaging frame rate (>5000 frames per second). We demonstrate the capability of the combined method (PWT + SMF) to achieve two-way focusing mathematically through analysis based on the narrowband Rayleigh-Sommerfeld diffraction theory. Moreover, the broadband performance of PWT + SMF is quantified in terms of lateral resolution and contrast from both computer simulations and experimental data. Results are compared between SMF beamforming and conventional delay-and-sum (DAS) beamforming in both simulations and experiments. At an imaging depth of 40 mm, simulation results showed a 29% lateral resolution improvement and a 160% contrast improvement with PWT + SMF. These improvements were 17% and 48%, respectively, for experimental data with noise.
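The conventional DAS baseline that the paper compares against can be sketched for a single plane-wave transmit (an illustrative sketch, not the authors' implementation; the sound speed, sampling rate, and array geometry in the test are assumptions):

```python
import numpy as np

C = 1540.0  # assumed speed of sound in tissue, m/s

def das_beamform(rf, fs, elem_x, pts):
    """Minimal delay-and-sum (DAS) receive beamformer for a single
    normal-incidence plane-wave transmit. rf[n, i] is sample n from
    element i at positions elem_x; pts is a list of (x, z) image points.
    For each point, sum the samples at the round-trip delay: the
    plane-wave transmit delay z/c plus each element's receive path."""
    out = np.zeros(len(pts))
    for k, (x, z) in enumerate(pts):
        tau = z / C + np.sqrt(z ** 2 + (elem_x - x) ** 2) / C
        idx = np.round(tau * fs).astype(int)
        valid = (idx >= 0) & (idx < rf.shape[0])
        out[k] = rf[idx[valid], np.flatnonzero(valid)].sum()
    return out
```

SMF beamforming replaces this single-sample delay lookup with a correlation against the full spatial impulse response, which is where the extra transmit-receive focusing comes from.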
Construction of high frame rate images with Fourier transform
NASA Astrophysics Data System (ADS)
Peng, Hu; Lu, Jian-Yu
2002-05-01
Traditionally, images are constructed with a delay-and-sum method that adjusts the phases of received signals (echoes) scattered from the same point in space so that they are summed in phase. Recently, the relationship between the delay-and-sum method and the Fourier transform has been investigated [Jian-yu Lu, Anjun Liu, and Hu Peng, ``High frame rate and delay-and-sum imaging methods,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control (submitted)]. In this study, a generic Fourier transform method is developed. Two-dimensional (2-D) or three-dimensional (3-D) high frame rate images can be constructed using the Fourier transform with a single transmission of an ultrasound pulse from an array, as long as the transmission field of the array is known. To verify our theory, computer simulations have been performed with a linear array, a 2-D array, a convex curved array, and a spherical 2-D array. The simulation results are consistent with our theory. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
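The link between delay-and-sum and the Fourier transform rests on the shift theorem: a time delay of d samples corresponds to multiplication of the spectrum by a linear phase, so delays can be applied (or removed) in the frequency domain. A toy 1-D sketch of this principle, not the authors' full 2-D/3-D algorithm:

```python
import numpy as np

# Shift theorem: delaying by d samples multiplies the DFT by exp(-2j*pi*k*d/N).
# Applying the conjugate phase and inverse-transforming realigns the echo --
# the frequency-domain counterpart of a delay line in delay-and-sum.

N = 256
d = 37                                # integer delay in samples (hypothetical)
sig = np.zeros(N)
sig[10] = 1.0                         # reference echo at sample 10
delayed = np.roll(sig, d)             # same echo arriving d samples later

k = np.arange(N)
phase = np.exp(2j * np.pi * k * d / N)            # conjugate of the delay phase
realigned = np.fft.ifft(np.fft.fft(delayed) * phase).real

peak = int(np.argmax(realigned))      # echo restored to sample 10
```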
Moving target detection in flash mode against stroboscopic mode by active range-gated laser imaging
NASA Astrophysics Data System (ADS)
Zhang, Xuanyu; Wang, Xinwei; Sun, Liang; Fan, Songtao; Lei, Pingshun; Zhou, Yan; Liu, Yuliang
2018-01-01
Moving target detection is important for target tracking and remote surveillance with active range-gated laser imaging. The technique has two operation modes, distinguished by the number of laser pulses per frame: stroboscopic mode, which accumulates multiple laser pulses per frame, and flash mode, which uses a single laser pulse per frame. In this paper, we have established a range-gated laser imaging system in which two lasers with different repetition frequencies serve the two modes. An electric fan and a horizontal sliding track were selected as the moving targets to compare motion blurring between the two modes. The system working in flash mode shows markedly less motion blur than in stroboscopic mode. Furthermore, based on experiments and theoretical analysis, we show that images acquired in stroboscopic mode have a higher signal-to-noise ratio than those acquired in flash mode in both indoor and underwater environments.
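The SNR advantage of stroboscopic accumulation can be illustrated numerically: averaging N pulses with uncorrelated noise improves SNR by roughly sqrt(N). A Monte Carlo sketch with a hypothetical signal level and noise sigma (not the paper's system parameters):

```python
import numpy as np

# Flash mode: one pulse per frame. Stroboscopic mode: average of n_pulses.
# For uncorrelated additive noise the noise std shrinks by sqrt(n_pulses),
# so the SNR gain of stroboscopic over flash is ~sqrt(n_pulses).

rng = np.random.default_rng(0)
signal, sigma, n_pulses, trials = 10.0, 2.0, 16, 20000

flash = signal + sigma * rng.standard_normal(trials)
strobe = signal + sigma * rng.standard_normal((n_pulses, trials)).mean(axis=0)

snr_flash = signal / flash.std()
snr_strobe = signal / strobe.std()
gain = snr_strobe / snr_flash         # expected ~ sqrt(16) = 4
```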
NASA Technical Reports Server (NTRS)
Mcewen, A. S.; Soderblom, L. A.; Becker, T. L.; Lee, E. M.; Batson, R. M.
1993-01-01
About 1000 Viking Orbiter red and violet filter images have been processed to provide global color coverage of Mars at a scale of 1 km/pixel. Individual image frames acquired during a single spacecraft revolution ('rev') were first processed through radiometric calibration, cosmetic cleanup, geometric control, reprojection, and mosaicking. A total of 57 'single-rev' mosaics have been produced. Phase angles range from 13 to 85 degrees. All the mosaics are geometrically tied to the Mars digital image mosaic (MDIM), a black-and-white base map with a scale of 231 m/pixel.
High-speed single-pixel digital holography
NASA Astrophysics Data System (ADS)
González, Humberto; Martínez-León, Lluís.; Soldevila, Fernando; Araiza-Esquivel, Ma.; Tajahuerce, Enrique; Lancis, Jesús
2017-06-01
The complete phase and amplitude information of biological specimens can be easily determined by phase-shifting digital holography. Spatial light modulators (SLMs) based on liquid crystal technology, with a frame-rate around 60 Hz, have been employed in digital holography. In contrast, digital micro-mirror devices (DMDs) can reach frame rates up to 22 kHz. A method proposed by Lee to design computer generated holograms (CGHs) permits the use of such binary amplitude modulators as phase-modulation devices. Single-pixel imaging techniques record images by sampling the object with a sequence of micro-structured light patterns and using a simple photodetector. Our group has reported some approaches combining single-pixel imaging and phase-shifting digital holography. In this communication, we review these techniques and present the possibility of a high-speed single-pixel phase-shifting digital holography system with phase-encoded illumination. This system is based on a Mach-Zehnder interferometer, with a DMD acting as the modulator for projecting the sampling patterns on the object and also being used for phase-shifting. The proposed sampling functions are phase-encoded Hadamard patterns generated through a Lee hologram approach. The method allows the recording of the complex amplitude distribution of an object at high speed on account of the high frame rates of the DMD. Reconstruction may take just a few seconds. Besides, the optical setup is envisaged as a true adaptive system, which is able to measure the aberration induced by the optical system in the absence of a sample object, and then to compensate the wavefront in the phase-modulation stage.
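The single-pixel measurement model underlying such systems can be sketched in a few lines: project each pattern, record one detector value, then invert the (orthogonal) pattern basis. Here the patterns are plain Hadamard rows; the Lee-hologram phase encoding and interferometric detection of the actual system are omitted, and the object is a hypothetical 16-pixel image:

```python
import numpy as np

# Single-pixel imaging with Hadamard patterns: the detector readings are the
# inner products of the object with each pattern (rows of H). Since Sylvester
# Hadamard matrices satisfy H @ H.T = n * I, the object is recovered by H.T/n.

def hadamard(n):
    """Sylvester construction; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n = 16                                 # e.g., a 4x4 image flattened to 16 pixels
H = hadamard(n)
image = np.arange(n, dtype=float)      # hypothetical object
measurements = H @ image               # one photodetector reading per pattern
recovered = (H.T @ measurements) / n   # exact inversion of the orthogonal basis
```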
High-performance floating-point image computing workstation for medical applications
NASA Astrophysics Data System (ADS)
Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin
1990-07-01
The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), use as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and use as an electronic alternator exploiting its multiple-monitor display capability and large, fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region-of-interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer.
Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping, and other algorithms (e.g., Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.
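The frame-buffer figures quoted above are internally consistent, as a quick arithmetic check shows (treating the stated 16 Mbytes as 16 MiB):

```python
# A 2048 x 2048 buffer at 32 bits/pixel occupies 16 MiB, and its 32-bit depth
# can be repartitioned into four 2K x 2K x 8-bit gray-scale image frames,
# matching the four 256-color palettes described in the abstract.

width, height, bits_per_pixel = 2048, 2048, 32
total_bytes = width * height * bits_per_pixel // 8   # frame-buffer size in bytes
gray_frames = bits_per_pixel // 8                    # 8-bit frames per buffer
```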
Quantum measurement of a rapidly rotating spin qubit in diamond.
Wood, Alexander A; Lilette, Emmanuel; Fein, Yaakov Y; Tomek, Nikolas; McGuinness, Liam P; Hollenberg, Lloyd C L; Scholten, Robert E; Martin, Andy M
2018-05-01
A controlled qubit in a rotating frame opens new opportunities to probe fundamental quantum physics, such as geometric phases in physically rotating frames, and can potentially enhance detection of magnetic fields. Realizing a single qubit that can be measured and controlled during physical rotation is experimentally challenging. We demonstrate quantum control of a single nitrogen-vacancy (NV) center within a diamond rotated at 200,000 rpm, a rotational period comparable to the NV spin coherence time T2. We stroboscopically image individual NV centers that execute rapid circular motion in addition to rotation and demonstrate preparation, control, and readout of the qubit quantum state with lasers and microwaves. Using spin-echo interferometry of the rotating qubit, we are able to detect modulation of the NV Zeeman shift arising from the rotating NV axis and an external DC magnetic field. Our work establishes single NV qubits in diamond as quantum sensors in the physically rotating frame and paves the way for the realization of single-qubit diamond-based rotation sensors.
1999-06-01
Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions as well as rotation and zoom effects, producing clearer images of moving objects; it smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used in defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
Processing Near-Infrared Imagery of the Orion Heatshield During EFT-1 Hypersonic Reentry
NASA Technical Reports Server (NTRS)
Spisz, Thomas S.; Taylor, Jeff C.; Gibson, David M.; Kennerly, Steve; Osei-Wusu, Kwame; Horvath, Thomas J.; Schwartz, Richard J.; Tack, Steven; Bush, Brett C.; Oliver, A. Brandon
2016-01-01
The Scientifically Calibrated In-Flight Imagery (SCIFLI) team captured high-resolution, calibrated, near-infrared imagery of the Orion capsule during atmospheric reentry of the EFT-1 mission. A US Navy NP-3D aircraft equipped with a multi-band optical sensor package, referred to as Cast Glance, acquired imagery of the Orion capsule's heatshield during a period when Orion was slowing from approximately Mach 10 to Mach 7. The line-of-sight distance ranged from approximately 65 to 40 nmi. Global surface temperatures of the capsule's thermal heatshield derived from the near-infrared intensity measurements complemented the in-depth (embedded) thermocouple measurements. Moreover, these derived surface temperatures are essential for assessing the thermocouple data, which rely on inverse heat transfer methods and material response codes to infer the surface temperature from the in-depth measurements. The paper describes the image processing challenges associated with a manually-tracked, high-angular-rate air-to-air observation. Issues included management of significant frame-to-frame motions due to both tracking jerk and jitter as well as distortions due to atmospheric effects. Corrections for changing sky backgrounds (including some cirrus clouds), atmospheric attenuation, and target orientations and ranges also had to be made. The image processing goal is to reduce the detrimental effects of motion (both sensor and capsule), vibration (jitter), and atmospherics to improve image quality, without compromising the quantitative integrity of the data, especially local intensity (temperature) variations. The paper details the approach of selecting and utilizing only the highest quality images, registering several co-temporal image frames to a single image frame to the extent frame-to-frame distortions allow, and then co-adding the registered frames to improve image quality and reduce noise.
Using preflight calibration data, the registered and averaged infrared intensity images were converted to surface temperatures on the Orion capsule's heatshield. Temperature uncertainties will be discussed relative to uncertainties of surface emissivity and atmospheric transmission loss. Comparison of limited onboard surface thermocouple data to the image derived surface temperature will be presented.
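One standard way to register co-temporal frames before co-adding them is phase correlation, which recovers an integer translation from the normalized cross-power spectrum. The paper does not specify its registration algorithm, so this is purely an illustrative sketch with a synthetic frame and a known shift:

```python
import numpy as np

# Phase correlation: for a pure circular shift, the normalized cross-power
# spectrum is a pure phase ramp whose inverse FFT is a delta at the shift.
# Negative shifts wrap around, so indices past n//2 are mapped back.

def phase_correlate(a, b):
    """Return the (row, col) shift that maps frame b onto frame a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                     # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = a.shape
    dy = dy if dy <= n // 2 else dy - n
    dx = dx if dx <= m // 2 else dx - m
    return int(dy), int(dx)

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
shifted = np.roll(frame, (5, -3), axis=(0, 1))  # simulated frame-to-frame motion

shift = phase_correlate(shifted, frame)         # recovers (5, -3)
```

Once the shifts are known, the frames can be de-shifted and co-added, which is the averaging step the abstract describes for noise reduction.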
Novel ultrasonic real-time scanner featuring servo controlled transducers displaying a sector image.
Matzuk, T; Skolnick, M L
1978-07-01
This paper describes a new real-time servo controlled sector scanner that produces high resolution images and has functionally programmable features similar to phased array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. The unique feature is the transducer head, which contains a single moving part, the transducer, enclosed within a light-weight, hand-held, vibration-free case. The frame rate, sector width, and stop-action angle are all operator programmable. The frame rate can be varied from 12 to 30 frames s-1 and the sector width from 0 degrees to 60 degrees. Conversion from sector to time-motion (T/M) mode is instant, and two options are available: a freeze-position high-density T/M and a low-density T/M obtainable simultaneously during sector visualization. Unusual electronic features are: automatic gain control, electronic recording of images on video tape in rf format, and the ability to post-process images during video playback to extract the T/M display and to change time gain control (tgc) and image size.
Spread-Spectrum Beamforming and Clutter Filtering for Plane-Wave Color Doppler Imaging.
Mansour, Omar; Poepping, Tamie L; Lacefield, James C
2016-07-21
Plane-wave imaging is desirable for its ability to achieve high frame rates, allowing the capture of fast dynamic events and continuous Doppler data. In most implementations of plane-wave imaging, multiple low-resolution images from different plane wave tilt angles are compounded to form a single high-resolution image, thereby reducing the frame rate. Compounding improves the lateral beam profile in the high-resolution image, but it also acts as a low-pass filter in slow time that causes attenuation and aliasing of signals with high Doppler shifts. This paper introduces a spread-spectrum color Doppler imaging method that produces high-resolution images without the use of compounding, thereby eliminating the tradeoff between beam quality, maximum unaliased Doppler frequency, and frame rate. The method uses a long, random sequence of transmit angles rather than a linear sweep of plane wave directions. The random angle sequence randomizes the phase of off-focus (clutter) signals, thereby spreading the clutter power in the Doppler spectrum, while keeping the spectrum of the in-focus signal intact. The ensemble of randomly tilted low-resolution frames also acts as the Doppler ensemble, so it can be much longer than a conventional linear sweep, thereby improving beam formation while also making the slow-time Doppler sampling frequency equal to the pulse repetition frequency. Experiments performed using a carotid artery phantom with constant flow demonstrate that the spread-spectrum method more accurately measures the parabolic flow profile of the vessel and outperforms conventional plane-wave Doppler in both contrast resolution and estimation of high flow velocities. The spread-spectrum method is expected to be valuable for Doppler applications that require measurement of high velocities at high frame rates.
Multiplane wave imaging increases signal-to-noise ratio in ultrafast ultrasound imaging.
Tiran, Elodie; Deffieux, Thomas; Correia, Mafalda; Maresca, David; Osmanski, Bruno-Felix; Sieu, Lim-Anna; Bergel, Antoine; Cohen, Ivan; Pernot, Mathieu; Tanter, Mickael
2015-11-07
Ultrafast imaging using plane or diverging waves has recently enabled new ultrasound imaging modes with improved sensitivity and very high frame rates. Some of these new imaging modalities include shear wave elastography, ultrafast Doppler, ultrafast contrast-enhanced imaging and functional ultrasound imaging. Even though ultrafast imaging is already encountering clinical success, further increasing its penetration depth and signal-to-noise ratio for dedicated applications would be valuable. Ultrafast imaging relies on the coherent compounding of backscattered echoes resulting from successive tilted plane wave emissions; this produces high-resolution ultrasound images with a trade-off between final frame rate, contrast and resolution. In this work, we introduce multiplane wave imaging, a new method that strongly improves the signal-to-noise ratio of ultrafast images by virtually increasing the emission signal amplitude without compromising the frame rate. This method relies on the successive transmission of multiple plane waves with differently coded amplitudes and emission angles in a single transmit event. Data corresponding to each single plane wave of increased amplitude can then be obtained by recombining the received data of successive events with the proper coefficients. The benefits of multiplane wave imaging for B-mode, shear wave elastography and ultrafast Doppler imaging are experimentally demonstrated. Multiplane wave imaging with 4 plane wave emissions yields a 5.8 ± 0.5 dB increase in signal-to-noise ratio and approximately 10 mm of additional penetration in a calibrated ultrasound phantom (0.7 dB MHz(-1) cm(-1)). In shear wave elastography, the same multiplane wave configuration yields a 2.07 ± 0.05 fold reduction of the particle velocity standard deviation and a two-fold reduction of the standard deviation of the shear wave velocity maps.
In functional ultrasound imaging, the mapping of cerebral blood volume results in a 3 to 6 dB increase of the contrast-to-noise ratio in deep structures of the rodent brain.
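The recombination step can be sketched for the simplest two-wave Hadamard code: each transmit event fires both plane waves, with sign patterns (+,+) and (+,-), and summing or subtracting the two received events isolates each wave's data. The channel data below are synthetic, and noise, motion, and the amplitude scaling that produces the actual SNR gain are ignored:

```python
import numpy as np

# Two-wave multiplane decoding. event_a receives echoes from (+w1 +w2),
# event_b from (+w1 -w2); half-sum and half-difference recover w1 and w2.
# In practice each coded transmit carries full amplitude, so the decoded
# data gain SNR relative to firing each wave alone.

rng = np.random.default_rng(2)
w1 = rng.random(128)                 # echoes from plane wave 1 alone (synthetic)
w2 = rng.random(128)                 # echoes from plane wave 2 alone (synthetic)

event_a = w1 + w2                    # transmit event 1: +w1 +w2
event_b = w1 - w2                    # transmit event 2: +w1 -w2

rec1 = (event_a + event_b) / 2.0     # decoded plane wave 1
rec2 = (event_a - event_b) / 2.0     # decoded plane wave 2
```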
Iodine filter imaging system for subtraction angiography using synchrotron radiation
NASA Astrophysics Data System (ADS)
Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.
1993-11-01
A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single-energy, broad-bandwidth X-ray beam produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. Synchronized with the movement of the iodine filter into and out of the X-ray beam, the two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube by an oscillating mirror. Both images are read out by electron beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety-two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.
Data management and digital delivery of analog data
Miller, W.A.; Longhenry, Ryan; Smith, T.
2008-01-01
The U.S. Geological Survey's (USGS) data archive at the Earth Resources Observation and Science (EROS) Center is a comprehensive and impartial record of the Earth's changing land surface. USGS/EROS has been archiving and preserving land remote sensing data for over 35 years. This remote sensing archive continues to grow as aircraft and satellites acquire more imagery. As a world leader in preserving data, USGS/EROS has a reputation as a technological innovator in solving challenges and ensuring that access to these collections is available. Other agencies also call on the USGS to consider their collections for long-term archive support. To improve access to the USGS film archive, each frame on every roll of film is being digitized by automated high performance digital camera systems. The system robotically captures a digital image from each film frame for the creation of browse and medium resolution image files. Single frame metadata records are also created to improve access that otherwise involves interpreting flight indexes. USGS/EROS is responsible for over 8.6 million frames of aerial photographs and 27.7 million satellite images.
Dim target trajectory-associated detection in bright earth limb background
NASA Astrophysics Data System (ADS)
Chen, Penghui; Xu, Xiaojian; He, Xiaoyu; Jiang, Yuesong
2015-09-01
The intense emission of the earth limb in a sensor's field of view contributes strongly to the observation images. Due to the low signal-to-noise ratio (SNR), detecting small targets against the earth limb background is a challenge, especially for detecting point-like targets from a single frame. To improve target detection, track-before-detect (TBD) based on the frame sequence is performed. In this paper, a new technique is proposed to determine target-associated trajectories, which jointly carries out background removal, maximum value projection (MVP), and the Hough transform. The background of the bright earth limb in the observation images is removed according to its profile characteristics. For a moving target, the corresponding pixels in the MVP image shift approximately regularly in time sequence, and the target trajectory is determined by the Hough transform according to the pixel characteristics of target, clutter, and noise. Compared with traditional frame-by-frame methods, determining associated trajectories from the MVP reduces the computational load. Numerical simulations demonstrate the effectiveness of the proposed approach.
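The maximum value projection at the heart of this pipeline is a per-pixel maximum over the frame sequence: a moving point target leaves a streak of bright pixels along its trajectory in the single projected image, which a Hough transform can then fit as a line. A minimal sketch with a synthetic 5-frame sequence and a hypothetical target moving one pixel per frame (the Hough step itself is omitted):

```python
import numpy as np

# Build 5 frames of a 16x16 scene containing one moving point target,
# then collapse the sequence into a single MVP image. The target's path
# survives as a diagonal streak of bright pixels.

frames = np.zeros((5, 16, 16))
for t in range(5):
    frames[t, 3 + t, 4 + t] = 1.0    # target at (row 3+t, col 4+t) in frame t

mvp = frames.max(axis=0)             # maximum value projection over time
trajectory = np.argwhere(mvp > 0)    # streak pixels, ready for a Hough fit
```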
A study of video frame rate on the perception of moving imagery detail
NASA Technical Reports Server (NTRS)
Haines, Richard F.; Chuang, Sherry L.
1993-01-01
The rate at which each frame of color moving video imagery is displayed was varied in small steps to determine the minimal acceptable frame rate for life scientists viewing white rats within a small enclosure. Two 25-second-long scenes (slow and fast animal motions) were evaluated by nine NASA principal investigators and animal care technicians. The mean minimum acceptable frame rate across these subjects was 3.9 fps for both the slow and fast moving animal scenes. The highest single-trial frame rate averaged across all subjects was 6.2 fps for the slow scene and 4.8 fps for the fast scene. Further research is called for in which frame rate, image size, and color/gray-scale depth are covaried during the same observation period.
A multiresolution halftoning algorithm for progressive display
NASA Astrophysics Data System (ADS)
Mukherjee, Mithun; Sharma, Gaurav
2005-01-01
We describe and implement an algorithmic framework for memory efficient, 'on-the-fly' halftoning in a progressive transmission environment. Instead of a conventional approach which repeatedly recalls the continuous tone image from memory and subsequently halftones it for display, the proposed method achieves significant memory efficiency by storing only the halftoned image and updating it in response to additional information received through progressive transmission. Thus the method requires only a single frame-buffer of bits for storage of the displayed binary image and no additional storage is required for the contone data. The additional image data received through progressive transmission is accommodated through in-place updates of the buffer. The method is thus particularly advantageous for high resolution bi-level displays where it can result in significant savings in memory. The proposed framework is implemented using a suitable multi-resolution, multi-level modification of error diffusion that is motivated by the presence of a single binary frame-buffer. Aggregates of individual display bits constitute the multiple output levels at a given resolution. This creates a natural progression of increasing resolution with decreasing bit-depth.
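The building block the paper modifies is classic error diffusion. A minimal single-level Floyd-Steinberg sketch is shown below for orientation; the paper's actual algorithm is a multiresolution, multi-level modification driven by progressive updates to a single binary frame buffer, which this sketch does not reproduce:

```python
import numpy as np

# Floyd-Steinberg error diffusion: threshold each pixel to 0/1 and push the
# quantization error onto unprocessed neighbors (7/16 right, 3/16 down-left,
# 5/16 down, 1/16 down-right), so the local mean tone is preserved.

def error_diffuse(img):
    f = img.astype(float).copy()
    h, w = f.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            old = f[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                f[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    f[y + 1, x - 1] += err * 3 / 16
                f[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    f[y + 1, x + 1] += err * 1 / 16
    return out

gray = np.full((32, 32), 0.25)        # flat 25% gray patch
bits = error_diffuse(gray)
density = bits.mean()                 # fraction of 'on' pixels, ~0.25
```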
NASA Astrophysics Data System (ADS)
Lien, Chi-Hsiang; Lin, Chun-Yu; Chen, Shean-Jen; Chien, Fan-Ching
2017-02-01
A three-dimensional (3D) single fluorescent particle tracking strategy based on temporal focusing multiphoton excitation microscopy (TFMPEM) combined with astigmatism imaging is proposed for delivering nanoscale axial information that reveals 3D trajectories of single fluorospheres in the axially resolved multiphoton excitation volume without z-axis scanning. Its dynamic capability is demonstrated by measuring the diffusion coefficient of fluorospheres in glycerol solutions, with position standard deviations of 14 nm and 21 nm in the lateral and axial directions, respectively, at a frame rate of 100 Hz. Moreover, the optical trapping force in TFMPEM is minimized relative to spatial-focusing MPE approaches, avoiding interference with the tracking measurements. The strategy thus overcomes the time-resolution limitation of multiphoton imaging by exploiting the fast frame rate of TFMPEM and provides 3D locations of multiple particles via the astigmatism method.
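The astigmatism principle can be sketched with a toy calibration: a cylindrical lens makes the PSF widths in x and y depend oppositely on defocus, so the width difference (or ratio) encodes z. The linear calibration curves below are hypothetical; real systems measure the curves experimentally and they are nonlinear:

```python
# Toy astigmatism-based z localization. widths() plays the role of a measured
# calibration (PSF widths in um versus defocus z in um); z_from_widths()
# inverts it. All coefficients are illustrative, not from the paper.

def widths(z):
    """PSF widths (w_x, w_y) at defocus z under a toy linear calibration."""
    return 0.3 + 0.1 * z, 0.3 - 0.1 * z

def z_from_widths(wx, wy):
    """Invert the toy calibration: wx - wy = 0.2 * z."""
    return (wx - wy) / 0.2

z_true = 0.7
wx, wy = widths(z_true)
z_est = z_from_widths(wx, wy)        # recovers the defocus from one frame
```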
Sun, Mingzhai; Huang, Jiaqing; Bunyak, Filiz; Gumpper, Kristyn; De, Gejing; Sermersheim, Matthew; Liu, George; Lin, Pei-Hui; Palaniappan, Kannappan; Ma, Jianjie
2014-01-01
One key factor that limits the resolution of single-molecule superresolution microscopy is the localization accuracy of the activated emitters, which is usually deteriorated by two factors. One originates from background noise due to out-of-focus signals, sample auto-fluorescence, and camera acquisition noise; the other is the low photon count of emitters in a single frame. With a fast acquisition rate, the activated emitters can last multiple frames before they transiently switch off or permanently bleach. Effectively incorporating the temporal information of these emitters is critical to improving the spatial resolution. However, the majority of existing reconstruction algorithms locate the emitters frame by frame, discarding or underusing the temporal information. Here we present a new image reconstruction algorithm based on tracklets, short trajectories of the same objects. We improve the localization accuracy by associating the same emitters from multiple frames to form tracklets and by aggregating signals to enhance the signal-to-noise ratio. We also introduce a weighted mean-shift algorithm (WMS) to automatically detect the number of modes (emitters) in overlapping regions of tracklets, so that not only well-separated single emitters but also individual emitters within multi-emitter groups can be identified and tracked. In combination with a maximum likelihood estimator (MLE) method, we are able to resolve low to medium densities of overlapping emitters with improved localization accuracy. We evaluate the performance of our method with both synthetic and experimental data, and show that the tracklet-based reconstruction is superior in localization accuracy, particularly for weak signals embedded in a strong background. Using this method, for the first time, we resolve the transverse tubule structure of mammalian skeletal muscle. PMID:24921337
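The statistical payoff of tracklet aggregation can be illustrated with a Monte Carlo sketch: averaging the per-frame position estimates of the same emitter over N frames shrinks the localization error roughly as 1/sqrt(N). The per-frame noise model and all numbers here are hypothetical, and the sketch skips the association, WMS, and MLE machinery of the actual algorithm:

```python
import numpy as np

# Each emitter is localized in 9 consecutive frames with isotropic Gaussian
# error (sigma = 0.5 px). Averaging the 9 estimates per tracklet should cut
# the mean radial error by about sqrt(9) = 3 versus a single-frame estimate.

rng = np.random.default_rng(3)
true_xy = np.array([5.0, 7.0])
sigma, n_frames, n_emitters = 0.5, 9, 5000

per_frame = true_xy + sigma * rng.standard_normal((n_emitters, n_frames, 2))
tracklet = per_frame.mean(axis=1)            # one aggregated estimate each

err_single = np.linalg.norm(per_frame[:, 0] - true_xy, axis=1).mean()
err_tracklet = np.linalg.norm(tracklet - true_xy, axis=1).mean()
ratio = err_single / err_tracklet            # ~ sqrt(n_frames) = 3
```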
Lee, Jinwoo; Miyanaga, Yukihiro; Ueda, Masahiro; Hohng, Sungchul
2012-01-01
There is no confocal microscope optimized for single-molecule imaging in live cells and superresolution fluorescence imaging. By combining the swiftness of the line-scanning method with the high sensitivity of wide-field detection, we have developed a (to our knowledge) novel confocal fluorescence microscope with good optical-sectioning capability (1.0 μm), fast frame rates (<33 fps), and superior fluorescence detection efficiency. Full compatibility of the microscope with conventional cell-imaging techniques allowed us to perform single-molecule imaging with great ease at arbitrary depths in live cells. With the new microscope, we monitored the diffusion of fluorescently labeled cAMP receptors of Dictyostelium discoideum at both the basal and apical surfaces and obtained superresolution fluorescence images of microtubules of COS-7 cells at depths of 0–85 μm from the surface of a coverglass. PMID:23083712
The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.
Pooley, R A; McKinney, J M; Miller, D A
2001-01-01
A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
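Two of the processing steps named above, temporal frame averaging and DSA subtraction, can be sketched in a few lines. This is a simplified illustration: the recursive weight `alpha` and the logarithmic subtraction form are generic textbook choices, not taken from this article.

```python
import numpy as np

def temporal_average(frames, alpha=0.25):
    """Recursive (exponentially weighted) frame averaging, as used to
    reduce quantum noise in live fluoroscopy."""
    avg = frames[0].astype(float)
    for f in frames[1:]:
        avg = (1 - alpha) * avg + alpha * f
    return avg

def dsa_subtract(mask, contrast_frame, eps=1e-6):
    """Digital subtraction angiography: log-subtracting a pre-contrast mask
    cancels overlying anatomy, leaving only contrast-filled vessels."""
    return np.log(contrast_frame + eps) - np.log(mask + eps)
```

In the DSA output, unchanged background pixels go to zero while attenuating (contrast-filled) pixels become negative, independent of the local anatomy's baseline brightness.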
Single-cell photoacoustic thermometry
Gao, Liang; Wang, Lidai; Li, Chiye; Liu, Yan; Ke, Haixin; Zhang, Chi
2013-01-01
A novel photoacoustic thermometric method is presented for simultaneously imaging cells and sensing their temperature. With a three-seconds-per-frame imaging speed, a temperature resolution of 0.2°C was achieved in a photo-thermal cell heating experiment. Compared to other approaches, the photoacoustic thermometric method has the advantage of not requiring custom-developed temperature-sensitive biosensors. This feature should facilitate the conversion of single-cell thermometry into a routine lab tool and make it accessible to a much broader biological research community. PMID:23377004
PET and Single-Photon Emission Computed Tomography in Brain Concussion.
Raji, Cyrus A; Henderson, Theodore A
2018-02-01
This article offers an overview of the application of PET and single-photon emission computed tomography brain imaging to concussion, a type of mild traumatic brain injury, and to traumatic brain injury in general. The article reviews the application of these neuronuclear imaging modalities in cross-sectional and longitudinal studies. Additionally, this article frames the current literature with an overview of the basic physics and radiation exposure risks of each modality. Copyright © 2017 Elsevier Inc. All rights reserved.
A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.
Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H
2017-01-01
Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their practical adoption in clinical use is not straightforward, owing to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple-data (SIMD) computing approach that can be implemented on a graphics processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. The real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in the azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for a slow-time ensemble length of 16 samples. Relative to frames derived from non-adaptive polynomial regression clutter filtering, the corresponding CFI image frames yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
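As a much-simplified, CPU-only sketch of the single-ensemble Hankel-SVD filtering idea (not the authors' GPU implementation; the clutter rank and window length are assumed parameters), one can embed the slow-time ensemble of a pixel in a Hankel matrix, zero the dominant singular components attributed to clutter, and reconstruct by anti-diagonal averaging:

```python
import numpy as np

def hankel_svd_filter(x, rank_clutter=1, L=None):
    """Single-ensemble eigen-based clutter filter (sketch): embed the slow-time
    ensemble x in an L x K Hankel matrix, zero its largest singular values
    (clutter subspace), and average anti-diagonals back to a sequence."""
    N = len(x)
    if L is None:
        L = N // 2
    K = N - L + 1
    H = np.array([x[i:i + K] for i in range(L)])  # H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s[:rank_clutter] = 0.0                        # suppress clutter components
    Hf = (U * s) @ Vt
    # average each anti-diagonal back into one sample of the filtered ensemble
    y = np.zeros(N, dtype=complex)
    counts = np.zeros(N)
    for i in range(L):
        for j in range(K):
            y[i + j] += Hf[i, j]
            counts[i + j] += 1
    return y / counts
```

On a synthetic ensemble of strong stationary clutter plus a weak Doppler-shifted flow signal, removing the top singular component leaves the flow signal.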
Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect
NASA Astrophysics Data System (ADS)
Artyukhin, S. G.; Mestetskiy, L. M.
2015-05-01
This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures by their feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes as input a video sequence (a sequence of frames) to be labeled and either matches each frame to a gesture from the database or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of consecutive frames labeled with the same gesture is grouped into a single static gesture. We propose a combined segmentation of each frame using both the depth map and the RGB image. The primary segmentation is based on the depth map: it gives positional information and a rough border of the hand. The border is then refined using the color image, and the shape of the hand is analyzed. A continuous-skeleton method is used to generate features, and we propose a method based on the skeleton's terminal branches that makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the American Sign Language alphabet. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture database consisting of 2700 frames.
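The grouping step, collapsing runs of identically classified frames into one static gesture, can be sketched as follows. The minimum run length used to reject unreliable short runs is a hypothetical threshold, not a value from the paper:

```python
from itertools import groupby

def group_frame_labels(labels, min_run=3, none_label=None):
    """Collapse per-frame classifications into static gestures: consecutive
    frames with the same label become one gesture; short runs and frames
    with no suitable database match (none_label) are discarded."""
    gestures = []
    for label, run in groupby(labels):
        n = len(list(run))
        if label != none_label and n >= min_run:
            gestures.append((label, n))
    return gestures
```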
Solid state replacement of rotating mirror cameras
NASA Astrophysics Data System (ADS)
Frank, Alan M.; Bartolick, Joseph M.
2007-01-01
Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution, and dynamic range. Rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' or 'ISIS' by Prof. Goji Etoh, has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. Although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, it is clear that the ISIS architecture has the potential to approach their performance.
NASA Technical Reports Server (NTRS)
Chamberlain, F. R. (Inventor)
1980-01-01
A system for generating, within a single frame of photographic film, a quadrified image including images of angularly (including orthogonally) related fields of view of a near field three dimensional object is described. It is characterized by three subsystems each of which includes a plurality of reflective surfaces for imaging a different field of view of the object at a different quadrant of the quadrified image. All of the subsystems have identical path lengths to the object photographed.
Real-Space x-ray tomographic reconstruction of randomly oriented objects with sparse data frames.
Ayyer, Kartik; Philipp, Hugh T; Tate, Mark W; Elser, Veit; Gruner, Sol M
2014-02-10
Schemes for x-ray imaging of single protein molecules using new x-ray sources, such as x-ray free-electron lasers (XFELs), require processing many frames of data obtained by taking temporally short snapshots of identical molecules, each with a random and unknown orientation. Because of the small size of the molecules and the short exposure times, average signal levels of much less than 1 photon/pixel/frame are expected, far too low to be processed using standard methods. One approach is to use the statistical methods developed in the EMC algorithm (Loh & Elser, Phys. Rev. E, 2009), which processes the data set as a whole. In this paper we apply this method to a real-space tomographic reconstruction using sparse data frames (below 10^-2 photons/pixel/frame) obtained by performing x-ray transmission measurements of a low-contrast, randomly oriented object. This extends the work of Philipp et al. (Optics Express, 2012) to three dimensions and is one step closer to the single-molecule reconstruction problem.
Wei, Chen-Wei; Nguyen, Thu-Mai; Xia, Jinjun; Arnal, Bastien; Wong, Emily Y; Pelivanov, Ivan M; O'Donnell, Matthew
2015-02-01
Because of depth-dependent light attenuation, bulky, low-repetition-rate lasers are usually used in most photoacoustic (PA) systems to provide sufficient pulse energies to image at depth within the body. However, integrating these lasers with real-time clinical ultrasound (US) scanners has been problematic because of their size and cost. In this paper, an integrated PA/US (PAUS) imaging system is presented operating at frame rates >30 Hz. By employing a portable, low-cost, low-pulse-energy (~2 mJ/pulse), high-repetition-rate (~1 kHz), 1053-nm laser, and a rotating galvo-mirror system enabling rapid laser beam scanning over the imaging area, the approach is demonstrated for potential applications requiring a few centimeters of penetration. In particular, we demonstrate here real-time (30 Hz frame rate) imaging (by combining multiple single-shot sub-images covering the scan region) of an 18-gauge needle inserted into a piece of chicken breast with subsequent delivery of an absorptive agent at more than 1-cm depth to mimic PAUS guidance of an interventional procedure. A signal-to-noise ratio of more than 35 dB is obtained for the needle in an imaging area 2.8 × 2.8 cm (depth × lateral). Higher frame rate operation is envisioned with an optimized scanning scheme.
Ultra-fast framing camera tube
Kalibjian, Ralph
1981-01-01
An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.
High-speed particle tracking in microscopy using SPAD image sensors
NASA Astrophysics Data System (ADS)
Gyongy, Istvan; Davies, Amy; Miguelez Crespo, Allende; Green, Andrew; Dutton, Neale A. W.; Duncan, Rory R.; Rickman, Colin; Henderson, Robert K.; Dalgarno, Paul A.
2018-02-01
Single photon avalanche diodes (SPADs) are used in a wide range of applications, from fluorescence lifetime imaging microscopy (FLIM) to time-of-flight (ToF) 3D imaging. SPAD arrays are becoming increasingly established, combining the unique properties of SPADs with widefield camera configurations. Traditionally, the photosensitive area (fill factor) of SPAD arrays has been limited by the in-pixel digital electronics. However, recent designs have demonstrated that by replacing the complex digital pixel logic with simple binary pixels and external frame summation, the fill factor can be increased considerably. A significant advantage of such binary SPAD arrays is the high frame rates offered by the sensors (>100kFPS), which opens up new possibilities for capturing ultra-fast temporal dynamics in, for example, life science cellular imaging. In this work we consider the use of novel binary SPAD arrays in high-speed particle tracking in microscopy. We demonstrate the tracking of fluorescent microspheres undergoing Brownian motion, and in intra-cellular vesicle dynamics, at high frame rates. We thereby show how binary SPAD arrays can offer an important advance in live cell imaging in such fields as intercellular communication, cell trafficking and cell signaling.
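The external frame-summation idea, building multi-bit intensity frames from groups of 1-bit SPAD frames and then localizing a particle in each summed frame, might be sketched like this. The group size and the centroid localizer are illustrative choices, not details from the paper:

```python
import numpy as np

def sum_binary_frames(binary_frames, group_size):
    """External frame summation for a binary SPAD array: groups of 1-bit
    frames are summed into multi-bit intensity frames."""
    frames = np.asarray(binary_frames)
    n_groups = frames.shape[0] // group_size
    kept = frames[:n_groups * group_size]
    return kept.reshape(n_groups, group_size, *frames.shape[1:]).sum(axis=1)

def centroid(frame):
    """Intensity-weighted centroid of one summed frame: a simple particle
    localizer for frame-to-frame tracking."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (ys * frame).sum() / total, (xs * frame).sum() / total
```

Tracking then amounts to linking the centroids of successive summed frames; the trade-off is between photon count per summed frame and effective frame rate.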
HIGH SPEED KERR CELL FRAMING CAMERA
Goss, W.C.; Gilley, L.F.
1964-01-01
The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 x 10/sup -8/ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)
Differential Multiphoton Laser Scanning Microscopy
Field, Jeffrey J.; Sheetz, Kraig E.; Chandler, Eric V.; Hoover, Erich E.; Young, Michael D.; Ding, Shi-you; Sylvester, Anne W.; Kleinfeld, David; Squier, Jeff A.
2016-01-01
Multifocal multiphoton microscopy (MMM) in the biological and medical sciences has become an important tool for obtaining high resolution images at video rates. While current implementations of MMM achieve very high frame rates, they are limited in their applicability to essentially those biological samples that exhibit little or no scattering. In this paper, we report on a method for MMM in which imaging detection is not necessary (single element point detection is implemented), and is therefore fully compatible for use in imaging through scattering media. Further, we demonstrate that this method leads to a new type of MMM wherein it is possible to simultaneously obtain multiple images and view differences in excitation parameters in a single shot. PMID:27390511
Graphene metamaterial spatial light modulator for infrared single pixel imaging.
Fan, Kebin; Suen, Jonathan Y; Padilla, Willie J
2017-10-16
High-resolution and hyperspectral imaging has long been a goal for multi-dimensional data-fusion sensing applications, of interest for autonomous vehicles and environmental monitoring. In the long-wave infrared regime this quest has been impeded by size, weight, power, and cost issues, especially as focal-plane-array detector sizes increase. Here we propose and experimentally demonstrate a new approach based on a metamaterial graphene spatial light modulator (GSLM) for infrared single-pixel imaging. A frequency-division multiplexing (FDM) imaging technique is designed and implemented, relying entirely on the electronic reconfigurability of the GSLM. We compare our approach to the more common raster-scan method and show directly that FDM image frame rates can be 64 times faster with no degradation of image quality. Our device and the related imaging architecture are not restricted to the infrared regime and may be scaled to other bands of the electromagnetic spectrum. The study presented here opens a new approach to fast and efficient single-pixel imaging utilizing graphene metamaterials with novel acquisition strategies.
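A toy numerical illustration of the FDM principle: each modulator pixel is assigned its own modulation frequency, the single-pixel detector records the summed signal, and all pixel values are recovered at once from one FFT. The frequencies and sample count here are arbitrary, and the sinusoidal modulation is a simplification of the device's actual electronic modulation:

```python
import numpy as np

def fdm_single_pixel(image, freqs, n_samples=256):
    """Frequency-division multiplexed single-pixel imaging sketch: modulate
    each pixel at its own integer-bin frequency, then demodulate every pixel
    simultaneously from the spectrum of the summed detector signal."""
    flat = image.ravel().astype(float)
    t = np.arange(n_samples)
    # detector signal: sum over pixels of pixel_value * its modulation tone
    signal = sum(v * np.cos(2 * np.pi * f * t / n_samples)
                 for v, f in zip(flat, freqs))
    spectrum = np.fft.rfft(signal) / (n_samples / 2)
    recovered = np.array([np.abs(spectrum[f]) for f in freqs])
    return recovered.reshape(image.shape)
```

Because all pixels are measured in parallel rather than one at a time, the acquisition time no longer scales with pixel count, which is the source of the speedup over raster scanning.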
PCIE interface design for high-speed image storage system based on SSD
NASA Astrophysics Data System (ADS)
Wang, Shiming
2015-02-01
This paper proposes and implements a standard interface for a miniaturized high-speed image storage system that combines a PowerPC with an FPGA and uses the PCIE bus as the high-speed switching channel. Attached to the PowerPC, mSATA-interface SSDs (solid-state drives) realize RAID3 array storage. A high-speed real-time image-compression IP core, whose compression rate and image quality are at a leading domestic level, can also be embedded in the FPGA, allowing the system to record higher image data rates or achieve longer recording times. The mSATA SSDs use a notebook-memory-card buckle-type design, so a drive can be replaced in 5 seconds with a single hand, increasing the total length of repeated recordings. MSI (Message Signaled Interrupts) guarantees the stability and reliability of continuous DMA transfers. Furthermore, remote display, control, and upload-to-backup functions are realized over a single gigabit network; at a selectable 25 or 30 frames/s, upload speeds can exceed 84 MB/s. Compared with existing FLASH-array high-speed memory systems, this design has a higher degree of modularity, better stability, and higher efficiency in development, maintenance, and upgrading. Its data access rate is up to 300 MB/s, realizing a miniaturized, standardized, and modularized high-speed image storage system fit for image acquisition, storage, and real-time transmission to a server on mobile equipment.
Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms
Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg
2013-01-01
Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387
Cardiac phase detection in intravascular ultrasound images
NASA Astrophysics Data System (ADS)
Matsumoto, Monica M. S.; Lemos, Pedro Alves; Yoneyama, Takashi; Furuie, Sergio Shiguemi
2008-03-01
Image gating applies to imaging modalities that involve quasi-periodically moving organs, and during intravascular ultrasound (IVUS) examination there is cardiac movement interference. In this paper, we aim to obtain gated IVUS images based on the images themselves. This would allow the reconstruction of 3D coronary arteries with temporal accuracy for any cardiac phase, an advantage over ECG-gated acquisition, which captures only a single phase. It is also important for retrospective studies, as existing IVUS databases contain no additional reference signals (ECG). From the images, we calculated signals based on the average intensity (AI) and, from consecutive frames, the average intensity difference (AID), cross-correlation coefficient (CC), and mutual information (MI). The process includes a wavelet-based filtering step and ascending zero-crossing detection to obtain the phase information. First, we tested 90 simulated sequences with 1025 frames each. Our method achieved more than 95.0% true positives and less than 2.3% false positives for all signals. We then tested a real examination with 897 frames, using the ECG as the gold standard, and achieved 97.4% true positives (CC and MI) and 2.5% false positives. In future work, the methodology should be tested on a wider range of IVUS examinations.
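The image-derived gating signals can be sketched as follows: AI per frame, CC between consecutive frame pairs, and ascending zero-crossing detection on a mean-removed signal. This is a bare-bones stand-in for the paper's pipeline, omitting the AID and MI signals and the wavelet-based filter:

```python
import numpy as np

def frame_signals(frames):
    """Per-frame gating signals from an image sequence: average intensity (AI)
    and the correlation coefficient (CC) between consecutive frames."""
    frames = np.asarray(frames, dtype=float)
    ai = frames.mean(axis=(1, 2))
    cc = np.array([np.corrcoef(a.ravel(), b.ravel())[0, 1]
                   for a, b in zip(frames[:-1], frames[1:])])
    return ai, cc

def ascending_zero_crossings(signal):
    """Indices where the mean-removed signal crosses zero going upward:
    one crossing per cardiac cycle once the signal is band-pass filtered."""
    s = signal - signal.mean()
    return np.nonzero((s[:-1] < 0) & (s[1:] >= 0))[0] + 1
```

The detected crossing indices mark the frames assigned to the same cardiac phase for gating.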
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
Noise reduction in single time frame optical DNA maps
Müller, Vilhelm; Westerlund, Fredrik
2017-01-01
Optical DNA mapping technologies produce sequence-specific intensity variations (DNA barcodes) along stretched and stained DNA molecules. These "fingerprints" of the underlying DNA sequence have a resolution on the order of one kilobasepair, and the stretching of the DNA molecules is performed by surface adsorption or nano-channel setups. A post-processing challenge for nano-channel-based methods, due to local and global random movement of the DNA molecule during imaging, is how to align different time frames in order to produce reproducible time-averaged DNA barcodes. The current solutions to this challenge are computationally rather slow. With high-throughput applications in mind, we here introduce a parameter-free method for filtering a single-time-frame noisy barcode (snap-shot optical map), measured in a fraction of a second. By using only a single time-frame barcode we circumvent the need for post-processing alignment. We demonstrate that our method succeeds in providing filtered barcodes that are less noisy and more similar to time-averaged barcodes. The method applies a low-pass filter to a single noisy barcode, using the width of the point spread function of the system as a unique, and known, filtering parameter. We find that after applying our method, the Pearson correlation coefficient (a real number in the range -1 to 1) between the single time-frame barcode and the time average of the aligned kymograph increases significantly, by roughly 0.2 on average. By comparing against a database of more than 3000 theoretical plasmid barcodes, we show that the capability to identify plasmids is improved by filtering single time-frame barcodes compared to their unfiltered analogues. Since both the snap-shot experiment and the computational time of our method are less than a second, this study opens the door to high-throughput optical DNA mapping with improved reproducibility. PMID:28640821
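A minimal sketch of PSF-width-based low-pass filtering of a 1D barcode follows, assuming a Gaussian PSF whose standard deviation in pixels sets the spectral cutoff; the particular cutoff convention (1/(2πσ) cycles per pixel) is an assumption for illustration, not necessarily the authors' choice:

```python
import numpy as np

def psf_lowpass(barcode, psf_sigma_px):
    """Parameter-free low-pass filter for a single-frame barcode: suppress
    Fourier components beyond the cutoff implied by the point spread
    function, the only (and known) filter parameter."""
    n = len(barcode)
    F = np.fft.rfft(barcode)
    freqs = np.fft.rfftfreq(n)                   # cycles per pixel
    cutoff = 1.0 / (2 * np.pi * psf_sigma_px)    # Gaussian PSF spectral scale
    F[freqs > cutoff] = 0.0
    return np.fft.irfft(F, n)

def pearson(a, b):
    """Pearson correlation coefficient between two barcodes."""
    return np.corrcoef(a, b)[0, 1]
```

Components above the cutoff cannot originate from a signal convolved with the PSF, so discarding them removes noise without removing barcode information.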
Constructing spherical panoramas of a bladder phantom from endoscopic video using bundle adjustment
NASA Astrophysics Data System (ADS)
Soper, Timothy D.; Chandler, John E.; Porter, Michael P.; Seibel, Eric J.
2011-03-01
The high recurrence rate of bladder cancer requires patients to undergo frequent surveillance screenings over their lifetime following initial diagnosis and resection. Our laboratory is developing panoramic stitching software that compiles several minutes of cystoscopic video into a single panoramic image covering the entire bladder, for review by a urologist at a later time or remote location. Global alignment of video frames is achieved using a bundle adjuster that simultaneously recovers both the 3D structure of the bladder and the scope motion, using only the video frames as input. The result of the algorithm is a complete 360° spherical panorama of the outer surface. The details of the software algorithms are presented here, along with results from both a virtual cystoscopy and real endoscopic imaging of a bladder phantom. The software successfully stitched several hundred video frames into a single panorama with subpixel accuracy and with no knowledge of intrinsic camera properties such as focal length and radial distortion. In the discussion, we outline future work on the software and identify factors pertinent to clinical translation of this technology.
Raphael, David T; McIntee, Diane; Tsuruda, Jay S; Colletti, Patrick; Tatevossian, Ray
2005-12-01
Magnetic resonance neurography (MRN) is an imaging method by which nerves can be selectively highlighted. Using commercial software, the authors explored a variety of approaches to develop a three-dimensional volume-rendered MRN image of the entire brachial plexus and used it to evaluate the accuracy of infraclavicular block approaches. With institutional review board approval, MRN of the brachial plexus was performed in 10 volunteer subjects. MRN imaging was performed on a GE 1.5-tesla magnetic resonance scanner (General Electric Healthcare Technologies, Waukesha, WI) using a phased-array torso coil. Coronal STIR and T1 oblique sagittal sequences of the brachial plexus were obtained. Multiple software programs were explored for enhanced display and manipulation of the composite magnetic resonance images. The authors developed a frontal slab composite approach that allows single-frame reconstruction of a three-dimensional volume-rendered image of the entire brachial plexus. Automatic segmentation was supplemented by manual segmentation in nearly all cases. For each of three infraclavicular approaches (posteriorly directed needle below the midclavicle, infracoracoid, or caudomedial to the coracoid), the targeting error was measured as the distance from the MRN plexus midpoint to the approach-targeted site. Composite frontal slabs (coronal views), which are single-frame three-dimensional volume renderings from image-enhanced two-dimensional frontal-view projections of the underlying coronal slices, were created. The targeting errors (mean +/- SD) for the three approaches (midclavicle, infracoracoid, and caudomedial to coracoid) were 0.43 +/- 0.67, 0.99 +/- 1.22, and 0.65 +/- 1.14 cm, respectively. Image-processed three-dimensional volume-rendered MRN scans, which allow visualization of the entire brachial plexus within a single composite image, have educational value in illustrating the complexity and individual variation of the plexus.
Suggestions for improved guidance during infraclavicular block procedures are presented.
Lew, Matthew D; von Diezmann, Alexander R S; Moerner, W E
2013-02-25
Automated processing of double-helix (DH) microscope images of single molecules (SMs) streamlines the protocol required to obtain super-resolved three-dimensional (3D) reconstructions of ultrastructures in biological samples by single-molecule active control microscopy. Here, we present a suite of MATLAB subroutines, bundled with an easy-to-use graphical user interface (GUI), that facilitates 3D localization of single emitters (e.g. SMs, fluorescent beads, or quantum dots) with precisions of tens of nanometers in multi-frame movies acquired using a wide-field DH epifluorescence microscope. The algorithmic approach is based upon template matching for SM recognition and least-squares fitting for 3D position measurement, both of which are computationally expedient and precise. Overlapping images of SMs are ignored, and the precision of least-squares fitting is not as high as maximum likelihood-based methods. However, once calibrated, the algorithm can fit 15-30 molecules per second on a 3 GHz Intel Core 2 Duo workstation, thereby producing a 3D super-resolution reconstruction of 100,000 molecules over a 20×20×2 μm field of view (processing 128×128 pixels × 20000 frames) in 75 min.
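The recognition-then-fitting pipeline can be caricatured in pure NumPy: normalized cross-correlation template matching for emitter recognition, plus a least-squares parabolic refinement of the correlation peak. This is a stand-in for the suite's full 3D double-helix fitting, with a single 2D lobe and hypothetical template parameters:

```python
import numpy as np

def match_template(image, template):
    """Recognize a candidate emitter by normalized cross-correlation with a
    template; returns the best integer-pixel position and its score."""
    th, tw = template.shape
    tz = (template - template.mean()) / template.std()
    best, pos = -np.inf, (0, 0)
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            patch = image[y:y + th, x:x + tw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = (tz * pz).mean()
            if score > best:
                best, pos = score, (y, x)
    return pos, best

def subpixel_peak(three_samples):
    """Least-squares parabola through three samples around a peak gives the
    sub-pixel offset of the maximum relative to the center sample."""
    y0, y1, y2 = three_samples
    return 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
```

The real suite fits both double-helix lobes to recover z from their angle; here only the in-plane localization idea is shown.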
Real-time Avatar Animation from a Single Image.
Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F
2011-01-01
A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.
NASA Astrophysics Data System (ADS)
Hong, Guosong; Zou, Yingping; Antaris, Alexander L.; Diao, Shuo; Wu, Di; Cheng, Kai; Zhang, Xiaodong; Chen, Changxin; Liu, Bo; He, Yuehui; Wu, Justin Z.; Yuan, Jun; Zhang, Bo; Tao, Zhimin; Fukunaga, Chihiro; Dai, Hongjie
2014-06-01
In vivo fluorescence imaging in the second near-infrared window (1.0-1.7 μm) can afford deep tissue penetration and high spatial resolution, owing to the reduced scattering of long-wavelength photons. Here we synthesize a series of low-bandgap donor/acceptor copolymers with tunable emission wavelengths of 1,050-1,350 nm in this window. Non-covalent functionalization with phospholipid-polyethylene glycol results in water-soluble and biocompatible polymeric nanoparticles, allowing live-cell molecular imaging at >1,000 nm with polymer fluorophores for the first time. Importantly, the high quantum yield of the polymer allows in vivo, deep-tissue, and ultrafast imaging of mouse arterial blood flow at an unprecedented frame rate of >25 frames per second. The high time resolution enables spatially and temporally resolved imaging of the blood-flow pattern in cardiogram waveform over a single cardiac cycle (~200 ms) of a mouse, which had not previously been observed with fluorescence imaging in this window.
Deep learning massively accelerates super-resolution localization microscopy.
Ouyang, Wei; Aristov, Andrey; Lelek, Mickaël; Hao, Xian; Zimmer, Christophe
2018-06-01
The speed of super-resolution microscopy methods based on single-molecule localization, for example, PALM and STORM, is limited by the need to record many thousands of frames with a small number of observed molecules in each. Here, we present ANNA-PALM, a computational strategy that uses artificial neural networks to reconstruct super-resolution views from sparse, rapidly acquired localization images and/or widefield images. Simulations and experimental imaging of microtubules, nuclear pores, and mitochondria show that high-quality, super-resolution images can be reconstructed from up to two orders of magnitude fewer frames than usually needed, without compromising spatial resolution. Super-resolution reconstructions are even possible from widefield images alone, though adding localization data improves image quality. We demonstrate super-resolution imaging of >1,000 fields of view containing >1,000 cells in ∼3 h, yielding an image spanning spatial scales from ∼20 nm to ∼2 mm. The drastic reduction in acquisition time and sample irradiation afforded by ANNA-PALM enables faster and gentler high-throughput and live-cell super-resolution imaging.
Single photon detection imaging of Cherenkov light emitted during radiation therapy
NASA Astrophysics Data System (ADS)
Adamson, Philip M.; Andreozzi, Jacqueline M.; LaRochelle, Ethan; Gladstone, David J.; Pogue, Brian W.
2018-03-01
Cherenkov imaging during radiation therapy has been developed as a tool for dosimetry, with potential applications in patient delivery verification and regular quality audit. The cameras used are intensified imaging sensors, either ICCD or ICMOS cameras, which provide (1) nanosecond time gating and (2) amplification by 10^3-10^4; together these enable (1) real-time capture at 10-30 frames per second, (2) sensitivity at the single-photon-event level, and (3) suppression of background light from the ambient room. However, the capability to achieve single photon imaging has not been fully analyzed to date, and as such was the focus of this study. How a single photon event appears in amplified camera images of Cherenkov emission was quantitatively characterized with image processing. At normal gain levels, the signal appears as a blur of about 90 counts in the CCD detector after passing through the detection chain: photocathode detection, amplification through a microchannel plate PMT, excitation of a phosphor screen, and imaging onto the CCD. The analysis of single photon events requires careful interpretation of the fixed pattern noise, statistical quantum noise distributions, and the spatial spread of each pulse through the ICCD.
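The single-photon event analysis described above can be sketched as a simple blob search: in a dark-subtracted frame, find local maxima above a noise threshold and integrate the surrounding counts (the blur of roughly 90 counts spread over neighboring CCD pixels). A minimal illustration on a synthetic frame; the threshold and window size are hypothetical values, not the authors' pipeline:

```python
import numpy as np

def detect_photon_events(frame, threshold=30, win=1):
    """Find isolated single-photon blobs in a dark-subtracted frame.

    A pixel is an event seed if it exceeds `threshold` and is the local
    maximum of its (2*win+1)^2 neighborhood; the event amplitude is the
    integrated count in that window, since the intensifier chain spreads
    one photoelectron over several CCD pixels.
    """
    h, w = frame.shape
    events = []
    for y in range(win, h - win):
        for x in range(win, w - win):
            patch = frame[y - win:y + win + 1, x - win:x + win + 1]
            if frame[y, x] >= threshold and frame[y, x] == patch.max():
                events.append((y, x, float(patch.sum())))
    return events

# Synthetic frame: one blurred photon event with ~90 integrated counts.
frame = np.zeros((16, 16))
frame[8, 8] = 50
frame[8, 7] = frame[8, 9] = frame[7, 8] = frame[9, 8] = 10
events = detect_photon_events(frame)
```

A real analysis would additionally model the fixed pattern noise and reject overlapping pulses, as the abstract notes.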
Lee, Jinwoo; Miyanaga, Yukihiro; Ueda, Masahiro; Hohng, Sungchul
2012-10-17
There is no confocal microscope optimized for single-molecule imaging in live cells and superresolution fluorescence imaging. By combining the swiftness of the line-scanning method and the high sensitivity of wide-field detection, we have developed a, to our knowledge, novel confocal fluorescence microscope with good optical-sectioning capability (1.0 μm), fast frame rates (up to 33 fps), and superior fluorescence detection efficiency. Full compatibility of the microscope with conventional cell-imaging techniques allowed us to perform single-molecule imaging with great ease at arbitrary depths in live cells. With the new microscope, we monitored the diffusion of fluorescently labeled cAMP receptors of Dictyostelium discoideum at both the basal and apical surfaces and obtained superresolution fluorescence images of microtubules of COS-7 cells at depths in the range 0-85 μm from the surface of a coverglass. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Wei, Chen-Wei; Nguyen, Thu-Mai; Xia, Jinjun; Arnal, Bastien; Wong, Emily Y.; Pelivanov, Ivan M.; O’Donnell, Matthew
2015-01-01
Because of depth-dependent light attenuation, bulky, low-repetition-rate lasers are used in most photoacoustic (PA) systems to provide sufficient pulse energies to image at depth within the body. However, integrating these lasers with real-time clinical ultrasound (US) scanners has been problematic because of their size and cost. In this paper, an integrated PA/US (PAUS) imaging system is presented operating at frame rates >30 Hz. By employing a portable, low-cost, low-pulse-energy (~2 mJ/pulse), high-repetition-rate (~1 kHz), 1053-nm laser, and a rotating galvo-mirror system enabling rapid laser beam scanning over the imaging area, the approach is demonstrated for potential applications requiring a few centimeters of penetration. In particular, we demonstrate here real-time (30 Hz frame rate) imaging (by combining multiple single-shot sub-images covering the scan region) of an 18-gauge needle inserted into a piece of chicken breast with subsequent delivery of an absorptive agent at more than 1-cm depth to mimic PAUS guidance of an interventional procedure. A signal-to-noise ratio of more than 35 dB is obtained for the needle in an imaging area of 2.8 × 2.8 cm (depth × lateral). Higher frame rate operation is envisioned with an optimized scanning scheme. PMID:25643081
NASA Astrophysics Data System (ADS)
Antolovic, Ivan Michel; Burri, Samuel; Bruschini, Claudio; Hoebe, Ron; Charbon, Edoardo
2016-02-01
For many scientific applications, electron multiplying charge coupled devices (EMCCDs) have been the sensor of choice because of their high quantum efficiency and built-in electron amplification. Lately, many researchers have introduced scientific complementary metal-oxide semiconductor (sCMOS) imagers in their instrumentation, so as to take advantage of faster readout and the absence of excess noise. Alternatively, single-photon avalanche diode (SPAD) imagers can provide even faster frame rates and zero readout noise. SwissSPAD is a 1-bit 512×128 SPAD imager, one of the largest of its kind, featuring a frame duration of 6.4 μs. Additionally, a gating mechanism enables photosensitive windows as short as 5 ns with a skew better than 150 ps across the entire array. The SwissSPAD photon detection efficiency (PDE) uniformity is very high, thanks both to photon-to-digital conversion and to a reduced fraction of "hot pixels" or "screamers", which would otherwise pollute the image with noise. A low native fill factor was recovered to a large extent using a microlens array, leading to a maximum PDE increase of 12×. This enabled us to detect single fluorophores, as required by ground state depletion followed by individual molecule return imaging microscopy (GSDIM). We show the first super-resolution results obtained with a SPAD imager, with an estimated localization uncertainty of 30 nm and resolution of 100 nm. The high time resolution of 6.4 μs can be utilized to explore the dye's photophysics or for dye optimization. We also present the methodology for the blinking analysis on experimental data.
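A 1-bit, photon-to-digital pixel like SwissSPAD's records only whether at least one photon arrived during a frame, so the underlying photon rate must be recovered by inverting the Poisson saturation P(bit=1) = 1 − exp(−λ). A small simulation of this standard correction; the rate and frame count are illustrative, not SwissSPAD calibration data:

```python
import numpy as np

rng = np.random.default_rng(0)
lam_true = 0.5          # mean photons per pixel per 6.4 us frame (assumed)
n_frames = 200_000

# Simulate a 1-bit pixel: each frame reads 1 if >= 1 photon arrived.
photons = rng.poisson(lam_true, size=n_frames)
ones = (photons >= 1).sum()

# Invert the binary saturation: P(bit=1) = 1 - exp(-lambda).
p_hat = ones / n_frames
lam_hat = -np.log(1.0 - p_hat)
```

The correction matters most at high rates, where the fraction of "1" frames saturates well below proportionality to the photon flux.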
Machine Learning Based Single-Frame Super-Resolution Processing for Lensless Blood Cell Counting
Huang, Xiwei; Jiang, Yu; Liu, Xu; Xu, Hang; Han, Zhi; Rong, Hailong; Yang, Haiping; Yan, Mei; Yu, Hao
2016-01-01
A lensless blood cell counting system integrating a microfluidic channel and a complementary metal oxide semiconductor (CMOS) image sensor is a promising technique for miniaturizing the conventional optical-lens-based imaging system for point-of-care testing (POCT). However, such a system has limited resolution, making it imperative to improve resolution at the system level using super-resolution (SR) processing. Yet how to improve resolution towards better cell detection and recognition, at low processing cost and without degrading system throughput, is still a challenge. In this article, two machine-learning-based single-frame SR processing methods are proposed and compared for lensless blood cell counting: Extreme Learning Machine based SR (ELMSR) and Convolutional Neural Network based SR (CNNSR). Moreover, lensless blood cell counting prototypes using commercial CMOS image sensors and custom-designed backside-illuminated CMOS image sensors are demonstrated with ELMSR and CNNSR. When a captured low-resolution lensless cell image is input, an improved high-resolution cell image is output. The experimental results show that cell resolution is improved by 4×, and that CNNSR achieves a 9.5% improvement over ELMSR in resolution enhancement. The cell counting results also match well with a commercial flow cytometer. ELMSR and CNNSR therefore have the potential for efficient resolution improvement in lensless blood cell counting systems for POCT applications. PMID:27827837
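The extreme-learning-machine idea behind ELMSR, a fixed random hidden layer whose output weights are solved in one least-squares step, mapping low-resolution patches to high-resolution ones, can be sketched on synthetic 1-D patches. Everything here (the smooth patch model, the sizes, the linear skip connection) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_patches(n):
    """Synthetic HR patches (length 8) drawn from a 3-parameter smooth
    model; LR patches (length 4) are their 2x average-pooled versions."""
    t = np.linspace(0.0, 1.0, 8)
    coef = rng.uniform(-1.0, 1.0, size=(n, 3))
    hr = coef @ np.vstack([np.ones_like(t), t, t ** 2])
    lr = hr.reshape(n, 4, 2).mean(axis=2)
    return lr, hr

# ELM: random, untrained hidden layer; only the output weights are fit,
# by ordinary least squares (a linear skip from the inputs is included).
n_hidden = 64
W = rng.normal(size=(4, n_hidden))
b = rng.normal(size=n_hidden)

def features(lr):
    return np.hstack([lr, np.tanh(lr @ W + b)])

lr_tr, hr_tr = make_patches(2000)
beta, *_ = np.linalg.lstsq(features(lr_tr), hr_tr, rcond=None)

lr_te, hr_te = make_patches(200)
mse = np.mean((features(lr_te) @ beta - hr_te) ** 2)

# Baseline: naive nearest-neighbor (pixel-repeat) upsampling.
naive_mse = np.mean((np.repeat(lr_te, 2, axis=1) - hr_te) ** 2)
```

The appeal of the ELM formulation for an embedded counting system is that training reduces to a single linear solve, with no gradient descent.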
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
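The patent's combination of a raw difference frame with pixel error estimates tied to spatial intensity gradients can be illustrated as follows: pixels on steep edges are granted a larger error budget, so jitter-induced edge differences are suppressed while genuine changes on flat background are flagged. The noise level, jitter amplitude, and threshold factor are hypothetical values chosen for the example:

```python
import numpy as np

def detect_changes(ref, cur, sigma_noise=2.0, jitter_px=1.0, k=3.0):
    """Flag changed pixels while suppressing jitter-induced differences.

    Per-pixel error budget = sensor noise + (spatial gradient magnitude
    of the reference x assumed jitter amplitude in pixels)."""
    diff = cur.astype(float) - ref.astype(float)
    gy, gx = np.gradient(ref.astype(float))
    grad_mag = np.hypot(gx, gy)
    err = sigma_noise + jitter_px * grad_mag
    return np.abs(diff) > k * err

# A bright edge that shifts by 1 px of jitter is ignored;
# a genuinely new blob on flat background is detected.
ref = np.zeros((20, 20)); ref[:, 10:] = 100.0
cur = np.zeros((20, 20)); cur[:, 11:] = 100.0   # jittered edge
cur[5, 3] = 80.0                                # genuine new object
mask = detect_changes(ref, cur)
```
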
NASA Astrophysics Data System (ADS)
Bozic, Ivan; El-Haddad, Mohamed T.; Malone, Joseph D.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.
2017-02-01
Ophthalmic diagnostic imaging using optical coherence tomography (OCT) is limited by bulk eye motions and a fundamental trade-off between field-of-view (FOV) and sampling density. Here, we introduce a novel multi-volumetric registration and mosaicking method using our previously described multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and OCT (SS-SESLO-OCT) system. Our SS-SESLO-OCT acquires an entire en face fundus SESLO image simultaneously with every OCT cross-section at 200 frames per second. In vivo human retinal imaging was performed in a healthy volunteer, and three volumetric datasets were acquired with the volunteer moving freely and refixating between each acquisition. In post-processing, SESLO frames were used to estimate en face rotational and translational motions by registering every frame in all three volumetric datasets to the first frame in the first volume. OCT cross-sections were contrast-normalized and registered axially and rotationally across all volumes. Rotational and translational motions calculated from SESLO frames were applied to corresponding OCT B-scans to compensate for inter- and intra-B-scan bulk motions, and the three registered volumes were combined into a single interpolated multi-volumetric mosaic. Using complementary information from SESLO and OCT over serially acquired volumes, we demonstrated multi-volumetric registration and mosaicking to recover regions of missing data resulting from blinks, saccades, and ocular drifts. We believe our registration method can be directly applied for multi-volumetric motion compensation, averaging, widefield mosaicking, and vascular mapping, with potential applications in ophthalmic clinical diagnostics, handheld imaging, and intraoperative guidance.
AlDahlawi, Ismail; Prasad, Dheerendra; Podgorsak, Matthew B
2017-05-01
The Gamma Knife Icon comes with an integrated cone-beam CT (CBCT) for image-guided stereotactic treatment deliveries. The CBCT can be used for defining the Leksell stereotactic space from imaging without the need for the traditional invasive frame system, which also allows for frameless thermoplastic mask stereotactic treatments (single or fractionated) with the Gamma Knife unit. In this study, we used an in-house built marker tool to evaluate the stability of the CBCT-based stereotactic space and its agreement with the standard frame-based stereotactic space. We imaged the tool with a CT indicator box using our CT simulator at the beginning, middle, and end of the study period (6 weeks) to determine the frame-based stereotactic space. The tool was also scanned with the Icon's CBCT on a daily basis throughout the study period, and the CBCT images were used to determine the CBCT-based stereotactic space. The coordinates of each marker were determined in each CT and CBCT scan using the Leksell GammaPlan treatment planning software. The magnitudes of the vector difference between the means of each marker in frame-based and CBCT-based stereotactic space ranged from 0.21 to 0.33 mm, indicating good agreement between the CBCT-based and frame-based stereotactic space definitions. Scanning 4 months later showed good long-term stability of the CBCT-based stereotactic space definition. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC).
Phillips, Zachary F; Chen, Michael; Waller, Laura
2017-01-01
We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification: an inexpensive 3D-printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or to extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and C. elegans in a micro-fluidic channel.
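The core DPC signal is the normalized difference of two images taken under complementary asymmetric illumination, which is, to first order, proportional to the phase gradient along the asymmetry axis. A toy sketch under an idealized linear imaging model (not the color-multiplexed cDPC reconstruction, which deconvolves with transfer functions):

```python
import numpy as np

def dpc_signal(i_left, i_right, eps=1e-9):
    """Normalized DPC difference; to first order proportional to the
    phase gradient along the illumination asymmetry axis."""
    i_left = i_left.astype(float)
    i_right = i_right.astype(float)
    return (i_left - i_right) / (i_left + i_right + eps)

# Toy object: a phase ramp tilts light toward one half of the aperture,
# brightening/darkening the two half-illumination images in proportion
# (an idealized linear model assumed for this sketch).
phase_grad = np.linspace(-0.2, 0.2, 64)   # arbitrary units
i_l = 100.0 * (1 + phase_grad)
i_r = 100.0 * (1 - phase_grad)
sig = dpc_signal(i_l, i_r)
```

The normalization cancels absorption and illumination brightness, which is why DPC yields a quantitative, rather than merely qualitative, phase-contrast signal.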
Real time 3D visualization of intraoperative organ deformations using structured dictionary.
Wang, Dan; Tewfik, Ahmed H
2012-04-01
Restricted visualization of the surgical field is one of the most critical challenges for minimally invasive surgery (MIS). Current intraoperative visualization systems are promising. However, they can hardly meet the requirements of high resolution and real time 3D visualization of the surgical scene to support the recognition of anatomic structures for safe MIS procedures. In this paper, we present a new approach for real time 3D visualization of organ deformations based on optical imaging patches with limited field-of-view and a single preoperative scan of magnetic resonance imaging (MRI) or computed tomography (CT). The idea for reconstruction is motivated by our empirical observation that the spherical harmonic coefficients corresponding to distorted surfaces of a given organ lie in lower dimensional subspaces in a structured dictionary that can be learned from a set of representative training surfaces. We provide both theoretical and practical designs for achieving these goals. Specifically, we discuss details about the selection of limited optical views and the registration of partial optical images with a single preoperative MRI/CT scan. The design proposed in this paper is evaluated with both finite element modeling data and ex vivo experiments. The ex vivo test is conducted on fresh porcine kidneys using 3D MRI scans with 1.2 mm resolution and a portable laser scanner with an accuracy of 0.13 mm. Results show that the proposed method achieves a sub-3 mm spatial resolution in terms of Hausdorff distance when using only one preoperative MRI scan and the optical patch from the single-sided view of the kidney. The reconstruction frame rate is between 10 frames/s and 39 frames/s depending on the complexity of the test model.
Generation of complementary sampled phase-only holograms.
Tsang, P W M; Chow, Y T; Poon, T-C
2016-10-03
If an image is uniformly down-sampled into a sparse form and converted into a hologram, the phase component alone will be adequate to reconstruct the image. However, the appearance of the reconstructed image is degraded by numerous empty holes. In this paper, we present a low-complexity, non-iterative solution to this problem. Briefly, two phase-only holograms are generated for an image, each based on a different down-sampling lattice. Subsequently, the holograms are displayed alternately at a high frame rate. The reconstructed images of the two holograms will appear to be a single, densely sampled image with enhanced visual quality.
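The method rests on two down-sampling lattices whose kept pixels are complementary, so the holes left by one hologram's reconstruction are filled by the other when the two are displayed alternately. A sketch of one possible lattice pair; the checkerboard split is chosen here purely for illustration and the paper's lattices may differ:

```python
import numpy as np

def complementary_lattices(shape, period=2):
    """Two binary down-sampling masks whose union covers every pixel.

    Lattice A keeps pixels where (x + y) % period == 0; lattice B keeps
    the complement, so alternating the two reconstructions leaves no
    empty holes."""
    yy, xx = np.indices(shape)
    a = ((xx + yy) % period) == 0
    return a, ~a

a, b = complementary_lattices((8, 8))
```

Each mask would multiply the image before hologram generation; persistence of vision fuses the two alternating reconstructions into one densely sampled view.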
Tracking Sunspots from Mars, April 2015 Animation
2015-07-10
This single frame from a sequence of six images of an animation shows sunspots as viewed by NASA's Curiosity Mars rover from April 4 to April 15, 2015. From Mars, the rover was in position to see the opposite side of the sun. The images were taken by the right-eye camera of Curiosity's Mast Camera (Mastcam), which has a 100-millimeter telephoto lens. The view on the left of each pair in this sequence has little processing other than calibration and putting north toward the top of each frame. The view on the right of each pair has been enhanced to make sunspots more visible. The apparent granularity throughout these enhanced images is an artifact of this processing. The sunspots seen in this sequence eventually produced two solar eruptions, one of which affected Earth. http://photojournal.jpl.nasa.gov/catalog/PIA19802
Robust image alignment for cryogenic transmission electron microscopy.
McLeod, Robert A; Kowal, Julia; Ringler, Philippe; Stahlberg, Henning
2017-03-01
Cryo-electron microscopy recently experienced great improvements in structure resolution due to direct electron detectors with improved contrast and fast read-out leading to single electron counting. High frame rates enabled dose fractionation, where a long exposure is broken into a movie, permitting specimen drift to be registered and corrected. The typical approach for image registration, with high shot noise and low contrast, is multi-reference (MR) cross-correlation. Here we present the software package Zorro, which provides robust drift correction for dose fractionation by use of an intensity-normalized cross-correlation and a logistic noise model to weight each cross-correlation in the MR model and filter each cross-correlation optimally. Frames are reliably registered by Zorro at low dose and defocus. Methods to evaluate performance are presented, by use of independently evaluated even- and odd-frame stacks, by trajectory comparison and Fourier ring correlation. Alignment of tiled sub-frames is also introduced, and demonstrated on an example dataset. Zorro source code is available at github.com/CINA/zorro. Copyright © 2016 Elsevier Inc. All rights reserved.
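Drift registration between movie frames is at heart a cross-correlation peak search. A minimal phase-correlation sketch, integer-pixel only and without Zorro's intensity normalization, logistic weighting, or optimal filtering:

```python
import numpy as np

def register_translation(ref, mov):
    """Integer-pixel shift that maps `ref` onto `mov`, by phase correlation."""
    f = np.conj(np.fft.fft2(ref)) * np.fft.fft2(mov)
    f /= np.abs(f) + 1e-12                  # whiten: keep phase only
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped peak indices to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

# Shot-noise-like test frame and a circularly shifted copy (simulated drift).
rng = np.random.default_rng(2)
ref = rng.poisson(5.0, size=(64, 64)).astype(float)
mov = np.roll(ref, shift=(3, -5), axis=(0, 1))
dy, dx = register_translation(ref, mov)
```

Real cryo-EM frames have far lower contrast than this toy case, which is precisely why Zorro's noise-model weighting and filtering of each cross-correlation matter.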
Burnette, Dylan T; Sengupta, Prabuddha; Dai, Yuhai; Lippincott-Schwartz, Jennifer; Kachar, Bechara
2011-12-27
Superresolution imaging techniques based on the precise localization of single molecules, such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), achieve high resolution by fitting images of single fluorescent molecules with a theoretical Gaussian to localize them with a precision on the order of tens of nanometers. PALM/STORM rely on photoactivated proteins or photoswitching dyes, respectively, which makes them technically challenging. We present a simple and practical way of producing point localization-based superresolution images that does not require photoactivatable or photoswitching probes. Called bleaching/blinking assisted localization microscopy (BaLM), the technique relies on the intrinsic bleaching and blinking behaviors characteristic of all commonly used fluorescent probes. To detect single fluorophores, we simply acquire a stream of fluorescence images. Fluorophore bleach or blink-off events are detected by subtracting from each image of the series the subsequent image. Similarly, blink-on events are detected by subtracting from each frame the previous one. After image subtractions, fluorescence emission signals from single fluorophores are identified and the localizations are determined by fitting the fluorescence intensity distribution with a theoretical Gaussian. We also show that BaLM works with a spectrum of fluorescent molecules in the same sample. Thus, BaLM extends single molecule-based superresolution localization to samples labeled with multiple conventional fluorescent probes.
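The BaLM detection step, subtracting each frame from its neighbor to isolate bleach/blink-off and blink-on events and then localizing each event, can be sketched as below. An intensity-weighted centroid stands in for the theoretical Gaussian fit, and the threshold is a hypothetical value:

```python
import numpy as np

def balm_events(stack, threshold=40.0):
    """Detect blink-off/bleach events (frame minus next frame) and
    blink-on events (frame minus previous frame), localizing each by an
    intensity-weighted centroid of the difference image.

    Returns two lists of (frame_index, y, x). A 2D Gaussian fit would
    replace the centroid in a full implementation."""
    def centroid(img):
        img = np.clip(img, 0, None)
        yy, xx = np.indices(img.shape)
        s = img.sum()
        return img.ravel() @ yy.ravel() / s, img.ravel() @ xx.ravel() / s

    off_events, on_events = [], []
    for i in range(len(stack) - 1):
        off = stack[i].astype(float) - stack[i + 1].astype(float)
        on = -off
        if off.max() > threshold:
            off_events.append((i,) + centroid(off))
        if on.max() > threshold:
            on_events.append((i + 1,) + centroid(on))
    return off_events, on_events

# One fluorophore at (10, 14) that bleaches between frames 2 and 3.
stack = np.zeros((5, 24, 24))
stack[:3, 10, 14] = 100.0
off_ev, on_ev = balm_events(stack)
```
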
Israel, Yonatan; Tenne, Ron; Oron, Dan; Silberberg, Yaron
2017-01-01
Despite advances in low-light-level detection, single-photon methods such as photon correlation have rarely been used in the context of imaging. The few demonstrations, for example of subdiffraction-limited imaging utilizing quantum statistics of photons, have remained in the realm of proof-of-principle demonstrations. This is primarily due to a combination of low values of fill factors, quantum efficiencies, frame rates and signal-to-noise characteristic of most available single-photon sensitive imaging detectors. Here we describe an imaging device based on a fibre bundle coupled to single-photon avalanche detectors that combines a large fill factor, a high quantum efficiency, a low noise and scalable architecture. Our device enables localization-based super-resolution microscopy in a non-sparse non-stationary scene, utilizing information on the number of active emitters, as gathered from non-classical photon statistics. PMID:28287167
GPU-based multi-volume ray casting within VTK for medical applications.
Bozorgi, Mohammadmehdi; Lindseth, Frank
2015-03-01
Multi-volume visualization is important for displaying relevant information in multimodal or multitemporal medical imaging studies. The main objective of the current study was to develop an efficient GPU-based multi-volume ray caster (MVRC) and validate the proposed visualization system in the context of image-guided surgical navigation. Ray casting can produce high-quality 2D images from 3D volume data, but the method is computationally demanding, especially when multiple volumes are involved, so a parallel GPU version has been implemented. In the proposed MVRC, imaginary rays are sent through the volumes (one ray for each pixel in the view), and at equal and short intervals along the rays, samples are collected from each volume. Samples from all the volumes are composited using front-to-back α-blending. Since all the rays can be processed simultaneously, the MVRC was implemented in parallel on the GPU to achieve acceptable interactive frame rates. The method is fully integrated within the visualization toolkit (VTK) pipeline with the ability to apply different operations (e.g., transformations, clipping, and cropping) on each volume separately. The implemented method is cross-platform (Windows, Linux and Mac OSX) and runs on different graphics cards (NVIDIA and AMD). The speed of the MVRC was tested with one to five volumes of varying sizes: 128³, 256³, and 512³. A Tesla C2070 GPU was used, and the output image size was 600 × 600 pixels. The original VTK single-volume ray caster and the MVRC were compared when rendering only one volume. The multi-volume rendering system achieved an interactive frame rate (>15 fps) when rendering five small volumes (128³ voxels), four medium-sized volumes (256³ voxels), and two large volumes (512³ voxels). When rendering single volumes, the frame rate of the MVRC was comparable to the original VTK ray caster for small and medium-sized datasets but was approximately 3 frames per second slower for large datasets.
The MVRC was successfully integrated in an existing surgical navigation system and was shown to be clinically useful during an ultrasound-guided neurosurgical tumor resection. A GPU-based MVRC for VTK is a useful tool in medical visualization. The proposed multi-volume GPU-based ray caster for VTK provided high-quality images at reasonable frame rates. The MVRC was effective when used in a neurosurgical navigation application.
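The core of the MVRC is front-to-back α-blending of the samples collected along each ray, with early termination once the accumulated opacity saturates. A scalar single-ray sketch; the GPU version runs this recurrence for every pixel's ray in parallel, with samples interleaved from the actual volumes:

```python
def composite_front_to_back(samples):
    """Front-to-back alpha blending of (color, alpha) samples collected
    along one ray, possibly interleaved from several volumes:

        C_out += (1 - A_out) * alpha_i * color_i
        A_out += (1 - A_out) * alpha_i

    with early ray termination once the ray is nearly opaque."""
    color_out, alpha_out = 0.0, 0.0
    for color, alpha in samples:
        color_out += (1.0 - alpha_out) * alpha * color
        alpha_out += (1.0 - alpha_out) * alpha
        if alpha_out > 0.99:            # early ray termination
            break
    return color_out, alpha_out

# Two interleaved volumes along one ray: a semi-transparent sample in
# front of a fully opaque one, which terminates accumulation.
ray = [(0.8, 0.5), (1.0, 1.0), (0.3, 0.7)]
c, a = composite_front_to_back(ray)
```

Because each ray's accumulation is independent, the recurrence maps directly onto one GPU thread per pixel, which is what makes interactive frame rates achievable.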
High frequency ultrasound: a new frontier for ultrasound.
Shung, K; Cannata, Jonathan; Qifa Zhou, Member; Lee, Jungwoo
2009-01-01
High frequency ultrasonic imaging is considered by many to be the next frontier in ultrasonic imaging because higher frequencies yield much improved spatial resolution at the cost of depth of penetration. It has many clinical applications, including visualizing blood vessel walls, anterior segments of the eye, and skin. Another application is small animal imaging. Ultrasound is especially attractive for imaging the heart of a small animal such as a mouse, which is a few mm in size and beats faster than 600 BPM. A majority of current commercial high frequency scanners, often termed ultrasonic backscatter microscopes (UBMs), acquire images by scanning single element transducers at frequencies between 50 and 80 MHz with a frame rate lower than 40 frames/s, making them less suitable for this application. High frequency linear arrays and linear-array-based ultrasonic imaging systems at frequencies higher than 30 MHz are being developed. The engineering of such arrays and the development of high frequency imaging systems have proven to be highly challenging. High frequency ultrasound may find other significant biomedical applications; the development of acoustic tweezers for manipulating microparticles is one example.
Software manual for operating particle displacement tracking data acquisition and reduction system
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1991-01-01
The software manual is presented. The steps required to record, analyze, and reduce Particle Image Velocimetry (PIV) data using the Particle Displacement Tracking (PDT) technique are described. The new PDT system is an all-electronic technique employing a CCD video camera and a large-memory-buffer frame-grabber board to record low velocity (less than or equal to 20 cm/s) flows. Using a simple encoding scheme, a time sequence of single exposure images is time coded into a single image and then processed to track particle displacements and determine 2-D velocity vectors. All the PDT data acquisition, analysis, and data reduction software is written to run on an 80386 PC.
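The PDT reduction step, pairing particle positions across exposures and converting each displacement into a 2-D velocity vector, can be sketched with a nearest-neighbor match. The positions, time step, and ambiguity cutoff below are illustrative, not taken from the manual:

```python
import numpy as np

def track_velocities(pos_t0, pos_t1, dt, max_disp=5.0):
    """Match each particle in frame t0 to its nearest neighbor in frame
    t1 and convert the displacement to a 2-D velocity vector.

    pos_* are (N, 2) arrays of (x, y) centroids in cm; dt is in seconds.
    Matches farther than `max_disp` are discarded as ambiguous."""
    velocities = []
    for p in pos_t0:
        d = np.linalg.norm(pos_t1 - p, axis=1)
        j = np.argmin(d)
        if d[j] <= max_disp:
            velocities.append((pos_t1[j] - p) / dt)
    return np.array(velocities)

# Two particles drifting right at 10 cm/s, exposures 0.01 s apart.
t0 = np.array([[1.0, 1.0], [4.0, 3.0]])
t1 = t0 + np.array([0.1, 0.0])
v = track_velocities(t0, t1, dt=0.01)
```

At the low velocities the system targets (≤20 cm/s), inter-exposure displacements stay small enough for this kind of nearest-neighbor pairing to remain unambiguous.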
Kalantari, Faraz; Wang, Jing
2017-01-01
Purpose: Four-dimensional positron emission tomography (4D-PET) imaging is a potential solution to the respiratory motion effect in the thoracic region. Computed tomography (CT)-based attenuation correction (AC) is an essential step toward quantitative imaging for PET. However, due to the temporal difference between 4D-PET and the single attenuation map from CT typically available in routine clinical scanning, motion artifacts are observed in the attenuation-corrected PET images, leading to errors in tumor shape and uptake. We introduce a practical method to align single-phase CT with all other 4D-PET phases for AC. Methods: A penalized non-rigid Demons registration between individual 4D-PET frames without AC provides the motion vectors used to warp the single-phase attenuation map. The non-rigid Demons registration was used to derive deformation vector fields (DVFs) between the PET frame matched to the CT phase and the other 4D-PET frames. While non-attenuation-corrected PET images provide useful data for organ borders such as those of the lung and the liver, tumors cannot be distinguished from the background due to loss of contrast. To preserve the tumor shape in different phases, an ROI covering the tumor was excluded from the non-rigid transformation; instead, the mean DVF of the central region of the tumor was assigned to all voxels in the ROI. This process mimics a rigid transformation of the tumor along with a non-rigid transformation of the other organs. A 4D-XCAT phantom with spherical lung tumors, with diameters ranging from 10 to 40 mm, was used to evaluate the algorithm. The performance of the proposed hybrid method for attenuation map estimation was compared to (1) Demons non-rigid registration only and (2) a single attenuation map, based on quantitative parameters in individual PET frames. Results: Motion-related artifacts were significantly reduced in the attenuation-corrected 4D-PET images.
When a single attenuation map was used for all individual PET frames, the normalized root mean square error (NRMSE) values in the tumor region were 49.3% (STD: 8.3%), 50.5% (STD: 9.3%), 51.8% (STD: 10.8%) and 51.5% (STD: 12.1%) for 10-mm, 20-mm, 30-mm and 40-mm tumors, respectively. These errors were reduced to 11.9% (STD: 2.9%), 13.6% (STD: 3.9%), 13.8% (STD: 4.8%), and 16.7% (STD: 9.3%) by our proposed method for deforming the attenuation map. The relative errors in total lesion glycolysis (TLG) values were −0.25% (STD: 2.87%) and 3.19% (STD: 2.35%) for 30-mm and 40-mm tumors, respectively, with the proposed method. The corresponding values for the Demons method were 25.22% (STD: 14.79%) and 18.42% (STD: 7.06%). Our proposed hybrid method outperforms the Demons method, especially for larger tumors. For tumors smaller than 20 mm, non-rigid transformation could also provide quantitative results. Conclusion: Although non-AC 4D-PET frames contain limited anatomical information, they are still useful for estimating the DVFs used to align the attenuation map for accurate AC. The proposed hybrid method can correct the AC-related artifacts and provide quantitative AC-PET images. PMID:27987223
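The hybrid deformation described in the Methods, non-rigid everywhere except a tumor ROI that instead receives the mean DVF of the tumor core, amounts to a simple edit of the vector field. A sketch on a synthetic 2-D field and masks (not the authors' registration code):

```python
import numpy as np

def hybrid_dvf(dvf, roi_mask, core_mask=None):
    """Replace the deformation vectors inside a tumor ROI with the mean
    vector of its core region, mimicking a rigid shift of the tumor
    embedded in a non-rigid deformation of the surrounding organs.

    dvf: (H, W, 2) deformation vector field; masks: (H, W) boolean."""
    out = dvf.copy()
    core = roi_mask if core_mask is None else core_mask
    mean_vec = dvf[core].mean(axis=0)
    out[roi_mask] = mean_vec
    return out

# Non-rigid background field (displacement grows across columns) with a
# tumor ROI in the middle: the ROI is homogenized to its mean vector.
dvf = np.zeros((10, 10, 2))
dvf[..., 0] = np.arange(10)[None, :]
roi = np.zeros((10, 10), bool)
roi[4:7, 4:7] = True
out = hybrid_dvf(dvf, roi)
```
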
Microradiography with Semiconductor Pixel Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakubek, Jan; Cejnarova, Andrea; Dammer, Jiri
High resolution radiography (with X-rays, neutrons, heavy charged particles, ...), often also exploited in tomographic mode to provide 3D images, is a powerful technique for instant and nondestructive visualization of the fine internal structure of objects. Novel semiconductor single-particle-counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination or direct energy measurement, noiseless digital integration (counting), high frame rate, and virtually unlimited dynamic range. This article shows the application and potential of pixel detectors (such as Medipix2 or TimePix) in different fields of radiation imaging.
Norbert, M.A.; Yale, O.
1992-04-28
A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth orbit space objects to be tracked and increases the stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with subapertures of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m, which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength matching tolerance by one order of magnitude as compared to phased arrays. Many features of the telescope contribute to substantial reduction in costs. These include eliminating the conventional protective dome and reducing on-site construction activities. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes. 15 figs.
Resolving Fast, Confined Diffusion in Bacteria with Image Correlation Spectroscopy.
Rowland, David J; Tuson, Hannah H; Biteen, Julie S
2016-05-24
By following single fluorescent molecules in a microscope, single-particle tracking (SPT) can measure diffusion and binding on the nanometer and millisecond scales. Still, although SPT can at its limits characterize the fastest biomolecules as they interact with subcellular environments, this measurement may require advanced illumination techniques such as stroboscopic illumination. Here, we address the challenge of measuring fast subcellular motion by instead analyzing single-molecule data with spatiotemporal image correlation spectroscopy (STICS) with a focus on measurements of confined motion. Our SPT and STICS analysis of simulations of the fast diffusion of confined molecules shows that image blur affects both STICS and SPT, and we find biased diffusion rate measurements for STICS analysis in the limits of fast diffusion and tight confinement due to fitting STICS correlation functions to a Gaussian approximation. However, we determine that with STICS, it is possible to correctly interpret the motion that blurs single-molecule images without advanced illumination techniques or fast cameras. In particular, we present a method to overcome the bias due to image blur by properly estimating the width of the correlation function by directly calculating the correlation function variance instead of using the typical Gaussian fitting procedure. Our simulation results are validated by applying the STICS method to experimental measurements of fast, confined motion: we measure the diffusion of cytosolic mMaple3 in living Escherichia coli cells at 25 frames/s under continuous illumination to illustrate the utility of STICS in an experimental parameter regime for which in-frame motion prevents SPT and tight confinement of fast diffusion precludes stroboscopic illumination. 
Overall, our application of STICS to freely diffusing cytosolic protein in small cells extends the utility of single-molecule experiments to the regime of fast confined diffusion without requiring advanced microscopy techniques. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
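A minimal numpy sketch (illustrative, not the authors' code; function names are ours) of the moment-based width estimate described above: the spatiotemporal correlation function is computed by FFT, and its variance is taken directly from second moments of the correlation peak rather than from a Gaussian fit.

```python
import numpy as np

def correlation_function(frames, tau):
    """Average spatial cross-correlation between frames separated by lag tau.
    frames: (T, Ny, Nx) image series; each frame is mean-subtracted first."""
    f = frames - frames.mean(axis=(1, 2), keepdims=True)
    F = np.fft.fft2(f)
    if tau == 0:
        cross = (F * np.conj(F)).mean(axis=0)
    else:
        cross = (F[:-tau] * np.conj(F[tau:])).mean(axis=0)
    corr = np.fft.fftshift(np.fft.ifft2(cross).real)
    return corr / corr.max()

def corr_width_by_moments(corr):
    """Estimate the per-axis variance (squared width) of the correlation
    function directly from its second moments, avoiding the Gaussian-fit
    bias discussed above. The positive part of corr is used as a weight."""
    ny, nx = corr.shape
    y, x = np.mgrid[0:ny, 0:nx]
    w = np.clip(corr, 0.0, None)
    w = w / w.sum()
    cx, cy = (w * x).sum(), (w * y).sum()
    return (w * ((x - cx) ** 2 + (y - cy) ** 2)).sum() / 2.0
```

For a static Gaussian spot of variance s^2, the correlation peak has per-axis variance close to 2 s^2, which the moment estimate recovers without any fitting.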
Formulation of image fusion as a constrained least squares optimization problem
Dwork, Nicholas; Lasry, Eric M.; Pauly, John M.; Balbás, Jorge
2017-01-01
Abstract. Fusing a lower resolution color image with a higher resolution monochrome image is a common practice in medical imaging. By incorporating spatial context and/or improving the signal-to-noise ratio, it provides clinicians with a single frame of the most complete information for diagnosis. In this paper, image fusion is formulated as a convex optimization problem that avoids image decomposition and permits operations at the pixel level. This results in a highly efficient and embarrassingly parallelizable algorithm based on widely available robust and simple numerical methods that realizes the fused image as the global minimizer of the convex optimization problem. PMID:28331885
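As an illustration of pixel-level least-squares fusion, the sketch below uses a simplified, unconstrained stand-in for the paper's constrained formulation: the fused luminance minimizes a per-pixel quadratic, and the color channels are rescaled to carry it. The weight `lam` and the mean-based luminance are our assumptions, not the paper's.

```python
import numpy as np

def fuse(color, mono, lam=0.25):
    """Per-pixel least-squares fusion sketch: the fused luminance y minimizes
    (y - mono)^2 + lam*(y - lum)^2 at each pixel, with the closed-form
    minimizer y = (mono + lam*lum) / (1 + lam)."""
    lum = color.mean(axis=2)                      # crude luminance proxy
    y = (mono + lam * lum) / (1.0 + lam)          # per-pixel minimizer
    scale = y / np.maximum(lum, 1e-6)
    return np.clip(color * scale[..., None], 0.0, 1.0)
```

Because the objective separates per pixel, the solve is embarrassingly parallel, mirroring the property claimed in the abstract.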
High dynamic range CMOS-based mammography detector for FFDM and DBT
NASA Astrophysics Data System (ADS)
Peters, Inge M.; Smit, Chiel; Miller, James J.; Lomako, Andrey
2016-03-01
Digital Breast Tomosynthesis (DBT) requires excellent image quality in a dynamic mode at very low dose levels while Full Field Digital Mammography (FFDM) is a static imaging modality that requires high saturation dose levels. These opposing requirements can only be met by a dynamic detector with a high dynamic range. This paper will discuss a wafer-scale CMOS-based mammography detector with 49.5 μm pixels and a CsI scintillator. Excellent image quality is obtained for FFDM as well as DBT applications, comparing favorably with a-Se detectors that dominate the X-ray mammography market today. The typical dynamic range of a mammography detector is not high enough to accommodate both the low noise and the high saturation dose requirements for DBT and FFDM applications, respectively. An approach based on gain switching does not provide the signal-to-noise benefits in the low-dose DBT conditions. The solution to this is to add frame summing functionality to the detector. In one X-ray pulse several image frames will be acquired and summed. The requirements to implement this into a detector are low noise levels, high frame rates and low lag performance, all of which are unique characteristics of CMOS detectors. Results are presented to prove that excellent image quality is achieved, using a single detector for both DBT as well as FFDM dose conditions. This method of frame summing gave the opportunity to optimize the detector noise and saturation level for DBT applications, to achieve high DQE level at low dose, without compromising the FFDM performance.
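The frame-summing idea can be sketched as follows; all numbers, including the full-well and read-noise values, are illustrative rather than the detector's specifications. Summing n frames raises the saturation level n-fold while read noise grows only as sqrt(n).

```python
import numpy as np

def summed_capture(flux, n_frames, full_well=4095, read_noise=2.0, rng=None):
    """Acquire n_frames short frames during one X-ray pulse and sum them.
    Each frame clips at full_well, so the sum saturates at n_frames*full_well,
    giving the dynamic-range extension described above."""
    if rng is None:
        rng = np.random.default_rng(0)
    total = np.zeros(flux.shape)
    for _ in range(n_frames):
        frame = rng.poisson(flux) + rng.normal(0.0, read_noise, flux.shape)
        total += np.clip(frame, 0, full_well)
    return total
```

A flux far above the single-frame full well still registers n_frames * full_well in the sum, while a low-dose signal is preserved with near-sqrt(n) noise growth.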
Kotasidis, F A; Mehranian, A; Zaidi, H
2016-05-07
Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging, on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used; however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step, or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework.
Using ever improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation either from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single-lens, reflex-viewing design with a 15-perforation-per-frame horizontal pull-across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
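The quoted figures are mutually consistent, as a quick check shows: 336 ft/min of film at 24 frames/s and 15 perforations per frame implies a perforation pitch of about 0.187 in, the standard pitch for 65 mm stock.

```python
# Consistency check on the abstract's figures: derive the perforation pitch
# implied by 336 ft/min at 24 frames/s and 15 perforations per frame.
ft_per_min, fps, perf_per_frame = 336.0, 24, 15

in_per_s = ft_per_min * 12.0 / 60.0          # 67.2 inches of film per second
pitch = in_per_s / (fps * perf_per_frame)    # inches per perforation
print(round(pitch, 4))                       # ~0.1867 in
```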
A single FPGA-based portable ultrasound imaging system for point-of-care applications.
Kim, Gi-Duck; Yoon, Changhan; Kye, Sang-Bum; Lee, Youngbae; Kang, Jeeun; Yoo, Yangmo; Song, Tai-kyong
2012-07-01
We present a cost-effective portable ultrasound system based on a single field-programmable gate array (FPGA) for point-of-care applications. In the portable ultrasound system developed, all the ultrasound signal and image processing modules, including an effective 32-channel receive beamformer with pseudo-dynamic focusing, are embedded in an FPGA chip. For overall system control, a mobile processor running Linux at 667 MHz is used. The scan-converted ultrasound image data from the FPGA are directly transferred to the system controller via external direct memory access without a video processing unit. The portable ultrasound system developed can provide real-time B-mode imaging with a maximum frame rate of 30 frames/s, and it has a battery life of approximately 1.5 h. These results indicate that the single FPGA-based portable ultrasound system developed is able to meet the processing requirements in medical ultrasound imaging while providing improved flexibility for adapting to emerging point-of-care applications.
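A textbook delay-and-sum sketch illustrates the basic receive-beamforming operation embedded in such systems (this is the generic form, not the paper's pseudo-dynamic focusing implementation; names and integer-sample delays are our simplifications):

```python
import numpy as np

def delay_and_sum(rf, delays_samples):
    """Minimal delay-and-sum receive beamformer: rf is (channels, samples);
    each channel is advanced by its integer focusing delay so that echoes
    from the focal point align, then the channels are summed coherently."""
    n_ch, n_s = rf.shape
    out = np.zeros(n_s)
    for ch in range(n_ch):
        d = int(delays_samples[ch])
        out[: n_s - d] += rf[ch, d:]
    return out
```

With correct delays, an echo common to all 32 channels sums coherently (amplitude x32), while uncorrelated noise grows only as sqrt(32).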
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting, Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and the images in three plane mirrors depicted within the painting.
Single-shot quantitative phase microscopy with color-multiplexed differential phase contrast (cDPC)
2017-01-01
We present a new technique for quantitative phase and amplitude microscopy from a single color image with coded illumination. Our system consists of a commercial brightfield microscope with one hardware modification—an inexpensive 3D printed condenser insert. The method, color-multiplexed Differential Phase Contrast (cDPC), is a single-shot variant of Differential Phase Contrast (DPC), which recovers the phase of a sample from images with asymmetric illumination. We employ partially coherent illumination to achieve resolution corresponding to 2× the objective NA. Quantitative phase can then be used to synthesize DIC and phase contrast images or extract shape and density. We demonstrate amplitude and phase recovery at camera-limited frame rates (50 fps) for various in vitro cell samples and c. elegans in a micro-fluidic channel. PMID:28152023
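The DPC building block that cDPC multiplexes into a single color exposure is the normalized difference of two asymmetrically illuminated images; a minimal sketch (in cDPC the two inputs come from different color channels of one frame):

```python
import numpy as np

def dpc_image(i_left, i_right):
    """Normalized asymmetric-illumination difference used by DPC: the result
    is antisymmetric in the phase gradient and, to first order, insensitive
    to absorption because of the normalization by the total intensity."""
    return (i_left - i_right) / (i_left + i_right + 1e-12)
```

A flat (phase-free) sample yields zero DPC signal; an intensity imbalance caused by a phase gradient yields a signed, normalized contrast.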
Results From the New NIF Gated LEH imager
NASA Astrophysics Data System (ADS)
Chen, Hui; Amendt, P.; Barrios, M.; Bradley, D.; Casey, D.; Hinkel, D.; Berzak Hopkins, L.; Kilkenny, J.; Kritcher, A.; Landen, O.; Jones, O.; Ma, T.; Milovich, J.; Michel, P.; Moody, J.; Ralph, J.; Pak, A.; Palmer, N.; Schneider, M.
2016-10-01
A novel ns-gated Laser Entrance Hole (G-LEH) diagnostic has been successfully implemented at the National Ignition Facility (NIF). This diagnostic has successfully acquired images from various experimental campaigns, providing critical information for inertial confinement fusion experiments. The G-LEH diagnostic, which takes time-resolved gated images along a single line-of-sight, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories into the existing Static X-ray Imager diagnostic at NIF. It is capable of capturing two laser-entrance-hole images per shot on its 1024x448 pixel photo-detector array, with integration times as short as 2 ns per frame. The results that will be presented include the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross beam energy transfer, and plasma instability growth near the hohlraum wall. This work was performed under the auspices of the U.S. Department of Energy by LLNS, LLC, under Contract No. DE-AC52-07NA27344.
NASA Astrophysics Data System (ADS)
Gorpas, Dimitris; Ma, Dinglong; Bec, Julien; Yankelevich, Diego R.; Marcu, Laura
2016-03-01
Fluorescence lifetime imaging has been shown to be a robust technique for biochemical and functional characterization of tissues and to present great potential for intraoperative tissue diagnosis and guidance of surgical procedures. We report a technique for real-time mapping of fluorescence parameters (i.e., lifetime values) onto the location from where the fluorescence measurements were taken. This is achieved by merging a 450 nm aiming beam generated by a diode laser with the excitation light in a single delivery/collection fiber and by continuously imaging the region of interest with a color CMOS camera. The interrogated locations are then extracted from the acquired frames via color-based segmentation of the aiming beam. Assuming a Gaussian profile of the imaged aiming beam, the segmentation results are fitted to ellipses that are dynamically scaled at the full width of three automatically estimated thresholds (50%, 75%, 90%) of the Gaussian distribution's maximum value. This enables the dynamic augmentation of the white-light video frames with the corresponding fluorescence decay parameters. A fluorescence phantom and fresh tissue samples were used to evaluate this method with motorized and hand-held scanning measurements. At 640x512 pixel resolution, the area of interest augmented with fluorescence decay parameters can be imaged at an average of 34 frames per second. The developed method has the potential to become a valuable tool for real-time display of optical spectroscopy data during continuous scanning applications that subsequently can be used for tissue characterization and diagnosis.
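The three ellipse thresholds follow directly from the assumed Gaussian beam profile: the full width at a fraction f of the peak is 2*sigma*sqrt(2*ln(1/f)). A small sketch of this width-versus-threshold relation (our own helper, not the paper's code):

```python
import numpy as np

def full_width_at_fraction(sigma, f):
    """Width of a Gaussian profile exp(-r^2 / (2 sigma^2)) at a fraction f
    of its maximum; f = 0.5 recovers the familiar FWHM = 2.3548 * sigma."""
    return 2.0 * sigma * np.sqrt(2.0 * np.log(1.0 / f))
```

The 90%, 75%, and 50% ellipses therefore have fixed width ratios of about 0.92 : 1.52 : 2.35 (in units of sigma), so the three contours can be scaled from a single fitted sigma.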
The gravitational lens system Q0957+561 in the ultraviolet
NASA Technical Reports Server (NTRS)
Dolan, J. F.; Michalitsianos, A. G.; Thompson, R. W.; Boyd, P. T.; Wolinski, K. G.; Bless, R. C.; Nelson, M. J.; Percival, J. W.; Taylor, M. J.; Elliot, J. L.
1995-01-01
Photometric and polarimetric observations of both images of the gravitationally lensed quasar Q0957+561 (z(sub em) = 1.41) were obtained in the UV in 1993 with the High Speed Photometer on board the Hubble Space Telescope. The images exhibited no significant polarization in a bandpass centered on 2770 A (observer's frame); p less than or = 3.2% (2 sigma upper limit) in each image. The ratio of the flux density in image A to that in image B in late 1993 had a constant value, 1.021 +/- 0.008, in four different UV bandpasses between 1400 A and 3040 A (observer's frame). These results are consistent with the prediction of the gravitational lens interpretation that the photometric ratio of the images measured simultaneously should be independent of frequency. Reprocessed archival spectra of the two images obtained between 1981 and 1983 by the International Ultraviolet Explorer (IUE) show that the photometric ratio of A to B varies between 0.96 and 2.0 in the Ly alpha emission line, and between 0.77 and 1.8 in the O VI lambda 1037 emission line (quasar rest frame). The photometric ratio of A to B at any single epoch is often significantly different in the two emission lines. Accepting the system as a gravitational lens implies that in the quasar the flux in the Ly alpha emission line can vary independently of the flux in the O VI emission line.
Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.
Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan
2018-02-01
The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given the fact that the image characteristic is hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background vs. dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied on an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
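A toy L + S decomposition by alternating singular-value and soft thresholding conveys the idea (a schematic stand-in for the paper's restoration model, not its exact algorithm; the thresholds are arbitrary):

```python
import numpy as np

def soft(x, t):
    """Elementwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l_plus_s(M, lam_l=0.5, lam_s=0.05, n_iter=50):
    """Toy L + S decomposition of a (frames x voxels) dynamic matrix M:
    alternate singular-value thresholding for the low-rank background L
    and soft thresholding for the sparse dynamic component S."""
    L = np.zeros(M.shape)
    S = np.zeros(M.shape)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = U @ np.diag(soft(sv, lam_l)) @ Vt
        S = soft(M - L, lam_s)
    return L, S
```

On a rank-1 background with a single sparse perturbation, the alternation routes the perturbation into S and leaves L essentially rank-1, which is the separation the restoration model exploits.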
Kura, Sreekanth; Xie, Hongyu; Fu, Buyin; Ayata, Cenk; Boas, David A; Sakadžić, Sava
2018-06-01
Resting state functional connectivity (RSFC) allows the study of functional organization in normal and diseased brain by measuring the spontaneous brain activity generated under resting conditions. Intrinsic optical signal imaging (IOSI) based on multiple illumination wavelengths has been used successfully to compute RSFC maps in animal studies. The IOSI setup complexity would be greatly reduced if only a single wavelength can be used to obtain comparable RSFC maps. We used anesthetized mice and performed various comparisons between the RSFC maps based on single wavelength as well as oxy-, deoxy- and total hemoglobin concentration changes. The RSFC maps based on IOSI at a single wavelength selected for sensitivity to the blood volume changes are quantitatively comparable to the RSFC maps based on oxy- and total hemoglobin concentration changes obtained by the more complex IOSI setups. Moreover, RSFC maps do not require CCD cameras with very high frame acquisition rates, since our results demonstrate that they can be computed from the data obtained at frame rates as low as 5 Hz. Our results will have general utility for guiding future RSFC studies based on IOSI and making decisions about the IOSI system designs.
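Seed-based RSFC maps of the kind described can be sketched as a per-pixel Pearson correlation against a seed time course (illustrative only; `rsfc_map` and its arguments are our own names, and real pipelines add filtering and global-signal regression):

```python
import numpy as np

def rsfc_map(frames, seed_yx):
    """Seed-based connectivity sketch: Pearson correlation between the seed
    pixel's time course and every pixel's time course.
    frames: (T, Ny, Nx) image time series, e.g. single-wavelength reflectance
    sampled at a modest frame rate such as 5 Hz."""
    T, ny, nx = frames.shape
    ts = frames.reshape(T, -1)
    ts = (ts - ts.mean(axis=0)) / (ts.std(axis=0) + 1e-12)
    seed = ts[:, seed_yx[0] * nx + seed_yx[1]]
    return (ts * seed[:, None]).mean(axis=0).reshape(ny, nx)
```

Regions sharing the seed's spontaneous fluctuations map to correlations near +1, while anticorrelated regions map near -1.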
NASA Astrophysics Data System (ADS)
De Luccia, Frank J.; Houchin, Scott; Porter, Brian C.; Graybill, Justin; Haas, Evan; Johnson, Patrick D.; Isaacson, Peter J.; Reth, Alan D.
2016-05-01
The GOES-R Flight Project has developed an Image Navigation and Registration (INR) Performance Assessment Tool Set (IPATS) for measuring Advanced Baseline Imager (ABI) and Geostationary Lightning Mapper (GLM) INR performance metrics in the post-launch period for performance evaluation and long term monitoring. For ABI, these metrics are the 3-sigma errors in navigation (NAV), channel-to-channel registration (CCR), frame-to-frame registration (FFR), swath-to-swath registration (SSR), and within frame registration (WIFR) for the Level 1B image products. For GLM, the single metric of interest is the 3-sigma error in the navigation of background images (GLM NAV) used by the system to navigate lightning strikes. 3-sigma errors are estimates of the 99.73rd percentile of the errors accumulated over a 24 hour data collection period. IPATS utilizes a modular algorithmic design to allow user selection of data processing sequences optimized for generation of each INR metric. This novel modular approach minimizes duplication of common processing elements, thereby maximizing code efficiency and speed. Fast processing is essential given the large number of sub-image registrations required to generate INR metrics for the many images produced over a 24 hour evaluation period. Another aspect of the IPATS design that vastly reduces execution time is the off-line propagation of Landsat based truth images to the fixed grid coordinates system for each of the three GOES-R satellite locations, operational East and West and initial checkout locations. This paper describes the algorithmic design and implementation of IPATS and provides preliminary test results.
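Note that the 3-sigma metric defined above is a percentile of the accumulated errors, not three times a standard deviation; the two coincide only for zero-mean Gaussian errors. A one-line sketch makes the definition concrete:

```python
import numpy as np

def three_sigma_metric(errors):
    """The '3-sigma' INR metric as defined above: the 99.73rd percentile of
    the absolute errors accumulated over the evaluation period."""
    return float(np.percentile(np.abs(errors), 99.73))
```

For zero-mean Gaussian errors with standard deviation sigma, this percentile lands at approximately 3*sigma, which motivates the name.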
Fuzzy logic particle tracking velocimetry
NASA Technical Reports Server (NTRS)
Wernet, Mark P.
1993-01-01
Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques which typically require a sequence (greater than 2) of image frames for accurately tracking particles. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each, results in more than 200 velocity vectors in under 8 seconds of processing time.
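A two-frame matching sketch in the spirit of the approach (the triangular membership function and single displacement rule are simplifications of the paper's rule base, and the names are ours):

```python
import numpy as np

def triangular(x, center, width):
    """Triangular fuzzy membership function, 1 at center, 0 beyond width."""
    return np.maximum(1.0 - np.abs(x - center) / width, 0.0)

def match_particles(p1, p2, expected_disp, width=3.0):
    """Score every frame-1/frame-2 pairing by how well its displacement
    agrees with the locally expected displacement (fuzzy membership), then
    let each frame-1 particle take its highest-scoring candidate."""
    matches = []
    for i, p in enumerate(p1):
        disp = p2 - p                                   # candidate displacements
        score = triangular(np.linalg.norm(disp - expected_disp, axis=1), 0.0, width)
        j = int(np.argmax(score))
        if score[j] > 0:
            matches.append((i, j))
    return matches
```

With only two frames, the membership score substitutes for the multi-frame track continuity that traditional PTV relies on.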
Larkin, J D; Publicover, N G; Sutko, J L
2011-01-01
In photon event distribution sampling, an image formation technique for scanning microscopes, the maximum likelihood position of origin of each detected photon is acquired as a data set rather than binning photons in pixels. Subsequently, an intensity-related probability density function describing the uncertainty associated with the photon position measurement is applied to each position and individual photon intensity distributions are summed to form an image. Compared to pixel-based images, photon event distribution sampling images exhibit increased signal-to-noise and comparable spatial resolution. Photon event distribution sampling is superior to pixel-based image formation in recognizing the presence of structured (non-random) photon distributions at low photon counts and permits use of non-raster scanning patterns. A photon event distribution sampling based method for localizing single particles derived from a multi-variate normal distribution is more precise than statistical (Gaussian) fitting to pixel-based images. Using the multi-variate normal distribution method, non-raster scanning and a typical confocal microscope, localizations with 8 nm precision were achieved at 10 ms sampling rates with acquisition of ~200 photons per frame. Single nanometre precision was obtained with a greater number of photons per frame. In summary, photon event distribution sampling provides an efficient way to form images when low numbers of photons are involved and permits particle tracking with confocal point-scanning microscopes with nanometre precision deep within specimens. © 2010 The Authors Journal of Microscopy © 2010 The Royal Microscopical Society.
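The contrast with pixel binning can be sketched as follows (hypothetical function names; the intensity-related probability density is taken to be an isotropic Gaussian, a simplification of the paper's PDF):

```python
import numpy as np

def peds_image(photon_xy, sigma, shape):
    """Photon event distribution sampling sketch: instead of binning photons
    into pixels, each detected photon contributes a Gaussian probability
    density centered on its measured position; the image is their sum."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    img = np.zeros(shape)
    for px, py in photon_xy:
        img += np.exp(-((x - px) ** 2 + (y - py) ** 2) / (2.0 * sigma ** 2))
    return img

def localize(photon_xy):
    """Multivariate-normal localization: for an isotropic Gaussian model the
    maximum-likelihood particle position is the mean of the photon positions,
    with no pixel grid or Gaussian image fit involved."""
    return np.asarray(photon_xy).mean(axis=0)
```

Localization precision then scales as sigma/sqrt(N) in the photon count N, consistent with nanometre precision at a few hundred photons per frame.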
NASA Astrophysics Data System (ADS)
Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi
2011-03-01
Nano-biophotonics applications will benefit from new fluorescence microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes photon detectors toward the single-photon counting regime and camera acquisition systems toward real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper takes a different approach from Electron Multiplied CCD (EMCCD) technology and tries to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to meet this challenge, thanks to the linear gain provided by the accelerating high voltage of the photocathode, the ultra-fast frame rates possible with CMOS sensors, and single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.
1.0 T open-configuration magnetic resonance-guided microwave ablation of pig livers in real time
Dong, Jun; Zhang, Liang; Li, Wang; Mao, Siyue; Wang, Yiqi; Wang, Deling; Shen, Lujun; Dong, Annan; Wu, Peihong
2015-01-01
The current fastest frame rate for a single image slice in MR-guided ablation is 1.3 seconds, which means imaging is delayed relative to the average human reaction time of 0.33 seconds. This delay greatly limits the accuracy of puncture and ablation, and can result in puncture injury or incomplete ablation. To overcome delayed imaging and obtain real-time imaging, this study was performed in the livers of 10 Wuzhishan pigs using a 1.0-T whole-body open-configuration MR scanner. A respiratory-triggered liver matrix array was explored to guide and monitor microwave ablation in real time. We successfully performed the entire ablation procedure under real-time MR guidance at 0.202 s per image slice, the fastest frame rate for a single image slice. The puncture time ranged from 3 min to 23 min. For the pigs, the mean puncture time was shortened to 4.75 minutes and the mean ablation time was 11.25 minutes at a power of 70 W. The mean ablation zone length and width were 4.62 ± 0.24 cm and 2.64 ± 0.13 cm, respectively. No complications or ablation-related deaths were observed during or after ablation. In the current study, MR guided microwave ablation in real time, as ultrasound does, showing great potential for the treatment of liver tumors. PMID:26315365
NASA Astrophysics Data System (ADS)
Jaanimagi, Paul A.
1992-01-01
This volume presents papers grouped under the topics of advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for the ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electro-optical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.
Passive autonomous infrared sensor technology
NASA Astrophysics Data System (ADS)
Sadjadi, Firooz
1987-10-01
This study was conducted in response to the DoD's need to establish an understanding of algorithm modules for passive infrared sensors and seekers, and to establish a standardized, systematic procedure for applying this understanding to DoD applications. We quantified the performance of Honeywell's Background Adaptive Convexity Operator Region Extractor (BACORE) detection and segmentation modules as functions of a set of image metrics for both single-frame and multi-frame processing. We established an understanding of the behavior of BACORE's internal parameters. We characterized several sets of stationary and sequential imagery and extracted TIR squared, TBIR squared, ESR, and range for each target. We generated a set of performance models for multi-frame BACORE processing that could be used to predict the behavior of BACORE in image-metric space. A similar study was conducted for another of Honeywell's segmentors, the Texture Boundary Locator (TBL), and its performance was quantified. Finally, TBL and BACORE were compared on the same database with the same number of frames.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras now offer several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach instead uses sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for the camera. As one application, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Andrew; Kovarik, Libor; Abellan, Patricia
One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive Sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since that publication, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7].
To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions. Figure 1 highlights the results from the Pd nanoparticle experiment. On the left, 10 frames are reconstructed from a single coded frame; the original frames are shown for comparison. On the right, a selection of three frames is shown from reconstructions at compression levels of 10, 20, and 30. The reconstructions, which are not post-processed, are true to the original and degrade in a straightforward manner. The final choice of compression level will obviously depend on both the temporal and spatial resolution required for a specific imaging task, but the results indicate that an increase in speed of better than an order of magnitude should be possible for all experiments. References: [1] P Llull, X Liao, X Yuan et al. Optics express 21(9), (2013), p. 10526. [2] J Yang, X Yuan, X Liao et al. Image Processing, IEEE Trans 23(11), (2014), p. 4863. [3] X Yuan, J Yang, P Llull et al. In ICIP 2013 (IEEE), p. 14. [4] X Yuan, P Llull, X Liao et al. In CVPR 2014. p. 3318. [5] EJ Candès, J Romberg and T Tao. Information Theory, IEEE Trans 52(2), (2006), p. 489. [6] P Binev, W Dahmen, R DeVore et al. In Modeling Nanoscale Imaging in Electron Microscopy, eds. T Vogt, W Dahmen and P Binev (Springer US), Nanostructure Science and Technology (2012). p. 73. [7] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), p. 41.
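The coded-aperture acquisition model described above can be sketched in a few lines: each sub-frame is modulated by a binary aperture code and all sub-frames integrate into one camera readout. The per-pixel "recovery" shown here is only a trivial baseline for illustration; the paper inverts this model with statistical compressive-sensing reconstruction.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 10, 8, 8                       # ten sub-frames coded into one readout
video = rng.random((T, H, W))            # the dynamic scene (sub-frames)
masks = rng.integers(0, 2, (T, H, W))    # per-sub-frame binary aperture codes

# Acquisition: each sub-frame is modulated by its code and all ten are
# integrated into a single camera frame during one exposure.
coded_frame = (masks * video).sum(axis=0)

# Recovery baseline (illustration only): per-pixel normalization.
est = coded_frame[None] * masks / np.maximum(masks.sum(axis=0), 1)
```

The compression factor here is T = 10: one readout stands in for ten frames, which is the "order of magnitude" speed-up the abstract refers to.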
The research of multi-frame target recognition based on laser active imaging
NASA Astrophysics Data System (ADS)
Wang, Can-jin; Sun, Tao; Wang, Tin-feng; Chen, Juan
2013-09-01
Laser active imaging is suited to conditions such as no temperature difference between target and background, pitch-black night, and poor visibility. It can also detect faint targets at long range or small targets in deep space, with the advantages of high definition and good contrast. In short, it is largely immune to the environment. However, because of long distances, limited laser energy, and atmospheric backscatter, it is impossible to illuminate the whole scene at once: the target in any single frame is unevenly or only partly illuminated, which makes recognition more difficult. In addition, the speckle noise common in laser active imaging blurs the images. In this paper we study laser active imaging and propose a new target recognition method based on multi-frame images. First, multiple laser pulses are used to obtain sub-images of different parts of the scene. A denoising method combining homomorphic filtering with wavelet-domain SURE is used to suppress speckle noise, and blind deconvolution is introduced to obtain low-noise, sharp sub-images. These sub-images are then registered and stitched into a complete, uniformly illuminated scene image. After that, a target recognition method based on contour moments is applied: the Canny operator extracts contours, seven invariant Hu moments are calculated for each contour to generate feature vectors, and the feature vectors are fed into a BP neural network with two hidden layers for classification. Experimental results indicate that the proposed algorithm achieves a high recognition rate and satisfactory real-time performance for laser active imaging.
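The feature step above can be made concrete. Below is a sketch of the seven Hu invariant moments computed with NumPy from a binary region image (the paper computes them per extracted contour, which is equivalent for a filled region); central moments give translation invariance and the m00 normalization gives scale invariance.

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a binary/grayscale region image,
    built from scale-normalized central moments."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00
    def eta(p, q):  # normalized central moment
        return ((x - xc)**p * (y - yc)**q * img).sum() / m00**(1 + (p + q) / 2)
    n = {(p, q): eta(p, q) for p in range(4) for q in range(4) if 2 <= p + q <= 3}
    h1 = n[2, 0] + n[0, 2]
    h2 = (n[2, 0] - n[0, 2])**2 + 4 * n[1, 1]**2
    h3 = (n[3, 0] - 3 * n[1, 2])**2 + (3 * n[2, 1] - n[0, 3])**2
    h4 = (n[3, 0] + n[1, 2])**2 + (n[2, 1] + n[0, 3])**2
    h5 = ((n[3, 0] - 3 * n[1, 2]) * (n[3, 0] + n[1, 2])
          * ((n[3, 0] + n[1, 2])**2 - 3 * (n[2, 1] + n[0, 3])**2)
          + (3 * n[2, 1] - n[0, 3]) * (n[2, 1] + n[0, 3])
          * (3 * (n[3, 0] + n[1, 2])**2 - (n[2, 1] + n[0, 3])**2))
    h6 = ((n[2, 0] - n[0, 2]) * ((n[3, 0] + n[1, 2])**2 - (n[2, 1] + n[0, 3])**2)
          + 4 * n[1, 1] * (n[3, 0] + n[1, 2]) * (n[2, 1] + n[0, 3]))
    h7 = ((3 * n[2, 1] - n[0, 3]) * (n[3, 0] + n[1, 2])
          * ((n[3, 0] + n[1, 2])**2 - 3 * (n[2, 1] + n[0, 3])**2)
          - (n[3, 0] - 3 * n[1, 2]) * (n[2, 1] + n[0, 3])
          * (3 * (n[3, 0] + n[1, 2])**2 - (n[2, 1] + n[0, 3])**2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

The resulting 7-vector is what would be fed to the BP classifier.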
Specialized CCDs for high-frame-rate visible imaging and UV imaging applications
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Taylor, Gordon C.; Shallcross, Frank V.; Tower, John R.; Lawler, William B.; Harrison, Lorna J.; Socker, Dennis G.; Marchywka, Mike
1993-11-01
This paper reports recent progress by the authors in two distinct charge coupled device (CCD) technology areas. The first technology area is high frame rate, multi-port, frame transfer imagers. A 16-port, 512 X 512, split frame transfer imager and a 32-port, 1024 X 1024, split frame transfer imager are described. The thinned, backside illuminated devices feature on-chip correlated double sampling, buried blooming drains, and a room temperature dark current of less than 50 pA/cm2, without surface accumulation. The second technology area is vacuum ultraviolet (UV) frame transfer imagers. A developmental 1024 X 640 frame transfer imager with 20% quantum efficiency at 140 nm is described. The device is fabricated in a p-channel CCD process, thinned for backside illumination, and utilizes special packaging to achieve stable UV response.
Single-shot digital holography by use of the fractional Talbot effect.
Martínez-León, Lluís; Araiza-E, María; Javidi, Bahram; Andrés, Pedro; Climent, Vicent; Lancis, Jesús; Tajahuerce, Enrique
2009-07-20
We present a method for recording in-line single-shot digital holograms based on the fractional Talbot effect. In our system, an image sensor records the interference between the light field scattered by the object and a properly codified parallel reference beam. A simple binary two-dimensional periodic grating is used to codify the reference beam generating a periodic three-step phase distribution over the sensor plane by fractional Talbot effect. This provides a method to perform single-shot phase-shifting interferometry at frame rates only limited by the sensor capabilities. Our technique is well adapted for dynamic wavefront sensing applications. Images of the object are digitally reconstructed from the digital hologram. Both computer simulations and experimental results are presented.
Encrypting Digital Camera with Automatic Encryption Key Deletion
NASA Technical Reports Server (NTRS)
Oakley, Ernest C. (Inventor)
2007-01-01
A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
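The patent's core idea, using the previously recorded frame as the encryption key for the current frame, can be sketched in miniature. XOR is used here as a stand-in cipher for illustration; the patent text above does not tie the scheme to a specific algorithm.

```python
import numpy as np

def encrypt_frame(frame, key_frame):
    """Encrypt the current frame using the previously recorded frame as
    the key (XOR chosen here only as an illustrative cipher)."""
    return np.bitwise_xor(frame, key_frame)

def decrypt_frame(cipher, key_frame):
    # XOR is its own inverse, so decryption reuses the same key frame
    return np.bitwise_xor(cipher, key_frame)
```

Overwriting the stored key frame with the ciphertext, as the patent describes, is what makes the key self-deleting.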
All-passive pixel super-resolution of time-stretch imaging
Chan, Antony C. S.; Ng, Ho-Cheung; Bogaraju, Sharat C. V.; So, Hayden K. H.; Lam, Edmund Y.; Tsia, Kevin K.
2017-01-01
Based on image encoding in a serial-temporal format, optical time-stretch imaging entails a stringent requirement for a state-of-the-art fast data acquisition unit in order to preserve high image resolution at an ultrahigh frame rate, hampering the widespread utility of the technology. Here, we propose a pixel super-resolution (pixel-SR) technique tailored for time-stretch imaging that preserves pixel resolution at a relaxed sampling rate. It harnesses the subpixel shifts between image frames inherently introduced by asynchronous digital sampling of the continuous time-stretch imaging process. Precise pixel registration is thus accomplished without any active opto-mechanical subpixel-shift control or other additional hardware. We present an experimental pixel-SR image reconstruction pipeline that restores high-resolution time-stretch images of microparticles and biological cells (phytoplankton) at a relaxed sampling rate (≈2-5 GSa/s), more than four times lower than the originally required readout rate (20 GSa/s), and is thus effective for high-throughput, label-free, morphology-based cellular classification down to single-cell precision. Upon integration with high-throughput image processing technology, this pixel-SR time-stretch imaging technique represents a cost-effective and practical solution for large-scale cell-based phenotypic screening in biomedical diagnosis and machine vision for quality control in manufacturing. PMID:28303936
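The registration-free idea above reduces, in one dimension, to interleaving several undersampled acquisitions whose sample grids sit at known subpixel offsets. A minimal sketch, assuming the offsets are already known (in the time-stretch system they arise for free from asynchronous sampling):

```python
import numpy as np

def interleave(streams, offsets, factor):
    """Pixel-SR in miniature: place each low-rate sample stream at its
    subpixel offset on a grid `factor` times finer, recovering the
    high-rate line scan without extra hardware."""
    n = len(streams[0]) * factor
    out = np.zeros(n)
    for s, o in zip(streams, offsets):
        out[o::factor] = s
    return out

hi = np.sin(np.linspace(0.0, 6.28, 40))    # "true" high-rate line scan
streams = [hi[o::4] for o in range(4)]     # four 4x-undersampled passes
rec = interleave(streams, range(4), 4)     # registered and interleaved
```

This mirrors the paper's ratio of a roughly 4x-relaxed sampling rate relative to the required readout rate.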
Benioff, Paul
2009-01-01
This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems, as they are both mathematical and physical systems. As mathematical systems they represent numbers. As physical systems, in each frame the strings have a discrete Schrodinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid system states, is discussed. Representations and images of other physical systems in the different frames are also described.
Fast Fourier single-pixel imaging via binary illumination.
Zhang, Zibang; Wang, Xueying; Zheng, Guoan; Zhong, Jingang
2017-09-20
Fourier single-pixel imaging (FSI) employs Fourier basis patterns for encoding spatial information and is capable of reconstructing high-quality two-dimensional and three-dimensional images. Fourier-domain sparsity in natural scenes allows FSI to recover sharp images from undersampled data. The original FSI demonstration, however, requires grayscale Fourier basis patterns for illumination. This requirement imposes a limitation on the imaging speed as digital micro-mirror devices (DMDs) generate grayscale patterns at a low refreshing rate. In this paper, we report a new strategy to increase the speed of FSI by two orders of magnitude. In this strategy, we binarize the Fourier basis patterns based on upsampling and error diffusion dithering. We demonstrate a 20,000 Hz projection rate using a DMD and capture 256-by-256-pixel dynamic scenes at a speed of 10 frames per second. The reported technique substantially accelerates image acquisition speed of FSI. It may find broad imaging applications at wavebands that are not accessible using conventional two-dimensional image sensors.
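The binarization strategy described above (upsample, then error-diffusion dither) can be sketched as follows; Floyd-Steinberg weights are assumed here as the error-diffusion kernel, and the fringe frequency and upsampling factor are illustrative values.

```python
import numpy as np

def error_diffusion_binarize(pattern, upsample=2):
    """Binarize a grayscale Fourier basis pattern by nearest-neighbour
    upsampling followed by Floyd-Steinberg error diffusion, so a DMD can
    display it at its fast binary refresh rate."""
    p = np.kron(pattern, np.ones((upsample, upsample)))  # upsample
    out = np.zeros_like(p)
    h, w = p.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = 1.0 if p[i, j] >= 0.5 else 0.0
            err = p[i, j] - out[i, j]          # diffuse quantization error
            if j + 1 < w:              p[i, j + 1]     += err * 7 / 16
            if i + 1 < h and j > 0:    p[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h:              p[i + 1, j]     += err * 5 / 16
            if i + 1 < h and j + 1 < w: p[i + 1, j + 1] += err * 1 / 16
    return out

# Grayscale Fourier fringe: 0.5 + 0.5*cos(2*pi*f*x)
x, y = np.meshgrid(np.arange(64), np.arange(64))
gray = 0.5 + 0.5 * np.cos(2 * np.pi * 3 * x / 64)
binary = error_diffusion_binarize(gray)
```

Error diffusion preserves the local mean of the pattern, which is why the binary pattern still projects approximately the intended Fourier basis after optical low-pass blurring.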
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Changhui; Wei, Kai
2008-07-01
Adaptive optics can only partially compensate for image blur caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by a frame selection technique, and multi-frame blind deconvolution is then performed. No prior knowledge is required except for the positivity constraint in the blind deconvolution. Using multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
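The frame-selection step can be sketched simply. The abstract does not specify its selection metric; normalized image energy, a standard sharpness measure in speckle and lucky imaging, is assumed here.

```python
import numpy as np

def select_frames(frames, keep_fraction=0.3):
    """Rank AO closed-loop frames by a sharpness score (normalized image
    energy: sum of squares over squared total flux, an assumed metric)
    and keep the best fraction for multi-frame blind deconvolution."""
    scores = [float((f.astype(float)**2).sum() / f.sum()**2) for f in frames]
    order = np.argsort(scores)[::-1]          # sharpest first
    k = max(1, int(len(frames) * keep_fraction))
    return [frames[i] for i in order[:k]]
```

A sharp frame concentrates flux in few pixels, raising the score; a blurred frame spreads the same flux and scores lower.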
Speidel, Michael A; Tomkowiak, Michael T; Raval, Amish N; Dunkerley, David A P; Slagowski, Jordan M; Kahn, Paul; Ku, Jamie; Funk, Tobias
Scanning-beam digital x-ray (SBDX) is an inverse-geometry fluoroscopy system for low-dose cardiac imaging. The use of a narrow scanned x-ray beam in SBDX reduces detected x-ray scatter and improves dose efficiency; however, the tight beam collimation also limits the maximum achievable x-ray fluence. To increase the fluence available for imaging, we have constructed a new SBDX prototype with a wider x-ray beam, larger-area detector, and new real-time image reconstructor. Imaging is performed with a scanning source that generates 40,328 narrow overlapping projections from 71 × 71 focal spot positions in every 1/15 s scan period. A high-speed 2-mm-thick CdTe photon-counting detector was constructed with 320 × 160 elements and a 10.6 cm × 5.3 cm area (full readout every 1.28 μs), providing an 86% increase in area over the previous SBDX prototype. A matching multihole collimator was fabricated from layers of tungsten, brass, and lead, and a multi-GPU reconstructor was assembled to reconstruct the stream of captured detector images into full field-of-view images in real time. Thirty-two tomosynthetic planes spaced by 5 mm, plus a multiplane composite image, are produced for each scan frame. Noise-equivalent quanta on the new SBDX prototype measured 63%-71% higher than on the previous prototype. The x-ray scatter fraction was 3.9%-7.8% when imaging 23.3-32.6 cm acrylic phantoms, versus 2.3%-4.2% with the previous prototype. Coronary angiographic imaging at 15 frames/s was successfully performed on the new SBDX prototype, with live display of either a multiplane composite or a single-plane image.
A higher-speed compressive sensing camera through multi-diode design
NASA Astrophysics Data System (ADS)
Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore
2013-05-01
Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
Single-Shot Optical Sectioning Using Two-Color Probes in HiLo Fluorescence Microscopy
Muro, Eleonora; Vermeulen, Pierre; Ioannou, Andriani; Skourides, Paris; Dubertret, Benoit; Fragola, Alexandra; Loriette, Vincent
2011-01-01
We describe a wide-field fluorescence microscope setup that combines the HiLo microscopy technique with a two-color fluorescent probe. It allows single-shot fluorescence optical sectioning of thick, moving biological samples, which are illuminated simultaneously with a flat and a structured pattern at two different wavelengths. The homogeneous and structured fluorescence images are spectrally separated at detection and combined as in the HiLo microscopy technique. We present optically sectioned full-field images of Xenopus laevis embryos acquired at a 25 images/s frame rate. PMID:21641327
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-27
... Frames and Image Display Devices and Components Thereof; Notice of Institution of Investigation... United States after importation of certain digital photo frames and image display devices and components... certain digital photo frames and image display devices and components thereof that infringe one or more of...
photPARTY: Python Automated Square-Aperture Photometry
NASA Astrophysics Data System (ADS)
Symons, Teresa A.
As CCDs have drastically increased the amount of information recorded per frame, so too have they increased the time and effort needed to sift through the data. For observations of a single star, information from millions of pixels needs to be distilled into one number: the magnitude. Various computer systems have been used to streamline this process over the years. The CCDPhot photometer, in use at the Kitt Peak 0.9-m telescope in the 1990s, allowed user settings and provided real-time magnitudes during observation of single stars. It is this level of speed and convenience that inspired the development of the Python-based software analysis system photPARTY, which can quickly and efficiently produce magnitudes for a set of single-star or uncrowded-field CCD frames. Seeking to remove the need for manual interaction after initial settings for a group of images, photPARTY automatically locates stars, subtracts the background, and performs square-aperture photometry. Rather than being a package of available functions, it is essentially a self-contained, one-click analysis system, with the capability to process several hundred frames in just a couple of minutes. Results of comparisons with existing systems such as IRAF are presented.
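The pipeline's photometry step can be sketched in the spirit of the description above: estimate the background, sum background-subtracted counts in a square aperture around the star, and convert to a magnitude. The zero point `zp`, the median background estimator, and the aperture half-width are assumed values, not photPARTY's actual defaults.

```python
import numpy as np

def square_aperture_mag(img, cx, cy, half=3, zp=25.0):
    """Square-aperture photometry: frame-median background subtraction,
    a (2*half+1)-pixel square aperture, and a magnitude via an assumed
    zero point."""
    bkg = np.median(img)
    box = img[cy - half:cy + half + 1, cx - half:cx + half + 1]
    flux = (box - bkg).sum()
    return zp - 2.5 * np.log10(flux)
```

Because every step is array arithmetic, the same code vectorizes naturally over hundreds of frames, which is where the "couple of minutes" throughput comes from.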
Interactive CT-Video Registration for the Continuous Guidance of Bronchoscopy
Merritt, Scott A.; Khare, Rahul; Bascom, Rebecca
2014-01-01
Bronchoscopy is a major step in lung cancer staging. To perform bronchoscopy, the physician uses a procedure plan, derived from a patient’s 3D computed-tomography (CT) chest scan, to navigate the bronchoscope through the lung airways. Unfortunately, physicians vary greatly in their ability to perform bronchoscopy. As a result, image-guided bronchoscopy systems, drawing upon the concept of CT-based virtual bronchoscopy (VB), have been proposed. These systems attempt to register the bronchoscope’s live position within the chest to a CT-based virtual chest space. Recent methods, which register the bronchoscopic video to CT-based endoluminal airway renderings, show promise but do not enable continuous real-time guidance. We present a CT-video registration method inspired by computer-vision innovations in the fields of image alignment and image-based rendering. In particular, motivated by the Lucas–Kanade algorithm, we propose an inverse-compositional framework built around a gradient-based optimization procedure. We next propose an implementation of the framework suitable for image-guided bronchoscopy. Laboratory tests, involving both single frames and continuous video sequences, demonstrate the robustness and accuracy of the method. Benchmark timing tests indicate that the method can run continuously at 300 frames/s, well beyond the real-time bronchoscopic video rate of 30 frames/s. This compares extremely favorably to the ≥1 s/frame speeds of other methods and indicates the method’s potential for real-time continuous registration. A human phantom study confirms the method’s efficacy for real-time guidance in a controlled setting, and, hence, points the way toward the first interactive CT-video registration approach for image-guided bronchoscopy. Along this line, we demonstrate the method’s efficacy in a complete guidance system by presenting a clinical study involving lung cancer patients. PMID:23508260
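The inverse-compositional Lucas-Kanade framework the method builds on can be shown in its simplest form: a pure-translation warp, where the template gradients and Hessian are precomputed once and each iteration costs only a warp and two dot products. This is a minimal sketch of the cited framework, not the paper's full CT-video registration (which optimizes over richer warps against rendered endoluminal views).

```python
import numpy as np

def bilinear_shift(img, ty, tx):
    """Sample img at (y+ty, x+tx) with bilinear interpolation, clamped."""
    h, w = img.shape
    y = np.clip(np.arange(h)[:, None] + ty, 0, h - 1)
    x = np.clip(np.arange(w)[None, :] + tx, 0, w - 1)
    y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def align_translation_ic(T, I, iters=100):
    """Inverse-compositional Lucas-Kanade, translation-only warp.
    Returns p = (ty, tx) such that I sampled at x + p matches T."""
    gy, gx = np.gradient(T.astype(float))
    # Translation Jacobian is identity: steepest-descent images are the
    # template gradients, and the Hessian is constant across iterations.
    H = np.array([[(gy * gy).sum(), (gy * gx).sum()],
                  [(gx * gy).sum(), (gx * gx).sum()]])
    Hinv = np.linalg.inv(H)
    p = np.zeros(2)
    for _ in range(iters):
        e = bilinear_shift(I, *p) - T
        dp = Hinv @ np.array([(gy * e).sum(), (gx * e).sum()])
        p -= dp   # inverse composition: for translations, subtract
        if np.abs(dp).max() < 1e-6:
            break
    return p
```

Precomputing the Hessian is exactly what makes the inverse-compositional variant fast enough for the 300 frames/s rates reported above.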
A new method of small target detection based on neural network
NASA Astrophysics Data System (ADS)
Hu, Jing; Hu, Yongli; Lu, Xinxin
2018-02-01
The detection and tracking of moving dim targets in infrared imagery has been a research hotspot for many years. The target in each frame occupies only a few pixels, without any shape or structure information. Moreover, an infrared small target is often submerged in a complicated background with a low signal-to-clutter ratio, making detection very difficult. Different backgrounds exhibit different statistical properties, which further complicates detection. If the threshold segmentation is not reasonable, the final detection may contain more noise points, which is unfavorable for recovering the target's trajectory. Single-frame detection may fail to find the desired target and can cause a high false alarm rate. We believe that combining spatial detection of suspicious targets in each frame with temporal association for target tracking increases the reliability of tracking dim targets. The detection of dim targets is divided into two parts. In the first part, we adopt a bilateral filtering method for background suppression; after threshold segmentation, the suspicious targets in each frame are extracted. We then use an LSTM (long short-term memory) neural network to predict the target's coordinates in the next frame. This is a new method based on the movement characteristics of the target in image sequences, which can model the relationship between past and future target positions. Simulation results demonstrate that the proposed algorithm can effectively predict the trajectory of a moving small target and works efficiently and robustly with a low false alarm rate.
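The background-suppression stage can be sketched as follows. A bilateral-style filter estimates the smooth background, and subtracting it leaves small bright targets in the residual. The centre pixel is excluded from its own estimate so an isolated point target is not absorbed into the background (a common variant for small-target detection); all parameter values are assumptions, not the paper's.

```python
import numpy as np

def bilateral_suppress(img, radius=2, sigma_s=1.5, sigma_r=0.1):
    """Bilateral background estimation (centre pixel excluded) followed
    by subtraction; the residual map highlights small bright targets."""
    imgf = np.asarray(img, dtype=float)
    h, w = imgf.shape
    pad = np.pad(imgf, radius, mode='edge')
    acc = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue  # skip centre so isolated targets are suppressed
            nb = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            wgt = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s**2)
                         - (nb - imgf)**2 / (2 * sigma_r**2))
            acc += wgt * nb
            norm += wgt
    return imgf - acc / norm   # residual: candidate small targets
```

Threshold segmentation of this residual then yields the per-frame suspicious targets that feed the temporal-association stage.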
NO PLIF imaging in the CUBRC 48-inch shock tunnel
NASA Astrophysics Data System (ADS)
Jiang, N.; Bruzzese, J.; Patton, R.; Sutton, J.; Yentsch, R.; Gaitonde, D. V.; Lempert, W. R.; Miller, J. D.; Meyer, T. R.; Parker, R.; Wadham, T.; Holden, M.; Danehy, P. M.
2012-12-01
Nitric oxide planar laser-induced fluorescence (NO PLIF) imaging is demonstrated at a 10-kHz repetition rate in the Calspan University at Buffalo Research Center's (CUBRC) 48-inch Mach 9 hypervelocity shock tunnel using a pulse burst laser-based high frame rate imaging system. Sequences of up to ten images are obtained internal to a supersonic combustor model, located within the shock tunnel, during a single ~10-millisecond duration run of the ground test facility. Comparison with a CFD simulation shows good overall qualitative agreement in the jet penetration and spreading observed with an average of forty individual PLIF images obtained during several facility runs.
NASA Astrophysics Data System (ADS)
Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao
2013-12-01
A popular approach to medical image reconstruction is sparsity regularization, which assumes the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is a widely used system of this kind, owing to its capability for sparsely approximating piecewise-smooth functions such as medical images. However, a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries specific to the structures of the targeted images have demonstrated superiority in image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs a task-specific adaptive wavelet tight frame, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. A proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves reconstructed CT image quality over the traditional tight frame method.
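The l1-regularized minimization at the heart of the scheme is the classic sparse-recovery problem min_x 0.5||Ax - b||^2 + lam*||x||_1, and ISTA (iterative soft thresholding) is its simplest solver. This is a generic synthesis-form sketch, not the paper's exact analysis-based tight-frame formulation.

```python
import numpy as np

def ista_l1(A, b, lam=0.1, step=None, iters=200):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1: a gradient step on
    the quadratic term followed by soft thresholding."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2)**2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * A.T @ (A @ x - b)                     # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0)  # soft threshold
    return x
```

In the paper's setting, A would encode the forward (CT projection) operator composed with the learned tight-frame synthesis, with the adaptive frame re-learned from intermediate reconstructions.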
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient (R = 0.72) studies. Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively.
Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
3-D ultrasound volume reconstruction using the direct frame interpolation method.
Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin
2010-11-01
A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. 
The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
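The forward idea, filling the volume with original frames plus directly interpolated intermediate frames, can be sketched in a few lines. This is a hypothetical first-order (linear) illustration, not the authors' implementation; their method also evaluates higher interpolation orders and exploits frame masking information:

```python
import numpy as np

def direct_frame_interpolation(frames, n_between):
    """Insert n_between linearly interpolated frames between each pair of
    adjacent B-mode frames (first-order DFI sketch)."""
    out = []
    for f0, f1 in zip(frames[:-1], frames[1:]):
        out.append(f0)
        for k in range(1, n_between + 1):
            w = k / (n_between + 1)
            out.append((1 - w) * f0 + w * f1)  # weighted blend of neighbours
    out.append(frames[-1])
    return out

# Two toy 4x4 "B-mode frames"; three interpolated frames fill the gap.
f_a = np.zeros((4, 4))
f_b = np.ones((4, 4))
dense = direct_frame_interpolation([f_a, f_b], n_between=3)
print(len(dense))  # → 5 frames: 2 originals + 3 interpolated
```

Because the volume's extent is defined by the original and constructed frames together, no subsequent hole-filling pass over empty voxels is needed.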
Wolthaus, J W H; Sonke, J J; van Herk, M; Damen, E M F
2008-09-01
Lower lobe lung tumors move with amplitudes of up to 2 cm due to respiration. To reduce respiration imaging artifacts in planning CT scans, 4D imaging techniques are used. Currently, we use a single (midventilation) frame of the 4D data set for clinical delineation of structures and radiotherapy planning. A single frame, however, often contains artifacts due to breathing irregularities, and is noisier than a conventional CT scan since the exposure per frame is lower. Moreover, the tumor may be displaced from the mean tumor position due to hysteresis. The aim of this work is to develop a framework for the acquisition of a good quality scan representing all scanned anatomy in the mean position by averaging transformed (deformed) CT frames, i.e., canceling out motion. A nonrigid registration method is necessary since motion varies over the lung. 4D and inspiration breath-hold (BH) CT scans were acquired for 13 patients. An iterative multiscale motion estimation technique was applied to the 4D CT scan, similar to optical flow but using image phase (gray-value transitions from bright to dark and vice versa) instead. From the derived 4D deformation vector field (DVF), the local mean position in the respiratory cycle was computed and the 4D DVF was modified to deform all structures of the original 4D CT scan to this mean position. A 3D midposition (MidP) CT scan was then obtained by (arithmetic or median) averaging of the deformed 4D CT scan. Image registration accuracy, tumor shape deviation with respect to the BH CT scan, and noise were determined to evaluate the image fidelity of the MidP CT scan and the performance of the technique. Accuracy of the deformable image registration method used was comparable to established automated locally rigid registration and to manual landmark registration (average difference to both methods < 0.5 mm for all directions) for the tumor region.
From visual assessment, the registration was good for the clearly visible features (e.g., tumor and diaphragm). The shape of the tumor, with respect to that of the BH CT scan, was better represented by the MidP reconstructions than any of the 4D CT frames (including MidV; reduction of "shape differences" was 66%). The MidP scans contained about one-third the noise of individual 4D CT scan frames. We implemented an accurate method to estimate the motion of structures in a 4D CT scan. Subsequently, a novel method to create a midposition CT scan (time-weighted average of the anatomy) for treatment planning with reduced noise and artifacts was introduced. Tumor shape and position in the MidP CT scan represents that of the BH CT scan better than MidV CT scan and, therefore, was found to be appropriate for treatment planning.
Flash trajectory imaging of target 3D motion
NASA Astrophysics Data System (ADS)
Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang
2011-03-01
We present a flash trajectory imaging technique that can directly obtain a target's trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single image frame so that the moving trajectory can be obtained directly. In this paper, we study the algorithm behind flash trajectory imaging and report initial experiments that successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can yield the motion parameters of moving targets.
Imaging Sensor Development for Scattering Atmospheres.
1983-03-01
subtracted output from a CCD imaging detector for a single frame can be written as a sum of shot-noise, thermal-noise, and dark-current shot-noise terms (Eq. 2-22). In addition, the spectral responses of current devices are limited to the visible region and their sensitivities are not very high. Solid state detectors are generally much more sensitive than spatial light modulators, and some (e.g., HgCdTe detectors) can respond up to the 10 μm region. Several
Evaluation of Particle Image Velocimetry Measurement Using Multi-wavelength Illumination
NASA Astrophysics Data System (ADS)
Lai, HC; Chew, TF; Razak, NA
2018-05-01
In past decades, particle image velocimetry (PIV) has been widely used to measure fluid flow, and much research has been done to improve the PIV technique. Many studies have investigated high power light emitting diodes (HPLEDs) as a replacement for the traditional laser illumination system in PIV. As an extension of this work on PIV illumination, two HPLEDs of different wavelengths are introduced as the PIV illumination system. The objective of this research is to use dual-colour LEDs to directly replace the laser so that a single frame can be captured by a normal camera instead of a high speed camera. The dual-colour HPLED PIV system supports a single-frame, double-pulse mode from which the particle velocity vectors can be plotted after correlation. An illumination system was designed, fabricated, and evaluated by measuring water flow in a small tank. The results indicate that HPLEDs offer advantages in terms of cost, safety, and performance, and have high potential to be developed into an alternative PIV illumination source in the near future.
High Dynamic Range Pixel Array Detector for Scanning Transmission Electron Microscopy.
Tate, Mark W; Purohit, Prafull; Chamberlain, Darol; Nguyen, Kayla X; Hovden, Robert; Chang, Celesta S; Deb, Pratiti; Turgut, Emrah; Heron, John T; Schlom, Darrell G; Ralph, Daniel C; Fuchs, Gregory D; Shanks, Katherine S; Philipp, Hugh T; Muller, David A; Gruner, Sol M
2016-02-01
We describe a hybrid pixel array detector (electron microscope pixel array detector, or EMPAD) adapted for use in electron microscope applications, especially as a universal detector for scanning transmission electron microscopy. The 128×128 pixel detector consists of a 500 µm thick silicon diode array bump-bonded pixel-by-pixel to an application-specific integrated circuit. The in-pixel circuitry provides a 1,000,000:1 dynamic range within a single frame, allowing the direct electron beam to be imaged while still maintaining single electron sensitivity. A 1.1 kHz framing rate enables rapid data collection and minimizes sample drift distortions while scanning. By capturing the entire unsaturated diffraction pattern in scanning mode, one can simultaneously capture bright field, dark field, and phase contrast information, as well as being able to analyze the full scattering distribution, allowing true center of mass imaging. The scattering is recorded on an absolute scale, so that information such as local sample thickness can be directly determined. This paper describes the detector architecture, data acquisition system, and preliminary results from experiments with 80-200 keV electron beams.
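Center-of-mass imaging from the unsaturated diffraction patterns reduces, per scan position, to an intensity-weighted centroid of each detector frame. A minimal sketch (the synthetic 16×16 frame is illustrative; real EMPAD frames are 128×128):

```python
import numpy as np

def center_of_mass(frame):
    """Intensity-weighted centroid (row, col) of one diffraction pattern."""
    total = frame.sum()
    rows = np.arange(frame.shape[0])
    cols = np.arange(frame.shape[1])
    return (frame.sum(axis=1) @ rows / total,
            frame.sum(axis=0) @ cols / total)

# Synthetic unsaturated pattern: a bright disc displaced from the centre.
frame = np.zeros((16, 16))
frame[9:12, 6:9] = 1.0  # 3x3 block centred at (10, 7)
r, c = center_of_mass(frame)
print(r, c)  # → 10.0 7.0
```

Repeating this over every probe position in the scan yields the center-of-mass image; the wide dynamic range matters because the bright direct beam and weak scattering must both contribute unsaturated to the centroid.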
NASA Astrophysics Data System (ADS)
Mahmoud, Ahmed M.; Ngan, Peter; Crout, Richard; Mukdadi, Osama M.
2009-02-01
The use of ultrasound in dentistry is still an open, growing area of research. Currently, there is a lack of imaging modalities that can accurately depict minute structures and defects in the jawbone; in particular, 2D radiographic images are unable to detect bony periodontal defects resulting from infection of the periodontium. This study investigates the feasibility of high frequency ultrasound to reconstruct high resolution 3D surface images of the human jawbone. Methods: Dentate and non-dentate mandibles were used in this study. The system employs high frequency single-element focused ultrasound transducers (15-30 MHz) for scanning. Continuous acquisition using a 1 GHz data acquisition card is synchronized with a high precision two-dimensional stage positioning system of ±1 μm resolution for acquiring accurate and quantitative measurements of the mandible in vitro. Radio frequency (RF) signals are acquired laterally 44-45.5 μm apart for each frame. Different frames are reconstructed 500 μm apart for the 3D reconstruction. Signal processing algorithms are applied to the received ultrasound signals for filtering, focusing, and envelope detection before frame reconstruction. Furthermore, an edge detection technique is adopted to detect the bone surface in each frame. Finally, all edges are combined to render a 3D surface image of the jawbone. Major anatomical landmarks on the resultant images were confirmed against the anatomical structures on the mandibles to show the efficacy of the system. Comparisons were also made with conventional 2D radiographs to show the superiority of the ultrasound imaging system in diagnosing small defects in the lateral, axial, and elevation planes. Results: The landmarks on all ultrasound images matched those on the mandible, indicating the efficacy of the system in detecting small structures in human jawbones.
Comparison with conventional 2D radiographic images of the same mandible showed superiority of the 3D ultrasound images in detecting defects in the elevation plane of space. These results suggest that the high frequency ultrasound system shows great potential in providing a non-invasive method to characterize the jawbone and detect periodontal diseases at earlier stages.
NASA Technical Reports Server (NTRS)
Waegell, Mordecai J.; Palacios, David M.
2011-01-01
Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured, which increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
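The measurement step can be illustrated with whole-pixel phase correlation; Jitter_Correct.m goes further and fits the phase of the cross-spectrum to a plane for sub-pixel precision, which this Python sketch (hypothetical `shift_estimate`) omits:

```python
import numpy as np

def shift_estimate(ref, img):
    """Whole-pixel translation of img relative to ref by phase correlation:
    keep only the phase of the cross-power spectrum and locate the peak
    of its inverse transform."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    R /= np.abs(R) + 1e-12                      # phase only
    corr = np.real(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the circular peak position to signed shifts.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((32, 32))
img = np.roll(ref, shift=(3, -5), axis=(0, 1))  # known inter-frame jitter
print(shift_estimate(ref, img))  # → (3, -5)
```

Once each frame's translation is known, correcting the sequence is a matter of shifting every frame back toward the reference.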
NASA Astrophysics Data System (ADS)
Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.
2004-02-01
This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high speed, low lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research, and wind-tunnel-based gas dynamics research.
Massie, Norbert A.; Oster, Yale
1992-01-01
A large effective-aperture, low-cost optical telescope with diffraction-limited resolution enables ground-based observation of near-earth space objects. The telescope has a non-redundant, thinned-aperture array in a center-mount, single-structure space frame. It employs speckle interferometric imaging to achieve diffraction-limited resolution. The signal-to-noise ratio problem is mitigated by moving the wavelength of operation to the near-IR, and the image is sensed by a silicon CCD. The steerable, single-structure array presents a constant pupil. The center-mount, radar-like mount enables low-earth orbit space objects to be tracked and increases the stiffness of the space frame. In the preferred embodiment, the array has elemental telescopes with subapertures of 2.1 m in a circle-of-nine configuration. The telescope array has an effective aperture of 12 m, which provides a diffraction-limited resolution of 0.02 arc seconds. Pathlength matching of the telescope array is maintained by an electro-optical system employing laser metrology. Speckle imaging relaxes the pathlength matching tolerance by one order of magnitude as compared to phased arrays. Many features of the telescope contribute to substantial reduction in costs, including eliminating the conventional protective dome and reducing on-site construction activities. The cost of the telescope scales with the first power of the aperture rather than its third power as in conventional telescopes.
NASA Astrophysics Data System (ADS)
Kura, Sreekanth; Xie, Hongyu; Fu, Buyin; Ayata, Cenk; Boas, David A.; Sakadžić, Sava
2018-06-01
Objective. Resting state functional connectivity (RSFC) allows the study of functional organization in normal and diseased brain by measuring the spontaneous brain activity generated under resting conditions. Intrinsic optical signal imaging (IOSI) based on multiple illumination wavelengths has been used successfully to compute RSFC maps in animal studies. The IOSI setup complexity would be greatly reduced if only a single wavelength can be used to obtain comparable RSFC maps. Approach. We used anesthetized mice and performed various comparisons between the RSFC maps based on single wavelength as well as oxy-, deoxy- and total hemoglobin concentration changes. Main results. The RSFC maps based on IOSI at a single wavelength selected for sensitivity to the blood volume changes are quantitatively comparable to the RSFC maps based on oxy- and total hemoglobin concentration changes obtained by the more complex IOSI setups. Moreover, RSFC maps do not require CCD cameras with very high frame acquisition rates, since our results demonstrate that they can be computed from the data obtained at frame rates as low as 5 Hz. Significance. Our results will have general utility for guiding future RSFC studies based on IOSI and making decisions about the IOSI system designs.
High Contrast Ultrafast Imaging of the Human Heart
Papadacci, Clement; Pernot, Mathieu; Couade, Mathieu; Fink, Mathias; Tanter, Mickael
2014-01-01
Non-invasive ultrafast imaging of the human heart is a major challenge for imaging intrinsic waves such as electromechanical waves, or remotely induced shear waves in elastography techniques. In this paper we propose to perform ultrafast imaging of the heart with an adapted sector size by using diverging waves emitted from a classical transthoracic cardiac phased array probe. As in ultrafast imaging with plane wave coherent compounding, diverging waves can be summed coherently to obtain high-quality images of the entire heart at high frame rate over a full field of view. To image shear wave propagation at high SNR, the field of view can be adapted by changing the angular aperture of the transmitted wave. Backscattered echoes from successive circular wave acquisitions are coherently summed at every location in the image to improve image quality while maintaining very high frame rates. The transmitted diverging waves, angular apertures, and subaperture sizes are tested in simulation, and ultrafast coherent compounding is implemented on a commercial scanner. The improvement in imaging quality is quantified in a phantom and in vivo on the human heart. Imaging shear wave propagation at 2500 frames/s using 5 diverging waves strongly increases the signal-to-noise ratio of the tissue velocity estimates while maintaining a high frame rate. Finally, ultrafast imaging with 1 to 5 diverging waves is used to image the human heart at a frame rate of 900 frames/s over an entire cardiac cycle. Thanks to spatial coherent compounding, a strong improvement in imaging quality is obtained with a small number of transmitted diverging waves at a high frame rate, which allows imaging the propagation of electromechanical and shear waves with good image quality. PMID:24474135
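The compounding principle — coherent summation makes echoes add in phase while uncorrelated noise averages down by roughly √N — can be sketched with toy frames. The 1-D `signal` and additive-noise model below are illustrative stand-ins for beamformed diverging-wave frames, not the authors' scanner pipeline:

```python
import numpy as np

def coherent_compound(frames):
    """Coherently average beamformed frames from successive diverging-wave
    transmits: signal adds in phase, uncorrelated noise averages out."""
    return np.mean(frames, axis=0)

rng = np.random.default_rng(2)
signal = np.sin(np.linspace(0, 8 * np.pi, 256))        # stand-in for tissue echoes
frames = [signal + 0.5 * rng.standard_normal(256) for _ in range(5)]
single = frames[0]
compounded = coherent_compound(frames)
snr = lambda x: np.linalg.norm(signal) / np.linalg.norm(x - signal)
print(snr(compounded) > snr(single))  # 5 transmits raise SNR roughly sqrt(5)-fold
```

The trade-off in the paper follows directly: more transmits per compounded image improve quality but divide the achievable frame rate, hence the 1-to-5 diverging-wave operating points.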
A 176×144 148dB adaptive tone-mapping imager
NASA Astrophysics Data System (ADS)
Vargas-Sierra, S.; Liñán-Cembrano, G.; Rodríguez-Vázquez, A.
2012-03-01
This paper presents a 176x144 (QCIF) HDR image sensor in which visual information is simultaneously captured and adaptively compressed by means of an in-pixel tone mapping scheme. The tone mapping curve (TMC) is calculated from the histogram of a Time Stamp image captured in the previous frame, which serves as a probability indicator of the distribution of illuminations within the present frame. The chip produces 7-bit/pixel images that can map illuminations from 311 μlux to 55.3 klux in a single frame, in such a way that each pixel decides when to stop observing photocurrent integration, with the extreme values captured at 8 s and 2.34 μs, respectively. The pixel size is 33x33 μm2, which includes a 3x3 μm2 Nwell-Psubstrate photodiode and an autozeroing technique for establishing the reset voltage, which cancels most of the offset contributions created by the analog processing circuitry. Dark signal (10.8 mV/s) effects in the final image are attenuated by automatic programming of the DAC top voltage. Measured characteristics are: sensitivity 5.79 V/lux·s, full-well capacity 12.2 ke-, conversion factor 129 e-/DN, and read noise 25 e-. The chip has been designed in the 0.35 μm OPTO technology from Austriamicrosystems (AMS). Due to its focal plane operation, this architecture is especially well suited to implementation in a 3D (vertically stacked) technology using per-pixel TSVs.
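A histogram-driven TMC amounts to allocating the 7-bit output codes according to the cumulative distribution of the previous frame's illumination estimates, so densely populated illumination ranges receive more codes. A toy sketch (the 8-bin histogram and `tone_mapping_curve` are illustrative assumptions; the chip derives its histogram from the Time Stamp image):

```python
import numpy as np

def tone_mapping_curve(prev_hist, n_codes=128):
    """Map illumination bins to 7-bit output codes in proportion to the
    cumulative histogram of the previous frame (histogram equalization)."""
    cdf = np.cumsum(prev_hist).astype(float)
    cdf /= cdf[-1]
    return np.round(cdf * (n_codes - 1)).astype(int)

# Toy histogram: most pixels fall in the darker bins.
hist = np.array([40, 30, 20, 5, 2, 1, 1, 1])
tmc = tone_mapping_curve(hist)
print(tmc)
```

Note how the heavily populated dark bins span wide code ranges while the sparse bright bins are compressed into a few codes — the adaptive compression the abstract describes.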
Motion-compensated compressed sensing for dynamic imaging
NASA Astrophysics Data System (ADS)
Sundaresan, Rajagopalan; Kim, Yookyung; Nadar, Mariappan S.; Bilgin, Ali
2010-08-01
The recently introduced Compressed Sensing (CS) theory explains how sparse or compressible signals can be reconstructed from far fewer samples than what was previously believed possible. The CS theory has attracted significant attention for applications such as Magnetic Resonance Imaging (MRI) where long acquisition times have been problematic. This is especially true for dynamic MRI applications where high spatio-temporal resolution is needed. For example, in cardiac cine MRI, it is desirable to acquire the whole cardiac volume within a single breath-hold in order to avoid artifacts due to respiratory motion. Conventional MRI techniques do not allow reconstruction of high resolution image sequences from such limited amount of data. Vaswani et al. recently proposed an extension of the CS framework to problems with partially known support (i.e. sparsity pattern). In their work, the problem of recursive reconstruction of time sequences of sparse signals was considered. Under the assumption that the support of the signal changes slowly over time, they proposed using the support of the previous frame as the "known" part of the support for the current frame. While this approach works well for image sequences with little or no motion, motion causes significant change in support between adjacent frames. In this paper, we illustrate how motion estimation and compensation techniques can be used to reconstruct more accurate estimates of support for image sequences with substantial motion (such as cardiac MRI). Experimental results using phantoms as well as real MRI data sets illustrate the improved performance of the proposed technique.
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking.
Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
The Multimission Image Processing Laboratory's virtual frame buffer interface
NASA Technical Reports Server (NTRS)
Wolfe, T.
1984-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor supplied interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. Several machine-dependent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
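The idea of a generic frame buffer with a fixed set of characteristics maps naturally onto an abstract interface with one implementation per device. A hypothetical sketch in Python (the MIPL interface itself was a set of FORTRAN subroutines; all class and method names here are invented for illustration):

```python
from abc import ABC, abstractmethod

class VirtualFrameBuffer(ABC):
    """Generic frame buffer: applications code against this interface,
    and a per-device subclass hides each vendor's architecture."""

    @abstractmethod
    def write_image(self, x, y, pixels): ...
    @abstractmethod
    def read_image(self, x, y, width, height): ...
    @abstractmethod
    def set_lut(self, table): ...

class MemoryFrameBuffer(VirtualFrameBuffer):
    """Software device used for testing; real subclasses wrap hardware."""
    def __init__(self, width, height):
        self.pix = [[0] * width for _ in range(height)]
    def write_image(self, x, y, pixels):
        for j, row in enumerate(pixels):
            for i, v in enumerate(row):
                self.pix[y + j][x + i] = v
    def read_image(self, x, y, width, height):
        return [r[x:x + width] for r in self.pix[y:y + height]]
    def set_lut(self, table):
        self.lut = table  # lookup table applied on display in a real device

fb = MemoryFrameBuffer(8, 8)
fb.write_image(2, 3, [[5, 6], [7, 8]])
print(fb.read_image(2, 3, 2, 2))  # → [[5, 6], [7, 8]]
```

Adding a new frame buffer to the system then means writing one subclass, leaving every application program untouched.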
Zhou, Zhuhuang; Wu, Shuicai; Lin, Man-Yen; Fang, Jui; Liu, Hao-Li; Tsui, Po-Hsiang
2018-05-01
In this study, the window-modulated compounding (WMC) technique was integrated into three-dimensional (3D) ultrasound Nakagami imaging to improve the spatial visualization of backscatter statistics. A 3D WMC Nakagami image was produced by summing and averaging a number of 3D Nakagami images (the number of frames denoted as N) formed using sliding cubes with side lengths ranging from 1 to N times the transducer pulse length. To evaluate the performance of the proposed 3D WMC Nakagami imaging method, agar phantoms with scatterer concentrations ranging from 2 to 64 scatterers/mm3 were made, and six stages of fatty liver (zero, one, two, four, six, and eight weeks) were induced in rats by methionine-choline-deficient diets (three rats for each stage, total n = 18). A mechanical scanning system with a 5-MHz focused single-element transducer was used for ultrasound radiofrequency data acquisition. The experimental results showed that 3D WMC Nakagami imaging was able to characterize different scatterer concentrations. Backscatter statistics were visualized with various numbers of frames; N = 5 reduced the estimation error of 3D WMC Nakagami imaging in visualizing the backscatter statistics. Compared with conventional 3D Nakagami imaging, 3D WMC Nakagami imaging improved image smoothness without significant image resolution degradation, and it can thus be used for describing different stages of fatty liver in rats.
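Window-modulated compounding averages parametric maps computed at several window sizes, trading a little resolution for smoothness. A 1-D illustrative sketch under stated assumptions: a moment-based Nakagami shape estimator, Rayleigh speckle (for which m ≈ 1), and a hypothetical `wmc_map` that is a 1-D analogue of the authors' 3-D sliding-cube formulation:

```python
import numpy as np

def nakagami_m(env):
    """Moment-based Nakagami shape estimate: m = E[R^2]^2 / Var(R^2)."""
    r2 = env.astype(float) ** 2
    return r2.mean() ** 2 / r2.var()

def wmc_map(env, pulse_len, n_frames):
    """Average n_frames parametric maps built with sliding windows whose
    lengths are 1..n_frames times the pulse length."""
    maps = []
    for k in range(1, n_frames + 1):
        w = k * pulse_len
        m = np.array([nakagami_m(env[i:i + w])
                      for i in range(len(env) - w + 1)])
        # Pad to a common length so maps of different window sizes align.
        maps.append(np.pad(m, (0, len(env) - len(m)), mode='edge'))
    return np.mean(maps, axis=0)

rng = np.random.default_rng(3)
env = rng.rayleigh(scale=1.0, size=1024)  # fully developed speckle: m ≈ 1
wmc = wmc_map(env, pulse_len=32, n_frames=5)
print(round(float(wmc.mean()), 2))
```

Small windows give noisy, upward-biased m estimates; averaging over the N window sizes smooths the map, which is the effect the study quantifies with N = 5.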
Resolution enhancement of tri-stereo remote sensing images by super resolution methods
NASA Astrophysics Data System (ADS)
Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif
2016-10-01
Super resolution (SR) refers to the generation of a high resolution (HR) image from a decimated, blurred, low-resolution (LR) image set, which can be either a single frame or multiple frames containing several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo Remote Sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the sub-pixel shifts among them. Then, the warping, blurring, and downsampling matrix operators are created as sparse matrices to avoid high memory and computational requirements, which would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to estimate the HR image in one step in each iteration of the SR algorithm. Both Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results demonstrate improved quantitative performance against the standard interpolation method as well as improved qualitative results according to expert evaluations.
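The pipeline — warp, blur, and downsample composed into one system matrix per view, then iterative inversion — can be sketched at toy 1-D scale. Dense matrices are used here for clarity where the paper builds sparse ones, and the integer shifts, 3-tap blur, unregularized Landweber iteration, and step size are illustrative assumptions:

```python
import numpy as np

def shift_matrix(n, s):
    """Warp operator: integer circular shift by s (sub-pixel shifts would
    use interpolation weights instead)."""
    return np.roll(np.eye(n), s, axis=1)

def blur_matrix(n):
    """3-tap moving-average point spread function (circulant)."""
    B = np.zeros((n, n))
    for i in range(n):
        for k in (-1, 0, 1):
            B[i, (i + k) % n] += 1 / 3
    return B

def down_matrix(n, f):
    """Decimation by factor f: keep every f-th sample."""
    D = np.zeros((n // f, n))
    D[np.arange(n // f), np.arange(0, n, f)] = 1
    return D

# Three LR views of one HR signal, inverted by gradient (Landweber) steps
# on sum ||A_s x - y_s||^2; the paper stores the operators sparse.
n, f = 32, 2
hr = np.sin(np.linspace(0, 4 * np.pi, n, endpoint=False))
ops = [down_matrix(n, f) @ blur_matrix(n) @ shift_matrix(n, s) for s in (0, 1, 2)]
lrs = [A @ hr for A in ops]
x = np.zeros(n)
for _ in range(300):
    g = sum(A.T @ (A @ x - y) for A, y in zip(ops, lrs))
    x -= 0.5 * g
rel = np.linalg.norm(x - hr) / np.linalg.norm(hr)
print(rel < 0.3)  # the three shifted views make the HR signal recoverable
```

No single LR view determines the HR signal (each is decimated by 2), but the differently shifted views together do — the same redundancy the tri-stereo acquisition provides.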
An acquisition system for CMOS imagers with a genuine 10 Gbit/s bandwidth
NASA Astrophysics Data System (ADS)
Guérin, C.; Mahroug, J.; Tromeur, W.; Houles, J.; Calabria, P.; Barbier, R.
2012-12-01
This paper presents a high data throughput acquisition system for pixel detector readout, such as CMOS imagers. The CMOS acquisition board offers a genuine 10 Gbit/s bandwidth to the workstation and can provide on-line, continuous, high frame rate imaging capability. On-line processing can be implemented either on the data acquisition board or on the multi-core workstation, depending on the complexity of the algorithms. The different parts composing the acquisition board were designed to be used first with a single-photon detector called LUSIPHER (800×800 pixels), developed in our laboratory for scientific applications ranging from nano-photonics to adaptive optics. The architecture of the acquisition board is presented and the performance achieved by the produced boards is described. Future developments (hardware and software) concerning the on-line implementation of algorithms dedicated to single-photon imaging are also discussed.
Fast scanning probe for ophthalmic echography using an ultrasound motor.
Carotenuto, Riccardo; Caliano, Giosuè; Caronti, Alessandro; Savoia, Alessandro; Pappalardo, Massimo
2005-11-01
High-frequency transducers, up to 35-50 MHz, are widely used in ophthalmic echography to image fine eye structures. Phased-array techniques are not practically applicable at such high frequencies because the required size of a single transducer element would be too small, and mechanical scanning is the only practical alternative. At present, all ophthalmic ultrasound systems use focused single-element, mechanically scanned probes. Good probe positioning and image-evaluation feedback require an image refresh rate of about 15-30 frames per second, which is achieved in commercial mechanical scanning probes by using electromagnetic motors. In this work, we report the design, construction, and experimental characterization of the first mechanical scanning probe for ophthalmic echography based on a small piezoelectric ultrasound motor. The prototype probe reaches a scanning rate of 15 sectors per second, with very silent operation and little weight. The first high-frequency echographic images obtained with the prototype probe are presented.
Skolnick, M L; Matzuk, T
1978-08-01
This paper describes a new real-time servo-controlled sector scanner that produces high-resolution images similar to phased-array systems but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. Its unique feature is the transducer head, which contains a single moving part: the transducer. Frame rates vary from 0 to 30 frames per second and the sector angle from 0 to 60 degrees. Abdominal applications include differentiation of vascular structures, detection of small masses, imaging of diagonally oriented organs, survey scanning, and demonstration of regions difficult to image with contact scanners. Cardiac uses are also described.
A monolithic 640 × 512 CMOS imager with high-NIR sensitivity
NASA Astrophysics Data System (ADS)
Lauxtermann, Stefan; Fisher, John; McDougal, Michael
2014-06-01
In this paper we present first results from a backside illuminated CMOS image sensor that we fabricated on high resistivity silicon. Compared to conventional CMOS imagers, a thicker photosensitive membrane can be depleted when using silicon with low background doping concentration while maintaining low dark current and good MTF performance. The benefits of such a fully depleted silicon sensor are high quantum efficiency over a wide spectral range and a fast photo detector response. Combining these characteristics with the circuit complexity and manufacturing maturity available from a modern, mixed signal CMOS technology leads to a new type of sensor, with an unprecedented performance spectrum in a monolithic device. Our fully depleted, backside illuminated CMOS sensor was designed to operate at integration times down to 100 ns and frame rates up to 1000 Hz. Noise in Integrate While Read (IWR) snapshot shutter operation for these conditions was simulated to be below 10 e- at room temperature. 2×2 binning with a 4× increase in sensitivity and a maximum frame rate of 4000 Hz is supported. For application in hyperspectral imaging systems the full well capacity in each row can individually be programmed between 10 ke-, 60 ke-, and 500 ke-. On test structures we measured a room temperature dark current of 360 pA/cm2 at a reverse bias of 3.3 V. A peak quantum efficiency of 80% was measured with a single layer AR coating on the backside. Test images captured with the 50 μm thick VGA imager between 30 Hz and 90 Hz frame rate show a strong response at NIR wavelengths.
Image quality assessment metric for frame accumulated image
NASA Astrophysics Data System (ADS)
Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling
2018-01-01
The medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter of image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation technology: they pay little attention to gray-scale resolution, are based mainly on spatial resolution, and are limited to the 256 gray levels of existing display devices. This paper therefore proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise considerations, to more reasonably evaluate the quality of frame-accumulated medical images. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean of a sufficiently large number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed and their MSNR values calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
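An MSNR-style metric in the spirit described above can be sketched as follows. The exact formula in the paper may differ; here the mean of many frames serves as the reference image, and MSNR is taken as the mean signal over the RMS deviation from that reference, which is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((32, 32), 100.0)                     # constant-illumination scene
frames = truth + rng.normal(0.0, 10.0, (256, 32, 32))  # noisy captures
reference = frames.mean(axis=0)                      # mean of "enough" images

def msnr(accumulated, reference):
    """Mean signal over RMS deviation from the reference (assumed form)."""
    noise = np.sqrt(np.mean((accumulated - reference) ** 2))
    return reference.mean() / noise

# MSNR should grow roughly as sqrt(k) with the number of accumulated frames.
m4 = msnr(frames[:4].mean(axis=0), reference)
m64 = msnr(frames[:64].mean(axis=0), reference)
```

Comparing m4 and m64 shows the expected trend: accumulating more frames suppresses noise relative to the fixed reference, so the metric rises.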
Improved frame-based estimation of head motion in PET brain imaging.
Mukherjee, J M; Lindsay, C; Mukherjee, A; Olivier, P; Shao, L; King, M A; Licho, R
2016-05-01
Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; hence the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. The list mode data for PET acquisition are uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type.
Islam, Salwa; Fitzgerald, Lisa
2016-01-01
High rates of obesity are a significant issue amongst Indigenous populations in many countries around the world. Media framing of issues can play a critical role in shaping public opinion and government policy. A broad range of media analyses have been conducted on various aspects of obesity, however media representation of Indigenous obesity remains unexplored. In this study we investigate how obesity in Australia's Indigenous population is represented in newsprint media coverage. Media articles published between 2007 and 2014 were analysed for the distribution and extent of coverage over time and across Indigenous and mainstream media sources using quantitative content analysis. Representation of the causes and solutions of Indigenous obesity and framing in text and image content was examined using qualitative framing analysis. Media coverage of Indigenous obesity was very limited with no clear trends in reporting over time or across sources. The single Indigenous media source was the second largest contributor to the media discourse of this issue. Structural causes/origins were most often cited and individual solutions were comparatively overrepresented. A range of frames were employed across the media sources. All images reinforced textual framing except for one article where the image depicted individual factors whereas the text referred to structural determinants. This study provides a starting point for an important area of research that needs further investigation. The findings highlight the importance of alternative news media outlets, such as The Koori Mail, and that these should be developed to enhance the quality and diversity of media coverage. Media organisations can actively contribute to improving Indigenous health through raising awareness, evidence-based balanced reporting, and development of closer ties with Indigenous health workers.
Improved quality of intrafraction kilovoltage images by triggered readout of unexposed frames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poulsen, Per Rugaard, E-mail: per.poulsen@rm.dk; Jonassen, Johnny; Jensen, Carsten
2015-11-15
Purpose: The gantry-mounted kilovoltage (kV) imager of modern linear accelerators can be used for real-time tumor localization during radiation treatment delivery. However, the kV image quality often suffers from cross-scatter from the megavoltage (MV) treatment beam. This study investigates readout of unexposed kV frames as a means to improve the kV image quality in a series of experiments and a theoretical model of the observed image quality improvements. Methods: A series of fluoroscopic images were acquired of a solid water phantom with an embedded gold marker and an air cavity, with and without simultaneous irradiation of the phantom with a 6 MV beam delivered perpendicular to the kV beam at 300 and 600 monitor units per minute (MU/min). An in-house built device triggered readout of zero, one, or multiple unexposed frames between the kV exposures. The unexposed frames contained part of the MV scatter, consequently reducing the amount of MV scatter accumulated in the exposed frames. The image quality with and without unexposed frame readout was quantified as the contrast-to-noise ratio (CNR) of the gold marker and air cavity for a range of imaging frequencies from 1 to 15 Hz. To gain more insight into the observed CNR changes, the image lag of the kV imager was measured and used as input in a simple model that describes the CNR with unexposed frame readout in terms of the contrast, kV noise, and MV noise measured without readout of unexposed frames. Results: Without readout of unexposed kV frames, the quality of intratreatment kV images decreased dramatically with reduced kV frequencies due to MV scatter. The gold marker was only visible for imaging frequencies ≥3 Hz at 300 MU/min and ≥5 Hz at 600 MU/min. Visibility of the air cavity required even higher imaging frequencies. Readout of multiple unexposed frames ensured visibility of both structures at all imaging frequencies and a CNR that was independent of the kV frame rate.
The image lag was 12.2%, 2.2%, and 0.9% in the first, second, and third frame after an exposure. The CNR model predicted the CNR with triggered image readout with a mean absolute error of 2.0% for the gold marker. Conclusions: A device that triggers readout of unexposed frames during kV fluoroscopy was built and shown to greatly improve the quality of intratreatment kV images. A simple theoretical model successfully described the CNR improvements with the device.
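The contrast-to-noise ratio used above to quantify marker visibility can be sketched as follows. The region definitions, image values, and the choice of the background standard deviation as the noise term are illustrative assumptions.

```python
import numpy as np

def cnr(image, object_mask, background_mask):
    """|mean(object) - mean(background)| / std(background)."""
    obj, bkg = image[object_mask], image[background_mask]
    return abs(obj.mean() - bkg.mean()) / bkg.std()

rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, (64, 64))            # noisy uniform background
img[28:36, 28:36] += 40.0                         # bright marker-like region
obj = np.zeros((64, 64), bool); obj[28:36, 28:36] = True
bkg = np.zeros((64, 64), bool); bkg[:16, :16] = True
value = cnr(img, obj, bkg)                        # close to 40/5 = 8
```

MV scatter accumulating in exposed frames raises the noise term in this ratio, which is why the CNR falls at low kV imaging frequencies without unexposed-frame readout.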
Delay-Encoded Harmonic Imaging (DE-HI) in Multiplane-Wave Compounding.
Gong, Ping; Song, Pengfei; Chen, Shigao
2017-04-01
The development of ultrafast ultrasound imaging brings great opportunities to improve imaging technologies such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, several tilted plane or diverging wave images are coherently combined to form a compounded image, leading to trade-offs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Multiplane wave (MW) imaging was proposed to resolve this trade-off by encoding multiple plane waves with a Hadamard matrix during one transmission event (i.e., pulse-echo event), improving image SNR without sacrificing resolution or frame rate. However, it suffers from stronger reverberation artifacts in B-mode images than standard plane wave compounding due to the longer transmitted pulses. If harmonic imaging could be combined with MW imaging, the reverberation artifacts and other clutter noises, such as sidelobes and multipath scattering clutter, would be suppressed. The challenge, however, is that the Hadamard codes used in MW imaging cannot encode the 2nd harmonic component by inverting the pulse polarity. In this paper, we propose a delay-encoded harmonic imaging (DE-HI) technique that encodes the 2nd harmonic with a one-quarter-period delay calculated at the transmit center frequency, rather than reversing the pulse polarity during multiplane wave emissions. Received DE-HI signals can then be decoded in the frequency domain to recover the signals as in single plane wave emissions, but with improved SNR mainly at the 2nd harmonic component instead of the fundamental. DE-HI was tested experimentally with a point target, a B-mode imaging phantom, and in-vivo human liver imaging.
Improvements in image contrast-to-noise ratio (CNR), spatial resolution, and lesion signal-to-noise ratio (lSNR) were achieved compared to standard plane wave compounding, MW imaging, and standard harmonic imaging (maximal improvements of 116% in CNR and 115% in lSNR compared to standard HI at around 55 mm depth in the B-mode imaging phantom study). The potentially high frame rate and the stability of the encoding and decoding processes of DE-HI were also demonstrated, making DE-HI promising for a wide spectrum of imaging applications.
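The phase relation underlying the quarter-period delay can be illustrated with a toy signal (this is an assumed setup, not the authors' full decoder): a delay of one quarter period at the transmit center frequency shifts the fundamental by 90 degrees but the 2nd harmonic by 180 degrees, so the two components become separable in the frequency domain.

```python
import numpy as np

n = 1024
t = np.arange(n) / n                 # normalized time axis
f0 = 8.0                             # transmit center frequency (cycles/unit)
tau = 1.0 / (4.0 * f0)               # one quarter period at f0

def echo(tt):
    """Fundamental plus a weak 2nd harmonic (toy received signal)."""
    return np.cos(2 * np.pi * f0 * tt) + 0.3 * np.cos(2 * np.pi * 2 * f0 * tt)

# In the sum of delayed and undelayed echoes the 2nd harmonic cancels
# (180 deg shift) while the fundamental survives (90 deg shift), which is
# the relation a frequency-domain decoder can exploit.
spec = np.abs(np.fft.rfft(echo(t) + echo(t - tau))) / n
fund, harm = spec[8], spec[16]       # FFT bins of f0 and 2*f0
```

Subtracting instead of summing would retain the harmonic, so the two combinations together allow the components to be untangled.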
Design and performance of single photon APD focal plane arrays for 3-D LADAR imaging
NASA Astrophysics Data System (ADS)
Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph
2010-08-01
We describe the design, fabrication, and performance of focal plane arrays (FPAs) for use in 3-D LADAR imaging applications requiring single photon sensitivity. These 32 × 32 FPAs provide high-efficiency single photon sensitivity for three-dimensional LADAR imaging applications at 1064 nm. Our GmAPD arrays are designed using a planar-passivated avalanche photodiode device platform with buried p-n junctions that has demonstrated excellent performance uniformity, operational stability, and long-term reliability. The core of the FPA is a chip stack formed by hybridizing the GmAPD photodiode array to a custom CMOS read-out integrated circuit (ROIC) and attaching a precision-aligned GaP microlens array (MLA) to the back-illuminated detector array. Each ROIC pixel includes an active quenching circuit governing Geiger-mode operation of the corresponding avalanche photodiode pixel as well as a pseudo-random counter to capture per-pixel time-of-flight timestamps in each frame. The FPA has been designed to operate at frame rates as high as 186 kHz for 2 μs range gates. Effective single photon detection efficiencies as high as 40% (including all optical transmission and MLA losses) are achieved for dark count rates below 20 kHz. For these planar-geometry diffused-junction GmAPDs, isolation trenches are used to reduce crosstalk due to hot carrier luminescence effects during avalanche events, and we present details of the crosstalk performance for different operating conditions. Direct measurement of temporal probability distribution functions due to cumulative timing uncertainties of the GmAPDs and ROIC circuitry has demonstrated a FWHM timing jitter as low as 265 ps (standard deviation is ~100 ps).
Report on recent results of the PERCIVAL soft X-ray imager
NASA Astrophysics Data System (ADS)
Khromova, A.; Cautero, G.; Giuressi, D.; Menk, R.; Pinaroli, G.; Stebel, L.; Correa, J.; Marras, A.; Wunderer, C. B.; Lange, S.; Tennert, M.; Niemann, M.; Hirsemann, H.; Smoljanin, S.; Reza, S.; Graafsma, H.; Göttlicher, P.; Shevyakov, I.; Supra, J.; Xia, Q.; Zimmer, M.; Guerrini, N.; Marsh, B.; Sedgwick, I.; Nicholls, T.; Turchetta, R.; Pedersen, U.; Tartoni, N.; Hyun, H. J.; Kim, K. S.; Rah, S. Y.; Hoenk, M. E.; Jewell, A. D.; Jones, T. J.; Nikzad, S.
2016-11-01
The PERCIVAL (Pixelated Energy Resolving CMOS Imager, Versatile And Large) soft X-ray 2D imaging detector is based on stitched, wafer-scale sensors possessing a thick epi-layer, which together with back-thinning and back-side illumination yields elevated quantum efficiency in the photon energy range of 125-1000 eV. The main application fields of PERCIVAL are foreseen in photon science with FELs and synchrotron radiation. This requires a high dynamic range up to 10^5 ph @ 250 eV paired with single photon sensitivity with high confidence at moderate frame rates in the range of 10-120 Hz. These figures imply the availability of dynamic gain switching on a pixel-by-pixel basis and a highly parallel, low noise analog and digital readout, which has been realized in the PERCIVAL sensor layout. Different aspects of the detector performance have been assessed using prototype sensors with different pixel and ADC types. This work reports on recent test results obtained with the newest chip prototypes with the improved pixel and ADC architecture. For the target frame rates in the 10-120 Hz range an average noise floor of 14 e- has been determined, indicating the ability to detect single photons with energies above 250 eV. Owing to the successfully implemented adaptive 3-stage multiple-gain switching, the integrated charge level exceeds 4 × 10^6 e- or 57000 X-ray photons at 250 eV per frame at 120 Hz. For all gains the noise level remains below the Poisson limit, also in high-flux conditions. Additionally, a brief overview of the updates on the upcoming 2 Mpixel (P2M) detector system (expected at the end of 2016) will be reported.
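A back-of-envelope check of the figures quoted above, assuming roughly 3.65 eV per electron-hole pair in silicon (an assumed constant, not taken from the abstract), shows why a 14 e- noise floor implies single-photon confidence at 250 eV:

```python
E_photon = 250.0                    # photon energy, eV
w_si = 3.65                         # eV per e-h pair in Si (assumption)
noise = 14.0                        # reported average noise floor, e- rms
full_well = 4e6                     # reported integrated charge level, e-

electrons = E_photon / w_si         # roughly 68 e- per 250 eV photon
snr = electrons / noise             # roughly 5 sigma above the noise floor
photons = full_well / electrons     # close to the quoted 57000 photons/frame
```

A single-photon signal sitting about five noise standard deviations above the floor is consistent with the "high confidence" detection claim, and the full-well conversion reproduces the quoted photon count to within a few percent.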
Mellema, Daniel C; Song, Pengfei; Kinnick, Randall R; Urban, Matthew W; Greenleaf, James F; Manduca, Armando; Chen, Shigao
2016-09-01
Ultrasound shear wave elastography (SWE) utilizes the propagation of induced shear waves to characterize the shear modulus of soft tissue. Many methods rely on an acoustic radiation force (ARF) "push beam" to generate shear waves. However, specialized hardware is required to generate the push beams, and the thermal stress that is placed upon the ultrasound system, transducer, and tissue by the push beams currently limits the frame-rate to about 1 Hz. These constraints have limited the implementation of ARF to high-end clinical systems. This paper presents Probe Oscillation Shear Elastography (PROSE) as an alternative method to measure tissue elasticity. PROSE generates shear waves using a harmonic mechanical vibration of an ultrasound transducer, while simultaneously detecting motion with the same transducer under pulse-echo mode. Motion of the transducer during detection produces a "strain-like" compression artifact that is coupled with the observed shear waves. A novel symmetric sampling scheme is proposed such that pulse-echo detection events are acquired when the ultrasound transducer returns to the same physical position, allowing the shear waves to be decoupled from the compression artifact. Full field-of-view (FOV) two-dimensional (2D) shear wave speed images were obtained by applying a local frequency estimation (LFE) technique, capable of generating a 2D map from a single frame of shear wave motion. The shear wave imaging frame rate of PROSE is comparable to the vibration frequency, which can be an order of magnitude higher than ARF based techniques. PROSE was able to produce smooth and accurate shear wave images from three homogeneous phantoms with different moduli, with an effective frame rate of 300 Hz. 
An inclusion phantom study showed that increased vibration frequencies improved the accuracy of inclusion imaging, and allowed targets as small as 6.5 mm to be resolved with good contrast (contrast-to-noise ratio ≥ 19 dB) between the target and background.
An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories
NASA Astrophysics Data System (ADS)
Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji
2008-11-01
We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frames in memory. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were needed when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. The CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, double that of the conventional ultrahigh-speed camera. A problem with this arrangement was that the beam splitter halved the incident light on each CCD. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by approximately a factor of two. By using the beam splitter in conjunction with the microlens array, it was possible to build an ultrahigh-speed color video camera with 288 frame memories without decreasing the camera's light sensitivity.
Two-photon voltage imaging using a genetically encoded voltage indicator
Akemann, Walther; Sasaki, Mari; Mutoh, Hiroki; Imamura, Takeshi; Honkura, Naoki; Knöpfel, Thomas
2013-01-01
Voltage-sensitive fluorescent proteins (VSFPs) are a family of genetically-encoded voltage indicators (GEVIs) reporting membrane voltage fluctuation from genetically-targeted cells, in preparations ranging from cell cultures to whole brains in awake mice, as demonstrated earlier using 1-photon (1P) fluorescence excitation imaging. However, in-vivo 1P imaging captures optical signals only from superficial layers and does not optically resolve single neurons. Two-photon (2P) excitation imaging, on the other hand, has not yet been convincingly applied to GEVI experiments. Here we show that 2P imaging of VSFP Butterfly 1.2 expressing pyramidal neurons in layer 2/3 reports optical membrane voltage in brain slices consistent with 1P imaging but with a 2-3 times larger ΔR/R value. 2P imaging of mouse cortex in-vivo achieved cellular resolution throughout layer 2/3. In somatosensory cortex we recorded sensory responses to single whisker deflections in anesthetized mice at full-frame video rate. Our results demonstrate the feasibility of GEVI-based functional 2P imaging in mouse cortex. PMID:23868559
Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the measurement precision were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 μs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and a longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
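The two post-processing steps named above, row-wise digital binning and precision as the standard deviation of single-shot estimates, can be sketched on synthetic data (the image, velocities, and sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def bin_rows(img, k=8):
    """Sum groups of k adjacent rows (digital analogue of on-sensor binning)."""
    h, w = img.shape
    return img[: h - h % k].reshape(h // k, k, w).sum(axis=1)

img = rng.poisson(5.0, (512, 128)).astype(float)   # synthetic low-signal frame
binned = bin_rows(img)                             # (64, 128); SNR up ~sqrt(8)

# Precision, as defined in the paper: standard deviation of a set of
# single-shot velocity estimates (synthetic values here).
velocities = rng.normal(850.0, 0.5, 300)           # single-shot estimates, m/s
precision = velocities.std(ddof=1)                 # near 0.5 m/s by design
```

Summing k rows grows the signal k-fold but the shot noise only by sqrt(k), which is why binning helped most on the noisy un-intensified systems.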
Frame Rate Considerations for Real-Time Abdominal Acoustic Radiation Force Impulse Imaging
Fahey, Brian J.; Palmeri, Mark L.; Trahey, Gregg E.
2008-01-01
With the advent of real-time Acoustic Radiation Force Impulse (ARFI) imaging, elevated frame rates are both desirable and relevant from a clinical perspective. However, fundamental limitations on frame rates are imposed by thermal safety concerns related to incident radiation force pulses. Abdominal ARFI imaging utilizes a curvilinear scanning geometry that results in markedly different tissue heating patterns than those previously studied for linear arrays or mechanically-translated concave transducers. Finite Element Method (FEM) models were used to simulate these tissue heating patterns and to analyze the impact of tissue heating on frame rates available for abdominal ARFI imaging. A perfusion model was implemented to account for cooling effects due to blood flow and frame rate limitations were evaluated in the presence of normal, reduced and negligible tissue perfusions. Conventional ARFI acquisition techniques were also compared to ARFI imaging with parallel receive tracking in terms of thermal efficiency. Additionally, thermocouple measurements of transducer face temperature increases were acquired to assess the frame rate limitations imposed by cumulative heating of the imaging array. Frame rates sufficient for many abdominal imaging applications were found to be safely achievable utilizing available ARFI imaging techniques. PMID:17521042
Lattanzi, Riccardo; Zhang, Bei; Knoll, Florian; Assländer, Jakob; Cloos, Martijn A
2018-06-01
Magnetic Resonance Fingerprinting reconstructions can become computationally intractable with multiple transmit channels if the B1+ phases are included in the dictionary. We describe a general method that makes it possible to omit the transmit phases. We show that this enables straightforward implementation of dictionary compression to further reduce the problem dimensionality. We merged the raw data of each RF source into a single k-space dataset, extracted the transceiver phases from the corresponding reconstructed images, and used them to unwind the phase in each time frame. All phase-unwound time frames were combined in a single set before performing SVD-based compression. We conducted synthetic, phantom, and in-vivo experiments to demonstrate the feasibility of SVD-based compression in the case of two-channel transmission. Unwinding the phases before SVD-based compression yielded artifact-free parameter maps. For fully sampled acquisitions, parameters were accurate with as few as 6 compressed time frames. SVD-based compression performed well in-vivo with highly under-sampled acquisitions using 16 compressed time frames, which reduced reconstruction time from 750 to 25 min. Our method reduces the dimensions of the dictionary atoms and enables implementation of any fingerprint compression strategy in the case of multiple transmit channels. Copyright © 2018 Elsevier Inc. All rights reserved.
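The unwind-then-compress idea can be sketched on a synthetic low-rank time series. The shapes, the rank, and the single-frame phase estimate are all illustrative assumptions standing in for the reconstructed image frames of the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox, n_frames, rank = 500, 64, 6
series = rng.standard_normal((n_vox, rank)) @ rng.standard_normal((rank, n_frames))
phase = np.exp(1j * rng.uniform(-np.pi, np.pi, (n_vox, 1)))   # transceive phase
frames = series * phase                  # "measured" complex time frames

# Estimate the per-voxel phase from one reconstructed frame and unwind it
# from every time frame.
est_phase = np.exp(1j * np.angle(frames[:, :1]))
unwound = (frames * np.conj(est_phase)).real

# With the phase removed, a small SVD basis captures the whole series.
U, s, Vt = np.linalg.svd(unwound, full_matrices=False)
compressed = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
err = np.linalg.norm(compressed - unwound) / np.linalg.norm(unwound)
```

Had the random per-voxel phases been left in, the series would no longer be low rank and the same truncation would lose signal; removing them first is what makes the aggressive SVD compression possible.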
Non-rigid estimation of cell motion in calcium time-lapse images
NASA Astrophysics Data System (ADS)
Hachi, Siham; Lucumi Moreno, Edinson; Desmet, An-Sofie; Vanden Berghe, Pieter; Fleming, Ronan M. T.
2016-03-01
Calcium imaging is a widely used technique in neuroscience permitting the simultaneous monitoring of electrophysiological activity of hundreds of neurons at single-cell resolution. Identification of neuronal activity requires rapid and reliable image analysis techniques, especially when neurons fire and move simultaneously over time. Traditionally, image segmentation is performed to extract individual neurons in the first frame of a calcium sequence. Thereafter, the mean intensity is calculated from the same region of interest in each frame to infer calcium signals. However, when cells move, deform, and fire, this segmentation alone generates artefacts and therefore biased estimates of neuronal activity. There is thus a pressing need for a more efficient cell tracking technique. We hereby present a novel vision-based cell tracking scheme using a thin-plate spline deformable model. The thin-plate spline warping is based on control points detected using the Features from Accelerated Segment Test (FAST) detector and tracked using Lucas-Kanade optical flow. Our method is able to track neurons in calcium time-series, even when there are large changes in intensity, such as during a firing event. The robustness and efficiency of the proposed approach is validated on real calcium time-lapse images of a neuronal population.
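The Lucas-Kanade step at the heart of the tracking can be reduced to a single-window least-squares problem. This pure-NumPy sketch is only the core flow estimate on one patch; the paper's pipeline uses FAST corners and a pyramidal tracker from a vision library:

```python
import numpy as np

def lk_flow(prev, curr):
    """Least-squares solution of Ix*u + Iy*v = -It over one window."""
    Ix = np.gradient(prev, axis=1)          # spatial gradients
    Iy = np.gradient(prev, axis=0)
    It = curr - prev                        # temporal gradient
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    return np.linalg.lstsq(A, -It.ravel(), rcond=None)[0]

# A Gaussian blob translated by one pixel in x; the estimate should come
# out close to (1, 0).
x = np.arange(32, dtype=float)
blob = lambda cx: np.exp(-((x[None, :] - cx) ** 2 + (x[:, None] - 15) ** 2) / 20.0)
u, v = lk_flow(blob(15), blob(16))
```

In the full scheme these per-corner flow vectors drive the thin-plate spline warp, so the region of interest deforms with the cell instead of staying fixed to the first frame.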
A FPGA-based architecture for real-time image matching
NASA Astrophysics Data System (ADS)
Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo
2013-10-01
Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images of the same scene taken from different viewpoints or at different times. However, its large computational complexity has been a challenge to most embedded systems. This paper proposes a single-FPGA image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture for the SIFT feature detection to reduce FPGA resource utilization. Moreover, we also implement BRIEF description and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280 × 720 images. Its processing speed can meet the demand of most real-life computer vision applications.
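The BRIEF matching stage reduces to Hamming distances between binary descriptors, which is what makes it attractive for FPGA pipelining. A software sketch of the comparison (the 256-bit descriptor length and the distance threshold below are illustrative assumptions):

```python
import numpy as np

def hamming_matches(a, b, max_dist=64):
    """Brute-force nearest-neighbour matching of binary descriptors packed
    as uint8 rows (32 bytes = 256 bits per BRIEF descriptor)."""
    # XOR exposes differing bits; unpacking and summing counts them.
    d = np.unpackbits(a[:, None] ^ b[None], axis=-1).sum(-1)   # (Na, Nb)
    nn = d.argmin(axis=1)
    best = d[np.arange(len(a)), nn]
    return [(i, int(nn[i])) for i in range(len(a)) if best[i] <= max_dist]

rng = np.random.default_rng(0)
desc_a = rng.integers(0, 256, (4, 32), dtype=np.uint8)
desc_b = desc_a[::-1].copy()          # same descriptors, reversed order
```

On the FPGA the same XOR-and-popcount structure maps directly onto parallel logic, one reason BRIEF is chosen over floating-point descriptors.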
Two-dimensional real-time imaging system for subtraction angiography using an iodine filter
NASA Astrophysics Data System (ADS)
Umetani, Keiji; Ueda, Ken; Takeda, Tohoru; Anno, Izumi; Itai, Yuji; Akisada, Masayoshi; Nakajima, Teiichi
1992-01-01
A new type of subtraction imaging system was developed using an iodine filter and a single-energy broad bandwidth monochromatized x ray. The x-ray images of coronary arteries made after intravenous injection of a contrast agent are enhanced by an energy-subtraction technique. Filter chopping of the x-ray beam switches energies rapidly, so that a nearly simultaneous pair of filtered and nonfiltered images can be made. By using a high-speed video camera, a pair of two 512 × 512 pixel images can be obtained within 9 ms. Three hundred eighty-four images (raw data) are stored in a 144-Mbyte frame memory. After phantom studies, in vivo subtracted images of coronary arteries in dogs were obtained at a rate of 15 images/s.
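The energy-subtraction step amounts to a logarithmic subtraction of the filtered/nonfiltered image pair, which cancels attenuation common to both frames and leaves the iodine signal. A minimal sketch with synthetic attenuation maps (all values hypothetical):

```python
import numpy as np

def energy_subtract(nonfiltered, filtered, eps=1e-9):
    """Log subtraction of a filtered/nonfiltered transmission image pair;
    attenuation common to both frames cancels."""
    return np.log(filtered + eps) - np.log(nonfiltered + eps)

# Synthetic transmission images: tissue attenuation mu_t is common to both
# frames, while iodine attenuation mu_i contributes only to the nonfiltered one.
rng = np.random.default_rng(1)
I0 = 1000.0
mu_t = rng.uniform(0.2, 0.6, (64, 64))
mu_i = np.zeros((64, 64))
mu_i[30:34, :] = 0.8                   # a hypothetical contrast-filled vessel
nonfiltered = I0 * np.exp(-(mu_t + mu_i))
filtered = I0 * np.exp(-mu_t)

vessel_map = energy_subtract(nonfiltered, filtered)   # recovers ~mu_i
```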
An evaluation of dynamic lip-tooth characteristics during speech and smile in adolescents.
Ackerman, Marc B; Brensinger, Colleen; Landis, J Richard
2004-02-01
This retrospective study was conducted to measure lip-tooth characteristics of adolescents. Pretreatment video clips of 1242 consecutive patients were screened for Class I skeletal and dental patterns. After all inclusion criteria were applied, the final sample consisted of 50 patients (27 boys, 23 girls) with a mean age of 12.5 years. The raw digital video stream of each patient was edited to select a single image frame representing the patient saying the syllable "chee" and a second single image representing the patient's posed social smile and saved as part of a 12-frame image sequence. Each animation image was analyzed using a SmileMesh computer application to measure the smile index (the intercommissure width divided by the interlabial gap), intercommissure width (mm), interlabial gap (mm), percent incisor below the intercommissure line, and maximum incisor exposure (mm). The data were analyzed using SAS (version 8.1). All recorded differences in linear measures had to be ≥ 2 mm. The results suggest that anterior tooth display at speech and smile should be recorded independently but evaluated as part of a dynamic range. Asking patients to say "cheese" and then smile is no longer a valid method to elicit the parameters of anterior tooth display. When planning the vertical positions of incisors during orthodontic treatment, the orthodontist should view the dynamics of anterior tooth display as a continuum delineated by the time points of rest, speech, posed social smile, and a Duchenne smile.
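The smile index defined above is a simple ratio; as a sketch (function and variable names are ours, not SmileMesh's):

```python
def smile_index(intercommissure_width_mm, interlabial_gap_mm):
    """Smile index = intercommissure width / interlabial gap."""
    if interlabial_gap_mm <= 0:
        raise ValueError("interlabial gap must be positive")
    return intercommissure_width_mm / interlabial_gap_mm
```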
A 3D camera for improved facial recognition
NASA Astrophysics Data System (ADS)
Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim
2004-12-01
We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is capable of locating the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images, and is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record the images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
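Range-from-parallax follows the standard triangulation relation Z = f·B/d. The sketch below shows how the stated sub-pixel spot localization translates into range precision; the focal length and baseline are assumptions, not figures from the paper:

```python
def range_m(focal_px, baseline_m, disparity_px):
    """Triangulated range from spot disparity: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Hypothetical geometry: 1000 px focal length, 10 cm projector-camera baseline.
f, B = 1000.0, 0.1
d = range_m(f, B, 100.0)          # a spot seen at 100 px disparity -> 1 m
# A 0.1 px localization error at this range shifts Z by roughly 1 mm,
# consistent with the accuracy quoted in the abstract.
err = abs(range_m(f, B, 100.1) - d)
```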
Wu, Jianglai; Tang, Anson H. L.; Mok, Aaron T. Y.; Yan, Wenwei; Chan, Godfrey C. F.; Wong, Kenneth K. Y.; Tsia, Kevin K.
2017-01-01
Apart from spatial resolution enhancement, scaling the temporal resolution, and equivalently the imaging throughput, of fluorescence microscopy is of equal importance in advancing cell biology and clinical diagnostics. Yet, this attribute has mostly been overlooked because of the inherent speed limitation of existing imaging strategies. To address the challenge, we employ an all-optical laser-scanning mechanism, enabled by an array of reconfigurable spatiotemporally-encoded virtual sources, to demonstrate ultrafast fluorescence microscopy at a line-scan rate as high as 8 MHz. We show that this technique enables high-throughput single-cell microfluidic fluorescence imaging at 75,000 cells/second and high-speed cellular 2D dynamical imaging at 3,000 frames per second, outperforming state-of-the-art high-speed cameras and the gold-standard laser scanning strategies. Together with its wide compatibility with existing imaging modalities, this technology could empower new forms of high-throughput and high-speed biological fluorescence microscopy that were previously out of reach. PMID:28966855
Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution
NASA Astrophysics Data System (ADS)
Tian, Y.; Rao, C. H.; Wei, K.
2008-10-01
Adaptive optics can only partially compensate for image blurring caused by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by the frame selection technique, are then deconvolved. No a priori knowledge is required except the positivity constraint. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with a 61-element adaptive optics system at Yunnan Observatory. The results showed that the method can effectively improve images partially corrected by adaptive optics.
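Multi-frame blind deconvolution alternates updates of the object estimate and of each frame's PSF; the multiplicative Richardson-Lucy update below, which preserves positivity automatically, is the usual building block. This is a minimal non-blind sketch (a blind variant would apply the analogous update to the PSF as well), with synthetic data:

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Zero-pad a centred PSF to the image shape, then circularly shift its
    centre to (0, 0) before the FFT (the classic psf2otf step)."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), (0, 1))
    return np.fft.rfft2(padded)

def richardson_lucy(image, psf, iters=25):
    """Richardson-Lucy deconvolution; every update is multiplicative, so
    the positivity constraint mentioned above is enforced automatically."""
    otf = psf_to_otf(psf / psf.sum(), image.shape)
    est = np.full_like(image, image.mean())
    for _ in range(iters):
        blurred = np.fft.irfft2(np.fft.rfft2(est) * otf, s=image.shape)
        ratio = image / np.maximum(blurred, 1e-12)
        # correlation with the PSF = multiplication by conj(OTF) in Fourier space
        est = est * np.fft.irfft2(np.fft.rfft2(ratio) * np.conj(otf), s=image.shape)
    return est
```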
Exploring the brain on multiple scales with correlative two-photon and light sheet microscopy
NASA Astrophysics Data System (ADS)
Silvestri, Ludovico; Allegra Mascaro, Anna Letizia; Costantini, Irene; Sacconi, Leonardo; Pavone, Francesco S.
2014-02-01
One of the unique features of the brain is that its activity cannot be framed in a single spatio-temporal scale, but rather spans many orders of magnitude both in space and time. A single imaging technique can reveal only a small part of this complex machinery. To obtain a more comprehensive view of brain functionality, complementary approaches should be combined into a correlative framework. Here, we describe a method to integrate data from in vivo two-photon fluorescence imaging and ex vivo light sheet microscopy, taking advantage of blood vessels as a reference chart. We show how the apical dendritic arbor of a single cortical pyramidal neuron imaged in living thy1-GFP-M mice can be found in the large-scale brain reconstruction obtained with light sheet microscopy. Starting from the apical portion, the whole pyramidal neuron can then be segmented. The correlative approach presented here allows the neurons whose dynamics have been observed in high detail in vivo to be contextualized within a three-dimensional anatomical framework.
High-speed digital imaging of cytosolic Ca2+ and contraction in single cardiomyocytes.
O'Rourke, B; Reibel, D K; Thomas, A P
1990-07-01
A charge-coupled device (CCD) camera, with the capacity for simultaneous spatially resolved photon counting and rapid frame transfer, was utilized for high-speed digital image collection from an inverted epifluorescence microscope. The unique properties of the CCD detector were applied to an analysis of cell shortening and the Ca2+ transient from fluorescence images of fura-2-loaded cardiomyocytes. On electrical stimulation of the cell, a series of sequential subimages was collected and used to create images of Ca2+ within the cell during contraction. The high photosensitivity of the camera, combined with a detector-based frame storage technique, permitted collection of fluorescence images 10 ms apart. This rate of image collection was sufficient to resolve the rapid events of contraction, e.g., the upstroke of the Ca2+ transient (less than 40 ms) and the time to peak shortening (less than 80 ms). The technique was used to examine the effects of beta-adrenoceptor activation, fura-2 load, and stimulus frequency on cytosolic Ca2+ transients and contractions of single cardiomyocytes. beta-Adrenoceptor stimulation resulted in pronounced increases in peak Ca2+, maximal rates of rise and decay of Ca2+, extent of shortening, and maximal velocities of shortening and relaxation. Raising the intracellular load of fura-2 had little effect on the rising phase of Ca2+ or the extent of shortening but extended the duration of the Ca2+ transient and contraction. In related experiments utilizing differential-interference contrast microscopy, the same technique was applied to visualize sarcomere dynamics in contracting cells. This newly developed technique is a versatile tool for analyzing the Ca2+ transient and mechanical events in studies of excitation-contraction coupling in cardiomyocytes.
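Fura-2 ratio images are conventionally converted to Ca2+ concentrations with the Grynkiewicz ratiometric equation. A sketch follows; the calibration constants in the test are illustrative, and the ~224 nM in vitro Kd is the commonly cited value, not one reported in this paper:

```python
def fura2_calcium_nM(R, Rmin, Rmax, beta, Kd_nM=224.0):
    """Grynkiewicz ratiometric equation:
    [Ca2+] = Kd * beta * (R - Rmin) / (Rmax - R),
    where R is the 340/380 nm excitation ratio and beta is the ratio of
    380 nm fluorescence at zero vs. saturating Ca2+."""
    if not Rmin < R < Rmax:
        raise ValueError("R must lie strictly between Rmin and Rmax")
    return Kd_nM * beta * (R - Rmin) / (Rmax - R)
```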
NASA Astrophysics Data System (ADS)
An, Lin; Shen, Tueng T.; Wang, Ruikang K.
2011-10-01
This paper presents comprehensive, depth-resolved images of the microvasculature within the human retina achieved by a newly developed ultrahigh sensitive optical microangiography (UHS-OMAG) system. Because of its high flow sensitivity, UHS-OMAG is much more susceptible than the traditional OMAG system to tissue motion caused by involuntary movement of the human eye and head. To mitigate these motion artifacts in the final imaging results, we propose a new phase-compensation algorithm in which the traditional phase-compensation algorithm is applied repeatedly to efficiently minimize the motion artifacts. Comparatively, this new algorithm demonstrates at least 8 to 25 times higher motion tolerance, critical for the UHS-OMAG system to achieve retinal microvasculature images with high quality. Furthermore, the new UHS-OMAG system employs a high-speed line-scan CMOS camera (240 kHz A-line scan rate) to capture 500 A-lines for one B-frame at a 400 Hz frame rate. With this system, we performed a series of in vivo experiments to visualize the retinal microvasculature in humans. Two featured imaging protocols are utilized. The first has low lateral resolution (16 μm) and a wide field of view (4 × 3 mm2 with a single scan and 7 × 8 mm2 with multiple scans), while the second has high lateral resolution (5 μm) and a narrow field of view (1.5 × 1.2 mm2 with a single scan). The imaging performance delivered by our system suggests that UHS-OMAG can be a promising noninvasive alternative to current clinical retinal microvasculature imaging techniques for the diagnosis of eye diseases with significant vascular involvement, such as diabetic retinopathy and age-related macular degeneration.
Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy.
Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Boccara, A Claude; Bourdieu, Laurent
2011-11-01
Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.
Single myelin fiber imaging in living rodents without labeling by deep optical coherence microscopy
NASA Astrophysics Data System (ADS)
Ben Arous, Juliette; Binding, Jonas; Léger, Jean-François; Casado, Mariano; Topilko, Piotr; Gigan, Sylvain; Claude Boccara, A.; Bourdieu, Laurent
2011-11-01
Myelin sheath disruption is responsible for multiple neuropathies in the central and peripheral nervous system. Myelin imaging has thus become an important diagnosis tool. However, in vivo imaging has been limited to either low-resolution techniques unable to resolve individual fibers or to low-penetration imaging of single fibers, which cannot provide quantitative information about large volumes of tissue, as required for diagnostic purposes. Here, we perform myelin imaging without labeling and at micron-scale resolution with >300-μm penetration depth on living rodents. This was achieved with a prototype [termed deep optical coherence microscopy (deep-OCM)] of a high-numerical aperture infrared full-field optical coherence microscope, which includes aberration correction for the compensation of refractive index mismatch and high-frame-rate interferometric measurements. We were able to measure the density of individual myelinated fibers in the rat cortex over a large volume of gray matter. In the peripheral nervous system, deep-OCM allows, after minor surgery, in situ imaging of single myelinated fibers over a large fraction of the sciatic nerve. This allows quantitative comparison of normal and Krox20 mutant mice, in which myelination in the peripheral nervous system is impaired. This opens promising perspectives for myelin chronic imaging in demyelinating diseases and for minimally invasive medical diagnosis.
Improved frame-based estimation of head motion in PET brain imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukherjee, J. M., E-mail: joyeeta.mitra@umassmed.edu; Lindsay, C.; King, M. A.
Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that our method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. Conclusions: The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type.
Improved frame-based estimation of head motion in PET brain imaging
Mukherjee, J. M.; Lindsay, C.; Mukherjee, A.; Olivier, P.; Shao, L.; King, M. A.; Licho, R.
2016-01-01
Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that our method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration. 
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. Conclusions: The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type. PMID:27147355
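The first step of the method, dividing list-mode data into uniform 5-s frames, can be sketched as follows (the event timestamps are synthetic):

```python
import numpy as np

def bin_listmode(event_times_s, frame_s=5.0):
    """Group list-mode event timestamps into consecutive fixed-duration
    frames; returns one array of timestamps per frame."""
    t = np.asarray(event_times_s)
    idx = np.floor(t / frame_s).astype(int)
    return [t[idx == k] for k in range(idx.max() + 1)]

events = np.array([0.4, 1.2, 4.9, 5.0, 7.3, 12.2, 14.9])
frames = bin_listmode(events)
```

Summing the images reconstructed from consecutive 5-s frames reproduces the 10-s and 60-s framings compared in the study.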
Novel instrumentation for multifield time-lapse cinemicrography.
Kallman, R F; Blevins, N; Coyne, M A; Prionas, S D
1990-04-01
The most significant feature of the system that is described is its ability to image essentially simultaneously the growth of up to 99 single cells into macroscopic colonies, each in its own microscope field. Operationally, fields are first defined and programmed by a trained observer. All subsequent steps are automatic and under computer control. Salient features of the hardware are stepper motor-controlled movement of the stage and fine adjustment of an inverted microscope, a high-quality 16-mm cine camera with light meter and controls, and a miniature incubator in which cells may be grown under defined conditions directly on the microscope stage. This system, termed MUTLAS, necessitates reordering of the primary images by rephotographing them on fresh film. Software developed for the analysis of cell and colony growth requires frame-by-frame examination of the secondary film and the use of a mouse-driven cursor to trace microscopically visible (4X objective magnification) events.
Triple Helix Formation in a Topologically Controlled DNA Nanosystem.
Yamagata, Yutaro; Emura, Tomoko; Hidaka, Kumi; Sugiyama, Hiroshi; Endo, Masayuki
2016-04-11
In the present study, we demonstrate single-molecule imaging of triple helix formation in DNA nanostructures. The binding of the single-molecule third strand to double-stranded DNA in a DNA origami frame was examined using two different types of triplet base pairs. The target DNA strand and the third strand were incorporated into the DNA frame, and the binding of the third strand was controlled by the formation of Watson-Crick base pairing. Triple helix formation was monitored by observing the structural changes in the incorporated DNA strands. It was also examined using a photocaged third strand wherein the binding of the third strand was directly observed using high-speed atomic force microscopy during photoirradiation. We found that the binding of the third strand could be controlled by regulating duplex formation and the uncaging of the photocaged strands in the designed nanospace. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Snapshot 3D tracking of insulin granules in live cells
NASA Astrophysics Data System (ADS)
Wang, Xiaolei; Huang, Xiang; Gdor, Itay; Daddysman, Matthew; Yi, Hannah; Selewa, Alan; Haunold, Theresa; Hereld, Mark; Scherer, Norbert F.
2018-02-01
Rapid and accurate volumetric imaging remains a challenge, yet has the potential to enhance understanding of cell function. We developed and used a multifocal microscope (MFM) for 3D snapshot imaging to allow 3D tracking of insulin granules labeled with mCherry in MIN6 cells. MFM employs a special diffractive optical element (DOE) to simultaneously image multiple focal planes. This simultaneous acquisition of information determines the 3D location of single objects at a speed limited only by the array detector's frame rate. We validated the accuracy of MFM imaging/tracking with fluorescent beads; the 3D positions and trajectories of single fluorescent beads can be determined accurately over a wide range of spatial and temporal scales. The 3D positions and trajectories of single insulin granules in a 3.2 μm deep volume were determined with image processing that combines 3D deconvolution, shift correction, and finally tracking using the Imaris software package. We find that the motion of the granules is superdiffusive, but less so in 3D than in 2D for cells grown on coverslip surfaces, suggesting an anisotropy in the cytoskeleton (e.g., microtubules and actin).
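The superdiffusion claim rests on the anomalous exponent α of the mean-squared displacement, MSD(τ) ∝ τ^α, with α > 1 indicating superdiffusive motion. A sketch of estimating α from a single 3D track (the trajectory below is synthetic):

```python
import numpy as np

def msd_exponent(track, dt=1.0):
    """Log-log slope of the mean-squared displacement of one trajectory
    of shape (T, 3); alpha > 1 indicates superdiffusive motion."""
    lags = np.arange(1, len(track) // 2)
    msd = np.array([np.mean(((track[l:] - track[:-l]) ** 2).sum(-1))
                    for l in lags])
    alpha, _ = np.polyfit(np.log(lags * dt), np.log(msd), 1)
    return alpha

# A purely ballistic track has alpha = 2; pure diffusion gives alpha = 1.
ballistic = np.outer(np.arange(200), [1.0, 0.5, 0.2])
```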
Computational imaging with a balanced detector.
Soldevila, F; Clemente, P; Tajahuerce, E; Uribe-Patarroyo, N; Andrés, P; Lancis, J
2016-06-29
Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach makes it possible to acquire information even when the power of the parasitic signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.
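The balanced, complementary-pattern measurement can be mimicked numerically: each ±1 pattern row splits into a binary pattern and its complement displayed simultaneously, and the balanced detector reads their difference in a single shot. A sketch with a synthetic 8 × 8 scene (the Hadamard basis and sizes are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 64                                   # an 8 x 8 scene, flattened
H = hadamard(n)
scene = np.random.default_rng(3).uniform(0.0, 1.0, n)

# Each row h of H splits into complementary binary patterns (1+h)/2 and
# (1-h)/2; the balanced detector measures their difference, i.e. h . scene.
measurements = H @ scene

# H is orthogonal up to a factor n, so recovery is a single transform.
recovered = (H.T @ measurements) / n
```

Compressive sensing, as mentioned above, would instead use a subset of the rows and a sparsity-regularized solver to recover the scene from fewer measurements.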
Computational imaging with a balanced detector
NASA Astrophysics Data System (ADS)
Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.
2016-06-01
Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach makes it possible to acquire information even when the power of the parasitic signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.
Computational imaging with a balanced detector
Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.
2016-01-01
Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach makes it possible to acquire information even when the power of the parasitic signal is higher than that of the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media. PMID:27353733
Photon counting phosphorescence lifetime imaging with TimepixCam
Hirvonen, Liisa M.; Fisher-Levine, Merlin; Suhling, Klaus; ...
2017-01-12
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window, and read out by a Timepix ASIC. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting (TCSPC) imaging. We have characterised the photon detection capabilities of this detector system, and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
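With TCSPC data, the maximum-likelihood lifetime of a background-free single-exponential decay is simply the mean photon arrival time relative to the excitation pulse. A sketch on synthetic timestamps (the sample size and lifetime are illustrative, chosen to mirror the ~1 μs bead lifetimes above):

```python
import numpy as np

def mle_lifetime(arrival_times):
    """ML estimate of a single-exponential lifetime from background-free
    photon arrival times: the sample mean."""
    return float(np.mean(arrival_times))

rng = np.random.default_rng(4)
photons_us = rng.exponential(1.0, 200_000)   # ~1 us phosphorescence decay
```

Real data would first require subtracting the instrument response delay and handling background counts, which this sketch omits.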
Kim, Byungyeon; Park, Byungjun; Lee, Seungrag; Won, Youngjae
2016-01-01
We demonstrated GPU-accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method. Our algorithm was verified for various fluorescence lifetimes and photon numbers. The GPU processing time was faster than the physical scanning time for images up to 800 × 800 pixels, and more than 149 times faster than a single-core CPU. The frame rate of our system was demonstrated to be 13 fps for a 200 × 200 pixel image when observing maize vascular tissue. This system can be utilized for observing dynamic biological reactions, medical diagnosis, and real-time industrial inspection. PMID:28018724
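The analog mean-delay method estimates a single-exponential lifetime as the temporal centroid of the measured waveform minus the centroid of the instrument response, avoiding photon-by-photon timing. A sketch on synthetic waveforms (the grid, idealized delta-like IRF, and 2 ns lifetime are illustrative assumptions):

```python
import numpy as np

def amd_lifetime(t, waveform, irf):
    """Analog mean-delay lifetime estimate: difference of the temporal
    centroids of the fluorescence waveform and the instrument response."""
    centroid = lambda w: np.sum(t * w) / np.sum(w)
    return centroid(waveform) - centroid(irf)

t = np.arange(0.0, 40.0, 0.001)           # time grid in ns
irf = np.zeros_like(t); irf[0] = 1.0       # idealized delta-like IRF
decay = np.exp(-t / 2.0)                   # 2 ns lifetime
```

Because the estimate is two weighted sums per pixel, it parallelizes naturally on a GPU, which is what enables the real-time frame rates quoted above.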
A multicolor imaging pyrometer
NASA Technical Reports Server (NTRS)
Frish, Michael B.; Frank, Jonathan H.
1989-01-01
A multicolor imaging pyrometer was designed for accurately and precisely measuring the temperature distribution histories of small moving samples. The device projects six different color images of the sample onto a single charge coupled device array that provides an RS-170 video signal to a computerized frame grabber. The computer automatically selects which one of the six images provides useful data, and converts that information to a temperature map. By measuring the temperature of molten aluminum heated in a kiln, a breadboard version of the device was shown to provide high accuracy in difficult measurement situations. It is expected that this pyrometer will ultimately find application in measuring the temperature of materials undergoing radiant heating in a microgravity acoustic levitation furnace.
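Multicolor pyrometry recovers temperature from intensity ratios because, under the Wien approximation and a graybody assumption, the emissivity cancels between two wavelength channels. A two-color sketch (the wavelengths and test temperature are chosen for illustration, not taken from the instrument):

```python
import math

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_intensity(lam_m, T_K):
    """Wien-approximation spectral radiance (up to a constant factor)."""
    return lam_m ** -5 * math.exp(-C2 / (lam_m * T_K))

def ratio_temperature(lam1_m, lam2_m, ratio):
    """Invert the two-color ratio I(lam1)/I(lam2) for temperature:
    T = C2 (1/lam1 - 1/lam2) / (5 ln(lam2/lam1) - ln ratio)."""
    return C2 * (1 / lam1_m - 1 / lam2_m) / (
        5 * math.log(lam2_m / lam1_m) - math.log(ratio))

# Round trip at 1000 K with 650 nm and 800 nm channels.
r = wien_intensity(650e-9, 1000.0) / wien_intensity(800e-9, 1000.0)
```

With six color channels, as in the instrument above, one would fit all pairwise ratios rather than invert a single pair.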
Photon counting phosphorescence lifetime imaging with TimepixCam.
Hirvonen, Liisa M; Fisher-Levine, Merlin; Suhling, Klaus; Nomerotski, Andrei
2017-01-01
TimepixCam is a novel fast optical imager based on an optimized silicon pixel sensor with a thin entrance window and read out by a Timepix Application Specific Integrated Circuit. The 256 × 256 pixel sensor has a time resolution of 15 ns at a sustained frame rate of 10 Hz. We used this sensor in combination with an image intensifier for wide-field time-correlated single photon counting imaging. We have characterised the photon detection capabilities of this detector system and employed it on a wide-field epifluorescence microscope to map phosphorescence decays of various iridium complexes with lifetimes of about 1 μs in 200 μm diameter polystyrene beads.
Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution
NASA Astrophysics Data System (ADS)
Tian, Yu; Rao, Chang-hui; Wei, Kai
Restricted by observational conditions and hardware, adaptive optics can only partially correct optical images blurred by atmospheric turbulence. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed for restoring high-resolution adaptive optics images. Frame selection means that the degraded (blurred) frames are first screened, and only the selected frames participate in the iterative blind deconvolution, which requires no a priori knowledge beyond a positivity constraint. This method has been applied to the restoration of stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that the method effectively compensates for the residual errors the adaptive optics system leaves in the image, and the restored images approach diffraction-limited quality.
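A minimal sketch of the frame-selection stage, using a gradient-energy sharpness score (a common choice for lucky-imaging-style selection; the paper's exact criterion may differ):

```python
import numpy as np

def sharpness(img):
    """Gradient-energy sharpness score, a common frame-selection metric
    (the paper's exact selection criterion may differ)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.sum(gx ** 2 + gy ** 2))

def select_frames(frames, k):
    """Keep the k sharpest short-exposure frames for the subsequent
    multi-frame blind deconvolution."""
    order = np.argsort([sharpness(f) for f in frames])[::-1]
    return [frames[i] for i in order[:k]]

# Toy example: a crisp checkerboard outranks its locally averaged copy.
crisp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
blurred = (crisp + np.roll(crisp, 1, 0) + np.roll(crisp, 1, 1)
           + np.roll(crisp, (1, 1), (0, 1))) / 4
assert select_frames([blurred, crisp], 1)[0] is crisp
```

Only the selected frames are then handed to the iterative blind deconvolution, so poorly corrected frames cannot drag the restoration off target.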
Giewekemeyer, Klaus; Philipp, Hugh T.; Wilke, Robin N.; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W.; Shanks, Katherine S.; Zozulya, Alexey V.; Salditt, Tim; Gruner, Sol M.; Mancuso, Adrian P.
2014-01-01
Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single photon detection, detecting fluxes exceeding 1 × 10⁸ 8-keV photons pixel⁻¹ s⁻¹, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10¹⁰ photons µm⁻² s⁻¹ within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with very modest attenuation, while 'still' images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and the CDI methodology for reconstructing non-sensitive detector regions, partially also extending the active detector area, are described. PMID:25178008
Automated tracking of whiskers in videos of head fixed rodents.
Clack, Nathan G; O'Connor, Daniel H; Huber, Daniel; Petreanu, Leopoldo; Hires, Andrew; Peron, Simon; Svoboda, Karel; Myers, Eugene W
2012-01-01
We have developed software for fully automated tracking of vibrissae (whiskers) in high-speed videos (>500 Hz) of head-fixed, behaving rodents trimmed to a single row of whiskers. Performance was assessed against a manually curated dataset consisting of 1.32 million video frames comprising 4.5 million whisker traces. The current implementation detects whiskers with a recall of 99.998% and identifies individual whiskers with 99.997% accuracy. The average processing rate for these images was 8 Mpx/s/cpu (2.6 GHz Intel Core2, 2 GB RAM). This translates to 35 processed frames per second for a 640 px×352 px video of 4 whiskers. The speed and accuracy achieved enables quantitative behavioral studies where the analysis of millions of video frames is required. We used the software to analyze the evolving whisking strategies as mice learned a whisker-based detection task over the course of 6 days (8148 trials, 25 million frames) and measure the forces at the sensory follicle that most underlie haptic perception.
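The quoted throughput figures are self-consistent; a two-line check:

```python
# Consistency check of the reported throughput: 8 Mpx/s per CPU core over
# 640 x 352 px frames corresponds to the ~35 frames per second quoted above.
rate_px_per_s = 8e6
frame_px = 640 * 352              # 225,280 pixels per frame
fps = rate_px_per_s / frame_px
print(round(fps, 1))              # 35.5
```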
Feathering effect detection and artifact agglomeration index-based video deinterlacing technique
NASA Astrophysics Data System (ADS)
Martins, André Luis; Rodrigues, Evandro Luis Linhari; de Paiva, Maria Stela Veludo
2018-03-01
Several video deinterlacing techniques have been developed, and each one performs better under certain conditions. Occasionally, even the most modern deinterlacing techniques create frames with worse quality than primitive deinterlacing processes. This paper shows that the final image quality can be improved by combining different types of deinterlacing techniques. The proposed strategy is able to select between two types of deinterlaced frames and, if necessary, make local corrections of the defects. This decision is based on an artifact agglomeration index obtained from a feathering effect detection map. Starting from a deinterlaced frame produced by the "interfield average" method, the defective areas are identified and, if deemed appropriate, replaced by pixels generated through the "edge-based line average" method. Test results show that the proposed technique produces video frames of higher quality than any single deinterlacing technique, by combining the strengths of the intra- and interfield methods.
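Of the two building blocks named above, the interfield average simply averages the same line of the temporally adjacent fields (sharp on static content, feathered on motion), while the edge-based line average (ELA) interpolates a missing line within one field along the direction of smallest luminance difference. A minimal ELA sketch (textbook form; the paper's implementation may add refinements):

```python
import numpy as np

def edge_based_line_average(above, below):
    """Intrafield ELA: fill a missing line pixel-by-pixel, averaging along
    whichever direction (left diagonal, vertical, right diagonal) has the
    smallest luminance difference between the lines above and below."""
    out = np.empty(above.size, dtype=float)
    n = above.size
    for x in range(n):
        best, val = None, None
        for d in (-1, 0, 1):                      # candidate edge directions
            xa, xb = x + d, x - d
            if 0 <= xa < n and 0 <= xb < n:
                diff = abs(float(above[xa]) - float(below[xb]))
                if best is None or diff < best:
                    best, val = diff, (float(above[xa]) + float(below[xb])) / 2
        out[x] = val
    return out

# A diagonal edge: ELA continues the edge instead of blurring across it.
above = np.array([0.0, 0.0, 0.0, 255.0, 255.0])
below = np.array([0.0, 255.0, 255.0, 255.0, 255.0])
print(edge_based_line_average(above, below))   # follows the diagonal edge
```

Plain vertical averaging would produce 127.5 at the transition pixels; following the minimal-difference direction keeps the edge crisp, which is exactly why it is the preferred fallback in the feathered regions.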
Single shot laser speckle based 3D acquisition system for medical applications
NASA Astrophysics Data System (ADS)
Khan, Danish; Shirazi, Muhammad Ayaz; Kim, Min Young
2018-06-01
The state-of-the-art techniques used by medical practitioners to extract the three-dimensional (3D) geometry of different body parts, such as laser line profiling and structured light scanning, require a series of images/frames. Because acquisition is sequential, patient movement during the scanning process often leads to inaccurate measurements. Single-shot structured light techniques are robust to motion, but their prevalent challenges are low point density and algorithmic complexity. In this research, a single-shot 3D measurement system is presented that extracts the 3D point cloud of human skin by projecting a laser speckle pattern and using a single pair of images captured by two synchronized cameras. In contrast to conventional laser speckle 3D measurement systems that establish stereo correspondence by digital correlation of the projected speckle patterns, the proposed system employs the KLT tracking method to locate corresponding points. The resulting 3D point cloud contains no outliers, and sufficient 3D reconstruction quality is achieved. The 3D shape acquisition of human body parts validates the potential application of the proposed system in the medical industry.
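Once speckle correspondences have been located between the two synchronized views (the paper uses KLT tracking; any matcher yields the same geometry), depth for a rectified pair follows from the standard triangulation relation Z = f·B/d. A minimal sketch of that final step:

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Per-point depth for a rectified stereo pair: Z = f * B / disparity."""
    disparity = np.asarray(x_left, float) - np.asarray(x_right, float)
    return focal_px * baseline_m / disparity

# Synthetic check: a point 0.5 m away, seen with f = 800 px and a 10 cm
# baseline, produces a 160 px disparity; the relation inverts it exactly.
f, B, Z = 800.0, 0.10, 0.5
d = f * B / Z                                  # 160 px
print(depth_from_disparity([400.0], [400.0 - d], f, B))  # [0.5]
```

Because every tracked speckle yields one correspondence, the point-cloud density is set by the speckle pattern rather than by a projector stripe count.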
Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images
NASA Astrophysics Data System (ADS)
Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.
2014-12-01
Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle-avoidance stages are essential for a safe CE-3 soft landing, so a precise spacecraft trajectory in these stages is of great significance for verifying the orbital control strategy, optimizing the orbital design, accurately determining the CE-3 landing site, and analyzing the geological background of the landing site. Because these stages last only about 25 s, it is difficult to capture the spacecraft's subtle movement with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the CE-3 trajectory, owing to technical advantages such as independence from lunar gravity field and spacecraft kinetic models, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, spanning about 9 s, using Single Image Space Resection (SISR). The results show that CE-3's subtle movements during the hovering stage are revealed by the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.
A state space based approach to localizing single molecules from multi-emitter images.
Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J
2017-01-28
Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.
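The Hankel/SVD idea behind the method can be illustrated in one dimension: a signal that is a sum of geometric modes has a low-rank Hankel matrix, and the shift-invariance of its left singular vectors yields the system poles (the paper's 2-D, balanced-realization formulation recovers peak locations analogously; this sketch is not their full algorithm):

```python
import numpy as np

def hankel_poles(y, rank):
    """Recover the poles of a sum-of-geometric-modes signal from the SVD of
    its Hankel matrix, via the shift-invariance of the signal subspace."""
    m = len(y) // 2
    H = np.array([y[i:i + m] for i in range(m)])   # square Hankel matrix
    U, s, Vt = np.linalg.svd(H)
    Us = U[:, :rank]
    # Shift-invariance: Us[1:] ≈ Us[:-1] @ A, and eig(A) gives the poles.
    A = np.linalg.pinv(Us[:-1]) @ Us[1:]
    return np.linalg.eigvals(A)

n = np.arange(40)
y = 0.9 ** n + 0.5 * 0.6 ** n                      # two real modes
poles = sorted(hankel_poles(y, rank=2).real)
print([round(p, 3) for p in poles])                # [0.6, 0.9]
```

In the imaging application the recovered pole locations serve as intensity-peak candidates, which are then refined by the maximum likelihood estimator mentioned above.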
Park, Jinhyoung; Hu, Changhong; Shung, K Kirk
2011-12-01
A stand-alone front-end system for high-frequency coded excitation imaging was implemented to achieve a wider dynamic range. The system included an arbitrary waveform amplifier, an arbitrary waveform generator, an analog receiver, a motor position interpreter, a motor controller and power supplies. The digitized arbitrary waveforms at a sampling rate of 150 MHz could be programmed and converted to an analog signal. The pulse was subsequently amplified to excite an ultrasound transducer, and the maximum output voltage level achieved was 120 V(pp). The bandwidth of the arbitrary waveform amplifier was from 1 to 70 MHz. The noise figure of the preamplifier was less than 7.7 dB and the bandwidth was 95 MHz. Phantoms and biological tissues were imaged at a frame rate as high as 68 frames per second (fps) to evaluate the performance of the system. During the measurement, 40-MHz lithium niobate (LiNbO(3)) single-element lightweight (<0.28 g) transducers were utilized. The wire target measurement showed that the -6-dB axial resolution of a chirp-coded excitation was 50 μm and the lateral resolution was 120 μm. The echo signal-to-noise ratios were found to be 54 and 65 dB for the short burst and coded excitation, respectively. The contrast resolution in a sphere phantom study was estimated to be 24 dB for the chirp-coded excitation and 15 dB for the short burst modes. In an in vivo study, zebrafish and mouse hearts were imaged. Boundaries of the zebrafish heart could be differentiated in the image because of the low-noise operation of the implemented system. In mouse heart images, valves and chambers could be readily visualized with the coded excitation.
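The SNR advantage of chirp-coded excitation comes from pulse compression: a long chirp spreads transmit energy over time, and matched filtering on receive compresses the echo back into a short, high-amplitude pulse. A minimal sketch with illustrative parameters (not the system's actual 40-MHz waveform):

```python
import numpy as np

# Linear chirp sweeping 20-60 MHz over 2 us, sampled at 400 MHz.
fs = 400e6
T, f0, f1 = 2e-6, 20e6, 60e6
t = np.arange(int(T * fs)) / fs
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t ** 2))

# Received record: the chirp echo delayed by 3 us inside a longer trace.
delay = int(3e-6 * fs)
echo = np.zeros(4000)
echo[delay:delay + chirp.size] += chirp

# Matched filter (correlation with the transmitted code) compresses the
# long echo into a sharp peak at the true delay.
compressed = np.correlate(echo, chirp, mode="valid")
print(int(np.argmax(compressed)) == delay)   # True
```

The compression gain scales with the time-bandwidth product of the code, which is why the coded mode above gains roughly 11 dB of echo SNR over the short burst.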
Planetary Education and Outreach Using the NOAA Science on a Sphere
NASA Technical Reports Server (NTRS)
Simon-Miller, A. A.; Williams, D. R.; Smith, S. M.; Friedlander, J. S.; Mayo, L. A.; Clark, P. E.; Henderson, M. A.
2011-01-01
Science On a Sphere (SOS) is a large visualization system, developed by the National Oceanic and Atmospheric Administration (NOAA), that uses computers running Red Hat Linux and four video projectors to display animated data onto the outside of a sphere. Said another way, SOS is a stationary globe that can show dynamic, animated images in spherical form. Visualizations of cylindrical data maps show planets, their atmospheres, oceans, and land in very realistic form. Each projector is driven by a separate computer, and a fifth computer controls the operation of the display computers. Each computer is a relatively powerful PC with a high-end graphics card, and the video projectors have native XGA resolution. The projectors are placed at the corners of a 30' x 30' square with a 68" carbon fiber sphere suspended in the center of the square; the equator of the sphere is typically located 86" off the floor. SOS uses common image formats such as JPEG or TIFF in a very specific but simple form: the images are plotted in an equatorial cylindrical equidistant projection, or as it is commonly known, a latitude/longitude grid, where the image is twice as wide as it is high (rectangular). 2048 × 1024 is the minimum usable spatial resolution without noticeable pixelation. Labels and text can be applied within the image, or using a timestamp-like feature within the SOS system software. There are two basic modes of operation for SOS: displaying a single image or an animated sequence of frames. The frame or frames can be set up to rotate or tilt, as in a planetary rotation. Sequences of images that animate through time produce a movie visualization, with or without an overlain soundtrack. After the images are processed, SOS displays them in sequence and plays them like a movie across the entire sphere surface.
Movies can be of any arbitrary length, limited mainly by disk space and can be animated at frame rates up to 30 frames per second. Transitions, special effects, and other computer graphics techniques can be added to a sequence through the use of off-the-shelf software, like Final Cut Pro. However, one drawback is that the Sphere cannot be used in the same manner as a flat movie screen; images cannot be pushed to a "side", a highlighted area must be viewable to all sides of the room simultaneously, and some transitions do not work as well as others. We discuss these issues and workarounds in our poster.
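Since SOS content is just a 2:1 equirectangular (latitude/longitude) image, preparing a frame amounts to resampling data onto a width = 2 × height grid; a sketch of the pixel-center coordinate grid at the minimum quoted size:

```python
import numpy as np

def latlon_grid(height):
    """Pixel-center latitude/longitude axes for a 2:1 equirectangular frame,
    the format SOS expects (e.g. height=1024 for the 2048 x 1024 minimum)."""
    width = 2 * height
    lon = (np.arange(width) + 0.5) / width * 360.0 - 180.0   # -180..180 deg
    lat = 90.0 - (np.arange(height) + 0.5) / height * 180.0  # 90..-90 deg
    return lat, lon

lat, lon = latlon_grid(1024)
print(lat.size, lon.size)   # 1024 2048
```

Any dataset resampled onto this grid (one frame per time step for animations) can be dropped straight into an SOS playlist.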
Redies, Christoph; Groß, Franziska
2013-01-01
Frames provide a visual link between artworks and their surround. We asked how image properties change as an observer zooms out from viewing a painting alone, to viewing the painting with its frame and, finally, the framed painting in its museum environment (museum scene). To address this question, we determined three higher-order image properties that are based on histograms of oriented luminance gradients. First, complexity was measured as the sum of the strengths of all gradients in the image. Second, we determined the self-similarity of histograms of the oriented gradients at different levels of spatial analysis. Third, we analyzed how much gradient strength varied across orientations (anisotropy). Results were obtained for three art museums that exhibited paintings from three major periods of Western art. In all three museums, the mean complexity of the frames was higher than that of the paintings or the museum scenes. Frames thus provide a barrier of complexity between the paintings and their exterior. By contrast, self-similarity and anisotropy values of images of framed paintings were intermediate between the images of the paintings and the museum scenes, i.e., the frames provided a transition between the paintings and their surround. We also observed differences between the three museums that may reflect modified frame usage in different art periods. For example, frames in the museum for 20th century art tended to be smaller and less complex than in the other two museums that exhibit paintings from earlier art periods (13th–18th century and 19th century, respectively). Finally, we found that the three properties did not depend on the type of reproduction of the paintings (photographs in museums, scans from books or images from the Google Art Project). To the best of our knowledge, this study is the first to investigate the relation between frames and paintings by measuring physically defined, higher-order image properties. PMID:24265625
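Minimal versions of two of the measures can be written down directly: complexity as the summed strength of all luminance gradients, and anisotropy as the spread of gradient strength across orientation bins (the paper's HOG-based pipeline is more involved; this is just the core idea):

```python
import numpy as np

def gradient_stats(img, nbins=16):
    """Complexity = total gradient strength; anisotropy = variance of the
    gradient-weighted orientation histogram (normalized)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # orientations in [0, pi)
    complexity = float(mag.sum())
    hist, _ = np.histogram(ang, bins=nbins, range=(0, np.pi), weights=mag)
    anisotropy = float(np.var(hist / hist.sum())) if hist.sum() else 0.0
    return complexity, anisotropy

# A constant horizontal ramp puts all gradient energy in one orientation
# bin, so it scores far higher anisotropy than isotropic white noise.
rng = np.random.default_rng(1)
ramp = np.tile(np.arange(64, dtype=float), (64, 1))
noise = rng.random((64, 64))
assert gradient_stats(ramp)[1] > gradient_stats(noise)[1]
```

On this definition, an ornate frame scores high complexity (many strong gradients), while a scene dominated by one edge direction scores high anisotropy.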
A single frame: imaging live cells twenty-five years ago.
Fink, Rachel
2011-07-01
In the mid-1980s live-cell imaging was changed by the introduction of video techniques, allowing new ways to collect and store data. The increased resolution obtained by manipulating video signals, the ability to use time-lapse videocassette recorders to study events that happen over long time intervals, and the introduction of fluorescent probes and sensitive video cameras opened research avenues previously unavailable. The author gives a personal account of this evolution, focusing on cell migration studies at the Marine Biological Laboratory 25 years ago. Copyright © 2011 Wiley-Liss, Inc.
Imaging spectroscopy using embedded diffractive optical arrays
NASA Astrophysics Data System (ADS)
Hinnrichs, Michele; Hinnrichs, Bradford
2017-09-01
Pacific Advanced Technology (PAT) has developed an infrared hyperspectral camera based on diffractive optic arrays. This approach to hyperspectral imaging has been demonstrated in all three infrared bands: SWIR, MWIR and LWIR. The hyperspectral optical system has been integrated into the cold shield of the sensor, enabling the small size and weight of this infrared hyperspectral sensor. This new and innovative approach to an infrared hyperspectral imaging spectrometer uses micro-optics made up of an area array of diffractive optical elements, where each element is tuned to image a different spectral region onto a common focal plane array. The lenslet array is embedded in the cold shield of the sensor and actuated with a miniature piezo-electric motor. This approach enables rapid infrared spectral imaging, with multiple spectral images collected and processed simultaneously in each frame of the camera. This paper presents our optical-mechanical design approach, which results in an infrared hyperspectral imaging system small enough for a payload on a small satellite, mini-UAV, commercial quadcopter, or man-portable platform. We also describe an application in which this spectral imaging technology is used to quantify the mass and volume flow rates of hydrocarbon gases. The diffractive optical elements used in the lenslet array are blazed gratings, where each lenslet is tuned for a different spectral bandpass. The lenslets are configured in an area array placed a few millimeters above the focal plane and embedded in the cold shield to reduce the background signal normally associated with the optics. The detector array is divided into sub-images, one covered by each lenslet. We have developed various systems using different numbers of lenslets in the area array. The size of the focal plane and the diameter of the lenslet array determine the number of different spectral images collected simultaneously in each frame of the camera.
A 2 x 2 lenslet array images four different spectral images of the scene each frame and, when coupled with a 512 x 512 focal plane array, gives a spatial resolution of 256 x 256 pixels for each spectral image. Another system that we developed uses a 4 x 4 lenslet array on a 1024 x 1024 pixel focal plane array, which gives 16 spectral images of 256 x 256 pixel resolution each frame. This system spans the SWIR and MWIR bands with a single optical array and focal plane array.
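Because each lenslet images the scene onto its own tile of the focal plane, extracting the 16 spectral sub-images from a 4 x 4 array on a 1024 x 1024 sensor is a pure reshape/transpose (illustrative readout step, not the vendor's software):

```python
import numpy as np

def split_subimages(frame, n):
    """Split an (h, w) focal-plane frame into the n*n per-lenslet tiles,
    returned as an (n*n, h//n, w//n) stack in row-major lenslet order."""
    h, w = frame.shape
    tiles = frame.reshape(n, h // n, n, w // n).swapaxes(1, 2)
    return tiles.reshape(n * n, h // n, w // n)

frame = np.arange(1024 * 1024).reshape(1024, 1024)
subs = split_subimages(frame, 4)
print(subs.shape)   # (16, 256, 256)
```

Each slice of `subs` is then one spectral band's image for that camera frame, which is what allows all bands to be collected simultaneously.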
Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom
2018-06-01
Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed image. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.
Point spread function engineering for iris recognition system design.
Ashok, Amit; Neifeld, Mark A
2010-04-01
Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 x 8 reduces the iris-recognition performance by nearly a factor of 4 (on CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves a FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields a FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.
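The 8 × 8 undersampling studied above can be simulated by integrating each 8 × 8 block of the fully sampled image onto one detector pixel, which is a convenient way to generate the degraded measurements for experiments like these (a simplified model; the paper's forward model also includes the engineered PSF and subpixel shifts):

```python
import numpy as np

def undersample(img, k):
    """Simulate a k x k undersampled detector by block-averaging, i.e.
    integrating each k x k patch onto a single coarse pixel."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

iris = np.random.default_rng(2).random((256, 256))
low = undersample(iris, 8)
print(low.shape)   # (32, 32)
```

Collecting several such low-resolution frames with known subpixel shifts is what allows the optimized imager to recover the recognition performance lost to undersampling.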
Single-shot optical sectioning using two-color probes in HiLo fluorescence microscopy.
Muro, Eleonora; Vermeulen, Pierre; Ioannou, Andriani; Skourides, Paris; Dubertret, Benoit; Fragola, Alexandra; Loriette, Vincent
2011-06-08
We describe a wide-field fluorescence microscope setup which combines HiLo microscopy technique with the use of a two-color fluorescent probe. It allows one-shot fluorescence optical sectioning of thick biological moving sample which is illuminated simultaneously with a flat and a structured pattern at two different wavelengths. Both homogenous and structured fluorescence images are spectrally separated at detection and combined similarly with the HiLo microscopy technique. We present optically sectioned full-field images of Xenopus laevis embryos acquired at 25 images/s frame rate. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Unmanned Vehicle Guidance Using Video Camera/Vehicle Model
NASA Technical Reports Server (NTRS)
Sutherland, T.
1999-01-01
A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal size image of 256 x 256 pixels this subtraction can take a large portion of the time between successive frames in standard rate video leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.
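The subtraction stage that was moved into hardware is just a per-pixel difference of consecutive frames; thresholding the absolute difference isolates a changing target against a static background. A software sketch of the operation (illustrative only; the flight algorithm and thresholds are not specified here):

```python
import numpy as np

def frame_difference(prev, curr, threshold=20):
    """Per-pixel absolute difference of consecutive frames, thresholded to a
    binary change mask (computed in int16 to avoid uint8 wrap-around)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

prev = np.zeros((256, 256), dtype=np.uint8)
curr = prev.copy()
curr[100:104, 100:104] = 255            # a small bright target appears
mask = frame_difference(prev, curr)
print(int(mask.sum()))                  # 16 changed pixels
```

For a 256 × 256 frame this is 65,536 subtractions per frame pair, which is why offloading it to hardware frees most of the inter-frame budget for the rest of the guidance algorithm.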
[Improvement of Digital Capsule Endoscopy System and Image Interpolation].
Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai
2016-01-01
Traditional endoscopy capsules collect and transmit analog images, with weak anti-interference ability, low frame rate, and low resolution. This paper presents a new digital-image capsule, which collects and transmits digital images at a frame rate of up to 30 frames/s and a resolution of 400 x 400 pixels. The image is compressed inside the capsule and transmitted outside the capsule for decompression and interpolation. A new interpolation algorithm is proposed, based on the relationship between the image colour planes, to obtain higher-quality colour images. Keywords: capsule endoscopy, digital image, SCCB protocol, image interpolation
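The paper's interpolator exploits correlation between colour planes; as the baseline such methods are compared against, plain bilinear interpolation of a single plane looks like this (a factor-of-2 upscale with edge clamping, written here for illustration):

```python
import numpy as np

def bilinear_upscale2(plane):
    """Bilinear 2x upscale of a single image plane with edge clamping."""
    h, w = plane.shape
    # Source coordinates of each output pixel center, clamped to the image.
    y = np.clip((np.arange(2 * h) + 0.5) / 2 - 0.5, 0, h - 1)
    x = np.clip((np.arange(2 * w) + 0.5) / 2 - 0.5, 0, w - 1)
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = (y - y0)[:, None], (x - x0)[None, :]
    p = plane.astype(float)
    top = p[np.ix_(y0, x0)] * (1 - wx) + p[np.ix_(y0, x1)] * wx
    bot = p[np.ix_(y1, x0)] * (1 - wx) + p[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

small = np.array([[0.0, 10.0], [20.0, 30.0]])
print(bilinear_upscale2(small).shape)   # (4, 4)
```

Interpolating each plane independently like this ignores inter-plane structure, which is the gap the proposed cross-plane algorithm aims to close.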
Experiments on sparsity assisted phase retrieval of phase objects
NASA Astrophysics Data System (ADS)
Gaur, Charu; Lochab, Priyanka; Khare, Kedar
2017-05-01
Iterative phase retrieval algorithms such as the Gerchberg-Saxton method and the Fienup hybrid input-output method are known to suffer from the twin image stagnation problem, particularly when the solution to be recovered is complex valued and has centrosymmetric support. Recently we showed that the twin image stagnation problem can be addressed using image sparsity ideas (Gaur et al 2015 J. Opt. Soc. Am. A 32 1922). In this work we test this sparsity assisted phase retrieval method with experimental single shot Fourier transform intensity data frames corresponding to phase objects displayed on a spatial light modulator. The standard iterative phase retrieval algorithms are combined with an image sparsity based penalty in an adaptive manner. Illustrations for both binary and continuous phase objects are provided. It is observed that image sparsity constraint has an important role to play in obtaining meaningful phase recovery without encountering the well-known stagnation problems. The results are valuable for enabling single shot coherent diffraction imaging of phase objects for applications involving illumination wavelengths over a wide range of electromagnetic spectrum.
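The flavour of sparsity-assisted phase retrieval can be sketched by alternating the Fourier-magnitude projection with a soft-threshold (an l1-type sparsity step) and positivity in the object domain. The paper adapts the sparsity penalty per iteration and combines it with HIO-style updates; this fixed-threshold version only illustrates the structure:

```python
import numpy as np

def sparse_phase_retrieval(magnitude, n_iter=200, shrink=0.04, seed=0):
    """Toy alternating scheme: impose the measured Fourier magnitude, then
    soft-threshold and clip to enforce sparsity and positivity."""
    rng = np.random.default_rng(seed)
    g = rng.random(magnitude.shape)                 # random initial object
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = magnitude * np.exp(1j * np.angle(G))    # keep phase, fix |F|
        g = np.real(np.fft.ifft2(G))
        g = np.maximum(g - shrink * g.max(), 0.0)   # sparsity + positivity
    return g

# Toy object: a few bright pixels on a dark field, measured only through
# the modulus of its 2-D Fourier transform.
truth = np.zeros((32, 32))
truth[5, 7] = truth[20, 15] = truth[12, 28] = 1.0
rec = sparse_phase_retrieval(np.abs(np.fft.fft2(truth)))
print(rec.shape)   # (32, 32)
```

The sparsity step is what breaks the symmetry between the solution and its twin, since the twin superposition is strictly less sparse than either image alone.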
NO PLIF Imaging in the CUBRC 48 Inch Shock Tunnel
NASA Technical Reports Server (NTRS)
Jiang, N.; Bruzzese, J.; Patton, R.; Sutton, J.; Lempert, W.; Miller, J. D.; Meyer, T. R.; Parker, R.; Wadham, T.; Holden, M.
2011-01-01
Nitric Oxide Planar Laser-Induced Fluorescence (NO PLIF) imaging is demonstrated at a 10 kHz repetition rate in the Calspan-University at Buffalo Research Center's (CUBRC) 48-inch Mach 9 hypervelocity shock tunnel using a pulse burst laser-based high frame rate imaging system. Sequences of up to ten images are obtained internal to a supersonic combustor model, located within the shock tunnel, during a single approximately 10-millisecond duration run of the ground test facility. This represents over an order of magnitude improvement in data rate from previous PLIF-based diagnostic approaches. Comparison with a preliminary CFD simulation shows good overall qualitative agreement between the prediction of the mean NO density field and the observed PLIF image intensity, averaged over forty individual images obtained during several facility runs.
Cassini "Noodle" Mosaic of Saturn
2017-07-24
This mosaic of images combines views captured by NASA's Cassini spacecraft as it made the first dive of the mission's Grand Finale on April 26, 2017. It shows a vast swath of Saturn's atmosphere, from the north polar vortex to the boundary of the hexagon-shaped jet stream, to details in bands and swirls at middle latitudes and beyond. The mosaic is a composite of 137 images captured as Cassini made its first dive toward the gap between Saturn and its rings. It is an update to a previously released image product. In the earlier version, the images were presented as individual movie frames, whereas here, they have been combined into a single, continuous mosaic. The mosaic is presented as a still image as well as a video that pans across its length. Imaging scientists referred to this long, narrow mosaic as a "noodle" in planning the image sequence. The first frame of the mosaic is centered on Saturn's north pole, and the last frame is centered on a region at 18 degrees north latitude. During the dive, the spacecraft's altitude above the clouds changed from 45,000 to 3,200 miles (72,400 to 8,374 kilometers), while the image scale changed from 5.4 miles (8.7 kilometers) per pixel to 0.6 mile (1 kilometer) per pixel. The bottom of the mosaic (near the end of the movie) has a curved shape. This is where the spacecraft rotated to point its high-gain antenna in the direction of motion as a protective measure before crossing Saturn's ring plane. The images in this sequence were captured in visible light using the Cassini spacecraft wide-angle camera. The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels. The small image size was chosen in order to allow the camera to take images quickly as Cassini sped over Saturn. These images of the planet's curved surface were projected onto a flat plane before being combined into a mosaic. Each image was mapped in stereographic projection centered at 55 degrees north latitude.
A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21617
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-13
... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-807] Certain Digital Photo Frames and Image Display Devices and Components Thereof; Commission Determination Not To Review an Initial... importation, and the sale within the United States after importation of certain digital photo frames and image...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Shaozhen; Wei, Wei; Hsieh, Bao-Yu
We present single-shot phase-sensitive imaging of propagating mechanical waves within tissue, enabled by an ultrafast optical coherence tomography (OCT) system powered by a 1.628 MHz Fourier domain mode-locked (FDML) swept laser source. We propose a practical strategy for phase-sensitive measurement by comparing the phases between adjacent OCT B-scans, where the B-scan contains a number of A-scans equaling an integer number of FDML buffers. With this approach, we show that micro-strain fields can be mapped with ∼3.0 nm sensitivity at ∼16,000 fps. The system's capabilities are demonstrated on porcine cornea by imaging mechanical wave propagation launched by a pulsed UV laser beam, promising non-contact, real-time, and high-resolution optical coherence elastography.
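The core phase-sensitive measurement, comparing phases between adjacent complex-valued B-scans, reduces to a conjugate product. A minimal sketch (synthetic data, not the authors' acquisition pipeline):

```python
import numpy as np

def interscan_phase(b1, b2):
    """Phase difference between two adjacent complex OCT B-scans via
    the conjugate product, the usual phase-sensitive measurement;
    it avoids unwrapping each scan's phase separately."""
    return np.angle(b2 * np.conj(b1))

# Synthetic check: a B-scan and a copy advanced by a known phase shift
rng = np.random.default_rng(0)
b1 = rng.standard_normal((64, 128)) + 1j * rng.standard_normal((64, 128))
dphi = 0.4  # radians, e.g. from a nanometer-scale axial displacement
b2 = b1 * np.exp(1j * dphi)
est = interscan_phase(b1, b2)
```

The recovered map equals the applied shift everywhere; in practice the displacement sensitivity is set by the SNR-limited phase noise of the system.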
Countermeasures for unintentional and intentional video watermarking attacks
NASA Astrophysics Data System (ADS)
Deguillaume, Frederic; Csurka, Gabriela; Pun, Thierry
2000-05-01
In recent years, the rapidly growing digital multimedia market has revealed an urgent need for effective copyright protection mechanisms. Digital audio, image, and video watermarking has therefore recently become a very active area of research as a solution to this problem. Many important issues have been pointed out, one of them being the robustness to non-intentional and intentional attacks. This paper studies some attacks and proposes countermeasures applied to videos. General attacks are lossy copying/transcoding such as MPEG compression and digital/analog (D/A) conversion, changes of frame rate, changes of display format, and geometrical distortions. More specific attacks are sequence editing, and statistical attacks such as averaging or collusion. The averaging attack consists of locally averaging consecutive frames to cancel the watermark. This attack works well against schemes which embed random independent marks into frames. In the collusion attack the watermark is estimated from single frames (based on image denoising), and averaged over different scenes for better accuracy. The estimated watermark is then subtracted from each frame. Collusion requires that the same mark is embedded into all frames. The proposed countermeasures first ensure robustness to general attacks by spread spectrum encoding in the frequency domain and by the use of an additional template. Secondly, a Bayesian criterion, evaluating the probability of a correctly decoded watermark, is used for rejection of outliers and to implement an algorithm against statistical attacks. The idea is to embed randomly chosen marks, among a finite set of marks, into subsequences of the video which are long enough to resist averaging attacks but short enough to avoid collusion attacks. The Bayesian criterion is needed to select the correct mark at the decoding step. Finally, the paper presents experimental results showing the robustness of the proposed method.
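The averaging attack described in the abstract is easy to demonstrate: sliding-window temporal averaging cancels frame-independent random marks while leaving near-static content intact. A small illustrative sketch (synthetic data; window size is arbitrary):

```python
import numpy as np

def averaging_attack(frames, window=5):
    """Local temporal averaging over a sliding window of consecutive
    frames. Frame-independent zero-mean watermarks average toward
    zero, while (near-static) scene content survives."""
    frames = frames.astype(float)
    n = len(frames)
    half = window // 2
    out = np.empty_like(frames)
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        out[t] = frames[lo:hi].mean(axis=0)
    return out

# A static scene carrying an independent random mark in every frame
rng = np.random.default_rng(0)
scene = rng.random((32, 32)) * 100
marks = rng.standard_normal((20, 32, 32))  # per-frame watermark noise
attacked = averaging_attack(scene + marks)
residual = attacked - scene                # watermark energy left over
```

The residual watermark variance drops roughly by the window size, which is exactly why the paper embeds the same mark across subsequences long enough to survive this attack.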
Sequential detection of web defects
Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.
2001-01-01
A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements is determined to be statistically identical, and a defective indication if the pair of elements is determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of said exemplar frame with corresponding elements of other frames on said web until one of the acceptable or defective indications occurs.
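The three-way accept/reject/keep-looking logic above is a sequential statistical test: evidence from further frames is accumulated until an element is clearly acceptable or clearly defective. A toy sketch (the z-statistic and thresholds are illustrative, not taken from the patent):

```python
import numpy as np

def sequential_decision(obs, exemplar_mean, sigma, t_ok=1.0, t_bad=4.0):
    """Three-way sequential decision for one image element: accumulate
    observations from successive frames until the standardized deviation
    from the exemplar is clearly small ('acceptable') or clearly large
    ('defective'). Thresholds are illustrative."""
    for n in range(1, len(obs) + 1):
        z = abs(obs[:n].mean() - exemplar_mean) / (sigma / np.sqrt(n))
        if z < t_ok:
            return "acceptable", n
        if z > t_bad:
            return "defective", n
    return "undecided", len(obs)

good = np.array([100.4, 99.9, 100.1])      # close to the exemplar
bad = np.array([111.0, 110.5, 110.8])      # clearly off
marginal = np.full(8, 103.0)               # needs several frames
```

A clearly good or clearly bad element is decided on the first frame; the marginal one is resolved only after more frames sharpen the statistic, mirroring the recursive comparison in the patent.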
Stereo and IMU-Assisted Visual Odometry for Small Robots
NASA Technical Reports Server (NTRS)
2012-01-01
This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e., 320 × 240) resolution, or 8 fps at VGA (Video Graphics Array, 640 × 480) resolution, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating-point ARM processors. This is a substantial advancement over previous work, as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
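The disparity computation at the heart of such a system is window-based correspondence search along the epipolar line. A brute-force sum-of-squared-differences sketch (the flight code uses heavily optimized fixed-point DSP kernels, not this loop):

```python
import numpy as np

def disparity_ssd(left, right, max_d=8, win=3):
    """Brute-force block-matching stereo: for each pixel, choose the
    horizontal shift of the right image that minimizes the sum of
    squared differences over a small window."""
    h, w = left.shape
    pad = win // 2
    L = np.pad(left.astype(float), pad)
    R = np.pad(right.astype(float), pad)
    disp = np.zeros((h, w), dtype=int)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            costs = [((patch - R[y:y + win, x - d:x - d + win]) ** 2).sum()
                     for d in range(min(max_d, x) + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic pair: the right view sees the left scene shifted by 2 px
rng = np.random.default_rng(0)
left = rng.random((16, 16))
right = np.roll(left, -2, axis=1)  # right[x] == left[x + 2]
disp = disparity_ssd(left, right)
```

On this textured synthetic pair the interior disparity estimate is exactly the known 2-pixel shift; real implementations add subpixel refinement and left-right consistency checks.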
Pushing the Limit of Infrared Multiphoton Dissociation to Megadalton-Size DNA Ions.
Doussineau, Tristan; Antoine, Rodolphe; Santacreu, Marion; Dugourd, Philippe
2012-08-16
We report the use of infrared multiphoton dissociation (IRMPD) for the determination of relative activation energies for unimolecular dissociation of megadalton DNA ions. Single ions with masses in the megadalton range were stored in an electrostatic ion trap for a few tens of milliseconds and the image current generated by the roundtrips of ions in the trap was recorded. While being trapped, single ions were irradiated by a CO2 laser and fragmented, owing to multiphoton IR activation. The analysis of the single-ion image current during the heating period allows us to measure changes in the charge of the trapped ion. We estimated the activation energy associated with the dissociation of megadalton-size DNA ions within the framework of an Arrhenius-like model by analyzing a large set of individual ions in order to construct a frequency histogram of the dissociation rates for a collection of ions.
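The Arrhenius-like model relates rate constants to an activation energy via k = A·exp(-Ea/(kB·T)). As a simplified illustration of that relation (a generic fit of rates versus temperature, much simpler than the single-ion histogram analysis in the paper):

```python
import numpy as np

def arrhenius_fit(T, k):
    """Least-squares fit of ln k = ln A - Ea/(kB*T); returns (Ea, A).
    Illustrative of the Arrhenius relation only, not the authors'
    single-ion analysis."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(k), 1)
    return -slope * kB, float(np.exp(intercept))

# Recover known parameters from noiseless synthetic rate constants
T = np.array([300.0, 400.0, 500.0, 600.0])
Ea_true, A_true = 1.0e-19, 1.0e12
k = A_true * np.exp(-Ea_true / (1.380649e-23 * T))
Ea_hat, A_hat = arrhenius_fit(T, k)
```

The fit is linear in 1/T, so noiseless synthetic data returns the generating parameters essentially exactly.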
Compressive Testing of Stitched Frame and Stringer Alternate Configurations
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.; Jegley, Dawn C.
2016-01-01
A series of single-frame and single-stringer compression tests were conducted at NASA Langley Research Center on specimens harvested from a large panel built using the Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) concept. Different frame and stringer designs were used in fabrication of the PRSEUS panel. In this report, the details of the experimental testing of single-frame and single-stringer compression specimens are presented, as well as discussions on the performance of the various structural configurations included in the panel.
Imaging of gaseous oxygen through DFB laser illumination
NASA Astrophysics Data System (ADS)
Cocola, L.; Fedel, M.; Tondello, G.; Poletto, L.
2016-05-01
A Tunable Diode Laser Absorption Spectroscopy setup with Wavelength Modulation has been used together with a synchronous sampling imaging sensor to obtain two-dimensional transmission-mode images of oxygen content. Modulated laser light from a 760 nm DFB source has been used to illuminate a scene from the back while image frames were acquired with a high dynamic range camera. Thanks to synchronous timing between the imaging device and the laser light modulation, the traditional lock-in approach used in Wavelength Modulation Spectroscopy was replaced by image processing techniques, and many scanning periods were averaged together to allow resolution of small intensity variations over the already weak absorption signals from the oxygen absorption band. After proper binning and filtering, the time-domain waveform obtained from each pixel in a set of frames representing the wavelength scan was used as the single detector signal in a traditional TDLAS-WMS setup, and processed through a software-defined digital lock-in demodulation and a second-harmonic signal fitting routine. In this way the WMS artifacts of a gas absorption feature were obtained from each pixel together with an intensity normalization parameter, allowing a reconstruction of the oxygen distribution in a two-dimensional scene regardless of the broadband transmitted intensity. As a first demonstration of the effectiveness of this setup, oxygen absorption images of similar containers filled with either oxygen or nitrogen were acquired and processed.
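The software-defined lock-in step applied to each pixel's waveform can be sketched as quadrature mixing at the second harmonic followed by averaging (low-pass filtering). This is a minimal generic 2f lock-in, not the authors' full fitting routine:

```python
import numpy as np

def lockin_2f(sig, f_mod, fs):
    """Minimal software lock-in at the second harmonic: mix the
    per-pixel waveform with quadrature references at 2*f_mod and
    average (low-pass), returning the 2f magnitude used in WMS."""
    t = np.arange(len(sig)) / fs
    ref = 2 * np.pi * 2 * f_mod * t
    i = 2 * np.mean(sig * np.cos(ref))
    q = 2 * np.mean(sig * np.sin(ref))
    return np.hypot(i, q)

# Waveform with DC, a 1f residual, and a 0.3-amplitude 2f component
fs, f_mod, n = 1000.0, 10.0, 1000
t = np.arange(n) / fs
sig = (1.0 + 0.5 * np.cos(2 * np.pi * f_mod * t)
           + 0.3 * np.cos(2 * np.pi * 2 * f_mod * t + 0.7))
r2f = lockin_2f(sig, f_mod, fs)
```

Averaging over an integer number of modulation periods makes the DC and 1f terms vanish exactly, leaving the 2f amplitude regardless of its phase.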
Image resolution enhancement via image restoration using neural network
NASA Astrophysics Data System (ADS)
Zhang, Shuangteng; Lu, Yihong
2011-04-01
Image super-resolution aims to obtain a high-quality image at a resolution that is higher than that of the original coarse one. This paper presents a new neural network-based method for image super-resolution. In this technique, super-resolution is considered as an inverse problem. An observation model that closely follows the physical image acquisition process is established to solve the problem. Based on this model, a cost function is created and minimized by a Hopfield neural network to produce high-resolution images from the corresponding low-resolution ones. Unlike some other single-frame super-resolution techniques, this technique takes into consideration point spread function blurring as well as additive noise, and therefore generates high-resolution images with more preserved or restored image detail. Experimental results demonstrate that the high-resolution images obtained by this technique have very high quality in terms of PSNR and look visually more pleasing.
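The inverse-problem formulation above rests on an observation model of the form y = D B x + n (blur B, then decimation D) and a data-fidelity cost ||y - D B x||². A conceptual sketch minimizing that cost by plain gradient descent (the paper uses a Hopfield network for the minimization; this is only a stand-in to make the model concrete):

```python
import numpy as np

def blur3(x):
    """3x3 box blur with zero padding; the kernel is symmetric, so the
    operator B equals its own adjoint, which keeps the gradient exact."""
    p = np.pad(x, 1)
    h, w = x.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def degrade(x, f=2):
    """Observation model y = D B x: blur, then decimate by factor f."""
    return blur3(x)[::f, ::f]

def super_resolve(y, f=2, n_iter=300, lr=1.0):
    """Minimize ||y - D B x||^2 by gradient descent from an upsampled
    starting guess (noise-free, regularization-free sketch)."""
    x = np.kron(y, np.ones((f, f)))      # pixel-replicated initial guess
    for _ in range(n_iter):
        r = degrade(x, f) - y            # residual in the LR domain
        g = np.zeros_like(x)
        g[::f, ::f] = r                  # D^T r: upsample with zeros
        x -= lr * blur3(g)               # step along B^T D^T r
    return x

y = np.arange(64, dtype=float).reshape(8, 8) / 63.0
x0 = np.kron(y, np.ones((2, 2)))
hr = super_resolve(y)
```

Each iteration provably decreases the data-fidelity cost here (the forward operator has norm at most one), so the refined estimate reproduces the low-resolution observation better than the naive upsampled guess.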
Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit
NASA Technical Reports Server (NTRS)
Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.
2013-01-01
This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.
The ferredoxin-thioredoxin reductase variable subunit gene from Anacystis nidulans.
Szekeres, M; Droux, M; Buchanan, B B
1991-01-01
The ferredoxin-thioredoxin reductase variable subunit gene of Anacystis nidulans was cloned, and its nucleotide sequence was determined. A single-copy 219-bp open reading frame encoded a protein of 73 amino acid residues, with a calculated Mr of 8,400. The monocistronic transcripts were represented in a 400-base and a less abundant 300-base mRNA form. PMID:1705544
Ong, Lee-Ling S; Xinghua Zhang; Kundukad, Binu; Dauwels, Justin; Doyle, Patrick; Asada, H Harry
2016-08-01
An approach to automatically detect bacteria division with temporal models is presented. To understand how bacteria migrate and proliferate to form complex multicellular behaviours such as biofilms, it is desirable to track individual bacteria and detect cell division events. Unlike eukaryotic cells, prokaryotic cells such as bacteria lack distinctive features, making bacterial division difficult to detect in a single image frame. Furthermore, bacteria may detach, migrate close to other bacteria, and orient themselves at an angle to the horizontal plane. Our system trains a hidden conditional random field (HCRF) model from tracked and aligned bacteria division sequences. The HCRF model classifies a set of image frames as division or otherwise. The performance of our HCRF model is compared with a hidden Markov model (HMM). The results show that an HCRF classifier outperforms an HMM classifier. From 2D bright-field microscopy data, it is challenging to separate individual bacteria and associate observations with tracks. Automatic detection of sequences with bacteria division will improve tracking accuracy.
NASA Astrophysics Data System (ADS)
Izadyyazdanabadi, Mohammadhassan; Belykh, Evgenii; Martirosyan, Nikolay; Eschbacher, Jennifer; Nakaji, Peter; Yang, Yezhou; Preul, Mark C.
2017-03-01
Confocal laser endomicroscopy (CLE), although capable of obtaining images at cellular resolution during surgery of brain tumors in real time, creates as many non-diagnostic as diagnostic images. Non-useful images are often distorted due to relative motion between probe and brain or blood artifacts. Many images, however, simply lack diagnostic features immediately informative to the physician. Examining all the hundreds or thousands of images from a single case to discriminate diagnostic images from non-diagnostic ones can be tedious. Providing a real-time diagnostic value assessment of images (fast enough to be used during the surgical acquisition process and accurate enough for the pathologist to rely on) to automatically detect diagnostic frames would streamline the analysis of images and filter useful images for the pathologist/surgeon. We sought to automatically classify images as diagnostic or non-diagnostic. AlexNet, a deep-learning architecture, was used in a 4-fold cross-validation manner. Our dataset includes 16,795 images (8,572 non-diagnostic and 8,223 diagnostic) from 74 CLE-aided brain tumor surgery patients. The ground truth for all the images is provided by the pathologist. Average model accuracy on test data was 91% overall (90.79% accuracy, 90.94% sensitivity, and 90.87% specificity). To evaluate the model reliability we also performed receiver operating characteristic (ROC) analysis, yielding an average area under the ROC curve (AUC) of 0.958. These results demonstrate that a deeply trained AlexNet network can achieve a model that reliably and quickly recognizes diagnostic CLE images.
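The evaluation figures quoted above (accuracy, sensitivity, specificity, AUC) come straight from the binary confusion matrix and the rank statistics of the classifier scores. A small sketch of those standard computations (generic metrics code, not the authors' evaluation harness):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on the positive/diagnostic class)
    and specificity from binary labels."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return ((tp + tn) / len(y_true), tp / (tp + fn), tn / (tn + fp))

def auc(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) identity;
    assumes no tied scores, for brevity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(np.sum(y_true == 1))
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

For example, labels [0, 0, 1, 1] with predictions [0, 1, 1, 1] give 75% accuracy, 100% sensitivity and 50% specificity, and scores [0.1, 0.4, 0.35, 0.8] give an AUC of 0.75.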
NASA Astrophysics Data System (ADS)
Trusiak, Maciej; Micó, Vicente; Patorski, Krzysztof; García-Monreal, Javier; Sluzewski, Lukasz; Ferreira, Carlos
2016-08-01
In this contribution we propose two Hilbert-Huang transform based algorithms for fast and accurate single-shot and two-shot quantitative phase imaging applicable in both on-axis and off-axis configurations. In the first scheme a single fringe pattern containing information about the biological phase sample under study is adaptively pre-filtered using an empirical mode decomposition based approach. It is then phase demodulated by the Hilbert spiral transform aided by principal component analysis for the local fringe orientation estimation. Orientation calculation enables efficient analysis of closed fringes and can be avoided using an arbitrarily phase-shifted two-shot Gram-Schmidt orthonormalization scheme aided by Hilbert-Huang transform pre-filtering. This two-shot approach is a trade-off between single-frame and temporal phase shifting demodulation. The robustness of the proposed techniques is corroborated using experimental digital holographic microscopy studies of polystyrene micro-beads and red blood cells. Both algorithms compare favorably with the temporal phase shifting scheme, which is used as a reference method.
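The two-shot Gram-Schmidt orthonormalization idea can be demonstrated on one-dimensional fringes: remove the DC term, orthogonalize the second frame against the first, and take the pixelwise arctangent. This is a sketch of the generic GS2 scheme with an unknown phase shift, without the HHT pre-filtering the paper adds:

```python
import numpy as np

def gs2_phase(i1, i2):
    """Two-shot Gram-Schmidt phase demodulation: remove DC, project out
    the first pattern's direction from the second, normalize both, and
    take the pixelwise arctangent of the orthonormalized pair."""
    a = i1 - i1.mean()
    b = i2 - i2.mean()
    a_hat = a / np.linalg.norm(a)
    b = b - np.sum(b * a_hat) * a_hat   # remove the component along a
    b_hat = b / np.linalg.norm(b)
    return np.arctan2(b_hat, a_hat)

# Two fringe patterns with an arbitrary, unknown phase shift
x = np.linspace(0, 2 * np.pi * 10, 1000, endpoint=False)
phi = x                                  # true phase ramp
i1 = 5.0 + np.cos(phi)
i2 = 5.0 + np.cos(phi + 1.3)             # shift of 1.3 rad, not used
rec = gs2_phase(i1, i2)
```

Orthogonalization strips the cosine component from the second frame, leaving a sine-quadrature pair, so the recovered phase matches the true ramp up to sign and wrapping even though the shift between the two frames was never specified.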
Optical Navigation Image of Ganymede
NASA Technical Reports Server (NTRS)
1996-01-01
NASA's Galileo spacecraft, now in orbit around Jupiter, returned this optical navigation image June 3, 1996, showing that the spacecraft is accurately targeted for its first flyby of the giant moon Ganymede on June 27. The missing data in the frame is the result of a special editing feature recently added to the spacecraft's computer to transmit navigation images more quickly. This is the first in a series of optical navigation frames, highly edited onboard the spacecraft, that will be used to fine-tune the spacecraft's trajectory as Galileo approaches Ganymede. The image, used for navigation purposes only, is the product of new computer processing capabilities on the spacecraft that allow Galileo to send back only the information required to show the spacecraft is properly targeted and that Ganymede is where navigators calculate it to be. 'This navigation image is totally different from the pictures we'll be taking for scientific study of Ganymede when we get close to it later this month,' said Galileo Project Scientist Dr. Torrence Johnson. On June 27, Galileo will fly just 844 kilometers (524 miles) above Ganymede and return the most detailed, full-frame, high-resolution images and other measurements of the satellite ever obtained. Icy Ganymede is the largest moon in the solar system and three-quarters the size of Mars. It is one of the four large Jovian moons that are special targets of study for the Galileo mission. Of the more than 5 million bits contained in a single image, Galileo performed on-board editing to send back a mere 24,000 bits containing the essential information needed to assure proper targeting. Only the light-to-dark transitions of the crescent Ganymede and reference star locations were transmitted to Earth. The navigation image was taken from a distance of 9.8 million kilometers (6.1 million miles). On June 27th, the spacecraft will be 10,000 times closer to Ganymede.
Dangerous gas detection based on infrared video
NASA Astrophysics Data System (ADS)
Ding, Kang; Hong, Hanyu; Huang, Likun
2018-03-01
As gas leak infrared imaging detection technology has significant advantages of high efficiency and remote imaging detection, in order to enhance the detail perception of observers and equivalently improve the detection limit, we propose a new type of gas leak infrared image detection method, which combines a background difference method and a multi-frame interval difference method. Compared to traditional frame-difference methods, the multi-frame interval difference method we propose can extract a more complete target image. By fusing the background difference image and the multi-frame interval difference image, we can accumulate the information of the infrared target image of the gas leak in many aspects. The experiments demonstrate that the completeness of the gas leakage trace information is enhanced significantly, and a real-time detection effect can be achieved.
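The fusion of a background difference with a multi-frame interval difference can be sketched in a few lines. The interval length, fusion weight, and threshold below are illustrative placeholders, not values from the paper:

```python
import numpy as np

def detect_plume(frames, background, interval=3, alpha=0.5, thresh=4.0):
    """Fuse a background-difference map with a multi-frame interval
    difference (current frame vs. the frame `interval` steps earlier),
    then threshold the fused map. All parameters are illustrative."""
    cur = frames[-1].astype(float)
    bg_diff = np.abs(cur - background)
    iv_diff = np.abs(cur - frames[-1 - interval].astype(float))
    fused = alpha * bg_diff + (1 - alpha) * iv_diff
    return fused > thresh, fused

# A static background with a synthetic 'plume' in the latest frame
background = np.full((24, 24), 10.0)
frames = np.tile(background, (6, 1, 1))
frames[-1, 8:14, 8:14] += 20.0            # leak appears in last frame
mask, fused = detect_plume(frames, background)
```

Both difference maps reinforce each other inside the plume region, so the fused map cleanly separates the leak from the static scene; on real imagery, morphological filtering would typically follow the threshold.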
Full-frame video stabilization with motion inpainting.
Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung
2006-07-01
Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.
Distance-based over-segmentation for single-frame RGB-D images
NASA Astrophysics Data System (ADS)
Fang, Zhuoqun; Wu, Chengdong; Chen, Dongyue; Jia, Tong; Yu, Xiaosheng; Zhang, Shihong; Qi, Erzhao
2017-11-01
Over-segmentation, known as super-pixels, is a widely used preprocessing step in segmentation algorithms. An over-segmentation algorithm segments an image into regions of perceptually similar pixels, but performs poorly when based only on color images in indoor environments. Fortunately, RGB-D images can improve performance on images of indoor scenes. In order to segment RGB-D images into super-pixels effectively, we propose a novel algorithm, DBOS (Distance-Based Over-Segmentation), which realizes full coverage of super-pixels on the image. DBOS fills the holes in depth images to fully utilize the depth information, and applies a SLIC-like framework for fast running. Additionally, depth features such as plane projection distance are extracted to compute the distance measure at the core of SLIC-like frameworks. Experiments on RGB-D images of the NYU Depth V2 dataset demonstrate that DBOS outperforms state-of-the-art methods in quality while maintaining speeds comparable to them.
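In SLIC-like frameworks, the distance between a pixel and a cluster center is a weighted combination of color and spatial terms; a depth-aware variant in the spirit of DBOS simply adds a depth term. A sketch with illustrative weights (not the paper's exact plane-projection formulation):

```python
import numpy as np

def slic_depth_distance(px, center, m=10.0, S=16.0, w_depth=2.0):
    """SLIC-style combined distance with an extra depth term, in the
    spirit of DBOS. px and center are (L, a, b, x, y, depth) feature
    vectors; m, S, and w_depth are illustrative weights."""
    p, c = np.asarray(px, float), np.asarray(center, float)
    d_color = np.linalg.norm(p[:3] - c[:3])   # CIELAB color distance
    d_xy = np.linalg.norm(p[3:5] - c[3:5])    # image-plane distance
    d_depth = abs(p[5] - c[5])                # depth discrepancy
    return np.sqrt((d_color / m) ** 2 + (d_xy / S) ** 2
                   + (w_depth * d_depth) ** 2)

center = [50.0, 0.0, 0.0, 32.0, 32.0, 1.5]
same_plane = [52.0, 1.0, 0.0, 36.0, 30.0, 1.5]
off_plane = [52.0, 1.0, 0.0, 36.0, 30.0, 2.5]
```

Two pixels with identical color and position but different depths get different distances, which is what lets the over-segmentation respect depth discontinuities that color alone misses.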
Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan
2017-04-06
An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
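The joint Poisson log-likelihood over multiple frames described above leads, via expectation-maximization, to a multi-frame Richardson-Lucy multiplicative update. A regularization-free sketch of that core update (no frame selection, no priors, circular-convolution boundary handling; the paper's full algorithm adds image regularization and a PSF estimation model):

```python
import numpy as np

def circular_psf(shape, sigma):
    """Symmetric Gaussian PSF on the full grid, centered at index (0,0)
    with wraparound, convenient for FFT-based circular convolution."""
    h, w = shape
    y = np.minimum(np.arange(h), h - np.arange(h))[:, None]
    x = np.minimum(np.arange(w), w - np.arange(w))[None, :]
    psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def conv(img, psf):
    """Circular convolution via FFT."""
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)).real

def rl_multiframe(frames, psfs, n_iter=50):
    """Multi-frame Richardson-Lucy: the multiplicative update that
    increases the joint Poisson log-likelihood across all frames."""
    x = np.full_like(frames[0], float(frames[0].mean()))
    for _ in range(n_iter):
        ratio = np.zeros_like(x)
        for y, p in zip(frames, psfs):
            est = conv(x, p) + 1e-12        # predicted frame, H_k x
            ratio += conv(y / est, p)       # PSF symmetric => self-adjoint
        x *= ratio / len(frames)
    return x

truth = np.ones((32, 32))
truth[16, 16] += 100.0                      # point source on a pedestal
psfs = [circular_psf(truth.shape, s) for s in (1.0, 2.0)]
frames = [conv(truth, p) for p in psfs]
rec = rl_multiframe(frames, psfs)
```

Because the PSFs are normalized, the update conserves total flux while concentrating the point source's energy back toward its true location, which is the qualitative behavior the paper's restoration builds on.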
2015-07-08
This single frame from a four-frame movie shows New Horizons' final deep search for hazardous material around Pluto, obtained on July 1, 2015. These data allow a highly sensitive search for any new moons. The images were taken with the spacecraft's Long Range Reconnaissance Imager (LORRI) over a 100-minute period, and were the final observations in the series of dedicated searches for hazards in the Pluto system which began on May 11. The images show all five known satellites of Pluto moving in their orbits around the dwarf planet, but analysis of these data has so far not revealed the existence of any additional moons. This means that any undiscovered Plutonian moons further than a few thousand miles from Pluto must be smaller than about 1 mile (1.6 kilometers) in diameter, if their surfaces have similar brightness to Pluto's big moon Charon. For comparison, Pluto's faintest known moon, Styx, which is conspicuous in the lower left quadrant of these images, is about 4 miles (7 kilometers) across, assuming the same surface brightness. The absence of additional moons, and also the absence of detectable rings in the hazard search data, imply that the spacecraft is very unlikely to be damaged by collisions with rings, or dust particles ejected from moons, during its high-speed passage through the Pluto system. The four movie frames were taken at 16:28, 16:38, 17:52, and 18:04 UTC on July 1, from a range of 9.4 million miles (15.2 million kilometers). Each frame is a mosaic of four sets of overlapping images, with a total exposure time of 120 seconds. The images have been heavily processed to remove the glare of Pluto and Charon, and the dense background of stars, though blemishes remain at the locations of many of the brighter stars. The "tails" extending to the right or downward from Pluto and Charon are camera artifacts caused by the extreme overexposure of both objects. 
Pluto and its five moons Charon, Styx, Nix, Kerberos and Hydra are identified by their initials, and their orbits around the center of gravity of the system (which is located just outside Pluto itself) are also shown. http://photojournal.jpl.nasa.gov/catalog/PIA19701
Ruschin, Mark; Komljenovic, Philip T; Ansell, Steve; Ménard, Cynthia; Bootsma, Gregory; Cho, Young-Bin; Chung, Caroline; Jaffray, David
2013-01-01
Image guidance has improved the precision of fractionated radiation treatment delivery on linear accelerators. Precise radiation delivery is particularly critical when high doses are delivered to complex shapes with steep dose gradients near critical structures, as is the case for intracranial radiosurgery. To reduce potential geometric uncertainties, a cone beam computed tomography (CT) image guidance system was developed in-house to generate high-resolution images of the head at the time of treatment, using a dedicated radiosurgery unit. The performance and initial clinical use of this imaging system are described. A kilovoltage cone beam CT system was integrated with a Leksell Gamma Knife Perfexion radiosurgery unit. The X-ray tube and flat-panel detector are mounted on a translational arm, which is parked above the treatment unit when not in use. Upon descent, a rotational axis provides 210° of rotation for cone beam CT scans. Mechanical integrity of the system was evaluated over a 6-month period. Subsequent clinical commissioning included end-to-end testing of targeting performance and subjective image quality performance in phantoms. The system has been used to image 2 patients, 1 of whom received single-fraction radiosurgery and 1 of whom received 3 fractions using a relocatable head frame. Images of phantoms demonstrated soft tissue contrast visibility and submillimeter spatial resolution. A contrast difference of 35 HU was easily detected at a calibration dose of 1.2 cGy (center of head phantom). The shape of the mechanical flex vs scan angle was highly reproducible and exhibited <0.2 mm peak-to-peak variation. With a 0.5-mm voxel pitch, the maximum targeting error was 0.4 mm. Images of 2 patients were analyzed offline and submillimeter agreement was confirmed with the conventional frame. A cone beam CT image guidance system was successfully adapted to a radiosurgery unit. The system is capable of producing high-resolution images of bone and soft tissue. 
The system is in clinical use and provides excellent image guidance without invasive frames.
NASA Technical Reports Server (NTRS)
Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)
2011-01-01
A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.
Global view of Venus from Magellan, Pioneer, and Venera data
1991-10-29
This global view of Venus, centered at 270 degrees east longitude, is a compilation of data from several sources. Magellan synthetic aperture radar mosaics from the first cycle of Magellan mapping are mapped onto a computer-simulated globe to create the image. Data gaps are filled with Pioneer-Venus orbiter data, or a constant mid-range value. Simulated color is used to enhance small-scale structure. The simulated hues are based on color images recorded by the Soviet Venera 13 and 14 spacecraft. The image was produced at the Jet Propulsion Laboratory (JPL) Multimission Image Processing Laboratory and is a single frame from a video released at the JPL news conference, 10-29-91. View provided by JPL with alternate number P-39225 MGN81.
High frame-rate en face optical coherence tomography system using KTN optical beam deflector
NASA Astrophysics Data System (ADS)
Ohmi, Masato; Shinya, Yusuke; Imai, Tadayuki; Toyoda, Seiji; Kobayashi, Junya; Sakamoto, Tadashi
2017-02-01
We developed a high frame-rate en face optical coherence tomography (OCT) system using a KTa1-xNbxO3 (KTN) optical beam deflector. In the imaging system, the fast scanning was performed at 200 kHz by the KTN optical beam deflector, while the slow scanning was performed at 800 Hz by the galvanometer mirror. As a preliminary experiment, we succeeded in obtaining en face OCT images of a human fingerprint at a frame rate of 800 fps. This is the highest frame rate obtained using time-domain (TD) en face OCT imaging. A 3D-OCT image of a sweat gland was also obtained with our imaging system.
Motion compensation for fully 4D PET reconstruction using PET superset data
NASA Astrophysics Data System (ADS)
Verhaeghe, J.; Gravel, P.; Mio, R.; Fukasawa, R.; Rosa-Neto, P.; Soucy, J.-P.; Thompson, C. J.; Reader, A. J.
2010-07-01
Fully 4D PET image reconstruction is receiving increasing research interest due to its ability to significantly reduce spatiotemporal noise in dynamic PET imaging. However, thus far in the literature, the important issue of correcting for subject head motion has not been considered. Specifically, as a direct consequence of using temporally extensive basis functions, a single instance of movement propagates to impair the reconstruction of multiple time frames, even if no further movement occurs in those frames. Existing 3D motion compensation strategies have not yet been adapted to 4D reconstruction, and as such the benefits of 4D algorithms have not yet been reaped in a clinical setting where head movement undoubtedly occurs. This work addresses this need, developing a motion compensation method suitable for fully 4D reconstruction methods which exploits an optical tracking system to measure the head motion along with PET superset data to store the motion compensated data. List-mode events are histogrammed as PET superset data according to the measured motion, and a specially devised normalization scheme for motion compensated reconstruction from the superset data is required. This work proceeds to propose the corresponding time-dependent normalization modifications which are required for a major class of fully 4D image reconstruction algorithms (those which use linear combinations of temporal basis functions). Using realistically simulated as well as real high-resolution PET data from the HRRT, we demonstrate both the detrimental impact of subject head motion in fully 4D PET reconstruction and the efficacy of our proposed modifications to 4D algorithms. Benefits are shown both for the individual PET image frames as well as for parametric images of tracer uptake and volume of distribution for 18F-FDG obtained from Patlak analysis.
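The Patlak analysis mentioned at the end of the abstract reduces to a straight-line fit in transformed coordinates. The following is a generic sketch of that graphical method, not the authors' implementation; all function and variable names are ours:

```python
import numpy as np

def patlak_fit(ct, cp, t, t_star_idx):
    """Patlak graphical analysis for an irreversible tracer such as 18F-FDG:
    for t > t*,  C(t)/Cp(t) = Ki * (integral of Cp)/Cp(t) + V0,
    so the influx rate Ki is the slope of a straight-line fit and V0 is
    the intercept (related to the volume of distribution)."""
    dt = np.diff(t)
    # cumulative trapezoidal integral of the plasma input function Cp
    cum_cp = np.concatenate([[0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * dt)])
    x = cum_cp / cp          # "stretched time" axis
    y = ct / cp              # normalized tissue activity
    Ki, V0 = np.polyfit(x[t_star_idx:], y[t_star_idx:], 1)
    return Ki, V0
```

With a constant input function the transformed axis equals time, so a linear tissue curve recovers Ki and V0 exactly, which makes the sketch easy to sanity-check.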
NASA Astrophysics Data System (ADS)
Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.
2017-07-01
We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
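The average-then-deconvolve stage of the pipeline can be sketched with a standard frequency-domain Wiener filter. This is a minimal illustration, not the paper's method: the parametric, registration-aware PSF model is replaced by a given `psf` array, and `nsr` is an assumed noise-to-signal tuning constant:

```python
import numpy as np

def wiener_deconvolve(avg_img, psf, nsr=0.01):
    """Wiener deconvolution: apply H* / (|H|^2 + NSR) in the frequency
    domain, where H is the transfer function of the blur PSF and NSR is
    an assumed noise-to-signal ratio (a tuning parameter)."""
    # ifftshift moves the PSF center to the (0, 0) corner before the FFT
    H = np.fft.fft2(np.fft.ifftshift(psf), s=avg_img.shape)
    G = np.fft.fft2(avg_img)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

With a delta-function PSF and zero NSR the filter is the identity, which provides a quick correctness check.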
Evaluation of a high framerate multi-exposure laser speckle contrast imaging setup
NASA Astrophysics Data System (ADS)
Hultman, Martin; Fredriksson, Ingemar; Strömberg, Tomas; Larsson, Marcus
2018-02-01
We present a first evaluation of a new multi-exposure laser speckle contrast imaging (MELSCI) system for assessing spatial variations in the microcirculatory perfusion. The MELSCI system is based on a 1000 frames per second 1-megapixel camera connected to a field-programmable gate array (FPGA) capable of producing MELSCI data in real time. The imaging system is evaluated against a single-point laser Doppler flowmetry (LDF) system during occlusion-release provocations of the arm in five subjects. Perfusion is calculated from MELSCI data using current state-of-the-art inverse models. The analysis displayed a good agreement between measured and modeled data, with an average error below 6%. This strongly indicates that the applied model is capable of accurately describing the MELSCI data and that the acquired data is of high quality. Comparing readings from the occlusion-release provocation showed that the MELSCI perfusion was significantly correlated (R=0.83) to the single-point LDF perfusion, clearly outperforming perfusion estimations based on a single exposure time. We conclude that the MELSCI system provides blood flow images of enhanced quality, taking us one step closer to a system that can accurately monitor dynamic changes in skin perfusion over a large area in real time.
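The quantity underlying speckle contrast imaging is the local contrast K = σ/μ, which drops as exposure time grows. A minimal numpy sketch of computing contrast maps from synthetic multi-exposure images follows; the real FPGA pipeline and the inverse perfusion models are more involved, and the window size is illustrative:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def contrast_map(image, win=7):
    """Spatial speckle contrast K = sigma/mu over win x win neighborhoods
    (valid region only, i.e. the map is slightly smaller than the image)."""
    patches = sliding_window_view(image, (win, win))
    return patches.std(axis=(-2, -1)) / patches.mean(axis=(-2, -1))

def synthetic_exposures(frames, counts):
    """Average the first n of the 1000 fps frames to emulate longer
    exposure times, as a multi-exposure (MELSCI) scheme requires."""
    return {n: frames[:n].mean(axis=0) for n in counts}
```

Averaging n statistically independent frames reduces the standard deviation roughly as 1/sqrt(n) while the mean is unchanged, so the contrast of the synthetic long exposure should be clearly lower than that of a single frame.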
NASA Astrophysics Data System (ADS)
Osada, Masakazu; Tsukui, Hideki
2002-09-01
Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film-handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because they produce large amounts of data, such as motion (cine) images at 30 frames per second, 640 x 480 in resolution, with 24-bit color, while requiring sufficient image quality for clinical review. We have developed a PACS that is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression rate for clinical image review. Results show that clinicians require the capability for frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns which may appear in only one frame. In order to satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between original and 1:20 lossy compressed JPEG images, although the difference was judged acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.
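The storage burden that motivates compression here follows directly from the stated cine format. A quick check of the numbers (the byte arithmetic is ours, using the resolution, bit depth, and frame rate from the abstract):

```python
# Uncompressed cine data rate for 640 x 480, 24-bit color at 30 fps
w, h, bits, fps = 640, 480, 24, 30
frame_bytes = w * h * bits // 8            # bytes per frame
rate = frame_bytes * fps                   # bytes per second, uncompressed
# Rates after the subjectively acceptable 1:10 and 1:20 compression
compressed = {ratio: rate // ratio for ratio in (10, 20)}
print(rate / 1e6, {k: v / 1e6 for k, v in compressed.items()})
```

That is roughly 27.6 MB/s uncompressed, dropping to about 2.8 and 1.4 MB/s at 1:10 and 1:20 respectively, which explains why lossy JPEG at those ratios is attractive.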
Modified Mean-Pyramid Coding Scheme
NASA Technical Reports Server (NTRS)
Cheung, Kar-Ming; Romer, Richard
1996-01-01
Modified mean-pyramid coding scheme requires transmission of slightly less data. Data-expansion factor reduced from 1/3 to 1/12. Schemes for progressive transmission send image data in a sequence of frames in such a way that a coarse version of the image is reconstructed after receipt of the first frame and an increasingly refined version of the image is reconstructed after receipt of each subsequent frame.
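A plain (unmodified) mean pyramid of the kind this scheme refines can be sketched as repeated 2x2 block averaging, with the coarsest level transmitted first. This is a generic illustration of mean-pyramid construction, not the modified coding scheme itself:

```python
import numpy as np

def mean_pyramid(img, levels):
    """Build a mean pyramid: each level is the 2x2 block average of the
    previous one. Progressive transmission sends the coarsest level first,
    then refines it with each subsequent level."""
    pyr = [img.astype(float)]
    for _ in range(levels - 1):
        a = pyr[-1]
        # crop to even dimensions so 2x2 blocks tile exactly
        a = a[: a.shape[0] // 2 * 2, : a.shape[1] // 2 * 2]
        pyr.append((a[0::2, 0::2] + a[0::2, 1::2]
                    + a[1::2, 0::2] + a[1::2, 1::2]) / 4.0)
    return pyr[::-1]  # coarsest first
```

Each coarse pixel is exactly the mean of the four pixels it covers, which is what lets a receiver display a low-resolution preview before the full image arrives.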
Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography
NASA Astrophysics Data System (ADS)
Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy
2014-09-01
A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has recently been fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm2) stitched CMOS chip of 1100×1100 pixel count, with a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first-generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s, and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns wide proton beam pulses, and record radiographic movies of up to ~1000 frames, typically 3 to 30 minutes in duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.
GPU-Accelerated Hybrid Algorithm for 3D Localization of Fluorescent Emitters in Dense Clusters
NASA Astrophysics Data System (ADS)
Jung, Yoon; Barsic, Anthony; Piestun, Rafael; Fakhri, Nikta
In stochastic switching-based super-resolution imaging, a random subset of fluorescent emitters is imaged and localized in each frame to construct a single high-resolution image. However, the condition of non-overlapping point spread functions (PSFs) imposes constraints on experimental parameters. Recent developments in post-processing methods, such as dictionary-based sparse support recovery using compressive sensing, have shown up to an order of magnitude higher recall rate than single-emitter fitting methods. However, the computational complexity of this approach scales poorly with the grid size and requires a long runtime. Here, we introduce a fast and accurate compressive sensing algorithm for localizing fluorescent emitters at high density in 3D, namely sparse support recovery using Orthogonal Matching Pursuit (OMP) and the L1-Homotopy algorithm for reconstructing STORM images (SOLAR STORM). SOLAR STORM combines OMP with L1-Homotopy to reduce computational complexity, which is further accelerated by parallel implementation on GPUs. This method can be used in a variety of experimental conditions for both in vitro and live-cell fluorescence imaging.
Adaptive Markov Random Fields for Example-Based Super-resolution of Faces
NASA Astrophysics Data System (ADS)
Stephenson, Todd A.; Chen, Tsuhan
2006-12-01
Image enhancement of low-resolution images can be done through methods such as interpolation, super-resolution using multiple video frames, and example-based super-resolution. Example-based super-resolution, in particular, is suited to images that have a strong prior (for those frameworks that work on only a single image, it is more like image restoration than traditional, multiframe super-resolution). For example, hallucination and Markov random field (MRF) methods use examples drawn from the same domain as the image being enhanced to determine what the missing high-frequency information is likely to be. We propose to use even stronger prior information by extending MRF-based super-resolution to use adaptive observation and transition functions, that is, to make these functions region-dependent. We show with face images how we can adapt the modeling for each image patch so as to improve the resolution.
All-optical framing photography based on hyperspectral imaging method
NASA Astrophysics Data System (ADS)
Liu, Shouxian; Li, Yu; Li, Zeren; Chen, Guanghua; Peng, Qixian; Lei, Jiangbo; Liu, Jun; Yuan, Shuyun
2017-02-01
We propose and experimentally demonstrate a new all-optical framing photography that uses hyperspectral imaging methods to record a chirped pulse's temporal-spatial information. The proposed method consists of three parts: (1) a chirped laser pulse encodes temporal phenomena onto wavelengths; (2) a lenslet array generates a series of integral pupil images; (3) a dispersive device disperses the integral images into the void space of the image sensor. Compared with Ultrafast All-Optical Framing Technology (Daniel Frayer, 2013, 2014) and Sequentially Timed All-Optical Mapping Photography (Nakagawa 2014, 2015), our method makes it convenient to adjust the temporal resolution and to flexibly increase the number of frames. Theoretically, the temporal resolution of our scheme is limited by the amount of dispersion that is added to a Fourier-transform-limited femtosecond laser pulse. Correspondingly, the optimal number of frames is decided by the ratio of the observational time window to the temporal resolution, and the effective pixels of each frame are mostly limited by the dimensions M×N of the lenslet array. For example, if a 40 fs Fourier-transform-limited femtosecond pulse is stretched to 10 ps, a CCD camera with 2048×3072 pixels can record 15 framing images with a temporal resolution of 650 fs and an image size of 100×100 pixels. Owing to its spectrometer structure, our recording part has another advantage: not only amplitude images but also frequency-domain interferograms can be imaged. Therefore, it is comparatively easy to capture fast dynamics in the refractive-index change of materials. A further dynamic experiment is being conducted.
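The frame-count example in the abstract follows directly from the stated ratio of observational window to temporal resolution. A quick check of the numbers (the variable names are ours):

```python
# Worked numbers from the example in the text
window_fs = 10e3          # 10 ps chirped-pulse observation window, in fs
resolution_fs = 650.0     # stated temporal resolution, in fs
n_frames = int(window_fs / resolution_fs)   # optimal number of frames

# The 2048 x 3072 sensor leaves ample pixel budget per frame; the stated
# 100 x 100 effective frame size is instead limited by the lenslet array.
sensor_px = 2048 * 3072
budget_per_frame = sensor_px // n_frames
```

The ratio 10 ps / 650 fs is about 15.4, so 15 full frames fit in the window, matching the number quoted in the text.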
Towards accurate localization: long- and short-term correlation filters for tracking
NASA Astrophysics Data System (ADS)
Li, Minglangjun; Tian, Chunna
2018-04-01
Visual tracking is a challenging problem, especially using a single model. In this paper, we propose a discriminative correlation filter (DCF) based tracking approach that exploits both the long-term and short-term information of the target, named LSTDCF, to improve the tracking performance. In addition to a long-term filter learned through the whole sequence, a short-term filter is trained using only features extracted from the most recent frames. The long-term filter tends to capture more semantics of the target as more frames are used for training. However, since the target may undergo large appearance changes, features extracted around the target in non-recent frames prevent the long-term filter from locating the target in the current frame accurately. In contrast, the short-term filter learns more spatial details of the target from recent frames but gets over-fitting easily. Thus the short-term filter is less robust to cluttered background and prone to drift. We take advantage of both filters and fuse their response maps to make the final estimation. We evaluate our approach on a widely-used benchmark with 100 image sequences and achieve state-of-the-art results.
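The fusion step can be sketched as a weighted combination of the two response maps followed by a peak search. The equal weighting below is illustrative; the paper's exact fusion rule may differ:

```python
import numpy as np

def fuse_responses(r_long, r_short, w=0.5):
    """Fuse long-term and short-term DCF response maps with a fixed
    weight w (hypothetical; the actual rule may be adaptive) and return
    the peak location, i.e. the estimated target position."""
    r = w * r_long + (1.0 - w) * r_short
    return np.unravel_index(np.argmax(r), r.shape)
```

When one filter responds much more confidently than the other, the fused peak follows the confident filter, which is the intended complementary behavior.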
2016-09-15
NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 355 kilometers per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047
Real-time imaging of methane gas leaks using a single-pixel camera.
Gibson, Graham M; Sun, Baoqing; Edgar, Matthew P; Phillips, David B; Hempler, Nils; Maker, Gareth T; Malcolm, Graeme P A; Padgett, Miles J
2017-02-20
We demonstrate a camera which can image methane gas at video rates, using only a single-pixel detector and structured illumination. The light source is an infrared laser diode operating at 1.651 μm, tuned to an absorption line of methane gas. The light is structured using an addressable micromirror array to pattern the laser output with a sequence of Hadamard masks. The resulting backscattered light is recorded using a single-pixel InGaAs detector, which provides a measure of the correlation between the projected patterns and the gas distribution in the scene. Knowledge of this correlation and the patterns allows an image of the gas in the scene to be reconstructed. For the application of locating gas leaks the frame rate of the camera is of primary importance, which in this case is inversely proportional to the square of the linear resolution. Here we demonstrate gas imaging at ~25 fps while using 256 mask patterns (corresponding to an image resolution of 16×16). To aid the task of locating the source of the gas emission, we overlay an upsampled and smoothed version of the low-resolution gas image onto a high-resolution color image of the scene, recorded using a standard CMOS camera. Using an illumination of only 5 mW across the field of view, we demonstrate imaging of a methane gas leak of ~0.2 litres/minute from a distance of ~1 metre.
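The reconstruction principle — correlating single-pixel measurements with the projected patterns — can be sketched for ideal ±1 Hadamard masks. In practice each ±1 mask is displayed as a complementary pair of binary micromirror patterns, and detector noise is present; this simulation ignores both:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_reconstruct(measurements, masks):
    """Each measurement is the inner product of the scene with one mask.
    For orthogonal Hadamard masks (H H^T = N I), the scene is recovered
    as sum_i m_i * mask_i / N."""
    n = masks.shape[1]
    return (measurements[:, None] * masks).sum(axis=0) / n

# Simulated acquisition of a 16x16 scene with 256 Hadamard patterns
N = 256
H = hadamard(N)                       # 256 masks of 256 pixels each
scene = np.random.default_rng(0).random(N)   # flattened 16x16 gas image
meas = H @ scene                      # simulated single-pixel readings
recon = single_pixel_reconstruct(meas, H)
```

The N = 256 measurements per frame (one per mask) make concrete why frame rate scales inversely with the square of linear resolution: doubling the resolution to 32×32 would quadruple the mask count.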
Joint Transform Correlation for face tracking: elderly fall detection application
NASA Astrophysics Data System (ADS)
Katz, Philippe; Aron, Michael; Alfalou, Ayman
2013-03-01
In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane where the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). In an effort to validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is achieved in order to quantify their effects on the tracking performances (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...). (ii) the tracking algorithm is integrated into an application of elderly fall detection. The first reference image is a face detected by means of Haar descriptors, and then localized into the new video image thanks to our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated in our algorithm. This step ensures a robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face-tracking step. A supplementary step of fall detection, based on vertical acceleration and position, will be added and studied in further work.
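The histogram-based guard against bad reference updates can be sketched with a histogram-intersection score: update the reference only when the candidate patch's intensity distribution is close enough to the current one. The 32 bins and 0.8 threshold below are illustrative, not the paper's values:

```python
import numpy as np

def histograms_match(ref, cur, bins=32, threshold=0.8):
    """Compare intensity histograms of the reference patch and the
    candidate patch via histogram intersection (1.0 = identical).
    Reject the reference update when the score falls below threshold."""
    h1, _ = np.histogram(ref, bins=bins, range=(0, 255))
    h2, _ = np.histogram(cur, bins=bins, range=(0, 255))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.minimum(h1, h2).sum() >= threshold
```

A candidate dominated by background (e.g. after a localization failure) has a very different intensity distribution from the face reference, so the intersection score drops and the stale reference is kept instead.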
Munyon, Charles N; Koubeissi, Mohamad Z; Syed, Tanvir U; Lüders, Hans O; Miller, Jonathan P
2013-01-01
Frame-based stereotaxy and open craniotomy may seem mutually exclusive, but invasive electrophysiological monitoring can require broad sampling of the cortex and precise targeting of deeper structures. The purpose of this study is to describe simultaneous frame-based insertion of depth electrodes and craniotomy for placement of subdural grids through a single surgical field and to determine the accuracy of depth electrodes placed using this technique. A total of 6 patients with intractable epilepsy underwent placement of a stereotactic frame with the center of the planned cranial flap equidistant from the fixation posts. After volumetric imaging, craniotomy for placement of subdural grids was performed. Depth electrodes were placed using frame-based stereotaxy. Postoperative CT determined the accuracy of electrode placement. A total of 31 depth electrodes were placed. Mean distance of the distal electrode contact from the target was 1.0 ± 0.15 mm. Error was correlated with distance to target, with an additional 0.35 mm error for each centimeter (r = 0.635, p < 0.001); when corrected, there was no difference in accuracy based on target structure or method of placement (prior to craniotomy vs. through grid, p = 0.23). The described technique for craniotomy through a stereotactic frame allows placement of subdural grids and depth electrodes without sacrificing the accuracy of a frame or requiring staged procedures.
Dual-mode optical microscope based on single-pixel imaging
NASA Astrophysics Data System (ADS)
Rodríguez, A. D.; Clemente, P.; Tajahuerce, E.; Lancis, J.
2016-07-01
We demonstrate an inverted microscope that can image specimens in both reflection and transmission modes simultaneously with a single light source. The microscope utilizes a digital micromirror device (DMD) for patterned illumination together with two single-pixel photosensors for efficient light detection. The system, a scan-less device with no moving parts, works by sequential projection onto the sample of a set of binary intensity patterns that are codified onto a modified commercial DMD. Data to be displayed are geometrically transformed before being written into a memory cell, to cancel optical artifacts coming from the diamond-like shaped structure of the micromirror array. The 24-bit color depth of the display is fully exploited to increase the frame rate by a factor of 24, which makes the technique practicable for real samples. Our commercial DMD-based LED illumination is cost effective and can be easily coupled as an add-on module for already existing inverted microscopes. The reflection and transmission information provided by our dual microscope complement each other and can be useful for imaging non-uniform samples and for preventing self-shadowing effects.
High-frame rate multiport CCD imager and camera
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.
1993-01-01
A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
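The digital reformatting step — stitching four per-port pixel streams back into one correctly ordered frame — can be sketched as follows. The readout geometry is an assumption (each port reading a contiguous 64-column stripe in row-major order); the actual port mapping of the imager may differ:

```python
import numpy as np

def reformat_ports(port_streams, height=256, width=256):
    """Reassemble per-port pixel streams into one height x width frame.
    Assumes each of the four ports reads out a contiguous column stripe
    in row-major order (a hypothetical geometry for illustration)."""
    n = len(port_streams)
    stripe_w = width // n
    frame = np.empty((height, width), dtype=port_streams[0].dtype)
    for i, stream in enumerate(port_streams):
        frame[:, i * stripe_w:(i + 1) * stripe_w] = stream.reshape(height, stripe_w)
    return frame
```

Splitting a known frame into stripes and reassembling it should reproduce the original exactly, which is the invariant any real reformatter must satisfy.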
Time-series animation techniques for visualizing urban growth
Acevedo, W.; Masuoka, P.
1997-01-01
Time-series animation is a visually intuitive way to display urban growth. Animations of land-use change for the Baltimore-Washington region were generated by showing a series of images one after the other in sequential order. Before creating an animation, various issues which will affect the appearance of the animation should be considered, including the number of original data frames to use, the optimal animation display speed, the number of intermediate frames to create between the known frames, and the output media on which the animations will be displayed. To create new frames between the known years of data, the change in each theme (i.e. urban development, water bodies, transportation routes) must be characterized and an algorithm developed to create the in-between frames. Example time-series animations were created using a temporal GIS database of the Baltimore-Washington area. Creating the animations involved generating raster images of the urban development, water bodies, and principal transportation routes; overlaying the raster images on a background image; and importing the frames to a movie file. Three-dimensional perspective animations were created by draping each image over digital elevation data prior to importing the frames to a movie file. © 1997 Elsevier Science Ltd.
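One simple stand-in for the in-between-frame algorithms described above is a linear cross-dissolve between the two known data years; the per-theme change algorithms in the paper are more selective, so this sketch only illustrates the interpolation idea:

```python
import numpy as np

def inbetween_frames(frame_a, frame_b, n):
    """Create n intermediate frames between two known raster frames by
    linear cross-dissolve: frame(t) = (1 - t) * a + t * b for t in (0, 1)."""
    out = []
    for k in range(1, n + 1):
        t = k / (n + 1)
        out.append((1.0 - t) * frame_a + t * frame_b)
    return out
```

For categorical themes such as urban vs. non-urban, a real implementation would threshold or grow regions rather than blend pixel values, but the temporal spacing logic is the same.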
The impact of verbal framing on brain activity evoked by emotional images.
Kisley, Michael A; Campbell, Alana M; Larson, Jenna M; Naftz, Andrea E; Regnier, Jesse T; Davalos, Deana B
2011-12-01
Emotional stimuli generally command more brain processing resources than non-emotional stimuli, but the magnitude of this effect is subject to voluntary control. Cognitive reappraisal represents one type of emotion regulation that can be voluntarily employed to modulate responses to emotional stimuli. Here, the late positive potential (LPP), a specific event-related brain potential (ERP) component, was measured in response to neutral, positive and negative images while participants performed an evaluative categorization task. One experimental group adopted a "negative frame" in which images were categorized as negative or not. The other adopted a "positive frame" in which the exact same images were categorized as positive or not. Behavioral performance confirmed compliance with random group assignment, and peak LPP amplitude to negative images was affected by group membership: brain responses to negative images were significantly reduced in the "positive frame" group. This suggests that adopting a more positive appraisal frame can modulate brain activity elicited by negative stimuli in the environment.
Correction of projective distortion in long-image-sequence mosaics without prior information
NASA Astrophysics Data System (ADS)
Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie
2010-04-01
Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. 
The proposed method is shown to be effective and suitable for real-time implementation.
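The scale-reset step described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' exact algorithm: `reset_affine_scale` and its decomposition are assumptions, but it follows the paper's idea of deducing the affine scale factor via singular value decomposition and forcing it back to 1.

```python
import numpy as np

def reset_affine_scale(H):
    """Approximate a homography by its affine part and rescale it so the
    overall scale factor (from the SVD of the 2x2 linear block) is reset
    to 1, undoing the progressive shrinkage of frames pasted into a
    growing mosaic. Illustrative sketch, not the authors' exact method."""
    A = H[:2, :2] / H[2, 2]                   # affine approximation of the linear part
    t = H[:2, 2] / H[2, 2]                    # translation
    s = np.linalg.svd(A, compute_uv=False)    # singular values, s[0] >= s[1]
    scale = np.sqrt(s[0] * s[1])              # isotropic scale of the transform
    H_out = np.eye(3)
    H_out[:2, :2] = A / scale                 # force the overall scale back to 1
    H_out[:2, 2] = t
    return H_out, scale
```

As the paper notes, the deduced scale is usually close to 1, so resetting it introduces only a small matching error while keeping the transformed frame size unchanged.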
High-speed varifocal imaging with a tunable acoustic gradient index of refraction lens.
Mermillod-Blondin, Alexandre; McLeod, Euan; Arnold, Craig B
2008-09-15
Fluidic lenses allow for varifocal optical elements, but current approaches are limited by the speed at which focal length can be changed. Here we demonstrate the use of a tunable acoustic gradient (TAG) index of refraction lens as a fast varifocal element. The optical power of the TAG lens varies continuously, allowing for rapid selection and modification of the effective focal length at time scales of 1 μs and shorter. The wavefront curvature applied to the incident light is experimentally quantified as a function of time, and single-frame imaging is demonstrated. Results indicate that the TAG lens can successfully be employed to perform high-rate imaging at multiple locations.
Monocular Stereo Measurement Using High-Speed Catadioptric Tracking
Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku
2017-01-01
This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
A two-step A/D conversion and column self-calibration technique for low noise CMOS image sensors.
Bae, Jaeyoung; Kim, Daeyun; Ham, Seokheon; Chae, Youngcheol; Song, Minkyu
2014-07-04
In this paper, a 120 frames per second (fps) low noise CMOS Image Sensor (CIS) based on a Two-Step Single Slope ADC (TS SS ADC) and column self-calibration technique is proposed. The TS SS ADC is suitable for high speed video systems because its conversion speed is much faster (by more than 10 times) than that of the Single Slope ADC (SS ADC). However, there exist some mismatching errors between the coarse block and the fine block due to the 2-step operation of the TS SS ADC. In general, this makes it difficult to implement the TS SS ADC beyond a 10-bit resolution. In order to improve such errors, a new 4-input comparator is discussed and a high resolution TS SS ADC is proposed. Further, a feedback circuit that enables column self-calibration to reduce the Fixed Pattern Noise (FPN) is also described. The proposed chip has been fabricated with 0.13 μm Samsung CIS technology and the chip satisfies the VGA resolution. The pixel is based on the 4-TR Active Pixel Sensor (APS). The high frame rate of 120 fps is achieved at the VGA resolution. The measured FPN is 0.38 LSB, and measured dynamic range is about 64.6 dB.
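The claimed speed advantage of the TS SS ADC over a plain single-slope ADC follows from simple ramp-cycle counting. The sketch below is an idealized count (ignoring settling and calibration overhead, which the paper addresses with the 4-input comparator and self-calibration), included only to make the ">10 times faster" claim concrete:

```python
def ts_ss_cycles(coarse_bits, fine_bits):
    """Idealized ramp-cycle counts: a two-step single-slope conversion
    needs 2^coarse + 2^fine cycles (coarse ramp, then fine ramp), versus
    2^(coarse+fine) cycles for a plain single-slope ADC of the same
    total resolution."""
    two_step = 2 ** coarse_bits + 2 ** fine_bits
    single_slope = 2 ** (coarse_bits + fine_bits)
    return two_step, single_slope
```

For a 10-bit conversion split 5/5, the two-step ramp takes 64 cycles against 1024 for the single slope, a 16x reduction, consistent with the paper's "more than 10 times" figure.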
Uribe-Patarroyo, Néstor; Bouma, Brett E.
2015-01-01
We present a new technique for the correction of nonuniform rotation distortion in catheter-based optical coherence tomography (OCT), based on the statistics of speckle between A-lines using intensity-based dynamic light scattering. This technique does not rely on tissue features and can be performed on single frames of data, thereby enabling real-time image correction. We demonstrate its suitability in a gastrointestinal balloon-catheter OCT system, determining the actual rotational speed with high temporal resolution, and present corrected cross-sectional and en face views showing significant enhancement of image quality. PMID:26625040
Soft X-ray and XUV imaging with a charge-coupled device /CCD/-based detector
NASA Technical Reports Server (NTRS)
Loter, N. G.; Burstein, P.; Krieger, A.; Ross, D.; Harrison, D.; Michels, D. J.
1981-01-01
A soft X-ray/XUV imaging camera which uses a thinned, back-illuminated, all-buried channel RCA CCD for radiation sensing has been built and tested. The camera is a slow-scan device which makes possible frame integration if necessary. The detection characteristics of the device have been tested over the 15-1500 eV range. The response was linear with exposure up to 0.2-0.4 erg/sq cm; saturation occurred at greater exposures. Attention is given to attempts to resolve single photons with energies of 1.5 keV.
Research of real-time video processing system based on 6678 multi-core DSP
NASA Astrophysics Data System (ADS)
Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang
2017-10-01
In the information age, video processing is developing rapidly in the direction of intelligent algorithms, whose complexity places strong demands on processor performance. This article presents a real-time video processing system based on an FPGA + TMS320C6678 architecture that integrates image defogging, image fusion, image stabilization, and image enhancement into a single pipeline with good real-time behavior and high performance. The design overcomes the simple, single-function limitations of traditional video processing systems and addresses video applications such as security and surveillance monitoring, allowing video monitoring to be used to full effect and improving enterprise economic benefits.
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-03-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.
Phasor imaging with a widefield photon-counting detector
Siegmund, Oswald H. W.; Tremsin, Anton S.; Vallerga, John V.; Weiss, Shimon
2012-01-01
Fluorescence lifetime can be used as a contrast mechanism to distinguish fluorophores for localization or tracking, for studying molecular interactions, binding, assembly, and aggregation, or for observing conformational changes via Förster resonance energy transfer (FRET) between donor and acceptor molecules. Fluorescence lifetime imaging microscopy (FLIM) is thus a powerful technique but its widespread use has been hampered by demanding hardware and software requirements. FLIM data is often analyzed in terms of multicomponent fluorescence lifetime decays, which requires large signals for a good signal-to-noise ratio. This confines the approach to very low frame rates and limits the number of frames which can be acquired before bleaching the sample. Recently, a computationally efficient and intuitive graphical representation, the phasor approach, has been proposed as an alternative method for FLIM data analysis at the ensemble and single-molecule level. In this article, we illustrate the advantages of combining phasor analysis with a widefield time-resolved single photon-counting detector (the H33D detector) for FLIM applications. In particular we show that phasor analysis allows real-time subsecond identification of species by their lifetimes and rapid representation of their spatial distribution, thanks to the parallel acquisition of FLIM information over a wide field of view by the H33D detector. We also discuss possible improvements of the H33D detector’s performance made possible by the simplicity of phasor analysis and its relaxed timing accuracy requirements compared to standard time-correlated single-photon counting (TCSPC) methods. PMID:22352658
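The computational simplicity of the phasor transform is the method's appeal: each pixel's decay maps to two numbers, g and s, and single-exponential decays land on the "universal semicircle". A minimal numerical sketch (assuming a single-exponential decay and a uniform time grid; the 80 MHz modulation frequency is an illustrative choice):

```python
import numpy as np

def phasor(t, intensity, omega):
    """Phasor coordinates of a decay on a uniform time grid:
    g and s are the cosine and sine transforms at omega,
    normalized by the total intensity (the grid spacing cancels)."""
    g = np.sum(intensity * np.cos(omega * t)) / np.sum(intensity)
    s = np.sum(intensity * np.sin(omega * t)) / np.sum(intensity)
    return g, s

# single-exponential decay with lifetime tau, 80 MHz modulation
tau = 2e-9
omega = 2 * np.pi * 80e6
t = np.linspace(0.0, 50 * tau, 20000)
g, s = phasor(t, np.exp(-t / tau), omega)
```

Analytically, a single-exponential gives g = 1/(1+(ωτ)²) and s = ωτ/(1+(ωτ)²), which satisfies (g − 1/2)² + s² = 1/4, i.e. the point lies on the semicircle; species identification by lifetime reduces to locating pixels along it.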
Optical flow estimation on image sequences with differently exposed frames
NASA Astrophysics Data System (ADS)
Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin
2015-09-01
Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames are combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
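The core idea of deactivating the data term where a frame is saturated can be sketched with a per-pixel weight mask. This is an illustrative assumption, not the paper's exact cost functional: the function names and thresholds below are invented, and the normalization simply shares the data term among whichever exposures are unsaturated at each pixel.

```python
import numpy as np

def saturation_weight(frame, low=0.02, high=0.98):
    """Per-pixel weight for a brightness-constancy data term: zero where
    the normalized intensity is clipped at either end of the sensor
    range, one elsewhere. Thresholds are illustrative."""
    return ((frame > low) & (frame < high)).astype(float)

def combined_weights(frames, low=0.02, high=0.98):
    """Combine several differently exposed frames into one set of data-term
    weights: a pixel keeps at least one active term as long as any
    exposure is unsaturated there."""
    w = np.stack([saturation_weight(f, low, high) for f in frames])
    return w / np.maximum(w.sum(axis=0), 1.0)   # normalize across frames
```

This mirrors the paper's observation that alternating camera settings cover the full dynamic range: an area saturated in the short exposure can still contribute an active data term through the long exposure, and vice versa.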
Weavers, Paul T; Borisch, Eric A; Hulshizer, Tom C; Rossman, Phillip J; Young, Phillip M; Johnson, Casey P; McKay, Jessica; Cline, Christopher C; Riederer, Stephen J
2016-04-01
Three-station stepping-table time-resolved 3D contrast-enhanced magnetic resonance angiography has conflicting demands in the need to limit acquisition time in proximal stations to match the speed of the advancing contrast bolus and in the distal-most station to avoid venous contamination while still providing clinically useful spatial resolution. This work describes improved receiver coil arrays which address this issue by allowing increased acceleration factors, providing increased spatial resolution per unit time. Receiver coil arrays were constructed for each station (pelvis, thigh, calf) and then integrated into a 48-element array for three-station peripheral CE-MRA. Coil element sizes and array configurations for these three stations were designed to improve SENSE-type parallel imaging taking advantage of an increase in coil count for all stations versus the previous 32 channel capability. At each station either acceleration apportionment or optimal CAIPIRINHA selection was used to choose the optimum acceleration parameters for each subject. Results were evaluated in both single- and multi-station studies. Single-station studies showed that SENSE acceleration in the thigh station could be readily increased from R=8 to R=10, allowing reduction of the frame time from 2.5 to 2.1 s to better image the typically rapidly advancing bolus at this station. Similarly, the improved coil array for the calf station permitted acceleration increase from R=8 to R=12, providing a 4.0 vs. 5.2 s frame time. Results in three-station studies suggest an improved ability to track the contrast bolus in peripheral CE-MRA. Modified receiver coil arrays and individualized parameter optimization have been used to provide improved acceleration at all stations in multi-station peripheral CE-MRA and provide high spatial resolution with frame times as short as 2.1 s. Copyright © 2015 Elsevier Inc. All rights reserved.
1995 Joseph E. Whitley, MD, Award. A World Wide Web gateway to the radiologic learning file.
Channin, D S
1995-12-01
Computer networks in general, and the Internet specifically, are changing the way information is manipulated in the world at large and in radiology. The goal of this project was to develop a computer system in which images from the Radiologic Learning File, available previously only via a single-user laser disc, are made available over a generic, high-availability computer network to many potential users simultaneously. Using a networked workstation in our laboratory and freely available distributed hypertext software, we established a World Wide Web (WWW) information server for radiology. Images from the Radiologic Learning File are requested through the WWW client software, digitized from a single laser disc containing the entire teaching file and then transmitted over the network to the client. The text accompanying each image is incorporated into the transmitted document. The Radiologic Learning File is now on-line, and requests to view the cases result in the delivery of the text and images. Image digitization via a frame grabber takes 1/30th of a second. Conversion of the image to a standard computer graphic format takes 45-60 sec. Text and image transmission speed on a local area network varies between 200 and 400 kilobytes (KB) per second depending on the network load. We have made images from a laser disc of the Radiologic Learning File available through an Internet-based hypertext server. The images previously available through a single-user system located in a remote section of our department are now ubiquitously available throughout our department via the department's computer network. We have thus converted a single-user, limited functionality system into a multiuser, widely available resource.
Multi-frame partially saturated images blind deconvolution
NASA Astrophysics Data System (ADS)
Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2016-12-01
When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with the problem under the modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately by modeling a weighted matrix during each multi-frame deconvolution iteration process. Both synthetic and real-world examples show that more accurate PSFs can be estimated and restored images have richer details and less negative effects compared to state of art methods.
Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
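Digital binning as described, done in post-processing rather than on-sensor, amounts to summing groups of adjacent pixels. A minimal sketch (whether the grouping runs along rows or columns depends on the tag orientation; here groups of n adjacent rows are summed, and the function name is an assumption):

```python
import numpy as np

def digital_bin_rows(img, n=8):
    """Sum each group of n adjacent rows, the post-processing analogue of
    on-sensor binning. Rows beyond the last full group of n are dropped.
    For shot-noise-limited signals, binning n pixels improves the
    signal-to-noise ratio by roughly sqrt(n)."""
    h = (img.shape[0] // n) * n
    return img[:h].reshape(h // n, n, img.shape[1]).sum(axis=1)
```

With n = 8, matching the roughly 8-pixel thickness of the tagged region, this is the configuration the paper found most helpful for the un-intensified, low-SNR camera systems.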
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, Julian; Tate, Mark W.; Shanks, Katherine S.
Pixel Array Detectors (PADs) consist of an x-ray sensor layer bonded pixel-by-pixel to an underlying readout chip. This approach allows both the sensor and the custom pixel electronics to be tailored independently to best match the x-ray imaging requirements. Here we describe the hybridization of CdTe sensors to two different charge-integrating readout chips, the Keck PAD and the Mixed-Mode PAD (MM-PAD), both developed previously in our laboratory. The charge-integrating architecture of each of these PADs extends the instantaneous counting rate by many orders of magnitude beyond that obtainable with photon counting architectures. The Keck PAD chip consists of rapid, 8-frame, in-pixel storage elements with framing periods <150 ns. The second detector, the MM-PAD, has an extended dynamic range by utilizing an in-pixel overflow counter coupled with charge removal circuitry activated at each overflow. This allows the recording of signals from the single-photon level to tens of millions of x-rays/pixel/frame while framing at 1 kHz. Both detector chips consist of a 128×128 pixel array with (150 µm)^2 pixels.
Adjustable permanent magnet assembly for NMR and MRI
Pines, Alexander; Paulsen, Jeffrey; Bouchard, Louis S; Blumich, Bernhard
2013-10-29
System and methods for designing and using single-sided magnet assemblies for magnetic resonance imaging (MRI) are disclosed. The single-sided magnet assemblies can include an array of permanent magnets disposed at selected positions. At least one of the permanent magnets can be configured to rotate about an axis of rotation in the range of at least +/-10 degrees and can include a magnetization having a vector component perpendicular to the axis of rotation. The single-sided magnet assemblies can further include a magnet frame that is configured to hold the permanent magnets in place while allowing the at least one of the permanent magnets to rotate about the axis of rotation.
NASA Astrophysics Data System (ADS)
Lychagin, D. V.; Filippov, A. V.; Novitskaia, O. S.; Kolubaev, E. A.; Sizova, O. V.
2016-08-01
The results of experimental research into the dry sliding friction of Hadfield steel single crystals, involving registration of acoustic emission, are presented in the paper. Images of the friction surfaces of the Hadfield steel single crystals and of the wear grooves on the counterbody surface, taken after completion of three serial experiments conducted under similar conditions and friction regimes, are given. The relation of the acoustic emission waveform envelope to the changing friction factor is revealed. Amplitude-frequency characteristics of acoustic emission signal frames are determined on the basis of the Fast Fourier Transform and the Short-Time Fourier Transform during the run-in stage of the tribo-units and in the process of stable friction.
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S
2016-01-01
Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Twenty-one eyes of 21 healthy volunteers were scanned with noneye-tracking nonframe-averaged OCT device and active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise (SNR), contrast-to-noise ratios (CNR), and the distance between the end of visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. Signal-to-noise and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t -test). The distance between the end of visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became not significant after processing. The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concerning about systematic differences in both qualitative and quantitative aspects.
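The virtual-averaging recipe (voxel resampling plus added amplitude deviation, repeated 15 times and averaged) can be sketched as follows. This is a loose illustration under stated assumptions: the paper's voxel resampling is approximated here by random single-pixel shifts, its amplitude deviation by small additive Gaussian noise, and all parameter values are invented.

```python
import numpy as np

def virtual_average(img, n_rep=15, max_shift=1, noise_sigma=0.01, seed=0):
    """Illustrative sketch of virtual averaging: average n_rep copies of
    the image, each resampled (here: randomly shifted by up to max_shift
    pixels) with a small amplitude deviation added. Averaging the
    resampled copies suppresses uncorrelated speckle/noise."""
    rng = np.random.default_rng(seed)
    acc = np.zeros(img.shape, dtype=float)
    for _ in range(n_rep):
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        shifted = np.roll(img, (dy, dx), axis=(0, 1))
        acc += shifted + rng.normal(0.0, noise_sigma, img.shape)
    return acc / n_rep
```

The resampling step trades a small amount of spatial resolution for noise suppression, which is why the processed nonframe-averaged images come to resemble hardware frame-averaged ones.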
Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J
2015-02-01
To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. 
The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.
NASA Astrophysics Data System (ADS)
Hong, Inki; Cho, Sanghee; Michel, Christian J.; Casey, Michael E.; Schaefferkoetter, Joshua D.
2014-09-01
A new data handling method is presented for improving the image noise distribution and reducing bias when reconstructing very short frames from low count dynamic PET acquisition. The new method termed ‘Complementary Frame Reconstruction’ (CFR) involves the indirect formation of a count-limited emission image in a short frame through subtraction of two frames with longer acquisition time, where the short time frame data is excluded from the second long frame data before the reconstruction. This approach can be regarded as an alternative to the AML algorithm recently proposed by Nuyts et al., as a method to reduce the bias for the maximum likelihood expectation maximization (MLEM) reconstruction of count limited data. CFR uses long scan emission data to stabilize the reconstruction and avoids modification of algorithms such as MLEM. The subtraction between two long frame images naturally allows negative voxel values and significantly reduces bias introduced in the final image. Simulations based on phantom and clinical data were used to evaluate the accuracy of the reconstructed images to represent the true activity distribution. Applicability to determine the arterial input function in human and small animal studies is also explored. In situations with limited count rate, e.g. pediatric applications, gated abdominal, cardiac studies, etc., or when using limited doses of short-lived isotopes such as ¹⁵O-water, the proposed method will likely be preferred over independent frame reconstruction to address bias and noise issues.
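The CFR subtraction can be demonstrated with a toy in count space, using an identity "reconstruction" as a stand-in for MLEM (real reconstructions are nonlinear, so this only illustrates the frame-splitting identity; activity values and frame durations are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
activity = np.array([0.2, 5.0, 50.0])      # counts/s in three voxels
T_long, t_short = 60.0, 2.0                # frame durations in seconds

# Poisson data for the short frame and for the complement (long minus short);
# the full long frame is their sum, as in the acquisition
counts_short = rng.poisson(activity * t_short)
counts_rest = rng.poisson(activity * (T_long - t_short))
counts_long = counts_short + counts_rest

# stand-ins for the two long-frame reconstructions
img_long = counts_long.astype(float)
img_complement = counts_rest.astype(float)

# CFR: the short-frame image is the difference of the two long frames;
# it may go negative, which is what avoids the non-negativity bias
img_short_cfr = img_long - img_complement
```

Because each long frame contains far more counts than the short frame, the two reconstructions are individually stable, and the subtraction, unlike a direct non-negative reconstruction of the short frame, does not clip low-count voxels toward positive values.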
The effect of image force and diffusion on the deposition of ultrafine particle to vegetation
NASA Astrophysics Data System (ADS)
Lin, M. Y.; Katul, G. G.; Huang, C. W.; CHU, C. R.; Khlystov, A.
2017-12-01
Ultrafine particles (UFP), along with their sources and sinks, are gaining significant attention due to their dual role in cloud microphysics and human health. Due to its expansive areal extent, vegetation is a significant sink for UFP, thus prompting interest in how UFP deposit onto vegetated surfaces. Single fiber theory reasonably explains deposition of zero-charge UFP onto vegetation by treating vegetation as filter media. However, the ability of single fiber theory to predict deposition of charged UFP onto vegetation remains unknown and frames the scope of this presentation. Wind tunnel experiments are used to investigate UFP deposition (size range 12.6-102 nm) onto Juniper branches (Juniperus chinensis) and their results are interpreted using single fiber theory. Three different wind speeds (0.3, 0.6, and 0.9 m/s) are investigated to study deposition of singly-charged particles, and these deposition values are contrasted with neutrally charged particles. The wind tunnel experiments indicate that single fiber theory can be used to describe deposition of singly-charged particles onto vegetation if both the image force and Brownian diffusion are simultaneously considered. The image force contribution was found to be proportional to K_IM^0.5 when the image force dimensionless number (K_IM) is smaller than 10^-8, a common condition for singly charged UFP. The proportionality constant was found to be 27.6 (i.e. 27.6 × K_IM^0.5) and is larger than a previously reported value (9.7) derived for K_IM between 10^-7 and 10^-5, primarily due to the lower K_IM (<10^-8) in this study. Another study also showed that this proportionality constant increases with decreasing K_IM. With this representation of the image force, the single fiber filtration model and measurements agree to within 20%. The work here offers a new perspective on the role of the image force at small K_IM (10^-10 to 10^-8) and its role in enhanced deposition of charged UFP onto vegetation.
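The fitted image-force term can be written as a one-line function, shown here to make the regimes concrete (the function name is an assumption; the constants 27.6 and 9.7 and the K_IM ranges are taken from the abstract):

```python
def image_force_efficiency(k_im, c=27.6):
    """Single-fiber collection efficiency contribution from the image
    force, E_IM = c * K_IM**0.5. This study fits c = 27.6 for
    K_IM < 1e-8; an earlier literature value was c = 9.7 for K_IM
    between 1e-7 and 1e-5."""
    return c * k_im ** 0.5
```

For a representative singly charged UFP with K_IM = 10^-10, this gives E_IM = 27.6 × 10^-5 ≈ 2.8 × 10^-4, a contribution that adds to the Brownian-diffusion term in the full single-fiber model.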
Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs).
McGarry, C K; Grattan, M W D; Cosgrove, V P
2007-12-07
This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than acquired with the treatment field during delivery. A series of phantoms was used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance, and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min^-1 and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at d_max in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min^-1 low-dose acquisition mode. Routine EPID QC contrast tolerance (±10) and action (±20) levels using the PIPSPro phantom, based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D), have been introduced. The dose at d_max from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.
Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David
2017-10-01
The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors highly depends on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for a HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding the display we have investigated the image intensity statistics over time, and regarding image analysis we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
An adaptive enhancement algorithm for infrared video based on modified k-means clustering
NASA Astrophysics Data System (ADS)
Zhang, Linze; Wang, Jingqi; Wu, Wen
2016-09-01
In this paper, we propose a video enhancement algorithm to improve the output of an infrared camera. Video obtained by an infrared camera can be very dark when there is no clear target; in this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is then histogram-equalized according to the amount of information it contains; we also use a method to resolve the problem that the final cluster centers can fall too close to each other in some cases. For subsequent frame images, the initial cluster centers are taken from the final cluster centers of the previous frame, and the histogram equalization of each sub-image is carried out after K-means-based segmentation. Histogram equalization spreads the gray values of the image over the whole gray range, and the gray range of each sub-image is determined by its share of the pixels in the frame. Experimental results show that this algorithm improves the contrast of dim infrared video in which the night target is not obvious, and adaptively reduces, within a certain range, the negative effect of overexposed pixels.
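As a rough sketch of the per-frame procedure described above (1-D K-means over gray levels, then histogram equalization of each sub-image within its own gray interval), the following is a minimal illustration; `kmeans_1d` and `cluster_equalize` are hypothetical names, and the rank-based equalization is a simplification of the paper's method:

```python
import numpy as np

def kmeans_1d(values, k, iters=20, seed=0):
    """Simple 1-D k-means on gray levels; returns the final cluster centers."""
    rng = np.random.default_rng(seed)
    centers = np.sort(rng.choice(values, size=k, replace=False).astype(float))
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
        centers = np.sort(centers)
    return centers

def cluster_equalize(frame, k=3):
    """Split a gray-scale frame into k gray-interval clusters and
    histogram-equalize each cluster over its own sub-range."""
    flat = frame.ravel().astype(float)
    centers = kmeans_1d(flat, k)
    labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
    out = np.empty_like(flat)
    # Each cluster keeps its gray interval; equalization redistributes
    # values inside that interval only.
    for j in range(k):
        vals = flat[labels == j]
        lo, hi = vals.min(), vals.max()
        ranks = np.argsort(np.argsort(vals))      # empirical CDF via ranks
        out[labels == j] = lo + (hi - lo) * ranks / max(len(vals) - 1, 1)
    return out.reshape(frame.shape)
```

Carrying the previous frame's final centers forward as the next frame's initial centers, as the abstract describes, would simply mean passing them into `kmeans_1d` in place of the random initialization.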
On the single-photon-counting (SPC) modes of imaging using an XFEL source
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhehui
2015-12-14
In this study, the requirements to achieve high detection efficiency (above 50%) and gigahertz (GHz) frame rate for the proposed 42-keV X-ray free-electron laser (XFEL) at Los Alamos are summarized. Direct detection scenarios using C (diamond), Si, Ge and GaAs semiconductor sensors are analyzed. Single-photon counting (SPC) mode and weak SPC mode using Si can potentially meet the efficiency and frame rate requirements and be useful to both photoelectric absorption and Compton physics as the photon energy increases. Multilayer three-dimensional (3D) detector architecture, as a possible means to realize SPC modes, is compared with the widely used two-dimensional (2D) hybrid planar electrode structure and 3D deeply entrenched electrode architecture. Demonstration of thin film cameras less than 100-μm thick with onboard thin ASICs could be an initial step to realize multilayer 3D detectors and SPC modes for XFELs.
Demosaicking for full motion video 9-band SWIR sensor
NASA Astrophysics Data System (ADS)
Kanaev, Andrey V.; Rawhouser, Marjorie; Kutteruf, Mary R.; Yetzbacher, Michael K.; DePrenger, Michael J.; Novak, Kyle M.; Miller, Corey A.; Miller, Christopher W.
2014-05-01
Short wave infrared (SWIR) spectral imaging systems are vital for Intelligence, Surveillance, and Reconnaissance (ISR) applications because of their abilities to autonomously detect targets and classify materials. Typically the spectral imagers are incapable of providing Full Motion Video (FMV) because of their reliance on line scanning. We enable FMV capability for a SWIR multi-spectral camera by creating a repeating pattern of 3x3 spectral filters on a staring focal plane array (FPA). In this paper we present the imagery from an FMV SWIR camera with nine discrete bands and discuss the image processing algorithms necessary for its operation. The main task of image processing in this case is demosaicking of the spectral bands, i.e., reconstructing full spectral images at the original FPA resolution from the spatially subsampled and incomplete spectral data acquired with the chosen filter array pattern. To the best of the authors' knowledge, demosaicking algorithms for nine or more equally sampled bands have not been reported before. Moreover, all existing algorithms developed for demosaicking visible color filter arrays with fewer than nine colors assume either certain relationships between the visible colors, which are not valid for SWIR imaging, or the presence of one color band with a higher sampling rate than the rest of the bands, which does not conform to our spectral filter pattern. We discuss and present results for two novel approaches to demosaicking: interpolation using multi-band edge information and application of multi-frame super-resolution to single-frame resolution enhancement of multi-spectral spatially multiplexed images.
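One of the simplest baselines consistent with the sampling pattern described above is per-band nearest-neighbor demosaicking; this sketch assumes a 3x3 repeating filter pattern and is not the edge-guided or super-resolution method the paper proposes:

```python
import numpy as np

def demosaic_nearest(mosaic, pattern_rows=3, pattern_cols=3):
    """Nearest-neighbor demosaicking of a spatially multiplexed image.
    Band (r, c) of the pattern is sensed at pixels (r::3, c::3); every
    other pixel of that band is filled from a nearby sensed sample."""
    H, W = mosaic.shape
    bands = np.empty((pattern_rows * pattern_cols, H, W), mosaic.dtype)
    for r in range(pattern_rows):
        for c in range(pattern_cols):
            sub = mosaic[r::pattern_rows, c::pattern_cols]
            # Replicate each sensed sample over its 3x3 neighborhood.
            up = np.repeat(np.repeat(sub, pattern_rows, 0), pattern_cols, 1)
            up = up[:H, :W]
            pad_r, pad_c = H - up.shape[0], W - up.shape[1]
            if pad_r or pad_c:     # frame size not a multiple of the pattern
                up = np.pad(up, ((0, pad_r), (0, pad_c)), mode="edge")
            bands[r * pattern_cols + c] = up
    return bands
```

A real demosaicker would replace the replication step with interpolation guided by inter-band edge information, as the paper describes.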
Still-to-video face recognition in unconstrained environments
NASA Astrophysics Data System (ADS)
Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing
2015-02-01
Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limited number of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are introduced to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approaches and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method clearly outperforms state-of-the-art methods.
High-speed multi-frame laser Schlieren for visualization of explosive events
NASA Astrophysics Data System (ADS)
Clarke, S. A.; Murphy, M. J.; Landon, C. D.; Mason, T. A.; Adrian, R. J.; Akinci, A. A.; Martinez, M. E.; Thomas, K. A.
2007-09-01
High-Speed Multi-Frame Laser Schlieren is used for visualization of a range of explosive and non-explosive events. Schlieren is a well-known technique for visualizing shock phenomena in transparent media. Laser backlighting and a framing camera allow for Schlieren images with very short (down to 5 ns) exposure times, band pass filtering to block out explosive self-light, and 14 frames of a single explosive event. This diagnostic has been applied to several explosive initiation events, such as exploding bridgewires (EBW), Exploding Foil Initiators (EFI) (or slappers), Direct Optical Initiation (DOI), and ElectroStatic Discharge (ESD). Additionally, a series of tests have been performed on "cut-back" detonators with varying initial pressing (IP) heights. We have also used this diagnostic to visualize a range of EBW, EFI, and DOI full-up detonators. The setup has also been used to visualize a range of other explosive events, such as explosively driven metal shock experiments and explosively driven microjets. Future applications to other explosive events such as boosters and IHE booster evaluation will be discussed. Finite element codes (EPIC, CTH) have been used to analyze the schlieren images to determine likely boundary or initial conditions and the temporal-spatial pressure profile across the output face of the detonator. These experiments are part of a phased plan to understand the evolution of detonation in a detonator from initiation shock through run to detonation to full detonation to transition to booster and booster detonation.
Object acquisition and tracking for space-based surveillance
NASA Astrophysics Data System (ADS)
1991-11-01
This report presents the results of research carried out by Space Computer Corporation under the U.S. government's Small Business Innovation Research (SBIR) Program. The work was sponsored by the Strategic Defense Initiative Organization and managed by the Office of Naval Research under Contracts N00014-87-C-0801 (Phase 1) and N00014-89-C-0015 (Phase 2). The basic purpose of this research was to develop and demonstrate a new approach to the detection of, and initiation of track on, moving targets using data from a passive infrared or visual sensor. This approach differs in very significant ways from the traditional approach of dividing the required processing into time dependent, object dependent, and data dependent processing stages. In that approach individual targets are first detected in individual image frames, and the detections are then assembled into tracks. That requires that the signal to noise ratio in each image frame be sufficient for fairly reliable target detection. In contrast, our approach bases detection of targets on multiple image frames, and, accordingly, requires a smaller signal to noise ratio. It is sometimes referred to as track before detect, and can lead to a significant reduction in total system cost. For example, it can allow greater detection range for a single sensor, or it can allow the use of smaller sensor optics. Both the traditional and track before detect approaches are applicable to systems using scanning sensors, as well as those which use staring sensors.
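The track-before-detect idea, integrating multiple frames before thresholding, can be sketched as a shift-and-add search over candidate target velocities; the function names and the simple peak test below are illustrative, not the report's actual processing chain:

```python
import numpy as np

def shift_and_add(frames, vy, vx):
    """Integrate frames along a hypothesized pixel velocity (vy, vx)
    per frame: a target moving at that velocity adds coherently while
    noise grows only as sqrt(N), raising the effective SNR."""
    acc = np.zeros_like(frames[0], dtype=float)
    for t, f in enumerate(frames):
        acc += np.roll(f, shift=(-t * vy, -t * vx), axis=(0, 1))
    return acc / len(frames)

def detect(frames, velocities, thresh):
    """Try each candidate velocity; report (peak, velocity, pixel) of
    the strongest integrated response above threshold, else None."""
    best = None
    for vy, vx in velocities:
        acc = shift_and_add(frames, vy, vx)
        peak = acc.max()
        if peak > thresh and (best is None or peak > best[0]):
            best = (peak, (vy, vx), np.unravel_index(acc.argmax(), acc.shape))
    return best
```

This is why the report's approach tolerates a smaller per-frame signal-to-noise ratio than detect-then-track: the detection decision is deferred until after multi-frame integration.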
A passive terahertz video camera based on lumped element kinetic inductance detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon
We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.
Development Of A Dynamic Radiographic Capability Using High-Speed Video
NASA Astrophysics Data System (ADS)
Bryant, Lawrence E.
1985-02-01
High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging at up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 kV and 300 kV constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt-60 source. Use of a maximum aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling-type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement, and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details, with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.
Smear correction of highly variable, frame-transfer CCD images with application to polarimetry.
Iglesias, Francisco A; Feller, Alex; Nagaraju, Krishnappa
2015-07-01
Image smear, produced by the shutterless operation of frame-transfer CCD detectors, can be detrimental for many imaging applications. Existing algorithms used to numerically remove smear do not contemplate cases where intensity levels change considerably between consecutive frame exposures. In this report, we reformulate the smearing model to include specific variations of the sensor illumination. The corresponding desmearing expression and its noise properties are also presented and demonstrated in the context of fast imaging polarimetry.
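The classic constant-illumination smearing model that the paper generalizes can be illustrated with a row-recursive forward model and its exact inverse; the smear fraction `k` and the row ordering are assumptions of this sketch, and the paper's contribution is precisely relaxing the constant-illumination assumption:

```python
import numpy as np

def smear(img, k):
    """Forward model: during frame transfer each row picks up a
    fraction k of the signal of every row that passed over it,
    assuming constant illumination during readout. Rows are ordered
    in the transfer direction."""
    out = np.empty_like(img, dtype=float)
    acc = np.zeros(img.shape[1])           # running sum of rows already read
    for i in range(img.shape[0]):
        out[i] = img[i] + k * acc
        acc += img[i]
    return out

def desmear_constant(img, k):
    """Exact row-by-row inverse of the forward model above."""
    out = np.empty_like(img, dtype=float)
    acc = np.zeros(img.shape[1])           # running sum of restored rows
    for i in range(img.shape[0]):
        out[i] = img[i] - k * acc
        acc += out[i]
    return out
```

Because the inverse is recursive in the restored rows, it removes smear exactly under the model's assumptions; noise, however, is propagated down the rows, which is why the paper analyzes the noise properties of its desmearing expression.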
High frame rate imaging systems developed in Northwest Institute of Nuclear Technology
NASA Astrophysics Data System (ADS)
Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli
2007-01-01
This paper presents high frame rate imaging systems developed in Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type of system utilizes the EG&G RETICON Photodiode Array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a flash-event capture mode; a specific timing sequence is designed to satisfy this requirement. The camera image data can be transmitted to a remote area over coaxial or optical fiber cable and then stored. The second type of imaging system utilizes the PHOTOBIT Complementary Metal Oxide Semiconductor (CMOS) PB-MV13 as the image sensor, which has a high resolution of 1280 (H) × 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps at full frame and 4000 fps in partial-frame readout. The prototype scheme of the system is presented. The third type of imaging system adopts charge-coupled devices (CCDs) as the imagers. MINTRON MTV-1881EX, DALSA CA-D1 and CA-D6 camera heads are used in the systems' development. A comparison of the features of the RA100A-, PB-MV13-, and CA-D6-based systems is given at the end.
Gesture recognition by instantaneous surface EMG images.
Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun
2016-11-15
Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns inside the instantaneous values of high-density sEMG enable gesture recognition to be performed merely with the sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a deep convolutional network as the classification scheme. Without any windowed features, the resultant recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of sEMG image and reached 99.0% using simple majority voting over 40 frames with a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses.
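The majority-voting fusion over per-frame decisions can be sketched as follows; a nearest-mean template classifier stands in for the paper's deep convolutional network, so the classifier and all names here are illustrative only:

```python
import numpy as np
from collections import Counter

def classify_frame(x, templates):
    """Label one instantaneous sEMG image (flattened) by nearest
    class-mean template (a toy stand-in for the paper's deep network)."""
    dists = {g: np.linalg.norm(x - t) for g, t in templates.items()}
    return min(dists, key=dists.get)

def classify_sequence(frames, templates):
    """Per-frame decisions fused by simple majority voting, as in the
    paper's 40-frame voting scheme."""
    votes = [classify_frame(f, templates) for f in frames]
    return Counter(votes).most_common(1)[0][0]
```

Voting converts many noisy instantaneous decisions into one robust label, which is how the reported accuracy rises from 89.3% on a single frame to 99.0% over 40 frames.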
Mosaic construction, processing, and review of very large electron micrograph composites
NASA Astrophysics Data System (ADS)
Vogt, Robert C., III; Trenkle, John M.; Harmon, Laurel A.
1996-11-01
A system of programs is described for acquisition, mosaicking, cueing and interactive review of large-scale transmission electron micrograph composite images. This work was carried out as part of a final-phase clinical analysis study of a drug for the treatment of diabetic peripheral neuropathy. More than 500 nerve biopsy samples were prepared, digitally imaged, processed, and reviewed. For a given sample, typically 1000 or more 1.5 megabyte frames were acquired, for a total of between 1 and 2 gigabytes of data per sample. These frames were then automatically registered and mosaicked together into a single virtual image composite, which was subsequently used to perform automatic cueing of axons and axon clusters, as well as review and marking by qualified neuroanatomists. Statistics derived from the review process were used to evaluate the efficacy of the drug in promoting regeneration of myelinated nerve fibers. This effort demonstrates a new, entirely digital capability for doing large-scale electron micrograph studies, in which all of the relevant specimen data can be included at high magnification, as opposed to simply taking a random sample of discrete locations. It opens up the possibility of a new era in electron microscopy--one which broadens the scope of questions that this imaging modality can be used to answer.
A Method of Face Detection with Bayesian Probability
NASA Astrophysics Data System (ADS)
Sarker, Goutam
2010-10-01
The objective of face detection is to identify all images which contain a face, irrespective of its orientation, illumination conditions, etc. This is a hard problem, because faces are highly variable in size, shape, lighting conditions, etc. Many methods have been designed and developed to detect faces in a single image. The present paper is based on an `Appearance Based Method' which relies on learning the facial and non-facial features from image examples. This, in turn, is based on statistical analysis of examples and counter-examples of facial images and employs a Bayesian conditional classification rule to estimate the probability that a face (or non-face) is present within an image frame. The detection rate of the present system is very high, and thereby the number of false positive and false negative detections is substantially low.
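The underlying Bayesian conditional classification rule, choosing the class with the larger posterior, can be illustrated in one dimension with Gaussian likelihoods; the parameters here are toy stand-ins for the statistics learned from facial and non-facial examples:

```python
import math

def bayes_decide(x, mu_f, var_f, mu_n, var_n, prior_f=0.5):
    """Classify a feature value as face/non-face by comparing the
    posteriors p(x|class) * P(class) under 1-D Gaussian likelihoods
    (an illustrative stand-in for learned appearance statistics)."""
    def gauss(v, mu, var):
        return math.exp(-(v - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)
    post_f = gauss(x, mu_f, var_f) * prior_f        # face posterior (unnormalized)
    post_n = gauss(x, mu_n, var_n) * (1 - prior_f)  # non-face posterior
    return "face" if post_f > post_n else "non-face"
```

In the actual system the scalar `x` would be a high-dimensional appearance feature vector, but the decision rule, pick the class maximizing likelihood times prior, is the same.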
Radiometric calibration of wide-field camera system with an application in astronomy
NASA Astrophysics Data System (ADS)
Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika
2017-09-01
Camera response function (CRF) is widely used for the description of the relationship between scene radiance and image brightness. The most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of limited use for astronomical image data, mostly due to the nature of such data (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
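Given an estimated inverse CRF, the HDR radiance-map reconstruction mentioned above is typically a weighted per-pixel average over exposures. This sketch assumes a pure gamma response and a hat weighting, in the spirit of the common Debevec-style scheme rather than any specific method from the paper:

```python
import numpy as np

def merge_radiance(images, exposures, inv_crf):
    """Merge differently exposed frames into a relative radiance map:
    invert the camera response, divide by exposure time, and average
    with a hat weighting that trusts mid-range pixels most and ignores
    saturated or underexposed ones."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(images, exposures):
        w = 1.0 - np.abs(img - 0.5) * 2.0        # hat weight on [0, 1] pixels
        num += w * inv_crf(img) / t
        den += w
    return num / np.maximum(den, 1e-8)

# Example inverse CRF for an assumed pure gamma response I = E**(1/2.2):
inv_gamma = lambda i: i ** 2.2
```

For astronomical data the weighting and the CRF model would need to account for the blur, noise, and long exposures the paper highlights; the merging skeleton itself stays the same.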
Color image generation for screen-scanning holographic display.
Takaki, Yasuhiro; Matsumoto, Yuji; Nakajima, Tatsumi
2015-10-19
Horizontally scanning holography using a microelectromechanical system spatial light modulator (MEMS-SLM) can provide reconstructed images with an enlarged screen size and an increased viewing zone angle. Herein, we propose techniques to enable color image generation for a screen-scanning display system employing a single MEMS-SLM. Higher-order diffraction components generated by the MEMS-SLM for R, G, and B laser lights were coupled by providing proper illumination angles on the MEMS-SLM for each color. An error diffusion technique to binarize the hologram patterns was developed, in which the error diffusion directions were determined for each color. Color reconstructed images with a screen size of 6.2 in. and a viewing zone angle of 10.2° were generated at a frame rate of 30 Hz.
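A standard error-diffusion binarization, which the paper adapts by choosing the diffusion directions per color, can be sketched with the usual Floyd-Steinberg kernel:

```python
import numpy as np

def error_diffuse_binarize(gray):
    """Binarize a gray-valued pattern (e.g. a hologram) by error
    diffusion: the quantization error at each pixel is pushed onto
    unvisited neighbors so the local average intensity is preserved.
    This uses the standard Floyd-Steinberg kernel, not the per-color
    direction scheme developed in the paper."""
    img = gray.astype(float).copy()
    H, W = img.shape
    out = np.zeros((H, W))
    for y in range(H):
        for x in range(W):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - out[y, x]
            # Distribute the error with the 7/16, 3/16, 5/16, 1/16 weights.
            if x + 1 < W:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < H and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < H:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < H and x + 1 < W:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```

The binary output is what a binary-amplitude MEMS-SLM can actually display; error diffusion keeps the low-frequency content of the hologram intact while moving quantization noise to high spatial frequencies.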
Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates
NASA Technical Reports Server (NTRS)
Linares, Irving (Inventor)
2016-01-01
Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm for low bit rate JPEG-formatted color images may allow for greater compression while maintaining equivalent image quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands--red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt as a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.
Simultaneous narrowband ultrasonic strain-flow imaging
NASA Astrophysics Data System (ADS)
Tsou, Jean K.; Mai, Jerome J.; Lupotti, Fermin A.; Insana, Michael F.
2004-04-01
We summarize new research aimed at forming spatially and temporally registered combinations of strain and color-flow images using echo data recorded from a commercial ultrasound system. Applications include diagnosis of vascular diseases and tumor malignancies. The challenge is to meet the diverse needs of each measurement. The approach is to first apply eigenfilters that separate echo components from moving tissues and blood flow, and then estimate blood velocity and tissue displacement from the filtered IQ-signal phase modulations. At the cost of a lower acquisition frame rate, we find the autocorrelation strain estimator yields higher resolution strain estimates than the cross-correlator, since estimates are made from ensembles at a single point in space. The technique is applied to in vivo carotid imaging to demonstrate the sensitivity of strain-flow vascular imaging.
The Mapping X-Ray Fluorescence Spectrometer (MAPX)
NASA Technical Reports Server (NTRS)
Blake, David; Sarrazin, Philippe; Bristow, Thomas; Downs, Robert; Gailhanou, Marc; Marchis, Franck; Ming, Douglas; Morris, Richard; Sole, Vincente Armando; Thompson, Kathleen;
2016-01-01
MapX will provide elemental imaging at ~100 micron spatial resolution over 2.5 x 2.5 centimeter areas, yielding elemental chemistry at or below the scale length where many relict physical, chemical, and biological features can be imaged and interpreted in ancient rocks. MapX is a full-frame spectroscopic imager positioned on soil or regolith with touch sensors. During an analysis, an X-ray source (tube or radioisotope) bombards the sample surface with X-rays or alpha-particles / gamma rays, resulting in sample X-ray Fluorescence (XRF). Fluoresced X-rays pass through an X-ray lens (X-ray µ-Pore Optic, "MPO") that projects a spatially resolved image of the X-rays onto a CCD. The CCD is operated in single photon counting mode so that the positions and energies of individual photons are retained. In a single analysis, several thousand frames are stored and processed. A MapX experiment provides elemental maps having a spatial resolution of ~100 micron and quantitative XRF spectra from Regions of Interest (ROI) ranging from 2 centimeters down to 100 microns in extent. ROI are compared with known rock and mineral compositions to extrapolate the data to rock types and putative mineralogies. The MapX geometry is being refined with ray-tracing simulations and with synchrotron experiments at SLAC. Source requirements are being determined through Monte Carlo modeling and experiment using XMIMSIM [1], GEANT4 [2] and PyMca [3] and a dedicated XRF test fixture. A flow-down of requirements for both tube and radioisotope sources is being developed from these experiments. In addition to Mars lander and rover missions, MapX could be used for landed science on other airless bodies (Phobos/Deimos, comet nuclei, asteroids, the Earth's moon, and the icy satellites of the outer planets, including Europa).
CCD sensors in synchrotron X-ray detectors
NASA Astrophysics Data System (ADS)
Strauss, M. G.; Naday, I.; Sherman, I. S.; Kraimer, M. R.; Westbrook, E. M.; Zaluzec, N. J.
1988-04-01
The intense photon flux from advanced synchrotron light sources, such as the 7-GeV synchrotron being designed at Argonne, requires integrating-type detectors. Charge-coupled devices (CCDs) are well suited as synchrotron X-ray detectors. When irradiated indirectly via a phosphor followed by reducing optics, diffraction patterns of 100 cm² can be imaged on a 2 cm² CCD. With a conversion efficiency of ~1 CCD electron/X-ray photon, a peak saturation capacity of >10⁶ X-rays can be obtained. A programmable CCD controller operating at a clock frequency of 20 MHz has been developed. The readout rate is 5 × 10⁶ pixels/s and the shift rate in the parallel registers is 10⁶ lines/s. The test detector was evaluated in two experiments. In protein crystallography, diffraction patterns have been obtained from a lysozyme crystal using a conventional rotating anode X-ray generator. Based on these results we expect to obtain diffraction images at a synchrotron at a rate of ~1 frame/s, or a complete 3-dimensional data set from a single crystal in ~2 min. In electron energy-loss spectroscopy (EELS), the CCD was used in a parallel detection mode which is similar to the mode in which array detectors are used in dispersive EXAFS. With a beam current corresponding to 3 × 10⁹ electrons/s on the detector, a series of 64 spectra were recorded on the CCD in a continuous sequence without interruption due to readout. The frame-to-frame pixel signal fluctuations had σ = 0.4%, from which DQE = 0.4 was obtained, where the detector conversion efficiency was 2.6 CCD electrons/X-ray photon. These multiple-frame series also showed the time-resolved modulation of the electron microscope optics by stray magnetic fields.
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and a FPGA platform with USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal form factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency being controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of smaller than 3 mm diameter 3D stereo vision equipment in a medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
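The voltage-based frequency regulation described above amounts to a simple feedback loop: measure the line period, compare it with the desired (Master) period, and nudge the supply voltage. The sketch below is purely illustrative; the period-vs-voltage model, the gain, and all function names are assumptions of this sketch, not the paper's control core:

```python
def regulate(measure_line_period, set_voltage, target, v0,
             gain=0.5, steps=50):
    """Toy proportional control loop: adjust a camera's supply voltage
    until its measured line period matches the target period.
    measure_line_period(v) models an assumed monotonic dependence of
    line period on supply voltage (higher voltage -> faster clock)."""
    v = v0
    for _ in range(steps):
        err = measure_line_period(v) - target
        v += gain * err          # period too long -> raise voltage
        set_voltage(v)
    return v
```

A hypothetical oscillator model where the period varies as 100/v converges to the target under this loop; the real system additionally aligns frame phase through the Master-Slave interface, which a pure frequency loop does not capture.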
Time stamping of single optical photons with 10 ns resolution
NASA Astrophysics Data System (ADS)
Chakaberia, Irakli; Cotlet, Mircea; Fisher-Levine, Merlin; Hodges, Diedra R.; Nguyen, Jayke; Nomerotski, Andrei
2017-05-01
High spatial and temporal resolution are key features for many modern applications, e.g. mass spectrometry, probing the structure of materials via neutron scattering, and studying molecular structure.1-5 Fast imaging also provides the capability of coincidence detection, and adding sensitivity to single optical photons, with the capability of timestamping them, broadens the field of potential applications further. Photon counting is already widely used in X-ray imaging,6 where the high energy of the photons makes their detection easier. TimepixCam is a novel optical imager7 that achieves high spatial resolution using an array of 256×256 pixels of 55 μm × 55 μm, each with individually controlled functionality. It is based on a thin-entrance-window silicon sensor bump-bonded to a Timepix ASIC.8 TimepixCam provides high quantum efficiency in the optical wavelength range (400-1000 nm). We perform timestamping of single photons with a time resolution of 20 ns by coupling TimepixCam to a fast image intensifier with a P47 phosphor screen. The fast emission time of the P47 phosphor9 allows us to preserve good time resolution while maintaining the capability to focus the optical output of the intensifier onto the 256×256 pixel Timepix sensor area. We demonstrate the capability of the TimepixCam + image intensifier setup to provide high-resolution single-photon timestamping, with an effective frame rate of 50 MHz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Y; Olsen, J.; Parikh, P.
2014-06-01
Purpose: Evaluate commonly used segmentation algorithms on a commercially available real-time MR image guided radiotherapy (MR-IGRT) system (ViewRay), and compare the strengths and weaknesses of each method, with the purpose of improving motion tracking for more accurate radiotherapy. Methods: MR motion images of the bladder, kidney, duodenum, and a liver tumor were acquired for three patients using a commercial on-board MR imaging system and an imaging protocol used during MR-IGRT. A series of 40 frames was selected for each case to cover at least 3 respiratory cycles. Thresholding, Canny edge detection, fuzzy k-means (FKM), k-harmonic means (KHM), and reaction-diffusion level set evolution (RD-LSE), along with the ViewRay treatment planning and delivery system (TPDS), were included in the comparisons. To evaluate the segmentation results, an expert manual contouring of the organs or tumor by a physician was used as ground truth. Values of sensitivity, specificity, Jaccard similarity, and Dice coefficient were computed for comparison. Results: In the segmentation of a single image frame, all methods successfully segmented the bladder and kidney, but only FKM, KHM and TPDS were able to segment the liver tumor and the duodenum. For segmenting motion image series, the TPDS method had the highest sensitivity, Jaccard, and Dice coefficients in segmenting bladder and kidney, while FKM and KHM had a slightly higher specificity. A similar pattern was observed when segmenting the liver tumor and the duodenum. The Canny method is not suitable for consistently segmenting motion frames in an automated process, while thresholding and RD-LSE cannot consistently segment a liver tumor and the duodenum. Conclusion: The study compared six different segmentation methods and showed the effectiveness of the ViewRay TPDS algorithm in segmenting motion images during MR-IGRT. Future studies include a selection of conformal segmentation methods based on image/organ-specific information, different filtering methods, and their influences on the segmentation results. Parag Parikh receives research grant from ViewRay. Sasa Mutic has consulting and research agreements with ViewRay. Yanle Hu receives travel reimbursement from ViewRay. Iwan Kawrakow and James Dempsey are ViewRay employees.
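The four overlap metrics used in the comparison above follow directly from the true/false positive/negative counts of a segmentation against the ground-truth contour. A minimal sketch using sets of foreground pixel indices; the example masks are invented for illustration, not data from the study.

```python
# Sensitivity, specificity, Jaccard and Dice from sets of foreground pixels.
def overlap_metrics(seg, gt, n_pixels):
    """seg, gt: sets of (row, col) foreground pixels; n_pixels: image size."""
    tp = len(seg & gt)
    fp = len(seg - gt)
    fn = len(gt - seg)
    tn = n_pixels - tp - fp - fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    jaccard = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, jaccard, dice

gt  = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # 16-px "organ"
seg = {(r, c) for r in range(3, 7) for c in range(2, 6)}  # contour off by a row
sens, spec, jac, dice = overlap_metrics(seg, gt, n_pixels=64)
```

Note that Dice = 2J/(1+J) for Jaccard J, so the two metrics rank methods identically; sensitivity and specificity add the complementary under/over-segmentation information.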
2017-01-01
We report an approach, named chemTEM, to follow chemical transformations at the single-molecule level with the electron beam of a transmission electron microscope (TEM) applied as both a tunable source of energy and a sub-angstrom imaging probe. Deposited on graphene, disk-shaped perchlorocoronene molecules are precluded from intermolecular interactions. This allows monomolecular transformations to be studied at the single-molecule level in real time and reveals chlorine elimination and reactive aryne formation as a key initial stage of multistep reactions initiated by the 80 keV e-beam. Under the same conditions, perchlorocoronene confined within a nanotube cavity, where the molecules are situated in very close proximity to each other, enables imaging of intermolecular reactions, starting with the Diels–Alder cycloaddition of a generated aryne, followed by rearrangement of the angular adduct to a planar polyaromatic structure and the formation of a perchlorinated zigzag nanoribbon of graphene as the final product. ChemTEM enables the entire process of polycondensation, including the formation of metastable intermediates, to be captured in a one-shot “movie”. A molecule with a similar size and shape but with a different chemical composition, octathio[8]circulene, under the same conditions undergoes another type of polycondensation via thiyl biradical generation and subsequent reaction leading to polythiophene nanoribbons with irregular edges incorporating bridging sulfur atoms. Graphene or carbon nanotubes supporting the individual molecules during chemTEM studies ensure that the elastic interactions of the molecules with the e-beam are the dominant forces that initiate and drive the reactions we image. Our ab initio DFT calculations explicitly incorporating the e-beam in the theoretical model correlate with the chemTEM observations and give a mechanism for direct control not only of the type of the reaction but also of the reaction rate. 
Selection of the appropriate e-beam energy and control of the dose rate in chemTEM enabled imaging of reactions on a time frame commensurate with TEM image capture rates, revealing atomistic mechanisms of previously unknown processes. PMID:28191929
Orientation sensors by defocused imaging of single gold nano-bipyramids
NASA Astrophysics Data System (ADS)
Zhang, Fanwei; Li, Qiang; Rao, Wenye; Hu, Hongjin; Gao, Ye; Wu, Lijun
2018-01-01
Optical probes for nanoscale orientation sensing have attracted much attention in the field of single-molecule detection. Noble metal nanoparticles (NPs), especially Au NPs, exhibit extraordinary plasmonic properties, great photostability, excellent biocompatibility and nontoxicity, and could thereby serve as alternative labels to the conventionally applied organic dyes or quantum dots. One of the most interesting types of metallic NPs is the Au nanorod (AuNR): its anisotropic emission, which accompanies its anisotropic shape, is potentially applicable to orientation sensing. Recently, we resolved the 3D orientation of single AuNRs within one frame by deliberately introducing an aberration (a slight shift of the dipole away from the focal plane) into the imaging system.1 This defocused imaging technique is based on the electron transition dipole approximation and the fact that dipole radiation exhibits an angular anisotropy. Since the photoluminescence quantum yield (PLQY) can be enhanced by the "lightning rod effect" (at a sharply angled surface) and by localized SPR modes, the PLQY of a single Au nano-bipyramid (AuNB), which has more sharp tips and edges, was found to be double that of AuNRs of the same effective size.2 Here, with 532 nm excitation, we find that the PL properties of individual AuNBs can be described by three perpendicularly arranged dipoles (with different ratios). Their PL defocused images are bright, clear and exhibit obvious anisotropy. These properties suggest that AuNBs are excellent candidates for orientation sensing labels in single-molecule detection.
NASA Astrophysics Data System (ADS)
Mitra, Debasis; Boutchko, Rostyslav; Ray, Judhajeet; Nilsen-Hamilton, Marit
2015-03-01
In this work we present a time-lapsed confocal microscopy image analysis technique for an automated gene expression study of multiple single living cells. Fluorescence Resonance Energy Transfer (FRET) is a technology by which molecule-to-molecule interactions are visualized. We analyzed a dynamic series of ~10^2 images obtained using confocal microscopy of fluorescence in yeast cells containing RNA reporters that give a FRET signal when the gene promoter is activated. For each time frame, separate images are available for three spectral channels and the integrated intensity snapshot of the system. A large number of time-lapsed frames must be analyzed to identify each cell individually across time and space as it moves in and out of the focal plane of the microscope, which makes this a difficult image processing problem. We propose an algorithm, based on the scale-space technique, which solves the problem satisfactorily and admits several directions for further improvement. The ability to rapidly measure changes in gene expression simultaneously in many cells in a population will open the opportunity for real-time studies of the heterogeneity of genetic response in a living cell population and the interactions between cells that occur in a mixed population, such as the ones found in the organs and tissues of multicellular organisms.
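The frame-to-frame cell identification problem described above can be illustrated with a minimal nearest-neighbour linker: detected cell centroids in consecutive frames are joined when they lie within a search radius, and cells that drift out of the focal plane simply go unmatched. This is a toy sketch of the linking step only (the paper's scale-space detection is not shown), and the coordinates are invented.

```python
# Greedy nearest-neighbour linking of cell centroids across two frames.
import math

def link_frames(prev, curr, max_dist=5.0):
    """Match each centroid in prev to its nearest unclaimed centroid in curr."""
    links, claimed = {}, set()
    for i, p in enumerate(prev):
        best, best_d = None, max_dist
        for j, c in enumerate(curr):
            d = math.dist(p, c)
            if j not in claimed and d <= best_d:
                best, best_d = j, d
        if best is not None:
            links[i] = best
            claimed.add(best)
    return links  # {index in prev: index in curr}; unmatched cells left out

frame0 = [(10.0, 10.0), (30.0, 12.0)]
frame1 = [(31.0, 13.0), (11.0, 9.0), (50.0, 50.0)]  # third cell newly in focus
links = link_frames(frame0, frame1)
```

A production tracker would add gap closing across several frames and a motion model, but the radius-gated assignment above is the core of identity maintenance over time.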
Hyperspectral optical tomography of intrinsic signals in the rat cortex
Konecky, Soren D.; Wilson, Robert H.; Hagen, Nathan; Mazhar, Amaan; Tkaczyk, Tomasz S.; Frostig, Ron D.; Tromberg, Bruce J.
2015-01-01
We introduce a tomographic approach for three-dimensional imaging of evoked hemodynamic activity, using broadband illumination and diffuse optical tomography (DOT) image reconstruction. Changes in diffuse reflectance in the rat somatosensory cortex due to stimulation of a single whisker were imaged at a frame rate of 5 Hz using a hyperspectral image mapping spectrometer. In each frame, images in 38 wavelength bands from 484 to 652 nm were acquired simultaneously. For data analysis, we developed a hyperspectral DOT algorithm that used the Rytov approximation to quantify changes in tissue concentration of oxyhemoglobin (ctHbO2) and deoxyhemoglobin (ctHb) in three dimensions. Using this algorithm, the maximum changes in ctHbO2 and ctHb were found to occur at 0.29±0.02 and 0.66±0.04 mm beneath the surface of the cortex, respectively. Rytov tomographic reconstructions revealed maximal spatially localized increases and decreases in ctHbO2 and ctHb of 321±53 and 555±96 nM, respectively, with these maximum changes occurring at 4±0.2 s poststimulus. The localized optical signals from the Rytov approximation were greater than those from modified Beer–Lambert, likely due in part to the inability of planar reflectance to account for partial volume effects. PMID:26835483
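The modified Beer–Lambert baseline that the authors compare against amounts to least-squares spectral unmixing: the measured change in optical density at each wavelength is modelled as a weighted sum of the two chromophore changes, dOD(λ) = ε_HbO2(λ)·ΔC_HbO2 + ε_Hb(λ)·ΔC_Hb (path length folded into the coefficients). A minimal sketch of the two-chromophore solve; the extinction profiles below are illustrative placeholders, not tabulated spectra.

```python
# Two-chromophore least-squares unmixing via the 2x2 normal equations.
def unmix(dod, e_hbo2, e_hb):
    """Solve for (dC_HbO2, dC_Hb) from per-wavelength optical-density changes."""
    a = sum(x * x for x in e_hbo2)
    b = sum(x * y for x, y in zip(e_hbo2, e_hb))
    c = sum(y * y for y in e_hb)
    p = sum(x * d for x, d in zip(e_hbo2, dod))
    q = sum(y * d for y, d in zip(e_hb, dod))
    det = a * c - b * b
    return (c * p - b * q) / det, (a * q - b * p) / det

e_hbo2 = [1.0, 0.8, 0.3, 0.1]   # assumed extinction profile for HbO2
e_hb   = [0.4, 0.6, 0.9, 1.1]   # assumed extinction profile for Hb
true = (0.32, -0.55)            # synthetic concentration changes
dod = [eo * true[0] + eh * true[1] for eo, eh in zip(e_hbo2, e_hb)]
dc_hbo2, dc_hb = unmix(dod, e_hbo2, e_hb)
```

With 38 bands instead of 4 the same normal equations apply; the tomographic Rytov method in the paper additionally spreads the fit over voxels in depth, which is what corrects the partial-volume error of this planar fit.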
Pappas, E P; Seimenis, I; Moutsatsos, A; Georgiou, E; Nomikos, P; Karaiskos, P
2016-10-07
This work characterizes system-related geometric distortions present in MRIs used in Gamma Knife (GK) stereotactic radiosurgery (SRS) treatment planning. A custom-made phantom, compatible with the Leksell stereotactic frame model G and encompassing 947 control points (CPs), was utilized. MR images were obtained with and without the frame, thus allowing discrimination of frame-induced distortions. In the absence of the frame and following compensation for field inhomogeneities, the measured average CP displacement owing to gradient nonlinearities was 0.53 mm. In the presence of the frame, by contrast, the detected distortion was greatly increased (up to about 5 mm) in the vicinity of the frame base, due to eddy currents induced in the closed loop of its aluminum material. Frame-related distortion vanished approximately 90 mm from the frame base. Although the region with the maximum observed distortion may not lie within the GK treatable volume, the presence of the frame results in distortion of the order of 1.5 mm at a 7 cm distance from the center of the Leksell space. Additionally, severe distortions observed outside the treatable volume could compromise delivery accuracy, mainly by adversely affecting the registration process (e.g. the position of the lower part of the N-shaped fiducials used to define the stereotactic space may be mis-registered). Images acquired with a modified version of the frame, developed by replacing its front side with an acrylic bar, thus interrupting the closed aluminum loop and reducing the induced eddy currents, were shown to benefit from relatively reduced distortion. System-related distortion was also identified in patient MR images. Using corresponding CT angiography images as a reference, an offset of 1.1 mm was detected for two vessels lying in close proximity to the frame base, while excellent spatial agreement was observed for a vessel far from the frame base.
Volumetric Two-photon Imaging of Neurons Using Stereoscopy (vTwINS)
Song, Alexander; Charles, Adam S.; Koay, Sue Ann; Gauthier, Jeff L.; Thiberge, Stephan Y.; Pillow, Jonathan W.; Tank, David W.
2017-01-01
Two-photon laser scanning microscopy of calcium dynamics using fluorescent indicators is a widely used imaging method for large scale recording of neural activity in vivo. Here we introduce volumetric Two-photon Imaging of Neurons using Stereoscopy (vTwINS), a volumetric calcium imaging method that employs an elongated, V-shaped point spread function to image a 3D brain volume. Single neurons project to spatially displaced “image pairs” in the resulting 2D image, and the separation distance between images is proportional to depth in the volume. To demix the fluorescence time series of individual neurons, we introduce a novel orthogonal matching pursuit algorithm that also infers source locations within the 3D volume. We illustrate vTwINS by imaging neural population activity in mouse primary visual cortex and hippocampus. Our results demonstrate that vTwINS provides an effective method for volumetric two-photon calcium imaging that increases the number of neurons recorded while maintaining a high frame-rate. PMID:28319111
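In vTwINS, depth is read out from the separation of a neuron's two images under the V-shaped point spread function: separation is proportional to depth. A toy sketch of that geometry under a simplifying assumption (a symmetric V with a fixed half-angle, a value we invent for illustration): at depth z the image pair is separated by s = 2·z·tan(θ), so depth recovers as z = s / (2·tan(θ)).

```python
# Invert the V-shaped-PSF geometry: image-pair separation -> depth in volume.
import math

def depth_from_separation(s_um, half_angle_deg=26.5):
    """Depth z (um) from image-pair separation s (um); half-angle is assumed."""
    return s_um / (2.0 * math.tan(math.radians(half_angle_deg)))

# Synthetic check: a neuron placed 50 um deep produces a pair whose
# separation maps back to 50 um.
sep = 2 * 50.0 * math.tan(math.radians(26.5))
z = depth_from_separation(sep)
```

The paper's actual demixing uses an orthogonal matching pursuit over the fluorescence movie; the linear separation-to-depth map above is only the geometric readout that makes that demixing identifiable.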
[The dilemma of data flood - reducing costs and increasing quality control].
Gassmann, B
2012-09-05
Digitization is found everywhere in sonography. Printing ultrasound images on a video printer with special paper is now done only in isolated cases; sonographic procedures are increasingly documented by saving image sequences instead of still frames. Echocardiography is now routinely recorded as so-called R-R loops, and in contrast-enhanced ultrasound the recording of sequences is necessary to gain a full impression of the vascular structure of interest. Handling this flood of data in daily practice requires specialized software. Comparison of stored and recent images or sequences at follow-up is very helpful. Quality control of the ultrasound system and the transducers nevertheless remains simple and safe: using a phantom for detail resolution and general image quality, the stored images and sequences are comparable over the life cycle of the system, and follow-up comparison reveals decreased image quality and transducer defects immediately.
NASA Astrophysics Data System (ADS)
Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel
2012-10-01
The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.
Lipid shedding from single oscillating microbubbles.
Luan, Ying; Lajoinie, Guillaume; Gelderblom, Erik; Skachkov, Ilya; van der Steen, Antonius F W; Vos, Hendrik J; Versluis, Michel; De Jong, Nico
2014-08-01
Lipid-coated microbubbles are used clinically as contrast agents for ultrasound imaging and are being developed for a variety of therapeutic applications. The lipid encapsulation and shedding of the lipids by acoustic driving of the microbubble has a crucial role in microbubble stability and in ultrasound-triggered drug delivery; however, little is known about the dynamics of lipid shedding under ultrasound excitation. Here we describe a study that optically characterized the lipid shedding behavior of individual microbubbles on a time scale of nanoseconds to microseconds. A single ultrasound burst of 20 to 1000 cycles, with a frequency of 1 MHz and an acoustic pressure varying from 50 to 425 kPa, was applied. In the first step, high-speed fluorescence imaging was performed at 150,000 frames per second to capture the instantaneous dynamics of lipid shedding. Lipid detachment was observed within the first few cycles of ultrasound. Subsequently, the detached lipids were transported by the surrounding flow field, either parallel to the focal plane (in-plane shedding) or in a trajectory perpendicular to the focal plane (out-of-plane shedding). In the second step, the onset of lipid shedding was studied as a function of the acoustic driving parameters, for example, pressure, number of cycles, bubble size and oscillation amplitude. The latter was recorded with an ultrafast framing camera running at 10 million frames per second. A threshold for lipid shedding under ultrasound excitation was found for a relative bubble oscillation amplitude >30%. Lipid shedding was found to be reproducible, indicating that the shedding event can be controlled. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.
2014-01-01
In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248
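The detector-dependent trade-off described above can be captured with a toy noise model: a fixed photon budget split over more frames means fewer photons per frame. For a CCD, each frame adds readout noise, so the summed squared SNR degrades as the frame count grows; for an EMCCD, readout noise is negligible but an excess-noise factor of about 2 applies, so the photon-noise side of the budget is frame-rate independent. This sketch models only the noise trade-off (not the motion-sampling gain the paper also analyzes), and all numbers are illustrative.

```python
# Summed squared SNR for a fixed photon budget split across `frames` frames.
def snr2_ccd(n_total, frames, read_noise=6.0):
    """CCD: per-frame variance = shot noise + readout noise variance."""
    n = n_total / frames                    # photons per frame
    return frames * n**2 / (n + read_noise**2)

def snr2_emccd(n_total, frames, excess=2.0):
    """EMCCD: readout noise negligible, excess-noise factor ~2 applies."""
    n = n_total / frames
    return frames * n**2 / (excess * n)     # = n_total / excess, frame-rate free

budget = 10_000.0
ccd_slow, ccd_fast = snr2_ccd(budget, 10), snr2_ccd(budget, 1000)
em_slow, em_fast = snr2_emccd(budget, 10), snr2_emccd(budget, 1000)
```

In this model the CCD figure collapses once per-frame photons approach the squared readout noise, while the EMCCD figure is flat, consistent with the paper's finding that only the EMCCD benefits from ever-higher frame rates.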
High-frame-rate digital radiographic videography
NASA Astrophysics Data System (ADS)
King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott
1994-10-01
High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an X-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM, demonstrated the system response to a high velocity/high contrast target. By gating the P-20 phosphor image from the X-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.
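The rotating-disk numbers above imply a simple blur estimate: the rim of a 9-inch (0.2286 m) disk at 1000 RPM moves at v = 2πrf, and an optical gate of width t smears a marker by roughly v·t. A sketch of that arithmetic (our own back-of-envelope check, not a figure from the paper):

```python
# Approximate motion blur of a marker at the disk rim during the optical gate.
import math

def blur_mm(radius_m, rpm, gate_s):
    """Blur length (mm) = rim speed (2*pi*r*f) times gate duration."""
    v = 2 * math.pi * radius_m * (rpm / 60.0)  # rim speed, m/s
    return v * gate_s * 1000.0

b = blur_mm(radius_m=0.2286 / 2, rpm=1000, gate_s=100e-6)
```

The result is on the order of a millimetre, which is why the 100-microsecond gate through the second intensifier, applied after the prompt phosphor decay, was enough to remove most of the visible motion blur.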
NASA Technical Reports Server (NTRS)
Dotson, Jessie L.; Batalha, Natalie; Bryson, Stephen T.; Caldwell, Douglas A.; Clarke, Bruce D.
2010-01-01
NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100 square degree field over a 3.5 yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10^5 preselected stellar targets. The majority of the Kepler field, comprising 4 × 10^6 sources with m_v < 20, is sampled at a much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler team employ these images for astrometric and photometric reference but make them available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely available long-cadence legacy of photometric variation across a swathe of the Galactic disk.
High-speed adaptive optics line scan confocal retinal imaging for human eye
Wang, Xiaolin; Zhang, Yuhua
2017-01-01
Purpose: Continuous and rapid eye movement causes significant intra-frame distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high speed adaptive optics line scan confocal retinal imaging system. Methods: A high speed line camera was employed to acquire retinal images, and custom adaptive optics was developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal to noise ratio were assessed in a model eye and in the living human eye. The improvement of imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in living human eyes at frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results: The device produced retinal images with cellular level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions: We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458
Optimization of cell morphology measurement via single-molecule tracking PALM.
Frost, Nicholas A; Lu, Hsiangmin E; Blanpied, Thomas A
2012-01-01
In neurons, the shape of dendritic spines relates to synapse function, which is rapidly altered during experience-dependent neural plasticity. The small size of spines makes detailed measurement of their morphology in living cells best suited to super-resolution imaging techniques. The distribution of molecular positions mapped via live-cell Photoactivated Localization Microscopy (PALM) is a powerful approach, but molecular motion complicates this analysis and can degrade overall resolution of the morphological reconstruction. Nevertheless, the motion is of additional interest because tracking single molecules provides diffusion coefficients, bound fraction, and other key functional parameters. We used Monte Carlo simulations to examine features of single-molecule tracking of practical utility for the simultaneous determination of cell morphology. We find that the accuracy of determining both distance and angle of motion depends heavily on the precision with which molecules are localized. Strikingly, diffusion within a bounded region resulted in an inward bias of localizations away from the edges, inaccurately reflecting the region structure. This inward bias additionally resulted in a counterintuitive reduction of the measured diffusion coefficient for fast-moving molecules; this effect was accentuated by the long camera exposures typically used in single-molecule tracking. Thus, accurate determination of cell morphology from rapidly moving molecules requires the use of short integration times within each image to minimize artifacts caused by motion during image acquisition. Sequential imaging of neuronal processes using excitation pulses of either 2 ms or 10 ms within imaging frames confirmed this: processes appeared erroneously thinner when imaged using the longer excitation pulse. Using this pulsed excitation approach, we show that PALM can be used to image spine and spine neck morphology in living neurons.
These results clarify a number of issues involved in interpretation of single-molecule data in living cells and provide a method to minimize artifacts in single-molecule experiments.
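One of the effects described above, the reduction of the measured diffusion coefficient when positions are averaged over the camera exposure, can be reproduced with a minimal Monte Carlo sketch: a 1D Brownian trajectory is averaged over each frame, and the apparent D is estimated from the mean squared displacement of those exposure-averaged positions (the classic result is a factor 2/3 for free diffusion with full-frame averaging). This sketch omits the bounded region; all parameters are illustrative.

```python
# Motion-blur reduction of the apparent diffusion coefficient (1D, free).
import random

def apparent_D(D=0.1, dt_frame=0.01, substeps=50, n_frames=20000, seed=1):
    """Estimate D from frame-averaged (motion-blurred) positions."""
    rng = random.Random(seed)
    dt = dt_frame / substeps
    sigma = (2 * D * dt) ** 0.5          # per-substep displacement std
    x, centroids = 0.0, []
    for _ in range(n_frames):
        acc = 0.0
        for _ in range(substeps):
            x += rng.gauss(0.0, sigma)   # Brownian substep
            acc += x
        centroids.append(acc / substeps) # exposure-averaged localization
    msd = sum((b - a) ** 2 for a, b in zip(centroids, centroids[1:]))
    msd /= (len(centroids) - 1)
    return msd / (2 * dt_frame)          # naive D estimate from blurred MSD

d_app = apparent_D()  # true D is 0.1; blurred estimate should be lower
```

Shortening the excitation pulse within each frame, as the authors do, shrinks the averaging window and pushes the estimate back toward the true D.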
NASA Astrophysics Data System (ADS)
Oswald, Helmut; Mueller-Jones, Kay; Builtjes, Jan; Fleck, Eckart
1998-07-01
The developments in information technologies -- computer hardware, networking and storage media -- have led to expectations that these advances make it possible to replace 35 mm film completely with digital techniques in the catheter laboratory. Besides its role as an archival medium, cine film is used as the major image review and exchange medium in cardiology. None of today's technologies can completely fulfill the requirements for replacing cine film. One of the major drawbacks of cine film is that it can be accessed only at one time and in one place. For the four catheter laboratories in our institution we have designed a complementary concept combining the CD-R, also called CD-Medical, as a single-patient storage and exchange medium, with a digital archive for on-line access and image review of selected frames or short sequences on adequate medical workstations. The image data from various modalities, as well as all digital documents relating to a patient, are part of an electronic patient record. Access, processing and display of documents are supported by an integrated medical application.
Role of "the frame cycle time" in portal dose imaging using an aS500-II EPID.
Al Kattar Elbalaa, Zeina; Foulquier, Jean Noel; Orthuon, Alexandre; Elbalaa, Hanna; Touboul, Emmanuel
2009-09-01
This paper evaluates the role of an acquisition parameter, the frame cycle time (FCT), in the performance of an aS500-II EPID. The work rests on a study of the Varian aS500-II EPID and the image acquisition system 3 (IAS3). We are interested in integrated acquisition using the asynchronous mode. To better understand image acquisition, we investigated the influence of the FCT on the acquisition speed, the pixel value of the averaged gray-scale frame and the noise, using 6 and 15 MV X-ray beams and dose rates of 1-6 Gy/min on 2100 C/D linacs. In the integrated mode not synchronized to beam pulses, a single parameter, the FCT, influences the pixel value of the averaged gray-scale frame, which is proportional to it. When the FCT is below 55 ms (acquisition speed above 18 frames/s), the acquisition speed becomes unstable and leads to fluctuation of the portal dose response. Timing instability and saturation are detected when the dose per frame exceeds 1.53 MU/frame. Rules were deduced to avoid saturation and to optimize this dosimetric mode. The choice of acquisition parameters is essential for accurate portal dose imaging.
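The two rules reported above can be turned into a small pre-acquisition check: dose per frame is the machine dose rate times the frame cycle time, and both the 55 ms stability limit and the 1.53 MU/frame saturation limit must hold. The thresholds come from the abstract; the helper names, and the assumption that the machine is calibrated so 1 Gy/min corresponds to 100 MU/min, are our own.

```python
# Pre-acquisition sanity check for the integrated asynchronous EPID mode.
def dose_per_frame_mu(dose_rate_mu_min, fct_ms):
    """MU accumulated during one frame cycle."""
    return dose_rate_mu_min / 60.0 * (fct_ms / 1000.0)

def acquisition_ok(dose_rate_mu_min, fct_ms):
    """FCT >= 55 ms for stable readout, and <= 1.53 MU per frame."""
    stable_speed = fct_ms >= 55.0        # <55 ms -> >18 frames/s, unstable
    no_saturation = dose_per_frame_mu(dose_rate_mu_min, fct_ms) <= 1.53
    return stable_speed and no_saturation

ok = acquisition_ok(dose_rate_mu_min=600, fct_ms=100)  # 6 Gy/min-class beam
```

At 600 MU/min a 100 ms FCT accumulates 1.0 MU/frame, inside both limits; doubling the FCT to 200 ms would exceed the 1.53 MU/frame saturation bound.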
High speed three-dimensional laser scanner with real time processing
NASA Technical Reports Server (NTRS)
Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)
2008-01-01
A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter, coupled to the imaging sensor, at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line. The imaging sensor images the laser reflections from the detail to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame and for computing the range to the detail using the height, depression angle, and/or offset. The computer displays the range to the area and the detail thereon covered by the image frame.
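The underlying geometry is ordinary laser-line triangulation. As a sketch (ours, not the patent's claimed implementation): a sheet of light at depression angle theta strikes the reference plane at a known position, and a detail of height z intercepts the sheet early, shifting the imaged line toward the emitter by z / tan(theta).

```python
import math

def detail_height(line_shift, depression_angle_rad):
    """Height of a detail from the apparent shift of the laser line.

    The line shifts toward the emitter by z / tan(theta) for a detail
    of height z, hence z = shift * tan(theta).
    """
    return line_shift * math.tan(depression_angle_rad)

def range_to_detail(sensor_height, detail_z):
    """Range (standoff) from the imaging sensor to the detail surface."""
    return sensor_height - detail_z
```

At a 45-degree depression angle the shift equals the height, which makes calibration checks easy.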
Quantum image coding with a reference-frame-independent scheme
NASA Astrophysics Data System (ADS)
Chapeau-Blondeau, François; Belin, Etienne
2016-07-01
For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.
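The misalignment penalty for the direct (non-invariant) coding can be illustrated with a standard single-qubit result, which we add here for orientation (it is our illustration, not the paper's derivation): a qubit prepared in the emitter's basis and measured in a receiver basis rotated by theta is decoded wrongly with probability sin^2(theta / 2), while the entangled two-qubit encoding is, ideally, insensitive to the rotation.

```python
import math

def direct_coding_error(misalignment_rad):
    """Bit-flip probability for one-qubit-per-pixel direct coding when
    the receiver's measurement basis is rotated by the misalignment
    angle: p = sin^2(theta / 2)."""
    return math.sin(misalignment_rad / 2.0) ** 2

def invariant_coding_error(_misalignment_rad):
    """The two-qubit entangled encoding decodes exactly for any relative
    frame rotation, so the ideal error probability is zero."""
    return 0.0
```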
Feature and Intensity Based Medical Image Registration Using Particle Swarm Optimization.
Abdel-Basset, Mohamed; Fakhry, Ahmed E; El-Henawy, Ibrahim; Qiu, Tie; Sangaiah, Arun Kumar
2017-11-03
Image registration is an important aspect of medical image analysis and finds use in a variety of medical applications. Examples include diagnosis, pre/post-surgery guidance, and comparing/merging/integrating images from multiple modalities such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT). Whether registering images across modalities for a single patient or across patients for a single modality, registration is an effective way to combine information from different images into a normalized frame of reference. Registered datasets can be used to provide information relating to the structure, function, and pathology of the organ or individual being imaged. In this paper a hybrid approach for medical image registration has been developed. It employs a modified Mutual Information (MI) measure as the similarity metric together with the Particle Swarm Optimization (PSO) method. Computation of mutual information is modified using a weighted linear combination of image intensity and image gradient vector flow (GVF) intensity. In this manner, statistical as well as spatial image information is included in the registration process. Maximization of the modified mutual information is carried out with Particle Swarm Optimization, which is easy to implement and requires few tuning parameters. The developed approach has been tested and verified successfully on a number of medical image datasets that include images with missing parts, noise contamination, and/or different modalities (CT, MRI). The registration results indicate that the proposed model is accurate and effective, and show the positive contribution of including both statistical and spatial image information in the developed approach.
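The modified similarity metric described above can be sketched as a weighted sum of two mutual-information terms, one over intensities and one over gradient-derived values. This is a minimal discrete-histogram sketch under our own assumptions (equal-length label sequences, natural-log MI); the paper's exact weighting and GVF computation are not reproduced here.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Mutual information (nats) between two equally long sequences of
    discrete intensity labels, via joint and marginal histograms."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))   # joint counts
    px = Counter(xs)             # marginal counts
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) written with counts: c * n / (px * py)
        mi += p_joint * math.log(p_joint * n * n / (px[x] * py[y]))
    return mi

def modified_mi(img_a, img_b, grad_a, grad_b, w=0.5):
    """Weighted linear combination of intensity MI and gradient-field MI,
    in the spirit of the modified metric described above."""
    return w * mutual_information(img_a, img_b) + \
        (1.0 - w) * mutual_information(grad_a, grad_b)
```

A PSO routine would then search the transform parameters that maximize `modified_mi` between the fixed and warped moving image.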
Li, Xueming; Zheng, Shawn; Agard, David A.; Cheng, Yifan
2015-01-01
Newly developed direct electron detection cameras have a high image output frame rate that enables recording dose-fractionated image stacks of frozen hydrated biological samples by electron cryomicroscopy (cryoEM). Such novel image acquisition schemes provide opportunities to analyze cryoEM data in ways that were previously impossible. The file size of a dose-fractionated image stack is 20-60 times larger than that of a single image; efficient data acquisition and on-the-fly analysis of a large number of dose-fractionated image stacks thus become a serious challenge for any cryoEM data acquisition system. We have developed a computer-assisted system, named UCSFImage4, for semi-automated cryoEM image acquisition that implements an asynchronous data acquisition scheme. This facilitates efficient acquisition, on-the-fly motion correction, and CTF analysis of dose-fractionated image stacks with a total time of ~60 seconds/exposure. Here we report the technical details and configuration of this system. PMID:26370395
WiseView: Visualizing motion and variability of faint WISE sources
NASA Astrophysics Data System (ADS)
Caselden, Dan; Westin, Paul, III; Meisner, Aaron; Kuchner, Marc; Colin, Guillaume
2018-06-01
WiseView renders image blinks of Wide-field Infrared Survey Explorer (WISE) coadds spanning a multi-year time baseline in a browser. The software allows for easy visual identification of motion and variability for sources far beyond the single-frame detection limit, a key threshold not surmounted by many studies. WiseView transparently gathers small image cutouts drawn from many terabytes of unWISE coadds, facilitating access to this large and unique dataset. Users need only input the coordinates of interest and can interactively tune parameters including the image stretch, colormap and blink rate. WiseView was developed in the context of the Backyard Worlds: Planet 9 citizen science project, and has enabled hundreds of brown dwarf candidate discoveries by citizen scientists and professional astronomers.
Multiple-frame IR photo-recorder KIT-3M
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E; Wilkins, P; Nebeker, N
2006-05-15
This paper reports experimental results from a high-speed multi-frame infrared camera developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the 1-10 micrometer spectral range into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 C to 2000 C with an exposure time of 1-20 µs at a frame frequency of up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive and aircraft industries, and pulsed-power electric experiments, and for measuring the spatial mode characteristics of IR-laser radiation.
Multiport backside-illuminated CCD imagers for high-frame-rate camera applications
NASA Astrophysics Data System (ADS)
Levine, Peter A.; Sauer, Donald J.; Hseuh, Fu-Lung; Shallcross, Frank V.; Taylor, Gordon C.; Meray, Grazyna M.; Tower, John R.; Harrison, Lorna J.; Lawler, William B.
1994-05-01
Two multiport, second-generation CCD imager designs have been fabricated and successfully tested: a 16-port 512 X 512 array and a 32-port 1024 X 1024 array. Both designs are back-illuminated, have on-chip CDS and lateral blooming control, and use a split vertical frame-transfer architecture with full frame storage. The 512 X 512 device has been operated at rates over 800 frames per second; the 1024 X 1024 device at rates over 300 frames per second. The major changes incorporated in the second-generation design are: a reduction in gate length in the output area for improved high-clock-rate performance; modified on-chip CDS circuitry for reduced noise; and optimized implants that improve blooming control at lower clock amplitude. This paper discusses the imager design improvements and presents measured performance results at high and moderate frame rates. The design and performance of three moderate-frame-rate cameras are also discussed.
Full-Frame Reference for Test Photo of Moon
2005-09-10
This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.
Light-pollution measurement with the Wide-field all-sky image analyzing monitoring system
NASA Astrophysics Data System (ADS)
Vítek, S.
2017-07-01
The purpose of this experiment was to measure light pollution in Prague, the capital of the Czech Republic. The measuring instrument is a calibrated consumer-level digital single-lens reflex camera with an IR-cut filter; the paper therefore reports results of measuring and monitoring light pollution in the 390-700 nm wavelength range, which most affects visual-range astronomy. Combining frames of different exposure times taken with the digital camera coupled to a fish-eye lens allows the creation of high dynamic range images containing meaningful values, so such a system can provide absolute values of the sky brightness.
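Combining differently exposed frames into a radiance estimate is typically done with a weighted average of per-exposure radiance estimates. The sketch below is a generic HDR merge under our own assumptions (linear sensor response, values normalized to [0, 1], a hat weighting that discounts under- and over-exposed pixels); it is not the calibration pipeline of this particular system.

```python
def merge_hdr(exposures):
    """Estimate per-pixel scene radiance from multiple exposures.

    `exposures` is a list of (pixel_values, exposure_time_s) pairs with
    values normalized to [0, 1]. Each frame contributes value / time,
    weighted by a hat function peaking at mid-gray.
    """
    def weight(v):
        return 1.0 - abs(2.0 * v - 1.0)  # 0 at the extremes, 1 at mid-gray

    n = len(exposures[0][0])
    radiance = []
    for i in range(n):
        num = den = 0.0
        for values, t in exposures:
            w = weight(values[i])
            num += w * values[i] / t
            den += w
        radiance.append(num / den if den > 0 else 0.0)
    return radiance
```

Two mutually consistent exposures of the same scene point recover the same radiance value, which is a useful sanity check on the calibration.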
Murcia, Michael J; Minner, Daniel E; Mustata, Gina-Mirela; Ritchie, Kenneth; Naumann, Christoph A
2008-11-12
The current study reports the facile design of quantum dot (QD)-conjugated lipids and their application to high-speed tracking experiments on cell surfaces. CdSe/ZnS core/shell QDs with two types of hydrophilic coatings, 2-(2-aminoethoxy)ethanol (AEE) and a 60:40 molar mixture of 1,2-dipalmitoyl-sn-glycero-3-phosphocholine and 1,2-dipalmitoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)-2000], are conjugated to sulfhydryl lipids via maleimide reactive groups on the QD surface. Prior to lipid conjugation, the colloidal stability of both types of coated QDs in aqueous solution is confirmed using fluorescence correlation spectroscopy. A sensitive assay based on single-lipid tracking experiments on a planar solid-supported phospholipid bilayer is presented that establishes conditions of monovalent conjugation of QDs to lipids. The QD-lipids are then employed as single-molecule tracking probes in plasma membranes of several cell types. Initial tracking experiments at a frame rate of 30 frames/s corroborate that QD-lipids diffuse like dye-labeled lipids in the plasma membrane of COS-7, HEK-293, 3T3, and NRK cells, thus confirming monovalent labeling. Finally, QD-lipids are applied for the first time to high-speed single-molecule imaging by tracking their lateral mobility in the plasma membrane of NRK fibroblasts at up to 1000 frames/s. Our high-speed tracking data, which are in excellent agreement with previous tracking experiments that used larger (40 nm) Au labels, not only push the time resolution in long-time, continuous fluorescence-based single-molecule tracking but also show that highly photostable, photoluminescent nanoprobes of 10 nm size can be employed (AEE-coated QDs). These probes are also attractive because, unlike Au nanoparticles, they facilitate complex multicolor experiments.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitation of single-camera approaches is that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
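The recursive range estimation idea can be reduced to a toy scalar case. In this sketch (ours, far simpler than the paper's full EKF) the state is inverse depth rho = 1/Z; for a purely lateral camera translation t the measured feature disparity is d = f * t * rho, which is linear in rho, so a plain scalar Kalman filter suffices. All parameter names and values are illustrative.

```python
def estimate_range(focal_px, translations, disparities,
                   rho0=0.01, p0=1.0, r=1e-4):
    """Recursively estimate a feature's range with a scalar Kalman
    filter on inverse depth rho = 1/Z.

    translations: per-frame lateral camera translations (same units as Z)
    disparities:  per-frame measured feature displacements (pixels)
    """
    rho, p = rho0, p0
    for t, d in zip(translations, disparities):
        h = focal_px * t               # measurement model: d = h * rho
        k = p * h / (h * p * h + r)    # Kalman gain
        rho += k * (d - h * rho)       # innovation update
        p *= (1.0 - k * h)             # covariance update
    return 1.0 / rho
```

With noiseless measurements consistent with a range of 20 units, the estimate converges to 20 after a few frames even from a poor initial guess.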
SU-E-T-171: Missing Dose in Integrated EPID Images.
King, B; Seymour, E; Nitschke, K
2012-06-01
A dosimetric artifact has been observed with Varian EPIDs in the presence of beam interrupts. This work determines the root cause and significance of this artifact. Integrated mode EPID images were acquired both with and without a manual beam interrupt for rectangular, sliding gap IMRT fields. Simultaneously, the individual frames were captured on a separate computer using a frame-grabber system. Synchronization of the individual frames with the integrated images allowed the determination of precisely how the EPID behaved during regular operation as well as when a beam interrupt was triggered. The ability of the EPID to reliably monitor a treatment in the presence of beam interrupts was tested by comparing the difference between the interrupt and non-interrupt images. The interrupted images acquired in integrated acquisition mode displayed unanticipated behaviour in the region of the image where the leaves were located when the beam interrupt was triggered. Differences greater than 5% were observed as a result of the interrupt in some cases, with the discrepancies occurring in a non-uniform manner across the imager. The differences measured were not repeatable from one measurement to another. Examination of the individual frames showed that the EPID was consistently losing a small amount of dose at the termination of every exposure. Inclusion of one additional frame in every image rectified the unexpected behaviour, reducing the differences to 1% or less. Although integrated EPID images nominally capture the entire dose delivered during an exposure, a small amount of dose is consistently being lost at the end of every exposure. The amount of missing dose is random, depending on the exact beam termination time within a frame. Inclusion of an extra frame at the end of each exposure effectively rectifies the problem, making the EPID more suitable for clinical dosimetry applications. 
The authors received support from Varian Medical Systems in the form of software and equipment loans as well as technical support. © 2012 American Association of Physicists in Medicine.
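The missing-dose mechanism described above is easy to reproduce in a toy simulation (ours, not the authors' frame-grabber analysis): charge accumulates while the beam is on and is read out once per frame period, so whatever accumulates after the last complete readout is lost unless one extra frame is read.

```python
def integrated_signal(beam_time, frame_period, extra_frame=False):
    """Simulate an integrating EPID with unit dose rate.

    Charge accumulates while the beam is on and is read out once per
    frame period. Without the extra frame, charge deposited after the
    last complete readout is never read, so that dose is lost.
    """
    complete = int(beam_time // frame_period)
    signal = complete * frame_period                   # fully read frames
    if extra_frame:
        signal += beam_time - complete * frame_period  # residual charge
    return signal
```

The shortfall depends only on where within a frame the beam terminates, which matches the observation that the missing dose is random from one measurement to the next.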
NASA Astrophysics Data System (ADS)
Li, Ke; Chen, Guang-Hong
2016-03-01
Cerebral CT perfusion (CTP) imaging plays an important role in the diagnosis and treatment of acute ischemic stroke. Meanwhile, the reliability of CTP-based ischemic lesion detection has been challenged due to the noisy appearance and low signal-to-noise ratio of CTP maps. To reduce noise and improve image quality, a rigorous study of the noise transfer properties of CTP systems is highly desirable to provide the needed scientific guidance. This paper concerns how noise in the CTP source images propagates to the final CTP maps. Both theoretical derivations and subsequent validation experiments demonstrated that the noise level of the background frames plays a dominant role in the noise of the cerebral blood volume (CBV) maps. This directly contradicts the general belief that noise in the non-background image frames is of greater importance in CTP imaging. The study found that when the radiation doses delivered to the background frames and to all non-background frames are equal, the lowest noise variance is achieved in the final CBV maps. This novel equality condition provides a practical means to optimize radiation dose delivery in CTP data acquisition: radiation exposures should be modulated between background and non-background frames so that the above equality condition is satisfied. For several typical CTP acquisition protocols, numerical simulations and an in vivo canine experiment demonstrated that CBV noise can be effectively reduced using the proposed exposure modulation method.
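The equality condition has a simple variance-budget intuition, which we sketch here under our own simplifying assumptions (not the paper's full derivation): if each group's contribution to the CBV variance scales inversely with the dose it receives, then splitting a fixed total dose D as f*D to background frames and (1-f)*D to non-background frames gives var(f) ~ k/(f*D) + k/((1-f)*D), which is minimized at f = 1/2, i.e. an equal split.

```python
def cbv_noise_variance(f_background, total_dose=1.0, k=1.0):
    """Toy CBV noise variance vs. the fraction of total dose given to
    the background frames: each group's contribution scales inversely
    with its dose, so var ~ k/(f*D) + k/((1-f)*D)."""
    d = total_dose
    return k / (f_background * d) + k / ((1.0 - f_background) * d)

# Scan dose fractions: the minimum falls at an equal split, i.e. the
# background frames receive the same dose as all non-background frames.
fractions = [i / 100.0 for i in range(1, 100)]
best = min(fractions, key=cbv_noise_variance)
```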
Composite ultrasound imaging apparatus and method
Morimoto, Alan K.; Bow, Jr., Wallace J.; Strong, David Scott; Dickey, Fred M.
1998-01-01
An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image.
Single image super resolution algorithm based on edge interpolation in NSCT domain
NASA Astrophysics Data System (ADS)
Zhang, Mengqun; Zhang, Wei; He, Xinyu
2017-11-01
In order to preserve texture and edge information and to improve the spatial resolution of a single frame, a super-resolution algorithm based on the non-subsampled contourlet transform (NSCT) is proposed. The original low-resolution image is transformed by the NSCT, and the directional sub-band coefficients of the transform domain are obtained. According to the scale factor, the high-frequency sub-band coefficients are amplified to the desired resolution by an interpolation method based on the edge direction. For high-frequency sub-band coefficients containing noise and weak targets, Bayesian shrinkage is used to calculate a threshold value; whether a coefficient below the threshold is noise is determined from the correlation among the sub-bands of the same scale, and it is de-noised accordingly. An anisotropic diffusion filter is used to effectively enhance weak targets in low-contrast regions between target and background. Finally, the low-frequency sub-band is amplified to the desired resolution by bilinear interpolation and combined with the high-frequency sub-band coefficients after de-noising and small-target enhancement; the inverse NSCT is then applied to obtain the image at the desired resolution. To verify its effectiveness, the proposed algorithm and several common image reconstruction methods were tested on synthetic, motion-blurred, and hyperspectral images. The experimental results show that, compared with traditional single-frame super-resolution algorithms, the proposed algorithm obtains smooth edges and good texture features; the reconstructed image structure is well preserved and noise is suppressed to some extent.
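The Bayesian shrinkage step can be sketched with the classic BayesShrink recipe, which we add as an illustration of the kind of threshold computation involved (the paper's exact estimator may differ): the noise level is taken from the median absolute coefficient, the signal level from the excess sub-band variance, and the threshold is T = sigma_n^2 / sigma_x.

```python
import math

def soft_threshold(c, t):
    """Shrink a coefficient toward zero by t (soft thresholding)."""
    if abs(c) <= t:
        return 0.0
    return c - t if c > 0 else c + t

def bayes_threshold(coeffs):
    """BayesShrink-style threshold T = sigma_n^2 / sigma_x for one
    sub-band: sigma_n from the median absolute coefficient (MAD /
    0.6745), sigma_x from the excess sub-band variance."""
    n = len(coeffs)
    srt = sorted(abs(c) for c in coeffs)
    mad = srt[n // 2]
    sigma_n = mad / 0.6745
    var = sum(c * c for c in coeffs) / n
    sigma_x = math.sqrt(max(var - sigma_n ** 2, 0.0))
    if sigma_x == 0.0:
        # sub-band looks like pure noise: threshold everything away
        return max(abs(c) for c in coeffs)
    return sigma_n ** 2 / sigma_x
```

Coefficients below the threshold would then be tested against the cross-scale correlation before being zeroed, as described above.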
2013 R&D 100 Award: Movie-mode electron microscope captures nanoscale
Lagrange, Thomas; Reed, Bryan
2018-01-26
A new instrument developed by LLNL scientists and engineers, the Movie Mode Dynamic Transmission Electron Microscope (MM-DTEM), captures billionth-of-a-meter-scale images with frame rates more than 100,000 times faster than those of conventional techniques. The work was done in collaboration with a Pleasanton-based company, Integrated Dynamic Electron Solutions (IDES) Inc. Using this revolutionary imaging technique, a range of fundamental and technologically important material and biological processes can be captured in action, in complete billionth-of-a-meter detail, for the first time. The primary application of MM-DTEM is the direct observation of fast processes, including microstructural changes, phase transformations and chemical reactions, that shape real-world performance of nanostructured materials and potentially biological entities. The instrument could prove especially valuable in the direct observation of macromolecular interactions, such as protein-protein binding and host-pathogen interactions. While an earlier version of the technology, Single Shot-DTEM, could capture a single snapshot of a rapid process, MM-DTEM captures a multiframe movie that reveals complex sequences of events in detail. It is the only existing technology that can capture multiple electron microscopy images in the span of a single microsecond.
Overcoming Dynamic Disturbances in Imaging Systems
NASA Technical Reports Server (NTRS)
Young, Eric W.; Dente, Gregory C.; Lyon, Richard G.; Chesters, Dennis; Gong, Qian
2000-01-01
We develop and discuss a methodology with the potential to yield a significant reduction in complexity, cost, and risk of space-borne optical systems in the presence of dynamic disturbances. More robust systems will almost certainly result as well. Many future space-based and ground-based optical systems will employ optical control systems to enhance imaging performance. The goal of the optical control subsystem is to determine the wavefront aberrations and remove them, ideally reducing an aberrated image of the object under investigation to a sufficiently clear (usually diffraction-limited) image. Control will likely be distributed over several elements. These elements may include telescope primary segments, the telescope secondary, the telescope tertiary, deformable mirror(s), fine steering mirror(s), etc. The last two elements, in particular, may have to provide dynamic control. These control subsystems may become elaborate indeed, but robust system performance will require evaluation of the image quality over a substantial range and in a dynamic environment. Candidate systems for improvement in the Earth Sciences Enterprise could include next-generation Landsat systems or atmospheric sensors for dynamic imaging of individual severe storms. The technology developed here could have a substantial impact on the development of new systems in the Space Science Enterprise, such as the Next Generation Space Telescope (NGST) and its follow-on, the Next NGST. Large interferometric systems of non-zero field, such as Planet Finder and the Submillimeter Probe of the Evolution of Cosmic Structure, could benefit. These systems will most likely contain large, flexible optomechanical structures subject to dynamic disturbance. Furthermore, large systems for high-resolution imaging of planets or the sun from space may also benefit. Tactical and strategic defense systems will need to image very small targets as well and could benefit from the technology developed here.
We discuss a novel speckle imaging technique with the potential to separate dynamic aberrations from static aberrations. Post-processing of a set of image data, using an algorithm based on this technique, should work for all but the lowest light levels and highest frequency dynamic environments. This technique may serve to reduce the complexity of the control system and provide for robust, fault-tolerant, reduced risk operation. For a given object, a short exposure image is "frozen" on the focal plane in the presence of the environmental disturbance (turbulence, jitter, etc.). A key factor is that this imaging data exhibits frame-to-frame linear shift invariance. Therefore, although the Point Spread Function is varying from frame to frame, the source is fixed; and each short exposure contains object spectrum data out to the diffraction limit of the imaging system. This novel speckle imaging technique uses the Knox-Thompson method. The magnitude of the complex object spectrum is straightforward to determine by well-established approaches. The phase of the complex object spectrum is decomposed into two parts. One is a single-valued function determined by the divergence of the optical phase gradient. The other is a multi-valued function determined by the circulation of the optical phase gradient-"hidden phase." Finite difference equations are developed for the phase. The novelty of this approach is captured in the inclusion of this "hidden phase." This technique allows the diffraction-limited reconstruction of the object from the ensemble of short exposure frames while simultaneously estimating the phase as a function of time from a set of exposures.
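The phase decomposition sketched above can be written compactly. The notation below is ours, added for clarity: let $g$ denote the measured gradient of the object-spectrum phase. Then

```latex
\phi = \phi_s + \phi_h, \qquad
\nabla^2 \phi_s = \nabla \cdot g, \qquad
\oint_C g \cdot d\ell = 2\pi m_C ,
```

where $\phi_s$ is the single-valued part recovered from the divergence of $g$ by a Poisson solve, and the multi-valued "hidden phase" $\phi_h$ is fixed by the nonzero circulations of $g$ (integer winding numbers $m_C$) around its branch points.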
Improved optical flow motion estimation for digital image stabilization
NASA Astrophysics Data System (ADS)
Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao
2015-11-01
Optical flow is the instantaneous motion vector at each pixel of an image frame. Gradient-based optical flow computation does not work well when the inter-frame motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: a pyramid is used to obtain multi-resolution images; the inter-frame affine parameters are estimated iteratively from the highest (coarsest) level down to the lowest level; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method performs well in global motion estimation.
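The coarse-to-fine idea can be shown on a one-dimensional toy problem (our sketch, far simpler than the affine estimation above): estimate a global translation at the coarsest pyramid level, then at each finer level double the estimate and refine it by a local search of +/-1 sample.

```python
def downsample(x):
    """Halve resolution by averaging adjacent pairs of samples."""
    return [(x[i] + x[i + 1]) / 2.0 for i in range(0, len(x) - 1, 2)]

def ssd(a, b, s):
    """Sum of squared differences between a and b shifted by s samples."""
    total = 0.0
    for i in range(len(a)):
        if 0 <= i + s < len(b):
            total += (a[i] - b[i + s]) ** 2
    return total

def estimate_shift(a, b, levels):
    """Coarse-to-fine global-translation estimate: solve at the coarsest
    pyramid level, then double the estimate and refine by +/-1 sample
    at each finer level."""
    if levels == 0:
        guess = 0
    else:
        guess = 2 * estimate_shift(downsample(a), downsample(b), levels - 1)
    return min((guess - 1, guess, guess + 1), key=lambda s: ssd(a, b, s))
```

A shift of 8 samples, far beyond the +/-1 search radius of any single level, is recovered because each level only ever has to correct a small residual.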
Subframe Burst Gating for Raman Spectroscopy in Combustion
NASA Technical Reports Server (NTRS)
Kojima, Jun; Fischer, David; Nguyen, Quang-Viet
2010-01-01
We describe an architecture for spontaneous Raman scattering utilizing a frame-transfer CCD sensor operating in a subframe burst-gating mode to realize time-resolved combustion diagnostics. The technique permits all-electronic optical gating with microsecond shutter speeds (~5 μs) without compromising optical throughput or image fidelity. When used in conjunction with a pair of orthogonally polarized excitation lasers, the technique measures single-shot vibrational Raman scattering that is minimally contaminated by problematic optical background noise.
NASA Astrophysics Data System (ADS)
Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi
2013-02-01
We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system and a reflective grating. The spectral resolution of the filter is ~3 cm-1. The wavenumber was scanned from 2800 to 3100 cm-1 with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced an 8-kHz resonant scanner and a galvanometer scanner. We were able to acquire SRS images of 500 x 480 pixels at a frame rate of 30.8 frames/s. These images were then processed by principal component analysis followed by a modified independent component analysis algorithm. This algorithm allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues such as lipid droplets, cytoplasm, fibrous texture, nucleus, and water-rich region were successfully visualized.
Cooper, Justin T; Peterson, Eric M; Harris, Joel M
2013-10-01
Due to its high specific surface area and chemical stability, porous silica is used as a support structure in numerous applications, including heterogeneous catalysis, biomolecule immobilization, sensors, and liquid chromatography. Reversed-phase liquid chromatography (RPLC), which uses porous silica support particles, has become an indispensable separations tool in quality control, pharmaceutics, and environmental analysis requiring identification of compounds in mixtures. For complex samples, the need for higher resolution separations requires an understanding of the time scale of processes responsible for analyte retention in the stationary phase. In the present work, single-molecule fluorescence imaging is used to observe transport of individual molecules within RPLC porous silica particles. This technique allows direct measurement of intraparticle molecular residence times, intraparticle diffusion rates, and the spatial distribution of molecules within the particle. On the basis of the localization uncertainty and characteristic measured diffusion rates, statistical criteria were developed to resolve the frame-to-frame behavior of molecules into moving and stuck events. The measured diffusion coefficient of moving molecules was used in a Monte Carlo simulation of a random-walk model within the cylindrical geometry of the particle diameter and microscope depth-of-field. The simulated molecular transport is in good agreement with the experimental data, indicating transport of moving molecules in the porous particle is described by a random-walk. Histograms of stuck-molecule event times, locations, and their contributions to intraparticle residence times were also characterized.
Single-Scale Retinex Using Digital Signal Processors
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2005-01-01
The Retinex is an image enhancement algorithm that improves the brightness, contrast and sharpness of an image. It performs a non-linear spatial/spectral transform that provides simultaneous dynamic range compression and color constancy. It has been used for a wide variety of applications ranging from aviation safety to general purpose photography. Many potential applications require the use of Retinex processing at video frame rates. This is difficult to achieve with general purpose processors because the algorithm contains a large number of complex computations and data transfers. In addition, many of these applications also constrain the potential architectures to embedded processors to save power, weight and cost. Thus we have focused on digital signal processors (DSPs) and field programmable gate arrays (FPGAs) as potential solutions for real-time Retinex processing. In previous efforts we attained a 21 (full) frame per second (fps) processing rate for the single-scale monochromatic Retinex with a TMS320C6711 DSP operating at 150 MHz. This was achieved after several significant code improvements and optimizations. Since then we have migrated our design to the slightly more powerful TMS320C6713 DSP and the fixed point TMS320DM642 DSP. In this paper we briefly discuss the Retinex algorithm, the performance of the algorithm executing on the TMS320C6713 and the TMS320DM642, and compare the results with the TMS320C6711.
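For reference, the single-scale Retinex itself is compact; the engineering challenge described above is making the large-kernel surround convolution fast on a DSP. A sketch using scipy's Gaussian filter (the sigma, epsilon, and test images are illustrative choices, not values from the paper):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=30.0, eps=1e-6):
    """R(x, y) = log I(x, y) - log [G_sigma * I](x, y): the log of the image
    divided by its Gaussian-blurred surround, giving simultaneous dynamic
    range compression and local contrast enhancement."""
    return np.log(img + eps) - np.log(gaussian_filter(img, sigma) + eps)

# A flat (constant) image has no local contrast, so its Retinex output is zero.
flat = np.full((64, 64), 0.5)
assert np.allclose(single_scale_retinex(flat), 0.0, atol=1e-9)

# A scene with a 100:1 dynamic range is mapped into a bounded log-ratio domain.
scene = np.linspace(0.01, 1.0, 64 * 64).reshape(64, 64)
out = single_scale_retinex(scene)
assert out.shape == scene.shape and np.isfinite(out).all()
```

The surround convolution (here `gaussian_filter`) dominates the cost at video rates, which is why the DSP ports discussed above center on optimizing it.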
Frequency-locked pulse sequencer for high-frame-rate monochromatic tissue motion imaging.
Azar, Reza Zahiri; Baghani, Ali; Salcudean, Septimiu E; Rohling, Robert
2011-04-01
To overcome the inherent low frame rate of conventional ultrasound, we have previously presented a system that can be implemented on conventional ultrasound scanners for high-frame-rate imaging of monochromatic tissue motion. The system employs a sector subdivision technique in the sequencer to increase the acquisition rate. To eliminate the delays introduced during data acquisition, a motion phase correction algorithm has also been introduced to create in-phase displacement images. Previous experimental results from tissue-mimicking phantoms showed that the system can achieve effective frame rates of up to a few kilohertz on conventional ultrasound systems. In this short communication, we present a new pulse sequencing strategy that facilitates high-frame-rate imaging of monochromatic motion such that the acquired echo signals are inherently in-phase. The sequencer uses the knowledge of the excitation frequency to synchronize the acquisition of the entire imaging plane to that of an external exciter. This sequencing approach eliminates any need for synchronization or phase correction and has applications in tissue elastography, which we demonstrate with tissue-mimicking phantoms. © 2011 IEEE
UWGSP4: an imaging and graphics superworkstation and its medical applications
NASA Astrophysics Data System (ADS)
Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin
1992-05-01
UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024-pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.
Rehmert, Andrea E; Kisley, Michael A
2013-10-01
Older adults have demonstrated an avoidance of negative information, presumably with a goal of greater emotional satisfaction. Understanding whether avoidance of negative information is a voluntary, motivated choice or an involuntary, automatic response will be important to differentiate, as decision making often involves emotional factors. With the use of an emotional framing event-related potential (ERP) paradigm, the present study investigated whether older adults could alter neural responses to negative stimuli through verbal reframing of evaluative response options. The late positive potential (LPP) response of 50 older adults and 50 younger adults was recorded while participants categorized emotional images in one of two framing conditions: positive ("more or less positive") or negative ("more or less negative"). It was hypothesized that older adults would be able to overcome a presumed tendency to down-regulate neural responding to negative stimuli in the negative framing condition, thus leading to larger LPP wave amplitudes to negative images. A similar effect was predicted for younger adults, but for positively valenced images, such that LPP responses would be increased in the positive framing condition compared with the negative framing condition. Overall, younger adults' LPP wave amplitudes were modulated by framing condition, including a reduction in the negativity bias in the positive frame. Older adults' neural responses were not significantly modulated, even though task-related behavior supported the notion that older adults were able to successfully adopt the negative framing condition.
Cunningham, Charles H; Dominguez Viqueira, William; Hurd, Ralph E; Chen, Albert P
2014-02-01
Blip-reversed echo-planar imaging (EPI) is investigated as a method for measuring and correcting the spatial shifts that occur due to bulk frequency offsets in (13)C metabolic imaging in vivo. By reversing the k-space trajectory for every other time point, the direction of the spatial shift for a given frequency is reversed. Here, mutual information is used to find the 'best' alignment between images and thereby measure the frequency offset. Time-resolved 3D images of pyruvate/lactate/urea were acquired with 5 s temporal resolution over a 1 min duration in rats (N = 6). For each rat, a second injection was performed with the demodulation frequency purposely mis-set by +35 Hz, to test the correction for erroneous shifts in the images. Overall, the shift induced by the 35 Hz frequency offset was 5.9 ± 0.6 mm (mean ± standard deviation). This agrees well with the expected 5.7 mm shift based on the 2.02 ms delay between k-space lines (giving 30.9 Hz per pixel). The 0.6 mm standard deviation in the correction corresponds to a frequency-detection accuracy of 4 Hz. A method was presented for ensuring the spatial registration between (13)C metabolic images and conventional anatomical images when long echo-planar readouts are used. The frequency correction method was shown to have an accuracy of 4 Hz. Summing the spatially corrected frames gave a signal-to-noise ratio (SNR) improvement factor of 2 or greater, compared with the highest single frame. Copyright © 2013 John Wiley & Sons, Ltd.
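The reported shift can be reproduced from the stated numbers; the reconstructed pixel size (~5 mm) is an assumed value, back-calculated to match the reported 5.7 mm rather than taken from the paper:

```python
# Numbers from the abstract above.
echo_spacing_s = 2.02e-3        # delay between k-space lines
hz_per_pixel = 30.9             # stated phase-encode bandwidth per pixel
                                # (note 1 / (echo_spacing_s * N) for some
                                # matrix/interleave factor N around 16)
freq_offset_hz = 35.0           # deliberate demodulation error
pixel_mm = 5.0                  # assumed reconstructed pixel size

shift_pixels = freq_offset_hz / hz_per_pixel     # ~1.13 pixels
shift_mm = shift_pixels * pixel_mm
assert abs(shift_mm - 5.7) < 0.1                 # matches the expected shift
```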
Statistical processing of large image sequences.
Khellah, F; Fieguth, P; Murray, M J; Allen, M
2005-01-01
The dynamic estimation of large-scale stochastic image sequences, as frequently encountered in remote sensing, is important in a variety of scientific applications. However, the size of such images makes conventional dynamic estimation methods, for example, the Kalman and related filters, impractical. In this paper, we present an approach that emulates the Kalman filter, but with considerably reduced computational and storage requirements. Our approach is illustrated in the context of a 512 x 512 image sequence of ocean surface temperature. The static estimation step, the primary contribution here, uses a mixture of stationary models to accurately mimic the effect of a nonstationary prior, simplifying both computational complexity and modeling. Our approach provides an efficient, stable, positive-definite model which is consistent with the given correlation structure. Thus, the methods of this paper may find application in modeling and single-frame estimation.
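Why the full Kalman filter is impractical here is easy to quantify: for a 512 x 512 image the state has 262,144 entries, so the error covariance alone would need roughly 6.9 x 10^10 values. A toy sketch of one way to "emulate" the filter, keeping only a per-pixel (diagonal) variance with invented dynamics and noise parameters, is below; the paper's mixture-of-stationary-models prior is considerably more sophisticated:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a large image sequence: a slowly varying 2D field
# observed with additive noise.
n = 64
a, q, r = 0.95, 0.01, 0.25          # assumed AR(1) dynamics and noise levels
truth = np.zeros((n, n))
x = np.zeros((n, n))                # state estimate
p = np.ones((n, n))                 # per-pixel variance (diagonal covariance)

err_filt = err_raw = 0.0
for _ in range(100):
    truth = a * truth + rng.normal(0, np.sqrt(q), (n, n))
    y = truth + rng.normal(0, np.sqrt(r), (n, n))
    # Predict step (elementwise, no cross-pixel covariance carried).
    x, p = a * x, a * a * p + q
    # Update step: per-pixel scalar Kalman gain.
    k = p / (p + r)
    x, p = x + k * (y - x), (1 - k) * p
    err_filt += np.mean((x - truth) ** 2)
    err_raw += np.mean((y - truth) ** 2)

assert err_filt < err_raw           # filtering beats the raw observations
```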
NASA Astrophysics Data System (ADS)
Ezhova, Kseniia; Fedorenko, Dmitriy; Chuhlamov, Anton
2016-04-01
The article deals with methods of image segmentation based on color-space conversion, which allow efficient detection of a single color against a complex background and under varying lighting, as well as detection of objects on a homogeneous background. The results of an analysis of segmentation algorithms of this type, and the possibility of implementing them in software, are presented. The implemented algorithm is computationally expensive, which limits its application to video analysis; however, it solves the problem of analyzing objects in an image when no image dictionary or knowledge base is available, as well as the problem of choosing optimal frame-quantization parameters for video analysis.
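A minimal example of single-color detection after color-space conversion, using the standard-library colorsys module (all thresholds are illustrative choices, not values from the article):

```python
import colorsys

def is_target_hue(r, g, b, center_deg=0.0, tol_deg=20.0,
                  min_sat=0.4, min_val=0.2):
    """Detect one color (default: red) after RGB -> HSV conversion.

    Hue comparison is circular (wraps at 360 degrees); the saturation and
    value floors reject grey and dark pixels, which have no reliable hue.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    dh = abs((h * 360.0 - center_deg + 180.0) % 360.0 - 180.0)
    return dh <= tol_deg and s >= min_sat and v >= min_val

assert is_target_hue(0.9, 0.1, 0.1)        # strong red: detected
assert not is_target_hue(0.1, 0.9, 0.1)    # green: rejected by hue
assert not is_target_hue(0.5, 0.5, 0.5)    # grey: rejected by saturation
```

Working in HSV rather than RGB is what gives the lighting robustness the abstract claims: illumination changes mostly move value, not hue.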
Optical joint correlator for real-time image tracking and retinal surgery
NASA Technical Reports Server (NTRS)
Juday, Richard D. (Inventor)
1991-01-01
A method for tracking an object in a sequence of images is described. Such sequence of images may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames. An optical joint transform correlator apparatus is provided to carry out the process. Such joint transform correlator apparatus forms the basis for laser eye surgical apparatus where an image of the fundus of an eyeball is stabilized and forms the basis for the correlator apparatus to track the position of the eyeball caused by involuntary movement. With knowledge of the eyeball position, a surgical laser can be precisely pointed toward a position on the retina.
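The optical joint transform correlator has a direct digital analogue: cross-correlation via the FFT, with the location of the correlation peak giving the inter-frame displacement. A sketch with a synthetic frame pair (frame content and shift are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

def correlate_shift(prev, curr):
    """Locate `curr` relative to `prev` via circular FFT cross-correlation;
    the correlation peak index gives the inter-frame displacement."""
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(prev)) * np.fft.fft2(curr)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Map wrap-around indices to signed shifts.
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

frame0 = rng.random((64, 64))
frame1 = np.roll(frame0, (5, 7), axis=(0, 1))   # object moved by (5, 7)
assert correlate_shift(frame0, frame1) == (5, 7)
```

The optical version computes the same correlation at the speed of light in the Fourier plane, which is what makes it attractive for the real-time eye tracking described above.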
New Subarray Readout Patterns for the ACS Wide Field Channel
NASA Astrophysics Data System (ADS)
Golimowski, D.; Anderson, J.; Arslanian, S.; Chiaberge, M.; Grogin, N.; Lim, Pey Lian; Lupie, O.; McMaster, M.; Reinhart, M.; Schiffer, F.; Serrano, B.; Van Marshall, M.; Welty, A.
2017-04-01
At the start of Cycle 24, the original CCD-readout timing patterns used to generate ACS Wide Field Channel (WFC) subarray images were replaced with new patterns adapted from the four-quadrant readout pattern used to generate full-frame WFC images. The primary motivation for this replacement was a substantial reduction of observatory and staff resources needed to support WFC subarray bias calibration, which became a new and challenging obligation after the installation of the ACS CCD Electronics Box Replacement during Servicing Mission 4. The new readout patterns also improve the overall efficiency of observing with WFC subarrays and enable the processing of subarray images through stages of the ACS data calibration pipeline (calacs) that were previously restricted to full-frame WFC images. The new readout patterns replace the original 512×512, 1024×1024, and 2048×2046-pixel subarrays with subarrays having 2048 columns and 512, 1024, and 2048 rows, respectively. Whereas the original square subarrays were limited to certain WFC quadrants, the new rectangular subarrays are available in all four quadrants. The underlying bias structure of the new subarrays now conforms with those of the corresponding regions of the full-frame image, which allows raw frames in all image formats to be calibrated using one contemporaneous full-frame "superbias" reference image. The original subarrays remain available for scientific use, but calibration of these image formats is no longer supported by STScI.
View of Saudi Arabia and north eastern Africa from the Apollo 17 spacecraft
1972-12-09
AS17-148-22718 (7-19 Dec. 1972) --- This excellent view of Saudi Arabia and the north eastern portion of the African continent was photographed by the Apollo 17 astronauts with a hand-held camera on their trans-lunar coast toward man's last lunar visit. Egypt, Sudan, and Ethiopia are some of the African nations visible. Iran, Iraq, and Jordan are less clearly visible because of cloud cover and their location in the picture. India is dimly visible at the right of the frame. The Red Sea is seen in its entirety in this single frame, a rare occurrence in Apollo photography or any photography taken from manned spacecraft. The Gulf of Suez, the Dead Sea, Gulf of Aden, Persian Gulf and Gulf of Oman are also visible. This frame is one of 169 frames on film magazine NN carried aboard Apollo 17, all of which are SO368 (color) film. A 250mm lens on a 70mm Hasselblad camera recorded the image, one of 92 taken during the trans-lunar coast. Note AS17-148-22727 (also magazine NN) for an excellent full Earth picture showing the entire African continent.
Dynamic phase-sensitive optical coherence elastography at a true kilohertz frame-rate
NASA Astrophysics Data System (ADS)
Singh, Manmohan; Wu, Chen; Liu, Chih-Hao; Li, Jiasong; Schill, Alexander; Nair, Achuth; Larin, Kirill V.
2016-03-01
Dynamic optical coherence elastography (OCE) techniques have rapidly emerged as a noninvasive way to characterize the biomechanical properties of tissue. However, clinical applications of the majority of these techniques have been unfeasible due to the extended acquisition time required by multiple temporal OCT acquisitions (M-B mode). Moreover, multiple excitations, large datasets, and prolonged laser exposure prohibit their translation to the clinic, where patient discomfort and safety are critical criteria. Here, we demonstrate the feasibility of noncontact true kilohertz frame-rate dynamic optical coherence elastography by directly imaging a focused air-pulse-induced elastic wave with a home-built phase-sensitive OCE system. The OCE system was based on a 4X buffered Fourier Domain Mode Locked swept source laser with an A-scan rate of ~1.5 MHz, and imaged the elastic wave propagation at a frame rate of ~7.3 kHz. Because the elastic wave was directly imaged, only a single excitation was needed for one line-scan measurement. Rather than acquiring multiple temporal scans at successive spatial locations as with previous techniques, here successive B-scans were acquired over the measurement region (B-M mode). Preliminary measurements were taken on tissue-mimicking agar phantoms of various concentrations, and the results showed good agreement with uniaxial mechanical compression testing. Then, the elasticity of an in situ porcine cornea in the whole eye-globe configuration at various intraocular pressures was measured. The results showed that this technique can acquire a depth-resolved elastogram in milliseconds. Furthermore, the ultra-fast acquisition ensured that the laser safety exposure limit for the cornea was not exceeded.
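Downstream of acquisition, elasticity is typically obtained from the elastic-wave speed. A sketch of that step with synthetic, noiseless arrival times (the E = 3*rho*c^2 relation is the common shear-wave approximation and all numbers are invented; the paper's exact wave model may differ):

```python
import numpy as np

# Synthetic wavefront arrival times along the scan line: position = c * time.
c_true = 2.0                            # m/s, plausible soft-tissue wave speed
rho = 1000.0                            # kg/m^3, tissue-like density
positions = np.linspace(0, 5e-3, 50)    # 5 mm scan line
arrivals = positions / c_true           # noiseless arrival times

# Wave speed from the slope of a position-vs-arrival-time linear fit.
c_est = np.polyfit(arrivals, positions, 1)[0]

# Young's modulus via the common approximation E = 3 * rho * c^2.
E = 3 * rho * c_est ** 2
assert abs(c_est - c_true) < 1e-6
assert abs(E - 12000.0) < 0.01          # 3 * 1000 * 2^2 = 12 kPa
```

In the system above, the arrival times come from the phase-sensitive B-M-mode frames rather than from separate temporal scans, which is what collapses the acquisition to milliseconds.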
Using consumer-grade devices for multi-imager non-contact imaging photoplethysmography
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2017-02-01
Imaging photoplethysmography is a technique through which the morphology of the blood volume pulse can be obtained through non-contact video recordings of exposed skin with superficial vasculature. The acceptance of such a convenient modality for use in everyday applications may well depend upon the availability of consumer-grade imagers that facilitate ease-of-adoption. Multiple imagers have been used previously in concept demonstrations, showing improvements in quality of the extracted blood volume pulse signal. However, the use of multi-imager sensors requires synchronization of the frame exposures between the individual imagers, a capability that has only recently been available without creating custom solutions. In this work, we consider the use of multiple, commercially-available, synchronous imagers for use in imaging photoplethysmography. A commercially-available solution for multi-imager synchronization was analyzed for 21 stationary, seated participants while ground-truth physiological signals were simultaneously measured. A total of three imagers were used, facilitating a comparison between fused data from all three imagers versus data from the single, central imager in the array. The within-subjects design included analyses of pulse rate and pulse signal-to-noise ratio. Using the fused data from the triple-imager array, mean absolute error in pulse rate measurement was reduced to 3.8 beats per minute, as compared to 7.4 beats per minute with the single imager. While this represents an overall improvement in the multi-imager case, it is also noted that these errors are substantially higher than those obtained in comparable studies. We further discuss these results and their implications for using readily-available commercial imaging solutions for imaging photoplethysmography applications.
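The core signal path of imaging photoplethysmography, reduced to a sketch: average a skin region per frame, then find the dominant frequency in a plausible pulse band. The trace below is synthetic (a 1.2 Hz "pulse" plus noise); real recordings additionally require the skin-region extraction and multi-imager fusion discussed above:

```python
import numpy as np

fs = 30.0                           # assumed camera frame rate (frames/s)
t = np.arange(300) / fs             # 10 s of frames
rng = np.random.default_rng(3)

# Hypothetical mean-green-channel ROI trace: weak 1.2 Hz pulse plus noise.
trace = 0.01 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.002, t.size)

# Dominant frequency within a plausible human pulse band (0.7-4.0 Hz).
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 4.0)
pulse_hz = freqs[band][np.argmax(spectrum[band])]
assert round(pulse_hz * 60) == 72   # 1.2 Hz corresponds to 72 beats/min
```

Fusing traces from several synchronized imagers before this step raises the signal-to-noise ratio of `trace`, which is the mechanism behind the pulse-rate error reduction reported above.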
NASA Astrophysics Data System (ADS)
Dubey, Shailendra Kumar Damodar; Kute, Sunil
2014-09-01
Due to earthquakes, buildings are damaged partially or completely; structures with a soft storey are particularly affected. In general, such damaged structures are repaired and reused. In this regard, an experimental investigation was conducted, up to collapse, on single-bay, single-storey models of partially concrete-infilled reinforced concrete (RC) frames with corner, central and diagonal steel bracings. The collapsed frames were then repaired with epoxy resin and retested. The aim was to identify the behaviour, the extent of restored ultimate strength, and the deflection of epoxy-retrofitted frames in comparison to the braced RC frames. The performance of such frames has been considered only for lateral loads. In comparison to bare RC frames, epoxy-repaired partially infilled frames show a significant increase in lateral load capacity. Central bracing is more effective than corner and diagonal bracing. For the same load, epoxy-repaired frames have deflection comparable to that of similar braced frames.
Quantitative image fusion in infrared radiometry
NASA Astrophysics Data System (ADS)
Romm, Iliya; Cukurel, Beni
2018-05-01
Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
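A toy numpy version of the 'fuse-then-subtract' route on a synthetic multi-exposure stack (the linear camera model, the exposure-squared weights, and all numbers are assumptions for illustration, not the paper's exact estimator):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic camera model: output = photoquantity * exposure + bias,
# clipped at saturation.
q_true = rng.uniform(0.5, 50.0, (16, 16))   # scene photoquantity
bias = rng.uniform(5.0, 6.0, (16, 16))      # per-pixel detector bias frame
exposures = np.array([0.01, 0.1, 1.0])
sat = 30.0

darks = np.stack([np.clip(bias, 0, sat)] * len(exposures))
scenes = np.stack([np.clip(q_true * t + bias, 0, sat) for t in exposures])

# Fuse-then-subtract: reconstruct the bias frame from the dark frames,
# then fuse the scene frames by weighted least squares in the
# photoquantity domain, excluding saturated samples.
bias_est = darks.mean(axis=0)
valid = scenes < sat
w = valid * exposures[:, None, None] ** 2   # weight proportional to t^2
q_est = ((scenes - bias_est) * exposures[:, None, None] * valid).sum(0) \
        / w.sum(0)
assert np.allclose(q_est, q_true)
```

Because the bias is removed before fusion, the estimate stays consistent across exposures, which is the property the 'subtract-then-fuse' route only partially achieves.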
Double-pass imaging through scattering (Conference Presentation)
NASA Astrophysics Data System (ADS)
Tajahuerce, Enrique; Andrés Bou, Pedro; Artal, Pablo; Lancis, Jesús
2017-02-01
In recent years, single-pixel imaging (SPI) has been established as a suitable tool for non-invasive imaging of an absorbing object completely embedded in an inhomogeneous medium. One of the main characteristics of the technique is that it uses very simple sensors (bucket detectors such as photodiodes or photomultiplier tubes) combined with structured illumination and mathematical algorithms to recover the image. This reduction in complexity of the sensing device gives these systems the opportunity to obtain images at shallow depth, overcoming the scattering problem. Nonetheless, some challenges, such as the need for an improved signal-to-noise ratio or frame rate, remain to be tackled before extensive use in practical systems. Also, for intact or live optically thick tissues, epi-detection is commonly used, while present implementations of SPI are limited to transillumination geometries. In this work we present new features and some recent advances in SPI that involve either the use of computationally efficient algorithms for adaptive sensing or a balanced detection mechanism. Additionally, SPI has been adapted to handle reflected light to create a double-pass optical system. Such developments represent a significant step towards the use of SPI in more realistic scenarios, especially in biophotonics applications. In particular, we show the design of a single-pixel ophthalmoscope as a novel way of imaging the retina in real time.
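The essence of SPI, a bucket detector plus structured patterns plus a reconstruction algorithm, can be sketched in a few lines with orthogonal Hadamard patterns (real systems use binary DMD patterns, differential measurements, and far larger images):

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# 4x4 'scene' flattened to 16 pixels. The bucket detector records one scalar
# per projected pattern: y_i = <pattern_i, scene>.
scene = np.arange(16.0).reshape(4, 4)
H = hadamard(16)
measurements = H @ scene.ravel()

# Hadamard rows are orthogonal (H^T H = 16 I), so a correlation sum
# recovers the image exactly.
recovered = (H.T @ measurements) / 16.0
assert np.allclose(recovered.reshape(4, 4), scene)
```

The adaptive-sensing algorithms mentioned above reduce the number of patterns below the pixel count, trading reconstruction complexity for frame rate.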
NASA Astrophysics Data System (ADS)
Shen, Zhengwei; Cheng, Lishuang
2017-09-01
Total variation (TV)-based image deblurring methods can introduce staircase artifacts in the homogeneous regions of the latent images recovered from degraded images, while wavelet/frame-based image deblurring methods can lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame and the TV-based methods complement each other, which is verified by theoretical analysis and experimental results. To further improve the quality of the latent images, a nonconvex penalty function is used as the regularization term of the model, which induces a sparser solution and more accurately estimates the relatively large gradient or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, each subproblem split off from the proposed model by the alternating direction method of multipliers (ADMM) algorithm is guaranteed to be a convex optimization problem; hence, each subproblem converges to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where a designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.
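The workhorse inside such splitting solvers is a proximal (shrinkage) operator applied to the gradient or wavelet/frame coefficients. Shown here is the standard soft-thresholding operator for the convex l1 penalty; the nonconvex penalties used in the paper lead to modified, but structurally similar, shrinkage rules:

```python
import numpy as np

def soft_threshold(v, lam):
    """Proximal operator of lam * ||.||_1: the shrinkage step that ADMM and
    split Bregman iterations apply to sparse coefficients."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.3, 2.0])
out = soft_threshold(v, 0.5)
assert np.allclose(out, [-1.5, 0.0, 0.0, 0.0, 1.5])
```

Small coefficients (mostly noise) are zeroed while large ones (edges, texture) survive with a bias, which is exactly the bias the paper's nonconvex penalties are designed to reduce.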
Single-Chip CMUT-on-CMOS Front-End System for Real-Time Volumetric IVUS and ICE Imaging
Gurun, Gokce; Tekes, Coskun; Zahorian, Jaime; Xu, Toby; Satir, Sarp; Karaman, Mustafa; Hasler, Jennifer; Degertekin, F. Levent
2014-01-01
Intravascular ultrasound (IVUS) and intracardiac echography (ICE) catheters with real-time volumetric ultrasound imaging capability can provide unique benefits to many interventional procedures used in the diagnosis and treatment of coronary and structural heart diseases. Integration of CMUT arrays with front-end electronics in single-chip configuration allows for implementation of such catheter probes with reduced interconnect complexity, miniaturization, and high mechanical flexibility. We implemented a single-chip forward-looking (FL) ultrasound imaging system by fabricating a 1.4-mm-diameter dual-ring CMUT array using CMUT-on-CMOS technology on a front-end IC implemented in 0.35-µm CMOS process. The dual-ring array has 56 transmit elements and 48 receive elements on two separate concentric annular rings. The IC incorporates a 25-V pulser for each transmitter and a low-noise capacitive transimpedance amplifier (TIA) for each receiver, along with digital control and smart power management. The final shape of the silicon chip is a 1.5-mm-diameter donut with a 430-µm center hole for a guide wire. The overall front-end system requires only 13 external connections and provides 4 parallel RF outputs while consuming an average power of 20 mW. We measured RF A-scans from the integrated single-chip array which show full functionality at 20.1 MHz with 43% fractional bandwidth. We also tested and demonstrated the image quality of the system on a wire phantom and an ex-vivo chicken heart sample. The measured axial and lateral point resolutions are 92 µm and 251 µm, respectively. We successfully acquired volumetric imaging data from the ex-vivo chicken heart with 60 frames per second without any signal averaging. These demonstrative results indicate that single-chip CMUT-on-CMOS systems have the potential to produce real-time volumetric images with image quality and speed suitable for catheter based clinical applications. PMID:24474131
Romanek, Kathleen M; McCaul, Kevin D; Sandgren, Ann K
2005-07-01
To examine the effects of age, body image, and risk framing on treatment decision making for breast cancer using a healthy population. An experimental 2 (younger women, older women) X 2 (survival, mortality frame) between-groups design. Midwestern university. Two groups of healthy women: 56 women ages 18-24 from undergraduate psychology courses and 60 women ages 35-60 from the university community. Healthy women imagined that they had been diagnosed with breast cancer and received information regarding lumpectomy versus mastectomy and recurrence rates. Participants indicated whether they would choose lumpectomy or mastectomy and why. Age, framing condition, treatment choice, body image, and reasons for treatment decision. The difference in treatment selection between younger and older women was mediated by concern for appearance. No main effect for risk framing was found; however, older women were somewhat less likely to select lumpectomy when given a mortality frame. Age, mediated by body image, influences treatment selection of lumpectomy versus mastectomy. Framing has no direct effect on treatment decisions, but younger and older women may be affected by risk information differently. Nurses should provide women who recently have been diagnosed with breast cancer with age-appropriate information regarding treatment alternatives to ensure women's active participation in the decision-making process. Women who have different levels of investment in body image also may have different concerns about treatment, and healthcare professionals should be alert to and empathetic of such concerns.
NASA Astrophysics Data System (ADS)
Rasmi, Chelur K.; Padmanabhan, Sreedevi; Shirlekar, Kalyanee; Rajan, Kanhirodan; Manjithaya, Ravi; Singh, Varsha; Mondal, Partha Pratim
2017-12-01
We propose and demonstrate a light-sheet-based 3D interrogation system on a microfluidic platform for screening biological specimens during flow. To achieve this, a diffraction-limited light-sheet (with a large field-of-view) is employed to optically section the specimens flowing through the microfluidic channel. This necessitates optimization of the parameters for the illumination sub-system (illumination intensity, light-sheet width, and thickness), microfluidic specimen platform (channel-width and flow-rate), and detection sub-system (camera exposure time and frame rate). Once optimized, these parameters facilitate cross-sectional imaging and 3D reconstruction of biological specimens. The proposed integrated light-sheet imaging and flow-based enquiry (iLIFE) imaging technique enables single-shot sectional imaging of specimens of varying dimensions, from a single cell (HeLa cell) to a multicellular organism (C. elegans). 3D reconstruction of the entire C. elegans is achieved in real-time with an exposure time of a few hundred microseconds. A maximum likelihood technique is developed and optimized for the iLIFE imaging system. We observed intracellular resolution for mitochondria-labeled HeLa cells, which demonstrates the dynamic resolution of the iLIFE system. The proposed technique is a step towards achieving flow-based 3D imaging. We expect potential applications in diverse fields such as structural biology and biophysics.
Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking
NASA Technical Reports Server (NTRS)
Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.
2005-01-01
This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on lower-resolution Hazcam for Navcam-to-Hazcam handoff.
Gesture recognition by instantaneous surface EMG images
Geng, Weidong; Du, Yu; Jin, Wenguang; Wei, Wentao; Hu, Yu; Li, Jiajun
2016-01-01
Gesture recognition in non-intrusive muscle-computer interfaces is usually based on windowed descriptive and discriminatory surface electromyography (sEMG) features because the recorded amplitude of a myoelectric signal may rapidly fluctuate between voltages above and below zero. Here, we show that the patterns within the instantaneous values of high-density sEMG enable gesture recognition to be performed using only the sEMG signals at a specific instant. We introduce the concept of an sEMG image spatially composed from high-density sEMG and verify our findings from a computational perspective with experiments on gesture recognition based on sEMG images, using a deep convolutional network as the classification scheme. Without any windowed features, the recognition accuracy of an 8-gesture within-subject test reached 89.3% on a single frame of the sEMG image and 99.0% using simple majority voting over 40 frames at a 1,000 Hz sampling rate. Experiments on the recognition of 52 gestures of the NinaPro database and 27 gestures of the CSL-HDEMG database also validated that our approach outperforms state-of-the-art methods. Our findings are a starting point for the development of more fluid and natural muscle-computer interfaces with very little observational latency. For example, active prostheses and exoskeletons based on high-density electrodes could be controlled with instantaneous responses. PMID:27845347
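The 40-frame decision rule described above is plain majority voting over per-frame classifier outputs. A minimal sketch (the gesture labels and frame counts are illustrative, not taken from the NinaPro/CSL-HDEMG experiments):

```python
from collections import Counter

def majority_vote(frame_predictions):
    """Fuse per-frame gesture labels by simple majority voting.
    With a 1,000 Hz sampling rate, 40 frames correspond to a
    40 ms decision window."""
    return Counter(frame_predictions).most_common(1)[0][0]

# Hypothetical per-frame labels from the single-frame classifier.
preds = [3] * 25 + [7] * 10 + [1] * 5
winner = majority_vote(preds)  # label 3 wins with 25 of 40 votes
```

Any per-frame classifier can feed this fusion step; the voting itself adds essentially no latency beyond the window length.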
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the DMC first generation was developed by Z/I Imaging. It was the first large format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II, using the same system design with one large monolithic PAN sensor and four multispectral camera heads for R, G, B, and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic (PAN) sensor, which is an industry record. CMOS technology brings a range of technical benefits: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights will be presented and compared with other CCD-based aerial sensors.
Do we understand high-level vision?
Cox, David Daniel
2014-04-01
'High-level' vision lacks a single, agreed-upon definition, but it might usefully be defined as those stages of visual processing that transition from analyzing local image structure to analyzing the structure of the external world that produced those images. Much work in the last several decades has focused on object recognition as a framing problem for the study of high-level visual cortex, and much progress has been made in this direction. This approach presumes that the operational goal of the visual system is to read out the identity of an object (or objects) in a scene, in spite of variation in position, size, lighting, and the presence of other nearby objects. However, while object recognition is intuitively appealing as an operational framing of high-level vision, it is by no means the only task that visual cortex might perform, and the study of object recognition is beset by challenges in building stimulus sets that adequately sample the infinite space of possible stimuli. Here I review the successes and limitations of this work and ask whether we should reframe our approaches to understanding high-level vision. Copyright © 2014. Published by Elsevier Ltd.
An electronic pan/tilt/zoom camera system
NASA Technical Reports Server (NTRS)
Zimmermann, Steve; Martin, H. Lee
1991-01-01
A camera system for omnidirectional image viewing applications that provides pan, tilt, zoom, and rotational orientation within a hemispherical field of view (FOV) using no moving parts was developed. The imaging device is based on the principle that the distorted image from a fisheye lens, which produces a circular image of an entire hemispherical FOV, can be mathematically corrected using high-speed electronic circuitry. An incoming fisheye image from any image acquisition source is captured in the memory of the device, a transformation is performed for the viewing region of interest and viewing direction, and a corrected image is output as a video image signal for viewing, recording, or analysis. As a result, this device can accomplish the functions of pan, tilt, rotation, and zoom throughout a hemispherical FOV without the need for any mechanical mechanisms. A programmable transformation processor provides flexible control over viewing situations. Multiple images, each with different image magnifications and pan-tilt-rotation parameters, can be obtained from a single camera. The image transformation device can provide corrected images at frame rates compatible with RS-170 standard video equipment.
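The dewarping at the heart of such a system rests on the geometry of the fisheye projection. A minimal sketch of the forward mapping under an assumed ideal equidistant model (r = f·θ); the actual device's calibrated transform is not published here:

```python
import numpy as np

def ray_to_fisheye(ray, f):
    """Map a 3-D viewing ray (camera axis = +z) to offsets (u, v)
    from the center of an ideal equidistant fisheye image, where the
    radial distance from the image center obeys r = f * theta."""
    x, y, z = ray / np.linalg.norm(ray)
    theta = np.arccos(np.clip(z, -1.0, 1.0))   # angle off the optical axis
    phi = np.arctan2(y, x)                     # azimuth about the axis
    r = f * theta
    return r * np.cos(phi), r * np.sin(phi)

# The boresight ray lands at the image center; a ray 90 degrees off
# axis lands on the rim of the hemispherical image circle.
center = ray_to_fisheye(np.array([0.0, 0.0, 1.0]), f=300.0)
rim = ray_to_fisheye(np.array([1.0, 0.0, 0.0]), f=300.0)
```

A pan/tilt/zoom view is synthesized by casting one such ray per output pixel and resampling the stored fisheye image at the returned (u, v).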
NASA Astrophysics Data System (ADS)
Cornelissen, Frans; De Backer, Steve; Lemeire, Jan; Torfs, Berf; Nuydens, Rony; Meert, Theo; Schelkens, Peter; Scheunders, Paul
2008-08-01
Peripheral neuropathy can be caused by diabetes or AIDS or be a side-effect of chemotherapy. Fibered Fluorescence Microscopy (FFM) is a recently developed imaging modality using a fiber optic probe connected to a laser scanning unit. It allows for in-vivo scanning of small animal subjects by moving the probe along the tissue surface. In preclinical research, FFM enables non-invasive, longitudinal in vivo assessment of intra-epidermal nerve fibre density in various models for peripheral neuropathies. By moving the probe, FFM allows visualization of larger surfaces: since images are continuously captured during the movement, an area larger than the field of view of the probe can be acquired. For analysis purposes, we need to obtain a single static image from the multiple overlapping frames. We introduce a mosaicing procedure for this kind of video sequence. Construction of mosaic images with sub-pixel alignment is indispensable and must be integrated into a globally consistent image alignment. An additional motivation for the mosaicing is the use of overlapping redundant information to improve the signal-to-noise ratio of the acquisition, because the individual frames tend to have both high noise levels and intensity inhomogeneities. For longitudinal analysis, mosaics captured at different times must be aligned as well. For alignment, global correlation-based matching is compared with interest-point matching. Use of algorithms running on multiple CPUs (parallel processor/cluster/grid) is imperative for use in a screening model.
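The global correlation-based matching mentioned above can be illustrated with phase correlation, a standard FFT-based way to estimate the translation between two overlapping frames (a sketch of the general technique, not the authors' pipeline):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) translation that maps image b
    onto image a, via the phase of the cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                   # keep phase only
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > a.shape[0] // 2:                 # wrap to signed shifts
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

# Simulated probe motion between two consecutive frames.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (5, -3), axis=(0, 1))
shift = phase_correlation_shift(frame_b, frame_a)  # recovers (5, -3)
```

Chaining such pairwise shifts, then refining them jointly, yields the globally consistent alignment the mosaic requires; sub-pixel accuracy needs peak interpolation on `corr`, which is omitted here.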
Robotically-adjustable microstereotactic frames for image-guided neurosurgery
NASA Astrophysics Data System (ADS)
Kratchman, Louis B.; Fitzpatrick, J. Michael
2013-03-01
Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically-adjusted, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, which is well within the accuracy required for deep brain stimulation surgery.
Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI
Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.
2017-01-01
Purpose The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
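A golden-angle radial sequence increments the spoke angle by 180°/φ ≈ 111.25° between successive acquisitions, which is what permits the frame rate and temporal span to be chosen retrospectively: any contiguous run of spokes covers k-space nearly uniformly. A sketch of the view ordering (the scanner-side implementation is not shown):

```python
import math

PHI = (1.0 + math.sqrt(5.0)) / 2.0
GOLDEN_ANGLE = 180.0 / PHI   # ~111.246 degrees between spokes

def spoke_angles(n_spokes):
    """Spoke angles (degrees, modulo 180) for a golden-angle radial
    acquisition.  Regrouping consecutive spokes after the scan
    trades temporal resolution against SNR, e.g. 57 ms frames."""
    return [(i * GOLDEN_ANGLE) % 180.0 for i in range(n_spokes)]

angles = spoke_angles(5)   # first five view angles of the scan
```

Reconstruction then bins whichever spokes fall in each chosen temporal window and grids them onto k-space, all from the same acquired data set.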
ERIC Educational Resources Information Center
Serna, Gabriel
2014-01-01
This essay examines normative aspects of the gainful employment rule and how the policy frame and image miss important implications for student aid policy. Because the economic and social burdens associated with the policy are typically borne by certain socioeconomic and ethnic groups, the policy frame and image do not identify possible negative…
Weber, Thorsten; Foucar, Lutz; Jahnke, Till; ...
2017-07-07
In this paper, we studied the photo double ionization of hydrogen molecules in the threshold region (50 eV) and the complete photo fragmentation of deuterium molecules at maximum cross section (75 eV) with single photons (linearly polarized) from the Advanced Light Source, using the reaction microscope imaging technique. The 3D-momentum vectors of two recoiling ions and up to two electrons were measured in coincidence. We present the kinetic energy sharing between the electrons and ions, the relative electron momenta, the azimuthal and polar angular distributions of the electrons in the body-fixed frame. We also present the dependency of the kinetic energy release in the Coulomb explosion of the two nuclei on the electron emission patterns. We find that the electronic emission in the body-fixed frame is strongly influenced by the orientation of the molecular axis to the polarization vector and the internuclear distance as well as the electronic energy sharing. Finally, traces of a possible breakdown of the Born–Oppenheimer approximation are observed near threshold.
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko
2005-09-01
Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet, and maintaining immigration control. Many technical subjects are still under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1:N identification experiment (i.e., matching a pair of images among N, where N refers to the number of images in the database; here 4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1:N identification experiments using FARCO, we obtained low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved under various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO that employs a temporal sequence of moving images. Applying this algorithm to a natural posture, a recognition rate two times higher than that of our conventional system was achieved. The system has high potential for future use in a variety of applications, such as searching for criminal suspects using street and airport video cameras, registration of babies at hospitals, or handling an immeasurable number of images in a database.
Ultrafast Ultrasound Imaging of Ocular Anatomy and Blood Flow
Urs, Raksha; Ketterling, Jeffrey A.; Silverman, Ronald H.
2016-01-01
Purpose Ophthalmic ultrasound imaging is currently performed with mechanically scanned single-element probes. These probes have limited capabilities overall and lack the ability to image blood flow. Linear-array systems are able to detect blood flow, but these systems exceed ophthalmic acoustic intensity safety guidelines. Our aim was to implement and evaluate a new linear-array–based technology, compound coherent plane-wave ultrasound, which offers ultrafast imaging and depiction of blood flow at safe acoustic intensity levels. Methods We compared acoustic intensity generated by a 128-element, 18-MHz linear array operated in conventionally focused and plane-wave modes and characterized signal-to-noise ratio (SNR) and lateral resolution. We developed plane-wave B-mode, real-time color-flow, and high-resolution depiction of slow flow in postprocessed data collected continuously at a rate of 20,000 frames/s. We acquired in vivo images of the posterior pole of the eye by compounding plane-wave images acquired over ±10° and produced images depicting orbital and choroidal blood flow. Results With the array operated conventionally, Doppler modes exceeded Food and Drug Administration safety guidelines, but plane-wave modalities were well within guidelines. Plane-wave data allowed generation of high-quality compound B-mode images, with SNR increasing with the number of compounded frames. Real-time color-flow Doppler readily visualized orbital blood flow. Postprocessing of continuously acquired data blocks of 1.6-second duration allowed high-resolution depiction of orbital and choroidal flow over the cardiac cycle. Conclusions Newly developed high-frequency linear arrays in combination with plane-wave techniques present opportunities for the evaluation of ocular anatomy and blood flow, as well as visualization and analysis of other transient phenomena such as vessel wall motion over the cardiac cycle and saccade-induced vitreous motion. PMID:27428169
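The reported growth of SNR with the number of compounded frames follows from frame averaging: for N acquisitions with independent noise, the noise amplitude drops roughly as 1/√N. A toy simulation (not ultrasound data; additive Gaussian noise and a uniform target are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
target = np.ones((32, 32))       # idealized uniform echogenic target

def plane_wave_frame():
    """One tilted plane-wave acquisition: target plus additive noise
    (independent between transmissions)."""
    return target + rng.normal(0.0, 0.5, target.shape)

single = plane_wave_frame()
# Compounding: average frames acquired at different steering angles.
compound = np.mean([plane_wave_frame() for _ in range(16)], axis=0)

err_single = float(np.std(single - target))
err_compound = float(np.std(compound - target))
# 16 compounded frames cut the noise by roughly a factor of 4.
```

Real coherent compounding averages beamformed RF data across steering angles, which also restores lateral resolution lost by unfocused transmission; this sketch only captures the noise-averaging aspect.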
Design and Construction of a Field Capable Snapshot Hyperspectral Imaging Spectrometer
NASA Technical Reports Server (NTRS)
Arik, Glenda H.
2005-01-01
The computed-tomography imaging spectrometer (CTIS) is a device which captures the spatial and spectral content of a rapidly evolving scene in a single image frame. The most recent CTIS design is optically all-reflective and uses as its dispersive element a state-of-the-art reflective computer-generated hologram (CGH). This project focuses on the instrument's transition from laboratory to field. This design will enable the CTIS to withstand a harsh desert environment. The system is modeled in optical design software using a tolerance analysis. The tolerances guide the design of the athermal mount and component parts. The parts are assembled into a working mount shell, where the performance of the mounts is tested for thermal integrity. An interferometric analysis of the reflective CGH is also performed.
Local intensity adaptive image coding
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.
1989-01-01
The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed. Its performance is characterized in a simulated space application, the research and development activities are described.
Research of spectacle frame measurement system based on structured light method
NASA Astrophysics Data System (ADS)
Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin
2016-10-01
Automatic eyeglass lens edging systems are now widely used to automatically cut and polish the uncut lens based on the spectacle frame shape data obtained from the spectacle frame measuring machine installed on the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing around the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical measuring method based on structured light to measure the three-dimensional (3D) data of the spectacle frame is proposed. First, we focus on a processing approach that solves the problem of deterioration of the structured light stripes caused by intense specular reflection on the frame surface. The techniques of bright-dark bi-level fringe projection, multiple exposure, and high-dynamic-range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, the Gamma transform and median filtering are applied to enhance image contrast. In order to remove background noise from the image and extract the region of interest (ROI), an auxiliary lighting system of special design is utilized to help effectively distinguish between the object and the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between stripes and the boundary of the spectacle frame. By further fringe center extraction and depth information acquisition through a look-up-table method, the 3D shape of the spectacle frame is recovered.
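The Gamma transform and median filtering steps can be sketched as follows (a simplified illustration with a hand-rolled 3×3 filter; the paper's actual parameters and pipeline are not reproduced):

```python
import numpy as np

def gamma_transform(img, gamma):
    """Contrast stretch on 8-bit data: out = 255 * (in / 255) ** gamma.
    gamma > 1 darkens midtones, deepening the dark fringes."""
    return 255.0 * (img / 255.0) ** gamma

def median_filter3(img):
    """3x3 median filter (borders left unchanged), used here to
    suppress impulse noise between the projected stripes."""
    out = img.astype(float).copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out
```

In practice a vectorized library routine would replace the explicit loops; the point is only the order of operations: enhance contrast first, then clean residual impulse noise before fringe-center extraction.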
DOE Office of Scientific and Technical Information (OSTI.GOV)
Popple, R; Bredel, M; Brezovich, I
Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Ten patients having the Leksell head frame and scanned with a modality specific localizer box were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprised of an experienced radiation oncologist and neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
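The mutual-information criterion behind such automated registration can be sketched from its definition, MI = H(A) + H(B) − H(A, B), computed over a joint intensity histogram (synthetic images below; the treatment planning system's actual implementation is unknown):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint intensity
    histogram: MI = H(A) + H(B) - H(A, B), in nats."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return h(px) + h(py) - h(pxy)

rng = np.random.default_rng(1)
ct = rng.random((64, 64))
mr_aligned = ct ** 2            # monotone intensity remap: "registered"
mr_shuffled = rng.permutation(ct.ravel()).reshape(ct.shape)  # misaligned
# Registration searches for the rigid transform that maximizes MI,
# so the aligned pair scores higher than the shuffled one.
```

MI rewards any consistent intensity relationship between modalities, not identical intensities, which is why it works for CT-MR pairs where the same tissue has different gray levels.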
Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates
NASA Astrophysics Data System (ADS)
Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.
2002-03-01
A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are a promising technique for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem, by distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite into a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. In addition to LOD-Sprite, our distribution method holds promise for other IBMR techniques.
Graphics-Printing Program For The HP Paintjet Printer
NASA Technical Reports Server (NTRS)
Atkins, Victor R.
1993-01-01
IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.
Self Occlusion and Disocclusion in Causal Video Object Segmentation
2015-12-18
The occlusion computation is parameter-free, in contrast to [4, 32, 10]. Taylor et al. [30] perform layer segmentation in longer video sequences leveraging occlusion cues. Figure 7 (sample visual results on FBMS-59) compares ground truth with Lee et al. [19], Grundman et al. [14], Ochs et al. [23], Taylor et al. [30], and the proposed method, and shows that the method recovers from errors in the first frame (short of failed detection).
2009-03-01
... the background, which manifests itself as shot noise; the second term is dark current noise; the third is electronics noise; the fourth is quantization noise; and the fifth is spatial noise. Because of the ease with which one can increase the number of frames collected, within the limitations of ... a computer and monitor. The FTS, a Bruker OPAG 22, was equipped with a mercury cadmium telluride (MCT) single-pixel detector responsive in the ...
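Assuming the five noise terms listed above are statistically independent, they combine in quadrature (root-sum-square); a minimal sketch with illustrative values, not measurements from the instrument:

```python
import math

def total_noise(shot, dark, electronics, quantization, spatial):
    """Root-sum-square combination of the five independent noise
    terms named in the text (all in the same units, e.g. electrons)."""
    return math.sqrt(shot ** 2 + dark ** 2 + electronics ** 2
                     + quantization ** 2 + spatial ** 2)

# Illustrative per-frame noise budget; averaging N frames reduces the
# temporal terms by roughly sqrt(N), which motivates collecting many.
sigma = total_noise(4.0, 2.0, 1.5, 0.5, 1.0)
```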
Communication: Strong laser alignment of solvent-solute aggregates in the gas-phase
NASA Astrophysics Data System (ADS)
Trippel, Sebastian; Wiese, Joss; Mullins, Terry; Küpper, Jochen
2018-03-01
Strong quasi-adiabatic laser alignment of the indole-water-dimer clusters, an amino-acid chromophore bound to a single water molecule through a hydrogen bond, was experimentally realized. The alignment was visualized through ion and electron imaging following strong-field ionization. Molecular-frame photoelectron angular distributions showed a clear suppression of the electron yield in the plane of the ionizing laser's polarization, which was analyzed as strong alignment of the molecular cluster with ⟨cos²θ₂D⟩ ≥ 0.9.
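The alignment metric ⟨cos²θ₂D⟩ is the ensemble average of the squared cosine of the angle between each detected 2-D momentum and the alignment axis; a sketch of its evaluation (taking the polarization along x is an assumed convention, and the momenta are synthetic):

```python
import numpy as np

def cos2_theta_2d(px, py):
    """<cos^2 theta_2D>: mean squared cosine of the angle between
    each detected 2-D momentum (px, py) and the alignment axis,
    taken here as the x axis.  1.0 means perfect alignment; 0.5 is
    the isotropic (unaligned) average."""
    cos2 = px ** 2 / (px ** 2 + py ** 2)
    return float(np.mean(cos2))

# Perfectly aligned ensemble: all momenta along the alignment axis.
aligned = cos2_theta_2d(np.array([1.0, -2.0, 3.0]), np.zeros(3))
# An isotropic ensemble approaches the unaligned value of 0.5.
phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
isotropic = cos2_theta_2d(np.cos(phi), np.sin(phi))
```

The reported ⟨cos²θ₂D⟩ ≥ 0.9 therefore sits far above the isotropic baseline of 0.5, indicating tight angular confinement of the cluster axis.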
Development and Operation of a Material Identification and Discrimination Imaging Spectroradiometer
NASA Technical Reports Server (NTRS)
Dombrowski, Mark; Willson, Paul; LaBaw, Clayton
1997-01-01
Many imaging applications require quantitative determination of a scene's spectral radiance. This paper describes a new system capable of real-time spectroradiometric imagery. Operating at a full-spectrum update rate of 30 Hz, this imager is capable of collecting a 30-point spectrum from each of three imaging heads: the first operates from 400 nm to 950 nm, with a 2% bandwidth; the second operates from 1.5 μm to 5.5 μm with a 1.5% bandwidth; the third operates from 5 μm to 12 μm, also at a 1.5% bandwidth. Standard image format is 256 x 256, with 512 x 512 possible in the VIS/NIR head. Spectra of up to 256 points are available at proportionately lower frame rates. In order to make such a tremendous amount of data more manageable, internal processing electronics perform four important operations on the spectral imagery data in real-time. First, all data in the spatial/spectral cube of data is spectro-radiometrically calibrated as it is collected. Second, to allow the imager to simulate sensors with arbitrary spectral response, any set of three spectral response functions may be loaded into the imager, including delta functions to allow single wavelength viewing; the instrument then evaluates the integral of the product of the scene spectral radiances and the response function. Third, more powerful exploitation of the gathered spectral radiances can be effected by application of various spectral-matched filtering algorithms to identify pixels whose relative spectral radiance distribution matches a sought-after spectral radiance distribution, allowing materials-based identification and discrimination. Fourth, the instrument allows determination of spectral reflectance, surface temperature, and spectral emissivity, also in real-time.
The spectral imaging technique used in the instrument allows tailoring of the frame rate and/or the spectral bandwidth to suit the scene radiance levels; i.e., the frame rate can be reduced, or the bandwidth increased, to improve SNR when viewing low-radiance scenes. The unique challenges of design and calibration are described. Pixel readout rates of 160 MHz, for full-frame readout rates of 1000 Hz (512 x 512 image), present the first challenge; processing rates of nearly 600 million integer operations per second for sensor emulation, or over 2 billion per second for matched filtering, present the second. Spatial and spectral calibration of 65,536 pixels (262,144 for the 512 x 512 version) and up to 1,000 spectral positions mandate novel decoupling methods to keep the required calibration memory to a reasonable size. The large radiometric dynamic range also requires care to maintain precision operation with minimum memory size.
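The sensor-emulation step described above evaluates, per pixel, the integral of the product of the measured scene spectral radiance and a user-loaded spectral response function. A minimal sketch of that numerical integration, using a trapezoidal rule over a hypothetical 30-point VIS/NIR grid (the function and grid names are illustrative assumptions, not the instrument's actual interface):

```python
def emulate_sensor(wavelengths_nm, radiance, response):
    """Trapezoidal integral of L(lambda) * R(lambda) over the sampled band."""
    assert len(wavelengths_nm) == len(radiance) == len(response)
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dw = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = radiance[i] * response[i]
        f1 = radiance[i + 1] * response[i + 1]
        total += 0.5 * (f0 + f1) * dw
    return total

# Illustrative 30-point grid spanning the 400-951 nm VIS/NIR band:
wl = [400 + 19 * i for i in range(30)]
flat = [1.0] * 30                                 # flat scene radiance
band = [1.0 if 590 <= w <= 650 else 0.0 for w in wl]  # band-pass response
print(emulate_sensor(wl, flat, band))
```

A delta-like response function (a single nonzero sample) reduces this to single-wavelength viewing, as the abstract notes.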
Ultrafast chirped optical waveform recording using referenced heterodyning and a time microscope
Bennett, Corey Vincent
2010-06-15
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., a temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured in a single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
Ultrafast chirped optical waveform recorder using referenced heterodyning and a time microscope
Bennett, Corey Vincent [Livermore, CA
2011-11-22
A new technique for capturing both the amplitude and phase of an optical waveform is presented. This technique can capture signals with many THz of bandwidth in a single shot (e.g., a temporal resolution of about 44 fs), or be operated repetitively at a high rate. That is, each temporal window (or frame) is captured in a single shot, in real time, but the process may be run repeatedly or single-shot. This invention expands upon previous work in temporal imaging by adding heterodyning, which can be self-referenced for improved precision and stability, to convert frequency chirp (the second derivative of phase with respect to time) into a time-varying intensity modulation. By also including a variety of possible demultiplexing techniques, this process is scalable to recording continuous signals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.
2011-12-15
Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed-pattern noise (FPN) to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat-field. However, the noise in the corrected image depends on the number of flat frames used for the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the propagated noise from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames on the NPS and DQE values and appropriately modifies the gain correction algorithm to compensate for this effect. Results: It is shown that using the suggested gain correction algorithm a minimum number of reference flat frames (down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames, and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
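A minimal sketch of the conventional flat-field gain correction the study builds on (not the authors' modified algorithm, whose exact form the abstract does not give): each raw pixel is normalized by the per-pixel mean of N reference flats, then rescaled by the global mean so the output stays in raw units. All values here are illustrative.

```python
def gain_correct(raw, flats):
    """Conventional gain correction: raw / mean(flats) * global mean."""
    n_pix = len(raw)
    # per-pixel mean of the N reference flat frames
    pixel_mean = [sum(f[i] for f in flats) / len(flats) for i in range(n_pix)]
    global_mean = sum(pixel_mean) / n_pix
    return [raw[i] * global_mean / pixel_mean[i] for i in range(n_pix)]

# Toy detector row with fixed-pattern gain differences between pixels:
raw   = [100.0, 110.0, 90.0, 105.0]
flats = [[50.0, 55.0, 45.0, 52.5],
         [50.0, 55.0, 45.0, 52.5]]
print(gain_correct(raw, flats))   # FPN removed: all pixels become equal
```

With noiseless flats the FPN cancels exactly; with real, noisy flats the residual noise in the corrected image depends on N, which is precisely the dependence the study's modified algorithm removes.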
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
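The core building block of proximal iterative hard thresholding is the proximal operator of the ℓ0 penalty, which simply zeroes entries below a threshold. A toy sketch (not the full EPIHT iteration with extrapolation and wavelet frames): for min_x 0.5·||x - b||² + λ·||x||₀ with an identity data term, the solution is one hard-threshold step with threshold √(2λ).

```python
import math

def hard_threshold(x, lam):
    """Proximal operator of lam * ||x||_0: keep entries with |v| > sqrt(2*lam)."""
    t = math.sqrt(2.0 * lam)
    return [v if abs(v) > t else 0.0 for v in x]

b = [3.0, 0.1, -2.5, 0.05, 0.0]
print(hard_threshold(b, 0.5))   # keeps only entries with |v| > 1.0
```

In the full algorithm this operator is applied at every iteration to the wavelet frame coefficients after a gradient step on the data-fidelity term.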
NASA Astrophysics Data System (ADS)
Gitlin, M. S.; Glyavin, M. Yu.; Fedotov, A. E.; Tsvetkov, A. I.
2017-07-01
The paper presents the second part of a review of a highly sensitive technique for time-resolved imaging and measurement of the 2D intensity profiles of millimeter-wave radiation by means of Visible Continuum Radiation emitted by the positive column of a medium-pressure Cs-Xe DC Discharge (the VCRD method). The first part of the review focused on the operating principles and fundamentals of this new technique [Plasma Phys. Rep. 43, 253 (2017)]. The second part focuses on experiments demonstrating the application of this imaging technique to measuring the parameters of radiation at the output of moderate-power millimeter-wave sources. In particular, the output waveguide mode of a moderate-power W-band gyrotron with a pulsed magnetic field was identified, and the relative powers of some spurious modes at the outputs of this gyrotron and a pulsed D-band orotron were evaluated. The paper also reviews applications of the VCRD technique for real-time imaging and nondestructive testing at frame rates higher than 10 fps using millimeter waves. Shadow projection images of objects opaque and transparent to millimeter waves have been obtained using pulsed watt-scale millimeter waves for object illumination. Near-video-frame-rate millimeter-wave shadowgraphy has been demonstrated. It is shown that this technique can be used for single-shot screening (including detection of concealed objects) and time-resolved imaging of time-dependent processes.
HDR video synthesis for vision systems in dynamic scenes
NASA Astrophysics Data System (ADS)
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
2016-09-01
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
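The weighted averaging of radiance maps mentioned above can be sketched as follows: each LDR frame is converted to a radiance estimate by dividing out its exposure time, then frames are blended per pixel with weights favoring well-exposed values. The hat-shaped weighting function and the pixel values are illustrative assumptions, not the paper's exact choices.

```python
def hdr_merge(frames, exposures, max_val=255.0):
    """frames: equally long pixel lists; exposures: exposure times in seconds."""
    def weight(v):                       # hat function: low near 0 and near max_val
        return max(1e-6, 1.0 - abs(2.0 * v / max_val - 1.0))
    out = []
    for i in range(len(frames[0])):
        num = den = 0.0
        for frame, t in zip(frames, exposures):
            w = weight(frame[i])
            num += w * frame[i] / t      # radiance estimate from this frame
            den += w
        out.append(num / den)
    return out

short = [10.0, 128.0, 200.0]             # 1/100 s exposure
long_ = [100.0, 255.0, 255.0]            # 1/10 s exposure (clipped highlights)
print(hdr_merge([short, long_], [0.01, 0.1]))
```

Clipped pixels in the long exposure get near-zero weight, so the merged radiance there is taken almost entirely from the short exposure.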
Compressive Loading and Modeling of Stitched Composite Stiffeners
NASA Technical Reports Server (NTRS)
Leone, Frank A., Jr.; Jegley, Dawn C.; Linton, Kim A.
2016-01-01
A series of single-frame and single-stringer compression tests were conducted at NASA Langley Research Center on specimens harvested from a large panel built using the Pultruded Rod Stitched Efficient Unitized Structure (PRSEUS) concept. Different frame and stringer designs were used in fabrication of the PRSEUS panel. In this paper, the details of the experimental testing of single-frame and single-stringer compression specimens are presented, as well as discussions on the performance of the various structural configurations included in the panel. Nonlinear finite element models were developed to further understand the failure processes observed during the experimental campaign.
A high-speed network for cardiac image review.
Elion, J L; Petrocelli, R R
1994-01-01
A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scalable, meaning that the same software and hardware are used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boerner, M.; Frank, A.; Pelka, A.
2012-04-15
This article reports on the development and setup of a Nomarski-type multi-frame interferometer as a time- and space-resolving diagnostic of the free-electron density in laser-generated plasma. The interferometer allows the recording of a series of 4 images within 6 ns of a single laser-plasma interaction. For the setup presented here, the minimal accessible free-electron density is 5 × 10^18 cm^-3 and the maximal one is 2 × 10^20 cm^-3. Furthermore, it provides a resolution of the electron density of 50 μm in space and 0.5 ns in time for one image, with a customizable magnification in space for each of the 4 images. The electron density was evaluated from the interferograms using an Abel inversion algorithm. The functionality of the system was proven during first experiments, and the experimental results are presented and discussed. A ray-tracing procedure was implemented to verify the interferometry pictures taken. In particular, the experimental results are compared to simulations and show excellent agreement, providing a conclusive picture of the evolution of the electron density distribution.
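Abel inversion recovers a radial density profile n(r) from its line integrals through a cylindrically symmetric plasma. A simple discrete variant is "onion peeling": model the plasma as concentric shells, express each chord measurement as a sum of shell densities weighted by chord lengths, and back-substitute from the outermost shell inward. Geometry, shell count, and density below are illustrative; the paper's exact algorithm is not specified in the abstract.

```python
import math

def abel_invert(projection, dr):
    """projection[j]: line integral through the chord at y = j*dr; returns shell densities."""
    n_shell = len(projection)
    def chord(j, k):
        # chord length of shell k (radius k*dr .. (k+1)*dr) at height y = j*dr
        y2 = (j * dr) ** 2
        ro, ri = ((k + 1) * dr) ** 2, (k * dr) ** 2
        return 2.0 * (math.sqrt(max(ro - y2, 0.0)) - math.sqrt(max(ri - y2, 0.0)))
    density = [0.0] * n_shell
    for j in range(n_shell - 1, -1, -1):          # peel from the outside in
        outer = sum(density[k] * chord(j, k) for k in range(j + 1, n_shell))
        density[j] = (projection[j] - outer) / chord(j, j)
    return density

# Forward-project a known uniform density, then invert it:
true_n, dr, shells = 2.0, 0.1, 8
proj = [true_n * 2.0 * math.sqrt((shells * dr) ** 2 - (j * dr) ** 2)
        for j in range(shells)]
print(abel_invert(proj, dr))   # recovers the uniform density in every shell
```

For a uniform disk the chord lengths telescope, so the inversion reproduces the input density exactly; real interferometric phase data additionally require unwrapping and conversion from phase to density before this step.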
Multiframe digitization of x-ray (TV) images (abstract)
NASA Astrophysics Data System (ADS)
Karpenko, V. A.; Khil'chenko, A. D.; Lysenko, A. P.; Panchenko, V. E.
1989-07-01
The work in progress deals with the experimental search for a technique for digitizing x-ray TV images. The small volume of the buffer memory of the analog-to-digital (A/D) converter (ADC) we have previously used to detect TV signals made it necessary to digitize only one line of the television raster at a time and also to make use of gating to obtain the video information contained in the whole frame. This paper is devoted to multiframe digitizing. The recorder of video signals comprises a broadband 8-bit A/D converter, a buffer memory of 128K words, and a control circuit which forms the necessary sequence of advance pulses for the A/D converter and the memory relative to the input frame and line sync pulses (FSP and LSP). The device provides recording of video signals corresponding to one or a few frames following one after another, or to their fragments. The control circuit is responsible for the separation of the required fragment of the TV image. When loading the limit registers, the following input parameters of the control circuit are set: the skipping of a definite number of lines after the next FSP, the number of lines of recording inside a fragment, the frequency of the information lines inside a fragment, the delay in the start of the ADC conversion relative to the arrival of the LSP, the length of the information section of a line, and the frequency of taking the readouts in a line. In addition, among the instructions given are the number of frames of recording and the frequency of their sequence. Thus, the A/D converter operates only inside a given fragment of the TV image. The information is introduced into the memory in sequence, fragment by fragment, without skipping, and is then extracted as samples according to the addresses needed for representation in the required form, and processing. The video signal recorder provides a minimum ADC conversion time of 250 ns per point.
As before, among the apparatus used were an image vidicon with luminophor conversion of x-radiation to light, and a single-crystal x-ray diffraction scheme necessary to form dynamic test objects from x-ray lines dispersed in space (the projections of the linear focus of an x-ray tube).
Analysis of Benefits and Pitfalls of Satellite SAR for Coastal Area Monitoring
NASA Astrophysics Data System (ADS)
Nunziata, F.; Buono, A.; Migliaccio, M.; Li, X.; Wei, Y.
2016-08-01
This study aims at describing the outcomes of the Dragon-3 project no. 10689. The undertaken activities deal with coastal area monitoring, including sea pollution and coastline extraction. The key remote sensing tool is the Synthetic Aperture Radar (SAR), which provides fine-resolution images of the microwave reflectivity of the observed scene. However, the interpretation of SAR images is not at all straightforward, and the above-mentioned coastal area applications cannot be easily addressed using single-polarization SAR. Hence, the main outcome of this project is investigating the capability of multi-polarization SAR measurements to generate added-value products in the frame of coastal area management.
Takeda, Jun; Ishida, Akihiro; Makishima, Yoshinori; Katayama, Ikufumi
2010-01-01
In this review, we demonstrate a real-time time-frequency two-dimensional (2D) pump-probe imaging spectroscopy implemented on a single shot basis applicable to excited-state dynamics in solid-state organic and biological materials. Using this technique, we could successfully map ultrafast time-frequency 2D transient absorption signals of β-carotene in solid films with wide temporal and spectral ranges having very short accumulation time of 20 ms per unit frame. The results obtained indicate the high potential of this technique as a powerful and unique spectroscopic tool to observe ultrafast excited-state dynamics of organic and biological materials in solid-state, which undergo rapid photodegradation. PMID:22399879
Frame by Frame II: A Filmography of the African American Image, 1978-1994.
ERIC Educational Resources Information Center
Klotman, Phyllis R.; Gibson, Gloria J.
A reference guide on African American film professionals, this book is a companion volume to the earlier "Frame by Frame I." It focuses on giving credit to African Americans who have contributed their talents to a film industry that has scarcely recognized their contributions, building on the aforementioned "Frame by Frame I,"…
FPGA implementation of image dehazing algorithm for real time applications
NASA Astrophysics Data System (ADS)
Kumar, Rahul; Kaushik, Brajesh Kumar; Balasubramanian, R.
2017-09-01
Weather degradation such as haze, fog, and mist severely reduces the effective range of visual surveillance. This degradation is a spatially varying phenomenon, which makes the problem nontrivial. Dehazing is an essential preprocessing stage in applications such as long-range imaging, border security, and intelligent transportation systems; however, these applications require low latency from the preprocessing block. In this work, the single-image dark channel prior algorithm is modified and implemented for fast processing with comparable visual quality of the restored image/video. Although the conventional single-image dark channel prior algorithm is computationally expensive, it yields impressive results. Moreover, a two-stage image dehazing architecture is introduced, wherein dark channel and airlight are estimated in the first stage, while the transmission map and intensity restoration are computed in the following stages. The algorithm is implemented using Xilinx Vivado software and validated on a Xilinx ZC702 development board, which contains an Artix-7-equivalent Field Programmable Gate Array (FPGA) and a dual-core ARM Cortex-A9 processor. Additionally, a high-definition multimedia interface (HDMI) has been incorporated for video feed and display purposes. The results show that the dehazing algorithm attains 29 frames per second at an image resolution of 1920x1080, which is suitable for real-time applications. The design utilizes 9 18K BRAMs, 97 DSP48s, 6508 FFs, and 8159 LUTs.
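The dark-channel-prior pipeline the design accelerates can be sketched in four steps: dark channel, airlight, transmission map, restoration. The sketch below is a deliberate simplification (a 1-D "image" and a crude airlight estimate stand in for 2-D RGB patch operations); patch size, omega, and t0 follow common conventions for this algorithm family.

```python
def dark_channel(img, patch=3):
    """img: list of (r, g, b) in [0, 1]; min over channels and a 1-D patch."""
    per_pixel = [min(p) for p in img]
    half = patch // 2
    return [min(per_pixel[max(0, i - half):i + half + 1]) for i in range(len(img))]

def dehaze(img, omega=0.95, t0=0.1):
    dc = dark_channel(img)
    a = max(dc)                                   # crude airlight estimate
    out = []
    for p, d in zip(img, dc):
        t = max(1.0 - omega * d / a, t0)          # transmission map, floored at t0
        # restore J = (I - A)/t + A, clamped to the valid range
        out.append(tuple(min(max((c - a) / t + a, 0.0), 1.0) for c in p))
    return out

hazy = [(0.9, 0.9, 0.9), (0.6, 0.7, 0.8), (0.5, 0.5, 0.6)]
print(dehaze(hazy))
```

The two-stage FPGA architecture in the paper maps the first two lines of `dehaze` (dark channel and airlight) to stage one and the transmission/restoration loop to the later stages.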
Design of an MR image processing module on an FPGA chip
NASA Astrophysics Data System (ADS)
Li, Limin; Wyrwicz, Alice M.
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
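The module's core reconstruction step is a 2-D inverse Fourier transform of k-space data followed by a magnitude operation. As a tiny stand-in for the pipelined 2-D FFT core, a naive O(N^4) inverse DFT on a 4 x 4 matrix shows the math (illustrative only; a real implementation uses row/column FFTs, which is exactly where the transposition the design avoids would normally appear):

```python
import cmath

def idft2(kspace):
    """Naive 2-D inverse DFT; returns the magnitude image."""
    n = len(kspace)
    out = []
    for x in range(n):
        row = []
        for y in range(n):
            s = 0.0 + 0.0j
            for u in range(n):
                for v in range(n):
                    s += kspace[u][v] * cmath.exp(2j * cmath.pi * (u * x + v * y) / n)
            row.append(abs(s) / (n * n))
        out.append(row)
    return out

# A single DC k-space sample reconstructs to a uniform image:
k = [[0.0] * 4 for _ in range(4)]
k[0][0] = 16.0
print(idft2(k))    # -> all pixels 1.0
```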
Shen, Simon; Syal, Karan; Tao, Nongjian; Wang, Shaopeng
2015-12-01
We present a Single-Cell Motion Characterization System (SiCMoCS) to automatically extract bacterial cell morphological features from microscope images and use those features to automatically classify cell motion for rod-shaped motile bacterial cells. In some imaging-based studies, bacterial cells need to be attached to the surface for time-lapse observation of cellular processes such as cell membrane-protein interactions and membrane elasticity. These studies often generate large volumes of images. Extracting accurate bacterial cell morphology features from these images is critical for quantitative assessment. Using SiCMoCS, we demonstrated simultaneous and automated motion tracking and classification of hundreds of individual cells in an image sequence of several hundred frames. This is a significant improvement over traditional manual and semi-automated approaches to segmenting bacterial cells based on empirical thresholds, and a first attempt to automatically classify bacterial motion types for motile rod-shaped bacterial cells, which enables rapid and quantitative analysis of various types of bacterial motion.
NASA Technical Reports Server (NTRS)
1999-01-01
This narrow angle image taken by Cassini's camera system of the Moon is one of the best of a sequence of narrow angle frames taken as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. The 80 millisecond exposure was taken through a spectral filter centered at 0.33 microns; the filter bandpass was 85 Angstroms wide. The spatial scale of the image is about 1.4 miles per pixel (about 2.3 kilometers). The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ. Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona. Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
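The adaptive inter/intra prediction idea above can be sketched as follows: each sample is predicted either from a spatial neighbor (intra, in the spirit of JPEG-LS) or from the co-located sample in the previous frame (inter), and the mode with the smaller summed residual is chosen. The per-frame selection rule here is an illustrative simplification of the paper's statistics-based adaptation in the wavelet domain.

```python
def predict_frame(curr, prev_frame):
    """Return (mode, residual) using the better of intra/inter prediction."""
    intra = [curr[0]] + curr[:-1]        # intra: predict from the left neighbor
    inter = prev_frame                    # inter: predict from the previous frame
    err_intra = sum(abs(c - p) for c, p in zip(curr, intra))
    err_inter = sum(abs(c - p) for c, p in zip(curr, inter))
    mode = "intra" if err_intra <= err_inter else "inter"
    pred = intra if mode == "intra" else inter
    residual = [c - p for c, p in zip(curr, pred)]   # entropy-coded losslessly
    return mode, residual

static_prev = [10, 12, 11, 13]
static_curr = [10, 12, 11, 13]           # unchanged scene: inter prediction wins
print(predict_frame(static_curr, static_prev))
```

For static content the inter residual is all zeros, which compresses extremely well; for scene cuts the intra predictor takes over, which is the adaptation the paper exploits per wavelet subband.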
ERIC Educational Resources Information Center
Allweiss, Alexandra; Grant, Carl A.; Manning, Karla
2015-01-01
This critical article provides insights into how media frames influence our understandings of school reform in urban spaces by examining images of students during the 2013 school closings in Chicago. Using visual framing analysis, and informed by framing theory and critiques of neoliberalism, we seek to explore two questions: (1) What role do media…
A framed, 16-image Kirkpatrick–Baez x-ray microscope
Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.; ...
2017-09-08
A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser-plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. These core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions.
Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K
2018-03-21
Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations from all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically debias localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated on five datasets of in vitro and fixed-cell samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mathias, C.J.; Welch, M.J.; Raichle, M.E.
1990-03-01
Copper(II) pyruvaldehyde bis(N4-methylthiosemicarbazone) (Cu-PTSM), copper(II) pyruvaldehyde bis(N4-dimethylthiosemicarbazone) (Cu-PTSM2), and copper(II) ethylglyoxal bis(N4-methylthiosemicarbazone) (Cu-ETSM) have been proposed as PET tracers for cerebral blood flow (CBF) when labeled with generator-produced 62Cu (t1/2 = 9.7 min). To evaluate the potential of Cu-PTSM for CBF PET studies, baboon single-pass cerebral extraction measurements and PET imaging were carried out with the use of 67Cu (t1/2 = 2.6 days) and 64Cu (t1/2 = 12.7 hr), respectively. All three chelates were extracted into the brain with high efficiency. There was some clearance of all chelates in the 10-50-sec time frame, and Cu-PTSM2 continued to clear. Cu-PTSM and Cu-ETSM have high residual brain activity. PET imaging of baboon brain was carried out with the use of (64Cu)-Cu-PTSM. For comparison with the 64Cu brain image, a CBF (15O-labeled water) image (40 sec) was first obtained. Qualitatively, the H2(15)O and (64Cu)-Cu-PTSM images were very similar; for example, a comparison of gray to white matter uptake resulted in ratios of 2.42 for H2(15)O and 2.67 for Cu-PTSM. No redistribution of 64Cu was observed in 2 hr of imaging, as was predicted from the single-pass study results. Quantitative determination of blood flow using Cu-PTSM showed good agreement with blood flow determined with H2(15)O. These data suggest that (62Cu)-Cu-PTSM may be a useful generator-produced radiopharmaceutical for blood flow studies with PET.