Sample records for consecutive image frames

  1. Smear correction of highly variable, frame-transfer CCD images with application to polarimetry.

    PubMed

    Iglesias, Francisco A; Feller, Alex; Nagaraju, Krishnappa

    2015-07-01

    Image smear, produced by the shutterless operation of frame-transfer CCD detectors, can be detrimental for many imaging applications. Existing algorithms used to numerically remove smear do not contemplate cases where intensity levels change considerably between consecutive frame exposures. In this report, we reformulate the smearing model to include specific variations of the sensor illumination. The corresponding desmearing expression and its noise properties are also presented and demonstrated in the context of fast imaging polarimetry.
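The smearing model that this report generalizes has a well-known constant-illumination baseline that can be inverted in closed form. The sketch below shows only that baseline (numpy, with an assumed smear fraction `eps` per line transfer), not the paper's variable-illumination reformulation:

```python
import numpy as np

def desmear_constant(S, eps):
    """Invert the classic frame-transfer smear model
        S[y, x] = I[y, x] + eps * sum_y' I[y', x],
    where eps is the (assumed known) ratio of line-transfer time to
    exposure time. Column sums give a closed-form solution."""
    S = np.asarray(S, dtype=float)
    n = S.shape[0]                       # number of rows in each column
    # column_sum(S) = (1 + n*eps) * column_sum(I), so solve for column_sum(I)
    col_sum_I = S.sum(axis=0) / (1.0 + n * eps)
    return S - eps * col_sum_I
```

The paper's point is precisely that this constant-illumination assumption breaks when intensity changes between consecutive exposures (e.g. during fast polarization modulation), so the closed form above is only the starting point for their reformulated desmearing expression.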

  2. A 100 Mfps image sensor for biological applications

    NASA Astrophysics Data System (ADS)

    Etoh, T. Goji; Shimonomura, Kazuhiro; Nguyen, Anh Quang; Takehara, Kosei; Kamakura, Yoshinari; Goetschalckx, Paul; Haspeslagh, Luc; De Moor, Piet; Dao, Vu Truong Son; Nguyen, Hoang Dung; Hayashi, Naoki; Mitsui, Yo; Inumaru, Hideo

    2018-02-01

Two ultrahigh-speed CCD image sensors with different characteristics were fabricated for applications to advanced scientific measurement apparatuses. The sensors are BSI MCG (Backside-illuminated Multi-Collection-Gate) image sensors with multiple collection gates around the center of the front side of each pixel, placed like petals of a flower. One has five collection gates and one drain gate at the center, which can capture five consecutive frames at 100 Mfps with a pixel count of about 600 kpixels (512 x 576 x 2 pixels). In-pixel signal accumulation is possible for repetitive image capture of reproducible events. The target application is FLIM. The other is equipped with four collection gates, each connected to an in-situ CCD memory with 305 elements, which enables capture of 1,220 (4 x 305) consecutive images at 50 Mfps. The CCD memory is folded and looped with the first element connected to the last element, which also makes in-pixel signal accumulation possible. The sensor is a small test sensor with 32 x 32 pixels. The target applications are imaging TOF MS, pulsed neutron tomography and dynamic PSP. The paper also briefly explains an expression for the temporal resolution of silicon image sensors theoretically derived by the authors in 2017. It is shown that an image sensor designed based on this theoretical analysis achieves imaging of consecutive frames at a frame interval of 50 ps.

  3. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope.

    PubMed

    Vienola, Kari V; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A; de Boer, Johannes F

    2018-02-01

Retinal motion detection with an accuracy of 0.77 arcmin corresponding to 3.7 µm on the retina is demonstrated with a novel digital micromirror device based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7 millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts.
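The reference-based motion measurement can be illustrated with a toy FFT cross-correlation shift estimator. This sketch assumes dense frames and pure cyclic translation; the actual method registers DMD-subsampled frames against the confocal reference image:

```python
import numpy as np

def estimate_shift(reference, frame):
    """Estimate the (dy, dx) translation of `frame` relative to `reference`
    from the peak of their circular cross-correlation (computed via FFT)."""
    F = np.fft.fft2(frame)
    R = np.fft.fft2(reference)
    xcorr = np.real(np.fft.ifft2(F * np.conj(R)))
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    h, w = reference.shape
    # wrap to a signed range so a shift of -2 is not reported as w - 2
    return (dy + h // 2) % h - h // 2, (dx + w // 2) % w - w // 2
```

Repeating this per subsampled frame against the confocal reference yields a motion trace; correcting each frame by its estimated shift before averaging gives a motion-corrected composite, as described in the abstract.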

  4. In vivo retinal imaging for fixational eye motion detection using a high-speed digital micromirror device (DMD)-based ophthalmoscope

    PubMed Central

    Vienola, Kari V.; Damodaran, Mathi; Braaf, Boy; Vermeer, Koenraad A.; de Boer, Johannes F.

    2018-01-01

Retinal motion detection with an accuracy of 0.77 arcmin corresponding to 3.7 µm on the retina is demonstrated with a novel digital micromirror device based ophthalmoscope. By generating a confocal image as a reference, eye motion could be measured from consecutively measured subsampled frames. The subsampled frames provide 7.7 millisecond snapshots of the retina without motion artifacts between the image points of the subsampled frame, distributed over the full field of view. An ophthalmoscope pattern projection speed of 130 Hz enabled a motion detection bandwidth of 65 Hz. A model eye with a scanning mirror was built to test the performance of the motion detection algorithm. Furthermore, an in vivo motion trace was obtained from a healthy volunteer. The obtained eye motion trace clearly shows the three main types of fixational eye movements. Lastly, the obtained eye motion trace was used to correct for the eye motion in consecutively obtained subsampled frames to produce an averaged confocal image corrected for motion artefacts. PMID:29552396

  5. Automatic video summarization driven by a spatio-temporal attention model

    NASA Astrophysics Data System (ADS)

    Barland, R.; Saadane, A.

    2008-02-01

According to the literature, automatic video summarization techniques can be classified into two categories according to the nature of the output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to images selected from the original video for their significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
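The selection rule described above can be sketched with a single toy channel. Here a crude intensity-contrast map stands in for the paper's three HVS channels (intensity, color, temporal), and a frame becomes a key-frame when its saliency map changes strongly from the previous frame; the threshold `thresh` is an assumption of this sketch:

```python
import numpy as np

def keyframes_by_saliency(frames, thresh):
    """Pick key-frames where the frame-to-frame saliency variation is large.
    Saliency here is a toy single-channel contrast map, not the full
    three-channel bottom-up attention model of the paper."""
    sal = [np.abs(f - f.mean()) for f in frames]      # crude contrast saliency
    keys = [0]                                        # always keep first frame
    for t in range(1, len(frames)):
        variation = np.mean(np.abs(sal[t] - sal[t - 1]))
        if variation > thresh:
            keys.append(t)
    return keys
```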

  6. Tracking quasi-stationary flow of weak fluorescent signals by adaptive multi-frame correlation.

    PubMed

    Ji, L; Danuser, G

    2005-12-01

    We have developed a novel cross-correlation technique to probe quasi-stationary flow of fluorescent signals in live cells at a spatial resolution that is close to single particle tracking. By correlating image blocks between pairs of consecutive frames and integrating their correlation scores over multiple frame pairs, uncertainty in identifying a globally significant maximum in the correlation score function has been greatly reduced as compared with conventional correlation-based tracking using the signal of only two consecutive frames. This approach proves robust and very effective in analysing images with a weak, noise-perturbed signal contrast where texture characteristics cannot be matched between only a pair of frames. It can also be applied to images that lack prominent features that could be utilized for particle tracking or feature-based template matching. Furthermore, owing to the integration of correlation scores over multiple frames, the method can handle signals with substantial frame-to-frame intensity variation where conventional correlation-based tracking fails. We tested the performance of the method by tracking polymer flow in actin and microtubule cytoskeleton structures labelled at various fluorophore densities providing imagery with a broad range of signal modulation and noise. In applications to fluorescent speckle microscopy (FSM), where the fluorophore density is sufficiently low to reveal patterns of discrete fluorescent marks referred to as speckles, we combined the multi-frame correlation approach proposed above with particle tracking. This hybrid approach allowed us to follow single speckles robustly in areas of high speckle density and fast flow, where previously published FSM analysis methods were unsuccessful. Thus, we can now probe cytoskeleton polymer dynamics in living cells at an entirely new level of complexity and with unprecedented detail.
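The core idea of integrating correlation scores over several consecutive frame pairs, rather than matching a single pair, can be sketched as follows. The block coordinates, search radius, and plain (un-normalized) correlation score are simplifications for illustration; quasi-stationary flow means the true shift is assumed shared across pairs:

```python
import numpy as np

def multiframe_correlation_shift(frames, block, max_shift):
    """Estimate the displacement of an image block by summing correlation
    scores over all consecutive frame pairs, instead of using only one pair.
    `block` = (y0, y1, x0, x1) bounds of the template in each earlier frame."""
    y0, y1, x0, x1 = block
    score = np.zeros((2 * max_shift + 1, 2 * max_shift + 1))
    for a, b in zip(frames[:-1], frames[1:]):         # consecutive pairs
        tmpl = a[y0:y1, x0:x1]
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                cand = b[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
                # accumulate a correlation score per candidate shift
                score[dy + max_shift, dx + max_shift] += np.sum(tmpl * cand)
    dy, dx = np.unravel_index(np.argmax(score), score.shape)
    return int(dy - max_shift), int(dx - max_shift)
```

Summing over pairs is what suppresses the spurious maxima that make single-pair correlation unreliable for weak, noise-perturbed signals.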

  7. Data rate enhancement of optical camera communications by compensating inter-frame gaps

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy Thong; Park, Youngil

    2017-07-01

Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of the OCC system, it is still much lower than that of the photodiode-based LiFi system. One major reason for this low data rate is the inter-frame gap (IFG) of the image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for this IFG efficiently by an interleaved Hamming coding scheme. The proposed scheme is implemented and the performance is measured.
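A Hamming(7,4) codec shows the error-correcting building block involved; the interleaving step (not shown) spreads each codeword's bits over several camera frames so an IFG loss hits each codeword at most once, which a single-error-correcting code can then repair. This is a generic textbook code, not necessarily the exact parameters of the paper:

```python
import numpy as np

# Systematic Hamming(7,4) generator and parity-check matrices
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(bits4):
    """4 data bits -> 7-bit codeword."""
    return (np.array(bits4) @ G) % 2

def decode(code7):
    """Correct up to one flipped bit, then return the 4 data bits."""
    code7 = np.array(code7).copy()
    syndrome = (H @ code7) % 2
    if syndrome.any():                       # syndrome matches a column of H
        col = np.where((H.T == syndrome).all(axis=1))[0][0]
        code7[col] ^= 1                      # flip the located bad bit
    return code7[:4]                         # systematic: data bits come first
```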

  8. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.

  9. Non-rigid multi-frame registration of cell nuclei in live cell fluorescence microscopy image data.

    PubMed

    Tektonidis, Marco; Kim, Il-Han; Chen, Yi-Chun M; Eils, Roland; Spector, David L; Rohr, Karl

    2015-01-01

    The analysis of the motion of subcellular particles in live cell microscopy images is essential for understanding biological processes within cells. For accurate quantification of the particle motion, compensation of the motion and deformation of the cell nucleus is required. We introduce a non-rigid multi-frame registration approach for live cell fluorescence microscopy image data. Compared to existing approaches using pairwise registration, our approach exploits information from multiple consecutive images simultaneously to improve the registration accuracy. We present three intensity-based variants of the multi-frame registration approach and we investigate two different temporal weighting schemes. The approach has been successfully applied to synthetic and live cell microscopy image sequences, and an experimental comparison with non-rigid pairwise registration has been carried out. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated to the images of the stereo pairs and the second one between sets of features associated to consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016

  11. Temporally diffeomorphic cardiac motion estimation from three-dimensional echocardiography by minimization of intensity consistency error.

    PubMed

    Zhang, Zhijun; Ashraf, Muhammad; Sahn, David J; Song, Xubo

    2014-05-01

Quantitative analysis of cardiac motion is important for evaluation of heart function. Three dimensional (3D) echocardiography is among the most frequently used imaging modalities for motion estimation because it is convenient, real-time, low-cost, and nonionizing. However, motion estimation from 3D echocardiographic sequences is still a challenging problem due to low image quality and image corruption by noise and artifacts. The authors have developed a temporally diffeomorphic motion estimation approach in which the velocity field instead of the displacement field was optimized. The optimal velocity field optimizes a novel similarity function, which we call the intensity consistency error, defined by evolving multiple consecutive frames to each time point. The optimization problem is solved by using the steepest descent method. Experiments with simulated datasets, images of an ex vivo rabbit phantom, images of in vivo open-chest pig hearts, and healthy human images were used to validate the authors' method. Tests on simulated and real cardiac sequences showed that the authors' method is more accurate than other competing temporally diffeomorphic methods. Tests with sonomicrometry showed that the tracked crystal positions have good agreement with ground truth and the authors' method has higher accuracy than the temporal diffeomorphic free-form deformation (TDFFD) method. Validation with an open-access human cardiac dataset showed that the authors' method has smaller feature tracking errors than both TDFFD and frame-to-frame methods. The authors proposed a diffeomorphic motion estimation method with temporal smoothness by constraining the velocity field to have maximum local intensity consistency within multiple consecutive frames. The estimated motion using the authors' method has good temporal consistency and is more accurate than other temporally diffeomorphic motion estimation methods.

  12. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels and 128 memories/pixel, and a readout speed of 1Tpixel/sec is obtained, leading to 10 Mfps full-resolution video capturing with 128 consecutive frames, and 20 Mfps half-resolution video capturing with 256 consecutive frames. The first development model has been employed in a high-speed video camera and put in practical use in 2012. By the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were simultaneously achieved, and the performance-improved version has been utilized in the commercialized high-speed video camera since 2015 that offers 10 Mfps with ISO16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low light conditions, such as under a microscope, as well as capturing of UHS light emission phenomena.

  13. A programmable display layer for virtual reality system architectures.

    PubMed

    Smit, Ferdi Alexander; van Liere, Robert; Froehlich, Bernd

    2010-01-01

    Display systems typically operate at a minimum rate of 60 Hz. However, existing VR-architectures generally produce application updates at a lower rate. Consequently, the display is not updated by the application every display frame. This causes a number of undesirable perceptual artifacts. We describe an architecture that provides a programmable display layer (PDL) in order to generate updated display frames. This replaces the default display behavior of repeating application frames until an update is available. We will show three benefits of the architecture typical to VR. First, smooth motion is provided by generating intermediate display frames by per-pixel depth-image warping using 3D motion fields. Smooth motion eliminates various perceptual artifacts due to judder. Second, we implement fine-grained latency reduction at the display frame level using a synchronized prediction of simulation objects and the viewpoint. This improves the average quality and consistency of latency reduction. Third, a crosstalk reduction algorithm for consecutive display frames is implemented, which improves the quality of stereoscopic images. To evaluate the architecture, we compare image quality and latency to that of a classic level-of-detail approach.

  14. Tracking cells in Life Cell Imaging videos using topological alignments.

    PubMed

Mosig, Axel; Jäger, Stefan; Wang, Chaofeng; Nath, Sumit; Ersoy, Ilker; Palaniappan, Kannappan; Chen, Su-Shing

    2009-07-16

    With the increasing availability of live cell imaging technology, tracking cells and other moving objects in live cell videos has become a major challenge for bioimage informatics. An inherent problem for most cell tracking algorithms is over- or under-segmentation of cells - many algorithms tend to recognize one cell as several cells or vice versa. We propose to approach this problem through so-called topological alignments, which we apply to address the problem of linking segmentations of two consecutive frames in the video sequence. Starting from the output of a conventional segmentation procedure, we align pairs of consecutive frames through assigning sets of segments in one frame to sets of segments in the next frame. We achieve this through finding maximum weighted solutions to a generalized "bipartite matching" between two hierarchies of segments, where we derive weights from relative overlap scores of convex hulls of sets of segments. For solving the matching task, we rely on an integer linear program. Practical experiments demonstrate that the matching task can be solved efficiently in practice, and that our method is both effective and useful for tracking cells in data sets derived from a so-called Large Scale Digital Cell Analysis System (LSDCAS). The source code of the implementation is available for download from http://www.picb.ac.cn/patterns/Software/topaln.
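A drastically simplified stand-in for the matching step can be sketched as a one-to-one maximum-weight assignment between segments of consecutive frames, solved by brute force. The real method matches *sets* of segments (to handle over- and under-segmentation) via an integer linear program over hierarchies of segments; the `overlap` matrix of pairwise relative-overlap scores is an assumption of this sketch:

```python
import numpy as np
from itertools import permutations

def match_segments(overlap):
    """One-to-one maximum-weight bipartite matching between n segments in
    frame t and n segments in frame t+1, maximizing total overlap score.
    Brute force over permutations; fine for small n, illustration only."""
    n = overlap.shape[0]
    best, best_perm = -np.inf, None
    for perm in permutations(range(n)):
        s = sum(overlap[i, perm[i]] for i in range(n))
        if s > best:
            best, best_perm = s, perm
    return list(best_perm)   # best_perm[i] = matched segment in frame t+1
```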

  15. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.

  16. High-speed plasma imaging: A lightning bolt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G.A.; Whiteson, D.O.

Using a gated intensified digital Kodak Ektapro camera system, the authors captured a lightning bolt at 1,000 frames per second, with 100-µs exposure time on each consecutive frame. As a thunderstorm approached while darkness descended (7:50 pm) on July 21, 1994, they photographed lightning bolts with an f/22 105-mm lens and 100% gain on the intensified camera. This 15-frame sequence shows a cloud-to-ground stroke at a distance of about 1.5 km, which has a series of stepped leaders propagating downwards, followed by the upward-propagating main return stroke.

  17. Registration of parametric dynamic F-18-FDG PET/CT breast images with parametric dynamic Gd-DTPA breast images

    NASA Astrophysics Data System (ADS)

    Magri, Alphonso; Krol, Andrzej; Lipson, Edward; Mandel, James; McGraw, Wendy; Lee, Wei; Tillapaugh-Fay, Gwen; Feiglin, David

    2009-02-01

This study was undertaken to register 3D parametric breast images derived from Gd-DTPA MR and F-18-FDG PET/CT dynamic image series. Nonlinear curve fitting (Levenberg-Marquardt algorithm) based on realistic two-compartment models was performed voxel-by-voxel separately for MR (Brix) and PET (Patlak). The PET dynamic series consisted of 50 frames of 1-minute duration. Each consecutive PET image was nonrigidly registered to the first frame using a finite element method and fiducial skin markers. The 12 post-contrast MR images were nonrigidly registered to the precontrast frame using a free-form deformation (FFD) method. Parametric MR images were registered to parametric PET images via CT using FFD because the first PET time frame was acquired immediately after the CT image on a PET/CT scanner and is considered registered to the CT image. We conclude that nonrigid registration of PET and MR parametric images using CT data acquired during the PET/CT scan and the FFD method resulted in their improved spatial coregistration. The success of this procedure was limited due to a relatively large target registration error, TRE = 15.1+/-7.7 mm, as compared to the spatial resolution of PET (6-7 mm), and swirling image artifacts created in MR parametric images by the FFD. Further refinement of nonrigid registration of PET and MR parametric images is necessary to enhance visualization and integration of complex diagnostic information provided by both modalities that will lead to improved diagnostic performance.

  18. Ambient-Light-Canceling Camera Using Subtraction of Frames

    NASA Technical Reports Server (NTRS)

    Morookian, John Michael

    2004-01-01

The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period.
Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and would be illuminated with only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values (see figure). To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient to both (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
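The frame-pair subtraction at the heart of the ALCC is straightforward; a minimal sketch (clipping at zero because noise or scene motion can make the difference negative):

```python
import numpy as np

def cancel_ambient(frame_led_on, frame_led_off):
    """Subtract the background-only (LED-off) frame from the
    signal-plus-background (LED-on) frame, leaving the LED signal alone
    under the assumption that the ambient light is unchanged between
    the two consecutive frame periods."""
    diff = frame_led_on.astype(float) - frame_led_off.astype(float)
    return np.clip(diff, 0.0, None)
```

The assumption of unchanged ambient light between the two frames is exactly why the abstract stresses acquiring the pair at several hundred frames per second.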

  19. Observation of a cavitation cloud in tissue using correlation between ultrafast ultrasound images.

    PubMed

    Prieur, Fabrice; Zorgani, Ali; Catheline, Stefan; Souchon, Rémi; Mestas, Jean-Louis; Lafond, Maxime; Lafon, Cyril

    2015-07-01

The local application of ultrasound is known to improve drug intake by tumors. Cavitating bubbles are one of the contributing effects. A setup in which two ultrasound transducers are placed confocally is used to generate cavitation in ex vivo tissue. As the transducers emit a series of short excitation bursts, the evolution of the cavitation activity is monitored using an ultrafast ultrasound imaging system. The frame rate of the system is several thousand images per second, which provides several tens of images between consecutive excitation bursts. Using the correlation between consecutive images for speckle tracking, a decorrelation of the imaging signal appears due to the creation, fast movement, and dissolution of the bubbles in the cavitation cloud. By analyzing this area of decorrelation, the cavitation cloud can be localized and the spatial extent of the cavitation activity characterized.
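The localization idea can be sketched as a blockwise decorrelation map between two consecutive frames: regions where speckle stays correlated score near zero, while regions disturbed by fast-moving bubbles score near one. The block size and the assumption that it evenly divides the image are simplifications of this sketch:

```python
import numpy as np

def decorrelation_map(img_a, img_b, block):
    """Blockwise (1 - normalized correlation) between two consecutive
    ultrafast frames; high values flag regions where the speckle pattern
    decorrelates, e.g. inside a cavitation cloud."""
    h, w = img_a.shape
    out = np.zeros((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            a = img_a[i:i + block, j:j + block].ravel()
            b = img_b[i:i + block, j:j + block].ravel()
            a = a - a.mean()
            b = b - b.mean()
            denom = np.sqrt((a @ a) * (b @ b))
            corr = (a @ b) / denom if denom > 0 else 0.0
            out[i // block, j // block] = 1.0 - corr
    return out
```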

  20. From image captioning to video summary using deep recurrent networks and unsupervised segmentation

    NASA Astrophysics Data System (ADS)

    Morosanu, Bogdan-Andrei; Lemnaru, Camelia

    2018-04-01

Automatic captioning systems based on recurrent neural networks have been tremendously successful at providing realistic natural language captions for complex and varied image data. We explore methods for adapting existing models trained on large image caption data sets to a similar problem, that of summarising videos using natural language descriptions and frame selection. These architectures create internal high-level representations of the input image that can be used to define probability distributions and distance metrics on these distributions. Specifically, we interpret each hidden unit inside a layer of the caption model as representing the un-normalised log probability of some unknown image feature of interest for the caption generation process. We can then apply well-understood statistical divergence measures to express the difference between images and create an unsupervised segmentation of video frames, classifying consecutive images of low divergence as belonging to the same context, and those of high divergence as belonging to different contexts. To provide a final summary of the video, we provide a group of selected frames and a text description accompanying them, allowing a user to perform a quick exploration of large unlabeled video databases.
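The segmentation step described above can be sketched directly: treat each frame's hidden-layer activations as un-normalised log probabilities (as in the abstract), softmax them, and cut the video wherever the divergence between consecutive frames is large. The symmetric KL divergence and the threshold are assumptions of this sketch; the paper leaves the exact divergence measure open:

```python
import numpy as np

def segment_contexts(features, thresh):
    """Unsupervised context segmentation of a frame sequence: `features[t]`
    is the hidden-activation vector for frame t, interpreted as
    un-normalised log probabilities. A context boundary is declared where
    the symmetric KL divergence between consecutive frames exceeds thresh."""
    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()
    p = [softmax(f) for f in features]
    boundaries = []
    for t in range(1, len(p)):
        kl = np.sum(p[t - 1] * np.log(p[t - 1] / p[t])) \
           + np.sum(p[t] * np.log(p[t] / p[t - 1]))
        if kl > thresh:
            boundaries.append(t)        # frame t starts a new context
    return boundaries
```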

  1. Two-Stream Transformer Networks for Video-based Face Alignment.

    PubMed

    Liu, Hao; Lu, Jiwen; Feng, Jianjiang; Zhou, Jie

    2017-08-01

In this paper, we propose a two-stream transformer networks (TSTN) approach for video-based face alignment. Unlike conventional image-based face alignment approaches, which cannot explicitly model the temporal dependency in videos, and motivated by the fact that consistent movements of facial landmarks usually occur across consecutive frames, our TSTN aims to capture the complementary information of both the spatial appearance on still frames and the temporal consistency information across frames. To achieve this, we develop a two-stream architecture, which decomposes the video-based face alignment into spatial and temporal streams accordingly. Specifically, the spatial stream aims to transform the facial image to the landmark positions by preserving the holistic facial shape structure. Accordingly, the temporal stream encodes the video input as active appearance codes, where the temporal consistency information across frames is captured to help shape refinements. Experimental results on the benchmarking video-based face alignment datasets show very competitive performance of our method in comparison to the state of the art.

  2. Statistical Deconvolution for Superresolution Fluorescence Microscopy

    PubMed Central

    Mukamel, Eran A.; Babcock, Hazen; Zhuang, Xiaowei

    2012-01-01

    Superresolution microscopy techniques based on the sequential activation of fluorophores can achieve image resolution of ∼10 nm but require a sparse distribution of simultaneously activated fluorophores in the field of view. Image analysis procedures for this approach typically discard data from crowded molecules with overlapping images, wasting valuable image information that is only partly degraded by overlap. A data analysis method that exploits all available fluorescence data, regardless of overlap, could increase the number of molecules processed per frame and thereby accelerate superresolution imaging speed, enabling the study of fast, dynamic biological processes. Here, we present a computational method, referred to as deconvolution-STORM (deconSTORM), which uses iterative image deconvolution in place of single- or multiemitter localization to estimate the sample. DeconSTORM approximates the maximum likelihood sample estimate under a realistic statistical model of fluorescence microscopy movies comprising numerous frames. The model incorporates Poisson-distributed photon-detection noise, the sparse spatial distribution of activated fluorophores, and temporal correlations between consecutive movie frames arising from intermittent fluorophore activation. We first quantitatively validated this approach with simulated fluorescence data and showed that deconSTORM accurately estimates superresolution images even at high densities of activated fluorophores where analysis by single- or multiemitter localization methods fails. We then applied the method to experimental data of cellular structures and demonstrated that deconSTORM enables an approximately fivefold or greater increase in imaging speed by allowing a higher density of activated fluorophores/frame. PMID:22677393
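    DeconSTORM modifies iterative image deconvolution with sparsity priors and the temporal correlations between consecutive movie frames; the following is only the plain Richardson-Lucy core that such methods build on (the maximum-likelihood iteration for Poisson noise), sketched in numpy with FFT-based circular convolution. The grid size and PSF are illustrative assumptions.

```python
import numpy as np

def fftconv(x, otf):
    # circular convolution of an image with a PSF given its transfer function
    return np.real(np.fft.ifft2(np.fft.fft2(x) * otf))

def richardson_lucy(data, psf, n_iter=50, eps=1e-12):
    """Plain Richardson-Lucy deconvolution: the Poisson maximum-likelihood
    iterate estimate <- estimate * corr(data / conv(estimate, psf), psf).
    The psf must be centred on the grid origin (apply np.fft.ifftshift)."""
    otf = np.fft.fft2(psf)
    estimate = np.full_like(data, data.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconv(estimate, otf)
        ratio = data / np.maximum(blurred, eps)
        # correlation with the PSF = convolution with the mirrored PSF
        estimate *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return estimate
```

With noiseless data and a normalised PSF, the multiplicative updates stay non-negative and sharpen blurred point sources back toward their true locations.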

  3. Cardiac phase detection in intravascular ultrasound images

    NASA Astrophysics Data System (ADS)

    Matsumoto, Monica M. S.; Lemos, Pedro Alves; Yoneyama, Takashi; Furuie, Sergio Shiguemi

    2008-03-01

    Image gating applies to imaging modalities that involve quasi-periodically moving organs; during intravascular ultrasound (IVUS) examination, cardiac movement interferes with acquisition. In this paper, we aim to obtain gated IVUS images based on the images themselves. This would allow the reconstruction of 3D coronaries with temporal accuracy for any cardiac phase, an advantage over ECG-gated acquisition, which captures a single phase. It is also important for retrospective studies, as existing IVUS databases contain no additional reference signals (ECG). From the images, we calculated signals based on average intensity (AI) and, from consecutive frames, average intensity difference (AID), cross-correlation coefficient (CC) and mutual information (MI). The process includes a wavelet-based filtering step and ascending zero-crossing detection to obtain the phase information. First, we tested 90 simulated sequences with 1025 frames each. Our method achieved a true-positive rate above 95.0% and a false-positive rate below 2.3% for all signals. Afterwards, we tested a real examination with 897 frames, using the ECG as the gold standard, and achieved 97.4% true positives (CC and MI) and 2.5% false positives. In future work, the methodology should be tested on a wider range of IVUS examinations.
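    The per-pair signals are straightforward to compute from consecutive frames. A minimal numpy sketch (the number of histogram bins for MI is an arbitrary choice, not taken from the paper):

```python
import numpy as np

def gating_signals(frames):
    """For each pair of consecutive frames, compute the average intensity
    difference (AID), cross-correlation coefficient (CC), and mutual
    information (MI) of the quantised intensities."""
    aid, cc, mi = [], [], []
    for a, b in zip(frames, frames[1:]):
        x, y = a.ravel().astype(float), b.ravel().astype(float)
        aid.append(np.mean(np.abs(x - y)))
        cc.append(np.corrcoef(x, y)[0, 1])
        # MI from a joint histogram of quantised intensities
        h, _, _ = np.histogram2d(x, y, bins=16)
        p = h / h.sum()
        px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
        nz = p > 0
        mi.append(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))
    return np.array(aid), np.array(cc), np.array(mi)
```

Each signal is then filtered and its zero crossings detected to recover the quasi-periodic cardiac phase.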

  4. Compact and light-weight automated semen analysis platform using lensfree on-chip microscopy.

    PubMed

    Su, Ting-Wei; Erlinger, Anthony; Tseng, Derek; Ozcan, Aydogan

    2010-10-01

    We demonstrate a compact and lightweight platform to conduct automated semen analysis using a lensfree on-chip microscope. This holographic on-chip imaging platform weighs ∼46 g, measures ∼4.2 × 4.2 × 5.8 cm, and does not require any lenses, lasers or other bulky optical components to achieve phase and amplitude imaging of sperm over a ∼24 mm(2) field-of-view with an effective numerical aperture of ∼0.2. Using this wide-field lensfree on-chip microscope, semen samples are imaged for ∼10 s, capturing a total of ∼20 holographic frames. Digital subtraction of these consecutive lensfree frames, followed by appropriate processing of the reconstructed images, enables automated quantification of the count, the speed and the dynamic trajectories of motile sperm, while summation of the same frames permits counting of immotile sperm. Such a compact and lightweight automated semen analysis platform running on a wide-field lensfree on-chip microscope could be especially important for fertility clinics, personal male fertility tests, and field use in veterinary medicine, such as in stud farming and animal breeding applications.
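    The subtraction/summation idea can be sketched in a few numpy lines. This is a schematic illustration (threshold and array shapes are assumptions; the actual platform operates on reconstructed holographic images, not raw intensity maps):

```python
import numpy as np

def analyse_frames(frames, diff_thresh):
    """Differencing consecutive frames highlights moving objects
    (motile cells); summing the frames reinforces stationary ones."""
    stack = np.stack([f.astype(float) for f in frames])
    # largest absolute change between any pair of consecutive frames
    motion_map = np.abs(np.diff(stack, axis=0)).max(axis=0)
    # summation image: static objects accumulate, movers smear out
    static_map = stack.sum(axis=0)
    moving_mask = motion_map > diff_thresh
    return moving_mask, static_map
```

Connected components in `moving_mask` would then be linked across frames to recover counts, speeds, and trajectories.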

  5. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  6. Controlling the Display of Capsule Endoscopy Video for Diagnostic Assistance

    NASA Astrophysics Data System (ADS)

    Vu, Hai; Echigo, Tomio; Sagawa, Ryusuke; Yagi, Keiko; Shiba, Masatsugu; Higuchi, Kazuhide; Arakawa, Tetsuo; Yagi, Yasushi

    Interpretations by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually require 45 to 120 minutes of extreme concentration. This paper describes a novel method to reduce diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays the original images with no skipping of frames. The sequence is played at a high frame rate in stable regions to save time; then, in regions with abrupt changes, the speed is decreased so that suspicious findings can be ascertained more conveniently. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacements, is extracted. A decision tree uses these features to classify the states of the image acquisitions, and for each classified state the delay time between frames is calculated by parametric functions. A scheme selecting the optimal parameter set, determined from assessments by physicians, is deployed. Experiments involved clinical evaluations to investigate the effectiveness of this method compared to a standard view using an existing system. Results from logged-action analysis show that, compared with the existing system, the proposed method reduced diagnostic time to around 32.5 minutes per full sequence while the number of abnormalities found was similar. Physicians also needed less effort because of the system's efficient operability. The results of the evaluations should convince physicians that they can safely use this method and obtain reduced diagnostic times.
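    A delay-time rule of the general kind described, a parametric function of an inter-frame similarity score, might look like the following sketch. The functional form and the constants are assumptions for illustration, not the paper's fitted parameters:

```python
def frame_delay(similarity, d_min=0.01, d_max=0.25, gamma=2.0):
    """Display delay (seconds) for one frame, given the colour-similarity
    score (0..1) with the previous frame: stable stretches play fast,
    abrupt changes slow the playback down."""
    s = min(max(similarity, 0.0), 1.0)
    return d_min + (d_max - d_min) * (1.0 - s) ** gamma
```

In the actual system, a decision tree first classifies the acquisition state and a separate parametric function (with physician-tuned parameters) is applied per state.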

  7. Synchrotron radiation microtomography of Taylor bubbles in capillary two-phase flow

    NASA Astrophysics Data System (ADS)

    Boden, Stephan; dos Santos Rolo, Tomy; Baumbach, Tilo; Hampel, Uwe

    2014-07-01

    We report on a study to measure the three-dimensional shape of Taylor bubbles in capillaries using synchrotron radiation in conjunction with ultrafast radiographic imaging. Moving Taylor bubbles in 2-mm round and square capillaries were radiographically scanned with an ultrahigh frame rate of up to 36,000 fps and 5.6-µm pixel separation. Consecutive images were processed to yield 2D transmission radiographs of high contrast-to-noise ratio, and 3D tomographic image reconstruction revealed the 3D bubble shape. The results provide a reference database for the development of sophisticated interface-resolving CFD computations.

  8. Adaptive foveated single-pixel imaging with dynamic supersampling

    PubMed Central

    Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.

    2017-01-01

    In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
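    The underlying measurement model, correlating the scene against a set of patterns and inverting the measurements, can be illustrated with orthogonal Hadamard patterns, a common choice in single-pixel imaging. The pattern set and sizes here are illustrative, and this sketch omits the foveated, adaptive aspects of the paper:

```python
import numpy as np

def hadamard(n):
    """Sylvester Hadamard matrix of order n (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def single_pixel_capture(scene, patterns):
    # each measurement is the correlation (inner product) of the scene
    # with one displayed pattern
    return patterns @ scene.ravel()

def reconstruct(measurements, patterns, shape):
    # for orthogonal patterns, the inverse is a scaled transpose
    n = patterns.shape[0]
    return (patterns.T @ measurements / n).reshape(shape)
```

Fully sampling an N-pixel image requires N such correlation measurements, which is exactly the frame-rate bottleneck that the foveated, adaptively supersampled strategy in the paper mitigates.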

  9. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. 
The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
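    The depth-scaled template matching step can be sketched as follows. This is a brute-force normalised cross-correlation in numpy with a nearest-neighbour template rescale, a simplified stand-in rather than the flight code:

```python
import numpy as np

def rescale_nn(img, factor):
    """Nearest-neighbour rescale of a 2-D template by a scalar factor."""
    h, w = img.shape
    nh, nw = max(1, int(round(h * factor))), max(1, int(round(w * factor)))
    rows = np.minimum((np.arange(nh) / factor).astype(int), h - 1)
    cols = np.minimum((np.arange(nw) / factor).astype(int), w - 1)
    return img[np.ix_(rows, cols)]

def ncc_match(image, template):
    """Location of the best match of template in image by normalised
    cross-correlation over every window position."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum()) + 1e-12
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            score = (wz * t).sum() / ((np.sqrt((wz * wz).sum()) + 1e-12) * tn)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best

def depth_scaled_match(image, template, depth_template, depth_image):
    # scale the template by the ratio of the template depth to the
    # depth of pixels in the new image, then correlate
    scaled = rescale_nn(template, depth_template / depth_image)
    return ncc_match(image, scaled)
```

Scaling the template before correlating compensates for the change in apparent feature size as the rover approaches the target.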

  10. Proposed patient motion monitoring system using feature point tracking with a web camera.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of the marker was used by the program to determine the marker positions in all frames. The software generates a text file containing the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. The proposed patient motion monitoring system, based on a web camera, is simple and convenient to set up and increases the safety of treatment delivery.
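    The paper uses OpenCV's pyramidal Lucas-Kanade implementation (`cv2.calcOpticalFlowPyrLK`). For illustration, a single-level Lucas-Kanade step for one feature point can be written directly in numpy; this sketch omits the pyramid and iterative refinement, and the window size is an assumption:

```python
import numpy as np

def lucas_kanade(frame1, frame2, point, win=7):
    """Single-level Lucas-Kanade: solve the least-squares normal equations
    for the displacement of one feature point between consecutive frames.
    point is (row, col); returns the estimated (dx, dy) displacement."""
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Iy, Ix = np.gradient(f1)          # spatial gradients (rows, cols)
    It = f2 - f1                      # temporal gradient
    r, c = point
    h = win // 2
    sl = (slice(r - h, r + h + 1), slice(c - h, c + h + 1))
    ix, iy, it = Ix[sl].ravel(), Iy[sl].ravel(), It[sl].ravel()
    A = np.stack([ix, iy], axis=1)
    v, *_ = np.linalg.lstsq(A, -it, rcond=None)
    return v  # (dx, dy)
```

Each monitored feature point would be updated this way per frame pair, with a threshold on the displacement magnitude triggering the warning state.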

  11. Moving object localization using optical flow for pedestrian detection from a moving vehicle.

    PubMed

    Hariyono, Joko; Hoang, Van-Dung; Jo, Kang-Hyun

    2014-01-01

    This paper presents a pedestrian detection method from a moving vehicle using optical flows and a histogram of oriented gradients (HOG). A moving object is extracted from the relative motion by segmenting the region representing the same optical flows after compensating for the egomotion of the camera. To obtain the optical flow, two consecutive images are divided into 14 × 14 pixel grid cells; each cell is then tracked from the current frame to find the corresponding cell in the next frame. Using at least three corresponding cells, an affine transformation is estimated from the cell correspondences in the consecutive images, so that consistent optical flows are extracted. The regions of a moving object are detected as transformed objects that differ from the previously registered background. Morphological processing is applied to obtain the candidate human regions. To recognize the object, HOG features are extracted on the candidate region and classified using a linear support vector machine (SVM); the HOG feature vectors are used as input to the linear SVM, which classifies each candidate as pedestrian or non-pedestrian. The proposed method was tested in a moving vehicle and also confirmed through experiments using a pedestrian dataset. It shows a significant improvement over the original HOG on the ETHZ pedestrian dataset.
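    Estimating the affine transformation from at least three cell correspondences reduces to a linear least-squares problem. A minimal numpy sketch (the point coordinates are illustrative):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src points to dst points.
    Requires at least three non-collinear correspondences.
    Returns a 2x3 matrix M with dst ≈ M[:, :2] @ [x, y] + M[:, 2]."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T

def apply_affine(M, pts):
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

Applying the estimated transform to the previous frame compensates the camera egomotion, so residual flow isolates independently moving objects such as pedestrians.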

  12. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images in software. For a nominal image size of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow more complex algorithms to be performed, both in hardware and software.

  13. Robust Small Target Co-Detection from Airborne Infrared Image Sequences.

    PubMed

    Gao, Jingli; Wen, Chenglin; Liu, Meiqin

    2017-09-29

    In this paper, a novel infrared target co-detection model combining the self-correlation features of backgrounds and the commonality features of targets in the spatio-temporal domain is proposed to detect small targets in a sequence of infrared images with complex backgrounds. First, a dense target extraction model based on nonlinear weights is proposed, which suppresses image backgrounds and enhances small targets better than singular-value weighting. Second, a sparse target extraction model based on entry-wise weighted robust principal component analysis is proposed. The entry-wise weight adaptively incorporates a structural prior in terms of local weighted entropy; thus, it can extract real targets accurately and suppress background clutter efficiently. Finally, the commonality of targets in the spatio-temporal domain is used to construct a target refinement model for false-alarm suppression and target confirmation. Since real targets appear in both the dense and sparse reconstruction maps of a single frame, and form trajectories after tracklet association across consecutive frames, the location correlation of the dense and sparse reconstruction maps for a single frame, together with tracklet association of the location correlation maps over successive frames, provides a strong ability to discriminate between small targets and background clutter. Experimental results demonstrate that the proposed small target co-detection method can not only suppress background clutter effectively, but also detect targets accurately even in the presence of target-like interference.

  14. Vision-based object detection and recognition system for intelligent vehicles

    NASA Astrophysics Data System (ADS)

    Ran, Bin; Liu, Henry X.; Martono, Wilfung

    1999-01-01

    Recently, a proactive crash mitigation system was proposed to enhance the crash avoidance and survivability of Intelligent Vehicles. An accurate object detection and recognition system is a prerequisite for a proactive crash mitigation system, as system component deployment algorithms rely on accurate hazard detection, recognition, and tracking information. In this paper, we present a vision-based approach to detect and recognize vehicles and traffic signs, obtain their information, and track multiple objects using a sequence of color images taken from a moving vehicle. The entire system consists of two sub-systems: a vehicle detection and recognition sub-system and a traffic sign detection and recognition sub-system. Both sub-systems consist of four models: an object detection model, an object recognition model, an object information model, and an object tracking model. In order to detect potential objects on the road, several features of the objects are investigated, including the symmetrical shape and aspect ratio of a vehicle and the color and shape information of the signs. A two-layer neural network is trained to recognize different types of vehicles, and a parameterized traffic sign model is established in the process of recognizing a sign. Tracking is accomplished by combining the analysis of single image frames with the analysis of consecutive image frames; the single-frame analysis is performed every ten full-size images. The information model obtains the information related to the object, such as the time to collision for an object vehicle and the relative distance from the traffic signs. Experimental results demonstrated robust and accurate real-time object detection and recognition over thousands of image frames.

  15. Simple multispectral imaging approach for determining the transfer of explosive residues in consecutive fingerprints.

    PubMed

    Lees, Heidi; Zapata, Félix; Vaher, Merike; García-Ruiz, Carmen

    2018-07-01

    This novel investigation focused on studying the transfer of explosive residues (TNT, HMTD, PETN, ANFO, dynamite, black powder, NH4NO3, KNO3, NaClO3) in ten consecutive fingerprints to two different surfaces - cotton fabric and polycarbonate plastic - by using multispectral imaging (MSI). Imaging was performed employing a reflex camera in a purpose-built photo studio. Images were processed in MATLAB to select the most discriminating frame - the one that provided the sharpest contrast between the explosive and the material in the red-green-blue (RGB) visible region. The amount of explosive residue transferred in each fingerprint was determined as the number of pixels containing explosive particles. First, the pattern of PETN transfer by ten different persons in successive fingerprints was studied. No significant differences in the pattern of PETN transfer between subjects were observed, which was also confirmed by multivariate analysis of variance (MANOVA). Then, the transfer of traces of the nine explosives above in ten consecutive fingerprints to cotton fabric and polycarbonate plastic was investigated. The results demonstrated that the amount of explosive residue deposited in successive fingerprints tended to follow a power-law or exponential decrease, with the exception of the inorganic salts (NH4NO3, KNO3, NaClO3) and ANFO (which consists of 90% NH4NO3). Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Automatic detection of magnetic flux emergings in the solar atmosphere from full-disk magnetogram sequences.

    PubMed

    Fu, Gang; Shih, Frank Y; Wang, Haimin

    2008-11-01

    In this paper, we present a novel method to detect Emerging Flux Regions (EFRs) in the solar atmosphere from consecutive full-disk Michelson Doppler Imager (MDI) magnetogram sequences. To our knowledge, this is the first technique developed for automatically detecting EFRs. The method includes several steps. First, the projection distortion on the MDI magnetograms is corrected. Second, bipolar regions are extracted by applying multiscale circular harmonic filters. Third, the extracted bipolar regions are traced through consecutive MDI frames by a Kalman filter as candidate EFRs. Fourth, properties such as the positive and negative magnetic fluxes and the distance between the two polarities are measured in each frame. Finally, a feature vector is constructed for each bipolar region from the measured properties, and a Support Vector Machine (SVM) classifier is applied to distinguish EFRs from other regions. Experimental results show that the detection rate of EFRs is 96.4% and of non-EFRs is 98.0%, with a false alarm rate of 25.7%, based on all the available MDI magnetograms from 2001 and 2002.
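    Tracing a bipolar region's centroid through consecutive frames with a Kalman filter can be sketched with a constant-velocity model. The noise parameters below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter tracking a 2-D centroid across
    frames. State: [x, y, vx, vy]; only (x, y) is observed per frame."""
    F = np.eye(4); F[0, 2] = F[1, 3] = dt          # state transition
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0  # observation model
    Q = q * np.eye(4)                              # process noise
    R = r * np.eye(2)                              # measurement noise
    x = np.array([*measurements[0], 0.0, 0.0])
    P = np.eye(4)
    track = [x[:2].copy()]
    for z in measurements[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new centroid measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        track.append(x[:2].copy())
    return np.array(track)
```

The smoothed track associates each candidate bipolar region with itself across frames, after which per-frame properties can be accumulated into the feature vector for the SVM.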

  17. Video-mosaicking of in vivo reflectance confocal microscopy images for noninvasive examination of skin lesion (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kose, Kivanc; Gou, Mengran; Yelamos, Oriol; Cordova, Miguel A.; Rossi, Anthony; Nehal, Kishwer S.; Camps, Octavia I.; Dy, Jennifer G.; Brooks, Dana H.; Rajadhyaksha, Milind

    2017-02-01

    In this report we describe a computer-vision-based pipeline to convert in-vivo reflectance confocal microscopy (RCM) videos collected with a handheld system into large field of view (FOV) mosaics. For many applications, such as imaging of hard-to-access lesions, intraoperative assessment of Mohs margins, or delineation of lesion margins beyond clinical borders, raster-scan-based mosaicing techniques have clinically significant limitations. In such cases, clinicians often capture RCM videos by freely moving a handheld microscope over the area of interest, but the resulting videos lose large-scale spatial relationships. Videomosaicking is a standard computational imaging technique to register and stitch together consecutive frames of videos into large-FOV, high-resolution mosaics. However, mosaicing RCM videos collected in vivo poses unique challenges: (i) tissue may deform or warp due to physical contact with the microscope objective lens; (ii) discontinuities or "jumps" between consecutive images and motion blur artifacts may occur due to manual operation of the microscope; and (iii) optical sectioning and resolution may vary between consecutive images due to scattering and aberrations induced by changes in imaging depth and tissue morphology. We addressed these challenges by adapting or developing new algorithmic methods for videomosaicking, specifically by modeling non-rigid deformations, followed by automatically detecting discontinuities (cut locations) and, finally, applying a data-driven image stitching approach that fully preserves resolution and tissue morphologic detail without imposing arbitrary pre-defined boundaries. We will present example mosaics obtained by clinical imaging of both melanoma and non-melanoma skin cancers. The ability to combine freehand mosaicing for handheld microscopes with preserved cellular resolution will have high-impact applications in diverse clinical settings, including low-resource healthcare systems.
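    As a much simpler building block than the non-rigid registration described above, pure translation between consecutive frames can be estimated by phase correlation. A numpy sketch, valid only for circular shifts and illustrative frame sizes:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer shift (dy, dx) such that
    np.roll(b, (dy, dx), axis=(0, 1)) aligns frame b to frame a."""
    Fa = np.fft.fft2(a)
    Fb = np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12     # keep only phase information
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the peak location into the signed shift range
    h, w = a.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Chaining such pairwise shifts gives a rigid mosaic; the paper's contribution is precisely to go beyond this, handling tissue deformation, cuts, and varying optical sectioning.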

  18. Optical cell tracking analysis using a straight-forward approach to minimize processing time for high frame rate data

    NASA Astrophysics Data System (ADS)

    Seeto, Wen Jun; Lipke, Elizabeth Ann

    2016-03-01

    Tracking of rolling cells in in vitro experiments is now commonly performed using customized computer programs. In most cases, two critical challenges continue to limit the analysis of cell rolling data: long computation times due to the complexity of tracking algorithms, and difficulty in accurately correlating a given cell with itself from one frame to the next, typically due to errors caused by cells that come into close proximity or contact with each other. In this paper, we have developed a sophisticated, yet simple and highly effective, rolling cell tracking system to address these two critical problems. This optical cell tracking analysis (OCTA) system first employs ImageJ for cell identification in each frame of a cell rolling video. A custom MATLAB code uses the geometric and positional information of all cells as the primary parameters for matching each individual cell with itself between consecutive frames, avoiding errors when tracking cells that come within close proximity of one another. Once the cells are matched, rolling velocity can be obtained for further analysis. The use of ImageJ for cell identification eliminates the need for high-level MATLAB image processing knowledge; as a result, only fundamental MATLAB syntax is necessary for cell matching. OCTA has been implemented in the tracking of endothelial colony forming cell (ECFC) rolling under shear. The processing time needed to obtain tracked cell data from a 2 min ECFC rolling video recorded at 70 frames per second, with a total of over 8000 frames, is less than 6 min using a computer with an Intel® Core™ i7 CPU 2.80 GHz (8 CPUs). This cell tracking system benefits cell rolling analysis by substantially reducing the time required for post-acquisition data processing of high-frame-rate video recordings and by preventing tracking errors when individual cells come in close proximity to one another.
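    The frame-to-frame matching idea, pairing each cell with itself using geometric and positional information, can be sketched with a greedy cost-based assignment. The cost weights and cell descriptors here are illustrative, and the original system is implemented in MATLAB, not Python:

```python
import numpy as np

def match_cells(prev_cells, curr_cells, max_dist=20.0, w_area=0.05):
    """Greedy one-to-one matching of cells between consecutive frames
    using centroid distance plus an area-difference penalty.
    Each cell is (x, y, area); returns a list of (prev_idx, curr_idx)."""
    costs = []
    for i, (px, py, pa) in enumerate(prev_cells):
        for j, (cx, cy, ca) in enumerate(curr_cells):
            d = np.hypot(px - cx, py - cy)
            if d <= max_dist:           # gate out implausible jumps
                costs.append((d + w_area * abs(pa - ca), i, j))
    costs.sort()
    used_p, used_c, pairs = set(), set(), []
    for cost, i, j in costs:
        if i not in used_p and j not in used_c:
            pairs.append((i, j))
            used_p.add(i)
            used_c.add(j)
    return pairs
```

Including a shape term in the cost is what lets two nearby cells be kept apart: even when their centroids are close, the one with the more similar geometry wins the match.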

  19. Camera Image Transformation and Registration for Safe Spacecraft Landing and Hazard Avoidance

    NASA Technical Reports Server (NTRS)

    Jones, Brandon M.

    2005-01-01

    Inherent geographical hazards of Martian terrain may impede a safe landing for science exploration spacecraft. Surface visualization software for hazard detection and avoidance may accordingly be applied in vehicles such as the Mars Exploration Rover (MER) to enable an autonomous, intelligent descent upon entering the planetary atmosphere. The focus of this project is to develop an image transformation algorithm for coordinate system matching between consecutive frames of terrain imagery taken throughout descent. The methodology involves integrating computer vision and graphics techniques, including affine transformation and projective geometry of an object, with the intrinsic parameters governing spacecraft dynamic motion and camera calibration.
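    Coordinate-system matching between consecutive frames ultimately requires resampling one frame through an affine map. A minimal nearest-neighbour warp in numpy (the transform matrices are illustrative, not the project's algorithm):

```python
import numpy as np

def warp_affine_nn(image, M, out_shape):
    """Nearest-neighbour affine warp. M is a 2x3 matrix mapping output
    pixel coordinates [x, y, 1] to source coordinates; pixels mapped
    outside the source image are filled with zero."""
    H, W = out_shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(H * W)])
    src = M @ coords
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros(out_shape, dtype=image.dtype)
    flat = out.reshape(-1)
    flat[valid] = image[sy[valid], sx[valid]]
    return out
```

Warping the later descent frame into the earlier frame's coordinates lets the same terrain hazards be compared pixel-to-pixel despite the spacecraft's motion.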

  20. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frames. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded for imaging experiments and some television programs. To increase the ultrahigh-speed capture time, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, sequentially switching the recording period between them. This increased the recording capacity to 288 images, a factor-of-two increase over the conventional ultrahigh-speed camera. One problem was that the beam splitter halved the incident light reaching each CCD. To improve the light sensitivity, we developed a microlens array for the ultrahigh-speed CCDs. We simulated the operation of the microlens array to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by approximately a factor of two. By combining the beam splitter with the microlens array, it was possible to build an ultrahigh-speed color video camera with 288 frame memories and no loss of light sensitivity.

  1. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different as the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we use in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. 
We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of image points to 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in its standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short-baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames. Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes feature detection, description, and matching much more time-consuming than for short-baseline images and limits the usage to low frame rate sequences when operating in real-time. 
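The two-parameter model maps an image radius r to the ray angle θ = ar/(1 + br²); a minimal sketch of the corresponding point-to-ray mapping follows, with illustrative (not calibrated) values of a and b:

```python
import math

def radius_to_angle(r, a, b):
    """Micusik two-parameter fish-eye model: theta = a*r / (1 + b*r^2)."""
    return a * r / (1.0 + b * r * r)

def pixel_to_ray(x, y, cx, cy, a, b):
    """Map an image point to a unit 3D ray in the camera coordinate frame.

    (cx, cy) is the image center of the circular view; a and b are the
    calibrated model coefficients (placeholder values are used in tests).
    """
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return (0.0, 0.0, 1.0)           # point on the optical axis
    theta = radius_to_angle(r, a, b)     # angle w.r.t. the optical axis
    s = math.sin(theta)
    return (s * dx / r, s * dy / r, math.cos(theta))
```

The returned ray is a unit vector, so the angle between it and the optical axis (0, 0, 1) is exactly θ.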
Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches that is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling to draw 5-tuples from the list of tentative matches ordered in ascending order of descriptor distance, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. It has been suggested to generate models by randomized sampling as in RANSAC but to use soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike in that work, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling of RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs at 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, camera trajectory estimates have a wide range of uses. We have also introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. 
The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and of object recognition for methods which use a geometric constraint, e.g. the ground plane, as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with a stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.

  2. Evaluation of a HDR image sensor with logarithmic response for mobile video-based applications

    NASA Astrophysics Data System (ADS)

    Tektonidis, Marco; Pietrzak, Mateusz; Monnin, David

    2017-10-01

    The performance of mobile video-based applications using conventional LDR (Low Dynamic Range) image sensors depends strongly on the illumination conditions. As an alternative, HDR (High Dynamic Range) image sensors with logarithmic response are capable of acquiring illumination-invariant HDR images in a single shot. We have implemented a complete image processing framework for an HDR sensor, including preprocessing methods (nonuniformity correction (NUC), cross-talk correction (CTC), and demosaicing) as well as tone mapping (TM). We have evaluated the HDR sensor for video-based applications w.r.t. the display of images and w.r.t. image analysis techniques. Regarding display, we investigated the image intensity statistics over time, and regarding image analysis, we assessed the number of feature correspondences between consecutive frames of temporal image sequences. For the evaluation we used HDR image data recorded from a vehicle on outdoor or combined outdoor/indoor itineraries, and we performed a comparison with corresponding conventional LDR image data.
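As a rough illustration of why a logarithmic response yields illumination-invariant images, consider an idealized response p = a + b·log10(L); the constants below are placeholders, not those of the evaluated sensor:

```python
import math

def log_response(luminance, a=50.0, b=40.0):
    """Idealized logarithmic pixel response.

    a and b are illustrative sensor constants only; a real sensor also
    needs the nonuniformity correction described in the abstract.
    """
    return a + b * math.log10(luminance)

# A global illumination change L -> k*L shifts every pixel by the same
# constant b*log10(k), so local contrast (pixel differences) is preserved,
# which is what keeps feature correspondences stable across lighting.
delta = log_response(2000.0) - log_response(1000.0)   # = b * log10(2)
```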

  3. Violent Interaction Detection in Video Based on Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhou, Peipei; Ding, Qinghai; Luo, Haibo; Hou, Xinglin

    2017-06-01

    Violent interaction detection is of vital importance in video surveillance scenarios such as railway stations, prisons, or psychiatric centres. Existing vision-based methods are mainly based on hand-crafted features, such as statistical features between motion regions, leading to poor adaptability to other datasets. Inspired by the development of convolutional networks for common activity recognition, we construct a FightNet to represent complicated visual violent interactions. In this paper, a new input modality, the image acceleration field, is proposed to better extract motion attributes. Firstly, each video is decomposed into RGB image frames. Secondly, the optical flow field is computed from consecutive frames, and the acceleration field is obtained from the optical flow field. Thirdly, FightNet is trained with three kinds of input modalities, i.e., RGB images for the spatial network, and optical flow images and acceleration images for the temporal networks. By fusing the results from the different inputs, we conclude whether a video contains a violent event or not. To provide researchers a common ground for comparison, we have collected a violent interaction dataset (VID) containing 2314 videos, with 1077 fight videos and 1237 non-fight videos. By comparison with other algorithms, experimental results demonstrate that the proposed model for violent interaction detection shows higher accuracy and better robustness.
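The acceleration-field construction can be sketched as follows; the flow fields themselves would come from an optical-flow method (not shown), and this is a minimal illustration rather than the authors' implementation:

```python
import numpy as np

def acceleration_field(flow_t, flow_t1):
    """Per-pixel acceleration as the temporal difference of two consecutive
    optical-flow fields, each of shape (H, W, 2) holding (u, v)."""
    return flow_t1 - flow_t

def to_image(field):
    """Magnitude image of a 2-channel motion field, scaled to [0, 255],
    in the spirit of the images fed to a temporal network."""
    mag = np.linalg.norm(field, axis=-1)
    peak = mag.max()
    return np.zeros_like(mag) if peak == 0 else mag / peak * 255.0
```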

  4. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    PubMed

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which can be modeled in a dynamic textures (DT) framework. We first assume that the image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of the image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between the sparse codes of adjacent frames in the time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on the transition matrix and the dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both the sparse properties and the temporal correlations of consecutive video frames. Moreover, the learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. In particular, it performs significantly better in DT synthesis and recognition on heavily corrupted data.
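The transition-learning step can be sketched with plain least squares on the state vectors; the dictionary learning and the stability constraints of the full JVDL are omitted here:

```python
import numpy as np

def fit_transition(states):
    """Least-squares linear transition matrix between consecutive states.

    states: (k, T) array whose columns are per-frame state vectors
    (in JVDL these would be sparse codes over a learned dictionary).
    Returns T_hat minimizing ||states[:, 1:] - T_hat @ states[:, :-1]||_F.
    """
    S0, S1 = states[:, :-1], states[:, 1:]
    return S1 @ np.linalg.pinv(S0)
```

On a sequence that truly follows s(t+1) = T s(t), the fit recovers T exactly (up to floating point), as the test below checks.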

  5. Generation of chemical movies: FT-IR spectroscopic imaging of segmented flows.

    PubMed

    Chan, K L Andrew; Niu, X; deMello, A J; Kazarian, S G

    2011-05-01

    We have previously demonstrated that FT-IR spectroscopic imaging can be used as a powerful, label-free detection method for studying laminar flows. However, to date, the speed of image acquisition has been too slow for the efficient detection of moving droplets within segmented flow systems. In this paper, we demonstrate the extraction of fast FT-IR images with acquisition times of 50 ms. This approach allows efficient interrogation of segmented flow systems where aqueous droplets move at a speed of 2.5 mm/s. Consecutive FT-IR images separated by 120 ms intervals allow the generation of chemical movies at eight frames per second. The technique has been applied to the study of microfluidic systems containing moving droplets of water in oil and droplets of protein solution in oil. The presented work demonstrates the feasibility of the use of FT-IR imaging to study dynamic systems with subsecond temporal resolution.

  6. Parallel Implementation of a Frozen Flow Based Wavefront Reconstructor

    NASA Astrophysics Data System (ADS)

    Nagy, J.; Kelly, K.

    2013-09-01

    Obtaining high resolution images of space objects from ground based telescopes is challenging, often requiring the use of a multi-frame blind deconvolution (MFBD) algorithm to remove blur caused by atmospheric turbulence. In order for an MFBD algorithm to be effective, it is necessary to obtain a good initial estimate of the wavefront phase. Although wavefront sensors work well in low turbulence situations, they are less effective in high turbulence, such as when imaging in daylight, or when imaging objects that are close to the Earth's horizon. One promising approach, which has been shown to work very well in high turbulence settings, uses a frozen flow assumption on the atmosphere to capture the inherent temporal correlations present in consecutive frames of wavefront data. Exploiting these correlations can lead to more accurate estimation of the wavefront phase, and the associated PSF, which leads to more effective MFBD algorithms. However, with the current serial implementation, the approach can be prohibitively expensive in situations when it is necessary to use a large number of frames. In this poster we describe a parallel implementation that overcomes this constraint. The parallel implementation exploits sparse matrix computations, and uses the Trilinos package developed at Sandia National Laboratories. Trilinos provides a variety of core mathematical software for parallel architectures that has been designed using high quality software engineering practices. The package is open source, and portable to a variety of high-performance computing architectures.

  7. Robust sliding-window reconstruction for accelerating the acquisition of MR fingerprinting.

    PubMed

    Cao, Xiaozhi; Liao, Congyu; Wang, Zhixing; Chen, Ying; Ye, Huihui; He, Hongjian; Zhong, Jianhui

    2017-10-01

    To develop a method for accelerated and robust MR fingerprinting (MRF) with improved image reconstruction and parameter matching processes. A sliding-window (SW) strategy was applied to MRF, in which signal and dictionary matching was conducted between fingerprints, consisting of mixed-contrast image series reconstructed from consecutive data frames segmented by a sliding window, and a precalculated mixed-contrast dictionary. The effectiveness and performance of this new method, dubbed SW-MRF, was evaluated in both phantom and in vivo experiments. Error quantifications were conducted on results obtained with various settings of the SW reconstruction parameters. Compared with the original MRF strategy, the results of both phantom and in vivo experiments demonstrate that the proposed SW-MRF strategy either provided similar accuracy with reduced acquisition time, or improved accuracy with equal acquisition time. Parametric maps of T1, T2, and proton density of comparable quality could be achieved with a two-fold or greater reduction in acquisition time. The effect of the sliding-window width on dictionary sensitivity was also estimated. The novel SW-MRF recovers high quality image frames from highly undersampled MRF data, which enables more robust dictionary matching with reduced numbers of data frames. This time efficiency may facilitate MRF applications in time-critical clinical settings. Magn Reson Med 78:1579-1588, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
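The sliding-window idea can be illustrated, in a much simplified form, as a running combination of consecutive image frames (actual SW-MRF shares undersampled k-space data before reconstruction, not reconstructed images):

```python
import numpy as np

def sliding_window_series(frames, width):
    """Combine consecutive frames into a mixed-contrast series.

    frames: (T, H, W) image series; width: sliding-window length.
    The combination here is a plain mean over each window, a toy
    stand-in for the k-space view-sharing used in SW-MRF.
    Returns an array of shape (T - width + 1, H, W).
    """
    T = frames.shape[0]
    return np.stack([frames[t:t + width].mean(axis=0)
                     for t in range(T - width + 1)])
```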

  8. A novel snapshot polarimetric imager

    NASA Astrophysics Data System (ADS)

    Wong, Gerald; McMaster, Ciaran; Struthers, Robert; Gorman, Alistair; Sinclair, Peter; Lamb, Robert; Harvey, Andrew R.

    2012-10-01

    Polarimetric imaging (PI) is of increasing importance in determining additional scene information beyond that of conventional images. For very long-range surveillance, image quality is degraded by turbulence. Furthermore, the high magnification required to create images with sufficient spatial resolution for object recognition and identification requires long focal length optical systems, which are incompatible with the size and weight restrictions of aircraft. Techniques which allow detection and recognition of an object at the single-pixel level are therefore likely to provide advance warning of approaching threats or long-range object cueing. PI is a technique that has the potential to detect object signatures at the pixel level. Early attempts to develop PI used rotating polarisers (and spectral filters) which recorded sequential polarized images from which the complete Stokes matrix could be derived. This approach has built-in latency between frames and requires accurate registration of consecutive frames to analyze real-time video of moving objects. Alternatively, multiple optical systems and cameras have been demonstrated to remove latency, but this approach increases the cost and bulk of the imaging system. In our investigation we present a simplified imaging system that divides an image into two orthogonal polarimetric components which are then simultaneously projected onto a single detector array. Thus polarimetric data is recorded without latency in a single snapshot. We further show that, for pixel-level objects, the data derived from only two orthogonal states (H and V) is sufficient to increase the probability of detection whilst reducing false alarms compared to conventional unpolarised imaging.
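With only the two orthogonal states H and V, the first two Stokes components and a normalized H/V contrast follow directly; a minimal sketch:

```python
def two_state_polarimetry(i_h, i_v, eps=1e-12):
    """Stokes S0, S1 and a normalized H/V contrast from two orthogonal
    intensity measurements of the same pixel.

    S0 = I_H + I_V (total intensity), S1 = I_H - I_V; the contrast
    S1/S0 is the fraction of horizontal-vs-vertical linear polarization.
    eps guards against division by zero in dark pixels.
    """
    s0 = i_h + i_v
    s1 = i_h - i_v
    return s0, s1, s1 / (s0 + eps)
```

An unpolarized pixel (I_H = I_V) gives contrast 0; a strongly polarized one approaches ±1, which is the per-pixel signature exploited above.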

  9. Countermeasures for unintentional and intentional video watermarking attacks

    NASA Astrophysics Data System (ADS)

    Deguillaume, Frederic; Csurka, Gabriela; Pun, Thierry

    2000-05-01

    In recent years, the rapidly growing digital multimedia market has revealed an urgent need for effective copyright protection mechanisms. Digital audio, image, and video watermarking has therefore recently become a very active area of research as a solution to this problem. Many important issues have been pointed out, one of them being robustness to non-intentional and intentional attacks. This paper studies some attacks and proposes countermeasures applied to videos. General attacks are lossy copying/transcoding such as MPEG compression and digital/analog (D/A) conversion, changes of frame rate, changes of display format, and geometrical distortions. More specific attacks are sequence edition, and statistical attacks such as averaging or collusion. The averaging attack consists of locally averaging consecutive frames to cancel the watermark. This attack works well against schemes which embed random independent marks into frames. In the collusion attack, the watermark is estimated from single frames (based on image denoising) and averaged over different scenes for better accuracy. The estimated watermark is then subtracted from each frame. Collusion requires that the same mark is embedded into all frames. The proposed countermeasures first ensure robustness to general attacks by spread spectrum encoding in the frequency domain and by the use of an additional template. Secondly, a Bayesian criterion, evaluating the probability of a correctly decoded watermark, is used for rejection of outliers and to implement an algorithm against statistical attacks. The idea is to embed randomly chosen marks, among a finite set of marks, into subsequences of videos which are long enough to resist averaging attacks, but short enough to avoid collusion attacks. The Bayesian criterion is needed to select the correct mark at the decoding step. Finally, the paper presents experimental results showing the robustness of the proposed method.
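The averaging attack in miniature: when each frame carries an independent zero-mean mark, the pixel-wise mean suppresses the marks while the shared content survives. A sketch on plain lists of pixel values:

```python
def averaging_attack(frames):
    """Pixel-wise mean of consecutive frames.

    If frame_i = content + mark_i with independent zero-mean marks,
    the marks average toward zero while the common content remains,
    which is exactly why per-frame independent watermarks are fragile.
    """
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]
```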

  10. Image motion compensation on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Tarbell, T. D.; Duncan, D. W.; Finch, M. L.; Spence, G.

    1981-01-01

    The SOUP experiment on Spacelab 2 includes a 30 cm visible light telescope and focal plane package mounted on the Instrument Pointing System (IPS). Scientific goals of the experiment dictate pointing stability requirements of less than 0.05 arcsecond jitter over periods of 5-20 seconds. Quantitative derivations of these requirements from two different aspects are presented: (1) avoidance of motion blurring of diffraction-limited images; (2) precise coalignment of consecutive frames to allow measurement of small image differences. To achieve this stability, a fine guider system capable of removing residual jitter of the IPS and image motions generated on the IPS cruciform instrument support structure has been constructed. This system uses solar limb detectors in the prime focal plane to derive an error signal. Image motion due to pointing errors is compensated by the agile secondary mirror mounted on piezoelectric transducers, controlled by a closed-loop servo system.

  11. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    NASA Astrophysics Data System (ADS)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS-Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM), and lightning mapping arrays. These cameras provide significant spatial resolution advantages (10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. 
The rate of change in the geometry, more generally the radius, of light escaping cloud top was characterized and integrated with the NLDN, ISS-LIS, and GLM data to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features such as leaders could also be inferred from the video frames. Testing is being done to see whether leader speeds can be accurately calculated under certain circumstances.
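The consecutive-frame overlay analysis could be sketched as counting lit pixels per frame and their frame-to-frame change; this is a hypothetical simplification of the actual processing, with `thresh` an assumed brightness threshold:

```python
import numpy as np

def lit_area_growth(frames, thresh):
    """Lit-pixel area per frame and its frame-to-frame change.

    frames: sequence of 2D brightness arrays from consecutive video frames;
    thresh: assumed brightness threshold separating lit cloud from background.
    Returns (areas, per-step differences), a crude proxy for the rate of
    change of the light escaping cloud top.
    """
    areas = [int((f > thresh).sum()) for f in frames]
    return areas, [b - a for a, b in zip(areas, areas[1:])]
```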

  12. Video-based measurements for wireless capsule endoscope tracking

    NASA Astrophysics Data System (ADS)

    Spyrou, Evaggelos; Iakovidis, Dimitris K.

    2014-01-01

    The wireless capsule endoscope is a swallowable medical device equipped with a miniature camera enabling the visual examination of the gastrointestinal (GI) tract. It wirelessly transmits thousands of images to an external video recording system, while its location and orientation are tracked approximately by external sensor arrays. In this paper we investigate a video-based approach to tracking the capsule endoscope without requiring any external equipment. The proposed method involves extraction of speeded-up robust features (SURF) from video frames, registration of consecutive frames based on the random sample consensus (RANSAC) algorithm, and estimation of the displacement and rotation of interest points within these frames. The results obtained by applying this method to wireless capsule endoscopy videos indicate its effectiveness and improved performance over the state of the art. The findings of this research pave the way for cost-effective localization and travel distance measurement of capsule endoscopes in the GI tract, which could contribute to the planning of more accurate surgical interventions.
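Once inlier correspondences between consecutive frames are available (after feature matching and RANSAC, not shown here), the displacement and rotation can be recovered by a least-squares rigid fit; a minimal 2D sketch, not the authors' implementation:

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation mapping src points to dst.

    src, dst: lists of matched (x, y) points (assumed outlier-free).
    Returns (theta, (tx, ty)) with dst ≈ R(theta) @ src + t.
    """
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        ax, ay = xs - cxs, ys - cys          # centered source point
        bx, by = xd - cxd, yd - cyd          # centered target point
        sxx += ax * bx + ay * by             # cosine accumulator
        sxy += ax * by - ay * bx             # sine accumulator
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)
    ty = cyd - (s * cxs + c * cys)
    return theta, (tx, ty)
```

Summing per-frame displacements over the video is what yields a travel-distance estimate along the GI tract.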

  13. Brachial artery vasomotion and transducer pressure effect on measurements by active contour segmentation on ultrasound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, Theodore W.; Sultan, Laith R.; Sehgal, Chandra M., E-mail: sehgalc@uphs.upenn.edu

    Purpose: To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Methods: Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. Results: FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. Conclusions: FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.

  14. Brachial artery vasomotion and transducer pressure effect on measurements by active contour segmentation on ultrasound.

    PubMed

    Cary, Theodore W; Reamer, Courtney B; Sultan, Laith R; Mohler, Emile R; Sehgal, Chandra M

    2014-02-01

    To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging.
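The finite-difference extraction described above can be sketched as follows (a minimal illustration; `thresh` is an assumed threshold, not the value used in the study):

```python
import numpy as np

def frame_difference_mask(prev, curr, thresh):
    """Threshold the absolute finite difference between consecutive frames.

    prev, curr: 2D intensity arrays of consecutive ultrasound images.
    Pixels that changed by more than `thresh` (e.g. the moving ECG trace
    overlaid on the scanner display) come back True.
    """
    return np.abs(curr.astype(float) - prev.astype(float)) > thresh
```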

  15. Brachial artery vasomotion and transducer pressure effect on measurements by active contour segmentation on ultrasound

    PubMed Central

    Cary, Theodore W.; Reamer, Courtney B.; Sultan, Laith R.; Mohler, Emile R.; Sehgal, Chandra M.

    2014-01-01

    Purpose: To use feed-forward active contours (snakes) to track and measure brachial artery vasomotion on ultrasound images recorded in both transverse and longitudinal views; and to compare the algorithm's performance in each view. Methods: Longitudinal and transverse view ultrasound image sequences of 45 brachial arteries were segmented by feed-forward active contour (FFAC). The segmented regions were used to measure vasomotion artery diameter, cross-sectional area, and distention both as peak-to-peak diameter and as area. ECG waveforms were also simultaneously extracted frame-by-frame by thresholding a running finite-difference image between consecutive images. The arterial and ECG waveforms were compared as they traced each phase of the cardiac cycle. Results: FFAC successfully segmented arteries in longitudinal and transverse views in all 45 cases. The automated analysis took significantly less time than manual tracing, but produced superior, well-behaved arterial waveforms. Automated arterial measurements also had lower interobserver variability as measured by correlation, difference in mean values, and coefficient of variation. Although FFAC successfully segmented both the longitudinal and transverse images, transverse measurements were less variable. The cross-sectional area computed from the longitudinal images was 27% lower than the area measured from transverse images, possibly due to the compression of the artery along the image depth by transducer pressure. Conclusions: FFAC is a robust and sensitive vasomotion segmentation algorithm in both transverse and longitudinal views. Transverse imaging may offer advantages over longitudinal imaging: transverse measurements are more consistent, possibly because the method is less sensitive to variations in transducer pressure during imaging. PMID:24506648

  16. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance of and interest in image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric community. In our work, we focus on algorithms that allow online image-based dense depth estimation from video sequences, which enables direct and live structural analysis of the depicted scene. We therefore use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of the input sequences. One important aspect of reaching good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving a higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that inverse sampling achieves equal or better results than linear sampling, with fewer sampling points and thus less runtime. 
Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
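The effect of sampling uniformly in inverse depth, which equidistant image-space steps via the cross-ratio correspond to, can be sketched as follows; `d_near`, `d_far`, and `n` are illustrative parameters, not values from the paper:

```python
def inverse_depth_planes(d_near, d_far, n):
    """Sample n plane depths uniformly in inverse depth (disparity).

    Equal steps in inverse depth correspond to roughly equal pixel
    displacements under perspective projection, so the planes come out
    dense near the camera and sparse far away. Requires n >= 2.
    """
    inv_near, inv_far = 1.0 / d_near, 1.0 / d_far
    step = (inv_far - inv_near) / (n - 1)
    return [1.0 / (inv_near + i * step) for i in range(n)]
```

For comparison, a linear sampling of the same range would space the planes evenly in depth and waste hypotheses in distant regions.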

  17. Determinants of image quality of rotational angiography for on-line assessment of frame geometry after transcatheter aortic valve implantation.

    PubMed

    Rodríguez-Olivares, Ramón; El Faquir, Nahid; Rahhab, Zouhair; Maugenest, Anne-Marie; Van Mieghem, Nicolas M; Schultz, Carl; Lauritsch, Guenter; de Jaegere, Peter P T

    2016-07-01

    To study the determinants of image quality of rotational angiography using dedicated research prototype software for motion compensation without rapid ventricular pacing, after the implantation of four commercially available catheter-based valves. Prospective observational study including 179 consecutive patients who underwent transcatheter aortic valve implantation (TAVI) with either the Medtronic CoreValve (MCS), Edwards SAPIEN Valve (ESV), Boston Sadra Lotus (BSL) or St. Jude Portico Valve (SJP), in whom rotational angiography (R-angio) with motion-compensated 3D image reconstruction was performed. Image quality was graded from 1 (excellent image quality) to 5 (strongly degraded), with a distinction between good (grades 1-2) and poor image quality (grades 3-5). Clinical (gender, body mass index, Agatston score, heart rate and rhythm, artifacts), procedural (valve type) and technical variables (isocentricity) were related to the image quality assessment. Image quality was good in 128 (72%) and poor in 51 (28%) patients. By univariable analysis, only valve type (BSL) and the presence of an artifact negatively affected image quality. By multivariable analysis (in which BMI was forced into the model), BSL valve (odds ratio 3.5, 95% CI [1.3-9.6], p = 0.02), presence of an artifact (odds ratio 2.5, 95% CI [1.2-5.4], p = 0.02) and BMI (odds ratio 1.1, 95% CI [1.0-1.2], p = 0.04) were independent predictors of poor image quality. Rotational angiography with motion-compensated 3D image reconstruction using dedicated research prototype software offers good image quality for the evaluation of frame geometry after TAVI in the majority of patients. Valve type, presence of artifacts and higher BMI negatively affect image quality.

  18. Computation of fluid and particle motion from a time-sequenced image pair: a global outlier identification approach.

    PubMed

    Ray, Nilanjan

    2011-10-01

    Fluid motion estimation from time-sequenced images is a significant image analysis task. Its application is widespread in experimental fluidics research and many related areas, such as biomedical engineering and atmospheric sciences. In this paper, we present a novel flow computation framework to estimate the flow velocity vectors from two consecutive image frames. In an energy minimization-based flow computation, we propose a novel data fidelity term, which: 1) can accommodate various measures, such as cross-correlation or the sum of absolute or squared differences of pixel intensities between image patches; 2) has a global mechanism to control the adverse effect of outliers arising from motion discontinuities and the proximity of image borders; and 3) can go hand-in-hand with various spatial smoothness terms. Further, the proposed data term and related regularization schemes are applicable to both dense and sparse flow vector estimation. We validate these claims by numerical experiments on benchmark flow data sets. © 2011 IEEE

  19. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  20. Time stretch and its applications

    NASA Astrophysics Data System (ADS)

    Mahjoubfar, Ata; Churkin, Dmitry V.; Barland, Stéphane; Broderick, Neil; Turitsyn, Sergei K.; Jalali, Bahram

    2017-06-01

    Observing non-repetitive and statistically rare signals that occur on short timescales requires fast real-time measurements that exceed the speed, precision and record length of conventional digitizers. Photonic time stretch is a data acquisition method that overcomes the speed limitations of electronic digitizers and enables continuous ultrafast single-shot spectroscopy, imaging, reflectometry, terahertz and other measurements at refresh rates reaching billions of frames per second with non-stop recording spanning trillions of consecutive frames. The technology has opened a new frontier in measurement science unveiling transient phenomena in nonlinear dynamics such as optical rogue waves and soliton molecules, and in relativistic electron bunching. It has also created a new class of instruments that have been integrated with artificial intelligence for sensing and biomedical diagnostics. We review the fundamental principles and applications of this emerging field for continuous phase and amplitude characterization at extremely high repetition rates via time-stretch spectral interferometry.

  1. Extraction of Blebs in Human Embryonic Stem Cell Videos.

    PubMed

    Guan, Benjamin X; Bhanu, Bir; Talbot, Prue; Weng, Nikki Jo-Hao

    2016-01-01

    Blebbing is an important biological indicator in determining the health of human embryonic stem cells (hESC). In particular, the areas of a bleb sequence in a video are often used to distinguish two cell blebbing behaviors in hESC: dynamic and apoptotic blebbing. This paper analyzes various segmentation methods for bleb extraction in hESC videos and introduces a bio-inspired score function to improve the performance of bleb extraction. Full bleb formation consists of bleb expansion and retraction. Blebs change their size and image properties dynamically in both processes and between frames. Therefore, adaptive parameters are needed for each segmentation method. A score function derived from the change of bleb area and orientation between consecutive frames is proposed, which provides adaptive parameters for bleb extraction in videos. In comparison to manual analysis, the proposed method provides an automated, fast and accurate approach for bleb sequence extraction.

  2. Deconvolution of astronomical images using SOR with adaptive relaxation.

    PubMed

    Vorontsov, S V; Strakhov, V N; Jefferies, S M; Borelli, K J

    2011-07-04

    We address the potential performance of the successive overrelaxation technique (SOR) in image deconvolution, focusing our attention on the restoration of astronomical images distorted by atmospheric turbulence. SOR is the classical Gauss-Seidel iteration, supplemented with relaxation. As indicated by earlier work, the convergence properties of SOR, and its ultimate performance in the deconvolution of blurred and noisy images, can be made competitive with other iterative techniques, including conjugate gradients, by a proper choice of the relaxation parameter. The question of how to choose the relaxation parameter, however, remained open, and in practical work one had to rely on experimentation. In this paper, using constructive (rather than exact) arguments, we suggest a simple strategy for choosing the relaxation parameter and for updating its value in consecutive iterations to optimize the performance of the SOR algorithm (and its positivity-constrained version, +SOR) at finite iteration counts. We suggest an extension of the algorithm to the notoriously difficult problem of "blind" deconvolution, where both the true object and the point-spread function have to be recovered from the blurred image. We report the results of numerical inversions with artificial and real data, where the algorithm is compared with techniques based on conjugate gradients. In all of our experiments +SOR provides the highest quality results. In addition, +SOR is found to be able to detect moderately small changes in the true object between separate data frames: an important quality for multi-frame blind deconvolution, where stationarity of the object is a necessity.
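The relaxed Gauss-Seidel update at the heart of SOR can be sketched on a generic linear system (the deconvolution itself solves a much larger system built from the blur operator). The fixed relaxation parameter `omega` below is an illustrative stand-in for the adaptive update strategy the paper proposes:

```python
import numpy as np

def sor_solve(A, b, omega=1.5, iters=200):
    """Successive overrelaxation: each Gauss-Seidel update is blended
    with the previous iterate via the relaxation parameter omega."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            gs = (b[i] - sigma) / A[i, i]           # plain Gauss-Seidel step
            x[i] = (1 - omega) * x[i] + omega * gs  # relaxed update
    return x

# small symmetric positive-definite demo system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = sor_solve(A, b)
```

For symmetric positive-definite systems, SOR converges for any omega in (0, 2); the paper's contribution is precisely how to pick and update omega to speed this up at finite iteration counts.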

  3. Multichannel imager for littoral zone characterization

    NASA Astrophysics Data System (ADS)

    Podobna, Yuliya; Schoonmaker, Jon; Dirbas, Joe; Sofianos, James; Boucher, Cynthia; Gilbert, Gary

    2010-04-01

    This paper describes an approach to utilizing a multi-channel, multi-spectral electro-optic (EO) system for littoral zone characterization. Advanced Coherent Technologies, LLC (ACT) presents its EO sensor systems for surf zone environmental assessment and potential surf zone target detection. Specifically, an approach is presented to determine a Surf Zone Index (SZI) from the multi-spectral EO sensor system. SZI provides a single quantitative value of the surf zone conditions, delivering an immediate understanding of the area and an assessment of how well an airborne optical system might perform in a mine countermeasures (MCM) operation. Utilizing consecutive frames of SZI images, ACT is able to measure variability over time. A surf zone nomograph, which incorporates target, sensor, and environmental data (including the SZI) to determine the environmental impact on system performance, is reviewed in this work. ACT's electro-optical multi-channel, multi-spectral imaging system and test results are presented and discussed.

  4. HDR video synthesis for vision systems in dynamic scenes

    NASA Astrophysics Data System (ADS)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to solve camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
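The weighted-averaging fusion step can be sketched as follows. This is a minimal version under stated assumptions: frames are already aligned and normalized to [0, 1], and the hat-shaped weight favoring mid-tone pixels is an illustrative choice, not the paper's exact implementation:

```python
import numpy as np

def hdr_weighted_average(frames, exposures):
    """Fuse aligned LDR frames (values in [0, 1]) into a radiance map:
    each frame is exposure-normalized, then well-exposed pixels are
    weighted more heavily in the average."""
    acc = np.zeros_like(frames[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight: peaks at mid-tones
        acc += w * (img / t)               # radiance estimate from this frame
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

Two frames of the same static scene shot at different exposure times should fuse back to a single consistent radiance value, which makes the scheme easy to sanity-check.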

  5. High PRF ultrafast sliding compound doppler imaging: fully qualitative and quantitative analysis of blood flow

    NASA Astrophysics Data System (ADS)

    Kang, Jinbum; Jang, Won Seuk; Yoo, Yangmo

    2018-02-01

    Ultrafast compound Doppler imaging based on plane-wave excitation (UCDI) can be used to evaluate cardiovascular diseases using high frame rates. In particular, it provides a fully quantifiable flow analysis over a large region of interest with high spatio-temporal resolution. However, the pulse-repetition frequency (PRF) in the UCDI method is limited for high-velocity flow imaging since it has a tradeoff between the number of plane-wave angles (N) and acquisition time. In this paper, we present a high PRF ultrafast sliding compound Doppler imaging (HUSDI) method to improve quantitative flow analysis. With the HUSDI method, full scanline images (i.e. each tilted plane wave data) in a Doppler frame buffer are consecutively summed using a sliding window to create high-quality ensemble data so that there is no reduction in frame rate and flow sensitivity. In addition, by updating a new compounding set with a certain time difference (i.e. sliding window step size, L), the HUSDI method allows various Doppler PRFs with the same acquisition data to enable a fully qualitative, retrospective flow assessment. To evaluate the performance of the proposed HUSDI method, simulation, in vitro and in vivo studies were conducted under diverse flow circumstances. In the simulation and in vitro studies, the HUSDI method showed improved hemodynamic representations without reducing either temporal resolution or sensitivity compared to the UCDI method. For the quantitative analysis, the root mean squared velocity error (RMSVE) was measured using 9 angles (-12° to 12°) with L of 1-9, and the results were found to be comparable to those of the UCDI method (L = N = 9), i.e. ⩽0.24 cm s⁻¹, for all L values. For the in vivo study, the flow data acquired from a full cardiac cycle of the femoral vessels of a healthy volunteer were analyzed using a PW spectrogram, and arterial and venous flows were successfully assessed with a high Doppler PRF (e.g. 5 kHz at L = 4). These results indicate that the proposed HUSDI method can improve flow visualization and quantification with a higher frame rate, PRF and flow sensitivity in cardiovascular imaging.
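The sliding-window compounding described above can be sketched with numpy. Summing N consecutive tilted-plane-wave frames while advancing the window by only L frames per output yields overlapping ensembles, which is how HUSDI trades off ensemble quality against output rate; the N = 9, L = 1 values below mirror the abstract's experiment but are otherwise illustrative:

```python
import numpy as np

def sliding_compound(tilted_frames, N=9, L=1):
    """Compound N consecutive tilted-plane-wave frames per output,
    advancing the sliding window by L frames each time. With L < N,
    windows overlap and the output rate exceeds full recompounding."""
    frames = np.asarray(tilted_frames, dtype=float)
    starts = range(0, len(frames) - N + 1, L)
    return np.stack([frames[s:s + N].sum(axis=0) for s in starts])
```

For 12 input frames with N = 9 and L = 1 this produces 4 compounded ensembles instead of the single non-overlapping ensemble that L = N would give.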

  6. High PRF ultrafast sliding compound doppler imaging: fully qualitative and quantitative analysis of blood flow.

    PubMed

    Kang, Jinbum; Jang, Won Seuk; Yoo, Yangmo

    2018-02-09

    Ultrafast compound Doppler imaging based on plane-wave excitation (UCDI) can be used to evaluate cardiovascular diseases using high frame rates. In particular, it provides a fully quantifiable flow analysis over a large region of interest with high spatio-temporal resolution. However, the pulse-repetition frequency (PRF) in the UCDI method is limited for high-velocity flow imaging since it has a tradeoff between the number of plane-wave angles (N) and acquisition time. In this paper, we present a high PRF ultrafast sliding compound Doppler imaging (HUSDI) method to improve quantitative flow analysis. With the HUSDI method, full scanline images (i.e. each tilted plane wave data) in a Doppler frame buffer are consecutively summed using a sliding window to create high-quality ensemble data so that there is no reduction in frame rate and flow sensitivity. In addition, by updating a new compounding set with a certain time difference (i.e. sliding window step size, L), the HUSDI method allows various Doppler PRFs with the same acquisition data to enable a fully qualitative, retrospective flow assessment. To evaluate the performance of the proposed HUSDI method, simulation, in vitro and in vivo studies were conducted under diverse flow circumstances. In the simulation and in vitro studies, the HUSDI method showed improved hemodynamic representations without reducing either temporal resolution or sensitivity compared to the UCDI method. For the quantitative analysis, the root mean squared velocity error (RMSVE) was measured using 9 angles (-12° to 12°) with L of 1-9, and the results were found to be comparable to those of the UCDI method (L = N = 9), i.e. ⩽0.24 cm s⁻¹, for all L values. For the in vivo study, the flow data acquired from a full cardiac cycle of the femoral vessels of a healthy volunteer were analyzed using a PW spectrogram, and arterial and venous flows were successfully assessed with a high Doppler PRF (e.g. 5 kHz at L = 4). These results indicate that the proposed HUSDI method can improve flow visualization and quantification with a higher frame rate, PRF and flow sensitivity in cardiovascular imaging.

  7. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM; Ma, Tian J [Albuquerque, NM

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
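The core idea can be sketched as follows: difference the frames, then suppress differences that sub-pixel camera jitter acting on strong local gradients could explain. The gradient-based error term below is a minimal stand-in for the patent's full error model, and the jitter amplitude, noise floor, and threshold factor `k` are illustrative assumptions:

```python
import numpy as np

def change_mask(reference, current, jitter_px=0.5, k=3.0, noise_sigma=2.0):
    """Flag pixels whose frame-to-frame difference exceeds what jitter
    plus sensor noise could plausibly produce at that pixel."""
    diff = current.astype(float) - reference.astype(float)
    gy, gx = np.gradient(reference.astype(float))
    grad_mag = np.hypot(gx, gy)
    # per-pixel error estimate: jitter shifting a steep gradient mimics change
    err = np.hypot(jitter_px * grad_mag, noise_sigma)
    return np.abs(diff) > k * err
```

On a flat scene the error floor is just sensor noise, so an isolated bright change is flagged; near edges the gradient term raises the threshold, which is exactly the jitter-mitigation behavior described above.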

  8. Artificial Neural Network applied to lightning flashes

    NASA Astrophysics Data System (ADS)

    Gin, R. B.; Guedes, D.; Bianchi, R.

    2013-05-01

    The development of video cameras has enabled scientists to study the behavior of lightning discharges with more precision. The main goal of this project is to create a system able to detect images of lightning discharges stored in videos and classify them using an Artificial Neural Network (ANN), implemented in the C language with OpenCV libraries. The developed system can be split into two modules: a detection module and a classification module. The detection module uses OpenCV's computer vision libraries and image processing techniques to detect whether there are significant differences between frames in a sequence, indicating that something, still not classified, occurred. Whenever there is a significant difference between two consecutive frames, two main algorithms are used to analyze the frame image: a brightness algorithm and a shape algorithm. These algorithms detect both the shape and brightness of the event, removing irrelevant events like birds, as well as detecting the relevant event's exact position, allowing the system to track it over time. The classification module uses a neural network to classify the relevant events as horizontal or vertical lightning, saves the event's images and counts its discharges. The neural network was implemented using the backpropagation algorithm, and was trained with 42 training images containing 57 lightning events (one image can have more than one lightning event). The ANN was tested with one to five hidden layers, with up to 50 neurons each. The best configuration achieved a success rate of 95%, with one layer containing 20 neurons (33 test images with 42 events were used in this phase). This configuration was implemented in the developed system to analyze 20 video files containing 63 lightning discharges previously detected manually. Results showed that all the lightning discharges were detected, many irrelevant events were correctly discarded, and each event's number of discharges was correctly computed. The neural network used in this project achieved a success rate of 90%. The videos used in this experiment were acquired by seven video cameras installed in São Bernardo do Campo, Brazil, that continuously recorded lightning events during the summer. The cameras were arranged in a 360° loop, recording all data at a time resolution of 33 ms. During this period, several convective storms were recorded.
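The detection module's frame-differencing step can be sketched in numpy (the project itself uses OpenCV in C; the threshold and minimum-size values here are illustrative, and the centroid stands in for the "exact position" used for tracking):

```python
import numpy as np

def detect_event(prev, curr, thresh=40, min_pixels=5):
    """Flag a frame pair as containing an event when enough pixels change
    brightness significantly; return the changed-pixel centroid (row, col)
    so the event can be tracked across frames."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    changed = diff > thresh
    if changed.sum() < min_pixels:  # too few pixels: treat as noise
        return None
    ys, xs = np.nonzero(changed)
    return float(ys.mean()), float(xs.mean())
```

Events that pass this gate would then go to the brightness and shape algorithms, which reject small bright objects such as birds before classification.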

  9. An evaluation of dynamic lip-tooth characteristics during speech and smile in adolescents.

    PubMed

    Ackerman, Marc B; Brensinger, Colleen; Landis, J Richard

    2004-02-01

    This retrospective study was conducted to measure lip-tooth characteristics of adolescents. Pretreatment video clips of 1242 consecutive patients were screened for Class-I skeletal and dental patterns. After all inclusion criteria were applied, the final sample consisted of 50 patients (27 boys, 23 girls) with a mean age of 12.5 years. The raw digital video stream of each patient was edited to select a single image frame representing the patient saying the syllable "chee" and a second single image representing the patient's posed social smile and saved as part of a 12-frame image sequence. Each animation image was analyzed using a SmileMesh computer application to measure the smile index (the ratio of the intercommissure width divided by the interlabial gap), intercommissure width (mm), interlabial gap (mm), percent incisor below the intercommissure line, and maximum incisor exposure (mm). The data were analyzed using SAS (version 8.1). All recorded differences in linear measures had to be ≥ 2 mm. The results suggest that anterior tooth display at speech and smile should be recorded independently but evaluated as part of a dynamic range. Asking patients to say "cheese" and then smile is no longer a valid method to elicit the parameters of anterior tooth display. When planning the vertical positions of incisors during orthodontic treatment, the orthodontist should view the dynamics of anterior tooth display as a continuum delineated by the time points of rest, speech, posed social smile, and a Duchenne smile.
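The smile index defined above is a simple ratio; a sketch with illustrative (not study-derived) measurements:

```python
def smile_index(intercommissure_mm, interlabial_gap_mm):
    """Smile index: intercommissure width divided by interlabial gap.
    A wider, shallower smile yields a larger index."""
    if interlabial_gap_mm <= 0:
        raise ValueError("interlabial gap must be positive")
    return intercommissure_mm / interlabial_gap_mm

# e.g. a 60 mm intercommissure width with a 10 mm interlabial gap
ratio = smile_index(60.0, 10.0)
```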

  10. 49 CFR 396.3 - Inspection, repair, and maintenance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... systematically inspected, repaired, and maintained, all motor vehicles and intermodal equipment subject to its... affect safety of operation, including but not limited to, frame and frame assemblies, suspension systems... cause to be maintained, records for each motor vehicle they control for 30 consecutive days. Intermodal...

  11. Action recognition using multi-scale histograms of oriented gradients based depth motion trail Images

    NASA Astrophysics Data System (ADS)

    Wang, Guanxi; Tie, Yun; Qi, Lin

    2017-07-01

    In this paper, we propose a novel approach that computes Multi-Scale Histograms of Oriented Gradients (MSHOG) from sequences of depth maps to recognize actions. Each depth frame in a depth video sequence is projected onto three orthogonal Cartesian planes. Under each projection view, the absolute difference between two consecutive projected maps is accumulated through the depth video sequence to form a Depth Motion Trail Image (DMTI). The MSHOG is then computed from these images for the representation of an action. In addition, we apply L2-Regularized Collaborative Representation (L2-CRC) to classify actions. We evaluate the proposed approach on the MSR Action3D and MSRGesture3D datasets. Promising experimental results demonstrate the effectiveness of the proposed method.
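The accumulation that forms a Depth Motion Trail Image under a single projection view can be sketched as follows (the full method repeats this for each of the three orthogonal projections before computing the MSHOG descriptor):

```python
import numpy as np

def depth_motion_trail(projected_maps):
    """Accumulate absolute differences between consecutive projected
    depth maps into a single motion-trail image for one view."""
    maps = np.asarray(projected_maps, dtype=float)
    return np.abs(np.diff(maps, axis=0)).sum(axis=0)
```

Pixels swept by motion accumulate large values over the sequence, so the trail image summarizes where and how strongly the body moved.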

  12. Joint level-set and spatio-temporal motion detection for cell segmentation.

    PubMed

    Boukari, Fatima; Makrogiannis, Sokratis

    2016-08-10

    Cell segmentation is a critical step for quantification and monitoring of cell cycle progression, cell migration, and growth control to investigate cellular immune response, embryonic development, tumorigenesis, and drug effects on live cells in time-lapse microscopy images. In this study, we propose a joint spatio-temporal diffusion and region-based level-set optimization approach for moving cell segmentation. Moving regions are initially detected in each set of three consecutive sequence images by numerically solving a system of coupled spatio-temporal partial differential equations. In order to standardize intensities of each frame, we apply a histogram transformation approach to match the pixel intensities of each processed frame with an intensity distribution model learned from all frames of the sequence during the training stage. After the spatio-temporal diffusion stage is completed, we compute the edge map by nonparametric density estimation using Parzen kernels. This process is followed by watershed-based segmentation and moving cell detection. We use this result as an initial level-set function to evolve the cell boundaries, refine the delineation, and optimize the final segmentation result. We applied this method to several datasets of fluorescence microscopy images with varying levels of difficulty with respect to cell density, resolution, contrast, and signal-to-noise ratio. We compared the results with those produced by Chan and Vese segmentation, a temporally linked level-set technique, and nonlinear diffusion-based segmentation. We validated all segmentation techniques against reference masks provided by the international Cell Tracking Challenge consortium. The proposed approach delineated cells with an average Dice similarity coefficient of 89 % over a variety of simulated and real fluorescent image sequences. 
It yielded average improvements of 11 % in segmentation accuracy compared to both strictly spatial and temporally linked Chan-Vese techniques, and 4 % compared to the nonlinear spatio-temporal diffusion method. Despite the wide variation in cell shape, density, mitotic events, and image quality among the datasets, our proposed method produced promising segmentation results. These results indicate the efficiency and robustness of this method especially for mitotic events and low SNR imaging, enabling the application of subsequent quantification tasks.

  13. Interference-free ultrasound imaging during HIFU therapy, using software tools

    NASA Technical Reports Server (NTRS)

    Vaezy, Shahram (Inventor); Held, Robert (Inventor); Sikdar, Siddhartha (Inventor); Managuli, Ravi (Inventor); Zderic, Vesna (Inventor)

    2010-01-01

    Disclosed herein is a method for obtaining a composite interference-free ultrasound image when non-imaging ultrasound waves would otherwise interfere with ultrasound imaging. A conventional ultrasound imaging system is used to collect frames of ultrasound image data in the presence of non-imaging ultrasound waves, such as high-intensity focused ultrasound (HIFU). The frames are directed to a processor that analyzes the frames to identify portions of the frame that are interference-free. Interference-free portions of a plurality of different ultrasound image frames are combined to generate a single composite interference-free ultrasound image that is displayed to a user. In this approach, a frequency of the non-imaging ultrasound waves is offset relative to a frequency of the ultrasound imaging waves, such that the interference introduced by the non-imaging ultrasound waves appears in a different portion of the frames.
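The compositing step can be sketched as follows, assuming per-frame interference-free masks have already been identified (how those interference-free portions are found, via the frequency offset described above, is the core of the disclosed method and is taken as given here):

```python
import numpy as np

def composite_clean(frames, clean_masks):
    """Build one interference-free image by taking each pixel from the
    first frame whose mask marks that pixel as interference-free."""
    out = np.zeros_like(frames[0], dtype=float)
    filled = np.zeros(frames[0].shape, dtype=bool)
    for frame, mask in zip(frames, clean_masks):
        take = mask & ~filled        # clean here, and not yet composited
        out[take] = frame[take]
        filled |= take
    return out, filled
```

Because the interference sweeps through different image regions in different frames, a handful of frames with complementary clean regions is enough to fill the whole composite.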

  14. Specialized CCDs for high-frame-rate visible imaging and UV imaging applications

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Taylor, Gordon C.; Shallcross, Frank V.; Tower, John R.; Lawler, William B.; Harrison, Lorna J.; Socker, Dennis G.; Marchywka, Mike

    1993-11-01

    This paper reports recent progress by the authors in two distinct charge coupled device (CCD) technology areas. The first technology area is high frame rate, multi-port, frame transfer imagers. A 16-port, 512 X 512, split frame transfer imager and a 32-port, 1024 X 1024, split frame transfer imager are described. The thinned, backside illuminated devices feature on-chip correlated double sampling, buried blooming drains, and a room temperature dark current of less than 50 pA/cm2, without surface accumulation. The second technology area is vacuum ultraviolet (UV) frame transfer imagers. A developmental 1024 X 640 frame transfer imager with 20% quantum efficiency at 140 nm is described. The device is fabricated in a p-channel CCD process, thinned for backside illumination, and utilizes special packaging to achieve stable UV response.

  15. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
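The key idea, using the previously recorded frame as the encryption key for the incoming frame, can be illustrated with a simple XOR cipher. This is purely illustrative: the patent specifies an encryption circuit, not this particular cipher, and XOR is chosen here only because it makes the key relationship and its reversibility obvious:

```python
import numpy as np

def encrypt_frame(frame, prev_frame):
    """XOR the incoming frame with the previously recorded frame, which
    serves as the key. Applying the same operation with the same key
    decrypts, so holders of the prior frame can recover the new one."""
    return np.bitwise_xor(frame, prev_frame)
```

Overwriting the stored key frame with the encrypted output, as the patent's write circuit does, is what deletes the key automatically once the next frame is recorded.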

  16. A Possible Approach to Inclusion of Space and Time in Frame Fields of Quantum Representations of Real and Complex Numbers

    DOE PAGES

    Benioff, Paul

    2009-01-01

    This work is based on the field of reference frames based on quantum representations of real and complex numbers described in other work. Here frame domains are expanded to include space and time lattices. Strings of qukits are described as hybrid systems as they are both mathematical and physical systems. As mathematical systems they represent numbers. As physical systems in each frame the strings have a discrete Schrodinger dynamics on the lattices. The frame field has an iterative structure such that the contents of a stage j frame have images in a stage j - 1 (parent) frame. A discussion of parent frame images includes the proposal that points of stage j frame lattices have images as hybrid systems in parent frames. The resulting association of energy with images of lattice point locations, as hybrid systems states, is discussed. Representations and images of other physical systems in the different frames are also described.

  17. Post-processing of adaptive optics images based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Changhui; Wei, Kai

    2008-07-01

    Adaptive optics can only partially compensate for images blurred by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. Frames suitable for blind deconvolution are selected from the recorded AO closed-loop frame series by the frame selection technique, and multi-frame blind deconvolution is then performed. No prior knowledge is assumed except for the positivity constraint in the blind deconvolution. The use of multiple frames improves the stability and convergence of the blind deconvolution algorithm. The method was applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optical system at Yunnan Observatory. The results show that the method can effectively improve the images partially corrected by adaptive optics.
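The frame-selection step can be sketched with a gradient-energy sharpness score. The specific metric and the kept fraction are illustrative assumptions; the paper does not state which selection criterion it uses:

```python
import numpy as np

def select_frames(frames, keep_fraction=0.3):
    """Rank AO closed-loop frames by a gradient-energy sharpness metric
    and keep the sharpest fraction for multi-frame blind deconvolution."""
    def sharpness(img):
        gy, gx = np.gradient(img.astype(float))
        return float((gx ** 2 + gy ** 2).mean())
    scores = [sharpness(f) for f in frames]
    k = max(1, int(len(frames) * keep_fraction))
    best = np.argsort(scores)[::-1][:k]          # indices of sharpest frames
    return [frames[i] for i in sorted(best)]     # preserve temporal order
```

Discarding the most turbulence-degraded frames before deconvolution is what lets the multi-frame algorithm converge on a consistent object estimate.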

  18. RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information

    PubMed Central

    Di, Kaichang; Zhao, Qiang; Wan, Wenhui; Wang, Yexin; Gao, Yunjun

    2016-01-01

    In the study of SLAM problem using an RGB-D camera, depth information and visual information as two types of primary measurement data are rarely tightly coupled during refinement of camera pose estimation. In this paper, a new method of RGB-D camera SLAM is proposed based on extended bundle adjustment with integrated 2D and 3D information on the basis of a new projection model. First, the geometric relationship between the image plane coordinates and the depth values is constructed through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method has a notably better performance than the traditional method, and the experimental results demonstrate the effectiveness of the proposed method in improving localization accuracy. PMID:27529256

  19. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    NASA Astrophysics Data System (ADS)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing consumes a great deal of CPU resources, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one from an infrared (IR) camera and one from an electro-optical (EO) camera. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
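The homography-estimation step between consecutive frames can be sketched with the direct linear transform. This is a minimal stand-in for the full SIFT-plus-robust-estimation pipeline: in practice the matched features would be filtered with RANSAC before solving, and the warp would run on the GPU:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src points to dst points
    (at least 4 non-degenerate matches) via the direct linear transform:
    each match contributes two rows to A, and h spans A's null space."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)      # right-singular vector of smallest value
    return H / H[2, 2]            # normalize so H[2, 2] == 1
```

With exact correspondences four points determine the homography uniquely; with noisy SIFT matches the same least-squares solve is applied to the RANSAC inlier set.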

  20. 76 FR 59737 - In the Matter of Certain Digital Photo Frames and Image Display Devices and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-27

    ... Frames and Image Display Devices and Components Thereof; Notice of Institution of Investigation... United States after importation of certain digital photo frames and image display devices and components... certain digital photo frames and image display devices and components thereof that infringe one or more of...

  1. Development and prospective evaluation of an automated software system for quality control of quantitative 99mTc-MAG3 renal studies.

    PubMed

    Folks, Russell D; Garcia, Ernest V; Taylor, Andrew T

    2007-03-01

    Quantitative nuclear renography has numerous potential sources of error. We previously reported the initial development of a computer software module for comprehensively addressing the issue of quality control (QC) in the analysis of radionuclide renal images. The objective of this study was to prospectively test the QC software. The QC software works in conjunction with standard quantitative renal image analysis using a renal quantification program. The software saves a text file that summarizes QC findings as possible errors in user-entered values, calculated values that may be unreliable because of the patient's clinical condition, and problems relating to acquisition or processing. To test the QC software, a technologist not involved in software development processed 83 consecutive nontransplant clinical studies. The QC findings of the software were then tabulated. QC events were defined as technical (study descriptors that were out of range or were entered and then changed, unusually sized or positioned regions of interest, or missing frames in the dynamic image set) or clinical (calculated functional values judged to be erroneous or unreliable). Technical QC events were identified in 36 (43%) of 83 studies. Clinical QC events were identified in 37 (45%) of 83 studies. Specific QC events included starting the camera after the bolus had reached the kidney, dose infiltration, oversubtraction of background activity, and missing frames in the dynamic image set. QC software has been developed to automatically verify user input, monitor calculation of renal functional parameters, summarize QC findings, and flag potentially unreliable values for the nuclear medicine physician. Incorporation of automated QC features into commercial or local renal software can reduce errors and improve technologist performance and should improve the efficiency and accuracy of image interpretation.

  2. Adaptive tight frame based medical image reconstruction: a proof-of-concept study for computed tomography

    NASA Astrophysics Data System (ADS)

    Zhou, Weifeng; Cai, Jian-Feng; Gao, Hao

    2013-12-01

    A popular approach to medical image reconstruction has been sparsity regularization, which assumes the targeted image can be well approximated by sparse coefficients under some properly designed system. The wavelet tight frame is such a widely used system, owing to its capability for sparsely approximating piecewise-smooth functions, such as medical images. However, using a fixed system may not always be optimal for reconstructing a variety of diversified images. Recently, methods based on adaptive over-complete dictionaries that are specific to the structures of the targeted images have demonstrated their superiority for image processing. This work develops an adaptive wavelet tight frame method for image reconstruction. The proposed scheme first constructs an adaptive wavelet tight frame that is task specific, and then reconstructs the image of interest by solving an l1-regularized minimization problem using the constructed adaptive tight frame system. A proof-of-concept study is performed for computed tomography (CT), and the simulation results suggest that the adaptive tight frame method improves the reconstructed CT image quality over the traditional tight frame method.
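
    As a toy illustration of the sparsity idea (not the paper's adaptive, task-specific frame), a one-level orthonormal Haar transform sparsely represents piecewise-constant signals, and the l1 penalty in such minimization problems is handled by soft-thresholding the frame coefficients:

```python
import numpy as np

def haar(x):
    # One-level orthonormal Haar transform: pairwise averages and differences.
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar(a, d):
    # Inverse transform; perfect reconstruction for even-length inputs.
    out = np.empty(a.size * 2)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

def soft_threshold(c, lam):
    # Proximal operator of lam * ||c||_1 -- the core of l1-regularized solvers.
    return np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)

def denoise(x, lam):
    # Shrink only the detail coefficients; piecewise-smooth content survives.
    a, d = haar(x)
    return ihaar(a, soft_threshold(d, lam))

# A piecewise-constant signal corrupted by small zero-mean noise.
x = np.array([1.0, 1.0, 1.0, 1.0, 5.0, 5.0, 5.0, 5.0])
noisy = x + np.array([0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 0.05, -0.05])
denoised = denoise(noisy, 0.2)
```

    A full iterative reconstruction would alternate this shrinkage with data-fidelity gradient steps; the adaptive method additionally learns the frame itself from the data.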

  3. The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross

    2014-06-15

    Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
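
    The trade-off the study reports is generic and easy to reproduce: averaging k frames reduces noise by roughly sqrt(k), while a target that moves between frames smears across the averaged image. The simulation below is illustrative, not the authors' EPID data:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1) Averaging k frames of pure noise lowers the standard deviation by sqrt(k).
frames = rng.normal(0.0, 1.0, size=(400, 64, 64))
avg4 = frames.reshape(100, 4, 64, 64).mean(axis=1)   # 4-frame averaging
print(frames.std(), avg4.std())                      # ~1.0 vs ~0.5

# 2) But a target moving one column per frame smears across the average:
#    four quarter-amplitude copies instead of one bright pixel.
target = np.zeros((4, 64, 64))
for t in range(4):
    target[t, 32, 30 + t] = 100.0
blurred = target.mean(axis=0)
```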

  4. 3-D ultrasound volume reconstruction using the direct frame interpolation method.

    PubMed

    Scheipers, Ulrich; Koptenko, Sergei; Remlinger, Rachel; Falco, Tony; Lachaine, Martin

    2010-11-01

    A new method for 3-D ultrasound volume reconstruction using tracked freehand 3-D ultrasound is proposed. The method is based on solving the forward volume reconstruction problem using direct interpolation of high-resolution ultrasound B-mode image frames. A series of ultrasound B-mode image frames (an image series) is acquired using the freehand scanning technique and position sensing via optical tracking equipment. The proposed algorithm creates additional intermediate image frames by directly interpolating between two or more adjacent image frames of the original image series. The target volume is filled using the original frames in combination with the additionally constructed frames. Compared with conventional volume reconstruction methods, no additional filling of empty voxels or holes within the volume is required, because the whole extent of the volume is defined by the arrangement of the original and the additionally constructed B-mode image frames. The proposed direct frame interpolation (DFI) method was tested on two different data sets acquired while scanning the head and neck region of different patients. The first data set consisted of eight B-mode 2-D frame sets acquired under optimal laboratory conditions. The second data set consisted of 73 image series acquired during a clinical study. Sample volumes were reconstructed for all 81 image series using the proposed DFI method with four different interpolation orders, as well as with the pixel nearest-neighbor method using three different interpolation neighborhoods. In addition, volumes based on a reduced number of image frames were reconstructed for comparison of the different methods' accuracy and robustness in reconstructing image data that lies between the original image frames. 
The DFI method is based on a forward approach making use of a priori information about the position and shape of the B-mode image frames (e.g., masking information) to optimize the reconstruction procedure and to reduce computation times and memory requirements. The method is straightforward, independent of additional input or parameters, and uses the high-resolution B-mode image frames instead of usually lower-resolution voxel information for interpolation. The DFI method can be considered as a valuable alternative to conventional 3-D ultrasound reconstruction methods based on pixel or voxel nearest-neighbor approaches, offering better quality and competitive reconstruction time.
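
    A first-order version of the direct frame interpolation idea can be sketched as follows. The real method supports higher interpolation orders and uses the tracked frame positions; this linear, equally spaced sketch is an assumption for illustration:

```python
import numpy as np

def interpolate_frames(f0, f1, n_intermediate):
    # Create n_intermediate frames between two adjacent B-mode frames by
    # linear (order-1) interpolation; higher orders would draw on more
    # neighboring frames.
    ts = np.linspace(0.0, 1.0, n_intermediate + 2)[1:-1]
    return [(1.0 - t) * f0 + t * f1 for t in ts]

f0, f1 = np.zeros((2, 2)), np.full((2, 2), 4.0)
intermediates = interpolate_frames(f0, f1, 3)
```

    The original frames plus these intermediates are then placed directly into the target volume, which is why no separate hole-filling pass is needed.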

  5. Insect Wing Displacement Measurement Using Digital Holography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguayo, Daniel D.; Mendoza Santoyo, Fernando; Torre I, Manuel H. de la

    2008-04-15

    Insects in flight have been studied with optical nondestructive techniques with the purpose of using meaningful results in aerodynamics. With the availability of high-resolution and large-dynamic-range CCD sensors, the interferometric digital holographic technique was used to measure the surface displacement of in-flight insect wings, such as those of butterflies. The wings were illuminated with a continuous-wave Verdi laser at 532 nm and observed with a CCD Pixelfly camera that acquires images at a rate of 11.5 frames per second at a resolution of 1392 x 1024 pixels and 12-bit dynamic range. At this frame rate, digital holograms of the wings were captured and processed in the usual manner; namely, each individual hologram is Fourier processed in order to find the amplitude and phase corresponding to the digital hologram. The wing displacement is obtained when subtraction between two digital holograms is performed for two different wing positions, a procedure applied to all consecutive frames recorded. The result of the subtraction is a wrapped phase fringe pattern directly related to the wing displacement. The experimental data for different butterfly flying conditions and exposure times are shown as wire mesh plots in a movie of the wing displacement.
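
    The hologram-subtraction step reduces to a one-liner on the complex fields recovered by the Fourier processing. The displacement scaling shown assumes near-normal illumination and observation (a common digital-holography geometry, stated here as an assumption, not taken from the abstract):

```python
import numpy as np

def wrapped_phase_difference(field1, field2):
    # field1, field2: complex object fields recovered from two
    # Fourier-processed holograms. The angle of the conjugate product is
    # the phase change wrapped to (-pi, pi] -- the fringe pattern tied to
    # the surface displacement between the two wing positions.
    return np.angle(field2 * np.conj(field1))

def out_of_plane_displacement(dphi, wavelength=532e-9):
    # Assumed geometry: near-normal illumination and observation, so a
    # displacement d produces a phase change of 4*pi*d / wavelength.
    return dphi * wavelength / (4.0 * np.pi)
```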

  6. Jitter Correction

    NASA Technical Reports Server (NTRS)

    Waegell, Mordecai J.; Palacios, David M.

    2011-01-01

    Jitter_Correct.m is a MATLAB function that automatically measures and corrects inter-frame jitter in an image sequence to a user-specified precision. In addition, the algorithm dynamically adjusts the image sample size to increase the accuracy of the measurement. The Jitter_Correct.m function takes an image sequence with unknown frame-to-frame jitter and computes the translations of each frame (column and row, in pixels) relative to a chosen reference frame with sub-pixel accuracy. The translations are measured using a cross-correlation Fourier transform method in which the relative phase of the two transformed images is fit to a plane. The measured translations are then used to correct the inter-frame jitter of the image sequence. The function also dynamically expands the image sample size over which the cross-correlation is measured to increase the accuracy of the measurement. This increases the robustness of the measurement to variable magnitudes of inter-frame jitter.
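
    The integer-pixel part of such a measurement can be sketched with phase correlation (cross-correlation via FFT with a normalized spectrum). This is a Python sketch of the underlying idea, not the MATLAB source, and the sub-pixel plane fit the description mentions is omitted:

```python
import numpy as np

def measure_shift(ref, img):
    # Phase correlation: the normalized cross-power spectrum transforms
    # back to a delta function whose position encodes the translation.
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    dims = np.array(corr.shape)
    peak[peak > dims // 2] -= dims[peak > dims // 2]  # wrap to signed shifts
    return -peak          # shift of img relative to ref, as (rows, cols)

rng = np.random.default_rng(1)
ref = rng.normal(size=(64, 64))
jittered = np.roll(ref, (5, -3), axis=(0, 1))         # simulated jitter
print(measure_shift(ref, jittered))                   # -> [ 5 -3]
```

    Each frame is then re-registered by applying the negated shift (e.g., with np.roll for integer corrections, or resampling for sub-pixel ones).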

  7. LBNL DSD Whole Frog Project

    Science.gov Websites

    500 x 1Byte x 136 images. So each 500 bytes from this dataset represents one scan line of the slice image. For example, using PBM: Get frame one: rawtopgm 256 256 < tomato.data > frame1 Get frames one to four into a single image: rawtopgm 256 1024 < tomato.data >frame1-4 Get frame two (skip

  8. Ultrahigh-frame CCD imagers

    NASA Astrophysics Data System (ADS)

    Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.

    2004-02-01

    This paper describes the architecture, process technology, and performance of a family of high-burst-rate CCDs. These imagers employ high-speed, low-lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research, and wind-tunnel-based gas dynamics research.

  9. Validation of an improved abnormality insertion method for medical image perception investigations

    NASA Astrophysics Data System (ADS)

    Madsen, Mark T.; Durst, Gregory R.; Caldwell, Robert T.; Schartz, Kevin M.; Thompson, Brad H.; Berbaum, Kevin S.

    2009-02-01

    The ability to insert abnormalities in clinical tomographic images makes image perception studies with medical images practical. We describe a new insertion technique and its experimental validation that uses complementary image masks to select an abnormality from a library and place it at a desired location. The method was validated using a 4-alternative forced-choice experiment. For each case, four quadrants were simultaneously displayed consisting of 5 consecutive frames of a chest CT with a pulmonary nodule. One quadrant was unaltered, while the other 3 had the nodule from the unaltered quadrant artificially inserted. 26 different sets were generated and repeated with order scrambling for a total of 52 cases. The cases were viewed by radiology staff and residents who ranked each quadrant by realistic appearance. On average, the observers were able to correctly identify the unaltered quadrant in 42% of cases, and identify the unaltered quadrant both times it appeared in 25% of cases. Consensus, defined by a majority of readers, correctly identified the unaltered quadrant in only 29% of 52 cases. For repeats, the consensus observer successfully identified the unaltered quadrant only once. We conclude that the insertion method can be used to reliably place abnormalities in perception experiments.
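
    The complementary-mask idea can be sketched as alpha blending: within the patch window, a mask M selects nodule pixels from the library image while 1 − M retains the original background. The arrays and function below are hypothetical; the validated method also handles boundary appearance, which this sketch ignores:

```python
import numpy as np

def insert_abnormality(background, nodule_patch, mask, row, col):
    # Complementary-mask blend: inside the patch window, `mask` selects
    # nodule pixels while (1 - mask) keeps the original background.
    out = background.copy()
    h, w = mask.shape
    window = out[row:row + h, col:col + w]
    out[row:row + h, col:col + w] = (1.0 - mask) * window + mask * nodule_patch
    return out
```

    A soft (fractional) mask tapers the nodule edge into the background, which is what makes inserted and native lesions hard to tell apart in forced-choice experiments like the one above.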

  10. High Contrast Ultrafast Imaging of the Human Heart

    PubMed Central

    Papadacci, Clement; Pernot, Mathieu; Couade, Mathieu; Fink, Mathias; Tanter, Mickael

    2014-01-01

    Non-invasive ultrafast imaging for human cardiac applications is a major challenge for imaging intrinsic waves such as electromechanical waves or remotely induced shear waves in elastography imaging techniques. In this paper we propose to perform ultrafast imaging of the heart with an adapted sector size by using diverging waves emitted from a classical transthoracic cardiac phased-array probe. As in ultrafast imaging with plane-wave coherent compounding, diverging waves can be summed coherently to obtain high-quality images of the entire heart at a high frame rate in a full field of view. To image shear wave propagation at high SNR, the field of view can be adapted by changing the angular aperture of the transmitted wave. Backscattered echoes from successive circular wave acquisitions are coherently summed at every location in the image to improve the image quality while maintaining very high frame rates. The transmitted diverging waves, angular apertures, and subaperture sizes are tested in simulation, and ultrafast coherent compounding is implemented on a commercial scanner. The improvement of the imaging quality is quantified in a phantom and in vivo on the human heart. Imaging shear wave propagation at 2500 frames/s using 5 diverging waves provides a strong increase of the signal-to-noise ratio of the tissue velocity estimates while maintaining a high frame rate. Finally, ultrafast imaging with 1 to 5 diverging waves is used to image the human heart at a frame rate of 900 frames/s over an entire cardiac cycle. Thanks to spatial coherent compounding, a strong improvement of imaging quality is obtained with a small number of transmitted diverging waves and a high frame rate, which allows imaging the propagation of electromechanical and shear waves with good image quality. PMID:24474135
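
    The SNR benefit of coherent compounding can be verified numerically: signal phasors from successive diverging-wave transmits add in amplitude while uncorrelated noise adds in power, giving roughly a sqrt(N) gain. This is a synthetic illustration, not the scanner implementation (which sums beamformed echoes per pixel after delay correction):

```python
import numpy as np

rng = np.random.default_rng(2)
n_transmits, n_pixels = 5, 100_000

signal = 1.0 + 0.0j                                   # coherent echo per pixel
noise = (rng.normal(size=(n_transmits, n_pixels))
         + 1j * rng.normal(size=(n_transmits, n_pixels))) / np.sqrt(2.0)
acquisitions = signal + 0.5 * noise                   # one row per transmit

single = acquisitions[0]                              # one diverging wave
compounded = acquisitions.mean(axis=0)                # coherent sum of five

snr_single = np.abs(signal) / (single - signal).std()
snr_compounded = np.abs(signal) / (compounded - signal).std()
print(snr_compounded / snr_single)                    # ~ sqrt(5) ~ 2.24
```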

  11. SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yip, S; Rottmann, J; Berbeco, R

    2014-06-01

    Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.

  12. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.

  13. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT.

    PubMed

    Chen, Guang-Hong; Li, Yinsheng

    2015-08-01

    In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial-temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial-temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial-temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm. 
    Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. In numerical simulations, the 240° short-scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON enables four high temporal fidelity images without limited-view artifacts. The average rRMSE is 16%, and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracies. For in vivo contrast enhanced cone beam CT data acquired over a short-scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in the three SMART-RECON reconstructed image volumes, without limited-view artifacts. In contrast, for the same angular sectors, PICCS cannot reconstruct images without limited-view artifacts and with clear contrast differences in the three reconstructed image volumes. In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, which corresponds to approximately 60° angular subsectors.
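
    In iterative solvers, the nuclear-norm regularizer at the heart of SMART-RECON is handled by singular value thresholding of the spatial-temporal matrix (each column one time frame). Below is a minimal sketch of that proximal step only, not the full reconstruction with its data-fidelity updates:

```python
import numpy as np

def singular_value_threshold(M, tau):
    # Proximal operator of tau * ||M||_* (nuclear norm): soft-threshold the
    # singular values, pushing the spatial-temporal matrix toward low rank.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

    In the full algorithm this step alternates with updates enforcing consistency between each time frame and its angular subsector of projection data.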

  14. Image quality assessment metric for frame accumulated image

    NASA Astrophysics Data System (ADS)

    Yu, Jianping; Li, Gang; Wang, Shaohui; Lin, Ling

    2018-01-01

    Medical image quality determines the accuracy of diagnosis, and gray-scale resolution is an important parameter for measuring image quality. However, current objective metrics are not well suited to assessing medical images obtained by frame accumulation technology: they pay little attention to gray-scale resolution, are based mainly on spatial resolution, and are limited to the 256 gray levels of existing display devices. This paper therefore proposes a metric, the "mean signal-to-noise ratio" (MSNR), based on signal-to-noise considerations, to evaluate frame-accumulated medical image quality more reasonably. We demonstrate its potential application through a series of images acquired under a constant illumination signal. Here, the mean image of a sufficiently large number of images was regarded as the reference image. Several groups of images with different numbers of accumulated frames were formed and their MSNR calculated. The results of the experiment show that, compared with other quality assessment methods, the metric is simpler, more effective, and more suitable for assessing frame-accumulated images that surpass the gray scale and precision of the original image.
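
    The abstract does not spell out the MSNR formula, so the sketch below is one plausible reading of the setup it describes, stated as an assumption: accumulate frames, treat the long-run mean image as the noise-free reference, and take mean signal over residual noise. Either way, the qualitative behavior (MSNR grows roughly as the square root of the number of accumulated frames) follows:

```python
import numpy as np

def msnr(frames, reference):
    # Assumed form of the metric (see the paper for the exact definition):
    # accumulate the frames, then ratio of mean signal to residual noise
    # relative to the long-run mean reference image.
    accumulated = frames.mean(axis=0)
    noise = accumulated - reference
    return reference.mean() / noise.std()

rng = np.random.default_rng(3)
reference = np.full((64, 64), 100.0)       # constant illumination signal
stack = reference + rng.normal(0.0, 5.0, size=(16, 64, 64))

m4, m16 = msnr(stack[:4], reference), msnr(stack, reference)
print(m4, m16)    # accumulating 4x more frames roughly doubles MSNR
```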

  15. Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions.

    PubMed

    Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao

    2015-09-01

    The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the features of the attitude transforms of adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.

  16. Improved frame-based estimation of head motion in PET brain imaging.

    PubMed

    Mukherjee, J M; Lindsay, C; Mukherjee, A; Olivier, P; Shao, L; King, M A; Licho, R

    2016-05-01

    Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; hence the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple-frame acquisition method. The list mode data for PET acquisition are uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different lengths, the authors summed 5-s frames accordingly to produce 10- and 60-s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s, with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type.

  17. Improved quality of intrafraction kilovoltage images by triggered readout of unexposed frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poulsen, Per Rugaard, E-mail: per.poulsen@rm.dk; Jonassen, Johnny; Jensen, Carsten

    2015-11-15

    Purpose: The gantry-mounted kilovoltage (kV) imager of modern linear accelerators can be used for real-time tumor localization during radiation treatment delivery. However, the kV image quality often suffers from cross-scatter from the megavoltage (MV) treatment beam. This study investigates readout of unexposed kV frames as a means to improve the kV image quality, in a series of experiments and a theoretical model of the observed image quality improvements. Methods: A series of fluoroscopic images were acquired of a solid water phantom with an embedded gold marker and an air cavity, with and without simultaneous irradiation of the phantom with a 6 MV beam delivered perpendicular to the kV beam at 300 and 600 monitor units per minute (MU/min). An in-house built device triggered readout of zero, one, or multiple unexposed frames between the kV exposures. The unexposed frames contained part of the MV scatter, consequently reducing the amount of MV scatter accumulated in the exposed frames. The image quality with and without unexposed frame readout was quantified as the contrast-to-noise ratio (CNR) of the gold marker and air cavity for a range of imaging frequencies from 1 to 15 Hz. To gain more insight into the observed CNR changes, the image lag of the kV imager was measured and used as input in a simple model that describes the CNR with unexposed frame readout in terms of the contrast, kV noise, and MV noise measured without readout of unexposed frames. Results: Without readout of unexposed kV frames, the quality of intratreatment kV images decreased dramatically with reduced kV frequencies due to MV scatter. The gold marker was only visible for imaging frequencies ≥3 Hz at 300 MU/min and ≥5 Hz at 600 MU/min. Visibility of the air cavity required even higher imaging frequencies. Readout of multiple unexposed frames ensured visibility of both structures at all imaging frequencies and a CNR that was independent of the kV frame rate. The image lag was 12.2%, 2.2%, and 0.9% in the first, second, and third frame after an exposure. The CNR model predicted the CNR with triggered image readout with a mean absolute error of 2.0% for the gold marker. Conclusions: A device that triggers readout of unexposed frames during kV fluoroscopy was built and shown to greatly improve the quality of intratreatment kV images. A simple theoretical model successfully described the CNR improvements with the device.
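
    The contrast-to-noise ratio used throughout the study can be computed directly from region-of-interest statistics. The masks and synthetic image below are hypothetical stand-ins for the gold-marker and background regions:

```python
import numpy as np

def cnr(image, roi_mask, bg_mask):
    # Contrast-to-noise ratio: structure-to-background contrast divided by
    # the background noise standard deviation.
    contrast = abs(image[roi_mask].mean() - image[bg_mask].mean())
    return contrast / image[bg_mask].std()

rng = np.random.default_rng(4)
image = rng.normal(0.0, 1.0, size=(32, 32))   # unit-variance background
roi = np.zeros((32, 32), dtype=bool)
roi[12:20, 12:20] = True
image[roi] += 5.0                             # marker 5 noise-sigmas bright
print(cnr(image, roi, ~roi))                  # roughly 5
```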

  18. Optimization of sparse synthetic transmit aperture imaging with coded excitation and frequency division.

    PubMed

    Behar, Vera; Adam, Dan

    2005-12-01

    An effective aperture approach is used for optimization of a sparse synthetic transmit aperture (STA) imaging system with coded excitation and frequency division. A new two-stage algorithm is proposed for optimization of both the positions of the transmit elements and the weights of the receive elements. In order to increase the signal-to-noise ratio in a synthetic aperture system, temporal encoding of the excitation signals is employed. When comparing the excitation by linear frequency modulation (LFM) signals and phase shift key modulation (PSKM) signals, the analysis shows that chirps are better for excitation, since at the output of a compression filter the sidelobes generated are much smaller than those produced by the binary PSKM signals. Here, an implementation of fast STA imaging is studied by spatial encoding with frequency division of the LFM signals. The proposed system employs a 64-element array with only four active elements used during transmit. The two-dimensional point spread function (PSF) produced by such a sparse STA system is compared to the PSF produced by an equivalent phased array system, using the Field II simulation program. The analysis demonstrates the superiority of the new sparse STA imaging system while using coded excitation and frequency division. Compared to a conventional phased array imaging system, this system acquires images of equivalent quality 60 times faster, when the transmit elements are fired in pairs consecutively and the power level used during transmit is very low. The fastest acquisition time is achieved when all transmit elements are fired simultaneously, which improves detectability, but at the cost of a slight degradation of the axial resolution. In real-time implementation, however, it must be borne in mind that the frame rate of a STA imaging system depends not only on the acquisition time of the data but also on the processing time needed for image reconstruction.
    Compared to phased array imaging, a significant increase in the frame rate of a STA imaging system is possible only if an equally time-efficient algorithm is used for image reconstruction.
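
The pulse-compression behaviour of an LFM chirp described above can be illustrated with a matched filter; the sampling rate, bandwidth, and duration below are arbitrary choices for the sketch, not the paper's parameters.

```python
import numpy as np

fs = 40e6                       # sampling rate [Hz] (assumed)
T = 10e-6                       # chirp duration [s] (assumed)
f0, f1 = 2e6, 6e6               # start/stop frequencies [Hz] (assumed)

t = np.arange(0.0, T, 1.0 / fs)
k = (f1 - f0) / T               # chirp rate
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))

# Matched filtering (pulse compression): correlate with the time-reversed chirp
compressed = np.convolve(chirp, chirp[::-1], mode="full")
compressed /= np.abs(compressed).max()
peak = int(np.abs(compressed).argmax())
# Sidelobe level well away from the main lobe
far_sidelobe = float(np.abs(compressed[:peak - 50]).max())
```

The compressed output concentrates the chirp's energy into a narrow main lobe with low distant sidelobes; binary phase codes typically leave higher sidelobes after compression, which is the paper's argument for preferring LFM excitation.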

  19. JASMINE -- Japan Astrometry Satellite Mission for INfrared Exploration: Data Analysis and Accuracy Assessment with a Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.

    2009-09-01

    For the purpose of determining the celestial coordinates of stellar positions, consecutive observational images are laid overlapping each other, using stars that belong to multiple plates as clues. In the analysis, one has to estimate not only the coordinates of individual plates, but also the possible expansion and distortion of the frame. This problem reduces to a least-squares fit that can in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate the errors associated with this iterative method and suggest a design of overlapping plates that minimizes the error.
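
The idea of replacing one huge least-squares inversion with a sequential Kalman-style update can be sketched in one dimension; the offset, noise level, and diffuse prior below are invented for illustration and are far simpler than the plate-overlap problem.

```python
import numpy as np

rng = np.random.default_rng(1)
true_offset = 3.0                                # "plate" parameter to recover
z = true_offset + rng.normal(0.0, 0.5, 200)      # noisy overlap measurements

x, P = 0.0, 1e6        # state estimate and its variance (diffuse prior)
R = 0.25               # measurement noise variance (0.5 squared)
for zi in z:
    K = P / (P + R)    # Kalman gain for a direct observation of x
    x = x + K * (zi - x)
    P = (1.0 - K) * P

batch = z.mean()       # the batch least-squares solution, for comparison
```

With a diffuse prior, the sequential estimate coincides with the batch least-squares answer, which is what makes the iterative scheme a practical substitute for the full matrix inversion.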

  20. Frame Rate Considerations for Real-Time Abdominal Acoustic Radiation Force Impulse Imaging

    PubMed Central

    Fahey, Brian J.; Palmeri, Mark L.; Trahey, Gregg E.

    2008-01-01

    With the advent of real-time Acoustic Radiation Force Impulse (ARFI) imaging, elevated frame rates are both desirable and relevant from a clinical perspective. However, fundamental limitations on frame rates are imposed by thermal safety concerns related to incident radiation force pulses. Abdominal ARFI imaging utilizes a curvilinear scanning geometry that results in markedly different tissue heating patterns than those previously studied for linear arrays or mechanically-translated concave transducers. Finite Element Method (FEM) models were used to simulate these tissue heating patterns and to analyze the impact of tissue heating on frame rates available for abdominal ARFI imaging. A perfusion model was implemented to account for cooling effects due to blood flow and frame rate limitations were evaluated in the presence of normal, reduced and negligible tissue perfusions. Conventional ARFI acquisition techniques were also compared to ARFI imaging with parallel receive tracking in terms of thermal efficiency. Additionally, thermocouple measurements of transducer face temperature increases were acquired to assess the frame rate limitations imposed by cumulative heating of the imaging array. Frame rates sufficient for many abdominal imaging applications were found to be safely achievable utilizing available ARFI imaging techniques. PMID:17521042

  1. Tissue velocity imaging of coronary artery by rotating-type intravascular ultrasound.

    PubMed

    Saijo, Yoshifumi; Tanaka, Akira; Owada, Naoki; Akino, Yoshihisa; Nitta, Shinichi

    2004-04-01

    Intravascular ultrasound (IVUS) provides not only the dimensions of the coronary artery but also information on tissue components. In the catheterization laboratory, soft and hard plaques are classified by visual inspection of echo intensity. So-called soft plaque contains a lipid core or thrombus and is believed to be more vulnerable than hard plaque. However, it is not simple to analyze the echo signals quantitatively. When we look at a reflection signal, the intensity is affected by the distance to the object, the medium between the transducer and the object, and fluctuations caused by rotation of the IVUS probe. The time of flight is also affected by the sound speed of the medium and the Doppler shift caused by tissue motion, but these can usually be neglected. Thus, analysis of the RF signal in the time domain can be more quantitative than analysis of its intensity. In the present study, a novel imaging technique called "intravascular tissue velocity imaging" was developed for detecting vulnerable plaque. The radio-frequency (RF) signal from a clinically used IVUS apparatus was digitized at 500 MSa/s and stored in a workstation. First, non-uniform rotation was corrected by maximizing the correlation coefficient of the circumferential RF signal distribution in two consecutive frames. Then, the correlation and displacement were calculated by analyzing the radial difference of the RF signal. Tissue velocity was determined from the displacement and the frame rate. The correlation images of normal and atherosclerotic coronary arteries clearly showed the internal and external borders of the arterial wall. Soft plaque with a low-echo area in the intima showed high velocity, while calcified lesions showed very low tissue velocity. This technique provides important information on the tissue characteristics of the coronary artery.
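
The core velocity computation described above, displacement from the cross-correlation of RF signals in two consecutive frames scaled by the frame rate, can be sketched in one dimension; the RF line, sample spacing, and frame rate are all synthetic assumptions, not the apparatus parameters.

```python
import numpy as np

def radial_shift(rf_a, rf_b):
    """Displacement (in samples) between two RF lines from consecutive
    frames, taken as the lag of the cross-correlation peak."""
    a = rf_a - rf_a.mean()
    b = rf_b - rf_b.mean()
    xc = np.correlate(b, a, mode="full")
    return int(xc.argmax()) - (len(a) - 1)

rng = np.random.default_rng(2)
line = rng.normal(0.0, 1.0, 512)     # synthetic RF A-line
moved = np.roll(line, 5)             # tissue displaced 5 samples next frame

sample_um = 10.0                     # microns per RF sample (assumed)
frame_rate = 30.0                    # frames per second (assumed)
velocity_um_s = radial_shift(line, moved) * sample_um * frame_rate
```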

  2. Spatiotemporal alignment of in utero BOLD-MRI series.

    PubMed

    Turk, Esra Abaci; Luo, Jie; Gagoski, Borjan; Pascau, Javier; Bibbo, Carolina; Robinson, Julian N; Grant, P Ellen; Adalsteinsson, Elfar; Golland, Polina; Malpica, Norberto

    2017-08-01

    To present a method for spatiotemporal alignment of in-utero magnetic resonance imaging (MRI) time series acquired during maternal hyperoxia for enabling improved quantitative tracking of blood oxygen level-dependent (BOLD) signal changes that characterize oxygen transport through the placenta to fetal organs. The proposed pipeline for spatiotemporal alignment of images acquired with a single-shot gradient echo echo-planar imaging sequence includes 1) signal nonuniformity correction, 2) intravolume motion correction based on nonrigid registration, 3) correction of motion and nonrigid deformations across volumes, and 4) detection of the outlier volumes to be discarded from subsequent analysis. BOLD MRI time series collected from 10 pregnant women during 3T scans were analyzed using this pipeline. To assess pipeline performance, signal fluctuations between consecutive timepoints were examined. In addition, volume overlap and distance between manual region of interest (ROI) delineations in a subset of frames and the delineations obtained through propagation of the ROIs from the reference frame were used to quantify alignment accuracy. A previously demonstrated rigid registration approach was used for comparison. The proposed pipeline improved anatomical alignment of placenta and fetal organs over the state-of-the-art rigid motion correction methods. In particular, unexpected temporal signal fluctuations during the first normoxia period were significantly decreased (P < 0.01) and volume overlap and distance between region boundaries measures were significantly improved (P < 0.01). The proposed approach to align MRI time series enables more accurate quantitative studies of placental function by improving spatiotemporal alignment across placenta and fetal organs. Level of Evidence: 1. Technical Efficacy: Stage 1. J. Magn. Reson. Imaging 2017;46:403-412. © 2017 International Society for Magnetic Resonance in Medicine.

  3. Wide-Field Megahertz OCT Imaging of Patients with Diabetic Retinopathy

    PubMed Central

    Reznicek, Lukas; Kolb, Jan P.; Klein, Thomas; Mohler, Kathrin J.; Huber, Robert; Kernt, Marcus; Märtz, Josef; Neubauer, Aljoscha S.

    2015-01-01

    Purpose. To evaluate the feasibility of wide-field Megahertz (MHz) OCT imaging in patients with diabetic retinopathy. Methods. A consecutive series of 15 eyes of 15 patients with diagnosed diabetic retinopathy were included. All patients underwent Megahertz OCT imaging, a close clinical examination, slit lamp biomicroscopy, and funduscopic evaluation. To acquire densely sampled, wide-field volumetric datasets, an ophthalmic 1050 nm OCT prototype system based on a Fourier-domain mode-locked (FDML) laser source with 1.68 MHz A-scan rate was employed. Results. We were able to obtain OCT volume scans from all included 15 patients. Acquisition time was 1.8 seconds. Obtained volume datasets consisted of 2088 × 1044 A-scans over a 60° field of view. Thus, reconstructed en face images had a resolution of 34.8 pixels per degree in the x-axis and 17.4 pixels per degree in the y-axis. Due to the densely sampled OCT volume dataset, postprocessed customized cross-sectional B-frames through pathologic changes such as an individual microaneurysm or a retinal neovascularization could be imaged. Conclusions. Wide-field Megahertz OCT is feasible to successfully image patients with diabetic retinopathy at high scanning rates and a wide angle of view, providing information in all three axes. The Megahertz OCT is a useful tool to screen diabetic patients for diabetic retinopathy. PMID:26273665

  4. Wide-Field Megahertz OCT Imaging of Patients with Diabetic Retinopathy.

    PubMed

    Reznicek, Lukas; Kolb, Jan P; Klein, Thomas; Mohler, Kathrin J; Wieser, Wolfgang; Huber, Robert; Kernt, Marcus; Märtz, Josef; Neubauer, Aljoscha S

    2015-01-01

    To evaluate the feasibility of wide-field Megahertz (MHz) OCT imaging in patients with diabetic retinopathy. A consecutive series of 15 eyes of 15 patients with diagnosed diabetic retinopathy were included. All patients underwent Megahertz OCT imaging, a close clinical examination, slit lamp biomicroscopy, and funduscopic evaluation. To acquire densely sampled, wide-field volumetric datasets, an ophthalmic 1050 nm OCT prototype system based on a Fourier-domain mode-locked (FDML) laser source with 1.68 MHz A-scan rate was employed. We were able to obtain OCT volume scans from all included 15 patients. Acquisition time was 1.8 seconds. Obtained volume datasets consisted of 2088 × 1044 A-scans over a 60° field of view. Thus, reconstructed en face images had a resolution of 34.8 pixels per degree in the x-axis and 17.4 pixels per degree in the y-axis. Due to the densely sampled OCT volume dataset, postprocessed customized cross-sectional B-frames through pathologic changes such as an individual microaneurysm or a retinal neovascularization could be imaged. Wide-field Megahertz OCT is feasible to successfully image patients with diabetic retinopathy at high scanning rates and a wide angle of view, providing information in all three axes. The Megahertz OCT is a useful tool to screen diabetic patients for diabetic retinopathy.

  5. Node synchronization schemes for the Big Viterbi Decoder

    NASA Technical Reports Server (NTRS)

    Cheung, K.-M.; Swanson, L.; Arnold, S.

    1992-01-01

    The Big Viterbi Decoder (BVD), currently under development for the DSN, includes three separate algorithms to acquire and maintain node and frame synchronization. The first measures the number of decoded bits between two consecutive renormalization operations (renorm rate), the second detects the presence of the frame marker in the decoded bit stream (bit correlation), while the third searches for an encoded version of the frame marker in the encoded input stream (symbol correlation). A detailed account of the operation of the three methods is given, as well as a performance comparison.
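
The bit-correlation scheme, searching the decoded stream for the frame marker, can be sketched as a sliding agreement count; the marker length and offset below are invented for the sketch, not BVD parameters.

```python
import numpy as np

def find_marker(bits, marker):
    """Slide the marker across the decoded bit stream and return the
    offset with the highest bit-agreement count (bit correlation)."""
    n, m = len(bits), len(marker)
    scores = [int(np.sum(bits[i:i + m] == marker)) for i in range(n - m + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(3)
marker = rng.integers(0, 2, 32)      # 32-bit frame marker (made up)
stream = rng.integers(0, 2, 1000)    # decoded bit stream
stream[700:732] = marker             # marker embedded at offset 700
```

Symbol correlation works the same way, except the marker is first re-encoded and the search runs over the encoded input stream.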

  6. Adaptive optics image restoration based on frame selection and multi-frame blind deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Y.; Rao, C. H.; Wei, K.

    2008-10-01

    Adaptive optics can only partially compensate images blurred by atmospheric turbulence, owing to observing conditions and hardware restrictions. A post-processing method based on frame selection and multi-frame blind deconvolution is proposed to improve images partially corrected by adaptive optics. The appropriate frames, picked out by a frame-selection technique, are deconvolved; no a priori knowledge is required except a positivity constraint. The method has been applied to the restoration of images of celestial bodies observed with the 1.2 m telescope equipped with the 61-element adaptive optics system at Yunnan Observatory. The results show that the method can effectively improve images partially corrected by adaptive optics.
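
The frame-selection step ranks the partially corrected frames by a sharpness score before deconvolution; a gradient-energy metric, shown below as a sketch, is one common choice, and the paper's actual selection criterion may differ.

```python
import numpy as np

def sharpness(img):
    """Gradient-energy score used to rank frames; higher means less
    residual turbulence blur (one common selection metric)."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx**2 + gy**2))

def select_frames(frames, keep):
    """Indices of the `keep` sharpest frames, in ascending order."""
    scores = [sharpness(f) for f in frames]
    return sorted(np.argsort(scores)[::-1][:keep].tolist())

rng = np.random.default_rng(4)
sharp = rng.normal(0.0, 1.0, (32, 32))               # well-corrected frame
blurred = 0.5 * (sharp + np.roll(sharp, 1, axis=0))  # residual-blur frame
frames = [blurred, sharp, blurred]
```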

  7. Improved virtual cardiac phantom with variable diastolic filling rates and coronary artery velocities

    NASA Astrophysics Data System (ADS)

    Sturgeon, Gregory M.; Richards, Taylor W.; Samei, E.; Segars, W. P.

    2017-03-01

    To facilitate studies of measurement uncertainty in computed tomography angiography (CTA), we investigated the cardiac motion profile and resulting coronary artery motion utilizing innovative dynamic virtual and physical phantoms. The four-chamber cardiac finite element (FE) model developed in the Living Heart Project (LHP) served as the computational basis for our virtual cardiac phantom. This model provides deformation or strain information at high temporal and spatial resolution, exceeding that of speckle tracking echocardiography or tagged MRI. This model was extended by fitting its motion profile to left ventricular (LV) volume-time curves obtained from patient echocardiography data. By combining the dynamic patient variability from echo with the local strain information from the FE model, a series of virtual 4D cardiac phantoms were developed. Using the computational phantoms, we characterized the coronary motion and its effect on plaque imaging under a range of heart rates subject to variable diastolic function. The coronary artery motion was sampled at 248 spatial locations over 500 consecutive time frames. The coronary artery velocities were calculated as their average velocity during an acquisition window centered at each time frame, which minimized the discretization error. For the initial set of twelve patients, the diastatic coronary artery velocity ranged from 36.5 mm/s to 2.0 mm/s with a mean of 21.4 mm/s assuming an acquisition time of 75 ms. The developed phantoms have great potential in modeling cardiac imaging, providing a known truth and multiple realistic cardiac motion profiles to evaluate different image acquisition or reconstruction methods.

  8. Imaging two-dimensional mechanical waves of skeletal muscle contraction.

    PubMed

    Grönlund, Christer; Claesson, Kenji; Holtermann, Andreas

    2013-02-01

    Skeletal muscle contraction is related to rapid mechanical shortening and thickening. Recently, specialized ultrasound systems have been applied to demonstrate and quantify transient tissue velocities and one-dimensional (1-D) propagation of mechanical waves during muscle contraction. Such waves could potentially provide novel information on musculoskeletal characteristics, function and disorders. In this work, we demonstrate two-dimensional (2-D) mechanical wave imaging following the skeletal muscle contraction. B-mode image acquisition during multiple consecutive electrostimulations, speckle-tracking and a time-stamp sorting protocol were used to obtain 1.4 kHz frame rate 2-D tissue velocity imaging of the biceps brachii muscle contraction. The results present novel information on tissue velocity profiles and mechanical wave propagation. In particular, counter-propagating compressional and shear waves in the longitudinal direction were observed in the contracting tissue (speed 2.8-4.4 m/s) and a compressional wave in the transverse direction of the non-contracting muscle tissue (1.2-1.9 m/s). In conclusion, analysing transient 2-D tissue velocity allows simultaneous assessment of both active and passive muscle tissue properties. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
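
A propagation speed like the 2.8-4.4 m/s reported above can be recovered from peak-arrival times along the muscle with a simple time-of-flight fit; the positions and speed below are invented for illustration, not measured data.

```python
import numpy as np

# Hypothetical arrival times of the velocity-wave peak at points along
# the muscle; the least-squares slope of position vs. time is the speed.
positions_m = np.arange(0.0, 0.040, 0.002)   # 20 points over 40 mm
true_speed = 3.5                             # m/s, within the reported range
arrival_s = positions_m / true_speed         # noiseless arrival times

speed_est = float(np.polyfit(arrival_s, positions_m, 1)[0])  # slope [m/s]
```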

  9. Improved frame-based estimation of head motion in PET brain imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mukherjee, J. M., E-mail: joyeeta.mitra@umassmed.edu; Lindsay, C.; King, M. A.

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
    Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. Conclusions: The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type.

  10. Improved frame-based estimation of head motion in PET brain imaging

    PubMed Central

    Mukherjee, J. M.; Lindsay, C.; Mukherjee, A.; Olivier, P.; Shao, L.; King, M. A.; Licho, R.

    2016-01-01

    Purpose: Head motion during PET brain imaging can cause significant degradation of image quality. Several authors have proposed ways to compensate for PET brain motion to restore image quality and improve quantitation. Head restraints can reduce movement but are unreliable; thus the need for alternative strategies such as data-driven motion estimation or external motion tracking. Herein, the authors present a data-driven motion estimation method using a preprocessing technique that allows the usage of very short duration frames, thus reducing the intraframe motion problem commonly observed in the multiple frame acquisition method. Methods: The list mode data for PET acquisition is uniformly divided into 5-s frames and images are reconstructed without attenuation correction. Interframe motion is estimated using a 3D multiresolution registration algorithm and subsequently compensated for. For this study, the authors used 8 PET brain studies that used F-18 FDG as the tracer and contained minor or no initial motion. After reconstruction and prior to motion estimation, known motion was introduced to each frame to simulate head motion during a PET acquisition. To investigate the trade-off in motion estimation and compensation with respect to frames of different length, the authors summed 5-s frames accordingly to produce 10 and 60 s frames. Summed images generated from the motion-compensated reconstructed frames were then compared to the original PET image reconstruction without motion compensation. Results: The authors found that their method is able to compensate for both gradual and step-like motions using frame times as short as 5 s with a spatial accuracy of 0.2 mm on average. Complex volunteer motion involving all six degrees of freedom was estimated with lower accuracy (0.3 mm on average) than the other types investigated. Preprocessing of 5-s images was necessary for successful image registration.
Since their method utilizes nonattenuation corrected frames, it is not susceptible to motion introduced between CT and PET acquisitions. Conclusions: The authors have shown that they can estimate motion for frames with time intervals as short as 5 s using nonattenuation corrected reconstructed FDG PET brain images. Intraframe motion in 60-s frames causes degradation of accuracy to about 2 mm based on the motion type. PMID:27147355

  11. Adaptive Optics Image Restoration Based on Frame Selection and Multi-frame Blind Deconvolution

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Rao, Chang-hui; Wei, Kai

    Restricted by the observational conditions and the hardware, adaptive optics can only make a partial correction of optical images blurred by atmospheric turbulence. A postprocessing method based on frame selection and multi-frame blind deconvolution is proposed for the restoration of high-resolution adaptive optics images. By frame selection we mean that the degraded (blurred) images are first screened to choose those that participate in the iterative blind deconvolution calculation, which requires no a priori knowledge other than a positivity constraint. This method has been applied to the restoration of stellar images observed by the 61-element adaptive optics system installed on the Yunnan Observatory 1.2 m telescope. The experimental results indicate that this method can effectively compensate for the residual errors of the adaptive optics system in the image, and the restored image can reach diffraction-limited quality.

  12. 78 FR 16707 - Certain Digital Photo Frames and Image Display Devices and Components Thereof; Issuance of a...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-18

    ... Image Display Devices and Components Thereof; Issuance of a Limited Exclusion Order and Cease and Desist... within the United States after importation of certain digital photo frames and image display devices and...: (1) The unlicensed entry of digital photo frames and image display devices and components thereof...

  13. Frames as visual links between paintings and the museum environment: an analysis of statistical image properties

    PubMed Central

    Redies, Christoph; Groß, Franziska

    2013-01-01

    Frames provide a visual link between artworks and their surround. We asked how image properties change as an observer zooms out from viewing a painting alone, to viewing the painting with its frame and, finally, the framed painting in its museum environment (museum scene). To address this question, we determined three higher-order image properties that are based on histograms of oriented luminance gradients. First, complexity was measured as the sum of the strengths of all gradients in the image. Second, we determined the self-similarity of histograms of the orientated gradients at different levels of spatial analysis. Third, we analyzed how much gradient strength varied across orientations (anisotropy). Results were obtained for three art museums that exhibited paintings from three major periods of Western art. In all three museums, the mean complexity of the frames was higher than that of the paintings or the museum scenes. Frames thus provide a barrier of complexity between the paintings and their exterior. By contrast, self-similarity and anisotropy values of images of framed paintings were intermediate between the images of the paintings and the museum scenes, i.e., the frames provided a transition between the paintings and their surround. We also observed differences between the three museums that may reflect modified frame usage in different art periods. For example, frames in the museum for 20th century art tended to be smaller and less complex than in the two other museums that exhibit paintings from earlier art periods (13th–18th century and 19th century, respectively). Finally, we found that the three properties did not depend on the type of reproduction of the paintings (photographs in museums, scans from books or images from the Google Art Project). To the best of our knowledge, this study is the first to investigate the relation between frames and paintings by measuring physically defined, higher-order image properties. PMID:24265625
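
The first and third measures, complexity as summed gradient strength and anisotropy as the variation of gradient strength across orientations, can be sketched as follows; the orientation binning and the anisotropy normalization are simplifications for illustration, not the authors' exact definitions.

```python
import numpy as np

def gradient_stats(img):
    """Complexity = summed gradient magnitude; anisotropy = spread of
    gradient strength across orientation bins (simplified versions)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % np.pi           # orientation in [0, pi)
    bins = np.linspace(0.0, np.pi, 9)          # 8 orientation bins
    hist = np.histogram(ori, bins=bins, weights=mag)[0]
    complexity = float(mag.sum())
    anisotropy = float(hist.std() / (hist.mean() + 1e-12))
    return complexity, anisotropy

flat = np.zeros((64, 64))                            # featureless image
stripes = np.tile([0.0, 0.0, 1.0, 1.0], (64, 16))    # strong vertical edges

c_flat, a_flat = gradient_stats(flat)
c_str, a_str = gradient_stats(stripes)
```

A featureless image scores zero on both measures, while the striped image is both more complex and strongly anisotropic, since all its gradient energy lies in one orientation bin.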

  14. Low-complexity image processing for real-time detection of neonatal clonic seizures.

    PubMed

    Ntonfo, Guy Mathurin Kouamou; Ferrari, Gianluigi; Raheli, Riccardo; Pisani, Francesco

    2012-05-01

    In this paper, we consider a novel low-complexity real-time image-processing-based approach to the detection of neonatal clonic seizures. Our approach is based on the extraction, from a video of a newborn, of an average luminance signal representative of the body movements. Since clonic seizures are characterized by periodic movements of parts of the body (e.g., the limbs), by evaluating the periodicity of the extracted average luminance signal it is possible to detect the presence of a clonic seizure. The periodicity is investigated, through a hybrid autocorrelation-Yin estimation technique, on a per-window basis, where a time window is defined as a sequence of consecutive video frames. While processing is first carried out on a single window basis, we extend our approach to interlaced windows. The performance of the proposed detection algorithm is investigated, in terms of sensitivity and specificity, through receiver operating characteristic curves, considering video recordings of newborns affected by neonatal seizures.
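
Periodicity of the average luminance signal can be checked with a plain autocorrelation, a simplified stand-in for the hybrid autocorrelation-Yin estimator used in the paper; the frame rate and movement frequency below are invented.

```python
import numpy as np

def dominant_period(signal, fps):
    """Period (s) of a movement signal, taken from the first local
    maximum of its autocorrelation after the zero-lag peak."""
    x = signal - signal.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    lag = 1
    while lag + 1 < len(ac) and not (ac[lag] > ac[lag - 1] and ac[lag] >= ac[lag + 1]):
        lag += 1
    return lag / fps

fps = 25.0                                   # video frame rate (assumed)
t = np.arange(0.0, 8.0, 1.0 / fps)
luminance = np.sin(2 * np.pi * 2.5 * t)      # 2.5 Hz clonic-like movement
period = dominant_period(luminance, fps)
```

A strongly periodic window suggests clonic movement; aperiodic luminance fluctuations yield no pronounced autocorrelation peak and are rejected.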

  15. Human activities recognition by head movement using partial recurrent neural network

    NASA Astrophysics Data System (ADS)

    Tan, Henry C. C.; Jia, Kui; De Silva, Liyanage C.

    2003-06-01

    Traditionally, human activities recognition has been achieved mainly by the statistical pattern recognition methods or the Hidden Markov Model (HMM). In this paper, we propose a novel use of the connectionist approach for the recognition of ten simple human activities: walking, sitting down, getting up, squatting down and standing up, in both lateral and frontal views, in an office environment. By means of tracking the head movement of the subjects over consecutive frames from a database of different color image sequences, and incorporating the Elman model of the partial recurrent neural network (RNN) that learns the sequential patterns of relative change of the head location in the images, the proposed system is able to robustly classify all the ten activities performed by unseen subjects from both sexes, of different race and physique, with a recognition rate as high as 92.5%. This demonstrates the potential of employing partial RNN to recognize complex activities in the increasingly popular human-activities-based applications.

  16. Point spread function engineering for iris recognition system design.

    PubMed

    Ashok, Amit; Neifeld, Mark A

    2010-04-01

    Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 x 8 reduces the iris-recognition performance by nearly a factor of 4 (on CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask in conjunction with multiple subpixel shifted image measurements (frames) to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves a FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields a FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.

  17. [Improvement of Digital Capsule Endoscopy System and Image Interpolation].

    PubMed

    Zhao, Shaopeng; Yan, Guozheng; Liu, Gang; Kuang, Shuai

    2016-01-01

    Traditional capsule image collects and transmits analog image, with weak anti-interference ability, low frame rate, low resolution. This paper presents a new digital image capsule, which collects and transmits digital image, with frame rate up to 30 frames/sec and pixels resolution of 400 x 400. The image is compressed in the capsule, and is transmitted to the outside of the capsule for decompression and interpolation. A new type of interpolation algorithm is proposed, which is based on the relationship between the image planes, to obtain higher quality colour images. capsule endoscopy, digital image, SCCB protocol, image interpolation

  18. Energetic Neutral Atom (ENA) Movies and Other Cool Data from Cassini's Magnetosphere Imaging Instrument (MIMI)

    NASA Astrophysics Data System (ADS)

    Kusterer, M. B.; Mitchell, D. G.; Krimigis, S. M.; Vandegriff, J. D.

    2014-12-01

    Having been at Saturn for over a decade, the MIMI instrument on Cassini has created a rich dataset containing many details about Saturn's magnetosphere. In particular, the images of energetic neutral atoms (ENAs) taken by the Ion and Neutral Camera (INCA) offer a global perspective on Saturn's plasma environment. The MIMI team is now regularly making movies (in MP4 format) consisting of consecutive ENA images. The movies correct for spacecraft attitude changes by projecting the images (whose viewing angles can substantially vary from one image to the next) into a fixed inertial frame that makes it easy to view spatial features evolving in time. These movies are now being delivered to the PDS and are also available at the MIMI team web site. Several other higher order products are now also available, including 20-day energy-time spectrograms for the Charge-Energy-Mass Spectrometer (CHEMS) sensor, and daily energy-time spectrograms for the Low Energy Magnetospheric Measurements system (LEMMS) sensor. All spectrograms are available as plots or digital data in ASCII format. For all MIMI sensors, a Data User Guide is also available. This paper presents details and examples covering the specifics of MIMI higher order data products. URL: http://cassini-mimi.jhuapl.edu/

  19. 77 FR 74220 - Certain Digital Photo Frames and Image Display Devices and Components Thereof; Commission...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-13

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-807] Certain Digital Photo Frames and Image Display Devices and Components Thereof; Commission Determination Not To Review an Initial... importation, and the sale within the United States after importation of certain digital photo frames and image...

  20. Sequential detection of web defects

    DOEpatents

    Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.

    2001-01-01

    A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements is determined to be statistically identical, and a defective indication if the pair of elements is determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of the exemplar frame with corresponding elements of other frames on the web until either the acceptable or the defective indication occurs.
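    The three-way element comparison can be sketched as follows, assuming a simple z-test against the exemplar statistics; the thresholds and function name are illustrative, not taken from the patent.

```python
# Sketch of the three-way statistical comparison: an element is declared
# acceptable, defective, or indeterminate (deferred to other frames).
# Thresholds are hypothetical z-score bounds.

def compare_element(sample, exemplar_mean, exemplar_std,
                    accept_z=1.0, defect_z=3.0):
    """Return 'acceptable', 'defective', or 'indeterminate'."""
    z = abs(sample - exemplar_mean) / exemplar_std
    if z <= accept_z:
        return "acceptable"       # statistically identical
    if z >= defect_z:
        return "defective"        # statistically not identical
    return "indeterminate"        # defer: recurse over other frames

print(compare_element(101.0, 100.0, 2.0))  # acceptable (z = 0.5)
print(compare_element(110.0, 100.0, 2.0))  # defective  (z = 5.0)
print(compare_element(104.0, 100.0, 2.0))  # indeterminate (z = 2.0)
```

    The "indeterminate" branch is what triggers the recursive comparison against other frames on the web.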

  1. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.

  2. Frame average optimization of cine-mode EPID images used for routine clinical in vivo patient dose verification of VMAT deliveries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McCowan, P. M., E-mail: pmccowan@cancercare.mb.ca; McCurdy, B. M. C.; Medical Physics Department, CancerCare Manitoba, 675 McDermot Avenue, Winnipeg, Manitoba R3E 0V9

    Purpose: The in vivo 3D dose delivered to a patient during volumetric modulated arc therapy (VMAT) delivery can be calculated using electronic portal imaging device (EPID) images. These images must be acquired in cine-mode (i.e., “movie” mode) in order to capture the time-dependent delivery information. The angle subtended by each cine-mode EPID image during an arc can be changed via the frame averaging number selected within the image acquisition software. A large frame average number will decrease the EPID’s angular resolution and will result in a decrease in the accuracy of the dose information contained within each image. Alternatively, fewer EPID images acquired per delivery will decrease the overall 3D patient dose calculation time, which is appealing for large-scale clinical implementation. Therefore, the purpose of this study was to determine the optimal frame average value per EPID image, defined as the highest frame averaging that can be used without an appreciable loss in 3D dose reconstruction accuracy for VMAT treatments. Methods: Six different VMAT plans and six different SBRT-VMAT plans were delivered to an anthropomorphic phantom. Delivery was carried out on a Varian 2300ix model linear accelerator (Linac) equipped with an aS1000 EPID running at a frame acquisition rate of 7.5 Hz. An additional PC was set up at the Linac console area, equipped with specialized frame-grabber hardware and software packages allowing continuous acquisition of all EPID frames during delivery. Frames were averaged into “frame-averaged” EPID images using MATLAB. Each frame-averaged data set was used to calculate the in vivo dose to the patient and then compared to the single-EPID-frame in vivo dose calculation (the single-frame calculation represents the highest possible angular resolution per EPID image). A mean percentage dose difference for low-dose (<20% of prescription dose) and high-dose (>80% of prescription dose) regions was calculated for each frame-averaged scenario for each plan. The authors defined an acceptable loss of accuracy as no more than a ±1% mean dose difference in the high-dose region. Optimal frame average numbers were then determined as a function of the Linac’s average gantry speed and the dose per fraction. Results: The authors found that 9 and 11 frame averages were suitable for all VMAT and SBRT-VMAT treatments, respectively. This resulted in no more than a 1% change in any dose region’s mean percentage difference when compared to the single-frame reconstruction. The optimized number was dependent on the treatment’s dose per fraction and was determined to be as high as 14 for 12 Gy/fraction (fx), 15 for 8 Gy/fx, 11 for 6 Gy/fx, and 9 for 2 Gy/fx. Conclusions: The authors have determined an optimal EPID frame averaging number for multiple VMAT-type treatments. These are given as a function of the dose per fraction and average gantry speed. These optimized values are now used in the authors’ clinical, 3D, in vivo patient dosimetry program. This provides a reduction in calculation time while maintaining the authors’ required level of accuracy in the dose reconstruction.
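    The frame-averaging step described above (the study used MATLAB) can be sketched in a few lines; the grouping rule and data here are illustrative, not the authors' processing pipeline.

```python
# Sketch: group raw cine-mode EPID frames (acquired at 7.5 Hz) into blocks
# of n and average each block pixelwise. A trailing partial block is kept.

def average_frames(frames, n):
    """Average consecutive frames in groups of n; frames are flat pixel lists."""
    out = []
    for i in range(0, len(frames), n):
        group = frames[i:i + n]
        out.append([sum(px) / len(group) for px in zip(*group)])
    return out

raw = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]  # 4 tiny 2-pixel frames
print(average_frames(raw, 2))  # [[2.0, 3.0], [6.0, 7.0]]
```

    Larger n means fewer images and faster 3D dose calculation, at the cost of angular resolution per image, which is exactly the trade-off the study quantifies.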

  3. Dangerous gas detection based on infrared video

    NASA Astrophysics Data System (ADS)

    Ding, Kang; Hong, Hanyu; Huang, Likun

    2018-03-01

    As gas leak infrared imaging detection technology has the significant advantages of high efficiency and remote imaging detection, and in order to enhance observers' perception of detail and effectively improve the detection limit, we propose a new gas leak infrared image detection method that combines a background difference method with a multi-frame interval difference method. Compared to traditional frame-difference methods, the proposed multi-frame interval difference method can extract a more complete target image. By fusing the background difference image and the multi-frame interval difference image, we accumulate the information of the infrared target image of the gas leak in many aspects. The experiments demonstrate that the completeness of the gas leakage trace information is enhanced significantly, and real-time detection can be achieved.
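    The combination of the two difference masks can be sketched as follows. The paper does not state its fusion operator, so a pixelwise maximum is assumed here; the threshold and toy frames are illustrative.

```python
# Sketch: background difference + multi-frame interval difference, fused
# by pixelwise maximum (assumed fusion rule). Frames are 2x2 toy images.

def frame_diff(a, b, thresh=10):
    """Binary change mask between two frames."""
    return [[1 if abs(x - y) > thresh else 0 for x, y in zip(ra, rb)]
            for ra, rb in zip(a, b)]

def fuse(mask1, mask2):
    """Pixelwise OR (max) of two binary masks."""
    return [[max(x, y) for x, y in zip(r1, r2)] for r1, r2 in zip(mask1, mask2)]

background = [[100, 100], [100, 100]]
frame_k    = [[100, 150], [100, 100]]   # plume appears top-right
frame_k_m  = [[100, 150], [160, 100]]   # m frames later, plume spreads

bg_mask  = frame_diff(background, frame_k_m)   # background difference
int_mask = frame_diff(frame_k, frame_k_m)      # multi-frame interval difference
print(fuse(bg_mask, int_mask))  # [[0, 1], [1, 0]]
```

    The background mask catches the slow-moving plume the interval difference misses (the stationary top-right pixel), which is why fusing the two yields a more complete target.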

  4. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.

  5. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
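    The Poisson maximum-likelihood core of such restoration can be sketched with the classic Richardson-Lucy update, the standard iteration that maximizes the Poisson log-likelihood; the authors' multi-frame joint likelihood, frame selection, regularizer, and PSF model are not reproduced. A 1-D pure-Python illustration with a known PSF:

```python
# Richardson-Lucy deconvolution (standard Poisson ML iteration), 1-D,
# with replicate boundaries. Illustrative only; not the paper's algorithm.

def convolve(x, psf):
    k = len(psf) // 2
    return [sum(psf[j] * x[min(max(i + j - k, 0), len(x) - 1)]
                for j in range(len(psf))) for i in range(len(x))]

def richardson_lucy(observed, psf, iters=200):
    est = [1.0] * len(observed)          # flat nonnegative start
    psf_rev = psf[::-1]                  # adjoint uses the flipped PSF
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
        corr = convolve(ratio, psf_rev)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]
truth = [0.0, 0.0, 10.0, 0.0, 0.0]
observed = convolve(truth, psf)          # blurred point source [0, 2.5, 5, 2.5, 0]
restored = richardson_lucy(observed, psf)
print(round(max(restored), 1))           # peak sharpened back toward 10
```

    The multiplicative update keeps the estimate nonnegative and (for a normalized PSF) approximately flux-conserving, which is why it suits photon-limited AO data.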

  6. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  7. Comparison of Deep Brain Stimulation Lead Targeting Accuracy and Procedure Duration between 1.5- and 3-Tesla Interventional Magnetic Resonance Imaging Systems: An Initial 12-Month Experience.

    PubMed

    Southwell, Derek G; Narvid, Jared A; Martin, Alastair J; Qasim, Salman E; Starr, Philip A; Larson, Paul S

    2016-01-01

    Interventional magnetic resonance imaging (iMRI) allows deep brain stimulator lead placement under general anesthesia. While the accuracy of lead targeting has been described for iMRI systems utilizing 1.5-tesla magnets, a similar assessment of 3-tesla iMRI procedures has not been performed. To compare targeting accuracy, the number of lead targeting attempts, and surgical duration between procedures performed on 1.5- and 3-tesla iMRI systems. Radial targeting error, the number of targeting attempts, and procedure duration were compared between surgeries performed on 1.5- and 3-tesla iMRI systems (SmartFrame and ClearPoint systems). During the first year of operation of each system, 26 consecutive leads were implanted using the 1.5-tesla system, and 23 consecutive leads were implanted using the 3-tesla system. There was no significant difference in radial error (Mann-Whitney test, p = 0.26), number of lead placements that required multiple targeting attempts (Fisher's exact test, p = 0.59), or bilateral procedure durations between surgeries performed with the two systems (p = 0.15). Accurate DBS lead targeting can be achieved with iMRI systems utilizing either 1.5- or 3-tesla magnets. The use of a 3-tesla magnet, however, offers improved visualization of the target structures and allows comparable accuracy and efficiency of placement at the selected targets. © 2016 S. Karger AG, Basel.

  8. Split-screen display system and standardized methods for ultrasound image acquisition and multi-frame data processing

    NASA Technical Reports Server (NTRS)

    Selzer, Robert H. (Inventor); Hodis, Howard N. (Inventor)

    2011-01-01

    A standardized acquisition methodology assists operators to accurately replicate high resolution B-mode ultrasound images obtained over several spaced-apart examinations utilizing a split-screen display in which the arterial ultrasound image from an earlier examination is displayed on one side of the screen while a real-time "live" ultrasound image from a current examination is displayed next to the earlier image on the opposite side of the screen. By viewing both images, whether simultaneously or alternately, while manually adjusting the ultrasound transducer, an operator is able to bring into view the real-time image that best matches a selected image from the earlier ultrasound examination. Utilizing this methodology, dynamic material properties of arterial structures, such as IMT and diameter, are measured in a standard region over successive image frames. Each frame of the sequence has its echo edge boundaries automatically determined by using the immediately prior frame's true echo edge coordinates as initial boundary conditions. Computerized echo edge recognition and tracking over multiple successive image frames enhances measurement of arterial diameter and IMT and allows for improved vascular dimension measurements, including vascular stiffness and IMT determinations.

  9. High frame-rate en face optical coherence tomography system using KTN optical beam deflector

    NASA Astrophysics Data System (ADS)

    Ohmi, Masato; Shinya, Yusuke; Imai, Tadayuki; Toyoda, Seiji; Kobayashi, Junya; Sakamoto, Tadashi

    2017-02-01

    We developed a high frame-rate en face optical coherence tomography (OCT) system using a KTa1-xNbxO3 (KTN) optical beam deflector. In the imaging system, fast scanning was performed at 200 kHz by the KTN optical beam deflector, while slow scanning was performed at 800 Hz by a galvanometer mirror. As a preliminary experiment, we succeeded in obtaining en face OCT images of a human fingerprint at a frame rate of 800 fps. This is the highest frame rate obtained using time-domain (TD) en face OCT imaging. A 3D-OCT image of a sweat gland was also obtained with our imaging system.

  10. Telemetry Standards, Part 1

    DTIC Science & Technology

    2015-07-01

    Excerpt of telemetry attribute definitions: IMAGE FRAME RATE (R-x\IFR-n), PRE-TRIGGER FRAMES (R-x\PTG-n), TOTAL FRAMES (R-x\TOTF-n), EXPOSURE TIME (R-x\EXP-n), SENSOR ROTATION (R-x...). Image mode values: “0” (single frame), “1” (multi-frame), “2” (continuous); allowed when R\CDT is “IMGIN”. IMAGE FRAME RATE (R-x\IFR-n, R/R, Ch 10 Status: RO)... Return value: a partial IHAL <configuration> element containing only the new settings for the settings that the user wishes to modify.

  11. Development of ultrasound/endoscopy PACS (picture archiving and communication system) and investigation of compression method for cine images

    NASA Astrophysics Data System (ADS)

    Osada, Masakazu; Tsukui, Hideki

    2002-09-01

    Picture Archiving and Communication System (PACS) is a system which connects imaging modalities, image archives, and image workstations to reduce film-handling cost and improve hospital workflow. Handling diagnostic ultrasound and endoscopy images is challenging because they produce a large amount of data, such as motion (cine) images at 30 frames per second, 640 x 480 resolution, and 24-bit color, while requiring sufficient image quality for clinical review. We have developed a PACS which is able to manage ultrasound and endoscopy cine images at the above resolution and frame rate, and we investigate a suitable compression method and compression rate for clinical image review. Results show that clinicians require frame-by-frame forward and backward review of cine images, because they carefully look through motion images to find certain color patterns that may appear in a single frame. In order to satisfy this requirement, we chose motion JPEG, installed it, and confirmed that we could capture this specific pattern. As for the acceptable image compression rate, we performed a subjective evaluation. No subjects could tell the difference between original non-compressed images and 1:10 lossy compressed JPEG images. One subject could tell the difference between the original and 1:20 lossy compressed JPEG images, although the quality was still acceptable. Thus, ratios of 1:10 to 1:20 are acceptable to reduce data amount and cost while maintaining quality for clinical review.
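    The data-rate arithmetic behind the compression requirement is worth spelling out; the figures follow directly from the resolution, bit depth, and frame rate quoted above.

```python
# Uncompressed cine data rate for 640x480, 24-bit colour at 30 frames/s,
# and the effect of the 1:10 and 1:20 JPEG ratios found acceptable above.

raw_bits_per_sec = 640 * 480 * 24 * 30
print(raw_bits_per_sec / 1e6)        # ~221 Mbit/s uncompressed
print(raw_bits_per_sec / 10 / 1e6)   # ~22 Mbit/s at 1:10
print(raw_bits_per_sec / 20 / 1e6)   # ~11 Mbit/s at 1:20
```

    At roughly 221 Mbit/s raw, even modest archives fill quickly, which is why a 1:10 to 1:20 ratio matters for storage and network cost.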

  12. Modified Mean-Pyramid Coding Scheme

    NASA Technical Reports Server (NTRS)

    Cheung, Kar-Ming; Romer, Richard

    1996-01-01

    A modified mean-pyramid coding scheme requires transmission of slightly less data: the data-expansion factor is reduced from 1/3 to 1/12. Schemes of this kind provide progressive transmission of image data in a sequence of frames, such that a coarse version of the image is reconstructed after receipt of the first frame and increasingly refined versions are reconstructed after receipt of each subsequent frame.
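    The mean-pyramid idea can be sketched in one dimension: each coarser level holds the pairwise (in 2-D, 2x2-block) means of the level below, and transmission proceeds from coarsest to finest. This is the generic pyramid, not the modified scheme; in 2-D the extra levels sum to 1/4 + 1/16 + ... = 1/3 of the original data, the expansion factor the modified scheme reduces to 1/12.

```python
# 1-D mean pyramid: level k+1 holds the pairwise means of level k.
# The coarsest level is transmitted first; later frames refine it.

def mean_pyramid(signal):
    """Return levels from finest to coarsest (signal length a power of 2)."""
    levels = [signal]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(prev[i] + prev[i + 1]) / 2.0
                       for i in range(0, len(prev), 2)])
    return levels

levels = mean_pyramid([1.0, 3.0, 5.0, 7.0])
print(levels)  # [[1.0, 3.0, 5.0, 7.0], [2.0, 6.0], [4.0]]
```

    Sending `[4.0]` first gives the coarse image; each subsequent level refines the reconstruction until the full-resolution signal arrives.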

  13. Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy

    2014-09-01

    A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has recently been fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm²) stitched CMOS chip with a 1100×1100 pixel count and a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first-generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns-wide proton beam pulses and record radiographic movies of up to ~1000 frames, typically 3 to 30 minutes in duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.

  14. Speckle variance optical coherence tomography of blood flow in the beating mouse embryonic heart.

    PubMed

    Grishina, Olga A; Wang, Shang; Larina, Irina V

    2017-05-01

    Efficient separation of blood and cardiac wall in the beating embryonic heart is essential and critical for experiment-based computational modelling and analysis of early-stage cardiac biomechanics. Although speckle variance optical coherence tomography (SV-OCT), which relies on calculating the intensity variance over consecutively acquired frames, is a powerful approach for segmenting fluid flow from static tissue, application of this method in the beating embryonic heart remains challenging because moving structures generate an SV signal indistinguishable from the blood. Here, we demonstrate a modified four-dimensional SV-OCT approach that effectively separates the blood flow from the dynamic heart wall in the beating mouse embryonic heart. The method takes advantage of the periodic motion of the cardiac wall and is based on calculating the SV signal over the frames corresponding to the same phase of the heartbeat cycle. Through comparison with Doppler OCT imaging, we validate this speckle-based approach and show advantages in its insensitivity to flow direction and velocity, as well as its reduced influence from heart wall movement. This approach has potential in a variety of applications relying on visualization and segmentation of blood flow in periodically moving structures, such as mechanical simulation studies and finite element modelling. Picture: Four-dimensional speckle variance OCT imaging shows the blood flow inside the beating heart of an E8.5 mouse embryo. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
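    The phase-gating idea can be sketched as follows: the variance is computed over frames sampled at the same heartbeat phase, so periodically moving wall pixels look static while decorrelated blood speckle still produces a strong signal. The gating index, period, and toy data are illustrative.

```python
# Sketch of phase-gated speckle variance: variance over same-phase frames
# instead of consecutive frames. Data are illustrative intensity samples.

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def phase_gated_sv(frames, period):
    """frames: one per-pixel intensity list per time point; period in frames."""
    n_pix = len(frames[0])
    out = []
    for phase in range(period):
        gated = frames[phase::period]           # frames at the same cardiac phase
        out.append([variance([f[p] for f in gated]) for p in range(n_pix)])
    return out

# pixel 0: wall moving periodically (identical at equal phases);
# pixel 1: blood, decorrelated speckle in every frame
frames = [[5, 9], [8, 2], [5, 7], [8, 4]]
sv = phase_gated_sv(frames, period=2)
print(sv)  # [[0.0, 1.0], [0.0, 1.0]]
```

    A plain consecutive-frame variance would flag pixel 0 as flow too (its values alternate 5, 8, 5, 8), which is exactly the artifact the phase gating removes.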

  15. Anomaly detection in forward looking infrared imaging using one-class classifiers

    NASA Astrophysics Data System (ADS)

    Popescu, Mihail; Stone, Kevin; Havens, Timothy; Ho, Dominic; Keller, James

    2010-04-01

    In this paper we describe a method for generating cues of possible abnormal objects present in the field of view of an infrared (IR) camera installed on a moving vehicle. The proposed method has two steps. In the first step, for each frame, we generate a set of possible points of interest using a corner detection algorithm. In the second step, points related to the background are discarded from the point set using a one-class classifier (OCC) trained on features extracted from a local neighborhood of each point. The advantage of using an OCC is that we do not need examples from the "abnormal object" class to train the classifier. Instead, the OCC is trained using corner points from images known to be free of abnormal objects, i.e., that contain only background scenes. To further reduce the number of false alarms, we use a temporal fusion procedure: a region has to be detected as "interesting" in m out of n, m
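    The m-out-of-n temporal fusion rule can be sketched directly; per-frame detection flags, m, and n here are illustrative.

```python
# Sketch of m-out-of-n temporal fusion: a region is cued only once it has
# been flagged "interesting" in at least m of the last n frames.

from collections import deque

def temporal_fusion(flags, m, n):
    """Return a list: True where the region was flagged in >= m of the last n frames."""
    window = deque(maxlen=n)      # sliding window of recent flags
    out = []
    for f in flags:
        window.append(f)
        out.append(sum(window) >= m)
    return out

print(temporal_fusion([1, 0, 1, 1, 0, 0, 0], m=2, n=3))
# [False, False, True, True, True, False, False]
```

    Isolated single-frame detections (likely clutter) never reach the m-of-n threshold, which is how the rule suppresses false alarms.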

  16. All-optical framing photography based on hyperspectral imaging method

    NASA Astrophysics Data System (ADS)

    Liu, Shouxian; Li, Yu; Li, Zeren; Chen, Guanghua; Peng, Qixian; Lei, Jiangbo; Liu, Jun; Yuan, Shuyun

    2017-02-01

    We propose and experimentally demonstrate a new all-optical framing photography that uses hyperspectral imaging methods to record a chirped pulse's temporal-spatial information. The proposed method consists of three parts: (1) a chirped laser pulse encodes temporal phenomena onto wavelengths; (2) a lenslet array generates a series of integral pupil images; (3) a dispersive device disperses the integral images into the void space of the image sensor. Compared with Ultrafast All-Optical Framing Technology (Daniel Frayer, 2013, 2014) and Sequentially Timed All-Optical Mapping Photography (Nakagawa 2014, 2015), our method makes it convenient to adjust the temporal resolution and to flexibly increase the number of frames. Theoretically, the temporal resolution of our scheme is limited by the amount of dispersion added to a Fourier-transform-limited femtosecond laser pulse. Correspondingly, the optimal number of frames is decided by the ratio of the observational time window to the temporal resolution, and the effective pixels of each frame are mostly limited by the dimensions M×N of the lenslet array. For example, if a 40 fs Fourier-transform-limited femtosecond pulse is stretched to 10 ps, a CCD camera with 2048×3072 pixels can record 15 framing images with a temporal resolution of 650 fs and an image size of 100×100 pixels. Because the recording part has a spectrometer structure, it has the further advantage that not only amplitude images but also frequency-domain interferograms can be imaged. Therefore, it is comparatively easy to capture fast dynamics in the refractive-index change of materials. A further dynamic experiment is being conducted.

  17. Joint Transform Correlation for face tracking: elderly fall detection application

    NASA Astrophysics Data System (ADS)

    Katz, Philippe; Aron, Michael; Alfalou, Ayman

    2013-03-01

    In this paper, an iterative tracking algorithm based on a non-linear JTC (Joint Transform Correlator) architecture and enhanced by a digital image processing method is proposed and validated. This algorithm is based on the computation of a correlation plane in which the reference image is updated at each frame. For that purpose, we use the JTC technique in real time to track a patient (target image) in a room fitted with a video camera. The correlation plane is used to localize the target image in the current video frame (frame i). Then, the reference image to be exploited in the next frame (frame i+1) is updated according to the previous one (frame i). To validate our algorithm, our work is divided into two parts: (i) a large study based on different sequences with several situations and different JTC parameters is carried out in order to quantify their effects on the tracking performance (decimation, non-linearity coefficient, size of the correlation plane, size of the region of interest...); (ii) the tracking algorithm is integrated into an elderly-fall-detection application. The first reference image is a face detected by means of Haar descriptors, which is then localized in the new video image by our tracking method. In order to avoid a bad update of the reference frame, a method based on a comparison of image intensity histograms is proposed and integrated into our algorithm. This step ensures robust tracking of the reference frame. This article focuses on the optimisation and evaluation of the face-tracking step. A supplementary fall-detection step, based on vertical acceleration and position, will be added and studied in further work.

  18. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
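    The digital reformatting of the four contiguous port outputs can be sketched as follows. The actual port-to-pixel readout order is not given in the abstract, so the strip layout assumed here (each port reading a contiguous column strip) is purely illustrative.

```python
# Sketch of multiport reformatting: four port outputs, each assumed to
# read a contiguous column strip row by row, are reassembled side by
# side into a correct image. Layout and data are hypothetical.

def reformat(ports, strip_w):
    """ports: list of row-major strips; returns rows of the full image."""
    height = len(ports[0]) // strip_w
    image = []
    for r in range(height):
        row = []
        for p in ports:
            row.extend(p[r * strip_w:(r + 1) * strip_w])
        image.append(row)
    return image

# tiny example: 2-pixel-wide strips, 2 rows, 4 ports
ports = [[1, 2, 9, 10], [3, 4, 11, 12], [5, 6, 13, 14], [7, 8, 15, 16]]
print(reformat(ports, 2))
# [[1, 2, 3, 4, 5, 6, 7, 8], [9, 10, 11, 12, 13, 14, 15, 16]]
```

    In the real camera this runs in dedicated digital hardware so that the four parallel streams emerge as one correct 256 x 256 image at up to 200 frames per second.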

  19. Time-series animation techniques for visualizing urban growth

    USGS Publications Warehouse

    Acevedo, W.; Masuoka, P.

    1997-01-01

    Time-series animation is a visually intuitive way to display urban growth. Animations of land-use change for the Baltimore-Washington region were generated by showing a series of images one after the other in sequential order. Before creating an animation, various issues which will affect the appearance of the animation should be considered, including the number of original data frames to use, the optimal animation display speed, the number of intermediate frames to create between the known frames, and the output media on which the animations will be displayed. To create new frames between the known years of data, the change in each theme (i.e., urban development, water bodies, transportation routes) must be characterized and an algorithm developed to create the in-between frames. Example time-series animations were created using a temporal GIS database of the Baltimore-Washington area. Creating the animations involved generating raster images of the urban development, water bodies, and principal transportation routes; overlaying the raster images on a background image; and importing the frames to a movie file. Three-dimensional perspective animations were created by draping each image over digital elevation data prior to importing the frames to a movie file. © 1997 Elsevier Science Ltd.
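    The in-between-frame step can be sketched with simple pixelwise linear interpolation between two known data years; real land-use themes would be interpolated per theme, and the values here are illustrative.

```python
# Sketch: create n in-between animation frames by linear interpolation
# between two known frames (flat pixel lists). Values are illustrative.

def tween(frame_a, frame_b, n_between):
    """Return n_between interpolated frames strictly between a and b."""
    frames = []
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)                     # fraction of the way to b
        frames.append([a + t * (b - a) for a, b in zip(frame_a, frame_b)])
    return frames

print(tween([0.0, 10.0], [4.0, 30.0], 3))
# [[1.0, 15.0], [2.0, 20.0], [3.0, 25.0]]
```

    More in-between frames give a smoother animation at a given display speed, at the cost of rendering and storage, which is the trade-off the paragraph above describes.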

  20. The impact of verbal framing on brain activity evoked by emotional images.

    PubMed

    Kisley, Michael A; Campbell, Alana M; Larson, Jenna M; Naftz, Andrea E; Regnier, Jesse T; Davalos, Deana B

    2011-12-01

    Emotional stimuli generally command more brain processing resources than non-emotional stimuli, but the magnitude of this effect is subject to voluntary control. Cognitive reappraisal represents one type of emotion regulation that can be voluntarily employed to modulate responses to emotional stimuli. Here, the late positive potential (LPP), a specific event-related brain potential (ERP) component, was measured in response to neutral, positive and negative images while participants performed an evaluative categorization task. One experimental group adopted a "negative frame" in which images were categorized as negative or not. The other adopted a "positive frame" in which the exact same images were categorized as positive or not. Behavioral performance confirmed compliance with random group assignment, and peak LPP amplitude to negative images was affected by group membership: brain responses to negative images were significantly reduced in the "positive frame" group. This suggests that adopting a more positive appraisal frame can modulate brain activity elicited by negative stimuli in the environment.

  1. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually leads to the inability to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor that is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. 
The proposed method is shown to be effective and suitable for real-time implementation.
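
The scale-resetting step the abstract describes can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the function name is assumed, and the scale factor is taken as the geometric mean of the singular values of the 2x2 linear part of the affine transform.

```python
import numpy as np

def normalize_affine_scale(A):
    """Reset the overall scale of a 2x3 affine transform to 1.
    The scale factor is the geometric mean of the two singular
    values of the 2x2 linear part, so rotation, shear, and
    translation are left untouched."""
    L = A[:, :2]
    s = np.linalg.svd(L, compute_uv=False)
    scale = float(np.sqrt(s[0] * s[1]))      # = sqrt(|det L|)
    A_fixed = A.copy()
    A_fixed[:, :2] = L / scale
    return A_fixed, scale

# A slight shrink (scale 0.95) composed with a rotation.
theta = 0.1
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
A = np.hstack([0.95 * R, [[3.0], [1.0]]])
A_fixed, scale = normalize_affine_scale(A)
```

Because the recovered scale is usually very close to 1, dividing it out changes the match only slightly while keeping pasted frames at their original size.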

  2. Synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON): A statistical model based iterative image reconstruction method to eliminate limited-view artifacts and to mitigate the temporal-average artifacts in time-resolved CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Guang-Hong, E-mail: gchen7@wisc.edu; Li, Yinsheng

    Purpose: In x-ray computed tomography (CT), a violation of the Tuy data sufficiency condition leads to limited-view artifacts. In some applications, it is desirable to use data corresponding to a narrow temporal window to reconstruct images with reduced temporal-average artifacts. However, the need to reduce temporal-average artifacts in practice may result in a violation of the Tuy condition and thus undesirable limited-view artifacts. In this paper, the authors present a new iterative reconstruction method, synchronized multiartifact reduction with tomographic reconstruction (SMART-RECON), to eliminate limited-view artifacts using data acquired within an ultranarrow temporal window that severely violates the Tuy condition. Methods: In time-resolved contrast enhanced CT acquisitions, image contrast dynamically changes during data acquisition. Each image reconstructed from data acquired in a given temporal window represents one time frame and can be denoted as an image vector. Conventionally, each individual time frame is reconstructed independently. In this paper, all image frames are grouped into a spatial–temporal image matrix and are reconstructed together. Rather than the spatial and/or temporal smoothing regularizers commonly used in iterative image reconstruction, the nuclear norm of the spatial–temporal image matrix is used in SMART-RECON to regularize the reconstruction of all image time frames. This regularizer exploits the low-dimensional structure of the spatial–temporal image matrix to mitigate limited-view artifacts when an ultranarrow temporal window is desired in some applications to reduce temporal-average artifacts. Both numerical simulations in two dimensional image slices with known ground truth and in vivo human subject data acquired in a contrast enhanced cone beam CT exam have been used to validate the proposed SMART-RECON algorithm and to demonstrate the initial performance of the algorithm.
Reconstruction errors and temporal fidelity of the reconstructed images were quantified using the relative root mean square error (rRMSE) and the universal quality index (UQI) in numerical simulations. The performance of the SMART-RECON algorithm was compared with that of the prior image constrained compressed sensing (PICCS) reconstruction, quantitatively in simulations and qualitatively in the human subject exam. Results: In numerical simulations, the 240° short scan angular span was divided into four consecutive 60° angular subsectors. SMART-RECON yields four high-temporal-fidelity images without limited-view artifacts. The average rRMSE is 16%, and the UQIs are 0.96 and 0.95 for the two local regions of interest, respectively. In contrast, the corresponding average rRMSE and UQIs are 25%, 0.78, and 0.81, respectively, for the PICCS reconstruction. Note that only one filtered backprojection image can be reconstructed from the same data set, with an average rRMSE of 45% and UQIs of 0.71 and 0.79, respectively, to benchmark reconstruction accuracy. For in vivo contrast enhanced cone beam CT data acquired over a short scan angular span of 200°, three 66° angular subsectors were used in SMART-RECON. The results demonstrated clear contrast differences in the three SMART-RECON reconstructed image volumes without limited-view artifacts. In contrast, for the same angular sectors, PICCS cannot reconstruct images without limited-view artifacts and with clear contrast differences among the three reconstructed image volumes. Conclusions: In time-resolved CT, the proposed SMART-RECON method provides a new way to eliminate limited-view artifacts using data acquired in an ultranarrow temporal window, corresponding to approximately 60° angular subsectors.
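
Nuclear-norm regularization of the kind SMART-RECON uses is typically handled inside an iterative solver via singular value thresholding. The sketch below shows only that proximal step on a toy spatial-temporal matrix; the function name, threshold, and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def nuclear_norm_prox(M, tau):
    """Proximal operator of tau * ||M||_*: soft-threshold the
    singular values (singular value thresholding).  A solver would
    apply this to the spatial-temporal image matrix each iteration."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Toy spatial-temporal matrix: 4 "time frames" sharing 2 latent
# components (the low-dimensional structure), plus noise.
rng = np.random.default_rng(0)
base = rng.standard_normal((100, 2)) @ rng.standard_normal((2, 4))
noisy = base + 0.01 * rng.standard_normal((100, 4))
low_rank = nuclear_norm_prox(noisy, tau=1.0)
```

Shrinking the singular values pushes the stack of frames toward the shared low-dimensional structure, which is what suppresses limited-view streaks that differ frame to frame.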

  3. Hierarchical video summarization

    NASA Astrophysics Data System (ADS)

    Ratakonda, Krishna; Sezan, M. Ibrahim; Crinon, Regis J.

    1998-12-01

    We address the problem of key-frame summarization of video in the absence of any a priori information about its content. This is a common problem that is encountered in home videos. We propose a hierarchical key-frame summarization algorithm where a coarse-to-fine key-frame summary is generated. A hierarchical key-frame summary facilitates multi-level browsing where the user can quickly discover the content of the video by accessing its coarsest but most compact summary and then view a desired segment of the video with increasingly more detail. At the finest level, the summary is generated on the basis of color features of video frames, using an extension of a recently proposed key-frame extraction algorithm. The finest level key-frames are recursively clustered using a novel pairwise K-means clustering approach with a temporal consecutiveness constraint. We also address summarization of MPEG-2 compressed video without fully decoding the bitstream. In addition, we propose efficient mechanisms that facilitate decoding the video when the hierarchical summary is utilized in browsing and playback of video segments starting at selected key-frames.
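
The temporal-consecutiveness constraint can be illustrated with a toy coarsening step: only temporally adjacent clusters are ever merged, so each level of the hierarchy still covers contiguous video segments. This is a simplified stand-in (function name and scalar "color features" are assumptions), not the paper's pairwise K-means algorithm.

```python
import numpy as np

def merge_adjacent_keyframes(features, target):
    """Coarsen a key-frame summary one level: repeatedly merge the
    temporally adjacent pair of clusters whose mean features are
    closest, until `target` clusters remain.  Restricting merges to
    neighbours keeps every cluster a contiguous run of frames."""
    clusters = [[i] for i in range(len(features))]
    feats = [np.asarray(f, float) for f in features]
    while len(clusters) > target:
        dists = [np.linalg.norm(feats[i] - feats[i + 1])
                 for i in range(len(feats) - 1)]
        i = int(np.argmin(dists))                 # closest adjacent pair
        w0, w1 = len(clusters[i]), len(clusters[i + 1])
        feats[i] = (w0 * feats[i] + w1 * feats[i + 1]) / (w0 + w1)
        clusters[i] = clusters[i] + clusters[i + 1]
        del clusters[i + 1], feats[i + 1]
    return clusters

# Six key-frames: two dark, two bright, two dark again.
hist = [[0.1], [0.12], [0.9], [0.88], [0.15], [0.11]]
print(merge_adjacent_keyframes(hist, 3))  # → [[0, 1], [2, 3], [4, 5]]
```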

  4. Multi-frame partially saturated images blind deconvolution

    NASA Astrophysics Data System (ADS)

    Ye, Pengzhao; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-12-01

    When blurred images have saturated or over-exposed pixels, conventional blind deconvolution approaches often fail to estimate an accurate point spread function (PSF) and will introduce local ringing artifacts. In this paper, we propose a method to deal with the problem under a modified multi-frame blind deconvolution framework. First, in the kernel estimation step, a light streak detection scheme using multi-frame blurred images is incorporated into the regularization constraint. Second, we deal with image regions affected by the saturated pixels separately by modeling a weighted matrix during each multi-frame deconvolution iteration. Both synthetic and real-world examples show that more accurate PSFs can be estimated and that restored images have richer details and fewer artifacts compared to state-of-the-art methods.

  5. Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation

    NASA Astrophysics Data System (ADS)

    Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas

    2013-03-01

    High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is only desired to enhance small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example, as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
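
The user-controlled blend reduces to a per-pixel convex combination of the two renderings, weighted by the painted surface. A minimal sketch, with assumed names and a plain array standing in for the touch-painted weighting surface:

```python
import numpy as np

def blend_hdr(base, hdr, weight):
    """Per-pixel convex blend of an HDR-processed image with the
    original: weight 1 selects the HDR rendering, weight 0 keeps
    the original.  `weight` is an HxW surface (in the paper, built
    from the user's touch strokes) and the images are HxWx3."""
    w = np.clip(np.asarray(weight, float), 0.0, 1.0)[..., None]
    return w * hdr + (1.0 - w) * base

base = np.zeros((2, 2, 3))        # original rendering
hdr = np.ones((2, 2, 3))          # HDR rendering
w = np.array([[0.0, 1.0],
              [0.5, 1.0]])        # user-painted weighting surface
hybrid = blend_hdr(base, hdr, w)
```

Because the weight surface is stored separately from the two renderings, the blend can be re-edited at any time after capture, as the abstract notes.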

  6. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images.

    PubMed

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S

    2016-01-01

    Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. Signal-to-noise and contrast-to-noise ratios were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern for systematic differences in both qualitative and quantitative aspects.

  7. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    PubMed Central

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and adding amplitude deviation with 15-time repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. Signal-to-noise and contrast-to-noise ratios were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that became nonsignificant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern for systematic differences in both qualitative and quantitative aspects. PMID:26835180

  8. Complementary frame reconstruction: a low-biased dynamic PET technique for low count density data in projection space

    NASA Astrophysics Data System (ADS)

    Hong, Inki; Cho, Sanghee; Michel, Christian J.; Casey, Michael E.; Schaefferkoetter, Joshua D.

    2014-09-01

    A new data handling method is presented for improving the image noise distribution and reducing bias when reconstructing very short frames from low count dynamic PET acquisitions. The new method, termed ‘Complementary Frame Reconstruction’ (CFR), involves the indirect formation of a count-limited emission image in a short frame through subtraction of two frames with longer acquisition times, where the short time frame data are excluded from the second long frame data before the reconstruction. This approach can be regarded as an alternative to the AML algorithm recently proposed by Nuyts et al. as a method to reduce the bias of maximum likelihood expectation maximization (MLEM) reconstruction of count-limited data. CFR uses long scan emission data to stabilize the reconstruction and avoids modification of algorithms such as MLEM. The subtraction between the two long-frame images naturally allows negative voxel values and significantly reduces bias introduced in the final image. Simulations based on phantom and clinical data were used to evaluate the accuracy of the reconstructed images in representing the true activity distribution. Applicability to determining the arterial input function in human and small animal studies is also explored. In situations with limited count rate, e.g. pediatric applications, gated abdominal or cardiac studies, etc., or when using limited doses of short-lived isotopes such as 15O-water, the proposed method will likely be preferred over independent frame reconstruction to address bias and noise issues.
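
The complementary-subtraction bookkeeping can be sketched in one line. Note that real MLEM is nonlinear (the source of the positivity bias CFR works around); the linear smoothing "reconstruction" below is only a stand-in, and all names are assumptions.

```python
import numpy as np

def cfr_short_frame(recon, counts_total, counts_minus_short):
    """Complementary Frame Reconstruction, sketched: form the
    short-frame image indirectly as the difference of two
    well-populated long-frame reconstructions.  The difference is
    free to go negative, which reduces the positivity bias of
    reconstructing the sparse short frame directly."""
    return recon(counts_total) - recon(counts_minus_short)

# Stand-in "reconstruction": a linear smoothing filter.
kernel = np.ones(3) / 3.0
recon = lambda s: np.convolve(s, kernel, mode="same")

total = np.array([5.0, 9.0, 12.0, 8.0, 6.0])   # full long-frame counts
short = np.array([1.0, 0.0, 3.0, 2.0, 0.0])    # short-frame counts
estimate = cfr_short_frame(recon, total, total - short)
```

For a linear operator the difference equals a direct reconstruction of the short-frame counts, while each of the two inputs is a well-conditioned long frame; with nonlinear MLEM the two sides differ, which is exactly where CFR gains over direct short-frame reconstruction.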

  9. Optimization of image quality and dose for Varian aS500 electronic portal imaging devices (EPIDs).

    PubMed

    McGarry, C K; Grattan, M W D; Cosgrove, V P

    2007-12-07

    This study was carried out to investigate whether the electronic portal imaging (EPI) acquisition process could be optimized, and as a result tolerance and action levels be set for the PIPSPro QC-3V phantom image quality assessment. The aim of the optimization process was to reduce the dose delivered to the patient while maintaining a clinically acceptable image quality. This is of interest when images are acquired in addition to the planned patient treatment, rather than images being acquired using the treatment field during a patient's treatment. A series of phantoms were used to assess image quality for different acquisition settings relative to the baseline values obtained following acceptance testing. Eight Varian aS500 EPID systems on four matched Varian 600C/D linacs and four matched Varian 2100C/D linacs were compared for consistency of performance and images were acquired at the four main orthogonal gantry angles. Images were acquired using a 6 MV beam operating at 100 MU min(-1) and the low-dose acquisition mode. Doses used in the comparison were measured using a Farmer ionization chamber placed at d(max) in solid water. The results demonstrated that the number of reset frames did not have any influence on the image contrast, but the number of frame averages did. The expected increase in noise with corresponding decrease in contrast was also observed when reducing the number of frame averages. The optimal settings for the low-dose acquisition mode with respect to image quality and dose were found to be one reset frame and three frame averages. All patients at the Northern Ireland Cancer Centre are now imaged using one reset frame and three frame averages in the 6 MV 100 MU min(-1) low-dose acquisition mode. Routine EPID QC contrast tolerance (+/-10) and action (+/-20) levels using the PIPSPro phantom based around expected values of 190 (Varian 600C/D) and 225 (Varian 2100C/D) have been introduced. 
The dose at dmax from electronic portal imaging has been reduced by approximately 28%, and while the image quality has been reduced, the images produced are still clinically acceptable.

  10. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we propose a video enhancement algorithm to improve the output video of an infrared camera. Video obtained by an infrared camera is sometimes very dark because no clear target is present. In this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out per frame. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals the pixels occupy, and each sub-image is histogram-equalized according to the amount of information it contains; we use a method to prevent the final cluster centers from ending up close to each other, which occurs in some cases. For every other frame image, the initial cluster centers are determined by the final cluster centers of the previous frame, and histogram equalization of each sub-image is carried out after image segmentation based on K-means clustering. The histogram equalization spreads the gray values of the image over the whole gray range, and the gray range of each sub-image is determined by its share of the pixels in a frame image. Experimental results show that this algorithm can improve the contrast of infrared video in which the night target is not obvious and the scene is dim, and can adaptively reduce, within a certain range, the negative effect of overexposed pixels.
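
A rough sketch of the per-frame processing follows. Evenly spread initial centers serve here as one simple guard against cluster centers collapsing together; the paper's exact remedy and parameters are not given in the abstract, so every name and choice below is an assumption.

```python
import numpy as np

def kmeans_1d(values, k, iters=20):
    """Lloyd's algorithm on scalar gray values.  Initial centers are
    spread evenly over the observed range, a simple guard against
    final centers landing close to each other."""
    centers = np.linspace(values.min(), values.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

def cluster_equalize(image, k=3):
    """Cluster pixels into k gray intervals, give each cluster an
    output range proportional to its pixel count, and rank-equalize
    each cluster's pixels into its range."""
    flat = image.ravel().astype(float)
    labels, centers = kmeans_1d(flat, k)
    out = np.empty_like(flat)
    start = 0.0
    for j in np.argsort(centers):            # darkest cluster first
        mask = labels == j
        n = int(mask.sum())
        span = 255.0 * n / flat.size         # range proportional to count
        if n:
            ranks = np.argsort(np.argsort(flat[mask]))
            out[mask] = start + span * ranks / max(n - 1, 1)
        start += span
    return out.reshape(image.shape)

frame = np.array([[10, 12, 200],
                  [11, 210, 205]])
enhanced = cluster_equalize(frame, k=2)
```

For subsequent frames, `kmeans_1d` would simply be started from the previous frame's final centers instead of the evenly spread ones.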

  11. Reduction of speckle noise from optical coherence tomography images using multi-frame weighted nuclear norm minimization method

    NASA Astrophysics Data System (ADS)

    Thapa, Damber; Raahemifar, Kaamran; Lakshminarayanan, Vasudevan

    2015-12-01

    In this paper, we propose a speckle noise reduction method for spectral-domain optical coherence tomography (SD-OCT) images called multi-frame weighted nuclear norm minimization (MWNNM). This method is a direct extension of weighted nuclear norm minimization (WNNM) to the multi-frame framework, since an adequately denoised image could not be achieved with single-frame denoising methods. The MWNNM method exploits multiple B-scans collected from a small area of an SD-OCT volumetric image, and then denoises and averages them together to obtain a high signal-to-noise-ratio B-scan. The results show that the image quality metrics obtained by denoising and averaging only five nearby B-scans with the MWNNM method are considerably better than those of the average image obtained by registering and averaging 40 azimuthally repeated B-scans.
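
The flavor of the method can be sketched as a weighted singular value thresholding on a matrix of stacked B-scans. The weighting rule (inverse to singular-value magnitude, so stronger components are shrunk less) follows the usual WNNM convention, but the constant `c`, the names, and the final averaging are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mwnnm_denoise(bscans, c=10.0, eps=1e-8):
    """Multi-frame weighted nuclear norm sketch: stack nearby
    B-scans as columns, shrink each singular value by a weight
    inversely proportional to its magnitude (large components,
    assumed signal, are barely shrunk), then average the denoised
    frames into one high-SNR B-scan."""
    M = np.stack([b.ravel() for b in bscans], axis=1).astype(float)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = c / (s + eps)                 # weight proportional to 1/sigma
    D = (U * np.maximum(s - w, 0.0)) @ Vt
    return D.mean(axis=1).reshape(bscans[0].shape)

ramp = np.arange(16.0).reshape(4, 4)              # "clean" B-scan
rng = np.random.default_rng(1)
frames = [ramp + 0.2 * rng.standard_normal((4, 4)) for _ in range(5)]
denoised = mwnnm_denoise(frames)
```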

  12. The Texas Thermal Interface: A real-time computer interface for an Inframetrics infrared camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storek, D.J.; Gentle, K.W.

    1996-03-01

    The Texas Thermal Interface (TTI) offers an advantageous alternative to the conventional video path for computer analysis of infrared images from Inframetrics cameras. The TTI provides real-time computer data acquisition of 48 consecutive fields (version described here) with 8-bit pixels. The alternative requires time-consuming individual frame grabs from video tape with frequent loss of resolution in the D/A/D conversion. Within seconds after the event, the TTI temperature files may be viewed and processed to infer heat fluxes or other quantities as needed. The system cost is far less than commercial units which offer less capability. The system was developed for and is being used to measure heat fluxes to the plasma-facing components in a tokamak. © 1996 American Institute of Physics.

  13. Imaging of optically diffusive media by use of opto-elastography

    NASA Astrophysics Data System (ADS)

    Bossy, Emmanuel; Funke, Arik R.; Daoudi, Khalid; Tanter, Mickael; Fink, Mathias; Boccara, Claude

    2007-02-01

    We present a camera-based optical detection scheme designed to detect the transient motion created by the acoustic radiation force in elastic media. An optically diffusive tissue-mimicking phantom was illuminated with coherent laser light, and a high-speed camera (2 kHz frame rate) was used to acquire and cross-correlate consecutive speckle patterns. Time-resolved transient decorrelations of the optical speckle were measured as the result of localised motion induced in the medium by the radiation force and subsequent propagating shear waves. As opposed to classical acousto-optic techniques, which are sensitive to vibrations induced by compressional waves at ultrasonic frequencies, the proposed technique is sensitive only to the low-frequency transient motion induced in the medium by the radiation force. It therefore provides a way to assess both optical and shear mechanical properties.
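
The cross-correlation of consecutive speckle frames can be reduced to one normalized correlation coefficient per frame pair; a dip in the resulting time series marks the transient decorrelation. A minimal sketch with assumed names and synthetic speckle:

```python
import numpy as np

def frame_correlations(stack):
    """Zero-mean normalized correlation between each pair of
    consecutive speckle frames.  A dip in the series marks the
    transient decorrelation induced by the radiation force."""
    out = []
    for a, b in zip(stack[:-1], stack[1:]):
        a = a - a.mean()
        b = b - b.mean()
        out.append(float((a * b).sum()
                         / np.sqrt((a * a).sum() * (b * b).sum())))
    return np.array(out)

rng = np.random.default_rng(3)
still = rng.standard_normal((8, 8))       # static speckle pattern
moved = rng.standard_normal((8, 8))       # fully decorrelated pattern
corr = frame_correlations([still, still.copy(), moved])
```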

  14. Development Of A Dynamic Radiographic Capability Using High-Speed Video

    NASA Astrophysics Data System (ADS)

    Bryant, Lawrence E.

    1985-02-01

    High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging at up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 kV and 300 kV constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt-60 source. Use of a maximum-aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high-speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling-type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement, and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details, with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  15. High frame rate imaging systems developed in Northwest Institute of Nuclear Technology

    NASA Astrophysics Data System (ADS)

    Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli

    2007-01-01

    This paper presents high frame rate imaging systems developed at the Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type of system utilizes the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a flash-event capture mode; a specific time sequence is designed to satisfy this requirement. The camera image data can be transmitted to a remote area by coaxial or optical fiber cable and then be stored. The second type of imaging system utilizes the PHOTOBIT complementary metal oxide semiconductor (CMOS) PB-MV13 as the image sensor, which has a high resolution of 1280 (H) × 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps at full frame and 4000 fps in partial-frame readout. The prototype scheme of the system is presented. The third type of imaging system adopts charge-coupled devices (CCDs) as the imagers. MINTRON MTV-1881EX, DALSA CA-D1, and CA-D6 camera heads are used in the systems' development. A comparison of the features of the RA100A-, PB-MV13-, and CA-D6-based systems is given at the end.

  16. Tracking features in retinal images of adaptive optics confocal scanning laser ophthalmoscope using KLT-SIFT algorithm

    PubMed Central

    Li, Hao; Lu, Jing; Shi, Guohua; Zhang, Yudong

    2010-01-01

    With the use of adaptive optics (AO), high-resolution microscopic imaging of the living human retina at the single-cell level has been achieved. In an adaptive optics confocal scanning laser ophthalmoscope (AOSLO) system with a small field size (about 1 degree, 280 μm), the motion of the eye severely affects the stabilization of the real-time video images and results in significant distortions of the retina images. In this paper, the Scale-Invariant Feature Transform (SIFT) is used to extract stable point features from the retina images. The Kanade-Lucas-Tomasi (KLT) algorithm is applied to track the features. With the tracked features, the image distortion in each frame is removed by a second-order polynomial transformation, and 10 successive frames are co-added to enhance the image quality. Features of special interest in an image can also be selected manually and tracked by KLT. As an example, a point on a cone is selected manually, and the cone is tracked from frame to frame. PMID:21258443
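
The distortion-removal step, a second-order polynomial warp fitted to the tracked features, can be sketched as a linear least-squares fit. The function names and basis ordering are assumptions:

```python
import numpy as np

QUAD = lambda p: np.column_stack([np.ones(len(p)), p[:, 0], p[:, 1],
                                  p[:, 0] * p[:, 1], p[:, 0]**2, p[:, 1]**2])

def fit_poly2(src, dst):
    """Fit the second-order polynomial map src -> dst (6
    coefficients per output axis) by linear least squares;
    requires at least 6 tracked feature matches."""
    coef, *_ = np.linalg.lstsq(QUAD(src), dst, rcond=None)
    return coef                                  # shape (6, 2)

def apply_poly2(coef, pts):
    return QUAD(pts) @ coef

# Synthetic check: warp a 4x4 grid of "features" with known
# coefficients, then recover them from the point matches.
xx, yy = np.meshgrid(np.arange(4.0), np.arange(4.0))
pts = np.column_stack([xx.ravel(), yy.ravel()])
coef_true = np.array([[0.5, -0.2], [1.0, 0.05], [0.02, 1.0],
                      [0.001, 0.002], [0.003, 0.0], [0.0, 0.004]])
warped = apply_poly2(coef_true, pts)
coef_est = fit_poly2(pts, warped)
```

In the paper's pipeline the matches come from SIFT features tracked by KLT; here they are synthetic grid points.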

  17. SU-G-BRA-05: Application of a Feature-Based Tracking Algorithm to KV X-Ray Fluoroscopic Images Toward Marker-Less Real-Time Tumor Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, M; Matsuo, Y; Mukumoto, N

    Purpose: To detect target position on kV X-ray fluoroscopic images using a feature-based tracking algorithm, Accelerated-KAZE (AKAZE), for markerless real-time tumor tracking (RTTT). Methods: Twelve lung cancer patients treated with RTTT on the Vero4DRT (Mitsubishi Heavy Industries, Japan, and Brainlab AG, Feldkirchen, Germany) were enrolled in this study. Respiratory tumor movement was greater than 10 mm. Three to five fiducial markers were implanted around the lung tumor transbronchially for each patient. Before beam delivery, external infrared (IR) markers and the fiducial markers were monitored for 20 to 40 s with the IR camera every 16.7 ms and with an orthogonal kV x-ray imaging subsystem every 80 or 160 ms, respectively. Target positions derived from the fiducial markers were determined on the orthogonal kV x-ray images, which were used as the ground truth in this study. Meanwhile, tracking positions were identified by AKAZE. Among many candidate feature points, AKAZE retained high-quality feature points through a sequential cross-check and distance-check between two consecutive images. Then, these 2D positional data were converted to 3D positional data by a transformation matrix with a predefined calibration parameter. Root mean square error (RMSE) was calculated to evaluate the difference between 3D tracking and target positions. A total of 393 frames was analyzed. The experiment was conducted on a personal computer with 16 GB RAM and an Intel Core i7-2600, 3.4 GHz processor. Results: Reproducibility of the target position during the same respiratory phase was 0.6 ± 0.6 mm (range, 0.1–3.3 mm). Mean ± SD of the RMSEs was 0.3 ± 0.2 mm (range, 0.0–1.0 mm). Median computation time per frame was 179 msec (range, 154–247 msec). Conclusion: AKAZE successfully and quickly detected the target position on kV X-ray fluoroscopic images. Initial results indicate that the differences between 3D tracking and target positions would be clinically acceptable.
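
The sequential cross-check and distance-check amount to keeping mutual nearest-neighbour matches below a distance threshold. A NumPy sketch of that filtering step only (real AKAZE uses binary descriptors and a dedicated matcher; the toy float descriptors and names here are assumptions):

```python
import numpy as np

def cross_check_matches(desc_a, desc_b, max_dist):
    """Keep only mutual-nearest-neighbour matches whose descriptor
    distance passes a threshold: a cross-check plus distance-check
    between feature sets of two consecutive frames."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)
    b_to_a = d.argmin(axis=0)
    return [(i, int(j)) for i, j in enumerate(a_to_b)
            if b_to_a[j] == i and d[i, j] <= max_dist]

desc_a = np.array([[0.0, 0.0], [5.0, 5.0], [9.0, 9.0]])
desc_b = np.array([[0.1, 0.0], [9.0, 9.2], [30.0, 30.0]])
matches = cross_check_matches(desc_a, desc_b, max_dist=1.0)
```

The middle descriptor of `desc_a` is rejected by the cross-check (its nearest neighbour does not point back), and the far outlier in `desc_b` is rejected by the distance check.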

  18. Characterization of system-related geometric distortions in MR images employed in Gamma Knife radiosurgery applications.

    PubMed

    Pappas, E P; Seimenis, I; Moutsatsos, A; Georgiou, E; Nomikos, P; Karaiskos, P

    2016-10-07

    This work provides a characterization of system-related geometric distortions present in MRIs used in Gamma Knife (GK) stereotactic radiosurgery (SRS) treatment planning. A custom-made phantom, compatible with the Leksell stereotactic frame model G and encompassing 947 control points (CPs), was utilized. MR images were obtained with and without the frame, thus allowing discrimination of frame-induced distortions. In the absence of the frame, and following compensation for field inhomogeneities, the measured average CP disposition owing to gradient nonlinearities was 0.53 mm. In the presence of the frame, by contrast, the detected distortion was greatly increased (up to about 5 mm) in the vicinity of the frame base due to eddy currents induced in the closed loop of its aluminum material. Frame-related distortion was obliterated at approximately 90 mm from the frame base. Although the region with the maximum observed distortion may not lie within the GK treatable volume, the presence of the frame results in distortion of the order of 1.5 mm at a 7 cm distance from the center of the Leksell space. Additionally, severe distortions observed outside the treatable volume could possibly impinge on the delivery accuracy, mainly by adversely affecting the registration process (e.g. the position of the lower part of the N-shaped fiducials used to define the stereotactic space may be mis-registered). Images acquired with a modified version of the frame, developed by replacing its front side with an acrylic bar, thus interrupting the closed aluminum loop and reducing the induced eddy currents, were shown to benefit from relatively reduced distortion. System-related distortion was also identified in patient MR images. Using corresponding CT angiography images as a reference, an offset of 1.1 mm was detected for two vessels lying in close proximity to the frame base, while excellent spatial agreement was observed for a vessel far from the frame base.

  19. Characterization of system-related geometric distortions in MR images employed in Gamma Knife radiosurgery applications

    NASA Astrophysics Data System (ADS)

    Pappas, E. P.; Seimenis, I.; Moutsatsos, A.; Georgiou, E.; Nomikos, P.; Karaiskos, P.

    2016-10-01

    This work provides a characterization of system-related geometric distortions present in MRIs used in Gamma Knife (GK) stereotactic radiosurgery (SRS) treatment planning. A custom-made phantom, compatible with the Leksell stereotactic frame model G and encompassing 947 control points (CPs), was utilized. MR images were obtained with and without the frame, thus allowing discrimination of frame-induced distortions. In the absence of the frame, and following compensation for field inhomogeneities, the measured average CP disposition owing to gradient nonlinearities was 0.53 mm. In the presence of the frame, by contrast, the detected distortion was greatly increased (up to about 5 mm) in the vicinity of the frame base due to eddy currents induced in the closed loop of its aluminum material. Frame-related distortion was obliterated at approximately 90 mm from the frame base. Although the region with the maximum observed distortion may not lie within the GK treatable volume, the presence of the frame results in distortion of the order of 1.5 mm at a 7 cm distance from the center of the Leksell space. Additionally, severe distortions observed outside the treatable volume could possibly impinge on the delivery accuracy, mainly by adversely affecting the registration process (e.g. the position of the lower part of the N-shaped fiducials used to define the stereotactic space may be mis-registered). Images acquired with a modified version of the frame, developed by replacing its front side with an acrylic bar, thus interrupting the closed aluminum loop and reducing the induced eddy currents, were shown to benefit from relatively reduced distortion. System-related distortion was also identified in patient MR images. Using corresponding CT angiography images as a reference, an offset of 1.1 mm was detected for two vessels lying in close proximity to the frame base, while excellent spatial agreement was observed for a vessel far from the frame base.

  20. Space Shuttle Main Engine Propellant Path Leak Detection Using Sequential Image Processing

    NASA Technical Reports Server (NTRS)

    Smith, L. Montgomery; Malone, Jo Anne; Crawford, Roger A.

    1995-01-01

Initial research in this study using theoretical radiation transport models established that the occurrence of a leak is accompanied by a sudden but sustained change in intensity in a given region of an image. In this phase, temporal processing of video images on a frame-by-frame basis was used to detect leaks within a given field of view. The leak detection algorithm developed in this study consists of a digital highpass filter cascaded with a moving average filter. The absolute value of the resulting discrete sequence is then taken and compared to a threshold value to produce the binary leak/no-leak decision at each point in the image. Alternatively, averaging over the full frame of the output image produces a single time-varying mean value estimate that is indicative of the intensity and extent of a leak. Laboratory experiments were conducted in which artificially created leaks on a simulated SSME background were produced and recorded from a visible-wavelength video camera. These data were processed frame by frame over the time interval of interest using an image-processor implementation of the leak detection algorithm. In addition, a 20 second video sequence of an actual SSME failure was analyzed using this technique. The resulting output image sequences and plots of the full-frame mean value versus time verify the effectiveness of the system.
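    The pipeline described above (temporal highpass, moving-average smoothing, absolute value, threshold, plus the full-frame mean trace) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the filter orders, window length, and threshold are assumed parameters.

    ```python
    import numpy as np

    def detect_leak(frames, ma_window=5, threshold=0.1):
        """Per-pixel temporal leak detector sketch: first-difference
        highpass, moving-average (boxcar) smoothing, absolute value,
        and thresholding.  `frames` has shape (T, H, W); returns a
        boolean (T, H, W) leak mask and the full-frame mean trace."""
        frames = np.asarray(frames, dtype=float)
        # Simple first-order temporal highpass (discrete difference).
        hp = np.diff(frames, axis=0, prepend=frames[:1])
        # Moving-average filter along time to sustain the response.
        kernel = np.ones(ma_window) / ma_window
        ma = np.apply_along_axis(
            lambda s: np.convolve(s, kernel, mode="same"), 0, hp)
        mag = np.abs(ma)
        mask = mag > threshold  # binary leak/no-leak decision per pixel
        # Full-frame mean: a single time-varying indicator of leak extent.
        mean_trace = mag.reshape(mag.shape[0], -1).mean(axis=1)
        return mask, mean_trace
    ```

    A sudden sustained intensity step produces a pulse at the change point that the moving average spreads over its window, while slow background drift is suppressed by the highpass stage.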

  1. Effect of time discretization of the imaging process on the accuracy of trajectory estimation in fluorescence microscopy

    PubMed Central

    Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.

    2014-01-01

    In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248
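    The detector-dependent tradeoff described above can be illustrated with a toy noise budget (this is an assumed simplification, not the paper's Fisher-information bound): a fixed photon budget split over K frames accumulates K readout-noise contributions on a CCD, whereas an ideal noiseless detector keeps only the shot-noise term.

    ```python
    # Toy noise budget (assumed model, not the paper's calculation):
    # variance of the summed signal = shot noise + K * readout variance.
    N_photons = 10_000.0   # total photon budget across all frames
    sigma_r = 6.0          # assumed CCD readout noise, e-/frame

    def summed_signal_variance(K, sigma_r):
        """Variance of the total accumulated signal over K frames."""
        return N_photons + K * sigma_r ** 2

    var_slow = summed_signal_variance(10, sigma_r)     # 10 frames
    var_fast = summed_signal_variance(1000, sigma_r)   # 1000 frames
    var_ideal = summed_signal_variance(1000, 0.0)      # noiseless detector
    ```

    In this simplified model the CCD penalty grows linearly with frame count, while the hypothetical noiseless detector is frame-rate independent, qualitatively matching the trend the abstract reports.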

  2. High-frame-rate digital radiographic videography

    NASA Astrophysics Data System (ADS)

    King, Nicholas S. P.; Cverna, Frank H.; Albright, Kevin L.; Jaramillo, Steven A.; Yates, George J.; McDonald, Thomas E.; Flynn, Michael J.; Tashman, Scott

    1994-10-01

High speed x-ray imaging can be an important tool for observing internal processes in a wide range of applications. In this paper we describe the preliminary implementation of a system having the eventual goal of observing the internal dynamics of bone and joint reactions during loading. Two Los Alamos National Laboratory (LANL) gated and image-intensified camera systems were used to record images from an x-ray image convertor tube to demonstrate the potential of high-frame-rate digital radiographic videography in the analysis of bone and joint dynamics of the human body. Preliminary experiments were done at LANL to test the systems. Initial high-frame-rate imaging (from 500 to 1000 frames/s) of a swinging pendulum mounted to the face of an x-ray image convertor tube demonstrated high contrast response and baseline sensitivity. The systems were then evaluated at the Motion Analysis Laboratory of Henry Ford Health Systems Bone and Joint Center. Imaging of a 9 inch acrylic disk with embedded lead markers rotating at approximately 1000 RPM demonstrated the system response to a high-velocity/high-contrast target. By gating the P-20 phosphor image from the x-ray image convertor with a second image intensifier (II) and using a 100 microsecond wide optical gate through the second II, enough prompt light decay from the x-ray image convertor phosphor had taken place to achieve reduction of most of the motion blurring. Measurement of the marker velocity was made by using video frames acquired at 500 frames/s. The data obtained from both experiments successfully demonstrated the feasibility of the technique. Several key areas for improvement are discussed along with salient test results and experiment details.

  3. The Kepler Full Frame Images

    NASA Technical Reports Server (NTRS)

    Dotson, Jessie L.; Batalha, Natalie; Bryson, Stephen T.; Caldwell, Douglas A.; Clarke, Bruce D.

    2010-01-01

NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100 square degree field over a 3.5 yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10^5 preselected stellar targets. The majority of the Kepler field, comprising 4 × 10^6 m_v < 20 sources, is sampled at a much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler Team employ these images for astrometric and photometric reference but make the images available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely-available long-cadence legacy of photometric variation across a swathe of the Galactic disk.

  4. High-speed adaptive optics line scan confocal retinal imaging for human eye

    PubMed Central

    Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

Purpose Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. Methods A high-speed line camera was employed to acquire retinal images, and custom adaptive optics were developed to compensate for the wave aberration of the human eye’s optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in the living human eyes with frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. Results The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. Conclusions We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss. PMID:28257458

  5. High-speed adaptive optics line scan confocal retinal imaging for human eye.

    PubMed

    Lu, Jing; Gu, Boyu; Wang, Xiaolin; Zhang, Yuhua

    2017-01-01

Continuous and rapid eye movement causes significant intraframe distortion in adaptive optics high resolution retinal imaging. To minimize this artifact, we developed a high-speed adaptive optics line scan confocal retinal imaging system. A high-speed line camera was employed to acquire retinal images, and custom adaptive optics were developed to compensate for the wave aberration of the human eye's optics. The spatial resolution and signal-to-noise ratio were assessed in a model eye and in the living human eye. The improvement in imaging fidelity was estimated by the reduction of intra-frame distortion of retinal images acquired in the living human eyes with frame rates of 30 frames/second (FPS), 100 FPS, and 200 FPS. The device produced retinal images with cellular-level resolution at 200 FPS with a digitization of 512×512 pixels/frame in the living human eye. Cone photoreceptors in the central fovea and rod photoreceptors near the fovea were resolved in three human subjects in normal chorioretinal health. Compared with retinal images acquired at 30 FPS, the intra-frame distortion in images taken at 200 FPS was reduced by 50.9% to 79.7%. We demonstrated the feasibility of acquiring high resolution retinal images in the living human eye at a speed that minimizes retinal motion artifact. This device may facilitate research involving subjects with nystagmus or unsteady fixation due to central vision loss.

  6. A video-based speed estimation technique for localizing the wireless capsule endoscope inside gastrointestinal tract.

    PubMed

    Bao, Guanqun; Mi, Liang; Geng, Yishuang; Zhou, Mingda; Pahlavan, Kaveh

    2014-01-01

Wireless Capsule Endoscopy (WCE) is progressively emerging as one of the most popular non-invasive imaging tools for gastrointestinal (GI) tract inspection. As a critical component of capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of intestinal disease. For the WCE, the position of the capsule is defined as its linear distance from certain fixed anatomical landmarks. In order to measure the distance the capsule has traveled, precise knowledge of how fast the capsule moves is needed. In this paper, we present a novel computer vision based speed estimation technique that is able to extract the speed of the endoscopic capsule by analyzing the displacements between consecutive frames. The proposed approach is validated using a virtual testbed as well as real endoscopic images. Results show that the proposed method is able to precisely estimate the speed of the endoscopic capsule with 93% accuracy on average, which enhances the localization accuracy of the WCE to less than 2.49 cm.
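    One plausible way to implement the frame-to-frame displacement step above is phase correlation; the abstract does not specify the method, so the function names, pixel scale, and frame interval below are illustrative assumptions.

    ```python
    import numpy as np

    def frame_displacement(f1, f2):
        """Estimate the integer-pixel translation between two consecutive
        frames by phase correlation (one plausible implementation of the
        paper's displacement step, not necessarily the authors' own)."""
        F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
        cross = F1 * np.conj(F2)
        cross /= np.abs(cross) + 1e-12       # normalized cross-power spectrum
        corr = np.abs(np.fft.ifft2(cross))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap shifts larger than half the frame into negative displacements.
        h, w = f1.shape
        if dy > h // 2:
            dy -= h
        if dx > w // 2:
            dx -= w
        return dy, dx

    def capsule_speed(f1, f2, mm_per_pixel, frame_interval_s):
        """Convert the inter-frame pixel displacement to a speed (mm/s)."""
        dy, dx = frame_displacement(f1, f2)
        return np.hypot(dy, dx) * mm_per_pixel / frame_interval_s
    ```

    Integrating these per-frame speeds over time would give the traveled distance used for localization.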

  7. Role of "the frame cycle time" in portal dose imaging using an aS500-II EPID.

    PubMed

    Al Kattar Elbalaa, Zeina; Foulquier, Jean Noel; Orthuon, Alexandre; Elbalaa, Hanna; Touboul, Emmanuel

    2009-09-01

This paper evaluates the role of an acquisition parameter, the frame cycle time (FCT), in the performance of an aS500-II EPID. The work presented rests on the study of the Varian aS500-II EPID and image acquisition system 3 (IAS3). We are interested in integrated acquisition using the asynchronous mode. To better understand the image acquisition operation, we investigated the influence of the frame cycle time on the speed of acquisition, the pixel value of the averaged gray-scale frame, and the noise, using 6 and 15 MV X-ray beams and dose rates of 1-6 Gy/min on 2100 C/D linacs. In the integrated mode not synchronized to beam pulses, only one parameter, the frame cycle time (FCT), influences the pixel value. The pixel value of the averaged gray-scale frame is proportional to this parameter. When the FCT < 55 ms (acquisition speed > 18 frames/s), the speed of acquisition becomes unstable and leads to a fluctuation of the portal dose response. A timing instability and saturation are detected when the dose per frame exceeds 1.53 MU/frame. Rules were deduced to avoid saturation and to optimize this dosimetric mode. The choice of the acquisition parameters is essential for accurate portal dose imaging.
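    The two operating rules reported above can be expressed as simple bookkeeping; the function names and the dose-rate units (MU/min) are assumptions for illustration.

    ```python
    def frames_per_second(fct_ms):
        """Acquisition speed implied by the frame cycle time."""
        return 1000.0 / fct_ms

    def dose_per_frame(dose_rate_mu_per_min, fct_ms):
        """MU delivered during one frame cycle."""
        return dose_rate_mu_per_min / 60.0 * (fct_ms / 1000.0)

    def acquisition_ok(fct_ms, dose_rate_mu_per_min,
                       min_fct_ms=55.0, max_mu_per_frame=1.53):
        """Check the two rules from the abstract: keep the FCT at or above
        ~55 ms (below ~18 frames/s) to avoid timing instability, and keep
        the dose per frame below 1.53 MU/frame to avoid saturation."""
        return (fct_ms >= min_fct_ms and
                dose_per_frame(dose_rate_mu_per_min, fct_ms) < max_mu_per_frame)
    ```

    For example, a 60 ms FCT at 400 MU/min gives 0.4 MU/frame and passes both rules, while a 200 ms FCT at 600 MU/min gives 2.0 MU/frame and saturates.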

  8. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
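    The TCI forward model and the patch division described above can be sketched as follows (a minimal illustration with assumed binary masks and a reduced 32×32 frame size; the paper's reconstruction itself, TwIST or GMM, is not reproduced here).

    ```python
    import numpy as np

    # Forward model of temporal compressive imaging with T=8: a single
    # compressive measurement is the sum of T high-speed frames, each
    # modulated by its own (here: random binary) coded mask.
    rng = np.random.default_rng(1)
    T, H, W = 8, 32, 32
    video = rng.random((T, H, W))                              # 8 high-speed frames
    masks = rng.integers(0, 2, size=(T, H, W)).astype(float)   # per-frame codes

    measurement = (masks * video).sum(axis=0)   # one compressive frame

    def to_patches(img, p=8):
        """Divide a frame into non-overlapping p x p patches, as done
        before reconstruction to reduce time and memory usage."""
        h, w = img.shape
        return img.reshape(h // p, p, w // p, p).swapaxes(1, 2).reshape(-1, p, p)

    patches = to_patches(measurement)
    ```

    A reconstruction algorithm would then recover the T frames patch by patch from `measurement` and the known `masks`.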

  9. High speed three-dimensional laser scanner with real time processing

    NASA Technical Reports Server (NTRS)

    Lavelle, Joseph P. (Inventor); Schuet, Stefan R. (Inventor)

    2008-01-01

A laser scanner computes a range from a laser line to an imaging sensor. The laser line illuminates a detail within an area covered by the imaging sensor, the area having a first dimension and a second dimension. The detail has a dimension perpendicular to the area. A traverse moves a laser emitter coupled to the imaging sensor at a height above the area. The laser emitter is positioned at an offset along the scan direction with respect to the imaging sensor, and is oriented at a depression angle with respect to the area. The laser emitter projects the laser line along the second dimension of the area at a position where an image frame is acquired. The imaging sensor is sensitive to laser reflections from the detail produced by the laser line. The imaging sensor images the laser reflections from the detail to generate the image frame. A computer having a pipeline structure is connected to the imaging sensor for reception of the image frame, and for computing the range to the detail using the height, depression angle and/or offset. The computer displays the range to the area and the detail thereon covered by the image frame.
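    The patent abstract does not give the range formula, but the geometry it describes is standard sheet-of-light triangulation. The following sketch assumes a depression angle measured from the surface plane and an illustrative pixel scale; it is not the patented computation.

    ```python
    import math

    def detail_height(pixel_shift, mm_per_pixel, depression_angle_deg):
        """Sheet-of-light triangulation sketch: a surface detail of
        height z displaces the laser stripe along the scan direction by
        z / tan(theta), where theta is the depression angle.  Inverting
        gives z from the observed stripe shift in the image frame."""
        shift_mm = pixel_shift * mm_per_pixel
        return shift_mm * math.tan(math.radians(depression_angle_deg))

    # Example: a 40-pixel stripe shift at 0.1 mm/pixel and a 45 degree
    # depression angle corresponds to a 4 mm tall detail.
    z = detail_height(40, 0.1, 45.0)
    ```

    Steeper depression angles trade height sensitivity for a more compact stripe displacement, which is one reason the emitter offset and angle appear together in the range computation.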

  10. Quantum image coding with a reference-frame-independent scheme

    NASA Astrophysics Data System (ADS)

    Chapeau-Blondeau, François; Belin, Etienne

    2016-07-01

    For binary images, or bit planes of non-binary images, we investigate the possibility of a quantum coding decodable by a receiver in the absence of reference frames shared with the emitter. Direct image coding with one qubit per pixel and non-aligned frames leads to decoding errors equivalent to a quantum bit-flip noise increasing with the misalignment. We show the feasibility of frame-invariant coding by using for each pixel a qubit pair prepared in one of two controlled entangled states. With just one common axis shared between the emitter and receiver, exact decoding for each pixel can be obtained by means of two two-outcome projective measurements operating separately on each qubit of the pair. With strictly no alignment information between the emitter and receiver, exact decoding can be obtained by means of a two-outcome projective measurement operating jointly on the qubit pair. In addition, the frame-invariant coding is shown much more resistant to quantum bit-flip noise compared to the direct non-invariant coding. For a cost per pixel of two (entangled) qubits instead of one, complete frame-invariant image coding and enhanced noise resistance are thus obtained.

  11. Multiple-frame IR photo-recorder KIT-3M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E; Wilkins, P; Nebeker, N

    2006-05-15

This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared radiation photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the spectral range 1-10 micrometers into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 C to 2000 C with an exposure time of 1-20 µs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR-laser radiation.

  12. Reproducibility and Angle Independence of Electromechanical Wave Imaging for the Measurement of Electromechanical Activation during Sinus Rhythm in Healthy Humans.

    PubMed

    Melki, Lea; Costet, Alexandre; Konofagou, Elisa E

    2017-10-01

    Electromechanical wave imaging (EWI) is an ultrasound-based technique that can non-invasively map the transmural electromechanical activation in all four cardiac chambers in vivo. The objective of this study was to determine the reproducibility and angle independence of EWI for the assessment of electromechanical activation during normal sinus rhythm (NSR) in healthy humans. Acquisitions were performed transthoracically at 2000 frames/s on seven healthy human hearts in parasternal long-axis, apical four- and two-chamber views. EWI data was collected twice successively in each view in all subjects, while four successive acquisitions were obtained in one case. Activation maps were generated and compared (i) within the same acquisition across consecutive cardiac cycles; (ii) within same view across successive acquisitions; and (iii) within equivalent left-ventricular regions across different views. EWI was capable of characterizing electromechanical activation during NSR and of reliably obtaining similar patterns of activation. For consecutive heart cycles, the average 2-D correlation coefficient between the two isochrones across the seven subjects was 0.9893, with a mean average activation time fluctuation in LV wall segments across acquisitions of 6.19%. A mean activation time variability of 12% was obtained across different views with a measurement bias of only 3.2 ms. These findings indicate that EWI can map the electromechanical activation during NSR in human hearts in transthoracic echocardiography in vivo and results in reproducible and angle-independent activation maps. Copyright © 2017 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  13. Multiport backside-illuminated CCD imagers for high-frame-rate camera applications

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Sauer, Donald J.; Hseuh, Fu-Lung; Shallcross, Frank V.; Taylor, Gordon C.; Meray, Grazyna M.; Tower, John R.; Harrison, Lorna J.; Lawler, William B.

    1994-05-01

Two multiport, second-generation CCD imager designs have been fabricated and successfully tested: a 16-port 512 X 512 array and a 32-port 1024 X 1024 array. Both designs are back illuminated, have on-chip CDS and lateral blooming control, and use a split vertical frame transfer architecture with full frame storage. The 512 X 512 device has been operated at rates over 800 frames per second. The 1024 X 1024 device has been operated at rates over 300 frames per second. The major changes incorporated in the second-generation design are: a reduction in gate length in the output area to give improved high-clock-rate performance, modified on-chip CDS circuitry for reduced noise, and optimized implants to improve the performance of blooming control at lower clock amplitude. This paper discusses the imager design improvements and presents measured performance results at high and moderate frame rates. The design and performance of three moderate-frame-rate cameras are discussed.

  14. Full-Frame Reference for Test Photo of Moon

    NASA Image and Video Library

    2005-09-10

This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.

  15. SU-E-T-171: Missing Dose in Integrated EPID Images.

    PubMed

    King, B; Seymour, E; Nitschke, K

    2012-06-01

    A dosimetric artifact has been observed with Varian EPIDs in the presence of beam interrupts. This work determines the root cause and significance of this artifact. Integrated mode EPID images were acquired both with and without a manual beam interrupt for rectangular, sliding gap IMRT fields. Simultaneously, the individual frames were captured on a separate computer using a frame-grabber system. Synchronization of the individual frames with the integrated images allowed the determination of precisely how the EPID behaved during regular operation as well as when a beam interrupt was triggered. The ability of the EPID to reliably monitor a treatment in the presence of beam interrupts was tested by comparing the difference between the interrupt and non-interrupt images. The interrupted images acquired in integrated acquisition mode displayed unanticipated behaviour in the region of the image where the leaves were located when the beam interrupt was triggered. Differences greater than 5% were observed as a result of the interrupt in some cases, with the discrepancies occurring in a non-uniform manner across the imager. The differences measured were not repeatable from one measurement to another. Examination of the individual frames showed that the EPID was consistently losing a small amount of dose at the termination of every exposure. Inclusion of one additional frame in every image rectified the unexpected behaviour, reducing the differences to 1% or less. Although integrated EPID images nominally capture the entire dose delivered during an exposure, a small amount of dose is consistently being lost at the end of every exposure. The amount of missing dose is random, depending on the exact beam termination time within a frame. Inclusion of an extra frame at the end of each exposure effectively rectifies the problem, making the EPID more suitable for clinical dosimetry applications. 
The authors received support from Varian Medical Systems in the form of software and equipment loans as well as technical support. © 2012 American Association of Physicists in Medicine.

  16. Noise characteristics of CT perfusion imaging: how does noise propagate from source images to final perfusion maps?

    NASA Astrophysics Data System (ADS)

    Li, Ke; Chen, Guang-Hong

    2016-03-01

Cerebral CT perfusion (CTP) imaging is playing an important role in the diagnosis and treatment of acute ischemic strokes. Meanwhile, the reliability of CTP-based ischemic lesion detection has been challenged due to the noisy appearance and low signal-to-noise ratio of CTP maps. To reduce noise and improve image quality, a rigorous study on the noise transfer properties of CTP systems is highly desirable to provide the needed scientific guidance. This paper concerns how noise in the CTP source images propagates to the final CTP maps. Both theoretical derivations and subsequent validation experiments demonstrated that the noise level of the background frames plays a dominant role in the noise of the cerebral blood volume (CBV) maps. This is in direct contradiction with the general belief that noise of non-background image frames is of greater importance in CTP imaging. The study found that when the radiation doses delivered to the background frames and to all non-background frames are equal, the lowest noise variance is achieved in the final CBV maps. This novel equality condition provides a practical means to optimize radiation dose delivery in CTP data acquisition: radiation exposures should be modulated between background frames and non-background frames so that the above equality condition is satisfied. For several typical CTP acquisition protocols, numerical simulations and an in vivo canine experiment demonstrated that noise of CBV can be effectively reduced using the proposed exposure modulation method.
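    The equality condition above translates into a simple exposure split; the frame counts and total dose below are made-up numbers used only to show the bookkeeping.

    ```python
    # Sketch of the reported equality condition: for a fixed total dose D,
    # give half the budget to the background frames (combined) and half to
    # the non-background frames (combined).
    def per_frame_exposure(total_dose, n_background, n_nonbackground):
        """Per-frame exposures satisfying the equal-dose condition that,
        per the abstract, minimizes CBV noise variance."""
        d_bg = total_dose / 2.0 / n_background
        d_nb = total_dose / 2.0 / n_nonbackground
        return d_bg, d_nb

    # e.g. 4 background and 26 non-background frames in a 30-frame protocol
    d_bg, d_nb = per_frame_exposure(total_dose=300.0,
                                    n_background=4, n_nonbackground=26)
    ```

    Because there are fewer background frames, each one receives a proportionally larger exposure, which is the practical meaning of modulating exposure between the two frame groups.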

  17. Composite ultrasound imaging apparatus and method

    DOEpatents

    Morimoto, Alan K.; Bow, Jr., Wallace J.; Strong, David Scott; Dickey, Fred M.

    1998-01-01

    An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image.

  18. Composite ultrasound imaging apparatus and method

    DOEpatents

    Morimoto, A.K.; Bow, W.J. Jr.; Strong, D.S.; Dickey, F.M.

    1998-09-15

    An imaging apparatus and method for use in presenting composite two dimensional and three dimensional images from individual ultrasonic frames. A cross-sectional reconstruction is applied by using digital ultrasound frames, transducer orientation and a known center. Motion compensation, rank value filtering, noise suppression and tissue classification are utilized to optimize the composite image. 37 figs.

  19. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

A method of keyhole imaging based on a camera array is realized to obtain video images behind a keyhole in a shielded space at a relatively long distance. We obtain multi-angle video images by using a 2×2 CCD camera array to capture the scene behind the keyhole from four directions. The multi-angle video images are saved in the form of frame sequences, and this paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to perform edge detection and fill the images. The stitching of the four images is accomplished on the basis of a two-image stitching algorithm: the SIFT method is adopted to accomplish the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is also proposed in this paper. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the image frame sequence in which every single frame is stitched. The results show that the video is clear and natural and the brightness transition is smooth. There are no obvious stitching artifacts in the video, and the method can be applied in different engineering environments.
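    The RANSAC-plus-homography step described above can be sketched without any imaging library; this stand-in fits the homography by direct linear transform (DLT) on point correspondences (which SIFT matching would supply) and is not the authors' implementation.

    ```python
    import numpy as np

    def homography_dlt(src, dst):
        """Direct linear transform: fit a 3x3 homography H mapping src
        points to dst points from >= 4 correspondences (rows of (x, y))."""
        A = []
        for (x, y), (u, v) in zip(src, dst):
            A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]

    def apply_h(H, pts):
        """Apply homography H to an (N, 2) array of points."""
        p = np.column_stack([pts, np.ones(len(pts))]) @ H.T
        return p[:, :2] / p[:, 2:3]

    def ransac_homography(src, dst, iters=500, thresh=2.0, seed=0):
        """Minimal RANSAC loop: sample 4 correspondences, fit H by DLT,
        keep the model with the most inliers, then refit on all inliers
        (this eliminates the wrong matches mentioned in the abstract)."""
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(iters):
            idx = rng.choice(len(src), 4, replace=False)
            H = homography_dlt(src[idx], dst[idx])
            err = np.linalg.norm(apply_h(H, src) - dst, axis=1)
            inliers = err < thresh
            if best is None or inliers.sum() > best.sum():
                best = inliers
        return homography_dlt(src[best], dst[best]), best
    ```

    The refit on all inliers at the end plays the role of the transformation-matrix optimization the paper alludes to.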

  20. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. In an indoor environment, all unique floor types are considered and a single image is stored for each unique floor type. These floor images are considered as reference images. The algorithm acquires an input image frame, and then a region of interest is selected and scanned for obstacles using the pre-stored floor images. The algorithm compares the present frame and the next frame and computes the mean square error of the two frames. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If the mean square error is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these mean square errors is less than the threshold value α, the floor has changed; otherwise an obstacle exists. The proposed algorithm works in real time and 96% accuracy has been achieved.
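    The decision logic above maps directly to code; this is a minimal sketch with an illustrative threshold, omitting the region-of-interest selection.

    ```python
    import numpy as np

    def mse(a, b):
        """Mean square error between two frames."""
        return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

    def classify_next_frame(current, nxt, floor_refs, alpha):
        """Decision logic from the text: if consecutive frames agree,
        there is no obstacle; if they differ, check the next frame
        against all stored floor types before declaring an obstacle."""
        if mse(current, nxt) < alpha:
            return "no obstacle"
        if min(mse(nxt, ref) for ref in floor_refs) < alpha:
            return "floor changed"
        return "obstacle"
    ```

    Note that the same threshold α gates both decisions, so its value trades false obstacle alarms against missed floor transitions.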

  1. Improved optical flow motion estimation for digital image stabilization

    NASA Astrophysics Data System (ADS)

    Lai, Lijun; Xu, Zhiyong; Zhang, Xuyao

    2015-11-01

Optical flow is the instantaneous motion vector at each pixel in the image frame at a time instant. The gradient-based approach to optical flow computation does not work well when the video motion is too large. To alleviate this problem, we incorporate the algorithm into a pyramidal multi-resolution coarse-to-fine search strategy: a pyramid is used to obtain multi-resolution images; inter-frame affine parameters are estimated iteratively from the highest (coarsest) level down to the lowest level; and subsequent frames are compensated back to the first frame to obtain the stabilized sequence. The experimental results demonstrate that the proposed method has good performance in global motion estimation.
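    The coarse-to-fine strategy can be sketched as follows. For brevity this stand-in estimates a global translation with an integer search at each level instead of the paper's gradient-based optical flow and affine model; the pyramid logic (estimate at the coarsest level, double, refine at each finer level) is the same.

    ```python
    import numpy as np

    def downsample(img):
        """Halve resolution by 2x2 block averaging (one pyramid level)."""
        h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
        return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def shift_ssd(a, b, dy, dx):
        """Mean squared difference between a and b under a candidate
        shift (dy, dx), evaluated on the overlapping region only."""
        h, w = a.shape
        ys, xs = slice(max(0, dy), min(h, h + dy)), slice(max(0, dx), min(w, w + dx))
        yb, xb = slice(max(0, -dy), min(h, h - dy)), slice(max(0, -dx), min(w, w - dx))
        return float(((a[ys, xs] - b[yb, xb]) ** 2).mean())

    def coarse_to_fine_translation(f0, f1, levels=3, radius=2):
        """Pyramid coarse-to-fine global-translation estimate: find the
        motion at the coarsest level, double it, and refine within
        +/-radius at each finer level."""
        pyr0, pyr1 = [f0], [f1]
        for _ in range(levels - 1):
            pyr0.append(downsample(pyr0[-1]))
            pyr1.append(downsample(pyr1[-1]))
        dy = dx = 0
        for a, b in zip(reversed(pyr0), reversed(pyr1)):
            dy *= 2; dx *= 2
            best = min((shift_ssd(a, b, dy + i, dx + j), i, j)
                       for i in range(-radius, radius + 1)
                       for j in range(-radius, radius + 1))
            dy += best[1]; dx += best[2]
        return dy, dx
    ```

    The small per-level search radius is what lets the scheme recover motions far larger than a single-level gradient method could handle.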

  2. Label-free observation of tissues by high-speed stimulated Raman spectral microscopy and independent component analysis

    NASA Astrophysics Data System (ADS)

    Ozeki, Yasuyuki; Otsuka, Yoichi; Sato, Shuya; Hashimoto, Hiroyuki; Umemura, Wataru; Sumimura, Kazuhiko; Nishizawa, Norihiko; Fukui, Kiichi; Itoh, Kazuyoshi

    2013-02-01

    We have developed a video-rate stimulated Raman scattering (SRS) microscope with frame-by-frame wavenumber tunability. The system uses a 76-MHz picosecond Ti:sapphire laser and a subharmonically synchronized, 38-MHz Yb fiber laser. The Yb fiber laser pulses are spectrally sliced by a fast wavelength-tunable filter, which consists of a galvanometer scanner, a 4-f optical system and a reflective grating. The spectral resolution of the filter is ~ 3 cm-1. The wavenumber was scanned from 2800 to 3100 cm-1 with an arbitrary waveform synchronized to the frame trigger. For imaging, we introduced a 8-kHz resonant scanner and a galvanometer scanner. We were able to acquire SRS images of 500 x 480 pixels at a frame rate of 30.8 frames/s. Then these images were processed by principal component analysis followed by a modified algorithm of independent component analysis. This algorithm allows blind separation of constituents with overlapping Raman bands from SRS spectral images. The independent component (IC) spectra give spectroscopic information, and IC images can be used to produce pseudo-color images. We demonstrate various label-free imaging modalities such as 2D spectral imaging of the rat liver, two-color 3D imaging of a vessel in the rat liver, and spectral imaging of several sections of intestinal villi in the mouse. Various structures in the tissues such as lipid droplets, cytoplasm, fibrous texture, nucleus, and water-rich region were successfully visualized.
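    The PCA-then-ICA unmixing step described above can be sketched with a generic symmetric FastICA on whitened data; this is a stand-in for the authors' modified ICA algorithm, with spectra treated as rows of a data matrix.

    ```python
    import numpy as np

    def pca_whiten(X, n_components):
        """PCA whitening of spectra X (n_samples, n_features): returns
        the whitened data Z (unit covariance) and the whitening matrix K."""
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        K = (Vt[:n_components] / s[:n_components, None]) * np.sqrt(len(X))
        return Xc @ K.T, K

    def fastica(Z, n_iter=200, seed=0):
        """Symmetric FastICA with a tanh nonlinearity on whitened data Z,
        returning the unmixing matrix W (estimated sources are Z @ W.T)."""
        rng = np.random.default_rng(seed)
        n = Z.shape[1]
        W = rng.standard_normal((n, n))
        for _ in range(n_iter):
            G = np.tanh(Z @ W.T)
            W_new = (G.T @ Z) / len(Z) - np.diag((1.0 - G ** 2).mean(axis=0)) @ W
            U, _, Vt = np.linalg.svd(W_new)   # symmetric decorrelation:
            W = U @ Vt                        # W <- (W W^T)^(-1/2) W
        return W
    ```

    Applied per pixel to SRS spectral images, the rows of the recovered source matrix would play the role of the independent component spectra, and the per-pixel weights the pseudo-color IC images.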

  3. Frequency-locked pulse sequencer for high-frame-rate monochromatic tissue motion imaging.

    PubMed

    Azar, Reza Zahiri; Baghani, Ali; Salcudean, Septimiu E; Rohling, Robert

    2011-04-01

    To overcome the inherent low frame rate of conventional ultrasound, we have previously presented a system that can be implemented on conventional ultrasound scanners for high-frame-rate imaging of monochromatic tissue motion. The system employs a sector subdivision technique in the sequencer to increase the acquisition rate. To eliminate the delays introduced during data acquisition, a motion phase correction algorithm has also been introduced to create in-phase displacement images. Previous experimental results from tissue-mimicking phantoms showed that the system can achieve effective frame rates of up to a few kilohertz on conventional ultrasound systems. In this short communication, we present a new pulse sequencing strategy that facilitates high-frame-rate imaging of monochromatic motion such that the acquired echo signals are inherently in-phase. The sequencer uses the knowledge of the excitation frequency to synchronize the acquisition of the entire imaging plane to that of an external exciter. This sequencing approach eliminates any need for synchronization or phase correction and has applications in tissue elastography, which we demonstrate with tissue-mimicking phantoms. © 2011 IEEE

  4. UWGSP4: an imaging and graphics superworkstation and its medical applications

    NASA Astrophysics Data System (ADS)

    Jong, Jing-Ming; Park, Hyun Wook; Eo, Kilsu; Kim, Min-Hwan; Zhang, Peng; Kim, Yongmin

    1992-05-01

    UWGSP4 is configured with a parallel architecture for image processing and a pipelined architecture for computer graphics. The system's peak performance is 1,280 MFLOPS for image processing and over 200,000 Gouraud shaded 3-D polygons per second for graphics. The simulated sustained performance is about 50% of the peak performance in general image processing. Most of the 2-D image processing functions are efficiently vectorized and parallelized in UWGSP4. A performance of 770 MFLOPS in convolution and 440 MFLOPS in FFT is achieved. The real-time cine display, up to 32 frames of 1280 X 1024 pixels per second, is supported. In 3-D imaging, the update rate for the surface rendering is 10 frames of 20,000 polygons per second; the update rate for the volume rendering is 6 frames of 128 X 128 X 128 voxels per second. The system provides 1280 X 1024 X 32-bit double frame buffers and one 1280 X 1024 X 8-bit overlay buffer for supporting realistic animation, 24-bit true color, and text annotation. A 1280 X 1024-pixel, 66-Hz noninterlaced display screen with 1:1 aspect ratio can be windowed into the frame buffer for the display of any portion of the processed image or graphics.

  5. Can older adults resist the positivity effect in neural responding? The impact of verbal framing on event-related brain potentials elicited by emotional images.

    PubMed

    Rehmert, Andrea E; Kisley, Michael A

    2013-10-01

    Older adults have demonstrated an avoidance of negative information, presumably with a goal of greater emotional satisfaction. Understanding whether avoidance of negative information is a voluntary, motivated choice or an involuntary, automatic response will be important to differentiate, as decision making often involves emotional factors. With the use of an emotional framing event-related potential (ERP) paradigm, the present study investigated whether older adults could alter neural responses to negative stimuli through verbal reframing of evaluative response options. The late positive potential (LPP) response of 50 older adults and 50 younger adults was recorded while participants categorized emotional images in one of two framing conditions: positive ("more or less positive") or negative ("more or less negative"). It was hypothesized that older adults would be able to overcome a presumed tendency to down-regulate neural responding to negative stimuli in the negative framing condition, thus leading to larger LPP wave amplitudes to negative images. A similar effect was predicted for younger adults, but for positively valenced images, such that LPP responses would be increased in the positive framing condition compared with the negative framing condition. Overall, younger adults' LPP wave amplitudes were modulated by framing condition, including a reduction in the negativity bias in the positive frame. Older adults' neural responses were not significantly modulated, even though task-related behavior supported the notion that older adults were able to successfully adopt the negative framing condition.

  6. Can Older Adults Resist the Positivity Effect in Neural Responding: The Impact of Verbal Framing on Event-Related Brain Potentials Elicited by Emotional Images

    PubMed Central

    Rehmert, Andrea E.; Kisley, Michael A.

    2014-01-01

    Older adults have demonstrated an avoidance of negative information presumably with a goal of greater emotional satisfaction. Understanding whether avoidance of negative information is a voluntary, motivated choice, or an involuntary, automatic response will be important to differentiate, as decision-making often involves emotional factors. With the use of an emotional framing event-related potential (ERP) paradigm, the present study investigated whether older adults could alter neural responses to negative stimuli through verbal reframing of evaluative response options. The late-positive potential (LPP) response of 50 older adults and 50 younger adults was recorded while participants categorized emotional images in one of two framing conditions: positive (“more or less positive”) or negative (“more or less negative”). It was hypothesized that older adults would be able to overcome a presumed tendency to down-regulate neural responding to negative stimuli in the negative framing condition thus leading to larger LPP wave amplitudes to negative images. A similar effect was predicted for younger adults but for positively valenced images such that LPP responses would be increased in the positive framing condition compared to the negative framing condition. Overall, younger adults' LPP wave amplitudes were modulated by framing condition, including a reduction in the negativity bias in the positive frame. Older adults' neural responses were not significantly modulated even though task-related behavior supported the notion that older adults were able to successfully adopt the negative framing condition. PMID:23731435

  7. Dawn Orbit Determination Team: Modeling and Fitting of Optical Data at Vesta

    NASA Technical Reports Server (NTRS)

    Kennedy, Brian; Abrahamson, Matt; Ardito, Alessandro; Haw, Robert; Mastrodemos, Nicholas; Nandi, Sumita; Park, Ryan; Rush, Brian; Vaughan, Andrew

    2013-01-01

    The Dawn spacecraft was launched on September 27th, 2007. Its mission is to consecutively rendezvous with and observe the two largest bodies in the main asteroid belt, Vesta and Ceres. It has already completed over a year's worth of direct observations of Vesta (spanning from early 2011 through late 2012) and is currently on a cruise trajectory to Ceres, where it will begin scientific observations in mid-2015. Achieving this data collection required careful planning and execution from all Dawn operations teams. Dawn's Orbit Determination (OD) team was tasked with reconstruction of the as-flown trajectory as well as determination of the Vesta rotational rate, pole orientation and ephemeris, among other Vesta parameters. Improved knowledge of the Vesta pole orientation, specifically, was needed to target the final maneuvers that inserted Dawn into the first science orbit at Vesta. To solve for these parameters, the OD team used radiometric data from the Deep Space Network (DSN) along with optical data reduced from Dawn's Framing Camera (FC) images. This paper will describe the initial determination of the Vesta ephemeris and pole using a combination of radiometric and optical data, and also the progress the OD team has made since then to further refine the knowledge of Vesta's body frame orientation and rate with these data.

  8. Optical joint correlator for real-time image tracking and retinal surgery

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor)

    1991-01-01

    A method for tracking an object in a sequence of images is described. Such sequence of images may, for example, be a sequence of television frames. The object in the current frame is correlated with the object in the previous frame to obtain the relative location of the object in the two frames. An optical joint transform correlator apparatus is provided to carry out the process. Such joint transform correlator apparatus forms the basis for laser eye surgical apparatus where an image of the fundus of an eyeball is stabilized and forms the basis for the correlator apparatus to track the position of the eyeball caused by involuntary movement. With knowledge of the eyeball position, a surgical laser can be precisely pointed toward a position on the retina.

  9. Cranz-Schardin camera with a large working distance for the observation of small scale high-speed flows.

    PubMed

    Skupsch, C; Chaves, H; Brücker, C

    2011-08-01

    The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single CCD cameras. Light pulse energy in the range of 25 mJ and pulse duration of about 5 ns is provided by the laser. The laser light is converted to incoherent light by Rhodamine-B fluorescence dye in a cuvette. The laser beam coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination. Different light fiber lengths enable a delay of illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 × 10^6 frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet, propagating in air at supersonic speed. The initial phase of the formation of a jet structure is recorded.
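
    The relation between extra fiber length and interframe delay is a one-line calculation; the refractive index below is an assumed typical value for a silica fiber core, not a number taken from the paper:

```python
C_VACUUM = 299_792_458.0   # speed of light in vacuum, m/s
N_FIBER = 1.46             # assumed refractive index of a silica fiber core

def fiber_delay_s(extra_length_m, n=N_FIBER):
    """Illumination delay added by an extra length of fiber."""
    return extra_length_m * n / C_VACUUM

def frame_rate_fps(interframe_s):
    """Frame rate implied by a fixed interframe time."""
    return 1.0 / interframe_s
```

    Under these assumptions, the 25 ns interframe time quoted above corresponds to roughly 5.1 m of additional fiber per channel and to 40 × 10^6 frames per second.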

  10. New Subarray Readout Patterns for the ACS Wide Field Channel

    NASA Astrophysics Data System (ADS)

    Golimowski, D.; Anderson, J.; Arslanian, S.; Chiaberge, M.; Grogin, N.; Lim, Pey Lian; Lupie, O.; McMaster, M.; Reinhart, M.; Schiffer, F.; Serrano, B.; Van Marshall, M.; Welty, A.

    2017-04-01

    At the start of Cycle 24, the original CCD-readout timing patterns used to generate ACS Wide Field Channel (WFC) subarray images were replaced with new patterns adapted from the four-quadrant readout pattern used to generate full-frame WFC images. The primary motivation for this replacement was a substantial reduction of observatory and staff resources needed to support WFC subarray bias calibration, which became a new and challenging obligation after the installation of the ACS CCD Electronics Box Replacement during Servicing Mission 4. The new readout patterns also improve the overall efficiency of observing with WFC subarrays and enable the processing of subarray images through stages of the ACS data calibration pipeline (calacs) that were previously restricted to full-frame WFC images. The new readout patterns replace the original 512×512, 1024×1024, and 2048×2046-pixel subarrays with subarrays having 2048 columns and 512, 1024, and 2048 rows, respectively. Whereas the original square subarrays were limited to certain WFC quadrants, the new rectangular subarrays are available in all four quadrants. The underlying bias structure of the new subarrays now conforms with that of the corresponding regions of the full-frame image, which allows raw frames in all image formats to be calibrated using one contemporaneous full-frame "superbias" reference image. The original subarrays remain available for scientific use, but calibration of these image formats is no longer supported by STScI.

  11. Quantitative image fusion in infrared radiometry

    NASA Astrophysics Data System (ADS)

    Romm, Iliya; Cukurel, Beni

    2018-05-01

    Towards high-accuracy infrared radiance estimates, measurement practices and processing techniques aimed to achieve quantitative image fusion using a set of multi-exposure images of a static scene are reviewed. The conventional non-uniformity correction technique is extended, as the original is incompatible with quantitative fusion. Recognizing the inherent limitations of even the extended non-uniformity correction, an alternative measurement methodology, which relies on estimates of the detector bias using self-calibration, is developed. Combining data from multi-exposure images, two novel image fusion techniques that ultimately provide high tonal fidelity of a photoquantity are considered: ‘subtract-then-fuse’, which conducts image subtraction in the camera output domain and partially negates the bias frame contribution common to both the dark and scene frames; and ‘fuse-then-subtract’, which reconstructs the bias frame explicitly and conducts image fusion independently for the dark and the scene frames, followed by subtraction in the photoquantity domain. The performances of the different techniques are evaluated for various synthetic and experimental data, identifying the factors contributing to potential degradation of the image quality. The findings reflect the superiority of the ‘fuse-then-subtract’ approach, conducting image fusion via per-pixel nonlinear weighted least squares optimization.
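
    A minimal sketch of the 'fuse-then-subtract' approach, using a simple saturation-aware weighting in place of the per-pixel nonlinear weighted least squares described above (the saturation level and weighting rule are assumptions):

```python
import numpy as np

SAT = 255.0  # assumed sensor saturation level

def fuse(frames, times):
    """Fuse a multi-exposure stack into one photoquantity map.

    Each frame votes with weight proportional to its exposure time, and
    saturated pixels are excluded from the vote.
    """
    frames = np.asarray(frames, dtype=float)
    t = np.asarray(times, dtype=float)[:, None, None]
    w = np.where(frames < SAT, t, 0.0)
    w[0] = np.maximum(w[0], 1e-9)     # shortest exposure as a fallback
    return (w * frames / t).sum(axis=0) / w.sum(axis=0)

def fuse_then_subtract(scene_frames, dark_frames, times):
    """Fuse scene and dark stacks independently, then subtract in the
    photoquantity domain, as in the 'fuse-then-subtract' technique."""
    return fuse(scene_frames, times) - fuse(dark_frames, times)
```

    Fusing the dark stack separately means the bias contribution is reconstructed explicitly before subtraction, rather than relying on it cancelling in the camera output domain.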

  12. Convex composite wavelet frame and total variation-based image deblurring using nonconvex penalty functions

    NASA Astrophysics Data System (ADS)

    Shen, Zhengwei; Cheng, Lishuang

    2017-09-01

    Total variation (TV)-based image deblurring methods can introduce staircase artifacts in the homogenous regions of the latent images recovered from the degraded images, while a wavelet/frame-based image deblurring method will lead to spurious noise spikes and pseudo-Gibbs artifacts in the vicinity of discontinuities of the latent images. To suppress these artifacts efficiently, we propose a nonconvex composite wavelet/frame and TV-based image deblurring model. In this model, the wavelet/frame and the TV-based methods may complement each other, which is verified by theoretical analysis and experimental results. To further improve the quality of the latent images, nonconvex penalty functions are used as the regularization terms of the model, which may induce a sparser solution and more accurately estimate the relatively large gradient or wavelet/frame coefficients of the latent images. In addition, by choosing a suitable parameter for the nonconvex penalty function, the subproblem that is split by the alternating direction method of multipliers algorithm from the proposed model can be guaranteed to be a convex optimization problem; hence, each subproblem can converge to a global optimum. The mean doubly augmented Lagrangian and the isotropic split Bregman algorithms are used to solve these convex subproblems, where the designed proximal operator is used to reduce the computational complexity of the algorithms. Extensive numerical experiments indicate that the proposed model and algorithms are comparable to other state-of-the-art models and methods.

  13. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application specific data at standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated to be able to not only perform the functions of a conventional camera system but also to perform applications such as real time optical flow estimation.
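
    The multiple-capture idea for dynamic-range enhancement can be sketched per pixel: take several reads at increasing exposure times and, for each pixel, keep the longest read that did not saturate, rescaled to a common radiance. The saturation level and selection rule below are illustrative assumptions:

```python
import numpy as np

FULL_WELL = 255.0  # assumed saturation level of a single readout

def fuse_multiple_capture(frames, exposure_times):
    """Combine captures at increasing exposure times into one radiance map."""
    frames = np.asarray(frames, dtype=float)
    times = np.asarray(exposure_times, dtype=float)
    radiance = frames[0] / times[0]   # the shortest read is always kept as a fallback
    for frame, t in zip(frames[1:], times[1:]):
        unsaturated = frame < FULL_WELL
        radiance = np.where(unsaturated, frame / t, radiance)
    return radiance
```

    Bright pixels retain the short-exposure estimate while dark pixels benefit from the longer, less noisy reads, extending dynamic range beyond a single capture.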

  14. Age differences in treatment decision making for breast cancer in a sample of healthy women: the effects of body image and risk framing.

    PubMed

    Romanek, Kathleen M; McCaul, Kevin D; Sandgren, Ann K

    2005-07-01

    To examine the effects of age, body image, and risk framing on treatment decision making for breast cancer using a healthy population. An experimental 2 (younger women, older women) X 2 (survival, mortality frame) between-groups design. Midwestern university. Two groups of healthy women: 56 women ages 18-24 from undergraduate psychology courses and 60 women ages 35-60 from the university community. Healthy women imagined that they had been diagnosed with breast cancer and received information regarding lumpectomy versus mastectomy and recurrence rates. Participants indicated whether they would choose lumpectomy or mastectomy and why. Age, framing condition, treatment choice, body image, and reasons for treatment decision. The difference in treatment selection between younger and older women was mediated by concern for appearance. No main effect for risk framing was found; however, older women were somewhat less likely to select lumpectomy when given a mortality frame. Age, mediated by body image, influences treatment selection of lumpectomy versus mastectomy. Framing has no direct effect on treatment decisions, but younger and older women may be affected by risk information differently. Nurses should provide women who recently have been diagnosed with breast cancer with age-appropriate information regarding treatment alternatives to ensure women's active participation in the decision-making process. Women who have different levels of investment in body image also may have different concerns about treatment, and healthcare professionals should be alert to and empathetic of such concerns.

  15. An effective and robust method for tracking multiple fish in video image based on fish head detection.

    PubMed

    Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu

    2016-06-23

    Fish tracking is an important step for video based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequence is a highly challenging problem. The current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. In order to better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, global optimization method can be applied to associate the target between consecutive frames. Results show that our method can accurately detect the position and direction information of fish head, and has a good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories for dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
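
    The cost function combining head position and direction, followed by frame-to-frame association, might look like the following sketch; the weight `w_dir` and the greedy assignment are assumptions standing in for the paper's cost function parameters and global optimization:

```python
import numpy as np

def association_cost(prev_heads, next_heads, w_dir=10.0):
    """Cost matrix combining head-position distance and heading change.

    Each row of prev_heads/next_heads is (x, y, theta) for one detected
    fish head; w_dir is an assumed weight trading off position (pixels)
    against direction (radians), not a value from the paper.
    """
    dpos = np.linalg.norm(
        prev_heads[:, None, :2] - next_heads[None, :, :2], axis=2)
    dth = np.abs(prev_heads[:, None, 2] - next_heads[None, :, 2])
    dth = np.minimum(dth, 2 * np.pi - dth)   # wrap the angle difference
    return dpos + w_dir * dth

def greedy_match(cost):
    """Associate detections between consecutive frames by increasing cost."""
    cost = cost.copy()
    pairs = []
    for _ in range(min(cost.shape)):
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        pairs.append((int(i), int(j)))
        cost[i, :] = np.inf
        cost[:, j] = np.inf
    return sorted(pairs)
```

    A global method such as the Hungarian algorithm would replace `greedy_match` in a faithful implementation; the greedy version keeps the sketch short.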

  16. Super-resolution imaging applied to moving object tracking

    NASA Astrophysics Data System (ADS)

    Swalaganata, Galandaru; Ratna Sulistyaningrum, Dwi; Setiyono, Budi

    2017-10-01

    Moving object tracking in a video is a method used to detect and analyze changes that occur in an object being observed. High visual quality and precise localization of the tracked target are desired in modern tracking systems. The tracked object does not always appear clearly, which makes the tracking result less precise; the reasons include low-quality video, system noise, small object size, and other factors. In order to improve the precision of the tracked object, especially for small objects, we propose a two-step solution that integrates a super-resolution technique into the tracking approach. The first step applies super-resolution imaging to the frame sequence; this is done by cropping several frames or all of the frames. The second step tracks the resulting super-resolution images. Super-resolution imaging is a technique for obtaining high-resolution images from low-resolution images. In this research, single-frame super-resolution, which has the advantage of fast computation, is proposed for the tracking approach. The method used for tracking is Camshift, whose advantage is a simple histogram-based calculation in HSV color space that copes with conditions where the color of the object varies. The computational complexity and large memory requirements needed for the implementation of super-resolution and tracking were reduced, and the precision of the tracked target was good. Experiments showed that integrating super-resolution imaging into the tracking technique can track the object precisely under various backgrounds, shape changes of the object, and good lighting conditions.

  17. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  18. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGES

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; ...

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  19. Heroes or Health Victims?: Exploring How the Elite Media Frames Veterans on Veterans Day.

    PubMed

    Rhidenour, Kayla B; Barrett, Ashley K; Blackburn, Kate G

    2017-11-27

    We examine the frames the elite news media uses to portray veterans on and surrounding Veterans Day 2012, 2013, 2014, and 2015. We use mental health illness and media framing literature to explore how, why, and to what extent Veterans Day news coverage uses different media frames across the four consecutive years. We compiled a Media Coverage Corpus for each year, which contains the quotes and paraphrased remarks used in all veterans news stories for that year. In our primary study, we applied the meaning extraction method (MEM) to extract emergent media frames for Veterans Day 2014 and compiled a word frequency list, which captures the words most commonly used within the corpora. In post hoc analyses, we collected news stories and compiled word frequency lists for Veterans Day 2012, 2013, and 2015. Our findings reveal dissenting frames across 2012, 2013, and 2014 Veterans Day media coverage. Word frequency results suggest the 2012 and 2013 media frames largely celebrate veterans as heroes, but the 2014 coverage depicts veterans as victimized by their wartime experiences. Furthermore, our results demonstrate how the prevailing 2015 media frames could be a reaction to 2014 frames that portrayed veterans as health victims. We consider the ramifications of this binary portrayal of veterans as either health victims or heroes and discuss the implications of these dueling frames for veterans' access to healthcare resources.

  20. Robotically-adjustable microstereotactic frames for image-guided neurosurgery

    NASA Astrophysics Data System (ADS)

    Kratchman, Louis B.; Fitzpatrick, J. Michael

    2013-03-01

    Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically-adjusted, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, which exceeds the required accuracy for deep brain stimulation surgery.

  1. Probabilistic choice between symmetric disparities in motion stereo matching for a lateral navigation system

    NASA Astrophysics Data System (ADS)

    Ershov, Egor; Karnaukhov, Victor; Mozerov, Mikhail

    2016-02-01

    Two consecutive frames of a lateral navigation camera video sequence can be considered as an appropriate approximation to epipolar stereo. To overcome edge-aware inaccuracy caused by occlusion, we propose a model that matches the current frame to the next and to the previous ones. The positive disparity of matching to the previous frame has its symmetric negative disparity to the next frame. The proposed algorithm performs a probabilistic choice for each matched pixel between the positive disparity and its symmetric disparity cost. A disparity map obtained by optimization over the cost volume composed of the proposed probabilistic choice is more accurate than the traditional left-to-right and right-to-left disparity maps cross-check. Our algorithm also requires half as many computational operations per pixel as the cross-check technique. The effectiveness of our approach is demonstrated on synthetic data and real video sequences with ground-truth values.
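
    The choice between symmetric disparities can be approximated by a hard per-pixel minimum over the two symmetric costs, shown here on 1-D rows with a simple absolute-difference cost (both simplifications of the paper's probabilistic model):

```python
import numpy as np

def matching_cost(ref, tgt, d):
    """Absolute-difference cost of matching ref against tgt shifted by d.

    Rows are treated as 1-D epipolar lines; np.roll wraps at the border,
    which a real implementation would handle explicitly.
    """
    return np.abs(ref - np.roll(tgt, d, axis=-1))

def symmetric_cost_volume(prev_f, cur_f, next_f, d_max):
    """For each candidate disparity d, keep the smaller of the cost at +d
    toward the next frame and at the symmetric -d toward the previous
    frame -- a hard stand-in for the per-pixel probabilistic choice."""
    return np.stack([
        np.minimum(matching_cost(cur_f, next_f, -d),
                   matching_cost(cur_f, prev_f, d))
        for d in range(d_max + 1)])

def disparity_map(prev_f, cur_f, next_f, d_max):
    """Winner-takes-all disparity over the symmetric cost volume."""
    return symmetric_cost_volume(prev_f, cur_f, next_f, d_max).argmin(axis=0)
```

    Because each candidate disparity is scored once against both neighbors, no second left-to-right/right-to-left pass is needed, which is the source of the per-pixel savings claimed above.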

  2. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented by hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images with hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly save the cost of CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented by hardware description language and verified by a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time with a low cost and can correct the color of under-exposed images well.
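
    Histogram matching between a short- and a long-exposure preview is the core of the ILUT construction; a generic version (not the authors' improved variant) is:

```python
import numpy as np

def build_lut(short_img, long_img, levels=256):
    """Build a LUT mapping the short exposure's tones onto the long one's.

    Classic histogram matching: each gray level is sent to the reference
    level whose cumulative distribution value first reaches its own.
    """
    cdf_s = np.cumsum(np.bincount(short_img.ravel(),
                                  minlength=levels)) / short_img.size
    cdf_l = np.cumsum(np.bincount(long_img.ravel(),
                                  minlength=levels)) / long_img.size
    return np.searchsorted(cdf_l, cdf_s).clip(0, levels - 1).astype(np.uint8)

def correct(img, lut):
    """Per-pixel LUT lookup -- the step that needs no frame memory."""
    return lut[img]
```

    Because the LUT is computed from the small preview images before the full-resolution capture arrives, the final correction reduces to a per-pixel lookup that can run in streaming hardware.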

  3. Dynamic Imaging of the Eye, Optic Nerve, and Extraocular Muscles With Golden Angle Radial MRI

    PubMed Central

    Smith, David S.; Smith, Alex K.; Welch, E. Brian; Smith, Seth A.

    2017-01-01

    Purpose: The eye and its accessory structures, the optic nerve and the extraocular muscles, form a complex dynamic system. In vivo magnetic resonance imaging (MRI) of this system in motion can have substantial benefits in understanding oculomotor functioning in health and disease, but has been restricted to date to imaging of static gazes only. The purpose of this work was to develop a technique to image the eye and its accessory visual structures in motion. Methods: Dynamic imaging of the eye was developed on a 3-Tesla MRI scanner, based on a golden angle radial sequence that allows freely selectable frame-rate and temporal-span image reconstructions from the same acquired data set. Retrospective image reconstructions at a chosen frame rate of 57 ms per image yielded high-quality in vivo movies of various eye motion tasks performed in the scanner. Motion analysis was performed for a left–right version task where motion paths, lengths, and strains/globe angle of the medial and lateral extraocular muscles and the optic nerves were estimated. Results: Offline image reconstructions resulted in dynamic images of bilateral visual structures of healthy adults in only ∼15-s imaging time. Qualitative and quantitative analyses of the motion enabled estimation of trajectories, lengths, and strains on the optic nerves and extraocular muscles at very high frame rates of ∼18 frames/s. Conclusions: This work presents an MRI technique that enables high-frame-rate dynamic imaging of the eyes and orbital structures. The presented sequence has the potential to be used in furthering the understanding of oculomotor mechanics in vivo, both in health and disease. PMID:28813574
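
    Golden-angle acquisition owes its freely selectable frame rate to the near-uniform angular coverage of any window of consecutive spokes, which a few lines can verify:

```python
import numpy as np

GOLDEN_ANGLE_DEG = 180.0 * (np.sqrt(5.0) - 1.0) / 2.0   # ~111.246 degrees

def spoke_angles(n_spokes, start=0):
    """Spoke angles (mod 180 deg) for a golden-angle radial acquisition."""
    idx = np.arange(start, start + n_spokes)
    return (idx * GOLDEN_ANGLE_DEG) % 180.0

def max_gap_deg(angles):
    """Largest angular gap: a uniformity measure for a reconstruction window."""
    a = np.sort(angles % 180.0)
    return np.diff(np.concatenate([a, [a[0] + 180.0]])).max()
```

    Any contiguous window of spokes, wherever it starts in the scan, has the same bounded gap structure, so frame rate and temporal span can be chosen retrospectively at reconstruction time.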

  4. The "Gainful Employment Rule" and Student Loan Defaults: How the Policy Frame Overlooks Important Normative Implications

    ERIC Educational Resources Information Center

    Serna, Gabriel

    2014-01-01

    This essay examines normative aspects of the gainful employment rule and how the policy frame and image miss important implications for student aid policy. Because the economic and social burdens associated with the policy are typically borne by certain socioeconomic and ethnic groups, the policy frame and image do not identify possible negative…

  5. Research of spectacle frame measurement system based on structured light method

    NASA Astrophysics Data System (ADS)

    Guan, Dong; Chen, Xiaodong; Zhang, Xiuda; Yan, Huimin

    2016-10-01

    Automatic eyeglass lens edging systems are now widely used to cut and polish uncut lenses based on spectacle frame shape data obtained from a frame measuring machine installed in the system. The conventional approach to acquiring the frame shape data works in a contact scanning mode, with a probe tracing the groove contour of the spectacle frame, which requires a sophisticated mechanical and numerical control system. In this paper, a novel non-contact optical method based on structured light is proposed to measure the three-dimensional (3D) shape of the spectacle frame. We first focus on processing approaches that address the deterioration of the structured light stripes caused by intense specular reflection on the frame surface. Bright-dark bi-level fringe projection, multiple exposures, and high-dynamic-range imaging are introduced to obtain a high-quality image of the structured light stripes. Then, a gamma transform and median filtering are applied to enhance image contrast. To remove background noise from the image and extract the region of interest (ROI), a specially designed auxiliary lighting system is used to effectively distinguish the object from the background. In addition, a morphological method with specific structuring elements is adopted to remove noise between the stripes and the boundary of the spectacle frame. After fringe-center extraction and depth recovery via a look-up table, the 3D shape of the spectacle frame is recovered.
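
    The contrast-enhancement stage (gamma transform plus median filtering) can be sketched in NumPy (parameter values and function names are illustrative, not taken from the paper):

```python
import numpy as np

def gamma_enhance(img, gamma=0.5):
    """Gamma transform on an image normalized to [0, 1]; gamma < 1 lifts
    dark regions so dim fringe stripes become visible."""
    return np.clip(img, 0.0, 1.0) ** gamma

def median3(img):
    """Naive 3x3 median filter (interior pixels only), suppressing the
    impulse noise that survives between the projected stripes."""
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            out[i, j] = np.median(img[i - 1:i + 2, j - 1:j + 2])
    return out
```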

  6. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum from the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced by strong-field dissociative double ionization of methyl iodide.
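
    The real-time centroiding step can be sketched as threshold-plus-flood-fill spot finding (a simplified NumPy sketch; the actual system runs this at a 1 kHz repetition rate, and `spot_centroids` is an illustrative name):

```python
import numpy as np

def spot_centroids(frame, thresh):
    """Group above-threshold pixels by 4-connectivity (simple flood fill) and
    return (y, x, total_intensity) per spot: the intensity-weighted centroid
    plus the spot intensity correlated with PMT peak heights."""
    mask = frame > thresh
    seen = np.zeros_like(mask)
    spots = []
    for seed in zip(*np.nonzero(mask)):
        if seen[seed]:
            continue
        seen[seed] = True
        stack, pix = [seed], []
        while stack:
            y, x = stack.pop()
            pix.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < frame.shape[0] and 0 <= nx < frame.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        ys, xs = np.array(pix).T
        w = frame[ys, xs]
        spots.append(((ys * w).sum() / w.sum(), (xs * w).sum() / w.sum(), w.sum()))
    return spots
```

    The returned total intensity per spot is the quantity one would correlate with PMT peak heights for multi-hit assignment.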

  7. SU-F-J-96: Comparison of Frame-Based and Mutual Information Registration Techniques for CT and MR Image Sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Popple, R; Bredel, M; Brezovich, I

    Purpose: To compare the accuracy of CT-MR registration using a mutual information method with registration using a frame-based localizer box. Methods: Ten patients having the Leksell head frame and scanned with a modality-specific localizer box were imported into the treatment planning system. The fiducial rods of the localizer box were contoured on both the MR and CT scans. The skull was contoured on the CT images. The MR and CT images were registered by two methods. The frame-based method used the transformation that minimized the mean square distance of the centroids of the contours of the fiducial rods from a mathematical model of the localizer. The mutual information method used automated image registration tools in the TPS and was restricted to a volume-of-interest defined by the skull contours with a 5 mm margin. For each case, the two registrations were adjusted by two evaluation teams, each comprised of an experienced radiation oncologist and neurosurgeon, to optimize alignment in the region of the brainstem. The teams were blinded to the registration method. Results: The mean adjustment was 0.4 mm (range 0 to 2 mm) and 0.2 mm (range 0 to 1 mm) for the frame and mutual information methods, respectively. The median difference between the frame and mutual information registrations was 0.3 mm, but was not statistically significant using the Wilcoxon signed rank test (p=0.37). Conclusion: The difference between frame and mutual information registration techniques was neither statistically significant nor, for most applications, clinically important. These results suggest that mutual information is equivalent to frame-based image registration for radiosurgery. Work is ongoing to add additional evaluators and to assess the differences between evaluators.
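
    A least-squares rigid fit of contoured rod centroids to a localizer model is the core of the frame-based method; a generic Kabsch-style sketch (the TPS's actual solver is not public, and `rigid_fit` is an illustrative name):

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (Kabsch algorithm) mapping the N x 3
    point set P onto Q, minimizing the mean square distance -- the kind of
    fit used to align fiducial-rod centroids to a localizer model."""
    Pm, Qm = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pm).T @ (Q - Qm)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, Qm - R @ Pm                   # rotation and translation
```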

  8. Interactive distributed hardware-accelerated LOD-sprite terrain rendering with stable frame rates

    NASA Astrophysics Data System (ADS)

    Swan, J. E., II; Arango, Jesus; Nakshatrala, Bala K.

    2002-03-01

    A stable frame rate is important for interactive rendering systems. Image-based modeling and rendering (IBMR) techniques, which model parts of the scene with image sprites, are a promising technique for interactive systems because they allow the sprite to be manipulated instead of the underlying scene geometry. However, with IBMR techniques a frequent problem is an unstable frame rate, because generating an image sprite (with 3D rendering) is time-consuming relative to manipulating the sprite (with 2D image resampling). This paper describes one solution to this problem, by distributing an IBMR technique into a collection of cooperating threads and executable programs across two computers. The particular IBMR technique distributed here is the LOD-Sprite algorithm. This technique uses a multiple level-of-detail (LOD) scene representation. It first renders a keyframe from a high-LOD representation, and then caches the frame as an image sprite. It renders subsequent spriteframes by texture-mapping the cached image sprite into a lower-LOD representation. We describe a distributed architecture and implementation of LOD-Sprite, in the context of terrain rendering, which takes advantage of graphics hardware. We present timing results which indicate we have achieved a stable frame rate. In addition to LOD-Sprite, our distribution method holds promise for other IBMR techniques.

  9. Evaluations of the setup discrepancy between BrainLAB 6D ExacTrac and cone-beam computed tomography used with the imaging guidance system Novalis-Tx for intracranial stereotactic radiosurgery.

    PubMed

    Oh, Se An; Park, Jae Won; Yea, Ji Woon; Kim, Sung Kyu

    2017-01-01

    The objective of this study was to evaluate the setup discrepancy between BrainLAB 6 degree-of-freedom (6D) ExacTrac and cone-beam computed tomography (CBCT) used with the imaging guidance system Novalis Tx for intracranial stereotactic radiosurgery. We included 107 consecutive patients for whom white stereotactic head frame masks (R408; Clarity Medical Products, Newark, OH) were used to fix the head during intracranial stereotactic radiosurgery, between August 2012 and July 2016. The patients were immobilized in the same state for both the verification image using 6D ExacTrac and online 3D CBCT. In addition, after radiation treatment, registration between the computed tomography simulation images and the CBCT images was performed with offline 6D fusion in an offline review. The root-mean-square of the difference in the translational dimensions between the ExacTrac system and CBCT was <1.01 mm for online matching and <1.10 mm for offline matching. Furthermore, the root-mean-square of the difference in the rotational dimensions between the ExacTrac system and the CBCT were <0.82° for online matching and <0.95° for offline matching. It was concluded that while the discrepancies in residual setup errors between the ExacTrac 6D X-ray and the CBCT were minor, they should not be ignored.

  10. Graphics-Printing Program For The HP Paintjet Printer

    NASA Technical Reports Server (NTRS)

    Atkins, Victor R.

    1993-01-01

    IMPRINT utility computer program developed to print graphics specified in raster files by use of Hewlett-Packard Paintjet(TM) color printer. Reads bit-mapped images from files on UNIX-based graphics workstation and prints out three different types of images: wire-frame images, solid-color images, and gray-scale images. Wire-frame images are in continuous tone or, in case of low resolution, in random gray scale. In case of color images, IMPRINT also prints by use of default palette of solid colors. Written in C language.

  11. Technical Note: Modification of the standard gain correction algorithm to compensate for the number of used reference flat frames in detector performance studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konstantinidis, Anastasios C.; Olivo, Alessandro; Speller, Robert D.

    2011-12-15

    Purpose: The x-ray performance evaluation of digital x-ray detectors is based on the calculation of the modulation transfer function (MTF), the noise power spectrum (NPS), and the resultant detective quantum efficiency (DQE). The flat images used for the extraction of the NPS should not contain any fixed-pattern noise (FPN), to avoid contamination from nonstochastic processes. The "gold standard" method used for the reduction of the FPN (i.e., the different gain between pixels) in linear x-ray detectors is based on normalization with an average reference flat field. However, the noise in the corrected image depends on the number of flat frames used to form the average flat image. The aim of this study is to modify the standard gain correction algorithm to make it independent of the number of reference flat frames used. Methods: Many publications suggest the use of 10-16 reference flat frames, while other studies use higher numbers (e.g., 48 frames) to reduce the noise propagated from the average flat image. This study quantifies experimentally the effect of the number of reference flat frames on the NPS and DQE values and modifies the gain correction algorithm appropriately to compensate for this effect. Results: It is shown that using the suggested gain correction algorithm, a minimum number of reference flat frames (down to one frame) can be used to eliminate the FPN from the raw flat image. This saves computer memory and time during the x-ray performance evaluation. Conclusions: The authors show that the method presented in the study (a) leads to the maximum DQE value that one would obtain with the conventional method and a very large number of frames, and (b) has been compared to an independent gain correction method based on the subtraction of flat-field images, leading to identical DQE values. They believe this provides robust validation of the proposed method.
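
    The conventional baseline the paper modifies can be sketched as follows (a minimal NumPy sketch of standard flat-field gain correction; the paper's modification, which removes the dependence on N, is not reproduced here):

```python
import numpy as np

def gain_correct(raw, flats):
    """Standard gain correction: divide the raw image by the average of N
    reference flat frames, rescaled to preserve the mean signal level.
    The residual noise in the output depends on N, which is the effect the
    modified algorithm compensates for."""
    flat_avg = np.mean(np.asarray(flats, dtype=np.float64), axis=0)
    return raw * (flat_avg.mean() / flat_avg)
```

    In this toy case the per-pixel gain pattern is cancelled exactly; with noisy flats, the fewer frames averaged, the more flat-frame noise propagates into the corrected image.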

  12. Image restoration by minimizing zero norm of wavelet frame coefficients

    NASA Astrophysics Data System (ADS)

    Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue

    2016-11-01

    In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line search, for solving the ℓ0-norm regularized wavelet frame balanced approach to image restoration. Under the theoretical framework of the Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with a linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems, including CT reconstruction and image deblurring, demonstrate the improvement of ℓ0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
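
    The EPIHT iteration can be sketched generically for min ½‖Ax − b‖² + λ‖x‖₀ (an illustrative dense-matrix sketch, not the paper's wavelet-frame balanced formulation; the step size, λ, and extrapolation weight are arbitrary here):

```python
import numpy as np

def hard_threshold(x, lam):
    """Proximal operator of lam * ||x||_0: zero out entries with x_i^2 <= 2*lam."""
    return np.where(x ** 2 > 2.0 * lam, x, 0.0)

def piht(A, b, lam, step, iters=200, beta=0.5):
    """Extrapolated proximal iterative hard thresholding: an extrapolation
    step, a gradient step on the data term, then hard thresholding."""
    x = x_prev = np.zeros(A.shape[1])
    for _ in range(iters):
        y = x + beta * (x - x_prev)   # extrapolation (beta = 0 gives plain PIHT)
        grad = A.T @ (A @ y - b)      # gradient of the quadratic data term
        x_prev, x = x, hard_threshold(y - step * grad, lam * step)
    return x
```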

  13. High-frame-rate full-vocal-tract 3D dynamic speech imaging.

    PubMed

    Fu, Maojing; Barlaz, Marissa S; Holtrop, Joseph L; Perry, Jamie L; Kuehn, David P; Shosted, Ryan K; Liang, Zhi-Pei; Sutton, Bradley P

    2017-04-01

    To achieve high temporal frame rate, high spatial resolution, and full-vocal-tract coverage for three-dimensional dynamic speech MRI by using low-rank modeling and sparse sampling. Three-dimensional dynamic speech MRI is enabled by integrating a novel data acquisition strategy and an image reconstruction method with the partial separability model: (a) a self-navigated sparse sampling strategy that accelerates data acquisition by collecting high-nominal-frame-rate cone navigators and imaging data within a single repetition time, and (b) a reconstruction method that recovers high-quality speech dynamics from sparse (k,t)-space data by enforcing joint low-rank and spatiotemporal total variation constraints. The proposed method has been evaluated through in vivo experiments. A nominal temporal frame rate of 166 frames per second (defined based on a repetition time of 5.99 ms) was achieved for an imaging volume covering the entire vocal tract with a spatial resolution of 2.2 × 2.2 × 5.0 mm³. Practical utility of the proposed method was demonstrated via both validation experiments and a phonetics investigation. Three-dimensional dynamic speech imaging is possible with full-vocal-tract coverage, high spatial resolution, and high nominal frame rate, providing dynamic speech data useful for phonetic studies. Magn Reson Med 77:1619-1629, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
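
    The partial separability model underlying the reconstruction represents the space-time (Casorati) matrix as low rank; a minimal NumPy sketch of that modeling idea (`ps_model` and the toy rank are illustrative, not from the paper):

```python
import numpy as np

def ps_model(casorati, L):
    """Rank-L partial-separability approximation of the space-time (Casorati)
    matrix: L spatial basis images paired with L temporal basis functions via
    a truncated SVD. (Sketch of the modeling idea only; the paper estimates
    these factors from sparsely sampled (k,t)-space data.)"""
    U, s, Vt = np.linalg.svd(casorati, full_matrices=False)
    return (U[:, :L] * s[:L]) @ Vt[:L]
```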

  14. Processing Near-Infrared Imagery of the Orion Heatshield During EFT-1 Hypersonic Reentry

    NASA Technical Reports Server (NTRS)

    Spisz, Thomas S.; Taylor, Jeff C.; Gibson, David M.; Kennerly, Steve; Osei-Wusu, Kwame; Horvath, Thomas J.; Schwartz, Richard J.; Tack, Steven; Bush, Brett C.; Oliver, A. Brandon

    2016-01-01

    The Scientifically Calibrated In-Flight Imagery (SCIFLI) team captured high-resolution, calibrated, near-infrared imagery of the Orion capsule during atmospheric reentry of the EFT-1 mission. A US Navy NP-3D aircraft equipped with a multi-band optical sensor package, referred to as Cast Glance, acquired imagery of the Orion capsule's heatshield during a period when Orion was slowing from approximately Mach 10 to Mach 7. The line-of-sight distance ranged from approximately 65 to 40 nmi. Global surface temperatures of the capsule's thermal heatshield derived from the near-infrared intensity measurements complemented the in-depth (embedded) thermocouple measurements. Moreover, these derived surface temperatures are essential to assessing the thermocouples' reliance on inverse heat transfer methods and material response codes to infer the surface temperature from the in-depth measurements. The paper describes the image processing challenges associated with a manually tracked, high-angular-rate air-to-air observation. Issues included management of significant frame-to-frame motions due to both tracking jerk and jitter, as well as distortions due to atmospheric effects. Corrections for changing sky backgrounds (including some cirrus clouds), atmospheric attenuation, and target orientations and ranges also had to be made. The image processing goal is to reduce the detrimental effects of motion (both sensor and capsule), vibration (jitter), and atmospherics to improve image quality without compromising the quantitative integrity of the data, especially local intensity (temperature) variations. The paper details the approach of selecting and utilizing only the highest-quality images, registering several co-temporal image frames to a single reference frame to the extent frame-to-frame distortions allow, and then co-adding the registered frames to improve image quality and reduce noise. Using preflight calibration data, the registered and averaged infrared intensity images were converted to surface temperatures on the Orion capsule's heatshield. Temperature uncertainties are discussed relative to uncertainties in surface emissivity and atmospheric transmission loss. A comparison of limited onboard surface thermocouple data to the image-derived surface temperatures is presented.
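
    The register-and-co-add step described above can be sketched under the simplifying assumption of pure integer-pixel frame-to-frame translations (the actual processing also handles rotation, atmospheric distortion, and frame-quality selection; `coadd` is an illustrative name):

```python
import numpy as np

def coadd(frames, shifts):
    """Register each frame to the reference by undoing its known integer
    (dy, dx) translation, then average; for independent frame noise the
    noise standard deviation drops roughly as 1/sqrt(N)."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for f, (dy, dx) in zip(frames, shifts):
        acc += np.roll(np.roll(f, -dy, axis=0), -dx, axis=1)
    return acc / len(frames)
```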

  15. Frame by Frame II: A Filmography of the African American Image, 1978-1994.

    ERIC Educational Resources Information Center

    Klotman, Phyllis R.; Gibson, Gloria J.

    A reference guide on African American film professionals, this book is a companion volume to the earlier "Frame by Frame I." It focuses on giving credit to African Americans who have contributed their talents to a film industry that has scarcely recognized their contributions, building on the aforementioned "Frame by Frame I,"…

  16. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.

  17. Behind the Photos and the Tears: Media Images, Neoliberal Discourses, Racialized Constructions of Space and School Closings in Chicago

    ERIC Educational Resources Information Center

    Allweiss, Alexandra; Grant, Carl A.; Manning, Karla

    2015-01-01

    This critical article provides insights into how media frames influence our understandings of school reform in urban spaces by examining images of students during the 2013 school closings in Chicago. Using visual framing analysis and informed by framing theory and critiques of neoliberalism we seek to explore two questions: (1) What role do media…

  18. A framed, 16-image Kirkpatrick–Baez x-ray microscope

    DOE PAGES

    Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.; ...

    2017-09-08

    A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser–plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester’s OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. In conclusion, these core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 GBar in OMEGA cryogenic target implosions.

  19. Non-heuristic automatic techniques for overcoming low signal-to-noise-ratio bias of localization microscopy and multiple signal classification algorithm.

    PubMed

    Agarwal, Krishna; Macháň, Radek; Prasad, Dilip K

    2018-03-21

    Localization microscopy and the multiple signal classification algorithm use a temporal stack of image frames of sparse emissions from fluorophores to provide super-resolution images. Localization microscopy localizes emissions in each image independently and later collates the localizations in all the frames, giving the same weight to each frame irrespective of its signal-to-noise ratio. This results in a bias towards frames with low signal-to-noise ratio and causes a cluttered background in the super-resolved image. User-defined heuristic computational filters are employed to remove a set of localizations in an attempt to overcome this bias. Multiple signal classification performs eigen-decomposition of the entire stack, irrespective of the relative signal-to-noise ratios of the frames, and uses a threshold to classify eigenimages into signal and null subspaces. This results in under-representation of frames with low signal-to-noise ratio in the signal space and over-representation in the null space. Thus, the multiple signal classification algorithm is biased against frames with low signal-to-noise ratio, resulting in suppression of the corresponding fluorophores. This paper presents techniques to automatically debias localization microscopy and the multiple signal classification algorithm of these biases without compromising their resolution and without employing heuristic, user-defined criteria. The effect of debiasing is demonstrated on five datasets of in vitro and fixed-cell samples.

  1. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six, five-minute, controlled head motion artifact trials in front of a black and dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates of 60 and 30 (reduced from 120) frames per second and reduced image resolution of 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering the requirements for systems measuring pulse rate over sufficiently long time windows.
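
    The two resolution-reduction schemes compared above can be sketched for a 2x factor (minimal NumPy sketches; function names are illustrative):

```python
import numpy as np

def zero_order_half(img):
    """Zero-order 2x downsampling: keep every other pixel (nearest-neighbor)."""
    return img[::2, ::2]

def bilinear_half(img):
    """Bilinear 2x downsampling: average each 2x2 block, which low-pass
    filters the image before decimation."""
    h, w = img.shape[0] // 2, img.shape[1] // 2
    return img[:2 * h, :2 * w].reshape(h, 2, w, 2).mean(axis=(1, 3))
```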

  2. Image Based Synthesis for Airborne Minefield Data

    DTIC Science & Technology

    2005-12-01

    Jia, and C-K. Tang, "Image repairing: robust image synthesis by adaptive ND tensor voting", Proceedings of the IEEE Computer Society Conference on...The utility is capable of synthesizing a single frame of data as well as a list of frames along a flight path. The application is developed in MATLAB 6.5 using the

  3. Three-dimensional registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation.

    PubMed

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L

    2016-04-01

    Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate potential misinterpretations confronted by the typical histological approaches to validation, with estimated 1-mm errors. The method can be used to create annotated datasets and automated plaque classification methods and can be extended to other intravascular imaging modalities.

  4. GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array

    PubMed Central

    Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.

    2014-01-01

    Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080

  5. Cross-Curricular Skills Development in Final-Year Dissertation by Active and Collaborative Methodologies

    ERIC Educational Resources Information Center

    Etaio, Iñaki; Churruca, Itziar; Rada, Diego; Miranda, Jonatan; Saracibar, Amaia; Sarrionandia, Fernando; Lasa, Arrate; Simón, Edurne; Labayen, Idoia; Martinez, Olaia

    2018-01-01

    European Frame for Higher Education has led universities to adapt their teaching schemes. Degrees must train students in competences including specific and cross-curricular skills. Nevertheless, there are important limitations to follow skill improvement through the consecutive academic years. Final-year dissertation (FYD) offers the opportunity…

  6. SkySat-1: very high-resolution imagery from a small satellite

    NASA Astrophysics Data System (ADS)

    Murthy, Kiran; Shearn, Michael; Smiley, Byron D.; Chau, Alexandra H.; Levine, Josh; Robinson, M. Dirk

    2014-10-01

    This paper presents details of the SkySat-1 mission, which is the first microsatellite-class commercial earth-observation system to generate sub-meter resolution panchromatic imagery, in addition to sub-meter resolution 4-band pan-sharpened imagery. SkySat-1 was built and launched for an order of magnitude lower cost than similarly performing missions. The low-cost design enables the deployment of a large imaging constellation that can provide imagery with both high temporal resolution and high spatial resolution. One key enabler of the SkySat-1 mission was simplifying the spacecraft design and instead relying on ground-based image processing to achieve high performance at the system level. The imaging instrument consists of a custom-designed high-quality optical telescope and commercially available high frame rate CMOS image sensors. While each individually captured raw image frame shows moderate quality, ground-based image processing algorithms improve the raw data by combining data from multiple frames to boost image signal-to-noise ratio (SNR) and decrease the ground sample distance (GSD) in a process Skybox calls "digital TDI". Careful quality assessment and tuning of the spacecraft, payload, and algorithms was necessary to generate high-quality panchromatic, multispectral, and pan-sharpened imagery. Furthermore, the framing sensor configuration enabled the first commercial High-Definition full-frame rate panchromatic video to be captured from space, with approximately 1 meter ground sample distance. Details of the SkySat-1 imaging instrument and ground-based image processing system are presented, as well as an overview of the work involved with calibrating and validating the system. Examples of raw and processed imagery are shown, and the raw imagery is compared to pre-launch simulated imagery used to tune the image processing algorithms.

  7. HUBBLE VIEWS THE GALILEO PROBE ENTRY SITE ON JUPITER

    NASA Technical Reports Server (NTRS)

    2002-01-01

    [left] - This Hubble Space Telescope image of Jupiter was taken on Oct. 5, 1995, when the giant planet was at a distance of 534 million miles (854 million kilometers) from Earth. The arrow points to the predicted site at which the Galileo Probe will enter Jupiter's atmosphere on December 7, 1995. At this latitude, the eastward winds have speeds of about 250 miles per hour (110 meters per second). The white oval to the north of the probe site drifts westward at 13 miles per hour (6 meters per second), rolling in the winds which increase sharply toward the equator. The Jupiter image was obtained with the high resolution mode of Hubble's Wide Field Planetary Camera 2 (WFPC2). Because the disk of the planet is larger than the field of view of the camera, image processing was used to combine overlapping images from three consecutive orbits to produce this full disk view of the planet. [right] - These four enlarged Hubble images of Jupiter's equatorial region show clouds sweeping across the predicted Galileo probe entry site, which is at the exact center of each frame (a small white dot has been inserted, centered at the predicted entry site). The first image (upper left quadrant) was obtained with the WFPC2 on Oct. 4, 1995 at 18 hours UT. The second, third and fourth images (from upper right to lower right) were obtained 10, 20 and 60 hours later, respectively. The maps extend +/- 15 degrees in latitude and longitude. The distance across one of the images is about three Earth diameters (37,433 kilometers). During the intervening time between the first and fourth maps, the winds have swept the clouds 15,000 miles (24,000 kilometers) eastward. Credit: Reta Beebe (New Mexico State University), and NASA

  8. A trillion frames per second: the techniques and applications of light-in-flight photography.

    PubMed

    Faccio, Daniele; Velten, Andreas

    2018-06-14

    Cameras capable of capturing videos at a trillion frames per second make it possible to freeze light in motion, a very counterintuitive capability when related to our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes that are hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on `light in flight' imaging, i.e. applications where the key element is the imaging of light itself at frame rates that allow its motion to be frozen, and therefore the extraction of information that would otherwise be blurred out and lost. © 2018 IOP Publishing Ltd.

  9. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. In our method, the frame rate of the system at an imaging depth of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method, which adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated three-dimensional ultrasound image sequences.

  10. Frames and counter-frames giving meaning to dementia: a framing analysis of media content.

    PubMed

    Van Gorp, Baldwin; Vercruysse, Tom

    2012-04-01

    Media tend to reinforce the stigmatization of dementia as one of the most dreaded diseases in western society, which may have repercussions on the quality of life of those with the illness. The persons with dementia, but also those around them become imbued with the idea that life comes to an end as soon as the diagnosis is pronounced. The aim of this paper is to understand the dominant images related to dementia by means of an inductive framing analysis. The sample is composed of newspaper articles from six Belgian newspapers (2008-2010) and a convenience sample of popular images of the condition in movies, documentaries, literature and health care communications. The results demonstrate that the most dominant frame postulates that a human being is composed of two distinct parts: a material body and an immaterial mind. If this frame is used, the person with dementia ends up with no identity, which is in opposition to the Western ideals of personal self-fulfilment and individualism. For each dominant frame an alternative counter-frame is defined. It is concluded that the relative absence of counter-frames confirms the negative image of dementia. The inventory might be a help for caregivers and other professionals who want to evaluate their communication strategy. It is discussed that a more resolute use of counter-frames in communication about dementia might mitigate the stigma that surrounds dementia. Copyright © 2012 Elsevier Ltd. All rights reserved.

  11. 76 FR 54251 - Notice of Receipt of Complaint; Solicitation of Comments Relating to the Public Interest

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... Certain Digital Photo Frames and Image Display Devices and Components Thereof, DN 2842; the Commission is... importation of certain digital photo frames and image display devices and components thereof. The complaint...

  12. Effectiveness of averaging strategies to reduce variance in retinal nerve fibre layer thickness measurements using spectral-domain optical coherence tomography.

    PubMed

    Pemp, Berthold; Kardon, Randy H; Kircher, Karl; Pernicka, Elisabeth; Schmidt-Erfurth, Ursula; Reitner, Andreas

    2013-07-01

    Automated detection of subtle changes in peripapillary retinal nerve fibre layer thickness (RNFLT) over time using optical coherence tomography (OCT) is limited by inherent image quality before layer segmentation, stabilization of the scan on the peripapillary retina, and its precise placement on repeated scans. The present study evaluates image quality and reproducibility of spectral-domain (SD) OCT, comparing different rates of automatic real-time tracking (ART). Peripapillary RNFLT was measured in 40 healthy eyes on six different days using SD-OCT with an eye-tracking system. Image brightness of OCT with unaveraged single-frame B-scans was compared to images using ART of 16 B-scans and 100 averaged frames. Short-term and day-to-day reproducibility was evaluated by calculation of intraindividual coefficients of variation (CV) and intraclass correlation coefficients (ICC) for single measurements as well as for seven repeated measurements per study day. Image brightness, short-term reproducibility, and day-to-day reproducibility were significantly improved using ART of 100 frames compared to one and 16 frames. Short-term CV was reduced from 0.94 ± 0.31 % and 0.91 ± 0.54 % in scans of one and 16 frames to 0.56 ± 0.42 % in scans of 100 averaged frames (P ≤ 0.003 each). Day-to-day CV was reduced from 0.98 ± 0.86 % and 0.78 ± 0.56 % to 0.53 ± 0.43 % (P ≤ 0.022 each). The range of ICC was 0.94 to 0.99. Sample size calculations for detecting changes of RNFLT over time in the range of 2 to 5 μm were performed based on intraindividual variability. Image quality and reproducibility of mean peripapillary RNFLT measurements using SD-OCT are improved by averaging OCT images with eye-tracking compared to unaveraged single-frame images. Further improvement is achieved by increasing the number of frames per measurement and by averaging values of repeated measurements per session. These strategies may allow a more accurate evaluation of RNFLT reduction in clinical trials observing optic nerve degeneration.
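
The statistical reason averaging shrinks the coefficient of variation can be shown with a short simulation. This is a generic sketch, not the study's data: the thickness value, noise level, and repeat counts below are hypothetical, and it only assumes independent zero-mean measurement noise.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rnflt = 100.0   # hypothetical mean RNFL thickness (micrometers)
noise_sd = 1.0       # hypothetical per-acquisition measurement noise

def cv_of_averaged(n_frames, n_repeats=20000):
    """Coefficient of variation of the mean of n_frames noisy acquisitions."""
    meas = true_rnflt + rng.normal(0.0, noise_sd, (n_repeats, n_frames))
    means = meas.mean(axis=1)
    return means.std() / means.mean()

cv1, cv16, cv100 = (cv_of_averaged(n) for n in (1, 16, 100))
# CV falls roughly as 1/sqrt(n): cv16 ~ cv1/4, cv100 ~ cv1/10
```

The 1/sqrt(n) law is why going from 1 to 16 averaged frames buys a large reproducibility gain, while further increases show diminishing returns.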

  13. Needle detection in ultrasound using the spectral properties of the displacement field: a feasibility study

    NASA Astrophysics Data System (ADS)

    Beigi, Parmida; Salcudean, Tim; Rohling, Robert; Lessoway, Victoria A.; Ng, Gary C.

    2015-03-01

    This paper presents a new needle detection technique for ultrasound guided interventions based on the spectral properties of small displacements arising from hand tremour or intentional motion. In a block-based approach, the displacement map is computed for each block of interest versus a reference frame, using an optical flow technique. To compute the flow parameters, the Lucas-Kanade approach is used in a multiresolution and regularized form. A least-squares fit is used to estimate the flow parameters from the overdetermined system of spatial and temporal gradients. Lateral and axial components of the displacement are obtained for each block of interest at consecutive frames. Magnitude-squared spectral coherency is derived between the median displacements of the reference block and each block of interest, to determine the spectral correlation. In vivo images were obtained from the tissue near the abdominal aorta to capture the extreme intrinsic body motion and insertion images were captured from a tissue-mimicking agar phantom. According to the analysis, both the involuntary and intentional movement of the needle produces coherent displacement with respect to a reference window near the insertion site. Intrinsic body motion also produces coherent displacement with respect to a reference window in the tissue; however, the coherency spectra of intrinsic and needle motion are distinguishable spectrally. Blocks with high spectral coherency at high frequencies are selected, estimating a channel for needle trajectory. The needle trajectory is detected from locally thresholded absolute displacement map within the initial estimate. Experimental results show the RMS localization accuracy of 1.0 mm, 0.7 mm, and 0.5 mm for hand tremour, vibrational and rotational needle movements, respectively.
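
The magnitude-squared coherence measure used above can be sketched with a generic Welch-style estimator (spectra averaged over segments; with a single segment the estimate is identically 1). This is not the authors' pipeline: the sampling rate, 10 Hz "tremour" component, and noise levels are hypothetical.

```python
import numpy as np

def msc(x, y, nseg):
    """Magnitude-squared coherence |Pxy|^2 / (Pxx * Pyy), estimated by
    averaging periodograms over non-overlapping segments of length nseg."""
    n = (len(x) // nseg) * nseg
    X = np.fft.rfft(np.asarray(x[:n]).reshape(-1, nseg), axis=1)
    Y = np.fft.rfft(np.asarray(y[:n]).reshape(-1, nseg), axis=1)
    pxx = np.mean(np.abs(X) ** 2, axis=0)
    pyy = np.mean(np.abs(Y) ** 2, axis=0)
    pxy = np.mean(X * np.conj(Y), axis=0)
    return np.abs(pxy) ** 2 / (pxx * pyy)

# Hypothetical displacement traces: needle and reference blocks share a
# 10 Hz tremour component but carry independent noise.
rng = np.random.default_rng(2)
fs, n = 256, 4096
t = np.arange(n) / fs
tremour = np.sin(2 * np.pi * 10 * t)
needle = tremour + 0.5 * rng.standard_normal(n)
reference = tremour + 0.5 * rng.standard_normal(n)
coh = msc(needle, reference, nseg=256)  # 1 Hz frequency bins
```

`coh` is near 1 at the shared 10 Hz component and small at unrelated frequencies, which is the property exploited to separate needle motion from background.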

  14. Schlieren Cinematography of Current Driven Plasma Jet Dynamics

    NASA Astrophysics Data System (ADS)

    Loebner, Keith; Underwood, Thomas; Cappelli, Mark

    2016-10-01

    Schlieren cinematography of a pulsed plasma deflagration jet is presented and analyzed. An ultra-high frame rate CMOS camera coupled to a Z-type laser Schlieren apparatus is used to obtain flow-field refractometry data for the continuous flow Z-pinch formed within the plasma deflagration jet. The 10 MHz frame rate for 256 consecutive frames provides high temporal resolution, enabling turbulent fluctuations and plasma instabilities to be visualized over the course of a single pulse (20 μs). The Schlieren signal is radiometrically calibrated to obtain a two-dimensional mapping of the refraction angle of the axisymmetric pinch plasma, and this mapping is then Abel inverted to derive the plasma density distribution as a function of radius, axial coordinate, and time. Analyses of previously unknown discharge characteristics and comparisons with prior work are discussed.
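
Abel inversion of an axisymmetric profile can be sketched with the standard "onion peeling" discretization (one common scheme; the paper does not specify its particular inversion method). The profile is assumed piecewise constant on annular shells, which makes the projection a triangular linear system that can be solved directly.

```python
import numpy as np

def onion_peeling_matrix(r):
    """Projection matrix for an axisymmetric profile assumed piecewise
    constant on annular shells: F_i = sum_j A[i, j] * f_j, where A[i, j]
    is the chord length of the ray at radius r[i] through shell j."""
    n = len(r) - 1                      # r holds n+1 shell boundaries
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2 * (np.sqrt(r[j + 1] ** 2 - r[i] ** 2)
                           - np.sqrt(r[j] ** 2 - r[i] ** 2))
    return A

# Synthetic check: project a known radial profile, then invert.
n = 200
r = np.linspace(0.0, 2.0, n + 1)
rc = 0.5 * (r[:-1] + r[1:])             # shell centres
f_true = np.exp(-rc ** 2)               # hypothetical radial density profile
A = onion_peeling_matrix(r)
F = A @ f_true                          # line-of-sight (projected) signal
f_rec = np.linalg.solve(A, F)           # A is upper triangular, so this is stable
```

Because the same discretization is used for projection and inversion here, `f_rec` matches `f_true` to numerical precision; with real Schlieren-derived data, noise in the measured projection is amplified near the axis and usually calls for smoothing.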

  15. Detection of Cardiac Quiescence from B-Mode Echocardiography Using a Correlation-Based Frame-to-Frame Deviation Measure

    PubMed Central

    Mcclellan, James H.; Ravichandran, Lakshminarayan; Tridandapani, Srini

    2013-01-01

    Two novel methods for detecting cardiac quiescent phases from B-mode echocardiography using a correlation-based frame-to-frame deviation measure were developed. Accurate knowledge of cardiac quiescence is crucial to the performance of many imaging modalities, including computed tomography coronary angiography (CTCA). Synchronous electrocardiography (ECG) and echocardiography data were obtained from 10 healthy human subjects (four male, six female, 23–45 years) and the interventricular septum (IVS) was observed using the apical four-chamber echocardiographic view. The velocity of the IVS was derived from active contour tracking and verified using tissue Doppler imaging echocardiography methods. In turn, the frame-to-frame deviation methods for identifying quiescence of the IVS were verified using active contour tracking. The timing of the diastolic quiescent phase was found to exhibit both inter- and intra-subject variability, suggesting that the current method of CTCA gating based on the ECG is suboptimal and that gating based on signals derived from cardiac motion are likely more accurate in predicting quiescence for cardiac imaging. Two robust and efficient methods for identifying cardiac quiescent phases from B-mode echocardiographic data were developed and verified. The methods presented in this paper will be used to develop new CTCA gating techniques and quantify the resulting potential improvement in CTCA image quality. PMID:26609501
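
A correlation-based frame-to-frame deviation measure can be illustrated in one dimension. This is a toy sketch, not the authors' method as published: a Gaussian stripe stands in for the interventricular septum, and its position oscillates so that velocity (hence deviation) is minimal at the extremes of the cycle, mimicking quiescent phases.

```python
import numpy as np

def deviation(a, b):
    """1 - Pearson correlation between consecutive frames: near zero
    when the imaged structure is stationary, larger during motion."""
    return 1.0 - np.corrcoef(a.ravel(), b.ravel())[0, 1]

# Hypothetical 1-D "septum": a Gaussian stripe whose position oscillates
# like a cardiac cycle over 40 frames.
x = np.linspace(-10.0, 10.0, 400)
n_frames = 40
pos = 3.0 * np.sin(2 * np.pi * np.arange(n_frames) / n_frames)
frames = [np.exp(-0.5 * (x - p) ** 2) for p in pos]

dev = np.array([deviation(frames[t], frames[t + 1]) for t in range(n_frames - 1)])
quiescent = int(np.argmin(dev))  # velocity is zero near frames 10 and 30
```

The minima of `dev` fall where the stripe's velocity crosses zero, which is the quiescence the gating methods aim to detect from echocardiographic data.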

  16. Research on the algorithm of infrared target detection based on the frame difference and background subtraction method

    NASA Astrophysics Data System (ADS)

    Liu, Yun; Zhao, Yuejin; Liu, Ming; Dong, Liquan; Hui, Mei; Liu, Xiaohua; Wu, Yijian

    2015-09-01

    As an important branch of infrared imaging technology, infrared target tracking and detection has great scientific value and a wide range of applications in both military and civilian areas. For infrared imagery, which is characterized by low SNR and serious background noise, an effective target detection algorithm is proposed in this paper, exploiting the frame-to-frame correlation of a moving target and the irrelevance of noise in sequential images, implemented with OpenCV. Firstly, since temporal differencing and background subtraction are highly complementary, we use a combined detection method of frame difference and background subtraction based on adaptive background updating. Results indicate that it is simple and can stably extract the foreground moving target from the video sequence. Because the background updating mechanism continuously updates each pixel, the infrared moving target can be detected more accurately. This paves the way for real-time infrared target detection and tracking once the OpenCV-based algorithms are ported to a DSP platform. Afterwards, we use an optimal thresholding algorithm to segment the image, transforming the gray images to binary images in order to provide a better condition for detection in the image sequences. Finally, using the relevance of moving objects between different frames and mathematical morphology processing, we eliminate noise, decrease spurious areas, and smooth region boundaries. Experimental results prove that the algorithm achieves rapid detection of small infrared targets.
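
The combined scheme can be sketched in plain NumPy (the paper uses OpenCV; a NumPy sketch keeps the example self-contained, and the scene, thresholds, and update rate below are illustrative). Frame differencing alone leaves "ghosts" at the target's previous position; ANDing with the background-subtraction mask removes them, and the background is updated adaptively only where no target is found.

```python
import numpy as np

def detect(prev, curr, bg, thresh=30.0, alpha=0.05):
    """Combined frame-difference and background-subtraction detection
    with adaptive background updating."""
    fd = np.abs(curr - prev) > thresh          # frame-difference mask
    bs = np.abs(curr - bg) > thresh            # background-subtraction mask
    mask = fd & bs                             # AND suppresses ghosting
    bg = np.where(mask, bg, (1 - alpha) * bg + alpha * curr)
    return mask, bg

# Synthetic low-SNR infrared sequence: dim noisy background with a bright
# 3x3 target moving 4 pixels per frame along one row.
rng = np.random.default_rng(3)
h = w = 64
bg_est = np.full((h, w), 10.0)
prev = None
for t in range(14):
    frame = 10.0 + rng.normal(0.0, 1.0, (h, w))
    r, c = 32, 10 + 4 * t
    frame[r - 1:r + 2, c - 1:c + 2] += 90.0    # small infrared target
    if prev is not None:
        mask, bg_est = detect(prev, frame, bg_est)
    prev = frame

ys, xs = np.nonzero(mask)                      # detection in the last frame
```

On the final frame the mask contains exactly the 3x3 target footprint centered at row 32, column 62, with no ghost at the previous position.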

  17. Impact of B-Scan Averaging on Spectralis Optical Coherence Tomography Image Quality before and after Cataract Surgery

    PubMed Central

    Podkowinski, Dominika; Sharian Varnousfaderani, Ehsan; Simader, Christian; Bogunovic, Hrvoje; Philip, Ana-Maria; Gerendas, Bianca S.

    2017-01-01

    Background and Objective To determine optimal image averaging settings for Spectralis optical coherence tomography (OCT) in patients with and without cataract. Study Design/Material and Methods In a prospective study, the eyes were imaged before and after cataract surgery using seven different image averaging settings. Image quality was quantitatively evaluated using signal-to-noise ratio, distinction between retinal layer image intensity distributions, and retinal layer segmentation performance. Measures were compared pre- and postoperatively across different degrees of averaging. Results 13 eyes of 13 patients were included and 1092 layer boundaries analyzed. Preoperatively, increasing image averaging led to a logarithmic growth in all image quality measures up to 96 frames. Postoperatively, increasing averaging beyond 16 images resulted in a plateau without further benefits to image quality. Averaging 16 frames postoperatively provided comparable image quality to 96 frames preoperatively. Conclusion In patients with clear media, averaging 16 images provided optimal signal quality. A further increase in averaging was only beneficial in the eyes with senile cataract. However, prolonged acquisition time and possible loss of details have to be taken into account. PMID:28630764

  18. High-Speed Video Observations of a Natural Lightning Stepped Leader

    NASA Astrophysics Data System (ADS)

    Jordan, D. M.; Hill, J. D.; Uman, M. A.; Yoshida, S.; Kawasaki, Z.

    2010-12-01

    High-speed video images of one branch of a natural negative lightning stepped leader were obtained at a frame rate of 300 kfps (3.33 us exposure) on June 18th, 2010 at the International Center for Lightning Research and Testing (ICLRT) located on the Camp Blanding Army National Guard Base in north-central Florida. The images were acquired using a 20 mm Nikon lens mounted on a Photron SA1.1 high-speed camera. A total of 225 frames (about 0.75 ms) of the downward stepped leader were captured, followed by 45 frames of the leader channel re-illumination by the return stroke and subsequent decay following the ground attachment of the primary leader channel. Luminous characteristics of dart-stepped leader propagation in triggered lightning obtained by Biagi et al. [2009, 2010] and of long laboratory spark formation [e.g., Bazelyan and Raizer, 1998; Gallimberti et al., 2002] are evident in the frames of the natural lightning stepped leader. Space stems/leaders are imaged in twelve different frames at various distances in front of the descending leader tip, which branches into two distinct components 125 frames after the channel enters the field of view. In each case, the space stem/leader appears to connect to the leader tip above in the subsequent frame, forming a new step. Each connection is associated with significant isolated brightening of the channel at the connection point followed by typically three or four frames of upward propagating re-illumination of the existing leader channel. In total, at least 80 individual steps were imaged.

  19. Local motion compensation in image sequences degraded by atmospheric turbulence: a comparative analysis of optical flow vs. block matching methods

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.

    2016-10-01

    As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
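
Of the motion-estimation families compared above, exhaustive-search block matching is the simplest to sketch. This is a generic SAD-cost implementation under a synthetic pure-translation test, not the paper's evaluation code; block size and search range are illustrative.

```python
import numpy as np

def block_match(ref, cur, top, left, bsize=16, search=7):
    """Exhaustive-search block matching with a sum-of-absolute-differences
    cost. Returns the (dy, dx) displacement within +/-search pixels that
    best aligns the reference block with the current frame."""
    block = ref[top:top + bsize, left:left + bsize]
    best, best_vec = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > cur.shape[0] or x + bsize > cur.shape[1]:
                continue  # candidate block falls outside the frame
            sad = np.abs(cur[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best:
                best, best_vec = sad, (dy, dx)
    return best_vec

# Synthetic test: shift a textured frame by a known amount and recover it.
rng = np.random.default_rng(4)
ref = rng.random((64, 64))
cur = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # true motion (dy, dx) = (3, -2)
vec = block_match(ref, cur, top=24, left=24)
```

Adaptive Rood Pattern Search and similar methods visit far fewer candidates than this exhaustive loop, which is why block matching remains attractive under near real-time constraints.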

  20. 36 CFR 1194.22 - Web-based intranet and internet information and applications.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...

  1. 36 CFR 1194.22 - Web-based intranet and internet information and applications.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...

  2. 36 CFR § 1194.22 - Web-based intranet and internet information and applications.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... active region of a server-side image map. (f) Client-side image maps shall be provided instead of server-side image maps except where the regions cannot be defined with an available geometric shape. (g) Row...) Frames shall be titled with text that facilitates frame identification and navigation. (j) Pages shall be...

  3. 77 FR 21994 - Certain Digital Photo Frames and Image Display Devices and Components Thereof; Notice of Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-12

    ... Image Display Devices and Components Thereof; Notice of Request for Written Submissions on Remedy, the... importation, and the sale within the United States after importation of certain digital photo frames and image... the President, has 60 days to approve or disapprove the Commission's action. See section 337(j), 19 U...

  4. In vivo high-resolution structural imaging of large arteries in small rodents using two-photon laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Megens, Remco T. A.; Reitsma, Sietze; Prinzen, Lenneke; Oude Egbrink, Mirjam G. A.; Engels, Wim; Leenders, Peter J. A.; Brunenberg, Ellen J. L.; Reesink, Koen D.; Janssen, Ben J. A.; Ter Haar Romeny, Bart M.; Slaaf, Dick W.; van Zandvoort, Marc A. M. J.

    2010-01-01

    In vivo (molecular) imaging of the vessel wall of large arteries at subcellular resolution is crucial for unraveling vascular pathophysiology. We previously showed the applicability of two-photon laser scanning microscopy (TPLSM) in mounted arteries ex vivo. However, in vivo TPLSM has thus far suffered from in-frame and between-frame motion artifacts due to arterial movement with cardiac and respiratory activity. Now, motion artifacts are suppressed by accelerated image acquisition triggered on cardiac and respiratory activity. In vivo TPLSM is performed on rat renal and mouse carotid arteries, both surgically exposed and labeled fluorescently (cell nuclei, elastin, and collagen). The use of short acquisition times consistently limits in-frame motion artifacts. Additionally, triggered imaging reduces between-frame artifacts. Indeed, structures in the vessel wall (cell nuclei, elastic laminae) can be imaged at subcellular resolution. In mechanically damaged carotid arteries, even the subendothelial collagen sheet (~1 μm) is visualized using collagen-targeted quantum dots. We demonstrate stable in vivo imaging of large arteries at subcellular resolution using TPLSM triggered on cardiac and respiratory cycles. This creates great opportunities for studying (diseased) arteries in vivo or immediate validation of in vivo molecular imaging techniques such as magnetic resonance imaging (MRI), ultrasound, and positron emission tomography (PET).

  5. Evaluation of Skybox Video and Still Image products

    NASA Astrophysics Data System (ADS)

    d'Angelo, P.; Kuschk, G.; Reinartz, P.

    2014-11-01

    The SkySat-1 satellite launched by Skybox Imaging on November 21, 2013 opens a new chapter in civilian earth observation as it is the first civilian satellite to image a target in high definition panchromatic video for up to 90 seconds. The small satellite with a mass of 100 kg carries a telescope with 3 frame sensors. Two products are available: panchromatic video with a resolution of around 1 meter and a frame size of 2560 × 1080 pixels at 30 frames per second. Additionally, the satellite can collect still imagery with a swath of 8 km in the panchromatic band, and multispectral images with 4 bands. Using super-resolution techniques, sub-meter resolution is reached for the still imagery. The paper provides an overview of the satellite design and imaging products. The still imagery product consists of 3 stripes of frame images with a footprint of approximately 2.6 × 1.1 km. Using bundle block adjustment, the frames are registered, and their accuracy is evaluated. Image quality of the panchromatic, multispectral and pansharpened products is evaluated. The video product used in this evaluation consists of a 60 second gazing acquisition of Las Vegas. A DSM is generated by dense stereo matching. Multiple techniques such as pairwise matching or multi-image matching are used and compared. As no ground truth height reference model is available to the authors, comparisons are performed on flat surfaces and between differently matched DSMs. Additionally, visual inspection of the DSM and DSM profiles shows a detailed reconstruction of small features and large skyscrapers.

  6. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.

  7. High frame-rate computational ghost imaging system using an optical fiber phased array and a low-pixel APD array.

    PubMed

    Liu, Chunbo; Chen, Jingqiu; Liu, Jiaxin; Han, Xiang'e

    2018-04-16

    To obtain a high imaging frame rate, a computational ghost imaging system scheme is proposed based on optical fiber phased array (OFPA). Through high-speed electro-optic modulators, the randomly modulated OFPA can provide much faster speckle projection, which can be precomputed according to the geometry of the fiber array and the known phases for modulation. Receiving the signal light with a low-pixel APD array can effectively decrease the requirement on sampling quantity and computation complexity owing to the reduced data dimensionality while avoiding the image aliasing due to the spatial periodicity of the speckles. The results of analysis and simulation show that the frame rate of the proposed imaging system can be significantly improved compared with traditional systems.
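
The correlation reconstruction underlying computational ghost imaging can be sketched with precomputed random patterns and a single "bucket" detector signal. This is a generic sketch, not a model of the OFPA/APD-array system above: the pattern statistics, object, and pattern count are all illustrative.

```python
import numpy as np

# Hypothetical single-pixel computational ghost imaging: the object is
# recovered by correlating known random illumination patterns with the
# total intensity each pattern yields behind the object.
rng = np.random.default_rng(5)
npix, npat = 16 * 16, 4000

obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0                       # hypothetical transmissive object
patterns = rng.integers(0, 2, (npat, npix)).astype(float)  # speckle fields
bucket = patterns @ obj.ravel()             # bucket-detector readings

# Correlation reconstruction: G(x) = <S * P(x)> - <S> <P(x)>
g = (bucket[:, None] * patterns).mean(axis=0) - bucket.mean() * patterns.mean(axis=0)
recon = g.reshape(16, 16)
corr = np.corrcoef(recon.ravel(), obj.ravel())[0, 1]
```

Reconstruction fidelity grows with the number of patterns; the scheme in the paper gains frame rate by projecting patterns faster (electro-optic phase modulation) and by reducing the required sampling with a low-pixel detector array, neither of which this sketch models.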

  8. Integrated sensor with frame memory and programmable resolution for light adaptive imaging

    NASA Technical Reports Server (NTRS)

    Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)

    2004-01-01

    An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
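
The pixel-patch summation described above trades spatial resolution for SNR. A minimal sketch of the idea with 2x2 binning on synthetic data (assuming independent additive noise; the sensor performs this summation in hardware via column integrators, which this does not model):

```python
import numpy as np

def bin2x2(img):
    """Sum 2x2 pixel patches: signal adds 4x while independent noise
    adds only sqrt(4) = 2x, doubling SNR at half the resolution."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

rng = np.random.default_rng(6)
signal, sigma = 20.0, 4.0
frame = signal + rng.normal(0.0, sigma, (128, 128))  # flat scene + noise

snr_raw = frame.mean() / frame.std()
binned = bin2x2(frame)
snr_binned = binned.mean() / binned.std()  # roughly 2x snr_raw
```

Larger patch sizes extend the same trade: an n x n patch boosts SNR by a factor of n, which is how the sensor adapts its output resolution to the received light level.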

  9. Adaptive mesh optimization and nonrigid motion recovery based image registration for wide-field-of-view ultrasound imaging.

    PubMed

    Tan, Chaowei; Wang, Bo; Liu, Paul; Liu, Dong

    2008-01-01

    Wide field of view (WFOV) imaging mode obtains an ultrasound image over an area much larger than the real-time window normally available. As the probe is moved over the region of interest, new image frames are combined with prior frames to form a panorama image. Image registration techniques are used to recover the probe motion, eliminating the need for a position sensor. Speckle patterns, which are inherent in ultrasound imaging, change, or become decorrelated, as the scan plane moves, so we pre-smooth the image to reduce the effects of speckle in registration, as well as reducing effects from thermal noise. Because we wish to track the movement of features such as structural boundaries, we use an adaptive mesh over the entire smoothed image to home in on areas with features. Motion estimation using blocks centered at the individual mesh nodes generates a field of motion vectors. After angular correction of the motion vectors, we model the overall movement between frames as a nonrigid deformation. The polygon filling algorithm for precise, persistence-based spatial compounding constructs the final speckle-reduced WFOV image.

  10. MO-FG-202-04: Gantry-Resolved Linac QA for VMAT: A Comprehensive and Efficient System Using An Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, B J; University of Newcastle, Newcastle, NSW; Barnes, M

    2016-06-15

    Purpose: To automate gantry-resolved linear accelerator (linac) quality assurance (QA) for volumetric modulated arc therapy (VMAT) using an electronic portal imaging device (EPID). Methods: A QA system for VMAT was developed that uses an EPID, frame-grabber assembly and in-house developed image processing software. The system relies solely on the analysis of EPID image frames acquired without the presence of a phantom. Images were acquired at 8.41 frames per second using a frame grabber and ancillary acquisition computer. Each image frame was tagged with a gantry angle from the linac’s on-board gantry angle encoder. Arc-dynamic QA plans were designed to assessmore » the performance of each individual linac component during VMAT. By analysing each image frame acquired during the QA deliveries the following eight machine performance characteristics were measured as a function of gantry angle: MLC positional accuracy, MLC speed constancy, MLC acceleration constancy, MLC-gantry synchronisation, beam profile constancy, dose rate constancy, gantry speed constancy, dose-gantry angle synchronisation and mechanical sag. All tests were performed on a Varian iX linear accelerator equipped with a 120 leaf Millennium MLC and an aS1000 EPID (Varian Medical Systems, Palo Alto, CA, USA). Results: Machine performance parameters were measured as a function of gantry angle using EPID imaging and compared to machine log files and the treatment plan. Data acquisition is currently underway at 3 centres, incorporating 7 treatment units, at 2 weekly measurement intervals. Conclusion: The proposed system can be applied for streamlined linac QA and commissioning for VMAT. The set of test plans developed can be used to assess the performance of each individual components of the treatment machine during VMAT deliveries as a function of gantry angle. 
The methodology does not require the setup of any additional phantom or measurement equipment, and the analysis is fully automated to allow for regular routine testing.
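The gantry-angle tagging described above makes a check such as gantry speed constancy a simple numerical exercise. The sketch below is a hypothetical illustration (the function name, inputs, and use of NumPy are assumptions, not the authors' software):

```python
import numpy as np

def gantry_speed(angles_deg, fps=8.41):
    """Estimate gantry speed (deg/s) from the gantry-angle tag of each
    EPID frame. fps is the frame-grabber rate quoted in the abstract."""
    unwrapped = np.unwrap(np.radians(angles_deg))  # handle the 360-degree wrap
    return np.degrees(np.diff(unwrapped)) * fps    # deg/frame -> deg/s

# A constant 6 deg/s arc sampled at 8.41 fps should read back as 6 deg/s.
t = np.arange(0, 10, 1 / 8.41)
speeds = gantry_speed(6.0 * t)
```

Deviations of the resulting trace from the planned speed, plotted against gantry angle, would flag angle-dependent performance problems.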

  11. A computational model to compare different investment scenarios for mini-stereotactic frame approach to deep brain stimulation surgery.

    PubMed

    Lanotte, M; Cavallo, M; Franzini, A; Grifi, M; Marchese, E; Pantaleoni, M; Piacentino, M; Servello, D

    2010-09-01

Deep brain stimulation (DBS) alleviates symptoms of many neurological disorders by applying electrical impulses to the brain by means of implanted electrodes, generally put in place using a conventional stereotactic frame. A new image-guided disposable mini-stereotactic system has been designed to help shorten and simplify DBS procedures when compared to standard stereotaxy. A small number of studies have been conducted which demonstrate localization accuracies of the system similar to those achievable by the conventional frame. However, no data are available to date on the economic impact of this new frame. The aim of this paper was to develop a computational model to evaluate the investment required to introduce the image-guided mini-stereotactic technology for stereotactic DBS neurosurgery. A standard DBS patient care pathway was developed and related costs were analyzed. A differential analysis was conducted to capture the impact of introducing the image-guided system on the procedure workflow. The analysis was carried out in five Italian neurosurgical centers. A computational model was developed to estimate upfront investments and surgery costs, leading to a definition of the best financial option to introduce the new frame. Investments may vary from €1,900 (purchase of the image-guided [IG] mini-stereotactic frame only) to €158,000,000. Moreover, the model demonstrates that the introduction of the IG mini-stereotactic frame does not substantially affect the DBS procedure costs.

  12. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min–max-based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
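As a rough illustration of an entropy-based patch heterogeneity measure, the sketch below scores each frame by the mean intensity-histogram entropy of its patches. This is a simplified stand-in, not the published HIP definition; the patch size, bin count, and intensity range are assumptions:

```python
import numpy as np

def patch_entropy_curve(frames, patch=8, bins=16):
    """One heterogeneity score per frame: average Shannon entropy of
    the intensity histograms of non-overlapping patches."""
    curve = []
    for f in frames:
        h, w = f.shape
        ents = []
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                p, _ = np.histogram(f[i:i + patch, j:j + patch],
                                    bins=bins, range=(0.0, 1.0))
                p = p / p.sum()
                p = p[p > 0]
                ents.append(-(p * np.log2(p)).sum())
        curve.append(float(np.mean(ents)))
    return curve

flat = np.full((32, 32), 0.5)                        # homogeneous frame
noisy = np.random.default_rng(0).random((32, 32))    # heterogeneous frame
curve = patch_entropy_curve([flat, noisy])
```

A homogeneous frame concentrates all patch intensities in one bin (entropy near zero), while a heterogeneous frame spreads them out, so the curve separates uniform from detail-rich frames.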

  13. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    NASA Astrophysics Data System (ADS)

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-06-01

High speed imaging has application in a wide area of industry and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system "UPMC Cam," to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubbles under ultrasound are shown to demonstrate the potential application of the system.

  14. Ultra-fast bright field and fluorescence imaging of the dynamics of micrometer-sized objects

    PubMed Central

    Chen, Xucai; Wang, Jianjun; Versluis, Michel; de Jong, Nico; Villanueva, Flordeliza S.

    2013-01-01

High speed imaging has application in a wide area of industry and scientific research. In medical research, high speed imaging has the potential to reveal insight into mechanisms of action of various therapeutic interventions. Examples include ultrasound assisted thrombolysis, drug delivery, and gene therapy. Visual observation of the ultrasound, microbubble, and biological cell interaction may help the understanding of the dynamic behavior of microbubbles and may eventually lead to better design of such delivery systems. We present the development of a high speed bright field and fluorescence imaging system that incorporates external mechanical waves such as ultrasound. Through collaborative design and contract manufacturing, a high speed imaging system has been successfully developed at the University of Pittsburgh Medical Center. We named the system “UPMC Cam,” to refer to the integrated imaging system that includes the multi-frame camera and its unique software control, the customized modular microscope, the customized laser delivery system, its auxiliary ultrasound generator, and the combined ultrasound and optical imaging chamber for in vitro and in vivo observations. This system is capable of imaging microscopic bright field and fluorescence movies at 25 × 10⁶ frames per second for 128 frames, with a frame size of 920 × 616 pixels. Example images of microbubbles under ultrasound are shown to demonstrate the potential application of the system. PMID:23822346

  15. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications

    PubMed Central

    Park, Keunyeol; Song, Minkyu

    2018-01-01

This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency. PMID:29495273

  16. The Design of a Single-Bit CMOS Image Sensor for Iris Recognition Applications.

    PubMed

    Park, Keunyeol; Song, Minkyu; Kim, Soo Youn

    2018-02-24

This paper presents a single-bit CMOS image sensor (CIS) that uses a data processing technique with an edge detection block for simple iris segmentation. In order to recognize the iris image, the image sensor conventionally captures high-resolution image data in digital code, extracts the iris data, and then compares it with a reference image through a recognition algorithm. However, in this case, the frame rate decreases by the time required for digital signal conversion of multi-bit digital data through the analog-to-digital converter (ADC) in the CIS. In order to reduce the overall processing time as well as the power consumption, we propose a data processing technique with an exclusive OR (XOR) logic gate to obtain single-bit and edge detection image data instead of multi-bit image data through the ADC. In addition, we propose a logarithmic counter to efficiently measure single-bit image data that can be applied to the iris recognition algorithm. The effective area of the proposed single-bit image sensor (174 × 144 pixel) is 2.84 mm² with a 0.18 μm 1-poly 4-metal CMOS image sensor process. The power consumption of the proposed single-bit CIS is 2.8 mW with a 3.3 V supply voltage and a maximum frame rate of 520 frames/s. The error rate of the ADC is 0.24 least significant bit (LSB) on an 8-bit ADC basis at a 50 MHz sampling frequency.
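The XOR-based edge extraction can be mimicked in software. The sketch below is hypothetical (the sensor performs this in hardware before the ADC, and the threshold here is arbitrary): it XORs each binarised pixel with its right and lower neighbours, so 1s appear exactly where the binary value changes, i.e. at edges.

```python
import numpy as np

def xor_edge_map(img, threshold=0.5):
    """Single-bit edge map: binarise, then XOR neighbouring pixels."""
    b = (img > threshold).astype(np.uint8)
    dx = b[:, :-1] ^ b[:, 1:]        # horizontal transitions
    dy = b[:-1, :] ^ b[1:, :]        # vertical transitions
    return dx[:-1, :] | dy[:, :-1]   # combine on the common (H-1, W-1) grid

img = np.zeros((8, 8))
img[:, 4:] = 1.0                     # vertical step edge between columns 3 and 4
edges = xor_edge_map(img)
```

The output is one bit per pixel, which is the point of the sensor design: the recognition pipeline never needs the full multi-bit ADC conversion.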

  17. Uterine Artery Embolization for Leiomyomata: Optimization of the Radiation Dose to the Patient Using a Flat-Panel Detector Angiographic Suite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sapoval, Marc, E-mail: marc.sapoval2@egp.aphp.fr; Pellerin, Olivier; Rehel, Jean-Luc

The purpose of this study was to assess the ability of low-dose/low-frame fluoroscopy/angiography with a flat-panel detector angiographic suite to reduce the dose delivered to patients during uterine fibroid embolization (UFE). A two-step prospective dosimetric study was conducted, with a flat-panel detector angiography suite (Siemens Axiom Artis) integrating automatic exposure control (AEC), during 20 consecutive UFEs. Patient dosimetry was performed using calibrated thermoluminescent dosimeters placed on the lower posterior pelvis skin. The first step (10 patients; group A) consisted of UFE (bilateral embolization, calibrated microspheres) performed using the following parameters: standard fluoroscopy (15 pulses/s) and angiography (3 frames/s). The second step (next consecutive 10 patients; group B) used low-dose/low-frame fluoroscopy (7.5 pulses/s for catheterization and 3 pulses/s for embolization) and angiography (1 frame/s). We also recorded the total dose-area product (DAP) delivered to the patient and the fluoroscopy time as reported by the manufacturer's dosimetry report. The mean peak skin dose decreased from 2.4 ± 1.3 to 0.4 ± 0.3 Gy (P = 0.001) for groups A and B, respectively. The DAP values decreased from 43,113 ± 27,207 μGy·m² for group A to 9,515 ± 4,520 μGy·m² for group B (P = 0.003). The dose to ovaries and uterus decreased from 378 ± 238 mGy (group A) to 83 ± 41 mGy (group B) and from 388 ± 246 mGy (group A) to 85 ± 39 mGy (group B), respectively. Effective doses decreased from 112 ± 71 mSv (group A) to 24 ± 12 mSv (group B) (P = 0.003). In conclusion, the use of low-dose/low-frame fluoroscopy/angiography during uterine fibroid embolization, based on a good understanding of the AEC system and of the technique, allows a significant decrease in the dose exposure to the patient.

  18. From Video to Photo

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Ever wonder whether a still shot from a home video could serve as a "picture perfect" photograph worthy of being framed and proudly displayed on the mantle? Wonder no more. A critical imaging code used to enhance video footage taken from spaceborne imaging instruments is now available within a portable photography tool capable of producing an optimized, high-resolution image from multiple video frames.

  19. 47 CFR 73.9003 - Compliance requirements for covered demodulator products: Unscreened content.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... operating in a mode compatible with the digital visual interface (DVI) rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g. an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may...

  20. 47 CFR 73.9004 - Compliance requirements for covered demodulator products: Marked content.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... compatible with the digital visual interface (DVI) Rev. 1.0 Specification as an image having the visual equivalent of no more than 350,000 pixels per frame (e.g., an image with resolution of 720×480 pixels for a 4:3 (nonsquare pixel) aspect ratio), and 30 frames per second. Such an image may be attained by...

  1. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition, we evaluated algorithm efficacy using field-test video processed with our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
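One common way to compensate camera panning before multi-frame processing is to estimate the global inter-frame shift and re-register the frames. The sketch below uses phase correlation, an illustrative standard technique that is not necessarily the one developed in the paper:

```python
import numpy as np

def global_shift(ref, moved):
    """Integer (dy, dx) translation such that rolling `ref` by it
    reproduces `moved`, found at the peak of the phase correlation."""
    F = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:   # peaks past the midpoint are negative shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # simulate a camera pan
shift = global_shift(ref, moved)
```

After undoing the estimated shift, a multi-frame algorithm (e.g. temporal averaging for turbulence mitigation) sees a stabilised scene.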

  2. Radiation exposure in transcatheter patent ductus arteriosus closure: time to tune?

    PubMed

    Villemain, Olivier; Malekzadeh-Milani, Sophie; Sitefane, Fidelio; Mostefa-Kara, Meriem; Boudjemline, Younes

    2018-05-01

The aims of this study were to describe radiation level at our institution during transcatheter patent ductus arteriosus occlusion and to evaluate the components contributing to radiation exposure. Transcatheter occlusion relying on X-ray imaging has become the treatment of choice for patients with patent ductus arteriosus. Interventionists now work hard to minimise radiation exposure in order to reduce risk of induced cancers. We retrospectively reviewed all consecutive children who underwent transcatheter closure of patent ductus arteriosus from January 2012 to January 2016. Clinical data, anatomical characteristics, and catheterisation procedure parameters were reported. Radiation doses were analysed for the following variables: total air kerma, mGy; dose area product, Gy·cm²; dose area product per body weight, Gy·cm²/kg; and total fluoroscopic time. A total of 324 patients were included (median age=1.51 [Q1-Q3: 0.62-4.23] years; weight=10.3 [6.7-17.0] kg). In all, 322/324 (99.4%) procedures were successful. The median radiation doses were as follows: total air kerma: 26 (14.5-49.3) mGy; dose area product: 1.01 (0.56-2.24) Gy·cm²; dose area product/kg: 0.106 (0.061-0.185) Gy·cm²/kg; and fluoroscopic time: 2.8 (2-4) min. In multivariate analysis, a weight >10 kg, a ductus arteriosus width <2 mm, complications during the procedure, and a high frame rate (15 frames/second) were risk factors for increased exposure. Lower doses of radiation can be achieved with the following recommendations: technical improvement, frame rate reduction, avoidance of biplane cineangiograms, use of stored fluoroscopy as much as possible, and limitation of fluoroscopic time. A greater use of echocardiography might even lessen the exposure.
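The weight-normalised dose metric used above is a single division, but it is worth noting that the reported median of 0.106 Gy.cm2/kg is the median of per-patient ratios, which need not equal the ratio of the median DAP to the median weight. A minimal sketch:

```python
def dap_per_kg(dap_gycm2, weight_kg):
    """Dose-area product normalised by body weight (Gy.cm2/kg)."""
    return dap_gycm2 / weight_kg

# Ratio of the reported medians (1.01 Gy.cm2 / 10.3 kg) is about 0.098,
# close to but not identical to the reported median per-patient ratio of 0.106.
value = dap_per_kg(1.01, 10.3)
```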

  3. Vision-Based Finger Detection, Tracking, and Event Identification Techniques for Multi-Touch Sensing and Display Systems

    PubMed Central

    Chen, Yen-Lin; Liang, Wen-Yew; Chiang, Chuan-Yen; Hsieh, Tung-Ju; Lee, Da-Cheng; Yuan, Shyan-Ming; Chang, Yang-Lang

    2011-01-01

    This study presents efficient vision-based finger detection, tracking, and event identification techniques and a low-cost hardware framework for multi-touch sensing and display applications. The proposed approach uses a fast bright-blob segmentation process based on automatic multilevel histogram thresholding to extract the pixels of touch blobs obtained from scattered infrared lights captured by a video camera. The advantage of this automatic multilevel thresholding approach is its robustness and adaptability when dealing with various ambient lighting conditions and spurious infrared noises. To extract the connected components of these touch blobs, a connected-component analysis procedure is applied to the bright pixels acquired by the previous stage. After extracting the touch blobs from each of the captured image frames, a blob tracking and event recognition process analyzes the spatial and temporal information of these touch blobs from consecutive frames to determine the possible touch events and actions performed by users. This process also refines the detection results and corrects for errors and occlusions caused by noise and errors during the blob extraction process. The proposed blob tracking and touch event recognition process includes two phases. First, the phase of blob tracking associates the motion correspondence of blobs in succeeding frames by analyzing their spatial and temporal features. The touch event recognition process can identify meaningful touch events based on the motion information of touch blobs, such as finger moving, rotating, pressing, hovering, and clicking actions. Experimental results demonstrate that the proposed vision-based finger detection, tracking, and event identification system is feasible and effective for multi-touch sensing applications in various operational environments and conditions. PMID:22163990
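The frame-to-frame blob association step can be approximated with a greedy nearest-neighbour matcher on blob centroids. This is a simplified stand-in for the paper's spatio-temporal correspondence analysis; the gating distance is arbitrary:

```python
import math

def link_blobs(prev, curr, max_dist=30.0):
    """Greedy nearest-neighbour association of touch-blob centroids
    between consecutive frames. prev, curr: lists of (x, y) centroids;
    returns (prev_index, curr_index) matches within max_dist."""
    matches, used = [], set()
    for i, (px, py) in enumerate(prev):
        best, best_d = None, max_dist
        for j, (cx, cy) in enumerate(curr):
            if j in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)
    return matches

# Two fingers moving slightly; a third blob appears far away (a new touch).
links = link_blobs([(10, 10), (100, 50)], [(12, 11), (98, 52), (300, 300)])
```

Unmatched blobs in the current frame would then be treated as new touch-down events, and unmatched blobs in the previous frame as lift-off or occlusion candidates.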

  4. A new radial strain and strain rate estimation method using autocorrelation for carotid artery

    NASA Astrophysics Data System (ADS)

    Ye, Jihui; Kim, Hoonmin; Park, Jongho; Yeo, Sunmi; Shim, Hwan; Lim, Hyungjoon; Yoo, Yangmo

    2014-03-01

Atherosclerosis is a leading cause of cardiovascular disease. The early diagnosis of atherosclerosis is of clinical interest since it can prevent any adverse effects of atherosclerotic vascular diseases. In this paper, a new carotid artery radial strain estimation method based on autocorrelation is presented. In the proposed method, the strain is first estimated by the autocorrelation of two complex signals from consecutive frames. Then, the angular phase from the autocorrelation is converted to strain and strain rate, and these are analyzed over time. In addition, a 2D strain image over a region of interest in a carotid artery can be displayed. To evaluate the feasibility of the proposed radial strain estimation method, radiofrequency (RF) data of 408 frames in the carotid artery of a volunteer were acquired by a commercial ultrasound system equipped with a research package (V10, Samsung Medison, Korea) using an L5-13IS linear array transducer. From the in vivo carotid artery data, the mean strain estimate was -0.1372 while its minimum and maximum values were -2.961 and 0.909, respectively. Moreover, the overall strain estimates are highly correlated with the reconstructed M-mode trace. Similar results were obtained from the estimation of the strain rate change over time. These results indicate that the proposed carotid artery radial strain estimation method is useful for assessing the arterial wall's stiffness noninvasively without increasing the computational complexity.
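The core of the autocorrelation step is the angular phase of the lag-zero cross-correlation between complex (IQ) signals from two consecutive frames. The sketch below recovers that phase; the scaling from phase to displacement and then to strain, which depends on centre frequency and sampling, is omitted here:

```python
import numpy as np

def phase_shift(sig_a, sig_b):
    """Mean phase of the lag-zero cross-correlation between complex
    signals from two consecutive frames: angle(sum(conj(a) * b))."""
    r = np.vdot(sig_a, sig_b)   # np.vdot conjugates its first argument
    return np.angle(r)

# A pure tone with a small uniform phase shift between frames
# should be recovered exactly.
n = np.arange(256)
a = np.exp(1j * 0.2 * n)
b = a * np.exp(1j * 0.05)       # 0.05 rad shift between consecutive frames
ps = phase_shift(a, b)
```

In the strain method, this phase (estimated per depth sample) is converted to wall displacement, and the gradient of displacement along depth gives the radial strain.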

  5. Jupiter's Southern Lights

    NASA Image and Video Library

    2017-05-25

    The complexity and richness of Jupiter's "southern lights" (also known as auroras) are on display in this animation of false-color maps from NASA's Juno spacecraft. Auroras result when energetic electrons from the magnetosphere crash into the molecular hydrogen in the Jovian upper atmosphere. The data for this animation were obtained by Juno's Ultraviolet Spectrograph. The images are centered on the south pole and extend to latitudes of 50 degrees south. Each frame of the animation includes data from 30 consecutive Juno spins (about 15 minutes), just after the spacecraft's fifth close approach to Jupiter on February 2, 2017. The eight frames of the animation cover the period from 13:40 to 15:40 UTC at Juno. During that time, the spacecraft was receding from 35,000 miles to 153,900 miles (56,300 kilometers to 247,600 kilometers) above the aurora; this large change in distance accounts for the increasing fuzziness of the features. Jupiter's prime meridian is toward the bottom, and longitudes increase counterclockwise from there. The sun was located near the bottom at the start of the animation, but was off to the right by the end of the two-hour period. The red coloring of some of the features indicates that those emissions came from deeper in Jupiter's atmosphere; green and white indicate emissions from higher up in the atmosphere. Animations are available at https://photojournal.jpl.nasa.gov/catalog/PIA21643

  6. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    PubMed

    Ko, Byoung Chul

    2018-01-30

Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER research, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work.

  7. A Brief Review of Facial Emotion Recognition Based on Visual Information

    PubMed Central

    2018-01-01

Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling “end-to-end” learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which are a standard for a quantitative comparison of FER research, is described. This review can serve as a brief guidebook to newcomers in the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as to experienced researchers looking for productive directions for future work. PMID:29385749

  8. Right Hemispatial Neglect: Frequency and Characterization Following Acute Left Hemisphere Stroke

    ERIC Educational Resources Information Center

    Kleinman, Jonathan T.; Newhart, Melissa; Davis, Cameron; Heidler-Gary, Jennifer; Gottesman, Rebecca F.; Hillis, Argye E.

    2007-01-01

    The frequency of various types of unilateral spatial neglect and associated areas of neural dysfunction after left hemisphere stroke are not well characterized. Unilateral spatial neglect (USN) in distinct spatial reference frames have been identified after acute right, but not left hemisphere stroke. We studied 47 consecutive right handed…

  9. A fast double shutter for CCD-based metrology

    NASA Astrophysics Data System (ADS)

    Geisler, R.

    2017-02-01

Image-based metrology such as Particle Image Velocimetry (PIV) depends on the comparison of two images of an object taken in fast succession. Cameras for these applications provide the so-called "double shutter" mode: one frame is captured with a short exposure time, and in direct succession a second frame with a long exposure time can be recorded. The difference in the exposure times is typically no problem, since illumination is provided by a pulsed light source such as a laser and the measurements are performed in a darkened environment to prevent ambient light from accumulating during the long second exposure time. However, measurements of self-luminous processes (e.g. plasma, combustion ...) as well as experiments in ambient light are difficult to perform and require special equipment (external shutters, high-speed image sensors, multi-sensor systems ...). Unfortunately, all these methods incorporate different drawbacks such as reduced resolution, degraded image quality, decreased light sensitivity or increased susceptibility to decalibration. In the solution presented here, off-the-shelf CCD sensors are used with a special timing to combine neighbouring pixels in a binning-like way. As a result, two frames of short exposure time can be captured in fast succession. They are stored in the on-chip vertical register in a line-interleaved pattern, read out in the common way and separated again by software. The two resultant frames are completely congruent; they exhibit no insensitive lines or line shifts and thus enable sub-pixel accurate measurements. A third frame can be captured at the full resolution, analogous to the double shutter technique. Image-based measurement techniques such as PIV can benefit from this mode when applied in bright environments. The third frame is useful e.g. for acceleration measurements or for particle tracking applications.
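The software separation of the line-interleaved readout amounts to de-interleaving rows. The sketch below assumes frame A occupies the even lines and frame B the odd lines of the raw readout; the actual interleave order on a given sensor is an assumption:

```python
import numpy as np

def split_interleaved(raw):
    """Separate a line-interleaved readout into its two exposures:
    even rows -> frame A, odd rows -> frame B (assumed ordering)."""
    return raw[0::2, :], raw[1::2, :]

raw = np.arange(8 * 4).reshape(8, 4)   # stand-in for a raw sensor readout
frame_a, frame_b = split_interleaved(raw)
```

Each recovered frame has half the raw line count; because both exposures share the same pixel grid, the two frames stay congruent for sub-pixel correlation.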

  10. New method for finding multiple meaningful trajectories

    NASA Astrophysics Data System (ADS)

    Bao, Zhonghao; Flachs, Gerald M.; Jordan, Jay B.

    1995-07-01

Mathematical foundations and algorithms for efficiently finding multiple meaningful trajectories (FMMT) in a sequence of digital images are presented. A meaningful trajectory is motion created by a sentient being or by a device under the control of a sentient being. It is smooth and predictable over short time intervals. A meaningful trajectory can suddenly appear or disappear in an image sequence. The development of the FMMT is based on these assumptions. A finite state machine in the FMMT is used to model the trajectories under the conditions of occlusions and false targets. Each possible trajectory is associated with an initial state of a finite state machine. When two frames of data are available, a linear predictor is used to predict the locations of all possible trajectories. All trajectories within a certain error bound are moved to a monitoring trajectory state. When trajectories attain three consecutive good predictions, they are moved to a valid trajectory state and considered to be locked into a tracking mode. If an object is occluded while in the valid trajectory state, the predicted position is used to continue to track; however, the confidence in the trajectory is lowered. If the trajectory confidence falls below a lower limit, the trajectory is terminated. Results are presented that illustrate the FMMT applied to track multiple munitions fired from a missile in a sequence of images. Accurate trajectories are determined even in poor images where the probabilities of miss and false alarm are very high.
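A minimal version of this finite-state logic, with a constant-velocity linear predictor, promotion after three consecutive good predictions, and confidence decay while coasting through occlusions, can be sketched as follows (the gate, decay factor, and termination threshold are illustrative, not the paper's values):

```python
class Track:
    """Toy 1D track in the spirit of the FMMT state machine."""

    def __init__(self, p0, p1):
        self.prev, self.curr = p0, p1
        self.state, self.good, self.conf = "monitoring", 0, 1.0

    def predict(self):
        # linear predictor: constant velocity from the last two points
        return 2 * self.curr - self.prev

    def update(self, meas, gate=2.0):
        pred = self.predict()
        if meas is not None and abs(meas - pred) <= gate:
            self.good += 1                  # good prediction
            if self.good >= 3:
                self.state = "valid"        # locked into tracking mode
            self.prev, self.curr = self.curr, meas
        else:                               # occlusion or false target: coast
            self.prev, self.curr = self.curr, pred
            self.conf *= 0.5
            if self.conf < 0.2:
                self.state = "terminated"

t = Track(0.0, 1.0)
for z in (2.1, 2.9, 4.0):   # three consecutive good measurements
    t.update(z)
```

After the three good updates the track reaches the valid state; feeding it `None` measurements (occlusion) halves the confidence each frame until the track is terminated.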

  11. SuperSegger: robust image segmentation, analysis and lineage tracking of bacterial cells.

    PubMed

    Stylianidou, Stella; Brennan, Connor; Nissen, Silas B; Kuwada, Nathan J; Wiggins, Paul A

    2016-11-01

    Many quantitative cell biology questions require fast yet reliable automated image segmentation to identify and link cells from frame-to-frame, and characterize the cell morphology and fluorescence. We present SuperSegger, an automated MATLAB-based image processing package well-suited to quantitative analysis of high-throughput live-cell fluorescence microscopy of bacterial cells. SuperSegger incorporates machine-learning algorithms to optimize cellular boundaries and automated error resolution to reliably link cells from frame-to-frame. Unlike existing packages, it can reliably segment microcolonies with many cells, facilitating the analysis of cell-cycle dynamics in bacteria as well as cell-contact mediated phenomena. This package has a range of built-in capabilities for characterizing bacterial cells, including the identification of cell division events, mother, daughter and neighbouring cells, and computing statistics on cellular fluorescence, the location and intensity of fluorescent foci. SuperSegger provides a variety of postprocessing data visualization tools for single cell and population level analysis, such as histograms, kymographs, frame mosaics, movies and consensus images. Finally, we demonstrate the power of the package by analyzing lag phase growth with single cell resolution. © 2016 John Wiley & Sons Ltd.

  12. SU-F-303-11: Implementation and Applications of Rapid, SIFT-Based Cine MR Image Binning and Region Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazur, T; Wang, Y; Fischer-Valuck, B

    2015-06-15

    Purpose: To develop a novel, rapid, SIFT-based algorithm for assessing feature motion on cine MR images acquired during MRI-guided radiotherapy treatments. In particular, we apply SIFT descriptors toward both partitioning cine images into respiratory states and tracking regions across frames. Methods: Among a training set of images acquired during a fraction, we densely assign SIFT descriptors to pixels within the images. We cluster these descriptors across all frames in order to produce a dictionary of trackable features. Associating the best-matching descriptors at every frame among the training images to these features, we construct motion traces for the features. We use these traces to define respiratory bins for sorting images in order to facilitate robust pixel-by-pixel tracking. Instead of applying conventional methods for identifying pixel correspondences across frames, we utilize a recently-developed algorithm that derives correspondences via a matching objective for SIFT descriptors. Results: We apply these methods to a collection of lung, abdominal, and breast patients. We evaluate the procedure for respiratory binning using target sites exhibiting high-amplitude motion among 20 lung and abdominal patients. In particular, we investigate whether these methods yield minimal variation between images within a bin by perturbing the resulting image distributions among bins. Moreover, we compare the motion between averaged images across respiratory states to 4DCT data for these patients. We evaluate the algorithm for obtaining pixel correspondences between frames by tracking contours among a set of breast patients. As an initial case, we track easily-identifiable edges of lumpectomy cavities that show minimal motion over treatment. Conclusions: These SIFT-based methods reliably extract motion information from cine MR images acquired during patient treatments. While we performed our analysis retrospectively, the algorithm lends itself to prospective motion assessment. Applications of these methods include motion assessment, identifying treatment windows for gating, and determining optimal margins for treatment.
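
    The respiratory-binning step lends itself to a simple sketch. Assuming a one-dimensional feature-motion trace has already been extracted (the abstract's SIFT machinery is omitted here), frames can be sorted into amplitude bins as follows; the function name and bin count are illustrative:

```python
import numpy as np

def bin_frames_by_amplitude(trace, n_bins=4):
    """Assign each frame index to an amplitude bin of a feature-motion trace."""
    trace = np.asarray(trace, dtype=float)
    edges = np.linspace(trace.min(), trace.max(), n_bins + 1)
    # digitize maps values equal to the top edge past the last bin; clip back.
    bins = np.clip(np.digitize(trace, edges) - 1, 0, n_bins - 1)
    return {b: np.flatnonzero(bins == b).tolist() for b in range(n_bins)}
```

    Frames in the same bin share a similar feature amplitude, which is the property the pixel-by-pixel tracking relies on.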

  13. Evaluation of pulmonary function using breathing chest radiography with a dynamic flat panel detector: primary results in pulmonary diseases.

    PubMed

    Tanaka, Rie; Sanada, Shigeru; Okazaki, Nobuo; Kobayashi, Takeshi; Fujimura, Masaki; Yasui, Masahide; Matsui, Takeshi; Nakayama, Kazuya; Nanbu, Yuko; Matsui, Osamu

    2006-10-01

    Dynamic flat panel detectors (FPD) permit acquisition of distortion-free radiographs with a large field of view and high image quality. The present study was performed to evaluate pulmonary function using breathing chest radiography with a dynamic FPD. We report primary results of a clinical study and a computer algorithm for quantifying and visualizing relative local pulmonary airflow. Dynamic chest radiographs of 18 subjects (1 emphysema, 2 asthma, 4 interstitial pneumonia, 1 pulmonary nodule, and 10 normal controls) were obtained during respiration using an FPD system. We measured respiratory changes in the distance from the lung apex to the diaphragm (DLD) and in pixel values in each lung area. Subsequently, the interframe differences (D-frame) and the difference values between maximum inspiratory and expiratory phases (D-max) were calculated. D-max in each lung represents relative vital capacity (VC), and regional D-frames represent pulmonary airflow in each local area. D-frames were superimposed on dynamic chest radiographs in the form of a color display (fusion images). The results obtained using our methods were compared with findings on computed tomography (CT) images and pulmonary function tests (PFT), which were examined before inclusion in the study. In normal subjects, the D-frames were distributed symmetrically in both lungs throughout all respiratory phases. However, subjects with pulmonary diseases showed D-frame distribution patterns that differed from the normal pattern. In subjects with air trapping, there were some areas with D-frames near zero, indicated as colorless areas on fusion images. These areas also corresponded to the areas showing air trapping on computed tomography images. In asthma, obstructive abnormality was indicated by areas continuously showing D-frames near zero in the upper lung. Patients with interstitial pneumonia commonly showed fusion images with an uneven color distribution accompanied by increased D-frames in the area identified as normal on computed tomography images. Furthermore, measurement of the DLD was very effective for evaluating diaphragmatic kinetics. This is a rapid and simple method for the evaluation of respiratory kinetics in pulmonary diseases, which can reveal abnormalities in diaphragmatic kinetics and regional lung ventilation. Furthermore, quantification and visualization of respiratory kinetics is useful as an aid in interpreting dynamic chest radiographs.
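
    The D-frame and D-max quantities defined above reduce to simple array operations. A hedged numpy sketch, with D-max approximated per pixel as the max-minus-min over the acquired frames (variable names are assumptions based on the abstract's definitions):

```python
import numpy as np

def d_frame_and_d_max(frames):
    """frames: (T, H, W) array of pixel values over the respiratory cycle.

    Returns the interframe differences (D-frame) and a per-pixel proxy for
    the maximum-inspiratory vs maximum-expiratory difference (D-max).
    """
    frames = np.asarray(frames, dtype=float)
    d_frame = np.diff(frames, axis=0)                # interframe differences
    d_max = frames.max(axis=0) - frames.min(axis=0)  # insp/exp difference proxy
    return d_frame, d_max
```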

  14. Feature Tracking for High Speed AFM Imaging of Biopolymers.

    PubMed

    Hartman, Brett; Andersson, Sean B

    2018-03-31

    The scanning speed of atomic force microscopes continues to advance with some current commercial microscopes achieving on the order of one frame per second and at least one reaching 10 frames per second. Despite the success of these instruments, even higher frame rates are needed with scan ranges larger than are currently achievable. Moreover, there is a significant installed base of slower instruments that would benefit from algorithmic approaches to increasing their frame rate without requiring significant hardware modifications. In this paper, we present an experimental demonstration of high speed scanning on an existing, non-high speed instrument, through the use of a feedback-based, feature-tracking algorithm that reduces imaging time by focusing on features of interest to reduce the total imaging area. Experiments on both circular and square gratings, as well as silicon steps and DNA strands show a reduction in imaging time by a factor of 3-12 over raster scanning, depending on the parameters chosen.
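
    The idea of shrinking the scanned area to a region around a tracked feature can be illustrated with a toy window-selection routine; the thresholding and margin logic are assumptions for illustration, not the paper's feedback algorithm:

```python
import numpy as np

def next_scan_window(height_map, threshold, margin=2):
    """Bounding box (row0, row1, col0, col1) of the feature, plus a margin.

    height_map: 2-D array from the previous (possibly coarse) scan; pixels
    above `threshold` are treated as the feature of interest.
    """
    mask = np.asarray(height_map) > threshold
    rows, cols = np.nonzero(mask)
    r0 = int(max(rows.min() - margin, 0))
    r1 = int(min(rows.max() + margin + 1, mask.shape[0]))
    c0 = int(max(cols.min() - margin, 0))
    c1 = int(min(cols.max() + margin + 1, mask.shape[1]))
    return r0, r1, c0, c1
```

    Scanning only this window rather than the full field is what yields the 3-12x reduction in imaging time reported above.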

  15. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high resolution, high frame rate InGaAs based image sensor and associated camera has been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has a maximum of 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink™ interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.

  16. 1.56 Terahertz 2-frames per second standoff imaging

    NASA Astrophysics Data System (ADS)

    Goyette, Thomas M.; Dickinson, Jason C.; Linden, Kurt J.; Neal, William R.; Joseph, Cecil S.; Gorveatt, William J.; Waldman, Jerry; Giles, Robert; Nixon, William E.

    2008-02-01

    A terahertz imaging system intended to demonstrate identification of objects concealed under clothing was designed, assembled, and tested. The system design was based on a 2.5 m standoff distance, with the capability of visualizing a 0.5 m by 0.5 m scene at an image rate of 2 frames per second. The system optical design consisted of a 1.56 THz laser beam, which was raster swept by a dual torsion mirror scanner. The beam was focused onto the scan subject by a stationary 50 cm-diameter focusing mirror. A heterodyne detection technique was used to down-convert the backscattered signal. The system demonstrated a 1.5 cm spot resolution. Human subjects were scanned at a frame rate of 2 frames per second. Hidden metal objects were detected under a jacket worn by the human subject. A movie including data and video images was produced from a 1.5-minute scan of a human subject through 180° of azimuth angle at 0.7° increments.

  17. Background suppression of infrared small target image based on inter-frame registration

    NASA Astrophysics Data System (ADS)

    Ye, Xiubo; Xue, Bindang

    2018-04-01

    We propose a multi-frame background suppression method for remote infrared small target detection. Inter-frame information is necessary when heavy background clutter makes it difficult to distinguish real targets from false alarms. A registration procedure based on point matching in image patches is used to compensate for local deformation of the background. The target can then be separated by background subtraction. Experiments show that our method serves as an effective preliminary step for target detection.

  18. Martian Dust Devil Action in Gale Crater, Sol 1597

    NASA Image and Video Library

    2017-02-27

    This frame from a sequence of images shows a dust-carrying whirlwind, called a dust devil, scooting across the ground inside Gale Crater, as observed on the local summer afternoon of NASA's Curiosity Mars Rover's 1,597th Martian day, or sol (Feb. 1, 2017). Set within a broader southward view from the rover's Navigation Camera, the rectangular area outlined in black was imaged multiple times over a span of several minutes to check for dust devils. Images from the period with most activity are shown in the inset area. The images are in pairs that were taken about 12 seconds apart, with an interval of about 90 seconds between pairs. Timing is accelerated and not fully proportional in this animation. A dust devil is most evident in the 10th, 11th and 12th frames. In the first and fifth frames, dust blowing across the ground appears as a pale horizontal streak. Contrast has been modified to make frame-to-frame changes easier to see. A black frame is added between repeats of the sequence. On Mars as on Earth, dust devils are whirlwinds that result from sunshine warming the ground, prompting convective rising of air that has gained heat from the ground. Observations of Martian dust devils provide information about wind directions and interaction between the surface and the atmosphere. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21270

  19. The segmentation of bones in pelvic CT images based on extraction of key frames.

    PubMed

    Yu, Hui; Wang, Haijun; Shi, Yao; Xu, Ke; Yu, Xuyao; Cao, Yuzhen

    2018-05-22

    Bone segmentation is important in computed tomography (CT) imaging of the pelvis, as it assists physicians in the early diagnosis of pelvic injury, in planning operations, and in evaluating the effects of surgical treatment. This study developed a new algorithm for the accurate, fast, and efficient segmentation of the pelvis. The proposed method consists of two main parts: the extraction of key frames and the segmentation of pelvic CT images. Key frames were extracted based on pixel difference, mutual information and the normalized correlation coefficient. In the pelvis segmentation phase, skeleton extraction from CT images and a marker-based watershed algorithm were combined to segment the pelvis. To meet the requirements of clinical application, a physician's judgment is needed; therefore, the proposed methodology is semi-automated. In this paper, 5 sets of CT data were used to test the overlapping area, and 15 CT images were used to determine the average deviation distance. The average overlapping area of the 5 sets was greater than 94%, and the minimum average deviation distance was approximately 0.58 pixels. In addition, the key frame extraction efficiency and the running time of the proposed method were evaluated on 20 sets of CT data. For each set, approximately 13% of the images were selected as key frames, and the average processing time was approximately 2 min (the time for manual marking was not included). The proposed method is able to achieve accurate, fast, and efficient segmentation of pelvic CT image sequences. Segmentation results not only provide an important reference for early diagnosis and decisions regarding surgical procedures, but also offer more accurate data for medical image registration, recognition and 3D reconstruction.
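
    The key-frame criterion based on the normalized correlation coefficient can be sketched as follows; the threshold value and the keep-on-low-correlation rule are illustrative assumptions, and the abstract's pixel-difference and mutual-information criteria are omitted:

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized correlation coefficient between two images."""
    a = np.asarray(a, float).ravel(); b = np.asarray(b, float).ravel()
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_key_frames(frames, ncc_threshold=0.95):
    """Keep a frame whenever it correlates poorly with the last key frame."""
    keys = [0]
    for i in range(1, len(frames)):
        if normalized_correlation(frames[keys[-1]], frames[i]) < ncc_threshold:
            keys.append(i)
    return keys
```

    Slices that closely resemble the previous key frame are skipped, which is how only ~13% of the images end up selected.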

  20. Geometric accuracy of 3D coordinates of the Leksell stereotactic skull frame in 1.5 Tesla- and 3.0 Tesla-magnetic resonance imaging: a comparison of three different fixation screw materials

    PubMed Central

    Nakazawa, Hisato; Mori, Yoshimasa; Yamamuro, Osamu; Komori, Masataka; Shibamoto, Yuta; Uchiyama, Yukio; Tsugawa, Takahiko; Hagiwara, Masahiro

    2014-01-01

    We assessed the geometric distortion of 1.5-Tesla (T) and 3.0-T magnetic resonance (MR) images with the Leksell skull frame system using three types of cranial quick fixation screws (QFSs) of different materials—aluminum, aluminum with tungsten tip, and titanium—for skull frame fixation. Two kinds of acrylic phantoms were placed on a Leksell skull frame using the three types of screws, and were scanned with computed tomography (CT), 1.5-T MR imaging and 3.0-T MR imaging. The 3D coordinates for both strengths of MR imaging were compared with those for CT. The deviations of the measured coordinates at selected points (x = 50, 100 and 150; y = 50, 100 and 150) were indicated on different axial planes (z = 50, 75, 100, 125 and 150). The errors of coordinates with QFSs of aluminum, tungsten-tipped aluminum, and titanium were <1.0, 1.0 and 2.0 mm in the entire treatable area, respectively, with 1.5 T. In the 3.0-T field, the errors with aluminum QFSs were <1.0 mm only around the center, while the errors with tungsten-tipped aluminum and titanium were >2.0 mm in most positions. The geometric accuracy of the Leksell skull frame system with 1.5-T MR imaging was high and valid for clinical use. However, the geometric errors with 3.0-T MR imaging were larger than those of 1.5-T MR imaging and were acceptable only with aluminum QFSs, and then only around the central region. PMID:25034732

  1. Spread-Spectrum Beamforming and Clutter Filtering for Plane-Wave Color Doppler Imaging.

    PubMed

    Mansour, Omar; Poepping, Tamie L; Lacefield, James C

    2016-07-21

    Plane-wave imaging is desirable for its ability to achieve high frame rates, allowing the capture of fast dynamic events and continuous Doppler data. In most implementations of plane-wave imaging, multiple low-resolution images from different plane wave tilt angles are compounded to form a single high-resolution image, thereby reducing the frame rate. Compounding improves the lateral beam profile in the high-resolution image, but it also acts as a low-pass filter in slow time that causes attenuation and aliasing of signals with high Doppler shifts. This paper introduces a spread-spectrum color Doppler imaging method that produces high-resolution images without the use of compounding, thereby eliminating the tradeoff between beam quality, maximum unaliased Doppler frequency, and frame rate. The method uses a long, random sequence of transmit angles rather than a linear sweep of plane wave directions. The random angle sequence randomizes the phase of off-focus (clutter) signals, thereby spreading the clutter power in the Doppler spectrum, while keeping the spectrum of the in-focus signal intact. The ensemble of randomly tilted low-resolution frames also acts as the Doppler ensemble, so it can be much longer than a conventional linear sweep, thereby improving beam formation while also making the slow-time Doppler sampling frequency equal to the pulse repetition frequency. Experiments performed using a carotid artery phantom with constant flow demonstrate that the spread-spectrum method more accurately measures the parabolic flow profile of the vessel and outperforms conventional plane-wave Doppler in both contrast resolution and estimation of high flow velocities. The spread-spectrum method is expected to be valuable for Doppler applications that require measurement of high velocities at high frame rates.
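
    The difference between the two transmit schedules is easy to sketch. The toy code below contrasts a repeated linear sweep with a long random sequence drawn from the same set of tilt angles; the angle values and sequence length are assumptions, and no beamforming is modeled:

```python
import numpy as np

def linear_sweep(angles, n_transmits):
    """Conventional schedule: repeat a short linear sweep of tilt angles."""
    return [angles[i % len(angles)] for i in range(n_transmits)]

def random_sequence(angles, n_transmits, seed=0):
    """Spread-spectrum schedule: a long random sequence over the same angles."""
    rng = np.random.default_rng(seed)
    return list(rng.choice(angles, size=n_transmits))
```

    In the random schedule the ensemble of low-resolution frames doubles as the Doppler ensemble, so the slow-time sampling rate equals the pulse repetition frequency.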

  2. Toshiba TDF-500 High Resolution Viewing And Analysis System

    NASA Astrophysics Data System (ADS)

    Roberts, Barry; Kakegawa, M.; Nishikawa, M.; Oikawa, D.

    1988-06-01

    A high resolution, operator interactive, medical viewing and analysis system has been developed by Toshiba and Bio-Imaging Research. This system provides many advanced features including high resolution displays, a very large image memory and advanced image processing capability. In particular, the system provides CRT frame buffers capable of update in one frame period, an array processor capable of image processing at operator interactive speeds, and a memory system capable of updating multiple frame buffers at frame rates whilst supporting multiple array processors. The display system provides 1024 x 1536 display resolution at 40 Hz frame and 80 Hz field rates. In particular, the ability to provide whole or partial update of the screen at the scanning rate is a key feature. This allows multiple viewports or windows in the display buffer with both fixed and cine capability. To support image processing features such as windowing, pan, zoom, minification, filtering, ROI analysis, multiplanar and 3D reconstruction, a high performance CPU is integrated into the system. This CPU is an array processor capable of up to 400 million instructions per second. To support the multiple viewers' and array processors' instantaneous high memory bandwidth requirements, an ultra fast memory system is used. This memory system has a bandwidth capability of 400 MB/sec and a total capacity of 256 MB. This bandwidth is more than adequate to support several high resolution CRTs and also the fast processing unit. This fully integrated approach allows effective real-time image processing. The integrated design of the viewing system, memory system and array processor is key to the imaging system, and this paper describes its architecture.

  3. Reducing misfocus-related motion artefacts in laser speckle contrast imaging.

    PubMed

    Ringuette, Dene; Sigal, Iliya; Gad, Raanan; Levi, Ofer

    2015-01-01

    Laser Speckle Contrast Imaging (LSCI) is a flexible, easy-to-implement technique for measuring blood flow speeds in vivo. In order to obtain reliable quantitative data from LSCI, the object must remain in the focal plane of the imaging system for the duration of the measurement session. However, since LSCI suffers from inherent frame-to-frame noise, it often requires a moving average filter to produce quantitative results. This frame-to-frame noise also makes the implementation of a rapid autofocus system challenging. In this work, we demonstrate an autofocus method and system based on a novel measure of misfocus, which serves as an accurate and noise-robust feedback mechanism. This measure of misfocus is shown to enable localization of best focus with sub-depth-of-field sensitivity, yielding more accurate estimates of blood flow speeds and blood vessel diameters.

  4. In vitro particle image velocity measurements in a model root canal: flow around a polymer rotary finishing file.

    PubMed

    Koch, Jon D; Smith, Nicholas A; Garces, Daniel; Gao, Luyang; Olsen, F Kris

    2014-03-01

    Root canal irrigation is vital to thorough debridement and disinfection, but the mechanisms that contribute to its effectiveness are complex and uncertain. Traditionally, studies in this area have relied on before-and-after static comparisons to assess effectiveness, but new in situ tools are being developed to provide real-time assessments of irrigation. The aim of this work was to measure a cross section of the velocity field in the fluid flow around a polymer rotary finishing file in a model root canal. Fluorescent microparticles were seeded into an optically accessible acrylic root canal model. A polymer rotary finishing file was activated in a static position. After laser excitation, fluorescence from the microparticles was imaged onto a frame-transfer camera. Two consecutive images were cross-correlated to provide a measurement of a projected, 2-dimensional velocity field. The method reveals that fluid velocities can be much higher than the velocity of the file because of the shape of the file. Furthermore, these high velocities are in the axial direction of the canal rather than only in the direction of motion of the file. Particle image velocimetry indicates that fluid velocities induced by the rotating file can be much larger than the speed of the file. Particle image velocimetry can provide qualitative insight and quantitative measurements that may be useful for validating computational fluid dynamic models and connecting clinical observations to physical explanations in dental research. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
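
    The cross-correlation step at the heart of particle image velocimetry can be sketched generically: locate the peak of the FFT-based cross-correlation of two consecutive frames. This is a standard textbook formulation, assumed for illustration, not the study's instrument software:

```python
import numpy as np

def piv_displacement(frame_a, frame_b):
    """Return the integer (dy, dx) shift that best maps frame_a onto frame_b."""
    a = np.asarray(frame_a, float); b = np.asarray(frame_b, float)
    a = a - a.mean(); b = b - b.mean()
    # Circular cross-correlation via the FFT; peak location gives the shift.
    corr = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx
```

    Dividing the shift by the interframe time converts it to a velocity; real PIV software does this per interrogation window with sub-pixel peak fitting.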

  5. High Resolution Ultrasound Superharmonic Perfusion Imaging: In Vivo Feasibility and Quantification of Dynamic Contrast-Enhanced Acoustic Angiography.

    PubMed

    Lindsey, Brooks D; Shelton, Sarah E; Martin, K Heath; Ozgun, Kathryn A; Rojas, Juan D; Foster, F Stuart; Dayton, Paul A

    2017-04-01

    Mapping blood perfusion quantitatively allows localization of abnormal physiology and can improve understanding of disease progression. Dynamic contrast-enhanced ultrasound is a low-cost, real-time technique for imaging perfusion dynamics with microbubble contrast agents. Previously, we have demonstrated another contrast agent-specific ultrasound imaging technique, acoustic angiography, which forms static anatomical images of the superharmonic signal produced by microbubbles. In this work, we seek to determine whether acoustic angiography can be utilized for high resolution perfusion imaging in vivo by examining the effect of acquisition rate on superharmonic imaging at low flow rates and demonstrating the feasibility of dynamic contrast-enhanced superharmonic perfusion imaging for the first time. Results in the chorioallantoic membrane model indicate that frame rate and frame averaging do not affect the measured diameter of individual vessels observed, but that frame rate does influence the detection of vessels near and below the resolution limit. The highest number of resolvable vessels was observed at an intermediate frame rate of 3 Hz using a mechanically-steered prototype transducer. We also demonstrate the feasibility of quantitatively mapping perfusion rate in 2D in a mouse model with spatial resolution of ~100 μm. This type of imaging could provide non-invasive, high resolution quantification of microvascular function at penetration depths of several centimeters.

  6. Facilitation of listening comprehension by visual information under noisy listening condition

    NASA Astrophysics Data System (ADS)

    Kashimada, Chiho; Ito, Takumi; Ogita, Kazuki; Hasegawa, Hiroshi; Kamata, Kazuo; Ayama, Miyoshi

    2009-02-01

    Comprehension of a sentence under a wide range of delay conditions between auditory and visual stimuli was measured in an environment with low auditory clarity at levels of -10 dB and -15 dB pink noise. Results showed that the image was helpful for comprehension of the noise-obscured voice stimulus when the delay between the auditory and visual stimuli was 4 frames (=132 msec) or less; the image was not helpful for comprehension when the delay was 8 frames (=264 msec) or more; and in some cases of the largest delay (32 frames), the video image interfered with comprehension.

  7. Vaporization and recondensation dynamics of indocyanine green-loaded perfluoropentane droplets irradiated by a short pulse laser

    NASA Astrophysics Data System (ADS)

    Yu, Jaesok; Chen, Xucai; Villanueva, Flordeliza S.; Kim, Kang

    2016-12-01

    Phase-transition droplets have been proposed as promising contrast agents for ultrasound and photoacoustic imaging. Short pulse laser activated perfluorocarbon-based droplets, especially when in a medium with a temperature below their boiling point, undergo phase changes of vaporization and recondensation in response to pulsed laser irradiation. Here, we report and discuss the vaporization and recondensation dynamics of perfluoropentane droplets containing indocyanine green in response to a short pulsed laser with optical and acoustic measurements. To investigate the effect of temperature on the vaporization process, an imaging chamber was mounted on a temperature-controlled water reservoir and then the vaporization event was recorded at 5 million frames per second via a high-speed camera. The high-speed movies show that most of the droplets within the laser beam area expanded rapidly as soon as they were exposed to the laser pulse and immediately recondensed within 1-2 μs. The vaporization/recondensation process was consistently reproduced in six consecutive laser pulses to the same area. As the temperature of the media was increased above the boiling point of the perfluoropentane, the droplets were less likely to recondense and remained in a gas phase after the first vaporization. These observations will help to clarify the underlying processes and eventually guide the design of repeatable phase-transition droplets as a photoacoustic imaging contrast agent.

  8. Multiple enface image averaging for enhanced optical coherence tomography angiography imaging.

    PubMed

    Uji, Akihito; Balasubramanian, Siva; Lei, Jianqin; Baghdasaryan, Elmira; Al-Sheikh, Mayss; Borrelli, Enrico; Sadda, SriniVas R

    2018-05-31

    To investigate the effect of multiple enface image averaging on the image quality of optical coherence tomography angiography (OCTA). Twenty-one normal volunteers were enrolled in this study. For each subject, one eye was imaged with the 3 × 3 mm scan protocol, and the other eye was imaged with the 6 × 6 mm scan protocol centred on the fovea using the ZEISS Angioplex™ spectral-domain OCTA device. Eyes were repeatedly imaged to obtain nine OCTA cube scan sets, and the nine superficial capillary plexus (SCP) and deep capillary plexus (DCP) images were individually averaged after registration. Eighteen eyes with a 3 × 3 mm scan field and 14 eyes with a 6 × 6 mm scan field were studied. Averaged images showed more continuous vessels and less background noise in both the SCP and the DCP as the number of frames used for averaging increased, with both the 3 × 3 and 6 × 6 mm scan protocols. The intensity histogram of the vessels changed dramatically after averaging. Contrast-to-noise ratio (CNR) and subjectively assessed image quality scores also increased as the number of frames used for averaging increased in all image types. However, the additional benefit in quality diminished when averaging more than five frames. Averaging only three frames achieved significant improvement in CNR and in the scores assigned by certified graders. Use of multiple image averaging for OCTA enface images was found to be both objectively and subjectively effective for enhancing image quality. These findings may be of value for developing optimal OCTA imaging protocols for future studies. © 2018 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.
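
    Frame averaging and a generic contrast-to-noise ratio can be sketched in a few lines. The CNR definition below (signal-background mean difference over background standard deviation) is a common convention assumed here, not necessarily the paper's exact metric:

```python
import numpy as np

def average_frames(frames):
    """Pixel-wise mean of N registered enface frames."""
    return np.mean(np.asarray(frames, dtype=float), axis=0)

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio between a vessel region and background."""
    sig = image[signal_mask]; bg = image[background_mask]
    return float(abs(sig.mean() - bg.mean()) / bg.std())
```

    Averaging N frames of uncorrelated noise reduces the background standard deviation roughly by a factor of sqrt(N), which is why CNR grows with the number of averaged frames but with diminishing returns.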

  9. Selection of optical model of stereophotography experiment for determination the cloud base height as a problem of testing of statistical hypotheses

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2017-10-01

    Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological analysis of images. However, this method requires that the axes of the cameras be parallel. Instead of physically adjusting the axes, we use virtual camera adjustment, namely, a transformation of a real frame into the result that would be obtained if all the axes were perfectly adjusted. For this adjustment, images of stars as infinitely distant objects were used: with perfectly aligned cameras, the images in both the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial approximation of the lens (without aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, i.e. the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would be if measured with an ideally aligned setup, and the distance to the cloud is then calculated. The results were compared with data from a laser range finder.
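
    The simplest deformation model mentioned above, an affine map of frame coordinates, can be fitted by least squares to matched star positions. A generic sketch (function names and the fitting setup are assumptions):

```python
import numpy as np

def fit_affine(src, dst):
    """Fit dst ≈ A @ src + t from N matched (x, y) points; returns (A, t)."""
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    ones = np.ones((src.shape[0], 1))
    # Solve [src | 1] @ M = dst in the least-squares sense; M stacks A.T and t.
    M, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)
    A, t = M[:2].T, M[2]
    return A, t

def apply_affine(A, t, pts):
    """Map points through the fitted affine transformation."""
    return np.asarray(pts, float) @ A.T + t
```

    The higher-order distortion models add polynomial radial terms on top of this linear map, which is why they diverge from it mainly near the frame edges.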

  10. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve a good energy resolution along the TOF axis.
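
    The real-time centroiding step can be illustrated with a generic threshold-label-centroid routine built on scipy.ndimage; this is a standard approach assumed for illustration, not the system's own code:

```python
import numpy as np
from scipy import ndimage

def centroid_spots(frame, threshold):
    """Intensity-weighted centroids and total intensities of bright spots.

    The spot intensities are what would be correlated against MCP
    time-of-flight peak heights to achieve multi-hit capability.
    """
    mask = frame > threshold
    labels, n = ndimage.label(mask)               # connected bright regions
    index = range(1, n + 1)
    centroids = ndimage.center_of_mass(frame, labels, index)
    intensities = ndimage.sum(frame, labels, index)
    return centroids, intensities
```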

  11. Real time heart rate variability assessment from Android smartphone camera photoplethysmography: Postural and device influences.

    PubMed

    Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A

    2015-01-01

    The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU has been used via the Renderscript API for high-performance frame-by-frame image acquisition and computing in order to obtain the PPG signal and PP interval time series. The relative error of the mean heart rate is negligible. In addition, the influences of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices have been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frame processing.
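
    The frame-to-frame processing chain reduces each camera frame to one mean-intensity PPG sample and then takes pulse-to-pulse intervals between peaks. A minimal sketch, with a simple strict-local-maximum rule assumed in place of a robust peak detector:

```python
import numpy as np

FRAME_RATE_HZ = 30.0  # built-in rear-camera rate quoted in the abstract

def frames_to_ppg(frames):
    """Average the pixels of each frame into one PPG sample."""
    return np.asarray([np.mean(f) for f in frames])

def pp_intervals_ms(ppg, frame_rate=FRAME_RATE_HZ):
    """PP intervals (ms) from strict local maxima of the PPG samples."""
    x = np.asarray(ppg, float)
    peaks = np.flatnonzero((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:])) + 1
    return np.diff(peaks) * 1000.0 / frame_rate
```

    At 30 Hz each frame spans ~33 ms, which bounds the raw PP-interval resolution and motivates the sub-sample interpolation used in practice.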

  12. TH-EF-BRA-03: Assessment of Data-Driven Respiratory Motion-Compensation Methods for 4D-CBCT Image Registration and Reconstruction Using Clinical Datasets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riblett, MJ; Weiss, E; Hugo, GD

    Purpose: To evaluate the performance of a 4D-CBCT registration and reconstruction method that corrects for respiratory motion and enhances image quality under clinically relevant conditions. Methods: Building on previous work, which tested feasibility of a motion-compensation workflow using image datasets superior to clinical acquisitions, this study assesses workflow performance under clinical conditions in terms of image quality improvement. Evaluated workflows utilized a combination of groupwise deformable image registration (DIR) and image reconstruction. Four-dimensional cone beam CT (4D-CBCT) FDK reconstructions were registered to either mean or respiratory phase reference frame images to model respiratory motion. The resulting 4D transformation was used to deform projection data during the FDK backprojection operation to create a motion-compensated reconstruction. To simulate clinically realistic conditions, superior quality projection datasets were sampled using a phase-binned striding method. Tissue interface sharpness (TIS) was defined as the slope of a sigmoid curve fit to the lung-diaphragm boundary or to the carina tissue-airway boundary when no diaphragm was discernable. Image quality improvement was assessed in 19 clinical cases by evaluating mitigation of view-aliasing artifacts, tissue interface sharpness recovery, and noise reduction. Results: For clinical datasets, average TIS recovery relative to base 4D-CBCT reconstructions was observed to be 87% using fixed-frame registration alone; 87% using fixed-frame with motion-compensated reconstruction; 92% using mean-frame registration alone; and 90% using mean-frame with motion-compensated reconstruction. Soft tissue noise was reduced on average by 43% and 44% for the fixed-frame registration and registration with motion-compensation methods, respectively, and by 40% and 42% for the corresponding mean-frame methods. 
Considerable reductions in view aliasing artifacts were observed for each method. Conclusion: Data-driven groupwise registration and motion-compensated reconstruction have the potential to improve the quality of 4D-CBCT images acquired under clinical conditions. For clinical image datasets, the addition of motion compensation after groupwise registration visibly reduced artifact impact. This work was supported by the National Cancer Institute of the National Institutes of Health under Award Number R01CA166119. Hugo and Weiss hold a research agreement with Philips Healthcare and license agreement with Varian Medical Systems. Weiss receives royalties from UpToDate. Christensen receives funds from Roger Koch to support research.
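
    The TIS metric defined above — the slope of a sigmoid fitted to an intensity profile across a tissue boundary — can be estimated as follows. This is a sketch using a coarse grid search over the slope and center parameters, not the fitting procedure used in the study:

```python
import numpy as np

def tissue_interface_sharpness(x, profile):
    """Estimate TIS as the slope parameter s of a sigmoid
    lo + (hi - lo) / (1 + exp(-s (x - c))) fitted to an edge profile
    by coarse grid search over (s, c)."""
    lo, hi = profile.min(), profile.max()
    best_err, best_s = np.inf, None
    for s in np.linspace(0.1, 5.0, 50):
        for c in np.linspace(x[0], x[-1], 51):
            model = lo + (hi - lo) / (1.0 + np.exp(-s * (x - c)))
            err = float(((profile - model) ** 2).sum())
            if err < best_err:
                best_err, best_s = err, s
    return best_s
```

    A sharper lung-diaphragm edge gives a steeper fitted sigmoid, so recovery of the slope after motion compensation quantifies how much of the blurred interface was restored.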

  13. Physical activity advertisements that feature daily well-being improve autonomy and body image in overweight women but not men.

    PubMed

    Segar, Michelle L; Updegraff, John A; Zikmund-Fisher, Brian J; Richardson, Caroline R

    2012-01-01

    The reasons for exercising that are featured in health communications brand exercise and socialize individuals about why they should be physically active. Discovering which reasons for exercising are associated with high-quality motivation and behavioral regulation is essential to promoting physical activity and weight control that can be sustained over time. This study investigates whether framing physical activity in advertisements featuring distinct types of goals differentially influences body image and behavioral regulations based on self-determination theory among overweight and obese individuals. Using a three-arm randomized trial, overweight and obese women and men (aged 40-60 yr, n = 1690) read one of three ads framing physical activity as a way to achieve (1) better health, (2) weight loss, or (3) daily well-being. Framing effects were estimated in an ANOVA model with pairwise comparisons using the Bonferroni correction. This study showed that there are immediate framing effects on physical activity behavioral regulations and body image from reading a one-page advertisement about physical activity and that gender and BMI moderate these effects. Framing physical activity as a way to enhance daily well-being positively influenced participants' perceptions about the experience of being physically active and enhanced body image among overweight women, but not men. The experiment had less impact among the obese study participants compared to those who were overweight. These findings support a growing body of research suggesting that, compared to weight loss, framing physical activity for daily well-being is a better gain-frame message for overweight women in midlife.

  14. Physical Activity Advertisements That Feature Daily Well-Being Improve Autonomy and Body Image in Overweight Women but Not Men

    PubMed Central

    Segar, Michelle L.; Updegraff, John A.; Zikmund-Fisher, Brian J.; Richardson, Caroline R.

    2012-01-01

    The reasons for exercising that are featured in health communications brand exercise and socialize individuals about why they should be physically active. Discovering which reasons for exercising are associated with high-quality motivation and behavioral regulation is essential to promoting physical activity and weight control that can be sustained over time. This study investigates whether framing physical activity in advertisements featuring distinct types of goals differentially influences body image and behavioral regulations based on self-determination theory among overweight and obese individuals. Using a three-arm randomized trial, overweight and obese women and men (aged 40–60 yr, n = 1690) read one of three ads framing physical activity as a way to achieve (1) better health, (2) weight loss, or (3) daily well-being. Framing effects were estimated in an ANOVA model with pairwise comparisons using the Bonferroni correction. This study showed that there are immediate framing effects on physical activity behavioral regulations and body image from reading a one-page advertisement about physical activity and that gender and BMI moderate these effects. Framing physical activity as a way to enhance daily well-being positively influenced participants' perceptions about the experience of being physically active and enhanced body image among overweight women, but not men. The experiment had less impact among the obese study participants compared to those who were overweight. These findings support a growing body of research suggesting that, compared to weight loss, framing physical activity for daily well-being is a better gain-frame message for overweight women in midlife. PMID:22701782

  15. Dynamic Transmit-Receive Beamforming by Spatial Matched Filtering for Ultrasound Imaging with Plane Wave Transmission.

    PubMed

    Chen, Yuling; Lou, Yang; Yen, Jesse

    2017-07-01

    During conventional ultrasound imaging, the need for multiple transmissions for one image and the time of flight for a desired imaging depth limit the frame rate of the system. Using a single plane wave pulse during each transmission followed by parallel receive processing allows for high frame rate imaging. However, image quality is degraded because of the lack of transmit focusing. Beamforming by spatial matched filtering (SMF) is a promising method which focuses ultrasonic energy using spatial filters constructed from the transmit-receive impulse response of the system. Studies by other researchers have shown that SMF beamforming can provide dynamic transmit-receive focusing throughout the field of view. In this paper, we apply SMF beamforming to plane wave transmissions (PWTs) to achieve both dynamic transmit-receive focusing at all imaging depths and high imaging frame rate (>5000 frames per second). We demonstrated the capability of the combined method (PWT + SMF) of achieving two-way focusing mathematically through analysis based on the narrowband Rayleigh-Sommerfeld diffraction theory. Moreover, the broadband performance of PWT + SMF was quantified in terms of lateral resolution and contrast from both computer simulations and experimental data. Results were compared between SMF beamforming and conventional delay-and-sum (DAS) beamforming in both simulations and experiments. At an imaging depth of 40 mm, simulation results showed a 29% lateral resolution improvement and a 160% contrast improvement with PWT + SMF. These improvements were 17% and 48% for experimental data with noise.
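
    The core SMF idea — correlating the received RF line with the known transmit-receive impulse response to refocus energy at the scatterer — can be shown in a one-dimensional toy example. The pulse shape and scatterer placement are my assumptions for illustration; the paper's filters are two-dimensional and depth-dependent:

```python
import numpy as np

# 1D sketch of spatial matched filtering (SMF): the beamformed output is the
# received RF line correlated with the (assumed known) system impulse response.
n = np.arange(-15, 16)
psf = np.exp(-n**2 / 50.0) * np.cos(2 * np.pi * n / 8.0)  # toy pulse-echo response

reflectivity = np.zeros(200)
reflectivity[120] = 1.0                              # single point scatterer
rf = np.convolve(reflectivity, psf, mode='same')     # unfocused received line
smf = np.correlate(rf, psf, mode='same')             # matched-filter output
```

    The matched-filter output peaks at the true scatterer position, and because the filter is matched to the full transmit-receive response, the focusing holds at every depth rather than only at a fixed transmit focus.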

  16. Automatic trajectory measurement of large numbers of crowded objects

    NASA Astrophysics Data System (ADS)

    Li, Hui; Liu, Ye; Chen, Yan Qiu

    2013-06-01

    Complex motion patterns of natural systems, such as fish schools, bird flocks, and cell groups, have attracted great attention from scientists for years. Trajectory measurement of individuals is vital for quantitative and high-throughput study of their collective behaviors. However, such data are rare, mainly due to the challenges of detecting and tracking large numbers of objects with similar visual features and frequent occlusions. We present an automatic and effective framework to measure trajectories of large numbers of crowded oval-shaped objects, such as fish and cells. We first use a novel dual ellipse locator to detect the coarse position of each individual and then propose a variance minimization active contour method to obtain the optimal segmentation results. For tracking, the cost matrix for assignment between consecutive frames is learned via a random forest classifier using many spatial, texture, and shape features. The optimal trajectories are found for the whole image sequence by solving two linear assignment problems. We evaluate the proposed method on many challenging data sets.
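
    The frame-to-frame linking step can be sketched with SciPy's linear assignment solver. Here the cost is plain Euclidean distance between detections, a stand-in for the random-forest-learned cost described above:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_detections(prev_pts, next_pts):
    """Link detections between consecutive frames by solving a linear
    assignment problem over a pairwise cost matrix."""
    cost = np.linalg.norm(prev_pts[:, None, :] - next_pts[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return sorted(zip(rows.tolist(), cols.tolist()))
```

    Swapping the distance matrix for a learned cost (as the paper does) leaves the assignment machinery unchanged, which is what makes the cost matrix "trainable".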

  17. DHMI: dynamic holographic microscopy interface

    NASA Astrophysics Data System (ADS)

    He, Xuefei; Zheng, Yujie; Lee, Woei Ming

    2016-12-01

    Digital holographic microscopy (DHM) is a powerful in-vitro biological imaging tool. In this paper, we report a fully automated off-axis digital holographic microscopy system complete with a graphical user interface in the Matlab environment. The interface primarily includes Fourier domain processing, phase reconstruction, aberration compensation and autofocusing. A variety of imaging operations such as region of interest selection, de-noising mode (filtering and averaging), low frame rate imaging for immediate reconstruction and a high frame rate imaging routine (~27 fps) are implemented to facilitate ease of use.

  18. Technique for identifying, tracing, or tracking objects in image data

    DOEpatents

    Anderson, Robert J [Albuquerque, NM; Rothganger, Fredrick [Albuquerque, NM

    2012-08-28

    A technique for computer vision uses a polygon contour to trace an object. The technique includes rendering a polygon contour superimposed over a first frame of image data. The polygon contour is iteratively refined to more accurately trace the object within the first frame after each iteration. The refinement includes computing image energies along lengths of contour lines of the polygon contour and adjusting positions of the contour lines based at least in part on the image energies.

  19. Smart CMOS image sensor for lightning detection and imaging.

    PubMed

    Rolando, Sébastien; Goiffon, Vincent; Magnan, Pierre; Corbière, Franck; Molina, Romain; Tulet, Michel; Bréart-de-Boisanger, Michel; Saint-Pé, Olivier; Guiry, Saïprasad; Larnaudie, Franck; Leone, Bruno; Perez-Cuevas, Leticia; Zayer, Igor

    2013-03-01

    We present a CMOS image sensor dedicated to lightning detection and imaging. The detector has been designed to evaluate the potential of an on-chip lightning detection solution based on a smart sensor. This evaluation is performed as part of the predevelopment phase of the lightning detector that will be implemented in the Meteosat Third Generation Imager satellite for the European Space Agency. The lightning detection process is performed by a smart detector combining an in-pixel frame-to-frame difference comparison with an adjustable threshold and on-chip digital processing allowing efficient localization of a faint lightning pulse on the entire large-format array at a frequency of 1 kHz. A CMOS prototype sensor with a 256×256 pixel array and a 60 μm pixel pitch has been fabricated using a 0.35 μm 2P 5M technology and tested to validate the selected detection approach.
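
    The in-pixel detection principle — a frame-to-frame difference compared against an adjustable threshold — has a direct software analogue, sketched below for illustration (the real comparison happens in analog circuitry inside each pixel):

```python
import numpy as np

def detect_flashes(prev_frame, frame, threshold):
    """Flag pixels whose frame-to-frame increase exceeds a threshold,
    a software analogue of the sensor's in-pixel comparison."""
    diff = frame.astype(np.int32) - prev_frame.astype(np.int32)
    ys, xs = np.nonzero(diff > threshold)
    return list(zip(ys.tolist(), xs.tolist()))
```

    Because a lightning pulse is a brief brightness transient against a slowly varying background, a single-frame difference with a tunable threshold suffices to localize it on the array.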

  20. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1984-01-01

    A series of images of a portion of a TM frame of Lake Ontario are presented. The top left frame is the TM Band 6 image; the top right image is a conventional contrast-stretched image. The bottom left image is a Band 5 to Band 3 ratio image. This image is used to generate a primitive land cover classification. Each land cover class (Water, Urban, Forest, Agriculture) is assigned a Band 6 emissivity value. The ratio image is then combined with the Band 6 image and atmospheric propagation data to generate the bottom right image. This image represents a display of data whose digital count can be directly related to estimated surface temperature. The resolution appears higher because the process cell is the size of the TM shortwave pixels.

  1. A novel concept for CT with fixed anodes (FACT): Medical imaging based on the feasibility of thermal load capacity.

    PubMed

    Kellermeier, Markus; Bert, Christoph; Müller, Reinhold G

    2015-07-01

    Focussing primarily on thermal load capacity, we describe the performance of a novel fixed anode CT (FACT) compared with a 100 kW reference CT. Being a fixed system, FACT has no focal spot blurring of the X-ray source during projection. Monte Carlo and finite element methods were used to determine the fluence proportional to thermal capacity. Studies of repeated short-time exposures showed that FACT could operate in pulsed mode for an unlimited period. A virtual model for FACT was constructed to analyse various temporal sequences for the X-ray source ring, representing a circular array of 1160 fixed anodes in the gantry. Assuming similar detector properties at a very small integration time, image quality was investigated using an image reconstruction library. Our model showed that approximately 60 gantry rounds per second, i.e. 60 sequential targetings of the 1160 anodes per second, were required to achieve a performance level equivalent to that of the reference CT (relative performance, RP = 1) at equivalent image quality. The optimal projection duration in each direction was about 10 μs. With a beam pause of 1 μs between projections, 78.4 gantry rounds per second with consecutive source activity were thermally possible at a given thermal focal spot. The settings allowed for a 1.3-fold (RP = 1.3) shorter scan time than conventional CT while maintaining radiation exposure and image quality. Based on the high number of rounds, FACT supports a high image frame rate at low doses, which would be beneficial in a wide range of diagnostic and technical applications. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
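
    The quoted 78.4 gantry rounds per second follows directly from the stated projection timing; a quick arithmetic check:

```python
# Timing arithmetic from the FACT abstract: 1,160 fixed anodes, ~10 us
# projection per anode plus a 1 us beam pause between projections.
anodes = 1160
projection_us = 10.0
pause_us = 1.0
round_time_s = anodes * (projection_us + pause_us) * 1e-6  # one gantry round
rounds_per_second = 1.0 / round_time_s                     # ~78.4, as stated
```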

  2. A Feasibility Study of Smartphone-Based Telesonography for Evaluating Cardiac Dynamic Function and Diagnosing Acute Appendicitis with Control of the Image Quality of the Transmitted Videos.

    PubMed

    Kim, Changsun; Cha, Hyunmin; Kang, Bo Seung; Choi, Hyuk Joong; Lim, Tae Ho; Oh, Jaehoon

    2016-06-01

    Our aim was to prove the feasibility of the remote interpretation of real-time transmitted ultrasound videos of dynamic and static organs using a smartphone with control of the image quality given a limited internet connection speed. For this study, 100 cases of echocardiography videos (dynamic organ)-50 with an ejection fraction (EF) of ≥50 s and 50 with EF <50 %-and 100 cases of suspected pediatric appendicitis (static organ)-50 with signs of acute appendicitis and 50 with no findings of appendicitis-were consecutively selected. Twelve reviewers reviewed the original videos using the liquid crystal display (LCD) monitor of an ultrasound machine and using a smartphone, to which the images were transmitted from the ultrasound machine. The resolution of the transmitted echocardiography videos was reduced by approximately 20 % to increase the frame rate of transmission given the limited internet speed. The differences in diagnostic performance between the two devices when evaluating left ventricular (LV) systolic function by measuring the EF and when evaluating the presence of acute appendicitis were investigated using a five-point Likert scale. The average areas under the receiver operating characteristic curves for each reviewer's interpretations using the LCD monitor and smartphone were respectively 0.968 (0.949-0.986) and 0.963 (0.945-0.982) (P = 0.548) for echocardiography and 0.972 (0.954-0.989) and 0.966 (0.947-0.984) (P = 0.175) for abdominal ultrasonography. We confirmed the feasibility of remotely interpreting ultrasound images using smartphones, specifically for evaluating LV function and diagnosing pediatric acute appendicitis; the images were transferred from the ultrasound machine using image quality-controlled telesonography.

  3. Determining the locations of the various CIRC recording format information blocks (user data blocks, C2 and C1 words and EFM frames) on a recorded compact disc

    NASA Technical Reports Server (NTRS)

    Howe, Dennis G.

    1993-01-01

    Just prior to its being EFM modulated (i.e., converted to eight-to-fourteen channel data by the EFM encoder) and written to a Compact Disc (CD), information that passes through the CIRC Block Encoder is grouped into 33-byte blocks referred to as EFM frames. Twenty-four of the bytes that make up a given EFM frame are user data that was input into the CIRC encoder at various (different) times, 4 of the bytes of this same EFM frame were created by the C2 ECC encoder (each at a different time), and another 4 were created by the C1 ECC encoder (again, each at a different time). The one remaining byte of the given EFM frame, which is known as the EFM frame C&D (for Control & Display) byte, carries information that identifies which portion of the current disc program track the given EFM frame belongs to and also specifies the location of the given EFM frame on the disc (in terms of a time stamp that has a resolution of 1/75th second, or 98 EFM frames). (Note: since the program track and time information is stored as a 98-byte word, a logical group consisting of 98 consecutive EFM frames must be read, and their respective C&D bytes must be concatenated and decoded, before the program track identification and time position information that pertains to the entire block of 98 EFM frames can be obtained.) The C&D byte is put at the start (0th byte) of an EFM frame in real time; its placement completes the construction of the EFM frame - it is assigned just before the EFM frame enters the EFM encoder. Four distinct blocks of data are referred to: 24-byte User Input Data Blocks; 28-byte C2 words; 32-byte C1 words; and 33-byte EFM frames.
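
    The layout described above fixes the CD user data rate, which can be verified arithmetically:

```python
# Data-rate arithmetic implied by the CIRC/EFM layout described above.
USER_BYTES_PER_EFM_FRAME = 24  # user payload per 33-byte EFM frame
EFM_FRAMES_PER_GROUP = 98      # one C&D word spans 98 consecutive EFM frames
GROUPS_PER_SECOND = 75         # C&D time-stamp resolution of 1/75th second

efm_frames_per_second = EFM_FRAMES_PER_GROUP * GROUPS_PER_SECOND       # 7,350
user_bytes_per_second = efm_frames_per_second * USER_BYTES_PER_EFM_FRAME
# 176,400 B/s, which matches 44.1 kHz stereo 16-bit CD audio (44100 * 2 * 2).
```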

  4. Real-time chirp-coded imaging with a programmable ultrasound biomicroscope.

    PubMed

    Bosisio, Mattéo R; Hasquenoph, Jean-Michel; Sandrin, Laurent; Laugier, Pascal; Bridal, S Lori; Yon, Sylvain

    2010-03-01

    Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (

  5. Phantom study and accuracy evaluation of an image-to-world registration approach used with electro-magnetic tracking system for neurosurgery

    NASA Astrophysics Data System (ADS)

    Li, Senhu; Sarment, David

    2015-12-01

    Minimally invasive neurosurgery needs intraoperative imaging updates and a highly efficient image-guidance system to facilitate the procedure. An automatic image-guided system utilizing a compact, mobile intraoperative CT imager is introduced in this work. A tracking frame was designed that can be easily attached to a commercially available skull clamp. With the known geometry of the fiducials and tracking sensor arranged on this rigid frame, fabricated by high-precision 3D printing, an accurate, fully automatic registration method was developed in a simple and low-cost approach; the frame also helps estimate the errors from fiducial localization in image space through image processing, and in patient space through calibration of the tracking frame. Our phantom study shows a fiducial registration error of 0.348 ± 0.028 mm, compared with a manual registration error of 1.976 ± 0.778 mm. The system in this study provides robust and accurate image-to-patient registration without interrupting the routine surgical workflow or requiring user interaction.
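
    Point-based image-to-patient registration of the kind evaluated here is typically computed with a closed-form least-squares rigid fit; the sketch below uses the standard Kabsch/SVD solution and reports the fiducial registration error (FRE). This is a generic illustration, not necessarily the method implemented in the system:

```python
import numpy as np

def register_rigid(fixed, moving):
    """Least-squares rigid (Kabsch) fit mapping moving -> fixed fiducials."""
    fc, mc = fixed.mean(axis=0), moving.mean(axis=0)
    H = (moving - mc).T @ (fixed - fc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = fc - R @ mc
    return R, t

def fre(fixed, moving, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    res = fixed - (moving @ R.T + t)
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

    The reported 0.348 mm figure is exactly this kind of residual, evaluated over the fiducials on the tracking frame.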

  6. Universal ICT Picosecond Camera

    NASA Astrophysics Data System (ADS)

    Lebedev, Vitaly B.; Syrtzev, V. N.; Tolmachyov, A. M.; Feldman, Gregory G.; Chernyshov, N. A.

    1989-06-01

    The paper reports on the design of an ICT camera operating in the mode of linear or three-frame image scan. The camera incorporates two tubes: time-analyzing ICT PIM-107 [1] with cathode S-11, and brightness amplifier PMU-2V (gain about 10^4) for the image shaped by the first tube. The camera is designed on the basis of streak camera AGAT-SF3 [2] with almost the same power sources, but substantially modified pulse electronics. Schematically, the design of tube PIM-107 is depicted in the figure. The tube consists of cermet housing 1 and photocathode 2, made in a separate vacuum volume and introduced into the housing by means of a manipulator. In the immediate vicinity of the photocathode, accelerating electrode 3, made of a fine-structure grid, is located. An electrostatic lens formed by focusing electrode 4 and anode diaphragm 5 produces a beam of electrons with a "remote crossover". The authors have suggested this term for an electron beam whose crossover is 40 to 60 mm away from the anode diaphragm plane, which guarantees high sensitivity of scan plates 6 with respect to multiaperture framing diaphragm 7. Behind every diaphragm aperture is a pair of deflecting plates 8, shielded from compensation plates 10 by diaphragm 9. The electronic image produced by the photocathode is focused on luminescent screen 11. The tube is controlled with the help of two saw-tooth voltages applied in antiphase across plates 6 and 10. Plates 6 serve for sweeping the electron beam over the surface of diaphragm 7; the beam is either passed toward the screen or stopped by the diaphragm walls. In such a manner, three frames are obtained, the number corresponding to that of the diaphragm apertures. Plates 10 serve to compensate (stop) the image streak sweep on the screen. To avoid overlapping of frames, plates 8 receive static potentials responsible for shifting the frames on the screen. 
By changing the potentials applied to plates 8, one can control the spacing between frames and partially or fully overlap them. This sort of control is independent of the frame repetition frequency and frame duration, and only determines frame positioning on the screen. Since diaphragm 7 is located in the area of the crossover and the electron trajectories cross in the crossover, the frame is not decomposed into separate elements during its formation. The image is transferred onto the screen during practically the entire frame duration, increasing the aperture ratio of the tube as compared to that in Ref. [3].

  7. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray projection method), then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
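
    The landmark-position step — finding the 3D point best explained by rays from several camera poses — can be sketched as a single linear least-squares solve. This is a generic ray-bundle triangulation, offered as a simple stand-in for the "optimal ray projection method" named above:

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares 3D point minimizing summed squared distance to a
    bundle of camera rays (origin + t * direction)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, d in zip(origins, directions):
        d = np.asarray(d, float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(c, float)
    return np.linalg.solve(A, b)
```

    Alternating this solve with pose refinement is the iterative loop the abstract describes: better landmarks sharpen the poses, and better poses sharpen the landmarks.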

  8. VIRTUAL FRAME BUFFER INTERFACE

    NASA Technical Reports Server (NTRS)

    Wolfe, T. L.

    1994-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor supplied user interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. The Virtual Frame Buffer Interface program makes all frame buffers appear as a generic frame buffer with a specified set of characteristics, allowing programmers to write code which will run unmodified on all supported hardware. The Virtual Frame Buffer Interface converts generic commands to actual device commands. The virtual frame buffer consists of a definition of capabilities and FORTRAN subroutines that are called by application programs. The virtual frame buffer routines may be treated as subroutines, logical functions, or integer functions by the application program. Routines are included that allocate and manage hardware resources such as frame buffers, monitors, video switches, trackballs, tablets and joysticks; access image memory planes; and perform alphanumeric font or text generation. The subroutines for the various "real" frame buffers are in separate VAX/VMS shared libraries allowing modification, correction or enhancement of the virtual interface without affecting application programs. The Virtual Frame Buffer Interface program was developed in FORTRAN 77 for a DEC VAX 11/780 or a DEC VAX 11/750 under VMS 4.X. It supports ADAGE IK3000, DEANZA IP8500, Low Resolution RAMTEK 9460, and High Resolution RAMTEK 9460 Frame Buffers. It has a central memory requirement of approximately 150K. This program was developed in 1985.
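
    The abstraction the program provides — one generic frame-buffer interface with device-specific back ends behind it — can be illustrated in miniature. The class and method names below are invented for illustration; they are a Python analogue of the FORTRAN shared-library dispatch, not the actual interface:

```python
class VirtualFrameBuffer:
    """Generic frame-buffer interface: applications call these routines and a
    device-specific back end supplies the implementation."""
    def write_pixel(self, x, y, value):
        raise NotImplementedError
    def read_pixel(self, x, y):
        raise NotImplementedError

class InMemoryFrameBuffer(VirtualFrameBuffer):
    """A stand-in 'device' backed by a plain dictionary."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self._plane = {}
    def write_pixel(self, x, y, value):
        self._plane[(x, y)] = value
    def read_pixel(self, x, y):
        return self._plane.get((x, y), 0)
```

    Because applications code only against the generic interface, a new device is supported by adding one back end, which is the portability property the abstract claims.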

  9. The use of message framing to promote sexual risk reduction in young adolescents: a pilot exploratory study

    PubMed Central

    Camenga, Deepa R.; Hieftje, Kimberly D.; Fiellin, Lynn E.; Edelman, E. Jennifer; Rosenthal, Marjorie S.; Duncan, Lindsay R.

    2014-01-01

    Few studies have explored the application of message framing to promote health behaviors in adolescents. In this exploratory study, we examined young adolescents’ selection of gain- versus loss-framed images and messages when designing an HIV-prevention intervention to promote delayed sexual initiation. Twenty-six adolescents (aged 10–14 years) participated in six focus groups and created and discussed posters to persuade their peers to delay the initiation of sexual activity. Focus groups were audio-recorded and transcribed. A five-person multidisciplinary team analyzed the posters and focus group transcripts using thematic analysis. The majority of the posters (18/26, 69%) contained both gain- and loss-framed content. Of the 93/170 (56%) images and messages with framing, similar proportions were gain- (48/93, 52%) and loss-framed (45/93, 48%). Most gain-framed content (23/48, 48%) focused on academic achievement, whereas loss-framed content focused on pregnancy (20/45, 44%) and HIV/AIDS (14/45, 31%). These preliminary data suggest that young adolescents may prefer a combination of gain- and loss-framing in health materials to promote reduction in sexual risk behaviors. PMID:24452229

  10. Wavelet denoising of multiframe optical coherence tomography data

    PubMed Central

    Mayer, Markus A.; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y.; Tornow, Ralf P.

    2012-01-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise. PMID:22435103

  11. Wavelet denoising of multiframe optical coherence tomography data.

    PubMed

    Mayer, Markus A; Borsdorf, Anja; Wagner, Martin; Hornegger, Joachim; Mardin, Christian Y; Tornow, Ralf P

    2012-03-01

    We introduce a novel speckle noise reduction algorithm for OCT images. Contrary to present approaches, the algorithm does not rely on simple averaging of multiple image frames or denoising on the final averaged image. Instead it uses wavelet decompositions of the single frames for a local noise and structure estimation. Based on this analysis, the wavelet detail coefficients are weighted, averaged and reconstructed. At a signal-to-noise gain of about 100% we observe only a minor sharpness decrease, as measured by a full-width-half-maximum reduction of 10.5%. While a similar signal-to-noise gain would require averaging of 29 frames, we achieve this result using only 8 frames as input to the algorithm. A possible application of the proposed algorithm is preprocessing in retinal structure segmentation algorithms, to allow a better differentiation between real tissue information and unwanted speckle noise.
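
    The pipeline described above — per-frame wavelet decomposition, cross-frame weighting of detail coefficients, averaging, and reconstruction — can be sketched with a one-level Haar transform. The Wiener-like weight m²/(m² + v) is my stand-in for the paper's local noise and structure estimation, not the authors' weighting rule:

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar transform (averaging convention, exactly invertible)."""
    lo = (img[:, 0::2] + img[:, 1::2]) / 2.0
    hi = (img[:, 0::2] - img[:, 1::2]) / 2.0
    ll, lh = (lo[0::2] + lo[1::2]) / 2.0, (lo[0::2] - lo[1::2]) / 2.0
    hl, hh = (hi[0::2] + hi[1::2]) / 2.0, (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    hi = np.empty_like(lo)
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    img = np.empty((lo.shape[0], lo.shape[1] * 2))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

def denoise_frames(frames):
    """Average the approximation band; shrink each detail band by the
    cross-frame significance m^2 / (m^2 + v) before averaging."""
    coeffs = [haar2(f) for f in frames]
    ll = np.mean([c[0] for c in coeffs], axis=0)
    details = []
    for band in (1, 2, 3):
        d = np.stack([c[band] for c in coeffs])
        m, v = d.mean(axis=0), d.var(axis=0)
        details.append(m * m**2 / (m**2 + v + 1e-12))
    return ihaar2(ll, *details)
```

    Detail coefficients that are consistent across frames (structure) pass nearly unchanged, while coefficients that fluctuate from frame to frame (speckle) are suppressed, which is why fewer input frames suffice compared with plain averaging.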

  12. Backside-illuminated 6.6-μm pixel video-rate CCDs for scientific imaging applications

    NASA Astrophysics Data System (ADS)

    Tower, John R.; Levine, Peter A.; Hsueh, Fu-Lung; Patel, Vipulkumar; Swain, Pradyumna K.; Meray, Grazyna M.; Andrews, James T.; Dawson, Robin M.; Sudol, Thomas M.; Andreas, Robert

    2000-05-01

    A family of backside-illuminated CCD imagers with 6.6 μm pixels has been developed. The imagers feature full 12-bit (> 4,000:1) dynamic range with a measured noise floor of < 10 e RMS at 5 MHz clock rates, and a measured full-well capacity of > 50,000 e. The modulation transfer function performance is excellent, with measured MTF at Nyquist of 46% for 500 nm illumination. Three device types have been developed. The first device is a 1 K X 1 K full frame device with a single output port, which can be run as a 1 K X 512 frame transfer device. The second device is a 512 X 512 frame transfer device with a single output port. The third device is a 512 X 512 split frame transfer device with four output ports. All feature the high quantum efficiency afforded by backside illumination.

  13. Video-rate scanning two-photon excitation fluorescence microscopy and ratio imaging with cameleons.

    PubMed Central

    Fan, G Y; Fujisaki, H; Miyawaki, A; Tsay, R K; Tsien, R Y; Ellisman, M H

    1999-01-01

    A video-rate (30 frames/s) scanning two-photon excitation microscope has been successfully tested. The microscope, based on a Nikon RCM 8000, incorporates a femtosecond pulsed laser with wavelength tunable from 690 to 1050 nm, prechirper optics for laser pulse-width compression, a resonant galvanometer for video-rate point scanning, and a pair of nonconfocal detectors for fast emission ratioing. An increase in fluorescent emission of 1.75-fold is consistently obtained with the use of the prechirper optics. The nonconfocal detectors provide another 2.25-fold increase in detection efficiency. Ratio imaging and optical sectioning can therefore be performed more efficiently without confocal optics. Faster frame rates, at 60, 120, and 240 frames/s, can be achieved with proportionally reduced scan lines per frame. Useful two-photon images can be acquired at video rate with a laser power as low as 2.7 mW at the specimen with the genetically modified green fluorescent proteins. Preliminary results obtained using this system confirm that the yellow "cameleons" exhibit optical properties similar to those under one-photon excitation conditions. Dynamic two-photon images of cardiac myocytes and ratio images of yellow cameleon-2.1, -3.1, and -3.1nu are also presented. PMID:10233058

  14. Portable lensless wide-field microscopy imaging platform based on digital inline holography and multi-frame pixel super-resolution

    PubMed Central

    Sobieranski, Antonio C; Inci, Fatih; Tekin, H Cumhur; Yuksekkaya, Mehmet; Comunello, Eros; Cobra, Daniel; von Wangenheim, Aldo; Demirci, Utkan

    2017-01-01

    In this paper, an irregular displacement-based lensless wide-field microscopy imaging platform is presented, combining digital in-line holography and computational pixel super-resolution using multi-frame processing. The samples are illuminated by a nearly coherent illumination system, and the hologram shadows are projected onto a complementary metal-oxide semiconductor-based imaging sensor. To increase the resolution, a multi-frame pixel super-resolution approach is employed to produce a single holographic image from multiple frame observations of the scene with small planar displacements. Displacements are resolved by a hybrid approach: (i) alignment of the low-resolution (LR) images by a fast feature-based registration method, and (ii) fine adjustment of the sub-pixel information using a continuous optimization approach designed to find the globally optimal solution. A numerical phase-retrieval method is applied to decode the signal and reconstruct the morphological details of the analyzed sample. The presented approach was evaluated with various biological samples including sperm and platelets, whose dimensions are on the order of a few microns. The obtained results demonstrate a spatial resolution of 1.55 µm over a field-of-view of ≈30 mm2. PMID:29657866
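
    As a sketch of the coarse alignment step, integer-pixel displacements between frames can be estimated by phase correlation (the platform itself uses a feature-based registration followed by continuous sub-pixel optimization; this FFT-based variant is a simplified stand-in):

```python
import numpy as np

def phase_correlate(ref, moved):
    # integer-pixel shift estimate via phase correlation; sub-pixel refinement
    # (as in the paper's continuous optimization stage) is not shown
    X = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(X / (np.abs(X) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # displacements past half the frame wrap around to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

    Once every low-resolution frame is aligned this way, the sub-pixel offsets are what allow the frames to be interleaved onto a finer pixel grid.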

  15. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  16. Immobilization precision of a modified GTC frame.

    PubMed

    Winey, Brian; Daartz, Juliane; Dankers, Frank; Bussière, Marc

    2012-05-10

    The purpose of this study was to evaluate and quantify the interfraction reproducibility and intrafraction immobilization precision of a modified GTC frame. The error of the patient alignment and imaging systems was measured using a cranial skull phantom with simulated, predetermined shifts. The kV setup images were acquired with a room-mounted set of kV sources and panels. Translations and rotations calculated by the computer alignment software, relying upon three implanted fiducials, were compared to the known shifts, and the accuracy of the imaging and positioning systems was calculated. Orthogonal kV setup images for 45 proton SRT patients and 1002 fractions (average 22.3 fractions/patient) were analyzed for interfraction and intrafraction immobilization precision using a modified GTC frame. The modified frame employs a radiotransparent carbon cup and molded pillow to allow for more treatment angles from posterior directions for cranial lesions. Patients and the phantom were aligned with three 1.5 mm stainless steel fiducials implanted into the skull. The accuracy and variance of the patient positioning and imaging systems were measured to be 0.10 ± 0.06 mm, with a maximum rotational uncertainty of ±0.07°. 957 pairs of interfraction image sets and 974 intrafraction image sets were analyzed, and 3D translations and rotations were recorded. The 3D vector interfraction setup reproducibility was 0.13 mm ± 1.8 mm for translations, with a largest rotational uncertainty of ±1.07°. The intrafraction immobilization efficacy was 0.19 mm ± 0.66 mm for translations, with a largest rotational uncertainty of ±0.50°. The modified GTC frame provides reproducible setup and effective intrafraction immobilization, while allowing for the complete range of entrance angles from the posterior direction.

  17. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
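
    Per pixel, the encoding amounts to a small linear inverse problem: each stored frame is a shutter-code-weighted sum of the time-resolved signal, and reconstruction inverts that code. A toy sketch (the sensor's actual codes and reconstruction algorithm, typically regularized or sparsity-based, are not reproduced here; plain minimum-norm least squares is used as a stand-in):

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 32, 15                                   # time-resolved frames sought, compressed frames stored
S = rng.integers(0, 2, (M, T)).astype(float)    # per-pixel random binary shutter code

x_true = np.zeros(T)
x_true[10:14] = 1.0                             # a brief emission transient at this pixel
y = S @ x_true                                  # the 15 values the sensor actually records

# minimum-norm least-squares recovery; practical reconstructions add
# sparsity or smoothness priors to sharpen the estimate
x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
```

    This matches the paper's numbers in spirit: 15 compressed frames are recorded, and a longer series of time-resolved frames (32 at 200 Mfps in the experiment) is reconstructed from them.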

  18. Real-time intravascular photoacoustic-ultrasound imaging of lipid-laden plaque at speed of video-rate level

    NASA Astrophysics Data System (ADS)

    Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin

    2017-03-01

    Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video-rate speeds. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for a cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.

  19. Abnormal Image Detection in Endoscopy Videos Using a Filter Bank and Local Binary Patterns

    PubMed Central

    Nawarathna, Ruwan; Oh, JungHwan; Muthukudage, Jayantha; Tavanapong, Wallapak; Wong, Johnny; de Groen, Piet C.; Tang, Shou Jiang

    2014-01-01

    Finding mucosal abnormalities (e.g., erythema, blood, ulcer, erosion, and polyp) is one of the most essential tasks during endoscopy video review. Since these abnormalities typically appear in a small number of frames (around 5% of the total frame number), automated detection of frames with an abnormality can save physician’s time significantly. In this paper, we propose a new multi-texture analysis method that effectively discerns images showing mucosal abnormalities from the ones without any abnormality since most abnormalities in endoscopy images have textures that are clearly distinguishable from normal textures using an advanced image texture analysis method. The method uses a “texton histogram” of an image block as features. The histogram captures the distribution of different “textons” representing various textures in an endoscopy image. The textons are representative response vectors of an application of a combination of Leung and Malik (LM) filter bank (i.e., a set of image filters) and a set of Local Binary Patterns on the image. Our experimental results indicate that the proposed method achieves 92% recall and 91.8% specificity on wireless capsule endoscopy (WCE) images and 91% recall and 90.8% specificity on colonoscopy images. PMID:25132723
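
    The LBP half of the texton feature can be illustrated in a few lines (the LM filter-bank responses and the learned texton dictionary are omitted; this minimal 8-neighbour LBP histogram only conveys the flavour of the block-level feature):

```python
import numpy as np

def lbp_histogram(img):
    # 8-neighbour local binary pattern codes over the interior pixels,
    # followed by a normalised 256-bin histogram
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint16)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy: img.shape[0] - 1 + dy, 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint16) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

    In the full method, such per-block histograms (over textons rather than raw LBP codes) are what the classifier compares to separate abnormal from normal mucosal texture.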

  20. Processing Infrared Images For Fire Management Applications

    NASA Astrophysics Data System (ADS)

    Warren, John R.; Pratt, William K.

    1981-12-01

    The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high-resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8-bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid-state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.
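
    At the stated link rate, the downlink time per frame follows directly (ignoring any protocol overhead):

```python
frame_bits = 1024 * 1024 * 8        # one 1024 x 1024 x 8-bit frame
link_rate = 1.544e6                 # bits per second over the 14.7 GHz carrier
seconds_per_frame = frame_bits / link_rate   # ~5.43 s per frame
```

    This is why frames are stored on the ground and replayed from solid-state memory at 30 frames per second, rather than displayed straight off the link.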

  1. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    A high-speed imaging facility is important and necessary for a time-resolved measurement system with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needed to be specially developed for the ultrahigh-speed research field. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light-beam splitting in the image space behind a lens of long focal length, mainly consists of a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images, each 1024×1024 in size, can be captured simultaneously by the developed camera. Besides, this camera system possesses good linearity, uniform spatial response, and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.

  2. Large Binocular Telescope Observations of Europa Occulting Io's Volcanoes at 4.8 μm

    NASA Astrophysics Data System (ADS)

    Skrutskie, Michael F.; Conrad, Albert; Resnick, Aaron; Leisenring, Jarron; Hinz, Phil; de Pater, Imke; de Kleer, Katherine; Spencer, John; Skemer, Andrew; Woodward, Charles E.; Davies, Ashley Gerard; Defrére, Denis

    2015-11-01

    On 8 March 2015 Europa passed nearly centrally in front of Io. The Large Binocular Telescope observed this event in dual-aperture AO-corrected Fizeau interferometric imaging mode using the mid-infrared imager LMIRcam operating behind the Large Binocular Telescope Interferometer (LBTI) at a broadband wavelength of 4.8 μm (M-band). Occultation light curves generated from frames recorded every 123 milliseconds show that both Loki and Pele/Pillan were well resolved. Europa's center shifted by 2 kilometers relative to Io from frame-to-frame. The derived light curve for Loki is consistent with the double-lobed structure reported by Conrad et al. (2015) using direct interferometric imaging with LBTI.

  3. Colour-reproduction algorithm for transmitting variable video frames and its application to capsule endoscopy

    PubMed Central

    Khan, Tareq; Shrestha, Ravi; Imtiaz, Md. Shamin

    2015-01-01

    Presented is a new power-efficient colour generation algorithm for wireless capsule endoscopy (WCE) application. In WCE, transmitting colour image data from the human intestine through radio frequency (RF) consumes a huge amount of power. The conventional way is to transmit all R, G and B components of all frames. Using the proposed dictionary-based colour generation scheme, instead of sending all R, G and B frames, first one colour frame is sent followed by a series of grey-scale frames. At the receiver end, the colour information is extracted from the colour frame and then added to colourise the grey-scale frames. After a certain number of grey-scale frames, another colour frame is sent followed by the same number of grey-scale frames. This process is repeated until the end of the video sequence to maintain the colour similarity. As a result, over 50% of RF transmission power can be saved using the proposed scheme, which will eventually lead to a battery life extension of the capsule by 4–7 h. The reproduced colour images have been evaluated both statistically and subjectively by professional gastroenterologists. The algorithm is finally implemented using a WCE prototype and the performance is validated using an ex-vivo trial. PMID:26609405
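
    A minimal sketch of the colourisation step, assuming a YCbCr-style luma/chroma separation (the paper's dictionary-based scheme and exact colour space are not reproduced here): chroma is taken once from the colour key frame and reapplied to each following grey-scale frame, which supplies the luma.

```python
import numpy as np

# BT.601/JPEG full-range RGB <-> YCbCr constants (illustrative choice)
def rgb_to_ycbcr(rgb):
    y  =  0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = -0.168736 * rgb[..., 0] - 0.331264 * rgb[..., 1] + 0.5 * rgb[..., 2]
    cr =  0.5 * rgb[..., 0] - 0.418688 * rgb[..., 1] - 0.081312 * rgb[..., 2]
    return y, cb, cr

def colourise(grey, cb, cr):
    # reuse the chroma of the last colour key frame; the incoming
    # grey-scale frame provides the luma channel
    r = grey + 1.402 * cr
    g = grey - 0.344136 * cb - 0.714136 * cr
    b = grey + 1.772 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

    Because only one channel per frame crosses the RF link between key frames, the transmitted data volume drops by roughly the factor the paper reports.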

  4. Corrected High-Frame Rate Anchored Ultrasound with Software Alignment

    ERIC Educational Resources Information Center

    Miller, Amanda L.; Finch, Kenneth B.

    2011-01-01

    Purpose: To improve lingual ultrasound imaging with the Corrected High Frame Rate Anchored Ultrasound with Software Alignment (CHAUSA; Miller, 2008) method. Method: A production study of the IsiXhosa alveolar click is presented. Articulatory-to-acoustic alignment is demonstrated using a Tri-Modal 3-ms pulse generator. Images from 2 simultaneous…

  5. "Mathematicians Would Say It This Way": An Investigation of Teachers' Framings of Mathematicians

    ERIC Educational Resources Information Center

    Cirillo, Michelle; Herbel-Eisenmann, Beth

    2011-01-01

    Although popular media often provides negative images of mathematicians, we contend that mathematics classroom practices can also contribute to students' images of mathematicians. In this study, we examined eight mathematics teachers' framings of mathematicians in their classrooms. Here, we analyze classroom observations to explore some of the…

  6. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

    This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU), an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e. featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
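
    The velocity estimate combines exactly the three quantities listed: per-frame pixel translation, height above the seabed, and sensor geometry. With illustrative numbers (all values hypothetical):

```python
# all numbers below are hypothetical, for illustration only
altitude_m = 3.0                 # (ii) height of the vehicle above the seabed
focal_mm, pitch_um = 8.0, 6.0    # (iii) sensor geometry: focal length, pixel pitch
fps = 15.0                       # camera frame rate
shift_px = 4.2                   # (i) per-frame translation from the transform matrix

gsd_m = altitude_m * (pitch_um * 1e-6) / (focal_mm * 1e-3)  # metres per pixel on the seabed
velocity_mps = shift_px * gsd_m * fps                       # vehicle speed over ground
```
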

  7. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures as feature descriptions generated frame-by-frame for each gesture of the alphabet. The recognition algorithm takes a video sequence (a sequence of frames) as input for labeling, matches each frame with a gesture from the database, or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without inter-frame information. Then, a run of consecutive frames labeled with the same gesture is grouped into a single static gesture. We propose a method for combined segmentation of a frame using the depth map and the RGB image. The primary segmentation is based on the depth map; it gives positional information and a rough border of the hand. The border is then refined using the color image, and the shape of the hand is analyzed. A continuous-skeleton method is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification features for a gesture describe the positions of the fingers relative to the wrist. Experiments with the developed algorithm were carried out on the American Sign Language alphabet. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space, and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture dataset consisting of 2700 frames.

  8. Features of Jupiter's Great Red Spot

    NASA Technical Reports Server (NTRS)

    1996-01-01

    This montage features activity in the turbulent region of Jupiter's Great Red Spot (GRS). Four sets of images of the GRS were taken through various filters of the Galileo imaging system over an 11.5 hour period on 26 June, 1996 Universal Time. The sequence was designed to reveal cloud motions. The top and bottom frames on the left are of the same area, northeast of the GRS, viewed through the methane (732 nm) filter but about 70 minutes apart. The top left and top middle frames are of the same area and at the same time, but the top middle frame is taken at a wavelength (886 nm) where methane absorbs more strongly. (Only high clouds can reflect sunlight in this wavelength.) Brightness differences are caused by the different depths of features in the two images. The bottom middle frame shows reflected light at a wavelength (757 nm) where there are essentially no absorbers in the Jovian atmosphere. The white spot is to the northwest of the GRS; its appearance at different wavelengths suggests that the brightest elements are 30 km higher than the surrounding clouds. The top and bottom frames on the right, taken nine hours apart and in the violet (415 nm) filter, show the time evolution of an atmospheric wave northeast of the GRS. Visible crests in the top right frame are much less apparent 9 hours later in the bottom right frame. The misalignment of the north-south wave crests with the observed northwestward local wind may indicate a shift in wind direction (wind shear) with height. The areas within the dark lines are 'truth windows' or sections of the images which were transmitted to Earth using less data compression. Each of the six squares covers 4.8 degrees of latitude and longitude (about 6000 square kilometers). North is at the top of each frame.

    Launched in October 1989, Galileo entered orbit around Jupiter on December 7, 1995. The spacecraft's mission is to conduct detailed studies of the giant planet, its largest moons and the Jovian magnetic environment. The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  9. Single frequency thermal wave radar: A next-generation dynamic thermography for quantitative non-destructive imaging over wide modulation frequency ranges.

    PubMed

    Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas

    2018-04-01

    Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWR, the starting and ending frequencies of a linear frequency-modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera frame rates.

  11. Cosmic ray anisotropies at high energies

    NASA Technical Reports Server (NTRS)

    Martinic, N. J.; Alarcon, A.; Teran, F.

    1986-01-01

    The directional anisotropies of the energetic cosmic-ray gas due to the relative motion between the observer's frame and the frame in which the relativistic gas can be assumed isotropic are analyzed. The radiation-flux formula in the former frame must follow from the Lorentz invariance of dp/E, where p and E are the 4-vector momentum-energy components and dp is the 3-volume element in momentum space. For a rotating Earth, the anisotropic flux in such a case shows an amplitude smaller than the experimental measurements from, say, EAS arrays for primary particle energies larger than 1.E(14) eV. Further, it is shown that two consecutive Lorentz transformations among three inertial frames violate the invariance of dp/E between the first and third systems of reference, due to the Wigner rotation. This result is discussed in the context of the experimentally measured anisotropic fluxes and their current interpretation.

  12. High-resolution wavefront reconstruction using the frozen flow hypothesis

    NASA Astrophysics Data System (ADS)

    Liu, Xuewen; Liang, Yonghui; Liu, Jin; Xu, Jieping

    2017-10-01

    This paper describes an approach to reconstructing wavefronts on a finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the FFH, slope data from the WFS can be connected to a finer, composite slope grid through translation and downsampling, with the elements of the transformation matrices determined by wind information. Frames of slopes are then combined, and slopes on the finer grid are reconstructed by solving a sparse, large-scale, ill-posed least-squares problem. Using the reconstructed finer slope data and adopting the Fried geometry of the WFS, high-resolution wavefronts are then reconstructed. The results show that this method is robust even with detector noise and inaccurate wind information, and that under bad seeing conditions, high-frequency information in the wavefronts can be recovered more accurately than when correlations between WFS frames are ignored.
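
    A 1-D toy version of the FFH stacking step (real systems work with 2-D Fried-geometry slopes and wind-dependent, generally sub-pixel shifts; the integer shifts and exact sampling below are simplifying assumptions): each WFS frame samples a translated, down-sampled view of a finer slope profile, and stacking frames makes the inverse problem solvable.

```python
import numpy as np

fine_n, down = 16, 2
shifts = [0, 1]        # assumed wind-driven shift, in fine-grid pixels, per frame

def sample_matrix(shift):
    # maps the fine slope grid to one coarse WFS frame: translate, then down-sample
    A = np.zeros((fine_n // down, fine_n))
    for i in range(fine_n // down):
        A[i, (i * down + shift) % fine_n] = 1.0
    return A

A = np.vstack([sample_matrix(s) for s in shifts])   # stacked over consecutive frames
x_true = np.sin(2 * np.pi * np.arange(fine_n) / fine_n)
y = A @ x_true                                      # the measured coarse slopes
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)       # fine-grid slope estimate
```

    With these shifts the two frames interleave perfectly and the fine grid is recovered exactly; in practice the stacked system is sparse, large, and ill-posed, hence the regularized least-squares solvers the paper uses.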

  13. Automated movement correction for dynamic PET/CT images: evaluation with phantom and patient data.

    PubMed

    Ye, Hu; Wong, Koon-Pong; Wardak, Mirwais; Dahlbom, Magnus; Kepe, Vladimir; Barrio, Jorge R; Nelson, Linda D; Small, Gary W; Huang, Sung-Cheng

    2014-01-01

    Head movement during dynamic brain PET/CT imaging results in mismatch between the CT and dynamic PET images. It can cause artifacts in CT-based attenuation-corrected PET images, thus affecting both the qualitative and quantitative aspects of the dynamic PET images and the derived parametric images. In this study, we developed an automated retrospective image-based movement correction (MC) procedure. The MC method first registered the CT image to each dynamic PET frame, then re-reconstructed the PET frames with CT-based attenuation correction, and finally re-aligned all the PET frames to the same position. We evaluated the MC method's performance on the Hoffman phantom and on dynamic FDDNP and FDG PET/CT images of patients with neurodegenerative disease or with poor compliance. Dynamic FDDNP PET/CT images (65 min) were obtained from 12 patients and dynamic FDG PET/CT images (60 min) were obtained from 6 patients. Logan analysis with cerebellum as the reference region was used to generate regional distribution volume ratio (DVR) images for the FDDNP scans before and after MC. For the FDG studies, the image-derived input function was used to generate parametric images of the FDG uptake constant (Ki) before and after MC. The phantom study showed high registration accuracy between PET and CT and improved PET images after MC. In the patient study, head movement was observed in all subjects, especially in late PET frames, with an average displacement of 6.92 mm. The z-direction translation (average maximum = 5.32 mm) and x-axis rotation (average maximum = 5.19 degrees) occurred most frequently. Image artifacts were significantly diminished after MC. There were significant differences (P<0.05) in the FDDNP DVR and FDG Ki values in the parietal and temporal regions after MC. In conclusion, MC applied to dynamic brain FDDNP and FDG PET/CT scans can improve the qualitative and quantitative aspects of images of both tracers.

  15. A high-speed two-frame, 1-2 ns gated X-ray CMOS imager used as a hohlraum diagnostic on the National Ignition Facility (invited).

    PubMed

    Chen, Hui; Palmer, N; Dayton, M; Carpenter, A; Schneider, M B; Bell, P M; Bradley, D K; Claus, L D; Fang, L; Hilsabeck, T; Hohenberger, M; Jones, O S; Kilkenny, J D; Kimmel, M W; Robertson, G; Rochau, G; Sanchez, M O; Stahoviak, J W; Trotter, D C; Porter, J L

    2016-11-01

    A novel x-ray imager, which takes time-resolved gated images along a single line-of-sight, has been successfully implemented at the National Ignition Facility (NIF). This Gated Laser Entrance Hole diagnostic, G-LEH, incorporates a high-speed multi-frame CMOS x-ray imager developed by Sandia National Laboratories to upgrade the existing Static X-ray Imager diagnostic at NIF. The new diagnostic is capable of capturing two laser-entrance-hole images per shot on its 1024 × 448 pixels photo-detector array, with integration times as short as 1.6 ns per frame. Since its implementation on NIF, the G-LEH diagnostic has successfully acquired images from various experimental campaigns, providing critical new information for understanding the hohlraum performance in inertial confinement fusion (ICF) experiments, such as the size of the laser entrance hole vs. time, the growth of the laser-heated gold plasma bubble, the change in brightness of inner beam spots due to time-varying cross beam energy transfer, and plasma instability growth near the hohlraum wall.

  16. Application of automatic threshold in dynamic target recognition with low contrast

    NASA Astrophysics Data System (ADS)

    Miao, Hua; Guo, Xiaoming; Chen, Yu

    2014-11-01

    A hybrid photoelectric joint transform correlator combines optical and electronic devices to achieve automatic, high-precision real-time recognition. When recognizing low-contrast targets with such a correlator, differences in attitude, brightness, and grayscale between target and template mean that only four to five frames of a dynamic target sequence can be recognized without any processing. A CCD camera captures the dynamic target images at 25 frames per second. Automatic thresholding offers many advantages, such as fast processing, effective suppression of noise interference, enhancement of the diffraction energy of useful information, and good preservation of the outlines of target and template, so it plays a very important role in target recognition by optical correlation. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because the outline information is broken to some extent; in most cases the optimal threshold has had to be obtained by manual intervention. To address the characteristics of dynamic targets, an improved automatic threshold procedure was implemented by multiplying the OTSU threshold of the target and template by a scale coefficient of the processed image and combining the result with mathematical morphology. The optimal threshold can then be obtained automatically for dynamic low-contrast target images, and the recognition rate of dynamic targets is improved through a reduced background noise effect and increased correlation information. A series of images of a tank moving at about 70 km/h was adopted as the target sequence. Without any processing, the 1st frame of this series correlates only with the 3rd frame; with the OTSU threshold, recognition extends to the 80th frame; with automatic threshold processing of the joint images, it extends to the 89th frame. Experimental results show that the improved automatic threshold processing has particular application value for the recognition of dynamic targets with low contrast.
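
    The core of the improved automatic threshold is an Otsu threshold scaled by a coefficient before binarization. A minimal NumPy sketch of that idea (the default scale value is hypothetical, and the paper's mathematical-morphology post-processing is omitted):

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu threshold for an 8-bit grayscale image (pure NumPy):
    pick the level that maximizes between-class variance."""
    p = np.bincount(img.ravel(), minlength=256).astype(float)
    p /= p.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

def scaled_binarize(img, scale=0.8):
    """Binarize with the Otsu threshold multiplied by a scale coefficient,
    as in the improved automatic threshold described above."""
    return (img > otsu_threshold(img) * scale).astype(np.uint8)
```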

  17. Notes for Brazil sampling frame evaluation trip

    NASA Technical Reports Server (NTRS)

    Horvath, R. (Principal Investigator); Hicks, D. R. (Compiler)

    1981-01-01

    Field notes describing a trip conducted in Brazil are presented. This trip was conducted for the purpose of evaluating a sample frame developed using LANDSAT full frame images by the USDA Economic and Statistics Service for the eventual purpose of cropland production estimation with LANDSAT by the Foreign Commodity Production Forecasting Project of the AgRISTARS program. Six areas were analyzed on the basis of land use, crop land in corn and soybean, field size and soil type. The analysis indicated generally successful use of LANDSAT images for purposes of remote large area land use stratification.

  18. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    NASA Astrophysics Data System (ADS)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  19. Classical and neural methods of image sequence interpolation

    NASA Astrophysics Data System (ADS)

    Skoneczny, Slawomir; Szostakowski, Jaroslaw

    2001-08-01

    An image interpolation problem is encountered in many areas. Examples include interpolation in the coding/decoding process for transmission purposes, reconstruction of a full frame from two interlaced sub-frames in conventional TV or HDTV, and reconstruction of missing frames in old, damaged cinematic sequences. This paper presents an overview of interframe interpolation methods. Both direct and motion-compensated interpolation techniques are illustrated with examples. The methods considered are either classical or based on neural networks, the choice depending on the demands of the specific interpolation problem.
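
    The two families of methods surveyed can be contrasted in a few lines. A toy NumPy sketch of direct versus motion-compensated interframe interpolation, assuming a single known global integer motion vector rather than the per-block motion a real interpolator would estimate:

```python
import numpy as np

def interpolate_frame(prev, nxt, alpha=0.5):
    """Direct temporal interpolation: a weighted average of the two
    neighboring frames, with no motion compensation."""
    return (1 - alpha) * prev + alpha * nxt

def motion_compensated_interpolate(prev, nxt, flow, alpha=0.5):
    """Toy motion-compensated interpolation for a single global integer
    translation `flow` (dy, dx) from prev to nxt."""
    dy_p, dx_p = int(round(flow[0] * alpha)), int(round(flow[1] * alpha))
    dy_n, dx_n = int(round(flow[0] * (alpha - 1))), int(round(flow[1] * (alpha - 1)))
    warped_prev = np.roll(prev, (dy_p, dx_p), axis=(0, 1))  # warp prev forward
    warped_next = np.roll(nxt, (dy_n, dx_n), axis=(0, 1))   # warp next backward
    return (1 - alpha) * warped_prev + alpha * warped_next
```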

  20. 40 MHz high-frequency ultrafast ultrasound imaging.

    PubMed

    Huang, Chih-Chung; Chen, Pei-Yu; Peng, Po-Hsun; Lee, Po-Yang

    2017-06-01

    Ultrafast high-frame-rate ultrasound imaging based on coherent plane-wave compounding has been developed for many biomedical applications. Most coherent plane-wave compounding systems operate at 3-15 MHz, and the image resolution in this frequency range is not sufficient for visualizing tissue microstructure. The purpose of this study was therefore to implement high-frequency ultrafast ultrasound imaging operating at 40 MHz. Plane-wave compounding imaging and conventional multifocus B-mode imaging were performed using the Field II toolbox for MATLAB in a simulation study. In experiments, plane-wave compounding images were obtained from a 256-channel ultrasound research platform with a 40 MHz array transducer. All images were produced from point-spread functions and cyst phantoms. The in vivo experiment was performed on a zebrafish. Since high-frequency ultrasound exhibits lower penetration, chirp excitation was applied to increase the imaging depth in simulation. The simulation results showed that a lateral resolution of up to 66.93 μm and a contrast of up to 56.41 dB were achieved when compounding 75 plane-wave angles. The experimental results showed that a lateral resolution of up to 74.83 μm and a contrast of up to 44.62 dB were achieved with the same 75-angle compounding. The dead zone and compounding noise extend to about 1.2 mm and 2.0 mm in depth, respectively, for experimental compounding imaging. The structure of the zebrafish heart was observed clearly using plane-wave compounding imaging. Using fewer than 23 angles for compounding allowed a frame rate higher than 1000 frames per second, while the compounded images exhibit a similar lateral resolution of about 72 μm once more than 10 angles are used. This study demonstrates the highest operating frequency reported for ultrafast high-frame-rate ultrasound imaging. © 2017 American Association of Physicists in Medicine.
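
    Coherent compounding itself is simply a coherent average of the per-angle beamformed frames, which suppresses noise that is uncorrelated between angles while preserving the echo signal. A toy NumPy sketch with synthetic complex frames (the sizes, angle count, and noise level are hypothetical):

```python
import numpy as np

def compound(frames):
    """Coherent plane-wave compounding: complex beamformed frames from the
    individual steering angles are averaged coherently, so noise that is
    uncorrelated between angles averages down while the echo signal is kept."""
    return np.mean(frames, axis=0)

# toy data: the same echo pattern in every angle frame, independent noise per angle
rng = np.random.default_rng(0)
signal = np.ones((64, 64), dtype=complex)
frames = [signal + 0.5 * (rng.standard_normal((64, 64))
                          + 1j * rng.standard_normal((64, 64)))
          for _ in range(25)]
img = compound(frames)
```

With 25 angles the residual noise standard deviation drops by roughly a factor of 5 (square root of the angle count), which is the resolution/contrast-versus-frame-rate trade-off the abstract describes.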

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M.; Wolinsky, M.; Sigeti, D.E.

    Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor stimulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly, and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported.
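
    The split side of ROAM's queue-driven refinement can be sketched with a single priority queue: repeatedly split the bintree triangle with the worst error until the triangle budget is met. This is only a sketch; real ROAM adds the merge queue, frame-to-frame queue reuse, and the guaranteed error-bound machinery. The error function here is caller-supplied and hypothetical:

```python
import heapq

def split(tri):
    """Bintree split: halve a triangle (apex, left, right) at the midpoint
    of its base edge left-right."""
    apex, left, right = tri
    mid = tuple((l + r) / 2 for l, r in zip(left, right))
    return [(mid, apex, left), (mid, right, apex)]

def refine(mesh, error, budget):
    """Greedy split-queue refinement: always split the triangle with the
    largest view-dependent error until the triangle budget is reached."""
    heap = [(-error(t), i, t) for i, t in enumerate(mesh)]
    heapq.heapify(heap)
    counter = len(mesh)  # tie-breaker so triangles are never compared directly
    while len(heap) < budget:
        _, _, tri = heapq.heappop(heap)  # worst triangle
        for child in split(tri):
            heapq.heappush(heap, (-error(child), counter, child))
            counter += 1
    return [t for _, _, t in heap]
```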

  2. High-performance floating-point image computing workstation for medical applications

    NASA Astrophysics Data System (ADS)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), in multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel selectable region of interest display. A 1280 x 1024 pixel screen with 1: 1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. 
Up to three boards may be added to the NeXT for multiple display capability (e.g., three 1280 x 1024 monitors, each with a 16-Mbyte frame buffer). Each add-in board provides an expansion connector to which an optional image computing coprocessor board may be added. Each coprocessor board supports up to four processors for a peak performance of 160 MFLOPS. The coprocessors can execute programs from external high-speed microcode memory as well as built-in internal microcode routines. The internal microcode routines provide support for 2-D and 3-D graphics operations, matrix and vector arithmetic, and image processing in integer, IEEE single-precision floating point, or IEEE double-precision floating point. In addition to providing a library of C functions which links the NeXT computer to the add-in board and supports its various operational modes, algorithms and medical imaging application programs are being developed and implemented for image display and enhancement. As an extension to the built-in algorithms of the coprocessors, the 2-D Fast Fourier Transform (FFT), 2-D inverse FFT, convolution, warping, and other algorithms (e.g., the Discrete Cosine Transform) which exploit the parallel architecture of the coprocessor board are being implemented.

  3. Evaluation of sequential images for photogrammetric point determination

    NASA Astrophysics Data System (ADS)

    Kowalczyk, M.

    2011-12-01

    Close-range photogrammetry encounters many problems in reconstructing the three-dimensional shape of objects. The relative orientation parameters of the photos usually play the key role in solving this problem. Automating the process is difficult because of the complexity of the recorded scene and the configuration of camera positions, which usually make it impossible to join the photos into one set automatically. Applying a camcorder is a solution widely proposed in the literature to support the creation of 3D models. Its main advantages are the large number of recorded images and camera positions: the exterior orientation changes only slightly between two neighboring frames. These features of a film sequence make it possible to create models with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents the results of experiments determining the interior orientation parameters of several sets of frames imaging a three-dimensional test field. This section describes the repeatability of calibration for film frames taken from a camcorder, which matters for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken to determine a group of check points, as a control of the camera's suitability for measurement tasks. Finally, some results are presented comparing the determination of recorded object points in 3D space. In conventional digital photogrammetry, where separate photos are used, the first levels of image pyramids are used for feature-based matching; this complicated process creates many contingencies that can produce false detections of image similarity. 
In the case of a digital film camera, authors avoid this risky step and go straight to area-based matching, exploiting the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photos comes from a whole-image distance measure, which can work with more than just the two dimensions of the translation vector: scale and angles are also used to improve image matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other, so the procedure searching for point pairs works faster and more accurately because the analyzed areas can be reduced. Another proposed solution, based on an image created by accumulating the differences between particular frames, gives rougher results but works much faster than standard matching.
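
    The area-based matching step can be sketched as a search for the integer translation that maximizes normalized cross-correlation between consecutive frames. The window size and search radius below are hypothetical; the small search radius reflects the abstract's observation that neighboring film frames barely move:

```python
import numpy as np

def ncc_shift(frame_a, frame_b, search=5):
    """Area-based matching: return the integer translation (dy, dx) that
    maximizes normalized cross-correlation between two consecutive frames."""
    a = frame_a[search:-search, search:-search]
    a = (a - a.mean()) / (a.std() + 1e-12)
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            b = frame_b[search + dy:frame_b.shape[0] - search + dy,
                        search + dx:frame_b.shape[1] - search + dx]
            b = (b - b.mean()) / (b.std() + 1e-12)
            score = float(np.mean(a * b))  # NCC of the two windows
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```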

  4. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging.

    PubMed

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R

    2017-11-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral-domain optical coherence tomography (SD-OCT). Using several sets of low-resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a lateral image of higher resolution and quality. Layer-by-layer processing yields an overall high-resolution, high-quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density, and background noise problems together. In experiments, an improvement in lateral resolution of about 3 times, reaching 7.81 µm and 2.19 µm with sample-arm optics of 0.015 and 0.05 numerical aperture respectively, together with a doubling of image quality, was confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has also been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint, and retina layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans caused by random, minor, unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high-quality B-scan images of these in vivo tissues.
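
    The layer-by-layer reconstruction can be illustrated with the simplest multi-frame superresolution scheme, shift-and-add: each low-resolution frame, acquired with a known sub-grid shift, is scattered onto a finer grid and the accumulated samples are averaged. This is only a sketch of the idea; the paper's deconvolution step is omitted and the shifts are taken as known:

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Shift-and-add superresolution: place each low-resolution frame onto a
    grid `factor` times finer at its (dy, dx) offset, then average samples."""
    h, w = frames[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        ys = np.arange(h) * factor + dy
        xs = np.arange(w) * factor + dx
        acc[np.ix_(ys, xs)] += frame
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)  # average; untouched pixels stay zero
```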

  5. Real-time CT-video registration for continuous endoscopic guidance

    NASA Astrophysics Data System (ADS)

    Merritt, Scott A.; Rai, Lav; Higgins, William E.

    2006-03-01

    Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods either were restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to the current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real-time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per second with minimal user intervention.
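
    The continuous-guidance idea, each frame's optimization initialized from the previous frame's result, can be sketched with a toy 2-D tracker. A greedy integer hill-climb over a sum-of-squared-differences cost stands in for the paper's gradient-based viewpoint optimization; the patch, positions, and step count are hypothetical:

```python
import numpy as np

def track(frames, init_pos, patch, steps=10):
    """For each incoming frame, initialize the position estimate with the
    previous frame's result and refine it with a few greedy local steps, so
    no manual re-initialization is needed."""
    def cost(frame, pos):
        y, x = pos
        h, w = patch.shape
        if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
            return np.inf  # patch window outside the frame
        return float(np.sum((frame[y:y + h, x:x + w] - patch) ** 2))

    est, path = init_pos, []
    for frame in frames:
        for _ in range(steps):  # a few refinement steps per frame
            y, x = est
            est = min([(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)],
                      key=lambda p: cost(frame, p))
        path.append(est)
    return path
```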

  6. Clinical efficacy and safety of surface imaging guided radiosurgery (SIG-RS) in the treatment of benign skull base tumors.

    PubMed

    Lau, Steven K M; Patel, Kunal; Kim, Teddy; Knipprath, Erik; Kim, Gwe-Ya; Cerviño, Laura I; Lawson, Joshua D; Murphy, Kevin T; Sanghvi, Parag; Carter, Bob S; Chen, Clark C

    2017-04-01

    Frameless, surface imaging guided radiosurgery (SIG-RS) is a novel platform for stereotactic radiosurgery (SRS) wherein patient positioning is monitored in real-time through infra-red camera tracking of facial topography. Here we describe our initial clinical experience with SIG-RS for the treatment of benign neoplasms of the skull base. We identified 48 patients with benign skull base tumors consecutively treated with SIG-RS at a single institution between 2009 and 2011. Patients were diagnosed with meningioma (n = 22), vestibular schwannoma (n = 20), or nonfunctional pituitary adenoma (n = 6). Local control and treatment-related toxicity were retrospectively assessed. Median follow-up was 65 months (range 61-72 months). Prescription doses were 12-13 Gy in a single fraction (n = 18), 8 Gy × 3 fractions (n = 6), and 5 Gy × 5 fractions (n = 24). Actuarial tumor control rate at 5 years was 98%. No grade ≥3 treatment-related toxicity was observed. Grade ≤2 toxicity was associated with symptomatic lesions (p = 0.049) and single fraction treatment (p = 0.005). SIG-RS for benign skull base tumors produces clinical outcomes comparable to conventional frame-based SRS techniques while enhancing patient comfort.

  7. Temporal enhancement of two-dimensional color doppler echocardiography

    NASA Astrophysics Data System (ADS)

    Terentjev, Alexey B.; Settlemier, Scott H.; Perrin, Douglas P.; del Nido, Pedro J.; Shturts, Igor V.; Vasilyev, Nikolay V.

    2016-03-01

    Two-dimensional color Doppler echocardiography is widely used for assessing blood flow inside the heart and blood vessels. Currently, frame acquisition time for this method varies from tens to hundreds of milliseconds, depending on the Doppler sector parameters. This leads to low frame rates in the resulting video sequences, on the order of tens of Hz, which is insufficient for some diagnostic purposes, especially in pediatrics. In this paper, we present a new approach for the reconstruction of 2D color Doppler cardiac images which increases the frame rate to hundreds of Hz. The approach relies on a modified method of frame reordering originally applied to real-time 3D echocardiography; there are no previous publications describing the application of this method to 2D color Doppler data. The approach has been tested on several in-vivo cardiac 2D color Doppler datasets with an approximate duration of 30 s and a native frame rate of 15 Hz. The resulting image sequences had frame rates equivalent to 500 Hz.
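
    The frame-reordering idea can be sketched directly: frames gathered over many cardiac cycles at a low native rate are sorted by their phase within the cycle, so one composite cycle is sampled far more densely than the native frame rate allows. This sketch assumes a known, constant heart period, which the real method must estimate:

```python
import numpy as np

def reorder_by_phase(frame_times, heart_period):
    """Sort frames by their phase within the cardiac cycle. The composite
    cycle then has an effective frame rate of n_frames / heart_period
    rather than the native acquisition rate."""
    phases = np.mod(frame_times, heart_period)
    order = np.argsort(phases, kind="stable")
    return order, phases[order]
```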

  8. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  9. Fast registration and reconstruction of aliased low-resolution frames by use of a modified maximum-likelihood approach.

    PubMed

    Alam, M S; Bognar, J G; Cain, S; Yasuda, B J

    1998-03-10

    During the process of microscanning, a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications on currently available high-performance processors: the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce the computational burden significantly compared with the original EM algorithm, making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
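
    A standard non-iterative way to estimate a global translation in one step, in the spirit of the one-step registration described above (the paper's own estimator differs), is phase correlation:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate a global integer translation in a single step via phase
    correlation: the inverse FFT of the normalized cross-power spectrum
    peaks at the translation between the two frames."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12      # keep only the phase difference
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```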

  10. Hologram generation by horizontal scanning of a high-speed spatial light modulator.

    PubMed

    Takaki, Yasuhiro; Okada, Naoya

    2009-06-10

    In order to increase the image size and the viewing zone angle of a hologram, a high-speed spatial light modulator (SLM) is imaged as a vertically long image by an anamorphic imaging system, and this image is scanned horizontally by a galvano scanner. The reduction in horizontal pixel pitch of the SLM provides a wide viewing zone angle. The increased image height and horizontal scanning increased the image size. We demonstrated the generation of a hologram having a 15 degrees horizontal viewing zone angle and an image size of 3.4 inches with a frame rate of 60 Hz using a digital micromirror device with a frame rate of 13.333 kHz as a high-speed SLM.

  11. Cardiac function and perfusion dynamics measured on a beat-by-beat basis in the live mouse using ultra-fast 4D optoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Ford, Steven J.; Deán-Ben, Xosé L.; Razansky, Daniel

    2015-03-01

    The fast heart rate (~7 Hz) of the mouse makes cardiac imaging and functional analysis difficult when studying mouse models of cardiovascular disease, and cannot be done truly in real-time and 3D using established imaging modalities. Optoacoustic imaging, on the other hand, provides ultra-fast imaging at up to 50 volumetric frames per second, allowing for acquisition of several frames per mouse cardiac cycle. In this study, we combined a recently-developed 3D optoacoustic imaging array with novel analytical techniques to assess cardiac function and perfusion dynamics of the mouse heart at high, 4D spatiotemporal resolution. In brief, the heart of an anesthetized mouse was imaged over a series of multiple volumetric frames. In another experiment, an intravenous bolus of indocyanine green (ICG) was injected and its distribution was subsequently imaged in the heart. Unique temporal features of the cardiac cycle and ICG distribution profiles were used to segment the heart from background and to assess cardiac function. The 3D nature of the experimental data allowed for determination of cardiac volumes at ~7-8 frames per mouse cardiac cycle, providing important cardiac function parameters (e.g., stroke volume, ejection fraction) on a beat-by-beat basis, which has been previously unachieved by any other cardiac imaging modality. Furthermore, ICG distribution dynamics allowed for the determination of pulmonary transit time and thus additional quantitative measures of cardiovascular function. This work demonstrates the potential for optoacoustic cardiac imaging and is expected to have a major contribution toward future preclinical studies of animal models of cardiovascular health and disease.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lens, E; Horst, A van der; Versteijne, E

    Purpose: Using a breath hold (BH) technique during radiotherapy of pancreatic tumors is expected to reduce intra-fractional motion. The aim of this study was to evaluate the tumor motion during BH. Methods: In this pilot study, we included 8 consecutive pancreatic cancer patients. All had 2–4 intratumoral gold fiducials. Patients were asked to perform 3 consecutive 30-second end-inhale BHs on days 5, 10 and 15 of their three-week treatment. During BH, airflow through a mouthpiece was measured using a spirometer. Any inadvertent flow of air during BH was monitored for all patients. We measured tumor motion on lateral fluoroscopic movies (57 in total) made during BH. In each movie the fiducials as a group were tracked over time in the superior-inferior (SI) and anterior-posterior (AP) directions using 2-D image correlation between consecutive frames. We determined for each patient the range of intra-BH motion over all movies; we also determined the absolute means and standard deviations (SDs) for the entire patient group. Additionally, we investigated the relation between inadvertent airflow during BH and the intra-BH motion. Results: We found intra-BH tumor motion of up to 12.5 mm (range, 1.0–12.5 mm) in the SI direction and up to 8.0 mm (range, 1.0–8.0 mm) in the AP direction. The absolute mean motion over the patient population was 4.7 (SD: 3.0) mm and 2.8 (SD: 1.2) mm in the SI and AP directions, respectively. Patients were able to perform stable consecutive BHs; during only 20% of the movies did we find very small airflows (≤ 65 ml). These were mostly stepwise in nature and could not explain the continuous tumor motions we observed. Conclusion: We found substantial (up to 12.5 mm) pancreatic tumor motion during BHs. We found minimal inadvertent airflow, seen only during a minority of BHs, and it did not explain the obtained results. This work was supported by the foundation Bergh in het Zadel through the Dutch Cancer Society (KWF Kankerbestrijding) project No. UVA 2011-5271.

  13. Mission Specialist Hawley works with the SWUIS experiment

    NASA Image and Video Library

    2013-11-18

    STS093-350-022 (22-27 July 1999) --- Astronaut Steven A. Hawley, mission specialist, works with the Southwest Ultraviolet Imaging System (SWUIS) experiment onboard the Earth-orbiting Space Shuttle Columbia. The SWUIS is based around a Maksutov-design Ultraviolet (UV) telescope and a UV-sensitive, image-intensified Charge-Coupled Device (CCD) camera that frames at video frame rates.

  14. High resolution metric imaging payload

    NASA Astrophysics Data System (ADS)

    Delclaud, Y.

    2017-11-01

    Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, within the framework of earth observation systems able to provide military and government users with metric images from space. This leadership has allowed Alcatel to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.

  15. Picturing Video

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Video Pics is a software program that generates high-quality photos from video. The software was developed under an SBIR contract with Marshall Space Flight Center by Redhawk Vision, Inc.--a subsidiary of Irvine Sensors Corporation. Video Pics takes information content from multiple frames of video and enhances the resolution of a selected frame. The resulting image has enhanced sharpness and clarity like that of a 35 mm photo. The images are generated as digital files and are compatible with image editing software.

  16. Computer quantitation of coronary angiograms

    NASA Technical Reports Server (NTRS)

    Ledbetter, D. C.; Selzer, R. H.; Gordon, R. M.; Blankenhorn, D. H.; Sanmarco, M. E.

    1978-01-01

    A computer technique is being developed at the Jet Propulsion Laboratory to automate the measurement of coronary stenosis. A Vanguard 35mm film transport is optically coupled to a Spatial Data System vidicon/digitizer which in turn is controlled by a DEC PDP 11/55 computer. Programs have been developed to track the edges of the arterial shadow, to locate normal and atherosclerotic vessel sections and to measure percent stenosis. Multiple frame analysis techniques are being investigated that involve on the one hand, averaging stenosis measurements from adjacent frames, and on the other hand, averaging adjacent frame images directly and then measuring stenosis from the averaged image. For the latter case, geometric transformations are used to force registration of vessel images whose spatial orientation changes.

  17. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.
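    The 6-degrees-of-freedom rigid transformation such a tracker reports each frame can, given matched 3-D surface points from the depth camera, be recovered with the standard Kabsch/SVD least-squares fit. A sketch under that assumption (the paper's actual pose estimator is not specified, and the point-matching step is not shown; `rigid_transform` is an illustrative name):

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ P @ R.T + t, via the
    Kabsch/SVD method, for two (n, 3) arrays of matched points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```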

  18. Absolute IGS antenna phase center model igs08.atx: status and potential improvements

    NASA Astrophysics Data System (ADS)

    Schmid, R.; Dach, R.; Collilieux, X.; Jäggi, A.; Schmitz, M.; Dilssner, F.

    2016-04-01

    On 17 April 2011, all analysis centers (ACs) of the International GNSS Service (IGS) adopted the reference frame realization IGS08 and the corresponding absolute antenna phase center model igs08.atx for their routine analyses. The latter consists of an updated set of receiver and satellite antenna phase center offsets and variations (PCOs and PCVs). An update of the model was necessary due to the difference of about 1 ppb in the terrestrial scale between two consecutive realizations of the International Terrestrial Reference Frame (ITRF2008 vs. ITRF2005), as that parameter is highly correlated with the GNSS satellite antenna PCO components in the radial direction.

  19. High-Speed Photography of Detonation Propagation in Dynamically Precompressed Liquid Explosives

    NASA Astrophysics Data System (ADS)

    Petel, O. E.; Higgins, A. J.; Yoshinaka, A. C.; Zhang, F.

    2007-12-01

    The propagation of detonation in shock-compressed nitromethane was observed with a high-speed framing camera. The test explosive, nitromethane, was compressed by a reverberating shock wave to pressures as high as 10 GPa prior to being detonated by a secondary detonation event. The pressure and density in the test explosive prior to detonation were determined using two methods: manganin stress gauge measurements and LS-DYNA simulations. The velocity of the detonation front was determined from consecutive frames and correlated to the density of the reverberating shock-compressed explosive prior to detonation. Observing detonation propagation under these non-ambient conditions provides data which can be useful in the validation of equation of state models.

  20. Single-frame 3D fluorescence microscopy with ultraminiature lensless FlatScope

    PubMed Central

    Adams, Jesse K.; Boominathan, Vivek; Avants, Benjamin W.; Vercosa, Daniel G.; Ye, Fan; Baraniuk, Richard G.; Robinson, Jacob T.; Veeraraghavan, Ashok

    2017-01-01

    Modern biology increasingly relies on fluorescence microscopy, which is driving demand for smaller, lighter, and cheaper microscopes. However, traditional microscope architectures suffer from a fundamental trade-off: As lenses become smaller, they must either collect less light or image a smaller field of view. To break this fundamental trade-off between device size and performance, we present a new concept for three-dimensional (3D) fluorescence imaging that replaces lenses with an optimized amplitude mask placed a few hundred micrometers above the sensor and an efficient algorithm that can convert a single frame of captured sensor data into high-resolution 3D images. The result is FlatScope: perhaps the world’s tiniest and lightest microscope. FlatScope is a lensless microscope that is scarcely larger than an image sensor (roughly 0.2 g in weight and less than 1 mm thick) and yet able to produce micrometer-resolution, high–frame rate, 3D fluorescence movies covering a total volume of several cubic millimeters. The ability of FlatScope to reconstruct full 3D images from a single frame of captured sensor data allows us to image 3D volumes roughly 40,000 times faster than a laser scanning confocal microscope while providing comparable resolution. We envision that this new flat fluorescence microscopy paradigm will lead to implantable endoscopes that minimize tissue damage, arrays of imagers that cover large areas, and bendable, flexible microscopes that conform to complex topographies. PMID:29226243

  1. Influence of the Pixel Sizes of Reference Computed Tomography on Single-photon Emission Computed Tomography Image Reconstruction Using Conjugate-gradient Algorithm.

    PubMed

    Okuda, Kyohei; Sakimoto, Shota; Fujii, Susumu; Ida, Tomonobu; Moriyama, Shigeru

    The frame-of-reference using the computed tomography (CT) coordinate system in single-photon emission computed tomography (SPECT) reconstruction is one of the advanced characteristics of the xSPECT reconstruction system. The aim of this study was to reveal the influence of the high-resolution frame-of-reference on the xSPECT reconstruction. A 99mTc line-source phantom and a National Electrical Manufacturers Association (NEMA) image quality phantom were scanned using the SPECT/CT system. xSPECT reconstructions were performed with the reference CT images at different sizes of the display field-of-view (DFOV) and pixel. The pixel sizes of the reconstructed xSPECT images were close to 2.4 mm, the size of the originally acquired projection data, even when the reference CT resolution was varied. The full width at half maximum (FWHM) of the line source, the absolute recovery coefficient, and the background variability of the image quality phantom were independent of the DFOV sizes of the reference CT images. The results of this study revealed that the image quality of the reconstructed xSPECT images is not influenced by the resolution of the frame-of-reference in SPECT reconstruction.

  2. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. Laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found given more received laser pulses. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to geometrically locate pulse positions with greater certainty.

  3. Video Image Stabilization and Registration (VISAR) Software

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.

  4. Results of analysis of archive MSG data in the context of MCS prediction system development for economic decisions assistance - case studies

    NASA Astrophysics Data System (ADS)

    Szafranek, K.; Jakubiak, B.; Lech, R.; Tomczuk, M.

    2012-04-01

    PROZA (Operational decision-making based on atmospheric conditions) is a project co-financed by the European Union through the European Regional Development Fund. One of its tasks is to develop an operational forecast system intended to support different branches of the economy, such as forestry or fruit farming, by reducing the risk of economic decisions through taking weather conditions into consideration. Within the frame of this study, a system for predicting sudden convective phenomena (storms or tornadoes) is being built. The authors' main purpose is to predict MCSs (Mesoscale Convective Systems) based on MSG (Meteosat Second Generation) real-time data. Several tests have been performed to date. Meteosat satellite images in selected spectral channels, collected over the Central European region for May and August 2010, were used to detect and track cloud systems related to MCSs. In the proposed tracking method, cloud objects are first defined using a temperature threshold, and the selected cells are then tracked using the principle of overlapping positions on consecutive images. The main benefit of using temperature thresholding to define cells is its simplicity. During the tracking process, the algorithm links the cells of the image at time t to those of the following image at time t+dt that correspond to the same cloud system (the Morel-Senesi algorithm). An automated detection and elimination of some instabilities present in the tracking algorithm was developed. The poster presents an analysis of exemplary MCSs in the context of near real-time prediction system development.
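    The thresholding-plus-overlap tracking described above can be sketched in a few lines. This is an illustrative reading of the method, not the authors' code; the -50 degC threshold and the helper names `detect_cells`/`link_cells` are assumptions:

```python
import numpy as np
from scipy import ndimage

def detect_cells(tb, threshold=-50.0):
    """Label contiguous cloud cells whose brightness temperature (degC)
    falls below a threshold; the -50 value is an illustrative choice."""
    labels, n = ndimage.label(tb < threshold)
    return labels, n

def link_cells(labels_t, labels_t1):
    """Morel-Senesi-style linking: each cell at time t is linked to every
    cell at time t+dt whose pixels overlap it in position."""
    links = {}
    for cell in range(1, int(labels_t.max()) + 1):
        over = labels_t1[labels_t == cell]
        links[cell] = sorted(set(over[over > 0].tolist()))
    return links
```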

  5. Ultrafast Ultrasound Imaging With Cascaded Dual-Polarity Waves.

    PubMed

    Zhang, Yang; Guo, Yuexin; Lee, Wei-Ning

    2018-04-01

    Ultrafast ultrasound imaging using plane or diverging waves, instead of focused beams, has greatly advanced the development of novel ultrasound imaging methods for evaluating tissue functions beyond anatomical information. However, the sonographic signal-to-noise ratio (SNR) of ultrafast imaging remains limited due to the lack of transmission focusing, and thus insufficient acoustic energy delivery. We hereby propose a new ultrafast ultrasound imaging methodology with cascaded dual-polarity waves (CDWs), which consists of a pulse train with positive and negative polarities. A new coding scheme and a corresponding linear decoding process were designed to obtain the recovered signals with increased amplitude, thus increasing the SNR without sacrificing the frame rate. The newly designed CDW ultrafast ultrasound imaging technique achieved higher-quality B-mode images than coherent plane-wave compounding (CPWC) and multiplane wave (MW) imaging in a calibration phantom, ex vivo pork belly, and in vivo human back muscle. CDW imaging shows a significant improvement in the SNR (10.71 dB versus CPWC and 7.62 dB versus MW), penetration depth (36.94% versus CPWC and 35.14% versus MW), and contrast ratio in deep regions (5.97 dB versus CPWC and 5.05 dB versus MW) without compromising other image quality metrics, such as spatial resolution and frame rate. The enhanced image quality and ultrafast frame rates offered by CDW imaging hold great potential for various novel imaging applications.

  6. Co-adding techniques for image-based wavefront sensing for segmented-mirror telescopes

    NASA Astrophysics Data System (ADS)

    Smith, J. S.; Aronstein, David L.; Dean, Bruce H.; Acton, D. S.

    2007-09-01

    Image-based wavefront sensing algorithms are being used to characterize the optical performance for a variety of current and planned astronomical telescopes. Phase retrieval recovers the optical wavefront that correlates to a series of diversity-defocused point-spread functions (PSFs), where multiple frames can be acquired at each defocus setting. Multiple frames of data can be co-added in different ways; two extremes are in "image-plane space," to average the frames for each defocused PSF and use phase retrieval once on the averaged images, or in "pupil-plane space," to use phase retrieval on each PSF frame individually and average the resulting wavefronts. The choice of co-add methodology is particularly noteworthy for segmented-mirror telescopes that are subject to noise that causes uncorrelated motions between groups of segments. Using models and data from the James Webb Space Telescope (JWST) Testbed Telescope (TBT), we show how different sources of noise (uncorrelated segment jitter, turbulence, and common-mode noise) and different parts of the optical wavefront, segment and global aberrations, contribute to choosing the co-add method. Of particular interest, segment piston is more accurately recovered in "image-plane space" co-adding, while segment tip/tilt is recovered in "pupil-plane space" co-adding.
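    The two co-adding extremes can be written down directly. In this sketch, `retrieve` stands in for the phase-retrieval step (a caller-supplied placeholder, not a real API); note that the two orderings coincide only when retrieval is linear in the image, which is why noise such as uncorrelated segment jitter makes the choice of co-add space matter:

```python
import numpy as np

def coadd_image_plane(frames, retrieve):
    """'Image-plane space' co-adding: average the diversity-defocused PSF
    frames first, then run phase retrieval once on the averaged image."""
    return retrieve(np.mean(frames, axis=0))

def coadd_pupil_plane(frames, retrieve):
    """'Pupil-plane space' co-adding: run phase retrieval on every frame
    individually, then average the recovered wavefronts."""
    return np.mean([retrieve(f) for f in frames], axis=0)
```

With a nonlinear `retrieve` (squaring, in the test below) the two orders give different answers, illustrating why segment piston and segment tip/tilt can each favor a different co-add space.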

  7. Dose-to-water conversion for the backscatter-shielded EPID: A frame-based method to correct for EPID energy response to MLC transmitted radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zwan, Benjamin J., E-mail: benjamin.zwan@uon.edu.au; O’Connor, Daryl J.; King, Brian W.

    2014-08-15

    Purpose: To develop a frame-by-frame correction for the energy response of amorphous silicon electronic portal imaging devices (a-Si EPIDs) to radiation that has been transmitted through the multileaf collimator (MLC), and to integrate this correction into the backscatter-shielded EPID (BSS-EPID) dose-to-water conversion model. Methods: Individual EPID frames were acquired using a Varian frame grabber and iTools acquisition software, then processed using in-house software developed in MATLAB. For each EPID image frame, the region below the MLC leaves was identified and all pixels in this region were multiplied by a factor of 1.3 to correct for the under-response of the imager to MLC-transmitted radiation. The corrected frames were then summed to form a corrected integrated EPID image. This correction was implemented as an initial step in the BSS-EPID dose-to-water conversion model, which was then used to compute dose planes in a water phantom for 35 IMRT fields. The calculated dose planes, with and without the proposed MLC transmission correction, were compared to measurements in solid water using a two-dimensional diode array. Results: It was observed that the integration of the MLC transmission correction into the BSS-EPID dose model improved agreement between modeled and measured dose planes. In particular, the MLC correction produced higher pass rates for almost all head-and-neck fields tested, yielding an average pass rate of 99.8% for 2%/2 mm criteria. A two-sample independent t-test and Fisher F-test were used to show that the MLC transmission correction resulted in a statistically significant reduction in the mean and the standard deviation of the gamma values, respectively, giving a more accurate and consistent dose-to-water conversion. Conclusions: The frame-by-frame MLC transmission response correction was shown to improve the accuracy and reduce the variability of the BSS-EPID dose-to-water conversion model. The correction may be applied as a preprocessing step in any pretreatment portal dosimetry calculation and has been shown to be beneficial for highly modulated IMRT fields.
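    The frame-by-frame correction itself is simple enough to sketch. Assuming per-frame boolean masks of the region below the MLC leaves (the mask-identification step is not shown) and the factor of 1.3 stated in the abstract:

```python
import numpy as np

def correct_and_integrate(frames, mlc_masks, factor=1.3):
    """For each EPID frame, scale the pixels in the MLC-shielded region
    by `factor` to compensate the imager's under-response, then sum the
    corrected frames into one integrated image.  `mlc_masks[i]` is True
    where frame i lies below the leaves."""
    total = np.zeros_like(frames[0], dtype=float)
    for frame, mask in zip(frames, mlc_masks):
        corrected = frame.astype(float).copy()
        corrected[mask] *= factor
        total += corrected
    return total
```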

  8. Multi-frame super-resolution with quality self-assessment for retinal fundus videos.

    PubMed

    Köhler, Thomas; Brost, Alexander; Mogalle, Katja; Zhang, Qianyi; Köhler, Christiane; Michelson, Georg; Hornegger, Joachim; Tornow, Ralf P

    2014-01-01

    This paper proposes a novel super-resolution framework to reconstruct high-resolution fundus images from multiple low-resolution video frames in retinal fundus imaging. Natural eye movements during an examination are used as a cue for super-resolution in a robust maximum a-posteriori scheme. In order to compensate for heterogeneous illumination on the fundus, we integrate retrospective illumination correction for photometric registration into the underlying imaging model. Our method utilizes quality self-assessment to provide objective quality scores for reconstructed images as well as to select regularization parameters automatically. In our evaluation on real data acquired from six human subjects with a low-cost video camera, the proposed method achieved considerable enhancements of low-resolution frames and improved noise and sharpness characteristics by 74%. In terms of image analysis, we demonstrate the importance of our method for the improvement of automatic blood vessel segmentation as an example application, where the sensitivity was increased by 13% using super-resolution reconstruction.

  9. Real time imaging and infrared background scene analysis using the Naval Postgraduate School infrared search and target designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Bernier, Jean D.

    1991-09-01

    The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software developments in protected mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing, through a JDR-PR10 prototype card, between the frame grabber and the host computer AT bus enables each load of the frame grabber memory buffers to be effected under software control. The protected mode assembly language program can refresh the display of a six degree pseudo-color sector in the scanner rotation within the two second period of the scanner. A study of the imaging properties of the NPS-IRSTD is presented with preliminary work on image analysis and contrast enhancement of infrared background scenes.

  10. Time-resolved contrast-enhanced MR angiography of the thorax in adults with congenital heart disease.

    PubMed

    Mohrs, Oliver K; Petersen, Steffen E; Voigtlaender, Thomas; Peters, Jutta; Nowak, Bernd; Heinemann, Markus K; Kauczor, Hans-Ulrich

    2006-10-01

    The aim of this study was to evaluate the diagnostic value of time-resolved contrast-enhanced MR angiography in adults with congenital heart disease. Twenty patients with congenital heart disease (mean age, 38 +/- 14 years; range, 16-73 years) underwent contrast-enhanced turbo fast low-angle shot MR angiography. Thirty consecutive coronal 3D slabs were acquired at a rate of one frame per second. The mask, defined as the first data set, was subtracted from subsequent images. Image quality was evaluated using a 5-point scale (from 1, not assessable, to 5, excellent image quality). Twelve diagnostic parameters yielded 1 point each in case of correct diagnosis (binary analysis into normal or abnormal) and were summarized into three categories: anatomy of the main thoracic vessels (maximum, 5 points), sequential cardiac anatomy (maximum, 5 points), and shunt detection (maximum, 2 points). The results were compared with a combined clinical reference comprising medical or surgical reports and other imaging studies. Diagnostic accuracies were calculated for each of the parameters as well as for the three categories. The mean image quality was 3.7 +/- 1.0. Using a binary approach, 220 (92%) of the 240 single diagnostic parameters could be analyzed. The percentage of maximum diagnostic points, the sensitivity, the specificity, and the positive and the negative predictive values were all 100% for the anatomy of the main thoracic vessels; 97%, 87%, 100%, 100%, and 96% for sequential cardiac anatomy; and 93%, 93%, 92%, 88%, and 96% for shunt detection. Time-resolved contrast-enhanced MR angiography provides, in one breath-hold, anatomic and qualitative functional information in adult patients with congenital heart disease. The high diagnostic accuracy allows the investigator to tailor subsequent specific MR sequences within the same session.

  11. The performance analysis of three-dimensional track-before-detect algorithm based on Fisher-Tippett-Gnedenko theorem

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Song, Sungchan

    2016-09-01

    The tracking of dim moving targets in infrared image sequences in the presence of high clutter and noise has recently been under intensive investigation. The track-before-detect (TBD) algorithm, which processes the image sequence over a number of frames before making decisions on the target track and existence, is known to be especially attractive in very low SNR environments (⩽ 3 dB). In this paper, we briefly present a three-dimensional (3-D) TBD with dynamic programming (TBD-DP) algorithm using multiple IR image sensors. Since the traditional two-dimensional TBD algorithm cannot track and detect targets along the viewing direction, we use 3-D TBD with multiple sensors and also strictly analyze the detection performance (false alarm and detection probabilities) based on the Fisher-Tippett-Gnedenko theorem. The 3-D TBD-DP algorithm, which does not require a separate image registration step, uses the pixel intensity values jointly read off from multiple image frames to compute the merit function required in the DP process. Therefore, we also establish the relationship between the pixel coordinates of each image frame and the reference coordinates.
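    The DP merit-function recursion at the heart of TBD can be sketched in 2-D (the paper's version is 3-D with multiple sensors; this shows the core recursion only). Here `vmax` bounds the assumed per-frame target motion in pixels, and frame edges are handled by wrap-around for brevity:

```python
import numpy as np

def tbd_dp(frames, vmax=1):
    """Dynamic-programming track-before-detect on a 2-D pixel grid: the
    merit of each pixel at frame k is its intensity plus the best merit
    among pixels reachable within `vmax` pixels per frame at k-1."""
    merit = frames[0].astype(float)
    for f in frames[1:]:
        best = np.full_like(merit, -np.inf)
        for dy in range(-vmax, vmax + 1):
            for dx in range(-vmax, vmax + 1):
                # roll moves the merit of the candidate predecessor pixel
                best = np.maximum(best, np.roll(merit, (dy, dx), axis=(0, 1)))
        merit = f.astype(float) + best
    return merit
```

Thresholding the final merit map (and backtracking the argmax transitions, omitted here) yields the detected track.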

  12. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicular count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image transit in real time. The video camera must be placed at least 6 meters above street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.
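    A single-frame sketch of the pipeline outlined above: background subtraction, a morphological opening to drop specks, and connected-component labelling to count blobs. The threshold and structuring element are illustrative choices, not the paper's values:

```python
import numpy as np
from scipy import ndimage

def count_vehicles(frame, background, thresh=30):
    """Count vehicle-like blobs in one grayscale frame by differencing
    against a background model, thresholding, morphologically cleaning,
    and labelling connected components."""
    diff = np.abs(frame.astype(int) - background.astype(int)) > thresh
    cleaned = ndimage.binary_opening(diff, structure=np.ones((3, 3)))
    _, n = ndimage.label(cleaned)
    return n
```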

  13. Carotid stiffness change over the cardiac cycle by ultrafast ultrasound imaging in healthy volunteers and vascular Ehlers-Danlos syndrome.

    PubMed

    Mirault, Tristan; Pernot, Mathieu; Frank, Michael; Couade, Mathieu; Niarra, Ralph; Azizi, Michel; Emmerich, Joseph; Jeunemaître, Xavier; Fink, Mathias; Tanter, Mickaël; Messas, Emmanuel

    2015-09-01

    Arterial stiffness is related to age and to the collagen properties of the arterial wall, and can be indirectly evaluated by the pulse wave velocity (PWV). Ultrafast ultrasound imaging, a unique ultrahigh-frame-rate technique (>10,000 images/s), recently emerged, enabling direct measurement of carotid PWV and its variation over the cardiac cycle. Our goal was to characterize carotid diastolic-systolic arterial stiffening using ultrafast ultrasound imaging in healthy individuals and in vascular Ehlers-Danlos syndrome (vEDS), in which collagen type III is defective. Ultrafast ultrasound imaging was performed on the common carotids of 102 healthy individuals and 37 consecutive patients with vEDS. Results are mean ± standard deviation. Carotid ultrafast ultrasound imaging PWV in healthy individuals was 5.6 ± 1.2 m/s in early systole and 7.3 ± 2.0 m/s in end systole, and correlated with age (r = 0.48; P < 0.0001 and r = 0.68; P < 0.0001, respectively). The difference between early- and end-systole PWV increased with age independently of blood pressure (r = 0.54; P < 0.0001). In patients with vEDS, ultrafast ultrasound imaging PWV was 6.0 ± 1.5 m/s in early systole and 6.7 ± 1.5 m/s in end systole. Carotid stiffness change over the cardiac cycle was lower than in healthy people (0.021 vs. 0.057 m/s per mmHg; P = 0.0035). Ultrafast ultrasound imaging can evaluate carotid PWV and its variation over the cardiac cycle. This allowed us to demonstrate the age-induced increase of arterial diastolic-systolic stiffening in healthy people and a lower stiffening in vEDS, both characterized by arterial complications. We believe that this easy-to-use technique could offer the opportunity to go beyond the diastolic PWV to better characterize arterial stiffness changes with age or other collagen alterations.
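    Reducing wall-motion data to a PWV is commonly done by fitting pulse arrival time against position along the vessel, the velocity being the inverse slope. A sketch of that reduction under this assumption (the authors' exact estimator is not specified here):

```python
import numpy as np

def pwv_from_arrivals(positions_m, arrival_times_s):
    """Estimate pulse wave velocity (m/s) as the inverse slope of a
    linear fit of pulse arrival time versus position along the vessel."""
    slope, _ = np.polyfit(positions_m, arrival_times_s, 1)
    return 1.0 / slope
```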

  14. Informative-frame filtering in endoscopy videos

    NASA Astrophysics Data System (ADS)

    An, Yong Hwan; Hwang, Sae; Oh, JungHwan; Lee, JeongKyu; Tavanapong, Wallapak; de Groen, Piet C.; Wong, Johnny

    2005-04-01

    Advances in video technology are being incorporated into today's healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in this type of video, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden on further processing, such as computer-aided image processing or human experts' examinations, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify video frames into two classes, informative and non-informative, using a combination of the Discrete Fourier Transform (DFT), texture analysis, and K-Means clustering. The proposed technique can evaluate the frames without any reference image and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (i.e., precision, sensitivity, specificity, and accuracy).
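    A minimal sketch of the idea: one spectral feature (blurry frames concentrate DFT energy at low frequencies) and an unsupervised two-cluster split, with no reference image or fixed threshold. The single feature and the tiny 1-D 2-means below stand in for the paper's full DFT/texture feature set and K-Means:

```python
import numpy as np

def highfreq_ratio(frame):
    """Fraction of DFT magnitude outside the central low-frequency core;
    out-of-focus frames score low, in-focus frames score high."""
    F = np.fft.fftshift(np.abs(np.fft.fft2(frame)))
    h, w = F.shape
    core = F[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4].sum()
    total = F.sum()
    return (total - core) / total

def two_means(x, iters=20):
    """Minimal 1-D 2-means clustering; returns labels with cluster 1 the
    higher-centred (informative) group."""
    c = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = x[labels == k].mean()
    return labels if c[1] >= c[0] else 1 - labels
```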

  15. Red Spot Movie

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This brief movie shows counterclockwise atmospheric motion around Jupiter's Great Red Spot. The clip was made from blue-filter images taken with the narrow-angle camera on NASA's Cassini spacecraft during seven separate rotations of Jupiter between Oct. 1 and Oct. 5, 2000.

    The clip also shows the eastward and westward motion of the zonal jets, seen as the horizontal stripes flowing in opposite directions. The zonal jets circle the planet. As far as can be determined from both Earth-based and spacecraft measurements, the positions and speeds of the jets have not changed for 100 years. Since Jupiter is a fluid planet without a solid boundary, the jet speeds are measured relative to Jupiter's magnetic field, which rotates, wobbling like a top because of its tilt, every 9 hours 55.5 minutes. The movie shows motions in the magnetic reference frame, so winds to the west correspond to features that are rotating a little slower than the magnetic field, and eastward winds correspond to features rotating a little faster.

    Because the Red Spot is in the southern hemisphere, the direction of motion indicates it is a high-pressure center. Small bright clouds appear suddenly to the west of the Great Red Spot. Scientists suspect these small white features are lightning storms. The storms eventually merge with the Red Spot and surrounding jets, and may be the main energy source for the large-scale features.

    The smallest features in the movie are about 500 kilometers (about 300 miles) across. The spacing of the movie frames in time is not uniform; some consecutive images are separated by two Jupiter rotations, and some by one. The images have been re-projected using a simple cylindrical map projection. They show an area from 50 degrees north of Jupiter's equator to 50 degrees south, extending 100 degrees east-west, about one quarter of Jupiter's circumference.

    Cassini is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.

  16. Image-guided surgery and therapy: current status and future directions

    NASA Astrophysics Data System (ADS)

    Peters, Terence M.

    2001-05-01

    Image-guided surgery and therapy is assuming an increasingly important role, particularly considering the current emphasis on minimally-invasive surgical procedures. Volumetric CT and MR images have been used now for some time in conjunction with stereotactic frames, to guide many neurosurgical procedures. With the development of systems that permit surgical instruments to be tracked in space, image-guided surgery now includes the use of frameless procedures, and the application of the technology has spread beyond neurosurgery to include orthopedic applications and therapy of various soft-tissue organs such as the breast, prostate and heart. Since tracking systems allow image-guided surgery to be undertaken without frames, a great deal of effort has been spent on image-to-image and image-to-patient registration techniques, and upon the means of combining real-time intra-operative images with images acquired pre-operatively. As image-guided surgery systems have become increasingly sophisticated, the greatest challenges to their successful adoption in the operating room of the future relate to the interface between the user and the system. To date, little effort has been expended to ensure that the human factors issues relating to the use of such equipment in the operating room have been adequately addressed. Such systems will only be employed routinely in the OR when they are designed to be intuitive, unobtrusive, and provide simple access to the source of the images.

  17. Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio

    2013-01-01

    The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owners. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients, including ARM and RSM (Remote Sensing Mast), update the rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of the frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
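
    As a rough illustration of how such a centralized frame tree can answer transform queries between any two frames, here is a minimal Python sketch. The class layout, 4x4 homogeneous-transform representation, and frame names are assumptions for illustration only, not the actual FM flight-software design:

```python
import numpy as np

class FrameTree:
    """Minimal frame-tree sketch: each frame stores the 4x4 homogeneous
    transform from its parent, and a query composes transforms along the
    path through the tree."""

    def __init__(self):
        self.parent = {}  # frame name -> parent frame name
        self.local = {}   # frame name -> 4x4 transform (parent -> frame)

    def add_frame(self, name, parent, transform):
        self.parent[name] = parent
        self.local[name] = np.asarray(transform, dtype=float)

    def _to_root(self, name):
        # Compose parent->child transforms from the root down to `name`.
        T = np.eye(4)
        chain = []
        while name in self.parent:
            chain.append(name)
            name = self.parent[name]
        for n in reversed(chain):
            T = T @ self.local[n]
        return T

    def query(self, frame_a, frame_b):
        # Transform mapping coordinates expressed in frame_b into frame_a.
        return np.linalg.inv(self._to_root(frame_a)) @ self._to_root(frame_b)
```

    A query between, say, the surface root and a rover frame then reduces to composing each frame's parent-relative transform along the connecting path, which is what spares operators from hand-computing coordinate entries for commands.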

  18. Relation between calcium burden, echocardiographic stent frame eccentricity and paravalvular leakage after corevalve transcatheter aortic valve implantation.

    PubMed

    Di Martino, Luigi F M; Soliman, Osama I I; van Gils, Lennart; Vletter, Wim B; Van Mieghem, Nicolas M; Ren, Ben; Galema, Tjebbe W; Schultz, Carl; de Jaegere, Peter P T; Di Biase, Matteo; Geleijnse, Marcel L

    2017-06-01

    Paravalvular aortic leakage (PVL) after transcatheter aortic valve implantation (TAVI) is a complication with potentially severe consequences. The relation between native aortic root calcium burden, stent frame eccentricity and PVL had not been studied before. Two-hundred-and-twenty-three consecutive patients with severe aortic stenosis who underwent TAVI with a Medtronic CoreValve System© and who had available pre-discharge transthoracic echocardiography were studied. Echocardiographic stent inflow frame eccentricity was defined as a major minus minor diameter difference >2 mm in a short-axis view. PVL was scored according to the updated Valve Academic Research Consortium (VARC-2) recommendations. In a subgroup of 162 (73%) patients, the calcium Agatston score was available. Stent frame eccentricity was seen in 77 (35%) patients. The correlation between the Agatston score and stent frame eccentricity was significant (ρ = 0.241, P = 0.003). Paravalvular leakage was absent in 91 cases (41%), mild in 67 (30%), moderate in 51 (23%), and severe in 14 (6%) cases. The correlation between stent frame eccentricity and PVL severity was significant (ρ = 0.525, P < 0.0001). There was a relation between particular eccentric stent frame shapes and the site of PVL. Calcification of the aortic annulus is associated with a subsequent eccentric shape of the CoreValve prosthesis. This eccentric shape results in more PVL, with the localization of PVL related to the shape of stent frame eccentricity. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.
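
    The two quantities the analysis rests on can be made concrete with a short sketch. Function names are hypothetical, and the Spearman implementation below ignores rank ties:

```python
import numpy as np

def is_eccentric(major_mm, minor_mm, threshold_mm=2.0):
    """Stent inflow frame eccentricity per the study's definition:
    major minus minor short-axis diameter greater than 2 mm."""
    return (major_mm - minor_mm) > threshold_mm

def spearman_rho(x, y):
    """Spearman rank correlation (no tie handling), the statistic reported
    for the Agatston-score vs. eccentricity association."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```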

  19. Efficient use of bit planes in the generation of motion stimuli

    NASA Technical Reports Server (NTRS)

    Mulligan, Jeffrey B.; Stone, Leland S.

    1988-01-01

    The production of animated motion sequences on computer-controlled display systems presents a technical problem because large images cannot be transferred from disk storage to image memory at conventional frame rates. A technique is described in which a single base image can be used to generate a broad class of motion stimuli without the need for such memory transfers. This technique was applied to the generation of drifting sine-wave gratings (and by extension, sine wave plaids). For each drifting grating, sine and cosine spatial phase components are first reduced to 1 bit/pixel using a digital halftoning technique. The resulting pairs of 1-bit images are then loaded into pairs of bit planes of the display memory. To animate the patterns, the display hardware's color lookup table is modified on a frame-by-frame basis; for each frame the lookup table is set to display a weighted sum of the spatial sine and cosine phase components. Because the contrasts and temporal frequencies of the various components are mutually independent in each frame, the sine and cosine components can be counterphase modulated in temporal quadrature, yielding a single drifting grating. Using additional bit planes, multiple drifting gratings can be combined to form sine-wave plaid patterns. A large number of resultant plaid motions can be produced from a single image file because the temporal frequencies of all the components can be varied independently. For a graphics device having 8 bits/pixel, up to four drifting gratings may be combined, each having independently variable contrast and speed.
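
    The per-frame lookup-table update described above can be sketched as follows. This is a hedged illustration: function and parameter names are invented, and the halftoned bits are assumed to encode -1/+1 pattern values:

```python
import numpy as np

def grating_lut(frame_index, cycles_per_sequence, contrast, n_frames):
    """Per-frame 4-entry color lookup table for one drifting grating held in
    two bit planes: bit 0 = halftoned cosine-phase component, bit 1 =
    halftoned sine-phase component (each bit standing for -1 or +1).
    Weighting the two components in temporal quadrature makes the grating
    drift without rewriting image memory."""
    phase = 2.0 * np.pi * cycles_per_sequence * frame_index / n_frames
    w_cos, w_sin = np.cos(phase), np.sin(phase)
    lut = np.empty(4)
    for index in range(4):
        c = 2 * (index & 1) - 1         # cosine-phase bit -> -1/+1
        s = 2 * ((index >> 1) & 1) - 1  # sine-phase bit   -> -1/+1
        # Mean luminance 0.5; divide by 2 so the sum stays within [0, 1].
        lut[index] = 0.5 + 0.5 * contrast * (c * w_cos + s * w_sin) / 2.0
    return lut
```

    Because only the four lookup-table entries change per frame, contrast and temporal frequency can be varied at display time without touching image memory; additional bit planes per grating extend the same idea to plaids.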

  20. Io's Sodium Cloud (Clear and Green-Yellow Filters)

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The green-yellow filter and clear filter images of Io which were released over the past two days were originally exposed on the same frame. The camera pointed in slightly different directions for the two exposures, placing a clear filter image of Io on the top half of the frame, and a green-yellow filter image of Io on the bottom half of the frame. This picture shows that entire original frame in false color, the most intense emission appearing white.

    East is to the right. Most of Io's visible surface is in shadow, though one can see part of an illuminated crescent on its western side. The burst of white light near Io's eastern equatorial edge (most distinctive in the green filter image) is sunlight scattered by the plume of the volcano Prometheus.

    There is much more bright light near Io in the clear filter image, since that filter's wider wavelength range admits more scattered light from Prometheus' sunlit plume and Io's illuminated crescent. Thus in the clear filter image especially, Prometheus's plume was bright enough to produce several white spikes which extend radially outward from the center of the plume emission. These spikes are artifacts produced by the optics of the camera. Two of the spikes in the clear filter image appear against Io's shadowed surface, and the lower of these is pointing towards a bright round spot. That spot corresponds to thermal emission from the volcano Pele.

    The Jet Propulsion Laboratory, Pasadena, CA manages the mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov.

  1. Fully automatic and reference-marker-free image stitching method for full-spine and full-leg imaging with computed radiography

    NASA Astrophysics Data System (ADS)

    Wang, Xiaohui; Foos, David H.; Doran, James; Rogers, Michael K.

    2004-05-01

    Full-leg and full-spine imaging with standard computed radiography (CR) systems requires several cassettes/storage phosphor screens to be placed in a staggered arrangement and exposed simultaneously to achieve an increased imaging area. A method has been developed that can automatically and accurately stitch the acquired sub-images without relying on any external reference markers. It can detect and correct the order, orientation, and overlap arrangement of the sub-images for stitching. The automatic determination of the order, orientation, and overlap arrangement of the sub-images consists of (1) constructing a hypothesis list that includes all cassette/screen arrangements, (2) refining hypotheses based on a set of rules derived from imaging physics, (3) correlating each consecutive sub-image pair in each hypothesis and establishing an overall figure-of-merit, and (4) selecting the hypothesis with the maximum figure-of-merit. The stitching process requires the CR reader to overscan each CR screen so that the screen edges are completely visible in the acquired sub-images. The rotational displacement and vertical displacement between two consecutive sub-images are calculated by matching the orientation and location of the screen edge in the front image and its corresponding shadow in the back image. The horizontal displacement is estimated by maximizing the correlation function between the two image sections in the overlap region. Accordingly, the two images are stitched together. This process is repeated for the newly stitched composite image and the next consecutive sub-image until a full-image composite is created. The method has been evaluated in both phantom experiments and clinical studies. The standard deviation of image misregistration is below one image pixel.
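
    The horizontal-displacement step, maximizing the correlation between the two image sections in the overlap region, might be sketched as follows. This assumes rotation and vertical displacement have already been corrected; `max_shift` is an illustrative search bound, not a value from the paper:

```python
import numpy as np

def horizontal_shift(front_strip, back_strip, max_shift=20):
    """Estimate the horizontal displacement between the overlap strips of
    two consecutive sub-images by maximizing normalized correlation.
    A positive result means back_strip is shifted right of front_strip."""
    rows, width = front_strip.shape
    a = front_strip[:, max_shift:width - max_shift]  # fixed central window
    a0 = a - a.mean()
    best_shift, best_score = 0, -np.inf
    for shift in range(-max_shift, max_shift + 1):
        b = back_strip[:, max_shift + shift:width - max_shift + shift]
        b0 = b - b.mean()
        score = (a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())
        if score > best_score:
            best_shift, best_score = shift, score
    return best_shift
```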

  2. Interactive wire-frame ship hullform generation and display

    NASA Technical Reports Server (NTRS)

    Calkins, D. E.; Garbini, J. L.; Ishimaru, J.

    1984-01-01

    An interactive automated procedure to generate a wire frame graphic image of a ship hullform, which uses a digitizing tablet in conjunction with the hullform lines drawing, was developed. The geometric image created is displayed on an Evans & Sutherland PS-300 graphics terminal for real time interactive viewing or is output as hard copy on an inexpensive dot matrix printer.

  3. Visible camera imaging of plasmas in Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine is intended to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area, of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter, for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.

  4. Swirling Dust in Gale Crater, Mars, Sol 1613

    NASA Image and Video Library

    2017-02-27

    This frame from a sequence of images shows a dust-carrying whirlwind, called a dust devil, on lower Mount Sharp inside Gale Crater, as viewed by NASA's Curiosity Mars Rover during the summer afternoon of the rover's 1,613rd Martian day, or sol (Feb. 18, 2017). Set within a broader southward view from the rover's Navigation Camera, the rectangular area outlined in black was imaged multiple times over a span of several minutes to check for dust devils. Images from the period with most activity are shown in the inset area. The images are in pairs that were taken about 12 seconds apart, with an interval of about 90 seconds between pairs. Timing is accelerated and not fully proportional in this animation. Contrast has been modified to make frame-to-frame changes easier to see. A black frame provides a marker between repeats of the sequence. On Mars as on Earth, dust devils result from sunshine warming the ground, prompting convective rising of air that has gained heat from the ground. Observations of dust devils provide information about wind directions and interaction between the surface and the atmosphere. An animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA21483

  5. Imaging a seizure model in zebrafish with structured illumination light sheet microscopy

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Dale, Savannah; Ball, Rebecca; VanLeuven, Ariel J.; Baraban, Scott; Sornborger, Andrew; Lauderdale, James D.; Kner, Peter

    2018-02-01

    Zebrafish are a promising vertebrate model for elucidating how neural circuits generate behavior under normal and pathological conditions. The Baraban group first demonstrated that zebrafish larvae are valuable for investigating seizure events and can be used as a model for epilepsy in humans. Because of their small size and transparency, zebrafish embryos are ideal for imaging seizure activity using calcium indicators. Light-sheet microscopy is well suited to capturing neural activity in zebrafish because it is capable of optical sectioning, high frame rates, and low excitation intensities. We describe work in our lab to use light-sheet microscopy for high-speed long-time imaging of neural activity in wildtype and mutant zebrafish to better understand the connectivity and activity of inhibitory neural networks when GABAergic signaling is altered in vivo. We show that, with light-sheet microscopy, neural activity can be recorded at 23 frames per second in two colors for over 10 minutes, allowing us to capture rare seizure events in mutants. We have further implemented structured illumination to increase resolution and contrast in the vertical and axial directions during high-speed imaging at an effective frame rate of over 7 frames per second.

  6. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now even 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the increasing data volume. The resulting storage and transmission problems cannot be solved merely by expanding hard-disk capacity and upgrading transmission devices. Based on full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and interprediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and frame I. Then, using the above approach together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, with super-resolution reconstruction technology, the reconstructed video quality is further improved. Experiments show that the proposed compression method for a single image (frame I) and video sequences outperforms HEVC in a low-bit-rate environment.

  7. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data.

    PubMed

    Kotasidis, F A; Mehranian, A; Zaidi, H

    2016-05-07

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever-improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation, whether from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  8. Impact of time-of-flight on indirect 3D and direct 4D parametric image reconstruction in the presence of inconsistent dynamic PET data

    NASA Astrophysics Data System (ADS)

    Kotasidis, F. A.; Mehranian, A.; Zaidi, H.

    2016-05-01

    Kinetic parameter estimation in dynamic PET suffers from reduced accuracy and precision when parametric maps are estimated using kinetic modelling following image reconstruction of the dynamic data. Direct approaches to parameter estimation attempt to directly estimate the kinetic parameters from the measured dynamic data within a unified framework. Such image reconstruction methods have been shown to generate parametric maps of improved precision and accuracy in dynamic PET. However, due to the interleaving between the tomographic and kinetic modelling steps, any tomographic or kinetic modelling errors in certain regions or frames, tend to spatially or temporally propagate. This results in biased kinetic parameters and thus limits the benefits of such direct methods. Kinetic modelling errors originate from the inability to construct a common single kinetic model for the entire field-of-view, and such errors in erroneously modelled regions could spatially propagate. Adaptive models have been used within 4D image reconstruction to mitigate the problem, though they are complex and difficult to optimize. Tomographic errors in dynamic imaging on the other hand, can originate from involuntary patient motion between dynamic frames, as well as from emission/transmission mismatch. Motion correction schemes can be used, however, if residual errors exist or motion correction is not included in the study protocol, errors in the affected dynamic frames could potentially propagate either temporally, to other frames during the kinetic modelling step or spatially, during the tomographic step. In this work, we demonstrate a new strategy to minimize such error propagation in direct 4D image reconstruction, focusing on the tomographic step rather than the kinetic modelling step, by incorporating time-of-flight (TOF) within a direct 4D reconstruction framework. 
Using ever-improving TOF resolutions (580 ps, 440 ps, 300 ps and 160 ps), we demonstrate that direct 4D TOF image reconstruction can substantially prevent kinetic parameter error propagation, whether from erroneous kinetic modelling, inter-frame motion or emission/transmission mismatch. Furthermore, we demonstrate the benefits of TOF in parameter estimation when conventional post-reconstruction (3D) methods are used and compare the potential improvements to direct 4D methods. Further improvements could possibly be achieved in the future by combining TOF direct 4D image reconstruction with adaptive kinetic models and inter-frame motion correction schemes.

  9. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.

  10. 4D microvascular imaging based on ultrafast Doppler tomography.

    PubMed

    Demené, Charlie; Tiran, Elodie; Sieu, Lim-Anna; Bergel, Antoine; Gennisson, Jean Luc; Pernot, Mathieu; Deffieux, Thomas; Cohen, Ivan; Tanter, Mickael

    2016-02-15

    4D ultrasound microvascular imaging was demonstrated by applying ultrafast Doppler tomography (UFD-T) to the imaging of brain hemodynamics in rodents. In vivo real-time imaging of the rat brain was performed using ultrasonic plane wave transmissions at very high frame rates (18,000 frames per second). Such ultrafast frame rates allow for highly sensitive and wide-field-of-view 2D Doppler imaging of blood vessels far beyond conventional ultrasonography. Voxel anisotropy (100 μm × 100 μm × 500 μm) was corrected for by using a tomographic approach, which consisted of ultrafast acquisitions repeated for different imaging plane orientations over multiple cardiac cycles. UFD-T allows for 4D dynamic microvascular imaging of deep-seated vasculature (up to 20 mm) with a very high 4D resolution (respectively 100 μm × 100 μm × 100 μm and 10 ms) and high sensitivity to flow in small vessels (>1 mm/s) for a whole-brain imaging technique without requiring any contrast agent. 4D ultrasound microvascular imaging in vivo could become a valuable tool for the study of brain hemodynamics, such as cerebral flow autoregulation or vascular remodeling after ischemic stroke recovery, and, more generally, tumor vasculature response to therapeutic treatment. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Framing Service, Benefit, and Credibility Through Images and Texts: A Content Analysis of Online Promotional Messages of Korean Medical Tourism Industry.

    PubMed

    Jun, Jungmi

    2016-07-01

    This study examines how the Korean medical tourism industry frames its service, benefit, and credibility issues through texts and images of online brochures. The results of content analysis suggest that the Korean medical tourism industry attempts to frame their medical/health services as "excellence in surgeries and cancer care" and "advanced health technology and facilities." However, the use of cost-saving appeals was limited, which can be seen as a strategy to avoid consumers' association of lower cost with lower quality services, and to stress safety and credibility.

  12. Advances in indirect detector systems for ultra high-speed hard X-ray imaging with synchrotron light

    NASA Astrophysics Data System (ADS)

    Olbinado, M. P.; Grenzer, J.; Pradel, P.; De Resseguier, T.; Vagovic, P.; Zdora, M.-C.; Guzenko, V. A.; David, C.; Rack, A.

    2018-04-01

    We report on indirect X-ray detector systems for various full-field, ultra high-speed X-ray imaging methodologies, such as X-ray phase-contrast radiography, diffraction topography, grating interferometry and speckle-based imaging performed at the hard X-ray imaging beamline ID19 of the European Synchrotron (ESRF). Our work highlights the versatility of indirect X-ray detectors for multiple goals, such as single synchrotron pulse isolation, multiple-frame recording at up to millions of frames per second, high efficiency, and high spatial resolution. Besides the technical advancements, potential applications are briefly introduced and discussed.

  13. Plexus structure imaging with thin slab MR neurography: rotating frames, fly-throughs, and composite projections

    NASA Astrophysics Data System (ADS)

    Raphael, David T.; McIntee, Diane; Tsuruda, Jay S.; Colletti, Patrick; Tatevossian, Raymond; Frazier, James

    2006-03-01

    We explored multiple image processing approaches by which to display the segmented adult brachial plexus in a three-dimensional manner. Magnetic resonance neurography (MRN) 1.5-Tesla scans with STIR sequences, which preferentially highlight nerves, were performed in adult volunteers to generate high-resolution raw images. Using multiple software programs, the raw MRN images were then manipulated so as to achieve segmentation of plexus neurovascular structures, which were incorporated into three different visualization schemes: rotating upper thoracic girdle skeletal frames, dynamic fly-throughs parallel to the clavicle, and thin slab volume-rendered composite projections.

  14. 2-tier in-plane motion correction and out-of-plane motion filtering for contrast-enhanced ultrasound.

    PubMed

    Ta, Casey N; Eghtedari, Mohammad; Mattrey, Robert F; Kono, Yuko; Kummel, Andrew C

    2014-11-01

    Contrast-enhanced ultrasound (CEUS) cines of focal liver lesions (FLLs) can be quantitatively analyzed to measure tumor perfusion on a pixel-by-pixel basis for diagnostic indication. However, CEUS cines acquired freehand and during free breathing cause nonuniform in-plane and out-of-plane motion from frame to frame. These motions create fluctuations in the time-intensity curves (TICs), reducing the accuracy of quantitative measurements. Out-of-plane motion cannot be corrected by image registration in 2-dimensional CEUS and degrades the quality of in-plane motion correction (IPMC). A 2-tier IPMC strategy and adaptive out-of-plane motion filter (OPMF) are proposed to provide a stable correction of nonuniform motion to reduce the impact of motion on quantitative analyses. A total of 22 cines of FLLs were imaged with dual B-mode and contrast specific imaging to acquire a 3-minute TIC. B-mode images were analyzed for motion, and the motion correction was applied to both B-mode and contrast images. For IPMC, the main reference frame was automatically selected for each cine, and subreference frames were selected in each respiratory cycle and sequentially registered toward the main reference frame. All other frames were sequentially registered toward the local subreference frame. Four OPMFs were developed and tested: subsample normalized correlation (NC), subsample sum of absolute differences, mean frame NC, and histogram. The frames that were most dissimilar to the OPMF reference frame using 1 of the 4 above criteria in each respiratory cycle were adaptively removed by thresholding against the low-pass filter of the similarity curve. Out-of-plane motion filter was quantitatively evaluated by an out-of-plane motion metric (OPMM) that measured normalized variance in the high-pass filtered TIC within the tumor region-of-interest with low OPMM being the goal. 
Results for IPMC and OPMF were qualitatively evaluated by 2 blinded observers who ranked the motion in the cines before and after various combinations of motion correction steps. Quantitative measurements showed that 2-tier IPMC and OPMF improved imaging stability. With IPMC, the NC B-mode metric increased from 0.504 ± 0.149 to 0.585 ± 0.145 over all cines (P < 0.001). Two-tier IPMC also produced better fits on the contrast-specific TIC than industry standard IPMC techniques did (P < 0.02). In-plane motion correction and OPMF were shown to improve goodness of fit for pixel-by-pixel analysis (P < 0.001). Out-of-plane motion filter reduced variance in the contrast-specific signal as shown by a median decrease of 49.8% in the OPMM. Two-tier IPMC and OPMF were also shown to qualitatively reduce motion. Observers consistently ranked cines with IPMC higher than the same cine before IPMC (P < 0.001) as well as ranked cines with OPMF higher than when they were uncorrected. The 2-tier sequential IPMC and adaptive OPMF significantly reduced motion in 3-minute CEUS cines of FLLs, thereby overcoming the challenges of drift and irregular breathing motion in long cines. The 2-tier IPMC strategy provided stable motion correction tolerant of out-of-plane motion throughout the cine by sequentially registering subreference frames that bypassed the motion cycles, thereby overcoming the lack of a nearly stationary reference point in long cines. Out-of-plane motion filter reduced apparent motion by adaptively removing frames imaged off-plane from the automatically selected OPMF reference frame, thereby tolerating nonuniform breathing motion. Selection of the best OPMF by minimizing OPMM effectively reduced motion under a wide variety of motion patterns applicable to clinical CEUS. These semiautomated processes only required user input for region-of-interest selection and can improve the accuracy of quantitative perfusion measurements.
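
    A minimal sketch of the OPMF idea, using the normalized-correlation criterion and a moving average as the low-pass filter of the similarity curve. The window and margin values are illustrative tuning parameters, not values from the paper:

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized correlation between two image arrays."""
    a0, b0 = a - a.mean(), b - b.mean()
    return (a0 * b0).sum() / np.sqrt((a0 ** 2).sum() * (b0 ** 2).sum())

def opmf_keep_mask(frames, reference, window=9, margin=0.02):
    """Out-of-plane motion filter sketch: score each frame against the OPMF
    reference frame, low-pass the similarity curve with a moving average,
    and drop frames falling more than `margin` below the local baseline."""
    sim = np.array([normalized_correlation(f, reference) for f in frames])
    baseline = np.convolve(sim, np.ones(window) / window, mode="same")
    return sim >= baseline - margin
```

    Thresholding against the low-passed curve, rather than a fixed cutoff, is what lets the filter adapt to drift and irregular breathing motion across a long cine.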

  15. Smart Camera Technology Increases Quality

    NASA Technical Reports Server (NTRS)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. To keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few frames per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full; subsequent information is then lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  16. Design and construction of a high frame rate imaging system

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Waugaman, John L.; Liu, Anjun; Lu, Jian-Yu

    2002-05-01

    A new high frame rate imaging method has been developed recently [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44, 839-856 (1997)]. This method may have clinical applications in imaging fast-moving objects such as the human heart, velocity vector imaging, and low-speckle imaging. To implement the method, an imaging system has been designed. The system consists of one main printed circuit board (PCB) and 16 channel boards (each channel board contains 8 channels), in addition to a set-top box for connections to a personal computer (PC), a front panel board for user control and message display, and a power control and distribution board. The main board contains a field programmable gate array (FPGA) and controls all channels (each channel also has an FPGA). We will report the analog and digital circuit design and simulations, multilayer PCB design with commercial software (Protel 99), PCB signal integrity testing and system RFI/EMI shielding, and the assembly and construction of the entire system. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  17. Logic design and implementation of FPGA for a high frame rate ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Liu, Anjun; Wang, Jing; Lu, Jian-Yu

    2002-05-01

    Recently, a method has been developed for high frame rate medical imaging [Jian-yu Lu, ``2D and 3D high frame rate imaging with limited diffraction beams,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control 44(4), 839-856 (1997)]. To realize this method, a complicated system [multiple-channel simultaneous data acquisition, large memory in each channel for storing up to 16 seconds of data at 40 MHz and 12-bit resolution, time-gain control (TGC), Doppler imaging, harmonic imaging, as well as coded transmissions] is designed. Due to the complexity of the system, a field programmable gate array (FPGA) (Xilinx Spartan II) is used. In this presentation, the design and implementation of the FPGA for the system will be reported. This includes the synchronous dynamic random access memory (SDRAM) controller and other system controllers, time sharing for auto-refresh of SDRAMs to reduce peak power, transmission and imaging modality selections, ECG data acquisition and synchronization, a 160 MHz delay locked loop (DLL) for accurate timing, and data transfer via either a parallel port or a PCI bus for post image processing. [Work supported in part by Grant 5RO1 HL60301 from NIH.]

  18. Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses

    PubMed Central

    Kim, Hyun Seok; Park, Kwang Suk

    2017-01-01

    Most retinal prostheses use a head-fixed camera and a video processing unit. Some studies have proposed various image processing methods to improve visual perception for patients. However, previous studies focused only on using spatial information. The present study proposes a spatiotemporal pixelization method mimicking fixational eye movements to generate stimulation images for artificial retina arrays by combining spatial and temporal information. Input images were sampled at a resolution four times higher than the pixel count of the array. We subsampled this image and generated four different phosphene images. We then evaluated the recognition scores of characters by sequentially presenting phosphene images with varying pixel array sizes (6 × 6, 8 × 8 and 10 × 10) and stimulus frame rates (10 Hz, 15 Hz, 20 Hz, 30 Hz, and 60 Hz). The proposed method showed the highest recognition score at a stimulus frame rate of approximately 20 Hz. The method also significantly improved the recognition score for complex characters. This method provides a new way to increase practical resolution beyond the restricted spatial resolution by merging the higher resolution image into high-frame time slots. PMID:29073735
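
    The subsampling step can be sketched as below: an input with 4x the array's pixel count (2x per axis) is split into four phase-shifted phosphene images that would be presented sequentially at the stimulus frame rate. The 4x4 input for a 2x2 array is a toy example of our own, not the paper's data.

```python
# Split a 2x-oversampled image into the four phase-shifted subsamples
# ("phosphene images") described in the abstract.

def subsample_phases(img):
    """Return the four phase-shifted subsamples of a 2D list image."""
    return [[row[dx::2] for row in img[dy::2]]
            for dy in (0, 1) for dx in (0, 1)]

img = [[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15]]

frames = subsample_phases(img)   # four 2x2 phosphene images
# frames[0] == [[0, 2], [8, 10]], frames[3] == [[5, 7], [13, 15]]
```

    Cycling through the four frames fast enough (around 20 Hz in the study) lets the temporal sequence carry the spatial detail the coarse array cannot.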

  19. Tiny videos: a large data set for nonparametric video retrieval and frame classification.

    PubMed

    Karpenko, Alexandre; Aarabi, Parham

    2011-03-01

    In this paper, we present a large database of over 50,000 user-labeled videos collected from YouTube. We develop a compact representation called "tiny videos" that achieves high video compression rates while retaining the overall visual appearance of the video as it varies over time. We show that frame sampling using affinity propagation (an exemplar-based clustering algorithm) achieves the best trade-off between compression and video recall. We use this large collection of user-labeled videos in conjunction with simple data mining techniques to perform related video retrieval, as well as classification of images and video frames. The classification results achieved by tiny videos are compared with the tiny images framework [24] for a variety of recognition tasks. The tiny images data set consists of 80 million images collected from the Internet. These are the largest labeled research data sets of videos and images available to date. We show that tiny videos are better suited for classifying scenery and sports activities, while tiny images perform better at recognizing objects. Furthermore, we demonstrate that combining the tiny images and tiny videos data sets improves classification precision in a wider range of categories.
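
    The paper's frame sampling uses affinity propagation; as a much simpler stand-in (this is not affinity propagation itself), the sketch below greedily keeps one exemplar per group of near-duplicate frames. The 2-D feature vectors and tolerance are hypothetical.

```python
# Greedy exemplar-based frame sampling: keep a frame only if it is not
# within `tol` of an already-kept exemplar. A crude proxy for the
# exemplar-clustering role affinity propagation plays in the paper.

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def greedy_exemplars(frames, tol):
    exemplars = []
    for i, f in enumerate(frames):
        if all(dist(f, frames[j]) > tol for j in exemplars):
            exemplars.append(i)
    return exemplars

# Two visually distinct shots plus a near-duplicate of the first.
frames = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
picked = greedy_exemplars(frames, tol=1.0)   # → [0, 2]
```

    The compression/recall trade-off the paper measures corresponds to varying how aggressively near-duplicates are merged.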

  20. Handheld probe for portable high frame rate photoacoustic/ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Daoudi, K.; van den Berg, P. J.; Rabot, O.; Kohl, A.; Tisserand, S.; Brands, P.; Steenbergen, W.

    2013-03-01

    Photoacoustics is a hybrid imaging modality based on the detection of acoustic waves generated by the absorption of pulsed light by tissue chromophores. In current research, this technique uses large and costly photoacoustic systems with low frame rates. To open the door for widespread clinical use, a compact, cost-effective and fast system is required. In this paper we report on the development of a small, compact handheld pulsed laser probe to be connected to a portable ultrasound system for real-time photoacoustic and ultrasound imaging. The probe integrates diode lasers driven by an electrical driver developed for very short, high-power pulses. It uses specifically developed, highly efficient diode stacks with a high repetition rate of up to 10 kHz, emitting at 800 nm wavelength. The emitted beam is collimated and shaped with a compact micro-optics beam shaping system delivering a homogenized rectangular laser beam intensity distribution. The laser block is integrated with an ultrasound transducer in an ergonomically designed handset probe. This handset is a building block enabling a low-cost, high frame rate photoacoustic and ultrasound imaging system. The probe was used with a modified ultrasound scanner and was tested by imaging a tissue-mimicking phantom.

  1. Benefit from NASA

    NASA Image and Video Library

    1999-06-01

    Two scientists at NASA's Marshall Space Flight Center, atmospheric scientist Paul Meyer and solar physicist Dr. David Hathaway, developed promising new software, called Video Image Stabilization and Registration (VISAR). VISAR may help law enforcement agencies catch criminals by improving the quality of video recorded at crime scenes. In this photograph, the single frame at left, taken at night, was brightened in order to enhance details and reduce noise or snow. To further overcome the video defects in one frame, law enforcement officials can use VISAR software to add information from multiple frames to reveal a person. Images from less than a second of videotape were added together to create the clarified image at right. VISAR stabilizes camera motion in the horizontal and vertical directions, as well as rotation and zoom effects, producing clearer images of moving objects; it also smoothes jagged edges, enhances still images, and reduces video noise or snow. VISAR could also have applications in medical and meteorological imaging. It could steady images of ultrasounds, which are infamous for their grainy, blurred quality. The software can also be used for defense applications by improving reconnaissance video imagery made by military vehicles, aircraft, and ships traveling in harsh, rugged environments.
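
    The frame-stacking idea ("images ... were added together") can be illustrated with synthetic numbers: once frames of a static scene are registered, averaging cancels zero-mean sensor noise. This toy example is not the VISAR algorithm itself, which also estimates and removes camera motion before stacking.

```python
# One pixel of a static scene across 8 registered frames, with synthetic
# zero-mean noise added to the true intensity.
noise = [5, -4, 3, -6, 2, -1, 4, -3]      # hypothetical per-frame noise
truth = 100
frames = [truth + n for n in noise]

stacked = sum(frames) / len(frames)       # averaged ("stacked") value
# Any single frame is off by up to 6 counts; the stack recovers 100.0 here
# because the synthetic noise sums to zero.
```

    In general, averaging N frames of independent noise reduces its standard deviation by roughly a factor of sqrt(N), which is why sub-second clips already clarify the image noticeably.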

  2. In Vivo Mammalian Brain Imaging Using One- and Two-Photon Fluorescence Microendoscopy

    PubMed Central

    Jung, Juergen C.; Mehta, Amit D.; Aksay, Emre; Stepnoski, Raymond; Schnitzer, Mark J.

    2010-01-01

    One of the major limitations in the current set of techniques available to neuroscientists is a dearth of methods for imaging individual cells deep within the brains of live animals. To overcome this limitation, we developed two forms of minimally invasive fluorescence microendoscopy and tested their abilities to image cells in vivo. Both one- and two-photon fluorescence microendoscopy are based on compound gradient refractive index (GRIN) lenses that are 350–1,000 μm in diameter and provide micron-scale resolution. One-photon microendoscopy allows full-frame images to be viewed by eye or with a camera, and is well suited to fast frame-rate imaging. Two-photon microendoscopy is a laser-scanning modality that provides optical sectioning deep within tissue. Using in vivo microendoscopy we acquired video-rate movies of thalamic and CA1 hippocampal red blood cell dynamics and still-frame images of CA1 neurons and dendrites in anesthetized rats and mice. Microendoscopy will help meet the growing demand for in vivo cellular imaging created by the rapid emergence of new synthetic and genetically encoded fluorophores that can be used to label specific brain areas or cell classes. PMID:15128753

  3. Remote driving with reduced bandwidth communication

    NASA Technical Reports Server (NTRS)

    Depiero, Frederick W.; Noell, Timothy E.; Gee, Timothy F.

    1993-01-01

    Oak Ridge National Laboratory has developed a real-time video transmission system for low bandwidth remote operations. The system supports both continuous transmission of video for remote driving and progressive transmission of still images. Inherent in the system design is a spatiotemporal limitation to the effects of channel errors. The average data rate of the system is 64,000 bits/s, a compression of approximately 1000:1 for the black-and-white National Television System Committee (NTSC) video. The image quality of the transmissions is maintained at a level that supports teleoperation of a high mobility multipurpose wheeled vehicle at speeds up to 15 mph on a moguled dirt track. Video compression is achieved by using Laplacian image pyramids and a combination of classical techniques. Certain subbands of the image pyramid are transmitted by using interframe differencing with a periodic refresh to aid in bandwidth reduction. Images are also foveated to concentrate image detail in a steerable region. The system supports dynamic video quality adjustments between frame rate, image detail, and foveation rate. A typical configuration for the system used during driving has a frame rate of 4 Hz, a compression per frame of 125:1, and a resulting latency of less than 1 s.
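
    The quoted numbers are mutually consistent, as a quick check shows. The 500 x 500 8-bit frame size in the last line is our assumption for illustration; the abstract does not state the resolution.

```python
# Cross-checking the stated link budget.
bits_per_sec = 64_000        # average channel rate from the abstract
frame_rate = 4               # Hz, typical driving configuration
bits_per_frame = bits_per_sec // frame_rate   # 16,000 bits per frame
raw_bits = bits_per_frame * 125               # 125:1 per-frame compression
# raw_bits == 2,000,000 bits, i.e. enough for a 500 x 500 pixel 8-bit frame.
```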

  4. Building a 2.5D Digital Elevation Model from 2D Imagery

    NASA Technical Reports Server (NTRS)

    Padgett, Curtis W.; Ansar, Adnan I.; Brennan, Shane; Cheng, Yang; Clouse, Daniel S.; Almeida, Eduardo

    2013-01-01

    When projecting imagery into a georeferenced coordinate frame, one needs to have some model of the geographical region that is being projected to. This model can sometimes be a simple geometrical curve, such as an ellipse or even a plane. However, to obtain accurate projections, one needs to have a more sophisticated model that encodes the undulations in the terrain including things like mountains, valleys, and even manmade structures. The product that is often used for this purpose is a Digital Elevation Model (DEM). The technology presented here generates a high-quality DEM from a collection of 2D images taken from multiple viewpoints, plus pose data for each of the images and a camera model for the sensor. The technology assumes that the images are all of the same region of the environment. The pose data for each image is used as an initial estimate of the geometric relationship between the images, but the pose data is often noisy and not of sufficient quality to build a high-quality DEM. Therefore, the source imagery is passed through a feature-tracking algorithm and multi-plane-homography algorithm, which refine the geometric transforms between images. The images and their refined poses are then passed to a stereo algorithm, which generates dense 3D data for each image in the sequence. The 3D data from each image is then placed into a consistent coordinate frame and passed to a routine that divides the coordinate frame into a number of cells. The 3D points that fall into each cell are collected, and basic statistics are applied to determine the elevation of that cell. The result of this step is a DEM that is in an arbitrary coordinate frame. This DEM is then filtered and smoothed in order to remove small artifacts. 
The final step in the algorithm is to take the initial DEM and rotate and translate it to be in the world coordinate frame [such as UTM (Universal Transverse Mercator), MGRS (Military Grid Reference System), or geodetic] such that it can be saved in a standard DEM format and used for projection.
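
    The gridding step ("divides the coordinate frame into a number of cells ... basic statistics are applied") can be sketched as follows. The points, cell size, and choice of median as the statistic are illustrative assumptions, not the deliverable's exact processing.

```python
# Bin 3-D points into grid cells and take a per-cell elevation statistic.
from statistics import median

def build_dem(points, cell_size):
    """points: list of (x, y, z). Returns {(ix, iy): elevation}."""
    cells = {}
    for x, y, z in points:
        key = (int(x // cell_size), int(y // cell_size))
        cells.setdefault(key, []).append(z)
    # Median is robust to stray stereo outliers within a cell.
    return {key: median(zs) for key, zs in cells.items()}

points = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (0.5, 0.9, 11.0),
          (1.6, 0.4, 20.0)]
dem = build_dem(points, cell_size=1.0)   # {(0, 0): 11.0, (1, 0): 20.0}
```

    The resulting grid is still in an arbitrary frame; the rotation/translation into UTM, MGRS, or geodetic coordinates happens afterwards, as the text describes.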

  5. Immobilization precision of a modified GTC frame

    PubMed Central

    Daartz, Juliane; Dankers, Frank; Bussière, Marc

    2012-01-01

    The purpose of this study was to evaluate and quantify the interfraction reproducibility and intrafraction immobilization precision of a modified GTC frame. The error of the patient alignment and imaging systems was measured using a cranial skull phantom with simulated, predetermined shifts. The kV setup images were acquired with a room-mounted set of kV sources and panels. Calculated translations and rotations provided by the computer alignment software relying upon three implanted fiducials were compared to the known shifts, and the accuracy of the imaging and positioning systems was calculated. Orthogonal kV setup images for 45 proton SRT patients and 1002 fractions (average 22.3 fractions/patient) were analyzed for interfraction and intrafraction immobilization precision using a modified GTC frame. The modified frame employs a radiotransparent carbon cup and molded pillow to allow for more treatment angles from posterior directions for cranial lesions. Patients and the phantom were aligned with three 1.5 mm stainless steel fiducials implanted into the skull. The accuracy and variance of the patient positioning and imaging systems were measured to be 0.10 ± 0.06 mm, with the maximum uncertainty of rotation being ±0.07°. 957 pairs of interfraction image sets and 974 intrafraction image sets were analyzed. 3D translations and rotations were recorded. The 3D vector interfraction setup reproducibility was 0.13 ± 1.8 mm for translations, with the largest uncertainty for rotations being ±1.07°. The intrafraction immobilization efficacy was 0.19 ± 0.66 mm for translations, with the largest uncertainty for rotations being ±0.50°. The modified GTC frame provides reproducible setup and effective intrafraction immobilization, while allowing for the complete range of entrance angles from the posterior direction. PACS numbers: 87.53.Ly, 87.55.Qr PMID:22584167

  6. A method for detecting small targets based on cumulative weighted value of target properties

    NASA Astrophysics Data System (ADS)

    Jin, Xing; Sun, Gang; Wang, Wei-hua; Liu, Fang; Chen, Zeng-ping

    2015-03-01

    Laser detection based on the "cat's eye effect" has become a hot research topic because it is an active technique, in contrast to passive sound and infrared detection. Target detection is one of the core technologies in such a system. This paper puts forward a method for detecting small targets based on a cumulative weighted value of target properties, using given data. First, we compute frame differences between images and then apply image processing based on morphological principles. Second, we segment the images and screen candidate targets, then find locations of interest. Finally, by comparing across a number of frames, we locate the target. We tested the method on 394 real frames; the experimental results show that the method can detect small targets efficiently.
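
    A bare-bones sketch of the pipeline (frame differencing, thresholding, and cumulative voting across frames) is shown below. The 1-D "frames", thresholds, and unit voting weights are hypothetical, and the morphological cleanup and segmentation steps are omitted.

```python
# Difference consecutive frames, threshold, and accumulate per-location
# votes; a location detected in enough frames is declared a target.

def detect(frames, diff_thresh, count_thresh):
    votes = {}
    for prev, cur in zip(frames, frames[1:]):
        for i, (a, b) in enumerate(zip(prev, cur)):
            if abs(b - a) > diff_thresh:       # frame-difference threshold
                votes[i] = votes.get(i, 0) + 1  # cumulative (unit) weight
    return [i for i, v in votes.items() if v >= count_thresh]

# 1-D "frames": pixel 2 flickers persistently (target), pixel 5 once (noise).
frames = [[0, 0, 0, 0, 0, 0],
          [0, 0, 9, 0, 0, 0],
          [0, 0, 0, 0, 0, 7],
          [0, 0, 8, 0, 0, 7]]
targets = detect(frames, diff_thresh=5, count_thresh=2)   # → [2]
```

    Accumulating evidence across frames is what suppresses single-frame clutter that a per-frame threshold alone would pass.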

  7. A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications

    NASA Astrophysics Data System (ADS)

    Wurden, G. A.; Coffey, S. K.

    2012-10-01

    For high energy density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm2 of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (~200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.

  8. Improving lateral resolution and image quality of optical coherence tomography by the multi-frame superresolution technique for 3D tissue imaging

    PubMed Central

    Shen, Kai; Lu, Hui; Baig, Sarfaraz; Wang, Michael R.

    2017-01-01

    The multi-frame superresolution technique is introduced to significantly improve the lateral resolution and image quality of spectral domain optical coherence tomography (SD-OCT). Using several sets of low resolution C-scan 3D images with lateral sub-spot-spacing shifts between sets, multi-frame superresolution processing of these sets at each depth layer reconstructs a higher resolution and quality lateral image. Layer-by-layer processing yields an overall high lateral resolution and quality 3D image. In theory, the superresolution processing, including deconvolution, can address the diffraction limit, lateral scan density, and background noise problems together. In experiment, an improvement in lateral resolution by ~3 times, reaching 7.81 µm and 2.19 µm using sample arm optics of 0.015 and 0.05 numerical aperture respectively, as well as a doubling of image quality, has been confirmed by imaging a known resolution test target. Improved lateral resolution on in vitro skin C-scan images has been demonstrated. For in vivo 3D SD-OCT imaging of human skin, fingerprint, and retina layers, we used a multi-modal volume registration method to effectively estimate the lateral image shifts among different C-scans due to random minor unintended live body motion. Further processing of these images generated high lateral resolution 3D images as well as high quality B-scan images of these in vivo tissues. PMID:29188089
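
    The core shift-and-add idea can be shown in 1-D: low-resolution acquisitions taken with sub-spot-spacing shifts interleave into a signal at higher sampling density. The deconvolution the paper also applies is omitted here, and the signals are synthetic.

```python
# 1-D shift-and-add: K low-res signals acquired with shifts of 0, 1/K, ...
# of a sample interleave into one signal at K times the sampling density.

def shift_and_add(low_res_sets):
    out = []
    for vals in zip(*low_res_sets):   # one sample from each shifted set
        out.extend(vals)
    return out

high = list(range(8))              # "true" high-res signal 0..7
set_a = high[0::2]                 # acquisition 1: samples 0, 2, 4, 6
set_b = high[1::2]                 # acquisition 2, half-sample shift: 1, 3, 5, 7
recon = shift_and_add([set_a, set_b])   # → [0, 1, 2, 3, 4, 5, 6, 7]
```

    In the paper the shifts between C-scans are not known a priori for in vivo data, which is why the multi-modal volume registration step is needed to estimate them first.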

  9. Ultrafast Synthetic Transmit Aperture Imaging Using Hadamard-Encoded Virtual Sources With Overlapping Sub-Apertures.

    PubMed

    Ping Gong; Pengfei Song; Shigao Chen

    2017-06-01

    The development of ultrafast ultrasound imaging offers great opportunities to improve imaging technologies, such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, there are tradeoffs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Various approaches have been proposed to mitigate this tradeoff, such as multiplane wave imaging or attempts to implement synthetic transmit aperture imaging. In this paper, we propose an ultrafast synthetic transmit aperture (USTA) imaging technique using Hadamard-encoded virtual sources with overlapping sub-apertures to enhance both image SNR and resolution without sacrificing frame rate. This method includes three steps: 1) create virtual sources using sub-apertures; 2) encode virtual sources using a Hadamard matrix; and 3) add short time intervals (a few microseconds) between transmissions of different virtual sources to allow overlapping sub-apertures. The USTA technique was tested experimentally with a point target, a B-mode phantom, and in vivo human kidney micro-vessel imaging. Compared with standard coherent diverging wave compounding at the same frame rate, improvements in image SNR, lateral resolution (+33%, with B-mode phantom imaging), and contrast ratio (+3.8 dB, with in vivo human kidney micro-vessel imaging) have been achieved. The f-number of the virtual sources, the number of virtual sources used, and the number of elements used in each sub-aperture can be flexibly adjusted to enhance resolution and SNR. This allows very flexible optimization of USTA for different applications.
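
    Step 2 (Hadamard encoding of the virtual sources) works because a Hadamard matrix H satisfies H·Hᵀ = K·I, so K individual source responses can be recovered from K encoded shots with a +K SNR gain from summation. A toy scalar version, with hypothetical source responses, shows the encode/decode round trip:

```python
# Hadamard encoding/decoding of 4 virtual-source responses.
H = [[1,  1,  1,  1],
     [1, -1,  1, -1],
     [1,  1, -1, -1],
     [1, -1, -1,  1]]               # 4x4 (Sylvester) Hadamard matrix

sources = [3.0, 5.0, 7.0, 11.0]     # individual source responses (unknown)

# Encoded shot i receives sum_j H[i][j] * sources[j] (all sources fire).
shots = [sum(H[i][j] * sources[j] for j in range(4)) for i in range(4)]

# Decode: this H is symmetric and H*H = 4*I, so sources = (H . shots) / 4.
decoded = [sum(H[j][i] * shots[j] for j in range(4)) / 4 for i in range(4)]
# decoded == [3.0, 5.0, 7.0, 11.0]
```

    In the actual technique the "responses" are per-channel RF traces rather than scalars, and the short inter-source delays of step 3 are compensated before decoding.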

  10. Ultrafast Harmonic Coherent Compound (UHCC) imaging for high frame rate echocardiography and Shear Wave Elastography

    PubMed Central

    Correia, Mafalda; Provost, Jean; Chatelin, Simon; Villemain, Olivier; Tanter, Mickael; Pernot, Mathieu

    2016-01-01

    Transthoracic shear wave elastography of the myocardium remains very challenging due to the poor quality of transthoracic ultrafast imaging and the presence of clutter noise, jitter, phase aberration, and ultrasound reverberation. Several approaches, such as diverging-wave coherent compounding or focused harmonic imaging, have been proposed to improve imaging quality. In this study, we introduce ultrafast harmonic coherent compounding (UHCC), in which pulse-inverted diverging waves are emitted and coherently compounded, and show that such an approach can be used to enhance both shear wave elastography (SWE) and high frame rate B-mode imaging. UHCC SWE was first tested in phantoms containing an aberrating layer and was compared against pulse-inversion harmonic imaging and against ultrafast coherent compounding (UCC) imaging at the fundamental frequency. In vivo feasibility of the technique was then evaluated in six healthy volunteers by measuring myocardial stiffness during diastole in transthoracic imaging. We also demonstrated that improvements in imaging quality could be achieved using UHCC B-mode imaging in healthy volunteers. The quality of transthoracic images of the heart was found to improve with the number of pulse-inverted diverging waves, with a reduction of the mean clutter level of up to 13.8 dB when compared against UCC at the fundamental frequency. These results demonstrate that UHCC B-mode imaging is promising for imaging deep tissues exposed to aberration sources at a high frame rate. PMID:26890730
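
    The pulse-inversion component of UHCC can be illustrated with a toy quadratic nonlinearity: echoes from a pulse and its inverted copy are summed, cancelling the linear (fundamental) part and leaving the second-harmonic part. The coefficients and waveform below are hypothetical, not tissue parameters.

```python
# Pulse-inversion principle: response(p) + response(-p) keeps only the
# even (second-harmonic-like) terms of the nonlinearity.

def tissue_response(p, a1=1.0, a2=0.3):
    """Toy nonlinear echo: linear term + quadratic term."""
    return a1 * p + a2 * p * p

pulse = [0.0, 1.0, 0.0, -1.0, 0.0]               # toy transmit waveform
echo_pos = [tissue_response(p) for p in pulse]    # normal pulse
echo_neg = [tissue_response(-p) for p in pulse]   # inverted pulse

harmonic = [a + b for a, b in zip(echo_pos, echo_neg)]
# Linear terms cancel: harmonic[i] == 2 * a2 * pulse[i] ** 2
```

    In UHCC this summation is done per diverging-wave angle before coherent compounding, which is why clutter suppression grows with the number of pulse-inverted transmissions.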

  11. Application of X-Y Separable 2-D Array Beamforming for Increased Frame Rate and Energy Efficiency in Handheld Devices

    PubMed Central

    Owen, Kevin; Fuller, Michael I.; Hossack, John A.

    2015-01-01

    Two-dimensional arrays present significant beamforming computational challenges because of their high channel count and data rate. These challenges are even more stringent when incorporating a 2-D transducer array into a battery-powered hand-held device, placing significant demands on power efficiency. Previous work in sonar and ultrasound indicates that 2-D array beamforming can be decomposed into two separable line-array beamforming operations. This has been used in conjunction with frequency-domain phase-based focusing to achieve fast volume imaging. In this paper, we analyze the imaging and computational performance of approximate near-field separable beamforming for high-quality delay-and-sum (DAS) beamforming and for a low-cost, phase-rotation-only beamforming method known as direct-sampled in-phase quadrature (DSIQ) beamforming. We show that when high-quality time-delay interpolation is used, separable DAS focusing introduces no noticeable imaging degradation under practical conditions. Similar results for DSIQ focusing are observed. In addition, a slight modification to the DSIQ focusing method greatly increases imaging contrast, making it comparable to that of DAS, despite having a wider main lobe and higher side lobes resulting from the limitations of phase-only time-delay interpolation. Compared with non-separable 2-D imaging, up to a 20-fold increase in frame rate is possible with the separable method. When implemented on a smart-phone-oriented processor to focus data from a 60 × 60 channel array using a 40 × 40 aperture, the frame rate per C-mode volume slice increases from 16 to 255 Hz for DAS, and from 11 to 193 Hz for DSIQ. Energy usage per frame is similarly reduced from 75 to 4.8 mJ/frame for DAS, and from 107 to 6.3 mJ/frame for DSIQ. We also show that the separable method outperforms 2-D FFT-based focusing by a factor of 1.64 at these data sizes. 
These data indicate that, with optimal design choices, separable 2-D beamforming can significantly improve frame rate and battery life for hand-held devices with 2-D arrays. PMID:22828829
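
    The separability argument can be checked numerically: when the 2-D weighting factors into row and column weights, collapsing columns first and then rows gives the same output as the full 2-D sum, while replacing O(N²) multiply-adds per output with two O(N) passes. The 3x3 weights and data below are hypothetical, and true delay-and-sum would apply per-channel delays as well.

```python
# Rank-1 (separable) 2-D weighted sum done directly vs. as two 1-D passes.
wx = [1.0, 2.0, 1.0]                 # row (x) weights
wy = [0.5, 1.0, 0.5]                 # column (y) weights
data = [[1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]]                   # per-channel samples for one output

# Full 2-D sum: N*N weighted terms.
full = sum(wy[r] * wx[c] * data[r][c]
           for r in range(3) for c in range(3))

# Separable: collapse columns first, then rows (2*N-style work).
col_sums = [sum(wy[r] * data[r][c] for r in range(3)) for c in range(3)]
sep = sum(wx[c] * col_sums[c] for c in range(3))
# full == sep == 40.0
```

    The large frame-rate and energy gains in the abstract come from this operation-count reduction applied across every output voxel, with the column-pass results reused by many outputs.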

  12. SIELETERS: A Static Fourier Transform Infrared Imaging Spectrometer for Airborne Hyperspectral Measurements

    DTIC Science & Technology

    2009-10-01

    The detectors are mounted in a cryostat and cooled below 77 K by a Stirling cryocooler (Figure 5: detector cryostat and cryocooler; figure labels: cold shield, detector plane, cryocooler, cryocooler compressor, fixed frame, roll frame, pitch frame, yaw frame). The read-out frequency of the detectors is adapted to the ground speed of the plane.

  13. Lack of visible change around active hotspots on Io

    NASA Technical Reports Server (NTRS)

    1996-01-01

    Detail of changes around two hotspots on Jupiter's moon Io as seen by Voyager 1 in April 1979 (left) and NASA's Galileo spacecraft on September 7th, 1996 (middle and right). The right frame was created with images from the Galileo Solid State Imaging system's near-infrared (756 nm), green, and violet filters. For better comparison, the middle frame mimics Voyager colors. The calderas at the top and at the lower right of the images correspond to the locations of hotspots detected by the Near Infrared Mapping Spectrometer aboard the Galileo spacecraft during its second orbit. There are no significant morphologic changes around these hot calderas; however, the diffuse red deposits, which are simply dark in the Voyager colors, appear to be associated with recent and/or ongoing volcanic activity. The three calderas range in size from approximately 100 kilometers to approximately 150 kilometers in diameter. The caldera in the lower right of each frame is named Malik. North is to the top of all frames.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC. JPL is an operating division of California Institute of Technology (Caltech).

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo

  14. PCIE interface design for high-speed image storage system based on SSD

    NASA Astrophysics Data System (ADS)

    Wang, Shiming

    2015-02-01

    This paper proposes and implements a standard interface for a miniaturized high-speed image storage system, which combines a PowerPC with an FPGA and utilizes the PCIE bus as the high-speed switching channel. Attached to the PowerPC, mSATA-interface SSDs (solid state drives) realize RAID 3 array storage. At the same time, a high-speed real-time image compression patent IP core with leading compression ratio and image quality can be embedded in the FPGA, so that the system can record a higher image data rate or achieve a longer recording time. The mSATA-interface SSDs use a notebook-memory-card latch design, making it possible to replace them in 5 seconds with a single hand, thus increasing the total length of repeated recordings. MSI (Message Signaled Interrupts) guarantee the stability and reliability of continuous DMA transmission. Furthermore, remote display, control, and upload-to-backup functions can be realized through only a gigabit network. At an optional 25 frames/s or 30 frames/s, upload speeds can reach more than 84 MB/s. Compared with existing FLASH array high-speed memory systems, it has a higher degree of modularity, better stability and higher efficiency in development, maintenance and upgrading. Its data access rate is up to 300 MB/s, realizing miniaturization, standardization and modularization of the high-speed image storage system, making it suitable for image acquisition, storage and real-time transmission to a server on mobile equipment.

  15. Software-based approach toward vendor independent real-time photoacoustic imaging using ultrasound beamformed data

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Huang, Howard; Lei, Chen; Kim, Younsu; Boctor, Emad M.

    2017-03-01

    Photoacoustic (PA) imaging has shown its potential for many clinical applications, but current research and usage of PA imaging are constrained by the additional hardware costs of collecting channel data, as PA signals are incorrectly processed in existing clinical ultrasound systems. This problem arises from the fact that ultrasound systems beamform the PA signals as echoes from the ultrasound transducer instead of directly from the illuminated sources. Consequently, conventional implementations of PA imaging rely on parallel channel acquisition from research platforms, which are not only slow and expensive, but are also mostly not approved by the FDA for clinical use. In previous studies, we proposed the synthetic-aperture based photoacoustic re-beamformer (SPARE), which uses ultrasound beamformed radio frequency (RF) data, readily available in clinical ultrasound scanners, as its input. The goal of this work is to implement the SPARE beamformer in a clinical ultrasound system, and to experimentally demonstrate its real-time visualization. Assuming a high pulse repetition frequency (PRF) laser is used, a PZT-based pseudo PA source transmission was synchronized with the ultrasound line trigger. As a result, the frame rate increases when the image field-of-view (FOV) is limited, with 50 to 20 frames per second achieved for FOVs from 35 mm to 70 mm depth, respectively. Although in reality the maximum PRF of laser firing limits the PA image frame rate, this result indicates that the developed software is capable of displaying PA images at the maximum possible frame rate for a given laser system without acquiring channel data.

  16. Ultrahigh-resolution high-speed retinal imaging using spectral-domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Cense, Barry; Nassif, Nader A.; Chen, Teresa C.; Pierce, Mark C.; Yun, Seok-Hyun; Hyle Park, B.; Bouma, Brett E.; Tearney, Guillermo J.; de Boer, Johannes F.

    2004-05-01

    We present the first ultrahigh-resolution optical coherence tomography (OCT) structural intensity images and movies of the human retina in vivo at 29.3 frames per second with 500 A-lines per frame. Data were acquired at a continuous rate of 29,300 spectra per second with a 98% duty cycle. Two consecutive spectra were coherently summed to improve sensitivity, resulting in an effective rate of 14,600 A-lines per second at an effective integration time of 68 μs. The turn-key source was a combination of two superluminescent diodes with a combined spectral width of more than 150 nm providing 4.5 mW of power. The spectrometer of the spectral-domain OCT (SD-OCT) setup was centered around 885 nm with a bandwidth of 145 nm. The effective bandwidth in the eye was limited to approximately 100 nm due to increased absorption of wavelengths above 920 nm in the vitreous. Comparing the performance of our ultrahigh-resolution SD-OCT system with a conventional high-resolution time-domain OCT system, the A-line rate of the spectral-domain OCT system was 59 times higher at a 5.4 dB lower sensitivity. With use of a software-based dispersion compensation scheme, coherence-length broadening due to dispersion mismatch between the sample and reference arms was minimized. The coherence length measured from a mirror in air was equal to 4.0 μm (n = 1). The coherence length determined from the specular reflection of the foveal umbo in vivo in a healthy human eye was equal to 3.5 μm (n = 1.38). With this new system, two layers at the location of the retinal pigmented epithelium appear to be present, as well as small features in the inner and outer plexiform layers, which are believed to be small blood vessels.
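
    The sensitivity gain from coherently summing two consecutive spectra can be illustrated numerically. This is a generic averaging sketch, not the authors' processing code, and the fringe signal below is a stand-in:

```python
import numpy as np

# Coherently averaging two consecutive spectra halves the noise variance,
# trading A-line rate for sensitivity (synthetic fringe, assumed noise level).
rng = np.random.default_rng(0)
true_spectrum = np.cos(np.linspace(0, 20 * np.pi, 1024))  # stand-in fringe
noise_sigma = 0.5
s1 = true_spectrum + rng.normal(0, noise_sigma, 1024)
s2 = true_spectrum + rng.normal(0, noise_sigma, 1024)
avg = 0.5 * (s1 + s2)                 # coherent sum of two consecutive spectra
var_single = np.var(s1 - true_spectrum)
var_summed = np.var(avg - true_spectrum)
print(var_single / var_summed)        # close to 2: noise variance halved
```

    Halving the effective A-line rate buys roughly a twofold reduction in noise variance, i.e. about 3 dB of sensitivity.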

  17. SU-G-BRA-02: Development of a Learning Based Block Matching Algorithm for Ultrasound Tracking in Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, A; Bednarz, B

    Purpose: To develop an ultrasound learning-based tracking algorithm with the potential to provide real-time motion traces of anatomy-based fiducials that may aid in the effective delivery of external beam radiation. Methods: The algorithm was developed in Matlab R2015a and consists of two main stages: reference frame selection and localized block matching. Immediately following frame acquisition, a normalized cross-correlation (NCC) similarity metric is used to select, from a series of training images acquired during a pretreatment scan, the reference frame most similar to the current frame. Segmented features in the reference frame provide the basis for the localized block matching that determines the feature locations in the current frame. The boundary points of the reference-frame segmentation are used as the initial locations for the block matching, and NCC is used to find the most similar block in the current frame. The best-matched block locations in the current frame comprise the updated feature boundary. The algorithm was tested using five features from two sets of ultrasound patient data obtained from MICCAI 2014 CLUST. Because no training set was associated with the image sequences, the first 200 frames of each image set were treated as a training set for preliminary testing, and tracking was performed over the remaining frames. Results: Tracking of the five vessel features resulted in an average tracking error of 1.21 mm relative to predefined annotations. The average analysis rate was 15.7 FPS, with analysis for one of the two patients reaching real-time speeds. Computations were performed on an i5-3230M at 2.60 GHz. Conclusion: Preliminary tests show tracking errors comparable with similar algorithms at close to real-time speeds. Extension of the work onto a GPU platform has the potential to achieve real-time performance, making tracking for therapy applications a feasible option. This work is partially funded by NIH grant R01CA190298.
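
    The two stages described above (NCC reference-frame selection, then localized block matching) can be sketched as follows. This is a minimal re-implementation of the general technique, not the authors' Matlab code:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_reference(current, training_frames):
    """Stage 1: pick the training frame most similar to the current frame."""
    scores = [ncc(current, f) for f in training_frames]
    return int(np.argmax(scores))

def match_block(current, template, center, search=5):
    """Stage 2: localized block matching -- scan a small window around
    `center` for the position whose patch best matches `template`."""
    h, w = template.shape
    cy, cx = center
    best, best_pos = -2.0, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0:
                continue
            patch = current[y:y + h, x:x + w]
            if patch.shape != template.shape:
                continue
            s = ncc(patch, template)
            if s > best:
                best, best_pos = s, (y, x)
    return best_pos
```

    In the full algorithm this matching step is run at every boundary point of the reference-frame segmentation, and the matched positions form the updated feature boundary.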

  18. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-01-01

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel and effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP detects true sparsity better than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632

  19. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel and effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP detects true sparsity better than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.
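
    For readers unfamiliar with matching pursuit, a plain OMP sketch conveys the core of the sparse-recovery step; the OOMP and Batch-OMP variants named above refine this greedy loop, and the random dictionary used in the test is a stand-in for a K-SVD-trained one:

```python
import numpy as np

def omp(D, y, k):
    """Plain orthogonal matching pursuit: greedily select k atoms
    (unit-norm columns of D) and least-squares fit y on the support."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit, update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

    In the SR pipeline, `y` would be a feature vector of an LR patch and `D` the learned dictionary; the recovered sparse code is then applied to the HR dictionary to synthesize the HR patch.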

  20. Optical sectioning microscopy using two-frame structured illumination and Hilbert-Huang data processing

    NASA Astrophysics Data System (ADS)

    Trusiak, M.; Patorski, K.; Tkaczyk, T.

    2014-12-01

    We propose a fast, simple, and experimentally robust method for reconstructing background-rejected, optically sectioned microscopic images using a two-shot structured-illumination approach. The demodulation technique requires two grid-illumination images mutually phase-shifted by π (half a grid period), but the precise phase displacement value is not critical. Subtracting the two frames yields an input pattern with increased grid modulation. The proposed demodulation procedure comprises: (1) two-dimensional data processing based on the enhanced fast empirical mode decomposition (EFEMD) method for object spatial-frequency selection (noise reduction and bias-term removal), and (2) calculating a high-contrast, optically sectioned image using the two-dimensional spiral Hilbert transform (HS). The effectiveness of the proposed algorithm is compared with results obtained for the same input data using conventional structured-illumination microscopy (SIM) and HiLo microscopy methods. The input data were collected from highly scattering tissue samples studied in reflectance mode. In comparison with the conventional three-frame SIM technique, we need one frame less, and no stringent requirement on the exact phase shift between recorded frames is imposed. The HiLo algorithm outcome depends strongly on parameters chosen manually by the operator (cut-off frequencies for low-pass and high-pass filtering and the η value for optically sectioned image reconstruction), whereas the proposed method is parameter-free. Moreover, the very short processing time required to demodulate the input pattern makes the proposed method well suited for real-time in-vivo studies. The current implementation completes full processing in 0.25 s on a medium-class PC (Intel i7 2.1 GHz processor and 8 GB RAM). A simple modification that extracts only the first two BIMFs with a fixed filter window size reduces the computing time to 0.11 s (8 frames/s).
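
    The two-frame principle (π-shifted grids subtract away the out-of-focus bias, and a Hilbert transform recovers the modulation envelope) can be demonstrated in one dimension. This sketch uses a plain FFT-based Hilbert transform on synthetic signals, not the paper's EFEMD/spiral-Hilbert pipeline:

```python
import numpy as np

# 1-D analogue with assumed synthetic signals (not the paper's data):
x = np.linspace(0.0, 1.0, 2048)
bias = 5.0 + 2.0 * x                            # out-of-focus background
envelope = np.exp(-((x - 0.5) ** 2) / 0.02)     # in-focus object strength
carrier = 2 * np.pi * 80 * x                    # illumination grid phase
i1 = bias + envelope * np.cos(carrier)          # first grid image
i2 = bias + envelope * np.cos(carrier + np.pi)  # pi-shifted grid image
diff = i1 - i2                                  # = 2*envelope*cos(carrier)

# FFT-based Hilbert transform -> analytic signal -> modulation envelope
spec = np.fft.fft(diff)
h = np.zeros(diff.size)
h[0] = 1.0
h[1:diff.size // 2] = 2.0
h[diff.size // 2] = 1.0
sectioned = 0.5 * np.abs(np.fft.ifft(spec * h))  # optically sectioned estimate
```

    The subtraction cancels the bias exactly while doubling the grid modulation, which is why the exact π shift value is the only calibration the method nominally needs (and, per the abstract, even that is not critical).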

  1. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    PubMed

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
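
    The objective described above (each frame maximally correlated with the sum of the shifted frames) can be illustrated with a toy integer-shift optimizer. The paper instead optimizes a smooth objective with analytically calculated derivatives, trajectory smoothing, and subpixel shifts, none of which is attempted here:

```python
import numpy as np

def align_frames(frames, search=3, iters=3):
    """Toy coordinate-ascent version: iteratively choose, for each frame,
    the integer shift maximizing its correlation with the sum of the
    other shifted frames (leave-one-out avoids the trivial self-match)."""
    shifts = [(0, 0)] * len(frames)
    for _ in range(iters):
        for i, f in enumerate(frames):
            # leave-one-out sum of the currently shifted frames
            ref = sum(np.roll(g, s, axis=(0, 1))
                      for j, (g, s) in enumerate(zip(frames, shifts)) if j != i)
            best, best_s = -np.inf, shifts[i]
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    score = float((np.roll(f, (dy, dx), axis=(0, 1)) * ref).sum())
                    if score > best:
                        best, best_s = score, (dy, dx)
            shifts[i] = best_s
    return shifts
```

    Exhaustively scoring every shift is what the analytic derivatives in the paper avoid; the coordinate-ascent structure, however, mirrors the idea of letting the sum of shifted frames serve as the alignment reference.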

  2. A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.

    PubMed

    Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B

    2013-09-01

    To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. Real-time operation was simulated offline using previously acquired images for 19 IMRT patient deliveries, with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system's sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
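
    The gamma criteria quoted above (3%, 3 mm and 4%, 4 mm) combine a dose tolerance with a distance-to-agreement tolerance. A one-dimensional sketch of the standard gamma index, not the authors' implementation:

```python
import numpy as np

def gamma_index_1d(ref, meas, x, dose_tol, dist_tol):
    """1-D gamma analysis: for each reference sample, the minimum over the
    measured profile of sqrt((dose diff/dose_tol)^2 + (distance/dist_tol)^2).
    gamma <= 1 counts as a pass; global dose normalization is assumed."""
    g = np.empty_like(ref)
    for i, (xi, di) in enumerate(zip(x, ref)):
        dd = (meas - di) / dose_tol
        dx = (x - xi) / dist_tol
        g[i] = np.sqrt(dd ** 2 + dx ** 2).min()
    return g
```

    A measured profile shifted by much less than the distance tolerance, or scaled by much less than the dose tolerance, still passes everywhere, which is the behavior the frame-by-frame percentages in the abstract summarize over whole images.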

  3. High speed line-scan confocal imaging of stimulus-evoked intrinsic optical signals in the retina

    PubMed Central

    Li, Yang-Guo; Liu, Lei; Amthor, Franklin; Yao, Xin-Cheng

    2010-01-01

    A rapid line-scan confocal imager was developed for functional imaging of the retina. In this imager, an acousto-optic deflector (AOD) was employed to produce mechanical vibration- and inertia-free light scanning, and a high-speed (68,000 Hz) linear CCD camera was used to achieve sub-cellular and sub-millisecond spatiotemporal resolution imaging. Two imaging modalities, i.e., frame-by-frame and line-by-line recording, were validated for reflected light detection of intrinsic optical signals (IOSs) in visible light stimulus activated frog retinas. Experimental results indicated that fast IOSs were tightly correlated with retinal stimuli, and could track visible light flicker stimulus frequency up to at least 2 Hz. PMID:20125743

  4. Profile fitting in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Manish, Raja

    Around 18,000 known objects currently populate near-Earth space. These comprise active space assets as well as space debris. The tracking and cataloging of such objects relies on observations, most of which are ground based. Because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, CCD detectors are typically used. The information to be extracted from the frames is each object's astrometric position. To determine it, the center of the object's image on the CCD frame has to be found. However, the observation frames that are read out of the detector are subject to noise from three different sources: celestial background sources, the object signal itself, and the sensor. The noise statistics are usually modeled as Gaussian or Poisson distributed, or their combined distribution. To achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between the observed data and a theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated, and the newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image-handling procedures to separate closely packed objects. Subsequent analysis of real celestial images corroborates the effectiveness of the approach.
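
    As a point of comparison for the centroiding task, a simple moment-based (center-of-mass) estimator is sketched below on a synthetic frame. This is a generic baseline, not the weighted least-squares method developed in the work, and all scene parameters are made up:

```python
import numpy as np

def centroid(frame, background=0.0):
    """Intensity-weighted first-moment (center-of-mass) position estimate."""
    img = np.clip(frame - background, 0.0, None)
    ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# Synthetic star: pixelized Gaussian PSF + flat background + sensor noise
rng = np.random.default_rng(1)
ys, xs = np.mgrid[0:21, 0:21]
true_y, true_x = 10.3, 9.7
psf = 1000.0 * np.exp(-((ys - true_y) ** 2 + (xs - true_x) ** 2) / (2 * 1.5 ** 2))
frame = psf + 20.0 + rng.normal(0.0, 2.0, psf.shape)
est_y, est_x = centroid(frame, background=20.0)
```

    On an isolated source this recovers the subpixel position to a few hundredths of a pixel; in crowded fields overlapping profiles bias such moments, which is where profile fitting becomes necessary.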

  5. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12-bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface along with the high frame rate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570

  6. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12-bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography acquisition (DSA), flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface along with the high frame rate acquisition and display for this unique high-resolution detector should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  7. The effect of regularization in motion compensated PET image reconstruction: a realistic numerical 4D simulation study.

    PubMed

    Tsoumpas, C; Polycarpou, I; Thielemans, K; Buerger, C; King, A P; Schaeffter, T; Marsden, P K

    2013-03-21

    Following continuous improvement in PET spatial resolution, respiratory motion correction has become an important task. Two of the most common approaches that utilize all detected PET events to motion-correct PET data are the reconstruct-transform-average method (RTA) and motion-compensated image reconstruction (MCIR). In RTA, separate images are reconstructed for each respiratory frame, subsequently transformed to one reference frame and finally averaged to produce a motion-corrected image. In MCIR, the projection data from all frames are reconstructed by including motion information in the system matrix so that a motion-corrected image is reconstructed directly. Previous theoretical analyses have explained why MCIR is expected to outperform RTA. It has been suggested that MCIR creates less noise than RTA because the images for each separate respiratory frame will be severely affected by noise. However, recent investigations have shown that in the unregularized case RTA images can have fewer noise artefacts, while MCIR images are more quantitatively accurate but have the common salt-and-pepper noise. In this paper, we perform a realistic numerical 4D simulation study to compare the advantages gained by including regularization within reconstruction for RTA and MCIR, in particular using the median-root-prior incorporated in the ordered subsets maximum a posteriori one-step-late algorithm. In this investigation we have demonstrated that MCIR with proper regularization parameters reconstructs lesions with less bias and root mean square error and similar CNR and standard deviation to regularized RTA. This finding is reproducible for a variety of noise levels (25, 50, 100 million counts), lesion sizes (8 mm, 14 mm diameter) and iterations. Nevertheless, regularized RTA can also be a practical solution for motion compensation as a proper level of regularization reduces both bias and mean square error.
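
    In the simplest possible setting (known rigid motion, with per-frame images standing in for per-frame reconstructions), the RTA pipeline reduces to shift-and-average. The toy sketch below only illustrates why averaging aligned frames suppresses noise; real RTA reconstructs each respiratory gate from its projection data and applies non-rigid transforms:

```python
import numpy as np

def rta(frames, shifts_to_reference):
    """Reconstruct-transform-average, caricatured: transform every
    (already 'reconstructed') frame to the reference position, then average."""
    aligned = [np.roll(f, s, axis=(0, 1))
               for f, s in zip(frames, shifts_to_reference)]
    return np.mean(aligned, axis=0)
```

    The per-gate images entering this average are each reconstructed from only a fraction of the counts, which is exactly the noise penalty that motivates MCIR's alternative of folding the motion into the system matrix of a single reconstruction.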

  8. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements with a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
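
    The abstract describes the crosstalk correction only as "simple but effective". One common approach, assumed here purely for illustration, is linear unmixing with a calibrated 2x2 mixing matrix; neither the model nor the matrix values are taken from the paper:

```python
import numpy as np

# Assumed linear mixing model: [R_rec; B_rec] = M @ [R_true; B_true]
M = np.array([[0.95, 0.08],   # true red / true blue leaking into the red channel
              [0.06, 0.92]])  # ... and into the blue channel (made-up calibration)
M_inv = np.linalg.inv(M)

def unmix(red_rec, blue_rec):
    """Undo channel crosstalk pixel-wise by inverting the mixing matrix."""
    stacked = np.stack([red_rec.ravel(), blue_rec.ravel()])
    r, b = M_inv @ stacked
    return r.reshape(red_rec.shape), b.reshape(blue_rec.shape)
```

    After unmixing, each channel sub-image corresponds to one optical path and can be fed to a regular stereo-DIC pipeline as if it came from a separate camera.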

  9. Comparison of corneal endothelial image analysis by Konan SP8000 noncontact and Bio-Optics Bambi systems.

    PubMed

    Benetz, B A; Diaconu, E; Bowlin, S J; Oak, S S; Laing, R A; Lass, J H

    1999-01-01

    To compare corneal endothelial image analysis by the Konan SP8000 and Bio-Optics Bambi image-analysis systems. Corneal endothelial images from 98 individuals (191 eyes), ranging in age from 4 to 87 years, with a normal slit-lamp examination and no history of ocular trauma, intraocular surgery, or intraocular inflammation were obtained with the Konan SP8000 noncontact specular microscope. One observer analyzed these images using the Konan system and a second observer using the Bio-Optics Bambi system. Three methods of analysis were used: a fixed-frame method to obtain cell density (for both Konan and Bio-Optics Bambi) and a "dot" (Konan) or "corners" (Bio-Optics Bambi) method to determine morphometric parameters. The cell density determined by the Konan fixed-frame method was significantly higher (157 cells/mm2) than the Bio-Optics Bambi fixed-frame determination (p<0.0001). However, the difference in cell density, although still statistically significant, was smaller and reversed when comparing the Konan fixed-frame method with both the Konan dot and Bio-Optics Bambi corners methods (-74 cells/mm2, p<0.0001; -55 cells/mm2, p<0.0001, respectively). Small but statistically significant morphometric differences between Konan and Bio-Optics Bambi were seen: cell density, +19 cells/mm2 (p = 0.03); cell area, -3.0 microm2 (p = 0.008); and coefficient of variation, +1.0 (p = 0.003). There was no statistically significant difference between the two methods in the percentage of six-sided cells detected (p = 0.55). Cell densities measured by the Konan fixed-frame method were comparable with the Konan and Bio-Optics Bambi morphometric analyses, but not with the Bio-Optics Bambi fixed-frame method. The two morphometric analyses were comparable, with minimal or no differences for the parameters studied.
The Konan SP8000 endothelial image-analysis system may be useful for large-scale clinical trials determining cell loss; its noncontact system has many clinical benefits (including patient comfort, safety, ease of use, and short procedure time) and provides reliable cell-density calculations.

  10. Siamese convolutional networks for tracking the spine motion

    NASA Astrophysics Data System (ADS)

    Liu, Yuan; Sui, Xiubao; Sun, Yicheng; Liu, Chengwei; Hu, Yong

    2017-09-01

    Deep learning models have demonstrated great success in various computer vision tasks such as image classification and object tracking. However, tracking the lumbar spine by digitalized video fluoroscopic imaging (DVFI), which can quantitatively analyze the motion mode of the spine to diagnose lumbar instability, has not yet been well developed due to the lack of a steady and robust tracking method. In this paper, we propose a novel visual tracking algorithm for lumbar vertebra motion based on a Siamese convolutional neural network (CNN) model. We train a fully convolutional neural network offline to learn generic image features. The network is trained to learn a similarity function that compares the labeled target in the first frame with candidate patches in the current frame. The similarity function returns a high score if the two images depict the same object. Once learned, the similarity function is used to track a previously unseen object without any online adaptation. In the current frame, the tracker evaluates candidate rotated patches sampled around the previous frame's target position and outputs a rotated bounding box that locates the predicted target precisely. Results indicate that the proposed tracking method can detect the lumbar vertebra steadily and robustly. Especially for images with low contrast and cluttered backgrounds, the presented tracker still achieves good tracking performance. Further, the proposed algorithm operates at high speed for real-time tracking.
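
    The learned similarity function can be caricatured with a fixed embedding and cosine scoring; a real Siamese tracker replaces `embed` below with a trained CNN shared between the template and candidate branches:

```python
import numpy as np

def embed(patch):
    """Stand-in embedding: zero-mean, unit-norm pixel vector
    (a Siamese tracker would use a learned CNN feature here)."""
    v = patch.astype(float).ravel()
    v = v - v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def best_candidate(template, candidates):
    """Score each candidate patch against the first-frame template;
    the highest cosine similarity wins."""
    t = embed(template)
    scores = [float(t @ embed(c)) for c in candidates]
    return int(np.argmax(scores))
```

    In the tracker, the candidates are rotated patches sampled around the previous target position, so the winning candidate carries both the new location and the new orientation of the bounding box.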

  11. Methods for 2-D and 3-D Endobronchial Ultrasound Image Segmentation.

    PubMed

    Zang, Xiaonan; Bascom, Rebecca; Gilbert, Christopher; Toth, Jennifer; Higgins, William

    2016-07-01

    Endobronchial ultrasound (EBUS) is now commonly used for cancer-staging bronchoscopy. Unfortunately, EBUS is challenging to use and interpreting EBUS video sequences is difficult. Other ultrasound imaging domains, hampered by related difficulties, have benefited from computer-based image-segmentation methods. Yet, so far, no such methods have been proposed for EBUS. We propose image-segmentation methods for 2-D EBUS frames and 3-D EBUS sequences. Our 2-D method adapts the fast-marching level-set process, anisotropic diffusion, and region growing to the problem of segmenting 2-D EBUS frames. Our 3-D method builds upon the 2-D method while also incorporating the geodesic level-set process for segmenting EBUS sequences. Tests with lung-cancer patient data showed that the methods ran fully automatically for nearly 80% of test cases. For the remaining cases, the only user-interaction required was the selection of a seed point. When compared to ground-truth segmentations, the 2-D method achieved an overall Dice index = 90.0% ± 4.9%, while the 3-D method achieved an overall Dice index = 83.9% ± 6.0%. In addition, the computation time (2-D, 0.070 s/frame; 3-D, 0.088 s/frame) was two orders of magnitude faster than interactive contour definition. Finally, we demonstrate the potential of the methods for EBUS localization in a multimodal image-guided bronchoscopy system.

  12. Optimization of Transmit Parameters in Cardiac Strain Imaging With Full and Partial Aperture Coherent Compounding.

    PubMed

    Sayseng, Vincent; Grondin, Julien; Konofagou, Elisa E

    2018-05-01

    Coherent compounding methods using the full or partial transmit aperture have been investigated as a possible means of increasing strain measurement accuracy in cardiac strain imaging; however, the optimal transmit parameters in either compounding approach have yet to be determined. The relationship between strain estimation accuracy and the transmit parameters (specifically the subaperture, angular aperture, tilt angle, number of virtual sources, and frame rate) in partial aperture (subaperture compounding) and full aperture (steered compounding) fundamental-mode cardiac imaging was thus investigated and compared. A Field II simulation of a 3-D cylindrical annulus undergoing deformation and twist was developed to evaluate the accuracy of 2-D strain estimation in cross-sectional views. The tradeoff between frame rate and number of virtual sources was then investigated via transthoracic imaging in the parasternal short-axis view of five healthy human subjects, using the strain filter to quantify estimation precision. Finally, the optimized subaperture compounding sequence (25-element subaperture, 90° angular aperture, 10 virtual sources, 300-Hz frame rate) was compared to the optimized steered compounding sequence (60° angular aperture, 15° tilt, 10 virtual sources, 300-Hz frame rate) via transthoracic imaging of five healthy subjects. Both approaches were determined to estimate cumulative radial strain with statistically equivalent precision (subaperture compounding E(SNRe %) = 3.56, and steered compounding E(SNRe %) = 4.26).

  13. A video wireless capsule endoscopy system powered wirelessly: design, analysis and experiment

    NASA Astrophysics Data System (ADS)

    Pan, Guobing; Xin, Wenhui; Yan, Guozheng; Chen, Jiaoliao

    2011-06-01

    Wireless capsule endoscopy (WCE), as a relatively new technology, has brought about a revolution in the diagnosis of gastrointestinal (GI) tract diseases. However, existing WCE systems are not widely applied in the clinic because of their low frame rate and low image resolution. A video WCE system based on a wireless power supply is developed in this paper. This WCE system consists of a video capsule endoscope (CE), a wireless power transmission device, a receiving box and an image processing station. Powered wirelessly, the video CE can image the GI tract and transmit the images wirelessly at a frame rate of 30 frames per second (f/s). A mathematical prototype was built to analyze the power transmission system, and experiments were performed to test the energy-transfer capability. The results showed that the wireless electric power supply system was able to transfer more than 136 mW of power, which was enough for the operation of a video CE. In in vitro experiments, the video CE produced clear images of the small intestine of a pig at a resolution of 320 × 240, and transmitted NTSC-format video outside the body. Because of the wireless power supply, a video WCE system with high frame rate and high resolution becomes feasible, providing a novel solution for diagnosis of the GI tract in the clinic.

  14. In vivo flow cytometry for blood cell analysis using differential epi-detection of forward scattered light

    NASA Astrophysics Data System (ADS)

    Paudel, Hari P.; Jung, Yookyung; Raphael, Anthony; Alt, Clemens; Wu, Juwell; Runnels, Judith; Lin, Charles P.

    2018-02-01

    The present standard of blood cell analysis is an invasive procedure requiring the extraction of the patient's blood, followed by ex-vivo analysis using a flow cytometer or a hemocytometer. We are developing a noninvasive optical technique that alleviates the need for blood extraction. For in-vivo blood analysis, a high-speed, high-resolution and high-contrast label-free imaging technique is needed. In this proceeding report, we present a label-free method based on differential epi-detection of forward scattered light, inspired by Jerome Mertz's oblique back-illumination microscopy (OBM) (Ford et al., Nat. Meth. 9(12), 2012). The differential epi-detection of forward light gives a phase-contrast image at diffraction-limited resolution. Unlike reflection confocal microscopy (RCM), which detects only sharp refractive-index variation and suffers from speckle noise, this technique is suitable for detecting subtle variations of refractive index in biological tissue, and it provides the shape and the size of cells. A custom-built high-speed electronic detection circuit board produces a real-time differential signal that yields image contrast based on the phase gradient in the sample. We recorded blood flow in vivo at 17.2k lines per second in line-scan mode, or 30 frames per second (full frame) and 120 frames per second (quarter frame) in frame-scan mode. The image contrast and speed of line-scan data recording show the potential of the system for noninvasive blood cell analysis.

  15. Streak detection and analysis pipeline for space-debris optical images

    NASA Astrophysics Data System (ADS)

    Virtanen, Jenni; Poikonen, Jonne; Säntti, Tero; Komulainen, Tuomo; Torppa, Johanna; Granvik, Mikael; Muinonen, Karri; Pentikäinen, Hanna; Martikainen, Julia; Näränen, Jyri; Lehti, Jussi; Flohrer, Tim

    2016-04-01

    We describe a novel data-processing and analysis pipeline for optical observations of moving objects, either of natural (asteroids, meteors) or artificial origin (satellites, space debris). The monitoring of the space object populations requires reliable acquisition of observational data, to support the development and validation of population models and to build and maintain catalogues of orbital elements. The orbital catalogues are, in turn, needed for the assessment of close approaches (for asteroids, with the Earth; for satellites, with each other) and for the support of contingency situations or launches. For both types of populations, there is also increasing interest to detect fainter objects corresponding to the small end of the size distribution. The ESA-funded StreakDet (streak detection and astrometric reduction) activity has aimed at formulating and discussing suitable approaches for the detection and astrometric reduction of object trails, or streaks, in optical observations. Our two main focuses are objects in lower altitudes and space-based observations (i.e., high angular velocities), resulting in long (potentially curved) and faint streaks in the optical images. In particular, we concentrate on single-image (as compared to consecutive frames of the same field) and low-SNR detection of objects. Particular attention has been paid to the process of extraction of all necessary information from one image (segmentation), and subsequently, to efficient reduction of the extracted data (classification). We have developed an automated streak detection and processing pipeline and demonstrated its performance with an extensive database of semisynthetic images simulating streak observations both from ground-based and space-based observing platforms. The average processing time per image is about 13 s for a typical 2k-by-2k image. 
    For long streaks (length >100 pixels), the primary targets of the pipeline, the detection sensitivity (true positives) is about 90% for both scenarios for bright streaks (SNR > 1), while in the low-SNR regime, the sensitivity is still 50% at SNR = 0.5.
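    A bare-bones version of the segmentation-then-classification flow (threshold the frame, then estimate streak extent along its principal axis) can be sketched on a semisynthetic frame. The noise level, threshold, and streak geometry below are invented, far simpler than StreakDet's actual processing:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Semisynthetic frame: Gaussian background noise plus one diagonal streak.
    img = rng.normal(0, 0.5, (256, 256))
    rr = np.arange(40, 200)
    img[rr, rr + 10] += 4.0            # streak, ~225 px long along the diagonal

    # Segmentation: keep pixels far above the background level.
    ys, xs = np.nonzero(img > 3.0)

    # Classification: streak length = extent of the detected pixel cloud along
    # its principal axis (PCA via SVD on the centred pixel coordinates).
    pts = np.column_stack([ys, xs]).astype(float)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ vt[0]
    length = proj.max() - proj.min()
    print(length > 100)   # True: this object would be classified a long streak
    ```

    Real streaks are faint and potentially curved, so the pipeline's segmentation and astrometric reduction are far more involved than this thresholding sketch.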

  16. Barnacle Bill in Super Resolution from Insurance Panorama

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Barnacle Bill is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin.

    This view of Barnacle Bill was produced by combining the 'Insurance Pan' frames taken while the IMP camera was still in its stowed position on sol 2. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The right-eye composite consists of 5 frames, taken with different color filters; the left-eye composite consists of only 1 frame. The resultant image from each eye was enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars.

    The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals.

  17. An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera

    NASA Astrophysics Data System (ADS)

    Kumar, K. S. Chidanand; Bhowmick, Brojeshwar

    A driver drowsiness identification system is proposed that generates an alarm when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. The pupils are then located in the face region. In subsequent frames of video, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes remain closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
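    The closed-for-several-consecutive-frames decision rule reduces to a run-length counter over per-frame classifications. A minimal sketch; the threshold of 5 frames and the boolean sequences are invented (in the real system the classifications come from the pupil tracker):

    ```python
    def drowsiness_alarm(eye_closed_frames, threshold=5):
        """Return the frame index at which the alarm fires, or None."""
        run = 0
        for i, closed in enumerate(eye_closed_frames):
            run = run + 1 if closed else 0   # reset the run on any open-eye frame
            if run >= threshold:
                return i
        return None

    awake = [False, True, True, False] * 10          # blinks, never 5 in a row
    drowsy = [False] * 10 + [True] * 8               # sustained eye closure

    print(drowsiness_alarm(awake))   # None: no alarm
    print(drowsiness_alarm(drowsy))  # 14: fires on the 5th consecutive closed frame
    ```

    Resetting the counter on any open-eye frame is what distinguishes normal blinking from a drowsiness episode.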

  18. High-Speed Photography of Detonation Propagation in Dynamically Precompressed Liquid Explosives

    NASA Astrophysics Data System (ADS)

    Petel, Oren; Higgins, Andrew; Yoshinaka, Akio; Zhang, Fan

    2007-06-01

    The propagation of detonation in shock-compressed nitromethane was observed with a high-speed framing camera. The test explosive, nitromethane, was compressed by a reverberating shock wave to pressures on the order of 10 GPa prior to being detonated by a secondary detonation event. The pressure and density in the test explosive prior to detonation were determined using two methods: manganin strain gauge measurements and LS-DYNA simulations. The velocity of the detonation front was determined from consecutive frames and correlated to the density of the explosive after the reverberating shock wave and prior to detonation. Observing detonation propagation under these non-ambient conditions provides data that can be useful in the validation of equation-of-state models.
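    Extracting front velocity from consecutive frames is a finite-difference computation: per-frame displacement divided by the inter-frame time. The positions and the 1 MHz framing rate below are invented illustrations, not the experimental data:

    ```python
    import numpy as np

    frame_interval = 1e-6   # s, assumed inter-frame time of the framing camera
    front_mm = np.array([0.0, 6.3, 12.6, 18.9, 25.2])  # front position per frame

    # Velocity for each consecutive frame pair, converted from mm to m.
    velocity = np.diff(front_mm) * 1e-3 / frame_interval
    print(velocity.mean())  # 6300 m/s, a plausible detonation speed
    ```

    In the experiment, repeating this over frames acquired at different precompression levels is what correlates detonation velocity with post-shock density.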

  19. Chromatic Modulator for High Resolution CCD or APS Devices

    NASA Technical Reports Server (NTRS)

    Hartley, Frank T. (Inventor); Hull, Anthony B. (Inventor)

    2003-01-01

    A system for providing high-resolution color separation in electronic imaging. Comb drives controllably oscillate a red-green-blue (RGB) color strip filter system (or otherwise) over an electronic imaging system such as a charge-coupled device (CCD) or active pixel sensor (APS). The color filter is modulated over the imaging array at a rate three or more times the frame rate of the imaging array. In so doing, the underlying active imaging elements are then able to detect separate color-separated images, which are then combined to provide a color-accurate frame which is then recorded as the representation of the recorded image. High pixel resolution is maintained. Registration is obtained between the color strip filter and the underlying imaging array through the use of electrostatic comb drives in conjunction with a spring suspension system.

  20. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
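    The two real-time products mentioned above (multi-frame gray-scale contrast images and single-frame time-stamp histograms) can be mimicked on synthetic photon data. Only the 13-bit stamp depth echoes the text; the detection probability, target range bin, and scene are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    n_frames, n_pix = 2000, (32, 32)
    target_bin = 3000                 # toy photon return time for a flat target

    # Each pixel fires with 10% probability per gate; a hit carries a 13-bit
    # time stamp near the target bin, a miss is marked -1.
    hits = rng.random((n_frames,) + n_pix) < 0.1
    stamps = np.where(hits,
                      target_bin + rng.integers(-5, 6, (n_frames,) + n_pix),
                      -1)

    contrast = hits.sum(axis=0)       # multi-frame gray-scale contrast image
    hist, _ = np.histogram(stamps[stamps >= 0], bins=2**13, range=(0, 2**13))
    print(int(np.argmax(hist)))       # near 3000: recovered range bin
    ```

    Accumulating counts across frames gives reflectivity-like contrast, while the time-stamp histogram localizes the surface in range; both are cheap enough to compute at high frame rates.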

  1. Remote consultation and diagnosis in medical imaging using a global PACS backbone network

    NASA Astrophysics Data System (ADS)

    Martinez, Ralph; Sutaria, Bijal N.; Kim, Jinman; Nam, Jiseung

    1993-10-01

    A Global PACS is a national network which interconnects several PACS networks at medical and hospital complexes using a national backbone network. A Global PACS environment enables new and beneficial operations between radiologists and physicians, when they are located in different geographical locations. One operation allows the radiologist to view the same image folder at both Local and Remote sites so that a diagnosis can be performed. The paper describes the user interface, database management, and network communication software which has been developed in the Computer Engineering Research Laboratory and Radiology Research Laboratory. Specifically, a design for a file management system in a distributed environment is presented. In the remote consultation and diagnosis operation, a set of images is requested from the database archive system and sent to the Local and Remote workstation sites on the Global PACS network. Viewing the same images, the radiologists use pointing overlay commands, or frames to point out features on the images. Each workstation transfers these frames, to the other workstation, so that an interactive session for diagnosis takes place. In this phase, we use fixed frames and variable size frames, used to outline an object. The data pockets for these frames traverses the national backbone in real-time. We accomplish this feature by using TCP/IP protocol sockets for communications. The remote consultation and diagnosis operation has been tested in real-time between the University Medical Center and the Bowman Gray School of Medicine at Wake Forest University, over the Internet. In this paper, we show the feasibility of the operation in a Global PACS environment. Future improvements to the system will include real-time voice and interactive compressed video scenarios.

  2. Retrospective respiration-gated whole-body photoacoustic computed tomography of mice

    NASA Astrophysics Data System (ADS)

    Xia, Jun; Chen, Wanyi; Maslov, Konstantin; Anastasio, Mark A.; Wang, Lihong V.

    2014-01-01

    Photoacoustic tomography (PAT) is an emerging technique that has a great potential for preclinical whole-body imaging. To date, most whole-body PAT systems require multiple laser shots to generate one cross-sectional image, yielding a frame rate of <1 Hz. Because a mouse breathes at up to 3 Hz, without proper gating mechanisms, acquired images are susceptible to motion artifacts. Here, we introduce, for the first time to our knowledge, retrospective respiratory gating for whole-body photoacoustic computed tomography. This new method involves simultaneous capturing of the animal's respiratory waveform during photoacoustic data acquisition. The recorded photoacoustic signals are sorted and clustered according to the respiratory phase, and an image of the animal at each respiratory phase is reconstructed subsequently from the corresponding cluster. The new method was tested in a ring-shaped confocal photoacoustic computed tomography system with a hardware-limited frame rate of 0.625 Hz. After respiratory gating, we observed sharper vascular and anatomical images at different positions of the animal body. The entire breathing cycle can also be visualized at 20 frames/cycle.
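    Retrospective gating is essentially sort-by-phase-and-average. A 1-D toy version follows; the waveform, breathing rate, and image model are invented, and only the 0.625 Hz frame rate and 20 phase bins echo the text:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    n_frames = 400
    frame_times = np.arange(n_frames) / 0.625        # s, 0.625 Hz acquisition
    resp_phase = (frame_times * 2.9) % 1.0           # recorded breathing phase

    # Each "frame" is a noisy 1-D profile whose feature shifts with phase.
    x = np.linspace(0, 1, 64)
    frames = np.array([np.exp(-(x - 0.3 - 0.2 * p) ** 2 / 0.002)
                       + rng.normal(0, 0.3, x.size) for p in resp_phase])

    # Sort frames into 20 phase bins and average within each bin.
    n_bins = 20
    bins = (resp_phase * n_bins).astype(int)
    gated = np.array([frames[bins == b].mean(axis=0) for b in range(n_bins)])
    print(gated.shape)   # (20, 64): one sharpened profile per respiratory phase
    ```

    Averaging only frames that share a phase removes the motion blur that a plain average over all frames would produce, which is why the gated images are sharper.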

  3. Novel ultrasonic real-time scanner featuring servo controlled transducers displaying a sector image.

    PubMed

    Matzuk, T; Skolnick, M L

    1978-07-01

    This paper describes a new real-time servo-controlled sector scanner that produces high-resolution images and has functionally programmable features similar to phased-array systems, but possesses the simplicity of design and low cost best achievable in a mechanical sector scanner. The unique feature is the transducer head, which contains a single moving part, the transducer, enclosed within a light-weight, hand-held, and vibration-free case. The frame rate, sector width, and stop-action angle are all operator programmable. The frame rate can be varied from 12 to 30 frames/s and the sector width from 0 degrees to 60 degrees. Conversion from sector to time-motion (T/M) mode is instant, and two options are available: a freeze-position high-density T/M and a low-density T/M obtainable simultaneously during sector visualization. Unusual electronic features are: automatic gain control, electronic recording of images on video tape in rf format, and the ability to post-process images during video playback to extract the T/M display and to change the time gain control (tgc) and image size.

  4. Real-time mid-infrared imaging of living microorganisms.

    PubMed

    Haase, Katharina; Kröger-Lui, Niels; Pucci, Annemarie; Schönhals, Arthur; Petrich, Wolfgang

    2016-01-01

    The speed and efficiency of quantum cascade laser-based mid-infrared microspectroscopy are demonstrated using two different model organisms as examples. For the slowly moving Amoeba proteus, a quantum cascade laser is tuned over the wavelength range of 7.6 µm to 8.6 µm (wavenumbers 1320 cm(-1) and 1160 cm(-1) , respectively). The recording of a hyperspectral image takes 11.3 s whereby an average signal-to-noise ratio of 29 is achieved. The limits of time resolution are tested by imaging the fast moving Caenorhabditis elegans at a discrete wavenumber of 1265 cm(-1) . Mid-infrared imaging is performed with the 640 × 480 pixel video graphics array (VGA) standard and at a full-frame time resolution of 0.02 s (i.e. well above the most common frame rate standards). An average signal-to-noise ratio of 16 is obtained. To the best of our knowledge, these findings constitute the first mid-infrared imaging of living organisms at VGA standard and video frame rate. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Real time imaging of infrared scene data generated by the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Baca, Michael J.

    1990-09-01

    A system to display images generated by the Naval Postgraduate School Infrared Search and Target Designation (a modified AN/SAR-8 Advanced Development Model) in near real time was developed using a 33 MHz NIC computer as the central controller. This computer was enhanced with a Data Translation DT2861 Frame Grabber for image processing and an interface board designed and constructed at NPS to provide synchronization between the IRSTD and Frame Grabber. Images are displayed in false color in a video raster format on a 512 by 480 pixel resolution monitor. Using FORTRAN, programs have been written to acquire, unscramble, expand and display a 3 deg sector of data. The time line for acquisition, processing and display has been analyzed and repetition periods of less than four seconds for successive screen displays have been achieved. This represents a marked improvement over previous methods necessitating slower Direct Memory Access transfers of data into the Frame Grabber. Recommendations are made for further improvements to enhance the speed and utility of images produced.

  6. Earth Observation

    NASA Image and Video Library

    2014-08-23

    ISS040-E-105768 (23 Aug. 2014) --- One of the Expedition 40 crew members aboard the International Space Station, flying at an altitude of 221 nautical miles, captured this image of Egypt's Nile River and Lake Nasser on Aug. 23, 2014. The Aswan High Dam is to the right of center in the 70mm focal-length image, as the Nile flows southward (to the right in this image) toward Cairo and its Mediterranean delta (both out of frame at right). The Red Sea, which runs more or less parallel to the Nile, is out of frame at bottom.

  7. Real life identification of partially occluded weapons in video frames

    NASA Astrophysics Data System (ADS)

    Hempelmann, Christian F.; Arslan, Abdullah N.; Attardo, Salvatore; Blount, Grady P.; Sirakov, Nikolay M.

    2016-05-01

    We empirically test the capacity of an improved system to identify not just images of individual guns, but partially occluded guns and their parts appearing in a video frame. This approach combines low-level geometrical information gleaned from the visual images with high-level semantic information stored in an ontology enriched with meronymic part-whole relations. The main improvements of the system are the handling of occlusion, new algorithms, and an emerging meronomy. Although part-whole relations are well known and commonly deployed in ontologies, actual meronomies need to be engineered and populated with unique solutions. Here, this includes the adjacency of weapon parts and the essentiality of parts to the threat of, and their diagnosticity for, a weapon. In this study, video sequences are processed frame by frame. The extraction method separates colors and removes the background. Image subtraction of the next frame then determines moving targets, before morphological closing is applied to the current frame in order to clean up noise and fill gaps. Next, the method calculates the boundary coordinates of each object and uses them to create a finite numerical sequence as a descriptor. Part identification is done by cyclic sequence alignment and matching against the nodes of the weapons ontology. From the identified parts, the most likely weapon is determined using the weapon ontology.
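    The cyclic alignment step can be sketched as a minimum distance over all rotations of the boundary descriptor, which makes the match invariant to where tracing of the boundary started. The descriptors below are made-up toy sequences, not real weapon contours, and the metric (sum of absolute differences) is one simple choice among many:

    ```python
    def cyclic_distance(a, b):
        """Smallest sum-of-absolute-differences over all cyclic rotations of b."""
        assert len(a) == len(b)
        n = len(a)
        return min(sum(abs(a[i] - b[(i + s) % n]) for i in range(n))
                   for s in range(n))

    barrel_template = [5, 9, 9, 5, 1, 1, 1, 1]       # hypothetical ontology node
    observed = [1, 1, 5, 9, 9, 5, 1, 1]              # same shape, other start point
    unrelated = [3, 3, 3, 3, 3, 3, 3, 3]

    print(cyclic_distance(observed, barrel_template))   # 0: match up to rotation
    print(cyclic_distance(unrelated, barrel_template))  # 24: no part match
    ```

    In the real system the best-matching ontology nodes feed the meronomy, which then reasons from identified parts to the most likely weapon.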

  8. Improved grid-noise removal in single-frame digital moiré 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Mohammadi, Fatemeh; Kofman, Jonathan

    2016-11-01

    A single-frame grid-noise removal technique was developed for application in single-frame digital-moiré 3D shape measurement. The ability of the stationary wavelet transform (SWT) to prevent oscillation artifacts near discontinuities, and the ability of the Fourier transform (FFT) applied to wavelet coefficients to separate grid-noise from useful image information, were combined in a new technique, SWT-FFT, to remove grid-noise from moiré-pattern images generated by digital moiré. In comparison to previous grid-noise removal techniques in moiré, SWT-FFT avoids the requirement for mechanical translation of optical components and capture of multiple frames, to enable single-frame moiré-based measurement. Experiments using FFT, Discrete Wavelet Transform (DWT), DWT-FFT, and SWT-FFT were performed on moiré-pattern images containing grid noise, generated by digital moiré, for several test objects. SWT-FFT had the best performance in removing high-frequency grid-noise, both straight and curved lines, minimizing artifacts, and preserving the moiré pattern without blurring and degradation. SWT-FFT also had the lowest noise amplitude in the reconstructed height and lowest roughness index for all test objects, indicating best grid-noise removal in comparison to the other techniques.
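    The Fourier half of the idea can be shown with numpy alone (omitting the wavelet stage): periodic grid noise concentrates in sharp spectral peaks away from the centre, so masking those frequencies and inverting the FFT removes the grid while preserving the smooth moiré fringes. The pattern below is synthetic and exactly periodic, which makes the separation ideal:

    ```python
    import numpy as np

    x = np.arange(256) / 256
    X, Y = np.meshgrid(x, x)
    moire = np.sin(2 * np.pi * 3 * (X + Y))      # smooth moiré fringes
    grid = 0.5 * np.sin(2 * np.pi * 60 * X)      # high-frequency grid lines
    noisy = moire + grid

    F = np.fft.fftshift(np.fft.fft2(noisy))
    mask = np.zeros_like(F)
    mask[:, 128 - 30:128 + 30] = 1               # keep only low horizontal freqs
    cleaned = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    print(np.allclose(cleaned, moire, atol=1e-8))   # True: grid removed
    ```

    On real moiré images the grid frequencies leak and overlap useful content, which is why the authors filter wavelet coefficients (SWT-FFT) instead of the raw image spectrum.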

  9. High-speed bioimaging with frequency-division-multiplexed fluorescence confocal microscopy

    NASA Astrophysics Data System (ADS)

    Mikami, Hideharu; Harmon, Jeffrey; Ozeki, Yasuyuki; Goda, Keisuke

    2017-04-01

    We present methods of fluorescence confocal microscopy that enable an unprecedentedly high frame rate of >10,000 fps. The methods are based on a frequency-division multiplexing technique originally developed in the field of communication engineering. Specifically, we achieved a broad detection-signal bandwidth (400 MHz) using a dual-AOD method, and overcame the frame-rate limitations imposed by the scanning device by using a multi-line focusing method, resulting in a significant increase in frame rate. The methods have potential biomedical applications such as observation of sub-millisecond dynamics in biological tissues, in-vivo three-dimensional imaging, and fluorescence imaging flow cytometry.
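    Frequency-division multiplexing itself can be demonstrated at toy scale (kHz carriers, far below the 400 MHz regime above): two slowly varying fluorescence envelopes ride on different carriers, share one detector signal, and are separated by lock-in demodulation (mix with the carrier, then low-pass filter). All frequencies and rates below are invented:

    ```python
    import numpy as np

    fs = 100_000.0
    t = np.arange(0, 0.05, 1 / fs)
    a1 = 1.0 + 0.5 * np.sin(2 * np.pi * 40 * t)    # envelope on carrier 1
    a2 = 1.0 + 0.5 * np.cos(2 * np.pi * 25 * t)    # envelope on carrier 2
    f1, f2 = 5_000.0, 12_000.0
    detector = (a1 * np.cos(2 * np.pi * f1 * t)
                + a2 * np.cos(2 * np.pi * f2 * t))  # single shared detector

    def lock_in(signal, carrier_hz, window=200):
        """Demodulate one channel: mix down, then boxcar low-pass."""
        mixed = 2 * signal * np.cos(2 * np.pi * carrier_hz * t)
        return np.convolve(mixed, np.ones(window) / window, mode='same')

    rec1 = lock_in(detector, f1)
    err = np.max(np.abs(rec1[1000:-1000] - a1[1000:-1000]))
    print(err < 0.05)   # True: envelope 1 recovered away from the edges
    ```

    Each focal spot in the microscope gets its own carrier in this way, so one photodetector channel can carry many pixels simultaneously.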

  10. Covariance of lucky images: performance analysis

    NASA Astrophysics Data System (ADS)

    Cagigal, Manuel P.; Valle, Pedro J.; Cagigas, Miguel A.; Villó-Pérez, Isidro; Colodro-Conde, Carlos; Ginski, C.; Mugrauer, M.; Seeliger, M.

    2017-01-01

    The covariance of ground-based lucky images is a robust and easy-to-use algorithm that allows us to detect faint companions surrounding a host star. In this paper, we analyse how the number of processed frames, the frames' quality, the atmospheric conditions and the detection noise affect companion detectability. This analysis has been carried out using both experimental and computer-simulated imaging data. Although the technique allows the detection of faint companions, the camera detection noise and the use of a limited number of frames limit the minimum detectable companion intensity to around 1000 times fainter than that of the host star when placed at an angular distance corresponding to the first few Airy rings. The reachable contrast could be even larger when detecting companions with the assistance of an adaptive optics system.
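    The covariance idea can be shown in a toy 1-D model: host and companion flicker together with the seeing, so the covariance of every pixel's time series with the host's series peaks at the companion position, while detector noise, being uncorrelated, averages away. Contrast, noise level, and PSF shapes below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    x = np.arange(64)
    host_psf = np.exp(-(x - 20.0) ** 2 / 8.0)
    comp_psf = np.exp(-(x - 45.0) ** 2 / 8.0)

    n_frames = 5000
    strehl = rng.uniform(0.2, 1.0, n_frames)          # per-frame "lucky" quality
    frames = (strehl[:, None] * (host_psf + 0.01 * comp_psf)   # 100x fainter
              + rng.normal(0, 0.02, (n_frames, 64)))           # detector noise

    host_series = frames[:, 20]                       # host-pixel time series
    cov_map = np.mean((frames - frames.mean(0)) *
                      (host_series - host_series.mean())[:, None], axis=0)

    print(int(np.argmax(cov_map)))             # 20: the host pixel itself
    print(int(np.argmax(cov_map[30:])) + 30)   # 45: faint companion recovered
    ```

    The companion's covariance signal grows linearly with its flux but the noise floor falls as the number of frames grows, which is the frame-count dependence the paper quantifies.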

  11. A multi-frame soft x-ray pinhole imaging diagnostic for single-shot applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G. A.; Coffey, S. K.

    2012-10-15

    For high-energy-density magnetized target fusion experiments at the Air Force Research Laboratory FRCHX machine, obtaining multi-frame soft x-ray images of the field-reversed configuration (FRC) plasma as it is being compressed will provide useful dynamics and symmetry information. However, vacuum hardware will be destroyed during the implosion. We have designed a simple in-vacuum pinhole nosecone attachment, fitting onto a Conflat window, coated with 3.2 mg/cm² of P-47 phosphor, and covered with a thin 50-nm aluminum reflective overcoat, lens-coupled to a multi-frame Hadland Ultra intensified digital camera. We compare visible and soft x-ray axial images of translating (≈200 eV) plasmas in the FRX-L and FRCHX machines in Los Alamos and Albuquerque.

  12. Characterization of a 512x512-pixel 8-output full-frame CCD for high-speed imaging

    NASA Astrophysics Data System (ADS)

    Graeve, Thorsten; Dereniak, Eustace L.

    1993-01-01

    The characterization of a 512 by 512 pixel, eight-output full-frame CCD manufactured by English Electric Valve under part number CCD13 is discussed. This device is a high-resolution silicon-based array designed for visible imaging applications at readout periods as low as two milliseconds. The characterization of the device includes mean-variance analysis to determine read noise and dynamic range, as well as charge transfer efficiency, MTF, and quantum efficiency measurements. Dark current and non-uniformity issues on a pixel-to-pixel basis and between individual outputs are also examined. The characterization of the device is restricted by hardware limitations to a one MHz pixel rate, corresponding to a 40 ms readout time. However, subsections of the device have been operated at up to an equivalent 100 frames per second. To maximize the frame rate, the CCD is illuminated by a synchronized strobe flash in between frame readouts. The effects of the strobe illumination on the imagery obtained from the device are discussed.
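    The mean-variance (photon transfer) analysis mentioned above can be sketched quickly: for a linear sensor, temporal variance rises linearly with mean signal, so the slope of a variance-versus-mean fit gives the conversion gain and the intercept gives the read-noise variance. The gain and noise values below are invented, not the CCD13's:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    true_gain, read_noise = 0.5, 2.0            # DN/e-, DN (assumed camera)
    flat_levels_e = [20, 50, 100, 200, 400]     # mean illumination in electrons

    mean_dn, var_dn = [], []
    for level in flat_levels_e:
        electrons = rng.poisson(level, 50_000)  # shot-noise-limited signal
        dn = true_gain * electrons + rng.normal(0, read_noise, 50_000)
        mean_dn.append(dn.mean())
        var_dn.append(dn.var())

    # var = gain * mean + read_noise^2 for a shot-noise-limited linear sensor.
    slope, intercept = np.polyfit(mean_dn, var_dn, 1)
    print(slope)               # ≈ 0.5: recovered gain (DN/e-)
    print(np.sqrt(intercept))  # ≈ 2.0: recovered read noise (DN)
    ```

    In practice the flat-field pairs are differenced to cancel fixed-pattern non-uniformity before computing the variance; this sketch skips that step.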

  13. Automated frame selection process for high-resolution microendoscopy

    NASA Astrophysics Data System (ADS)

    Ishijima, Ayumu; Schwarz, Richard A.; Shin, Dongsuk; Mondrik, Sharon; Vigneswaran, Nadarajah; Gillenwater, Ann M.; Anandasabapathy, Sharmila; Richards-Kortum, Rebecca

    2015-04-01

    We developed an automated frame selection algorithm for high-resolution microendoscopy video sequences. The algorithm rapidly selects a representative frame with minimal motion artifact from a short video sequence, enabling fully automated image analysis at the point-of-care. The algorithm was evaluated by quantitative comparison of diagnostically relevant image features and diagnostic classification results obtained using automated frame selection versus manual frame selection. A data set consisting of video sequences collected in vivo from 100 oral sites and 167 esophageal sites was used in the analysis. The area under the receiver operating characteristic curve was 0.78 (automated selection) versus 0.82 (manual selection) for oral sites, and 0.93 (automated selection) versus 0.92 (manual selection) for esophageal sites. The implementation of fully automated high-resolution microendoscopy at the point-of-care has the potential to reduce the number of biopsies needed for accurate diagnosis of precancer and cancer in low-resource settings where there may be limited infrastructure and personnel for standard histologic analysis.
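    One simple, plausible selection criterion (not necessarily the authors' exact metric) is to score frames by inter-frame motion and keep the stillest one. The synthetic "video" below simulates motion as horizontal shifts:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    base = rng.random((32, 32))
    shifts = [0, 3, 5, 1, 0, 0, 2, 6]      # per-frame motion in pixels (toy)
    video = np.array([np.roll(base, s, axis=1) for s in shifts])

    # Motion score: mean absolute difference from the previous frame.
    diffs = np.abs(np.diff(video, axis=0)).mean(axis=(1, 2))
    best = int(np.argmin(diffs)) + 1        # +1 because diff i scores frame i+1
    print(best)   # 5: the frame in the middle of the static run
    ```

    A representative frame chosen this way minimizes motion blur, which is what makes the downstream automated image analysis reliable at the point-of-care.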

  14. New image compression scheme for digital angiocardiography application

    NASA Astrophysics Data System (ADS)

    Anastassopoulos, George C.; Lymberopoulos, Dimitris C.; Kotsopoulos, Stavros A.; Kokkinakis, George C.

    1993-06-01

    The present paper deals with the development and evaluation of a new compression scheme for angiocardiography images. This scheme provides considerable compression of the medical data file through two different stages: the first stage removes the redundancy within a single frame, while the second stage removes the redundancy among sequential frames. Within these stages, the employed data compression ratio can be easily adjusted according to the needs of angiocardiography applications, where still or moving (slow- or full-motion) images are handled. The developed scheme has been tailored to the real needs of diagnosis-oriented conferencing-teleworking processes, where Unified Image Viewing facilities are required.
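    The two-stage structure can be sketched with simple differencing: stage one removes redundancy within a frame (horizontal differencing), stage two removes redundancy among sequential frames (temporal differencing). This is only the decorrelation skeleton; a real angiocardiography codec would add quantization and entropy coding on top:

    ```python
    import numpy as np

    def two_stage_encode(frames):
        # Stage 1: intra-frame (spatial) differencing along each row.
        spatial = np.diff(frames, axis=2, prepend=np.zeros_like(frames[:, :, :1]))
        # Stage 2: inter-frame (temporal) differencing.
        return np.diff(spatial, axis=0, prepend=np.zeros_like(spatial[:1]))

    def two_stage_decode(coded):
        spatial = np.cumsum(coded, axis=0)   # undo temporal differencing
        return np.cumsum(spatial, axis=2)    # undo spatial differencing

    rng = np.random.default_rng(6)
    frames = rng.integers(0, 256, (4, 8, 8)).astype(np.int64)
    coded = two_stage_encode(frames)
    print(bool(np.array_equal(two_stage_decode(coded), frames)))  # True: lossless
    ```

    On real angiographic sequences the differenced data is mostly near zero, which is what makes the subsequent entropy coding effective and the compression ratio adjustable.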

  15. Real-Time Detection of Sporadic Meteors in the Intensified TV Imaging Systems.

    PubMed

    Vítek, Stanislav; Nasyrova, Maria

    2017-12-29

    The automatic observation of the night sky through wide-angle video systems with the aim of detecting meteors and fireballs is now among routine astronomical observations. The observation is usually done in multi-station or network mode, so that the direction and the speed of the body's flight can be estimated. The high velocity of a meteor flying through the atmosphere determines the important features of the camera systems, namely the high frame rate. Because of their high frame rates, such imaging systems produce a large amount of data, of which only a small fragment has scientific potential. This paper focuses on methods for the real-time detection of fast-moving objects in video sequences recorded by intensified TV systems with frame rates of about 60 frames per second. The goal of our effort is to discard all unnecessary data during the daytime and free up hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper.
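    The keep/discard decision can be sketched as frame differencing: difference consecutive frames, count pixels that changed by more than a threshold, and keep only frames where a fast-moving bright object appears. The thresholds and the synthetic sky are invented, not MAIA's actual detection logic:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def has_transient(prev, curr, thresh=30, min_pixels=5):
        """True if enough pixels changed strongly between consecutive frames."""
        moved = np.abs(curr.astype(int) - prev.astype(int)) > thresh
        return bool(moved.sum() >= min_pixels)

    sky = rng.integers(10, 20, (64, 64)).astype(np.uint8)   # static night sky
    meteor = sky.copy()
    meteor[30, 10:22] = 200                                 # bright streak

    print(has_transient(sky, sky.copy()))   # False: frame can be discarded
    print(has_transient(sky, meteor))       # True: frame is kept for analysis
    ```

    Requiring a minimum number of changed pixels suppresses single-pixel events such as hot pixels or cosmic-ray hits, so only extended moving objects trigger storage.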

  16. Kepler Fine Guidance Sensor Data

    NASA Technical Reports Server (NTRS)

    Van Cleve, Jeffrey; Campbell, Jennifer Roseanna

    2017-01-01

    The Kepler and K2 missions collected Fine Guidance Sensor (FGS) data in addition to the science data, as discussed in the Kepler Instrument Handbook (KIH, Van Cleve and Caldwell 2016). The FGS CCDs are frame transfer devices (KIH Table 7) located in the corners of the Kepler focal plane (KIH Figure 24), which are read out 10 times every second. The FGS data are being made available to the user community for scientific analysis as flux and centroid time series, along with a limited number of FGS full frame images which may be useful for constructing a World Coordinate System (WCS) or otherwise putting the time series data in context. This document describes the data content and file format, and gives example MATLAB scripts to read the time series. Three file types are delivered as the FGS data: (1) Flux and Centroid (FLC) data, time series of star signal and centroid data; (2) Ancillary FGS Reference (AFR) data, a catalog of information about the observed stars in the FLC data; and (3) FGS Full-Frame Image (FGI) data, full-frame image snapshots of the FGS CCDs.

  17. Data management and digital delivery of analog data

    USGS Publications Warehouse

    Miller, W.A.; Longhenry, Ryan; Smith, T.

    2008-01-01

    The U.S. Geological Survey's (USGS) data archive at the Earth Resources Observation and Science (EROS) Center is a comprehensive and impartial record of the Earth's changing land surface. USGS/EROS has been archiving and preserving land remote sensing data for over 35 years, and the archive continues to grow as aircraft and satellites acquire more imagery. As a world leader in data preservation, USGS/EROS has a reputation as a technological innovator in solving challenges and ensuring that these collections remain accessible, and other agencies also call on the USGS to consider their collections for long-term archive support. To improve access to the USGS film archive, each frame on every roll of film is being digitized by automated, high-performance digital camera systems. The system robotically captures a digital image of each film frame for the creation of browse and medium-resolution image files. Single-frame metadata records are also created to improve access, which otherwise requires interpreting flight indexes. USGS/EROS is responsible for over 8.6 million frames of aerial photographs and 27.7 million satellite images.

  18. Real-Time Detection of Sporadic Meteors in the Intensified TV Imaging Systems

    PubMed Central

    2017-01-01

    The automatic observation of the night sky with wide-angle video systems, with the aim of detecting meteors and fireballs, is now a routine astronomical activity. Observations are usually made in multi-station or network mode, so that the direction and speed of the body's flight can be estimated. The high velocity of a meteor passing through the atmosphere dictates an important requirement on the camera systems, namely a high frame rate. Because of these high frame rates, such imaging systems produce a large amount of data, of which only a small fraction has scientific potential. This paper focuses on methods for the real-time detection of fast-moving objects in video sequences recorded by intensified TV systems with frame rates of about 60 frames per second. The goal of our effort is to remove all unnecessary data during the daytime and thereby free hard-drive capacity for the next observation. The processing of data from the MAIA (Meteor Automatic Imager and Analyzer) system is demonstrated in the paper. PMID:29286294
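
    As a minimal illustration of the kind of change detection such real-time pipelines build on, the sketch below flags pixels that change sharply between two consecutive frames. The threshold and the synthetic frames are illustrative assumptions, not the MAIA pipeline's actual parameters.

```python
import numpy as np

def detect_moving(prev_frame, frame, threshold=30):
    """Flag pixels whose brightness changed by more than `threshold`
    between two consecutive frames (a crude moving-object mask)."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# Synthetic 8-bit frames: one static star plus a bright streak that
# appears only in the second frame.
prev = np.zeros((64, 64), dtype=np.uint8)
cur = prev.copy()
prev[10, 10] = cur[10, 10] = 200      # static star: no change
cur[30, 20:30] = 255                  # streak appears in frame 2

mask = detect_moving(prev, cur)
print(mask[10, 10], mask[30, 25])  # False True
```

    A real detector would additionally accumulate such masks over several frames and test whether the flagged pixels trace a line, so that noise spikes are rejected.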

  19. Dim target trajectory-associated detection in bright earth limb background

    NASA Astrophysics Data System (ADS)

    Chen, Penghui; Xu, Xiaojian; He, Xiaoyu; Jiang, Yuesong

    2015-09-01

    The intensive emission of the earth limb in the field of view of sensors contributes strongly to the observation images. Because of the low signal-to-noise ratio (SNR), detecting small targets against the earth-limb background is challenging, especially detecting point-like targets from a single frame. To improve target detection, track-before-detect (TBD) based on the frame sequence is performed. In this paper, a new technique is proposed to determine target-associated trajectories, which jointly carries out background removal, maximum value projection (MVP), and the Hough transform. The background of the bright earth limb in the observation images is removed according to its profile characteristics. For a moving target, the corresponding pixels in the MVP image shift approximately regularly through the time sequence, and the target trajectory is determined by the Hough transform from the differing pixel characteristics of the target versus the clutter and noise. Compared with traditional frame-by-frame methods, determining associated trajectories from the MVP reduces the computational load. Numerical simulations are presented to demonstrate the effectiveness of the proposed approach.

  20. Driving techniques for high frame rate CCD camera

    NASA Astrophysics Data System (ADS)

    Guo, Weiqiang; Jin, Longxu; Xiong, Jingwu

    2008-03-01

    This paper describes a high-frame-rate CCD camera capable of operating at 100 frames/s. The camera uses the Kodak KAI-0340, an interline-transfer CCD with 640 (H) × 480 (V) pixels. Two output ports are used to read out the CCD data, with pixel rates approaching 30 MHz. Because the vertical charge-transfer registers of an interline-transfer CCD are not perfectly opaque, the device can produce undesired image artifacts, such as random white spots and smear generated in the registers. To increase the frame rate, a speed-up structure is incorporated in the KAI-0340, which makes it vulnerable to a vertical-stripe artifact. These phenomena can severely impair image quality, so electronic methods of eliminating the artifacts are adopted. A special clocking mode dumps the unwanted charge quickly, and fast readout of the images, cleared of smear, follows immediately. An amplifier senses and corrects the delay mismatch between the dual-phase vertical clock pulses, so that the transition edges become nearly coincident and the vertical stripes disappear. Results obtained with the camera are shown.

  1. Arizona-sized Io Eruption

    NASA Technical Reports Server (NTRS)

    1997-01-01

    These images of Jupiter's volcanic moon, Io, show the results of a dramatic event that occurred on the fiery satellite during a five-month period. The changes, captured by the solid state imaging (CCD) system on NASA's Galileo spacecraft, occurred between the time Galileo acquired the left frame, during its seventh orbit of Jupiter, and the right frame, during its tenth orbit. A new dark spot, 400 kilometers (249 miles) in diameter, which is roughly the size of Arizona, surrounds a volcanic center named Pillan Patera. Galileo imaged a 120 kilometer (75 mile) high plume erupting from this location during its ninth orbit. Pele, which produced the larger plume deposit southwest of Pillan, also appears different than it did during the seventh orbit, perhaps due to interaction between the two large plumes. Pillan's plume deposits appear dark at all wavelengths. This color differs from the very red color associated with Pele, but is similar to the deposits of Babbar Patera, the dark feature southwest of Pele. Some apparent differences between the images are not caused by changes on Io's surface, but rather are due to differences in illumination, emission and phase angles. This is particularly apparent at Babbar Patera.

    North is to the top of the images. The left frame was acquired on April 4th, 1997, while the right frame was taken on Sept. 19th, 1997. The images were obtained at ranges of 563,000 kilometers (350,000 miles) for the left image, and 505,600 kilometers (314,165 miles) for the right.

    The Jet Propulsion Laboratory, Pasadena, CA manages the Galileo mission for NASA's Office of Space Science, Washington, DC.

    This image and other images and data received from Galileo are posted on the World Wide Web, on the Galileo mission home page at URL http://galileo.jpl.nasa.gov. Background information and educational context for the images can be found at URL http://www.jpl.nasa.gov/galileo/sepo.

  2. The applicability of frame imaging from a spinning spacecraft. Volume 1: Summary report

    NASA Technical Reports Server (NTRS)

    Botticelli, R. A.; Johnson, R. O.; Wallmark, G. N.

    1973-01-01

    A detailed study was made of frame-type imaging systems for use on board a spin stabilized spacecraft for outer planets applications. All types of frame imagers capable of performing this mission were considered, regardless of the current state of the art. Detailed sensor models of these systems were developed at the component level and used in the subsequent analyses. An overall assessment was then made of the various systems based upon results of a worst-case performance analysis, foreseeable technology problems, and the relative reliability and radiation tolerance of the systems. Special attention was directed at restraints imposed by image motion and the limited data transmission and storage capability of the spacecraft. Based upon this overall assessment, the most promising systems were selected and then examined in detail for a specified Jupiter orbiter mission. The relative merits of each selected system were then analyzed, and the system design characteristics were demonstrated using preliminary configurations, block diagrams, and tables of estimated weights, volumes and power consumption.

  3. Construction of high frame rate images with Fourier transform

    NASA Astrophysics Data System (ADS)

    Peng, Hu; Lu, Jian-Yu

    2002-05-01

    Traditionally, images are constructed with a delay-and-sum method that adjusts the phases of received signals (echoes) scattered from the same point in space so that they are summed in phase. Recently, the relationship between the delay-and-sum method and the Fourier transform has been investigated [Jian-yu Lu, Anjun Liu, and Hu Peng, ``High frame rate and delay-and-sum imaging methods,'' IEEE Trans. Ultrason. Ferroelectr. Freq. Control (submitted)]. In this study, a generic Fourier transform method is developed. Two-dimensional (2-D) or three-dimensional (3-D) high-frame-rate images can be constructed using the Fourier transform with a single transmission of an ultrasound pulse from an array, as long as the transmission field of the array is known. To verify the theory, computer simulations have been performed with a linear array, a 2-D array, a convex curved array, and a spherical 2-D array. The simulation results are consistent with the theory. [Work supported in part by Grant 5RO1 HL60301 from NIH.]
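
    The delay-and-sum baseline the abstract refers to can be illustrated with a minimal receive-only simulation; the aperture geometry, sample rate, and idealized one-sample pulse below are hypothetical, and transmit delays are omitted for brevity.

```python
import numpy as np

c, fs = 1540.0, 40e6                  # sound speed (m/s), sample rate (Hz)
elems = np.linspace(-5e-3, 5e-3, 16)  # 16-element aperture positions (m)
scat = np.array([1e-3, 20e-3])        # scatterer at x = 1 mm, z = 20 mm

# Each element records an idealized echo delayed by the element-to-
# scatterer travel time.
n = 2048
rf = np.zeros((len(elems), n))
for i, xe in enumerate(elems):
    d = np.hypot(scat[0] - xe, scat[1])
    rf[i, int(round(d / c * fs))] = 1.0

def das(rf, x, z):
    """Delay-and-sum: re-align each channel by its geometric delay to
    the candidate point (x, z) and sum the samples in phase."""
    out = 0.0
    for i, xe in enumerate(elems):
        s = int(round(np.hypot(x - xe, z) / c * fs))
        if s < rf.shape[1]:
            out += rf[i, s]
    return out

# Beamform a line through the scatterer; the focus peaks at its depth.
depths = np.arange(15e-3, 25e-3, 0.1e-3)
line = [das(rf, 1e-3, z) for z in depths]
best = depths[int(np.argmax(line))]
```

    The Fourier-transform method of the paper produces the same kind of image from a single transmission by working in the spatial-frequency domain instead of looping over focal points as above.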

  4. Tvashtar Movie

    NASA Technical Reports Server (NTRS)

    2007-01-01

    [figure removed for brevity, see original site] Click on the image for QuickTime movie of Tvashtar Movie

    Using its Long Range Reconnaissance Imager (LORRI), the New Horizons spacecraft captured the two frames in this 'movie' of the 330-kilometer (200-mile) high Tvashtar volcanic eruption plume on Jupiter's moon Io on February 28, 2007, from a range of 2.7 million kilometers (1.7 million miles). The two images were taken 50 minutes apart, at 03:50 and 04:40 Universal Time, and because particles in the plume take an estimated 30 minutes to fall back to the surface after being ejected by the central volcano, each image likely shows an entirely different set of particles. The details of the plume structure look quite different in each frame, though the overall brightness and size of the plume remain constant.

    Surface details on the nightside of Io, faintly illuminated by Jupiter, show the 5-degree change in Io's central longitude, from 22 to 27 degrees west, between the two frames.

  5. Moving target detection in flash mode against stroboscopic mode by active range-gated laser imaging

    NASA Astrophysics Data System (ADS)

    Zhang, Xuanyu; Wang, Xinwei; Sun, Liang; Fan, Songtao; Lei, Pingshun; Zhou, Yan; Liu, Yuliang

    2018-01-01

    Moving-target detection is important for target tracking and remote surveillance applications of active range-gated laser imaging. The technique has two operation modes, distinguished by the number of laser pulses per frame: stroboscopic mode, which accumulates multiple laser pulses per frame, and flash mode, which uses a single laser pulse per frame. In this paper, we establish a range-gated laser imaging system in which two lasers with different repetition frequencies serve the two modes. An electric fan and a horizontal sliding track were used as moving targets to compare motion blurring between the two modes. The system working in flash mode shows markedly better resistance to motion blurring than stroboscopic mode. Furthermore, based on experiments and theoretical analysis, we show that images acquired in stroboscopic mode have a higher signal-to-noise ratio than those acquired in flash mode, in both indoor and underwater environments.

  6. A CMOS image sensor with programmable pixel-level analog processing.

    PubMed

    Massari, Nicola; Gottardi, Massimo; Gonzo, Lorenzo; Stoppa, David; Simoni, Andrea

    2005-11-01

    A prototype of a 34 x 34 pixel image sensor, implementing real-time analog image processing, is presented. Edge detection, motion detection, image amplification, and dynamic-range boosting are executed at pixel level by means of a highly interconnected pixel architecture based on the absolute value of the difference among neighbor pixels. The analog operations are performed over a kernel of 3 x 3 pixels. The square pixel, consisting of 30 transistors, has a pitch of 35 µm with a fill factor of 20%. The chip was fabricated in a 0.35-µm CMOS technology, and its power consumption is 6 mW with a 3.3 V power supply. The device was fully characterized and achieves a dynamic range of 50 dB with a light power density of 150 nW/mm2 and a frame rate of 30 frames/s. The measured fixed-pattern noise corresponds to 1.1% of the saturation level. The sensor's dynamic range can be extended up to 96 dB using the double-sampling technique.
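
    The chip performs the neighbor-difference operation in analog circuitry; the numpy sketch below is only a behavioural software analogue of that edge-detection step, using the four direct neighbours of each pixel as an illustrative choice of kernel.

```python
import numpy as np

def edge_map(img):
    """Software analogue of the pixel-level operation: for each pixel,
    take the maximum absolute difference to its 4 direct neighbours."""
    p = np.pad(img.astype(int), 1, mode="edge")
    diffs = [np.abs(p[1:-1, 1:-1] - p[1:-1, :-2]),   # left neighbour
             np.abs(p[1:-1, 1:-1] - p[1:-1, 2:]),    # right neighbour
             np.abs(p[1:-1, 1:-1] - p[:-2, 1:-1]),   # upper neighbour
             np.abs(p[1:-1, 1:-1] - p[2:, 1:-1])]    # lower neighbour
    return np.max(diffs, axis=0)

img = np.zeros((8, 8), dtype=np.uint8)
img[:, 4:] = 200                       # vertical step edge at column 4
e = edge_map(img)
print(e[3, 3], e[3, 4], e[3, 6])  # 200 200 0
```

    Motion detection on the chip follows the same principle, with the difference taken between frames rather than between spatial neighbours.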

  7. Walén test and de Hoffmann-Teller frame of interplanetary large-amplitude Alfvén waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, J. K.; Hsieh, Wen-Chieh; Lee, L. C.

    2014-05-10

    In this study, three methods of analysis are compared to test the Walén relation. Method 1 requires a good de Hoffmann-Teller (HT) frame. Method 2 uses the three components separately to find the frame and is slightly modified from Method 1; it is intended to improve the accuracy of the HT frame and is able to demonstrate the anisotropic property of the fluctuations. The better the relation holds, the closer to 1 is the slope of a regression fitted to the data of plasma versus Alfvén velocities. However, this criterion is based on an average HT frame, and the fitted slope does not always work for the Walén test because the HT frame can change rapidly in high-speed streams. We propose Method 3 to check the Walén relation using a sequence of data generated by taking the difference of two consecutive values of the plasma and Alfvén velocities, respectively. The difference data are independent of the HT frame. We suggest that the ratio of the variances between plasma and Alfvén velocities is a better parameter to qualify the Walén relation. Four cases in two solar wind streams are studied using these three methods. Our results show that when the solar wind HT frame remains stable, all three methods can predict Alfvénic fluctuations well, but Method 3 can better predict the Walén relation when the solar wind contains structures with several small streams. A simulated case also demonstrates that Method 3 is better and more robust than Methods 1 and 2. These results are important for a better understanding of Alfvénic fluctuations and turbulence in the solar wind.

  8. The Benslimane's Artistic Model for Females' Gaze Beauty: An Original Assessment Tool.

    PubMed

    Benslimane, Fahd; van Harpen, Laura; Myers, Simon R; Ingallina, Fabio; Ghanem, Ali M

    2017-02-01

    The aim of this paper is to analyze the aesthetic characteristics of the human female gaze using anthropometry and to present an artistic model to represent it: "The Frame Concept." In this model, the eye fissure represents a painting, and the most peripheral shadows around it represent the frame of this painting. The narrower the frame, the more aesthetically pleasing and youthful the gaze appears. This study included a literature review of the features that make a gaze appear attractive. Photographs of models with attractive gazes were examined, and old photographs of patients were compared with recent ones. The frame ratio was defined by anthropometric measurements of modern portraits of twenty consecutive Miss World winners. The concept was then validated for age and attractiveness across centuries by analysis of modern female photographs and works of art acknowledged for portraying beautiful young and older women in classical paintings. The frame height correlated inversely with attractiveness in modern female portrait photographs. The eye fissure frame ratio of modern idealized female portraits was similar to that of beautiful female portraits idealized by classical artists. In contrast, the eye fissure frames of classical artists' portraits of their mothers were significantly wider than those of beautiful younger women. The Frame Concept is a valid artistic tool that provides an understanding of both the aesthetic and aging characteristics of the female periorbital region, enabling the practitioner to plan appropriate aesthetic interventions. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  9. Real-time subpixel-accuracy tracking of single mitochondria in neurons reveals heterogeneous mitochondrial motion.

    PubMed

    Alsina, Adolfo; Lai, Wu Ming; Wong, Wai Kin; Qin, Xianan; Zhang, Min; Park, Hyokeun

    2017-11-04

    Mitochondria are essential for cellular survival and function. In neurons, mitochondria are transported to various subcellular regions as needed. Thus, defects in the axonal transport of mitochondria are related to the pathogenesis of neurodegenerative diseases, and the movement of mitochondria has been the subject of intense research. However, the inability to accurately track mitochondria with subpixel accuracy has hindered this research. Here, we report an automated method for tracking mitochondria based on the center of fluorescence. This tracking method, which is accurate to approximately one-tenth of a pixel, uses the centroid of an individual mitochondrion and provides information regarding the distance traveled between consecutive imaging frames, instantaneous speed, net distance traveled, and average speed. Importantly, this new tracking method enables researchers to observe both directed motion and undirected movement (i.e., in which the mitochondrion moves randomly within a small region, following a sub-diffusive motion). This method significantly improves our ability to analyze the movement of mitochondria and sheds light on the dynamic features of mitochondrial movement. Copyright © 2017 Elsevier Inc. All rights reserved.
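
    The centroid-based sub-pixel tracking described above can be sketched as follows; the synthetic spot, the frame interval, and the intensity model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def centroid(img):
    """Center of fluorescence: the intensity-weighted mean position,
    giving sub-pixel localisation of a single mitochondrion."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (ys * img).sum() / total, (xs * img).sum() / total

def track(frames, dt):
    """Per-step displacement and instantaneous speed from the
    centroids of consecutive imaging frames."""
    cs = np.array([centroid(f) for f in frames])
    steps = np.linalg.norm(np.diff(cs, axis=0), axis=1)  # pixels/step
    return cs, steps / dt                                # pixels/second

# Synthetic spot moving 0.5 px per frame along x, its intensity split
# between the two pixels straddling the true position.
frames = []
for t in range(4):
    f = np.zeros((16, 16))
    x = 5 + 0.5 * t
    f[8, int(np.floor(x))] = 1 - (x - np.floor(x))
    f[8, int(np.floor(x)) + 1] = x - np.floor(x)
    frames.append(f)

cs, speeds = track(frames, dt=0.1)
```

    Here the recovered centroids land exactly on the sub-pixel positions 5.0, 5.5, 6.0, 6.5, so the speed is constant even though the brightest pixel only moves every other frame; net distance and average speed follow directly from `cs`.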

  10. The Eight Frame Colored Squiggle Technique

    ERIC Educational Resources Information Center

    Steinhardt, Lenore

    2006-01-01

    In this art therapy adaptation of the squiggle technique, the client draws eight colored squiggles on a paper folded into eight frames and then develops them into images utilizing a full range of color. The client is encouraged to write titles on each frame and use them to compose a story. This technique often stimulates emergence of meaningful…

  11. Video shot boundary detection using region-growing-based watershed method

    NASA Astrophysics Data System (ADS)

    Wang, Jinsong; Patel, Nilesh; Grosky, William

    2004-10-01

    In this paper, a novel shot-boundary detection approach is presented, based on the popular region-growing segmentation method, watershed segmentation. In image processing, gray-scale pictures can be viewed as topographic reliefs in which the numerical value of each pixel represents the elevation at that point. The watershed method segments images by filling basins with water starting at local minima; dams are built where water from different basins meets. In our method, low-level features are extracted from each frame of the video sequence, and each frame is treated as a point in the feature space. The density of each point is defined as the sum of the influence functions of all neighboring data points. Each frame is then transformed from the feature space into a topographic space through this density function: the height function originally used in watershed segmentation is replaced by the inverted density at the point, so that the highest densities become local minima. Watershed segmentation is subsequently performed in the topographic space. The intuition behind our method is that frames within a shot are highly agglomerative in the feature space and are likely to be merged together, while the transitional frames between shots are not; the latter have lower density values and are less likely to be clustered, provided the markers and the stopping criterion are chosen carefully.
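
    The density construction can be illustrated with a one-dimensional toy feature space; the Gaussian influence function and the feature values below are assumptions for illustration, and a real system would run the full watershed on the inverted density rather than take a single minimum.

```python
import numpy as np

def density(features, sigma=1.0):
    """Density of each frame in feature space: the sum of Gaussian
    influence functions of all frames. Inverting this density yields
    the height function on which the watershed is run."""
    d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2)).sum(axis=1)

# Two shots: frames cluster around two feature values, with one
# transitional frame between them.
feats = np.array([[0.0], [0.1], [0.05], [2.5], [5.0], [5.1], [4.95]])
dens = density(feats, sigma=0.5)
boundary = int(np.argmin(dens))   # least dense frame = shot change
print(boundary)  # 3
```

    Frames inside a shot sit in dense regions (deep basins after inversion), while the transitional frame is isolated and ends up on the dam between basins.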

  12. Symptomatic venous thromboembolism following circular frame treatment for tibial fractures.

    PubMed

    Vollans, S; Chaturvedi, A; Sivasankaran, K; Madhu, T; Hadland, Y; Allgar, V; Sharma, H K

    2015-01-01

    Venous thromboembolism (VTE) is a significant cause of morbidity and mortality following tibial fractures. The risk is as high as 77% without prophylaxis and around 10% with prophylaxis. Within the current literature there are no figures reported specifically for individuals treated with circular frames. Our aim was to evaluate the VTE incidence within a single-surgeon series and to evaluate potential risk factors. We retrospectively reviewed our consecutive single-surgeon series of 177 patients admitted to a major trauma unit with tibial fractures. All patients received standardised care, including chemical thromboprophylaxis from within 24 h of injury until independent mobility was achieved. We comprehensively reviewed our prospective database and medical records, looking at demographics and potential risk factors. Seven patients (4.0% ± 2.87%) developed symptomatic VTE during the course of frame treatment: three deep vein thromboses (DVTs) and four pulmonary emboli (PEs). Those with a VTE event had a significantly increased body mass index (BMI) (p = 0.01) compared with those without symptomatic VTE. No differences (p > 0.05) were observed between the groups in age, gender, smoking status, fracture type (anatomical location or open/closed), delay to frame treatment, post-frame weight-bearing status, inpatient stay or total duration of frame treatment. This study suggests that increased BMI is a statistically significant risk factor for VTE, as reported in the current literature. In addition, we calculate that the true risk of VTE following circular frame treatment for tibial fracture in our series lies between 1.13% and 6.87%, which is at least comparable to other forms of treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    NASA Astrophysics Data System (ADS)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

    A recent revolution in miniaturised sensor technology has provided the market with novel hyperspectral imagers operating on the frame format principle. For unmanned aerial vehicle (UAV) based remote sensing, frame format technology is highly attractive in comparison with the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility of capturing stereoscopic data sets, opening an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one approach to capturing multi- or hyperspectral frame images. When a sensor based on tuneable filters is operated from a mobile platform, such as a UAV, the individual bands are not aligned, because the full spectrum is recorded time-sequentially. The objective of this investigation was to study the aspects of band registration for an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations; afterwards, hyperspectral orthomosaics or hyperspectral point clouds are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high-spatial-resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5 pixel. The empirical assessment confirmed this performance and showed that, with the novel method, most of the band misalignments were smaller than the pixel size. Furthermore, the performance of the band alignment was shown to depend on the spatial distance from the reference band.

  14. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a random-access, real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between the camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host-computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card.
These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  15. Real-time integrated photoacoustic and ultrasound (PAUS) imaging system to guide interventional procedures: ex vivo study.

    PubMed

    Wei, Chen-Wei; Nguyen, Thu-Mai; Xia, Jinjun; Arnal, Bastien; Wong, Emily Y; Pelivanov, Ivan M; O'Donnell, Matthew

    2015-02-01

    Because of depth-dependent light attenuation, bulky, low-repetition-rate lasers are used in most photoacoustic (PA) systems to provide pulse energies sufficient to image at depth within the body. However, integrating these lasers with real-time clinical ultrasound (US) scanners has been problematic because of their size and cost. In this paper, an integrated PA/US (PAUS) imaging system is presented, operating at frame rates >30 Hz. By employing a portable, low-cost, low-pulse-energy (~2 mJ/pulse), high-repetition-rate (~1 kHz), 1053-nm laser and a rotating galvo-mirror system enabling rapid laser-beam scanning over the imaging area, the approach is demonstrated for potential applications requiring a few centimeters of penetration. In particular, we demonstrate real-time (30 Hz frame rate) imaging (by combining multiple single-shot sub-images covering the scan region) of an 18-gauge needle inserted into a piece of chicken breast, with subsequent delivery of an absorptive agent at more than 1-cm depth, to mimic PAUS guidance of an interventional procedure. A signal-to-noise ratio of more than 35 dB is obtained for the needle in an imaging area of 2.8 × 2.8 cm (depth × lateral). Higher frame rate operation is envisioned with an optimized scanning scheme.

  16. Determination of Stent Frame Displacement After Endovascular Aneurysm Sealing.

    PubMed

    van Veen, Ruben; van Noort, Kim; Schuurmann, Richte C L; Wille, Jan; Slump, Cornelis H; de Vries, Jean-Paul P M

    2018-02-01

    To describe and validate a new methodology for visualizing and quantifying 3-dimensional (3D) displacement of the stent frames of the Nellix endosystem after endovascular aneurysm sealing (EVAS). The 3D positions of the stent frames were registered to 5 fixed anatomical landmarks on the post-EVAS computed tomography (CT) scans, facilitating comparison of the position and shape of the stent frames between consecutive follow-up scans. Displacement of the proximal and distal ends of the stent frames, of the entire stent frame trajectories, and of the distance between the stent frames was determined for 6 patients with >5-mm displacement and 6 patients with <5-mm displacement at 1-year follow-up. The measurements were performed by 2 independent observers; the intraclass correlation coefficient (ICC) was used to determine interobserver variability. Three types of displacement were identified: displacement of the proximal and/or distal end of the stent frames, lateral displacement of one or both stent frames, and stent frame buckling. The ICC ranged from good (0.750) to excellent (0.958). No endoleak or migration was detected in the 12 patients on conventional CT angiography at 1 year. However, of the 6 patients with >5-mm displacement on the 1-year CT as determined by the new methodology, 2 went on to develop a type Ia endoleak during longer follow-up, and displacement progressed to >15 mm in 2 others. No endoleak or progressive displacement was observed in the patients with <5-mm displacement. The sac-anchoring principle of the Nellix endosystem may result in several types of displacement that have not been observed during surveillance of conventional endovascular aneurysm repairs. The presented methodology allows precise 3D determination of the position of the Nellix endosystem and can detect subtle displacement better than standard CT angiography. Displacement >5 mm on 1-year CT scans reconstructed with the new methodology may forecast impaired sealing and anchoring of the Nellix endosystem.
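
    The landmark-based registration step could, for instance, use a least-squares rigid (Kabsch) fit between the landmark sets of two scans; the abstract does not specify the authors' exact algorithm, and the coordinates below are hypothetical.

```python
import numpy as np

def rigid_register(src, dst):
    """Kabsch algorithm: least-squares rotation R and translation t
    mapping landmark set `src` onto `dst` (both k x 3 arrays)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)               # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

# Five hypothetical anatomical landmarks; the follow-up scan is
# rotated 10 degrees and translated relative to the baseline scan.
src = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5.]])
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([2.0, -1.0, 3.0])

R, t = rigid_register(src, dst)
# Transfer a baseline stent-frame point into the follow-up coordinate
# frame; its distance to the measured follow-up point is displacement.
stent_tip = np.array([4.0, 2.0, 1.0])
aligned_tip = R @ stent_tip + t
```

    After aligning the scans on the anatomical landmarks in this way, any residual motion of the stent-frame points is displacement of the device itself rather than patient positioning.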

  17. Drifty shifty deluxe.m

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Josh Sugar, Dave Robinson

    2014-01-24

    Performs a cross-correlation calculation to align images. This is particularly useful for aligning the frames of a movie so that an object of interest does not drift spatially, as in in situ microscopy experiments, where movies are collected while an object changes over time and usually drifts as well. The script shifts the movie frames so that the object stays aligned from frame to frame, making it easy to see how the object changes without the added complication of motion.
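
    The script itself is MATLAB; a minimal Python sketch of the same FFT-based cross-correlation principle, with a synthetic drifted frame, might look like this (integer shifts only).

```python
import numpy as np

def drift_shift(ref, img):
    """Locate the peak of the FFT-based circular cross-correlation and
    return the integer (dy, dx) roll that realigns `img` with `ref`."""
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    dy, dx = np.unravel_index(xc.argmax(), xc.shape)
    h, w = ref.shape
    # Map wrapped peak coordinates to signed shifts.
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

ref = np.zeros((32, 32))
ref[10:14, 8:12] = 1.0                    # object of interest
img = np.roll(ref, (3, -2), axis=(0, 1))  # drifted copy of the frame

dy, dx = drift_shift(ref, img)
print(dy, dx)  # -3 2
```

    Rolling `img` by the returned (dy, dx) cancels the drift; applying this to every frame of a movie against a common reference keeps the object stationary from frame to frame.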

  18. Video Super-Resolution via Bidirectional Recurrent Convolutional Networks.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang

    2018-04-01

    Super resolving a low-resolution video, namely video super-resolution (SR), is usually handled by either single-image SR or multi-frame SR. Single-image SR deals with each video frame independently and ignores the intrinsic temporal dependency of video frames, which actually plays a very important role in video SR. Multi-frame SR generally extracts motion information, e.g., optical flow, to model the temporal dependency, but often incurs high computational cost. Considering that recurrent neural networks (RNNs) can model long-term temporal dependency of video sequences well, we propose a fully convolutional RNN, named bidirectional recurrent convolutional network, for efficient multi-frame SR. Different from vanilla RNNs, 1) the commonly used full feedforward and recurrent connections are replaced with weight-sharing convolutional connections, which greatly reduce the number of network parameters and model the temporal dependency at a finer level, i.e., patch-based rather than frame-based; and 2) connections from input layers at previous timesteps to the current hidden layer are added by 3D feedforward convolutions, which aim to capture discriminative spatio-temporal patterns for short-term fast-varying motions in local adjacent frames. Due to the cheap convolutional operations, our model has a low computational complexity and runs orders of magnitude faster than other multi-frame SR methods. With the powerful temporal dependency modeling, our model can super resolve videos with complex motions and achieve good performance.
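
    The central recurrence, convolutional feedforward and recurrent connections evaluated in both temporal directions, can be sketched with a toy 1-D NumPy version (random untrained weights and a simple average fusion, standing in for the trained network described in the paper):

```python
import numpy as np

def conv_same(signal, kernel):
    """1-D 'same' convolution -- the weight-sharing connection that
    replaces a fully connected layer in the recurrent network."""
    return np.convolve(signal, kernel, mode="same")

def bidirectional_rcn(frames, wx, wh):
    """Minimal bidirectional recurrent convolutional pass over a
    sequence of 1-D 'frames': h_t = relu(wx * x_t + wh * h_{t-1}),
    run forward and backward in time, then fused by averaging."""
    def one_direction(seq):
        h = np.zeros_like(seq[0])
        out = []
        for x in seq:
            # Convolutional feedforward plus convolutional recurrence.
            h = np.maximum(0.0, conv_same(x, wx) + conv_same(h, wh))
            out.append(h)
        return out
    fwd = one_direction(frames)
    bwd = one_direction(frames[::-1])[::-1]
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]
```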

  19. Graphics processing unit accelerated intensity-based optical coherence tomography angiography using differential frames with real-time motion correction.

    PubMed

    Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi

    2014-02-01

    We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
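
    The per-frame computation described above can be sketched as follows (a simplified CPU-side Python/NumPy illustration, not the GPU implementation; with max_shift=2 the search evaluates the same 5 x 5 = 25 candidate angiographic images mentioned in the abstract):

```python
import numpy as np

def angiogram(frame_a, frame_b, max_shift=2):
    """Intensity-based OCT angiography: squared difference of two
    sequential structural frames, with bulk-tissue-motion correction
    chosen as the axial/lateral pixel shift that minimizes the sum of
    the difference image."""
    best = None
    for dz in range(-max_shift, max_shift + 1):      # axial shifts
        for dx in range(-max_shift, max_shift + 1):  # lateral shifts
            shifted = np.roll(frame_b, (dz, dx), axis=(0, 1))
            diff = (frame_a - shifted) ** 2
            if best is None or diff.sum() < best.sum():
                best = diff
    return best
```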

  20. Design of an automated imaging system for use in a space experiment

    NASA Technical Reports Server (NTRS)

    Hartz, William G.; Bozzolo, Nora G.; Lewis, Catherine C.; Pestak, Christopher J.

    1991-01-01

    An experiment, occurring on an orbiting platform, examines the mass transfer across gas-liquid and liquid-liquid interfaces. It employs an imaging system with real-time image analysis. The design includes optical design, imager selection and integration, positioner control, image recording, software development for processing, and interfaces to telemetry. It addresses the constraints of weight, volume, and electric power associated with placing the experiment in the Space Shuttle cargo bay. Challenging elements of the design are: imaging and recording of a 200-micron-diameter bubble with a resolution of 2 microns to serve as a primary source of data; frame rates varying from 500 frames per second to 1 frame per second, depending on the experiment phase; and providing three-dimensional information to determine the shape of the bubble.

  1. Image enhancement software for underwater recovery operations: User's manual

    NASA Astrophysics Data System (ADS)

    Partridge, William J.; Therrien, Charles W.

    1989-06-01

    This report describes software for performing image enhancement on live or recorded video images. The software was developed for operational use during underwater recovery operations at the Naval Undersea Warfare Engineering Station. The image processing is performed on an IBM-PC/AT compatible computer equipped with hardware to digitize and display video images. The software provides contrast enhancement and other similar functions in real time through hardware lookup tables, automatic histogram equalization, and the ability to capture one or more frames and either average them or apply one of several different processing algorithms to a captured frame. The report is in the form of a user manual for the software and includes guided tutorial and reference sections. A Digital Image Processing Primer in the appendix explains the principal concepts used in the image processing.
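
    Two of the operations listed above, LUT-driven histogram equalization and frame averaging, are standard enough to sketch directly (illustrative NumPy code, not the original IBM-PC/AT software):

```python
import numpy as np

def equalize(frame):
    """Histogram equalization of an 8-bit frame via a 256-entry lookup
    table -- the same mechanism hardware LUTs use for real-time
    contrast enhancement."""
    hist = np.bincount(frame.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]            # first occupied bin
    lut = np.round((cdf - cdf_min) / (frame.size - cdf_min) * 255)
    lut = np.clip(lut, 0, 255).astype(np.uint8)
    return lut[frame]                            # apply the LUT per pixel

def average(frames):
    """Frame averaging to suppress uncorrelated video noise."""
    return np.mean(np.stack(frames), axis=0).astype(np.uint8)
```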

  2. An abuttable CCD imager for visible and X-ray focal plane arrays

    NASA Technical Reports Server (NTRS)

    Burke, Barry E.; Mountain, Robert W.; Harrison, David C.; Bautz, Marshall W.; Doty, John P.

    1991-01-01

    A frame-transfer silicon charge-coupled-device (CCD) imager has been developed that can be closely abutted to other imagers on three sides of the imaging array. It is intended for use in multichip arrays. The device has 420 x 420 pixels in the imaging and frame-store regions and is constructed using a three-phase triple-polysilicon process. Particular emphasis has been placed on achieving low-noise charge detection for low-light-level imaging in the visible and maximum energy resolution for X-ray spectroscopic applications. Noise levels of 6 electrons at 1-MHz and less than 3 electrons at 100-kHz data rates have been achieved. Imagers have been fabricated on 1000-Ohm-cm material to maximize quantum efficiency and minimize split events in the soft X-ray regime.

  3. Real-time model-based vision system for object acquisition and tracking

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian; Gennery, Donald B.; Bon, Bruce; Litwin, Todd

    1987-01-01

    A machine vision system is described which is designed to acquire and track polyhedral objects moving and rotating in space by means of two or more cameras, programmable image-processing hardware, and a general-purpose computer for high-level functions. The image-processing hardware is capable of performing a large variety of operations on images and on image-like arrays of data. Acquisition utilizes image locations and velocities of the features extracted by the image-processing hardware to determine the three-dimensional position, orientation, velocity, and angular velocity of the object. Tracking correlates edges detected in the current image with edge locations predicted from an internal model of the object and its motion, continually updating velocity information to predict where edges should appear in future frames. With some 10 frames processed per second, real-time tracking is possible.
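
    The predict-and-correct cycle described above can be illustrated with a generic constant-velocity alpha-beta tracker for a single scalar coordinate (our simplified stand-in; the actual system estimates full 3D position, orientation, and angular velocity from edge correlations):

```python
def track(measurements, dt=0.1, alpha=0.85, beta=0.05):
    """Constant-velocity alpha-beta tracker: predict where a feature
    will appear in the next frame, then blend in the measurement."""
    x, v = measurements[0], 0.0      # initial position estimate, velocity
    estimates = []
    for z in measurements[1:]:
        x_pred = x + v * dt          # predict position in the next frame
        r = z - x_pred               # innovation: measurement minus prediction
        x = x_pred + alpha * r       # correct the position estimate
        v = v + (beta / dt) * r      # correct the velocity estimate
        estimates.append(x)
    return estimates
```

    With around 10 frames per second (dt of 0.1 s), such a filter converges onto a constant-velocity target and keeps predicting between frames.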

  4. System Integration of FastSPECT III, a Dedicated SPECT Rodent-Brain Imager Based on BazookaSPECT Detector Technology

    PubMed Central

    Miller, Brian W.; Furenlid, Lars R.; Moore, Stephen K.; Barber, H. Bradford; Nagarkar, Vivek V.; Barrett, Harrison H.

    2010-01-01

    FastSPECT III is a stationary, single-photon emission computed tomography (SPECT) imager designed specifically for imaging and studying neurological pathologies in rodent brain, including Alzheimer's and Parkinson's disease. Twenty independent BazookaSPECT [1] gamma-ray detectors acquire projections of a spherical field of view with pinholes selected for desired resolution and sensitivity. Each BazookaSPECT detector comprises a columnar CsI(Tl) scintillator, image intensifier, optical lens, and fast-frame-rate CCD camera. Data stream back to processing computers via FireWire interfaces, and heavy use of graphics processing units (GPUs) ensures that each frame of data is processed in real time to extract the images of individual gamma-ray events. Details of the system design, imaging aperture fabrication methods, and preliminary projection images are presented. PMID:21218137

  5. Digital methods of recording color television images on film tape

    NASA Astrophysics Data System (ADS)

    Krivitskaya, R. Y.; Semenov, V. M.

    1985-04-01

    Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal faceplates is still most effective for high fidelity. This method has been improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlaced-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest, image-transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The 9-step procedure includes preprocessing the total color television signal with reduction of noise level and time errors, followed by frame-frequency conversion and setting the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of TV camera and film tape, the simultaneous R, G, B signals are converted from interlaced scanning to sequential triads of color-quotient frames with linewise scanning at triple frequency. Color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve the signal quality and simplify the control of processes, not requiring stabilization of circuits, the image processing itself is still analog.

  6. Masked-backlighter technique used to simultaneously image x-ray absorption and x-ray emission from an inertial confinement fusion plasma.

    PubMed

    Marshall, F J; Radha, P B

    2014-11-01

    A method to simultaneously image both the absorption and the self-emission of an imploding inertial confinement fusion plasma has been demonstrated on the OMEGA Laser System. The technique involves the use of a high-Z backlighter, half of which is covered with a low-Z material, and a high-speed x-ray framing camera aligned to capture images backlit by this masked backlighter. Two strips of the four-strip framing camera record images backlit by the high-Z portion of the backlighter, while the other two strips record images aligned with the low-Z portion. The emission from the low-Z material is effectively eliminated by a high-Z filter positioned in front of the framing camera, limiting the detected backlighter emission to that of the principal emission line of the high-Z material. As a result, half of the images are of self-emission from the plasma alone, and the other half are of self-emission plus the backlighter. The advantage of this technique is that the self-emission simultaneous with the backlighter absorption is independently measured from a nearby direction. Absorption occurs only in the high-Z backlit frames; there, the self-emission contribution is removed by spatial separation from the absorption, by filtering, by using a backlighter much brighter than the self-emission, or by subtraction. The masked-backlighter technique has been used on the OMEGA Laser System to simultaneously measure the emission profiles and the absorption profiles of polar-driven implosions.

  7. Hyperspectral CMOS imager

    NASA Astrophysics Data System (ADS)

    Jerram, P. A.; Fryer, M.; Pratlong, J.; Pike, A.; Walker, A.; Dierickx, B.; Dupont, B.; Defernez, A.

    2017-11-01

    CCDs have been used for many years in hyperspectral imaging missions and have been extremely successful. These include the Medium Resolution Imaging Spectrometer (MERIS) [1] on Envisat, the Compact High Resolution Imaging Spectrometer (CHRIS) on Proba, and the Ozone Monitoring Instrument operating in the UV spectral region. ESA is also planning a number of further missions that are likely to use CCD technology (Sentinel 3, 4 and 5). However, CMOS sensors have a number of advantages which mean that they will probably be used for hyperspectral applications in the longer term. There are two main advantages of CMOS sensors. First, a hyperspectral image consists of spectral lines with a large difference in intensity; in a frame-transfer CCD, the faint spectral lines have to be transferred through the part of the imager illuminated by intense lines. This can lead to cross-talk, and whilst the problem can be reduced by the use of split frame transfer and faster line rates, CMOS sensors do not require a frame transfer and hence inherently do not suffer from it. Second, with a CMOS sensor the intense spectral lines can be read multiple times within a frame to give a significant increase in dynamic range. We describe the design and initial tests of a CMOS sensor for use in hyperspectral applications. This device has been designed to give as high a dynamic range as possible with minimum cross-talk. The sensor has been manufactured on high-resistivity epitaxial silicon wafers and has been back-thinned and left relatively thick in order to obtain the maximum quantum efficiency across the entire spectral range.

  8. Overlay of multiframe SEM images including nonlinear field distortions

    NASA Astrophysics Data System (ADS)

    Babin, S.; Borisov, S.; Ivonin, I.; Nakazawa, S.; Yamazaki, Y.

    2018-03-01

    To reduce charging and shrinkage, CD-SEMs utilize low electron energies and multiframe imaging. As a result, each successive frame is altered by stage and beam instability, as well as by charging. Regular averaging of the frames blurs the edges, which directly affects the extracted values of critical dimensions. A technique was developed to overlay multiframe images without loss of quality. The method takes into account drift, rotation, and magnification corrections, as well as nonlinear distortions due to wafer charging. A significant improvement in the signal-to-noise ratio and overall image quality was achieved without degradation of feature edge quality. The developed software is capable of working with regular and large images, up to 32K pixels in each direction.

  9. First experience with early dynamic (18)F-NaF-PET/CT in patients with chronic osteomyelitis.

    PubMed

    Freesmeyer, Martin; Stecker, Franz F; Schierz, Jan-Henning; Hofmann, Gunther O; Winkens, Thomas

    2014-05-01

    This study investigates whether early dynamic positron emission tomography/computed tomography (edPET/CT) using (18)F-sodium fluoride ((18)F-NaF) is feasible for depicting the early phases of radiotracer distribution in patients with chronic osteomyelitis (COM). A total of 12 ed(18)F-NaF-PET/CT examinations were performed on 11 consecutive patients (2 female, 9 male; age 53 ± 12 years) in list mode over 5 min, starting with radiopharmaceutical injection, before standard late (18)F-NaF-PET/CT. Eight consecutive time intervals (frames) were reconstructed for each patient: four of 15 s, then four of 60 s. Several volumes of interest (VOI) were selected, representing the affected area as well as different reference areas within bone and soft tissue. Maximum and mean ed standardized uptake values (edSUVmax and edSUVmean, respectively) were calculated in each VOI during each frame to measure early fluoride influx and accumulation. Results were compared between affected and non-affected (contralateral) bones. Starting in the 31-45 s frame, the affected bone area showed significantly higher edSUVmax and edSUVmean compared to the healthy contralateral region. The affected bone areas also differed significantly from non-affected contralateral regions in conventional late (18)F-NaF-PET/CT. This pilot study suggests that, in patients with COM, ed(18)F-NaF-PET offers additional information about early radiotracer distribution beyond standard (18)F-NaF-PET/CT, similar to a three-phase bone scan. The results should be validated in larger trials that directly compare ed(18)F-NaF-PET to a three-phase bone scan.

  10. Video attention deviation estimation using inter-frame visual saliency map analysis

    NASA Astrophysics Data System (ADS)

    Feng, Yunlong; Cheung, Gene; Le Callet, Patrick; Ji, Yusheng

    2012-01-01

    A viewer's visual attention during video playback is the matching of his eye gaze movement to the changing video content over time. If the gaze movement matches the video content (e.g., follows a rolling soccer ball), then the viewer keeps his visual attention. If the gaze location moves from one video object to another, then the viewer shifts his visual attention. A video that causes a viewer to shift his attention often is a "busy" video. Determining which video content is busy is an important practical problem; a busy video is difficult for an encoder to deploy region-of-interest (ROI)-based bit allocation on, and hard for a content provider to insert additional overlays like advertisements into, making the video even busier. One way to determine the busyness of video content is to conduct eye gaze experiments with a sizable group of test subjects, but this is time-consuming and cost-ineffective. In this paper, we propose an alternative method to determine the busyness of a video, formally called video attention deviation (VAD): analyzing the spatial visual saliency maps of the video frames across time. We first derive transition probabilities of a Markov model for eye gaze using saliency maps of a number of consecutive frames. We then compute the steady-state probability of the saccade state in the model, which is our estimate of VAD. We demonstrate that the computed steady-state probability for saccade using saliency map analysis matches that computed using actual gaze traces for a range of videos with different degrees of busyness. Further, our analysis can also be used to segment video into shorter clips of different degrees of busyness by computing the Kullback-Leibler divergence using consecutive motion-compensated saliency maps.
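
    The steady-state computation at the heart of the VAD estimate can be sketched as follows (the two-state chain and its transition probabilities are illustrative only; in the paper they are derived from saliency-map analysis):

```python
import numpy as np

def steady_state(P, iters=200):
    """Steady-state distribution of a Markov chain by power iteration;
    the probability of the 'saccade' state is the VAD estimate."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

# Toy 2-state model: state 0 = fixation/smooth pursuit, state 1 = saccade.
P = np.array([[0.9, 0.1],
              [0.6, 0.4]])
vad = steady_state(P)[1]   # estimated video attention deviation
```

    For this toy chain the stationary distribution solves pi = pi P, giving a saccade probability of 1/7.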

  11. Comparison of Optic Disc Margin Identified by Color Disc Photography and High-Speed Ultrahigh-Resolution Optical Coherence Tomography

    PubMed Central

    Manassakorn, Anita; Ishikawa, Hiroshi; Kim, Jong S.; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Gabriele, Michelle L.; Sung, Kyung Rim; Mumcuoglu, Tarkan; Duker, Jay S.; Fujimoto, James G.; Schuman, Joel S.

    2009-01-01

    Objective To determine the correspondence between optic disc margins evaluated using disc photography (DP) and optical coherence tomography (OCT). Methods From May 1, 2005, through November 10, 2005, 17 healthy volunteers (17 eyes) had raster scans (180 frames, 501 samplings per frame) centered on the optic disc taken with stereo-optic DP and high-speed ultrahigh-resolution OCT (hsUHR-OCT). Two image outputs were derived from the hsUHR-OCT data set: an en face hsUHR-OCT fundus image and a set of 180 frames of cross-sectional images. Three ophthalmologists independently and in a masked, randomized fashion marked the disc margin on the DP, hsUHR-OCT fundus, and cross-sectional images using custom software. Disc size (area and horizontal and vertical diameters) and location of the geometric disc center were compared among the 3 types of images. Results The hsUHR-OCT fundus image definition showed a significantly smaller disc size than the DP definition (P<.001, mixed-effects analysis). The hsUHR-OCT cross-sectional image definition showed a significantly larger disc size than the DP definition (P<.001). The geometric disc center location was similar among the 3 types of images except for the y-coordinate, which was significantly smaller in the hsUHR-OCT fundus images than in the DP images. Conclusion The optic disc margin as defined by hsUHR-OCT differed significantly from the margin defined by DP. PMID:18195219

  12. Imaging of vaporised sub-micron phase change contrast agents with high frame rate ultrasound and optics

    NASA Astrophysics Data System (ADS)

    Lin, Shengtao; Zhang, Ge; Jamburidze, Akaki; Chee, Melisse; Hau Leow, Chee; Garbin, Valeria; Tang, Meng-Xing

    2018-03-01

    Phase-change ultrasound contrast agent (PCCA), or nanodroplet, shows promise as an alternative to the conventional microbubble agent over a wide range of diagnostic applications. Meanwhile, high-frame-rate (HFR) ultrasound imaging with microbubbles enables unprecedented temporal resolution compared to traditional contrast-enhanced ultrasound imaging. Combining HFR ultrasound imaging with PCCAs offers the opportunity to observe and better understand PCCA behaviour after vaporisation by capturing this fast phenomenon at high temporal resolution. In this study, we utilised HFR ultrasound at frame rates in the kilohertz range (5-20 kHz) to image native and size-selected PCCA populations immediately after vaporisation in vitro, within clinical acoustic parameters. The size-selected PCCAs, obtained through filtration, are shown to preserve a sub-micron-sized (mean diameter < 200 nm) population without the micron-sized outliers (>1 µm) that originate from the native PCCA emulsion. The results demonstrate imaging signals with different amplitudes and temporal features compared to those of microbubbles. Compared with the microbubbles, both the B-mode and pulse-inversion (PI) signals from the vaporised PCCA populations were reduced significantly in the first tens of milliseconds, while only the B-mode signals from the PCCAs recovered during the next 400 ms, suggesting significant changes to the size distribution of the PCCAs after vaporisation. It is also shown that such recovery in signal over time is not evident when using size-selected PCCAs. Furthermore, it was found that signals from the vaporised PCCA populations are affected by the amplitude and frame rate of the HFR ultrasound imaging. Using high-speed optical camera observation (30 kHz), we observed a change in particle size in the vaporised PCCA populations exposed to the HFR ultrasound imaging pulses. These findings can further the understanding of PCCA behaviour under HFR ultrasound imaging.

  13. Iodine filter imaging system for subtraction angiography using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.

    1993-11-01

    A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single-energy broad-bandwidth X-ray produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. By synchronizing an oscillating mirror with the movement of the iodine filter into and out of the X-ray beam, the two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube. Both images are read out by electron beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety-two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.

  14. Multi-volumetric registration and mosaicking using swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Bozic, Ivan; El-Haddad, Mohamed T.; Malone, Joseph D.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.

    2017-02-01

    Ophthalmic diagnostic imaging using optical coherence tomography (OCT) is limited by bulk eye motion and a fundamental trade-off between field-of-view (FOV) and sampling density. Here, we introduce a novel multi-volumetric registration and mosaicking method using our previously described multimodal swept-source spectrally encoded scanning laser ophthalmoscopy and OCT (SS-SESLO-OCT) system. Our SS-SESLO-OCT acquires an entire en face fundus SESLO image simultaneously with every OCT cross-section at 200 frames per second. In vivo human retinal imaging was performed in a healthy volunteer, and three volumetric datasets were acquired with the volunteer moving freely and refixating between each acquisition. In post-processing, SESLO frames were used to estimate en face rotational and translational motions by registering every frame in all three volumetric datasets to the first frame of the first volume. OCT cross-sections were contrast-normalized and registered axially and rotationally across all volumes. Rotational and translational motions calculated from the SESLO frames were applied to the corresponding OCT B-scans to compensate for inter- and intra-B-scan bulk motions, and the three registered volumes were combined into a single interpolated multi-volumetric mosaic. Using complementary information from SESLO and OCT over serially acquired volumes, we demonstrated multi-volumetric registration and mosaicking to recover regions of missing data resulting from blinks, saccades, and ocular drifts. We believe our registration method can be directly applied to multi-volumetric motion compensation, averaging, widefield mosaicking, and vascular mapping, with potential applications in ophthalmic clinical diagnostics, handheld imaging, and intraoperative guidance.

  15. Evaluation of stability of stereotactic space defined by cone-beam CT for the Leksell Gamma Knife Icon.

    PubMed

    AlDahlawi, Ismail; Prasad, Dheerendra; Podgorsak, Matthew B

    2017-05-01

    The Gamma Knife Icon comes with an integrated cone-beam CT (CBCT) for image-guided stereotactic treatment deliveries. The CBCT can be used to define the Leksell stereotactic space from imaging alone, without the traditional invasive frame system, which also allows frameless thermoplastic-mask stereotactic treatments (single or fractionated) with the Gamma Knife unit. In this study, we used an in-house-built marker tool to evaluate the stability of the CBCT-based stereotactic space and its agreement with the standard frame-based stereotactic space. We imaged the tool with a CT indicator box using our CT simulator at the beginning, middle, and end of the study period (6 weeks) to determine the frame-based stereotactic space. The tool was also scanned with the Icon's CBCT on a daily basis throughout the study period, and the CBCT images were used to determine the CBCT-based stereotactic space. The coordinates of each marker were determined in each CT and CBCT scan using the Leksell GammaPlan treatment planning software. The magnitudes of the vector difference between the means of each marker in frame-based and CBCT-based stereotactic space ranged from 0.21 to 0.33 mm, indicating good agreement between the CBCT-based and frame-based stereotactic space definitions. Scanning 4 months later showed good long-term stability of the CBCT-based stereotactic space definition. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
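
    The reported agreement metric, the magnitude of the vector difference between the mean position of each marker in the two stereotactic spaces, can be computed as follows (a sketch with assumed array shapes, not the GammaPlan workflow):

```python
import numpy as np

def mean_vector_difference(coords_frame, coords_cbct):
    """Magnitude (mm) of the 3-D vector difference between the mean
    position of each marker in frame-based vs CBCT-based stereotactic
    space; inputs are arrays shaped (n_scans, n_markers, 3)."""
    mean_frame = coords_frame.mean(axis=0)   # (n_markers, 3)
    mean_cbct = coords_cbct.mean(axis=0)
    return np.linalg.norm(mean_frame - mean_cbct, axis=1)
```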

  16. A 20 Mfps high frame-depth CMOS burst-mode imager with low power in-pixel NMOS-only passive amplifier

    NASA Astrophysics Data System (ADS)

    Wu, L.; San Segundo Bello, D.; Coppejans, P.; Craninckx, J.; Wambacq, P.; Borremans, J.

    2017-02-01

    This paper presents a 20 Mfps 32 × 84 pixel CMOS burst-mode imager featuring high frame depth with a passive in-pixel amplifier. Compared to CCD alternatives, CMOS burst-mode imagers are attractive for their low power consumption and integration of circuitry such as ADCs. Due to storage-capacitor size and noise limitations, CMOS burst-mode imagers usually suffer from a lower frame depth than CCD implementations. In order to capture fast transitions over a longer time span, an in-pixel CDS technique has been adopted to halve the number of memory cells required per frame. Moreover, integrated with the in-pixel CDS, an in-pixel NMOS-only passive amplifier relaxes the kTC noise requirements of the memory bank, allowing the use of smaller capacitors. Specifically, a dense 108-cell MOS memory bank (10 fF/cell) has been implemented inside a 30 μm pitch pixel, with an area of 25 × 30 μm2 occupied by the memory bank. Applying in-pixel CDS and amplification yields an improvement of about 4x in frame depth per pixel area. With the amplifier's gain of 3.3, an FD input-referred RMS noise of 1 mV is achieved at 20 Mfps operation. The amplification is performed without burning DC current; including the pixel source-follower biasing, the full pixel consumes 10 μA from a 3.3 V supply at full speed. The chip has been fabricated in imec's 130 nm CMOS CIS technology.

  17. Compression Algorithm Analysis of In-Situ (S)TEM Video: Towards Automatic Event Detection and Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teuton, Jeremy R.; Griswold, Richard L.; Mehdi, Beata L.

    Precise analysis of both (S)TEM images and video is a time- and labor-intensive process. As an example, determining when crystal growth and shrinkage occurs during the dynamic process of Li dendrite deposition and stripping involves manually scanning through each frame in the video to extract a specific set of frames/images. For large numbers of images, this process can be very time consuming, so a fast and accurate automated method is desirable. Given this need, we developed software that uses analysis of video compression statistics for detecting and characterizing events in large data sets. This software works by converting the data into a series of images which it compresses into an MPEG-2 video using the open source “avconv” utility [1]. The software does not use the video itself, but rather analyzes the video statistics from the first pass of the video encoding that avconv records in the log file. This file contains statistics for each frame of the video, including the frame quality, intra-texture and predicted-texture bits, and forward and backward motion vector resolution, among others. In all, avconv records 15 statistics for each frame. By combining different statistics, we have been able to detect events in various types of data. We have developed an interactive tool for exploring the data and the statistics that aids the analyst in selecting useful statistics for each analysis. Going forward, an algorithm for detecting and possibly describing events automatically can be written based on statistic(s) for each data type.
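
    A minimal event detector over one such per-frame statistic might look like this (a robust z-score rule of our own choosing, sketched in Python; the abstract leaves the final detection algorithm as future work):

```python
import numpy as np

def detect_events(frame_stat, z_thresh=3.0):
    """Flag frames whose encoder statistic (e.g. per-frame quality or
    motion-vector bits parsed from the first-pass encoder log) deviates
    by more than `z_thresh` robust standard scores from the median."""
    s = np.asarray(frame_stat, dtype=float)
    med = np.median(s)
    mad = np.median(np.abs(s - med)) or 1.0    # robust spread; guard /0
    z = 0.6745 * (s - med) / mad               # MAD-based z-score
    return np.flatnonzero(np.abs(z) > z_thresh)
```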

  18. Hybrid MRI-Ultrasound acquisitions, and scannerless real-time imaging.

    PubMed

    Preiswerk, Frank; Toews, Matthew; Cheng, Cheng-Chieh; Chiou, Jr-Yuan George; Mei, Chang-Sheng; Schaefer, Lena F; Hoge, W Scott; Schwartz, Benjamin M; Panych, Lawrence P; Madore, Bruno

    2017-09-01

    To combine MRI, ultrasound, and computer science methodologies toward generating MRI contrast at the high frame rates of ultrasound, inside and even outside the MRI bore. A small transducer, held onto the abdomen with an adhesive bandage, collected ultrasound signals during MRI. Based on these ultrasound signals and their correlations with MRI, a machine-learning algorithm created synthetic MR images at frame rates up to 100 per second. In one particular implementation, volunteers were taken out of the MRI bore with the ultrasound sensor still in place, and MR images were generated on the basis of ultrasound signal and learned correlations alone in a "scannerless" manner. Hybrid ultrasound-MRI data were acquired in eight separate imaging sessions. Locations of liver features, in synthetic images, were compared with those from acquired images: The mean error was 1.0 pixel (2.1 mm), with best case 0.4 and worst case 4.1 pixels (in the presence of heavy coughing). For results from outside the bore, qualitative validation involved optically tracked ultrasound imaging with/without coughing. The proposed setup can generate an accurate stream of high-speed MR images, up to 100 frames per second, inside or even outside the MR bore. Magn Reson Med 78:897-908, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  19. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed along the edges of the game machine's screen: at the four corners and at the midpoints of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. The simulator is furnished with one camera, which isolates the image of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the areas of the bright spots. After calibrating the camera with a pinhole model, four equations can be derived from the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the LEDs' image is taken as the virtual shooting point. Applying the perspective-transformation matrix to the coordinates of this center point yields the virtual shooting point's coordinates in the world coordinate system. When a game player shoots at a target about two meters away, the coordinate error of the method described in this paper is less than 10 mm. We obtain 65 coordinate results per second, which meets the requirement of a real-time system. This shows that the algorithm is reliable and effective.
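
    The geometric core of this method, four point correspondences yielding a perspective-transformation matrix that maps a point in the LED image to screen coordinates, can be sketched with a direct linear transform. This is a minimal sketch under the abstract's assumptions; the function names and example coordinates are illustrative, and the paper's full pipeline also includes pinhole camera calibration and the inter-frame LED detection step.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 perspective-transformation matrix mapping four
    image points `src` onto four screen points `dst` (8-unknown DLT)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)  # fix h33 = 1

def apply_homography(H, pt):
    """Map an image point to screen coordinates (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

    Given the pixel positions of the four corner LEDs and their known physical positions on the screen, the aim point would be recovered as `apply_homography(H, center_of_led_image)`.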

  20. The Use of Message Framing to Promote Sexual Risk Reduction in Young Adolescents: A Pilot Exploratory Study

    ERIC Educational Resources Information Center

    Camenga, Deepa R.; Hieftje, Kimberly D.; Fiellin, Lynn E.; Edelman, E. Jennifer; Rosenthal, Marjorie S.; Duncan, Lindsay R.

    2014-01-01

    Few studies have explored the application of message framing to promote health behaviors in adolescents. In this exploratory study, we examined young adolescents' selection of gain- versus loss-framed images and messages when designing an HIV-prevention intervention to promote delayed sexual initiation. Twenty-six adolescents (aged 10-14 years)…
