Sample records for image sequences captured

  1. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost, readily available off-the-shelf components. The camera can record six-frame sequences at frame rates up to 20 kHz, or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using this approach are of sufficiently high quality to be used for obtaining full-field quantitative information with techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
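The channel demultiplexing at the heart of spectral shuttering can be sketched in a few lines: if each LED color fires once per sensor exposure, the three channels of a captured RGB frame are three consecutive time samples. A minimal illustration with synthetic frames (hypothetical function, not the authors' code):

```python
import numpy as np

def demultiplex_3ccd(color_frames):
    """Split each RGB frame into three time-ordered monochrome frames.

    Under spectral shuttering, the red, green, and blue LED pulses fire
    consecutively within one sensor exposure, so channel order equals
    temporal order (illustrative assumption).
    """
    sequence = []
    for frame in color_frames:          # frame: (H, W, 3) array
        for c in range(3):              # R, G, B -> t, t+dt, t+2dt
            sequence.append(frame[..., c])
    return sequence

# Two color frames yield a six-frame high-speed sequence
frames = [np.random.rand(8, 8, 3) for _ in range(2)]
seq = demultiplex_3ccd(frames)
```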

  2. Integrated Georeferencing of Stereo Image Sequences Captured with a Stereovision Mobile Mapping System - Approaches and Practical Results

    NASA Astrophysics Data System (ADS)

    Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.

    2012-07-01

    Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates, resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment, which builds the foundation for a wide range of 3D mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3D mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image-based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient, or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed that allow the INS/GNSS-based trajectory to be validated and updated with independently estimated positions during prolonged GNSS signal outages, in order to raise the georeferencing accuracy to the project requirements.

  3. Preparation of 2D sequences of corneal images for 3D model building.

    PubMed

    Elbita, Abdulhakim; Qahwaji, Rami; Ipson, Stanley; Sharif, Mhd Saeed; Ghanchi, Faruque

    2014-04-01

    A confocal microscope provides a sequence of images, at incremental depths, of the various corneal layers and structures. From these, medical practitioners can extract clinical information on the state of health of the patient's cornea. In this work we are addressing problems associated with capturing and processing these images, including blurring, non-uniform illumination and noise, as well as the displacement of images laterally and in the anterior-posterior direction caused by subject movement. The latter may cause some of the captured images to be out of sequence in terms of depth. In this paper we introduce automated algorithms for classification, reordering, registration and segmentation to solve these problems. The successful implementation of these algorithms could open the door for another interesting development, which is the 3D modelling of these sequences. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.

    PubMed

    Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun

    2018-06-01

    Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species, and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by the inevitable saccades and the exposure time required for maintaining a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We propose one of the first approaches aimed specifically at deblurring sequential MSI images, distinguished from many current image deblurring techniques by resolving the blur kernel simultaneously for all the images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, taking into account the different wavelengths used for capturing different images in the sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
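The mutual-information similarity used to relate temporally neighboring MSI images can be computed from a joint intensity histogram. A minimal sketch with synthetic data and an illustrative bin count (not the authors' implementation):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between two images, estimated
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = img + 0.05 * rng.standard_normal((64, 64))
# An image shares far more information with a noisy copy of itself
# than with an unrelated image, which is what makes MI a useful
# similarity term across wavelengths.
```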

  5. Very-high-resolution time-lapse photography for plant and ecosystems research.

    PubMed

    Nichols, Mary H; Steven, Janet C; Sargent, Randy; Dille, Paul; Schapiro, Joshua

    2013-09-01

    Traditional photography is a compromise between image detail and area covered. We report a new method for creating time-lapse sequences of very-high-resolution photographs to produce zoomable images that facilitate observation across a range of spatial and temporal scales. • A robotic camera mount and software were used to capture images of the growth and movement in Brassica rapa every 15 s in the laboratory. The resultant time-lapse sequence (http://timemachine.gigapan.org/wiki/Plant_Growth) captures growth detail such as circumnutation. A modified, solar-powered system was deployed at a remote field site in southern Arizona. Images were collected every 2 h over a 3-mo period to capture the response of vegetation to monsoon season rainfall (http://timemachine.gigapan.org/wiki/Arizona_Grasslands). • A technique for observing time sequences of both individual plant and ecosystem response at a range of spatial scales is available for use in the laboratory and in the field.

  6. Very-high-resolution time-lapse photography for plant and ecosystems research

    PubMed Central

    Nichols, Mary H.; Steven, Janet C.; Sargent, Randy; Dille, Paul; Schapiro, Joshua

    2013-01-01

    • Premise of the study: Traditional photography is a compromise between image detail and area covered. We report a new method for creating time-lapse sequences of very-high-resolution photographs to produce zoomable images that facilitate observation across a range of spatial and temporal scales. • Methods and Results: A robotic camera mount and software were used to capture images of the growth and movement in Brassica rapa every 15 s in the laboratory. The resultant time-lapse sequence (http://timemachine.gigapan.org/wiki/Plant_Growth) captures growth detail such as circumnutation. A modified, solar-powered system was deployed at a remote field site in southern Arizona. Images were collected every 2 h over a 3-mo period to capture the response of vegetation to monsoon season rainfall (http://timemachine.gigapan.org/wiki/Arizona_Grasslands). • Conclusions: A technique for observing time sequences of both individual plant and ecosystem response at a range of spatial scales is available for use in the laboratory and in the field. PMID:25202588

  7. MHz-Rate NO PLIF Imaging in a Mach 10 Hypersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Jiang, N.; Webster, M.; Lempert, Walter R.; Miller, J. D.; Meyer, T. R.; Danehy, Paul M.

    2010-01-01

    NO PLIF imaging at repetition rates as high as 1 MHz is demonstrated in the NASA Langley 31-inch Mach 10 hypersonic wind tunnel. Approximately two hundred time-correlated image sequences, of between ten and twenty individual frames each, were obtained over eight days of wind tunnel testing spanning two entries in March and September of 2009. The majority of the image sequences were obtained from the boundary layer of a 20° flat plate model, in which transition was induced using a variety of cylindrical and triangular shaped protuberances. The high speed image sequences captured a variety of laminar and transitional flow phenomena, ranging from mostly laminar flow, typically at lower Reynolds number and/or in the near wall region of the model, to highly transitional flow in which the temporal evolution and progression of characteristic streak instabilities and/or corkscrew-shaped vortices could be clearly identified. A series of image sequences were also obtained from a 20° compression ramp at a 10° angle of attack, in which the temporal dynamics of the characteristic separated flow was captured in a time-correlated manner.

  8. Platform control for space-based imaging: the TOPSAT mission

    NASA Astrophysics Data System (ADS)

    Dungate, D.; Morgan, C.; Hardacre, S.; Liddle, D.; Cropp, A.; Levett, W.; Price, M.; Steyn, H.

    2004-11-01

    This paper describes the imaging mode ADCS design for the TOPSAT satellite, an Earth observation demonstration mission targeted at military applications. The baselined orbit for TOPSAT is a 600-700 km sun-synchronous orbit from which images up to 30° off track can be captured. For this baseline, the imaging camera provides a resolution of 2.5 m and a nominal image size of 15 x 15 km. The ADCS design solution for the imaging mode uses a moving demand approach to enable a single control algorithm solution for both the preparatory reorientation prior to image capture and the post-capture return to nadir pointing. During image capture proper, control is suspended to minimise the disturbances experienced by the satellite from the wheels. Prior to each imaging sequence, the moving demand attitude and rate profiles are calculated such that the correct attitude and rate are achieved at the correct orbital position, enabling the correct target area to be captured.

  9. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate performance. One approach to avoiding this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term in saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
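The idea of deactivating the data term where a frame is saturated can be illustrated with a simple per-pixel validity mask; the thresholds and synthetic frames below are illustrative, not the paper's actual cost functional:

```python
import numpy as np

def data_term_weights(frames, low=0.02, high=0.98):
    """Per-pixel weights that disable the data term where a frame is
    saturated (near black or near white). Thresholds are illustrative."""
    return [((f > low) & (f < high)).astype(float) for f in frames]

# Two differently exposed frames of the same synthetic scene
scene = np.linspace(0, 2, 100).reshape(10, 10)   # true radiance
short_exp = np.clip(scene * 0.5, 0, 1)   # dark frame: highlights preserved
long_exp = np.clip(scene * 2.0, 0, 1)    # bright frame: shadows preserved
w_short, w_long = data_term_weights([short_exp, long_exp])
# Combining frames: a pixel contributes a data term wherever at least
# one exposure left it unsaturated.
combined = np.maximum(w_short, w_long)
```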

  10. Extracting flat-field images from scene-based image sequences using phase correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caron, James N., E-mail: Caron@RSImd.com; Montes, Marcos J.; Obermark, Jerome L.

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence to estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
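Phase-correlation registration, the core of the method, estimates the shift between two frames from the peak of the inverse FFT of the normalized cross-power spectrum. A minimal integer-shift sketch (the paper uses a sub-pixel variant):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) translation between two images from
    the peak of the normalized cross-power spectrum."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real         # impulse at the displacement
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                         # wrap peaks past the midpoint
        dy -= h                             # around to negative shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(1)
scene = rng.random((64, 64))
shifted = np.roll(scene, (3, -5), axis=(0, 1))
```

Once all frames are registered this way, a median or mean over the aligned stack estimates the static scene, and dividing it out of each raw frame leaves the flat-field pattern.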

  11. Triptych of the Moon

    NASA Image and Video Library

    1999-09-10

    This composite image was made from images acquired by NASA's Cassini spacecraft, which captured a significant portion of the Moon during a Moon flyby imaging sequence. All three images have been scaled so that the brightness of Crisium basin, the dark circular region in the upper right,

  12. Display of travelling 3D scenes from single integral-imaging capture

    NASA Astrophysics Data System (ADS)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. This method makes it possible to improve the quality of 3D display images and videos.

  13. Information recovery through image sequence fusion under wavelet transformation

    NASA Astrophysics Data System (ADS)

    He, Qiang

    2010-04-01

    Remote sensing is widely applied to provide information about areas with limited ground access, with applications such as assessing the destruction from natural disasters and planning relief and recovery operations. However, the collection of aerial digital images is constrained by bad weather, atmospheric conditions, and unstable cameras or camcorders. Therefore, how to recover information from low-quality remote sensing images and how to enhance image quality becomes very important for many visual understanding tasks, such as feature detection, object segmentation, and object recognition. The quality of remote sensing imagery can be improved through information fusion, i.e., the meaningful combination of images captured by different sensors or under different conditions. Here we particularly address information fusion for remote sensing images using multi-resolution analysis of the employed image sequences. Image fusion recovers complete information by integrating multiple images captured from the same scene. Through image fusion, a new image with higher resolution, or more useful for human and machine perception, is created from a time series of low-quality images based on image registration between different video frames.
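A common wavelet-domain fusion rule averages the approximation band and keeps the larger-magnitude detail coefficients from either input. The single-level Haar sketch below is a generic illustration of this idea, not the specific method of the paper:

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar transform: returns (LL, LH, HL, HH) bands."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def ihaar2(LL, LH, HL, HH):
    """Exact inverse of haar2."""
    a = np.zeros((LL.shape[0], 2 * LL.shape[1]))
    d = np.zeros_like(a)
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.zeros((2 * a.shape[0], a.shape[1]))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def fuse(img1, img2):
    """Average the approximation band, keep the larger-magnitude detail
    coefficients (a common max-abs fusion rule)."""
    c1, c2 = haar2(img1), haar2(img2)
    fused = [(c1[0] + c2[0]) / 2]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2(*fused)
```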

  14. Simulating Planet-Hunting in a Lab

    NASA Image and Video Library

    2007-04-11

    Three simulated planets -- one as bright as Jupiter, one half as bright as Jupiter and one as faint as Earth -- stand out plainly in this image created from a sequence of 480 images captured by the High Contrast Imaging Testbed at NASA JPL.

  15. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming/expensive one - e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability is such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR-scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR-camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be e.g. the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on the selection of test scenes, and on how to assemble an imaging suite (a set of image sequences) for the analysis of thermal imaging systems containing such black boxes in the image-forming path, are discussed.

  16. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectra or depth are referred to as 4D images. Nature itself spans different physical domains, thus imaging our real world demands capturing information in at least six different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time by the use of color sensors. Capturing multiple spectral channels requires scanning methodologies, which demand long times. In general, to-date multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to mix all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, in a single or few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). 
To date, the coding procedure in CSI has been realized through the use of ``block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures block or transmit the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as ``color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but spectrally as well, entailing more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures will be presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in the quality of the reconstructions compared with conventional block-unblock coded aperture-based CSI architectures. On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to explore multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Owing to the snapshot advantage of this camera, video sequences are possible, enabling the joint capture of 5D imagery. The aim is to create super-human sensing that will enable the perception of our world in new and exciting ways. 
With this, we intend to advance in the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer/robotic vision because they would allow an artificial intelligence to make informed decisions about not only the location of objects within a scene but also their material properties.
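The CS reconstruction step described above (estimating a sparse signal from a small number of random projections) is commonly solved with iterative soft-thresholding (ISTA). A small synthetic sketch of that generic solver, with illustrative problem sizes and regularization, not the reconstruction code of the thesis:

```python
import numpy as np

def ista(A, y, lam=0.05, steps=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - (A.T @ (A @ x - y)) / L    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0)  # shrinkage
    return x

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
y = A @ x_true                              # compressive measurements
x_hat = ista(A, y)                          # recover far fewer unknowns than n
```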

  17. Tilted pillar array fabrication by the combination of proton beam writing and soft lithography for microfluidic cell capture Part 2: Image sequence analysis based evaluation and biological application.

    PubMed

    Járvás, Gábor; Varga, Tamás; Szigeti, Márton; Hajba, László; Fürjes, Péter; Rajta, István; Guttman, András

    2018-02-01

    As a continuation of our previously published work, this paper presents a detailed evaluation of a microfabricated cell capture device utilizing a doubly tilted micropillar array. The device was fabricated using a novel hybrid technology based on the combination of proton beam writing and conventional lithography techniques. Tilted pillars offer unique flow characteristics and support enhanced fluidic interaction for improved immunoaffinity based cell capture. The performance of the microdevice was evaluated by an in-house developed single-cell tracking system based on image sequence analysis. Individual cell tracking allowed in-depth analysis of the cell-chip surface interaction mechanism from a hydrodynamic point of view. Simulation results were validated by using the hybrid device and the optimized surface functionalization procedure. Finally, the cell capture capability of this new generation microdevice was demonstrated by efficiently arresting cells from a HT29 cell-line suspension. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. Time-lapse Sequence of Jupiter's South Pole

    NASA Image and Video Library

    2018-02-22

    This series of images captures cloud patterns near Jupiter's south pole, looking up towards the planet's equator. NASA's Juno spacecraft took the color-enhanced time-lapse sequence of images during its eleventh close flyby of the gas giant planet on Feb. 7 between 7:21 a.m. and 8:01 a.m. PST (10:21 a.m. and 11:01 a.m. EST). At the time, the spacecraft was between 85,292 and 124,856 miles (137,264 to 200,937 kilometers) from the tops of the clouds of the planet, with the images centered on latitudes from 84.1 to 75.5 degrees south. At first glance, the series might appear to be the same image repeated. But closer inspection reveals slight changes, which are most easily noticed by comparing the far left image with the far right image. Directly, the images show Jupiter. But, through slight variations in the images, they indirectly capture the motion of the Juno spacecraft itself, once again swinging around a giant planet hundreds of millions of miles from Earth. https://photojournal.jpl.nasa.gov/catalog/PIA21979

  19. Real-time image sequence segmentation using curve evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Liu, Weisong

    2001-04-01

    In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system with video capture from a USB camera that is a standard Windows video capture device. Using the Windows standard video I/O functionalities, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a Pentium 400, the system can perform segmentation at 5 frames/sec with a frame resolution of 160 by 120.
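The role of the 3D structure tensor in producing a robust frame-difference signal can be illustrated with its temporal component alone: squaring the temporal gradient and smoothing it spatially suppresses isolated noise while keeping coherent motion. A simplified sketch on synthetic frames (not the authors' full tensor-and-curve-evolution pipeline):

```python
import numpy as np

def box3(a):
    """3x3 spatial box filter applied to each frame of a (T, H, W) volume."""
    p = np.pad(a, ((0, 0), (1, 1), (1, 1)), mode='edge')
    H, W = a.shape[1], a.shape[2]
    return sum(p[:, i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0

def robust_frame_difference(frames):
    """Temporal entry J_tt of a spatially smoothed 3D structure tensor.

    The smoothed squared temporal gradient is a frame-difference signal
    that is less sensitive to pixel noise than a raw difference
    (a simplified stand-in for the paper's full tensor).
    """
    vol = np.stack(frames, axis=0).astype(float)   # (T, H, W)
    It = np.gradient(vol, axis=0)                  # temporal derivative
    return box3(It ** 2)                           # smoothed J_tt

# Synthetic sequence: a bright square drifting right by 1 px per frame
frames = []
for t in range(3):
    f = np.zeros((16, 16))
    f[5:9, 5 + t:9 + t] = 1.0
    frames.append(f)
change = robust_frame_difference(frames)
```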

  20. Descent Through Clouds to Surface

    NASA Image and Video Library

    2005-01-18

    This frame from an animation is made up from a sequence of images taken by the Descent Imager/Spectral Radiometer (DISR) instrument on board ESA's Huygens probe, during its successful descent to Titan on Jan. 14, 2005. The animation is available at http://photojournal.jpl.nasa.gov/catalog/PIA07234 It shows what a passenger riding on Huygens would have seen. The sequence starts from an altitude of 152 kilometers (about 95 miles) and initially only shows a hazy view looking into thick cloud. As the probe descends, ground features can be discerned and Huygens emerges from the clouds at around 30 kilometers (about 19 miles) altitude. The ground features seem to rotate as Huygens spins slowly under its parachute. The DISR consists of a downward-looking High Resolution Imager (HRI), a Medium Resolution Imager (MRI), which looks out at an angle, and a Side Looking Imager (SLI). For this animation, most images used were captured by the HRI and MRI. Once on the ground, the final landing scene was captured by the SLI. The Descent Imager/Spectral Radiometer is one of two NASA instruments on the probe.

  1. Advanced Prosthetic Gait Training Tool

    DTIC Science & Technology

    2015-12-01

    motion capture sequences was provided by MPL to CCAD and OGAL. CCAD’s work focused on imposing these sequences on the Santos™ digital human avatar ...manipulating the avatar image. These manipulations are accomplished in the context of reinforcing what is the more ideal position and relating...focus on the visual environment by asking users to manipulate a static image of the Santos avatar to represent their perception of what they observe

  2. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges.) The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two- and three-dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone, data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. 
The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.

  3. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to run applications such as real time optical flow estimation.
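The multiple-capture idea for dynamic range enhancement can be sketched as follows: capture the scene at several exposure times and, per pixel, keep the longest exposure that did not saturate, scaled back to radiance units. The fusion rule, thresholds, and synthetic numbers below are illustrative, not the chip's actual algorithm:

```python
import numpy as np

def fuse_multiple_captures(captures, exposure_times, full_scale=1.0):
    """Combine short- and long-exposure captures of the same scene into
    one radiance estimate: each pixel uses the longest exposure that is
    not saturated (a simple 'last sample before saturation' rule)."""
    radiance = captures[0] / exposure_times[0]     # shortest exposure: assumed unsaturated
    for img, t in zip(captures[1:], exposure_times[1:]):
        valid = img < 0.95 * full_scale            # not saturated at this exposure
        radiance = np.where(valid, img / t, radiance)
    return radiance

scene = np.linspace(0.01, 4.0, 64).reshape(8, 8)   # true radiance, high dynamic range
times = [0.2, 1.0, 4.0]                            # normalized exposure times
caps = [np.clip(scene * t, 0, 1.0) for t in times] # sensor clips at full scale 1.0
hdr = fuse_multiple_captures(caps, times)
# No single capture covers the whole range, but the fused estimate does.
```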

  4. High Speed Intensified Video Observations of TLEs in Support of PhOCAL

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.; Nelson, Thomas E.; Cummer, Steven A.; Lang, Timothy; Miller, Steven; Beavis, Nick; Yue, Jia; Samaras, Tim; Warner, Tom A.

    2013-01-01

    The third observing season of PhOCAL (Physical Origins of Coupling to the upper Atmosphere by Lightning) was conducted over the U.S. High Plains during the late spring and summer of 2013. The goal was to capture, using an intensified high-speed camera, a transient luminous event (TLE), especially a sprite, as well as its parent cloud-to-ground (SP+CG) lightning discharge, preferably within the domain of a 3-D lightning mapping array (LMA). The co-capture of a sprite and its SP+CG was achieved within useful range of an interferometer operating near Rapid City. Other high-speed sprite video sequences were captured above the West Texas LMA. On several occasions the large mesoscale convective complexes (MCSs) producing the TLE-class lightning were also generating vertically propagating convectively generated gravity waves (CGGWs) at the mesopause, which were easily visible using NIR-sensitive color cameras. These were captured concurrent with sprites. These observations were follow-ons to a case on 15 April 2012 in which CGGWs were also imaged by the new Day/Night Band on the Suomi NPP satellite system. The relationship between the CGGWs and sprite initiation is being investigated. The past year was notable for a large number of elve+halo+sprite sequences generated by the same parent CG. On several occasions there appeared to be prominent banded modulations of the elves' luminosity imaged at >3000 ips. These stripes appear coincident with the banded CGGW structure, and presumably its density variations. Several elves and a sprite from negative CGs were also noted. New color imaging systems have been tested and found capable of capturing sprites. Two cases of sprites with an aurora as a backdrop were also recorded. High-speed imaging was also provided in support of the UPLIGHTS program near Rapid City, SD and the USAFA SPRITES II airborne campaign over the Great Plains.

  5. New developments in super-resolution for GaoFen-4

    NASA Astrophysics Data System (ADS)

    Li, Feng; Fu, Jie; Xin, Lei; Liu, Yuhong; Liu, Zhijia

    2017-10-01

    In this paper, the application of super-resolution (SR, restoring a high spatial resolution image from a series of low resolution images of the same scene) techniques to remote sensing images from GaoFen (GF)-4, the most advanced geostationary-orbit Earth-observing satellite in China, is investigated and tested. SR has been a hot research area for decades, but one of the barriers to applying SR in the remote sensing community is the time interval between the acquisitions of the low resolution (LR) images. In general, the longer the interval, the less reliable the reconstruction. GF-4 has the unique advantage of capturing a sequence of LR images of the same region within minutes, i.e. working as a staring camera from the point of view of SR. This is the first experiment applying super-resolution to a sequence of low resolution images captured by GF-4 within a short time period. In this paper, we use Maximum a Posteriori (MAP) estimation to solve the ill-conditioned problem of SR. Both the wavelet transform and the curvelet transform are used to set up a sparse prior for remote sensing images. By combining several images of the BeiJing and DunHuang regions captured by GF-4, our method improves spatial resolution both visually and numerically. Experimental tests show that much detail that cannot be observed in the captured LR images can be seen in the super-resolved high resolution (HR) images. To aid the evaluation, Google Earth imagery can also be referenced. Moreover, our experimental tests also show that the higher the temporal resolution, the better the HR images can be resolved. The study illustrates that the application of SR to geostationary-orbit Earth observation data is feasible and worthwhile, and it holds potential for all other geostationary-orbit Earth-observing systems.
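
    The basic multi-frame SR principle the record relies on (several shifted LR views of one scene jointly constrain a finer grid) can be sketched in its simplest form, shift-and-add with known integer sub-pixel shifts. This is a toy illustration of the data model only, not the paper's MAP estimator with wavelet/curvelet priors; the function name is illustrative.

```python
import numpy as np

def shift_and_add_sr(lr_frames, shifts, factor):
    """Shift-and-add super-resolution for integer sub-pixel shifts.

    Each LR frame is assumed to be the HR image sampled on a grid
    offset by (dy, dx) HR pixels and decimated by `factor`. Samples
    from all frames are accumulated on the HR grid and averaged.
    """
    h, w = lr_frames[0].shape
    hr = np.zeros((h * factor, w * factor))
    count = np.zeros_like(hr)
    for frame, (dy, dx) in zip(lr_frames, shifts):
        hr[dy::factor, dx::factor] += frame
        count[dy::factor, dx::factor] += 1
    count[count == 0] = 1  # leave unobserved sampling phases at zero
    return hr / count
```

    When the shifts cover all sampling phases (as GF-4's rapid staring acquisitions can approximate), the HR grid is fully observed and the reconstruction is exact in this noiseless toy model.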

  6. Light field rendering with omni-directional camera

    NASA Astrophysics Data System (ADS)

    Todoroki, Hiroshi; Saito, Hideo

    2003-06-01

    This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that luminosity over the full 360 degrees of the surroundings can be captured in one image. We apply the light field method, a technique of Image-Based Rendering (IBR), for generating the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect images of many view directions in the light field. Thus our method allows the user to explore a wide scene and achieves a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior with an omni-directional camera, and successfully generated arbitrary viewpoint images for a virtual tour of the environment.

  7. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, viewing from many different locations and orientations in real time, so that they can easily design the room interior without placing real furniture and articles. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame for the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while tracking non-metric feature points for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).
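
    Image-to-image registration of projective views of this kind typically rests on homography estimation from point correspondences. A minimal direct linear transform (DLT) sketch is given below; it illustrates the standard technique, not necessarily the authors' exact registration procedure.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from >= 4 point
    pairs using the direct linear transform (DLT).

    Each correspondence contributes two linear equations in the nine
    entries of H; the solution is the SVD null vector of the stacked
    system, normalized so H[2, 2] == 1.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]
```

    Given four or more tracked feature points visible in two frames, the recovered homography transfers the registered virtual object's outline from one frame to the next.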

  8. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications, including industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among the existing systems for 3D data acquisition, based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four machine vision cameras for capturing video sequences of object motion. Original camera calibration and exterior orientation procedures provide the basis for high accuracy of 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. Evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanics applications.

  9. A long-term target detection approach in infrared image sequence

    NASA Astrophysics Data System (ADS)

    Li, Hang; Zhang, Qi; Wang, Xin; Hu, Chao

    2016-10-01

    An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. Firstly, based on POME (the principle of maximum entropy), target candidates are iteratively segmented. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine one which satisfies both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.
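
    POME-based segmentation of this kind is commonly realized as maximum-entropy (Kapur) thresholding: pick the threshold that maximizes the summed entropies of the two histogram classes. The sketch below shows that single-threshold core under assumed 8-bit data, not the paper's full iterative candidate scheme.

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur's maximum-entropy threshold for 8-bit-range data.

    For each candidate threshold t, the histogram is split into
    [0..t] and [t+1..], each renormalized; t maximizing the sum of
    the two class entropies is returned.
    """
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    cdf = np.cumsum(p)
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = cdf[t], 1.0 - cdf[t]
        if w0 <= 0 or w1 <= 0:
            continue  # one class empty: split is undefined
        p0 = p[: t + 1] / w0
        p1 = p[t + 1:] / w1
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

    On a bimodal IR intensity histogram the maximizing threshold falls between the background and target modes, separating the candidate regions.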

  10. Tongue Motion Averaging from Contour Sequences

    ERIC Educational Resources Information Center

    Li, Min; Kambhamettu, Chandra; Stone, Maureen

    2005-01-01

    In this paper, a method to obtain the best representation of a speech motion from several repetitions is presented. Each repetition is a representation of the same speech, captured at different times by a sequence of ultrasound images, and is composed of a set of 2D spatio-temporal contours. These 2D contours in different repetitions are time aligned…

  11. Application of a digital high-speed camera and image processing system for investigations of short-term hypersonic fluids

    NASA Astrophysics Data System (ADS)

    Renken, Hartmut; Oelze, Holger W.; Rath, Hans J.

    1998-04-01

    The design and application of a digital high-speed image data capturing system, with a downstream image processing system, applied to the Bremer Hochschul Hyperschallkanal (BHHK) is the subject of this presentation. It is also the result of cooperation between the aerodynamics and image processing departments of the ZARM institute at the Drop Tower of Bremen. Similar systems are used by the combustion working group at ZARM and other external project partners. The BHHK, the camera and image storage system, as well as the personal-computer-based image processing software, are described next. Some examples of images taken at the BHHK are shown to illustrate the application. The new and very user-friendly 32-bit Windows system can capture all camera data at a maximum pixel clock of 43 MHz and process complete sequences of images in one step using a single program.

  12. 2013 R&D 100 Award: Movie-mode electron microscope captures nanoscale

    ScienceCinema

    Lagrange, Thomas; Reed, Bryan

    2018-01-26

    A new instrument developed by LLNL scientists and engineers, the Movie Mode Dynamic Transmission Electron Microscope (MM-DTEM), captures billionth-of-a-meter-scale images with frame rates more than 100,000 times faster than those of conventional techniques. The work was done in collaboration with a Pleasanton-based company, Integrated Dynamic Electron Solutions (IDES) Inc. Using this revolutionary imaging technique, a range of fundamental and technologically important material and biological processes can be captured in action, in complete billionth-of-a-meter detail, for the first time. The primary application of MM-DTEM is the direct observation of fast processes, including microstructural changes, phase transformations and chemical reactions, that shape real-world performance of nanostructured materials and potentially biological entities. The instrument could prove especially valuable in the direct observation of macromolecular interactions, such as protein-protein binding and host-pathogen interactions. While an earlier version of the technology, Single Shot-DTEM, could capture a single snapshot of a rapid process, MM-DTEM captures a multiframe movie that reveals complex sequences of events in detail. It is the only existing technology that can capture multiple electron microscopy images in the span of a single microsecond.

  13. 2013 R&D 100 Award: Movie-mode electron microscope captures nanoscale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagrange, Thomas; Reed, Bryan

    2014-04-03

    A new instrument developed by LLNL scientists and engineers, the Movie Mode Dynamic Transmission Electron Microscope (MM-DTEM), captures billionth-of-a-meter-scale images with frame rates more than 100,000 times faster than those of conventional techniques. The work was done in collaboration with a Pleasanton-based company, Integrated Dynamic Electron Solutions (IDES) Inc. Using this revolutionary imaging technique, a range of fundamental and technologically important material and biological processes can be captured in action, in complete billionth-of-a-meter detail, for the first time. The primary application of MM-DTEM is the direct observation of fast processes, including microstructural changes, phase transformations and chemical reactions, that shape real-world performance of nanostructured materials and potentially biological entities. The instrument could prove especially valuable in the direct observation of macromolecular interactions, such as protein-protein binding and host-pathogen interactions. While an earlier version of the technology, Single Shot-DTEM, could capture a single snapshot of a rapid process, MM-DTEM captures a multiframe movie that reveals complex sequences of events in detail. It is the only existing technology that can capture multiple electron microscopy images in the span of a single microsecond.

  14. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially.

    PubMed

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, target detection is aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Results indicate that novices have greater difficulty detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as in stack viewing tasks.

  15. Temporal Characteristics of Radiologists' and Novices' Lesion Detection in Viewing Medical Images Presented Rapidly and Sequentially

    PubMed Central

    Nakashima, Ryoichi; Komori, Yuya; Maeda, Eriko; Yoshikawa, Takeharu; Yokosawa, Kazuhiko

    2016-01-01

    Although viewing multiple stacks of medical images presented on a display is a relatively new but useful medical task, little is known about it. In particular, it is unclear how radiologists search for lesions in this type of image reading. When viewing cluttered and dynamic displays, continuous motion itself does not capture attention. Thus, target detection is aided when observers' attention is captured by the onset signal of a suddenly appearing target among continuously moving distractors (i.e., a passive viewing strategy). This can be applied to stack viewing tasks, because lesions often show up as transient signals in medical images which are sequentially presented, simulating a dynamic and smoothly transforming image progression of organs. However, it is unclear whether observers can detect a target when the target appears at the beginning of a sequential presentation, where the global apparent motion onset signal (i.e., the signal of the initiation of apparent motion by sequential presentation) occurs. We investigated the ability of radiologists to detect lesions during such tasks by comparing the performances of radiologists and novices. Results show that the overall performance of radiologists is better than that of novices. Furthermore, the temporal location of lesions in CT image sequences, i.e., when a lesion appears in an image sequence, does not affect the performance of radiologists, whereas it does affect the performance of novices. Results indicate that novices have greater difficulty detecting a lesion appearing early rather than late in the image sequence. We suggest that radiologists have mechanisms, which novices lack, for detecting lesions in medical images with little attention. This ability is critically important when viewing rapid sequential presentations of multiple CT images, such as in stack viewing tasks. PMID:27774080

  16. Motion detection and compensation in infrared retinal image sequences.

    PubMed

    Scharcanski, J; Schardosim, L R; Santos, D; Stuchi, A

    2013-01-01

    Infrared image data captured by non-mydriatic digital retinography systems are often used in the diagnosis and treatment of diabetic macular edema (DME). Infrared illumination is less aggressive to the patient's retina, and retinal studies can be carried out without pupil dilation. However, sequences of infrared eye fundus images of static scenes tend to present pixel intensity fluctuations over time, and noise and background illumination changes pose a challenge to most motion detection methods proposed in the literature. In this paper, we present a retinal motion detection method that is adaptive to background noise and illumination changes. Our experimental results indicate that this method is suitable for detecting retinal motion in infrared image sequences and compensating for the detected motion, which is relevant in retinal laser treatment systems for DME. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. Human body motion capture from multi-image video sequences

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola

    2003-01-01

    This paper presents a method to capture the motion of the human body from multi-image video sequences without using markers. The process is composed of five steps: acquisition of video sequences, calibration of the system, surface measurement of the human body for each frame, 3-D surface tracking and tracking of key points. The image acquisition system is currently composed of three synchronized progressive-scan CCD cameras and a frame grabber which acquires a sequence of image triplets. Self-calibration methods are applied to obtain the exterior orientation of the cameras, the parameters of interior orientation and the parameters modeling the lens distortion. From the video sequences, two kinds of 3-D information are extracted: a three-dimensional surface measurement of the visible parts of the body for each triplet, and 3-D trajectories of points on the body. The approach for surface measurement is based on multi-image matching, using the adaptive least squares method. A fully automatic matching process determines a dense set of corresponding points in the triplets. The 3-D coordinates of the matched points are then computed by forward ray intersection using the orientation and calibration data of the cameras. The tracking process is also based on least squares matching techniques. Its basic idea is to track triplets of corresponding points in the three images through the sequence and compute their 3-D trajectories. The spatial correspondences between the three images at the same time and the temporal correspondences between subsequent frames are determined with a least squares matching algorithm. The results of the tracking process are the coordinates of a point in the three images through the sequence; the 3-D trajectory is thus determined by computing the 3-D coordinates of the point at each time step by forward ray intersection. Velocities and accelerations are also computed. 
The advantage of this tracking process is twofold: it can track natural points, without using markers; and it can track local surfaces on the human body. In the last case, the tracking process is applied to all the points matched in the region of interest. The result can be seen as a vector field of trajectories (position, velocity and acceleration). The last step of the process is the definition of selected key points of the human body. A key point is a 3-D region defined in the vector field of trajectories, whose size can vary and whose position is defined by its center of gravity. The key points are tracked in a simple way: the position at the next time step is established by the mean value of the displacement of all the trajectories inside its region. The tracked key points lead to a final result comparable to the conventional motion capture systems: 3-D trajectories of key points which can be afterwards analyzed and used for animation or medical purposes.
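
    The forward ray intersection step used above admits a compact least-squares form: given each camera's projection center and the viewing ray through the matched image point, find the 3-D point minimizing the summed squared distances to all rays. A minimal sketch (illustrative function name, idealized rays without the cameras' distortion model):

```python
import numpy as np

def intersect_rays(origins, directions):
    """Least-squares forward ray intersection.

    Minimizes sum_i || (I - d_i d_i^T)(x - o_i) ||^2 over x, where
    o_i is a camera center and d_i the unit viewing ray direction.
    The normal equations are 3x3 and solved directly.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)  # projector orthogonal to ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)
```

    With two or more non-parallel rays (here, the three synchronized cameras), the normal matrix is invertible and the intersection is unique.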

  18. R&D 100, 2016: Ultrafast X-ray Imager

    ScienceCinema

    Porter, John; Claus, Liam; Sanchez, Marcos; Robertson, Gideon; Riley, Nathan; Rochau, Greg

    2018-06-13

    The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.

  19. R&D 100, 2016: Ultrafast X-ray Imager

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Porter, John; Claus, Liam; Sanchez, Marcos

    The Ultrafast X-ray Imager is a solid-state camera capable of capturing a sequence of images with user-selectable exposure times as short as 2 billionths of a second. Using 3D semiconductor integration techniques to form a hybrid chip, this camera was developed to enable scientists to study the heating and compression of fusion targets in the quest to harness the energy process that powers the stars.

  20. Touch HDR: photograph enhancement by user controlled wide dynamic range adaptation

    NASA Astrophysics Data System (ADS)

    Verrall, Steve; Siddiqui, Hasib; Atanassov, Kalin; Goma, Sergio; Ramachandra, Vikas

    2013-03-01

    High Dynamic Range (HDR) technology enables photographers to capture a greater range of tonal detail. HDR is typically used to bring out detail in a dark foreground object set against a bright background. HDR technologies include multi-frame HDR and single-frame HDR. Multi-frame HDR requires the combination of a sequence of images taken at different exposures. Single-frame HDR requires histogram equalization post-processing of a single image, a technique referred to as local tone mapping (LTM). Images generated using HDR technology can look less natural than their non-HDR counterparts. Sometimes it is desired to enhance only small regions of an original image. For example, it may be desired to enhance the tonal detail of one subject's face while preserving the original background. The Touch HDR technique described in this paper achieves these goals by enabling selective blending of HDR and non-HDR versions of the same image to create a hybrid image. The HDR version of the image can be generated by either multi-frame or single-frame HDR. Selective blending can be performed as a post-processing step, for example as a feature of a photo editor application, at any time after the image has been captured. HDR and non-HDR blending is controlled by a weighting surface, which is configured by the user through a sequence of touches on a touchscreen.
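
    The weighting-surface blend can be sketched as follows: a Gaussian bump is one plausible per-touch weight (the paper does not specify the surface's exact shape, so the kernel and function name here are assumptions), and the hybrid image is a per-pixel convex combination of the HDR and non-HDR versions.

```python
import numpy as np

def touch_blend(base, hdr, touch_yx, sigma):
    """Blend an HDR rendering into the original image around one touch.

    A Gaussian weight surface centered at `touch_yx` selects how much
    of the HDR version appears at each pixel: weight 1 at the touch
    point, falling off with distance.
    """
    h, w = base.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    ty, tx = touch_yx
    wgt = np.exp(-((yy - ty) ** 2 + (xx - tx) ** 2) / (2.0 * sigma ** 2))
    if base.ndim == 3:            # broadcast over color channels
        wgt = wgt[..., None]
    return wgt * hdr + (1.0 - wgt) * base
```

    A sequence of touches would simply accumulate such bumps (clipped to 1) into one weighting surface before blending.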

  1. Accurate estimation of object location in an image sequence using helicopter flight data

    NASA Technical Reports Server (NTRS)

    Tang, Yuan-Liang; Kasturi, Rangachar

    1994-01-01

    In autonomous navigation, it is essential to obtain a three-dimensional (3D) description of the static environment in which the vehicle is traveling. For a rotorcraft conducting low-altitude flight, this description is particularly useful for obstacle detection and avoidance. In this paper, we address the problem of 3D position estimation for static objects from a monocular sequence of images captured from a low-altitude flying helicopter. Since the environment is static, it is well known that the optical flow in the image will produce a radiating pattern from the focus of expansion. We propose a motion analysis system which utilizes the epipolar constraint to accurately estimate 3D positions of scene objects in a real world image sequence taken from a low-altitude flying helicopter. Results show that this approach gives good estimates of object positions near the rotorcraft's intended flight path.

  2. MHz-rate nitric oxide planar laser-induced fluorescence imaging in a Mach 10 hypersonic wind tunnel.

    PubMed

    Jiang, Naibo; Webster, Matthew; Lempert, Walter R; Miller, Joseph D; Meyer, Terrence R; Ivey, Christopher B; Danehy, Paul M

    2011-02-01

    Nitric oxide planar laser-induced fluorescence (NO PLIF) imaging at repetition rates as high as 1 MHz is demonstrated in the NASA Langley 31 in. Mach 10 hypersonic wind tunnel. Approximately 200 time-correlated image sequences of between 10 and 20 individual frames were obtained over eight days of wind tunnel testing spanning two entries in March and September of 2009. The image sequences presented were obtained from the boundary layer of a 20° flat plate model, in which transition was induced using a variety of different shaped protuberances, including a cylinder and a triangle. The high-speed image sequences captured a variety of laminar and transitional flow phenomena, ranging from mostly laminar flow, typically at a lower Reynolds number and/or in the near wall region of the model, to highly transitional flow in which the temporal evolution and progression of characteristic streak instabilities and/or corkscrew-shaped vortices could be clearly identified.

  3. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image detail, and better reflecting the real environment's light and color information. Currently, methods of high dynamic range image synthesis based on sequences of differently exposed images cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system is proposed. Firstly, differently exposed image sequences are captured with the camera array; the deviation between images is obtained using derivative optical flow based on color gradients, and the images are aligned. Then, a high dynamic range image fusion weighting function is established by combining the inverse camera response function with the deviation between images, and is applied to generate a high dynamic range image. Experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes and achieves good results.
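
    The core of such exposure fusion, inverting the camera response and averaging the per-exposure radiance estimates with well-exposedness weights, can be sketched as follows. The gamma response, the hat weight, and the function name are simplifying assumptions; the paper's weighting additionally incorporates the inter-image deviation, which is omitted here.

```python
import numpy as np

def merge_exposures(images, times, gamma=2.2, eps=1e-3):
    """Merge differently exposed images (values in [0, 1]) into a
    radiance map.

    Assumes a simple gamma camera response g(z) = z**gamma; each
    pixel's radiance estimates g(z)/t are averaged with a hat weight
    that favors mid-range (well exposed) values.
    """
    num = np.zeros_like(np.asarray(images[0], dtype=float))
    den = np.zeros_like(num)
    for z, t in zip(images, times):
        z = np.asarray(z, dtype=float)
        w = 1.0 - np.abs(2.0 * z - 1.0)  # hat weight, peak at z = 0.5
        g = z ** gamma                    # assumed inverse response
        num += w * g / t
        den += w
    return num / np.maximum(den, eps)
```

    Because every unsaturated exposure contributes g(z)/t ≈ radiance, the weighted average reproduces the scene radiance while down-weighting near-black and near-white pixels.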

  4. A long-term target detection approach in infrared image sequence

    NASA Astrophysics Data System (ADS)

    Li, Hang; Zhang, Qi; Li, Yuanyuan; Wang, Liqiang

    2015-12-01

    An automatic target detection method for long-term infrared (IR) image sequences from a moving platform is proposed. Firstly, based on non-linear histogram equalization, target candidates are segmented coarse-to-fine by using two self-adaptive thresholds generated in the intensity space. Then the real target is captured via two different selection approaches. At the beginning of the image sequence, the genuine target, which has little texture, is discriminated from other candidates by using a contrast-based confidence measure. On the other hand, when the target becomes larger, we apply an online EM method to iteratively estimate and update the distributions of the target's size and position based on the prior detection results, and then recognize the genuine one which satisfies both the size and position constraints. Experimental results demonstrate that the presented method is accurate, robust and efficient.

  5. The infinitesimal operator for the semigroup of the Frobenius-Perron operator from image sequence data: vector fields and transport barriers from movies.

    PubMed

    Santitissadeekorn, N; Bollt, E M

    2007-06-01

    In this paper, we present an approach to approximate the Frobenius-Perron transfer operator from a sequence of time-ordered images, that is, a movie dataset. Unlike time-series data, successive images do not provide direct access to the trajectory of a point in a phase space, or more precisely, of a pixel in the image plane. Therefore, we reconstruct the velocity field from image sequences based on the infinitesimal generator of the Frobenius-Perron operator. Moreover, we relate this problem to the well-known optical flow problem from the computer vision community, and we validate the continuity equation derived from the infinitesimal operator as a constraint equation for the optical flow problem. Once the vector field, and from it a discrete transfer operator, are found, we additionally present a graph modularity method as a tool to discover basin structure in the phase space. Together with the tool to reconstruct a velocity field, this graph-based partition method provides a way to study transport behavior and other ergodic properties of measurable dynamical systems captured only through image sequences.
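
    The continuity-equation constraint mentioned in the record can be written explicitly; the following sketch states the standard forms and their relation (not transcribed from the paper itself):

```latex
% Continuity equation for a density \rho advected by the flow
% \mathbf{v}, as obtained from the infinitesimal generator of the
% Frobenius-Perron operator:
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\,\mathbf{v}) = 0
% Compare the classical optical-flow brightness-constancy constraint
% on image intensity I:
\frac{\partial I}{\partial t} + \mathbf{v} \cdot \nabla I = 0
% Expanding \nabla\cdot(\rho\mathbf{v})
%   = \mathbf{v}\cdot\nabla\rho + \rho\,\nabla\cdot\mathbf{v}
% shows the two constraints coincide exactly when the flow is
% divergence-free, \nabla \cdot \mathbf{v} = 0.
```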

  6. Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor

    NASA Astrophysics Data System (ADS)

    Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.

    2018-04-01

    An RGB-D camera captures depth and color information at high data rates, which makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, using the registration parameters obtained from ICP, the 3D scene from the RGB images is registered to the 3D scene from the depth images, and the fused point cloud is obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.
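The final registration step relies on ICP. A minimal point-to-point version (closed-form Kabsch fit plus brute-force nearest neighbours, standing in for whatever production implementation the authors used) looks like this:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping paired
    points src onto dst (Kabsch / orthogonal Procrustes)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: alternate brute-force nearest-
    neighbour matching with the closed-form rigid fit above."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        pairs = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, pairs)
        cur = cur @ R.T + t
    # Net transform from the original source cloud to its aligned pose.
    return best_rigid_transform(src, cur)
```

For large rover point clouds a spatial index (k-d tree) would replace the brute-force distance matrix.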

  7. Robust obstacle detection for unmanned surface vehicles

    NASA Astrophysics Data System (ADS)

    Qin, Yueming; Zhang, Xiuzhi

    2018-03-01

    Obstacle detection is of essential importance for Unmanned Surface Vehicles (USVs). Although some obstacles (e.g., ships, islands) can be detected by radar, many others (e.g., floating pieces of wood, swimmers) are difficult to detect via radar because they have a low radar cross section. Detecting obstacles from images taken onboard is therefore an effective supplement. In this paper, a robust vision-based obstacle detection method for USVs is developed. The proposed method employs the monocular image sequence captured by the camera on the USV and detects obstacles on the sea surface from that sequence. Experimental results show that the proposed scheme fulfills the obstacle detection task efficiently.

  8. Cassini "Noodle" Mosaic of Saturn

    NASA Image and Video Library

    2017-07-24

    This mosaic of images combines views captured by NASA's Cassini spacecraft as it made the first dive of the mission's Grand Finale on April 26, 2017. It shows a vast swath of Saturn's atmosphere, from the north polar vortex to the boundary of the hexagon-shaped jet stream, to details in bands and swirls at middle latitudes and beyond. The mosaic is a composite of 137 images captured as Cassini made its first dive toward the gap between Saturn and its rings. It is an update to a previously released image product. In the earlier version, the images were presented as individual movie frames, whereas here, they have been combined into a single, continuous mosaic. The mosaic is presented as a still image as well as a video that pans across its length. Imaging scientists referred to this long, narrow mosaic as a "noodle" in planning the image sequence. The first frame of the mosaic is centered on Saturn's north pole, and the last frame is centered on a region at 18 degrees north latitude. During the dive, the spacecraft's altitude above the clouds changed from 45,000 to 5,200 miles (72,400 to 8,374 kilometers), while the image scale changed from 5.4 miles (8.7 kilometers) per pixel to 0.6 mile (1 kilometer) per pixel. The bottom of the mosaic (near the end of the movie) has a curved shape. This is where the spacecraft rotated to point its high-gain antenna in the direction of motion as a protective measure before crossing Saturn's ring plane. The images in this sequence were captured in visible light using the Cassini spacecraft wide-angle camera. The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels. The small image size was chosen in order to allow the camera to take images quickly as Cassini sped over Saturn. These images of the planet's curved surface were projected onto a flat plane before being combined into a mosaic. Each image was mapped in stereographic projection centered at 55 degrees north latitude.
A movie is available at https://photojournal.jpl.nasa.gov/catalog/PIA21617

  9. Algorithms for detection of objects in image sequences captured from an airborne imaging system

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia; Tang, Yuan-Liang; Devadiga, Sadashiva; Gandhi, Tarak

    1995-01-01

    This research was initiated as part of an effort at the NASA Ames Research Center to design a computer vision based system that can enhance the safety of navigation by aiding pilots in detecting various obstacles on the runway during critical sections of the flight, such as a landing maneuver. The primary goal is the development of algorithms for the detection of moving objects from a sequence of images obtained from an on-board video camera. Image regions corresponding to independently moving objects are segmented from the background by applying constraint filtering to the optical flow computed from the initial few frames of the sequence. The detected regions are tracked over subsequent frames using a model-based tracking algorithm. The position and velocity of the moving objects in world coordinates are estimated using an extended Kalman filter. The algorithms were tested using the NASA line image sequence, with six static trucks and a simulated moving truck, and experimental results are described. Various limitations of the currently implemented version of the algorithm are identified, and possible solutions for building a practical working system are investigated.
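The state estimator here is an extended Kalman filter over world coordinates. The linear constant-velocity special case below sketches the predict/update cycle; the 1-D state, measurement model, and noise parameters are all illustrative, not the paper's:

```python
import numpy as np

def kalman_cv(zs, dt=1.0, q=1e-3, r=0.25):
    """Linear Kalman filter with a constant-velocity model, tracking
    1-D position from noisy position measurements zs.
    State x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # we observe position only
    Q = q * np.eye(2)                         # process noise covariance
    R = np.array([[r]])                       # measurement noise variance
    x = np.array([[zs[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in zs:
        # Predict: propagate state and covariance one step.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: fold in the new measurement.
        y = np.array([[z]]) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        out.append(x.ravel().copy())
    return np.array(out)                       # per-step [pos, vel]
```

The full EKF used in the paper linearizes a nonlinear camera/world measurement model around the current estimate but follows the same two-phase cycle.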

  10. A phase space model of Fourier ptychographic microscopy

    PubMed Central

    Horstmeyer, Roarke; Yang, Changhuei

    2014-01-01

    A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography’s and FPM’s captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment. PMID:24514995

  11. Real-time myocardium segmentation for the assessment of cardiac function variation

    NASA Astrophysics Data System (ADS)

    Zoehrer, Fabian; Huellebrand, Markus; Chitiboi, Teodora; Oechtering, Thekla; Sieren, Malte; Frahm, Jens; Hahn, Horst K.; Hennemuth, Anja

    2017-03-01

    Recent developments in MRI enable the acquisition of image sequences with high spatio-temporal resolution, so that cardiac motion can be captured without gating or triggering. Image size and contrast relations differ from conventional cardiac MRI cine sequences, requiring newly adapted analysis methods. We suggest a novel segmentation approach utilizing contrast-invariant polar scanning techniques. It has been tested on 20 datasets of arrhythmia patients. Differences between automatic and manual segmentations are not significantly larger than differences between observers. This indicates that the presented solution could enable clinical application of real-time MRI for the examination of arrhythmic cardiac motion in the future.

  12. Region-based multifocus image fusion for the precise acquisition of Pap smear images.

    PubMed

    Tello-Mijares, Santiago; Bescós, Jesús

    2018-05-01

    A multifocus image fusion method to obtain a single focused image from a sequence of microscopic high-magnification Papanicolau source (Pap smear) images is presented. These images, each captured at a different position of the microscope lens, frequently show partially focused cells or parts of cells, which makes them impractical for the direct application of image analysis techniques. The proposed method obtains a focused image with high preservation of the original pixel information while achieving negligible visibility of fusion artifacts. The method starts by identifying the best-focused image of the sequence; then, it performs a mean-shift segmentation over this image; the focus level of the segmented regions is evaluated in all the images of the sequence, and the best-focused regions are merged into a single combined image; finally, this image is processed with an adaptive artifact removal step. The combination of a region-oriented approach, instead of block-based approaches, and a minimal modification of the values of focused pixels in the original images yields a highly contrasted image with no visible artifacts, which makes this method especially convenient for the medical imaging domain. The proposed method is compared with several state-of-the-art alternatives over a representative dataset. The experimental results show that our proposal obtains the best and most stable quality indicators. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
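A standard way to score focus, which could drive the "best-focused image" selection step, is the variance-of-Laplacian measure; it is a common choice for focus stacking, not necessarily the measure the authors used:

```python
import numpy as np

def laplacian_energy(img):
    """Variance-of-Laplacian focus measure (higher = sharper)."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def best_focused(stack):
    """Index of the sharpest image in a focus stack (list of 2-D
    arrays). The paper then fuses per *region*; selecting a global
    reference frame is only the first step."""
    return int(np.argmax([laplacian_energy(f.astype(np.float64))
                          for f in stack]))
```

The same measure evaluated per segmented region, rather than globally, would drive the region-merging stage described above.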

  13. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have come to be used in many fields such as education, medical services, entertainment, art, and digital archiving, and as computational power has progressed, the demand for photorealistic virtual models with higher realism is increasing. In the computer vision field, a number of techniques have been developed for creating such virtual models by observing real objects. In this paper, we propose a method for creating photorealistic virtual models using a laser range sensor and a polarization-based image capture system. We capture range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images, the parameters of a reflection model are estimated in a robust manner, so that a photorealistic 3D model accounting for surface reflection can be built. The key point of the proposed method is that the diffuse and specular reflection components are first separated from the color image sequence, and the reflectance parameters of each component are then estimated separately. The reflection components are separated using a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces exhibit specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used to synthesize object images with realistic shading effects under arbitrary illumination conditions.

  14. Primary Visual Cortex Represents the Difference Between Past and Present

    PubMed Central

    Nortmann, Nora; Rekauzke, Sascha; Onat, Selim; König, Peter; Jancke, Dirk

    2015-01-01

    The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At the low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences, for which we found an ongoing representation of the current input, consistent with earlier studies. Our finding that, for slower image sequences, V1 no longer reports actual features but represents their relative difference in time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacity for change detection, with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream. PMID:24343889

  15. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common to many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) for each frame is also provided. A series of experiments was conducted to evaluate BS algorithms on this proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared. Proper evaluation metrics and criteria were employed to evaluate the capability of each BS algorithm to handle the different kinds of BS challenges represented in this dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video but are generic to background subtraction. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112
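For orientation, the simplest family of BS algorithms maintains a per-pixel background model and thresholds the deviation from it. A running-average baseline (learning rate and threshold are illustrative, and far simpler than the evaluated algorithms) looks like this:

```python
import numpy as np

def running_average_bs(frames, alpha=0.05, thresh=30.0):
    """Baseline background subtraction: exponentially weighted running
    mean as the per-pixel background model; pixels deviating by more
    than `thresh` are flagged as foreground."""
    bg = frames[0].astype(np.float64)
    masks = []
    for f in frames:
        f = f.astype(np.float64)
        fg = np.abs(f - bg) > thresh
        bg = (1 - alpha) * bg + alpha * f   # slow background update
        masks.append(fg)
    return masks
```

Per-frame masks like these are exactly what the dataset's pixel-wise ground truth allows one to score.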

  16. Frame Rate and Human Vision

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    2012-01-01

    To enhance the quality of the theatre experience, the film industry is interested in achieving higher frame rates for capture and display. In this talk I will describe the basic spatio-temporal sensitivities of human vision, and how they respond to the time sequence of static images that is fundamental to cinematic presentation.

  17. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  18. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video in high-resolution formats at frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, which demand real-time video capture at extremely high frame rates in high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM), which is able to reduce the bandwidth requirements by up to a factor of 1.7 at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture continuously over a 40-Gbit Ethernet point-to-point link.
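The abstract does not spell out the codec, but a common formulation of modulo-PCM transmits only the k low-order bits of each sample and lets the decoder resolve the resulting ambiguity using the previous sample as a prediction. The sketch below is that textbook variant, not necessarily the authors' exact codec:

```python
def mpcm_encode(samples, k=4):
    """Keep only the k least-significant bits of each sample."""
    return [int(s) % (1 << k) for s in samples]

def mpcm_decode(codes, first, k=4):
    """Reconstruct by picking, among the values congruent to the code
    modulo 2**k, the one closest to the previous decoded sample.
    Exact whenever successive samples differ by less than 2**(k-1)."""
    m = 1 << k
    prev = first
    out = []
    for c in codes:
        base = prev - (prev % m) + c        # candidate congruent to c
        # Shift by ±m to land nearest the prediction.
        cand = min((base - m, base, base + m), key=lambda v: abs(v - prev))
        prev = cand
        out.append(cand)
    return out
```

With k = 4 the 8-bit samples shrink by half, which is the flavor of bandwidth reduction the paper targets (its reported factor is 1.7).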

  19. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    PubMed

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time, so a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student's t-test, P<0.05). Facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  20. Efficient super-resolution image reconstruction applied to surveillance video captured by small unmanned aircraft systems

    NASA Astrophysics Data System (ADS)

    He, Qiang; Schultz, Richard R.; Chu, Chee-Hung Henry

    2008-04-01

    The concept behind super-resolution image reconstruction is to recover a highly resolved image from a series of low-resolution images via between-frame subpixel image registration. In this paper, we propose a novel and efficient super-resolution algorithm, and then apply it to the reconstruction of real video data captured by a small Unmanned Aircraft System (UAS). Small UAS aircraft generally have a wingspan of less than four meters, so these vehicles and their payloads can be buffeted by even light winds, resulting in potentially unstable video. The algorithm is based on a coarse-to-fine strategy, in which a coarsely super-resolved image sequence is first built from the original video data by image registration and bicubic interpolation between a fixed reference frame and every additional frame. Since the median filter is robust to outliers, calculating pixel-wise medians in the coarsely super-resolved image sequence restores a refined super-resolved image. The primary advantage is that this is a noniterative algorithm, unlike traditional approaches based on highly computational iterative algorithms. Experimental results show that our coarse-to-fine super-resolution algorithm is not only robust, but also very efficient. In comparison with five well-known super-resolution algorithms, namely the robust super-resolution algorithm, bicubic interpolation, projection onto convex sets (POCS), the Papoulis-Gerchberg algorithm, and the iterated back projection algorithm, our proposed algorithm gives both strong efficiency and robustness, as well as good visual performance. This is particularly useful for the application of super-resolution to UAS surveillance video, where real-time processing is highly desired.
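The pixel-wise-median idea is easy to demonstrate. The sketch below substitutes nearest-neighbour upsampling and known integer shifts for the paper's subpixel registration and bicubic interpolation, keeping only the robust median-combination step:

```python
import numpy as np

def coarse_sr_median(frames, shifts, scale=2):
    """Coarse SR sketch: upsample each registered low-res frame by
    nearest-neighbour interpolation, shift it onto the reference grid
    by its (assumed known, integer) offset, and take the pixel-wise
    median, which is robust to outlier frames."""
    stack = []
    for f, (dy, dx) in zip(frames, shifts):
        up = np.repeat(np.repeat(f, scale, axis=0),
                       scale, axis=1).astype(np.float64)
        stack.append(np.roll(np.roll(up, dy, axis=0), dx, axis=1))
    return np.median(stack, axis=0)
```

Because the combination is a single median over the stack, the method needs no iterative refinement, which is the efficiency argument made above.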

  1. Video enhancement workbench: an operational real-time video image processing system

    NASA Astrophysics Data System (ADS)

    Yool, Stephen R.; Van Vactor, David L.; Smedley, Kirk G.

    1993-01-01

    Video image sequences can be exploited in real-time, giving analysts rapid access to information for military or criminal investigations. Video-rate dynamic range adjustment subdues fluctuations in image intensity, thereby assisting discrimination of small or low-contrast objects. Contrast-regulated unsharp masking enhances differentially shadowed or otherwise low-contrast image regions. Real-time removal of localized hotspots, when combined with automatic histogram equalization, may enhance resolution of objects directly adjacent. In video imagery corrupted by zero-mean noise, real-time frame averaging can assist resolution and location of small or low-contrast objects. To maximize analyst efficiency, lengthy video sequences can be screened automatically for low-frequency, high-magnitude events. Combined zoom, roam, and automatic dynamic range adjustment permit rapid analysis of facial features captured by video cameras recording crimes in progress. When trying to resolve small objects in murky seawater, stereo video places the moving imagery in an optimal setting for human interpretation.
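Two of the listed operations can be sketched directly: unsharp masking (add back a high-pass residual) and temporal frame averaging for zero-mean noise. Kernel size and gain here are illustrative, not the workbench's actual settings:

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Unsharp masking: subtract a 3x3 box blur from the image and add
    the high-pass residual back, boosting local contrast at edges."""
    f = img.astype(np.float64)
    pad = np.pad(f, 1, mode='edge')
    blur = sum(pad[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(f + amount * (f - blur), 0, 255)

def frame_average(frames):
    """Temporal mean of co-registered frames; suppresses zero-mean
    noise by roughly a factor of sqrt(N)."""
    return np.mean([f.astype(np.float64) for f in frames], axis=0)
```

In a real-time pipeline both would run per frame in fixed-point hardware or SIMD code rather than NumPy.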

  2. Highly selective detection of single-nucleotide polymorphisms using a quartz crystal microbalance biosensor based on the toehold-mediated strand displacement reaction.

    PubMed

    Wang, Dingzhong; Tang, Wei; Wu, Xiaojie; Wang, Xinyi; Chen, Gengjia; Chen, Qiang; Li, Na; Liu, Feng

    2012-08-21

    Toehold-mediated strand displacement reaction (SDR) is first introduced to develop a simple quartz crystal microbalance (QCM) biosensor without an enzyme or label at normal temperature for highly selective and sensitive detection of single-nucleotide polymorphism (SNP) in the p53 tumor suppressor gene. A hairpin capture probe with an external toehold is designed and immobilized on the gold electrode surface of QCM. A successive SDR is initiated by the target sequence hybridization with the toehold domain and ends with the unfolding of the capture probe. Finally, the open-loop capture probe hybridizes with the streptavidin-coupled reporter probe as an efficient mass amplifier to enhance the QCM signal. The proposed biosensor displays remarkable specificity to target the p53 gene fragment against single-base mutant sequences (e.g., the largest discrimination factor is 63 to C-C mismatch) and high sensitivity with the detection limit of 0.3 nM at 20 °C. As the crucial component of the fabricated biosensor for providing the high discrimination capability, the design rationale of the capture probe is further verified by fluorescence sensing and atomic force microscopy imaging. Additionally, a recovery of 84.1% is obtained when detecting the target sequence in spiked HeLa cells lysate, demonstrating the feasibility of employing this biosensor in detecting SNPs in biological samples.

  3. Prediction suppression in monkey inferotemporal cortex depends on the conditional probability between images.

    PubMed

    Ramachandran, Suchitra; Meyer, Travis; Olson, Carl R

    2016-01-01

    When monkeys view two images in fixed sequence repeatedly over days and weeks, neurons in area TE of the inferotemporal cortex come to exhibit prediction suppression. The trailing image elicits only a weak response when presented following the leading image that preceded it during training. Induction of prediction suppression might depend either on the contiguity of the images, as determined by their co-occurrence and captured in the measure of joint probability P(A,B), or on their contingency, as determined by their correlation and as captured in the measures of conditional probability P(A|B) and P(B|A). To distinguish between these possibilities, we measured prediction suppression after imposing training regimens that held P(A,B) constant but varied P(A|B) and P(B|A). We found that reducing either P(A|B) or P(B|A) during training attenuated prediction suppression as measured during subsequent testing. We conclude that prediction suppression depends on contingency, as embodied in the predictive relations between the images, and not just on contiguity, as embodied in their co-occurrence. Copyright © 2016 the American Physiological Society.
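The distinction between contiguity and contingency comes down to these probability estimates. A toy computation over hypothetical image labels makes it concrete:

```python
from collections import Counter

def pair_statistics(trials):
    """Given a list of (leading, trailing) image-pair presentations,
    return a function estimating the joint probability P(A,B) and the
    conditionals P(B|A) (trailing given leading) and P(A|B)."""
    n = len(trials)
    joint = Counter(trials)
    lead = Counter(a for a, _ in trials)
    trail = Counter(b for _, b in trials)
    def stats(a, b):
        j = joint[(a, b)] / n
        # P(A,B), P(B|A) = P(A,B)/P(A), P(A|B) = P(A,B)/P(B)
        return j, j * n / lead[a], j * n / trail[b]
    return stats

# Hypothetical training regimen: A->B on 6 trials, A->C on 2, D->B on 2.
trials = [('A', 'B')] * 6 + [('A', 'C')] * 2 + [('D', 'B')] * 2
stats = pair_statistics(trials)
```

Here P(A,B) = 0.6 while P(B|A) = P(A|B) = 0.75; the study's regimens held the first quantity constant while varying the latter two.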

  4. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015155 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  5. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015080 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  6. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015099 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  7. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015094 (10 July 2011) --- This nose view is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  8. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015380 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  9. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015178 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  10. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    NASA Astrophysics Data System (ADS)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study was completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence has not received much attention, although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that together yield results superior to either approach used individually. The current optical algorithms performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. The accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.

  11. Advantages of using voiced questionnaire and image capture application for data collection from a minority group in rural areas along the Thailand-Myanmar border.

    PubMed

    Monyarit, Siriporn; Pan-ngum, Wirichada; Lawpoolsri, Saranath; Yimsamran, Surapon; Pongnumkul, Suporn; Kaewkungwal, Jaranit; Singhasivanon, Pratap

    2014-01-01

    To compare the quality of data collection via electronic data capture (EDC) with voiced questionnaire (QNN) and data image capture features using a tablet versus standard paper-based QNN, to assess the user's perception of using the EDC tool, and to compare user satisfaction with the two methods. This randomised cross-over study was conducted in two villages along the Thailand-Myanmar border and included 30 community health volunteers (CHVs) and 120 Karen hill tribe villagers. The CHVs were allocated randomly to two groups, in which they performed interviews in different sequences using EDC and QNN. Data discrepancies were found between the two data-collection methods when data from the paper-based and image-capture methods were compared, and when conducting skip-pattern questions. More than 90% of the CHVs perceived the EDC to be useful and easy to use. Both interviewers and interviewees were more satisfied with the EDC compared with the QNN in terms of format, ease of use, and system speed. The EDC can effectively be used as an alternative to paper-based QNNs for data collection, producing more accurate data that can be considered evidence-based.

  12. Dual-slit confocal light sheet microscopy for in vivo whole-brain imaging of zebrafish

    PubMed Central

    Yang, Zhe; Mei, Li; Xia, Fei; Luo, Qingming; Fu, Ling; Gong, Hui

    2015-01-01

    In vivo functional imaging at single-neuron resolution is an important approach for visualizing biological processes in neuroscience. Light sheet microscopy (LSM) is a cutting-edge in vivo imaging technique that provides micron-scale spatial resolution at high frame rates. Owing to scattering and absorption in tissue, however, conventional LSM often cannot resolve individual cells because of the attenuated signal-to-noise ratio (SNR). Using dual-beam illumination and confocal dual-slit detection, a dual-slit confocal LSM is demonstrated here that obtains SNR-enhanced images at twice the frame rate of the line-confocal LSM method. Through theoretical calculations and experiments, the relationship between slit width and SNR was determined to optimize image quality. In vivo whole-brain structural imaging stacks and single-slice functional imaging sequences were obtained for analyzing calcium activity at single-cell resolution. A two-fold increase in imaging speed over conventional confocal LSM makes it possible to capture sequences of neuronal activity and helps reveal potential functional connections in the whole zebrafish brain. PMID:26137381

  13. Single Molecule Visualization of Protein-DNA Complexes: Watching Machines at Work

    NASA Astrophysics Data System (ADS)

    Kowalczykowski, Stephen

    2013-03-01

    We can now watch individual proteins acting on single molecules of DNA. Such imaging provides unprecedented interrogation of fundamental biophysical processes. Visualization is achieved through the application of two complementary procedures. In one, single DNA molecules are attached to a polystyrene bead and are then captured by an optical trap. The DNA, a worm-like coil, is extended either by the force of solution flow in a micro-fabricated channel, or by capturing the opposite DNA end in a second optical trap. In the second procedure, DNA is attached by one end to a glass surface. The coiled DNA is elongated either by continuous solution flow or by subsequently tethering the opposite end to the surface. Protein action is visualized by fluorescent reporters: fluorescent dyes that bind double-stranded DNA (dsDNA), fluorescent biosensors for single-stranded DNA (ssDNA), or fluorescently-tagged proteins. Individual molecules are imaged using either epifluorescence microscopy or total internal reflection fluorescence (TIRF) microscopy. Using these approaches, we imaged the search for DNA sequence homology conducted by the RecA-ssDNA filament. The manner by which RecA protein finds a single homologous sequence in the genome had remained undefined for almost 30 years. Single-molecule imaging revealed that the search occurs through a mechanism termed "intersegmental contact sampling," in which the randomly coiled structure of DNA is essential for reiterative sampling of DNA sequence identity: an example of parallel processing. In addition, the assembly of RecA filaments on single molecules of single-stranded DNA was visualized. Filament assembly requires nucleation of a protein dimer on DNA, and subsequent growth occurs via monomer addition. Furthermore, we discovered a class of proteins that catalyzed both nucleation and growth of filaments, revealing how the cell controls assembly of this protein-DNA complex.

  14. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015328 (10 July 2011) --- Parts of Atlantis' set of main engines are visible in one of a series of images showing various parts of the space shuttle in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  15. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015093 (10 July 2011) --- This nose cone view is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of the six crewmembers on the International Space Station as the shuttle “posed” for photo and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  16. Using Long-Term Time-Lapse Imaging of Mammalian Cell Cycle Progression for Laboratory Instruction and Analysis

    ERIC Educational Resources Information Center

    Hinchcliffe, Edward H.

    2005-01-01

    Cinemicrography--the capture of moving cellular sequences through the microscope--has been influential in revealing the dynamic nature of cellular behavior. One of the more dramatic cellular events is mitosis, the division of sister chromatids into two daughter cells. Mitosis has been extensively studied in a variety of organisms, both…

  17. Variational optical flow estimation for images with spectral and photometric sensor diversity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-03-01

    Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.
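    Variational optical flow of the kind described above can be illustrated with the classic Horn-Schunck formulation, used here as a generic stand-in for the authors' specific energy functional (the parameters and test image are illustrative):

    ```python
    import numpy as np

    def horn_schunck(I1, I2, alpha=0.3, n_iter=400):
        """Dense flow (u, v) from I1 to I2 minimizing the Horn-Schunck energy:
        (Ix*u + Iy*v + It)^2 + alpha^2 * (|grad u|^2 + |grad v|^2)."""
        I1, I2 = I1.astype(float), I2.astype(float)
        Ix = np.gradient(I1, axis=1)
        Iy = np.gradient(I1, axis=0)
        It = I2 - I1
        u = np.zeros_like(I1)
        v = np.zeros_like(I1)

        def avg(f):  # 4-neighbour average (wrap-around borders, for brevity)
            return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                    np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0

        for _ in range(n_iter):  # Jacobi-style update of the Euler-Lagrange eqs.
            ubar, vbar = avg(u), avg(v)
            t = (Ix * ubar + Iy * vbar + It) / (alpha**2 + Ix**2 + Iy**2)
            u = ubar - Ix * t
            v = vbar - Iy * t
        return u, v

    # A smooth blob translated one pixel to the right: flow should point in +x.
    y, x = np.mgrid[0:64, 0:64]
    blob = np.exp(-((x - 31.0)**2 + (y - 31.0)**2) / 50.0)
    shifted = np.exp(-((x - 32.0)**2 + (y - 31.0)**2) / 50.0)
    u, v = horn_schunck(blob, shifted)
    print(u[20:44, 20:44].mean() > 0.01)   # positive horizontal flow
    ```

    The brightness-constancy data term is exactly where the sensor diversity of the paper enters: IR sensitivity or flash illumination changes the image data that the data term is computed from.
    
    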

  18. Exome-wide DNA capture and next generation sequencing in domestic and wild species.

    PubMed

    Cosart, Ted; Beja-Pereira, Albano; Chen, Shanyuan; Ng, Sarah B; Shendure, Jay; Luikart, Gordon

    2011-07-05

    Gene-targeted and genome-wide markers are crucial to advance evolutionary biology, agriculture, and biodiversity conservation by improving our understanding of genetic processes underlying adaptation and speciation. Unfortunately, for eukaryotic species with large genomes it remains costly to obtain genome sequences and to develop genome resources such as genome-wide SNPs. A method is needed to allow gene-targeted, next-generation sequencing that is flexible enough to include any gene or number of genes, unlike transcriptome sequencing. Such a method would allow sequencing of many individuals, avoiding ascertainment bias in subsequent population genetic analyses. We demonstrate the usefulness of a recent technology, exon capture, for genome-wide, gene-targeted marker discovery in species with no genome resources. We use coding gene sequences from the domestic cow genome sequence (Bos taurus) to capture (enrich for), and subsequently sequence, thousands of exons of B. taurus, B. indicus, and Bison bison (wild bison). Our capture array has probes for 16,131 exons in 2,570 genes, including 203 candidate genes with known function and of interest for their association with disease and other fitness traits. We successfully sequenced and mapped exon sequences from across the 29 autosomes and X chromosome in the B. taurus genome sequence. Exon capture and high-throughput sequencing identified thousands of putative SNPs spread evenly across all reference chromosomes, in all three individuals, including hundreds of SNPs in our targeted candidate genes. This study shows exon capture can be customized for SNP discovery in many individuals and for non-model species without genomic resources. Our captured exome subset was small enough for affordable next-generation sequencing, and successfully captured exons from a divergent wild species using the domestic cow genome as reference.

  19. Design of multi-mode compatible image acquisition system for HD area array CCD

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Sui, Xiubao

    2014-11-01

    In line with the current trends in video surveillance toward digitization and high definition, a multi-mode compatible image acquisition system for a high-definition (HD) area-array CCD is designed. The hardware and software designs of a color video capture system for the HD area-array CCD KAI-02150 from Truesense Imaging are analyzed, and the structural parameters of the CCD and the color video acquisition principle of the system are introduced. The CCD control sequence and the timing logic of the whole capture system are then realized. Correlated Double Sampling (CDS) is used to filter the video-signal noise (kTC noise and 1/f noise) and thereby enhance the signal-to-noise ratio of the system. Compatible hardware and software designs are presented for two other image sensors of the same series, the four-megapixel KAI-04050 and the eight-megapixel KAI-08050. A Field Programmable Gate Array (FPGA) is adopted as the key controller of the system, enabling a top-down modular design that implements the hardware logic in software form and improves development efficiency. Finally, the required timing drive signals are simulated with the Quartus II 12.1 development platform and VHDL. The simulation results indicate that the driving circuit is characterized by a simple architecture, low power consumption, and strong anti-interference ability, meeting the demands of miniaturization and high definition.

  20. A comparative analysis of exome capture.

    PubMed

    Parla, Jennifer S; Iossifov, Ivan; Grabill, Ian; Spector, Mona S; Kramer, Melissa; McCombie, W Richard

    2011-09-29

    Human exome resequencing using commercial target capture kits has been, and continues to be, used to sequence large numbers of individuals in the search for variants associated with various human diseases. We rigorously evaluated the capabilities of two solution exome capture kits. These analyses help clarify the strengths and limitations of those data, as well as systematically identify variables that should be considered in their use. Each exome kit performed well at capturing the targets it was designed to capture, which mainly correspond to the consensus coding sequences (CCDS) annotations of the human genome. In addition, based on their respective targets, each capture kit coupled with high-coverage Illumina sequencing produced highly accurate nucleotide calls. However, other databases, such as the Reference Sequence collection (RefSeq), define the exome more broadly, and so, not surprisingly, the exome kits did not capture these additional regions. Commercial exome capture kits provide a very efficient way to sequence select areas of the genome at very high accuracy. Here we provide the data to help guide critical analyses of sequencing data derived from these products.

  1. Managing complex processing of medical image sequences by program supervision techniques

    NASA Astrophysics Data System (ADS)

    Crubezy, Monica; Aubry, Florent; Moisan, Sabine; Chameroy, Virginie; Thonnat, Monique; Di Paola, Robert

    1997-05-01

    Our objective is to offer clinicians wider access to evolving medical image processing (MIP) techniques, which are crucial for improving the assessment and quantification of physiological processes but difficult for non-specialists in MIP to handle. Based on artificial intelligence techniques, our approach consists of developing a knowledge-based program supervision system that automates the management of MIP libraries. It comprises a library of programs, a knowledge base capturing the expertise about programs and data, and a supervision engine. It selects, organizes and executes the appropriate MIP programs given a goal to achieve and a data set, with dynamic feedback based on the results obtained. It also advises users in the development of new procedures chaining MIP programs. We have tested the approach on an application of factor analysis of medical image sequences as a means of predicting the response of osteosarcoma to chemotherapy, with both MRI and NM dynamic image sequences. As a result, our program supervision system frees clinical end-users from performing tasks outside their competence, permitting them to concentrate on clinical issues. Our approach therefore enables better exploitation of the possibilities offered by MIP and higher-quality results, in terms of both robustness and reliability.

  2. Efficient burst image compression using H.265/HEVC

    NASA Astrophysics Data System (ADS)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware enters consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. Such camera operation allows, for example, selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or its artistic aspects (e.g., by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality, and that such image bursts can consist of at least tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random-access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order, or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random-access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve coding efficiency.

  3. Projector-Camera Systems for Immersive Training

    DTIC Science & Technology

    2006-01-01

    average to a sequence of 100 captured distortion-corrected images. The OpenCV library [OpenCV] was used for camera calibration. To correct for... rendering application [Treskunov, Pair, and Swartout, 2004]. It was transposed to take into account different matrix conventions between OpenCV and... Screen Imperfections. Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE. OpenCV: Open Source Computer Vision. [Available

  4. Storage, retrieval, and edit of digital video using Motion JPEG

    NASA Astrophysics Data System (ADS)

    Sudharsanan, Subramania I.; Lee, D. H.

    1994-04-01

    In a companion paper we describe a Micro Channel adapter card that can perform real-time JPEG (Joint Photographic Experts Group) compression of a 640 by 480 24-bit image within 1/30th of a second. Since this corresponds to NTSC video rates at considerably good perceptual quality, this system can be used for real-time capture and manipulation of continuously fed video. To facilitate capturing the compressed video in a storage medium, an IBM Bus master SCSI adapter with cache is utilized. Efficacy of the data transfer mechanism is considerably improved using the System Control Block architecture, an extension to Micro Channel bus masters. We show experimental results that the overall system can perform at compressed data rates of about 1.5 MBytes/second sustained and with sporadic peaks to about 1.8 MBytes/second depending on the image sequence content. We also describe mechanisms to access the compressed data very efficiently through special file formats. This in turn permits creation of simpler sequence editors. Another advantage of the special file format is easy control of forward, backward and slow motion playback. The proposed method can be extended for design of a video compression subsystem for a variety of personal computing systems.
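    The special file formats mentioned above make arbitrary-order playback cheap because frame access reduces to index arithmetic: if each variable-size compressed frame is recorded in an offset table, the player can seek any frame in O(1) and step backward as easily as forward. A minimal sketch of one possible indexed container (the layout is invented for illustration; it is not the actual IBM format):

    ```python
    import io
    import struct

    def write_indexed(frames):
        """Pack variable-size compressed frames with a trailing offset table."""
        buf = io.BytesIO()
        offsets = []
        for data in frames:
            offsets.append(buf.tell())
            buf.write(data)
        index_pos = buf.tell()
        for off in offsets:
            buf.write(struct.pack("<Q", off))       # 64-bit frame offsets
        buf.write(struct.pack("<QI", index_pos, len(frames)))  # footer
        return buf.getvalue()

    def read_frame(blob, i):
        """Random access: read frame i without scanning earlier frames."""
        index_pos, n = struct.unpack("<QI", blob[-12:])
        off = struct.unpack("<Q", blob[index_pos + 8*i:index_pos + 8*(i+1)])[0]
        end = (struct.unpack("<Q", blob[index_pos + 8*(i+1):index_pos + 8*(i+2)])[0]
               if i + 1 < n else index_pos)
        return blob[off:end]

    # Stand-in byte strings play the role of JPEG-compressed frames.
    frames = [b"JPEG0", b"JPEG-one", b"JPEG#2"]
    blob = write_indexed(frames)
    print(read_frame(blob, 1))   # frames can be fetched in any order,
    print(read_frame(blob, 0))   # enabling backward and slow-motion playback
    ```

    With such a table, forward, backward and slow-motion playback are all just different orders of index lookups.
    
    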

  5. Image Quality in High-resolution and High-cadence Solar Imaging

    NASA Astrophysics Data System (ADS)

    Denker, C.; Dineva, E.; Balthasar, H.; Verma, M.; Kuckein, C.; Diercke, A.; González Manrique, S. J.

    2018-03-01

    Broad-band imaging and even imaging with a moderate bandpass (about 1 nm) provides a photon-rich environment, where frame selection (lucky imaging) becomes a helpful tool in image restoration, allowing us to perform a cost-benefit analysis on how to design observing sequences for imaging with high spatial resolution in combination with real-time correction provided by an adaptive optics (AO) system. This study presents high-cadence (160 Hz) G-band and blue continuum image sequences obtained with the High-resolution Fast Imager (HiFI) at the 1.5-meter GREGOR solar telescope, where the speckle-masking technique is used to restore images with nearly diffraction-limited resolution. The HiFI employs two synchronized large-format and high-cadence sCMOS detectors. The median filter gradient similarity (MFGS) image-quality metric is applied, among others, to AO-corrected image sequences of a pore and a small sunspot observed on 2017 June 4 and 5. A small region of interest, which was selected for fast-imaging performance, covered these contrast-rich features and their neighborhood, which were part of Active Region NOAA 12661. Modifications of the MFGS algorithm uncover the field- and structure-dependency of this image-quality metric. However, MFGS still remains a good choice for determining image quality without a priori knowledge, which is an important characteristic when classifying the huge number of high-resolution images contained in data archives. In addition, this investigation demonstrates that a fast cadence and millisecond exposure times are still insufficient to reach the coherence time of daytime seeing. Nonetheless, the analysis shows that data acquisition rates exceeding 50 Hz are required to capture a substantial fraction of the best seeing moments, significantly boosting the performance of post-facto image restoration.

  6. BNU-LSVED: a multimodal spontaneous expression database in educational environment

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Wei, Qinglan; He, Jun; Yu, Lejun; Zhu, Xiaoming

    2016-09-01

    In the field of pedagogy or educational psychology, emotions are treated as very important factors that are closely associated with cognitive processes. Hence, it is meaningful for teachers to analyze students' emotions in classrooms, adjusting their teaching activities and improving students' individual development accordingly. To provide a benchmark for different expression recognition algorithms, a large collection of training and test data recorded in classroom environments is acutely needed. In this paper, we present a multimodal spontaneous database captured in a real learning environment. To collect the data, students watched seven kinds of teaching videos and were simultaneously filmed by a camera. Trained coders assigned one of five learning-expression labels to each image sequence extracted from the captured videos. This subset consists of 554 multimodal spontaneous expression image sequences (22,160 frames) recorded in real classrooms. The database has four main advantages. 1) Because the data were recorded in a real classroom environment, the viewer's distance from the camera and the lighting vary considerably between image sequences. 2) All the data presented are natural spontaneous responses to teaching videos. 3) The multimodal database also contains nonverbal behavior, including eye movement, head posture and gestures, for inferring a student's affective state during the courses. 4) The video sequences contain different kinds of temporal activation patterns. In addition, we have demonstrated through Cronbach's alpha that the labels for the image sequences are highly reliable.

  7. Profiling defect depth in composite materials using thermal imaging NDE

    NASA Astrophysics Data System (ADS)

    Obeidat, Omar; Yu, Qiuye; Han, Xiaoyan

    2018-04-01

    Sonic Infrared (SIR) NDE is a relatively new NDE technology that has been demonstrated to be a reliable and sensitive method for detecting defects. SIR uses ultrasonic excitation with IR imaging to detect defects and flaws in the structures being inspected. An IR camera captures infrared radiation from the target for a period of time covering the ultrasound pulse; this period may be much longer than the pulse, depending on the defect depth and the thermal properties of the materials. With the increasing deployment of composites in modern aerospace and automobile structures, fast, wide-area and reliable NDE methods are necessary. Impact damage is one of the major concerns in modern composites: damage can occur at a certain depth without any visual indication on the surface, and defect depth information can influence maintenance decisions. Depth profiling relies on the time delays in the captured image sequence. We present our work on defect depth profiling using the temporal information of IR images. An analytical model is introduced to describe heat diffusion from subsurface defects in composite materials, and depth profiling using peak time is introduced as well.
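    The peak-time idea above can be sketched per pixel: find when each pixel's surface temperature peaks in the captured sequence; by heat-diffusion scaling, later peaks indicate deeper defects. A toy numpy illustration (Gaussian pulses stand in for the paper's analytical diffusion model; timings are invented):

    ```python
    import numpy as np

    def peak_time_map(seq, dt=1.0):
        """seq: (T, H, W) thermal image sequence -> per-pixel time of peak."""
        return seq.argmax(axis=0) * dt

    # Synthetic two-pixel example: a shallow defect heats the surface above it
    # early; a deeper defect produces a weaker response that peaks later.
    t = np.arange(100.0)
    shallow = np.exp(-(t - 20)**2 / 40.0)
    deep = 0.6 * np.exp(-(t - 55)**2 / 160.0)
    seq = np.stack([shallow, deep], axis=1).reshape(100, 2, 1)

    print(peak_time_map(seq).ravel())   # -> [20. 55.]
    ```

    In practice the peak-time map would be converted to depth through the calibrated diffusion model for the specific composite layup.
    
    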

  8. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capture devices. Further resolution increases bring numerous challenges. As pixel size shrinks, the amount of light collected per pixel decreases, raising the noise level. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high-quality lenses are used, some chromatic aberration artefacts remain. Noise levels additionally increase at higher frame rates. To reduce the complexity and price of the camera, a single sensor captures all three colors by relying on a Color Filter Array. To obtain a full-resolution color image, missing color components have to be interpolated, i.e., demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method that jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. To reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  9. Understanding the optics to aid microscopy image segmentation.

    PubMed

    Yin, Zhaozheng; Li, Kang; Kanade, Takeo; Chen, Mei

    2010-01-01

    Image segmentation is essential for many automated microscopy image analysis systems. Rather than treating microscopy images as general natural images and rushing into the image processing warehouse for solutions, we propose to study a microscope's optical properties to model its image formation process first using phase contrast microscopy as an exemplar. It turns out that the phase contrast imaging system can be relatively well explained by a linear imaging model. Using this model, we formulate a quadratic optimization function with sparseness and smoothness regularizations to restore the "authentic" phase contrast images that directly correspond to specimen's optical path length without phase contrast artifacts such as halo and shade-off. With artifacts removed, high quality segmentation can be achieved by simply thresholding the restored images. The imaging model and restoration method are quantitatively evaluated on two sequences with thousands of cells captured over several days.
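    The restore-then-threshold pipeline above can be illustrated on a 1-D toy version of a linear imaging model, here solved with ISTA (iterative soft-thresholding) as a generic stand-in for the paper's quadratic optimization with sparseness and smoothness regularizations; all signals and parameters are invented for illustration:

    ```python
    import numpy as np

    def ista_restore(g, psf, lam=0.05, n_iter=300):
        """Minimize (1/2)||h * f - g||^2 + lam * ||f||_1 by iterative
        soft-thresholding, with convolution by psf as the imaging model."""
        H = lambda f: np.convolve(f, psf, mode="same")
        # Step size 1/L with the Lipschitz bound L = ||psf||_1^2.
        L = np.sum(np.abs(psf))**2
        f = np.zeros_like(g)
        for _ in range(n_iter):
            grad = H(H(f) - g)               # psf is symmetric, so H^T == H
            f = f - grad / L
            f = np.sign(f) * np.maximum(np.abs(f) - lam / L, 0.0)  # soft threshold
        return f

    # Ground truth: two bright "cells" on a dark background, observed
    # through a small symmetric blur kernel.
    truth = np.zeros(60)
    truth[[15, 40]] = 1.0
    psf = np.array([0.25, 0.5, 0.25])
    g = np.convolve(truth, psf, mode="same")

    restored = ista_restore(g, psf)
    segmented = restored > 0.5 * restored.max()   # simple thresholding
    print(np.flatnonzero(segmented))              # -> [15 40]
    ```

    The point mirrors the abstract: once the imaging model has been inverted with suitable regularization, segmentation reduces to a plain threshold.
    
    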

  10. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015593 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of three crew members -- half the station crew -- who were equipped with still cameras for this purpose on the International Space Station as the shuttle “posed” for photos and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). A 1000 millimeter lens was used to capture this particular series of images.

  11. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver  

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015396 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of three crew members -- half the station crew -- who were equipped with still cameras for this purpose on the International Space Station as the shuttle “posed” for photos and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). An 800 millimeter lens was used to capture this particular series of images.

  12. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015600 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of three crew members -- half the station crew -- who were equipped with still cameras for this purpose on the International Space Station as the shuttle “posed” for photos and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). A 1000 millimeter lens was used to capture this particular series of images.

  13. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015662 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of three crew members -- half the station crew -- who were equipped with still cameras for this purpose on the International Space Station as the shuttle “posed” for photos and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). A 1000 millimeter lens was used to capture this particular series of images.

  14. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015668 (10 July 2011) --- This is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of three crew members -- half the station crew -- who were equipped with still cameras for this purpose on the International Space Station as the shuttle “posed” for photos and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). A 1000 millimeter lens was used to capture this particular series of images.

  15. Rotation invariant features for wear particle classification

    NASA Astrophysics Data System (ADS)

    Arof, Hamzah; Deravi, Farzin

    1997-09-01

    This paper investigates the ability of a set of rotation invariant features to classify images of wear particles found in the used lubricating oil of machinery. The rotation invariant attribute of the features derives from the property that the magnitudes of Fourier transform coefficients do not change with a spatial shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, local and global texture characteristics of an image can be described. A number of input sequences are formed by the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences generates coefficients whose magnitudes are invariant to rotation. Rotation invariant features extracted from these coefficients were used to classify wear particle images obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features obtained a 91% recognition rate, which compares favorably with the 76% rate achieved by features of a 6 by 6 co-occurrence matrix.
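    The feature construction in this abstract is compact enough to sketch: sample intensities on concentric rings around a neighborhood center and take FFT magnitudes; rotating the particle merely shifts each ring sequence circularly, which leaves the magnitudes unchanged. A simplified single-neighborhood illustration (sampling density and radii are arbitrary choices, not the paper's):

    ```python
    import numpy as np

    def ring_features(img, center, radii, n_samples=32):
        """FFT-magnitude features of intensities sampled on concentric rings."""
        cy, cx = center
        theta = 2 * np.pi * np.arange(n_samples) / n_samples
        feats = []
        for r in radii:
            ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int),
                         0, img.shape[0] - 1)
            xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int),
                         0, img.shape[1] - 1)
            ring = img[ys, xs]                       # intensities along one ring
            feats.append(np.abs(np.fft.fft(ring)))   # shift-invariant magnitudes
        return np.concatenate(feats)

    # A rotation of the particle circularly shifts the sampled ring sequence,
    # but the FFT magnitudes -- and hence the features -- are unchanged:
    rng = np.random.default_rng(0)
    ring = rng.random(32)
    print(np.allclose(np.abs(np.fft.fft(ring)),
                      np.abs(np.fft.fft(np.roll(ring, 7)))))   # -> True
    ```

    Concatenating such vectors over several radii (and, as in the paper, over many neighborhood centers) yields a rotation-invariant texture descriptor.
    
    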

  16. Image preprocessing for improving computational efficiency in implementation of restoration and superresolution algorithms.

    PubMed

    Sundareshan, Malur K; Bhattacharjee, Supratik; Inampudi, Radhika; Pang, Ho-Yuen

    2002-12-10

    Computational complexity is a major impediment to the real-time implementation of image restoration and superresolution algorithms in many applications. Although powerful restoration algorithms have been developed within the past few years utilizing sophisticated mathematical machinery (based on statistical optimization and convex set theory), these algorithms are typically iterative in nature and require a sufficient number of iterations to be executed to achieve the desired resolution improvement that may be needed to meaningfully perform postprocessing image exploitation tasks in practice. Additionally, recent technological breakthroughs have facilitated novel sensor designs (focal plane arrays, for instance) that make it possible to capture megapixel imagery data at video frame rates. A major challenge in the processing of these large-format images is to complete the execution of the image processing steps within the frame capture times and to keep up with the output rate of the sensor so that all data captured by the sensor can be efficiently utilized. Consequently, development of novel methods that facilitate real-time implementation of image restoration and superresolution algorithms is of significant practical interest and is the primary focus of this study. The key to designing computationally efficient processing schemes lies in strategically introducing appropriate preprocessing steps together with the superresolution iterations to tailor optimized overall processing sequences for imagery data of specific formats. For substantiating this assertion, three distinct methods for tailoring a preprocessing filter and integrating it with the superresolution processing steps are outlined. 
These methods consist of a region-of-interest extraction scheme, a background-detail separation procedure, and a scene-derived information extraction step for implementing a set-theoretic restoration of the image that is less demanding in computation than the superresolution iterations. A quantitative evaluation of the performance of these algorithms in restoring and superresolving various imagery data captured by diffraction-limited sensing operations is also presented.
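    The computational benefit of the region-of-interest idea can be illustrated with a minimal sketch (assumptions: a simple intensity-threshold ROI detector, and a basic Van Cittert iteration standing in for the unspecified restoration loop):

```python
import numpy as np

def extract_roi(img, thresh_frac=0.1, pad=2):
    """Bounding-box ROI of pixels above a fraction of the peak intensity."""
    ys, xs = np.nonzero(img > thresh_frac * img.max())
    y0, y1 = max(ys.min() - pad, 0), min(ys.max() + pad + 1, img.shape[0])
    x0, x1 = max(xs.min() - pad, 0), min(xs.max() + pad + 1, img.shape[1])
    return img[y0:y1, x0:x1], (y0, x0)

def van_cittert(blurred, otf, iters=20, beta=0.5):
    """Simple iterative (Van Cittert) restoration; otf is the blur's
    frequency response on the same grid as `blurred`."""
    est = blurred.copy()
    for _ in range(iters):
        reblurred = np.real(np.fft.ifft2(np.fft.fft2(est) * otf))
        est = est + beta * (blurred - reblurred)
    return est
```

    Running the iterations only on the extracted ROI rather than the full frame cuts the per-iteration FFT cost roughly in proportion to the pixel count, which is the kind of saving the preprocessing steps above aim at.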

  17. Video System for Viewing From a Remote or Windowless Cockpit

    NASA Technical Reports Server (NTRS)

    Banerjee, Amamath

    2009-01-01

    A system of electronic hardware and software synthesizes, in nearly real time, an image of a portion of a scene surveyed by as many as eight video cameras aimed, in different directions, at portions of the scene. This is a prototype of systems that would enable a pilot to view the scene outside a remote or windowless cockpit. The outputs of the cameras are digitized. Direct memory addressing is used to store the data of a few captured images in sequence, and the sequence is repeated in cycles. Cylindrical warping is used in merging adjacent images at their borders to construct a mosaic image of the scene. The mosaic-image data are written to a memory block from which they can be rendered on a head-mounted display (HMD) device. A subsystem in the HMD device tracks the direction of gaze of the wearer, providing data that are used to select, for display, the portion of the mosaic image corresponding to the direction of gaze. The basic functionality of the system has been demonstrated by mounting the cameras on the roof of a van and steering the van by use of the images presented on the HMD device.
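    The cylindrical warping used to merge adjacent camera views can be sketched as follows (a minimal nearest-neighbour version, not the flight code; the focal length in pixels is an assumed input):

```python
import numpy as np

def cylindrical_warp(img, f):
    """Inverse-map each output (cylinder) pixel to a source pixel so that
    neighbouring views can later be blended along vertical seams."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yd, xd = np.mgrid[0:h, 0:w]
    theta = (xd - cx) / f                # angle on the cylinder
    hgt = (yd - cy) / f                  # height on the cylinder
    xs = np.tan(theta) * f + cx          # source column
    ys = hgt / np.cos(theta) * f + cy    # source row
    out = np.zeros_like(img)
    valid = (xs >= 0) & (xs <= w - 1) & (ys >= 0) & (ys <= h - 1)
    out[yd[valid], xd[valid]] = img[np.round(ys[valid]).astype(int),
                                    np.round(xs[valid]).astype(int)]
    return out
```

    After each frame is warped onto a common cylinder, adjacent views differ only by a horizontal translation, which makes the border blending in the mosaic straightforward.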
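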

  18. TARGETED CAPTURE IN EVOLUTIONARY AND ECOLOGICAL GENOMICS

    PubMed Central

    Jones, Matthew R.; Good, Jeffrey M.

    2016-01-01

    The rapid expansion of next-generation sequencing has yielded a powerful array of tools to address fundamental biological questions at a scale that was inconceivable just a few years ago. Various genome partitioning strategies to sequence select subsets of the genome have emerged as powerful alternatives to whole genome sequencing in ecological and evolutionary genomic studies. High-throughput targeted capture is one such strategy that involves the parallel enrichment of pre-selected genomic regions of interest. The growing use of targeted capture demonstrates its potential power to address a range of research questions, yet these approaches have yet to expand broadly across labs focused on evolutionary and ecological genomics. In part, the use of targeted capture has been hindered by the logistics of capture design and implementation in species without established reference genomes. Here we aim to 1) increase the accessibility of targeted capture to researchers working in non-model taxa by discussing capture methods that circumvent the need for a reference genome, 2) highlight the evolutionary and ecological applications where this approach is emerging as a powerful sequencing strategy, and 3) discuss the future of targeted capture and other genome partitioning approaches in light of the increasing accessibility of whole genome sequencing. Given the practical advantages and increasing feasibility of high-throughput targeted capture, we anticipate an ongoing expansion of capture-based approaches in evolutionary and ecological research, synergistic with an expansion of whole genome sequencing. PMID:26137993

  19. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread, because until recently the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. Before the sensor of an action camera can be used in any photogrammetric procedure, a careful and reliable self-calibration must be applied, a relatively difficult scenario because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video capture modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution. PMID:25237898

  20. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread, because until recently the images provided by the sensors, in either still or video capture mode, were not large enough to support analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. Before the sensor of an action camera can be used in any photogrammetric procedure, a careful and reliable self-calibration must be applied, a relatively difficult scenario because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible image resolution. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video capture modes of a novel action camera, the GoPro Hero 3, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution.
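    The calibration itself is done with OpenCV in the paper; the underlying radial (Brown) distortion model that such a calibration fits can be illustrated in plain NumPy (the coefficients k1, k2 below are arbitrary examples, not the GoPro's):

```python
import numpy as np

def distort(pts, k1, k2):
    """Apply Brown radial distortion to normalized image coordinates."""
    r2 = (pts ** 2).sum(axis=1, keepdims=True)
    return pts * (1 + k1 * r2 + k2 * r2 ** 2)

def undistort(pts, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration, as done when
    producing undistorted scenes from calibrated coefficients."""
    und = pts.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=1, keepdims=True)
        und = pts / (1 + k1 * r2 + k2 * r2 ** 2)
    return und
```

    Negative k1 models the barrel distortion typical of wide-angle action-camera lenses; the fixed-point inversion converges quickly for moderate distortion.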

  1. Single-Cell RNA Sequencing of Glioblastoma Cells.

    PubMed

    Sen, Rajeev; Dolgalev, Igor; Bayin, N Sumru; Heguy, Adriana; Tsirigos, Aris; Placantonakis, Dimitris G

    2018-01-01

    Single-cell RNA sequencing (sc-RNASeq) is a recently developed technique used to evaluate the transcriptome of individual cells. As opposed to conventional RNASeq in which entire populations are sequenced in bulk, sc-RNASeq can be beneficial when trying to better understand gene expression patterns in markedly heterogeneous populations of cells or when trying to identify transcriptional signatures of rare cells that may be underrepresented when using conventional bulk RNASeq. In this method, we describe the generation and analysis of cDNA libraries from single patient-derived glioblastoma cells using the C1 Fluidigm system. The protocol details the use of the C1 integrated fluidics circuit (IFC) for capturing, imaging and lysing cells; performing reverse transcription; and generating cDNA libraries that are ready for sequencing and analysis.

  2. Scalable Photogrammetric Motion Capture System "mosca": Development and Application

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.

    2015-05-01

    A wide variety of applications (from industrial to entertainment) requires reliable and accurate 3D information about the motion of an object and its parts. Very often the movement is rather fast, as in vehicle motion, sports biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation owing to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques for 3D measurement and provides high-speed image acquisition, high accuracy of 3D measurements, and highly automated processing of the captured data. Depending on the application, the system can easily be adapted to working areas from 100 mm to 10 m. The developed motion capture system uses from two to four technical-vision cameras to acquire video sequences of object motion. All cameras work in synchronized mode at frame rates up to 100 frames per second under the control of a personal computer, enabling accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
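    The core photogrammetric step, recovering 3D point coordinates from two synchronized camera views, can be sketched with linear (DLT) triangulation (an illustration under ideal calibrated cameras, not the system's actual implementation):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: image coords (u, v)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]  # null vector of A = homogeneous 3D point
    return X[:3] / X[3]
```

    With more than two cameras the same system simply gains two rows per extra view, which is how additional cameras improve accuracy.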

  3. Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking

    NASA Astrophysics Data System (ADS)

    Antonya, C.

    2017-12-01

    Optical tracking of users and of various technical systems is becoming more and more popular. It consists of analysing sequences of recorded images using video capture devices and image processing algorithms. The returned data contain mainly point clouds, coordinates of markers, or coordinates of points of interest. These data can be used to retrieve information about the geometry of the objects, but also to extract parameters for an analytical model of the system useful in a variety of computer-aided engineering simulations. Parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers' positions. The least-squares method was used to fit the data to different geometrical shapes (ellipse, circle, plane) and to obtain the position and orientation of revolute joints.
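    For the revolute-joint case, the least-squares circle fit of marker positions can be written directly from the circle equation (a sketch of the standard linear ("Kasa") formulation; the paper's exact estimator may differ):

```python
import numpy as np

def fit_circle(xy):
    """Least-squares circle fit: (x-a)^2 + (y-b)^2 = r^2 rearranges to the
    linear system 2ax + 2by + c = x^2 + y^2 with c = r^2 - a^2 - b^2."""
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r
```

    The fitted center (a, b) gives the joint axis position in the image plane, and r the marker's lever arm; the same least-squares idea extends to ellipse and plane fits.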

  4. Creating cinematic wide gamut HDR-video for the evaluation of tone mapping operators and HDR-displays

    NASA Astrophysics Data System (ADS)

    Froehlich, Jan; Grandinetti, Stefan; Eberhardt, Bernd; Walter, Simon; Schilling, Andreas; Brendel, Harald

    2014-03-01

    High quality video sequences are required for the evaluation of tone mapping operators and high dynamic range (HDR) displays. We provide scenic and documentary scenes with a dynamic range of up to 18 stops. The scenes are staged using professional film lighting, make-up and set design to enable the evaluation of image and material appearance. To address challenges for HDR-displays and temporal tone mapping operators, the sequences include highlights entering and leaving the image, brightness changing over time, high contrast skin tones, specular highlights and bright, saturated colors. HDR-capture is carried out using two cameras mounted on a mirror-rig. To achieve a cinematic depth of field, digital motion picture cameras with Super-35mm size sensors are used. We provide HDR-video sequences to serve as a common ground for the evaluation of temporal tone mapping operators and HDR-displays. They are available to the scientific community for further research.

  5. Sparse Representations-Based Super-Resolution of Key-Frames Extracted from Frames-Sequences Generated by a Visual Sensor Network

    PubMed Central

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-01-01

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP recovers the true sparsity pattern better than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes. PMID:24566632

  6. Sparse representations-based super-resolution of key-frames extracted from frames-sequences generated by a visual sensor network.

    PubMed

    Sajjad, Muhammad; Mehmood, Irfan; Baik, Sung Wook

    2014-02-21

    Visual sensor networks (VSNs) usually generate a low-resolution (LR) frame-sequence due to energy and processing constraints. These LR-frames are not very appropriate for use in certain surveillance applications. It is very important to enhance the resolution of the captured LR-frames using resolution enhancement schemes. In this paper, an effective framework for a super-resolution (SR) scheme is proposed that enhances the resolution of LR key-frames extracted from frame-sequences captured by visual-sensors. In a VSN, a visual processing hub (VPH) collects a huge amount of visual data from camera sensors. In the proposed framework, at the VPH, key-frames are extracted using our recent key-frame extraction technique and are streamed to the base station (BS) after compression. A novel effective SR scheme is applied at the BS to produce a high-resolution (HR) output from the received key-frames. The proposed SR scheme uses optimized orthogonal matching pursuit (OOMP) for sparse-representation recovery in SR. OOMP recovers the true sparsity pattern better than orthogonal matching pursuit (OMP). This property of OOMP helps produce an HR image that is closer to the original image. The K-SVD dictionary learning procedure is incorporated for dictionary learning. Batch-OMP improves the dictionary learning process by removing the limitation in handling a large set of observed signals. Experimental results validate the effectiveness of the proposed scheme and show its superiority over other state-of-the-art schemes.
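    For reference, the baseline OMP that OOMP refines is a short greedy loop (a minimal NumPy sketch, not the authors' implementation; dictionary columns are assumed unit-norm):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick the atom (column of D)
    most correlated with the residual, then re-fit all picked atoms."""
    resid, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ resid))))
        coef = np.linalg.lstsq(D[:, idx], y, rcond=None)[0]
        resid = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```

    OOMP differs in the atom-selection rule (it accounts for the orthogonal projection when scoring candidates), which is what improves recovery of the true support.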

  7. Still-to-video face recognition in unconstrained environments

    NASA Astrophysics Data System (ADS)

    Wang, Haoyu; Liu, Changsong; Ding, Xiaoqing

    2015-02-01

    Face images from video sequences captured in unconstrained environments usually contain several kinds of variations, e.g. pose, facial expression, illumination, image resolution and occlusion. Motion blur and compression artifacts also deteriorate recognition performance. Besides, in various practical systems such as law enforcement, video surveillance and e-passport identification, only a single still image per person is enrolled as the gallery set. Many existing methods may fail to work due to variations in face appearances and the limit of available gallery samples. In this paper, we propose a novel approach for still-to-video face recognition in unconstrained environments. By assuming that faces from still images and video frames share the same identity space, a regularized least squares regression method is utilized to tackle the multi-modality problem. Regularization terms based on heuristic assumptions are enrolled to avoid overfitting. In order to deal with the single image per person problem, we exploit face variations learned from training sets to synthesize virtual samples for gallery samples. We adopt a learning algorithm combining both affine/convex hull-based approach and regularizations to match image sets. Experimental results on a real-world dataset consisting of unconstrained video sequences demonstrate that our method outperforms the state-of-the-art methods impressively.

  8. Saturn's Hexagon as Summer Solstice Approaches

    NASA Image and Video Library

    2017-05-24

    These natural color views from NASA's Cassini spacecraft compare the appearance of Saturn's north-polar region in June 2013 and April 2017. In both views, Saturn's polar hexagon dominates the scene. The comparison shows how clearly the color of the region changed in the interval between the two views, which represents the latter half of Saturn's northern hemisphere spring. In 2013, the entire interior of the hexagon appeared blue. By 2017, most of the hexagon's interior was covered in yellowish haze, and only the center of the polar vortex retained the blue color. The seasonal arrival of the sun's ultraviolet light triggers the formation of photochemical aerosols, leading to haze formation. The general yellowing of the polar region is believed to be caused by smog particles produced by increasing solar radiation shining on the polar region as Saturn approached the northern summer solstice on May 24, 2017. Scientists are considering several ideas to explain why the center of the polar vortex remains blue while the rest of the polar region has turned yellow. One idea is that, because the atmosphere in the vortex's interior is the last place in the northern hemisphere to be exposed to spring and summer sunlight, smog particles have not yet changed the color of the region. A second explanation hypothesizes that the polar vortex may have an internal circulation similar to hurricanes on Earth. If the Saturnian polar vortex indeed has an analogous structure to terrestrial hurricanes, the circulation should be downward in the eye of the vortex. The downward circulation should keep the atmosphere clear of the photochemical smog particles, and may explain the blue color. Images captured with Cassini's wide-angle camera using red, green and blue spectral filters were combined to create these natural-color views. The 2013 view (left in the combined view), was captured on June 25, 2013, when the spacecraft was about 430,000 miles (700,000 kilometers) away from Saturn. 
The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels and an image scale of about 52 miles (80 kilometers) per pixel; the images have been mapped in polar stereographic projection to a resolution of approximately 16 miles (25 kilometers) per pixel. The second and third frames in the animation were taken approximately 130 and 260 minutes after the first image. The 2017 sequence (right in the combined view) was captured on April 25, 2017, just before Cassini made its first dive between Saturn and its rings. During the imaging sequence, the spacecraft's distance from the center of the planet changed from 450,000 miles (725,000 kilometers) to 143,000 miles (230,000 kilometers). The original versions of these images, as sent by the spacecraft, have a size of 512 by 512 pixels. The resolution of the original images changed from about 52 miles (80 kilometers) per pixel at the beginning to about 9 miles (14 kilometers) per pixel at the end. The images have been mapped in polar stereographic projection to a resolution of approximately 16 miles (25 kilometers) per pixel. The average interval between the frames in the movie sequence is 230 minutes. Corresponding animated movie sequences are available at https://photojournal.jpl.nasa.gov/catalog/PIA21611

  9. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue-condition monitoring of springs, and a new test method for the telescopic characteristics of the circuit breaker operating-mechanism spring is proposed based on image processing. A high-speed camera captures image sequences of the spring's movement while the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time and speed-time curves, from which the spring expansion and deformation parameters are extracted, laying a foundation for subsequent spring-force analysis and matching-state evaluation. Simulation tests at an experimental site show that this image-analysis method can avoid the complex installation of traditional mechanical sensors and enable online monitoring and status assessment of the circuit breaker spring.
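    The image-matching step that yields the deformation-time curve can be illustrated with brute-force normalized cross-correlation of a marker template against each frame (a sketch; the template size and search strategy are assumptions, not details from the paper):

```python
import numpy as np

def match_template(frame, tmpl):
    """Return the (row, col) of the best normalized cross-correlation match,
    i.e. the tracked marker position in this frame."""
    fh, fw = frame.shape
    th, tw = tmpl.shape
    t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-12)
    best, pos = -np.inf, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            w = frame[r:r + th, c:c + tw]
            score = ((w - w.mean()) / (w.std() + 1e-12) * t).mean()
            if score > best:
                best, pos = score, (r, c)
    return pos
```

    Tracking the matched position across the high-speed sequence, and differencing it between frames, yields the deformation-time and speed-time curves described above.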

  10. Ns-scaled time-gated fluorescence lifetime imaging for forensic document examination

    NASA Astrophysics Data System (ADS)

    Zhong, Xin; Wang, Xinwei; Zhou, Yan

    2018-01-01

    A method of ns-scaled time-gated fluorescence lifetime imaging (TFLI) is proposed to distinguish different fluorescent substances in forensic document examination. Compared with a Video Spectral Comparator (VSC), which can examine only fluorescence intensity images, TFLI can detect manipulations of questioned documents such as falsification or alteration. The TFLI system can enhance weak signals by accumulation. Two fluorescence intensity images separated by a delay time tg are acquired by an ICCD and fitted into a fluorescence lifetime image. The lifetimes of the fluorescent substances are represented by different colors, which makes it easy to identify the substances and the sequence of handwriting strokes. This shows that TFLI is a powerful tool for forensic document examination. Further advantages of the TFLI system are its ns-scaled precision and its powerful capture capability.
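    Assuming a mono-exponential decay I(t) = I0·exp(-t/τ), two gated images a delay tg apart determine the lifetime per pixel as τ = tg / ln(I1/I2); a minimal sketch of that fit (our simplification, not the TFLI system's code):

```python
import numpy as np

def lifetime_two_gate(I1, I2, tg):
    """Per-pixel lifetime from two gated intensity images a delay tg apart,
    assuming mono-exponential decay I(t) = I0 * exp(-t / tau)."""
    return tg / np.log(I1 / I2)
```

    Mapping the resulting τ values to colors gives the lifetime image that separates inks with similar intensity but different decay behavior.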

  11. Detection of Obstacles in Monocular Image Sequences

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Camps, Octavia

    1997-01-01

    The ability to detect and locate runways/taxiways and obstacles in images captured using on-board sensors is an essential first step in the automation of low-altitude flight, landing, takeoff, and taxiing phase of aircraft navigation. Automation of these functions under different weather and lighting situations, can be facilitated by using sensors of different modalities. An aircraft-based Synthetic Vision System (SVS), with sensors of different modalities mounted on-board, complements the current ground-based systems in functions such as detection and prevention of potential runway collisions, airport surface navigation, and landing and takeoff in all weather conditions. In this report, we address the problem of detection of objects in monocular image sequences obtained from two types of sensors, a Passive Millimeter Wave (PMMW) sensor and a video camera mounted on-board a landing aircraft. Since the sensors differ in their spatial resolution, and the quality of the images obtained using these sensors is not the same, different approaches are used for detecting obstacles depending on the sensor type. These approaches are described separately in two parts of this report. The goal of the first part of the report is to develop a method for detecting runways/taxiways and objects on the runway in a sequence of images obtained from a moving PMMW sensor. Since the sensor resolution is low and the image quality is very poor, we propose a model-based approach for detecting runways/taxiways. We use the approximate runway model and the position information of the camera provided by the Global Positioning System (GPS) to define regions of interest in the image plane to search for the image features corresponding to the runway markers. Once the runway region is identified, we use histogram-based thresholding to detect obstacles on the runway and regions outside the runway. This algorithm is tested using image sequences simulated from a single real PMMW image.

  12. Mechanism of Disease in early Osteoarthritis: Application of modern MR imaging techniques – A technical report

    PubMed Central

    Jobke, B.; Bolbos, R.; Saadat, E.; Cheng, J.; Li, X.; Majumdar, S.

    2012-01-01

    The application of biomolecular magnetic resonance imaging becomes increasingly important in the context of early cartilage changes in degenerative and inflammatory joint disease, before gross morphological changes become apparent. In this limited technical report, we investigate the correlation of MRI T1, T2 and T1ρ relaxation times with quantitative biochemical measurements of the proteoglycan and collagen contents of cartilage, in close synopsis with histologic morphology. A recently developed MR imaging sequence, T1ρ, was able to detect early intracartilaginous degeneration quantitatively, and also qualitatively by color mapping, demonstrating a higher sensitivity than standard T2-weighted sequences. The results correlated highly with reduced proteoglycan content and disrupted collagen architecture as measured by biochemistry and histology. The findings lend support to a clinical implementation that allows rapid visual capture of pathology on a local, millimeter level. Further information about articular cartilage quality, otherwise not detectable in vivo via normal inspection, is needed for orthopedic treatment decisions now and in the future. PMID:22902064

  13. Evaluation of privacy in high dynamic range video sequences

    NASA Astrophysics Data System (ADS)

    Řeřábek, Martin; Yuan, Lin; Krasula, Lukáš; Korshunov, Pavel; Fliegel, Karel; Ebrahimi, Touradj

    2014-09-01

    The ability of high dynamic range (HDR) imaging to capture details in environments with high contrast has a significant impact on privacy in video surveillance. However, the extent to which HDR imaging affects privacy, compared with typical low dynamic range (LDR) imaging, is neither well studied nor well understood. To study this question, a suitable dataset of images and video sequences is needed. We have therefore created a publicly available dataset of HDR video for privacy evaluation, PEViD-HDR, which is an HDR extension of the existing Privacy Evaluation Video Dataset (PEViD). The PEViD-HDR video dataset can help in the evaluation of privacy-protection tools, as well as in showing the importance of HDR imaging in video surveillance applications and its influence on the privacy-intelligibility trade-off. We conducted a preliminary subjective experiment demonstrating the usability of the created dataset for the evaluation of privacy issues in video. The results confirm that a tone-mapped HDR video contains more privacy-sensitive information and details than a typical LDR video.
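    The tone mapping that turns an HDR luminance map into a displayable LDR image can be illustrated with the global Reinhard operator (one common choice; the paper does not fix a specific operator):

```python
import numpy as np

def reinhard_tonemap(lum, a=0.18):
    """Global Reinhard operator: scale by the log-average luminance
    (the 'key' a), then compress with L / (1 + L) into [0, 1)."""
    lw = np.exp(np.mean(np.log(lum + 1e-6)))  # log-average luminance
    l = a * lum / lw
    return l / (1 + l)
```

    Because the compression is monotone, shadow detail that HDR capture preserves remains visible after tone mapping, which is exactly why tone-mapped HDR frames can expose more privacy-sensitive content than LDR ones.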

  14. Targeted Capture and High-Throughput Sequencing Using Molecular Inversion Probes (MIPs).

    PubMed

    Cantsilieris, Stuart; Stessman, Holly A; Shendure, Jay; Eichler, Evan E

    2017-01-01

    Molecular inversion probes (MIPs) in combination with massively parallel DNA sequencing represent a versatile, yet economical tool for targeted sequencing of genomic DNA. Several thousand genomic targets can be selectively captured using long oligonucleotides containing unique targeting arms and universal linkers. The ability to append sequencing adaptors and sample-specific barcodes allows large-scale pooling and subsequent high-throughput sequencing at relatively low cost per sample. Here, we describe a "wet bench" protocol detailing the capture and subsequent sequencing of >2000 genomic targets from 192 samples, representative of a single lane on the Illumina HiSeq 2000 platform.

  15. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015588 (10 July 2011) --- This picture of Atlantis' main and subsystem engines is one of a series of images showing various parts of the space shuttle Atlantis in Earth orbit as photographed by one of three crew members -- half the station crew -- who were equipped with still cameras for this purpose on the International Space Station as the shuttle “posed” for photos and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). A 1000 millimeter lens was used to capture this particular series of images.

  16. Mapping Sequence performed during the STS-135 R-Bar Pitch Maneuver

    NASA Image and Video Library

    2011-07-10

    ISS028-E-015671 (10 July 2011) --- This head-on picture of Atlantis' nose and part of the underside's thermal protective system tiles is one of a series of images showing various parts of the shuttle in Earth orbit as photographed by one of three crew members -- half the station crew -- who were equipped with still cameras for this purpose on the International Space Station as the shuttle “posed” for photos and visual surveys and performed a back-flip for the rendezvous pitch maneuver (RPM). A 1000 millimeter lens was used to capture this particular series of images.

  17. 3D real-time visualization of blood flow in cerebral aneurysms by light field particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Carlsohn, Matthias F.; Kemmling, André; Petersen, Arne; Wietzke, Lennart

    2016-04-01

    Cerebral aneurysms require endovascular treatment to eliminate potentially lethal hemorrhagic rupture by hemostasis of blood flow within the aneurysm. Devices (e.g. coils and flow diverters) promote hemostasis; however, measurement of blood flow within an aneurysm or cerebral vessel before and after device placement on a microscopic level has not been possible so far. Such measurement would allow better individualized treatment planning and improve device design. For experimental analysis, direct measurement of real-time microscopic cerebrovascular flow in micro-structures may be an alternative to computed flow simulations. Applying microscopic aneurysm flow measurement on a regular basis, to empirically assess a large number of different anatomic shapes and the corresponding effect of different devices, would require a fast and reliable method with high-throughput assessment at low cost. Transparent three-dimensional (3D) models of brain vessels and aneurysms may be used for microscopic flow measurements by particle image velocimetry (PIV); however, up to now the size of the structures has set the limits for conventional 3D-imaging camera set-ups. Online flow assessment requires additional computational power to cope with processing the large amounts of data generated by sequences of multi-view stereo images, e.g. generated by a light field camera capturing the 3D information of complex flow processes by plenoptic imaging. Recently, a fast and low-cost workflow for producing patient-specific three-dimensional models of cerebral arteries has been established by stereo-lithographic (SLA) 3D printing. These 3D arterial models are transparent and exhibit a replication precision within the submillimeter range required for accurate flow measurements under physiological conditions. We therefore test the feasibility of microscopic flow measurements by PIV analysis using a plenoptic camera system capturing light field image sequences.
Averaging across a sequence of single, double or triple shots of flashed images enables reconstruction of the real-time corpuscular flow through the vessel system before and after device placement. This approach could enable 3D insight into microscopic flow within blood vessels and aneurysms at submillimeter resolution. We present an approach that allows real-time assessment of 3D particle flow by high-speed light field image analysis, including a solution that addresses the high computational load of the image processing. The imaging set-up accomplishes fast and reliable PIV analysis in transparent 3D models of brain aneurysms at low cost. High-throughput microscopic flow assessment of differently shaped brain aneurysms may thus become feasible, as required for patient-specific device designs.
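    The core PIV computation, estimating the particle displacement between two interrogation windows by cross-correlation, can be sketched in 2D as follows (illustrative only; the actual light-field pipeline is 3D and far more involved):

```python
import numpy as np

def piv_displacement(win1, win2):
    """Integer-pixel displacement between two interrogation windows via
    FFT-based circular cross-correlation (the core step of PIV)."""
    f1 = np.fft.fft2(win1 - win1.mean())
    f2 = np.fft.fft2(win2 - win2.mean())
    corr = np.real(np.fft.ifft2(f1.conj() * f2))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```

    Dividing the displacement by the inter-frame time gives the local velocity vector; repeating this over a grid of windows yields the flow field.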

  18. Capturing intraoperative deformations: research experience at Brigham and Women's Hospital.

    PubMed

    Warfield, Simon K; Haker, Steven J; Talos, Ion-Florin; Kemper, Corey A; Weisenfeld, Neil; Mewes, Andrea U J; Goldberg-Zimring, Daniel; Zou, Kelly H; Westin, Carl-Fredrik; Wells, William M; Tempany, Clare M C; Golby, Alexandra; Black, Peter M; Jolesz, Ferenc A; Kikinis, Ron

    2005-04-01

    During neurosurgical procedures the objective of the neurosurgeon is to resect as much diseased tissue as possible while preserving healthy brain tissue. The restricted capacity of the conventional operating room to enable the surgeon to visualize critical healthy brain structures and tumor margins has led, over the past decade, to the development of sophisticated intraoperative imaging techniques that enhance visualization. However, both rigid motion due to patient placement and nonrigid deformations occurring as a consequence of the surgical intervention disrupt the correspondence between the preoperative data used to plan surgery and the intraoperative configuration of the patient's brain. Similar challenges are faced in other interventional therapies, such as cryoablation of the liver or biopsy of the prostate. We have developed algorithms to model the motion of key anatomical structures, and system implementations that enable us to estimate the deformation of the critical anatomy from sequences of volumetric images and to prepare updated fused visualizations of preoperative and intraoperative images at a rate compatible with surgical decision making. This paper reviews the experience at Brigham and Women's Hospital through the process of developing and applying novel algorithms for capturing intraoperative deformations in support of image-guided therapy.

  19. Photography of the histological and radiological analysis of the ligaments of the distal radioulnar joint.

    PubMed

    Clayton, Gemma

    2013-06-01

    This project was undertaken as part of the PhD research project of Paul Malone, Principal Investigator, Covance plc, Harrogate. Mr Malone approached the photography department for involvement in the study, with the aim of settling the current debate on the anatomical and histological features of the distal radioulnar ligaments by capturing the anatomy photographically throughout the process of dissection via a microtome. The author was approached to lead on the photographic protocol as part of her post-graduate certificate training at Staffordshire University. High-resolution digital images of an entire human arm were required, the main area of interest being the distal radioulnar joint of the wrist. Images were to be taken at 40 μm intervals as the specimen was sliced; when microtomy passed through the ligaments, images were made at 20 μm intervals. A method of suspending a camera approximately 1 metre above the specimen was devised, together with preparations for the capture, processing and storage of images. The resulting images were then to be subjected to further analysis in the form of 3-dimensional reconstruction using computer modelling techniques and software. The possibility of merging the images with sequences obtained from both CT & MRI using image-handling software is also an area of exploration, in collaboration with the University of Manchester's Visualisation Centre.

  20. Optical diagnostics of mercury jet for an intense proton target.

    PubMed

    Park, H; Tsang, T; Kirk, H G; Ladeinde, F; Graves, V B; Spampinato, P T; Carroll, A J; Titus, P H; McDonald, K T

    2008-04-01

    An optical diagnostic system was designed and constructed for imaging a free mercury jet interacting with a high-intensity proton beam in a pulsed high-field solenoid magnet. The optical imaging system employs a back-illuminated laser shadow photography technique. Object illumination and image capture are transmitted through radiation-hard multimode optical fibers and flexible coherent imaging fibers. A retroreflected illumination design allows the entire passive imaging system to fit inside the bore of the solenoid magnet. A sequence of synchronized short laser light pulses is used to freeze the transient events, and the images are recorded by several high-speed charge coupled devices. Quantitative and qualitative data analysis using probability-based image processing is described. The characteristics of the free mercury jet as a high-power target for beam-jet interaction at various levels of the magnetic induction field are reported in this paper.

  1. Radiofrequency power deposition near metallic wires during MR imaging: feasibility study using T1-weighted thermal imaging.

    PubMed

    Oulmane, F; Detti, V; Grenier, D; Perrin, E; Saint-Jalmes, H

    2007-01-01

    The presence of metallic conductors (implants, wires or catheters) is prohibited in MR imaging for safety purposes, because of the radiofrequency (RF) power deposition caused by the RF excitation B1 field. This work describes the use of T1-weighted MR imaging for estimating a thermal map around a metallic (copper) wire located at the center of an MR imaging unit during an imaging sequence. The experimental set-up and the methodology used for capturing the temperature elevation created by radiofrequency power deposition around the wire are presented. A proof of its efficiency in following temperature elevations of about 0.5 °C in a millimetric region of interest (pixel size: 1 × 1 mm², slice thickness 5 mm) located around the wire is given, opening the way to further developments of MR imaging in the presence of metallic implants, coils or catheters.

  2. Automated Meteor Detection by All-Sky Digital Camera Systems

    NASA Astrophysics Data System (ADS)

    Suk, Tomáš; Šimberová, Stanislava

    2017-12-01

    We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, automated storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
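The detection step can be conveyed by a toy sketch: model the steady night sky as a per-pixel temporal median and flag frames whose residual brightness is anomalous, since a meteor leaves a short-lived bright trace. This is a deliberately simplified stand-in for the paper's pipeline, with synthetic data, not its actual algorithm:

```python
# Toy transient detection in an all-sky image sequence (hypothetical sketch):
# subtract a per-pixel temporal median background and flag frames in which
# enough pixels exceed the background by a threshold.

from statistics import median

def detect_transients(frames, threshold=0.5, min_pixels=3):
    """Return indices of frames containing a bright transient."""
    n_pix = len(frames[0])
    # Per-pixel temporal median approximates the static sky background.
    background = [median(f[i] for f in frames) for i in range(n_pix)]
    hits = []
    for idx, frame in enumerate(frames):
        bright = sum(1 for i in range(n_pix)
                     if frame[i] - background[i] > threshold)
        if bright >= min_pixels:
            hits.append(idx)
    return hits

# Five flattened 4-pixel "sky" frames; frame 2 contains a meteor-like streak.
frames = [
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1],
    [0.9, 0.9, 0.9, 0.1],  # transient trace across three pixels
    [0.1, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.1],
]
print(detect_transients(frames))  # [2]
```

A real system must additionally reject satellites, aircraft, and clouds, typically by testing the geometry and timing of the detected trace.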

  3. Exome capture from the spruce and pine giga-genomes.

    PubMed

    Suren, H; Hodgins, K A; Yeaman, S; Nurkowski, K A; Smets, P; Rieseberg, L H; Aitken, S N; Holliday, J A

    2016-09-01

    Sequence capture is a flexible tool for generating reduced-representation libraries, particularly in species with massive genomes. We used an exome capture approach to sequence the gene space of two of the dominant species in Canadian boreal and montane forests - interior spruce (Picea glauca x engelmannii) and lodgepole pine (Pinus contorta). Transcriptome data generated with RNA-seq were coupled with draft genome sequences to design baits corresponding to 26 824 genes from pine and 28 649 genes from spruce. A total of 579 samples for spruce and 631 samples for pine were included, as well as two pine congeners and six spruce congeners. More than 50% of targeted regions were sequenced at >10× depth in each species, while ~12% of captured near-target regions (within 500 bp of a bait position) were sequenced to a depth >10×. Much of our read data arose from off-target regions, which was likely due to the fragmented and incomplete nature of the draft genome assemblies. Capture was generally successful for the related species, suggesting that baits designed for a single species are likely to successfully capture sequences from congeners. From these data, we called approximately 10 million SNPs and INDELs in each species from coding regions, introns, untranslated and flanking regions, as well as from the intergenic space. Our study demonstrates the utility of sequence capture for resequencing in complex conifer genomes, suggests guidelines for improving capture efficiency and provides a rich resource of genetic variants for studies of selection and local adaptation in these species. © 2016 John Wiley & Sons Ltd.

  4. Application and comparison of large-scale solution-based DNA capture-enrichment methods on ancient DNA

    PubMed Central

    Ávila-Arcos, María C.; Cappellini, Enrico; Romero-Navarro, J. Alberto; Wales, Nathan; Moreno-Mayar, J. Víctor; Rasmussen, Morten; Fordyce, Sarah L.; Montiel, Rafael; Vielle-Calzada, Jean-Philippe; Willerslev, Eske; Gilbert, M. Thomas P.

    2011-01-01

    The development of second-generation sequencing technologies has greatly benefitted the field of ancient DNA (aDNA). Its application can be further exploited by the use of targeted capture-enrichment methods to overcome restrictions posed by low endogenous and contaminating DNA in ancient samples. We tested the performance of Agilent's SureSelect and Mycroarray's MySelect in-solution capture systems on Illumina sequencing libraries built from ancient maize to identify key factors influencing aDNA capture experiments. High levels of clonality as well as the presence of multiple-copy sequences in the capture targets led to biases in the data regardless of the capture method. Neither method consistently outperformed the other in terms of average target enrichment, and no obvious difference was observed either when two tiling designs were compared. In addition to demonstrating the plausibility of capturing aDNA from ancient plant material, our results also enable us to provide useful recommendations for those planning targeted-sequencing on aDNA. PMID:22355593

  5. Sequence Capture versus Restriction Site Associated DNA Sequencing for Shallow Systematics.

    PubMed

    Harvey, Michael G; Smith, Brian Tilston; Glenn, Travis C; Faircloth, Brant C; Brumfield, Robb T

    2016-09-01

    Sequence capture and restriction site associated DNA sequencing (RAD-Seq) are two genomic enrichment strategies for applying next-generation sequencing technologies to systematics studies. At shallow timescales, such as within species, RAD-Seq has been widely adopted among researchers, although there has been little discussion of the potential limitations and benefits of RAD-Seq and sequence capture. We discuss a series of issues that may impact the utility of sequence capture and RAD-Seq data for shallow systematics in non-model species. We review prior studies that used both methods, and investigate differences between the methods by re-analyzing existing RAD-Seq and sequence capture data sets from a Neotropical bird (Xenops minutus). We suggest that the strengths of RAD-Seq data sets for shallow systematics are the wide dispersion of markers across the genome, the relative ease and cost of laboratory work, the deep coverage and read overlap at recovered loci, and the high overall information that results. Sequence capture's benefits include flexibility and repeatability in the genomic regions targeted, success using low-quality samples, more straightforward read orthology assessment, and higher per-locus information content. The utility of a method in systematics, however, rests not only on its performance within a study, but on the comparability of data sets and inferences with those of prior work. In RAD-Seq data sets, comparability is compromised by low overlap of orthologous markers across species and the sensitivity of genetic diversity in a data set to an interaction between the level of natural heterozygosity in the samples examined and the parameters used for orthology assessment. 
In contrast, sequence capture of conserved genomic regions permits interrogation of the same loci across divergent species, which is preferable for maintaining comparability among data sets and studies for the purpose of drawing general conclusions about the impact of historical processes across biotas. We argue that sequence capture should be given greater attention as a method of obtaining data for studies in shallow systematics and comparative phylogeography. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. Capturing change: the duality of time-lapse imagery to acquire data and depict ecological dynamics

    USGS Publications Warehouse

    Brinley Buckley, Emma M.; Allen, Craig R.; Forsberg, Michael; Farrell, Michael; Caven, Andrew J.

    2017-01-01

    We investigate the scientific and communicative value of time-lapse imagery by exploring applications for data collection and visualization. Time-lapse imagery has a myriad of possible applications to study and depict ecosystems and can operate at unique temporal and spatial scales to bridge the gap between large-scale satellite imagery projects and observational field research. Time-lapse data sequences, linking time-lapse imagery with data visualization, have the ability to make data come alive for a wider audience by connecting abstract numbers to images that root data in time and place. Utilizing imagery from the Platte Basin Timelapse Project, water inundation and vegetation phenology metrics are quantified via image analysis and then paired with passive monitoring data, including streamflow and water chemistry. Dynamic and interactive time-lapse data sequences elucidate the visible and invisible ecological dynamics of a significantly altered yet internationally important river system in central Nebraska.

  7. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
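A full-reference metric of the kind described, computed between the encoder input frame and the reconstructed frame, can be sketched as PSNR. The frames and pixel values below are synthetic, for illustration only:

```python
# Hedged sketch of a full-reference video quality metric: PSNR between the
# encoder input and the reconstructed frame, computed per frame in dB.

import math

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size frames."""
    n = len(reference)
    mse = sum((r - d) ** 2 for r, d in zip(reference, reconstructed)) / n
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(peak ** 2 / mse)

ref = [100, 120, 130, 140]
rec = [101, 119, 131, 139]  # mild compression error: MSE = 1
print(round(psnr(ref, rec), 2))  # 48.13
```

Note the subtlety the abstract raises: under low light the encoder input itself is noisy, so a high PSNR against that input does not guarantee high perceived quality, which is why the authors also model the full capture chain.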

  8. Comprehensive comparison of three commercial human whole-exome capture platforms.

    PubMed

    Asan; Xu, Yu; Jiang, Hui; Tyler-Smith, Chris; Xue, Yali; Jiang, Tao; Wang, Jiawei; Wu, Mingzhi; Liu, Xiao; Tian, Geng; Wang, Jun; Wang, Jian; Yang, Huangming; Zhang, Xiuqing

    2011-09-28

    Exome sequencing, which allows the global analysis of protein-coding sequences in the human genome, has become an effective and affordable approach to detecting causative genetic mutations in diseases. Currently, there are several commercial human exome capture platforms; however, their relative performances have not been characterized sufficiently to know which is best for a particular study. We comprehensively compared three platforms: NimbleGen's Sequence Capture Array and SeqCap EZ, and Agilent's SureSelect. We assessed their performance in a variety of ways, including the number of genes covered and capture efficacy. Differences that may impact the choice of platform were that Agilent SureSelect covered approximately 1,100 more genes, while NimbleGen provided better flanking-sequence capture. Although all three platforms achieved similar capture specificity of targeted regions, the NimbleGen platforms showed better uniformity of coverage and greater genotype sensitivity at 30- to 100-fold sequencing depth. All three platforms showed similar power in exome SNP calling, including of medically relevant SNPs. Compared with genotyping and whole-genome sequencing data, the three platforms achieved a similar accuracy of genotype assignment and SNP detection. Importantly, all three platforms showed similar levels of reproducibility, GC bias and reference-allele bias. We demonstrate key differences between the three platforms, particularly the advantages of in-solution capture over array capture and the importance of a large gene target set.

  9. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    PubMed Central

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-01

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable. PMID:29342908
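The compression of each image's feature points into three representative points can be illustrated with a small PCA sketch. The exact construction in the paper may differ; the reading below (centroid plus the two points one standard deviation out along the dominant principal axis) is an assumption, and all data are synthetic:

```python
# Illustrative PCA compression of a 2-D feature-point cloud into three points
# (assumed reading: centroid plus ±1 std. dev. along the dominant axis).

import math

def principal_points(points):
    """Return three representative points summarizing a 2-D point cloud."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # 2x2 covariance matrix of the centered points.
    sxx = sum((p[0] - cx) ** 2 for p in points) / n
    syy = sum((p[1] - cy) ** 2 for p in points) / n
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    # Dominant eigenvalue/eigenvector of [[sxx, sxy], [sxy, syy]].
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    if abs(sxy) > 1e-12:
        vx, vy = lam - syy, sxy
    else:  # covariance already diagonal: pick the larger-variance axis
        vx, vy = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(vx, vy)
    vx, vy = vx / norm, vy / norm
    s = math.sqrt(lam)  # std. dev. along the dominant axis
    return [(cx, cy), (cx + s * vx, cy + s * vy), (cx - s * vx, cy - s * vy)]

# Points spread mostly along the x-axis: dominant axis is nearly horizontal.
pts = [(0, 0), (2, 0.1), (4, -0.1), (6, 0)]
for p in principal_points(pts):
    print(tuple(round(c, 2) for c in p))
```

Three points per image make the pairwise image-overlap test cheap, which is the speed-up the method exploits when selecting key frames.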

  10. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    PubMed

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  11. Phylogenomics of Phrynosomatid Lizards: Conflicting Signals from Sequence Capture versus Restriction Site Associated DNA Sequencing

    PubMed Central

    Leaché, Adam D.; Chavez, Andreas S.; Jones, Leonard N.; Grummer, Jared A.; Gottscho, Andrew D.; Linkem, Charles W.

    2015-01-01

    Sequence capture and restriction site associated DNA sequencing (RADseq) are popular methods for obtaining large numbers of loci for phylogenetic analysis. These methods are typically used to collect data at different evolutionary timescales; sequence capture is primarily used for obtaining conserved loci, whereas RADseq is designed for discovering single nucleotide polymorphisms (SNPs) suitable for population genetic or phylogeographic analyses. Phylogenetic questions that span both “recent” and “deep” timescales could benefit from either type of data, but studies that directly compare the two approaches are lacking. We compared phylogenies estimated from sequence capture and double digest RADseq (ddRADseq) data for North American phrynosomatid lizards, a species-rich and diverse group containing nine genera that began diversifying approximately 55 Ma. Sequence capture resulted in 584 loci that provided a consistent and strong phylogeny using concatenation and species tree inference. However, the phylogeny estimated from the ddRADseq data was sensitive to the bioinformatics steps used for determining homology, detecting paralogs, and filtering missing data. The topological conflicts among the SNP trees were not restricted to any particular timescale, but instead were associated with short internal branches. Species tree analysis of the largest SNP assembly, which also included the most missing data, supported a topology that matched the sequence capture tree. This preferred phylogeny provides strong support for the paraphyly of the earless lizard genera Holbrookia and Cophosaurus, suggesting that the earless morphology either evolved twice or evolved once and was subsequently lost in Callisaurus. PMID:25663487

  12. Microfluidic immunocapture of circulating pancreatic cells using parallel EpCAM and MUC1 capture: characterization, optimization and downstream analysis.

    PubMed

    Thege, Fredrik I; Lannin, Timothy B; Saha, Trisha N; Tsai, Shannon; Kochman, Michael L; Hollingsworth, Michael A; Rhim, Andrew D; Kirby, Brian J

    2014-05-21

    We have developed and optimized a microfluidic device platform for the capture and analysis of circulating pancreatic cells (CPCs) and pancreatic circulating tumor cells (CTCs). Our platform uses parallel anti-EpCAM and cancer-specific mucin 1 (MUC1) immunocapture in a silicon microdevice. Using a combination of anti-EpCAM and anti-MUC1 capture in a single device, we are able to achieve efficient capture while extending immunocapture beyond single-marker recognition. We also have detected a known oncogenic KRAS mutation in cells spiked into whole blood using immunocapture, RNA extraction, RT-PCR and Sanger sequencing. To allow for downstream single-cell genetic analysis, intact nuclei were released from captured cells by using targeted membrane lysis. We have developed a staining protocol for clinical samples, including the standard CTC markers DAPI, cytokeratin (CK) and CD45, and a novel marker of carcinogenesis in CPCs, mucin 4 (MUC4). We have also demonstrated a semi-automated approach to image analysis and CPC identification, suitable for clinical hypothesis generation. Initial results from immunocapture of a clinical pancreatic cancer patient sample show that parallel capture may capture more of the heterogeneity of the CPC population. With this platform, we aim to develop a diagnostic biomarker for early pancreatic carcinogenesis and patient risk stratification.

  13. Captured metagenomics: large-scale targeting of genes based on ‘sequence capture’ reveals functional diversity in soils

    PubMed Central

    Manoharan, Lokeshwaran; Kushwaha, Sandeep K.; Hedlund, Katarina; Ahrén, Dag

    2015-01-01

    Microbial enzyme diversity is a key to understanding many ecosystem processes. Whole metagenome sequencing (WMG) obtains information on functional genes, but it is costly and inefficient due to the large amount of sequencing required. In this study, we have applied a captured metagenomics technique to functional genes in soil microorganisms, as an alternative to WMG. Large-scale targeting of functional genes, coding for enzymes related to organic matter degradation, was applied to two agricultural soil communities through captured metagenomics. Captured metagenomics uses custom-designed, hybridization-based oligonucleotide probes that enrich functional genes of interest in metagenomic libraries, where only probe-bound DNA fragments are sequenced. The captured metagenomes were highly enriched for the targeted genes while maintaining target diversity, and their taxonomic distribution correlated well with traditional ribosomal sequencing. The captured metagenomes were enriched with genes related to organic matter degradation at least five times more than similar, publicly available soil WMG projects. This target-enrichment technique also preserves the functional representation of the soils, thereby facilitating comparative metagenomics projects. Here, we present the first study that applies the captured metagenomics approach at large scale, and this novel method allows deep investigations of central ecosystem processes by studying functional gene abundances. PMID:26490729

  14. Optical sedimentation recorder

    DOEpatents

    Bishop, James K.B.

    2014-05-06

    A robotic optical sedimentation recorder is described for the recordation of carbon flux in the oceans, wherein both POC and PIC particles are captured at the open end of a submersible sampling platform and the captured particles are allowed to drift down onto a collection plate where they can be imaged over time. The particles are imaged using three separate light sources, activated in sequence: one source being a back light, a second source being a side light to provide dark-field illumination, and a third source comprising a cross-polarized light source to illuminate birefringent particles. The recorder in one embodiment is attached to a buoyancy unit which is capable, upon command, of bringing the sedimentation recorder to a programmed depth below the ocean surface during recordation mode, and, on command, returning the unit to the ocean surface for transmission of recorded data and receipt of new instructions. The combined unit is provided with its own power source and is designed to operate autonomously in the ocean for extended periods of time.

  15. Gradient-Induced Voltages on 12-Lead ECGs during High Duty-Cycle MRI Sequences and a Method for Their Removal considering Linear and Concomitant Gradient Terms

    PubMed Central

    Zhang, Shelley HuaLei; Ho Tse, Zion Tsz; Dumoulin, Charles L.; Kwong, Raymond Y.; Stevenson, William G.; Watkins, Ronald; Ward, Jay; Wang, Wei; Schmidt, Ehud J.

    2015-01-01

    Purpose To restore 12-lead ECG signal fidelity inside MRI by removing magnetic-field-gradient-induced voltages during high-gradient-duty-cycle sequences. Theory and Methods A theoretical equation was derived, providing the first- and second-order electrical fields induced at individual ECG electrodes as a function of the gradient fields. Experiments were performed at 3T on healthy volunteers, using a customized acquisition system which captured the full amplitude and frequency response of the ECGs, or a commercial recording system. The 19 equation coefficients were derived by linear regression of data from accelerated sequences, and used to compute induced voltages in real time during full-resolution sequences to remove ECG artifacts. Restored traces were evaluated relative to ones acquired without imaging. Results Measured induced voltages were 0.7 V peak-to-peak during balanced steady-state free precession (bSSFP) with the heart at the isocenter. Applying the equation during gradient-echo, three-dimensional fast spin echo and multi-slice bSSFP imaging restored nonsaturated traces, and second-order concomitant terms showed larger contributions in electrodes farther from the magnet isocenter. Equation coefficients were estimated with high repeatability (ρ = 0.996) and are subject, sequence, and slice-orientation dependent. Conclusion Close agreement between theoretical and measured gradient-induced voltages allowed their real-time removal. Prospective estimation of sequence periods where large induced voltages occur may allow hardware removal of these signals. PMID:26101951
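The coefficient-fitting idea can be shown in miniature: regress the measured induced voltage on the gradient waveforms by least squares, then subtract the prediction from the trace. This two-term toy stands in for the paper's 19-coefficient model; all waveforms and coefficients below are synthetic, for illustration only:

```python
# Toy version of gradient-artifact removal: fit per-electrode coefficients
# mapping gradient waveforms to induced voltage (least squares via normal
# equations), then subtract the predicted induction from the measured trace.

def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve the 2x2 linear system A c = b by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def fit_coeffs(gx, gy, measured):
    """Normal equations for measured ≈ c1*gx + c2*gy."""
    sxx = sum(x * x for x in gx)
    sxy = sum(x * y for x, y in zip(gx, gy))
    syy = sum(y * y for y in gy)
    bx = sum(x * m for x, m in zip(gx, measured))
    by = sum(y * m for y, m in zip(gy, measured))
    return solve_2x2(sxx, sxy, sxy, syy, bx, by)

def clean(gx, gy, measured, c1, c2):
    """Subtract the predicted induced voltage sample by sample."""
    return [m - c1 * x - c2 * y for m, x, y in zip(measured, gx, gy)]

# Calibration segment: gradients induce 0.3*gx - 0.2*gy on this electrode.
gx = [0.0, 1.0, -1.0, 2.0, 0.5]
gy = [1.0, 0.0, 1.0, -1.0, 2.0]
induced = [0.3 * x - 0.2 * y for x, y in zip(gx, gy)]
c1, c2 = fit_coeffs(gx, gy, induced)
print(round(c1, 3), round(c2, 3))  # 0.3 -0.2
```

The paper's model additionally includes second-order (concomitant) gradient terms and is fitted per subject, sequence, and slice orientation; the regression-then-subtract structure is the same.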

  16. Simulated Lidar Images of Human Pose using a 3DS Max Virtual Laboratory

    DTIC Science & Technology

    2015-12-01

    developed in Autodesk 3DS Max, with an animated, biofidelic 3D human mesh biped character (avatar) as the subject. The biped animation modifies the digital human model through a time sequence of motion capture data representing an...AFB. Mr. Isiah Davenport from Infoscitex Corp developed the method for creating the biofidelic avatars from laboratory data and 3DS Max code for...

  17. AIRS Ozone Burden During Antarctic Winter: Time Series from 8/1/2005 to 9/30/2005

    NASA Image and Video Library

    2007-07-24

    The Atmospheric Infrared Sounder (AIRS) provides a daily global 3-dimensional view of Earth's ozone layer. Since AIRS observes in the thermal infrared spectral range, it also allows scientists to view from space the Antarctic ozone hole for the first time continuously during polar winter. This image sequence captures the intensification of the annual ozone hole in the Antarctic Polar Vortex. http://photojournal.jpl.nasa.gov/catalog/PIA09938

  18. Schedule Optimization of Imaging Missions for Multiple Satellites and Ground Stations Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lee, Junghyun; Kim, Heewon; Chung, Hyun; Kim, Haedong; Choi, Sujin; Jung, Okchul; Chung, Daewon; Ko, Kwanghee

    2018-04-01

    In this paper, we propose a method that uses a genetic algorithm for the dynamic schedule optimization of imaging missions for multiple satellites and ground systems. In particular, the visibility conflicts of communication and mission operation using satellite resources (electric power and onboard memory) are integrated in sequence. Resource consumption and restoration are considered in the optimization process. Image acquisition is an essential part of satellite missions and is performed via a series of subtasks such as command uplink, image capturing, image storing, and image downlink. An objective function for optimization is designed to maximize the usability by considering the following components: user-assigned priority, resource consumption, and image-acquisition time. For the simulation, a series of hypothetical imaging missions are allocated to a multi-satellite control system comprising five satellites and three ground stations having S- and X-band antennas. To demonstrate the performance of the proposed method, simulations are performed via three operation modes: general, commercial, and tactical.
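A toy genetic algorithm over a binary task-selection genome conveys the optimization idea: maximize priority-weighted value subject to an onboard resource cap, with infeasible schedules penalized. The tasks, weights, and GA settings below are hypothetical, not taken from the paper:

```python
# Toy GA for imaging-mission selection (illustrative, synthetic data):
# choose a subset of tasks maximizing priority value under a memory cap.

import random

tasks = [  # (priority value, onboard-memory cost)
    (8, 3), (5, 2), (6, 4), (3, 1), (7, 5), (4, 2),
]
MEMORY_CAP = 9

def fitness(genome):
    value = sum(t[0] for t, g in zip(tasks, genome) if g)
    cost = sum(t[1] for t, g in zip(tasks, genome) if g)
    return value if cost <= MEMORY_CAP else 0  # infeasible schedules score 0

def evolve(pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in tasks] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(tasks))
            child = a[:cut] + b[cut:]           # one-point crossover
            i = rng.randrange(len(tasks))
            child[i] ^= rng.random() < 0.1      # occasional bit-flip mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = evolve()
print(best, score)
```

The paper's objective additionally weights resource consumption and image-acquisition time, and handles visibility conflicts between satellites and ground-station antennas; those would enter the fitness function as extra terms and constraints.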

  19. Pulling out the 1%: Whole-Genome Capture for the Targeted Enrichment of Ancient DNA Sequencing Libraries

    PubMed Central

    Carpenter, Meredith L.; Buenrostro, Jason D.; Valdiosera, Cristina; Schroeder, Hannes; Allentoft, Morten E.; Sikora, Martin; Rasmussen, Morten; Gravel, Simon; Guillén, Sonia; Nekhrizov, Georgi; Leshtakov, Krasimir; Dimitrova, Diana; Theodossiev, Nikola; Pettener, Davide; Luiselli, Donata; Sandoval, Karla; Moreno-Estrada, Andrés; Li, Yingrui; Wang, Jun; Gilbert, M. Thomas P.; Willerslev, Eske; Greenleaf, William J.; Bustamante, Carlos D.

    2013-01-01

    Most ancient specimens contain very low levels of endogenous DNA, precluding the shotgun sequencing of many interesting samples because of cost. Ancient DNA (aDNA) libraries often contain <1% endogenous DNA, with the majority of sequencing capacity taken up by environmental DNA. Here we present a capture-based method for enriching the endogenous component of aDNA sequencing libraries. By using biotinylated RNA baits transcribed from genomic DNA libraries, we are able to capture DNA fragments from across the human genome. We demonstrate this method on libraries created from four Iron Age and Bronze Age human teeth from Bulgaria, as well as bone samples from seven Peruvian mummies and a Bronze Age hair sample from Denmark. Prior to capture, shotgun sequencing of these libraries yielded an average of 1.2% of reads mapping to the human genome (including duplicates). After capture, this fraction increased substantially, with up to 59% of reads mapped to human and enrichment ranging from 6- to 159-fold. Furthermore, we maintained coverage of the majority of regions sequenced in the precapture library. Intersection with the 1000 Genomes Project reference panel yielded an average of 50,723 SNPs (range 3,062–147,243) for the postcapture libraries sequenced with 1 million reads, compared with 13,280 SNPs (range 217–73,266) for the precapture libraries, increasing resolution in population genetic analyses. Our whole-genome capture approach makes it less costly to sequence aDNA from specimens containing very low levels of endogenous DNA, enabling the analysis of larger numbers of samples. PMID:24568772
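    The enrichment factors quoted above are ratios of on-target read fractions before and after capture. A minimal illustration (the study's per-sample values range from 6- to 159-fold; the numbers below simply combine the reported 1.2% pre-capture average with the 59% post-capture maximum):

```python
def fold_enrichment(pre_on_target, post_on_target):
    """Fold enrichment: on-target read fraction after capture
    divided by the on-target fraction before capture."""
    return post_on_target / pre_on_target

# 1.2% endogenous reads pre-capture, up to 59% post-capture
print(round(fold_enrichment(0.012, 0.59)))  # ≈ 49-fold
```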

  20. Super resolution for astronomical observations

    NASA Astrophysics Data System (ADS)

    Li, Zhan; Peng, Qingyu; Bhanu, Bir; Zhang, Qingfeng; He, Haifeng

    2018-05-01

    In order to obtain detailed information from multiple telescope observations, a general blind super-resolution (SR) reconstruction approach for astronomical images is proposed in this paper. A pixel-reliability-based SR reconstruction algorithm is described and implemented, where the developed process incorporates flat-field correction, automatic star searching and centering, iterative star matching, and sub-pixel image registration. Images captured by the 1-m telescope at Yunnan Observatory are used to test the proposed technique. The results of these experiments indicate that, following SR reconstruction, faint stars are more distinct, bright stars have sharper profiles, and the backgrounds show finer detail; these gains derive from the high-precision star centering and image registration provided by the developed method. Application of the proposed approach not only provides more opportunities for new discoveries from astronomical image sequences, but will also contribute to enhancing the capabilities of most space-based or ground-based telescopes.
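    The paper's pixel-reliability algorithm is more involved, but the core idea of combining sub-pixel-registered frames on a finer grid can be sketched with the classical shift-and-add baseline (assumed here purely for illustration, with known integer sub-pixel shifts rather than estimated ones):

```python
import numpy as np

def shift_and_add(frames, shifts, factor):
    """Place each low-resolution frame onto an upsampled grid at its known
    sub-pixel offset and average any overlapping contributions."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    weight = np.zeros_like(hi)
    for frame, (dy, dx) in zip(frames, shifts):
        # (dy, dx) are offsets in high-res pixels, 0 <= dy, dx < factor
        hi[dy::factor, dx::factor] += frame
        weight[dy::factor, dx::factor] += 1
    weight[weight == 0] = 1   # avoid division by zero on unobserved cells
    return hi / weight
```

    With all phase shifts observed, the high-resolution scene is recovered exactly; in practice the registration step must estimate the shifts, which is where precise star centering matters.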

  1. Comparison of taxon-specific versus general locus sets for targeted sequence capture in plant phylogenomics.

    PubMed

    Chau, John H; Rahfeldt, Wolfgang A; Olmstead, Richard G

    2018-03-01

    Targeted sequence capture can be used to efficiently gather sequence data for large numbers of loci, such as single-copy nuclear loci. Most published studies in plants have used taxon-specific locus sets developed individually for a clade using multiple genomic and transcriptomic resources. General locus sets can also be developed from loci that have been identified as single-copy and have orthologs in large clades of plants. We identify and compare a taxon-specific locus set and three general locus sets (conserved ortholog set [COSII], shared single-copy nuclear [APVO SSC] genes, and pentatricopeptide repeat [PPR] genes) for targeted sequence capture in Buddleja (Scrophulariaceae) and outgroups. We evaluate their performance in terms of assembly success, sequence variability, and resolution and support of inferred phylogenetic trees. The taxon-specific locus set had the most target loci. Assembly success was high for all locus sets in Buddleja samples. For outgroups, general locus sets had greater assembly success. Taxon-specific and PPR loci had the highest average variability. The taxon-specific data set produced the best-supported tree, but all data sets showed improved resolution over previous non-sequence capture data sets. General locus sets can be a useful source of sequence capture targets, especially if multiple genomic resources are not available for a taxon.

  2. Pre-capture multiplexing improves efficiency and cost-effectiveness of targeted genomic enrichment.

    PubMed

    Shearer, A Eliot; Hildebrand, Michael S; Ravi, Harini; Joshi, Swati; Guiffre, Angelica C; Novak, Barbara; Happe, Scott; LeProust, Emily M; Smith, Richard J H

    2012-11-14

    Targeted genomic enrichment (TGE) is a widely used method for isolating and enriching specific genomic regions prior to massively parallel sequencing. To make effective use of sequencer output, barcoding and sample pooling (multiplexing) after TGE and prior to sequencing (post-capture multiplexing) has become routine. While previous reports have indicated that multiplexing prior to capture (pre-capture multiplexing) is feasible, no thorough examination of the effect of this method has been completed on a large number of samples. Here we compare standard post-capture TGE to two levels of pre-capture multiplexing: 12 or 16 samples per pool. We evaluated these methods using standard TGE metrics and determined the ability to identify several classes of genetic mutations in three sets of 96 samples, including 48 controls. Our overall goal was to maximize cost reduction and minimize experimental time while maintaining a high percentage of reads on target and a high depth of coverage at thresholds required for variant detection. We adapted the standard post-capture TGE method for pre-capture TGE with several protocol modifications, including redesign of blocking oligonucleotides and optimization of enzymatic and amplification steps. Pre-capture multiplexing reduced costs for TGE by at least 38% and significantly reduced hands-on time during the TGE protocol. We found that pre-capture multiplexing reduced capture efficiency by 23 or 31% for pre-capture pools of 12 and 16, respectively. However, efficiency losses at this step can be compensated for by reducing the number of simultaneously sequenced samples. Pre-capture multiplexing and post-capture TGE performed similarly with respect to variant detection of positive control mutations. In addition, we detected no instances of sample switching due to aberrant barcode identification. Pre-capture multiplexing improves the efficiency of TGE experiments with respect to hands-on time and reagent use compared to standard post-capture TGE. A decrease in capture efficiency is observed when using pre-capture multiplexing; however, it does not negatively impact variant detection and can be accommodated by the experimental design.

  3. Ultrasound-modulated optical tomography with intense acoustic bursts.

    PubMed

    Zemp, Roger J; Kim, Chulhong; Wang, Lihong V

    2007-04-01

    Ultrasound-modulated optical tomography (UOT) detects ultrasonically modulated light to spatially localize multiply scattered photons in turbid media with the ultimate goal of imaging the optical properties in living subjects. A principal challenge of the technique is weak modulated signal strength. We discuss ways to push the limits of signal enhancement with intense acoustic bursts while conforming to optical and ultrasonic safety standards. A CCD-based speckle-contrast detection scheme is used to detect acoustically modulated light by measuring changes in speckle statistics between ultrasound-on and ultrasound-off states. The CCD image capture is synchronized with the ultrasound burst pulse sequence. Transient acoustic radiation force, a consequence of bursts, is seen to produce slight signal enhancement over pure ultrasonic-modulation mechanisms for bursts and CCD exposure times of the order of milliseconds. However, acoustic radiation-force-induced shear waves are launched away from the acoustic sample volume, which degrade UOT spatial resolution. By time gating the CCD camera to capture modulated light before radiation force has an opportunity to accumulate significant tissue displacement, we reduce the effects of shear-wave image degradation, while enabling very high signal-to-noise ratios. Additionally, we maintain high-resolution images representative of optical and not mechanical contrast. Signal-to-noise levels are sufficiently high so as to enable acquisition of 2D images of phantoms with one acoustic burst per pixel.

  4. A scalable, fully automated process for construction of sequence-ready human exome targeted capture libraries

    PubMed Central

    2011-01-01

    Genome targeting methods enable cost-effective capture of specific subsets of the genome for sequencing. We present here an automated, highly scalable method for carrying out the Solution Hybrid Selection capture approach that provides a dramatic increase in scale and throughput of sequence-ready libraries produced. Significant process improvements and a series of in-process quality control checkpoints are also added. These process improvements can also be used in a manual version of the protocol. PMID:21205303

  5. Introducing the slime mold graph repository

    NASA Astrophysics Data System (ADS)

    Dirnberger, M.; Mehlhorn, K.; Mehlhorn, T.

    2017-07-01

    We introduce the slime mold graph repository or SMGR, a novel data collection promoting the visibility, accessibility and reuse of experimental data revolving around network-forming slime molds. By making data readily available to researchers across multiple disciplines, the SMGR promotes novel research as well as the reproduction of original results. While SMGR data may take various forms, we stress the importance of graph representations of slime mold networks due to their ease of handling and their large potential for reuse. Data added to the SMGR stands to gain impact beyond initial publications or even beyond its domain of origin. We initiate the SMGR with the comprehensive Kist Europe data set focusing on the slime mold Physarum polycephalum, which we obtained in the course of our original research. It contains sequences of images documenting growth and network formation of the organism under constant conditions. Suitable image sequences depicting the typical P. polycephalum network structures are used to compute sequences of graphs faithfully capturing them. Given such sequences, node identities are computed, tracking the development of nodes over time. Based on this information we demonstrate two out of many possible ways to begin exploring the data. The entire data set is well-documented, self-contained and ready for inspection at http://smgr.mpi-inf.mpg.de.

  6. Automated Leaf Tracking using Multi-view Image Sequences of Maize Plants for Leaf-growth Monitoring

    NASA Astrophysics Data System (ADS)

    Das Choudhury, S.; Awada, T.; Samal, A.; Stoerger, V.; Bashyam, S.

    2017-12-01

    Extraction of phenotypes with botanical importance by analyzing plant image sequences has the desirable advantages of non-destructive temporal phenotypic measurements of a large number of plants with little or no manual intervention in a relatively short period of time. The health of a plant is best interpreted by the emergence timing and temporal growth of individual leaves. For automated leaf growth monitoring, it is essential to track each leaf throughout the life cycle of the plant. Plants are constantly changing organisms with increasing complexity in architecture due to variations in self-occlusions and phyllotaxy, i.e., the arrangement of leaves around the stem. Leaf cross-overs pose challenges to accurately tracking each leaf using a single-view image sequence. Thus, we introduce a novel automated leaf tracking algorithm using a graph theoretic approach based on multi-view image sequence analysis and the determination of leaf-tips and leaf-junctions in 3D space. The basis of the leaf tracking algorithm is that the leaves of a maize plant emerge in bottom-up order, and the direction of leaf emergence strictly alternates. The algorithm involves labeling the individual parts of a plant, i.e., leaves and stem, following a graphical representation of the plant skeleton, i.e., a one-pixel-wide connected line obtained from the binary image. The length of a leaf is measured by the number of pixels in its skeleton. To evaluate the performance of the algorithm, a benchmark dataset is indispensable. Thus, we publicly release the University of Nebraska-Lincoln Component Plant Phenotyping dataset-2 (UNL-CPPD-2), consisting of images of 20 maize plants captured by the visible-light camera of the Lemnatec Scanalyzer 3D high-throughput plant phenotyping facility once daily for 60 days from 10 different views. The dataset is intended to facilitate the development and evaluation of leaf tracking algorithms and their uniform comparison.
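    The published multi-view graph algorithm is considerably richer, but the frame-to-frame association it builds on can be sketched as a greedy nearest-neighbour matcher over leaf-tip coordinates in 3D: tips inherit the label of the closest tip in the previous frame, and unmatched tips start new tracks (leaf emergence). This is a simplified stand-in, not the published method.

```python
import math

def track_leaves(frames):
    """Greedy frame-to-frame association of 3D leaf-tip coordinates.
    frames: list of frames, each a list of (x, y, z) tip positions.
    Returns, per frame, the track label assigned to each tip."""
    next_id = 0
    tracks = []
    prev = []                          # (label, position) pairs from last frame
    for tips in frames:
        cur = []
        unused = list(prev)
        for tip in tips:
            if unused:
                # inherit the label of the nearest unmatched previous tip
                label, ref = min(unused, key=lambda lr: math.dist(lr[1], tip))
                unused.remove((label, ref))
            else:
                # no previous tip left: a new leaf has emerged
                label, next_id = next_id, next_id + 1
            cur.append((label, tip))
        prev = cur
        tracks.append([label for label, _ in cur])
    return tracks
```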

  7. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    NASA Astrophysics Data System (ADS)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated in image-based breast cancer diagnosis and prognosis. However, most current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited to temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric in conducting that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on ImageNet, a large dataset of natural images. The features are obtained from various levels of the network, to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
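    The sequence-modelling component can be sketched as a plain NumPy forward pass of a single LSTM cell over a sequence of feature vectors. This is a toy stand-in for the trained network described above: the VGGNet feature extraction and the classification head are omitted, and all parameter shapes are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(xs, W, U, b, hidden):
    """Run one LSTM layer over a sequence of feature vectors xs.
    W, U, b stack the input, forget, cell and output gate parameters
    (4*hidden rows). Returns the final hidden state, which a linear
    classifier head could map to a benign/malignant score."""
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    for x in xs:
        z = W @ x + U @ h + b
        i = sigmoid(z[0 * hidden:1 * hidden])   # input gate
        f = sigmoid(z[1 * hidden:2 * hidden])   # forget gate
        g = np.tanh(z[2 * hidden:3 * hidden])   # candidate cell state
        o = sigmoid(z[3 * hidden:4 * hidden])   # output gate
        c = f * c + i * g
        h = o * np.tanh(c)
    return h
```

    In the study, each element of `xs` would be a CNN feature vector for one post-contrast time point, so the recurrence accumulates the lesion's enhancement dynamics into a single representation.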

  8. Staring at Saturn

    NASA Image and Video Library

    2016-09-15

    NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 355 kilometers per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047

  9. Automated camera-phone experience with the frequency of imaging necessary to capture diet.

    PubMed

    Arab, Lenore; Winter, Ashley

    2010-08-01

    Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly. 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
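    The adequacy of a 10-second interval follows from simple arithmetic on eating durations; the hypothetical helper below makes the contrast between regular meals and quick snacks concrete:

```python
def snapshots(duration_s, interval_s=10):
    """Number of automatic images captured during an eating event,
    at one snapshot every `interval_s` seconds."""
    return duration_s // interval_s

# A 20-minute meal yields many frames; a 15-second snack may yield one or none.
print(snapshots(20 * 60))  # 120 images
print(snapshots(15))       # 1 image
```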

  10. Solution Hybrid Selection Capture for the Recovery of Functional Full-Length Eukaryotic cDNAs From Complex Environmental Samples

    PubMed Central

    Bragalini, Claudia; Ribière, Céline; Parisot, Nicolas; Vallon, Laurent; Prudent, Elsa; Peyretaillade, Eric; Girlanda, Mariangela; Peyret, Pierre; Marmeisse, Roland; Luis, Patricia

    2014-01-01

    Eukaryotic microbial communities play key functional roles in soil biology and potentially represent a rich source of natural products including biocatalysts. Culture-independent molecular methods are powerful tools to isolate functional genes from uncultured microorganisms. However, none of the methods used in environmental genomics allow for a rapid isolation of numerous functional genes from eukaryotic microbial communities. We developed an original adaptation of the solution hybrid selection (SHS) for an efficient recovery of functional complementary DNAs (cDNAs) synthesized from soil-extracted polyadenylated mRNAs. This protocol was tested on the Glycoside Hydrolase 11 gene family encoding endo-xylanases, for which we designed 35 explorative 31-mer capture probes. SHS was implemented on four soil eukaryotic cDNA pools. After two successive rounds of capture, >90% of the resulting cDNAs were GH11 sequences, of which 70% (38 among 53 sequenced genes) were full length. Between 1.5 and 25% of the cloned captured sequences were expressed in Saccharomyces cerevisiae. Sequencing of polymerase chain reaction-amplified GH11 gene fragments from the captured sequences highlighted hundreds of phylogenetically diverse sequences not yet described in public databases. This protocol offers the possibility of performing exhaustive exploration of eukaryotic gene families within microbial communities thriving in any type of environment. PMID:25281543

  11. Auditory attentional capture during serial recall: violations at encoding of an algorithm-based neural model?

    PubMed

    Hughes, Robert W; Vachon, François; Jones, Dylan M

    2005-07-01

    A novel attentional capture effect is reported in which visual-verbal serial recall was disrupted if a single deviation in the interstimulus interval occurred within otherwise regularly presented task-irrelevant spoken items. The degree of disruption was the same whether the temporal deviant was embedded in a sequence made up of a repeating item or a sequence of changing items. Moreover, the effect was evident during the presentation of the to-be-remembered sequence but not during rehearsal just prior to recall, suggesting that the encoding of sequences is particularly susceptible. The results suggest that attentional capture is due to a violation of an algorithm rather than an aggregate-based neural model and further undermine an attentional capture-based account of the classical changing-state irrelevant sound effect. (© 2005 APA, all rights reserved).

  12. Lights, Camera, Action! Antimicrobial Peptide Mechanisms Imaged in Space and Time

    PubMed Central

    Choi, Heejun; Rangarajan, Nambirajan; Weisshaar, James C.

    2015-01-01

    Deeper understanding of the bacteriostatic and bactericidal mechanisms of antimicrobial peptides (AMPs) should help in the design of new antibacterial agents. Over several decades, a variety of biochemical assays have been applied to bulk bacterial cultures. While some of these bulk assays provide time resolution on the order of 1 min, they do not capture faster mechanistic events. Nor can they provide subcellular spatial information or discern cell-to-cell heterogeneity within the bacterial population. Single-cell, time-resolved imaging assays bring a completely new spatiotemporal dimension to AMP mechanistic studies. We review recent work that provides new insights into the timing, sequence, and spatial distribution of AMP-induced effects on bacterial cells. PMID:26691950

  13. Using genic sequence capture in combination with a syntenic pseudo genome to map a deletion mutant in a wheat species.

    PubMed

    Gardiner, Laura-Jayne; Gawroński, Piotr; Olohan, Lisa; Schnurbusch, Thorsten; Hall, Neil; Hall, Anthony

    2014-12-01

    Mapping-by-sequencing analyses have largely required a complete reference sequence and employed whole genome re-sequencing. In species such as wheat, no finished genome reference sequence is available. Additionally, because of its large genome size (17 Gb), re-sequencing at sufficient depth of coverage is not practical. Here, we extend the utility of mapping by sequencing, developing a bespoke pipeline and algorithm to map an early-flowering locus in einkorn wheat (Triticum monococcum L.), which is closely related to the A genome progenitor of bread wheat. We have developed a genomic enrichment approach using the gene-rich regions of hexaploid bread wheat to design a 110-Mbp NimbleGen SeqCap EZ in-solution capture probe set, representing the majority of genes in wheat. Here, we use the capture probe set to enrich and sequence an F2 mapping population of the mutant. The mutant locus was identified in T. monococcum, which lacks a complete genome reference sequence, by mapping the enriched data set onto pseudo-chromosomes derived from the capture probe target sequence, with a long-range order of genes based on synteny of wheat with Brachypodium distachyon. Using this approach we are able to map the region and identify a set of deleted genes within the interval. © 2014 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.

  14. Model based estimation of image depth and displacement

    NASA Technical Reports Server (NTRS)

    Damour, Kevin T.

    1992-01-01

    Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. Because such systems rely on visual image characteristics, the need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clear. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics, which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary of the conditions that indicate which type of filtering should be applied to a field is provided.
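    The ROMKF applied in the paper operates on 2-D (and extended 3-D) fields, but the underlying predict/update cycle can be illustrated with a scalar random-walk Kalman filter smoothing a noisy displacement trace (the noise parameters q and r below are illustrative, not taken from the paper):

```python
def kalman_1d(measurements, q=1e-3, r=0.25):
    """Scalar random-walk Kalman filter: each displacement estimate is
    predicted from the previous one (process noise q) and corrected by
    the noisy measurement (measurement noise r)."""
    x, p = measurements[0], 1.0
    out = [x]
    for z in measurements[1:]:
        p = p + q                 # predict: uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with the new measurement
        p = (1 - k) * p           # uncertainty shrinks after the update
        out.append(x)
    return out
```

    An adaptive variant, in the spirit of the paper's parameter selection, would raise q near detected discontinuities so that genuine depth edges are not smoothed away.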

  15. Digital dissection system for medical school anatomy training

    NASA Astrophysics Data System (ADS)

    Augustine, Kurt E.; Pawlina, Wojciech; Carmichael, Stephen W.; Korinek, Mark J.; Schroeder, Kathryn K.; Segovis, Colin M.; Robb, Richard A.

    2003-05-01

    As technology advances, new and innovative ways of viewing and visualizing the human body are developed. Medicine has benefited greatly from imaging modalities that provide ways for us to visualize anatomy that cannot be seen without invasive procedures. As long as medical procedures include invasive operations, students of anatomy will benefit from the cadaveric dissection experience. Teaching proper technique for dissection of human cadavers is a challenging task for anatomy educators. Traditional methods, which have not changed significantly for centuries, include the use of textbooks and pictures to show students what a particular dissection specimen should look like. The ability to properly carry out such highly visual and interactive procedures is significantly constrained by these methods. The student receives a single view and has no idea how the procedure was carried out. The Department of Anatomy at Mayo Medical School recently built a new, state-of-the-art teaching laboratory, including data ports and power sources above each dissection table. This feature allows students to access the Mayo intranet from a computer mounted on each table. The vision of the Department of Anatomy is to replace all paper-based resources in the laboratory (dissection manuals, anatomic atlases, etc.) with a more dynamic medium that will direct students in dissection and in learning human anatomy. Part of that vision includes the use of interactive 3-D visualization technology. The Biomedical Imaging Resource (BIR) at Mayo Clinic has developed, in collaboration with the Department of Anatomy, a system for the control and capture of high-resolution digital photographic sequences which can be used to create 3-D interactive visualizations of specimen dissections. The primary components of the system include a Kodak DC290 digital camera, a motorized controller rig from Kaidan, a PC, and custom software to synchronize and control the components. For each dissection procedure, the images are captured automatically, and then processed to generate a QuickTime VR sequence, which permits users to view an object from multiple angles by rotating it on the screen. This provides 3-D visualizations of anatomy for students without the need for special "3-D glasses" that would be impractical to use in a laboratory setting. In addition, a digital video camera may be mounted on the rig for capturing video recordings of selected dissection procedures being carried out by expert anatomists for playback by the students. Anatomists from the Department of Anatomy at Mayo have captured several sets of dissection sequences and processed them into QuickTime VR sequences. The students are able to look at these specimens from multiple angles using this VR technology. In addition, the student may zoom in to obtain high-resolution close-up views of the specimen. They may interactively view the specimen at varying stages of dissection, providing a way to quickly and intuitively navigate through the layers of tissue. Electronic media has begun to impact all areas of education, but a 3-D interactive visualization of specimen dissections in the laboratory environment is a unique and powerful means of teaching anatomy. When fully implemented, anatomy education will be enhanced significantly by comparison to traditional methods.

  16. Some uses of wavelets for imaging dynamic processes in live cochlear structures

    NASA Astrophysics Data System (ADS)

    Boutet de Monvel, J.

    2007-09-01

    A variety of image and signal processing algorithms based on wavelet filtering tools have been developed during the last few decades, that are well adapted to the experimental variability typically encountered in live biological microscopy. A number of processing tools are reviewed, that use wavelets for adaptive image restoration and for motion or brightness variation analysis by optical flow computation. The usefulness of these tools for biological imaging is illustrated in the context of the restoration of images of the inner ear and the analysis of cochlear motion patterns in two and three dimensions. I also report on recent work that aims at capturing fluorescence intensity changes associated with vesicle dynamics at synaptic zones of sensory hair cells. This latest application requires one to separate the intensity variations associated with the physiological process under study from the variations caused by motion of the observed structures. A wavelet optical flow algorithm for doing this is presented, and its effectiveness is demonstrated on artificial and experimental image sequences.

  17. Imaging atomic-level random walk of a point defect in graphene

    NASA Astrophysics Data System (ADS)

    Kotakoski, Jani; Mangler, Clemens; Meyer, Jannik C.

    2014-05-01

    Deviations from the perfect atomic arrangements in crystals play an important role in affecting their properties. Similarly, diffusion of such deviations is behind many microstructural changes in solids. However, observation of point defect diffusion is hindered both by the difficulties related to direct imaging of non-periodic structures and by the timescales involved in the diffusion process. Here, instead of imaging thermal diffusion, we stimulate and follow the migration of a divacancy through graphene lattice using a scanning transmission electron microscope operated at 60 kV. The beam-activated process happens on a timescale that allows us to capture a significant part of the structural transformations and trajectory of the defect. The low voltage combined with ultra-high vacuum conditions ensure that the defect remains stable over long image sequences, which allows us for the first time to directly follow the diffusion of a point defect in a crystalline material.
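    The defect trajectory extracted from such an image sequence is naturally compared against an ideal lattice random walk. A minimal simulation of such a walk (one lattice jump per activation event, an assumption made purely for illustration, not the paper's model of divacancy migration) looks like:

```python
import random

def random_walk(steps, seed=0):
    """2-D square-lattice random walk standing in for stimulated defect
    migration: one unit jump in a random direction per activation event."""
    rng = random.Random(seed)
    x = y = 0
    path = [(0, 0)]
    for _ in range(steps):
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

    For a diffusive walk the mean squared displacement grows linearly with the number of jumps, which is the signature one would test the observed trajectory against.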

  18. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low-payload (<20 kg) Unmanned Aerial Vehicles (UAVs) in consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured with consumer-grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can lead to experiments being difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This then leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image-quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and make these comparable with future studies. We recommend providing open-access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
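    Two quantities central to such image-quality planning, ground sample distance and forward motion blur, reduce to simple similar-triangle and exposure-time relations. The helper functions below (names and example numbers are illustrative, not taken from the paper) show the calculation:

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Ground footprint of one pixel (metres), by similar triangles:
    pixel pitch scaled by altitude over focal length."""
    return pixel_pitch_m * altitude_m / focal_length_m

def motion_blur_px(speed_m_s, shutter_s, gsd_m):
    """Ground distance the platform travels during the exposure,
    expressed in pixels; values well below 1 px are generally safe."""
    return speed_m_s * shutter_s / gsd_m

# e.g. 4 um pixels, 20 mm lens, flying at 100 m and 10 m/s, 1/1000 s shutter
gsd = ground_sample_distance(4e-6, 0.020, 100.0)   # 0.02 m per pixel
blur = motion_blur_px(10.0, 0.001, gsd)            # 0.5 px of motion blur
```

    Planning then becomes a trade-off: a shorter shutter reduces blur but forces a wider aperture or higher ISO, each with its own image-quality cost.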

  19. Establishing gene models from the Pinus pinaster genome using gene capture and BAC sequencing.

    PubMed

    Seoane-Zonjic, Pedro; Cañas, Rafael A; Bautista, Rocío; Gómez-Maldonado, Josefa; Arrillaga, Isabel; Fernández-Pozo, Noé; Claros, M Gonzalo; Cánovas, Francisco M; Ávila, Concepción

    2016-02-27

    In the era of high-throughput DNA sequencing, assembling and understanding gymnosperm mega-genomes remains a challenge. Although drafts of three conifer genomes have recently been published, this number is too low to understand the full complexity of conifer genomes. Using techniques focused on specific genes, gene models can be established that can aid in the assembly of gene-rich regions, and this information can be used to compare genomes and understand functional evolution. In this study, gene capture technology combined with BAC isolation and sequencing was used as an experimental approach to establish de novo gene structures without a reference genome. Probes were designed for 866 maritime pine transcripts to sequence genes captured from genomic DNA. The gene models were constructed using GeneAssembler, a new bioinformatic pipeline, which reconstructed over 82% of the gene structures; a high proportion (85%) of the captured gene models contained sequences from the promoter regulatory region. In a parallel experiment, the P. pinaster BAC library was screened to isolate clones containing genes whose cDNA sequences were already available. BAC clones containing the asparagine synthetase, sucrose synthase and xyloglucan endotransglycosylase gene sequences were isolated and used in this study. The gene models derived from the gene capture approach were compared with the genomic sequences derived from the BAC clones. This combined approach is a particularly efficient way to capture the genomic structures of gene families with a small number of members. The experimental approach used in this study is a valuable combined technique to study genomic gene structures in species for which a reference genome is unavailable. It can be used to establish exon/intron boundaries in unknown gene structures, to reconstruct incomplete genes and to obtain promoter sequences that can be used for transcriptional studies. A bioinformatics algorithm (GeneAssembler) is also provided as a Ruby gem for this class of analyses.

  20. Light-efficient photography.

    PubMed

    Hasinoff, Samuel W; Kutulakos, Kiriakos N

    2011-11-01

    In this paper, we consider the problem of imaging a scene with a given depth of field at a given exposure level in the shortest amount of time possible. We show that by 1) collecting a sequence of photos and 2) controlling the aperture, focus, and exposure time of each photo individually, we can span the given depth of field in less total time than it takes to expose a single narrower-aperture photo. Using this as a starting point, we obtain two key results. First, for lenses with continuously variable apertures, we derive a closed-form solution for the globally optimal capture sequence, i.e., one that collects light from the specified depth of field in the most efficient way possible. Second, for lenses with discrete apertures, we derive an integer programming problem whose solution is the optimal sequence. Our results are applicable to off-the-shelf cameras and typical photography conditions, and advocate the use of dense, wide-aperture photo sequences as a light-efficient alternative to single-shot, narrow-aperture photography.

  1. Scheimpflug with computational imaging to extend the depth of field of iris recognition systems

    NASA Astrophysics Data System (ADS)

    Sinharoy, Indranil

    Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits iris image capture to a small volume, the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is surprisingly simple; yet it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution or signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure conventional image of the same DOF and brightness level. The net reduction in capture time can significantly relax the constraints on subject movement during iris acquisition, making it less restrictive.
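    The final blending step of AFS (selecting the in-focus regions from the registered stack) can be sketched with a per-pixel sharpness criterion. The windowed Laplacian-energy measure below is an illustrative stand-in, not the authors' actual pipeline, and assumes the frames are already registered:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def laplacian_energy(img, win=5):
    """Local sharpness: squared Laplacian response, averaged over a window."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img) ** 2
    pad = win // 2
    padded = np.pad(lap, pad, mode="edge")
    return sliding_window_view(padded, (win, win)).mean(axis=(-2, -1))

def blend_stack(stack):
    """Per pixel, keep the value from the frame with the highest local sharpness."""
    energy = np.stack([laplacian_energy(f) for f in stack])
    best = np.argmax(energy, axis=0)
    return np.take_along_axis(np.stack(stack), best[None], axis=0)[0]
```

    In AFS the registration is exact by construction (pivoting at the entrance pupil), so a simple per-pixel selection like this suffices for blending.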

  2. Controlling the Display of Capsule Endoscopy Video for Diagnostic Assistance

    NASA Astrophysics Data System (ADS)

    Vu, Hai; Echigo, Tomio; Sagawa, Ryusuke; Yagi, Keiko; Shiba, Masatsugu; Higuchi, Kazuhide; Arakawa, Tetsuo; Yagi, Yasushi

    Interpretations by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually require 45 to 120 minutes of extreme concentration. This paper describes a novel method to reduce diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays the original images with no skipping of frames. The sequence can be played at a high frame rate in stable regions to save time; in regions of abrupt change, the speed is decreased so that suspicious findings can be ascertained more conveniently. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacements, is extracted. A decision tree uses these features to classify the states of the image acquisitions, and for each classified state the delay time between frames is calculated by parametric functions. A scheme that selects the optimal parameter set, determined from assessments by physicians, is deployed. Experiments involved clinical evaluations to investigate the effectiveness of this method compared to a standard view using an existing system. Results from logged-action-based analysis show that, compared with an existing system, the proposed method reduced diagnostic time to around 32.5 ± minutes per full sequence while the number of abnormalities found was similar. Physicians also needed less effort because of the system's efficient operability. The results of the evaluations should convince physicians that they can safely use this method and obtain reduced diagnostic times.
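    The core idea of the adaptive display (play fast while consecutive frames are similar, slow down on abrupt change) can be sketched as follows. The similarity measure and delay values here are hypothetical; the paper uses a decision tree and parametric delay functions tuned from physician assessments:

```python
import numpy as np

def frame_similarity(a, b, bins=16):
    """Histogram intersection between two RGB frames with values in [0, 1]."""
    sim = 0.0
    for c in range(3):
        ha, _ = np.histogram(a[..., c], bins=bins, range=(0, 1))
        hb, _ = np.histogram(b[..., c], bins=bins, range=(0, 1))
        sim += np.minimum(ha, hb).sum() / ha.sum()
    return sim / 3.0

def display_delay_ms(sim, fast=40, slow=400, threshold=0.85):
    """Short inter-frame delay in stable regions, long delay on abrupt change."""
    return fast if sim >= threshold else slow
```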

  3. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to produce super-resolved all-refocused images with a plenoptic camera. A plenoptic camera can be produced by putting a micro-lens array between the lens and the sensor of a conventional camera. This kind of camera captures both the angular and spatial information of the scene in a single shot. A sequence of digitally refocused images, focused at different depths, can be produced by processing the 4D light field captured by the plenoptic camera. The number of pixels in a refocused image equals the number of micro-lenses in the array, so the limited number of micro-lenses results in low-resolution refocused images lacking fine detail. The lost details, mostly high-frequency information, matter most for the in-focus part of a refocused image, so we super-resolve these in-focus parts. An image segmentation method based on random walks, operating on the depth map produced from the 4D light field data, is used to separate the foreground and background in the refocused images, and a focus evaluation function determines which refocused image has the clearest foreground and which has the clearest background. We then apply a single-image super-resolution method based on sparse signal representation to the in-focus parts of these selected refocused images. Finally, we obtain the super-resolved all-focus image by merging the in-focus background and foreground parts, preserving more spatial detail in the output. Our method enhances the resolution of the refocused image, and only the refocused images with the clearest foreground and background need to be super-resolved.
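    The focus evaluation step (choosing which refocused image has the clearest foreground or background) can be sketched with a gradient-based measure such as Tenengrad; this is one plausible choice, as the abstract does not name a specific function:

```python
import numpy as np

def tenengrad(img, mask=None):
    """Focus measure: sum of squared finite-difference gradients,
    optionally restricted to a (foreground or background) mask."""
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    g2 = gx ** 2 + gy ** 2
    return g2[mask].sum() if mask is not None else g2.sum()

def sharpest_index(refocused, mask=None):
    """Index of the refocused image whose masked region is clearest."""
    return int(np.argmax([tenengrad(f, mask) for f in refocused]))
```

    Passing the foreground mask from the segmentation step selects the image to super-resolve for the foreground, and likewise for the background.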

  4. Micro-optical system based 3D imaging for full HD depth image capturing

    NASA Astrophysics Data System (ADS)

    Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan

    2012-03-01

    A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capture, the system exploits the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity that maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low-resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The novel optical shutter device enables capture of a full HD depth image with depth accuracy on the mm scale, the largest depth-image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a concurrent color/depth sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and the capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
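    For context, the TOF principle converts the phase delay of the 20 MHz-modulated illumination into distance. A generic four-bucket phase computation is sketched below; the prototype's optical-shutter demodulation differs in implementation but follows the same relation, with an unambiguous range of c / (2 f_mod) = 7.5 m at 20 MHz:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def tof_depth(q0, q1, q2, q3, f_mod=20e6):
    """Depth from four samples of the return signal taken at
    0, 90, 180 and 270 degrees of the modulation period."""
    phase = math.atan2(q3 - q1, q0 - q2) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod)
```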

  5. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images were digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software, where the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray-scale range, stability, slew rate, and pixel alignment. The image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moiré pattern. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests that were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray and fail to resolve the lines, or produce horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.
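    The alternating-line target used for the slew-rate test can be synthesized directly. The exact layout below (black and white equilibration strips followed by single-pixel lines) is an assumed arrangement, since the text does not specify the ordering:

```python
import numpy as np

def slew_rate_pattern(height=256, width=256, strip=10):
    """Test image: ten-pixel equilibration strips followed by
    alternating one-pixel-wide black and white vertical lines."""
    img = np.zeros((height, width), dtype=np.uint8)
    x = 0
    while x < width:
        img[:, x:x + strip] = 0        # black equilibration strip
        x += strip
        img[:, x:x + strip] = 255      # white equilibration strip
        x += strip
        for i in range(strip):         # alternating single-pixel lines
            if x >= width:
                break
            img[:, x] = 255 if i % 2 == 0 else 0
            x += 1
    return img
```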

  6. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish them from images and videos. Therefore, directly borrowing image or video compression techniques, such as the discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented as 2D matrices. It then computes a set of data-dependent orthogonal bases to transform the matrices to the frequency domain, in which the transform coefficients have significantly less dependency. Finally, compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
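    The pipeline above (clip matrices, data-dependent orthogonal basis, quantization, entropy coding) can be illustrated with an SVD-derived basis. This is a simplified stand-in for the paper's basis computation, and the entropy-coding stage is omitted:

```python
import numpy as np

def encode_clip(clip, rank=8, step=0.05):
    """Transform-code one mocap clip (frames x joint-coordinates matrix):
    data-dependent orthogonal basis from the SVD, then uniform quantization."""
    _, _, vt = np.linalg.svd(clip, full_matrices=False)
    basis = vt[:rank]                 # orthonormal rows spanning the clip
    coeff = clip @ basis.T            # transform coefficients
    q = np.round(coeff / step).astype(np.int32)
    return q, basis

def decode_clip(q, basis, step=0.05):
    """Dequantize and invert the transform."""
    return (q * step) @ basis
```

    Entropy coding of `q` and `basis` would follow; the quantization step controls the rate/distortion trade-off.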

  7. 77 FR 4059 - Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Receipt...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-26

    ... Images, and Components Thereof; Receipt of Complaint; Solicitation of Comments Relating to the Public... Devices for Capturing and Transmitting Images, and Components Thereof, DN 2869; the Commission is... importation of certain electronic devices for capturing and transmitting images, and components thereof. The...

  8. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    PubMed

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    To extend the dynamic range of video, it is common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach produces typical ghosting artifacts due to fast and complex motion in natural scenes. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range, but the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial detail information in differently exposed rows is often available via interlacing, we make use of this information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also apply multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher-dynamic-range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with state-of-the-art high-dynamic-range video methods.

  9. A new method for automatic tracking of facial landmarks in 3D motion captured images (4D).

    PubMed

    Al-Anezi, T; Khambay, B; Peng, M J; O'Leary, E; Ju, X; Ayoub, A

    2013-01-01

    The aim of this study was to validate the automatic tracking of facial landmarks in 3D image sequences. 32 subjects (16 males and 16 females) aged 18-35 years were recruited. 23 anthropometric landmarks were marked on the face of each subject with non-permanent ink using a 0.5 mm pen. The subjects were asked to perform three facial animations (maximal smile, lip purse and cheek puff) from the rest position. Each animation was captured by the 3D imaging system. A single operator manually digitised the landmarks on the 3D facial models and their locations were compared with those of the automatically tracked ones. To investigate the accuracy of manual digitisation, the operator re-digitised the same set of 3D images of 10 subjects (5 male and 5 female) at a 1-month interval. The discrepancies in x, y and z coordinates between the 3D positions of the manually digitised landmarks and those of the automatically tracked facial landmarks were within 0.17 mm. The mean distance between the manually digitised and the automatically tracked landmarks using the tracking software was within 0.55 mm. The automatic tracking of facial landmarks demonstrated satisfactory accuracy, which should facilitate the analysis of dynamic motion during facial animations. Copyright © 2012 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  10. Fibered fluorescence microscopy (FFM) of intra epidermal nerve fibers--translational marker for peripheral neuropathies in preclinical research: processing and analysis of the data

    NASA Astrophysics Data System (ADS)

    Cornelissen, Frans; De Backer, Steve; Lemeire, Jan; Torfs, Berf; Nuydens, Rony; Meert, Theo; Schelkens, Peter; Scheunders, Paul

    2008-08-01

    Peripheral neuropathy can be caused by diabetes or AIDS, or be a side effect of chemotherapy. Fibered Fluorescence Microscopy (FFM) is a recently developed imaging modality using a fiber-optic probe connected to a laser scanning unit. It allows in-vivo scanning of small animal subjects by moving the probe along the tissue surface. In preclinical research, FFM enables non-invasive, longitudinal in-vivo assessment of intra-epidermal nerve fibre density in various models of peripheral neuropathies. Because images are captured continuously while the probe moves, FFM can visualize surfaces larger than the field of view of the probe. For analysis purposes, we need to obtain a single static image from the multiple overlapping frames, so we introduce a mosaicing procedure for this kind of video sequence. Construction of mosaic images with sub-pixel alignment is indispensable and must be integrated into a globally consistent image alignment. An additional motivation for the mosaicing is the use of overlapping redundant information to improve the signal-to-noise ratio of the acquisition, because the individual frames tend to have both high noise levels and intensity inhomogeneities. For longitudinal analysis, mosaics captured at different times must be aligned as well. For alignment, global correlation-based matching is compared with interest-point matching. The use of algorithms running on multiple CPUs (parallel processor/cluster/grid) is imperative for use in a screening model.
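    The global correlation-based matching mentioned for alignment can be sketched, at integer precision, with classic phase correlation; the actual pipeline additionally achieves sub-pixel accuracy and enforces globally consistent alignment across the whole sequence:

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) circular shift mapping ref onto moved,
    via the normalized cross-power spectrum."""
    num = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(num / (np.abs(num) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)
```

    Chaining the pairwise shifts between consecutive frames gives each frame's position in the mosaic; overlapping pixels can then be averaged to improve the signal-to-noise ratio.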

  11. [Current practice in MR imaging of the liver].

    PubMed

    Kanematsu, M; Kondo, H; Matsuo, M; Hoshi, H

    2001-12-01

    MR imaging, which can evaluate T1 and T2 relaxation times, fat, hemorrhage, metal deposition, blood flow, perfusion, diffusion, and so on, has offered more information for the diagnosis of diffuse and focal hepatic diseases than CT. The spoiled-GRE sequence, with its high contrast resolution and the ease of capturing the desired contrast derived from its k-space properties, together with the use of a phased-array multicoil, has remarkably increased the value of gadolinium-enhanced dynamic MR diagnosis of the liver. In recent years, the clinical use of ferumoxide has begun, and issues concerning the relative merits and combination of contrast media are being debated. This paper describes the value, role, and clinical practice of unenhanced, gadolinium-enhanced, and ferumoxide-enhanced MR imaging of the liver based on knowledge obtained in our institution, with some reference to the literature.

  12. Polar Lights at Saturn Bid Cassini Farewell

    NASA Image and Video Library

    2017-10-16

    On Sept. 14, 2017, one day before making its final plunge into Saturn's atmosphere, NASA's Cassini spacecraft used its Ultraviolet Imaging Spectrograph, or UVIS, instrument to capture this final view of ultraviolet auroral emissions in the planet's north polar region. The view is centered on the north pole of Saturn, with lines of latitude visible for 80, 70 and 60 degrees. Lines of longitude are spaced 40 degrees apart. The planet's day side is at bottom, while the night side is at top. A sequence of images from this observation has also been assembled into a movie sequence. The last image in the movie was taken about an hour before the still image, which was the actual final UVIS auroral image. Auroral emissions are generated by charged particles traveling along the invisible lines of Saturn's magnetic field. These particles precipitate into the atmosphere, releasing light when they strike gas molecules there. Several individual auroral structures are visible here, even though this UVIS view was acquired at a fairly large distance from the planet (about 424,000 miles or 683,000 kilometers). Each of these features is connected to a particular phenomenon in Saturn's magnetosphere. For instance, it is possible to identify auroral signatures here that are related to the injection of hot plasma from the dayside magnetosphere, as well as auroral features associated with a change in the magnetic field's shape on the magnetosphere's night side. Several possible scenarios have been postulated over the years to explain Saturn's changing auroral emissions, but researchers are still far from a complete understanding of this complicated puzzle. Researchers will continue to analyze the hundreds of image sequences UVIS obtained of Saturn's auroras during Cassini's 13-year mission, with many new discoveries likely to be made.
This image and movie sequence were produced by the Laboratory for Planetary and Atmospheric Physics (LPAP) of the STAR Institute of the University of Liege in Belgium, in collaboration with the UVIS Team. The animation is available at https://photojournal.jpl.nasa.gov/catalog/PIA21899

  13. Low-coverage single-cell mRNA sequencing reveals cellular heterogeneity and activated signaling pathways in developing cerebral cortex.

    PubMed

    Pollen, Alex A; Nowakowski, Tomasz J; Shuga, Joe; Wang, Xiaohui; Leyrat, Anne A; Lui, Jan H; Li, Nianzhen; Szpankowski, Lukasz; Fowler, Brian; Chen, Peilin; Ramalingam, Naveen; Sun, Gang; Thu, Myo; Norris, Michael; Lebofsky, Ronald; Toppani, Dominique; Kemp, Darnell W; Wong, Michael; Clerkson, Barry; Jones, Brittnee N; Wu, Shiquan; Knutsson, Lawrence; Alvarado, Beatriz; Wang, Jing; Weaver, Lesley S; May, Andrew P; Jones, Robert C; Unger, Marc A; Kriegstein, Arnold R; West, Jay A A

    2014-10-01

    Large-scale surveys of single-cell gene expression have the potential to reveal rare cell populations and lineage relationships but require efficient methods for cell capture and mRNA sequencing. Although cellular barcoding strategies allow parallel sequencing of single cells at ultra-low depths, the limitations of shallow sequencing have not been investigated directly. By capturing 301 single cells from 11 populations using microfluidics and analyzing single-cell transcriptomes across downsampled sequencing depths, we demonstrate that shallow single-cell mRNA sequencing (~50,000 reads per cell) is sufficient for unbiased cell-type classification and biomarker identification. In the developing cortex, we identify diverse cell types, including multiple progenitor and neuronal subtypes, and we identify EGR1 and FOS as previously unreported candidate targets of Notch signaling in human but not mouse radial glia. Our strategy establishes an efficient method for unbiased analysis and comparison of cell populations from heterogeneous tissue by microfluidic single-cell capture and low-coverage sequencing of many cells.

  14. Continuous Mapping of Tunnel Walls in a Gnss-Denied Environment

    NASA Astrophysics Data System (ADS)

    Chapman, Michael A.; Min, Cao; Zhang, Deijin

    2016-06-01

    The need for reliable systems for capturing precise detail in tunnels has increased as the number of tunnels (e.g., for cars and trucks, trains, subways, mining and other infrastructure) has grown and the aging and subsequent deterioration of these structures has introduced structural degradation and eventual failures. Due to the hostile environments encountered in tunnels, mobile mapping systems are plagued by various problems such as loss of GNSS signals, drift of inertial measurement systems, low lighting conditions, dust and poor surface textures for feature identification and extraction. A tunnel mapping system using alternate sensors and algorithms that can deliver precise coordinates and feature attributes from surfaces along the entire tunnel path is presented. This system employs image bridging, or visual odometry, to estimate precise sensor positions and orientations. The fundamental concept is the use of image sequences to geometrically extend the control information in the absence of absolute positioning data sources. This is a non-trivial problem due to changes in scale, perceived resolution, image contrast and lack of salient features. The sensors employed include forward-looking high-resolution digital frame cameras coupled with auxiliary light sources. In addition, a high-frequency lidar system and a thermal imager are included to offer three-dimensional point clouds of the tunnel walls along with thermal images for moisture detection. The mobile mapping system is equipped with an array of 16 cameras and light sources to capture the tunnel walls. Continuous images are produced using a semi-automated mosaicking process. Results of preliminary experimentation are presented to demonstrate the effectiveness of the system for the generation of seamless precise tunnel maps.

  15. Segmentation of Environmental Time Lapse Image Sequences for the Determination of Shore Lines Captured by Hand-Held Smartphone Cameras

    NASA Astrophysics Data System (ADS)

    Kröhnert, M.; Meichsner, R.

    2017-09-01

    The relevance of global environmental issues has grown in recent years, with trends still rising. Disastrous floods in particular may cause serious damage within a very short time. Although conventional gauging stations provide reliable information about prevailing water levels, they are highly cost-intensive and thus only sparsely installed. Smartphones with built-in cameras, powerful processing units and low-cost positioning systems are very suitable, widespread measurement devices that could be used for geo-crowdsourcing purposes. We therefore aim to develop a versatile mobile water-level measurement system to establish a densified hydrological network of water levels with high spatial and temporal resolution. This paper addresses a key issue of the entire system: the detection of running-water shore lines in smartphone images. Flowing water never appears identical in close-range images, even if the extrinsics remain unchanged; its non-rigid behavior impedes the use of standard image segmentation practices as a prerequisite for water-line detection. Consequently, we use a hand-held time-lapse image sequence instead of a single image, which provides the time component needed to determine a spatio-temporal texture image. Using a region-growing concept, the texture is analyzed for immutable shore and dynamic water areas. Finally, the prevalent shore line is extracted from the resulting shapes. For method validation, various study areas are observed from several distances, covering urban and rural flowing waters with different characteristics. Future work will provide a transformation of the water line into object space by image-to-geometry intersection.
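    A minimal sketch of the spatio-temporal texture idea: per-pixel variability over the registered sequence separates moving water from static shore. The fixed threshold below is a simplification of the region-growing analysis described in the paper:

```python
import numpy as np

def temporal_texture(frames):
    """Per-pixel standard deviation over a co-registered time-lapse stack:
    high where the water surface moves, near zero on static shore."""
    return np.std(np.stack(frames), axis=0)

def water_mask(frames, threshold=0.05):
    """Binary water mask from the temporal texture (fixed threshold here;
    the described method grows regions from seeds instead)."""
    return temporal_texture(frames) > threshold
```

    The shore line is then the boundary of the resulting mask.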

  16. Terrain detection and classification using single polarization SAR

    DOEpatents

    Chow, James G.; Koch, Mark W.

    2016-01-19

    The various technologies presented herein relate to identifying manmade and/or natural features in a radar image. Two radar images (e.g., single-polarization SAR images) can be captured for a common scene. The first image is captured at a first instance and the second at a second instance, whereby the duration between the captures is long enough that temporal decorrelation occurs for natural surfaces in the scene and only manmade surfaces, e.g., a road, produce correlated pixels. An LCCD image comprising the correlated and decorrelated pixels can be generated from the two radar images. A median image can be generated from a plurality of radar images, whereby any features in the median image can be identified. A superpixel operation can be performed on the LCCD image and the median image, thereby enabling a feature(s) in the LCCD image to be classified.
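    The correlated/decorrelated pixel distinction behind the LCCD image can be illustrated with a windowed correlation coefficient between the two co-registered captures; this is a generic change-detection surrogate, not the patented method itself:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_correlation(img1, img2, win=5):
    """Windowed Pearson correlation between two co-registered images:
    near 1 where the scene is unchanged (manmade surfaces such as roads),
    low where temporal decorrelation occurred (natural surfaces)."""
    pad = win // 2
    a = sliding_window_view(np.pad(img1, pad, mode="edge"), (win, win))
    b = sliding_window_view(np.pad(img2, pad, mode="edge"), (win, win))
    am = a - a.mean(axis=(-2, -1), keepdims=True)
    bm = b - b.mean(axis=(-2, -1), keepdims=True)
    num = (am * bm).sum(axis=(-2, -1))
    den = np.sqrt((am ** 2).sum(axis=(-2, -1)) * (bm ** 2).sum(axis=(-2, -1)))
    return num / (den + 1e-12)
```

    Thresholding this map yields the correlated/decorrelated pixel classes that make up an LCCD-style image.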

  17. Rapid Material Appearance Acquisition Using Consumer Hardware

    PubMed Central

    Filip, Jiří; Vávra, Radomír; Krupička, Mikuláš

    2014-01-01

    A photo-realistic representation of material appearance can be achieved by means of a bidirectional texture function (BTF) capturing a material's appearance for varying illumination, viewing directions, and spatial pixel coordinates. BTF captures many non-local effects in material structure such as inter-reflections, occlusions, shadowing, or scattering. The acquisition of BTF data is usually time- and resource-intensive due to the high dimensionality of BTF data, resulting in expensive, complex measurement setups and/or excessively long measurement times. We propose an approximate BTF acquisition setup based on a simple, affordable mechanical gantry containing a consumer camera and two LED lights. It captures a very limited subset of material surface images by shooting several video sequences. A psychophysical study comparing captured and reconstructed data with the reference BTFs of seven tested materials revealed that the results of our method show promising visual quality. The speed of the setup has been demonstrated on measurements of human skin and on the measurement and modeling of a time-varying glue desiccation process. As it allows fast, inexpensive acquisition of approximate BTFs, this method can be beneficial to visualization applications demanding less accuracy, where BTF utilization has previously been limited. PMID:25340451

  18. Mutation Scanning in Wheat by Exon Capture and Next-Generation Sequencing.

    PubMed

    King, Robert; Bird, Nicholas; Ramirez-Gonzalez, Ricardo; Coghill, Jane A; Patil, Archana; Hassani-Pak, Keywan; Uauy, Cristobal; Phillips, Andrew L

    2015-01-01

    Targeting Induced Local Lesions IN Genomes (TILLING) is a reverse genetics approach to identify novel sequence variation in genomes, with the aims of investigating gene function and/or developing useful alleles for breeding. Despite recent advances in wheat genomics, most current TILLING methods are low to medium in throughput, being based on PCR amplification of the target genes. We performed a pilot-scale evaluation of TILLING in wheat by next-generation sequencing through exon capture. An oligonucleotide-based enrichment array covering ~2 Mbp of wheat coding sequence was used to carry out exon capture and sequencing on three mutagenized lines of wheat containing previously-identified mutations in the TaGA20ox1 homoeologous genes. After testing different mapping algorithms and settings, candidate SNPs were identified by mapping to the IWGSC wheat Chromosome Survey Sequences. Where sequence data for all three homoeologues were found in the reference, mutant calls were unambiguous; however, where the reference lacked one or two of the homoeologues, captured reads from these genes were mis-mapped to other homoeologues, resulting either in dilution of the variant allele frequency or assignment of mutations to the wrong homoeologue. Competitive PCR assays were used to validate the putative SNPs and estimate cut-off levels for SNP filtering. At least 464 high-confidence SNPs were detected across the three mutagenized lines, including the three known alleles in TaGA20ox1, indicating a mutation rate of ~35 SNPs per Mb, similar to that estimated by PCR-based TILLING. This demonstrates the feasibility of using exon capture for genome re-sequencing as a method of mutation detection in polyploid wheat, but accurate mutation calling will require an improved genomic reference with more comprehensive coverage of homoeologues.
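
    The dilution effect described above is easy to quantify. As an illustrative sketch (the read counts are invented, not taken from the study): a homozygous mutation in one homoeologue gives a variant allele frequency (VAF) near 1.0 when reads map only to that homoeologue, but if reads from all three homoeologues collapse onto a single reference copy, wild-type reads from the other two dilute the VAF toward 1/3:

```python
def vaf(mutant_reads, wildtype_reads):
    """Variant allele frequency at a candidate SNP position."""
    return mutant_reads / (mutant_reads + wildtype_reads)

# Correct mapping: 30 reads from the mutated homoeologue, all carrying the SNP.
print(vaf(30, 0))    # 1.0
# Mis-mapping: 30 wild-type reads from each of the other two homoeologues
# pile onto the same reference position and dilute the call.
print(vaf(30, 60))   # 0.333... -> may fall below a naive SNP-filter cut-off
```

This is why the abstract ties accurate mutation calling to a reference with full homoeologue coverage: the cut-off levels estimated from the competitive PCR assays must accommodate such diluted frequencies.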

  19. [Target gene sequence capture and next generation sequencing technology to diagnose four children with Alagille syndrome].

    PubMed

    Gao, M L; Zhong, X M; Ma, X; Ning, H J; Zhu, D; Zou, J Z

    2016-06-02

    To make a genetic diagnosis of Alagille syndrome (ALGS) using target gene sequence capture and next-generation sequencing. Target gene sequence capture and next-generation sequencing were used to detect ALGS-related genes in 4 patients hospitalized at the Affiliated Hospital, Capital Institute of Pediatrics between January 2014 and December 2015; two were clinically diagnosed as typical ALGS and two as atypical ALGS. Blood samples were collected from the patients and their parents, and genomic DNA was extracted from lymphocytes. Target gene sequence capture and next-generation sequencing were performed, and Sanger sequencing was used to confirm the results in the patients and their parents. Cholestasis, heart defects, an inverted triangular face and butterfly vertebrae were the main clinical features in the 4 male patients. The age at first hospital visit ranged from 3 months and 14 days to 3 years and 1 month. The age of onset ranged from 3 days to 42 days (median 23 days). According to the clinical diagnostic criteria of ALGS, patients 1 and 2 were considered typical ALGS and the other 2 patients atypical ALGS. Four pathogenic Jagged 1 (JAG1) mutations were detected: three different point mutations in patients 1 to 3 (c.839C>T (p.W280X), c.703G>A (p.R235X), c.1720C>T (p.V574M)), of which the mutation in patient 3 is reported here for the first time, and one novel insertion mutation in patient 4 (c.1779_1780insA (p.Ile594AsnfsTer23)). Parental analysis verified that the JAG1 mutations of 3 patients were de novo. The Sanger sequencing results were consistent with those of next-generation sequencing. Target gene sequence capture combined with next-generation sequencing can detect the two pathogenic genes of ALGS and simultaneously test genes of other related infantile cholestatic diseases, with high throughput, high efficiency and low cost. It may provide molecular diagnosis for clinicians, with good prospects for clinical application.

  20. A High-Throughput Process for the Solid-Phase Purification of Synthetic DNA Sequences

    PubMed Central

    Grajkowski, Andrzej; Cieślak, Jacek; Beaucage, Serge L.

    2017-01-01

    An efficient process for the purification of synthetic phosphorothioate and native DNA sequences is presented. The process is based on the use of an aminopropylated silica gel support functionalized with aminooxyalkyl functions to enable capture of DNA sequences through an oximation reaction with the keto function of a linker conjugated to the 5′-terminus of DNA sequences. Deoxyribonucleoside phosphoramidites carrying this linker, as a 5′-hydroxyl protecting group, have been synthesized for incorporation into DNA sequences during the last coupling step of a standard solid-phase synthesis protocol executed on a controlled pore glass (CPG) support. Solid-phase capture of the nucleobase- and phosphate-deprotected DNA sequences released from the CPG support is demonstrated to proceed near quantitatively. Shorter than full-length DNA sequences are first washed away from the capture support; the solid-phase purified DNA sequences are then released from this support upon reaction with tetra-n-butylammonium fluoride in dry dimethylsulfoxide (DMSO) and precipitated in tetrahydrofuran (THF). The purity of solid-phase-purified DNA sequences exceeds 98%. The simulated high-throughput and scalability features of the solid-phase purification process are demonstrated without sacrificing purity of the DNA sequences. PMID:28628204

  1. 78 FR 16531 - Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-15

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-831] Certain Electronic Devices for Capturing and Transmitting Images, and Components Thereof; Commission Determination Not To Review an Initial... certain electronic devices for capturing and transmitting images, and components thereof. The complaint...

  2. Color Imaging management in film processing

    NASA Astrophysics Data System (ADS)

    Tremeau, Alain; Konik, Hubert; Colantoni, Philippe

    2003-12-01

    The latest research projects in the LIGIV laboratory concern the capture, processing, archiving and display of color images considering the trichromatic nature of the human visual system (HVS). Among these projects, one addresses digital cinematographic film sequences of high resolution and dynamic range. This project aims to optimize the use of content for post-production operators and for the end user. The studies presented in this paper address the use of metadata to optimise the consumption of video content on a device of the user's choice, independent of the nature of the equipment that captured the content. Optimising consumption includes enhancing the quality of image reconstruction on a display. Another part of this project addresses the content-based adaptation of image display. The main focus is on Regions of Interest (ROI) operations, based on the ROI concepts of MPEG-7. The aim of this second part is to characterize and ensure the conditions of display even if the display device or display media changes. This first requires the definition of a reference color space and of bi-directional color transformations for each peripheral device (camera, display, film recorder, etc.). The complicating factor is that different devices have different color gamuts, depending on the chromaticity of their primaries and the ambient illumination under which they are viewed. To match the displayed image to the intended appearance, all kinds of production metadata (camera specification, camera colour primaries, lighting conditions) should be associated with the film material. Metadata and content together build rich content. The author is assumed to specify conditions as known from the digital graphic arts. To control image pre-processing and post-processing, these specifications should be contained in the film's metadata. The specifications are related to ICC profiles but must additionally consider mesopic viewing conditions.
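
    In the simplest case, a bi-directional device transform of the kind described reduces to gamma linearization followed by a 3×3 primaries matrix. A minimal sketch using the standard sRGB-to-CIE-XYZ (D65) constants (an actual device profile from the project would replace them):

```python
import numpy as np

# Standard sRGB (D65) primaries matrix: linear RGB -> CIE XYZ.
M_SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_to_xyz(rgb):
    """rgb: sRGB-encoded values in [0, 1]. Returns CIE XYZ tristimulus values."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo the sRGB transfer function (gamma linearization).
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M_SRGB_TO_XYZ @ linear

print(srgb_to_xyz([1.0, 1.0, 1.0]))  # D65 white point, ~[0.9505, 1.0, 1.089]
```

Chaining such a forward transform for the camera with the inverse transform for the display is what makes the reference color space "bi-directional"; gamut differences between the two devices then have to be handled on top of this.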

  3. Genotyping by Sequencing Using Specific Allelic Capture to Build a High-Density Genetic Map of Durum Wheat

    PubMed Central

    Holtz, Yan; Ardisson, Morgane; Ranwez, Vincent; Besnard, Alban; Leroy, Philippe; Poux, Gérard; Roumet, Pierre; Viader, Véronique; Santoni, Sylvain; David, Jacques

    2016-01-01

    Targeted sequence capture is a promising technology which helps reduce costs for sequencing and genotyping numerous genomic regions in large sets of individuals. Bait sequences are designed to capture specific alleles previously discovered in parents or reference populations. We studied a set of 135 RILs originating from a cross between an emmer cultivar (Dic2) and a recent durum elite cultivar (Silur). Six thousand sequence baits were designed to target Dic2 vs. Silur polymorphisms discovered in a previous RNAseq study. These baits were exposed to genomic DNA of the RIL population. Eighty percent of the targeted SNPs were recovered, 65% of which were of high quality and coverage. The final high density genetic map consisted of more than 3,000 markers, whose genetic and physical mapping were consistent with those obtained with large arrays. PMID:27171472

  4. BayMeth: improved DNA methylation quantification for affinity capture sequencing data using a flexible Bayesian approach

    PubMed Central

    2014-01-01

    Affinity capture of DNA methylation combined with high-throughput sequencing strikes a good balance between the high cost of whole genome bisulfite sequencing and the low coverage of methylation arrays. We present BayMeth, an empirical Bayes approach that uses a fully methylated control sample to transform observed read counts into regional methylation levels. In our model, inefficient capture can readily be distinguished from low methylation levels. BayMeth improves on existing methods, allows explicit modeling of copy number variation, and offers computationally efficient analytical mean and variance estimators. BayMeth is available in the Repitools Bioconductor package. PMID:24517713

  5. Animation control of surface motion capture.

    PubMed

    Tejera, Margara; Casas, Dan; Hilton, Adrian

    2013-12-01

    Surface motion capture (SurfCap) of actor performance from multiple view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility in editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space–time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap driven and SurfCap sequence to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface and skeleton-driven animation control framework to achieve natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Evaluation of each approach and the integrated animation framework are presented on real SurfCap sequences for actors performing multiple motions with a variety of clothing styles. Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance and provide a powerful tool to extend current SurfCap databases by incorporating new motions from MoCap sequences.
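
    The constrained Laplacian mesh editing step can be illustrated on a toy mesh: preserve the differential (Laplacian) coordinates in a least-squares sense while pinning handle vertices. A minimal 2-D sketch on a 4-vertex chain with uniform weights (the paper additionally constrains the solution with a basis learned from SurfCap sequences, omitted here; all values are invented):

```python
import numpy as np

# 4-vertex chain, 2-D rest positions.
V = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]])

# Uniform graph Laplacian of the chain.
L = np.array([[ 1.0, -1.0,  0.0,  0.0],
              [-0.5,  1.0, -0.5,  0.0],
              [ 0.0, -0.5,  1.0, -0.5],
              [ 0.0,  0.0, -1.0,  1.0]])
delta = L @ V                      # differential coordinates to preserve

# Soft positional constraints: anchor vertex 0, drag vertex 3 upward.
w = 10.0
C = np.zeros((2, 4)); C[0, 0] = w; C[1, 3] = w
targets = w * np.array([[0.0, 0.0], [3.0, 1.0]])

# Least-squares solve: keep the local shape (L V' = delta) while
# satisfying the handle constraints (C V' = targets).
A = np.vstack([L, C])
b = np.vstack([delta, targets])
V_new, *_ = np.linalg.lstsq(A, b, rcond=None)
print(V_new)   # the handle reaches (3, 1); interior vertices bend smoothly
```

The same formulation scales to the dense SurfCap meshes in the paper: only the handle rows and the learned-basis constraints change, not the solve.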

  6. Combining fluorescence imaging with Hi-C to study 3D genome architecture of the same single cell.

    PubMed

    Lando, David; Basu, Srinjan; Stevens, Tim J; Riddell, Andy; Wohlfahrt, Kai J; Cao, Yang; Boucher, Wayne; Leeb, Martin; Atkinson, Liam P; Lee, Steven F; Hendrich, Brian; Klenerman, Dave; Laue, Ernest D

    2018-05-01

    Fluorescence imaging and chromosome conformation capture assays such as Hi-C are key tools for studying genome organization. However, traditionally, they have been carried out independently, making integration of the two types of data difficult to perform. By trapping individual cell nuclei inside a well of a 384-well glass-bottom plate with an agarose pad, we have established a protocol that allows both fluorescence imaging and Hi-C processing to be carried out on the same single cell. The protocol identifies 30,000-100,000 chromosome contacts per single haploid genome in parallel with fluorescence images. Contacts can be used to calculate intact genome structures to better than 100-kb resolution, which can then be directly compared with the images. Preparation of 20 single-cell Hi-C libraries using this protocol takes 5 d of bench work by researchers experienced in molecular biology techniques. Image acquisition and analysis require basic understanding of fluorescence microscopy, and some bioinformatics knowledge is required to run the sequence-processing tools described here.

  7. Near-infrared transillumination of teeth: measurement of a system performance

    NASA Astrophysics Data System (ADS)

    Karlsson, Lena; Maia, Ana M. A.; Kyotoku, Bernardo B. C.; Tranæus, Sofia; Gomes, Anderson S. L.; Margulis, Walter

    2010-05-01

    Transillumination (TI) of dental enamel with near-infrared light is a promising nonionizing imaging method for detection of early caries lesion. Increased mineral loss (caries lesion) leads to increased scattering and absorption. Caries thus appear as dark regions because less light reaches the detector. The aim of this work was to characterize the performance of a TI system from the resolution of acquired images using the modulation transfer function at two wavelengths, 1.28 and 1.4 μm. Test charts with various values of spatial periods, mimicking a perfect caries lesion, were attached to tooth sections, followed by capture of the transmitted image, using both wavelengths. The sections were then consecutively reduced in thickness, and a sequence of all sizes of the test charts were used for repeatedly imaging procedures. The results show that the TI system can detect feature size of 250 μm with 30% modulation. From the information about how the image degrades as it propagates through enamel, we also examined the possibility of estimating the position of a simulated approximal caries lesion by comparing images obtained from the two sides of a tooth section.
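
    The modulation measure underlying the MTF analysis is straightforward Michelson contrast. A minimal sketch (the test-chart geometry and the two wavelengths from the study are not modeled; the profiles are synthetic):

```python
import numpy as np

def modulation(profile):
    """Michelson contrast of an intensity profile: (Imax - Imin) / (Imax + Imin)."""
    p = np.asarray(profile, dtype=float)
    return (p.max() - p.min()) / (p.max() + p.min())

def mtf(image_profile, object_profile):
    """Modulation transfer at one spatial frequency: image contrast / object contrast."""
    return modulation(image_profile) / modulation(object_profile)

t = np.linspace(0.0, 2.0 * np.pi, 9)
target = 0.5 + 0.5 * np.sin(t)           # ideal test-chart pattern, full contrast
through_enamel = 0.5 + 0.15 * np.sin(t)  # attenuated after transillumination
print(mtf(through_enamel, target))       # 0.3 -> the 30% modulation level cited
```

Repeating this for charts of decreasing spatial period gives the MTF curve from which the 250 μm / 30% figure in the abstract is read off.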

  8. Near-infrared transillumination of teeth: measurement of a system performance.

    PubMed

    Karlsson, Lena; Maia, Ana M A; Kyotoku, Bernardo B C; Tranaeus, Sofia; Gomes, Anderson S L; Margulis, Walter

    2010-01-01

    Transillumination (TI) of dental enamel with near-infrared light is a promising nonionizing imaging method for detection of early caries lesion. Increased mineral loss (caries lesion) leads to increased scattering and absorption. Caries thus appear as dark regions because less light reaches the detector. The aim of this work was to characterize the performance of a TI system from the resolution of acquired images using the modulation transfer function at two wavelengths, 1.28 and 1.4 μm. Test charts with various values of spatial periods, mimicking a perfect caries lesion, were attached to tooth sections, followed by capture of the transmitted image, using both wavelengths. The sections were then consecutively reduced in thickness, and a sequence of all sizes of the test charts were used for repeatedly imaging procedures. The results show that the TI system can detect feature size of 250 μm with 30% modulation. From the information about how the image degrades as it propagates through enamel, we also examined the possibility of estimating the position of a simulated approximal caries lesion by comparing images obtained from the two sides of a tooth section.

  9. Acceleration of integral imaging based incoherent Fourier hologram capture using graphic processing unit.

    PubMed

    Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung

    2012-10-08

    Speed enhancement of integral imaging based incoherent Fourier hologram capture using a graphic processing unit is reported. The integral imaging based method enables exact hologram capture of real-existing three-dimensional objects under regular incoherent illumination. In our implementation, we apply a parallel computation scheme using the graphic processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.

  10. Improving the image discontinuous problem by using color temperature mapping method

    NASA Astrophysics Data System (ADS)

    Jeng, Wei-De; Mang, Ou-Yang; Lai, Chien-Cheng; Wu, Hsien-Ming

    2011-09-01

    This article focuses on image processing for a radial imaging capsule endoscope (RICE). In the experiments, RICE images of a pig intestine were captured, but the images were blurred because RICE suffers from aberration problems at the image center, and low light uniformity further degrades image quality. Image processing can be used to compensate for these problems. Images captured at different times were therefore connected using the Pearson correlation coefficient algorithm, and a color temperature mapping method was used to reduce the discontinuity in the connection region.
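
    A minimal sketch of the Pearson-correlation matching step, assuming grayscale frames and a brute-force search over candidate column overlaps (the actual RICE geometry is radial; this flattened version and its parameters are illustrative):

```python
import numpy as np

def best_overlap(left, right, k_min=5, k_max=40):
    """Find the column overlap maximizing Pearson r between frame edges."""
    best_k, best_r = None, -2.0
    for k in range(k_min, min(k_max, left.shape[1], right.shape[1])):
        a = left[:, -k:].ravel()    # trailing strip of the first frame
        b = right[:, :k].ravel()    # leading strip of the next frame
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_k, best_r = k, r
    return best_k, best_r

rng = np.random.default_rng(1)
scene = rng.random((40, 100))
frame1 = scene[:, :60]
frame2 = scene[:, 40:]              # shares 20 columns with frame1
k, r = best_overlap(frame1, frame2)
print(k, round(r, 3))               # 20 1.0
```

Once the overlap is found, the color temperature mapping described above can blend the two frames across the shared strip instead of butting them together.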

  11. ssHMM: extracting intuitive sequence-structure motifs from high-throughput RNA-binding protein data

    PubMed Central

    Krestel, Ralf; Ohler, Uwe; Vingron, Martin; Marsico, Annalisa

    2017-01-01

    Abstract RNA-binding proteins (RBPs) play an important role in RNA post-transcriptional regulation and recognize target RNAs via sequence-structure motifs. The extent to which RNA structure influences protein binding in the presence or absence of a sequence motif is still poorly understood. Existing RNA motif finders either take the structure of the RNA only partially into account, or employ models which are not directly interpretable as sequence-structure motifs. We developed ssHMM, an RNA motif finder based on a hidden Markov model (HMM) and Gibbs sampling which fully captures the relationship between RNA sequence and secondary structure preference of a given RBP. Compared to previous methods which output separate logos for sequence and structure, it directly produces a combined sequence-structure motif when trained on a large set of sequences. ssHMM’s model is visualized intuitively as a graph and facilitates biological interpretation. ssHMM can be used to find novel bona fide sequence-structure motifs of uncharacterized RBPs, such as the one presented here for the YY1 protein. ssHMM reaches a high motif recovery rate on synthetic data, it recovers known RBP motifs from CLIP-Seq data, and scales linearly on the input size, being considerably faster than MEMERIS and RNAcontext on large datasets while being on par with GraphProt. It is freely available on Github and as a Docker image. PMID:28977546

  12. Mechanism of disease in early osteoarthritis: application of modern MR imaging techniques -- a technical report.

    PubMed

    Jobke, Bjoern; Bolbos, Radu; Saadat, Ehsan; Cheng, Jonathan; Li, Xiaojuan; Majumdar, Sharmila

    2013-01-01

    The application of biomolecular magnetic resonance imaging is becoming increasingly important in the context of early cartilage changes in degenerative and inflammatory joint disease, before gross morphological changes become apparent. In this limited technical report, we investigate the correlation of MRI T1, T2 and T1ρ relaxation times with quantitative biochemical measurements of the proteoglycan and collagen contents of cartilage, in close synopsis with histologic morphology. A recently developed MRI sequence, T1ρ, was able to detect early intracartilaginous degeneration quantitatively, and also qualitatively by color mapping, demonstrating a higher sensitivity than standard T2-weighted sequences. The results correlated highly with reduced proteoglycan content and disrupted collagen architecture as measured by biochemistry and histology. The findings lend support to a clinical implementation that allows rapid visual capture of pathology at a local, millimeter level. Further information about articular cartilage quality, otherwise not detectable in vivo by normal inspection, is needed for orthopedic treatment decisions now and in the future. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Measurement of body joint angles for physical therapy based on mean shift tracking using two low cost Kinect images.

    PubMed

    Chen, Y C; Lee, H J; Lin, K H

    2015-08-01

    Range of motion (ROM) is commonly used to assess a patient's joint function in physical therapy. Because motion capture systems are generally very expensive, physical therapists mostly use simple rulers to measure patients' joint angles in clinical diagnosis, which suffers from low accuracy, low reliability, and subjectivity. In this study we used color and depth image features from two sets of low-cost Microsoft Kinect to reconstruct 3D joint positions, and then calculated the moveable joint angles to assess the ROM. A Gaussian background model is first used to segment the human body from the depth images. The 3D coordinates of the joints are reconstructed from both color and depth images. To track the location of joints throughout the sequence more precisely, we adopt the mean shift algorithm to locate the center of the voxels at each joint. The two sets of Kinect are placed three meters away from each other, facing the subject. The joint moveable angles and the motion data are calculated from the positions of the joints frame by frame. To verify the results of our system, we take the results from a motion capture system called VICON as the gold standard. Our 150 test results showed that the deviation of joint moveable angles between those obtained by VICON and by our system is about 4 to 8 degrees in six different upper limb exercises, which is acceptable in a clinical environment.
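
    Once the 3D joint positions are reconstructed, the joint angle reduces to the angle between the two limb-segment vectors meeting at the joint. A minimal sketch (the coordinates are invented, not Kinect output):

```python
import numpy as np

def joint_angle(proximal, joint, distal):
    """Angle in degrees at `joint` between the segments to its two neighbors."""
    u = np.asarray(proximal, dtype=float) - np.asarray(joint, dtype=float)
    v = np.asarray(distal, dtype=float) - np.asarray(joint, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

shoulder, elbow, wrist = [0, 0, 0], [30, 0, 0], [30, 25, 0]  # cm
print(joint_angle(shoulder, elbow, wrist))  # 90.0 -> elbow flexed at a right angle
```

Evaluating this frame by frame over the tracked sequence, and taking the extremes, yields the moveable range that is compared against the VICON reference.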

  14. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  15. Space-time light field rendering.

    PubMed

    Wang, Huamin; Sun, Mingxuan; Yang, Ruigang

    2007-01-01

    In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.

  16. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  17. Spread-Spectrum Beamforming and Clutter Filtering for Plane-Wave Color Doppler Imaging.

    PubMed

    Mansour, Omar; Poepping, Tamie L; Lacefield, James C

    2016-07-21

    Plane-wave imaging is desirable for its ability to achieve high frame rates, allowing the capture of fast dynamic events and continuous Doppler data. In most implementations of plane-wave imaging, multiple low-resolution images from different plane wave tilt angles are compounded to form a single high-resolution image, thereby reducing the frame rate. Compounding improves the lateral beam profile in the high-resolution image, but it also acts as a low-pass filter in slow time that causes attenuation and aliasing of signals with high Doppler shifts. This paper introduces a spread-spectrum color Doppler imaging method that produces high-resolution images without the use of compounding, thereby eliminating the tradeoff between beam quality, maximum unaliased Doppler frequency, and frame rate. The method uses a long, random sequence of transmit angles rather than a linear sweep of plane wave directions. The random angle sequence randomizes the phase of off-focus (clutter) signals, thereby spreading the clutter power in the Doppler spectrum, while keeping the spectrum of the in-focus signal intact. The ensemble of randomly tilted low-resolution frames also acts as the Doppler ensemble, so it can be much longer than a conventional linear sweep, thereby improving beam formation while also making the slow-time Doppler sampling frequency equal to the pulse repetition frequency. Experiments performed using a carotid artery phantom with constant flow demonstrate that the spread-spectrum method more accurately measures the parabolic flow profile of the vessel and outperforms conventional plane-wave Doppler in both contrast resolution and estimation of high flow velocities. The spread-spectrum method is expected to be valuable for Doppler applications that require measurement of high velocities at high frame rates.
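
    The clutter-spreading effect can be illustrated with a toy slow-time simulation. For an off-focus scatterer whose echo phase is proportional to the transmit tilt angle, a linear angle sweep produces a single coherent Doppler tone, while transmitting the same angles in random order spreads that energy across the spectrum (all parameters here are invented for illustration):

```python
import numpy as np

N = 128
n = np.arange(N)
# Off-focus (clutter) phase advances linearly with tilt angle; the sweep is
# scaled so a linear ordering yields exactly 5 cycles over the ensemble.
phase = 2.0 * np.pi * 5.0 * n / N

rng = np.random.default_rng(2)
linear_order = np.exp(1j * phase)                   # conventional linear sweep
random_order = np.exp(1j * rng.permutation(phase))  # spread-spectrum ordering

def peak_fraction(s):
    """Fraction of total slow-time spectral power in the strongest bin."""
    p = np.abs(np.fft.fft(s)) ** 2
    return p.max() / p.sum()

print(peak_fraction(linear_order))  # ~1.0: clutter concentrated in one Doppler bin
print(peak_fraction(random_order))  # small: clutter power spread across many bins
```

The in-focus signal, whose phase does not depend on tilt angle, is unaffected by the reordering, which is why the random sequence whitens the clutter while leaving the flow spectrum intact.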

  18. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
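
    The two enlargement strategies differ only in which axes of the elemental-image stack are upsampled. A minimal sketch on a toy stack, with nearest-neighbor replication standing in for the paper's interpolation (the array shapes are invented, not the 4K/8K dimensions):

```python
import numpy as np

# Toy IP capture: a 4 x 4 grid of elemental images, each 8 x 8 pixels.
ip = np.random.default_rng(3).random((4, 4, 8, 8))

# Method 1: keep the elemental-image count, double each image's pixel count.
more_pixels = ip.repeat(2, axis=2).repeat(2, axis=3)      # (4, 4, 16, 16)

# Method 2: keep the pixel count, double the number of elemental images.
more_elementals = ip.repeat(2, axis=0).repeat(2, axis=1)  # (8, 8, 8, 8)

def assemble(stack):
    """Tile an (Ey, Ex, py, px) elemental-image stack into one 2-D image."""
    ey, ex, py, px = stack.shape
    return stack.transpose(0, 2, 1, 3).reshape(ey * py, ex * px)

print(assemble(more_pixels).shape, assemble(more_elementals).shape)
# both (64, 64): the same display resolution, reached by different routes
```

Both routes deliver the same total pixel count for the 8K panel; they differ in whether angular sampling (elemental-image count) or per-view resolution is increased, which is exactly the trade-off the paper investigates.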

  19. Extracting cardiac shapes and motion of the chick embryo heart outflow tract from four-dimensional optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Yin, Xin; Liu, Aiping; Thornburg, Kent L.; Wang, Ruikang K.; Rugonyi, Sandra

    2012-09-01

    Recent advances in optical coherence tomography (OCT), and the development of image reconstruction algorithms, enabled four-dimensional (4-D) (three-dimensional imaging over time) imaging of the embryonic heart. To further analyze and quantify the dynamics of cardiac beating, segmentation procedures that can extract the shape of the heart and its motion are needed. Most previous studies analyzed cardiac image sequences using manually extracted shapes and measurements. However, this is time consuming and subject to inter-operator variability. Automated or semi-automated analyses of 4-D cardiac OCT images, although very desirable, are also extremely challenging. This work proposes a robust algorithm to semi automatically detect and track cardiac tissue layers from 4-D OCT images of early (tubular) embryonic hearts. Our algorithm uses a two-dimensional (2-D) deformable double-line model (DLM) to detect target cardiac tissues. The detection algorithm uses a maximum-likelihood estimator and was successfully applied to 4-D in vivo OCT images of the heart outflow tract of day three chicken embryos. The extracted shapes captured the dynamics of the chick embryonic heart outflow tract wall, enabling further analysis of cardiac motion.

  20. Sub-surface defect detection by using active thermography and advanced image edge detection

    NASA Astrophysics Data System (ADS)

    Tse, Peter W.; Wang, Gaochao

    2017-05-01

    Active or pulsed thermography is a popular non-destructive testing (NDT) tool for inspecting the integrity and anomalies of industrial equipment. One of the recent research trends in using active thermography is to automate the process of detecting hidden defects. To date, human effort is still required to adjust the temperature intensity of the thermo-camera in order to visually observe the difference between the cooling rate of a normal target and that of a target with a sub-surface crack inside it. To avoid tedious human-visual inspection and minimize human-induced error, this paper reports the design of an automatic method that is capable of detecting sub-surface defects. The method combines active thermography, edge detection from machine vision, and a smart reconstruction algorithm. An infrared thermo-camera was used to capture a series of temporal pictures after slightly heating up the inspected target with flash lamps. The Canny edge detector was then employed to automatically extract defect-related images from the captured pictures. The captured temporal pictures were preprocessed by a bank of Canny edge detectors, and a smart algorithm was then used to reconstruct the whole sequence of image signals. During these processes, noise and irrelevant backgrounds in the pictures were removed, and consequently the contrast of the edges of defective areas was highlighted. The designed automatic method was verified on real pipe specimens that contain sub-surface cracks. After applying this method, the edges of cracks can be revealed visually without manual adjustment of the thermo-camera settings. With the help of this automatic method, the tedious process of manually adjusting the colour contrast and pixel intensity in order to reveal defects can be avoided.
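The edge-extraction step on a thermal frame can be sketched as follows. This is a simplified, hedged Canny-style detector (Gaussian smoothing, gradient magnitude, double threshold with one hysteresis pass), not the paper's exact pipeline; the thresholds and the synthetic "defect" frame are illustrative:

```python
import numpy as np

def edge_map(frame, lo=0.1, hi=0.3):
    """Simplified Canny-style detector: smooth, take gradient magnitude,
    then keep strong edges plus weak edges touching a strong one."""
    k = np.array([1, 4, 6, 4, 1], float); k /= k.sum()   # 5-tap Gaussian
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, frame)
    sm = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, sm)
    gy, gx = np.gradient(sm)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    strong = mag >= hi
    weak = mag >= lo
    # One dilation pass: promote weak pixels adjacent to strong ones
    pad = np.pad(strong, 1)
    grow = np.zeros_like(strong)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            grow |= pad[1 + dy:1 + dy + strong.shape[0],
                        1 + dx:1 + dx + strong.shape[1]]
    return strong | (weak & grow)

# Synthetic thermal frame: warm background with a cooler rectangular "defect"
frame = np.full((64, 64), 30.0)
frame[20:40, 20:40] = 25.0
edges = edge_map(frame)
```

A production Canny detector adds non-maximum suppression along the gradient direction and iterated hysteresis, which are omitted here for brevity.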

  1. Online coupled camera pose estimation and dense reconstruction from video

    DOEpatents

    Medioni, Gerard; Kang, Zhuoliang

    2016-11-01

    A product may receive each image in a stream of video images of a scene and, before processing the next image, generate information indicative of the position and orientation of the image capture device that captured the image at the time of capture. The product may do so by identifying distinguishable image feature points in the image; determining a coordinate for each identified image feature point; and, for each identified image feature point, attempting to identify one or more distinguishable model feature points in a three-dimensional (3D) model of at least a portion of the scene that appear likely to correspond to the identified image feature point. Thereafter, the product may find each of the following that, in combination, produce a consistent projection transformation of the 3D model onto the image: a subset of the identified image feature points for which one or more corresponding model feature points were identified; and, for each image feature point that has multiple likely corresponding model feature points, one of the corresponding model feature points. The product may update a 3D model of at least a portion of the scene following the receipt of each video image and before processing the next video image, based on the generated information indicative of the position and orientation of the image capture device at the time of capturing the received image. The product may display the updated 3D model after each update.
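The "consistent projection transformation" idea can be illustrated with a Direct Linear Transform (DLT): given 2D-3D correspondences, estimate a 3x4 projection matrix and check reprojection residuals. This is a hedged textbook sketch, not the patent's method; the synthetic camera and points are assumptions:

```python
import numpy as np

def dlt_projection(model_pts, image_pts):
    """Estimate a 3x4 projection matrix P from >=6 3D-2D correspondences
    via the Direct Linear Transform, then report reprojection residuals."""
    A = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    P = Vt[-1].reshape(3, 4)                 # null vector = flattened P
    homog = np.hstack([model_pts, np.ones((len(model_pts), 1))])
    proj = (P @ homog.T).T
    proj = proj[:, :2] / proj[:, 2:3]
    return P, np.linalg.norm(proj - image_pts, axis=1)

# Synthetic check: project random model points with a known camera
rng = np.random.default_rng(0)
model = rng.uniform(-1, 1, (8, 3)) + [0, 0, 5]
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], float)
true_P = K @ np.hstack([np.eye(3), [[0.1], [0.0], [0.0]]])
img = (true_P @ np.hstack([model, np.ones((8, 1))]).T).T
img = img[:, :2] / img[:, 2:3]
P_est, residuals = dlt_projection(model, img)
```

In the patented scheme, candidate subsets of image/model point pairings would be scored by such residuals to pick the consistent combination.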

  2. An integrated approach to fast and informative morphological vouchering of nematodes for applications in molecular barcoding

    PubMed Central

    De Ley, Paul; De Ley, Irma Tandingan; Morris, Krystalynne; Abebe, Eyualem; Mundo-Ocampo, Manuel; Yoder, Melissa; Heras, Joseph; Waumann, Dora; Rocha-Olivares, Axayácatl; Jay Burr, A.H; Baldwin, James G; Thomas, W. Kelley

    2005-01-01

    Molecular surveys of meiofaunal diversity face some interesting methodological challenges when it comes to interstitial nematodes from soils and sediments. Morphology-based surveys are greatly limited in processing speed, while barcoding approaches for nematodes are hampered by difficulties of matching sequence data with traditional taxonomy. Intermediate technology is needed to bridge the gap between both approaches. An example of such technology is video capture and editing microscopy, which consists of the recording of taxonomically informative multifocal series of microscopy images as digital video clips. The integration of multifocal imaging with sequence analysis of the D2D3 region of large subunit (LSU) rDNA is illustrated here in the context of a combined morphological and barcode-sequencing survey of marine nematodes from Baja California and California. The resulting video clips and sequence data are made available online in the database NemATOL (http://nematol.unh.edu/). Analyses of 37 barcoded nematodes suggest that these represent at least 32 species, none of which matches available D2D3 sequences in public databases. The recorded multifocal vouchers allowed us to identify most specimens to genus, and will be used to match specimens with subsequent species identifications and descriptions of preserved specimens. Like molecular barcodes, multifocal voucher archives are part of a wider effort at structuring and changing the process of biodiversity discovery. We argue that data-rich surveys and phylogenetic tools for analysis of barcode sequences are an essential component of the exploration of phyla with a high fraction of undiscovered species. Our methods are also directly applicable to other meiofauna, such as gastrotrichs and tardigrades. PMID:16214752

  3. Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints

    NASA Astrophysics Data System (ADS)

    Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.

    2018-05-01

    Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.

  4. Implementing targeted region capture sequencing for the clinical detection of Alagille syndrome: An efficient and cost‑effective method.

    PubMed

    Huang, Tianhong; Yang, Guilin; Dang, Xiao; Ao, Feijian; Li, Jiankang; He, Yizhou; Tang, Qiyuan; He, Qing

    2017-11-01

    Alagille syndrome (AGS) is a highly variable, autosomal dominant disease that affects multiple structures including the liver, heart, eyes, bones and face. Targeted region capture sequencing focuses on a panel of known pathogenic genes and provides a rapid, cost‑effective and accurate method for molecular diagnosis. In a Chinese family, this method was used on the proband and Sanger sequencing was applied to validate the candidate mutation. A de novo heterozygous mutation (c.3254_3255insT p.Leu1085PhefsX24) of the jagged 1 gene was identified as the potential disease‑causing mutation. In conclusion, the present study suggested that targeted region capture sequencing is an efficient, reliable and accurate approach for the clinical diagnosis of AGS. Furthermore, these results expand the understanding of the pathogenesis of AGS.

  5. Image processing system design for microcantilever-based optical readout infrared arrays

    NASA Astrophysics Data System (ADS)

    Tong, Qiang; Dong, Liquan; Zhao, Yuejin; Gong, Cheng; Liu, Xiaohua; Yu, Xiaomei; Yang, Lei; Liu, Weiyu

    2012-12-01

    Compared with traditional infrared imaging technology, the new type of optical-readout uncooled infrared imaging technology based on MEMS has many advantages, such as low cost, small size, and simple fabrication. In addition, theory predicts that the technology has high thermal detection sensitivity, so it has very broad application prospects in the field of high-performance infrared detection. This paper focuses on an image capturing and processing system for this new type of optical-readout uncooled infrared imaging technology. The system consists of software and hardware. We built the core image-processing hardware platform on TI's high-performance DSP chip, the TMS320DM642, and designed the image capturing board around Micron's MT9P031, a high-frame-rate, low-power CMOS sensor. Finally, we used Intel's LXT971A network transceiver to design the network output board. The software system is built on the real-time operating system DSP/BIOS. We designed the video capture driver based on TI's class/mini-driver model and the network output program based on the NDK kit for image capturing, processing, and transmission. Experiments show that the system achieves high capture resolution and fast processing speed; the network transmission speed is up to 100 Mbps.

  6. Visual content highlighting via automatic extraction of embedded captions on MPEG compressed video

    NASA Astrophysics Data System (ADS)

    Yeo, Boon-Lock; Liu, Bede

    1996-03-01

    Embedded captions in TV programs such as news broadcasts, documentaries and coverage of sports events provide important information on the underlying events. In digital video libraries, such captions represent a highly condensed form of key information on the contents of the video. In this paper we propose a scheme to automatically detect the presence of captions embedded in video frames. The proposed method operates on reduced image sequences which are efficiently reconstructed from compressed MPEG video and thus does not require full frame decompression. The detection, extraction and analysis of embedded captions help to capture the highlights of visual contents in video documents for better organization of video, to present succinctly the important messages embedded in the images, and to facilitate browsing, searching and retrieval of relevant clips.

  7. Intelligent image capture of cartridge cases for firearms examiners

    NASA Astrophysics Data System (ADS)

    Jones, Brett C.; Guerci, Joseph R.

    1997-02-01

    The FBI's DRUGFIRETM system is a nationwide computerized networked image database of ballistic forensic evidence. This evidence includes images of cartridge cases and bullets obtained from both crime scenes and controlled test firings of seized weapons. Currently, the system is installed in over 80 forensic labs across the country and has enjoyed a high degree of success. In this paper, we discuss some of the issues and methods associated with providing a front-end semi-automated image capture system that simultaneously satisfies the often conflicting criteria of the many human examiners' visual perception versus the criteria associated with optimizing autonomous digital image correlation. Specifically, we detail the proposed processing chain of an intelligent image capture system (IICS), involving a real-time capture 'assistant' which assesses the quality of the image under test using a custom-designed neural network.

  8. Mapping brain activity in gradient-echo functional MRI using principal component analysis

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Singh, Manbir; Don, Manuel

    1997-05-01

    The detection of sites of brain activation in functional MRI has been a topic of immense research interest, and many techniques have been proposed to this end. Recently, principal component analysis (PCA) has been applied to extract activated regions and their time course of activation. This method is based on the assumption that the activation is orthogonal to other signal variations such as brain motion, physiological oscillations and other uncorrelated noise. A distinct advantage of this method is that it does not require any knowledge of the time course of the true stimulus paradigm. The technique is well suited to EPI image sequences, where the sampling rate is high enough to capture the effects of physiological oscillations. In this work, we propose and apply two PCA-based methods to conventional gradient-echo images and investigate their usefulness as tools to extract reliable information on brain activation. The first method is a conventional technique in which a single image sequence with alternating on and off stages is subjected to a principal component analysis. The second is a PCA-based approach called the common spatial factor analysis (CSF) technique. As the name suggests, this method relies on common spatial factors between the above fMRI image sequence and a background fMRI. We applied these methods to identify active brain areas during visual stimulation and motor tasks. The results were compared with those obtained using the standard cross-correlation technique. We found good agreement in the areas identified as active across all three techniques. The results suggest that the PCA and CSF methods have good potential for detecting true stimulus-correlated changes in the presence of other interfering signals.
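The single-sequence PCA step can be sketched with an SVD of the centred time-by-voxel matrix. This is a hedged toy illustration with a synthetic on/off paradigm and noise levels chosen for the example, not the authors' data or exact pipeline:

```python
import numpy as np

# Toy fMRI-like sequence: 40 frames of 16x16 "images" containing an
# on/off block-activation pattern plus Gaussian noise.
rng = np.random.default_rng(1)
frames, h, w = 40, 16, 16
paradigm = np.tile([0.0] * 5 + [1.0] * 5, 4)           # alternating blocks
spatial = np.zeros((h, w)); spatial[6:10, 6:10] = 1.0  # "active" region
data = paradigm[:, None] * spatial.ravel()[None, :]
data += 0.05 * rng.standard_normal((frames, h * w))

X = data - data.mean(axis=0)                 # centre each voxel's time series
U, S, Vt = np.linalg.svd(X, full_matrices=False)
time_course = U[:, 0] * S[0]                 # first PC's time course
activation_map = Vt[0].reshape(h, w)         # its spatial loading map

# The leading component should correlate strongly with the paradigm,
# without the paradigm ever being given to the decomposition.
r = np.corrcoef(time_course, paradigm)[0, 1]
```

Note the sign of a principal component is arbitrary, so only |r| is meaningful.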

  9. Laboratory Study of Water Surface Roughness Generation by Wave-Current Interaction

    NASA Technical Reports Server (NTRS)

    Klinke, Jochen

    2000-01-01

    Within the framework of this project, the blocking of waves by inhomogeneous currents was studied. A laboratory experiment was conducted in collaboration with Steven R. Long at the linear wave tank of the NASA Air-Sea Interaction Facility, Wallops Island, VA during May 1999. Mechanically-generated waves were blocked approximately 3 m upstream from the wave paddle by an opposing current. A false bottom was used to obtain a spatially varying flow field in the measurement section of the wave tank. We used an imaging slope gauge, which was mounted directly underneath the sloping section of the false tank bottom, to observe the wave field. For a given current speed, the amplitude and the frequency of the waves were adjusted so that the blocking occurred within the observed footprint. Image sequences of up to 600 images at up to 100 Hz sampling rate were recorded for an area of approximately 25 cm x 25 cm. Unlike previous measurements with wave wire gauges, the captured image sequences show the generation of the capillary waves at the blocking point and give detailed insight into the spatial and temporal evolution of the blocking process. The image data were used to study the wave-current interaction for currents from 5 to 25 cm/s and waves with frequencies between 1 and 3 Hz. First the images were calibrated with regard to size and slope. Then standard Fourier techniques as well as the empirical mode decomposition method developed by Dr. Norden Huang and Dr. Steven R. Long were employed to quantify the wave number downshift from the gravity to the capillary regime.

  10. Scalable Coding of Plenoptic Images by Using a Sparse Set and Disparities.

    PubMed

    Li, Yun; Sjostrom, Marten; Olsson, Roger; Jennehag, Ulf

    2016-01-01

    Focused plenoptic capturing is one of the light-field capturing techniques. By placing a microlens array in front of the photosensor, focused plenoptic cameras capture both spatial and angular information of a scene in each microlens image and across microlens images. The capturing results in a significant amount of redundant information, and the captured image is usually of a large resolution. A coding scheme that removes the redundancy before coding can be of advantage for efficient compression, transmission, and rendering. In this paper, we propose a lossy coding scheme to efficiently represent plenoptic images. The format contains a sparse image set and its associated disparities. The reconstruction is performed by disparity-based interpolation and inpainting, and the reconstructed image is later employed as a prediction reference for the coding of the full plenoptic image. As an outcome of the representation, the proposed scheme inherits a scalable structure with three layers. The results show that plenoptic images are compressed efficiently, with over 60 percent bit rate reduction compared with High Efficiency Video Coding (HEVC) intra coding, and over 20 percent compared with an HEVC block copying mode.

  11. Diversity among Tacaribe serocomplex viruses (family Arenaviridae) naturally associated with the Mexican woodrat (Neotoma mexicana)

    PubMed Central

    Cajimat, Maria N. B.; Milazzo, Mary Louise; Borchert, Jeff N.; Abbott, Ken D.; Bradley, Robert D.; Fulhorst, Charles F.

    2008-01-01

    The results of analyses of glycoprotein precursor and nucleocapsid protein gene sequences indicated that an arenavirus isolated from a Mexican woodrat (Neotoma mexicana) captured in Arizona is a strain of a novel species (proposed name Skinner Tank virus) and that arenaviruses isolated from Mexican woodrats captured in Colorado, New Mexico, and Utah are strains of Whitewater Arroyo virus or species phylogenetically closely related to Whitewater Arroyo virus. Pairwise comparisons of glycoprotein precursor sequences and nucleocapsid protein sequences revealed a high level of divergence among the viruses isolated from the Mexican woodrats captured in Colorado, New Mexico, and Utah and the Whitewater Arroyo virus prototype strain AV 9310135, which originally was isolated from a white-throated woodrat (Neotoma albigula) captured in New Mexico. Conceptually, the viruses from Colorado, New Mexico, and Utah and strain AV 9310135 could be grouped together in a species complex in the family Arenaviridae, genus Arenavirus. PMID:18304671

  12. Automated Adaptive Brightness in Wireless Capsule Endoscopy Using Image Segmentation and Sigmoid Function.

    PubMed

    Shrestha, Ravi; Mohammed, Shahed K; Hasan, Md Mehedi; Zhang, Xuechao; Wahid, Khan A

    2016-08-01

    Wireless capsule endoscopy (WCE) plays an important role in the diagnosis of gastrointestinal (GI) diseases by capturing images of the human small intestine. Accurate diagnosis of endoscopic images depends heavily on the quality of the captured images. Along with image resolution and frame rate, the brightness of the image is an important parameter that influences image quality, which leads to the design of an efficient illumination system. Such a design involves the choice and placement of a proper light source and its ability to illuminate the GI surface with proper brightness. Light-emitting diodes (LEDs) are normally used as sources, where modulated pulses control the LEDs' brightness. In practice, instances of under- and over-illumination are very common in WCE: the former produces dark images and the latter produces bright images with high power consumption. In this paper, we propose a low-power and efficient illumination system that is based on an automated brightness algorithm. The scheme is adaptive in nature, i.e., the brightness level is controlled automatically in real time while the images are being captured. The captured images are segmented into four equal regions and the brightness level of each region is calculated. An adaptive sigmoid function is then used to find the optimized brightness level, and accordingly a new duty cycle of the modulated pulse is generated to capture future images. The algorithm is fully implemented in a capsule prototype and tested with endoscopic images. Commercial capsules such as Pillcam and Mirocam were also used in the experiment. The results show that the proposed algorithm controls the brightness level according to the environmental conditions and, as a result, captures good-quality images at an average 40% brightness level, which saves capsule power.
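The region-averaging and sigmoid-to-duty-cycle mapping can be sketched as below. The four-quadrant segmentation follows the abstract; the particular sigmoid parameters (`target`, `gain`) and the direct brightness-error formulation are illustrative assumptions, not the paper's calibrated values:

```python
import numpy as np

def next_duty_cycle(image, target=0.5, gain=8.0):
    """Split the frame into four quadrants, average their brightness,
    and map the deviation from the target level through a sigmoid to
    a new LED duty cycle in [0, 1]."""
    h, w = image.shape
    quads = [image[:h//2, :w//2], image[:h//2, w//2:],
             image[h//2:, :w//2], image[h//2:, w//2:]]
    level = np.mean([q.mean() for q in quads]) / 255.0
    err = target - level          # > 0: frame too dark, raise duty cycle
    return 1.0 / (1.0 + np.exp(-gain * err))

dark = np.full((64, 64), 40, dtype=np.uint8)      # under-illuminated frame
bright = np.full((64, 64), 220, dtype=np.uint8)   # over-illuminated frame
d_dark, d_bright = next_duty_cycle(dark), next_duty_cycle(bright)
```

The sigmoid saturates smoothly, so the duty cycle never rails to 0 or 1 and small brightness errors produce proportionally small corrections.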

  13. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation generates an under-exposed image with a low-budget complementary metal-oxide-semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, exposed for a long and a short time respectively, to construct an improved look-up table (ILUT), and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, the method does not require frame memory to buffer image data and can therefore greatly reduce the cost of the CIS. The method supports not only single image capture but also bracketing, capturing three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5-megapixel CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
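The histogram-matching LUT construction can be sketched in NumPy for a single 8-bit channel. This is a hedged software model of the idea (the paper's ILUT is built in hardware from two previews); the synthetic scene and the 4x under-exposure factor are assumptions:

```python
import numpy as np

def build_lut(short_exp, long_exp):
    """Build a 256-entry LUT mapping short-exposure grey levels to the
    corresponding levels of the well-exposed long-exposure preview by
    matching the two cumulative histograms."""
    cdf_s = np.cumsum(np.bincount(short_exp.ravel(), minlength=256))
    cdf_l = np.cumsum(np.bincount(long_exp.ravel(), minlength=256))
    cdf_s = cdf_s / cdf_s[-1]
    cdf_l = cdf_l / cdf_l[-1]
    # For each short-exposure level, pick the long-exposure level with
    # the nearest (not smaller) cumulative frequency
    return np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)

rng = np.random.default_rng(2)
scene = rng.integers(0, 200, (120, 160))
long_img = scene.astype(np.uint8)               # well-exposed preview
short_img = (scene // 4).astype(np.uint8)       # under-exposed preview
lut = build_lut(short_img, long_img)
corrected = lut[short_img]                      # per-pixel table lookup
```

Because correction is a pure table lookup, each captured pixel can be corrected as it streams off the sensor, which is why no frame buffer is needed.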

  14. Groupwise registration of cardiac perfusion MRI sequences using normalized mutual information in high dimension

    NASA Astrophysics Data System (ADS)

    Hamrouni, Sameh; Rougon, Nicolas; Prêteux, Françoise

    2011-03-01

    In perfusion MRI (p-MRI) exams, short-axis (SA) image sequences are captured at multiple slice levels along the long-axis of the heart during the transit of a vascular contrast agent (Gd-DTPA) through the cardiac chambers and muscle. Compensating cardio-thoracic motions is a requirement for enabling computer-aided quantitative assessment of myocardial ischaemia from contrast-enhanced p-MRI sequences. The classical paradigm consists of registering each sequence frame on a reference image using some intensity-based matching criterion. In this paper, we introduce a novel unsupervised method for the spatio-temporal groupwise registration of cardiac p-MRI exams based on normalized mutual information (NMI) between high-dimensional feature distributions. Here, local contrast enhancement curves are used as a dense set of spatio-temporal features, and statistically matched through variational optimization to a target feature distribution derived from a registered reference template. The hard issue of probability density estimation in high-dimensional state spaces is bypassed by using consistent geometric entropy estimators, allowing NMI to be computed directly from feature samples. Specifically, a computationally efficient kth-nearest neighbor (kNN) estimation framework is retained, leading to closed-form expressions for the gradient flow of NMI over finite- and infinite-dimensional motion spaces. This approach is applied to the groupwise alignment of cardiac p-MRI exams using a free-form deformation (FFD) model for cardio-thoracic motions. Experiments on simulated and natural datasets suggest its accuracy and robustness for registering p-MRI exams comprising more than 30 frames.
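The geometric, sample-based entropy estimation mentioned above can be illustrated with the Kozachenko-Leonenko kNN estimator, which estimates differential entropy directly from nearest-neighbour distances with no density histogram. This is a hedged generic sketch (brute-force distances, 1-D Gaussian sanity check), not the paper's NMI machinery:

```python
import numpy as np
from math import log, pi, lgamma

def digamma(x):
    """Digamma via recurrence plus an asymptotic series (x > 0)."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    return r + log(x) - 1/(2*x) - 1/(12*x**2) + 1/(120*x**4)

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-th nearest-neighbour estimate of
    differential entropy (in nats), computed directly from samples."""
    n, d = samples.shape
    dists = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    radii = np.sort(dists, axis=1)[:, k - 1]   # distance to k-th neighbour
    log_vd = (d / 2) * log(pi) - lgamma(d / 2 + 1)  # log unit-ball volume
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(radii))

# Sanity check: 1-D standard Gaussian, true entropy 0.5*log(2*pi*e) ~ 1.419
rng = np.random.default_rng(5)
est = knn_entropy(rng.standard_normal((1500, 1)))
```

In practice a k-d tree replaces the O(n^2) distance matrix, and NMI is assembled from marginal and joint entropy estimates of this kind.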

  15. Barley whole exome capture: a tool for genomic research in the genus Hordeum and beyond

    PubMed Central

    Mascher, Martin; Richmond, Todd A; Gerhardt, Daniel J; Himmelbach, Axel; Clissold, Leah; Sampath, Dharanya; Ayling, Sarah; Steuernagel, Burkhard; Pfeifer, Matthias; D'Ascenzo, Mark; Akhunov, Eduard D; Hedley, Pete E; Gonzales, Ana M; Morrell, Peter L; Kilian, Benjamin; Blattner, Frank R; Scholz, Uwe; Mayer, Klaus FX; Flavell, Andrew J; Muehlbauer, Gary J; Waugh, Robbie; Jeddeloh, Jeffrey A; Stein, Nils

    2013-01-01

    Advanced resources for genome-assisted research in barley (Hordeum vulgare), including a whole-genome shotgun assembly and an integrated physical map, have recently become available. These have made possible studies that aim to assess genetic diversity or to isolate single genes by whole-genome resequencing and in silico variant detection. However, such an approach remains expensive given the 5 Gb size of the barley genome. Targeted sequencing of the mRNA-coding exome reduces barley genomic complexity more than 50-fold, thus dramatically reducing this heavy sequencing and analysis load. We have developed and employed an in-solution hybridization-based sequence capture platform to selectively enrich for a 61.6 megabase coding sequence target that includes predicted genes from the genome assembly of the cultivar Morex as well as publicly available full-length cDNAs and de novo assembled RNA-Seq consensus sequence contigs. The platform provides a highly specific capture with substantial and reproducible enrichment of targeted exons, both for cultivated barley and related species. We show that this exome capture platform provides a clear path towards a broader and deeper understanding of the natural variation residing in the mRNA-coding part of the barley genome and will thus constitute a valuable resource for applications such as mapping-by-sequencing and genetic diversity analyses. PMID:23889683

  16. A fast and automatic fusion algorithm for unregistered multi-exposure image sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Yu, Feihong

    2014-09-01

    The human visual system (HVS) can visualize all the brightness levels of a scene through visual adaptation. However, the dynamic range of most commercial digital cameras and display devices is smaller than the dynamic range of the human eye. This implies that low dynamic range (LDR) images captured by a normal digital camera may lose image details. We propose an efficient approach to high dynamic range (HDR) image fusion that copes with image displacement and image blur degradation in a computationally efficient manner, which is suitable for implementation on mobile devices. The various image registration algorithms proposed in the previous literature are unable to meet the efficiency and performance requirements of mobile-device applications. In this paper, we selected the Oriented FAST and Rotated BRIEF (ORB) detector to extract local image structures. A descriptor used in a multi-exposure image fusion algorithm has to be fast and robust to illumination variations and geometric deformations, and the ORB descriptor is the best candidate in our algorithm. Further, we perform an improved RANdom SAmple Consensus (RANSAC) algorithm to reject incorrect matches. For the fusion of images, a new approach based on the Stationary Wavelet Transform (SWT) is used. The experimental results demonstrate that the proposed algorithm generates high-quality images at low computational cost. Comparisons with a number of other feature matching methods show that our method achieves better performance.
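The outlier-rejection step can be sketched with a minimal RANSAC loop over putative keypoint matches. For brevity this hedged example estimates only a 2-D translation (one match per hypothesis) rather than the full transform and improved RANSAC variant of the paper; the match counts and noise levels are illustrative:

```python
import numpy as np

def ransac_translation(src, dst, iters=200, tol=2.0, seed=0):
    """Estimate a 2-D translation between putative keypoint matches,
    RANSAC-style: sample one match, count inliers, keep the hypothesis
    with the largest consensus set, then refine on the inliers."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                     # hypothesis from one match
        inliers = np.linalg.norm(dst - (src + t), axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers

rng = np.random.default_rng(4)
src = rng.uniform(0, 500, (60, 2))
dst = src + [12.0, -7.0] + rng.normal(0, 0.5, (60, 2))
dst[:15] = rng.uniform(0, 500, (15, 2))         # 25% incorrect matches
t, inliers = ransac_translation(src, dst)
```

With ORB matches the hypothesis would instead be an affine or homography model fitted from a minimal sample, but the consensus-counting logic is identical.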

  17. Early melanoma diagnosis with mobile imaging.

    PubMed

    Do, Thanh-Toan; Zhou, Yiren; Zheng, Haitian; Cheung, Ngai-Man; Koh, Dawn

    2014-01-01

    We investigate a mobile imaging system for early diagnosis of melanoma. Different from previous work, we focus on smartphone-captured images and propose a detection system that runs entirely on the smartphone. Smartphone-captured images taken under loosely controlled conditions introduce new challenges for melanoma detection, while processing performed on the smartphone is subject to computation and memory constraints. To address these challenges, we propose to localize the skin lesion by combining fast skin detection and fusion of two fast segmentation results. We propose new features to capture color variation and border irregularity, which are useful for smartphone-captured images. We also propose a new feature selection criterion to select a small set of good features for the final lightweight system. Our evaluation confirms the effectiveness of the proposed algorithms and features. In addition, we present our system prototype, which computes the selected visual features from a user-captured skin lesion image and analyzes them to estimate the likelihood of malignancy, all on an off-the-shelf smartphone.
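A border-irregularity feature of the kind mentioned above can be illustrated with a classic compactness measure, perimeter squared over 4*pi*area, computed on a binary lesion mask. This is a hedged generic example, not the paper's proposed feature; the synthetic masks are illustrative:

```python
import numpy as np

def border_irregularity(mask):
    """Compactness-style border feature: perimeter^2 / (4*pi*area).
    Near its minimum for a smooth disc (about 1.6 for a rasterized circle
    with this 4-neighbour edge-count perimeter); larger for ragged borders."""
    area = mask.sum()
    pad = np.pad(mask, 1)
    # Perimeter: count pixel edges exposed to background (4-connectivity)
    perim = sum(np.logical_and(pad[1:-1, 1:-1],
                               ~np.roll(pad, s, axis=a)[1:-1, 1:-1]).sum()
                for s, a in [(1, 0), (-1, 0), (1, 1), (-1, 1)])
    return perim**2 / (4 * np.pi * area)

yy, xx = np.mgrid[:101, :101]
disc = (xx - 50)**2 + (yy - 50)**2 <= 40**2     # smooth circular "lesion"
ragged = disc & ((xx + yy) % 3 != 0)            # same region, ragged border
irr_smooth = border_irregularity(disc)
irr_ragged = border_irregularity(ragged)
```

A lightweight on-device classifier can use such scalar shape features directly, since they require only a single pass over the segmentation mask.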

  18. The GENCODE exome: sequencing the complete human exome

    PubMed Central

    Coffey, Alison J; Kokocinski, Felix; Calafato, Maria S; Scott, Carol E; Palta, Priit; Drury, Eleanor; Joyce, Christopher J; LeProust, Emily M; Harrow, Jen; Hunt, Sarah; Lehesjoki, Anna-Elina; Turner, Daniel J; Hubbard, Tim J; Palotie, Aarno

    2011-01-01

    Sequencing the coding regions, the exome, of the human genome is one of the major current strategies to identify low frequency and rare variants associated with human disease traits. So far, the most widely used commercial exome capture reagents have mainly targeted the consensus coding sequence (CCDS) database. We report the design of an extended set of targets for capturing the complete human exome, based on annotation from the GENCODE consortium. The extended set covers an additional 5594 genes and 10.3 Mb compared with the current CCDS-based sets. The additional regions include potential disease genes previously inaccessible to exome resequencing studies, such as 43 genes linked to ion channel activity and 70 genes linked to protein kinase activity. In total, the new GENCODE exome set developed here covers 47.9 Mb and performed well in sequence capture experiments. In the sample set used in this study, we identified over 5000 SNP variants more in the GENCODE exome target (24%) than in the CCDS-based exome sequencing. PMID:21364695

  19. High-speed plasma imaging: A lightning bolt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G.A.; Whiteson, D.O.

    Using a gated intensified digital Kodak Ektapro camera system, the authors captured a lightning bolt at 1,000 frames per second, with a 100-µs exposure time on each consecutive frame. As a thunderstorm approached while darkness descended (7:50 pm) on July 21, 1994, they photographed lightning bolts with an f/22 105-mm lens and 100% gain on the intensified camera. This 15-frame sequence shows a cloud-to-ground stroke at a distance of about 1.5 km, with a series of stepped leaders propagating downwards, followed by the upward-propagating main return stroke.

  20. Investigation of sparsity metrics for autofocusing in digital holographic microscopy

    NASA Astrophysics Data System (ADS)

    Fan, Xin; Healy, John J.; Hennelly, Bryan M.

    2017-05-01

    Digital holographic microscopy (DHM) is an optoelectronic technique that is made up of two parts: (i) the recording of the interference pattern of the diffraction pattern of an object and a known reference wavefield using a digital camera and (ii) the numerical reconstruction of the complex object wavefield using the recorded interferogram and a distance parameter as input. The latter is based on the simulation of optical propagation from the camera plane to a plane at any arbitrary distance from the camera. A key advantage of DHM over conventional microscopy is that both the phase and intensity information of the object can be recovered at any distance, using only one capture, and this facilitates the recording of scenes that may change dynamically and that may otherwise go in and out of focus. Autofocusing using traditional microscopy requires mechanical movement of the translation stage or the microscope objective, and multiple image captures that are then compared using some metric. Autofocusing in DHM is similar, except that the sequence of intensity images, to which the metric is applied, is generated numerically from a single capture. We recently investigated the application of a number of sparsity metrics for DHM autofocusing and in this paper we extend this work to include more such metrics, and apply them over a greater range of biological diatom cells and magnification/numerical apertures. We demonstrate for the first time that these metrics may be grouped together according to matching behavior following high pass filtering.
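The numerical autofocusing loop described above can be sketched as follows. This is an illustrative numpy implementation assuming a band-limited angular-spectrum propagator and the Tamura coefficient as one representative sparsity-style focus metric; the paper evaluates a wider family of metrics, and the grid size, pixel pitch, and wavelength below are arbitrary example values.

```python
import numpy as np

def angular_spectrum(field, z, wavelength, dx):
    """Propagate a complex field by distance z (band-limited angular spectrum)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * z * kz) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def tamura(intensity):
    """Tamura coefficient sqrt(std/mean): one simple sparsity-style focus metric."""
    return np.sqrt(intensity.std() / intensity.mean())

def autofocus(hologram_field, zs, wavelength, dx):
    """Score each candidate reconstruction distance; an extremum indicates focus."""
    return np.array([tamura(np.abs(angular_spectrum(hologram_field, z, wavelength, dx))**2)
                     for z in zs])
```

In use, one would sweep `zs` over the expected depth range and locate the extremum of the returned scores; which extremum (maximum or minimum) marks focus depends on the metric and the object type, which is part of what the paper investigates.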

  1. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    PubMed

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are obtained these days and it is becoming unrealistic to manually process massive numbers of images, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable; thus, many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as BMC is considered to be related to the mechanism of bone remodeling, osteoporosis, and so on. To reduce the manual inputs needed to segment BMC, we classified the texture pattern using wavelet transformation and a support vector machine. We also integrated the result of texture pattern classification into the graph-cuts-based image segmentation method, because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph-cuts and texture pattern classification performs well without manual inputs by a user.
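A minimal sketch of the wavelet texture-feature step is shown below, assuming a hand-rolled single-level Haar transform repeated over a few scales (a stand-in for the nine mother wavelets and scale sets evaluated in the paper). The resulting subband-energy vector is the kind of feature that would be fed to the support vector machine.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform: approximation + 3 detail subbands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0          # approximation
    lh = (a - b + c - d) / 4.0          # horizontal detail
    hl = (a + b - c - d) / 4.0          # vertical detail
    hh = (a - b - c + d) / 4.0          # diagonal detail
    return ll, lh, hl, hh

def texture_features(patch, levels=2):
    """Detail-subband energies over several decomposition levels,
    usable as a texture feature vector for an SVM classifier."""
    feats = []
    cur = patch.astype(float)
    for _ in range(levels):
        cur, lh, hl, hh = haar2d(cur)
        feats += [np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)]
    return np.array(feats)
```

A flat patch yields zero detail energy, while oriented textures light up the corresponding subbands, which is what lets the classifier separate texture patterns.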

  2. DNA capture elements for rapid detection and identification of biological agents

    NASA Astrophysics Data System (ADS)

    Kiel, Johnathan L.; Parker, Jill E.; Holwitt, Eric A.; Vivekananda, Jeeva

    2004-08-01

    DNA capture elements (DCEs; aptamers) are artificial DNA sequences, selected from a random pool of sequences for their specific binding to potential biological warfare agents. These sequences were selected by an affinity method using filters to which the target agent was attached, and the bound DNA was isolated and amplified by polymerase chain reaction (PCR) in an iterative, increasingly stringent process. Reporter molecules were attached to the finished sequences. To date, we have made DCEs to Bacillus anthracis spores, Shiga toxin, Venezuelan Equine Encephalitis (VEE) virus, and Francisella tularensis. These DCEs have demonstrated specificity and sensitivity equal to or better than antibodies.

  3. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction

    NASA Astrophysics Data System (ADS)

    Iwaszczuk, Dorota; Stilla, Uwe

    2017-10-01

    Thermal infrared (TIR) images, which are widely used in thermal inspections of buildings, picture damaged and weak spots in the insulation of the building hull. Such inspection of large-scale areas can be carried out by combining TIR imagery with 3D building models. This combination can be achieved via texture mapping. Automating the texture mapping avoids time-consuming imaging and manual analysis of each face independently. It also provides a spatial reference for façade structures extracted from the thermal textures. In order to capture all faces, including roofs, façades, and façades in inner courtyards, an oblique-looking camera mounted on a flying platform is used. Direct geo-referencing is usually not sufficient for precise texture extraction. In addition, the 3D building models themselves have uncertain geometry. In this paper, therefore, a methodology for co-registration of uncertain 3D building models with airborne oblique-view images is presented. For this purpose, a line-based model-to-image matching is developed in which the uncertainties of the 3D building model, as well as of the image features, are considered. Matched linear features are used for the refinement of the exterior orientation parameters of the camera in order to ensure optimal co-registration. Moreover, this study investigates whether line tracking through the image sequence supports the matching. The accuracy of the extraction and the quality of the textures are assessed. For this purpose, appropriate quality measures are developed. The tests showed good co-registration results, particularly in cases where tracking between neighboring frames had been applied.

  4. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

    In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle iris images are not captured at the same time. The comparison of frontal and off-angle iris images shows not only differences in the gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of the gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. Therefore, we developed an iris image acquisition platform using two cameras, where one camera captures the frontal iris image and the other captures the iris image from off-angle. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera and one-camera setups, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup, by 0.001 to 0.05. These results show that, in order to obtain accurate results in off-angle iris recognition research and to distinguish the challenging issues from each other, a two-camera setup is necessary.
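The fractional Hamming-distance comparison underlying these measurements can be illustrated as follows. The masking convention (counting only bits valid in both codes) is a common iris-matching practice assumed here, since the abstract does not specify the matcher used.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a=None, mask_b=None):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid (unmasked) in both codes."""
    valid = np.ones_like(code_a, dtype=bool)
    if mask_a is not None:
        valid &= mask_a
    if mask_b is not None:
        valid &= mask_b
    return np.count_nonzero(code_a[valid] != code_b[valid]) / np.count_nonzero(valid)
```

Lower distances between frontal and off-angle codes of the same eye, as reported for the two-camera setup, indicate that the remaining mismatch is attributable to gaze angle rather than to dilation or accommodation changes.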

  5. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture.

    PubMed

    Trivedi, Chintan A; Bollmann, Johann H

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim-triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.

  6. Light ray field capture using focal plane sweeping and its optical reconstruction using 3D displays.

    PubMed

    Park, Jae-Hyeung; Lee, Sung-Keun; Jo, Na-Young; Kim, Hee-Jae; Kim, Yong-Soo; Lim, Hong-Gi

    2014-10-20

    We propose a method to capture the light ray field of a three-dimensional scene using focal plane sweeping. Multiple images are captured with a conventional camera at different focal distances spanning the three-dimensional scene. The captured images are then back-projected into four-dimensional spatio-angular space to obtain the light ray field. The obtained light ray field can be visualized either by digital processing or by optical reconstruction using various three-dimensional display techniques, including integral imaging, layered displays, and holography.

  7. Visualizing Ebolavirus Particles Using Single-Particle Interferometric Reflectance Imaging Sensor (SP-IRIS).

    PubMed

    Carter, Erik P; Seymour, Elif Ç; Scherr, Steven M; Daaboul, George G; Freedman, David S; Selim Ünlü, M; Connor, John H

    2017-01-01

    This chapter describes an approach for the label-free imaging and quantification of intact Ebola virus (EBOV) and EBOV viruslike particles (VLPs) using a light microscopy technique. In this technique, individual virus particles are captured onto a silicon chip that has been printed with spots of virus-specific capture antibodies. These captured virions are then detected using an optical approach called interferometric reflectance imaging. This approach allows for the detection of each virus particle that is captured on an antibody spot and can resolve the filamentous structure of EBOV VLPs without the need for electron microscopy. Capture of VLPs and virions can be done from a variety of sample types ranging from tissue culture medium to blood. The technique also allows automated quantitative analysis of the number of virions captured. This can be used to identify the virus concentration in an unknown sample. In addition, this technique offers the opportunity to easily image virions captured from native solutions without the need for additional labeling approaches while offering a means of assessing the range of particle sizes and morphologies in a quantitative manner.

  8. The Microphenotron: a robotic miniaturized plant phenotyping platform with diverse applications in chemical biology.

    PubMed

    Burrell, Thomas; Fozard, Susan; Holroyd, Geoff H; French, Andrew P; Pound, Michael P; Bigley, Christopher J; James Taylor, C; Forde, Brian G

    2017-01-01

    Chemical genetics provides a powerful alternative to conventional genetics for understanding gene function. However, its application to plants has been limited by the lack of a technology that allows detailed phenotyping of whole-seedling development in the context of a high-throughput chemical screen. We have therefore sought to develop an automated micro-phenotyping platform that would allow both root and shoot development to be monitored under conditions where the phenotypic effects of large numbers of small molecules can be assessed. The 'Microphenotron' platform uses 96-well microtitre plates to deliver chemical treatments to seedlings of Arabidopsis thaliana L. and is based around four components: (a) the 'Phytostrip', a novel seedling growth device that enables chemical treatments to be combined with the automated capture of images of developing roots and shoots; (b) an illuminated robotic platform that uses a commercially available robotic manipulator to capture images of developing shoots and roots; (c) software to control the sequence of robotic movements and integrate these with the image capture process; (d) purpose-made image analysis software for automated extraction of quantitative phenotypic data. Imaging of each plate (representing 80 separate assays) takes 4 min and can easily be performed daily for time-course studies. As currently configured, the Microphenotron has a capacity of 54 microtitre plates in a growth room footprint of 2.1 m², giving a potential throughput of up to 4320 chemical treatments in a typical 10-day experiment. The Microphenotron has been validated by using it to screen a collection of 800 natural compounds for qualitative effects on root development and to perform a quantitative analysis of the effects of a range of concentrations of nitrate and ammonium on seedling development.
The Microphenotron is an automated screening platform that for the first time is able to combine large numbers of individual chemical treatments with a detailed analysis of whole-seedling development, and particularly root system development. The Microphenotron should provide a powerful new tool for chemical genetics and for wider chemical biology applications, including the development of natural and synthetic chemical products for improved agricultural sustainability.

  9. High frame rate imaging systems developed in Northwest Institute of Nuclear Technology

    NASA Astrophysics Data System (ADS)

    Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli

    2007-01-01

    This paper presents high frame rate imaging systems developed at the Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type utilizes the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a flash-event capture mode; a specific timing sequence is designed to satisfy this requirement. The camera image data can be transmitted to a remote area over coaxial or optical fiber cable and then stored. The second type utilizes the PHOTOBIT Complementary Metal Oxide Semiconductor (CMOS) sensor PB-MV13 as the image sensor, which has a high resolution of 1280 (H) × 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps in full frame and 4000 fps with partial readout. The prototype scheme of the system is presented. The third type adopts charge-coupled devices (CCDs) as the imagers; MINTRON MTV-1881EX, DALSA CA-D1, and CA-D6 camera heads are used in the systems' development. A comparison of the features of the RA100A-, PB-MV13-, and CA-D6-based systems is given at the end.

  10. Single molecule targeted sequencing for cancer gene mutation detection.

    PubMed

    Gao, Yan; Deng, Liwei; Yan, Qin; Gao, Yongqian; Wu, Zengding; Cai, Jinsen; Ji, Daorui; Li, Gailing; Wu, Ping; Jin, Huan; Zhao, Luyang; Liu, Song; Ge, Liangjin; Deem, Michael W; He, Jiankui

    2016-05-19

    With the rapid decline in the cost of sequencing, it is now affordable to examine multiple genes in a single disease-targeted clinical test using next generation sequencing. Current targeted sequencing methods require a separate targeted capture enrichment step during sample preparation before sequencing. Although fast sample preparation methods are available on the market, the library preparation process is still relatively complicated for physicians to use routinely. Here, we introduce an amplification-free Single Molecule Targeted Sequencing (SMTS) technology, which combines targeted capture and sequencing in one step. We demonstrated that this technology can detect low-frequency mutations using artificially synthesized DNA samples. SMTS has several potential advantages, including simple sample preparation in which no biases or errors are introduced by PCR amplification. SMTS has the potential to be an easy and quick sequencing technology for clinical diagnoses such as cancer gene mutation detection, infectious disease detection, inherited condition screening, and noninvasive prenatal diagnosis.

  11. NASA SOFIA Captures Images of the Planetary Nebula M2-9

    NASA Image and Video Library

    2012-03-29

    Researchers using NASA's Stratospheric Observatory for Infrared Astronomy (SOFIA) have captured infrared images of the last exhalations of a dying sun-like star. This image is of the planetary nebula M2-9.

  12. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved in NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, pipeline and high-bandwidth-bus techniques are applied in the design to improve the storage rate. The control logic in the FPGA reads the image data out of the flash and outputs them separately over three different interfaces, Camera Link, LVDS, and PAL, which can provide image data for debugging photoelectric image acquisition equipment and validating algorithms. However, because the standard PAL image resolution is 720×576 and thus differs from the input image resolution, the image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three image formats can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.
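The resolution conversion for the PAL channel could, in its simplest form, be a nearest-neighbour resample to the 720×576 active raster. This numpy sketch is illustrative only; the abstract does not describe how the FPGA implements the conversion.

```python
import numpy as np

def resize_nearest(img, out_h=576, out_w=720):
    """Nearest-neighbour resample of a 2-D image to the PAL active resolution.
    Each output pixel takes the value of the proportionally-placed input pixel."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h   # source row for each output row
    cols = np.arange(out_w) * in_w // out_w   # source column for each output column
    return img[rows[:, None], cols]
```

In hardware this maps naturally onto row/column index lookup tables, which is one reason nearest-neighbour conversion is a common choice in FPGA video pipelines.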

  13. Multimodal RNA-seq using single-strand, double-strand, and CircLigase-based capture yields a refined and extended description of the C. elegans transcriptome.

    PubMed

    Lamm, Ayelet T; Stadler, Michael R; Zhang, Huibin; Gent, Jonathan I; Fire, Andrew Z

    2011-02-01

    We have used a combination of three high-throughput RNA capture and sequencing methods to refine and augment the transcriptome map of a well-studied genetic model, Caenorhabditis elegans. The three methods include a standard (non-directional) library preparation protocol relying on cDNA priming and foldback that has been used in several previous studies for transcriptome characterization in this species, and two directional protocols, one involving direct capture of single-stranded RNA fragments and one involving circular-template PCR (CircLigase). We find that each RNA-seq approach shows specific limitations and biases, with the application of multiple methods providing a more complete map than was obtained from any single method. Of particular note in the analysis were substantial advantages of CircLigase-based and ssRNA-based capture for defining sequences and structures of the precise 5' ends (which were lost using the double-strand cDNA capture method). Of the three methods, ssRNA capture was most effective in defining sequences to the poly(A) junction. Using data sets from a spectrum of C. elegans strains and stages and the UCSC Genome Browser, we provide a series of tools, which facilitate rapid visualization and assignment of gene structures.

  14. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
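The least-squares fitting of angular directions can be illustrated with a simplified planar resection. This sketch assumes absolute azimuths to known landmark coordinates and uses a brute-force grid search; the study itself fits panoramic angular measurements to LROC NAC orthoimage features, which is a richer adjustment than shown here.

```python
import numpy as np

def angdiff(a, b):
    """Smallest signed difference between two angles (radians)."""
    return (a - b + np.pi) % (2 * np.pi) - np.pi

def resection(landmarks, azimuths, xs, ys):
    """Brute-force least-squares search for the camera position: for each grid
    candidate, compare predicted landmark azimuths with the measured ones and
    keep the position with the smallest sum of squared angular residuals."""
    best, best_cost = None, np.inf
    for x in xs:
        for y in ys:
            pred = np.arctan2(landmarks[:, 1] - y, landmarks[:, 0] - x)
            cost = np.sum(angdiff(pred, azimuths) ** 2)
            if cost < best_cost:
                best, best_cost = (x, y), cost
    return best
```

With three or more well-distributed landmarks the cost surface has a sharp minimum, which is what allows the sub-meter (here, ~50 cm) localization reported in the study.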

  15. Tensor-based Dictionary Learning for Dynamic Tomographic Reconstruction

    PubMed Central

    Tan, Shengqi; Zhang, Yanbo; Wang, Ge; Mou, Xuanqin; Cao, Guohua; Wu, Zhifang; Yu, Hengyong

    2015-01-01

    In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples, including a sheep lung perfusion study and a dynamic mouse cardiac imaging study, demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. PMID:25779991

  16. High-repetition-rate interferometric Rayleigh scattering for flow-velocity measurements

    NASA Astrophysics Data System (ADS)

    Estevadeordal, Jordi; Jiang, Naibo; Cutler, Andrew D.; Felver, Josef J.; Slipchenko, Mikhail N.; Danehy, Paul M.; Gord, James R.; Roy, Sukesh

    2018-03-01

    High-repetition-rate interferometric-Rayleigh-scattering (IRS) velocimetry is demonstrated for non-intrusive, high-speed flow-velocity measurements. High temporal resolution is obtained with a quasi-continuous burst-mode laser capable of operating at 10-100 kHz, providing 10-ms bursts with pulse widths of 5-1000 ns and pulse energies > 100 mJ at 532 nm. Coupled with a high-speed camera system, the IRS method is based on imaging the flow field through an etalon with an 8-GHz free spectral range and capturing the Doppler shift of the Rayleigh-scattered light from the flow at multiple points having constructive interference. Seeding the laser permits a linewidth of < 150 MHz at 532 nm. The technique is demonstrated in a high-speed jet, and high-repetition-rate image sequences are shown.
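For a single scattering geometry, converting the measured Doppler shift to a velocity reduces to a one-line formula. The scattering-angle convention below is the textbook relation for Doppler velocimetry, assumed here rather than taken from the paper, and the formula gives the velocity component along the bisector of the illumination and observation directions.

```python
import numpy as np

def doppler_velocity(delta_f, wavelength=532e-9, scattering_angle=np.pi):
    """Velocity component (m/s) from a measured Doppler shift delta_f (Hz):
    v = delta_f * wavelength / (2 * sin(theta / 2)).
    theta = pi corresponds to direct backscatter, where v = delta_f * lambda / 2."""
    return delta_f * wavelength / (2.0 * np.sin(scattering_angle / 2.0))
```

At 532 nm, a 1-GHz shift in backscatter corresponds to 266 m/s, which illustrates why the < 150 MHz seeded linewidth matters for resolving high-speed jet velocities.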

  17. Capturing Motion and Depth Before Cinematography.

    PubMed

    Wade, Nicholas J

    2016-01-01

    Visual representations of biological states have traditionally faced two problems: they lacked motion and depth. Attempts were made to supply these wants over many centuries, but the major advances were made in the early-nineteenth century. Motion was synthesized by sequences of slightly different images presented in rapid succession and depth was added by presenting slightly different images to each eye. Apparent motion and depth were combined some years later, but they tended to be applied separately. The major figures in this early period were Wheatstone, Plateau, Horner, Duboscq, Claudet, and Purkinje. Others later in the century, like Marey and Muybridge, were stimulated to extend the uses to which apparent motion and photography could be applied to examining body movements. These developments occurred before the birth of cinematography, and significant insights were derived from attempts to combine motion and depth.

  18. High-resolution ophthalmic imaging system

    DOEpatents

    Olivier, Scot S.; Carrano, Carmen J.

    2007-12-04

    A system for providing an improved-resolution retina image, comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved-resolution retina image. The method comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved-resolution retina image.
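Speckle processing of short exposures is commonly implemented as shift-and-add: register each frame against a reference and average. This numpy sketch (FFT cross-correlation registration followed by averaging) is a generic illustration of that class of processing, not the patented algorithm, which the abstract does not detail.

```python
import numpy as np

def shift_and_add(frames):
    """Naive speckle processing: register each short exposure to the first
    frame by FFT cross-correlation, then average the aligned frames."""
    ref = frames[0]
    F_ref = np.fft.fft2(ref)
    acc = ref.astype(float).copy()
    for frame in frames[1:]:
        # Circular cross-correlation; its peak gives the relative shift.
        xcorr = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(frame)))
        dy, dx = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
        acc += np.roll(frame, (dy, dx), axis=(0, 1))
    return acc / len(frames)
```

Averaging aligned short exposures suppresses the atmospheric-style blur that would accumulate in a single long exposure, which is the basic idea behind speckle-based resolution improvement.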

  19. The minimal amount of starting DNA for Agilent’s hybrid capture-based targeted massively parallel sequencing

    PubMed Central

    Chung, Jongsuk; Son, Dae-Soon; Jeon, Hyo-Jeong; Kim, Kyoung-Mee; Park, Gahee; Ryu, Gyu Ha; Park, Woong-Yang; Park, Donghyun

    2016-01-01

    Targeted capture massively parallel sequencing is increasingly being used in clinical settings, and as costs continue to decline, use of this technology may become routine in health care. However, a limited amount of tissue has often been a challenge in meeting quality requirements. To offer a practical guideline for the minimum amount of input DNA for targeted sequencing, we optimized and evaluated the performance of targeted sequencing depending on the input DNA amount. First, using various amounts of input DNA, we compared commercially available library construction kits and selected Agilent’s SureSelect-XT and KAPA Biosystems’ Hyper Prep kits as the kits most compatible with targeted deep sequencing using Agilent’s SureSelect custom capture. Then, we optimized the adapter ligation conditions of the Hyper Prep kit to improve library construction efficiency and adapted multiplexed hybrid selection to reduce the cost of sequencing. In this study, we systematically evaluated the performance of the optimized protocol depending on the amount of input DNA, ranging from 6.25 to 200 ng, suggesting the minimal input DNA amounts based on coverage depths required for specific applications. PMID:27220682

  20. ASTER Captures New Image of Pakistan Flooding

    NASA Image and Video Library

    2010-08-20

    NASA's Terra spacecraft captured this cloud-free image over the city of Sukkur, Pakistan, on Aug. 18, 2010. Sukkur, located in Pakistan's southeastern Sindh Province, is visible as the grey, urbanized area in the lower left center of the image.

  1. A Distributed Compressive Sensing Scheme for Event Capture in Wireless Visual Sensor Networks

    NASA Astrophysics Data System (ADS)

    Hou, Meng; Xu, Sen; Wu, Weiling; Lin, Fei

    2018-01-01

    Image signals acquired by a wireless visual sensor network can be used for specific event capture, which is realized by image processing at the sink node. A distributed compressive sensing scheme is used for the transmission of these image signals from the camera nodes to the sink node. A measurement and joint reconstruction algorithm for these image signals is proposed in this paper. Taking advantage of the spatial correlation between images within a sensing area, the cluster head node, acting as the image decoder, can accurately co-reconstruct these image signals. The subjective visual quality and the reconstruction error rate are used for the evaluation of reconstructed image quality. Simulation results show that the joint reconstruction algorithm achieves higher image quality at the same image compression rate than the independent reconstruction algorithm.

  2. Toward 1-mm depth precision with a solid state full-field range imaging system

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Carnegie, Dale A.; Cree, Michael J.

    2006-02-01

    Previously, we demonstrated a novel heterodyne-based solid-state full-field range-finding imaging system. This system comprises modulated LED illumination, a modulated image intensifier, and a digital video camera. A 10 MHz drive is provided, with a 1 Hz difference between the LEDs and the image intensifier. A sequence of images of the resulting beating intensifier output is captured and processed to determine the phase, and hence the distance to the object, for each pixel. In a previous publication, we detailed results showing a one-sigma precision of 15 mm to 30 mm (depending on signal strength). Furthermore, we identified the limitations of the system and potential improvements that were expected to yield a range precision on the order of 1 mm; these primarily include increasing the operating frequency and improving optical coupling and sensitivity. In this paper, we report on the implementation of these improvements and the new system characteristics. We also comment on the factors that are important for high-precision image ranging and present configuration strategies for best performance. Ranging with sub-millimeter precision is demonstrated by imaging a planar surface and calculating the deviations from a planar fit. The results are also illustrated graphically by imaging a garden gnome.
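The per-pixel phase extraction from the beat sequence can be sketched as a single-bin DFT. The frame count, sign conventions, and assumption that the frames uniformly span exactly one beat period are illustrative choices, not specifics from the paper.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s

def range_from_beat(samples, f_mod=10e6):
    """Per-pixel range from one cycle of the beating intensifier output.
    `samples` holds N frames uniformly spanning one beat period; the phase of
    the first DFT bin is the modulation phase, and d = c * phi / (4 * pi * f_mod)
    (the 4*pi accounts for the round trip of the modulated light)."""
    n = np.arange(len(samples))
    phase = np.angle(np.sum(samples * np.exp(-2j * np.pi * n / len(samples))))
    return C * (phase % (2 * np.pi)) / (4 * np.pi * f_mod)
```

At a 10 MHz modulation frequency the unambiguous range c / (2 f_mod) is 15 m, which is one reason increasing the operating frequency improves precision at the cost of range ambiguity.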

  3. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
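For the planar-document case, once the four document corners are located in the image, restoring the frontal-flat view reduces to estimating and applying a homography. This direct-linear-transform (DLT) sketch is a standard building block for such rectification, not the paper's texture-flow shape estimation, and the corner coordinates in the usage are hypothetical.

```python
import numpy as np

def homography(src, dst):
    """DLT: the 3x3 homography mapping 4 source points to 4 destination points
    (e.g., imaged document corners to the corners of a flat rectangle).
    Solves A h = 0 via SVD; h is the right singular vector of the smallest
    singular value."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Apply a homography to a 2-D point (homogeneous divide included)."""
    q = H @ np.array([pt[0], pt[1], 1.0])
    return q[:2] / q[2]
```

Warping every pixel of the rectified rectangle back through the inverse homography produces the frontal view; the curved-document case handled by the paper requires the full 3D shape estimate instead of a single homography.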

  4. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    Current capsule endoscopes use one camera to capture images of the intestinal surface. Such a system can locate an abnormal point, but cannot provide detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: 'A 3D photographic capsule endoscope system'. The system uses three cameras to capture images in real time. The advantage is an increase of the viewing range by up to 2.99 times with respect to the two-camera system. Combined with a 3D monitor, the system provides exact information about symptom points, helping doctors diagnose disease.

  5. Near-Infrared Coloring via a Contrast-Preserving Mapping Model.

    PubMed

    Chang-Hwan Son; Xiao-Ping Zhang

    2017-11-01

    Near-infrared gray images captured along with corresponding visible color images have recently proven useful for image restoration and classification. This paper introduces a new coloring method to add colors to near-infrared gray images based on a contrast-preserving mapping model. A naive coloring method directly adds the colors from the visible color image to the near-infrared gray image. However, this method results in an unrealistic image because of the discrepancies in the brightness and image structure between the captured near-infrared gray image and the visible color image. To solve the discrepancy problem, first, we present a new contrast-preserving mapping model to create a new near-infrared gray image with a similar appearance in the luminance plane to the visible color image, while preserving the contrast and details of the captured near-infrared gray image. Then, we develop a method to derive realistic colors that can be added to the newly created near-infrared gray image based on the proposed contrast-preserving mapping model. Experimental results show that the proposed new method not only preserves the local contrast and details of the captured near-infrared gray image, but also transfers the realistic colors from the visible color image to the newly created near-infrared gray image. It is also shown that the proposed near-infrared coloring can be used effectively for noise and haze removal, as well as local contrast enhancement.
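
The abstract does not give the mapping model itself; one common way to obtain a gray image whose overall brightness follows the visible luminance while the NIR local contrast and details are preserved is to combine the visible low-frequency base with the NIR high-frequency detail. An illustrative sketch under that assumption (the box filter and function names are hypothetical, not the paper's model):

```python
import numpy as np

def box_blur(img, k=5):
    """Simple separable box low-pass filter with reflect padding."""
    pad = k // 2
    out = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="valid"), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="valid"), 0, out)
    return out

def map_nir_to_luminance(nir, vis_lum, k=5):
    """Illustrative contrast-preserving mapping: visible base (low
    frequencies) plus NIR detail (high frequencies), so the result
    resembles the visible luminance plane while keeping NIR contrast."""
    return box_blur(vis_lum, k) + (nir - box_blur(nir, k))
```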

  6. NASA Spacecraft Captures Image of Brazil Flooding

    NASA Image and Video Library

    2011-01-19

    On Jan. 18, 2011, NASA Terra spacecraft captured this 3-D perspective image of the city of Nova Friburgo, Brazil. A week of torrential rains triggered a series of deadly mudslides and floods. More details about this image at the Photojournal.

  7. SDO's Ultra-high Definition View of 2012 Venus Transit -- Path Sequence

    NASA Image and Video Library

    2017-12-08

    NASA image captured June 5-6, 2012. On June 5-6, 2012, SDO collected images of one of the rarest predictable solar events: the transit of Venus across the face of the sun. These events happen in pairs eight years apart that are separated from each other by 105 or 121 years. The last transit was in 2004, and the next will not happen until 2117. Credit: NASA/SDO, AIA. To read more about the 2012 Venus Transit go to: sunearthday.nasa.gov/transitofvenus

  8. Human fatigue expression recognition through image-based dynamic multi-information and bimodal deep learning

    NASA Astrophysics Data System (ADS)

    Zhao, Lei; Wang, Zengcai; Wang, Xiaojin; Qi, Yazhou; Liu, Qing; Zhang, Guoxin

    2016-09-01

    Human fatigue is an important cause of traffic accidents. To improve the safety of transportation, we propose, in this paper, a framework for fatigue expression recognition using image-based facial dynamic multi-information and a bimodal deep neural network. First, the landmarks of the face region and the texture of the eye region, which complement each other in fatigue expression recognition, are extracted from facial image sequences captured by a single camera. Then, two stacked autoencoder neural networks are trained for landmarks and texture, respectively. Finally, the two trained neural networks are combined by learning a joint layer on top of them to construct a bimodal deep neural network. The model can be used to extract a unified representation that fuses the landmark and texture modalities together and to classify fatigue expressions accurately. The proposed system is tested on a human fatigue dataset obtained from an actual driving environment. The experimental results demonstrate that the proposed method performs stably and robustly, and that the average accuracy reaches 96.2%.
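
The fusion architecture described above — two modality-specific encoders joined by a shared layer feeding a classifier — can be sketched as a forward pass. The dimensions, random weights, and two-class output below are hypothetical placeholders, not the trained model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 68 landmark features, 256 texture features,
# 64-unit encoders, a 32-unit joint layer, 2 classes (fatigued / alert).
W_land = rng.normal(size=(68, 64)) * 0.1
W_tex = rng.normal(size=(256, 64)) * 0.1
W_joint = rng.normal(size=(128, 32)) * 0.1
W_out = rng.normal(size=(32, 2)) * 0.1

def predict(landmarks, texture):
    """Forward pass: encode each modality separately, concatenate the
    codes, and let the joint layer produce the fused representation
    that feeds the classifier."""
    h = np.concatenate([relu(landmarks @ W_land), relu(texture @ W_tex)], axis=-1)
    return softmax(relu(h @ W_joint) @ W_out)
```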

  9. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, Thomas C. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image, including capturing an image and storing the image as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only a first pixel and a last pixel for each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear data segment are saved, such as the sum of pixels and the weighted sum of pixels (i.e., each threshold pixel value multiplied by that pixel's x-location).
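
The segment encoding described in the patent — only the first and last pixel per run, plus the optional pixel sum and x-weighted sum — can be sketched per scan line. A minimal numpy illustration (the data layout and names are assumptions):

```python
import numpy as np

def spot_segments(image, threshold):
    """Per scan line, find runs of above-threshold pixels and keep only
    the compact statistics the patent describes: first and last pixel
    position, the pixel sum, and the x-weighted pixel sum."""
    segments = []
    for y, row in enumerate(np.asarray(image)):
        mask = row > threshold
        m = np.concatenate(([0], mask.astype(np.int8), [0]))
        d = np.diff(m)                       # 1 starts a run, -1 ends one
        starts = np.nonzero(d == 1)[0]
        ends = np.nonzero(d == -1)[0] - 1
        for x0, x1 in zip(starts, ends):
            vals = row[x0:x1 + 1]
            segments.append({
                "row": y, "first": int(x0), "last": int(x1),
                "sum": float(vals.sum()),
                "wsum": float((vals * np.arange(x0, x1 + 1)).sum()),
            })
    return segments
```

Tracking then reduces to matching each segment's first/last positions against those found in subsequent frames.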

  10. Dual-pathway multi-echo sequence for simultaneous frequency and T2 mapping

    NASA Astrophysics Data System (ADS)

    Cheng, Cheng-Chieh; Mei, Chang-Sheng; Duryea, Jeffrey; Chung, Hsiao-Wen; Chao, Tzu-Cheng; Panych, Lawrence P.; Madore, Bruno

    2016-04-01

    Purpose: To present a dual-pathway multi-echo steady state sequence and reconstruction algorithm to capture T2, T2∗ and field map information. Methods: Typically, pulse sequences based on spin echoes are needed for T2 mapping while gradient echoes are needed for field mapping, making it difficult to jointly acquire both types of information. A dual-pathway multi-echo pulse sequence is employed here to generate T2 and field maps from the same acquired data. The approach might be used, for example, to obtain both thermometry and tissue damage information during thermal therapies, or susceptibility and T2 information from the same head scan, or to generate bonus T2 maps during a knee scan. Results: Quantitative T2, T2∗ and field maps were generated in gel phantoms, ex vivo bovine muscle, and twelve volunteers. T2 results were validated against a spin-echo reference standard: A linear regression based on ROI analysis in phantoms provided close agreement (slope/R2 = 0.99/0.998). A pixel-wise in vivo Bland-Altman analysis of R2 = 1/T2 showed a bias of 0.034 Hz (about 0.3%), as averaged over four volunteers. Ex vivo results, with and without motion, suggested that tissue damage detection based on T2 rather than temperature-dose measurements might prove more robust to motion. Conclusion: T2, T2∗ and field maps were obtained simultaneously, from the same datasets, in thermometry, susceptibility-weighted imaging and knee-imaging contexts.

  11. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor.

    PubMed

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-Ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-03-05

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons generated at a single pixel's photodiode among different exposure taps, and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes.
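
The classical three-image photometric stereo the abstract refers to solves a per-pixel linear system for the albedo-scaled normal. A minimal sketch for a Lambertian surface with three known, linearly independent lighting directions (a textbook formulation, not the authors' real-time implementation):

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical photometric stereo for a Lambertian surface:
    I_k = albedo * (L_k . n) per pixel, so stacking the three known
    lighting directions L_k as rows gives a 3x3 linear system whose
    solution is g = albedo * n."""
    imgs = np.stack([im.ravel() for im in images])        # (3, P)
    g = np.linalg.solve(np.asarray(lights, float), imgs)  # (3, P)
    albedo = np.linalg.norm(g, axis=0)
    normals = g / np.where(albedo > 0, albedo, 1)         # unit normals
    return normals.T.reshape(images[0].shape + (3,)), albedo.reshape(images[0].shape)
```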

  12. Nucleic acid sequence detection using multiplexed oligonucleotide PCR

    DOEpatents

    Nolan, John P [Santa Fe, NM]; White, P Scott [Los Alamos, NM]

    2006-12-26

    Methods for rapidly detecting single or multiple sequence alleles in a sample nucleic acid are described. Provided are all of the oligonucleotide pairs capable of annealing specifically to a target allele and discriminating among possible sequences thereof, and ligating to each other to form an oligonucleotide complex when a particular sequence feature is present (or, alternatively, absent) in the sample nucleic acid. The design of each oligonucleotide pair permits the subsequent high-level PCR amplification of a specific amplicon when the oligonucleotide complex is formed, but not when the oligonucleotide complex is not formed. The presence or absence of the specific amplicon is used to detect the allele. Detection of the specific amplicon may be achieved using a variety of methods well known in the art, including without limitation, oligonucleotide capture onto DNA chips or microarrays, oligonucleotide capture onto beads or microspheres, electrophoresis, and mass spectrometry. Various labels and address-capture tags may be employed in the amplicon detection step of multiplexed assays, as further described herein.

  13. Water surface capturing by image processing

    USDA-ARS?s Scientific Manuscript database

    An alternative means of measuring the water surface interface during laboratory experiments is processing a series of sequentially captured images. Image processing can provide a continuous, non-intrusive record of the water surface profile whose accuracy is not dependent on water depth. More trad...

  14. Combined subtraction hybridization and polymerase chain reaction amplification procedure for isolation of strain-specific Rhizobium DNA sequences.

    PubMed Central

    Bjourson, A J; Stone, C E; Cooper, J E

    1992-01-01

    A novel subtraction hybridization procedure, incorporating a combination of four separation strategies, was developed to isolate unique DNA sequences from a strain of Rhizobium leguminosarum bv. trifolii. Sau3A-digested DNA from this strain, i.e., the probe strain, was ligated to a linker and hybridized in solution with an excess of pooled subtracter DNA from seven other strains of the same biovar which had been restricted, ligated to a different, biotinylated, subtracter-specific linker, and amplified by polymerase chain reaction to incorporate dUTP. Subtracter DNA and subtracter-probe hybrids were removed by phenol-chloroform extraction of a streptavidin-biotin-DNA complex. NENSORB chromatography of the sequences remaining in the aqueous layer captured biotinylated subtracter DNA which may have escaped removal by phenol-chloroform treatment. Any traces of contaminating subtracter DNA were removed by digestion with uracil DNA glycosylase. Finally, remaining sequences were amplified by polymerase chain reaction with a probe strain-specific primer, labelled with 32P, and tested for specificity in dot blot hybridizations against total genomic target DNA from each strain in the subtracter pool. Two rounds of subtraction-amplification were sufficient to remove cross-hybridizing sequences and to give a probe which hybridized only with homologous target DNA. The method is applicable to the isolation of DNA and RNA sequences from both procaryotic and eucaryotic cells. PMID:1637166

  15. DNA capture and next-generation sequencing can recover whole mitochondrial genomes from highly degraded samples for human identification

    PubMed Central

    2013-01-01

    Background Mitochondrial DNA (mtDNA) typing can be a useful aid for identifying people from compromised samples when nuclear DNA is too damaged, degraded or below detection thresholds for routine short tandem repeat (STR)-based analysis. Standard mtDNA typing, focused on PCR amplicon sequencing of the control region (HVS I and HVS II), is limited by the resolving power of this short sequence, which misses up to 70% of the variation present in the mtDNA genome. Methods We used in-solution hybridisation-based DNA capture (using DNA capture probes prepared from modern human mtDNA) to recover mtDNA from post-mortem human remains in which the majority of DNA is both highly fragmented (<100 base pairs in length) and chemically damaged. The method ‘immortalises’ the finite quantities of DNA in valuable extracts as DNA libraries, which is followed by the targeted enrichment of endogenous mtDNA sequences and characterisation by next-generation sequencing (NGS). Results We sequenced whole mitochondrial genomes for human identification from samples where standard nuclear STR typing produced only partial profiles or demonstrably failed and/or where standard mtDNA hypervariable region sequences lacked resolving power. Multiple rounds of enrichment can substantially improve coverage and sequencing depth of mtDNA genomes from highly degraded samples. The application of this method has led to the reliable mitochondrial sequencing of human skeletal remains from unidentified World War Two (WWII) casualties approximately 70 years old and from archaeological remains (up to 2,500 years old). Conclusions This approach has potential applications in forensic science, historical human identification cases, archived medical samples, kinship analysis and population studies. In particular the methodology can be applied to any case, involving human or non-human species, where whole mitochondrial genome sequences are required to provide the highest level of maternal lineage discrimination. 
Multiple rounds of in-solution hybridisation-based DNA capture can retrieve whole mitochondrial genome sequences from even the most challenging samples. PMID:24289217

  16. Transmission Geometry Laser Ablation into a Non-Contact Liquid Vortex Capture Probe for Mass Spectrometry Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ovchinnikova, Olga S; Bhandari, Deepak; Lorenz, Matthias

    2014-01-01

    Rationale: Capture of material from a laser ablation plume into a continuous flow stream of solvent provides the means for uninterrupted sampling, transport and ionization of collected material for coupling with mass spectral analysis. Reported here is the use of vertically aligned transmission geometry laser ablation in combination with a new non-contact liquid vortex capture probe coupled with electrospray ionization for spot sampling and chemical imaging with mass spectrometry. Methods: A vertically aligned continuous flow liquid vortex capture probe was positioned directly underneath a sample surface in a transmission geometry laser ablation (355 nm, 10 Hz, 7 ns pulse width) setup to capture into solution the ablated material. The outlet of the vortex probe was coupled to the Turbo V ion source of an AB SCIEX TripleTOF 5600+ mass spectrometer. System operation and performance metrics were tested using inked patterns and thin tissue sections. Glass slides and slides designed especially for laser capture microdissection, viz., DIRECTOR slides and PEN 1.0 (polyethylene naphthalate) membrane slides, were used as sample substrates. Results: The estimated capture efficiency of laser-ablated material was 24%, which was enabled by the use of a probe with large liquid surface area (~2.8 mm²) and with gravity to help direct ablated material vertically down towards the probe. The swirling vortex action of the liquid surface potentially enhanced capture and dissolution of not only particulates, but also gaseous products of the laser ablation. The use of DIRECTOR slides and PEN 1.0 (polyethylene naphthalate) membrane slides as sample substrates enabled effective ablation of a wide range of sample types (basic blue 7, polypropylene glycol, insulin and cytochrome c) without photodamage using a UV laser. 
Imaging resolution of about 6 µm was demonstrated for stamped ink on DIRECTOR slides based on the ability to distinguish features present both in the optical and in the chemical image. This imaging resolution was 20 times better than the previous best reported results with laser ablation/liquid sample capture mass spectrometry imaging. Using thin sections of brain tissue the chemical image of a selected lipid was obtained with an estimated imaging resolution of about 50 µm. Conclusions: A vertically aligned, transmission geometry laser ablation liquid vortex capture probe, electrospray ionization mass spectrometry system provides an effective means for spatially resolved spot sampling and imaging with mass spectrometry.

  17. Transmission geometry laser ablation into a non-contact liquid vortex capture probe for mass spectrometry imaging.

    PubMed

    Ovchinnikova, Olga S; Bhandari, Deepak; Lorenz, Matthias; Van Berkel, Gary J

    2014-08-15

    Capture of material from a laser ablation plume into a continuous flow stream of solvent provides the means for uninterrupted sampling, transport and ionization of collected material for coupling with mass spectral analysis. Reported here is the use of vertically aligned transmission geometry laser ablation in combination with a new non-contact liquid vortex capture probe coupled with electrospray ionization for spot sampling and chemical imaging with mass spectrometry. A vertically aligned continuous flow liquid vortex capture probe was positioned directly underneath a sample surface in a transmission geometry laser ablation (355 nm, 10 Hz, 7 ns pulse width) setup to capture into solution the ablated material. The outlet of the vortex probe was coupled to the Turbo V™ ion source of an AB SCIEX TripleTOF 5600+ mass spectrometer. System operation and performance metrics were tested using inked patterns and thin tissue sections. Glass slides and slides designed especially for laser capture microdissection, viz., DIRECTOR® slides and PEN 1.0 (polyethylene naphthalate) membrane slides, were used as sample substrates. The estimated capture efficiency of laser-ablated material was 24%, which was enabled by the use of a probe with large liquid surface area (~2.8 mm²) and with gravity to help direct ablated material vertically down towards the probe. The swirling vortex action of the liquid surface potentially enhanced capture and dissolution not only of particulates, but also of gaseous products of the laser ablation. The use of DIRECTOR® slides and PEN 1.0 (polyethylene naphthalate) membrane slides as sample substrates enabled effective ablation of a wide range of sample types (basic blue 7, polypropylene glycol, insulin and cytochrome c) without photodamage using a UV laser. 
Imaging resolution of about 6 µm was demonstrated for stamped ink on DIRECTOR® slides based on the ability to distinguish features present both in the optical and in the chemical image. This imaging resolution was 20 times better than the previous best reported results with laser ablation/liquid sample capture mass spectrometry imaging. Using thin sections of brain tissue the chemical image of a selected lipid was obtained with an estimated imaging resolution of about 50 µm. A vertically aligned, transmission geometry laser ablation liquid vortex capture probe, electrospray ionization mass spectrometry system provides an effective means for spatially resolved spot sampling and imaging with mass spectrometry. Published in 2014. This article is a U.S. Government work and is in the public domain in the USA.

  18. Study on Vignetting Correction of UAV Images and Its Application to 2013 Ms7.0 Lushan Earthquake, China

    NASA Astrophysics Data System (ADS)

    Yuan, X.; Wang, X.; Dou, A.; Ding, X.

    2014-12-01

    As UAVs are widely used in earthquake disaster prevention and mitigation, the efficiency of UAV image processing determines the effectiveness of their application to pre-earthquake disaster prevention, post-earthquake emergency rescue, and disaster assessment. Because of bad weather conditions after a destructive earthquake, wide-field cameras capture images with a serious vignetting phenomenon, which significantly affects the speed and efficiency of image mosaicking, especially the extraction of pre-earthquake building and geological structure information, and also the accuracy of post-earthquake quantitative damage extraction. In this paper, an improved radial gradient correction method (IRGCM) was developed to reduce the influence of the random distribution of land-surface objects on the images, based on the radial gradient correction method (RGCM, Y. Zheng, 2008; 2013). First, a mean-value image was obtained by averaging a series of UAV images. It was used for calibration instead of single images to obtain the comprehensive vignetting function using RGCM. Then each UAV image was corrected by the comprehensive vignetting function. A case study was performed to correct the UAV image sequence obtained in Lushan County after the Ms7.0 Lushan, Sichuan, China earthquake of April 20, 2013. The results show that the comprehensive vignetting function generated by IRGCM is more robust and accurate in expressing the specific optical response of the camera in a particular setting. It is thus particularly useful for correcting large numbers of UAV images with non-uniform illumination. The correction process is also simplified and faster than conventional methods. After correction, the images have better radial homogeneity and clearer details, which to a certain extent reduces the difficulty of image mosaicking and provides a better basis for further analysis and damage information extraction. 
Further tests also show that good results were obtained by applying the comprehensive vignetting function to other UAV image sequences from different regions. The research was supported by these projects, NO.2012BAK15B02 and 2013IES010106.
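
The mean-image idea above — averaging many frames so that scene content cancels and only the lens falloff remains — can be sketched directly; note that the actual IRGCM additionally fits a radial-gradient model to this mean image. The names below are illustrative:

```python
import numpy as np

def vignetting_gain(frames):
    """Average a stack of frames so land-surface content averages out,
    then normalize by the peak to get a per-pixel gain map (a crude
    stand-in for the paper's comprehensive vignetting function)."""
    mean = np.mean(np.asarray(frames, dtype=float), axis=0)
    return mean / mean.max()

def correct(image, gain, eps=1e-6):
    """Divide out the gain map to flatten the illumination falloff."""
    return np.asarray(image, dtype=float) / np.maximum(gain, eps)
```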

  19. Grizzly bear corticosteroid binding globulin: Cloning and serum protein expression.

    PubMed

    Chow, Brian A; Hamilton, Jason; Alsop, Derek; Cattet, Marc R L; Stenhouse, Gordon; Vijayan, Mathilakath M

    2010-06-01

    Serum corticosteroid levels are routinely measured as markers of stress in wild animals. However, corticosteroid levels rise rapidly in response to the acute stress of capture and restraint for sampling, limiting their use as an indicator of chronic stress. We hypothesized that serum corticosteroid binding globulin (CBG), the primary transport protein for corticosteroids in circulation, may be a better marker of the stress status prior to capture in grizzly bears (Ursus arctos). To test this, a full-length CBG cDNA was cloned and sequenced from grizzly bear testis and polyclonal antibodies were generated for detection of this protein in bear sera. The deduced nucleotide and protein sequences were 1218 bp and 405 amino acids, respectively. Multiple sequence alignments showed that grizzly bear CBG (gbCBG) was 90% and 83% identical to the dog CBG nucleotide and amino acid sequences, respectively. The affinity purified rabbit gbCBG antiserum detected grizzly bear but not human CBG. There were no sex differences in serum total cortisol concentration, while CBG expression was significantly higher in adult females compared to males. Serum cortisol levels were significantly higher in bears captured by leg-hold snare compared to those captured by remote drug delivery from helicopter. However, serum CBG expression between these two groups did not differ significantly. Overall, serum CBG levels may be a better marker of chronic stress, especially because this protein is not modulated by the stress of capture and restraint in grizzly bears. Copyright 2010 Elsevier Inc. All rights reserved.

  20. Mass Spectrometric Determination of ILPR G-quadruplex Binding Sites in Insulin and IGF-2

    PubMed Central

    Xiao, JunFeng

    2009-01-01

    The insulin-linked polymorphic region (ILPR) of the human insulin gene promoter region forms G-quadruplex structures in vitro. Previous studies show that insulin and insulin-like growth factor-2 (IGF-2) exhibit high affinity binding in vitro to 2-repeat sequences of ILPR variants a and h, but negligible binding to variant i. Two-repeat sequences of variants a and h form intramolecular G-quadruplex structures that are not evidenced for variant i. Here we report on the use of protein digestion combined with affinity capture and MALDI-MS detection to pinpoint ILPR binding sites in insulin and IGF-2. Peptides captured by ILPR variants a and h were sequenced by MALDI-MS/MS, LC-MS and in silico digestion. On-bead digestion of insulin-ILPR variant a complexes supported the conclusions. The results indicate that the sequence VCG(N)RGF is generally present in the captured peptides and is likely involved in the affinity binding interactions of the proteins with the ILPR G-quadruplexes. The significance of arginine in the interactions was studied by comparing the affinities of synthesized peptides VCGERGF and VCGEAGF with ILPR variant a. Peptides from other regions of the proteins that are connected through disulfide linkages were also detected in some capture experiments. Identification of binding sites could facilitate design of DNA binding ligands for capture and detection of insulin and IGF-2. The interactions may have biological significance as well. PMID:19747845

  1. Capturing the 'ome': the expanding molecular toolbox for RNA and DNA library construction.

    PubMed

    Boone, Morgane; De Koker, Andries; Callewaert, Nico

    2018-04-06

    All sequencing experiments and most functional genomics screens rely on the generation of libraries to comprehensively capture pools of targeted sequences. In the past decade especially, driven by the progress in the field of massively parallel sequencing, numerous studies have comprehensively assessed the impact of particular manipulations on library complexity and quality, and characterized the activities and specificities of several key enzymes used in library construction. Fortunately, careful protocol design and reagent choice can substantially mitigate many of the biases these manipulations introduce, and enable reliable representation of sequences in libraries. This review aims to guide the reader through the vast expanse of literature on the subject to promote informed library generation, independent of the application.

  2. Reducing flicker due to ambient illumination in camera captured images

    NASA Astrophysics Data System (ADS)

    Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.

    2013-02-01

    The flicker artifact dealt with in this paper is the scanning distortion arising when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line-by-line in a frame. Therefore, time differences exist between the lines. This mechanism causes a captured image to be corrupted by the change of illumination. This phenomenon is called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal that is key to compensating for the flicker artifact. The average signal of the non-content area taken along the scan direction has local extrema where the peaks of flicker exist. The locations of the extrema provide useful information for estimating the desired distribution of pixel intensities, assuming that the flicker artifact does not exist. The flicker-reduced images compensated by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
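
The compensation described above — estimating a per-scan-line flicker gain from the non-content area and dividing it out — can be sketched as follows, assuming a blank left margin serves as the non-content area (a simplification of the paper's extrema-based estimate):

```python
import numpy as np

def deflicker(image, margin=8):
    """Estimate the row-wise flicker gain from a blank margin of the
    rolling-shutter frame (one value per scan line), normalize it to
    unit mean, and divide it out of every row. Assumes the margin sees
    the same illumination as the image content."""
    img = np.asarray(image, dtype=float)
    flicker = img[:, :margin].mean(axis=1)   # one gain per scan line
    flicker = flicker / flicker.mean()       # unit-mean gain
    return img / flicker[:, None]
```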

  3. Development of a Digital Microarray with Interferometric Reflectance Imaging

    NASA Astrophysics Data System (ADS)

    Sevenler, Derin

    This dissertation describes a new type of molecular assay for nucleic acids and proteins. We call this technique a digital microarray since it is conceptually similar to conventional fluorescence microarrays, yet it performs enumerative ('digital') counting of the number of captured molecules. Digital microarrays are approximately 10,000-fold more sensitive than fluorescence microarrays, yet maintain all of the strengths of the platform including low cost and high multiplexing (i.e., many different tests on the same sample simultaneously). Digital microarrays use gold nanorods to label the captured target molecules. Each gold nanorod on the array is individually detected based on its light scattering, with an interferometric microscopy technique called SP-IRIS. Our optimized high-throughput version of SP-IRIS is able to scan a typical array of 500 spots in less than 10 minutes. Digital DNA microarrays may have utility in applications where sequencing is prohibitively expensive or slow. As an example, we describe a digital microarray assay for gene expression markers of bacterial drug resistance.

  4. Video-based face recognition via convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied recently, while video-based face recognition still remains a challenging task because of the low quality and large intra-class variation of video-captured face images. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; 2) Video-to-Still (V2S) face recognition, the reverse of the S2V scenario. A novel method is proposed in this paper to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network, and Euclidean metrics are then used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed due to the large margin of still images. Transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.

  5. Reduction and coding of synthetic aperture radar data with Fourier transforms

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier transforms for ocean image synthesis and surface wave analysis have been implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The objective was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground, where Fourier transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High resolution images of the ocean often contain no striking features, and subtle image modulations by wind generated surface waves are only apparent when large ocean regions are studied, with Fourier transforms, to reveal periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives and creating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformations with ground station processors.
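
The archive-and-reconstruct idea (store a reduced Fourier spectrum, invert it on the ground) can be sketched as follows. The central-block truncation and the keep fraction are illustrative choices, not the SRL processor's actual coding scheme:

```python
import numpy as np

def encode_spectrum(image, keep_frac=0.25):
    """Archive only the central (low-frequency) block of the 2D FFT."""
    spec = np.fft.fftshift(np.fft.fft2(image))
    h, w = spec.shape
    kh, kw = int(h * keep_frac / 2), int(w * keep_frac / 2)
    mask = np.zeros(spec.shape, dtype=bool)
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = True
    return spec[mask], mask          # compact code: values + layout

def decode_spectrum(values, mask):
    """Rebuild the scene from the archived spectrum by inverse FFT."""
    spec = np.zeros(mask.shape, dtype=complex)
    spec[mask] = values
    return np.fft.ifft2(np.fft.ifftshift(spec)).real

# A periodic wave pattern standing in for a wave-dominated ocean scene
x = np.arange(64)
img = np.add.outer(np.sin(2 * np.pi * 2 * x / 64),
                   np.cos(2 * np.pi * x / 64))
vals, mask = encode_spectrum(img)
recon = decode_spectrum(vals, mask)
```

Because wave energy concentrates at low spatial frequencies, the truncated spectrum is much smaller than the image while still reconstructing the dominant wave field.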

  6. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    NASA Astrophysics Data System (ADS)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

    Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground, owing to the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
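
A minimal sketch of the low-pass filtering step, assuming a simple separable moving-average filter (the paper's actual filter design is not given in this abstract): small-scale waves are strongly attenuated while large-scale waves pass nearly unchanged.

```python
import numpy as np

def box_lowpass(img, k):
    """Separable moving-average low-pass filter with odd window k.
    Edge padding keeps the output the same size as the input."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    kernel = np.ones(k) / k
    # filter along rows, then along columns
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'valid'),
                              1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'valid'),
                              0, out)
    return out
```

For a window of 9 pixels, a wave with a 5-pixel wavelength is attenuated to roughly a tenth of its amplitude, while a 100-pixel wavelength loses only about 1%.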

  7. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off limit between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher frame rate video that were produced by simulation experiments or using an optically simulated random sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the exposure duration by row within each 8x8 pixel block. This CMOS sensor is not fully controllable at the pixel level and has only line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that uses the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.
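
The line-dependent exposure constraint can be illustrated with a toy mask generator. The per-column start / per-row duration model below is an assumption based on the abstract's description, not the sensor's actual control logic:

```python
import numpy as np

def block_exposure_mask(h, w, frames, block=8, seed=0):
    """Space-time sampling mask under line-dependent control (toy
    model): within each block, the exposure start time is chosen per
    column and the exposure duration per row, so no pixel is
    individually addressable, yet the overall pattern is
    pseudo-random across blocks."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((frames, h, w), dtype=bool)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            start = rng.integers(0, frames, size=block)    # per column
            dur = rng.integers(1, frames + 1, size=block)  # per row
            for r in range(block):
                for c in range(block):
                    s = int(start[c])
                    e = min(s + int(dur[r]), frames)
                    mask[s:e, by + r, bx + c] = True
    return mask

m = block_exposure_mask(16, 16, 8)
```

A mask like this would multiply the incoming video before temporal integration; reconstruction from the integrated frames would then use a sparse-coding solver over the over-complete dictionary, which is beyond this sketch.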

  8. Multimodal Translation System Using Texture-Mapped Lip-Sync Images for Video Mail and Automatic Dubbing Applications

    NASA Astrophysics Data System (ADS)

    Morishima, Shigeo; Nakamura, Satoshi

    2004-12-01

    We introduce a multimodal English-to-Japanese and Japanese-to-English translation system that also translates the speaker's speech motion by synchronizing it to the translated speech. This system also introduces both a face synthesis technique that can generate any viseme lip shape and a face tracking technique that can estimate the original position and rotation of a speaker's face in an image sequence. To retain the speaker's facial expression, we substitute only the speech organ's image with the synthesized one, which is made by a 3D wire-frame model that is adaptable to any speaker. Our approach provides translated image synthesis with an extremely small database. The tracking motion of the face from a video image is performed by template matching. In this system, the translation and rotation of the face are detected by using a 3D personal face model whose texture is captured from a video frame. We also propose a method to customize the personal face model by using our GUI tool. By combining these techniques and the translated voice synthesis technique, an automatic multimodal translation can be achieved that is suitable for video mail or automatic dubbing systems into other languages.

  9. Highly multiplexed targeted DNA sequencing from single nuclei.

    PubMed

    Leung, Marco L; Wang, Yong; Kim, Charissa; Gao, Ruli; Jiang, Jerry; Sei, Emi; Navin, Nicholas E

    2016-02-01

    Single-cell DNA sequencing methods are challenged by poor physical coverage, high technical error rates and low throughput. To address these issues, we developed a single-cell DNA sequencing protocol that combines flow-sorting of single nuclei, time-limited multiple-displacement amplification (MDA), low-input library preparation, DNA barcoding, targeted capture and next-generation sequencing (NGS). This approach represents a major improvement over our previous single nucleus sequencing (SNS) Nature Protocols paper in terms of generating higher-coverage data (>90%), thereby enabling the detection of genome-wide variants in single mammalian cells at base-pair resolution. Furthermore, by pooling 48-96 single-cell libraries together for targeted capture, this approach can be used to sequence many single-cell libraries in parallel in a single reaction. This protocol greatly reduces the cost of single-cell DNA sequencing, and it can be completed in 5-6 d by advanced users. This single-cell DNA sequencing protocol has broad applications for studying rare cells and complex populations in diverse fields of biological research and medicine.

  10. Sequence capture by hybridization to explore modern and ancient genomic diversity in model and nonmodel organisms

    PubMed Central

    Gasc, Cyrielle; Peyretaillade, Eric

    2016-01-01

    The recent expansion of next-generation sequencing has significantly improved biological research. Nevertheless, deep exploration of genomes or metagenomic samples remains difficult because of the sequencing depth and the associated costs required. Therefore, different partitioning strategies have been developed to sequence informative subsets of studied genomes. Among these strategies, hybridization capture has proven to be an innovative and efficient tool for targeting and enriching specific biomarkers in complex DNA mixtures. It has been successfully applied in numerous areas of biology, such as exome resequencing for the identification of mutations underlying Mendelian or complex diseases and cancers, and its usefulness has been demonstrated in the agronomic field through the linking of genetic variants to agricultural phenotypic traits of interest. Moreover, hybridization capture has provided access to underexplored, but relevant fractions of genomes through its ability to enrich defined targets and their flanking regions. Finally, on the basis of restricted genomic information, this method has also allowed the expansion of knowledge of nonreference species and ancient genomes and provided a better understanding of metagenomic samples. In this review, we present the major advances and discoveries permitted by hybridization capture and highlight the potency of this approach in all areas of biology. PMID:27105841

  11. Sequence capture by hybridization to explore modern and ancient genomic diversity in model and nonmodel organisms.

    PubMed

    Gasc, Cyrielle; Peyretaillade, Eric; Peyret, Pierre

    2016-06-02

    The recent expansion of next-generation sequencing has significantly improved biological research. Nevertheless, deep exploration of genomes or metagenomic samples remains difficult because of the sequencing depth and the associated costs required. Therefore, different partitioning strategies have been developed to sequence informative subsets of studied genomes. Among these strategies, hybridization capture has proven to be an innovative and efficient tool for targeting and enriching specific biomarkers in complex DNA mixtures. It has been successfully applied in numerous areas of biology, such as exome resequencing for the identification of mutations underlying Mendelian or complex diseases and cancers, and its usefulness has been demonstrated in the agronomic field through the linking of genetic variants to agricultural phenotypic traits of interest. Moreover, hybridization capture has provided access to underexplored, but relevant fractions of genomes through its ability to enrich defined targets and their flanking regions. Finally, on the basis of restricted genomic information, this method has also allowed the expansion of knowledge of nonreference species and ancient genomes and provided a better understanding of metagenomic samples. In this review, we present the major advances and discoveries permitted by hybridization capture and highlight the potency of this approach in all areas of biology. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  12. Knowledge of healthcare professionals about rights of patient’s images

    PubMed Central

    Caires, Bianca Rodrigues; Lopes, Maria Carolina Barbosa Teixeira; Okuno, Meiry Fernanda Pinto; Vancini-Campanharo, Cássia Regina; Batista, Ruth Ester Assayag

    2015-01-01

    Objective To assess knowledge of healthcare professionals about capture and reproduction of images of patients in a hospital setting. Methods A cross-sectional and observational study among 360 healthcare professionals (nursing staff, physical therapists, and physicians) working at a teaching hospital in the city of São Paulo (SP). A questionnaire with sociodemographic information was distributed and data were correlated to capture and reproduction of images at hospitals. Results Of the 360 respondents, 142 had captured images of patients in the last year, and 312 reported seeing other professionals taking photographs of patients. Of the participants who captured images, 61 said they used them for studies and presentation of clinical cases, and 168 professionals reported not knowing of any legislation in the Brazilian Penal Code regarding collection and use of images. Conclusion There is a gap in the training of healthcare professionals regarding the use of patients' images. It is necessary to include subjects that address this theme in the syllabus of undergraduate courses, and healthcare organizations should regulate this issue. PMID:26267838

  13. Initial characterization of the large genome of the salamander Ambystoma mexicanum using shotgun and laser capture chromosome sequencing

    PubMed Central

    Keinath, Melissa C.; Timoshevskiy, Vladimir A.; Timoshevskaya, Nataliya Y.; Tsonis, Panagiotis A.; Voss, S. Randal; Smith, Jeramiah J.

    2015-01-01

    Vertebrates exhibit substantial diversity in genome size, and some of the largest genomes exist in species that uniquely inform diverse areas of basic and biomedical research. For example, the salamander Ambystoma mexicanum (the Mexican axolotl) is a model organism for studies of regeneration, development and genome evolution, yet its genome is ~10× larger than the human genome. As part of a hierarchical approach toward improving genome resources for the species, we generated 600 Gb of shotgun sequence data and developed methods for sequencing individual laser-captured chromosomes. Based on these data, we estimate that the A. mexicanum genome is ~32 Gb. Notably, as much as 19 Gb of the A. mexicanum genome can potentially be considered single copy, which presumably reflects the evolutionary diversification of mobile elements that accumulated during an ancient episode of genome expansion. Chromosome-targeted sequencing permitted the development of assemblies within the constraints of modern computational platforms, allowed us to place 2062 genes on the two smallest A. mexicanum chromosomes and resolved key events in the history of vertebrate genome evolution. Our analyses show that the capture and sequencing of individual chromosomes is likely to provide valuable information for the systematic sequencing, assembly and scaffolding of large genomes. PMID:26553646

  14. Initial characterization of the large genome of the salamander Ambystoma mexicanum using shotgun and laser capture chromosome sequencing.

    PubMed

    Keinath, Melissa C; Timoshevskiy, Vladimir A; Timoshevskaya, Nataliya Y; Tsonis, Panagiotis A; Voss, S Randal; Smith, Jeramiah J

    2015-11-10

    Vertebrates exhibit substantial diversity in genome size, and some of the largest genomes exist in species that uniquely inform diverse areas of basic and biomedical research. For example, the salamander Ambystoma mexicanum (the Mexican axolotl) is a model organism for studies of regeneration, development and genome evolution, yet its genome is ~10× larger than the human genome. As part of a hierarchical approach toward improving genome resources for the species, we generated 600 Gb of shotgun sequence data and developed methods for sequencing individual laser-captured chromosomes. Based on these data, we estimate that the A. mexicanum genome is ~32 Gb. Notably, as much as 19 Gb of the A. mexicanum genome can potentially be considered single copy, which presumably reflects the evolutionary diversification of mobile elements that accumulated during an ancient episode of genome expansion. Chromosome-targeted sequencing permitted the development of assemblies within the constraints of modern computational platforms, allowed us to place 2062 genes on the two smallest A. mexicanum chromosomes and resolved key events in the history of vertebrate genome evolution. Our analyses show that the capture and sequencing of individual chromosomes is likely to provide valuable information for the systematic sequencing, assembly and scaffolding of large genomes.

  15. Zero-Echo-Time and Dixon Deep Pseudo-CT (ZeDD CT): Direct Generation of Pseudo-CT Images for Pelvic PET/MRI Attenuation Correction Using Deep Convolutional Neural Networks with Multiparametric MRI.

    PubMed

    Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z

    2018-05-01

    Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUVmax was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.

  16. Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing

    DTIC Science & Technology

    2014-06-01

    At a price of nearly one tenth that of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.

  17. On-Chip AC self-test controller

    DOEpatents

    Flanagan, John D [Rhinebeck, NY; Herring, Jay R [Poughkeepsie, NY; Lo, Tin-Chee [Fishkill, NY

    2009-09-29

    A system for performing AC self-test on an integrated circuit that includes a system clock for normal operation is provided. The system includes the system clock, self-test circuitry, a first and second test register to capture and launch test data in response to a sequence of data pulses, and a logic circuit to be tested. The self-test circuitry includes an AC self-test controller and a clock splitter. The clock splitter generates the sequence of data pulses including a long data capture pulse followed by an at speed data launch pulse and an at speed data capture pulse followed by a long data launch pulse. The at speed data launch pulse and the at speed data capture pulse are generated for a common cycle of the system clock.

  18. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    PubMed Central

    Trivedi, Chintan A.; Bollmann, Johann H.

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback. PMID:23675322

  19. Scene-based nonuniformity correction algorithm based on interframe registration.

    PubMed

    Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao

    2011-06-01

    In this paper, we present a simple and effective scene-based nonuniformity correction (NUC) method for infrared focal plane arrays based on interframe registration. This method estimates the global translation between two adjacent frames and minimizes the mean square error between the two properly registered images, so that any two detectors observing the same scene produce the same output value. In this way, the accumulation of registration error is avoided and the NUC can be achieved. The advantages of the proposed algorithm are its low computational complexity and storage requirements and its ability to capture temporal drifts in the nonuniformity parameters. The performance of the proposed technique is thoroughly studied with infrared image sequences with simulated nonuniformity and with infrared imagery exhibiting real nonuniformity. It achieves significantly fast and reliable fixed-pattern noise reduction and obtains an effective frame-by-frame adaptive estimation of each detector's gain and offset.
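
A toy sketch of the registration-based NUC idea under simplifying assumptions (offset-only nonuniformity, exactly known integer shifts, while the paper also estimates gains and the translation itself): registered pixels that saw the same scene point are iteratively pushed to agree, and the estimated offsets converge up to a global constant.

```python
import numpy as np

def nuc_offset_update(est, frame1, frame2, dy, dx, lr=0.5):
    """One LMS update of estimated per-pixel offsets. frame2 views the
    same scene as frame1, globally shifted by (dy, dx) with dy, dx >= 0;
    after registration, detectors that saw the same scene point are
    nudged toward producing the same corrected output."""
    h, w = frame1.shape
    a, ea = frame1[dy:, dx:], est[dy:, dx:]
    b, eb = frame2[:h - dy, :w - dx], est[:h - dy, :w - dx]
    err = (a - ea) - (b - eb)   # disagreement of corrected outputs
    ea += lr * err              # in-place update on slices of `est`
    eb -= lr * err
    return est

# Toy demo: a moving 32x32 window over a fixed scene, plus a fixed
# pattern of per-pixel offsets; shifts are assumed known exactly.
rng = np.random.default_rng(1)
scene = rng.random((40, 40)) * 10.0
true_off = rng.random((32, 32))
frame = lambda py, px: scene[py:py + 32, px:px + 32] + true_off
est = np.zeros((32, 32))
for _ in range(30):
    for dy, dx in [(0, 1), (1, 0), (0, 2), (2, 0)]:
        nuc_offset_update(est, frame(0, 0), frame(dy, dx), dy, dx)
residual = np.std(est - true_off)   # small: offsets match up to a constant
```

Because each update only reconciles pairs of registered pixels from the current frames, no registration error accumulates over the sequence, which is the property the abstract emphasizes.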

  20. Detection of Cardiopulmonary Activity and Related Abnormal Events Using Microsoft Kinect Sensor.

    PubMed

    Al-Naji, Ali; Chahl, Javaan

    2018-03-20

    Monitoring of cardiopulmonary activity is a challenge when attempted under adverse conditions, including different sleeping postures, environmental settings, and an unclear region of interest (ROI). This study proposes an efficient remote imaging system based on a Microsoft Kinect v2 sensor for the observation of cardiopulmonary signals and the detection of related abnormal cardiopulmonary events (e.g., tachycardia, bradycardia, tachypnea, bradypnea, and central apnoea) in many possible sleeping postures and in varying environmental settings, including total darkness and whether or not the subject is covered by a blanket. The proposed system extracts the signal from the abdominal-thoracic region, where cardiopulmonary activity is most pronounced, using a real-time image sequence captured by the Kinect v2 sensor. The proposed system shows promising results in any sleep posture, regardless of illumination conditions and an unclear ROI, even in the presence of a blanket, whilst being reliable, safe, and cost-effective.

  1. Detection of Cardiopulmonary Activity and Related Abnormal Events Using Microsoft Kinect Sensor

    PubMed Central

    Chahl, Javaan

    2018-01-01

    Monitoring of cardiopulmonary activity is a challenge when attempted under adverse conditions, including different sleeping postures, environmental settings, and an unclear region of interest (ROI). This study proposes an efficient remote imaging system based on a Microsoft Kinect v2 sensor for the observation of cardiopulmonary signals and the detection of related abnormal cardiopulmonary events (e.g., tachycardia, bradycardia, tachypnea, bradypnea, and central apnoea) in many possible sleeping postures and in varying environmental settings, including total darkness and whether or not the subject is covered by a blanket. The proposed system extracts the signal from the abdominal-thoracic region, where cardiopulmonary activity is most pronounced, using a real-time image sequence captured by the Kinect v2 sensor. The proposed system shows promising results in any sleep posture, regardless of illumination conditions and an unclear ROI, even in the presence of a blanket, whilst being reliable, safe, and cost-effective. PMID:29558414

  2. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.

  3. The Dynamic Photometric Stereo Method Using a Multi-Tap CMOS Image Sensor †

    PubMed Central

    Yoda, Takuya; Nagahara, Hajime; Taniguchi, Rin-ichiro; Kagawa, Keiichiro; Yasutomi, Keita; Kawahito, Shoji

    2018-01-01

    The photometric stereo method enables estimation of surface normals from images that have been captured using different but known lighting directions. The classical photometric stereo method requires at least three images to determine the normals in a given scene. However, this method cannot be applied to dynamic scenes because it is assumed that the scene remains static while the required images are captured. In this work, we present a dynamic photometric stereo method for estimation of the surface normals in a dynamic scene. We use a multi-tap complementary metal-oxide-semiconductor (CMOS) image sensor to capture the input images required for the proposed photometric stereo method. This image sensor can divide the electrons from the photodiode of a single pixel into its different taps during exposure and can thus capture multiple images under different lighting conditions with almost identical timing. We implemented a camera lighting system and created a software application to enable estimation of the normal map in real time. We also evaluated the accuracy of the estimated surface normals and demonstrated that our proposed method can estimate the surface normals of dynamic scenes. PMID:29510599
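
The classical computation this method builds on can be sketched directly: with k ≥ 3 images under known light directions and a Lambertian surface model (I = L · (albedo · n)), normals and albedo follow from a per-pixel least-squares solve. The lights and the flat test surface below are illustrative:

```python
import numpy as np

def photometric_stereo(images, lights):
    """Classical photometric stereo: recover unit surface normals and
    albedo from k >= 3 images under known directional lights, assuming
    a Lambertian surface."""
    L = np.asarray(lights, dtype=float)    # (k, 3) unit light directions
    I = np.asarray(images, dtype=float)    # (k, h, w) intensities
    k, h, w = I.shape
    # solve L @ g = I per pixel, where g = albedo * normal
    g, *_ = np.linalg.lstsq(L, I.reshape(k, -1), rcond=None)
    albedo = np.linalg.norm(g, axis=0)
    n = g / np.maximum(albedo, 1e-12)      # normalize to unit normals
    return n.reshape(3, h, w), albedo.reshape(h, w)

# Synthetic check: a flat surface facing the camera (normal (0, 0, 1))
lights = np.array([[0.0, 0.0, 1.0],
                   [0.6, 0.0, 0.8],
                   [0.0, 0.6, 0.8]])
n_true = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((2, 2), l @ n_true) for l in lights])
normals, albedo = photometric_stereo(imgs, lights)
```

The multi-tap sensor's contribution is to capture the k differently lit images nearly simultaneously, so the same solve becomes valid for dynamic scenes.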

  4. Russian Character Recognition using Self-Organizing Map

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.

    2017-01-01

    The World Tourism Organization (UNWTO) reported that 28 million visitors visited Russia in 2014. Many of these visitors may have problems typing Russian words when using a digital dictionary, because the Cyrillic letters used in Russia and the countries around it differ in shape from Latin letters, and visitors may not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words: instead of typing them directly, a camera captures an image of the words as input. The captured image is cropped, then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation, and thinning. Next, feature extraction is applied to the image. Cyrillic letters in the image are recognized using the Self-Organizing Map (SOM) algorithm. SOM successfully recognizes 89.09% of Cyrillic letters from computer-generated images and 88.89% of Cyrillic letters from images captured by a smartphone camera. For word recognition, SOM fully recognized 292 words and partially recognized 58 words from images captured by the smartphone's camera, giving a word-recognition accuracy of 83.42%.
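
A minimal SOM training loop of the kind used in such recognition pipelines, where feature vectors are mapped to best-matching units on a grid. The grid size, decay schedules, and toy two-cluster data are illustrative choices, not the paper's configuration:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=10, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small self-organizing map: for each sample, find the
    best-matching unit (BMU) and pull its grid neighbourhood toward
    the sample, with decaying learning rate and neighbourhood width."""
    rng = np.random.default_rng(seed)
    gh, gw = grid
    w = rng.random((gh, gw, data.shape[1]))
    ys, xs = np.mgrid[0:gh, 0:gw]
    n, t = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = t / n
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            d = np.linalg.norm(w - x, axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)
            h = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)
            t += 1
    return w

def classify(w, x):
    """Map a feature vector to its best-matching unit's grid position."""
    d = np.linalg.norm(w - x, axis=2)
    return np.unravel_index(d.argmin(), d.shape)

# Toy data: two clusters standing in for two letter classes
rng = np.random.default_rng(1)
data = np.vstack([np.zeros((20, 2)), np.ones((20, 2))])
data = data + 0.05 * rng.standard_normal(data.shape)
w = train_som(data)
```

In a character recognizer, each trained unit would be labelled with the letter class of the training samples it wins, and recognition is BMU lookup.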

  5. Multi-exposure high dynamic range image synthesis with camera shake correction

    NASA Astrophysics Data System (ADS)

    Li, Xudong; Chen, Yongfu; Jiang, Hongzhi; Zhao, Huijie

    2017-10-01

    Machine vision plays an important part in industrial online inspection. Owing to nonuniform illumination conditions and variable working distances, the captured image tends to be over-exposed or under-exposed. As a result, when processing the image, for example for crack inspection, algorithm complexity and computing time increase. Multi-exposure high dynamic range (HDR) image synthesis is used to improve the quality of the captured image, whose dynamic range is limited. Inevitably, camera shake results in ghost effects, which blur the synthesized image to some extent. However, existing exposure fusion algorithms assume that the input images are either perfectly aligned or captured in the same scene. These assumptions limit their application. At present, the widely used registration based on the Scale Invariant Feature Transform (SIFT) is usually time consuming. In order to rapidly obtain a high-quality HDR image without ghost effects, we propose an efficient Low Dynamic Range (LDR) image capturing approach and a registration method based on Oriented FAST and Rotated BRIEF (ORB) features and histogram equalization, which eliminates the illumination differences between the LDR images. Fusion is performed after alignment. The experimental results demonstrate that the proposed method is robust to illumination changes and local geometric distortion. Compared with other exposure fusion methods, our method is more efficient and can produce HDR images without ghost effects by registering and fusing four multi-exposure images.
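
The fusion step can be sketched with a simplified Mertens-style well-exposedness weighting; alignment, which the paper performs with ORB-based registration, is assumed already done here, and the single weight term is a simplification of full exposure fusion:

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Simplified exposure fusion for aligned grayscale images in
    [0, 1]: weight each pixel by its closeness to mid-grey (how well
    exposed it is), then blend the stack with normalized weights."""
    stack = np.stack([np.asarray(im, dtype=float) for im in images])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True) + 1e-12
    return (weights * stack).sum(axis=0)

# Toy pair: an under-exposed and an over-exposed view of a ramp scene
scene = np.linspace(0.0, 1.0, 100)
under = scene * 0.3
over = np.clip(scene * 3.0, 0.0, 1.0)
fused = fuse_exposures([under, over])
```

In bright regions the saturated exposure gets little weight and the darker, still-informative exposure dominates, which is the behaviour that recovers detail lost at either end of the dynamic range.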

  6. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    PubMed

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.
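
A generic Monte Carlo null-model sketch of this kind of spatial-distribution test: compare an observed nearest-neighbour statistic for segmented cell positions against uniformly random configurations. The statistic and p-value convention here are illustrative, not the paper's exact Euclidean metrics:

```python
import numpy as np

def mean_nn_distance(points):
    """Mean nearest-neighbour distance of a 3D point set."""
    diff = points[:, None, :] - points[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def clustering_p_value(points, bounds, trials=500, seed=0):
    """Fraction of uniformly random configurations in the imaged volume
    that are at least as 'clustered' (mean NN distance <= observed);
    a small value suggests the observed cells are spatially clustered."""
    rng = np.random.default_rng(seed)
    obs = mean_nn_distance(points)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    hits = sum(
        mean_nn_distance(rng.uniform(lo, hi, size=points.shape)) <= obs
        for _ in range(trials)
    )
    return hits / trials
```

Run on cell centroids extracted from the full 3D stack, such a test quantifies whether an apparent spatial pattern could plausibly arise by chance.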

  7. A Digital Preclinical PET/MRI Insert and Initial Results.

    PubMed

    Weissler, Bjoern; Gebhardt, Pierre; Dueppenbecker, Peter M; Wehner, Jakob; Schug, David; Lerche, Christoph W; Goldschmidt, Benjamin; Salomon, Andre; Verel, Iris; Heijman, Edwin; Perkuhn, Michael; Heberling, Dirk; Botnar, Rene M; Kiessling, Fabian; Schulz, Volkmar

    2015-11-01

    Combining Positron Emission Tomography (PET) with Magnetic Resonance Imaging (MRI) results in a promising hybrid molecular imaging modality as it unifies the high sensitivity of PET for molecular and cellular processes with the functional and anatomical information from MRI. Digital Silicon Photomultipliers (dSiPMs) are the digital evolution in scintillation light detector technology and promise high PET SNR. DSiPMs from Philips Digital Photon Counting (PDPC) were used to develop a preclinical PET/RF gantry with 1-mm scintillation crystal pitch as an insert for clinical MRI scanners. With three exchangeable RF coils, the hybrid field of view has a maximum size of 160 mm × 96.6 mm (transaxial × axial). A volume-root-mean-square B0-homogeneity of 0.1 ppm is kept within a spherical diameter of 96 mm (automatic volume shimming). Depending on the coil, MRI SNR is decreased by 13% or 5% by the PET system. PET count rates, energy resolution of 12.6% FWHM, and spatial resolution of 0.73 mm³ (isometric volume resolution at isocenter) are not affected by applied MRI sequences. PET time resolution of 565 ps (FWHM) degraded by 6 ps during an EPI sequence. Timing-optimized settings yielded 260 ps time resolution. PET and MR images of a hot-rod phantom show no visible differences when the other modality was in operation, and both resolve 0.8-mm rods. Versatility of the insert is shown by successfully combining multi-nuclei MRI (¹H/¹⁹F) with simultaneously measured PET (¹⁸F-FDG). A longitudinal study of a tumor-bearing mouse verifies the operability, stability, and in vivo capabilities of the system. Cardiac- and respiratory-gated PET/MRI motion-capturing (CINE) images of the mouse heart demonstrate the advantage of simultaneous acquisition for temporal and spatial image registration.

  8. Enhanced image capture through fusion

    NASA Technical Reports Server (NTRS)

    Burt, Peter J.; Hanna, Keith; Kolczynski, Raymond J.

    1993-01-01

    Image fusion may be used to combine images from different sensors, such as IR and visible cameras, to obtain a single composite with extended information content. Fusion may also be used to combine multiple images from a given sensor to form a composite image in which information of interest is enhanced. We present a general method for performing image fusion and show that this method is effective for diverse fusion applications. We suggest that fusion may provide a powerful tool for enhanced image capture with broad utility in image processing and computer vision.
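
    The idea of selecting, at each location, the source image carrying the most salient information can be illustrated with a crude per-pixel local-energy rule. The paper itself describes a more general (pyramid-based) method; this numpy sketch is a simplified stand-in with invented test images:

```python
import numpy as np

def fuse_by_local_energy(a, b, k=3):
    """Fuse two registered grayscale images by keeping, at each pixel, the
    source with the larger local gradient energy in a k x k window -- a
    crude stand-in for salience-based fusion."""
    def energy(img):
        gy, gx = np.gradient(img.astype(float))
        g = gx ** 2 + gy ** 2
        # Box-filter the gradient energy with a k x k window (edge-padded).
        p = k // 2
        gp = np.pad(g, p, mode="edge")
        out = np.zeros_like(g)
        for dy in range(k):
            for dx in range(k):
                out += gp[dy:dy + g.shape[0], dx:dx + g.shape[1]]
        return out
    mask = energy(a) >= energy(b)
    return np.where(mask, a, b)

# One source image carries detail on the left, the other on the right.
left = np.zeros((16, 16)); left[4:12, 2:6] = 255.0
right = np.zeros((16, 16)); right[4:12, 10:14] = 255.0
fused = fuse_by_local_energy(left, right)
# The composite retains the bright feature from each source.
```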

  9. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacement on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
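
    The geometry underlying a depth transfer curve is the rectified-stereo relation d = f·B / Z: equal steps in depth Z map to unequal steps in disparity d, so the curve is inherently nonlinear. A short sketch with a hypothetical rig (the focal length and baseline are illustrative values, not from the record):

```python
def disparity_from_depth(z_m, focal_px, baseline_m):
    """Disparity (pixels) of a point at depth z_m for a rectified pair."""
    return focal_px * baseline_m / z_m

def depth_from_disparity(d_px, focal_px, baseline_m):
    """Inverse relation: depth (metres) from measured disparity."""
    return focal_px * baseline_m / d_px

# Hypothetical rig: 1000 px focal length, 6.5 cm baseline.
f, B = 1000.0, 0.065
d1 = disparity_from_depth(1.0, f, B)   # disparity at 1 m
d2 = disparity_from_depth(2.0, f, B)   # halves when depth doubles
d4 = disparity_from_depth(4.0, f, B)   # quarters when depth quadruples
```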

  10. Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo

    NASA Astrophysics Data System (ADS)

    Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj

    2007-09-01

    This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension of our comparative studies published in previous SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000, with its wavelet technology, tends to be the best performer on smooth spatial data; H.264/AVC High Profile, with advanced spatial prediction modes, tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 operates in its non-scalable, optimal-performance mode).

  11. Environmental High-content Fluorescence Microscopy (e-HCFM) of Tara Oceans Samples Provides a View of Global Ocean Protist Biodiversity

    NASA Astrophysics Data System (ADS)

    Coelho, L. P.; Colin, S.; Sunagawa, S.; Karsenti, E.; Bork, P.; Pepperkok, R.; de Vargas, C.

    2016-02-01

    Protists are responsible for much of the diversity in the eukaryotic kingdom and are crucial to several biogeochemical processes of global importance (e.g., the carbon cycle). Recent global investigations of these organisms have relied on sequence-based approaches. These methods do not, however, capture the complex functional morphology of these organisms, nor can they typically capture phenomena such as interactions (except indirectly through statistical means). Direct imaging of these organisms can therefore provide a valuable complement to sequencing and, when performed quantitatively, provide measures of structures and interaction patterns which can then be related back to sequence-based measurements. Towards this end, we developed a framework, environmental high-content fluorescence microscopy (e-HCFM), which can be applied to environmental samples composed of mixed communities. This strategy is based on general-purpose dyes that stain major structures in eukaryotes. Samples are imaged using scanning confocal microscopy, resulting in a three-dimensional image stack. High throughput can be achieved using automated microscopy and computational analysis. Standard bioimage informatics segmentation methods combined with feature computation and machine learning result in automatic taxonomic assignments of the imaged objects, in addition to several biochemically relevant measurements (such as biovolumes and fluorescence estimates) per organism. We provide results on 174 image acquisitions from Tara Oceans samples, which cover organisms from 5 to 180 microns (82 samples in the 5-20 micron fraction, 96 in the 20-180 micron fraction). We show a validation of the approach on technical grounds (demonstrating the high accuracy of automated classification) and provide results obtained from image analysis and from integration with other data, such as associated environmental parameters measured in situ, as well as perspectives on integration with sequence information.

  12. Extended Field Laser Confocal Microscopy (EFLCM): Combining automated Gigapixel image capture with in silico virtual microscopy

    PubMed Central

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-01-01

    Background Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Methods Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). Results We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organ and full size embryos. It can also record hundreds of thousands cultured cells at multiple wavelength in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. Conclusion The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as high content analysis (HCA) instrument for automated screening processes. PMID:18627634

  13. Extended Field Laser Confocal Microscopy (EFLCM): combining automated Gigapixel image capture with in silico virtual microscopy.

    PubMed

    Flaberg, Emilie; Sabelström, Per; Strandh, Christer; Szekely, Laszlo

    2008-07-16

    Confocal laser scanning microscopy has revolutionized cell biology. However, the technique has major limitations in speed and sensitivity due to the fact that a single laser beam scans the sample, allowing only a few microseconds signal collection for each pixel. This limitation has been overcome by the introduction of parallel beam illumination techniques in combination with cold CCD camera based image capture. Using the combination of microlens enhanced Nipkow spinning disc confocal illumination together with fully automated image capture and large scale in silico image processing we have developed a system allowing the acquisition, presentation and analysis of maximum resolution confocal panorama images of several Gigapixel size. We call the method Extended Field Laser Confocal Microscopy (EFLCM). We show using the EFLCM technique that it is possible to create a continuous confocal multi-colour mosaic from thousands of individually captured images. EFLCM can digitize and analyze histological slides, sections of entire rodent organ and full size embryos. It can also record hundreds of thousands cultured cells at multiple wavelength in single event or time-lapse fashion on fixed slides, in live cell imaging chambers or microtiter plates. The observer independent image capture of EFLCM allows quantitative measurements of fluorescence intensities and morphological parameters on a large number of cells. EFLCM therefore bridges the gap between the mainly illustrative fluorescence microscopy and purely quantitative flow cytometry. EFLCM can also be used as high content analysis (HCA) instrument for automated screening processes.

  14. Image stitching and image reconstruction of intestines captured using radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Wu, Yin-Yi; Dung, Lan-Rong; Wu, Hsien-Ming; Weng, Ping-Kuo; Huang, Ker-Jer; Chiu, Luan-Jiau

    2012-05-01

    This study investigates image processing using the radial imaging capsule endoscope (RICE) system. First, an experimental environment is established in which a simulated object with a shape similar to a cylinder is mounted so that a triaxial platform can push the RICE into the sample and capture radial images. Four algorithms (mean absolute error, mean square error, Pearson correlation coefficient, and deformation processing) are then used to stitch the images together. The Pearson correlation coefficient method is the most effective algorithm because it yields the highest peak signal-to-noise ratio, higher than 80.69 relative to the original image. Furthermore, a living animal experiment is carried out. Finally, the Pearson correlation coefficient method and vector deformation processing are used to stitch the images captured in the living animal experiment. This approach is attractive because, unlike other methods that require two lenses to reconstruct the geometrical image, RICE uses only one lens and one mirror.
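
    Correlation-based stitching amounts to sliding one frame over the next and keeping the offset that maximizes the Pearson correlation coefficient of the overlap. A simplified 1-D (column-shift) numpy sketch with synthetic data; the real system stitches radial endoscope images:

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation coefficient of two equal-shaped arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def best_overlap(strip1, strip2, max_shift=10):
    """Return (overlap width, r) maximizing Pearson correlation between
    the right edge of strip1 and the left edge of strip2."""
    best = (None, -2.0)
    for s in range(1, max_shift + 1):
        r = pearson(strip1[:, -s:], strip2[:, :s])
        if r > best[1]:
            best = (s, r)
    return best

rng = np.random.default_rng(1)
scene = rng.uniform(0, 255, size=(32, 40))
strip1, strip2 = scene[:, :24], scene[:, 18:]   # 6-column true overlap
shift, r = best_overlap(strip1, strip2)
# The exact overlap gives r close to 1; random overlaps score far lower.
```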

  15. Enrichment allows identification of diverse, rare elements in metagenomic resistome-virulome sequencing.

    PubMed

    Noyes, Noelle R; Weinroth, Maggie E; Parker, Jennifer K; Dean, Chris J; Lakin, Steven M; Raymond, Robert A; Rovira, Pablo; Doster, Enrique; Abdo, Zaid; Martin, Jennifer N; Jones, Kenneth L; Ruiz, Jaime; Boucher, Christina A; Belk, Keith E; Morley, Paul S

    2017-10-17

    Shotgun metagenomic sequencing is increasingly utilized as a tool to evaluate ecological-level dynamics of antimicrobial resistance and virulence, in conjunction with microbiome analysis. Interest in use of this method for environmental surveillance of antimicrobial resistance and pathogenic microorganisms is also increasing. In published metagenomic datasets, the total of all resistance- and virulence-related sequences accounts for < 1% of all sequenced DNA, leading to limitations in detection of low-abundance resistome-virulome elements. This study describes the extent and composition of the low-abundance portion of the resistome-virulome, using a bait-capture and enrichment system that incorporates unique molecular indices to count DNA molecules and correct for enrichment bias. The use of the bait-capture and enrichment system significantly increased on-target sequencing of the resistome-virulome, enabling detection of an additional 1441 gene accessions and revealing a low-abundance portion of the resistome-virulome that was more diverse and compositionally different than that detected by more traditional metagenomic assays. The low-abundance portion of the resistome-virulome also contained resistance genes of public health importance, such as extended-spectrum beta-lactamases, that were not detected using traditional shotgun metagenomic sequencing. In addition, the use of the bait-capture and enrichment system enabled identification of rare resistance gene haplotypes that were used to discriminate between sample origins. These results demonstrate that the rare resistome-virulome contains valuable and unique information that can be utilized for both surveillance and population genetic investigations of resistance. Access to the rare resistome-virulome using the bait-capture and enrichment system validated in this study can greatly advance our understanding of microbiome-resistome dynamics.
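
    The role of unique molecular indices (UMIs) in correcting enrichment bias can be illustrated by collapsing reads to unique molecules per gene. The gene names and UMI sequences below are purely illustrative:

```python
from collections import defaultdict

def count_molecules(reads):
    """Collapse reads to unique molecules per gene using their unique
    molecular index (UMI), correcting for amplification during
    bait-capture enrichment. `reads` is a list of (gene, umi) pairs."""
    umis = defaultdict(set)
    for gene, umi in reads:
        umis[gene].add(umi)
    return {gene: len(s) for gene, s in umis.items()}

# Hypothetical reads: duplicated UMIs are amplification copies of one molecule.
reads = [
    ("blaCTX-M", "AACGT"), ("blaCTX-M", "AACGT"),  # same molecule, amplified
    ("blaCTX-M", "GGTCA"),
    ("tetA", "TTGCA"), ("tetA", "TTGCA"), ("tetA", "TTGCA"),
]
counts = count_molecules(reads)
# Raw read counts (3 and 3) would overstate abundance; molecule counts are 2 and 1.
```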

  16. Improved wheal detection from skin prick test images

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan

    2014-03-01

    Skin prick test is a commonly used method for diagnosis of allergic diseases (e.g., pollen allergy, food allergy, etc.) in allergy clinics. The results of this test are erythema and wheal provoked on the skin where the test is applied. The sensitivity of the patient to a specific allergen is determined by the physical size of the wheal, which can be estimated from images captured by digital cameras. Accurate wheal detection from these images is an important step for precise estimation of wheal size. In this paper, we propose a method for improved wheal detection on prick test images captured by digital cameras. Our method operates by first localizing the test region through detection of calibration marks drawn on the skin. The luminance variation across the localized region is eliminated by applying a color transformation from RGB to YCbCr and discarding the luminance channel. We enhance the contrast of the captured images for the purpose of wheal detection by performing principal component analysis on the blue-difference (Cb) and red-difference (Cr) color channels. Finally, we perform morphological operations on the contrast-enhanced image to detect the wheal on the image plane. Our experiments, performed on images acquired from 36 different patients, show the efficiency of the proposed method for wheal detection from skin prick test images captured in an uncontrolled environment.
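
    The chroma-only contrast enhancement described above (RGB to YCbCr, discard Y, PCA on Cb/Cr) can be sketched in numpy. The conversion coefficients are the standard BT.601 chroma weights; the toy "skin" and "wheal" colors are invented:

```python
import numpy as np

def chroma_contrast(rgb):
    """Project the Cb/Cr chroma channels of an RGB image onto their first
    principal component, discarding luminance variation (sketch of the
    enhancement step, not the authors' exact implementation)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # ITU-R BT.601 chroma channels; luminance (Y) is deliberately unused.
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr = 0.500 * r - 0.419 * g - 0.081 * b
    x = np.stack([cb.ravel(), cr.ravel()], axis=1)
    x = x - x.mean(axis=0)
    # First eigenvector of the 2x2 chroma covariance matrix.
    w, v = np.linalg.eigh(np.cov(x.T))
    pc1 = v[:, np.argmax(w)]
    return (x @ pc1).reshape(rgb.shape[:2])

rgb = np.zeros((8, 8, 3), dtype=np.uint8)
rgb[..., 0] = 180                  # reddish "skin" background
rgb[2:6, 2:6] = (220, 150, 150)    # paler "wheal" patch
enhanced = chroma_contrast(rgb)
# The wheal patch separates clearly from the background in the PC1 image.
```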

  17. 3D Reconstruction of Space Objects from Multi-Views by a Visible Sensor

    PubMed Central

    Zhang, Haopeng; Wei, Quanmao; Jiang, Zhiguo

    2017-01-01

    In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from its multi-view images captured by a visible sensor. Given an image sequence, this framework first estimates the relative camera poses and recovers the depths of the surface points by the structure from motion (SFM) method, then the patch-based multi-view stereo (PMVS) algorithm is utilized to generate a dense 3D point cloud. To resolve the incorrect matches arising from the symmetric structure and repeated textures of space objects, a new strategy is introduced in which images are added to SFM in imaging order. Meanwhile, a refining process exploiting the structural prior knowledge that most sub-components of artificial space objects are composed of basic geometric shapes is proposed and applied to the recovered point cloud. The proposed reconstruction framework is tested on both simulated and real image datasets. Experimental results illustrate that the recovered point cloud models of space objects are accurate and have a complete coverage of the surface. Moreover, outliers and points with severe noise are effectively filtered out by the refinement, resulting in a distinct improvement of the structure and visualization of the recovered points. PMID:28737675

  18. Systems and Methods for Imaging of Falling Objects

    NASA Technical Reports Server (NTRS)

    Fallgatter, Cale (Inventor); Garrett, Tim (Inventor)

    2014-01-01

    Imaging of falling objects is described. Multiple images of a falling object can be captured substantially simultaneously using multiple cameras located at multiple angles around the falling object. An epipolar geometry of the captured images can be determined. The images can be rectified to parallelize epipolar lines of the epipolar geometry. Correspondence points between the images can be identified. At least a portion of the falling object can be digitally reconstructed using the identified correspondence points to create a digital reconstruction.
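
    After rectification and correspondence matching, the 3D position of each matched point follows from linear (DLT) triangulation using the two cameras' projection matrices. A minimal numpy sketch with hypothetical camera poses, not the patent's specific procedure:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from its projections x1,
    x2 in two cameras with 3x4 projection matrices P1, P2 -- the geometric
    core of reconstructing an object seen from multiple angles."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: identity pose, and a 1-unit sideways shift.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.3, -0.2, 4.0])
project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
# With noise-free correspondences the point is recovered exactly.
```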

  19. Design and implementation of a contactless multiple hand feature acquisition system

    NASA Astrophysics Data System (ADS)

    Zhao, Qiushi; Bu, Wei; Wu, Xiangqian; Zhang, David

    2012-06-01

    In this work, an integrated contactless multiple hand feature acquisition system is designed. The system can capture palmprint, palm vein, and palm dorsal vein images simultaneously. Moreover, the images are captured in a contactless manner; that is, users need not touch any part of the device during capture. Palmprint is imaged under visible illumination while palm vein and palm dorsal vein are imaged under near-infrared (NIR) illumination. The capture is controlled by computer and the whole process takes less than 1 second, which is sufficient for online biometric systems. Based on this device, this paper also implements a contactless hand-based multimodal biometric system. Palmprint, palm vein, palm dorsal vein, finger vein, and hand geometry features are extracted from the captured images. After similarity measurement, the matching scores are fused using a weighted-sum fusion rule. Experimental results show that although the verification accuracy of each single modality is not as high as the state-of-the-art, the fusion result is superior to most existing hand-based biometric systems. This result indicates that the proposed device is competent for contactless multimodal hand-based biometrics.

  20. Mapping-by-sequencing in complex polyploid genomes using genic sequence capture: a case study to map yellow rust resistance in hexaploid wheat.

    PubMed

    Gardiner, Laura-Jayne; Bansept-Basler, Pauline; Olohan, Lisa; Joynson, Ryan; Brenchley, Rachel; Hall, Neil; O'Sullivan, Donal M; Hall, Anthony

    2016-08-01

    Previously we extended the utility of mapping-by-sequencing by combining it with sequence capture and mapping sequence data to pseudo-chromosomes that were organized using wheat-Brachypodium synteny. This, with a bespoke haplotyping algorithm, enabled us to map the flowering time locus in the diploid wheat Triticum monococcum L., identifying a set of deleted genes (Gardiner et al., 2014). Here, we develop this combination of gene enrichment and sliding-window mapping-by-synteny analysis to map the Yr6 locus for yellow stripe rust resistance in hexaploid wheat. A 110-Mb NimbleGen capture probe set was used to enrich and sequence a doubled haploid mapping population of hexaploid wheat derived from an Avalon and Cadenza cross. The Yr6 locus was identified by mapping to the POPSEQ chromosomal pseudomolecules using a bespoke pipeline and algorithm (Chapman et al., 2015). Furthermore, the same locus was identified using newly developed pseudo-chromosome sequences as a mapping reference, based on the genic sequence used for sequence enrichment. The pseudo-chromosomes allow us to demonstrate the application of mapping-by-sequencing even to poorly defined polyploid genomes where chromosomes are incomplete and sub-genome assemblies are collapsed. This analysis uniquely enabled us to: compare wheat genome annotations; identify the Yr6 locus - defining a smaller genic region than was previously possible; associate the interval with one wheat sub-genome; and increase the density of associated SNP markers. Finally, we built the pipeline in iPlant, making it a user-friendly community resource for phenotype mapping. © 2016 The Authors. The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
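
    The sliding-window scan at the heart of mapping-by-sequencing can be sketched as counting trait-associated SNP positions in overlapping windows along a pseudo-chromosome and reporting the peak window. The positions, window size, and step below are invented for illustration:

```python
def sliding_window_density(positions, chrom_len, window=1_000_000, step=250_000):
    """Count mapped SNP positions in sliding windows along a
    pseudo-chromosome; the peak window localizes the trait interval
    (simplified version of a mapping-by-sequencing scan)."""
    peaks = []
    start = 0
    while start + window <= chrom_len:
        n = sum(start <= p < start + window for p in positions)
        peaks.append((start, n))
        start += step
    # max() returns the first window reaching the peak count.
    return max(peaks, key=lambda t: t[1])

# Hypothetical trait-linked SNPs clustered around 3.2 Mb, plus two strays.
snps = [3_150_000, 3_180_000, 3_210_000, 3_260_000, 3_300_000,
        500_000, 7_800_000]
win_start, count = sliding_window_density(snps, chrom_len=10_000_000)
# The peak window covers the 3.2 Mb cluster, suggesting the locus position.
```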

  1. SERS detection of indirect viral DNA capture using colloidal gold and methylene blue as a Raman label

    USDA-ARS?s Scientific Manuscript database

    An indirect capture model assay using colloidal Au nanoparticles is demonstrated for surface enhanced Raman scattering (SERS) spectroscopy detection of DNA. The sequence targeted for capture is derived from the West Nile Virus (WNV) RNA genome and was selected on the basis of exhibiting minimal seco...

  2. Automated Mapping and Characterization of RSL from HiRISE data with MAARSL

    NASA Astrophysics Data System (ADS)

    Bue, Brian; Wagstaff, Kiri; Stillman, David

    2017-10-01

    Recurring slope lineae (RSL) are narrow (0.5-5 m) low-albedo features on Mars that recur, fade, and incrementally lengthen on steep slopes throughout the year. Determining the processes that generate RSL requires detailed analysis of high-resolution orbital images to measure RSL surface properties and seasonal variation. However, conducting this analysis manually is labor intensive, time consuming, and infeasible given the large number of relevant sites. This abstract describes the Mapping and Automated Analysis of RSL (MAARSL) system, which we designed to aid large-scale analysis of seasonal RSL properties. MAARSL takes an ordered sequence of high spatial resolution, orthorectified, and coregistered orbital image data (e.g., MRO HiRISE images) and a corresponding Digital Terrain Model (DTM) as input and performs three primary functions: (1) detect and delineate candidate RSL in each image, (2) compute statistics of surface morphology and observed radiance for each candidate, and (3) measure temporal variation between candidates in adjacent images. The main challenge in automatic image-based RSL detection is discriminating true RSL from other low-albedo regions such as shadows or changes in surface materials. To discriminate RSL from shadows, MAARSL constructs a linear illumination model for each image based on the DTM and the position and orientation of the instrument at image acquisition time. We filter out any low-albedo regions that appear to be shadows via a least-squares fit between the modeled illumination and the observed intensity in each image. False detections occur in areas where the 1 m/pixel HiRISE DTM poorly captures the variability of terrain observed in the 0.25 m/pixel HiRISE images. To remove these spurious detections, we developed an interactive machine learning graphical interface that uses expert input to filter and validate the RSL candidates. This tool yielded 636 candidates from a well-studied sequence of 18 HiRISE images of Garni crater in Valles Marineris with minimal manual effort. We describe our analysis of RSL candidates at Garni crater and Coprates Montes and ongoing studies of other regions where RSL occur.
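
    The least-squares shadow test described in this record can be sketched as fitting observed intensity as a linear function of modeled illumination over a candidate region: a near-perfect fit suggests shading, a poor fit suggests an albedo feature such as an RSL. The model form, variable names, and data below are assumptions for illustration:

```python
import numpy as np

def shadow_likeness(observed, modeled_illum):
    """RMS residual of the least-squares fit
    observed = a * modeled_illum + b over a candidate low-albedo region.
    A small residual flags the candidate as a probable shadow."""
    A = np.stack([modeled_illum, np.ones_like(modeled_illum)], axis=1)
    coef, *_ = np.linalg.lstsq(A, observed, rcond=None)
    resid = observed - A @ coef
    return float(np.sqrt(np.mean(resid ** 2)))

illum = np.linspace(0.2, 1.0, 50)                  # modeled illumination
shadow = 0.8 * illum + 0.05                        # darkening tracks the model
rsl = 0.5 - 0.3 * np.abs(np.linspace(-1, 1, 50))   # albedo feature, not shading
rms_shadow = shadow_likeness(shadow, illum)
rms_rsl = shadow_likeness(rsl, illum)
# The shadow-like candidate fits the illumination model far better.
```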

  3. Feature tracking for automated volume of interest stabilization on 4D-OCT images

    NASA Astrophysics Data System (ADS)

    Laves, Max-Heinrich; Schoob, Andreas; Kahrs, Lüder A.; Pfeiffer, Tom; Huber, Robert; Ortmaier, Tobias

    2017-03-01

    A common representation of volumetric medical image data is the triplanar view (TV), in which the surgeon manually selects slices showing the anatomical structure of interest. In addition to common medical imaging such as MRI or computed tomography, recent advances in the field of optical coherence tomography (OCT) have enabled live processing and volumetric rendering of four-dimensional images of the human body. Because the region of interest undergoes motion, it is challenging for the surgeon to keep track of an object by continuously adjusting the TV to the desired slices. To select these slices in subsequent frames automatically, it is necessary to track movements of the volume of interest (VOI). This has not yet been addressed for 4D-OCT images. Therefore, this paper evaluates motion tracking by applying state-of-the-art tracking schemes to maximum intensity projections (MIP) of 4D-OCT images. The estimated VOI location is used to conveniently show corresponding slices and to improve the MIPs by calculating thin-slab MIPs. Tracking performance is evaluated on an in-vivo sequence of human skin, captured at 26 volumes per second. Among the investigated tracking schemes, our recently presented scheme for soft tissue motion provides the highest accuracy, with an error under 2.2 voxels for the first 80 volumes. Object tracking on 4D-OCT images enables sub-epithelial tracking of microvessels for image guidance.

  4. Image charge effects on electron capture by dust grains in dusty plasmas.

    PubMed

    Jung, Y D; Tawara, H

    2001-07-01

    Electron-capture processes by negatively charged dust grains from hydrogenic ions in dusty plasmas are investigated in accordance with the classical Bohr-Lindhard model. The attractive interaction between the electron in a hydrogenic ion and its own image charge inside the dust grain is included to obtain the total interaction energy between the electron and the dust grain. The electron-capture radius is determined by the total interaction energy and the kinetic energy of the released electron in the frame of the projectile dust grain. The classical straight-line trajectory approximation is applied to the motion of the ion in order to visualize the electron-capture cross section as a function of the impact parameter, kinetic energy of the projectile ion, and dust charge. It is found that the image charge inside the dust grain plays a significant role in the electron-capture process near the surface of the dust grain. The electron-capture cross section is found to be quite sensitive to the collision energy and dust charge.

  5. FRAP Analysis: Accounting for Bleaching during Image Capture

    PubMed Central

    Wu, Jun; Shekhar, Nandini; Lele, Pushkar P.; Lele, Tanmay P.

    2012-01-01

    The analysis of Fluorescence Recovery After Photobleaching (FRAP) experiments involves mathematical modeling of the fluorescence recovery process. An important feature of FRAP experiments that tends to be ignored in the modeling is that there can be a significant loss of fluorescence due to bleaching during image capture. In this paper, we explicitly include the effects of bleaching during image capture in the model for the recovery process, instead of correcting for the effects of bleaching using reference measurements. Using experimental examples, we demonstrate the usefulness of such an approach in FRAP analysis. PMID:22912750
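
    The idea of building acquisition bleaching into the recovery model can be sketched as a recovery term multiplied by a per-frame bleaching decay. The model form and parameter values here are illustrative assumptions, not the paper's exact equations:

```python
import numpy as np

def frap_model(t, frame, A, k, beta):
    """Recovery A * (1 - exp(-k t)) attenuated by exp(-beta * frame),
    where `frame` counts image captures: bleaching during capture is part
    of the model instead of a separate reference correction."""
    return A * (1.0 - np.exp(-k * t)) * np.exp(-beta * frame)

t = np.arange(60.0)       # seconds, one frame captured per second
frames = np.arange(60)    # frame index drives the capture-bleaching term
data = frap_model(t, frames, A=1.0, k=0.15, beta=0.004)

# A coarse grid search over (A, k, beta) recovers the generating parameters.
best = min(
    ((A, k, b) for A in (0.8, 1.0, 1.2)
               for k in (0.05, 0.15, 0.30)
               for b in (0.0, 0.004, 0.01)),
    key=lambda p: np.sum((frap_model(t, frames, *p) - data) ** 2),
)
```

In practice a continuous optimizer would replace the grid search; the sketch only shows how the bleaching term enters the fit.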

  6. A Comparison of Earthquake Back-Projection Imaging Methods for Dense Local Arrays, and Application to the 2011 Virginia Aftershock Sequence

    NASA Astrophysics Data System (ADS)

    Beskardes, G. D.; Hole, J. A.; Wang, K.; Wu, Q.; Chapman, M. C.; Davenport, K. K.; Michaelides, M.; Brown, L. D.; Quiros, D. A.

    2016-12-01

    Back-projection imaging has recently become a practical method for local earthquake detection and location due to the deployment of densely sampled, continuously recorded, local seismograph arrays. Back-projection is scalable to earthquakes with a wide range of magnitudes, from very tiny to very large. Local dense arrays provide the opportunity to capture very tiny events for a range of applications, such as tectonic microseismicity, source scaling studies, wastewater injection-induced seismicity, hydraulic fracturing, CO2 injection monitoring, volcano studies, and mining safety. While back-projection sometimes utilizes the full seismic waveform, the waveforms are often pre-processed to overcome imaging issues. We compare the performance of back-projection using four previously used data pre-processing methods: full waveform, envelope, short-term averaging / long-term averaging (STA/LTA), and kurtosis. The goal is to identify an optimized strategy for an entirely automated imaging process that is robust in the presence of real-data issues, has the lowest signal-to-noise thresholds for detection and location, has the best spatial resolution of the energy imaged at the source, preserves magnitude information, and considers computational cost. Real-data issues include aliased station spacing, low signal-to-noise ratio (down to <1), large noise bursts, and spatially varying waveform polarity. For evaluation, the four imaging methods were applied to the aftershock sequence of the 2011 Virginia earthquake as recorded by the AIDA array with 200-400 m station spacing. These data include earthquake magnitudes from -2 to 3 with highly variable signal to noise, spatially aliased noise, and large noise bursts: realistic issues in many environments. Each of the four back-projection methods has advantages and disadvantages, and a combined multi-pass method achieves the best of all criteria. Preliminary imaging results from the 2011 Virginia dataset will be presented.
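
    One of the pre-processing options compared above, the STA/LTA ratio, is easy to sketch: the ratio of a short trailing energy average to a long one peaks at impulsive arrivals. A numpy sketch on a synthetic trace (window lengths and noise levels are illustrative):

```python
import numpy as np

def sta_lta(x, nsta, nlta):
    """Short-term / long-term average ratio of signal energy; peaks in the
    ratio mark impulsive arrivals, making weak events easier to image."""
    e = np.asarray(x, dtype=float) ** 2
    csum = np.concatenate([[0.0], np.cumsum(e)])
    sta = (csum[nsta:] - csum[:-nsta]) / nsta   # windows of nsta samples
    lta = (csum[nlta:] - csum[:-nlta]) / nlta   # windows of nlta samples
    # Align so ratio[j] compares windows that END at the same sample.
    return sta[nlta - nsta:] / np.maximum(lta, 1e-12)

# Synthetic trace: background noise plus an impulsive "aftershock" arrival.
rng = np.random.default_rng(2)
trace = 0.1 * rng.standard_normal(500)
trace[300:310] += 2.0
ratio = sta_lta(trace, nsta=10, nlta=100)
# ratio index j corresponds to sample j + nlta - 1 of the original trace.
pick = int(np.argmax(ratio)) + 100 - 1
```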

  7. High-Density Dielectrophoretic Microwell Array for Detection, Capture, and Single-Cell Analysis of Rare Tumor Cells in Peripheral Blood.

    PubMed

    Morimoto, Atsushi; Mogami, Toshifumi; Watanabe, Masaru; Iijima, Kazuki; Akiyama, Yasuyuki; Katayama, Koji; Futami, Toru; Yamamoto, Nobuyuki; Sawada, Takeshi; Koizumi, Fumiaki; Koh, Yasuhiro

    2015-01-01

    Development of a reliable platform and workflow to detect and capture a small number of mutation-bearing circulating tumor cells (CTCs) from a blood sample is necessary for the development of noninvasive cancer diagnosis. In this preclinical study, we aimed to develop a capture system for molecular characterization of single CTCs based on high-density dielectrophoretic microwell array technology. Spike-in experiments using lung cancer cell lines were conducted. The microwell array was used to capture spiked cancer cells, and captured single cells were subjected to whole genome amplification followed by sequencing. A high detection rate (70.2%-90.0%) and excellent linear performance (R2 = 0.8189-0.9999) were noted between the observed and expected numbers of tumor cells. The detection rate was markedly higher than that obtained using the CellSearch system in a blinded manner, suggesting the superior sensitivity of our system in detecting EpCAM- tumor cells. Isolation of single captured tumor cells, followed by detection of EGFR mutations, was achieved using Sanger sequencing. Using a microwell array, we established an efficient and convenient platform for the capture and characterization of single CTCs. The results of a proof-of-principle preclinical study indicated that this platform has potential for the molecular characterization of captured CTCs from patients.

  8. JunoCam Images of Jupiter: Science from an Outreach Experiment

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Orton, G. S.; Caplinger, M. A.; Ravine, M. A.; Rogers, J.; Eichstädt, G.; Jensen, E.; Bolton, S. J.; Momary, T.; Ingersoll, A. P.

    2017-12-01

    The Juno mission to Jupiter carries a visible imager on its payload primarily for outreach, and also very useful for jovian atmospheric science. Lacking a formal imaging science team, members of the public have volunteered to process JunoCam images. Lightly processed and raw JunoCam data are posted on the JunoCam webpage at https://missionjuno.swri.edu/junocam/processing. Citizen scientists download these images and upload their processed contributions. JunoCam images through broadband red, green and blue filters and a narrowband methane filter centered at 889 nm mounted directly on the detector. JunoCam is a push-frame imager with a 58 deg wide field of view covering a 1600 pixel width, and builds the second dimension of the image as the spacecraft rotates. This design enables capture of the entire pole of Jupiter in a single image at low emission angle when Juno is 1 hour from perijove (closest approach). At perijove the wide field of view images are high-resolution while still capturing entire storms, e.g. the Great Red Spot. Juno's unique polar orbit yields polar perspectives unavailable to earth-based observers or most previous spacecraft. The first discovery was that the familiar belt-zone structure gives way to more chaotic storms, with cyclones grouped around both the north and south poles [1, 2]. Recent time-lapse sequences have enabled measurement of the rotation rates and wind speeds of these circumpolar cyclones [3]. Other topics are being investigated with substantial, in many cases essential, contributions from citizen scientists. These include correlating the high resolution JunoCam images to storms and disruptions of the belts and zones tracked throughout the historical record. A phase function for Jupiter is being developed empirically to allow image brightness to be flattened from the subsolar point to the terminator. 
We are studying high hazes and the stratigraphy of the upper atmosphere, utilizing the methane filter, structures illuminated beyond the terminator, and clouds casting shadows. Numerous high altitude clouds have been detected and we are investigating whether they are the jovian equivalent of squall lines. [1] Bolton, S. et al. (2017) Science 356:821; [2] Orton, G. et al. (2017) GRL 44:4599; [3] Adriani, A. et al. (2017) submitted to Nature.

  9. Looking beyond the exome: a phenotype-first approach to molecular diagnostic resolution in rare and undiagnosed diseases.

    PubMed

    Pena, Loren D M; Jiang, Yong-Hui; Schoch, Kelly; Spillmann, Rebecca C; Walley, Nicole; Stong, Nicholas; Rapisardo Horn, Sarah; Sullivan, Jennifer A; McConkie-Rosell, Allyn; Kansagra, Sujay; Smith, Edward C; El-Dairi, Mays; Bellet, Jane; Keels, Martha Ann; Jasien, Joan; Kranz, Peter G; Noel, Richard; Nagaraj, Shashi K; Lark, Robert K; Wechsler, Daniel S G; Del Gaudio, Daniela; Leung, Marco L; Hendon, Laura G; Parker, Collette C; Jones, Kelly L; Goldstein, David B; Shashi, Vandana

    2018-04-01

    Purpose: To describe examples of missed pathogenic variants on whole-exome sequencing (WES) and the importance of deep phenotyping for further diagnostic testing. Methods: Guided by phenotypic information, three children with negative WES underwent targeted single-gene testing. Results: Individual 1 had a clinical diagnosis consistent with infantile systemic hyalinosis, although WES and a next-generation sequencing (NGS)-based ANTXR2 test were negative. Sanger sequencing of ANTXR2 revealed a homozygous single base pair insertion, previously missed by the WES variant caller software. Individual 2 had neurodevelopmental regression and cerebellar atrophy, with no diagnosis on WES. New clinical findings prompted Sanger sequencing and copy number testing of PLA2G6. A novel homozygous deletion of the noncoding exon 1 (not included in the WES capture kit) was detected, with extension into the promoter, confirming the clinical suspicion of infantile neuroaxonal dystrophy. Individual 3 had progressive ataxia, spasticity, and magnetic resonance image changes of vanishing white matter leukoencephalopathy. An NGS leukodystrophy gene panel and WES showed a heterozygous pathogenic variant in EIF2B5; no deletions/duplications were detected. Sanger sequencing of EIF2B5 showed a frameshift indel, probably missed owing to failure of alignment. Conclusion: These cases illustrate potential pitfalls of WES/NGS testing and the importance of phenotype-guided molecular testing in yielding diagnoses.

  10. High Density Aerial Image Matching: State-of-the-Art and Future Prospects

    NASA Astrophysics Data System (ADS)

    Haala, N.; Cavegn, S.

    2016-06-01

    Ongoing innovations in matching algorithms are continuously improving the quality of geometric surface representations generated automatically from aerial images. This development motivated the launch of the joint ISPRS/EuroSDR project "Benchmark on High Density Aerial Image Matching", which aims at the evaluation of photogrammetric 3D data capture in view of the current developments in dense multi-view stereo-image matching. Originally, the test aimed at image-based DSM computation from conventional aerial image flights for different land-use and image block configurations. The second phase then put an additional focus on high-quality, high-resolution 3D geometric data capture in complex urban areas. This includes both the extension of the test scenario to oblique aerial image flights and the generation of filtered point clouds as additional output of the respective multi-view reconstruction. The paper uses the preliminary outcomes of the benchmark to demonstrate the state of the art in airborne image matching, with a special focus on high-quality geometric data capture in urban scenarios.

  11. NASA CloudSat Captures Hurricane Daniel Transformation

    NASA Image and Video Library

    2006-07-25

    Hurricane Daniel intensified between July 18 and July 23. NASA's new CloudSat satellite was able to capture and confirm this transformation in its side-view images of Hurricane Daniel, as seen in this series of images.

  12. 3D reconstruction based on light field images

    NASA Astrophysics Data System (ADS)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposed a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work was carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure from motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a 3D sparse point cloud. The method shows that 3D reconstruction can be implemented with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming and laborious nature of 3D reconstruction with traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
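    The SIFT-based registration step rests on nearest-neighbour descriptor matching with Lowe's ratio test. A minimal numpy sketch on synthetic descriptors follows; the 8-D descriptors and the 0.75 ratio are illustrative stand-ins for real 128-D SIFT output, not the paper's implementation.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """For each descriptor in desc_a, find its two nearest neighbours in
    desc_b and accept the match only if the closest is markedly closer
    than the runner-up (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

# Synthetic 8-D descriptors: desc_b is a shuffled, slightly noisy copy of desc_a
rng = np.random.default_rng(1)
desc_a = rng.normal(size=(20, 8))
perm = rng.permutation(20)
desc_b = desc_a[perm] + rng.normal(scale=0.01, size=(20, 8))
matches = ratio_test_matches(desc_a, desc_b)
print(all(perm[j] == i for i, j in matches))  # accepted matches recover the permutation
```

The accepted correspondences would then feed the SFM stage (e.g. essential-matrix estimation and triangulation).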

  13. Spatially selective photonic crystal enhanced fluorescence and application to background reduction for biomolecule detection assays

    PubMed Central

    Chaudhery, Vikram; Huang, Cheng-Sheng; Pokhriyal, Anusha; Polans, James; Cunningham, Brian T.

    2011-01-01

    By combining photonic crystal label-free biosensor imaging with photonic crystal enhanced fluorescence, it is possible to selectively enhance the fluorescence emission from regions of the PC surface based upon the density of immobilized capture molecules. A label-free image of the capture molecules enables determination of optimal coupling conditions of the laser used for fluorescence imaging of the photonic crystal surface on a pixel-by-pixel basis, allowing maximization of fluorescence enhancement factor from regions incorporating a biomolecule capture spot and minimization of background autofluorescence from areas between capture spots. This capability significantly improves the contrast of enhanced fluorescent images, and when applied to an antibody protein microarray, provides a substantial advantage over conventional fluorescence microscopy. Using the new approach, we demonstrate detection limits as low as 0.97 pg/ml for a representative protein biomarker in buffer. PMID:22109210

  14. Spatially selective photonic crystal enhanced fluorescence and application to background reduction for biomolecule detection assays.

    PubMed

    Chaudhery, Vikram; Huang, Cheng-Sheng; Pokhriyal, Anusha; Polans, James; Cunningham, Brian T

    2011-11-07

    By combining photonic crystal label-free biosensor imaging with photonic crystal enhanced fluorescence, it is possible to selectively enhance the fluorescence emission from regions of the PC surface based upon the density of immobilized capture molecules. A label-free image of the capture molecules enables determination of optimal coupling conditions of the laser used for fluorescence imaging of the photonic crystal surface on a pixel-by-pixel basis, allowing maximization of fluorescence enhancement factor from regions incorporating a biomolecule capture spot and minimization of background autofluorescence from areas between capture spots. This capability significantly improves the contrast of enhanced fluorescent images, and when applied to an antibody protein microarray, provides a substantial advantage over conventional fluorescence microscopy. Using the new approach, we demonstrate detection limits as low as 0.97 pg/ml for a representative protein biomarker in buffer.

  15. High-contrast fast Fourier transform acousto-optical tomography of phantom tissues with a frequency-chirp modulation of the ultrasound.

    PubMed

    Forget, Benoît-Claude; Ramaz, François; Atlan, Michaël; Selb, Juliette; Boccara, Albert-Claude

    2003-03-01

    We report new results on acousto-optical tomography in phantom tissues using a frequency chirp modulation and a CCD camera. This technique allows quick recording of three-dimensional images of the optical contrast with a two-dimensional scan of the ultrasound source in a plane perpendicular to the ultrasonic path. The entire optical contrast along the ultrasonic path is concurrently obtained from the capture of a film sequence at a rate of 200 Hz. This technique reduces the acquisition time, and it enhances the axial resolution and thus the contrast, which are usually poor owing to the large volume of interaction of the ultrasound perturbation.
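    A linear frequency chirp of the kind used to modulate the ultrasound can be generated directly from its instantaneous phase. The sample rate and sweep range below are illustrative, not the experimental values.

```python
import numpy as np

fs = 10_000.0                  # sample rate in Hz (illustrative)
t = np.arange(0, 0.1, 1 / fs)  # 100 ms sweep
f0, f1 = 100.0, 1000.0         # start and end frequencies of the sweep
k = (f1 - f0) / t[-1]          # linear chirp rate, Hz per second
# A linear chirp has instantaneous phase 2*pi*(f0*t + k*t**2/2),
# so its instantaneous frequency rises linearly from f0 to f1.
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))
```

Because frequency maps linearly onto time, and hence onto position along the ultrasonic path, demodulating at a given frequency isolates the optical contrast at the corresponding depth.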

  16. Detection of hepatitis C virus RNA using ligation-dependent polymerase chain reaction in formalin-fixed, paraffin-embedded liver tissues.

    PubMed Central

    Park, Y. N.; Abe, K.; Li, H.; Hsuih, T.; Thung, S. N.; Zhang, D. Y.

    1996-01-01

    Reverse transcription polymerase chain reaction (RT-PCR) has been used to detect hepatitis C virus (HCV) sequences in liver tissue. However, RT-PCR has a variable detection sensitivity, especially on routinely processed formalin-fixed, paraffin-embedded (FFPE) specimens. RNA-RNA and RNA-protein cross-links formed during formalin fixation are the major limiting factor preventing reverse transcriptase from extending the primers. To overcome this problem, we applied the ligation-dependent PCR (LD-PCR) for the detection of HCV RNA in FFPE liver tissue. This method uses two capture probes for RNA isolation and two hemiprobes for the subsequent PCR. Despite cross-links, the capture probes and the hemiprobes are able to form hybrids with HCV RNAs released from the FFPE tissue. The hybrids are isolated through binding of the capture probes to paramagnetic beads. The hemiprobes are then ligated by a T4 DNA ligase to form a full probe that serves as a template for the Taq DNA polymerase. A total of 22 FFPE liver specimens, 21 with hepatocellular carcinoma (HCC) and 1 with biliary cirrhosis secondary to bile duct atresia, were selected for this study, of which 13 patients were HCV seropositive and 9 seronegative. HCV RNA was detectable by LD-PCR from all 13 HCV-seropositive HCCs and from 5 of 8 HCV-seronegative HCCs, but not from the HCV-seronegative liver with biliary atresia. By contrast, RT-PCR detected HCV sequences in only 5 of the HCV-seropositive and in 1 of the HCV-seronegative HCCs. To resolve the discordance between the LD-PCR and RT-PCR results, RT-PCR was performed on frozen liver tissue of the discrepant specimens, which confirmed the LD-PCR positive results. In conclusion, LD-PCR is a more sensitive method than RT-PCR for the detection of HCV sequences in routinely processed liver tissues. A high rate of HCV infection (86%) is found in HCC specimens, indicating a previously underestimated role of HCV in HCC pathogenesis. PMID:8909238

  17. Automatic summarization of changes in biological image sequences using algorithmic information theory.

    PubMed

    Cohen, Andrew R; Bjornsson, Christopher S; Temple, Sally; Banker, Gary; Roysam, Badrinath

    2009-08-01

    An algorithmic information-theoretic method is presented for object-level summarization of meaningful changes in image sequences. Object extraction and tracking data are represented as an attributed tracking graph (ATG). Time courses of object states are compared using an adaptive information distance measure, aided by a closed-form multidimensional quantization. The notion of meaningful summarization is captured by using the gap statistic to estimate the randomness deficiency from algorithmic statistics. The summary is the clustering result and feature subset that maximize the gap statistic. This approach was validated on four bioimaging applications: 1) It was applied to a synthetic data set containing two populations of cells differing in the rate of growth, for which it correctly identified the two populations and the single feature out of 23 that separated them; 2) it was applied to 59 movies of three types of neuroprosthetic devices being inserted in the brain tissue at three speeds each, for which it correctly identified insertion speed as the primary factor affecting tissue strain; 3) when applied to movies of cultured neural progenitor cells, it correctly distinguished neurons from progenitors without requiring the use of a fixative stain; and 4) when analyzing intracellular molecular transport in cultured neurons undergoing axon specification, it automatically confirmed the role of kinesins in axon specification.
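    The adaptive information distance used here descends from the (uncomputable) normalized information distance of algorithmic information theory. A common practical proxy, not the authors' exact measure, is the normalized compression distance computed with a real compressor such as zlib:

```python
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: approximates the (uncomputable)
    normalized information distance using a real compressor."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abcabcabc" * 50            # highly regular sequence
b_ = b"abcabcabc" * 50           # identical content: distance near 0
random.seed(0)
c = bytes(random.randrange(256) for _ in range(450))  # unrelated noise

print(ncd(a, b_), ncd(a, c))     # small value, then a value near 1
```

Objects whose descriptions compress well together are "close"; clustering under such a distance is what lets the summarization remain feature-agnostic.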

  18. A Data Hiding Technique to Synchronously Embed Physiological Signals in H.264/AVC Encoded Video for Medicine Healthcare.

    PubMed

    Peña, Raul; Ávila, Alfonso; Muñoz, David; Lavariega, Juan

    2015-01-01

    The recognition of clinical manifestations in both video images and physiological-signal waveforms is an important aid to improve the safety and effectiveness of medical care. Physicians can rely on video-waveform (VW) observations to recognize difficult-to-spot signs and symptoms. The VW observations can also reduce the number of false positive incidents and expand the recognition coverage to abnormal health conditions. The synchronization between the video images and the physiological-signal waveforms is fundamental for the successful recognition of the clinical manifestations. The use of conventional equipment to synchronously acquire and display the video-waveform information involves complex tasks such as the video capture/compression, the acquisition/compression of each physiological signal, and the video-waveform synchronization based on timestamps. This paper introduces a data hiding technique capable of both enabling embedding channels and synchronously hiding samples of physiological signals into encoded video sequences. Our data hiding technique offers large data capacity and reduces the complexity of the video-waveform acquisition and reproduction. The experimental results revealed successful embedding and full restoration of the signal's samples. Our results also demonstrated a small distortion in the video objective quality, a small increment in bit-rate, and embedded cost savings of -2.6196% for high and medium motion video sequences.
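    The paper embeds samples inside H.264/AVC syntax elements; as a far simpler stand-in for the idea of hiding physiological samples in video data, here is a least-significant-bit sketch on a raw grayscale frame. The `ecg` samples and frame are hypothetical, and this is not the authors' method.

```python
import numpy as np

def embed_lsb(frame, samples):
    """Hide 8-bit samples in the least-significant bits of the first
    8*len(samples) pixels (one bit per pixel)."""
    bits = np.unpackbits(np.asarray(samples, dtype=np.uint8))
    out = frame.flatten().copy()
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits
    return out.reshape(frame.shape)

def extract_lsb(frame, n_samples):
    """Recover n_samples 8-bit values from the pixel LSBs."""
    bits = frame.flatten()[:8 * n_samples] & 1
    return np.packbits(bits)

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
ecg = np.array([120, 4, 250, 33], dtype=np.uint8)  # hypothetical signal samples
stego = embed_lsb(frame, ecg)
print(extract_lsb(stego, 4))  # recovers the hidden samples
```

Each pixel changes by at most one intensity level, which mirrors the paper's goal of keeping the distortion in objective video quality small.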

  19. A method for the real-time construction of a full parallax light field

    NASA Astrophysics Data System (ADS)

    Tanaka, Kenji; Aoki, Soko

    2006-02-01

    We designed and implemented a light field acquisition and reproduction system for dynamic objects called LiveDimension, which serves as a 3D live video system for multiple viewers. The acquisition unit consists of circularly arranged NTSC cameras surrounding an object. The display consists of circularly arranged projectors and a rotating screen. The projectors are constantly projecting images captured by the corresponding cameras onto the screen. The screen rotates around an in-plane vertical axis at a sufficient speed so that it faces each of the projectors in sequence. Since the Lambertian surfaces of the screens are covered by light-collimating plastic films with vertical louver patterns that are used for the selection of appropriate light rays, viewers can only observe images from a projector located in the same direction as the viewer. Thus, the dynamic view of an object is dependent on the viewer's head position. We evaluated the system by projecting both objects and human figures and confirmed that the entire system can reproduce light fields with a horizontal parallax to display video sequences of 430x770 pixels at a frame rate of 45 fps. Applications of this system include product design reviews, sales promotion, art exhibits, fashion shows, and sports training with form checking.

  20. Panoramic 3D Reconstruction by Fusing Color Intensity and Laser Range Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Lu, Jian

    Technology for capturing panoramic (360 degrees) three-dimensional information in a real environment has many applications in fields such as virtual and augmented reality, security, robot navigation, and so forth. In this study, we examine an acquisition device constructed of a regular CCD camera and a 2D laser range scanner, along with a technique for panoramic 3D reconstruction using a data fusion algorithm based on an energy minimization framework. The acquisition device can capture two types of data of a panoramic scene without occlusion between two sensors: a dense spatio-temporal volume from a camera and distance information from a laser scanner. We resample the dense spatio-temporal volume to generate a dense multi-perspective panorama that has spatial resolution equal to that of the original images acquired using a regular camera, and also estimate a dense panoramic depth-map corresponding to the generated reference panorama by extracting trajectories from the dense spatio-temporal volume with a selecting camera. Moreover, for determining distance information robustly, we propose a data fusion algorithm that is embedded into an energy minimization framework that incorporates active depth measurements using a 2D laser range scanner and passive geometry reconstruction from an image sequence obtained using the CCD camera. Thereby, measurement precision and robustness can be improved beyond those achievable with conventional methods using either passive geometry reconstruction (stereo vision) or a laser range scanner. Experimental results using both synthetic and actual images show that our approach can produce high-quality panoramas and perform accurate 3D reconstruction in a panoramic environment.

  1. A Variational Approach to Video Registration with Subspace Constraints.

    PubMed

    Garg, Ravi; Roussos, Anastasios; Agapito, Lourdes

    2013-01-01

    This paper addresses the problem of non-rigid video registration, or the computation of optical flow from a reference frame to each of the subsequent images in a sequence, when the camera views deformable objects. We exploit the high correlation between 2D trajectories of different points on the same non-rigid surface by assuming that the displacement of any point throughout the sequence can be expressed in a compact way as a linear combination of a low-rank motion basis. This subspace constraint effectively acts as a trajectory regularization term leading to temporally consistent optical flow. We formulate it as a robust soft constraint within a variational framework by penalizing flow fields that lie outside the low-rank manifold. The resulting energy functional can be decoupled into the optimization of the brightness constancy and spatial regularization terms, leading to an efficient optimization scheme. Additionally, we propose a novel optimization scheme for the case of vector-valued images, based on the dualization of the data term. This allows us to extend our approach to deal with colour images, which results in significant improvements in the registration results. Finally, we provide a new benchmark dataset, based on motion capture data of a flag waving in the wind, with dense ground truth optical flow for evaluation of multi-frame optical flow algorithms for non-rigid surfaces. Our experiments show that our proposed approach outperforms state-of-the-art optical flow and dense non-rigid registration algorithms.
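    The low-rank subspace constraint can be illustrated by projecting a 2F x P trajectory matrix (x and y rows per frame, one column per tracked point) onto its best rank-r approximation via truncated SVD. This is a sketch of the constraint only, not the paper's full variational solver, and the synthetic trajectories are illustrative.

```python
import numpy as np

def low_rank_project(W, r):
    """Project a 2F x P trajectory matrix onto its best rank-r
    approximation (Eckart-Young), i.e. onto the motion-basis subspace."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# Synthetic trajectories: P points over F frames driven by r = 2 basis motions
rng = np.random.default_rng(3)
F, P, r = 30, 50, 2
basis = rng.normal(size=(2 * F, r))    # two rows (x, y) per frame
coeffs = rng.normal(size=(r, P))       # per-point basis coefficients
W = basis @ coeffs + 0.01 * rng.normal(size=(2 * F, P))
W_proj = low_rank_project(W, r)
residual = np.linalg.norm(W - W_proj) / np.linalg.norm(W)
print(residual < 0.05)  # only the small noise component lies outside the subspace
```

In the paper this projection appears as a soft penalty on flow fields that leave the low-rank manifold, rather than a hard projection as sketched here.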

  2. High resolution depth reconstruction from monocular images and sparse point clouds using deep convolutional neural network

    NASA Astrophysics Data System (ADS)

    Dimitrievski, Martin; Goossens, Bart; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Understanding the 3D structure of the environment is advantageous for many tasks in the field of robotics and autonomous vehicles. From the robot's point of view, 3D perception is often formulated as a depth image reconstruction problem. In the literature, dense depth images are often recovered deterministically from stereo image disparities. Other systems use an expensive LiDAR sensor to produce accurate, but semi-sparse depth images. With the advent of deep learning there have also been attempts to estimate depth by only using monocular images. In this paper we combine the best of both worlds, focusing on a combination of monocular images and low-cost LiDAR point clouds. We explore the idea that very sparse depth information accurately captures the global scene structure while variations in image patches can be used to reconstruct local depth to a high resolution. The main contribution of this paper is a supervised learning depth reconstruction system based on a deep convolutional neural network. The network is trained on RGB image patches reinforced with sparse depth information and the output is a depth estimate for each pixel. Using image and point cloud data from the KITTI vision dataset we are able to learn a correspondence between local RGB information and local depth, while at the same time preserving the global scene structure. Our results are evaluated on sequences from the KITTI dataset and our own recordings using a low-cost camera and LiDAR setup.

  3. Fusion of infrared and visible images based on BEMD and NSDFB

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Huang, Zhanhua; Lei, Hai

    2016-07-01

    This paper presents a new fusion method based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB) for visible-infrared images. Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it relatively more suitable for non-linear signal decomposition and fusion. NSDFB can provide direction filtering on the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated and the residue of the image whose entropy is larger is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands in different scales by using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used in the low-frequency sub-bands and high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm can obtain state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained in different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich detail information, which are more suitable for human visual characteristics or machine perception.
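    Typical sub-band fusion rules of the kind referred to above are easy to sketch: average the low-frequency approximations and keep the per-coefficient maximum-absolute response in the high-frequency sub-bands. This is a generic simplification; the paper's exact rules may differ.

```python
import numpy as np

def fuse_highfreq(sub_a, sub_b):
    """Per-coefficient max-absolute-value rule for high-frequency sub-bands:
    keep whichever source coefficient has the stronger response."""
    return np.where(np.abs(sub_a) >= np.abs(sub_b), sub_a, sub_b)

def fuse_lowfreq(sub_a, sub_b):
    """Simple averaging rule for the low-frequency approximations."""
    return 0.5 * (sub_a + sub_b)

# Toy 2x2 high-frequency sub-bands from two sources
a = np.array([[3.0, -0.5], [0.2,  4.0]])
b = np.array([[-1.0, 2.0], [0.1, -5.0]])
print(fuse_highfreq(a, b))  # strongest response survives at each coefficient
```

The fused sub-bands are then passed through the inverse multi-scale transform to produce the fused image.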

  4. Retrotransposon Capture Sequencing (RC-Seq): A Targeted, High-Throughput Approach to Resolve Somatic L1 Retrotransposition in Humans.

    PubMed

    Sanchez-Luque, Francisco J; Richardson, Sandra R; Faulkner, Geoffrey J

    2016-01-01

    Mobile genetic elements (MGEs) are of critical importance in genomics and developmental biology. Polymorphic and somatic MGE insertions have the potential to impact the phenotype of an individual, depending on their genomic locations and functional consequences. However, the identification of polymorphic and somatic insertions among the plethora of copies residing in the genome presents a formidable technical challenge. Whole genome sequencing has the potential to address this problem; however, its efficacy depends on the abundance of cells carrying the new insertion. Robust detection of somatic insertions present in only a subset of cells within a given sample can also be prohibitively expensive due to a requirement for high sequencing depth. Here, we describe retrotransposon capture sequencing (RC-seq), a sequence capture approach in which Illumina libraries are enriched for fragments containing the 5' and 3' termini of specific MGEs. RC-seq allows the detection of known polymorphic insertions present in an individual, as well as the identification of rare or private germline insertions not previously described. Furthermore, RC-seq can be used to detect and characterize somatic insertions, providing a valuable tool to elucidate the extent and characteristics of MGE activity in healthy tissues and in various disease states.

  5. A Study of Light Level Effect on the Accuracy of Image Processing-based Tomato Grading

    NASA Astrophysics Data System (ADS)

    Prijatna, D.; Muhaemin, M.; Wulandari, R. P.; Herwanto, T.; Saukat, M.; Sugandi, W. K.

    2018-05-01

    Image processing methods have been used in non-destructive tests of agricultural products. Compared to the manual method, image processing may produce more objective and consistent results. The image capturing box installed in the currently used tomato grading machine (TEP-4) is equipped with four fluorescent lamps to illuminate the processed tomatoes. Since the performance of any lamp will decrease once its service time has exceeded its lifetime, it is predicted that this will affect tomato classification. The objective of this study was to determine the minimum light level that affects classification accuracy. This study was conducted by varying the light level from minimum to maximum on tomatoes in the image capturing box and then investigating its effects on image characteristics. Research results showed that light intensity affects two variables which are important for classification, namely the area and color of the captured image. The image processing program was able to correctly determine the weight and classification of tomatoes when the light level was between 30 lx and 140 lx.

  6. Application of TrackEye in equine locomotion research.

    PubMed

    Drevemo, S; Roepstorff, L; Kallings, P; Johnston, C J

    1993-01-01

    TrackEye is an analysis system applicable to equine biokinematic studies. It covers the whole process from digitizing of images through automatic target tracking to analysis. Key components in the system are an image workstation for processing of video images and a high-resolution film-to-video scanner for 16-mm film. A recording module controls the input device and handles the capture of image sequences into a videodisc system, and a tracking module is able to follow reference markers automatically. The system offers flexible analysis including calculations of marker displacements, distances and joint angles, velocities and accelerations. TrackEye was used to study effects of phenylbutazone on the fetlock and carpal joint angle movements in a horse with a mild lameness caused by osteo-arthritis in the fetlock joint of a forelimb. Significant differences, most evident before treatment, were observed in the minimum fetlock and carpal joint angles when contralateral limbs were compared (p < 0.001). The minimum fetlock angle and the minimum carpal joint angle were significantly greater in the lame limb before treatment compared to those 6, 37 and 49 h after the last treatment (p < 0.001).

  7. Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study.

    PubMed

    Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan

    2017-02-03

    The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen from the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, and safe, and has low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications.
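    The frame-subtraction stage can be sketched with plain numpy: difference consecutive frames and threshold the absolute change to flag motion pixels. The threshold and the synthetic "chest" frames below are illustrative, not the study's parameters.

```python
import numpy as np

def motion_mask(prev, curr, thresh=15):
    """Binary mask of pixels whose intensity changed by more than
    `thresh` between consecutive frames (simple frame subtraction)."""
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return diff > thresh

# Two synthetic frames: a bright 8x8 "chest" patch shifts down by two pixels
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[20:28, 20:28] = 200
curr[22:30, 20:28] = 200
mask = motion_mask(prev, curr)
print(mask.sum())  # pixels flagged at the leading and trailing edges of the patch
```

Counting flagged pixels per frame over time yields a quasi-periodic signal whose dominant frequency estimates the respiratory rate; its prolonged absence would indicate apnoea.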

  8. Online tracking of outdoor lighting variations for augmented reality with moving cameras.

    PubMed

    Liu, Yanli; Granier, Xavier

    2012-04-01

    In augmented reality, one of the key tasks in achieving a convincing visual consistency between virtual objects and video scenes is maintaining coherent illumination along the whole sequence. As outdoor illumination depends largely on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable ones. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated through an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.
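The core estimation step, recovering two relative intensities from per-point observations, can be posed as a small least-squares problem. A loose sketch only: the sun/sky basis terms below are hypothetical stand-ins for what the paper derives from planar feature points, and the reliability constraints and temporal coherence are omitted:

```python
import numpy as np

def estimate_sun_sky(sun_basis, sky_basis, observed):
    """Model each feature point's observed intensity as a linear mix of a
    sunlight term and a skylight term, and solve for the two relative
    intensities by least squares."""
    A = np.column_stack([sun_basis, sky_basis])
    (sun, sky), *_ = np.linalg.lstsq(A, observed, rcond=None)
    return sun, sky

# Synthetic frame with true relative intensities 0.8 (sun) and 0.3 (sky)
sun_b = np.array([0.9, 0.1, 0.5, 0.7])
sky_b = np.array([0.2, 0.8, 0.5, 0.3])
obs = 0.8 * sun_b + 0.3 * sky_b
sun, sky = estimate_sun_sky(sun_b, sky_b, obs)
print(round(sun, 3), round(sky, 3))  # -> 0.8 0.3
```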

  9. Dual gait generative models for human motion estimation from a single camera.

    PubMed

    Zhang, Xin; Fan, Guoliang

    2010-08-01

    This paper presents a general gait representation framework for video-based human motion estimation. Specifically, we want to estimate the kinematics of an unknown gait from image sequences taken by a single camera. This approach involves two generative models, called the kinematic gait generative model (KGGM) and the visual gait generative model (VGGM), which represent the kinematics and appearances of a gait, respectively, by a few latent variables. The concept of a gait manifold is proposed to capture the gait variability among different individuals, by which KGGM and VGGM can be integrated, so that a new gait with unknown kinematics can be inferred from gait appearances via KGGM and VGGM. Moreover, a new particle-filtering algorithm is proposed for dynamic gait estimation, which is embedded with a segmental jump-diffusion Markov chain Monte Carlo scheme to accommodate gait variability in a long observed sequence. The proposed algorithm is trained on the Carnegie Mellon University (CMU) Mocap data and tested on the Brown University HumanEva data with promising results.

  10. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Gul, M. Shahzeb Khan; Gunturk, Bahadir K.

    2018-05-01

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.

  11. Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks.

    PubMed

    Gul, M Shahzeb Khan; Gunturk, Bahadir K

    2018-05-01

    Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.

  12. High-sensitivity HLA typing by Saturated Tiling Capture Sequencing (STC-Seq).

    PubMed

    Jiao, Yang; Li, Ran; Wu, Chao; Ding, Yibin; Liu, Yanning; Jia, Danmei; Wang, Lifeng; Xu, Xiang; Zhu, Jing; Zheng, Min; Jia, Junling

    2018-01-15

    Highly polymorphic human leukocyte antigen (HLA) genes are responsible for fine-tuning the adaptive immune system. High-resolution HLA typing is important for the treatment of autoimmune and infectious diseases, and is routinely performed to identify matched donors in transplantation medicine. Although many HLA typing approaches have been developed, the complexity, low efficiency and high cost of current HLA-typing assays limit their application in population-based high-throughput HLA typing of donors, which is required for creating large-scale databases for transplantation and precision medicine. Here, we present a cost-efficient Saturated Tiling Capture Sequencing (STC-Seq) approach to capturing 14 HLA class I and II genes. The highly efficient capture (an approximately 23,000-fold enrichment) of these genes allows for simplified allele calling. Tests of five genes (HLA-A/B/C/DRB1/DQB1) from 31 human samples and 351 datasets using STC-Seq showed results that were 98% consistent with the known two-field (field 1 and field 2) genotypes. Additionally, STC can capture genomic DNA fragments longer than 3 kb from HLA loci, making the library compatible with third-generation sequencing. STC-Seq is a highly accurate and cost-efficient method for HLA typing that can facilitate the establishment of population-based HLA databases for precision and transplantation medicine.

  13. Object tracking using plenoptic image sequences

    NASA Astrophysics Data System (ADS)

    Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung

    2017-05-01

    Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, partial occlusion is one of the most serious and challenging. To address this problem, we propose novel approaches to object tracking on plenoptic image sequences that take advantage of the refocusing capability plenoptic images provide. Our approaches take as input sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms select, from the sequence of focal stacks, the sequence of optimal images that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated in experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches performed favorably compared with conventional 2D object tracking algorithms.

  14. Optimizing Radiometric Fidelity to Enhance Aerial Image Change Detection Utilizing Digital Single Lens Reflex (DSLR) Cameras

    NASA Astrophysics Data System (ADS)

    Kerr, Andrew D.

    Determining optimal imaging settings and best practices for capturing aerial imagery with consumer-grade digital single lens reflex (DSLR) cameras should enable remote sensing scientists to generate consistent, high-quality, low-cost image data sets. Radiometric optimization, image fidelity, and image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is, in part, a dearth of relevant, contemporary literature on the use of consumer-grade DSLR cameras for remote sensing and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (exposure value), WB (white balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial rather than an airborne platform, due to the large number of images per collection and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed exposure values as preferable for change detection and noise minimization. The makeup of the scene, the sensor, and the aerial platform influence the selection of aperture and shutter speed, which, along with other variables, allow estimation of the apparent image motion (AIM) blur in the resulting images. The importance of image edges in the intended application will in part dictate the lowest usable f-stop and allow the user to select a more optimal shutter speed and ISO. The single most important capture variable is exposure bias (EV), with full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occurring around -0.7 to -0.3 EV exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
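The apparent image motion (AIM) blur mentioned above can be estimated from platform speed, shutter time and imaging geometry. A minimal sketch under the usual nadir pinhole assumptions; all parameter values here are illustrative, not taken from the study:

```python
def aim_blur_pixels(ground_speed_mps, shutter_s, focal_mm, altitude_m, pixel_pitch_um):
    """Apparent image motion for nadir imaging: ground distance covered
    during the exposure, projected through the lens (scale = f / altitude)
    and expressed in sensor pixels."""
    ground_motion_m = ground_speed_mps * shutter_s
    image_motion_m = ground_motion_m * (focal_mm / 1000.0) / altitude_m
    return image_motion_m / (pixel_pitch_um * 1e-6)

# Illustrative values: 40 m/s platform, 1/1000 s shutter, 50 mm lens,
# 400 m above ground, 4.3 um pixel pitch
blur = aim_blur_pixels(40, 1 / 1000, 50, 400, 4.3)
print(round(blur, 2))  # -> 1.16
```

A result near or above one pixel suggests shortening the shutter (and compensating with ISO or aperture) if edge sharpness matters for the application.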

  15. Secure steganography designed for mobile platforms

    NASA Astrophysics Data System (ADS)

    Agaian, Sos S.; Cherukuri, Ravindranath; Sifuentes, Ronnie R.

    2006-05-01

    Adaptive steganography, an intelligent approach to message hiding, integrated with matrix encoding and pn-sequences, serves as a promising resolution to recent security assurance concerns. Incorporating the above data hiding concepts with established cryptographic protocols in wireless communication would greatly increase the security and privacy of transmitting sensitive information. We present an algorithm which addresses the following problems: 1) low embedding capacity in mobile devices due to fixed image dimensions and memory constraints, 2) compatibility between mobile and land-based desktop computers, and 3) detection of stego images by widely available steganalysis software [1-3]. Consistent with the smaller available memory, processor capabilities, and limited resolution associated with mobile devices, we propose a more magnified approach to steganography by focusing adaptive efforts at the pixel level. This deeper method, in comparison to the block processing techniques commonly found in existing adaptive methods, allows an increase in capacity while still offering a desired level of security. Based on computer simulations using high-resolution natural imagery and mobile device captured images, comparisons show that the proposed method securely allows an increased amount of embedding capacity while still avoiding detection by various steganalysis techniques.

  16. Human visual system-based smoking event detection

    NASA Astrophysics Data System (ADS)

    Odetallah, Amjad D.; Agaian, Sos S.

    2012-06-01

    Human action analysis (e.g. smoking, eating, and phoning) is an important task in various application domains such as video surveillance, video retrieval, and human-computer interaction systems. Smoke detection is a crucial task in many video surveillance applications and could greatly raise the level of safety in urban areas, public parks, airplanes, hospitals, schools and elsewhere. The detection task is challenging since there is no prior knowledge about the object's shape, texture and color. In addition, its visual features change under different lighting and weather conditions. This paper presents a new scheme for a system that detects human smoking events, or small amounts of smoke, in a sequence of images. In the developed system, motion detection and background subtraction are combined with motion-region saving, skin-based image segmentation, and smoke-based image segmentation to capture potential smoke regions, which are further analyzed to decide on the occurrence of smoking events. Experimental results show the effectiveness of the proposed approach. The developed method is also capable of detecting small smoking events and uncertain actions with various cigarette sizes, colors, and shapes.

  17. Registration of Large Motion Blurred Images

    DTIC Science & Technology

    2016-05-09

    in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS

  18. Sparsity-based image monitoring of crystal size distribution during crystallization

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.

    2017-07-01

    To facilitate monitoring of the crystal size distribution (CSD) during a crystallization process using an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement subject to particle motion, solution turbulence, and uneven illumination background in the crystallizer, a sparse representation of each real-time captured crystal image is developed based on an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined from the salience information defined on the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image, and a non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of the CSD. The crystal image dictionary and blur kernels are updated in a timely manner according to the imaging conditions to improve restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) demonstrates the effectiveness and merit of the proposed method.

  19. Coded aperture solution for improving the performance of traffic enforcement cameras

    NASA Astrophysics Data System (ADS)

    Masoudifar, Mina; Pourreza, Hamid Reza

    2016-10-01

    A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.
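For context, the depth-of-field limits of a conventional circular aperture, which the coded aperture is designed to extend, follow from the standard hyperfocal relations. A sketch with illustrative values not taken from the paper:

```python
def hyperfocal_mm(focal_mm, f_stop, coc_mm):
    """Hyperfocal distance for a circular aperture."""
    return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

def dof_limits_m(focal_mm, f_stop, coc_mm, subject_m):
    """Near/far limits of acceptable sharpness from the standard
    hyperfocal relations; the far limit is infinite at or beyond
    the hyperfocal distance."""
    H = hyperfocal_mm(focal_mm, f_stop, coc_mm)
    s = subject_m * 1000.0
    near = H * s / (H + (s - focal_mm))
    far = float("inf") if s - focal_mm >= H else H * s / (H - (s - focal_mm))
    return near / 1000.0, far / 1000.0

# Illustrative values: 50 mm lens at f/8, 0.03 mm circle of
# confusion, subject focused at 8 m
near, far = dof_limits_m(50, 8, 0.03, 8)
print(round(near, 2), round(far, 2))  # -> 4.55 33.27
```

A coded aperture changes the effective blur kernel rather than these geometric limits, making defocused regions easier to deblur and thereby extending the usable depth range.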

  20. Whole-exome sequencing for mutation detection in pediatric disorders of insulin secretion: Maturity onset diabetes of the young and congenital hyperinsulinism.

    PubMed

    Johnson, S R; Leo, P J; McInerney-Leo, A M; Anderson, L K; Marshall, M; McGown, I; Newell, F; Brown, M A; Conwell, L S; Harris, M; Duncan, E L

    2018-06-01

    To assess the utility of whole-exome sequencing (WES) for mutation detection in maturity-onset diabetes of the young (MODY) and congenital hyperinsulinism (CHI). MODY and CHI are the two commonest monogenic disorders of glucose-regulated insulin secretion in childhood, with 13 causative genes known for MODY and 10 causative genes identified for CHI. The large number of potential genes makes comprehensive screening using traditional methods expensive and time-consuming. Ten subjects with MODY and five with CHI with known mutations underwent WES using two different exome capture kits (Nimblegen SeqCap EZ Human v3.0 Exome Enrichment Kit, Nextera Rapid Capture Exome Kit). Analysis was blinded to previously identified mutations, and included assessment for large deletions. The target capture of five exome capture technologies was also analyzed using sequencing data from >2800 unrelated samples. Four of five MODY mutations were identified using Nimblegen (including a large deletion in HNF1B). Although targeted, one mutation (in INS) had insufficient coverage for detection. Eleven of eleven mutations (six MODY, five CHI) were identified using Nextera Rapid (including the previously missed mutation). On reconciliation, all mutations concorded with previous data and no additional variants in MODY genes were detected. There were marked differences in the performance of the capture technologies. WES can be useful for screening for MODY/CHI mutations, detecting both point mutations and large deletions. However, capture technologies require careful selection. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. An Example-Based Super-Resolution Algorithm for Selfie Images

    PubMed Central

    William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep

    2016-01-01

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are missing. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior that learns the correspondence between LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing counterfeit fine details. PMID:27064500
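The exemplar-based learning step can be illustrated with a plain least-squares operator mapping LR patches to HR patches. Note that this sketch vectorizes the patches, which is exactly what the paper's matrix-value formulation avoids; the dimensions, names and ridge term are assumptions:

```python
import numpy as np

def learn_operator(lr_patches, hr_patches, lam=1e-3):
    """Learn a linear map M with HR ~= M @ LR from exemplar patch pairs
    by ridge-regularized least squares."""
    X = np.stack(lr_patches, axis=1)   # (d_lr, n) LR patches as columns
    Y = np.stack(hr_patches, axis=1)   # (d_hr, n) HR patches as columns
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(X.shape[0]))

# Synthetic exemplars generated by a known 4 -> 16 upsampling map
rng = np.random.default_rng(0)
true_M = rng.normal(size=(16, 4))
lr = [rng.normal(size=4) for _ in range(200)]
hr = [true_M @ p for p in lr]
M = learn_operator(lr, hr)
print(np.allclose(M, true_M, atol=1e-2))  # -> True
```

At inference, each LR selfie patch is multiplied by the learned operator and the resulting HR patches are reassembled into the output image.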

  2. Capturing the genetic makeup of the active microbiome in situ.

    PubMed

    Singer, Esther; Wagner, Michael; Woyke, Tanja

    2017-09-01

    More than any other technology, nucleic acid sequencing has enabled microbial ecology studies to be complemented with the data volumes necessary to capture the extent of microbial diversity and dynamics in a wide range of environments. In order to truly understand and predict environmental processes, however, the distinction between active, inactive and dead microbial cells is critical. Also, experimental designs need to be sensitive toward varying population complexity and activity, and temporal as well as spatial scales of process rates. There are a number of approaches, including single-cell techniques, which were designed to study in situ microbial activity and that have been successively coupled to nucleic acid sequencing. The exciting new discoveries regarding in situ microbial activity provide evidence that future microbial ecology studies will indispensably rely on techniques that specifically capture members of the microbiome active in the environment. Herein, we review those currently used activity-based approaches that can be directly linked to shotgun nucleic acid sequencing, evaluate their relevance to ecology studies, and discuss future directions.

  3. Capturing the genetic makeup of the active microbiome in situ

    PubMed Central

    Singer, Esther; Wagner, Michael; Woyke, Tanja

    2017-01-01

    More than any other technology, nucleic acid sequencing has enabled microbial ecology studies to be complemented with the data volumes necessary to capture the extent of microbial diversity and dynamics in a wide range of environments. In order to truly understand and predict environmental processes, however, the distinction between active, inactive and dead microbial cells is critical. Also, experimental designs need to be sensitive toward varying population complexity and activity, and temporal as well as spatial scales of process rates. There are a number of approaches, including single-cell techniques, which were designed to study in situ microbial activity and that have been successively coupled to nucleic acid sequencing. The exciting new discoveries regarding in situ microbial activity provide evidence that future microbial ecology studies will indispensably rely on techniques that specifically capture members of the microbiome active in the environment. Herein, we review those currently used activity-based approaches that can be directly linked to shotgun nucleic acid sequencing, evaluate their relevance to ecology studies, and discuss future directions. PMID:28574490

  4. High-dynamic-range imaging for cloud segmentation

    NASA Astrophysics Data System (ADS)

    Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan

    2018-04-01

    Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
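A multi-exposure fusion step of the kind used to build such HDR inputs can be sketched as a per-pixel well-exposedness weighting. This is a generic illustration, not the HDRCloudSeg pipeline; frames are assumed normalized to [0, 1] and the Gaussian width is an assumption:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Per-pixel weighted average over an exposure stack, with Gaussian
    weights favoring mid-range (well-exposed) values; frames in [0, 1]."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # (k, H, W)
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2)) + 1e-8
    return (w * stack).sum(axis=0) / w.sum(axis=0)

# Under-, mid- and over-exposed flat frames fuse to a mid-range value,
# mimicking recovery of detail in circumsolar and horizon regions
dark = np.full((2, 2), 0.05)
mid = np.full((2, 2), 0.5)
bright = np.full((2, 2), 0.95)
print(np.allclose(fuse_exposures([dark, mid, bright]), 0.5))  # -> True
```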

  5. A new method for digital video documentation in surgical procedures and minimally invasive surgery.

    PubMed

    Wurnig, P N; Hollaus, P H; Wurnig, C H; Wolf, R K; Ohtsuka, T; Pridun, N S

    2003-02-01

    Documentation of surgical procedures is limited by the accuracy of description, which depends on the vocabulary and descriptive prowess of the surgeon. Even analog video recording could not solve the problem of documentation satisfactorily, due to the abundance of recorded material. Capturing the video digitally solves most of these problems in the circumstances described in this article. We developed a cheap and useful digital video capturing system consisting of conventional computer components. Video images and clips can be captured intraoperatively and are immediately available. The system is a commercial personal computer specially configured for digital video capturing and is connected by wire to the video tower. Filming was done with a conventional endoscopic video camera. A total of 65 open and endoscopic procedures were documented in an orthopedic and a thoracic surgery unit. The median number of clips per surgical procedure was 6 (range, 1-17), and the median storage volume was 49 MB (range, 3-360 MB) in compressed form. The median duration of a video clip was 4 min 25 s (range, 45 s to 21 min). The median time for editing a video clip was 12 min for an advanced user (including cutting, titling the movie, and compression). The quality of the clips renders them suitable for presentations. This digital video documentation system allows easy capture of intraoperative video sequences in high quality. All possibilities of documentation can be performed. With the use of an endoscopic video camera, no compromises with respect to sterility or surgical elbowroom are necessary. The cost is much lower than commercially available systems, and setting changes can be performed easily without trained specialists.

  6. The Cooking and Pneumonia Study (CAPS) in Malawi: Implementation of Remote Source Data Verification

    PubMed Central

    Weston, William; Smedley, James; Bennett, Andrew; Mortimer, Kevin

    2016-01-01

    Background Source data verification (SDV) is a data monitoring procedure which compares the original records with the Case Report Form (CRF). Traditionally, on-site SDV relies on monitors making multiple visits to study sites, requiring extensive resources. The Cooking And Pneumonia Study (CAPS) is a 24-month village-level cluster randomized controlled trial assessing the effectiveness of an advanced cook-stove intervention in preventing pneumonia in children under five in rural Malawi (www.capstudy.org). CAPS used smartphones to capture digital images of the original records on an electronic CRF (eCRF). In the present study, descriptive statistics are used to report the experience of electronic data capture with remote SDV in a challenging research setting in rural Malawi. Methods At three-monthly intervals, fieldworkers, who were employed by CAPS, captured pneumonia data from the original records onto the eCRF. Fieldworkers also captured digital images of the original records. Once Internet connectivity was available, the data captured on the eCRF and the digital images of the original records were uploaded to a web-based SDV application. This enabled SDV to be conducted remotely from the UK. We conducted SDV of the pneumonia data (occurrence, severity, and clinical indicators) recorded in the eCRF against the data in the digital images of the original records. Results 664 episodes of pneumonia were recorded after 6 months of follow-up. Of these 664 episodes, 611 (92%) had a finding of pneumonia in the original records. All digital images of the original records were clear and legible. Conclusion Electronic data capture using eCRFs on mobile technology is feasible in rural Malawi. Capturing digital images of the original records in the field allows remote SDV to be conducted efficiently and securely without requiring additional field visits. We recommend these approaches in similar settings, especially those with health endpoints. PMID:27355447

  7. The Cooking and Pneumonia Study (CAPS) in Malawi: Implementation of Remote Source Data Verification.

    PubMed

    Weston, William; Smedley, James; Bennett, Andrew; Mortimer, Kevin

    2016-01-01

    Source data verification (SDV) is a data monitoring procedure which compares the original records with the Case Report Form (CRF). Traditionally, on-site SDV relies on monitors making multiple visits to study sites, requiring extensive resources. The Cooking And Pneumonia Study (CAPS) is a 24-month village-level cluster randomized controlled trial assessing the effectiveness of an advanced cook-stove intervention in preventing pneumonia in children under five in rural Malawi (www.capstudy.org). CAPS used smartphones to capture digital images of the original records on an electronic CRF (eCRF). In the present study, descriptive statistics are used to report the experience of electronic data capture with remote SDV in a challenging research setting in rural Malawi. At three-monthly intervals, fieldworkers, who were employed by CAPS, captured pneumonia data from the original records onto the eCRF. Fieldworkers also captured digital images of the original records. Once Internet connectivity was available, the data captured on the eCRF and the digital images of the original records were uploaded to a web-based SDV application. This enabled SDV to be conducted remotely from the UK. We conducted SDV of the pneumonia data (occurrence, severity, and clinical indicators) recorded in the eCRF against the data in the digital images of the original records. 664 episodes of pneumonia were recorded after 6 months of follow-up. Of these 664 episodes, 611 (92%) had a finding of pneumonia in the original records. All digital images of the original records were clear and legible. Electronic data capture using eCRFs on mobile technology is feasible in rural Malawi. Capturing digital images of the original records in the field allows remote SDV to be conducted efficiently and securely without requiring additional field visits. We recommend these approaches in similar settings, especially those with health endpoints.

  8. Next-generation sequencing strategies enable routine detection of balanced chromosome rearrangements for clinical diagnostics and genetic research.

    PubMed

    Talkowski, Michael E; Ernst, Carl; Heilbut, Adrian; Chiang, Colby; Hanscom, Carrie; Lindgren, Amelia; Kirby, Andrew; Liu, Shangtao; Muddukrishna, Bhavana; Ohsumi, Toshiro K; Shen, Yiping; Borowsky, Mark; Daly, Mark J; Morton, Cynthia C; Gusella, James F

    2011-04-08

    The contribution of balanced chromosomal rearrangements to complex disorders remains unclear because they are not detected routinely by genome-wide microarrays and clinical localization is imprecise. Failure to consider these events bypasses a potentially powerful complement to single nucleotide polymorphism and copy-number association approaches to complex disorders, where much of the heritability remains unexplained. To capitalize on this genetic resource, we have applied optimized sequencing and analysis strategies to test whether these potentially high-impact variants can be mapped at reasonable cost and throughput. By using a whole-genome multiplexing strategy, rearrangement breakpoints could be delineated at a fraction of the cost of standard sequencing. For rearrangements already mapped regionally by karyotyping and fluorescence in situ hybridization, a targeted approach enabled capture and sequencing of multiple breakpoints simultaneously. Importantly, this strategy permitted capture and unique alignment of up to 97% of repeat-masked sequences in the targeted regions. Genome-wide analyses estimate that only 3.7% of bases should be routinely omitted from genomic DNA capture experiments. Illustrating the power of these approaches, the rearrangement breakpoints were rapidly defined to base pair resolution and revealed unexpected sequence complexity, such as co-occurrence of inversion and translocation as an underlying feature of karyotypically balanced alterations. These findings have implications ranging from genome annotation to de novo assemblies and could enable sequencing screens for structural variations at a cost comparable to that of microarrays in standard clinical practice. Copyright © 2011 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  9. Preliminary experiments on quantification of skin condition

    NASA Astrophysics Data System (ADS)

    Kitajima, Kenzo; Iyatomi, Hitoshi

    2014-03-01

    In this study, we investigated a preliminary method for assessing skin conditions, such as the moisturizing property and fineness of the skin, using image analysis alone. We captured facial images from volunteer subjects aged from their 30s to their 60s with a Pocket Micro (R) device (Scalar Co., Japan). This device has two image-capturing modes: a normal mode and a non-reflection mode that uses an equipped polarization filter. We captured skin images from a total of 68 spots on the subjects' faces using both modes (i.e., a total of 136 skin images). The moisture-retaining property of the skin and a subjective evaluation score of skin fineness on a 5-point scale were also obtained in advance for each case as a gold standard (their means and SDs were 35.15 +/- 3.22 (μS) and 3.45 +/- 1.17, respectively). We extracted a total of 107 image features from each image and built linear regression models with stepwise feature selection to estimate the abovementioned criteria. Under leave-one-out cross-validation, the model for skin moisture achieved an MSE of 1.92 (μS) with 6 selected parameters, while the model for skin fineness achieved an MSE of 0.51 scale points with 7 parameters. We confirmed that the developed models predicted the moisture-retaining property and fineness of the skin appropriately from the captured images alone.
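
    The evaluation scheme described above, leave-one-out cross-validation of a least-squares regression model, can be sketched as follows. The data and the single feature are hypothetical; the paper's 107 image features and stepwise selection are not reproduced.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def loocv_mse(xs, ys):
    """Mean squared prediction error under leave-one-out cross-validation."""
    errs = []
    for i in range(len(xs)):
        # Refit on all samples except the i-th, then predict the held-out one.
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((ys[i] - (a * xs[i] + b)) ** 2)
    return sum(errs) / len(errs)

# A hypothetical image feature that tracks skin moisture exactly linearly.
feature = [1.0, 2.0, 3.0, 4.0, 5.0]
moisture = [31.0, 33.0, 35.0, 37.0, 39.0]
cv_error = loocv_mse(feature, moisture)   # ~0 for an exactly linear relation
```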

  10. Multisensory Integration and Behavioral Plasticity in Sharks from Different Ecological Niches

    PubMed Central

    Gardiner, Jayne M.; Atema, Jelle; Hueter, Robert E.; Motta, Philip J.

    2014-01-01

    The underwater sensory world and the sensory systems of aquatic animals have become better understood in recent decades, but typically have been studied one sense at a time. A comprehensive analysis of multisensory interactions during complex behavioral tasks has remained a subject of discussion without experimental evidence. We set out to generate a general model of multisensory information extraction by aquatic animals. For our model we chose to analyze the hierarchical, integrative, and sometimes alternate use of various sensory systems during the feeding sequence in three species of sharks that differ in sensory anatomy and behavioral ecology. By blocking senses in different combinations, we show that when some of their normal sensory cues were unavailable, sharks were often still capable of successfully detecting, tracking, and capturing prey by switching to alternate sensory modalities. While there were significant species differences, odor was generally the first signal detected, leading to upstream swimming and wake tracking. Closer to the prey, as more sensory cues became available, the preferred sensory modalities varied among species, with vision, hydrodynamic imaging, electroreception, and touch being important for orienting to, striking at, and capturing the prey. Experimental deprivation of senses showed how sharks exploit the many signals that comprise their sensory world, each sense coming into play as it provides more accurate information during the behavioral sequence of hunting. The results may be applicable to aquatic hunting in general and, with appropriate modification, to other types of animal behavior. PMID:24695492

  11. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2009-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. Endothelial cell density is considered the most important morphological factor. Morphometry of the corneal endothelium is presently done by semi-automated analysis of pictures captured by a clinical specular microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, which has a negative impact on sampling size. This study was dedicated to developing fully automated analysis of images of the corneal endothelium, captured by CSM, using Fourier analysis. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software, which automatically performed digital enhancement of the images. The enhanced images of the corneal endothelium were then transformed using the fast Fourier transform (FFT). Tools were developed and applied for identifying and analyzing the relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium; the calculation was based on well-known diffraction theory. Estimates of corneal endothelial cell density were thus obtained using fully automated analysis software on images captured by CSM. The cell density obtained by the fully automated analysis was compared to that obtained by classical, semi-automated analysis, and a relatively high correlation was found.
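
    The core idea of the Fourier-based morphometry described above can be sketched as follows: a regular cell mosaic produces a bright ring in the power spectrum whose radius is inversely proportional to the mean cell spacing. The synthetic test pattern and the plain radial-average peak search below are illustrative assumptions; the paper's digital enhancement and diffraction-theory calibration are omitted.

```python
import numpy as np

def dominant_spacing(img):
    """Dominant spatial period (pixels) from the ring in the power spectrum."""
    n = img.shape[0]
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))   # remove the DC term
    p = np.abs(f) ** 2
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.hypot(yy - n // 2, xx - n // 2).round().astype(int)
    radial = np.bincount(r.ravel(), weights=p.ravel())   # total power per ring
    peak = radial[1:n // 2].argmax() + 1                 # skip the center bin
    return n / peak                                      # ring radius -> period

# A sinusoidal mosaic with an 8-pixel period stands in for the cell pattern.
n = 64
y, x = np.mgrid[0:n, 0:n]
img = np.sin(2 * np.pi * x / 8) + np.sin(2 * np.pi * y / 8)
spacing = dominant_spacing(img)       # ~8 pixels between cell centers
density = 1.0 / spacing ** 2          # cells per unit area, up to a constant
```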

  12. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hua, E-mail: huli@radonc.wustl.edu; Chen, Hsin

    Purpose: For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in either a voluntary or an involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Methods: Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors first built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. Results: The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. 
The resulting average Dice similarity coefficient (93.28%  ±  1.46%) and margin error (0.49  ±  0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. Conclusions: The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.

  13. An integrated model-driven method for in-treatment upper airway motion tracking using cine MRI in head and neck radiation therapy.

    PubMed

    Li, Hua; Chen, Hsin-Chen; Dolly, Steven; Li, Harold; Fischer-Valuck, Benjamin; Victoria, James; Dempsey, James; Ruan, Su; Anastasio, Mark; Mazur, Thomas; Gach, Michael; Kashani, Rojano; Green, Olga; Rodriguez, Vivian; Gay, Hiram; Thorstad, Wade; Mutic, Sasa

    2016-08-01

    For the first time, MRI-guided radiation therapy systems can acquire cine images to dynamically monitor in-treatment internal organ motion. However, the complex head and neck (H&N) structures and low-contrast/resolution of on-board cine MRI images make automatic motion tracking a very challenging task. In this study, the authors proposed an integrated model-driven method to automatically track the in-treatment motion of the H&N upper airway, a complex and highly deformable region wherein internal motion often occurs in either a voluntary or an involuntary manner, from cine MRI images for the analysis of H&N motion patterns. Considering the complex H&N structures and ensuring automatic and robust upper airway motion tracking, the authors first built a set of linked statistical shapes (including face, face-jaw, and face-jaw-palate) using principal component analysis from clinically approved contours delineated on a set of training data. The linked statistical shapes integrate explicit landmarks and implicit shape representation. Then, a hierarchical model-fitting algorithm was developed to align the linked shapes on the first image frame of a to-be-tracked cine sequence and to localize the upper airway region. Finally, a multifeature level set contour propagation scheme was performed to identify the upper airway shape change, frame-by-frame, on the entire image sequence. The multifeature fitting energy, including the information of intensity variations, edge saliency, curve geometry, and temporal shape continuity, was minimized to capture the details of moving airway boundaries. Sagittal cine MR image sequences acquired from three H&N cancer patients were utilized to demonstrate the performance of the proposed motion tracking method. The tracking accuracy was validated by comparing the results to the average of two manual delineations in 50 randomly selected cine image frames from each patient. 
The resulting average Dice similarity coefficient (93.28%  ±  1.46%) and margin error (0.49  ±  0.12 mm) showed good agreement between the automatic and manual results. The comparison with three other deformable model-based segmentation methods illustrated the superior shape tracking performance of the proposed method. Large interpatient variations of swallowing frequency, swallowing duration, and upper airway cross-sectional area were observed from the testing cine image sequences. The proposed motion tracking method can provide accurate upper airway motion tracking results, and enable automatic and quantitative identification and analysis of in-treatment H&N upper airway motion. By integrating explicit and implicit linked-shape representations within a hierarchical model-fitting process, the proposed tracking method can process complex H&N structures and low-contrast/resolution cine MRI images. Future research will focus on the improvement of method reliability, patient motion pattern analysis for providing more information on patient-specific prediction of structure displacements, and motion effects on dosimetry for better H&N motion management in radiation therapy.
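
    The Dice similarity coefficient used above to validate the tracking against manual delineations can be computed directly from two binary masks; a minimal sketch with masks represented as sets of pixel coordinates:

```python
def dice(a, b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0   # two empty masks agree perfectly
    return 2 * len(a & b) / (len(a) + len(b))

# Two 10x10-pixel masks, the "manual" one shifted one column to the right.
auto = {(r, c) for r in range(10) for c in range(10)}
manual = {(r, c) for r in range(10) for c in range(1, 11)}
score = dice(auto, manual)   # 2*90 / (100 + 100) = 0.9
```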

  14. Bioinformatics approaches to single-cell analysis in developmental biology.

    PubMed

    Yalcin, Dicle; Hakguder, Zeynep M; Otu, Hasan H

    2016-03-01

    Individual cells within the same population show various degrees of heterogeneity, which may be better handled with single-cell analysis to address biological and clinical questions. Single-cell analysis is especially important in developmental biology as subtle spatial and temporal differences in cells have significant associations with cell fate decisions during differentiation and with the description of a particular state of a cell exhibiting an aberrant phenotype. Biotechnological advances, especially in the area of microfluidics, have led to a robust, massively parallel and multi-dimensional capturing, sorting, and lysis of single-cells and amplification of related macromolecules, which have enabled the use of imaging and omics techniques on single cells. There have been improvements in computational single-cell image analysis in developmental biology regarding feature extraction, segmentation, image enhancement and machine learning, handling limitations of optical resolution to gain new perspectives from the raw microscopy images. Omics approaches, such as transcriptomics, genomics and epigenomics, targeting gene and small RNA expression, single nucleotide and structural variations and methylation and histone modifications, rely heavily on high-throughput sequencing technologies. Although there are well-established bioinformatics methods for analysis of sequence data, there are limited bioinformatics approaches which address experimental design, sample size considerations, amplification bias, normalization, differential expression, coverage, clustering and classification issues, specifically applied at the single-cell level. In this review, we summarize biological and technological advancements, discuss challenges faced in the aforementioned data acquisition and analysis issues and present future prospects for application of single-cell analyses to developmental biology. © The Author 2015. 
Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial

    EPA Pesticide Factsheets

    The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.
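
    One way to illustrate the magnitude-only versus sequence-only split mentioned above is to compare the sorted series, which removes ordering information. The decomposition below is an illustrative sketch, not necessarily the exact metrics implemented in the MPESA tool.

```python
def mse(a, b):
    """Mean squared error between two equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def error_components(obs, sim):
    """Split total MSE into a magnitude part and a sequence (ordering) part."""
    total = mse(obs, sim)
    magnitude = mse(sorted(obs), sorted(sim))   # ordering removed
    sequence = total - magnitude                # error attributable to ordering
    return total, magnitude, sequence

obs = [1.0, 2.0, 3.0, 4.0]
sim = [4.0, 3.0, 2.0, 1.0]   # right magnitudes, wrong order
total, magnitude, sequence = error_components(obs, sim)
```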

  16. Comparison of MR imaging sequences for liver and head and neck interventions: is there a single optimal sequence for all purposes?

    PubMed

    Boll, Daniel T; Lewin, Jonathan S; Duerk, Jeffrey L; Aschoff, Andrik J; Merkle, Elmar M

    2004-05-01

    To compare the appropriate pulse sequences for interventional device guidance during magnetic resonance (MR) imaging at 0.2 T and to evaluate the dependence of sequence selection on the anatomic region of the procedure. Using a C-arm 0.2 T system, four interventional MR sequences were applied in 23 liver cases and during MR-guided neck interventions in 13 patients. The imaging protocol consisted of multislice turbo spin echo (TSE) T2w, sequential-slice fast imaging with steady precession (FISP), a time-reversed version of FISP (PSIF), and FISP with balanced gradients in all spatial directions (True-FISP) sequences. Vessel conspicuity was rated, contrast-to-noise ratio (CNR) was calculated for each sequence, and a differential receiver operating characteristic analysis was performed. Liver findings were detected in 96% of cases using the TSE sequence. PSIF, FISP, and True-FISP imaging showed lesions in 91%, 61%, and 65%, respectively. The TSE sequence offered the best CNR, followed by PSIF imaging. The differential receiver operating characteristic analysis also rated TSE and PSIF as the superior sequences. Lesions in the head and neck were detected in all cases by TSE and FISP, in 92% using True-FISP, and in 84% using PSIF. True-FISP offered the best CNR, followed by TSE imaging. Vessels appeared bright on FISP and True-FISP imaging and dark on the other sequences. In interventional MR imaging, no single sequence fits all purposes. Image guidance during liver procedures is best achieved with PSIF or TSE, whereas biopsies in the head and neck are best performed using FISP or True-FISP sequences.
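
    The contrast-to-noise ratio compared across sequences above is commonly computed as the signal difference between lesion and background divided by the noise standard deviation; the values below are synthetic.

```python
def cnr(signal_a, signal_b, noise_sd):
    """Contrast-to-noise ratio between two tissue signal intensities."""
    return abs(signal_a - signal_b) / noise_sd

# Synthetic example: lesion at 220, background at 140, noise SD of 16.
value = cnr(220.0, 140.0, 16.0)   # -> 5.0
```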

  17. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features for assessing the quality of fingerprint images from different types of capture sensors are known to vary. In this paper, an effective quality estimation system that can be adapted to different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality, and consistency. The proposed system extracts basic features and generates next-level features which are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. Moreover, the proposed method is able to eliminate residue images from optical and capacitive sensors and coarse images from thermal sensors. PMID:22163632
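
    Orientation certainty, one of the basic features named above, is commonly formulated as the normalized eigenvalue spread of the local gradient covariance matrix. The sketch below uses synthetic blocks; the paper's full feature set and SVM classification stage are omitted.

```python
def orientation_certainty(img):
    """Return 0 (no dominant ridge direction) .. 1 (perfectly oriented block)."""
    h, w = len(img), len(img[0])
    a = b = c = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y][x + 1] - img[y][x - 1]) / 2.0  # central differences
            gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
            a += gx * gx
            b += gy * gy
            c += gx * gy
    tr = a + b                                   # trace of the covariance
    det = a * b - c * c                          # determinant
    disc = max(tr * tr - 4.0 * det, 0.0) ** 0.5
    lmax, lmin = (tr + disc) / 2.0, (tr - disc) / 2.0
    return 0.0 if tr == 0.0 else (lmax - lmin) / tr

# A linear ramp has one dominant gradient direction: certainty 1.
ramp = [[float(x) for x in range(12)] for _ in range(12)]
# A circular paraboloid has gradients in all directions: certainty 0.
bowl = [[(x - 5.5) ** 2 + (y - 5.5) ** 2 for x in range(12)] for y in range(12)]
certainty_ramp = orientation_certainty(ramp)
certainty_bowl = orientation_certainty(bowl)
```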

  18. Restoration of motion blurred images

    NASA Astrophysics Data System (ADS)

    Gaxiola, Leopoldo N.; Juarez-Salazar, Rigoberto; Diaz-Ramirez, Victor H.

    2017-08-01

    Image restoration is a classic problem in image processing. Image degradations can occur for several reasons, for instance, imperfections of imaging systems, quantization errors, atmospheric turbulence, or relative motion between the camera and objects. Motion blur is a typical degradation in dynamic imaging systems. In this work, we present a method to estimate the parameters of linear motion blur degradation from a captured blurred image. The proposed method is based on analyzing the frequency spectrum of the captured image, first to estimate the degradation parameters and then to restore the image with a linear filter. The performance of the proposed method is evaluated by processing synthetic and real-life images. The results are characterized in terms of image restoration accuracy given by an objective criterion.
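
    The "estimate the blur, then restore with a linear filter" pipeline described above can be sketched with Wiener deconvolution. Here the blur length is assumed known, whereas the paper estimates it from the frequency spectrum of the captured image.

```python
import numpy as np

def motion_psf(shape, length):
    """Horizontal box point-spread function, normalized to unit sum."""
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length
    return psf

def wiener(blurred, psf, k=1e-3):
    """Wiener deconvolution: F = conj(H) / (|H|^2 + k) * G."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + k) * G))

rng = np.random.default_rng(0)
img = rng.random((64, 64))        # stand-in for a sharp image
psf = motion_psf(img.shape, 9)    # 9-pixel horizontal motion blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener(blurred, psf)   # much closer to img than blurred is
```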

  19. Principles of Quantitative MR Imaging with Illustrated Review of Applicable Modular Pulse Diagrams.

    PubMed

    Mills, Andrew F; Sakai, Osamu; Anderson, Stephan W; Jara, Hernan

    2017-01-01

    Continued improvements in diagnostic accuracy using magnetic resonance (MR) imaging will require development of methods for tissue analysis that complement traditional qualitative MR imaging studies. Quantitative MR imaging is based on measurement and interpretation of tissue-specific parameters independent of experimental design, compared with qualitative MR imaging, which relies on interpretation of tissue contrast that results from experimental pulse sequence parameters. Quantitative MR imaging represents a natural next step in the evolution of MR imaging practice, since quantitative MR imaging data can be acquired using currently available qualitative imaging pulse sequences without modifications to imaging equipment. The article presents a review of the basic physical concepts used in MR imaging and how quantitative MR imaging is distinct from qualitative MR imaging. Subsequently, the article reviews the hierarchical organization of major applicable pulse sequences used in this article, with the sequences organized into conventional, hybrid, and multispectral sequences capable of calculating the main tissue parameters of T1, T2, and proton density. While this new concept offers the potential for improved diagnostic accuracy and workflow, awareness of this extension to qualitative imaging is generally low. This article reviews the basic physical concepts in MR imaging, describes commonly measured tissue parameters in quantitative MR imaging, and presents the major available pulse sequences used for quantitative MR imaging, with a focus on the hierarchical organization of these sequences. © RSNA, 2017.
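
    As a minimal example of the quantitative step the review describes, the mono-exponential spin-echo signal model S = S0·exp(-TE/T2) can be solved for T2 from two echo times (idealized, noise-free signals):

```python
import math

def t2_from_two_echoes(s1, te1, s2, te2):
    """Solve S = S0*exp(-TE/T2) for T2: T2 = (TE2 - TE1) / ln(S1/S2)."""
    return (te2 - te1) / math.log(s1 / s2)

# Simulate ideal spin-echo signals for a tissue with T2 = 80 ms ...
s0, t2_true = 1000.0, 80.0
te1, te2 = 20.0, 100.0
s1 = s0 * math.exp(-te1 / t2_true)
s2 = s0 * math.exp(-te2 / t2_true)
# ... and recover the tissue parameter from the two measurements.
t2 = t2_from_two_echoes(s1, te1, s2, te2)   # -> 80.0 ms
```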

  20. Development of Sorting System for Fishes by Feed-forward Neural Networks Using Rotation Invariant Features

    NASA Astrophysics Data System (ADS)

    Shiraishi, Yuhki; Takeda, Fumiaki

    In this research, we have developed a sorting system for fishes, comprising a conveyance part, an image capturing part, and a sorting part. In the conveyance part, we have developed an independent conveyance system in order to separate one fish from an intertwined group of fishes. After the image of the separated fish is captured in the image capturing part, a rotation invariant feature is extracted using the two-dimensional fast Fourier transform; the feature is the mean value of the power spectrum over points at the same distance from the origin of the frequency domain. The fishes are then classified by three-layered feed-forward neural networks. The experimental results show that the developed system classifies three kinds of fishes captured at various angles with a classification ratio of 98.95% for 1044 captured images of five fishes. Further experiments show a classification ratio of 90.7% for 300 fishes using the 10-fold cross-validation method.
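
    The rotation-invariant feature described above, the mean power spectrum over rings of equal distance from the origin, can be sketched as follows. A 90-degree rotation is used for the invariance check because it is exactly representable on the pixel grid.

```python
import numpy as np

def radial_feature(img, nbins=16):
    """Mean power per ring of integer radius around the spectrum center."""
    p = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    n = img.shape[0]
    yy, xx = np.mgrid[0:n, 0:n]
    r = np.hypot(yy - n // 2, xx - n // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=p.ravel())    # total power per ring
    counts = np.bincount(r.ravel())                     # pixels per ring
    return sums[:nbins] / counts[:nbins]

rng = np.random.default_rng(1)
img = rng.random((32, 32))            # stand-in for a fish image
f0 = radial_feature(img)
f90 = radial_feature(np.rot90(img))   # same rings -> same feature vector
```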

  1. Real-Time X-ray Radiography Diagnostics of Components in Solid Rocket Motors

    NASA Technical Reports Server (NTRS)

    Cortopassi, A. C.; Martin, H. T.; Boyer, E.; Kuo, K. K.

    2012-01-01

    Solid rocket motors (SRMs) typically use nozzle materials which are required to maintain their shape as well as insulate the underlying support structure during the motor operation. In addition, SRMs need internal insulation materials to protect the motor case from the harsh environment resulting from the combustion of solid propellant. In the nozzle, typical materials consist of high density graphite, carbon-carbon composites, and carbon phenolic composites. Internal insulation of the motor cases is typically a composite material with carbon, asbestos, Kevlar, or silica fibers in an ablative matrix such as EPDM or NBR. For both nozzle and internal insulation materials, the charring process occurs when the hot combustion products heat the material intensely. The pyrolysis of the matrix material takes away a portion of the thermal energy near the wall surface and leaves behind a char layer. The fiber reinforcement retains the porous char layer, which provides continued thermal protection from the hot combustion products. It is of great interest to characterize both the total erosion rates of the material and the char layer thickness. A better understanding of the erosion process for a particular ablative material in a specific flow environment allows the required insulation material thickness to be properly selected. The recession rates of internal insulation and nozzle materials of SRMs are typically determined by testing in a simulated environment: arc-jet testing, flame torch testing, or subscale SRMs of different sizes. Material recession rates are deduced by comparing pre- and post-test measurements and then averaging over the duration of the test. However, these averaging techniques cannot be used to determine the instantaneous recession rates of the material. Knowledge of the variation in recession rates in response to the instantaneous flow conditions during the motor operation is of great importance. 
For example, in many SRM configurations the recession of the solid propellant grain can drastically alter the flow-field and affect the recession of internal insulation and nozzle materials. Simultaneous measurement of the overall erosion rate, the development of the char layer, and the recession of the char-virgin interface during the motor operation can be rather difficult. While invasive techniques have been used with limited success, they have serious drawbacks. Break-wire or make-wire sensors can be installed into a sufficient number of locations in the charring material, from which a time history of the charring surface can be deduced. However, these sensors fundamentally alter the local structure of the material in which they are embedded. Also, the location of these sensors within the material is not known precisely without the use of an X-ray. To determine instantaneous recession rates, real-time X-ray radiography (X-ray RTR) has been utilized in several SRM experiments at PSU. The X-ray RTR system discussed in this paper consists of an X-ray source, an X-ray image intensifier, and a CCD camera connected to a capture computer. The system has been used to examine the ablation process of internal insulation as well as nozzle material erosion in a subscale SRM. The X-ray source is rated to 320 kV at 10 mA and has both a large (5.5 mm) and a small (3.0 mm) focal spot. The lead-lined cesium iodide X-ray image intensifier produces an image which is captured by a CCD camera with a 1,000 x 1,000 pixel resolution. To produce accurate imagery of the object of interest, the alignment of the X-ray source to the X-ray image intensifier is crucial. The image sequences captured during the operation of an SRM are then processed to enhance the quality of the images. This procedure allows computer software to extract data on the total erosion rate and the char layer thickness. Figure 1 shows a sequence of images captured during the operation of the subscale SRM with the X-ray RTR system. The X-ray RTR system, alignment procedure, uncertainty determination, and image analysis process will be discussed in detail in the full manuscript.

  2. Unified Deep Learning Architecture for Modeling Biology Sequence.

    PubMed

    Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang

    2017-10-09

    Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences with traditional sequence models, characteristics such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or gated recurrent units to capture long-range interactions, designing an optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm that supports sequence models capable of processing variable-length sequences. Additionally, merge and pooling operators enhanced the ability to capture short-range interactions between the basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with results indicating that the model obtains predictions of protein residue interactions exceeding the accuracy of current popular approaches by 10% on multiple benchmarks.
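
    How a bidirectional recurrent layer combines a forward and a backward pass over a variable-length sequence can be sketched in miniature; here single-unit tanh cells stand in for the paper's LSTM/GRU units, and the weights are arbitrary.

```python
import math

def rnn_pass(seq, w_in, w_rec, reverse=False):
    """Run a single-unit tanh RNN over seq; one hidden state per position."""
    steps = list(reversed(seq)) if reverse else seq
    h, out = 0.0, []
    for x in steps:
        h = math.tanh(w_in * x + w_rec * h)
        out.append(h)
    return out[::-1] if reverse else out   # re-align backward states

def birnn(seq, w_in=0.5, w_rec=0.3):
    """Bidirectional layer: pair the forward and backward state at each step."""
    return list(zip(rnn_pass(seq, w_in, w_rec),
                    rnn_pass(seq, w_in, w_rec, reverse=True)))

# Any sequence length works -- no padding or truncation is needed.
short = birnn([0.1, 0.9])
long_ = birnn([0.1, 0.9, 0.4, 0.7, 0.2])
```

The backward state at the first position already depends on the whole sequence, which is what lets each position's representation capture context on both sides.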

  3. Novel technologies for assessing dietary intake: evaluating the usability of a mobile telephone food record among adults and adolescents.

    PubMed

    Daugherty, Bethany L; Schap, TusaRebecca E; Ettienne-Gittens, Reynolette; Zhu, Fengqing M; Bosch, Marc; Delp, Edward J; Ebert, David S; Kerr, Deborah A; Boushey, Carol J

    2012-04-13

    The development of a mobile telephone food record has the potential to ameliorate much of the burden associated with current methods of dietary assessment. When using the mobile telephone food record, respondents capture an image of their foods and beverages before and after eating. Methods of image analysis and volume estimation allow for automatic identification and volume estimation of foods. To obtain a suitable image, all foods and beverages and a fiducial marker must be included in the image. To evaluate a defined set of skills among adolescents and adults when using the mobile telephone food record to capture images and to compare the perceptions and preferences between adults and adolescents regarding their use of the mobile telephone food record. We recruited 135 volunteers (78 adolescents, 57 adults) to use the mobile telephone food record for one or two meals under controlled conditions. Volunteers received instruction for using the mobile telephone food record prior to their first meal, captured images of foods and beverages before and after eating, and participated in a feedback session. We used chi-square for comparisons of the set of skills, preferences, and perceptions between the adults and adolescents, and McNemar test for comparisons within the adolescents and adults. Adults were more likely than adolescents to include all foods and beverages in the before and after images, but both age groups had difficulty including the entire fiducial marker. Compared with adolescents, significantly more adults had to capture more than one image before (38% vs 58%, P = .03) and after (25% vs 50%, P = .008) meal session 1 to obtain a suitable image. Despite being less efficient when using the mobile telephone food record, adults were more likely than adolescents to perceive remembering to capture images as easy (P < .001). 
A majority of both age groups were able to follow the defined set of skills; however, adults were less efficient when using the mobile telephone food record. Additional interactive training will likely be necessary for all users to provide extra practice in capturing images before entering a free-living situation. These results will inform age-specific development of the mobile telephone food record that may translate to a more accurate method of dietary assessment.
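
    The chi-square comparison used above can be sketched for a 2x2 table of age group versus "needed more than one image"; the counts below are hypothetical, chosen only to mirror the reported 38% vs 58% contrast.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 30 of 78 adolescents vs 33 of 57 adults
# needed more than one image before meal session 1.
stat = chi_square_2x2(30, 48, 33, 24)   # exceeds 3.84, the df=1 cutoff at p=.05
```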

  4. A standardization model based on image recognition for performance evaluation of an oral scanner.

    PubMed

    Seo, Sang-Wan; Lee, Wan-Sun; Byun, Jae-Young; Lee, Kyu-Bok

    2017-12-01

    Accurate information is essential in dentistry, and image information on missing teeth is used by optically based medical equipment in prosthodontic treatment. To evaluate oral scanners, a standardized model was examined based on cases of image recognition errors identified by linear discriminant analysis (LDA), and a model combining the relevant variables with reference to ISO 12836:2015 was designed. The basic model was fabricated by applying 4 factors to the tooth profile (chamfer, groove, curve, and square) and to the bottom surface. Photo-type and video-type scanners were used to analyze the 3D images after image capture. The scans were performed several times in the prescribed sequence to distinguish models whose 3D images formed properly from those that did not, and the best-performing model was confirmed. With the initial basic model, a 3D shape could not be obtained by scanning even when several shots were taken. Subsequently, the image recognition rate improved with each variable factor, with differences depending on the tooth profile and the pattern of the bottom surface. Based on the recognition errors of the LDA, the recognition rate decreases when the model has similar patterns. Therefore, to obtain accurate 3D data, differences between each class need to be provided when developing a standardized model.

  5. Three-Dimensional Root Phenotyping with a Novel Imaging and Software Platform

    PubMed Central

    Clark, Randy T.; MacCurdy, Robert B.; Jung, Janelle K.; Shaff, Jon E.; McCouch, Susan R.; Aneshansley, Daniel J.; Kochian, Leon V.

    2011-01-01

    A novel imaging and software platform was developed for the high-throughput phenotyping of three-dimensional root traits during seedling development. To demonstrate the platform’s capacity, plants of two rice (Oryza sativa) genotypes, Azucena and IR64, were grown in a transparent gellan gum system and imaged daily for 10 d. Rotational image sequences consisting of 40 two-dimensional images were captured using an optically corrected digital imaging system. Three-dimensional root reconstructions were generated and analyzed using custom-designed software, RootReader3D. Using the automated and interactive capabilities of RootReader3D, five rice root types were classified and 27 phenotypic root traits were measured to characterize these two genotypes. Where possible, measurements from the three-dimensional platform were validated and were highly correlated with conventional two-dimensional measurements. When comparing gellan gum-grown plants with those grown under hydroponic and sand culture, significant differences were detected in morphological root traits (P < 0.05). This highly flexible platform provides the capacity to measure root traits with a high degree of spatial and temporal resolution and will facilitate novel investigations into the development of entire root systems or selected components of root systems. In combination with the extensive genetic resources that are now available, this platform will be a powerful resource to further explore the molecular and genetic determinants of root system architecture. PMID:21454799

  6. Laser scanning endoscope for diagnostic medicine

    NASA Astrophysics Data System (ADS)

    Ouimette, Donald R.; Nudelman, Sol; Spackman, Thomas; Zaccheo, Scott

    1990-07-01

    A new type of endoscope is being developed which utilizes an optical raster scanning system for imaging through an endoscope. The optical raster scanner utilizes a high-speed, multifaceted, rotating polygon mirror system for horizontal deflection and a slower-speed galvanometer-driven mirror for vertical deflection. Used in combination, the two mirrors trace out a raster similar to the electron-beam raster used in television systems. This flying spot of light can then be detected by various types of photosensitive detectors to generate a video image of the surface or scene being illuminated by the scanning beam. The optical raster scanner has been coupled to an endoscope. The raster is projected down the endoscope, thereby illuminating the object to be imaged at the distal end of the endoscope. Elemental photodetectors are placed at the distal or proximal end of the endoscope to detect the reflected illumination from the flying spot of light. This time-sequenced signal is captured by an image processor for display and processing. This technique offers the possibility of very-small-diameter endoscopes, since illumination channel requirements are eliminated. Using various lasers, very specific spectral selectivity can be achieved to optimize contrast of specific lesions of interest. Using several laser lines, or a white light source, with detectors of specific spectral response, multiple spectrally selected images can be acquired simultaneously. Co-linear therapy delivery while imaging is also possible.

  7. Method and apparatus to monitor a beam of ionizing radiation

    DOEpatents

    Blackburn, Brandon W.; Chichester, David L.; Watson, Scott M.; Johnson, James T.; Kinlaw, Mathew T.

    2015-06-02

    Methods and apparatus to capture images of fluorescence generated by ionizing radiation and determine a position of a beam of ionizing radiation generating the fluorescence from the captured images. In one embodiment, the fluorescence is the result of ionization and recombination of nitrogen in air.

  8. Technology Tips

    ERIC Educational Resources Information Center

    Mathematics Teacher, 2004

    2004-01-01

    Some inexpensive or free ways to capture and use images in one's work are described. The first tip demonstrates methods of using some of the built-in capabilities of the Macintosh and Windows-based PC operating systems, and the second tip describes methods to capture and create images using SnagIt.

  9. Device for wavelength-selective imaging

    DOEpatents

    Frangioni, John V.

    2010-09-14

    An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.

  10. Design of a MATLAB® Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  11. Optical cell monitoring system for underwater targets

    NASA Astrophysics Data System (ADS)

    Moon, SangJun; Manzur, Fahim; Manzur, Tariq; Demirci, Utkan

    2008-10-01

    We demonstrate a cell-based detection system that could be used for monitoring an underwater target volume and environment using a microfluidic chip and a charge-coupled device (CCD). This technique allows us to capture specific cells and enumerate these cells over a large area on a microchip. The microfluidic chip and a lens-less imaging platform were then merged to monitor cell populations and morphologies as a system that may find use in distributed sensor networks. The chip, featuring surface chemistry and automatic cell imaging, was fabricated from a cover glass slide, double-sided adhesive film and a transparent polymethyl methacrylate (PMMA) slab. The optically clear chip allows detecting cells with a CCD sensor. These chips were fabricated with a laser cutter without the use of photolithography. We utilized CD4+ cells that are captured on the floor of a microfluidic chip due to the ability to address specific target cells using antibody-antigen binding. Captured CD4+ cells were imaged with a fluorescence microscope to verify the chip specificity and efficiency. We achieved 70.2 +/- 6.5% capturing efficiency and 88.8 +/- 5.4% specificity for CD4+ T lymphocytes (n = 9 devices). Bright field images of the captured cells in the 24 mm × 4 mm × 50 μm microfluidic chip were obtained with the CCD sensor in one second. We achieved an inexpensive system that rapidly captures cells and images them using a lens-less CCD system. This microfluidic device can be modified for use in single-cell detection utilizing a cheap light-emitting diode (LED) chip instead of a wide-range CCD system.

  12. Development of a balloon-borne device for analysis of high-altitude ice and aerosol particulates: Ice Cryo Encapsulator by Balloon (ICE-Ball)

    NASA Astrophysics Data System (ADS)

    Boaggio, K.; Bandamede, M.; Bancroft, L.; Hurler, K.; Magee, N. B.

    2016-12-01

    We report on details of continuing instrument development and deployment of a novel balloon-borne device for capturing and characterizing atmospheric ice and aerosol particles, the Ice Cryo Encapsulator by Balloon (ICE-Ball). The device is designed to capture and preserve cirrus ice particles, maintaining them at cold equilibrium temperatures, so that high-altitude particles can be recovered, transferred intact, and then imaged under SEM at an unprecedented resolution (approximately 3 nm maximum resolution). In addition to cirrus ice particles, high altitude aerosol particles are also captured, imaged, and analyzed for geometry, chemical composition, and activity as ice nucleating particles. Prototype versions of ICE-Ball have successfully captured and preserved high altitude ice particles and aerosols, then returned them for recovery and SEM imaging and analysis. New improvements include 1) ability to capture particles from multiple narrowly-defined altitudes on a single payload, 2) high quality measurements of coincident temperature, humidity, and high-resolution video at capture altitude, 3) ability to capture particles during both ascent and descent, 4) better characterization of particle collection volume and collection efficiency, and 5) improved isolation and characterization of capture-cell cryo environment. This presentation provides detailed capability specifications for anyone interested in using measurements, collaborating on continued instrument development, or including this instrument in ongoing or future field campaigns.

  13. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, technical solutions have only just passed the prototyping phase and vary considerably. This work concentrates on the quality challenges of presence capture cameras, which still face the same quality issues as previous generations of digital imaging, but also numerous new ones. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the base of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors which remain valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view, including consideration of how well current measurement methods can be applied to presence capture cameras.

  14. Use Massive Parallel Sequencing and Exome Capture Technology to Sequence the Exome of Fanconi Anemia Children and Their Parents

    ClinicalTrials.gov

    2013-11-21

    Fanconi Anemia; Autosomal or Sex Linked Recessive Genetic Disease; Bone Marrow Hematopoiesis Failure, Multiple Congenital Abnormalities, and Susceptibility to Neoplastic Diseases.; Hematopoiesis Maintainance.

  15. Nonintrusive iris image acquisition system based on a pan-tilt-zoom camera and light stripe projection

    NASA Astrophysics Data System (ADS)

    Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie

    2009-03-01

    Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement while standing, and capture of good-quality iris images in an acceptable time. The proposed system has the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by a light stripe projection, (2) the iris location in the large capture volume is found fast due to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the estimated 3-D position of a face by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.

  16. Feasibility and Use of the Mobile Food Record for Capturing Eating Occasions among Children Ages 3-10 Years in Guam.

    PubMed

    Aflague, Tanisha F; Boushey, Carol J; Guerrero, Rachael T Leon; Ahmad, Ziad; Kerr, Deborah A; Delp, Edward J

    2015-06-02

    Children's readiness to use technology supports the idea of children using mobile applications for dietary assessment. Our goal was to determine if children 3-10 years could successfully use the mobile food record (mFR) to capture a usable image pair or pairs. Children in Sample 1 were tasked to use the mFR to capture an image pair of one eating occasion while attending summer camp. For Sample 2, children were tasked to record all eating occasions for two consecutive days at two time periods that were two to four weeks apart. Trained analysts evaluated images. In Sample 1, 90% (57/63) captured one usable image pair. All children (63/63) returned the mFR undamaged. Sixty-two children reported: The mFR was easy to use (89%); willingness to use the mFR again (87%); and the fiducial marker easy to manage (94%). Children in Sample 2 used the mFR at least one day at Time 1 (59/63, 94%); Time 2 (49/63, 78%); and at both times (47/63, 75%). This latter group captured 6.21 ± 4.65 and 5.65 ± 3.26 mean (± SD) image pairs for Time 1 and Time 2, respectively. Results support the potential for children to independently record dietary intakes using the mFR.

  17. Image quality assessment of silent T2 PROPELLER sequence for brain imaging in infants.

    PubMed

    Kim, Hyun Gi; Choi, Jin Wook; Yoon, Soo Han; Lee, Sieun

    2018-02-01

    Infants are vulnerable to high acoustic noise. Acoustic noise generated by MR scanning can be reduced by a silent sequence. The purpose of this study was to compare the image quality of the conventional and silent T2 PROPELLER sequences for brain imaging in infants. A total of 36 scans were acquired from 24 infants using a 3 T MR scanner. Each patient underwent both conventional and silent T2 PROPELLER sequences. Acoustic noise level was measured. Quantitative and qualitative assessments were performed with the images taken with each sequence. The sound pressure level of the conventional T2 PROPELLER imaging sequence was 92.1 dB and that of the silent T2 PROPELLER imaging sequence was 73.3 dB (a reduction of 20%). On quantitative assessment, the two sequences (conventional vs silent T2 PROPELLER) did not show significant difference in relative contrast (0.069 vs 0.068, p value = 0.536) and signal-to-noise ratio (75.4 vs 114.8, p value = 0.098). Qualitative assessment of overall image quality (p value = 0.572), grey-white differentiation (p value = 0.986), shunt-related artefact (p value > 0.999), motion artefact (p value = 0.801) and myelination degree in different brain regions (p values ≥ 0.092) did not show significant difference between the two sequences. The silent T2 PROPELLER sequence reduces acoustic noise and generates image quality comparable to that of the conventional sequence. Advances in knowledge: This is the first report to compare silent T2 PROPELLER images with conventional T2 PROPELLER images in children.
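
The quantitative metrics above (relative contrast and signal-to-noise ratio) are typically computed from region-of-interest statistics. A minimal sketch using common definitions (the study's exact ROI placement and formulas are not given here) with hypothetical pixel values:

```python
# Sketch of the quantitative image-quality metrics reported above:
# relative contrast between two tissue ROIs, and SNR from a tissue ROI
# vs the standard deviation of background noise.
# All ROI pixel values below are hypothetical placeholders.
from statistics import mean, stdev

def relative_contrast(roi_a, roi_b):
    """(mean_a - mean_b) / (mean_a + mean_b)."""
    ma, mb = mean(roi_a), mean(roi_b)
    return (ma - mb) / (ma + mb)

def snr(roi_signal, roi_background):
    """Mean signal divided by the standard deviation of background noise."""
    return mean(roi_signal) / stdev(roi_background)

grey = [412, 405, 399, 420]      # hypothetical grey-matter ROI intensities
white = [355, 361, 349, 358]     # hypothetical white-matter ROI intensities
background = [9, 11, 10, 12, 8]  # hypothetical air/background ROI

rc = relative_contrast(grey, white)
ratio = snr(grey, background)
```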

  18. Retinal Image Quality Assessment for Spaceflight-Induced Vision Impairment Study

    NASA Technical Reports Server (NTRS)

    Vu, Amanda Cadao; Raghunandan, Sneha; Vyas, Ruchi; Radhakrishnan, Krishnan; Taibbi, Giovanni; Vizzeri, Gianmarco; Grant, Maria; Chalam, Kakarla; Parsons-Wingerter, Patricia

    2015-01-01

    Long-term exposure to space microgravity poses significant risks for visual impairment. Evidence suggests such vision changes are linked to cephalad fluid shifts, prompting a need to directly quantify microgravity-induced retinal vascular changes. The quality of retinal images used for such vascular remodeling analysis, however, is dependent on imaging methodology. For our exploratory study, we hypothesized that retinal images captured using fluorescein imaging methodologies would be of higher quality in comparison to images captured without fluorescein. A semi-automated image quality assessment was developed using Vessel Generation Analysis (VESGEN) software and MATLAB® image analysis toolboxes. An analysis of ten images found that the fluorescein imaging modality provided a 36% increase in overall image quality (two-tailed p=0.089) in comparison to nonfluorescein imaging techniques.

  19. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images

    PubMed Central

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-01-01

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods. PMID:26703596
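
Step (2) above, wavelet-domain fusion, can be illustrated with a deliberately simplified single-level 1D Haar transform; a real implementation would use a 2D discrete wavelet transform (e.g. via PyWavelets) and the authors' hybrid fusion rule. All data below are hypothetical:

```python
# Illustrative wavelet-domain fusion, reduced to one dimension and a single
# Haar level. Approximation coefficients are averaged; the larger-magnitude
# detail coefficient is kept (a common fusion rule, not the paper's exact one).

def haar_fwd(x):
    """Single-level Haar transform: (approximation, detail) coefficients."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def haar_inv(approx, detail):
    """Inverse single-level Haar transform."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def fuse(x, y):
    """Average the approximations, keep the larger-magnitude details."""
    ax, dx = haar_fwd(x)
    ay, dy = haar_fwd(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]
    return haar_inv(a, d)

palmprint_row = [8, 2, 7, 3]  # hypothetical scan-line from a palmprint image
vein_row = [4, 4, 1, 9]       # hypothetical scan-line from a vein image
fused_row = fuse(palmprint_row, vein_row)
```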

  20. Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images.

    PubMed

    Lin, Chih-Lung; Wang, Shih-Hung; Cheng, Hsu-Yung; Fan, Kuo-Chin; Hsu, Wei-Lieh; Lai, Chin-Rong

    2015-12-12

    In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms, including the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse palmprint and vein pattern images; (3) extracting the line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method possesses four advantages: first, both modal images are captured in peg-free scenarios to improve the user-friendliness of the verification device. Second, palmprint and vein pattern images are captured using a low-resolution digital scanner and infrared (IR) camera. The use of low-resolution images results in a smaller database. In addition, the vein pattern images are captured through the invisible IR spectrum, which improves antispoofing. Third, since the physiological characteristics of palmprint and vein pattern images are different, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands. The proposed method fuses decomposition coefficients at different decomposed levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically and hence no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods.

  1. A galaxy of folds.

    PubMed

    Alva, Vikram; Remmert, Michael; Biegert, Andreas; Lupas, Andrei N; Söding, Johannes

    2010-01-01

    Many protein classification systems capture homologous relationships by grouping domains into families and superfamilies on the basis of sequence similarity. Superfamilies with similar 3D structures are further grouped into folds. In the absence of discernable sequence similarity, these structural similarities were long thought to have originated independently, by convergent evolution. However, the growth of databases and advances in sequence comparison methods have led to the discovery of many distant evolutionary relationships that transcend the boundaries of superfamilies and folds. To investigate the contributions of convergent versus divergent evolution in the origin of protein folds, we clustered representative domains of known structure by their sequence similarity, treating them as point masses in a virtual 2D space which attract or repel each other depending on their pairwise sequence similarities. As expected, families in the same superfamily form tight clusters. But often, superfamilies of the same fold are linked with each other, suggesting that the entire fold evolved from an ancient prototype. Strikingly, some links connect superfamilies with different folds. They arise from modular peptide fragments of between 20 and 40 residues that co-occur in the connected folds in disparate structural contexts. These may be descendants of an ancestral pool of peptide modules that evolved as cofactors in the RNA world and from which the first folded proteins arose by amplification and recombination. Our galaxy of folds summarizes, in a single image, most known and many yet undescribed homologous relationships between protein superfamilies, providing new insights into the evolution of protein domains.

  2. Efficient mutation identification in zebrafish by microarray capturing and next generation sequencing.

    PubMed

    Bontems, Franck; Baerlocher, Loic; Mehenni, Sabrina; Bahechar, Ilham; Farinelli, Laurent; Dosch, Roland

    2011-02-18

    Fish models like medaka, stickleback or zebrafish provide a valuable resource to study vertebrate genes. However, finding genetic variants, e.g. mutations, in the genome is still arduous. Here we used a combination of microarray capturing and next generation sequencing to identify the affected gene in the mozartkugel(p11cv) (mzl(p11cv)) mutant zebrafish. We discovered a 31-bp deletion in macf1, demonstrating the potential of this technique to efficiently isolate mutations in a vertebrate genome.

  3. A vision-based system for measuring the displacements of large structures: Simultaneous adaptive calibration and full motion estimation

    NASA Astrophysics Data System (ADS)

    Santos, C. Almeida; Costa, C. Oliveira; Batista, J.

    2016-05-01

    The paper describes a kinematic model-based solution to simultaneously estimate the calibration parameters of the vision system and the full motion (6-DOF) of large civil engineering structures, namely long-deck suspension bridges, from a sequence of stereo images captured by digital cameras. Using an arbitrary number of images and assuming smooth structure motion, an Iterated Extended Kalman Filter is used to recursively estimate the projection matrices of the cameras and the structure's full motion (displacement and rotation) over time, helping to fulfil structural health monitoring requirements. Results related to the performance evaluation, obtained by numerical simulation and with real experiments, are reported. The real experiments were carried out in indoor and outdoor environments using a reduced structure model to impose controlled motions. In both cases, the results obtained with a minimum setup, comprising only two cameras and four non-coplanar tracking points, showed high accuracy for on-line camera calibration and structure full-motion estimation.
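
The recursive predict/update cycle at the heart of an Iterated Extended Kalman Filter can be illustrated with a scalar linear Kalman filter; the iterated, nonlinear version used in the paper follows the same pattern with camera-projection models in place of the scalar equations. All values below are hypothetical:

```python
# Scalar linear Kalman filter illustrating the recursive predict/update cycle
# behind IEKF-based estimation. All numbers below are hypothetical, not from
# the paper's experiments.

def kalman_1d(measurements, q=1e-4, r=0.01, x0=0.0, p0=1.0):
    """Estimate a slowly varying displacement from noisy measurements."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q            # predict: random-walk process model
        k = p / (p + r)      # Kalman gain
        x = x + k * (z - x)  # update with the measurement residual
        p = (1 - k) * p
        estimates.append(x)
    return estimates

noisy = [0.52, 0.48, 0.51, 0.49, 0.50, 0.53, 0.47]  # hypothetical displacement (m)
est = kalman_1d(noisy)
```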

  4. Time multiplexing super-resolution nanoscopy based on the Brownian motion of gold nanoparticles

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Ilovitsh, Asaf; Wagner, Omer; Zalevsky, Zeev

    2017-02-01

    Super-resolution localization microscopy can overcome the diffraction limit and achieve a resolution improvement of tens of times. It requires labeling the sample with fluorescent probes, followed by repeated cycles of activation and photobleaching. This work presents an alternative approach that is free from direct labeling and does not require the activation and photobleaching cycles. Fluorescently labeled gold nanoparticles in a solution are distributed on top of the sample. The nanoparticles move in random Brownian motion and interact with the sample. By obscuring different areas of the sample, the nanoparticles encode its sub-wavelength features. A sequence of images of the sample is captured and decoded by digital post-processing to create the super-resolution image. The achievable resolution is limited by the additive noise and the size of the nanoparticles. Regular nanoparticles with a diameter smaller than 100 nm are barely seen in a conventional bright field microscope, thus fluorescently labeled gold nanoparticles were used, with proper

  5. A versatile nanobody-based toolkit to analyze retrograde transport from the cell surface.

    PubMed

    Buser, Dominik P; Schleicher, Kai D; Prescianotto-Baschong, Cristina; Spiess, Martin

    2018-06-18

    Retrograde transport of membranes and proteins from the cell surface to the Golgi and beyond is essential to maintain homeostasis, compartment identity, and physiological functions. To study retrograde traffic biochemically, by live-cell imaging or by electron microscopy, we engineered functionalized anti-GFP nanobodies (camelid VHH antibody domains) to be bacterially expressed and purified. Tyrosine sulfation consensus sequences were fused to the nanobody for biochemical detection of trans-Golgi arrival, fluorophores for fluorescence microscopy and live imaging, and APEX2 (ascorbate peroxidase 2) for electron microscopy and compartment ablation. These functionalized nanobodies are specifically captured by GFP-modified reporter proteins at the cell surface and transported piggyback to the reporters' homing compartments. As an application of this tool, we have used it to determine the contribution of adaptor protein-1/clathrin in retrograde transport kinetics of the mannose-6-phosphate receptors from endosomes back to the trans-Golgi network. Our experiments establish functionalized nanobodies as a powerful tool to demonstrate and quantify retrograde transport pathways.

  6. High-speed railway real-time localization auxiliary method based on deep neural network

    NASA Astrophysics Data System (ADS)

    Chen, Dongjie; Zhang, Wensheng; Yang, Yang

    2017-11-01

    High-speed railway intelligent monitoring and management systems combine schedule integration, geographic information, location services, and data mining technology to integrate time and space data. Auxiliary localization is a significant submodule of the intelligent monitoring system. In practical application, the general approach is to capture image sequences of the components using a high-definition camera and then apply digital image processing techniques and target detection, tracking, and even behavior analysis methods. In this paper, we present an end-to-end character recognition method for high-speed railway pillar plate numbers based on a deep CNN called YOLO-toc. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art real-time detection performance, with nearly 50 fps achieved on a GPU (GTX 960). Finally, we realize a real-time yet high-accuracy pillar plate number recognition system and integrate natural-scene OCR into a dedicated classification YOLO-toc model.
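
Detection frameworks of this kind are commonly scored by intersection-over-union (IoU) between predicted and ground-truth boxes. The sketch below shows the standard computation with hypothetical coordinates (the paper does not specify its evaluation code):

```python
# Intersection-over-union (IoU), the standard overlap measure used when
# evaluating multi-target detectors such as YOLO-style frameworks.
# Boxes are (x1, y1, x2, y2); all coordinates below are hypothetical.

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

pred = (10, 10, 50, 50)     # hypothetical predicted pillar-plate box
truth = (12, 12, 52, 52)    # hypothetical ground-truth box
overlap = iou(pred, truth)  # a detection typically counts as correct if IoU >= 0.5
```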

  7. Quantifying efficacy and limits of unmanned aerial vehicle (UAV) technology for weed seedling detection as affected by sensor resolution.

    PubMed

    Peña, José M; Torres-Sánchez, Jorge; Serrano-Pérez, Angélica; de Castro, Ana I; López-Granados, Francisca

    2015-03-06

    In order to optimize the application of herbicides in weed-crop systems, accurate and timely weed maps of the crop-field are required. In this context, this investigation quantified the efficacy and limitations of remote images collected with an unmanned aerial vehicle (UAV) for early detection of weed seedlings. The ability to discriminate weeds was significantly affected by the imagery spectral (type of camera), spatial (flight altitude) and temporal (the date of the study) resolutions. The colour-infrared images captured at 40 m and 50 days after sowing (date 2), when plants had 5-6 true leaves, had the highest weed detection accuracy (up to 91%). At this flight altitude, the images captured before date 2 had slightly better results than the images captured later. However, this trend changed in the visible-light images captured at 60 m and higher, which had notably better results on date 3 (57 days after sowing) because of the larger size of the weed plants. Our results showed the requirements on spectral and spatial resolutions needed to generate a suitable weed map early in the growing season, as well as the best moment for the UAV image acquisition, with the ultimate objective of applying site-specific weed management operations.

  8. Quantifying Efficacy and Limits of Unmanned Aerial Vehicle (UAV) Technology for Weed Seedling Detection as Affected by Sensor Resolution

    PubMed Central

    Peña, José M.; Torres-Sánchez, Jorge; Serrano-Pérez, Angélica; de Castro, Ana I.; López-Granados, Francisca

    2015-01-01

    In order to optimize the application of herbicides in weed-crop systems, accurate and timely weed maps of the crop-field are required. In this context, this investigation quantified the efficacy and limitations of remote images collected with an unmanned aerial vehicle (UAV) for early detection of weed seedlings. The ability to discriminate weeds was significantly affected by the imagery spectral (type of camera), spatial (flight altitude) and temporal (the date of the study) resolutions. The colour-infrared images captured at 40 m and 50 days after sowing (date 2), when plants had 5–6 true leaves, had the highest weed detection accuracy (up to 91%). At this flight altitude, the images captured before date 2 had slightly better results than the images captured later. However, this trend changed in the visible-light images captured at 60 m and higher, which had notably better results on date 3 (57 days after sowing) because of the larger size of the weed plants. Our results showed the requirements on spectral and spatial resolutions needed to generate a suitable weed map early in the growing season, as well as the best moment for the UAV image acquisition, with the ultimate objective of applying site-specific weed management operations. PMID:25756867

  9. Detection of a novel herpesvirus from bats in the Philippines.

    PubMed

    Sano, Kaori; Okazaki, Sachiko; Taniguchi, Satoshi; Masangkay, Joseph S; Puentespina, Roberto; Eres, Eduardo; Cosico, Edison; Quibod, Niña; Kondo, Taisuke; Shimoda, Hiroshi; Hatta, Yuuki; Mitomo, Shumpei; Oba, Mami; Katayama, Yukie; Sassa, Yukiko; Furuya, Tetsuya; Nagai, Makoto; Une, Yumi; Maeda, Ken; Kyuwa, Shigeru; Yoshikawa, Yasuhiro; Akashi, Hiroomi; Omatsu, Tsutomu; Mizutani, Tetsuya

    2015-08-01

    Bats are natural hosts of many zoonotic viruses. Monitoring bat viruses is important to detect novel bat-borne infectious diseases. In this study, next generation sequencing techniques and conventional PCR were used to analyze intestine, lung, and blood clot samples collected from wild bats captured at three locations in the Davao region of the Philippines in 2012. Different viral genes belonging to the Retroviridae and Herpesviridae families were identified using next generation sequencing. The existence of herpesvirus in the samples was confirmed by PCR using herpesvirus consensus primers. The resulting PCR amplicons were 166 bp in length. Further phylogenetic analysis identified that the virus from which this nucleotide sequence was obtained belongs to the Gammaherpesvirinae subfamily. PCR using primers specific to the nucleotide sequence obtained revealed that the infection rate among the captured bats was 30%. In this study, we present the partial genome of a novel gammaherpesvirus detected in wild bats. Our observations also indicate that this herpesvirus may be widely distributed in bat populations in the Davao region.

  10. RNA-Seq analysis to capture the transcriptome landscape of a single cell

    PubMed Central

    Tang, Fuchou; Barbacioru, Catalin; Nordman, Ellen; Xu, Nanlan; Bashkirov, Vladimir I; Lao, Kaiqin; Surani, M. Azim

    2013-01-01

    We describe here a protocol for digital transcriptome analysis in a single mouse blastomere using a deep sequencing approach. An individual blastomere was first isolated and put into lysis buffer by mouth pipette. Reverse transcription was then performed directly on the whole cell lysate. After this, the free primers were removed by Exonuclease I and a poly(A) tail was added to the 3′ end of the first-strand cDNA by Terminal Deoxynucleotidyl Transferase. The single-cell cDNAs were then amplified by 20 plus 9 cycles of PCR, and 100-200 ng of the amplified cDNAs were used to construct a sequencing library. The sequencing library can be used for deep sequencing with the SOLiD system. Compared with the cDNA microarray technique, our assay can capture up to 75% more genes expressed in early embryos. The protocol can generate deep sequencing libraries for 16 single-cell samples within 6 days. PMID:20203668

  11. Historic Methods for Capturing Magnetic Field Images

    ERIC Educational Resources Information Center

    Kwan, Alistair

    2016-01-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…

  12. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder or a DVD player. We developed two modes of operation for the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the current captured image from the video camcorder (or the DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and other existing conventional systems is that the image-processing functions are performed on the board instead of the PC (so that the functions can be used for further developments on the board). The user can control the operations of the board through the Graphic User Interface (GUI) provided on the PC. In order to have a smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smoothing and sharpening filters in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (or DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It is demonstrated that our system is adequate for real-time image capturing. Our system can be applied to applications such as medical imaging and video surveillance.
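    The Point Processing group above (rotation, negation, mirroring) consists of simple pixel-wise and index transforms. A minimal NumPy sketch of the same three operations (illustrative only; the original system implemented its functions in C on the DM642 board):

```python
import numpy as np

def negate(img):
    """Photographic negative of an 8-bit image."""
    return 255 - img

def mirror(img):
    """Mirror horizontally (flip the column axis)."""
    return img[:, ::-1]

def rotate90(img):
    """Rotate 90 degrees counter-clockwise."""
    return np.rot90(img)

img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
print(negate(img))    # [[255 191] [127   0]]
print(mirror(img))    # [[ 64   0] [255 128]]
print(rotate90(img))  # [[ 64 255] [  0 128]]
```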

  13. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.

  14. The UBIRIS.v2: a database of visible wavelength iris images captured on-the-move and at-a-distance.

    PubMed

    Proença, Hugo; Filipe, Sílvio; Santos, Ricardo; Oliveira, João; Alexandre, Luís A

    2010-08-01

    The iris is regarded as one of the most useful traits for biometric recognition and the dissemination of nationwide iris-based recognition systems is imminent. However, currently deployed systems rely on heavy imaging constraints to capture near infrared images with enough quality. Also, all of the publicly available iris image databases contain data corresponding to such imaging constraints and are therefore exclusively suitable for evaluating methods designed to operate in this type of environment. The main purpose of this paper is to announce the availability of the UBIRIS.v2 database, a multisession iris image database which singularly contains data captured in the visible wavelength, at-a-distance (between four and eight meters) and on-the-move. This database is freely available for researchers concerned with visible wavelength iris recognition and will be useful in assessing the feasibility and specifying the constraints of this type of biometric recognition.

  15. Multiplexed direct genomic selection (MDiGS): a pooled BAC capture approach for highly accurate CNV and SNP/INDEL detection.

    PubMed

    Alvarado, David M; Yang, Ping; Druley, Todd E; Lovett, Michael; Gurnett, Christina A

    2014-06-01

    Despite declining sequencing costs, few methods are available for cost-effective single-nucleotide polymorphism (SNP), insertion/deletion (INDEL) and copy number variation (CNV) discovery in a single assay. Commercially available methods require a high investment in a specific region and are only cost-effective for large sample sets. Here, we introduce a novel, flexible approach for multiplexed targeted sequencing and CNV analysis of large genomic regions called multiplexed direct genomic selection (MDiGS). MDiGS combines biotinylated bacterial artificial chromosome (BAC) capture and multiplexed pooled capture for SNP/INDEL and CNV detection of 96 multiplexed samples on a single MiSeq run. MDiGS is advantageous over other methods for CNV detection because pooled sample capture and hybridization to large contiguous BAC baits reduce the sample and probe hybridization variability inherent in other methods. We performed MDiGS capture for three chromosomal regions consisting of ~550 kb of coding and non-coding sequence with DNA from 253 patients with congenital lower limb disorders. PITX1 nonsense and HOXC11 S191F missense mutations were identified that segregate in clubfoot families. Using a novel pooled-capture reference strategy, we identified recurrent chromosome chr17q23.1q23.2 duplications and small HOXC 5' cluster deletions (51 kb and 12 kb). Given the current interest in coding and non-coding variants in human disease, MDiGS fulfills a niche for comprehensive and low-cost evaluation of CNVs, coding, and non-coding variants across candidate regions of interest. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. A Segmentation Method for Lung Parenchyma Image Sequences Based on Superpixels and a Self-Generating Neural Forest

    PubMed Central

    Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang

    2016-01-01

    Background Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The SGNF, which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results Our proposed method achieves higher segmentation precision and greater accuracy in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
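    The paper's GSLIC/SGNF pipeline is not reproduced here; as a simplified stand-in, the sketch below clusters pixels on intensity plus (weighted) position, the same feature combination that superpixel methods such as SLIC build on. The function name, parameters, and initialization are illustrative, not the paper's algorithm.

```python
import numpy as np

def pixel_kmeans(img, k=2, spatial_weight=0.1, iters=10):
    """Toy pixel clustering on (intensity, x, y) features -- a simplified
    stand-in for superpixel generation (GSLIC is far more elaborate)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([img.ravel().astype(float),
                      spatial_weight * xs.ravel().astype(float),
                      spatial_weight * ys.ravel().astype(float)], axis=1)
    # deterministic init: spread centers along the intensity range
    order = np.argsort(feats[:, 0])
    centers = feats[order[np.linspace(0, len(order) - 1, k).astype(int)]].copy()
    for _ in range(iters):
        # assign each pixel to its nearest center, then recompute the means
        labels = np.argmin(((feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = feats[labels == j].mean(axis=0)
    return labels.reshape(h, w)
```

The spatial weight controls the compactness trade-off: a larger weight favors spatially tight clusters, a smaller one favors intensity homogeneity.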

  17. Endoscopic device for functional imaging of the retina

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Lohani, Sweyta; Martell, Bret; Soliz, Peter; Ts'o, Dan

    2011-03-01

    Non-invasive imaging of retinal function based on the recording of spatially distributed reflectance changes evoked by visual stimuli has to date been performed primarily using modified commercial fundus cameras. We have constructed a prototype retinal functional imager, using a commercial endoscope (Storz) for the front-end optics, and a low-cost back-end that includes the needed dichroic beam splitter to separate the stimulus path from the imaging path. This device has been tested to demonstrate its performance in the delivery of adequate near infrared (NIR) illumination, the intensity of the visual stimulus and the reflectance return in the imaging path. The current device was found to be capable of imaging reflectance changes of 0.1%, similar to that observable using the modified commercial fundus camera approach. The visual stimulus (a 505 nm spot of 0.5 s) was used with an interrogation illumination of 780 nm, and a sequence of images was captured. At each pixel, the imaged signal was subtracted and normalized by the baseline reflectance, so that the measurement was ΔR/R. The typical retinal activity signal observed had a ΔR/R of 0.3-1.0%. The noise levels, measured when no stimulus was applied, varied within +/- 0.05%. Functional imaging has been suggested as a means to provide objective information on retinal function that may be a preclinical indicator of ocular diseases, such as age-related macular degeneration (AMD), glaucoma, and diabetic retinopathy. The endoscopic approach promises to yield a significantly more economical retinal functional imaging device that would be clinically important.
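    The measurement described, ΔR/R, is the per-pixel reflectance change normalized by the baseline reflectance. A minimal NumPy sketch (array names and values are illustrative):

```python
import numpy as np

def delta_r_over_r(frames, baseline):
    """Per-pixel fractional reflectance change: (R - R0) / R0.

    frames   -- stack of post-stimulus images, shape (t, h, w)
    baseline -- mean pre-stimulus image, shape (h, w)
    """
    return (frames - baseline[None]) / baseline[None]

baseline = np.full((2, 2), 1000.0)
frames = np.array([[[1003.0, 1000.0], [997.0, 1010.0]]])
print(delta_r_over_r(frames, baseline)[0])
# 0.3% increase at (0,0), no change at (0,1), 0.3% decrease at (1,0), 1.0% increase at (1,1)
```

Averaging several pre-stimulus frames into the baseline, as done here conceptually, is what lets signals as small as 0.1% rise above the ±0.05% noise floor reported in the abstract.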

  18. Using timed event sequential data in nursing research.

    PubMed

    Pecanac, Kristen E; Doherty-King, Barbara; Yoon, Ju Young; Brown, Roger; Schiefelbein, Tony

    2015-01-01

    Measuring behavior is important in nursing research, and innovative technologies are needed to capture the "real-life" complexity of behaviors and events. The purpose of this article is to describe the use of timed event sequential data in nursing research and to demonstrate the use of this data in a research study. Timed event sequencing allows the researcher to capture the frequency, duration, and sequence of behaviors as they occur in an observation period and to link the behaviors to contextual details. Timed event sequential data can easily be collected with handheld computers, loaded with a software program designed for capturing observations in real time. Timed event sequential data add considerable strength to analysis of any nursing behavior of interest, which can enhance understanding and lead to improvement in nursing practice.
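    Timed event sequential data records each behavior with its onset and offset, so frequency, total duration, and order of onset all fall out of the log directly. A minimal sketch with made-up events (the behavior names and times are illustrative, not from the study described):

```python
# (behavior, start_s, end_s) tuples as logged during one observation period
events = [("sitting", 0, 120), ("walking", 120, 150),
          ("sitting", 150, 300), ("talking", 160, 200)]

def summarize(events, behavior):
    """Frequency and total duration (seconds) of one behavior."""
    spans = [end - start for name, start, end in events if name == behavior]
    return len(spans), sum(spans)

freq, dur = summarize(events, "sitting")
print(freq, dur)    # 2 occurrences, 270 s in total

# order of onset reconstructs the behavioral sequence
sequence = [name for name, start, end in sorted(events, key=lambda e: e[1])]
print(sequence)     # ['sitting', 'walking', 'sitting', 'talking']
```

Note that events may overlap (a state such as "sitting" can span a point behavior such as "talking"), which is exactly the contextual linkage the abstract highlights.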

  19. MR-based detection of individual histotripsy bubble clouds formed in tissues and phantoms.

    PubMed

    Allen, Steven P; Hernandez-Garcia, Luis; Cain, Charles A; Hall, Timothy L

    2016-11-01

    To demonstrate that MR sequences can detect individual histotripsy bubble clouds formed inside intact tissues. A line-scan and an EPI sequence were sensitized to histotripsy by inserting a bipolar gradient whose lobes bracketed the lifespan of a histotripsy bubble cloud. Using a 7 Tesla, small-bore scanner, these sequences monitored histotripsy clouds formed in an agar phantom and in vitro porcine liver and brain. The bipolar gradients were adjusted to apply phase with k-space frequencies of 10, 300 or 400 cm -1 . Acoustic pressure amplitude was also varied. Cavitation was simultaneously monitored using a passive cavitation detection system. Each image captured local signal loss specific to an individual bubble cloud. In the agar phantom, this signal loss appeared only when the transducer output exceeded the cavitation threshold pressure. In tissues, bubble clouds were immediately detected when the gradients created phase with k-space frequencies of 300 and 400 cm -1 . When the gradients created phase with a k-space frequency of 10 cm -1 , individual bubble clouds were not detectable until many acoustic pulses had been applied to the tissue. Cavitation-sensitive MR-sequences can detect single histotripsy bubble clouds formed in biologic tissue. Detection is influenced by the sensitizing gradients and treatment history. Magn Reson Med 76:1486-1493, 2016. © 2015 International Society for Magnetic Resonance in Medicine. © 2015 International Society for Magnetic Resonance in Medicine.

  20. Effective Identification of Similar Patients Through Sequential Matching over ICD Code Embedding.

    PubMed

    Nguyen, Dang; Luo, Wei; Venkatesh, Svetha; Phung, Dinh

    2018-04-11

    Evidence-based medicine often involves the identification of patients with similar conditions, which are often captured in ICD (International Classification of Diseases (World Health Organization 2013)) code sequences. With no satisfying prior solutions for matching ICD-10 code sequences, this paper presents a method which effectively captures the clinical similarity among routine patients who have multiple comorbidities and complex care needs. Our method leverages the recent progress in representation learning of individual ICD-10 codes, and it explicitly uses the sequential order of codes for matching. Empirical evaluation on a state-wide cancer data collection shows that our proposed method achieves significantly higher matching performance compared with state-of-the-art methods ignoring the sequential order. Our method better identifies similar patients in a number of clinical outcomes including readmission and mortality outlook. Although this paper focuses on ICD-10 diagnosis code sequences, our method can be adapted to work with other codified sequence data.
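    The paper's exact matching model is not given here; one common way to respect sequential order when comparing code sequences is an edit-distance (Levenshtein-style) alignment whose substitution cost is the distance between code embeddings. A toy sketch with made-up two-dimensional "embeddings" (real embeddings are learned and much higher-dimensional; the codes and costs below are illustrative):

```python
import numpy as np

# toy embeddings: clinically related codes sit close together
emb = {"I10": np.array([0.0, 1.0]),   # essential hypertension
       "I11": np.array([0.1, 0.9]),   # hypertensive heart disease
       "E11": np.array([1.0, 0.0])}   # type 2 diabetes

def seq_distance(a, b, gap=1.0):
    """Edit-distance alignment of two code sequences; substituting two codes
    costs their embedding (Euclidean) distance, insert/delete costs `gap`."""
    d = np.zeros((len(a) + 1, len(b) + 1))
    d[:, 0] = np.arange(len(a) + 1) * gap
    d[0, :] = np.arange(len(b) + 1) * gap
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = np.linalg.norm(emb[a[i - 1]] - emb[b[j - 1]])
            d[i, j] = min(d[i - 1, j - 1] + sub,
                          d[i - 1, j] + gap,
                          d[i, j - 1] + gap)
    return d[-1, -1]

# the order-aware distance treats I10 -> I11 as a near-match (~0.14)
print(seq_distance(["I10", "E11"], ["I11", "E11"]))
```

Unlike bag-of-codes similarity, this alignment penalizes reorderings, which is the property the abstract argues matters for matching patient histories.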

  1. Portable LED-induced autofluorescence imager with a probe of L shape for oral cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Huang, Ting-Wei; Lee, Yu-Cheng; Cheng, Nai-Lun; Yan, Yung-Jhe; Chiang, Hou-Chi; Chiou, Jin-Chern; Mang, Ou-Yang

    2015-08-01

    The difference in spectral distribution between the excited fluorescence of epithelial lesion cells and that of normal cells is one of the methods used for cancer diagnosis. In our previous work, we developed a portable LED-induced autofluorescence (LIAF) imager containing multiple wavelengths of LED excitation light and multiple filters to capture ex-vivo oral tissue autofluorescence images. Our portable system for the detection of oral cancer has a probe in front of the lens for fixing the object distance. The shape of the probe is a cone, and it is not convenient for the doctor to capture the oral image at an appropriate view angle in front of the probe. Therefore, an L-shaped probe containing a mirror is proposed so that doctors can capture images at the right angles and subjects do not need to strain to open their mouths. Besides, a glass plate is placed in the probe to prevent liquid from entering the device, but light reflected directly from the glass plate causes light spots inside the images. We set the glass plate in front of the LED to avoid these light spots. When the distance between the glass plate and the LED module plane is less than a critical value, the light spots caused by the glass plate can be prevented. The experiments show that an image captured with the new probe, with the glass plate placed at the back end of the probe, has no light spots inside the image.

  2. Ultrahigh-frame CCD imagers

    NASA Astrophysics Data System (ADS)

    Lowrance, John L.; Mastrocola, V. J.; Renda, George F.; Swain, Pradyumna K.; Kabra, R.; Bhaskaran, Mahalingham; Tower, John R.; Levine, Peter A.

    2004-02-01

    This paper describes the architecture, process technology, and performance of a family of high burst rate CCDs. These imagers employ high speed, low lag photo-detectors with local storage at each photo-detector to achieve image capture at rates greater than 10^6 frames per second. One imager has a 64 x 64 pixel array with 12 frames of storage. A second imager has an 80 x 160 array with 28 frames of storage, and the third imager has a 64 x 64 pixel array with 300 frames of storage. Application areas include capture of rapid mechanical motion, optical wavefront sensing, fluid cavitation research, combustion studies, plasma research and wind-tunnel-based gas dynamics research.

  3. Comparison of three-dimensional surface-imaging systems.

    PubMed

    Tzou, Chieh-Han John; Artner, Nicole M; Pona, Igor; Hold, Alina; Placheta, Eva; Kropatsch, Walter G; Frey, Manfred

    2014-04-01

    In recent decades, three-dimensional (3D) surface-imaging technologies have gained popularity worldwide, but because most published articles that mention them are technical, clinicians often have difficulties gaining a proper understanding of them. This article aims to provide the reader with relevant information on 3D surface-imaging systems. In it, we compare the most recent technologies to reveal their differences. We have assessed five international companies with the latest technologies in 3D surface-imaging systems: 3dMD, Axisthree, Canfield, Crisalix and Dimensional Imaging (Di3D; in alphabetical order). We evaluated their technical equipment, independent validation studies and corporate backgrounds. The fastest capturing devices are the 3dMD and Di3D systems, capable of capturing images within 1.5 and 1 ms, respectively. All companies provide software for tissue modifications. Additionally, 3dMD, Canfield and Di3D can fuse computed tomography (CT)/cone-beam computed tomography (CBCT) images into their 3D surface-imaging data. 3dMD and Di3D provide 4D capture systems, which allow capturing the movement of a 3D surface over time. Crisalix greatly differs from the other four systems as it is purely web based and realised via cloud computing. 3D surface-imaging systems are becoming important in today's plastic surgical set-ups, taking surgeons to a new level of communication with patients, surgical planning and outcome evaluation. Technologies used in 3D surface-imaging systems and their intended fields of application vary among the companies evaluated. Potential users should define their requirements and the intended assignment of 3D surface-imaging systems in their clinical and research environments before making the final purchase decision. Copyright © 2014 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  4. Three-dimensional T1rho-weighted MRI at 1.5 Tesla.

    PubMed

    Borthakur, Arijitt; Wheaton, Andrew; Charagundla, Sridhar R; Shapiro, Erik M; Regatte, Ravinder R; Akella, Sarma V S; Kneeland, J Bruce; Reddy, Ravinder

    2003-06-01

    To design and implement a magnetic resonance imaging (MRI) pulse sequence capable of performing three-dimensional T(1rho)-weighted MRI on a 1.5-T clinical scanner, and determine the optimal sequence parameters, both theoretically and experimentally, so that the energy deposition by the radiofrequency pulses in the sequence, measured as the specific absorption rate (SAR), does not exceed safety guidelines for imaging human subjects. A three-pulse cluster was pre-encoded to a three-dimensional gradient-echo imaging sequence to create a three-dimensional, T(1rho)-weighted MRI pulse sequence. Imaging experiments were performed on a GE clinical scanner with a custom-built knee coil. We validated the performance of this sequence by imaging articular cartilage of a bovine patella and comparing T(1rho) values measured by this sequence to those obtained with a previously tested two-dimensional imaging sequence. Using a previously developed model for SAR calculation, the imaging parameters were adjusted such that the energy deposition by the radiofrequency pulses in the sequence did not exceed safety guidelines for imaging human subjects. The actual temperature increase due to the sequence was measured in a phantom by an MRI-based temperature mapping technique. Following these experiments, the performance of this sequence was demonstrated in vivo by obtaining T(1rho)-weighted images of the knee joint of a healthy individual. Calculated T(1rho) of articular cartilage in the specimen was similar for both the three-dimensional and two-dimensional methods (84 +/- 2 msec and 80 +/- 3 msec, respectively). The temperature increase in the phantom resulting from the sequence was 0.015 degrees C, which is well below the established safety guidelines. Images of the human knee joint in vivo demonstrate a clear delineation of cartilage from surrounding tissues. We developed and implemented a three-dimensional T(1rho)-weighted pulse sequence on a 1.5-T clinical scanner.
Copyright 2003 Wiley-Liss, Inc.

  5. Optimization of a double inversion recovery sequence for noninvasive synovium imaging of joint effusion in the knee.

    PubMed

    Jahng, Geon-Ho; Jin, Wook; Yang, Dal Mo; Ryu, Kyung Nam

    2011-05-01

    We wanted to optimize a double inversion recovery (DIR) sequence to image joint effusion regions of the knee, especially intracapsular or intrasynovial imaging in the suprapatellar bursa and patellofemoral joint space. Computer simulations were performed to determine the optimum inversion times (TI) for suppressing both fat and water signals, and a DIR sequence was optimized based on the simulations for distinguishing synovitis from fluid. In vivo studies were also performed on individuals who showed joint effusion on routine knee MR images to demonstrate the feasibility of using the DIR sequence with a 3T whole-body MR scanner. To compare intracapsular or intrasynovial signals on the DIR images, intermediate density-weighted images and/or post-enhanced T1-weighted images were acquired. The timings to enhance the synovial contrast from the fluid components were TI1 = 2830 ms and TI2 = 254 ms for suppressing the water and fat signals, respectively. Improved contrast for the intrasynovial area in the knees was observed with the DIR turbo spin-echo pulse sequence compared to the intermediate density-weighted sequence. Imaging contrast obtained noninvasively with the DIR sequence was similar to that of the post-enhanced T1-weighted sequence. The DIR sequence may be useful for delineating synovium without using contrast materials.
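    Under a long-TR approximation, the longitudinal signal of a tissue with relaxation time T1 after a double inversion is commonly modeled as S(T1) = 1 - 2*exp(-TI2/T1) + 2*exp(-TI1/T1), with TI1 and TI2 measured from the first and second inversion pulses to the readout. The sketch below grid-searches a (TI1, TI2) pair that nulls two tissues simultaneously; the T1 values, search ranges, and model simplifications are assumptions for illustration, not the paper's simulation, so the resulting TIs differ from the quoted 2830/254 ms.

```python
import math

def dir_signal(t1, ti1, ti2):
    """Longitudinal signal after a double inversion (long-TR approximation):
    S = 1 - 2*exp(-TI2/T1) + 2*exp(-TI1/T1). Times in ms."""
    return 1 - 2 * math.exp(-ti2 / t1) + 2 * math.exp(-ti1 / t1)

def solve_dir(t1_a, t1_b):
    """Coarse grid search for (TI1, TI2) nulling both tissues at once."""
    best = None
    for ti1 in range(1000, 5001, 10):
        for ti2 in range(50, 1001, 2):
            cost = dir_signal(t1_a, ti1, ti2) ** 2 + dir_signal(t1_b, ti1, ti2) ** 2
            if best is None or cost < best[0]:
                best = (cost, ti1, ti2)
    return best[1], best[2]

# illustrative T1 values (ms): fluid ~ 4000, fat ~ 380 (assumed, not the paper's)
ti1, ti2 = solve_dir(4000.0, 380.0)
print(ti1, ti2)
```

In practice such timings come from Bloch-equation simulations with the real TR and pulse train, but the two-exponential null condition above captures why one long and one short inversion time suppress a long-T1 and a short-T1 species together.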

  6. Computerized image analysis for acetic acid induced intraepithelial lesions

    NASA Astrophysics Data System (ADS)

    Li, Wenjing; Ferris, Daron G.; Lieberman, Rich W.

    2008-03-01

    Cervical Intraepithelial Neoplasia (CIN) exhibits certain morphologic features that can be identified during a visual inspection exam. Immature and dysplastic cervical squamous epithelium turns white after application of acetic acid during the exam. The whitening process occurs visually over several minutes and subjectively discriminates between dysplastic and normal tissue. Digital imaging technologies allow us to assist the physician in analyzing the acetic acid induced lesions (acetowhite regions) in a fully automatic way. This paper reports a study designed to measure multiple parameters of the acetowhitening process from two images captured with a digital colposcope: one captured before the acetic acid application, and the other captured after it. The spatial change of the acetowhitening is extracted using color and texture information in the post-acetic-acid image; the temporal change is extracted from the intensity and color changes between the post-acetic-acid and pre-acetic-acid images after automatic alignment. The imaging and data analysis system has been evaluated with a total of 99 human subjects and demonstrates its potential for screening underserved women where access to skilled colposcopists is limited.

  7. Tvashtar in Motion

    NASA Technical Reports Server (NTRS)

    2007-01-01

    This five-frame sequence of New Horizons images captures the giant plume from Io's Tvashtar volcano. Snapped by the probe's Long Range Reconnaissance Imager (LORRI) as the spacecraft flew past Jupiter earlier this year, this first-ever 'movie' of an Io plume clearly shows motion in the cloud of volcanic debris, which extends 330 kilometers (200 miles) above the moon's surface. Only the upper part of the plume is visible from this vantage point -- the plume's source is 130 kilometers (80 miles) below the edge of Io's disk, on the far side of the moon.

    The appearance and motion of the plume is remarkably similar to an ornamental fountain on Earth, replicated on a gigantic scale. The knots and filaments that allow us to track the plume's motion are still mysterious, but this movie is likely to help scientists understand their origin, as well as provide unique information on the plume dynamics.

    Io's hyperactive nature is emphasized by the fact that two other volcanic plumes are also visible off the edge of Io's disk: Masubi at the 7 o'clock position, and a very faint plume, possibly from the volcano Zal, at the 10 o'clock position. Jupiter illuminates the night side of Io, and the most prominent feature visible on the disk is the dark horseshoe shape of the volcano Loki, likely an enormous lava lake. Boosaule Mons, which at 18 kilometers (11 miles) is the highest mountain on Io and one of the highest mountains in the solar system, pokes above the edge of the disk on the right side.

    The five images were obtained over an 8-minute span, with two minutes between frames, from 23:50 to 23:58 Universal Time on March 1, 2007. Io was 3.8 million kilometers (2.4 million miles) from New Horizons; the image is centered at Io coordinates 0 degrees north, 342 degrees west.

    The pictures were part of a sequence designed to look at Jupiter's rings, but planners included Io in the sequence because the moon was passing behind Jupiter's rings at the time.

  8. Deep sequencing with intronic capture enables identification of an APC exon 10 inversion in a patient with polyposis.

    PubMed

    Shirts, Brian H; Salipante, Stephen J; Casadei, Silvia; Ryan, Shawnia; Martin, Judith; Jacobson, Angela; Vlaskin, Tatyana; Koehler, Karen; Livingston, Robert J; King, Mary-Claire; Walsh, Tom; Pritchard, Colin C

    2014-10-01

    Single-exon inversions have rarely been described in clinical syndromes and are challenging to detect using Sanger sequencing. We report the case of a 40-year-old woman with adenomatous colon polyps too numerous to count and who had a complex inversion spanning the entire exon 10 in APC (the gene encoding for adenomatous polyposis coli), causing exon skipping and resulting in a frameshift and premature protein truncation. In this study, we employed complete APC gene sequencing using high-coverage next-generation sequencing by ColoSeq, analysis with BreakDancer and SLOPE software, and confirmatory transcript analysis. ColoSeq identified a complex small genomic rearrangement consisting of an inversion that results in translational skipping of exon 10 in the APC gene. This mutation would not have been detected by traditional sequencing or gene-dosage methods. We report a case of adenomatous polyposis resulting from a complex single-exon inversion. Our report highlights the benefits of large-scale sequencing methods that capture intronic sequences with a high enough depth of coverage, as well as the use of informatics tools, to enable detection of small pathogenic structural rearrangements.

  9. Capturing and stitching images with a large viewing angle and low distortion properties for upper gastrointestinal endoscopy

    NASA Astrophysics Data System (ADS)

    Liu, Ya-Cheng; Chung, Chien-Kai; Lai, Jyun-Yi; Chang, Han-Chao; Hsu, Feng-Yi

    2013-06-01

    Upper gastrointestinal endoscopies are primarily performed to observe pathologies of the esophagus, stomach, and duodenum. However, when an endoscope is pushed into the esophagus or stomach by the physician, the organs behave like a balloon being gradually inflated. Consequently, their shapes and the depth of field of the images change continually, preventing thorough examination of the inflammation or anabrosis position and delaying treatment. In this study, a 2.9-mm image-capturing module and a convoluted mechanism were incorporated into a tube resembling a standard 10-mm upper gastrointestinal endoscope. The scale-invariant feature transform (SIFT) algorithm was adopted to implement disease-feature extraction on a koala doll. Following feature extraction, the smoothly varying affine stitching (SVAS) method was employed to resolve stitching-distortion problems. Subsequently, the real-time splicing software developed in this study was embedded in an upper gastrointestinal endoscope to obtain a panoramic view of stomach inflammation in the captured images. The results showed that the 2.9-mm image-capturing module can provide approximately 50 verified images in one spin cycle, a viewing angle of 120° can be attained, and less than 10% distortion can be achieved in each image. These methods therefore solve the problems encountered when using a standard 10-mm upper gastrointestinal endoscope with a single camera, such as image distortion and partial inflammation displays. The results also showed that the SIFT algorithm provides the highest correct matching rate, and the SVAS method can be employed to resolve the parallax problems caused by stitching together images of different flat surfaces.
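
    SIFT matching of the kind used above is typically paired with Lowe's ratio test to reject ambiguous correspondences before stitching. The following is an illustrative sketch, not the authors' implementation: a brute-force nearest-neighbor matcher over toy descriptor vectors, with the 0.8 ratio threshold commonly used in practice.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Match descriptors from image A to image B using Lowe's ratio test:
    accept a match only if the nearest neighbor is clearly closer than
    the second-nearest one."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < ratio * second:
            matches.append((i, int(order[0])))
    return matches

# Toy descriptors: each row is a feature vector.
a = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.9, 0.1], [5.0, 5.0], [0.1, 0.9]])
print(ratio_test_matches(a, b))  # each row of `a` finds its close partner in `b`
```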

  10. High-throughput physical mapping of chromosomes using automated in situ hybridization.

    PubMed

    George, Phillip; Sharakhova, Maria V; Sharakhov, Igor V

    2012-06-28

    Projects to obtain whole-genome sequences for 10,000 vertebrate species and for 5,000 insect and related arthropod species are expected to take place over the next 5 years. For example, the sequencing of the genomes for 15 malaria mosquito species is currently being done using an Illumina platform. This Anopheles species cluster includes both vectors and non-vectors of malaria. When the genome assemblies become available, researchers will have the unique opportunity to perform comparative analysis for inferring evolutionary changes relevant to vector ability. However, it has proven difficult to use next-generation sequencing reads to generate high-quality de novo genome assemblies. Moreover, the existing genome assemblies for Anopheles gambiae, although obtained using the Sanger method, are gapped or fragmented. Success of comparative genomic analyses will be limited if researchers deal with numerous sequencing contigs, rather than with chromosome-based genome assemblies. Fragmented, unmapped sequences create problems for genomic analyses because: (i) unidentified gaps cause incorrect or incomplete annotation of genomic sequences; (ii) unmapped sequences lead to confusion between paralogous genes and genes from different haplotypes; and (iii) the lack of chromosome assignment and orientation of the sequencing contigs does not allow for reconstructing rearrangement phylogeny and studying chromosome evolution. Developing high-resolution physical maps for species with newly sequenced genomes is a timely and cost-effective investment that will facilitate genome annotation, evolutionary analysis, and re-sequencing of individual genomes from natural populations. Here, we present innovative approaches to chromosome preparation, fluorescent in situ hybridization (FISH), and imaging that facilitate rapid development of physical maps. Using An.
gambiae as an example, we demonstrate that the development of physical chromosome maps can potentially improve genome assemblies and, thus, the quality of genomic analyses. First, we use a high-pressure method to prepare polytene chromosome spreads. This method, originally developed for Drosophila, allows the user to visualize more details on chromosomes than the regular squashing technique. Second, a fully automated, front-end system for FISH is used for high-throughput physical genome mapping. The automated slide staining system runs multiple assays simultaneously and dramatically reduces hands-on time. Third, an automatic fluorescent imaging system, which includes a motorized slide stage, automatically scans and photographs labeled chromosomes after FISH. This system is especially useful for identifying and visualizing multiple chromosomal plates on the same slide. In addition, the scanning process captures a more uniform FISH result. Overall, the automated high-throughput physical mapping protocol is more efficient than a standard manual protocol.

  11. Lateral flow nucleic acid biosensor for sensitive detection of microRNAs based on the dual amplification strategy of duplex-specific nuclease and hybridization chain reaction.

    PubMed

    Ying, Na; Ju, Chuanjing; Sun, Xiuwei; Li, Letian; Chang, Hongbiao; Song, Guangping; Li, Zhongyi; Wan, Jiayu; Dai, Enyong

    2017-01-01

    MicroRNAs (miRNAs) constitute novel biomarkers for various diseases. Accurate and quantitative analysis of miRNA expression is critical for biomedical research and clinical theranostics. In this study, a method was developed for sensitive and specific detection of miRNAs via dual signal amplification based on duplex-specific nuclease (DSN) and hybridization chain reaction (HCR). A reporter probe (RP), comprising a recognition sequence (3' end modified with biotin) for the target miRNA miR-21 and a capture sequence (5' end modified with Fam) for the HCR product, was designed and synthesized. HCR was initiated by a partial sequence of the initiator probe (IP), the other part of which can hybridize with the capture sequence of the RP, and was assembled from hairpin probes modified with biotin (H1-bio and H2-bio). miR-21 triggered cyclical DSN cleavage of the RP, which was immobilized on streptavidin (SA)-coated magnetic beads (MB). The released Fam-labeled capture sequence then hybridized with the HCR product to generate a detectable dsDNA. This polymer was then dropped onto a lateral flow strip and a positive result was observed. The proposed method allowed quantitative sequence-specific detection of miR-21 (with a detection limit of 2.1 fM, S/N = 3) in a dynamic range from 100 fM to 100 pM, with an excellent ability to discriminate differences in miRNAs. The method showed acceptable testing recoveries for the determination of miRNAs in serum.
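
    The reported detection limit follows the conventional S/N = 3 criterion, i.e. LOD = 3 × (standard deviation of the blank) / (calibration slope). A minimal illustration with made-up numbers, not the paper's data:

```python
# Limit of detection from the 3-sigma criterion (S/N = 3):
# LOD = 3 * (standard deviation of blank signal) / (calibration slope).
# The values below are illustrative only.
import statistics

blank_signals = [10.2, 9.8, 10.1, 10.0, 9.9]   # repeated blank measurements (a.u.)
slope = 0.21                                    # hypothetical signal per fM of miR-21

sigma_blank = statistics.stdev(blank_signals)
lod = 3 * sigma_blank / slope
print(f"LOD ~= {lod:.2f} fM")
```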

  12. A Rapid, High-Quality, Cost-Effective, Comprehensive and Expandable Targeted Next-Generation Sequencing Assay for Inherited Heart Diseases.

    PubMed

    Wilson, Kitchener D; Shen, Peidong; Fung, Eula; Karakikes, Ioannis; Zhang, Angela; InanlooRahatloo, Kolsoum; Odegaard, Justin; Sallam, Karim; Davis, Ronald W; Lui, George K; Ashley, Euan A; Scharfe, Curt; Wu, Joseph C

    2015-09-11

    Thousands of mutations across >50 genes have been implicated in inherited cardiomyopathies. However, options for sequencing this rapidly evolving gene set are limited because many sequencing services and off-the-shelf kits suffer from slow turnaround, inefficient capture of genomic DNA, and high cost. Furthermore, customization of these assays to cover emerging targets that suit individual needs is often expensive and time consuming. We sought to develop a custom high throughput, clinical-grade next-generation sequencing assay for detecting cardiac disease gene mutations with improved accuracy, flexibility, turnaround, and cost. We used double-stranded probes (complementary long padlock probes), an inexpensive and customizable capture technology, to efficiently capture and amplify the entire coding region and flanking intronic and regulatory sequences of 88 genes and 40 microRNAs associated with inherited cardiomyopathies, congenital heart disease, and cardiac development. Multiplexing 11 samples per sequencing run resulted in a mean base pair coverage of 420, of which 97% had >20× coverage and >99% were concordant with known heterozygous single nucleotide polymorphisms. The assay correctly detected germline variants in 24 individuals and revealed several polymorphic regions in miR-499. Total run time was 3 days at an approximate cost of $100 per sample. Accurate, high-throughput detection of mutations across numerous cardiac genes is achievable with complementary long padlock probe technology. Moreover, this format allows facile insertion of additional probes as more cardiomyopathy and congenital heart disease genes are discovered, giving researchers a powerful new tool for DNA mutation detection and discovery. © 2015 American Heart Association, Inc.
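
    The coverage figures quoted above (mean depth and the fraction of bases above 20×) are straightforward to compute from aligned read intervals. A toy sketch, not the authors' pipeline:

```python
import numpy as np

def coverage_stats(read_intervals, target_len, threshold=20):
    """Per-base depth over a target from aligned read [start, end) intervals,
    plus the fraction of bases at or above a depth threshold."""
    depth = np.zeros(target_len, dtype=int)
    for start, end in read_intervals:
        depth[start:end] += 1
    frac = float((depth >= threshold).mean())
    return depth.mean(), frac

# Toy example: 30 identical reads tiling most of a 100 bp target,
# plus 5 short reads covering the tail.
reads = [(0, 90)] * 30 + [(80, 100)] * 5
mean_depth, frac_20x = coverage_stats(reads, 100)
print(mean_depth, frac_20x)
```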

  13. Dynamic Textures Modeling via Joint Video Dictionary Learning.

    PubMed

    Wei, Xian; Li, Yuanxiang; Shen, Hao; Chen, Fang; Kleinsteuber, Martin; Wang, Zhongfeng

    2017-04-06

    Video representation is an important and challenging task in the computer vision community. In this paper, we consider the problem of modeling and classifying video sequences of dynamic scenes which can be modeled in a dynamic textures (DT) framework. First, we assume that image frames of a moving scene can be modeled as a Markov random process. We propose a sparse coding framework, named joint video dictionary learning (JVDL), to model a video adaptively. By treating the sparse coefficients of image frames over a learned dictionary as the underlying "states", we learn an efficient and robust linear transition matrix between the sparse representations of two adjacent frames in the time series. Hence, a dynamic scene sequence is represented by an appropriate transition matrix associated with a dictionary. In order to ensure the stability of JVDL, we impose several constraints on the transition matrix and dictionary. The developed framework is able to capture the dynamics of a moving scene by exploring both the sparse properties and the temporal correlations of consecutive video frames. Moreover, the learned JVDL parameters can be used for various DT applications, such as DT synthesis and recognition. Experimental results demonstrate the strong competitiveness of the proposed JVDL approach in comparison with state-of-the-art video representation methods. In particular, it performs significantly better in dealing with DT synthesis and recognition on heavily corrupted data.
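
    The transition-matrix step can be illustrated independently of dictionary learning: given per-frame coefficient vectors x_t, fit A such that x_{t+1} ≈ A·x_t by least squares. This sketch uses dense synthetic states and omits the sparsity and stability constraints described in the paper:

```python
import numpy as np

# Fit a linear transition matrix A with x_{t+1} ~= A @ x_t by least squares.
# A_true below is a hypothetical stable (spiral) dynamics matrix.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [-0.1, 0.9]])

x = np.zeros((50, 2))
x[0] = rng.normal(size=2)
for t in range(49):
    x[t + 1] = A_true @ x[t]          # noiseless synthetic state sequence

X_past, X_next = x[:-1].T, x[1:].T    # columns are states at t and t+1
A_est = X_next @ np.linalg.pinv(X_past)  # least-squares transition matrix
```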

  14. Watching the action unfold: New cryo-EM images capture CRISPR’s interaction with target DNA | Center for Cancer Research

    Cancer.gov

    Using the Nobel-prize winning technique of cryo-EM, researchers led by CCR Senior Investigator Sriram Subramaniam, Ph.D., have captured a series of highly detailed images of a protein complex belonging to the CRISPR system that can be used by bacteria to recognize and destroy foreign DNA. The images reveal the molecule’s form before and after its interaction with DNA and help

  15. Processing Dynamic Image Sequences from a Moving Sensor.

    DTIC Science & Technology

    1984-02-01

    (Abstract not available; the indexed text consists of table-of-contents fragments referencing a Roadsign image sequence, a Roadsign sequence with redundant features, Roadsign subimages, and feature error and local-search values for the Roadsign and industrial images.)

  16. Deep Recurrent Neural Networks for Human Activity Recognition

    PubMed Central

    Murad, Abdulmajid

    2017-01-01

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103

  17. Deep Recurrent Neural Networks for Human Activity Recognition.

    PubMed

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
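
    The LSTM cell at the heart of these DRNN architectures follows the standard gate equations. A minimal NumPy sketch (random weights, no training; purely illustrative of the recurrence over a variable-length input sequence):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gate order: input, forget, cell, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    g = np.tanh(z[2*H:3*H])      # candidate cell state
    o = sigmoid(z[3*H:4*H])      # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Run a variable-length sequence (7 steps, 3 features) through the cell.
rng = np.random.default_rng(1)
D, H = 3, 4
W = rng.normal(size=(4*H, D)) * 0.1
U = rng.normal(size=(4*H, H)) * 0.1
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(7, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```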

  18. Novel mutations in CRB1 gene identified in a chinese pedigree with retinitis pigmentosa by targeted capture and next generation sequencing

    PubMed Central

    Lo, David; Weng, Jingning; Liu, Xiaohong; Yang, Juhua; He, Fen; Wang, Yun; Liu, Xuyang

    2016-01-01

    PURPOSE To detect the disease-causing gene in a Chinese pedigree with autosomal-recessive retinitis pigmentosa (ARRP). METHODS All subjects in this family underwent a complete ophthalmic examination. Targeted-capture next generation sequencing (NGS) was performed on the proband to detect variants. All variants were verified in the remaining family members by PCR amplification and Sanger sequencing. RESULTS All the affected subjects in this pedigree were diagnosed with retinitis pigmentosa (RP). The compound heterozygous c.138delA (p.Asp47IlefsX24) and c.1841G>T (p.Gly614Val) mutations in the Crumbs homolog 1 (CRB1) gene were identified in all the affected patients but not in the unaffected individuals in this family. These mutations were inherited from the parents, respectively. CONCLUSION Novel compound heterozygous mutations in CRB1 were identified in a Chinese pedigree with ARRP using targeted-capture next generation sequencing. After evaluating the pattern of inheritance and the impaired protein function, the compound heterozygous c.138delA (p.Asp47IlefsX24) and c.1841G>T (p.Gly614Val) mutations were determined to be the cause of early-onset ARRP in this pedigree. To the best of our knowledge, there is no previous report of these compound mutations. PMID:27806333

  19. Dense infraspecific sampling reveals rapid and independent trajectories of plastome degradation in a heterotrophic orchid complex.

    PubMed

    Barrett, Craig F; Wicke, Susann; Sass, Chodon

    2018-05-01

    Heterotrophic plants provide excellent opportunities to study the effects of altered selective regimes on genome evolution. Plastid genome (plastome) studies in heterotrophic plants are often based on one or a few highly divergent species or sequences as representatives of an entire lineage, thus missing important transitional evolutionary events. Here, we present the first infraspecific analysis of plastome evolution in any heterotrophic plant. By combining genome skimming and targeted sequence capture, we address hypotheses on the degree and rate of plastome degradation in a complex of leafless orchids (Corallorhiza striata) across its geographic range. Plastomes provide strong support for relationships and evidence of reciprocal monophyly between C. involuta and the endangered C. bentleyi. Plastome degradation is extensive, occurring rapidly over a few million years, with evidence of differing rates of genomic change between the two principal clades of the complex. Genome skimming and targeted sequence capture differ widely in overall coverage depth, with depth in targeted sequence capture datasets varying immensely across the plastome as a function of GC content. These findings will help to fill a knowledge gap in models of heterotrophic plastid genome evolution, and have implications for future studies in heterotrophs. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
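
    The GC dependence of capture depth noted above is usually assessed by computing GC content in fixed windows along the assembly. A minimal sketch with an illustrative sequence:

```python
# Per-window GC content, the quantity the authors link to uneven capture
# depth. The sequence and window size are illustrative only.
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def windowed_gc(seq, window):
    return [gc_content(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, window)]

seq = "ATATATATGCGCGCGCGGGGCCCCATATATAT"
print(windowed_gc(seq, 8))  # GC fraction per non-overlapping 8 bp window
```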

  20. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array for the visualization of UHS phenomena are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels with 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, enabling 10 Mfps full-resolution video capture of 128 consecutive frames, and 20 Mfps half-resolution video capture of 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power-consumption reduction were achieved simultaneously, and the performance-improved version has been used in a commercial high-speed video camera since 2015, offering 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in the capture of UHS light-emission phenomena.
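
    The headline figures are internally consistent and easy to verify:

```python
# Cross-check of the record's numbers: 400x256 pixels read out at 10 Mfps
# gives roughly 1 Tpixel/s, and 128 on-chip memories per pixel give 128
# consecutive full-resolution frames per burst.
pixels = 400 * 256             # 102,400 pixels
frame_rate = 10e6              # 10 Mfps, full resolution
readout = pixels * frame_rate  # pixels per second
print(readout)                 # ~1e12, i.e. ~1 Tpixel/s
```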

  1. Insect haptoelectrical stimulation of Venus flytrap triggers exocytosis in gland cells.

    PubMed

    Scherzer, Sönke; Shabala, Lana; Hedrich, Benjamin; Fromm, Jörg; Bauer, Hubert; Munz, Eberhard; Jakob, Peter; Al-Rascheid, Khaled A S; Kreuzer, Ines; Becker, Dirk; Eiblmeier, Monika; Rennenberg, Heinz; Shabala, Sergey; Bennett, Malcolm; Neher, Erwin; Hedrich, Rainer

    2017-05-02

    The Venus flytrap Dionaea muscipula captures insects and consumes their flesh. Prey contacting touch-sensitive hairs trigger traveling electrical waves. These action potentials (APs) cause rapid closure of the trap and activate secretory functions of glands, which cover its inner surface. Such prey-induced haptoelectric stimulation activates the touch hormone jasmonate (JA) signaling pathway, which initiates secretion of an acidic hydrolase mixture to decompose the victim and acquire the animal nutrients. Although postulated since Darwin's pioneering studies, these secretory events have not been recorded so far. Using advanced analytical and imaging techniques, such as vibrating ion-selective electrodes, carbon fiber amperometry, and magnetic resonance imaging, we monitored stimulus-coupled glandular secretion into the flytrap. Trigger-hair bending or direct application of JA caused a quantal release of oxidizable material from gland cells monitored as distinct amperometric spikes. Spikes reminiscent of exocytotic events in secretory animal cells progressively increased in frequency, reaching steady state 1 d after stimulation. Our data indicate that trigger-hair mechanical stimulation evokes APs. Gland cells translate APs into touch-inducible JA signaling that promotes the formation of secretory vesicles. Early vesicles loaded with H+ and Cl− fuse with the plasma membrane, hyperacidifying the "green stomach"-like digestive organ, whereas subsequent ones carry hydrolases and nutrient transporters, together with a glutathione redox moiety, which is likely to act as the major detected compound in amperometry. Hence, when glands perceive the haptoelectrical stimulation, secretory vesicles are tailored to be released in a sequence that optimizes digestion of the captured animal.

  2. Capturing the genetic makeup of the active microbiome in situ

    DOE PAGES

    Singer, Esther; Wagner, Michael; Woyke, Tanja

    2017-06-02

    More than any other technology, nucleic acid sequencing has enabled microbial ecology studies to be complemented with the data volumes necessary to capture the extent of microbial diversity and dynamics in a wide range of environments. In order to truly understand and predict environmental processes, however, the distinction between active, inactive and dead microbial cells is critical. Also, experimental designs need to be sensitive toward varying population complexity and activity, and temporal as well as spatial scales of process rates. There are a number of approaches, including single-cell techniques, which were designed to study in situ microbial activity and that have been successively coupled to nucleic acid sequencing. The exciting new discoveries regarding in situ microbial activity provide evidence that future microbial ecology studies will indispensably rely on techniques that specifically capture members of the microbiome active in the environment. Herein, we review those currently used activity-based approaches that can be directly linked to shotgun nucleic acid sequencing, evaluate their relevance to ecology studies, and discuss future directions.

  3. Nanoparticle-labeled DNA capture elements for detection and identification of biological agents

    NASA Astrophysics Data System (ADS)

    Kiel, Johnathan L.; Holwitt, Eric A.; Parker, Jill E.; Vivekananda, Jeevalatha; Franz, Veronica

    2004-12-01

    Aptamers, synthetic DNA capture elements (DCEs), can be made chemically or in genetically engineered bacteria. DNA capture elements are artificial DNA sequences, from a random pool of sequences, selected for their specific binding to potential biological warfare or terrorism agents. These sequences were selected by an affinity method using filters to which the target agent was attached, with the bound DNA isolated and amplified by polymerase chain reaction (PCR) in an iterative, increasingly stringent process. The probes can then be conjugated to Quantum Dots and superparamagnetic nanoparticles. The former provide intense, bleach-resistant fluorescent detection of bioagents and the latter provide a means to collect the bioagents with a magnet. The fluorescence can be detected in a flow cytometer, in a fluorescence plate reader, or with a fluorescence microscope. To date, we have made DCEs to Bacillus anthracis spores, Shiga toxin, Venezuelan Equine Encephalitis (VEE) virus, and Francisella tularensis. DCEs can easily distinguish Bacillus anthracis from its nearest relatives, Bacillus cereus and Bacillus thuringiensis. Development of a high-throughput process is currently being investigated.

  4. Capturing the genetic makeup of the active microbiome in situ

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singer, Esther; Wagner, Michael; Woyke, Tanja

    More than any other technology, nucleic acid sequencing has enabled microbial ecology studies to be complemented with the data volumes necessary to capture the extent of microbial diversity and dynamics in a wide range of environments. In order to truly understand and predict environmental processes, however, the distinction between active, inactive and dead microbial cells is critical. Also, experimental designs need to be sensitive toward varying population complexity and activity, and temporal as well as spatial scales of process rates. There are a number of approaches, including single-cell techniques, which were designed to study in situ microbial activity and that have been successively coupled to nucleic acid sequencing. The exciting new discoveries regarding in situ microbial activity provide evidence that future microbial ecology studies will indispensably rely on techniques that specifically capture members of the microbiome active in the environment. Herein, we review those currently used activity-based approaches that can be directly linked to shotgun nucleic acid sequencing, evaluate their relevance to ecology studies, and discuss future directions.

  5. Three-Dimensional Reconstruction of Large Cultural Heritage Objects Based on UAV Video and TLS Data

    NASA Astrophysics Data System (ADS)

    Xu, Z.; Wu, T. H.; Shen, Y.; Wu, L.

    2016-06-01

    This paper investigates the synergetic use of an unmanned aerial vehicle (UAV) and a terrestrial laser scanner (TLS) in 3D reconstruction of cultural heritage objects. Rather than capturing still images, the UAV, which carries a consumer digital camera, is used to collect dynamic videos to overcome its limited endurance capacity. Then, a set of 3D point clouds is generated from the video image sequences using the automated structure-from-motion (SfM) and patch-based multi-view stereo (PMVS) methods. The TLS is used to collect the information that is beyond the reach of UAV imaging, e.g., partial building facades. A coarse-to-fine method is introduced to integrate the two sets of point clouds, from UAV image reconstruction and TLS scanning, for complete 3D reconstruction. For increased reliability, a variant of the ICP algorithm is introduced that uses local terrain-invariant regions in the combined registration. The experimental study was conducted on the Tulou cultural heritage buildings in Fujian province, China, focusing on one of the Tulou clusters built several hundred years ago. Results show a digital 3D model of the Tulou cluster with complete coverage and textural information. This paper demonstrates the usability of the proposed method for efficient 3D reconstruction of heritage objects based on UAV video and TLS data.
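
    The rigid-alignment core of ICP, used here to register the UAV and TLS point clouds, solves for the best-fit rotation and translation between matched point sets via SVD (the Kabsch solution). A self-contained sketch on synthetic points; the authors' variant additionally selects local terrain-invariant regions, which is omitted here:

```python
import numpy as np

def rigid_align(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q
    (the Kabsch/SVD step performed inside each ICP iteration)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known rotation + translation from 20 random points.
rng = np.random.default_rng(2)
P = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(P, Q)
```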

  6. Biometric template revocation

    NASA Astrophysics Data System (ADS)

    Arndt, Craig M.

    2004-08-01

    Biometrics are a powerful technology for identifying humans both locally and at a distance. In order to perform identification or verification, biometric systems capture an image of some biometric of a user or subject. The image is then converted mathematically into a representation of the person called a template. Since every human in the world is different, each human will have different biometric images (different fingerprints, faces, etc.). This is what makes biometrics useful for identification. However, unlike a credit card number or a password, which can be given to a person and later revoked if it is compromised, a biometric stays with the person for life. The problem, then, is to develop biometric templates which can be easily revoked and reissued, and which are also unique to the user and can be easily used for identification and verification. In this paper we develop and present a method to generate a set of templates which are fully unique to the individual and also revocable. By using basis-set compression algorithms in an n-dimensional orthogonal space, we can represent a given biometric image in an infinite number of equally valid and unique ways. The verification and biometric matching system would be presented with a given template and a revocation code. The code then represents where in the sequence of n-dimensional vectors to start the recognition.
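
    One common way to realize revocable templates (a sketch of the general "cancelable biometrics" idea, not necessarily the authors' exact construction) is to project the biometric feature vector onto a random orthonormal basis derived from a revocation seed; revocation then simply means issuing a new seed:

```python
import numpy as np

def revocable_template(feature_vec, seed):
    """Project a biometric feature vector onto a random orthonormal basis
    derived from a revocation seed. Matching is done in the transformed
    space; re-issuing a template means choosing a new seed."""
    rng = np.random.default_rng(seed)
    M = rng.normal(size=(len(feature_vec), len(feature_vec)))
    Q, _ = np.linalg.qr(M)               # orthonormal basis from the seed
    return Q @ feature_vec

feat = np.array([0.2, -1.3, 0.7, 0.4])   # illustrative feature vector
t1 = revocable_template(feat, seed=42)
t2 = revocable_template(feat, seed=43)   # after revocation: a new, unlinkable template
```

    Because the basis is orthonormal, distances between feature vectors are preserved within one seed, so matching accuracy is unchanged, while templates from different seeds are uncorrelated.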

  7. Hyperspectral range imaging for transportation systems evaluation

    NASA Astrophysics Data System (ADS)

    Bridgelall, Raj; Rafert, J. B.; Atwood, Don; Tolliver, Denver D.

    2016-04-01

    Transportation agencies expend significant resources to inspect critical infrastructure such as roadways, railways, and pipelines. Regular inspections identify important defects and generate data to forecast maintenance needs. However, cost and practical limitations prevent the scaling of current inspection methods beyond relatively small portions of the network. Consequently, existing approaches fail to discover many high-risk defect formations. Remote sensing techniques offer the potential for more rapid and extensive non-destructive evaluations of the multimodal transportation infrastructure. However, optical occlusions and limitations in the spatial resolution of typical airborne and space-borne platforms limit their applicability. This research proposes hyperspectral image classification to isolate transportation infrastructure targets for high-resolution photogrammetric analysis. A plenoptic swarm of unmanned aircraft systems will capture images with centimeter-scale spatial resolution, large swaths, and polarization diversity. The light field solution will incorporate structure-from-motion techniques to reconstruct three-dimensional details of the isolated targets from sequences of two-dimensional images. A comparative analysis of existing low-power wireless communications standards suggests an application dependent tradeoff in selecting the best-suited link to coordinate swarming operations. This study further produced a taxonomy of specific roadway and railway defects, distress symptoms, and other anomalies that the proposed plenoptic swarm sensing system would identify and characterize to estimate risk levels.

  8. Detection of secondary and backscattered electrons for 3D imaging with multi-detector method in VP/ESEM.

    PubMed

    Slówko, Witold; Wiatrowski, Artur; Krysztof, Michał

    2018-01-01

    The paper considers some major problems of adapting the multi-detector method for three-dimensional (3D) imaging of wet bio-medical samples in the Variable Pressure/Environmental Scanning Electron Microscope (VP/ESEM). The described method belongs to the "single-view techniques", which create the 3D surface model from a sequence of 2D SEM images captured from a single viewpoint (along the electron beam axis) but illuminated from four directions. The basis of the method and the requirements resulting from it are given for the detector systems for secondary (SE) and backscattered electrons (BSE), as well as designs of systems that can work in variable conditions. The problems of SE detection with the Pressure Limiting Aperture (PLA) used as the signal collector are discussed with respect to secondary electron backscattering by a gaseous environment. However, the authors' attention is turned mainly to directional BSE detection, realized in two ways. High take-off-angle BSE were captured through the PLA with use of a quadruple semiconductor detector placed inside the intermediate chamber, while BSE starting at lower angles were detected by the four-fold ionization device working in the sample-chamber environment. The latter relied on conversion of highly energetic BSE into low-energy SE generated on the walls and in the gaseous environment of a deep discharge gap oriented along the BSE velocity direction. The converted BSE signal was amplified in an ionising avalanche developed in an electric field arranged transversally to the gap. The detector system operation is illustrated with numerous computer simulations and examples of experiments and 3D images. The experiments were conducted in a JSM 840 microscope with its combined detector-vacuum equipment, which extends the capabilities of this high-vacuum instrument toward elevated pressures (over 1 kPa) and environmental conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
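
    The directional-detector principle can be illustrated with a toy calculation: opposing detector pairs yield normalized difference signals that approximate local surface slopes, which are then integrated into a height map. The signal values and the simple difference-over-sum formula below are illustrative assumptions, not the paper's calibration:

```python
# Toy sketch of slope recovery from four directional detector signals
# (east/west/north/south). Opposing pairs give normalized differences
# that approximate the local surface gradient, e.g. gx ~ (E - W)/(E + W).
def slopes_from_detectors(E, W, N, S):
    gx = (E - W) / (E + W)
    gy = (N - S) / (N + S)
    return gx, gy

# Synthetic east-tilted facet: the east detector sees a stronger signal.
E, W, N, S = 1.2, 0.8, 1.0, 1.0
gx, gy = slopes_from_detectors(E, W, N, S)
print(gx, gy)
```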

  9. Deep Impact Autonomous Navigation: the trials of targeting the unknown

    NASA Technical Reports Server (NTRS)

    Kubitschek, Daniel G.; Mastrodemos, Nickolaos; Werner, Robert A.; Kennedy, Brian M.; Synnott, Stephen P.; Null, George W.; Bhaskaran, Shyam; Riedel, Joseph E.; Vaughan, Andrew T.

    2006-01-01

    On July 4, 2005 at 05:44:34.2 UTC, the Impactor Spacecraft (s/c) impacted comet Tempel 1 with a relative speed of 10.3 km/s, capturing high-resolution images of the surface of a cometary nucleus just seconds before impact. Meanwhile, the Flyby s/c captured the impact event using both the Medium Resolution Imager (MRI) and the High Resolution Imager (HRI) and tracked the nucleus for the entire 800 s period between impact and the shield-attitude transition. The objective of the Impactor s/c was to impact in an illuminated area viewable from the Flyby s/c and capture high-resolution context images of the impact site. This was accomplished by using autonomous navigation (AutoNav) algorithms and precise attitude information from the attitude determination and control subsystem (ADCS). The Flyby s/c had two primary objectives: 1) capture the impact event with the highest temporal resolution possible in order to observe the ejecta plume expansion dynamics; and 2) track the impact site for at least 800 s to observe the crater formation and capture the highest resolution images possible of the fully developed crater. These two objectives were met by estimating the Flyby s/c trajectory relative to Tempel 1 using the same AutoNav algorithms along with precise attitude information from ADCS, and by independently selecting the best impact site. This paper describes the AutoNav system, what happened during the encounter with Tempel 1, and what could have happened.

  10. Development of a dual-modality, dual-view smartphone-based imaging system for oral cancer detection

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross D.; Song, Bofan; Birur, Praveen; Kuriakose, Moni Abraham; Sunny, Sumsum; Suresh, Amritha; Patrick, Sanjana; Anbarani, Afarin; Spires, Oliver; Wilder-Smith, Petra; Liang, Rongguang

    2018-02-01

    Oral cancer is a rising health issue in many low- and middle-income countries (LMIC). We propose an implementation of autofluorescence imaging (AFI) and white-light imaging (WLI) on a smartphone platform, providing inexpensive early detection of cancerous conditions in the oral cavity. Interchangeable modules allow both whole-mouth imaging, for an overview of the patient's oral health, and an intraoral imaging probe for localized information. Custom electronics synchronize image capture and external LED operation for the excitation of tissue fluorescence. A custom Android application captures images, and an image processing algorithm provides likelihood estimates of cancerous conditions. Finally, all data can be uploaded to a cloud server, where a convolutional neural network classifies the images and a remote specialist can provide diagnosis and triage instructions.

  11. Computational photography with plenoptic camera and light field capture: tutorial.

    PubMed

    Lam, Edmund Y

    2015-11-01

    Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic film in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concepts of the plenoptic function and the light field from the perspective of geometric optics. This is followed by a discussion of early attempts and recent advances in the construction of the plenoptic camera. We then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last, but not least, we consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
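    The post-capture refocusing the tutorial describes can be illustrated with the classic shift-and-add scheme: each sub-aperture view of a 4D light field is translated in proportion to its angular offset from the central view, and the shifted views are averaged. The sketch below is a minimal illustration only (integer pixel shifts via `np.roll`, no interpolation or Fourier-domain refocusing); the array layout and the `alpha` parameter are assumptions of this example, not the paper's notation.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 4D light field.

    light_field: array of shape (U, V, X, Y) holding sub-aperture images.
    alpha: refocus parameter; each view is shifted in proportion to its
           angular offset from the central view (integer shifts here).
    """
    U, V, X, Y = light_field.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((X, Y))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - cu)))
            dv = int(round(alpha * (v - cv)))
            # shift this sub-aperture image and accumulate
            out += np.roll(np.roll(light_field[u, v], du, axis=0), dv, axis=1)
    return out / (U * V)
```

    A point at disparity d across the views comes into focus when alpha is chosen to cancel that disparity (alpha = -d with this sign convention); features at other depths stay blurred.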

  12. Diuretic-enhanced gadolinium excretory MR urography: comparison of conventional gradient-echo sequences and echo-planar imaging.

    PubMed

    Nolte-Ernsting, C C; Tacke, J; Adam, G B; Haage, P; Jung, P; Jakse, G; Günther, R W

    2001-01-01

    The aim of this study was to investigate the utility of different gadolinium-enhanced T1-weighted gradient-echo techniques in excretory MR urography. In 74 urologic patients, excretory MR urography was performed using various T1-weighted gradient-echo (GRE) sequences after injection of gadolinium-DTPA and low-dose furosemide. The examinations included conventional GRE sequences and echo-planar imaging (GRE EPI), both obtained with 3D data sets and 2D projection images. Breath-hold acquisition was used primarily. In 20 of 74 examinations, we compared breath-hold imaging with respiratory gating. Breath-hold imaging was significantly superior to respiratory gating for the visualization of pelvicaliceal systems, but not for the ureters. Complete MR urograms were obtained within 14-20 s using 3D GRE EPI sequences and in 20-30 s with conventional 3D GRE sequences. Ghost artefacts caused by ureteral peristalsis often occurred with conventional 3D GRE imaging and were almost completely suppressed in EPI sequences (p < 0.0001). Susceptibility effects were more pronounced on GRE EPI MR urograms, and calculi measured 0.8-21.7% greater in diameter compared with conventional GRE sequences. Increased spatial resolution degraded the image quality only in GRE EPI urograms. In projection MR urography, the entire pelvicaliceal system was imaged by acquisition of a fast single-slice sequence, and the conventional 2D GRE technique provided better morphological accuracy than 2D GRE EPI projection images (p < 0.0003). Fast 3D GRE EPI sequences improve the clinical practicability of excretory MR urography, especially in elderly or critically ill patients unable to suspend breathing for more than 20 s. Conventional GRE sequences are superior to EPI for high-resolution detail MR urograms and for projection imaging.

  13. Simultaneous acquisition of differing image types

    DOEpatents

    Demos, Stavros G

    2012-10-09

    A system in one embodiment includes an image forming device for forming an image from an area of interest containing different image components; an illumination device for illuminating the area of interest with light containing multiple components; at least one light source coupled to the illumination device, the at least one light source providing light to the illumination device containing different components, each component having distinct spectral characteristics and relative intensity; an image analyzer coupled to the image forming device, the image analyzer decomposing the image formed by the image forming device into multiple component parts based on type of imaging; and multiple image capture devices, each image capture device receiving one of the component parts of the image. A method in one embodiment includes receiving an image from an image forming device; decomposing the image formed by the image forming device into multiple component parts based on type of imaging; receiving the component parts of the image; and outputting image information based on the component parts of the image. Additional systems and methods are presented.

  14. Fully automated corneal endothelial morphometry of images captured by clinical specular microscopy

    NASA Astrophysics Data System (ADS)

    Bucht, Curry; Söderberg, Per; Manneberg, Göran

    2010-02-01

    The corneal endothelium serves as the posterior barrier of the cornea. Factors such as the clarity and refractive properties of the cornea are directly related to the quality of the endothelium. Endothelial cell density is considered the most important morphological factor of the corneal endothelium. Pathological conditions and physical trauma may reduce the endothelial cell density to such an extent that the optical properties of the cornea, and thus clear eyesight, are threatened. Diagnosis of the corneal endothelium through morphometry is an important part of several clinical applications. Morphometry of the corneal endothelium is presently carried out by semi-automated analysis of pictures captured by a Clinical Specular Microscope (CSM). Because of the occasional need for operator involvement, this process can be tedious, having a negative impact on sampling size. This study was dedicated to the development and use of fully automated, Fourier-based analysis of a very large range of images of the corneal endothelium captured by CSM. Software was developed in the mathematical programming language Matlab. Pictures of the corneal endothelium, captured by CSM, were read into the analysis software. The software automatically performed digital enhancement of the images, normalizing brightness and contrast. The enhanced images of the corneal endothelium were Fourier transformed, using the fast Fourier transform (FFT), and stored as new images. Tools were developed and applied for identification and analysis of relevant characteristics of the Fourier-transformed images. The data obtained from each Fourier-transformed image were used to calculate the mean cell density of its corresponding corneal endothelium, based on well-known diffraction theory. Results, in the form of estimated cell densities of the corneal endothelium, were obtained by running the fully automated analysis software on 292 images captured by CSM. The cell density obtained by the fully automated analysis was compared with that obtained from classical, semi-automated analysis, and a relatively strong correlation was found.
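    The core idea of the Fourier-based morphometry in this abstract — a regular cell mosaic concentrates spectral energy at a frequency equal to one over the cell spacing, and density scales with that frequency squared — can be sketched as below. This is a deliberate simplification: it picks a single dominant peak and assumes a square cell lattice (a real hexagonal endothelial mosaic adds a packing factor), and the function name and pixel-size parameter are illustrative assumptions.

```python
import numpy as np

def cell_density_fft(img, pixel_size_um):
    """Estimate cell density (cells/mm^2) of a regular cell mosaic from
    the dominant non-DC spatial frequency of its FFT magnitude spectrum.

    Simplification of the diffraction-based analysis: the ring of energy
    in the spectrum sits at frequency 1/spacing; density ~ 1/spacing^2
    (square-lattice assumption)."""
    n = img.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean())))
    # radial frequency (cycles/pixel) of the strongest non-DC component
    fy, fx = np.unravel_index(np.argmax(spec), spec.shape)
    f = np.hypot(fy - n // 2, fx - n // 2) / n   # cycles per pixel
    spacing_um = pixel_size_um / f               # cell spacing in micrometres
    return 1e6 / spacing_um**2                   # cells per mm^2
```

    For a synthetic pattern with an 8 um period sampled at 1 um/pixel, this returns 1e6 / 64 = 15625 cells/mm^2.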

  15. Systematic evaluation of a targeted gene capture sequencing panel for molecular diagnosis of retinitis pigmentosa.

    PubMed

    Huang, Hui; Chen, Yanhua; Chen, Huishuang; Ma, Yuanyuan; Chiang, Pei-Wen; Zhong, Jing; Liu, Xuyang; Asan; Wu, Jing; Su, Yan; Li, Xin; Deng, Jianlian; Huang, Yingping; Zhang, Xinxin; Li, Yang; Fan, Ning; Wang, Ying; Tang, Lihui; Shen, Jinting; Chen, Meiyan; Zhang, Xiuqing; Te, Deng; Banerjee, Santasree; Liu, Hui; Qi, Ming; Yi, Xin

    2018-01-01

    Inherited eye diseases are major causes of vision loss in both children and adults, and are characterized by clinical variability and pronounced genetic heterogeneity. Genetic testing may provide an accurate diagnosis for ophthalmic genetic disorders and allow gene therapy for specific diseases. A targeted gene capture panel was designed to capture the exons of 283 inherited eye disease genes, including 58 known causative retinitis pigmentosa (RP) genes. 180 samples were tested with this panel, 68 of which had previously been tested by Sanger sequencing. Systematic evaluation of our method and comprehensive molecular diagnosis were carried out on 99 RP patients. 96.85% of the targeted regions were covered at a depth of at least 20-fold, and the accuracy of variant detection was 99.994%. In 4 of the 68 samples previously tested by Sanger sequencing, mutations for other diseases, not consistent with the clinical diagnosis, were detected by next-generation sequencing (NGS) but not by Sanger sequencing. Among the 99 RP patients, 64 (64.6%) were found to carry pathogenic mutations, while in 3 patients the molecular diagnosis was inconsistent with the initial clinical diagnosis. After revisiting, one patient's clinical diagnosis was reclassified. In addition, 3 patients were found to carry large deletions. We have systematically evaluated our method, compared it with Sanger sequencing, and identified a large number of novel mutations in a cohort of 99 RP patients. The results show sufficient accuracy of our method and demonstrate the importance of molecular diagnosis in clinical diagnosis.
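    The panel's headline coverage metric ("96.85% of targeted regions covered at >= 20-fold") is a simple fraction over per-base read depths. A minimal sketch, using a hypothetical per-base depth array rather than real panel data:

```python
import numpy as np

def coverage_at_threshold(depths, min_depth=20):
    """Fraction of targeted bases covered by at least min_depth reads.

    `depths` is a hypothetical per-base read-depth array over the
    capture panel's targeted regions (illustrative, not real data)."""
    depths = np.asarray(depths)
    return (depths >= min_depth).mean()
```

    For example, 97 bases at 25x and 3 bases at 5x gives a coverage fraction of 0.97 at the 20-fold threshold.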

  16. Systematic evaluation of a targeted gene capture sequencing panel for molecular diagnosis of retinitis pigmentosa

    PubMed Central

    Ma, Yuanyuan; Chiang, Pei-Wen; Zhong, Jing; Liu, Xuyang; Asan; Wu, Jing; Su, Yan; Li, Xin; Deng, Jianlian; Huang, Yingping; Zhang, Xinxin; Li, Yang; Fan, Ning; Wang, Ying; Tang, Lihui; Shen, Jinting; Chen, Meiyan; Zhang, Xiuqing; Te, Deng; Banerjee, Santasree; Liu, Hui; Qi, Ming; Yi, Xin

    2018-01-01

    Background Inherited eye diseases are major causes of vision loss in both children and adults, and are characterized by clinical variability and pronounced genetic heterogeneity. Genetic testing may provide an accurate diagnosis for ophthalmic genetic disorders and allow gene therapy for specific diseases. Methods A targeted gene capture panel was designed to capture the exons of 283 inherited eye disease genes, including 58 known causative retinitis pigmentosa (RP) genes. 180 samples were tested with this panel, 68 of which had previously been tested by Sanger sequencing. Systematic evaluation of our method and comprehensive molecular diagnosis were carried out on 99 RP patients. Results 96.85% of the targeted regions were covered at a depth of at least 20-fold, and the accuracy of variant detection was 99.994%. In 4 of the 68 samples previously tested by Sanger sequencing, mutations for other diseases, not consistent with the clinical diagnosis, were detected by next-generation sequencing (NGS) but not by Sanger sequencing. Among the 99 RP patients, 64 (64.6%) were found to carry pathogenic mutations, while in 3 patients the molecular diagnosis was inconsistent with the initial clinical diagnosis. After revisiting, one patient's clinical diagnosis was reclassified. In addition, 3 patients were found to carry large deletions. Conclusions We have systematically evaluated our method, compared it with Sanger sequencing, and identified a large number of novel mutations in a cohort of 99 RP patients. The results show sufficient accuracy of our method and demonstrate the importance of molecular diagnosis in clinical diagnosis. PMID:29641573

  17. Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera

    NASA Astrophysics Data System (ADS)

    Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.

    2016-08-01

    Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands across the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search-and-rescue or similar applications that require high-resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques that effectively improve the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G and near-infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Data processed using our proposed method show significant visibility improvements compared with other existing solutions.

  18. Eyelid contour detection and tracking for startle research related eye-blink measurements from high-speed video records.

    PubMed

    Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut

    2013-10-01

    Using the positions of the eyelids is an effective and contact-free way to measure startle-induced eye-blinks, which play an important role in human psychophysiological research. To the best of our knowledge, no method exists for efficient detection and tracking of the exact eyelid contours in image sequences captured at high speed that is conveniently usable by psychophysiological researchers. In this publication, a semi-automatic, model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. As a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images recorded without any special parametrisation of the eye tracker. The method entails pupil detection and specular reflection removal, and makes use of dynamic model adaptation. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eye-blinks from high-speed video recordings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
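    The pupil-detection step that anchors such eyelid-tracking pipelines can be sketched very crudely: in an infrared eye image the pupil is typically the darkest compact region, so a low-percentile intensity threshold followed by a centroid gives a starting estimate. This is an illustrative stand-in for the paper's model-based detection, not its actual algorithm; the 5% threshold is an assumption of this sketch.

```python
import numpy as np

def detect_pupil(gray):
    """Crude pupil localisation in an IR eye image.

    Assumes the pupil occupies the low tail of the intensity histogram;
    returns (row centroid, column centroid, equivalent circular radius)."""
    # dark-pixel mask via a percentile threshold (assumed 5% dark fraction)
    mask = gray <= np.percentile(gray, 5)
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    radius = np.sqrt(mask.sum() / np.pi)   # radius of a disk of equal area
    return cy, cx, radius
```

    A real pipeline would follow this with specular-reflection removal and a fitted contour model, as the abstract describes.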

  19. Contrast-enhanced ultrasound imaging and in vivo circulatory kinetics with low-boiling-point nanoscale phase-change perfluorocarbon agents.

    PubMed

    Sheeran, Paul S; Rojas, Juan D; Puett, Connor; Hjelmquist, Jordan; Arena, Christopher B; Dayton, Paul A

    2015-03-01

    Many studies have explored phase-change contrast agents (PCCAs) that can be vaporized by an ultrasonic pulse to form microbubbles for ultrasound imaging and therapy. However, few investigations have been published on the utility and characteristics of PCCAs as contrast agents in vivo. In this study, we examine the properties of low-boiling-point nanoscale PCCAs evaluated in vivo and compare data with those for conventional microbubbles with respect to contrast generation and circulation properties. To do this, we develop a custom pulse sequence to vaporize and image PCCAs using the Verasonics research platform and a clinical array transducer. Results indicate that droplets can produce contrast enhancement similar to that of microbubbles (7.29 to 18.24 dB over baseline, depending on formulation) and can be designed to circulate for as much as 3.3 times longer than microbubbles. This study also reports for the first time the ability to capture contrast washout kinetics of the target organ as a measure of vascular perfusion. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
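    The contrast-enhancement figures quoted above (7.29 to 18.24 dB over baseline) are ratio measurements on a log scale. A minimal sketch of the conversion, assuming amplitude (not power) data, where the function name is illustrative:

```python
import math

def enhancement_db(signal_amp, baseline_amp):
    """Contrast enhancement over baseline in decibels.

    Assumes amplitude data, so dB = 20*log10(signal/baseline);
    power data would use 10*log10 instead."""
    return 20.0 * math.log10(signal_amp / baseline_amp)
```

    On this convention, an amplitude ratio of about 8.2x over baseline corresponds to roughly 18.2 dB, near the top of the reported range.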

  20. 3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform

    NASA Astrophysics Data System (ADS)

    Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul

    2018-03-01

    This paper describes an approach to realizing three-dimensional surfaces of objects with cylinder-based shapes, covering the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, by applying several digital image processing algorithms to two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects were used as experimental models in the reconstructions, each placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable: the shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions are supported by an error analysis, in which the maximum percent error obtained is approximately 1.4% for the height, and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
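    The geometry of a many-projection turntable reconstruction can be illustrated with a toy 2D analogue: for a convex cross-section, intersecting the silhouette extents measured at many view angles (an occupancy-grid "visual hull") recovers the cross-section. This is a simplified stand-in for the paper's Radon-transform formulation, with all names and the 36-angle setup chosen for illustration.

```python
import numpy as np

def visual_hull_2d(extents, angles, grid):
    """2D occupancy-grid visual hull: a grid point is kept iff its
    projection lies inside the measured silhouette extent at every
    view angle (exact for convex cross-sections)."""
    xs, ys = grid
    occ = np.ones(xs.shape, dtype=bool)
    for (lo, hi), th in zip(extents, angles):
        t = xs * np.cos(th) + ys * np.sin(th)   # projection coordinate
        occ &= (t >= lo) & (t <= hi)
    return occ

# simulate silhouette extents of a disk of radius R centred at (cx, cy),
# seen from 36 turntable angles (10-degree steps, as in the paper)
R, cx, cy = 1.0, 0.5, -0.3
angles = np.deg2rad(np.arange(0, 360, 10))
extents = [(cx * np.cos(th) + cy * np.sin(th) - R,
            cx * np.cos(th) + cy * np.sin(th) + R) for th in angles]

n = 201
xs, ys = np.meshgrid(np.linspace(-3, 3, n), np.linspace(-3, 3, n))
occ = visual_hull_2d(extents, angles, (xs, ys))
```

    The recovered area slightly overestimates the disk (the hull from 36 tangent directions is a circumscribed 36-gon), mirroring the small diameter errors reported above.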

  1. Time lapse video recordings of highly purified human hematopoietic progenitor cells in culture.

    PubMed

    Denkers, I A; Dragowska, W; Jaggi, B; Palcic, B; Lansdorp, P M

    1993-05-01

    Major hurdles in studies of stem cell biology include the low frequency and heterogeneity of human hematopoietic precursor cells in bone marrow and the difficulty of directly studying the effect of various culture conditions and growth factors on such cells. We have adapted the cell analyzer imaging system for monitoring and recording the morphology of limited numbers of cells under various culture conditions. Hematopoietic progenitor cells with a CD34+ CD45RAlo CD71lo phenotype were purified from previously frozen organ-donor bone marrow by fluorescence-activated cell sorting. Cultures of such cells were analyzed with the imaging system, composed of an inverted microscope contained in an incubator, a video camera, an optical memory disk recorder and a computer-controlled motorized microscope XYZ precision stage. Fully computer-controlled video images at defined XYZ positions were captured at selected time intervals and recorded in a predetermined sequence on an optical memory disk. In this study, the cell analyzer system was used to obtain descriptions and measurements of hematopoietic cell behavior, such as cell motility, cell interactions, cell shape, cell division, cell cycle time and cell size changes under different culture conditions.

  2. Speckle-learning-based object recognition through scattering media.

    PubMed

    Ando, Takamasa; Horisaki, Ryoichi; Tanida, Jun

    2015-12-28

    We experimentally demonstrated object recognition through scattering media based on direct machine learning of a number of speckle intensity images. In the experiments, speckle intensity images of amplitude or phase objects on a spatial light modulator between scattering plates were captured by a camera. We used the support vector machine for binary classification of the captured speckle intensity images of face and non-face data. The experimental results showed that speckles are sufficient for machine learning.
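    The classification step in this abstract — a support vector machine on flattened speckle intensity images — can be sketched self-containedly with a minimal linear SVM trained by the Pegasos subgradient method. This is an illustrative substitute for a library SVM implementation (and omits the bias term and kernel options a practical setup would use); the synthetic data in the usage below stands in for real face/non-face speckle images.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with the Pegasos subgradient method.

    X: (n, d) flattened speckle intensity images; y: labels in {-1, +1}.
    Minimises the regularised hinge loss by stochastic subgradient steps."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, t = np.zeros(d), 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)            # Pegasos step size schedule
            if y[i] * X[i].dot(w) < 1:       # margin violated: hinge gradient
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # only the regulariser shrinks w
                w = (1 - eta * lam) * w
    return w

def predict(w, X):
    return np.sign(X.dot(w))
```

    In practice the captured speckle images would simply be flattened to feature vectors and fed to this (or a library) SVM for the binary face/non-face decision.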

  3. Holistic and component plant phenotyping using temporal image sequence.

    PubMed

    Das Choudhury, Sruti; Bashyam, Srinidhi; Qiu, Yumou; Samal, Ashok; Awada, Tala

    2018-01-01

    Image-based plant phenotyping facilitates the non-invasive extraction of traits by analyzing a large number of plants in a relatively short period of time. It has the potential to compute advanced phenotypes by considering the whole plant as a single object (holistic phenotypes) or as individual components, i.e., leaves and the stem (component phenotypes), to investigate the biophysical characteristics of the plants. The emergence timing, the total number of leaves present at any point in time, and the growth of individual leaves during the vegetative stage of the maize life cycle are significant phenotypic expressions that best contribute to assessing plant vigor. However, an automated image-based solution to this novel problem is yet to be explored. A set of new holistic and component phenotypes is introduced in this paper. To compute the component phenotypes, it is essential to detect the individual leaves and the stem. Thus, the paper introduces a novel graph-based method to reliably detect the leaves and the stem of maize plants by analyzing 2-dimensional visible-light image sequences captured from the side. The total number of leaves is counted and the length of each leaf is measured for all images in the sequence to monitor leaf growth. To evaluate the performance of the proposed algorithm, we introduce the University of Nebraska-Lincoln Component Plant Phenotyping Dataset (UNL-CPPD) and provide ground truth to facilitate new algorithm development and uniform comparison. The temporal variation of the component phenotypes regulated by genotype and environment (i.e., greenhouse) is experimentally demonstrated for the maize plants on UNL-CPPD. Statistical models are applied to analyze the impact of the greenhouse environment and demonstrate the genetic regulation of the temporal variation of the holistic phenotypes on the public dataset called Panicoid Phenomap-1.
    The central contribution of the paper is a novel computer-vision-based algorithm for automated detection of individual leaves and the stem to compute new component phenotypes, along with a public release of a benchmark dataset, i.e., UNL-CPPD. Detailed experimental analyses demonstrate the temporal variation of the holistic and component phenotypes in maize as regulated by environment and genetic variation, with a discussion of their significance in the context of plant science.

  4. SU-D-207A-05: Investigating Sparse-Sampled MRI for Motion Management in Thoracic Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabouri, P; Sawant, A; Arai, T

    Purpose: Sparse sampling and reconstruction-based MRI techniques represent an attractive strategy to achieve sufficiently high image acquisition speed while maintaining image quality for the task of radiotherapy guidance. In this study, we examine rapid dynamic MRI using a sparse sampling sequence, k-t BLAST, in capturing motion-induced, cycle-to-cycle variations in tumor position. We investigate the utility of long-term MRI-based motion monitoring as a means of better characterizing respiration-induced tumor motion compared to a single-cycle 4DCT. Methods: An MRI-compatible, programmable, deformable lung motion phantom with eleven 1.5 ml water marker tubes was placed inside a 3.0 T whole-body MR scanner (Philips Ingenia). The phantom was programmed with 10 lung tumor motion traces previously recorded using the Synchrony system. 2D+t image sequences of a coronal slice were acquired using a balanced-SSFP sequence combined with k-t BLAST (acceleration = 3, resolution = 0.66 x 0.66 x 5 mm3, acquisition time = 110 ms/slice). kV fluoroscopic (ground truth) and 4DCT imaging were performed with the same phantom setup and motion trajectories. Marker positions in all three modalities were segmented and tracked using an open-source deformable image registration package, NiftyReg. Results: Marker trajectories obtained from rapid MRI exhibited <1 mm error compared to the kV fluoroscopy trajectories in the presence of complex motion, including baseline shifts and changes in respiratory amplitude, indicating the ability of MRI to monitor motion with adequate geometric fidelity for the purpose of radiotherapy guidance. In contrast, the trajectory derived from 4DCT exhibited significant errors, up to 6 mm, due to cycle-to-cycle variations and baseline shifts. Consequently, 4DCT was found to underestimate the range of marker motion by as much as 50%.
    Conclusion: Dynamic MRI is a promising tool for radiotherapy motion management, as it permits long-term, dose-free, soft-tissue-based monitoring of motion, yielding richer and more accurate information about tumor position and motion range compared to the current state of the art, 4DCT. This work was partially supported through research funding from the National Institutes of Health (R01CA169102).

  5. High resolution shallow co-seismic and post-seismic slip from the 2016 central Italy earthquake sequence captured using terrestrial laser scanning, structure from motion and low-cost near-field GNSS

    NASA Astrophysics Data System (ADS)

    Wedmore, L. N. J.; Gregory, L. C.; McCaffrey, K. J. W.; Wilkinson, M.; Walters, R. J.

    2017-12-01

    Coseismic fault slip in the shallow crust is poorly constrained by many of the conventional tools used to record deformation during earthquakes. GNSS stations are often distributed too far from faults, and radar images tend to decorrelate across earthquake surface ruptures. As a result, our understanding of near-field fault slip, shallow slip deficits, and off-fault deformation is limited. We present evidence from the 2016 central Italy earthquake sequence, during which we captured shallow coseismic and post-seismic slip using a combination of terrestrial laser scanning (TLS), structure-from-motion (SfM), and near-field low-cost GNSS recording at 1 Hz. Three Mw > 6 earthquakes, on 24th August, 26th October and 30th October, all involved slip on the Mt Vettore-Mt Bove fault system. We collected TLS and SfM point clouds across three separate segments of this system. Each segment experienced a different record of slip during the earthquake sequence; all three ruptured in the largest event (Mw 6.6 on 30th October), but two segments also ruptured during either the 24th August or the 26th October earthquakes. Following the Mw 6.6 earthquake, the faults were repeatedly surveyed using TLS, with the first scan collected c. 5 hours after the earthquake. This represents the first known instance where shallow coseismic slip has been recorded by pre- and post-event terrestrial laser scanning. Displacement continuously measured across GNSS pairs at 1 Hz demonstrates that permanent near-field displacement developed across the fault in the seconds immediately following the initiation of the rupture. However, a discrepancy between on-fault field measurements of surface displacement and the GNSS-recorded displacement over 1 km long baselines hints at more complex rupture processes and the possibility of high slip gradients in the shallow subsurface.
    Displacement measured by differential TLS confirms the presence of these shallow slip deficits, but suggests that the shallow slip gradient may be controlled by the pattern and timing of slip in the preceding earthquakes. Post-seismic afterslip captured by repeated TLS surveys hints at a more complicated temporal evolution of near-field afterslip than is currently predicted by logarithmic models of this process.

  6. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We develop an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching a database for records matching the nystagmus response extracted from the observed eye image sequence of a patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and stored in the database. In order to enhance diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence, and from the matched simulation conditions the causes and conditions of BPPV are estimated. We apply our image-based computer-assisted diagnosis system to two real eye movement image sequences from patients with BPPV to show its validity.

  7. Point-of-care mobile digital microscopy and deep learning for the detection of soil-transmitted helminths and Schistosoma haematobium.

    PubMed

    Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan

    2017-06-01

    Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. Because resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases, and novel digital image analysis algorithms can be utilized to automate sample analysis. Our objectives were to evaluate the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and to train a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura, and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole-slide scanner and the mobile microscopy scanner. Parasites in the stool-sample images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm. Results were compared between the digital and visual analysis of the images showing helminth eggs. Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3-100%) in the test set (n = 217) of manually labeled helminth eggs. In this proof-of-concept study, the imaging performance of a mobile digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images.

  8. Point-of-care mobile digital microscopy and deep learning for the detection of soil-transmitted helminths and Schistosoma haematobium

    PubMed Central

    Holmström, Oscar; Linder, Nina; Ngasala, Billy; Mårtensson, Andreas; Linder, Ewert; Lundin, Mikael; Moilanen, Hannu; Suutala, Antti; Diwan, Vinod; Lundin, Johan

    2017-01-01

    ABSTRACT Background: Microscopy remains the gold standard in the diagnosis of neglected tropical diseases. Because resource-limited rural areas often lack laboratory equipment and trained personnel, new diagnostic techniques are needed. Low-cost, point-of-care imaging devices show potential in the diagnosis of these diseases. Novel digital image analysis algorithms can be utilized to automate sample analysis. Objective: Evaluation of the imaging performance of a miniature digital microscopy scanner for the diagnosis of soil-transmitted helminths and Schistosoma haematobium, and training of a deep learning-based image analysis algorithm for automated detection of soil-transmitted helminths in the captured images. Methods: A total of 13 iodine-stained stool samples containing Ascaris lumbricoides, Trichuris trichiura, and hookworm eggs and 4 urine samples containing Schistosoma haematobium were digitized using a reference whole-slide scanner and the mobile microscopy scanner. Parasites in the stool-sample images were identified by visual examination and by analysis with a deep learning-based image analysis algorithm. Results were compared between the digital and visual analysis of the images showing helminth eggs. Results: Parasite identification by visual analysis of digital slides captured with the mobile microscope was feasible for all analyzed parasites. Although the spatial resolution of the reference slide scanner is higher, the resolution of the mobile microscope is sufficient for reliable identification and classification of all parasites studied. Digital image analysis of stool sample images captured with the mobile microscope showed high sensitivity for detection of all helminths studied (range of sensitivity = 83.3–100%) in the test set (n = 217) of manually labeled helminth eggs. Conclusions: In this proof-of-concept study, the imaging performance of a mobile digital microscope was sufficient for visual detection of soil-transmitted helminths and Schistosoma haematobium. Furthermore, we show that deep learning-based image analysis can be utilized for the automated detection and classification of helminths in the captured images. PMID:28838305

  9. Capturing Attention When Attention "Blinks"

    ERIC Educational Resources Information Center

    Wee, Serena; Chua, Fook K.

    2004-01-01

    Four experiments addressed the question of whether attention may be captured when the visual system is in the midst of an attentional blink (AB). Participants identified 2 target letters embedded among distractor letters in a rapid serial visual presentation sequence. In some trials, a square frame was inserted between the targets; as the only…

  10. Auditory Attentional Capture: Effects of Singleton Distractor Sounds

    ERIC Educational Resources Information Center

    Dalton, Polly; Lavie, Nilli

    2004-01-01

    The phenomenon of attentional capture by a unique yet irrelevant singleton distractor has typically been studied in visual search. In this article, the authors examine whether a similar phenomenon occurs in the auditory domain. Participants searched sequences of sounds for targets defined by frequency, intensity, or duration. The presence of a…

  11. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

    This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than that obtained using the best solution proposed in our previous work. When applied to align real image sequences with an unknown transformation involved, the alignment based on cubic B-splines also achieved significantly better results than our previous methodology (p < 0.001). The consequences of the temporal alignment on the dynamic center of pressure (COP) displacement were also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.
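
    The B-spline time model above can be illustrated with a uniform cubic B-spline warping function t ↦ B(t) evaluated from scalar control points. This is a generic sketch of such a warp under our own parameterization, not the paper's exact formulation:

```python
import numpy as np

def cubic_bspline_warp(t, ctrl):
    """Evaluate a uniform cubic B-spline warping function at times t in [0, 1].

    ctrl holds scalar control points of the warp; len(ctrl) - 3 spline
    segments cover [0, 1]. Generic uniform cubic B-spline evaluation,
    not the paper's exact parameterization.
    """
    t = np.asarray(t, dtype=float)
    ctrl = np.asarray(ctrl, dtype=float)
    n_seg = len(ctrl) - 3
    x = np.clip(t, 0.0, 1.0) * n_seg           # spline coordinate
    i = np.minimum(x.astype(int), n_seg - 1)   # segment index
    u = x - i                                  # local parameter in [0, 1]
    c0, c1, c2, c3 = (ctrl[i + k] for k in range(4))
    return ((1 - u) ** 3 * c0
            + (3 * u ** 3 - 6 * u ** 2 + 4) * c1
            + (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) * c2
            + u ** 3 * c3) / 6.0

# control points chosen so the warp is the identity mapping on [0, 1];
# perturbing them locally speeds up or slows down the time axis
identity_ctrl = (np.arange(6) - 1) / 3.0
```

Optimizing the control points against a sequence-similarity cost (as the paper does) then yields the temporal alignment; one sequence is resampled at the warped times B(t).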

  12. Optimized protocols for cardiac magnetic resonance imaging in patients with thoracic metallic implants.

    PubMed

    Olivieri, Laura J; Cross, Russell R; O'Brien, Kendall E; Ratnayaka, Kanishka; Hansen, Michael S

    2015-09-01

    Cardiac magnetic resonance (MR) imaging is a valuable tool in congenital heart disease; however, patients frequently have metal devices in the chest from the treatment of their disease that complicate imaging. Methods are needed to improve imaging around metal implants near the heart. Basic sequence parameter manipulations have the potential to minimize artifact while limiting effects on image resolution and quality. Our objective was to design cine and static cardiac imaging sequences that minimize metal artifact while maintaining image quality. Using systematic variation of standard imaging parameters on a fluid-filled phantom containing commonly used metal cardiac devices, we developed optimized sequences for steady-state free precession (SSFP) and gradient recalled echo (GRE) cine imaging and for turbo spin-echo (TSE) black-blood imaging. We imaged 17 consecutive patients undergoing routine cardiac MR with 25 metal implants of various origins using both standard and optimized imaging protocols for a given slice position. We rated images for quality and metal artifact size by measuring metal artifact in two orthogonal planes within the image. All metal artifacts were reduced with optimized imaging. The average metal artifact reduction for the optimized SSFP cine was 1.5 ± 1.8 mm, and for the optimized GRE cine the reduction was 4.6 ± 4.5 mm (P < 0.05). Quality ratings favored the optimized GRE cine. Similarly, the average metal artifact reduction for the optimized TSE images was 1.6 ± 1.7 mm (P < 0.05), and quality ratings favored the optimized TSE imaging. Imaging sequences tailored to minimize metal artifact are easily created by modifying basic sequence parameters, and the resulting images are superior to those from standard imaging sequences in both quality and artifact size. Specifically, for optimized cine imaging a GRE sequence should be used with settings that favor short echo time, i.e., flow compensation off, weak asymmetrical echo, and a relatively high receiver bandwidth. For static black-blood imaging, a TSE sequence should be used with fat saturation turned off and high receiver bandwidth.

  13. In vivo Proton Electron Double Resonance Imaging of Mice with Fast Spin Echo Pulse Sequence

    PubMed Central

    Sun, Ziqi; Li, Haihong; Petryakov, Sergey; Samouilov, Alex; Zweier, Jay L.

    2011-01-01

    Purpose To develop and evaluate a 2D fast spin echo (FSE) pulse sequence for enhancing temporal resolution and reducing tissue heating for in vivo proton electron double resonance imaging (PEDRI) of mice. Materials and Methods A four-compartment phantom containing 2 mM TEMPONE was imaged at 20.1 mT using 2D FSE-PEDRI and regular gradient echo (GRE)-PEDRI pulse sequences. Control mice were infused with TEMPONE over ∼1 min, followed by time-course imaging using the 2D FSE-PEDRI sequence at intervals of 10–30 s between image acquisitions. The average signal intensity from the time-course images was analyzed using a first-order kinetics model. Results Phantom experiments demonstrated that EPR power deposition can be greatly reduced using the FSE-PEDRI pulse sequence compared to the conventional gradient echo pulse sequence. High temporal resolution was achieved at ∼4 s per image acquisition using the FSE-PEDRI sequence, with a good image SNR in the range of 233–266 in the phantom study. The TEMPONE half-life measured in vivo was ∼72 s. Conclusion Thus, the FSE-PEDRI pulse sequence enables fast in vivo functional imaging of free radical probes in small animals, greatly reducing EPR irradiation time and power deposition while providing increased temporal resolution. PMID:22147559
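
    The first-order kinetics analysis of the time-course signal reduces to fitting S(t) = S0·exp(−kt) and reporting the half-life t½ = ln 2 / k. A minimal log-linear fit sketch (the function name and the noise-free, positive-signal assumption are ours):

```python
import numpy as np

def half_life(times, signal):
    """Estimate the half-life of a first-order decay S(t) = S0*exp(-k*t)
    via a log-linear least-squares fit. Assumes a positive signal."""
    slope = np.polyfit(times, np.log(signal), 1)[0]  # slope = -k
    return np.log(2.0) / -slope
```

With noisy data a weighted or nonlinear fit would be preferable, but the log-linear form suffices to illustrate how the ∼72 s figure is obtained from the time-course images.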

  14. Abdominal MR imaging in children: motion compensation, sequence optimization, and protocol organization.

    PubMed

    Chavhan, Govind B; Babyn, Paul S; Vasanawala, Shreyas S

    2013-05-01

    Familiarity with basic sequence properties and their trade-offs is necessary for radiologists performing abdominal magnetic resonance (MR) imaging. Acquiring diagnostic-quality MR images in the pediatric abdomen is challenging due to motion, inability to breath-hold, varying patient size, and artifacts. Motion-compensation techniques (eg, respiratory gating, signal averaging, suppression of signal from moving tissue, swapping phase- and frequency-encoding directions, use of faster sequences with breath holding, parallel imaging, and radial k-space filling) can improve image quality. Each of these techniques is more suitable for use with certain sequences and acquisition planes and in specific situations and age groups. Different T1- and T2-weighted sequences work better in different age groups and with differing acquisition planes and have specific advantages and disadvantages. Dynamic imaging should be performed differently in younger children than in older children; in younger children, the sequence and the timing of dynamic phases need to be adjusted. Different sequences work better in smaller children and in older children because of differing breath-holding ability, breathing patterns, field of view, and use of sedation. Hence, specific protocols should be maintained for younger children and older children. Combining longer, higher-resolution sequences with faster, lower-resolution sequences helps acquire diagnostic-quality images in a reasonable time. © RSNA, 2013.

  15. Multispectral laser-induced fluorescence imaging system for large biological samples

    NASA Astrophysics Data System (ADS)

    Kim, Moon S.; Lefcourt, Alan M.; Chen, Yud-Ren

    2003-07-01

    A laser-induced fluorescence imaging system developed to capture multispectral fluorescence emission images simultaneously from a relatively large target object is described. With an expanded, 355-nm Nd:YAG laser as the excitation source, the system captures fluorescence emission images in the blue, green, red, and far-red regions of the spectrum centered at 450, 550, 678, and 730 nm, respectively, from a 30-cm-diameter target area in ambient light. Images of apples and of pork meat artificially contaminated with diluted animal feces have demonstrated the versatility of fluorescence imaging techniques for potential applications in food safety inspection. Regions of contamination, including sites that were not readily visible to the human eye, could easily be identified from the images.

  16. Embedded image processing engine using ARM cortex-M4 based STM32F407 microcontroller

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Samaiya, Devesh, E-mail: samaiya.devesh@gmail.com

    2014-10-06

    Due to advancements in low-cost, easily available, yet powerful hardware and the revolution in open source software, the urge to make newer, more interactive machines and electronic systems has increased manifold among engineers. To make systems more interactive, designers need easy-to-use sensor systems. Giving machines the boon of vision has never been easy; though no longer impossible, it remains difficult and expensive. This work presents a low-cost, moderate-performance, programmable image processing engine. The engine can capture real-time images, store them in permanent storage, and perform preprogrammed image processing operations on the captured images.

  17. Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database

    PubMed Central

    2017-01-01

    Image encryption technology is one of the main means of ensuring the safety of image information. Using the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient method for image encryption based on chaos theory and a DNA sequence database is proposed. In this approach, pixel locations are scrambled with a chaotic sequence and pixel gray values are transformed through a hyperchaotic mapping between quaternary sequences and DNA sequences, combined with the transformation logic between DNA sequences. Bases are replaced under displacement rules using DNA coding over a number of iterations driven by an enhanced quaternary hyperchaotic sequence generated by the Chen chaotic system. The cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only demonstrates excellent encryption performance but also effectively resists chosen-plaintext, statistical, and differential attacks. PMID:28392799
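
    A heavily simplified sketch of the confusion/diffusion structure described above, substituting a logistic map for the Chen hyperchaotic system and omitting the DNA-coding and cipher-feedback steps entirely:

```python
import numpy as np

def chaos_sequence(n, x0=0.3141, r=3.99):
    """Logistic-map sequence x -> r*x*(1-x): a simplified stand-in for
    the Chen hyperchaotic generator used in the paper."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.3141):
    """Confusion: permute pixel positions by argsorting a chaotic
    sequence. Diffusion: XOR gray values with chaos-derived bytes."""
    flat = img.ravel()
    seq = chaos_sequence(flat.size, x0)
    perm = np.argsort(seq)                 # chaos-derived permutation
    key = (seq * 255).astype(np.uint8)     # chaos-derived keystream
    return (flat[perm] ^ key).reshape(img.shape), perm

def decrypt(cipher, perm, x0=0.3141):
    """Invert the XOR, then undo the permutation."""
    seq = chaos_sequence(cipher.size, x0)
    key = (seq * 255).astype(np.uint8)
    flat = cipher.ravel() ^ key
    out = np.empty_like(flat)
    out[perm] = flat                       # inverse permutation
    return out.reshape(cipher.shape)
```

The key is the chaotic initial condition x0 (and parameter r); the sketch shows only why sensitivity to initial values makes such schemes key-dependent, not the full security construction of the paper.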

  18. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method for image acceptance and commissioning of the scanner, the radiofrequency (RF) coils, and the pulse sequences of an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of the simulation RF coils was measured and compared using the standard sequence with different clinical diagnostic coils. We used simulation sequences with simulation coils to test image quality and the advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When using standard ACR sequences with a split head coil, image quality passed all ACR recommended criteria. The image intensity uniformity with a simulation RF coil decreased about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable. These image quality parameters could be improved to more than 85% by built-in intensity calibration methods. In the simulation sequences test, the contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well controlled at the isocenter and 10 cm off-center within a range of ±1% (2 mm). We developed a semiautomatic image quality analysis method for quantitative evaluation of images and commissioning of an MRI simulator. The baseline performances of simulation RF coils and pulse sequences have been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
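
    The intensity-uniformity figure quoted above is conventionally computed as the ACR percent integral uniformity (PIU). A minimal sketch (the full ACR procedure takes the extrema over means of small sub-ROIs rather than over raw pixels, which is simplified away here):

```python
import numpy as np

def percent_integral_uniformity(roi):
    """ACR-style percent integral uniformity over a uniform phantom ROI:
    PIU = 100 * (1 - (Smax - Smin) / (Smax + Smin)).
    Raw pixel extrema are used for brevity; the ACR procedure averages
    small sub-ROIs to suppress noise."""
    smax, smin = float(np.max(roi)), float(np.min(roi))
    return 100.0 * (1.0 - (smax - smin) / (smax + smin))
```

A perfectly flat ROI scores 100%; increasing shading or coil-sensitivity roll-off drives the score down, which is the behavior the 34% drop with the simulation coil reflects.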

  19. Panax ginseng genome examination for ginsenoside biosynthesis.

    PubMed

    Xu, Jiang; Chu, Yang; Liao, Baosheng; Xiao, Shuiming; Yin, Qinggang; Bai, Rui; Su, He; Dong, Linlin; Li, Xiwen; Qian, Jun; Zhang, Jingjing; Zhang, Yujun; Zhang, Xiaoyan; Wu, Mingli; Zhang, Jie; Li, Guozheng; Zhang, Lei; Chang, Zhenzhan; Zhang, Yuebin; Jia, Zhengwei; Liu, Zhixiang; Afreh, Daniel; Nahurira, Ruth; Zhang, Lianjuan; Cheng, Ruiyang; Zhu, Yingjie; Zhu, Guangwei; Rao, Wei; Zhou, Chao; Qiao, Lirui; Huang, Zhihai; Cheng, Yung-Chi; Chen, Shilin

    2017-11-01

    Ginseng, which contains ginsenosides as bioactive compounds, has been regarded as an important traditional medicine for several millennia. However, the genetic background of ginseng remains poorly understood, partly because of the plant's large and complex genome composition. We report the entire genome sequence of Panax ginseng using next-generation sequencing. The 3.5-Gb nucleotide sequence contains more than 60% repeats and encodes 42 006 predicted genes. Twenty-two transcriptome datasets and mass spectrometry images of ginseng roots were used to precisely quantify the functional genes. Thirty-one genes were identified to be involved in the mevalonic acid pathway. Eight of these genes were annotated as 3-hydroxy-3-methylglutaryl-CoA reductases, which displayed diverse structures and expression characteristics. A total of 225 UDP-glycosyltransferases (UGTs) were identified, and these UGTs accounted for one of the largest gene families of ginseng. Tandem repeats contributed to the duplication and divergence of UGTs. Molecular modeling of UGTs in the 71st, 74th, and 94th families revealed a regiospecific conserved motif located at the N-terminus. Molecular docking predicted that this motif captures ginsenoside precursors. The ginseng genome represents a valuable resource for understanding and improving the breeding, cultivation, and synthetic biology of this key herb. © The Author 2017. Published by Oxford University Press.

  20. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms: Wild-ID, I3S Pattern+, APHIS, and AmphIdent, using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated it in relation to database size and the number of matching images in the database. We found that performance differed greatly by algorithm and image database, with recognition rates ranging from 22.6% to 100% when limiting the review to the 10 highest-ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.
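
    The recognition rates quoted above ("within the 10 highest-ranking images") can be computed from the rank at which each query's true match appears in the algorithm's candidate list. A small sketch of that metric (the data layout is our assumption):

```python
def recognition_rate(true_match_ranks, k=10):
    """Fraction of query images whose true match appears within the
    top-k ranked candidates returned by a photo-matching algorithm.

    true_match_ranks holds, for each query, the 1-based rank of the
    correct individual, or None if it was not returned at all.
    """
    hits = sum(1 for r in true_match_ranks if r is not None and r <= k)
    return hits / len(true_match_ranks)
```

Sweeping k from 1 upward yields the rank-vs-recognition curves typically used to compare such algorithms.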

  1. Raspberry Pi-powered imaging for plant phenotyping.

    PubMed

    Tovar, Jose C; Hoyer, J Steen; Lin, Andy; Tielking, Allison; Callen, Steven T; Elizabeth Castillo, S; Miller, Michael; Tessman, Monica; Fahlgren, Noah; Carrington, James C; Nusinow, Dmitri A; Gehan, Malia A

    2018-03-01

    Image-based phenomics is a powerful approach to capture and quantify plant diversity. However, commercial platforms that make consistent image acquisition easy are often cost-prohibitive. To make high-throughput phenotyping methods more accessible, low-cost microcomputers and cameras can be used to acquire plant image data. We used low-cost Raspberry Pi computers and cameras to manage and capture plant image data. Detailed here are three different applications of Raspberry Pi-controlled imaging platforms for seed and shoot imaging. Images obtained from each platform were suitable for extracting quantifiable plant traits (e.g., shape, area, height, color) en masse using open-source image processing software such as PlantCV. This protocol describes three low-cost platforms for image acquisition that are useful for quantifying plant diversity. When coupled with open-source image processing tools, these imaging platforms provide viable low-cost solutions for incorporating high-throughput phenomics into a wide range of research programs.
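
    As an illustration of the trait-extraction step, projected shoot area can be estimated with a simple global threshold; real PlantCV-style pipelines add color-space segmentation and noise filtering, and the threshold value and grayscale input here are assumptions:

```python
import numpy as np

def plant_area(image, threshold=100):
    """Count foreground pixels after a simple global threshold: a
    bare-bones stand-in for the projected-area trait that PlantCV-style
    pipelines extract from Raspberry Pi camera images."""
    return int((np.asarray(image) > threshold).sum())
```

Multiplying the pixel count by the calibrated area per pixel converts this to a physical area; similar masks feed the shape, height, and color traits mentioned above.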

  2. Comparison of magnetic resonance imaging sequences for depicting the subthalamic nucleus for deep brain stimulation.

    PubMed

    Nagahama, Hiroshi; Suzuki, Kengo; Shonai, Takaharu; Aratani, Kazuki; Sakurai, Yuuki; Nakamura, Manami; Sakata, Motomichi

    2015-01-01

    Electrodes are surgically implanted into the subthalamic nucleus (STN) of Parkinson's disease patients to provide deep brain stimulation. To ensure correct positioning, the anatomic location of the STN must be determined preoperatively. Magnetic resonance imaging has been used for pinpointing the location of the STN. To identify the imaging sequence best suited for visualizing the STN, we compared images produced with T2 star-weighted angiography (SWAN), gradient echo T2*-weighted imaging, and fast spin echo T2-weighted imaging in 6 healthy volunteers. Our comparison involved measurement of the contrast-to-noise ratio (CNR) for the STN and substantia nigra and a radiologist's interpretations of the images. Of the sequences examined, SWAN images yielded significantly higher CNR and qualitative scores than the other images (p < 0.01) for STN visualization. The kappa value for SWAN images (0.74) was the highest of the three sequences for visualizing the STN. SWAN is thus the sequence best suited for identifying the STN at the present time.
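
    The CNR comparison above follows the usual definition: the absolute difference of mean signals in the two structures divided by the standard deviation of background noise. A minimal sketch (ROI selection is left to the analyst):

```python
import numpy as np

def cnr(roi_a, roi_b, background):
    """Contrast-to-noise ratio between two structures (e.g. STN and
    substantia nigra): |mean_A - mean_B| / SD(background noise)."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(background)
```

A higher CNR means the boundary between the two nuclei is easier to delineate, which is why it serves as the quantitative criterion alongside the radiologist's qualitative scores.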

  3. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis so that the variance caused by minor and irregular subject movements is reduced. Without affecting the comfort level and inducing minimal harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by unconscious head shaking of the subjects. A fixed image for registration is produced through localization of the centroid of the eye region as well as image translation and rotation processes. The thermal image sequence is then automatically registered using the proposed two-stage genetic algorithm. The deviation before and after image registration is quantified by image quality indices. The results show that the infrared thermal image sequence registration process proposed in this study is effective in localizing facial images accurately, which will be beneficial to the correlation analysis of psychological information related to the facial area.

  4. A new 4-dimensional imaging system for jaw tracking.

    PubMed

    Lauren, Mark

    2014-01-01

    A non-invasive 4D imaging system that produces high resolution time-based 3D surface data has been developed to capture jaw motion. Fluorescent microspheres are brushed onto both tooth and soft-tissue areas of the upper and lower arches to be imaged. An extraoral hand-held imaging device, operated about 12 cm from the mouth, captures a time-based set of perspective image triplets of the patch areas. Each triplet, containing both upper and lower arch data, is converted to a high-resolution 3D point mesh using photogrammetry, providing the instantaneous relative jaw position. Eight 3D positions per second are captured. Using one of the 3D frames as a reference, a 4D model can be constructed to describe the incremental free body motion of the mandible. The surface data produced by this system can be registered to conventional 3D models of the dentition, allowing them to be animated. Applications include integration into prosthetic CAD and CBCT data.

  5. Real Time Apnoea Monitoring of Children Using the Microsoft Kinect Sensor: A Pilot Study

    PubMed Central

    Al-Naji, Ali; Gibson, Kim; Lee, Sang-Heon; Chahl, Javaan

    2017-01-01

    The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real-time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen caused by the respiratory rhythm. These variations were magnified, analyzed, and detected at a distance of 2.5 m from the subject. A modified motion magnification system and a frame subtraction technique were used to identify breathing movements by detecting rapid motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, and safe, and has low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications. PMID:28165382
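
    The frame-subtraction step can be sketched as counting pixels whose frame-to-frame change exceeds a threshold and then counting bursts of motion; the motion-magnification stage of the paper is omitted, and all thresholds and the burst-per-breath assumption below are illustrative:

```python
import numpy as np

def motion_signal(frames, threshold=10):
    """Per-frame count of pixels whose absolute frame-to-frame difference
    exceeds `threshold`: a bare-bones frame-subtraction proxy for
    thoracic/abdominal breathing motion."""
    frames = np.asarray(frames, dtype=np.int16)  # avoid uint8 wraparound
    diffs = np.abs(np.diff(frames, axis=0))
    return (diffs > threshold).sum(axis=(1, 2))

def breaths_per_minute(signal, fps, active=0):
    """Count rising edges of the motion signal above `active` and
    convert to a rate, assuming one motion burst per breath."""
    moving = signal > active
    rises = np.count_nonzero(~moving[:-1] & moving[1:])
    return rises * 60.0 * fps / len(signal)
```

Apnoea detection then reduces to flagging windows in which the motion signal stays below the activity threshold for longer than a clinically chosen duration.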

  6. Full-color digitized holography for large-scale holographic 3D imaging of physical and nonphysical objects.

    PubMed

    Matsushima, Kyoji; Sonobe, Noriaki

    2018-01-01

    Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.

  7. Can light-field photography ease focusing on the scalp and oral cavity?

    PubMed

    Taheri, Arash; Feldman, Steven R

    2013-08-01

    Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. Having information regarding the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly at different points between the hairs to choose the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use, and they can capture more information over the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  8. Simultaneous genomic identification and profiling of a single cell using semiconductor-based next generation sequencing.

    PubMed

    Watanabe, Manabu; Kusano, Junko; Ohtaki, Shinsaku; Ishikura, Takashi; Katayama, Jin; Koguchi, Akira; Paumen, Michael; Hayashi, Yoshiharu

    2014-09-01

    Combining single-cell methods and next-generation sequencing should provide a powerful means to understand single-cell biology and obviate the effects of sample heterogeneity. Here we report a single-cell identification method and seamless cancer gene profiling using semiconductor-based massively parallel sequencing. A549 cells (an adenocarcinomic human alveolar basal epithelial cell line) were used as a model. Single-cell capture was performed using laser capture microdissection (LCM) with an Arcturus® XT system, and a captured single cell and a bulk population of A549 cells (≈ 10(6) cells) were subjected to whole genome amplification (WGA). For cell identification, a multiplex PCR method (AmpliSeq™ SNP HID panel) was used to enrich 136 highly discriminatory SNPs with a genotype concordance probability of 10(31-35). For cancer gene profiling, mutation profiling was performed in parallel using a hotspot panel for 50 cancer-related genes. Sequencing was performed using a semiconductor-based benchtop sequencer. The distribution of sequence reads for both the HID and Cancer panel amplicons was consistent across these samples. For the bulk population of cells, the percentages of sequence covered at more than 100× coverage were 99.04% for the HID panel and 98.83% for the Cancer panel, while for the single cell the percentages of sequence covered at more than 100× coverage were 55.93% for the HID panel and 65.96% for the Cancer panel. Partial amplification failure, randomly distributed non-amplified regions across samples from single cells during the WGA procedures, or random allele dropout probably caused these differences. However, comparative analyses showed that this method successfully discriminated a single A549 cancer cell from a bulk population of A549 cells. Thus, our approach provides a powerful means to overcome tumor sample heterogeneity when searching for somatic mutations.

  9. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  10. Comparison of the quality of different magnetic resonance image sequences of multiple myeloma.

    PubMed

    Sun, Zhao-yong; Zhang, Hai-bo; Li, Shuo; Wang, Yun; Xue, Hua-dan; Jin, Zheng-yu

    2015-02-01

    To compare the image quality of the T1WI fat phase, T1WI water phase, short time inversion recovery (STIR), and diffusion weighted imaging (DWI) sequences in the evaluation of multiple myeloma (MM). Totally 20 MM patients were enrolled in this study. All patients underwent scanning at coronal T1WI fat phase, coronal T1WI water phase, coronal STIR sequence, and axial DWI sequence. The image quality of the four different sequences was evaluated. The image was divided into seven sections (head and neck, chest, abdomen, pelvis, thigh, leg, and foot), and the signal-to-noise ratio (SNR) was measured at seven segments (skull, spine, pelvis, humerus, femur, tibia and fibula, and ribs). In addition, 20 active MM lesions were selected, and the contrast-to-noise ratio (CNR) of each scan sequence was calculated. The average image quality scores of the T1WI fat phase, T1WI water phase, STIR sequence, and DWI sequence were 4.19 ± 0.70, 4.16 ± 0.73, 3.89 ± 0.70, and 3.76 ± 0.68, respectively. The image quality at the T1-fat phase and T1-water phase was significantly higher than that at STIR (P=0.000 and P=0.001) and DWI (both P=0.000); however, there was no significant difference between the T1-fat and T1-water phases (P=0.723) or between the STIR and DWI sequences (P=0.167). The SNR of the T1WI fat phase was significantly higher than those of the other three sequences (all P=0.000), and there was no significant difference among the other three sequences (all P>0.05). Although the CNR of the DWI sequence was slightly higher than those of the other three sequences, there was no significant difference among them (all P>0.05). Imaging at the T1WI fat phase, T1WI water phase, STIR sequence, and DWI sequence each has certain advantages, and they should be combined in the diagnosis of MM.
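
    The SNR and CNR figures of merit used above are simple ratios over regions of interest. A minimal sketch with the usual definitions (mean signal over background noise standard deviation); the ROI pixel values below are hypothetical, not data from the study.

```python
# SNR = mean(signal ROI) / sd(background ROI)
# CNR = |mean(lesion ROI) - mean(tissue ROI)| / sd(background ROI)
from statistics import mean, stdev

def snr(signal_roi, background_roi):
    return mean(signal_roi) / stdev(background_roi)

def cnr(lesion_roi, tissue_roi, background_roi):
    return abs(mean(lesion_roi) - mean(tissue_roi)) / stdev(background_roi)

signal = [210, 205, 198, 215]      # hypothetical marrow intensities
lesion = [250, 255, 248, 252]      # hypothetical active-lesion intensities
background = [4, 6, 5, 7]          # air/background noise samples
print(round(snr(signal, background), 1))
print(round(cnr(lesion, signal, background), 1))
```

    Per-segment SNR measurements like those in the study amount to evaluating `snr` once per skeletal region and per sequence.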

  11. MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences

    PubMed Central

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.

    2016-01-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
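
    The core of the MISTICA idea, a minimum spanning tree over pairwise frame dissimilarities, can be sketched compactly. The sketch below assumes frames have already been reduced to feature vectors and uses Euclidean distance as the dissimilarity; in the tree, poor-quality frames tend to land on leaves, and each frame would be registered along its path toward the automatically chosen anchor.

```python
# Kruskal's algorithm over all pairwise frame distances: returns the edges of
# a minimum spanning tree. Frames are stand-in feature vectors (hypothetical).
from itertools import combinations
import math

def mst_edges(frames):
    n = len(frames)
    dist = lambda a, b: math.dist(frames[a], frames[b])
    parent = list(range(n))                      # union-find forest
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]        # path halving
            x = parent[x]
        return x
    edges = []
    for a, b in sorted(combinations(range(n), 2), key=lambda e: dist(*e)):
        ra, rb = find(a), find(b)
        if ra != rb:                             # edge joins two components
            parent[ra] = rb
            edges.append((a, b))
    return edges

# Two clusters of similar frames; the tree links similar frames first.
frames = [(0, 0), (0.1, 0), (5, 5), (0.2, 0.1), (5.1, 5)]
tree = mst_edges(frames)
assert len(tree) == len(frames) - 1   # a spanning tree has n-1 edges
```

    Choosing the anchor automatically (e.g., a high-degree or central tree node) and registering each frame to its tree parent is what shortens the registration chains and limits error propagation.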

  12. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    PubMed

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.

  13. Diffusion-weighted imaging of the liver with multiple b values: effect of diffusion gradient polarity and breathing acquisition on image quality and intravoxel incoherent motion parameters--a pilot study.

    PubMed

    Dyvorne, Hadrien A; Galea, Nicola; Nevers, Thomas; Fiel, M Isabel; Carpenter, David; Wong, Edmund; Orton, Matthew; de Oliveira, Andre; Feiweier, Thorsten; Vachon, Marie-Louise; Babb, James S; Taouli, Bachir

    2013-03-01

    To optimize intravoxel incoherent motion (IVIM) diffusion-weighted (DW) imaging by estimating the effects of diffusion gradient polarity and breathing acquisition scheme on image quality, signal-to-noise ratio (SNR), IVIM parameters, and parameter reproducibility, as well as to investigate the potential of IVIM in the detection of hepatic fibrosis. In this institutional review board-approved prospective study, 20 subjects (seven healthy volunteers, 13 patients with hepatitis C virus infection; 14 men, six women; mean age, 46 years) underwent IVIM DW imaging with four sequences: (a) respiratory-triggered (RT) bipolar (BP) sequence, (b) RT monopolar (MP) sequence, (c) free-breathing (FB) BP sequence, and (d) FB MP sequence. Image quality scores were assessed for all sequences. A biexponential analysis with the Bayesian method yielded true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (PF) in liver parenchyma. Mixed-model analysis of variance was used to compare image quality, SNR, IVIM parameters, and interexamination variability between the four sequences, as well as the ability to differentiate areas of liver fibrosis from normal liver tissue. Image quality with RT sequences was superior to that with FB acquisitions (P = .02) and was not affected by gradient polarity. SNR did not vary significantly between sequences. IVIM parameter reproducibility was moderate to excellent for PF and D, while it was less reproducible for D*. PF and D were both significantly lower in patients with hepatitis C virus than in healthy volunteers with the RT BP sequence (PF = 13.5% ± 5.3 [standard deviation] vs 9.2% ± 2.5, P = .038; D = [1.16 ± 0.07] × 10(-3) mm(2)/sec vs [1.03 ± 0.1] × 10(-3) mm(2)/sec, P = .006). The RT BP DW imaging sequence had the best results in terms of image quality, reproducibility, and ability to discriminate between healthy and fibrotic liver with biexponential fitting.
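
    The biexponential IVIM model underlying the fitted parameters above is compact enough to state in code. The parameter values are illustrative (of the order reported for liver), not the study's fitted values.

```python
# IVIM signal model: S(b)/S0 = PF*exp(-b*D*) + (1-PF)*exp(-b*D),
# where PF is the perfusion fraction, D* the pseudodiffusion coefficient
# (fast, perfusion-driven pool) and D the true diffusion coefficient.
import math

def ivim_signal(b, pf, d_star, d, s0=1.0):
    """Biexponential IVIM model: perfusion (fast) + true diffusion (slow) pools."""
    return s0 * (pf * math.exp(-b * d_star) + (1 - pf) * math.exp(-b * d))

pf, d_star, d = 0.10, 50e-3, 1.1e-3   # PF, D*, D; D*/D in mm^2/s (illustrative)
curve = [ivim_signal(b, pf, d_star, d) for b in (0, 50, 200, 800)]
# At b=0 the model returns S0; at high b the perfusion pool has fully decayed,
# leaving essentially monoexponential decay governed by D alone.
```

    A Bayesian or least-squares fit of this model to multi-b-value data is what yields the D, D*, and PF estimates compared between the four sequences.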

  14. A Unified Framework for Street-View Panorama Stitching

    PubMed Central

    Li, Li; Yao, Jian; Xie, Renping; Xia, Menghan; Zhang, Wei

    2016-01-01

    In this paper, we propose a unified framework to generate a pleasant, high-quality street-view panorama by stitching multiple panoramic images captured by cameras mounted on a mobile platform. Our proposed framework comprises four major steps: image warping, color correction, optimal seam line detection and image blending. Since the input images are captured without a precisely common projection center, from scenes whose depths differ with respect to the cameras to different extents, such images cannot be precisely aligned in geometry. Therefore, an efficient image warping method based on the dense optical flow field is first proposed to greatly suppress the influence of large geometric misalignment. Then, to lessen the influence of photometric inconsistencies caused by illumination variations and different exposure settings, we propose an efficient color correction algorithm that matches extreme points of histograms to greatly decrease color differences between warped images. After that, the optimal seam lines between adjacent input images are detected via the graph cut energy minimization framework. At last, the Laplacian pyramid blending algorithm is applied to further eliminate the stitching artifacts along the optimal seam lines. Experimental results on a large set of challenging street-view panoramic images captured from the real world illustrate that the proposed system is capable of creating high-quality panoramas. PMID:28025481
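
    The color-correction step can be approximated by a linear map that brings distinguished points of one image's intensity distribution onto another's. This sketch substitutes low/high percentiles of the overlap region for the paper's histogram extreme points; the pixel values are hypothetical.

```python
# Linear intensity matching between the overlap regions of two adjacent
# warped images: map src so its lo/hi percentile levels meet ref's.

def linear_match(src, ref, lo_q=0.05, hi_q=0.95):
    def pct(values, q):
        s = sorted(values)
        return s[min(int(q * len(s)), len(s) - 1)]
    s_lo, s_hi = pct(src, lo_q), pct(src, hi_q)
    r_lo, r_hi = pct(ref, lo_q), pct(ref, hi_q)
    gain = (r_hi - r_lo) / (s_hi - s_lo)          # scale contrast range
    return [r_lo + (x - s_lo) * gain for x in src]  # then shift levels

# Overlap pixels from two exposures of the same scene (hypothetical):
dark = [20, 40, 60, 80, 100]
bright = [60, 80, 100, 120, 140]
corrected = linear_match(dark, bright)
```

    Applying such a per-channel correction before seam detection removes most visible exposure steps, leaving the graph-cut seam and pyramid blending to hide the residual differences.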

  15. Web surveillance system using platform-based design

    NASA Astrophysics Data System (ADS)

    Lin, Shin-Yo; Tsai, Tsung-Han

    2004-04-01

    A SOPC platform-based design environment for multimedia communications is developed. We embed a soft-core processor in an FPGA to perform image compression and plug an Ethernet daughter board into the SOPC development platform. On this basis, a web surveillance platform system is presented. The web surveillance system consists of three parts: image capture, a web server, and JPEG compression. In this architecture, the user can control the surveillance system remotely. With an IP address configured on the Ethernet daughter board, the user can access the surveillance system via a browser. When the user accesses the system, the CMOS sensor captures the remote image and feeds it to the embedded processor, which immediately performs JPEG compression. The user then receives the compressed data via Ethernet. The whole system will be implemented on an APEX20K200E484-2X device.

  16. A Pixel Correlation Technique for Smaller Telescopes to Measure Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2013-04-01

    Pixel correlation uses the same reduction techniques as speckle imaging but relies on autocorrelation among captured pixel hits rather than true speckles. A video camera operating at exposure times (8-66 milliseconds) similar to those of lucky imaging is used to capture 400-1,000 video frames. The AVI files are converted to bitmap images and analyzed with the interferometric algorithms in REDUC using all frames. This yields a series of correlograms from which theta and rho can be measured. Results using a 20 cm (8") Dall-Kirkham working at f/22.5 are presented for doubles with separations between 1" and 5.7" under average seeing conditions. I conclude that this form of visualizing and analyzing visual double stars is a viable alternative to lucky imaging that can be employed with telescopes too small in aperture to capture a sufficient number of speckles for true speckle interferometry.
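
    The correlogram idea reduces to this: the autocorrelation of a frame containing two stars has a secondary peak at the pair's separation vector, from which rho and theta follow. A 1-D toy frame with hypothetical intensities illustrates the principle.

```python
# Brute-force circular autocorrelation of a 1-D "frame"; the strongest
# nonzero lag recovers the separation between the two point sources.

def autocorr(frame):
    n = len(frame)
    return [sum(frame[i] * frame[(i + lag) % n] for i in range(n))
            for lag in range(n)]

frame = [0.0] * 32
frame[10] = 1.0   # primary star
frame[17] = 0.6   # companion, 7 pixels away
ac = autocorr(frame)
lags = sorted(range(1, 16), key=lambda k: ac[k], reverse=True)
# lags[0] is the separation in pixels (rho, once the plate scale is applied)
```

    In 2-D the secondary peak's position gives both rho and theta, with the usual 180-degree quadrant ambiguity of autocorrelation methods.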

  17. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction.

    PubMed

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical property of natural images, so its detection performance is not satisfying. PCA has been extended into kernel PCA in order to capture the higher-order statistics. However, thus far there have been no researchers who have definitely proposed kernel FKT (KFKT) and researched its detection performance. For accurately detecting potential small targets from infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Results of experiments show that KFKT outperforms FKT and the proposed framework is competent to automatically detect and track infrared point targets.
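
    The Kalman-prediction stage of the framework can be sketched with a constant-velocity filter (1-D for brevity): the filter predicts where the small target will appear next, so the KFKT detector only needs to search a small gate around the prediction. The noise settings and measurements below are illustrative, not from the paper.

```python
# Minimal 1-D constant-velocity Kalman filter (state = [position, velocity],
# measurement = position). Hand-rolled 2x2 algebra; H = [1, 0].

class Kalman1D:
    def __init__(self, pos, vel=0.0, q=0.01, r=0.5):
        self.x = [pos, vel]                # state estimate
        self.p = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q, self.r = q, r              # process / measurement noise

    def predict(self, dt=1.0):
        x, v = self.x
        self.x = [x + v * dt, v]           # constant-velocity motion model
        p = self.p                         # P <- F P F^T + Q
        self.p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + self.q,
                   p[0][1] + dt * p[1][1]],
                  [p[1][0] + dt * p[1][1], p[1][1] + self.q]]
        return self.x[0]                   # predicted target position

    def update(self, z):
        s = self.p[0][0] + self.r          # innovation covariance
        k0, k1 = self.p[0][0] / s, self.p[1][0] / s  # Kalman gain
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        p = self.p                         # P <- (I - K H) P
        self.p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
                  [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]

kf = Kalman1D(pos=0.0)
for z in [1.0, 2.1, 2.9, 4.0]:   # noisy detections of a target moving +1/frame
    kf.predict()
    kf.update(z)
predicted_next = kf.predict()     # gate center for the next detection
```

    Detections from the KFKT stage play the role of `z`; frames where detection fails can simply skip `update`, coasting on the prediction.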

  18. Automatically detect and track infrared small targets with kernel Fukunaga-Koontz transform and Kalman prediction

    NASA Astrophysics Data System (ADS)

    Liu, Ruiming; Liu, Erqi; Yang, Jie; Zeng, Yong; Wang, Fanglin; Cao, Yuan

    2007-11-01

    Fukunaga-Koontz transform (FKT), stemming from principal component analysis (PCA), is used in many pattern recognition and image-processing fields. It cannot capture the higher-order statistical property of natural images, so its detection performance is not satisfying. PCA has been extended into kernel PCA in order to capture the higher-order statistics. However, thus far there have been no researchers who have definitely proposed kernel FKT (KFKT) and researched its detection performance. For accurately detecting potential small targets from infrared images, we first extend FKT into KFKT to capture the higher-order statistical properties of images. Then a framework based on Kalman prediction and KFKT, which can automatically detect and track small targets, is developed. Results of experiments show that KFKT outperforms FKT and the proposed framework is competent to automatically detect and track infrared point targets.

  19. Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network

    NASA Astrophysics Data System (ADS)

    Ong, Jia Jan; Ang, L.-M.; Seng, K. P.

    This paper presents a practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A WVSN consists of visual nodes that capture video and transmit it to the base station without processing. Limited network bandwidth restrains the implementation of real-time video streaming from remote visual nodes over wireless communication. Three levels of DWT filters are implemented to process the image captured by the camera. With all the wavelet coefficients produced, it is possible to transmit only the low-frequency-band coefficients and obtain an approximate image at the base station, which reduces the amount of power required for transmission. When necessary, transmitting all the wavelet coefficients reproduces the full detail of the image, similar to the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network that uses the Ember EM250 chip.
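
    One level of a lifting-scheme DWT is small enough to show in full. This sketch uses the Haar wavelet via lifting (predict step, then update step) rather than the paper's specific filters, and it illustrates why a node can transmit only the low band: the `low` list alone is a half-resolution approximation, while keeping `detail` as well gives perfect reconstruction. The signal is hypothetical.

```python
# One lifting-scheme DWT level (Haar): split into even/odd samples,
# predict the odd from the even (detail), then update the even (low band).

def haar_lift(signal):
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]       # predict step
    low = [e + d / 2 for e, d in zip(even, detail)]   # update step -> pair averages
    return low, detail

def haar_unlift(low, detail):
    even = [l - d / 2 for l, d in zip(low, detail)]   # undo update
    odd = [e + d for e, d in zip(even, detail)]       # undo predict
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

sig = [8, 10, 6, 2, 4, 4, 12, 14]
low, detail = haar_lift(sig)
assert haar_unlift(low, detail) == sig   # perfect reconstruction
# 'low' holds the pairwise averages the base station would display when
# only the low band is transmitted.
```

    Cascading `haar_lift` on `low` two more times mirrors the three filter levels implemented on the visual node.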

  20. A smartphone application for psoriasis segmentation and classification (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Vasefi, Fartash; MacKinnon, Nicholas B.; Horita, Timothy; Shi, Kevin; Khan Munia, Tamanna Tabassum; Tavakolian, Kouhyar; Alhashim, Minhal; Fazel-Rezai, Reza

    2017-02-01

    Psoriasis is a chronic skin disease affecting approximately 125 million people worldwide. Currently, dermatologists monitor changes in psoriasis by clinical evaluation or by measuring psoriasis severity scores over time, which leads to subjective management of the condition. The goal of this paper is to develop a reliable assessment system that quantitatively assesses changes in the erythema and the intensity of scaling of psoriatic lesions. A smartphone-deployable mobile application is presented that uses the smartphone camera and cloud-based image processing to analyze the physiological characteristics of psoriasis lesions and identify the type and stage of the scaling and erythema. The application aims to automatically evaluate the Psoriasis Area Severity Index (PASI) by measuring the severity and extent of psoriasis. The mobile application performs the following core functions: 1) it captures text information from user input to create a profile in a HIPAA-compliant database; 2) it captures an image of the skin with psoriasis, along with image-related information entered by the user; 3) it color-corrects the image for the ambient lighting conditions using a calibration procedure based on capturing a Macbeth ColorChecker image; 4) the color-corrected image is transmitted to a cloud-based engine for image processing. In the cloud, the algorithm first removes the non-skin background to ensure that psoriasis segmentation is applied only to skin regions; the segmentation algorithm then estimates the erythema and scaling boundary regions of the lesion. We analyzed 10 psoriasis images captured by cellphone, determined the PASI score for each subject during our pilot study, and correlated it with changes in the severity scores given by dermatologists. The success of this work enables a smartphone application for psoriasis severity assessment during long-term treatment.
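
    The PASI score the app targets follows a standard formula independent of this particular application: each body region gets erythema/induration/desquamation scores (0-4 each) and an area score (0-6), and regional subtotals are weighted head 0.1, upper limbs 0.2, trunk 0.3, lower limbs 0.4. The scores below are hypothetical.

```python
# Standard PASI computation: per-region (E + I + D) * area, weighted by
# the fraction of body surface each region represents.

REGION_WEIGHTS = {"head": 0.1, "upper": 0.2, "trunk": 0.3, "lower": 0.4}

def pasi(scores):
    """scores: region -> (erythema, induration, desquamation, area),
    with severity scores 0-4 and area score 0-6."""
    total = 0.0
    for region, (e, i, d, area) in scores.items():
        total += REGION_WEIGHTS[region] * (e + i + d) * area
    return round(total, 1)

example = {
    "head":  (2, 1, 2, 3),
    "upper": (2, 2, 2, 2),
    "trunk": (3, 2, 2, 4),
    "lower": (1, 1, 1, 2),
}
print(pasi(example))   # PASI ranges from 0 to 72
```

    In the app's pipeline, the segmentation output would supply the area scores and the color/texture analysis the severity scores per region.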

  1. A universal colorimetry for nucleic acids and aptamer-specific ligands detection based on DNA hybridization amplification.

    PubMed

    Li, Shuang; Shang, Xinxin; Liu, Jia; Wang, Yujie; Guo, Yingshu; You, Jinmao

    2017-07-01

    We present a universal amplified colorimetric assay for detecting nucleic acid targets or aptamer-specific ligand targets based on gold nanoparticle-DNA (GNP-DNA) hybridization chain reaction (HCR). The universal arrays consisted of a capture probe and hairpin DNA-GNPs. First, the capture probe recognized the target specifically and released the initiator sequence. Dispersed hairpin DNA-modified GNPs were then cross-linked to form aggregates through HCR events triggered by the initiator sequence. As the aggregates accumulate, a significant red-to-purple color change can be easily visualized by the naked eye. We used a miRNA target sequence (miRNA-203) and an aptamer-specific ligand (ATP) as target molecules in this proof-of-concept experiment. The initiator sequence (DNA2) was released from the capture probe (MNP/DNA1/2 conjugates) under the strong competitiveness of miRNA-203. Hairpin DNAs (H1 and H2) can hybridize with the help of initiator DNA2 to form GNP-H1/GNP-H2 aggregates. The absorption ratio (A620/A520) of the solutions was a sensitive function of miRNA-203 concentration from 1.0 × 10(-11) M to 9.0 × 10(-10) M, and concentrations as low as 1.0 × 10(-11) M could be detected. At the same time, the color of the solution changed from light wine red to purple and then to light blue. For ATP, the initiator sequence (5'-end of DNA3) was released from the capture probe (DNA3) under the strong binding of the aptamer to ATP. The present colorimetric assay for specific detection of ATP exhibited good sensitivity, and 1.0 × 10(-8) M ATP could be detected. The proposed strategy also showed good performance for qualitative and quantitative analysis of intracellular nucleic acids and aptamer-specific ligands. Copyright © 2017 Elsevier Inc. All rights reserved.
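
    Turning a colorimetric readout like A620/A520 into a concentration estimate is a standard calibration exercise: fit a line through standards, then invert it for an unknown. The calibration points below are hypothetical values placed within the reported 10(-11) to 10(-10) M range, not the paper's data.

```python
# Ordinary least-squares line through calibration standards, then inversion
# of the fitted line for an unknown sample's absorbance ratio.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [1e-11, 3e-11, 9e-11, 3e-10, 9e-10]   # miRNA-203 standards (M), hypothetical
ratio = [0.21, 0.25, 0.37, 0.79, 1.99]       # hypothetical A620/A520 readings
slope, intercept = fit_line(conc, ratio)
unknown = (0.50 - intercept) / slope          # concentration for a measured ratio of 0.50
```

    Real assay curves typically flatten at high aggregation, so the linear fit would be restricted to the dynamic range before inversion.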

  2. Orthogonal-blendshape-based editing system for facial motion capture data.

    PubMed

    Li, Qing; Deng, Zhigang

    2008-01-01

    The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.

  3. Live Cell Imaging and 3D Analysis of Angiotensin Receptor Type 1a Trafficking in Transfected Human Embryonic Kidney Cells Using Confocal Microscopy.

    PubMed

    Kadam, Parnika; McAllister, Ryan; Urbach, Jeffrey S; Sandberg, Kathryn; Mueller, Susette C

    2017-03-27

    Live-cell imaging is used to simultaneously capture time-lapse images of angiotensin type 1a receptors (AT1aR) and intracellular compartments in transfected human embryonic kidney-293 (HEK) cells following stimulation with angiotensin II (Ang II). HEK cells are transiently transfected with plasmid DNA containing AT1aR tagged with enhanced green fluorescent protein (EGFP). Lysosomes are identified with a red fluorescent dye. Live-cell images are captured on a laser scanning confocal microscope after Ang II stimulation and analyzed by software in three dimensions (3D, voxels) over time. Live-cell imaging enables investigations into receptor trafficking and avoids confounds associated with fixation, and in particular, the loss or artefactual displacement of EGFP-tagged membrane receptors. Thus, as individual cells are tracked through time, the subcellular localization of receptors can be imaged and measured. Images must be acquired sufficiently rapidly to capture rapid vesicle movement. Yet, at faster imaging speeds, the number of photons collected is reduced. Compromises must also be made in the selection of imaging parameters like voxel size in order to gain imaging speed. Significant applications of live-cell imaging are to study protein trafficking, migration, proliferation, cell cycle, apoptosis, autophagy and protein-protein interaction and dynamics, to name but a few.

  4. Non-Cartesian Balanced SSFP Pulse Sequences for Real-Time Cardiac MRI

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2015-01-01

    Purpose To develop a new spiral-in/out balanced steady-state free precession (bSSFP) pulse sequence for real-time cardiac MRI and compare it with radial and spiral-out techniques. Methods Non-Cartesian sampling strategies are efficient and robust to motion and thus have important advantages for real-time bSSFP cine imaging. This study describes a new symmetric spiral-in/out sequence with intrinsic gradient moment compensation and SSFP refocusing at TE=TR/2. In-vivo real-time cardiac imaging studies were performed to compare radial, spiral-out, and spiral-in/out bSSFP pulse sequences. Furthermore, phase-based fat-water separation taking advantage of the refocusing mechanism of the spiral-in/out bSSFP sequence was also studied. Results The image quality of the spiral-out and spiral-in/out bSSFP sequences was improved with off-resonance and k-space trajectory correction. The spiral-in/out bSSFP sequence had the highest SNR, CNR, and image quality ratings, with spiral-out bSSFP sequence second in each category and the radial bSSFP sequence third. The spiral-in/out bSSFP sequence provides separated fat and water images with no additional scan time. Conclusions In this work a new spiral-in/out bSSFP sequence was developed and tested. The superiority of spiral bSSFP sequences over the radial bSSFP sequence in terms of SNR and reduced artifacts was demonstrated in real-time MRI of cardiac function without image acceleration. PMID:25960254

  5. Robust temporal alignment of multimodal cardiac sequences

    NASA Astrophysics Data System (ADS)

    Perissinotto, Andrea; Queirós, Sandro; Morais, Pedro; Baptista, Maria J.; Monaghan, Mark; Rodrigues, Nuno F.; D'hooge, Jan; Vilaça, João. L.; Barbosa, Daniel

    2015-03-01

    Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal is a surrogate for the left-ventricle (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6 +/- 1.9% and 4.0 +/- 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events in the MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
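
    The DTW step is the classic O(n*m) dynamic program. This sketch assumes the surrogate signals have already been extracted (e.g., the normalized cross-correlation of each frame with the end-diastolic frame) and uses hypothetical one-cycle signals; the duplicated samples in the US signal model its higher frame rate, which DTW absorbs at no cost.

```python
# Dynamic Time Warping: minimal-cost monotone alignment of two 1-D signals.

def dtw(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local mismatch
            cost[i][j] = d + min(cost[i - 1][j],   # insertion
                                 cost[i][j - 1],   # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

mri = [1.0, 0.8, 0.5, 0.3, 0.5, 0.8, 1.0]               # one cardiac cycle
us = [1.0, 0.8, 0.8, 0.5, 0.3, 0.3, 0.5, 0.8, 1.0]      # same cycle, more frames
assert dtw(mri, us) == 0.0   # DTW stretches the US signal to match at zero cost
```

    Backtracking through the `cost` table (not shown) yields the frame-to-frame correspondence used to synchronize the two sequences.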

  6. Our experiences with development of digitised video streams and their use in animal-free medical education.

    PubMed

    Cervinka, Miroslav; Cervinková, Zuzana; Novák, Jan; Spicák, Jan; Rudolf, Emil; Peychl, Jan

    2004-06-01

    Alternatives and their teaching are an essential part of the curricula at the Faculty of Medicine. Dynamic screen-based video recordings are the most important type of alternative models employed for teaching purposes. Currently, the majority of teaching materials for this purpose are based on PowerPoint presentations, which are very popular because of their high versatility and visual impact. Furthermore, current developments in the field of image capturing devices and software enable the use of digitised video streams, tailored precisely to the specific situation. Here, we demonstrate that with reasonable financial resources, it is possible to prepare video sequences and to introduce them into the PowerPoint presentation, thereby shaping the teaching process according to individual students' needs and specificities.

  7. Characterization of an 18-kilodalton Brucella cytoplasmic protein which appears to be a serological marker of active infection of both human and bovine brucellosis.

    PubMed Central

    Goldbaum, F A; Leoni, J; Wallach, J C; Fossati, C A

    1993-01-01

    Some anticytoplasmic protein monoclonal antibodies (MAbs) from mice immunized by infection with Brucella ovis cells have been obtained. One of these MAbs, BI24, was used to purify by immunoaffinity a protein with a pI of 5.6 and a molecular mass of 18 kDa. This protein was present in all of the rough and smooth Brucella species studied, but it could not be detected in Yersinia enterocolitica 09. Three internal peptides of this protein were partially sequenced; no homology with other bacterial proteins was found. The immunogenicity of the 18-kDa protein was studied with both human and bovine sera by a capture enzyme-linked immunosorbent assay system with MAb BI24. PMID:8370742

  8. Microsatellite DNA capture from enriched libraries.

    PubMed

    Gonzalez, Elena G; Zardoya, Rafael

    2013-01-01

    Microsatellites are DNA sequences of tandem repeats of one to six nucleotides, which are highly polymorphic, and thus the molecular markers of choice in many kinship, population genetic, and conservation studies. There have been significant technical improvements since the early methods for microsatellite isolation were developed, and today the most common procedures take advantage of the hybrid capture methods of enriched-targeted microsatellite DNA. Furthermore, recent advents in sequencing technologies (i.e., next-generation sequencing, NGS) have fostered the mining of microsatellite markers in non-model organisms, affording a cost-effective way of obtaining a large amount of sequence data potentially useful for loci characterization. The rapid improvements of NGS platforms together with the increase in available microsatellite information open new avenues to the understanding of the evolutionary forces that shape genetic structuring in wild populations. Here, we provide detailed methodological procedures for microsatellite isolation based on the screening of GT microsatellite-enriched libraries, either by cloning and Sanger sequencing of positive clones or by direct NGS. Guides for designing new species-specific primers and basic genotyping are also given.
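
    Mining GT-type microsatellites from sequence reads reduces, in its simplest form, to a repeat screen. The sketch below is a bare regex scan for dinucleotide runs; real pipelines add read-quality and flanking-sequence filters for primer design. The sequence is made up.

```python
# Regex screen for tandem dinucleotide repeats: report (start, repeat_count)
# for every run of the motif at least min_repeats long.
import re

def find_microsatellites(seq, motif="GT", min_repeats=6):
    pattern = re.compile(f"(?:{motif}){{{min_repeats},}}")
    return [(m.start(), len(m.group()) // len(motif))
            for m in pattern.finditer(seq)]

seq = "ACCT" + "GT" * 9 + "AGGCTTA" + "GT" * 3 + "CCA" + "GT" * 12 + "TT"
hits = find_microsatellites(seq)   # the 3-repeat run is below threshold
```

    The same scan run over enriched-library reads, with the motif varied over the repeat classes of interest, gives candidate loci whose flanking regions are then used to design species-specific primers.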

  9. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    IR cameras are widely used in opto-electronic tracking, opto-electronic measurement, fire control, and opto-electronic countermeasure applications, but the output timing of most IR cameras applied in practice is complex, and the timing documentation supplied by the manufacturer is often insufficiently detailed. Since continuous image transmission and image processing systems require detailed timing information, a sequence measurement system for IR cameras was designed and a detailed measurement procedure for the applied IR camera was carried out. The system combines FPGA programming with online observation using the SignalTap logic analyzer to capture the precise timing of the IR camera's output signal and to supply detailed timing documentation to the downstream image transmission and image processing systems. The sequence measurement system consists of a Camera Link input interface, an LVDS input interface, the FPGA, and a Camera Link output interface, with the FPGA as the key component. Video signals in both Camera Link and LVDS formats are accepted; because image processing and image memory cards commonly use Camera Link inputs, the system's output interface is also Camera Link, so the system performs interface conversion for some cameras in addition to timing measurement. Inside the FPGA, the timing measurement logic, pixel clock adjustment, SignalTap file configuration, and SignalTap online observation are integrated to realize precise measurement. The measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines per frame and the number of pixels per line, and determines the line offset and row offset of the image. The system accurately measured the timing of the camera used in the project, supplied detailed timing documents to downstream image processing and image transmission systems, and reported concrete parameters for fval, lval, pixclk, line offset, and row offset. Experiments show that the sequence measurement system obtains precise results and works stably, laying a foundation for the downstream systems.
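
    The core of the FPGA logic, counting pixels per line and lines per frame from the frame-valid (fval) and line-valid (lval) signals sampled on the pixel clock, can be sketched in software. The Python below is an illustrative stand-in for the Verilog counters described above, using hypothetical sample streams:

```python
def measure_timing(fval, lval):
    """Count lines per frame and active pixels per line from per-pixel-clock
    samples of the frame-valid and line-valid signals (Camera Link naming).
    A line is closed on the falling edge of lval while fval is high."""
    lines_per_frame = 0
    pixels_per_line = []
    count = 0
    prev_lval = 0
    for f, l in zip(fval, lval):
        if not f:                      # outside the frame: reset edge state
            prev_lval = 0
            continue
        if l:
            count += 1                 # active pixel
        if prev_lval and not l:        # falling edge of lval: line complete
            lines_per_frame += 1
            pixels_per_line.append(count)
            count = 0
        prev_lval = l
    return lines_per_frame, pixels_per_line

# Three lines of four active pixels, two clocks of line blanking each:
lval = [1, 1, 1, 1, 0, 0] * 3
fval = [1] * len(lval)
result = measure_timing(fval, lval)
```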

  10. Quantitative metrics for assessment of chemical image quality and spatial resolution

    DOE PAGES

    Kertesz, Vilmos; Cahill, John F.; Van Berkel, Gary J.

    2016-02-28

    Rationale: Currently objective/quantitative descriptions of the quality and spatial resolution of mass spectrometry derived chemical images are not standardized. Development of these standardized metrics is required to objectively describe chemical imaging capabilities of existing and/or new mass spectrometry imaging technologies. Such metrics would allow unbiased judgment of intra-laboratory advancement and/or inter-laboratory comparison for these technologies if used together with standardized surfaces. Methods: We developed two image metrics, viz., chemical image contrast (ChemIC) based on signal-to-noise related statistical measures on chemical image pixels and corrected resolving power factor (cRPF) constructed from statistical analysis of mass-to-charge chronograms across features of interest in an image. These metrics, quantifying chemical image quality and spatial resolution, respectively, were used to evaluate chemical images of a model photoresist patterned surface collected using a laser ablation/liquid vortex capture mass spectrometry imaging system under different instrument operational parameters. Results: The calculated ChemIC and cRPF metrics determined in an unbiased fashion the relative ranking of chemical image quality obtained with the laser ablation/liquid vortex capture mass spectrometry imaging system. These rankings were used to show that both chemical image contrast and spatial resolution deteriorated with increasing surface scan speed, increased lane spacing and decreasing size of surface features. Conclusions: ChemIC and cRPF, respectively, were developed and successfully applied for the objective description of chemical image quality and spatial resolution of chemical images collected from model surfaces using a laser ablation/liquid vortex capture mass spectrometry imaging system.
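
    The abstract does not reproduce the ChemIC formula itself, but the underlying idea, contrast as the separation of on-feature and background pixel intensities scaled by their noise, can be sketched as follows. This is an illustrative stand-in, not the published definition:

```python
import numpy as np

def contrast_metric(image, feature_mask):
    """Signal-to-noise style contrast: difference of on-feature and
    background mean intensities divided by their pooled standard deviation.
    (Illustrative only; the published ChemIC definition may differ.)"""
    sig = image[feature_mask]
    bkg = image[~feature_mask]
    pooled = np.sqrt((sig.var(ddof=1) + bkg.var(ddof=1)) / 2.0)
    return float((sig.mean() - bkg.mean()) / pooled)

img = np.array([[10.0, 12.0], [0.0, 2.0]])
mask = np.array([[True, True], [False, False]])
m = contrast_metric(img, mask)
```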

  12. The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences

    NASA Astrophysics Data System (ADS)

    Schwalbe, Ellen; Maas, Hans-Gerd

    2017-12-01

    This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
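
    The subpixel matching step can be illustrated in one dimension: locate the cross-correlation peak on the pixel grid, then refine it by fitting a parabola through the peak and its two neighbours. This NumPy sketch is a simplified stand-in for the 2-D grey-value patch matching described above:

```python
import numpy as np

def subpixel_shift_1d(a, b):
    """Estimate the shift of signal b relative to a: integer lag from the
    cross-correlation maximum, plus a parabolic sub-sample refinement.
    (1-D sketch; production glacier-matching codes work on 2-D patches.)"""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.correlate(b, a, mode="full")
    k = int(np.argmax(corr))
    delta = 0.0
    if 0 < k < len(corr) - 1:          # parabola through peak and neighbours
        y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
        denom = y0 - 2.0 * y1 + y2
        if denom != 0.0:
            delta = 0.5 * (y0 - y2) / denom
    return (k - (len(a) - 1)) + delta

x = np.arange(64, dtype=float)
a = np.exp(-((x - 31.5) ** 2) / 8.0)
b = np.exp(-((x - 32.5) ** 2) / 8.0)   # the same peak, shifted right by one sample
est = subpixel_shift_1d(a, b)
```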

  13. A Fixed-Pattern Noise Correction Method Based on Gray Value Compensation for TDI CMOS Image Sensor.

    PubMed

    Liu, Zhenwang; Xu, Jiangtao; Wang, Xinlei; Nie, Kaiming; Jin, Weimin

    2015-09-16

    In order to eliminate the fixed-pattern noise (FPN) in the output image of a time-delay-integration CMOS image sensor (TDI-CIS), an FPN correction method based on gray value compensation is proposed. One hundred images are first captured under uniform illumination. Then, row FPN (RFPN) and column FPN (CFPN) are estimated based on the row-mean vector and column-mean vector of all collected images, respectively. Finally, RFPN is corrected by adding the estimated RFPN gray value to the original gray values of pixels in the corresponding row, and CFPN is corrected by subtracting the estimated CFPN gray value from the original gray values of pixels in the corresponding column. Experimental results based on a 128-stage TDI-CIS show that, after correcting the FPN in the image captured under uniform illumination with the proposed method, the standard deviation of the row-mean vector decreases from 5.6798 to 0.4214 LSB, and the standard deviation of the column-mean vector decreases from 15.2080 to 13.4623 LSB. Both kinds of FPN in the real images captured by the TDI-CIS are eliminated effectively with the proposed method.
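
    The estimation and compensation steps above map directly onto a few NumPy operations. In this sketch both offsets are defined as deviations from the global mean and are subtracted; the paper applies the row term additively and the column term subtractively, which simply reflects how the offsets are signed:

```python
import numpy as np

def estimate_fpn(stack):
    """Estimate row and column FPN from a stack of flat-field frames:
    average the frames (suppressing temporal noise), then take row-mean
    and column-mean deviations from the global mean."""
    mean_frame = stack.mean(axis=0)
    g = mean_frame.mean()
    rfpn = mean_frame.mean(axis=1) - g   # one offset per row
    cfpn = mean_frame.mean(axis=0) - g   # one offset per column
    return rfpn, cfpn

def correct_fpn(frame, rfpn, cfpn):
    # Remove the per-row and per-column offsets from a captured frame.
    return frame - rfpn[:, None] - cfpn[None, :]

# Noise-free flat field with known zero-mean row/column offsets:
r = np.array([1.0, -1.0, 0.0])
c = np.array([2.0, 0.0, -2.0, 0.0])
frame = 100.0 + r[:, None] + c[None, :]
stack = np.stack([frame] * 100)          # the "one hundred images" of the paper
rfpn, cfpn = estimate_fpn(stack)
corrected = correct_fpn(frame, rfpn, cfpn)
```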

  14. A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.

    PubMed

    Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G

    2017-08-01

    Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and the angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  16. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as a strong candidate for 3D television because it avoids the adverse viewing effects associated with stereoscopic displays. To create compelling three-dimensional television programmes, a virtual studio is required that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Particular attention is given to depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD is proposed and verified, further improving that precision.
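
    The depth extraction mentioned above rests on the standard triangulation relation z = f·b/d, where in integral imaging the baseline b is the separation between the elemental images being matched, and the multiple-baseline method combines estimates from several such separations. A minimal sketch with hypothetical values:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Triangulated depth z = f * b / d (focal length in pixels, baseline
    in metres, disparity in pixels)."""
    return focal_length_px * baseline_m / disparity_px

# For a fixed scene depth, disparity grows in proportion to the baseline,
# so every elemental-image pair should agree -- the multiple-baseline idea:
estimates = [depth_from_disparity(500.0, k * 0.002, k * 5.0) for k in (1, 2, 3)]
```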

  17. The design of a microfluidic biochip for the rapid, multiplexed detection of foodborne pathogens by surface plasmon resonance imaging

    NASA Astrophysics Data System (ADS)

    Zordan, Michael D.; Grafton, Meggie M. G.; Park, Kinam; Leary, James F.

    2010-02-01

    The rapid detection of foodborne pathogens is increasingly important due to the rising occurrence of contaminated food supplies. We have previously demonstrated the design of a hybrid optical device capable of performing real-time surface plasmon resonance (SPR) and epi-fluorescence imaging. We now present the design of a microfluidic biochip consisting of a two-dimensional array of functionalized gold spots. The spots on the array have been functionalized with capture peptides that specifically bind E. coli O157:H7 or Salmonella enterica, and the array is enclosed by a PDMS microfluidic flow cell. A magnetically pre-concentrated sample is injected into the biochip, and whole pathogens bind to the capture array. The previously constructed optical device is used to detect the presence and identity of captured pathogens by SPR imaging. This detection is label-free and does not require the culture of bacterial samples. Molecular imaging can also be performed using the epi-fluorescence capabilities of the device to determine pathogen state, or to validate the identity of the captured pathogens using fluorescently labeled antibodies. We demonstrate the real-time screening of a sample for the presence of E. coli O157:H7 and Salmonella enterica. Additionally, the mechanical properties of the microfluidic flow cell will be assessed, and the effect of these properties on pathogen capture will be examined.

  18. [The Role of Imaging in Central Nervous System Infections].

    PubMed

    Yokota, Hajime; Tazoe, Jun; Yamada, Kei

    2015-07-01

    Many infections invade the central nervous system. Magnetic resonance imaging (MRI) is the main tool used to evaluate infectious lesions of the central nervous system. The useful MRI sequences depend on the location of the lesion: intra-axial, extra-axial, or spinal cord. For intra-axial lesions, besides the fundamental sequences, including T1-weighted images, T2-weighted images, and fluid-attenuated inversion recovery (FLAIR) images, advanced sequences such as diffusion-weighted imaging, diffusion tensor imaging, susceptibility-weighted imaging, and MR spectroscopy can be applied. These are occasionally decisive for a quick and correct diagnosis. For extra-axial lesions, understanding the differences among 2D conventional T1-weighted images, 2D fat-saturated T1-weighted images, 3D spin-echo sequences, and 3D gradient-echo sequences after the administration of gadolinium is required to avoid misinterpretation. FLAIR plus gadolinium is a useful tool for revealing abnormal enhancement on the brain surface. For the spinal cord, the available sequences are limited, and evaluating the distribution and time course of lesions is essential for correct diagnosis. We summarize the role of imaging in central nervous system infections and present the pitfalls, key points, and latest information relevant to clinical practice.

  19. Falling Away from Jupiter

    NASA Image and Video Library

    2018-02-07

    This image of Jupiter's southern hemisphere was captured by NASA's Juno spacecraft as it performed a close flyby of the gas giant planet on Dec. 16, 2017. Juno captured this color-enhanced image at 10:24 a.m. PST (1:24 p.m. EST) when the spacecraft was about 19,244 miles (30,970 kilometers) from the tops of Jupiter's clouds at a latitude of 49.9 degrees south -- roughly halfway between the planet's equator and its south pole. Citizen scientist Gerald Eichstädt processed this image using data from the JunoCam imager. https://photojournal.jpl.nasa.gov/catalog/PIA21977

  20. Ceftriaxone-associated pancreatitis captured on serial computed tomography scans.

    PubMed

    Nakagawa, Nozomu; Ochi, Nobuaki; Yamane, Hiromichi; Honda, Yoshihiro; Nagasaki, Yasunari; Urata, Noriyo; Nakanishi, Hidekazu; Kawamoto, Hirofumi; Takigawa, Nagio

    2018-02-01

    A 74-year-old man was treated with ceftriaxone for 5 days and subsequently experienced epigastric pain. Computed tomography (CT) had been performed 7 and 3 days before the onset of epigastralgia. Although the first CT image revealed no radiographic abnormality in his biliary system, the second CT image revealed dense radiopaque material in the gallbladder lumen. The third CT image, taken at symptom onset, showed high density in the common bile duct and enlargement of the pancreatic head. This is a very rare case of pseudolithiasis involving the common bile duct, captured on a series of CT images.
