Sample records for input image sequence

  1. Enhanced learning of natural visual sequences in newborn chicks.

    PubMed

    Wood, Justin N; Prasad, Aditya; Goldman, Jason G; Wood, Samantha M W

    2016-07-01

    To what extent are newborn brains designed to operate over natural visual input? To address this question, we used a high-throughput controlled-rearing method to examine whether newborn chicks (Gallus gallus) show enhanced learning of natural visual sequences at the onset of vision. We took the same set of images and grouped them into either natural sequences (i.e., sequences showing different viewpoints of the same real-world object) or unnatural sequences (i.e., sequences showing different images of different real-world objects). When raised in virtual worlds containing natural sequences, newborn chicks developed the ability to recognize familiar images of objects. Conversely, when raised in virtual worlds containing unnatural sequences, newborn chicks' object recognition abilities were severely impaired. In fact, the majority of the chicks raised with the unnatural sequences failed to recognize familiar images of objects despite acquiring over 100 h of visual experience with those images. Thus, newborn chicks show enhanced learning of natural visual sequences at the onset of vision. These results indicate that newborn brains are designed to operate over natural visual input.

  2. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    Currently, IR cameras are widely used in optoelectronic tracking, optoelectronic measurement, fire control, and optoelectronic countermeasure applications, but the output timing sequences of most IR cameras applied in engineering projects are complex, and the timing documents supplied by the factory are not detailed. To meet the requirement of the downstream image transmission and image processing systems for detailed timing information, a sequence measurement system for the IR camera was designed, and a detailed timing measurement procedure for the applied IR camera was carried out. FPGA programming combined with SignalTap online observation is used in the measurement system, the precise timing of the IR camera's output signal has been obtained, and the detailed timing documentation has been supplied to the image transmission system, image processing system, etc. The sequence measurement system consists of a CameraLink input interface, an LVDS input interface, an FPGA, a CameraLink output interface, and so on, of which the FPGA is the key component. Both CameraLink-style and LVDS-style video signals can be accepted by the system, and because image processing and image memory cards generally use CameraLink as their input interface, the output of the measurement system was also designed as a CameraLink interface. The system thus performs the IR camera's timing measurement and, for some cameras, interface conversion as well. Inside the FPGA, the sequence measurement program, the pixel clock modification, the SignalTap file configuration, and the SignalTap online observation are integrated to realize precise measurement of the IR camera. The sequence measurement program, written in Verilog and combined with online observation by the SignalTap tool, counts the number of lines in one frame and the number of pixels in one line, and also measures the line offset and row offset of the image. For the complex timing of the IR camera's output signal, the system accurately measures the timing of the project camera, supplies detailed timing documents to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters of fval, lval, pixclk, line offset, and row offset. Experiments show that the sequence measurement system obtains precise timing measurement results and works stably, laying a foundation for the downstream systems.

  3. Object tracking using plenoptic image sequences

    NASA Astrophysics Data System (ADS)

    Kim, Jae Woo; Bae, Seong-Joon; Park, Seongjin; Kim, Do Hyung

    2017-05-01

    Object tracking is a very important problem in computer vision research. Among the difficulties of object tracking, partial occlusion is one of the most serious and challenging problems. To address it, we proposed novel approaches to object tracking on plenoptic image sequences. Our approaches take advantage of the refocusing capability that plenoptic images provide: they take as input the sequences of focal stacks constructed from plenoptic image sequences. The proposed image selection algorithms select, from the sequence of focal stacks, the sequence of optimal images that maximizes tracking accuracy. A focus measure approach and a confidence measure approach were proposed for image selection, and both were validated in experiments using thirteen plenoptic image sequences that include heavily occluded target objects. The experimental results showed that the proposed approaches performed satisfactorily compared to conventional 2D object tracking algorithms.
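The focus-measure image selection described above can be illustrated with a minimal sketch. This is not the authors' algorithm: the variance-of-Laplacian sharpness score, the function names, and the bounding-box interface are assumptions for illustration only.

```python
import numpy as np

def focus_measure(patch):
    """Sharpness score: variance of a discrete Laplacian over the patch."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return lap.var()

def select_from_focal_stack(stack, bbox):
    """Return the index of the stack image whose target region is sharpest."""
    x0, y0, x1, y1 = bbox
    scores = [focus_measure(img[y0:y1, x0:x1]) for img in stack]
    return int(np.argmax(scores))
```

A tracker would then run on the selected image of each focal stack rather than on a single fixed focal plane.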

  4. Spatio-temporal alignment of pedobarographic image sequences.

    PubMed

    Oliveira, Francisco P M; Sousa, Andreia; Santos, Rubim; Tavares, João Manuel R S

    2011-07-01

    This article presents a methodology to align plantar pressure image sequences simultaneously in time and space. The spatial position and orientation of a foot in a sequence are changed to match the foot represented in a second sequence. Simultaneously with the spatial alignment, the temporal scale of the first sequence is transformed with the aim of synchronizing the two input footsteps. Consequently, the spatial correspondence of the foot regions along the sequences as well as the temporal synchronizing is automatically attained, making the study easier and more straightforward. In terms of spatial alignment, the methodology can use one of four possible geometric transformation models: rigid, similarity, affine, or projective. In the temporal alignment, a polynomial transformation up to the 4th degree can be adopted in order to model linear and curved time behaviors. Suitable geometric and temporal transformations are found by minimizing the mean squared error (MSE) between the input sequences. The methodology was tested on a set of real image sequences acquired from a common pedobarographic device. When used in experimental cases generated by applying geometric and temporal control transformations, the methodology revealed high accuracy. In addition, the intra-subject alignment tests from real plantar pressure image sequences showed that the curved temporal models produced better MSE results (P < 0.001) than the linear temporal model. This article represents an important step forward in the alignment of pedobarographic image data, since previous methods can only be applied on static images.
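The temporal half of this alignment can be sketched as follows. The paper fits polynomial temporal transformations up to the 4th degree with a dedicated optimizer; this illustrative snippet restricts itself to a linear warp t' = a·t + b found by grid search over the MSE, and all names are hypothetical.

```python
import numpy as np

def temporal_mse(ref, mov, a, b):
    """MSE between ref(t) and mov warped by the linear model t' = a*t + b."""
    t = np.arange(len(ref))
    warped = np.interp(a * t + b, np.arange(len(mov)), mov,
                       left=np.nan, right=np.nan)   # NaN outside the overlap
    valid = ~np.isnan(warped)
    return np.mean((ref[valid] - warped[valid]) ** 2)

def align_time(ref, mov, scales, shifts):
    """Grid-search the linear temporal parameters that minimize the MSE."""
    best = min((temporal_mse(ref, mov, a, b), a, b)
               for a in scales for b in shifts)
    return best[1], best[2]
```

In the paper the spatial transformation is estimated simultaneously; here the two sequences are assumed already spatially registered.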

  5. Automatic Detection of Clouds and Shadows Using High Resolution Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Champion, Nicolas

    2016-06-01

    Detecting clouds and their shadows is one of the primary steps to perform when processing satellite images, because they may alter the quality of some products such as large-area orthomosaics. The main goal of this paper is to present the automatic method developed at IGN-France for detecting clouds and shadows in a sequence of satellite images. In our work, surface reflectance ortho-images are used; they were processed from the initial satellite images using dedicated software. The cloud detection step consists of a region-growing algorithm. Seeds are firstly extracted: for each input ortho-image to process, we select the other ortho-images of the sequence that intersect it, and the pixels of the input ortho-image are labelled seeds if the difference of reflectance (in the blue channel) with the overlapping ortho-images is greater than a given threshold. Clouds are eventually delineated using a region-growing method based on a radiometric and homogeneity criterion. Regarding shadow detection, our method is based on the idea that a shadow pixel is darker than the corresponding pixels in the other images of the time series. The detection is composed of three steps. Firstly, we compute a synthetic ortho-image covering the whole study area; each of its pixels takes the median value of all input reflectance ortho-images intersecting at that pixel location. Secondly, for each input ortho-image, a pixel is labelled shadow if its difference of reflectance (in the NIR channel) with the synthetic ortho-image is below a given threshold. Eventually, an optional region-growing step may be used to refine the results. Note that pixels labelled cloud during the cloud detection are not used when computing the median value in the first step; additionally, the NIR channel is used to perform the shadow detection because it appeared to better discriminate shadow pixels. The method was tested on time series of Landsat 8 and Pléiades-HR images, and our first experiments show the feasibility of automating the detection of shadows and clouds in satellite image sequences.
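The median-based shadow step lends itself to a short sketch. The snippet assumes co-registered NIR ortho-images stacked as arrays; the function name and interface are hypothetical.

```python
import numpy as np

def shadow_mask(nir_stack, idx, threshold, cloud_masks=None):
    """Label pixels of ortho-image `idx` as shadow when their NIR value falls
    below the per-pixel median of the time series by more than `threshold`."""
    stack = np.asarray(nir_stack, dtype=float)
    work = stack.copy()
    if cloud_masks is not None:
        work[np.asarray(cloud_masks, dtype=bool)] = np.nan  # exclude cloud pixels
    synthetic = np.nanmedian(work, axis=0)  # the median synthetic ortho-image
    return (stack[idx] - synthetic) < -threshold
```

The optional region-growing refinement of the paper is omitted; this only reproduces the per-pixel thresholding against the synthetic image.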

  6. Primary Visual Cortex Represents the Difference Between Past and Present

    PubMed Central

    Nortmann, Nora; Rekauzke, Sascha; Onat, Selim; König, Peter; Jancke, Dirk

    2015-01-01

    The visual system is confronted with rapidly changing stimuli in everyday life. It is not well understood how information in such a stream of input is updated within the brain. We performed voltage-sensitive dye imaging across the primary visual cortex (V1) to capture responses to sequences of natural scene contours. We presented vertically and horizontally filtered natural images, and their superpositions, at 10 or 33 Hz. At the low frequency, the encoding was found to represent not the currently presented images, but differences in orientation between consecutive images. This was in sharp contrast to more rapid sequences, for which we found an ongoing representation of the current input, consistent with earlier studies. Our finding that, for slower image sequences, V1 no longer reports actual features but represents their relative difference in time counteracts the view that the first cortical processing stage must always transfer complete information. Instead, we show its capacity for change detection, with a new emphasis on the role of automatic computation evolving in the 100-ms range, inevitably affecting information transmission further downstream. PMID:24343889

  7. Novel techniques for data decomposition and load balancing for parallel processing of vision systems: Implementation and evaluation using a motion estimation system

    NASA Technical Reports Server (NTRS)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Computer vision systems employ a sequence of vision algorithms in which the output of an algorithm is the input of the next algorithm in the sequence. Algorithms that constitute such systems exhibit vastly different computational characteristics, and therefore, require different data decomposition techniques and efficient load balancing techniques for parallel implementation. However, since the input data for a task is produced as the output data of the previous task, this information can be exploited to perform knowledge based data decomposition and load balancing. Presented here are algorithms for a motion estimation system. The motion estimation is based on the point correspondence between the involved images which are a sequence of stereo image pairs. Researchers propose algorithms to obtain point correspondences by matching feature points among stereo image pairs at any two consecutive time instants. Furthermore, the proposed algorithms employ non-iterative procedures, which results in saving considerable amounts of computation time. The system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from consecutive time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters.
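The stereo-match and time-match steps both reduce to pairing feature points between two images. A generic, non-iterative mutual nearest-neighbour pairing, not the authors' specific matcher, might look like this (names and the mutual-consistency rule are illustrative assumptions):

```python
import numpy as np

def match_points(feats_a, feats_b, max_dist):
    """Pair each feature in A with its nearest neighbour in B, accepting a
    match only when it is mutual and closer than `max_dist`."""
    d = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=2)
    a2b = d.argmin(axis=1)   # best partner in B for each A point
    b2a = d.argmin(axis=0)   # best partner in A for each B point
    return [(i, int(j)) for i, j in enumerate(a2b)
            if b2a[j] == i and d[i, j] <= max_dist]
```

Being a single pass over precomputed distances, such a matcher is non-iterative in the same spirit as the procedures the abstract credits with saving computation time.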

  8. Rotation invariant features for wear particle classification

    NASA Astrophysics Data System (ADS)

    Arof, Hamzah; Deravi, Farzin

    1997-09-01

    This paper investigates the ability of a set of rotation invariant features to classify images of wear particles found in used lubricating oil of machinery. The rotation invariant attribute of the features is derived from the property of the magnitudes of Fourier transform coefficients that do not change with spatial shift of the input elements. By analyzing individual circular neighborhoods centered at every pixel in an image, local and global texture characteristics of an image can be described. A number of input sequences are formed by the intensities of pixels on concentric rings of various radii measured from the center of each neighborhood. Fourier transforming the sequences would generate coefficients whose magnitudes are invariant to rotation. Rotation invariant features extracted from these coefficients were utilized to classify wear particle images that were obtained from a number of different particles captured at different orientations. In an experiment involving images of 6 classes, the circular neighborhood features obtained a 91% recognition rate which compares favorably to a 76% rate achieved by features of a 6 by 6 co-occurrence matrix.
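The ring-sampling and Fourier-magnitude idea can be sketched directly. Nearest-neighbour sampling and the function names are assumptions; the invariance rests on the fact that rotating the neighbourhood circularly shifts each ring sequence, which leaves FFT magnitudes unchanged.

```python
import numpy as np

def ring_samples(img, cx, cy, radius, n=16):
    """Sample n pixel intensities (nearest neighbour) on a circle of the
    given radius centred at (cx, cy)."""
    ang = 2 * np.pi * np.arange(n) / n
    xs = np.clip(np.round(cx + radius * np.cos(ang)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + radius * np.sin(ang)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs]

def rotation_invariant_features(img, cx, cy, radii, n=16):
    """Concatenate |FFT| of each concentric ring sequence; a rotation of the
    neighbourhood circularly shifts the sequences, so the magnitudes are
    unchanged."""
    return np.concatenate([np.abs(np.fft.fft(ring_samples(img, cx, cy, r, n)))
                           for r in radii])
```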

  9. The processing of images of biological threats in visual short-term memory.

    PubMed

    Quinlan, Philip T; Yue, Yue; Cohen, Dale J

    2017-08-30

    The idea that there is enhanced memory for negative, emotionally charged pictures was examined. Performance was measured under rapid serial visual presentation (RSVP) conditions in which, on every trial, a sequence of six photo-images was presented. Shortly after the offset of the sequence, two alternative images (a target and a foil) were presented and participants attempted to choose which image had occurred in the sequence. Images were of threatening and non-threatening cats and dogs. The target either depicted an animal expressing an emotion distinct from the other images, or the sequences contained only images depicting the same emotional valence. Enhanced memory was found for targets that differed in emotional valence from the other sequence images, compared to targets that expressed the same emotional valence. Further controls in stimulus selection were then introduced, and the same emotional distinctiveness effect was obtained. In ruling out possible visual and attentional accounts of the data, an informal dual-route topic model is discussed. This places emphasis on how visual short-term memory reveals a sensitivity to the emotional content of the input as it unfolds over time. Items that present with a distinctive emotional content stand out in memory. © 2017 The Author(s).

  10. A programmable CCD driver circuit for multiphase CCD operation

    NASA Technical Reports Server (NTRS)

    Ewin, Audrey J.; Reed, Kenneth V.

    1989-01-01

    A programmable CCD (charge-coupled device) driver circuit was designed to drive CCDs in multiphased modes. The purpose of the drive electronics is to operate developmental CCD imaging arrays for NASA's tiltable moderate resolution imaging spectrometer (MODIS-T). Several objectives for the driver were considered during its design: (1) the circuit drives CCD electrode voltages between 0 V and +30 V to produce reasonable potential wells, (2) the driving sequence is started with one input signal, (3) the circuit allows programming of frame sequences required by arrays of any size, (4) it produces interfacing signals for the CCD and the DTF (detector test facility). Simulation of the driver verified its function with the master clock running up to 10 MHz. This suggests a maximum rate of 400,000 pixels/s. Timing and packaging parameters were verified. The design uses 54 TTL (transistor-transistor logic) chips. Two versions of hardware were fabricated: wirewrap and printed circuit board. Both were verified functionally with a logic analyzer.

  11. A method to synchronize signals from multiple patient monitoring devices through a single input channel for inclusion in list-mode acquisitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O’Connor, J. Michael; Pretorius, P. Hendrik; Johnson, Karen

    2013-12-15

    Purpose: This technical note documents a method that the authors developed for combining a signal to synchronize a patient-monitoring device with a second physiological signal for inclusion in a list-mode acquisition. The specific application requires synchronizing an external patient motion-tracking system with a medical imaging system by multiplexing the tracking input with the ECG input. The authors believe that their methodology can be adapted for use in a variety of medical imaging modalities, including single photon emission computed tomography (SPECT) and positron emission tomography (PET). Methods: The authors insert a unique pulse sequence into a single physiological input channel. This sequence is then recorded in the list-mode acquisition along with the R-wave pulse used for ECG gating. The specific form of the pulse sequence allows recognition of the time point being synchronized even when portions of the pulse sequence are lost due to collisions with R-wave pulses. This was achieved by altering the software used in binning the list-mode data to recognize even a portion of the pulse sequence. Limitations on the heart rates at which the pulse sequence could be reliably detected were investigated by simulating the mixing of the two signals as a function of heart rate and of the time point during the cardiac cycle at which the pulse sequence is mixed with the cardiac signal. Results: The authors have successfully achieved accurate temporal synchronization of their motion-tracking system with the acquisition of SPECT projections used in 17 recent clinical research cases. In the simulation analysis, the authors determined that synchronization to enable compensation for body and respiratory motion could be achieved for heart rates up to 125 beats-per-minute (bpm).
    Conclusions: Synchronization of list-mode acquisition with external patient monitoring devices such as those employed in motion tracking can reliably be achieved using a simple method that can be implemented with minimal external hardware and software modification through a single input channel, while still recording cardiac gating signals.
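The collision-tolerant recognition of the sync pattern can be illustrated with a toy detector. The offsets, tolerance, and names below are invented for illustration; the authors' actual implementation lives in their list-mode binning software.

```python
def find_sync(events, pattern, tol=1.0, min_hits=3):
    """Scan candidate start times for the sync marker: accept the first event
    time at which at least `min_hits` of the pattern offsets line up with
    recorded events, tolerating pulses lost to R-wave collisions."""
    events = sorted(events)
    for t0 in events:
        hits = sum(any(abs((t0 + off) - t) <= tol for t in events)
                   for off in pattern)
        if hits >= min_hits:
            return t0
    return None
```

Requiring only a subset of the pattern mirrors the note's point that the marker remains recognizable even when some of its pulses collide with R-wave pulses.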

  12. Retrieval of Sentence Sequences for an Image Stream via Coherence Recurrent Convolutional Networks.

    PubMed

    Park, Cesc Chunseong; Kim, Youngjin; Kim, Gunhee

    2018-04-01

    We propose an approach for retrieving a sequence of natural sentences for an image stream. Since general users often take a series of pictures of their experiences, much online visual information exists in the form of image streams, for which it is better to take the whole image stream into consideration when producing natural language descriptions. While almost all previous studies have dealt with the relation between a single image and a single natural sentence, our work extends both the input and output dimensions to a sequence of images and a sequence of sentences. For retrieving a coherent flow of multiple sentences for a photo stream, we propose a multimodal neural architecture called the coherence recurrent convolutional network (CRCN), which consists of convolutional neural networks, bidirectional long short-term memory (LSTM) networks, and an entity-based local coherence model. Our approach learns directly from the vast user-generated resource of blog posts as text-image parallel training data. We collect more than 22 K unique blog posts with 170 K associated images for the travel topics of NYC, Disneyland, Australia, and Hawaii. We demonstrate that our approach outperforms other state-of-the-art image captioning methods for text sequence generation, using both quantitative measures and user studies via Amazon Mechanical Turk.

  13. Acceleration techniques and their impact on arterial input function sampling: Non-accelerated versus view-sharing and compressed sensing sequences.

    PubMed

    Benz, Matthias R; Bongartz, Georg; Froehlich, Johannes M; Winkel, David; Boll, Daniel T; Heye, Tobias

    2018-07-01

    The aim was to investigate the variation of the arterial input function (AIF) within and between various DCE MRI sequences. A dynamic flow-phantom and steady signal reference were scanned on a 3T MRI using fast low angle shot (FLASH) 2d, FLASH3d (parallel imaging factor (P) = P0, P2, P4), volumetric interpolated breath-hold examination (VIBE) (P = P0, P3, P2 × 2, P2 × 3, P3 × 2), golden-angle radial sparse parallel imaging (GRASP), and time-resolved imaging with stochastic trajectories (TWIST). Signal over time curves were normalized and quantitatively analyzed by full width half maximum (FWHM) measurements to assess variation within and between sequences. The coefficient of variation (CV) for the steady signal reference ranged from 0.07-0.8%. The non-accelerated gradient echo FLASH2d, FLASH3d, and VIBE sequences showed low within sequence variation with 2.1%, 1.0%, and 1.6%. The maximum FWHM CV was 3.2% for parallel imaging acceleration (VIBE P2 × 3), 2.7% for GRASP and 9.1% for TWIST. The FWHM CV between sequences ranged from 8.5-14.4% for most non-accelerated/accelerated gradient echo sequences except 6.2% for FLASH3d P0 and 0.3% for FLASH3d P2; GRASP FWHM CV was 9.9% versus 28% for TWIST. MRI acceleration techniques vary in reproducibility and quantification of the AIF. Incomplete coverage of the k-space with TWIST as a representative of view-sharing techniques showed the highest variation within sequences and might be less suited for reproducible quantification of the AIF. Copyright © 2018 Elsevier B.V. All rights reserved.
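The FWHM and CV computations used above to quantify AIF variation are standard and can be sketched as follows (the zero baseline and the function names are my assumptions):

```python
import numpy as np

def fwhm(t, y):
    """Full width at half maximum of a single-peaked curve (baseline ~ 0),
    with linear interpolation of the half-maximum crossings."""
    y = np.asarray(y, dtype=float)
    half = y.max() / 2.0
    above = y >= half
    i0 = np.argmax(above)                     # first sample above half-max
    i1 = len(y) - np.argmax(above[::-1]) - 1  # last sample above half-max
    left = np.interp(half, [y[i0 - 1], y[i0]], [t[i0 - 1], t[i0]]) if i0 > 0 else t[i0]
    right = np.interp(half, [y[i1 + 1], y[i1]], [t[i1 + 1], t[i1]]) if i1 < len(y) - 1 else t[i1]
    return right - left

def coeff_of_variation(values):
    """CV in percent across repeated measurements."""
    v = np.asarray(values, dtype=float)
    return 100.0 * v.std(ddof=1) / v.mean()
```

For a Gaussian bolus curve with standard deviation sigma, the FWHM recovered this way should approach 2*sqrt(2*ln 2)*sigma.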

  14. Edge enhancement of color images using a digital micromirror device.

    PubMed

    Di Martino, J Matías; Flores, Jorge L; Ayubi, Gastón A; Alonso, Julia R; Fernández, Ariel; Ferrari, José A

    2012-06-01

    A method for orientation-selective enhancement of edges in color images is proposed. The method utilizes the capacity of digital micromirror devices to generate a positive and a negative color replica of the input image. When both images are slightly displaced and imaged together, one obtains an image with enhanced edges. The proposed technique does not require a coherent light source or precise alignment, and could be potentially useful for processing large image sequences in real time. Validation experiments are presented.
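The displaced positive/negative superposition can be imitated numerically. The DMD performs this optically; the sketch below only mimics the principle on a normalized grayscale intensity array, and the function name is mine.

```python
import numpy as np

def edge_enhance(img, dx=1, dy=0):
    """Superimpose the image with a displaced negative replica of itself:
    flat regions cancel to mid-grey, edges across the displacement remain.
    Note np.roll wraps around at the borders."""
    neg = 1.0 - img                        # negative replica, intensities in [0, 1]
    shifted = np.roll(neg, (dy, dx), axis=(0, 1))
    return (img + shifted) / 2.0
```

Choosing the displacement direction (dx, dy) selects the edge orientation that is enhanced, matching the orientation selectivity claimed in the abstract.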

  15. Simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) for dynamic contrast-enhanced MRI of liver.

    PubMed

    Ning, Jia; Sun, Yongliang; Xie, Sheng; Zhang, Bida; Huang, Feng; Koken, Peter; Smink, Jouke; Yuan, Chun; Chen, Huijun

    2018-05-01

    To propose a simultaneous acquisition sequence for improved hepatic pharmacokinetics quantification accuracy (SAHA) method for liver dynamic contrast-enhanced MRI. The proposed SAHA simultaneously acquired high temporal-resolution 2D images for vascular input function extraction using Cartesian sampling and 3D large-coverage high spatial-resolution liver dynamic contrast-enhanced images using golden angle stack-of-stars acquisition in an interleaved way. Simulations were conducted to investigate the accuracy of SAHA in pharmacokinetic analysis. A healthy volunteer and three patients with cirrhosis or hepatocellular carcinoma were included in the study to investigate the feasibility of SAHA in vivo. Simulation studies showed that SAHA can provide closer results to the true values and lower root mean square error of estimated pharmacokinetic parameters in all of the tested scenarios. The in vivo scans of subjects provided fair image quality of both 2D images for arterial input function and portal venous input function and 3D whole liver images. The in vivo fitting results showed that the perfusion parameters of healthy liver were significantly different from those of cirrhotic liver and HCC. The proposed SAHA can provide improved accuracy in pharmacokinetic modeling and is feasible in human liver dynamic contrast-enhanced MRI, suggesting that SAHA is a potential tool for liver dynamic contrast-enhanced MRI. Magn Reson Med 79:2629-2641, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  16. Synaptic plasticity in a cerebellum-like structure depends on temporal order

    NASA Astrophysics Data System (ADS)

    Bell, Curtis C.; Han, Victor Z.; Sugawara, Yoshiko; Grant, Kirsty

    1997-05-01

    Cerebellum-like structures in fish appear to act as adaptive sensory processors, in which learned predictions about sensory input are generated and subtracted from actual sensory input, allowing unpredicted inputs to stand out [1-3]. Pairing sensory input with centrally originating predictive signals, such as corollary discharge signals linked to motor commands, results in neural responses to the predictive signals alone that are 'negative images' of the previously paired sensory responses. Adding these 'negative images' to actual sensory inputs minimizes the neural response to predictable sensory features. At the cellular level, sensory input is relayed to the basal region of Purkinje-like cells, whereas predictive signals are relayed by parallel fibres to the apical dendrites of the same cells [4]. The generation of negative images could be explained by plasticity at parallel fibre synapses [5-7]. We show here that such plasticity exists in the electrosensory lobe of mormyrid electric fish and that it has the necessary properties for such a model: it is reversible, anti-Hebbian (excitatory postsynaptic potentials (EPSPs) are depressed after pairing with a postsynaptic spike) and tightly dependent on the sequence of pre- and postsynaptic events, with depression occurring only if the postsynaptic spike follows EPSP onset within 60 ms.
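The timing rule reported in the last sentence can be stated as a tiny update function. Only the 60 ms window and the depress-on-positive-timing condition come from the abstract; the learning rate and the all-or-none window shape are illustrative assumptions.

```python
def weight_update(w, t_epsp_onset, t_spike, lr=0.05, window=60.0):
    """Anti-Hebbian, timing-dependent depression: the synaptic weight is
    reduced only when the postsynaptic spike follows EPSP onset within the
    plasticity window (times in ms); otherwise it is left unchanged."""
    dt = t_spike - t_epsp_onset
    if 0.0 <= dt <= window:
        return w * (1.0 - lr)   # depress the paired synapse
    return w
```

Repeated pairing under this rule selectively weakens predictive inputs, which is how a 'negative image' of the paired sensory response could build up.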

  17. Multi-Temporal Land Cover Classification with Sequential Recurrent Encoders

    NASA Astrophysics Data System (ADS)

    Rußwurm, Marc; Körner, Marco

    2018-03-01

    Earth observation (EO) sensors deliver data with daily or weekly temporal resolution. Most land use and land cover (LULC) approaches, however, expect cloud-free and mono-temporal observations. The increasing temporal capabilities of today's sensors enable the use of temporal features along with spectral and spatial ones. Domains such as speech recognition and neural machine translation work with inherently temporal data and today achieve impressive results using sequential encoder-decoder structures. Inspired by these sequence-to-sequence models, we adapt an encoder structure with convolutional recurrent layers in order to approximate a phenological model for vegetation classes based on a temporal sequence of Sentinel 2 (S2) images. In our experiments, we visualize internal activations over a sequence of cloudy and non-cloudy images and find several recurrent cells which reduce the input activity for cloudy observations. Hence, we assume that our network has learned cloud-filtering schemes solely from input data, which could alleviate the need for tedious cloud filtering as a preprocessing step in many EO approaches. Moreover, using unfiltered temporal series of top-of-atmosphere (TOA) reflectance data, we achieved state-of-the-art classification accuracies on a large number of crop classes with minimal preprocessing compared to other classification approaches.

  18. TH-EF-BRA-06: A Novel Retrospective 3D K-Space Sorting 4D-MRI Technique Using a Radial K-Space Acquisition MRI Sequence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Subashi, E; Yin, F

    Purpose: Current retrospective 4D-MRI provides superior tumor-to-tissue contrast and accurate respiratory motion information for radiotherapy motion management. The developed 4D-MRI techniques based on 2D-MRI image sorting require a high frame rate of the MR sequences. However, several MRI sequences provide excellent image quality but have a low frame rate. This study aims at developing a novel retrospective 3D k-space sorting 4D-MRI technique using radial k-space acquisition MRI sequences to improve 4D-MRI image quality and temporal resolution for imaging irregular organ/tumor respiratory motion. Methods: The method is based on an RF-spoiled, steady-state, gradient-recalled sequence with minimal echo time. A 3D radial k-space data acquisition trajectory was used for sampling the datasets. Each radial spoke readout data line starts from the 3D center of the field of view. The respiratory signal can be extracted from the k-space center data point of each spoke. The spoke data were sorted based on this self-synchronized respiratory signal using phase sorting. Subsequently, 3D reconstruction was conducted to generate the time-resolved 4D-MRI images. As a feasibility study, this technique was implemented on the digital human phantom XCAT, with the respiratory motion controlled by an irregular motion profile. To validate the use of the k-space center data as a respiratory surrogate, we compared it with the input breathing profile controlling XCAT. Tumor motion trajectories measured on the reconstructed 4D-MRI were compared to the average input trajectory, and the mean absolute amplitude difference (D) was calculated. Results: The signal extracted from the k-space center data matches well with the input respiratory profile of XCAT. The relative amplitude error was 8.6% and the relative phase error was 3.5%. The XCAT 4D-MRI demonstrated a clear motion pattern with few serration artifacts. D of the tumor trajectories was 0.21 mm, 0.23 mm and 0.23 mm in the SI, AP and ML directions, respectively.
    Conclusion: A novel retrospective 3D k-space sorting 4D-MRI technique has been developed and evaluated on a digital human phantom. NIH (1R21CA165384-01A1)
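The phase-sorting step, binning each radial spoke by its fractional position between successive peaks of the self-gated respiratory signal, can be sketched as follows (the peak detection and binning scheme are simplified, and all names are mine):

```python
import numpy as np

def phase_sort(resp, n_phases=4):
    """Assign each spoke index to a respiratory phase bin: local maxima of the
    per-spoke surrogate delimit breathing cycles, and each spoke is binned by
    its fractional position within its cycle."""
    peaks = [i for i in range(1, len(resp) - 1)
             if resp[i - 1] < resp[i] >= resp[i + 1]]
    bins = [[] for _ in range(n_phases)]
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        for i in range(p0, p1):
            bins[n_phases * (i - p0) // (p1 - p0)].append(i)
    return bins
```

Because each cycle is normalized by its own length, this kind of phase sorting tolerates the irregular breathing periods the study simulates; each bin would then be reconstructed into one 3D phase image.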

  19. Image data-processing system for solar astronomy

    NASA Technical Reports Server (NTRS)

    Wilson, R. M.; Teuber, D. L.; Watkins, J. R.; Thomas, D. T.; Cooper, C. M.

    1977-01-01

    The paper describes an image data processing system (IDAPS), its hardware/software configuration, and interactive and batch modes of operation for the analysis of the Skylab/Apollo Telescope Mount S056 X-Ray Telescope experiment data. Interactive IDAPS is primarily designed to provide on-line interactive user control of image processing operations for image familiarization, sequence and parameter optimization, and selective feature extraction and analysis. Batch IDAPS follows the normal conventions of card control and data input and output, and is best suited where the desired parameters and sequence of operations are known and when long image-processing times are required. Particular attention is given to the way in which this system has been used in solar astronomy and other investigations. Some recent results obtained by means of IDAPS are presented.

  20. Optical resonance imaging: An optical analog to MRI with sub-diffraction-limited capabilities.

    PubMed

    Allodi, Marco A; Dahlberg, Peter D; Mazuski, Richard J; Davis, Hunter C; Otto, John P; Engel, Gregory S

    2016-12-21

    We propose here optical resonance imaging (ORI), a direct optical analog to magnetic resonance imaging (MRI). The proposed pulse sequence for ORI maps space to time and recovers an image from a heterodyne-detected third-order nonlinear photon echo measurement. As opposed to traditional photon echo measurements, the third pulse in the ORI pulse sequence has significant pulse-front tilt that acts as a temporal gradient. This gradient couples space to time by stimulating the emission of a photon echo signal from different lateral spatial locations of a sample at different times, providing widefield ultrafast microscopy. We circumvent the diffraction limit of the optics by mapping the lateral spatial coordinate of the sample to the emission time of the signal, which can be measured to high precision using interferometric heterodyne detection. This technique is thus an optical analog of MRI, where magnetic-field gradients are used to localize the spin-echo emission to a point below the diffraction limit of the radio-frequency wave used. We calculate the expected ORI signal using 15 fs pulses and 87° of pulse-front tilt, collected using f/2 optics, and find a two-point resolution of 275 nm using 800 nm light that satisfies the Rayleigh criterion. We also derive a general equation for resolution in optical resonance imaging that indicates the possibility of superresolution imaging using this technique. The photon echo sequence also enables spectroscopic determination of the input and output energy. The technique thus correlates the input energy with the final position and energy of the exciton.

  1. Bone marrow cavity segmentation using graph-cuts with wavelet-based texture feature.

    PubMed

    Shigeta, Hironori; Mashita, Tomohiro; Kikuta, Junichi; Seno, Shigeto; Takemura, Haruo; Ishii, Masaru; Matsuda, Hideo

    2017-10-01

    Emerging bioimaging technologies enable us to capture various dynamic cellular activities. As large amounts of data are now obtained and it is becoming unrealistic to process massive numbers of images manually, automatic analysis methods are required. One of the issues for automatic image segmentation is that image-taking conditions are variable, so many manual inputs are commonly required for each image. In this paper, we propose a bone marrow cavity (BMC) segmentation method for bone images, as BMC is considered to be related to the mechanism of bone remodeling, osteoporosis, and so on. To reduce the manual input needed to segment BMC, we classify the texture pattern using wavelet transformation and a support vector machine. We also integrate the result of texture pattern classification into the graph-cuts-based image segmentation method, because texture analysis does not consider spatial continuity. Our method is applicable to a particular frame in an image sequence in which the condition of the fluorescent material is variable. In the experiment, we evaluated our method with nine types of mother wavelets and several sets of scale parameters. The proposed method with graph-cuts and texture pattern classification performs well without manual input from a user.
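As a minimal illustration of the wavelet-texture idea (not the paper's code: the mother wavelet is fixed to Haar here and the SVM and graph-cuts steps are omitted), one level of a 2-D Haar transform yields sub-band energies that can serve as texture features:

```python
def haar2d(patch):
    """One-level 2-D Haar transform of a 2N x 2N patch (list of lists).
    Returns the (LL, LH, HL, HH) sub-bands."""
    n = len(patch) // 2
    LL = [[0.0] * n for _ in range(n)]
    LH = [[0.0] * n for _ in range(n)]
    HL = [[0.0] * n for _ in range(n)]
    HH = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a = patch[2*i][2*j];   b = patch[2*i][2*j+1]
            c = patch[2*i+1][2*j]; d = patch[2*i+1][2*j+1]
            LL[i][j] = (a + b + c + d) / 4   # local average
            LH[i][j] = (a - b + c - d) / 4   # horizontal detail
            HL[i][j] = (a + b - c - d) / 4   # vertical detail
            HH[i][j] = (a - b - c + d) / 4   # diagonal detail
    return LL, LH, HL, HH

def band_energy(band):
    return sum(v * v for row in band for v in row)

# A patch with vertical stripes puts all of its detail energy into LH.
stripes = [[1, 0, 1, 0]] * 4
LL, LH, HL, HH = haar2d(stripes)
features = [band_energy(b) for b in (LH, HL, HH)]
```

A classifier would consume such per-patch feature vectors to label texture classes before the spatial-continuity step.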

  2. Stereo and IMU-Assisted Visual Odometry for Small Robots

    NASA Technical Reports Server (NTRS)

    2012-01-01

    This software performs two functions: (1) taking stereo image pairs as input, it computes stereo disparity maps from them by cross-correlation to achieve 3D (three-dimensional) perception; (2) taking a sequence of stereo image pairs as input, it tracks features in the image sequence to estimate the motion of the cameras between successive image pairs. A real-time stereo vision system with IMU (inertial measurement unit)-assisted visual odometry was implemented on a single 750 MHz/520 MHz OMAP3530 SoC (system on chip) from TI (Texas Instruments). Frame rates of 46 fps (frames per second) were achieved at QVGA (Quarter Video Graphics Array, i.e. 320 × 240), or 8 fps at VGA (Video Graphics Array, 640 × 480) resolutions, while simultaneously tracking up to 200 features, taking full advantage of the OMAP3530's integer DSP (digital signal processor) and floating point ARM processors. This is a substantial advancement over previous work as the stereo implementation produces 146 Mde/s (millions of disparities evaluated per second) in 2.5 W, yielding a stereo energy efficiency of 58.8 Mde/J, which is 3.75× better than prior DSP stereo while providing more functionality.
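The first function, matching patches between the two images along a scanline, can be sketched in miniature. This toy uses a sum-of-absolute-differences cost on a single 1-D scanline with illustrative names; the flight software's correlator is far more elaborate:

```python
def disparity_1d(left, right, window, max_disp):
    """For each pixel of `left`, find the leftward shift into `right`
    that minimises the SAD cost over a small window."""
    half = window // 2
    disp = [0] * len(left)
    for x in range(half, len(left) - half):
        patch = left[x - half : x + half + 1]
        best, best_d = float("inf"), 0
        for d in range(min(max_disp, x - half) + 1):
            cand = right[x - d - half : x - d + half + 1]
            sad = sum(abs(p - q) for p, q in zip(patch, cand))
            if sad < best:
                best, best_d = sad, d
        disp[x] = best_d
    return disp

# The right scanline is the left one shifted by 2 px, so the recovered
# disparity at textured pixels is 2.
left  = [0, 0, 9, 5, 7, 0, 0, 0]
right = [9, 5, 7, 0, 0, 0, 0, 0]
disp = disparity_1d(left, right, 3, 3)
```

Disparity then converts to depth via the stereo baseline and focal length.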

  3. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate the performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
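The core idea of deactivating the data term where pixels are saturated can be illustrated with a deliberately simplified 1-D brightness-constancy cost (illustrative names; the paper's actual functional is dense and variational):

```python
SATURATED = 255  # assumed sensor ceiling for this toy

def data_cost(I1, I2, u):
    """Mean squared brightness-constancy residual for integer flow u,
    skipping positions that are saturated in either frame."""
    cost, count = 0.0, 0
    for x in range(len(I1)):
        if 0 <= x + u < len(I2):
            a, b = I1[x], I2[x + u]
            if a < SATURATED and b < SATURATED:
                cost += (b - a) ** 2
                count += 1
    return cost / max(count, 1)

# Scene shifted right by 1 px; one pixel clipped to 255 in frame 2.
# The masked cost still recovers the true flow from the valid pixels.
I1 = [10, 20, 30, 40, 50]
I2 = [0, 10, 20, 255, 40]
best = min(range(-2, 3), key=lambda u: data_cost(I1, I2, u))
```

In the full method, a regularisation term would propagate flow into the masked regions that contribute no data term.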

  4. Empirical mode decomposition-based facial pose estimation inside video sequences

    NASA Astrophysics Data System (ADS)

    Qing, Chunmei; Jiang, Jianmin; Yang, Zhijing

    2010-03-01

    We describe a new pose-estimation algorithm via integration of the strengths of both empirical mode decomposition (EMD) and mutual information. While mutual information is exploited to measure the similarity between facial images to estimate poses, EMD is exploited to decompose input facial images into a number of intrinsic mode function (IMF) components, which redistribute the effects of noise, expression changes, and illumination variations such that, when the input facial image is described by the selected IMF components, all the negative effects can be minimized. Extensive experiments were carried out in comparison to existing representative techniques, and the results show that the proposed algorithm achieves better pose-estimation performance with robustness to noise corruption, illumination variation, and facial expressions.

  5. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter on the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, in a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, as well as good clutter tolerance, by correctly recognizing the different objects within the cluttered scenes. We also record additional information extracted from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
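The invariance properties stem from the geometry of the log r-θ map itself: scaling about the centre becomes a translation along the log r axis, and in-plane rotation becomes a translation along θ. A minimal sketch of the coordinate map for a single point:

```python
import math

def log_polar(x, y, cx=0.0, cy=0.0):
    """Map Cartesian (x, y) about centre (cx, cy) to (log r, theta)."""
    dx, dy = x - cx, y - cy
    return math.log(math.hypot(dx, dy)), math.atan2(dy, dx)

lr1, th1 = log_polar(3.0, 4.0)     # a point at radius r = 5
lr2, th2 = log_polar(6.0, 8.0)     # the same point scaled by 2
lr3, th3 = log_polar(-4.0, 3.0)    # the same point rotated by 90 degrees
```

A correlator operating on the mapped image therefore sees scale and rotation changes as plain shifts, which its inherent shift invariance absorbs.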

  6. An Imaging And Graphics Workstation For Image Sequence Analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-01-01

    This paper describes an application-specific engineering workstation designed and developed to analyze imagery sequences from a variety of sources. The system combines the software and hardware environment of the modern graphic-oriented workstations with the digital image acquisition, processing and display techniques. The objective is to achieve automation and high throughput for many data reduction tasks involving metric studies of image sequences. The applications of such an automated data reduction tool include analysis of the trajectory and attitude of aircraft, missile, stores and other flying objects in various flight regimes including launch and separation as well as regular flight maneuvers. The workstation can also be used in an on-line or off-line mode to study three-dimensional motion of aircraft models in simulated flight conditions such as wind tunnels. The system's key features are: 1) Acquisition and storage of image sequences by digitizing real-time video or frames from a film strip; 2) computer-controlled movie loop playback, slow motion and freeze frame display combined with digital image sharpening, noise reduction, contrast enhancement and interactive image magnification; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) automatic and manual field-of-view and spatial calibration; 5) image sequence data base generation and management, including the measurement data products; 6) off-line analysis software for trajectory plotting and statistical analysis; 7) model-based estimation and tracking of object attitude angles; and 8) interface to a variety of video players and film transport sub-systems.

  7. FliMax, a novel stimulus device for panoramic and highspeed presentation of behaviourally generated optic flow.

    PubMed

    Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M

    2003-03-01

    A high-speed panoramic visual stimulation device is introduced which is suitable to analyse visual interneurons during stimulation with rapid image displacements as experienced by fast moving animals. The responses of an identified motion sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by conventional stimuli as are often used for systems analysis, but also by behaviourally relevant input.

  8. Modified-hybrid optical neural network filter for multiple object recognition within cluttered scenes

    NASA Astrophysics Data System (ADS)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.

    2009-08-01

    Motivated by the non-linear interpolation and generalization abilities of the hybrid optical neural network filter between the reference and non-reference images of the true-class object, we designed the modified-hybrid optical neural network filter. We applied an optical mask to the hybrid optical neural network filter's input. The mask was built with the constant weight connections of a randomly chosen image included in the training set. The resulting design of the modified-hybrid optical neural network filter is optimized to perform best in cluttered scenes of the true-class object. Due to the shift invariance properties inherited from its correlator unit, the filter can accommodate multiple objects of the same class to be detected within an input cluttered image. Additionally, the architecture of the neural network unit of the general hybrid optical neural network filter allows the recognition of multiple objects of different classes within the input cluttered image by modifying the output layer of the unit. We test the modified-hybrid optical neural network filter on the recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The filter is shown to exhibit, in a single pass over the input data, simultaneous out-of-plane rotation and shift invariance and good clutter tolerance. It is able to successfully detect and correctly classify the true-class objects within background clutter for which there has been no previous training.

  9. enoLOGOS: a versatile web tool for energy normalized sequence logos

    PubMed Central

    Workman, Christopher T.; Yin, Yutong; Corcoran, David L.; Ideker, Trey; Stormo, Gary D.; Benos, Panayiotis V.

    2005-01-01

    enoLOGOS is a web-based tool that generates sequence logos from various input sources. Sequence logos have become a popular way to graphically represent DNA and amino acid sequence patterns from a set of aligned sequences. Each position of the alignment is represented by a column of stacked symbols with its total height reflecting the information content in this position. Currently, the available web servers are able to create logo images from a set of aligned sequences, but none of them generates weighted sequence logos directly from energy measurements or other sources. With the advent of high-throughput technologies for estimating the contact energy of different DNA sequences, tools that can create logos directly from binding affinity data are useful to researchers. enoLOGOS generates sequence logos from a variety of input data, including energy measurements, probability matrices, alignment matrices, count matrices and aligned sequences. Furthermore, enoLOGOS can represent the mutual information of different positions of the consensus sequence, a unique feature of this tool. Another web interface for our software, C2H2-enoLOGOS, generates logos for the DNA-binding preferences of the C2H2 zinc-finger transcription factor family members. enoLOGOS and C2H2-enoLOGOS are accessible over the web at . PMID:15980495
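The column-height computation common to standard sequence logos (the information content of an alignment column; enoLOGOS's distinctive energy-weighted inputs are not shown here) can be sketched as:

```python
import math

def column_information(counts):
    """Information content in bits of one alignment column.
    counts: mapping base -> count. Small-sample correction omitted."""
    total = sum(counts.values())
    entropy = 0.0
    for c in counts.values():
        if c:
            p = c / total
            entropy -= p * math.log2(p)
    return math.log2(4) - entropy      # in [0, 2] bits for DNA

# A fully conserved column carries 2 bits; a uniform column carries 0.
conserved = column_information({"A": 10, "C": 0, "G": 0, "T": 0})
uniform   = column_information({"A": 5, "C": 5, "G": 5, "T": 5})
```

Each symbol in the column's stack is then drawn with height proportional to its frequency times this column total.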

  10. Blind multirigid retrospective motion correction of MR images.

    PubMed

    Loktyushin, Alexander; Nickisch, Hannes; Pohmann, Rolf; Schölkopf, Bernhard

    2015-04-01

    Physiological nonrigid motion is inevitable when imaging, e.g., abdominal viscera, and can lead to serious deterioration of the image quality. Prospective techniques for motion correction can handle only special types of nonrigid motion, as they only allow global correction. Retrospective methods developed so far need guidance from navigator sequences or external sensors. We propose a fully retrospective nonrigid motion correction scheme that only needs raw data as an input. Our method is based on a forward model that describes the effects of nonrigid motion by partitioning the image into patches with locally rigid motion. Using this forward model, we construct an objective function that we can optimize with respect to both the unknown motion parameters per patch and the underlying sharp image. We evaluate our method on both synthetic and real data in 2D and 3D. In vivo data was acquired using standard imaging sequences. The correction algorithm significantly improves the image quality. Our compute unified device architecture (CUDA)-enabled graphics processing unit implementation ensures feasible computation times. The presented technique is the first computationally feasible retrospective method that uses the raw data of standard imaging sequences and allows correction of nonrigid motion without guidance from external motion sensors. © 2014 Wiley Periodicals, Inc.

  11. Storage and retrieval of large digital images

    DOEpatents

    Bradley, J.N.

    1998-01-20

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval. 6 figs.
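The tile-by-tile DWT idea can be illustrated in one dimension: with the length-2 Haar filter and even-sized tiles, transforming tiles independently and concatenating the coefficients reproduces the whole-signal transform exactly, which is the sense in which the tiled computation can be "seamless" without holding the full image in memory. Longer wavelet filters require boundary overlap between tiles, which this hedged toy omits.

```python
import math

def haar1d(x):
    """One-level Haar DWT: returns (approximation, detail) coefficient lists."""
    a = [(x[2*i] + x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / math.sqrt(2) for i in range(len(x) // 2)]
    return a, d

def tiled_haar(signal, tile):
    """Apply haar1d per tile, concatenating coefficients as tiles stream in."""
    A, D = [], []
    for start in range(0, len(signal), tile):
        a, d = haar1d(signal[start:start + tile])
        A += a
        D += d
    return A, D

signal = [4, 2, 6, 8, 1, 3, 5, 5]
```

Only one tile's worth of samples need ever reside in primary memory; the emitted coefficients can be compressed and moved to secondary storage as they are produced.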

  12. Storage and retrieval of large digital images

    DOEpatents

    Bradley, Jonathan N.

    1998-01-01

    Image compression and viewing are implemented with (1) a method for performing DWT-based compression on a large digital image with a computer system possessing a two-level system of memory and (2) a method for selectively viewing areas of the image from its compressed representation at multiple resolutions and, if desired, in a client-server environment. The compression of a large digital image I(x,y) is accomplished by first defining a plurality of discrete tile image data subsets T_ij(x,y) that, upon superposition, form the complete set of image data I(x,y). A seamless wavelet-based compression process is effected on I(x,y) that is comprised of successively inputting the tiles T_ij(x,y) in a selected sequence to a DWT routine, and storing the resulting DWT coefficients in a first primary memory. These coefficients are periodically compressed and transferred to a secondary memory to maintain sufficient memory in the primary memory for data processing. The sequence of DWT operations on the tiles T_ij(x,y) effectively calculates a seamless DWT of I(x,y). Data retrieval consists of specifying a resolution and a region of I(x,y) for display. The subset of stored DWT coefficients corresponding to each requested scene is determined and then decompressed for input to an inverse DWT, the output of which forms the image display. The repeated process whereby image views are specified may take the form of an interaction with a computer pointing device on an image display from a previous retrieval.

  13. Visual mental image generation does not overlap with visual short-term memory: a dual-task interference study.

    PubMed

    Borst, Gregoire; Niven, Elaine; Logie, Robert H

    2012-04-01

    Visual mental imagery and working memory are often assumed to play similar roles in high-order functions, but little is known of their functional relationship. In this study, we investigated whether similar cognitive processes are involved in the generation of visual mental images, in short-term retention of those mental images, and in short-term retention of visual information. Participants encoded and recalled visually or aurally presented sequences of letters under two interference conditions: spatial tapping or irrelevant visual input (IVI). In Experiment 1, spatial tapping selectively interfered with the retention of sequences of letters when participants generated visual mental images from aural presentation of the letter names and when the letters were presented visually. In Experiment 2, encoding of the sequences was disrupted by both interference tasks. However, in Experiment 3, IVI interfered with the generation of the mental images, but not with their retention, whereas spatial tapping was more disruptive during retention than during encoding. Results suggest that the temporary retention of visual mental images and of visual information may be supported by the same visual short-term memory store but that this store is not involved in image generation.

  14. Image sequence analysis workstation for multipoint motion analysis

    NASA Astrophysics Data System (ADS)

    Mostafavi, Hassan

    1990-08-01

    This paper describes an application-specific engineering workstation designed and developed to analyze motion of objects from video sequences. The system combines the software and hardware environment of a modern graphic-oriented workstation with digital image acquisition, processing and display techniques. In addition to automation and increase in throughput of data reduction tasks, the objective of the system is to provide less invasive methods of measurement by offering the ability to track objects that are more complex than reflective markers. Grey-level image processing and spatial/temporal adaptation of the processing parameters are used for location and tracking of more complex features of objects under uncontrolled lighting and background conditions. The applications of such an automated and noninvasive measurement tool include analysis of the trajectory and attitude of rigid bodies such as human limbs, robots, aircraft in flight, etc. The system's key features are: 1) acquisition and storage of image sequences by digitizing and storing real-time video; 2) computer-controlled movie loop playback, freeze frame display, and digital image enhancement; 3) multiple leading edge tracking in addition to object centroids at up to 60 fields per second from both live input video or a stored image sequence; 4) model-based estimation and tracking of the six degrees of freedom of a rigid body; 5) field-of-view and spatial calibration; 6) image sequence and measurement data base management; and 7) offline analysis software for trajectory plotting and statistical analysis.

  15. Image based performance analysis of thermal imagers

    NASA Astrophysics Data System (ADS)

    Wegner, D.; Repasi, E.

    2016-05-01

    Due to advances in technology, modern thermal imagers resemble sophisticated image processing systems in functionality. Advanced signal and image processing tools enclosed in the camera body extend the basic image capturing capability of thermal cameras, in order to enhance the display presentation of the captured scene or of specific scene details. Usually, the implemented methods are proprietary company expertise, distributed without extensive documentation. This makes the comparison of thermal imagers, especially from different companies, a difficult task (or at least a very time-consuming/expensive one, e.g. requiring the execution of a field trial and/or an observer trial). For example, a thermal camera equipped with turbulence mitigation capability is such a closed system. The Fraunhofer IOSB has started to build up a system for testing thermal imagers by image-based methods in the lab environment. This will extend our capability of measuring the classical IR-system parameters (e.g. MTF, MTDP, etc.) in the lab. The system is set up around the IR scene projector, which is necessary for the thermal display (projection) of an image sequence for the IR camera under test. The same set of thermal test sequences can be presented to every unit under test; for turbulence mitigation tests, this could be, e.g., the same turbulence sequence. During system tests, gradual variation of input parameters (e.g. thermal contrast) can be applied. First ideas on test scene selection, and on how to assemble an imaging suite (a set of image sequences) for the analysis of imaging thermal systems containing such black boxes in the image forming path, are discussed.

  16. A semi-Markov model for mitosis segmentation in time-lapse phase contrast microscopy image sequences of stem cell populations.

    PubMed

    Liu, An-An; Li, Kang; Kanade, Takeo

    2012-02-01

    We propose a semi-Markov model trained in a max-margin learning framework for mitosis event segmentation in large-scale time-lapse phase contrast microscopy image sequences of stem cell populations. Our method consists of three steps. First, we apply a constrained optimization based microscopy image segmentation method that exploits phase contrast optics to extract candidate subsequences in the input image sequence that contains mitosis events. Then, we apply a max-margin hidden conditional random field (MM-HCRF) classifier learned from human-annotated mitotic and nonmitotic sequences to classify each candidate subsequence as a mitosis or not. Finally, a max-margin semi-Markov model (MM-SMM) trained on manually-segmented mitotic sequences is utilized to reinforce the mitosis classification results, and to further segment each mitosis into four predefined temporal stages. The proposed method outperforms the event-detection CRF model recently reported by Huh as well as several other competing methods in very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells. For mitosis detection, an overall precision of 95.8% and a recall of 88.1% were achieved. For mitosis segmentation, the mean and standard deviation for the localization errors of the start and end points of all mitosis stages were well below 1 and 2 frames, respectively. In particular, an overall temporal location error of 0.73 ± 1.29 frames was achieved for locating daughter cell birth events.

  17. DNA-based watermarks using the DNA-Crypt algorithm.

    PubMed

    Heider, Dominik; Barnekow, Angelika

    2007-05-29

    The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms.

  18. DNA-based watermarks using the DNA-Crypt algorithm

    PubMed Central

    Heider, Dominik; Barnekow, Angelika

    2007-01-01

    Background The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms. Results The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using Ypt7 in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein. Conclusion The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt produces few mismatches between the sequences, similar to all steganographic algorithms. PMID:17535434

  19. 3D shape recovery from image focus using gray level co-occurrence matrix

    NASA Astrophysics Data System (ADS)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of the target object using a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture: it converts the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this paper, we propose the gray-level co-occurrence matrix, along with its statistical features, for computing the focus information of the image dataset. The gray-level co-occurrence matrix quantifies the texture present in the image using statistical features applied to the joint probability distribution function of the gray-level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian mixture model. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources, and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
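A minimal sketch of the gray-level co-occurrence matrix and one of its statistical features (contrast, which rises with sharp gray-level transitions and hence with focus); the Gaussian mixture model step from the abstract is omitted, and the names are illustrative:

```python
def glcm(image, levels, offset=(0, 1)):
    """Normalised co-occurrence matrix P[i][j] for the given pixel offset.
    offset=(0, 1) pairs each pixel with its right-hand neighbour."""
    di, dj = offset
    P = [[0.0] * levels for _ in range(levels)]
    pairs = 0
    for i in range(len(image)):
        for j in range(len(image[0])):
            ni, nj = i + di, j + dj
            if 0 <= ni < len(image) and 0 <= nj < len(image[0]):
                P[image[i][j]][image[ni][nj]] += 1
                pairs += 1
    return [[v / pairs for v in row] for row in P]

def contrast(P):
    """sum_{i,j} (i - j)^2 * P[i][j]: high for sharp, textured patches."""
    return sum((i - j) ** 2 * P[i][j]
               for i in range(len(P)) for j in range(len(P)))

flat  = [[1, 1], [1, 1]]   # defocused look: no gray-level transitions
edges = [[0, 1], [0, 1]]   # in-focus edge between the two columns
```

In a shape-from-focus pipeline, the per-patch feature is evaluated across the image stack and the depth at each pixel is taken where the focus measure peaks.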

  20. Simulation of speckle patterns with pre-defined correlation distributions.

    PubMed

    Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S

    2016-03-01

    We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques.
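
    The coherent-imaging principle the method builds on can be sketched in a few lines of NumPy: a random phase screen behind a finite aperture, propagated by FFT, yields a fully developed speckle pattern. This does not reproduce the paper's pre-defined correlation mapping; the aperture and grid sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
aperture = np.zeros((N, N))
aperture[:32, :32] = 1.0                          # finite pupil sets speckle size
phase = rng.uniform(0.0, 2.0 * np.pi, (N, N))     # rough-surface phase screen
field = np.fft.fft2(aperture * np.exp(1j * phase))  # far-field propagation
speckle = np.abs(field) ** 2                      # observed intensity

# Fully developed speckle has unit contrast: std(I) / mean(I) close to 1.
c = speckle.std() / speckle.mean()
assert 0.8 < c < 1.2
```

    Correlated sequences of patterns, as in the paper, would be obtained by feeding partially correlated phase screens through the same propagation step.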

  2. Experimental generation of Laguerre-Gaussian beam using digital micromirror device.

    PubMed

    Ren, Yu-Xuan; Li, Ming; Huang, Kun; Wu, Jian-Guang; Gao, Hong-Fang; Wang, Zi-Qiang; Li, Yin-Mei

    2010-04-01

    A digital micromirror device (DMD) modulates laser intensity through computer control of the device. We experimentally investigate the performance of the modulation property of a DMD and optimize the modulation procedure through image correction. Furthermore, Laguerre-Gaussian (LG) beams with different topological charges are generated by projecting a series of forklike gratings onto the DMD. We measure the field distribution with and without correction, the energy of LG beams with different topological charges, and the polarization property in sequence. Experimental results demonstrate that it is possible to generate LG beams with a DMD that allows the use of a high-intensity laser with proper correction to the input images, and that the polarization state of the LG beam differs from that of the input beam.

  3. Fuzzy logic particle tracking velocimetry

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1993-01-01

    Fuzzy logic has proven to be a simple and robust method for process control. Instead of requiring a complex model of the system, a user-defined rule base is used to control the process. In this paper the principles of fuzzy logic control are applied to Particle Tracking Velocimetry (PTV). Two frames of digitally recorded, single-exposure particle imagery are used as input. The fuzzy processor uses the local particle displacement information to determine the correct particle tracks. Fuzzy PTV is an improvement over traditional PTV techniques, which typically require a sequence (more than two) of image frames to track particles accurately. The fuzzy processor executes in software on a PC without the use of specialized array or fuzzy logic processors. A pair of sample input images with roughly 300 particle images each results in more than 200 velocity vectors in under 8 seconds of processing time.
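
    As a toy illustration of the idea (not Wernet's actual rule base, whose inputs and membership functions are not given in the abstract), a fuzzy rule might score a candidate particle pairing by how well its displacement agrees with the local mean displacement of neighboring particles:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def track_score(candidate_disp, neighborhood_mean):
    """Degree to which |candidate - local mean| is 'small' (in pixels).
    The breakpoints (-1, 0, 3) are illustrative assumptions."""
    return tri(abs(candidate_disp - neighborhood_mean), -1.0, 0.0, 3.0)

# A candidate consistent with its neighbors outranks an outlier.
assert track_score(5.0, 5.2) > track_score(9.0, 5.2)
```

    Ranking candidates by such membership degrees, instead of by a hard threshold, is what makes the rule-base approach robust to noisy displacement estimates.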

  4. A CT and MRI scan to MCNP input conversion program.

    PubMed

    Van Riper, Kenneth A

    2005-01-01

    We describe a new program to read a sequence of tomographic scans and prepare the geometry and material sections of an MCNP input file. Image processing techniques include contrast controls and mapping of grey scales to colour. The user interface provides several tools with which the user can associate a range of image intensities to an MCNP material. Materials are loaded from a library. A separate material assignment can be made to a pixel intensity or range of intensities when that intensity dominates the image boundaries; this material is assigned to all pixels with that intensity contiguous with the boundary. Material fractions are computed in a user-specified voxel grid overlaying the scans. New materials are defined by mixing the library materials using the fractions. The geometry can be written as an MCNP lattice or as individual cells. A combination algorithm can be used to join neighbouring cells with the same material.
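
    The voxel material-fraction step described above can be sketched as follows, assuming pixel intensities have already been mapped to integer material labels; the block size and label layout are illustrative, not the program's actual data model.

```python
import numpy as np

def voxel_fractions(labels, block):
    """Fraction of each material label inside every (block x block) voxel
    of a grid overlaying the labeled scan slice."""
    h, w = labels.shape
    mats = np.unique(labels)
    gh, gw = h // block, w // block
    frac = np.zeros((gh, gw, mats.size))
    for k, m in enumerate(mats):
        hit = (labels == m).astype(float)
        # Average the 0/1 mask inside each voxel block.
        frac[..., k] = hit[:gh * block, :gw * block].reshape(
            gh, block, gw, block).mean(axis=(1, 3))
    return mats, frac

labels = np.zeros((4, 4), dtype=int)
labels[:, 2:] = 1                            # right half is material 1
mats, frac = voxel_fractions(labels, 2)
assert np.allclose(frac[0, 0], [1.0, 0.0])   # left voxel: all material 0
assert np.allclose(frac[0, 1], [0.0, 1.0])   # right voxel: all material 1
```

    Mixed voxels get fractional values, which is exactly what is needed to define mixed MCNP materials from the library components.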

  5. Human action classification using procrustes shape theory

    NASA Astrophysics Data System (ADS)

    Cho, Wanhyun; Kim, Sangkyoon; Park, Soonyoung; Lee, Myungeun

    2015-02-01

    In this paper, we propose a new method that classifies human actions using Procrustes shape theory. First, we extract a pre-shape configuration vector of landmarks from each frame of an image sequence representing an arbitrary human action, and derive the Procrustes fit vector for this pre-shape configuration. Second, we extract a set of pre-shape vectors from training samples stored in a database, and compute a Procrustes mean shape vector for these pre-shape vectors. Third, we extract a sequence of pre-shape vectors from the input video and project it onto the tangent space, with the pole taken as the sequence of mean shape vectors corresponding to a target video. We then calculate the Procrustes distance between the projected pre-shape vectors on the tangent space and the mean shape vectors. Finally, we classify the input video into the human action class with the minimum Procrustes distance. We assess the performance of the proposed method on a public dataset, namely the Weizmann human action dataset. Experimental results reveal that the proposed method performs very well on this dataset.
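
    The core Procrustes machinery the method relies on, pre-shape normalization and a distance after optimal rotation, can be sketched with NumPy. This is the generic textbook construction, not the authors' code; reflections are not excluded in this minimal version.

```python
import numpy as np

def preshape(X):
    """Remove translation and scale: center landmarks, normalize to unit size."""
    Z = X - X.mean(axis=0)
    return Z / np.linalg.norm(Z)

def procrustes_distance(X, Y):
    """Residual between pre-shapes after the optimal rotation of Y onto X
    (Kabsch alignment via SVD)."""
    A, B = preshape(X), preshape(Y)
    U, _, Vt = np.linalg.svd(B.T @ A)
    R = U @ Vt                       # rotation maximizing trace(R.T @ B.T @ A)
    return float(np.linalg.norm(A - B @ R))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
th = 0.7
rot = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
moved = 3.0 * square @ rot + np.array([5.0, -2.0])    # rotated, scaled, shifted
assert procrustes_distance(square, moved) < 1e-6      # same shape: distance ~ 0
bent = np.array([[0, 0], [2, 0], [1, 1], [0.1, 0.9]], float)
assert procrustes_distance(square, bent) > 0.05       # different shape
```

    Classifying an action then amounts to summing such distances over a sequence of frames and picking the class whose mean-shape sequence is nearest, as the abstract describes.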

  6. Space-time light field rendering.

    PubMed

    Wang, Huamin; Sun, Mingxuan; Yang, Ruigang

    2007-01-01

    In this paper, we propose a novel framework called space-time light field rendering, which allows continuous exploration of a dynamic scene in both space and time. Compared to existing light field capture/rendering systems, it offers the capability of using unsynchronized video inputs and the added freedom of controlling the visualization in the temporal domain, such as smooth slow motion and temporal integration. In order to synthesize novel views from any viewpoint at any time instant, we develop a two-stage rendering algorithm. We first interpolate in the temporal domain to generate globally synchronized images using a robust spatial-temporal image registration algorithm followed by edge-preserving image morphing. We then interpolate these software-synchronized images in the spatial domain to synthesize the final view. In addition, we introduce a very accurate and robust algorithm to estimate subframe temporal offsets among input video sequences. Experimental results from unsynchronized videos with or without time stamps show that our approach is capable of maintaining photorealistic quality from a variety of real scenes.

  7. Synthesis of image sequences for Korean sign language using 3D shape model

    NASA Astrophysics Data System (ADS)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for offering information and realizing communication to the deaf-mute. The deaf-mute communicate with other people by means of sign language, but most people are unfamiliar with it. The proposed method converts text data into the corresponding image sequences for Korean sign language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL; this general model must be constructed with the anatomical structure of the human body taken into account. To obtain a personal 3D shape model, the general model is adjusted to personal base images. Image synthesis for KSL consists of deforming the personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and 3D movements of the head, trunk, arms and hands, and are parameterized so that the model can easily be deformed. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and stored in a database. Editing the parameters according to the input text data generates the image sequences of 3D motions.

  8. Image processing methods used to simulate flight over remotely sensed data

    NASA Technical Reports Server (NTRS)

    Mortensen, H. B.; Hussey, K. J.; Mortensen, R. A.

    1988-01-01

    It has been demonstrated that image processing techniques can provide an effective means of simulating flight over remotely sensed data (Hussey et al. 1986). This paper explains the methods used to simulate and animate three-dimensional surfaces from two-dimensional imagery. The preprocessing techniques used on the input data, the selection of the animation sequence, the generation of the animation frames, and the recording of the animation are covered. The software used for all steps is discussed.

  9. Variational optical flow estimation for images with spectral and photometric sensor diversity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-03-01

    Motion estimation of objects in image sequences is an essential computer vision task. To this end, optical flow methods compute pixel-level motion, with the purpose of providing low-level input to higher-level algorithms and applications. Robust flow estimation is crucial for the success of applications, which in turn depends on the quality of the captured image data. This work explores the use of sensor diversity in the image data within a framework for variational optical flow. In particular, a custom image sensor setup intended for vehicle applications is tested. Experimental results demonstrate the improved flow estimation performance when IR sensitivity or flash illumination is added to the system.

  10. Temporal and spatial localization of prediction-error signals in the visual brain.

    PubMed

    Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W

    2017-04-01

    It has been suggested that the brain pre-empts changes in the environment by generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remains elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid rotations and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally, we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain indexing the resolution of top-down influences and incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. Phonological memory in sign language relies on the visuomotor neural system outside the left hemisphere language network.

    PubMed

    Kanazawa, Yuji; Nakamura, Kimihiro; Ishii, Toru; Aso, Toshihiko; Yamazaki, Hiroshi; Omori, Koichi

    2017-01-01

    Sign language is an essential medium for everyday social interaction for deaf people and plays a critical role in verbal learning. In particular, language development in those people should heavily rely on the verbal short-term memory (STM) via sign language. Most previous studies compared neural activations during signed language processing in deaf signers and those during spoken language processing in hearing speakers. For sign language users, it thus remains unclear how visuospatial inputs are converted into the verbal STM operating in the left-hemisphere language network. Using functional magnetic resonance imaging, the present study investigated neural activation while bilinguals of spoken and signed language were engaged in a sequence memory span task. On each trial, participants viewed a nonsense syllable sequence presented either as written letters or as fingerspelling (4-7 syllables in length) and then held the syllable sequence for 12 s. Behavioral analysis revealed that participants relied on phonological memory while holding verbal information regardless of the type of input modality. At the neural level, this maintenance stage broadly activated the left-hemisphere language network, including the inferior frontal gyrus, supplementary motor area, superior temporal gyrus and inferior parietal lobule, for both letter and fingerspelling conditions. Interestingly, while most participants reported that they relied on phonological memory during maintenance, direct comparisons between letters and fingers revealed strikingly different patterns of neural activation during the same period. Namely, the effortful maintenance of fingerspelling inputs relative to letter inputs activated the left superior parietal lobule and dorsal premotor area, i.e., brain regions known to play a role in visuomotor analysis of hand/arm movements. 
These findings suggest that the dorsal visuomotor neural system subserves verbal learning via sign language by relaying gestural inputs to the classical left-hemisphere language network.

  12. Analyses of requirements for computer control and data processing experiment subsystems. Volume 2: ATM experiment S-056 image data processing system software development

    NASA Technical Reports Server (NTRS)

    1972-01-01

    The IDAPS (Image Data Processing System) is a user-oriented, computer-based, language and control system which provides a framework, or standard, for implementing image data processing applications; simplifies the set-up of image processing runs so that the system may be used without a working knowledge of computer programming or operation; streamlines operation of the image processing facility; and allows multiple applications to be run in sequence without operator interaction. The control system loads the operators, interprets the input, constructs the necessary parameters for each application, and calls the application. The overlay feature of the IBSYS loader (IBLDR) provides the means of running multiple operators which would otherwise overflow core storage.

  13. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as High Efficiency Video Coding.

  14. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    PubMed

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring a substantial input of time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a 54.8% reduction in images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can give researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
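
    The background-subtraction idea can be sketched in a few lines; the per-frame rule and both thresholds below are illustrative assumptions, not the program's actual histogram rules.

```python
import numpy as np

def flag_events(frames, bg, diff_thresh=30, frac_thresh=0.02):
    """Flag frames whose pixels differ from the background image by more than
    diff_thresh over more than frac_thresh of the frame area (a crude cue
    that a target animal, rather than noise, is present)."""
    flags = []
    for f in frames:
        moving = np.abs(f.astype(int) - bg.astype(int)) > diff_thresh
        flags.append(bool(moving.mean() > frac_thresh))
    return flags

bg = np.full((40, 40), 100, dtype=np.uint8)       # empty-scene reference
empty = bg.copy()
animal = bg.copy()
animal[10:20, 10:20] = 220                        # bright blob ~6% of the frame
assert flag_events([empty, animal], bg) == [False, True]
```

    Grouping runs of flagged frames into candidate crossing events, and grading the confidence of each run, would sit on top of a per-frame test like this one.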

  15. D-GENIES: dot plot large genomes in an interactive, efficient and simple way.

    PubMed

    Cabanettes, Floréal; Klopp, Christophe

    2018-01-01

    Dot plots are widely used to quickly compare sequence sets. They provide a synthetic similarity overview, highlighting repetitions, breaks and inversions. Different tools have been developed to easily generate genomic alignment dot plots, but they are often limited in the input sequence size. D-GENIES is a standalone and web application performing large genome alignments using the minimap2 software package and generating interactive dot plots. It enables users to sort query sequences along the reference, zoom in the plot and download several image, alignment or sequence files. D-GENIES is an easy-to-install, open-source software package (GPL) developed in Python and JavaScript. The source code is available at https://github.com/genotoul-bioinfo/dgenies and it can be tested at http://dgenies.toulouse.inra.fr/.
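
    What a dot plot displays can be illustrated with a naive k-mer comparison. D-GENIES itself delegates alignment to minimap2 and only renders the result; this brute-force version is far too slow for large genomes and serves purely as an illustration.

```python
def dotplot(a, b, k=4):
    """Boolean matrix with cell (i, j) set when the k-mer of `a` starting at i
    equals the k-mer of `b` starting at j; diagonal runs reveal shared
    segments, anti-diagonal runs reveal inversions."""
    rows, cols = len(a) - k + 1, len(b) - k + 1
    return [[a[i:i + k] == b[j:j + k] for j in range(cols)] for i in range(rows)]

# Identical sequences light up the main diagonal.
m = dotplot("GATTACAGATC", "GATTACAGATC")
assert all(m[i][i] for i in range(len(m)))
```

    A real tool replaces the quadratic scan with indexed alignment and draws only the matched segments, which is what makes interactive plots of whole genomes feasible.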

  16. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Treesearch

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  17. Optimal Filter Estimation for Lucas-Kanade Optical Flow

    PubMed Central

    Sharmin, Nusrat; Brad, Remus

    2012-01-01

    Optical flow algorithms offer a way to estimate motion from a sequence of images. The computation of optical flow plays a key role in several computer vision applications, including motion detection and segmentation, frame interpolation, three-dimensional scene reconstruction, robot navigation and video compression. In gradient-based optical flow implementations, the pre-filtering step plays a vital role, not only for accurate computation of optical flow, but also for the improvement of performance. Generally, in optical flow computation, filtering is applied initially to the original input images and afterwards the images are resized. In this paper, we propose an image filtering approach as a pre-processing step for the Lucas-Kanade pyramidal optical flow algorithm. Based on a study of different types of filtering methods applied to the Iterative Refined Lucas-Kanade, we determined the best filtering practice. As the Gaussian smoothing filter was selected, an empirical approach for estimating the Gaussian variance was introduced. Tested on the Middlebury image sequences, a correlation between the image intensity values and the standard deviation of the Gaussian function was established. Finally, we found that our selection method offers better performance for the Lucas-Kanade optical flow algorithm.
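
    The pipeline the paper studies, Gaussian pre-filtering followed by Lucas-Kanade least squares, can be sketched for a single window. The fixed sigma and the synthetic one-pixel shift below are illustrative assumptions, not the paper's empirically estimated variance.

```python
import numpy as np

def gaussian_kernel(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return g / g.sum()

def smooth(img, sigma):
    """Separable Gaussian pre-filter applied before differentiation."""
    g = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(np.convolve, 1, img, g, mode="same")
    return np.apply_along_axis(np.convolve, 0, tmp, g, mode="same")

def lk_flow(I0, I1):
    """Single-window Lucas-Kanade: least-squares solution of
    Ix*u + Iy*v = -It over all pixels in the window."""
    Iy, Ix = np.gradient(I0)            # gradients along rows (y) and cols (x)
    It = I1 - I0
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    flow, *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return flow                          # (u, v) in pixels

rng = np.random.default_rng(2)
base = smooth(rng.random((80, 80)), 3.0)         # pre-filter keeps gradients valid
I0 = base[10:-10, 10:-10]
I1 = np.roll(base, 1, axis=1)[10:-10, 10:-10]    # true motion: 1 px along x
u, v = lk_flow(I0, I1)
assert abs(u - 1.0) < 0.3 and abs(v) < 0.3
```

    Without the smoothing step the gradient linearization breaks down on noisy, high-frequency input, which is why the choice of pre-filter matters so much in practice.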

  18. Hand gesture recognition by analysis of codons

    NASA Astrophysics Data System (ADS)

    Ramachandra, Poornima; Shrikhande, Neelima

    2007-09-01

    The problem of recognizing gestures from images using computers can be approached by closely understanding how the human brain tackles it. A full-fledged gesture recognition system would substitute for the mouse and keyboard completely. Humans can recognize most gestures by looking at the characteristic external shape or silhouette of the fingers. Many previous techniques to recognize gestures dealt with motion and geometric features of hands. In this thesis, gestures are recognized by the Codon-list pattern extracted from the object contour. All edges of an image are described in terms of a sequence of Codons. The Codons are defined in terms of the relationship between the maxima, minima and zeros of curvature encountered as one traverses the boundary of the object. We have concentrated on a catalog of 24 gesture images from the American Sign Language alphabet (letters J and Z are ignored as they are represented using motion) [2]. The query image given as input to the system is analyzed and tested against the Codon-lists, which are shape descriptors for the external parts of a hand gesture. We use the Weighted Frequency Indexing Transform (WFIT) approach, also used in DNA sequence matching, for matching the Codon-lists. The matching algorithm consists of two steps: 1) the query sequences are converted to short sequences and assigned weights, and 2) all the sequences of query gestures are pruned into match and mismatch subsequences by the frequency indexing tree based on the weights of the subsequences. The Codon sequences with the most weight are used to determine the most precise match. Once a match is found, the identified gesture and corresponding interpretation are shown as output.

  19. Dactyl Alphabet Gesture Recognition in a Video Sequence Using Microsoft Kinect

    NASA Astrophysics Data System (ADS)

    Artyukhin, S. G.; Mestetskiy, L. M.

    2015-05-01

    This paper presents an efficient framework for solving the problem of static gesture recognition based on data obtained from web cameras and the Kinect depth sensor (RGB-D data). Each gesture is given by a pair of images: a color image and a depth map. The database stores gestures as feature descriptions, generated frame by frame for each gesture of the alphabet. The recognition algorithm takes as input a video sequence (a sequence of frames) for marking, puts each frame in correspondence with a gesture from the database, or decides that no suitable gesture exists in the database. First, each frame of the video sequence is classified separately, without interframe information. Then, a run of frames successfully marked with the same gesture is grouped into a single static gesture. We propose a combined segmentation of each frame by depth map and RGB image. The primary segmentation is based on the depth map; it gives information about the position of the hands and a rough border. Then, based on the color image, the border is refined and the shape of the hand is analyzed. The method of continuous skeletons is used to generate features. We propose a method based on terminal skeleton branches, which makes it possible to determine the positions of the fingers and the wrist. The classification feature for a gesture is a description of the positions of the fingers relative to the wrist. Experiments were carried out with the developed algorithm on the example of American Sign Language. An American Sign Language gesture has several components, including the shape of the hand, its orientation in space and the type of movement. The accuracy of the proposed method is evaluated on a collected gesture base consisting of 2700 frames.

  20. A new RF transmit coil for foot and ankle imaging at 7T MRI.

    PubMed

    Santini, Tales; Kim, Junghwan; Wood, Sossena; Krishnamurthy, Narayanan; Farhat, Nadim; Maciel, Carlos; Raval, Shailesh B; Zhao, Tiejun; Ibrahim, Tamer S

    2018-01-01

    A four-channel Tic-Tac-Toe (TTT) transmit RF coil was designed and constructed for foot and ankle imaging at 7T MRI. Numerical simulations using an in-house developed FDTD package and experimental analyses using a homogeneous phantom show an excellent agreement in terms of B1+ field distribution and s-parameters. Simulations performed on an anatomically detailed human lower leg model demonstrated a B1+ field distribution with a coefficient of variation (CV) of 23.9%/15.6%/28.8% and average B1+ of 0.33μT/0.56μT/0.43μT for 1W input power (i.e., 0.25W per channel) in the ankle/calcaneus/mid foot, respectively. In-vivo B1+ mapping shows an average B1+ of 0.29μT over the entire foot/ankle. This newly developed RF coil also presents acceptable levels of average SAR (0.07W/kg for 10g per 1W of input power) and peak SAR (0.34W/kg for 10g per 1W of input power) over the whole lower leg. Preliminary in-vivo images of the foot/ankle were acquired using the T2-DESS MRI sequence without the use of a dedicated receive-only array. Copyright © 2017. Published by Elsevier Inc.

  1. Neural net target-tracking system using structured laser patterns

    NASA Astrophysics Data System (ADS)

    Cho, Jae-Wan; Lee, Yong-Bum; Lee, Nam-Ho; Park, Soon-Yong; Lee, Jongmin; Choi, Gapchu; Baek, Sunghyun; Park, Dong-Sun

    1996-06-01

    In this paper, we describe a robot end-effector tracking system using sensory information from recently announced structured-pattern laser diodes, which can generate images with several different types of structured pattern. The neural network approach is employed to recognize the robot end-effector under three types of motion: translation, scaling and rotation. Features for the neural network to detect the position of the end-effector are extracted from the preprocessed images. Artificial neural networks are used to store models and to match unknown input features, recognizing the position of the robot end-effector. Since a minimal number of samples are used for the different orientations of the robot end-effector, a neural network with generalization capability can be utilized for unknown input features. A feedforward neural network trained with back-propagation learning is used to detect the position of the robot end-effector. Another feedforward neural network module is used to estimate the motion from a sequence of images and to control movements of the robot end-effector. Combining the two neural networks for recognizing the robot end-effector and estimating the motion with the preprocessing stage, the whole system tracks the robot end-effector effectively.

  2. Recurrent Network models of sequence generation and memory

    PubMed Central

    Rajan, Kanaka; Harvey, Christopher D; Tank, David W

    2016-01-01

    Sequential activation of neurons is a common feature of network activity during a variety of behaviors, including working memory and decision making. Previous network models for sequences and memory emphasized specialized architectures in which a principled mechanism is pre-wired into their connectivity. Here, we demonstrate that starting from random connectivity and modifying a small fraction of connections, a largely disordered recurrent network can produce sequences and implement working memory efficiently. We use this process, called Partial In-Network training (PINning), to model and match cellular-resolution imaging data from the posterior parietal cortex during a virtual memory-guided two-alternative forced choice task [Harvey, Coen and Tank, 2012]. Analysis of the connectivity reveals that sequences propagate through the cooperation between recurrent synaptic interactions and external inputs, rather than through feedforward or asymmetric connections. Together, our results suggest that neural sequences may emerge through learning from largely unstructured network architectures. PMID:26971945

  3. Automated Mapping and Characterization of RSL from HiRISE data with MAARSL

    NASA Astrophysics Data System (ADS)

    Bue, Brian; Wagstaff, Kiri; Stillman, David

    2017-10-01

    Recurring slope lineae (RSL) are narrow (0.5-5m) low-albedo features on Mars that recur, fade, and incrementally lengthen on steep slopes throughout the year. Determining the processes that generate RSL requires detailed analysis of high-resolution orbital images to measure RSL surface properties and seasonal variation. However, conducting this analysis manually is labor intensive, time consuming, and infeasible given the large number of relevant sites. This abstract describes the Mapping and Automated Analysis of RSL (MAARSL) system, which we designed to aid large-scale analysis of seasonal RSL properties. MAARSL takes an ordered sequence of high spatial resolution, orthorectified, and coregistered orbital image data (e.g., MRO HiRISE images) and a corresponding Digital Terrain Model (DTM) as input and performs three primary functions: (1) detect and delineate candidate RSL in each image, (2) compute statistics of surface morphology and observed radiance for each candidate, and (3) measure temporal variation between candidates in adjacent images. The main challenge in automatic image-based RSL detection is discriminating true RSL from other low-albedo regions such as shadows or changes in surface materials. To discriminate RSL from shadows, MAARSL constructs a linear illumination model for each image based on the DTM and the position and orientation of the instrument at image acquisition time. We filter out any low-albedo regions that appear to be shadows via a least-squares fit between the modeled illumination and the observed intensity in each image. False detections occur in areas where the 1m/pixel HiRISE DTM poorly captures the variability of terrain observed in the 0.25m/pixel HiRISE images. To remove these spurious detections, we developed an interactive machine learning graphical interface that uses expert input to filter and validate the RSL candidates.
This tool yielded 636 candidates from a well-studied sequence of 18 HiRISE images of Garni crater in Valles Marineris with minimal manual effort. We describe our analysis of RSL candidates at Garni crater and Coprates Montes and ongoing studies of other regions where RSL occur.

  4. Data compression of discrete sequence: A tree based approach using dynamic programming

    NASA Technical Reports Server (NTRS)

    Shivaram, Gurusrasad; Seetharaman, Guna; Rao, T. R. N.

    1994-01-01

A dynamic programming based approach for data compression of a 1D sequence is presented. The compression of an input sequence of size N to a smaller size k is achieved by dividing the input sequence into k subsequences and replacing each subsequence by its average value. The partitioning of the input sequence is carried out with the intention of minimizing the mean squared error in the reconstructed sequence. The complexity involved in finding the partition that yields such an optimal compressed sequence is reduced by using the dynamic programming approach presented here.
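The optimal k-segmentation described in this abstract can be sketched with a standard dynamic program. This is a minimal illustration of the technique, not the authors' implementation; the function name and interface are ours.

```python
def compress_sequence(x, k):
    """Partition x into k contiguous segments and replace each segment by
    its mean, choosing boundaries that minimize the total squared error.
    Returns (segment means, segment boundaries, minimal SSE)."""
    n = len(x)
    # prefix sums of values and of squares give O(1) segment-cost queries
    ps = [0.0] * (n + 1)
    ps2 = [0.0] * (n + 1)
    for i, v in enumerate(x):
        ps[i + 1] = ps[i] + v
        ps2[i + 1] = ps2[i] + v * v

    def sse(i, j):
        # squared error of segment x[i:j] when replaced by its mean
        s, s2, m = ps[j] - ps[i], ps2[j] - ps2[i], j - i
        return s2 - s * s / m

    INF = float('inf')
    # cost[j][t]: minimal SSE of splitting the prefix x[:j] into t segments
    cost = [[INF] * (k + 1) for _ in range(n + 1)]
    back = [[0] * (k + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for j in range(1, n + 1):
        for t in range(1, min(j, k) + 1):
            for i in range(t - 1, j):
                c = cost[i][t - 1] + sse(i, j)
                if c < cost[j][t]:
                    cost[j][t], back[j][t] = c, i
    # recover the optimal boundaries by walking the back-pointers
    bounds, j = [], n
    for t in range(k, 0, -1):
        i = back[j][t]
        bounds.append((i, j))
        j = i
    bounds.reverse()
    means = [(ps[b] - ps[a]) / (b - a) for a, b in bounds]
    return means, bounds, cost[n][k]
```

For example, compressing `[1, 1, 1, 5, 5, 5]` to k = 2 recovers the two flat runs with zero reconstruction error.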

  5. Study of time-lapse processing for dynamic hydrologic conditions. [electronic satellite image analysis console for Earth Resources Technology Satellites imagery

    NASA Technical Reports Server (NTRS)

    Serebreny, S. M.; Evans, W. E.; Wiegman, E. J.

    1974-01-01

The usefulness of dynamic display techniques in exploiting the repetitive nature of ERTS imagery was investigated. A specially designed Electronic Satellite Image Analysis Console (ESIAC) was developed and employed to process data for seven ERTS principal investigators studying dynamic hydrological conditions for diverse applications. These applications include measurement of snowfield extent and sediment plumes from estuary discharge, Playa Lake inventory, and monitoring of phreatophyte and other vegetation changes. The ESIAC provides facilities for storing registered image sequences in a magnetic video disc memory for subsequent recall, enhancement, and animated display in monochrome or color. The most distinctive feature of the system is the capability to time lapse the imagery and analytic displays of the imagery. Data products included quantitative measurements of distances and areas, binary thematic maps based on monospectral or multispectral decisions, radiance profiles, and movie loops. Applications of animation for uses other than creating time-lapse sequences are identified. Input to the ESIAC can be either digital or via photographic transparencies.

  6. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

A motion video data system includes a compression system, including an image compressor, an image decompressor correlative to the image compressor having an input connected to an output of the image compressor, a feedback summing node having one input connected to an output of the image decompressor, a picture memory having an input connected to an output of the feedback summing node, apparatus for comparing an image stored in the picture memory with a received input image and deducing therefrom pixels having differences between the stored image and the received image and for retrieving from the picture memory a partial image including the pixels only and applying the partial image to another input of the feedback summing node, whereby to produce at the output of the feedback summing node an updated decompressed image, a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image whereby to produce a compressed difference image at the output of the image compressor.

  7. Manchester visual query language

    NASA Astrophysics Data System (ADS)

    Oakley, John P.; Davis, Darryl N.; Shann, Richard T.

    1993-04-01

We report a database language for visual retrieval which allows queries on image feature information which has been computed and stored along with images. The language is novel in that it provides facilities for dealing with feature data which has actually been obtained from image analysis. Each line in the Manchester Visual Query Language (MVQL) takes a set of objects as input and produces another, usually smaller, set as output. The MVQL constructs are mainly based on proven operators from the field of digital image analysis. An example is the Hough-group operator which takes as input a specification for the objects to be grouped, a specification for the relevant Hough space, and a definition of the voting rule. The output is a ranked list of high scoring bins. The query could be directed towards one particular image or an entire image database; in the latter case, the bins in the output list would in general be associated with different images. We have implemented MVQL in two layers. The command interpreter is a Lisp program which maps each MVQL line to a sequence of commands which are used to control a specialized database engine. The latter is a hybrid graph/relational system which provides low-level support for inheritance and schema evolution. In the paper we outline the language and provide examples of useful queries. We also describe our solution to the engineering problems associated with the implementation of MVQL.

  8. Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model

    PubMed Central

    Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon

    2015-01-01

    In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method is comprised of depth-MHI-HOG (DMH), action modeling, action spotting, and recognition. First, to effectively separate the foreground from background, we propose a method called DMH. It includes a standard structure for segmenting images and extracting features by using depth information, MHI, and HOG. Second, action modeling is performed to model various actions using extracted features. The modeling of actions is performed by creating sequences of actions through k-means clustering; these sequences constitute HMM input. Third, a method of action spotting is proposed to filter meaningless actions from continuous actions and to identify precise start and end points of actions. By employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on start and end points. We evaluate recognition performance by employing the proposed method to obtain and compare probabilities by applying input sequences in action models and the spotter model. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172
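The feature-to-symbol step described above — k-means clustering of per-frame features to produce the discrete observation sequences that constitute HMM input — can be sketched as follows. This is a generic illustration with hypothetical names, not the authors' code; a minimal k-means stands in for whatever clustering implementation they used.

```python
import numpy as np

def features_to_symbols(frames, k, iters=20, seed=0):
    """Quantize per-frame feature vectors into a discrete symbol sequence
    with a minimal k-means; the resulting symbol sequence would then serve
    as the observation input to an HMM."""
    rng = np.random.default_rng(seed)
    X = np.asarray(frames, dtype=float)
    # initialize centres from k distinct frames
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each frame to its nearest cluster centre
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned frames
        for c in range(k):
            pts = X[labels == c]
            if len(pts):
                centers[c] = pts.mean(axis=0)
    return labels.tolist()
```

Frames with similar features map to the same symbol, so each action becomes a short string over k symbols.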

  9. Parameter Estimation in Atmospheric Data Sets

    NASA Technical Reports Server (NTRS)

    Wenig, Mark; Colarco, Peter

    2004-01-01

In this study the structure tensor technique is used to estimate dynamical parameters in atmospheric data sets. The structure tensor is a common tool for estimating motion in image sequences. This technique can be extended to estimate other dynamical parameters such as diffusion constants or exponential decay rates. A general mathematical framework was developed for the direct estimation of the physical parameters that govern the underlying processes from image sequences. This estimation technique can be adapted to the specific physical problem under investigation, so it can be used in a variety of applications in trace gas, aerosol, and cloud remote sensing. As a test scenario this technique will be applied to modeled dust data. In this case vertically integrated dust concentrations were used to derive wind information. Those results can be compared to the wind vector fields which served as input to the model. Based on this analysis, a method to compute atmospheric data parameter fields will be presented.
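The core idea — estimating motion by least squares on image gradients, using the sums that form the structure tensor — can be sketched for the simplest case of a single global translation between two frames. This is an illustrative simplification under the brightness-constancy assumption, not the study's actual estimator.

```python
import numpy as np

def estimate_motion(f0, f1):
    """Estimate one global (vx, vy) translation between frames f0 and f1 by
    least squares on the brightness-constancy equation Ix*vx + Iy*vy + It = 0.
    The 2x2 normal-equation matrix is the structure tensor summed over the
    whole frame."""
    f0 = np.asarray(f0, dtype=float)
    f1 = np.asarray(f1, dtype=float)
    Ix = np.gradient(f0, axis=1)   # spatial gradient in x
    Iy = np.gradient(f0, axis=0)   # spatial gradient in y
    It = f1 - f0                   # temporal derivative
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    vx, vy = np.linalg.solve(A, b)
    return vx, vy  # pixels per frame
```

Applied per-window instead of per-frame, the same normal equations give a dense velocity field; the paper's extension replaces the translation model with other parameterized dynamics.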

  10. MRI-Only Based Radiotherapy Treatment Planning for the Rat Brain on a Small Animal Radiation Research Platform (SARRP).

    PubMed

    Gutierrez, Shandra; Descamps, Benedicte; Vanhove, Christian

    2015-01-01

    Computed tomography (CT) is the standard imaging modality in radiation therapy treatment planning (RTP). However, magnetic resonance (MR) imaging provides superior soft tissue contrast, increasing the precision of target volume selection. We present MR-only based RTP for a rat brain on a small animal radiation research platform (SARRP) using probabilistic voxel classification with multiple MR sequences. Six rat heads were imaged, each with one CT and five MR sequences. The MR sequences were: T1-weighted, T2-weighted, zero-echo time (ZTE), and two ultra-short echo time sequences with 20 μs (UTE1) and 2 ms (UTE2) echo times. CT data were manually segmented into air, soft tissue, and bone to obtain the RTP reference. Bias field corrected MR images were automatically segmented into the same tissue classes using a fuzzy c-means segmentation algorithm with multiple images as input. Similarities between segmented CT and automatic segmented MR (ASMR) images were evaluated using Dice coefficient. Three ASMR images with high similarity index were used for further RTP. Three beam arrangements were investigated. Dose distributions were compared by analysing dose volume histograms. The highest Dice coefficients were obtained for the ZTE-UTE2 combination and for the T1-UTE1-T2 combination when ZTE was unavailable. Both combinations, along with UTE1-UTE2, often used to generate ASMR images, were used for further RTP. Using 1 beam, MR based RTP underestimated the dose to be delivered to the target (range: 1.4%-7.6%). When more complex beam configurations were used, the calculated dose using the ZTE-UTE2 combination was the most accurate, with 0.7% deviation from CT, compared to 0.8% for T1-UTE1-T2 and 1.7% for UTE1-UTE2. The presented MR-only based workflow for RTP on a SARRP enables both accurate organ delineation and dose calculations using multiple MR sequences. 
This method can be useful in longitudinal studies where CT's cumulative radiation dose might contribute to the total dose.

  11. MRI-Only Based Radiotherapy Treatment Planning for the Rat Brain on a Small Animal Radiation Research Platform (SARRP)

    PubMed Central

    Gutierrez, Shandra; Descamps, Benedicte; Vanhove, Christian

    2015-01-01

    Computed tomography (CT) is the standard imaging modality in radiation therapy treatment planning (RTP). However, magnetic resonance (MR) imaging provides superior soft tissue contrast, increasing the precision of target volume selection. We present MR-only based RTP for a rat brain on a small animal radiation research platform (SARRP) using probabilistic voxel classification with multiple MR sequences. Six rat heads were imaged, each with one CT and five MR sequences. The MR sequences were: T1-weighted, T2-weighted, zero-echo time (ZTE), and two ultra-short echo time sequences with 20 μs (UTE1) and 2 ms (UTE2) echo times. CT data were manually segmented into air, soft tissue, and bone to obtain the RTP reference. Bias field corrected MR images were automatically segmented into the same tissue classes using a fuzzy c-means segmentation algorithm with multiple images as input. Similarities between segmented CT and automatic segmented MR (ASMR) images were evaluated using Dice coefficient. Three ASMR images with high similarity index were used for further RTP. Three beam arrangements were investigated. Dose distributions were compared by analysing dose volume histograms. The highest Dice coefficients were obtained for the ZTE-UTE2 combination and for the T1-UTE1-T2 combination when ZTE was unavailable. Both combinations, along with UTE1-UTE2, often used to generate ASMR images, were used for further RTP. Using 1 beam, MR based RTP underestimated the dose to be delivered to the target (range: 1.4%-7.6%). When more complex beam configurations were used, the calculated dose using the ZTE-UTE2 combination was the most accurate, with 0.7% deviation from CT, compared to 0.8% for T1-UTE1-T2 and 1.7% for UTE1-UTE2. The presented MR-only based workflow for RTP on a SARRP enables both accurate organ delineation and dose calculations using multiple MR sequences. 
This method can be useful in longitudinal studies where CT’s cumulative radiation dose might contribute to the total dose. PMID:26633302

  12. Image enhancement by non-linear extrapolation in frequency space

    NASA Technical Reports Server (NTRS)

    Anderson, Charles H. (Inventor); Greenspan, Hayit K. (Inventor)

    1998-01-01

An input image is enhanced to include spatial frequency components having frequencies higher than those in the input image. To this end, an edge map is generated from the input image using a high band pass filtering technique. An enhanced map is subsequently generated from the edge map, with the enhanced map having spatial frequencies exceeding an initial maximum spatial frequency of the input image. The enhanced map is generated by applying a non-linear operator to the edge map in a manner which preserves the phase transitions of the edges of the input image. The enhanced map is added to the input image to achieve a resulting image having spatial frequencies greater than those in the input image. Simplicity of computations and ease of implementation allow for image sharpening after enlargement and for real-time applications such as videophones, advanced definition television, zooming, and restoration of old motion pictures.
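The pipeline — high-pass edge map, phase-preserving non-linearity, add back to the input — can be sketched in a few lines. The box filter, gain, and clipping threshold below are illustrative choices, not the patent's exact operator.

```python
import numpy as np

def enhance(img, gain=2.0, clip=0.15):
    """Sharpen by non-linear extrapolation: high-pass the image to get an
    edge map, clip it (a non-linearity that generates frequencies above the
    input's maximum while preserving each edge's sign/phase), and add the
    result back to the input."""
    img = np.asarray(img, dtype=float)
    # 3x3 box blur as the low-pass; edge map = image - blur (high-pass)
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    blur = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    edges = img - blur
    return img + gain * np.clip(edges, -clip, clip)
```

On a step edge this produces the characteristic overshoot on both sides of the transition while leaving flat regions untouched.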

  13. Automatic detection of pelvic lymph nodes using multiple MR sequences

    NASA Astrophysics Data System (ADS)

    Yan, Michelle; Lu, Yue; Lu, Renzhi; Requardt, Martin; Moeller, Thomas; Takahashi, Satoru; Barentsz, Jelle

    2007-03-01

A system for automatic detection of pelvic lymph nodes is developed by incorporating complementary information extracted from multiple MR sequences. A single MR sequence lacks sufficient diagnostic information for lymph node localization and staging. Correct diagnosis often requires input from multiple complementary sequences which makes manual detection of lymph nodes very labor intensive. Small lymph nodes are often missed even by highly-trained radiologists. The proposed system is aimed at assisting radiologists in finding lymph nodes faster and more accurately. To the best of our knowledge, this is the first such system reported in the literature. A 3-dimensional (3D) MR angiography (MRA) image is employed for extracting blood vessels that serve as a guide in searching for pelvic lymph nodes. Segmentation, shape and location analysis of potential lymph nodes are then performed using a high resolution 3D T1-weighted VIBE (T1-vibe) MR sequence acquired by a Siemens 3T scanner. An optional contrast-agent enhanced MR image, such as a post ferumoxtran-10 T2*-weighted MEDIC sequence, can also be incorporated to further improve detection accuracy of malignant nodes. The system outputs a list of potential lymph node locations that are overlaid onto the corresponding MR sequences and presents them to users with associated confidence levels as well as their sizes and lengths in each axis. Preliminary studies demonstrate the feasibility of automatic lymph node detection and scenarios in which this system may be used to assist radiologists in diagnosis and reporting.

  14. Weight distributions for turbo codes using random and nonrandom permutations

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Divsalar, D.

    1995-01-01

    This article takes a preliminary look at the weight distributions achievable for turbo codes using random, nonrandom, and semirandom permutations. Due to the recursiveness of the encoders, it is important to distinguish between self-terminating and non-self-terminating input sequences. The non-self-terminating sequences have little effect on decoder performance, because they accumulate high encoded weight until they are artificially terminated at the end of the block. From probabilistic arguments based on selecting the permutations randomly, it is concluded that the self-terminating weight-2 data sequences are the most important consideration in the design of constituent codes; higher-weight self-terminating sequences have successively decreasing importance. Also, increasing the number of codes and, correspondingly, the number of permutations makes it more and more likely that the bad input sequences will be broken up by one or more of the permuters. It is possible to design nonrandom permutations that ensure that the minimum distance due to weight-2 input sequences grows roughly as the square root of (2N), where N is the block length. However, these nonrandom permutations amplify the bad effects of higher-weight inputs, and as a result they are inferior in performance to randomly selected permutations. But there are 'semirandom' permutations that perform nearly as well as the designed nonrandom permutations with respect to weight-2 input sequences and are not as susceptible to being foiled by higher-weight inputs.
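The "semirandom" permutations discussed above are commonly constructed as S-random interleavers: indices are drawn at random, but a candidate is rejected if it lies within S of any of the S most recently placed indices, which is exactly the spreading property that breaks up low-weight (especially weight-2) input sequences. The following is our minimal sketch of that construction, not the article's own design.

```python
import random

def s_random_permutation(n, s, seed=0):
    """Build an S-random interleaver of length n: each selected index must
    differ by at least s from each of the s previously selected indices.
    Restarts if the greedy draw gets stuck (workable in practice for
    s up to roughly sqrt(n/2))."""
    rng = random.Random(seed)
    while True:
        pool = list(range(n))
        rng.shuffle(pool)
        perm = []
        stuck = False
        while pool:
            # take the first remaining candidate that satisfies the
            # spreading constraint against the last s chosen indices
            for idx, cand in enumerate(pool):
                if all(abs(cand - p) >= s for p in perm[-s:]):
                    perm.append(pool.pop(idx))
                    break
            else:
                stuck = True  # no admissible candidate left; restart
                break
        if not stuck:
            return perm
```

The spreading constraint guarantees that two input bits that are close together before interleaving end up far apart afterwards, so a weight-2 self-terminating sequence cannot be "bad" for both constituent encoders at once.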

  15. Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera

    NASA Astrophysics Data System (ADS)

    Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.

    2017-09-01

Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel method for object detection in video sequences. This process helps us to understand the advantages of the method being used. The proposed framework compares the correct and incorrect detection percentages of the algorithm. The method was evaluated with data collected in the field of urban transport, including cars and pedestrians in a fixed-camera situation. The results show that the accuracy of the algorithm decreases as image resolution is reduced.

  16. Method for traffic-sign detection within a picture by color identification and external shape recognition

    NASA Astrophysics Data System (ADS)

    Falcoff, Daniel E.; Canali, Luis R.

    1999-08-01

This work presents a method for the detection and recognition of road signs on highways and in cities. It is based fundamentally on identifying the color and shape of the road sign, located at the edge of the highway or city street, followed by recognition. To do so, the acquired RGB image is processed by applying various filters to the input image sequence, or alternatively by intensifying its colors, in order to recognize the sign's silhouette, segment the sign, and compare its symbology against a previously stored and classified database.

  17. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    NASA Astrophysics Data System (ADS)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ASDA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm reduces the processing time of disparity estimation by selecting an adaptive disparity search range, and it also increases the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the 'Pot Plant' and 'IVO' stereo sequences show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms and reduces the synthesis time of a reconstructed image by about 7.02 s.
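The adaptive-range idea — restricting each disparity search to a window around a previous estimate instead of scanning the full range — can be sketched with plain SAD block matching. This is our simplified illustration of the principle; the block size, range, and margin are hypothetical parameters, not the paper's.

```python
import numpy as np

def match_block(left, right, x, y, block, lo, hi):
    """SAD block matching: find the disparity d in [lo, hi] minimizing the
    sum of absolute differences between the block at (y, x) in `left` and
    the block shifted left by d in `right`."""
    patch = left[y:y + block, x:x + block]
    best_d, best_cost = lo, float('inf')
    for d in range(lo, hi + 1):
        if x - d < 0:
            break
        cand = right[y:y + block, x - d:x - d + block]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def adaptive_disparity(left, right, x, y, block=4, d_max=8, prev=None, margin=1):
    """With a previous estimate, search only prev +/- margin instead of the
    full [0, d_max] range -- the adaptive search-range idea."""
    if prev is None:
        return match_block(left, right, x, y, block, 0, d_max)
    return match_block(left, right, x, y, block,
                       max(0, prev - margin), min(d_max, prev + margin))
```

Shrinking the search from d_max + 1 candidates to 2 * margin + 1 is where the processing-time reduction comes from.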

  18. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction

    NASA Astrophysics Data System (ADS)

    Su, X.

    2017-12-01

A satellite cloud image contains much weather information, such as precipitation information. Short-term cloud movement forecasting is important for precipitation forecasting and is the primary means of typhoon monitoring. Traditional methods mostly use cloud feature matching and linear extrapolation to predict cloud movement, so nonstationary processes such as inversion and deformation during cloud movement are largely ignored. Predicting cloud movement promptly and correctly remains a hard task. As deep learning models perform well in learning spatiotemporal features, we can regard cloud image prediction as a spatiotemporal sequence forecasting problem and introduce a deep learning model to solve it. In this research, we use a variant of the Gated Recurrent Unit (GRU) that has convolutional structures to handle spatiotemporal features, and we build an end-to-end model to solve this forecasting problem. In this model, both the input and output are spatiotemporal sequences. Compared to the Convolutional LSTM (ConvLSTM) model, this model has fewer parameters. We apply this model to GOES satellite data, and the model performs well.

  19. Construction of trypanosome artificial mini-chromosomes.

    PubMed Central

    Lee, M G; E, Y; Axelrod, N

    1995-01-01

We report the preparation of two linear constructs which, when transformed into the procyclic form of Trypanosoma brucei, become stably inherited artificial mini-chromosomes. Both of the two constructs, one of 10 kb and the other of 13 kb, contain a T.brucei PARP promoter driving a chloramphenicol acetyltransferase (CAT) gene. In the 10 kb construct the CAT gene is followed by one hygromycin phosphotransferase (Hph) gene, and in the 13 kb construct the CAT gene is followed by three tandemly linked Hph genes. At each end of these linear molecules are telomere repeats and subtelomeric sequences. Electroporation of these linear DNA constructs into the procyclic form of T.brucei generated hygromycin-B resistant cell lines. In these cell lines, the input DNA remained linear and bounded by the telomere ends, but it increased in size. In the cell lines generated by the 10 kb construct, the input DNA increased in size to 20-50 kb. In the cell lines generated by the 13 kb constructs, two sizes of linear DNAs containing the input plasmid were detected: one of 40-50 kb and the other of 150 kb. The increase in size was not the result of in vivo tandem repetitions of the input plasmid, but represented the addition of new sequences. These Hph containing linear DNA molecules were maintained stably in cell lines for at least 20 generations in the absence of drug selection and were subsequently referred to as trypanosome artificial mini-chromosomes, or TACs. PMID:8532534

  20. Test Input Generation for Red-Black Trees using Abstraction

    NASA Technical Reports Server (NTRS)

    Visser, Willem; Pasareanu, Corina S.; Pelanek, Radek

    2005-01-01

    We consider the problem of test input generation for code that manipulates complex data structures. Test inputs are sequences of method calls from the data structure interface. We describe test input generation techniques that rely on state matching to avoid generation of redundant tests. Exhaustive techniques use explicit state model checking to explore all the possible test sequences up to predefined input sizes. Lossy techniques rely on abstraction mappings to compute and store abstract versions of the concrete states; they explore under-approximations of all the possible test sequences. We have implemented the techniques on top of the Java PathFinder model checker and we evaluate them using a Java implementation of red-black trees.
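The state-matching idea can be sketched with a toy data structure — a set standing in for the red-black tree. This is our illustration of the exhaustive technique with concrete-state matching, not the Java PathFinder implementation.

```python
from collections import deque

def generate_tests(max_len, values=(1, 2)):
    """Breadth-first enumeration of method-call sequences (add/remove on a
    set) up to length max_len, keeping only sequences that reach a
    not-yet-seen state; matching on already-seen states prunes redundant
    tests."""
    seen = {frozenset()}
    tests = []
    queue = deque([((), frozenset())])
    while queue:
        seq, state = queue.popleft()
        if len(seq) == max_len:
            continue
        for op in ('add', 'remove'):
            for v in values:
                new_state = state | {v} if op == 'add' else state - {v}
                if new_state not in seen:  # state matching
                    seen.add(new_state)
                    new_seq = seq + ((op, v),)
                    tests.append(new_seq)
                    queue.append((new_seq, new_state))
    return tests
```

With two values there are only four reachable set states, so state matching collapses the exponentially many call sequences down to three tests; the lossy variant in the paper applies the same pruning to abstract versions of the states instead.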

  1. A magnetic-resonance-imaging-compatible remote catheter navigation system.

    PubMed

    Tavallaei, Mohammad Ali; Thakur, Yogesh; Haider, Syed; Drangova, Maria

    2013-04-01

    A remote catheter navigation system compatible with magnetic resonance imaging (MRI) has been developed to facilitate MRI-guided catheterization procedures. The interventionalist's conventional motions (axial motion and rotation) on an input catheter - acting as the master - are measured by a pair of optical encoders, and a custom embedded system relays the motions to a pair of ultrasonic motors. The ultrasonic motors drive the patient catheter (slave) within the MRI scanner, replicating the motion of the input catheter. The performance of the remote catheter navigation system was evaluated in terms of accuracy and delay of motion replication outside and within the bore of the magnet. While inside the scanner bore, motion accuracy was characterized during the acquisition of frequently used imaging sequences, including real-time gradient echo. The effect of the catheter navigation system on image signal-to-noise ratio (SNR) was also evaluated. The results show that the master-slave system has a maximum time delay of 41 ± 21 ms in replicating motion; an absolute value error of 2 ± 2° was measured for radial catheter motion replication over 360° and 1.0 ± 0.8 mm in axial catheter motion replication over 100 mm of travel. The worst-case SNR drop was observed to be 2.5%.

  2. Weighted LCS

    NASA Astrophysics Data System (ADS)

    Amir, Amihood; Gotthilf, Zvi; Shalom, B. Riva

The Longest Common Subsequence (LCS) of two strings A and B is a well-studied problem having a wide range of applications. When each symbol of the input strings is assigned a positive weight, the problem becomes the Heaviest Common Subsequence (HCS) problem. In this paper we consider a different version of weighted LCS on Position Weight Matrices (PWM). The Position Weight Matrix was introduced as a tool to handle a set of sequences that are not identical, yet have many local similarities. Such a weighted sequence is a 'statistical image' of this set, where we are given the probability of every symbol's occurrence at every text location. We consider two possible definitions of LCS on PWM. For the first, we solve the weighted LCS problem of z sequences in time O(zn^(z+1)). For the second, we prove NP-hardness and provide an approximation algorithm.
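For the unweighted-to-weighted step mentioned above, the Heaviest Common Subsequence is the classic LCS dynamic program with a per-symbol weight replacing the +1. This is a textbook sketch of HCS only, not the paper's PWM algorithm.

```python
def hcs(a, b, weight):
    """Heaviest Common Subsequence of strings a and b: the LCS dynamic
    program where a matched symbol contributes weight(symbol) instead of 1."""
    n, m = len(a), len(b)
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # skip a[i-1], skip b[j-1], or match them if equal
            best = max(dp[i - 1][j], dp[i][j - 1])
            if a[i - 1] == b[j - 1]:
                best = max(best, dp[i - 1][j - 1] + weight(a[i - 1]))
            dp[i][j] = best
    return dp[n][m]
```

With all weights equal to 1 this reduces to ordinary LCS; with unequal weights a single heavy symbol can outweigh a longer light subsequence.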

  3. Passive IFF: Autonomous Nonintrusive Rapid Identification of Friendly Assets

    NASA Technical Reports Server (NTRS)

    Moynihan, Philip; Steenburg, Robert Van; Chao, Tien-Hsin

    2004-01-01

A proposed optoelectronic instrument would identify targets rapidly, without need to radiate an interrogating signal, apply identifying marks to the targets, or equip the targets with transponders. The instrument was conceived as an identification, friend or foe (IFF) system in a battlefield setting, where it would be part of a targeting system for weapons, by providing rapid identification for aimed weapons to help in deciding whether and when to trigger them. The instrument could also be adapted to law-enforcement and industrial applications in which it is necessary to rapidly identify objects in view. The instrument would comprise mainly an optical correlator and a neural processor (see figure). The inherent parallel-processing speed and capability of the optical correlator would be exploited to obtain rapid identification of a set of probable targets within a scene of interest and to define regions within the scene for the neural processor to analyze. The neural processor would then concentrate on each region selected by the optical correlator in an effort to identify the target. Depending on whether or not a target was recognized by comparison of its image data with data in an internal database on which the neural processor was trained, the processor would generate an identifying signal (typically, friend or foe). The time taken for this identification process would be less than the time needed by a human or robotic gunner to acquire a view of, and aim at, a target. An optical correlator that has been under development for several years and that has been demonstrated to be capable of tracking a cruise missile might be considered a prototype of the optical correlator in the proposed IFF instrument. This optical correlator features a 512-by-512-pixel input image frame and operates at an input frame rate of 60 Hz.
It includes a spatial light modulator (SLM) for video-to-optical image conversion, a pair of precise lenses to effect Fourier transforms, a filter SLM for digital-to-optical correlation-filter data conversion, and a charge-coupled device (CCD) for detection of correlation peaks. In operation, the input scene grabbed by a video sensor is streamed into the input SLM. Precomputed correlation-filter data files representative of known targets are then downloaded and sequenced into the filter SLM at a rate of 1,000 Hz. When a match occurs between the input target data and one of the known-target data files, the CCD detects a correlation peak at the location of the target. Distortion-invariant correlation filters from a bank of such filters are then sequenced through the optical correlator for each input frame. The net result is the rapid preliminary recognition of one or a few targets.
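The correlator's principle — Fourier-transform the input scene, multiply by a conjugate correlation filter, and detect the correlation peak — has a direct digital analogue. The following matched-filter sketch illustrates that principle only; it is not a model of the optical hardware or its distortion-invariant filters.

```python
import numpy as np

def correlation_peak(scene, template):
    """Cross-correlate scene with template via FFT (matched filter: scene
    spectrum times the conjugate template spectrum) and return the
    (row, col) of the correlation peak, i.e. the detected target location."""
    F = np.fft.fft2(scene)
    H = np.conj(np.fft.fft2(template, s=scene.shape))
    corr = np.fft.ifft2(F * H).real
    return np.unravel_index(int(np.argmax(corr)), corr.shape)
```

Because the whole frame is correlated at once, the cost is independent of where (or how many times) the target appears — the digital counterpart of the correlator's parallelism.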

  4. The impact of visual sequencing of graphic symbols on the sentence construction output of children who have acquired language.

    PubMed

    Alant, Erna; du Plooy, Amelia; Dada, Shakila

    2007-01-01

Although the sequence of graphic or pictorial symbols displayed on a communication board can have an impact on the language output of children, very little research has been conducted to describe this. Research in this area is particularly relevant for prioritising the importance of specific visual and graphic features in providing more effective and user-friendly access to communication boards. This study is concerned with understanding the impact of specific sequences of graphic symbol input on the graphic and spoken output of children who have acquired language. Forty participants were divided into two comparable groups. Each group was exposed to graphic symbol input with a certain word order sequence. The structure of input was either in the typical English word order sequence Subject-Verb-Object (SVO) or in the word order sequence Subject-Object-Verb (SOV). Both input groups had to answer six questions by using graphic output as well as speech. The findings indicated that there are significant differences in the PCS graphic output patterns of children who are exposed to graphic input in the SOV and SVO sequences. Furthermore, the output produced in the graphic mode differed considerably from the output produced in the spoken mode. Clinical implications of these findings are discussed.

  5. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
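The look-up-table remapping described in this patent can be sketched directly with array indexing: each output pixel reads its value from the input coordinates stored in the tables. The flip transform below is an illustrative stand-in for the operator-selectable transformations, not one of the patent's specific visual-aid mappings.

```python
import numpy as np

def remap(img, map_y, map_x):
    """Coordinate remapping via look-up tables: output pixel (r, c) is read
    from input pixel (map_y[r, c], map_x[r, c]). Nearest-neighbour gather,
    so several outputs may read the same input (the many-to-one,
    'collective' case)."""
    return img[map_y, map_x]

def flip_lut(h, w):
    """Example operator-selectable transform: look-up tables for a
    horizontal flip of an h-by-w image."""
    ys, xs = np.indices((h, w))
    return ys, (w - 1) - xs
```

Swapping in a different pair of tables changes the transformation without touching the remapping code, which is what makes the transforms operator-selectable at video rate.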

  6. The minimal amount of starting DNA for Agilent’s hybrid capture-based targeted massively parallel sequencing

    PubMed Central

    Chung, Jongsuk; Son, Dae-Soon; Jeon, Hyo-Jeong; Kim, Kyoung-Mee; Park, Gahee; Ryu, Gyu Ha; Park, Woong-Yang; Park, Donghyun

    2016-01-01

    Targeted capture massively parallel sequencing is increasingly being used in clinical settings, and as costs continue to decline, use of this technology may become routine in health care. However, a limited amount of tissue has often been a challenge in meeting quality requirements. To offer a practical guideline for the minimum amount of input DNA for targeted sequencing, we optimized and evaluated the performance of targeted sequencing depending on the input DNA amount. First, using various amounts of input DNA, we compared commercially available library construction kits and selected Agilent’s SureSelect-XT and KAPA Biosystems’ Hyper Prep kits as the kits most compatible with targeted deep sequencing using Agilent’s SureSelect custom capture. Then, we optimized the adapter ligation conditions of the Hyper Prep kit to improve library construction efficiency and adapted multiplexed hybrid selection to reduce the cost of sequencing. In this study, we systematically evaluated the performance of the optimized protocol depending on the amount of input DNA, ranging from 6.25 to 200 ng, suggesting the minimal input DNA amounts based on coverage depths required for specific applications. PMID:27220682

  7. Optical neural network system for pose determination of spinning satellites

    NASA Technical Reports Server (NTRS)

    Lee, Andrew; Casasent, David

    1990-01-01

    An optical neural network architecture and algorithm based on a Hopfield optimization network are presented for multitarget tracking. This tracker utilizes a neuron for every possible target track, and a quadratic energy function of neural activities which is minimized using gradient descent neural evolution. The neural net tracker is demonstrated as part of a system for determining position and orientation (pose) of spinning satellites with respect to a robotic spacecraft. The input to the system is time sequence video from a single camera. Novelty detection and filtering are utilized to locate and segment novel regions from the input images. The neural net multitarget tracker determines the correspondences (or tracks) of the novel regions as a function of time, and hence the paths of object (satellite) parts. The path traced out by a given part or region is approximately elliptical in image space, and the position, shape and orientation of the ellipse are functions of the satellite geometry and its pose. Having a geometric model of the satellite, and the elliptical path of a part in image space, the three-dimensional pose of the satellite is determined. Digital simulation results using this algorithm are presented for various satellite poses and lighting conditions.

  8. CBrowse: a SAM/BAM-based contig browser for transcriptome assembly visualization and analysis.

    PubMed

    Li, Pei; Ji, Guoli; Dong, Min; Schmidt, Emily; Lenox, Douglas; Chen, Liangliang; Liu, Qi; Liu, Lin; Zhang, Jie; Liang, Chun

    2012-09-15

    To address the impending need for exploring rapidly increased transcriptomics data generated for non-model organisms, we developed CBrowse, an AJAX-based web browser for visualizing and analyzing transcriptome assemblies and contigs. Designed in a standard three-tier architecture with a data pre-processing pipeline, CBrowse is essentially a Rich Internet Application that offers many seamlessly integrated web interfaces and allows users to navigate, sort, filter, search and visualize data smoothly. The pre-processing pipeline takes the contig sequence file in FASTA format and its relevant SAM/BAM file as the input; detects putative polymorphisms, simple sequence repeats and sequencing errors in contigs and generates image, JSON and database-compatible CSV text files that are directly utilized by different web interfaces. CBrowse is a generic visualization and analysis tool that facilitates close examination of assembly quality, genetic polymorphisms, sequence repeats and/or sequencing errors in transcriptome sequencing projects. CBrowse is distributed under the GNU General Public License, available at http://bioinfolab.muohio.edu/CBrowse/ liangc@muohio.edu or liangc.mu@gmail.com; glji@xmu.edu.cn Supplementary data are available at Bioinformatics online.

  9. Multiframe digitization of x-ray (TV) images (abstract)

    NASA Astrophysics Data System (ADS)

    Karpenko, V. A.; Khil'chenko, A. D.; Lysenko, A. P.; Panchenko, V. E.

    1989-07-01

    The work in progress deals with the experimental search for a technique of digitizing x-ray TV images. The small volume of the buffer memory of the analog-to-digital (A/D) converter (ADC) we have previously used to detect TV signals made it necessary to digitize only one line at a time of the television raster and also to make use of gating to obtain the video information contained in the whole frame. This paper is devoted to multiframe digitizing. The recorder of video signals comprises a broadband 8-bit A/D converter, a buffer memory having 128K words and a control circuit which forms a necessary sequence of advance pulses for the A/D converter and the memory relative to the input frame and line sync pulses (FSP and LSP). The device provides recording of video signals corresponding to one or a few frames following one after another, or to their fragments. The control circuit is responsible for the separation of the required fragment of the TV image. When loading the limit registers, the following input parameters of the control circuit are set: the skipping of a definite number of lines after the next FSP, the number of the lines of recording inside a fragment, the frequency of the information lines inside a fragment, the delay in the start of the ADC conversion relative to the arrival of the LSP, the length of the information section of a line, and the frequency of taking the readouts in a line. In addition, among the instructions given are the number of frames of recording and the frequency of their sequence. Thus, the A/D converter operates only inside a given fragment of the TV image. The information is introduced into the memory in sequence, fragment by fragment, without skipping and is then extracted as samples according to the addresses needed for representation in the required form, and processing. The video signal recorder achieves a shortest ADC conversion time of 250 ns per point.
As before, among the apparatus used were an image vidicon with luminophor conversion of x-radiation to light, and a single-crystal x-ray diffraction scheme necessary to form dynamic test objects from x-ray lines dispersed in space (the projections of the linear focus of an x-ray tube).

  10. Recurrent neural networks for breast lesion classification based on DCE-MRIs

    NASA Astrophysics Data System (ADS)

    Antropova, Natasha; Huynh, Benjamin; Giger, Maryellen

    2018-02-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a significant role in breast cancer screening, cancer staging, and monitoring response to therapy. Recently, deep learning methods are being rapidly incorporated in image-based breast cancer diagnosis and prognosis. However, most of the current deep learning methods make clinical decisions based on 2-dimensional (2D) or 3D images and are not well suited for temporal image data. In this study, we develop a deep learning methodology that enables integration of clinically valuable temporal components of DCE-MRIs into deep learning-based lesion classification. Our work is performed on a database of 703 DCE-MRI cases for the task of distinguishing benign and malignant lesions, and uses the area under the ROC curve (AUC) as the performance metric in conducting that task. We train a recurrent neural network, specifically a long short-term memory network (LSTM), on sequences of image features extracted from the dynamic MRI sequences. These features are extracted with VGGNet, a convolutional neural network pre-trained on a large dataset of natural images (ImageNet). The features are obtained from various levels of the network, to capture low-, mid-, and high-level information about the lesion. Compared to a classification method that takes as input only images at a single time-point (yielding an AUC = 0.81 (se = 0.04)), our LSTM method improves lesion classification with an AUC of 0.85 (se = 0.03).
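    The core mechanism, running a recurrent network over per-time-point CNN features, can be sketched with a plain NumPy LSTM cell. This is a hedged illustration under assumed shapes, not the authors' trained model: lstm_forward and the random stand-in features are invented for the example, and in the real system the final state would feed a benign/malignant classifier evaluated by AUC.

```python
import numpy as np

def lstm_forward(seq, Wx, Wh, b):
    """Run a single-layer LSTM over a sequence of feature vectors.

    seq: (T, d_in) features, e.g. CNN descriptors of each DCE-MRI time point.
    Gate order in the stacked weight matrices: input, forget, cell, output.
    """
    d_h = Wh.shape[0]
    h = np.zeros(d_h)
    c = np.zeros(d_h)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for t in range(seq.shape[0]):
        z = seq[t] @ Wx + h @ Wh + b          # all four gates at once, shape (4*d_h,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)            # cell state carries temporal context
        h = o * np.tanh(c)                    # hidden state summarises the sequence so far
    return h                                   # final state is the sequence descriptor

rng = np.random.default_rng(0)
d_in, d_h, T = 8, 4, 5
Wx = rng.normal(scale=0.1, size=(d_in, 4 * d_h))
Wh = rng.normal(scale=0.1, size=(d_h, 4 * d_h))
b = np.zeros(4 * d_h)
features = rng.normal(size=(T, d_in))         # stand-in for per-time-point VGGNet features
h_final = lstm_forward(features, Wx, Wh, b)
```

    The point of the recurrence is that the descriptor depends on the order of the time points, which a single-time-point classifier cannot capture.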

  11. Image scale measurement with correlation filters in a volume holographic optical correlator

    NASA Astrophysics Data System (ADS)

    Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan

    2013-08-01

    A search engine containing various target images or different parts of a large scene is useful for many applications, including object detection, biometric recognition, and image registration. The input image captured in real time is compared with all the template images in the search engine. A volume holographic correlator is one such search engine: it performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image often differs in scale from the filtering template images, in which case the correlation values cannot properly reflect the similarity of the images. It is therefore essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a method in the time domain, called a time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between the scale variation and the correlation value of two images. It sends a few artificially scaled input images to be compared with the template images. The correlation value increases and then decreases with increasing scale factor over the intervals 0.8~1 and 1~1.2, respectively. The original scale of the input image can thus be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8. Scale factors beyond 1.2 are measured by scaling the input image by factors of 1/2, 1/3, and 1/4, correlating the artificially scaled input image with the template images, and estimating the new corresponding scale factor inside 0.8~1.2.
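    The time-sequential idea, correlating a few artificially rescaled copies of the input with the template and keeping the scale that gives the largest correlation, can be sketched digitally as follows. The helper names (ncc, rescale, estimate_scale), the nearest-neighbour resampling, and the synthetic test are all invented stand-ins for the optical correlator, not the paper's apparatus.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-sized images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def rescale(img, s):
    """Nearest-neighbour rescale about the image centre, same output size."""
    h, w = img.shape
    ys = np.clip(np.rint((np.arange(h) - h / 2) / s + h / 2).astype(int), 0, h - 1)
    xs = np.clip(np.rint((np.arange(w) - w / 2) / s + w / 2).astype(int), 0, w - 1)
    return img[np.ix_(ys, xs)]

def estimate_scale(template, observed, candidates):
    """Correlate artificially rescaled copies of the observed image with the
    template; the candidate undoing the true scale maximises the correlation."""
    scores = [ncc(template, rescale(observed, 1.0 / s)) for s in candidates]
    return candidates[int(np.argmax(scores))]

# Synthetic check: the observed image is the template enlarged by a factor 1.1.
rng = np.random.default_rng(1)
template = rng.normal(size=(64, 64))
observed = rescale(template, 1.1)
candidates = [0.9, 1.0, 1.1, 1.2]
est = estimate_scale(template, observed, candidates)
```

    The same search over a coarser candidate set (1/2, 1/3, 1/4) mirrors the abstract's extension of the measurable range beyond 1.2.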

  12. Virtual three-dimensional blackboard: three-dimensional finger tracking with a single camera

    NASA Astrophysics Data System (ADS)

    Wu, Andrew; Hassan-Shafique, Khurram; Shah, Mubarak; da Vitoria Lobo, N.

    2004-01-01

    We present a method for three-dimensional (3D) tracking of a human finger from a monocular sequence of images. To recover the third dimension from the two-dimensional images, we use the fact that the motion of the human arm is highly constrained owing to the dependencies between elbow and forearm and the physical constraints on joint angles. We use these anthropometric constraints to derive a 3D trajectory of a gesticulating arm. The system is fully automated and does not require human intervention. The system presented can be used as a visualization tool, as a user-input interface, or as part of some gesture-analysis system in which 3D information is important.

  13. Learning viewpoint invariant perceptual representations from cluttered images.

    PubMed

    Spratling, Michael W

    2005-05-01

    In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.
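    The temporal-association principle described above is commonly implemented with a memory-trace (Földiák-style) Hebbian rule. The sketch below is a generic single-unit version under assumed parameters and a deterministic toy initialisation; it is not the paper's exact learning method or its proposed clutter-handling modification.

```python
import numpy as np

def trace_learning(seq, w, lr=0.05, decay=0.8):
    """Hebbian learning gated by an exponential memory trace: the update is
    driven by a running average of recent responses, so views that follow
    each other in time become associated with the same unit."""
    trace = 0.0
    for x in seq:                      # seq: (T, dim) frames of one transforming object
        y = float(w @ x)               # unit response to the current view
        trace = decay * trace + (1 - decay) * y
        w = w + lr * trace * x         # Hebbian update gated by the trace
        w = w / np.linalg.norm(w)      # normalisation keeps the weights bounded
    return w

# Two alternating "views" of the same object, presented as one temporal sequence.
e = np.eye(4)
seq = np.array([e[0], e[1]] * 20)
w0 = np.array([0.1, 0.1, 0.05, 0.05])  # deterministic initial weights for the demo
w = trace_learning(seq, w0)
```

    After training, both presented views drive the unit more strongly than the unseen directions, which is the view-tolerance the trace rule is meant to produce.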

  14. An ice-motion tracking system at the Alaska SAR facility

    NASA Technical Reports Server (NTRS)

    Kwok, Ronald; Curlander, John C.; Pang, Shirley S.; Mcconnell, Ross

    1990-01-01

    An operational system for extracting ice-motion information from synthetic aperture radar (SAR) imagery is being developed as part of the Alaska SAR Facility. This geophysical processing system (GPS) will derive ice-motion information by automated analysis of image sequences acquired by radars on the European ERS-1, Japanese ERS-1, and Canadian RADARSAT remote sensing satellites. The algorithm consists of a novel combination of feature-based and area-based techniques for the tracking of ice floes that undergo translation and rotation between imaging passes. The system performs automatic selection of the image pairs for input to the matching routines using an ice-motion estimator. It is designed to have a daily throughput of ten image pairs. A description is given of the GPS system, including an overview of the ice-motion-tracking algorithm, the system architecture, and the ice-motion products that will be available for distribution to geophysical data users.
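    The area-based half of such matching typically reduces to finding a translation as the peak of a cross-correlation surface. A minimal FFT-based sketch of that step is below; it is an illustration of the generic technique only, since the GPS matcher described above must also handle floe rotation and combines this with feature-based tracking.

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer translation between two images by locating the
    peak of their circular cross-correlation, computed via the FFT."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices past the midpoint back to negative shifts.
    shift = [i if i <= n // 2 else i - n for i, n in zip(idx, corr.shape)]
    return tuple(shift)

rng = np.random.default_rng(3)
img = rng.normal(size=(32, 32))                 # stand-in for a SAR image patch
moved = np.roll(img, (3, -5), axis=(0, 1))      # same patch on the later pass
shift = estimate_shift(moved, img)
```

    Using the FFT makes the correlation cost O(N log N) per patch, which matters for a throughput target of ten image pairs per day over full SAR scenes.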

  15. Progressive multi-atlas label fusion by dictionary evolution.

    PubMed

    Song, Yantao; Wu, Guorong; Bahrami, Khosro; Sun, Quansen; Shen, Dinggang

    2017-02-01

    Accurate segmentation of anatomical structures in medical images is important in recent imaging-based studies. In the past years, multi-atlas patch-based label fusion methods have achieved a great success in medical image segmentation. In these methods, the appearance of each input image patch is first represented by an atlas patch dictionary (in the image domain), and then the latent label of the input image patch is predicted by applying the estimated representation coefficients to the corresponding anatomical labels of the atlas patches in the atlas label dictionary (in the label domain). However, due to the generally large gap between the patch appearance in the image domain and the patch structure in the label domain, the estimated (patch) representation coefficients from the image domain may not be optimal for the final label fusion, thus reducing the labeling accuracy. To address this issue, we propose a novel label fusion framework to seek suitable label fusion weights by progressively constructing a dynamic dictionary in a layer-by-layer manner, where the intermediate dictionaries act as a sequence of guidance to steer the transition of (patch) representation coefficients from the image domain to the label domain. Our proposed multi-layer label fusion framework is flexible enough to be applied to the existing labeling methods for improving their label fusion performance, i.e., by extending their single-layer static dictionary to the multi-layer dynamic dictionary. The experimental results show that our proposed progressive label fusion method achieves more accurate hippocampal segmentation results for the ADNI dataset, compared to the counterpart methods using only the single-layer static dictionary. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Image processing meta-algorithm development via genetic manipulation of existing algorithm graphs

    NASA Astrophysics Data System (ADS)

    Schalkoff, Robert J.; Shaaban, Khaled M.

    1999-07-01

    Automatic algorithm generation for image processing applications is not a new idea; however, previous work is either restricted to morphological operators or impractical. In this paper, we show recent research results in the development and use of meta-algorithms, i.e., algorithms which lead to new algorithms. Although the concept is generally applicable, the application domain in this work is restricted to image processing. The meta-algorithm concept described in this paper is based upon our work on dynamic algorithms. The paper first presents the concept of dynamic algorithms which, on the basis of training and archived algorithmic experience embedded in an algorithm graph (AG), dynamically adjust the sequence of operations applied to the input image data. Each node in the tree-based representation of a dynamic algorithm with out-degree greater than 2 is a decision node. At these nodes, the algorithm examines the input data and determines which path will most likely achieve the desired results. This is currently done using nearest-neighbor classification. The details of this implementation are shown. The constrained perturbation of existing algorithm graphs, coupled with a suitable search strategy, is one mechanism to achieve meta-algorithms and offers rich potential for the discovery of new algorithms. In our work, a meta-algorithm autonomously generates new dynamic algorithm graphs via genetic recombination of existing algorithm graphs. The AG representation is well suited to this genetic-like perturbation, using a commonly-employed technique in artificial neural network synthesis, namely the blueprint representation of graphs. A number of examples are presented. One of the principal limitations of our current approach is the need for significant human input in the learning phase. Efforts to overcome this limitation are discussed. Future research directions are indicated.
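    A decision node of the kind described, choosing the next processing branch by nearest-neighbor classification of features extracted from the input data, can be sketched in a few lines. The feature, prototypes, and branch names below are invented for illustration and are not the paper's actual node implementation.

```python
import numpy as np

def decision_node(feature, prototypes, branches):
    """Pick the processing branch whose stored prototype is nearest to the
    feature vector extracted from the input image (1-nearest-neighbor)."""
    distances = [np.linalg.norm(feature - p) for p in prototypes]
    return branches[int(np.argmin(distances))]

# Hypothetical two-branch node keyed on an estimated noise level:
# clean images go straight to edge detection, noisy ones get denoised first.
prototypes = [np.array([0.1]), np.array([0.9])]   # archived training exemplars
branches = ["edge_detect", "denoise_then_edge"]
choice = decision_node(np.array([0.85]), prototypes, branches)
```

    In the dynamic-algorithm setting, each branch label would itself name a subgraph of the AG, so the classification at each node selects the remainder of the operation sequence.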

  17. Correlation of Tumor Immunohistochemistry with Dynamic Contrast-Enhanced and DSC-MRI Parameters in Patients with Gliomas.

    PubMed

    Nguyen, T B; Cron, G O; Bezzina, K; Perdrizet, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Thornhill, R E; Zanette, B; Cameron, I G

    2016-12-01

    Tumor CBV is a prognostic and predictive marker for patients with gliomas. Tumor CBV can be measured noninvasively with different MR imaging techniques; however, it is not clear which of these techniques most closely reflects histologically-measured tumor CBV. Our aim was to investigate the correlations between dynamic contrast-enhanced and DSC-MR imaging parameters and immunohistochemistry in patients with gliomas. Forty-three patients with a new diagnosis of glioma underwent a preoperative MR imaging examination with dynamic contrast-enhanced and DSC sequences. Unnormalized and normalized cerebral blood volume was obtained from DSC MR imaging. Two sets of plasma volume and volume transfer constant maps were obtained from dynamic contrast-enhanced MR imaging. Plasma volume obtained from the phase-derived vascular input function and bookend T1 mapping (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function and bookend T1 mapping (Ktrans_Φ) were determined. Plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (Ktrans_SI) were acquired, without T1 mapping. Using CD34 staining, we measured microvessel density and microvessel area within 3 representative areas of the resected tumor specimen. The Mann-Whitney U test was used to test for differences according to grade and degree of enhancement. The Spearman correlation was performed to determine the relationship between dynamic contrast-enhanced and DSC parameters and histopathologic measurements. Microvessel area, microvessel density, dynamic contrast-enhanced, and DSC-MR imaging parameters varied according to the grade and degree of enhancement (P < .05). A strong correlation was found between microvessel area and Vp_Φ and between microvessel area and unnormalized blood volume (rs ≥ 0.61). A moderate correlation was found between microvessel area and normalized blood volume, microvessel area and Vp_SI, microvessel area and Ktrans_Φ, microvessel area and Ktrans_SI, microvessel density and Vp_Φ, microvessel density and unnormalized blood volume, and microvessel density and normalized blood volume (0.44 ≤ rs ≤ 0.57). A weaker correlation was found between microvessel density and Ktrans_Φ and between microvessel density and Ktrans_SI (rs ≤ 0.41). With dynamic contrast-enhanced MR imaging, use of a phase-derived vascular input function and bookend T1 mapping improves the correlation between immunohistochemistry and plasma volume, but not between immunohistochemistry and the volume transfer constant. With DSC-MR imaging, normalization of tumor CBV could decrease the correlation with microvessel area. © 2016 by American Journal of Neuroradiology.

  18. Wearable Device-Based Gait Recognition Using Angle Embedded Gait Dynamic Images and a Convolutional Neural Network.

    PubMed

    Zhao, Yongjia; Zhou, Suiping

    2017-02-28

    The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, for either cooperative or non-cooperative circumstances. However, it is still a challenging task to reliably extract discriminative features for gait recognition with noisy and complex data sequences collected from casually worn wearable devices like smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using the Convolutional Neural Network (CNN) without the need to manually extract discriminative features. The CNN's input image, which is encoded straightforwardly from the inertial sensor data sequences, is called Angle Embedded Gait Dynamic Image (AE-GDI). AE-GDI is a new two-dimensional representation of gait dynamics, which is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, which is collected under realistic conditions; and (2) the Osaka University dataset with the largest number of subjects. Experimental results show that the proposed approach achieves competitive recognition accuracy over existing approaches and provides an effective parametric solution for identification among a large number of subjects by gait patterns.

  19. Wearable Device-Based Gait Recognition Using Angle Embedded Gait Dynamic Images and a Convolutional Neural Network

    PubMed Central

    Zhao, Yongjia; Zhou, Suiping

    2017-01-01

    The widespread installation of inertial sensors in smartphones and other wearable devices provides a valuable opportunity to identify people by analyzing their gait patterns, for either cooperative or non-cooperative circumstances. However, it is still a challenging task to reliably extract discriminative features for gait recognition with noisy and complex data sequences collected from casually worn wearable devices like smartphones. To cope with this problem, we propose a novel image-based gait recognition approach using the Convolutional Neural Network (CNN) without the need to manually extract discriminative features. The CNN’s input image, which is encoded straightforwardly from the inertial sensor data sequences, is called Angle Embedded Gait Dynamic Image (AE-GDI). AE-GDI is a new two-dimensional representation of gait dynamics, which is invariant to rotation and translation. The performance of the proposed approach in gait authentication and gait labeling is evaluated using two datasets: (1) the McGill University dataset, which is collected under realistic conditions; and (2) the Osaka University dataset with the largest number of subjects. Experimental results show that the proposed approach achieves competitive recognition accuracy over existing approaches and provides an effective parametric solution for identification among a large number of subjects by gait patterns. PMID:28264503

  20. Sarnoff JND Vision Model for Flat-Panel Design

    NASA Technical Reports Server (NTRS)

    Brill, Michael H.; Lubin, Jeffrey

    1998-01-01

    This document describes adaptation of the basic Sarnoff JND Vision Model created in response to the NASA/ARPA need for a general-purpose model to predict the perceived image quality attained by flat-panel displays. The JND model predicts the perceptual ratings that humans will assign to a degraded color-image sequence relative to its nondegraded counterpart. Substantial flexibility is incorporated into this version of the model so it may be used to model displays at the sub-pixel and sub-frame level. To model a display (e.g., an LCD), the input-image data can be sampled at many times the pixel resolution and at many times the digital frame rate. The first stage of the model downsamples each sequence in time and in space to physiologically reasonable rates, but with minimum interpolative artifacts and aliasing. Luma and chroma parts of the model generate (through multi-resolution pyramid representation) a map of differences between test and reference, called the JND map, from which a summary rating predictor is derived. The latest model extensions have done well in calibration against psychophysical data and against image-rating data given a CRT-based front-end. The software was delivered to NASA Ames and is being integrated with LCD display models at that facility.

  1. The use of in vivo fluorescence image sequences to indicate the occurrence and propagation of transient focal depolarizations in cerebral ischemia.

    PubMed

    Strong, A J; Harland, S P; Meldrum, B S; Whittington, D J

    1996-05-01

    A method for the detection and tracking of propagated fluorescence transients as indicators of depolarizations in focal cerebral ischemia is described, together with initial results indicating the potential of the method. The cortex of the right cerebral hemisphere was exposed for nonrecovery experiments in five cats anesthetized with chloralose and subjected to permanent middle cerebral artery (MCA) occlusion. Fluorescence with 370-nm excitation (attributed to the degree of reduction of the NAD/H couple) was imaged with an intensified charge-coupled device camera and digitized. Sequences of images representing changes in gray level from a baseline image were examined, together with the time courses of mean gray levels in specified regions of interest. Spontaneous increases in fluorescence occurred, starting most commonly at the edge of areas of core ischemia; they propagated usually throughout the periinfarct zone and resolved to varying degrees and at varying rates, depending on proximity of the locus to the MCA input. When a fluorescence transient reached the anterior cerebral artery territory, its initial polarity reversed from an increase to a decrease in fluorescence. An initial increase in fluorescence in response to the arrival of a transient may characterize cortex that will become infarcted, if pathophysiological changes in the periinfarct zone are allowed to evolve naturally.

  2. Suppressive and enhancing effects in early visual cortex during illusory shape perception: A comment on.

    PubMed

    Moors, Pieter

    2015-01-01

    In a recent functional magnetic resonance imaging study, Kok and de Lange (2014) observed that BOLD activity for a Kanizsa illusory shape stimulus, in which pacmen-like inducers elicit an illusory shape percept, was either enhanced or suppressed relative to a nonillusory control configuration depending on whether the spatial profile of BOLD activity in early visual cortex was related to the illusory shape or the inducers, respectively. The authors argued that these findings fit well with the predictive coding framework, because top-down predictions related to the illusory shape are not met with bottom-up sensory input and hence the feedforward error signal is enhanced. Conversely, for the inducing elements, there is a match between top-down predictions and input, leading to a decrease in error. Rather than invoking predictive coding as the explanatory framework, the suppressive effect related to the inducers might be caused by neural adaptation to perceptually stable input due to the trial sequence used in the experiment.

  3. Method and Apparatus for Evaluating the Visual Quality of Processed Digital Video Sequences

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B. (Inventor)

    2002-01-01

    A Digital Video Quality (DVQ) apparatus and method that incorporate a model of human visual sensitivity to predict the visibility of artifacts. The DVQ method and apparatus are used for the evaluation of the visual quality of processed digital video sequences and for adaptively controlling the bit rate of the processed digital video sequences without compromising the visual quality. The DVQ apparatus minimizes the required amount of memory and computation. The input to the DVQ apparatus is a pair of color image sequences: an original (R) non-compressed sequence, and a processed (T) sequence. Both sequences (R) and (T) are sampled, cropped, and subjected to color transformations. The sequences are then subjected to blocking and discrete cosine transformation, and the results are transformed to local contrast. The next step is a time filtering operation which implements the human sensitivity to different time frequencies. The results are converted to threshold units by dividing each discrete cosine transform coefficient by its respective visual threshold. At the next stage the two sequences are subtracted to produce an error sequence. The error sequence is subjected to a contrast masking operation, which also depends upon the reference sequence (R). The masked errors can be pooled in various ways to illustrate the perceptual error over various dimensions, and the pooled error can be converted to a visual quality measure.
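    The threshold-unit conversion and pooling steps can be illustrated on a single 8x8 block. This sketch assumes an orthonormal DCT-II, unit visual thresholds, and Minkowski pooling with beta = 4; the actual DVQ apparatus also performs the color transforms, temporal filtering, and contrast masking described above, which are omitted here.

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block."""
    N = block.shape[0]
    k = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def jnd_error(ref_block, test_block, thresholds, beta=4.0):
    """Convert DCT coefficients to threshold units (divide by the per-coefficient
    visual threshold), difference the sequences, and pool the error map with a
    Minkowski (beta-norm) summation into one perceptual-error number."""
    r = dct2(ref_block) / thresholds
    t = dct2(test_block) / thresholds
    err = np.abs(t - r)
    return float((err ** beta).sum() ** (1.0 / beta))

rng = np.random.default_rng(2)
ref = rng.uniform(size=(8, 8))
thresholds = np.ones((8, 8))                      # unit thresholds for the demo
identical = jnd_error(ref, ref, thresholds)       # no degradation -> zero error
degraded = jnd_error(ref, ref + 0.1, thresholds)  # uniform offset -> DC-only error
```

    Dividing by per-coefficient thresholds is what makes a unit of error mean "one just-noticeable difference" regardless of spatial frequency.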

  4. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
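    Once line correspondences pin down at least four model-to-image point matches, the calibration itself is a planar homography, commonly estimated with the Direct Linear Transform (DLT). The sketch below uses invented synthetic coordinates and covers only this final mapping step; the paper's pipeline additionally performs line-pixel classification, Hough extraction, combinatorial correspondence search, and gradient-descent refinement.

```python
import numpy as np

def homography_dlt(model_pts, image_pts):
    """DLT estimate of the homography mapping court-model points (world
    coordinates) to image points, from four or more correspondences."""
    A = []
    for (X, Y), (x, y) in zip(model_pts, image_pts):
        A.append([-X, -Y, -1, 0, 0, 0, x * X, x * Y, x])
        A.append([0, 0, 0, -X, -Y, -1, y * X, y * Y, y])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply the homography to one model point (homogeneous normalisation)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

# Synthetic example: tennis-court corners in metres and made-up pixel positions.
model = [(0, 0), (0, 10.97), (23.77, 10.97), (23.77, 0)]
image = [(100, 400), (150, 120), (520, 130), (560, 410)]
H = homography_dlt(model, image)
```

    With the homography in hand, positions convert between video-frame and real-world court coordinates in either direction, which is exactly the application named in the abstract.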

  5. A three-layer model of natural image statistics.

    PubMed

    Gutmann, Michael U; Hyvärinen, Aapo

    2013-11-01

    An important property of visual systems is to be simultaneously both selective to specific patterns found in the sensory input and invariant to possible variations. Selectivity and invariance (tolerance) are opposing requirements. It has been suggested that they could be joined by iterating a sequence of elementary selectivity and tolerance computations. It is, however, unknown what should be selected or tolerated at each level of the hierarchy. We approach this issue by learning the computations from natural images. We propose and estimate a probabilistic model of natural images that consists of three processing layers. Two natural image data sets are considered: image patches, and complete visual scenes downsampled to the size of small patches. For both data sets, we find that in the first two layers, simple and complex cell-like computations are performed. In the third layer, we mainly find selectivity to longer contours; for patch data, we further find some selectivity to texture, while for the downsampled complete scenes, some selectivity to curvature is observed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. A focus of attention mechanism for gaze control within a framework for intelligent image analysis tools

    NASA Astrophysics Data System (ADS)

    Rodrigo, Ranga P.; Ranaweera, Kamal; Samarabandu, Jagath K.

    2004-05-01

    Focus of attention is often attributed to biological vision systems, in which the entire field of view is first monitored and attention is then focused on the object of interest. We propose using a similar approach for object recognition in a color image sequence. The intention is to locate an object based on a prior motive and concentrate on the detected object so that the imaging device can be guided toward it. We use the abilities of the intelligent image analysis framework developed in our laboratory to dynamically generate an algorithm that detects the particular type of object described by the user. The proposed method uses color clustering along with segmentation. The segmented image with labeled regions is used to calculate shape descriptor parameters. These, together with the color information, are matched against the input description. Gaze is then controlled by issuing camera movement commands as appropriate. We present some preliminary results that demonstrate the success of this approach.

  7. Intelligent bandwidth compression

    NASA Astrophysics Data System (ADS)

    Tseng, D. Y.; Bullock, B. L.; Olin, K. E.; Kandt, R. K.; Olsen, J. D.

    1980-02-01

    The feasibility of a 1000:1 bandwidth compression ratio for image transmission has been demonstrated using image-analysis algorithms and a rule-based controller. Such a high compression ratio was achieved by first analyzing scene content using auto-cueing and feature-extraction algorithms, and then transmitting only the pertinent information consistent with mission requirements. A rule-based controller directs the flow of analysis and performs priority allocations on the extracted scene content. The reconstructed bandwidth-compressed image consists of an edge map of the scene background, with primary and secondary target windows embedded in the edge map. The bandwidth-compressed images are updated at a basic rate of 1 frame per second, with the high-priority target window updated at 7.5 frames per second. The scene-analysis algorithms used in this system together with the adaptive priority controller are described. Results of simulated 1000:1 bandwidth-compressed images are presented. A video tape simulation of the Intelligent Bandwidth Compression system has been produced using a sequence of video input from the database.

  8. SNP ID-info: SNP ID searching and visualization platform.

    PubMed

    Yang, Cheng-Hong; Chuang, Li-Yeh; Cheng, Yu-Huei; Wen, Cheng-Hao; Chang, Phei-Lang; Chang, Hsueh-Wei

    2008-09-01

    Many association studies report relationships between single nucleotide polymorphisms (SNPs), diseases, and cancers without giving a SNP ID. Here, we developed the SNP ID-info freeware to provide SNP IDs from input genetic and physical information of genomes. The program provides an "SNP-ePCR" function to generate the full sequence using primer and template inputs. In "SNPosition," the sequence from SNP-ePCR or direct input is matched against SNP IDs from the SNP fasta-sequence. The "SNP search" and "SNP fasta" functions accept queries by cytogenetic band, contig position, and keyword. Finally, the SNP ID neighboring environment for each input is fully visualized in order of contig position and marked with SNP and flanking hits. The SNP identification problems inherent in NCBI SNP BLAST are also avoided. In conclusion, SNP ID-info provides a visualized SNP ID environment for multiple inputs and assists systematic SNP association studies. The server and user manual are available at http://bio.kuas.edu.tw/snpid-info.

  9. Minimizing structural vibrations with Input Shaping (TM)

    NASA Technical Reports Server (NTRS)

    Singhose, Bill; Singer, Neil

    1995-01-01

    A new method for commanding machines to move with increased dynamic performance was developed. This method is an enhanced version of input shaping, a patented vibration suppression algorithm. The technique intercepts a command input to a system and produces a shaped command that moves the mechanical system with increased performance and reduced residual vibration. This document describes many advanced methods for generating highly optimized shaping sequences which are tuned to particular systems. The shaping sequence is important because it determines the trade-off between the move/settle time of the system and the insensitivity of the input shaping algorithm to variations or uncertainties in the machine being controlled. For example, a system with a 5 Hz resonance that takes 1 second to settle can be made to settle in 0.2 seconds using a 0.2 second shaping sequence (thus improving settle time by a factor of 5). This system could vary by plus or minus 15% in its natural frequency and still have no apparent vibration. However, the same system shaped with a 0.3 second shaping sequence could tolerate plus or minus 40% or more variation in natural frequency. This document describes how to generate sequences that maximize performance, sequences that maximize insensitivity, and sequences that trade off between the two. Several software tools are documented and included.
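
    As an illustration of the kind of shaping sequence discussed above, the classic two-impulse Zero-Vibration (ZV) shaper can be computed as follows; the 5 Hz, lightly damped mode is an assumed example, and the patented sequences the document describes are more sophisticated than this textbook form.

```python
import numpy as np

def zv_shaper(freq_hz, zeta):
    """Two-impulse Zero-Vibration (ZV) shaper for a mode at freq_hz
    with damping ratio zeta. Returns impulse times and amplitudes."""
    wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta**2)   # damped frequency (rad/s)
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    amps = np.array([1.0, K]) / (1.0 + K)             # amplitudes sum to 1
    times = np.array([0.0, np.pi / wd])               # half damped period apart
    return times, amps

def shape_command(cmd, times, amps, dt):
    """Convolve a sampled command signal with the impulse sequence."""
    n = len(cmd)
    out = np.zeros(n)
    for t, a in zip(times, amps):
        k = int(round(t / dt))
        out[k:] += a * cmd[: n - k]
    return out

# Shape a unit step command for a 5 Hz mode with zeta = 0.05.
dt = 0.001
step = np.ones(1000)
times, amps = zv_shaper(5.0, 0.05)
shaped = shape_command(step, times, amps, dt)
```

The shaped step rises in two stages and reaches the full command after about 0.1 s (half the 5 Hz damped period), which is where the move-time/insensitivity trade-off the abstract mentions comes from.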

  10. AlignMe—a membrane protein sequence alignment web server

    PubMed Central

    Stamm, Marcus; Staritzbichler, René; Khafizov, Kamil; Forrest, Lucy R.

    2014-01-01

    We present a web server for pair-wise alignment of membrane protein sequences, using the program AlignMe. The server makes available two operational modes of AlignMe: (i) sequence to sequence alignment, taking two sequences in fasta format as input, combining information about each sequence from multiple sources and producing a pair-wise alignment (PW mode); and (ii) alignment of two multiple sequence alignments to create family-averaged hydropathy profile alignments (HP mode). For the PW sequence alignment mode, four different optimized parameter sets are provided, each suited to pairs of sequences with a specific similarity level. These settings utilize different types of inputs: (position-specific) substitution matrices, secondary structure predictions and transmembrane propensities from transmembrane predictions or hydrophobicity scales. In the second (HP) mode, each input multiple sequence alignment is converted into a hydrophobicity profile averaged over the provided set of sequence homologs; the two profiles are then aligned. The HP mode enables qualitative comparison of transmembrane topologies (and therefore potentially of 3D folds) of two membrane proteins, which can be useful if the proteins have low sequence similarity. In summary, the AlignMe web server provides user-friendly access to a set of tools for analysis and comparison of membrane protein sequences. Access is available at http://www.bioinfo.mpg.de/AlignMe PMID:24753425

  11. Traveling wave linear accelerator with RF power flow outside of accelerating cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgashev, Valery A.

    A high power RF traveling wave accelerator structure includes a symmetric RF feed, an input matching cell coupled to the symmetric RF feed, a sequence of regular accelerating cavities coupled to the input matching cell at an input beam pipe end of the sequence, one or more waveguides parallel to and coupled to the sequence of regular accelerating cavities, an output matching cell coupled to the sequence of regular accelerating cavities at an output beam pipe end of the sequence, and an output waveguide circuit or RF loads coupled to the output matching cell. Each of the regular accelerating cavities has a nose cone that cuts off the field propagating into the beam pipe, and therefore all power flows in a traveling wave along the structure in the waveguide.

  12. Neurophysiology of Flight in Wild-Type and a Mutant Drosophila

    PubMed Central

    Levine, Jon D.; Wyman, Robert J.

    1973-01-01

    We report the flight motor output pattern in Drosophila melanogaster and the neural network responsible for it, and describe the bursting motor output pattern in a mutant. There are 26 singly-innervated muscle fibers. There are two basic firing patterns: phase progression, shown by units that receive a common input but have no cross-connections, and phase stability, in which synergic units, receiving a common input and inhibiting each other, fire in a repeating sequence. Flies carrying the mutation stripe cannot fly. Their motor output is reduced to a short duration, high-frequency burst, but the patterning within bursts shows many of the characteristics of the wild type. The mutation is restricted in its effect, as the nervous system has normal morphology by light microscopy and other behaviors of the mutant are normal. PMID:4197927

  13. Adaptive Intuitionistic Fuzzy Enhancement of Brain Tumor MR Images

    NASA Astrophysics Data System (ADS)

    Deng, He; Deng, Wankai; Sun, Xianping; Ye, Chaohui; Zhou, Xin

    2016-10-01

    Image enhancement techniques can improve the contrast and visual quality of magnetic resonance (MR) images. However, conventional methods cannot compensate for the deficiencies of individual brain tumor MR imaging modes. In this paper, we propose an adaptive intuitionistic fuzzy sets-based scheme, called AIFE, which takes information provided by different MR acquisitions and tries to enhance the normal and abnormal structural regions of the brain while displaying the enhanced results as a single image. The AIFE scheme first separates an input image into several sub-images, then divides each sub-image into object and background areas. After that, different novel fuzzification, hyperbolization, and defuzzification operations are implemented on each object/background area, and finally an enhanced result is achieved via nonlinear fusion operators. The fuzzy implementations can be processed in parallel. Real data experiments demonstrate that the AIFE scheme not only effectively fuses information from images acquired with different MR sequences into a single image, but also outperforms conventional baseline algorithms in enhancement quality. This indicates that the proposed AIFE scheme has potential for improving the detection and diagnosis of brain tumors.
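
    The fuzzify/hyperbolize/defuzzify pipeline can be sketched for a single grayscale image as below; the tanh-based hyperbolization and the beta parameter are illustrative stand-ins, not the AIFE operators from the paper.

```python
import numpy as np

def fuzzy_enhance(img, beta=0.8):
    """Simplified fuzzy enhancement: map intensities to a membership
    function on [0, 1] (fuzzification), apply a hyperbolic S-curve
    intensification (hyperbolization), and map back (defuzzification)."""
    g = img.astype(float)
    gmin, gmax = g.min(), g.max()
    mu = (g - gmin) / (gmax - gmin + 1e-12)                   # fuzzify
    mu_h = (np.tanh(beta * (2 * mu - 1)) / np.tanh(beta) + 1) / 2  # hyperbolize
    return gmin + mu_h * (gmax - gmin)                        # defuzzify

rng = np.random.default_rng(0)
img = rng.uniform(100, 150, size=(64, 64))   # low-contrast synthetic image
out = fuzzy_enhance(img)
```

The S-curve stretches mid-range memberships, so the output keeps the original intensity range while its contrast (standard deviation) increases.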

  14. Converting from DDOR SASF to APF

    NASA Technical Reports Server (NTRS)

    Gladden, Roy E.; Khanampompan, Teerapat; Fisher, Forest W.

    2008-01-01

    A computer program called ddor_sasf2apf converts delta-DOR (delta differential one-way range) requests from an SASF (spacecraft activity sequence file) format to an APF (apgen plan file) format for use in the Mars Reconnaissance Orbiter (MRO) mission-planning-and-sequencing process. The APF is used as an input to APGEN/AUTOGEN in the MRO activity-planning and command-sequence-generating process to sequence the delta-DOR (DDOR) activity. The DDOR activity is a spacecraft tracking technique for determining spacecraft location. The input to ddor_sasf2apf is a request SASF provided by an observation team that utilizes DDOR. ddor_sasf2apf parses this DDOR SASF input, rearranging parameters and reformatting the request to produce an APF file for use in AUTOGEN and/or APGEN. The benefit afforded by ddor_sasf2apf is to enable the use of the DDOR SASF file earlier in the planning stage of the command-sequence-generating process and to produce sequences, optimized for DDOR operations, that are more accurate and more robust than would otherwise be possible.

  15. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18)F-fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements.
The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18)F-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.
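
    A minimal sketch of the two-compartment fitting idea, assuming a one-tissue model Ct(t) = K1 * (Cp convolved with exp(-k2*t)) and a synthetic arterial input function; the tracer-specific details and rapid pixel-wise estimation schemes of the thesis are not modeled here.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 10, 200)          # minutes
dt = t[1] - t[0]
Cp = t * np.exp(-t)                  # synthetic arterial input function

def tissue_curve(t, K1, k2):
    """One-tissue-compartment model: Ct = K1 * (Cp conv exp(-k2 t))."""
    kernel = np.exp(-k2 * t)
    return K1 * np.convolve(Cp, kernel)[: len(t)] * dt

# Simulate a noisy tissue time-activity curve and re-estimate K1, k2.
true_K1, true_k2 = 0.5, 0.3
Ct = tissue_curve(t, true_K1, true_k2)
rng = np.random.default_rng(1)
noisy = Ct + rng.normal(0, 0.002, size=Ct.shape)

(K1_hat, k2_hat), _ = curve_fit(tissue_curve, t, noisy, p0=[0.1, 0.1])
```

Repeating such a fit for every pixel time course is what produces the functional parameter images described above; the thesis replaces the generic least-squares fit with faster estimators.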

  16. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  17. Random forest regression for magnetic resonance image synthesis.

    PubMed

    Jog, Amod; Carass, Aaron; Roy, Snehashis; Pham, Dzung L; Prince, Jerry L

    2017-01-01

    By choosing different pulse sequences and their parameters, magnetic resonance imaging (MRI) can generate a large variety of tissue contrasts. This very flexibility, however, can yield inconsistencies with MRI acquisitions across datasets or scanning sessions that can in turn cause inconsistent automated image analysis. Although image synthesis of MR images has been shown to be helpful in addressing this problem, an inability to synthesize both T2-weighted brain images that include the skull and FLuid Attenuated Inversion Recovery (FLAIR) images has been reported. The method described herein, called REPLICA, addresses these limitations. REPLICA is a supervised random forest image synthesis approach that learns a nonlinear regression to predict intensities of alternate tissue contrasts given specific input tissue contrasts. Experimental results include direct image comparisons between synthetic and real images, results from image analysis tasks on both synthetic and real images, and comparison against other state-of-the-art image synthesis methods. REPLICA is computationally fast, and is shown to be comparable to other methods on tasks they are able to perform. Additionally REPLICA has the capability to synthesize both T2-weighted images of the full head and FLAIR images, and perform intensity standardization between different imaging datasets. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. Automatic detection of a hand-held needle in ultrasound via phase-based analysis of the tremor motion

    NASA Astrophysics Data System (ADS)

    Beigi, Parmida; Salcudean, Septimiu E.; Rohling, Robert; Ng, Gary C.

    2016-03-01

    This paper presents an automatic localization method for a standard hand-held needle in ultrasound based on temporal motion analysis of spatially decomposed data. Subtle displacement arising from tremor motion has a periodic pattern which is usually imperceptible in the intensity image but may convey information in the phase image. Our method aims to detect such periodic motion of a hand-held needle and distinguish it from intrinsic tissue motion, using a technique inspired by video magnification. Complex steerable pyramids allow specific design of the wavelets' orientations according to the insertion angle as well as the measurement of the local phase. We therefore use steerable pairs of even and odd Gabor wavelets to decompose the ultrasound B-mode sequence into various spatial frequency bands. Variations of the local phase measurements in the spatially decomposed input data are then temporally analyzed using a finite impulse response bandpass filter to detect regions with a tremor motion pattern. Results obtained from different pyramid levels are then combined and thresholded to generate the binary mask input for the Hough transform, which determines an estimate of the direction angle and discards some of the outliers. Polynomial fitting is used at the final stage to remove any remaining outliers and improve the trajectory detection. The detected needle is finally added back to the input sequence as an overlay of a cloud of points. We demonstrate the efficiency of our approach to detect the needle using subtle tremor motion in an agar phantom and in-vivo porcine cases where intrinsic motion is also present. The localization accuracy was calculated by comparison to expert manual segmentation, and is presented as (mean, standard deviation, root-mean-square error): (0.93°, 1.26°, 0.87°) for the trajectory and (1.53 mm, 1.02 mm, 1.82 mm) for the tip.
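
    The temporal bandpass stage can be illustrated on synthetic per-pixel displacement signals; the 60 Hz frame rate, the 7-13 Hz tremor band, and the signal amplitudes are assumptions for this sketch, not values from the paper.

```python
import numpy as np
from scipy.signal import firwin, filtfilt

fs = 60.0                                  # assumed ultrasound frame rate (Hz)
t = np.arange(0, 4, 1 / fs)                # 4 s of per-pixel motion
rng = np.random.default_rng(2)

# A needle pixel carries ~9 Hz hand tremor; a tissue pixel carries
# slow intrinsic motion. Both include measurement noise.
needle_sig = 0.2 * np.sin(2 * np.pi * 9.0 * t) + 0.05 * rng.normal(size=t.size)
tissue_sig = 1.0 * np.sin(2 * np.pi * 0.3 * t) + 0.05 * rng.normal(size=t.size)

# FIR bandpass over a nominal physiological tremor band (~7-13 Hz).
taps = firwin(61, [7.0, 13.0], pass_zero=False, fs=fs)

def tremor_power(sig):
    """Mean power of the tremor-band component of a displacement signal."""
    return np.mean(filtfilt(taps, [1.0], sig) ** 2)
```

Thresholding this band power per pixel is what separates tremor-bearing needle regions from slowly moving tissue before the Hough stage.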

  19. Scene segmentation of natural images using texture measures and back-propagation

    NASA Technical Reports Server (NTRS)

    Sridhar, Banavar; Phatak, Anil; Chatterji, Gano

    1993-01-01

    Knowledge of the three-dimensional world is essential for many guidance and navigation applications. A sequence of images from an electro-optical sensor can be processed using optical flow algorithms to provide a sparse set of ranges as a function of azimuth and elevation. A natural way to enhance the range map is by interpolation. However, this should be undertaken with care since interpolation assumes continuity of range. The range is continuous in certain parts of the image and can jump at object boundaries. In such situations, the ability to detect homogeneous object regions by scene segmentation can be used to determine regions in the range map that can be enhanced by interpolation. The use of scalar features derived from the spatial gray-level dependence matrix for texture segmentation is explored. Thresholding of histograms of scalar texture features is done for several images to select scalar features which result in a meaningful segmentation of the images. Next, the selected scalar features are used with a neural net to automate the segmentation procedure. Back-propagation is used to train the feedforward neural network. The generalization of the network approach to subsequent images in the sequence is examined. It is shown that the use of multiple scalar features as input to the neural network results in a superior segmentation compared with a single scalar feature. It is also shown that scalar features which are not useful individually can result in a good segmentation when used together. The methodology is applied to both indoor and outdoor images.
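
    A small sketch of a spatial gray-level dependence (co-occurrence) matrix and one Haralick-style scalar feature (contrast), computed on synthetic smooth and rough textures; the feature set, offsets, and thresholds used in the paper are not reproduced.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Spatial gray-level dependence matrix for one pixel offset (dx, dy):
    normalized co-occurrence counts of quantized gray-level pairs."""
    q = (img * levels / (img.max() + 1e-12)).astype(int).clip(0, levels - 1)
    h, w = q.shape
    a = q[: h - dy, : w - dx]        # pixel
    b = q[dy:, dx:]                  # its offset neighbour
    m = np.zeros((levels, levels))
    np.add.at(m, (a.ravel(), b.ravel()), 1)
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(np.sum((i - j) ** 2 * p))

rng = np.random.default_rng(4)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))   # smooth gradient region
rough = rng.uniform(0, 1, (32, 32))                # noisy textured region
```

Scalar features such as this contrast value, computed per region, are what the histogram thresholding and the neural network operate on.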

  20. Correlation peak analysis applied to a sequence of images using two different filters for eye tracking model

    NASA Astrophysics Data System (ADS)

    Patrón, Verónica A.; Álvarez Borrego, Josué; Coronel Beltrán, Ángel

    2015-09-01

    Eye tracking has many useful applications that range from biometrics to face recognition and human-computer interaction. The analysis of the characteristics of the eyes has become one of the methods to accomplish the location of the eyes and the tracking of the point of gaze. Characteristics such as the contrast between the iris and the sclera, the shape, and the distribution of colors and dark/light zones in the area are the starting point for these analyses. In this work, the focus is on the contrast between the iris and the sclera, performing a correlation in the frequency domain. The images were acquired with an ordinary camera, with which images of thirty-one volunteers were taken. The reference image is an image of the subject looking at a point in front of them at a 0° angle. Then sequences of images are taken with the subject looking at different angles. These images are processed in MATLAB, obtaining the maximum correlation peak for each image using two different filters. Each filter was analyzed, and the one giving the best performance in terms of the utility of the data was selected; the data are displayed in graphs that show the decay of the correlation peak as the eye moves progressively through different angles. These data will be used to obtain a mathematical model or function that establishes a relationship between the angle of vision (AOV) and the maximum correlation peak (MCP). This model will be tested using different input images from subjects not contained in the initial database, predicting the angle of vision from the maximum correlation peak data.
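
    The frequency-domain maximum correlation peak and its decay with gaze angle can be illustrated with a toy eye model; the disc-shaped iris, the crude occlusion model, and the normalization are assumptions for this sketch (the paper uses real images and specific correlation filters).

```python
import numpy as np

def mcp(ref, img):
    """Maximum correlation peak between two images, computed via FFT
    and normalized so that mcp(ref, ref) == 1."""
    r = ref - ref.mean()
    g = img - img.mean()
    corr = np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(g))).real
    return corr.max() / np.sqrt((r ** 2).sum() * (g ** 2).sum())

yy, xx = np.indices((64, 64))

def eye_image(cx):
    """Bright "sclera" with a dark disc ("iris") centred at column cx;
    columns >= 48 are overwritten to model occlusion at the eye corner."""
    img = 1.0 - (((xx - cx) ** 2 + (yy - 32) ** 2) < 64).astype(float)
    img[:, 48:] = 1.0
    return img

ref = eye_image(32)                       # gaze straight ahead (0 degrees)
peaks = [mcp(ref, eye_image(c)) for c in (32, 44, 48, 52)]
# The peak decays as the iris moves and is progressively occluded.
```

Fitting a curve through such (angle, MCP) pairs is the kind of model the abstract proposes for predicting the angle of vision.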

  1. Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.

    PubMed

    Wang, Dang-wei; Ma, Xiao-yan; Su, Yi

    2010-05-01

    This paper presents a system model and method for the 2-D imaging application via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. Furthermore, the imaging formulation for our method is developed through Fourier integral processing, and the parameters of the antenna array, including the cross-range resolution, required size, and sampling interval, are also examined. Different from the spatially sequential procedure of sampling the scattered echoes during multiple snapshot illuminations in inverse synthetic aperture radar (ISAR) imaging, the proposed method utilizes a spatially parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation in ISAR imaging can be avoided. Moreover, in our array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided for testing our proposed method.

  2. Resolution enhancement of low-quality videos using a high-resolution frame

    NASA Astrophysics Data System (ADS)

    Pham, Tuan Q.; van Vliet, Lucas J.; Schutte, Klamer

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm for compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of corresponding LR-HR pairs of image patches from the HR still image, high-frequency details are transferred from the HR source to the LR video. The DCT-domain algorithm is much faster than example-based SR in the spatial domain because of a reduction in search dimensionality, which is a direct result of the compact and uncorrelated DCT representation. Fast searching techniques like tree-structured vector quantization and coherence search are also key to the improved efficiency. Preliminary results on an MJPEG sequence show promising results for the DCT-domain SR synthesis approach.

  3. ssHMM: extracting intuitive sequence-structure motifs from high-throughput RNA-binding protein data

    PubMed Central

    Krestel, Ralf; Ohler, Uwe; Vingron, Martin; Marsico, Annalisa

    2017-01-01

    Abstract RNA-binding proteins (RBPs) play an important role in RNA post-transcriptional regulation and recognize target RNAs via sequence-structure motifs. The extent to which RNA structure influences protein binding in the presence or absence of a sequence motif is still poorly understood. Existing RNA motif finders either take the structure of the RNA only partially into account, or employ models which are not directly interpretable as sequence-structure motifs. We developed ssHMM, an RNA motif finder based on a hidden Markov model (HMM) and Gibbs sampling which fully captures the relationship between RNA sequence and secondary structure preference of a given RBP. Compared to previous methods which output separate logos for sequence and structure, it directly produces a combined sequence-structure motif when trained on a large set of sequences. ssHMM’s model is visualized intuitively as a graph and facilitates biological interpretation. ssHMM can be used to find novel bona fide sequence-structure motifs of uncharacterized RBPs, such as the one presented here for the YY1 protein. ssHMM reaches a high motif recovery rate on synthetic data, it recovers known RBP motifs from CLIP-Seq data, and scales linearly on the input size, being considerably faster than MEMERIS and RNAcontext on large datasets while being on par with GraphProt. It is freely available on Github and as a Docker image. PMID:28977546

  4. Fast template matching with polynomials.

    PubMed

    Omachi, Shinichiro; Omachi, Masako

    2007-08-01

    Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image. The proposed algorithm is effective especially when the width and height of the template image differ from the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than the existing methods.
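
    A 1-D sketch of the core idea above: approximate a template profile by a low-order Legendre expansion, which can then be evaluated at any width without resampling the raw template. The Gaussian-shaped template and the polynomial degree are illustrative assumptions, not the paper's 2-D formulation.

```python
import numpy as np
from numpy.polynomial import legendre

# Sample a hypothetical template profile on the canonical interval [-1, 1].
x = np.linspace(-1, 1, 64)
template = np.exp(-4 * x ** 2)

# Least-squares fit of a degree-8 Legendre expansion to the template.
coeffs = legendre.legfit(x, template, deg=8)
approx = legendre.legval(x, coeffs)

# Because the approximation is a polynomial on [-1, 1], it can be
# evaluated at any other width (here 41 samples) at negligible cost.
x2 = np.linspace(-1, 1, 41)
resampled = legendre.legval(x2, coeffs)
```

Matching then correlates the polynomial evaluation against partial images of varying width and height instead of rescaling the template each time, which is where the claimed cost reduction comes from.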

  5. Comparative assessment of pressure field reconstructions from particle image velocimetry measurements and Lagrangian particle tracking

    NASA Astrophysics Data System (ADS)

    van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.

    2017-04-01

    A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.

  6. Robust feature tracking for endoscopic pose estimation and structure recovery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Krappe, S.; Röhl, S.; Bodenstedt, S.; Müller-Stich, B.; Dillmann, R.

    2013-03-01

    Minimally invasive surgery is a highly complex medical discipline with several difficulties for the surgeon. To alleviate these difficulties, augmented reality can be used for intraoperative assistance. For visualization, the endoscope pose must be known which can be acquired with a SLAM (Simultaneous Localization and Mapping) approach using the endoscopic images. In this paper we focus on feature tracking for SLAM in minimally invasive surgery. Robust feature tracking and minimization of false correspondences is crucial for localizing the endoscope. As sensory input we use a stereo endoscope and evaluate different feature types in a developed SLAM framework. The accuracy of the endoscope pose estimation is validated with synthetic and ex vivo data. Furthermore we test the approach with in vivo image sequences from da Vinci interventions.

  7. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and digital restoration of damaged film is now a mainstream practice. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our approach combines multiple frames in a simple correction model and includes three key steps. First, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Second, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction. Because it combines multiple frames, our method takes the continuity of the input sequence into account, and experimental results show that it removes fading flicker efficiently.
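
    The low-rank factorization step above can be illustrated with a generic alternating-least-squares completion of a matrix with missing entries; this is a minimal sketch of the general technique, not the authors' specific parameter model (function name and regularization are illustrative):

```python
import numpy as np

def als_complete(M, mask, rank=2, n_iters=50, reg=1e-3, seed=0):
    """Low-rank completion of M (observed where mask is True) by
    alternating least squares: M is approximated as U @ V.T."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    U = rng.standard_normal((m, rank))
    V = rng.standard_normal((n, rank))
    I = reg * np.eye(rank)
    for _ in range(n_iters):
        # solve for each row of U with V fixed, then vice versa
        for i in range(m):
            obs = mask[i]
            A = V[obs]
            U[i] = np.linalg.solve(A.T @ A + I, A.T @ M[i, obs])
        for j in range(n):
            obs = mask[:, j]
            A = U[obs]
            V[j] = np.linalg.solve(A.T @ A + I, A.T @ M[obs, j])
    return U @ V.T
```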

  8. Visual Perceptual Echo Reflects Learning of Regularities in Rapid Luminance Sequences.

    PubMed

    Chang, Acer Y-C; Schwartzman, David J; VanRullen, Rufin; Kanai, Ryota; Seth, Anil K

    2017-08-30

    A novel neural signature of active visual processing has recently been described in the form of the "perceptual echo", in which the cross-correlation between a sequence of randomly fluctuating luminance values and occipital electrophysiological signals exhibits a long-lasting periodic (∼100 ms cycle) reverberation of the input stimulus (VanRullen and Macdonald, 2012). As yet, however, the mechanisms underlying the perceptual echo and its function remain unknown. Reasoning that natural visual signals often contain temporally predictable, though nonperiodic, features, we hypothesized that the perceptual echo may reflect a periodic process associated with regularity learning. To test this hypothesis, we presented subjects with successive repetitions of a rapid nonperiodic luminance sequence, and examined the effects on the perceptual echo, finding that echo amplitude linearly increased with the number of presentations of a given luminance sequence. These data suggest that the perceptual echo reflects a neural signature of regularity learning. Furthermore, when a set of repeated sequences was followed by a sequence with inverted luminance polarities, the echo amplitude decreased to the same level evoked by a novel stimulus sequence. Crucially, when the original stimulus sequence was re-presented, the echo amplitude returned to a level consistent with the number of presentations of this sequence, indicating that the visual system retained sequence-specific information, for many seconds, even in the presence of intervening visual input. Altogether, our results reveal a previously undiscovered regularity learning mechanism within the human visual system, reflected by the perceptual echo. SIGNIFICANCE STATEMENT How the brain encodes and learns fast-changing but nonperiodic visual input remains unknown, even though such visual input characterizes natural scenes. We investigated whether the phenomenon of "perceptual echo" might index such learning. 
The perceptual echo is a long-lasting reverberation between a rapidly changing visual input and evoked neural activity, apparent in cross-correlations between occipital EEG and stimulus sequences, peaking in the alpha (∼10 Hz) range. We indeed found that perceptual echo is enhanced by repeatedly presenting the same visual sequence, indicating that the human visual system can rapidly and automatically learn regularities embedded within fast-changing dynamic sequences. These results point to a previously undiscovered regularity learning mechanism, operating at a rate defined by the alpha frequency. Copyright © 2017 the authors 0270-6474/17/378486-12$15.00/0.
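
    The stimulus-to-EEG cross-correlation underlying the perceptual echo can be sketched as follows; the simulated "EEG" here is simply the stimulus delayed by 100 ms (25 samples at an assumed 250 Hz sampling rate) plus noise, purely for illustration:

```python
import numpy as np

def crosscorr(stim, eeg, max_lag):
    """Normalized cross-correlation of a stimulus sequence with an EEG
    signal at lags 0..max_lag (EEG lagging the stimulus)."""
    s = (stim - stim.mean()) / stim.std()
    e = (eeg - eeg.mean()) / eeg.std()
    n = len(s)
    return np.array([np.dot(s[:n - k], e[k:]) / (n - k)
                     for k in range(max_lag + 1)])
```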

  9. Quality and noise measurements in mobile phone video capture

    NASA Astrophysics Data System (ADS)

    Petrescu, Doina; Pincenti, John

    2011-02-01

    The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
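
    A representative full-reference metric of the kind computed between the encoder input and the reconstructed sequence is PSNR; a minimal per-frame sketch (the abstract does not name the specific metrics used, so this choice is illustrative):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference frame and a
    reconstructed frame; a standard full-reference quality metric."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")   # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)
```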

  10. Experimental Optoelectronic Associative Memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin

    1992-01-01

    Optoelectronic associative memory responds to input image by displaying one of M remembered images. Which image to display determined by optoelectronic analog computation of resemblance between input image and each remembered image. Does not rely on precomputation and storage of outer-product synapse matrix. Size of memory needed to store and process images reduced.

  11. Predicting Moves-on-Stills for Comic Art Using Viewer Gaze Data.

    PubMed

    Jain, Eakta; Sheikh, Yaser; Hodgins, Jessica

    2016-01-01

    Comic art consists of a sequence of panels of different shapes and sizes that visually communicate the narrative to the reader. The move-on-stills technique allows such still images to be retargeted for digital displays via camera moves. Today, moves-on-stills can be created by software applications given user-provided parameters for each desired camera move. The proposed algorithm uses viewer gaze as input to computationally predict camera move parameters. The authors demonstrate their algorithm on various comic book panels and evaluate its performance by comparing their results with a professional DVD.

  12. Automatic image equalization and contrast enhancement using Gaussian mixture modeling.

    PubMed

    Celik, Turgay; Tjahjadi, Tardi

    2012-01-01

    In this paper, we propose an adaptive image equalization algorithm that automatically enhances the contrast in an input image. The algorithm uses the Gaussian mixture model to model the image gray-level distribution, and the intersection points of the Gaussian components in the model are used to partition the dynamic range of the image into input gray-level intervals. The contrast equalized image is generated by transforming the pixels' gray levels in each input interval to the appropriate output gray-level interval according to the dominant Gaussian component and the cumulative distribution function of the input interval. To take account of the hypothesis that homogeneous regions in the image represent homogeneous silences (or set of Gaussian components) in the image histogram, the Gaussian components with small variances are weighted with smaller values than the Gaussian components with larger variances, and the gray-level distribution is also used to weight the components in the mapping of the input interval to the output interval. Experimental results show that the proposed algorithm produces better or comparable enhanced images than several state-of-the-art algorithms. Unlike the other algorithms, the proposed algorithm is free of parameter setting for a given dynamic range of the enhanced image and can be applied to a wide range of image types.
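
    The intersection points used to partition the dynamic range can be computed in closed form: equating two weighted Gaussian pdfs and taking logarithms yields a quadratic in the gray level. A sketch of just this step, assuming the component parameters are already given (e.g. from a fitted mixture):

```python
import numpy as np

def gaussian_intersections(m1, s1, w1, m2, s2, w2):
    """Gray levels where two weighted Gaussian pdfs w*N(m, s^2) are equal.
    Equating the pdfs and taking logs gives a*x^2 + b*x + c = 0."""
    a = 1.0 / (2 * s1**2) - 1.0 / (2 * s2**2)
    b = m2 / s2**2 - m1 / s1**2
    c = (m1**2 / (2 * s1**2) - m2**2 / (2 * s2**2)
         - np.log((w1 * s2) / (w2 * s1)))
    if abs(a) < 1e-12:              # equal variances: a single crossing
        return [-c / b]
    disc = b**2 - 4 * a * c
    if disc < 0:
        return []
    r = np.sqrt(disc)
    return sorted([(-b - r) / (2 * a), (-b + r) / (2 * a)])
```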

  13. Image based SAR product simulation for analysis

    NASA Technical Reports Server (NTRS)

    Domik, G.; Leberl, F.

    1987-01-01

    SAR product simulation serves to predict SAR image gray values for various flight paths. Input typically consists of a digital elevation model and backscatter curves. A new method of product simulation is described that also employs a real SAR input image; this can be denoted 'image-based simulation'. Different methods to perform this SAR prediction are presented and their advantages and disadvantages discussed. Ascending and descending orbit images from NASA's SIR-B experiment were used for verification of the concept: input images from ascending orbits were converted into images from a descending orbit, and the results are compared to the available real imagery to verify that the prediction technique produces meaningful image data.

  14. Deep Recurrent Neural Networks for Human Activity Recognition

    PubMed Central

    Murad, Abdulmajid

    2017-01-01

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on various benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103
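
    The LSTM unit at the core of the proposed DRNNs can be sketched in a few lines. This is a generic single-layer forward pass with random illustrative weights, not the authors' trained architecture; it shows how a variable-length input sequence is reduced to a fixed-size hidden state that a classifier could consume:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. W: (4H, D), U: (4H, H), b: (4H,).
    Gate order in the stacked matrices: input, forget, output, candidate."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2*H])        # forget gate
    o = sigmoid(z[2*H:3*H])      # output gate
    g = np.tanh(z[3*H:4*H])      # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def run_sequence(xs, D, H, seed=0):
    """Run a variable-length sequence through one LSTM layer and return
    the final hidden state (weights are random, for illustration only)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((4*H, D)) * 0.1
    U = rng.standard_normal((4*H, H)) * 0.1
    b = np.zeros(4*H)
    h = np.zeros(H)
    c = np.zeros(H)
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
    return h
```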

  15. Deep Recurrent Neural Networks for Human Activity Recognition.

    PubMed

    Murad, Abdulmajid; Pyun, Jae-Young

    2017-11-06

    Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on various benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.

  16. Short-term change detection for UAV video

    NASA Astrophysics Data System (ADS)

    Saur, Günter; Krüger, Wolfgang

    2012-11-01

    In the last years, there has been increased use of unmanned aerial vehicles (UAVs) for video reconnaissance and surveillance. An important application in this context is change detection in UAV video data. Here we address short-term change detection, in which the time between observations ranges from several minutes to a few hours. We distinguish this task from video motion detection (shorter time scale) and from long-term change detection, which is based on time series of still images taken several days, weeks, or even years apart. Examples of relevant changes we are looking for are recently parked or moved vehicles. As a prerequisite, a precise image-to-image registration is needed. Images are selected on the basis of the geo-coordinates of the sensor's footprint and with respect to a certain minimal overlap. The automatic image-based fine registration adjusts the image pair to a common geometry by using a robust matching approach to handle outliers. The change detection algorithm has to distinguish between relevant and non-relevant changes. Examples of non-relevant changes are stereo disparity at 3D structures of the scene, changed length of shadows, and compression or transmission artifacts. To detect changes in image pairs we analyzed image differencing, local image correlation, and a transformation-based approach (multivariate alteration detection). As input we used color and gradient magnitude images. To cope with local misalignment of image structures we extended the approaches by a local neighborhood search. The algorithms are applied to several examples covering both urban and rural scenes. The local neighborhood search in combination with intensity and gradient magnitude differencing clearly improved the results. Extended image differencing performed better than both the correlation-based approach and the multivariate alteration detection. 
The algorithms are adapted for use in semi-automatic workflows for the ABUL video exploitation system of Fraunhofer IOSB (Heinze et al., 2010). In a further step we plan to incorporate more information from the video sequences into the change detection input images, e.g., by image enhancement or by along-track stereo, which are available in the ABUL system.
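
    The combination of image differencing with a local neighborhood search can be sketched as follows: for each pixel, the absolute difference is minimized over small integer shifts of the second image, which suppresses residual misregistration (the search radius and edge padding are illustrative choices):

```python
import numpy as np

def diff_with_search(img1, img2, radius=1):
    """Per-pixel minimum absolute difference over all integer shifts of
    img2 within +/-radius, to tolerate local misregistration."""
    h, w = img1.shape
    best = np.full((h, w), np.inf)
    pad = np.pad(img2.astype(np.float64), radius, mode="edge")
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            shifted = pad[dy:dy + h, dx:dx + w]
            best = np.minimum(best, np.abs(img1 - shifted))
    return best
```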

  17. Applicability of common measures in multifocus image fusion comparison

    NASA Astrophysics Data System (ADS)

    Vajgl, Marek

    2017-11-01

    Image fusion is an image-processing area aimed at fusing multiple input images to achieve an output image that is in some respect better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image that is sharp in all of its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is best. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating the fusion result.

  18. Human Splice-Site Prediction with Deep Neural Networks.

    PubMed

    Naito, Tatsuhiko

    2018-04-18

    Accurate splice-site prediction is essential to delineate gene structures from sequence data. Several computational techniques have been applied to create systems that predict canonical splice sites. For classification tasks, deep neural networks (DNNs) have achieved record-breaking results and often outperform other supervised learning techniques. In this study, a new method of splice-site prediction using DNNs is proposed. The proposed system receives an input sequence and returns an answer as to whether it is a splice site. The length of the input is 140 nucleotides, with the consensus sequence (i.e., "GT" and "AG" for the donor and acceptor sites, respectively) in the middle. Each input sequence is applied to a pretrained DNN model that determines the probability that the input is a splice site. The model consists of convolutional layers and bidirectional long short-term memory network layers. The pretraining and validation were conducted using the data sets tested in previously reported methods. The performance evaluation results showed that the proposed method can outperform the previous methods. In addition, the patterns learned by the DNNs were visualized as position frequency matrices (PFMs). Some of the PFMs were very similar to the consensus sequence. The trained DNN model and brief source code for the prediction system have been uploaded. Further improvement will be achieved with the further development of DNNs.
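
    The input preparation described above can be sketched as follows. One-hot encoding is the standard input representation for DNN sequence models; the exact zero-based center indices used for the consensus dinucleotide here are an assumption:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA string into a (len, 4) array (A, C, G, T)."""
    idx = {b: i for i, b in enumerate(BASES)}
    out = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        out[i, idx[b]] = 1.0
    return out

def has_donor_consensus(seq):
    """True if the central dinucleotide of a 140-nt window is the donor
    consensus 'GT' (assumed at 0-based positions 69-70)."""
    assert len(seq) == 140
    return seq[69:71] == "GT"
```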

  19. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance need to be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, image data are saved in NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and the high-bandwidth-bus technique are applied in the design to improve the storage rate. Image data are read out from the flash by the FPGA control logic and output separately over three different interfaces, Camera Link, LVDS, and PAL, which can provide image data for the debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720×576 and thus differs from the input image resolution, the PAL image is output after resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, and they can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shortening debugging time and improving test efficiency.

  20. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
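
    The training-data construction can be sketched as follows: a smooth random deformation field is generated, and its per-pixel norm serves as the ground-truth registration error map. The low-frequency sinusoidal construction used here is illustrative, not the authors' deformation model:

```python
import numpy as np

def smooth_field(shape, amp=4.0, seed=0):
    """Smooth random 2D displacement field (dy, dx) built from a few
    low-frequency sinusoids with random coefficients and phases."""
    rng = np.random.default_rng(seed)
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    dy = np.zeros(shape)
    dx = np.zeros(shape)
    for ky in range(1, 3):
        for kx in range(1, 3):
            py, px = rng.uniform(0, 2 * np.pi, 2)
            dy += rng.normal() * np.sin(2*np.pi*ky*y/h + py) * np.cos(2*np.pi*kx*x/w + px)
            dx += rng.normal() * np.cos(2*np.pi*ky*y/h + py) * np.sin(2*np.pi*kx*x/w + px)
    # scale both components to the requested peak amplitude (pixels)
    m = max(np.abs(dy).max(), np.abs(dx).max())
    return amp * dy / m, amp * dx / m

def error_map(dy, dx):
    """Ground-truth registration error: per-pixel norm of the deformation."""
    return np.sqrt(dy**2 + dx**2)
```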

  1. Optoelectronic associative memory

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor)

    1993-01-01

    An associative optical memory including an input spatial light modulator (SLM) in the form of an edge enhanced liquid crystal light valve (LCLV) and a pair of memory SLM's in the form of liquid crystal televisions (LCTV's) forms a matrix array of an input image which is cross correlated with a matrix array of stored images. The correlation product is detected and nonlinearly amplified to illuminate a replica of the stored image array to select the stored image correlating with the input image. The LCLV is edge enhanced by reducing the bias frequency and voltage and rotating its orientation. The edge enhancement and nonlinearity of the photodetection improves the orthogonality of the stored image. The illumination of the replicate stored image provides a clean stored image, uncontaminated by the image comparison process.

  2. Maximising information recovery from rank-order codes

    NASA Astrophysics Data System (ADS)

    Sen, B.; Furber, S.

    2007-04-01

    The central nervous system encodes information in sequences of asynchronously generated voltage spikes, but the precise details of this encoding are not well understood. Thorpe proposed rank-order codes as an explanation of the observed speed of information processing in the human visual system. The work described in this paper is inspired by the performance of SpikeNET, a biologically inspired neural architecture using rank-order codes for information processing, and is based on the retinal model developed by VanRullen and Thorpe. This model mimics retinal information processing by passing an input image through a bank of Difference of Gaussian (DoG) filters and then encoding the resulting coefficients in rank-order. To test the effectiveness of this encoding in capturing the information content of an image, the rank-order representation is decoded to reconstruct an image that can be compared with the original. The reconstruction uses a look-up table to infer the filter coefficients from their rank in the encoded image. Since the DoG filters are approximately orthogonal functions, they are treated as their own inverses in the reconstruction process. We obtained a quantitative measure of the perceptually important information retained in the reconstructed image relative to the original using a slightly modified version of an objective metric proposed by Petrovic. It is observed that around 75% of the perceptually important information is retained in the reconstruction. In the present work we reconstruct the input using a pseudo-inverse of the DoG filter-bank with the aim of improving the reconstruction and thereby extracting more information from the rank-order encoded stimulus. We observe that there is an increase of 10 - 15% in the information retrieved from a reconstructed stimulus as a result of inverting the filter-bank.
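
    The rank-order idea, encoding only the order of coefficient magnitudes and decoding through a look-up table of expected magnitudes per rank, can be sketched on a plain coefficient vector (the DoG filter bank is omitted, and signs are carried separately here purely to simplify the illustration):

```python
import numpy as np

def rank_order_encode(coeffs):
    """Return filter indices ordered by decreasing |coefficient|: the
    rank-order code discards the magnitudes and keeps only the order."""
    return np.argsort(-np.abs(coeffs), kind="stable")

def rank_order_decode(order, lut, signs):
    """Reconstruct coefficients from ranks via a look-up table of
    expected magnitudes per rank."""
    out = np.zeros(len(order))
    for rank, idx in enumerate(order):
        out[idx] = signs[idx] * lut[rank]
    return out
```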

  3. Implementation of Multi-Agent Object Attention System Based on Biologically Inspired Attractor Selection

    NASA Astrophysics Data System (ADS)

    Hashimoto, Ryoji; Matsumura, Tomoya; Nozato, Yoshihiro; Watanabe, Kenji; Onoye, Takao

    A multi-agent object attention system is proposed, based on a biologically inspired attractor selection model. Object attention is facilitated by using a video sequence and a depth map obtained through a compound-eye image sensor (TOMBO). The robustness of the multi-agent system to environmental changes is enhanced by utilizing the biological model of adaptive response by attractor selection. To implement the proposed system, an efficient VLSI architecture is employed that reduces the enormous computational costs and memory accesses required for depth map processing and the multi-agent attractor selection process. According to the FPGA implementation result of the proposed object attention system, which occupies 7,063 slices, 640×512-pixel input images can be processed in real time with three agents at a rate of 9 fps at 48 MHz operation.

  4. Detection and Alignment of 3D Domain Swapping Proteins Using Angle-Distance Image-Based Secondary Structural Matching Techniques

    PubMed Central

    Wang, Hsin-Wei; Hsu, Yen-Chu; Hwang, Jenn-Kang; Lyu, Ping-Chiang; Pai, Tun-Wen; Tang, Chuan Yi

    2010-01-01

    This work presents a novel detection method for three-dimensional domain swapping (DS), a mechanism for forming protein quaternary structures that can be visualized as if monomers had “opened” their “closed” structures and exchanged the opened portion to form intertwined oligomers. Since the first report of DS in the mid 1990s, an increasing number of identified cases has led to the postulation that DS might occur in a protein with an unconstrained terminus under appropriate conditions. DS may play important roles in the molecular evolution and functional regulation of proteins and the formation of depositions in Alzheimer's and prion diseases. Moreover, it is promising for designing auto-assembling biomaterials. Despite the increasing interest in DS, related bioinformatics methods are rarely available. Owing to a dramatic conformational difference between the monomeric/closed and oligomeric/open forms, conventional structural comparison methods are inadequate for detecting DS. Hence, there is also a lack of comprehensive datasets for studying DS. Based on angle-distance (A-D) image transformations of secondary structural elements (SSEs), specific patterns within A-D images can be recognized and classified for structural similarities. In this work, a matching algorithm to extract corresponding SSE pairs from A-D images and a novel DS score have been designed and demonstrated to be applicable to the detection of DS relationships. The Matthews correlation coefficient (MCC) and sensitivity of the proposed DS-detecting method were higher than 0.81 even when the sequence identities of the proteins examined were lower than 10%. On average, the alignment percentage and root-mean-square distance (RMSD) computed by the proposed method were 90% and 1.8Å for a set of 1,211 DS-related pairs of proteins. The performances of structural alignments remain high and stable for DS-related homologs with less than 10% sequence identities. 
In addition, the quality of its hinge loop determination is comparable to that of manual inspection. This method has been implemented as a web-based tool, which requires two protein structures as the input and then the type and/or existence of DS relationships between the input structures are determined according to the A-D image-based structural alignments and the DS score. The proposed method is expected to trigger large-scale studies of this interesting structural phenomenon and facilitate related applications. PMID:20976204

  5. The Development of Motor Coordination in Drosophila Embryos

    PubMed Central

    Crisp, Sarah; Evers, Jan Felix; Fiala, André; Bate, Michael

    2012-01-01

    We use non-invasive muscle imaging to study onset of motor activity and emergence of coordinated movement in Drosophila embryos. Earliest movements are myogenic and neurally controlled muscle contractions first appear with the onset of bursting activity 17 hours after egg laying. Initial episodes of activity are poorly organised and coordinated crawling sequences only begin to appear after a further hour of bursting. Thus network performance improves during this first period of activity. The embryo continues to exhibit bursts of crawling like sequences until shortly before hatching, while other reflexes also mature. Bursting does not begin as a reflex response to sensory input but appears to reflect the onset of spontaneous activity in the motor network. It does not require GABA-ergic transmission, and using a light activated channel to excite the network we demonstrate activity dependent depression that may cause burst termination. PMID:18927150

  6. A new variant of Petri net controlled grammars

    NASA Astrophysics Data System (ADS)

    Jan, Nurhidaya Mohamad; Turaev, Sherzod; Fong, Wan Heng; Sarmin, Nor Haniza

    2015-10-01

    A Petri net controlled grammar is a Petri net associated with a context-free grammar such that the successful derivations of the grammar can be simulated by the occurrence sequences of the net. In this paper, we introduce a new variant of Petri net controlled grammars, called a place-labeled Petri net controlled grammar, which is a context-free grammar equipped with a Petri net and a function that maps places of the net to productions of the grammar. The language consists of all terminal strings that can be obtained by applying, in parallel, multisets of the rules that are the images of the sets of input places of transitions in a successful occurrence sequence of the Petri net. We study the effect of different labeling strategies on the computational power and establish lower and upper bounds for the generative capacity of place-labeled Petri net controlled grammars.
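
    The underlying Petri net semantics, a transition firing when each of its input places holds enough tokens, can be sketched in a few lines (markings as dictionaries; the grammar-control machinery is omitted):

```python
def can_fire(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from its input places and produce
    tokens in its output places; returns the new marking."""
    if not can_fire(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] = m[p] - n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```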

  7. CHANGES IN EARTHWORM DENSITY AND COMMUNITY STRUCTURE DURING SECONDARY SUCCESSION IN ABANDONED TROPICAL PASTURES

    Treesearch

    Xiaoming Zou; Grizelle Gonzalez

    1997-01-01

    Plant community succession alters the quantity and chemistry of organic inputs to soils. These differences in organic input may trigger changes in soil fertility and faunal activity. We examined earthworm density and community structure along a successional sequence of plant communities in abandoned tropical pastures in Puerto Rico. The chronological sequence of these...

  8. Sound Sequence Discrimination Learning Motivated by Reward Requires Dopaminergic D2 Receptor Activation in the Rat Auditory Cortex

    ERIC Educational Resources Information Center

    Kudoh, Masaharu; Shibuki, Katsuei

    2006-01-01

    We have previously reported that sound sequence discrimination learning requires cholinergic inputs to the auditory cortex (AC) in rats. In that study, reward was used for motivating discrimination behavior in rats. Therefore, dopaminergic inputs mediating reward signals may have an important role in the learning. We tested the possibility in the…

  9. Personal identification based on blood vessels of retinal fundus images

    NASA Astrophysics Data System (ADS)

    Fukuta, Keisuke; Nakagawa, Toshiaki; Hayashi, Yoshinori; Hatanaka, Yuji; Hara, Takeshi; Fujita, Hiroshi

    2008-03-01

    Biometric techniques have been implemented in place of conventional identification methods such as passwords in computers, automatic teller machines (ATMs), and entrance and exit management systems. We propose a personal identification (PI) system using color retinal fundus images, which are unique to each individual. The proposed identification procedure is based on comparison of an input fundus image with reference fundus images in the database. In the first step, registration between the input image and the reference image is performed; this step includes translational and rotational movement. The PI is based on the measure of similarity between blood vessel images generated from the input and reference images. The similarity measure is defined as the cross-correlation coefficient calculated from the pixel values. When the similarity is greater than a predetermined threshold, the input image is identified, meaning that the input and reference images belong to the same person. Four hundred sixty-two fundus images, including forty-one same-person image pairs, were used for the evaluation of the proposed technique. The false rejection rate and the false acceptance rate were 9.9×10⁻⁵% and 4.3×10⁻⁵%, respectively. The results indicate that the proposed method has higher performance than other biometrics except DNA. For practical application in public settings, a device that can take retinal fundus images easily is needed. The proposed method is applicable not only to PI but also to a system that warns about misfiling of fundus images in medical facilities.
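
    The cross-correlation similarity and threshold decision described above can be sketched as follows (the threshold value here is illustrative, not the one used in the paper):

```python
import numpy as np

def similarity(a, b):
    """Cross-correlation (Pearson) coefficient between two images."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(input_img, references, threshold=0.6):
    """Return the index of the best-matching reference if its similarity
    exceeds the threshold, else None (no identification)."""
    scores = [similarity(input_img, r) for r in references]
    best = int(np.argmax(scores))
    return best if scores[best] > threshold else None
```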

  10. Genome sequencing of a single tardigrade Hypsibius dujardini individual

    PubMed Central

    Arakawa, Kazuharu; Yoshida, Yuki; Tomita, Masaru

    2016-01-01

    Tardigrades are ubiquitous microscopic animals that play an important role in the study of metazoan phylogeny. Most terrestrial tardigrades can withstand extreme environments by entering an ametabolic desiccated state termed anhydrobiosis. Due to their small size and the non-axenic nature of laboratory cultures, molecular studies of tardigrades are prone to contamination. To minimize the possibility of microbial contaminations and to obtain high-quality genomic information, we have developed an ultra-low input library sequencing protocol to enable the genome sequencing of a single tardigrade Hypsibius dujardini individual. Here, we describe the details of our sequencing data and the ultra-low input library preparation methodologies. PMID:27529330

  11. Genome sequencing of a single tardigrade Hypsibius dujardini individual.

    PubMed

    Arakawa, Kazuharu; Yoshida, Yuki; Tomita, Masaru

    2016-08-16

    Tardigrades are ubiquitous microscopic animals that play an important role in the study of metazoan phylogeny. Most terrestrial tardigrades can withstand extreme environments by entering an ametabolic desiccated state termed anhydrobiosis. Due to their small size and the non-axenic nature of laboratory cultures, molecular studies of tardigrades are prone to contamination. To minimize the possibility of microbial contaminations and to obtain high-quality genomic information, we have developed an ultra-low input library sequencing protocol to enable the genome sequencing of a single tardigrade Hypsibius dujardini individual. Here, we describe the details of our sequencing data and the ultra-low input library preparation methodologies.

  12. A modular DNA signal translator for the controlled release of a protein by an aptamer.

    PubMed

    Beyer, Stefan; Simmel, Friedrich C

    2006-01-01

    Owing to the intimate linkage of sequence and structure in nucleic acids, DNA is an extremely attractive molecule for the development of molecular devices, in particular when a combination of information processing and chemomechanical tasks is desired. Many of the previously demonstrated devices are driven by hybridization between DNA 'effector' strands and specific recognition sequences on the device. For applications it is of great interest to link several such molecular devices together within artificial reaction cascades. Often it will not be possible to choose DNA sequences freely, e.g. when functional nucleic acids such as aptamers are used. In such cases, translation of an arbitrary 'input' sequence into a desired effector sequence may be required. Here we demonstrate a molecular 'translator' for information encoded in DNA and show how it can be used to control the release of a protein by an aptamer using an arbitrarily chosen DNA input strand. The function of the translator is based on branch migration and the action of the endonuclease FokI. The modular design of the translator facilitates the adaptation of the device to various input or output sequences.

  13. An improved non-uniformity correction algorithm and its GPU parallel implementation

    NASA Astrophysics Data System (ADS)

    Cheng, Kuanhong; Zhou, Huixin; Qin, Hanlin; Zhao, Dong; Qian, Kun; Rong, Shenghui

    2018-05-01

    The performance of the SLP-THP based non-uniformity correction algorithm is strongly affected by the result of the SLP filter, which often leads to image blurring and ghosting artifacts. To address this problem, an improved SLP-THP based non-uniformity correction method with a curvature constraint is proposed, including a new way to estimate the spatial low-frequency component. First, the details and contours of the input image are obtained by minimizing the local Gaussian curvature and the mean curvature of the image surface, respectively. Then, a guided filter is used to combine these two parts into an estimate of the spatial low-frequency component. Finally, this SLP component is fed into the SLP-THP method to perform non-uniformity correction. The performance of the proposed algorithm was verified on several real and simulated infrared image sequences. The experimental results indicate that the proposed algorithm reduces the non-uniformity without losing detail. In addition, a GPU-based parallel implementation that runs 150 times faster than the CPU version is presented, showing that the proposed algorithm has great potential for real-time application.
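    The basic SLP-THP idea (spatial low-pass to isolate the high-frequency part, temporal low-pass to separate the static fixed pattern from moving scene detail) can be sketched in numpy; a plain box blur stands in for the paper's curvature-constrained SLP estimate, and the kernel size and update rate are hypothetical:

```python
import numpy as np

def box_blur(img, k=7):
    """Separable box filter: a simple stand-in for the spatial low-pass (SLP)
    component (the paper estimates it with curvature-constrained filtering)."""
    ker = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, ker, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, ker, mode='same'), 0, tmp)

def slp_thp_correct(frames, alpha=0.2):
    """The spatial high-pass of each frame contains scene detail plus the
    fixed-pattern noise; a recursive temporal average isolates the static
    pattern, which is then subtracted from each frame."""
    offset = np.zeros_like(frames[0])
    corrected = []
    for f in frames:
        high = f - box_blur(f)             # spatial high-pass component
        offset += alpha * (high - offset)  # temporal low-pass -> fixed pattern
        corrected.append(f - offset)
    return np.asarray(corrected)

# simulated sequence: smooth drifting ramp scene plus a static per-pixel offset
rng = np.random.default_rng(1)
fpn = 0.3 * rng.standard_normal((32, 32))     # fixed-pattern (non-uniformity) noise
y = np.linspace(0.0, 1.0, 32)[:, None]
scenes = [(y + 0.01 * t) * np.ones((1, 32)) for t in range(60)]
frames = [s + fpn for s in scenes]
out = slp_thp_correct(frames)
```

    On this toy sequence the residual error of the last corrected frame (away from the filter borders) is far below the raw fixed-pattern level.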

  14. Image processing tool for automatic feature recognition and quantification

    DOEpatents

    Chen, Xing; Stoddard, Ryan J.

    2017-05-02

    A system for defining structures within an image is described. The system includes reading an input file, preprocessing the input file while preserving metadata such as scale information, and then detecting features of the input file. In one version, detection first uses an edge detector, followed by identification of features using a Hough transform. The output of the process is the identified elements within the image.
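    A minimal numpy sketch of the detection chain the patent names: a gradient-magnitude threshold stands in for the edge detector, followed by a basic (rho, theta) Hough accumulator for straight lines; the toy image and threshold are illustrative only:

```python
import numpy as np

def edge_map(img, thresh=0.4):
    """Crude edge detector: gradient-magnitude threshold (stand-in for a
    full edge detector such as Canny)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def hough_lines(edges, n_theta=180):
    """Vote in (rho, theta) space; peaks correspond to straight lines
    x*cos(theta) + y*sin(theta) = rho."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, diag

# toy image: a vertical step edge yields edge pixels along columns 7 and 8
# (central differences straddle the step)
img = np.zeros((20, 20))
img[:, 8:] = 1.0
acc, diag = hough_lines(edge_map(img))
# at theta = 0 every pixel of a vertical edge column votes for rho = x
```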

  15. Analysis of deep learning methods for blind protein contact prediction in CASP12.

    PubMed

    Wang, Sheng; Sun, Siqi; Xu, Jinbo

    2018-03-01

    Here we present the results of protein contact prediction achieved in CASP12 by our RaptorX-Contact server, which is an early implementation of our deep learning method for contact prediction. On a set of 38 free-modeling target domains with a median family size of around 58 effective sequences, our server obtained an average top L/5 long- and medium-range contact accuracy of 47% and 44%, respectively (L = length). A complete implementation has an average accuracy of 59% and 57%, respectively. Our deep learning method formulates contact prediction as a pixel-level image labeling problem and simultaneously predicts all residue pairs of a protein using a combination of two deep residual neural networks, taking as input the residue conservation information, predicted secondary structure and solvent accessibility, contact potential, and coevolution information. Our approach differs from existing methods mainly in (1) formulating contact prediction as a pixel-level image labeling problem instead of an image-level classification problem; (2) simultaneously predicting all contacts of an individual protein to make effective use of contact occurrence patterns; and (3) integrating both one-dimensional and two-dimensional deep convolutional neural networks to effectively learn the complex sequence-structure relationship, including high-order residue correlation. This paper discusses the RaptorX-Contact pipeline, both contact prediction and contact-based folding results, and finally the strengths and weaknesses of our method. © 2017 Wiley Periodicals, Inc.

  16. Effect of random phase mask on input plane in photorefractive authentic memory with two-wave encryption method

    NASA Astrophysics Data System (ADS)

    Mita, Akifumi; Okamoto, Atsushi; Funakoshi, Hisatoshi

    2004-06-01

    We have proposed an all-optical authentic memory based on a two-wave encryption method. In the recording process, the image data are encrypted into white noise by random phase masks added to the input beam carrying the image data and to the reference beam. Only a reading beam with the phase-conjugate distribution of the reference beam can decrypt the encrypted data. If the encrypted data are read out with an incorrect phase distribution, the output data are transformed into white noise. Moreover, during readout, reconstructions of the encrypted data interfere destructively, resulting in zero intensity. Our memory therefore has the advantage that unlawful access can be detected easily by measuring the output beam intensity. In our encryption method, the random phase mask on the input plane plays an important role in transforming the input image into white noise and in preventing decryption of that white noise back into the input image by blind deconvolution. Without this mask, when unauthorized users observe the output beam with a CCD during readout with a plane wave, they obtain exactly the same intensity distribution as the Fourier transform of the input image, so the encrypted image can be decrypted easily by blind deconvolution. With the mask, even if unauthorized users observe the output beam in the same way, the encrypted image cannot be decrypted, because the observed intensity distribution is randomly dispersed by the mask. The mask thus increases the robustness of the method. In this report, we compare the correlation coefficient between the output image and the input image, which represents the degree to which the output resembles white noise, with and without the mask. We show that the robustness of the encryption method is increased, as the correlation coefficient improves from 0.3 to 0.1 when the mask is used.

  17. Image feature detection and extraction techniques performance evaluation for development of panorama under different light conditions

    NASA Astrophysics Data System (ADS)

    Patil, Venkat P.; Gohatre, Umakant B.

    2018-04-01

    The technique of obtaining a wider field of view and a high-resolution integrated image is normally required to develop a panorama of photographic images, or of a scene, from a sequence of multiple partial views. Various image stitching methods have been developed recently. Image stitching typically comprises five basic steps: feature detection and extraction, image registration, homography computation, image warping, and blending. This paper reviews some of the existing image feature detection and extraction techniques and image stitching algorithms by categorizing them into several methods. For each category, the basic concepts are first described, and the modifications made to these fundamental concepts by different researchers are then elaborated. The paper also highlights fundamental techniques for photographic image feature detection and extraction under various illumination conditions. Image stitching is applicable in fields such as medical imaging, astrophotography, and computer vision. To compare performance, three feature detection methods are considered, i.e. ORB, SURF, and HESSIAN, and the time required for feature detection on the input images is measured. The results conclude that under daylight conditions the ORB algorithm performs better, since it requires less time while extracting more features, whereas for images under night-light conditions the SURF detector performs better than the ORB/HESSIAN detectors.

  18. Human silhouette matching based on moment invariants

    NASA Astrophysics Data System (ADS)

    Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi

    2005-07-01

    This paper applies silhouette matching based on moment invariants to infer human motion parameters from video sequences captured by a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built in advance using marker techniques. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is used to project the standard 3D motion database onto a 2D one. The video recovery problem is thus formulated as a matching issue: finding the body pose in the standard 2D library most similar to the one in the video image. The framework is applied to the trampoline sport, where we obtain complicated human motion parameters from single-camera video sequences, and numerous experiments demonstrate that this approach is feasible for monocular video-based 3D motion reconstruction.
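    The moment-invariant representation behind such silhouette matching can be sketched with the first two Hu invariants, computed from normalized central moments; the square silhouettes and the tolerance below are illustrative only:

```python
import numpy as np

def hu_invariants(img):
    """First two Hu moment invariants (translation- and scale-invariant),
    computed from normalized central moments of a binary silhouette."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00
    def mu(p, q):                       # central moment
        return (((xs - cx) ** p) * ((ys - cy) ** q) * img).sum()
    def eta(p, q):                      # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2)
    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2

# a square silhouette, then the same shape translated and scaled 2x:
# the invariants agree up to discretization error
a = np.zeros((64, 64)); a[10:20, 10:20] = 1.0
b = np.zeros((64, 64)); b[30:50, 25:45] = 1.0
```

    Matching then reduces to comparing the invariant vectors of the video silhouette and the library poses.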

  19. Automated detection of arterial input function in DSC perfusion MRI in a stroke rat model

    NASA Astrophysics Data System (ADS)

    Yeh, M.-Y.; Lee, T.-H.; Yang, S.-T.; Kuo, H.-H.; Chyi, T.-K.; Liu, H.-L.

    2009-05-01

    Quantitative cerebral blood flow (CBF) estimation requires deconvolution of the tissue concentration time curves with an arterial input function (AIF). However, image-based determination of the AIF in rodents is challenging due to limited spatial resolution. We evaluated the feasibility of quantitative analysis using automated AIF detection and compared the results with the commonly applied semi-quantitative analysis. Permanent occlusion of the bilateral or unilateral common carotid artery was used to induce cerebral ischemia in rats. Dynamic susceptibility contrast imaging was performed on a 3-T magnetic resonance scanner with a spin-echo echo-planar imaging sequence (TR/TE = 700/80 ms, FOV = 41 mm, matrix = 64, 3 slices, SW = 2 mm), starting 7 s prior to contrast injection (1.2 ml/kg), at four different time points. For quantitative analysis, CBF was calculated after deconvolution with an AIF obtained from the 10 voxels showing the greatest contrast enhancement. For semi-quantitative analysis, relative CBF was estimated as the integral divided by the first moment of the relaxivity time curves. We observed that when the AIFs obtained in three different ROIs (whole brain, hemisphere without lesion, and hemisphere with lesion) were similar, the CBF ratios (lesion/normal) from the quantitative and semi-quantitative analyses showed a similar trend across operative time points; when the AIFs differed, the CBF ratios differed as well. We conclude that, using local maxima, a proper AIF can be defined without knowing the anatomical location of the arteries in a stroke rat model.
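    The semi-quantitative estimate (curve integral divided by its first moment) can be sketched as follows; the gamma-variate curves and their parameters are simulated, not from the study, and the first moment is taken here as the curve's center-of-mass time:

```python
import numpy as np

def relative_cbf(t, conc):
    """Semi-quantitative relative CBF: curve integral divided by its first
    moment (center-of-mass transit time). By the central volume principle
    this scales like CBV / MTT."""
    dt = t[1] - t[0]
    area = conc.sum() * dt              # ~ relative blood volume (CBV)
    mtt = (t * conc).sum() * dt / area  # first moment: mean transit time
    return area / mtt

t = np.linspace(0.0, 60.0, 6000)        # seconds

def gamma_variate(t0, a, b):
    """Classic gamma-variate bolus model (t - t0)^a * exp(-(t - t0)/b)."""
    s = np.clip(t - t0, 0.0, None)
    return s ** a * np.exp(-s / b)

normal = gamma_variate(5.0, 3.0, 1.5)                     # sharp bolus passage
lesion = gamma_variate(8.0, 3.0, 3.0) * (1.5 / 3.0) ** 4  # same area, longer transit
ratio = relative_cbf(t, lesion) / relative_cbf(t, normal) # lesion/normal < 1
```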

  20. Application of population sequencing (POPSEQ) for ordering and imputing genotyping-by-sequencing markers in hexaploid wheat

    USDA-ARS?s Scientific Manuscript database

    The advancement of next-generation sequencing technologies in conjunction with new bioinformatics tools enabled fine-tuning of sequence-based high resolution mapping strategies for complex genomes. Although genotyping-by-sequencing (GBS) provides a large number of markers, its application for assoc...

  1. Convolution Operation of Optical Information via Quantum Storage

    NASA Astrophysics Data System (ADS)

    Li, Zhixiang; Liu, Jianji; Fan, Hongming; Zhang, Guoquan

    2017-06-01

    We propose a novel method to achieve optical convolution of two input images via quantum storage based on the electromagnetically induced transparency (EIT) effect. By placing an EIT medium in the confocal Fourier plane of a 4f imaging system, the optical convolution of the two input images can be achieved in the image plane.
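    The 4f-plane operation has a simple discrete analogue, independent of the EIT physics: multiplying the two spectra in the Fourier plane and transforming back yields the (circular) convolution in the image plane. A numpy sketch, verified against direct summation:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.random((8, 8))   # first input image
b = rng.random((8, 8))   # second input image

# 4f analogue: multiply the spectra in the (confocal) Fourier plane,
# then transform back to the image plane -> circular convolution
conv_fourier = np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)).real

# direct circular convolution for verification
n = 8
conv_direct = np.zeros_like(a)
for u in range(n):
    for v in range(n):
        conv_direct[u, v] = sum(a[i, j] * b[(u - i) % n, (v - j) % n]
                                for i in range(n) for j in range(n))
```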

  2. Online image classification under monotonic decision boundary constraint

    NASA Astrophysics Data System (ADS)

    Lu, Cheng; Allebach, Jan; Wagner, Jerry; Pitta, Brandi; Larson, David; Guo, Yandong

    2015-01-01

    Image classification is a prerequisite for copy quality enhancement in an all-in-one (AIO) device that comprises a printer and a scanner and can be used to scan, copy, and print. Different processing pipelines are provided in an AIO printer, each designed specifically for one type of input image to achieve optimal output image quality. A typical approach to this problem is to apply a Support Vector Machine (SVM) to classify the input image and feed it to its corresponding processing pipeline. Online SVM training can improve classification performance as input images accumulate. At the same time, we want to make a quick decision on the input image to speed up classification, which means the AIO device sometimes does not need to scan the entire image before reaching a final decision. These two constraints, online SVM and quick decision, raise two questions: 1) what features are suitable for classification; and 2) how the decision boundary should be controlled during online SVM training. This paper discusses the compatibility of online SVM training with quick-decision capability.
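    Online SVM training can be sketched as stochastic gradient descent on the regularized hinge loss, updating one sample at a time as images arrive; the 2-D feature layout, learning rate, and regularization constant below are hypothetical:

```python
import numpy as np

class OnlineLinearSVM:
    """Minimal online linear SVM: SGD on the regularized hinge loss,
    one sample at a time (a stand-in for full online SVM training)."""
    def __init__(self, dim, lr=0.05, lam=1e-3):
        self.w = np.zeros(dim)
        self.b = 0.0
        self.lr, self.lam = lr, lam

    def partial_fit(self, x, y):            # y in {-1, +1}
        self.w *= 1.0 - self.lr * self.lam  # shrink: L2 regularization step
        if y * (self.w @ x + self.b) < 1:   # inside the margin: hinge subgradient
            self.w += self.lr * y * x
            self.b += self.lr * y

    def predict(self, x):
        return 1 if self.w @ x + self.b >= 0 else -1

# stream of 2-D "image feature" vectors from two well-separated classes
rng = np.random.default_rng(0)
svm = OnlineLinearSVM(dim=2)
for _ in range(300):
    y = rng.choice([-1, 1])
    x = y * np.array([2.0, 2.0]) + 0.5 * rng.standard_normal(2)
    svm.partial_fit(x, y)
```

    A "quick decision" then corresponds to calling `predict` on features from a partial scan and committing early when the margin is large.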

  3. Automated sequence-specific protein NMR assignment using the memetic algorithm MATCH.

    PubMed

    Volk, Jochen; Herrmann, Torsten; Wüthrich, Kurt

    2008-07-01

    MATCH (Memetic Algorithm and Combinatorial Optimization Heuristics) is a new memetic algorithm for automated sequence-specific polypeptide backbone NMR assignment of proteins. MATCH employs local optimization for tracing partial sequence-specific assignments within a global, population-based search environment, where the simultaneous application of local and global optimization heuristics guarantees high efficiency and robustness. MATCH thus makes combined use of the two predominant concepts in use for automated NMR assignment of proteins. Dynamic transition and inherent mutation are new techniques that enable automatic adaptation to variable quality of the experimental input data. The concept of dynamic transition is incorporated in all major building blocks of the algorithm, where it enables switching between local and global optimization heuristics at any time during the assignment process. Inherent mutation restricts the intrinsically required randomness of the evolutionary algorithm to those regions of the conformation space that are compatible with the experimental input data. Using intact and artificially deteriorated APSY-NMR input data of proteins, MATCH performed sequence-specific resonance assignment with high efficiency and robustness.

  4. The effect of input data transformations on object-based image analysis

    PubMed Central

    LIPPITT, CHRISTOPHER D.; COULTER, LLOYD L.; FREEMAN, MARY; LAMANTIA-BISHOP, JEFFREY; PANG, WYSON; STOW, DOUGLAS A.

    2011-01-01

    The effect of using spectral transform images as input data on segmentation quality and its potential effect on products generated by object-based image analysis are explored in the context of land cover classification in Accra, Ghana. Five image data transformations are compared to untransformed spectral bands in terms of their effect on segmentation quality and final product accuracy. The relationship between segmentation quality and product accuracy is also briefly explored. Results suggest that input data transformations can aid in the delineation of landscape objects by image segmentation, but the effect is idiosyncratic to the transformation and object of interest. PMID:21673829

  5. MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks

    NASA Astrophysics Data System (ADS)

    Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.

    2007-05-01

    This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. To meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. Therefore, prior to the MPEG-4 coding stage, a pre-processing stage is performed. It consists first of a segmentation phase according to motion and texture, based on Principal Component Analysis (PCA) of the input video sequence, and second of a down-sampling phase that depends on the segmentation results. The segmentation task yields a set of texture and motion maps, which are included in the bit-stream as user-data side information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher image visual quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been validated on a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a programmable logic device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area, low-power implementation.

  6. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    NASA Astrophysics Data System (ADS)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

    With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect for good performance is the way the scene space is sampled when creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions due to the perspective projection of the camera. Furthermore, equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective-invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas close to the camera and lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal or better results than a linear sampling, with fewer sampling points and thus less runtime. Our algorithm allows online computation of depth maps for subsequences of five frames, provided that the relative poses between all frames are given.
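    The effect of this non-equidistant sampling can be illustrated in scene space: sampling equidistantly in inverse depth (disparity) yields dense plane hypotheses near the camera and sparse ones in distant regions. (The paper derives the sampling points in image space via the cross-ratio; the depth range and plane count below are arbitrary.)

```python
import numpy as np

def inverse_depth_planes(d_min, d_max, n):
    """Plane-sweep depths equidistant in inverse depth: dense near the
    camera, sparse far away, mirroring the perspective projection."""
    return 1.0 / np.linspace(1.0 / d_min, 1.0 / d_max, n)

planes = inverse_depth_planes(1.0, 100.0, 32)  # 32 hypotheses from 1 m to 100 m
steps = np.diff(planes)                        # spacing grows with distance
```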

  7. Long period pseudo random number sequence generator

    NASA Technical Reports Server (NTRS)

    Wang, Charles C. (Inventor)

    1989-01-01

    A circuit for generating a sequence of pseudo random numbers (A sub K). There is an exponentiator in GF(2 sup m), for the normal basis representation of elements in a finite field GF(2 sup m) each represented by m binary digits, having two inputs and an output from which the sequence (A sub K) of pseudo random numbers is taken. One of the two inputs is connected to receive the output (E sub K) of a maximal length shift register of n stages. There is a switch having a pair of inputs and an output. The switch output is connected to the other of the two inputs of the exponentiator. One of the switch inputs is connected for initially receiving a primitive element (A sub 0) in GF(2 sup m). Finally, there is a delay circuit having an input and an output. The delay circuit output is connected to the other of the switch inputs, and the delay circuit input is connected to the output of the exponentiator. Thus, after the exponentiator initially receives the primitive element (A sub 0) in GF(2 sup m) through the switch, the switch can be switched to cause the exponentiator to receive as its input its own delayed output A(K-1), thereby generating (A sub K) continuously at the output of the exponentiator. The exponentiator in GF(2 sup m) is novel and comprises a cyclic-shift circuit, a Massey-Omura multiplier, and a control logic circuit, all operably connected to perform the function U(sub i) = A(sup 2(sup i)) (for n(sub i) = 1) or 1 (for n(sub i) = 0).
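    The exponentiator's square-and-multiply decomposition (the product of the factors U(sub i)) can be sketched in software. Note the patent uses a normal-basis representation with a Massey-Omura multiplier; this illustration instead uses a polynomial basis over GF(2^8) with the AES reduction polynomial as a convenient example:

```python
def gf_mul(a, b, m=8, poly=0x11B):
    """Carry-less multiplication modulo an irreducible polynomial: the
    GF(2^m) product in polynomial basis (the patent's hardware uses a
    normal-basis Massey-Omura multiplier instead)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a >> m:           # reduce when degree reaches m
            a ^= poly
    return r

def gf_exp(a, n, m=8, poly=0x11B):
    """Square-and-multiply: A^n is the product over the bits n_i of n of
    U_i = A^(2^i) when n_i = 1, and U_i = 1 when n_i = 0."""
    result, square = 1, a
    while n:
        if n & 1:
            result = gf_mul(result, square, m, poly)
        square = gf_mul(square, square, m, poly)  # repeated squaring: A^(2^i)
        n >>= 1
    return result
```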

  8. Rigid-body motion correction of the liver in image reconstruction for golden-angle stack-of-stars DCE MRI.

    PubMed

    Johansson, Adam; Balter, James; Cao, Yue

    2018-03-01

    Respiratory motion can affect pharmacokinetic perfusion parameters quantified from liver dynamic contrast-enhanced MRI. Image registration can be used to align dynamic images after reconstruction. However, intra-image motion blur remains after alignment and can alter the shape of contrast-agent uptake curves. We introduce a method to correct for inter- and intra-image motion during image reconstruction. Sixteen liver dynamic contrast-enhanced MRI examinations of nine subjects were performed using a golden-angle stack-of-stars sequence. For each examination, an image time series with high temporal resolution but severe streak artifacts was reconstructed. Images were aligned using region-limited rigid image registration within a region of interest covering the liver. The transformations resulting from alignment were used to correct raw data for motion by modulating and rotating acquired lines in k-space. The corrected data were then reconstructed using view sharing. Portal-venous input functions extracted from motion-corrected images had significantly greater peak signal enhancements (mean increase: 16%, t-test, P < 0.001) than those from images aligned using image registration after reconstruction. In addition, portal-venous perfusion maps estimated from motion-corrected images showed fewer artifacts close to the edge of the liver. Motion-corrected image reconstruction restores uptake curves distorted by motion. Motion correction also reduces motion artifacts in estimated perfusion parameter maps. Magn Reson Med 79:1345-1353, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
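    The k-space modulation step rests on the Fourier shift theorem: multiplying k-space by a linear phase ramp translates the image, so a rigid translation found by registration can be applied to the raw data directly. A Cartesian-grid numpy sketch (the paper operates on radial stack-of-stars lines, which additionally require rotation of the acquired lines):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((32, 32))
dy, dx = 3, 5   # rigid translation found by registration, in pixels

# Fourier shift theorem: a linear phase ramp in k-space translates the image,
# so the correction is applied to the raw data before reconstruction
k = np.fft.fft2(img)
fy = np.fft.fftfreq(32)[:, None]
fx = np.fft.fftfreq(32)[None, :]
corrected_k = k * np.exp(-2j * np.pi * (fy * dy + fx * dx))
translated = np.fft.ifft2(corrected_k).real  # img shifted by (dy, dx)
```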

  9. Reconstruction of input functions from a dynamic PET image with sequential administration of 15O2 and C15O2 for noninvasive and ultra-rapid measurement of CBF, OEF, and CMRO2.

    PubMed

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Hiroyuki; Yamamoto, Yuka; Hatakeyama, Tetsuhiro; Nishiyama, Yoshihiro

    2018-05-01

    CBF, OEF, and CMRO2 images can be quantitatively assessed using PET. Their calculation requires arterial input functions, which normally require an invasive procedure. The aim of the present study was to develop a non-invasive approach with image-derived input functions (IDIFs) using an image from an ultra-rapid 15O2 and C15O2 protocol. Our technique consists of a formula that expresses the input using a tissue curve with rate constants. For multiple tissue curves, the rate constants were estimated so as to minimize the differences among the inputs derived from the multiple tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 24). The estimated IDIFs reproduced the measured ones well. The differences in the CBF, OEF, and CMRO2 values calculated by the two methods relative to the invasive method were small (<10%), and the values showed tight correlations (r = 0.97). Simulations showed that errors associated with the assumed parameters were less than ~10%. Our results demonstrate that IDIFs can be reconstructed from tissue curves, suggesting the possibility of a non-invasive technique to assess CBF, OEF, and CMRO2.

  10. Input Sources of Third Person Singular –s Inconsistency in Children with and without Specific Language Impairment*

    PubMed Central

    Leonard, Laurence B.; Fey, Marc E.; Deevy, Patricia; Bredin-Oja, Shelley L.

    2015-01-01

    We tested four predictions based on the assumption that optional infinitives can be attributed to properties of the input whereby children inappropriately extract nonfinite subject-verb sequences (e.g. the girl run) from larger input utterances (e.g. Does the girl run? Let’s watch the girl run). Thirty children with specific language impairment (SLI) and 30 typically developing children heard novel and familiar verbs that appeared exclusively either in utterances containing nonfinite subject-verb sequences or in simple sentences with the verb inflected for third person singular –s. Subsequent testing showed strong input effects, especially for the SLI group. The results provide support for input-based factors as significant contributors not only to the optional infinitive period in typical development, but also to the especially protracted optional infinitive period seen in SLI. PMID:25076070

  11. Input preshaping with frequency domain information for flexible-link manipulator control

    NASA Technical Reports Server (NTRS)

    Tzes, Anthony; Englehart, Matthew J.; Yurkovich, Stephen

    1989-01-01

    The application of an input preshaping scheme to flexible manipulators is considered. The resulting control corresponds to a feedforward term that convolves the desired reference input in real time with a sequence of impulses to produce a vibration-free output. The robustness of the algorithm with respect to injected disturbances and modal frequency variations is not satisfactory, but can be improved by convolving the input with a longer sequence of impulses. Incorporating the preshaping scheme into a closed-loop plant using acceleration feedback offers satisfactory disturbance rejection, due to the feedback, and cancellation of the flexible-mode effects, due to the preshaping. A frequency-domain identification scheme is used to estimate the modal frequencies on-line and subsequently update the spacing between the impulses. The combined adaptive input preshaping scheme provides the fastest possible slew that results in a vibration-free output.
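    The impulse-sequence convolution can be sketched with a standard two-impulse zero-vibration (ZV) shaper, the simplest member of this family; the 2 Hz mode, damping ratio, and sample time below are assumed for illustration:

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse zero-vibration (ZV) preshaping sequence for one flexible
    mode with natural frequency wn (rad/s) and damping ratio zeta."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    wd = wn * np.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    t2 = np.pi / wd                      # second impulse: half a damped period later
    seq = np.zeros(int(round(t2 / dt)) + 1)
    seq[0] = 1.0 / (1.0 + K)             # impulse amplitudes sum to one,
    seq[-1] = K / (1.0 + K)              # preserving the final setpoint
    return seq

dt = 1e-3
shaper = zv_shaper(wn=2 * np.pi * 2.0, zeta=0.05, dt=dt)  # assumed 2 Hz mode
step = np.ones(2000)
shaped = np.convolve(step, shaper)[:2000]  # feedforward-preshaped reference
```

    Convolving with more impulses (e.g. a ZVD shaper) lengthens the command slightly but improves robustness to modal frequency errors, as the abstract notes.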

  12. Vector generator scan converter

    DOEpatents

    Moore, James M.; Leighton, James F.

    1990-01-01

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter, which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image, and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirtyfold.

  13. Capturing Revolute Motion and Revolute Joint Parameters with Optical Tracking

    NASA Astrophysics Data System (ADS)

    Antonya, C.

    2017-12-01

    Optical tracking of users and of various technical systems is becoming more and more popular. It consists of analyzing sequences of recorded images using video capture devices and image-processing algorithms. The returned data contain mainly point clouds, coordinates of markers, or coordinates of points of interest. These data can be used to retrieve information about the geometry of the objects, but also to extract parameters for an analytical model of the system useful in a variety of computer-aided engineering simulations. Parameter identification of joints deals with the extraction of physical parameters (mainly geometric parameters) for the purpose of constructing accurate kinematic and dynamic models. The input data are the time series of the markers' positions. The least-squares method was used to fit the data to different geometrical shapes (ellipse, circle, plane) and to obtain the position and orientation of revolute joints.
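    The least-squares circle fit used to recover a revolute joint from marker positions can be sketched with the linearized (Kåsa) formulation; the joint center, radius, and marker trajectory below are synthetic:

```python
import numpy as np

def fit_circle(pts):
    """Kåsa least-squares circle fit: solve the linearized system
    x^2 + y^2 = 2a*x + 2b*y + c for the center (a, b) and radius."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return (a, b), np.sqrt(c + a ** 2 + b ** 2)

# markers on a link rotating about a revolute joint at (2, -1), radius 0.5
t = np.linspace(0.0, np.pi, 50)
pts = np.column_stack([2.0 + 0.5 * np.cos(t), -1.0 + 0.5 * np.sin(t)])
center, radius = fit_circle(pts)   # recovers the joint center and radius
```

    The fitted center gives the revolute joint position; for 3-D data, the marker trajectory would first be projected onto a fitted plane whose normal gives the joint axis.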

  14. Block matching and Wiener filtering approach to optical turbulence mitigation and its application to simulated and real imagery with quantitative error analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Rucci, Michael A.; Dapore, Alexander J.; Karch, Barry K.

    2017-07-01

    We present a block-matching and Wiener filtering approach to atmospheric turbulence mitigation for long-range imaging of extended scenes. We evaluate the proposed method, along with some benchmark methods, using simulated and real-image sequences. The simulated data are generated with a simulation tool developed by one of the authors. These data provide objective truth and allow for quantitative error analysis. The proposed turbulence mitigation method takes a sequence of short-exposure frames of a static scene and outputs a single restored image. A block-matching registration algorithm is used to provide geometric correction for each of the individual input frames. The registered frames are then averaged, and the average image is processed with a Wiener filter to provide deconvolution. An important aspect of the proposed method lies in how we model the degradation point spread function (PSF) for the purposes of Wiener filtering. We use a parametric model that takes into account the level of geometric correction achieved during image registration. This is unlike any method we are aware of in the literature. By matching the PSF to the level of registration in this way, the Wiener filter is able to fully exploit the reduced blurring achieved by registration. We also describe a method for estimating the atmospheric coherence diameter (or Fried parameter) from the estimated motion vectors. We provide a detailed performance analysis that illustrates how the key tuning parameters impact system performance. The proposed method is relatively simple computationally, yet it has excellent performance in comparison with state-of-the-art benchmark methods in our study.
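The average-then-deconvolve step can be sketched as a frequency-domain Wiener filter; the constant noise-to-signal ratio and the identity PSF below are placeholder assumptions, not the paper's parametric PSF model matched to the registration level:

```python
import numpy as np

def wiener_deconvolve(avg_img, psf, nsr=0.01):
    """Frequency-domain Wiener filter: W = H* / (|H|^2 + NSR).
    nsr is an assumed constant noise-to-signal ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=avg_img.shape)
    G = np.fft.fft2(avg_img)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))

# frames assumed already geometrically corrected by block matching
frames = np.random.rand(10, 64, 64)
avg = frames.mean(axis=0)           # temporal average of registered frames
psf = np.zeros((64, 64))
psf[32, 32] = 1.0                   # identity PSF, for the sketch only
restored = wiener_deconvolve(avg, psf)
```

In the actual method the PSF would be the modelled degradation (narrower when registration removes more of the tilt component), so the Wiener step sharpens exactly the residual blur.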

  15. Morphological-transformation-based technique of edge detection and skeletonization of an image using a single spatial light modulator

    NASA Astrophysics Data System (ADS)

    Munshi, Soumika; Datta, A. K.

    2003-03-01

    A technique of optically detecting the edge and skeleton of an image by defining shift operations for morphological transformation is described. A (2 × 2) source array, which acts as the structuring element of the morphological operations, casts four angularly shifted optical projections of the input image. The resulting dilated image, when superimposed with the complementary input image, produces the edge image. For skeletonization, the source array casts four partially overlapped output images of the inverted input image, and the resultant image is recorded with a CCD camera. This overlapped eroded image is again eroded and then dilated, producing an opened image. The difference between the eroded and opened images is then computed, resulting in a thinner image. This procedure of obtaining a thinned image is iterated until the difference image becomes zero, while maintaining the connectivity conditions. The technique has been optically implemented using a single spatial light modulator and has the advantage of single-instruction parallel processing of the image. It has been tested for both binary and grey-level images.
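A digital analogue of the dilation-plus-complement edge step can be sketched in a few lines; the 2 × 2 shifts below mimic the four angularly shifted optical projections (a sketch, not the optical implementation):

```python
import numpy as np

def dilate_2x2(img):
    """Dilation by a 2x2 structuring element, realised as the OR of four
    shifted copies of the binary image (the four 'projections')."""
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in (0, 1):
        for dx in (0, 1):
            out[dy:, dx:] |= img[:h - dy, :w - dx]
    return out

img = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = True                      # filled-square test pattern
edge = dilate_2x2(img) & ~img             # dilated image AND complement -> edge
```

Superimposing the dilated image with the complement keeps exactly the pixels the dilation added, i.e. the one-pixel outer boundary of the object.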

  16. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    PubMed

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet doses in order to combine them with different weights during the optimization process. 
Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
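The aperture-weighting step (a linear program over segment weights) can be sketched with `scipy.optimize.linprog`; the two-aperture dose matrices, prescription, and organ-at-risk limit below are made-up illustrative numbers, not CARMEN's actual formulation:

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical dose-per-unit-weight matrices: rows = voxels, cols = apertures
D_target = np.array([[1.0, 0.2],
                     [0.8, 0.5]])        # two target voxels
D_oar = np.array([[0.3, 0.6]])           # one organ-at-risk voxel
prescription, oar_limit = 2.0, 1.5

# minimise OAR dose subject to target coverage and the OAR dose limit
res = linprog(c=D_oar.sum(axis=0),
              A_ub=np.vstack([-D_target, D_oar]),
              b_ub=np.concatenate([-prescription * np.ones(2), [oar_limit]]),
              bounds=[(0, None)] * 2)
weights = res.x                          # non-negative aperture weights
```

Each column of the dose matrices would, in the real system, come from a Monte Carlo beamlet calculation for one deliverable aperture; the LP then returns the weights (monitor-unit scalings) that satisfy the prescription while sparing the organ at risk.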

  17. PrimerMapper: high throughput primer design and graphical assembly for PCR and SNP detection

    PubMed Central

    O’Halloran, Damien M.

    2016-01-01

    Primer design represents a widely employed gambit in diverse molecular applications including PCR, sequencing, and probe hybridization. Variations of PCR, including primer walking, allele-specific PCR, and nested PCR, provide specialized validation and detection protocols for molecular analyses that often require screening large numbers of DNA fragments. In these cases, automated sequence retrieval and processing become important features, and furthermore, a graphic that provides the user with a visual guide to the distribution of designed primers across targets is most helpful in quickly ascertaining primer coverage. To this end, I describe here PrimerMapper, which provides a comprehensive graphical user interface that designs robust primers from any number of inputted sequences while providing the user with both graphical maps of primer distribution for each inputted sequence and a global assembled map of all inputted sequences with designed primers. PrimerMapper also enables the visualization of graphical maps within a browser and allows the user to draw new primers directly onto the webpage. Other features of PrimerMapper include allele-specific design features for SNP genotyping, a remote BLAST window to NCBI databases, and remote sequence retrieval from GenBank and dbSNP. PrimerMapper is hosted at GitHub and freely available without restriction. PMID:26853558

  18. Influence of speckle image reconstruction on photometric precision for large solar telescopes

    NASA Astrophysics Data System (ADS)

    Peck, C. L.; Wöger, F.; Marino, J.

    2017-11-01

    Context. High-resolution observations from large solar telescopes require adaptive optics (AO) systems to overcome image degradation caused by Earth's turbulent atmosphere. AO corrections are, however, only partial. Achieving near-diffraction limited resolution over a large field of view typically requires post-facto image reconstruction techniques to reconstruct the source image. Aims: This study aims to examine the expected photometric precision of amplitude reconstructed solar images calibrated using models for the on-axis speckle transfer functions and input parameters derived from AO control data. We perform a sensitivity analysis of the photometric precision under variations in the model input parameters for high-resolution solar images consistent with four-meter class solar telescopes. Methods: Using simulations of both atmospheric turbulence and partial compensation by an AO system, we computed the speckle transfer function under variations in the input parameters. We then convolved high-resolution numerical simulations of the solar photosphere with the simulated atmospheric transfer function, and subsequently deconvolved them with the model speckle transfer function to obtain a reconstructed image. To compute the resulting photometric precision, we compared the intensity of the original image with the reconstructed image. Results: The analysis demonstrates that high photometric precision can be obtained for speckle amplitude reconstruction using speckle transfer function models combined with AO-derived input parameters. Additionally, it shows that the reconstruction is most sensitive to the input parameter that characterizes the atmospheric distortion, and sub-2% photometric precision is readily obtained when it is well estimated.
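The deconvolve-and-compare procedure can be sketched in the Fourier domain; the Gaussian transfer-function model, its widths, and the random stand-in image below are placeholders for the paper's speckle transfer function models and solar simulations:

```python
import numpy as np

rng = np.random.default_rng(2)
truth = rng.random((64, 64))                # stand-in "photosphere" image

k = np.fft.fftfreq(64)
KX, KY = np.meshgrid(k, k)

def gauss_stf(width):
    """Assumed Gaussian stand-in for the speckle transfer function."""
    return np.exp(-(KX**2 + KY**2) / (2 * width**2))

stf_true = gauss_stf(0.30)                  # "atmosphere + AO" degradation
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * stf_true))

def restore(img, stf_model):
    """Deconvolve with a model transfer function (floored to avoid blow-up)."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) / np.maximum(stf_model, 1e-3)))

def precision(rec):
    """Photometric error: rms intensity difference relative to the mean."""
    return np.sqrt(np.mean((rec - truth) ** 2)) / truth.mean()

err_exact = precision(restore(blurred, gauss_stf(0.30)))  # correct model
err_mis = precision(restore(blurred, gauss_stf(0.25)))    # mis-estimated width
```

Comparing `err_exact` and `err_mis` mirrors the paper's sensitivity analysis: when the model transfer function matches the true degradation the photometric error is negligible, and it grows as the input parameter characterizing the atmospheric distortion is mis-estimated.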

  19. supernovae: Photometric classification of supernovae

    NASA Astrophysics Data System (ADS)

    Charnock, Tom; Moss, Adam

    2017-05-01

    Supernovae classifies supernovae using their light curves directly as inputs to a deep recurrent neural network, which learns information from the sequence of observations. Observational time and filter fluxes are used as inputs; since the inputs are agnostic, additional data such as host galaxy information can also be included.

  20. Whole-body diffusion-weighted MR image stitching and alignment to anatomical MRI

    NASA Astrophysics Data System (ADS)

    Ceranka, Jakub; Polfliet, Mathias; Lecouvet, Frederic; Michoux, Nicolas; Vandemeulebroucke, Jef

    2017-02-01

    Whole-body diffusion-weighted (WB-DW) MRI in combination with anatomical MRI has shown a great potential in bone and soft tissue tumour detection, evaluation of lymph nodes and treatment response assessment. Because of the vast body coverage, whole-body MRI is acquired in separate stations, which are subsequently combined into a whole-body image. However, inter-station and inter-modality image misalignments can occur due to image distortions and patient motion during acquisition, which may lead to inaccurate representations of patient anatomy and hinder visual assessment. Automated and accurate whole-body image formation and alignment of the multi-modal MRI images is therefore crucial. We investigated several registration approaches for the formation or stitching of the whole-body image stations, followed by a deformable alignment of the multi-modal whole-body images. We compared a pairwise approach, where diffusion-weighted (DW) image stations were sequentially aligned to a reference station (pelvis), to a groupwise approach, where all stations were simultaneously mapped to a common reference space while minimizing the overall transformation. For each, a choice of input images and corresponding metrics was investigated. Performance was evaluated by assessing the quality of the obtained whole-body images, and by verifying the accuracy of the alignment with whole-body anatomical sequences. The groupwise registration approach provided the best compromise between the formation of WB-DW images and multi-modal alignment. The fully automated method was found to be robust, making its use in the clinic feasible.

  1. Team Electronic Gameplay Combining Different Means of Control

    NASA Technical Reports Server (NTRS)

    Palsson, Olafur S. (Inventor); Pope, Alan T. (Inventor)

    2014-01-01

    Disclosed are methods and apparatuses for modifying the effect of an operator controlled input device on an interactive device to encourage the self-regulation of at least one physiological activity by a person different than the operator. The interactive device comprises a display area which depicts images and apparatus for receiving at least one input from the operator controlled input device to thus permit the operator to control and interact with at least some of the depicted images. One effect modification comprises measurement of the physiological activity of a person different from the operator, while modifying the ability of the operator to control and interact with at least some of the depicted images by modifying the input from the operator controlled input device in response to changes in the measured physiological signal.

  2. Evaluating total inorganic nitrogen in coastal waters through fusion of multi-temporal RADARSAT-2 and optical imagery using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Meiling; Liu, Xiangnan; Li, Jin; Ding, Chao; Jiang, Jiale

    2014-12-01

    Satellites routinely provide frequent, large-scale, near-surface views of many oceanographic variables pertinent to plankton ecology. However, the nutrient fertility of water can be challenging to detect accurately using remote sensing technology. This research explored an approach to estimate the nutrient fertility in coastal waters through the fusion of synthetic aperture radar (SAR) images and optical images using the random forest (RF) algorithm. The estimation of total inorganic nitrogen (TIN) in the Hong Kong Sea, China, was used as a case study. In March of 2009 and May and August of 2010, a sequence of multi-temporal in situ data and CCD images from China's HJ-1 satellite and RADARSAT-2 images were acquired. Four sensitive parameters were selected as input variables to evaluate TIN: single-band reflectance, a normalized difference spectral index (NDSI) and HV and VH polarizations. The RF algorithm was used to merge the different input variables from the SAR and optical imagery to generate a new dataset (i.e., the TIN outputs). The results showed the temporal-spatial distribution of TIN. The TIN values decreased from coastal waters to the open water areas, and TIN values in the northeast area were higher than those found in the southwest region of the study area. The maximum TIN values occurred in May. Additionally, the accuracy of the TIN estimates was significantly improved when the SAR and optical data were used in combination rather than either data type alone. This study suggests that this method of estimating nutrient fertility in coastal waters by effectively fusing data from multiple sensors is very promising.
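The fusion step, feeding the SAR and optical predictors jointly to a random forest, can be sketched with scikit-learn; the synthetic reflectance/NDSI/HV/VH features and the linear TIN response below are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 200
# columns: single-band reflectance, NDSI, HV backscatter, VH backscatter
X = rng.uniform(size=(n, 4))
# synthetic TIN with an assumed dependence on the fused optical + SAR features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + 0.3 * X[:, 3]

# train on matched in-situ samples, evaluate on held-out locations
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:150], y[:150])
r2 = r2_score(y[150:], rf.predict(X[150:]))
```

The same pattern generalizes to real imagery by extracting the four predictors per pixel and predicting a TIN map; the study's improvement from fusion corresponds to the forest having access to both the optical and the SAR columns at once.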

  3. Optimization of DSC MRI Echo Times for CBV Measurements Using Error Analysis in a Pilot Study of High-Grade Gliomas.

    PubMed

    Bell, L C; Does, M D; Stokes, A M; Baxter, L C; Schmainda, K M; Dueck, A C; Quarles, C C

    2017-09-01

    The optimal TE must be calculated to minimize the variance in CBV measurements made with DSC MR imaging. Simulations can be used to determine the influence of the TE on CBV, but they may not adequately recapitulate the in vivo heterogeneity of precontrast T2*, contrast agent kinetics, and the biophysical basis of contrast agent-induced T2* changes. The purpose of this study was to combine quantitative multiecho DSC MRI T2* time curves with error analysis in order to compute the optimal TE for a traditional single-echo acquisition. Eleven subjects with high-grade gliomas were scanned at 3T with a dual-echo DSC MR imaging sequence to quantify contrast agent-induced T2* changes in this retrospective study. Optimized TEs were calculated with propagation of error analysis for high-grade glial tumors, normal-appearing white matter, and arterial input function estimation. The optimal TE is a weighted average of the T2* values that occur as a contrast agent bolus traverses a voxel. The mean optimal TEs were 30.0 ± 7.4 ms for high-grade glial tumors, 36.3 ± 4.6 ms for normal-appearing white matter, and 11.8 ± 1.4 ms for arterial input function estimation (repeated-measures ANOVA, P < .001). Greater heterogeneity was observed in the optimal TE values for high-grade gliomas, and the differences among the mean values of the 3 ROIs were statistically significant. The optimal TE for the arterial input function estimation is much shorter; this finding implies that quantitative DSC MR imaging acquisitions would benefit from multiecho acquisitions. In the case of a single-echo acquisition, the optimal TE prescribed should be 30-35 ms (without a preload) and 20-30 ms (with a standard full-dose preload). © 2017 by American Journal of Neuroradiology.

  4. North Twin Peak in super resolution

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This pair of images shows the result of taking a sequence of 25 identical exposures from the Imager for Mars Pathfinder (IMP) of the northern Twin Peak, with small camera motions, and processing them with the Super-Resolution algorithm developed at NASA's Ames Research Center.

    The upper image is a representative input image, scaled up by a factor of five, with the pixel edges smoothed out for a fair comparison. The lower image is the super-resolution result, in which significantly finer detail can be resolved.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is an operating division of the California Institute of Technology (Caltech). The Imager for Mars Pathfinder (IMP) was developed by the University of Arizona Lunar and Planetary Laboratory under contract to JPL. Peter Smith is the Principal Investigator.

    The super-resolution research was conducted by Peter Cheeseman, Bob Kanefsky, Robin Hanson, and John Stutz of NASA's Ames Research Center, Mountain View, CA. More information on this technology is available on the Ames Super Resolution home page at

    http://ic-www.arc.nasa.gov/ic/projects/bayes-group/ group/super-res/

  5. Segmentation and learning in the quantitative analysis of microscopy images

    NASA Astrophysics Data System (ADS)

    Ruggiero, Christy; Ross, Amy; Porter, Reid

    2015-02-01

    In materials science and biomedical domains, the quantity and quality of microscopy images is rapidly increasing, and there is a great need to automatically detect, delineate and quantify particles, grains, cells, neurons and other functional "objects" within these images. These are challenging problems for image processing because of the variability in object appearance that inevitably arises in real world image acquisition and analysis. One of the most promising (and practical) ways to address these challenges is interactive image segmentation. These algorithms are designed to incorporate input from a human operator to tailor the segmentation method to the image at hand. Interactive image segmentation is now a key tool in a wide range of applications in microscopy and elsewhere. Historically, interactive image segmentation algorithms have tailored segmentation on an image-by-image basis, and information derived from operator input is not transferred between images. But recently there has been increasing interest in using machine learning in segmentation to provide interactive tools that accumulate and learn from the operator input over longer periods of time. These new learning algorithms reduce the need for operator input over time, and can potentially provide a more dynamic balance between customization and automation for different applications. This paper reviews the state of the art in this area, provides a unified view of these algorithms, and compares the segmentation performance of various design choices.

  6. Circuit for measuring time differences among events

    DOEpatents

    Romrell, Delwin M.

    1977-01-01

    An electronic circuit has a plurality of input terminals. Application of a first input signal to any one of the terminals initiates a timing sequence. Later inputs to the same terminal are ignored but a later input to any other terminal of the plurality generates a signal which can be used to measure the time difference between the later input and the first input signal. Also, such time differences may be measured between the first input signal and an input signal to any other terminal of the plurality or the circuit may be reset at any time by an external reset signal.
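The latching behaviour described, where the first pulse on any terminal starts the timing sequence, repeats on that terminal are ignored, and pulses on other terminals yield time differences, can be sketched as a small behavioural model (software illustration, not the patented circuit):

```python
class EventTimer:
    """First pulse on any terminal starts the clock; later pulses on the
    same terminal are ignored; a later pulse on any other terminal records
    its time difference relative to the first event."""

    def __init__(self):
        self.first = None     # (terminal, time) of the first event
        self.deltas = {}      # terminal -> time difference vs first event

    def pulse(self, terminal, t):
        if self.first is None:
            self.first = (terminal, t)                 # start timing sequence
        elif terminal != self.first[0] and terminal not in self.deltas:
            self.deltas[terminal] = t - self.first[1]  # measure difference

    def reset(self):
        """External reset signal: clear the timing sequence at any time."""
        self.first, self.deltas = None, {}

timer = EventTimer()
timer.pulse(0, 10.0)   # first input: starts the timing sequence
timer.pulse(0, 12.0)   # later input on the same terminal: ignored
timer.pulse(1, 13.5)   # other terminal: records a 3.5 time-unit difference
```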

  7. Enhanced sequencing coverage with digital droplet multiple displacement amplification

    PubMed Central

    Sidore, Angus M.; Lan, Freeman; Lim, Shaun W.; Abate, Adam R.

    2016-01-01

    Sequencing small quantities of DNA is important for applications ranging from the assembly of uncultivable microbial genomes to the identification of cancer-associated mutations. To obtain sufficient quantities of DNA for sequencing, the small amount of starting material must be amplified significantly. However, existing methods often yield errors or non-uniform coverage, reducing sequencing data quality. Here, we describe digital droplet multiple displacement amplification, a method that enables massive amplification of low-input material while maintaining sequence accuracy and uniformity. The low-input material is compartmentalized as single molecules in millions of picoliter droplets. Because the molecules are isolated in compartments, they amplify to saturation without competing for resources; this yields uniform representation of all sequences in the final product and, in turn, enhances the quality of the sequence data. We demonstrate the ability to uniformly amplify the genomes of single Escherichia coli cells, comprising just 4.7 fg of starting DNA, and obtain sequencing coverage distributions that rival that of unamplified material. Digital droplet multiple displacement amplification provides a simple and effective method for amplifying minute amounts of DNA for accurate and uniform sequencing. PMID:26704978

  8. Effects of spatial resolution ratio in image fusion

    USGS Publications Warehouse

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2008-01-01

    In image fusion, the spatial resolution ratio can be defined as the ratio between the spatial resolution of the high-resolution panchromatic image and that of the low-resolution multispectral image. This paper attempts to assess the effects of the spatial resolution ratio of the input images on the quality of the fused image. Experimental results indicate that a spatial resolution ratio of 1:10 or higher is desired for optimal multisensor image fusion provided the input panchromatic image is not downsampled to a coarser resolution. Due to the synthetic pixels generated from resampling, the quality of the fused image decreases as the spatial resolution ratio decreases (e.g. from 1:10 to 1:30). However, even with a spatial resolution ratio as small as 1:30, the quality of the fused image is still better than the original multispectral image alone for feature interpretation. In cases where the spatial resolution ratio is too small (e.g. 1:30), to obtain better spectral integrity of the fused image, one may downsample the input high-resolution panchromatic image to a slightly lower resolution before fusing it with the multispectral image.

  9. Extracting flat-field images from scene-based image sequences using phase correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caron, James N., E-mail: Caron@RSImd.com; Montes, Marcos J.; Obermark, Jerome L.

    Flat-field image processing is an essential step in producing high-quality, radiometrically calibrated images. Flat-fielding corrects for variations in the gain of the focal-plane-array electronics and for unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface; the flat-field image is then normalized and removed from the images. There are circumstances, such as in remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase-correlation image registration to align the sequence and estimate the static scene. The scene is then removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
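The registration step can be sketched with whole-pixel phase correlation (the paper's method is sub-pixel; the function name and the synthetic shifted frame below are illustrative):

```python
import numpy as np

def phase_shift(ref, img):
    """Whole-pixel displacement of img relative to ref via the
    normalised cross-power spectrum (phase correlation)."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.abs(np.fft.ifft2(cross / (np.abs(cross) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # wrap the peak location into a signed shift
    return (dy - h if dy > h // 2 else dy,
            dx - w if dx > w // 2 else dx)

scene = np.random.default_rng(1).random((32, 32))   # static scene
shifted = np.roll(scene, (3, 5), axis=(0, 1))       # a displaced frame
dy, dx = phase_shift(scene, shifted)                # shift that re-aligns it
```

Rolling `shifted` by `(dy, dx)` realigns it with `scene`; in the full method the aligned frames are combined to estimate the static scene, the scene is divided out of each frame to leave per-frame flat-field estimates, and those are realigned and averaged.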

  10. SU-E-J-217: Multiparametric MR Imaging of Cranial Tumors On a Dedicated 1.0T MR Simulator Prior to Stereotactic Radiosurgery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wen, N; Glide-Hurst, C; Liu, M

    Purpose: Quantitative magnetic resonance imaging (MRI) of cranial lesions prior to stereotactic radiosurgery (SRS) may improve treatment planning and provide potential prognostic value. The practicality and logistics of acquiring advanced multiparametric MRI sequences to measure vascular and cellular properties of cerebral tumors are explored on a 1.0 Tesla MR Simulator. Methods: MR simulation was performed immediately following routine CT simulation on a 1T MR Simulator. The MR sequences used were, in the order performed: T2-Weighted Turbo Spin Echo (T2W-TSE), T2 FLAIR, Diffusion-Weighted (DWI, b = 0, 800 to generate an apparent diffusion coefficient (ADC) map), 3D T1-Weighted Fast Field Echo (T1W-FFE), Dynamic Contrast Enhanced (DCE), and Post Gadolinium Contrast Enhanced 3D T1W-FFE images. T1 pre-contrast values were generated by acquiring six different flip angles. The arterial input function was derived from manually selected arterial pixels in the perfusion images. The extended Tofts model was used to generate the permeability maps. Routine MRI scans took about 30 minutes to complete; the additional scans added 12 minutes. Results: To date, seven patients with cerebral tumors have been imaged and tumor physiology characterized. For example, in a glioblastoma patient, the volumes contoured on the T1 Gd images, the ADC map, and the pharmacokinetic map (Ktrans) were 1.9, 1.4, and 1.5 cc, respectively, with strong spatial correlation. The mean ADC value of the entire volume was 1141 μm²/s, while the value in the white matter was 811 μm²/s. The mean value of Ktrans was 0.02 min⁻¹ in the tumor volume and 0.00 in the normal white matter. Conclusion: Our initial results suggest that multiparametric MRI sequences may provide a more quantitative evaluation of vascular and tumor properties. Implementing functional imaging during MR-SIM may be particularly beneficial in assessing tumor extent, differentiating radiation necrosis from tumor recurrence, and establishing reliable biomarkers for treatment response evaluation. The Department of Radiation Oncology at Henry Ford Health System has a research agreement with Varian Medical Systems and Philips Health Care.

  11. Timeline Resource Analysis Program (TRAP): User's manual and program document

    NASA Technical Reports Server (NTRS)

    Sessler, J. G.

    1981-01-01

    The Timeline Resource Analysis Program (TRAP), developed for scheduling and timelining problems, is described. Given an activity network, TRAP generates timeline plots, resource histograms, and tabular summaries of the network, schedules, and resource levels. It is written in ANSI FORTRAN for the Honeywell SIGMA 5 computer and operates in the interactive mode using the TEKTRONIX 4014-1 graphics terminal. The input network file may be a standard SIGMA 5 file or one generated using the Interactive Graphics Design System. The timeline plots can be displayed in two orderings: according to the sequence in which the tasks were read on input, and a waterfall sequence in which the tasks are ordered by start time. The input order is especially meaningful when the network consists of several interacting subnetworks. The waterfall sequence is helpful in assessing the project status at any point in time.
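The two timeline orderings can be sketched directly; the task list below is illustrative, not from TRAP:

```python
# tasks as (name, start_time, duration), listed in the order read on input
tasks = [("integrate", 5.0, 2.0), ("design", 0.0, 4.0), ("test", 7.0, 1.5)]

input_order = [name for name, _, _ in tasks]   # preserves subnetwork grouping
# waterfall ordering: tasks sorted by start time, for project-status review
waterfall = [name for name, _, _ in sorted(tasks, key=lambda t: t[1])]
```

The input ordering keeps interacting subnetworks together as authored, while the waterfall ordering lets a reader scan down the plot to see what is active at any point in time.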

  12. Image Enhancement via Subimage Histogram Equalization Based on Mean and Variance

    PubMed Central

    2017-01-01

    This paper puts forward a novel image enhancement method via Mean and Variance based Subimage Histogram Equalization (MVSIHE), which effectively increases the contrast of the input image while preserving brightness and details well compared with some other methods based on histogram equalization (HE). Firstly, the histogram of the input image is divided into four segments based on the mean and variance of the luminance component, and the histogram bins of each segment are modified and equalized, respectively. Secondly, the result is obtained via the concatenation of the processed subhistograms. Lastly, the normalization method is deployed on intensity levels, and the integration of the processed image with the input image is performed. 100 benchmark images from a public image database named CVG-UGR-Database are used for comparison with other state-of-the-art methods. The experiment results show that the algorithm can not only enhance image information effectively but also preserve the brightness and details of the original image well. PMID:29403529
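The segment-then-equalize pipeline can be sketched as below; the segment boundaries (m − s, m, m + s) and the rank-based per-segment equalization are assumptions standing in for the paper's exact histogram modification:

```python
import numpy as np

def mvsihe_sketch(img):
    """Sketch of mean/variance-based sub-histogram equalization: split the
    intensity range at m - s, m, m + s and equalize each segment within its
    own sub-range, so the segment boundaries (and hence overall brightness)
    are preserved. Boundary choice is an assumption, not the paper's spec."""
    m, s = img.mean(), img.std()
    lo0, hi0 = float(img.min()), float(img.max())
    bounds = [lo0, max(lo0, m - s), m, min(hi0, m + s), hi0]
    out = img.astype(float).copy()
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        mask = (img >= lo) & (img <= hi)
        if hi <= lo or not mask.any():
            continue
        vals = img[mask].astype(float)
        ranks = np.argsort(np.argsort(vals))          # per-segment CDF ranks
        out[mask] = lo + (hi - lo) * ranks / max(len(vals) - 1, 1)
    return out

img = np.random.default_rng(3).integers(0, 256, (32, 32))
out = mvsihe_sketch(img)
```

Because each segment is equalized only over its own sub-range, pixels cannot cross the mean-derived boundaries, which is what keeps the output brightness close to the input, in the spirit of the MVSIHE design.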

  13. A text input system developed by using lips image recognition based LabVIEW for the seriously disabled.

    PubMed

    Chen, S C; Shao, C L; Liang, C K; Lin, S W; Huang, T H; Hsieh, M C; Yang, C H; Luo, C H; Wuo, C M

    2004-01-01

    In this paper, we present a text input system for the seriously disabled by using lips image recognition based on LabVIEW. This system can be divided into a software subsystem and a hardware subsystem. In the software subsystem, we adopted image processing techniques to recognize whether the mouth is open or closed, depending on the relative distance between the upper lip and the lower lip. In the hardware subsystem, the parallel port built into the PC is used to transmit the recognized mouth status to the Morse-code text input system. Integrating the software subsystem with the hardware subsystem, we implement a text input system by using lips image recognition programmed in the LabVIEW language. We hope the system can help the seriously disabled to communicate with normal people more easily.

  14. Four-dimensional diffusion-weighted MR imaging (4D-DWI): a feasibility study.

    PubMed

    Liu, Yilin; Zhong, Xiaodong; Czito, Brian G; Palta, Manisha; Bashir, Mustafa R; Dale, Brian M; Yin, Fang-Fang; Cai, Jing

    2017-02-01

    Diffusion-weighted Magnetic Resonance Imaging (DWI) has been shown to be a powerful tool for cancer detection with high tumor-to-tissue contrast. This study aims to investigate the feasibility of developing a four-dimensional DWI technique (4D-DWI) for imaging respiratory motion for radiation therapy applications. Image acquisition was performed by repeatedly imaging a volume of interest (VOI) using an interleaved multislice single-shot echo-planar imaging (EPI) 2D-DWI sequence in the axial plane. Each 2D-DWI image was acquired with an intermediately low b-value (b = 500 s/mm²) and with diffusion-encoding gradients in the x, y, and z diffusion directions. Respiratory motion was simultaneously recorded using a respiratory bellows, and the synchronized respiratory signal was used to retrospectively sort the 2D images to generate 4D-DWI. Cine MRI using steady-state free precession was also acquired as a motion reference. As a preliminary feasibility study, this technique was implemented on a 4D digital human phantom (XCAT) with a simulated pancreas tumor. The respiratory motion of the phantom was controlled by a regular sinusoidal motion profile. 4D-DWI tumor motion trajectories were extracted and compared with the input breathing curve. The mean absolute amplitude differences (D) were calculated in the superior-inferior (SI) and anterior-posterior (AP) directions. The technique was then evaluated on two healthy volunteers. Finally, the effects of 4D-DWI on apparent diffusion coefficient (ADC) measurements were investigated for hypothetical heterogeneous tumors via simulations. Tumor trajectories extracted from XCAT 4D-DWI were consistent with the input signal: the average D value was 1.9 mm (SI) and 0.4 mm (AP). The average D value was 2.6 mm (SI) and 1.7 mm (AP) for the two healthy volunteers. A 4D-DWI technique has been developed and evaluated on a digital phantom and human subjects. 4D-DWI can lead to more accurate respiratory motion measurement. 
This has a great potential to improve the visualization and delineation of cancer tumors for radiotherapy. © 2016 American Association of Physicists in Medicine.
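
    The retrospective sorting step described above can be sketched in Python. This is a minimal illustration that bins images by phase computed from a perfectly regular breathing period (as in the XCAT simulation); the actual study sorts using the synchronized bellow signal, and the function name is hypothetical.

```python
def bin_by_respiratory_phase(acq_times, period, n_bins=8):
    """Retrospectively sort 2D image acquisitions into respiratory phase bins.

    acq_times : acquisition time (s) of each 2D-DWI image
    period    : breathing period (s); assumed perfectly regular here
    n_bins    : number of phase bins making up the 4D-DWI data set
    Returns {bin_index: [image indices assigned to that bin]}.
    """
    bins = {b: [] for b in range(n_bins)}
    for i, t in enumerate(acq_times):
        phase = (t % period) / period                 # position in cycle, 0..1
        bins[min(int(phase * n_bins), n_bins - 1)].append(i)
    return bins
```

    Images falling in the same bin are then stacked into one respiratory-phase volume of the 4D set.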

  15. Robot Sequencing and Visualization Program (RSVP)

    NASA Technical Reports Server (NTRS)

    Cooper, Brian K.; Maxwell, Scott A.; Hartman, Frank R.; Wright, John R.; Yen, Jeng; Toole, Nicholas T.; Gorjian, Zareh; Morrison, Jack C.

    2013-01-01

    The Robot Sequencing and Visualization Program (RSVP) is being used in the Mars Science Laboratory (MSL) mission for downlink data visualization and command sequence generation. RSVP reads and writes downlink data products from the operations data server (ODS) and writes uplink data products to the ODS. The primary users of RSVP are members of the Rover Planner team (part of the Integrated Planning and Execution Team (IPE)), who use it to perform traversability/articulation analyses, take activity plan input from the Science and Mission Planning teams, and create a set of rover sequences to be sent to the rover every sol. The primary inputs to RSVP are downlink data products and activity plans in the ODS database. The primary outputs are command sequences to be placed in the ODS for further processing prior to uplink to each rover. RSVP is composed of two main subsystems. The first, called the Robot Sequence Editor (RoSE), understands the MSL activity and command dictionaries and takes care of converting incoming activity level inputs into command sequences. The Rover Planners use the RoSE component of RSVP to put together command sequences and to view and manage command level resources like time, power, temperature, etc. (via a transparent realtime connection to SEQGEN). The second component of RSVP is called HyperDrive, a set of high-fidelity computer graphics displays of the Martian surface in 3D and in stereo. The Rover Planners can explore the environment around the rover, create commands related to motion of all kinds, and see the simulated result of those commands via its underlying tight coupling with flight navigation, motor, and arm software. This software is the evolutionary replacement for the Rover Sequencing and Visualization software used to create command sequences (and visualize the Martian surface) for the Mars Exploration Rover mission.

  16. Translating working memory into action: behavioral and neural evidence for using motor representations in encoding visuo-spatial sequences.

    PubMed

    Langner, Robert; Sternkopf, Melanie A; Kellermann, Tanja S; Grefkes, Christian; Kurth, Florian; Schneider, Frank; Zilles, Karl; Eickhoff, Simon B

    2014-07-01

    The neurobiological organization of action-oriented working memory is not well understood. To elucidate the neural correlates of translating visuo-spatial stimulus sequences into delayed (memory-guided) sequential actions, we measured brain activity using functional magnetic resonance imaging while participants encoded sequences of four to seven dots appearing on fingers of a left or right schematic hand. After variable delays, sequences were to be reproduced with the corresponding fingers. Recall became less accurate with longer sequences and was initiated faster after long delays. Across both hands, encoding and recall activated bilateral prefrontal, premotor, superior and inferior parietal regions as well as the basal ganglia, whereas hand-specific activity was found (albeit to a lesser degree during encoding) in contralateral premotor, sensorimotor, and superior parietal cortex. Activation differences after long versus short delays were restricted to motor-related regions, indicating that rehearsal during long delays might have facilitated the conversion of the memorandum into concrete motor programs at recall. Furthermore, basal ganglia activity during encoding selectively predicted correct recall. Taken together, the results suggest that to-be-reproduced visuo-spatial sequences are encoded as prospective action representations (motor intentions), possibly in addition to retrospective sensory codes. Overall, our study supports and extends multi-component models of working memory, highlighting the notion that sensory input can be coded in multiple ways depending on what the memorandum is to be used for. Copyright © 2013 Wiley Periodicals, Inc.

  17. Vehicle speed detection based on gaussian mixture model using sequential of images

    NASA Astrophysics Data System (ADS)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

    Intelligent Transportation System is one of the important components in the development of smart cities. Detection of vehicle speed on the highway supports the management of traffic engineering. The purpose of this study is to detect the speed of moving vehicles using digital image processing. Our approach is as follows: the inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI). First, we separate foreground from background in each frame using a Gaussian Mixture Model (GMM). Then, in each frame, we calculate the location of the object and its centroid. Next, we determine the speed by computing the movement of the centroid across the sequence of frames. In the speed calculation, we only consider frames in which the centroid lies inside the predefined ROI. Finally, we transform the pixel displacement per unit time into a speed in km/h. Validation of the system is done by comparing the speed calculated manually with that obtained by the system. In software testing, the system detected vehicle speeds with accuracies ranging from 77.41% to 97.52%; testing also included real video footage of the road with known vehicle speeds.
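
    The centroid and displacement-to-speed steps above can be sketched as follows. This is a hedged Python illustration, not the authors' code; the function names and the pixel-to-metre calibration factor are assumptions.

```python
def centroid(mask):
    """Centroid (row, col) of a binary foreground mask given as nested lists."""
    pts = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def speed_kmh(c0, c1, n_frames, fps, metres_per_pixel):
    """Convert centroid displacement over n_frames into a speed in km/h."""
    d_pix = ((c1[0] - c0[0]) ** 2 + (c1[1] - c0[1]) ** 2) ** 0.5
    d_metres = d_pix * metres_per_pixel        # calibrated ground distance
    dt = n_frames / fps                        # elapsed time in seconds
    return (d_metres / dt) * 3.6               # m/s -> km/h
```

    For example, a 100-pixel displacement over 25 frames at 25 fps with a 0.1 m/pixel calibration corresponds to 10 m in 1 s, i.e. 36 km/h.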

  18. Compact rotary sequencer

    NASA Technical Reports Server (NTRS)

    Appleberry, W. T.

    1980-01-01

    Rotary sequencer is assembled from conventional planetary differential gearset and latching mechanism utilizing inputs and outputs which are coaxial. Applications include automated production-line equipment in home appliances and in vehicles.

  19. Adaptive precompensators for flexible-link manipulator control

    NASA Technical Reports Server (NTRS)

    Tzes, Anthony P.; Yurkovich, Stephen

    1989-01-01

    The application of input precompensators to flexible manipulators is considered. Frequency-domain compensators color the input around the flexible-mode locations, resulting in a bandstop or notch filter in cascade with the system. Time-domain compensators apply a sequence of impulses at prespecified times related to the modal frequencies. The resulting control corresponds to a feedforward term that convolves the desired reference input in real time with a sequence of impulses and produces a vibration-free output. An adaptive precompensator can be implemented by combining either approach with a frequency-domain identification scheme that estimates the modal frequencies online and subsequently updates the bandstop interval or the spacing between the impulses. The combined adaptive input-preshaping scheme provides the most rapid slew that results in a vibration-free output. Experimental results are presented to verify the approach.
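
    The impulse-sequence (time-domain) compensator described above is commonly realized as a zero-vibration (ZV) input shaper. A minimal sketch, assuming a single flexible mode with known frequency and damping ratio; the paper's exact formulation may differ.

```python
import math

def zv_shaper(freq_hz, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one flexible mode.

    Returns (times, amplitudes); convolving the reference input with these
    impulses cancels residual vibration at the given modal frequency.
    """
    wd = 2 * math.pi * freq_hz * math.sqrt(1 - zeta ** 2)   # damped frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    a1 = 1 / (1 + K)          # impulse amplitudes sum to 1, preserving the
    a2 = K / (1 + K)          # final setpoint of the shaped command
    return [0.0, math.pi / wd], [a1, a2]
```

    When the identified modal frequency changes, the shaper is simply recomputed, which is the online update the abstract refers to.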

  20. RBT-GA: a novel metaheuristic for solving the Multiple Sequence Alignment problem.

    PubMed

    Taheri, Javid; Zomaya, Albert Y

    2009-07-07

    Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate their underlying main characteristics/functions. This information is also used to generate phylogenetic trees. This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population-based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually, yielding a set of mostly optimal answers for the MSA problem. RBT-GA is tested with one of the well-known benchmark suites in this area (BAliBASE 2.0). The obtained results show the superiority of the proposed technique, even in the case of formidable sequences.
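
    A GA for MSA needs a fitness function to rank chromosomes; a common choice is the sum-of-pairs score sketched below. This is a generic illustration with assumed match/mismatch/gap weights, not the scoring actually used by RBT-GA (which works on protein substitution matrices).

```python
def sum_of_pairs(alignment, match=2, mismatch=-1, gap=-2):
    """Sum-of-pairs score of a gapped alignment (equal-length strings).

    Serves as a GA fitness: higher scores mean better alignments.
    """
    score = 0
    for col in range(len(alignment[0])):
        chars = [seq[col] for seq in alignment]
        for i in range(len(chars)):
            for j in range(i + 1, len(chars)):
                a, b = chars[i], chars[j]
                if a == '-' or b == '-':
                    if not (a == '-' and b == '-'):   # residue paired with gap
                        score += gap
                elif a == b:
                    score += match
                else:
                    score += mismatch
    return score
```

    The GA then selects, crosses over and mutates pole placements so that this score increases over generations.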

  1. Supervised interpretation of echocardiograms with a psychological model of expert supervision

    NASA Astrophysics Data System (ADS)

    Revankar, Shriram V.; Sher, David B.; Shalin, Valerie L.; Ramamurthy, Maya

    1993-07-01

    We have developed a collaborative scheme that facilitates active human supervision of the binary segmentation of an echocardiogram. The scheme complements the reliability of a human expert with the precision of segmentation algorithms. In the developed system, an expert user compares the computer-generated segmentation with the original image in a user-friendly graphics environment, and interactively indicates the incorrectly classified regions either by pointing or by circling. The precise boundaries of the indicated regions are computed by studying original image properties in that region and a human visual attention distribution map obtained from published psychological and psychophysical research. We use the developed system to extract contours of heart chambers from a sequence of two-dimensional echocardiograms. We are currently extending this method to incorporate a richer set of inputs from the human supervisor, to facilitate multi-classification of image regions depending on their functionality. We are integrating into our system the knowledge-related constraints that cardiologists use, to improve the capabilities of our existing system. This extension involves developing a psychological model of expert reasoning, functional and relational models of typical views in echocardiograms, and corresponding interface modifications to map the suggested actions to image processing algorithms.

  2. Application of TrackEye in equine locomotion research.

    PubMed

    Drevemo, S; Roepstorff, L; Kallings, P; Johnston, C J

    1993-01-01

    TrackEye is an analysis system applicable to equine biokinematic studies. It covers the whole process from digitization of images through automatic target tracking to analysis. Key components in the system are an image workstation for processing of video images and a high-resolution film-to-video scanner for 16-mm film. A recording module controls the input device and handles the capture of image sequences into a videodisc system, and a tracking module is able to follow reference markers automatically. The system offers flexible analysis, including calculations of marker displacements, distances and joint angles, velocities and accelerations. TrackEye was used to study effects of phenylbutazone on the fetlock and carpal joint angle movements in a horse with a mild lameness caused by osteo-arthritis in the fetlock joint of a forelimb. Significant differences, most evident before treatment, were observed in the minimum fetlock and carpal joint angles when contralateral limbs were compared (p < 0.001). The minimum fetlock angle and the minimum carpal joint angle were significantly greater in the lame limb before treatment compared to those 6, 37 and 49 h after the last treatment (p < 0.001).

  3. Mechanisms of inhibition in cat visual cortex.

    PubMed Central

    Berman, N J; Douglas, R J; Martin, K A; Whitteridge, D

    1991-01-01

    1. Neurones from layers 2-6 of the cat primary visual cortex were studied using extracellular and intracellular recordings made in vivo. The aim was to identify inhibitory events and determine whether they were associated with small or large (shunting) changes in the input conductance of the neurones. 2. Visual stimulation of subfields of simple receptive fields produced depolarizing or hyperpolarizing potentials that were associated with increased or decreased firing rates respectively. Hyperpolarizing potentials were small, 5 mV or less. In the same neurones, brief electrical stimulation of cortical afferents produced a characteristic sequence of a brief depolarization followed by a long-lasting (200-400 ms) hyperpolarization. 3. During the response to a stationary flashed bar, the synaptic activation increased the input conductance of the neurone by about 5-20%. Conductance changes of similar magnitude were obtained by electrically stimulating the neurone. Neurones stimulated with non-optimal orientations or directions of motion showed little change in input conductance. 4. These data indicate that while visually or electrically induced inhibition can be readily demonstrated in visual cortex, the inhibition is not associated with large sustained conductance changes. Thus a shunting or multiplicative inhibitory mechanism is not the principal mechanism of inhibition. PMID:1804983

  4. FISH Finder: a high-throughput tool for analyzing FISH images

    PubMed Central

    Shirley, James W.; Ty, Sereyvathana; Takebayashi, Shin-ichiro; Liu, Xiuwen; Gilbert, David M.

    2011-01-01

    Motivation: Fluorescence in situ hybridization (FISH) is used to study the organization and the positioning of specific DNA sequences within the cell nucleus. Analyzing the data from FISH images is a tedious process that invokes an element of subjectivity. Automated FISH image analysis offers savings in time as well as the benefit of objective data analysis. While several FISH image analysis software tools have been developed, they often use a threshold-based segmentation algorithm for nucleus segmentation. As fluorescence signal intensities can vary significantly from experiment to experiment, from cell to cell, and within a cell, threshold-based segmentation is inflexible and often insufficient for automatic image analysis, leading to additional manual segmentation and potential subjective bias. To overcome these problems, we developed a graphical software tool called FISH Finder to automatically analyze FISH images that vary significantly. By posing nucleus segmentation as a classification problem, a compound Bayesian classifier is employed so that contextual information is utilized, resulting in reliable classification and boundary extraction. This makes it possible to analyze FISH images efficiently and objectively without adjustment of input parameters. Additionally, FISH Finder was designed to analyze the distances between differentially stained FISH probes. Availability: FISH Finder is a standalone, platform-independent MATLAB application. The program is freely available from: http://code.google.com/p/fishfinder/downloads/list Contact: gilbert@bio.fsu.edu PMID:21310746

  5. Comparison of MR imaging sequences for liver and head and neck interventions: is there a single optimal sequence for all purposes?

    PubMed

    Boll, Daniel T; Lewin, Jonathan S; Duerk, Jeffrey L; Aschoff, Andrik J; Merkle, Elmar M

    2004-05-01

    To compare the appropriate pulse sequences for interventional device guidance during magnetic resonance (MR) imaging at 0.2 T and to evaluate the dependence of sequence selection on the anatomic region of the procedure. Using a C-arm 0.2 T system, four interventional MR sequences were applied in 23 liver cases and during MR-guided neck interventions in 13 patients. The imaging protocol consisted of: multislice turbo spin echo (TSE) T2w, sequential-slice fast imaging with steady precession (FISP), a time-reversed version of FISP (PSIF), and FISP with balanced gradients in all spatial directions (True-FISP) sequences. Vessel conspicuity was rated and contrast-to-noise ratio (CNR) was calculated for each sequence and a differential receiver operating characteristic was performed. Liver findings were detected in 96% using the TSE sequence. PSIF, FISP, and True-FISP imaging showed lesions in 91%, 61%, and 65%, respectively. The TSE sequence offered the best CNR, followed by PSIF imaging. Differential receiver operating characteristic analysis also rated TSE and PSIF to be the superior sequences. Lesions in the head and neck were detected in all cases by TSE and FISP, in 92% using True-FISP, and in 84% using PSIF. True-FISP offered the best CNR, followed by TSE imaging. Vessels appeared bright on FISP and True-FISP imaging and dark on the other sequences. In interventional MR imaging, no single sequence fits all purposes. Image guidance for interventional MR during liver procedures is best achieved by PSIF or TSE, whereas biopsies in the head and neck are best performed using FISP or True-FISP sequences.
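
    The contrast-to-noise ratio used to rank the sequences above is typically computed from ROI statistics. A minimal sketch with a commonly used definition; the study's exact ROI placement and noise estimate are not given here, so treat the details as assumptions.

```python
from statistics import mean, stdev

def cnr(lesion_roi, background_roi, noise_roi):
    """Contrast-to-noise ratio between lesion and background ROIs.

    Each ROI is a flat list of pixel intensities; the noise term is the
    standard deviation measured in a signal-free (air) region.
    """
    return abs(mean(lesion_roi) - mean(background_roi)) / stdev(noise_roi)
```

    Sequences with a higher CNR make the lesion easier to distinguish during device guidance, which is why TSE and PSIF rated best for liver work.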

  6. Long-term scale adaptive tracking with kernel correlation filters

    NASA Astrophysics Data System (ADS)

    Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui

    2018-04-01

    Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.

  7. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.
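
    The image-reprojection methodology reduces the evaluation to comparing points reprojected from the reconstructed 3D model against the observed input image. One simple measuring criterion is the reprojection root-mean-square error sketched below; this is an illustrative metric, not necessarily the exact one used in the paper.

```python
import math

def reprojection_rmse(projected, observed):
    """Root-mean-square reprojection error between corresponding 2D points.

    projected : (x, y) points reprojected from the reconstructed 3D model
    observed  : matching (x, y) points in the real input image
    """
    assert len(projected) == len(observed)
    sq = [(px - ox) ** 2 + (py - oy) ** 2
          for (px, py), (ox, oy) in zip(projected, observed)]
    return math.sqrt(sum(sq) / len(sq))
```

    Evaluating in the 2D image domain in this way is what lets the framework compare stereo and space carving on the same data without ground-truth 3D geometry.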

  8. Ocean-ice interaction in the marginal ice zone using synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Liu, Antony K.; Peng, Chich Y.; Weingartner, Thomas J.

    1994-01-01

    Ocean-ice interaction processes in the marginal ice zone (MIZ) driven by wind, waves, and mesoscale features, such as up/downwelling and eddies, are studied using Earth Remote-Sensing Satellite (ERS) 1 synthetic aperture radar (SAR) images and an ocean-ice interaction model. A sequence of seven SAR images of the MIZ in the Chukchi Sea at 3- or 6-day intervals is investigated for ice edge advance/retreat. Simultaneous current measurements from the northeast Chukchi Sea, as well as the Barrow wind record, are used to interpret the MIZ dynamics. SAR spectra of waves in ice and ocean waves in the Bering and Chukchi Sea are compared for the study of wave propagation and the dominant SAR imaging mechanism. By using the SAR-observed ice edge configuration and the wind and wave fields in the Chukchi Sea as inputs, a numerical simulation has been performed with the ocean-ice interaction model. After 3 days of wind and wave forcing, the resulting ice edge configuration, eddy formation, and flow velocity field are shown to be consistent with SAR observations.

  9. Principles of Quantitative MR Imaging with Illustrated Review of Applicable Modular Pulse Diagrams.

    PubMed

    Mills, Andrew F; Sakai, Osamu; Anderson, Stephan W; Jara, Hernan

    2017-01-01

    Continued improvements in diagnostic accuracy using magnetic resonance (MR) imaging will require development of methods for tissue analysis that complement traditional qualitative MR imaging studies. Quantitative MR imaging is based on measurement and interpretation of tissue-specific parameters independent of experimental design, compared with qualitative MR imaging, which relies on interpretation of tissue contrast that results from experimental pulse sequence parameters. Quantitative MR imaging represents a natural next step in the evolution of MR imaging practice, since quantitative MR imaging data can be acquired using currently available qualitative imaging pulse sequences without modifications to imaging equipment. The article presents a review of the basic physical concepts used in MR imaging and how quantitative MR imaging is distinct from qualitative MR imaging. Subsequently, the article reviews the hierarchical organization of major applicable pulse sequences used in this article, with the sequences organized into conventional, hybrid, and multispectral sequences capable of calculating the main tissue parameters of T1, T2, and proton density. While this new concept offers the potential for improved diagnostic accuracy and workflow, awareness of this extension to qualitative imaging is generally low. This article reviews the basic physical concepts in MR imaging, describes commonly measured tissue parameters in quantitative MR imaging, and presents the major available pulse sequences used for quantitative MR imaging, with a focus on the hierarchical organization of these sequences. © RSNA, 2017.
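
    As an example of the parameter calculation that quantitative MR imaging performs, T2 and proton density can be estimated from two echoes of a conventional dual-echo sequence, assuming mono-exponential decay S(TE) = S0 · exp(-TE/T2). This is a textbook sketch, not a specific sequence or method from the article.

```python
import math

def t2_from_dual_echo(s1, s2, te1, te2):
    """Estimate T2 from two spin-echo signals at echo times te1 < te2,
    assuming mono-exponential decay S(TE) = S0 * exp(-TE / T2)."""
    return (te2 - te1) / math.log(s1 / s2)

def proton_density(s1, te1, t2):
    """Extrapolate the TE = 0 signal, proportional to proton density."""
    return s1 * math.exp(te1 / t2)
```

    Hybrid and multispectral sequences generalize this idea, fitting T1, T2 and proton density jointly from more acquisition points.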

  10. Comparison of the Diagnostic Accuracy of DSC- and Dynamic Contrast-Enhanced MRI in the Preoperative Grading of Astrocytomas.

    PubMed

    Nguyen, T B; Cron, G O; Perdrizet, K; Bezzina, K; Torres, C H; Chakraborty, S; Woulfe, J; Jansen, G H; Sinclair, J; Thornhill, R E; Foottit, C; Zanette, B; Cameron, I G

    2015-11-01

    Dynamic contrast-enhanced MR imaging parameters can be biased by poor measurement of the vascular input function. We have compared the diagnostic accuracy of dynamic contrast-enhanced MR imaging by using a phase-derived vascular input function and "bookend" T1 measurements with DSC MR imaging for preoperative grading of astrocytomas. This prospective study included 48 patients with a new pathologic diagnosis of an astrocytoma. Preoperative MR imaging was performed at 3T, which included 2 injections of 5-mL gadobutrol for dynamic contrast-enhanced and DSC MR imaging. During dynamic contrast-enhanced MR imaging, both magnitude and phase images were acquired to estimate plasma volume obtained from phase-derived vascular input function (Vp_Φ) and volume transfer constant obtained from phase-derived vascular input function (K(trans)_Φ) as well as plasma volume obtained from magnitude-derived vascular input function (Vp_SI) and volume transfer constant obtained from magnitude-derived vascular input function (K(trans)_SI). From DSC MR imaging, corrected relative CBV was computed. Four ROIs were placed over the solid part of the tumor, and the highest value among the ROIs was recorded. A Mann-Whitney U test was used to test for difference between grades. Diagnostic accuracy was assessed by using receiver operating characteristic analysis. Vp_Φ and K(trans)_Φ values were lower for grade II compared with grade III astrocytomas (P < .05). Vp_SI and K(trans)_SI were not significantly different between grade II and grade III astrocytomas (P = .08-.15). Relative CBV and dynamic contrast-enhanced MR imaging parameters except for K(trans)_SI were lower for grade III compared with grade IV (P ≤ .05). In differentiating low- and high-grade astrocytomas, we found no statistically significant difference in diagnostic accuracy between relative CBV and dynamic contrast-enhanced MR imaging parameters. 
In the preoperative grading of astrocytomas, the diagnostic accuracy of dynamic contrast-enhanced MR imaging parameters is similar to that of relative CBV. © 2015 by American Journal of Neuroradiology.

  11. Object knowledge changes visual appearance: semantic effects on color afterimages.

    PubMed

    Lupyan, Gary

    2015-10-01

    According to predictive coding models of perception, what we see is determined jointly by the current input and the priors established by previous experience, expectations, and other contextual factors. The same input can thus be perceived differently depending on the priors that are brought to bear during viewing. Here, I show that expected (diagnostic) colors are perceived more vividly than arbitrary or unexpected colors, particularly when color input is unreliable. Participants were tested on a version of the 'Spanish Castle Illusion' in which viewing a hue-inverted image renders a subsequently shown achromatic version of the image in vivid color. Adapting to objects with intrinsic colors (e.g., a pumpkin) led to stronger afterimages than adapting to arbitrarily colored objects (e.g., a pumpkin-colored car). Considerably stronger afterimages were also produced by scenes containing intrinsically colored elements (grass, sky) compared to scenes with arbitrarily colored objects (books). The differences between images with diagnostic and arbitrary colors disappeared when the association between the image and color priors was weakened by, e.g., presenting the image upside-down, consistent with the prediction that color appearance is being modulated by color knowledge. Visual inputs that conflict with prior knowledge appear to be phenomenologically discounted, but this discounting is moderated by input certainty, as shown by the final study which uses conventional images rather than afterimages. As input certainty is increased, unexpected colors can become easier to detect than expected ones, a result consistent with predictive-coding models. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Prototype Focal-Plane-Array Optoelectronic Image Processor

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi; Shaw, Timothy; Yu, Jeffrey

    1995-01-01

    Prototype very-large-scale integrated (VLSI) planar array of optoelectronic processing elements combines speed of optical input and output with flexibility of reconfiguration (programmability) of electronic processing medium. Basic concept of processor described in "Optical-Input, Optical-Output Morphological Processor" (NPO-18174). Performs binary operations on binary (black and white) images. Each processing element corresponds to one picture element of image and located at that picture element. Includes input-plane photodetector in form of parasitic phototransistor part of processing circuit. Output of each processing circuit used to modulate one picture element in output-plane liquid-crystal display device. Intended to implement morphological processing algorithms that transform image into set of features suitable for high-level processing; e.g., recognition.
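
    The morphological operations such a processor implements act locally at each picture element. Binary erosion over a square neighbourhood, a basic building block of morphological feature extraction, can be sketched in software as follows; this is purely illustrative, since the hardware performs the per-pixel operation in parallel optoelectronically.

```python
def erode(img, size=3):
    """Binary erosion of a nested-list image with a size x size square
    structuring element; out-of-bounds neighbours count as background."""
    h, w, r = len(img), len(img[0]), size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Pixel survives only if its whole neighbourhood is foreground.
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and img[y + dy][x + dx]
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)))
    return out
```

    Dilation, the dual operation (any neighbour set), combines with erosion to build the opening/closing transforms used for feature extraction.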

  13. Enhancing Web applications in radiology with Java: estimating MR imaging relaxation times.

    PubMed

    Dagher, A P; Fitzpatrick, M; Flanders, A E; Eng, J

    1998-01-01

    Java is a relatively new programming language that has been used to develop a World Wide Web-based tool for estimating magnetic resonance (MR) imaging relaxation times, thereby demonstrating how Java may be used for Web-based radiology applications beyond improving the user interface of teaching files. A standard processing algorithm coded with Java is downloaded along with the hypertext markup language (HTML) document. The user (client) selects the desired pulse sequence and inputs data obtained from a region of interest on the MR images. The algorithm is used to modify selected MR imaging parameters in an equation that models the phenomenon being evaluated. MR imaging relaxation times are estimated, and confidence intervals and a P value expressing the accuracy of the final results are calculated. Design features such as simplicity, object-oriented programming, and security restrictions allow Java to expand the capabilities of HTML by offering a more versatile user interface that includes dynamic annotations and graphics. Java also allows the client to perform more sophisticated information processing and computation than is usually associated with Web applications. Java is likely to become a standard programming option, and the development of stand-alone Java applications may become more common as Java is integrated into future versions of computer operating systems.

  14. Experimental image alignment system

    NASA Technical Reports Server (NTRS)

    Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.

    1980-01-01

    A microcomputer-based instrument for aligning an image with respect to a reference image is described; it uses the DEFT (Direct Electronic Fourier Transform) sensor for image sensing and preprocessing. The instrument alignment algorithm, which uses the two-dimensional Fourier transform as input, is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms that use image intensity data as input and is suitable for a microcomputer-based instrument, since the two-dimensional Fourier transform is provided by the DEFT sensor.
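
    The stage-steering signal amounts to estimating the translation between test and reference images. A brute-force spatial-domain search is sketched below as a stand-in; the instrument itself performs the equivalent matching on the DEFT sensor's Fourier-domain output, and `best_shift` is a hypothetical name.

```python
def best_shift(ref, img, max_shift=2):
    """Exhaustively search the (dy, dx) translation that best aligns img to
    ref (nested-list greyscale images), scoring candidates by overlap
    correlation. Returns the offset of img relative to ref; the stage would
    be steered by the negative of this offset."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float('-inf')
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = sum(
                ref[y][x] * img[y + dy][x + dx]
                for y in range(h) for x in range(w)
                if 0 <= y + dy < h and 0 <= x + dx < w)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best
```

    In the Fourier domain the same translation appears as a linear phase ramp, which is why operating on the DEFT transform output is computationally attractive.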

  15. Image quality assessment of silent T2 PROPELLER sequence for brain imaging in infants.

    PubMed

    Kim, Hyun Gi; Choi, Jin Wook; Yoon, Soo Han; Lee, Sieun

    2018-02-01

    Infants are vulnerable to high acoustic noise. Acoustic noise generated by MR scanning can be reduced by a silent sequence. The purpose of this study is to compare the image quality of the conventional and silent T2 PROPELLER sequences for brain imaging in infants. A total of 36 scans were acquired from 24 infants using a 3 T MR scanner. Each patient underwent both conventional and silent T2 PROPELLER sequences. The acoustic noise level was measured. Quantitative and qualitative assessments were performed with the images taken with each sequence. The sound pressure level of the conventional T2 PROPELLER imaging sequence was 92.1 dB and that of the silent T2 PROPELLER imaging sequence was 73.3 dB (a reduction of 20%). On quantitative assessment, the two sequences (conventional vs silent T2 PROPELLER) did not show a significant difference in relative contrast (0.069 vs 0.068, p value = 0.536) or signal-to-noise ratio (75.4 vs 114.8, p value = 0.098). Qualitative assessment of overall image quality (p value = 0.572), grey-white differentiation (p value = 0.986), shunt-related artefact (p value > 0.999), motion artefact (p value = 0.801) and myelination degree in different brain regions (p values ≥ 0.092) did not show a significant difference between the two sequences. The silent T2 PROPELLER sequence reduces acoustic noise and generates image quality comparable to that of the conventional sequence. Advances in knowledge: This is the first report to compare silent T2 PROPELLER images with those of conventional T2 PROPELLER images in children.
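
    The quantitative metrics reported above can be computed from ROI statistics. A minimal sketch with commonly used definitions; the paper's exact ROI placement and noise estimate are assumptions here.

```python
from statistics import mean, stdev

def relative_contrast(roi_a, roi_b):
    """Relative contrast between two tissue ROIs (flat intensity lists):
    |A - B| / (A + B), using ROI mean signals."""
    a, b = mean(roi_a), mean(roi_b)
    return abs(a - b) / (a + b)

def snr(tissue_roi, noise_roi):
    """Signal-to-noise ratio: mean tissue signal over background noise SD."""
    return mean(tissue_roi) / stdev(noise_roi)
```

    Comparable relative contrast and SNR between the conventional and silent acquisitions is what supports the conclusion that image quality is preserved.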

  16. Single-image super-resolution based on Markov random field and contourlet transform

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Liu, Zheng; Gueaieb, Wail; He, Xiaohai

    2011-04-01

    Learning-based methods are well adopted in image super-resolution. In this paper, we propose a new learning-based approach using contourlet transform and Markov random field. The proposed algorithm employs contourlet transform rather than the conventional wavelet to represent image features and takes into account the correlation between adjacent pixels or image patches through the Markov random field (MRF) model. The input low-resolution (LR) image is decomposed with the contourlet transform and fed to the MRF model together with the contourlet transform coefficients from the low- and high-resolution image pairs in the training set. The unknown high-frequency components/coefficients for the input low-resolution image are inferred by a belief propagation algorithm. Finally, the inverse contourlet transform converts the LR input and the inferred high-frequency coefficients into the super-resolved image. The effectiveness of the proposed method is demonstrated with experiments on facial, vehicle plate, and real scene images. Better visual quality is achieved in terms of peak signal-to-noise ratio and the structural similarity (SSIM) measurement.
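    The evaluation metric mentioned above, peak signal-to-noise ratio, is simple to state in code. A minimal helper, assuming 8-bit images (the peak value of 255 is an assumption; SSIM is omitted):

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    super-resolved estimate (higher is better)."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```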

  17. Cell assembly sequences arising from spike threshold adaptation keep track of time in the hippocampus

    PubMed Central

    Itskov, Vladimir; Curto, Carina; Pastalkova, Eva; Buzsáki, György

    2011-01-01

    Hippocampal neurons can display reliable and long-lasting sequences of transient firing patterns, even in the absence of changing external stimuli. We suggest that time-keeping is an important function of these sequences, and propose a network mechanism for their generation. We show that sequences of neuronal assemblies recorded from rat hippocampal CA1 pyramidal cells can reliably predict elapsed time (15-20 sec) during wheel running with a precision of 0.5 sec. In addition, we demonstrate the generation of multiple reliable, long-lasting sequences in a recurrent network model. These sequences are generated in the presence of noisy, unstructured inputs to the network, mimicking stationary sensory input. Identical initial conditions generate similar sequences, whereas different initial conditions give rise to distinct sequences. The key ingredients responsible for sequence generation in the model are threshold-adaptation and a Mexican-hat-like pattern of connectivity among pyramidal cells. This pattern may arise from recurrent systems such as the hippocampal CA3 region or the entorhinal cortex. We hypothesize that mechanisms that evolved for spatial navigation also support tracking of elapsed time in behaviorally relevant contexts. PMID:21414904

  18. Proprioceptive coordination of movement sequences: role of velocity and position information.

    PubMed

    Cordo, P; Carlton, L; Bevan, L; Carlton, M; Kerr, G K

    1994-05-01

    1. Recent studies have shown that the CNS uses proprioceptive information to coordinate multijoint movement sequences; proprioceptive input related to the kinematics of one joint rotation in a movement sequence can be used to trigger a subsequent joint rotation. In this paper we adopt a broad definition of "proprioception," which includes all somatosensory information related to joint posture and kinematics. This paper addresses how the CNS uses proprioceptive information related to the velocity and position of joints to coordinate multijoint movement sequences. 2. Normal human subjects sat at an experimental apparatus and performed a movement sequence with the right arm without visual feedback. The apparatus passively rotated the right elbow horizontally in the extension direction with either a constant velocity trajectory or an unpredictable velocity trajectory. The subjects' task was to briskly open the right hand when the elbow passed through a prescribed target position, similar to backhand throwing in the horizontal plane. The randomization of elbow velocities and the absence of visual information were used to discourage subjects from using any information other than proprioceptive input to perform the task. 3. Our results indicate that the CNS is able to extract the necessary kinematic information from proprioceptive input to trigger the hand opening at the correct elbow position. We estimated the minimal sensory conduction and processing delay to be 150 ms, and on the basis of this estimate, we predicted the expected performance with different degrees of reduced proprioceptive information. These predictions were compared with the subjects' actual performances, revealing that the CNS was using proprioceptive input related to joint velocity in this motor task. To determine whether position information was also being used, we examined the subjects' performances with unpredictable velocity trajectories. 
The results from experiments with unpredictable velocity trajectories indicate that the CNS extracts proprioceptive information related to both the velocity and the angular position of the joint to trigger the hand movement in this movement sequence. 4. To determine the generality of proprioceptive triggering in movement sequences, we estimated the minimal movement duration with which proprioceptive information can be used as well as the amount of learning required to use proprioceptive input to perform the task. The temporal limits for proprioceptive processing in this movement task were established by determining the minimal movement time during which the task could be performed.(ABSTRACT TRUNCATED AT 400 WORDS)

  19. ShortRead: a bioconductor package for input, quality assessment and exploration of high-throughput sequence data

    PubMed Central

    Morgan, Martin; Anders, Simon; Lawrence, Michael; Aboyoun, Patrick; Pagès, Hervé; Gentleman, Robert

    2009-01-01

    Summary: ShortRead is a package for input, quality assessment, manipulation and output of high-throughput sequencing data. ShortRead is provided in the R and Bioconductor environments, allowing ready access to additional facilities for advanced statistical analysis, data transformation, visualization and integration with diverse genomic resources. Availability and Implementation: This package is implemented in R and available at the Bioconductor web site; the package contains a ‘vignette’ outlining typical work flows. Contact: mtmorgan@fhcrc.org PMID:19654119

  20. Relationship between fatigue of generation II image intensifier and input illumination

    NASA Astrophysics Data System (ADS)

    Chen, Qingyou

    1995-09-01

    Fatigue in an image intensifier affects the imaging properties of the night vision system. In this paper, using the principle of Joule heating, we derive a mathematical formula for the heat generated in the semiconductor photocathode. We describe the relationships among the various parameters in the formula. We also discuss reasons for the fatigue of the Generation II image intensifier caused by excessive input illumination.

  1. Improved automatic adjustment of density and contrast in FCR system using neural network

    NASA Astrophysics Data System (ADS)

    Takeo, Hideya; Nakajima, Nobuyoshi; Ishida, Masamitsu; Kato, Hisatoyo

    1994-05-01

    The FCR system performs automatic adjustment of image density and contrast by analyzing the histogram of image data in the radiation field. The advanced image-recognition methods proposed in this paper, which use neural network technology, can improve this automatic adjustment performance. There are two methods; both use a basic 3-layer neural network with back propagation. In one method the image data are input directly to the input layer; in the other, the histogram data are input. The former is effective for imaging menus such as the shoulder joint, in which the position of the region of interest on the histogram changes with differences in positioning; the latter is effective for imaging menus such as the pediatric chest, in which the histogram shape changes with differences in positioning. We experimentally confirm the validity of these methods' automatic adjustment performance compared with conventional histogram-analysis methods.
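    The histogram-input variant can be sketched as a small 3-layer network trained by back propagation. Everything here is a synthetic stand-in (the bin count, the two adjustment targets, and the data itself are assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training data: 64-bin histograms -> 2 adjustment targets
# (e.g., a density shift and a contrast gain). Real FCR targets would
# come from radiographs; these are synthetic stand-ins.
X = rng.random((200, 64))
Y = X @ rng.normal(scale=0.1, size=(64, 2))

# 3-layer network (input -> tanh hidden -> linear output), plain backprop.
W1 = rng.normal(scale=0.1, size=(64, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 2));  b2 = np.zeros(2)
lr = 0.01
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden layer
    P = H @ W2 + b2                     # output layer
    err = (P - Y) / len(X)              # gradient of 0.5 * mean sq. error
    dH = (err @ W2.T) * (1 - H ** 2)    # backprop through tanh
    W2 -= lr * H.T @ err; b2 -= lr * err.sum(0)
    W1 -= lr * X.T @ dH;  b1 -= lr * dH.sum(0)

mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2)
```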

  2. ChIP-chip versus ChIP-seq: Lessons for experimental design and data analysis

    PubMed Central

    2011-01-01

    Background Chromatin immunoprecipitation (ChIP) followed by microarray hybridization (ChIP-chip) or high-throughput sequencing (ChIP-seq) allows genome-wide discovery of protein-DNA interactions such as transcription factor bindings and histone modifications. Previous reports only compared a small number of profiles, and little has been done to compare histone modification profiles generated by the two technologies or to assess the impact of input DNA libraries in ChIP-seq analysis. Here, we performed a systematic analysis of a modENCODE dataset consisting of 31 pairs of ChIP-chip/ChIP-seq profiles of the coactivator CBP, RNA polymerase II (RNA PolII), and six histone modifications across four developmental stages of Drosophila melanogaster. Results Both technologies produce highly reproducible profiles within each platform, ChIP-seq generally produces profiles with a better signal-to-noise ratio, and allows detection of more peaks and narrower peaks. The set of peaks identified by the two technologies can be significantly different, but the extent to which they differ varies depending on the factor and the analysis algorithm. Importantly, we found that there is a significant variation among multiple sequencing profiles of input DNA libraries and that this variation most likely arises from both differences in experimental condition and sequencing depth. We further show that using an inappropriate input DNA profile can impact the average signal profiles around genomic features and peak calling results, highlighting the importance of having high quality input DNA data for normalization in ChIP-seq analysis. Conclusions Our findings highlight the biases present in each of the platforms, show the variability that can arise from both technology and analysis methods, and emphasize the importance of obtaining high quality and deeply sequenced input DNA libraries for ChIP-seq analysis. PMID:21356108
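    The normalization step the authors emphasize, dividing ChIP signal by a matched input DNA track, can be sketched over binned read counts. The bins and pseudocount are illustrative assumptions; real pipelines also handle mappability and local biases:

```python
import numpy as np

def log2_enrichment(chip_counts, input_counts, pseudocount=1.0):
    """Per-bin log2(ChIP/input) after scaling the ChIP track to the
    input track's total depth -- a common ChIP-seq normalization."""
    chip = np.asarray(chip_counts, dtype=float)
    inp = np.asarray(input_counts, dtype=float)
    chip *= inp.sum() / chip.sum()          # match sequencing depth
    return np.log2((chip + pseudocount) / (inp + pseudocount))
```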

  3. UWGSP7: a real-time optical imaging workstation

    NASA Astrophysics Data System (ADS)

    Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.

    1995-04-01

    With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illumination of a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into a subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images. 
    To accommodate different input devices, the camera interface circuitry is designed in a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image-processing board using a direct connect port which provides a 66 Mbytes/s channel independent of the system bus. The image processing board utilizes the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second, providing the UWGSP7 with the capability to perform real-time image processing functions such as median filtering, convolution, and contrast enhancement. This processing power allows interactive analysis of the experiments as compared to the current practice of off-line processing and analysis. Due to its flexibility and programmability, the UWGSP7 can be adapted to various research needs in intraoperative optical imaging.
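    The control/data subtraction pipeline described above (average a burst of frames, subtract the pre-injection control, rescale for display) can be sketched in a few lines; the normalization to [0, 1] stands in for the pseudo-color mapping, which is omitted:

```python
import numpy as np

def reflectance_change(control, frames):
    """Average a burst of data frames, subtract the pre-injection control
    image, and normalize the difference to [0, 1] for display."""
    data = np.mean(np.asarray(frames, dtype=float), axis=0)
    diff = data - np.asarray(control, dtype=float)
    lo, hi = diff.min(), diff.max()
    if hi == lo:                            # flat difference image
        return np.zeros_like(diff)
    return (diff - lo) / (hi - lo)
```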

  4. A Segmentation Method for Lung Parenchyma Image Sequences Based on Superpixels and a Self-Generating Neural Forest

    PubMed Central

    Liao, Xiaolei; Zhao, Juanjuan; Jiao, Cheng; Lei, Lei; Qiang, Yan; Cui, Qiang

    2016-01-01

    Background Lung parenchyma segmentation is often performed as an important pre-processing step in the computer-aided diagnosis of lung nodules based on CT image sequences. However, existing lung parenchyma image segmentation methods cannot fully segment all lung parenchyma images and have a slow processing speed, particularly for images in the top and bottom of the lung and the images that contain lung nodules. Method Our proposed method first uses the position of the lung parenchyma image features to obtain lung parenchyma ROI image sequences. A gradient and sequential linear iterative clustering algorithm (GSLIC) for sequence image segmentation is then proposed to segment the ROI image sequences and obtain superpixel samples. The self-generating neural forest (SGNF), which is optimized by a genetic algorithm (GA), is then utilized for superpixel clustering. Finally, the grey and geometric features of the superpixel samples are used to identify and segment all of the lung parenchyma image sequences. Results Our proposed method achieves higher segmentation precision and greater accuracy than existing methods in less time. It has an average processing time of 42.21 seconds for each dataset and an average volume pixel overlap ratio of 92.22 ± 4.02% for four types of lung parenchyma image sequences. PMID:27532214
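    The superpixel step can be illustrated with a deliberately simplified SLIC-style clustering: k-means on (intensity, row, col) features seeded on a regular grid. This is a toy stand-in for the paper's GSLIC, with the grid size and spatial weight chosen arbitrarily:

```python
import numpy as np

def toy_superpixels(img, grid=4, weight=0.5, iters=10):
    """Cluster pixels on (intensity, row, col) features from a regular
    grid of seeds -- a toy stand-in for SLIC-style superpixels."""
    h, w = img.shape
    rows, cols = np.mgrid[0:h, 0:w]
    feats = np.stack([img.ravel().astype(float),
                      weight * rows.ravel() / h,
                      weight * cols.ravel() / w], axis=1)
    ri = np.linspace(0, h - 1, grid).astype(int)
    ci = np.linspace(0, w - 1, grid).astype(int)
    centers = np.array([[img[r, c], weight * r / h, weight * c / w]
                        for r in ri for c in ci], dtype=float)
    for _ in range(iters):
        # assign each pixel to its nearest center in feature space
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for k in range(len(centers)):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(0)
    return labels.reshape(h, w)
```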

  5. Satellite Image Mosaic Engine

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian

    2006-01-01

    A computer program automatically builds large, full-resolution mosaics of multispectral images of Earth landmasses from images acquired by Landsat 7, complete with matching of colors and blending between adjacent scenes. While the code has been used extensively for Landsat, it could also be used for other data sources. A single mosaic of as many as 8,000 scenes, represented by more than 5 terabytes of data and the largest set produced in this work, demonstrated what the code could do to provide global coverage. The program first statistically analyzes input images to determine areas of coverage and data-value distributions. It then transforms the input images from their original universal transverse Mercator coordinates to other geographical coordinates, with scaling. It applies a first-order polynomial brightness correction to each band in each scene. It uses a data-mask image for selecting data and blending of input scenes. Under control by a user, the program can be made to operate on small parts of the output image space, with check-point and restart capabilities. The program runs on SGI IRIX computers. It is capable of parallel processing using shared-memory code, large memories, and tens of central processing units. It can retrieve input data and store output data at locations remote from the processors on which it is executed.
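    The first-order polynomial brightness correction mentioned above amounts to fitting a gain and offset per band so a scene matches its neighbors. A minimal least-squares sketch, assuming the overlap pixels have already been extracted and co-registered:

```python
import numpy as np

def brightness_match(scene, reference):
    """Fit a first-order polynomial correction a*scene + b that matches
    the scene's overlap pixels to the reference band (least squares)."""
    s = scene.ravel().astype(float)
    r = reference.ravel().astype(float)
    A = np.stack([s, np.ones_like(s)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, r, rcond=None)
    return a * scene + b, (a, b)
```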

  6. Small RNA Library Preparation Method for Next-Generation Sequencing Using Chemical Modifications to Prevent Adapter Dimer Formation.

    PubMed

    Shore, Sabrina; Henderson, Jordana M; Lebedev, Alexandre; Salcedo, Michelle P; Zon, Gerald; McCaffrey, Anton P; Paul, Natasha; Hogrefe, Richard I

    2016-01-01

    For most sample types, the automation of RNA and DNA sample preparation workflows enables high throughput next-generation sequencing (NGS) library preparation. Greater adoption of small RNA (sRNA) sequencing has been hindered by high sample input requirements and inherent ligation side products formed during library preparation. These side products, known as adapter dimer, are very similar in size to the tagged library. Most sRNA library preparation strategies thus employ a gel purification step to isolate tagged library from adapter dimer contaminants. At very low sample inputs, adapter dimer side products dominate the reaction and limit the sensitivity of this technique. Here we address the need for improved specificity of sRNA library preparation workflows with a novel library preparation approach that uses modified adapters to suppress adapter dimer formation. This workflow allows for lower sample inputs and elimination of the gel purification step, which in turn allows for an automatable sRNA library preparation protocol.

  7. RBT-GA: a novel metaheuristic for solving the multiple sequence alignment problem

    PubMed Central

    Taheri, Javid; Zomaya, Albert Y

    2009-01-01

    Background Multiple Sequence Alignment (MSA) has always been an active area of research in Bioinformatics. MSA is mainly focused on discovering biologically meaningful relationships among different sequences or proteins in order to investigate the underlying main characteristics/functions. This information is also used to generate phylogenetic trees. Results This paper presents a novel approach, namely RBT-GA, to solve the MSA problem using a hybrid solution methodology combining the Rubber Band Technique (RBT) and the Genetic Algorithm (GA) metaheuristic. RBT is inspired by the behavior of an elastic Rubber Band (RB) on a plate with several poles, which is analogous to locations in the input sequences that could potentially be biologically related. A GA attempts to mimic the evolutionary processes of life in order to locate optimal solutions in an often very complex landscape. RBT-GA is a population based optimization algorithm designed to find the optimal alignment for a set of input protein sequences. In this novel technique, each alignment answer is modeled as a chromosome consisting of several poles in the RBT framework. These poles resemble locations in the input sequences that are most likely to be correlated and/or biologically related. A GA-based optimization process improves these chromosomes gradually yielding a set of mostly optimal answers for the MSA problem. Conclusion RBT-GA is tested with one of the well-known benchmark suites (BALiBASE 2.0) in this area. The obtained results show the superiority of the proposed technique, even in the case of formidable sequences. PMID:19594869
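    The GA half of the hybrid follows the standard select/crossover/mutate loop. A minimal skeleton, with bit strings as a toy stand-in for RBT pole configurations and a trivial fitness (number of 1 bits) in place of an alignment score:

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=1):
    """Minimal GA: tournament selection, one-point crossover, bit-flip
    mutation. Individuals are bit strings (toy chromosomes)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():                         # tournament of two
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, length)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [g ^ 1 if rng.random() < mutation_rate else g
                     for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm(sum)               # maximize the number of 1 bits
```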

  8. Optimization of rotamers prior to template minimization improves stability predictions made by computational protein design.

    PubMed

    Davey, James A; Chica, Roberto A

    2015-04-01

    Computational protein design (CPD) predictions are highly dependent on the structure of the input template used. However, it is unclear how small differences in template geometry translate to large differences in stability prediction accuracy. Herein, we explored how structural changes to the input template affect the outcome of stability predictions by CPD. To do this, we prepared alternate templates by Rotamer Optimization followed by energy Minimization (ROM) and used them to recapitulate the stability of 84 protein G domain β1 mutant sequences. In the ROM process, side-chain rotamers for wild-type (WT) or mutant sequences are optimized on crystal or nuclear magnetic resonance (NMR) structures prior to template minimization, resulting in alternate structures termed ROM templates. We show that use of ROM templates prepared from sequences known to be stable results predominantly in improved prediction accuracy compared to using the minimized crystal or NMR structures. Conversely, ROM templates prepared from sequences that are less stable than the WT reduce prediction accuracy by increasing the number of false positives. These observed changes in prediction outcomes are attributed to differences in side-chain contacts made by rotamers in ROM templates. Finally, we show that ROM templates prepared from sequences that are unfolded or that adopt a nonnative fold result in the selective enrichment of sequences that are also unfolded or that adopt a nonnative fold, respectively. Our results demonstrate the existence of a rotamer bias caused by the input template that can be harnessed to skew predictions toward sequences displaying desired characteristics. © 2014 The Protein Society.

  9. The NMR phased array.

    PubMed

    Roemer, P B; Edelstein, W A; Hayes, C E; Souza, S P; Mueller, O M

    1990-11-01

    We describe methods for simultaneously acquiring and subsequently combining data from a multitude of closely positioned NMR receiving coils. The approach is conceptually similar to phased array radar and ultrasound and hence we call our techniques the "NMR phased array." The NMR phased array offers the signal-to-noise ratio (SNR) and resolution of a small surface coil over fields-of-view (FOV) normally associated with body imaging with no increase in imaging time. The NMR phased array can be applied to both imaging and spectroscopy for all pulse sequences. The problematic interactions among nearby surface coils are eliminated (a) by overlapping adjacent coils to give zero mutual inductance, hence zero interaction, and (b) by attaching low input impedance preamplifiers to all coils, thus eliminating interference among next nearest and more distant neighbors. We derive an algorithm for combining the data from the phased array elements to yield an image with optimum SNR. Other techniques which are easier to implement at the cost of lower SNR are explored. Phased array imaging is demonstrated with high resolution (512 x 512, 48-cm FOV, and 32-cm FOV) spin-echo images of the thoracic and lumbar spine. Data were acquired from four-element linear spine arrays, the first made of 12-cm square coils and the second made of 8-cm square coils. When compared with images from a single 15 x 30-cm rectangular coil and identical imaging parameters, the phased array yields a 2X and 3X higher SNR at the depth of the spine (approximately 7 cm).
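    The SNR-optimal combination can be sketched as a noise-whitened matched filter over the coil dimension, in the spirit of the phased-array reconstruction (coil sensitivities and the noise covariance are assumed known here; estimating them is a separate problem):

```python
import numpy as np

def phased_array_combine(signals, sensitivities, noise_cov):
    """SNR-optimal combination of multi-coil data: weight each coil
    image by the noise-whitened coil sensitivity.
    signals: complex array of shape (coils, ...)."""
    w = np.conj(sensitivities) @ np.linalg.inv(noise_cov)   # matched filter
    num = np.tensordot(w, signals, axes=(0, 0))
    den = np.real(w @ sensitivities)                        # normalization
    return num / den
```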

  10. Three-dimensional T1rho-weighted MRI at 1.5 Tesla.

    PubMed

    Borthakur, Arijitt; Wheaton, Andrew; Charagundla, Sridhar R; Shapiro, Erik M; Regatte, Ravinder R; Akella, Sarma V S; Kneeland, J Bruce; Reddy, Ravinder

    2003-06-01

    To design and implement a magnetic resonance imaging (MRI) pulse sequence capable of performing three-dimensional T(1rho)-weighted MRI on a 1.5-T clinical scanner, and determine the optimal sequence parameters, both theoretically and experimentally, so that the energy deposition by the radiofrequency pulses in the sequence, measured as the specific absorption rate (SAR), does not exceed safety guidelines for imaging human subjects. A three-pulse cluster was pre-encoded to a three-dimensional gradient-echo imaging sequence to create a three-dimensional, T(1rho)-weighted MRI pulse sequence. Imaging experiments were performed on a GE clinical scanner with a custom-built knee-coil. We validated the performance of this sequence by imaging articular cartilage of a bovine patella and comparing T(1rho) values measured by this sequence to those obtained with a previously tested two-dimensional imaging sequence. Using a previously developed model for SAR calculation, the imaging parameters were adjusted such that the energy deposition by the radiofrequency pulses in the sequence did not exceed safety guidelines for imaging human subjects. The actual temperature increase due to the sequence was measured in a phantom by a MRI-based temperature mapping technique. Following these experiments, the performance of this sequence was demonstrated in vivo by obtaining T(1rho)-weighted images of the knee joint of a healthy individual. Calculated T(1rho) of articular cartilage in the specimen was similar for both the three-dimensional and two-dimensional methods (84 +/- 2 msec and 80 +/- 3 msec, respectively). The temperature increase in the phantom resulting from the sequence was 0.015 degrees C, which is well below the established safety guidelines. Images of the human knee joint in vivo demonstrate a clear delineation of cartilage from surrounding tissues. We developed and implemented a three-dimensional T(1rho)-weighted pulse sequence on a 1.5-T clinical scanner. 
Copyright 2003 Wiley-Liss, Inc.

  11. Application of artificial neural network to fMRI regression analysis.

    PubMed

    Misaki, Masaya; Miyauchi, Satoru

    2006-01-15

    We used an artificial neural network (ANN) to detect correlations between event sequences and fMRI (functional magnetic resonance imaging) signals. The layered feed-forward neural network, given a series of events as inputs and the fMRI signal as a supervised signal, performed a non-linear regression analysis. This type of ANN is capable of approximating any continuous function, and thus this analysis method can detect any fMRI signals that correlated with corresponding events. Because of the flexible nature of ANNs, fitting to autocorrelation noise is a problem in fMRI analyses. We avoided this problem by using cross-validation and an early stopping procedure. The results showed that the ANN could detect various responses with different time courses. The simulation analysis also indicated an additional advantage of ANN over non-parametric methods in detecting parametrically modulated responses, i.e., it can detect various types of parametric modulations without a priori assumptions. The ANN regression analysis is therefore beneficial for exploratory fMRI analyses in detecting continuous changes in responses modulated by changes in input values.
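    The combination the abstract relies on, a flexible feed-forward regressor reined in by early stopping on held-out data, can be sketched on synthetic data. The network size, learning rate, patience, and the toy target function are all assumptions, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for an event-to-signal regression: the true
# response is nonlinear in the input, plus noise.
x = np.linspace(-1, 1, 120)[:, None]
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.shape)
idx = np.arange(120)
tr, va = idx % 3 != 0, idx % 3 == 0         # interleaved train/validation

W1 = rng.normal(scale=0.5, size=(1, 12)); b1 = np.zeros(12)
W2 = rng.normal(scale=0.5, size=(12, 1)); b2 = np.zeros(1)

def forward(x):
    return np.tanh(x @ W1 + b1) @ W2 + b2

best_val, patience = np.inf, 0
for epoch in range(3000):
    H = np.tanh(x[tr] @ W1 + b1)
    err = (H @ W2 + b2 - y[tr]) / tr.sum()
    dH = (err @ W2.T) * (1 - H ** 2)        # backprop before updating W2
    W2 -= 0.1 * H.T @ err; b2 -= 0.1 * err.sum(0)
    W1 -= 0.1 * x[tr].T @ dH; b1 -= 0.1 * dH.sum(0)
    val_loss = np.mean((forward(x[va]) - y[va]) ** 2)
    if val_loss < best_val - 1e-6:
        best_val, patience = val_loss, 0
    else:                                   # early stopping guard
        patience += 1
        if patience > 100:
            break
```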

  12. Optimization of a double inversion recovery sequence for noninvasive synovium imaging of joint effusion in the knee.

    PubMed

    Jahng, Geon-Ho; Jin, Wook; Yang, Dal Mo; Ryu, Kyung Nam

    2011-05-01

    We wanted to optimize a double inversion recovery (DIR) sequence to image joint effusion regions of the knee, especially intracapsular or intrasynovial imaging in the suprapatellar bursa and patellofemoral joint space. Computer simulations were performed to determine the optimum inversion times (TI) for suppressing both fat and water signals, and a DIR sequence was optimized based on the simulations for distinguishing synovitis from fluid. In vivo studies were also performed on individuals who showed joint effusion on routine knee MR images to demonstrate the feasibility of using the DIR sequence with a 3T whole-body MR scanner. To compare intracapsular or intrasynovial signals on the DIR images, intermediate density-weighted images and/or post-enhanced T1-weighted images were acquired. The timings to enhance the synovial contrast from the fluid components were TI1 = 2830 ms and TI2 = 254 ms for suppressing the water and fat signals, respectively. Improved contrast for the intrasynovial area in the knees was observed with the DIR turbo spin-echo pulse sequence compared to the intermediate density-weighted sequence. Imaging contrast obtained noninvasively with the DIR sequence was similar to that of the post-enhanced T1-weighted sequence. The DIR sequence may be useful for delineating synovium without using contrast materials.
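    The timing optimization above solves for the inversion-time pair that nulls two T1 species simultaneously. A numerical sketch under simplifying assumptions (ideal inversions, full relaxation between repetitions, and illustrative T1 values; the study's own simulated values were TI1 = 2830 ms, TI2 = 254 ms):

```python
import math

def dir_longitudinal(ti1, ti2, t1):
    """Mz just before readout after a double inversion, with TI1 measured
    from the first inversion to readout and TI2 from the second inversion
    to readout, assuming full relaxation between repetitions."""
    return 1 - 2 * math.exp(-ti2 / t1) + 2 * math.exp(-ti1 / t1)

def find_dir_times(t1_a, t1_b):
    """Coarse grid search (in ms) for (TI1, TI2) nulling both species."""
    best, best_err = None, float("inf")
    for ti1 in range(500, 6000, 10):
        for ti2 in range(50, 1000, 5):
            if ti2 >= ti1:
                continue
            err = (abs(dir_longitudinal(ti1, ti2, t1_a)) +
                   abs(dir_longitudinal(ti1, ti2, t1_b)))
            if err < best_err:
                best, best_err = (ti1, ti2), err
    return best, best_err

# Illustrative T1 values (ms) for fluid and fat -- assumptions, not the
# study's tissue parameters.
(ti1, ti2), err = find_dir_times(4000.0, 370.0)
```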

  13. Black optic display

    DOEpatents

    Veligdan, James T.

    1997-01-01

    An optical display includes a plurality of stacked optical waveguides having first and second opposite ends collectively defining an image input face and an image screen, respectively, with the screen being oblique to the input face. Each of the waveguides includes a transparent core bound by a cladding layer having a lower index of refraction for effecting internal reflection of image light transmitted into the input face to project an image on the screen, with each of the cladding layers including a cladding cap integrally joined thereto at the waveguide second ends. Each of the cores is beveled at the waveguide second end so that the cladding cap is viewable through the transparent core. Each of the cladding caps is black for absorbing external ambient light incident upon the screen for improving contrast of the image projected internally on the screen.

  14. Feedback shift register sequences versus uniformly distributed random sequences for correlation chromatography

    NASA Technical Reports Server (NTRS)

    Kaljurand, M.; Valentin, J. R.; Shao, M.

    1996-01-01

    Two alternative input sequences are commonly employed in correlation chromatography (CC): sequences derived according to the feedback shift register algorithm (i.e., pseudo random binary sequences (PRBS)) and uniformly distributed random binary sequences (URBS). These two sequences are compared. By applying the "cleaning" data processing technique to the correlograms that result from these sequences, we show that when the PRBS is used, the S/N of the correlogram is much higher than that obtained using URBS.
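    A PRBS of the kind described is generated by a linear-feedback shift register whose taps correspond to a primitive polynomial. A minimal Fibonacci-LFSR sketch (the 4-bit register and taps are illustrative, not the chromatography settings):

```python
def lfsr_prbs(taps, nbits, length):
    """Pseudo-random binary sequence from a Fibonacci feedback shift
    register; `taps` are 1-based register positions XORed for feedback."""
    state = [1] * nbits                     # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])               # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]           # shift, feed back at the front
    return out

# x^4 + x^3 + 1 is primitive, so the period is 2**4 - 1 = 15
seq = lfsr_prbs(taps=(4, 3), nbits=4, length=30)
```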

  15. Intelligent fault diagnosis of rolling bearings using an improved deep recurrent neural network

    NASA Astrophysics Data System (ADS)

    Jiang, Hongkai; Li, Xingqiu; Shao, Haidong; Zhao, Ke

    2018-06-01

    Traditional intelligent fault diagnosis methods for rolling bearings heavily depend on manual feature extraction and feature selection. To address this, an intelligent deep learning method, named the improved deep recurrent neural network (DRNN), is proposed in this paper. Firstly, frequency spectrum sequences are used as inputs to reduce the input size and ensure good robustness. Secondly, DRNN is constructed by the stacks of the recurrent hidden layer to automatically extract the features from the input spectrum sequences. Thirdly, an adaptive learning rate is adopted to improve the training performance of the constructed DRNN. The proposed method is verified with experimental rolling bearing data, and the results confirm that the proposed method is more effective than traditional intelligent fault diagnosis methods.
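    The front end of such a pipeline (frame the vibration signal, take magnitude spectra, feed the sequence to a recurrent layer) can be sketched with a single untrained Elman-style layer. The frame size, hidden width, and weights are arbitrary stand-ins; the paper's stacked, trained DRNN is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)

def vibration_spectrum(signal, frame=64):
    """Split a vibration signal into frames and take magnitude spectra,
    giving the spectrum sequence used as network input."""
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    return np.abs(np.fft.rfft(frames, axis=1))

def rnn_forward(xs, Wx, Wh, b):
    """Forward pass of one recurrent (Elman) layer over the spectrum
    sequence; the final hidden state serves as a feature vector."""
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
    return h

sig = np.sin(2 * np.pi * 0.1 * np.arange(1024)) + 0.1 * rng.normal(size=1024)
spec = vibration_spectrum(sig)              # (frames, bins)
hidden = 8
Wx = rng.normal(scale=0.1, size=(hidden, spec.shape[1]))
Wh = rng.normal(scale=0.1, size=(hidden, hidden))
features = rnn_forward(spec, Wx, Wh, np.zeros(hidden))
```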

  16. Event sequence detector

    NASA Technical Reports Server (NTRS)

    Hanna, M. F. (Inventor)

    1973-01-01

    An event sequence detector is described with input units, each associated with a row of bistable elements arranged in an array of rows and columns. The detector also includes a shift register which is responsive to clock pulses from any of the units to sequentially provide signals on its output lines, each of which is connected to the bistable elements in a corresponding column. When the event-indicating signal is received by an input unit, it provides a clock pulse to the shift register to provide the signal on one of its output lines. The input unit also enables all its bistable elements so that the particular element in the column supplied with the signal from the register is driven to an event-indicating state.
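    The described hardware can be modeled in software: each event advances a shift-register pointer to the next column, and the element at (signalling row, active column) latches, so the array records which unit fired at each step of the sequence. A minimal simulation:

```python
class EventSequenceDetector:
    """Software model of the detector: a shift register selects one
    column per event, and the row of the signalling input unit latches
    the bistable element at (row, column)."""

    def __init__(self, rows, cols):
        self.cols = cols
        self.position = -1                  # shift-register pointer
        self.latched = [[False] * cols for _ in range(rows)]

    def event(self, row):
        """An event-indicating signal from input unit `row` clocks the
        shift register and latches the element in the active column."""
        self.position = (self.position + 1) % self.cols
        self.latched[row][self.position] = True
```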

  17. DSAP: deep-sequencing small RNA analysis pipeline.

    PubMed

    Huang, Po-Jung; Liu, Yi-Chung; Lee, Chi-Ching; Lin, Wei-Chen; Gan, Richie Ruei-Chi; Lyu, Ping-Chiang; Tang, Petrus

    2010-07-01

    DSAP is an automated multiple-task web service designed to provide a total solution for analyzing deep-sequencing small RNA datasets generated by next-generation sequencing technology. DSAP uses a tab-delimited file as an input format, which holds the unique sequence reads (tags) and their corresponding number of copies generated by the Solexa sequencing platform. The input data go through four analysis steps in DSAP: (i) cleanup: removal of adaptors and poly-A/T/C/G/N nucleotides; (ii) clustering: grouping of cleaned sequence tags into unique sequence clusters; (iii) non-coding RNA (ncRNA) matching: sequence homology mapping against a transcribed sequence library from the ncRNA database Rfam (http://rfam.sanger.ac.uk/); and (iv) known miRNA matching: detection of known miRNAs in miRBase (http://www.mirbase.org/) based on sequence homology. The expression levels corresponding to matched ncRNAs and miRNAs are summarized in multi-color clickable bar charts linked to external databases. DSAP is also capable of displaying miRNA expression levels from different jobs using a log(2)-scaled color matrix. Furthermore, a cross-species comparative function is also provided to show the distribution of identified miRNAs in different species as deposited in miRBase. DSAP is available at http://dsap.cgu.edu.tw.
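
    Steps (i) and (ii) can be sketched in a few lines; the adapter string and the run-length-of-three threshold for poly-A/T/C/G/N trimming below are illustrative assumptions, not DSAP's actual values:

```python
import re
from collections import Counter

ADAPTER = "TCGTATGCCG"  # hypothetical 3' adapter; the real one is user-supplied

def cleanup(tag):
    """Step (i): trim the 3' adapter, then any trailing homopolymer run of
    three or more A/T/C/G/N (threshold is an assumption)."""
    idx = tag.find(ADAPTER)
    if idx != -1:
        tag = tag[:idx]
    return re.sub(r"(A{3,}|T{3,}|C{3,}|G{3,}|N{3,})$", "", tag)

def cluster(tags):
    """Step (ii): group cleaned (tag, copy-count) reads into unique-sequence
    clusters, summing the copy numbers."""
    counts = Counter()
    for tag, copies in tags:
        cleaned = cleanup(tag)
        if cleaned:
            counts[cleaned] += copies
    return counts
```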

  18. A comparison of ordinary fuzzy and intuitionistic fuzzy approaches in visualizing the image of flat electroencephalography

    NASA Astrophysics Data System (ADS)

    Zenian, Suzelawati; Ahmad, Tahir; Idris, Amidora

    2017-09-01

    Medical imaging is a subfield of image processing that deals with medical images. It is crucial for visualizing body parts in a non-invasive way using appropriate image processing techniques. Generally, image processing is used to enhance the visual appearance of images for further interpretation. However, the pixel values of an image may not be precise, as uncertainty arises within the gray values of an image due to several factors. In this paper, the input and output images of Flat Electroencephalography (fEEG) of an epileptic patient at varying times are presented. Furthermore, ordinary fuzzy and intuitionistic fuzzy approaches are applied to the input images and the results of the two approaches are compared.

  19. Processing Dynamic Image Sequences from a Moving Sensor.

    DTIC Science & Technology

    1984-02-01

    (Abstract not available; the record text consists of table-of-contents fragments referencing a Roadsign image sequence, a Roadsign sequence with redundant features, Roadsign subimages, and tables of selected-feature error and local-search values for the Industrial and Roadsign images.)

  20. Input Scanners: A Growing Impact In A Diverse Marketplace

    NASA Astrophysics Data System (ADS)

    Marks, Kevin E.

    1989-08-01

    Just as newly invented photographic processes revolutionized the printing industry at the turn of the century, electronic imaging has affected almost every computer application today. To completely emulate traditionally mechanical means of information handling, computer-based systems must be able to capture graphic images. Thus, there is a widespread need for the electronic camera, the digitizer, the input scanner. This paper will review how various types of input scanners are being used in many diverse applications. The following topics will be covered:
    - Historical overview of input scanners
    - New applications for scanners
    - Impact of scanning technology on select markets
    - Scanning systems issues

  1. Scheme of Optical Image Encryption with Digital Information Input and Dynamic Encryption Key based on Two LC SLMs

    NASA Astrophysics Data System (ADS)

    Bondareva, A. P.; Cheremkhin, P. A.; Evtikhiev, N. N.; Krasnov, V. V.; Starikov, S. N.

    A scheme of optical image encryption with digital information input and a dynamic encryption key, based on two liquid crystal spatial light modulators and operating with spatially incoherent monochromatic illumination, is experimentally implemented. Results of experiments on optical encryption and numerical decryption of images are presented. A satisfactory decryption error of 0.20-0.27 is achieved.

  2. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    NASA Astrophysics Data System (ADS)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H₂¹⁵O or C¹⁵O₂, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H₂¹⁵O PET image as a completely non-invasive approach. Our technique uses a formula that expresses the input in terms of a tissue curve and a rate-constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the inputs reproduced from the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured ones well. The difference between the CBF values obtained with the two methods was small (around <8%) and the calculated CBF values showed a tight correlation (r = 0.97). The simulation showed that errors associated with the assumed parameters were <10%, and that the optimal number of tissue curves to be used was around 500. Our results demonstrate that an IDIF can be reconstructed directly from tissue curves obtained through H₂¹⁵O PET imaging. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
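
    The reconstruction idea can be sketched with a one-compartment water model; everything below is a toy illustration (synthetic curves in place of PET data, unit K1 assumed, and a coarse grid search standing in for the paper's estimation):

```python
import numpy as np

dt = 0.5
t = np.arange(0.0, 60.0, dt)
true_input = np.exp(-((t - 15.0) / 6.0) ** 2)   # synthetic arterial curve

def tissue_curve(k2, K1=1.0):
    """One-compartment water model: dC/dt = K1*Ca(t) - k2*C(t)."""
    c = np.zeros_like(t)
    for i in range(1, len(t)):
        c[i] = c[i - 1] + dt * (K1 * true_input[i - 1] - k2 * c[i - 1])
    return c

def reproduce_input(c, k2, K1=1.0):
    """Invert the model: Ca(t) = (dC/dt + k2*C) / K1."""
    return (np.gradient(c, dt) + k2 * c) / K1

curves = [tissue_curve(0.3), tissue_curve(0.6)]

# estimate the rate constants so the inputs reproduced from the two tissue
# curves agree, then average the reproduced inputs into the IDIF
best = None
for ka in np.arange(0.1, 1.0, 0.05):
    for kb in np.arange(0.1, 1.0, 0.05):
        err = np.abs(reproduce_input(curves[0], ka)
                     - reproduce_input(curves[1], kb)).sum()
        if best is None or err < best[0]:
            best = (err, ka, kb)

_, ka, kb = best
idif = 0.5 * (reproduce_input(curves[0], ka) + reproduce_input(curves[1], kb))
```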

  3. Diffraction phase microscopy imaging and multi-physics modeling of the nanoscale thermal expansion of a suspended resistor.

    PubMed

    Wang, Xiaozhen; Lu, Tianjian; Yu, Xin; Jin, Jian-Ming; Goddard, Lynford L

    2017-07-04

    We studied the nanoscale thermal expansion of a suspended resistor both theoretically and experimentally and obtained consistent results. In the theoretical analysis, we used a three-dimensional coupled electrical-thermal-mechanical simulation and obtained the temperature and displacement field of the suspended resistor under a direct current (DC) input voltage. In the experiment, we recorded a sequence of images of the axial thermal expansion of the central bridge region of the suspended resistor at a rate of 1.8 frames/s by using epi-illumination diffraction phase microscopy (epi-DPM). This method accurately measured nanometer level relative height changes of the resistor in a temporally and spatially resolved manner. Upon application of a 2 V step in voltage, the resistor exhibited a steady-state increase in resistance of 1.14 Ω and in relative height of 3.5 nm, which agreed reasonably well with the predicted values of 1.08 Ω and 4.4 nm, respectively.

  4. FASTdoop: a versatile and efficient library for the input of FASTA and FASTQ files for MapReduce Hadoop bioinformatics applications.

    PubMed

    Ferraro Petrillo, Umberto; Roscigno, Gianluca; Cattaneo, Giuseppe; Giancarlo, Raffaele

    2017-05-15

    MapReduce Hadoop bioinformatics applications require the availability of special-purpose routines to manage the input of sequence files. Unfortunately, the Hadoop framework does not provide any built-in support for the most popular sequence file formats like FASTA or BAM. Moreover, the development of these routines is not easy, both because of the diversity of these formats and the need for managing efficiently sequence datasets that may count up to billions of characters. We present FASTdoop, a generic Hadoop library for the management of FASTA and FASTQ files. We show that, with respect to analogous input management routines that have appeared in the literature, it offers versatility and efficiency. That is, it can handle collections of reads, with or without quality scores, as well as long genomic sequences, while the existing routines concentrate mainly on NGS sequence data. Moreover, in the domain where a comparison is possible, the routines proposed here are faster than the available ones. In conclusion, FASTdoop is a much needed addition to Hadoop-BAM. The software and the datasets are available at http://www.di.unisa.it/FASTdoop/ . umberto.ferraro@uniroma1.it. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved.

  5. Diuretic-enhanced gadolinium excretory MR urography: comparison of conventional gradient-echo sequences and echo-planar imaging.

    PubMed

    Nolte-Ernsting, C C; Tacke, J; Adam, G B; Haage, P; Jung, P; Jakse, G; Günther, R W

    2001-01-01

    The aim of this study was to investigate the utility of different gadolinium-enhanced T1-weighted gradient-echo techniques in excretory MR urography. In 74 urologic patients, excretory MR urography was performed using various T1-weighted gradient-echo (GRE) sequences after injection of gadolinium-DTPA and low-dose furosemide. The examinations included conventional GRE sequences and echo-planar imaging (GRE EPI), both obtained with 3D data sets and 2D projection images. Breath-hold acquisition was used primarily. In 20 of 74 examinations, we compared breath-hold imaging with respiratory gating. Breath-hold imaging was significantly superior to respiratory gating for the visualization of pelvicaliceal systems, but not for the ureters. Complete MR urograms were obtained within 14-20 s using 3D GRE EPI sequences and in 20-30 s with conventional 3D GRE sequences. Ghost artefacts caused by ureteral peristalsis often occurred with conventional 3D GRE imaging and were almost completely suppressed in EPI sequences (p < 0.0001). Susceptibility effects were more pronounced on GRE EPI MR urograms, and calculi measured 0.8-21.7% greater in diameter compared with conventional GRE sequences. Increased spatial resolution degraded the image quality only in GRE-EPI urograms. In projection MR urography, the entire pelvicaliceal system was imaged by acquisition of a fast single-slice sequence, and the conventional 2D GRE technique provided better morphological accuracy than 2D GRE EPI projection images (p < 0.0003). Fast 3D GRE EPI sequences improve the clinical practicability of excretory MR urography, especially in elderly or critically ill patients unable to suspend breathing for more than 20 s. Conventional GRE sequences are superior to EPI in high-resolution detail MR urograms and in projection imaging.

  6. Three-dimensional image display system using stereogram and holographic optical memory techniques

    NASA Astrophysics Data System (ADS)

    Kim, Cheol S.; Kim, Jung G.; Shin, Chang-Mok; Kim, Soo-Joong

    2001-09-01

    In this paper, we implemented a three-dimensional image display system, using stereogram and holographic optical memory techniques, which can store many images and reconstruct them automatically. In this system, to store and reconstruct stereo images, the incident angle of the reference beam must be controlled in real time, so we used a BPH (binary phase hologram) and an LCD (liquid crystal display) to control the reference beam. The input images are represented on the LCD without a polarizer/analyzer to maintain uniform beam intensities regardless of the brightness of the input images. The input images and BPHs are edited using application software with the same scheduled recording interval during storage. The reconstructed stereo images are acquired by capturing the output images with a CCD camera placed behind the analyzer, which transforms phase information into brightness information. The reference beams are obtained by Fourier transform of the BPH, which is designed with the SA (simulated annealing) algorithm, and are represented on the LCD at 0.05 s intervals using application software for reconstructing the stereo images. In the output plane, we used an LCD shutter synchronized to a monitor that displays alternate left- and right-eye images for depth perception. We demonstrated an optical experiment that repeatedly stores and reconstructs four stereo images in BaTiO3 using holographic optical memory techniques.

  7. Four dimensional magnetic resonance imaging with retrospective k-space reordering: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yilin; Yin, Fang-Fang; Cai, Jing, E-mail: jing.cai@duke.edu

    Purpose: Current four dimensional magnetic resonance imaging (4D-MRI) techniques lack sufficient temporal/spatial resolution and consistent tumor contrast. To overcome these limitations, this study presents the development and initial evaluation of a new strategy for 4D-MRI which is based on retrospective k-space reordering. Methods: We simulated a k-space reordered 4D-MRI on a 4D digital extended cardiac-torso (XCAT) human phantom. A 2D echo planar imaging MRI sequence [frame rate (F) = 0.448 Hz; image resolution (R) = 256 × 256; number of k-space segments (N_KS) = 4] with sequential image acquisition mode was assumed for the simulation. Image quality of the simulated “4D-MRI” acquired from the XCAT phantom was qualitatively evaluated, and tumor motion trajectories were compared to input signals. In particular, mean absolute amplitude differences (D) and cross correlation coefficients (CC) were calculated. Furthermore, to evaluate the data sufficient condition for the new 4D-MRI technique, a comprehensive simulation study was performed using 30 cancer patients’ respiratory profiles to study the relationships between data completeness (C_P) and a number of impacting factors: the number of repeated scans (N_R), number of slices (N_S), number of respiratory phase bins (N_P), N_KS, F, R, and initial respiratory phase at image acquisition (P_0). As a proof of concept, we implemented the proposed k-space reordering 4D-MRI technique on a T2-weighted fast spin echo MR sequence and tested it on a healthy volunteer. Results: The simulated 4D-MRI acquired from the XCAT phantom matched closely to the original XCAT images. Tumor motion trajectories measured from the simulated 4D-MRI matched well with input signals (D = 0.83 and 0.83 mm, and CC = 0.998 and 0.992 in the superior-inferior and anterior-posterior directions, respectively).
    The relationship between C_P and N_R was found to be best represented by an exponential function, C_P = 100(1 − e^(−0.18 N_R)), when N_S = 30 and N_P = 6. At a C_P value of 95%, the relative error in tumor volume was 0.66%, indicating that N_R at a C_P value of 95% (N_R,95%) is sufficient. It was found that N_R,95% is approximately linearly proportional to N_P (r = 0.99), and nearly independent of all other factors. The 4D-MRI images of the healthy volunteer clearly demonstrated respiratory motion in the diaphragm region with minimal motion-induced noise or aliasing. Conclusions: It is feasible to generate respiratory-correlated 4D-MRI by retrospectively reordering k-space based on respiratory phase. This new technology may lead to the next generation of 4D-MRI with high spatiotemporal resolution and optimal tumor contrast, holding great promise to improve motion management in radiotherapy of mobile cancers.
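
    The reported exponential completeness fit can be inverted to estimate how many repeated scans reach a given completeness:

```python
import math

def completeness(n_r):
    """The fit reported for N_S = 30 slices and N_P = 6 phase bins:
    C_P = 100 * (1 - exp(-0.18 * N_R))."""
    return 100.0 * (1.0 - math.exp(-0.18 * n_r))

def repeats_needed(target_cp=95.0):
    """Invert the fit: N_R = -ln(1 - C_P/100) / 0.18."""
    return math.log(1.0 - target_cp / 100.0) / -0.18
```

    With these numbers, about 17 repeated scans reach the 95% completeness the authors found sufficient.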

  8. Vector generator scan converter

    DOEpatents

    Moore, J.M.; Leighton, J.F.

    1988-02-05

    High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.

  9. Sharpening of Hierarchical Visual Feature Representations of Blurred Images.

    PubMed

    Abdelhack, Mohamed; Kamitani, Yukiyasu

    2018-01-01

    The robustness of the visual system lies in its ability to perceive degraded images. This is achieved through interacting bottom-up, recurrent, and top-down pathways that process the visual input in concordance with stored prior information. The interaction mechanism by which they integrate visual input and prior information is still enigmatic. We present a new approach using deep neural network (DNN) representation to reveal the effects of such integration on degraded visual inputs. We transformed measured human brain activity resulting from viewing blurred images to the hierarchical representation space derived from a feedforward DNN. Transformed representations were found to veer toward the original nonblurred image and away from the blurred stimulus image. This indicated deblurring or sharpening in the neural representation, and possibly in our perception. We anticipate these results will help unravel the interplay mechanism between bottom-up, recurrent, and top-down pathways, leading to more comprehensive models of vision.

  10. Emergence of spike correlations in periodically forced excitable systems

    NASA Astrophysics Data System (ADS)

    Reinoso, José A.; Torrent, M. C.; Masoller, Cristina

    2016-09-01

    In sensory neurons the presence of noise can facilitate the detection of weak information-carrying signals, which are encoded and transmitted via correlated sequences of spikes. Here we investigate the relative temporal order in spike sequences induced by a subthreshold periodic input in the presence of white Gaussian noise. To simulate the spikes, we use the FitzHugh-Nagumo model and to investigate the output sequence of interspike intervals (ISIs), we use the symbolic method of ordinal analysis. We find different types of relative temporal order in the form of preferred ordinal patterns that depend on both the strength of the noise and the period of the input signal. We also demonstrate a resonancelike behavior, as certain periods and noise levels enhance temporal ordering in the ISI sequence, maximizing the probability of the preferred patterns. Our findings could be relevant for understanding the mechanisms underlying temporal coding, by which single sensory neurons represent in spike sequences the information about weak periodic stimuli.
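
    Ordinal analysis itself is easy to sketch: each window of interspike intervals is reduced to the permutation that sorts it, and the preferred pattern is the most frequent one (a generic sketch, not the authors' code):

```python
from collections import Counter

def ordinal_patterns(isis, order=3):
    """Map each window of `order` interspike intervals to the permutation
    that sorts it, and count how often each pattern occurs."""
    counts = Counter()
    for i in range(len(isis) - order + 1):
        window = isis[i:i + order]
        pattern = tuple(sorted(range(order), key=lambda j: window[j]))
        counts[pattern] += 1
    return counts

def preferred_pattern(counts):
    """The most probable pattern signals relative temporal order in the ISIs."""
    total = sum(counts.values())
    pattern, n = counts.most_common(1)[0]
    return pattern, n / total
```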

  11. High-throughput automated microfluidic sample preparation for accurate microbial genomics

    PubMed Central

    Kim, Soohong; De Jonghe, Joachim; Kulesa, Anthony B.; Feldman, David; Vatanen, Tommi; Bhattacharyya, Roby P.; Berdy, Brittany; Gomez, James; Nolan, Jill; Epstein, Slava; Blainey, Paul C.

    2017-01-01

    Low-cost shotgun DNA sequencing is transforming the microbial sciences. Sequencing instruments are so effective that sample preparation is now the key limiting factor. Here, we introduce a microfluidic sample preparation platform that integrates the key steps in cells to sequence library sample preparation for up to 96 samples and reduces DNA input requirements 100-fold while maintaining or improving data quality. The general-purpose microarchitecture we demonstrate supports workflows with arbitrary numbers of reaction and clean-up or capture steps. By reducing the sample quantity requirements, we enabled low-input (∼10,000 cells) whole-genome shotgun (WGS) sequencing of Mycobacterium tuberculosis and soil micro-colonies with superior results. We also leveraged the enhanced throughput to sequence ∼400 clinical Pseudomonas aeruginosa libraries and demonstrate excellent single-nucleotide polymorphism detection performance that explained phenotypically observed antibiotic resistance. Fully-integrated lab-on-chip sample preparation overcomes technical barriers to enable broader deployment of genomics across many basic research and translational applications. PMID:28128213

  12. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  13. Image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo

    NASA Astrophysics Data System (ADS)

    Kohigashi, Satoru; Nakamae, Koji; Fujioka, Hiromu

    2005-04-01

    We develop an image-based computer-assisted diagnosis system for benign paroxysmal positional vertigo (BPPV) that consists of a balance control system simulator, a 3D eye movement simulator, and a method for extracting the nystagmus response directly from an eye movement image sequence. In the system, the causes and conditions of BPPV are estimated by searching the database for the record matching the nystagmus response for the observed eye image sequence of the patient with BPPV. The database includes the nystagmus responses for simulated eye movement sequences. The eye movement velocity is obtained using the balance control system simulator, which allows us to simulate BPPV under various conditions such as canalithiasis, cupulolithiasis, number of otoconia, otoconium size, and so on. The eye movement image sequence is then displayed on the CRT by the 3D eye movement simulator. The nystagmus responses are extracted from the image sequence by the proposed method and stored in the database. To enhance the diagnosis accuracy, the nystagmus response for a newly simulated sequence is matched with that for the observed sequence. From the matched simulation conditions, the causes and conditions of BPPV are estimated. We apply our image-based computer-assisted diagnosis system to two real eye movement image sequences from patients with BPPV to show its validity.

  14. Enhanced spatio-temporal alignment of plantar pressure image sequences using B-splines.

    PubMed

    Oliveira, Francisco P M; Tavares, João Manuel R S

    2013-03-01

    This article presents an enhanced methodology to align plantar pressure image sequences simultaneously in time and space. The temporal alignment of the sequences is accomplished using B-splines in the time modeling, and the spatial alignment can be attained using several geometric transformation models. The methodology was tested on a dataset of 156 real plantar pressure image sequences (3 sequences for each foot of the 26 subjects) that was acquired using a common commercial plate during barefoot walking. In the alignment of image sequences that were synthetically deformed both in time and space, an outstanding accuracy was achieved with the cubic B-splines. This accuracy was significantly better (p < 0.001) than the one obtained using the best solution proposed in our previous work. When applied to align real image sequences with unknown transformation involved, the alignment based on cubic B-splines also achieved superior results than our previous methodology (p < 0.001). The consequences of the temporal alignment on the dynamic center of pressure (COP) displacement was also assessed by computing the intraclass correlation coefficients (ICC) before and after the temporal alignment of the three image sequence trials of each foot of the associated subject at six time instants. The results showed that, generally, the ICCs related to the medio-lateral COP displacement were greater when the sequences were temporally aligned than the ICCs of the original sequences. Based on the experimental findings, one can conclude that the cubic B-splines are a remarkable solution for the temporal alignment of plantar pressure image sequences. These findings also show that the temporal alignment can increase the consistency of the COP displacement on related acquired plantar pressure image sequences.
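
    A toy version of B-spline time warping (uniform clamped cubic splines, with a single warp knot searched by brute force; the actual method optimizes many knots plus a spatial transform, and these names and numbers are illustrative only):

```python
import numpy as np

def cubic_bspline(ctrl, n):
    """Sample a clamped uniform cubic B-spline defined by `ctrl` at n points."""
    c = np.concatenate(([ctrl[0]] * 2, ctrl, [ctrl[-1]] * 2))
    n_seg = len(c) - 3
    out = np.empty(n)
    for k, s in enumerate(np.linspace(0, n_seg - 1e-9, n)):
        i, u = int(s), s - int(s)
        b = np.array([(1 - u) ** 3, 3 * u ** 3 - 6 * u ** 2 + 4,
                      -3 * u ** 3 + 3 * u ** 2 + 3 * u + 1, u ** 3]) / 6.0
        out[k] = b @ c[i:i + 4]
    return out

def align(ref, mov, knots=5):
    """Grid-search the middle warp knot; the B-spline warp g(t) maps the
    reference time axis into the moving sequence's time axis."""
    t = np.linspace(0, 1, len(ref))
    base = np.linspace(0, 1, knots)
    best = None
    for shift in np.linspace(-0.2, 0.2, 41):
        ctrl = base.copy()
        ctrl[knots // 2] += shift
        g = cubic_bspline(ctrl, len(ref))
        warped = np.interp(g, t, mov)     # resample at the warped times
        err = np.sum((warped - ref) ** 2)
        if best is None or err < best[0]:
            best = (err, shift)
    return best[1]
```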

  15. Versatile and Programmable DNA Logic Gates on Universal and Label-Free Homogeneous Electrochemical Platform.

    PubMed

    Ge, Lei; Wang, Wenxiao; Sun, Ximei; Hou, Ting; Li, Feng

    2016-10-04

    Herein, a novel universal and label-free homogeneous electrochemical platform is demonstrated, on which a complete set of DNA-based two-input Boolean logic gates (OR, NAND, AND, NOR, INHIBIT, IMPLICATION, XOR, and XNOR) is constructed by simply and rationally deploying the designed DNA polymerization/nicking machines without complicated sequence modulation. Single-stranded DNA is employed as the proof-of-concept target/input to initiate or prevent the DNA polymerization/nicking cyclic reactions on these DNA machines to synthesize numerous intact G-quadruplex sequences or binary G-quadruplex subunits as the output. The generated output strands then self-assemble into G-quadruplexes that cause a remarkable decrease in the diffusion current response of methylene blue and, thus, provide the amplified homogeneous electrochemical readout signal not only for the logic gate operations but also for the ultrasensitive detection of the target/input. This system represents the first example of homogeneous electrochemical logic operation. Importantly, the proposed homogeneous electrochemical logic gates possess the input/output homogeneity and share a constant output threshold value. Moreover, the modular design of DNA polymerization/nicking machines enables the adaptation of these homogeneous electrochemical logic gates to various input and output sequences. The results of this study demonstrate the versatility and universality of the label-free homogeneous electrochemical platform in the design of biomolecular logic gates and provide a potential platform for the further development of large-scale DNA-based biocomputing circuits and advanced biosensors for multiple molecular targets.
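
    The shared-threshold readout can be caricatured in a few lines (all numbers hypothetical): a logic-1 gate output means enough G-quadruplex forms to pull the methylene-blue diffusion current below the common threshold:

```python
# Two-input gate set from the abstract, as plain Boolean functions.
GATES = {
    "AND":  lambda a, b: a and b,
    "OR":   lambda a, b: a or b,
    "NAND": lambda a, b: not (a and b),
    "NOR":  lambda a, b: not (a or b),
    "XOR":  lambda a, b: a != b,
    "XNOR": lambda a, b: a == b,
    "INHIBIT":     lambda a, b: a and not b,
    "IMPLICATION": lambda a, b: (not a) or b,
}

FULL_CURRENT, THRESHOLD = 1.0, 0.5   # hypothetical normalized values

def readout(gate, a, b):
    """Logic 1 -> G-quadruplex synthesized -> diffusion current drops; every
    gate is read against the same output threshold."""
    current = FULL_CURRENT * (0.2 if GATES[gate](a, b) else 1.0)
    return 1 if current < THRESHOLD else 0
```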

  16. Optimized protocols for cardiac magnetic resonance imaging in patients with thoracic metallic implants.

    PubMed

    Olivieri, Laura J; Cross, Russell R; O'Brien, Kendall E; Ratnayaka, Kanishka; Hansen, Michael S

    2015-09-01

    Cardiac magnetic resonance (MR) imaging is a valuable tool in congenital heart disease; however patients frequently have metal devices in the chest from the treatment of their disease that complicate imaging. Methods are needed to improve imaging around metal implants near the heart. Basic sequence parameter manipulations have the potential to minimize artifact while limiting effects on image resolution and quality. Our objective was to design cine and static cardiac imaging sequences to minimize metal artifact while maintaining image quality. Using systematic variation of standard imaging parameters on a fluid-filled phantom containing commonly used metal cardiac devices, we developed optimized sequences for steady-state free precession (SSFP), gradient recalled echo (GRE) cine imaging, and turbo spin-echo (TSE) black-blood imaging. We imaged 17 consecutive patients undergoing routine cardiac MR with 25 metal implants of various origins using both standard and optimized imaging protocols for a given slice position. We rated images for quality and metal artifact size by measuring metal artifact in two orthogonal planes within the image. All metal artifacts were reduced with optimized imaging. The average metal artifact reduction for the optimized SSFP cine was 1.5 ± 1.8 mm, and for the optimized GRE cine the reduction was 4.6 ± 4.5 mm (P < 0.05). Quality ratings favored the optimized GRE cine. Similarly, the average metal artifact reduction for the optimized TSE images was 1.6 ± 1.7 mm (P < 0.05), and quality ratings favored the optimized TSE imaging. Imaging sequences tailored to minimize metal artifact are easily created by modifying basic sequence parameters, and images are superior to standard imaging sequences in both quality and artifact size. Specifically, for optimized cine imaging a GRE sequence should be used with settings that favor short echo time, i.e. flow compensation off, weak asymmetrical echo and a relatively high receiver bandwidth.
For static black-blood imaging, a TSE sequence should be used with fat saturation turned off and high receiver bandwidth.

  17. Effects of control inputs on the estimation of stability and control parameters of a light airplane

    NASA Technical Reports Server (NTRS)

    Cannaday, R. L.; Suit, W. T.

    1977-01-01

    The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of parameter estimates as it relates to inputs. These consistencies were compared by using the ensemble variance and estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs were inconclusive, but the sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than did rudder or ailerons individually. Also, square-wave inputs appeared to provide slightly improved consistency in the parameter estimates when compared to sine-wave inputs.

  18. In vivo Proton Electron Double Resonance Imaging of Mice with Fast Spin Echo Pulse Sequence

    PubMed Central

    Sun, Ziqi; Li, Haihong; Petryakov, Sergey; Samouilov, Alex; Zweier, Jay L.

    2011-01-01

    Purpose To develop and evaluate a 2D fast spin echo (FSE) pulse sequence for enhancing temporal resolution and reducing tissue heating for in vivo proton electron double resonance imaging (PEDRI) of mice. Materials and Methods A four-compartment phantom containing 2 mM TEMPONE was imaged at 20.1 mT using 2D FSE-PEDRI and regular gradient echo (GRE)-PEDRI pulse sequences. Control mice were infused with TEMPONE over ∼1 min followed by time-course imaging using the 2D FSE-PEDRI sequence at intervals of 10 – 30 s between image acquisitions. The average signal intensity from the time-course images was analyzed using a first-order kinetics model. Results Phantom experiments demonstrated that EPR power deposition can be greatly reduced using the FSE-PEDRI pulse sequence compared to the conventional gradient echo pulse sequence. High temporal resolution was achieved at ∼4 s per image acquisition using the FSE-PEDRI sequence with a good image SNR in the range of 233-266 in the phantom study. The TEMPONE half-life measured in vivo was ∼72 s. Conclusion Thus, the FSE-PEDRI pulse sequence enables fast in vivo functional imaging of free radical probes in small animals greatly reducing EPR irradiation time with decreased power deposition and provides increased temporal resolution. PMID:22147559
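
    The first-order kinetics analysis amounts to a log-linear fit; a small sketch with synthetic data (not the paper's measurements):

```python
import math

def half_life(times, intensities):
    """Fit ln(I) = ln(I0) - k*t by least squares; for first-order decay the
    half-life is ln(2) / k."""
    n = len(times)
    xs, ys = times, [math.log(i) for i in intensities]
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.log(2) / -slope
```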

  19. Abdominal MR imaging in children: motion compensation, sequence optimization, and protocol organization.

    PubMed

    Chavhan, Govind B; Babyn, Paul S; Vasanawala, Shreyas S

    2013-05-01

    Familiarity with basic sequence properties and their trade-offs is necessary for radiologists performing abdominal magnetic resonance (MR) imaging. Acquiring diagnostic-quality MR images in the pediatric abdomen is challenging because of motion, inability to breath-hold, varying patient size, and artifacts. Motion-compensation techniques (eg, respiratory gating, signal averaging, suppression of signal from moving tissue, swapping phase- and frequency-encoding directions, use of faster sequences with breath holding, parallel imaging, and radial k-space filling) can improve image quality. Each of these techniques is more suitable for use with certain sequences and acquisition planes and in specific situations and age groups. Different T1- and T2-weighted sequences work better in different age groups and with differing acquisition planes, and each has specific advantages and disadvantages. Dynamic imaging should be performed differently in younger children than in older children: in younger children, the sequence and the timing of dynamic phases need to be adjusted. Different sequences work better in smaller children than in older children because of differing breath-holding ability, breathing patterns, field of view, and use of sedation. Hence, specific protocols should be maintained for younger children and older children. Combining longer, higher-resolution sequences with faster, lower-resolution sequences helps acquire diagnostic-quality images in a reasonable time. © RSNA, 2013.

  20. Image Encryption Algorithm Based on Hyperchaotic Maps and Nucleotide Sequences Database

    PubMed Central

    2017-01-01

    Image encryption technology is one of the main means of ensuring the safety of image information. Exploiting the characteristics of chaos, such as randomness, regularity, ergodicity, and sensitivity to initial values, combined with the unique spatial conformation of DNA molecules and their unique information storage and processing ability, an efficient image encryption method based on chaos theory and a DNA sequence database is proposed. The scheme scrambles pixel locations with a chaotic sequence and transforms pixel gray values through a hyperchaotic map that links quaternary sequences to DNA sequences, together with logical transformations between DNA sequences. Bases are replaced under displacement rules by DNA coding over a number of iterations driven by an enhanced quaternary hyperchaotic sequence generated by the Chen chaotic system. Cipher feedback mode and chaos iteration are employed in the encryption process to enhance the confusion and diffusion properties of the algorithm. Theoretical analysis and experimental results show that the proposed scheme not only achieves excellent encryption performance but also effectively resists chosen-plaintext, statistical, and differential attacks. PMID:28392799
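    The two chaos-driven stages the abstract describes, location scrambling followed by gray-value diffusion, can be sketched in miniature. Here a logistic map stands in for the Chen hyperchaotic system and a plain byte-XOR keystream stands in for the DNA-coding rules; the map parameters and key `x0` are illustrative, not from the paper:

    ```python
    import numpy as np

    def logistic_stream(n, x0=0.3567, r=3.99):
        """Chaotic keystream from the logistic map x <- r*x*(1-x)."""
        xs = np.empty(n)
        x = x0
        for i in range(n):
            x = r * x * (1.0 - x)
            xs[i] = x
        return xs

    def encrypt(img, x0=0.3567):
        flat = img.ravel()
        chaos = logistic_stream(flat.size, x0)
        perm = np.argsort(chaos)                 # chaotic pixel-location scrambling
        key = (chaos * 255).astype(np.uint8)     # keystream for gray-value diffusion
        return flat[perm] ^ key

    def decrypt(cipher, x0=0.3567):
        chaos = logistic_stream(cipher.size, x0)
        perm = np.argsort(chaos)
        key = (chaos * 255).astype(np.uint8)
        flat = np.empty_like(cipher)
        flat[perm] = cipher ^ key                # undo diffusion, then unscramble
        return flat

    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
    print(np.array_equal(decrypt(encrypt(img)), img.ravel()))  # -> True
    ```

    The decrypted output is returned flattened; a real implementation would reshape to the image dimensions and derive the key material from the hyperchaotic system and DNA rules described above.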

  1. Quantitative analysis of image quality for acceptance and commissioning of an MRI simulator with a semiautomatic method.

    PubMed

    Chen, Xinyuan; Dai, Jianrong

    2018-05-01

    Magnetic Resonance Imaging (MRI) simulation differs from diagnostic MRI in purpose, technical requirements, and implementation. We propose a semiautomatic method of image acceptance and commissioning for the scanner, the radiofrequency (RF) coils, and the pulse sequences of an MRI simulator. The ACR MRI accreditation large phantom was used for image quality analysis with seven parameters. Standard ACR sequences with a split head coil were adopted to examine the scanner's basic performance. The performance of the simulation RF coils was measured and compared against different clinical diagnostic coils using the standard sequence. We then used simulation sequences with simulation coils to test image quality and the advanced performance of the scanner. Codes and procedures were developed for semiautomatic image quality analysis. When standard ACR sequences were used with a split head coil, image quality passed all ACR recommended criteria. Image intensity uniformity with a simulation RF coil decreased by about 34% compared with the eight-channel diagnostic head coil, while the other six image quality parameters were acceptable; uniformity could be improved to more than 85% by the scanner's built-in intensity calibration methods. In the simulation sequence tests, contrast resolution was sensitive to the FOV and matrix settings. The geometric distortion of simulation sequences such as T1-weighted and T2-weighted images was well controlled at the isocenter and 10 cm off-center, within a range of ±1% (2 mm). We developed a semiautomatic image quality analysis method for quantitative evaluation of images and commissioning of an MRI simulator. The baseline performance of the simulation RF coils and pulse sequences has been established for routine QA. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
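    Two of the quantitative checks mentioned above can be sketched in the spirit of the ACR large-phantom analysis: percent integral uniformity (PIU = 100 × (1 − (high − low)/(high + low))) and a simple two-region SNR. ROI handling is simplified to whole-array statistics and percentiles; the actual ACR procedure prescribes specific ROI sizes and placements:

    ```python
    import numpy as np

    def percent_integral_uniformity(roi):
        """ACR-style PIU from high/low signal levels inside a uniform ROI."""
        high, low = np.percentile(roi, 99), np.percentile(roi, 1)
        return 100.0 * (1.0 - (high - low) / (high + low))

    def snr(signal_roi, noise_roi):
        """Mean signal over background noise standard deviation."""
        return signal_roi.mean() / noise_roi.std()

    phantom = np.full((64, 64), 1000.0)            # perfectly uniform signal
    print(percent_integral_uniformity(phantom))    # -> 100.0
    ```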

  2. Comparison of magnetic resonance imaging sequences for depicting the subthalamic nucleus for deep brain stimulation.

    PubMed

    Nagahama, Hiroshi; Suzuki, Kengo; Shonai, Takaharu; Aratani, Kazuki; Sakurai, Yuuki; Nakamura, Manami; Sakata, Motomichi

    2015-01-01

    Electrodes are surgically implanted into the subthalamic nucleus (STN) of Parkinson's disease patients to provide deep brain stimulation. To ensure correct positioning, the anatomic location of the STN must be determined preoperatively. Magnetic resonance imaging has been used to pinpoint the location of the STN. To identify the optimal imaging sequence for identifying the STN, we compared images produced with T2 star-weighted angiography (SWAN), gradient echo T2*-weighted imaging, and fast spin echo T2-weighted imaging in 6 healthy volunteers. Our comparison involved measurement of the contrast-to-noise ratio (CNR) for the STN and substantia nigra and a radiologist's interpretation of the images. Of the sequences examined, the CNR and qualitative scores for STN visualization were significantly higher on SWAN images than on the other images (p < 0.01). The kappa value for SWAN images (0.74) was the highest of the three sequences for visualizing the STN. At present, SWAN is the sequence best suited for identifying the STN.
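    The CNR measure used in the comparison above is conventionally the absolute difference of the two structures' mean signals divided by the background noise standard deviation. A minimal sketch with synthetic ROIs (the signal levels are invented, purely for illustration):

    ```python
    import numpy as np

    def cnr(roi_a, roi_b, noise_roi):
        """Contrast-to-noise ratio between two regions of interest."""
        return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

    rng = np.random.default_rng(42)
    stn = rng.normal(120.0, 5.0, size=(10, 10))   # hypothetical STN signal
    sn = rng.normal(100.0, 5.0, size=(10, 10))    # hypothetical SN signal
    noise = rng.normal(0.0, 2.0, size=(10, 10))   # background noise ROI
    print(cnr(stn, sn, noise))
    ```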

  3. Infrared thermal facial image sequence registration analysis and verification

    NASA Astrophysics Data System (ADS)

    Chen, Chieh-Li; Jian, Bo-Lin

    2015-03-01

    To study the emotional responses of subjects to the International Affective Picture System (IAPS), infrared thermal facial image sequences are preprocessed for registration before further analysis, so that the variance caused by minor, irregular subject movements is reduced. Without affecting subject comfort or inducing harm, this study proposes an infrared thermal facial image sequence registration process that reduces the deviations caused by the subjects' unconscious head movements. A fixed reference image for registration is produced by localizing the centroid of the eye region and applying image translation and rotation. The thermal image sequence is then registered automatically using the proposed two-stage genetic algorithm. The deviation before and after registration is quantified with image quality indices. The results show that the proposed registration process localizes facial images accurately, which benefits the correlation analysis of psychological information related to the facial area.

  4. Robust Mapping of Incoherent Fiber-Optic Bundles

    NASA Technical Reports Server (NTRS)

    Roberts, Harry E.; Deason, Brent E.; DePlachett, Charles P.; Pilgrim, Robert A.; Sanford, Harold S.

    2007-01-01

    A method and apparatus for mapping between the positions of fibers at opposite ends of incoherent fiber-optic bundles have been invented to enable the use of such bundles to transmit images in visible or infrared light. The method is robust in the sense that it provides useful mapping even for a bundle that contains thousands of narrow, irregularly packed fibers, some of which may be defective. In a coherent fiber-optic bundle, the input and output ends of each fiber lie at identical positions in the input and output planes; therefore, the bundle can be used to transmit images without further modification. Unfortunately, the fabrication of coherent fiber-optic bundles is too labor-intensive and expensive for many applications. An incoherent fiber-optic bundle can be fabricated more easily and at lower cost, but it produces a scrambled image because the position of the end of each fiber in the input plane is generally different from the end of the same fiber in the output plane. However, the image transmitted by an incoherent fiber-optic bundle can be unscrambled (or, from a different perspective, decoded) by digital processing of the output image if the mapping between the input and output fiber-end positions is known. Thus, the present invention enables the use of relatively inexpensive fiber-optic bundles to transmit images.
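    The unscrambling step described above reduces to inverting a permutation once the input-to-output fiber mapping is known. A minimal sketch, with a random permutation standing in for a measured fiber map (one fiber per pixel, no defective fibers):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_fibers = 64 * 64
    mapping = rng.permutation(n_fibers)    # output position of each input fiber

    def transmit(image):
        """Simulate an incoherent bundle: input fiber i lands at mapping[i]."""
        out = np.empty_like(image.ravel())
        out[mapping] = image.ravel()
        return out.reshape(image.shape)

    def descramble(scrambled):
        """Recover the input image by reading output positions back in order."""
        return scrambled.ravel()[mapping].reshape(scrambled.shape)

    img = rng.integers(0, 256, size=(64, 64))
    print(np.array_equal(descramble(transmit(img)), img))  # -> True
    ```

    In practice the mapping itself is what must be measured robustly (e.g., by illuminating known input patterns), and defective fibers appear as holes to be interpolated over.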

  5. CLASSIFYING MEDICAL IMAGES USING MORPHOLOGICAL APPEARANCE MANIFOLDS.

    PubMed

    Varol, Erdem; Gaonkar, Bilwaj; Davatzikos, Christos

    2013-12-31

    Input features for medical image classification algorithms are extracted from raw images using a series of preprocessing steps. One common preprocessing step in computational neuroanatomy and functional brain mapping is the nonlinear registration of raw images to a common template space. Typically, the registration methods used are parametric, and their output varies greatly with changes in parameters. Most previously reported results perform registration using a fixed parameter setting and use the output as input to the subsequent classification step. The variation in registration results due to the choice of parameters thus translates into variation in the performance of classifiers that depend on the registration step for input. Analogous issues have been investigated in the computer vision literature, where image appearance varies with pose and illumination, making classification vulnerable to these confounding parameters. The proposed methodology addresses this issue by sampling image appearances as registration parameters vary, and shows that better classification accuracies can be obtained this way compared to the conventional approach.

  6. A Fly-Inspired Mushroom Bodies Model for Sensory-Motor Control Through Sequence and Subsequence Learning.

    PubMed

    Arena, Paolo; Calí, Marco; Patané, Luca; Portera, Agnese; Strauss, Roland

    2016-09-01

    Classification and sequence learning are relevant capabilities used by living beings to extract complex information from the environment for behavioral control. The insect world is full of examples where the presentation time of specific stimuli shapes the behavioral response. On the basis of previously developed neural models inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is presented here from the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed by resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons modeling the insect Mushroom Bodies neuropile. The network devoted to context formation is able to reconstruct the learned sequence and also to trace the subsequences present in the provided input. A sensitivity analysis to parameter variation and noise is reported. Experiments on a roving robot demonstrate the capabilities of the architecture used as a neural controller.

  7. Comparison of the quality of different magnetic resonance image sequences of multiple myeloma.

    PubMed

    Sun, Zhao-yong; Zhang, Hai-bo; Li, Shuo; Wang, Yun; Xue, Hua-dan; Jin, Zheng-yu

    2015-02-01

    To compare the image quality of T1WI fat-phase, T1WI water-phase, short time inversion recovery (STIR), and diffusion-weighted imaging (DWI) sequences in the evaluation of multiple myeloma (MM). Twenty MM patients were enrolled in this study. All patients underwent scanning with coronal T1WI fat-phase, coronal T1WI water-phase, coronal STIR, and axial DWI sequences. The image quality of the four sequences was evaluated. Each image was divided into seven sections (head and neck, chest, abdomen, pelvis, thigh, leg, and foot), and the signal-to-noise ratio (SNR) was measured at seven skeletal segments (skull, spine, pelvis, humerus, femur, tibia and fibula, and ribs). In addition, 20 active MM lesions were selected, and the contrast-to-noise ratio (CNR) of each scan sequence was calculated. The average image quality scores of the T1WI fat-phase, T1WI water-phase, STIR, and DWI sequences were 4.19 ± 0.70, 4.16 ± 0.73, 3.89 ± 0.70, and 3.76 ± 0.68, respectively. Image quality of the T1 fat-phase and T1 water-phase sequences was significantly higher than that of STIR (P=0.000 and P=0.001) and DWI (both P=0.000); however, there was no significant difference between the T1 fat and water phases (P=0.723) or between STIR and DWI (P=0.167). The SNR of the T1WI fat phase was significantly higher than those of the other three sequences (all P=0.000), with no significant difference among the other three sequences (all P>0.05). Although the CNR of the DWI sequence was slightly higher than those of the other three sequences, the differences were not significant (all P>0.05). Imaging at T1WI fat phase, T1WI water phase, STIR, and DWI each has certain advantages, and the sequences should be combined in the diagnosis of MM.

  8. MISTICA: Minimum Spanning Tree-based Coarse Image Alignment for Microscopy Image Sequences

    PubMed Central

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T.

    2016-01-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to re-order the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries. PMID:26415193
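    The core MISTICA idea can be sketched in a few lines: build a frame-dissimilarity graph, take its minimum spanning tree, and register each frame to its tree parent starting from an automatically chosen anchor, so poor-quality frames end up demoted to leaf branches. The dissimilarity measure (mean absolute difference) and anchor rule (minimum total dissimilarity) below are simplified stand-ins for the paper's actual choices:

    ```python
    import numpy as np

    def mst_parents(dissim, anchor):
        """Prim's algorithm; returns parent[i] for each node (anchor's is -1)."""
        n = dissim.shape[0]
        in_tree = {anchor}
        parent = [-1] * n
        while len(in_tree) < n:
            # Cheapest edge from the current tree to an unvisited node.
            _, i, j = min((dissim[i, j], i, j) for i in in_tree
                          for j in range(n) if j not in in_tree)
            parent[j] = i
            in_tree.add(j)
        return parent

    rng = np.random.default_rng(0)
    frames = [rng.normal(size=(16, 16)) for _ in range(5)]
    n = len(frames)
    dissim = np.array([[np.abs(frames[i] - frames[j]).mean() for j in range(n)]
                       for i in range(n)])
    anchor = int(np.argmin(dissim.sum(axis=1)))  # automatic anchor selection
    parent = mst_parents(dissim, anchor)
    print(sum(p != -1 for p in parent))  # -> 4 (n-1 edges for 5 frames)
    ```

    Pairwise coarse registration would then proceed along each frame's path to the anchor, accumulating the translation/rotation estimates edge by edge.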

  9. MISTICA: Minimum Spanning Tree-Based Coarse Image Alignment for Microscopy Image Sequences.

    PubMed

    Ray, Nilanjan; McArdle, Sara; Ley, Klaus; Acton, Scott T

    2016-11-01

    Registration of an in vivo microscopy image sequence is necessary in many significant studies, including studies of atherosclerosis in large arteries and the heart. Significant cardiac and respiratory motion of the living subject, occasional spells of focal plane changes, drift in the field of view, and long image sequences are the principal roadblocks. The first step in such a registration process is the removal of translational and rotational motion. Next, a deformable registration can be performed. The focus of our study here is to remove the translation and/or rigid body motion that we refer to here as coarse alignment. The existing techniques for coarse alignment are unable to accommodate long sequences often consisting of periods of poor quality images (as quantified by a suitable perceptual measure). Many existing methods require the user to select an anchor image to which other images are registered. We propose a novel method for coarse image sequence alignment based on minimum weighted spanning trees (MISTICA) that overcomes these difficulties. The principal idea behind MISTICA is to reorder the images in shorter sequences, to demote nonconforming or poor quality images in the registration process, and to mitigate the error propagation. The anchor image is selected automatically making MISTICA completely automated. MISTICA is computationally efficient. It has a single tuning parameter that determines graph width, which can also be eliminated by way of additional computation. MISTICA outperforms existing alignment methods when applied to microscopy image sequences of mouse arteries.

  10. Diffusion-weighted imaging of the liver with multiple b values: effect of diffusion gradient polarity and breathing acquisition on image quality and intravoxel incoherent motion parameters--a pilot study.

    PubMed

    Dyvorne, Hadrien A; Galea, Nicola; Nevers, Thomas; Fiel, M Isabel; Carpenter, David; Wong, Edmund; Orton, Matthew; de Oliveira, Andre; Feiweier, Thorsten; Vachon, Marie-Louise; Babb, James S; Taouli, Bachir

    2013-03-01

    To optimize intravoxel incoherent motion (IVIM) diffusion-weighted (DW) imaging by estimating the effects of diffusion gradient polarity and breathing acquisition scheme on image quality, signal-to-noise ratio (SNR), IVIM parameters, and parameter reproducibility, and to investigate the potential of IVIM in the detection of hepatic fibrosis. In this institutional review board-approved prospective study, 20 subjects (seven healthy volunteers, 13 patients with hepatitis C virus infection; 14 men, six women; mean age, 46 years) underwent IVIM DW imaging with four sequences: (a) respiratory-triggered (RT) bipolar (BP), (b) RT monopolar (MP), (c) free-breathing (FB) BP, and (d) FB MP. Image quality scores were assessed for all sequences. A biexponential analysis with the Bayesian method yielded the true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (PF) in liver parenchyma. Mixed-model analysis of variance was used to compare image quality, SNR, IVIM parameters, and interexamination variability between the four sequences, as well as the ability to differentiate areas of liver fibrosis from normal liver tissue. Image quality with RT sequences was superior to that with FB acquisitions (P = .02) and was not affected by gradient polarity. SNR did not vary significantly between sequences. IVIM parameter reproducibility was moderate to excellent for PF and D, while D* was less reproducible. PF and D were both significantly lower in patients with hepatitis C virus than in healthy volunteers with the RT BP sequence (PF, 13.5% ± 5.3 [standard deviation] in volunteers vs 9.2% ± 2.5 in patients, P = .038; D, [1.16 ± 0.07] × 10(-3) mm(2)/sec in volunteers vs [1.03 ± 0.1] × 10(-3) mm(2)/sec in patients, P = .006). The RT BP DW imaging sequence gave the best results in terms of image quality, reproducibility, and ability to discriminate between healthy and fibrotic liver with biexponential fitting.
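    The biexponential IVIM model underlying the analysis is S(b) = S0·[PF·exp(−b·D*) + (1 − PF)·exp(−b·D)]. The study used a Bayesian fit; the sketch below instead uses the common, simpler segmented approach (fit D from high b-values where the perfusion term is negligible, then recover PF from the intercept), with invented b-values and noiseless synthetic signal:

    ```python
    import numpy as np

    def ivim_signal(b, s0, pf, d, d_star):
        """Biexponential IVIM signal model."""
        return s0 * (pf * np.exp(-b * d_star) + (1 - pf) * np.exp(-b * d))

    def segmented_fit(b, s, s0, b_cut=200.0):
        """Segmented IVIM fit: D from high-b log-linear fit, PF from intercept."""
        hi = b >= b_cut                     # pseudodiffusion term ~0 here
        slope, intercept = np.polyfit(b[hi], np.log(s[hi]), 1)
        d = -slope
        pf = 1.0 - np.exp(intercept) / s0
        return pf, d

    b = np.array([0, 15, 30, 45, 60, 100, 200, 400, 600, 800.0])  # s/mm^2
    s = ivim_signal(b, 1000.0, 0.10, 1.1e-3, 80e-3)
    pf, d = segmented_fit(b, s, 1000.0)
    print(pf, d)
    ```

    With noiseless data the segmented fit recovers PF ≈ 0.10 and D ≈ 1.1 × 10⁻³ mm²/sec; with real noisy data, D* additionally requires a nonlinear or Bayesian fit, consistent with its poorer reproducibility noted above.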

  11. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    PubMed

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals and receives an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of the MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we showed by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP-state MSNs form cell assemblies that fire together coherently in sequences on long, behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics are still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, depending on the cortical stimulation. We then consider what happens when the excitatory input varies as it would when the animal is engaged in behavior, and investigate how sudden switches in excitation interact with network-generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior occurs. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to the time elapsed from task events. Thus the random network can generate a large diversity of temporally evolving, stimulus-dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to generating such slow, coherent, task-dependent responses, which could be utilized by the animal in behavior.

  12. Input Dependent Cell Assembly Dynamics in a Model of the Striatal Medium Spiny Neuron Network

    PubMed Central

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals and receives an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of the MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we showed by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP-state MSNs form cell assemblies that fire together coherently in sequences on long, behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics are still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, depending on the cortical stimulation. We then consider what happens when the excitatory input varies as it would when the animal is engaged in behavior, and investigate how sudden switches in excitation interact with network-generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior occurs. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to the time elapsed from task events. Thus the random network can generate a large diversity of temporally evolving, stimulus-dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to generating such slow, coherent, task-dependent responses, which could be utilized by the animal in behavior.
PMID:22438838

  13. Role of the site of synaptic competition and the balance of learning forces for Hebbian encoding of probabilistic Markov sequences

    PubMed Central

    Bouchard, Kristofer E.; Ganguli, Surya; Brainard, Michael S.

    2015-01-01

    The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions. PMID:26257637
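    The presynaptic-competition result can be illustrated with a toy three-state Markov chain: a Hebbian correlation update followed by renormalization of the active presynaptic unit's outgoing weights drives each weight row toward the chain's forward transition probabilities. The chain, learning rate, and rate-based normalization below are illustrative simplifications of the paper's spiking-neuron analysis:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    P = np.array([[0.0, 0.7, 0.3],
                  [0.5, 0.0, 0.5],
                  [0.2, 0.8, 0.0]])        # true forward transition probabilities

    # Generate a probabilistic Markov state sequence from P.
    states = [0]
    for _ in range(5000):
        states.append(rng.choice(3, p=P[states[-1]]))

    # Hebbian correlation update with presynaptic competition: potentiate the
    # (pre, post) synapse, then renormalize the presynaptic unit's outgoing
    # weights, so each row converges toward the forward probabilities.
    w = np.full((3, 3), 1.0 / 3.0)
    eta = 0.01
    for pre, post in zip(states[:-1], states[1:]):
        w[pre, post] += eta
        w[pre] /= w[pre].sum()

    print(np.round(w, 2))   # approximately the rows of P
    ```

    Normalizing over a neuron's incoming weights instead (postsynaptic competition) would analogously yield the backward probabilities, mirroring the dependence on the site of competition described above.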

  14. GOLabeler: Improving Sequence-based Large-scale Protein Function Prediction by Learning to Rank.

    PubMed

    You, Ronghui; Zhang, Zihan; Xiong, Yi; Sun, Fengzhu; Mamitsuka, Hiroshi; Zhu, Shanfeng

    2018-03-07

    Gene Ontology (GO) has been widely used to annotate protein functions and understand their biological roles. Currently only <1% of the more than 70 million proteins in UniProtKB have experimental GO annotations, implying a strong need for automated function prediction (AFP), which is a hard multilabel classification problem because a single protein can have many GO terms. Most of these proteins have only sequences as input information, indicating the importance of sequence-based AFP (SAFP: sequences are the only input). Furthermore, homology-based SAFP tools are competitive in AFP competitions, yet they do not necessarily work well for so-called difficult proteins, which have <60% sequence identity to already-annotated proteins. The vital and challenging problem now is how to develop a SAFP method, particularly for difficult proteins. The key is to extract not only homology information but also diverse, deep-rooted information/evidence from sequence inputs and integrate it into a predictor in an effective and efficient manner. We propose GOLabeler, which integrates five component classifiers trained from different features, including GO term frequency, sequence alignment, amino acid trigrams, domains and motifs, and biophysical properties, in the framework of learning to rank (LTR), a machine learning paradigm that is especially powerful for multilabel classification. Empirical results from examining GOLabeler extensively on large-scale datasets revealed numerous favorable aspects, including a significant performance advantage over state-of-the-art AFP methods. http://datamining-iip.fudan.edu.cn/golabeler. zhusf@fudan.edu.cn. Supplementary data are available at Bioinformatics online.

  15. Techniques for automatic large scale change analysis of temporal multispectral imagery

    NASA Astrophysics Data System (ADS)

    Mercovich, Ryan A.

    Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. 
By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.

  16. Global rotational motion and displacement estimation of digital image stabilization based on the oblique vectors matching algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Fei; Hui, Mei; Zhao, Yue-jin

    2009-08-01

    A block-matching algorithm based on the motion vectors of correlated pixels in the oblique direction is presented for digital image stabilization. Digital image stabilization is a new generation of image stabilization technique that obtains the relative motion among frames of a dynamic image sequence by digital image processing. In this method, the matching parameters are calculated from vectors projected in the oblique direction; because these vectors simultaneously carry the transverse and vertical motion components within an image block, better matching information is obtained by performing the correlation in the oblique direction. An iterative weighted least-squares method is used to eliminate block-matching error, with weights related to each pixel's rotational angle. The center of rotation and the global motion estimate of the shaking image are obtained by weighted least squares from the estimates of blocks chosen evenly across the image, and the image is then stabilized using them. The algorithm runs in real time by applying simulated annealing to the block-matching search. An image processing system based on a DSP was used to test the algorithm; the core processor is a TI TMS320C6416, and a CCD camera with a resolution of 720×576 pixels provided the input video signal. Experimental results show that the algorithm performs on the real-time processing system with accurate matching precision.
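    The block-matching core can be sketched with an exhaustive translation search: slide a reference block over a search window in the next frame and keep the offset with the smallest sum of absolute differences (SAD). The oblique-vector projection and weighted least-squares rotation refinement described above are omitted; block position, window radius, and frame sizes are illustrative:

    ```python
    import numpy as np

    def match_block(ref_block, frame, top, left, radius=4):
        """Return the (dy, dx) offset minimizing SAD within the search window."""
        h, w = ref_block.shape
        best_offset, best_sad = None, np.inf
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                cand = frame[top + dy: top + dy + h, left + dx: left + dx + w]
                sad = np.abs(cand.astype(int) - ref_block.astype(int)).sum()
                if sad < best_sad:
                    best_offset, best_sad = (dy, dx), sad
        return best_offset

    rng = np.random.default_rng(3)
    frame0 = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    frame1 = np.roll(frame0, shift=(2, -3), axis=(0, 1))   # simulated shake
    print(match_block(frame0[20:36, 20:36], frame1, 20, 20))  # -> (2, -3)
    ```

    A full stabilizer would run this for many blocks spread over the frame and feed the per-block vectors into the weighted least-squares estimate of rotation center and global motion.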

  17. Novel view synthesis by interpolation over sparse examples

    NASA Astrophysics Data System (ADS)

    Liang, Bodong; Chung, Ronald C.

    2006-01-01

Novel view synthesis (NVS) is an important problem in image rendering. It involves synthesizing an image of a scene at any specified (novel) viewpoint, given some images of the scene at a few sample viewpoints. The general understanding is that the solution should bypass explicit 3-D reconstruction of the scene. As such, the problem has a natural tie to interpolation, even though mainstream efforts on the problem have adopted other formulations. Interpolation is about finding the output of a function f(x) for any specified input x, given a few input-output pairs {(xi,fi):i=1,2,3,...,n} of the function. If the input x is the viewpoint and f(x) is the image, the interpolation problem becomes exactly NVS. We treat the NVS problem using the interpolation formulation. In particular, we adopt the example-based interpolation (EBI) mechanism-an established mechanism for interpolating or learning functions from examples. EBI has all the desirable properties of a good interpolation: all given input-output examples are satisfied exactly, and the interpolation is smooth with minimum oscillations between the examples. We point out that EBI, however, has difficulty interpolating certain classes of functions, including the image function in the NVS problem. We propose an extension of the mechanism to overcome this limitation. We also present how the extended interpolation mechanism can be used to synthesize images at novel viewpoints. Real image results show that the mechanism has promising performance, even with very few example images.
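The defining property claimed for EBI, that every given input-output example is satisfied exactly, can be illustrated with any exact-fit interpolator. A hypothetical 1D sketch using Lagrange polynomial interpolation (a generic exact interpolant, not the paper's EBI mechanism):

```python
def lagrange_interpolate(samples, x):
    """Evaluate the unique degree-(n-1) polynomial passing exactly
    through the given (x_i, f_i) examples at a new input x."""
    total = 0.0
    for i, (xi, fi) in enumerate(samples):
        term = fi
        for j, (xj, _) in enumerate(samples):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.0)]
# The interpolant reproduces every given example exactly.
assert all(abs(lagrange_interpolate(examples, xi) - fi) < 1e-12
           for xi, fi in examples)
```

In NVS the inputs would be viewpoints and the outputs whole images, so the interpolant acts per pixel over a much higher-dimensional space.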

  18. Secret shared multiple-image encryption based on row scanning compressive ghost imaging and phase retrieval in the Fresnel domain

    NASA Astrophysics Data System (ADS)

    Li, Xianye; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Yin, Yongkai; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2017-09-01

    A multiple-image encryption method is proposed that is based on row scanning compressive ghost imaging, (t, n) threshold secret sharing, and phase retrieval in the Fresnel domain. In the encryption process, after wavelet transform and Arnold transform of the target image, the ciphertext matrix can be first detected using a bucket detector. Based on a (t, n) threshold secret sharing algorithm, the measurement key used in the row scanning compressive ghost imaging can be decomposed and shared into two pairs of sub-keys, which are then reconstructed using two phase-only mask (POM) keys with fixed pixel values, placed in the input plane and transform plane 2 of the phase retrieval scheme, respectively; and the other POM key in the transform plane 1 can be generated and updated by the iterative encoding of each plaintext image. In each iteration, the target image acts as the input amplitude constraint in the input plane. During decryption, each plaintext image possessing all the correct keys can be successfully decrypted by measurement key regeneration, compression algorithm reconstruction, inverse wavelet transformation, and Fresnel transformation. Theoretical analysis and numerical simulations both verify the feasibility of the proposed method.
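The (t, n) threshold sharing of the measurement key can be illustrated with Shamir's classic scheme, a standard instance of (t, n) secret sharing (the prime field, parameters, and function names below are illustrative, not taken from the paper):

```python
import random

P = 2**127 - 1  # a Mersenne prime, large enough for this demo field

def make_shares(secret, t, n, seed=0):
    """Split `secret` into n shares; any t of them reconstruct it.
    Shamir's scheme: a random degree t-1 polynomial over GF(P)."""
    rng = random.Random(seed)
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):       # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover_secret(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Fewer than t shares reveal nothing about the secret, which is what lets the measurement key be distributed among participants.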

  19. Automatic Reconstruction of Spacecraft 3D Shape from Imagery

    NASA Astrophysics Data System (ADS)

    Poelman, C.; Radtke, R.; Voorhees, H.

    We describe a system that computes the three-dimensional (3D) shape of a spacecraft from a sequence of uncalibrated, two-dimensional images. While the mathematics of multi-view geometry is well understood, building a system that accurately recovers 3D shape from real imagery remains an art. A novel aspect of our approach is the combination of algorithms from computer vision, photogrammetry, and computer graphics. We demonstrate our system by computing spacecraft models from imagery taken by the Air Force Research Laboratory's XSS-10 satellite and DARPA's Orbital Express satellite. Using feature tie points (each identified in two or more images), we compute the relative motion of each frame and the 3D location of each feature using iterative linear factorization followed by non-linear bundle adjustment. The "point cloud" that results from this traditional shape-from-motion approach is typically too sparse to generate a detailed 3D model. Therefore, we use the computed motion solution as input to a volumetric silhouette-carving algorithm, which constructs a solid 3D model based on viewpoint consistency with the image frames. The resulting voxel model is then converted to a facet-based surface representation and is texture-mapped, yielding realistic images from arbitrary viewpoints. We also illustrate other applications of the algorithm, including 3D mensuration and stereoscopic 3D movie generation.

  20. Non-Cartesian Balanced SSFP Pulse Sequences for Real-Time Cardiac MRI

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2015-01-01

    Purpose To develop a new spiral-in/out balanced steady-state free precession (bSSFP) pulse sequence for real-time cardiac MRI and compare it with radial and spiral-out techniques. Methods Non-Cartesian sampling strategies are efficient and robust to motion and thus have important advantages for real-time bSSFP cine imaging. This study describes a new symmetric spiral-in/out sequence with intrinsic gradient moment compensation and SSFP refocusing at TE=TR/2. In-vivo real-time cardiac imaging studies were performed to compare radial, spiral-out, and spiral-in/out bSSFP pulse sequences. Furthermore, phase-based fat-water separation taking advantage of the refocusing mechanism of the spiral-in/out bSSFP sequence was also studied. Results The image quality of the spiral-out and spiral-in/out bSSFP sequences was improved with off-resonance and k-space trajectory correction. The spiral-in/out bSSFP sequence had the highest SNR, CNR, and image quality ratings, with spiral-out bSSFP sequence second in each category and the radial bSSFP sequence third. The spiral-in/out bSSFP sequence provides separated fat and water images with no additional scan time. Conclusions In this work a new spiral-in/out bSSFP sequence was developed and tested. The superiority of spiral bSSFP sequences over the radial bSSFP sequence in terms of SNR and reduced artifacts was demonstrated in real-time MRI of cardiac function without image acceleration. PMID:25960254

  1. Robust temporal alignment of multimodal cardiac sequences

    NASA Astrophysics Data System (ADS)

    Perissinotto, Andrea; Queirós, Sandro; Morais, Pedro; Baptista, Maria J.; Monaghan, Mark; Rodrigues, Nuno F.; D'hooge, Jan; Vilaça, João. L.; Barbosa, Daniel

    2015-03-01

Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal resembles the left-ventricular (LV) volume curve over time, whose variation indicates different temporal landmarks of the cardiac cycle. We then temporally align these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, with a relative error of 1.6 +/- 1.9% and 4.0 +/- 4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of cardiac events between MRI and US sequences, making it possible to temporally align multimodal cardiac imaging sequences. Overall, a generic, fast, and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be used straightforwardly for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
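The DTW step used to synchronize the two surrogate signals can be sketched with the textbook dynamic-programming formulation (absolute-difference cost; the paper's exact cost function and path constraints are not specified here):

```python
def dtw_align(a, b):
    """Dynamic Time Warping of two 1D surrogate signals.
    Returns total cost and the warping path as (i, j) index pairs."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack from (n, m) to recover the frame-to-frame alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min((D[i - 1][j - 1], i - 1, j - 1),
                   (D[i - 1][j],     i - 1, j),
                   (D[i][j - 1],     i,     j - 1))
        _, i, j = step
    return D[n][m], path[::-1]
```

The returned path maps each MRI frame index to one or more US frame indices, which is exactly the synchronization used downstream.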

  2. Estimating atmospheric parameters and reducing noise for multispectral imaging

    DOEpatents

    Conger, James Lynn

    2014-02-25

    A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
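Assuming the standard linear at-sensor model L_obs = L_atm + T · L_surf (the patent's two estimation phases are more involved than this), the per-band correction step that produces the "corrected" image might look like:

```python
def correct_band(observed, radiance, transmittance):
    """Invert the at-sensor model L_obs = L_atm + T * L_surf to
    estimate surface radiance for one spectral band.
    `observed` is a flat list of pixel values for that band."""
    return [(pix - radiance) / transmittance for pix in observed]
```

Each spectral band would be corrected with its own estimated radiance and transmittance, yielding the surface multispectral image fed to the second (noise-removal) phase.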

  3. The determination of high-resolution spatio-temporal glacier motion fields from time-lapse sequences

    NASA Astrophysics Data System (ADS)

    Schwalbe, Ellen; Maas, Hans-Gerd

    2017-12-01

This paper presents a comprehensive method for the determination of glacier surface motion vector fields at high spatial and temporal resolution. These vector fields can be derived from monocular terrestrial camera image sequences and are a valuable data source for glaciological analysis of the motion behaviour of glaciers. The measurement concepts for the acquisition of image sequences are presented, and an automated monoscopic image sequence processing chain is developed. Motion vector fields can be derived with high precision by applying automatic subpixel-accuracy image matching techniques on grey value patterns in the image sequences. Well-established matching techniques have been adapted to the special characteristics of the glacier data in order to achieve high reliability in automatic image sequence processing, including the handling of moving shadows as well as motion effects induced by small instabilities in the camera set-up. Suitable geo-referencing techniques were developed to transform image measurements into a reference coordinate system. The result of monoscopic image sequence analysis is a dense raster of glacier surface point trajectories for each image sequence. Each translation vector component in these trajectories can be determined with an accuracy of a few centimetres for points at a distance of several kilometres from the camera. Extensive practical validation experiments have shown that motion vector and trajectory fields derived from monocular image sequences can be used for the determination of high-resolution velocity fields of glaciers, including the analysis of tidal effects on glacier movement, the investigation of a glacier's motion behaviour during calving events, the determination of the position and migration of the grounding line and the detection of subglacial channels during glacier lake outburst floods.
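The grey-value matching step rests on normalized cross-correlation (NCC). A minimal 1D, integer-pixel sketch (real use is 2D on image patches, with subpixel refinement such as fitting a parabola to the correlation peak, which is omitted here):

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = math.sqrt(sum((p - mp) ** 2 for p in patch))
    dt = math.sqrt(sum((t - mt) ** 2 for t in template))
    return num / (dp * dt) if dp and dt else 0.0

def best_shift(signal, template):
    """Integer-pixel displacement of `template` within 1D `signal`
    (in 2D, rows of the search window would be handled analogously)."""
    w = len(template)
    scores = [ncc(signal[s:s + w], template)
              for s in range(len(signal) - w + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

Tracking the same surface pattern from frame to frame with such a matcher yields the point trajectories described above.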

  4. [The Role of Imaging in Central Nervous System Infections].

    PubMed

    Yokota, Hajime; Tazoe, Jun; Yamada, Kei

    2015-07-01

Many infections invade the central nervous system. Magnetic resonance imaging (MRI) is the main tool used to evaluate infectious lesions of the central nervous system. The useful MRI sequences depend on the lesion location: intra-axial, extra-axial, or spinal cord. For intra-axial lesions, besides the fundamental sequences, including T1-weighted images, T2-weighted images, and fluid-attenuated inversion recovery (FLAIR) images, advanced sequences such as diffusion-weighted imaging, diffusion tensor imaging, susceptibility-weighted imaging, and MR spectroscopy can be applied. They are occasionally the determinants of a quick and correct diagnosis. For extra-axial lesions, understanding the differences among 2D conventional T1-weighted images, 2D fat-saturated T1-weighted images, 3D spin-echo sequences, and 3D gradient-echo sequences after the administration of gadolinium is required to avoid misinterpretation. FLAIR plus gadolinium is a useful tool for revealing abnormal enhancement on the brain surface. For the spinal cord, the available sequences are limited; evaluating the distribution and time course of spinal cord lesions is essential for correct diagnosis. We summarize the role of imaging in central nervous system infections and highlight pitfalls, key points, and the latest information for clinical practice.

  5. A Hybrid Semi-Digital Transimpedance Amplifier With Noise Cancellation Technique for Nanopore-Based DNA Sequencing.

    PubMed

    Hsu, Chung-Lun; Jiang, Haowei; Venkatesh, A G; Hall, Drew A

    2015-10-01

Over the past two decades, nanopores have been a promising technology for next generation deoxyribonucleic acid (DNA) sequencing. Here, we present a hybrid semi-digital transimpedance amplifier (HSD-TIA) to sense the minute current signatures introduced by single-stranded DNA (ssDNA) translocating through a nanopore, while discharging the baseline current using a semi-digital feedback loop. The amplifier achieves fast settling by adaptively tuning a DC compensation current when a step input is detected. A noise cancellation technique reduces the total input-referred current noise caused by the parasitic input capacitance. Measurement results show the performance of the amplifier: 31.6 MΩ mid-band gain, 950 kHz bandwidth, and 8.5 fA/√Hz input-referred current noise, a 2× noise reduction due to the noise cancellation technique. The settling response is demonstrated by observing the insertion of a protein nanopore in a lipid bilayer. Using the nanopore, the HSD-TIA was able to measure ssDNA translocation events.

  6. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

Based on our previous real-time 3D holographic display prototype developed last year, we developed a new concept for an auto-stereoscopic multiview (64 views), wide-angle (90°), full-color 3D display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100, 0.7") at 24,000 fps and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezo-electric transducer, which generates a variable standing acoustic wave in the crystal that acts as a phase grating. The DMD projects 64 points of view of the image in fast sequence onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected to a different viewing angle. A holographic screen at a proper distance diffuses the rays vertically (60°) and horizontally selects (1°) only the rays directed at the observer. A telescope optical system enlarges the image to the required size. VHDL firmware that renders 64 views (16-bit 4:2:2) of a CAD model (OBJ, DXF, or 3DS) and depth-map-encoded video images in real time (16 ms) was developed in the resident Virtex-5 FPGA of the Discovery 4100 SDK, eliminating the need for image transfer over high-speed links.

  7. Automatic Boosted Flood Mapping from Satellite Data

    NASA Technical Reports Server (NTRS)

    Coltin, Brian; McMichael, Scott; Smith, Trey; Fong, Terrence

    2016-01-01

Numerous algorithms have been proposed to map floods from Moderate Resolution Imaging Spectroradiometer (MODIS) imagery. However, most require human input to succeed, either to specify a threshold value or to manually annotate training data. We introduce a new algorithm based on AdaBoost which effectively maps floods without any human input, allowing for a truly rapid and automatic response. The AdaBoost algorithm combines multiple thresholds to achieve results comparable to state-of-the-art algorithms that do require human input. We evaluate AdaBoost, as well as numerous previously proposed flood mapping algorithms, on multiple MODIS flood images, as well as on hundreds of non-flood MODIS lake images, demonstrating its effectiveness across a wide variety of conditions.
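Combining multiple thresholds with AdaBoost can be sketched with decision stumps on a single pixel value. This is a generic AdaBoost-of-stumps illustration, not the authors' exact learner, features, or training data:

```python
import math

def train_adaboost(xs, ys, rounds=5):
    """AdaBoost over 1D threshold stumps: each weak learner labels a
    pixel 'flood' (+1) or 'non-flood' (-1) by comparing its value
    to a threshold, with a polarity for the direction of comparison."""
    n = len(xs)
    w = [1.0 / n] * n
    model = []                       # list of (alpha, threshold, polarity)
    thresholds = sorted(set(xs))
    for _ in range(rounds):
        best = None
        for thr in thresholds:
            for pol in (1, -1):
                preds = [pol if x > thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        err = max(err, 1e-10)        # avoid division by zero on perfect stumps
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, thr, pol))
        # Re-weight: misclassified samples gain weight for the next round.
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return model

def predict(model, x):
    score = sum(a * (pol if x > thr else -pol) for a, thr, pol in model)
    return 1 if score > 0 else -1
```

The appeal for flood mapping is that no single hand-tuned threshold is needed: the ensemble weighting is learned from reference data once and then applied fully automatically.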

  8. Seq2Logo: a method for construction and visualization of amino acid binding motifs and sequence profiles including sequence weighting, pseudo counts and two-sided representation of amino acid enrichment and depletion

    PubMed Central

    Thomsen, Martin Christen Frølund; Nielsen, Morten

    2012-01-01

Seq2Logo is a web-based sequence logo generator. Sequence logos are a graphical representation of the information content stored in a multiple sequence alignment (MSA) and provide a compact and highly intuitive representation of the position-specific amino acid composition of binding motifs, active sites, etc. in biological sequences. Accurate generation of sequence logos is often compromised by sequence redundancy and a low number of observations. Moreover, most methods available for sequence logo generation focus on displaying the position-specific enrichment of amino acids, discarding the equally valuable information related to amino acid depletion. Seq2Logo aims at resolving these issues, allowing the user to include sequence weighting to correct for data redundancy, pseudo counts to correct for a low number of observations, and different logotype representations, each capturing different aspects related to amino acid enrichment and depletion. Besides allowing input in the format of peptides and MSAs, Seq2Logo accepts input as BLAST sequence profiles, providing easy access for non-expert end-users to characterize and identify functionally conserved/variable amino acids in any given protein of interest. The output from the server is a sequence logo and a PSSM. Seq2Logo is available at http://www.cbs.dtu.dk/biotools/Seq2Logo (14 May 2012, date last accessed). PMID:22638583
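The classic Shannon-information logo computation with additive pseudocounts can be sketched per alignment column (Seq2Logo itself offers more elaborate sequence weighting, pseudocount, and logotype options; a DNA alphabet is used below for brevity):

```python
import math

def logo_heights(column, alphabet="ACGT", pseudocount=1.0):
    """Per-letter logo heights (bits) for one alignment column:
    the column's information content times each letter's frequency,
    with additive pseudocounts to soften low observation counts."""
    counts = {a: pseudocount for a in alphabet}
    for ch in column:
        counts[ch] += 1
    total = sum(counts.values())
    freqs = {a: c / total for a, c in counts.items()}
    entropy = -sum(f * math.log2(f) for f in freqs.values() if f > 0)
    info = math.log2(len(alphabet)) - entropy     # bits conserved
    return {a: info * f for a, f in freqs.items()}
```

With pseudocounts enabled, a column observed only a few times no longer appears spuriously conserved, which is exactly the low-observation correction the abstract describes.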

  9. Reconstruction of three-dimensional ultrasound images based on cyclic Savitzky-Golay filters

    NASA Astrophysics Data System (ADS)

    Toonkum, Pollakrit; Suwanwela, Nijasri C.; Chinrungrueng, Chedsada

    2011-01-01

We present a new algorithm for reconstructing a three-dimensional (3-D) ultrasound image from a series of two-dimensional B-scan ultrasound slices acquired in the mechanical linear scanning framework. Unlike most existing 3-D ultrasound reconstruction algorithms, which have been developed and evaluated in the freehand scanning framework, the new algorithm has been designed to capitalize on the regularity pattern of mechanical linear scanning, where all the B-scan slices are precisely parallel and evenly spaced. The new reconstruction algorithm, referred to as the cyclic Savitzky-Golay (CSG) reconstruction filter, improves on the original Savitzky-Golay filter in two respects: First, it is extended to accept a 3-D array of data as the filter input instead of a one-dimensional data sequence. Second, it incorporates a cyclic indicator function in its least-squares objective function so that the CSG algorithm can simultaneously perform both smoothing and interpolating tasks. The performance of the CSG reconstruction filter is also compared with that of most existing reconstruction algorithms in generating a 3-D synthetic test image and a clinical 3-D carotid artery bifurcation image in the mechanical linear scanning framework.
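The underlying Savitzky-Golay idea, least-squares polynomial fitting over a sliding window reduced to fixed convolution weights, can be sketched in 1D (the paper's cyclic 3-D extension is omitted). The weights (-3, 12, 17, 12, -3)/35 are the classic window-5, quadratic-fit coefficients:

```python
def savgol_smooth(seq):
    """Savitzky-Golay smoothing, window 5, quadratic/cubic fit.
    The closed-form weights (-3, 12, 17, 12, -3)/35 come from
    least-squares fitting a polynomial over each 5-sample window."""
    w = (-3.0, 12.0, 17.0, 12.0, -3.0)
    out = list(seq)                      # endpoints left unfiltered here
    for i in range(2, len(seq) - 2):
        out[i] = sum(c * seq[i + k - 2] for k, c in enumerate(w)) / 35.0
    return out
```

A useful sanity check on the coefficients: the filter reproduces any quadratic signal exactly, smoothing only the noise around it.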

  10. Sequencing historical specimens: successful preparation of small specimens with low amounts of degraded DNA.

    PubMed

    Sproul, John S; Maddison, David R

    2017-11-01

    Despite advances that allow DNA sequencing of old museum specimens, sequencing small-bodied, historical specimens can be challenging and unreliable as many contain only small amounts of fragmented DNA. Dependable methods to sequence such specimens are especially critical if the specimens are unique. We attempt to sequence small-bodied (3-6 mm) historical specimens (including nomenclatural types) of beetles that have been housed, dried, in museums for 58-159 years, and for which few or no suitable replacement specimens exist. To better understand ideal approaches of sample preparation and produce preparation guidelines, we compared different library preparation protocols using low amounts of input DNA (1-10 ng). We also explored low-cost optimizations designed to improve library preparation efficiency and sequencing success of historical specimens with minimal DNA, such as enzymatic repair of DNA. We report successful sample preparation and sequencing for all historical specimens despite our low-input DNA approach. We provide a list of guidelines related to DNA repair, bead handling, reducing adapter dimers and library amplification. We present these guidelines to facilitate more economical use of valuable DNA and enable more consistent results in projects that aim to sequence challenging, irreplaceable historical specimens. © 2017 John Wiley & Sons Ltd.

  11. An investigation of developmental changes in interpretation and construction of graphic AAC symbol sequences through systematic combination of input and output modalities.

    PubMed

    Trudeau, Natacha; Sutton, Ann; Morford, Jill P

    2014-09-01

    While research on spoken language has a long tradition of studying and contrasting language production and comprehension, the study of graphic symbol communication has focused more on production than comprehension. As a result, the relationships between the ability to construct and to interpret graphic symbol sequences are not well understood. This study explored the use of graphic symbol sequences in children without disabilities aged 3;0 to 6;11 (years; months) (n=111). Children took part in nine tasks that systematically varied input and output modalities (speech, action, and graphic symbols). Results show that in 3- and 4-year-olds, attributing meaning to a sequence of symbols was particularly difficult even when the children knew the meaning of each symbol in the sequence. Similarly, while even 3- and 4-year-olds could produce a graphic symbol sequence following a model, transposing a spoken sentence into a graphic sequence was more difficult for them. Representing an action with graphic symbols was difficult even for 5-year-olds. Finally, the ability to comprehend graphic-symbol sequences preceded the ability to produce them. These developmental patterns, as well as memory-related variables, should be taken into account in choosing intervention strategies with young children who use AAC.

  12. A comparative study on generating simulated Landsat NDVI images using data fusion and regression method-the case of the Korean Peninsula.

    PubMed

    Lee, Mi Hee; Lee, Soo Bong; Eo, Yang Dam; Kim, Sun Woong; Woo, Jung-Hun; Han, Soo Hee

    2017-07-01

Landsat optical images have enough spatial and spectral resolution to analyze vegetation growth characteristics. However, clouds and water vapor often degrade the image quality, which limits the availability of usable images for time series vegetation vitality measurement. To overcome this shortcoming, simulated images are used as an alternative. In this study, the weighted average method, the spatial and temporal adaptive reflectance fusion model (STARFM) method, and the multilinear regression analysis method have been tested to produce simulated Landsat normalized difference vegetation index (NDVI) images of the Korean Peninsula. The test results showed that the weighted average method produced the images most similar to the actual images, provided that the images were available within 1 month before and after the target date. The STARFM method gives good results when the input image date is close to the target date. Careful regional and seasonal consideration is required in selecting input images. During the summer season, due to clouds, it is very difficult to get images close enough to the target date. Multilinear regression analysis gives meaningful results even when the input image date is not so close to the target date. Average R² values for the weighted average method, STARFM, and multilinear regression analysis were 0.741, 0.70, and 0.61, respectively.
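The paper does not spell out its weighting scheme; assuming inverse temporal-distance weights (a common choice, labeled here as an assumption), a weighted-average simulation of a target-date NDVI image could look like:

```python
def weighted_ndvi(before, after, d_before, d_after):
    """Simulate an NDVI image at a target date as the distance-weighted
    average of the nearest usable images before and after that date.
    `d_before`/`d_after` are the time gaps (e.g. days) to the target;
    the temporally closer image receives the larger weight."""
    wb = d_after / (d_before + d_after)
    wa = d_before / (d_before + d_after)
    return [wb * b + wa * a for b, a in zip(before, after)]
```

With images 10 and 30 days from the target, the nearer image contributes 75% of each simulated pixel.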

  13. Real-time edge-enhanced optical correlator

    NASA Technical Reports Server (NTRS)

    Liu, Tsuen-Hsi (Inventor); Cheng, Li-Jen (Inventor)

    1992-01-01

Edge enhancement of an input image by four-wave mixing a first write beam with a second write beam in a photorefractive crystal (GaAs) was achieved for VanderLugt optical correlation with an edge-enhanced reference image, by optimizing the power ratio of the second write beam to the first write beam (70:1) and the power ratio of the read beam, which carries the reference image, to the first write beam (100:701). Liquid crystal TV panels are employed as spatial light modulators to change the input and reference images in real time.

  14. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
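Sampling from a Gaussian scale mixture with a discrete set of mixers, and checking that the result is heavier-tailed than a plain Gaussian, takes only a few lines (the mixer values below are illustrative, not taken from the paper):

```python
import random
import statistics

def sample_gsm(n, mixers=(0.5, 1.0, 4.0), seed=0):
    """Draw GSM samples: a unit Gaussian multiplied by a mixer scale
    chosen probabilistically (uniformly here) for each sample."""
    rng = random.Random(seed)
    return [rng.choice(mixers) * rng.gauss(0.0, 1.0) for _ in range(n)]

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (0 for a Gaussian)."""
    m = statistics.fmean(xs)
    var = statistics.fmean([(x - m) ** 2 for x in xs])
    m4 = statistics.fmean([(x - m) ** 4 for x in xs])
    return m4 / var ** 2 - 3.0
```

The positive excess kurtosis of the mixture is the key bottom-up statistic of filter responses to natural images that GSM models are built to capture; the model in the paper additionally learns which mixer each input should be assigned to.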

  15. Diffusion-weighted imaging of the sellar region: a comparison study of BLADE and single-shot echo planar imaging sequences.

    PubMed

    Yiping, Lu; Hui, Liu; Kun, Zhou; Daoying, Geng; Bo, Yin

    2014-07-01

The purpose of this study is to compare BLADE diffusion-weighted imaging (DWI) with single-shot echo planar imaging (EPI) DWI in terms of the feasibility of imaging the sellar region and image quality. A total of 3 healthy volunteers and 52 patients with suspected lesions in the sellar region were included in this prospective intra-individual study. All exams were performed at 3.0 T with a BLADE DWI sequence and a standard single-shot EPI DWI sequence. Phantom measurements were performed to measure the objective signal-to-noise ratio (SNR). Two radiologists rated the image quality according to the visualisation of the internal carotid arteries, optic chiasm, pituitary stalk, pituitary gland, and lesion, and the overall image quality. One radiologist measured lesion sizes to assess their relationship with the image score. The SNR of the BLADE DWI sequence showed no significant difference from that of the single-shot EPI sequence (P>0.05). All of the assessed regions received higher scores in BLADE DWI images than in single-shot EPI DWI images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  16. Control of an Estuarine Microfouling Sequence on Optical Surfaces Using Low-Intensity Ultraviolet Irradiation

    PubMed Central

    DiSalvo, L. H.; Cobet, A. B.

    1974-01-01

Ultraviolet light has been investigated as an active energy input for the control of slime film formation on optical surfaces submerged in San Francisco Bay for periods up to 6 weeks. Irradiation of quartz underwater windows was carried out from three positions: (i) exterior to the window, (ii) from directly behind the window, and (iii) from the edge of the window with the ultraviolet (UV) energy refracted through the front of the window. Internally administered irradiation reaching levels of 10 to 30 μW/cm², measurable at the glass surface, was effective in preventing bacterial slime film formation and settlement of metazoan larvae. When administered from the external position, over one order of magnitude more UV energy (500 to 600 μW/cm²) was required to accomplish the same result. Irradiation from the edge position was most promising logistically and was effective in fouling control for 6 weeks. The results provide a preliminary quantitation of the energy requirement for control of the marine microfouling sequence which precedes development of macrofouling communities. PMID:16349978

  17. Multiview human activity recognition system based on spatiotemporal template for video surveillance system

    NASA Astrophysics Data System (ADS)

    Kushwaha, Alok Kumar Singh; Srivastava, Rajeev

    2015-09-01

An efficient view-invariant framework for the recognition of human activities from an input video sequence is presented. The proposed framework is composed of three consecutive modules: (i) detect and locate people by background subtraction, (ii) create view-invariant spatiotemporal templates for different activities, and (iii) perform template matching for view-invariant activity recognition. The foreground objects present in a scene are extracted using change detection and background modeling. The view-invariant templates are constructed using motion history images and object shape information for different human activities in a video sequence. For matching the spatiotemporal templates of the various activities, moment invariants and the Mahalanobis distance are used. The proposed approach is tested successfully on our own viewpoint dataset, the KTH action recognition dataset, the i3DPost multiview dataset, the MSR viewpoint action dataset, the VideoWeb multiview dataset, and the WVU multiview human action recognition dataset. From the experimental results and analysis over the chosen datasets, it is observed that the proposed framework is robust, flexible, and efficient with respect to multiple-view activity recognition and scale and phase variations.
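Template matching by Mahalanobis distance over moment-invariant features might be sketched as follows (in the paper the feature vectors would be moment invariants of the motion-history templates; the diagonal covariance, feature values, and class names below are simplifications and hypothetical):

```python
import math

def mahalanobis(x, mean, inv_var):
    """Mahalanobis distance with a diagonal covariance, a common
    simplification when few training sequences are available.
    `inv_var` holds the per-dimension inverse variances."""
    return math.sqrt(sum((xi - mi) ** 2 * iv
                         for xi, mi, iv in zip(x, mean, inv_var)))

def classify(feature, templates):
    """Pick the activity whose template statistics are closest.
    `templates`: {activity_name: (mean_vector, inverse_variance_vector)}"""
    return min(templates,
               key=lambda name: mahalanobis(feature, *templates[name]))
```

Because the distance is normalized by each feature's variance, dimensions with naturally large spread do not dominate the match.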

  18. CIDR

    Science.gov Websites

Website fragments (poster and protocol titles): NGS pretesting and QC using Illumina Infinium arrays; CIDR IGES posters, 2017; a comparison of fragmentation methods for input into library construction protocols; development of a low-input FFPE workflow; evaluation of copy number variation (CNV) detection methods in whole exome sequencing (WES) data; CIDR AGBT.

  19. Dynamical Modeling of NGC 6397: Simulated HST Imaging

    NASA Astrophysics Data System (ADS)

    Dull, J. D.; Cohn, H. N.; Lugger, P. M.; Slavin, S. D.; Murphy, B. W.

    1994-12-01

The proximity of NGC 6397 (2.2 kpc) provides an ideal opportunity to test current dynamical models for globular clusters with the HST Wide-Field/Planetary Camera (WFPC2). We have used a Monte Carlo algorithm to generate ensembles of simulated Planetary Camera (PC) U-band images of NGC 6397 from evolving, multi-mass Fokker-Planck models. These images, which are based on the post-repair HST-PC point-spread function, are used to develop and test analysis methods for recovering structural information from actual HST imaging. We have considered a range of exposure times up to 2.4×10⁴ s, based on our proposed HST Cycle 5 observations. Our Fokker-Planck models include energy input from dynamically formed binaries. We have adopted a 20-group mass spectrum extending from 0.16 to 1.4 M⊙. We use theoretical luminosity functions for red giants and main sequence stars. Horizontal branch stars, blue stragglers, white dwarfs, and cataclysmic variables are also included. Simulated images are generated for cluster models at both maximal core collapse and at a post-collapse bounce. We are carrying out stellar photometry on these images using "DAOPHOT-assisted aperture photometry" software that we have developed. We are testing several techniques for analyzing the resulting star counts to determine the underlying cluster structure, including parametric model fits and nonparametric density estimation methods. Our simulated images also allow us to investigate the accuracy and completeness of methods for carrying out stellar photometry in HST Planetary Camera images of dense cluster cores.

  20. Update on Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

    The Rover Sequencing and Visualization Program (RSVP) has been updated. RSVP was reported in Rover Sequencing and Visualization Program (NPO-30845), NASA Tech Briefs, Vol. 29, No. 4 (April 2005), page 38. To recapitulate: The Rover Sequencing and Visualization Program (RSVP) is the software tool to be used in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion into command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of the rover's predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (robotic arm) operations.
Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules. The products of RSVP include command sequences for uplink that are stored in the Distributed Object Manager (DOM) and predicted rover state histories stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities.

  1. Wavelet Fusion for Concealed Object Detection Using Passive Millimeter Wave Sequence Images

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Pang, L.; Liu, H.; Xu, X.

    2018-04-01

    A PMMW imaging system can create interpretable imagery of objects concealed under clothing, which gives it a great advantage in security check systems. This paper addresses wavelet fusion for detecting concealed objects in passive millimeter wave (PMMW) sequence images. First, based on the image characteristics and storage method of the PMMW real-time imager, the sum of squared differences (SSD) is used as an image-correlation parameter to screen the sequence images. Secondly, the selected images are fused using a wavelet fusion algorithm. Finally, the concealed objects are detected by mean filtering, threshold segmentation, and edge detection. The experimental results show that this method improves the detection of concealed objects by selecting the most relevant images from the PMMW sequence and using wavelet fusion to enhance the information of the concealed objects. The method can be effectively applied to the detection of objects concealed on the human body in millimeter wave video.
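The SSD-based screening step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the keep-or-skip threshold rule is an assumption about how the correlation parameter might be used to select mutually similar frames for fusion.

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized frames."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def screen_frames(frames, threshold):
    """Keep a frame only if its SSD to the previously kept frame is below
    the threshold, i.e. retain the most mutually correlated frames."""
    kept = [frames[0]]
    for f in frames[1:]:
        if ssd(f, kept[-1]) < threshold:
            kept.append(f)
    return kept
```

The surviving frames would then be passed to the wavelet fusion stage.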

  2. Deblurring sequential ocular images from multi-spectral imaging (MSI) via mutual information.

    PubMed

    Lian, Jian; Zheng, Yuanjie; Jiao, Wanzhen; Yan, Fang; Zhao, Bojun

    2018-06-01

    Multi-spectral imaging (MSI) produces a sequence of spectral images to capture the inner structure of different species, and was recently introduced into ocular disease diagnosis. However, the quality of MSI images can be significantly degraded by motion blur caused by the inevitable saccades and the exposure time required to maintain a sufficiently high signal-to-noise ratio. This degradation may confuse an ophthalmologist, reduce the examination quality, or defeat various image analysis algorithms. We propose one of the first approaches dedicated to deblurring sequential MSI images; it is distinguished from most current image deblurring techniques by resolving the blur kernels simultaneously for all the images in an MSI sequence. This is accomplished by incorporating several a priori constraints, including the sharpness of the latent clear image, the spatial and temporal smoothness of the blur kernel, and the similarity between temporally neighboring images in the MSI sequence. Specifically, we model the similarity between MSI images with mutual information, to account for the different wavelengths used for capturing the different images in an MSI sequence. The optimization of the proposed approach is based on a multi-scale framework and a stepwise optimization strategy. Experimental results from 22 MSI sequences validate that our approach outperforms several state-of-the-art techniques in natural image deblurring.
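The mutual-information similarity term between two images can be computed from their joint intensity histogram. The sketch below is a generic implementation; the bin count and the histogram estimator are our assumptions, not details taken from the paper.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Mutual information between two images, estimated from the joint
    histogram of their pixel intensities."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()                    # joint distribution
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Identical images yield the entropy of the intensity distribution; independent images yield a value near zero, which makes the measure usable as a cross-wavelength similarity term.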

  3. An open, object-based modeling approach for simulating subsurface heterogeneity

    NASA Astrophysics Data System (ADS)

    Bennett, J.; Ross, M.; Haslauer, C. P.; Cirpka, O. A.

    2017-12-01

    Characterization of subsurface heterogeneity with respect to hydraulic and geochemical properties is critical in hydrogeology as their spatial distribution controls groundwater flow and solute transport. Many approaches of characterizing subsurface heterogeneity do not account for well-established geological concepts about the deposition of the aquifer materials; those that do (i.e. process-based methods) often require forcing parameters that are difficult to derive from site observations. We have developed a new method for simulating subsurface heterogeneity that honors concepts of sequence stratigraphy, resolves fine-scale heterogeneity and anisotropy of distributed parameters, and resembles observed sedimentary deposits. The method implements a multi-scale hierarchical facies modeling framework based on architectural element analysis, with larger features composed of smaller sub-units. The Hydrogeological Virtual Reality simulator (HYVR) simulates distributed parameter models using an object-based approach. Input parameters are derived from observations of stratigraphic morphology in sequence type-sections. Simulation outputs can be used for generic simulations of groundwater flow and solute transport, and for the generation of three-dimensional training images needed in applications of multiple-point geostatistics. The HYVR algorithm is flexible and easy to customize. The algorithm was written in the open-source programming language Python, and is intended to form a code base for hydrogeological researchers, as well as a platform that can be further developed to suit investigators' individual needs. This presentation will encompass the conceptual background and computational methods of the HYVR algorithm, the derivation of input parameters from site characterization, and the results of groundwater flow and solute transport simulations in different depositional settings.
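The object-based idea behind a simulator of this kind can be sketched in a few lines. This is a hedged illustration, not the HYVR code itself: the ellipsoidal body shape and the rule that later objects erode earlier ones are illustrative assumptions about how sediment bodies might be rasterised into a parameter grid.

```python
import numpy as np

def place_objects(shape, objects, background=0):
    """Rasterise ellipsoidal sediment bodies into a regular (z, y, x) grid.
    Objects are applied in deposition order, so later bodies overprint
    (erode) earlier ones. Each object is (cz, cy, cx, rz, ry, rx, facies)."""
    z, y, x = np.indices(shape)
    grid = np.full(shape, background, dtype=int)
    for cz, cy, cx, rz, ry, rx, facies in objects:
        inside = ((z - cz) / rz) ** 2 + ((y - cy) / ry) ** 2 \
               + ((x - cx) / rx) ** 2 <= 1.0
        grid[inside] = facies
    return grid
```

A real simulator would draw object geometries and facies from stratigraphy-informed distributions; here they are passed in explicitly.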

  4. Multispectral and Textural Properties and Diversity of Soils in Gusev Crater and Meridiani Planum from Mars Exploration Rover Pancam and MI Data

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Fraeman, A. A.; Grossman, L.; Herkenhoff, K. E.; Sullivan, R. J.; Mer/Athena Science Team

    2010-12-01

    The Mars Exploration Rovers Spirit and Opportunity have enabled more than six and a half years of detailed, in situ field study of two specific landing sites and traverse paths within Gusev crater and Meridiani Planum, respectively. Much of the study has relied on high-resolution, multispectral imaging of fine-grained regolith components--the dust, sand, cobbles, clasts, and other components collectively referred to as "soil"--at both sites using the rovers' Panoramic Camera (Pancam) and Microscopic Imager (MI) imaging systems. As of early September 2010, the Pancam systems have acquired more than 1300 and 1000 "13 filter" multispectral imaging sequences of surfaces in Gusev and Meridiani, respectively, with each sequence consisting of co-located images at 11 unique narrowband wavelengths between 430 nm and 1009 nm and having a maximum spatial resolution of about 500 microns per pixel. The MI systems have acquired more than 5900 and 6500 monochromatic images, respectively, at about 31 microns per pixel scale. Pancam multispectral image cubes are calibrated to radiance factor (I/F, where I is the measured radiance and π*F is the incident solar irradiance) using observations of the onboard calibration targets, and then corrected to relative reflectance (assuming Lambertian photometric behavior) for comparison with laboratory rock and mineral measurements. Specifically, Pancam spectra can be used to detect the possible presence of some iron-bearing minerals (e.g., some ferric oxides/oxyhydroxides and pyroxenes) as well as structural water or OH in some hydrated alteration products, providing important inputs on the choice of targets for more quantitative compositional and mineralogic follow-up using the rover's other in situ and remote sensing analysis tools. 
Pancam 11-band spectra are being analyzed using a variety of standard as well as specifically-tailored analysis methods, including color ratio and band depth parameterizations, spectral similarity and principal components clustering, and simple visual inspection based on correlations with false color unit boundaries and textural variations seen in both Pancam and MI imaging. Approximately 20 distinct spectral classes of fine-grained surface components were identified at each site based on these methods. In this presentation we describe these spectral classes, their geologic and textural context and distribution based on supporting high-resolution MI and other Pancam imaging, and their potential compositional/mineralogic interpretations based on a variety of rover data sets.
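A band-depth parameterization of the kind mentioned can be sketched as below: the band depth is one minus the ratio of the reflectance at the band center to a continuum interpolated linearly between two shoulder bands. The wavelengths used in the test are illustrative, not a statement of which Pancam filters the team actually combined.

```python
import numpy as np

def band_depth(wavelengths, reflectance, left, center, right):
    """Band depth = 1 - R_center / R_continuum, with the continuum
    interpolated linearly between the two shoulder wavelengths."""
    w = np.asarray(wavelengths, float)
    r = np.asarray(reflectance, float)
    rl, rc, rr = (r[w == wl][0] for wl in (left, center, right))
    continuum = rl + (rr - rl) * (center - left) / (right - left)
    return 1.0 - rc / continuum
```

A positive band depth indicates an absorption at the center band relative to the local continuum, which is the kind of signature used to flag iron-bearing or hydrated phases.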

  5. Iterative pixelwise approach applied to computer-generated holograms and diffractive optical elements.

    PubMed

    Hsu, Wei-Feng; Lin, Shih-Chih

    2018-01-01

    This paper presents a novel approach to optimizing the design of phase-only computer-generated holograms (CGHs) for the creation of binary images in an optical Fourier transform system. Optimization begins by selecting an image pixel and applying a trial change to its amplitude. The modulated image function undergoes an inverse Fourier transform, followed by the imposition of a CGH constraint and a Fourier transform, to yield an image function associated with the change in amplitude of the selected pixel. In iterations where the quality of the image is improved, that image function is adopted as the input for the next iteration. In cases where the image quality is not improved, the image function from before the pixel change is used as the input. Thus, the proposed approach is referred to as the pixelwise hybrid input-output (PHIO) algorithm. The PHIO algorithm was shown to achieve image quality far exceeding that of the Gerchberg-Saxton (GS) algorithm. The benefits were particularly evident when the PHIO algorithm was equipped with a dynamic range of image intensities equivalent to the amplitude freedom of the image signal. The signal variation of images reconstructed with the GS algorithm was 1.0223, but only 0.2537 with PHIO, i.e., a 75% improvement. Nonetheless, the proposed scheme resulted in a 10% degradation in diffraction efficiency and signal-to-noise ratio.
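The pixelwise accept/reject loop described above can be sketched as follows. This is a minimal sketch under stated assumptions: a squared-error quality metric and a unit-amplitude (phase-only) CGH constraint are our choices for illustration; the paper's exact constraint and quality measure may differ.

```python
import numpy as np

def phio(target, iterations=200, step=0.1, seed=0):
    """Pixelwise hybrid input-output sketch: perturb one image pixel's
    amplitude, enforce the phase-only CGH constraint via IFFT/FFT, and
    keep the change only if the reconstruction error decreases."""
    rng = np.random.default_rng(seed)
    img = target.astype(complex)  # current image-plane function

    def reconstruct(f):
        hologram = np.exp(1j * np.angle(np.fft.ifft2(f)))  # phase-only CGH
        return np.fft.fft2(hologram)

    def error(f):
        recon = np.abs(reconstruct(f))
        recon = recon / recon.max()
        return float(np.sum((recon - target) ** 2))

    best = error(img)
    for _ in range(iterations):
        i = rng.integers(0, target.shape[0])
        j = rng.integers(0, target.shape[1])
        trial = img.copy()
        trial[i, j] += step * rng.standard_normal()  # trial amplitude change
        e = error(trial)
        if e < best:  # adopt the change only on improvement
            img, best = trial, e
    return img, best
```

Because a change is adopted only when the error decreases, the image quality is monotonically non-degrading across iterations.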

  6. Pre-Flight Radiometric Model of Linear Imager on LAPAN-IPB Satellite

    NASA Astrophysics Data System (ADS)

    Hadi Syafrudin, A.; Salaswati, Sartika; Hasbi, Wahyudi

    2018-05-01

    The LAPAN-IPB satellite is a microsatellite-class spacecraft with a remote sensing experiment mission. It carries a multispectral line imager to capture radiometric reflectance values of the Earth from space. The radiometric quality of an image is an important factor for object classification in remote sensing. Before launch (pre-flight), the line imager was tested with a monochromator and an integrating sphere to characterize its spectral response and the radiometric response of every pixel. Pre-flight test data acquired with a variety of line imager settings were used to examine the correlation between the input radiance and the digital numbers of the output images. This input-output correlation is described by a radiance conversion model incorporating the imager settings and radiometric characteristics. The modelling process, from the hardware level to the normalized radiance formula, is presented and discussed in this paper.
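In its simplest linear form, the radiance-to-digital-number correlation reduces to DN = gain × L + offset per pixel, with the inverse used for calibration. The sketch below shows only that simplified form; the actual LAPAN-IPB model, with its setting-dependent terms, is more elaborate, and the gain/offset values here are invented for illustration.

```python
import numpy as np

def dn_from_radiance(L, gain, offset, bits=8):
    """Forward radiometric model: digital number from at-sensor radiance,
    quantized and clipped to the sensor's bit depth."""
    dn = gain * np.asarray(L, dtype=float) + offset
    return np.clip(np.round(dn), 0, 2 ** bits - 1)

def radiance_from_dn(dn, gain, offset):
    """Inverse model used in calibration: recover radiance from DN."""
    return (np.asarray(dn, dtype=float) - offset) / gain
```

Pre-flight integrating-sphere measurements at known radiances would be used to fit the per-pixel gain and offset.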

  7. Vessel segmentation in 4D arterial spin labeling magnetic resonance angiography images of the brain

    NASA Astrophysics Data System (ADS)

    Phellan, Renzo; Lindner, Thomas; Falcão, Alexandre X.; Forkert, Nils D.

    2017-03-01

    4D arterial spin labeling magnetic resonance angiography (4D ASL MRA) is a non-invasive and safe modality for cerebrovascular imaging procedures. It uses the patient's magnetically labeled blood as an intrinsic contrast agent, so that no external contrast medium is required. It provides important 3D structure and blood flow information, but an accurate cerebrovascular segmentation is important, since it can help clinicians analyze and diagnose vascular diseases faster and with higher confidence compared to simple visual rating of raw ASL MRA images. This work presents a new method for automatic cerebrovascular segmentation in 4D ASL MRA images of the brain. In this process, images are denoised, corresponding label/control image pairs of the 4D ASL MRA sequences are subtracted, and temporal intensity averaging is used to generate a static representation of the vascular system. After that, sets of vessel and background seeds are extracted and provided as input for the image foresting transform algorithm to segment the vascular system. Four 4D ASL MRA datasets of the brain arteries of healthy subjects, with corresponding time-of-flight (TOF) MRA images, were available for this preliminary study. For evaluation of the segmentation results of the proposed method, the cerebrovascular system was automatically segmented in the high-resolution TOF MRA images using a validated algorithm, and the segmentation results were registered to the 4D ASL datasets. Corresponding segmentation pairs were compared using the Dice similarity coefficient (DSC). On average, a DSC of 0.9025 was achieved, indicating that vessels can be extracted successfully from 4D ASL MRA datasets by the proposed segmentation method.
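The pair subtraction, temporal averaging, and Dice evaluation steps can be sketched generically as below. The alternating control/label frame ordering and the binary-mask convention are assumptions for illustration, not details from the paper.

```python
import numpy as np

def static_angiogram(sequence):
    """Subtract label from control frames and average over time to get a
    static representation of the vasculature (frames assumed to alternate
    control, label, control, label, ...)."""
    seq = np.asarray(sequence, dtype=float)
    control, label = seq[0::2], seq[1::2]
    return (control - label).mean(axis=0)

def dice(a, b):
    """Dice similarity coefficient between two binary segmentations."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```

The static angiogram would then feed the seed extraction and image foresting transform stages; the Dice coefficient is the metric used to compare against the registered TOF segmentation.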

  8. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    PubMed Central

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A Wiener filter with a new point spread function (PSF) is employed to efficiently remove blur in the sparse component, while Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The restored CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries, and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP), and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
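The sparse-plus-low-rank split can be illustrated with a GoDec-style alternation, one of the three models compared in the paper. This is a simplified sketch: the rank and sparsity parameters are illustrative, and a production implementation would use randomized projections rather than a full SVD.

```python
import numpy as np

def sparse_lowrank(M, rank=1, sparsity=0.1, iters=10):
    """GoDec-style alternation: L = best rank-r approximation of M - S
    (via truncated SVD); S = the largest-magnitude entries of M - L
    (hard threshold keeping a fixed fraction of entries)."""
    M = np.asarray(M, dtype=float)
    S = np.zeros_like(M)
    k = max(1, int(sparsity * M.size))  # number of sparse entries kept
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        R = M - L
        thresh = np.sort(np.abs(R).ravel())[-k]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S
```

For a CT sequence flattened to a matrix (one frame per column), L captures the shared background and S the frame-specific detail, which are then filtered separately.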

  9. Diffusion-weighted Imaging of the Liver with Multiple b Values: Effect of Diffusion Gradient Polarity and Breathing Acquisition on Image Quality and Intravoxel Incoherent Motion Parameters—A Pilot Study

    PubMed Central

    Dyvorne, Hadrien A.; Galea, Nicola; Nevers, Thomas; Fiel, M. Isabel; Carpenter, David; Wong, Edmund; Orton, Matthew; de Oliveira, Andre; Feiweier, Thorsten; Vachon, Marie-Louise; Babb, James S.

    2013-01-01

    Purpose: To optimize intravoxel incoherent motion (IVIM) diffusion-weighted (DW) imaging by estimating the effects of diffusion gradient polarity and breathing acquisition scheme on image quality, signal-to-noise ratio (SNR), IVIM parameters, and parameter reproducibility, as well as to investigate the potential of IVIM in the detection of hepatic fibrosis. Materials and Methods: In this institutional review board–approved prospective study, 20 subjects (seven healthy volunteers, 13 patients with hepatitis C virus infection; 14 men, six women; mean age, 46 years) underwent IVIM DW imaging with four sequences: (a) respiratory-triggered (RT) bipolar (BP) sequence, (b) RT monopolar (MP) sequence, (c) free-breathing (FB) BP sequence, and (d) FB MP sequence. Image quality scores were assessed for all sequences. A biexponential analysis with the Bayesian method yielded true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (PF) in liver parenchyma. Mixed-model analysis of variance was used to compare image quality, SNR, IVIM parameters, and interexamination variability between the four sequences, as well as the ability to differentiate areas of liver fibrosis from normal liver tissue. Results: Image quality with RT sequences was superior to that with FB acquisitions (P = .02) and was not affected by gradient polarity. SNR did not vary significantly between sequences. IVIM parameter reproducibility was moderate to excellent for PF and D, while it was less reproducible for D*. PF and D were both significantly lower in patients with hepatitis C virus than in healthy volunteers with the RT BP sequence (PF = 13.5% ± 5.3 [standard deviation] vs 9.2% ± 2.5, P = .038; D = [1.16 ± 0.07] × 10−3 mm2/sec vs [1.03 ± 0.1] × 10−3 mm2/sec, P = .006). Conclusion: The RT BP DW imaging sequence had the best results in terms of image quality, reproducibility, and ability to discriminate between healthy and fibrotic liver with biexponential fitting. 
© RSNA, 2012 PMID:23220895
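The biexponential IVIM model above, S(b) = S0·[PF·exp(−b·D*) + (1 − PF)·exp(−b·D)], can be sketched together with a common segmented (two-step) estimator: fit D on high-b points where perfusion has decayed, then read PF from the extrapolated intercept. The b-value cutoff is an assumption, and the study itself used a Bayesian fit rather than this simple estimator.

```python
import numpy as np

def ivim_signal(b, S0, PF, D, Dstar):
    """IVIM biexponential signal model."""
    b = np.asarray(b, dtype=float)
    return S0 * (PF * np.exp(-b * Dstar) + (1.0 - PF) * np.exp(-b * D))

def segmented_fit(b, S, b_cut=200.0):
    """Two-step estimate: log-linear fit of D on b >= b_cut, where the
    pseudodiffusion term is negligible; PF from the intercept vs S(0)."""
    b, S = np.asarray(b, float), np.asarray(S, float)
    hi = b >= b_cut
    slope, intercept = np.polyfit(b[hi], np.log(S[hi]), 1)
    D = -slope
    PF = 1.0 - np.exp(intercept) / S[0]
    return D, PF
```

Because D* is typically an order of magnitude larger than D, the high-b segment isolates the true diffusion coefficient, which is consistent with D* being the least reproducible parameter.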

  10. SU-F-303-13: Initial Evaluation of Four Dimensional Diffusion- Weighted MRI (4D-DWI) and Its Effect On Apparent Diffusion Coefficient (ADC) Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Y; Yin, F; Czito, B

    2015-06-15

    Purpose: Diffusion-weighted imaging (DWI) has been shown to have superior tumor-to-tissue contrast for cancer detection. This study aims at developing and evaluating a four-dimensional DWI (4D-DWI) technique that uses a retrospective sorting method to image respiratory motion for radiotherapy planning, and at evaluating its effect on apparent diffusion coefficient (ADC) measurement. Materials/Methods: Image acquisition was performed by repeatedly imaging a volume of interest using a multi-slice single-shot 2D-DWI sequence in the axial plane, together with cine MRI (serving as reference) using a FIESTA sequence. Each 2D-DWI image was acquired in the x, y, and z diffusion directions with a high b-value (b = 500 s/mm2). The respiratory motion was simultaneously recorded using bellows. Retrospective sorting was applied in each direction to reconstruct the 4D-DWI. The technique was evaluated using a computer-simulated 4D digital human phantom (XCAT), a motion phantom, and a healthy volunteer under an IRB-approved study. Motion trajectories of regions of interest (ROIs) were extracted from the 4D-DWI and compared with the reference, and the mean motion-trajectory amplitude difference (D) between the two was calculated. To quantitatively analyze motion artifacts, XCAT was controlled to simulate regular motion and the motions of 10 liver cancer patients, and both 4D-DWI and free-breathing DWI (FB-DWI) were reconstructed. The tumor volume difference (VD) of each phase of 4D-DWI and of FB-DWI from the input static tumor was calculated. Furthermore, ADC was measured for each phase of the 4D-DWI and for the FB-DWI data, and mean tumor ADC values (M-ADC) were calculated. The mean M-ADC over all 4D-DWI phases was compared with the M-ADC calculated from FB-DWI.
Results: 4D-DWI of XCAT, the motion phantom, and the healthy volunteer demonstrated the respiratory motion clearly. ROI D values were 1.9 mm, 1.7 mm, and 2.0 mm, respectively. In the motion-artifact analysis, the XCAT 4D-DWI images showed much less motion artifact than FB-DWI: mean VD for 4D-DWI and FB-DWI was 8.5 ± 1.4% and 108 ± 15%, respectively. Mean M-ADC measured from 4D-DWI and from FB-DWI was (2.29 ± 0.04) × 10−3 mm2/s and (3.80 ± 0.01) × 10−3 mm2/s, respectively; the ground-truth ADC from the simulation input is 2.24 × 10−3 mm2/s. Conclusion: A respiratory-correlated 4D-DWI technique has been initially evaluated in phantoms and a human subject. Compared to free-breathing DWI, 4D-DWI can lead to more accurate measurement of ADC.
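The ADC values reported above follow from the monoexponential decay S_b = S_0·exp(−b·ADC), so per voxel ADC = −ln(S_b/S_0)/b. A minimal sketch, using the abstract's b = 500 s/mm2 and its ground-truth ADC as the test value:

```python
import numpy as np

def adc_map(S0, Sb, b):
    """Per-voxel apparent diffusion coefficient from signals at b = 0
    and at one nonzero b-value: ADC = -ln(S_b / S_0) / b."""
    S0, Sb = np.asarray(S0, float), np.asarray(Sb, float)
    return -np.log(Sb / S0) / b
```

With multiple b-values, a log-linear fit over all of them would replace this two-point formula.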

  11. Neural network diagnosis of avascular necrosis from magnetic resonance images

    NASA Astrophysics Data System (ADS)

    Manduca, Armando; Christy, Paul S.; Ehman, Richard L.

    1993-09-01

    We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 × 32 pixels and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.

  12. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  13. The brain in time: insights from neuromagnetic recordings.

    PubMed

    Hari, Riitta; Parkkonen, Lauri; Nangini, Cathy

    2010-03-01

    The millisecond time resolution of magnetoencephalography (MEG) is instrumental for investigating the brain basis of sensory processing, motor planning, cognition, and social interaction. We review the basic principles, recent progress, and future potential of MEG in noninvasive tracking of human brain activity. Cortical activation sequences from tens to hundreds of milliseconds can be followed during, e.g., perception, motor action, imitation, and language processing by recording both spontaneous and evoked brain signals. Moreover, tagging of sensory input can be used to reveal neuronal mechanisms of binaural interaction and perception of ambiguous images. The results support the emerging ideas of multiple, hierarchically organized temporal scales in human brain function. Instrumentation and data analysis methods are rapidly progressing, enabling attempts to decode the four-dimensional spatiotemporal signal patterns to reveal correlates of behavior and mental contents.

  14. The Mariner Venus Mercury flight data subsystem.

    NASA Technical Reports Server (NTRS)

    Whitehead, P. B.

    1972-01-01

    The flight data subsystem (FDS) discussed handles both the engineering and scientific measurements performed on the MVM'73. It formats the data into serial data streams and sends them to the modulation/demodulation subsystem for transmission to earth, or to the data storage subsystem for storage on a digital tape recorder. The FDS is controlled by serial digital words, called coded commands, received from the central computer and sequencer or from the ground via the modulation/demodulation subsystem. The eight major blocks of the FDS are: power converter, timing and control, engineering data, memory, memory input/output and control, nonimaging data, imaging data, and data output. The FDS incorporates some 4000 components, weighs 17 kg, and uses 35 W of power. General data on the mission and spacecraft are given.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model, exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in times efficient enough for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called a "biophysical" map, generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine beamlets with different weights during the optimization process.
Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV, treated with volumetric modulated arc therapy. In all three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a time short enough to allow routine clinical implementation. The quality assurance protocol followed to check the CARMEN system showed high agreement with the experimental measurements. Conclusions: A Monte Carlo treatment planning model exclusively based on maps produced from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.

  16. Detection and segmentation of multiple touching product inspection items

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Talukder, Ashit; Cox, Westley; Chang, Hsuan-Ting; Weber, David

    1996-12-01

    X-ray images of pistachio nuts on conveyor trays for product inspection are considered. The first step in such a processor is to locate each individual item and place it in a separate file for input to a classifier that determines the quality of each nut. This paper considers new techniques to: detect each item (each nut can be in any orientation, so we employ new rotation-invariant filters to locate each item independent of its orientation); produce separate image files for each item (a new blob coloring algorithm provides this for isolated, non-touching input items); segment touching or overlapping input items into separate image files (we use a morphological watershed transform to achieve this); and apply morphological processing to remove the shell and produce an image of only the nutmeat. Each of these operations and algorithms is detailed, and quantitative data for each are presented for the x-ray nut inspection problem noted. These techniques are of general use in many different product inspection problems in agriculture and other areas.
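The blob-coloring step for isolated items can be sketched as a 4-connected component labeling; this is a generic stand-in for illustration, not the paper's algorithm, which the authors describe only as a new blob coloring method.

```python
import numpy as np

def blob_color(binary):
    """Assign a distinct label to each 4-connected component of a binary
    image, so that every detected item can be written to its own file."""
    binary = np.asarray(binary, bool)
    labels = np.zeros(binary.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(binary)):
        if labels[start]:
            continue  # already part of a colored blob
        current += 1
        stack = [start]
        while stack:  # iterative flood fill
            r, c = stack.pop()
            if not (0 <= r < binary.shape[0] and 0 <= c < binary.shape[1]):
                continue
            if binary[r, c] and not labels[r, c]:
                labels[r, c] = current
                stack += [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
    return labels, current
```

Touching items would fuse into one component here, which is exactly why the paper follows this step with a watershed-based segmentation.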

  17. Program for User-Friendly Management of Input and Output Data Sets

    NASA Technical Reports Server (NTRS)

    Klimeck, Gerhard

    2003-01-01

    A computer program manages large, hierarchical sets of input and output (I/O) parameters (typically, sequences of alphanumeric data) involved in computational simulations in a variety of technological disciplines. This program represents sets of parameters as structures coded in object-oriented but otherwise standard American National Standards Institute C language. Each structure contains a group of I/O parameters that make sense as a unit in the simulation program with which this program is used. The addition of options and/or elements to sets of parameters amounts to the addition of new elements to data structures. By association of child data generated in response to a particular user input, a hierarchical ordering of input parameters can be achieved. Associated with child data structures are the creation and description mechanisms within the parent data structures. Child data structures can spawn further child data structures. In this program, the creation and representation of a sequence of data structures is effected by one line of code that looks for children of a sequence of structures until there are no more children to be found. A linked list of structures is created dynamically and is completely represented in the data structures themselves. Such hierarchical data presentation can guide users through otherwise complex setup procedures and it can be integrated within a variety of graphical representations.
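The hierarchy described above can be sketched in Python rather than the program's ANSI C (the class and method names here are invented for illustration): each structure holds its own parameters and can spawn child structures, and a single traversal keeps following children until none remain.

```python
class ParamGroup:
    """Sketch of a hierarchical I/O-parameter structure: a named group of
    parameters plus a list of child groups it has spawned."""

    def __init__(self, name, params=None):
        self.name = name
        self.params = dict(params or {})
        self.children = []

    def spawn(self, name, params=None):
        """Create a child group in response to a user input and attach it."""
        child = ParamGroup(name, params)
        self.children.append(child)
        return child

    def walk(self):
        """Depth-first traversal: visit this group, then recurse into
        children until no more children are found."""
        yield self
        for child in self.children:
            yield from child.walk()
```

The `walk` generator mirrors the one-line traversal described in the abstract: the hierarchy is represented entirely in the linked structures themselves.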

  18. Deep Learning-Based Banknote Fitness Classification Using the Reflection Images by a Visible-Light One-Dimensional Line Image Sensor

    PubMed Central

    Pham, Tuyen Danh; Nguyen, Dat Tien; Kim, Wan; Park, Sung Ho; Park, Kang Ryoung

    2018-01-01

    In automatic paper currency sorting, fitness classification is a technique that assesses the quality of banknotes to determine whether a banknote is suitable for recirculation or should be replaced. Studies on using visible-light reflection images of banknotes for evaluating their usability have been reported. However, most of them were conducted under the assumption that the denomination and input direction of the banknote are predetermined. In other words, a pre-classification of the type of input banknote is required. To address this problem, we proposed a deep learning-based fitness-classification method that recognizes the fitness level of a banknote regardless of the denomination and input direction of the banknote to the system, using the reflection images of banknotes by visible-light one-dimensional line image sensor and a convolutional neural network (CNN). Experimental results on the banknote image databases of the Korean won (KRW) and the Indian rupee (INR) with three fitness levels, and the Unites States dollar (USD) with two fitness levels, showed that our method gives better classification accuracy than other methods. PMID:29415447

  19. SPM analysis of parametric (R)-[11C]PK11195 binding images: plasma input versus reference tissue parametric methods.

    PubMed

    Schuitemaker, Alie; van Berckel, Bart N M; Kropholler, Marc A; Veltman, Dick J; Scheltens, Philip; Jonker, Cees; Lammertsma, Adriaan A; Boellaard, Ronald

    2007-05-01

    (R)-[11C]PK11195 has been used for quantifying cerebral microglial activation in vivo. In previous studies, both plasma input and reference tissue methods have been used, usually in combination with a region of interest (ROI) approach. Definition of ROIs, however, can be laborious and prone to interobserver variation. In addition, results are only obtained for predefined areas and (unexpected) signals in undefined areas may be missed. On the other hand, standard pharmacokinetic models are too sensitive to noise to calculate (R)-[11C]PK11195 binding on a voxel-by-voxel basis. Linearised versions of both plasma input and reference tissue models have been described, and these are more suitable for parametric imaging. The purpose of this study was to compare the performance of these plasma input and reference tissue parametric methods on the outcome of statistical parametric mapping (SPM) analysis of (R)-[11C]PK11195 binding. Dynamic (R)-[11C]PK11195 PET scans with arterial blood sampling were performed in 7 younger and 11 elderly healthy subjects. Parametric images of volume of distribution (Vd) and binding potential (BP) were generated using linearised versions of plasma input (Logan) and reference tissue (Reference Parametric Mapping) models. Images were compared at the group level using SPM with a two-sample t-test per voxel, both with and without proportional scaling. Parametric BP images without scaling provided the most sensitive framework for determining differences in (R)-[11C]PK11195 binding between younger and elderly subjects. Vd images could only demonstrate differences in (R)-[11C]PK11195 binding when analysed with proportional scaling due to intersubject variation in K1/k2 (blood-brain barrier transport and non-specific binding).

  20. Phase and amplitude beam shaping with two deformable mirrors implementing input plane and Fourier plane phase modifications.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Rzasa, John R; Paulson, Daniel A; Davis, Christopher C

    2018-03-20

    We find that ideas in optical image encryption can be very useful for adaptive optics in achieving simultaneous phase and amplitude shaping of a laser beam. An adaptive optics system with simultaneous phase and amplitude shaping ability is very desirable for atmospheric turbulence compensation. Atmospheric turbulence-induced beam distortions can jeopardize the effectiveness of optical power delivery for directed-energy systems and optical information delivery for free-space optical communication systems. In this paper, a prototype adaptive optics system is proposed based on a well-known image encryption structure. The major change is to replace the two random phase plates at the input plane and Fourier plane of the encryption system, respectively, with two deformable mirrors that perform on-demand phase modulations. A Gaussian beam is used as an input to replace the conventional image input. We show through theory, simulation, and experiments that the slightly modified image encryption system can be used to achieve arbitrary phase and amplitude beam shaping within the limits of stroke range and influence function of the deformable mirrors. In application, the proposed technique can be used to perform mode conversion between optical beams, generate structured light signals for imaging and scanning, and compensate for atmospheric turbulence-induced phase and amplitude beam distortions.
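    The two-plane arrangement can be sketched numerically with scalar Fourier optics: a phase modulation at the input plane, a Fourier transform (the lens), a second phase modulation at the Fourier plane, and an inverse transform. This is a minimal sketch under idealized assumptions (the phase profiles below are arbitrary illustrative choices, not optimized deformable-mirror shapes):

```python
import numpy as np

N = 64
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)

# Input Gaussian beam (replacing the conventional image input, as in the paper)
beam = np.exp(-(X**2 + Y**2) / 0.2)

# Two on-demand phase modulations standing in for the deformable mirrors
phi1 = 2 * np.pi * X          # input-plane phase (first mirror)
phi2 = np.pi * (X**2 + Y**2)  # Fourier-plane phase (second mirror)

field_in = beam * np.exp(1j * phi1)            # after the first phase plane
spectrum = np.fft.fft2(field_in, norm="ortho") # lens performs a Fourier transform
spectrum *= np.exp(1j * phi2)                  # after the Fourier-plane phase
field_out = np.fft.ifft2(spectrum, norm="ortho")

amplitude_out = np.abs(field_out)              # both amplitude and phase are reshaped
phase_out = np.angle(field_out)

# Phase-only elements redistribute but do not absorb power (Parseval's theorem)
power_in = np.sum(np.abs(beam) ** 2)
power_out = np.sum(amplitude_out ** 2)
```

Because both planes apply phase-only modulation, total power is conserved; shaping the output amplitude therefore amounts to redistributing energy, which is why two planes are needed.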

  1. Undesirable Choice Biases with Small Differences in the Spatial Structure of Chance Stimulus Sequences.

    PubMed

    Herrera, David; Treviño, Mario

    2015-01-01

    In two-alternative discrimination tasks, experimenters usually randomize the location of the rewarded stimulus so that systematic behavior with respect to irrelevant stimuli can only produce chance performance on the learning curves. One way to achieve this is to use random numbers derived from a discrete binomial distribution to create a 'full random training schedule' (FRS). When using FRS, however, sporadic but long laterally-biased training sequences occur by chance and such 'input biases' are thought to promote the generation of laterally-biased choices (i.e., 'output biases'). As an alternative, a 'Gellerman-like training schedule' (GLS) can be used. It removes most input biases by prohibiting the reward from appearing on the same location for more than three consecutive trials. The sequence of past rewards obtained from choosing a particular discriminative stimulus influences the probability of choosing that same stimulus on subsequent trials. Assuming that the long-term average ratio of choices matches the long-term average ratio of reinforcers, we hypothesized that a reduced amount of input biases in GLS compared to FRS should lead to a reduced production of output biases. We compared the choice patterns produced by a 'Rational Decision Maker' (RDM) in response to computer-generated FRS and GLS training sequences. To create a virtual RDM, we implemented an algorithm that generated choices based on past rewards. Our simulations revealed that, although the GLS presented fewer input biases than the FRS, the virtual RDM produced more output biases with GLS than with FRS under a variety of test conditions. Our results reveal that the statistical and temporal properties of training sequences interacted with the RDM to influence the production of output biases. Thus, discrete changes in the training paradigms did not translate linearly into modifications in the pattern of choices generated by an RDM. Virtual RDMs could be further employed to guide the selection of proper training schedules for perceptual decision-making studies.
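    The two schedule types are easy to simulate. This sketch generates FRS and GLS sequences and measures input biases as run lengths; the function names are ours, and the RDM itself is not reproduced here:

```python
import random

def frs(n, rng):
    """Full random schedule: each trial's rewarded side is an independent coin flip."""
    return [rng.randint(0, 1) for _ in range(n)]

def gls(n, rng, max_run=3):
    """Gellerman-like schedule: the reward never stays on one side
    for more than max_run consecutive trials."""
    seq = []
    for _ in range(n):
        side = rng.randint(0, 1)
        if (len(seq) >= max_run
                and all(s == seq[-1] for s in seq[-max_run:])
                and side == seq[-1]):
            side = 1 - side          # forbid a fourth repetition of the same side
        seq.append(side)
    return seq

def longest_run(seq):
    """Length of the longest laterally-biased stretch (an 'input bias' measure)."""
    best = run = 1
    for a, b in zip(seq, seq[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

rng = random.Random(0)
frs_seq = frs(10_000, rng)   # will contain long biased stretches by chance
gls_seq = gls(10_000, rng)   # runs are capped at max_run
```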

  2. A New Sparse Representation Framework for Reconstruction of an Isotropic High Spatial Resolution MR Volume From Orthogonal Anisotropic Resolution Scans.

    PubMed

    Jia, Yuanyuan; Gholipour, Ali; He, Zhongshi; Warfield, Simon K

    2017-05-01

    In magnetic resonance (MR), hardware limitations, scan time constraints, and patient movement often result in the acquisition of anisotropic 3-D MR images with limited spatial resolution in the out-of-plane views. Our goal is to construct an isotropic high-resolution (HR) 3-D MR image through upsampling and fusion of orthogonal anisotropic input scans. We propose a multiframe super-resolution (SR) reconstruction technique based on sparse representation of MR images. Our proposed algorithm exploits the correspondence between the HR slices and the low-resolution (LR) sections of the orthogonal input scans as well as the self-similarity of each input scan to train pairs of overcomplete dictionaries that are used in a sparse-land local model to upsample the input scans. The upsampled images are then combined using wavelet fusion and error backprojection to reconstruct an image. Features are learned from the data and no extra training set is needed. Qualitative and quantitative analyses were conducted to evaluate the proposed algorithm using simulated and clinical MR scans. Experimental results show that the proposed algorithm achieves promising results in terms of peak signal-to-noise ratio, structural similarity image index, intensity profiles, and visualization of small structures obscured in the LR imaging process due to partial volume effects. Our novel SR algorithm outperforms the nonlocal means (NLM) method using self-similarity, NLM method using self-similarity and image prior, self-training dictionary learning-based SR method, averaging of upsampled scans, and the wavelet fusion method. Our SR algorithm can reduce through-plane partial volume artifact by combining multiple orthogonal MR scans, and thus can potentially improve medical image analysis, research, and clinical diagnosis.
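    The error-backprojection step used after fusing the upsampled scans can be sketched in isolation. Below is a minimal single-scan sketch with our own simplified averaging/replication operators (the paper's dictionary learning and wavelet fusion are not reproduced); with these idealized operators the update converges almost immediately, while real SR pipelines use blur-plus-decimation models that need many iterations:

```python
import numpy as np

def downsample(img, f):
    """Simulate anisotropic acquisition: average f consecutive rows (through-plane)."""
    return img.reshape(img.shape[0] // f, f, img.shape[1]).mean(axis=1)

def upsample(img, f):
    """Nearest-neighbour upsampling along the low-resolution axis."""
    return np.repeat(img, f, axis=0)

def backproject(lr, f, n_iter=10):
    """Iteratively refine an HR estimate so its simulated LR version matches the input."""
    hr = np.zeros((lr.shape[0] * f, lr.shape[1]))
    for _ in range(n_iter):
        err = lr - downsample(hr, f)     # residual in the LR domain
        hr += upsample(err, f)           # project the error back to the HR grid
    return hr

rng = np.random.default_rng(0)
truth = rng.random((32, 16))
lr = downsample(truth, 2)                # low resolution along rows only
hr = backproject(lr, 2)
residual = np.abs(downsample(hr, 2) - lr).max()  # LR-consistency of the estimate
```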

  3. A segmentation method for lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise

    PubMed Central

    Zhang, Wei; Zhang, Xiaolong; Qiang, Yan; Tian, Qi; Tang, Xiaoxian

    2017-01-01

    The fast and accurate segmentation of lung nodule image sequences is the basis of subsequent processing and diagnostic analyses. However, previous research investigating nodule segmentation algorithms cannot entirely segment cavitary nodules, and the segmentation of juxta-vascular nodules is inaccurate and inefficient. To solve these problems, we propose a new method for the segmentation of lung nodule image sequences based on superpixels and density-based spatial clustering of applications with noise (DBSCAN). First, our method uses three-dimensional computed tomography image features of the average intensity projection combined with multi-scale dot enhancement for preprocessing. Hexagonal clustering and morphologically optimized sequential linear iterative clustering (HMSLIC) for sequence image oversegmentation is then proposed to obtain superpixel blocks. The adaptive weight coefficient is then constructed to calculate the distance required between superpixels to achieve precise lung nodule positioning and to obtain the subsequent clustering starting block. Moreover, by fitting the distance and detecting the change in slope, an accurate clustering threshold is obtained. Thereafter, a fast DBSCAN superpixel sequence clustering algorithm, which is optimized by the strategy of only clustering the lung nodules and an adaptive threshold, is then used to obtain lung nodule mask sequences. Finally, the lung nodule image sequences are obtained. The experimental results show that our method rapidly, completely and accurately segments various types of lung nodule image sequences. PMID:28880916
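    The DBSCAN step itself is compact enough to sketch. Below is a minimal implementation run on synthetic 2-D "superpixel centroids" (the eps/min_pts values and the data are illustrative, not the paper's adaptive threshold):

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: labels points with cluster ids (0, 1, ...) or -1 for noise."""
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbors = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i] or len(neighbors[i]) < min_pts:
            continue
        # i is an unvisited core point: grow a new cluster from it
        stack = [i]
        while stack:
            j = stack.pop()
            if visited[j]:
                continue
            visited[j] = True
            labels[j] = cluster
            if len(neighbors[j]) >= min_pts:   # only core points expand the cluster
                stack.extend(neighbors[j])
        cluster += 1
    return labels

# Two well-separated centroid clouds plus one far-away noise point
rng = np.random.default_rng(1)
a = rng.normal((0, 0), 0.1, (20, 2))
b = rng.normal((5, 5), 0.1, (20, 2))
noise = np.array([[20.0, 20.0]])
labels = dbscan(np.vstack([a, b, noise]), eps=0.5, min_pts=4)
```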

  4. Optimized scheduling technique of null subcarriers for peak power control in 3GPP LTE downlink.

    PubMed

    Cho, Soobum; Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, high peak-to-average power ratio (PAPR) can cause the degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that the ratio of 16 to 20 for the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The near-optimal number of iterations is derived to prevent exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system.
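    The core mechanics of DSI can be sketched with toy sizes: compute the PAPR of an OFDM symbol, then try WHT rows as dummy sequences on the null subcarriers and keep the best. The subcarrier counts below are illustrative, not LTE numerology, and the row-by-row selection merely stands in for the paper's iterative search:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a time-domain OFDM symbol, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def walsh_hadamard(n):
    """Sylvester construction; n must be a power of two."""
    h = np.array([[1.0]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

n_sub, n_null = 64, 8                   # illustrative sizes only
rng = np.random.default_rng(2)
data = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), n_sub - n_null)  # QPSK payload

wht = walsh_hadamard(n_null)
best_papr, best_symbol = np.inf, None
for row in wht:                         # try each WHT row as the dummy sequence
    freq = np.concatenate([data, row.astype(complex)])  # dummies on null subcarriers
    symbol = np.fft.ifft(freq)          # OFDM modulation
    p = papr_db(symbol)
    if p < best_papr:
        best_papr, best_symbol = p, symbol

baseline = papr_db(np.fft.ifft(np.concatenate([data, np.zeros(n_null)])))
```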

  5. Optimized Scheduling Technique of Null Subcarriers for Peak Power Control in 3GPP LTE Downlink

    PubMed Central

    Park, Sang Kyu

    2014-01-01

    Orthogonal frequency division multiple access (OFDMA) is a key multiple access technique for the long term evolution (LTE) downlink. However, high peak-to-average power ratio (PAPR) can cause the degradation of power efficiency. The well-known PAPR reduction technique, dummy sequence insertion (DSI), can be a realistic solution because of its structural simplicity. However, the large usage of subcarriers for the dummy sequences may decrease the transmitted data rate in the DSI scheme. In this paper, a novel DSI scheme is applied to the LTE system. Firstly, we obtain the null subcarriers in single-input single-output (SISO) and multiple-input multiple-output (MIMO) systems, respectively; then, optimized dummy sequences are inserted into the obtained null subcarriers. Simulation results show that the Walsh-Hadamard transform (WHT) sequence is the best choice for the dummy sequence and that the ratio of 16 to 20 for the WHT and randomly generated sequences gives the maximum PAPR reduction performance. The near-optimal number of iterations is derived to prevent exhaustive iteration. It is also shown that there is no bit error rate (BER) degradation with the proposed technique in the LTE downlink system. PMID:24883376

  6. A modular DNA signal translator for the controlled release of a protein by an aptamer

    PubMed Central

    Beyer, Stefan; Simmel, Friedrich C.

    2006-01-01

    Owing to the intimate linkage of sequence and structure in nucleic acids, DNA is an extremely attractive molecule for the development of molecular devices, in particular when a combination of information processing and chemomechanical tasks is desired. Many of the previously demonstrated devices are driven by hybridization between DNA ‘effector’ strands and specific recognition sequences on the device. For applications it is of great interest to link several of such molecular devices together within artificial reaction cascades. Often it will not be possible to choose DNA sequences freely, e.g. when functional nucleic acids such as aptamers are used. In such cases translation of an arbitrary ‘input’ sequence into a desired effector sequence may be required. Here we demonstrate a molecular ‘translator’ for information encoded in DNA and show how it can be used to control the release of a protein by an aptamer using an arbitrarily chosen DNA input strand. The function of the translator is based on branch migration and the action of the endonuclease FokI. The modular design of the translator facilitates the adaptation of the device to various input or output sequences. PMID:16547201

  7. Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

    The Rover Sequencing and Visualization Program (RSVP) is the software tool for use in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight-code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion to command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of rover-predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (IDD) operations. Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules.
The products of RSVP include command sequences for uplink that are stored in the Distributed Object Manager (DOM) and predicted rover state histories stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities. Thus, RSVP, being highly data driven, may be tailored to other missions with minimal effort. In addition, RSVP uses a distributed, message-passing architecture to allow multitasking, and collaborative visualization and sequence development by scattered team members.

  8. Analysis of Pre-Analytic Factors Affecting the Success of Clinical Next-Generation Sequencing of Solid Organ Malignancies.

    PubMed

    Chen, Hui; Luthra, Rajyalakshmi; Goswami, Rashmi S; Singh, Rajesh R; Roy-Chowdhuri, Sinchita

    2015-08-28

    Application of next-generation sequencing (NGS) technology to routine clinical practice has enabled characterization of personalized cancer genomes to identify patients likely to have a response to targeted therapy. The proper selection of tumor sample for downstream NGS-based mutational analysis is critical to generate accurate results and to guide therapeutic intervention. However, multiple pre-analytic factors come into play in determining the success of NGS testing. In this review, we discuss pre-analytic requirements for AmpliSeq PCR-based sequencing using the Ion Torrent Personal Genome Machine (PGM) (Life Technologies), an NGS sequencing platform that is often used by clinical laboratories for sequencing solid tumors because of its low input DNA requirement from formalin-fixed and paraffin-embedded tissue. The success of NGS mutational analysis is affected not only by the input DNA quantity but also by several other factors, including the specimen type, the DNA quality, and the tumor cellularity. Here, we review tissue requirements for solid tumor NGS-based mutational analysis, including procedure types, tissue types, tumor volume and fraction, decalcification, and treatment effects.

  9. Investigating the Role of Global Histogram Equalization Technique for 99mTechnetium-Methylene diphosphonate Bone Scan Image Enhancement.

    PubMed

    Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel, and hence, they have inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have only limited acceptance. In this study, we have investigated the effect of the GHE technique on 99mTc-MDP bone scan images. A set of 89 low-contrast 99mTc-MDP whole-body bone scan images were included in this study. These images were acquired with parallel hole collimation on a Symbia E gamma camera. The images were then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 denotes very poor and 5 the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to input and processed images. This technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference in the input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of nuclear medicine physicians. The GHE technique can be used on low-contrast bone scan images. In some cases, a histogram equalization technique in combination with some other postprocessing technique is useful.
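    The GHE step is the classic cumulative-distribution remapping. A generic sketch on a synthetic low-contrast image (not the clinical software used in the study):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Global histogram equalization: map grey levels through the normalized CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # CDF at the first occupied bin
    # classic mapping: stretch the CDF to the full output range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(img.dtype)
    return lut[img]

# Synthetic low-contrast "bone scan": counts squeezed into a narrow grey range
rng = np.random.default_rng(3)
low_contrast = rng.integers(100, 140, size=(128, 128)).astype(np.uint8)
equalized = histogram_equalize(low_contrast)
```

The oversaturation noted in the abstract is inherent to this mapping: sparsely populated grey levels get pushed toward the extremes of the output range.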

  10. Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.

    PubMed

    Nath, Abhigyan; Subbiah, Karthikeyan

    2015-12-01

    Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Experimentally identifying them is an expensive and time-consuming process. The computational methods based on sequence similarity for allocating putative members to this family also remain elusive due to the low sequence similarity existing among the members of this family. Consequently, machine learning methods become a viable alternative for their prediction by using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. A near perfect learning can be achieved by training the model with diverse types of input instances belonging to the different regions of the entire input space. Furthermore, the prediction performance can be improved through balancing the training set, as imbalanced data sets tend to produce a prediction bias towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without any classification bias through diversified and balanced training sets, and (ii) enhanced prediction accuracy by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we have first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to the boosted random forest algorithm (which produced greater sensitivity) and the K-nearest neighbour algorithm (which produced greater specificity) to achieve better predictive performance than that of the individual base classifiers. The performance of the learned models trained on the K-means-preprocessed training set is far better than that of models trained on randomly generated training sets. The proposed method achieved a sensitivity of 90.6%, specificity of 91.4% and accuracy of 91.0% on the first test set and sensitivity of 92.9%, specificity of 96.2% and accuracy of 94.7% on the second blind test set. These results have established that diversifying the training set improves the performance of predictive models through superior generalization ability, and balancing the training set improves prediction accuracy. For smaller data sets, unsupervised K-means-based sampling can be a more effective technique for increasing generalization than the usual random splitting method. Copyright © 2015 Elsevier Ltd. All rights reserved.
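    The K-means-based diversified and balanced sampling idea can be sketched as follows. This is a toy illustration (cluster count, data, and the deterministic farthest-point initialization are our choices, not the paper's setup):

```python
import numpy as np

def init_centers(X, k):
    """Farthest-point initialization: deterministic and spreads seeds across regions."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    return np.array(centers)

def kmeans(X, k, n_iter=20):
    """Plain Lloyd's algorithm returning a cluster label per input pattern."""
    centers = init_centers(X, k).copy()
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def balanced_training_set(X, k, per_cluster, seed=0):
    """Draw an equal number of patterns from each K-means cluster,
    instead of splitting the training set at random."""
    rng = np.random.default_rng(seed)
    labels = kmeans(X, k)
    picks = []
    for j in range(k):
        idx = np.flatnonzero(labels == j)
        picks.append(rng.choice(idx, per_cluster, replace=len(idx) < per_cluster))
    return np.concatenate(picks)

# Three synthetic "regions of the input space" with very unequal sizes
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(c, 0.2, (n, 2))
               for c, n in [((0, 0), 200), ((4, 0), 40), ((0, 4), 10)]])
train_idx = balanced_training_set(X, k=3, per_cluster=10)
```

Each region contributes the same number of training patterns regardless of its size, which is the balancing effect the abstract attributes to the method.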

  11. Comparison of Free-Breathing With Navigator-Triggered Technique in Diffusion Weighted Imaging for Evaluation of Small Hepatocellular Carcinoma: Effect on Image Quality and Intravoxel Incoherent Motion Parameters.

    PubMed

    Shan, Yan; Zeng, Meng-su; Liu, Kai; Miao, Xi-Yin; Lin, Jiang; Fu, Cai xia; Xu, Peng-ju

    2015-01-01

    To evaluate the effect on image quality and intravoxel incoherent motion (IVIM) parameters of small hepatocellular carcinoma (HCC) from choice of either free-breathing (FB) or navigator-triggered (NT) diffusion-weighted (DW) imaging. Thirty patients with 37 small HCCs underwent IVIM DW imaging using 12 b values (0-800 s/mm²) with two sequences: NT and FB. A biexponential analysis with the Bayesian method yielded true diffusion coefficient (D), pseudodiffusion coefficient (D*), and perfusion fraction (f) in small HCCs and liver parenchyma. Apparent diffusion coefficient (ADC) was also calculated. The acquisition time and image quality scores were assessed for the 2 sequences. An independent-sample t test was used to compare image quality, signal intensity ratio, IVIM parameters, and ADC values between the 2 sequences; reproducibility of IVIM parameters and ADC values between the 2 sequences was assessed with the Bland-Altman method (BA-LA). Image quality with the NT sequence was superior to that with FB acquisition (P = 0.02). The mean acquisition time for the FB scheme was shorter than that of the NT sequence (6 minutes 14 seconds vs 10 minutes 21 seconds ± 10 seconds; P < 0.01). The signal intensity ratio of small HCCs did not vary significantly between the 2 sequences. The ADC and IVIM parameters from the 2 sequences showed no significant difference. Reproducibility of the D* and f parameters in small HCC was poor (BA-LA: 95% confidence interval, -180.8% to 189.2% for D* and -133.8% to 174.9% for f). A moderate reproducibility of the D and ADC parameters was observed (BA-LA: 95% confidence interval, -83.5% to 76.8% for D and -74.4% to 88.2% for ADC) between the 2 sequences. The NT DW imaging technique offers no advantage in IVIM parameter measurement of small HCC except better image quality, whereas the FB technique offers greater confidence in fitted diffusion parameters for matched acquisition periods.
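    The biexponential IVIM model underlying D, D*, and f can be illustrated on synthetic data. The study used a Bayesian fit; the simpler segmented fit below (log-linear regression over high b values, where the perfusion term has decayed) is a common stand-in and recovers D and f on noise-free data:

```python
import numpy as np

# Biexponential IVIM signal model: S(b) = S0 * ((1 - f) * exp(-b D) + f * exp(-b D*))
def ivim(b, s0, f, d, d_star):
    return s0 * ((1 - f) * np.exp(-b * d) + f * np.exp(-b * d_star))

b_values = np.array([0, 25, 50, 75, 100, 150, 200, 300, 400, 500, 600, 800])  # s/mm²
true = dict(s0=1.0, f=0.25, d=1.0e-3, d_star=50e-3)   # illustrative tissue values
signal = ivim(b_values, **true)

# Segmented fit: for b >> 1/D*, the perfusion term has decayed, so
# log S ≈ log(S0 (1 - f)) - b D, a straight line in b.
high = b_values >= 200
slope, intercept = np.polyfit(b_values[high], np.log(signal[high]), 1)
d_est = -slope
f_est = 1 - np.exp(intercept) / signal[0]   # since S(0) = S0
```

The poor reproducibility of D* and f reported above is consistent with this structure: those parameters only influence the few low-b points, so they are far more noise-sensitive than D.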

  12. Discrete Cosine Transform Image Coding With Sliding Block Codes

    NASA Astrophysics Data System (ADS)

    Divakaran, Ajay; Pearlman, William A.

    1989-11-01

    A transform trellis coding scheme for images is presented. A two-dimensional discrete cosine transform is applied to the image followed by a search on a trellis structured code. This code is a sliding block code that utilizes a constrained size reproduction alphabet. The image is divided into blocks by the transform coding. The non-stationarity of the image is counteracted by grouping these blocks in clusters through a clustering algorithm, and then encoding the clusters separately. Mandela-ordered sequences are formed from each cluster, i.e., identically indexed coefficients from each block are grouped together to form one-dimensional sequences. A separate search ensues on each of these Mandela-ordered sequences. Padding sequences are used to improve the trellis search fidelity. The padding sequences absorb the error caused by the building up of the trellis to full size. The simulations were carried out on a 256×256 image ('LENA'). The results are comparable to those of existing schemes. The visual quality of the image is enhanced considerably by the padding and clustering.
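    The block DCT and the Mandela ordering are straightforward to sketch (the trellis search itself is not reproduced). A minimal version, with block size and image size chosen for illustration:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix, so the 2-D transform of a block is C @ block @ C.T."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0] = np.sqrt(1.0 / n)
    return c

def mandela_order(image, bs):
    """Split the image into bs x bs blocks, transform each, then group
    identically indexed coefficients across blocks into 1-D sequences."""
    c = dct_matrix(bs)
    h, w = image.shape
    blocks = (image.reshape(h // bs, bs, w // bs, bs)
                   .transpose(0, 2, 1, 3)
                   .reshape(-1, bs, bs))
    coeffs = np.einsum('ij,bjk,lk->bil', c, blocks, c)   # per-block 2-D DCT
    # Mandela ordering: sequence (u, v) holds coefficient (u, v) of every block
    return coeffs.transpose(1, 2, 0).reshape(bs * bs, -1)

rng = np.random.default_rng(5)
img = rng.random((32, 32))
seqs = mandela_order(img, bs=8)        # 64 sequences, one value per block
```

Each row of `seqs` collects one frequency across all blocks, which is what makes a separate one-dimensional trellis search per coefficient index possible.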

  13. Cassini Imaging Science: First Results at Saturn

    NASA Astrophysics Data System (ADS)

    Porco, C. C.

    The Cassini Imaging Science experiment at Saturn will commence in early February, 2004 -- five months before Cassini's arrival at Saturn. Approach observations consist of repeated multi-spectral `movie' sequences of Saturn and its rings, image sequences designed to search for previously unseen satellites between the outer edge of the ring system and the orbit of Hyperion, images of known satellites for orbit refinement, observations of Phoebe during Cassini's closest approach to the satellite, and repeated multi-spectral `movie' sequences of Titan to detect and track clouds (for wind determination) and to sense the surface. During Saturn Orbit Insertion, the highest resolution images (~ 100 m) obtained during the whole orbital tour will be collected of the dark side of the rings. Finally, imaging sequences are planned for Cassini's first Titan flyby, on July 2, from a distance of ~ 350,000 km, yielding an image scale of ~ 2.1 km on the South polar region. The highlights of these observation sequences will be presented.

  14. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise filtering method based on local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected and the mask sizes that produced images with significant loss of clinical details in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had the input image and its corresponding three processed images with 3, 5, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between images processed with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality processed with the 5- and 7-pixel masks at P=0.00528. The identified optimal mask size to produce a good smooth image was found to be 7 pixels. The best mask size for the Jong-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-methylene diphosphonate bone scan images with the highest acceptable smoothness.
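    A minimal sketch of a local-statistics (Lee-type) filter with a 7 × 7 mask, assuming additive noise with a crudely estimated global variance (the clinical acquisition and evaluation protocol above are not reproduced):

```python
import numpy as np

def lee_filter(img, win=7, noise_var=None):
    """Local-statistics filter: out = mu + k * (img - mu), where k weighs the
    local signal variance against the noise variance (k -> 0 in flat regions)."""
    pad = win // 2
    p = np.pad(img.astype(float), pad, mode='reflect')
    mu = np.zeros(img.shape, dtype=float)
    m2 = np.zeros(img.shape, dtype=float)
    # accumulate local mean and mean-square over the win x win neighbourhood
    for dy in range(win):
        for dx in range(win):
            sub = p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            mu += sub
            m2 += sub ** 2
    mu /= win * win
    var = m2 / (win * win) - mu ** 2
    if noise_var is None:
        noise_var = np.median(var)        # crude global noise estimate
    k = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0, 1)
    return mu + k * (img - mu)

rng = np.random.default_rng(6)
flat = np.full((64, 64), 50.0)
noisy = flat + rng.normal(0, 5, flat.shape)   # stand-in for low-count noise
smooth = lee_filter(noisy, win=7)
```

In flat regions the local variance is close to the noise variance, so k is near zero and the output approaches the local mean; near edges the local variance is large, k approaches one, and the edge is preserved.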

  15. Two-dimensional PCA-based human gait identification

    NASA Astrophysics Data System (ADS)

    Chen, Jinyan; Wu, Rongteng

    2012-11-01

    Automatically recognizing people through visual surveillance is necessary for public security reasons. Human gait-based identification focuses on recognizing a person automatically from video of his or her walking, using computer vision and image processing approaches. As a potential biometric measure, human gait identification has attracted more and more researchers. Current human gait identification methods can be divided into two categories: model-based methods and motion-based methods. In this paper, a human gait identification method based on two-dimensional principal component analysis (2DPCA) and temporal-space analysis is proposed. Using background estimation and image subtraction, we obtain a binary image sequence from the surveillance video. By comparing two adjacent images in the gait image sequence, we obtain a sequence of binary difference images. Every binary difference image indicates the body's moving mode during walking. We extract the temporal-space features from the difference image sequence as follows: projecting one difference image onto the Y axis or X axis yields two vectors; projecting every difference image in the sequence in this way yields two matrices, which together characterize the style of one walk. Then 2DPCA is used to transform these two matrices into two vectors while keeping the maximum separability. Finally, the similarity of two human gaits is calculated as the Euclidean distance between the two vectors. The performance of our method is illustrated using the CASIA Gait Database.
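    The 2DPCA projection and the Euclidean matching can be sketched as follows. This is a toy illustration on synthetic "temporal-space matrices" (two walkers with different column patterns), not the CASIA pipeline:

```python
import numpy as np

def fit_2dpca(images, k):
    """2DPCA: eigenvectors of the image covariance G = E[(A - mean)^T (A - mean)].
    Images are projected as Y = A @ W, keeping the k leading directions."""
    mean = images[0] * 0 + np.mean(images, axis=0)
    g = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    vals, vecs = np.linalg.eigh(g)               # eigenvalues in ascending order
    return vecs[:, ::-1][:, :k]                  # top-k columns

def project(image, w):
    return image @ w

# Two walkers whose projection matrices carry different column patterns
rng = np.random.default_rng(7)
walker_a = [np.outer(np.ones(20), np.sin(np.arange(15))) + rng.normal(0, .1, (20, 15))
            for _ in range(10)]
walker_b = [np.outer(np.ones(20), np.cos(np.arange(15))) + rng.normal(0, .1, (20, 15))
            for _ in range(10)]
w = fit_2dpca(walker_a + walker_b, k=3)

# Nearest-neighbour matching by Euclidean distance between projected features
probe = project(walker_a[0], w)
d_same = np.linalg.norm(probe - project(walker_a[1], w))
d_diff = np.linalg.norm(probe - project(walker_b[0], w))
```

Unlike classical PCA, 2DPCA operates on the image matrices directly, so no vectorization of each frame is needed before the eigen-decomposition.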

  16. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 × 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.

  17. Full-color stereoscopic single-pixel camera based on DMD technology

    NASA Astrophysics Data System (ADS)

    Salvador-Balaguer, Eva; Clemente, Pere; Tajahuerce, Enrique; Pla, Filiberto; Lancis, Jesús

    2017-02-01

    Imaging systems based on microstructured illumination and single-pixel detection offer several advantages over conventional imaging techniques. They are an effective method for imaging through scattering media even in the dynamic case. They work efficiently under low light levels, and the simplicity of the detector makes it easy to design imaging systems working out of the visible spectrum and to acquire multidimensional information. In particular, several approaches have been proposed to record 3D information. The technique is based on sampling the object with a sequence of microstructured light patterns codified onto a programmable spatial light modulator while light intensity is measured with a single-pixel detector. The image is retrieved computationally from the photocurrent fluctuations provided by the detector. In this contribution we describe an optical system able to produce full-color stereoscopic images by using few and simple optoelectronic components. In our setup we use an off-the-shelf digital light projector (DLP) based on a digital micromirror device (DMD) to generate the light patterns. To capture the color of the scene we take advantage of the codification procedure used by the DLP for color video projection. To record stereoscopic views we use a 90° beam splitter and two mirrors, allowing us to project the patterns from two different viewpoints. By using a single monochromatic photodiode we obtain a pair of color images that can be used as input in a 3-D display. To reduce the time needed to project the patterns, we use a compressive sampling algorithm. Experimental results are shown.

  18. Analysis of simulated image sequences from sensors for restricted-visibility operations

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar

    1991-01-01

    A real-time model of the visible output from a 94 GHz sensor, based on a radiometric simulation of the sensor, was developed. A sequence of images as seen from an aircraft as it approaches for landing was simulated using this model. Thirty frames from this sequence of 200 × 200 pixel images were analyzed to identify and track objects in the image using the Cantata image processing package within the visual programming environment provided by the Khoros software system. The image analysis operations are described.

  19. CCCT - NCTN Steering Committees - Clinical Imaging

    Cancer.gov

    The Clinical Imaging Steering Committee serves as a forum for the extramural imaging and oncology communities to provide strategic input to the NCI regarding its significant investment in imaging activities in clinical trials.

  20. Oscillatory dynamics and place field maps reflect hippocampal ensemble processing of sequence and place memory under NMDA receptor control.

    PubMed

    Cabral, Henrique O; Vinck, Martin; Fouquet, Celine; Pennartz, Cyriel M A; Rondi-Reig, Laure; Battaglia, Francesco P

    2014-01-22

    Place coding in the hippocampus requires flexible combination of sensory inputs (e.g., environmental and self-motion information) with memory of past events. We show that mouse CA1 hippocampal spatial representations may either be anchored to external landmarks (place memory) or reflect memorized sequences of cell assemblies depending on the behavioral strategy spontaneously selected. These computational modalities correspond to different CA1 dynamical states, as expressed by theta and low- and high-frequency gamma oscillations, when switching from place to sequence memory-based processing. These changes are consistent with a shift from entorhinal to CA3 input dominance on CA1. In mice with a deletion of forebrain NMDA receptors, the ability of place cells to maintain a map based on sequence memory is selectively impaired and oscillatory dynamics are correspondingly altered, suggesting that oscillations contribute to selecting behaviorally appropriate computations in the hippocampus and that NMDA receptors are crucial for this function. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Rapid Diagnostics of Onboard Sequences

    NASA Technical Reports Server (NTRS)

    Starbird, Thomas W.; Morris, John R.; Shams, Khawaja S.; Maimone, Mark W.

    2012-01-01

    Keeping track of sequences onboard a spacecraft is challenging. When reviewing Event Verification Records (EVRs) of sequence executions on the Mars Exploration Rover (MER), operators often found themselves wondering which version of a named sequence the EVR corresponded to. The lack of this information drastically impacts the operators' diagnostic capabilities as well as their situational awareness with respect to the commands the spacecraft has executed, since the EVRs do not provide argument values or explanatory comments. Having this information immediately available can be instrumental in diagnosing critical events and can significantly enhance the overall safety of the spacecraft. This software provides an auditing capability that can eliminate that uncertainty while diagnosing critical conditions. Furthermore, the RESTful interface provides a simple way for sequencing tools to automatically retrieve binary compiled sequence SCMFs (Space Command Message Files) on demand. It also enables developers to change the underlying database while maintaining the same interface to the existing applications. The logging capabilities are also beneficial to operators when they are trying to recall how they solved a similar problem many days ago: this software enables automatic recovery of SCMF and RML (Robot Markup Language) sequence files directly from the command EVRs, eliminating the need for people to find and validate the corresponding sequences. To address the lack of auditing capability for sequences onboard a spacecraft during earlier missions, extensive logging support was added on the Mars Science Laboratory (MSL) sequencing server. This server is responsible for generating all MSL binary SCMFs from RML input sequences. The sequencing server logs every SCMF it generates into a MySQL database, as well as the high-level RML file and dictionary name inputs used to create the SCMF.
The SCMF is then indexed by a hash value that is automatically included in all command EVRs by the onboard flight software. Both the binary SCMF result and the RML input file can then be retrieved simply by specifying the hash to a RESTful web interface. This interface enables command line tools as well as large sophisticated programs to download the SCMFs and RMLs on demand from the database, enabling a vast array of tools to be built on top of it. One such command line tool can retrieve and display RML files, or annotate a list of EVRs by interleaving them with the original sequence commands. This software has been integrated with the MSL sequencing pipeline, where it will serve sequences useful in diagnostics, debugging, and situational awareness throughout the mission.
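The hash-keyed auditing flow described above can be sketched in miniature. All names, the digest scheme, and the in-memory store here are hypothetical stand-ins; the real system logs to a MySQL database and serves files over a RESTful web service:

```python
import hashlib

class SequenceArchive:
    """Toy sketch: compiled sequences logged under a hash, recoverable later
    from that hash alone (as it would appear in a command EVR)."""

    def __init__(self):
        self._store = {}  # stands in for the MySQL database

    def log_scmf(self, scmf_bytes, rml_source, dictionary):
        # digest plays the role of the hash the flight software embeds in EVRs
        digest = hashlib.sha256(scmf_bytes).hexdigest()[:16]
        self._store[digest] = {"scmf": scmf_bytes,
                               "rml": rml_source,
                               "dictionary": dictionary}
        return digest

    def lookup(self, evr_hash):
        """What a hypothetical GET /sequences/<hash> endpoint would return."""
        return self._store.get(evr_hash)

archive = SequenceArchive()
h = archive.log_scmf(b"\x01\x02\x03", "cmd DRIVE(2.5)", "msl_dict_v7")
record = archive.lookup(h)
print(record["rml"])
```

An EVR annotator would then only need the hash from each EVR line to interleave the original commented sequence commands with the telemetry.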

  2. Research on respiratory motion correction method based on liver contrast-enhanced ultrasound images of single mode

    NASA Astrophysics Data System (ADS)

    Zhang, Ji; Li, Tao; Zheng, Shiqiang; Li, Yiyong

    2015-03-01

    The aim was to reduce the effects of respiratory motion in quantitative analysis based on liver contrast-enhanced ultrasound (CEUS) image sequences of single mode. The image gating method and the iterative registration method using a model image were adopted to register liver CEUS image sequences of single mode. The feasibility of the proposed respiratory motion correction method was explored preliminarily using 10 hepatocellular carcinoma CEUS cases. The positions of the lesions in the time series of 2D ultrasound images after correction were visually evaluated. Before and after correction, the quality of the weighted sum of transit time (WSTT) parametric images was also compared, in terms of accuracy and spatial resolution. For the corrected and uncorrected sequences, the mean deviation values (mDVs) of time-intensity curve (TIC) fitting derived from the CEUS sequences were measured. After the correction, the positions of the lesions in the time series of 2D ultrasound images were almost invariant. In contrast, the lesions in the uncorrected images all shifted noticeably. The quality of the WSTT parametric maps derived from liver CEUS image sequences was greatly improved. Moreover, the mDVs of TIC fitting derived from CEUS sequences after the correction decreased by an average of 48.48±42.15. The proposed correction method could improve the accuracy of quantitative analysis based on liver CEUS image sequences of single mode, which would help in enhancing the differential diagnosis efficiency of liver tumors.
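One of the two ingredients mentioned above, image gating against a model image, might be sketched as follows. This is a toy illustration on random frames, not the authors' algorithm: frames are kept only when their correlation with the model image exceeds a threshold, so retained frames belong to the same respiratory phase:

```python
import numpy as np

rng = np.random.default_rng(1)

def gate_frames(frames, model, threshold=0.9):
    """Return indices of frames similar enough to the model image."""
    kept = []
    for i, frame in enumerate(frames):
        # correlation coefficient as the similarity measure
        cc = np.corrcoef(frame.ravel(), model.ravel())[0, 1]
        if cc >= threshold:
            kept.append(i)
    return kept

model = rng.random((16, 16))
shifted = np.roll(model, 4, axis=0)             # simulates respiratory shift
frames = [model + 0.01 * rng.random((16, 16)),  # same phase as the model
          shifted,                              # different phase
          model + 0.02 * rng.random((16, 16))]  # same phase again
print(gate_frames(frames, model))
```

The iterative registration step of the paper would then further align the gated frames before TIC fitting.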

  3. Factor analysis for delineation of organ structures, creation of in- and output functions, and standardization of multicenter kinetic modeling

    NASA Astrophysics Data System (ADS)

    Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.

    1999-05-01

    PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for the definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity-curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be simultaneously generated. The method is simple, requires no sophisticated operator interaction and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
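As a rough illustration of the idea, factor analysis of a dynamic study can be approximated by a non-negative factorization of the pixels-by-time matrix: the factor curves play the role of the blood (input) and tissue (output) functions. The curves, the mixing weights, and the plain multiplicative-update NMF below are stand-ins for the paper's actual FA procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy dynamic study: every pixel's time-activity curve (TAC) is a mixture
# of a "blood" factor and a "tissue" factor (curves below are invented).
t = np.linspace(0, 10, 40)
blood_tac = np.exp(-t)              # fast washout
tissue_tac = 1 - np.exp(-0.5 * t)   # slow uptake
mixing = rng.random((100, 2))       # per-pixel factor weights
data = mixing @ np.vstack([blood_tac, tissue_tac])  # pixels x time

def nmf(V, k, iters=500, eps=1e-9):
    """Multiplicative-update NMF: V ~= W @ H, rows of H are factor TACs."""
    W = rng.random((V.shape[0], k)) + eps
    H = rng.random((k, V.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(data, k=2)
print("relative error:", np.linalg.norm(data - W @ H) / np.linalg.norm(data))
```

The recovered rows of `H` would then be fed to the kinetic model in place of serially sampled plasma curves.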

  4. The Engineer Topographic Laboratories /ETL/ hybrid optical/digital image processor

    NASA Astrophysics Data System (ADS)

    Benton, J. R.; Corbett, F.; Tuft, R.

    1980-01-01

    An optical-digital processor for generalized image enhancement and filtering is described. The optical subsystem is a two-PROM Fourier filter processor. Input imagery is isolated, scaled, and imaged onto the first PROM; this input plane acts like a liquid gate and serves as an incoherent-to-coherent converter. The image is transformed onto a second PROM which also serves as a filter medium; filters are written onto the second PROM with a laser scanner in real time. A solid state CCTV camera records the filtered image, which is then digitized and stored in a digital image processor. The operator can then manipulate the filtered image using the gray scale and color remapping capabilities of the video processor as well as the digital processing capabilities of the minicomputer.

  5. Image segmentation via foreground and background semantic descriptors

    NASA Astrophysics Data System (ADS)

    Yuan, Ding; Qiang, Jingjing; Yin, Jihao

    2017-09-01

    In the field of image processing, it has been a challenging task to obtain a complete foreground that is not uniform in color or texture. Unlike other methods, which segment the image by only using low-level features, we present a segmentation framework, in which high-level visual features, such as semantic information, are used. First, the initial semantic labels were obtained by using the nonparametric method. Then, a subset of the training images, with a similar foreground to the input image, was selected. Consequently, the semantic labels could be further refined according to the subset. Finally, the input image was segmented by integrating the object affinity and refined semantic labels. State-of-the-art performance was achieved in experiments with the challenging MSRC 21 dataset.

  6. HotSpot Wizard 3.0: web server for automated design of mutations and smart libraries based on sequence input information.

    PubMed

    Sumbalova, Lenka; Stourac, Jan; Martinek, Tomas; Bednar, David; Damborsky, Jiri

    2018-05-23

    HotSpot Wizard is a web server used for the automated identification of hotspots in semi-rational protein design to give improved protein stability, catalytic activity, substrate specificity and enantioselectivity. Since there are three orders of magnitude fewer protein structures than sequences in bioinformatic databases, the major limitation to the usability of previous versions was the requirement for the protein structure to be a compulsory input for the calculation. HotSpot Wizard 3.0 now accepts the protein sequence as input data. The protein structure for the query sequence is obtained either from eight repositories of homology models or is modeled using Modeller and I-Tasser. The quality of the models is then evaluated using three quality assessment tools: WHAT_CHECK, PROCHECK and MolProbity. During follow-up analyses, the system automatically warns the users whenever they attempt to redesign poorly predicted parts of their homology models. The second main limitation of HotSpot Wizard's predictions is that it identifies suitable positions for mutagenesis, but does not provide any reliable advice on particular substitutions. A new module for the estimation of thermodynamic stabilities using the Rosetta and FoldX suites has been introduced which prevents destabilizing mutations among pre-selected variants entering experimental testing. HotSpot Wizard is freely available at http://loschmidt.chemi.muni.cz/hotspotwizard.

  7. Rapid Gradient-Echo Imaging

    PubMed Central

    Hargreaves, Brian

    2012-01-01

    Gradient echo sequences are widely used in magnetic resonance imaging (MRI) for numerous applications ranging from angiography to perfusion to functional MRI. Compared with spin-echo techniques, the very short repetition times of gradient-echo methods enable very rapid 2D and 3D imaging, but also lead to complicated “steady states.” Signal and contrast behavior can be described graphically and mathematically, and depends strongly on the type of spoiling: fully balanced (no spoiling), gradient spoiling, or RF-spoiling. These spoiling options trade off between high signal and pure T1 contrast while the flip angle also affects image contrast in all cases, both of which can be demonstrated theoretically and in image examples. As with spin-echo sequences, magnetization preparation can be added to gradient-echo sequences to alter image contrast. Gradient echo sequences are widely used for numerous applications such as 3D perfusion imaging, functional MRI, cardiac imaging and MR angiography. PMID:23097185

  8. Overview of deep learning in medical imaging.

    PubMed

    Suzuki, Kenji

    2017-09-01

    The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what was changed in machine learning before and after the introduction of deep learning, (2) what is the source of the power of deep learning, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the learning of image data directly without object segmentation or feature extraction; thus, it is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML) including deep learning has a long history, but recently gained popularity due to the use of the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. 
In our experience, MTANNs were substantially more efficient in their development, had a higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.

  9. Automatic dynamic range adjustment for ultrasound B-mode imaging.

    PubMed

    Lee, Yeonhwa; Kang, Jinbum; Yoo, Yangmo

    2015-02-01

    In medical ultrasound imaging, dynamic range (DR) is defined as the difference between the maximum and minimum values of the signal to be displayed, and it is one of the most essential parameters determining image quality. Typically, DR is set to a fixed value and adjusted manually by operators, which leads to low clinical productivity and high user dependency. Furthermore, in 3D ultrasound imaging, DR values cannot be adjusted during 3D data acquisition. A histogram matching method, which equalizes the histogram of an input image based on that of a reference image, can be applied to determine the DR value; however, it can lead to an over-contrasted image. In this paper, a new Automatic Dynamic Range Adjustment (ADRA) method is presented that adaptively adjusts the DR value so that input images become similar to a reference image. The proposed ADRA method uses the distance ratio between the log average and each extreme value of a reference image. To evaluate the performance of the ADRA method, the similarity between the reference and input images was measured by computing a correlation coefficient (CC). In in vivo experiments, applying the ADRA method increased the CC values from 0.6872 to 0.9870 and from 0.9274 to 0.9939 for kidney and liver data, respectively, compared to the fixed-DR case. In addition, the proposed ADRA method was shown to outperform the histogram matching method on in vivo liver and kidney data. When using 3D abdominal data with 70 frames, while the CC value from the ADRA method increased only slightly (i.e., 0.6%), the proposed method showed improved image quality in the c-plane compared to its fixed counterpart, which suffered from a shadow artifact. These results indicate that the proposed method can enhance image quality in 2D and 3D ultrasound B-mode imaging by improving the similarity between the reference and input images while eliminating unnecessary manual interaction by the user.
Copyright © 2014 Elsevier B.V. All rights reserved.
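One possible reading of the distance-ratio idea can be sketched as follows. This is an interpretation for illustration only, not the published ADRA algorithm: the reference image is characterized by where its log-average sits between its extremes, and the input's display window is chosen so the same relative position holds. The correlation coefficient shown is the similarity metric the abstract reports:

```python
import numpy as np

rng = np.random.default_rng(4)

def log_mean(img):
    """Log-average brightness (offset avoids log of zero)."""
    return np.mean(np.log10(img + 1.0))

def adjust_dynamic_range(img, reference):
    # relative position of the reference's log-average between its extremes
    ref_log = np.log10(reference + 1.0)
    r_lo = (log_mean(reference) - ref_log.min()) / (ref_log.max() - ref_log.min())
    # place the input's log-average at the same relative position,
    # then map the window to [0, 1] for B-mode display
    img_log = np.log10(img + 1.0)
    span = img_log.max() - img_log.min()
    lo = log_mean(img) - r_lo * span
    return np.clip((img_log - lo) / span, 0.0, 1.0)

def correlation(a, b):
    """Similarity metric used for evaluation in the abstract (CC)."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

reference = rng.gamma(2.0, 50.0, size=(64, 64))   # synthetic envelope data
raw = rng.gamma(2.0, 80.0, size=(64, 64))
displayed = adjust_dynamic_range(raw, reference)
print(displayed.min(), displayed.max())
```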

  10. Application of MELCOR Code to a French PWR 900 MWe Severe Accident Sequence and Evaluation of Models Performance Focusing on In-Vessel Thermal Hydraulic Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    De Rosa, Felice

    2006-07-01

    Within the scope of the Severe Accident Network of Excellence Project (SARNET), funded by the European Union 6th Framework FISA (Fission Safety) Programme, one of the main tasks is the development and validation of the European Accident Source Term Evaluation Code (ASTEC Code). One of the reference codes used to compare ASTEC results, coming from experimental and Reactor Plant applications, is MELCOR. ENEA is a SARNET member and also an ASTEC and MELCOR user. During the first 18 months of this project, we performed a series of MELCOR and ASTEC calculations referring to a French PWR 900 MWe and to the accident sequence of 'Loss of Steam Generator (SG) Feedwater' (known as the H2 sequence in the French classification). H2 is an accident sequence substantially equivalent to a Station Blackout scenario, like a TMLB accident, with the only difference that in the H2 sequence the scram is forced to occur with a delay of 28 seconds. The main events during the accident sequence are a loss of normal and auxiliary SG feedwater (0 s), followed by a scram when the water level in the SG is equal to or less than 0.7 m (after 28 seconds). There is also a main coolant pumps trip when ΔTsat < 10 °C, a total opening of the three relief valves when Tric (core maximal outlet temperature) is above 603 K (330 °C), and accumulators isolation when primary pressure goes below 1.5 MPa (15 bar). Among many other points, it is worth noting that this was the first time that a MELCOR 1.8.5 input deck was available for a French PWR 900. The main ENEA effort in this period was devoted to preparing the MELCOR input deck using the code version v.1.8.5 (build QZ Oct 2000 with the latest patch 185003 Oct 2001). The input deck, completely new, was prepared taking into account the structure, data and same conditions as those found inside the ASTEC input decks.
The main goal of the work presented in this paper is to show where and when MELCOR provides sufficiently good results and why, in some cases mainly involving its specific models (candling, corium pool behaviour, etc.), the results were poorer. Future work will be the preparation of an input deck for the new MELCOR 1.8.6 and a code-to-code comparison with ASTEC v1.2 rev. 1. (author)

  11. MSuPDA: A Memory Efficient Algorithm for Sequence Alignment.

    PubMed

    Khan, Mohammad Ibrahim; Kamal, Md Sarwar; Chowdhury, Linkon

    2016-03-01

    Space complexity is a million-dollar question in DNA sequence alignment. In this regard, memory saving under pushdown automata can help to reduce the space occupied in computer memory. In the proposed process, an anchor seed (AS) is selected from a given data set of nucleotide base pairs for local sequence alignment. A quick splitting technique separates the AS from all the DNA genome segments. The selected AS is placed in the pushdown automaton's (PDA) input unit, while the whole DNA genome segments are placed on the PDA's stack. The AS in the input unit is matched against the DNA genome segments on the stack, and matches, mismatches and indels of nucleotides are popped from the stack under the PDA's control unit. Each POP operation on the stack frees the memory cell occupied by the corresponding nucleotide base pair.
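A heavily simplified sketch of the stack discipline described above (not the MSuPDA algorithm itself): the anchor seed sits in the input unit, the genome segment is pushed onto a stack, and every compared base is popped so its cell is released immediately:

```python
def pda_match(anchor_seed, genome_segment):
    """Compare an anchor seed against a genome segment via a PDA-style stack.

    Each comparison pops the stack, mimicking the immediate freeing of the
    memory cell that held the nucleotide.
    """
    stack = list(reversed(genome_segment))  # top of stack = first base
    events = []
    for base in anchor_seed:
        if not stack:
            events.append(("indel", base, None))  # seed longer than segment
            continue
        top = stack.pop()  # POP frees the cell holding this nucleotide
        events.append(("match" if base == top else "mismatch", base, top))
    return events

print(pda_match("ACGT", "ACCT"))
```

A real implementation would combine this with the seed-selection and splitting steps, and score the match/mismatch/indel events for local alignment.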

  12. Self-aligning and compressed autosophy video databases

    NASA Astrophysics Data System (ADS)

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' which permits the growing of self-assembling data networks in a computer memory similar to the growing of 'data crystals' or 'data trees' without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage will result in enormous data compression because each pattern fragment is only stored once. Pattern recognition in the text or image files is greatly simplified by the peculiar omni-dimensional storage method. Video databases will absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning where the input data will determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  13. Development and Validation of the Suprathreshold Stochastic Resonance-Based Image Processing Method for the Detection of Abdomino-pelvic Tumor on PET/CT Scans.

    PubMed

    Saroha, Kartik; Pandey, Anil Kumar; Sharma, Param Dev; Behera, Abhishek; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    The detection of abdomino-pelvic tumors embedded in or nearby radioactive urine containing 18F-FDG activity is a challenging task on PET/CT scan. In this study, we propose and validate the suprathreshold stochastic resonance-based image processing method for the detection of these tumors. The method consists of adding noise to the input image and then thresholding it, which creates one frame of an intermediate image. One hundred such frames were generated and averaged to get the final image. The method was implemented using MATLAB R2013b on a personal computer. The noisy image was generated using random Poisson variates corresponding to each pixel of the input image. In order to verify the method, 30 sets of pre-diuretic and corresponding post-diuretic PET/CT scan images (25 tumor images and 5 control images with no tumor) were included. For each pre-diuretic image (input image), 26 images (at threshold values equal to the mean counts multiplied by a constant factor ranging from 1.0 to 2.6 with an increment step of 0.1) were created and visually inspected, and the image that most closely matched the gold standard (the corresponding post-diuretic image) was selected as the final output image. These images were further evaluated by two nuclear medicine physicians. In 22 out of 25 images, the tumor was successfully detected. In the five control images, no false positives were reported. Thus, the empirical probability of detection of abdomino-pelvic tumors evaluates to 0.88. The proposed method was able to detect abdomino-pelvic tumors on pre-diuretic PET/CT scan with a high probability of success and no false positives.
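The noise-threshold-average step described above can be sketched as follows. The phantom counts and the threshold factor are made-up illustrative values, but the mechanics follow the abstract: per-pixel Poisson variates, a threshold at a multiple of the mean counts, and an average over 100 binary frames:

```python
import numpy as np

rng = np.random.default_rng(3)

def ssr_frame(image, c):
    """One intermediate frame: Poisson noise per pixel, then a threshold."""
    noisy = rng.poisson(image)                     # Poisson variate per pixel
    return (noisy > c * image.mean()).astype(float)

def ssr_image(image, c=1.5, n_frames=100):
    """Average of n_frames thresholded noisy frames (the final image)."""
    return np.mean([ssr_frame(image, c) for _ in range(n_frames)], axis=0)

# Toy phantom: a faint "tumor" (60 counts) on a urine-like background (50)
phantom = np.full((32, 32), 50.0)
phantom[12:20, 12:20] = 60.0
out = ssr_image(phantom, c=1.1)
print("tumor mean:", out[12:20, 12:20].mean(),
      "background mean:", out[:8, :8].mean())
```

Because the tumor pixels cross the threshold more often than background pixels, the averaged image shows enhanced contrast at the lesion even though a single thresholded frame is pure noise.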

  14. Category Induction via Distributional Analysis: Evidence from a Serial Reaction Time Task

    ERIC Educational Resources Information Center

    Hunt, Ruskin H.; Aslin, Richard N.

    2010-01-01

    Category formation lies at the heart of a number of higher-order behaviors, including language. We assessed the ability of human adults to learn, from distributional information alone, categories embedded in a sequence of input stimuli using a serial reaction time task. Artificial grammars generated corpora of input strings containing a…

  15. Implementing conventional logic unconventionally: photochromic molecular populations as registers and logic gates.

    PubMed

    Chaplin, J C; Russell, N A; Krasnogor, N

    2012-07-01

    In this paper we detail experimental methods to implement registers, logic gates and logic circuits using populations of photochromic molecules exposed to sequences of light pulses. Photochromic molecules are molecules with two or more stable states that can be switched reversibly between states by illuminating with appropriate wavelengths of radiation. Registers are implemented by using the concentration of molecules in each state in a given sample to represent an integer value. The register's value can then be read using the intensity of a fluorescence signal from the sample. A register with inputs in the form of light pulses has been used to implement 1-input/1-output and 2-input/1-output logic gates. A proof-of-concept logic circuit is also demonstrated which, coupled with the software workflow, describes the transition from a circuit design to the corresponding sequence of light pulses. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  16. Robot Task Commander with Extensible Programming Environment

    NASA Technical Reports Server (NTRS)

    Hart, Stephen W (Inventor); Wightman, Brian J (Inventor); Dinh, Duy Paul (Inventor); Yamokoski, John D. (Inventor); Gooding, Dustin R (Inventor)

    2014-01-01

    A system for developing distributed robot application-level software includes a robot having an associated control module which controls motion of the robot in response to a commanded task, and a robot task commander (RTC) in networked communication with the control module over a network transport layer (NTL). The RTC includes a script engine(s) and a GUI, with a processor and a centralized library of library blocks constructed from an interpretive computer programming code and having input and output connections. The GUI provides access to a Visual Programming Language (VPL) environment and a text editor. In executing a method, the VPL is opened, a task for the robot is built from the code library blocks, and data is assigned to input and output connections identifying input and output data for each block. A task sequence(s) is sent to the control module(s) over the NTL to command execution of the task.

  17. Using cellular automata to generate image representation for biological sequences.

    PubMed

    Xiao, X; Shao, S; Ding, Y; Huang, Z; Chen, X; Chou, K-C

    2005-02-01

    A novel approach to visualize biological sequences is developed based on cellular automata (Wolfram, S. Nature 1984, 311, 419-424), a set of discrete dynamical systems in which space and time are discrete. By transforming the symbolic sequence codes into digital codes, and using some optimal space-time evolvement rules of cellular automata, a biological sequence can be represented by a unique image, the so-called cellular automata image. Many important features, which are originally hidden in a long and complicated biological sequence, can be clearly revealed through its cellular automata image. With the number of biological sequences entering databanks rapidly increasing in the post-genomic era, it is anticipated that the cellular automata image will become a very useful vehicle for investigation into their key features, identification of their function, as well as revelation of their "fingerprint". It is anticipated that by using the concept of the pseudo amino acid composition (Chou, K.C. Proteins: Structure, Function, and Genetics, 2001, 43, 246-255), the cellular automata image approach can also be used to improve the quality of predicting protein attributes, such as structural class and subcellular location.
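The sequence-to-image idea can be sketched as follows. This is a hedged illustration: each base is encoded as 2 bits to form the first row, and an elementary CA rule evolves the image row by row. Rule 90 is an arbitrary choice here; the paper uses its own optimized space-time evolvement rules:

```python
import numpy as np

# 2-bit digital codes for the four bases (an illustrative encoding)
CODES = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

def ca_image(sequence, steps=32, rule=90):
    """Grow a binary cellular-automata image from a DNA sequence."""
    row = np.array([bit for base in sequence for bit in CODES[base]],
                   dtype=np.uint8)
    table = [(rule >> i) & 1 for i in range(8)]  # the rule's truth table
    image = [row]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)
        # each new cell is looked up from its (left, centre, right) triple
        row = np.array([table[(l << 2) | (c << 1) | r]
                        for l, c, r in zip(left, row, right)], dtype=np.uint8)
        image.append(row)
    return np.array(image)  # shape: (steps, 2 * len(sequence))

img = ca_image("ACGTACGTACGTACGT")
print(img.shape)
```

The resulting binary array is the "cellular automata image"; repeats and compositional biases in the sequence show up as visible texture in it.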

  18. Four-dimensional wavelet compression of arbitrarily sized echocardiographic data.

    PubMed

    Zeng, Li; Jansen, Christian P; Marsch, Stephan; Unser, Michael; Hunziker, Patrick R

    2002-09-01

    Wavelet-based methods have become the most popular for the compression of two-dimensional medical images and sequences. The standard implementations consider data sizes that are powers of two. There is also a large body of literature treating issues such as the choice of the "optimal" wavelets and the performance comparison of competing algorithms. With the advent of telemedicine, there is a strong incentive to extend these techniques to higher dimensional data such as dynamic three-dimensional (3-D) echocardiography [four-dimensional (4-D) datasets]. One of the practical difficulties is that the size of this data is often not a multiple of a power of two, which can lead to increased computational complexity and impaired compression power. Our contribution in this paper is to present a genuine 4-D extension of the well-known zerotree algorithm for arbitrarily sized data. The key component of our method is a one-dimensional wavelet algorithm that can handle arbitrarily sized input signals. The method uses a pair of symmetric/antisymmetric wavelets (10/6) together with some appropriate midpoint symmetry boundary conditions that reduce border artifacts. The zerotree structure is also adapted so that it can accommodate noneven data splitting. We have applied our method to the compression of real 3-D dynamic sequences from clinical cardiac ultrasound examinations. Our new algorithm compares very favorably with other more ad hoc adaptations (image extension and tiling) of the standard powers-of-two methods, in terms of both compression performance and computational cost. It is vastly superior to slice-by-slice wavelet encoding. This was seen not only in numerical image quality parameters but also in expert ratings, where significant improvement using the new approach could be documented. Our validation experiments show that one can safely compress 4-D data sets at ratios of 128:1 without compromising the diagnostic value of the images.
We also display some more extreme compression results at ratios of 2000:1, where key diagnostically relevant features are preserved.
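
    The key component described above is a one-dimensional wavelet step that accepts arbitrary-length input. A minimal sketch of the idea, using a plain Haar pair with a symmetric boundary extension instead of the paper's 10/6 symmetric/antisymmetric wavelets; function names are illustrative:

```python
import numpy as np

def haar_step(x):
    """One level of a Haar-like analysis that tolerates odd-length input.

    Odd-length signals are handled by symmetrically repeating the last
    sample, so no power-of-two padding is required. Illustrative only:
    the paper uses a 10/6 symmetric/antisymmetric wavelet pair, not Haar.
    """
    x = np.asarray(x, dtype=float)
    if len(x) % 2 == 1:                          # symmetric-style extension
        x = np.append(x, x[-1])
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)    # low-pass (averages)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)    # high-pass (differences)
    return approx, detail

def haar_inverse(approx, detail, orig_len):
    """Invert haar_step, trimming any sample added by the extension."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x[:orig_len]
```

    The round trip is exact for any length, which is the property that lets the zerotree structure accommodate the noneven data splits mentioned above.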

  19. Resilience to the contralateral visual field bias as a window into object representations

    PubMed Central

    Garcea, Frank E.; Kristensen, Stephanie; Almeida, Jorge; Mahon, Bradford Z.

    2016-01-01

    Viewing images of manipulable objects elicits differential blood oxygen level-dependent (BOLD) contrast across parietal and dorsal occipital areas of the human brain that support object-directed reaching, grasping, and complex object manipulation. However, it is unknown which object-selective regions of parietal cortex receive their principal inputs from the ventral object-processing pathway and which receive their inputs from the dorsal object-processing pathway. Parietal areas that receive their inputs from the ventral visual pathway, rather than from the dorsal stream, will have inputs that are already filtered through object categorization and identification processes. This predicts that parietal regions that receive inputs from the ventral visual pathway should exhibit object-selective responses that are resilient to contralateral visual field biases. To test this hypothesis, adult participants viewed images of tools and animals that were presented to the left or right visual fields during functional magnetic resonance imaging (fMRI). We found that the left inferior parietal lobule showed robust tool preferences independently of the visual field in which tool stimuli were presented. In contrast, a region in posterior parietal/dorsal occipital cortex in the right hemisphere exhibited an interaction between visual field and category: tool-preferences were strongest contralateral to the stimulus. These findings suggest that action knowledge accessed in the left inferior parietal lobule operates over inputs that are abstracted from the visual input and contingent on analysis by the ventral visual pathway, consistent with its putative role in supporting object manipulation knowledge. PMID:27160998

  20. Unexpected substrate specificity of T4 DNA ligase revealed by in vitro selection

    NASA Technical Reports Server (NTRS)

    Harada, Kazuo; Orgel, Leslie E.

    1993-01-01

    We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 DNA ligase. We find that the ensemble of selected sequences ligates about 50 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, many of the selected sequences failed to produce a match at or close to the ligation junction. None of the 20 selected oligomers that we sequenced produced a match two bases upstream from the ligation junction.

  1. MerCat: a versatile k-mer counter and diversity estimator for database-independent property analysis obtained from metagenomic and/or metatranscriptomic sequencing data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    White, Richard A.; Panyala, Ajay R.; Glass, Kevin A.

    MerCat is a parallel, highly scalable and modular software package for robust property analysis of features in next-generation sequencing data. MerCat inputs include assembled contigs and raw sequence reads from any platform, and its outputs are feature abundance count tables. MerCat allows for direct analysis of data properties without the reference-sequence-database dependency of search tools such as BLAST and/or DIAMOND commonly used for compositional analysis of whole-community shotgun sequencing (e.g. metagenomes and metatranscriptomes).
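
    The core counting step behind a k-mer tool like this can be sketched in a few lines. This is a minimal, single-process illustration only; MerCat itself is parallel, and the function name and ambiguous-base handling here are assumptions, not MerCat's API:

```python
from collections import Counter

def count_kmers(sequences, k):
    """Count all overlapping k-mers across a set of reads or contigs.

    A single-process sketch of the core counting step; a real tool
    would parallelize this and emit per-sample abundance tables.
    """
    counts = Counter()
    for seq in sequences:
        seq = seq.upper()
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if "N" not in kmer:          # skip k-mers with ambiguous bases
                counts[kmer] += 1
    return counts
```

    The resulting table is exactly the kind of database-independent feature-abundance representation the abstract describes.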

  2. Assessment of cerebral venous sinus thrombosis using T2*-weighted gradient echo magnetic resonance imaging sequences

    PubMed Central

    Bidar, Fatemeh; Faeghi, Fariborz; Ghorbani, Askar

    2016-01-01

    Background: The purpose of this study is to demonstrate the advantages of gradient echo (GRE) sequences in the detection and characterization of cerebral venous sinus thrombosis compared to conventional magnetic resonance sequences. Methods: A total of 17 patients with cerebral venous thrombosis (CVT) were evaluated using different magnetic resonance imaging (MRI) sequences. The MRI sequences included T1-weighted spin echo (SE) imaging, T2-weighted turbo SE (TSE), fluid attenuated inversion recovery (FLAIR), T2*-weighted conventional GRE, and diffusion weighted imaging (DWI). MR venography (MRV) images were obtained as the gold standard. Results: Venous sinus thrombosis was best detectable in T2*-weighted conventional GRE sequences in all patients except in one case. Venous thrombosis was undetectable in DWI. T2*-weighted GRE sequences were superior to T2-weighted TSE, T1-weighted SE, and FLAIR. Enhanced MRV was successful in displaying the location of thrombosis. Conclusion: T2*-weighted conventional GRE sequences are probably the best method for the assessment of cerebral venous sinus thrombosis. The mentioned method is non-invasive; therefore, it can be employed in the clinical evaluation of cerebral venous sinus thrombosis. PMID:27326365

  3. Omniview motionless camera orientation system

    NASA Technical Reports Server (NTRS)

    Martin, H. Lee (Inventor); Kuban, Daniel P. (Inventor); Zimmermann, Steven D. (Inventor); Busko, Nicholas (Inventor)

    2010-01-01

    An apparatus and method is provided for converting digital images for use in an imaging system. The apparatus includes a data memory which stores digital data representing an image having a circular or spherical field of view such as an image captured by a fish-eye lens, a control input for receiving a signal for selecting a portion of the image, and a converter responsive to the control input for converting digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. Various methods include the steps of storing digital data representing an image having a circular or spherical field of view, selecting a portion of the image, and converting the stored digital data corresponding to the selected portion into digital data representing a planar image for subsequent display. In various embodiments, the data converter and data conversion step may use an orthogonal set of transformation algorithms.

  4. Human Motion Capture Data Tailored Transform Coding.

    PubMed

    Junhui Hou; Lap-Pui Chau; Magnenat-Thalmann, Nadia; Ying He

    2015-07-01

    Human motion capture (mocap) is a widely used technique for digitalizing human movements. With growing usage, compressing mocap data has received increasing attention, since compact data size enables efficient storage and transmission. Our analysis shows that mocap data have some unique characteristics that distinguish themselves from images and videos. Therefore, directly borrowing image or video compression techniques, such as discrete cosine transform, does not work well. In this paper, we propose a novel mocap-tailored transform coding algorithm that takes advantage of these features. Our algorithm segments the input mocap sequences into clips, which are represented in 2D matrices. Then it computes a set of data-dependent orthogonal bases to transform the matrices to frequency domain, in which the transform coefficients have significantly less dependency. Finally, the compression is obtained by entropy coding of the quantized coefficients and the bases. Our method has low computational cost and can be easily extended to compress mocap databases. It also requires neither training nor complicated parameter setting. Experimental results demonstrate that the proposed scheme significantly outperforms state-of-the-art algorithms in terms of compression performance and speed.
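
    The data-dependent orthogonal bases described above can be illustrated with an SVD per clip. This is a hedged sketch of the transform step only, omitting the quantization and entropy-coding stages the paper adds; names are illustrative:

```python
import numpy as np

def transform_code_clip(clip, n_bases):
    """Compress one mocap clip with a data-dependent orthogonal basis.

    `clip` is a 2-D matrix (frames x joint coordinates). The basis comes
    from an SVD of the clip itself, so it adapts to the data, unlike a
    fixed transform such as the DCT.
    """
    u, s, vt = np.linalg.svd(clip, full_matrices=False)
    basis = vt[:n_bases]            # top right-singular vectors
    coeffs = clip @ basis.T         # project the clip onto the basis
    return coeffs, basis

def reconstruct_clip(coeffs, basis):
    """Invert the projection; exact when n_bases covers the clip's rank."""
    return coeffs @ basis
```

    Keeping only the leading basis vectors is what concentrates the energy into few coefficients, which the entropy coder then exploits.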

  5. An investigation into the effects of temporal resolution on hepatic dynamic contrast-enhanced MRI in volunteers and in patients with hepatocellular carcinoma

    NASA Astrophysics Data System (ADS)

    Gill, Andrew B.; Black, Richard T.; Bowden, David J.; Priest, Andrew N.; Graves, Martin J.; Lomas, David J.

    2014-06-01

    This study investigated the effect of temporal resolution on the dual-input pharmacokinetic (PK) modelling of dynamic contrast-enhanced MRI (DCE-MRI) data from normal volunteer livers and from patients with hepatocellular carcinoma. Eleven volunteers and five patients were examined at 3 T. Two sections, one optimized for the vascular input functions (VIF) and one for the tissue, were imaged within a single heart-beat (HB) using a saturation-recovery fast gradient echo sequence. The data was analysed using a dual-input single-compartment PK model. The VIFs and/or uptake curves were then temporally sub-sampled (at intervals Δt = 2-20 s) before being subject to the same PK analysis. Statistical comparisons of tumour and normal tissue PK parameter values using a 5% significance level gave rise to the same study results when temporally sub-sampling the VIFs to HB < Δt < 4 s. However, sub-sampling to Δt > 4 s did adversely affect the statistical comparisons. Temporal sub-sampling of just the liver/tumour tissue uptake curves at Δt ≤ 20 s, whilst using high temporal resolution VIFs, did not substantially affect PK parameter statistical comparisons. In conclusion, there is no practical advantage to be gained from acquiring very high temporal resolution hepatic DCE-MRI data. Instead the high temporal resolution could be usefully traded for increased spatial resolution or SNR.

  6. Slow Feature Analysis on Retinal Waves Leads to V1 Complex Cells

    PubMed Central

    Dähne, Sven; Wilbert, Niko; Wiskott, Laurenz

    2014-01-01

    The developing visual system of many mammalian species is partially structured and organized even before the onset of vision. Spontaneous neural activity, which spreads in waves across the retina, has been suggested to play a major role in these prenatal structuring processes. Recently, it has been shown that when employing an efficient coding strategy, such as sparse coding, these retinal activity patterns lead to basis functions that resemble optimal stimuli of simple cells in primary visual cortex (V1). Here we present the results of applying a coding strategy that optimizes for temporal slowness, namely Slow Feature Analysis (SFA), to a biologically plausible model of retinal waves. Previously, SFA has been successfully applied to model parts of the visual system, most notably in reproducing a rich set of complex-cell features by training SFA with quasi-natural image sequences. In the present work, we obtain SFA units that share a number of properties with cortical complex-cells by training on simulated retinal waves. The emergence of two distinct properties of the SFA units (phase invariance and orientation tuning) is thoroughly investigated via control experiments and mathematical analysis of the input-output functions found by SFA. The results support the idea that retinal waves share relevant temporal and spatial properties with natural visual input. Hence, retinal waves seem suitable training stimuli to learn invariances and thereby shape the developing early visual system such that it is best prepared for coding input from the natural world. PMID:24810948
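
    The temporal-slowness objective behind SFA can be sketched in its linear form: whiten the input, then keep the directions whose time derivative has the least variance. A minimal illustration (the cited work applies a nonlinear expansion before this step, which is omitted here, and the input is assumed full-rank):

```python
import numpy as np

def slow_feature_analysis(x, n_features):
    """Linear SFA on a (time, dims) signal matrix.

    Steps: center, whiten, then take the eigenvectors of the covariance
    of the time derivative with the *smallest* eigenvalues. Assumes the
    input covariance is full-rank (no near-zero eigenvalues).
    """
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    whitener = evecs / np.sqrt(evals)        # scales each direction to unit variance
    z = x @ whitener
    dz = np.diff(z, axis=0)                  # discrete temporal derivative
    dcov = np.cov(dz, rowvar=False)
    devals, devecs = np.linalg.eigh(dcov)    # ascending order: slowest first
    w = whitener @ devecs[:, :n_features]
    return x @ w                             # the slow-feature signals
```

    Trained on a mixture of a slow and a fast oscillation, the first output recovers the slow component up to sign and scale, which is the sense in which SFA extracts invariances from temporal structure.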

  7. Optimising diffusion-weighted imaging in the abdomen and pelvis: comparison of image quality between monopolar and bipolar single-shot spin-echo echo-planar sequences.

    PubMed

    Kyriazi, Stavroula; Blackledge, Matthew; Collins, David J; Desouza, Nandita M

    2010-10-01

    To compare geometric distortion, signal-to-noise ratio (SNR), apparent diffusion coefficient (ADC), efficacy of fat suppression and presence of artefact between monopolar (Stejskal and Tanner) and bipolar (twice-refocused, eddy-current-compensating) diffusion-weighted imaging (DWI) sequences in the abdomen and pelvis. A semiquantitative distortion index (DI) was derived from the subtraction images with b = 0 and 1,000 s/mm² in a phantom and compared between the two sequences. Seven subjects were imaged with both sequences using four b values (0, 600, 900 and 1,050 s/mm²) and SNR, ADC for different organs and fat-to-muscle signal ratio (FMR) were compared. Image quality was evaluated by two radiologists on a 5-point scale. DI was improved in the bipolar sequence, indicating less geometric distortion. SNR was significantly lower for all tissues and b values in the bipolar images compared with the monopolar (p < 0.05), whereas FMR was not statistically different. ADC in liver, kidney and sacrum was higher in the bipolar scheme compared to the monopolar (p < 0.03), whereas in muscle it was lower (p = 0.018). Image quality scores were higher for the bipolar sequence (p ≤ 0.025). Artefact reduction makes the bipolar DWI sequence preferable in abdominopelvic applications, although the trade-off in SNR may compromise ADC measurements in muscle.

  8. Steady-state MR imaging sequences: physics, classification, and clinical applications.

    PubMed

    Chavhan, Govind B; Babyn, Paul S; Jankharia, Bhavin G; Cheng, Hai-Ling M; Shroff, Manohar M

    2008-01-01

    Steady-state sequences are a class of rapid magnetic resonance (MR) imaging techniques based on fast gradient-echo acquisitions in which both longitudinal magnetization (LM) and transverse magnetization (TM) are kept constant. Both LM and TM reach a nonzero steady state through the use of a repetition time that is shorter than the T2 relaxation time of tissue. When TM is maintained as multiple radiofrequency excitation pulses are applied, two types of signal are formed once steady state is reached: preexcitation signal (S-) from echo reformation; and postexcitation signal (S+), which consists of free induction decay. Depending on the signal sampled and used to form an image, steady-state sequences can be classified as (a) postexcitation refocused (only S+ is sampled), (b) preexcitation refocused (only S- is sampled), and (c) fully refocused (both S+ and S- are sampled) sequences. All tissues with a reasonably long T2 relaxation time will show additional signals due to various refocused echo paths. Steady-state sequences have revolutionized cardiac imaging and have become the standard for anatomic functional cardiac imaging and for the assessment of myocardial viability because of their good signal-to-noise ratio and contrast-to-noise ratio and increased speed of acquisition. They are also useful in abdominal and fetal imaging and hold promise for interventional MR imaging. Because steady-state sequences are now commonly used in MR imaging, radiologists will benefit from understanding the underlying physics, classification, and clinical applications of these sequences.

  9. Protein Sequence Annotation Tool (PSAT): A centralized web-based meta-server for high-throughput sequence annotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung, Elo; Huang, Amy; Cadag, Eithon

    In this study, we introduce the Protein Sequence Annotation Tool (PSAT), a web-based, sequence annotation meta-server for performing integrated, high-throughput, genome-wide sequence analyses. Our goals in building PSAT were to (1) create an extensible platform for integration of multiple sequence-based bioinformatics tools, (2) enable functional annotations and enzyme predictions over large input protein FASTA data sets, and (3) provide a web interface for convenient execution of the tools. In this paper, we demonstrate the utility of PSAT by annotating the predicted peptide gene products of Herbaspirillum sp. strain RV1423, importing the results of PSAT into EC2KEGG, and using the resulting functional comparisons to identify a putative catabolic pathway, thereby distinguishing RV1423 from a well-annotated Herbaspirillum species. This analysis demonstrates that high-throughput enzyme predictions, provided by PSAT processing, can be used to identify metabolic potential in an otherwise poorly annotated genome. Lastly, PSAT is a meta-server that combines the results from several sequence-based annotation and function prediction codes, and is available at http://psat.llnl.gov/psat/. PSAT stands apart from other sequence-based genome annotation systems in providing a high-throughput platform for rapid de novo enzyme predictions and sequence annotations over large input protein sequence data sets in FASTA format. PSAT is most appropriately applied in annotation of large protein FASTA sets that may or may not be associated with a single genome.

  10. Protein Sequence Annotation Tool (PSAT): A centralized web-based meta-server for high-throughput sequence annotations

    DOE PAGES

    Leung, Elo; Huang, Amy; Cadag, Eithon; ...

    2016-01-20

    In this study, we introduce the Protein Sequence Annotation Tool (PSAT), a web-based, sequence annotation meta-server for performing integrated, high-throughput, genome-wide sequence analyses. Our goals in building PSAT were to (1) create an extensible platform for integration of multiple sequence-based bioinformatics tools, (2) enable functional annotations and enzyme predictions over large input protein FASTA data sets, and (3) provide a web interface for convenient execution of the tools. In this paper, we demonstrate the utility of PSAT by annotating the predicted peptide gene products of Herbaspirillum sp. strain RV1423, importing the results of PSAT into EC2KEGG, and using the resulting functional comparisons to identify a putative catabolic pathway, thereby distinguishing RV1423 from a well-annotated Herbaspirillum species. This analysis demonstrates that high-throughput enzyme predictions, provided by PSAT processing, can be used to identify metabolic potential in an otherwise poorly annotated genome. Lastly, PSAT is a meta-server that combines the results from several sequence-based annotation and function prediction codes, and is available at http://psat.llnl.gov/psat/. PSAT stands apart from other sequence-based genome annotation systems in providing a high-throughput platform for rapid de novo enzyme predictions and sequence annotations over large input protein sequence data sets in FASTA format. PSAT is most appropriately applied in annotation of large protein FASTA sets that may or may not be associated with a single genome.

  11. Reconstructing Interlaced High-Dynamic-Range Video Using Joint Learning.

    PubMed

    Inchang Choi; Seung-Hwan Baek; Kim, Min H

    2017-11-01

    For extending the dynamic range of video, it is a common practice to capture multiple frames sequentially with different exposures and combine them to extend the dynamic range of each video frame. However, this approach results in typical ghosting artifacts due to fast and complex motion in nature. As an alternative, video imaging with interlaced exposures has been introduced to extend the dynamic range. However, the interlaced approach has been hindered by jaggy artifacts and sensor noise, leading to concerns over image quality. In this paper, we propose a data-driven approach for jointly solving two specific problems of deinterlacing and denoising that arise in interlaced video imaging with different exposures. First, we solve the deinterlacing problem using joint dictionary learning via sparse coding. Since partial information of detail in differently exposed rows is often available via interlacing, we make use of the information to reconstruct details of the extended dynamic range from the interlaced video input. Second, we jointly solve the denoising problem by tailoring sparse coding to better handle additive noise in low-/high-exposure rows, and also adopt multiscale homography flow to temporal sequences for denoising. We anticipate that the proposed method will allow for concurrent capture of higher dynamic range video frames without suffering from ghosting artifacts. We demonstrate the advantages of our interlaced video imaging compared with the state-of-the-art high-dynamic-range video methods.

  12. Investigating the Role of Global Histogram Equalization Technique for 99mTechnetium-Methylene diphosphonate Bone Scan Image Enhancement

    PubMed Central

    Pandey, Anil Kumar; Sharma, Param Dev; Dheer, Pankaj; Parida, Girish Kumar; Goyal, Harish; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-01-01

    Purpose of the Study: 99mTechnetium-methylene diphosphonate (99mTc-MDP) bone scan images have a limited number of counts per pixel, and hence they have inferior image quality compared to X-rays. Theoretically, the global histogram equalization (GHE) technique can improve the contrast of a given image, though the practical benefits of doing so have found only limited acceptance. In this study, we have investigated the effect of the GHE technique on 99mTc-MDP bone scan images. Materials and Methods: A set of 89 low-contrast 99mTc-MDP whole-body bone scan images was included in this study. These images were acquired with parallel-hole collimation on a Symbia E gamma camera. The images were then processed with the histogram equalization technique. The image quality of the input and processed images was reviewed by two nuclear medicine physicians on a 5-point scale, where a score of 1 denotes very poor and 5 the best image quality. A statistical test was applied to find the significance of the difference between the mean scores assigned to input and processed images. Results: The technique improves the contrast of the images; however, oversaturation was noticed in the processed images. Student's t-test was applied, and a statistically significant difference in the input and processed image quality was found at P < 0.001 (with α = 0.05). However, further improvement in image quality is needed as per the requirements of nuclear medicine physicians. Conclusion: GHE techniques can be used on low-contrast bone scan images. In some cases, a histogram equalization technique in combination with some other postprocessing technique is useful. PMID:29142344
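
    The GHE technique evaluated in this study is a standard remapping of grey levels through the normalized cumulative histogram. A minimal sketch for 8-bit images; details such as the CDF offset convention vary between implementations, and a constant image is assumed not to occur:

```python
import numpy as np

def global_histogram_equalization(img, n_levels=256):
    """Classic GHE for a 2-D uint8 image: remap grey levels via the CDF.

    Spreads the histogram over the full intensity range, which raises
    contrast but can oversaturate dense intensity regions, as the study
    observed. Assumes the image is not constant.
    """
    hist = np.bincount(img.ravel(), minlength=n_levels)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                     # first nonzero CDF value
    lut = (cdf - cdf_min) / (cdf[-1] - cdf_min) * (n_levels - 1)
    lut = np.clip(np.round(lut), 0, n_levels - 1).astype(np.uint8)
    return lut[img]                               # apply the lookup table
```

    The darkest occupied level maps to 0 and the brightest to 255, which is exactly the stretching behaviour behind both the contrast gain and the oversaturation noted above.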

  13. A fast and fully automatic registration approach based on point features for multi-source remote-sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Le; Zhang, Dengrong; Holden, Eun-Jung

    2008-07-01

    Automatic registration of multi-source remote-sensing images is a difficult task as it must deal with the varying illuminations and resolutions of the images, different perspectives and the local deformations within the images. This paper proposes a fully automatic and fast non-rigid image registration technique that addresses those issues. The proposed technique performs a pre-registration process that coarsely aligns the input image to the reference image by automatically detecting their matching points by using the scale invariant feature transform (SIFT) method and an affine transformation model. Once the coarse registration is completed, it performs a fine-scale registration process based on a piecewise linear transformation technique using feature points that are detected by the Harris corner detector. The registration process firstly finds in succession, tie point pairs between the input and the reference image by detecting Harris corners and applying a cross-matching strategy based on a wavelet pyramid for a fast search speed. Tie point pairs with large errors are pruned by an error-checking step. The input image is then rectified by using triangulated irregular networks (TINs) to deal with irregular local deformations caused by the fluctuation of the terrain. For each triangular facet of the TIN, affine transformations are estimated and applied for rectification. Experiments with Quickbird, SPOT5, SPOT4, TM remote-sensing images of the Hangzhou area in China demonstrate the efficiency and the accuracy of the proposed technique for multi-source remote-sensing image registration.
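
    The coarse pre-registration step described above reduces to fitting an affine model to the SIFT point matches. Below is a least-squares sketch of that fit alone, without the SIFT detection or any outlier rejection that a practical pipeline would add; the function name is illustrative:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine fit between matched 2-D point sets.

    src and dst are (N, 2) arrays of corresponding points (N >= 3),
    e.g. feature matches between the input and reference image.
    Returns a 2x3 matrix A such that dst ~= [x, y, 1] @ A.T.
    """
    n = len(src)
    m = np.hstack([src, np.ones((n, 1))])      # (N, 3) design matrix
    a, _, _, _ = np.linalg.lstsq(m, dst, rcond=None)
    return a.T                                 # (2, 3) affine matrix
```

    With exact correspondences the fit recovers the transform exactly; with noisy SIFT matches the least-squares solution averages the error, which is why the paper follows it with a fine-scale piecewise-linear refinement.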

  14. PET and MRI image fusion based on combination of 2-D Hilbert transform and IHS method.

    PubMed

    Haddadpour, Mozhdeh; Daneshvar, Sabalan; Seyedarabi, Hadi

    2017-08-01

    The process of medical image fusion combines two or more medical images, such as a Magnetic Resonance Image (MRI) and a Positron Emission Tomography (PET) image, and maps them to a single fused image. The purpose of our study is to assist physicians in diagnosing and treating diseases in the least possible time. We used an MRI and a PET image as inputs and fused them based on a combination of the two-dimensional Hilbert transform (2-D HT) and the Intensity Hue Saturation (IHS) method. Three common evaluation metrics were applied: Discrepancy (D_k) to assess spectral features, Average Gradient (AG_k) to assess spatial features, and Overall Performance (O.P) to verify the overall quality of the proposed method. Simulated and numerical results demonstrate the desired performance of the proposed method. Since the main purpose of medical image fusion is to preserve both the spatial and spectral features of the input images, the numerical results of these metrics, together with the simulated results, indicate that our proposed method preserves both. Copyright © 2017 Chang Gung University. Published by Elsevier B.V. All rights reserved.

  15. Semi-automatic medical image segmentation with adaptive local statistics in Conditional Random Fields framework.

    PubMed

    Hu, Yu-Chi J; Grossberg, Michael D; Mageras, Gikas S

    2008-01-01

    Planning radiotherapy and surgical procedures usually require onerous manual segmentation of anatomical structures from medical images. In this paper we present a semi-automatic and accurate segmentation method to dramatically reduce the time and effort required of expert users. This is accomplished by giving a user an intuitive graphical interface to indicate samples of target and non-target tissue by loosely drawing a few brush strokes on the image. We use these brush strokes to provide the statistical input for a Conditional Random Field (CRF) based segmentation. Since we extract purely statistical information from the user input, we eliminate the need of assumptions on boundary contrast previously used by many other methods, A new feature of our method is that the statistics on one image can be reused on related images without registration. To demonstrate this, we show that boundary statistics provided on a few 2D slices of volumetric medical data, can be propagated through the entire 3D stack of images without using the geometric correspondence between images. In addition, the image segmentation from the CRF can be formulated as a minimum s-t graph cut problem which has a solution that is both globally optimal and fast. The combination of a fast segmentation and minimal user input that is reusable, make this a powerful technique for the segmentation of medical images.
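
    The minimum s-t graph cut mentioned above is solved via max-flow: by the max-flow/min-cut theorem, the maximum flow value equals the cost of the optimal cut, and hence of the optimal segmentation labelling. A toy Edmonds-Karp solver on a dense capacity matrix (real image graphs use specialized sparse solvers):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow on a dense capacity matrix.

    Repeatedly finds a shortest augmenting path by BFS in the residual
    graph and pushes the bottleneck flow along it. The returned value
    equals the minimum s-t cut capacity.
    """
    n = len(capacity)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:          # BFS for an augmenting path
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and capacity[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:                   # no path left: flow is maximal
            return total
        path, v = [], t                       # recover the path s -> t
        while v != s:
            path.append((parent[v], v))
            v = parent[v]
        push = min(capacity[u][v] - flow[u][v] for u, v in path)
        for u, v in path:                     # push bottleneck flow
            flow[u][v] += push
            flow[v][u] -= push
        total += push
```

    In the segmentation setting, terminal edge weights encode the per-pixel statistics gathered from the brush strokes, and neighbour edges encode smoothness; the cut then yields the globally optimal binary labelling.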

  16. Integrated circuit layer image segmentation

    NASA Astrophysics Data System (ADS)

    Masalskis, Giedrius; Petrauskas, Romas

    2010-09-01

    In this paper we present IC layer image segmentation techniques specifically created for precise metal-layer feature extraction. During our research we used many samples of real-life de-processed IC metal-layer images obtained using an optical light microscope. We have created sequences of image processing filters that provide segmentation results of sufficient precision for our application. The filter sequences were fine-tuned to provide the best possible results depending on the properties of the IC manufacturing process and imaging technology. The proposed IC image segmentation filter sequences were experimentally tested and compared with conventional direct segmentation algorithms.

  17. Structure-preserving interpolation of temporal and spatial image sequences using an optical flow-based method.

    PubMed

    Ehrhardt, J; Säring, D; Handels, H

    2007-01-01

    Modern tomographic imaging devices enable the acquisition of spatial and temporal image sequences. However, the spatial and temporal resolution of such devices is limited, and therefore image interpolation techniques are needed to represent images at a desired level of discretization. This paper presents a method for structure-preserving interpolation between neighboring slices in temporal or spatial image sequences. In a first step, the spatiotemporal velocity field between image slices is determined using an optical flow-based registration method in order to establish spatial correspondence between adjacent slices. An iterative algorithm is applied using the spatial and temporal image derivatives and a spatiotemporal smoothing step. Afterwards, the calculated velocity field is used to generate an interpolated image at the desired time by averaging intensities between corresponding points. Three quantitative measures are defined to evaluate the performance of the interpolation method. The behavior and capability of the algorithm is demonstrated by synthetic images. A population of 17 temporal and spatial image sequences is utilized to compare the optical flow-based interpolation method to linear and shape-based interpolation. The quantitative results show that the optical flow-based method outperforms linear and shape-based interpolation with statistical significance. The interpolation method presented is able to generate image sequences with the spatial or temporal resolution needed for image comparison, analysis or visualization tasks. Quantitative and qualitative measures extracted from synthetic phantoms and medical image data show that the new method has clear advantages over linear and shape-based interpolation.
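
    The final averaging step of such a method can be sketched directly: each interpolated pixel averages the two intensities found by following the velocity field backward into one frame and forward into the other. A nearest-neighbour sketch under that assumption (the full method estimates the flow iteratively with spatiotemporal smoothing, and the function name is illustrative):

```python
import numpy as np

def interpolate_frame(img0, img1, flow, t=0.5):
    """Interpolate between two frames along a per-pixel velocity field.

    `flow` holds (dy, dx) displacements from img0 to img1, assumed to be
    defined at the interpolated position. Each output pixel averages the
    intensities of its two corresponding points; coordinates wrap at the
    border to keep the sketch short.
    """
    h, w = img0.shape
    out = np.zeros_like(img0, dtype=float)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            y0 = int(round(y - t * dy)) % h         # corresponding point in img0
            x0 = int(round(x - t * dx)) % w
            y1 = int(round(y + (1 - t) * dy)) % h   # corresponding point in img1
            x1 = int(round(x + (1 - t) * dx)) % w
            out[y, x] = 0.5 * (img0[y0, x0] + img1[y1, x1])
    return out
```

    With a zero flow field this reduces to plain intensity averaging, i.e. linear interpolation; the structure preservation comes entirely from the accuracy of the estimated velocity field.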

  18. Autofocus algorithm for curvilinear SAR imaging

    NASA Astrophysics Data System (ADS)

    Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.

    2012-05-01

    We describe an approach to autofocusing for large apertures on curved SAR trajectories. It is a phase-gradient-type method in which phase corrections compensating trajectory perturbations are estimated not directly from the image itself, but rather on the basis of "partial" SAR data, functions of the slow and fast times, reconstructed (by an appropriate forward-projection procedure) from windowed scene patches of sizes comparable to the distances between distinct targets or localized features of the scene. The resulting "partial data" can be shown to contain the same information on the phase perturbations as the original data, provided the frequencies of the perturbations do not exceed a quantity proportional to the patch size. The algorithm uses as input a sequence of conventional scene images based on moderate-size subapertures constituting the full aperture for which the phase corrections are to be determined. The subaperture images are formed with pixel sizes comparable to the range resolution which, for the optimal subaperture size, should also be approximately equal to the cross-range resolution. The method does not restrict the size or shape of the synthetic aperture and can be incorporated in the data collection process in persistent sensing scenarios. The algorithm has been tested on the publicly available set of GOTCHA data, intentionally corrupted by random-walk-type trajectory fluctuations (a possible model of errors caused by imprecise inertial navigation system readings) of maximum frequencies compatible with the selected patch size. It was able to efficiently remove image corruption for apertures of sizes up to 360 degrees.

  19. Classification of antibiotics by neural network analysis of optical resonance data of whispering gallery modes in dielectric microspheres

    NASA Astrophysics Data System (ADS)

    Saetchnikov, Vladimir A.; Tcherniavskaia, Elina A.; Schweiger, Gustav; Ostendorf, Andreas

    2012-04-01

    A novel emerging technique for the label-free analysis of nanoparticles and biomolecules in liquid fluids using optical microcavity resonance of whispering-gallery-type modes is being developed. A scheme based on polymer microspheres fixed by adhesive on the evanescent-wave coupling element has been used. We demonstrated that the spectral shift alone cannot be used to identify biological agents with the developed approach, so a neural network classifier for biological agents and micro/nanoparticle classification has been developed. The developed technique is as follows. While tuning the laser wavelength, images were recorded as an AVI file. All sequences were broken into single frames and the location of the resonance was identified in each frame. The image was filtered for noise reduction and integrated over two coordinates to evaluate the integrated energy of a measured signal. As input data, the normalized resonance shift of whispering-gallery modes and the relative efficiency of whispering-gallery mode excitation were used. Other parameters, such as the polarization of the excitation light and the "center of gravity" of a resonance spectrum, were also tested as input data for the probabilistic neural network. After designing and training the network we estimated the accuracy of classification. Antibiotics such as penicillin and cefazolin have been classified with an accuracy of not less than 97%. The developed techniques can be used in lab-on-chip sensor-based diagnostic tools, both for the identification of different biological molecules (e.g., proteins, oligonucleotides, oligosaccharides, lipids, small molecules, viral particles, and cells) and for studying the dynamics of drug delivery to the body.
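
    The probabilistic neural network described above amounts to a Parzen-window density estimate per class, with the class of highest density winning. A minimal sketch in pure Python, using hypothetical feature values (the actual training data and kernel width are not given in the record):

```python
import math

def pnn_classify(train, x, sigma=0.05):
    """Parzen-window probabilistic neural network.

    train: dict mapping class label -> list of feature vectors
    x:     feature vector to classify
    Returns the label whose kernel-density estimate at x is largest.
    """
    def density(samples):
        s = 0.0
        for p in samples:
            d2 = sum((a - b) ** 2 for a, b in zip(p, x))
            s += math.exp(-d2 / (2.0 * sigma ** 2))
        return s / len(samples)
    return max(train, key=lambda label: density(train[label]))

# Hypothetical training data: (normalized resonance shift, relative
# excitation efficiency) pairs for two antibiotic classes.
train = {
    "penicillin": [(0.10, 0.80), (0.12, 0.78), (0.11, 0.82)],
    "cefazolin":  [(0.30, 0.55), (0.28, 0.57), (0.31, 0.53)],
}
print(pnn_classify(train, (0.11, 0.79)))  # -> penicillin
```

    In a real deployment the feature vectors would be the measured resonance shifts and excitation efficiencies, and sigma would be tuned on held-out data.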

  20. On the cyclic nature of perception in vision versus audition

    PubMed Central

    VanRullen, Rufin; Zoefel, Benedikt; Ilhan, Barkin

    2014-01-01

    Does our perceptual awareness consist of a continuous stream, or a discrete sequence of perceptual cycles, possibly associated with the rhythmic structure of brain activity? This has been a long-standing question in neuroscience. We review recent psychophysical and electrophysiological studies indicating that part of our visual awareness proceeds in approximately 7–13 Hz cycles rather than continuously. On the other hand, experimental attempts at applying similar tools to demonstrate the discreteness of auditory awareness have been largely unsuccessful. We argue and demonstrate experimentally that visual and auditory perception are not equally affected by temporal subsampling of their respective input streams: video sequences remain intelligible at sampling rates of two to three frames per second, whereas audio inputs lose their fine temporal structure, and thus all significance, below 20–30 samples per second. This does not mean, however, that our auditory perception must proceed continuously. Instead, we propose that audition could still involve perceptual cycles, but the periodic sampling should happen only after the stage of auditory feature extraction. In addition, although visual perceptual cycles can follow one another at a spontaneous pace largely independent of the visual input, auditory cycles may need to sample the input stream more flexibly, by adapting to the temporal structure of the auditory inputs. PMID:24639585

  1. On the fallacy of quantitative segmentation for T1-weighted MRI

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Harrigan, Robert L.; Newton, Allen T.; Rane, Swati; Pallavaram, Srivatsan; D'Haese, Pierre F.; Dawant, Benoit M.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    T1-weighted magnetic resonance imaging (MRI) generates contrasts with primary sensitivity to local T1 properties (with lesser T2 and PD contributions). The observed signal intensity is determined by these local properties and the sequence parameters of the acquisition. In common practice, a range of acceptable parameters is used to ensure "similar" contrast across scanners used for any particular study (e.g., the ADNI standard MPRAGE). However, different studies may use different ranges of parameters and report the derived data as simply "T1-weighted". Physics and imaging authors pay strong heed to the specifics of the imaging sequences, but image processing authors have historically been more lax. Herein, we consider three T1-weighted sequences acquired with the same underlying protocol (MPRAGE) and vendor (Philips), but with "normal study-to-study variation" in parameters. We show that the gray matter/white matter/cerebrospinal fluid contrast is subtly but systematically different between these images and yields systematically different measurements of brain volume. The problem derives from the visually apparent boundary shifts, which would also be seen by a human rater. We present and evaluate two solutions to produce consistent segmentation results across imaging protocols. First, we propose to acquire multiple sequences on a subset of the data and use the multi-modal imaging as atlases to segment target images acquired with any of the available sequences. Second (if additional imaging is not available), we propose to synthesize atlases of the target imaging sequence and use the synthesized atlases in place of atlas imaging data. Both approaches significantly improve the consistency of target labeling.

  2. Deconvolution of noisy transient signals: a Kalman filtering application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Candy, J.V.; Zicker, J.E.

    The deconvolution of transient signals from noisy measurements is a common problem occurring in various tests at Lawrence Livermore National Laboratory. The transient deconvolution problem places atypical constraints on presently available algorithms. The Schmidt-Kalman filter, a time-varying, tunable predictor, is designed using a piecewise constant model of the transient input signal. A simulation is developed to test the algorithm for various input signal bandwidths and different signal-to-noise ratios for the input and output sequences. The algorithm performance is reasonable.
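
    The Schmidt-Kalman design itself is not reproduced in the record, but the core idea of estimating a piecewise-constant input from noisy measurements can be sketched with a minimal scalar Kalman filter under a random-walk model (noise variances q and r are assumed, not taken from the paper):

```python
def kalman_deconvolve(measurements, q=1e-3, r=0.5):
    """Minimal scalar Kalman filter assuming a piecewise-constant
    (random-walk) input model: x_k = x_{k-1} + w_k, y_k = x_k + v_k.
    q and r are the assumed process and measurement noise variances.
    Returns the filtered estimates of the underlying input signal.
    """
    x, p = 0.0, 1.0          # initial state estimate and covariance
    estimates = []
    for y in measurements:
        p += q               # predict: covariance grows by process noise
        k = p / (p + r)      # Kalman gain
        x += k * (y - x)     # update with the measurement residual
        p *= (1.0 - k)       # posterior covariance
        estimates.append(x)
    return estimates

# Noisy observations of a constant input level (2.0): the estimate
# settles toward the true level as measurements accumulate.
ys = [2.3, 1.8, 2.1, 1.9, 2.2, 2.0, 1.7, 2.1]
print(kalman_deconvolve(ys)[-1])
```

    The full Schmidt-Kalman filter additionally carries "considered" bias states whose covariance is tracked but not estimated; this sketch shows only the basic predict/update cycle.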

  3. Real-time UAV trajectory generation using feature points matching between video image sequences

    NASA Astrophysics Data System (ADS)

    Byun, Younggi; Song, Jeongheon; Han, Dongyeob

    2017-09-01

    Unmanned aerial vehicles (UAVs), equipped with navigation systems and video capability, are currently being deployed for intelligence, reconnaissance and surveillance missions. In this paper, we present a systematic approach for the generation of UAV trajectories using a video image matching system based on SURF (Speeded Up Robust Features) and Preemptive RANSAC (Random Sample Consensus). Video image matching to find matching points is one of the most important steps for the accurate generation of a UAV trajectory (a sequence of poses in 3D space). We used the SURF algorithm to find the matching points between video image sequences, and removed mismatches by using Preemptive RANSAC, which divides all matching points into outliers and inliers. Only the inliers are used to determine the epipolar geometry for estimating the relative pose (rotation and translation) between image sequences. Experimental results from simulated video image sequences showed that our approach has good potential to be applied to the automatic geo-localization of UAV systems.
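
    The epipolar-geometry estimation is too involved for a short sketch, but the RANSAC idea of splitting matches into inliers and outliers can be illustrated with a basic (non-preemptive) line-fitting analogue in pure Python, on hypothetical data:

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Basic RANSAC: repeatedly fit a line y = a*x + b to two randomly
    sampled points and keep the model supported by the most inliers.
    A stand-in for the Preemptive RANSAC used in the paper to separate
    good matches (inliers) from mismatches (outliers).
    Returns (a, b, inliers).
    """
    rng = random.Random(seed)
    best = (0.0, 0.0, [])
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate sample; skip
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points if abs(y - (a * x + b)) < tol]
        if len(inliers) > len(best[2]):
            best = (a, b, inliers)
    return best

# Points on y = 2x + 1 plus two gross outliers (simulated mismatches).
pts = [(x, 2 * x + 1) for x in range(10)] + [(3, 30), (7, -5)]
a, b, inliers = ransac_line(pts)
print(len(inliers))  # -> 10: the consistent matches survive
```

    Preemptive RANSAC differs by scoring a fixed set of hypotheses in parallel and discarding the weakest ones as data points are processed, which bounds the computation for real-time use.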

  4. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. In our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method which adopts both coded excitation and synthetic aperture techniques. In this study, the basis of applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated ultrasound three-dimensional image sequences.

  5. gr-MRI: A software package for magnetic resonance imaging using software defined radios

    NASA Astrophysics Data System (ADS)

    Hasselwander, Christopher J.; Cao, Zhipeng; Grissom, William A.

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality, and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.

  6. Improving Adaptive Learning Technology through the Use of Response Times

    ERIC Educational Resources Information Center

    Mettler, Everett; Massey, Christine M.; Kellman, Philip J.

    2011-01-01

    Adaptive learning techniques have typically scheduled practice using learners' accuracy and item presentation history. We describe an adaptive learning system (Adaptive Response Time Based Sequencing--ARTS) that uses both accuracy and response time (RT) as direct inputs into sequencing. Response times are used to assess learning strength and…
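
    The record does not give the published ARTS formula, but the idea of sequencing practice from both accuracy and response time can be sketched with a hypothetical priority score: items answered slowly or incorrectly are treated as weak and scheduled sooner, with spacing growing as items strengthen (all names and weights below are illustrative assumptions):

```python
def priority(errors, last_rt, trials_since_seen, w=10.0):
    """Hypothetical ARTS-style priority score (not the published
    formula): recent errors or long response times lower the inferred
    learning strength, which raises the item's scheduling priority;
    time since last presentation enforces spacing.
    """
    strength = 1.0 / (1.0 + errors + last_rt)  # fast + correct -> strong
    return trials_since_seen * (w * (1.0 - strength) + 1.0)

# Pick the next item to present: the highest-priority item wins.
items = {
    "A": priority(errors=0, last_rt=0.8, trials_since_seen=3),
    "B": priority(errors=2, last_rt=3.5, trials_since_seen=3),
    "C": priority(errors=0, last_rt=0.5, trials_since_seen=3),
}
print(max(items, key=items.get))  # -> B, the weakest item
```

    The point of using response time is visible here: even two items with identical accuracy ("A" and "C") are ordered by how fluently they were answered.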

  7. Identification of Stochastically Perturbed Autonomous Systems from Temporal Sequences of Probability Density Functions

    NASA Astrophysics Data System (ADS)

    Nie, Xiaokai; Luo, Jingjing; Coca, Daniel; Birkin, Mark; Chen, Jing

    2018-03-01

    The paper introduces a method for reconstructing one-dimensional iterated maps that are driven by an external control input and subjected to an additive stochastic perturbation, from sequences of probability density functions that are generated by the stochastic dynamical systems and observed experimentally.
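
    The observable data in this setting is a sequence of empirical probability density functions generated by the perturbed map. A small pure-Python sketch of producing such a sequence from a stochastically perturbed logistic-type map (the map, control value, and noise level are illustrative assumptions, not the paper's system):

```python
import random

def evolve_density(control_u, n=5000, steps=3, noise=0.05, seed=1):
    """Propagate an ensemble of points through the perturbed map
    x' = u*x*(1-x) + gaussian noise, clamped to [0, 1], and summarize
    each step by a 10-bin histogram.  This produces the kind of
    temporal sequence of empirical densities that the reconstruction
    method takes as input.
    """
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    densities = []
    for _ in range(steps):
        xs = [min(1.0, max(0.0, control_u * x * (1 - x) + rng.gauss(0, noise)))
              for x in xs]
        hist = [0] * 10
        for x in xs:
            hist[min(9, int(x * 10))] += 1
        densities.append([c / n for c in hist])
    return densities

seq = evolve_density(control_u=3.7)
print(len(seq))  # -> 3 histograms, one per iteration of the map
```

    The identification problem in the paper is the inverse direction: recovering the deterministic map and the perturbation from such a density sequence.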

  8. Method and apparatus for eliminating coherent noise in a coherent energy imaging system without destroying spatial coherence

    NASA Technical Reports Server (NTRS)

    Shulman, A. R. (Inventor)

    1971-01-01

    A method and apparatus for substantially eliminating noise in a coherent energy imaging system, and specifically in a light imaging system of the type having a coherent light source and at least one image lens disposed between an input signal plane and an output image plane, are discussed. The input signal plane is illuminated with the light source while rotating the lens about its optical axis. In this manner, the energy density of coherent noise diffraction patterns produced by imperfections such as dust and/or bubbles on and/or in the lens is distributed over a ring-shaped area of the output image plane and reduced to a point where it can be ignored. The spatial filtering capability of the coherent imaging system is not affected by this noise elimination technique.

  9. Application safety evaluation of the radio frequency identification tag under magnetic resonance imaging.

    PubMed

    Fei, Xiaolu; Li, Shanshan; Gao, Shan; Wei, Lan; Wang, Lihong

    2014-09-04

    Radio frequency identification (RFID) has been widely used in healthcare facilities, but little attention has been paid to whether RFID applications are safe enough in the healthcare environment. The purpose of this study is to assess the effects of RFID tags on magnetic resonance (MR) imaging in a typical electromagnetic environment in hospitals, and to evaluate the safety of their applications. A Magphan phantom was used to simulate the imaging objects, while active RFID tags were placed at different distances (0, 4, 8, and 10 cm) from the phantom border. The phantom was scanned using three typical sequences: a spin-echo (SE) sequence, a gradient-echo (GRE) sequence and an inversion-recovery (IR) sequence. The quality of the image was quantitatively evaluated using the signal-to-noise ratio (SNR), uniformity, high-contrast resolution, and geometric distortion. The RFID tags were read by an RFID reader to calculate their usable rate. RFID tags could be read properly after being placed in the high magnetic field for up to 30 minutes. SNR: there were no differences between the group with RFID tags and the group without RFID tags for the SE and IR sequences, but the SNR was lower with the GRE sequence. Uniformity: there was a significant difference between the group with RFID tags and the group without RFID tags for the SE and GRE sequences. Geometric distortion and high-contrast resolution: no obvious differences were found. Active RFID tags can affect MR imaging quality, especially with the GRE sequence. Increasing the distance from the RFID tags to the imaging objects can reduce that influence. When the distance was longer than 8 cm, MR imaging quality was almost unaffected. However, gradient-echo-related sequences are not recommended when patients wear an RFID wristband.
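
    The SNR and uniformity metrics used in phantom studies like this one are straightforward to compute from region-of-interest (ROI) pixel values. A minimal sketch using one common pair of definitions (the paper's exact formulas are not given in the record; the ROI values are hypothetical):

```python
import statistics

def snr(signal_roi, background_roi):
    """SNR as mean signal over the standard deviation of a background
    (air) region -- one common definition for phantom QA."""
    return statistics.mean(signal_roi) / statistics.stdev(background_roi)

def integral_uniformity(roi):
    """Percent integral uniformity: 100 * (1 - (max-min)/(max+min)).
    100% means a perfectly flat region."""
    hi, lo = max(roi), min(roi)
    return 100.0 * (1.0 - (hi - lo) / (hi + lo))

signal = [102, 98, 100, 101, 99, 100]       # phantom ROI (hypothetical)
noise = [1.2, -0.8, 0.5, -1.1, 0.9, -0.4]   # background ROI (hypothetical)
print(round(snr(signal, noise), 1))
print(round(integral_uniformity(signal), 1))  # -> 98.0
```

    An RFID tag near the phantom would show up in these numbers as a drop in SNR (extra background noise) or in uniformity (a local intensity gradient), which is how the study quantifies the GRE-sequence degradation.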

  10. Logarithmic profile mapping multi-scale Retinex for restoration of low illumination images

    NASA Astrophysics Data System (ADS)

    Shi, Haiyan; Kwok, Ngaiming; Wu, Hongkun; Li, Ruowei; Liu, Shilong; Lin, Ching-Feng; Wong, Chin Yeow

    2018-04-01

    Images are valuable information sources for many scientific and engineering applications. However, images captured in poor illumination conditions have a large portion of dark regions that can heavily degrade the image quality. In order to improve the quality of such images, a restoration algorithm is developed here that transforms the low input brightness to a higher value using a modified Multi-Scale Retinex approach. The algorithm is further improved by an entropy-based weighting of the input and the processed results to refine the necessary amplification in regions of low brightness. Moreover, fine details in the image are preserved by applying the Retinex principles to extract and then re-insert object edges to obtain an enhanced image. Results from experiments using low and normal illumination images have shown satisfactory performance with regard to the improvement in information content and the mitigation of viewing artifacts.
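
    The Retinex core is the difference between the log of the image and the log of a smoothed (surround) estimate of the illumination. A 1-D, single-scale toy version in pure Python (the paper uses multiple scales plus entropy-based weighting; the pixel values and blur radius here are illustrative):

```python
import math

def box_blur(row, radius):
    """Simple box blur serving as the Retinex surround estimate."""
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def single_scale_retinex(row, radius=2):
    """R(x) = log I(x) - log (I * G)(x): an estimate of reflectance
    with the slowly varying illumination divided out."""
    blur = box_blur(row, radius)
    return [math.log(p) - math.log(b) for p, b in zip(row, blur)]

# A dark region (values ~5) next to a bright region (~200): after
# Retinex, local contrast in both regions lands on a comparable scale.
row = [5, 6, 5, 7, 5, 200, 210, 205, 198, 202]
r = single_scale_retinex(row)
print([round(v, 2) for v in r])
```

    A multi-scale version would average `single_scale_retinex` outputs over several blur radii; the entropy-based weighting in the paper then blends this result with the original input.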

  11. SSh versus TSE sequence protocol in rapid MR examination of pediatric patients with programmable drainage system.

    PubMed

    Brichtová, Eva; Šenkyřík, J

    2017-05-01

    A low radiation burden is essential during diagnostic procedures in pediatric patients due to their high tissue sensitivity. Using MR examination instead of the routinely used CT reduces the radiation exposure and the risk of adverse stochastic effects. Our retrospective study evaluated the possibility of using ultrafast single-shot (SSh) sequences and turbo spin echo (TSE) sequences in rapid MR brain imaging in pediatric patients with hydrocephalus and a programmable ventriculoperitoneal drainage system. SSh sequences seem to be suitable for examining pediatric patients due to the speed of using this technique, but significant susceptibility artifacts due to the programmable drainage valve degrade the image quality. Therefore, a rapid MR examination protocol based on TSE sequences, less sensitive to artifacts due to ferromagnetic components, has been developed. Of 61 pediatric patients who were examined using MR and the SSh sequence protocol, a group of 15 patients with hydrocephalus and a programmable drainage system also underwent TSE sequence MR imaging. The susceptibility artifact volume in both rapid MR protocols was evaluated using a semiautomatic volumetry system. A statistically significant decrease in the susceptibility artifact volume has been demonstrated in TSE sequence imaging in comparison with SSh sequences. Using TSE sequences reduced the influence of artifacts from the programmable valve, and the image quality in all cases was rated as excellent. In all patients, rapid MR examinations were performed without any need for intravenous sedation or general anesthesia. Our study results strongly suggest the superiority of the TSE sequence MR protocol compared to the SSh sequence protocol in pediatric patients with a programmable ventriculoperitoneal drainage system due to a significant reduction of susceptibility artifact volume. Both rapid sequence MR protocols provide quick and satisfactory brain imaging with no ionizing radiation and a reduced need for intravenous sedation or general anesthesia.

  12. PROPELLER technique to improve image quality of MRI of the shoulder.

    PubMed

    Dietrich, Tobias J; Ulbrich, Erika J; Zanetti, Marco; Fucentese, Sandro F; Pfirrmann, Christian W A

    2011-12-01

    The purpose of this article is to evaluate the use of the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) technique for artifact reduction and overall image quality improvement for intermediate-weighted and T2-weighted MRI of the shoulder. One hundred eleven patients undergoing MR arthrography of the shoulder were included. A coronal oblique intermediate-weighted turbo spin-echo (TSE) sequence with fat suppression and a sagittal oblique T2-weighted TSE sequence with fat suppression were obtained without (standard) and with the PROPELLER technique. Scanning time increased from 3 minutes 17 seconds to 4 minutes 17 seconds (coronal oblique plane) and from 2 minutes 52 seconds to 4 minutes 10 seconds (sagittal oblique) using PROPELLER. Two radiologists graded image artifacts, overall image quality, and delineation of several anatomic structures on a 5-point scale (5, no artifact, optimal diagnostic quality; and 1, severe artifacts, diagnostically not usable). The Wilcoxon signed rank test was used to compare the data of the standard and PROPELLER images. Motion artifacts were significantly reduced in PROPELLER images (p < 0.001). Observer 1 rated motion artifacts with diagnostic impairment in one patient on coronal oblique PROPELLER images compared with 33 patients on standard images. Ratings for the sequences with PROPELLER were significantly better for overall image quality (p < 0.001). Observer 1 noted an overall image quality with diagnostic impairment in nine patients on sagittal oblique PROPELLER images compared with 23 patients on standard MRI. The PROPELLER technique for MRI of the shoulder reduces the number of sequences with diagnostic impairment as a result of motion artifacts and increases image quality compared with standard TSE sequences. PROPELLER sequences increase the acquisition time.

  13. Tracking prominent points in image sequences

    NASA Astrophysics Data System (ADS)

    Hahn, Michael

    1994-03-01

    Measuring image motion and inferring scene geometry and camera motion are main aspects of image sequence analysis. The determination of image motion and the structure-from-motion problem are tasks that can be addressed independently or in cooperative processes. In this paper we focus on tracking prominent points. High stability, reliability, and accuracy are criteria for the extraction of prominent points. This implies that tracking should work quite well with those features; unfortunately, the reality looks quite different. In the experimental investigations we processed a long sequence of 128 images. This mono sequence is taken in an outdoor environment at the experimental field of Mercedes Benz in Rastatt. Different tracking schemes are explored and the results with respect to stability and quality are reported.

  14. Dedicated phantom to study susceptibility artifacts caused by depth electrode in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Garcia, J.; Hidalgo, S. S.; Solis, S. E.; Vazquez, D.; Nuñez, J.; Rodriguez, A. O.

    2012-10-01

    Susceptibility artifacts can degrade magnetic resonance image quality. Electrodes are an important source of artifacts when performing brain imaging. A dedicated phantom was built using a depth electrode to study the susceptibility effects under different pulse sequences. T2-weighted images were acquired with both gradient- and spin-echo sequences. The spin-echo sequences can significantly attenuate the susceptibility artifacts, allowing a straightforward visualization of the regions surrounding the electrode.

  15. VizieR Online Data Catalog: Habitable zones around main-sequence stars (Kopparapu+, 2014)

    NASA Astrophysics Data System (ADS)

    Kopparapu, R. K.; Ramirez, R. M.; Schottelkotte, J.; Kasting, J. F.; Domagal-Goldman, S.; Eymet, V.

    2017-08-01

    Language: Fortran 90 Code tested under the following compilers/operating systems: ifort/CentOS linux Description of input data: No input necessary. Description of output data: Output files: HZs.dat, HZ_coefficients.dat System requirements: No major system requirement. Fortran compiler necessary. Calls to external routines: None. Additional comments: None (1 data file).

  16. Correlations and linkages between the sun and the earth's atmosphere: Needed measurements and observations

    NASA Technical Reports Server (NTRS)

    Kellogg, W. W.

    1975-01-01

    A study was conducted to identify the sequence of processes that lead from some change in solar input to the earth to a change in tropospheric circulation and weather. Topics discussed include: inputs from the sun, the solar wind, and the magnetosphere; bremsstrahlung, ionizing radiation, cirrus clouds, thunderstorms, wave propagation, and gravity waves.

  17. Some Thoughts on the Matter of Self-Determination and Will.

    ERIC Educational Resources Information Center

    Deci, Edward L.

    "Will" is defined in this paper as the capacity to decide how to behave based on a processing of relevant information. A sequence of motivated behavior begins with informational inputs or stimuli. These come from three sources: the environment, one's physiology, and one's memory. These inputs lead to the formation of motives or awareness of a…

  18. Studies in Pattern Detection in Normal and Autistic Children. II. Reproduction and Production of Color Sequences

    ERIC Educational Resources Information Center

    Frith, Uta

    1970-01-01

    Findings are consistent with the hypothesis of an input processing deficit in autistic children. Autistic children were insensitive to differences in the structures present and tended to impose their own simple stereotyped patterns. Normal children imposed such patterns in the absence of structured input only. Paper reports work which has been…

  19. GibbsCluster: unsupervised clustering and alignment of peptide sequences.

    PubMed

    Andreatta, Massimo; Alvarez, Bruno; Nielsen, Morten

    2017-07-03

    Receptor interactions with short linear peptide fragments (ligands) are at the base of many biological signaling processes. Conserved and information-rich amino acid patterns, commonly called sequence motifs, shape and regulate these interactions. Because of the properties of a receptor-ligand system or of the assay used to interrogate it, experimental data often contain multiple sequence motifs. GibbsCluster is a powerful tool for unsupervised motif discovery because it can simultaneously cluster and align peptide data. The GibbsCluster 2.0 presented here is an improved version incorporating insertions and deletions to account for variations in motif length in the peptide input. In basic terms, the program takes as input a set of peptide sequences and clusters them into meaningful groups. It returns the optimal number of clusters it identified, together with the sequence alignment and sequence motif characterizing each cluster. Several parameters are available to customize cluster analysis, including adjustable penalties for small clusters and overlapping groups and a trash cluster to remove outliers. As an example application, we used the server to deconvolute multiple specificities in large-scale peptidome data generated by mass spectrometry. The server is available at http://www.cbs.dtu.dk/services/GibbsCluster-2.0. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  20. Restoration of distorted depth maps calculated from stereo sequences

    NASA Technical Reports Server (NTRS)

    Damour, Kevin; Kaufman, Howard

    1991-01-01

    A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.

  1. Improved parameter extraction and classification for dynamic contrast enhanced MRI of prostate

    NASA Astrophysics Data System (ADS)

    Haq, Nandinee Fariah; Kozlowski, Piotr; Jones, Edward C.; Chang, Silvia D.; Goldenberg, S. Larry; Moradi, Mehdi

    2014-03-01

    Magnetic resonance imaging (MRI), particularly dynamic contrast enhanced (DCE) imaging, has shown great potential in prostate cancer diagnosis and prognosis. The time course of the DCE images provides measures of the contrast agent uptake kinetics. Also, using pharmacokinetic modelling, one can extract parameters from the DCE-MR images that characterize the tumor vascularization and can be used to detect cancer. A requirement for calculating the pharmacokinetic DCE parameters is estimating the Arterial Input Function (AIF). One needs an accurate segmentation of the cross section of the external femoral artery to obtain the AIF. In this work we report a semi-automatic method for segmentation of the cross section of the femoral artery, using the circular Hough transform, in the sequence of DCE images. We also report a machine-learning framework to combine pharmacokinetic parameters with the model-free contrast agent uptake kinetic parameters extracted from the DCE time course into a nine-dimensional feature vector. This combination of features is used with random forest and support vector machine classification for cancer detection. The MR data is obtained from patients prior to radical prostatectomy. After the surgery, whole-mount histopathology analysis is performed and registered to the DCE-MR images as the diagnostic reference. We show that the use of a combination of pharmacokinetic parameters and the model-free empirical parameters extracted from the time course of DCE results in improved cancer detection compared to the use of each group of features separately. We also validate the proposed method for calculation of AIF based on comparison with the manual method.
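
    The circular Hough transform used here for the artery cross-section reduces, for a known radius, to an accumulator where each edge pixel votes for all centers that would place it on the circle. A toy pure-Python version on synthetic edge points (the radius, grid, and angular sampling are illustrative assumptions, not the paper's parameters):

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_angles=120):
    """Minimal circular Hough transform with a known radius: each edge
    pixel votes for every center that could place it on a circle of
    that radius; the most-voted cell is the detected center.
    """
    votes = Counter()
    for x, y in edge_points:
        for k in range(n_angles):
            t = 2.0 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            votes[(cx, cy)] += 1
    return votes.most_common(1)[0][0]

# Synthetic edge pixels on a circle of radius 5 centered at (20, 30),
# standing in for the bright artery rim in a DCE frame.
edges = [(round(20 + 5 * math.cos(a)), round(30 + 5 * math.sin(a)))
         for a in [2 * math.pi * i / 24 for i in range(24)]]
print(hough_circle_center(edges, radius=5))
```

    A practical implementation would also search over a small range of radii and restrict voting to a region around the expected artery location, as the semi-automatic method in the paper implies.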

  2. Using virtual data for training deep model for hand gesture recognition

    NASA Astrophysics Data System (ADS)

    Nikolaev, E. I.; Dvoryaninov, P. V.; Lensky, Y. Y.; Drozdovsky, N. S.

    2018-05-01

    Deep learning has shown real promise in classification efficiency for hand gesture recognition problems. In this paper, the authors present experimental results for a deeply-trained model for hand gesture recognition through the use of hand images. The authors have trained two deep convolutional neural networks. The first architecture produces the hand position as a 2D vector from an input hand image. The second one predicts the hand gesture class for the input image. The first proposed architecture produces state-of-the-art results with an accuracy rate of 89%, and the second architecture with split input produces an accuracy rate of 85.2%. In this paper, the authors also propose using virtual data for training a supervised deep model. This technique aims to avoid using original labelled images in the training process. The interest of this method in data preparation is motivated by the need to overcome one of the main challenges of deep supervised learning: using a copious amount of labelled data during training.

  3. On the Visual Input Driving Human Smooth-Pursuit Eye Movements

    NASA Technical Reports Server (NTRS)

    Stone, Leland S.; Beutter, Brent R.; Lorenceau, Jean

    1996-01-01

    Current computational models of smooth-pursuit eye movements assume that the primary visual input is local retinal-image motion (often referred to as retinal slip). However, we show that humans can pursue object motion with considerable accuracy, even in the presence of conflicting local image motion. This finding indicates that the visual cortical area(s) controlling pursuit must be able to perform a spatio-temporal integration of local image motion into a signal related to object motion. We also provide evidence that the object-motion signal that drives pursuit is related to the signal that supports perception. We conclude that current models of pursuit should be modified to include a visual input that encodes perceived object motion and not merely retinal image motion. Finally, our findings suggest that the measurement of eye movements can be used to monitor visual perception, with particular value in applied settings as this non-intrusive approach would not require interrupting ongoing work or training.

  4. A generic, cost-effective, and scalable cell lineage analysis platform

    PubMed Central

    Biezuner, Tamir; Spiro, Adam; Raz, Ofir; Amir, Shiran; Milo, Lilach; Adar, Rivka; Chapal-Ilani, Noa; Berman, Veronika; Fried, Yael; Ainbinder, Elena; Cohen, Galit; Barr, Haim M.; Halaban, Ruth; Shapiro, Ehud

    2016-01-01

    Advances in single-cell genomics enable commensurate improvements in methods for uncovering lineage relations among individual cells. Current sequencing-based methods for cell lineage analysis depend on low-resolution bulk analysis or rely on extensive single-cell sequencing, which is not scalable and could be biased by functional dependencies. Here we show an integrated biochemical-computational platform for generic single-cell lineage analysis that is retrospective, cost-effective, and scalable. It consists of a biochemical-computational pipeline that inputs individual cells, produces targeted single-cell sequencing data, and uses it to generate a lineage tree of the input cells. We validated the platform by applying it to cells sampled from an ex vivo grown tree and analyzed its feasibility landscape by computer simulations. We conclude that the platform may serve as a generic tool for lineage analysis and thus pave the way toward large-scale human cell lineage discovery. PMID:27558250

  5. Transcriptome Analysis at the Single-Cell Level Using SMART Technology.

    PubMed

    Fish, Rachel N; Bostick, Magnolia; Lehman, Alisa; Farmer, Andrew

    2016-10-10

    RNA sequencing (RNA-seq) is a powerful method for analyzing cell state, with minimal bias, and has broad applications within the biological sciences. However, transcriptome analysis of seemingly homogenous cell populations may in fact overlook significant heterogeneity that can be uncovered at the single-cell level. The ultra-low amount of RNA contained in a single cell requires extraordinarily sensitive and reproducible transcriptome analysis methods. As next-generation sequencing (NGS) technologies mature, transcriptome profiling by RNA-seq is increasingly being used to decipher the molecular signature of individual cells. This unit describes an ultra-sensitive and reproducible protocol to generate cDNA and sequencing libraries directly from single cells or RNA inputs ranging from 10 pg to 10 ng. Important considerations for working with minute RNA inputs are given. © 2016 by John Wiley & Sons, Inc.

  6. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    NASA Astrophysics Data System (ADS)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformation system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a preceding step of isolated-word recognition of slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for that input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in the phonological transcription, because differences at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all candidate written words are analyzed from an orthographic and grammatical point of view in order to eliminate the incorrect ones.

  7. MSuPDA: A memory efficient algorithm for sequence alignment.

    PubMed

    Khan, Mohammad Ibrahim; Kamal, Md Sarwar; Chowdhury, Linkon

    2015-01-16

    Space complexity is a million-dollar question in DNA sequence alignment. In this regard, MSuPDA (Memory Saving under Pushdown Automata) can help reduce the space occupied in computer memory. In the proposed process, an Anchor Seed (AS) is selected from a given dataset of nucleotide base pairs for local sequence alignment. A Quick Splitting (QS) technique separates the anchor seed from the DNA genome segments. The selected anchor seed is placed in the input unit of a pushdown automaton (PDA), while the whole DNA genome segments are placed on the PDA's stack. The anchor seed from the input unit is then matched against the DNA genome segments from the stack. Every match, mismatch, or indel of nucleotides is popped from the stack under the control of the PDA's control unit, and each pop operation frees the memory cell occupied by that nucleotide base pair.
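
    The pop-as-you-match idea can be illustrated with a minimal sketch; the function and variable names below are illustrative rather than taken from the paper, and alignment scoring is reduced to bare match/mismatch counting:

```python
# Hypothetical sketch of the MSuPDA idea: an anchor seed is compared against a
# genome segment held on a stack; every comparison pops one nucleotide, so the
# memory holding consumed base pairs is freed as the alignment proceeds.

def align_with_stack(anchor_seed, segment):
    """Count matches and mismatches while popping the segment stack."""
    stack = list(reversed(segment))   # top of stack = first base of the segment
    matches = mismatches = 0
    for base in anchor_seed:
        if not stack:                 # segment exhausted before the seed
            break
        top = stack.pop()             # POP frees the cell holding this base
        if top == base:
            matches += 1
        else:
            mismatches += 1
    return matches, mismatches

print(align_with_stack("ACGT", "ACGA"))   # -> (3, 1)
```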

  8. Color constancy using bright-neutral pixels

    NASA Astrophysics Data System (ADS)

    Wang, Yanfang; Luo, Yupin

    2014-03-01

    An effective illuminant-estimation approach for color constancy is proposed. Bright and near-neutral pixels are selected to jointly represent the illuminant color and utilized for illuminant estimation. To assess the representing capability of pixels, bright-neutral strength (BNS) is proposed by combining pixel chroma and brightness. Accordingly, a certain percentage of pixels with the largest BNS is selected to be the representative set. For every input image, a proper percentage value is determined via an iterative strategy by seeking the optimal color-corrected image. To compare various color-corrected images of an input image, image color-cast degree (ICCD) is devised using means and standard deviations of RGB channels. Experimental evaluation on standard real-world datasets validates the effectiveness of the proposed approach.
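
    The bright-neutral selection step can be sketched under assumed definitions (brightness as the channel sum, chroma as the distance of the pixel chromaticity from neutral); the paper's exact BNS formula and the ICCD-driven percentage search are not reproduced here:

```python
# Toy bright-neutral-strength (BNS) selection: bright, low-chroma pixels score
# highest; the top fraction is averaged as the illuminant estimate. The bns()
# scoring below is an assumed stand-in for the formula in the paper.

def illuminant_estimate(pixels, percent=0.2):
    def bns(p):
        r, g, b = p
        s = r + g + b or 1e-9                       # brightness (channel sum)
        chroma = abs(r/s - 1/3) + abs(g/s - 1/3) + abs(b/s - 1/3)
        return s * (1.0 - chroma)                   # bright and neutral -> large BNS
    ranked = sorted(pixels, key=bns, reverse=True)
    top = ranked[:max(1, int(len(ranked) * percent))]
    n = len(top)
    return tuple(sum(p[i] for p in top) / n for i in range(3))
```

With a pixel set dominated by one bright gray pixel, the estimate converges to that pixel's color, as expected for a neutral surface under the scene illuminant.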

  9. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline, even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  10. MPST Software: MoonKommand

    NASA Technical Reports Server (NTRS)

    Kwok, John H.; Call, Jared A.; Khanampornpan, Teerapat

    2012-01-01

    This software automatically processes Sally Ride Science (SRS) delivered MoonKAM camera control files (ccf) into uplink products for the GRAIL-A and GRAIL-B spacecraft as part of an education and public outreach (EPO) extension to the GRAIL mission. Once properly validated and deemed safe for execution onboard the spacecraft, MoonKommand generates the command products via the Automated Sequence Processor (ASP) and generates uplink (.scmf) files for radiation to the GRAIL-A and/or GRAIL-B spacecraft. Any errors detected along the way are reported back to SRS via email. With MoonKommand, SRS can control their EPO instrument as part of a fully automated process. Inputs are received from SRS as either image capture files (.ccficd) for new image requests, or downlink/delete files (.ccfdl) for requesting image downlink from the instrument and on-board memory management. The MoonKommand outputs are command and file-load (.scmf) files that will be uplinked by the Deep Space Network (DSN). Without MoonKommand software, uplink product generation for the MoonKAM instrument would be a manual process. The software is specific to the MoonKAM instrument on the GRAIL mission. At the time of this writing, the GRAIL mission was making final preparations to begin the science phase, which was scheduled to continue until June 2012.

  11. A new complexity measure for time series analysis and classification

    NASA Astrophysics Data System (ADS)

    Nagaraj, Nithin; Balasubramanian, Karthi; Dey, Sutirth

    2013-07-01

    Complexity measures are used in a number of applications, including extraction of information from data such as ecological time series, detection of non-random structure in biomedical signals, testing of random number generators, language recognition, and authorship attribution. Different complexity measures proposed in the literature, such as Shannon entropy, relative entropy, Lempel-Ziv, Kolmogorov, and algorithmic complexity, are mostly ineffective in analyzing short sequences that are further corrupted with noise. To address this problem, we propose a new complexity measure, ETC, defined as the "Effort To Compress" the input sequence by a lossless compression algorithm. Here, we employ the lossless compression algorithm known as Non-Sequential Recursive Pair Substitution (NSRPS) and define ETC as the number of iterations needed for NSRPS to transform the input sequence into a constant sequence. We demonstrate the utility of ETC in two applications. ETC is shown to have better correlation with the Lyapunov exponent than Shannon entropy, even for relatively short and noisy time series. The measure also has a greater rate of success in automatic identification and classification of short noisy sequences, compared to entropy and a popular measure based on Lempel-Ziv compression (implemented by Gzip).
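
    The ETC computation can be sketched directly from this definition. The snippet below is a simplified NSRPS: on each iteration the most frequent symbol pair is replaced, greedily left-to-right, by a fresh symbol (tie-breaking and overlap handling in the published algorithm may differ), and ETC is the number of iterations until the sequence becomes constant:

```python
from collections import Counter

def etc(seq):
    """Effort To Compress: iterations of simplified NSRPS pair substitution
    needed to reduce seq to a constant (or single-symbol) sequence."""
    seq = list(seq)
    steps = 0
    next_id = 0
    while len(set(seq)) > 1:
        pairs = Counter(zip(seq, seq[1:]))
        target = pairs.most_common(1)[0][0]   # most frequent adjacent pair
        fresh = ("sym", next_id)              # fresh symbol outside the alphabet
        next_id += 1
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == target:
                out.append(fresh)
                i += 2                        # consume the replaced pair
            else:
                out.append(seq[i])
                i += 1
        seq = out
        steps += 1
    return steps
```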

  12. Free-breathing echo-planar imaging based diffusion-weighted magnetic resonance imaging of the liver with prospective acquisition correction.

    PubMed

    Asbach, Patrick; Hein, Patrick A; Stemmer, Alto; Wagner, Moritz; Huppertz, Alexander; Hamm, Bernd; Taupitz, Matthias; Klessen, Christian

    2008-01-01

    To evaluate soft tissue contrast and image quality of a respiratory-triggered echo-planar imaging based diffusion-weighted sequence (EPI-DWI) with different b values for magnetic resonance imaging (MRI) of the liver. Forty patients were examined. Quantitative and qualitative evaluation of contrast was performed. Severity of artifacts and overall image quality in comparison with a T2w turbo spin-echo (T2-TSE) sequence were scored. The liver-spleen contrast was significantly higher (P < 0.05) for the EPI-DWI compared with the T2-TSE sequence (0.47 +/- 0.11 (b50); 0.48 +/- 0.13 (b300); 0.47 +/- 0.13 (b600) vs 0.38 +/- 0.11). Liver-lesion contrast strongly depends on the b value of the DWI sequence and decreased with higher b values (b50, 0.47 +/- 0.19; b300, 0.40 +/- 0.20; b600, 0.28 +/- 0.23). Severity of artifacts and overall image quality were comparable to the T2-TSE sequence when using a low b value (P > 0.05), artifacts increased and image quality decreased with higher b values (P < 0.05). Respiratory-triggered EPI-DWI of the liver is feasible because good image quality and favorable soft tissue contrast can be achieved.

  13. [Contrastive analysis of artifacts produced by metal dental crowns in 3.0 T magnetic resonance imaging with six sequences].

    PubMed

    Lan, Gao; Yunmin, Lian; Pu, Wang; Haili, Huai

    2016-06-01

    This study aimed to observe and evaluate the metallic artifacts produced by metal dental crowns in six 3.0 T MRI sequences. Dental crowns fabricated from four different materials (Co-Cr, Ni-Cr, Ti alloy, and pure Ti) were evaluated. A mature crossbreed dog was used as the experimental animal, and crowns were fabricated for its upper right second premolar. Each crown was examined through head MRI (3.0 T) with six sequences, namely, T₁ weighted-imaging of spin echo (T₁W/SE), T₂ weighted-imaging of inversion recovery (T₂W/IR), T₂ star gradient echo (T₂*/GRE), T₂ weighted-imaging of fast spin echo (T₂W/FSE), T₂ weighted-imaging of fluid-attenuated inversion recovery (T₂W/FLAIR), and T₂ weighted-imaging of propeller (T₂W/PROP). The largest area and number of layers of artifacts were assessed and compared. The artifact in the T₂*/GRE sequence was significantly wider than those in the other sequences (P < 0.01), whereas the artifact extents of the other sequences were not significantly different from one another (P > 0.05). T₂*/GRE exhibits the strongest influence on the artifact, whereas the five other sequences contribute equally to artifact generation.

  14. Visual pattern image sequence coding

    NASA Technical Reports Server (NTRS)

    Silsbee, Peter; Bovik, Alan C.; Chen, Dapang

    1990-01-01

    The visual pattern image coding (VPIC) configurable digital image-coding process is capable of coding with visual fidelity comparable to the best available techniques, at compressions which (at 30-40:1) exceed all other technologies. These capabilities are associated with unprecedented coding efficiencies; coding and decoding operations are entirely linear with respect to image size and entail a complexity that is 1-2 orders of magnitude faster than any previous high-compression technique. The visual pattern image sequence coding to which attention is presently given exploits all the advantages of the static VPIC in the reduction of information from an additional, temporal dimension, to achieve unprecedented image sequence coding performance.

  15. Multi-modality image fusion based on enhanced fuzzy radial basis function neural networks.

    PubMed

    Chao, Zhen; Kim, Dohyeon; Kim, Hee-Joung

    2018-04-01

    In clinical applications, single-modality images do not provide sufficient diagnostic information. Therefore, it is necessary to combine the advantages or complementarities of different modalities of images. Recently, neural network techniques have been applied to medical image fusion by many researchers, but there are still many deficiencies. In this study, we propose a novel fusion method to combine multi-modality medical images based on an enhanced fuzzy radial basis function neural network (Fuzzy-RBFNN), which includes five layers: input, fuzzy partition, front combination, inference, and output. Moreover, we propose a hybrid of the gravitational search algorithm (GSA) and the error back-propagation algorithm (EBPA) to train the network, i.e., to update its parameters. Two different patterns of images are used as inputs to the neural network, and the output is the fused image. A comparison with conventional fusion methods and another neural network method, through subjective observation and objective evaluation indexes, reveals that the proposed method effectively synthesizes the information of the input images and achieves better results. We also trained the network using the EBPA and the GSA individually; analysis of the same evaluation indexes reveals that the hybrid EBPGSA not only outperformed both the EBPA and the GSA but also trained the neural network more accurately. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  16. Staring 2-D hadamard transform spectral imager

    DOEpatents

    Gentry, Stephen M [Albuquerque, NM; Wehlburg, Christine M [Albuquerque, NM; Wehlburg, Joseph C [Albuquerque, NM; Smith, Mark W [Albuquerque, NM; Smith, Jody L [Albuquerque, NM

    2006-02-07

    A staring imaging system inputs a 2D spatial image containing multi-frequency spectral information. The image is encoded in one dimension with a cyclic Hadamard S-matrix. The resulting image is detected with a spatial 2D detector, and a computer applies a Hadamard transform to recover the encoded image.
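
    The encode/decode algebra can be sketched as follows. A Sylvester-construction Hadamard matrix is used for simplicity (the patent uses a cyclic S-matrix, but the recovery formula S⁻¹ = 2/(n+1)·(2Sᵀ − J) is the same either way):

```python
# 1-D Hadamard S-matrix encoding/decoding sketch, as used in Hadamard
# transform spectrometry: measurements are sums of masked inputs, and the
# original values are recovered by the S-matrix inverse formula.

def hadamard(order):
    """Sylvester-construction Hadamard matrix; order must be a power of 2."""
    h = [[1]]
    while len(h) < order:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def s_matrix(order):
    """S-matrix: drop the first row/column of H, then map 1 -> 0, -1 -> 1."""
    h = hadamard(order)
    return [[(1 - v) // 2 for v in row[1:]] for row in h[1:]]

def encode(s, x):
    """Each measurement is the sum of the inputs passed by one mask row."""
    return [sum(si * xi for si, xi in zip(row, x)) for row in s]

def decode(s, y):
    """Apply S^-1 = 2/(n+1) * (2*S^T - J), with J the all-ones matrix."""
    n = len(s)
    return [2.0 / (n + 1) * sum((2 * s[j][i] - 1) * y[j] for j in range(n))
            for i in range(n)]

s = s_matrix(4)            # 3x3 S-matrix; each row opens (n+1)/2 = 2 slits
y = encode(s, [5, 2, 7])   # encoded measurements
x = decode(s, y)           # recovers [5.0, 2.0, 7.0]
```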

  17. A manual for microcomputer image analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich, P.M.; Ranken, D.M.; George, J.S.

    1989-12-01

    This manual is intended to serve three basic purposes: as a primer in microcomputer image analysis theory and techniques, as a guide to the use of IMAGE©, a public domain microcomputer program for image analysis, and as a stimulus to encourage programmers to develop microcomputer software suited for scientific use. Topics discussed include the principles of image processing and analysis, use of standard video for input and display, spatial measurement techniques, and the future of microcomputer image analysis. A complete reference guide that lists the commands for IMAGE is provided. IMAGE includes capabilities for digitization, input and output of images, hardware display lookup table control, editing, edge detection, histogram calculation, measurement along lines and curves, measurement of areas, examination of intensity values, output of analytical results, conversion between raster and vector formats, and region movement and rescaling. The control structure of IMAGE emphasizes efficiency, precision of measurement, and scientific utility. 18 refs., 18 figs., 2 tabs.

  18. Speckle noise reduction in ultrasound images using a discrete wavelet transform-based image fusion technique.

    PubMed

    Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun

    2015-01-01

    Here, the speckle noise in ultrasound images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed, and their performances were compared in order to derive the optimal input conditions. To evaluate the speckle-noise removal performance, the image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original images without the algorithm. As a result, applying the DWT and filtering techniques individually caused information loss and residual noise, and did not yield the best noise reduction performance. Conversely, the image fusion method applied under the SRAD-original input condition preserved the key information in the original image while removing the speckle noise. Based on these characteristics, the SRAD-original input condition gave the best denoising performance on the ultrasound images. The denoising technique proposed on the basis of these results was confirmed to have high potential for clinical application.
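
    The fusion step can be sketched with a one-level Haar DWT on 1-D signals (a 2-D image applies the same transform along rows and columns). Averaging the approximation coefficients and keeping the maximum-magnitude detail coefficient is one common fusion rule; it is shown here as an assumed illustration, not necessarily the authors' exact configuration:

```python
# One-level Haar DWT fusion sketch for two equal, even-length 1-D signals:
# decompose both, average approximations, keep the stronger detail, invert.

def haar_fwd(x):
    a = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]  # approximation
    d = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]  # detail
    return a, d

def haar_inv(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out

def fuse(x, y):
    ax, dx = haar_fwd(x)
    ay, dy = haar_fwd(y)
    a = [(p + q) / 2 for p, q in zip(ax, ay)]                 # average lows
    d = [p if abs(p) >= abs(q) else q for p, q in zip(dx, dy)]  # max-abs highs
    return haar_inv(a, d)
```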

  19. A mathematical model of neuro-fuzzy approximation in image classification

    NASA Astrophysics Data System (ADS)

    Gopalan, Sasi; Pinto, Linu; Sheela, C.; Arun Kumar M., N.

    2016-06-01

    Image digitization and the explosion of the World Wide Web have made traditional search an inefficient method for retrieving required grassland image data from a large database. For a given input query image, a Content-Based Image Retrieval (CBIR) system retrieves similar images from a large database. Advances in technology have increased the use of grassland image data in diverse areas such as agriculture, art galleries, education, and industry. In all of these areas it is necessary to retrieve grassland image data efficiently from a large database in order to perform an assigned task and make a suitable decision. A CBIR system based on grassland image properties, which uses a feed-forward back-propagation neural network for effective image retrieval, is proposed in this paper. Fuzzy memberships play an important role in the input space of the proposed system, leading to a combined neuro-fuzzy approximation in image classification. The mathematical model in the proposed work clarifies the fuzzy-neuro approximation and the convergence of the image features in a grassland image.

  20. Contrast-enhanced T1-weighted fluid-attenuated inversion-recovery BLADE magnetic resonance imaging of the brain: an alternative to spin-echo technique for detection of brain lesions in the unsedated pediatric patient?

    PubMed

    Alibek, Sedat; Adamietz, Boris; Cavallaro, Alexander; Stemmer, Alto; Anders, Katharina; Kramer, Manuel; Bautz, Werner; Staatz, Gundula

    2008-08-01

    We compared contrast-enhanced T1-weighted magnetic resonance (MR) imaging of the brain using different types of data acquisition techniques: periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER, BLADE) imaging versus standard k-space sampling (conventional spin-echo pulse sequence) in the unsedated pediatric patient with focus on artifact reduction, overall image quality, and lesion detectability. Forty-eight pediatric patients (aged 3 months to 18 years) were scanned with a clinical 1.5-T whole body MR scanner. Cross-sectional contrast-enhanced T1-weighted spin-echo sequence was compared to a T1-weighted dark-fluid fluid-attenuated inversion-recovery (FLAIR) BLADE sequence for qualitative and quantitative criteria (image artifacts, image quality, lesion detectability) by two experienced radiologists. Imaging protocols were matched for imaging parameters. Reader agreement was assessed using the exact Bowker test. BLADE images showed significantly less pulsation and motion artifacts than the standard T1-weighted spin-echo sequence scan. BLADE images showed statistically significant lower signal-to-noise ratio but higher contrast-to-noise ratios with superior gray-white matter contrast. All lesions were demonstrated on FLAIR BLADE imaging, and one false-positive lesion was visible in spin-echo sequence images. BLADE MR imaging at 1.5 T is applicable for central nervous system imaging of the unsedated pediatric patient, reduces motion and pulsation artifacts, and minimizes the need for sedation or general anesthesia without loss of relevant diagnostic information.

  1. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study.

    PubMed

    Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto

    2015-05-01

    To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors' proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. 
In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors' algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.
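
    The two evaluation metrics can be sketched on binary masks represented as sets of (row, col) pixels. The abstract does not give the authors' exact overlap definition, so Jaccard overlap is shown here as one common choice:

```python
import math

# Centroid localization error (CLE) and a volume overlap index (VOI) between
# a tracked mask and a ground-truth mask, both given as sets of pixel coords.

def centroid(mask):
    n = len(mask)
    return (sum(r for r, _ in mask) / n, sum(c for _, c in mask) / n)

def cle(mask_a, mask_b, pixel_mm=1.0):
    """Distance between mask centroids, scaled to millimetres."""
    (ra, ca), (rb, cb) = centroid(mask_a), centroid(mask_b)
    return math.hypot(ra - rb, ca - cb) * pixel_mm

def voi(mask_a, mask_b):
    """Overlap index |A intersect B| / |A union B| (Jaccard form)."""
    return len(mask_a & mask_b) / len(mask_a | mask_b)
```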

  2. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoyong, E-mail: xiaoyong@ieee.org; Homma, Noriyasu, E-mail: homma@ieee.org; Ichiji, Kei, E-mail: ichiji@yoshizawa.ecei.tohoku.ac.jp

    2015-05-15

    Purpose: To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. Methods: A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. Results: For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors’ proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. 
Conclusions: In this paper, the authors presented a feasibility study of tracking tumor boundary in EPID images by using a LSM-based algorithm. Experimental results conducted on phantom and clinical EPID images demonstrated the effectiveness of the tracking algorithm for visible tumor target. Compared with previous tracking methods, the authors’ algorithm has the potential to improve the tracking accuracy in radiation therapy. In addition, real-time tumor boundary information within the irradiation field will be potentially useful for further applications, such as adaptive beam delivery, dose evaluation.

  3. Image encryption using random sequence generated from generalized information domain

    NASA Astrophysics Data System (ADS)

    Xia-Yan, Zhang; Guo-Ji, Zhang; Xuan, Li; Ya-Zhou, Ren; Jie-Hua, Wu

    2016-05-01

    A novel image encryption method based on a random sequence generated from the generalized information domain and a permutation-diffusion architecture is proposed. The random sequence is generated by reconstruction from the generalized information file and discrete trajectory extraction from the data stream. The trajectory address sequence is used to generate a P-box to shuffle the plain image, while the random sequences are treated as keystreams. A new factor, called the drift factor, is employed to accelerate and enhance the performance of the random sequence generator. An initial value is introduced so that the encryption method approximates a one-time pad. Experimental results show that the random sequences pass the NIST statistical tests at a high rate, and extensive analysis demonstrates that the new encryption scheme has superior security.
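
    The permutation-diffusion architecture itself can be sketched in a few lines. A seeded PRNG stands in for the paper's generalized-information-domain generator, purely to keep the example self-contained; the P-box shuffle and XOR keystream stages mirror the structure described above:

```python
import random

# Toy permutation-diffusion cipher over a flat list of 8-bit pixel values.
# The keystream source here (a seeded PRNG) is an assumed stand-in, NOT the
# paper's generalized-information-domain generator.

def encrypt(pixels, key):
    rng = random.Random(key)
    perm = list(range(len(pixels)))
    rng.shuffle(perm)                             # P-box: permute positions
    stream = [rng.randrange(256) for _ in pixels]  # keystream for diffusion
    return [pixels[i] ^ k for i, k in zip(perm, stream)]

def decrypt(cipher, key):
    rng = random.Random(key)                      # replay the same keystream
    perm = list(range(len(cipher)))
    rng.shuffle(perm)
    stream = [rng.randrange(256) for _ in cipher]
    plain = [0] * len(cipher)
    for pos in range(len(cipher)):                # undo XOR, then un-shuffle
        plain[perm[pos]] = cipher[pos] ^ stream[pos]
    return plain
```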

  4. (Pea)nuts and bolts of visual narrative: Structure and meaning in sequential image comprehension

    PubMed Central

    Cohn, Neil; Paczynski, Martin; Jackendoff, Ray; Holcomb, Phillip J.; Kuperberg, Gina R.

    2012-01-01

    Just as syntax differentiates coherent sentences from scrambled word strings, the comprehension of sequential images must also use a cognitive system to distinguish coherent narrative sequences from random strings of images. We conducted experiments analogous to two classic studies of language processing to examine the contributions of narrative structure and semantic relatedness to processing sequential images. We compared four types of comic strips: 1) Normal sequences with both structure and meaning, 2) Semantic Only sequences (in which the panels were related to a common semantic theme, but had no narrative structure), 3) Structural Only sequences (narrative structure but no semantic relatedness), and 4) Scrambled sequences of randomly-ordered panels. In Experiment 1, participants monitored for target panels in sequences presented panel-by-panel. Reaction times were slowest to panels in Scrambled sequences, intermediate in both Structural Only and Semantic Only sequences, and fastest in Normal sequences. This suggests that both semantic relatedness and narrative structure offer advantages to processing. Experiment 2 measured ERPs to all panels across the whole sequence. The N300/N400 was largest to panels in both the Scrambled and Structural Only sequences, intermediate in Semantic Only sequences and smallest in the Normal sequences. This implies that a combination of narrative structure and semantic relatedness can facilitate semantic processing of upcoming panels (as reflected by the N300/N400). Also, panels in the Scrambled sequences evoked a larger left-lateralized anterior negativity than panels in the Structural Only sequences. This localized effect was distinct from the N300/N400, and appeared despite the fact that these two sequence types were matched on local semantic relatedness between individual panels. These findings suggest that sequential image comprehension uses a narrative structure that may be independent of semantic relatedness. 
Altogether, we argue that the comprehension of visual narrative is guided by an interaction between structure and meaning. PMID:22387723

  5. Deep multi-spectral ensemble learning for electronic cleansing in dual-energy CT colonography

    NASA Astrophysics Data System (ADS)

    Tachibana, Rie; Näppi, Janne J.; Hironaka, Toru; Kim, Se Hyung; Yoshida, Hiroyuki

    2017-03-01

    We developed a novel electronic cleansing (EC) method for dual-energy CT colonography (DE-CTC) based on an ensemble deep convolution neural network (DCNN) and multi-spectral multi-slice image patches. In the method, an ensemble DCNN is used to classify each voxel of a DE-CTC image volume into five classes: luminal air, soft tissue, tagged fecal materials, and partial-volume boundaries between air and tagging and those between soft tissue and tagging. Each DCNN acts as a voxel classifier, where an input image patch centered at the voxel is generated as input to the DCNNs. An image patch has three channels that are mapped from a region-of-interest containing the image plane of the voxel and the two adjacent image planes. Six different types of spectral input image datasets were derived using two dual-energy CT images, two virtual monochromatic images, and two material images. An ensemble DCNN was constructed by use of a meta-classifier that combines the output of multiple DCNNs, each of which was trained with a different type of multi-spectral image patches. The electronically cleansed CTC images were calculated by removal of regions classified as other than soft tissue, followed by a colon surface reconstruction. For pilot evaluation, 359 volumes of interest (VOIs) representing sources of subtraction artifacts observed in current EC schemes were sampled from 30 clinical CTC cases. Preliminary results showed that the ensemble DCNN can yield high accuracy in labeling of the VOIs, indicating that deep learning of multi-spectral EC with multi-slice imaging could accurately remove residual fecal materials from CTC images without generating major EC artifacts.

  6. Self-Organizing Hidden Markov Model Map (SOHMMM): Biological Sequence Clustering and Cluster Visualization.

    PubMed

    Ferles, Christos; Beaufort, William-Scott; Ferle, Vanessa

    2017-01-01

    The present study devises mapping methodologies and projection techniques that visualize and demonstrate biological sequence data clustering results. The Sequence Data Density Display (SDDD) and Sequence Likelihood Projection (SLP) visualizations represent the input symbolic sequences in a lower-dimensional space in such a way that the clusters and relations of data elements are depicted graphically. Both operate in combination/synergy with the Self-Organizing Hidden Markov Model Map (SOHMMM). The resulting unified framework can automatically and directly analyze raw sequence data. This analysis is carried out with little, or even no, prior information or domain knowledge.

  7. SVM-PB-Pred: SVM based protein block prediction method using sequence profiles and secondary structures.

    PubMed

    Suresh, V; Parthasarathy, S

    2014-01-01

    We developed a support vector machine based web server called SVM-PB-Pred to predict the protein block for any given amino acid sequence. The input features of SVM-PB-Pred include (i) sequence profiles (PSSM) and (ii) actual secondary structures (SS) from the DSSP method or predicted secondary structures from the NPS@ and GOR4 methods. Three combined input features, PSSM+SS(DSSP), PSSM+SS(NPS@) and PSSM+SS(GOR4), were used to train and test the SVM models, which were developed on four datasets: RS90, DB433, LI1264 and SP1577. The four SVM models were evaluated using three benchmarking tests: (i) self-consistency, (ii) seven-fold cross-validation and (iii) an independent case test. The maximum prediction accuracy of ~70% was observed in the self-consistency test for the SVM models of both the LI1264 and SP1577 datasets when the PSSM+SS(DSSP) input features were used. In the independent case test, the accuracies for the same two datasets dropped to ~53% for PSSM+SS(NPS@) and ~43% for PSSM+SS(GOR4). Using our method, it is possible to predict the protein block letters for any query protein sequence with ~53% accuracy when the SP1577 dataset and predicted secondary structure from the NPS@ server are used. The SVM-PB-Pred server can be freely accessed through http://bioinfo.bdu.ac.in/~svmpbpred.
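    The feature construction behind such PSSM+SS inputs can be illustrated. This is a generic sketch under stated assumptions (a ±2-residue PSSM window, zero-padded at the termini, and a 3-state one-hot secondary-structure code); the server's actual encoding and SVM parameters are not given in the abstract.

    ```python
    import numpy as np

    SS_STATES = "HEC"  # helix, strand, coil

    def residue_features(pssm, ss, i, window=2):
        """Feature vector for residue i: PSSM rows over a +/-window context,
        zero-padded at the termini, concatenated with a one-hot encoding of
        the (actual or predicted) secondary-structure state."""
        n, d = pssm.shape
        rows = []
        for j in range(i - window, i + window + 1):
            rows.append(pssm[j] if 0 <= j < n else np.zeros(d))
        onehot = np.zeros(len(SS_STATES))
        onehot[SS_STATES.index(ss[i])] = 1.0
        return np.concatenate(rows + [onehot])

    pssm = np.random.rand(10, 20)   # 10 residues x 20 amino-acid scores
    ss = "CCHHHHEEEC"
    x = residue_features(pssm, ss, 0)
    ```

    Vectors built this way for every residue would then be fed to an SVM classifier over the protein block alphabet.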

  8. MRI of the hip at 7T: feasibility of bone microarchitecture, high-resolution cartilage, and clinical imaging.

    PubMed

    Chang, Gregory; Deniz, Cem M; Honig, Stephen; Egol, Kenneth; Regatte, Ravinder R; Zhu, Yudong; Sodickson, Daniel K; Brown, Ryan

    2014-06-01

    To demonstrate the feasibility of performing bone microarchitecture, high-resolution cartilage, and clinical imaging of the hip at 7T. This study had Institutional Review Board approval. Using an 8-channel coil constructed in-house, we imaged the hips of 15 subjects on a 7T magnetic resonance imaging (MRI) scanner. We applied: 1) a T1-weighted 3D fast low angle shot (3D FLASH) sequence (0.23 × 0.23 × 1–1.5 mm³) for bone microarchitecture imaging; 2) T1-weighted 3D FLASH (water excitation) and volumetric interpolated breath-hold examination (VIBE) sequences (0.23 × 0.23 × 1.5 mm³) with saturation or inversion recovery-based fat suppression for cartilage imaging; 3) 2D intermediate-weighted fast spin-echo (FSE) sequences without and with fat saturation (0.27 × 0.27 × 2 mm) for clinical imaging. Bone microarchitecture images allowed visualization of individual trabeculae within the proximal femur. Cartilage was well visualized and fat was well suppressed on FLASH and VIBE sequences. FSE sequences allowed visualization of cartilage, the labrum (including cartilage and labral pathology), joint capsule, and tendons. This is the first study to demonstrate the feasibility of performing a clinically comprehensive hip MRI protocol at 7T, including high-resolution imaging of bone microarchitecture and cartilage, as well as clinical imaging. Copyright © 2013 Wiley Periodicals, Inc.

  9. Assessing the skeletal age from a hand radiograph: automating the Tanner-Whitehouse method

    NASA Astrophysics Data System (ADS)

    Niemeijer, Meindert; van Ginneken, Bram; Maas, Casper A.; Beek, Frederik J. A.; Viergever, Max A.

    2003-05-01

    The skeletal maturity of children is usually assessed from a standard radiograph of the left hand and wrist. An established clinical method to determine the skeletal maturity is the Tanner-Whitehouse (TW2) method. This method divides the skeletal development into several stages (labelled A, B, ..., I). We are developing an automated system based on this method. In this work we focus on assigning a stage to one region of interest (ROI), the middle phalanx of the third finger. We classify each ROI as follows. A number of ROIs which have been assigned a certain stage by a radiologist are used to construct a mean image for that stage. For a new input ROI, landmarks are detected by using an Active Shape Model. These are used to align the mean images with the input image. Subsequently the correlation between each transformed mean stage image and the input is calculated. The input ROI can be assigned to the stage with the highest correlation directly, or the values can be used as features in a classifier. The method was tested on 71 cases ranging from stage E to I. The ROI was staged correctly in 73.2% of all cases, and in 97.2% of all incorrectly staged cases the error was not more than one stage.
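    The core staging step — correlating the landmark-aligned ROI against each stage's mean image and taking the best match — can be sketched directly. The toy mean images below are synthetic stand-ins; ASM alignment is assumed to have been done beforehand.

    ```python
    import numpy as np

    def assign_stage(roi, mean_stage_images):
        """Correlate an aligned ROI with each stage's mean image and return
        the stage label with the highest Pearson correlation (alignment via
        Active Shape Model landmarks is assumed to have happened already)."""
        v = roi.ravel().astype(float)
        scores = {}
        for stage, mean_img in mean_stage_images.items():
            m = mean_img.ravel().astype(float)
            scores[stage] = float(np.corrcoef(v, m)[0, 1])
        best = max(scores, key=scores.get)
        return best, scores

    # Toy example: the ROI is a noisy copy of stage "F"'s mean image
    rng = np.random.default_rng(0)
    means = {s: rng.random((8, 8)) for s in "EFGHI"}
    roi = means["F"] + 0.01 * rng.standard_normal((8, 8))
    stage, scores = assign_stage(roi, means)
    ```

    As the abstract notes, the per-stage correlations can be returned as a feature vector for a downstream classifier instead of taking the argmax directly.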

  10. Rapid acquisition of magnetic resonance imaging of the shoulder using three-dimensional fast spin echo sequence with compressed sensing.

    PubMed

    Lee, Seung Hyun; Lee, Young Han; Song, Ho-Taek; Suh, Jin-Suck

    2017-10-01

    To evaluate the feasibility of 3D fast spin-echo (FSE) imaging with compressed sensing (CS) for the assessment of the shoulder. Twenty-nine patients who underwent shoulder MRI, including axial 3D-FSE image sets without CS and with CS (acceleration factor 1.5), were included. Quantitative assessment was performed by calculating the root mean square error (RMSE) and structural similarity index (SSIM). Two musculoskeletal radiologists compared the image quality of 3D-FSE sequences without CS and with CS, and scored the qualitative agreement between sequences using a five-point scale. Diagnostic agreement for pathologic shoulder lesions between the two sequences was evaluated. The acquisition time of 3D-FSE MRI was reduced using CS (3 min 23 s vs. 2 min 22 s). Quantitative evaluations showed a significant correlation between the two sequences (r=0.872-0.993, p<0.05), and SSIM was in an acceptable range (0.940-0.993; mean±standard deviation, 0.968±0.018). Qualitative image quality showed good to excellent agreement between 3D-FSE images without CS and with CS. Diagnostic agreement for pathologic shoulder lesions between the two sequences was very good (κ=0.915-1). The 3D-FSE sequence with CS is feasible for evaluating the shoulder joint with reduced scan time compared to 3D-FSE without CS. Copyright © 2017 Elsevier Inc. All rights reserved.
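    The quantitative comparison rests on RMSE and SSIM between the CS and non-CS image sets. A minimal sketch of both metrics, using a simplified single-window SSIM rather than the usual sliding Gaussian-window form:

    ```python
    import numpy as np

    def rmse(a, b):
        """Root mean square error between two images."""
        return float(np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2)))

    def ssim_global(a, b, drange=1.0):
        """Single-window SSIM over the whole image. Standard implementations
        average SSIM over a sliding Gaussian window; this global form is a
        simplification adequate for illustration."""
        a, b = a.astype(float), b.astype(float)
        c1, c2 = (0.01 * drange) ** 2, (0.03 * drange) ** 2
        mu_a, mu_b = a.mean(), b.mean()
        va, vb = a.var(), b.var()
        cov = ((a - mu_a) * (b - mu_b)).mean()
        return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2))
                     / ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2)))

    img = np.linspace(0, 1, 64).reshape(8, 8)       # stand-in reference image
    noisy = np.clip(img + 0.05, 0, 1)               # stand-in accelerated image
    ```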

  11. [MRI of focal liver lesions using a 1.5 turbo-spin-echo technique compared with spin-echo technique].

    PubMed

    Steiner, S; Vogl, T J; Fischer, P; Steger, W; Neuhaus, P; Keck, H

    1995-08-01

    The aim of our study was to evaluate a T2-weighted turbo-spin-echo (TSE) sequence in comparison with a T2-weighted spin-echo (SE) sequence for imaging focal liver lesions. We examined 35 patients with suspected focal liver lesions. The standardized imaging protocol included a conventional T2-weighted SE sequence (TR/TE = 2000/90/45, acquisition time 10:20) as well as a T2-weighted TSE sequence (TR/TE = 4700/90, acquisition time 6:33). S/N and C/N ratios were calculated by standard methods as the basis of the quantitative evaluation, and a diagnostic score was implemented to enable qualitative assessment. In 7% of patients (n = 2) the TSE sequence enabled detection of additional liver lesions of less than 1 cm in diameter. In the comparison of anatomical details the TSE sequence was superior, and its S/N and C/N ratios for anatomic and pathologic structures were higher than those of the SE sequence. Our results indicate that the T2-weighted turbo-spin-echo sequence is well suited to imaging focal liver lesions and reduces imaging time.
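    S/N and C/N ratios of the kind compared here are typically computed from ROI statistics. A minimal sketch assuming the common definitions (mean signal over the standard deviation of a background noise region); the study's exact ROI placement is not specified in the abstract:

    ```python
    import numpy as np

    def snr(roi_signal, roi_noise):
        """Signal-to-noise ratio: mean signal intensity divided by the
        standard deviation of a background (noise) region."""
        return float(roi_signal.mean() / roi_noise.std())

    def cnr(roi_a, roi_b, roi_noise):
        """Contrast-to-noise ratio between two tissues, e.g. lesion vs. liver."""
        return float(abs(roi_a.mean() - roi_b.mean()) / roi_noise.std())

    # Toy ROIs: liver parenchyma, a brighter lesion, and background noise
    rng = np.random.default_rng(1)
    liver = 100 + rng.standard_normal(500)
    lesion = 160 + rng.standard_normal(500)
    noise = 5 * rng.standard_normal(500)
    ```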

  12. Continuous aesthetic judgment of image sequences.

    PubMed

    Khaw, Mel W; Freedberg, David

    2018-05-18

    Perceptual judgments are said to be reference-dependent as they change on the basis of recent experiences. Here we quantify sequence effects within two types of aesthetic judgments: (i) individual ratings of single images (during self-paced trials) and (ii) continuous ratings of image sequences. As in the case of known contrast effects, trial-by-trial aesthetic responses are negatively correlated with judgments made toward the preceding image. During continuous judgment, a different type of bias is observed. The onset of change within a sequence introduces a persistent increase in ratings (relative to when the same images are judged in isolation). Furthermore, subjects indicate adjustment patterns and choices that selectively favor sequences that are rich in change. Sequence effects in aesthetic judgments thus differ greatly depending on the continuity and arrangement of presented stimuli. The effects highlighted here are important in understanding sustained aesthetic responses over time, such as those elicited during choreographic and musical arrangements. In contrast, standard measurements of aesthetic responses (over trials) may represent a series of distinct aesthetic experiences (e.g., viewing artworks in a museum). Copyright © 2018 Elsevier B.V. All rights reserved.

  13. High compression image and image sequence coding

    NASA Technical Reports Server (NTRS)

    Kunt, Murat

    1989-01-01

    The digital representation of an image requires a very large number of bits. This number is even larger for an image sequence. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture or image sequence. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a plateau around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1 for images and around 300:1 for image sequences. Recent progress on some of the main avenues of object-based methods is presented. These second generation techniques make use of contour-texture modeling, new results in neurophysiology and psychophysics and scene analysis.

  14. Adjustable shunt valve-induced magnetic resonance imaging artifact: a comparative study.

    PubMed

    Toma, Ahmed K; Tarnaris, Andrew; Grieve, Joan P; Watkins, Laurence D; Kitchen, Neil D

    2010-07-01

    In this paper, the authors' goal was to compare the artifact induced by implanted (in vivo) adjustable shunt valves in spin echo, diffusion weighted (DW), and gradient echo MR imaging pulse sequences. The MR images obtained in 8 patients with proGAV and 6 patients with Strata II adjustable shunt valves were assessed for artifact areas in different planes as well as the total volume for different pulse sequences. Artifacts induced by the Strata II valve were significantly larger than those induced by the proGAV valve in the spin echo MR imaging pulse sequence (29,761 vs 2450 mm³ on T2-weighted fast spin echo, p = 0.003) and on DW images (100,138 vs 38,955 mm³, p = 0.025). Artifacts were more marked on DW MR images than on the spin echo pulse sequence for both valve types. Adjustable valve-induced artifacts can conceal brain pathology on MR images. This should influence the choice of valve implantation site and the type of valve used. The effect of artifacts on DW images should be highlighted pending the development of adjustable shunt valves that induce fewer MR imaging artifacts.

  15. Arterial input function derived from pairwise correlations between PET-image voxels.

    PubMed

    Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea

    2013-07-01

    A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A noninvasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to, or even better than, the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
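    The pairwise-correlation idea — find voxels whose time-activity curves correlate strongly with blood-like behavior — can be sketched as follows. This is a simplified single-seed version; the published method's voxel-pair search and metabolite correction steps are not reproduced.

    ```python
    import numpy as np

    def blood_like_voxels(tacs, seed_idx, threshold=0.95):
        """Return indices of voxels whose time-activity curves (TACs) have
        Pearson r >= threshold with a seed voxel's curve; the mean of those
        curves is a candidate image-derived input function (IDIF)."""
        r = np.corrcoef(tacs)               # (n_voxels, n_voxels) correlations
        idx = np.where(r[seed_idx] >= threshold)[0]
        idif = tacs[idx].mean(axis=0)
        return idx, idif

    # Toy TACs: 5 blood-like voxels (sharp early peak), 5 tissue-like voxels
    t = np.linspace(0, 60, 30)
    blood = t * np.exp(-t / 10)             # early peak, then washout
    tissue = 1 - np.exp(-t / 20)            # slow monotonic uptake
    rng = np.random.default_rng(2)
    tacs = np.vstack([blood + 0.01 * rng.standard_normal(30) for _ in range(5)]
                     + [tissue + 0.01 * rng.standard_normal(30) for _ in range(5)])
    idx, idif = blood_like_voxels(tacs, seed_idx=0)
    ```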

  16. Image processing and recognition for biological images

    PubMed Central

    Uchida, Seiichi

    2013-01-01

    This paper reviews image processing and pattern recognition techniques, which will be useful to analyze bioimages. Although this paper does not provide their technical details, it makes it possible to grasp their main tasks and the typical tools used to handle them. Image processing is a large research area aimed at improving the visibility of an input image and acquiring valuable information from it. As the main tasks of image processing, this paper introduces gray-level transformation, binarization, image filtering, image segmentation, visual object tracking, optical flow and image registration. Image pattern recognition is the technique of classifying an input image into one of the predefined classes and is also a large research area. This paper overviews its two main modules, that is, the feature extraction module and the classification module. Throughout the paper, it is emphasized that the bioimage is a very difficult target for even state-of-the-art image processing and pattern recognition techniques due to noise, deformation, etc. This paper is expected to serve as a tutorial guide to bridge biology and image processing researchers for their further collaboration to tackle such a difficult target. PMID:23560739
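    As a concrete instance of one task listed above (binarization), Otsu's classic thresholding method — picking the gray level that maximizes between-class variance — can be written in a few lines of NumPy:

    ```python
    import numpy as np

    def otsu_threshold(img, nbins=256):
        """Otsu's method: choose the threshold that maximizes between-class
        variance of the gray-level histogram, a standard binarization step."""
        hist, edges = np.histogram(img.ravel(), bins=nbins)
        p = hist.astype(float) / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2
        w0 = np.cumsum(p)                 # weight of class 0 per threshold
        mu = np.cumsum(p * centers)       # cumulative mean
        mu_t = mu[-1]
        w1 = 1 - w0
        valid = (w0 > 0) & (w1 > 0)
        sigma_b = np.zeros(nbins)
        sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
        return float(centers[int(np.argmax(sigma_b))])

    # Toy bimodal 'image': dark background pixels and bright foreground pixels
    rng = np.random.default_rng(3)
    img = np.concatenate([rng.normal(0.2, 0.03, 500), rng.normal(0.8, 0.03, 500)])
    th = otsu_threshold(img)
    binary = img > th
    ```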

  17. Auto and hetero-associative memory using a 2-D optical logic gate

    NASA Technical Reports Server (NTRS)

    Chao, Tien-Hsin (Inventor)

    1992-01-01

    An optical system for auto-associative and hetero-associative recall utilizing Hamming distance as the similarity measure between a binary input image vector V^k and a binary image vector V^m in a first memory array, using an optical Exclusive-OR gate for multiplication of each of a plurality of different binary image vectors in memory by the input image vector. After integrating the light of each product V^k × V^m, a shortest-Hamming-distance detection electronics module determines which product has the lowest light intensity and emits a signal that activates a light emitting diode to illuminate the corresponding image vector in a second memory array for display. That corresponding image vector is identical to the memory image vector V^m in the first memory array for auto-associative recall, or related to it, such as by name, for hetero-associative recall.
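    The recall logic — XOR the input vector against each stored vector, integrate the resulting light, and pick the minimum — maps directly onto a few lines of code. A digital sketch of the optical system's behavior:

    ```python
    import numpy as np

    def hamming_recall(v_in, memory):
        """XOR each stored binary image vector with the input (the optical
        Exclusive-OR stage), sum the ones (integrated light intensity equals
        the Hamming distance), and return the index of the closest memory."""
        distances = [int(np.bitwise_xor(v_in, m).sum()) for m in memory]
        return int(np.argmin(distances)), distances

    memory = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
                       [0, 1, 1, 0, 1, 0, 0, 1],
                       [1, 1, 0, 0, 1, 1, 0, 0]], dtype=np.uint8)
    # Probe is memory[0] with a single bit flipped
    probe = np.array([1, 0, 1, 1, 0, 1, 1, 0], dtype=np.uint8)
    best, d = hamming_recall(probe, memory)
    ```

    For hetero-associative recall, `best` would index a second array holding the related (rather than identical) image vectors.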

  18. Development of a novel 2D color map for interactive segmentation of histological images.

    PubMed

    Chaudry, Qaiser; Sharma, Yachna; Raza, Syed H; Wang, May D

    2012-05-01

    We present a color segmentation approach based on a two-dimensional color map derived from the input image. Pathologists stain tissue biopsies with various colored dyes to see the expression of biomarkers. In these images, because of color variation due to inconsistencies in experimental procedures and lighting conditions, the segmentation used to analyze biological features is usually ad hoc. Many algorithms, like K-means, use a single metric to segment the image into different color classes and rarely provide users with powerful color control. Our interactive 2D color map segmentation technique, based on human color perception and the color distribution of the input image, enables user control without noticeable delay. Our methodology works for different staining types and different types of cancer tissue images. Our proposed method's results show good accuracy with low response and computational time, making it a feasible method for user-interactive applications involving segmentation of histological images.
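    The abstract does not specify how the 2D color map is derived, so the sketch below substitutes a generic normalized-chromaticity projection with nearest-seed labeling, just to illustrate the idea of interactive segmentation in a 2D color space:

    ```python
    import numpy as np

    def chromaticity_map(rgb):
        """Project RGB pixels onto a 2D chromaticity plane (r, g) =
        (R, G) / (R + G + B). A generic stand-in for the paper's perceptual
        2D color map, whose construction is not detailed in the abstract."""
        s = rgb.sum(axis=-1, keepdims=True).clip(min=1e-9)
        return (rgb[..., :2] / s).reshape(-1, 2)

    def assign_to_seeds(points, seeds):
        """Label each 2D chromaticity point with its nearest user-chosen seed,
        mimicking interactive color-class selection."""
        d = np.linalg.norm(points[:, None, :] - seeds[None, :, :], axis=-1)
        return np.argmin(d, axis=1)

    # Toy image: left half brownish (stain A), right half bluish (stain B)
    img = np.zeros((4, 8, 3))
    img[:, :4] = [0.6, 0.3, 0.1]
    img[:, 4:] = [0.1, 0.2, 0.7]
    pts = chromaticity_map(img)
    labels = assign_to_seeds(pts, seeds=np.array([[0.6, 0.3], [0.1, 0.2]]))
    ```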

  19. [The dilemma of data flood - reducing costs and increasing quality control].

    PubMed

    Gassmann, B

    2012-09-05

    Digitization is found everywhere in sonography. Printing ultrasound images on special paper with a video printer is now done only in isolated cases. Sonographic procedures are increasingly documented by saving image sequences instead of still frames; echocardiography, for instance, is now routinely recorded as so-called R-R loops. In contrast-enhanced ultrasound, recording sequences is necessary to obtain a full impression of the vascular structure of interest. Handling this data flood in daily practice requires specialized software. Follow-up comparison of stored and recent images/sequences is very helpful. Nevertheless, quality control of the ultrasound system and the transducers remains simple and safe: using a phantom for detail resolution and general image quality, the stored images/sequences are comparable over the life cycle of the system, and follow-up comparison immediately reveals decreased image quality and transducer defects.

  20. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    PubMed Central

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise degrades image quality, so these images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA estimates the vectors more efficiently, with almost equal estimation quality, compared to the traditional IFSA method. PMID:25873987

  1. Motion estimation using the firefly algorithm in ultrasonic image sequence of soft tissue.

    PubMed

    Chao, Chih-Feng; Horng, Ming-Huwi; Chen, Yu-Chan

    2015-01-01

    Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise degrades image quality, so these images usually have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches only a few candidate points to obtain the optimal motion vector, and is compared to the traditional iterative full search algorithm (IFSA) in a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA estimates the vectors more efficiently, with almost equal estimation quality, compared to the traditional IFSA method.
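    For reference, the exhaustive IFSA-style baseline that the firefly algorithm is compared against can be sketched as plain block matching; the firefly search itself (attractiveness-guided sampling of a few candidate displacements) is omitted here.

    ```python
    import numpy as np

    def full_search_motion(ref, cur, block=4, radius=3):
        """Exhaustive block matching (an IFSA-style baseline): for each block
        of the current frame, find the displacement within +/-radius that
        minimizes the sum of absolute differences (SAD) against the reference
        frame. A firefly search would sample only a few of these candidates."""
        h, w = cur.shape
        vectors = {}
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                tgt = cur[by:by + block, bx:bx + block]
                best, best_sad = (0, 0), np.inf
                for dy in range(-radius, radius + 1):
                    for dx in range(-radius, radius + 1):
                        y, x = by + dy, bx + dx
                        if 0 <= y <= h - block and 0 <= x <= w - block:
                            sad = np.abs(ref[y:y + block, x:x + block] - tgt).sum()
                            if sad < best_sad:
                                best_sad, best = sad, (dy, dx)
                vectors[(by, bx)] = best
        return vectors

    rng = np.random.default_rng(4)
    ref = rng.random((8, 8))
    cur = np.roll(ref, shift=(1, 2), axis=(0, 1))   # frame shifted down 1, right 2
    mv = full_search_motion(ref, cur)
    ```

    The interior block recovers the true displacement; border blocks are unreliable here only because `np.roll` wraps the toy frame.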

  2. A strongly goal-directed close-range vision system for spacecraft docking

    NASA Technical Reports Server (NTRS)

    Boyer, Kim L.; Goddard, Ralph E.

    1991-01-01

    In this presentation, we propose a strongly goal-oriented stereo vision system to establish proper docking approach motions for automated rendezvous and capture (AR&C). From an input sequence of stereo video image pairs, the system produces current best estimates of contact position, contact vector, contact velocity, and contact orientation. The processing demands imposed by this particular problem and its environment dictate a special-case solution; such a system should necessarily be, in some sense, minimalist. By this we mean the system should construct a scene description just sufficiently rich to solve the problem at hand and should do no more processing than is absolutely necessary. In addition, the imaging resolution should be just sufficient. Extracting additional information and constructing higher-level scene representations wastes energy and computational resources and injects an unnecessary degree of complexity, increasing the likelihood of malfunction. We therefore take a departure from most prior stereopsis work, including our own, and propose a system based on associative memory. The purpose of the memory is to immediately associate a set of motor commands with a set of input visual patterns in the two cameras. That is, rather than explicitly computing point correspondences and object positions in world coordinates and trying to reason forward from this information to a plan of action, we are trying to capture the essence of reflex behavior through the action of associative memory. The explicit construction of point correspondences and 3D scene descriptions, followed by online velocity and point-of-impact calculations, is prohibitively expensive from a computational point of view for the problem at hand. Learned patterns on the four image planes, left and right at two discrete but closely spaced instants in time, will be used directly to infer the spacecraft reaction. This will be a continuing online process as the docking collar approaches.

  3. Grid Computing Application for Brain Magnetic Resonance Image Processing

    NASA Astrophysics Data System (ADS)

    Valdivia, F.; Crépeault, B.; Duchesne, S.

    2012-02-01

    This work emphasizes the use of grid computing and web technology for automatic post-processing of brain magnetic resonance images (MRI) in the context of neuropsychiatric (Alzheimer's disease) research. Post-acquisition image processing is achieved through the interconnection of several individual processes into pipelines. Each process has input and output data ports, options and execution parameters, and performs single tasks such as: a) extracting individual image attributes (e.g. dimensions, orientation, center of mass), b) performing image transformations (e.g. scaling, rotation, skewing, intensity standardization, linear and non-linear registration), c) performing image statistical analyses, and d) producing the necessary quality control images and/or files for user review. The pipelines are built to perform specific sequences of tasks on the alphanumeric data and MRIs contained in our database. The web application is coded in PHP and allows the creation of scripts to create, store and execute pipelines and their instances either on our local cluster or on high-performance computing platforms. To run an instance on an external cluster, the web application opens a communication tunnel through which it copies the necessary files, submits the execution commands and collects the results. We present results of system tests for the processing of a set of 821 brain MRIs from the Alzheimer's Disease Neuroimaging Initiative study via a nonlinear registration pipeline composed of 10 processes. Our results show successful execution on both local and external clusters, and a 4-fold increase in performance when using the external cluster. However, the latter's performance does not scale linearly, as queue waiting times and execution overhead increase with the number of tasks to be executed.
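    The pipeline idea — single-task processes chained through input/output ports — can be sketched generically. The stage names below are illustrative stand-ins, not the system's actual modules:

    ```python
    from functools import reduce

    def make_pipeline(*processes):
        """Chain single-task processes into a pipeline: each process takes the
        previous one's output as its input, mirroring the input/output data
        ports described above."""
        def run(data):
            return reduce(lambda d, p: p(d), processes, data)
        return run

    # Toy stages operating on a dict that stands in for an MRI volume
    def extract_attributes(vol):
        return {**vol, "dims": len(vol["data"])}

    def scale(vol):
        return {**vol, "data": [v * 2 for v in vol["data"]]}

    def stats(vol):
        return {**vol, "mean": sum(vol["data"]) / len(vol["data"])}

    pipeline = make_pipeline(extract_attributes, scale, stats)
    out = pipeline({"data": [1.0, 2.0, 3.0]})
    ```

    A real deployment would wrap each stage in a scheduler job and pass files rather than in-memory dicts, but the composition pattern is the same.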

  4. Composeable Chat over Low-Bandwidth Intermittent Communication Links

    DTIC Science & Technology

    2007-04-01

    Small Text Compression (STC), introduced in this report, is a data compression algorithm intended to compress alphanumeric... Ziv-Lempel coding, the grandfather of most modern general-purpose file compression programs, watches for input symbol sequences that have previously... data. This section applies these techniques to create a new compression algorithm called Small Text Compression. Various sequence compression
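    The Ziv-Lempel idea the snippet alludes to — watching for input symbol sequences that have been seen before — is illustrated by a minimal LZW coder. This is generic background, not STC itself, which is specialized for short alphanumeric messages and whose details are not reproduced here.

    ```python
    def lzw_compress(text):
        """Minimal LZW: grow a dictionary of previously seen symbol sequences
        and emit dictionary indices in place of the sequences."""
        table = {chr(i): i for i in range(256)}
        w, out = "", []
        for ch in text:
            if w + ch in table:
                w += ch
            else:
                out.append(table[w])
                table[w + ch] = len(table)
                w = ch
        if w:
            out.append(table[w])
        return out

    def lzw_decompress(codes):
        """Inverse of lzw_compress, rebuilding the dictionary on the fly."""
        table = {i: chr(i) for i in range(256)}
        w = table[codes[0]]
        out = [w]
        for c in codes[1:]:
            entry = table[c] if c in table else w + w[0]
            out.append(entry)
            table[len(table)] = w + entry[0]
            w = entry
        return "".join(out)

    msg = "TOBEORNOTTOBEORTOBEORNOT"
    codes = lzw_compress(msg)
    ```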

  5. NEBNext Direct: A Novel, Rapid, Hybridization-Based Approach for the Capture and Library Conversion of Genomic Regions of Interest.

    PubMed

    Emerman, Amy B; Bowman, Sarah K; Barry, Andrew; Henig, Noa; Patel, Kruti M; Gardner, Andrew F; Hendrickson, Cynthia L

    2017-07-05

    Next-generation sequencing (NGS) is a powerful tool for genomic studies, translational research, and clinical diagnostics that enables the detection of single nucleotide polymorphisms, insertions and deletions, copy number variations, and other genetic variations. Target enrichment technologies improve the efficiency of NGS by only sequencing regions of interest, which reduces sequencing costs while increasing coverage of the selected targets. Here we present NEBNext Direct®, a hybridization-based, target-enrichment approach that addresses many of the shortcomings of traditional target-enrichment methods. This approach features a simple, 7-hr workflow that uses enzymatic removal of off-target sequences to achieve a high specificity for regions of interest. Additionally, unique molecular identifiers are incorporated for the identification and filtering of PCR duplicates. The same protocol can be used across a wide range of input amounts, input types, and panel sizes, enabling NEBNext Direct to be broadly applicable across a wide variety of research and diagnostic needs. © 2017 by John Wiley & Sons, Inc.

  6. DNA methylation assessment from human slow- and fast-twitch skeletal muscle fibers

    PubMed Central

    Begue, Gwénaëlle; Raue, Ulrika; Jemiolo, Bozena

    2017-01-01

    A new application of the reduced representation bisulfite sequencing method was developed using low-DNA input to investigate the epigenetic profile of human slow- and fast-twitch skeletal muscle fibers. Successful library construction was completed with as little as 15 ng of DNA, and high-quality sequencing data were obtained with 32 ng of DNA. Analysis identified 143,160 differentially methylated CpG sites across 14,046 genes. In both fiber types, selected genes predominantly expressed in slow or fast fibers were hypomethylated, which was supported by the RNA-sequencing analysis. These are the first fiber type-specific methylation data from human skeletal muscle and provide a unique platform for future research. NEW & NOTEWORTHY This study validates a low-DNA input reduced representation bisulfite sequencing method for human muscle biopsy samples to investigate the methylation patterns at a fiber type-specific level. These are the first fiber type-specific methylation data reported from human skeletal muscle and thus provide initial insight into basal state differences in myosin heavy chain I and IIa muscle fibers among young, healthy men. PMID:28057818

  7. (abstract) Synthesis of Speaker Facial Movements to Match Selected Speech Sequences

    NASA Technical Reports Server (NTRS)

    Scott, Kenneth C.

    1994-01-01

    We are developing a system for synthesizing image sequences that simulate the facial motion of a speaker. To perform this synthesis, we are pursuing two major areas of effort: developing the computer graphics technology necessary to synthesize a realistic image sequence of a person speaking selected speech sequences, and developing a model that expresses the relation between spoken phonemes and face/mouth shape. A subject is videotaped speaking an arbitrary text that contains expressions of the full list of desired database phonemes. The subject is videotaped from the front speaking normally, recording both audio and video detail simultaneously. Using the audio track, we identify the specific video frames on the tape relating to each spoken phoneme, and from this range we digitize the video frame that represents the extreme of mouth motion/shape. Thus, we construct a database of images of face/mouth shape related to spoken phonemes. A selected audio speech sequence is then recorded as the basis for synthesizing a matching video sequence; the speaker need not be the same as the one used for constructing the database. The audio sequence is analyzed to determine the spoken phoneme sequence and the relative timing of the enunciation of those phonemes. Synthesizing an image sequence corresponding to the spoken phoneme sequence is accomplished using a graphics technique known as morphing, with the image-sequence keyframes necessary for this processing based on the spoken phoneme sequence and timing. We have been successful in synthesizing the facial motion of a native English speaker for a small set of arbitrary speech segments. Our future work will focus on advancement of the face shape/phoneme model and independent control of facial features.
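    The in-between-frame generation step can be hinted at with a plain cross-dissolve between two mouth-shape keyframes. True morphing additionally warps geometry between corresponding landmarks before blending; only the blending schedule is shown in this sketch.

    ```python
    import numpy as np

    def cross_dissolve(key_a, key_b, n_frames):
        """Generate intermediate frames between two keyframe images by linear
        blending: frame k mixes the keyframes with weight t = k/(n_frames-1).
        Morphing proper would also warp pixel positions between landmarks."""
        frames = []
        for k in range(n_frames):
            t = k / (n_frames - 1)
            frames.append((1 - t) * key_a + t * key_b)
        return frames

    # Toy keyframes standing in for two digitized mouth-shape images
    a = np.zeros((4, 4))
    b = np.ones((4, 4))
    seq = cross_dissolve(a, b, n_frames=5)
    ```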

  8. WE-FG-206-06: Dual-Input Tracer Kinetic Modeling and Its Analog Implementation for Dynamic Contrast-Enhanced (DCE-) MRI of Malignant Mesothelioma (MPM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Rimner, A; Hayes, S

    Purpose: To use dual-input tracer kinetic modeling of the lung for mapping spatial heterogeneity of various kinetic parameters in malignant MPM. Methods: Six MPM patients received DCE-MRI as part of their radiation therapy simulation scan. Five patients had the epithelioid subtype of MPM, while one was biphasic. A 3D fast-field echo sequence with TR/TE/flip angle of 3.62 ms/1.69 ms/15° was used for DCE-MRI acquisition. The scan was collected for 5 minutes with a temporal resolution of 5-9 seconds depending on the spatial extent of the tumor. A principal component analysis-based groupwise deformable registration was used to co-register all the DCE-MRI series for motion compensation. All the images were analyzed using five different dual-input tracer kinetic models implemented in analog continuous-time formalism: the Tofts-Kety (TK), extended TK (ETK), two-compartment exchange (2CX), adiabatic approximation to the tissue homogeneity (AATH), and distributed parameter (DP) models. The following parameters were computed for each model: total blood flow (BF), pulmonary flow fraction (γ), pulmonary blood flow (BF-pa), systemic blood flow (BF-a), blood volume (BV), mean transit time (MTT), permeability-surface area product (PS), fractional interstitial volume (vi), extraction fraction (E), volume transfer constant (Ktrans) and efflux rate constant (kep). Results: Although the majority of patients had epithelioid histologies, kinetic parameter values varied across the different models. One patient showed a higher total BF value in all models among the epithelioid histologies, although the γ value varied among the models. In one tumor with a large area of necrosis, the TK and ETK models showed higher E, Ktrans, and kep values and lower interstitial volume compared with the AATH, DP and 2CX models. Kinetic parameters such as BF-pa, BF-a, PS and Ktrans were higher in the surviving group than in the non-surviving group across most models. Conclusion: Dual-input tracer kinetic modeling is feasible for determining micro-vascular characteristics of MPM. This project was supported by Cycle for Survival and MSK Imaging and Radiation Science (IMRAS) grants.
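    As background for the kinetic models named above, the standard single-input Tofts model can be sketched as a discrete convolution of the plasma input with an exponential residue function. The dual-input variants additionally split the input into pulmonary and systemic components weighted by the flow fraction γ; the convolution machinery is the same. Symbols follow the abstract; the implementation and the toy bolus are illustrative.

    ```python
    import numpy as np

    def tofts_tissue_curve(cp, t, ktrans, kep):
        """Standard (single-input) Tofts model: tissue concentration is the
        plasma input cp convolved with the residue kernel ktrans*exp(-kep*t),
        discretized on a uniform time grid with spacing dt."""
        dt = t[1] - t[0]
        kernel = ktrans * np.exp(-kep * t)
        return np.convolve(cp, kernel)[: len(t)] * dt

    t = np.linspace(0, 5, 200)                   # minutes
    cp = np.exp(-((t - 0.5) ** 2) / 0.02)        # toy bolus-shaped plasma input
    ct = tofts_tissue_curve(cp, t, ktrans=0.2, kep=0.5)
    ```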

  9. Multiuser Collaboration with Networked Mobile Devices

    NASA Technical Reports Server (NTRS)

    Tso, Kam S.; Tai, Ann T.; Deng, Yong M.; Becks, Paul G.

    2006-01-01

    In this paper we describe a multiuser collaboration infrastructure that enables multiple mission scientists to remotely and collaboratively interact with visualization and planning software, using wireless networked personal digital assistants (PDAs) and other mobile devices. During ground operations of planetary rover and lander missions, scientists need to meet daily to review downlinked data and plan science activities. For example, scientists use the Science Activity Planner (SAP) in the Mars Exploration Rover (MER) mission to visualize downlinked data and plan rover activities during the science meetings [1]. Computer displays are projected onto large screens in the meeting room to enable the scientists to view and discuss downlinked images and data displayed by SAP and other software applications. However, only one person can interact with the software applications because input to the computer is limited to a single mouse and keyboard. As a result, the scientists have to verbally express their intentions, such as selecting a target at a particular location on the Mars terrain image, to that person in order to interact with the applications. This constrains communication and limits the returns of science planning. Furthermore, ground operations for Mars missions are fundamentally constrained by the short turnaround time for science and engineering teams to process and analyze data, plan the next uplink, generate command sequences, and transmit the uplink to the vehicle [2]. Therefore, improving ground operations is crucial to the success of Mars missions. The multiuser collaboration infrastructure enables users to control software applications remotely and collaboratively using mobile devices. 
The infrastructure includes (1) human-computer interaction techniques to provide natural, fast, and accurate inputs, (2) a communications protocol to ensure reliable and efficient coordination of the input devices and host computers, (3) an application-independent middleware that maintains the states, sessions, and interactions of individual users of the software applications, and (4) an application programming interface to enable tight integration of applications and the middleware. The infrastructure is able to support any software applications running under the Windows or Unix platforms. The resulting technologies are not only applicable to NASA mission operations but also useful in other situations such as design reviews, brainstorming sessions, and business meetings, as these can benefit from having the participants concurrently interact with the software applications (e.g., presentation applications and CAD design tools) to illustrate their ideas and provide inputs.

  10. Integrated editing system for Japanese text and image information "Linernote"

    NASA Astrophysics Data System (ADS)

    Tanaka, Kazuto

    The integrated Japanese text editing system "Linernote", developed by Toyo Industries Co., is explained. The system has been developed on the concept of electronic publishing. It is composed of the personal computer NEC PC-9801 VX and other peripherals. Sentence, drawing, and image data are input and edited under the system's integrated operating environment, and the final text is printed out by a laser printer. The handling efficiency of time-consuming work such as pattern input or page make-up has been improved by a draft-image indication method on the CRT. It is the latest DTP system equipped with three major functions, namely, typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.

  11. Evaluating a robust contour tracker on echocardiographic sequences.

    PubMed

    Jacob, G; Noble, J A; Mulet-Parada, M; Blake, A

    1999-03-01

    In this paper we present an evaluation of a robust visual image tracker on echocardiographic image sequences. We show how the tracking framework can be customized to define an appropriate shape space that describes heart shape deformations that can be learnt from a training data set. We also investigate energy-based temporal boundary enhancement methods to improve image feature measurement. Results are presented demonstrating real-time tracking on real normal heart motion data sequences and abnormal synthesized and real heart motion data sequences. We conclude by discussing some of our current research efforts.

  12. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low-cost, readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficiently good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
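
    The core demultiplexing step can be sketched as follows (an illustrative reconstruction of the spectral-shuttering idea, not the authors' code; the per-channel gains are hypothetical correction factors):

```python
import numpy as np

def demultiplex_rgb(frame_rgb, gains=(1.0, 1.0, 1.0)):
    """Split one H x W x 3 colour exposure into three time-ordered frames.

    Each colour channel was exposed by a different LED pulse fired at a
    different instant, so the channels form a short time sequence.
    `gains` are hypothetical per-channel factors to balance LED brightness
    and sensor response.
    """
    return [frame_rgb[:, :, c].astype(np.float64) * g
            for c, g in enumerate(gains)]

exposure = np.random.rand(8, 8, 3)   # one colour frame from the 3CCD camera
frames = demultiplex_rgb(exposure)   # three-frame high-speed sub-sequence
```

A six-frame sequence would follow by applying the same split to two consecutive colour exposures.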

  13. Poster - Thur Eve - 05: Safety systems and failure modes and effects analysis for a magnetic resonance image guided radiation therapy system.

    PubMed

    Lamey, M; Carlone, M; Alasti, H; Bissonnette, J P; Borg, J; Breen, S; Coolens, C; Heaton, R; Islam, M; van Proojen, M; Sharpe, M; Stanescu, T; Jaffray, D

    2012-07-01

    An online Magnetic Resonance guided Radiation Therapy (MRgRT) system is under development. The system is comprised of an MRI scanner with the capability of traveling between and into HDR brachytherapy and external beam radiation therapy vaults. The system will provide on-line MR images immediately prior to radiation therapy. The MR images will be registered to a planning image and used for image guidance. To help ensure system safety, we performed a failure modes and effects analysis. A process tree of the facility function was developed. Using the process tree, as well as an initial design of the facility, as guidelines, possible failure modes were identified; for each of these failure modes, root causes were identified. For each possible failure, severity, detectability, and occurrence scores were assigned. Finally, suggestions were developed to reduce the possibility of an event. The process tree consists of nine main inputs, each of which consists of 5-10 sub-inputs; tertiary inputs were also defined. The process tree ensures that the overall safety of the system has been considered. Several possible failure modes were identified, relevant to the design, construction, commissioning, and operating phases of the facility. The utility of the analysis can be seen in that it has spawned projects prior to installation and has led to suggestions in the design of the facility. © 2012 American Association of Physicists in Medicine.

  14. Small-target leak detection for a closed vessel via infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Yang, Hongjiu

    2017-03-01

    This paper focuses on a leak diagnosis and localization method based on infrared image sequences. Problems of a high probability of false warnings and the negative effect of marginal information are addressed in leak detection. An experimental model is established for leak diagnosis and localization on infrared image sequences. A differential background prediction based on a kernel regression method is presented to eliminate the negative effect of marginal information on the test vessel. A pipeline filter based on layered voting is designed to reduce the probability of false warnings at the leak point. A combined leak diagnosis and localization algorithm is proposed based on infrared image sequences. The effectiveness and potential of the developed techniques are shown through experimental results.

  15. Ω-Net (Omega-Net): Fully automatic, multi-view cardiac MR detection, orientation, and segmentation with deep neural networks.

    PubMed

    Vigneault, Davis M; Xie, Weidi; Ho, Carolyn Y; Bluemke, David A; Noble, J Alison

    2018-05-22

    Pixelwise segmentation of the left ventricular (LV) myocardium and the four cardiac chambers in 2-D steady state free precession (SSFP) cine sequences is an essential preprocessing step for a wide range of analyses. Variability in contrast, appearance, orientation, and placement of the heart between patients, clinical views, scanners, and protocols makes fully automatic semantic segmentation a notoriously difficult problem. Here, we present Ω-Net (Omega-Net): A novel convolutional neural network (CNN) architecture for simultaneous localization, transformation into a canonical orientation, and semantic segmentation. First, an initial segmentation is performed on the input image; second, the features learned during this initial segmentation are used to predict the parameters needed to transform the input image into a canonical orientation; and third, a final segmentation is performed on the transformed image. In this work, Ω-Nets of varying depths were trained to detect five foreground classes in any of three clinical views (short axis, SA; four-chamber, 4C; two-chamber, 2C), without prior knowledge of the view being segmented. This constitutes a substantially more challenging problem compared with prior work. The architecture was trained using three-fold cross-validation on a cohort of patients with hypertrophic cardiomyopathy (HCM, N=42) and healthy control subjects (N=21). Network performance, as measured by weighted foreground intersection-over-union (IoU), was substantially improved for the best-performing Ω-Net compared with U-Net segmentation without localization or orientation (0.858 vs 0.834). In addition, to be comparable with other works, Ω-Net was retrained from scratch using five-fold cross-validation on the publicly available 2017 MICCAI Automated Cardiac Diagnosis Challenge (ACDC) dataset. 
The Ω-Net outperformed the state-of-the-art method in segmentation of the LV and RV bloodpools, and performed slightly worse in segmentation of the LV myocardium. We conclude that this architecture represents a substantive advancement over prior approaches, with implications for biomedical image segmentation more generally. Published by Elsevier B.V.

  16. High-order motor cortex in rats receives somatosensory inputs from the primary motor cortex via cortico-cortical pathways.

    PubMed

    Kunori, Nobuo; Takashima, Ichiro

    2016-12-01

    The motor cortex of rats contains two forelimb motor areas; the caudal forelimb area (CFA) and the rostral forelimb area (RFA). Although the RFA is thought to correspond to the premotor and/or supplementary motor cortices of primates, which are higher-order motor areas that receive somatosensory inputs, it is unknown whether the RFA of rats receives somatosensory inputs in the same manner. To investigate this issue, voltage-sensitive dye (VSD) imaging was used to assess the motor cortex in rats following a brief electrical stimulation of the forelimb. This procedure was followed by intracortical microstimulation (ICMS) mapping to identify the motor representations in the imaged cortex. The combined use of VSD imaging and ICMS revealed that both the CFA and RFA received excitatory synaptic inputs after forelimb stimulation. Further evaluation of the sensory input pathway to the RFA revealed that the forelimb-evoked RFA response was abolished either by the pharmacological inactivation of the CFA or a cortical transection between the CFA and RFA. These results suggest that forelimb-related sensory inputs would be transmitted to the RFA from the CFA via the cortico-cortical pathway. Thus, the present findings imply that sensory information processed in the RFA may be used for the generation of coordinated forelimb movements, which would be similar to the function of the higher-order motor cortex in primates. © 2016 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  17. Image segmentation algorithm based on improved PCNN

    NASA Astrophysics Data System (ADS)

    Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui

    2017-11-01

    A modified simplified Pulse Coupled Neural Network (PCNN) model is proposed in this article, based on the simplified PCNN. Some work has been done to enrich this model, such as imposing restriction terms on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive setting method for the linking coefficient and the threshold decay time constant is also proposed. Finally, we implemented the image segmentation algorithm on five pictures based on the proposed simplified PCNN model and PSO. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
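
    As an illustration of the kind of model being modified, a minimal simplified-PCNN iteration can be sketched as follows (assumed standard SPCNN update equations with illustrative parameter values; this is not the authors' implementation and omits their adaptive parameter setting and PSO step):

```python
import numpy as np

def spcnn_segment(img, beta=0.5, v_e=10.0, alpha_e=0.7, iters=8):
    """Minimal simplified PCNN: a neuron pulses when its internal activity
    exceeds a decaying dynamic threshold; returns the accumulated pulse map."""
    s = (img - img.min()) / (np.ptp(img) + 1e-12)   # normalised stimulus
    y = np.zeros_like(s)        # pulse output
    e = np.ones_like(s)         # dynamic threshold
    fired = np.zeros_like(s)
    for _ in range(iters):
        # 8-neighbour linking input from last iteration's pulses
        # (toroidal neighbourhood via np.roll, for brevity)
        link = sum(np.roll(np.roll(y, dx, 0), dy, 1)
                   for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                   if (dx, dy) != (0, 0))
        u = s * (1.0 + beta * link)          # internal activity
        y = (u > e).astype(float)            # pulse if activity beats threshold
        e = np.exp(-alpha_e) * e + v_e * y   # decay threshold, reset on firing
        fired = np.maximum(fired, y)
    return fired

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0            # bright square on a dark background
fired = spcnn_segment(img)     # the bright region pulses, the background does not
```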

  18. Multiclassifier fusion in human brain MR segmentation: modelling convergence.

    PubMed

    Heckemann, Rolf A; Hajnal, Joseph V; Aljabar, Paul; Rueckert, Daniel; Hammers, Alexander

    2006-01-01

    Segmentations of MR images of the human brain can be generated by propagating an existing atlas label volume to the target image. By fusing multiple propagated label volumes, the segmentation can be improved. We developed a model that predicts the improvement of labelling accuracy and precision based on the number of segmentations used as input. Using a cross-validation study on brain image data as well as numerical simulations, we verified the model. Fit parameters of this model are potential indicators of the quality of a given label propagation method or the consistency of the input segmentations used.
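
    The fusion step described above is commonly realised as a per-voxel majority vote over the propagated label volumes; a minimal sketch (a generic illustration of label fusion, not the authors' specific implementation):

```python
import numpy as np

def fuse_labels(label_volumes):
    """Per-voxel majority vote over several propagated label volumes."""
    stacked = np.stack(label_volumes)            # (n_atlases, ...) int labels
    n_labels = int(stacked.max()) + 1
    # count votes for each label at every voxel, then pick the winner
    counts = np.stack([(stacked == k).sum(axis=0) for k in range(n_labels)])
    return counts.argmax(axis=0)

# three toy 2 x 2 "label volumes" from different propagated atlases
a = np.array([[0, 1], [1, 1]])
b = np.array([[0, 1], [0, 1]])
c = np.array([[0, 0], [1, 1]])
fused = fuse_labels([a, b, c])   # → [[0, 1], [1, 1]]
```

Adding more input segmentations sharpens the vote, which is the accuracy-vs-count behaviour the authors' model predicts.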

  19. The Interaction between Semantic Representation and Episodic Memory.

    PubMed

    Fang, Jing; Rüther, Naima; Bellebaum, Christian; Wiskott, Laurenz; Cheng, Sen

    2018-02-01

    The experimental evidence on the interrelation between episodic memory and semantic memory is inconclusive. Are they independent systems, different aspects of a single system, or separate but strongly interacting systems? Here, we propose a computational role for the interaction between the semantic and episodic systems that might help resolve this debate. We hypothesize that episodic memories are represented as sequences of activation patterns. These patterns are the output of a semantic representational network that compresses the high-dimensional sensory input. We show quantitatively that the accuracy of episodic memory crucially depends on the quality of the semantic representation. We compare two types of semantic representations: appropriate representations, which means that the representation is used to store input sequences that are of the same type as those that it was trained on, and inappropriate representations, which means that stored inputs differ from the training data. Retrieval accuracy is higher for appropriate representations because the encoded sequences are less divergent than those encoded with inappropriate representations. Consistent with our model prediction, we found that human subjects remember some aspects of episodes significantly more accurately if they had previously been familiarized with the objects occurring in the episode, as compared to episodes involving unfamiliar objects. We thus conclude that the interaction with the semantic system plays an important role for episodic memory.

  20. MR imaging of breast implants.

    PubMed

    Gorczyca, D P

    1994-11-01

    MR imaging has proved to be an excellent imaging modality in locating free silicone and evaluating an implant for rupture, with a sensitivity of approximately 94% and specificity of 97%. Silicone has a unique MR resonance frequency and long T1 and T2 relaxation times, which allows several MR sequences to provide excellent diagnostic images. The most commonly used sequences include T2-weighted, STIR, and chemical shift imaging (Figs. 3, 13, and 14). The T2-weighted and STIR sequences are often used in conjunction with chemical water suppression. The most reliable findings on MR images for detection of implant rupture include identification of the collapsed implant shell (linguine sign) and free silicone within the breast parenchyma.

  1. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  2. Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control

    NASA Astrophysics Data System (ADS)

    Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José

    2017-03-01

    We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.

  3. Diffraction-Induced Bidimensional Talbot Self-Imaging with Full Independent Period Control.

    PubMed

    Guillet de Chatellus, Hugues; Romero Cortés, Luis; Deville, Antonin; Seghilani, Mohamed; Azaña, José

    2017-03-31

    We predict, formulate, and observe experimentally a generalized version of the Talbot effect that allows one to create diffraction-induced self-images of a periodic two-dimensional (2D) waveform with arbitrary control of the image spatial periods. Through the proposed scheme, the periods of the output self-image are multiples of the input ones by any desired integer or fractional factor, and they can be controlled independently across each of the two wave dimensions. The concept involves conditioning the phase profile of the input periodic wave before free-space diffraction. The wave energy is fundamentally preserved through the self-imaging process, enabling, for instance, the possibility of the passive amplification of the periodic patterns in the wave by a purely diffractive effect, without the use of any active gain.

  4. MRI Sequences in Head & Neck Radiology - State of the Art.

    PubMed

    Widmann, Gerlig; Henninger, Benjamin; Kremser, Christian; Jaschke, Werner

    2017-05-01

    Background  Magnetic resonance imaging (MRI) has become an essential imaging modality for the evaluation of head & neck pathologies. However, the diagnostic power of MRI is strongly related to the appropriate selection and interpretation of imaging protocols and sequences. The aim of this article is to review state-of-the-art sequences for the clinical routine in head & neck MRI and to describe the evidence for which medical questions these sequences and techniques are useful. Method  Literature review of state-of-the-art sequences in head & neck MRI. Results and Conclusion  Basic sequences (T1w, T2w, T1wC+) and fat suppression techniques (TIRM/STIR, Dixon, spectral fat sat) are important tools in the diagnostic workup of inflammation, congenital lesions and tumors, including staging. Additional sequences (SSFP (CISS, FIESTA), SPACE, VISTA, 3D-FLAIR) are used for pathologies of the cranial nerves, labyrinth and evaluation of endolymphatic hydrops in Menière's disease. Vessel and perfusion sequences (3D-TOF, TWIST/TRICKS angiography, DCE) are used in vascular contact syndromes, vascular malformations and analysis of microvascular parameters of tissue perfusion. Diffusion-weighted imaging (EPI-DWI, non-EPI-DWI, RESOLVE) is helpful in cholesteatoma imaging, estimation of malignancy, and evaluation of treatment response and posttreatment recurrence in head & neck cancer. Understanding of MRI sequences and close collaboration with referring physicians improves the diagnostic confidence of MRI in the daily routine and drives further research in this fascinating imaging modality. Key Points:  · Understanding of MRI sequences is essential for the correct and reliable interpretation of MRI findings. · MRI protocols have to be carefully selected based on relevant clinical information. · Close collaboration with referring physicians improves the output obtained from the diagnostic possibilities of MRI. Citation Format · Widmann G, Henninger B, Kremser C et al. 
MRI Sequences in Head & Neck Radiology - State of the Art. Fortschr Röntgenstr 2017; 189: 413 - 422. © Georg Thieme Verlag KG Stuttgart · New York.

  5. Images multiplexing by code division technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung J.; Rigas, Harriett

    Spread Spectrum Systems (SSS), or Code Division Multiple Access Systems (CDMAS), have been studied for a long time, but most of the attention has focused on transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with the corresponding m-sequences to obtain encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the two-dimensional image is treated as a long one-dimensional vector and the m-sequence is employed to obtain the results. Secondly, the two-dimensional quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array approach is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.

  6. Images Multiplexing By Code Division Technique

    NASA Astrophysics Data System (ADS)

    Kuo, Chung Jung; Rigas, Harriett B.

    1990-01-01

    Spread Spectrum Systems (SSS), or Code Division Multiple Access Systems (CDMAS), have been studied for a long time, but most of the attention has focused on transmission problems. In this paper, we study the results when the code division technique is applied to the image at the source stage. The idea is to convolve N different images with the corresponding m-sequences to obtain encrypted images. The superimposed image (the summation of the encrypted images) is then stored or transmitted. The benefit of this is that no one knows what is stored or transmitted unless the m-sequence is known. The original image is recovered by correlating the superimposed image with the corresponding m-sequence. Two cases are studied in this paper. First, the 2-D image is treated as a long 1-D vector and the m-sequence is employed to obtain the results. Secondly, the 2-D quasi m-array is proposed and used for the code division multiplexing. It is shown that the quasi m-array approach is faster when the image size is 256 x 256. The important features of the proposed technique are not only image security but also data compactness. The compression ratio depends on how many images are superimposed.
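
    The encode/decode cycle described in these two records can be sketched on a 1-D toy signal (a hedged illustration: the LFSR taps, signal, and decoding normalisation are our choices, not taken from the paper):

```python
import numpy as np

def m_sequence(taps, length):
    """±1 maximal-length sequence from a Fibonacci LFSR (all-ones seed)."""
    reg = [1] * max(taps)
    bits = []
    for _ in range(length):
        bits.append(reg[-1])
        fb = 0
        for t in taps:
            fb ^= reg[t - 1]
        reg = [fb] + reg[:-1]
    return np.array([1.0 if b else -1.0 for b in bits])

def circ_conv(a, b):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def circ_corr(a, b):
    """Circular correlation via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

N = 7
m = m_sequence((3, 1), N)        # polynomial x^3 + x + 1, period 7
x = np.arange(N, dtype=float)    # stand-in for a flattened image

enc = circ_conv(x, m)            # "encrypted" image

# The m-sequence autocorrelation is N at lag 0 and -1 elsewhere, so
# correlating gives (N+1)*x - N*mean(x); undoing that recovers x exactly.
corr = circ_corr(enc, m)
x_hat = (corr + N * corr.mean()) / (N + 1)
```

With several images, each convolved with its own code before summation, the same correlation step picks each image back out of the superposition, up to cross-correlation noise between the codes.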

  7. Combining convolutional neural networks and Hough Transform for classification of images containing lines

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Limonova, Elena; Nikolaev, Dmitry; Krivtsov, Valeriy

    2017-03-01

    In this paper, we propose an expansion of convolutional neural network (CNN) input features based on the Hough Transform. We perform morphological contrasting of the source image followed by the Hough Transform, and then use the result as input for some of the convolutional filters. Thus, the CNN's computational complexity and the number of units are not affected. Morphological contrasting and the Hough Transform are the only additional computational expenses of the introduced CNN input feature expansion. The proposed approach is demonstrated on a CNN with a very simple structure. We considered two image recognition problems: object classification on CIFAR-10 and printed character recognition on a private dataset with symbols taken from Russian passports. Our approach allowed us to reach a noticeable accuracy improvement without much computational effort, which can be extremely important in industrial recognition systems or in difficult problems utilising CNNs, such as pressure ridge analysis and classification.

  8. Deep RNNs for video denoising

    NASA Astrophysics Data System (ADS)

    Chen, Xinyuan; Song, Li; Yang, Xiaokang

    2016-09-01

    Video denoising can be described as the problem of mapping from a specific length of noisy frames to a clean one. We propose a deep architecture based on Recurrent Neural Networks (RNNs) for video denoising. The model learns a patch-based end-to-end mapping between the clean and noisy video sequences: it takes the corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as deep Recurrent Neural Networks (deep RNNs or DRNNs), stacks RNN layers where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information through the temporal domain and benefits video denoising; (ii) the deep architecture has enough capacity to express the mapping between corrupted videos as input and clean videos as output; and (iii) the model generalizes to learn different mappings from videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, we are able to compete with some existing video denoising methods.
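
    The layer stacking described here can be sketched as a forward pass (a minimal numpy illustration of the DRNN wiring, not the trained denoiser; layer sizes and random weights are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def rnn_step(x, h, w_x, w_h, b):
    """One tanh RNN cell update: new hidden state from input and old state."""
    return np.tanh(x @ w_x + h @ w_h + b)

def deep_rnn_forward(seq, layers):
    """Stacked RNN: at every time step each layer receives the previous
    layer's hidden state as its input, as in the DRNN described above."""
    h = [np.zeros(p["w_h"].shape[0]) for p in layers]
    out = []
    for x in seq:
        inp = x
        for l, p in enumerate(layers):
            h[l] = rnn_step(inp, h[l], p["w_x"], p["w_h"], p["b"])
            inp = h[l]                 # feed this layer's state upward
        out.append(inp)                # top layer's state per time step
    return out

def make_layer(n_in, n_hidden):
    return {"w_x": rng.standard_normal((n_in, n_hidden)) * 0.1,
            "w_h": rng.standard_normal((n_hidden, n_hidden)) * 0.1,
            "b": np.zeros(n_hidden)}

# three noisy 16-pixel "patches" through a 2-layer stack
layers = [make_layer(16, 32), make_layer(32, 32)]
outputs = deep_rnn_forward([rng.standard_normal(16) for _ in range(3)], layers)
```

In the actual denoiser the top-layer states would be projected back to patch space and trained against clean patches; this sketch only shows the recurrence structure.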

  9. Short-term memory capacity in networks via the restricted isometry property.

    PubMed

    Charles, Adam S; Yap, Han Lun; Rozell, Christopher J

    2014-06-01

    Cortical networks are hypothesized to rely on transient network activity to support short-term memory (STM). In this letter, we study the capacity of randomly connected recurrent linear networks for performing STM when the input signals are approximately sparse in some basis. We leverage results from compressed sensing to provide rigorous nonasymptotic recovery guarantees, quantifying the impact of the input sparsity level, the input sparsity basis, and the network characteristics on the system capacity. Our analysis demonstrates that network memory capacities can scale superlinearly with the number of nodes and in some situations can achieve STM capacities that are much larger than the network size. We provide perfect recovery guarantees for finite sequences and recovery bounds for infinite sequences. The latter analysis predicts that network STM systems may have an optimal recovery length that balances errors due to omission and recall mistakes. Furthermore, we show that the conditions yielding optimal STM capacity can be embodied in several network topologies, including networks with sparse or dense connectivities.
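
    In the easiest regime the letter covers (a finite input sequence shorter than the network, so no sparsity is needed), recovery of the input from the final network state can be sketched as follows. The network matrix, feed-in weights, and sizes are illustrative choices; the paper's compressed-sensing analysis addresses the harder case of sequences longer than the network:

```python
import numpy as np

rng = np.random.default_rng(0)
M, T = 20, 10                         # network nodes, input sequence length
W = rng.standard_normal((M, M))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # stabilise the recurrence
z = rng.standard_normal(M)            # feed-in weights

s = rng.standard_normal(T)            # input sequence to be remembered
x = np.zeros(M)
for t in range(T):                    # linear recurrent dynamics
    x = W @ x + z * s[t]

# The final state is A @ s with columns A[:, t] = W^(T-1-t) @ z, so the
# sequence can be read out by least squares when M >= T (noise-free case).
A = np.column_stack([np.linalg.matrix_power(W, T - 1 - t) @ z
                     for t in range(T)])
s_hat = np.linalg.lstsq(A, x, rcond=None)[0]
```

When T exceeds M this linear readout fails, which is where the restricted-isometry argument and sparse recovery in the letter take over.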

  10. Deformable Image Registration based on Similarity-Steered CNN Regression.

    PubMed

    Cao, Xiaohuan; Yang, Jianhua; Zhang, Jun; Nie, Dong; Kim, Min-Jeong; Wang, Qian; Shen, Dinggang

    2017-09-01

    Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.

  11. Semantic Image Segmentation with Contextual Hierarchical Models.

    PubMed

    Seyedhosseini, Mojtaba; Tasdizen, Tolga

    2016-05-01

    Semantic segmentation is the problem of assigning an object label to each pixel. It unifies the image segmentation and object recognition problems. The importance of using contextual information in semantic segmentation frameworks has been widely realized in the field. We propose a contextual framework, called the contextual hierarchical model (CHM), which learns contextual information in a hierarchical framework for semantic segmentation. At each level of the hierarchy, a classifier is trained based on downsampled input images and the outputs of previous levels. Our model then incorporates the resulting multi-resolution contextual information into a classifier to segment the input image at the original resolution. This training strategy allows for optimization of a joint posterior probability at multiple resolutions through the hierarchy. The contextual hierarchical model is based purely on the input image patches and does not make use of any fragments or shape examples. Hence, it is applicable to a variety of problems such as object segmentation and edge detection. We demonstrate that CHM performs on par with the state of the art on the Stanford background and Weizmann horse datasets. It also outperforms state-of-the-art edge detection methods on the NYU depth dataset and achieves state-of-the-art results on the Berkeley segmentation dataset (BSDS 500).

  12. Isolating Visual and Proprioceptive Components of Motor Sequence Learning in ASD.

    PubMed

    Sharer, Elizabeth A; Mostofsky, Stewart H; Pascual-Leone, Alvaro; Oberman, Lindsay M

    2016-05-01

    In addition to their defining impairments in social communication skills, individuals with autism spectrum disorder (ASD) also show impairments in more basic sensory and motor skills. Development of new skills involves integrating information from multiple sensory modalities. This input is then used to form internal models of action that can be accessed both when performing skilled movements and when understanding those actions performed by others. Learning skilled gestures is particularly reliant on integration of visual and proprioceptive input. We used a modified serial reaction time task (SRTT) to decompose proprioceptive and visual components and examine whether patterns of implicit motor skill learning differ in ASD participants as compared with healthy controls. While both groups learned the implicit motor sequence during training, healthy controls showed robust generalization whereas ASD participants demonstrated little generalization when visual input was constant. In contrast, no group differences in generalization were observed when proprioceptive input was constant, with both groups showing limited degrees of generalization. The findings suggest that, when learning a motor sequence, individuals with ASD tend to rely less on visual feedback than do healthy controls. Visuomotor representations are considered to underlie imitative learning and action understanding and are thereby crucial to social skill and cognitive development. Thus, anomalous patterns of implicit motor learning, with a tendency to discount visual feedback, may be an important contributor to the core social communication deficits that characterize ASD. Autism Res 2016, 9: 563-569. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  13. Deep Convolutional Neural Networks for Multi-Modality Isointense Infant Brain Image Segmentation

    PubMed Central

    Zhang, Wenlu; Li, Rongjian; Deng, Houtao; Wang, Li; Lin, Weili; Ji, Shuiwang; Shen, Dinggang

    2015-01-01

    The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development in health and disease. In the isointense stage (approximately 6–8 months of age), WM and GM exhibit similar levels of intensity in both T1 and T2 MR images, making tissue segmentation very challenging. Only a small number of existing methods have been designed for tissue segmentation in this isointense stage; however, they used only a single modality (T1 or T2) or the combination of T1 and T2 images. In this paper, we propose to use deep convolutional neural networks (CNNs) for segmenting isointense stage brain tissues using multi-modality MR images. CNNs are a class of deep models in which trainable filters and local neighborhood pooling operations are applied alternately to the raw input images, resulting in a hierarchy of increasingly complex features. Specifically, we used multimodality information from T1, T2, and fractional anisotropy (FA) images as inputs and then generated the segmentation maps as outputs. The multiple intermediate layers applied convolution, pooling, normalization, and other operations to capture the highly nonlinear mappings between inputs and outputs. We compared the performance of our approach with that of the commonly used segmentation methods on a set of manually segmented isointense stage brain images. Results showed that our proposed model significantly outperformed prior methods on infant brain tissue segmentation. In addition, our results indicated that integration of multi-modality images led to significant performance improvement. PMID:25562829
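
    The multi-modality input construction, stacking co-registered T1, T2, and FA patches along a channel axis, can be sketched as follows. Shapes, patch size, and the random stand-in images are illustrative assumptions, not values from the paper.

```python
import numpy as np

def make_multimodal_patches(t1, t2, fa, centers, half=6):
    """Stack co-registered T1, T2 and FA patches along a leading channel
    axis, producing the multi-channel inputs a CNN would consume."""
    patches = []
    for y, x in centers:
        sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
        patches.append(np.stack([t1[sl], t2[sl], fa[sl]], axis=0))
    return np.stack(patches)

rng = np.random.default_rng(0)
t1, t2, fa = rng.random((3, 64, 64))      # stand-ins for co-registered image slices
batch = make_multimodal_patches(t1, t2, fa, [(20, 20), (30, 40)])
```

    The resulting array has shape (N, 3, 13, 13): one 3-channel patch per center, ready for a convolutional front end.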

  14. gr-MRI: A software package for magnetic resonance imaging using software defined radios.

    PubMed

    Hasselwander, Christopher J; Cao, Zhipeng; Grissom, William A

    2016-09-01

    The goal of this work is to develop software that enables the rapid implementation of custom MRI spectrometers using commercially-available software defined radios (SDRs). The developed gr-MRI software package comprises a set of Python scripts, flowgraphs, and signal generation and recording blocks for GNU Radio, an open-source SDR software package that is widely used in communications research. gr-MRI implements basic event sequencing functionality and tools for system calibrations, multi-radio synchronization, and MR signal processing and image reconstruction. It includes four pulse sequences: a single-pulse sequence to record free induction signals, a gradient-recalled echo imaging sequence, a spin echo imaging sequence, and an inversion recovery spin echo imaging sequence. The sequences were used to perform phantom imaging scans with a 0.5 Tesla tabletop MRI scanner and two commercially-available SDRs. One SDR was used for RF excitation and reception, and the other for gradient pulse generation. The total SDR hardware cost was approximately $2000. The frequency of radio desynchronization events and the frequency with which the software recovered from those events were also measured, and the SDR's ability to generate frequency-swept RF waveforms was validated and compared to the scanner's commercial spectrometer. The spin echo images geometrically matched those acquired using the commercial spectrometer, with no unexpected distortions. Desynchronization events were more likely to occur at the very beginning of an imaging scan, but were nearly eliminated if the user invoked the sequence for a short period before beginning data recording. The SDR produced a 500 kHz bandwidth frequency-swept pulse with high fidelity, while the commercial spectrometer produced a waveform with large frequency spike errors. In conclusion, the developed gr-MRI software can be used to develop high-fidelity, low-cost custom MRI spectrometers using commercially-available SDRs.
Copyright © 2016. Published by Elsevier Inc.
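
    A frequency-swept pulse of the kind validated above can be generated in a few lines. This is a generic linear-chirp sketch at an assumed sample rate and duration, not gr-MRI's actual GNU Radio signal blocks.

```python
import numpy as np

def chirp(f0, f1, duration, fs):
    """Linear frequency sweep from f0 to f1 Hz over `duration` seconds,
    sampled at `fs` Hz, as a unit-magnitude complex baseband waveform."""
    n = round(duration * fs)
    t = np.arange(n) / fs
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / duration * t ** 2)
    return np.exp(1j * phase)

pulse = chirp(-250e3, 250e3, 2e-3, 2e6)   # 500 kHz sweep, 2 ms, 2 MS/s (assumed)
```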

  15. Error propagation in eigenimage filtering.

    PubMed

    Soltanian-Zadeh, H; Windham, J P; Jenkins, J M

    1990-01-01

    Mathematical derivation of error (noise) propagation in eigenimage filtering is presented. Based on the mathematical expressions, a method for decreasing the propagated noise given a sequence of images is suggested. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the final composite image are compared to the SNRs and CNRs of the images in the sequence. The consistency of the assumptions and accuracy of the mathematical expressions are investigated using sequences of simulated and real magnetic resonance (MR) images of an agarose phantom and a human brain.
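
    The standard error-propagation rule underlying this kind of analysis, for a composite image formed as a linear combination of images with independent noise, can be sketched as below. This is the generic rule, not the paper's full derivation for eigenimage weights.

```python
import numpy as np

def composite_noise_std(weights, sigmas):
    """For a composite C = sum_i w_i * I_i with independent noise of std
    sigma_i in image i, the propagated noise std is sqrt(sum (w_i*sigma_i)^2)."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(sigmas, dtype=float)
    return float(np.sqrt(np.sum((w * s) ** 2)))

def cnr(mean_a, mean_b, noise_std):
    """Contrast-to-noise ratio between two region means in the composite."""
    return abs(mean_a - mean_b) / noise_std

# two equally weighted images of unit noise: noise grows by sqrt(2)
sigma_c = composite_noise_std([1.0, 1.0], [1.0, 1.0])
```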

  16. A model-based approach for detection of runways and other objects in image sequences acquired using an on-board camera

    NASA Technical Reports Server (NTRS)

    Kasturi, Rangachar; Devadiga, Sadashiva; Tang, Yuan-Liang

    1994-01-01

    This research was initiated as a part of the Advanced Sensor and Imaging System Technology (ASSIST) program at NASA Langley Research Center. The primary goal of this research is the development of image analysis algorithms for the detection of runways and other objects using an on-board camera. Initial effort was concentrated on images acquired using a passive millimeter wave (PMMW) sensor. The images obtained using PMMW sensors under poor visibility conditions due to atmospheric fog are characterized by very low spatial resolution but good image contrast compared to those obtained using sensors operating in the visible spectrum. Algorithms developed for analyzing these images using a model of the runway and other objects are described in Part 1 of this report. Experimental verification of these algorithms was limited to a sequence of images simulated from a single frame of a PMMW image. Subsequent development and evaluation of algorithms was done using video image sequences. These images have better spatial and temporal resolution compared to PMMW images. Algorithms for reliable recognition of runways and accurate estimation of the spatial position of stationary objects on the ground have been developed and evaluated using several image sequences. These algorithms are described in Part 2 of this report. A list of all publications resulting from this work is also included.

  17. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, polarization, and others. For instance, spectral images are modeled as 3D cubes with two spatial and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging our real world demands capturing information in at least 6 different domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which are time-consuming. In general, to date, multimodal imaging requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene. Different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, in a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). 
    To date, the coding procedure in CSI has been realized through the use of "block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures either block or transmit the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In the first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as "color" coded apertures. These apertures are formed by tiny pixelated optical filters, which in turn allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the design and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded aperture-based CSI architectures. On another front, due to the rich information contained in the infrared spectrum as well as the depth domain, this thesis aims to extend the sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) with as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways. 
    With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because they would allow an artificial intelligence to make informed decisions not only about the location of objects within a scene but also about their material properties.
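
    The CS recovery step, estimating a sparse signal from random linear projections, can be sketched in one dimension with ISTA (iterative soft-thresholding). The random matrix A stands in for the coded-aperture forward model; sizes, sparsity, and the regularization weight are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random projections (coded measurements)
y = A @ x_true                            # compressive measurements (m < n)

def ista(A, y, lam=0.01, steps=500):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

    With half as many measurements as unknowns, the sparse signal is still recovered to small relative error, which is the core point of the CSI framework described above.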

  18. 95 Minutes Over Jupiter

    NASA Image and Video Library

    2017-09-28

    This sequence of color-enhanced images shows how quickly the viewing geometry changes for NASA's Juno spacecraft as it swoops by Jupiter. The images were obtained by JunoCam. Once every 53 days, Juno swings close to Jupiter, speeding over its clouds. In just two hours, the spacecraft travels from a perch over Jupiter's north pole through its closest approach (perijove), then passes over the south pole on its way back out. This sequence shows 11 color-enhanced images from Perijove 8 (Sept. 1, 2017) with the south pole on the left (11th image in the sequence) and the north pole on the right (first image in the sequence). The first image on the right shows a half-lit globe of Jupiter, with the north pole approximately at the upper center of the image close to the terminator -- the dividing line between night and day. As the spacecraft gets closer to Jupiter, the horizon moves in and the range of visible latitudes shrinks. The second and third images in this sequence show the north polar region rotating away from the spacecraft's field of view while the first of Jupiter's lighter-colored bands comes into view. The fourth through the eighth images display a blue-colored vortex in the mid-southern latitudes near Points of Interest "Collision of Colours," "Sharp Edge," "Caltech, by Halka," and "Structure01." The Points of Interest are locations in Jupiter's atmosphere that were identified and named by members of the general public. Additionally, a darker, dynamic band can be seen just south of the vortex. In the ninth and tenth images, the south polar region rotates into view. The final image on the left displays Jupiter's south pole in the center. From the start of this sequence of images to the end, roughly 1 hour and 35 minutes elapsed. https://photojournal.jpl.nasa.gov/catalog/PIA21967

  19. Classification and recognition of dynamical models: the role of phase, independent components, kernels and optimal transport.

    PubMed

    Bissacco, Alessandro; Chiuso, Alessandro; Soatto, Stefano

    2007-11-01

    We address the problem of performing decision tasks, and in particular classification and recognition, in the space of dynamical models in order to compare time series of data. Motivated by the application of recognition of human motion in image sequences, we consider a class of models that include linear dynamics, both stable and marginally stable (periodic), both minimum and non-minimum phase, driven by non-Gaussian processes. This requires extending existing learning and system identification algorithms to handle periodic modes and nonminimum phase behavior, while taking into account higher-order statistics of the data. Once a model is identified, we define a kernel-based cord distance between models that includes their dynamics, their initial conditions as well as input distribution. This is made possible by a novel kernel defined between two arbitrary (non-Gaussian) distributions, which is computed by efficiently solving an optimal transport problem. We validate our choice of models, inference algorithm, and distance on the tasks of human motion synthesis (sample paths of the learned models), and recognition (nearest-neighbor classification in the computed distance). However, our work can be applied more broadly where one needs to compare historical data while taking into account periodic trends, non-minimum phase behavior, and non-Gaussian input distributions.
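
    In one dimension with equal-size samples, the optimal transport problem has a closed form (sort both samples and match in order), so a kernel between empirical distributions can be sketched as below. The paper's kernel and transport solver are more general; the Gaussian form and bandwidth here are assumptions.

```python
import numpy as np

def wasserstein_1d(a, b):
    """1-D optimal transport (earth mover's) distance between equal-size
    empirical samples: sort both and average the pointwise displacement."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def ot_kernel(a, b, gamma=1.0):
    """Kernel between empirical distributions: a Gaussian of the transport
    distance (an illustrative construction, not the paper's exact kernel)."""
    return float(np.exp(-gamma * wasserstein_1d(a, b) ** 2))

a = np.array([0.0, 1.0, 2.0])
b = a + 1.0                               # same sample, shifted by 1
```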

  20. Cortical regions activated by the subjective sense of perceptual coherence of environmental sounds: a proposal for a neuroscience of intuition.

    PubMed

    Volz, Kirsten G; Rübsamen, Rudolf; von Cramon, D Yves

    2008-09-01

    According to the Oxford English Dictionary, intuition is "the ability to understand or know something immediately, without conscious reasoning." In other words, people continuously, without conscious attention, recognize patterns in the stream of sensations that impinge upon them. The result is a vague perception of coherence, which subsequently biases thought and behavior accordingly. Within the visual domain, research using paradigms with difficult recognition has suggested that the orbitofrontal cortex (OFC) serves as a fast detector and predictor of potential content that utilizes coarse facets of the input. To investigate whether the OFC is crucial in biasing task-specific processing, and hence subserves intuitive judgments in various modalities, we used a difficult-recognition paradigm in the auditory domain. Participants were presented with short sequences of distorted, nonverbal, environmental sounds and had to perform a sound categorization task. Imaging results revealed rostral medial OFC activation for such auditory intuitive coherence judgments. By means of a conjunction analysis between the present results and those from a previous study on visual intuitive coherence judgments, the rostral medial OFC was shown to be activated via both modalities. We conclude that rostral OFC activation during intuitive coherence judgments subserves the detection of potential content on the basis of only coarse facets of the input.

  1. Complexity and non-commutativity of learning operations on graphs.

    PubMed

    Atmanspacher, Harald; Filk, Thomas

    2006-07-01

    We present results from numerical studies of supervised learning operations in small recurrent networks considered as graphs, leading from a given set of input conditions to predetermined outputs. Graphs that have optimized their output for particular inputs with respect to predetermined outputs are asymptotically stable and can be characterized by attractors, which form a representation space for an associative multiplicative structure of input operations. As the mapping from a series of inputs onto a series of such attractors generally depends on the sequence of inputs, this structure is generally non-commutative. Moreover, the size of the set of attractors, indicating the complexity of learning, is found to behave non-monotonically as learning proceeds. A tentative relation between this complexity and the notion of pragmatic information is indicated.

  2. Detection of soft-tissue sarcoma recurrence: added value of functional MR imaging techniques at 3.0 T.

    PubMed

    Del Grande, Filippo; Subhawong, Ty; Weber, Kristy; Aro, Michael; Mugera, Charles; Fayad, Laura M

    2014-05-01

    To determine the added value of functional magnetic resonance (MR) sequences (dynamic contrast material-enhanced [DCE] and quantitative diffusion-weighted [DW] imaging with apparent diffusion coefficient [ADC] mapping) for the detection of recurrent soft-tissue sarcomas following surgical resection. This retrospective study was approved by the institutional review board. The requirement to obtain informed consent was waived. Thirty-seven patients referred for postoperative surveillance after resection of soft-tissue sarcoma (35 with high-grade sarcoma) were studied. Imaging at 3.0 T included conventional (T1-weighted, fluid-sensitive, and contrast-enhanced T1-weighted imaging) and functional (DCE MR imaging, DW imaging with ADC mapping) sequences. Recurrences were confirmed with biopsy or resection. A disease-free state was determined with at least 6 months of follow-up. Two readers independently recorded the signal and morphologic characteristics with conventional sequences, the presence or absence of arterial enhancement at DCE MR imaging, and ADCs of the surgical bed. The accuracy of conventional MR imaging in the detection of recurrence was compared with that with the addition of functional sequences. The Fisher exact and Wilcoxon rank sum tests were used to define the accuracy of imaging features, the Cohen κ and Lin interclass correlation were used to define interobserver variability, and receiver operating characteristic analysis was used to define a threshold to detect recurrence and assess reader confidence after the addition of functional imaging to conventional sequences. There were six histologically proved recurrences in 37 patients. 
    Sensitivity and specificity of MR imaging in the detection of tumor recurrence were 100% (six of six patients) and 52% (16 of 31 patients), respectively, with conventional sequences, 100% (six of six patients) and 97% (30 of 31 patients) with the addition of DCE MR imaging, and 60% (three of five patients) and 97% (30 of 31 patients) with the addition of DW imaging and ADC mapping. The average ADC of recurrence (1.08 mm²/sec ± 0.19) was significantly different from those of postoperative scarring (0.9 mm²/sec ± 0.00) and hematomas (2.34 mm²/sec ± 0.72) (P = .03 for both). The addition of functional MR sequences to a routine MR protocol, in particular DCE MR imaging, offers a specificity of more than 95% for distinguishing recurrent sarcoma from postsurgical scarring.

  3. ACTG: novel peptide mapping onto gene models.

    PubMed

    Choi, Seunghyuk; Kim, Hyunwoo; Paek, Eunok

    2017-04-15

    In many proteogenomic applications, mapping peptide sequences onto genome sequences can be very useful, because it allows us to understand the origins of the gene products. Existing software tools either take the genomic position of a peptide start site as an input or assume that the peptide sequence exactly matches the coding sequence of a given gene model. In the case of novel peptides resulting from genomic variations, especially structural variations such as alternative splicing, these existing tools cannot be directly applied unless users supply information about the variant, either its genomic position or its transcription model. Mapping potentially novel peptides to genome sequences, while allowing certain genomic variations, requires introducing novel gene models when aligning peptide sequences to gene structures. We have developed a new tool called ACTG (Amino aCids To Genome), which maps peptides to the genome, assuming all possible single-exon skipping, junction variation allowing up to three edit distances from the original splice sites, exon extension, and frame shift. In addition, it can also consider SNVs (single nucleotide variations) during the mapping phase if the user provides a VCF (variant call format) file as an input. Available at http://prix.hanyang.ac.kr/ACTG/search.jsp . eunokpaek@hanyang.ac.kr. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
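
    The base case of peptide-to-genome mapping, an exact in-frame match on forward reading frames, can be sketched as below. This is only the part that existing tools already handle; ACTG's contribution (exon skipping, splice-site variation, frame shifts, SNVs) sits on top of it and is not shown.

```python
from itertools import product

# standard genetic code, codons ordered T, C, A, G in each position
BASES = "TCAG"
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AMINO)}

def translate(dna, frame=0):
    """Translate a DNA string in the given forward reading frame."""
    dna = dna.upper()
    return "".join(CODON[dna[i:i + 3]] for i in range(frame, len(dna) - 2, 3))

def map_peptide(peptide, dna):
    """Scan the three forward frames for an exact peptide match;
    returns (frame, genomic offset) or None."""
    for frame in range(3):
        j = translate(dna, frame).find(peptide)
        if j >= 0:
            return frame, frame + 3 * j
    return None

hit = map_peptide("MAK", "GATGGCCAAA")    # toy sequence, not from the paper
```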

  4. Modeling of prepregs during automated draping sequences

    NASA Astrophysics Data System (ADS)

    Krogh, Christian; Glud, Jens A.; Jakobsen, Johnny

    2017-10-01

    The behavior of woven prepreg fabric during automated draping sequences is investigated. A drape tool under development with an arrangement of grippers facilitates the placement of a woven prepreg fabric in a mold. It is essential that the draped configuration is free from wrinkles and other defects. The present study sets up a virtual draping framework capable of modeling the draping process from the initial flat fabric to the final double-curved shape, with the aim of assisting the development of an automated drape tool. The virtual draping framework consists of a kinematic mapping algorithm used to generate target points on the mold, which are used as input to a draping sequence planner. The draping sequence planner prescribes the displacement history for each gripper in the drape tool, and these displacements are then applied to each gripper in a transient model of the draping sequence. The model is based on a transient finite element analysis, with the material's constitutive behavior currently approximated as linear elastic orthotropic. In-plane tensile and bias-extension tests as well as bending tests are conducted and used as input for the model. The virtual draping framework shows good potential for obtaining a better understanding of the draping process and guiding the development of the drape tool. However, results obtained from using the framework on a simple test case indicate that the generation of draping sequences is non-trivial.

  5. DEEP MOTIF DASHBOARD: VISUALIZING AND UNDERSTANDING GENOMIC SEQUENCES USING DEEP NEURAL NETWORKS.

    PubMed

    Lanchantin, Jack; Singh, Ritambhara; Wang, Beilun; Qi, Yanjun

    2017-01-01

    Deep neural network (DNN) models have recently obtained state-of-the-art prediction accuracy for the transcription factor binding site (TFBS) classification task. However, it remains unclear how these approaches identify meaningful DNA sequence signals and give insights as to why TFs bind to certain locations. In this paper, we propose a toolkit called the Deep Motif Dashboard (DeMo Dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns, from deep neural network models for TFBS classification. We demonstrate how to visualize and understand three important DNN models: convolutional, recurrent, and convolutional-recurrent networks. Our first visualization method finds a test sequence's saliency map, which uses first-order derivatives to describe the importance of each nucleotide in making the final prediction. Second, because recurrent models make predictions in a temporal manner (from one end of a TFBS sequence to the other), we introduce temporal output scores, indicating the prediction score of a model over time for a sequential input. Lastly, a class-specific visualization strategy finds the optimal input sequence for a given TFBS positive class via stochastic gradient optimization. Our experimental results indicate that a convolutional-recurrent architecture performs the best among the three architectures. The visualization techniques indicate that the CNN-RNN makes predictions by modeling both motifs as well as the dependencies among them.
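
    The saliency-map idea, first-order derivatives of the prediction score with respect to each position of the one-hot encoded input, can be sketched with numerical differentiation on a toy linear scorer. The toy model, encoding, and step size are assumptions for illustration, not the DeMo Dashboard implementation (which backpropagates through a trained DNN).

```python
import numpy as np

def saliency_map(score_fn, x, eps=1e-4):
    """First-order saliency: numerically estimate d(score)/d(input) at each
    position of a one-hot encoded sequence; large magnitude = influential."""
    grad = np.zeros(x.shape)
    for idx in np.ndindex(*x.shape):
        xp = x.astype(float).copy()
        xm = x.astype(float).copy()
        xp[idx] += eps
        xm[idx] -= eps
        grad[idx] = (score_fn(xp) - score_fn(xm)) / (2 * eps)
    return np.abs(grad)

# toy "model": the score depends only on base 1 at sequence position 2
w = np.zeros((5, 4))
w[2, 1] = 3.0

def score(seq):
    return float(np.sum(w * seq))

x = np.eye(4)[[0, 1, 1, 2, 3]]            # one-hot encoding, 5 positions x 4 bases
sal = saliency_map(score, x)
```

    The saliency map is large exactly at the one input entry the score depends on, which is the property the visualization exploits.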

  6. Input reconstruction for networked control systems subject to deception attacks and data losses on control signals

    NASA Astrophysics Data System (ADS)

    Keller, J. Y.; Chabir, K.; Sauter, D.

    2016-03-01

    State estimation of stochastic discrete-time linear systems subject to unknown inputs or constant biases has been widely studied but no work has been dedicated to the case where a disturbance switches between unknown input and constant bias. We show that such disturbance can affect a networked control system subject to deception attacks and data losses on the control signals transmitted by the controller to the plant. This paper proposes to estimate the switching disturbance from an augmented state version of the intermittent unknown input Kalman filter recently developed by the authors. Sufficient stochastic stability conditions are established when the arrival binary sequence of data losses follows a Bernoulli random process.
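
    The core mechanism of Kalman filtering under Bernoulli measurement losses (always predict, update only when a packet arrives) can be sketched as below. The paper's intermittent unknown input filter adds disturbance estimation on top of this; the matrices here are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, received, A, C, Q, R):
    """One Kalman step with Bernoulli measurement losses: always predict,
    update only when the measurement packet actually arrived."""
    x = A @ x                              # time update (predict)
    P = A @ P @ A.T + Q
    if received:                           # measurement update on arrival only
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (z - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
    return x, P

A = np.eye(2); C = np.eye(2)
Q = 0.01 * np.eye(2); R = np.eye(2)
x0, P0 = np.zeros(2), 10.0 * np.eye(2)
z = np.array([0.5, -0.5])
x_hit, P_hit = kalman_step(x0, P0, z, True, A, C, Q, R)
x_miss, P_miss = kalman_step(x0, P0, z, False, A, C, Q, R)
```

    A received packet shrinks the error covariance; a lost one leaves the filter coasting on its prediction, which is why stability depends on the Bernoulli arrival rate.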

  7. Atlas-based automatic measurements of the morphology of the tibiofemoral joint

    NASA Astrophysics Data System (ADS)

    Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-03-01

    Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  8. Atlas-based automatic measurements of the morphology of the tibiofemoral joint.

    PubMed

    Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W

    2017-02-11

    Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.

  9. A hexagonal orthogonal-oriented pyramid as a model of image representation in visual cortex

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Ahumada, Albert J., Jr.

    1989-01-01

    Retinal ganglion cells represent the visual image with a spatial code, in which each cell conveys information about a small region in the image. In contrast, cells of the primary visual cortex use a hybrid space-frequency code in which each cell conveys information about a region that is local in space, spatial frequency, and orientation. A mathematical model for this transformation is described. The hexagonal orthogonal-oriented quadrature pyramid (HOP) transform, which operates on a hexagonal input lattice, uses basis functions that are orthogonal, self-similar, and localized in space, spatial frequency, orientation, and phase. The basis functions, which are generated from seven basic types through a recursive process, form an image code of the pyramid type. The seven basis functions, six bandpass and one low-pass, occupy a point and a hexagon of six nearest neighbors on a hexagonal lattice. The six bandpass basis functions consist of three with even symmetry, and three with odd symmetry. At the lowest level, the inputs are image samples. At each higher level, the input lattice is provided by the low-pass coefficients computed at the previous level. At each level, the output is subsampled in such a way as to yield a new hexagonal lattice with a spacing square root of 7 larger than the previous level, so that the number of coefficients is reduced by a factor of seven at each level. In the biological model, the input lattice is the retinal ganglion cell array. The resulting scheme provides a compact, efficient code of the image and generates receptive fields that resemble those of the primary visual cortex.
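    The factor-of-seven subsampling described above can be illustrated with a short sketch of the coefficient counts per pyramid level; each level keeps one low-pass coefficient per seven-point neighborhood, so the lattice shrinks geometrically while its spacing grows by a factor of the square root of 7. The helper below is illustrative, not code from the paper:

```python
def hop_level_sizes(n_samples, n_levels):
    """Input lattice size at each level of a pyramid that subsamples by 7.

    At every level, each hexagonal neighborhood of 7 samples yields 7
    coefficients (6 bandpass + 1 low-pass); only the low-pass coefficients
    feed the next level, so the count drops by a factor of 7 per level.
    """
    sizes = []
    n = n_samples
    for _ in range(n_levels):
        sizes.append(n)
        n //= 7  # one low-pass coefficient per 7-point neighborhood
    return sizes

# e.g. a 343-sample hexagonal input lattice and a 3-level pyramid
levels = hop_level_sizes(343, 3)
```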

  10. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin

    Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method.
However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartmental kinetic model, the average difference between the estimated model parameters from ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of {sup 11}C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
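    The AUC comparison between the image-derived and arterial input functions can be sketched with a simple trapezoid-rule integration of each sampled curve. The time points and concentration values below are hypothetical, not data from the study:

```python
def auc_trapezoid(times, values):
    """Area under a sampled input-function curve via the trapezoid rule."""
    return sum((t1 - t0) * (v0 + v1) / 2.0
               for t0, t1, v0, v1 in zip(times, times[1:], values, values[1:]))

# hypothetical early-phase samples (min, arbitrary concentration units)
t = [0.0, 1.0, 2.0, 3.0, 4.0]
id_if = [0.0, 5.0, 3.0, 2.0, 1.5]   # image-derived input function
ad_if = [0.0, 5.2, 2.9, 2.1, 1.4]   # arterial-sampled input function

# an AUC ratio near 1 indicates close agreement between the two curves
ratio = auc_trapezoid(t, id_if) / auc_trapezoid(t, ad_if)
```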

  11. Computerized tomography using video recorded fluoroscopic images

    NASA Technical Reports Server (NTRS)

    Kak, A. C.; Jakowatz, C. V., Jr.; Baily, N. A.; Keller, R. A.

    1975-01-01

    A computerized tomographic imaging system is examined which employs video-recorded fluoroscopic images as input data. By connecting the video recorder to a digital computer through a suitable interface, the system permits very rapid construction of tomograms.

  12. Stereo sequence transmission via conventional transmission channel

    NASA Astrophysics Data System (ADS)

    Lee, Ho-Keun; Kim, Chul-Hwan; Han, Kyu-Phil; Ha, Yeong-Ho

    2003-05-01

    This paper proposes a new stereo sequence transmission technique that uses digital watermarking to remain compatible with conventional 2D digital TV. Stereo sequences are generally compressed and transmitted by exploiting the temporal-spatial redundancy between the stereo images, but users with conventional digital TV sets cannot watch the transmitted 3D sequences because the various 3D compression formats are mutually incompatible. To solve this problem, we exploit the information-hiding capability of digital watermarking and conceal the information of the other stereo image within the three color channels of the reference image. The main goal of the presented technique is to let viewers with conventional DTV sets watch stereo movies at the same time. This goal is reached by considering the response of the human eye to color information and by using digital watermarking. To hide the right image within the left image effectively, bit changes in the three color channels are performed according to the estimated disparity. The proposed method assigns the displacement information of the right image to each channel of YCbCr in the DCT domain; each LSB on the YCbCr channels is changed according to the bits of the disparity information. The performance of the presented method is confirmed by several computer experiments.
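    The LSB bit-change step described above can be sketched as follows, assuming 8-bit channel samples and a bit-stream of disparity information; the function names are illustrative, and the DCT-domain transform and disparity-estimation stages are omitted:

```python
def embed_bits_lsb(channel_values, bits):
    """Set the least-significant bit of each 8-bit channel sample to a payload bit."""
    return [(v & ~1) | b for v, b in zip(channel_values, bits)]

def extract_bits_lsb(channel_values):
    """Recover the payload bits from the least-significant bits."""
    return [v & 1 for v in channel_values]

# hypothetical samples from one YCbCr channel, and disparity bits to hide
samples = [200, 201, 202]
payload = [1, 0, 1]
marked = embed_bits_lsb(samples, payload)    # -> [201, 200, 203]
recovered = extract_bits_lsb(marked)         # -> [1, 0, 1]
```

    Because only the LSB of each sample changes, the visible distortion of the reference image is minimal, which is the property the technique relies on.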

  13. System, method and apparatus for generating phrases from a database

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    Phrase generation is a method of generating sequences of terms, such as phrases, that may occur within a database of subsets containing sequences of terms, such as text. A database is provided and a relational model of the database is created. A query is then input. The query includes a term, a sequence of terms, multiple individual terms, multiple sequences of terms, or combinations thereof. Next, several sequences of terms that are contextually related to the query are assembled from contextual relations in the model of the database. The sequences of terms are then sorted and output. Phrase generation can also be an iterative process used to produce sequences of terms from a relational model of a database.
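    One minimal way to realize contextual phrase generation is a bigram successor model built from the text database, then grown greedily from the query. This is an illustrative sketch under that assumption, not the patent's actual relational model:

```python
from collections import defaultdict

def build_bigram_model(texts):
    """A minimal relational model: which term follows which, with counts."""
    follows = defaultdict(lambda: defaultdict(int))
    for text in texts:
        tokens = text.lower().split()
        for a, b in zip(tokens, tokens[1:]):
            follows[a][b] += 1
    return follows

def generate_phrase(model, query, length=3):
    """Grow the query into a phrase by appending the most frequent successor."""
    phrase = query.lower().split()
    while len(phrase) < length and model[phrase[-1]]:
        nxt = max(model[phrase[-1]].items(), key=lambda kv: kv[1])[0]
        phrase.append(nxt)
    return " ".join(phrase)

# hypothetical text database
docs = ["engine failure report", "engine failure analysis", "engine failure report"]
model = build_bigram_model(docs)
phrase = generate_phrase(model, "engine")   # -> "engine failure report"
```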

  14. Proton-decoupled, Overhauser-enhanced, spatially localized carbon-13 spectroscopy in humans.

    PubMed

    Bottomley, P A; Hardy, C J; Roemer, P B; Mueller, O M

    1989-12-01

    Spatially localized, natural abundance, carbon (13C) NMR spectroscopy has been combined with proton (1H) decoupling and nuclear Overhauser enhancement to improve 13C sensitivity up to five-fold in the human leg, liver, and heart. Broadband-decoupled 13C spectra were acquired in 1 s to 17 min with a conventional 1.5-T imaging/spectroscopy system, an auxiliary 1H decoupler, an air-cooled dual-coil coplanar surface probe, and both depth-resolved surface coil spectroscopy (DRESS) and one-dimensional phase-encoding gradient NMR pulse sequences. The surface coil probe comprised circular and figure-eight-shaped coils to eliminate problems with mutual coupling of coils at high decoupling power levels applied during 13C reception. Peak decoupler RF power deposition in tissue was computed numerically from electromagnetic theory assuming a semi-infinite plane of uniform biological conductor. Peak values at the surface were calculated at 4 to 6 W/kg in any gram of tissue for each watt of decoupler power input excluding all coil and cable losses, warning of potential local RF heating problems in these and related experiments. The average power deposition was about 9 mW/kg per watt input, which should present no systemic hazard. At 3 W input, human 13C spectra were decoupled to a depth of about 5 cm while some Overhauser enhancement was sustained up to about 3 cm depth, without ill effect. The observation of glycogen in localized natural abundance 13C spectra of heart and muscle suggests that metabolites in the citric acid cycle should be observable noninvasively using 13C-labeled substrates.
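    The power-deposition figures quoted above scale linearly with decoupler input power, which a trivial sketch makes explicit; the coefficients are taken from the abstract, and the helper function is illustrative:

```python
def local_sar(power_in_watts, wkg_per_watt):
    """Specific absorption rate (W/kg) for a given decoupler input power.

    The abstract reports a peak of 4-6 W/kg per watt of input at the
    surface, and an average of about 9 mW/kg (0.009 W/kg) per watt.
    """
    return power_in_watts * wkg_per_watt

# at the 3 W input used for human spectra:
peak = local_sar(3.0, 6.0)        # worst-case peak local SAR: 18 W/kg
systemic = local_sar(3.0, 0.009)  # average deposition: 0.027 W/kg
```

    The contrast between the two numbers is the abstract's point: the systemic average is negligible, but the local surface peak warrants caution.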

  15. Processing Translational Motion Sequences.

    DTIC Science & Technology

    1982-10-01

    ...the initial ROADSIGN image using a ∇²G mask with a width of 5 pixels. The distinctiveness values were computed using features which were 5x5 pixel... ...the initial step size of the local search quite large. 4. EXPERIMENTS: The following experiments were performed using the roadsign and industrial... ...the initial image of the sequence. The third experiment involves processing the roadsign image sequence using the features extracted at the positions...

  16. Beyond the electronic textbook model: software techniques to make on-line educational content dynamic.

    PubMed

    Frank, M S; Dreyer, K

    2001-06-01

    We describe a working software technology that enables educators to incorporate their expertise and teaching style into highly interactive and Socratic educational material for distribution on the world wide web. A graphically oriented interactive authoring system was developed to enable the computer novice to create and store within a database his or her domain expertise in the form of electronic knowledge. The authoring system supports and facilitates the input and integration of several types of content, including free-form, stylized text, miniature and full-sized images, audio, and interactive questions with immediate feedback. The system enables the choreography and sequencing of these entities for display within a web page as well as the sequencing of entire web pages within a case-based or thematic presentation. Images or segments of text can be hyperlinked with point-and-click to other entities such as adjunctive web pages, audio, or other images, cases, or electronic chapters. Miniature (thumbnail) images are automatically linked to their full-sized counterparts. The authoring system contains a graphically oriented word processor, an image editor, and capabilities to automatically invoke and use external image-editing software such as Photoshop. The system works in both local area network (LAN) and internet-centric environments. An internal metalanguage (invisible to the author but stored with the content) was invented to represent the choreographic directives that specify the interactive delivery of the content on the world wide web. A database schema was developed to objectify and store both this electronic knowledge and its associated choreographic metalanguage. 
A database engine was combined with page-rendering algorithms in order to retrieve content from the database and deliver it on the web in a Socratic style, assess the recipient's current fund of knowledge, and provide immediate feedback, thus stimulating in-person interaction with a human expert. This technology enables the educator to choreograph a stylized, interactive delivery of his or her message using multimedia components assembled in virtually any order, spanning any number of web pages for a given case or theme. An educator can thus exercise precise influence on specific learning objectives, embody his or her personal teaching style within the content, and ultimately enhance its educational impact. The described technology amplifies the efforts of the educator and provides a more dynamic and enriching learning environment for web-based education.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    N, Gwilliam M; J, Collins D; O, Leach M

    Purpose: To assess the feasibility of accurately quantifying the concentration of MRI contrast agent (CA) in pulsatile flowing blood by measuring its T{sub 1}, as is common for the purposes of obtaining a patient-specific arterial input function (AIF). Dynamic contrast enhanced (DCE)-MRI and pharmacokinetic (PK) modelling is widely used to produce measures of vascular function, but the difficulty of measuring the AIF accurately undermines their accuracy. A proposed solution is to measure the T{sub 1} of blood in a large vessel using the Fram double flip angle method during the passage of a bolus of CA. This work expands on previous work by assessing pulsatile flow and the changes in T{sub 1} seen with a CA bolus. Methods: A phantom was developed which used a physiological pump to pass fluid of a known T{sub 1} (812 ms) through the centre of a head coil of a clinical 1.5T MRI scanner. Measurements were made using high temporal resolution sequences suitable for DCE-MRI and were used to validate a virtual phantom that simulated the expected errors due to pulsatile flow and the bolus-driven changes in CA concentration typically found in patients. Results: Measured and virtual results showed similar trends, although there were differences that may be attributed to the virtual phantom not accurately simulating the spin history of the fluid before entering the imaging volume. The relationship between T{sub 1} measurement and flow speed was non-linear. T{sub 1} measurement is compromised by new spins flowing into the imaging volume without having been subject to enough excitations to reach steady state. The virtual phantom demonstrated a range of recorded T{sub 1} values for various simulated T{sub 1} values and flow rates. Conclusion: T{sub 1} measurement of flowing blood using standard DCE-MRI sequences is very challenging. Measurement error is non-linear with relation to instantaneous flow speed. Optimising sequence parameters and lowering the baseline T{sub 1} of blood should be considered.
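    The double flip angle (Fram) T1 estimation referred to above can be sketched from the standard spoiled gradient-echo signal equation. This is an idealized, flow-free inversion of two signal measurements, not the phantom analysis itself; all parameter values are illustrative:

```python
import math

def spgr_signal(m0, t1, tr, alpha):
    """Steady-state spoiled gradient-echo signal at flip angle alpha (rad)."""
    e1 = math.exp(-tr / t1)
    return m0 * math.sin(alpha) * (1 - e1) / (1 - e1 * math.cos(alpha))

def t1_from_two_angles(s1, s2, tr, a1, a2):
    """Invert the double flip angle (Fram) method for T1.

    Solving the ratio s1/s2 of two SPGR signals for E1 = exp(-TR/T1).
    """
    r = s1 / s2
    e1 = (math.sin(a1) - r * math.sin(a2)) / (
        math.sin(a1) * math.cos(a2) - r * math.sin(a2) * math.cos(a1))
    return -tr / math.log(e1)

# illustrative values: the phantom fluid's T1 of 812 ms, TR = 5 ms,
# flip angles of 2 and 14 degrees
tr, a1, a2 = 5.0, math.radians(2.0), math.radians(14.0)
s1 = spgr_signal(1.0, 812.0, tr, a1)
s2 = spgr_signal(1.0, 812.0, tr, a2)
t1_est = t1_from_two_angles(s1, s2, tr, a1, a2)  # recovers ~812 ms
```

    With stationary spins the inversion is exact; the study's point is that inflow of fresh, non-steady-state spins breaks this assumption and makes the recovered T1 flow-dependent.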

  18. Three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions sequence for routine imaging of the spine: preliminary experience

    PubMed Central

    Tins, B; Cassar-Pullicino, V; Haddaway, M; Nachtrab, U

    2012-01-01

    Objectives The bulk of spinal imaging is still performed with conventional two-dimensional sequences. This study assesses the suitability of three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions (SPACE) sequence for routine spinal imaging. Methods 62 MRI examinations of the spine were evaluated by 2 examiners in consensus for the depiction of anatomy and presence of artefact. We noted pathologies that might be missed using the SPACE sequence only or the SPACE and a sagittal T1 weighted sequence. The reference standards were sagittal and axial T1 weighted and T2 weighted sequences. At a later date the evaluation was repeated by one of the original examiners and an additional examiner. Results There was good agreement of the single evaluations and consensus evaluation for the conventional sequences: κ>0.8, confidence interval (CI)>0.6–1.0. For the SPACE sequence, depiction of anatomy was very good for 84% of cases, with high interobserver agreement, but there was poor interobserver agreement for other cases. For artefact assessment of SPACE, κ=0.92, CI=0.92–1.0. The SPACE sequence was superior to conventional sequences for depiction of anatomy and artefact resistance. The SPACE sequence occasionally missed bone marrow oedema. In conjunction with sagittal T1 weighted sequences, no abnormality was missed. The isotropic SPACE sequence was superior to conventional sequences in imaging difficult anatomy such as in scoliosis and spondylolysis. Conclusion The SPACE sequence allows excellent assessment of anatomy owing to high spatial resolution and resistance to artefact. The sensitivity for bone marrow abnormalities is limited. PMID:22374284

  19. Three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions sequence for routine imaging of the spine: preliminary experience.

    PubMed

    Tins, B; Cassar-Pullicino, V; Haddaway, M; Nachtrab, U

    2012-08-01

    The bulk of spinal imaging is still performed with conventional two-dimensional sequences. This study assesses the suitability of three-dimensional sampling perfection with application-optimised contrasts using a different flip angle evolutions (SPACE) sequence for routine spinal imaging. 62 MRI examinations of the spine were evaluated by 2 examiners in consensus for the depiction of anatomy and presence of artefact. We noted pathologies that might be missed using the SPACE sequence only or the SPACE and a sagittal T(1) weighted sequence. The reference standards were sagittal and axial T(1) weighted and T(2) weighted sequences. At a later date the evaluation was repeated by one of the original examiners and an additional examiner. There was good agreement of the single evaluations and consensus evaluation for the conventional sequences: κ>0.8, confidence interval (CI)>0.6-1.0. For the SPACE sequence, depiction of anatomy was very good for 84% of cases, with high interobserver agreement, but there was poor interobserver agreement for other cases. For artefact assessment of SPACE, κ=0.92, CI=0.92-1.0. The SPACE sequence was superior to conventional sequences for depiction of anatomy and artefact resistance. The SPACE sequence occasionally missed bone marrow oedema. In conjunction with sagittal T(1) weighted sequences, no abnormality was missed. The isotropic SPACE sequence was superior to conventional sequences in imaging difficult anatomy such as in scoliosis and spondylolysis. The SPACE sequence allows excellent assessment of anatomy owing to high spatial resolution and resistance to artefact. The sensitivity for bone marrow abnormalities is limited.
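    The κ values reported in this record measure inter-observer agreement beyond chance. A minimal sketch of Cohen's kappa for two raters on categorical assessments follows; the rating data are illustrative only:

```python
def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters' categorical ratings of the same cases."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # chance agreement from each rater's marginal category frequencies
    expected = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)

# perfect agreement -> kappa = 1; chance-level agreement -> kappa = 0
k_perfect = cohens_kappa(["good", "good", "poor", "poor"],
                         ["good", "good", "poor", "poor"])
k_chance = cohens_kappa(["good", "good", "poor", "poor"],
                        ["good", "poor", "good", "poor"])
```

    Values above 0.8, as reported for the conventional sequences and the artefact assessment, are conventionally read as near-perfect agreement.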

  20. The visual development of hand-centered receptive fields in a neural network model of the primate visual system trained with experimentally recorded human gaze changes

    PubMed Central

    Galeazzi, Juan M.; Navajas, Joaquín; Mender, Bedeho M. W.; Quian Quiroga, Rodrigo; Minini, Loredana; Stringer, Simon M.

    2016-01-01

    Neurons have been found in the primate brain that respond to objects in specific locations in hand-centered coordinates. A key theoretical challenge is to explain how such hand-centered neuronal responses may develop through visual experience. In this paper we show how hand-centered visual receptive fields can develop using an artificial neural network model, VisNet, of the primate visual system when driven by gaze changes recorded from human test subjects as they completed a jigsaw. A camera mounted on the head captured images of the hand and jigsaw, while eye movements were recorded using an eye-tracking device. This combination of data allowed us to reconstruct the retinal images seen as humans undertook the jigsaw task. These retinal images were then fed into the neural network model during self-organization of its synaptic connectivity using a biologically plausible trace learning rule. A trace learning mechanism encourages neurons in the model to learn to respond to input images that tend to occur in close temporal proximity. In the data recorded from human subjects, we found that the participant’s gaze often shifted through a sequence of locations around a fixed spatial configuration of the hand and one of the jigsaw pieces. In this case, trace learning should bind these retinal images together onto the same subset of output neurons. The simulation results consequently confirmed that some cells learned to respond selectively to the hand and a jigsaw piece in a fixed spatial configuration across different retinal views. PMID:27253452
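    The trace learning mechanism described above can be sketched as a single weight update in which a temporally smoothed postsynaptic trace gates Hebbian learning, so that inputs occurring close in time strengthen the same synapses. The parameter values and exact rule form below are illustrative, not necessarily those used in VisNet:

```python
def trace_learning_step(weights, inputs, prev_trace, eta=0.8, alpha=0.1):
    """One update of a trace learning rule (illustrative sketch).

    The trace mixes the neuron's current response with its own history
    (eta controls the memory), so temporally adjacent retinal images
    drive overlapping weight changes and become bound together.
    """
    response = sum(w * x for w, x in zip(weights, inputs))
    trace = (1 - eta) * response + eta * prev_trace
    new_weights = [w + alpha * trace * x for w, x in zip(weights, inputs)]
    return new_weights, trace

# hypothetical 2-input neuron seeing one retinal image, with no prior trace
w, tr = trace_learning_step([0.5, 0.5], [1.0, 0.0], prev_trace=0.0)
```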
