Sample records for high sequencing speed

  1. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
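
    As a rough, hypothetical illustration of the timing arithmetic behind the spectral-shuttering idea (not the authors' implementation), the sketch below lays out when each of the three LED colors would fire so that two consecutive 3CCD exposures yield a six-frame burst at a chosen effective frame rate; the frame rate and pulse width used here are assumptions.

```python
# Hypothetical timing sketch for a spectral-shuttering burst:
# three LED colors per 3CCD exposure, two exposures, six effective frames.

def burst_schedule(frame_rate_hz, pulse_us=5.0, colors=("red", "green", "blue")):
    """Return (time_us, color, exposure, pulse_us) tuples for a 6-frame burst."""
    frame_period_us = 1e6 / frame_rate_hz
    schedule = []
    for exposure in (1, 2):                 # each 3CCD exposure records 3 channels
        for i, color in enumerate(colors):  # one flash per color channel
            t = ((exposure - 1) * len(colors) + i) * frame_period_us
            schedule.append((t, color, exposure, pulse_us))
    return schedule

if __name__ == "__main__":
    for t, color, exposure, width in burst_schedule(frame_rate_hz=20_000):
        print(f"t = {t:6.1f} us: flash {color:5s} ({width} us) in exposure {exposure}")
```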

  2. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA

    PubMed Central

    Wright, Imogen A.; Travers, Simon A.

    2014-01-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. PMID:24861618

  3. RAMICS: trainable, high-speed and biologically relevant alignment of high-throughput sequencing reads to coding DNA.

    PubMed

    Wright, Imogen A; Travers, Simon A

    2014-07-01

    The challenge presented by high-throughput sequencing necessitates the development of novel tools for accurate alignment of reads to reference sequences. Current approaches focus on using heuristics to map reads quickly to large genomes, rather than generating highly accurate alignments in coding regions. Such approaches are, thus, unsuited for applications such as amplicon-based analysis and the realignment phase of exome sequencing and RNA-seq, where accurate and biologically relevant alignment of coding regions is critical. To facilitate such analyses, we have developed a novel tool, RAMICS, that is tailored to mapping large numbers of sequence reads to short lengths (<10 000 bp) of coding DNA. RAMICS utilizes profile hidden Markov models to discover the open reading frame of each sequence and aligns to the reference sequence in a biologically relevant manner, distinguishing between genuine codon-sized indels and frameshift mutations. This approach facilitates the generation of highly accurate alignments, accounting for the error biases of the sequencing machine used to generate reads, particularly at homopolymer regions. Performance improvements are gained through the use of graphics processing units, which increase the speed of mapping through parallelization. RAMICS substantially outperforms all other mapping approaches tested in terms of alignment quality while maintaining highly competitive speed performance. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
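
    RAMICS itself uses profile hidden Markov models and GPU parallelization; purely to illustrate the distinction the abstract draws between codon-sized indels and frameshifts, here is a hedged sketch that classifies gap runs in an already-aligned read by whether their length is a multiple of three. The gapped read and helper function are hypothetical, not RAMICS code.

```python
import re

def classify_gaps(aligned_read):
    """Classify '-' runs in a gapped read as codon-sized indels or frameshifts."""
    events = []
    for match in re.finditer(r"-+", aligned_read):
        length = match.end() - match.start()
        kind = "codon-sized indel" if length % 3 == 0 else "frameshift"
        events.append((match.start(), length, kind))
    return events

if __name__ == "__main__":
    # Hypothetical gapped read: one 3-base deletion, one 1-base deletion.
    read = "ATGGCC---GATTACAT-GTAA"
    for pos, length, kind in classify_gaps(read):
        print(f"gap at position {pos}, length {length}: {kind}")
```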

  4. Parametric study of laminated composite material shaft of high speed rotor-bearing system

    NASA Astrophysics Data System (ADS)

    Gonsalves, Thimothy Harold; Kumar, G. C. Mohan; Ramesh, M. R.

    2018-04-01

    In this paper, some of the important parameters that influence the rotor-dynamic effectiveness of a composite material shaft in a high-speed rotor-bearing system are analyzed. The composite material composition and the number of layers, along with their stacking sequences, are evaluated, as they play an important role in deciding the best configuration for the high-speed application. In this work the lateral modal frequencies for five types of composite shaft in a high-speed power turbine rotor-bearing system, as well as the stresses due to operating torque, are evaluated. The results are useful for selecting the right combination of material, number of layers and stacking sequence. The numerical analysis is carried out using the rotordynamic analysis features of ANSYS.

  5. Difference in muscle activation patterns during high-speed versus standard-speed yoga: A randomized sequence crossover study.

    PubMed

    Potiaumpai, Melanie; Martins, Maria Carolina Massoni; Wong, Claudia; Desai, Trusha; Rodriguez, Roberto; Mooney, Kiersten; Signorile, Joseph F

    2017-02-01

    To compare the difference in muscle activation between high-speed yoga and standard-speed yoga, and to compare muscle activation during the transitions between poses with that during the held phases of a yoga pose. Design: Randomized sequence crossover trial. Setting: A laboratory of neuromuscular research and active aging. Interventions: Eight minutes of continuous Sun Salutation B was performed at a high speed versus a standard speed, separately. Electromyography was used to quantify normalized muscle activation patterns of eight upper- and lower-body muscles (pectoralis major, medial deltoids, lateral head of the triceps, middle fibers of the trapezius, vastus medialis, medial gastrocnemius, thoracic extensor spinae, and external obliques) during the high-speed and standard-speed yoga protocols. Main outcome measure: Difference in normalized muscle activation between high-speed yoga and standard-speed yoga. Normalized muscle activity signals were significantly higher in all eight muscles during the transition phases of poses compared to the held phases (p<0.01). There was no significant speed×phase interaction; however, greater normalized muscle activity was seen for high-speed yoga across the entire session. Our results show that transitions from one held phase of a pose to another produce higher normalized muscle activity than the held phases of the poses, and that overall activity is greater during high-speed yoga than standard-speed yoga. Therefore, the transition speed and associated number of poses should be considered when targeting specific improvements in performance. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Science and technology review, July/August 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhye, R.

    This month's issues are entitled Assuring the Safety of Nuclear Power; The Microtechnology Center, When Smaller is Better; Speeding the Gene Hunt: High Speed DNA Sequencing; and Microbial Treatments of High Explosives.

  7. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, which is a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and with programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  8. Thought Speed, Mood, and the Experience of Mental Motion.

    PubMed

    Pronin, Emily; Jacobs, Elana

    2008-11-01

    This article presents a theoretical account relating thought speed to mood and psychological experience. Thought sequences that occur at a fast speed generally induce more positive affect than do those that occur slowly. Thought speed constitutes one aspect of mental motion. Another aspect involves thought variability, or the degree to which thoughts in a sequence either vary widely from or revolve closely around a theme. Thought sequences possessing more motion (occurring fast and varying widely) generally produce more positive affect than do sequences possessing little motion (occurring slowly and repetitively). When speed and variability oppose each other, such that one is low and the other is high, predictable psychological states also emerge. For example, whereas slow, repetitive thinking can prompt dejection, fast, repetitive thinking can prompt anxiety. This distinction is related to the fact that fast thinking involves greater actual and felt energy than slow thinking does. Effects of mental motion occur independent of the specific content of thought. Their consequences for mood and energy hold psychotherapeutic relevance. © 2008 Association for Psychological Science.

  9. Synchronous high speed multi-point velocity profile measurement by heterodyne interferometry

    NASA Astrophysics Data System (ADS)

    Hou, Xueqin; Xiao, Wen; Chen, Zonghui; Qin, Xiaodong; Pan, Feng

    2017-02-01

    This paper presents a synchronous multipoint velocity profile measurement system, which acquires the vibration velocities as well as images of vibrating objects by combining optical heterodyne interferometry and a high-speed CMOS-DVR camera. The high-speed CMOS-DVR camera records a sequence of images of the vibrating object. Then, by extracting and processing multiple pixels at the same time, a digital demodulation technique is implemented to simultaneously acquire the vibrating velocity of the target from the recorded sequences of images. This method is validated with an experiment. A piezoelectric ceramic plate with standard vibration characteristics is used as the vibrating target, which is driven by a standard sinusoidal signal.

  10. A high-speed on-chip pseudo-random binary sequence generator for multi-tone phase calibration

    NASA Astrophysics Data System (ADS)

    Gommé, Liesbeth; Vandersteen, Gerd; Rolain, Yves

    2011-07-01

    An on-chip reference generator is conceived by adopting the technique of decimating a pseudo-random binary sequence (PRBS) signal into parallel sequences. This is of great benefit when high-speed generation of PRBS and PRBS-derived signals is the objective. The design is implemented in standard CMOS logic available in commercial libraries to provide the logic functions for the generator. The design allows the user to select the periodicity of the PRBS and the PRBS-derived signals. Characterization of the on-chip generator benchmarks its performance and reveals promising specifications.
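
    The generator described above is a CMOS hardware design; as a software-level illustration of the signals involved (not the chip's logic), the sketch below produces a maximal-length PRBS with a 7-bit linear-feedback shift register and then decimates it into parallel lower-rate lanes. The register length, taps and lane count are assumptions for the example.

```python
def prbs7(n_bits, seed=0x7F):
    """Maximal-length PRBS-7 from a 7-bit LFSR (polynomial x^7 + x^6 + 1)."""
    state, out = seed & 0x7F, []
    for _ in range(n_bits):
        newbit = ((state >> 6) ^ (state >> 5)) & 1   # XOR of the two tap bits
        out.append(state & 1)
        state = ((state << 1) | newbit) & 0x7F
    return out

def decimate(bits, n_lanes):
    """Split one high-rate sequence into n_lanes parallel lower-rate sequences."""
    return [bits[lane::n_lanes] for lane in range(n_lanes)]

if __name__ == "__main__":
    sequence = prbs7(127)            # one full period of the PRBS-7 pattern
    lanes = decimate(sequence, 4)    # e.g. four parallel on-chip lanes
    print(len(sequence), [len(lane) for lane in lanes])
```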

  11. DNA nanomapping using CRISPR-Cas9 as a programmable nanoparticle.

    PubMed

    Mikheikin, Andrey; Olsen, Anita; Leslie, Kevin; Russell-Pavier, Freddie; Yacoot, Andrew; Picco, Loren; Payton, Oliver; Toor, Amir; Chesney, Alden; Gimzewski, James K; Mishra, Bud; Reed, Jason

    2017-11-21

    Progress in whole-genome sequencing using short-read (e.g., <150 bp), next-generation sequencing technologies has reinvigorated interest in high-resolution physical mapping to fill technical gaps that are not well addressed by sequencing. Here, we report two technical advances in DNA nanotechnology and single-molecule genomics: (1) we describe a labeling technique (CRISPR-Cas9 nanoparticles) for high-speed AFM-based physical mapping of DNA and (2) the first successful demonstration of using DVD optics to image DNA molecules with high-speed AFM. As a proof of principle, we used this new "nanomapping" method to detect and precisely map BCL2-IGH translocations present in lymph node biopsies of follicular lymphoma patients. This HS-AFM "nanomapping" technique can be complementary to both sequencing and other physical mapping approaches.

  12. Advances in High-Throughput Speed, Low-Latency Communication for Embedded Instrumentation (7th Annual SFAF Meeting, 2012)

    ScienceCinema

    Jordan, Scott

    2018-01-24

    Scott Jordan on "Advances in high-throughput speed, low-latency communication for embedded instrumentation" at the 2012 Sequencing, Finishing, Analysis in the Future Meeting held June 5-7, 2012 in Santa Fe, New Mexico.

  13. Differences in energy expenditure during high-speed versus standard-speed yoga: A randomized sequence crossover trial.

    PubMed

    Potiaumpai, Melanie; Martins, Maria Carolina Massoni; Rodriguez, Roberto; Mooney, Kiersten; Signorile, Joseph F

    2016-12-01

    To compare energy expenditure and the volumes of oxygen consumption and carbon dioxide production during a high-speed yoga and a standard-speed yoga program. Design: Randomized repeated-measures controlled trial. Setting: A laboratory of neuromuscular research and active aging. Interventions: Sun Salutation B was performed for eight minutes at a high speed versus a standard speed, separately, while oxygen consumption was recorded. Caloric expenditure was calculated using the volumes of oxygen consumption and carbon dioxide production. Main outcome measure: Difference in energy expenditure (kcal) between high-speed yoga (HSY) and standard-speed yoga (SSY). Significant differences in energy expenditure were observed between yoga speeds, with high-speed yoga producing significantly higher energy expenditure than standard-speed yoga (MD=18.55, SE=1.86, p<0.01). Significant differences were also seen between high-speed and standard-speed yoga for the volumes of oxygen consumed and carbon dioxide produced. High-speed yoga results in a significantly greater caloric expenditure than standard-speed yoga and may be an effective alternative program for those targeting cardiometabolic markers. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Propeller speed and phase sensor

    NASA Technical Reports Server (NTRS)

    Collopy, Paul D. (Inventor); Bennett, George W. (Inventor)

    1992-01-01

    A speed and phase sensor for counterrotating aircraft propellers is described. A toothed wheel is attached to each propeller, and the teeth trigger a sensor as they pass, producing a sequence of signals. From the sequence of signals, the rotational speed of each propeller is computed based on the time intervals between successive signals. The speed can be computed several times during one revolution, thus giving speed information which is highly up-to-date. Given that the spacing between teeth may not be uniform, the signals produced may be nonuniform in time. Error coefficients are derived to correct for nonuniformities in the resulting signals, thus allowing accurate speed to be computed despite the spacing nonuniformities. Phase can be viewed as the relative rotational position of one propeller with respect to the other, measured at a fixed time. Phase is computed from the signals.
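
    As a simple numerical illustration of the computation described (not the patented implementation), the sketch below estimates rotational speed from the time intervals between successive tooth pulses and uses per-tooth angular spacings as correction coefficients for a non-uniform wheel. The tooth spacings and true speed in the example are hypothetical.

```python
import math

def speed_from_intervals(pulse_times_s, tooth_angles_rad):
    """Estimate rotational speed (rev/s) after each tooth pulse.

    tooth_angles_rad[i] is the actual angular spacing ending at tooth i; using
    it per interval corrects for non-uniform tooth spacing on the wheel.
    """
    n = len(tooth_angles_rad)
    speeds = []
    for i in range(1, len(pulse_times_s)):
        dt = pulse_times_s[i] - pulse_times_s[i - 1]
        dtheta = tooth_angles_rad[i % n]
        speeds.append(dtheta / dt / (2 * math.pi))   # rev/s
    return speeds

if __name__ == "__main__":
    # Hypothetical 4-tooth wheel with slightly uneven spacing, spinning at 20 rev/s.
    angles = [1.55, 1.60, 1.57, 2 * math.pi - 1.55 - 1.60 - 1.57]
    times, t = [0.0], 0.0
    for k in range(1, 9):
        t += angles[k % 4] / (2 * math.pi * 20.0)    # simulate pulse arrival times
        times.append(t)
    print([round(s, 3) for s in speed_from_intervals(times, angles)])   # ~20.0 each
```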

  15. Quantum sequencing: opportunities and challenges

    NASA Astrophysics Data System (ADS)

    di Ventra, Massimiliano

    Personalized or precision medicine refers to the ability to tailor drugs to the specific genome and transcriptome of each individual. It is, however, not yet feasible due to the high costs and slow speed of present DNA sequencing methods. I will discuss a sequencing protocol that requires the measurement of the distributions of transverse tunneling currents during the translocation of single-stranded DNA into nanochannels. I will show that such a quantum sequencing approach can reach unprecedented speeds, without requiring any chemical preparation, amplification or labeling. I will discuss recent experiments that support these theoretical predictions, the advantages of this approach over other sequencing methods, and stress the challenges that need to be overcome to render it commercially viable.

  16. Adaptive efficient compression of genomes

    PubMed Central

    2012-01-01

    Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. However, memory requirements of the current algorithms are high and run times are often slow. In this paper, we propose an adaptive, parallel and highly efficient referential sequence compression method which allows fine-tuning of the trade-off between required memory and compression speed. When using 12 MB of memory, our method is on par with the best previous algorithms for human genomes in terms of compression ratio (400:1) and compression speed. In contrast, it compresses a complete human genome in just 11 seconds when provided with 9 GB of main memory, which is almost three times faster than the best competitor while using less main memory. PMID:23146997
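
    The paper's method is a tuned, parallel implementation with a configurable memory/speed trade-off; the hedged sketch below only illustrates the basic referential idea of encoding a target genome as (position, length) matches against a reference plus literal bases for the differences. The k-mer size, function names and toy sequences are assumptions.

```python
def build_index(reference, k=8):
    """Map every k-mer of the reference to its first position."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], i)
    return index

def referential_compress(target, reference, k=8):
    """Encode target as ('M', ref_pos, length) matches and ('L', bases) literals."""
    index, ops, i = build_index(reference, k), [], 0
    while i < len(target):
        pos = index.get(target[i:i + k])
        if pos is None:
            if ops and ops[-1][0] == "L":
                ops[-1] = ("L", ops[-1][1] + target[i])   # extend current literal run
            else:
                ops.append(("L", target[i]))
            i += 1
            continue
        length = k
        while (i + length < len(target) and pos + length < len(reference)
               and target[i + length] == reference[pos + length]):
            length += 1                                    # greedily extend the match
        ops.append(("M", pos, length))
        i += length
    return ops

if __name__ == "__main__":
    ref = "ACGTACGTGGGTTTACGTACGTAA"
    tgt = "ACGTACGTGGCTTTACGTACGTAA"   # one substitution relative to ref
    print(referential_compress(tgt, ref))
```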

  17. Aircraft and avionic related research required to develop an effective high-speed runway exit system

    NASA Technical Reports Server (NTRS)

    Schoen, M. L.; Hosford, J. E.; Graham, J. M., Jr.; Preston, O. W.; Frankel, R. S.; Erickson, J. B.

    1979-01-01

    Research was conducted to increase airport capacity by studying the feasibility of reducing the longitudinal separation between aircraft in sequence on final approach. The multidisciplinary factors, including the utility of high-speed exits for efficient runway operations, are described along with recommendations and highlights of these studies.

  18. GBParsy: a GenBank flatfile parser library with high speed.

    PubMed

    Lee, Tae-Ho; Kim, Yeon-Ki; Nahm, Baek Hie

    2008-07-25

    GenBank flatfile (GBF) format is one of the most popular sequence file formats because of its detailed sequence features and ease of readability. To use the data in the file by a computer, a parsing process is required and is performed according to a given grammar for the sequence and the description in a GBF. Currently, several parser libraries for the GBF have been developed. However, with the accumulation of DNA sequence information from eukaryotic chromosomes, parsing a eukaryotic genome sequence with these libraries inevitably takes a long time, due to the large GBF file and its correspondingly large genomic nucleotide sequence and related feature information. Thus, there is significant need to develop a parsing program with high speed and efficient use of system memory. We developed a library, GBParsy, which was C language-based and parses GBF files. The parsing speed was maximized by using content-specified functions in place of regular expressions that are flexible but slow. In addition, we optimized an algorithm related to memory usage so that it also increased parsing performance and efficiency of memory usage. GBParsy is at least 5-100x faster than current parsers in benchmark tests. GBParsy is estimated to extract annotated information from almost 100 Mb of a GenBank flatfile for chromosomal sequence information within a second. Thus, it should be used for a variety of applications such as on-time visualization of a genome at a web site.
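
    GBParsy is a C library that relies on content-specific scanning rather than regular expressions; as a hedged, much-simplified illustration of the flatfile layout being parsed, the sketch below pulls the LOCUS name, the feature keys and the concatenated ORIGIN sequence out of a tiny GenBank-style record. The example record text is hypothetical.

```python
def parse_gbf(text):
    """Very small GenBank flatfile reader: locus name, feature keys, sequence."""
    locus, features, seq_parts, in_origin = None, [], [], False
    for line in text.splitlines():
        if line.startswith("LOCUS"):
            locus = line.split()[1]
        elif line.startswith("ORIGIN"):
            in_origin = True
        elif line.startswith("//"):
            in_origin = False
        elif in_origin:
            seq_parts.append("".join(c for c in line if c.isalpha()))
        elif line[:1] == " " and len(line) > 5 and line[5] != " ":
            features.append(line.split()[0])   # feature key column starts at 6
    return locus, features, "".join(seq_parts).upper()

if __name__ == "__main__":
    record = """LOCUS       DEMO01                   12 bp    DNA     linear
FEATURES             Location/Qualifiers
     source          1..12
     gene            1..9
ORIGIN
        1 atggccgatt ac
//
"""
    print(parse_gbf(record))
```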

  19. Speed and path control for conflict-free flight in high air traffic demand in terminal airspace

    NASA Astrophysics Data System (ADS)

    Rezaei, Ali

    To accommodate the growing air traffic demand, flights will need to be planned and navigated with a much higher level of precision than today's aircraft flight path. The Next Generation Air Transportation System (NextGen) stands to benefit significantly in safety and efficiency from such movement of aircraft along precisely defined paths. Air Traffic Operations (ATO) relying on such precision--the Precision Air Traffic Operations or PATO--are the foundation of the high throughput capacity envisioned for the future airports. In PATO, the preferred method is to manage the air traffic by assigning a speed profile to each aircraft in a given fleet in a given airspace (known in practice as speed control). In this research, an algorithm has been developed, set in the context of a Hybrid Control System (HCS) model, that determines whether a speed control solution exists for a given fleet of aircraft in a given airspace and, if so, computes this solution as a collective speed profile that assures separation if executed without deviation. Uncertainties such as weather are not considered, but the algorithm can be modified to include uncertainties. The algorithm first computes all feasible sequences (i.e., all sequences that allow the given fleet of aircraft to reach their destinations without violating the FAA's separation requirement) by looking at all pairs of aircraft. Then, the most likely sequence is determined and the speed control solution is constructed by backward trajectory generation, starting with the last aircraft out and proceeding to the first out. This computation can be done for different sequences in parallel, which helps to reduce the computation time. If such a solution does not exist, then the algorithm calculates a minimal path modification (known as path control) that will allow separation-compliant speed control. We also prove that the algorithm modifies the path without creating a new separation violation. The new path is generated by adding new waypoints in the airspace. As a byproduct, instead of minimal path modification, one can use the aircraft arrival time schedule to generate the sequence in which the aircraft reach their destinations.
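
    As a drastically simplified, constant-speed caricature of the backward construction described above (not the author's HCS algorithm), the hedged sketch below schedules arrival times from the last aircraft out to the first, enforcing the minimum separation and checking that the implied speed stays within bounds; the fleet data, separation and speed limits are assumptions.

```python
def backward_arrival_schedule(fleet, min_sep_s, v_min, v_max):
    """Assign arrival times and constant speeds, working backward from the last aircraft.

    fleet: list of (name, distance_to_runway_m, preferred_arrival_s) in landing order.
    Returns (name, arrival_s, speed_m_s) tuples, or None if speed control alone fails.
    """
    plan, next_arrival = [], None
    for name, distance, preferred in reversed(fleet):
        arrival = preferred if next_arrival is None else min(preferred, next_arrival - min_sep_s)
        speed = distance / arrival
        if not (v_min <= speed <= v_max):
            return None          # this pair cannot be separated by speed control only
        plan.append((name, arrival, round(speed, 1)))
        next_arrival = arrival
    return list(reversed(plan))

if __name__ == "__main__":
    # Hypothetical three-aircraft arrival sequence with 90 s required separation.
    fleet = [("AC1", 60_000, 700), ("AC2", 75_000, 760), ("AC3", 90_000, 840)]
    print(backward_arrival_schedule(fleet, min_sep_s=90, v_min=70, v_max=120))
```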

  20. Laryngeal High-Speed Videoendoscopy: Rationale and Recommendation for Accurate and Consistent Terminology

    PubMed Central

    Deliyski, Dimitar D.; Hillman, Robert E.

    2015-01-01

    Purpose The authors discuss the rationale behind the term laryngeal high-speed videoendoscopy to describe the application of high-speed endoscopic imaging techniques to the visualization of vocal fold vibration. Method Commentary on the advantages of using accurate and consistent terminology in the field of voice research is provided. Specific justification is described for each component of the term high-speed videoendoscopy, which is compared and contrasted with alternative terminologies in the literature. Results In addition to the ubiquitous high-speed descriptor, the term endoscopy is necessary to specify the appropriate imaging technology and distinguish among modalities such as ultrasound, magnetic resonance imaging, and nonendoscopic optical imaging. Furthermore, the term video critically indicates the electronic recording of a sequence of optical still images representing scenes in motion, in contrast to strobed images using high-speed photography and non-optical high-speed magnetic resonance imaging. High-speed videoendoscopy thus concisely describes the technology and can be appended by the desired anatomical nomenclature such as laryngeal. Conclusions Laryngeal high-speed videoendoscopy strikes a balance between conciseness and specificity when referring to the typical high-speed imaging method performed on human participants. Guidance for the creation of future terminology provides clarity and context for current and future experiments and the dissemination of results among researchers. PMID:26375398

  1. Architecture Of High Speed Image Processing System

    NASA Astrophysics Data System (ADS)

    Konishi, Toshio; Hayashi, Hiroshi; Ohki, Tohru

    1988-01-01

    An architecture for a high-speed image processing system corresponding to a new algorithm for shape understanding is proposed, and a hardware system based on this architecture was developed. The main design considerations are that the processors used should match the processing sequence of the target image and that the developed system should be practical for industrial use. As a result, it was possible to perform each processing step at a speed of 80 nanoseconds per pixel.

  2. Sequence information signal processor for local and global string comparisons

    DOEpatents

    Peterson, John C.; Chow, Edward T.; Waterman, Michael S.; Hunkapillar, Timothy J.

    1997-01-01

    A sequence information signal processing integrated circuit chip designed to perform high-speed calculation of a dynamic programming algorithm based upon the algorithm defined by Waterman and Smith. The signal processing chip of the present invention is designed to be a building block of a linear systolic array, the performance of which can be increased by connecting additional sequence information signal processing chips to the array. The chip provides a high-speed, low-cost linear array processor that can locate highly similar global sequences or segments thereof, such as contiguous subsequences from two different DNA or protein sequences. The chip is implemented in a preferred embodiment using CMOS VLSI technology to provide the equivalent of about 400,000 transistors or 100,000 gates. Each chip provides 16 processing elements and is designed to provide 16-bit, two's complement operation for a maximum score precision of between -32,768 and +32,767. It is designed to provide a comparison between sequences as long as 4,194,304 elements without external software and between sequences of unlimited numbers of elements with the aid of external software. Each sequence can be assigned different deletion and insertion weight functions. Each processor is provided with a similarity measure device which is independently variable. Thus, each processor can contribute to the maximum value score calculation using a different similarity measure.
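
    Each processing element of the chip evaluates the Smith-Waterman dynamic programming recurrence; as a plain software reference for that recurrence (with a simple linear gap penalty, ignoring the chip's per-sequence weight functions and 16-bit arithmetic), here is a hedged sketch. The scoring parameters are assumptions.

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    """Best local-alignment score between sequences a and b (linear gap penalty)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(0,                    # local-alignment floor
                          prev[j - 1] + s,      # diagonal: match or mismatch
                          prev[j] + gap,        # gap in sequence b
                          curr[j - 1] + gap)    # gap in sequence a
            best = max(best, curr[j])
        prev = curr
    return best

if __name__ == "__main__":
    print(smith_waterman_score("GATTACA", "GCATGCA"))
```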

  3. High-speed multiple sequence alignment on a reconfigurable platform.

    PubMed

    Oliver, Tim; Schmidt, Bertil; Maskell, Douglas; Nathan, Darran; Clemens, Ralf

    2006-01-01

    Progressive alignment is a widely used approach to compute multiple sequence alignments (MSAs). However, aligning several hundred sequences by popular progressive alignment tools requires hours on sequential computers. Due to the rapid growth of sequence databases biologists have to compute MSAs in a far shorter time. In this paper we present a new approach to MSA on reconfigurable hardware platforms to gain high performance at low cost. We have constructed a linear systolic array to perform pairwise sequence distance computations using dynamic programming. This results in an implementation with significant runtime savings on a standard FPGA.

  4. Using high-throughput barcode sequencing to efficiently map connectomes.

    PubMed

    Peikon, Ian D; Kebschull, Justus M; Vagin, Vasily V; Ravens, Diana I; Sun, Yu-Chi; Brouzes, Eric; Corrêa, Ivan R; Bressan, Dario; Zador, Anthony M

    2017-07-07

    The function of a neural circuit is determined by the details of its synaptic connections. At present, the only available method for determining a neural wiring diagram with single synapse precision-a 'connectome'-is based on imaging methods that are slow, labor-intensive and expensive. Here, we present SYNseq, a method for converting the connectome into a form that can exploit the speed and low cost of modern high-throughput DNA sequencing. In SYNseq, each neuron is labeled with a unique random nucleotide sequence-an RNA 'barcode'-which is targeted to the synapse using engineered proteins. Barcodes in pre- and postsynaptic neurons are then associated through protein-protein crosslinking across the synapse, extracted from the tissue, and joined into a form suitable for sequencing. Although our failure to develop an efficient barcode joining scheme precludes the widespread application of this approach, we expect that with further development SYNseq will enable tracing of complex circuits at high speed and low cost. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. The use of high-speed imaging in education

    NASA Astrophysics Data System (ADS)

    Kleine, H.; McNamara, G.; Rayner, J.

    2017-02-01

    Recent improvements in camera technology and the associated improved access to high-speed camera equipment have made it possible to use high-speed imaging not only in a research environment but also specifically for educational purposes. This includes high-speed sequences that are created both with and for a target audience of students in high schools and universities. The primary goal is to engage students in scientific exploration by providing them with a tool that allows them to see and measure otherwise inaccessible phenomena. High-speed imaging has the potential to stimulate students' curiosity, as the results are often surprising or may contradict initial assumptions. "Live" demonstrations in class or student-run experiments are well suited to having a profound influence on student learning. Another aspect is the production of high-speed images for demonstration purposes. While some of the approaches known from the application of high-speed imaging in a research environment can simply be transferred, additional techniques must often be developed to make the results more easily accessible for the targeted audience. This paper describes a range of student-centered activities that can be undertaken which demonstrate how student engagement and learning can be enhanced through the use of high-speed imaging with readily available technologies.

  6. Music-Elicited Emotion Identification Using Optical Flow Analysis of Human Face

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.; Smirnova, Z. N.

    2015-05-01

    Human emotion identification from image sequences is in high demand nowadays. The range of possible applications varies from the automatic smile-shutter function of consumer-grade digital cameras to Biofied Building technologies, which enable communication between a building space and its residents. The highly perceptual nature of human emotions makes their classification and identification complex. The main question arises from the subjective quality of the emotional classification of events that elicit human emotions. A variety of methods for the formal classification of emotions have been developed in musical psychology. This work is focused on the identification of human emotions evoked by musical pieces using human face tracking and optical flow analysis. The facial feature tracking algorithm used for facial feature speed and position estimation is presented. Facial features were extracted from each image sequence using human face tracking with local binary pattern (LBP) features. Accurate relative speeds of facial features were estimated using optical flow analysis. The obtained relative positions and speeds were used as the output facial emotion vector. The algorithm was tested using original software and recorded image sequences. The proposed technique provides robust identification of human emotions elicited by musical pieces. The estimated models could be used for human emotion identification from image sequences in such fields as emotion-based musical backgrounds or mood-dependent radio.

  7. Visualization of impact damage of composite plates by means of the Moire technique

    NASA Technical Reports Server (NTRS)

    Knauss, W. G.; Babcock, C. D.; Chai, H.

    1980-01-01

    The phenomenological aspects of propagation damage due to low velocity impact on heavily loaded graphite-epoxy composite laminates were investigated using high speed photography coupled with the moire fringe technique. High speed moire motion records of the impacted specimens are presented. The results provide information on the time scale and sequence of the failure process. While the generation of the initial damage cannot always be separated temporally from the spreading of the damage, the latter takes place on the average with a speed on the order of 200 m/sec.

  8. Multiple-rotor-cycle 2D PASS experiments with applications to (207)Pb NMR spectroscopy.

    PubMed

    Vogt, F G; Gibson, J M; Aurentz, D J; Mueller, K T; Benesi, A J

    2000-03-01

    The two-dimensional phase-adjusted spinning sidebands (2D PASS) experiment is a useful technique for simplifying magic-angle spinning (MAS) NMR spectra that contain overlapping or complicated spinning sideband manifolds. The pulse sequence separates spinning sidebands by their order in a two-dimensional experiment. The result is an isotropic/anisotropic correlation experiment, in which a sheared projection of the 2D spectrum effectively yields an isotropic spectrum with no sidebands. The original 2D PASS experiment works best at lower MAS speeds (1-5 kHz). At higher spinning speeds (8-12 kHz) the experiment requires higher RF power levels so that the pulses do not overlap. In the case of nuclei such as (207)Pb, a large chemical shift anisotropy often yields too many spinning sidebands to be handled by a reasonable 2D PASS experiment unless higher spinning speeds are used. Performing the experiment at these speeds requires fewer 2D rows and a correspondingly shorter experimental time. Therefore, we have implemented PASS pulse sequences that occupy multiple MAS rotor cycles, thereby avoiding pulse overlap. These multiple-rotor-cycle 2D PASS sequences are intended for use in high-speed MAS situations such as those required by (207)Pb. A version of the multiple-rotor-cycle 2D PASS sequence that uses composite pulses to suppress spectral artifacts is also presented. These sequences are demonstrated on (207)Pb test samples, including lead zirconate, a perovskite-phase compound that is representative of a large class of interesting materials. Copyright 2000 Academic Press.

  9. Development of Neuromorphic Sift Operator with Application to High Speed Image Matching

    NASA Astrophysics Data System (ADS)

    Shankayi, M.; Saadatseresht, M.; Bitetto, M. A. V.

    2015-12-01

    There has always been a speed/accuracy challenge in the photogrammetric mapping process, including feature detection and matching. Most research has improved algorithm speed through simplifications or software modifications at the cost of accuracy in the image matching process. This research tries to improve speed without sacrificing the accuracy of the same algorithm, using neuromorphic techniques. In this research we have developed a general design of a neuromorphic ASIC to handle algorithms such as SIFT. We have also investigated the neural assignment in each step of the SIFT algorithm. With a rough estimation based on the delay of the elements used, including MAC and comparator units, we have estimated the resulting chip's performance for three scenarios: Full HD video (videogrammetry), 24 MP (UAV photogrammetry), and an 88 MP image sequence. Our estimates led to approximately 3000 fps for Full HD video, 250 fps for the 24 MP image sequence and 68 fps for the 88 MP UltraCam image sequence, which would be a huge improvement for current photogrammetric processing systems. We also estimated a power consumption of less than 10 watts, which is not comparable to current workflows.

  10. Transcription profile of boar spermatozoa as revealed by RNA-sequencing

    USDA-ARS?s Scientific Manuscript database

    High-throughput RNA sequencing (RNA-Seq) overcomes the limitations of the current hybridization-based techniques to detect the actual pool of RNA transcripts in spermatozoa. The application of this technology in livestock can speed the discovery of potential predictors of male fertility. As a first ...

  11. Time-stretch microscopy based on time-wavelength sequence reconstruction from wideband incoherent source

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chi, E-mail: chizheung@gmail.com; Xu, Yiqing; Wei, Xiaoming

    2014-07-28

    Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband and coherent optical pulse source with high shot-to-shot stability has been required for time-wavelength mapping, the enabling process for ultrahigh-speed wavelength-encoded image retrieval. From the practical point of view, exploiting methods to relax the stringent requirements (e.g., temporal stability and coherence) on the source for time-stretch microscopy is thus of great value. In this paper, we demonstrate time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source, with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging with a scan rate as high as MHz and diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to any high-speed optical real-time measurement where wavelength acts as the information carrier.

  12. Using high-throughput barcode sequencing to efficiently map connectomes

    PubMed Central

    Peikon, Ian D.; Kebschull, Justus M.; Vagin, Vasily V.; Ravens, Diana I.; Sun, Yu-Chi; Brouzes, Eric; Corrêa, Ivan R.; Bressan, Dario

    2017-01-01

    Abstract The function of a neural circuit is determined by the details of its synaptic connections. At present, the only available method for determining a neural wiring diagram with single synapse precision—a ‘connectome’—is based on imaging methods that are slow, labor-intensive and expensive. Here, we present SYNseq, a method for converting the connectome into a form that can exploit the speed and low cost of modern high-throughput DNA sequencing. In SYNseq, each neuron is labeled with a unique random nucleotide sequence—an RNA ‘barcode’—which is targeted to the synapse using engineered proteins. Barcodes in pre- and postsynaptic neurons are then associated through protein-protein crosslinking across the synapse, extracted from the tissue, and joined into a form suitable for sequencing. Although our failure to develop an efficient barcode joining scheme precludes the widespread application of this approach, we expect that with further development SYNseq will enable tracing of complex circuits at high speed and low cost. PMID:28449067

  13. Next-Generation Technologies for Multiomics Approaches Including Interactome Sequencing

    PubMed Central

    Ohashi, Hiroyuki; Miyamoto-Sato, Etsuko

    2015-01-01

    The development of high-speed analytical techniques such as next-generation sequencing and microarrays allows high-throughput analysis of biological information at a low cost. These techniques contribute to medical and bioscience advancements and provide new avenues for scientific research. Here, we outline a variety of new innovative techniques and discuss their use in omics research (e.g., genomics, transcriptomics, metabolomics, proteomics, and interactomics). We also discuss the possible applications of these methods, including an interactome sequencing technology that we developed, in future medical and life science research. PMID:25649523

  14. Pulse-burst PIV in a high-speed wind tunnel

    NASA Astrophysics Data System (ADS)

    Beresh, Steven; Kearney, Sean; Wagner, Justin; Guildenbecher, Daniel; Henfling, John; Spillers, Russell; Pruett, Brian; Jiang, Naibo; Slipchenko, Mikhail; Mance, Jason; Roy, Sukesh

    2015-09-01

    Time-resolved particle image velocimetry (TR-PIV) has been achieved in a high-speed wind tunnel, providing velocity field movies of compressible turbulence events. The requirements of high-speed flows demand greater energy at faster pulse rates than possible with the TR-PIV systems developed for low-speed flows. This has been realized using a pulse-burst laser to obtain movies at up to 50 kHz, with higher speeds possible at the cost of spatial resolution. The constraints imposed by use of a pulse-burst laser are limited burst duration of 10.2 ms and a low duty cycle for data acquisition. Pulse-burst PIV has been demonstrated in a supersonic jet exhausting into a transonic crossflow and in transonic flow over a rectangular cavity. The velocity field sequences reveal the passage of turbulent structures and can be used to find velocity power spectra at every point in the field, providing spatial distributions of acoustic modes. The present work represents the first use of TR-PIV in a high-speed ground-test facility.

  15. High speed clinical data retrieval system with event time sequence feature: with 10 years of clinical data of Hamamatsu University Hospital CPOE.

    PubMed

    Kimura, M; Tani, S; Watanabe, H; Naito, Y; Sakusabe, T; Watanabe, H; Nakaya, J; Sasaki, F; Numano, T; Furuta, T; Furuta, T

    2008-01-01

    This paper describes a high-speed clinical data retrieval system, drawing on 10 years of data from an operating hospital information system, for purposes such as research, evidence creation, and patient safety, even incorporating the time sequence of causal relations. A total of 73,709,298 records covering 10 years at Hamamatsu University Hospital (as of June 2008) were sent from the HIS to the retrieval system in HL7 v2.5 format. A hierarchical variable-length database is used to store them. A search for "listing patients who were prescribed Pravastatin (Mevalotin and generic drugs, any titer)" took 1.92 seconds. "Pravastatin (any) prescribed and recorded AST >150 within two weeks" took 112.22 seconds. Search conditions can be made more complex by combining them with the Boolean operators and/or. This system, called D*D, has been in operation at Hamamatsu University Hospital since August 2002. It has been used 48,518 times (a monthly average of 703 searches). Neither searching nor the background export of data from the HIS caused delays in the routine operation of the CPOE. A search database outside the routine CPOE, fed by daily export of order data in HL7 v2.5 format, has proved to provide an excellent search environment without causing trouble. The hierarchical representation gives high-speed search responses, especially for time sequences of events.

  16. High-repetition-rate interferometric Rayleigh scattering for flow-velocity measurements

    NASA Astrophysics Data System (ADS)

    Estevadeordal, Jordi; Jiang, Naibo; Cutler, Andrew D.; Felver, Josef J.; Slipchenko, Mikhail N.; Danehy, Paul M.; Gord, James R.; Roy, Sukesh

    2018-03-01

    High-repetition-rate interferometric-Rayleigh-scattering (IRS) velocimetry is demonstrated for non-intrusive, high-speed flow-velocity measurements. High temporal resolution is obtained with a quasi-continuous burst-mode laser that is capable of operating at 10-100 kHz, providing 10-ms bursts with pulse widths of 5-1000 ns and pulse energy > 100 mJ at 532 nm. Coupled with a high-speed camera system, the IRS method is based on imaging the flow field through an etalon with 8-GHz free spectral range and capturing the Doppler shift of the Rayleigh-scattered light from the flow at multiple points having constructive interference. The seed-laser linewidth permits a laser linewidth of < 150 MHz at 532 nm. The technique is demonstrated in a high-speed jet, and high-repetition-rate image sequences are shown.

  17. Ultrahigh-speed X-ray imaging of hypervelocity projectiles

    NASA Astrophysics Data System (ADS)

    Miller, Stuart; Singh, Bipin; Cool, Steven; Entine, Gerald; Campbell, Larry; Bishel, Ron; Rushing, Rick; Nagarkar, Vivek V.

    2011-08-01

    High-speed X-ray imaging is an extremely important modality for healthcare, industrial, military and research applications such as medical computed tomography, non-destructive testing, imaging in-flight projectiles, characterizing exploding ordnance, and analyzing ballistic impacts. We report on the development of a modular, ultrahigh-speed, high-resolution digital X-ray imaging system with large active imaging area and microsecond time resolution, capable of acquiring at a rate of up to 150,000 frames per second. The system is based on a high-resolution, high-efficiency, and fast-decay scintillator screen optically coupled to an ultra-fast image-intensified CCD camera designed for ballistic impact studies and hypervelocity projectile imaging. A specially designed multi-anode, high-fluence X-ray source with 50 ns pulse duration provides a sequence of blur-free images of hypervelocity projectiles traveling at speeds exceeding 8 km/s (18,000 miles/h). This paper will discuss the design, performance, and high frame rate imaging capability of the system.

  18. New Laboratory Observations of Thermal Pressurization Weakening

    NASA Astrophysics Data System (ADS)

    Badt, N.; Tullis, T. E.; Hirth, G.

    2017-12-01

    Dynamic frictional weakening due to pore fluid thermal pressurization has been studied under elevated confining pressure in the laboratory, using a rotary-shear apparatus having a sample with independent pore pressure and confining pressure systems. Thermal pressurization is directly controlled by the permeability of the rocks, not only for the initiation of high-speed frictional weakening but also for a subsequent sequence of high-speed sliding events. First, the permeability is evaluated at different effective pressures using a method where the pore pressure drop and the flow-through rate are compared using Darcy's Law, as well as a pore fluid oscillation method, the latter also permitting measurement of the storage capacity. Then, the samples undergo a series of high-speed frictional sliding segments at a velocity of 2.5 mm/s, under an applied confining pressure and normal stress of 45 MPa and 50 MPa, respectively, and an initial pore pressure of 25 MPa. Finally, the rock permeability and storage capacity are measured again to assess the evolution of the rock's pore fluid properties. For samples with a permeability of 10⁻²⁰ m², thermal pressurization promotes a 40% decrease in strength. However, after a sequence of three high-speed sliding events, the magnitude of weakening diminishes progressively from 40% to 15%. The weakening events coincide with dilation of the sliding interface. Moreover, the decrease in the degree of weakening with progressive fast-slip events suggests that the hydraulic diffusivity may increase locally near the sliding interface during thermal pressurization-enhanced slip. This could result from stress- or thermally-induced damage to the host rock, which would perhaps increase both permeability and storage capacity, and so possibly decrease the susceptibility to dynamic weakening due to thermal pressurization in subsequent high-speed sliding events.
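
    As a worked example of the Darcy's-law permeability estimate mentioned above (with entirely hypothetical numbers, not the authors' data), the sketch below computes k = Q·μ·L / (A·ΔP) for a small cylindrical core and lands in the 10⁻²⁰ m² range quoted in the abstract.

```python
import math

def darcy_permeability(flow_rate_m3_s, viscosity_pa_s, length_m, area_m2, dp_pa):
    """Permeability k (m^2) from Darcy's law: Q = k * A * dP / (mu * L)."""
    return flow_rate_m3_s * viscosity_pa_s * length_m / (area_m2 * dp_pa)

if __name__ == "__main__":
    # Hypothetical flow-through test on a 25 mm diameter, 20 mm long core.
    area = math.pi * (0.025 / 2) ** 2            # cross-sectional area, m^2
    k = darcy_permeability(flow_rate_m3_s=5e-13,
                           viscosity_pa_s=1e-3,  # water at ~20 C
                           length_m=0.02,
                           area_m2=area,
                           dp_pa=1e6)            # 1 MPa pore-pressure drop
    print(f"k = {k:.2e} m^2")                    # ~2e-20 m^2
```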

  19. High rate concatenated coding systems using bandwidth efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.

  20. High Speed Intensified Video Observations of TLEs in Support of PhOCAL

    NASA Technical Reports Server (NTRS)

    Lyons, Walter A.; Nelson, Thomas E.; Cummer, Steven A.; Lang, Timothy; Miller, Steven; Beavis, Nick; Yue, Jia; Samaras, Tim; Warner, Tom A.

    2013-01-01

    The third observing season of PhOCAL (Physical Origins of Coupling to the upper Atmosphere by Lightning) was conducted over the U.S. High Plains during the late spring and summer of 2013. The goal was to capture, using an intensified high-speed camera, a transient luminous event (TLE), especially a sprite, as well as its parent cloud-to-ground (SP+CG) lightning discharge, preferably within the domain of a 3-D lightning mapping array (LMA). The co-capture of a sprite and its SP+CG was achieved within useful range of an interferometer operating near Rapid City. Other high-speed sprite video sequences were captured above the West Texas LMA. On several occasions the large mesoscale convective complexes (MCSs) producing the TLE-class lightning were also generating vertically propagating convectively generated gravity waves (CGGWs) at the mesopause, which were easily visible using NIR-sensitive color cameras. These were captured concurrently with sprites. These observations were follow-ons to a case on 15 April 2012 in which CGGWs were also imaged by the new Day/Night Band on the Suomi NPP satellite system. The relationship between the CGGWs and sprite initiation is being investigated. The past year was notable for a large number of elve+halo+sprite sequences generated by the same parent CG. On several occasions there appeared to be prominent banded modulations of the elves' luminosity imaged at >3000 ips. These stripes appear coincident with the banded CGGW structure, and presumably its density variations. Several elves and a sprite from negative CGs were also noted. New color imaging systems have been tested and found capable of capturing sprites. Two cases of sprites with an aurora as a backdrop were also recorded. High-speed imaging was also provided in support of the UPLIGHTS program near Rapid City, SD and the USAFA SPRITES II airborne campaign over the Great Plains.

  1. ISRNA: an integrative online toolkit for short reads from high-throughput sequencing data.

    PubMed

    Luo, Guan-Zheng; Yang, Wei; Ma, Ying-Ke; Wang, Xiu-Jie

    2014-02-01

    Integrative Short Reads NAvigator (ISRNA) is an online toolkit for analyzing high-throughput small RNA sequencing data. Besides the high-speed genome mapping function, ISRNA provides statistics for genomic location, length distribution and nucleotide composition bias analysis of sequence reads. Number of reads mapped to known microRNAs and other classes of short non-coding RNAs, coverage of short reads on genes, expression abundance of sequence reads as well as some other analysis functions are also supported. The versatile search functions enable users to select sequence reads according to their sub-sequences, expression abundance, genomic location, relationship to genes, etc. A specialized genome browser is integrated to visualize the genomic distribution of short reads. ISRNA also supports management and comparison among multiple datasets. ISRNA is implemented in Java/C++/Perl/MySQL and can be freely accessed at http://omicslab.genetics.ac.cn/ISRNA/.

  2. MALDI Top-Down sequencing: calling N- and C-terminal protein sequences with high confidence and speed.

    PubMed

    Suckau, Detlev; Resemann, Anja

    2009-12-01

    The ability to match Top-Down protein sequencing (TDS) results obtained by MALDI-TOF to protein sequences by classical protein database searching was evaluated in this work. These analyses yielded the protein identity, the simultaneous assignment of the N- and C-termini, and protein sequences of up to 70 residues from either terminus. In combination with de novo sequencing using the MALDI-TDS data, even fusion proteins were assigned and the detailed sequence around the fusion site was elucidated. MALDI-TDS allowed protein sequences to be matched quickly and efficiently and recombinant protein structures, in particular protein termini, to be validated at the level of undigested proteins.

  3. Car-Crash Experiment for the Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Ball, Penny L.; And Others

    1974-01-01

    Describes an interesting, inexpensive, and highly motivating experiment to study uniform and accelerated motion by measuring the position of a car as it crashes into a rigid wall. Data are obtained from a sequence of pictures made by a high speed camera. (Author/SLH)

  4. Kinematics and spectra of planetary nebulae with O VI-sequence nuclei

    NASA Technical Reports Server (NTRS)

    Johnson, H. M.

    1976-01-01

    Spectral features of NGC 5189 and NGC 6905 are tabulated. Fabry-Perot profiles around H alpha and O III lambda 5007 of NGC 5189, NGC 6905, NGC 246, and NGC 1535 are illustrated. The latter planetary nebula is a non-O VI-sequence comparison object of high excitation. The kinematics of the four planetary nebulae are simply analyzed. Discussion of these data is motivated by the possibility of collisional excitation by high-speed ejecta from broad-lined O VI-sequence nuclei, and by the opportunity to make a comparison with conditions in the supernova remnant or ring nebula, G2.4 + 1.4, which contains an O VI-sequence nucleus of Population I.

  5. A high-speed BCI based on code modulation VEP

    NASA Astrophysics Data System (ADS)

    Bin, Guangyu; Gao, Xiaorong; Wang, Yijun; Li, Yun; Hong, Bo; Gao, Shangkai

    2011-04-01

    Recently, electroencephalogram-based brain-computer interfaces (BCIs) have attracted much attention in the fields of neural engineering and rehabilitation due to their noninvasiveness. However, the low communication speed of current BCI systems greatly limits their practical application. In this paper, we present a high-speed BCI based on code modulation of visual evoked potentials (c-VEP). Thirty-two target stimuli were modulated by a time-shifted binary pseudorandom sequence. A multichannel identification method based on canonical correlation analysis (CCA) was used for target identification. The online system achieved an average information transfer rate (ITR) of 108 ± 12 bits min-1 on five subjects with a maximum ITR of 123 bits min-1 for a single subject.
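
    As a much-simplified, single-channel illustration of the identification step (the actual system uses multichannel canonical correlation analysis), the hedged sketch below correlates an EEG epoch against circularly shifted copies of a code template and picks the shift, i.e. the target, with the highest correlation. The template, shift step, noise level and target count are synthetic assumptions.

```python
import numpy as np

def identify_target(epoch, template, n_targets):
    """Pick the time-shift (target index) whose shifted template best matches the epoch."""
    shift_step = len(template) // n_targets
    scores = []
    for k in range(n_targets):
        shifted = np.roll(template, k * shift_step)
        scores.append(np.corrcoef(epoch, shifted)[0, 1])
    return int(np.argmax(scores)), scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.choice([-1.0, 1.0], size=512)   # stands in for the code response
    true_target = 7
    epoch = np.roll(template, true_target * (512 // 32)) + 0.5 * rng.standard_normal(512)
    print(identify_target(epoch, template, n_targets=32)[0])   # expect 7
```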

  6. Flexbar 3.0 - SIMD and multicore parallelization.

    PubMed

    Roehr, Johannes T; Dieterich, Christoph; Reinert, Knut

    2017-09-15

    High-throughput sequencing machines can process many samples in a single run. For Illumina systems, sequencing reads are barcoded with an additional DNA tag that is contained in the respective sequencing adapters. The recognition of barcode and adapter sequences is hence commonly needed for the analysis of next-generation sequencing data. Flexbar performs demultiplexing based on barcodes and adapter trimming for such data. The massive amounts of data generated on modern sequencing machines demand that this preprocessing is done as efficiently as possible. We present Flexbar 3.0, the successor of the popular program Flexbar. It employs now twofold parallelism: multi-threading and additionally SIMD vectorization. Both types of parallelism are used to speed-up the computation of pair-wise sequence alignments, which are used for the detection of barcodes and adapters. Furthermore, new features were included to cover a wide range of applications. We evaluated the performance of Flexbar based on a simulated sequencing dataset. Our program outcompetes other tools in terms of speed and is among the best tools in the presented quality benchmark. https://github.com/seqan/flexbar. johannes.roehr@fu-berlin.de or knut.reinert@fu-berlin.de. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
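
    Flexbar itself detects barcodes and adapters with pair-wise alignment accelerated by SIMD vectorization and multi-threading; the hedged sketch below only illustrates the two preprocessing steps named in the abstract, exact-match barcode demultiplexing and 3' adapter trimming, on toy reads. The barcode table, adapter sequence and reads are assumptions.

```python
def demultiplex_and_trim(reads, barcodes, adapter, min_overlap=3):
    """Assign each read to a sample by its leading barcode, then trim a 3' adapter."""
    out = {sample: [] for sample in barcodes.values()}
    out["undetermined"] = []
    for read in reads:
        sample = "undetermined"
        for bc, name in barcodes.items():
            if read.startswith(bc):            # exact-match barcode at the 5' end
                sample, read = name, read[len(bc):]
                break
        # trim the longest adapter prefix (>= min_overlap bases) found at the 3' end
        for k in range(len(adapter), min_overlap - 1, -1):
            if read.endswith(adapter[:k]):
                read = read[:len(read) - k]
                break
        out[sample].append(read)
    return out

if __name__ == "__main__":
    barcodes = {"ACGT": "sample1", "TTAG": "sample2"}
    adapter = "AGATCGGAAGAGC"                  # common Illumina adapter prefix
    reads = ["ACGTGGGCCCTTTAGATCGGAAGAGC", "TTAGAAACCCAGAT", "GGGGCCCC"]
    print(demultiplex_and_trim(reads, barcodes, adapter))
```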

  7. [The principle and application of the single-molecule real-time sequencing technology].

    PubMed

    Yanhu, Liu; Lu, Wang; Li, Yu

    2015-03-01

    The last decade witnessed the explosive development of third-generation sequencing strategies, including single-molecule real-time sequencing (SMRT), true single-molecule sequencing (tSMS) and single-molecule nanopore DNA sequencing. In this review, we summarize the principle, performance and application of the SMRT sequencing technology. Compared with the traditional Sanger method and the next-generation sequencing (NGS) technologies, the SMRT approach has several advantages, including long read length, high speed, PCR-free operation and the capability of direct detection of epigenetic modifications. However, its low raw-read accuracy, resulting mostly from insertions and deletions, is a notable disadvantage, so raw sequence data need to be error-corrected before assembly. To date, SMRT sequencing is a good fit for de novo genome sequencing and high-quality assemblies of small genomes. In the future, it is expected to play an important role in epigenetics, transcriptomic sequencing, and assemblies of large genomes.

  8. Solid-state NMR imaging system

    DOEpatents

    Gopalsami, Nachappa; Dieckman, Stephen L.; Ellingson, William A.

    1992-01-01

    An apparatus for use with a solid-state NMR spectrometer includes a special imaging probe with linear, high-field-strength gradient fields and high-power broadband RF coils, using a back-projection method for data acquisition and image reconstruction, and a real-time pulse programmer adaptable for use by a conventional computer for complex high-speed pulse sequences.
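    To illustrate the back-projection reconstruction idea in general terms (this is not the patented system), the sketch below forms projections of a test object over a set of angles and reconstructs it with filtered back-projection; the phantom, angle set and library calls are illustrative choices.

    ```python
    # Generic filtered back-projection sketch, standing in for projection-based
    # NMR image reconstruction.
    import numpy as np
    from skimage.data import shepp_logan_phantom
    from skimage.transform import radon, iradon, rescale

    image = rescale(shepp_logan_phantom(), 0.25)           # small test object
    angles = np.linspace(0.0, 180.0, 60, endpoint=False)   # projection (gradient) angles, degrees
    sinogram = radon(image, theta=angles)                  # one 1D projection per angle
    reconstruction = iradon(sinogram, theta=angles)        # filtered back-projection
    print(reconstruction.shape)
    ```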

  9. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  10. Sequence of the Essex-Lopresti lesion—a high-speed video documentation and kinematic analysis

    PubMed Central

    2014-01-01

    Background and purpose The pathomechanics of the Essex-Lopresti lesion are not fully understood. We used human cadavers and documented the genesis of the injury with high-speed cameras. Methods 4 formalin-fixed cadaveric specimens of human upper extremities were tested in a prototype, custom-made, drop-weight test bench. An axial high-energy impulse was applied and the development of the lesion was documented with 3 high-speed cameras. Results The high-speed images showed a transverse movement of the radius and ulna, which moved away from each other in the transverse plane during the impact. This resulted in a transverse rupture of the interosseous membrane, starting in its central portion, and only then did the radius migrate proximally and fracture. The lesion proceeded to the dislocation of the distal radio-ulnar joint and then to a full-blown Essex-Lopresti lesion. Interpretation Our findings indicate that fracture of the radial head may be preceded by at least partial lesions of the interosseous membrane in the course of high-energy axial trauma. PMID:24479620

  11. Reduced randomness in quantum cryptography with sequences of qubits encoded in the same basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamoureux, L.-P.; Cerf, N. J.; Bechmann-Pasquinucci, H.

    2006-03-15

    We consider the cloning of sequences of qubits prepared in the states used in the BB84 or six-state quantum cryptography protocols, and show that the single-qubit fidelity is unaffected even if entire sequences of qubits are prepared in the same basis. This result is only valid provided that the sequences are much shorter than the total key. It is of great importance for practical quantum cryptosystems because it reduces the need for high-speed random number generation without impairing the security against finite-size cloning attacks.

  12. High-speed imaging using compressed sensing and wavelength-dependent scattering (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shin, Jaewook; Bosworth, Bryan T.; Foster, Mark A.

    2017-02-01

    The process of multiple scattering has inherent characteristics that are attractive for high-speed imaging with high spatial resolution and a wide field-of-view. A coherent source passing through a multiple-scattering medium naturally generates speckle patterns with diffraction-limited features over an arbitrarily large field-of-view. In addition, the process of multiple scattering is deterministic allowing a given speckle pattern to be reliably reproduced with identical illumination conditions. Here, by exploiting wavelength dependent multiple scattering and compressed sensing, we develop a high-speed 2D time-stretch microscope. Highly chirped pulses from a 90-MHz mode-locked laser are sent through a 2D grating and a ground-glass diffuser to produce 2D speckle patterns that rapidly evolve with the instantaneous frequency of the chirped pulse. To image a scene, we first characterize the high-speed evolution of the generated speckle patterns. Subsequently we project the patterns onto the microscopic region of interest and collect the total light from the scene using a single high-speed photodetector. Thus the wavelength dependent speckle patterns serve as high-speed pseudorandom structured illumination of the scene. An image sequence is then recovered using the time-dependent signal received by the photodetector, the known speckle pattern evolution, and compressed sensing algorithms. Notably, the use of compressed sensing allows for reconstruction of a time-dependent scene using a highly sub-Nyquist number of measurements, which both increases the speed of the imager and reduces the amount of data that must be collected and stored. We will discuss our experimental demonstration of this approach and the theoretical limits on imaging speed.
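    The reconstruction step can be sketched in a few lines: if each calibrated speckle pattern is flattened into a row of a sensing matrix and the photodetector supplies one measurement per pattern, a sparse scene can be recovered from far fewer measurements than pixels. The sketch below uses an l1 (Lasso) solver as a generic stand-in for the compressed sensing algorithm; the scene size, sparsity and sensing matrix are synthetic assumptions.

    ```python
    # Sketch of compressed-sensing recovery from single-detector measurements.
    # Rows of A play the role of calibrated speckle illumination patterns; y is the
    # photodetector time series.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(1)
    n_pixels, n_meas = 256, 100                  # 16x16 scene, sub-Nyquist measurements
    scene = np.zeros(n_pixels)
    scene[rng.choice(n_pixels, 8, replace=False)] = 1.0   # sparse test scene

    A = rng.standard_normal((n_meas, n_pixels))  # flattened "speckle patterns"
    y = A @ scene                                # total light on the photodetector

    solver = Lasso(alpha=0.01, max_iter=10000)
    solver.fit(A, y)
    recovered = solver.coef_.reshape(16, 16)
    print(np.count_nonzero(recovered > 0.5))     # roughly recovers the 8 bright pixels
    ```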

  13. High-speed large angle mammography tomosynthesis system

    NASA Astrophysics Data System (ADS)

    Eberhard, Jeffrey W.; Staudinger, Paul; Smolenski, Joe; Ding, Jason; Schmitz, Andrea; McCoy, Julie; Rumsey, Michael; Al-Khalidy, Abdulrahman; Ross, William; Landberg, Cynthia E.; Claus, Bernhard E. H.; Carson, Paul; Goodsitt, Mitchell; Chan, Heang-Ping; Roubidoux, Marilyn; Thomas, Jerry A.; Osland, Jacqueline

    2006-03-01

    A new mammography tomosynthesis prototype system that acquires 21 projection images over a 60 degree angular range in approximately 8 seconds has been developed and characterized. Fast imaging sequences are facilitated by a high power tube and generator for faster delivery of the x-ray exposure and a high speed detector read-out. An enhanced a-Si/CsI flat panel digital detector provides greater DQE at low exposure, enabling tomo image sequence acquisitions at total patient dose levels between 150% and 200% of the dose of a standard mammographic view. For clinical scenarios where a single MLO tomographic acquisition per breast may replace the standard CC and MLO views, total tomosynthesis breast dose is comparable to or below the dose in standard mammography. The system supports co-registered acquisition of x-ray tomosynthesis and 3-D ultrasound data sets by incorporating an ultrasound transducer scanning system that flips into position above the compression paddle for the ultrasound exam. Initial images acquired with the system are presented.

  14. High Speed Videometric Monitoring of Rock Breakage

    NASA Astrophysics Data System (ADS)

    Allemand, J.; Shortis, M. R.; Elmouttie, M. K.

    2018-05-01

    Estimation of rock breakage characteristics plays an important role in optimising various industrial and mining processes used for rock comminution. Although little research has been undertaken into 3D photogrammetric measurement of the progeny kinematics, there is promising potential to improve the efficacy of rock breakage characterisation. In this study, the observation of progeny kinematics was conducted using a high speed, stereo videometric system based on laboratory experiments with a drop weight impact testing system. By manually tracking individual progeny through the captured video sequences, observed progeny coordinates can be used to determine 3D trajectories and velocities, supporting the idea that high speed video can be used for rock breakage characterisation purposes. An analysis of the results showed that the high speed videometric system successfully observed progeny trajectories and showed clear projection of the progeny away from the impact location. Velocities of the progeny could also be determined based on the trajectories and the video frame rate. These results were obtained despite the limitations of the photogrammetric system and experiment processes observed in this study. Accordingly there is sufficient evidence to conclude that high speed videometric systems are capable of observing progeny kinematics from drop weight impact tests. With further optimisation of the systems and processes used, there is potential for improving the efficacy of rock breakage characterisation from measurements with high speed videometric systems.
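    The velocity estimate described above reduces to differentiating the tracked 3D positions with respect to time using the known frame rate, as the sketch below shows on synthetic tracking data; the frame rate and coordinates are illustrative, not values from the experiments.

    ```python
    # Sketch: progeny speed from tracked 3D coordinates and the video frame rate.
    # `track` holds (x, y, z) positions in metres for one fragment, one row per frame.
    import numpy as np

    frame_rate = 5000.0                        # frames per second (hypothetical)
    track = np.array([[0.000, 0.000, 0.000],
                      [0.002, 0.001, 0.000],
                      [0.004, 0.003, 0.001],
                      [0.007, 0.005, 0.001]])  # example tracked positions (m)

    dt = 1.0 / frame_rate
    velocity = np.gradient(track, dt, axis=0)  # m/s per component
    speed = np.linalg.norm(velocity, axis=1)   # scalar speed per frame
    print(speed)
    ```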

  15. BBMerge – Accurate paired shotgun read merging via overlap

    DOE PAGES

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    2017-10-26

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.
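    A toy sketch of the basic overlap-merging step is shown below (it is not BBMerge): the second read is reverse-complemented, and the longest high-identity overlap between the end of read 1 and the start of the reverse-complemented read 2 determines the merged fragment. The thresholds and example reads are illustrative, and a real merger also weighs base qualities.

    ```python
    # Toy overlap-based merging of a read pair (illustrative only).
    def revcomp(seq):
        return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

    def merge_pair(read1, read2, min_overlap=8, min_identity=0.9):
        r2 = revcomp(read2)
        for ov in range(min(len(read1), len(r2)), min_overlap - 1, -1):
            a, b = read1[-ov:], r2[:ov]
            matches = sum(x == y for x, y in zip(a, b))
            if matches / ov >= min_identity:
                return read1 + r2[ov:]        # merged fragment
        return None                           # no acceptable overlap found

    print(merge_pair("ACGTACGTTTGG", "TTGGCCAAACGT"))  # -> ACGTACGTTTGGCCAA
    ```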

  16. BBMerge – Accurate paired shotgun read merging via overlap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bushnell, Brian; Rood, Jonathan; Singer, Esther

    Merging paired-end shotgun reads generated on high-throughput sequencing platforms can substantially improve various subsequent bioinformatics processes, including genome assembly, binning, mapping, annotation, and clustering for taxonomic analysis. With the inexorable growth of sequence data volume and CPU core counts, the speed and scalability of read-processing tools becomes ever-more important. The accuracy of shotgun read merging is crucial as well, as errors introduced by incorrect merging percolate through to reduce the quality of downstream analysis. Thus, we designed a new tool to maximize accuracy and minimize processing time, allowing the use of read merging on larger datasets, and in analyses highly sensitive to errors. We present BBMerge, a new merging tool for paired-end shotgun sequence data. We benchmark BBMerge by comparison with eight other widely used merging tools, assessing speed, accuracy and scalability. Evaluations of both synthetic and real-world datasets demonstrate that BBMerge produces merged shotgun reads with greater accuracy and at higher speed than any existing merging tool examined. BBMerge also provides the ability to merge non-overlapping shotgun read pairs by using k-mer frequency information to assemble the unsequenced gap between reads, achieving a significantly higher merge rate while maintaining or increasing accuracy.

  17. The effect of session order on the physiological, neuromuscular, and endocrine responses to maximal speed and weight training sessions over a 24-h period.

    PubMed

    Johnston, Michael; Johnston, Julia; Cook, Christian J; Costley, Lisa; Kilgallon, Mark; Kilduff, Liam P

    2017-05-01

    Athletes are often required to undertake multiple training sessions on the same day, and these sessions need to be sequenced correctly to allow the athlete to maximize the responses to each session. We examined the acute effect of strength and speed training sequence on neuromuscular, endocrine, and physiological responses over 24 h. 15 academy rugby union players completed this randomized crossover study. Players performed a weight training session followed 2 h later by a speed training session (weights-speed) and on a separate day reversed the order (speed-weights). Countermovement jumps, perceived muscle soreness, and blood samples were collected immediately prior, immediately post, and 24 h post-sessions one and two, respectively. Jumps were analyzed for power, jump height, rate of force development, and velocity. Blood was analyzed for testosterone, cortisol, lactate and creatine kinase. There were no differences between countermovement jump variables at any of the post-training time points (p>0.05). Likewise, creatine kinase, testosterone, cortisol, and muscle soreness were unaffected by session order (p>0.05). However, 10 m sprint time was significantly faster (mean±standard deviation; speed-weights 1.80±0.11 s versus weights-speed 1.76±0.08 s; p<0.05) when speed was sequenced second. Lactate levels were significantly higher immediately post-speed sessions versus weight training sessions at both time points (p<0.05). The sequencing of strength and speed training does not affect neuromuscular, endocrine, and physiological recovery over 24 h. However, speed may be enhanced when performed as the second session. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  18. A distributed system for fast alignment of next-generation sequencing data.

    PubMed

    Srimani, Jaydeep K; Wu, Po-Yen; Phan, John H; Wang, May D

    2010-12-01

    We developed a scalable distributed computing system using the Berkeley Open Interface for Network Computing (BOINC) to align next-generation sequencing (NGS) data quickly and accurately. NGS technology is emerging as a promising platform for gene expression analysis due to its high sensitivity compared to traditional genomic microarray technology. However, despite the benefits, NGS datasets can be prohibitively large, requiring significant computing resources to obtain sequence alignment results. Moreover, as the data and alignment algorithms become more prevalent, it will become necessary to examine the effect of the multitude of alignment parameters on various NGS systems. We validate the distributed software system by (1) computing simple timing results to show the speed-up gained by using multiple computers, (2) optimizing alignment parameters using simulated NGS data, and (3) computing NGS expression levels for a single biological sample using optimal parameters and comparing these expression levels to that of a microarray sample. Results indicate that the distributed alignment system achieves approximately a linear speed-up and correctly distributes sequence data to and gathers alignment results from multiple compute clients.

  19. Ship Speed Retrieval From Single Channel TerraSAR-X Data

    NASA Astrophysics Data System (ADS)

    Soccorsi, Matteo; Lehner, Susanne

    2010-04-01

    A method to estimate the speed of a moving ship is presented. The technique, introduced in Kirscht (1998), is extended to marine applications and validated on TerraSAR-X High-Resolution (HR) data. The generation of a sequence of single-look SAR images from a single-channel image corresponds to an image time series with reduced resolution. This allows applying change detection techniques to the time series to evaluate the velocity components in range and azimuth of the ship. The evaluation of the displacement vector of a moving target in consecutive images of the sequence allows the estimation of the azimuth velocity component. The range velocity component is estimated by evaluating the variation of the signal amplitude during the sequence. In order to apply the technique to TerraSAR-X Spot Light (SL) data a further processing step is needed. The phase has to be corrected as presented in Eineder et al. (2009) due to the SL acquisition mode; otherwise the image sequence cannot be generated. The analysis, where possible validated by the Automatic Identification System (AIS), was performed in the framework of the ESA project MARISS.
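    The azimuth velocity estimate amounts to tracking the target's displacement across consecutive single-look images and dividing by the time between looks, as sketched below; the pixel spacing, look interval and positions are made-up values, not TerraSAR-X parameters.

    ```python
    # Sketch of the azimuth-velocity estimate from tracked positions in a
    # single-look image sequence (illustrative values only).
    import numpy as np

    azimuth_pixel_spacing = 1.0        # metres per pixel (illustrative)
    look_interval = 0.15               # seconds between consecutive single-look images

    # Azimuth pixel position of the ship in consecutive looks (e.g. from peak tracking).
    positions_px = np.array([120.0, 121.2, 122.5, 123.6])

    displacement_m = np.diff(positions_px) * azimuth_pixel_spacing
    azimuth_velocity = displacement_m / look_interval      # m/s between looks
    print(azimuth_velocity.mean())
    ```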

  20. Integrated on-line system for DNA sequencing by capillary electrophoresis: From template to called bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ton, H.; Yeung, E.S.

    1997-02-15

    An integrated on-line prototype for coupling a microreactor to capillary electrophoresis for DNA sequencing has been demonstrated. A dye-labeled terminator cycle-sequencing reaction is performed in a fused-silica capillary. Subsequently, the sequencing ladder is directly injected into a size-exclusion chromatographic column operated at nearly 95°C for purification. On-line injection to a capillary for electrophoresis is accomplished at a junction set at nearly 70°C. High temperature at the purification column and injection junction prevents the renaturation of DNA fragments during on-line transfer without affecting the separation. The high solubility of DNA in and the relatively low ionic strength of 1 x TE buffer permit both effective purification and electrokinetic injection of the DNA sample. The system is compatible with highly efficient separations by a replaceable poly(ethylene oxide) polymer solution in uncoated capillary tubes. Future automation and adaptation to a multiple-capillary array system should allow high-speed, high-throughput DNA sequencing from templates to called bases in one step. 32 refs., 5 figs.

  1. A novel approach to multiple sequence alignment using hadoop data grids.

    PubMed

    Sudha Sadasivam, G; Baktavatchalam, G

    2010-01-01

    Multiple alignment of protein sequences helps to determine evolutionary linkage and to predict molecular structures. The factors to be considered while aligning multiple sequences are speed and accuracy of alignment. Although dynamic programming algorithms produce accurate alignments, they are computation intensive. In this paper we propose a time-efficient approach to sequence alignment that also produces quality alignment. The dynamic nature of the algorithm coupled with the data and computational parallelism of Hadoop data grids improves the accuracy and speed of sequence alignment. The principle of block splitting in Hadoop coupled with its scalability facilitates alignment of very large sequences.

  2. HMM-ModE: implementation, benchmarking and validation with HMMER3

    PubMed Central

    2014-01-01

    Background HMM-ModE is a computational method that generates family specific profile HMMs using negative training sequences. The method optimizes the discrimination threshold using 10-fold cross validation and modifies the emission probabilities of profiles to reduce common fold based signals shared with other sub-families. The protocol depends on the program HMMER for HMM profile building and sequence database searching. The recent release of HMMER3 has improved database search speed by several orders of magnitude, allowing for the large scale deployment of the method in sequence annotation projects. We have rewritten our existing scripts, both at the level of parsing the HMM profiles and of modifying emission probabilities, to upgrade HMM-ModE using HMMER3, which takes advantage of its probabilistic inference with high computational speed. The method is benchmarked and tested on a GPCR dataset as an accurate and fast method for functional annotation. Results The implementation of this method, which now works with HMMER3, is benchmarked against the earlier version of HMMER, to show that the effect of local-local alignments is marked only in the case of profiles containing a large number of discontinuous match states. The method is tested on a gold standard set of families and we have reported a significant reduction in the number of false positive hits over the default HMM profiles. When implemented on GPCR sequences, the results showed an improvement in the accuracy of classification compared with other methods used to classify the family at different levels of their classification hierarchy. Conclusions The present findings show that the new version of HMM-ModE is a highly specific method used to differentiate between fold (superfamily) and function (family) specific signals, which helps in the functional annotation of protein sequences. The use of modified profile HMMs of GPCR sequences provides a simple yet highly specific method for classification of the family, being able to predict the sub-family specific sequences with high accuracy even though sequences share common physicochemical characteristics between sub-families. PMID:25073805

  3. High-speed imaging of submerged jet: visualization analysis using proper orthogonality decomposition

    NASA Astrophysics Data System (ADS)

    Liu, Yingzheng; He, Chuangxin

    2016-11-01

    In the present study, the submerged jet at low Reynolds numbers was visualized using laser-induced fluorescence and high-speed imaging in a water tank. A well-controlled calibration was performed to determine the region of linear dependence of the fluorescence intensity on its concentration. Subsequently, the jet fluid issuing from a circular pipe was visualized using a high-speed camera. The animation sequence of the visualized jet flow field was supplied for the snapshot proper orthogonality decomposition (POD) analysis. Spatio-temporally varying structures superimposed in the unsteady fluid flow were identified, e.g., the axisymmetric mode and the helical mode, which were reflected in the dominant POD modes. The coefficients of the POD modes give a strong indication of the temporal and spectral features of the corresponding unsteady events. A reconstruction using the time-mean visualization and the selected POD modes was conducted to reveal the convective motion of the buried vortical structures. National Natural Science Foundation of China.
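    The snapshot POD step itself can be sketched compactly: the mean-subtracted image sequence is arranged as a snapshot matrix and decomposed with an SVD, yielding spatial modes, modal energies and temporal coefficients that can be used for low-order reconstruction. The array sizes and random frames below are placeholders for the recorded visualization sequence.

    ```python
    # Minimal snapshot-POD sketch on an image sequence (synthetic stand-in data).
    import numpy as np

    rng = np.random.default_rng(2)
    n_frames, ny, nx = 200, 64, 64
    frames = rng.standard_normal((n_frames, ny, nx))    # stand-in for the jet images

    X = frames.reshape(n_frames, -1)                    # snapshots as rows
    mean_field = X.mean(axis=0)
    Xc = X - mean_field                                 # subtract the time-mean field
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

    energy = S**2 / np.sum(S**2)                        # relative energy per mode
    modes = Vt.reshape(-1, ny, nx)                      # spatial POD modes
    coeffs = U * S                                      # temporal coefficients per mode

    # Low-order reconstruction from the first k modes (plus the time mean).
    k = 10
    X_rec = coeffs[:, :k] @ Vt[:k] + mean_field
    print(energy[:5], X_rec.shape)
    ```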

  4. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    PubMed

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at the 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases, high-speed PC would be the best choice for the image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.

  5. Brandaris 128 ultra-high-speed imaging facility: 10 years of operation, updates, and enhanced features

    NASA Astrophysics Data System (ADS)

    Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel

    2012-10-01

    The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.

  6. High-speed railway real-time localization auxiliary method based on deep neural network

    NASA Astrophysics Data System (ADS)

    Chen, Dongjie; Zhang, Wensheng; Yang, Yang

    2017-11-01

    High-speed railway intelligent monitoring and management systems are composed of schedule integration, geographic information, location services, and data mining technology for the integration of time and space data. Assistant localization is a significant submodule of the intelligent monitoring system. In practical application, the general approach is to capture image sequences of the components using a high-definition camera, digital image processing techniques, and target detection, tracking and even behavior analysis methods. In this paper, we present an end-to-end character recognition method based on a deep CNN called YOLO-toc for high-speed railway pillar plate numbers. Different from other deep CNNs, YOLO-toc is an end-to-end multi-target detection framework; furthermore, it exhibits state-of-the-art real-time detection performance, achieving nearly 50 fps on a GPU (GTX 960). Finally, we realize a real-time, high-accuracy pillar plate number recognition system and integrate natural scene OCR into a dedicated classification YOLO-toc model.

  7. Image correlation method for DNA sequence alignment.

    PubMed

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics' most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray-intensity pixel. Query and known database sequences are coded to their pixel representation and sequence alignment is handled as an object-recognition-in-a-scene problem: query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, a one-million-base-pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%) and specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were a hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed of a real experimental optical correlator. By doing this, we expect to fully exploit the light-speed properties of optical correlation. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods for sequence alignment.
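    A simplified, purely digital stand-in for this idea is sketched below: bases are mapped to fixed gray values and the query is located in a database sequence by normalized cross-correlation. The sketch is one-dimensional for brevity, whereas the paper wraps sequences into 2D images and targets an optical correlator; the gray-level mapping and sequences are arbitrary.

    ```python
    # Locate a gray-encoded query within a gray-encoded database sequence by
    # normalized cross-correlation (1D simplification of the image approach).
    import numpy as np

    GRAY = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00}   # fixed intensity per base

    def encode(seq):
        return np.array([GRAY[b] for b in seq])

    def ncc(query_vec, window):
        qz = query_vec - query_vec.mean()
        wz = window - window.mean()
        denom = np.linalg.norm(qz) * np.linalg.norm(wz) + 1e-12
        return float(qz @ wz / denom)

    database = "TTGACCGTACGTAGGCTTACGATCGATT"
    query = "ACGTAGGCTT"
    d, q = encode(database), encode(query)

    scores = [ncc(q, d[i:i + len(q)]) for i in range(len(d) - len(q) + 1)]
    offset = int(np.argmax(scores))
    print(offset, database[offset:offset + len(query)])   # best-matching position
    ```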

  8. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE PAGES

    Yim, Won Cheol; Cushman, John C.

    2017-07-22

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
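    The query-distribution idea can be illustrated with a short sketch that splits a multi-FASTA query into roughly equal chunks, each of which could then be submitted as an independent BLAST job on a separate node. This is not the DCBLAST code; the file names, chunk count and round-robin assignment are illustrative.

    ```python
    # Split a multi-FASTA query into chunks for independent BLAST jobs (sketch).
    from pathlib import Path

    def read_fasta(path):
        records, header, seq = [], None, []
        for line in Path(path).read_text().splitlines():
            if line.startswith(">"):
                if header is not None:
                    records.append((header, "".join(seq)))
                header, seq = line, []
            else:
                seq.append(line.strip())
        if header is not None:
            records.append((header, "".join(seq)))
        return records

    def split_query(path, n_chunks, out_prefix="query_chunk"):
        records = read_fasta(path)
        for i in range(n_chunks):
            chunk = records[i::n_chunks]                    # round-robin assignment
            out = Path(f"{out_prefix}_{i:03d}.fasta")
            out.write_text("\n".join(f"{h}\n{s}" for h, s in chunk) + "\n")

    # Tiny demonstration query, split into 4 chunks.
    Path("query.fasta").write_text(">seq1\nACGT\n>seq2\nGGTTAA\n>seq3\nTTTT\n>seq4\nCCGG\n>seq5\nATAT\n")
    split_query("query.fasta", n_chunks=4)
    # Each chunk could then be run separately, e.g.:
    #   blastn -query query_chunk_000.fasta -db nt -out chunk_000.out
    ```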

  9. Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yim, Won Cheol; Cushman, John C.

    Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs perform searches very rapidly, they have the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.

  10. Association analysis for udder index and milking speed with imputed whole-genome sequence variants in Nordic Holstein cattle.

    PubMed

    Jardim, Júlia Gazzoni; Guldbrandtsen, Bernt; Lund, Mogens Sandø; Sahana, Goutam

    2018-03-01

    Genome-wide association testing facilitates the identification of genetic variants associated with complex traits. Mapping genes that promote genetic resistance to mastitis could reduce the cost of antibiotic use and enhance animal welfare and milk production by improving outcomes of breeding for udder health. Using imputed whole-genome sequence variants, we carried out association studies for 2 traits related to udder health, udder index, and milking speed in Nordic Holstein cattle. A total of 4,921 bulls genotyped with the BovineSNP50 BeadChip array were imputed to high-density genotypes (Illumina BovineHD BeadChip, Illumina, San Diego, CA) and, subsequently, to whole-genome sequence variants. An association analysis was carried out using a linear mixed model. Phenotypes used in the association analyses were deregressed breeding values. Multitrait meta-analysis was carried out for these 2 traits. We identified 10 and 8 chromosomes harboring markers that were significantly associated with udder index and milking speed, respectively. Strongest association signals were observed on chromosome 20 for udder index and chromosome 19 for milking speed. Multitrait meta-analysis identified 13 chromosomes harboring associated markers for the combination of udder index and milking speed. The associated region on chromosome 20 overlapped with earlier reported quantitative trait loci for similar traits in other cattle populations. Moreover, this region was located close to the FYB gene, which is involved in platelet activation and controls IL-2 expression; FYB is a strong candidate gene for udder health and worthy of further investigation. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  11. Repeated-Sprint Sequences During Female Soccer Matches Using Fixed and Individual Speed Thresholds.

    PubMed

    Nakamura, Fábio Y; Pereira, Lucas A; Loturco, Irineu; Rosseti, Marcelo; Moura, Felipe A; Bradley, Paul S

    2017-07-01

    Nakamura, FY, Pereira, LA, Loturco, I, Rosseti, M, Moura, FA, and Bradley, PS. Repeated-sprint sequences during female soccer matches using fixed and individual speed thresholds. J Strength Cond Res 31(7): 1802-1810, 2017-The main objective of this study was to characterize the occurrence of single sprints and repeated-sprint sequences (RSS) during elite female soccer matches, using fixed (20 km·h-1) and individually based speed thresholds (>90% of the mean speed from a 20-m sprint test). Eleven elite female soccer players from the same team participated in the study. All players performed a 20-m linear sprint test, and were assessed in up to 10 official matches using Global Positioning System technology. Magnitude-based inferences were used to test for meaningful differences. Results revealed that irrespective of adopting fixed or individual speed thresholds, female players produced only a few RSS during matches (2.3 ± 2.4 sequences using the fixed threshold and 3.3 ± 3.0 sequences using the individually based threshold), with most sequences comprising just 2 sprints. Additionally, central defenders performed fewer sprints (10.2 ± 4.1) than other positions (fullbacks: 28.1 ± 5.5; midfielders: 21.9 ± 10.5; forwards: 31.9 ± 11.1; differences rated likely to almost certain, with effect sizes ranging from 1.65 to 2.72), and sprinting ability declined in the second half. The data do not support the notion that RSS occur frequently during soccer matches in female players, irrespective of using fixed or individual speed thresholds to define sprint occurrence. However, repeated-sprint ability development cannot be ruled out from soccer training programs because of its association with match-related performance.

  12. Atropos: specific, sensitive, and speedy trimming of sequencing reads.

    PubMed

    Didion, John P; Martin, Marcel; Collins, Francis S

    2017-01-01

    A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos makes it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
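    The core of 3' adapter trimming can be illustrated with a deliberately simple sketch (it is not Atropos): the longest read suffix that exactly matches a prefix of the adapter is removed. The adapter string and example read are illustrative, and a real trimmer additionally models sequencing errors and base qualities.

    ```python
    # Toy 3'-adapter trimming: remove the longest read suffix that exactly matches
    # a prefix of the adapter (illustrative only).
    def trim_adapter(read, adapter, min_match=3):
        for k in range(min(len(read), len(adapter)), min_match - 1, -1):
            if read.endswith(adapter[:k]):
                return read[:-k]
        return read                                 # no adapter found

    adapter = "AGATCGGAAGAGC"                        # common Illumina adapter prefix
    print(trim_adapter("ACGTACGTACGTAGATCGGAA", adapter))  # -> ACGTACGTACGT
    ```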

  13. Atropos: specific, sensitive, and speedy trimming of sequencing reads

    PubMed Central

    Collins, Francis S.

    2017-01-01

    A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos makes it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos. PMID:28875074

  14. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips.

    PubMed

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L; Wang, Qianxi X; Leppinen, David M; Walmsley, A Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power while the area and width of the cavitation cloud varies for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation.

  15. High Speed Imaging of Cavitation around Dental Ultrasonic Scaler Tips

    PubMed Central

    Vyas, Nina; Pecheva, Emilia; Dehghani, Hamid; Sammons, Rachel L.; Wang, Qianxi X.; Leppinen, David M.; Walmsley, A. Damien

    2016-01-01

    Cavitation occurs around dental ultrasonic scalers, which are used clinically for removing dental biofilm and calculus. However it is not known if this contributes to the cleaning process. Characterisation of the cavitation around ultrasonic scalers will assist in assessing its contribution and in developing new clinical devices for removing biofilm with cavitation. The aim is to use high speed camera imaging to quantify cavitation patterns around an ultrasonic scaler. A Satelec ultrasonic scaler operating at 29 kHz with three different shaped tips has been studied at medium and high operating power using high speed imaging at 15,000, 90,000 and 250,000 frames per second. The tip displacement has been recorded using scanning laser vibrometry. Cavitation occurs at the free end of the tip and increases with power while the area and width of the cavitation cloud varies for different shaped tips. The cavitation starts at the antinodes, with little or no cavitation at the node. High speed image sequences combined with scanning laser vibrometry show individual microbubbles imploding and bubble clouds lifting and moving away from the ultrasonic scaler tip, with larger tip displacement causing more cavitation. PMID:26934340

  16. Measurement of instantaneous rotational speed using double-sine-varying-density fringe pattern

    NASA Astrophysics Data System (ADS)

    Zhong, Jianfeng; Zhong, Shuncong; Zhang, Qiukun; Peng, Zhike

    2018-03-01

    Fast and accurate rotational speed measurement is required both for condition monitoring and for fault diagnosis of rotating machinery. A vision- and fringe pattern-based rotational speed measurement system was proposed to measure the instantaneous rotational speed (IRS) with high accuracy and reliability. A special double-sine-varying-density fringe pattern (DSVD-FP) was designed, pasted completely around the shaft surface, and used as the primary angular sensor. The rotational angle could be correctly obtained from the left and right fringe period densities (FPDs) of the DSVD-FP image sequence recorded by a high-speed camera. The instantaneous angular speed (IAS) between two adjacent frames could be calculated from the real-time rotational angle curves; thus, the IRS could also be obtained accurately and efficiently. Both the measurement principle and the system design of the novel method are presented. The factors influencing the sensing characteristics and measurement accuracy of the novel system, including the spectral centrobaric correction method (SCCM) for the FPD calculation, the noise sources introduced by the image sensor, the exposure time and the vibration of the shaft, were investigated through simulations and experiments. The sampling rate of the high-speed camera could be up to 5000 Hz; thus, the measurement becomes very fast and a change in rotational speed is sensed within 0.2 ms. The experimental results for different IRS measurements and for characterization of the response property of a servo motor demonstrated the high accuracy and fast measurement of the proposed technique, making it attractive for condition monitoring and fault diagnosis of rotating machinery.
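    The final step, converting the per-frame rotation-angle curve into an instantaneous speed, is a simple finite difference scaled by the frame rate, as the sketch below shows. The frame rate and the synthetic angle curve are illustrative; in the real system the angles come from the decoded fringe period densities.

    ```python
    # Instantaneous rotational speed from a per-frame rotation-angle curve (sketch).
    import numpy as np

    frame_rate = 5000.0                               # camera sampling rate (Hz), illustrative
    t = np.arange(0, 0.02, 1.0 / frame_rate)          # 20 ms of frames
    true_rpm = 1200.0
    angle_deg = 6.0 * true_rpm * t                    # 1 rpm = 6 deg/s (synthetic angle curve)

    omega_deg_per_s = np.diff(angle_deg) * frame_rate # angular speed between adjacent frames
    rpm = omega_deg_per_s / 6.0
    print(rpm[:3])                                    # ~1200 rpm
    ```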

  17. Learning multiple variable-speed sequences in striatum via cortical tutoring.

    PubMed

    Murray, James M; Escola, G Sean

    2017-05-08

    Sparse, sequential patterns of neural activity have been observed in numerous brain areas during timekeeping and motor sequence tasks. Inspired by such observations, we construct a model of the striatum, an all-inhibitory circuit where sequential activity patterns are prominent, addressing the following key challenges: (i) obtaining control over temporal rescaling of the sequence speed, with the ability to generalize to new speeds; (ii) facilitating flexible expression of distinct sequences via selective activation, concatenation, and recycling of specific subsequences; and (iii) enabling the biologically plausible learning of sequences, consistent with the decoupling of learning and execution suggested by lesion studies showing that cortical circuits are necessary for learning, but that subcortical circuits are sufficient to drive learned behaviors. The same mechanisms that we describe can also be applied to circuits with both excitatory and inhibitory populations, and hence may underlie general features of sequential neural activity pattern generation in the brain.

  18. Safety and EEG data quality of concurrent high-density EEG and high-speed fMRI at 3 Tesla.

    PubMed

    Foged, Mette Thrane; Lindberg, Ulrich; Vakamudi, Kishore; Larsson, Henrik B W; Pinborg, Lars H; Kjær, Troels W; Fabricius, Martin; Svarer, Claus; Ozenne, Brice; Thomsen, Carsten; Beniczky, Sándor; Paulson, Olaf B; Posse, Stefan

    2017-01-01

    Concurrent EEG and fMRI is increasingly used to characterize the spatial-temporal dynamics of brain activity. However, most studies to date have been limited to conventional echo-planar imaging (EPI). There is considerable interest in integrating recently developed high-speed fMRI methods with high-density EEG to increase temporal resolution and sensitivity for task-based and resting state fMRI, and for detecting interictal spikes in epilepsy. In the present study using concurrent high-density EEG and recently developed high-speed fMRI methods, we investigate safety of radiofrequency (RF) related heating, the effect of EEG on cortical signal-to-noise ratio (SNR) in fMRI, and assess EEG data quality. The study compared EPI, multi-echo EPI, multi-band EPI and multi-slab echo-volumar imaging pulse sequences, using clinical 3 Tesla MR scanners from two different vendors that were equipped with 64- and 256-channel MR-compatible EEG systems, respectively, and receive only array head coils. Data were collected in 11 healthy controls (3 males, age range 18-70 years) and 13 patients with epilepsy (8 males, age range 21-67 years). Three of the healthy controls were scanned with the 256-channel EEG system, the other subjects were scanned with the 64-channel EEG system. Scalp surface temperature, SNR in occipital cortex and head movement were measured with and without the EEG cap. The degree of artifacts and the ability to identify background activity was assessed by visual analysis by a trained expert in the 64 channel EEG data (7 healthy controls, 13 patients). RF induced heating at the surface of the EEG electrodes during a 30-minute scan period with stable temperature prior to scanning did not exceed 1.0° C with either EEG system and any of the pulse sequences used in this study. There was no significant decrease in cortical SNR due to the presence of the EEG cap (p > 0.05). No significant differences in the visually analyzed EEG data quality were found between EEG recorded during high-speed fMRI and during conventional EPI (p = 0.78). Residual ballistocardiographic artifacts resulted in 58% of EEG data being rated as poor quality. This study demonstrates that high-density EEG can be safely implemented in conjunction with high-speed fMRI and that high-speed fMRI does not adversely affect EEG data quality. However, the deterioration of the EEG quality due to residual ballistocardiographic artifacts remains a significant constraint for routine clinical applications of concurrent EEG-fMRI.

  19. Protein sequence annotation in the genome era: the annotation concept of SWISS-PROT+TREMBL.

    PubMed

    Apweiler, R; Gateau, A; Contrino, S; Martin, M J; Junker, V; O'Donovan, C; Lang, F; Mitaritonna, N; Kappus, S; Bairoch, A

    1997-01-01

    SWISS-PROT is a curated protein sequence database which strives to provide a high level of annotation, a minimal level of redundancy and a high level of integration with other databases. Ongoing genome sequencing projects have dramatically increased the number of protein sequences to be incorporated into SWISS-PROT. Since we do not want to dilute the quality standards of SWISS-PROT by incorporating sequences without proper sequence analysis and annotation, we cannot speed up the incorporation of new incoming data indefinitely. However, as we also want to make the sequences available as fast as possible, we introduced TREMBL (TRanslation of EMBL nucleotide sequence database), a supplement to SWISS-PROT. TREMBL consists of computer-annotated entries in SWISS-PROT format derived from the translation of all coding sequences (CDS) in the EMBL nucleotide sequence database, except for CDS already included in SWISS-PROT. While TREMBL is already of immense value, its computer-generated annotation does not match the quality of SWISS-PROT's. The main difference is in the protein functional information attached to sequences. With this in mind, we are dedicating substantial effort to developing and applying computer methods to enhance the functional information attached to TREMBL entries.

  20. Pattern recognition of electronic bit-sequences using a semiconductor mode-locked laser and spatial light modulators

    NASA Astrophysics Data System (ADS)

    Bhooplapur, Sharad; Akbulut, Mehmetkan; Quinlan, Franklyn; Delfyett, Peter J.

    2010-04-01

    A novel scheme for the recognition of electronic bit-sequences is demonstrated. Two electronic bit-sequences that are to be compared are each mapped to a unique code from a set of Walsh-Hadamard codes. The codes are then encoded in parallel on the spectral phase of the frequency comb lines from a frequency-stabilized mode-locked semiconductor laser. Phase encoding is achieved by using two independent spatial light modulators based on liquid crystal arrays. Encoded pulses are compared using interferometric pulse detection and differential balanced photodetection. Orthogonal codes eight bits long are compared, and matched codes are successfully distinguished from mismatched codes with very low error rates of around 10^-18. This technique has potential for high-speed, high-accuracy recognition of bit-sequences, with applications in keyword searches and internet protocol packet routing.
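    The decision logic that the optical system implements can be mimicked digitally in a few lines: map each bit-sequence to a row of a Hadamard matrix and compare codewords by their normalized inner product, which is 1 for a match and 0 for any mismatched (orthogonal) pair. The code length of 8 matches the abstract; everything else is an illustrative stand-in for the optical correlation.

    ```python
    # Walsh-Hadamard code matching by normalized inner product (digital sketch).
    import numpy as np
    from scipy.linalg import hadamard

    H = hadamard(8)                         # 8 orthogonal length-8 codewords (+/-1 entries)

    def to_code(bit_sequence_index):
        return H[bit_sequence_index]        # codeword assigned to a given bit-sequence

    a = to_code(3)
    b_match = to_code(3)
    b_mismatch = to_code(5)

    print(np.dot(a, b_match) / len(a))      # 1.0 for matched codes
    print(np.dot(a, b_mismatch) / len(a))   # 0.0 for orthogonal (mismatched) codes
    ```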

  1. High-speed 3D surface measurement with a fringe projection based optical sensor

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther

    2014-05-01

    A new optical sensor based on the fringe projection technique for the accurate and fast measurement of object surfaces, mainly for industrial inspection tasks, is introduced. High-speed fringe projection and image recording at 180 Hz allow 3D rates of up to 60 Hz. The high measurement velocity was achieved by consistent fringe code reduction and parallel data processing. Reduction of the image sequence length was obtained by omitting the Gray-code sequence, exploiting the geometric restrictions of the measurement objects. The sensor realizes three different measurement fields between 20 x 20 mm² and 40 x 40 mm² with lateral spatial resolutions between 10 μm and 20 μm at the same working distance. The height range of the measurement objects is between ±0.5 mm and ±2 mm. Height resolutions between 1 μm and 5 μm can be achieved depending on the properties of the measurement objects. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.

  2. A time series based sequence prediction algorithm to detect activities of daily living in smart home.

    PubMed

    Marufuzzaman, M; Reaz, M B I; Ali, M A M; Rahman, L F

    2015-01-01

    The goal of smart homes is to create an intelligent environment that adapts to the inhabitants' needs and assists persons who need special care and safety in their daily life. This can be achieved by collecting ADL (activities of daily living) data and further analysis within existing computing elements. In this research, a recent algorithm named sequence prediction via enhanced episode discovery (SPEED) is modified, and a time component is included to improve accuracy. The modified SPEED, or M-SPEED, is a sequence prediction algorithm that modifies the previous SPEED algorithm by using the time duration of appliances' ON-OFF states to decide the next state. M-SPEED discovered periodic episodes of inhabitant behavior, was trained with the learned episodes, and made decisions based on the obtained knowledge. The results showed that M-SPEED achieves 96.8% prediction accuracy, which is better than other time prediction algorithms such as PUBS, ALZ with temporal rules, and the previous SPEED. Since human behavior shows natural temporal patterns, duration times can be used to predict future events more accurately. This inhabitant activity prediction system will certainly improve smart homes by ensuring safety and better care for elderly and handicapped people.

  3. Performance of a visuomotor walking task in an augmented reality training setting.

    PubMed

    Haarman, Juliet A M; Choi, Julia T; Buurke, Jaap H; Rietman, Johan S; Reenalda, Jasper

    2017-12-01

    Visual cues can be used to train walking patterns. Here, we studied the performance and learning capacities of healthy subjects executing a high-precision visuomotor walking task, in an augmented reality training set-up. A beamer was used to project visual stepping targets on the walking surface of an instrumented treadmill. Two speeds were used to manipulate task difficulty. All participants (n = 20) had to change their step length to hit visual stepping targets with a specific part of their foot, while walking on a treadmill over seven consecutive training blocks, each block composed of 100 stepping targets. Distance between stepping targets was varied between short, medium and long steps. Training blocks could either be composed of random stepping targets (no fixed sequence was present in the distance between the stepping targets) or sequenced stepping targets (repeating fixed sequence was present). Random training blocks were used to measure non-specific learning and sequenced training blocks were used to measure sequence-specific learning. Primary outcome measures were performance (% of correct hits), and learning effects (increase in performance over the training blocks: both sequence-specific and non-specific). Secondary outcome measures were the performance and stepping-error in relation to the step length (distance between stepping target). Subjects were able to score 76% and 54% at first try for lower speed (2.3 km/h) and higher speed (3.3 km/h) trials, respectively. Performance scores did not increase over the course of the trials, nor did the subjects show the ability to learn a sequenced walking task. Subjects were better able to hit targets while increasing their step length, compared to shortening it. In conclusion, augmented reality training by use of the current set-up was intuitive for the user. Suboptimal feedback presentation might have limited the learning effects of the subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  4. Image processing for safety assessment in civil engineering.

    PubMed

    Ferrer, Belen; Pomares, Juan C; Irles, Ramon; Espinosa, Julian; Mas, David

    2013-06-20

    Behavior analysis of construction safety systems is of fundamental importance to avoid accidental injuries. Traditionally, measurements of dynamic actions in civil engineering have been done through accelerometers, but high-speed cameras and image processing techniques can play an important role in this area. Here, we propose using morphological image filtering and the Hough transform on high-speed video sequences as tools for dynamic measurements in that field. The presented method is applied to obtain the trajectory and acceleration of a cylindrical ballast falling from a building and trapped by a thread net. Results show that safety recommendations given in construction codes can be potentially dangerous for workers.
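
    As an illustration of the measurement chain the abstract describes (morphological filtering, Hough detection, then numerical differentiation of the trajectory), here is a hedged OpenCV sketch; the video path, physical scale and all Hough parameters are assumptions, not values from the paper:

    ```python
    import cv2
    import numpy as np

    def track_falling_ballast(video_path, metres_per_pixel, fps):
        """Illustrative sketch: detect a roughly circular ballast in each frame
        with a Hough transform and differentiate its trajectory numerically."""
        cap = cv2.VideoCapture(video_path)
        ys = []
        kernel = np.ones((5, 5), np.uint8)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # morphological opening to suppress small bright clutter
            gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
            circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2,
                                       minDist=100, param1=100, param2=30,
                                       minRadius=10, maxRadius=80)
            if circles is not None:
                x, y, r = circles[0][0]
                ys.append(y * metres_per_pixel)     # vertical position in metres
        cap.release()
        t = np.arange(len(ys)) / fps
        y = np.asarray(ys)
        v = np.gradient(y, t)                       # velocity (m/s)
        a = np.gradient(v, t)                       # acceleration (m/s^2)
        return t, y, v, a
    ```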

  5. High-Speed Incoming Infrared Target Detection by Fusion of Spatial and Temporal Detectors

    PubMed Central

    Kim, Sungho

    2015-01-01

    This paper presents a method for detecting high-speed incoming targets by the fusion of spatial and temporal detectors to achieve a high detection rate for an active protection system (APS). The incoming targets have different image velocities according to the target-camera geometry. Therefore, approaches based on a single target detector, such as a 1D temporal filter, a 2D spatial filter or a 3D matched filter, cannot provide a high detection rate with moderate false alarms. The target speed variation was analyzed according to the incoming angle and target velocity. A distant target at the firing time appears almost stationary in the image, and its image speed increases slowly. Such speed-varying targets are detected stably by fusing spatial and temporal filters. The stationary-target detector is activated by an almost-zero temporal contrast filter (TCF) response and identifies targets using a spatial filter called the modified mean subtraction filter (M-MSF). A small-motion (sub-pixel velocity) target detector is activated by a small TCF value and finds targets using the same spatial filter. A large-motion (pixel-velocity) target detector works when the TCF value is high. The final detection is obtained by fusing the three detectors based on threat priority. The experimental results of the various target sequences show that the proposed fusion-based target detector produces the highest detection rate with an acceptable false alarm rate. PMID:25815448
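
    The exact filter definitions are not given in the abstract; the following toy sketch only illustrates the general fusion idea (a temporal contrast value routing pixels to spatial detectors with priority-based fusion), with invented thresholds and a plain mean-subtraction filter standing in for the M-MSF:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def spatial_msf(frame, size=11):
        """Mean-subtraction spatial filter: emphasize small bright targets
        by removing the local background mean (stand-in for the M-MSF)."""
        return frame - uniform_filter(frame, size=size)

    def detect(prev, curr, tcf_low=2.0, tcf_high=10.0, spatial_thr=25.0):
        """Toy fusion of temporal and spatial detectors. Thresholds are invented."""
        tcf = np.abs(curr.astype(float) - prev.astype(float))    # temporal contrast
        msf = spatial_msf(curr.astype(float))
        stationary = (tcf < tcf_low) & (msf > spatial_thr)        # near-zero motion
        sub_pixel  = (tcf >= tcf_low) & (tcf < tcf_high) & (msf > spatial_thr)
        fast       = tcf >= tcf_high                              # pixel-level motion
        # Priority fusion: the fast-motion branch (most threatening) wins first
        return np.where(fast, 3, np.where(sub_pixel, 2, np.where(stationary, 1, 0)))
    ```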

  6. High-speed mixture fraction and temperature imaging of pulsed, turbulent fuel jets auto-igniting in high-temperature, vitiated co-flows

    NASA Astrophysics Data System (ADS)

    Papageorge, Michael J.; Arndt, Christoph; Fuest, Frederik; Meier, Wolfgang; Sutton, Jeffrey A.

    2014-07-01

    In this manuscript, we describe an experimental approach to simultaneously measure high-speed image sequences of the mixture fraction and temperature fields during pulsed, turbulent fuel injection into a high-temperature, co-flowing, and vitiated oxidizer stream. The quantitative mixture fraction and temperature measurements are determined from 10-kHz-rate planar Rayleigh scattering and a robust data processing methodology which is accurate from fuel injection to the onset of auto-ignition. In addition, the data processing is shown to yield accurate temperature measurements following ignition to observe the initial evolution of the "burning" temperature field. High-speed OH* chemiluminescence (CL) was used to determine the spatial location of the initial auto-ignition kernel. In order to ensure that the ignition kernel formed inside of the Rayleigh scattering laser light sheet, OH* CL was observed in two viewing planes, one near-parallel to the laser sheet and one perpendicular to the laser sheet. The high-speed laser measurements are enabled through the use of the unique high-energy pulse burst laser system which generates long-duration bursts of ultra-high pulse energies at 532 nm (>1 J) suitable for planar Rayleigh scattering imaging. A particular focus of this study was to characterize the fidelity of the measurements in terms of both precision and accuracy, which includes facility operating and boundary conditions and measurement of signal-to-noise ratio (SNR). The mixture fraction and temperature fields deduced from the high-speed planar Rayleigh scattering measurements exhibited SNR values greater than 100 at temperatures exceeding 1,300 K. The accuracy of the measurements was determined by comparing the current mixture fraction results to those of "cold", isothermal, non-reacting jets. All profiles, when properly normalized, exhibited self-similarity and collapsed upon one another. Finally, example mixture fraction, temperature, and OH* emission sequences are presented for a variety of fuel and vitiated oxidizer combinations. For all cases considered, auto-ignition occurred at the periphery of the fuel jet, under very "lean" conditions, where the local mixture fraction was less than the stoichiometric mixture fraction (ξ < ξ_s). Furthermore, the ignition kernel formed in regions of low scalar dissipation rate, which agrees with previous results from direct numerical simulations.

  7. Use of Proper Orthogonal Decomposition Towards Time-resolved Image Analysis of Sprays

    DTIC Science & Technology

    2011-03-15

    High-speed movies of optically dense sprays exiting a Gas-Centered Swirl Coaxial (GCSC) injector are subjected to image analysis to determine spray... sequence prior to image analysis. Results of spray morphology, including spray boundary, widths, angles and boundary oscillation frequencies, are...

  8. Eyelid contour detection and tracking for startle research related eye-blink measurements from high-speed video records.

    PubMed

    Bernard, Florian; Deuter, Christian Eric; Gemmar, Peter; Schachinger, Hartmut

    2013-10-01

    Using the positions of the eyelids is an effective and contact-free way to measure startle-induced eye-blinks, which play an important role in human psychophysiological research. To the best of our knowledge, no method exists that is conveniently usable by psychophysiological researchers for efficient detection and tracking of the exact eyelid contours in image sequences captured at high speed. In this publication, a semi-automatic, model-based eyelid contour detection and tracking algorithm for the analysis of high-speed video recordings from an eye tracker is presented. As a large number of images had been acquired prior to method development, it was important that our technique be able to deal with images recorded without any special parametrisation of the eye tracker. The method entails pupil detection and specular reflection removal, and makes use of dynamic model adaption. In a proof-of-concept study we achieved a correct detection rate of 90.6%. With this approach, we provide a feasible method to accurately assess eye-blinks from high-speed video recordings. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  9. An FPGA Implementation to Detect Selective Cationic Antibacterial Peptides

    PubMed Central

    Polanco González, Carlos; Nuño Maganda, Marco Aurelio; Arias-Estrada, Miguel; del Rio, Gabriel

    2011-01-01

    Exhaustive prediction of physicochemical properties of peptide sequences is used in different areas of biological research. One example is the identification of selective cationic antibacterial peptides (SCAPs), which may be used in the treatment of different diseases. Due to the discrete nature of peptide sequences, the calculation of physicochemical properties is considered a high-performance computing problem. A competitive solution for this class of problems is to embed algorithms into dedicated hardware. In the present work we present the adaptation, design and implementation of an algorithm for SCAP prediction on a Field Programmable Gate Array (FPGA) platform. Four physicochemical property codes useful in the identification of peptide sequences with potential selective antibacterial activity were implemented on an FPGA board. The speed-up gained by a single-copy implementation was up to 108 times that of a single Intel processor, compared cycle for cycle. The inherent scalability of our design allows for replication of this code onto multiple FPGA cards, so further improvements in speed are possible. Our results describe the first embedded SCAP prediction solution and constitute the grounds for efficiently performing exhaustive analysis of the sequence-physicochemical property relationship of peptides. PMID:21738652
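
    The four property codes are not specified in the abstract; as a generic example of the kind of per-sequence physicochemical property such a pipeline computes, a simple net-charge estimate (shown in Python for clarity, not as the FPGA design) could be:

    ```python
    def net_charge(peptide, positive="KR", negative="DE"):
        """Approximate net charge of a peptide at neutral pH by counting
        basic (Lys, Arg) and acidic (Asp, Glu) residues; His and the
        termini are ignored in this simplified estimate."""
        seq = peptide.upper()
        return sum(seq.count(a) for a in positive) - \
               sum(seq.count(a) for a in negative)

    # Hypothetical cationic peptide: 6 Lys residues -> net charge +6
    print(net_charge("KKLWKKILKVLK"))
    ```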

  10. Repeatability of high-speed migration of tremor along the Nankai subduction zone, Japan

    NASA Astrophysics Data System (ADS)

    Kato, A.; Tsuruoka, H.; Nakagawa, S.; Hirata, N.

    2015-12-01

    Tectonic tremors have been considered to be a swarm or superimposed pulses of low-frequency earthquakes (LFEs). To systematically analyze the high-speed migration of tremor [e.g., Shelly et al., 2007], we here focus on an intensive cluster hosting many LFEs located in the western part of Shikoku Island. We relocated ~770 hypocenters of LFEs identified by the JMA, which took place from Jan. 2008 to Dec. 2013, applying a double-difference relocation algorithm [e.g., Waldhauser and Ellsworth, 2000] to arrival times picked by the JMA and to those obtained by waveform cross-correlation measurements. The epicentral distributions show a clear alignment parallel to the subduction of the Philippine Sea plate, resembling slip-parallel streaking. We then applied a matched-filter technique to continuous seismograms recorded near the source region, using the relocated template LFEs over the 6-year period (Jan. 2008 to Dec. 2013). We newly detected about 60 times the number of template events, which is considerably larger than the number obtained by the conventional envelope cross-correlation method. Interestingly, we identified many repeated sequences of tremor migrations along the slip-parallel streaking (~350 sequences). The front of each migration (or of stacked migrations) of tremors can be modeled by a parabolic envelope, indicating a diffusion process. The diffusivity of the parabolic envelope is estimated to be around 10^5 m²/s, which is categorized as a high-speed migration feature (~100 km/hour). Most of the rapid migrations took place during occurrences of short-term slow slip events (SSEs), and seem to be triggered by ocean and solid Earth tides. The most plausible explanation of the high-speed propagation is a diffusion process of a stress pulse concentrated within a cluster of strong brittle patches on the ductile shear zone [Ando et al., 2012]. The viscosity of the ductile shear zone within the streaking is at least one order of magnitude smaller than that associated with the slow-speed migration. This discrepancy in viscosity indicates that the streaking has a different rheology compared with the background main tremor/SSE belt. In addition, the diffusivity did not show any significant change before and after the Tohoku-Oki M9.0 Earthquake, suggesting that the high-speed propagation of tremors is stable against external stress perturbations.
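
    A minimal sketch of the parabolic-envelope diffusion relation invoked here, assuming the standard form x² ≈ 4Dt (the exact formulation used by the authors may differ):

    ```latex
    % Parabolic migration envelope for a diffusion-like tremor front,
    % with along-strike front position x(t) and diffusivity D:
    \[
      x(t) \approx \sqrt{4 D t},
      \qquad
      v(t) = \frac{dx}{dt} \approx \sqrt{\frac{D}{t}} .
    \]
    % With D ~ 1e5 m^2/s, the front speed is ~30 m/s (~110 km/h) about 100 s
    % after onset and decays as t^{-1/2}, consistent with the reported
    % "high-speed" migration of order 100 km/hour.
    ```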

  11. Closha: bioinformatics workflow system for the analysis of massive sequencing data.

    PubMed

    Ko, GunHwan; Kim, Pan-Gyu; Yoon, Jongcheol; Han, Gukhee; Park, Seong-Jin; Song, Wangho; Lee, Byungwook

    2018-02-19

    While next-generation sequencing (NGS) costs have fallen in recent years, the cost and complexity of computation remain substantial obstacles to the use of NGS in bio-medical care and genomic research. The rapidly increasing amounts of data available from the new high-throughput methods have made data processing infeasible without automated pipelines. The integration of data and analytic resources into workflow systems provides a solution to the problem by simplifying the task of data analysis. To address this challenge, we developed a cloud-based workflow management system, Closha, to provide fast and cost-effective analysis of massive genomic data. We implemented complex workflows making optimal use of high-performance computing clusters. Closha allows users to create multi-step analyses using drag and drop functionality and to modify the parameters of pipeline tools. Users can also import Galaxy pipelines into Closha. Closha is a hybrid system that enables users to run both traditional analysis tools and MapReduce-based big data analysis programs simultaneously in a single pipeline. Thus, the execution of analytics algorithms can be parallelized, speeding up the whole process. We also developed a high-speed data transmission solution, KoDS, to transmit a large amount of data at a fast rate. KoDS has a file transfer speed of up to 10 times that of normal FTP and HTTP. The computer hardware for Closha is 660 CPU cores and 800 TB of disk storage, enabling 500 jobs to run at the same time. Closha is a scalable, cost-effective, and publicly available web service for large-scale genomic data analysis. Closha supports the reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Closha provides a user-friendly interface that helps genomic scientists derive accurate results from NGS platform data. The Closha cloud server is freely available for use at http://closha.kobic.re.kr/.

  12. FRESCO: Referential compression of highly similar sequences.

    PubMed

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
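
    To make the idea of referential compression concrete, here is a naive, illustrative sketch (greedy longest-match encoding against a reference); FRESCO's actual indexing and encoding are far more efficient:

    ```python
    def referential_compress(target, reference, min_match=4):
        """Encode `target` as (ref_offset, length) matches against `reference`,
        plus raw literals. Brute-force matching, for illustration only."""
        out, i = [], 0
        while i < len(target):
            best_len, best_off = 0, -1
            for off in range(len(reference)):
                l = 0
                while (off + l < len(reference) and i + l < len(target)
                       and reference[off + l] == target[i + l]):
                    l += 1
                if l > best_len:
                    best_len, best_off = l, off
            if best_len >= min_match:
                out.append(("match", best_off, best_len))
                i += best_len
            else:
                out.append(("literal", target[i]))
                i += 1
        return out

    def referential_decompress(ops, reference):
        parts = []
        for op in ops:
            if op[0] == "match":
                _, off, length = op
                parts.append(reference[off:off + length])
            else:
                parts.append(op[1])
        return "".join(parts)

    ref = "ACGTACGTTTGACCA"
    tgt = "ACGTACGATTGACCA"          # one substitution relative to ref
    ops = referential_compress(tgt, ref)
    assert referential_decompress(ops, ref) == tgt
    print(ops)                        # mostly (offset, length) references
    ```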

  13. Centrifuge: rapid and sensitive classification of metagenomic sequences

    PubMed Central

    Song, Li; Breitwieser, Florian P.

    2016-01-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. PMID:27852649
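
    The core primitive of a BWT/FM-index classifier is backward search; the toy sketch below illustrates that primitive on a short string and is not Centrifuge's implementation:

    ```python
    def bwt(text):
        """Burrows-Wheeler transform via full rotation sort (toy-scale only)."""
        text += "$"                                   # unique end-of-string sentinel
        rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
        return "".join(rot[-1] for rot in rotations)

    def fm_index(bwt_str):
        """Build the C array and cumulative occurrence table used by FM-index search."""
        counts, occ, running = {}, [], {}
        for ch in bwt_str:
            running[ch] = running.get(ch, 0) + 1
            occ.append(dict(running))                 # counts of each char in bwt[0..i]
            counts[ch] = counts.get(ch, 0) + 1
        c, total = {}, 0
        for ch in sorted(counts):
            c[ch] = total                             # chars lexicographically smaller
            total += counts[ch]
        return c, occ

    def backward_search(pattern, bwt_str, c, occ):
        """Count occurrences of `pattern` using LF-mapping over a half-open range."""
        lo, hi = 0, len(bwt_str)
        for ch in reversed(pattern):
            if ch not in c:
                return 0
            lo = c[ch] + (occ[lo - 1].get(ch, 0) if lo > 0 else 0)
            hi = c[ch] + occ[hi - 1].get(ch, 0)
            if lo >= hi:
                return 0
        return hi - lo

    genome = "GATTACAGATTACA"
    b = bwt(genome)
    c, occ = fm_index(b)
    print(backward_search("ATTA", b, c, occ))         # -> 2
    ```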

  14. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    PubMed

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
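
    As a small illustration of k-mer counting for fast distance estimation (in the spirit of, but not identical to, MUSCLE's kmer distance), consider:

    ```python
    from collections import Counter

    def kmer_profile(seq, k=3):
        """Count overlapping k-mers in a sequence."""
        return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

    def kmer_distance(a, b, k=3):
        """Fraction of k-mers not shared between two sequences (1 - similarity);
        this mirrors the general idea, not MUSCLE's exact formula."""
        pa, pb = kmer_profile(a, k), kmer_profile(b, k)
        shared = sum((pa & pb).values())              # multiset intersection
        denom = min(len(a), len(b)) - k + 1
        return 1.0 - shared / denom if denom > 0 else 1.0

    # Two invented, closely related protein fragments
    print(kmer_distance("MKVLATALSGGK", "MKVLSTALSGGR"))   # -> 0.4
    ```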

  15. Mapping RNA-seq Reads with STAR

    PubMed Central

    Dobin, Alexander; Gingeras, Thomas R.

    2015-01-01

    Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, signal visualization, and so forth. In this unit we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is Open Source software that can be run on Unix, Linux or Mac OS X systems. PMID:26334920

  16. Mapping RNA-seq Reads with STAR.

    PubMed

    Dobin, Alexander; Gingeras, Thomas R

    2015-09-03

    Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates, providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, and signal visualization. In this unit, we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is open source software that can be run on Unix, Linux, or Mac OS X systems. Copyright © 2015 John Wiley & Sons, Inc.
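
    A typical invocation, shown here as a hedged Python wrapper, uses commonly documented STAR options; the paths, sample names and thread count are placeholders, and the STAR manual remains the authoritative reference for parameters:

    ```python
    import subprocess

    # Hypothetical paths; STAR must be installed and a genome index pre-built
    # beforehand (e.g. with `STAR --runMode genomeGenerate ...`).
    genome_dir = "/data/indices/GRCh38_STAR"
    fastq_r1, fastq_r2 = "sample_R1.fastq.gz", "sample_R2.fastq.gz"

    cmd = [
        "STAR",
        "--runThreadN", "8",
        "--genomeDir", genome_dir,
        "--readFilesIn", fastq_r1, fastq_r2,
        "--readFilesCommand", "zcat",                 # gzipped FASTQ input
        "--outSAMtype", "BAM", "SortedByCoordinate",  # coordinate-sorted BAM output
        "--quantMode", "GeneCounts",                  # per-gene read counts
        "--outFileNamePrefix", "sample_",
    ]
    subprocess.run(cmd, check=True)
    ```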

  17. Nanowire-nanopore transistor sensor for DNA detection during translocation

    NASA Astrophysics Data System (ADS)

    Xie, Ping; Xiong, Qihua; Fang, Ying; Qing, Quan; Lieber, Charles

    2011-03-01

    Nanopore sequencing, a promising low-cost, high-throughput sequencing technique, was proposed more than a decade ago. Owing to the incompatibility between the small ionic current signal and the fast translocation speed, and the technical difficulty of large-scale integration of nanopores for direct ionic-current sequencing, alternative methods relying on integrated DNA sensors have been proposed, such as capacitive coupling or tunnelling current, but none of them had been experimentally demonstrated. Here we show, for the first time, an amplified sensor signal experimentally recorded from a nanowire-nanopore field-effect transistor sensor during DNA translocation. Independent multi-channel recording was also demonstrated for the first time. Our results suggest that the signal arises from a highly localized potential change caused by DNA translocation under non-balanced buffer conditions. Given that this method may produce larger signals for smaller nanopores, we hope our experiment can be a starting point for a new generation of nanopore sequencing devices with larger signals, higher bandwidth and large-scale multiplexing capability, finally realizing the ultimate goal of low-cost, high-throughput sequencing.

  18. Vision-based measurement for rotational speed by improving Lucas-Kanade template tracking algorithm.

    PubMed

    Guo, Jie; Zhu, Chang'an; Lu, Siliang; Zhang, Dashan; Zhang, Chunyu

    2016-09-01

    Rotational angle and speed are important parameters for condition monitoring and fault diagnosis of rotating machineries, and their measurement is useful in precision machining and early warning of faults. In this study, a novel vision-based measurement algorithm is proposed to complete this task. A high-speed camera is first used to capture the video of the rotational object. To extract the rotational angle, the template-based Lucas-Kanade algorithm is introduced to complete motion tracking by aligning the template image in the video sequence. Given the special case of the nonplanar surface of the cylindrical object, a nonlinear transformation is designed to model the rotation tracking. In spite of its unconventional and complex form, the transformation can realize angle extraction concisely with only one parameter. A simulation is then conducted to verify the tracking effect, and a practical tracking strategy is further proposed to track the video sequence consecutively. Based on the proposed algorithm, instantaneous rotational speed (IRS) can be measured accurately and efficiently. Finally, the effectiveness of the proposed algorithm is verified on a brushless direct current motor test rig through comparison with results obtained by the microphone. Experimental results demonstrate that the proposed algorithm can accurately extract rotational angles and can measure the IRS with the advantage of being noncontact and effective.
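
    The paper's template-based Lucas-Kanade tracker with a custom nonlinear warp is specific to that work; the hedged sketch below substitutes OpenCV's sparse pyramidal LK point tracking about an assumed, known rotation centre merely to illustrate how tracked image motion converts to instantaneous rotational speed:

    ```python
    import cv2
    import numpy as np

    def instantaneous_speed(video_path, center, fps):
        """Estimate rotational speed (rev/s) per frame pair by tracking corner
        features with sparse pyramidal Lucas-Kanade flow and converting their
        angular displacement about a known rotation centre (cx, cy)."""
        cap = cv2.VideoCapture(video_path)
        ok, frame = cap.read()
        prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev, maxCorners=50, qualityLevel=0.01,
                                      minDistance=10)
        speeds = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev, gray, pts, None)
            good_old = pts[status.ravel() == 1].reshape(-1, 2) - center
            good_new = nxt[status.ravel() == 1].reshape(-1, 2) - center
            # per-feature angular displacement about the rotation centre
            dtheta = np.arctan2(good_new[:, 1], good_new[:, 0]) - \
                     np.arctan2(good_old[:, 1], good_old[:, 0])
            dtheta = np.arctan2(np.sin(dtheta), np.cos(dtheta))   # wrap to [-pi, pi]
            speeds.append(np.median(dtheta) * fps / (2 * np.pi))
            prev, pts = gray, nxt
        cap.release()
        return np.asarray(speeds)
    ```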

  19. Narrowband Interference Suppression in Spread Spectrum Communication Systems

    DTIC Science & Technology

    1995-12-01

    receiver input. As stated earlier, these waveforms must be sampled to obtain the discrete time sequences. The sampling theorem states: A bandlimited... From the FFT chips, the data is passed to a Plessey PDSP16330 Pythagoras Processor. The 16330 is a high-speed digital CMOS IC that converts real and...

  20. Volumetric flow imaging reveals the importance of vortex ring formation in squid swimming tail-first and arms-first.

    PubMed

    Bartol, Ian K; Krueger, Paul S; Jastrebsky, Rachel A; Williams, Sheila; Thompson, Joseph T

    2016-02-01

    Squids use a pulsed jet and fin movements to swim both arms-first (forward) and tail-first (backward). Given the complexity of the squid multi-propulsor system, 3D velocimetry techniques are required for the comprehensive study of wake dynamics. Defocusing digital particle tracking velocimetry, a volumetric velocimetry technique, and high-speed videography were used to study arms-first and tail-first swimming of brief squid Lolliguncula brevis over a broad range of speeds [0-10 dorsal mantle lengths (DML) s(-1)] in a swim tunnel. Although there was considerable complexity in the wakes of these multi-propulsor swimmers, 3D vortex rings and their derivatives were prominent reoccurring features during both tail-first and arms-first swimming, with the greatest jet and fin flow complexity occurring at intermediate speeds (1.5-3.0 DML s(-1)). The jet generally produced the majority of thrust during rectilinear swimming, increasing in relative importance with speed, and the fins provided no thrust at speeds >4.5 DML s(-1). For both swimming orientations, the fins sometimes acted as stabilizers, producing negative thrust (drag), and consistently provided lift at low/intermediate speeds (<2.0 DML s(-1)) to counteract negative buoyancy. Propulsive efficiency (η) increased with speed irrespective of swimming orientation, and η for swimming sequences with clear isolated jet vortex rings was significantly greater (η=78.6±7.6%, mean±s.d.) than that for swimming sequences with clear elongated regions of concentrated jet vorticity (η=67.9±19.2%). This study reveals the complexity of 3D vortex wake flows produced by nekton with hydrodynamically distinct propulsors. © 2016. Published by The Company of Biologists Ltd.

  1. Reactivity to stress and the cognitive components of math disability in grade 1 children.

    PubMed

    MacKinnon McQuarrie, Maureen A; Siegel, Linda S; Perry, Nancy E; Weinberg, Joanne

    2014-01-01

    This study investigated the relationship among working memory, processing speed, math performance, and reactivity to stress in 83 Grade 1 children. Specifically, 39 children with math disability (MD) were compared to 44 children who are typically achieving (TA) in mathematics. It is the first study to use a physiological index of stress (salivary cortisol levels) to measure children's reactivity while completing tasks that assess the core components of MD: working memory for numbers, working memory for words, digits backward, letter number sequence, digit span forward, processing speed for numbers and words, block rotation, and math tasks. Grade 1 children with MD obtained significantly lower scores on the letter number sequence and quantitative concepts tasks. Higher levels of reactivity significantly predicted poorer performance on the working memory for numbers, working memory for words, and quantitative concepts tasks for Grade 1 children, regardless of math ability. Grade 1 children with MD and higher reactivity had significantly lower scores on the letter number sequence task than the children with MD and low reactivity. The findings suggest that high reactivity impairs performance in working memory and math tasks in Grade 1 children, and young children with high reactivity may benefit from interventions aimed at lowering anxiety in stressful situations, which may improve learning. © Hammill Institute on Disabilities 2012.

  2. miRanalyzer: an update on the detection and analysis of microRNAs in high-throughput sequencing experiments

    PubMed Central

    Hackenberg, Michael; Rodríguez-Ezpeleta, Naiara; Aransay, Ana M.

    2011-01-01

    We present a new version of miRanalyzer, a web server and stand-alone tool for the detection of known and the prediction of new microRNAs in high-throughput sequencing experiments. The new version has been notably improved regarding speed, scope and available features. Alignments are now based on the ultrafast short-read aligner Bowtie (also granting colour-space support, allowing mismatches and improving speed), and 31 genomes, including 6 plant genomes, can now be analysed (the previous version contained only 7). Differences between plant and animal microRNAs have been taken into account in the prediction models, and differential expression of both known and predicted microRNAs between two conditions can be calculated. Additionally, consensus sequences of predicted mature and precursor microRNAs can be obtained from multiple samples, which increases the reliability of the predicted microRNAs. Finally, a stand-alone version of miRanalyzer that is based on a local and easily customized database is also available; this allows the user to have more control over certain parameters as well as to use specific data such as unpublished assemblies or other libraries that are not available in the web server. miRanalyzer is available at http://bioinfo2.ugr.es/miRanalyzer/miRanalyzer.php. PMID:21515631

  3. Reactivity to Stress and the Cognitive Components of Math Disability in Grade 1 Children

    PubMed Central

    MacKinnon McQuarrie, Maureen A.; Siegel, Linda S.; Perry, Nancy E.; Weinberg, Joanne

    2016-01-01

    This study investigated the relationship among working memory, processing speed, math performance, and reactivity to stress in 83 Grade 1 children. Specifically, 39 children with math disability (MD) were compared to 44 children who are typically achieving (TA) in mathematics. It is the first study to use a physiological index of stress (salivary cortisol levels) to measure children’s reactivity while completing tasks that assess the core components of MD: working memory for numbers, working memory for words, digits backward, letter number sequence, digit span forward, processing speed for numbers and words, block rotation, and math tasks. Grade 1 children with MD obtained significantly lower scores on the letter number sequence and quantitative concepts tasks. Higher levels of reactivity significantly predicted poorer performance on the working memory for numbers, working memory for words, and quantitative concepts tasks for Grade 1 children, regardless of math ability. Grade 1 children with MD and higher reactivity had significantly lower scores on the letter number sequence task than the children with MD and low reactivity. The findings suggest that high reactivity impairs performance in working memory and math tasks in Grade 1 children, and young children with high reactivity may benefit from interventions aimed at lowering anxiety in stressful situations, which may improve learning. PMID:23124381

  4. Viterbi equalization for long-distance, high-speed underwater laser communication

    NASA Astrophysics Data System (ADS)

    Hu, Siqi; Mi, Le; Zhou, Tianhua; Chen, Weibiao

    2017-07-01

    In long-distance, high-speed underwater laser communication, because of the strong absorption and scattering processes, the laser pulse is stretched with the increase in communication distance and the decrease in water clarity. The maximum communication bandwidth is limited by laser-pulse stretching. Improving the communication rate increases the intersymbol interference (ISI). To reduce the effect of ISI, the Viterbi equalization (VE) algorithm is used to estimate the maximum-likelihood receiving sequence. The Monte Carlo method is used to simulate the stretching of the received laser pulse and the maximum communication rate at a wavelength of 532 nm in Jerlov IB and Jerlov II water channels with communication distances of 80, 100, and 130 m, respectively. The high-data rate communication performance for the VE and hard-decision algorithms is compared. The simulation results show that the VE algorithm can be used to reduce the ISI by selecting the minimum error path. The trade-off between the high-data rate communication performance and minor bit-error rate performance loss makes VE a promising option for applications in long-distance, high-speed underwater laser communication systems.
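
    To illustrate the maximum-likelihood sequence estimation that VE performs, here is a minimal Viterbi equalizer for an assumed two-tap ISI channel with on-off keying; the channel taps and noise level are illustrative, not measured underwater responses:

    ```python
    import numpy as np

    def viterbi_equalize(received, h=(1.0, 0.6)):
        """MLSE over a known 2-tap ISI channel r[k] = h0*s[k] + h1*s[k-1] + noise,
        with symbols s in {0, 1}. State = previous symbol; branch metric =
        squared Euclidean error. The channel starts empty (s[-1] = 0)."""
        h0, h1 = h
        INF = float("inf")
        cost = {0: 0.0, 1: INF}
        paths = {0: [], 1: []}
        for r in received:
            new_cost, new_paths = {}, {}
            for s in (0, 1):               # candidate current symbol
                best, best_prev = INF, None
                for prev in (0, 1):        # candidate previous symbol (trellis state)
                    metric = cost[prev] + (r - (h0 * s + h1 * prev)) ** 2
                    if metric < best:
                        best, best_prev = metric, prev
                new_cost[s] = best
                new_paths[s] = paths[best_prev] + [s]
            cost, paths = new_cost, new_paths
        return paths[min(cost, key=cost.get)]

    # Simulated stretched pulses: transmit bits through the ISI channel plus noise
    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, 20)
    h0, h1 = 1.0, 0.6
    rx = h0 * bits + h1 * np.concatenate(([0], bits[:-1])) + 0.1 * rng.normal(size=20)
    print(np.array_equal(viterbi_equalize(rx), list(bits)))   # usually True at this SNR
    ```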

  5. A high-speed network for cardiac image review.

    PubMed

    Elion, J L; Petrocelli, R R

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage.

  6. A high-speed network for cardiac image review.

    PubMed Central

    Elion, J. L.; Petrocelli, R. R.

    1994-01-01

    A high-speed fiber-based network for the transmission and display of digitized full-motion cardiac images has been developed. Based on Asynchronous Transfer Mode (ATM), the network is scaleable, meaning that the same software and hardware is used for a small local area network or for a large multi-institutional network. The system can handle uncompressed digital angiographic images, considered to be at the "high-end" of the bandwidth requirements. Along with the networking, a general-purpose multi-modality review station has been implemented without specialized hardware. This station can store a full injection sequence in "loop RAM" in a 512 x 512 format, then interpolate to 1024 x 1024 while displaying at 30 frames per second. The network and review stations connect to a central file server that uses a virtual file system to make a large high-speed RAID storage disk and associated off-line storage tapes and cartridges all appear as a single large file system to the software. In addition to supporting archival storage and review, the system can also digitize live video using high-speed Direct Memory Access (DMA) from the frame grabber to present uncompressed data to the network. Fully functional prototypes have provided the proof of concept, with full deployment in the institution planned as the next stage. PMID:7949964

  7. High speed nucleic acid sequencing

    DOEpatents

    Korlach, Jonas [Ithaca, NY; Webb, Watt W [Ithaca, NY; Levene, Michael [Ithaca, NY; Turner, Stephen [Ithaca, NY; Craighead, Harold G [Ithaca, NY; Foquet, Mathieu [Ithaca, NY

    2011-05-17

    The present invention is directed to a method of sequencing a target nucleic acid molecule having a plurality of bases. In principle, the temporal order of base additions during the polymerization reaction is measured on a molecule of nucleic acid. Each type of labeled nucleotide comprises an acceptor fluorophore attached to a phosphate portion of the nucleotide such that the fluorophore is removed upon incorporation into a growing strand. A fluorescent signal is emitted via fluorescence resonance energy transfer between the donor fluorophore and the acceptor fluorophore as each nucleotide is incorporated into the growing strand. The sequence is deduced by identifying which base is being incorporated into the growing strand.

  8. LipidSeq: a next-generation clinical resequencing panel for monogenic dyslipidemias.

    PubMed

    Johansen, Christopher T; Dubé, Joseph B; Loyzer, Melissa N; MacDonald, Austin; Carter, David E; McIntyre, Adam D; Cao, Henian; Wang, Jian; Robinson, John F; Hegele, Robert A

    2014-04-01

    We report the design of a targeted resequencing panel for monogenic dyslipidemias, LipidSeq, for the purpose of replacing Sanger sequencing in the clinical detection of dyslipidemia-causing variants. We also evaluate the performance of the LipidSeq approach versus Sanger sequencing in 84 patients with a range of phenotypes including extreme blood lipid concentrations as well as additional dyslipidemias and related metabolic disorders. The panel performs well, with high concordance (95.2%) in samples with known mutations based on Sanger sequencing and a high detection rate (57.9%) of mutations likely to be causative for disease in samples not previously sequenced. Clinical implementation of LipidSeq has the potential to aid in the molecular diagnosis of patients with monogenic dyslipidemias with a high degree of speed and accuracy and at lower cost than either Sanger sequencing or whole exome sequencing. Furthermore, LipidSeq will help to provide a more focused picture of monogenic and polygenic contributors that underlie dyslipidemia while excluding the discovery of incidental pathogenic clinically actionable variants in nonmetabolism-related genes, such as oncogenes, that would otherwise be identified by a whole exome approach, thus minimizing potential ethical issues.

  9. LipidSeq: a next-generation clinical resequencing panel for monogenic dyslipidemias[S

    PubMed Central

    Johansen, Christopher T.; Dubé, Joseph B.; Loyzer, Melissa N.; MacDonald, Austin; Carter, David E.; McIntyre, Adam D.; Cao, Henian; Wang, Jian; Robinson, John F.; Hegele, Robert A.

    2014-01-01

    We report the design of a targeted resequencing panel for monogenic dyslipidemias, LipidSeq, for the purpose of replacing Sanger sequencing in the clinical detection of dyslipidemia-causing variants. We also evaluate the performance of the LipidSeq approach versus Sanger sequencing in 84 patients with a range of phenotypes including extreme blood lipid concentrations as well as additional dyslipidemias and related metabolic disorders. The panel performs well, with high concordance (95.2%) in samples with known mutations based on Sanger sequencing and a high detection rate (57.9%) of mutations likely to be causative for disease in samples not previously sequenced. Clinical implementation of LipidSeq has the potential to aid in the molecular diagnosis of patients with monogenic dyslipidemias with a high degree of speed and accuracy and at lower cost than either Sanger sequencing or whole exome sequencing. Furthermore, LipidSeq will help to provide a more focused picture of monogenic and polygenic contributors that underlie dyslipidemia while excluding the discovery of incidental pathogenic clinically actionable variants in nonmetabolism-related genes, such as oncogenes, that would otherwise be identified by a whole exome approach, thus minimizing potential ethical issues. PMID:24503134

  10. Blinking characterization from high speed video records. Application to biometric authentication

    PubMed Central

    2018-01-01

    The evaluation of eye blinking has been used for the diagnosis of neurological disorders and fatigue. Despite the extensive literature, no objective method has been found to analyze its kinematic and dynamic behavior. A non-contact technique based on the high-speed recording of the light reflected by the eyelid in the blinking process and the off-line processing of the sequence is presented. It allows the start and end of a blink to be determined objectively, in addition to obtaining different physical magnitudes: position, speed and eyelid acceleration, as well as the power, work and mechanical impulse developed by the muscles involved in the physiological process. The parameters derived from these magnitudes provide a unique set of features that can be used for biometric authentication. This possibility has been tested with a limited number of subjects with a correct identification rate of up to 99.7%, thus showing the potential application of the method. PMID:29734389
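
    As a sketch of how the listed magnitudes can be derived from a sampled eyelid position trace (with an assumed effective eyelid mass, since the paper's dynamic model is not given in the abstract):

    ```python
    import numpy as np

    def blink_kinematics(position_mm, fps, effective_mass_kg=2e-3):
        """Derive kinematic and dynamic magnitudes from an eyelid position trace.
        The effective eyelid mass is a placeholder value, not from the paper."""
        t = np.arange(len(position_mm)) / fps
        x = np.asarray(position_mm, dtype=float) * 1e-3   # mm -> m
        v = np.gradient(x, t)                             # eyelid speed (m/s)
        a = np.gradient(v, t)                             # acceleration (m/s^2)
        force = effective_mass_kg * a                     # F = m a
        power = force * v                                 # instantaneous power (W)
        work = np.trapz(np.abs(power), t)                 # mechanical work (J)
        impulse = np.trapz(force, t)                      # mechanical impulse (N*s)
        return v, a, power, work, impulse
    ```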

  11. A novel architecture of recovered data comparison for high speed clock and data recovery

    NASA Astrophysics Data System (ADS)

    Gao, Susan; Li, Fei; Wang, Zhigong; Cui, Hongliang

    2005-05-01

    A clock and data recovery (CDR) circuit is one of the crucial blocks in high-speed serial link communication systems. The data received in these systems are asynchronous and noisy, requiring that a clock be extracted to allow synchronous operations. Furthermore, the data must be "retimed" so that the jitter accumulated during transmission is removed. This paper presents a novel CDR architecture that is very tolerant to long sequences of serial ones or zeros and also robust to occasional long absences of transitions. The design is based on the observation that a basic clock recovery scheme, with a separate clock recovery circuit (CRC) and data decision circuit, would generate a high-jitter clock when the received non-return-to-zero (NRZ) data contain long sequences of ones or zeros. To eliminate this drawback, the proposed architecture incorporates a data decision circuit within the phase-locked loop (PLL) CRC. In addition, a new phase detector (PD) is proposed, which is easy to implement and robust at high speed. This PD is functional with a random input and is automatically disabled both in the locked state and during long absences of transitions. The voltage-controlled oscillator (VCO) is also carefully designed to suppress jitter. Owing to the high stability, the jitter is greatly reduced when the loop is locked. Simulation results of such a CDR working at 1.25 Gb/s, particularly for 1000BASE-X Gigabit Ethernet, using TSMC 0.25 μm technology are presented to prove the feasibility of this architecture. An additional CDR based on an edge-detection architecture is also built into the circuit for performance comparison.

  12. Parallel human genome analysis: microarray-based expression monitoring of 1000 genes.

    PubMed Central

    Schena, M; Shalon, D; Heller, R; Chai, A; Brown, P O; Davis, R W

    1996-01-01

    Microarrays containing 1046 human cDNAs of unknown sequence were printed on glass with high-speed robotics. These 1.0-cm2 DNA "chips" were used to quantitatively monitor differential expression of the cognate human genes using a highly sensitive two-color hybridization assay. Array elements that displayed differential expression patterns under given experimental conditions were characterized by sequencing. The identification of known and novel heat shock and phorbol ester-regulated genes in human T cells demonstrates the sensitivity of the assay. Parallel gene analysis with microarrays provides a rapid and efficient method for large-scale human gene discovery. PMID:8855227

  13. TOPICAL REVIEW: Integrated genetic analysis microsystems

    NASA Astrophysics Data System (ADS)

    Lagally, Eric T.; Mathies, Richard A.

    2004-12-01

    With the completion of the Human Genome Project and the ongoing DNA sequencing of the genomes of other animals, bacteria, plants and others, a wealth of new information about the genetic composition of organisms has become available. However, as the demand for sequence information grows, so does the workload required both to generate this sequence and to use it for targeted genetic analysis. Microfabricated genetic analysis systems are well poised to assist in the collection and use of these data through increased analysis speed, lower analysis cost and higher parallelism leading to increased assay throughput. In addition, such integrated microsystems may point the way to targeted genetic experiments on single cells and in other areas that are otherwise very difficult. Concomitant with these advantages, such systems, when fully integrated, should be capable of forming portable systems for high-speed in situ analyses, enabling a new standard in disciplines such as clinical chemistry, forensics, biowarfare detection and epidemiology. This review will discuss the various technologies available for genetic analysis on the microscale, and efforts to integrate them to form fully functional robust analysis devices.

  14. Ramped-Amplitude Cross Polarization in Magic-Angle-Spinning NMR

    NASA Astrophysics Data System (ADS)

    Metz, G.; Wu, X. L.; Smith, S. O.

    The Hartmann-Hahn matching profile in CP-MAS NMR shows a strong mismatch dependence if the MAS frequency is on the order of the dipolar couplings in the sample. Under these conditions, the profile breaks down into a series of narrow matching bands separated by the spinning speed, and it becomes difficult to establish and maintain an efficient matching condition. Variable-amplitude CP (VACP), as introduced previously (Peersen et al., J. Magn. Reson. A104, 334, 1993), has been proven to be effective for restoring flat profiles at high spinning speeds. Here, a refined implementation of VACP using a ramped-amplitude cross-polarization sequence (RAMP-CP) is described. The order of the amplitude modulation is shown to be of importance for the cross-polarization process. The new pulse sequence with a linear amplitude ramp is not only easier to set up but also improves the performance of the variable-amplitude experiment in that it produces flat profiles over a wider range of matching conditions even with short total contact times. An increase in signal intensity is obtained compared to both conventional CP and the originally proposed VACP sequence.

  15. The use of uncalibrated roadside CCTV cameras to estimate mean traffic speed

    DOT National Transportation Integrated Search

    2001-12-01

    In this report, we present a novel approach for estimating traffic speed using a sequence of images from an un-calibrated camera. We assert that exact calibration is not necessary to estimate speed. Instead, to estimate speed, we use: (1) geometric r...

  16. Sequencing sit-to-stand and upright posture for mobility limitation assessment: determination of the timing of the task phases from force platform data.

    PubMed

    Mazzà, Claudia; Zok, Mounir; Della Croce, Ugo

    2005-06-01

    The identification of quantitative tools to assess an individual's mobility limitation is a complex and challenging task. Several motor tasks have been designated as potential indicators of mobility limitation. In this study, a multiple motor task obtained by sequencing sit-to-stand and upright posture was used. Algorithms based on data obtained exclusively from a single force platform were developed to detect the timing of the motor task phases (sit-to-stand, preparation to the upright posture and upright posture). To test these algorithms, an experimental protocol inducing predictable changes in the acquired signals was designed. Twenty-two young, able-bodied subjects performed the task in four different conditions: self-selected natural and high speed with feet kept together, and self-selected natural and high speed with feet pelvis-width apart. The proposed algorithms effectively detected the timing of the task phases, the duration of which was sensitive to the four different experimental conditions. As expected, the duration of the sit-to-stand was sensitive to the speed of the task and not to the foot position, while the duration of the preparation to the upright posture was sensitive to foot position but not to speed. In addition to providing a simple and effective description of the execution of the motor task, the correct timing of the studied multiple task could facilitate the accurate determination of variables descriptive of the single isolated phases, allowing for a more thorough description of the motor task and therefore could contribute to the development of effective quantitative functional evaluation tests.
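
    The paper's force-platform algorithms are not detailed in the abstract; the following is only a generic, threshold-based illustration of reading phase boundaries from the vertical ground reaction force, with invented thresholds:

    ```python
    import numpy as np

    def detect_phases(vertical_grf, fps, body_weight_n):
        """Generic illustration (not the paper's algorithm): the sit-to-stand
        phase is taken to start when the force rises a set amount above the
        quiet-sitting baseline, and the upright-posture phase to start once
        the force settles within a narrow band around body weight."""
        f = np.asarray(vertical_grf, dtype=float)
        baseline = np.median(f[:int(fps)])            # assume ~1 s of quiet sitting first
        rise_thr = baseline + 0.10 * body_weight_n    # onset threshold (invented)
        settle_band = 0.05 * body_weight_n            # +/-5% of body weight (invented)
        start = int(np.argmax(f > rise_thr))
        peak = int(np.argmax(f))
        settled = np.where(np.abs(f[peak:] - body_weight_n) < settle_band)[0]
        upright = peak + int(settled[0]) if settled.size else len(f) - 1
        return {"sit_to_stand_start_s": start / fps,
                "upright_posture_start_s": upright / fps}
    ```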

  17. Safety and EEG data quality of concurrent high-density EEG and high-speed fMRI at 3 Tesla

    PubMed Central

    Foged, Mette Thrane; Lindberg, Ulrich; Vakamudi, Kishore; Larsson, Henrik B. W.; Pinborg, Lars H.; Kjær, Troels W.; Fabricius, Martin; Svarer, Claus; Ozenne, Brice; Thomsen, Carsten; Beniczky, Sándor; Posse, Stefan

    2017-01-01

    Purpose Concurrent EEG and fMRI is increasingly used to characterize the spatial-temporal dynamics of brain activity. However, most studies to date have been limited to conventional echo-planar imaging (EPI). There is considerable interest in integrating recently developed high-speed fMRI methods with high-density EEG to increase temporal resolution and sensitivity for task-based and resting state fMRI, and for detecting interictal spikes in epilepsy. In the present study using concurrent high-density EEG and recently developed high-speed fMRI methods, we investigate safety of radiofrequency (RF) related heating, the effect of EEG on cortical signal-to-noise ratio (SNR) in fMRI, and assess EEG data quality. Materials and methods The study compared EPI, multi-echo EPI, multi-band EPI and multi-slab echo-volumar imaging pulse sequences, using clinical 3 Tesla MR scanners from two different vendors that were equipped with 64- and 256-channel MR-compatible EEG systems, respectively, and receive only array head coils. Data were collected in 11 healthy controls (3 males, age range 18–70 years) and 13 patients with epilepsy (8 males, age range 21–67 years). Three of the healthy controls were scanned with the 256-channel EEG system, the other subjects were scanned with the 64-channel EEG system. Scalp surface temperature, SNR in occipital cortex and head movement were measured with and without the EEG cap. The degree of artifacts and the ability to identify background activity was assessed by visual analysis by a trained expert in the 64 channel EEG data (7 healthy controls, 13 patients). Results RF induced heating at the surface of the EEG electrodes during a 30-minute scan period with stable temperature prior to scanning did not exceed 1.0° C with either EEG system and any of the pulse sequences used in this study. There was no significant decrease in cortical SNR due to the presence of the EEG cap (p > 0.05). No significant differences in the visually analyzed EEG data quality were found between EEG recorded during high-speed fMRI and during conventional EPI (p = 0.78). Residual ballistocardiographic artifacts resulted in 58% of EEG data being rated as poor quality. Conclusion This study demonstrates that high-density EEG can be safely implemented in conjunction with high-speed fMRI and that high-speed fMRI does not adversely affect EEG data quality. However, the deterioration of the EEG quality due to residual ballistocardiographic artifacts remains a significant constraint for routine clinical applications of concurrent EEG-fMRI. PMID:28552957

  18. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
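
    A simplified per-pixel view of the temporally compressive reconstruction (ignoring the disparity and shutter-response corrections the authors apply) can be sketched as a small regularized linear inversion:

    ```python
    import numpy as np

    def reconstruct_pixel(measurements, codes, ridge=1e-2):
        """Recover a pixel's time-resolved intensity x (length T) from coded
        measurements y (length A, one per aperture), assuming y = S @ x where
        S holds the known 0/1 temporal shutter codes. Ridge regression handles
        the underdetermined system (A < T); real reconstructions add stronger
        spatio-temporal priors."""
        S = np.asarray(codes, dtype=float)            # shape (A, T)
        y = np.asarray(measurements, dtype=float)     # shape (A,)
        A_mat = S.T @ S + ridge * np.eye(S.shape[1])
        return np.linalg.solve(A_mat, S.T @ y)

    # Toy demonstration: 15 apertures compressing 32 time samples
    rng = np.random.default_rng(1)
    codes = rng.integers(0, 2, size=(15, 32))
    truth = np.exp(-0.5 * ((np.arange(32) - 12) / 3.0) ** 2)   # a short flash
    y = codes @ truth
    estimate = reconstruct_pixel(y, codes)
    print(float(np.corrcoef(truth, estimate)[0, 1]))            # crude quality check
    ```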

  19. Goldstone R/D High Speed Data Acquisition System

    NASA Technical Reports Server (NTRS)

    Deutsch, L. J.; Jurgens, R. F.; Brokl, S. S.

    1984-01-01

    A digital data acquisition system that meets the requirements of several users (initially the planetary radar program) is planned for general use at Deep Space Station 14 (DSS 14). The system, now partially complete, is controlled by a VAX 11/780 computer that is programmed in high-level languages. A DEC Data Controller is included for moderate-speed data acquisition, low-speed data display, and for a digital interface to special user-provided devices. The high-speed data acquisition is performed in devices that are being designed and built at JPL. Analog IF signals are converted to a digitized 50 MHz real signal. This signal is filtered and mixed digitally to baseband, after which its phase code (a PN sequence in the case of planetary radar) is removed. It may then be accumulated (or averaged) and fed into the VAX through an FPS 5210 array processor. Further data processing before entering the VAX is thus possible (computation and accumulation of the power spectra, for example). The system is to be located in the research and development pedestal at DSS 14 for easy access by researchers in radio astronomy as well as telemetry processing and antenna arraying.

  20. Test Operations Procedure (TOP) 07-2-033 Weaponized Manned/Unmanned Aircraft

    DTIC Science & Technology

    2013-01-14

    Uncertainty Analysis; Presentation of Data... Testing. a. The purpose of testing is to confirm the predictions of engineering analysis, simulation, and subsystem tests. It is not to be... lunar illumination + 10 percent; laser hit point + 0.2 meter; sensor resolution target + 5 percent; weapon impact sequence: high-speed...

  1. Large area high-speed metrology SPM system.

    PubMed

    Klapetek, P; Valtr, M; Picco, L; Payton, O D; Martinek, J; Yacoot, A; Miles, M

    2015-02-13

    We present a large area high-speed measuring system capable of rapidly generating nanometre resolution scanning probe microscopy data over mm(2) regions. The system combines a slow moving but accurate large area XYZ scanner with a very fast but less accurate small area XY scanner. This arrangement enables very large areas to be scanned by stitching together the small, rapidly acquired, images from the fast XY scanner while simultaneously moving the slow XYZ scanner across the region of interest. In order to successfully merge the image sequences together two software approaches for calibrating the data from the fast scanner are described. The first utilizes the low uncertainty interferometric sensors of the XYZ scanner while the second implements a genetic algorithm with multiple parameter fitting during the data merging step of the image stitching process. The basic uncertainty components related to these high-speed measurements are also discussed. Both techniques are shown to successfully enable high-resolution, large area images to be generated at least an order of magnitude faster than with a conventional atomic force microscope.

  2. Large area high-speed metrology SPM system

    NASA Astrophysics Data System (ADS)

    Klapetek, P.; Valtr, M.; Picco, L.; Payton, O. D.; Martinek, J.; Yacoot, A.; Miles, M.

    2015-02-01

    We present a large area high-speed measuring system capable of rapidly generating nanometre resolution scanning probe microscopy data over mm2 regions. The system combines a slow moving but accurate large area XYZ scanner with a very fast but less accurate small area XY scanner. This arrangement enables very large areas to be scanned by stitching together the small, rapidly acquired, images from the fast XY scanner while simultaneously moving the slow XYZ scanner across the region of interest. In order to successfully merge the image sequences together two software approaches for calibrating the data from the fast scanner are described. The first utilizes the low uncertainty interferometric sensors of the XYZ scanner while the second implements a genetic algorithm with multiple parameter fitting during the data merging step of the image stitching process. The basic uncertainty components related to these high-speed measurements are also discussed. Both techniques are shown to successfully enable high-resolution, large area images to be generated at least an order of magnitude faster than with a conventional atomic force microscope.

  3. A microprogrammable radar controller

    NASA Technical Reports Server (NTRS)

    Law, D. C.

    1986-01-01

    The Wave Propagation Lab. has completed the design and construction of a microprogrammable radar controller for atmospheric wind profiling. Unlike some radar controllers using state machines or hardwired logic for radar timing, this design is a high-speed programmable sequencer with signal processing resources. A block diagram of the device is shown. The device is a single 8 1/2 inch by 10 1/2 inch printed circuit board and consists of three main subsections: (1) the host computer interface; (2) the microprogram sequencer; and (3) the signal processing circuitry. Each of these subsections is described in detail.

  4. Application of high-speed photography to chip refining

    NASA Astrophysics Data System (ADS)

    Stationwala, Mustafa I.; Miller, Charles E.; Atack, Douglas; Karnis, A.

    1991-04-01

    Several high speed photographic methods have been employed to elucidate the mechanistic aspects of producing mechanical pulp in a disc refiner. Material flow patterns of pulp in a refiner were previously recorded by means of a HYCAM camera and continuous lighting system, which provided cine pictures at up to 10,000 pps. In the present work an IMACON camera was used to obtain several series of high resolution, high speed photographs, each photograph containing an eight-frame sequence obtained at a framing rate of 100,000 pps. These high-resolution photographs made it possible to identify the nature of the fibrous material trapped on the bars of the stationary disc. Tangential movement of fibre flocs, during the passage of bars on the rotating disc over bars on the stationary disc, was also observed on the stator bars. In addition, using a cinestroboscopic technique, a large number of high resolution pictures were taken at three different positions of the rotating disc relative to the stationary disc. These pictures were statistically analyzed by computer to determine the fractional coverage of the bars of the stationary disc with pulp. Information obtained from these studies provides new insights into the mechanism of the refining process.

  5. Fast and low-cost structured light pattern sequence projection.

    PubMed

    Wissmann, Patrick; Forster, Frank; Schmitt, Robert

    2011-11-21

    We present a high-speed and low-cost approach for structured light pattern sequence projection. Using a fast rotating binary spatial light modulator, our method is potentially capable of projection frequencies in the kHz domain, while enabling pattern rasterization as low as 2 μm pixel size and inherently linear grayscale reproduction quantized at 12 bits/pixel or better. Due to the circular arrangement of the projected fringe patterns, we extend the widely used ray-plane triangulation method to ray-cone triangulation and provide a detailed description of the optical calibration procedure. Using the proposed projection concept in conjunction with the recently published coded phase shift (CPS) pattern sequence, we demonstrate high accuracy 3-D measurement at 200 Hz projection frequency and 20 Hz 3-D reconstruction rate. © 2011 Optical Society of America
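    The coded phase shift (CPS) sequence itself is not reproduced here; as a generic illustration of how a projected fringe sequence is decoded, the sketch below recovers the wrapped phase from a textbook N-step phase-shift image stack. The function name, frame ordering and synthetic test are assumptions, not the paper's method.

```python
import numpy as np

def wrapped_phase(images):
    """Recover wrapped phase from an N-step phase-shift fringe sequence.

    `images` has shape (N, H, W); frame n is assumed to carry a phase offset
    of 2*pi*n/N. This is the standard N-step formula, not the coded phase
    shift (CPS) scheme used in the paper.
    """
    n_steps = images.shape[0]
    shifts = 2 * np.pi * np.arange(n_steps) / n_steps
    num = np.tensordot(np.sin(shifts), images, axes=(0, 0))
    den = np.tensordot(np.cos(shifts), images, axes=(0, 0))
    return np.arctan2(-num, den)   # wrapped phase in (-pi, pi]

# Synthetic check: a known phase ramp is recovered up to 2*pi wrapping.
H, W, N = 64, 64, 4
phi = np.linspace(0, 3 * np.pi, W)[None, :].repeat(H, axis=0)
stack = np.array([0.5 + 0.5 * np.cos(phi + 2 * np.pi * k / N) for k in range(N)])
print(np.allclose(np.angle(np.exp(1j * (wrapped_phase(stack) - phi))), 0, atol=1e-6))
```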

  6. Recent patents of nanopore DNA sequencing technology: progress and challenges.

    PubMed

    Zhou, Jianfeng; Xu, Bingqian

    2010-11-01

    DNA sequencing techniques have witnessed rapid development in recent decades, primarily driven by the Human Genome Project. Among the proposed new techniques, the nanopore was considered a suitable candidate for single-molecule DNA sequencing with ultrahigh speed and very low cost. Several fabrication and modification techniques have been developed to produce robust and well-defined nanopore devices. Much effort has also been made to apply nanopores to analyzing the properties of DNA molecules. Compared with traditional sequencing techniques, the nanopore has demonstrated distinctive advantages on the main practical issues, such as sample preparation, sequencing speed, cost-effectiveness and read length. Although challenges remain, recent research on improving the capabilities of nanopores has shed light on the path toward the ultimate goal: sequencing an individual DNA strand at the single-nucleotide level. This patent review briefly highlights recent developments and technological achievements for DNA analysis and sequencing at the single-molecule level, focusing on nanopore-based methods.

  7. MHz-Rate NO PLIF Imaging in a Mach 10 Hypersonic Wind Tunnel

    NASA Technical Reports Server (NTRS)

    Jiang, N.; Webster, M.; Lempert, Walter R.; Miller, J. D.; Meyer, T. R.; Danehy, Paul M.

    2010-01-01

    NO PLIF imaging at repetition rates as high as 1 MHz is demonstrated in the NASA Langley 31 inch Mach 10 hypersonic wind tunnel. Approximately two hundred time-correlated image sequences, of between ten and twenty individual frames, were obtained over eight days of wind tunnel testing spanning two entries in March and September of 2009. The majority of the image sequences were obtained from the boundary layer of a 20° flat plate model, in which transition was induced using a variety of cylindrical and triangular shaped protuberances. The high speed image sequences captured a variety of laminar and transitional flow phenomena, ranging from mostly laminar flow, typically at lower Reynolds number and/or in the near wall region of the model, to highly transitional flow in which the temporal evolution and progression of characteristic streak instabilities and/or corkscrew-shaped vortices could be clearly identified. A series of image sequences were also obtained from a 20° compression ramp at a 10° angle of attack in which the temporal dynamics of the characteristic separated flow was captured in a time correlated manner.

  8. Global Processing Speed in Children With Low Reading Ability and in Children and Adults With Typical Reading Ability: Exploratory Factor Analytic Models

    PubMed Central

    Peter, Beate; Matsushita, Mark; Raskind, Wendy H.

    2013-01-01

    Purpose To investigate processing speed as a latent dimension in children with dyslexia and children and adults with typical reading skills. Method Exploratory factor analysis (FA) was based on a sample of multigenerational families, each ascertained through a child with dyslexia. Eleven measures—6 of them timed—represented verbal and nonverbal processes, alphabet writing, and motor sequencing in the hand and oral motor system. FA was conducted in 4 cohorts (all children, a subset of children with low reading scores, a subset of children with typical reading scores, and adults with typical reading scores; total N = 829). Results Processing speed formed the first factor in all cohorts. Both measures of motor sequencing speed loaded on the speed factor with the other timed variables. Children with poor reading scores showed lower speed factor scores than did typical peers. The speed factor was negatively correlated with age in the adults. Conclusions The speed dimension was observed independently of participant cohort, gender, and reading ability. Results are consistent with a unified theory of processing speed as a quadratic function of age in typical development and with slowed processing in poor readers. PMID:21081672

  9. Time optimal paths for high speed maneuvering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reister, D.B.; Lenhart, S.M.

    1993-01-01

    Recent theoretical results have completely solved the problem of determining the minimum length path for a vehicle with a minimum turning radius moving from an initial configuration to a final configuration. Time optimal paths for a constant speed vehicle are a subset of the minimum length paths. This paper uses the Pontryagin maximum principle to find time optimal paths for a constant speed vehicle. The time optimal paths consist of sequences of arcs of circles and straight lines. The maximum principle introduces concepts (dual variables, bang-bang solutions, singular solutions, and transversality conditions) that provide important insight into the nature of the time optimal paths. We explore the properties of the optimal paths and present some experimental results for a mobile robot following an optimal path.
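    Paths built from arcs of circles and straight segments are the Dubins-type paths. As a small, hedged illustration, the sketch below computes the length of one candidate family (left turn, straight, left turn) for a constant-speed vehicle with minimum turning radius r; a full planner would evaluate all six Dubins families and keep the shortest. The function and parameter names are illustrative, not the paper's formulation.

```python
import math

def lsl_length(start, goal, r):
    """Length of the Left-Straight-Left Dubins candidate between two
    configurations (x, y, heading). At constant speed, path length divided
    by speed is the travel time."""
    x0, y0, th0 = start
    x1, y1, th1 = goal
    # Centers of the left-turn circles at the start and goal configurations.
    c0 = (x0 - r * math.sin(th0), y0 + r * math.cos(th0))
    c1 = (x1 - r * math.sin(th1), y1 + r * math.cos(th1))
    dx, dy = c1[0] - c0[0], c1[1] - c0[1]
    d = math.hypot(dx, dy)               # straight-segment length
    phi = math.atan2(dy, dx)             # heading along the straight segment
    arc1 = (phi - th0) % (2 * math.pi)   # left turn from th0 onto the tangent
    arc2 = (th1 - phi) % (2 * math.pi)   # left turn from the tangent onto th1
    return d + r * (arc1 + arc2)

# Example: length (and hence travel time at unit speed) of the LSL candidate.
print(lsl_length((0.0, 0.0, 0.0), (10.0, 5.0, math.pi / 2), r=2.0))
```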

  10. 35-GHz radar sensor for automotive collision avoidance

    NASA Astrophysics Data System (ADS)

    Zhang, Jun

    1999-07-01

    This paper describes the development of a radar sensor system for automotive collision avoidance. Because a heavy truck may have a much larger radar cross section than a motorcyclist, the radar receiver must handle a large dynamic range. In addition, multiple targets moving at different speeds may confuse the echo spectrum, causing ambiguity between the range and speed of a target. To obtain more information about the target and background, and to cope with the large dynamic range and multiple targets, a continuous wave radar system that is both frequency modulated and phase modulated by pseudo-random binary sequences is described. An analysis of this double-modulation system is given. High-speed signal processing and data processing components are used to process and combine the data and information from echoes in different directions at every moment.

  11. Centrifuge: rapid and sensitive classification of metagenomic sequences.

    PubMed

    Kim, Daehwan; Song, Li; Breitwieser, Florian P; Salzberg, Steven L

    2016-12-01

    Centrifuge is a novel microbial classification engine that enables rapid, accurate, and sensitive labeling of reads and quantification of species on desktop computers. The system uses an indexing scheme based on the Burrows-Wheeler transform (BWT) and the Ferragina-Manzini (FM) index, optimized specifically for the metagenomic classification problem. Centrifuge requires a relatively small index (4.2 GB for 4078 bacterial and 200 archaeal genomes) and classifies sequences at very high speed, allowing it to process the millions of reads from a typical high-throughput DNA sequencing run within a few minutes. Together, these advances enable timely and accurate analysis of large metagenomics data sets on conventional desktop computers. Because of its space-optimized indexing schemes, Centrifuge also makes it possible to index the entire NCBI nonredundant nucleotide sequence database (a total of 109 billion bases) with an index size of 69 GB, in contrast to k-mer-based indexing schemes, which require far more extensive space. © 2016 Kim et al.; Published by Cold Spring Harbor Laboratory Press.
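    Centrifuge's index builds on the BWT and FM-index; purely for context, the sketch below shows the core backward-search step on a toy reference (naive suffix-array construction, no compression, no taxonomy handling), illustrating how exact pattern occurrences are counted without scanning the text. Function names and the toy sequence are assumptions.

```python
from bisect import bisect_left

def bwt_index(text):
    """Build a toy FM-index: BWT, first-column counts C, occurrence table occ."""
    text += "$"
    sa = sorted(range(len(text)), key=lambda i: text[i:])   # naive suffix array
    bwt = "".join(text[i - 1] for i in sa)
    alphabet = sorted(set(text))
    first_col = sorted(text)
    # C[c] = number of characters in the text strictly smaller than c
    C = {c: bisect_left(first_col, c) for c in alphabet}
    # occ[c][i] = occurrences of c in bwt[:i]
    occ = {c: [0] for c in alphabet}
    for ch in bwt:
        for c in alphabet:
            occ[c].append(occ[c][-1] + (1 if ch == c else 0))
    return C, occ, len(text)

def count_occurrences(pattern, C, occ, n):
    """FM-index backward search: count exact occurrences of `pattern`."""
    lo, hi = 0, n
    for ch in reversed(pattern):
        if ch not in C:
            return 0
        lo = C[ch] + occ[ch][lo]
        hi = C[ch] + occ[ch][hi]
        if lo >= hi:
            return 0
    return hi - lo

C, occ, n = bwt_index("ACGTACGTACGA")
print(count_occurrences("ACG", C, occ, n))   # -> 3
```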

  12. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  13. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
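    The design itself lives in FPGA logic, so it cannot be reproduced here; as a loose software toy model of the multi-phase jitter-sampling idea only, the sketch below samples a jittery oscillator edge against several shifted phases and XORs the per-tap bits. Every parameter and the model itself are assumptions for illustration, not the paper's circuit.

```python
import numpy as np

rng = np.random.default_rng()

def jitter_bits(n_samples, period=1.0, jitter=0.05, taps=8):
    """Toy model: compare a jittery oscillator edge against `taps` equally
    spaced reference phases and XOR the comparison bits into one output bit.
    This only mimics, in software, the flavor of multi-phase jitter sampling."""
    # Accumulated edge times of a free-running oscillator with Gaussian jitter.
    edges = np.cumsum(period + jitter * rng.standard_normal(n_samples))
    phases = (np.arange(taps) + 0.5) / taps * period
    frac = edges % period                      # fractional edge position
    bits = (frac[:, None] > phases[None, :]).astype(np.uint8)
    return np.bitwise_xor.reduce(bits, axis=1)

bits = jitter_bits(100_000)
print(bits.mean())   # should hover near 0.5 for a usable entropy source
```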

  14. Different propagation speeds of recalled sequences in plastic spiking neural networks

    NASA Astrophysics Data System (ADS)

    Huang, Xuhui; Zheng, Zhigang; Hu, Gang; Wu, Si; Rasch, Malte J.

    2015-03-01

    Neural networks can generate spatiotemporal patterns of spike activity. Sequential activity learning and retrieval have been observed in many brain areas and are crucial, e.g., for coding of episodic memory in the hippocampus or for generating temporal patterns during song production in birds. In a recent study, a sequential activity pattern was directly entrained onto the neural activity of the primary visual cortex (V1) of rats and subsequently successfully recalled by a local and transient trigger. It was observed that the speed of activity propagation in coordinates of the retinotopically organized neural tissue was constant during retrieval, regardless of how the speed of the light stimulation sweeping across the visual field during training was varied. It is well known that spike-timing dependent plasticity (STDP) is a potential mechanism for embedding temporal sequences into neural network activity. How training and retrieval speeds relate to each other and how network and learning parameters influence retrieval speeds, however, is not well described. We here theoretically analyze sequential activity learning and retrieval in a recurrent neural network with realistic synaptic short-term dynamics and STDP. Testing multiple STDP rules, we confirm that sequence learning can be achieved by STDP. However, we found that a multiplicative nearest-neighbor (NN) weight update rule generated weight distributions and recall activities that best matched the experiments in V1. Using network simulations and mean-field analysis, we further investigated the learning mechanisms and the influence of network parameters on recall speeds. Our analysis suggests that a multiplicative STDP rule with dominant NN spike interaction might be implemented in V1, since recall speed was almost constant in an NMDA-dominant regime. Interestingly, in an AMPA-dominant regime, neural circuits might exhibit recall speeds that instead follow the change in stimulus speeds. This prediction could be tested in experiments.
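    As a minimal sketch of the kind of multiplicative, nearest-neighbour STDP update the study argues for (time constants, amplitudes and the function interface are placeholders, not the paper's model):

```python
import math

def stdp_update(w, t_pre, t_post, w_max=1.0,
                a_plus=0.01, a_minus=0.012, tau=20.0):
    """Multiplicative nearest-neighbour STDP update for one synapse.

    `t_pre` and `t_post` are the most recent pre- and postsynaptic spike
    times (ms); only this nearest pair contributes (nearest-neighbour rule).
    Potentiation scales with the remaining headroom (w_max - w) and
    depression with the current weight w (multiplicative rule).
    """
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> potentiation
        w += a_plus * (w_max - w) * math.exp(-dt / tau)
    elif dt < 0:    # post before pre -> depression
        w -= a_minus * w * math.exp(dt / tau)
    return min(max(w, 0.0), w_max)

# Example: a causal pairing (pre 5 ms before post) strengthens the synapse.
print(stdp_update(0.5, t_pre=10.0, t_post=15.0))
```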

  15. High-speed multishot pellet injector prototype for the Frascati Tokamak Upgrade

    NASA Astrophysics Data System (ADS)

    Frattolillo, A.; Migliori, S.; Scaramuzzi, F.; Angelone, G.; Baldarelli, M.; Capobianchi, M.; Cardoni, P.; Domma, C.; Mori, L.; Ronci, G.

    1998-07-01

    The Frascati Tokamak Upgrade (FTU) may require multiple high-speed pellet injection in order to achieve quasi-steady-state conditions. A research and development program was thus being pursued at ENEA Frascati, aimed at developing a multishot two-stage pellet injector (MPI), featuring eight "pipe gun" barrels and eight small two-stage pneumatic guns. According to FTU requirements, the final goal is to simultaneously produce up to eight D2 pellets, and then deliver them during a plasma pulse (1 s) with any time schedule, at speeds in the 1-2.5 km/s range. A prototype was constructed and tested to demonstrate the feasibility of the concept, and optimize pellet formation and firing sequences. This laboratory facility was automatically operated by means of a programmable logic controller (PLC), and had a full eight-shot capability. However, it was equipped as a first approach with only four two-stage guns. In this article we will describe in detail the guidelines of the MPI prototype design, which were strongly influenced by some external constraints. We will also report on the results of the experimental campaign, during which the feasibility of such a two-stage MPI was demonstrated. Sequences of four intact D2 pellets in the 1.2-1.6 mm size range, fired at time intervals of a few tens up to a few hundreds of ms, were routinely delivered in a laboratory experiment at injection speeds above 2.5 km/s, with good reproducibility and satisfactory aiming dispersion. Some preliminary effort to address the problem of propellant gas handling, based on an innovative approach, gave encouraging results, and work is in progress to carry out an experiment to definitely test the feasibility of this concept.

  16. Research on parallel combinatory spread spectrum communication system with double information matching

    NASA Astrophysics Data System (ADS)

    Xue, Wei; Wang, Qi; Wang, Tianyu

    2018-04-01

    This paper presents an improved parallel combinatory spread spectrum (PC/SS) communication system based on double information matching (DIM). Compared with the conventional PC/SS system, the new model inherits the advantages of high transmission speed, large information capacity and high security. The traditional system, however, suffers from a high bit error rate (BER) because of its data-sequence mapping algorithm. The presented model therefore achieves a lower BER and higher efficiency through an optimized mapping algorithm.

  17. Insect-computer hybrid legged robot with user-adjustable speed, step length and walking gait.

    PubMed

    Cao, Feng; Zhang, Chao; Choo, Hao Yu; Sato, Hirotaka

    2016-03-01

    We have constructed an insect-computer hybrid legged robot using a living beetle (Mecynorrhina torquata; Coleoptera). The protraction/retraction and levation/depression motions in both forelegs of the beetle were elicited by electrically stimulating eight corresponding leg muscles via eight pairs of implanted electrodes. To perform a defined walking gait (e.g., gallop), different muscles were individually stimulated in a predefined sequence using a microcontroller. Different walking gaits were performed by reordering the applied stimulation signals (i.e., applying different sequences). By varying the duration of the stimulation sequences, we successfully controlled the step frequency and hence the beetle's walking speed. To the best of our knowledge, this paper presents the first demonstration of living insect locomotion control with a user-adjustable walking gait, step length and walking speed. © 2016 The Author(s).

  18. Insect–computer hybrid legged robot with user-adjustable speed, step length and walking gait

    PubMed Central

    Cao, Feng; Zhang, Chao; Choo, Hao Yu

    2016-01-01

    We have constructed an insect–computer hybrid legged robot using a living beetle (Mecynorrhina torquata; Coleoptera). The protraction/retraction and levation/depression motions in both forelegs of the beetle were elicited by electrically stimulating eight corresponding leg muscles via eight pairs of implanted electrodes. To perform a defined walking gait (e.g. gallop), different muscles were individually stimulated in a predefined sequence using a microcontroller. Different walking gaits were performed by reordering the applied stimulation signals (i.e. applying different sequences). By varying the duration of the stimulation sequences, we successfully controlled the step frequency and hence the beetle's walking speed. To the best of our knowledge, this paper presents the first demonstration of living insect locomotion control with a user-adjustable walking gait, step length and walking speed. PMID:27030043

  19. LEFT-RIGHT DIFFERENCES ON TIMED MOTOR EXAMINATION IN CHILDREN

    PubMed Central

    Roeder, Megan B.; Mahone, E. Mark; Larson, J. Gidley; Mostofsky, S. H.; Cutting, Laurie E.; Goldberg, Melissa C.; Denckla, Martha B.

    2008-01-01

    Age-related change in the difference between left- and right-side speed on motor examination may be an important indicator of maturation. Cortical maturation and myelination of the corpus callosum are considered to be related to increased bilateral skill and speed on timed motor tasks. We compared left minus right foot, hand, and finger speed differences using the Revised Physical and Neurological Assessment for Subtle Signs (PANESS; Denckla, 1985); examining 130 typically developing right-handed children (65 boys, 65 girls) ages 7−14. Timed tasks included right and left sets of 20 toe taps, 10 toe-heel alternation sequences, 20 hand pats, 10 hand pronate-supinate sets, 20 finger taps, and 5 sequences of each finger-to-thumb apposition. For each individual, six difference scores between left- and right-sided speeded performances of timed motor tasks were analyzed. Left-right differences decreased significantly with age on toe tapping, heel-toe alternations, hand pronation-supination, finger repetition, and finger sequencing. There were significant gender effects for heel-toe sequences (boys showing a greater left-right difference than girls), and a significant interaction between age and gender for hand pronation-supination, such that the magnitude of the left-right difference was similar for younger, compared with older girls, while the difference was significantly larger for younger, compared to older boys. Speed of performing right and left timed motor tasks equalizes with development; for some tasks, the equalization occurs earlier in girls than in boys. PMID:17852124

  20. High security chaotic multiple access scheme for visible light communication systems with advanced encryption standard interleaving

    NASA Astrophysics Data System (ADS)

    Qiu, Junchao; Zhang, Lin; Li, Diyang; Liu, Xingcheng

    2016-06-01

    Chaotic sequences can be applied to realize multiple user access and improve the system security for a visible light communication (VLC) system. However, since the map patterns of chaotic sequences are usually well known, eavesdroppers can possibly derive the key parameters of chaotic sequences and subsequently retrieve the information. We design an advanced encryption standard (AES) interleaving aided multiple user access scheme to enhance the security of a chaotic code division multiple access-based visible light communication (C-CDMA-VLC) system. We propose to spread the information with chaotic sequences, and then the spread information is interleaved by an AES algorithm and transmitted over VLC channels. Since the computation complexity of performing inverse operations to deinterleave the information is high, the eavesdroppers in a high speed VLC system cannot retrieve the information in real time; thus, the system security will be enhanced. Moreover, we build a mathematical model for the AES-aided VLC system and derive the theoretical information leakage to analyze the system security. The simulations are performed over VLC channels, and the results demonstrate the effectiveness and high security of our presented AES interleaving aided chaotic CDMA-VLC system.
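    To make the spreading step concrete, the sketch below generates a ±1 chip sequence from a logistic map and uses it to spread and despread a short bit stream; the AES interleaving stage and the VLC channel model are omitted, and the map parameter, chip count and key (initial condition) are illustrative assumptions rather than the paper's design.

```python
import numpy as np

def logistic_chips(x0, n, r=3.99):
    """Generate n +/-1 spreading chips from a logistic-map chaotic sequence.
    The initial condition x0 plays the role of the user's code parameter."""
    x, chips = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        chips[i] = 1.0 if x >= 0.5 else -1.0
    return chips

def spread(bits, chips_per_bit, x0):
    """Spread a +/-1 bit stream with a user-specific chaotic chip sequence."""
    chips = logistic_chips(x0, len(bits) * chips_per_bit)
    return np.repeat(bits, chips_per_bit) * chips

def despread(rx, chips_per_bit, x0):
    """Correlate against the same chaotic sequence to recover the bits."""
    chips = logistic_chips(x0, len(rx))
    corr = (rx * chips).reshape(-1, chips_per_bit).sum(axis=1)
    return np.sign(corr)

bits = np.array([1, -1, -1, 1, 1], dtype=float)
rx = spread(bits, chips_per_bit=31, x0=0.3141)
print(np.array_equal(despread(rx, 31, 0.3141), bits))   # True on a clean channel
```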

  1. CLAST: CUDA implemented large-scale alignment search tool.

    PubMed

    Yano, Masahiro; Mori, Hiroshi; Akiyama, Yutaka; Yamada, Takuji; Kurokawa, Ken

    2014-12-11

    Metagenomics is a powerful methodology to study microbial communities, but it is highly dependent on nucleotide sequence similarity searching against sequence databases. Metagenomic analyses with next-generation sequencing technologies produce enormous numbers of reads from microbial communities, and many reads are derived from microbes whose genomes have not yet been sequenced, limiting the usefulness of existing sequence similarity search tools. Therefore, there is a clear need for a sequence similarity search tool that can rapidly detect weak similarity in large datasets. We developed a tool, which we named CLAST (CUDA implemented large-scale alignment search tool), that enables analyses of millions of reads and thousands of reference genome sequences, and runs on NVIDIA Fermi architecture graphics processing units. CLAST has four main advantages over existing alignment tools. First, CLAST was capable of identifying sequence similarities ~80.8 times faster than BLAST and 9.6 times faster than BLAT. Second, CLAST executes global alignment as the default (local alignment is also an option), enabling CLAST to assign reads to taxonomic and functional groups based on evolutionarily distant nucleotide sequences with high accuracy. Third, CLAST does not need a preprocessed sequence database like Burrows-Wheeler Transform-based tools, and this enables CLAST to incorporate large, frequently updated sequence databases. Fourth, CLAST requires <2 GB of main memory, making it possible to run CLAST on a standard desktop computer or server node. CLAST achieved very high speed (similar to the Burrows-Wheeler Transform-based Bowtie 2 for long reads) and sensitivity (equal to BLAST, BLAT, and FR-HIT) without the need for extensive database preprocessing or a specialized computing platform. Our results demonstrate that CLAST has the potential to be one of the most powerful and realistic approaches to analyze the massive amount of sequence data from next-generation sequencing technologies.

  2. Designing small universal k-mer hitting sets for improved analysis of high-throughput sequencing

    PubMed Central

    Kingsford, Carl

    2017-01-01

    With the rapidly increasing volume of deep sequencing data, more efficient algorithms and data structures are needed. Minimizers are a central recent paradigm that has improved various sequence analysis tasks, including hashing for faster read overlap detection, sparse suffix arrays for creating smaller indexes, and Bloom filters for speeding up sequence search. Here, we propose an alternative paradigm that can lead to substantial further improvement in these and other tasks. For integers k and L > k, we say that a set of k-mers is a universal hitting set (UHS) if every possible L-long sequence must contain a k-mer from the set. We develop a heuristic called DOCKS to find a compact UHS, which works in two phases: The first phase is solved optimally, and for the second we propose several efficient heuristics, trading set size for speed and memory. The use of heuristics is motivated by showing the NP-hardness of a closely related problem. We show that DOCKS works well in practice and produces UHSs that are very close to a theoretical lower bound. We present results for various values of k and L and by applying them to real genomes show that UHSs indeed improve over minimizers. In particular, DOCKS uses less than 30% of the 10-mers needed to span the human genome compared to minimizers. The software and computed UHSs are freely available at github.com/Shamir-Lab/DOCKS/ and acgt.cs.tau.ac.il/docks/, respectively. PMID:28968408
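    Universal hitting sets are positioned as an alternative to minimizers; for context, the sketch below computes the standard (w,k)-minimizers of a sequence, i.e. the lexicographically smallest k-mer in each window of w consecutive k-mers. This is the baseline scheme only, not DOCKS, and the example sequence is arbitrary.

```python
def minimizers(seq, k, w):
    """Return the set of (position, k-mer) minimizers of `seq`: for every
    window of w consecutive k-mers, keep the lexicographically smallest one.
    This is the baseline that universal hitting sets aim to improve upon."""
    selected = set()
    n_kmers = len(seq) - k + 1
    for start in range(n_kmers - w + 1):
        window = [(seq[i:i + k], i) for i in range(start, start + w)]
        kmer, pos = min(window)
        selected.add((pos, kmer))
    return selected

for pos, kmer in sorted(minimizers("ACGTTGCATGTCGCATGATGCATG", k=5, w=4)):
    print(pos, kmer)
```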

  3. A high speed implementation of the random decrement algorithm

    NASA Technical Reports Server (NTRS)

    Kiraly, L. J.

    1982-01-01

    The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution to the random decrement signature of each data point is calculated only once and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
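    A minimal software sketch of the basic random decrement averaging described above (threshold crossings trigger fixed-length subrecords that are averaged); the hardware pipelining is not modelled, and the AR(2) test signal, threshold and record length are illustrative assumptions.

```python
import numpy as np

def random_decrement(signal, threshold, length):
    """Average fixed-length subrecords starting wherever the signal crosses
    `threshold` upward; the ensemble average is the random decrement signature."""
    starts = np.flatnonzero((signal[:-1] < threshold) & (signal[1:] >= threshold)) + 1
    starts = starts[starts + length <= len(signal)]
    if starts.size == 0:
        raise ValueError("no threshold crossings found")
    segments = np.stack([signal[s:s + length] for s in starts])
    return segments.mean(axis=0)

# Example: a lightly damped AR(2) process stands in for a stochastic response.
rng = np.random.default_rng(1)
n = 50_000
x = np.zeros(n)
for i in range(2, n):
    x[i] = 1.95 * x[i - 1] - 0.96 * x[i - 2] + rng.standard_normal()

signature = random_decrement(x, threshold=x.std(), length=512)
print(signature.shape)
```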

  4. A wireless high-speed data acquisition system for geotechnical centrifuge model testing

    NASA Astrophysics Data System (ADS)

    Gaudin, C.; White, D. J.; Boylan, N.; Breen, J.; Brown, T.; DeCatania, S.; Hortin, P.

    2009-09-01

    This paper describes a novel high-speed wireless data acquisition system (WDAS) developed at the University of Western Australia for operation onboard a geotechnical centrifuge, in an enhanced gravitational field of up to 300 times Earth's gravity. The WDAS system consists of up to eight separate miniature units distributed around the circumference of a 0.8 m diameter drum centrifuge, communicating with the control room via wireless Ethernet. Each unit is capable of powering and monitoring eight instrument channels at a sampling rate of up to 1 MHz at 16-bit resolution. The data are stored within the logging unit in solid-state memory, but may also be streamed in real-time at low frequency (up to 10 Hz) to the centrifuge control room, via wireless transmission. The high-speed logging runs continuously within a circular memory (buffer), allowing for storage of a pre-trigger segment of data prior to an event. To suit typical geotechnical modelling applications, the system can record low-speed data continuously, until a burst of high-speed acquisition is triggered when an experimental event occurs, after which the system reverts back to low-speed acquisition to monitor the aftermath of the event. Unlike PC-based data acquisition solutions, this system performs the full sequence of amplification, conditioning, digitization and storage on a single circuit board via an independent micro-controller allocated to each pair of instrumented channels. This arrangement is efficient, compact and physically robust to suit the centrifuge environment. This paper details the design specification of the WDAS along with the software interface developed to control the units. Results from a centrifuge test of a submarine landslide are used to illustrate the performance of the new WDAS.
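    A minimal sketch of the continuous circular-buffer acquisition with a pre-trigger segment described above; buffer sizes, the trigger rule and the class interface are placeholders, not the WDAS firmware.

```python
from collections import deque

class PreTriggerLogger:
    """Continuously record into a circular buffer; when an event triggers,
    return the pre-trigger history plus a burst of post-trigger samples."""

    def __init__(self, pretrigger=1000, posttrigger=4000):
        self.buffer = deque(maxlen=pretrigger)   # circular pre-trigger memory
        self.posttrigger = posttrigger

    def run(self, samples, trigger):
        post, triggered = [], False
        for s in samples:
            if not triggered:
                self.buffer.append(s)
                triggered = trigger(s)
            else:
                post.append(s)
                if len(post) >= self.posttrigger:
                    break
        return list(self.buffer) + post

# Example: trigger the high-speed burst when a sample exceeds a level.
logger = PreTriggerLogger(pretrigger=5, posttrigger=10)
data = [0, 1, 0, 2, 1, 0, 9, 3, 2, 1, 0, 0, 0, 1, 2, 3, 4, 5, 6, 7, 8]
print(logger.run(iter(data), trigger=lambda s: s > 8))
```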

  5. MHz-rate nitric oxide planar laser-induced fluorescence imaging in a Mach 10 hypersonic wind tunnel.

    PubMed

    Jiang, Naibo; Webster, Matthew; Lempert, Walter R; Miller, Joseph D; Meyer, Terrence R; Ivey, Christopher B; Danehy, Paul M

    2011-02-01

    Nitric oxide planar laser-induced fluorescence (NO PLIF) imaging at repetition rates as high as 1 MHz is demonstrated in the NASA Langley 31 in. Mach 10 hypersonic wind tunnel. Approximately 200 time-correlated image sequences of between 10 and 20 individual frames were obtained over eight days of wind tunnel testing spanning two entries in March and September of 2009. The image sequences presented were obtained from the boundary layer of a 20° flat plate model, in which transition was induced using a variety of different shaped protuberances, including a cylinder and a triangle. The high-speed image sequences captured a variety of laminar and transitional flow phenomena, ranging from mostly laminar flow, typically at a lower Reynolds number and/or in the near wall region of the model, to highly transitional flow in which the temporal evolution and progression of characteristic streak instabilities and/or corkscrew-shaped vortices could be clearly identified.

  6. High-speed optical phase-shifting apparatus

    DOEpatents

    Zortman, William A.

    2016-11-08

    An optical phase shifter includes an optical waveguide, a plurality of partial phase shifting elements arranged sequentially, and control circuitry electrically coupled to the partial phase shifting elements. The control circuitry is adapted to provide an activating signal to each of the N partial phase shifting elements such that the signal is delayed by a clock cycle between adjacent partial phase shifting elements in the sequence. The transit time for a guided optical pulse train between the input edges of consecutive partial phase shifting elements in the sequence is arranged to be equal to a clock cycle, thereby enabling pipelined processing of the optical pulses.

  7. Analysis of base fuze functioning of HESH ammunitions through high-speed photographic technique

    NASA Astrophysics Data System (ADS)

    Biswal, T. K.

    2007-01-01

    High-speed photography plays a major role in a test range, where direct access to a dynamic process is possible through imaging, and both qualitative and quantitative data are obtained thereafter through image processing and analysis. In one of the trials it was difficult to understand the performance of HESH ammunition on rolled homogeneous armour: there was no consistency in scab formation even though all other parameters, such as propellant charge mass, charge temperature and impact velocity, were maintained constant. To understand the event thoroughly, high-speed photography was deployed to provide a frontal view of the total process. Clear information on shell impact, embedding of the HE filling on the armour and base fuze initiation was obtained. In the case of scab-forming rounds these three processes are clearly observed in sequence. In non-scab rounds, however, the base fuze is initiated before the completion of the embedding process, so the threshold thrust on the armour needed to cause a scab is not available. This was revealed in two rounds in which scab formation failed. As a quantitative measure, the fuze delay was calculated for each round, and premature functioning of the base fuze was thereby ascertained in the non-scab rounds. The potency of high-speed photography is depicted in detail in this paper.

  8. On-Chip AC self-test controller

    DOEpatents

    Flanagan, John D [Rhinebeck, NY]; Herring, Jay R [Poughkeepsie, NY]; Lo, Tin-Chee [Fishkill, NY]

    2009-09-29

    A system for performing AC self-test on an integrated circuit that includes a system clock for normal operation is provided. The system includes the system clock, self-test circuitry, a first and second test register to capture and launch test data in response to a sequence of data pulses, and a logic circuit to be tested. The self-test circuitry includes an AC self-test controller and a clock splitter. The clock splitter generates the sequence of data pulses including a long data capture pulse followed by an at speed data launch pulse and an at speed data capture pulse followed by a long data launch pulse. The at speed data launch pulse and the at speed data capture pulse are generated for a common cycle of the system clock.

  9. Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.

    PubMed

    Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter

    2015-01-01

    To obtain large-scale sequence alignments in a fast and flexible way is an important step in the analyses of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
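    The GPU specifics of PaSWAS are out of scope here; as a reference point, the sketch below is a plain serial Smith-Waterman local-alignment scorer with linear gap penalties, i.e. the baseline computation that parallel implementations accelerate. Scoring parameters and the example sequences are arbitrary.

```python
import numpy as np

def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Plain O(len(a)*len(b)) Smith-Waterman local alignment score with
    linear gap penalties; returns the best score and its end position."""
    H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    best, best_pos = 0, (0, 0)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0, H[i - 1, j - 1] + sub,
                          H[i - 1, j] + gap, H[i, j - 1] + gap)
            if H[i, j] > best:
                best, best_pos = H[i, j], (i, j)
    return best, best_pos

score, end = smith_waterman("ACACACTA", "AGCACACA")
print(score, end)
```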

  10. High-speed all-optical DNA local sequence alignment based on a three-dimensional artificial neural network.

    PubMed

    Maleki, Ehsan; Babashah, Hossein; Koohi, Somayyeh; Kavehvash, Zahra

    2017-07-01

    This paper presents an optical processing approach for exploring a large number of genome sequences. Specifically, we propose an optical correlator for global alignment and an extended moiré matching technique for local analysis of spatially coded DNA, whose output is fed to a novel three-dimensional artificial neural network for local DNA alignment. All-optical implementation of the proposed 3D artificial neural network is developed and its accuracy is verified in Zemax. Thanks to its parallel processing capability, the proposed structure performs local alignment of 4 million sequences of 150 base pairs in a few seconds, which is much faster than its electrical counterparts, such as the basic local alignment search tool.

  11. Memory-efficient dynamic programming backtrace and pairwise local sequence alignment.

    PubMed

    Newberg, Lee A

    2008-08-15

    A backtrace through a dynamic programming algorithm's intermediate results, whether in search of an optimal path, to sample paths according to an implied probability distribution, or as the second stage of a forward-backward algorithm, is a task of fundamental importance in computational biology. When there is insufficient space to store all intermediate results in high-speed memory (e.g. cache), existing approaches store selected stages of the computation and recompute missing values from these checkpoints on an as-needed basis. Here we present an optimal checkpointing strategy, and demonstrate its utility with pairwise local sequence alignment of sequences of length 10,000. Sample C++ code for optimal backtrace is available in the Supplementary Materials. Supplementary data is available at Bioinformatics online.

  12. Genomic Sequencing: Assessing The Health Care System, Policy, And Big-Data Implications

    PubMed Central

    Phillips, Kathryn A.; Trosman, Julia; Kelley, Robin K.; Pletcher, Mark J.; Douglas, Michael P.; Weldon, Christine B.

    2014-01-01

    New genomic sequencing technologies enable the high-speed analysis of multiple genes simultaneously, including all of those in a person's genome. Sequencing is a prominent example of a “big data” technology because of the massive amount of information it produces and its complexity, diversity, and timeliness. Our objective in this article is to provide a policy primer on sequencing and illustrate how it can affect health care system and policy issues. Toward this end, we developed an easily applied classification of sequencing based on inputs, methods, and outputs. We used it to examine the implications of sequencing for three health care system and policy issues: making care more patient-centered, developing coverage and reimbursement policies, and assessing economic value. We conclude that sequencing has great promise but that policy challenges include how to optimize patient engagement as well as privacy, develop coverage policies that distinguish research from clinical uses and account for bioinformatics costs, and determine the economic value of sequencing through complex economic models that take into account multiple findings and downstream costs. PMID:25006153

  13. Genomic sequencing: assessing the health care system, policy, and big-data implications.

    PubMed

    Phillips, Kathryn A; Trosman, Julia R; Kelley, Robin K; Pletcher, Mark J; Douglas, Michael P; Weldon, Christine B

    2014-07-01

    New genomic sequencing technologies enable the high-speed analysis of multiple genes simultaneously, including all of those in a person's genome. Sequencing is a prominent example of a "big data" technology because of the massive amount of information it produces and its complexity, diversity, and timeliness. Our objective in this article is to provide a policy primer on sequencing and illustrate how it can affect health care system and policy issues. Toward this end, we developed an easily applied classification of sequencing based on inputs, methods, and outputs. We used it to examine the implications of sequencing for three health care system and policy issues: making care more patient-centered, developing coverage and reimbursement policies, and assessing economic value. We conclude that sequencing has great promise but that policy challenges include how to optimize patient engagement as well as privacy, develop coverage policies that distinguish research from clinical uses and account for bioinformatics costs, and determine the economic value of sequencing through complex economic models that take into account multiple findings and downstream costs. Project HOPE—The People-to-People Health Foundation, Inc.

  14. Thermographic measurements of high-speed metal cutting

    NASA Astrophysics Data System (ADS)

    Mueller, Bernhard; Renz, Ulrich

    2002-03-01

    Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To realize images without motion blur, the integration times were reduced to a few microseconds. Since high tool wear influences the measured temperatures, a set-up has been realized which enables small cutting lengths. Only single images have been recorded because the process is too fast to acquire a sequence of images even with the frame rate of the very fast infrared camera that was used. To expose the camera when the rotating tool is in the middle of the camera image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns has been realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary, which was obtained by a special close-up lens allowing a resolution of approximately 45 microns. The experimental set-up will be described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.

  15. Reflectively Coupled Waveguide Photodetector for High Speed Optical Interconnection

    PubMed Central

    Hsu, Shih-Hsiang

    2010-01-01

    To fully utilize GaAs high drift mobility, techniques to monolithically integrate In0.53Ga0.47As p-i-n photodetectors with GaAs based optical waveguides using total internal reflection coupling are reviewed. Metal coplanar waveguides, deposited on top of the polyimide layer for the photodetector's planarization and passivation, were then uniquely connected as a bridge between the photonics and electronics to illustrate the high-speed monitoring function. The photodetectors were efficiently implemented and imposed on the echelle grating circle for wavelength division multiplexing monitoring. In optical filtering performance, the monolithically integrated photodetector channel spacing was 2 nm over the 1,520–1,550 nm wavelength range and the pass band was 1 nm at the −1 dB level. For high-speed applications the full-width half-maximum of the temporal response and 3-dB bandwidth for the reflectively coupled waveguide photodetectors were demonstrated to be 30 ps and 11 GHz, respectively. The bit error rate performance of this integrated photodetector at 10 Gbit/s with 2^7-1 long pseudo-random bit sequence non-return to zero input data also showed error-free operation. PMID:22163502

  16. Genometa--a fast and accurate classifier for short metagenomic shotgun reads.

    PubMed

    Davenport, Colin F; Neugebauer, Jens; Beckmann, Nils; Friedrich, Benedikt; Kameri, Burim; Kokott, Svea; Paetow, Malte; Siekmann, Björn; Wieding-Drewes, Matthias; Wienhöfer, Markus; Wolf, Stefan; Tümmler, Burkhard; Ahlers, Volker; Sprengel, Frauke

    2012-01-01

    Metagenomic studies use high-throughput sequence data to investigate microbial communities in situ. However, considerable challenges remain in the analysis of these data, particularly with regard to speed and reliable analysis of microbial species as opposed to higher level taxa such as phyla. We here present Genometa, a computationally undemanding graphical user interface program that enables identification of bacterial species and gene content from datasets generated by inexpensive high-throughput short read sequencing technologies. Our approach was first verified on two simulated metagenomic short read datasets, detecting 100% and 94% of the bacterial species included with few false positives or false negatives. Subsequent comparative benchmarking analysis against three popular metagenomic algorithms on an Illumina human gut dataset revealed Genometa to attribute the most reads to bacteria at species level (i.e. including all strains of that species) and demonstrate similar or better accuracy than the other programs. Lastly, speed was demonstrated to be many times that of BLAST due to the use of modern short read aligners. Our method is highly accurate if bacteria in the sample are represented by genomes in the reference sequence but cannot find species absent from the reference. This method is one of the most user-friendly and resource efficient approaches and is thus feasible for rapidly analysing millions of short reads on a personal computer. The Genometa program, a step by step tutorial and Java source code are freely available from http://genomics1.mh-hannover.de/genometa/ and on http://code.google.com/p/genometa/. This program has been tested on Ubuntu Linux and Windows XP/7.

  17. Terahertz Science, Technology, and Communication

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Goutam

    2013-01-01

    The term "terahertz" has been ubiquitous in the arena of technology over the past couple of years. New applications are emerging every day which exploit the promises of terahertz: its small wavelength; its capability of penetrating dust, clouds, and fog; and the possibility of large instantaneous bandwidth for high-speed communication channels. Until very recently, space-based instruments for astrophysics, planetary science, and Earth science missions have been the primary motivator for the development of terahertz sensors, sources, and systems. However, in recent years emerging areas such as imaging from space platforms, surveillance of person-borne hidden weapons or contraband from a safe stand-off distance, reconnaissance, medical imaging and DNA sequencing, and high-speed communications have been the driving forces for this area of research.

  18. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so that a camera having this short a resolution time is thereby possible.

  19. A New Test Method of Circuit Breaker Spring Telescopic Characteristics Based Image Processing

    NASA Astrophysics Data System (ADS)

    Huang, Huimin; Wang, Feifeng; Lu, Yufeng; Xia, Xiaofei; Su, Yi

    2018-06-01

    This paper applies computer vision technology to the fatigue condition monitoring of springs and proposes a new telescopic characteristics test method for circuit breaker operating mechanism springs based on image processing. A high-speed camera is utilized to capture spring movement image sequences when the high-voltage circuit breaker operates. An image-matching method is then used to obtain the deformation-time curve and speed-time curve, from which the spring expansion and deformation parameters are extracted, laying a foundation for subsequent spring force analysis and matching state evaluation. Simulation tests performed at the experimental site show that this image analysis method can avoid the complex installation of traditional mechanical sensors and supports online monitoring and status assessment of the circuit breaker spring.

  20. Long-term scale adaptive tracking with kernel correlation filters

    NASA Astrophysics Data System (ADS)

    Wang, Yueren; Zhang, Hong; Zhang, Lei; Yang, Yifan; Sun, Mingui

    2018-04-01

    Object tracking in video sequences has broad applications in both military and civilian domains. However, as the length of input video sequence increases, a number of problems arise, such as severe object occlusion, object appearance variation, and object out-of-view (some portion or the entire object leaves the image space). To deal with these problems and identify the object being tracked from cluttered background, we present a robust appearance model using Speeded Up Robust Features (SURF) and advanced integrated features consisting of the Felzenszwalb's Histogram of Oriented Gradients (FHOG) and color attributes. Since re-detection is essential in long-term tracking, we develop an effective object re-detection strategy based on moving area detection. We employ the popular kernel correlation filters in our algorithm design, which facilitates high-speed object tracking. Our evaluation using the CVPR2013 Object Tracking Benchmark (OTB2013) dataset illustrates that the proposed algorithm outperforms reference state-of-the-art trackers in various challenging scenarios.
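    A small sketch of the frequency-domain correlation step that makes correlation-filter trackers fast: the response map for a search patch is obtained with FFTs rather than a sliding-window correlation. Kernelization, the FHOG and color-attribute features, SURF re-detection and model updating are all omitted; this is a generic illustration, not the authors' tracker.

```python
import numpy as np

def correlation_response(template, search):
    """Circular cross-correlation of a template with a search patch via FFT;
    the peak of the response map gives the translation of the target."""
    T = np.fft.fft2(template, s=search.shape)
    S = np.fft.fft2(search)
    return np.real(np.fft.ifft2(np.conj(T) * S))

# Example: a patch circularly shifted by (7, 12) is located at the response peak.
rng = np.random.default_rng(0)
template = rng.standard_normal((32, 32))
search = np.roll(template, shift=(7, 12), axis=(0, 1))
peak = np.unravel_index(np.argmax(correlation_response(template, search)), search.shape)
print(peak)   # peak at (7, 12)
```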

  1. An ant colony optimization based algorithm for identifying gene regulatory elements.

    PubMed

    Liu, Wei; Chen, Hanwu; Chen, Ling

    2013-08-01

    It is one of the most important tasks in bioinformatics to identify the regulatory elements in gene sequences. Most of the existing algorithms for identifying regulatory elements tend to converge to a local optimum and have high time complexity. Ant Colony Optimization (ACO) is a meta-heuristic method based on swarm intelligence, derived from a model inspired by the collective foraging behavior of real ants. Taking advantage of ACO traits such as self-organization and robustness, this paper designs and implements an ACO-based algorithm named ACRI (ant-colony-regulatory-identification) for identifying all possible binding sites of transcription factors in the upstream regions of co-expressed genes. To accelerate the ants' searching process, a local optimization strategy is presented to adjust the ants' start positions on the searched sequences. By exploiting the powerful optimization ability of ACO, the ACRI algorithm not only improves the precision of the results but also achieves very high speed. Experimental results on real-world datasets show that ACRI outperforms traditional algorithms in terms of both speed and solution quality. Copyright © 2013 Elsevier Ltd. All rights reserved.

  2. A home-built digital optical MRI console using high-speed serial links.

    PubMed

    Tang, Weinan; Wang, Weimin; Liu, Wentao; Ma, Yajun; Tang, Xin; Xiao, Liang; Gao, Jia-Hong

    2015-08-01

    To develop a high performance, cost-effective digital optical console for scalable multichannel MRI. The console system was implemented with flexibility and efficiency based on a modular architecture with distributed pulse sequencers. High-speed serial links were optimally utilized to interconnect the system, providing fast digital communication with a multi-gigabit data rate. The conventional analog radio frequency (RF) chain was replaced with a digital RF manipulation. The acquisition electronics were designed in close proximity to RF coils and preamplifiers, using a digital optical link to transmit the MR signal. A prototype of the console was constructed with a broad frequency range from direct current to 100 MHz. A temporal resolution of 1 μs was achieved for both the RF and gradient operations. The MR signal was digitized in the scanner room with an overall dynamic range between 16 and 24 bits and was transmitted to a master controller over a duplex optic fiber with a high data rate of 3.125 gigabits per second. High-quality phantom and human images were obtained using the prototype on both 0.36T and 1.5T clinical MRI scanners. A homemade digital optical MRI console with high-speed serial interconnection has been developed to better serve imaging research and clinical applications. © 2014 Wiley Periodicals, Inc.

  3. Using video-oriented instructions to speed up sequence comparison.

    PubMed

    Wozniak, A

    1997-04-01

    This document presents an implementation of the well-known Smith-Waterman algorithm for comparison of proteic and nucleic sequences, using specialized video instructions. These instructions, SIMD-like in their design, make possible parallelization of the algorithm at the instruction level. Benchmarks on an ULTRA SPARC running at 167 MHz show a speed-up factor of two compared to the same algorithm implemented with integer instructions on the same machine. Performance reaches over 18 million matrix cells per second on a single processor, giving, to our knowledge, the fastest implementation of the Smith-Waterman algorithm on a workstation. The accelerated procedure was introduced in LASSAP--a LArge Scale Sequence compArison Package developed at INRIA--which handles parallelism at a higher level. On a SUN Enterprise 6000 server with 12 processors, a speed of nearly 200 million matrix cells per second has been obtained. A sequence of length 300 amino acids is scanned against SWISSPROT R33 (18,531,385 residues) in 29 s. This procedure is not restricted to databank scanning; it applies to all cases handled by LASSAP (intra- and inter-bank comparisons, Z-score computation, etc.).

  4. Simultaneous high-speed schlieren and OH chemiluminescence imaging in a hybrid rocket combustor at elevated pressures

    NASA Astrophysics Data System (ADS)

    Miller, Victor; Jens, Elizabeth T.; Mechentel, Flora S.; Cantwell, Brian J.; Stanford Propulsion; Space Exploration Group Team

    2014-11-01

    In this work, we present observations of the overall features and dynamics of flow and combustion in a slab-type hybrid rocket combustor. Tests were conducted in the recently upgraded Stanford Combustion Visualization Facility, a hybrid rocket combustor test platform capable of generating constant mass-flux flows of oxygen. High-speed (3 kHz) schlieren and OH chemiluminescence imaging were used to visualize the flow. We present imaging results for the combustion of two different fuel grains, a classic, low regression rate polymethyl methacrylate (PMMA), and a high regression rate paraffin, and all tests were conducted in gaseous oxygen. Each fuel grain was tested at multiple free-stream pressures at constant oxidizer mass flux (40 kg/m2s). The resulting image sequences suggest that aspects of the dynamics and scaling of the system depend strongly on both pressure and type of fuel.

  5. The High-Performance Computing and Communications program, the national information infrastructure and health care.

    PubMed Central

    Lindberg, D A; Humphreys, B L

    1995-01-01

    The High-Performance Computing and Communications (HPCC) program is a multiagency federal effort to advance the state of computing and communications and to provide the technologic platform on which the National Information Infrastructure (NII) can be built. The HPCC program supports the development of high-speed computers, high-speed telecommunications, related software and algorithms, education and training, and information infrastructure technology and applications. The vision of the NII is to extend access to high-performance computing and communications to virtually every U.S. citizen so that the technology can be used to improve the civil infrastructure, lifelong learning, energy management, health care, etc. Development of the NII will require resolution of complex economic and social issues, including information privacy. Health-related applications supported under the HPCC program and NII initiatives include connection of health care institutions to the Internet; enhanced access to gene sequence data; the "Visible Human" Project; and test-bed projects in telemedicine, electronic patient records, shared informatics tool development, and image systems. PMID:7614116

  6. Three-Dimensional Reconstruction of Cloud-to-Ground Lightning Using High-Speed Video and VHF Broadband Interferometer

    NASA Astrophysics Data System (ADS)

    Li, Yun; Qiu, Shi; Shi, Lihua; Huang, Zhengyu; Wang, Tao; Duan, Yantao

    2017-12-01

    The time-resolved three-dimensional (3-D) spatial reconstruction of lightning channels using high-speed video (HSV) images and VHF broadband interferometer (BITF) data is presented for the first time in this paper. Because the VHF and optical radiation in the step-formation process occur with a time separation of no more than 1 μs, observation data from BITF and HSV at two different sites make it possible to reconstruct the time-resolved 3-D channel of lightning. With the proposed procedures for 3-D reconstruction of leader channels, dart leaders as well as stepped leaders with complex multiple branches can be well reconstructed. The differences between 2-D and 3-D speeds of leader channels are analyzed by comparing the development of leader channels in 2-D and 3-D space. Since a return stroke (RS) usually follows the path of the previous leader channel, the 3-D speeds of the return strokes are estimated for the first time by combining the 3-D structure of the preceding leaders with the HSV image sequences. For the fourth RS, the ratio of the 3-D to 2-D RS speed increases with height, and the largest ratio reaches 2.03, which is larger than the result for triggered lightning reported by Idone. Since BITF can detect lightning radiation over a 360° view, correlated BITF and HSV observations provide a higher 3-D detection probability than dual-station HSV observations, which helps capture more events and gives a deeper understanding of the lightning process.

  7. Efficient and controllable thermal ablation induced by short-pulsed HIFU sequence assisted with perfluorohexane nanodroplets.

    PubMed

    Chang, Nan; Lu, Shukuan; Qin, Dui; Xu, Tianqi; Han, Meng; Wang, Supin; Wan, Mingxi

    2018-07-01

    A HIFU sequence with an extremely short pulse duration and a high pulse repetition frequency can achieve thermal ablation at low acoustic power using inertial cavitation. Because of this cavitation-dependent property, the therapeutic outcome is unreliable when the treatment zone lacks cavitation nuclei. To overcome this intrinsic limitation, we introduced perfluorocarbon nanodroplets as extra cavitation nuclei into short-pulsed HIFU-mediated thermal ablation. Two types of nanodroplets were used, with perfluorohexane (PFH) as the core material coated with either bovine serum albumin (BSA) or an anionic fluorosurfactant (FS), to demonstrate the feasibility of this approach. The thermal ablation process was recorded by high-speed photography. The inertial cavitation activity during the ablation was revealed by sonoluminescence (SL). The high-speed photography results show that the thermal ablation volume increased by ∼643% and 596% with BSA-PFH and FS-PFH, respectively, compared with short-pulsed HIFU alone at an acoustic power of 19.5 W. Using nanodroplets, much larger ablation volumes were created even at much lower acoustic power. Meanwhile, the treatment time for ablating a desired volume was significantly reduced in the presence of nanodroplets. Moreover, by adjusting the treatment time, lesion migration towards the HIFU transducer could also be avoided. The SL results show that the thermal lesion shape depended significantly on the inertial cavitation in this short-pulsed HIFU-mediated thermal ablation, and the inertial cavitation activity became more predictable when nanodroplets were used. Therefore, the introduction of PFH nanodroplets as extra cavitation nuclei made short-pulsed HIFU thermal ablation more efficient, by increasing the ablation volume and speed, and more controllable, by reducing the required acoustic power and preventing lesion migration. Copyright © 2018. Published by Elsevier B.V.

  8. Speech and Nonspeech Sequence Skill Learning in Adults Who Stutter

    ERIC Educational Resources Information Center

    Smits-Bandstra, Sarah; De Nil, Luc; Saint-Cyr, Jean A.

    2006-01-01

    Two studies compared the speech and nonspeech sequence skill learning of nine persons who stutter (PWS) and nine matched fluent speakers (PNS). Sequence skill learning was defined as a continuing process of stable improvement in speed and/or accuracy of sequencing performance over practice and was measured by comparing PWS's and PNS's performance…

  9. Applying Agrep to r-NSA to solve multiple sequences approximate matching.

    PubMed

    Ni, Bing; Wong, Man-Hon; Lam, Chi-Fai David; Leung, Kwong-Sak

    2014-01-01

    This paper addresses the approximate matching problem in a database consisting of multiple DNA sequences, where the proposed approach applies Agrep to a new truncated suffix array, r-NSA. The construction time of the structure is linear in the database size, and indexing a substring in the structure takes constant time. The number of characters processed when applying Agrep is analysed theoretically, and the theoretical upper bound closely approximates the empirical character count obtained by enumerating the characters in the actual structure built. Experiments are carried out using (synthetic) random DNA sequences as well as (real) genome sequences, including Hepatitis-B virus and the X chromosome. Experimental results show that, compared with the straightforward approach of applying Agrep to the multiple sequences individually, the proposed approach solves the matching problem in much shorter time. The speed-up of the approach depends on the sequence patterns; for highly similar homologous genome sequences, which are the common case in real-life genomes, it can be up to several orders of magnitude.
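
    As background for how Agrep-style matching works, the sketch below implements the classic bitap (Shift-And) bit-parallel search, restricted to substitution errors for clarity; the paper's r-NSA index, which lets such a scan run over many sequences at once, is not reproduced, and the function and variable names are ours.

    ```python
    def bitap_substitutions(text, pattern, k):
        """Report end positions in text where pattern matches with at most k
        substitution errors, using bit-parallel Shift-And (the core idea behind
        Agrep). Illustrative only; it does not use the paper's r-NSA structure."""
        m = len(pattern)
        masks = {}
        for i, c in enumerate(pattern):
            masks[c] = masks.get(c, 0) | (1 << i)
        R = [0] * (k + 1)            # R[d]: prefix-match states allowing d errors
        hit_bit = 1 << (m - 1)
        hits = []
        for pos, c in enumerate(text):
            cmask = masks.get(c, 0)
            prev = R[0]
            R[0] = ((R[0] << 1) | 1) & cmask
            for d in range(1, k + 1):
                cur = R[d]
                # extend exactly, or consume this character as a substitution
                R[d] = (((cur << 1) | 1) & cmask) | ((prev << 1) | 1)
                prev = cur
            if R[k] & hit_bit:
                hits.append(pos)     # match ends at this text index
        return hits

    print(bitap_substitutions("ACGTTGCAACGT", "TGCGA", k=1))
    ```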

  10. Vehicle speed detection based on gaussian mixture model using sequential of images

    NASA Astrophysics Data System (ADS)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

    Intelligent transportation systems are one of the important components in the development of smart cities. Detection of vehicle speed on the highway supports traffic engineering and management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. The inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI), and the approach proceeds as follows. First, we separate foreground and background in each frame using a Gaussian Mixture Model (GMM). Then, in each frame, we locate the object and compute its centroid. Next, we determine the speed from the movement of the centroid across the sequence of frames, considering only frames in which the centroid lies inside the predefined ROI. Finally, we convert the pixel displacement per frame into a speed in km/hour. The system was validated by comparing speeds calculated manually with those obtained by the system. In software testing, vehicle speeds were detected with accuracies ranging from 77.41% to 97.52%, and detection results on real road footage are reported alongside the true vehicle speeds.
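
    A minimal sketch of the pipeline described above, using OpenCV's MOG2 Gaussian-mixture background subtractor; the video filename, frame rate, ROI and pixel-to-metre scale are assumed placeholders, and the single-centroid tracking is a simplification of the paper's method.

    ```python
    import cv2
    import numpy as np

    FPS = 30.0                            # assumed frame rate of the footage
    METERS_PER_PIXEL = 0.05               # assumed camera calibration factor
    Y0, Y1, X0, X1 = 100, 300, 200, 400   # assumed region of interest (pixels)

    cap = cv2.VideoCapture("traffic.mp4")     # hypothetical input video
    mog = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

    prev, speeds = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = mog.apply(frame[Y0:Y1, X0:X1])                        # GMM foreground mask
        fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        m = cv2.moments(fg, binaryImage=True)
        if m["m00"] > 0:                                           # object inside ROI
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            if prev is not None:
                dpix = np.hypot(cx - prev[0], cy - prev[1])        # centroid shift
                speeds.append(dpix * METERS_PER_PIXEL * FPS * 3.6) # km/h
            prev = (cx, cy)
        else:
            prev = None

    if speeds:
        print("mean speed estimate: %.1f km/h" % (sum(speeds) / len(speeds)))
    ```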

  11. Contribution of Leg-Muscle Forces to Paddle Force and Kayak Speed During Maximal-Effort Flat-Water Paddling.

    PubMed

    Nilsson, Johnny E; Rosdahl, Hans G

    2016-01-01

    The purpose was to investigate the contribution of leg-muscle-generated forces to paddle force and kayak speed during maximal-effort flat-water paddling. Five elite male kayakers at national and international level participated. The participants warmed up at progressively increasing speeds and then performed a maximal-effort, nonrestricted paddling sequence. This was followed, after 5 min of rest, by a maximal-effort paddling sequence with the leg action restricted--the knee joints "locked." Left- and right-side foot-bar and paddle forces were recorded with specially designed force devices. In addition, angular displacement of the right and left knees was recorded with an electrogoniometric technique, and kayak speed was calculated from GPS signals sampled at 5 Hz. The results showed that the loss of push and pull foot-bar forces was accompanied by reductions of 21% and 16% in mean paddle-stroke force and mean kayak speed, respectively. Thus, the foot-bar force generated by lower-limb action contributes significantly to kayakers' paddling performance.

  12. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system that adopts a combination of coded excitation and synthetic aperture focusing techniques. With our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method that adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated three-dimensional ultrasound image sequences.

  13. Ongoing behavior predicts perceptual report of interval duration

    PubMed Central

    Gouvêa, Thiago S.; Monteiro, Tiago; Soares, Sofia; Atallah, Bassam V.; Paton, Joseph J.

    2014-01-01

    The ability to estimate the passage of time is essential for adaptive behavior in complex environments. Yet, it is not known how the brain encodes time over the durations necessary to explain animal behavior. Under temporally structured reinforcement schedules, animals tend to develop temporally structured behavior, and interval timing has been suggested to be accomplished by learning sequences of behavioral states. If this is true, trial-to-trial fluctuations in behavioral sequences should be predictive of fluctuations in time estimation. We trained rodents in a duration categorization task while continuously monitoring their behavior with a high-speed camera. Animals developed highly reproducible behavioral sequences during the interval being timed. Moreover, those sequences were often predictive of the perceptual report from early in the trial, providing support to the idea that animals may use learned behavioral patterns to estimate the duration of time intervals. To better resolve the issue, we propose that continuous and simultaneous behavioral and neural monitoring will enable identification of neural activity related to time perception that is not explained by ongoing behavior. PMID:24672473

  14. Image and Video Compression with VLSI Neural Networks

    NASA Technical Reports Server (NTRS)

    Fang, W.; Sheu, B.

    1993-01-01

    An advanced motion-compensated predictive video compression system based on artificial neural networks has been developed to effectively eliminate the temporal and spatial redundancy of video image sequences and thus reduce the bandwidth and storage required for the transmission and recording of the video signal. The VLSI neuroprocessor for high-speed, high-ratio image compression, based upon a self-organizing network, is compared with the conventional algorithm for vector quantization. The proposed method is quite efficient and can achieve near-optimal results.
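
    To make the vector-quantization step concrete, the snippet below encodes 4x4 image blocks against a codebook by brute-force nearest-codeword search; the self-organizing-map training and the VLSI neuroprocessor that perform this search in hardware are not shown, and all sizes and names are illustrative assumptions.

    ```python
    import numpy as np

    def vq_encode(image, codebook, block=4):
        """Replace each block x block patch of a grayscale image with the index
        of its nearest codeword (brute-force search; illustrative only)."""
        h, w = image.shape
        h, w = h - h % block, w - w % block
        patches = (image[:h, :w]
                   .reshape(h // block, block, w // block, block)
                   .swapaxes(1, 2)
                   .reshape(-1, block * block))
        # squared distance from every patch to every codeword
        d = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1).reshape(h // block, w // block)

    rng = np.random.default_rng(0)
    img = rng.random((64, 64)).astype(np.float32)
    cb = rng.random((256, 16)).astype(np.float32)   # 256 codewords of 4x4 patches
    indices = vq_encode(img, cb)                    # one 8-bit index per 16-pixel block
    print(indices.shape, "-> roughly 16:1 compression for 8-bit pixels")
    ```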

  15. Robust sensorimotor representation to physical interaction changes in humanoid motion learning.

    PubMed

    Shimizu, Toshihiko; Saegusa, Ryo; Ikemoto, Shuhei; Ishiguro, Hiroshi; Metta, Giorgio

    2015-05-01

    This paper proposes a learning-from-demonstration system based on a motion feature called the phase transfer sequence. The system aims to synthesize the knowledge of humanoid whole-body motions learned during teacher-supported interactions and to apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude derived from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and a compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned with the proposed feature than with conventional similarity measurements. The phase transfer sequence also enhanced the convergence speed of motion learning. The proposed feature is original primarily in that it absorbs the gaps caused by changes in the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.

  16. Robust object tracking techniques for vision-based 3D motion analysis applications

    NASA Astrophysics Data System (ADS)

    Knyaz, Vladimir A.; Zheltov, Sergey Y.; Vishnyakov, Boris V.

    2016-04-01

    Automated and accurate spatial motion capture of an object is necessary for a wide variety of applications in industry and science, virtual reality and film, medicine and sports. For most applications, the reliability and accuracy of the data obtained, as well as convenience for the user, are the main characteristics defining the quality of a motion capture system. Among existing systems for 3D data acquisition, which are based on different physical principles (accelerometry, magnetometry, time-of-flight, vision-based), optical motion capture systems have a set of advantages such as high acquisition speed, potential for high accuracy, and automation based on advanced image processing algorithms. For vision-based motion capture, accurate and robust detection and tracking of object features through the video sequence are the key elements, along with the level of automation of the capturing process. To provide high accuracy of the obtained spatial data, the developed vision-based motion capture system "Mosca" is based on photogrammetric principles of 3D measurement and supports high-speed image acquisition in synchronized mode. It includes two to four technical vision cameras for capturing video sequences of object motion. Original camera calibration and external orientation procedures provide the basis for high accuracy of the 3D measurements. A set of algorithms, both for detecting, identifying and tracking similar targets and for marker-less object motion capture, has been developed and tested. The evaluation results show high robustness and reliability for various motion analysis tasks in technical and biomechanical applications.

  17. Time-Resolved Images of Laser-Induced Gas Ignition Using High-Speed Photographic and Spectroscopic Techniques

    NASA Astrophysics Data System (ADS)

    Chen, Ying-Ling; Lewis, J. W. L.; Parigger, C. G.

    1997-11-01

    Two-dimensional visualization of laser-induced spark ignition in atmospheric-pressure gases is reported. Laser-induced breakdown in air, O2 and a combustible NH_3/O2 mixture was achieved using a 1064 nm Nd:YAG laser of approximately 6 ns pulse width, focused 10 mm above a 60-mm-diameter flat-flame burner. An argon sheath-gas flow was used to stabilize the core flowfield. High-speed photographic techniques were applied to trace a complete sequence of kernel development for a single breakdown or ignition event. Thermochemical characteristics of the post-breakdown regime were analyzed by laser-induced fluorescence spectroscopy (LIFS). The spatial distribution of the NH free radical observed by planar LIF showed the contours of the developing flame front. The corresponding NH temperature maps obtained by excitation LIFS and Boltzmann plots are also presented.

  18. A compact Acousto-Optic Lens for 2D and 3D femtosecond based 2-photon microscopy.

    PubMed

    Kirkby, Paul A; Srinivas Nadella, K M Naga; Silver, R Angus

    2010-06-21

    We describe a high speed 3D Acousto-Optic Lens Microscope (AOLM) for femtosecond 2-photon imaging. By optimizing the design of the 4 AO Deflectors (AODs) and by deriving new control algorithms, we have developed a compact spherical AOL with a low temporal dispersion that enables 2-photon imaging at 10-fold lower power than previously reported. We show that the AOLM can perform high speed 2D raster-scan imaging (>150 Hz) without scan rate dependent astigmatism. It can deflect and focus a laser beam in a 3D random access sequence at 30 kHz and has an extended focusing range (>137 μm; 40X 0.8NA objective). These features are likely to make the AOLM a useful tool for studying fast physiological processes distributed in 3D space.

  19. High speed cinematography of the initial break-point of latex condoms during the air burst test.

    PubMed

    Stube, R; Voeller, B; Davidhazy, A

    1990-06-01

    High speed cinematography of latex condoms inflated to burst under standard (ISO) conditions reveals that rupture of the condom typically is initiated at a small focal point on the shank of the condom and then rapidly propagates throughout the condom's surface, often ending with partial or full severance of the condom at its point of attachment to the air burst instrument. This sequence of events is the reverse of that sometimes hypothesized to occur, where initiation of burst was considered to begin at the attachment point and to constitute a testing method artifact. This hypothesis of breakage at the attachment point, if true, would diminish the value of the air burst test as a standard for assessing manufacturing quality control as well as for condom strength measurements and comparisons.

  20. Motion-oriented high speed 3-D measurements by binocular fringe projection using binary aperiodic patterns.

    PubMed

    Feng, Shijie; Chen, Qian; Zuo, Chao; Tao, Tianyang; Hu, Yan; Asundi, Anand

    2017-01-23

    Fringe projection is an extensively used technique for high-speed three-dimensional (3-D) measurement of dynamic objects. To retrieve a moving object precisely at the pixel level, researchers prefer to project a sequence of fringe images onto its surface. However, the motion often leads to artifacts in the reconstructions because the set of patterns is recorded sequentially. In order to reduce the adverse impact of movement, we present a novel high-speed 3-D scanning technique combining fringe projection and stereo. First, a promising measuring speed is achieved by modifying the traditional aperiodic sinusoidal patterns so that the fringe images can be cast at kilohertz rates with the widely used defocusing strategy. Next, a temporal intensity tracing algorithm is developed to further alleviate the influence of motion by accurately tracing the ideal intensity for stereo matching. Then, a combined cost measure is suggested to robustly estimate the matching cost for each pixel, and lastly a three-step refinement framework follows, not only to eliminate outliers caused by the motion but also to obtain sub-pixel disparity results for 3-D reconstruction. In comparison with the traditional method, where the effect of motion is not considered, experimental results show that the reconstruction accuracy for dynamic objects can be improved by an order of magnitude with the proposed method.

  1. Review on the Traction System Sensor Technology of a Rail Transit Train.

    PubMed

    Feng, Jianghua; Xu, Junfeng; Liao, Wu; Liu, Yong

    2017-06-11

    The development of high-speed intelligent rail transit has increased the number of sensors applied on trains. These play an important role in train state control and monitoring. These sensors generally work in a severe environment, so the key problem for sensor data acquisition is to ensure data accuracy and reliability. In this paper, we follow the sequence of sensor signal flow, present sensor signal sensing technology, sensor data acquisition, and processing technology, as well as sensor fault diagnosis technology based on the voltage, current, speed, and temperature sensors which are commonly used in train traction systems. Finally, intelligent sensors and future research directions of rail transit train sensors are discussed.

  2. Review on the Traction System Sensor Technology of a Rail Transit Train

    PubMed Central

    Feng, Jianghua; Xu, Junfeng; Liao, Wu; Liu, Yong

    2017-01-01

    The development of high-speed intelligent rail transit has increased the number of sensors applied on trains. These play an important role in train state control and monitoring. These sensors generally work in a severe environment, so the key problem for sensor data acquisition is to ensure data accuracy and reliability. In this paper, we follow the sequence of sensor signal flow, present sensor signal sensing technology, sensor data acquisition, and processing technology, as well as sensor fault diagnosis technology based on the voltage, current, speed, and temperature sensors which are commonly used in train traction systems. Finally, intelligent sensors and future research directions of rail transit train sensors are discussed. PMID:28604615

  3. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off between spatial resolution and temporal resolution using random space-time sampling. However, most of these studies showed results for higher-frame-rate video that were produced by simulation experiments or by an optically simulated random-sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the exposure duration by row for each 8x8 pixel block. This CMOS sensor is not fully controllable at the pixel level and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method to realize pseudo-random sampling for high-speed video acquisition that exploits the flexibility of the CMOS sensor. We reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.

  4. Oasis 2: improved online analysis of small RNA-seq data.

    PubMed

    Rahman, Raza-Ur; Gautam, Abhivyakti; Bethune, Jörn; Sattar, Abdul; Fiosins, Maksims; Magruder, Daniel Sumner; Capece, Vincenzo; Shomroni, Orr; Bonn, Stefan

    2018-02-14

    Small RNA molecules play important roles in many biological processes and their dysregulation or dysfunction can cause disease. The current method of choice for genome-wide sRNA expression profiling is deep sequencing. Here we present Oasis 2, which is a new main release of the Oasis web application for the detection, differential expression, and classification of small RNAs in deep sequencing data. Compared to its predecessor Oasis, Oasis 2 features a novel and speed-optimized sRNA detection module that supports the identification of small RNAs in any organism with higher accuracy. Next to the improved detection of small RNAs in a target organism, the software now also recognizes potential cross-species miRNAs and viral and bacterial sRNAs in infected samples. In addition, novel miRNAs can now be queried and visualized interactively, providing essential information for over 700 high-quality miRNA predictions across 14 organisms. Robust biomarker signatures can now be obtained using the novel enhanced classification module. Oasis 2 enables biologists and medical researchers to rapidly analyze and query small RNA deep sequencing data with improved precision, recall, and speed, in an interactive and user-friendly environment. Oasis 2 is implemented in Java, J2EE, mysql, Python, R, PHP and JavaScript. It is freely available at https://oasis.dzne.de.

  5. QuickProbs—A Fast Multiple Sequence Alignment Algorithm Designed for Graphics Processors

    PubMed Central

    Gudyś, Adam; Deorowicz, Sebastian

    2014-01-01

    Multiple sequence alignment is a crucial task in a number of biological analyses, such as secondary structure prediction, domain searching, and phylogeny. MSAProbs is currently the most accurate alignment algorithm, but its effectiveness is obtained at the expense of computational time. In this paper we present QuickProbs, a variant of MSAProbs customised for graphics processors. We selected the two most time-consuming stages of MSAProbs to be redesigned for GPU execution: the posterior matrix calculation and the consistency transformation. Experiments on three popular benchmarks (BAliBASE, PREFAB, OXBench-X) on a quad-core PC equipped with a high-end graphics card show QuickProbs to be 5.7 to 9.7 times faster than the original CPU-parallel MSAProbs. Additional tests performed on several protein families from the Pfam database give an overall speed-up of 6.7. Compared with other algorithms like MAFFT, MUSCLE, or ClustalW, QuickProbs proved to be much more accurate at similar speed. Additionally, we introduce a tuned variant of QuickProbs which is significantly more accurate on sets of distantly related sequences than MSAProbs without exceeding its computation time. The GPU part of QuickProbs was implemented in OpenCL, thus the package is suitable for graphics processors produced by all major vendors. PMID:24586435

  6. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    NASA Astrophysics Data System (ADS)

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

    Accident reconstruction is important in the general context of increasing road traffic safety. Among traffic accidents, those caused by tire explosions are critical in terms of the severity of their consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows determination of the initial speed of the vehicle involved by running the sequence of the road event backwards. The authors also aimed to validate the two distinct analytical approaches by calibrating the calculation algorithms on a case study.
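
    As a worked illustration of the backward-chaining idea (not the paper's specific model), the energy balance for a single deceleration phase with an assumed effective drag factor relates entry and exit speeds as shown below; chaining such phases from rest back to the tire-burst point yields an estimate of the initial speed.

    ```latex
    % Work-energy balance over one deceleration phase of length d with an
    % assumed effective drag factor \mu (illustrative, not the paper's model):
    \[
      \tfrac{1}{2} m v_0^2 = \tfrac{1}{2} m v_1^2 + \mu m g d
      \qquad\Longrightarrow\qquad
      v_0 = \sqrt{v_1^2 + 2 \mu g d}.
    \]
    % Starting from v = 0 at the end of the roll and applying the relation
    % phase by phase backwards gives an estimate of the speed at the moment
    % of the tire explosion.
    ```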

  7. High-speed photography and stress-gauge studies of the impact and penetration of plates by rods

    NASA Astrophysics Data System (ADS)

    Bourne, Neil K.; Forde, Lucy C.; Field, John E.

    1997-05-01

    There has been much study of the penetration of semi- infinite and finite thickness targets by long rods at normal incidence. The effects of oblique impact have received relatively little attention and techniques of modeling are thus less developed. It was decided to conduct an experimental investigation of the effects of rod penetration at various angles of impact at zero yaw. The rods were mounted in a reverse ballistic configuration so that their response could be quantified through the impact. Scale copper, mild steel and tungsten alloy rods with hemispherical ends were suspended at the end of the barrel of a 50 mm gas gun at the University of Cambridge. The rods were instrumented with embedded manganin piezoresistive stress gauges. Annealed aluminum, duraluminum and rolled homogeneous armor plates of varying thickness and obliquity were fired at the rods at one of two velocities. The impacts were backlit and photographed with an Ultranac FS501 programmable high-speed camera operated in framing mode. The gauges were monitored using a 2 GH s-1 storage oscilloscope. Rods and plates were recovered after the impact for microstructural examination. Additionally, penetration of borosilicate glass targets was investigated using high-speed photography and a localized Xe flash source and schlieren optics. Additional data was obtained by the use of flash X-ray. Waves and damage were visualized in the glass. High-speed sequences and gauge records are presented showing the mechanisms of penetration and exit seen during impact.

  8. The other fiber, the other fabric, the other way

    NASA Astrophysics Data System (ADS)

    Stephens, Gary R.

    1993-02-01

    Coaxial cable and distributed switches provide a way to configure high-speed Fiber Channel fabrics. This type of fabric provides a cost-effective alternative to a fabric of optical fibers and centralized cross-point switches. The fabric topology is a simple tree. Products using parallel busses require a significant change to migrate to a serial bus. Coaxial cables and distributed switches require a smaller technology shift for these device manufacturers. Each distributed switch permits both medium type and speed changes. The fabric can grow and bridge to optical fibers as the needs expand. A distributed fabric permits earlier entry into high-speed serial operations. For very low-cost fabrics, a distributed switch may permit a link configured as a loop. The loop eliminates half of the ports when compared to a switched point-to-point fabric. A fabric of distributed switches can interface to a cross-point switch fabric. The expected sequence of migration is: closed loops, small closed fabrics, and, finally, bridges, to connect optical cross-point switch fabrics. This paper presents the concept of distributed fabrics, including address assignment, frame routing, and general operation.

  9. Frequency of the first feature in action sequences influences feature binding.

    PubMed

    Mattson, Paul S; Fournier, Lisa R; Behmer, Lawrence P

    2012-10-01

    We investigated whether binding among perception and action feature codes is a preliminary step toward creating a more durable memory trace of an action event. If so, increasing the frequency of a particular event (e.g., a stimulus requiring a movement with the left or right hand in an up or down direction) should increase the strength and speed of feature binding for this event. The results from two experiments, using a partial-repetition paradigm, confirmed that feature binding increased in strength and/or occurred earlier for a high-frequency (e.g., left hand moving up) than for a low-frequency (e.g., right hand moving down) event. Moreover, increasing the frequency of the first-specified feature in the action sequence alone (e.g., "left" hand) increased the strength and/or speed of action feature binding (e.g., between the "left" hand and movement in an "up" or "down" direction). The latter finding suggests an update to the theory of event coding, as not all features in the action sequence equally determine binding strength. We conclude that action planning involves serial binding of features in the order of action feature execution (i.e., associations among features are not bidirectional but are directional), which can lead to a more durable memory trace. This is consistent with physiological evidence suggesting that serial order is preserved in an action plan executed from memory and that the first feature in the action sequence may be critical in preserving this serial order.

  10. Use of simulated experiments for material characterization of brittle materials subjected to high strain rate dynamic tension

    PubMed Central

    Saletti, Dominique

    2017-01-01

    Rapid progress in ultra-high-speed imaging has allowed material properties to be studied at high strain rates by applying full-field measurements and inverse identification methods. Nevertheless, the sensitivity of these techniques still requires a better understanding, since various extrinsic factors present during an actual experiment make it difficult to separate different sources of errors that can significantly affect the quality of the identified results. This study presents a methodology using simulated experiments to investigate the accuracy of the so-called spalling technique (used to study tensile properties of concrete subjected to high strain rates) by numerically simulating the entire identification process. The experimental technique uses the virtual fields method and the grid method. The methodology consists of reproducing the recording process of an ultra-high-speed camera by generating sequences of synthetically deformed images of a sample surface, which are then analysed using the standard tools. The investigation of the uncertainty of the identified parameters, such as Young's modulus along with the stress–strain constitutive response, is addressed by introducing the most significant user-dependent parameters (i.e. acquisition speed, camera dynamic range, grid sampling, blurring), proving that the used technique can be an effective tool for error investigation. This article is part of the themed issue ‘Experimental testing and modelling of brittle materials at high strain rates’. PMID:27956505

  11. Linear and exponential TAIL-PCR: a method for efficient and quick amplification of flanking sequences adjacent to Tn5 transposon insertion sites.

    PubMed

    Jia, Xianbo; Lin, Xinjian; Chen, Jichen

    2017-11-02

    Current genome walking methods are very time consuming, and many produce non-specific amplification products. To amplify the flanking sequences that are adjacent to Tn5 transposon insertion sites in Serratia marcescens FZSF02, we developed a genome walking method based on TAIL-PCR. This PCR method added a 20-cycle linear amplification step before the exponential amplification step to increase the concentration of the target sequences. Products of the linear amplification and the exponential amplification were diluted 100-fold to decrease the concentration of the templates that cause non-specific amplification. Fast DNA polymerase with a high extension speed was used in this method, and an amplification program was used to rapidly amplify long specific sequences. With this linear and exponential TAIL-PCR (LETAIL-PCR), we successfully obtained products larger than 2 kb from Tn5 transposon insertion mutant strains within 3 h. This method can be widely used in genome walking studies to amplify unknown sequences that are adjacent to known sequences.

  12. Video image processing to create a speed sensor

    DOT National Transportation Integrated Search

    1999-11-01

    Image processing has been applied to traffic analysis in recent years, with different goals. In the report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time traffic images. We extract moving edges ...

  13. Fault-Tolerant Sequencer Using FPGA-Based Logic Designs for Space Applications

    DTIC Science & Technology

    2013-12-01

    [Fragment of the report's acronym list recovered from extraction: SBU, single-bit upset; SDK, software development kit; SDRAM, synchronous dynamic random-access memory; SEB, single-event burnout; VHDL, VHSIC hardware description language; VHSIC, very-high-speed integrated circuits; VLSI, very-large-scale integration; VQFP, very...] A single-event effect can produce a transient pulse, called a single-event transient (SET), or even cause permanent damage to the device in the form of a burnout or gate rupture.

  14. SequenceL: Automated Parallel Algorithms Derived from CSP-NT Computational Laws

    NASA Technical Reports Server (NTRS)

    Cooke, Daniel; Rushton, Nelson

    2013-01-01

    With the introduction of new parallel architectures like the cell and multicore chips from IBM, Intel, AMD, and ARM, as well as the petascale processing available for high-end computing, a larger number of programmers will need to write parallel codes. Adding the parallel control structure to the sequence, selection, and iterative control constructs increases the complexity of code development, which often results in increased development costs and decreased reliability. SequenceL is a high-level programming language, that is, a programming language that is closer to a human's way of thinking than to a machine's. Historically, high-level languages have resulted in decreased development costs and increased reliability, at the expense of performance. In recent applications at JSC and in industry, SequenceL has demonstrated the usual advantages of high-level programming in terms of low cost and high reliability. SequenceL programs, however, have run at speeds typically comparable with, and in many cases faster than, their counterparts written in C and C++ when run on single-core processors. Moreover, SequenceL is able to generate parallel executables automatically for multicore hardware, gaining parallel speedups without any extra effort from the programmer beyond what is required to write the sequential/single-core code. A SequenceL-to-C++ translator has been developed that automatically renders readable multithreaded C++ from a combination of a SequenceL program and sample data input. The SequenceL language is based on two fundamental computational laws, Consume-Simplify-Produce (CSP) and Normalize-Transpose (NT), which enable it to automate the creation of parallel algorithms from high-level code that has no annotations of parallelism whatsoever. In our anecdotal experience, SequenceL development has been in every case less costly than development of the same algorithm in sequential (that is, single-core, single-process) C or C++, and an order of magnitude less costly than development of comparable parallel code. Moreover, SequenceL not only automatically parallelizes the code, but since it is based on CSP-NT, it is provably race free, thus eliminating the largest quality challenge the developer of parallelized software faces.

  15. Development of a State Machine Sequencer for the Keck Interferometer: Evolution, Development and Lessons Learned using a CASE Tool Approach

    NASA Technical Reports Server (NTRS)

    Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee

    2004-01-01

    This paper presents a discussion of the evolution of a sequencer from a simple EPICS (Experimental Physics and Industrial Control System) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a CASE (Computer Aided Software Engineering) tool approach. The main purpose of the sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.

  16. Development of a state machine sequencer for the Keck Interferometer: evolution, development, and lessons learned using a CASE tool approach

    NASA Astrophysics Data System (ADS)

    Reder, Leonard J.; Booth, Andrew; Hsieh, Jonathan; Summers, Kellee R.

    2004-09-01

    This paper presents a discussion of the evolution of a sequencer from a simple Experimental Physics and Industrial Control System (EPICS) based sequencer into a complex implementation designed utilizing UML (Unified Modeling Language) methodologies and a Computer Aided Software Engineering (CASE) tool approach. The main purpose of the Interferometer Sequencer (called the IF Sequencer) is to provide overall control of the Keck Interferometer to enable science operations to be carried out by a single operator (and/or observer). The interferometer links the two 10m telescopes of the W. M. Keck Observatory at Mauna Kea, Hawaii. The IF Sequencer is a high-level, multi-threaded, Harel finite state machine software program designed to orchestrate several lower-level hardware and software hard real-time subsystems that must perform their work in a specific and sequential order. The sequencing need not be done in hard real-time. Each state machine thread commands either a high-speed real-time multiple mode embedded controller via CORBA, or slower controllers via EPICS Channel Access interfaces. The overall operation of the system is simplified by the automation. The UML is discussed and our use of it to implement the sequencer is presented. The decision to use the Rhapsody product as our CASE tool is explained and reflected upon. Most importantly, a section on lessons learned is presented and the difficulty of integrating CASE tool automatically generated C++ code into a large control system consisting of multiple infrastructures is presented.

  17. A compact acousto-optic lens for 2D and 3D femtosecond based 2-photon microscopy

    PubMed Central

    Kirkby, Paul A.; Naga Srinivas, N.K.M.; Silver, R. Angus

    2010-01-01

    We describe a high speed 3D Acousto-Optic Lens Microscope (AOLM) for femtosecond 2-photon imaging. By optimizing the design of the 4 AO Deflectors (AODs) and by deriving new control algorithms, we have developed a compact spherical AOL with a low temporal dispersion that enables 2-photon imaging at 10-fold lower power than previously reported. We show that the AOLM can perform high speed 2D raster-scan imaging (>150 Hz) without scan rate dependent astigmatism. It can deflect and focus a laser beam in a 3D random access sequence at 30 kHz and has an extended focusing range (>137 μm; 40X 0.8NA objective). These features are likely to make the AOLM a useful tool for studying fast physiological processes distributed in 3D space. PMID:20588506

  18. Dynamic strain distribution of FRP plate under blast loading

    NASA Astrophysics Data System (ADS)

    Saburi, T.; Yoshida, M.; Kubota, S.

    2017-02-01

    The dynamic strain distribution of a fiber-reinforced plastic (FRP) plate under blast loading was investigated using a Digital Image Correlation (DIC) image analysis method. The test FRP plates were mounted parallel to each other on a steel frame. A 50 g charge of Composition C4 explosive was used as the blast loading source and set in the center of the FRP plates. The dynamic behavior of the FRP plate under blast loading was observed by two high-speed video cameras. The two high-speed video image sequences were used to analyze the three-dimensional strain distribution of the FRP by means of the DIC method. A point strain profile extracted from the analyzed strain distribution data was compared with a strain profile measured directly using a strain gauge, and it was shown that the strain profile under blast loading obtained by the DIC method is quantitatively accurate.

  19. Processing Device for High-Speed Execution of an Xrisc Computer Program

    NASA Technical Reports Server (NTRS)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and controls execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provides the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values are loaded into the register and the set of output values are unloaded from the register in parallel with processing of the current calculation set.

  20. i-rDNA: alignment-free algorithm for rapid in silico detection of ribosomal gene fragments from metagenomic sequence data sets.

    PubMed

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Chadaram, Sudha; Mande, Sharmila S

    2011-11-30

    Obtaining accurate estimates of microbial diversity using rDNA profiling is the first step in most metagenomics projects. Consequently, most metagenomic projects spend considerable amounts of time, money and manpower for experimentally cloning, amplifying and sequencing the rDNA content in a metagenomic sample. In the second step, the entire genomic content of the metagenome is extracted, sequenced and analyzed. Since DNA sequences obtained in this second step also contain rDNA fragments, rapid in silico identification of these rDNA fragments would drastically reduce the cost, time and effort of current metagenomic projects by entirely bypassing the experimental steps of primer based rDNA amplification, cloning and sequencing. In this study, we present an algorithm called i-rDNA that can facilitate the rapid detection of 16S rDNA fragments from amongst millions of sequences in metagenomic data sets with high detection sensitivity. Performance evaluation with data sets/database variants simulating typical metagenomic scenarios indicates the significantly high detection sensitivity of i-rDNA. Moreover, i-rDNA can process a million sequences in less than an hour on a simple desktop with modest hardware specifications. In addition to the speed of execution, high sensitivity and low false positive rate, the utility of the algorithmic approach discussed in this paper is immense given that it would help in bypassing the entire experimental step of primer-based rDNA amplification, cloning and sequencing. Application of this algorithmic approach would thus drastically reduce the cost, time and human efforts invested in all metagenomic projects. A web-server for the i-rDNA algorithm is available at http://metagenomics.atc.tcs.com/i-rDNA/

  1. Leveraging the Power of High Performance Computing for Next Generation Sequencing Data Analysis: Tricks and Twists from a High Throughput Exome Workflow

    PubMed Central

    Wonczak, Stephan; Thiele, Holger; Nieroda, Lech; Jabbari, Kamel; Borowski, Stefan; Sinha, Vishal; Gunia, Wilfried; Lang, Ulrich; Achter, Viktor; Nürnberg, Peter

    2015-01-01

    Next generation sequencing (NGS) has been a great success and is now a standard method of research in the life sciences. With this technology, dozens of whole genomes or hundreds of exomes can be sequenced in rather short time, producing huge amounts of data. Complex bioinformatics analyses are required to turn these data into scientific findings. In order to run these analyses fast, automated workflows implemented on high performance computers are state of the art. While providing sufficient compute power and storage to meet the NGS data challenge, high performance computing (HPC) systems require special care when utilized for high throughput processing. This is especially true if the HPC system is shared by different users. Here, stability, robustness and maintainability are as important for automated workflows as speed and throughput. To achieve all of these aims, dedicated solutions have to be developed. In this paper, we present the tricks and twists that we utilized in the implementation of our exome data processing workflow. It may serve as a guideline for other high throughput data analysis projects using a similar infrastructure. The code implementing our solutions is provided in the supporting information files. PMID:25942438

  2. Visual Prediction in Infancy: What Is the Association with Later Vocabulary?

    ERIC Educational Resources Information Center

    Ellis, Erica M.; Gonzalez, Marybel Robledo; Deák, Gedeon O.

    2014-01-01

    Young infants can learn statistical regularities and patterns in sequences of events. Studies have demonstrated a relationship between early sequence learning skills and later development of cognitive and language skills. We investigated the relation between infants' visual response speed to novel event sequences, and their later receptive and…

  3. Four-dimensional guidance algorithms for aircraft in an air traffic control environment

    NASA Technical Reports Server (NTRS)

    Pecsvaradi, T.

    1975-01-01

    Theoretical development and computer implementation of three guidance algorithms are presented. From a small set of input parameters the algorithms generate the ground track, altitude profile, and speed profile required to implement an experimental 4-D guidance system. Given a sequence of waypoints that define a nominal flight path, the first algorithm generates a realistic, flyable ground track consisting of a sequence of straight line segments and circular arcs. Each circular turn is constrained by the minimum turning radius of the aircraft. The ground track and the specified waypoint altitudes are used as inputs to the second algorithm which generates the altitude profile. The altitude profile consists of piecewise constant flight path angle segments, each segment lying within specified upper and lower bounds. The third algorithm generates a feasible speed profile subject to constraints on the rate of change in speed, permissible speed ranges, and effects of wind. Flight path parameters are then combined into a chronological sequence to form the 4-D guidance vectors. These vectors can be used to drive the autopilot/autothrottle of the aircraft so that a 4-D flight path could be tracked completely automatically; or these vectors may be used to drive the flight director and other cockpit displays, thereby enabling the pilot to track a 4-D flight path manually.
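
    The sketch below illustrates only the fly-by turn geometry used when joining two straight ground-track segments with a circular arc of a given minimum radius; the altitude and speed-profile algorithms, wind effects and the actual NASA implementation are not reproduced, and all function and variable names are ours.

    ```python
    import math

    def flyby_turn(p_prev, wp, p_next, R):
        """Geometry of a constant-radius fly-by turn at waypoint wp (2-D points).
        Returns the distance before/after wp at which the arc begins/ends and
        the arc length, for a turn of radius R. Illustrative only."""
        def unit(a, b):
            dx, dy = b[0] - a[0], b[1] - a[1]
            n = math.hypot(dx, dy)
            return dx / n, dy / n
        u_in = unit(p_prev, wp)      # inbound course
        u_out = unit(wp, p_next)     # outbound course
        # heading change between the two straight segments
        dot = max(-1.0, min(1.0, u_in[0] * u_out[0] + u_in[1] * u_out[1]))
        theta = math.acos(dot)
        lead = R * math.tan(theta / 2.0)   # distance from wp to each tangent point
        arc = R * theta                    # length of the circular arc
        return lead, arc

    lead, arc = flyby_turn((0, 0), (10, 0), (10, 10), R=2.0)
    print(f"turn anticipation {lead:.2f}, arc length {arc:.2f}")
    ```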

  4. Ultrasonic Shear Wave Elasticity Imaging (SWEI) Sequencing and Data Processing Using a Verasonics Research Scanner

    PubMed Central

    Deng, Yufeng; Rouze, Ned C.; Palmeri, Mark L.; Nightingale, Kathryn R.

    2017-01-01

    Ultrasound elasticity imaging has been developed over the last decade to estimate tissue stiffness. Shear wave elasticity imaging (SWEI) quantifies tissue stiffness by measuring the speed of propagating shear waves following acoustic radiation force excitation. This work presents the sequencing and data processing protocols of SWEI using a Verasonics system. The selection of the sequence parameters in a Verasonics programming script is discussed in detail. The data processing pipeline to calculate group shear wave speed (SWS), including tissue motion estimation, data filtering, and SWS estimation is demonstrated. In addition, the procedures for calibration of beam position, scanner timing, and transducer face heating are provided to avoid SWS measurement bias and transducer damage. PMID:28092508
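
    The snippet below sketches one common group-SWS estimator (lateral time-to-peak regression) applied to a hypothetical displacement-versus-time matrix; it stands in for the data-processing stage only and does not reproduce the Verasonics sequence programming or the paper's exact filtering steps.

    ```python
    import numpy as np

    def group_sws_ttp(displacement, lateral_mm, prf_hz):
        """Estimate group shear wave speed from a displacement matrix of shape
        (n_lateral, n_time) tracked at pulse repetition frequency prf_hz.
        Uses the time-to-peak arrival at each lateral position and a linear fit
        of position against arrival time; the slope is the group speed (m/s)."""
        t = np.arange(displacement.shape[1]) / prf_hz             # seconds
        arrival = t[np.argmax(displacement, axis=1)]              # TTP per line
        slope, _ = np.polyfit(arrival, np.asarray(lateral_mm) * 1e-3, 1)
        return slope

    # Synthetic check: a Gaussian shear-wave pulse propagating at 2 m/s
    prf = 10_000.0
    x_mm = np.arange(2, 12, 0.5)
    t = np.arange(0, 0.01, 1 / prf)
    disp = np.exp(-((t[None, :] - (x_mm[:, None] * 1e-3) / 2.0) / 2e-4) ** 2)
    print(f"estimated SWS: {group_sws_ttp(disp, x_mm, prf):.2f} m/s")
    ```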

  5. Electro-optic modulation for high-speed characterization of entangled photon pairs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lukens, Joseph M.; Odele, Ogaga D.; Leaird, Daniel E.

    In this study, we demonstrate a new biphoton manipulation and characterization technique based on electro-optic intensity modulation and time shifting. By applying fast modulation signals with a sharply peaked cross-correlation to each photon from an entangled pair, it is possible to measure temporal correlations with significantly higher precision than that attainable using standard single-photon detection. Low-duty-cycle pulses and maximal-length sequences are considered as modulation functions, reducing the time spread in our correlation measurement by a factor of five compared to our detector jitter. With state-of-the-art electro-optic components, we expect the potential to surpass the speed of any single-photon detectors currently available.

  6. Electro-optic modulation for high-speed characterization of entangled photon pairs

    DOE PAGES

    Lukens, Joseph M.; Odele, Ogaga D.; Leaird, Daniel E.; ...

    2015-11-10

    In this study, we demonstrate a new biphoton manipulation and characterization technique based on electro-optic intensity modulation and time shifting. By applying fast modulation signals with a sharply peaked cross-correlation to each photon from an entangled pair, it is possible to measure temporal correlations with significantly higher precision than that attainable using standard single-photon detection. Low-duty-cycle pulses and maximal-length sequences are considered as modulation functions, reducing the time spread in our correlation measurement by a factor of five compared to our detector jitter. With state-of-the-art electro-optic components, we expect the potential to surpass the speed of any single-photon detectors currently available.

  7. Three-dimensional interactions and vortical flows with emphasis on high speeds

    NASA Technical Reports Server (NTRS)

    Peake, D. J.; Tobak, M.

    1980-01-01

    Diverse kinds of three-dimensional regions of separation in laminar and turbulent boundary layers are discussed that exist on lifting aerodynamic configurations immersed in flows from subsonic to hypersonic speeds. In all cases of three dimensional flow separation, the assumption of continuous vector fields of skin-friction lines and external-flow streamlines, coupled with simple topology laws, provides a flow grammar whose elemental constituents are the singular points: nodes, foci, and saddles. Adopting these notions enables one to create sequences of plausible flow structures, to deduce mean flow characteristics, expose flow mechanisms, and to aid theory and experiment where lack of resolution in numerical calculations or wind tunnel observation causes imprecision in diagnosing the three dimensional flow features.

  8. Multi-modulus algorithm based on global artificial fish swarm intelligent optimization of DNA encoding sequences.

    PubMed

    Guo, Y C; Wang, H; Wu, H P; Zhang, M Q

    2015-12-21

    To address the shortcomings of the constant modulus algorithm (CMA) in equalizing multi-modulus signals, namely its large mean square error (MSE) and slow convergence speed, a multi-modulus algorithm (MMA) based on global artificial fish swarm (GAFS) intelligent optimization of DNA encoding sequences (GAFS-DNA-MMA) was proposed. To improve the convergence rate and reduce the MSE, the proposed algorithm adopts an encoding method based on DNA nucleotide chains to provide a possible solution to the problem. Furthermore, the GAFS algorithm, with its fast convergence and global search ability, was used to find the best sequence. The real and imaginary parts of the initial optimal weight vector of the MMA were obtained through DNA coding of the best sequence. The simulation results show that the proposed algorithm has a faster convergence speed and smaller MSE in comparison with the CMA, the MMA, and the AFS-DNA-MMA.
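
    For context, the sketch below shows a baseline multi-modulus algorithm tap update for a complex linear equalizer with a conventional centre-tap initialization; the paper's contribution, the GAFS-optimized DNA-encoded initial weight vector, is deliberately not reproduced, and the constellation and step size are assumptions.

    ```python
    import numpy as np

    def mma_equalize(x, n_taps=11, mu=1e-3):
        """Baseline multi-modulus algorithm (MMA) blind equalizer for a complex
        received sequence x (here assumed to carry unit-power 16-QAM). Only the
        core MMA tap update is shown; the GAFS/DNA-encoded initialization of the
        weight vector proposed in the paper is not reproduced."""
        levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(10.0)  # unit-power 16-QAM
        R2 = np.mean(levels ** 4) / np.mean(levels ** 2)           # dispersion constant
        w = np.zeros(n_taps, dtype=complex)
        w[n_taps // 2] = 1.0                                        # centre-tap start
        y = np.zeros(len(x), dtype=complex)
        for n in range(n_taps - 1, len(x)):
            u = x[n - n_taps + 1:n + 1][::-1]                       # equalizer regressor
            z = np.dot(w, u)
            # separate real/imaginary modulus errors (the defining MMA step)
            e = z.real * (R2 - z.real ** 2) + 1j * z.imag * (R2 - z.imag ** 2)
            w = w + mu * e * np.conj(u)
            y[n] = z
        return y, w
    ```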

  9. Nanopore with Transverse Nanoelectrodes for Electrical Characterization and Sequencing of DNA

    PubMed Central

    Gierhart, Brian C.; Howitt, David G.; Chen, Shiahn J.; Zhu, Zhineng; Kotecki, David E.; Smith, Rosemary L.; Collins, Scott D.

    2009-01-01

    A DNA sequencing device which integrates transverse conducting electrodes for the measurement of electrode currents during DNA translocation through a nanopore has been nanofabricated and characterized. A focused electron beam (FEB) milling technique, capable of creating features on the order of 1 nm in diameter, was used to create the nanopore. The device was characterized electrically using gold nanoparticles as an artificial analyte with both DC and AC measurement methods. Single nanoparticle/electrode interaction events were recorded. A low-noise, high-speed transimpedance current amplifier for the detection of nano to picoampere currents at microsecond time scales was designed, fabricated and tested for future integration with the nanopore device. PMID:19584949

  10. Nanopore with Transverse Nanoelectrodes for Electrical Characterization and Sequencing of DNA.

    PubMed

    Gierhart, Brian C; Howitt, David G; Chen, Shiahn J; Zhu, Zhineng; Kotecki, David E; Smith, Rosemary L; Collins, Scott D

    2008-06-16

    A DNA sequencing device which integrates transverse conducting electrodes for the measurement of electrode currents during DNA translocation through a nanopore has been nanofabricated and characterized. A focused electron beam (FEB) milling technique, capable of creating features on the order of 1 nm in diameter, was used to create the nanopore. The device was characterized electrically using gold nanoparticles as an artificial analyte with both DC and AC measurement methods. Single nanoparticle/electrode interaction events were recorded. A low-noise, high-speed transimpedance current amplifier for the detection of nano to picoampere currents at microsecond time scales was designed, fabricated and tested for future integration with the nanopore device.

  11. Assemblathon 2: evaluating de novo methods of genome assembly in three vertebrate species

    PubMed Central

    2013-01-01

    Background The process of generating raw genome sequence data continues to become cheaper, faster, and more accurate. However, assembly of such data into high-quality, finished genome sequences remains challenging. Many genome assembly tools are available, but they differ greatly in terms of their performance (speed, scalability, hardware requirements, acceptance of newer read technologies) and in their final output (composition of assembled sequence). More importantly, it remains largely unclear how to best assess the quality of assembled genome sequences. The Assemblathon competitions are intended to assess current state-of-the-art methods in genome assembly. Results In Assemblathon 2, we provided a variety of sequence data to be assembled for three vertebrate species (a bird, a fish, and a snake). This resulted in a total of 43 submitted assemblies from 21 participating teams. We evaluated these assemblies using a combination of optical map data, Fosmid sequences, and several statistical methods. From over 100 different metrics, we chose ten key measures by which to assess the overall quality of the assemblies. Conclusions Many current genome assemblers produced useful assemblies, containing a significant representation of their genes and overall genome structure. However, the high degree of variability between the entries suggests that there is still much room for improvement in the field of genome assembly and that approaches which work well in assembling the genome of one species may not necessarily work well for another. PMID:23870653

  12. A peripheral component interconnect express-based scalable and highly integrated pulsed spectrometer for solution state dynamic nuclear polarization.

    PubMed

    He, Yugui; Feng, Jiwen; Zhang, Zhi; Wang, Chao; Wang, Dong; Chen, Fang; Liu, Maili; Liu, Chaoyang

    2015-08-01

    High sensitivity, high data rates, fast pulses, and accurate synchronization all represent challenges for modern nuclear magnetic resonance spectrometers, which make any expansion or adaptation of these devices to new techniques and experiments difficult. Here, we present a Peripheral Component Interconnect Express (PCIe)-based highly integrated distributed digital architecture pulsed spectrometer that is implemented with electron and nucleus double resonances and is scalable specifically for broad dynamic nuclear polarization (DNP) enhancement applications, including DNP-magnetic resonance spectroscopy/imaging (DNP-MRS/MRI). The distributed modularized architecture can implement more transceiver channels flexibly to meet a variety of MRS/MRI instrumentation needs. The proposed PCIe bus with high data rates can significantly improve data transmission efficiency and communication reliability and allow precise control of pulse sequences. An external high speed double data rate memory chip is used to store acquired data and pulse sequence elements, which greatly accelerates the execution of the pulse sequence, reduces the TR (time of repetition) interval, and improves the accuracy of TR in imaging sequences. Using clock phase-shift technology, we can produce digital pulses accurately with high timing resolution of 1 ns and narrow widths of 4 ns to control the microwave pulses required by pulsed DNP and ensure overall system synchronization. The proposed spectrometer is proved to be both feasible and reliable by observation of a maximum signal enhancement factor of approximately -170 for (1)H, and a high quality water image was successfully obtained by DNP-enhanced spin-echo (1)H MRI at 0.35 T.

  13. Programmable controller system for wind tunnel diversion vanes

    NASA Technical Reports Server (NTRS)

    King, R. F.

    1982-01-01

    A programmable controller (PC) system providing automatic sequence control, which acts as a supervisory controller for the servos, selects the proper drives, and automatically sequences the vanes, was developed for use in a subsonic wind tunnel. Tunnel modifications include a new second test section (80 ft x 100 ft with a maximum air speed capability of 110 knots) and an increase in maximum velocity flow from 200 knots to 300 knots. Completely automatic sequence control is necessary in order to allow intricate motion of the 14 triangularly arranged vanes, which can be as large as 70 ft high x 35 ft wide and which require precise acceleration and deceleration control. Rate servos on each drive aid in this control, and servo cost was minimized by using four silicon controlled rectifier controllers to control the 20 dc drives. The PC has a programming capacity that facilitated the implementation of extensive logic design. A series of diagrams sequencing the vanes and a block diagram of the system are included.

  14. Threading DNA through nanopores for biosensing applications

    NASA Astrophysics Data System (ADS)

    Fyta, Maria

    2015-07-01

    This review outlines the recent achievements in the field of nanopore research. Nanopores are typically used in single-molecule experiments and are believed to have a high potential to realize an ultra-fast and very cheap genome sequencer. Here, the various types of nanopore materials, ranging from biological to 2D nanopores, are discussed together with their advantages and disadvantages. These nanopores can utilize different protocols to read out the DNA nucleobases. Although the first nanopore devices have reached the market, many issues remain that prevent the full realization of a nanopore sequencer able to sequence the human genome in about a day. Ways to control the DNA, its dynamics, and its speed as the biomolecule translocates through the nanopore, in order to increase the signal-to-noise ratio of the read-out process, are examined in this review. Finally, the advantages, as well as the drawbacks, in distinguishing the DNA nucleotides, i.e., the genetic information, are presented in view of their importance in the field of nanopore sequencing.

  15. An implementation of the SNR high speed network communication protocol (Receiver part)

    NASA Astrophysics Data System (ADS)

    Wan, Wen-Jyh

    1995-03-01

    This thesis implements the receiver part of the SNR high speed network transport protocol. The approach was to use the Systems of Communicating Machines (SCM) as the formal definition of the protocol. Programs were developed on top of the Unix system using the C programming language. The Unix system features adopted for this implementation were multitasking, signals, shared memory, semaphores, sockets, timers and process control. The problems encountered, and solved, were signal loss, shared memory conflicts, process synchronization, scheduling, data alignment and errors in the SCM specification itself. The result was a correctly functioning program which implemented the SNR protocol. The system was tested using different connection modes, lost packets, duplicate packets and large data transfers. The contributions of this thesis are: (1) implementation of the receiver part of the SNR high speed transport protocol; (2) testing and integration with the transmitter part of the SNR transport protocol on an FDDI data link layered network; (3) demonstration of the functions of the SNR transport protocol such as connection management, sequenced delivery, flow control and error recovery using selective repeat methods of retransmission; and (4) modifications to the SNR transport protocol specification, such as corrections for incorrect predicate conditions, definition of additional packet type formats, and solutions for signal loss and process contention problems.

  16. Variable speed wind turbine generator with zero-sequence filter

    DOEpatents

    Muljadi, Eduard

    1998-01-01

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.
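
    The positive-sequence and zero-sequence terminology used in this patent comes from the standard symmetrical-component (Fortescue) decomposition of three-phase quantities. The minimal sketch below is included only to make that terminology concrete; it is not part of the patented controller.

        import numpy as np

        A = np.exp(2j * np.pi / 3)   # 120-degree rotation operator "a"

        def symmetrical_components(ia, ib, ic):
            """Fortescue decomposition of three phase phasors into
            (zero, positive, negative) sequence components."""
            i0 = (ia + ib + ic) / 3
            i1 = (ia + A * ib + A**2 * ic) / 3
            i2 = (ia + A**2 * ib + A * ic) / 3
            return i0, i1, i2

        # A balanced three-phase set has only a positive-sequence component ...
        ia = 1.0 + 0j
        ib = A**2        # lags phase a by 120 degrees
        ic = A           # leads phase a by 120 degrees
        print(symmetrical_components(ia, ib, ic))   # ~ (0, 1, 0)

        # ... while three in-phase currents are purely zero-sequence.
        print(symmetrical_components(1, 1, 1))      # (1, 0, 0)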

  17. Variable Speed Wind Turbine Generator with Zero-sequence Filter

    DOEpatents

    Muljadi, Eduard

    1998-08-25

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility.

  18. Variable speed wind turbine generator with zero-sequence filter

    DOEpatents

    Muljadi, E.

    1998-08-25

    A variable speed wind turbine generator system to convert mechanical power into electrical power or energy and to recover the electrical power or energy in the form of three phase alternating current and return the power or energy to a utility or other load with single phase sinusoidal waveform at sixty (60) hertz and unity power factor includes an excitation controller for generating three phase commanded current, a generator, and a zero sequence filter. Each commanded current signal includes two components: a positive sequence variable frequency current signal to provide the balanced three phase excitation currents required in the stator windings of the generator to generate the rotating magnetic field needed to recover an optimum level of real power from the generator; and a zero frequency sixty (60) hertz current signal to allow the real power generated by the generator to be supplied to the utility. The positive sequence current signals are balanced three phase signals and are prevented from entering the utility by the zero sequence filter. The zero sequence current signals have zero phase displacement from each other and are prevented from entering the generator by the star connected stator windings. The zero sequence filter allows the zero sequence current signals to pass through to deliver power to the utility. 14 figs.

  19. Speech Motor Sequence Learning: Acquisition and Retention in Parkinson Disease and Normal Aging.

    PubMed

    Whitfield, Jason A; Goberman, Alexander M

    2017-06-10

    The aim of the current investigation was to examine speech motor sequence learning in neurologically healthy younger adults, neurologically healthy older adults, and individuals with Parkinson disease (PD) over a 2-day period. A sequential nonword repetition task was used to examine learning over 2 days. Participants practiced a sequence of 6 monosyllabic nonwords that was retested following nighttime sleep. The speed and accuracy of the nonword sequence were measured, and learning was inferred by examining performance within and between sessions. Though all groups exhibited comparable improvements of the nonword sequence performance during the initial session, between-session retention of the nonword sequence differed between groups. Younger adult controls exhibited offline gains, characterized by an increase in the speed and accuracy of nonword sequence performance across sessions, whereas older adults exhibited stable between-session performance. Individuals with PD exhibited offline losses, marked by an increase in sequence duration between sessions. The current results demonstrate that both PD and normal aging affect retention of speech motor learning. Furthermore, these data suggest that basal ganglia dysfunction associated with PD may affect the later stages of speech motor learning. Findings from the current investigation are discussed in relation to studies examining consolidation of nonspeech motor learning.

  20. Handling the data management needs of high-throughput sequencing data: SpeedGene, a compression algorithm for the efficient storage of genetic data

    PubMed Central

    2012-01-01

    Background As Next-Generation Sequencing data becomes available, existing hardware environments do not provide sufficient storage space and computational power to store and process the data due to their enormous size. This is, and will remain, a frequent problem encountered every day by researchers who work on genetic data. There are some options available for compressing and storing such data, such as general-purpose compression software, the PBAT/PLINK binary format, etc. However, these currently available methods either do not offer sufficient compression rates, or require a great amount of CPU time for decompression and loading every time the data is accessed. Results Here, we propose a novel and simple algorithm for storing such sequencing data. We show that the compression factor of the algorithm ranges from 16 to several hundred, which potentially allows SNP data of hundreds of gigabytes to be stored in hundreds of megabytes. We provide a C++ implementation of the algorithm, which supports direct loading and parallel loading of the compressed format without requiring extra time for decompression. By applying the algorithm to simulated and real datasets, we show that the algorithm gives a greater compression rate than the commonly used compression methods, and the data-loading process takes less time. Also, the C++ library provides direct data-retrieving functions, which allow the compressed information to be easily accessed by other C++ programs. Conclusions The SpeedGene algorithm enables the storage and analysis of next generation sequencing data in current hardware environments, making system upgrades unnecessary. PMID:22591016
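
    SpeedGene's actual encoding is not reproduced here, but the general idea behind such SNP compressors can be illustrated by packing biallelic genotypes (minor-allele counts 0/1/2, with 3 reserved for missing) into two bits each, so that the packed array can be loaded or memory-mapped directly without a separate decompression pass. The sketch below is a hedged illustration under those assumptions.

        import numpy as np

        def pack_genotypes(g):
            """Pack an array of genotypes coded 0/1/2 (3 = missing) into 2 bits each."""
            g = np.asarray(g, dtype=np.uint8)
            pad = (-len(g)) % 4
            g = np.concatenate([g, np.zeros(pad, dtype=np.uint8)])
            g = g.reshape(-1, 4)
            return (g[:, 0] | (g[:, 1] << 2) | (g[:, 2] << 4) | (g[:, 3] << 6)).astype(np.uint8)

        def unpack_genotypes(packed, n):
            """Inverse of pack_genotypes; n is the original number of genotypes."""
            p = np.asarray(packed, dtype=np.uint8)
            g = np.empty((len(p), 4), dtype=np.uint8)
            for k in range(4):
                g[:, k] = (p >> (2 * k)) & 0b11
            return g.reshape(-1)[:n]

        genos = np.random.randint(0, 3, size=10)
        assert np.array_equal(unpack_genotypes(pack_genotypes(genos), len(genos)), genos)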

  1. Improper trunk rotation sequence is associated with increased maximal shoulder external rotation angle and shoulder joint force in high school baseball pitchers.

    PubMed

    Oyama, Sakiko; Yu, Bing; Blackburn, J Troy; Padua, Darin A; Li, Li; Myers, Joseph B

    2014-09-01

    In a properly coordinated throwing motion, peak pelvic rotation velocity is reached before peak upper torso rotation velocity, so that angular momentum can be transferred effectively from the proximal (pelvis) to distal (upper torso) segment. However, the effects of trunk rotation sequence on pitching biomechanics and performance have not been investigated. The aim of this study was to investigate the effects of trunk rotation sequence on ball speed and on upper extremity biomechanics that are linked to injuries in high school baseball pitchers. The hypothesis was that pitchers with improper trunk rotation sequence would demonstrate lower ball velocity and greater stress to the joint. Descriptive laboratory study. Three-dimensional pitching kinematics data were captured from 72 high school pitchers. Subjects were considered to have proper or improper trunk rotation sequences when the peak pelvic rotation velocity was reached either before or after the peak upper torso rotation velocity beyond the margin of error (±3.7% of the time from stride-foot contact to ball release). Maximal shoulder external rotation angle, elbow extension angle at ball release, peak shoulder proximal force, shoulder internal rotation moment, and elbow varus moment were compared between groups using independent t tests (α < 0.05). Pitchers with improper trunk rotation sequences (n = 33) demonstrated greater maximal shoulder external rotation angle (mean difference, 7.2° ± 2.9°, P = .016) and greater shoulder proximal force (mean difference, 9.2% ± 3.9% body weight, P = .021) compared with those with proper trunk rotation sequences (n = 22). No other variables differed significantly between groups. High school baseball pitchers who demonstrated improper trunk rotation sequences exhibited greater maximal shoulder external rotation angle and shoulder proximal force compared with pitchers with proper trunk rotation sequences. Improper sequencing of the trunk and torso alters upper extremity joint loading in ways that may influence injury risk. As such, exercises that reinforce the use of a proper trunk rotation sequence during the pitching motion may reduce the stress placed on the structures around the shoulder joint and lead to the prevention of injuries. © 2014 The Author(s).

  2. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.

    PubMed

    Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R

    2009-07-01

    The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/

  3. Mining new crystal protein genes from Bacillus thuringiensis on the basis of mixed plasmid-enriched genome sequencing and a computational pipeline.

    PubMed

    Ye, Weixing; Zhu, Lei; Liu, Yingying; Crickmore, Neil; Peng, Donghai; Ruan, Lifang; Sun, Ming

    2012-07-01

    We have designed a high-throughput system for the identification of novel crystal protein genes (cry) from Bacillus thuringiensis strains. The system was developed with two goals: (i) to acquire the mixed plasmid-enriched genomic sequence of B. thuringiensis using next-generation sequencing biotechnology, and (ii) to identify cry genes with a computational pipeline (using BtToxin_scanner). In our pipeline method, we employed three different kinds of well-developed prediction methods, BLAST, hidden Markov model (HMM), and support vector machine (SVM), to predict the presence of Cry toxin genes. The pipeline proved to be fast (average speed, 1.02 Mb/min for proteins and open reading frames [ORFs] and 1.80 Mb/min for nucleotide sequences), sensitive (it detected 40% more protein toxin genes than a keyword extraction method using genomic sequences downloaded from GenBank), and highly specific. Twenty-one strains from our laboratory's collection were selected based on their plasmid pattern and/or crystal morphology. The plasmid-enriched genomic DNA was extracted from these strains and mixed for Illumina sequencing. The sequencing data were de novo assembled, and a total of 113 candidate cry sequences were identified using the computational pipeline. Twenty-seven candidate sequences were selected on the basis of their low level of sequence identity to known cry genes, and eight full-length genes were obtained with PCR. Finally, three new cry-type genes (primary ranks) and five cry holotypes, which were designated cry8Ac1, cry7Ha1, cry21Ca1, cry32Fa1, and cry21Da1 by the B. thuringiensis Toxin Nomenclature Committee, were identified. The system described here is both efficient and cost-effective and can greatly accelerate the discovery of novel cry genes.

  4. Rupture Speed and Dynamic Frictional Processes for the 1995 ML4.1 Shacheng, Hebei, China, Earthquake Sequence

    NASA Astrophysics Data System (ADS)

    Liu, B.; Shi, B.

    2010-12-01

    An earthquake of ML4.1 occurred at Shacheng, Hebei, China, on July 20, 1995, followed by 28 aftershocks with 0.9≤ML≤4.0 (Chen et al., 2005). According to ZÚÑIGA (1993), for the 1995 ML4.1 Shacheng earthquake sequence the main shock corresponds to undershoot, while the aftershocks should match overshoot, suggesting that the dynamic rupture processes of the overshoot aftershocks could be related to crack (sub-fault) extension inside the main fault. After the main shock, local stress concentration inside the fault may play a dominant role in sustaining crack extension. Therefore, the main energy dissipation mechanism should be the aftershock fracturing process associated with crack extension. Following the variational principle (Kanamori and Rivera, 2004), we derived a minimum radiation energy criterion (MREC), (E_S/M_0')_min ≥ [3M_0/(επμR³)](v/β)³, where E_S and M_0' are the radiated energy and seismic moment obtained from observation, μ is the rigidity modulus of the fault, ε = M_0'/M_0, M_0 is the seismic moment, R is the rupture size on the fault, and v and β are the rupture speed and S-wave speed. From mode II and III crack extension models, we attempt to derive a unified expression for calculating the seismic radiation efficiency η_G, which can be used to restrict the upper limit of the efficiency and avoid the nonphysical situation in which the radiation efficiency is larger than 1. In the ML4.1 Shacheng earthquake sequence, the rupture speed of the main shock was about 0.86 of the S-wave speed β according to the MREC, close to the Rayleigh wave speed, while the rupture speeds of the remaining 28 aftershocks ranged from 0.05β to 0.55β. Using the mode II and III crack extension model, the main-shock rupture speed was 0.9β, and most of the aftershocks were no more than 0.35β. In addition, the seismic radiation efficiencies for this earthquake sequence were less than 10% for most aftershocks, indicating low seismic efficiency, whereas the radiation efficiency was 78% for the main shock. The essential difference in the earthquake energy partition for the aftershock source dynamics indicates that fracture energy dissipation cannot be ignored in source parameter estimation for earthquake faulting, especially for small earthquakes; otherwise, the radiated seismic energy could be overestimated or underestimated.
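
    For readability, the minimum radiation energy criterion quoted in the abstract can be restated in display form (symbols as defined there):

        \left(\frac{E_S}{M_0'}\right)_{\min} \;\geq\; \frac{3\,M_0}{\varepsilon\,\pi\,\mu\,R^{3}}\,\left(\frac{v}{\beta}\right)^{3},
        \qquad \varepsilon = \frac{M_0'}{M_0}.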

  5. A peripheral component interconnect express-based scalable and highly integrated pulsed spectrometer for solution state dynamic nuclear polarization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yugui; Liu, Chaoyang, E-mail: chyliu@wipm.ac.cn; State Key Laboratory of Magnetic Resonance and Atomic and Molecular Physics, Wuhan Institute of Physics and Mathematics, Chinese Academy of Sciences, Wuhan 430071

    2015-08-15

    High sensitivity, high data rates, fast pulses, and accurate synchronization all represent challenges for modern nuclear magnetic resonance spectrometers, which make any expansion or adaptation of these devices to new techniques and experiments difficult. Here, we present a Peripheral Component Interconnect Express (PCIe)-based highly integrated distributed digital architecture pulsed spectrometer that is implemented with electron and nucleus double resonances and is scalable specifically for broad dynamic nuclear polarization (DNP) enhancement applications, including DNP-magnetic resonance spectroscopy/imaging (DNP-MRS/MRI). The distributed modularized architecture can implement more transceiver channels flexibly to meet a variety of MRS/MRI instrumentation needs. The proposed PCIe bus with high data rates can significantly improve data transmission efficiency and communication reliability and allow precise control of pulse sequences. An external high speed double data rate memory chip is used to store acquired data and pulse sequence elements, which greatly accelerates the execution of the pulse sequence, reduces the TR (time of repetition) interval, and improves the accuracy of TR in imaging sequences. Using clock phase-shift technology, we can produce digital pulses accurately with high timing resolution of 1 ns and narrow widths of 4 ns to control the microwave pulses required by pulsed DNP and ensure overall system synchronization. The proposed spectrometer is proved to be both feasible and reliable by observation of a maximum signal enhancement factor of approximately −170 for ¹H, and a high quality water image was successfully obtained by DNP-enhanced spin-echo ¹H MRI at 0.35 T.

  6. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, the improvements in VLSI levels and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices were designed to capture real-time video at high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization that demand real-time video capturing at extremely high frame rates with high-definition formats. Therefore, data storage capability, communication bandwidth, processing time, and power consumption are critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM) which is able to reduce the bandwidth requirements up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point access.
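
    The letter's exact MPCM bitstream and hardware mapping are not reproduced here. The sketch below shows one common textbook formulation of modulo-PCM, in which only the k low-order bits of each sample are transmitted and the decoder resolves the wrap-around ambiguity by choosing the candidate nearest the previous reconstructed sample. The bit depths and signal model are illustrative assumptions, and the scheme is only exact when neighbouring samples differ by less than half the modulus.

        import numpy as np

        def mpcm_encode(samples, k):
            """Transmit only the k low-order bits of each sample (sample mod 2^k)."""
            return np.asarray(samples, dtype=np.int64) & ((1 << k) - 1)

        def mpcm_decode(residues, k, first_sample):
            """Resolve the modulo ambiguity by picking, for each residue, the candidate
            value closest to the previous reconstructed sample (a DPCM-style predictor).
            Correct as long as consecutive samples differ by less than 2^(k-1)."""
            M = 1 << k
            out = np.empty(len(residues), dtype=np.int64)
            prev = int(first_sample)              # decoder knows one full-precision sample
            for i, r in enumerate(residues):
                base = prev - (prev % M) + int(r) # a candidate congruent to r, near prev
                cand = min((base - M, base, base + M), key=lambda c: abs(c - prev))
                out[i] = cand
                prev = cand
            return out

        rng = np.random.default_rng(0)
        x = np.cumsum(rng.integers(-30, 31, size=1000)) + 2048   # slowly varying 12-bit-ish signal
        k = 7                                                     # send 7 of ~12 bits per sample
        assert np.array_equal(mpcm_decode(mpcm_encode(x, k), k, x[0]), x)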

  7. Dynamic visual attention: motion direction versus motion magnitude

    NASA Astrophysics Data System (ADS)

    Bur, A.; Wurtz, P.; Müri, R. M.; Hügli, H.

    2008-02-01

    Defined as an attentive process in the context of visual sequences, dynamic visual attention refers to the selection of the most informative parts of a video sequence. This paper investigates the contribution of motion to dynamic visual attention, and specifically compares computer models designed with the motion component expressed either as the speed magnitude or as the speed vector. Several computer models, including static features (color, intensity and orientation) and motion features (magnitude and vector), are considered. Qualitative and quantitative evaluations are performed by comparing the computer model output with human saliency maps obtained experimentally from eye movement recordings. The model suitability is evaluated in various situations (synthetic and real sequences, acquired with fixed and moving camera perspective), showing the advantages and disadvantages of each method as well as its preferred domain of application.

  8. Swimming activity in marine fish.

    PubMed

    Wardle, C S

    1985-01-01

    Marine fish are capable of swimming long distances in annual migrations; they are also capable of high-speed dashes of short duration, and they can occupy small home territories for long periods with little activity. There is a large effect of fish size on the distance fish migrate at slow swimming speeds. When chased by a fishing trawl the effect of fish size on swimming performance can decide their fate. The identity and thickness of muscle used at each speed and evidence for the timing of myotomes used during the body movement cycle can be detected using electromyogram (EMG) electrodes. The cross-sectional area of muscle needed to maintain different swimming speeds can be predicted by relating the swimming drag force to the muscle force. At maximum swimming speed one completed cycle of swimming force is derived in sequence from the whole cross-sectional area of the muscles along the two sides of the fish. This and other aspects of the swimming cycle suggest that each myotome might be responsible for generating forces involved in particular stages of the tail sweep. The thick myotomes at the head end shorten during the peak thrust of the tail blade whereas the thinner myotomes nearer the tail generate stiffness appropriate for transmission of these forces and reposition the tail for the next cycle.

  9. Inlet Unstart Propulsion Integration Wind Tunnel Test Program Completed for High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Porro, A. Robert

    2000-01-01

    One of the propulsion system concepts to be considered for the High-Speed Civil Transport (HSCT) is an underwing, dual-propulsion, pod-per-wing installation. Adverse transient phenomena such as engine compressor stall and inlet unstart could severely degrade the performance of one of these propulsion pods. The subsequent loss of thrust and increased drag could cause aircraft stability and control problems that could lead to a catastrophic accident if countermeasures are not in place to anticipate and control these detrimental transient events. Aircraft system engineers must understand what happens during an engine compressor stall and inlet unstart so that they can design effective control systems to avoid and/or alleviate the effects of a propulsion pod engine compressor stall and inlet unstart. The objective of the Inlet Unstart Propulsion Airframe Integration test program was to assess the underwing flow field of a High-Speed Civil Transport propulsion system during an engine compressor stall and subsequent inlet unstart. Experimental research testing was conducted in the 10- by 10-Foot Supersonic Wind Tunnel at the NASA Glenn Research Center at Lewis Field. The representative propulsion pod consisted of a two-dimensional, bifurcated inlet mated to a live turbojet engine. The propulsion pod was mounted below a large flat plate that acted as a wing simulator. Because of the plate's long length (nominally 10-ft wide by 18-ft long), realistic boundary layers could form at the inlet cowl plane. Transient instrumentation was used to document the aerodynamic flow-field conditions during an unstart sequence. Acquiring these data was a significant technical challenge because a typical unstart sequence disrupts the local flow field for only about 50 msec. Flow surface information was acquired via static pressure taps installed in the wing simulator, and intrusive pressure probes were used to acquire flow-field information. These data were extensively analyzed to determine the impact of the unstart transient on the surrounding flow field. This wind tunnel test program was a success, and for the first time, researchers acquired flow-field aerodynamic data during a supersonic propulsion system engine compressor stall and inlet unstart sequence. In addition to obtaining flow-field pressure data, Glenn researchers determined other properties such as the transient flow angle and Mach number. Data are still being reduced, and a comprehensive final report will be released during calendar year 2000.

  10. Chemical biology on the genome.

    PubMed

    Balasubramanian, Shankar

    2014-08-15

    In this article I discuss studies towards understanding the structure and function of DNA in the context of genomes from the perspective of a chemist. The first area I describe concerns the studies that led to the invention and subsequent development of a method for sequencing DNA on a genome scale at high speed and low cost, now known as Solexa/Illumina sequencing. The second theme will feature the four-stranded DNA structure known as a G-quadruplex with a focus on its fundamental properties, its presence in cellular genomic DNA and the prospects for targeting such a structure in cells with small molecules. The final topic for discussion is naturally occurring chemically modified DNA bases with an emphasis on chemistry for decoding (or sequencing) such modifications in genomic DNA. The genome is a fruitful topic to be further elucidated by the creation and application of chemical approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. repRNA: a web server for generating various feature vectors of RNA sequences.

    PubMed

    Liu, Bin; Liu, Fule; Fang, Longyun; Wang, Xiaolong; Chou, Kuo-Chen

    2016-02-01

    With the rapid growth of RNA sequences generated in the postgenomic age, it is highly desirable to develop a flexible method that can generate various kinds of vectors to represent these sequences by focusing on their different features. This is because nearly all the existing machine-learning methods, such as SVM (support vector machine) and KNN (k-nearest neighbor), can only handle vectors but not sequences. To meet the increasing demands and speed up the genome analyses, we have developed a new web server, called "representations of RNA sequences" (repRNA). Compared with the existing methods, repRNA is much more comprehensive, flexible and powerful, as reflected by the following facts: (1) it can generate 11 different modes of feature vectors for users to choose according to their investigation purposes; (2) it allows users to select the features from 22 built-in physicochemical properties and even those defined by the users themselves; (3) the resultant feature vectors and the secondary structures of the corresponding RNA sequences can be visualized. The repRNA web server is freely accessible to the public at http://bioinformatics.hitsz.edu.cn/repRNA/.
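
    repRNA's eleven feature modes are not reproduced here; the sketch below shows only the simplest kind of sequence-to-vector mapping (normalized k-mer composition) to illustrate why fixed-length vectors are needed before SVM- or KNN-style learners can be applied. The choice of k = 2 and the example sequence are arbitrary.

        from itertools import product
        import numpy as np

        def kmer_composition(seq, k=2, alphabet="ACGU"):
            """Map an RNA sequence to a fixed-length vector of normalized k-mer frequencies,
            the kind of representation that vector-based learners (SVM, KNN) can consume."""
            kmers = ["".join(p) for p in product(alphabet, repeat=k)]
            index = {km: i for i, km in enumerate(kmers)}
            v = np.zeros(len(kmers))
            for i in range(len(seq) - k + 1):
                km = seq[i:i + k]
                if km in index:                 # skip k-mers containing ambiguous bases
                    v[index[km]] += 1
            total = v.sum()
            return v / total if total else v

        print(kmer_composition("AUGGCUACGUAGCUAGCUA", k=2).round(3))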

  12. 3D Compressed Sensing for Highly Accelerated Hyperpolarized 13C MRSI With In Vivo Applications to Transgenic Mouse Models of Cancer

    PubMed Central

    Hu, Simon; Lustig, Michael; Balakrishnan, Asha; Larson, Peder E. Z.; Bok, Robert; Kurhanewicz, John; Nelson, Sarah J.; Goga, Andrei; Pauly, John M.; Vigneron, Daniel B.

    2010-01-01

    High polarization of nuclear spins in liquid state through hyperpolarized technology utilizing dynamic nuclear polarization has enabled the direct monitoring of 13C metabolites in vivo at a high signal-to-noise ratio. Acquisition time limitations due to T1 decay of the hyperpolarized signal require accelerated imaging methods, such as compressed sensing, for optimal speed and spatial coverage. In this paper, the design and testing of a new echo-planar 13C three-dimensional magnetic resonance spectroscopic imaging (MRSI) compressed sensing sequence is presented. The sequence provides up to a factor of 7.53 in acceleration with minimal reconstruction artifacts. The key to the design is employing x and y gradient blips during a fly-back readout to pseudorandomly undersample kf-kx-ky space. The design was validated in simulations and phantom experiments where the limits of undersampling and the effects of noise on the compressed sensing nonlinear reconstruction were tested. Finally, this new pulse sequence was applied in vivo in preclinical studies involving transgenic prostate cancer and transgenic liver cancer murine models to obtain much higher spatial and temporal resolution than possible with conventional echo-planar spectroscopic imaging methods. PMID:20017160

  13. Tachyon search speeds up retrieval of similar sequences by several orders of magnitude.

    PubMed

    Tan, Joshua; Kuchibhatla, Durga; Sirota, Fernanda L; Sherman, Westley A; Gattermayer, Tobias; Kwoh, Chia Yee; Eisenhaber, Frank; Schneider, Georg; Maurer-Stroh, Sebastian

    2012-06-15

    Current sequence search tools become increasingly slow as databases of protein sequences continue to grow exponentially. Tachyon, a new algorithm that identifies closely related protein sequences ~200 times faster than standard BLAST, circumvents this limitation with a reduced database and an oligopeptide matching heuristic. The tool is publicly accessible as a webserver at http://tachyon.bii.a-star.edu.sg and can also be accessed programmatically through SOAP.
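
    Tachyon's reduced database and matching heuristic are not detailed in the abstract and are not reproduced here; the sketch below only illustrates the general idea of an oligopeptide prefilter: index every k-mer of the database once, then shortlist only the sequences sharing enough k-mers with the query before any expensive alignment. The k-mer length, threshold, and toy database are illustrative assumptions.

        from collections import defaultdict

        def build_kmer_index(db, k=4):
            """Index every length-k oligopeptide of every database protein."""
            index = defaultdict(set)
            for name, seq in db.items():
                for i in range(len(seq) - k + 1):
                    index[seq[i:i + k]].add(name)
            return index

        def shortlist(query, index, k=4, min_shared=3):
            """Return database proteins sharing at least `min_shared` k-mers with the query;
            only these would be passed on to a full alignment step."""
            hits = defaultdict(int)
            for i in range(len(query) - k + 1):
                for name in index.get(query[i:i + k], ()):
                    hits[name] += 1
            return sorted(n for n, c in hits.items() if c >= min_shared)

        db = {"p1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "p2": "MADEEKLPPGWEKRMSRSSGRVYYFNHITNASQ"}
        idx = build_kmer_index(db)
        print(shortlist("MKTAYIAKQRQISFVK", idx))   # expect ['p1']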

  14. FIR Filter of DS-CDMA UWB Modem Transmitter

    NASA Astrophysics Data System (ADS)

    Kang, Kyu-Min; Cho, Sang-In; Won, Hui-Chul; Choi, Sang-Sung

    This letter presents low-complexity digital pulse shaping filter structures for a direct sequence code division multiple access (DS-CDMA) ultra wide-band (UWB) modem transmitter with a ternary spreading code. The proposed finite impulse response (FIR) filter structures using a look-up table (LUT) reduce the memory requirement by about 50% to 80% in comparison to conventional FIR filter structures, and consequently are suitable for high-speed parallel data processing.
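
    The letter's memory-optimized filter structures are not reproduced here. The sketch below illustrates the underlying idea in a generic polyphase form: because the spreading code is ternary, the joint contribution of the few symbols that overlap each output sample can be precomputed into a look-up table indexed by their base-3 code, replacing per-sample multiply-accumulates with table reads. The pulse shape, oversampling factor, and table layout are illustrative assumptions.

        import numpy as np
        from itertools import product

        def build_lut(h, S, K):
            """Precompute, for every base-3 code of K consecutive ternary symbols and every
            polyphase index p, their combined contribution to one filter output sample."""
            lut = np.zeros((S, 3**K))
            for code, combo in enumerate(product((-1, 0, 1), repeat=K)):
                for p in range(S):
                    lut[p, code] = sum(combo[j] * h[j*S + p] for j in range(K) if j*S + p < len(h))
            return lut

        def lut_pulse_shape(symbols, h, S):
            """Polyphase pulse shaping of ternary symbols using table lookups instead of MACs."""
            K = -(-len(h) // S)                       # symbols overlapping one output sample
            lut = build_lut(h, S, K)
            a = np.concatenate([np.zeros(K - 1, dtype=int), np.asarray(symbols, dtype=int)])
            y = np.empty(len(symbols) * S)
            for n in range(len(symbols)):
                window = a[n:n + K][::-1]             # (a[n], a[n-1], ..., a[n-K+1])
                code = 0
                for s in window:                      # base-3 index, mapping -1/0/+1 -> 0/1/2
                    code = code * 3 + (s + 1)
                for p in range(S):
                    y[n*S + p] = lut[p, code]
            return y

        # Sanity check against direct upsample-and-convolve filtering
        rng = np.random.default_rng(1)
        sym = rng.integers(-1, 2, size=64)            # ternary chips
        h = np.hamming(12)                            # illustrative pulse; S = 4 gives K = 3
        S = 4
        up = np.zeros(len(sym) * S)
        up[::S] = sym
        direct = np.convolve(up, h)[:len(sym) * S]
        assert np.allclose(lut_pulse_shape(sym, h, S), direct)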

  15. Novel high-speed droplet-allele specific-polymerase chain reaction: application in the rapid genotyping of single nucleotide polymorphisms.

    PubMed

    Taira, Chiaki; Matsuda, Kazuyuki; Yamaguchi, Akemi; Sueki, Akane; Koeda, Hiroshi; Takagi, Fumio; Kobayashi, Yukihiro; Sugano, Mitsutoshi; Honda, Takayuki

    2013-09-23

    Single nucleotide alterations such as single nucleotide polymorphisms (SNP) and single nucleotide mutations are associated with responses to drugs and predisposition to several diseases, and they contribute to the pathogenesis of malignancies. We developed a rapid genotyping assay based on the allele-specific polymerase chain reaction (AS-PCR) with our droplet-PCR machine (droplet-AS-PCR). Using 8 SNP loci, we evaluated the specificity and sensitivity of droplet-AS-PCR. Buccal cells were pretreated with proteinase K and subjected directly to the droplet-AS-PCR without DNA extraction. The genotypes determined using the droplet-AS-PCR were then compared with those obtained by direct sequencing. Specific PCR amplifications for the 8 SNP loci were detected, and the detection limit of the droplet-AS-PCR was found to be 0.1-5.0% by dilution experiments. Droplet-AS-PCR provided specific amplification when using buccal cells, and all the genotypes determined within 9 min were consistent with those obtained by direct sequencing. Our novel droplet-AS-PCR assay enabled high-speed amplification retaining specificity and sensitivity and provided ultra-rapid genotyping. Crude samples such as buccal cells were available for the droplet-AS-PCR assay, resulting in the reduction of the total analysis time. Droplet-AS-PCR may therefore be useful for genotyping or the detection of single nucleotide alterations. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Is evaluation of humorous stimuli associated with frontal cortex morphology? A pilot study using facial micro-movement analysis and MRI.

    PubMed

    Juckel, Georg; Mergl, Roland; Brüne, Martin; Villeneuve, Isabelle; Frodl, Thomas; Schmitt, Gisela; Zetzsche, Thomas; Born, Christine; Hahn, Klaus; Reiser, Maximilian; Möller, Hans-Jürgen; Bär, Karl-Jürgen; Hegerl, Ulrich; Meisenzahl, Eva Maria

    2011-05-01

    Humour involves the ability to detect incongruous ideas violating social rules and norms. Accordingly, humour requires a complex array of cognitive skills for which intact frontal lobe functioning is critical. Here, we sought to examine the association of facial expression during an emotion-inducing experiment with frontal cortex morphology in healthy subjects. Thirty-one healthy male subjects (mean age: 30.8±8.9 years; all right-handers) watching a humorous movie ("Mr. Bean") were investigated. Markers fixed at certain points of the face emitting high-frequency ultrasonic signals allowed direct measurement of facial movements with high spatial-temporal resolution. Magnetic resonance images of the frontal cortex were obtained with a 1.5-T Magnetom using a coronal T2- and proton-density-weighted Dual-Echo-Sequence and a 3D-magnetization-prepared rapid gradient echo (MPRAGE) sequence. Volumetric analysis was performed using BRAINS. Frontal cortex volume was partly associated with slower speed of "laughing" movements of the eyes ("genuine" or Duchenne smile). Specifically, grey matter volume was associated with longer emotional reaction time ipsilaterally, even when controlled for age and daily alcohol intake. These results lend support to the hypothesis that superior cognitive evaluation of humorous stimuli - mediated by larger prefrontal grey and white matter volume - leads to a measurable reduction of speed of emotional expressivity in normal adults. Copyright © 2010 Elsevier Srl. All rights reserved.

  17. Protein structural similarity search by Ramachandran codes

    PubMed Central

    Lo, Wei-Cheng; Huang, Po-Jung; Chang, Chih-Hung; Lyu, Ping-Chiang

    2007-01-01

    Background Protein structural data has increased exponentially, such that fast and accurate tools are necessary for structure similarity searching. To improve the search speed, several methods have been designed to reduce three-dimensional protein structures to one-dimensional text strings that are then analyzed by traditional sequence alignment methods; however, accuracy is usually sacrificed and the speed is still unable to match sequence similarity search tools. Here, we aimed to improve the linear encoding methodology and develop efficient search tools that can rapidly retrieve structural homologs from large protein databases. Results We propose a new linear encoding method, SARST (Structural similarity search Aided by Ramachandran Sequential Transformation). SARST transforms protein structures into text strings through a Ramachandran map organized by nearest-neighbor clustering and uses a regenerative approach to produce substitution matrices. Then, classical sequence similarity search methods can be applied to the structural similarity search. Its accuracy is similar to Combinatorial Extension (CE) and it works over 243,000 times faster, searching 34,000 proteins in 0.34 sec with a 3.2-GHz CPU. SARST provides statistically meaningful expectation values to assess the retrieved information. It has been implemented into a web service and a stand-alone Java program that is able to run on many different platforms. Conclusion As a database search method, SARST can rapidly distinguish high from low similarities and efficiently retrieve homologous structures. It demonstrates that the easily accessible linear encoding methodology has the potential to serve as a foundation for efficient protein structural similarity search tools. These search tools should be applicable to automated and high-throughput functional annotations or predictions for the ever-increasing number of published protein structures in this post-genomic era. PMID:17716377
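
    SARST derives its structural alphabet by nearest-neighbor clustering of the Ramachandran map; the sketch below substitutes a plain grid binning, purely to illustrate the linear-encoding step of turning backbone (phi, psi) angle pairs into a text string that ordinary sequence-alignment tools can search. The bin count and example angles are illustrative assumptions.

        import string

        def ramachandran_string(phi_psi, bins=5):
            """Encode a list of backbone (phi, psi) angle pairs (degrees, in [-180, 180))
            as a text string by binning the Ramachandran map on a bins x bins grid.
            SARST instead derives its alphabet by nearest-neighbor clustering; this grid
            version only illustrates the linear-encoding idea."""
            letters = string.ascii_uppercase
            assert bins * bins <= len(letters)
            width = 360.0 / bins
            out = []
            for phi, psi in phi_psi:
                i = min(int((phi + 180.0) // width), bins - 1)
                j = min(int((psi + 180.0) // width), bins - 1)
                out.append(letters[i * bins + j])
            return "".join(out)

        # An alpha-helical stretch maps to a run of identical letters, which is what
        # makes string alignment meaningful for structures.
        angles = [(-60.0, -45.0)] * 6 + [(-120.0, 130.0)] * 4   # helix then beta-like angles
        print(ramachandran_string(angles))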

  18. Hippocampal replay of extended experience

    PubMed Central

    Davidson, Thomas J.; Kloosterman, Fabian; Wilson, Matthew A.

    2009-01-01

    Summary During pauses in exploration, ensembles of place cells in the rat hippocampus re-express firing sequences corresponding to recent spatial experience. Such ‘replay’ co-occurs with ripple events: short-lasting (~50–120 ms), high frequency (~200 Hz) oscillations that are associated with increased hippocampal-cortical communication. In previous studies, rats explored small environments, and replay was found to be anchored to the rat’s current location, and compressed in time such that replay of the complete environment occurred during a single ripple event. It is not known whether or how longer behavioral sequences are replayed in the hippocampus. Here we show, using a neural decoding approach, that firing sequences corresponding to long runs through a large environment are replayed with high fidelity (in both forward and reverse order), and that such replay can begin at remote locations on the track. Extended replay proceeds at a characteristic virtual speed of ~8 m/s, and remains coherent across trains of ripple events. These results suggest that extended replay is composed of chains of shorter subsequences, which may reflect a strategy for the storage and flexible expression of memories of prolonged experience. PMID:19709631

  19. Sleep-dependent learning and motor-skill complexity

    PubMed Central

    Kuriyama, Kenichi; Stickgold, Robert; Walker, Matthew P.

    2004-01-01

    Learning of a procedural motor-skill task is known to progress through a series of unique memory stages. Performance initially improves during training, and continues to improve, without further rehearsal, across subsequent periods of sleep. Here, we investigate how this delayed sleep-dependent learning is affected when the task characteristics are varied across several degrees of difficulty, and whether this improvement differentially enhances individual transitions of the motor-sequence pattern being learned. We report that subjects show similar overnight improvements in speed whether learning a five-element unimanual sequence (17.7% improvement), a nine-element unimanual sequence (20.2%), or a five-element bimanual sequence (17.5%), but show markedly increased overnight improvement (28.9%) with a nine-element bimanual sequence. In addition, individual transitions within the motor-sequence pattern that appeared most difficult at the end of training showed a significant 17.8% increase in speed overnight, whereas those transitions that were performed most rapidly at the end of training showed only a non-significant 1.4% improvement. Together, these findings suggest that the sleep-dependent learning process selectively provides maximum benefit to motor-skill procedures that proved to be most difficult prior to sleep. PMID:15576888

  20. An evaluation of the accuracy and speed of metagenome analysis tools

    PubMed Central

    Lindgreen, Stinus; Adair, Karen L.; Gardner, Paul P.

    2016-01-01

    Metagenome studies are becoming increasingly widespread, yielding important insights into microbial communities covering diverse environments from terrestrial and aquatic ecosystems to human skin and gut. With the advent of high-throughput sequencing platforms, the use of large scale shotgun sequencing approaches is now commonplace. However, a thorough independent benchmark comparing state-of-the-art metagenome analysis tools is lacking. Here, we present a benchmark where the most widely used tools are tested on complex, realistic data sets. Our results clearly show that the most widely used tools are not necessarily the most accurate, that the most accurate tool is not necessarily the most time consuming, and that there is a high degree of variability between available tools. These findings are important as the conclusions of any metagenomics study are affected by errors in the predicted community composition and functional capacity. Data sets and results are freely available from http://www.ucbioinformatics.org/metabenchmark.html PMID:26778510

  1. Acceleration of the Smith-Waterman algorithm using single and multiple graphics processors

    NASA Astrophysics Data System (ADS)

    Khajeh-Saeed, Ali; Poole, Stephen; Blair Perot, J.

    2010-06-01

    Finding regions of similarity between two very long data streams is a computationally intensive problem referred to as sequence alignment. Alignment algorithms must allow for imperfect sequence matching with different starting locations and some gaps and errors between the two data sequences. Perhaps the most well known application of sequence matching is the testing of DNA or protein sequences against genome databases. The Smith-Waterman algorithm is a method for precisely characterizing how well two sequences can be aligned and for determining the optimal alignment of those two sequences. Like many applications in computational science, the Smith-Waterman algorithm is constrained by the memory access speed and can be accelerated significantly by using graphics processors (GPUs) as the compute engine. In this work we show that effective use of the GPU requires a novel reformulation of the Smith-Waterman algorithm. The performance of this new version of the algorithm is demonstrated using the SSCA#1 (Bioinformatics) benchmark running on one GPU and on up to four GPUs executing in parallel. The results indicate that for large problems a single GPU is up to 45 times faster than a CPU for this application, and the parallel implementation shows linear speed up on up to 4 GPUs.
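
    For reference, the dynamic program that the paper reformulates for GPUs is the classic quadratic-time Smith-Waterman recurrence; a minimal CPU version with a linear gap penalty (scoring constants illustrative) is sketched below.

        import numpy as np

        def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
            """Reference Smith-Waterman local alignment score (linear gap penalty).
            This is the O(len(a)*len(b)) dynamic program that GPU versions reorganize
            for parallel execution; scoring parameters here are illustrative."""
            H = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
            best = 0
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    diag = H[i-1, j-1] + (match if a[i-1] == b[j-1] else mismatch)
                    H[i, j] = max(0, diag, H[i-1, j] + gap, H[i, j-1] + gap)
                    best = max(best, H[i, j])
            return best

        print(smith_waterman("ACACACTA", "AGCACACA"))   # small local-alignment example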

  2. Fuzzy logic based on-line fault detection and classification in transmission line.

    PubMed

    Adhikari, Shuma; Sinha, Nidul; Dorendrajit, Thingam

    2016-01-01

    This study presents fuzzy logic based online fault detection and classification of transmission lines using Programmable Automation and Control technology based National Instrument Compact Reconfigurable I/O (CRIO) devices. The LabVIEW software combined with CRIO can perform real-time data acquisition on the transmission line. When a fault occurs in the system, the current waveforms are distorted due to transients, and their pattern changes according to the type of fault in the system. The three-phase alternating current, zero-sequence and positive-sequence current data generated by LabVIEW through the CRIO-9067 are processed directly for relaying. The results show that the proposed technique is capable of correct tripping action and classification of the fault type at high speed and can therefore be employed in practical applications.
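
    The paper's membership functions and rule base are not given in the abstract and are not reproduced here; the sketch below is a toy Mamdani-style classifier (rule strength = minimum of the antecedent memberships, winner = strongest rule) operating on per-unit phase-current and zero-sequence magnitudes, with thresholds and rules chosen purely for illustration.

        def high(x, lo=1.5, hi=3.0):
            """Fuzzy membership of a per-unit current magnitude in the set 'high'."""
            return min(max((x - lo) / (hi - lo), 0.0), 1.0)

        def low(x, lo=1.5, hi=3.0):
            return 1.0 - high(x, lo, hi)

        def classify_fault(ia, ib, ic, i0):
            """Toy fuzzy rule base: each rule's strength is the min of its antecedent
            memberships; the classification is the rule with the maximum strength."""
            rules = {
                "A-G":   min(high(ia), low(ib),  low(ic),  high(i0)),
                "B-G":   min(low(ia),  high(ib), low(ic),  high(i0)),
                "C-G":   min(low(ia),  low(ib),  high(ic), high(i0)),
                "A-B":   min(high(ia), high(ib), low(ic),  low(i0)),
                "B-C":   min(low(ia),  high(ib), high(ic), low(i0)),
                "C-A":   min(high(ia), low(ib),  high(ic), low(i0)),
                "A-B-C": min(high(ia), high(ib), high(ic), low(i0)),
                "no fault": min(low(ia), low(ib), low(ic), low(i0)),
            }
            return max(rules, key=rules.get), rules

        label, strengths = classify_fault(ia=4.0, ib=1.0, ic=1.1, i0=2.5)
        print(label)          # expect "A-G" for a single-phase-to-ground signature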

  3. Analysis of levels of support and resonance demonstrated by an elite singing teacher

    NASA Astrophysics Data System (ADS)

    Scherer, Ronald C.; Radhakrishnan, Nandhakumar; Poulimenos, Andreas

    2003-04-01

    This was a study of levels of singing expertise demonstrated by an elite operatic singer and teacher. This approach may prove advantageous because the teacher demonstrates what he thinks is important, not what the nonsinging scientist thinks should be important. Two pedagogical sequences were studied: (1) the location of support: glottis (poor), chest (better), abdomen (best); (2) the location of resonance: hard palate/straight tone (poor), mouth (better), sinus/head (best). Measures were obtained for a single frequency (196 Hz), the vowel /ae/, and for mezzo-forte loudness using the /pae pae pae/ technique. Sequence differences: the support sequence was characterized by formant frequency lowering suggestive of vocal tract lengthening; the resonance sequence was characterized by flow (AC, mean flow) and abduction increases. Sequence similarities: the best locations had the widest F2 bandwidths; the better and best locations had the largest dB difference between F2 and F3. Although acoustic power increased through the sequences, acoustic efficiency was not a discriminating factor. Open and speed quotients were not differentiating. The flow resistance was highest, and the aerodynamic power lowest, for the first item of each sequence. Combined data: the maximum flow declination rate correlated highly with the AC flow (r=-0.92) and SPL (r=0.901).

  4. Bit error rate tester using fast parallel generation of linear recurring sequences

    DOEpatents

    Pierson, Lyndon G.; Witzke, Edward L.; Maestas, Joseph H.

    2003-05-06

    A fast method for generating linear recurring sequences by parallel linear recurring sequence generators (LRSGs) with a feedback circuit optimized to balance minimum propagation delay against maximal sequence period. Parallel generation of linear recurring sequences requires decimating the sequence (creating small contiguous sections of the sequence in each LRSG). A companion matrix form is selected depending on whether the LFSR is right-shifting or left-shifting. The companion matrix is completed by selecting a primitive irreducible polynomial with 1's most closely grouped in a corner of the companion matrix. A decimation matrix is created by raising the companion matrix to the (n*k).sup.th power, where k is the number of parallel LRSGs and n is the number of bits to be generated at a time by each LRSG. Companion matrices with 1's closely grouped in a corner will yield sparse decimation matrices. A feedback circuit comprised of XOR logic gates implements the decimation matrix in hardware. Sparse decimation matrices can be implemented with minimum number of XOR gates, and therefore a minimum propagation delay through the feedback circuit. The LRSG of the invention is particularly well suited to use as a bit error rate tester on high speed communication lines because it permits the receiver to synchronize to the transmitted pattern within 2n bits.
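
    The decimation-matrix idea can be sketched directly: represent the LFSR state update as a companion matrix over GF(2), raise it to the (n*k)-th power by repeated squaring, and one application of the result advances the state by n*k single-bit steps. The feedback polynomial and the values of k and n below are illustrative, not those of the patent.

        import numpy as np

        def companion_matrix(taps, n):
            """Companion (state-update) matrix over GF(2) for a degree-n feedback polynomial.
            `taps` are the 1-indexed stages fed back (e.g. (7, 6) for x^7 + x^6 + 1)."""
            C = np.zeros((n, n), dtype=np.uint8)
            for t in taps:
                C[0, t - 1] = 1          # feedback row
            for i in range(1, n):
                C[i, i - 1] = 1          # shift rows
            return C

        def gf2_matmul(A, B):
            return (A.astype(np.uint8) @ B.astype(np.uint8)) % 2

        def gf2_matpow(C, e):
            """C**e over GF(2) by repeated squaring -- a 'decimation matrix' in the sense of
            the patent when e = n*k (k parallel generators, n bits each per step)."""
            R = np.eye(C.shape[0], dtype=np.uint8)
            while e:
                if e & 1:
                    R = gf2_matmul(R, C)
                C = gf2_matmul(C, C)
                e >>= 1
            return R

        # One application of D advances the register state by n*k single-bit steps.
        C = companion_matrix((7, 6), 7)
        D = gf2_matpow(C, 4 * 8)                     # e.g. k = 4 generators, n = 8 bits each
        state = np.ones(7, dtype=np.uint8)
        stepped = state.copy()
        for _ in range(32):
            stepped = gf2_matmul(C, stepped.reshape(-1, 1)).ravel()
        assert np.array_equal(gf2_matmul(D, state.reshape(-1, 1)).ravel(), stepped)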

  5. Automated search method for AFM and profilers

    NASA Astrophysics Data System (ADS)

    Ray, Michael; Martin, Yves C.

    2001-08-01

    New automation software creates a search model as an initial setup and searches for a user-defined target in atomic force microscopes or stylus profilometers used in semiconductor manufacturing. The need for such automation has become critical in manufacturing lines. The new method starts with a survey map of a small area of a chip obtained from a chip-design database or an image of the area. The user interface requires a user to point to and define a precise location to be measured, and to select a macro function for an application such as line width or contact hole. The search algorithm automatically constructs a range of possible scan sequences within the survey, and provides increased speed and functionality compared to the methods used in instruments to date. Each sequence consists in a starting point relative to the target, a scan direction, and a scan length. The search algorithm stops when the location of a target is found and the criteria for certainty in positioning are met. With today's capability in high speed processing and signal control, the tool can simultaneously scan and search for a target in a robotic and continuous manner. Examples are given that illustrate the key concepts.

  6. Combining high-speed SVM learning with CNN feature encoding for real-time target recognition in high-definition video for ISR missions

    NASA Astrophysics Data System (ADS)

    Kroll, Christine; von der Werth, Monika; Leuck, Holger; Stahl, Christoph; Schertler, Klaus

    2017-05-01

    For Intelligence, Surveillance, Reconnaissance (ISR) missions of manned and unmanned air systems, typical electro-optical payloads provide high-definition video data that has to be exploited with respect to relevant ground targets in real time by automatic/assisted target recognition software. Airbus Defence and Space has been developing the required technologies for real-time sensor exploitation for years and has combined the latest advances in deep Convolutional Neural Networks (CNN) with a proprietary high-speed Support Vector Machine (SVM) learning method into a powerful object recognition system, with impressive results on relevant high-definition video scenes compared with conventional target recognition approaches. This paper describes the principal requirements for real-time target recognition in high-definition video for ISR missions and the Airbus approach of combining invariant feature extraction using pre-trained CNNs with the high-speed training and classification ability of a novel frequency-domain SVM training method. The frequency-domain approach allows a highly optimized implementation for General Purpose Computation on a Graphics Processing Unit (GPGPU) as well as efficient training on large training samples. The selected CNN, which is pre-trained only once on domain-extrinsic data, provides highly invariant feature extraction. This allows a significantly reduced adaptation and training effort of the target recognition method for new target classes and mission scenarios. A comprehensive training and test dataset was defined and prepared using relevant high-definition airborne video sequences. The assessment concept is explained and performance results are given using the established precision-recall diagrams, average precision, and runtime figures on representative test data. A comparison with legacy target recognition approaches shows the impressive performance increase achieved by the proposed CNN+SVM machine-learning approach and the capability of real-time high-definition video exploitation.
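    The general pattern of a fixed, pre-trained CNN feature extractor feeding a fast linear classifier can be sketched as below. This is only a generic stand-in: the paper's frequency-domain SVM is proprietary and is replaced here by scikit-learn's LinearSVC, and the resnet18 backbone, the input preprocessing, and the image-chip interface are assumptions made for illustration.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.svm import LinearSVC

# Pre-trained CNN used once as a fixed, domain-extrinsic feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()     # drop the classifier head, keep 512-d features
backbone.eval()

preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def extract_features(chips):
    """chips: list of HxWx3 uint8 image chips cropped from video frames."""
    batch = torch.stack([preprocess(c) for c in chips])
    return backbone(batch).numpy()

# Fast linear SVM on the fixed CNN features (a stand-in for the proprietary
# frequency-domain SVM described in the paper).
def train_classifier(chips, labels):
    clf = LinearSVC(C=1.0)
    clf.fit(extract_features(chips), labels)
    return clf

def classify(clf, chips):
    return clf.predict(extract_features(chips))
```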

  7. Multi-transmitter/multi-receiver high-speed measurements of soil resistivity and induced polarization - Hydrological application

    NASA Astrophysics Data System (ADS)

    Gance, Julien; Texier, Benoît; Leite, Orlando; Bernard, Jean; Truffert, Catherine; Lebert, François; Yamashita, Yoshihiro

    2016-04-01

    Electrical resistivity tomography (ERT) is a well-adapted tool for monitoring soil moisture variations in aquifers (Binley et al., 2015). Nevertheless, in some specific cases, such as highly permeable soils or fractured aquifers, the measurement can be slower than the water flow through the entire investigated zone, so the monitoring of such phenomena cannot be performed with classical devices; a high-speed measurement of soil resistivity is required. Over the past 20 years, the acquisition speed of resistivity meters has been improved by the development of multi-channel devices that perform multi-electrode (>4) measurements. The switching capabilities of current devices allow measurements over long profiles with up to hundreds of electrodes using only one transmitter. Based on this multi-receiver technology and on previous work by Yamashita et al. (2013), the authors have developed a 250 W multi-transmitter device for high-speed measurement of resistivity and induced polarization. Current is injected simultaneously into the soil through six injection electrodes. The injected current is coded for each transmitter using Code Division Multiple Access (CDMA, Yamashita et al., 2014) so that the different voltages induced by each source can be reconstructed from the total potential signal measured at each receiver, saving acquisition time. The first operational prototype features 3 transmitters and 6 receivers. Its performance is compared with that of a mono-transmitter device for different acquisition sequences in 2D and 3D configurations, both in theory and on real field data acquired on a shallow sedimentary aquifer in the Loire valley in France. This device is promising for the accurate monitoring of rapid water flows in heterogeneous aquifers.
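    The CDMA decoding step can be illustrated with a toy numerical example: each transmitter's injected current is multiplied by an orthogonal chip sequence, the receiver sees the superposition, and correlating the received signal with each code recovers the voltage contribution of each source. The Walsh-Hadamard codes, transfer coefficients, and noise level below are arbitrary illustrative values, not the coding actually used by the instrument.

```python
import numpy as np
from scipy.linalg import hadamard

# Three transmitters inject simultaneously; each current is coded with an
# orthogonal (Walsh-Hadamard) chip sequence of length 8.
codes = hadamard(8)[1:4]                       # rows 1..3: three mutually orthogonal +/-1 codes
true_transfer = np.array([0.80, -0.15, 0.42])  # hypothetical soil transfer (V per A) per source

rng = np.random.default_rng(1)
injected = codes * 1.0                         # 1 A amplitude per transmitter
received = true_transfer @ injected + 0.01 * rng.normal(size=codes.shape[1])

# Decoding: correlate the total received potential with each transmitter's code.
recovered = received @ codes.T / codes.shape[1]
print(recovered)   # ~ [0.80, -0.15, 0.42]: one voltage contribution per source
```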

  8. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have several advantages over traditional CCD sensors, and CMOS-based imaging systems have become an active area of research and development. To achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SL75T, and we use a CameraLink interface and the AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, 500-frames-per-second CMOS image sensor with a global shutter and a 4/3" optical format. The sensor uses column-parallel A/D converters to digitize the images. The CameraLink interface uses the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light from objects is captured by the CMOS detector, which converts the light to electronic signals and sends them to the FPGA. The FPGA processes the received data and transmits it through the CameraLink interface, configured in full mode, to a host computer equipped with acquisition cards, where the images are stored, visualized, and processed. The paper explains the structure and principle of the system and introduces its hardware and software design. The FPGA provides the drive clock for the CMOS sensor, and the CMOS data are converted to LVDS signals and transmitted to the data acquisition cards. After simulation, the paper presents a row-transfer timing sequence of the CMOS sensor. The system achieves real-time image acquisition and external control.

  9. orthAgogue: an agile tool for the rapid prediction of orthology relations.

    PubMed

    Ekseth, Ole Kristian; Kuiper, Martin; Mironov, Vladimir

    2014-03-01

    The comparison of genes and gene products across species depends on high-quality tools to determine the relationships between gene or protein sequences from various species. Although some excellent applications are available and widely used, their performance leaves room for improvement. We developed orthAgogue: a multithreaded C application for high-speed estimation of homology relations in massive datasets, operated via a flexible and easy command-line interface. The orthAgogue software is distributed under the GNU license. The source code and binaries compiled for Linux are available at https://code.google.com/p/orthagogue/.

  10. Targeted Next-generation Sequencing and Bioinformatics Pipeline to Evaluate Genetic Determinants of Constitutional Disease.

    PubMed

    Dilliott, Allison A; Farhan, Sali M K; Ghani, Mahdi; Sato, Christine; Liang, Eric; Zhang, Ming; McIntyre, Adam D; Cao, Henian; Racacho, Lemuel; Robinson, John F; Strong, Michael J; Masellis, Mario; Bulman, Dennis E; Rogaeva, Ekaterina; Lang, Anthony; Tartaglia, Carmela; Finger, Elizabeth; Zinman, Lorne; Turnbull, John; Freedman, Morris; Swartz, Rick; Black, Sandra E; Hegele, Robert A

    2018-04-04

    Next-generation sequencing (NGS) is quickly revolutionizing how research into the genetic determinants of constitutional disease is performed. The technique is highly efficient with millions of sequencing reads being produced in a short time span and at relatively low cost. Specifically, targeted NGS is able to focus investigations to genomic regions of particular interest based on the disease of study. Not only does this further reduce costs and increase the speed of the process, but it lessens the computational burden that often accompanies NGS. Although targeted NGS is restricted to certain regions of the genome, preventing identification of potential novel loci of interest, it can be an excellent technique when faced with a phenotypically and genetically heterogeneous disease, for which there are previously known genetic associations. Because of the complex nature of the sequencing technique, it is important to closely adhere to protocols and methodologies in order to achieve sequencing reads of high coverage and quality. Further, once sequencing reads are obtained, a sophisticated bioinformatics workflow is utilized to accurately map reads to a reference genome, to call variants, and to ensure the variants pass quality metrics. Variants must also be annotated and curated based on their clinical significance, which can be standardized by applying the American College of Medical Genetics and Genomics Pathogenicity Guidelines. The methods presented herein will display the steps involved in generating and analyzing NGS data from a targeted sequencing panel, using the ONDRISeq neurodegenerative disease panel as a model, to identify variants that may be of clinical significance.
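    As a tiny illustration of the post-calling quality-control step mentioned above, the sketch below filters a plain-text VCF file on the QUAL field and the DP (read depth) entry of the INFO column. The thresholds, and the idea of filtering on exactly these two metrics, are illustrative assumptions rather than the ONDRISeq pipeline's actual criteria; the ACMG curation step is not modeled.

```python
def passes_quality(record, min_qual=30.0, min_depth=20):
    """Minimal quality filter for one tab-separated VCF data line.

    The QUAL and DP thresholds are illustrative numbers, not the pipeline's settings.
    """
    fields = record.rstrip("\n").split("\t")
    qual = float(fields[5]) if fields[5] != "." else 0.0          # column 6: QUAL
    info = dict(kv.split("=", 1) for kv in fields[7].split(";") if "=" in kv)
    depth = int(info.get("DP", 0))                                # read depth from INFO
    return qual >= min_qual and depth >= min_depth

def filter_vcf(path_in, path_out):
    """Copy header lines unchanged and keep only variants passing the filter."""
    with open(path_in) as src, open(path_out, "w") as dst:
        for line in src:
            if line.startswith("#") or passes_quality(line):
                dst.write(line)
```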

  11. Modelling Rate for Change of Speed in Calculus Proposal of Inductive Inquiry

    ERIC Educational Resources Information Center

    Sokolowski, Andrzej

    2014-01-01

    Research has shown that students have difficulties with understanding the process of determining whether an object is speeding up or slowing down, especially when it is applied to the analysis of motion in the negative direction. As inductively organized learning through its scaffolding sequencing supports the process of knowledge acquisition…

  12. Dynamic Modeling of Starting Aerodynamics and Stage Matching in an Axi-Centrifugal Compressor

    NASA Technical Reports Server (NTRS)

    Wilkes, Kevin; OBrien, Walter F.; Owen, A. Karl

    1996-01-01

    A DYNamic Turbine Engine Compressor Code (DYNTECC) has been modified to model speed transients from 0-100% of compressor design speed. The impetus for this enhancement was to investigate stage matching and stalling behavior during a start sequence as compared to rotating stall events above ground idle. The model can simulate speed and throttle excursions simultaneously as well as time varying bleed flow schedules. Results of a start simulation are presented and compared to experimental data obtained from an axi-centrifugal turboshaft engine and companion compressor rig. Stage by stage comparisons reveal the front stages to be operating in or near rotating stall through most of the start sequence. The model matches the starting operating line quite well in the forward stages with deviations appearing in the rearward stages near the start bleed. Overall, the performance of the model is very promising and adds significantly to the dynamic simulation capabilities of DYNTECC.

  13. Accelerating parallel transmit array B1 mapping in high field MRI with slice undersampling and interpolation by kriging.

    PubMed

    Ferrand, Guillaume; Luong, Michel; Cloos, Martijn A; Amadon, Alexis; Wackernagel, Hans

    2014-08-01

    Transmit arrays have been developed to mitigate the RF field inhomogeneity commonly observed in high field magnetic resonance imaging (MRI), typically above 3T. To this end, the knowledge of the RF complex-valued B1 transmit-sensitivities of each independent radiating element has become essential. This paper details a method to speed up a currently available B1-calibration method. The principle relies on slice undersampling, slice and channel interleaving, and kriging, an interpolation method developed in geostatistics and applicable in many domains. It has been demonstrated that, under certain conditions, kriging gives the best estimator of a field in a region of interest. The resulting accelerated sequence allows mapping a complete set of eight volumetric field maps of the human head in about 1 min. For validation, the accuracy of kriging is first evaluated against a well-known interpolation technique based on the Fourier transform, as well as against a B1-map interpolation method presented in the literature. This analysis is carried out on simulated and decimated experimental B1 maps. Finally, the accelerated sequence is compared to the standard sequence on a phantom and a volunteer. The new sequence provides B1 maps three times faster, with a potential loss of accuracy of only about 5%.
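    A bare-bones version of kriging interpolation along an undersampled slice direction might look like the sketch below, which solves the ordinary kriging system at each query position and works directly on complex-valued samples. The Gaussian covariance model and its range/sill, the 1-D slice geometry, and the fake B1 profile are assumptions for illustration; the paper's calibrated estimator is not reproduced here.

```python
import numpy as np

def gaussian_cov(d, sill=1.0, corr_range=3.0):
    """Hypothetical Gaussian covariance model C(d) = sill * exp(-(d/range)^2)."""
    return sill * np.exp(-(d / corr_range) ** 2)

def ordinary_kriging(z_known, x_known, x_query):
    """Interpolate values z_known measured at positions x_known onto x_query.

    Solves the ordinary kriging system [C 1; 1' 0][w; mu] = [c0; 1] for each
    query point; works for complex-valued B1 samples as well.
    """
    n = len(x_known)
    d = np.abs(x_known[:, None] - x_known[None, :])
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = gaussian_cov(d)
    A[n, :n] = A[:n, n] = 1.0
    out = np.empty(len(x_query), dtype=np.asarray(z_known).dtype)
    for i, xq in enumerate(x_query):
        b = np.zeros(n + 1)
        b[:n] = gaussian_cov(np.abs(x_known - xq))
        b[n] = 1.0
        w = np.linalg.solve(A, b)[:n]
        out[i] = w @ z_known
    return out

# Example: slices sampled at even positions, interpolated at the skipped odd positions.
x_known = np.arange(0, 16, 2.0)
z_known = np.exp(1j * 0.3 * x_known) * (1.0 + 0.05 * x_known)   # fake complex B1 profile
print(ordinary_kriging(z_known, x_known, np.arange(1, 15, 2.0)))
```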

  14. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture.

    PubMed

    Trivedi, Chintan A; Bollmann, Johann H

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback.

  15. Oncogenomics and the development of new cancer therapies.

    PubMed

    Strausberg, Robert L; Simpson, Andrew J G; Old, Lloyd J; Riggins, Gregory J

    2004-05-27

    Scientists have sequenced the human genome and identified most of its genes. Now it is time to use these genomic data, and the high-throughput technology developed to generate them, to tackle major health problems such as cancer. To accelerate our understanding of this disease and to produce targeted therapies, further basic mutational and functional genomic information is required. A systematic and coordinated approach, with the results freely available, should speed up progress. This will best be accomplished through an international academic and pharmaceutical oncogenomics initiative.

  16. A decade of aeroacoustic research at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Schmitz, Frederic H.; Mosher, M.; Kitaplioglu, Cahit; Cross, J.; Chang, I.

    1988-01-01

    The rotorcraft aeroacoustic research accomplishments of the past decade at Ames Research Center are reviewed. These include an extensive sequence of flight, ground, and wind tunnel tests that utilized the facilities to guide and pioneer theoretical research. Many of these experiments were of benchmark quality. The experiments were used to isolate the inadequacies of linear theory in high-speed impulsive noise research, led to the development of theoretical approaches, and guided the application of the emerging discipline of computational fluid dynamics to rotorcraft aeroacoustic problems.

  17. Real-space and real-time dynamics of CRISPR-Cas9 visualized by high-speed atomic force microscopy.

    PubMed

    Shibata, Mikihiro; Nishimasu, Hiroshi; Kodera, Noriyuki; Hirano, Seiichi; Ando, Toshio; Uchihashi, Takayuki; Nureki, Osamu

    2017-11-10

    The CRISPR-associated endonuclease Cas9 binds to a guide RNA and cleaves double-stranded DNA with a sequence complementary to the RNA guide. The Cas9-RNA system has been harnessed for numerous applications, such as genome editing. Here we use high-speed atomic force microscopy (HS-AFM) to visualize the real-space and real-time dynamics of CRISPR-Cas9 in action. HS-AFM movies indicate that, whereas apo-Cas9 adopts unexpected flexible conformations, Cas9-RNA forms a stable bilobed structure and interrogates target sites on the DNA by three-dimensional diffusion. These movies also provide real-time visualization of the Cas9-mediated DNA cleavage process. Notably, the Cas9 HNH nuclease domain fluctuates upon DNA binding, and subsequently adopts an active conformation, where the HNH active site is docked at the cleavage site in the target DNA. Collectively, our HS-AFM data extend our understanding of the action mechanism of CRISPR-Cas9.

  18. TIME-SEQUENCED X-RAY OBSERVATION OF A THERMAL EXPLOSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tringe, J. W.; Molitoris, J. D.; Kercher, J. R.

    The evolution of a thermally-initiated explosion is studied using a multiple-image x-ray system. HMX-based PBX 9501 is used in this work, enabling direct comparison to recently-published data obtained with proton radiography [1]. Multiple x-ray images of the explosion are obtained with image spacing of ten microseconds or more. The explosion is simultaneously characterized with a high-speed camera using an interframe spacing of 11 microseconds. X-ray and camera images were both initiated passively by signals from an embedded thermocouple array, as opposed to being actively triggered by a laser pulse or other external source. X-ray images show an accelerating reacting front within the explosive, and also show unreacted explosive at the time the containment vessel bursts. High-speed camera images show debris ejected from the vessel expanding at 800-2100 m/s in the first tens of microseconds after the container wall failure. The effective center of the initiation volume is about 6 mm from the geometric center of the explosive.

  19. Comparison of methods for library construction and short read annotation of shellfish viral metagenomes.

    PubMed

    Wei, Hong-Ying; Huang, Sheng; Wang, Jiang-Yong; Gao, Fang; Jiang, Jing-Zhe

    2018-03-01

    The emergence and widespread use of high-throughput sequencing technologies have promoted metagenomic studies on environmental or animal samples. Library construction for metagenome sequencing and annotation of the produced sequence reads are important steps in such studies and influence the quality of metagenomic data. In this study, we collected some marine mollusk samples, such as Crassostrea hongkongensis, Chlamys farreri, and Ruditapes philippinarum, from coastal areas in South China. These samples were divided into two batches to compare two library construction methods for shellfish viral metagenomes. Our analysis showed that reverse-transcribing RNA into cDNA and then amplifying it simultaneously with DNA by whole genome amplification (WGA) yielded a larger amount of DNA compared with using only WGA or WTA (whole transcriptome amplification). Moreover, higher quality libraries were obtained by agarose gel extraction rather than with AMPure bead size selection. However, the latter can also provide good results if combined with the adjustment of the filter parameters; this, together with its simplicity, makes it a viable alternative. Finally, we compared three annotation tools (BLAST, DIAMOND, and Taxonomer) and two reference databases (NCBI's NR and Uniprot's Uniref). Considering the limitations of computing resources and data transfer speed, we propose the use of DIAMOND with Uniref for annotating metagenomic short reads, as its running speed can guarantee a good annotation rate. This study may serve as a useful reference for selecting methods for shellfish viral metagenome library construction and read annotation.

  20. The relative temporal sequence of decline in mobility and cognition among initially unimpaired older adults: Results from the Baltimore Longitudinal Study of Aging.

    PubMed

    Tian, Qu; An, Yang; Resnick, Susan M; Studenski, Stephanie

    2017-05-01

    Most older individuals who experience mobility decline also show cognitive decline, but whether cognitive decline precedes or follows mobility limitation is not well understood. We examined the temporal sequence of mobility and cognition among initially unimpaired older adults. Mobility and cognition were assessed every 2 years for 6 years in 412 participants aged ≥60 with initially unimpaired cognition and gait speed. Using autoregressive models, accounting for the dependent variable from the prior assessment, baseline age, sex, body mass index and education, we examined the temporal sequence of change in mobility (6 m usual gait speed, 400 m fast walk time) and executive function (visuoperceptual speed: Digit Symbol Substitution Test (DSST); cognitive flexibility: Trail Making Test part B (TMT-B)) or memory (California Verbal Learning Test (CVLT) immediate, short-delay, long-delay). There was a bidirectional relationship over time between slower usual gait speed and both poorer DSST and TMT-B scores (Bonferroni-corrected P < 0.005). In contrast, slower 400 m fast walk time predicted subsequent poorer DSST, TMT-B, CVLT immediate recall and CVLT short-delay scores (P < 0.005), while these measures did not predict subsequent 400 m fast walk time (P > 0.005). Among initially unimpaired older adults, the temporal relationship between usual gait speed and executive function is bidirectional, with each predicting change in the other, while poor fast walking performance predicts future executive function and memory changes but not vice versa. Challenging tasks like the 400 m walk appear superior to usual gait speed for predicting executive function and memory change in unimpaired older adults. Published by Oxford University Press on behalf of the British Geriatrics Society 2016. This work is written by (a) US Government employee(s) and is in the public domain in the US.

  1. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics.

    PubMed

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-08-01

    RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of [Formula: see text]. Subsequently, numerous faster 'Sankoff-style' approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics, have been limited to high complexity ([Formula: see text] quartic time). Breaking this barrier, we introduce the novel Sankoff-style algorithm 'sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)', which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff's original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurate than RAF, which uses sequence-based heuristics. © The Author 2015. Published by Oxford University Press.

  2. Comparison of reading speed with 3 different log-scaled reading charts.

    PubMed

    Buari, Noor Halilah; Chen, Ai-Hong; Musa, Nuraini

    2014-01-01

    A reading chart that resembles real reading conditions is important for evaluating quality of life in terms of reading performance. The purpose of this study was to compare the reading speed of the UiTM Malay related words (UiTM-Mrw) reading chart with the MNread Acuity Chart and the Colenbrander Reading Chart. Fifty subjects with normal sight were recruited through randomized sampling (mean age=22.98±1.65 years). Subjects were asked to read three different near charts aloud and as quickly as possible in random sequence: the UiTM-Mrw Reading Chart, the MNread Acuity Chart and the Colenbrander Reading Chart. The time taken to read each chart was recorded and any errors while reading were noted. Reading performance was quantified in terms of reading speed in words per minute (wpm). The mean reading speed for the UiTM-Mrw Reading Chart, MNread Acuity Chart and Colenbrander Reading Chart was 200±30 wpm, 196±28 wpm and 194±31 wpm, respectively. Comparison of reading speed between the UiTM-Mrw Reading Chart and the MNread Acuity Chart showed no significant difference (t=-0.73, p=0.72), as did comparison between the UiTM-Mrw Reading Chart and the Colenbrander Reading Chart (t=-0.97, p=0.55). Bland and Altman plots showed good agreement of reading speed on the UiTM-Mrw Reading Chart with both the MNread Acuity Chart and the Colenbrander Reading Chart. The UiTM-Mrw Reading Chart in the Malay language is highly comparable with standardized charts and can be used for evaluating reading speed. Copyright © 2013 Spanish General Council of Optometry. Published by Elsevier Espana. All rights reserved.

  3. Comparison of reading speed with 3 different log-scaled reading charts

    PubMed Central

    Buari, Noor Halilah; Chen, Ai-Hong; Musa, Nuraini

    2014-01-01

    Background A reading chart that resembles real reading conditions is important for evaluating quality of life in terms of reading performance. The purpose of this study was to compare the reading speed of the UiTM Malay related words (UiTM-Mrw) reading chart with the MNread Acuity Chart and the Colenbrander Reading Chart. Materials and methods Fifty subjects with normal sight were recruited through randomized sampling (mean age = 22.98 ± 1.65 years). Subjects were asked to read three different near charts aloud and as quickly as possible in random sequence: the UiTM-Mrw Reading Chart, the MNread Acuity Chart and the Colenbrander Reading Chart. The time taken to read each chart was recorded and any errors while reading were noted. Reading performance was quantified in terms of reading speed in words per minute (wpm). Results The mean reading speed for the UiTM-Mrw Reading Chart, MNread Acuity Chart and Colenbrander Reading Chart was 200 ± 30 wpm, 196 ± 28 wpm and 194 ± 31 wpm, respectively. Comparison of reading speed between the UiTM-Mrw Reading Chart and the MNread Acuity Chart showed no significant difference (t = −0.73, p = 0.72), as did comparison between the UiTM-Mrw Reading Chart and the Colenbrander Reading Chart (t = −0.97, p = 0.55). Bland and Altman plots showed good agreement of reading speed on the UiTM-Mrw Reading Chart with both the MNread Acuity Chart and the Colenbrander Reading Chart. Conclusion The UiTM-Mrw Reading Chart in the Malay language is highly comparable with standardized charts and can be used for evaluating reading speed. PMID:25323642

  4. Developmental Trajectory of Motor Deficits in Preschool Children with ADHD.

    PubMed

    Sweeney, Kristie L; Ryan, Matthew; Schneider, Heather; Ferenc, Lisa; Denckla, Martha Bridge; Mahone, E Mark

    2018-01-01

    Motor deficits persisting into childhood (>7 years) are associated with increased executive and cognitive dysfunction, likely due to parallel neural circuitry. This study assessed the longitudinal trajectory of motor deficits in preschool children with ADHD, compared to typically developing (TD) children, in order to identify individuals at risk for anomalous neurological development. Participants included 47 children (21 ADHD, 26 TD) ages 4-7 years who participated in three visits (V1, V2, V3), each one year apart (V1=48-71 months, V2=60-83 months, V3=72-95 months). Motor variables assessed included speed (finger tapping and sequencing), total overflow, and axial movements from the Revised Physical and Neurological Examination for Subtle Signs (PANESS). Effects for group, visit, and group-by-visit interaction were examined. There were significant effects for group (favoring TD) for finger tapping speed and total axial movements, visit (performance improving with age for all 4 variables), and a significant group-by-visit interaction for finger tapping speed. Motor speed (repetitive finger tapping) and quality of axial movements are sensitive markers of anomalous motor development associated with ADHD in children as young as 4 years. Conversely, motor overflow and finger sequencing speed may be less sensitive in preschool, due to ongoing wide variations in attainment of these milestones.

  5. cljam: a library for handling DNA sequence alignment/map (SAM) with parallel processing.

    PubMed

    Takeuchi, Toshiki; Yamada, Atsuo; Aoki, Takashi; Nishimura, Kunihiro

    2016-01-01

    Next-generation sequencing can determine DNA bases, and the results of sequence alignments are generally stored in files in the Sequence Alignment/Map (SAM) format or its compressed binary version (BAM). SAMtools is a typical tool for dealing with files in the SAM/BAM format. SAMtools has various functions, including detection of variants, visualization of alignments, indexing, extraction of parts of the data and loci, and conversion of file formats. It is written in C and executes quickly. However, SAMtools requires an additional implementation to be used in parallel with, for example, OpenMP (Open Multi-Processing) libraries. Given the accumulation of next-generation sequencing data, a simple parallelization program that can support cloud and PC cluster environments is required. We have developed cljam using the Clojure programming language, which simplifies parallel programming, to handle SAM/BAM data. Cljam runs in a Java runtime environment (e.g., on Windows, Linux, or Mac OS X) with Clojure. Cljam can process and analyze SAM/BAM files in parallel and at high speed. The execution time with cljam is almost the same as with SAMtools. The cljam code is written in Clojure and has fewer lines than other similar tools.

  6. Virtual Machine Language

    NASA Technical Reports Server (NTRS)

    Grasso, Christopher; Page, Dennis; O'Reilly, Taifun; Fteichert, Ralph; Lock, Patricia; Lin, Imin; Naviaux, Keith; Sisino, John

    2005-01-01

    Virtual Machine Language (VML) is a mission-independent, reusable software system for programming spacecraft operations. Features of VML include a rich set of data types, named functions, parameters, IF and WHILE control structures, polymorphism, and on-the-fly creation of spacecraft commands from calculated values. Spacecraft functions can be abstracted into named blocks that reside in files aboard the spacecraft. These named blocks accept parameters and execute in a repeatable fashion. The sizes of uplink products are minimized by the ability to call blocks that implement most of the command steps. This block approach also enables some autonomous operations aboard the spacecraft, such as aerobraking, telemetry conditional monitoring, and anomaly response, without developing autonomous flight software. Operators on the ground write blocks and command sequences in a concise, high-level, human-readable programming language (also called VML). A compiler translates the human-readable blocks and command sequences into binary files (the operations products). The flight portion of VML interprets the uplinked binary files. The ground subsystem of VML also includes an interactive sequence-execution tool hosted on workstations, which runs sequences at several thousand times real-time speed, affords debugging, and generates reports. This tool enables iterative development of blocks and sequences within times of the order of seconds.

  7. 40 CFR 92.124 - Test sequence; general requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Test sequence; general requirements...

  8. 40 CFR 92.124 - Test sequence; general requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Test sequence; general requirements. 92...

  9. 40 CFR 92.124 - Test sequence; general requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Test sequence; general requirements...

  10. 40 CFR 92.124 - Test sequence; general requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Test sequence; general requirements...

  11. 40 CFR 92.124 - Test sequence; general requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Test sequence; general requirements...

  12. shinyheatmap: Ultra fast low memory heatmap web interface for big data genomics.

    PubMed

    Khomtchouk, Bohdan B; Hennessy, James R; Wahlestedt, Claes

    2017-01-01

    Transcriptomics, metabolomics, metagenomics, and other various next-generation sequencing (-omics) fields are known for their production of large datasets, especially across single-cell sequencing studies. Visualizing such big data has posed technical challenges in biology, both in terms of available computational resources as well as programming acumen. Since heatmaps are used to depict high-dimensional numerical data as a colored grid of cells, efficiency and speed have often proven to be critical considerations in the process of successfully converting data into graphics. For example, rendering interactive heatmaps from large input datasets (e.g., 100k+ rows) has been computationally infeasible on both desktop computers and web browsers. In addition to memory requirements, programming skills and knowledge have frequently been barriers-to-entry for creating highly customizable heatmaps. We propose shinyheatmap: an advanced user-friendly heatmap software suite capable of efficiently creating highly customizable static and interactive biological heatmaps in a web browser. shinyheatmap is a low memory footprint program, making it particularly well-suited for the interactive visualization of extremely large datasets that cannot typically be computed in-memory due to size restrictions. Also, shinyheatmap features a built-in high performance web plug-in, fastheatmap, for rapidly plotting interactive heatmaps of datasets as large as 10^5-10^7 rows within seconds, effectively shattering previous performance benchmarks of heatmap rendering speed. shinyheatmap is hosted online as a freely available web server with an intuitive graphical user interface: http://shinyheatmap.com. The methods are implemented in R, and are available as part of the shinyheatmap project at: https://github.com/Bohdan-Khomtchouk/shinyheatmap. Users can access fastheatmap directly from within the shinyheatmap web interface, and all source code has been made publicly available on Github: https://github.com/Bohdan-Khomtchouk/fastheatmap.

  13. Pattern-based integer sample motion search strategies in the context of HEVC

    NASA Astrophysics Data System (ADS)

    Maier, Georg; Bross, Benjamin; Grois, Dan; Marpe, Detlev; Schwarz, Heiko; Veltkamp, Remco C.; Wiegand, Thomas

    2015-09-01

    The H.265/MPEG-H High Efficiency Video Coding (HEVC) standard provides a significant increase in coding efficiency compared to its predecessor, the H.264/MPEG-4 Advanced Video Coding (AVC) standard, which however comes at the cost of a high computational burden for a compliant encoder. Motion estimation (ME), which is a part of the inter-picture prediction process, typically consumes a large amount of computational resources while significantly increasing the coding efficiency. In spite of the fact that both the H.265/MPEG-H HEVC and H.264/MPEG-4 AVC standards allow processing motion information on a fractional sample level, motion search algorithms operating on the integer sample level remain an integral part of ME. In this paper, a flexible integer sample ME framework is proposed, allowing a significant reduction of ME computation time to be traded off against a coding efficiency penalty in terms of bit rate overhead. As a result, through extensive experimentation, an integer sample ME algorithm that provides a good trade-off is derived, incorporating a combination and optimization of known predictive, pattern-based and early termination techniques. The proposed ME framework is implemented on the basis of the HEVC Test Model (HM) reference software and compared to the state-of-the-art fast search algorithm that is a native part of HM. It is observed that for high resolution sequences, the integer sample ME process can be sped up by factors varying from 3.2 to 7.6, resulting in bit-rate overheads of 1.5% and 0.6% for the Random Access (RA) and Low Delay P (LDP) configurations, respectively. In addition, a similar speed-up is observed for sequences with mainly Computer-Generated Imagery (CGI) content while trading off a bit rate overhead of up to 5.2%.
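    A classic pattern-based integer-sample search, the diamond search, is sketched below as a generic example of this class of algorithm; it is not the HM fast search nor the exact combination proposed in the paper, and the block size, predictor handling, and SAD cost are the usual textbook choices assumed here.

```python
import numpy as np

LARGE_DIAMOND = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]
SMALL_DIAMOND = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]

def sad(cur_block, ref, y, x, bs):
    """Sum of absolute differences; out-of-frame candidates get infinite cost."""
    if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
        return np.inf
    return int(np.abs(cur_block.astype(np.int32) - ref[y:y + bs, x:x + bs].astype(np.int32)).sum())

def diamond_search(cur, ref, by, bx, bs=16, predictor=(0, 0)):
    """Integer-sample motion search for the bs x bs block of `cur` at (by, bx)."""
    cur_block = cur[by:by + bs, bx:bx + bs]
    cy, cx = by + predictor[0], bx + predictor[1]      # start from a motion-vector predictor
    while True:                                         # large diamond until the centre wins
        cost, dy, dx = min((sad(cur_block, ref, cy + dy, cx + dx, bs), dy, dx)
                           for dy, dx in LARGE_DIAMOND)
        if (dy, dx) == (0, 0):
            break
        cy, cx = cy + dy, cx + dx
    cost, dy, dx = min((sad(cur_block, ref, cy + dy, cx + dx, bs), dy, dx)
                       for dy, dx in SMALL_DIAMOND)
    return (cy + dy - by, cx + dx - bx), cost           # integer motion vector and its SAD
```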

  14. Parallel Implementation of MAFFT on CUDA-Enabled Graphics Hardware.

    PubMed

    Zhu, Xiangyuan; Li, Kenli; Salah, Ahmad; Shi, Lin; Li, Keqin

    2015-01-01

    Multiple sequence alignment (MSA) constitutes an extremely powerful tool for many biological applications including phylogenetic tree estimation, secondary structure prediction, and critical residue identification. However, aligning large biological sequences with popular tools such as MAFFT requires long runtimes on sequential architectures. Due to the ever-increasing sizes of sequence databases, there is increasing demand to accelerate this task. In this paper, we demonstrate how graphics processing units (GPUs), powered by the compute unified device architecture (CUDA), can be used as an efficient computational platform to accelerate the MAFFT algorithm. To fully exploit the GPU's capabilities for accelerating MAFFT, we have optimized the sequence data organization to eliminate the bandwidth bottleneck of memory access, designed a memory allocation and reuse strategy to make full use of the limited memory of GPUs, proposed a new modified-run-length encoding (MRLE) scheme to reduce memory consumption, and used high-performance shared memory to speed up I/O operations. Our implementation, tested on three NVIDIA GPUs, achieves a speedup of up to 11.28 on a Tesla K20m GPU compared to the sequential MAFFT 7.015.
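    The paper's modified run-length encoding (MRLE) is not spelled out in the abstract, so the sketch below shows only plain run-length encoding of an alignment row, the generic idea such a scheme builds on for cutting memory use on gappy or repetitive sequence data. The function names and the example row are illustrative.

```python
def rle_encode(seq):
    """Run-length encode a sequence string, e.g. 'AAAG--' -> [('A', 3), ('G', 1), ('-', 2)]."""
    runs = []
    for ch in seq:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((ch, 1))               # start a new run
    return runs

def rle_decode(runs):
    return "".join(ch * n for ch, n in runs)

row = "AAAAAGGG-----TTTTCCCC----A"
packed = rle_encode(row)
assert rle_decode(packed) == row
print(len(row), "characters ->", len(packed), "runs")
```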

  15. Buckling Test Results from the 8-Foot-Diameter Orthogrid-Stiffened Cylinder Test Article TA01. [Test Dates: 19-21 November 2008

    NASA Technical Reports Server (NTRS)

    Hilburger, Mark W.; Waters, W. Allen, Jr.; Haynie, Waddy T.

    2015-01-01

    Results from the testing of cylinder test article SBKF-P2-CYLTA01 (referred to herein as TA01) are presented. The testing was conducted at the Marshall Space Flight Center (MSFC), November 19-21, 2008, in support of the Shell Buckling Knockdown Factor (SBKF) Project. The test was used to verify the performance of a newly constructed buckling test facility at MSFC and to verify the test article design and analysis approach used by the SBKF project researchers. TA01 is an 8-foot-diameter (96 inches), 78.0-inch-long, aluminum-lithium (Al-Li), orthogrid-stiffened cylindrical shell similar to those used in current state-of-the-art launch vehicle structures and was designed to exhibit global buckling when subjected to compression loads. Five different load sequences were applied to TA01 during testing and included four sub-critical load sequences, i.e., loading conditions that did not cause buckling or material failure, and one final load sequence to buckling and collapse. The sub-critical load sequences consisted of either uniform axial compression loading or combined axial compression and bending, and the final load sequence subjected TA01 to uniform axial compression. Traditional displacement transducers and strain gages were used to monitor the test article response at nearly 300 locations, and an advanced digital image correlation system was used to obtain low-speed and high-speed full-field displacement measurements of the outer surface of the test article. Overall, the test facility and test article performed as designed. In particular, the test facility successfully applied all desired load combinations to the test article and was able to test safely into the postbuckling range of loading, and the test article failed by global buckling. In addition, the test results correlated well with initial pretest predictions.

  16. Perceptual multistability in figure-ground segregation using motion stimuli.

    PubMed

    Gori, Simone; Giora, Enrico; Pedersini, Riccardo

    2008-11-01

    In a series of experiments using ambiguous stimuli, we investigate the effects of displaying ordered, discrete series of images on the dynamics of figure-ground segregation. For low frame presentation speeds, the series were perceived as a sequence of discontinuous, static images, while for high speeds they were perceived as continuous. We conclude that using stimuli varying continuously along one parameter results in stronger hysteresis and reduces spontaneous switching compared to matched static stimuli with discontinuous parameter changes. The additional evidence that the size of the hysteresis effects depended on trial duration is consistent with the stochastic nature of the dynamics governing figure-ground segregation. The results showed that for continuously changing stimuli, alternative figure-ground organizations are resolved via low-level, dynamical competition. A second series of experiments confirmed these results with an ambiguous stimulus based on Petter's effect.

  17. Calibration of a Direct Detection Doppler Wind Lidar System using a Wind Tunnel

    NASA Astrophysics Data System (ADS)

    Rees, David

    2012-07-01

    As a critical stage of a Project to develop an airborne Direct-Detection Doppler Wind Lidar System, it was possible to exploit a Wind Tunnel of the VZLU, Prague, Czech Republic for a comprehensive series of tests against calibrated Air Speed generated by the Wind Tunnel. The initial results from these test sequences will be presented. The rms wind speed errors were of order 0.25 m/sec - very satisfactory for this class of Doppler Wind Lidar measurements. The next stage of this Project will exploit a more highly-developed laser and detection system for measurements of wind shear, wake vortex and other potentially hazardous meteorological phenomena at Airports. Following the end of this Project, key parts of the instrumentation will be used for routine ground-based Doppler Wind Lidar measurements of the troposphere and stratosphere.

  18. 40 CFR 86.884-12 - Test run.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... zero and span settings of the smokemeter. (If a recorder is used, a chart speed of approximately one... collection, it shall be run at a minimum chart speed of one inch per minute during the idle mode and... zero and full scale response may be rechecked and reset during the idle mode of each test sequence. (v...

  19. Efficient moving target analysis for inverse synthetic aperture radar images via joint speeded-up robust features and regular moment

    NASA Astrophysics Data System (ADS)

    Yang, Hongxin; Su, Fulin

    2018-01-01

    We propose a moving target analysis algorithm using speeded-up robust features (SURF) and regular moments in inverse synthetic aperture radar (ISAR) image sequences. In our study, we first extract interest points from ISAR image sequences using SURF. Unlike traditional feature point extraction methods, SURF-based feature points are invariant to scattering intensity, target rotation, and image size. We then employ a bilateral feature registering model to match these feature points. The feature registering scheme can not only search the isotropic feature points that link the image sequences but also reduce erroneous matching pairs. After that, the target centroid is detected using regular moments. Finally, a cost function based on the correlation coefficient is adopted to analyze the motion information. Experimental results based on simulated and real data validate the effectiveness and practicability of the proposed method.
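    The centroid detection step via regular (raw) image moments can be written in a few lines, as in the sketch below; the toy ISAR magnitude frame and the bright-scatterer placement are made up for illustration, and the SURF matching and correlation cost function from the paper are not reproduced.

```python
import numpy as np

def centroid_from_moments(image):
    """Target centroid from zeroth/first-order regular (raw) image moments.

    m_pq = sum over pixels of x**p * y**q * I(y, x); the centroid is (m10/m00, m01/m00).
    """
    img = image.astype(np.float64)
    ys, xs = np.indices(img.shape)
    m00 = img.sum()
    m10 = (xs * img).sum()
    m01 = (ys * img).sum()
    return m10 / m00, m01 / m00          # (x_centroid, y_centroid)

# Hypothetical ISAR magnitude frame with a bright scatterer around (y=12, x=20).
frame = np.zeros((32, 48))
frame[10:15, 18:23] = 1.0
print(centroid_from_moments(frame))      # ~ (20.0, 12.0)
```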

  20. Effect of time discretization of the imaging process on the accuracy of trajectory estimation in fluorescence microscopy

    PubMed Central

    Wong, Yau; Chao, Jerry; Lin, Zhiping; Ober, Raimund J.

    2014-01-01

    In fluorescence microscopy, high-speed imaging is often necessary for the proper visualization and analysis of fast subcellular dynamics. Here, we examine how the speed of image acquisition affects the accuracy with which parameters such as the starting position and speed of a microscopic non-stationary fluorescent object can be estimated from the resulting image sequence. Specifically, we use a Fisher information-based performance bound to investigate the detector-dependent effect of frame rate on the accuracy of parameter estimation. We demonstrate that when a charge-coupled device detector is used, the estimation accuracy deteriorates as the frame rate increases beyond a point where the detector’s readout noise begins to overwhelm the low number of photons detected in each frame. In contrast, we show that when an electron-multiplying charge-coupled device (EMCCD) detector is used, the estimation accuracy improves with increasing frame rate. In fact, at high frame rates where the low number of photons detected in each frame renders the fluorescent object difficult to detect visually, imaging with an EMCCD detector represents a natural implementation of the Ultrahigh Accuracy Imaging Modality, and enables estimation with an accuracy approaching that which is attainable only when a hypothetical noiseless detector is used. PMID:25321248
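    A highly simplified numerical sketch of one side of this trade-off is given below: with a fixed photon budget split over more and more frames, per-frame readout noise accumulates for a conventional CCD, while an EMCCD (whose gain makes readout noise negligible at the cost of an excess-noise factor of roughly sqrt(2)) keeps an approximately constant summed signal-to-noise ratio. The photon budget and noise figures are invented, and this does not reproduce the Fisher-information bound computed in the paper, which also captures the gain in temporal information at higher frame rates.

```python
import numpy as np

total_photons = 2000.0              # photon budget detected over the whole trajectory (assumed)
read_noise_ccd = 8.0                # e- rms readout noise per CCD frame (assumed)
excess_noise_emccd = np.sqrt(2.0)   # EM excess-noise factor; EM gain makes readout negligible

for frames in (5, 20, 100, 400):
    per_frame = total_photons / frames
    # Noise of the image summed over all frames: shot noise plus accumulated readout noise.
    ccd_sigma = np.sqrt(total_photons + frames * read_noise_ccd**2)
    emccd_sigma = excess_noise_emccd * np.sqrt(total_photons)
    print(f"{frames:4d} frames: {per_frame:6.1f} photons/frame | "
          f"summed SNR  CCD {total_photons / ccd_sigma:5.1f}   "
          f"EMCCD {total_photons / emccd_sigma:5.1f}")
```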

  1. Multi-Point Hermes Acoustic Modem for High-Speed, High-Frequency Acoustic Communications with Low-Frequency Acoustic Control Loop for Real-Time Transmission of AUV-Carried High-Resolution Images and Navigation Data in Support of Ship Hulls Inspection

    DTIC Science & Technology

    2013-08-31

    13.3 µs) used in the data frame. The preamble uses direct-sequence spread spectrum (DSSS) to reduce the negative impact of fading. A reference symbol is transmitted in each of the allocated frequency hops, and the data bits are recovered from the phase difference between the constellation points demodulated from each sub-band of the information symbol.

  2. Differential high-speed digital micromirror device based fluorescence speckle confocal microscopy.

    PubMed

    Jiang, Shihong; Walker, John

    2010-01-20

    We report a differential fluorescence speckle confocal microscope that acquires an image in a fraction of a second by exploiting the very high frame rate of modern digital micromirror devices (DMDs). The DMD projects a sequence of predefined binary speckle patterns to the sample and modulates the intensity of the returning fluorescent light simultaneously. The fluorescent light reflecting from the DMD's "on" and "off" pixels is modulated by correlated speckle and anticorrelated speckle, respectively, to form two images on two CCD cameras in parallel. The sum of the two images recovers a widefield image, but their difference gives a near-confocal image in real time. Experimental results for both low and high numerical apertures are shown.
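    The reconstruction step described above, in which the sum of the two camera images recovers the widefield image and their difference gives a near-confocal image, can be written as a short function. The assumption that the two images are already registered to a common pixel grid, and the clipping of negative differences, are simplifications added here for illustration.

```python
import numpy as np

def reconstruct(on_image, off_image):
    """Combine the two simultaneously acquired camera images.

    on_image:  fluorescence modulated by the correlated speckle ("on" DMD pixels)
    off_image: fluorescence modulated by the anticorrelated speckle ("off" DMD pixels)
    Both are assumed to be registered to the same pixel grid.
    """
    on_image = on_image.astype(np.float64)
    off_image = off_image.astype(np.float64)
    widefield = on_image + off_image                     # sum recovers the widefield image
    confocal = np.clip(on_image - off_image, 0, None)    # difference gives a near-confocal image
    return widefield, confocal
```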

  3. Skylab

    NASA Image and Video Library

    1973-01-01

    This montage is a sequence of soft x-ray photographs of the boot-shaped coronal hole rotating with the sun. The individual pictures were taken about 2 days apart by the Skylab telescope. Most of the apparent changes in this 6-day period resulted from a changing perspective. Skylab data helped demonstrate that coronal holes are sources of high-velocity streams in the solar wind. These high-velocity streams consist of electrons, protons, and atomic nuclei that spray out from the Sun into interplanetary space. When the coronal hole is near the center of the Sun, as in view 2, the sprinkler is directed at Earth. These high-speed streams of solar wind distort Earth's magnetic field and disturb its upper atmosphere.

  4. 20 kHz toluene planar laser-induced fluorescence imaging of a jet in nearly sonic crossflow

    NASA Astrophysics Data System (ADS)

    Miller, V. A.; Troutman, V. A.; Mungal, M. G.; Hanson, R. K.

    2014-10-01

    This manuscript describes continuous, high-repetition-rate (20 kHz) toluene planar laser-induced fluorescence (PLIF) imaging in an expansion tube impulse flow facility. Cinematographic image sequences are acquired that visualize an underexpanded jet of hydrogen in Mach 0.9 crossflow, a practical flow configuration relevant to aerospace propulsion systems. The freestream gas is nitrogen seeded with toluene; toluene broadly absorbs and fluoresces in the ultraviolet, and the relatively high quantum yield of toluene produces large signals and high signal-to-noise ratios. Toluene is excited using a commercially available, frequency-quadrupled (266 nm), high-repetition-rate (20 kHz), pulsed (0.8-0.9 mJ per pulse), diode-pumped solid-state Nd:YAG laser, and fluorescence is imaged with a high-repetition-rate intensifier and CMOS camera. The resulting PLIF movie and image sequences are presented, visualizing the jet start-up process and the dynamics of the jet in crossflow; the freestream duration and a measure of freestream momentum flux steadiness are also inferred. This work demonstrates progress toward continuous PLIF imaging of practical flow systems in impulse facilities at kHz acquisition rates using practical, turn-key, high-speed laser and imaging systems.

  5. Chromatically encoded high-speed photography of cavitation bubble dynamics inside inhomogeneous ophthalmic tissue

    NASA Astrophysics Data System (ADS)

    Tinne, N.; Matthias, B.; Kranert, F.; Wetzel, C.; Krüger, A.; Ripken, T.

    2016-03-01

    The interaction effect of photodisruption, which is used for dissection of biological tissue with fs-laser pulses, has been intensively studied inside water as the prevalent sample medium. In this case, the single effect is highly reproducible and, hence, the method of time-resolved photography is sufficiently applicable. In contrast, the reproducibility decreases significantly when analyzing more solid and anisotropic media such as biological tissue. Therefore, a high-speed photographic approach is necessary in this case. The presented study introduces a novel technique for high-speed photography based on the principle of chromatic encoding. For illumination of the region of interest within the sample medium, the light paths of up to 12 LEDs with various emission wavelengths are overlaid via optical filters. Here, MOSFET electronics provide an LED flash with a duration of <100 ns; the diodes are externally triggered with a distinct delay for every LED. Furthermore, the different illumination wavelengths are chromatically separated again for detection via the camera chip. Thus, the experimental setup enables the generation of a time sequence of up to 12 images of the dynamics of a single cavitation bubble. In comparison to conventional time-resolved photography, images in sample media like water and HEMA show the significant advantages of this novel illumination technique. In conclusion, the results of this study are of great importance for the fundamental evaluation of the laser-tissue interaction inside anisotropic biological tissue and for the optimization of the surgical process with high-repetition-rate fs-lasers. Additionally, this application is also suitable for the investigation of other microscopic, ultra-fast events in transparent inhomogeneous materials.

  6. A Synthetic Self-Oscillating Vocal Fold Model Platform for Studying Augmentation Injection

    PubMed Central

    Murray, Preston R.; Thomson, Scott L.; Smith, Marshall E.

    2013-01-01

    Objective Design and evaluate a platform for studying the mechanical effects of augmentation injections using synthetic self-oscillating vocal fold models. Study Design Basic science. Methods Life-sized, synthetic, multi-layer, self-oscillating vocal fold models were created that simulated bowing via volumetric reduction of the body layer relative to that of a normal, unbowed model. Material properties of the layers were unchanged. Models with varying degrees of bowing were created and paired with normal models. Following initial acquisition of data (onset pressure, vibration frequency, flow rate, and high-speed image sequences), bowed models were injected with silicone that had material properties similar to those used in augmentation procedures. Three different silicone injection quantities were tested: sufficient to close the glottal gap, insufficient to close the glottal gap, and excess silicone to create convex bowing of the bowed model. The above-mentioned metrics were again taken and compared. Pre- and post-injection high-speed image sequences were acquired using a hemilarynx setup, from which medial surface dynamics were quantified. Results The models vibrated with mucosal wave-like motion and at onset pressures and frequencies typical of human phonation. The models successfully exhibited various degrees of bowing which were then mitigated by injecting filler material. The models showed general pre- to post-injection decreases in onset pressure, flow rate, and open quotient, and a corresponding increase in vibration frequency. Conclusion The model may be useful in further explorations of the mechanical consequences of augmentation injections. PMID:24476985

  7. Effect of low-speed impact damage and damage location on behavior of composite panels

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C.

    1992-01-01

    The effect of low-speed impact damage on the compression and tension strength of thin and moderately thick composite specimens was investigated. Impact speed ranged from 50 to 550 ft/s, with corresponding impact energies from 0.25 to 30.7 ft-lb. Impact locations were near the center of the specimen or near a lateral unloaded edge. In this study, thin specimens with only 90-degree and ±45-degree plies that were impacted away from the unloaded edge suffered less reduction in load-carrying capability from impact damage than the same specimens impacted near the unloaded edge. Failure loads of thicker compression-loaded specimens with a similar stacking sequence were independent of impact location. Failure loads of thin tension-loaded specimens with 0-degree plies were independent of impact location, whereas failure loads of thicker compression-loaded specimens with 0-degree plies were dependent upon impact location. A finite element analysis indicated that high axial strains occurred near the unloaded edges of the postbuckled panels. Thus, impacts near the unloaded edge would significantly affect the behavior of the postbuckled panel.

  8. Chemical Vapor Deposition Of Silicon Carbide

    NASA Technical Reports Server (NTRS)

    Powell, J. Anthony; Larkin, David J.; Matus, Lawrence G.; Petit, Jeremy B.

    1993-01-01

    Large single-crystal SiC boules, from which wafers of large area are cut, now being produced commercially. Availability of wafers opens door for development of SiC semiconductor devices. Recently developed chemical vapor deposition (CVD) process produces thin single-crystal SiC films on SiC wafers; essential step in sequence of steps used to fabricate semiconductor devices. Further development required for specific devices. Some potential high-temperature applications include sensors and control electronics for advanced turbine engines and automobile engines, power electronics for electromechanical actuators for advanced aircraft and for space power systems, and equipment used in drilling of deep wells. High-frequency applications include communication systems, high-speed computers, and microwave power transistors. High-radiation applications include sensors and controls for nuclear reactors.

  9. Repeated high-speed activities during youth soccer games in relation to changes in maximal sprinting and aerobic speeds.

    PubMed

    Buchheit, M; Simpson, B M; Mendez-Villanueva, A

    2013-01-01

    The aim of this study was to examine in highly-trained young soccer players whether substantial changes in either maximal sprinting speed (MSS) or maximal aerobic speed (as inferred from peak incremental test speed, V(Vam-Eval)) can affect repeated high-intensity running during games. Data from 33 players (14.5±1.3 years), who presented substantial changes in either MSS or V(Vam-Eval) throughout 2 consecutive testing periods (~3 months), were included in the final analysis. For each player, time-motion analyses were performed using a global positioning system (1-Hz) during 2-10 international club games played within 1-2 months from/to each testing period of interest (n for games analyzed=109, player-games=393, games per player per period=4±2). Sprint activities were defined as at least a 1-s run at intensities higher than 61% of individual MSS. Repeated-sprint sequences (RSS) were defined as a minimum of 2 consecutive sprints interspersed with a maximum of 60 s of recovery. Improvements in both MSS and V(Vam-Eval) were likely associated with a decreased RSS occurrence, but in some positions only (e.g., -24% vs. -3% for improvements in MSS in strikers vs. midfielders, respectively). The changes in the number of sprints per RSS were less clear but also position-dependent, e.g., +7 to +12% for full-backs and wingers, -5 to -7% for centre-backs and midfielders. In developing soccer players, changes in repeated-sprint activity during games do not necessarily match those in physical fitness. Game tactical and strategic requirements are likely to modulate on-field players' activity patterns independently (at least partially) of players' physical capacities. © Georg Thieme Verlag KG Stuttgart · New York.
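
    The abstract's operational definitions translate directly into a small detection routine. The following Python sketch applies them to a 1-Hz speed trace: a sprint is at least 1 s above 61% of the player's MSS, and a repeated-sprint sequence is at least 2 sprints separated by no more than 60 s of recovery; the speed values and MSS below are invented.

      # Sketch of the abstract's definitions applied to a 1-Hz speed trace:
      # a sprint is >=1 s above 61% of individual MSS; a repeated-sprint sequence (RSS)
      # is >=2 consecutive sprints separated by <=60 s of recovery. Data are invented.

      def detect_sprints(speed_kmh, mss_kmh, threshold=0.61, min_len_s=1):
          """Return (start, end) index pairs of runs above threshold*MSS (1-Hz samples)."""
          limit = threshold * mss_kmh
          sprints, start = [], None
          for i, v in enumerate(speed_kmh + [0.0]):      # sentinel to close a trailing run
              if v > limit and start is None:
                  start = i
              elif v <= limit and start is not None:
                  if i - start >= min_len_s:
                      sprints.append((start, i - 1))
                  start = None
          return sprints

      def group_rss(sprints, max_recovery_s=60):
          """Group sprints into repeated-sprint sequences (>=2 sprints, <=60 s apart)."""
          sequences, current = [], []
          for s in sprints:
              if current and s[0] - current[-1][1] <= max_recovery_s:
                  current.append(s)
              else:
                  if len(current) >= 2:
                      sequences.append(current)
                  current = [s]
          if len(current) >= 2:
              sequences.append(current)
          return sequences

      if __name__ == "__main__":
          trace = [10, 22, 23, 12, 21, 22, 10, 10, 24, 25, 8]   # km/h, invented 1-Hz samples
          sprints = detect_sprints(trace, mss_kmh=30.0)
          print(sprints, group_rss(sprints))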

  10. A teaching-learning sequence about weather map reading

    NASA Astrophysics Data System (ADS)

    Mandrikas, Achilleas; Stavrou, Dimitrios; Skordoulis, Constantine

    2017-07-01

    In this paper a teaching-learning sequence (TLS) introducing pre-service elementary teachers (PET) to weather map reading, with emphasis on wind assignment, is presented. The TLS includes activities about recognition of wind symbols, assignment of wind direction and wind speed on a weather map and identification of wind characteristics in a weather forecast. Sixty PET capabilities and difficulties in understanding weather maps were investigated, using inquiry-based learning activities. The results show that most PET became more capable of reading weather maps and assigning wind direction and speed on them. Our results also show that PET could be guided to understand meteorology concepts useful in everyday life and in teaching their future students.

  11. Validation of high throughput sequencing and microbial forensics applications

    PubMed Central

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of specific application and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security. PMID:25101166

  12. A comprehensive evaluation of assembly scaffolding tools

    PubMed Central

    2014-01-01

    Background Genome assembly is typically a two-stage process: contig assembly followed by the use of paired sequencing reads to join contigs into scaffolds. Scaffolds are usually the focus of reported assembly statistics; longer scaffolds greatly facilitate the use of genome sequences in downstream analyses, and it is appealing to present larger numbers as metrics of assembly performance. However, scaffolds are highly prone to errors, especially when generated using short reads, which can directly result in inflated assembly statistics. Results Here we provide the first independent evaluation of scaffolding tools for second-generation sequencing data. We find large variations in the quality of results depending on the tool and dataset used. Even extremely simple test cases of perfect input, constructed to elucidate the behaviour of each algorithm, produced some surprising results. We further dissect the performance of the scaffolders using real and simulated sequencing data derived from the genomes of Staphylococcus aureus, Rhodobacter sphaeroides, Plasmodium falciparum and Homo sapiens. The results from simulated data are of high quality, with several of the tools producing perfect output. However, at least 10% of joins remain unidentified when using real data. Conclusions The scaffolders vary in their usability, speed and number of correct and missed joins made between contigs. Results from real data highlight opportunities for further improvements of the tools. Overall, SGA, SOPRA and SSPACE generally outperform the other tools on our datasets. However, the quality of the results is highly dependent on the read mapper and genome complexity. PMID:24581555

  13. Validation of high throughput sequencing and microbial forensics applications.

    PubMed

    Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of specific application and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security.

  14. Evolution in the block: common elements of 5S rDNA organization and evolutionary patterns in distant fish genera.

    PubMed

    Campo, Daniel; García-Vázquez, Eva

    2012-01-01

    The 5S rDNA is organized in the genome as tandemly repeated copies of a structural unit composed of a coding sequence plus a nontranscribed spacer (NTS). The coding region is highly conserved in evolution, whereas the NTS varies in both length and sequence. It has been proposed that 5S rRNA genes are members of a gene family that has arisen through concerted evolution. In this study, we describe the molecular organization and evolution of the 5S rDNA in the genera Lepidorhombus and Scophthalmus (Scophthalmidae) and compare it with the already known 5S rDNA of the very different genera Merluccius (Merluccidae) and Salmo (Salmoninae), to identify common structural elements or patterns for understanding 5S rDNA evolution in fish. High intra- and interspecific diversity within the 5S rDNA family in all the genera can be explained by a combination of duplications, deletions, and transposition events. Sequence blocks with high similarity in all the 5S rDNA members across species were identified for the four studied genera, with evidence of intense gene conversion within noncoding regions. We propose a model to explain the evolution of the 5S rDNA, in which the evolutionary units are blocks of nucleotides rather than the entire sequences or single nucleotides. This model implies a "two-speed" evolution: slow within blocks (homogenized by recombination) and fast within the gene family (diversified by duplications and deletions).

  15. A method to quantify movement activity of groups of animals using automated image analysis

    NASA Astrophysics Data System (ADS)

    Xu, Jianyu; Yu, Haizhen; Liu, Ying

    2009-07-01

    Most physiological and environmental changes are capable of inducing variations in animal behavior. Behavioral parameters can be measured continuously in-situ by a non-invasive and non-contact approach, and have the potential to be used in actual production settings to predict stress conditions. Most vertebrates tend to live in groups, herds, flocks, shoals, bands, or packs of conspecific individuals. Under culture conditions, livestock or fish live in groups and interact with each other, so the aggregate behavior of the group should be studied rather than that of individuals. This paper presents a method to calculate the movement speed of a group of animals in an enclosure or a tank, expressed as a body length speed that corresponds to group activity, using computer vision techniques. Frame sequences captured at a fixed time interval were subtracted in pairs after image segmentation and identification. By labeling the components caused by object movement in the difference frame, the projected area swept by the movement of every object in the capture interval was calculated; this projected area was divided by the projected area of every object in the later frame to obtain the body length moving distance of each object, from which the relative body length speed was derived. The average speed of all objects responds well to the activity of the group. The group activity of a tilapia (Oreochromis niloticus) school exposed to a high (2.65 mg/L) level of unionized ammonia (UIA) was quantified using this method. The high UIA condition elicited a marked increase in school activity in the first hour (P<0.05), exhibiting an avoidance reaction (trying to flee from the high UIA condition), after which activity decreased gradually.
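
    A much-simplified, single-object version of the described measure can be sketched with NumPy: subtract segmented binary masks of consecutive frames, take the area swept by movement, and normalize by the object's projected area in the later frame to obtain a body length speed. The per-object connected-component labeling used in the paper is omitted, and the masks are toy data.

      import numpy as np

      # Sketch of the body-length-speed idea: subtract segmented binary masks of two
      # frames, take the area swept by movement, and normalize it by the object's
      # projected area in the later frame. Masks here are invented toy data.

      def body_length_speed(mask_prev, mask_curr, interval_s):
          """Approximate activity as (swept area / object area) per second."""
          moved = np.logical_xor(mask_prev, mask_curr)          # pixels that changed
          swept_area = moved.sum()
          object_area = max(mask_curr.sum(), 1)                 # avoid division by zero
          return (swept_area / object_area) / interval_s        # body lengths per second

      if __name__ == "__main__":
          a = np.zeros((8, 8), bool); a[2:4, 2:4] = True        # object at one position
          b = np.zeros((8, 8), bool); b[2:4, 4:6] = True        # same object, shifted
          print(round(body_length_speed(a, b, interval_s=1.0), 2))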

  16. A novel typing method for Listeria monocytogenes using high-resolution melting analysis (HRMA) of tandem repeat regions.

    PubMed

    Ohshima, Chihiro; Takahashi, Hajime; Iwakawa, Ai; Kuda, Takashi; Kimura, Bon

    2017-07-17

    Listeria monocytogenes, which is responsible for causing food poisoning known as listeriosis, infects humans and animals. Widely distributed in the environment, this bacterium is known to contaminate food products after being transmitted to factories via raw materials. To minimize the contamination of products by food pathogens, it is critical to identify and eliminate factory entry routes and pathways for the causative bacteria. High resolution melting analysis (HRMA) is a method that takes advantage of differences in DNA sequences and PCR product lengths that are reflected by the disassociation temperature. Through our research, we have developed a multiple locus variable-number tandem repeat analysis (MLVA) using HRMA as a simple and rapid method to differentiate L. monocytogenes isolates. While evaluating our developed method, the ability of MLVA-HRMA, MLVA using capillary electrophoresis, and multilocus sequence typing (MLST) was compared for their ability to discriminate between strains. The MLVA-HRMA method displayed greater discriminatory ability than MLST and MLVA using capillary electrophoresis, suggesting that the variation in the number of repeat units, along with mutations within the DNA sequence, was accurately reflected by the melting curve of HRMA. Rather than relying on DNA sequence analysis or high-resolution electrophoresis, the MLVA-HRMA method employs the same process as PCR until the analysis step, suggesting a combination of speed and simplicity. The result of MLVA-HRMA method is able to be shared between different laboratories. There are high expectations that this method will be adopted for regular inspections at food processing facilities in the near future. Copyright © 2017. Published by Elsevier B.V.

  17. Clinical application of Half Fourier Acquisition Single Shot Turbo Spin Echo (HASTE) imaging accelerated by simultaneous multi-slice acquisition.

    PubMed

    Schulz, Jenni; P Marques, José; Ter Telgte, Annemieke; van Dorst, Anouk; de Leeuw, Frank-Erik; Meijer, Frederick J A; Norris, David G

    2018-01-01

    As a single-shot sequence with a long train of refocusing pulses, Half-Fourier Acquisition Single-Shot Turbo-Spin-Echo (HASTE) suffers from high power deposition limiting use at high resolutions and high field strengths, particularly if combined with acceleration techniques such as simultaneous multi-slice (SMS) imaging. Using a combination of multiband (MB)-excitation and PINS-refocusing pulses will effectively accelerate the acquisition time while staying within the SAR limitations. In particular, uncooperative and young patients will profit from the speed of the MB-PINS HASTE sequence, as clinical diagnosis can be possible without sedation. Materials and Methods MB-excitation and PINS-refocusing pulses were incorporated into a HASTE sequence with blipped CAIPIRINHA and TRAPS including an internal FLASH reference scan for online reconstruction. Whole brain MB-PINS HASTE data were acquired on a Siemens 3T Prisma system from 10 individuals and compared to a clinical HASTE protocol. Results The proposed MB-PINS HASTE protocol accelerates the acquisition by about a factor 2 compared to the clinical HASTE. The diagnostic image quality proved to be comparable for both sequences for the evaluation of the overall aspect of the brain, the detection of white matter changes and areas of tissue loss, and for the evaluation of the CSF spaces, although artifacts were more frequently encountered with MB-PINS HASTE. Conclusions MB-PINS HASTE enables acquisition of slice-accelerated highly T2-weighted images and provides good diagnostic image quality while reducing acquisition time. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Effects of weather conditions on emergency ambulance calls for acute coronary syndromes

    NASA Astrophysics Data System (ADS)

    Vencloviene, Jone; Babarskiene, Ruta; Dobozinskas, Paulius; Siurkaite, Viktorija

    2015-08-01

    The aim of this study was to evaluate the relationship between weather conditions and daily emergency ambulance calls for acute coronary syndromes (ACS). The study included data on 3631 patients who called the ambulance for chest pain and were admitted to the department of cardiology as patients with ACS. We investigated the effect of daily air temperature (T), barometric pressure (BP), relative humidity, and wind speed (WS) to detect the risk areas for low and high daily volume (DV) of emergency calls. We used the classification and regression tree method as well as cluster analysis. The clusters were created by applying the k-means cluster algorithm using the standardized daily weather variables. The analysis was performed separately during cold (October-April) and warm (May-September) seasons. During the cold period, the greatest DV was observed on days of low T during the 3-day sequence, on cold and windy days, and on days of low BP and high WS during the 3-day sequence; low DV was associated with high BP and decreased WS on the previous day. During June-September, a lower DV was associated with low BP, windless days, and high BP and low WS during the 3-day sequence. During the warm period, the greatest DV was associated with increased BP and changing WS during the 3-day sequence. These results suggest that daily T, BP, and WS on the day of the ambulance call and on the two previous days may be prognostic variables for the risk of ACS.
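
    The clustering step described here (k-means on standardized daily weather variables) can be sketched with scikit-learn as follows; the day-by-variable matrix is invented toy data with columns for temperature, barometric pressure, relative humidity and wind speed.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.preprocessing import StandardScaler

      # Sketch of the clustering step described in the abstract: standardize the daily
      # weather variables and apply k-means. The array below is invented toy data with
      # columns (temperature, barometric pressure, relative humidity, wind speed).

      days = np.array([
          [-5.0, 1025.0, 80.0, 7.0],
          [ 2.0, 1002.0, 90.0, 3.0],
          [18.0, 1012.0, 55.0, 2.0],
          [21.0, 1008.0, 60.0, 5.0],
          [-8.0, 1030.0, 85.0, 9.0],
          [15.0, 1015.0, 65.0, 1.0],
      ])

      z = StandardScaler().fit_transform(days)                  # standardized variables
      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
      print(labels)                                             # cluster index per day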

  19. A Teaching-Learning Sequence about Weather Map Reading

    ERIC Educational Resources Information Center

    Mandrikas, Achilleas; Stavrou, Dimitrios; Skordoulis, Constantine

    2017-01-01

    In this paper a teaching-learning sequence (TLS) introducing pre-service elementary teachers (PET) to weather map reading, with emphasis on wind assignment, is presented. The TLS includes activities about recognition of wind symbols, assignment of wind direction and wind speed on a weather map and identification of wind characteristics in a…

  20. Speed, Accuracy, and Serial Order in Sequence Production

    ERIC Educational Resources Information Center

    Pfordresher, Peter Q.; Palmer, Caroline; Jungers, Melissa K.

    2007-01-01

    The production of complex sequences like music or speech requires the rapid and temporally precise production of events (e.g., notes and chords), often at fast rates. Memory retrieval in these circumstances may rely on the simultaneous activation of both the current event and the surrounding context (Lashley, 1951). We describe an extension to a…

  1. Moving at the Speed of Potential: A Mixed-Methods Study of Accelerating Developmental Students in a California Community College

    ERIC Educational Resources Information Center

    Parks, Paula L.

    2014-01-01

    Most developmental community college students are not completing the composition sequence successfully. This mixed-methods study examined acceleration as a way to help developmental community college students complete the composition sequence more quickly and more successfully. Acceleration is a curricular redesign that includes challenging…

  2. CNVcaller: highly efficient and widely applicable software for detecting copy number variations in large populations.

    PubMed

    Wang, Xihong; Zheng, Zhuqing; Cai, Yudong; Chen, Ting; Li, Chao; Fu, Weiwei; Jiang, Yu

    2017-12-01

    The increasing amount of sequencing data available for a wide variety of species can be theoretically used for detecting copy number variations (CNVs) at the population level. However, the growing sample sizes and the divergent complexity of nonhuman genomes challenge the efficiency and robustness of current human-oriented CNV detection methods. Here, we present CNVcaller, a read-depth method for discovering CNVs in population sequencing data. The computational speed of CNVcaller was 1-2 orders of magnitude faster than CNVnator and Genome STRiP for complex genomes with thousands of unmapped scaffolds. CNV detection of 232 goats required only 1.4 days on a single compute node. Additionally, the Mendelian consistency of sheep trios indicated that CNVcaller mitigated the influence of high proportions of gaps and misassembled duplications in the nonhuman reference genome assembly. Furthermore, multiple evaluations using real sheep and human data indicated that CNVcaller achieved the best accuracy and sensitivity for detecting duplications. The fast generalized detection algorithms included in CNVcaller overcome prior computational barriers for detecting CNVs in large-scale sequencing data with complex genomic structures. Therefore, CNVcaller promotes population genetic analyses of functional CNVs in more species. © The Authors 2017. Published by Oxford University Press.
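
    The generic read-depth principle underlying such tools (not CNVcaller's actual algorithm) can be illustrated in a few lines of Python: bin per-base depth into windows, normalize by the median window depth, and flag windows whose copy ratio departs strongly from 1 as candidate losses or gains. The depth values and thresholds are invented.

      import statistics

      # Generic read-depth sketch (not CNVcaller's actual algorithm): bin per-base depth
      # into windows, normalize by the median window depth, and flag windows whose copy
      # ratio departs from 1 as candidate losses or gains. Depths below are invented.

      def window_copy_ratios(depths, window=5):
          windows = [depths[i:i + window] for i in range(0, len(depths), window)]
          means = [sum(w) / len(w) for w in windows]
          median = statistics.median(means)
          return [m / median for m in means]

      def call_cnv_windows(ratios, loss=0.6, gain=1.4):
          calls = []
          for i, r in enumerate(ratios):
              if r <= loss:
                  calls.append((i, "loss", round(r, 2)))
              elif r >= gain:
                  calls.append((i, "gain", round(r, 2)))
          return calls

      if __name__ == "__main__":
          depth = [30]*10 + [15]*5 + [30]*10 + [62]*5 + [30]*10   # invented per-base depth
          print(call_cnv_windows(window_copy_ratios(depth)))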

  3. CNVcaller: highly efficient and widely applicable software for detecting copy number variations in large populations

    PubMed Central

    Wang, Xihong; Zheng, Zhuqing; Cai, Yudong; Chen, Ting; Li, Chao; Fu, Weiwei

    2017-01-01

    Abstract Background The increasing amount of sequencing data available for a wide variety of species can be theoretically used for detecting copy number variations (CNVs) at the population level. However, the growing sample sizes and the divergent complexity of nonhuman genomes challenge the efficiency and robustness of current human-oriented CNV detection methods. Results Here, we present CNVcaller, a read-depth method for discovering CNVs in population sequencing data. The computational speed of CNVcaller was 1–2 orders of magnitude faster than CNVnator and Genome STRiP for complex genomes with thousands of unmapped scaffolds. CNV detection of 232 goats required only 1.4 days on a single compute node. Additionally, the Mendelian consistency of sheep trios indicated that CNVcaller mitigated the influence of high proportions of gaps and misassembled duplications in the nonhuman reference genome assembly. Furthermore, multiple evaluations using real sheep and human data indicated that CNVcaller achieved the best accuracy and sensitivity for detecting duplications. Conclusions The fast generalized detection algorithms included in CNVcaller overcome prior computational barriers for detecting CNVs in large-scale sequencing data with complex genomic structures. Therefore, CNVcaller promotes population genetic analyses of functional CNVs in more species. PMID:29220491

  4. Containment Safety Of Super Phenix : Essai Mars

    NASA Astrophysics Data System (ADS)

    Falgayrettes, M. F.; Fiche, C.; Hamon, P.

    1985-02-01

    The protection of people and property must be assured in every situation around an industrial power plant. That is why the French Commissariat a l'Energie Atomique has defined the size of the confinement of Super Phenix so that it withstands the worst, highly hypothetical accident. The study of the strength of the confinement has been carried out by two complementary means: calculation (Display poster # 491 188) and experiment on a reactor mock-up. The latter is presented in the film. The solutions adopted for the problems encountered are emphasized, and the work with a high-speed camera is presented. The film is illustrated with some fast movie sequences.

  5. Simulator evaluation of the final approach spacing tool

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Erzberger, Heinz; Green, Steven M.

    1990-01-01

    The design and simulator evaluation of an automation tool for assisting terminal radar approach controllers in sequencing and spacing traffic onto the final approach course is described. The automation tool, referred to as the Final Approach Spacing Tool (FAST), displays speed and heading advisories for arrivals as well as sequencing information on the controller's radar display. The main functional elements of FAST are a scheduler that schedules and sequences the traffic, a 4-D trajectory synthesizer that generates the advisories, and a graphical interface that displays the information to the controller. FAST was implemented on a high performance workstation. It can be operated as a stand-alone in the Terminal Radar Approach Control (TRACON) Facility or as an element of a system integrated with automation tools in the Air Route Traffic Control Center (ARTCC). FAST was evaluated by experienced TRACON controllers in a real-time air traffic control simulation. Simulation results show that FAST significantly reduced controller workload and demonstrated a potential for an increase in landing rate.

  6. Extra projection data identification method for fast-continuous-rotation industrial cone-beam CT.

    PubMed

    Yang, Min; Duan, Shengling; Duan, Jinghui; Wang, Xiaolong; Li, Xingdong; Meng, Fanyong; Zhang, Jianhai

    2013-01-01

    Fast continuous rotation is an effective measure to improve the scanning speed and decrease the radiation dose for cone-beam CT. However, because of acceleration and deceleration of the motor, as well as the response lag of the scanning control terminals to the host PC, unevenly distributed and redundant projections are inevitably created, which seriously decrease the quality of the reconstructed images. In this paper, we first analyzed the theoretical sequence chart of the fast-continuous-rotation mode. Then, an optimized sequence chart was proposed by extending the rotation angle span to ensure that the effective 2π-span projections fall within the stable rotation stage. In order to match the rotation angle with the projection image accurately, the structural similarity (SSIM) index was used as a control parameter for extraction of the effective projection sequence, which constitutes exactly the complete projection data for image reconstruction. The experimental results showed that the SSIM-based method located projection views with high accuracy and was easy to implement.
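
    The SSIM-based locating step can be sketched with scikit-image's structural_similarity: score a reference projection against every frame of the continuously acquired sequence and take the best match as the anchor of the effective 2π span. The projections below are random toy images rather than detector frames.

      import numpy as np
      from skimage.metrics import structural_similarity

      # Sketch of using SSIM to locate a reference projection inside a continuously
      # acquired sequence, as a way to anchor the effective 2*pi span. The projections
      # below are random toy images; in practice they would be detector frames.

      def locate_best_match(reference, sequence):
          """Return the index of the frame most similar to the reference (highest SSIM)."""
          scores = [structural_similarity(reference, frame,
                                          data_range=frame.max() - frame.min())
                    for frame in sequence]
          return int(np.argmax(scores)), max(scores)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          frames = [rng.random((64, 64)) for _ in range(20)]
          ref = frames[7] + 0.01 * rng.random((64, 64))          # noisy copy of frame 7
          print(locate_best_match(ref, frames))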

  7. Ultra Deep Sequencing of Listeria monocytogenes sRNA Transcriptome Revealed New Antisense RNAs

    PubMed Central

    Behrens, Sebastian; Widder, Stefanie; Mannala, Gopala Krishna; Qing, Xiaoxing; Madhugiri, Ramakanth; Kefer, Nathalie; Mraheil, Mobarak Abu; Rattei, Thomas; Hain, Torsten

    2014-01-01

    Listeria monocytogenes, a gram-positive pathogen, and causative agent of listeriosis, has become a widely used model organism for intracellular infections. Recent studies have identified small non-coding RNAs (sRNAs) as important factors for regulating gene expression and pathogenicity of L. monocytogenes. Increased speed and reduced costs of high throughput sequencing (HTS) techniques have made RNA sequencing (RNA-Seq) the state-of-the-art method to study bacterial transcriptomes. We created a large transcriptome dataset of L. monocytogenes containing a total of 21 million reads, using the SOLiD sequencing technology. The dataset contained cDNA sequences generated from L. monocytogenes RNA collected under intracellular and extracellular condition and additionally was size fractioned into three different size ranges from <40 nt, 40–150 nt and >150 nt. We report here, the identification of nine new sRNAs candidates of L. monocytogenes and a reevaluation of known sRNAs of L. monocytogenes EGD-e. Automatic comparison to known sRNAs revealed a high recovery rate of 55%, which was increased to 90% by manual revision of the data. Moreover, thorough classification of known sRNAs shed further light on their possible biological functions. Interestingly among the newly identified sRNA candidates are antisense RNAs (asRNAs) associated to the housekeeping genes purA, fumC and pgi and potentially their regulation, emphasizing the significance of sRNAs for metabolic adaptation in L. monocytogenes. PMID:24498259

  8. Reading Speed Does Not Benefit from Increased Line Spacing in AMD Patients

    PubMed Central

    CHUNG, SUSANA T. L.; JARVIS, SAMUEL H.; WOO, STANLEY Y.; HANSON, KARA; JOSE, RANDALL T.

    2009-01-01

    Purpose Crowding, the adverse spatial interaction due to the proximity of adjacent targets, has been suggested as an explanation for slow reading in peripheral vision. Previously, we showed that increased line spacing, which presumably reduces crowding between adjacent lines of text, improved reading speed in the normal periphery (Chung, Optom Vis Sci 2004;81:525–35). The purpose of this study was to examine whether or not individuals with age-related macular degeneration (AMD) would benefit from increased line spacing for reading. Methods Experiment 1: Eight subjects with AMD read aloud 100-word passages rendered at five line spacings: the standard single spacing, 1.5×, 2×, 3×, and 4× the standard spacing. Print sizes were 1× and 2× of the critical print size. Reading time and number of reading errors for each passage were measured to compute the reading speed. Experiment 2: Four subjects with AMD read aloud sequences of six 4-letter words, presented on a computer monitor using the rapid serial visual presentation (RSVP) paradigm. Target words were presented singly, or flanked above and below by two other words that changed in synchrony with the target word, at various vertical word separations. Print size was 2× the critical print size. Reading speed was calculated based on the RSVP exposure duration that yielded 80% of the words read correctly. Results Averaged across subjects, reading speeds for passages were virtually constant for the range of line spacings tested. For sequences of unrelated words, reading speeds were also virtually constant for the range of vertical word separations tested, except at the smallest (standard) separation at which reading speed was lower. Conclusions Contrary to the previous finding that reading speed improved in normal peripheral vision, increased line spacing in passages, or increased vertical separation between words in RSVP, did not lead to improved reading speed in people with AMD. PMID:18772718

  9. Methodological considerations for the 3D measurement of the X-factor and lower trunk movement in golf.

    PubMed

    Joyce, Christopher; Burnett, Angus; Ball, Kevin

    2010-09-01

    It is believed that increasing the X-factor (movement of the shoulders relative to the hips) during the golf swing can increase ball velocity at impact. Increasing the X-factor may also increase the risk of low back pain. The aim of this study was to provide recommendations for the three-dimensional (3D) measurement of the X-factor and lower trunk movement during the golf swing. This three-part validation study involved: (1) developing and validating models and related algorithms, (2) comparing 3D data obtained during static positions representative of the golf swing to visual estimates, and (3) comparing 3D data obtained during dynamic golf swings to images gained from high-speed video. Of particular interest were issues related to sequence dependency. After the models and algorithms were validated, results from parts two and three of the study supported the conclusion that a lateral bending/flexion-extension/axial rotation (ZYX) order of rotation is the most suitable Cardanic sequence for assessing the X-factor and lower trunk movement in the golf swing. The findings of this study have relevance for further research examining the X-factor, its relationship to club head speed, and lower trunk movement and low back pain in golf.
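
    The recommended ZYX (lateral bending/flexion-extension/axial rotation) Cardan decomposition can be sketched with SciPy by expressing the shoulder segment's rotation in the hip (lower trunk) frame and extracting intrinsic ZYX angles. The axis conventions and the two segment orientations below are assumed for illustration only.

      import numpy as np
      from scipy.spatial.transform import Rotation

      # Sketch of decomposing the shoulders-relative-to-hips rotation with a ZYX Cardan
      # sequence, as recommended in the abstract. Axis conventions and the two segment
      # rotation matrices below are assumed for illustration only.

      def x_factor_angles(r_hips, r_shoulders, order="ZYX"):
          """Cardan angles (deg) of the shoulders expressed in the hips (lower trunk) frame."""
          relative = Rotation.from_matrix(r_hips).inv() * Rotation.from_matrix(r_shoulders)
          return relative.as_euler(order, degrees=True)

      if __name__ == "__main__":
          hips = Rotation.from_euler("ZYX", [5, 0, 10], degrees=True).as_matrix()
          shoulders = Rotation.from_euler("ZYX", [5, 0, 55], degrees=True).as_matrix()
          print(np.round(x_factor_angles(hips, shoulders), 1))   # ~[0, 0, 45] degrees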

  10. A formal protocol test procedure for the Survivable Adaptable Fiber Optic Embedded Network (SAFENET)

    NASA Astrophysics Data System (ADS)

    High, Wayne

    1993-03-01

    This thesis focuses upon a new method for verifying the correct operation of a complex, high-speed fiber optic communication network. These networks are of growing importance to the military because of their increased connectivity, survivability, and reconfigurability. With the introduction of and increased dependence on sophisticated software and protocols, it is essential that their operation be correct. Because of the speed and complexity of the fiber optic networks being designed today, they are becoming increasingly difficult to test. Previously, testing was accomplished by applying conformance test methods which had little connection with an implementation's specification. The major goal of conformance testing is to ensure that the implementation of a profile is consistent with its specification. Formal specification is needed to ensure that the implementation performs its intended operations while exhibiting desirable behaviors. The new conformance test method presented is based upon the System of Communicating Machine model, which uses a formal protocol specification to generate a test sequence. The major contribution of this thesis is the application of the System of Communicating Machine model to formal profile specifications of the Survivable Adaptable Fiber Optic Embedded Network (SAFENET) standard, which results in the derivation of test sequences for a SAFENET profile. The results of applying this new method to SAFENET's OSI and Lightweight profiles are presented.

  11. High-speed and high-ratio referential genome compression.

    PubMed

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand of high compression ratio due to the intrinsic challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC takes <30 min to compress about 21 gigabytes of each set of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust to deal with different reference genomes. In contrast, the competing methods' performance varies widely on different reference genomes. More experiments on 100 human genomes from the 1000 Genome Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use. It can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
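
    Two ingredients named in the abstract, 2-bit base packing and greedy matching against a k-mer hash table, are illustrated below in strongly simplified form; this is not the HiRGC implementation, and the sequences are toy examples.

      # Simplified illustration of two ingredients named in the abstract, 2-bit base
      # encoding and greedy matching against a k-mer hash table; this is not the HiRGC
      # implementation itself.

      CODE = {"A": 0, "C": 1, "G": 2, "T": 3}

      def pack_2bit(seq):
          """Pack a DNA string into an integer, two bits per base."""
          value = 0
          for base in seq:
              value = (value << 2) | CODE[base]
          return value

      def build_index(reference, k=4):
          """Map every k-mer of the reference to its positions."""
          index = {}
          for i in range(len(reference) - k + 1):
              index.setdefault(reference[i:i + k], []).append(i)
          return index

      def greedy_match(target, reference, index, k=4):
          """At each target position, emit (ref_pos, length) of the longest greedy match."""
          matches, i = [], 0
          while i <= len(target) - k:
              seed = target[i:i + k]
              best = 0, -1
              for pos in index.get(seed, []):
                  length = k
                  while (i + length < len(target) and pos + length < len(reference)
                         and target[i + length] == reference[pos + length]):
                      length += 1
                  best = max(best, (length, pos))
              if best[1] >= 0:
                  matches.append((best[1], best[0]))
                  i += best[0]
              else:
                  i += 1                                         # no seed hit: skip one base
          return matches

      if __name__ == "__main__":
          ref = "ACGTACGTGGTTACGA"
          tgt = "ACGTGGTTACG"
          print(bin(pack_2bit("ACGT")), greedy_match(tgt, ref, build_index(ref)))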

  12. UFO: a web server for ultra-fast functional profiling of whole genome protein sequences.

    PubMed

    Meinicke, Peter

    2009-09-02

    Functional profiling is a key technique to characterize and compare the functional potential of entire genomes. The estimation of profiles according to an assignment of sequences to functional categories is a computationally expensive task because it requires the comparison of all protein sequences from a genome with a usually large database of annotated sequences or sequence families. Based on machine learning techniques for Pfam domain detection, the UFO web server for ultra-fast functional profiling allows researchers to process large protein sequence collections instantaneously. Besides the frequencies of Pfam and GO categories, the user also obtains the sequence specific assignments to Pfam domain families. In addition, a comparison with existing genomes provides dissimilarity scores with respect to 821 reference proteomes. Considering the underlying UFO domain detection, the results on 206 test genomes indicate a high sensitivity of the approach. In comparison with current state-of-the-art HMMs, the runtime measurements show a considerable speed up in the range of four orders of magnitude. For an average size prokaryotic genome, the computation of a functional profile together with its comparison typically requires about 10 seconds of processing time. For the first time the UFO web server makes it possible to get a quick overview on the functional inventory of newly sequenced organisms. The genome scale comparison with a large number of precomputed profiles allows a first guess about functionally related organisms. The service is freely available and does not require user registration or specification of a valid email address.

  13. Orientation dependent modulation of apparent speed: a model based on the dynamics of feed-forward and horizontal connectivity in V1 cortex.

    PubMed

    Seriès, Peggy; Georges, Sébastien; Lorenceau, Jean; Frégnac, Yves

    2002-11-01

    Psychophysical and physiological studies suggest that long-range horizontal connections in primary visual cortex participate in spatial integration and contour processing. Until recently, little attention has been paid to their intrinsic temporal properties. Recent physiological studies indicate, however, that the propagation of activity through long-range horizontal connections is slow, with time scales comparable to the perceptual scales involved in motion processing. Using a simple model of V1 connectivity, we explore some of the implications of this slow dynamics. The model predicts that V1 responses to a stimulus in the receptive field can be modulated by a previous stimulation, a few milliseconds to a few tens of milliseconds before, in the surround. We analyze this phenomenon and its possible consequences on speed perception, as a function of the spatio-temporal configuration of the visual inputs (relative orientation, spatial separation, temporal interval between the elements, sequence speed). We show that the dynamical interactions between feed-forward and horizontal signals in V1 can explain why the perceived speed of fast apparent motion sequences strongly depends on the orientation of their elements relative to the motion axis and can account for the range of speed for which this perceptual effect occurs (Georges, Seriès, Frégnac and Lorenceau, this issue).

  14. Developmental Trajectory of Motor Deficits in Preschool Children with ADHD

    PubMed Central

    Sweeney, Kristie L; Ryan, Matthew; Schneider, Heather; Ferenc, Lisa; Denckla, Martha Bridge; Mahone, E. Mark

    2018-01-01

    Motor deficits persisting into childhood (>7 years) are associated with increased executive and cognitive dysfunction, likely due to parallel neural circuitry. This study assessed the longitudinal trajectory of motor deficits in preschool children with ADHD, compared to typically developing (TD) children, in order to identify individuals at risk for anomalous neurological development. Participants included 47 children (21 ADHD, 26 TD) ages 4–7 years who participated in three visits (V1, V2, V3), each one year apart (V1=48–71 months, V2=60–83 months, V3=72–95 months). Motor variables assessed included speed (finger tapping and sequencing), total overflow, and axial movements from the Revised Physical and Neurological Examination for Subtle Signs (PANESS). Effects for group, visit, and group-by-visit interaction were examined. There were significant effects for group (favoring TD) for finger tapping speed and total axial movements, visit (performance improving with age for all 4 variables), and a significant group-by-visit interaction for finger tapping speed. Motor speed (repetitive finger tapping) and quality of axial movements are sensitive markers of anomalous motor development associated with ADHD in children as young as 4 years. Conversely, motor overflow and finger sequencing speed may be less sensitive in preschool, due to ongoing wide variations in attainment of these milestones. PMID:29757012

  15. Fan filters, the 3-D Radon transform, and image sequence analysis.

    PubMed

    Marzetta, T L

    1994-01-01

    This paper develops a theory for the application of fan filters to moving objects. In contrast to previous treatments of the subject based on the 3-D Fourier transform, simplicity and insight are achieved by using the 3-D Radon transform. With this point of view, the Radon transform decomposes the image sequence into a set of plane waves that are parameterized by a two-component slowness vector. Fan filtering is equivalent to a multiplication in the Radon transform domain by a slowness response function, followed by an inverse Radon transform. The plane wave representation of a moving object involves only a restricted set of slownesses such that the inner product of the plane wave slowness vector and the moving object velocity vector is equal to one. All of the complexity in the application of fan filters to image sequences results from the velocity-slowness mapping not being one-to-one; therefore, the filter response cannot be independently specified at all velocities. A key contribution of this paper is to elucidate both the power and the limitations of fan filtering in this new application. A potential application of 3-D fan filters is in the detection of moving targets in clutter and noise. For example, an appropriately designed fan filter can reject perfectly all moving objects whose speed, irrespective of heading, is less than a specified cut-off speed, with only minor attenuation of significantly faster objects. A simple geometric construction determines the response of the filter for speeds greater than the cut-off speed.
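
    The slowness/velocity relation described above can be made concrete with a short numeric sketch: an object moving with velocity v is represented by plane waves whose slowness s satisfies s·v = 1, so every such s has magnitude at least 1/|v|; an ideal fan filter that rejects |s| > 1/v_cut therefore rejects all objects slower than v_cut while passing the low-slowness components of faster ones. The cut-off and velocities below are assumed values.

      import numpy as np

      # Numeric sketch of the slowness/velocity relation described in the abstract:
      # a plane wave of slowness vector s represents motion at velocity v with s . v = 1,
      # so |s| >= 1/|v| for all plane waves of that object. Rejecting every object slower
      # than v_cut, irrespective of heading, therefore means rejecting |s| > 1/v_cut.

      def fan_response(slowness, v_cut):
          """Ideal fan-filter response: 1 (pass) if |s| <= 1/v_cut, else 0 (reject)."""
          return 1.0 if np.linalg.norm(slowness) <= 1.0 / v_cut else 0.0

      if __name__ == "__main__":
          v_cut = 2.0                                            # pixels per frame, assumed
          for speed, heading_deg in [(1.0, 0), (1.5, 45), (3.0, 120), (5.0, 200)]:
              heading = np.deg2rad(heading_deg)
              velocity = speed * np.array([np.cos(heading), np.sin(heading)])
              s = velocity / np.dot(velocity, velocity)          # smallest slowness with s.v = 1
              print(speed, heading_deg, "->", "pass" if fan_response(s, v_cut) else "reject")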

  16. The Solar Wind and Geomagnetic Activity as a Function of Time Relative to Corotating Interaction Regions

    NASA Technical Reports Server (NTRS)

    McPherron, Robert L.; Weygand, James

    2006-01-01

    Corotating interaction regions during the declining phase of the solar cycle are the cause of recurrent geomagnetic storms and are responsible for the generation of high fluxes of relativistic electrons. These regions are produced by the collision of a high-speed stream of solar wind with a slow-speed stream. The interface between the two streams is easily identified with plasma and field data from a solar wind monitor upstream of the Earth. The properties of the solar wind and interplanetary magnetic field are systematic functions of time relative to the stream interface. Consequently the coupling of the solar wind to the Earth's magnetosphere produces a predictable sequence of events. Because the streams persist for many solar rotations it should be possible to use terrestrial observations of past magnetic activity to predict future activity. Also the high-speed streams are produced by large unipolar magnetic regions on the Sun so that empirical models can be used to predict the velocity profile of a stream expected at the Earth. In either case knowledge of the statistical properties of the solar wind and geomagnetic activity as a function of time relative to a stream interface provides the basis for medium term forecasting of geomagnetic activity. In this report we use lists of stream interfaces identified in solar wind data during the years 1995 and 2004 to develop probability distribution functions for a variety of different variables as a function of time relative to the interface. The results are presented as temporal profiles of the quartiles of the cumulative probability distributions of these variables. We demonstrate that the storms produced by these interaction regions are generally very weak. Despite this the fluxes of relativistic electrons produced during those storms are the highest seen in the solar cycle. We attribute this to the specific sequence of events produced by the organization of the solar wind relative to the stream interfaces. We also show that there are large quantitative differences in various parameters between the two cycles.
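
    The statistical construction described, quartiles of a solar-wind variable as a function of time relative to the stream interface, amounts to a superposed-epoch analysis and can be sketched with NumPy; the event matrix below is random toy data, not the 1995 and 2004 interface lists.

      import numpy as np

      # Sketch of the superposed-epoch construction described in the abstract: stack a
      # solar-wind variable from many stream-interface events on a common epoch axis
      # and report the quartiles at each epoch time. The event matrix below is random
      # toy data (rows = events, columns = hours relative to the interface).

      rng = np.random.default_rng(1)
      epoch_hours = np.arange(-48, 49)                          # time relative to interface
      events = 400 + 100 * np.tanh(epoch_hours / 24) + rng.normal(0, 40, (30, epoch_hours.size))

      q25, q50, q75 = np.percentile(events, [25, 50, 75], axis=0)
      for h in (-24, 0, 24):
          i = np.where(epoch_hours == h)[0][0]
          print(f"t = {h:+d} h: Q1 {q25[i]:.0f}, median {q50[i]:.0f}, Q3 {q75[i]:.0f} km/s")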

  17. Multicore-based 3D-DWT video encoder

    NASA Astrophysics Data System (ADS)

    Galiano, Vicente; López-Granado, Otoniel; Malumbres, Manuel P.; Migallón, Hector

    2013-12-01

    Three-dimensional wavelet transform (3D-DWT) encoders are good candidates for applications like professional video editing, video surveillance, multi-spectral satellite imaging, etc. where a frame must be reconstructed as quickly as possible. In this paper, we present a new 3D-DWT video encoder based on a fast run-length coding engine. Furthermore, we present several multicore optimizations to speed-up the 3D-DWT computation. An exhaustive evaluation of the proposed encoder (3D-GOP-RL) has been performed, and we have compared the evaluation results with other video encoders in terms of rate/distortion (R/D), coding/decoding delay, and memory consumption. Results show that the proposed encoder obtains good R/D results for high-resolution video sequences with nearly in-place computation using only the memory needed to store a group of pictures. After applying the multicore optimization strategies over the 3D DWT, the proposed encoder is able to compress a full high-definition video sequence in real-time.
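
    As a toy stand-in for the 3D-DWT front end, the sketch below applies one separable Haar decomposition level along the time, height and width axes of a small group of pictures; the run-length coding engine and the multicore optimizations of the paper are not reproduced.

      import numpy as np

      # One-level separable Haar transform over a small group of pictures (GOP), as a
      # toy stand-in for the 3D-DWT front end described in the abstract. The run-length
      # coding engine and multicore optimizations of the paper are not reproduced here.

      def haar_1d(x, axis):
          """One Haar decomposition level along one axis (length must be even)."""
          x = np.moveaxis(x, axis, 0)
          approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
          detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
          return np.moveaxis(np.concatenate([approx, detail]), 0, axis)

      def dwt3_level(gop):
          """Apply one Haar level along time, height and width of a (t, h, w) array."""
          out = gop.astype(float)
          for axis in range(3):
              out = haar_1d(out, axis)
          return out

      if __name__ == "__main__":
          gop = np.random.default_rng(0).integers(0, 256, (8, 16, 16))   # 8 toy frames
          coeffs = dwt3_level(gop)
          print(coeffs.shape, float(np.abs(coeffs).max()))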

  18. QSRA: a quality-value guided de novo short read assembler.

    PubMed

    Bryant, Douglas W; Wong, Weng-Keen; Mockler, Todd C

    2009-02-24

    New rapid high-throughput sequencing technologies have sparked the creation of a new class of assembler. Since all high-throughput sequencing platforms incorporate errors in their output, short-read assemblers must be designed to account for this error while utilizing all available data. We have designed and implemented an assembler, Quality-value guided Short Read Assembler, created to take advantage of quality-value scores as a further method of dealing with error. Compared to previous published algorithms, our assembler shows significant improvements not only in speed but also in output quality. QSRA generally produced the highest genomic coverage, while being faster than VCAKE. QSRA is extremely competitive in its longest contig and N50/N80 contig lengths, producing results of similar quality to those of EDENA and VELVET. QSRA provides a step closer to the goal of de novo assembly of complex genomes, improving upon the original VCAKE algorithm by not only drastically reducing runtimes but also increasing the viability of the assembly algorithm through further error handling capabilities.

  19. Next-generation sequencing: the future of molecular genetics in poultry production and food safety.

    PubMed

    Diaz-Sanchez, S; Hanning, I; Pendleton, Sean; D'Souza, Doris

    2013-02-01

    The era of molecular biology and automation of the Sanger chain-terminator sequencing method has led to discovery and advances in diagnostics and biotechnology. The Sanger methodology dominated research for over 2 decades, leading to significant accomplishments and technological improvements in DNA sequencing. Next-generation high-throughput sequencing (HT-NGS) technologies were developed subsequently to overcome the limitations of this first generation technology, offering higher speed, less labor, and lower cost. Various platforms developed include sequencing-by-synthesis 454 Life Sciences, Illumina (Solexa) sequencing, SOLiD sequencing (among others), and the Ion Torrent semiconductor sequencing technologies that use different detection principles. As technology advances, progress made toward third generation sequencing technologies is being reported, including Nanopore Sequencing and real-time monitoring of PCR activity through fluorescent resonant energy transfer. The advantages of these technologies include scalability and simplicity, with increasing DNA polymerase performance and yields, lower error rates, and greater economic feasibility, with the eventual goal of obtaining real-time results. These technologies can be directly applied to improve poultry production and enhance food safety. For example, sequence-based (determination of the gut microbial community, genes for metabolic pathways, or presence of plasmids) and function-based (screening for function such as antibiotic resistance, or vitamin production) metagenomic analysis can be carried out. Gut microbial flora/communities of poultry can be sequenced to determine the changes that affect health and disease, along with the efficacy of methods to control pathogenic growth. Thus, the purpose of this review is to provide an overview of the principles of these current technologies and their potential application to improve poultry production and food safety as well as public health.

  20. Three years of ULTRASPEC at the Thai 2.4-m telescope: Capabilities and scientific highlights

    NASA Astrophysics Data System (ADS)

    Yadav, Ram Kesh; Richichi, Andrea; Irawati, Puji; Dhillon, Vikram Singh; Marsh, Thomas R.; Soonthornthum, Boonrucksar

    2018-04-01

    High temporal resolution observations enable the study of rapid phenomena such as the flux variations in binary system objects, e.g. cataclysmic variables, compact binary systems, the flux variations in young star clusters, stellar occultations and more. The 2.4-m Thai National Telescope (TNT) is ideally suited for this niche research, being the largest facility in Southeast Asia and being equipped with ULTRASPEC, a high-speed imager based on a low-noise frame transfer electron-multiplying CCD. In the sub-window mode, ULTRASPEC can record uninterrupted sequences with frame rates as fast as few milliseconds. We present some of the key results obtained in the area of high time resolution with ULTRASPEC. We also present the results of a recent worldwide campaign to observe the current series of lunar occultations of Aldebaran (α Tauri) carried out in close collaboration with the Devasthal facilities, the out-of-eclipse variations on the post common-envelope system J1021+1744, and pre-main-sequence variables in young open cluster Stock 8.

  1. Efficient alignment-free DNA barcode analytics.

    PubMed

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-11-10

    In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectra) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens the possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. The new alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, the proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only a few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding.
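
    The spectrum representation at the heart of such alignment-free methods can be sketched in a few lines: map each barcode sequence to its k-mer counts and compare the resulting vectors, here with cosine similarity; the sequences are short invented examples rather than real barcodes.

      from collections import Counter
      from math import sqrt

      # Sketch of the spectrum (k-mer count) representation used by alignment-free
      # barcode methods: map each sequence to fixed-length k-mer counts and compare
      # with cosine similarity. Sequences below are short invented examples.

      def spectrum(seq, k=3):
          return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

      def cosine(a, b):
          dot = sum(a[key] * b[key] for key in a.keys() & b.keys())
          norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
          return dot / norm if norm else 0.0

      if __name__ == "__main__":
          s1 = "ACGTACGTTGCA"
          s2 = "ACGTACGATGCA"
          s3 = "TTTTGGGGCCCC"
          print(round(cosine(spectrum(s1), spectrum(s2)), 3),
                round(cosine(spectrum(s1), spectrum(s3)), 3))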

  2. elPrep: High-Performance Preparation of Sequence Alignment/Map Files for Variant Calling

    PubMed Central

    Decap, Dries; Fostier, Jan; Reumers, Joke

    2015-01-01

    elPrep is a high-performance tool for preparing sequence alignment/map files for variant calling in sequencing pipelines. It can be used as a replacement for SAMtools and Picard for preparation steps such as filtering, sorting, marking duplicates, reordering contigs, and so on, while producing identical results. What sets elPrep apart is its software architecture that allows executing preparation pipelines by making only a single pass through the data, no matter how many preparation steps are used in the pipeline. elPrep is designed as a multithreaded application that runs entirely in memory, avoids repeated file I/O, and merges the computation of several preparation steps to significantly speed up the execution time. For example, for a preparation pipeline of five steps on a whole-exome BAM file (NA12878), we reduce the execution time from about 1:40 hours, when using a combination of SAMtools and Picard, to about 15 minutes when using elPrep, while utilising the same server resources, here 48 threads and 23GB of RAM. For the same pipeline on whole-genome data (NA12878), elPrep reduces the runtime from 24 hours to less than 5 hours. As a typical clinical study may contain sequencing data for hundreds of patients, elPrep can remove several hundreds of hours of computing time, and thus substantially reduce analysis time and cost. PMID:26182406
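
    The single-pass idea can be illustrated with a toy generator pipeline in which preparation steps are composed lazily so the record stream is traversed once; the records and steps below are invented and do not reflect elPrep's architecture or real SAM/BAM semantics (note that a true sorting step cannot be streamed this way).

      # Toy illustration of the single-pass idea behind the tool: chain preparation
      # steps as generators so the record stream is traversed once, instead of writing
      # an intermediate file after every step. Records and steps are invented and do
      # not reflect real SAM/BAM semantics.

      def read_records():
          for name, mapq, pos in [("r1", 50, 300), ("r2", 3, 120), ("r1", 50, 300), ("r3", 60, 10)]:
              yield {"name": name, "mapq": mapq, "pos": pos}

      def filter_low_quality(records, min_mapq=10):
          return (r for r in records if r["mapq"] >= min_mapq)

      def mark_duplicates(records):
          seen = set()
          for r in records:
              key = (r["name"], r["pos"])
              yield {**r, "duplicate": key in seen}
              seen.add(key)

      def pipeline(*steps):
          stream = read_records()
          for step in steps:
              stream = step(stream)                              # compose lazily: one pass
          return stream

      if __name__ == "__main__":
          for record in pipeline(filter_low_quality, mark_duplicates):
              print(record)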

  3. A biological compression model and its applications.

    PubMed

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  4. The role of RT carry-over for congruence sequence effects in masked priming.

    PubMed

    Huber-Huber, Christoph; Ansorge, Ulrich

    2017-05-01

    The present study disentangles 2 sources of the congruence sequence effect with masked primes: congruence and response time of the previous trial (reaction time [RT] carry-over). Using arrows as primes and targets and a metacontrast masking procedure we found congruence as well as congruence sequence effects. In addition, congruence sequence effects decreased when RT carry-over was accounted for in a mixed model analysis, suggesting that RT carry-over contributes to congruence sequence effects in masked priming. Crucially, effects of previous trial congruence were not cancelled out completely, indicating that RT carry-over and previous trial congruence are 2 sources feeding into the congruence sequence effect. A secondary task requiring response speed judgments demonstrated general awareness of response speed (Experiment 1), but removing this secondary task (Experiment 2) showed that RT carry-over effects were also present in single-task conditions. During (dual-task) prime-awareness test parts of both experiments, however, RT carry-over failed to modulate congruence effects, suggesting that some task sets of the participants can prevent the effect. The basic RT carry-over effects are consistent with the conflict adaptation account, with the adaptation to the statistics of the environment (ASE) model, and possibly with the temporal learning explanation. Additionally considering the task-dependence of RT carry-over, the results are most compatible with the conflict adaptation account. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  5. LightAssembler: fast and memory-efficient assembly algorithm for high-throughput sequencing reads.

    PubMed

    El-Metwally, Sara; Zakaria, Magdi; Hamza, Taher

    2016-11-01

    The deluge of current sequenced data has exceeded Moore's Law, more than doubling every 2 years since the next-generation sequencing (NGS) technologies were invented. Accordingly, we will be able to generate more and more data with high speed at fixed cost, but lack the computational resources to store, process and analyze it. With error-prone high-throughput NGS reads and genomic repeats, the assembly graph contains a massive number of redundant nodes and branching edges. Most assembly pipelines require this large graph to reside in memory to start their workflows, which is intractable for mammalian genomes. Resource-efficient genome assemblers combine both the power of advanced computing techniques and innovative data structures to encode the assembly graph efficiently in computer memory. LightAssembler is a lightweight assembly algorithm designed to be executed on a desktop machine. It uses a pair of cache-oblivious Bloom filters, one holding a uniform sample of g-spaced sequenced k-mers and the other holding k-mers classified as likely correct, using a simple statistical test. LightAssembler contains a light implementation of the graph traversal and simplification modules that achieves comparable assembly accuracy and contiguity to other competing tools. Our method reduces the memory usage substantially compared to the resource-efficient assemblers using benchmark datasets from GAGE and Assemblathon projects. While LightAssembler can be considered as a gap-based sequence assembler, different gap sizes result in an almost constant assembly size and genome coverage. https://github.com/SaraEl-Metwally/LightAssembler CONTACT: sarah_almetwally4@mans.edu.eg Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
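
    The two-Bloom-filter idea can be sketched as follows: one filter holds a spaced sample of k-mers, the other holds k-mers promoted as likely correct. The sketch below is a simplified illustration and not LightAssembler's code; it promotes a k-mer once it is seen a second time rather than applying the paper's statistical test, and the Bloom filter parameters are arbitrary.

```python
# Minimal sketch of the two-Bloom-filter idea (not LightAssembler's code):
# filter `sampled` holds a spaced sample of k-mers; filter `trusted` holds
# k-mers that pass a crude "likely correct" test (seen at least twice here,
# instead of the paper's statistical test).
import hashlib

class BloomFilter:
    def __init__(self, n_bits=1 << 20, n_hashes=3):
        self.n_bits, self.n_hashes = n_bits, n_hashes
        self.bits = bytearray(n_bits // 8)

    def _positions(self, item):
        for i in range(self.n_hashes):
            h = hashlib.blake2b(f"{i}:{item}".encode(), digest_size=8).digest()
            yield int.from_bytes(h, "little") % self.n_bits

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def sample_kmers(reads, k=21, gap=3):
    sampled, trusted = BloomFilter(), BloomFilter()
    for read in reads:
        for i in range(0, len(read) - k + 1, gap):   # g-spaced sampling
            kmer = read[i:i + k]
            if kmer in sampled:
                trusted.add(kmer)    # seen again -> treat as likely correct
            else:
                sampled.add(kmer)
    return sampled, trusted

sampled, trusted = sample_kmers(["ACGTACGTACGTACGTACGTACGTACGT" * 2])
print(("ACGT" * 6)[:21] in trusted)   # True: this k-mer recurs at spaced positions
```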

  6. Magnetopause surface fluctuations observed by Voyager 1

    NASA Technical Reports Server (NTRS)

    Lepping, R. P.; Burlaga, L. F.

    1979-01-01

    Moving out of the dawnside of the earth's magnetosphere, Voyager 1 crossed the magnetopause apparently seven times, despite the high spacecraft speed of 11 km/sec. Normals to the magnetopause and their associated error cones were estimated for each of the crossings using a minimum variance analysis of the internal magnetic field. The oscillating nature of the ecliptic plane component of these normals indicates that most of the multiple crossings were due to a wave-like surface disturbance moving tailward along the magnetopause. The wave, which was aperiodic, was modeled as a sequence of sine waves. The amplitude, wavelength, and speed were determined for two pairs of intervals from the measured slopes, occurrence times, and relative positions of six magnetopause crossings. The magnetopause thickness was estimated to lie in the range 300 to 700 km with higher values possible. The estimated amplitude of these waves was obviously small compared to their wavelengths.
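
    Minimum variance analysis estimates the boundary normal as the eigenvector of the magnetic-field covariance matrix associated with the smallest eigenvalue. The sketch below shows that computation on synthetic data; the field values are invented purely for illustration.

```python
# Sketch of minimum variance analysis (MVA): the magnetopause normal is
# estimated as the eigenvector of the magnetic-field covariance matrix
# associated with the smallest eigenvalue.
import numpy as np

def minimum_variance_normal(B):
    """B: (N, 3) array of magnetic field samples across the crossing."""
    M = np.cov(B, rowvar=False)            # 3x3 covariance of Bx, By, Bz
    eigvals, eigvecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                 # smallest-variance direction
    return normal / np.linalg.norm(normal)

# Synthetic field: large variation in the x-y plane, little along z,
# so the recovered normal should be close to +/- z.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
B = np.column_stack([np.sin(6 * t), np.cos(6 * t), 0.05 * rng.standard_normal(200)])
print(minimum_variance_normal(B))
```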

  7. High speed sampler and demultiplexer

    DOEpatents

    McEwan, Thomas E.

    1995-01-01

    A high speed sampling demultiplexer based on a plurality of sampler banks, each bank comprising a sample transmission line for transmitting an input signal, a strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates at respective positions along the sample transmission line for sampling the input signal in response to the strobe signal. Strobe control circuitry is coupled to the plurality of banks, and supplies a sequence of bank strobe signals to the strobe transmission lines in each of the plurality of banks, and includes circuits for controlling the timing of the bank strobe signals among the banks of samplers. Input circuitry is included for supplying the input signal to be sampled to the plurality of sample transmission lines in the respective banks. The strobe control circuitry can repetitively strobe the plurality of banks of samplers such that the banks of samplers are cycled to create a long sample length. Second tier demultiplexing circuitry is coupled to each of the samplers in the plurality of banks. The second tier demultiplexing circuitry senses the sample taken by the corresponding sampler each time the bank in which the sampler is found is strobed. A plurality of such samples can be stored by the second tier demultiplexing circuitry for later processing. Repetitive sampling with the high speed transient sampler induces an effect known as "strobe kickout". The sample transmission lines include structures which reduce strobe kickout to acceptable levels, generally 60 dB below the signal, by absorbing the kickout pulses before the next sampling repetition.

  8. DNA Sequences over the Internet Provide Greater Speed and Accuracy for Health Sciences Reference Librarians.

    ERIC Educational Resources Information Center

    Harzbecker, Joseph, Jr.

    1993-01-01

    Describes the National Institutes of Health's GenBank DNA sequence database and how it can be accessed through the Internet. A real reference question, which was answered successfully using the database, is reproduced to illustrate and elaborate on the potential of the Internet for information retrieval. (10 references) (KRN)

  9. High-accuracy and robust face recognition system based on optical parallel correlator using a temporal image sequence

    NASA Astrophysics Data System (ADS)

    Watanabe, Eriko; Ishikawa, Mami; Ohta, Maiko; Kodate, Kashiko

    2005-09-01

    Face recognition is used in a wide range of security systems, such as monitoring credit card use, searching for individuals with street cameras via the Internet and maintaining immigration control. There are still many technical subjects under study. For instance, the number of images that can be stored is limited under the current system, and the rate of recognition must be improved to account for photo shots taken at different angles under various conditions. We implemented a fully automatic Fast Face Recognition Optical Correlator (FARCO) system by using a 1000 frame/s optical parallel correlator designed and assembled by us. Operational speed for the 1: N (i.e. matching a pair of images among N, where N refers to the number of images in the database) identification experiment (4000 face images) amounts to less than 1.5 seconds, including the pre/post processing. From trial 1: N identification experiments using FARCO, we acquired low error rates of 2.6% False Reject Rate and 1.3% False Accept Rate. By making the most of the high-speed data-processing capability of this system, much more robustness can be achieved for various recognition conditions when large-category data are registered for a single person. We propose a face recognition algorithm for the FARCO while employing a temporal image sequence of moving images. Applying this algorithm to natural postures, we obtained a recognition rate twice as high as that of our conventional system. The system has high potential for future use in a variety of purposes such as search for criminal suspects by use of street and airport video cameras, registration of babies at hospitals or handling of an immeasurable number of images in a database.

  10. AAO2: a general purpose CCD controller for the AAT

    NASA Astrophysics Data System (ADS)

    Waller, Lew; Barton, John; Mayfield, Don; Griesbach, Jason

    2004-09-01

    The Anglo-Australian Observatory has developed a 2nd generation optical CCD controller to replace an earlier controller used now for almost twenty years. The new AAO2 controller builds on the considerable experience gained with the first controller, the new technologies now available and the techniques developed and successfully implemented in AAO's IRIS2 detector controller. The AAO2 controller has been designed to operate a wide variety of detectors and to achieve as near to detector limited performance as possible. It is capable of reading out CCDs with one, two or four output amplifiers, each output having its own video processor and high speed 16-bit ADC. The video processor is a correlated double sampler that may be switched between low noise dual slope integration or high speed clamp and sample modes. Programmable features include low noise DAC biases, horizontal clocks with DAC controllable levels and slopes and vertical clocks with DAC controllable arbitrary waveshapes. The controller uses two DSPs; one for overall control and the other for clock signal generation, which is highly programmable, with downloadable sequences of waveform patterns. The controller incorporates a precision detector temperature controller and provides accurate exposure time control. Telemetry is provided of all DAC generated voltages, many derived voltages, power supply voltages, detector temperature and detector identification. A high speed, full duplex fibre optic interface connects the controller to a host computer. The modular design uses six to ten circuit boards, plugged in to common backplanes. Two backplanes separate noisy digital signals from low noise analog signals.

  11. Fabrication of wear-resistant silicon microprobe tips for high-speed surface roughness scanning devices

    NASA Astrophysics Data System (ADS)

    Wasisto, Hutomo Suryo; Yu, Feng; Doering, Lutz; Völlmeke, Stefan; Brand, Uwe; Bakin, Andrey; Waag, Andreas; Peiner, Erwin

    2015-05-01

    Silicon microprobe tips are fabricated and integrated with piezoresistive cantilever sensors for high-speed surface roughness scanning systems. The fabrication steps of the high-aspect-ratio silicon microprobe tips were started with photolithography and wet etching in potassium hydroxide (KOH), resulting in crystal-dependent micropyramids. Subsequently, thin conformal wear-resistant layer coating of aluminum oxide (Al2O3) was demonstrated on the backside of the piezoresistive cantilever free end using the atomic layer deposition (ALD) method in a binary reaction sequence with a low thermal process and precursors of trimethyl aluminum and water. The deposited Al2O3 layer had a thickness of 14 nm. The captured atomic force microscopy (AFM) image exhibits a root mean square deviation of 0.65 nm confirming the deposited Al2O3 surface quality. Furthermore, vacuum-evaporated 30-nm/200-nm-thick Au/Cr layers were patterned by lift-off and served as an etch mask for Al2O3 wet etching and in ICP cryogenic dry etching. By using SF6/O2 plasma during inductively coupled plasma (ICP) cryogenic dry etching, micropillar tips were obtained. From the preliminary friction and wear data, the developed silicon cantilever sensor has been successfully used in 100 fast measurements of a 5-mm-long standard artifact surface with a speed of 15 mm/s and forces of 60-100 μN. Moreover, the results yielded by the fabricated silicon cantilever sensor are in very good agreement with those of a calibrated profilometer. These tactile sensors are targeted for use in high-aspect-ratio microform metrology.

  12. High-speed single-pixel digital holography

    NASA Astrophysics Data System (ADS)

    González, Humberto; Martínez-León, Lluís.; Soldevila, Fernando; Araiza-Esquivel, Ma.; Tajahuerce, Enrique; Lancis, Jesús

    2017-06-01

    The complete phase and amplitude information of biological specimens can be easily determined by phase-shifting digital holography. Spatial light modulators (SLMs) based on liquid crystal technology, with a frame-rate around 60 Hz, have been employed in digital holography. In contrast, digital micro-mirror devices (DMDs) can reach frame rates up to 22 kHz. A method proposed by Lee to design computer generated holograms (CGHs) permits the use of such binary amplitude modulators as phase-modulation devices. Single-pixel imaging techniques record images by sampling the object with a sequence of micro-structured light patterns and using a simple photodetector. Our group has reported some approaches combining single-pixel imaging and phase-shifting digital holography. In this communication, we review these techniques and present the possibility of a high-speed single-pixel phase-shifting digital holography system with phase-encoded illumination. This system is based on a Mach-Zehnder interferometer, with a DMD acting as the modulator for projecting the sampling patterns on the object and also being used for phase-shifting. The proposed sampling functions are phase-encoded Hadamard patterns generated through a Lee hologram approach. The method allows the recording of the complex amplitude distribution of an object at high speed on account of the high frame rates of the DMD. Reconstruction may take just a few seconds. Besides, the optical setup is envisaged as a true adaptive system, which is able to measure the aberration induced by the optical system in the absence of a sample object, and then to compensate the wavefront in the phase-modulation stage.
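
    The single-pixel sampling step can be illustrated with intensity-only Hadamard patterns: one detector value is simulated per pattern and the image is recovered with the inverse Hadamard transform. The sketch below shows only that sampling and reconstruction idea, not the phase-encoded, phase-shifting holographic system described above.

```python
# Sketch of the single-pixel sampling idea with Hadamard patterns (intensity
# only; the paper additionally phase-encodes the patterns for holography).
import numpy as np
from scipy.linalg import hadamard

n = 16                                   # image is n x n, with n*n a power of two
H = hadamard(n * n)                      # rows = sampling patterns (+1/-1)
obj = np.zeros((n, n)); obj[4:12, 6:10] = 1.0
x = obj.ravel()

y = H @ x                                # one photodetector value per pattern
x_rec = (H.T @ y) / (n * n)              # Hadamard matrices satisfy H^T H = N*I
print(np.allclose(x_rec.reshape(n, n), obj))   # True
```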

  13. Adaptive convergence nonuniformity correction algorithm.

    PubMed

    Qian, Weixian; Chen, Qian; Bai, Junqi; Gu, Guohua

    2011-01-01

    Nowadays, convergence and ghosting artifacts are common problems in scene-based nonuniformity correction (NUC) algorithms. In this study, we introduce the idea of space frequency to scene-based NUC. We then present a convergence speed factor, which adaptively changes the convergence speed in response to changes in the scene dynamic range. In effect, the role of the convergence speed factor is to decrease the standard deviation of the statistical data. The space relativity characteristic of the nonuniformity was summarized from a large amount of experimental statistical data and was then used to correct the convergence speed factor, making it more stable. Finally, real and simulated infrared image sequences were used to demonstrate the positive effect of our algorithm.

  14. Design study of a HEAO-C spread spectrum transponder telemetry system for use with the TDRSS subnet

    NASA Technical Reports Server (NTRS)

    Weathers, G.

    1975-01-01

    The results of a design study of a spread spectrum transponder for use on the HEAO-C satellite were given. The transponder performs the functions of code turn-around for ground range and range-rate determination, ground command receiver, and telemetry data transmitter. The spacecraft transponder and associated communication system components will allow the HEAO-C satellite to utilize the Tracking and Data Relay Satellite System (TDRSS) subnet of the post 1978 STDN. The following areas were discussed in the report: TDRSS Subnet Description, TDRSS-HEAO-C System Configuration, Gold Code Generator, Convolutional Encoder Design and Decoder Algorithm, High Speed Sequence Generators, Statistical Evaluation of Candidate Code Sequences using Amplitude and Phase Moments, Code and Carrier Phase Lock Loops, Total Spread Spectrum Transponder System, and Reference Literature Search.

  15. Line scanning system for direct digital chemiluminescence imaging of DNA sequencing blots

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karger, A.E.; Weiss, R.; Gesteland, R.F.

    A cryogenically cooled charge-coupled device (CCD) camera equipped with an area CCD array is used in a line scanning system for low-light-level imaging of chemiluminescent DNA sequencing blots. Operating the CCD camera in time-delayed integration (TDI) mode results in continuous data acquisition independent of the length of the CCD array. Scanning is possible with a resolution of 1.4 line pairs/mm at the 50% level of the modulation transfer function. High-sensitivity, low-light-level scanning of chemiluminescent direct-transfer electrophoresis (DTE) DNA sequencing blots is shown. The detection of DNA fragments on the blot involves DNA-DNA hybridization with oligonucleotide-alkaline phosphatase conjugate and 1,2-dioxetane-based chemiluminescence. The width of the scan allows the recording of up to four sequencing reactions (16 lanes) on one scan. The scan speed of 52 cm/h used for the sequencing blots corresponds to a data acquisition rate of 384 pixels/s. The chemiluminescence detection limit on the scanned images is 3.9 × 10⁻¹⁸ mol of plasmid DNA. A conditional median filter is described to remove spikes caused by cosmic ray events from the CCD images.
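
    A conditional median filter of the kind mentioned above can be sketched as replacing a pixel by its local median only when it deviates from that median by more than a robust threshold, so genuine signal is left untouched. The window size and threshold below are illustrative choices, not the values used in the paper.

```python
# Sketch of a conditional median filter: pixels are replaced by the local
# median only where they deviate strongly from it (cosmic-ray spikes),
# leaving the rest of the image untouched. Threshold choice is illustrative.
import numpy as np
from scipy.ndimage import median_filter

def conditional_median(image, size=3, k_sigma=5.0):
    med = median_filter(image, size=size)
    resid = image - med
    sigma = 1.4826 * np.median(np.abs(resid))     # robust noise estimate (MAD)
    spikes = np.abs(resid) > k_sigma * sigma
    out = image.copy()
    out[spikes] = med[spikes]
    return out

img = np.random.default_rng(1).normal(100.0, 2.0, (64, 64))
img[10, 20] += 500.0                              # simulated cosmic-ray hit
clean = conditional_median(img)
print(img[10, 20], clean[10, 20])                 # spike removed, background kept
```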

  16. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation

    PubMed Central

    2011-01-01

    Background The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. Results A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Conclusions Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance. PMID:21631914

  17. Faster Smith-Waterman database searches with inter-sequence SIMD parallelisation.

    PubMed

    Rognes, Torbjørn

    2011-06-01

    The Smith-Waterman algorithm for local sequence alignment is more sensitive than heuristic methods for database searching, but also more time-consuming. The fastest approach to parallelisation with SIMD technology has previously been described by Farrar in 2007. The aim of this study was to explore whether further speed could be gained by other approaches to parallelisation. A faster approach and implementation is described and benchmarked. In the new tool SWIPE, residues from sixteen different database sequences are compared in parallel to one query residue. Using a 375 residue query sequence a speed of 106 billion cell updates per second (GCUPS) was achieved on a dual Intel Xeon X5650 six-core processor system, which is over six times more rapid than software based on Farrar's 'striped' approach. SWIPE was about 2.5 times faster when the programs used only a single thread. For shorter queries, the increase in speed was larger. SWIPE was about twice as fast as BLAST when using the BLOSUM50 score matrix, while BLAST was about twice as fast as SWIPE for the BLOSUM62 matrix. The software is designed for 64 bit Linux on processors with SSSE3. Source code is available from http://dna.uio.no/swipe/ under the GNU Affero General Public License. Efficient parallelisation using SIMD on standard hardware makes it possible to run Smith-Waterman database searches more than six times faster than before. The approach described here could significantly widen the potential application of Smith-Waterman searches. Other applications that require optimal local alignment scores could also benefit from improved performance.
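
    The inter-sequence idea can be illustrated by applying the same Smith-Waterman cell update across a whole batch of equal-length database sequences at once; SWIPE does this in SIMD registers over sixteen sequences, while the toy sketch below uses a numpy axis and a linear gap penalty purely for illustration.

```python
# Toy illustration of inter-sequence parallelisation: the same Smith-Waterman
# cell update is applied across a batch of equal-length database sequences at
# once (SWIPE does this in SIMD registers over sixteen sequences; here a numpy
# axis plays that role). Linear gap penalty and match/mismatch scores only.
import numpy as np

def sw_batch(query, db, match=2, mismatch=-1, gap=2):
    db_arr = np.array([list(s) for s in db])     # (N, L) array of single characters
    N, L = db_arr.shape
    m = len(query)
    H = np.zeros((N, m + 1, L + 1))
    best = np.zeros(N)
    for i in range(1, m + 1):
        for j in range(1, L + 1):
            s = np.where(db_arr[:, j - 1] == query[i - 1], match, mismatch)
            diag = H[:, i - 1, j - 1] + s
            up = H[:, i - 1, j] - gap
            left = H[:, i, j - 1] - gap
            H[:, i, j] = np.maximum(0, np.maximum(diag, np.maximum(up, left)))
            best = np.maximum(best, H[:, i, j])
    return best                                  # best local score per database sequence

print(sw_batch("ACGT", ["ACGT", "AGGT", "TTTT"]))    # [8. 5. 2.]
```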

  18. Dynamic Sensorimotor Planning during Long-Term Sequence Learning: The Role of Variability, Response Chunking and Planning Errors

    PubMed Central

    Verstynen, Timothy; Phillips, Jeff; Braun, Emily; Workman, Brett; Schunn, Christian; Schneider, Walter

    2012-01-01

    Many everyday skills are learned by binding otherwise independent actions into a unified sequence of responses across days or weeks of practice. Here we looked at how the dynamics of action planning and response binding change across such long timescales. Subjects (N = 23) were trained on a bimanual version of the serial reaction time task (32-item sequence) for two weeks (10 days total). Response times and accuracy both showed improvement with time, but appeared to be learned at different rates. Changes in response speed across training were associated with dynamic changes in response time variability, with faster learners expanding their variability during the early training days and then contracting response variability late in training. Using a novel measure of response chunking, we found that individual responses became temporally correlated across trials and asymptoted to set sizes of approximately 7 bound responses at the end of the first week of training. Finally, we used a state-space model of the response planning process to look at how predictive (i.e., response anticipation) and error-corrective (i.e., post-error slowing) processes correlated with learning rates for speed, accuracy and chunking. This analysis yielded non-monotonic association patterns between the state-space model parameters and learning rates, suggesting that different parts of the response planning process are relevant at different stages of long-term learning. These findings highlight the dynamic modulation of response speed, variability, accuracy and chunking as multiple movements become bound together into a larger set of responses during sequence learning. PMID:23056630

  19. Development and validation of an rDNA operon based primer walking strategy applicable to de novo bacterial genome finishing

    PubMed Central

    Eastman, Alexander W.; Yuan, Ze-Chun

    2015-01-01

    Advances in sequencing technology have drastically increased the depth and feasibility of bacterial genome sequencing. However, little information is available that details the specific techniques and procedures employed during genome sequencing despite the large numbers of published genomes. Shotgun approaches employed by second-generation sequencing platforms have necessitated the development of robust bioinformatics tools for in silico assembly, and complete assembly is limited by the presence of repetitive DNA sequences and multi-copy operons. Typically, re-sequencing with multiple platforms and laborious, targeted Sanger sequencing are employed to finish a draft bacterial genome. Here we describe a novel strategy based on the identification and targeted sequencing of repetitive rDNA operons to expedite bacterial genome assembly and finishing. Our strategy was validated by finishing the genome of Paenibacillus polymyxa strain CR1, a bacterium with potential in sustainable agriculture and bio-based processes. An analysis of the 38 contigs contained in the P. polymyxa strain CR1 draft genome revealed 12 repetitive rDNA operons with varied intragenic and flanking regions of variable length, invariably located at contig boundaries and within contig gaps. These highly similar but not identical rDNA operons were experimentally verified and sequenced simultaneously with multiple, specially designed primer sets. This approach also identified and corrected significant sequence rearrangements generated during the initial in silico assembly of sequencing reads. Our approach reduces the required effort associated with blind primer walking for contig assembly, increasing both the speed and feasibility of genome finishing. Our study further reinforces the notion that repetitive DNA elements are major limiting factors for genome finishing. Moreover, we provided a step-by-step workflow for genome finishing, which may guide future bacterial genome finishing projects. PMID:25653642

  20. A survey and evaluations of histogram-based statistics in alignment-free sequence comparison.

    PubMed

    Luczak, Brian B; James, Benjamin T; Girgis, Hani Z

    2017-12-06

    Since the dawn of the bioinformatics field, sequence alignment scores have been the main method for comparing sequences. However, alignment algorithms are quadratic, requiring long execution time. As alternatives, scientists have developed tens of alignment-free statistics for measuring the similarity between two sequences. We surveyed tens of alignment-free k-mer statistics. Additionally, we evaluated 33 statistics and multiplicative combinations between the statistics and/or their squares. These statistics are calculated on two k-mer histograms representing two sequences. Our evaluations using global alignment scores revealed that the majority of the statistics are sensitive and capable of finding similar sequences to a query sequence. Therefore, any of these statistics can filter out dissimilar sequences quickly. Further, we observed that multiplicative combinations of the statistics are highly correlated with the identity score. Furthermore, combinations involving sequence length difference or Earth Mover's distance, which takes the length difference into account, are always among the highest correlated paired statistics with identity scores. Similarly, paired statistics including length difference or Earth Mover's distance are among the best performers in finding the K-closest sequences. Interestingly, similar performance can be obtained using histograms of shorter words, resulting in reducing the memory requirement and increasing the speed remarkably. Moreover, we found that simple single statistics are sufficient for processing next-generation sequencing reads and for applications relying on local alignment. Finally, we measured the time requirement of each statistic. The survey and the evaluations will help scientists with identifying efficient alternatives to the costly alignment algorithm, saving thousands of computational hours. The source code of the benchmarking tool is available as Supplementary Materials. © The Author 2017. Published by Oxford University Press.
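
    The histogram-based approach can be sketched as building k-mer histograms for two sequences and comparing them with simple statistics. The Manhattan distance and cosine similarity below are illustrative examples, not the specific set of 33 statistics evaluated in the paper.

```python
# Sketch of alignment-free comparison: build k-mer histograms for two
# sequences and compare them with simple histogram statistics. The statistics
# shown (Manhattan distance, cosine similarity) are illustrative only.
from collections import Counter
import math

def kmer_histogram(seq, k=4):
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def manhattan(h1, h2):
    keys = set(h1) | set(h2)
    return sum(abs(h1[w] - h2[w]) for w in keys)

def cosine(h1, h2):
    keys = set(h1) | set(h2)
    dot = sum(h1[w] * h2[w] for w in keys)
    n1 = math.sqrt(sum(v * v for v in h1.values()))
    n2 = math.sqrt(sum(v * v for v in h2.values()))
    return dot / (n1 * n2)

a = "ACGTACGTTGCAACGT" * 10
b = "ACGTACGATGCAACGA" * 10
h1, h2 = kmer_histogram(a), kmer_histogram(b)
print(manhattan(h1, h2), round(cosine(h1, h2), 3))
```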

  1. Speed behaviour in work zone crossovers. A driving simulator study.

    PubMed

    Domenichini, Lorenzo; La Torre, Francesca; Branzi, Valentina; Nocentini, Alessandro

    2017-01-01

    Reductions in speed and, more critically, in speed variability between vehicles are considered an important factor to reduce crash risk in work zones. This study was designed to evaluate in a virtual environment the drivers' behaviour in response to nine different configurations of a motorway crossover work zone. Specifically, the speed behaviour through a typical crossover layout, designed in accordance with the Italian Ministerial Decree 10 July 2002, was compared with that of eight alternative configurations which differ in some characteristics such as the sequence of speed limits, the median opening width and the lane width. The influence of variable message signs, of channelizing devices and of perceptual treatments based on Human Factor principles was also tested. Forty-two participants drove in driving simulator scenarios while data on their speeds and decelerations were collected. The results indicated that drivers' speeds are always higher than the temporary posted speed limits for all configurations and that speeds decrease significantly only within the by-passes. However, the implementation of higher speed limits, together with a wider median opening and taller channelization devices, led to greater homogeneity of the speeds adopted by the drivers. The presence of perceptual measures generally induced both the greatest homogenization of speeds and the largest reductions in mean speed values. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in manually as well as automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in sequencing projects with high throughput). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 millipascal-seconds (mPa·s). The obtainable drop sizes range from 5 picoliters to a few nanoliters with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be highly suitable for the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.

  3. Testing and performance analysis of a 650 Mbps QPPM modem for free-space laser communications

    NASA Astrophysics Data System (ADS)

    Mortensen, Dale J.

    1994-08-01

    The testing and performance of a prototype modem developed at NASA Lewis Research Center for high-speed free-space direct detection optical communications is described. The testing was performed under laboratory conditions using computer control with specially developed test equipment that simulates free-space link conditions. The modem employs quaternary pulse position modulation (QPPM) at 325 Megabits per second (Mbps) on two optical channels, which are multiplexed to transmit a single 650 Mbps data stream. The measured results indicate that the receiver's automatic gain control (AGC), phased-locked-loop slot clock recovery, digital symbol clock recovery, matched filtering, and maximum likelihood data recovery circuits were found to have only 1.5 dB combined implementation loss during bit-error-rate (BER) performance measurements. Pseudo random bit sequences and real-time high quality video sources were used to supply 650 Mbps and 325 Mbps data streams to the modem. Additional testing revealed that Doppler frequency shifting can be easily tracked by the receiver, that simulated pointing errors are readily compensated for by the AGC circuits, and that channel timing skew affects the BER performance in an expected manner. Overall, the needed technologies for a high-speed laser communications modem were demonstrated.

  4. Protein functional features are reflected in the patterns of mRNA translation speed.

    PubMed

    López, Daniel; Pazos, Florencio

    2015-07-09

    The degeneracy of the genetic code makes it possible for the same amino acid string to be coded by different messenger RNA (mRNA) sequences. These "synonymous mRNAs" may differ considerably in a number of aspects related to their overall translational efficiency, such as secondary structure content and availability of the encoded transfer RNAs (tRNAs). Consequently, they may render different yields of the translated polypeptides. These mRNA features related to translation efficiency also play a role locally, resulting in a non-uniform translation speed along the mRNA, which has been previously related to some protein structural features and also used to explain some dramatic effects of "silent" single-nucleotide-polymorphisms (SNPs). In this work we perform the first large scale analysis of the relationship between three experimental proxies of mRNA local translation efficiency and the local features of the corresponding encoded proteins. We found that a number of protein functional and structural features are reflected in the patterns of ribosome occupancy, secondary structure and tRNA availability along the mRNA. One or more of these proxies of translation speed have distinctive patterns around the mRNA regions coding for certain protein local features. In some cases the three patterns follow a similar trend. We also show specific examples where these patterns of translation speed point to the protein's important structural and functional features. This supports the idea that the genome not only codes the protein functional features as sequences of amino acids, but also as subtle patterns of mRNA properties which, probably through local effects on the translation speed, have some consequence for the final polypeptide. These results open the possibility of predicting a protein's functional regions based on a single genomic sequence, and have implications for heterologous protein expression and fine-tuning protein function.

  5. Systematic and stochastic influences on the performance of the MinION nanopore sequencer across a range of nucleotide bias

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krishnakumar, Raga; Sinha, Anupama; Bird, Sara W.

    Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. Genomes can have radically different GC content, however, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.

  6. Systematic and stochastic influences on the performance of the MinION nanopore sequencer across a range of nucleotide bias

    DOE PAGES

    Krishnakumar, Raga; Sinha, Anupama; Bird, Sara W.; ...

    2018-02-16

    Emerging sequencing technologies are allowing us to characterize environmental, clinical and laboratory samples with increasing speed and detail, including real-time analysis and interpretation of data. One example of this is being able to rapidly and accurately detect a wide range of pathogenic organisms, both in the clinic and the field. Genomes can have radically different GC content, however, such that accurate sequence analysis can be challenging depending upon the technology used. Here, we have characterized the performance of the Oxford MinION nanopore sequencer for detection and evaluation of organisms with a range of genomic nucleotide bias. We have diagnosed the quality of base-calling across individual reads and discovered that the position within the read affects base-calling and quality scores. Finally, we have evaluated the performance of the current state-of-the-art neural network-based MinION basecaller, characterizing its behavior with respect to systemic errors as well as context- and sequence-specific errors. Overall, we present a detailed characterization of the capabilities of the MinION in terms of generating high-accuracy sequence data from genomes with a wide range of nucleotide content. This study provides a framework for designing the appropriate experiments that are likely to lead to accurate and rapid field-forward diagnostics.

  7. Two-Volt Josephson Arbitrary Waveform Synthesizer Using Wilkinson Dividers.

    PubMed

    Flowers-Jacobs, Nathan E; Fox, Anna E; Dresselhaus, Paul D; Schwall, Robert E; Benz, Samuel P

    2016-09-01

    The root-mean-square (rms) output voltage of the NIST Josephson arbitrary waveform synthesizer (JAWS) has been doubled from 1 V to a record 2 V by combining two new 1 V chips on a cryocooler. This higher voltage will improve calibrations of ac thermal voltage converters and precision voltage measurements that require state-of-the-art quantum accuracy, stability, and signal-to-noise ratio. We achieved this increase in output voltage by using four on-chip Wilkinson dividers and eight inner-outer dc blocks, which enable biasing of eight Josephson junction (JJ) arrays with high-speed inputs from only four high-speed pulse generator channels. This approach halves the number of pulse generator channels required in future JAWS systems. We also implemented on-chip superconducting interconnects between JJ arrays, which reduces systematic errors and enables a new modular chip package. Finally, we demonstrate a new technique for measuring and visualizing the operating current range that reduces the measurement time by almost two orders of magnitude and reveals the relationship between distortion in the output spectrum and output pulse sequence errors.

  8. The changing nature of spacecraft operations: From the Vikings of the 1970's to the great observatories of the 1990's and beyond

    NASA Technical Reports Server (NTRS)

    Ledbetter, Kenneth W.

    1992-01-01

    Four trends in spacecraft flight operations are discussed which will reduce overall program costs. These trends are the use of high-speed, highly reliable data communications systems for distributing operations functions to more convenient and cost-effective sites; the improved capability for remote operation of sensors; a continued rapid increase in memory and processing speed of flight qualified computer chips; and increasingly capable ground-based hardware and software systems, notably those augmented by artificial intelligence functions. Changes reflected by these trends are reviewed starting from the NASA Viking missions of the early 70s, when mission control was conducted at one location using expensive and cumbersome mainframe computers and communications equipment. In the 1980s, powerful desktop computers and modems enabled the Magellan project team to operate the spacecraft remotely. In the 1990s, the Hubble Space Telescope project uses multiple color screens and automated sequencing software on small computers. Given a projection of current capabilities, future control centers will be even more cost-effective.

  9. High-speed registration of phonation-related glottal area variation during artificial lengthening of the vocal tract.

    PubMed

    Laukkanen, Anne-Maria; Pulakka, Hannu; Alku, Paavo; Vilkman, Erkki; Hertegård, Stellan; Lindestad, Per-Ake; Larsson, Hans; Granqvist, Svante

    2007-01-01

    Vocal exercises that increase the vocal tract impedance are widely used in voice training and therapy. The present study applies a versatile methodology to investigate phonation during varying artificial extension of the vocal tract. Two males and one female phonated into a hard-walled plastic tube (diameter 2 cm), whose physical length was randomly pair-wise changed between 30 cm, 60 cm and 100 cm. High-speed image (1900 frames/s) sequences of the vocal folds were obtained via a rigid endoscope. Acoustic and electroglottographic signals (EGG) were recorded. Oral pressure during shuttering of the tube was used to give an estimate of subglottic pressure (Psub). The only trend observed was that with the two longer tubes compared to the shortest one, fundamental frequency was lower, open time of the glottis shorter, and Psub higher. The results may partly reflect increased vocal tract impedance as such and partly the increased vocal effort to compensate for it. In other parameters there were individual differences in tube length-related changes, suggesting complexity of the coupling between supraglottic space and the glottis.

  10. Time-sequenced X-ray Observation of a Thermal Explosion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tringe, J W; Molitoris, J D; Smilowitz, L

    The evolution of a thermally-initiated explosion is studied using a multiple-image x-ray system. HMX-based PBX 9501 is used in this work, enabling direct comparison to recently-published data obtained with proton radiography [1]. Multiple x-ray images of the explosion are obtained with image spacing of ten microseconds or more. The explosion is simultaneously characterized with a high-speed camera using an interframe spacing of 11 μs. X-ray and camera images were both initiated passively by signals from an embedded thermocouple array, as opposed to being actively triggered by a laser pulse or other external source. X-ray images show an accelerating reacting front within the explosive, and also show unreacted explosive at the time the containment vessel bursts. High-speed camera images show debris ejected from the vessel expanding at 800-2100 m/s in the first tens of μs after the container wall failure. The effective center of the initiation volume is about 6 mm from the geometric center of the explosive.

  11. Imaging system design and image interpolation based on CMOS image sensor

    NASA Astrophysics Data System (ADS)

    Li, Yu-feng; Liang, Fei; Guo, Rui

    2009-11-01

    An image acquisition system is introduced, which consists of a color CMOS image sensor (OV9620), SRAM (CY62148), CPLD (EPM7128AE) and DSP (TMS320VC5509A). The CPLD implements the logic and timing control for the system. The SRAM stores the image data, and the DSP controls the image acquisition system through the SCCB (OmniVision Serial Camera Control Bus). The timing sequence of the CMOS image sensor OV9620 is analyzed. The imaging part and the high speed image data memory unit are designed. The hardware and software design of the image acquisition and processing system is given. CMOS digital cameras use color filter arrays to sample different spectral components, such as red, green, and blue. At the location of each pixel only one color sample is taken, and the other colors must be interpolated from neighboring samples. We use an edge-oriented adaptive interpolation algorithm for edge pixels and a bilinear interpolation algorithm for non-edge pixels to improve the visual quality of the interpolated images. This method achieves high processing speed, reduces computational complexity, and effectively preserves image edges.
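
    The edge-oriented green interpolation can be sketched as choosing the averaging direction with the smaller gradient and falling back to the four-neighbour (bilinear) average in flat regions. The following is a simplified, single-channel illustration of that rule, not the exact algorithm used in the paper; border handling and the red/blue channels are omitted.

```python
# Simplified, single-channel sketch of edge-directed interpolation: at each
# site where green is missing, average along the direction with the smaller
# gradient; if neither direction dominates, fall back to the four-neighbour
# (bilinear) average. Border pixels and the red/blue channels are omitted.
import numpy as np

def interpolate_green(mosaic, green_mask, thresh=1e-6):
    green = np.where(green_mask, mosaic, 0.0).astype(float)
    out = green.copy()
    h, w = mosaic.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if green_mask[y, x]:
                continue                         # green was sampled here
            gl, gr = green[y, x - 1], green[y, x + 1]
            gu, gd = green[y - 1, x], green[y + 1, x]
            dh, dv = abs(gl - gr), abs(gu - gd)
            if dh + thresh < dv:                 # smoother horizontally: use gl, gr
                out[y, x] = (gl + gr) / 2
            elif dv + thresh < dh:               # smoother vertically: use gu, gd
                out[y, x] = (gu + gd) / 2
            else:                                # no clear edge: bilinear average
                out[y, x] = (gl + gr + gu + gd) / 4
    return out

# Toy test: green sampled on a checkerboard, scene is a horizontal ramp.
yy, xx = np.mgrid[0:8, 0:8]
green_mask = (yy + xx) % 2 == 0
scene = xx.astype(float)
print(interpolate_green(scene, green_mask)[3, 1:7])   # recovers the ramp values
```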

  12. Microactuator production via high aspect ratio, high edge acuity metal fabrication technology

    NASA Technical Reports Server (NTRS)

    Guckel, H.; Christenson, T. R.

    1993-01-01

    LIGA is a processing sequence that uses x-ray lithography on photoresist layers several hundred micrometers thick to produce very high edge acuity photopolymer molds. These plastic molds can be converted to metal molds via electroplating of many different metals and alloys. The end results are high edge acuity metal parts with large structural heights. The LIGA process as originally described by W. Ehrfeld can be extended by adding a surface micromachining phase to produce precision metal parts which can be assembled to form three-dimensional micromechanisms. This process, SLIGA, has been used to fabricate a dynamometer on a chip. The instrument has been fully implemented and will be applied to tribology issues, speed-torque characterization of planar magnetic micromotors and a new family of sensors.

  13. GDC 2: Compression of large collections of genomes

    PubMed Central

    Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin

    2015-01-01

    The falling price of high-throughput genome sequencing is changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One significant side effect of this change is the need to store and transfer huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by other existing compressors. Moreover, our algorithm is very fast, as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows complete genomic collections to be stored at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about. PMID:26108279
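
    The general reference-based idea behind such collection compressors can be sketched as storing each genome as its differences from a reference and then compressing the (small) difference stream. The toy encoder below handles substitutions only and uses zlib for the final stage; it is not GDC 2's actual algorithm.

```python
# Sketch of the general reference-based idea behind collection compressors:
# a genome is stored as its differences from a reference, and the (small)
# difference stream compresses far better than the raw sequence. This is a
# toy substitution-only encoder, not GDC 2's actual algorithm.
import zlib

def diff_encode(reference, genome):
    assert len(reference) == len(genome)          # toy: substitutions only
    diffs = [f"{i}:{b}" for i, (a, b) in enumerate(zip(reference, genome)) if a != b]
    return zlib.compress(";".join(diffs).encode())

def diff_decode(reference, blob):
    genome = list(reference)
    text = zlib.decompress(blob).decode()
    if text:
        for entry in text.split(";"):
            pos, base = entry.split(":")
            genome[int(pos)] = base
    return "".join(genome)

ref = "ACGT" * 2500
sample = ref[:123] + "A" + ref[124:4000] + "G" + ref[4001:]   # two substitutions
blob = diff_encode(ref, sample)
print(len(sample), len(blob), diff_decode(ref, blob) == sample)
```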

  14. Impact of NGS in the medical sciences: Genetic syndromes with an increased risk of developing cancer as an example of the use of new technologies

    PubMed Central

    Lapunzina, Pablo; López, Rocío Ortiz; Rodríguez-Laguna, Lara; García-Miguel, Purificación; Martínez, Augusto Rojas; Martínez-Glez, Víctor

    2014-01-01

    The increased speed and decreasing cost of sequencing, along with an understanding of the clinical relevance of emerging information for patient management, has led to an explosion of potential applications in healthcare. Currently, SNP arrays and Next-Generation Sequencing (NGS) technologies are relatively new techniques used to scan genomes for gains and losses, losses of heterozygosity (LOH), SNPs, and indel variants as well as to perform complete sequencing of a panel of candidate genes, the entire exome (whole exome sequencing) or even the whole genome. As a result, these new high-throughput technologies have facilitated progress in the understanding and diagnosis of genetic syndromes and cancers, two disorders traditionally considered to be separate diseases but that can share causal genetic alterations in a group of developmental disorders associated with congenital malformations and cancer risk. The purpose of this work is to review these syndromes as an example of a group of disorders that has been included in a panel of genes for NGS analysis. We also highlight the relationship between development and cancer and underline the connections between these syndromes. PMID:24764758

  15. GDC 2: Compression of large collections of genomes.

    PubMed

    Deorowicz, Sebastian; Danek, Agnieszka; Niemiec, Marcin

    2015-06-25

    The falling price of high-throughput genome sequencing is changing the landscape of modern genomics. A number of large-scale projects aimed at sequencing many human genomes are in progress. Genome sequencing is also becoming an important aid in personalized medicine. One significant side effect of this change is the need to store and transfer huge amounts of genomic data. In this paper we deal with the problem of compression of large collections of complete genomic sequences. We propose an algorithm that is able to compress a collection of 1092 human diploid genomes about 9,500 times. This result is about 4 times better than what is offered by other existing compressors. Moreover, our algorithm is very fast, as it processes the data at a speed of 200 MB/s on a modern workstation. As a consequence, the proposed algorithm allows complete genomic collections to be stored at low cost; e.g., the examined collection of 1092 human genomes needs only about 700 MB when compressed, compared to about 6.7 TB of uncompressed FASTA files. The source code is available at http://sun.aei.polsl.pl/REFRESH/index.php?page=projects&project=gdc&subpage=about.

  16. Congruence analysis of point clouds from unstable stereo image sequences

    NASA Astrophysics Data System (ADS)

    Jepping, C.; Bethmann, F.; Luhmann, T.

    2014-06-01

    This paper deals with the correction of exterior orientation parameters of stereo image sequences over deformed free-form surfaces without control points. Such an imaging situation can occur, for example, during photogrammetric car crash test recordings where onboard high-speed stereo cameras are used to measure 3D surfaces. As a result of such measurements, 3D point clouds of deformed surfaces are generated for a complete stereo sequence. The first objective of this research focusses on the development and investigation of methods for the detection of corresponding spatial and temporal tie points within the stereo image sequences (by stereo image matching and 3D point tracking) that are robust enough for a reliable handling of occlusions and other disturbances that may occur. The second objective of this research is the analysis of object deformations in order to detect stable areas (congruence analysis). For this purpose, a RANSAC-based method for congruence analysis has been developed. This process is based on the sequential transformation of randomly selected point groups from one epoch to another by using a 3D similarity transformation. The paper gives a detailed description of the congruence analysis. The approach has been tested successfully on synthetic and real image data.
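
    The RANSAC-style congruence analysis can be sketched as repeatedly fitting a 3D similarity transform to random minimal point subsets drawn from the two epochs and keeping the transform with the most inliers; those inliers are then treated as the stable region. The sketch below uses a Procrustes/Umeyama-style least-squares fit and invented synthetic data; thresholds and iteration counts are illustrative.

```python
# Sketch of RANSAC-based congruence analysis between two epochs of a point
# cloud: random minimal subsets define a 3D similarity transform, and the
# points that agree with the best transform form the stable, congruent region.
import numpy as np

def similarity_transform(src, dst):
    """Least-squares scale s, rotation R, translation t with dst ~ s*R@src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c)
    D = np.diag([1, 1, np.sign(np.linalg.det(U @ Vt))])   # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def ransac_congruence(p0, p1, n_iter=200, tol=0.05, rng=np.random.default_rng(0)):
    best_inliers = np.zeros(len(p0), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(p0), size=4, replace=False)
        s, R, t = similarity_transform(p0[idx], p1[idx])
        resid = np.linalg.norm(p1 - (s * (R @ p0.T).T + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic test: 80 stable points under a known similarity transform,
# 20 "deformed" points that should be rejected.
rng = np.random.default_rng(1)
p0 = rng.uniform(-1, 1, (100, 3))
angle = 0.2
Rz = np.array([[np.cos(angle), -np.sin(angle), 0], [np.sin(angle), np.cos(angle), 0], [0, 0, 1]])
p1 = 1.1 * (Rz @ p0.T).T + np.array([0.3, -0.2, 0.1])
p1[80:] += rng.uniform(0.2, 0.5, (20, 3))        # deformation
print(ransac_congruence(p0, p1).sum())           # ~80 stable points recovered
```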

  17. High-speed three-dimensional measurements with a fringe projection-based optical sensor

    NASA Astrophysics Data System (ADS)

    Bräuer-Burchardt, Christian; Breitbarth, Andreas; Kühmstedt, Peter; Notni, Gunther

    2014-11-01

    An optical three-dimensional (3-D) sensor based on a fringe projection technique that realizes the acquisition of the surface geometry of small objects was developed for highly resolved and ultrafast measurements. It achieves a data acquisition rate of up to 60 high-resolution 3-D datasets per second. The high measurement speed was achieved by consistent fringe code reduction and parallel data processing. The length of the fringe image sequence was reduced by omitting the Gray code sequence, exploiting the geometric restrictions of the measurement objects and the geometric constraints of the sensor arrangement. The sensor covers three different measurement fields between 20 mm×20 mm and 40 mm×40 mm with a spatial resolution between 10 and 20 μm, respectively. To allow robust and fast recalibration of the sensor after a change of the measurement field, a calibration procedure based on single-shot analysis of a special test object was applied, which requires little time and effort. The sensor may be used, e.g., for quality inspection of conductor boards or plugs in real-time industrial applications.

  18. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. In addition, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
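
    The pixel-level feedback idea can be caricatured as giving every pixel its own decision threshold that grows where the segmentation flickers between frames and shrinks where it is stable. The sketch below is a deliberately minimal caricature of that feedback loop, not SuBSENSE itself; it omits the spatiotemporal binary features, the background model, and the second feedback loop on the update rate.

```python
# Highly simplified caricature of pixel-level feedback (not SuBSENSE itself):
# each pixel keeps its own decision threshold, which grows where the
# segmentation "blinks" between consecutive frames (noisy/dynamic areas) and
# shrinks where it is stable, so sensitivity adapts without global constants.
import numpy as np

def segment_sequence(frames, background, r0=20.0, feedback=0.5):
    R = np.full(background.shape, r0)            # per-pixel decision threshold
    prev_mask = np.zeros(background.shape, bool)
    masks = []
    for frame in frames:
        mask = np.abs(frame - background) > R    # foreground where change exceeds R
        blinking = mask ^ prev_mask              # label flipped since last frame
        R = np.clip(R + feedback * np.where(blinking, 1, -1), 5.0, 60.0)
        prev_mask = mask
        masks.append(mask)
    return masks

bg = np.full((32, 32), 100.0)
frames = [bg + np.random.default_rng(i).normal(0, 15, bg.shape) for i in range(10)]
masks = segment_sequence(frames, bg)
print(masks[-1].mean())                          # fraction flagged as foreground
```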

  19. Tachyon search speeds up retrieval of similar sequences by several orders of magnitude

    PubMed Central

    Tan, Joshua; Kuchibhatla, Durga; Sirota, Fernanda L.; Sherman, Westley A.; Gattermayer, Tobias; Kwoh, Chia Yee; Eisenhaber, Frank; Schneider, Georg; Maurer-Stroh, Sebastian

    2012-01-01

    Summary: The usage of current sequence search tools becomes increasingly slower as databases of protein sequences continue to grow exponentially. Tachyon, a new algorithm that identifies closely related protein sequences ~200 times faster than standard BLAST, circumvents this limitation with a reduced database and oligopeptide matching heuristic. Availability and implementation: The tool is publicly accessible as a webserver at http://tachyon.bii.a-star.edu.sg and can also be accessed programmatically through SOAP. Contact: sebastianms@bii.a-star.edu.sg Supplementary information: Supplementary data are available at the Bioinformatics online. PMID:22531216

  20. A Dual-Mode Large-Arrayed CMOS ISFET Sensor for Accurate and High-Throughput pH Sensing in Biomedical Diagnosis.

    PubMed

    Huang, Xiwei; Yu, Hao; Liu, Xu; Jiang, Yu; Yan, Mei; Wu, Dongping

    2015-09-01

    Existing ISFET-based DNA sequencing detects hydrogen ions released during the polymerization of DNA strands on microbeads, which are scattered into a microwell array above the ISFET sensor with unknown distribution. However, false pH detection occurs at empty microwells due to crosstalk from neighboring microbeads. In this paper, a dual-mode CMOS ISFET sensor is proposed for accurate pH detection toward DNA sequencing. Dual-mode sensing (optical and chemical) is realized by integrating a CMOS image sensor (CIS) with an ISFET pH sensor, fabricated in a standard 0.18-μm CIS process. With accurate determination of microbead physical locations by CIS-pixel contact imaging, the dual-mode sensor can correlate the local pH for one DNA slice with one location-determined microbead, which results in improved pH detection accuracy. Moreover, toward high-throughput DNA sequencing, a correlated-double-sampling readout that supports large arrays for both modes is deployed to reduce pixel-to-pixel nonuniformity such as threshold voltage mismatch. The proposed CMOS dual-mode sensor is experimentally examined to show a well correlated pH map and optical image for microbeads with a pH sensitivity of 26.2 mV/pH, a fixed pattern noise (FPN) reduction from 4% to 0.3%, and a readout speed of 1200 frames/s. A dual-mode CMOS ISFET sensor with suppressed FPN for accurate large-arrayed pH sensing is proposed and demonstrated with state-of-the-art measured results toward accurate and high-throughput DNA sequencing. The developed dual-mode CMOS ISFET sensor has great potential for future personal genome diagnostics with high accuracy and low cost.

  1. Long-term excretion of vaccine-derived poliovirus by a healthy child.

    PubMed

    Martín, Javier; Odoom, Kofi; Tuite, Gráinne; Dunn, Glynis; Hopewell, Nicola; Cooper, Gill; Fitzharris, Catherine; Butler, Karina; Hall, William W; Minor, Philip D

    2004-12-01

    A child was found to be excreting type 1 vaccine-derived poliovirus (VDPV) with a 1.1% sequence drift from Sabin type 1 vaccine strain in the VP1 coding region 6 months after he was immunized with oral live polio vaccine. Seventeen type 1 poliovirus isolates were recovered from stools taken from this child during the following 4 months. Contrary to expectation, the child was not deficient in humoral immunity and showed high levels of serum neutralization against poliovirus. Selected virus isolates were characterized in terms of their antigenic properties, virulence in transgenic mice, sensitivity for growth at high temperatures, and differences in nucleotide sequence from the Sabin type 1 strain. The VDPV isolates showed mutations at key nucleotide positions that correlated with the observed reversion to biological properties typical of wild polioviruses. A number of capsid mutations mapped at known antigenic sites leading to changes in the viral antigenic structure. Estimates of sequence evolution based on the accumulation of nucleotide changes in the VP1 coding region detected a "defective" molecular clock running at an apparent faster speed of 2.05% nucleotide changes per year versus 1% shown in previous studies. Remarkably, when compared to several type 1 VDPV strains of different origins, isolates from this child showed a much higher proportion of nonsynonymous versus synonymous nucleotide changes in the capsid coding region. This anomaly could explain the high VP1 sequence drift found and the ability of these virus strains to replicate in the gut for a longer period than expected.

  2. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment.

    PubMed

    Oh, Jeongsu; Choi, Chi-Hwan; Park, Min-Kyu; Kim, Byung Kwon; Hwang, Kyuin; Lee, Sang-Heon; Hong, Soon Gyu; Nasir, Arshan; Cho, Wan-Sup; Kim, Kyung Mo

    2016-01-01

    High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in environmental samples. Typically, analysis of microbial diversity in bioinformatics starts with pre-processing, followed by clustering of the 16S rRNA reads into relatively few operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis. However, existing hierarchical clustering algorithms, which are generally more accurate than greedy heuristic algorithms, struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology, a distributed data structure that stores all data in the main memory of multiple computing nodes. The IMDG technology gives CLUSTOM-CLOUD both a greater capacity for handling large datasets and better computational scalability than its ancestor, CLUSTOM, while maintaining high accuracy. The clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using a small laboratory cluster (10 nodes) and the Amazon EC2 cloud-computing environment. Under the laboratory environment, it required only ~3 hours to process a dataset of 200 K reads, regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. A comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD is written in JAVA and is freely available at http://clustomcloud.kopri.re.kr.
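
    The OTU clustering step described above can be illustrated with a generic hierarchical-clustering sketch in Python (this is not CLUSTOM's algorithm; the k-mer distance, the toy reads and the clustering cutoff are assumptions made for the example):

      from itertools import combinations
      from scipy.cluster.hierarchy import linkage, fcluster

      def kmer_distance(a, b, k=4):
          """Crude pairwise dissimilarity: 1 - Jaccard similarity of k-mer sets."""
          ka = {a[i:i + k] for i in range(len(a) - k + 1)}
          kb = {b[i:i + k] for i in range(len(b) - k + 1)}
          return 1.0 - len(ka & kb) / len(ka | kb)

      reads = ["ACGTACGTGGCTA", "ACGTACGTGGCTT", "TTGACCGTAAGGC", "TTGACCGTAAGGA"]

      # Condensed pairwise distance matrix for average-linkage hierarchical clustering.
      dists = [kmer_distance(a, b) for a, b in combinations(reads, 2)]
      tree = linkage(dists, method="average")

      # Cut the tree at a dissimilarity cutoff; real tools use pairwise sequence
      # identity (e.g. the common 97% OTU convention) rather than this toy distance.
      otus = fcluster(tree, t=0.25, criterion="distance")
      print(otus)   # reads sharing a label are collapsed into one OTU downstream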

  3. A Phylogenomic Approach Based on PCR Target Enrichment and High Throughput Sequencing: Resolving the Diversity within the South American Species of Bartsia L. (Orobanchaceae)

    PubMed Central

    Tank, David C.

    2016-01-01

    Advances in high-throughput sequencing (HTS) have allowed researchers to obtain large amounts of biological sequence information at speeds and costs unimaginable only a decade ago. Phylogenetics, and the study of evolution in general, is quickly migrating towards using HTS to generate larger and more complex molecular datasets. In this paper, we present a method that utilizes microfluidic PCR and HTS to generate large amounts of sequence data suitable for phylogenetic analyses. The approach uses the Fluidigm Access Array System (Fluidigm, San Francisco, CA, USA) and two sets of PCR primers to simultaneously amplify 48 target regions across 48 samples, incorporating sample-specific barcodes and HTS adapters (2,304 unique amplicons per Access Array). The final product is a pooled set of amplicons ready to be sequenced, and thus, there is no need to construct separate, costly genomic libraries for each sample. Further, we present a bioinformatics pipeline to process the raw HTS reads to either generate consensus sequences (with or without ambiguities) for every locus in every sample or, more importantly, recover the separate alleles from heterozygous target regions in each sample. This is important because it adds allelic information that is well suited for coalescent-based phylogenetic analyses that are becoming very common in conservation and evolutionary biology. To test our approach and bioinformatics pipeline, we sequenced 576 samples across 96 target regions belonging to the South American clade of the genus Bartsia L. in the plant family Orobanchaceae. After sequencing cleanup and alignment, the experiment resulted in ~25,300 bp across 486 samples for a set of 48 primer pairs targeting the plastome, and ~13,500 bp for 363 samples for a set of primers targeting regions in the nuclear genome. Finally, we constructed a combined concatenated matrix from all 96 primer combinations, resulting in a combined aligned length of ~40,500 bp for 349 samples. PMID:26828929

  4. CLUSTOM-CLOUD: In-Memory Data Grid-Based Software for Clustering 16S rRNA Sequence Data in the Cloud Environment

    PubMed Central

    Park, Min-Kyu; Kim, Byung Kwon; Hwang, Kyuin; Lee, Sang-Heon; Hong, Soon Gyu; Nasir, Arshan; Cho, Wan-Sup; Kim, Kyung Mo

    2016-01-01

    High-throughput sequencing can produce hundreds of thousands of 16S rRNA sequence reads corresponding to different organisms present in environmental samples. Typically, analysis of microbial diversity in bioinformatics starts with pre-processing, followed by clustering of the 16S rRNA reads into relatively few operational taxonomic units (OTUs). The OTUs are reliable indicators of microbial diversity and greatly accelerate the downstream analysis. However, existing hierarchical clustering algorithms, which are generally more accurate than greedy heuristic algorithms, struggle with large sequence datasets. To keep pace with the rapid rise in sequencing data, we present CLUSTOM-CLOUD, the first distributed sequence clustering program based on In-Memory Data Grid (IMDG) technology, a distributed data structure that stores all data in the main memory of multiple computing nodes. The IMDG technology gives CLUSTOM-CLOUD both a greater capacity for handling large datasets and better computational scalability than its ancestor, CLUSTOM, while maintaining high accuracy. The clustering speed of CLUSTOM-CLOUD was evaluated on published 16S rRNA human microbiome sequence datasets using a small laboratory cluster (10 nodes) and the Amazon EC2 cloud-computing environment. Under the laboratory environment, it required only ~3 hours to process a dataset of 200 K reads, regardless of the complexity of the human microbiome data. In turn, one million reads were processed in approximately 20, 14, and 11 hours when utilizing 20, 30, and 40 nodes on the Amazon EC2 cloud-computing environment. The running time evaluation indicates that CLUSTOM-CLOUD can handle much larger sequence datasets than CLUSTOM and is also a scalable distributed processing system. A comparative accuracy test using 16S rRNA pyrosequences of a mock community shows that CLUSTOM-CLOUD achieves higher accuracy than DOTUR, mothur, ESPRIT-Tree, UCLUST and Swarm. CLUSTOM-CLOUD is written in JAVA and is freely available at http://clustomcloud.kopri.re.kr. PMID:26954507

  5. Does a Sensory Processing Deficit Explain Counting Accuracy on Rapid Visual Sequencing Tasks in Adults with and without Dyslexia?

    ERIC Educational Resources Information Center

    Conlon, Elizabeth G.; Wright, Craig M.; Norris, Karla; Chekaluk, Eugene

    2011-01-01

    The experiments conducted aimed to investigate whether reduced accuracy when counting stimuli presented in rapid temporal sequence in adults with dyslexia could be explained by a sensory processing deficit, a general slowing in processing speed or difficulties shifting attention between stimuli. To achieve these aims, the influence of the…

  6. High-tech breakthrough DNA scanner for reading sequence and detecting gene mutation: A powerful 1 lb, 20 µm resolution, 16-bit personal scanner (PS) that scans 17 inch x 14 inch x-ray film in 48 s, with laser, UV and white light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zeineh, J.A.; Zeineh, M.M.; Zeineh, R.A.

    1993-06-01

    17 inch x 14 inch X-ray films, gels, and blots are widely used in DNA research. However, DNA laser scanners are costly and unaffordable for the majority of surveyed biotech scientists who need them. The high-tech breakthrough analytical personal scanner (PS) presented in this report is an inexpensive 1 lb hand-held scanner priced at 2-4% of the bulky and costly 30-95 lb conventional laser scanners. This PS scanner is affordable from an operating budget, so biotechnologists, who originate most science breakthroughs, can acquire it to enhance their speed, accuracy, and productivity. Compared with conventional laser scanners, which are currently available only through hard-to-get capital-equipment budgets, the new PS scanner offers improved spatial resolution of 20 µm, higher speed (scanning up to 17 inch x 14 inch molecular X-ray film in 48 s), 1-32,768 gray levels (16 bits), student routines, versatility, and, most important, affordability. Its programs image the film, read DNA sequences automatically, and detect gene mutations. In parallel to the wide laboratory use of PC computers instead of mainframes, this PS scanner might become an integral part of a powerful and cost-effective PC-PS system in which the PS performs the digital imaging and the PC acts on the data.

  7. Rate in template-directed polymer synthesis.

    PubMed

    Saito, Takuya

    2014-06-01

    We discuss the temporal efficiency of template-directed polymer synthesis, such as DNA replication and transcription, under a given template string. To weigh the synthesis speed and accuracy on the same scale, we propose a template-directed synthesis (TDS) rate, which contains an expression analogous to that for the Shannon entropy. Increasing the synthesis speed accelerates the TDS rate, but the TDS rate is lowered if the produced sequences are diversified. We apply the TDS rate to some production system models and investigate how the balance between the speed and the accuracy is affected by changes in the system conditions.
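
    The abstract states only that the TDS rate weighs the synthesis speed against a Shannon-entropy-like measure of the diversity of the produced sequences; one schematic way to write such a quantity (an assumption made here for illustration, not the paper's exact definition) is

      R_{\mathrm{TDS}} \;\propto\; v \,\bigl( \log m - \mathcal{H} \bigr),
      \qquad
      \mathcal{H} = -\sum_{x} p(x \mid \text{template}) \log p(x \mid \text{template}),

    where v is the polymerization speed, m the size of the monomer alphabet, and p(x | template) the probability of incorporating monomer x at a given template position. Faster synthesis raises the rate, while greater diversity of the produced sequences (larger H) lowers it, matching the qualitative behavior described above.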

  8. CoVaCS: a consensus variant calling system.

    PubMed

    Chiara, Matteo; Gioiosa, Silvia; Chillemi, Giovanni; D'Antonio, Mattia; Flati, Tiziano; Picardi, Ernesto; Zambelli, Federico; Horner, David Stephen; Pesole, Graziano; Castrignanò, Tiziana

    2018-02-05

    The advent and ongoing development of next generation sequencing technologies (NGS) has led to a rapid increase in the rate of human genome re-sequencing data, paving the way for personalized genomics and precision medicine. The body of genome resequencing data is progressively increasing, underlining the need for accurate and time-effective bioinformatics systems for genotyping, a crucial prerequisite for the identification of candidate causal mutations in diagnostic screens. Here we present CoVaCS, a fully automated, highly accurate system with a web-based graphical interface for genotyping and variant annotation. Extensive tests on a gold standard benchmark data set (the NA12878 Illumina platinum genome) confirm that call-sets based on our consensus strategy are completely in line with those attained by similar command line based approaches, and far more accurate than call-sets from any individual tool. Importantly, our system exhibits better sensitivity and higher specificity than equivalent commercial software. CoVaCS offers optimized pipelines integrating state of the art tools for variant calling and annotation for whole genome sequencing (WGS), whole-exome sequencing (WES) and target-gene sequencing (TGS) data. The system is currently hosted at Cineca, and offers the speed of an HPC computing facility, a crucial consideration when large numbers of samples must be analysed. Importantly, all the analyses are performed automatically, allowing high reproducibility of the results. As such, we believe that CoVaCS can be a valuable tool for the analysis of human genome resequencing studies. CoVaCS is available at: https://bioinformatics.cineca.it/covacs.
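
    The consensus strategy is described above only at a high level; a minimal Python sketch of one common interpretation (retain variants reported by at least two independent callers) follows. The caller names, variant tuples and two-caller threshold are illustrative assumptions, not CoVaCS's actual rules.

      from collections import Counter

      # Call-sets from individual tools, keyed by (chrom, pos, ref, alt).
      # The variant tuples are illustrative; real call-sets would be parsed from VCF files.
      callers = {
          "caller_A": {("chr1", 1012, "A", "G"), ("chr1", 2044, "C", "T")},
          "caller_B": {("chr1", 1012, "A", "G"), ("chr2", 5301, "G", "A")},
          "caller_C": {("chr1", 1012, "A", "G"), ("chr1", 2044, "C", "T")},
      }

      def consensus(callsets, min_support=2):
          """Keep variants reported by at least `min_support` independent callers."""
          counts = Counter(v for calls in callsets.values() for v in calls)
          return {v for v, n in counts.items() if n >= min_support}

      print(sorted(consensus(callers)))
      # [('chr1', 1012, 'A', 'G'), ('chr1', 2044, 'C', 'T')]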

  9. Investigation of modulation parameters in multiplexing gas chromatography.

    PubMed

    Trapp, Oliver

    2010-10-22

    The combination of information technology and separation sciences opens a new avenue to achieving high sample throughput and is therefore of great interest for bypassing bottlenecks in catalyst screening with parallelized reactors or in reaction optimization using multitier well plates. Multiplexing gas chromatography utilizes pseudo-random injection sequences derived from Hadamard matrices to perform rapid sample injections, which gives a convoluted chromatogram containing the information of a single sample or of several samples with similar analyte composition. The conventional chromatogram is obtained by applying the Hadamard transform using the known injection sequence; in the case of several samples, an averaged transformed chromatogram is obtained, which can be used in a Gauss-Jordan deconvolution procedure to obtain the single chromatograms of the individual samples. The performance of such a system depends on the modulation precision and on parameters such as the sequence length and modulation interval. Here we demonstrate the effects of the sequence length and modulation interval on the deconvoluted chromatogram, peak shapes and peak integration for sequences between 9-bit (511 elements) and 13-bit (8191 elements) and modulation intervals Δt between 5 s and 500 ms, using a mixture of five components. It could be demonstrated that even for high-speed modulation at time intervals of 500 ms the chromatographic information is very well preserved and that the separation efficiency can be improved by very narrow sample injections. Furthermore, this study shows that the relative peak areas in multiplexed chromatograms do not deviate from conventionally recorded chromatograms. Copyright © 2010 Elsevier B.V. All rights reserved.
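
    The multiplexing principle can be illustrated with a toy simulation in Python: a single-peak chromatogram is circularly convolved with a 9-bit pseudo-random injection sequence and then recovered by circular cross-correlation with that sequence. This is only a sketch of the underlying idea; the peak shape, noise level and correlation-based recovery (rather than the paper's exact Hadamard/S-matrix and Gauss-Jordan procedures) are assumptions.

      import numpy as np
      from scipy.signal import max_len_seq

      n = 511                                    # 9-bit sequence length, as in the study
      t = np.arange(n)
      single = np.exp(-0.5 * ((t - 60) / 5.0) ** 2)          # single-injection peak (toy)

      # Pseudo-random injection sequence (maximum-length sequence, 2**9 - 1 elements).
      seq = max_len_seq(9)[0].astype(float)

      # Multiplexed detector trace: every '1' in the sequence triggers an injection,
      # so the record is the circular convolution of the sequence with the response.
      multiplexed = np.real(np.fft.ifft(np.fft.fft(seq) * np.fft.fft(single)))
      multiplexed += np.random.default_rng(1).normal(0.0, 0.05, n)    # detector noise

      # Deconvolution: circular cross-correlation with the known injection sequence
      # recovers the single-injection chromatogram up to an offset and scale factor.
      recovered = np.real(np.fft.ifft(np.fft.fft(multiplexed) *
                                      np.conj(np.fft.fft(seq)))) / seq.sum()
      recovered -= recovered.min()

      print(np.argmax(recovered), np.argmax(single))   # peak positions should agree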

  10. A DNA sequence analysis package for the IBM personal computer.

    PubMed Central

    Lagrimini, L M; Brentano, S T; Donelson, J E

    1984-01-01

    We present here a collection of DNA sequence analysis programs, called "PC Sequence" (PCS), which are designed to run on the IBM Personal Computer (PC). These programs are written in IBM PC compiled BASIC and take full advantage of the IBM PC's speed, error handling, and graphics capabilities. For a modest initial expense in hardware any laboratory can use these programs to quickly perform computer analysis on DNA sequences. They are written with the novice user in mind and require very little training or previous experience with computers. Also provided are a text editing program for creating and modifying DNA sequence files and a communications program which enables the PC to communicate with and collect information from mainframe computers and DNA sequence databases. PMID:6546433

  11. Rapid isolation and purification of phorbol esters from Jatropha curcas by high-speed countercurrent chromatography.

    PubMed

    Hua, Wan; Hu, Huiling; Chen, Fang; Tang, Lin; Peng, Tong; Wang, Zhanguo

    2015-03-18

    In this work, a high-speed countercurrent chromatography (HSCCC) method was established for the preparation of phorbol esters (PEs) from Jatropha curcas. n-Hexane-ethyl acetate-methanol-water (1.5:1.5:1.2:0.5, v/v) was selected as the optimum two-phase solvent system to separate and purify jatropha factor C1 (JC1) with a purity of 85.2%, as determined by HPLC, and to obtain a mixture containing four or five PEs. Subsequently, continuous semipreparative HPLC was applied to further purify JC1 (99.8% as determined by HPLC). In addition, UPLC-PDA and UPLC-MS methods were established and successfully used to evaluate the isolated JC1 and the PE-rich crude extract. The purity of JC1 was only 87.8% by UPLC-UV. A peak (a compound highly similar to JC1) was identified as an isomer of JC1 by comparing the characteristic UV absorption and MS spectra. Meanwhile, this strategy was also applied to analyze the PE-rich crude extract from J. curcas. Interestingly, there may be more than 15 PEs, according to identical quasi-molecular ion peaks, highly similar sequence-specific fragment ions, and similar UV absorption spectra.

  12. Transient phases during crystallization of solution-processed organic thin films

    NASA Astrophysics Data System (ADS)

    Wan, Jing; Li, Yang; Ulbrandt, Jeffery; Smilgies, Detlef-M.; Hollin, Jonathan; Whalley, Adam; Headrick, Randall

    We report an in-situ study of 2,7-dioctyl[1]benzothieno[3,2-b][1]benzothiophene (C8-BTBT) organic semiconductor thin film deposition from solution via hollow pen writing, which exhibits multiple transient phases during crystallization. At high writing speed (25 mm/s) the films have an isotropic morphology, although the mobilities range up to 3.0 cm²/V·s. To understand the crystallization in this highly non-equilibrium regime, we employ in-situ microbeam grazing incidence wide-angle X-ray scattering combined with optical video microscopy at different deposition temperatures. A sequence of crystallization was observed in which a layered liquid-crystalline (LC) phase of C8-BTBT precedes inter-layer ordering. For films deposited above 80 °C, a transition from the LC phase to a transient crystalline state that we denote as Cr1 occurs after a temperature-dependent incubation time, which is consistent with classical nucleation theory. After an additional ~0.5 s, Cr1 transforms to the final stable structure Cr2. Based on these results, we demonstrate a method to produce large crystalline grain size and high carrier mobility during high-speed processing by controlling the nucleation rate during the transformation from the LC phase. NSF DMR-1307017, NSF DMR-1332208.

  13. SPARSE: quadratic time simultaneous alignment and folding of RNAs without sequence-based heuristics

    PubMed Central

    Will, Sebastian; Otto, Christina; Miladi, Milad; Möhl, Mathias; Backofen, Rolf

    2015-01-01

    Motivation: RNA-Seq experiments have revealed a multitude of novel ncRNAs. The gold standard for their analysis based on simultaneous alignment and folding suffers from extreme time complexity of O(n6). Subsequently, numerous faster ‘Sankoff-style’ approaches have been suggested. Commonly, the performance of such methods relies on sequence-based heuristics that restrict the search space to optimal or near-optimal sequence alignments; however, the accuracy of sequence-based methods breaks down for RNAs with sequence identities below 60%. Alignment approaches like LocARNA that do not require sequence-based heuristics, have been limited to high complexity (≥ quartic time). Results: Breaking this barrier, we introduce the novel Sankoff-style algorithm ‘sparsified prediction and alignment of RNAs based on their structure ensembles (SPARSE)’, which runs in quadratic time without sequence-based heuristics. To achieve this low complexity, on par with sequence alignment algorithms, SPARSE features strong sparsification based on structural properties of the RNA ensembles. Following PMcomp, SPARSE gains further speed-up from lightweight energy computation. Although all existing lightweight Sankoff-style methods restrict Sankoff’s original model by disallowing loop deletions and insertions, SPARSE transfers the Sankoff algorithm to the lightweight energy model completely for the first time. Compared with LocARNA, SPARSE achieves similar alignment and better folding quality in significantly less time (speedup: 3.7). At similar run-time, it aligns low sequence identity instances substantially more accurate than RAF, which uses sequence-based heuristics. Availability and implementation: SPARSE is freely available at http://www.bioinf.uni-freiburg.de/Software/SPARSE. Contact: backofen@informatik.uni-freiburg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25838465

  14. Monthly and annual percentage levels of wind speed differences computed by using FPS-16 radar/Jimsphere wind profile data from Cape Kennedy, Florida

    NASA Technical Reports Server (NTRS)

    Susko, M.; Kaufman, J. W.

    1973-01-01

    The percentage levels of wind speed differences, computed from sequential FPS-16 radar/Jimsphere wind profiles, are presented. The results are based on monthly profiles obtained from December 1964 to July 1970 at Cape Kennedy, Florida. The profile sequences contain a series of three to ten Jimspheres released at approximately 1.5-hour intervals. The results given are a persistence analysis of wind speed differences at 1.5-hour intervals up to a maximum time interval of 12 hours. The monthly and annual percentages of wind speed differences are tabulated. The percentage levels are based on the scalar wind speed changes calculated over an altitude interval of approximately 50 meters and printed out every 25 meters as a function of initial wind speed within each five-kilometer layer from near sea level to 20 km. In addition, analyses were made of the wind speed difference for the 0.2 to 1 km layer as an aid for studies associated with take-off and landing of the space shuttle.

  15. Visually driven chaining of elementary swim patterns into a goal-directed motor sequence: a virtual reality study of zebrafish prey capture

    PubMed Central

    Trivedi, Chintan A.; Bollmann, Johann H.

    2013-01-01

    Prey capture behavior critically depends on rapid processing of sensory input in order to track, approach, and catch the target. When using vision, the nervous system faces the problem of extracting relevant information from a continuous stream of input in order to detect and categorize visible objects as potential prey and to select appropriate motor patterns for approach. For prey capture, many vertebrates exhibit intermittent locomotion, in which discrete motor patterns are chained into a sequence, interrupted by short periods of rest. Here, using high-speed recordings of full-length prey capture sequences performed by freely swimming zebrafish larvae in the presence of a single paramecium, we provide a detailed kinematic analysis of first and subsequent swim bouts during prey capture. Using Fourier analysis, we show that individual swim bouts represent an elementary motor pattern. Changes in orientation are directed toward the target on a graded scale and are implemented by an asymmetric tail bend component superimposed on this basic motor pattern. To further investigate the role of visual feedback on the efficiency and speed of this complex behavior, we developed a closed-loop virtual reality setup in which minimally restrained larvae recapitulated interconnected swim patterns closely resembling those observed during prey capture in freely moving fish. Systematic variation of stimulus properties showed that prey capture is initiated within a narrow range of stimulus size and velocity. Furthermore, variations in the delay and location of swim triggered visual feedback showed that the reaction time of secondary and later swims is shorter for stimuli that appear within a narrow spatio-temporal window following a swim. This suggests that the larva may generate an expectation of stimulus position, which enables accelerated motor sequencing if the expectation is met by appropriate visual feedback. PMID:23675322

  16. A comprehensive and scalable database search system for metaproteomics.

    PubMed

    Chatterjee, Sandip; Stupp, Gregory S; Park, Sung Kyu Robin; Ducom, Jean-Christophe; Yates, John R; Su, Andrew I; Wolan, Dennis W

    2016-08-16

    Mass spectrometry-based shotgun proteomics experiments rely on accurate matching of experimental spectra against a database of protein sequences. Existing computational analysis methods are limited in the size of their sequence databases, which severely restricts the proteomic sequencing depth and functional analysis of highly complex samples. The growing amount of public high-throughput sequencing data will only exacerbate this problem. We designed a broadly applicable metaproteomic analysis method (ComPIL) that addresses protein database size limitations. Our approach to overcome this significant limitation in metaproteomics was to design a scalable set of sequence databases assembled for optimal library querying speeds. ComPIL was integrated with a modified version of the search engine ProLuCID (termed "Blazmass") to permit rapid matching of experimental spectra. Proof-of-principle analysis of human HEK293 lysate with a ComPIL database derived from high-quality genomic libraries was able to detect nearly all of the same peptides as a search with a human database (~500x fewer peptides in the database), with a small reduction in sensitivity. We were also able to detect proteins from the adenovirus used to immortalize these cells. We applied our method to a set of healthy human gut microbiome proteomic samples and showed a substantial increase in the number of identified peptides and proteins compared to previous metaproteomic analyses, while retaining a high degree of protein identification accuracy and allowing for a more in-depth characterization of the functional landscape of the samples. The combination of ComPIL with Blazmass allows proteomic searches to be performed with database sizes much larger than previously possible. These large database searches can be applied to complex meta-samples with unknown composition or proteomic samples where unexpected proteins may be identified. The protein database, proteomic search engine, and the proteomic data files for the 5 microbiome samples characterized and discussed herein are open source and available for use and additional analysis.

  17. The effect of recovery duration on running speed and stroke quality during intermittent training drills in elite tennis players.

    PubMed

    Ferrauti, A; Pluim, B M; Weber, K

    2001-04-01

    The aim of this study was to assess the effect of the recovery duration in intermittent training drills on metabolism and coordination in sport games. Ten nationally ranked male tennis players (age 25.3 ± 3.7 years, height 1.83 ± 0.8 m, body mass 77.8 ± 7.7 kg; mean ± sx) participated in a passing-shot drill (baseline sprint with subsequent passing shot) that aimed to improve both starting speed and stroke quality (speed and precision). Time pressure for stroke preparation was individually adjusted by a ball-machine and corresponded to 80% of maximum running speed. In two trials (T10, T15) separated by 2 weeks, the players completed 30 strokes and sprints subdivided into 6 x 5 repetitions with a 1 min rest between series. The rest between each stroke-and-sprint lasted either 10 s (T10) or 15 s (T15). The sequence of both conditions was randomized between participants. Post-exercise blood lactate concentration was significantly elevated in T10 (9.04 ± 3.06 vs 5.01 ± 1.35 mmol x l(-1), P < 0.01). Running time for stroke preparation (1.405 ± 0.044 vs 1.376 ± 0.045 s, P < 0.05) and stroke speed (106 ± 12 vs 114 ± 8 km x h(-1), P < 0.05) were significantly decreased in T10, while stroke precision (more target hits, P < 0.1, and fewer errors, P < 0.05) tended to be higher. We conclude that running speed and stroke quality during intermittent tennis drills are highly dependent on the duration of recovery time. Optimization of training efficacy in sport games (e.g. combined improvement of conditional and technical skills) requires skilful fine-tuning of monitoring guidelines.

  18. Chiron: translating nanopore raw signal directly into nucleotide sequence using deep learning.

    PubMed

    Teng, Haotian; Cao, Minh Duc; Hall, Michael B; Duarte, Tania; Wang, Sheng; Coin, Lachlan J M

    2018-05-01

    Sequencing by translocating DNA fragments through an array of nanopores is a rapidly maturing technology that offers faster and cheaper sequencing than other approaches. However, accurately deciphering the DNA sequence from the noisy and complex electrical signal is challenging. Here, we report Chiron, the first deep learning model to achieve end-to-end basecalling, translating the raw signal directly to a DNA sequence without the error-prone segmentation step. We show that our model, trained with only a small set of 4,000 reads, provides state-of-the-art basecalling accuracy, even on previously unseen species. Chiron achieves basecalling speeds of more than 2,000 bases per second using desktop computer graphics processing units.

  19. Recovery Discontinuous Galerkin Jacobian-Free Newton-Krylov Method for All-Speed Flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HyeongKae Park; Robert Nourgaliev; Vincent Mousseau

    2008-07-01

    A novel numerical algorithm (rDG-JFNK) for all-speed fluid flows with heat conduction and viscosity is introduced. The rDG-JFNK combines a Discontinuous Galerkin spatial discretization with implicit Runge-Kutta time integration under the Jacobian-free Newton-Krylov framework. We solve the fully compressible Navier-Stokes equations without operator-splitting of the hyperbolic, diffusion and reaction terms, which enables fully coupled high-order temporal discretization. The stability constraint is removed by the L-stable Explicit-first-stage, Singly Diagonally Implicit Runge-Kutta (ESDIRK) scheme. The governing equations are solved in conservative form, which allows one to accurately compute shock dynamics as well as low-speed flows. For spatial discretization, we develop a "recovery" family of DG, exhibiting nearly spectral accuracy. To precondition the Krylov-based linear solver (GMRES), we developed an Operator-Split (OS) Physics-Based Preconditioner (PBP), in which we transform and simplify the fully coupled system into a sequence of segregated scalar problems, each of which can be solved efficiently with a multigrid method. Each scalar problem is designed to target and cluster eigenvalues of the Jacobian matrix associated with a specific physics.
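
    The Jacobian-free Newton-Krylov idea itself is compact enough to sketch in Python on a toy two-equation system (this is not the rDG discretization or the physics-based preconditioner of the paper; the residual function, tolerances and perturbation size are illustrative assumptions):

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          """Toy nonlinear system F(u) = 0, standing in for a discretized PDE residual."""
          return np.array([u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0])

      def jfnk(u, tol=1e-10, eps=1e-7, max_newton=50):
          """Jacobian-free Newton-Krylov: Jacobian-vector products by finite differences."""
          for _ in range(max_newton):
              F = residual(u)
              if np.linalg.norm(F) < tol:
                  break
              # J(u) v is approximated by (F(u + eps*v) - F(u)) / eps, so the Jacobian
              # matrix is never formed; GMRES only needs these matrix-vector products.
              J = LinearOperator((u.size, u.size),
                                 matvec=lambda v: (residual(u + eps * v) - F) / eps)
              du, _ = gmres(J, -F)
              u = u + du
          return u

      print(jfnk(np.array([1.0, 1.0])))   # converges to the root (1, 2)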

  20. Wake structure and wing motion in bat flight

    NASA Astrophysics Data System (ADS)

    Hubel, Tatjana; Breuer, Kenneth; Swartz, Sharon

    2008-11-01

    We report on experiments concerning the wake structure and kinematics of bat flight, conducted in a low-speed wind tunnel using time-resolved PIV (200 Hz) and four high-speed cameras to capture wake and wing motion simultaneously. Sixteen lesser dog-faced fruit bats (C. brachyotis) were trained to fly in the wind tunnel at 3-6.5 m/s. The PIV recordings perpendicular to the flow stream allowed us to observe the development of the tip vortex and circulation over the wing beat cycle. Each PIV acquisition sequence is correlated with the respective kinematic history. Circulation within wing beat cycles was often quite repeatable; however, variations due to maneuvering of the bat are clearly visible. While no distinct vortex structure was observed at the upper reversal point (defined according to the vertical motion of the wrist), a tip vortex was observed to develop in the first third of the downstroke, growing in strength and persisting during much of the upstroke. Correlated with the presence of a strong tip vortex, the circulation has almost constant strength over the middle half of the wing beat. At relatively low flight speeds (3.4 m/s), a closed vortex structure behind the bat is postulated.

  1. Getting the jump on Mother nature: the advancing technology of applied botany will speed reclamation of mined land

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    By propagating native or indigenous plants in greenhouses on a speeded-up schedule, much guesswork is being taken out of revegetating mined land. Native Plants Inc., which tricks nature by reproducing the biological sequence of plants in a fraction of the natural time, offers reclaimers of disturbed land the plants they want in any quantity on very short notice. Prior to planting, seeds are treated to break their dormancy code. The plants are grown in tubepaks that promote root growth, grow both night and day, and have CO2 injected into the air. The company reports a 97% success rate in transplanting its seedlings. Cloning and the unlocking of the germination secrets of plants have great potential for speeding the restoration of mined land. Micropropagation (or tissue culture), whereby parts of a whole plant are removed, sterilized, and grown on a specific nutrient medium, is also being considered. This technology affords a high degree of selectivity and rapid plant propagation and has far-reaching implications for mine operators faced with the challenges of reclaiming mined land. (DP)

  2. “Superluminal” FITS File Processing on Multiprocessors: Zero Time Endian Conversion Technique

    NASA Astrophysics Data System (ADS)

    Eguchi, Satoshi

    2013-05-01

    FITS is the standard file format in astronomy, and it has been extended to meet the astronomical needs of the day. However, astronomical datasets have been inflating year by year. In the case of the ALMA telescope, a ~TB-scale four-dimensional data cube may be produced for one target. Considering that typical Internet bandwidth is tens of MB/s at most, the original data cubes in FITS format are hosted on a VO server, and the region in which a user is interested should be cut out and transferred to the user (Eguchi et al. 2012). The system will be equipped with a very high-speed disk array to process a TB-scale data cube in 10 s, so disk I/O, endian conversion, and data processing speeds will be comparable. Hence, reducing the endian conversion time is one of the issues to solve in our system. In this article, I introduce a technique named "just-in-time endian conversion", which delays the endian conversion of each pixel until just before it is actually needed, to sweep out the endian conversion time; by applying this method, the FITS processing speed increases by 20% for single threading and by 40% for multi-threading compared to CFITSIO. The speedup is closely tied to modern CPU architecture: it improves the efficiency of the instruction pipelines by breaking the "causality" of the programmed instruction code sequence.
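
    A loose NumPy analogue of the just-in-time idea is sketched below (the original work is an implementation inside a FITS cutout service; the array size, dtypes and the cutout slice here are made-up illustrations):

      import numpy as np

      # Pretend this is a large FITS data cube read straight from disk in big-endian
      # byte order, as stored in the file.
      n = 4_000_000
      big_endian = np.arange(n, dtype=">f4")

      # Eager approach: convert the whole cube to little-endian (native on most hosts)
      # up front, paying the full byte-swap cost even for pixels that are never used.
      native_all = big_endian.astype("<f4")

      # Just-in-time approach: keep the raw big-endian buffer and swap a pixel's bytes
      # only when it is actually read, e.g. when a small cutout is requested.
      cutout = big_endian[1_000_000:1_000_100]    # a view; no conversion has happened yet
      cutout_native = cutout.astype("<f4")        # only these 100 values are swapped

      assert np.array_equal(native_all[1_000_000:1_000_100], cutout_native)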

  3. HIA: a genome mapper using hybrid index-based sequence alignment.

    PubMed

    Choi, Jongpill; Park, Kiejung; Cho, Seong Beom; Chung, Myungguen

    2015-01-01

    A number of alignment tools have been developed to align sequencing reads to the human reference genome. The scale of data from next-generation sequencing (NGS) experiments, however, is increasing rapidly. Recent studies based on NGS technology have routinely produced exome or whole-genome sequences from several hundreds or thousands of samples. To accommodate the increasing need to analyze very large NGS data sets, it is necessary to develop faster, more sensitive and more accurate mapping tools. HIA uses two indices, a hash table index and a suffix array index. The hash table performs direct lookup of a q-gram, and the suffix array performs very fast lookup of variable-length strings by exploiting binary search. We observed that combining a hash table and a suffix array (a hybrid index) is much faster than the suffix array alone for finding a substring in the reference sequence. Here, we define the matching region (MR) as the longest common substring between a reference and a read, and the candidate alignment regions (CARs) as a list of MRs that are close to each other. The hybrid index is used to find CARs between a reference and a read. We found that aligning only the unmatched regions within a CAR is much faster than aligning the whole CAR. In benchmark analysis, HIA outperformed the other aligners in mapping speed without significant loss of mapping accuracy. Our experiments show that the hybrid of hash table and suffix array is useful in terms of speed for mapping NGS reads to the human reference genome sequence. In conclusion, our tool is appropriate for aligning the massive data sets generated by NGS sequencing.
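
    A toy Python sketch of the hybrid-index idea follows: a hash table gives direct lookup of a fixed-length q-gram, and the suffix array is then narrowed by binary search to extend the match. The reference string, read and q-gram length are made up, and the data structures are greatly simplified relative to HIA's actual implementation.

      from bisect import bisect_left, bisect_right

      REF = "ACGTACGTTTGACCGTAAGGCACGTACGA"
      Q = 4

      # Suffix array: starting positions of all suffixes of REF, in sorted order.
      sa = sorted(range(len(REF)), key=lambda i: REF[i:])

      # Hash table: q-gram -> (lo, hi) interval of suffix-array ranks sharing that prefix.
      qgram_index = {}
      for rank, pos in enumerate(sa):
          qg = REF[pos:pos + Q]
          if len(qg) == Q:
              lo, hi = qgram_index.get(qg, (rank, rank))
              qgram_index[qg] = (min(lo, rank), max(hi, rank))

      def longest_prefix_match(read):
          """Longest reference match of a prefix of `read`: hash lookup, then binary search."""
          if read[:Q] not in qgram_index:
              return None, 0
          lo, hi = qgram_index[read[:Q]]
          length = Q
          while length < len(read):
              prefix = read[:length + 1]
              # Suffixes in the current interval are sorted, so binary search narrows it.
              keys = [REF[sa[r]:sa[r] + len(prefix)] for r in range(lo, hi + 1)]
              new_lo, new_hi = bisect_left(keys, prefix), bisect_right(keys, prefix) - 1
              if new_lo > new_hi:
                  break
              lo, hi = lo + new_lo, lo + new_hi
              length += 1
          return sa[lo], length

      print(longest_prefix_match("ACGTACGA"))   # (match position in REF, matched length)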

  4. Massively parallel digital high resolution melt for rapid and absolutely quantitative sequence profiling

    NASA Astrophysics Data System (ADS)

    Velez, Daniel Ortiz; Mack, Hannah; Jupe, Julietta; Hawker, Sinead; Kulkarni, Ninad; Hedayatnia, Behnam; Zhang, Yang; Lawrence, Shelley; Fraley, Stephanie I.

    2017-02-01

    In clinical diagnostics and pathogen detection, profiling of complex samples for low-level genotypes represents a significant challenge. Advances in speed, sensitivity, and extent of multiplexing of molecular pathogen detection assays are needed to improve patient care. We report the development of an integrated platform enabling the identification of bacterial pathogen DNA sequences in complex samples in less than four hours. The system incorporates a microfluidic chip and instrumentation to accomplish universal PCR amplification, High Resolution Melting (HRM), and machine learning within 20,000 picoliter scale reactions, simultaneously. Clinically relevant concentrations of bacterial DNA molecules are separated by digitization across 20,000 reactions and amplified with universal primers targeting the bacterial 16S gene. Amplification is followed by HRM sequence fingerprinting in all reactions, simultaneously. The resulting bacteria-specific melt curves are identified by Support Vector Machine learning, and individual pathogen loads are quantified. The platform reduces reaction volumes by 99.995% and achieves a greater than 200-fold increase in dynamic range of detection compared to traditional PCR HRM approaches. Type I and II error rates are reduced by 99% and 100% respectively, compared to intercalating dye-based digital PCR (dPCR) methods. This technology could impact a number of quantitative profiling applications, especially infectious disease diagnostics.
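
    The melt-curve classification step can be sketched in a few lines of Python with scikit-learn: synthetic sigmoidal melt curves for two mock organisms are generated and a Support Vector Machine is trained on the raw curve shape. The curve model, melting temperatures, noise levels and train/test split are all assumptions for the illustration, not the paper's data or pipeline.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      temps = np.linspace(75, 95, 200)            # deg C, illustrative HRM window

      def melt_curve(tm, width=1.0):
          """Synthetic normalized melt curve: fluorescence drops sigmoidally around Tm."""
          return 1.0 / (1.0 + np.exp((temps - tm) / width))

      # Two mock 'species' distinguished by melting temperature, plus measurement noise.
      X, y = [], []
      for label, tm in [(0, 84.0), (1, 86.5)]:
          for _ in range(50):
              X.append(melt_curve(tm + rng.normal(0, 0.2)) + rng.normal(0, 0.01, temps.size))
              y.append(label)
      X, y = np.array(X), np.array(y)

      # Support Vector Machine on the raw curve shape, as described in the abstract.
      clf = SVC(kernel="rbf", gamma="scale").fit(X[::2], y[::2])     # train on half
      print("held-out accuracy:", clf.score(X[1::2], y[1::2]))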

  5. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

    This volume presents papers grouped under the topics on advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterizing high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  6. Wear behaviors of pure aluminum and extruded aluminum alloy (AA2024-T4) under variable vertical loads and linear speeds

    NASA Astrophysics Data System (ADS)

    Jung, Jeki; Oak, Jeong-Jung; Kim, Yong-Hwan; Cho, Yi Je; Park, Yong Ho

    2017-11-01

    The aim of this study was to investigate the transition of wear behavior for pure aluminum and extruded aluminum alloy 2024-T4 (AA2024-T4). The wear tests were carried out using a ball-on-disc wear testing machine at various vertical loads and linear speeds. The transition of wear behaviors was analyzed based on the microstructure, wear tracks, wear cross-sections, and wear debris. The critical wear rates for each material occurred at lower linear speeds for each vertical load. A transition of wear behavior was observed in which abrasion wear with the generation of an oxide layer, fracture of the oxide layer, adhesion wear, severe adhesion wear, and the onset of seizure occurred in sequence. In the case of pure aluminum, the wear debris changed in the order of blocky, flake, and needle-like debris. Cutting-chip, flake-like, and coarse flake-like debris occurred in sequence for the extruded AA2024-T4. The transition in the wear behavior of extruded AA2024-T4 occurred more slowly than in pure aluminum.

  7. RNA motif search with data-driven element ordering.

    PubMed

    Rampášek, Ladislav; Jimenez, Randi M; Lupták, Andrej; Vinař, Tomáš; Brejová, Broňa

    2016-05-18

    In this paper, we study the problem of RNA motif search in long genomic sequences. This approach uses a combination of sequence and structure constraints to uncover new distant homologs of known functional RNAs. The problem is NP-hard and is traditionally solved by backtracking algorithms. We have designed a new algorithm for RNA motif search and implemented a new motif search tool RNArobo. The tool enhances the RNAbob descriptor language, allowing insertions in helices, which enables better characterization of ribozymes and aptamers. A typical RNA motif consists of multiple elements and the running time of the algorithm is highly dependent on their ordering. By approaching the element ordering problem in a principled way, we demonstrate more than 100-fold speedup of the search for complex motifs compared to previously published tools. We have developed a new method for RNA motif search that allows for a significant speedup of the search of complex motifs that include pseudoknots. Such speed improvements are crucial at a time when the rate of DNA sequencing outpaces growth in computing. RNArobo is available at http://compbio.fmph.uniba.sk/rnarobo .

  8. Design and evaluation of an air traffic control Final Approach Spacing Tool

    NASA Technical Reports Server (NTRS)

    Davis, Thomas J.; Erzberger, Heinz; Green, Steven M.; Nedell, William

    1991-01-01

    This paper describes the design and simulator evaluation of an automation tool for assisting terminal radar approach controllers in sequencing and spacing traffic onto the final approach course. The automation tool, referred to as the Final Approach Spacing Tool (FAST), displays speed and heading advisories for arriving aircraft as well as sequencing information on the controller's radar display. The main functional elements of FAST are a scheduler that schedules and sequences the traffic, a four-dimensional trajectory synthesizer that generates the advisories, and a graphical interface that displays the information to the controller. FAST has been implemented on a high-performance workstation. It can be operated as a stand-alone system in the terminal radar approach control facility or as an element of a system integrated with automation tools in the air route traffic control center. FAST was evaluated by experienced air traffic controllers in a real-time air traffic control simulation. Simulation results summarized in the paper show that the automation tools significantly reduced controller workload and demonstrated a potential for an increase in landing rate.

  9. Developing course lecture notes on high-speed rail.

    DOT National Transportation Integrated Search

    2017-07-15

    1. Introduction
       a. World-wide Development of High-Speed Rail (Japan, Europe, China)
       b. High-speed Rail in the U.S.
    2. High-Speed Rail Infrastructure
       a. Geometric Design of High Speed Rail
          i. Horizontal Curve
          ii. Vertical Curve
          iii. Grade and Turnout ...

  10. Engaging Environments Enhance Motor Skill Learning in a Computer Gaming Task.

    PubMed

    Lohse, Keith R; Boyd, Lara A; Hodges, Nicola J

    2016-01-01

    Engagement during practice can motivate a learner to practice more, hence having indirect effects on learning through increased practice. However, it is not known whether engagement can also have a direct effect on learning when the amount of practice is held constant. To address this question, 40 participants played a video game that contained an embedded repeated sequence component, under either highly engaging conditions (the game group) or mechanically identical but less engaging conditions (the sterile group). The game environment facilitated retention over a 1-week interval. Specifically, the game group improved in both speed and accuracy for random and repeated trials, suggesting a general motor-related improvement, rather than a specific influence of engagement on implicit sequence learning. These data provide initial evidence that increased engagement during practice has a direct effect on generalized learning, improving retention and transfer of a complex motor skill.

  11. Integrating Genome-based Informatics to Modernize Global Disease Monitoring, Information Sharing, and Response

    PubMed Central

    Brown, Eric W.; Detter, Chris; Gerner-Smidt, Peter; Gilmour, Matthew W.; Harmsen, Dag; Hendriksen, Rene S.; Hewson, Roger; Heymann, David L.; Johansson, Karin; Ijaz, Kashef; Keim, Paul S.; Koopmans, Marion; Kroneman, Annelies; Wong, Danilo Lo Fo; Lund, Ole; Palm, Daniel; Sawanpanyalert, Pathom; Sobel, Jeremy; Schlundt, Jørgen

    2012-01-01

    The rapid advancement of genome technologies holds great promise for improving the quality and speed of clinical and public health laboratory investigations and for decreasing their cost. The latest generation of genome DNA sequencers can provide highly detailed and robust information on disease-causing microbes, and in the near future these technologies will be suitable for routine use in national, regional, and global public health laboratories. With additional improvements in instrumentation, these next- or third-generation sequencers are likely to replace conventional culture-based and molecular typing methods to provide point-of-care clinical diagnosis and other essential information for quicker and better treatment of patients. Provided there is free-sharing of information by all clinical and public health laboratories, these genomic tools could spawn a global system of linked databases of pathogen genomes that would ensure more efficient detection, prevention, and control of endemic, emerging, and other infectious disease outbreaks worldwide. PMID:23092707

  12. Automation, parallelism, and robotics for proteomics.

    PubMed

    Alterovitz, Gil; Liu, Jonathan; Chow, Jijun; Ramoni, Marco F

    2006-07-01

    The speed of the human genome project (Lander, E. S., Linton, L. M., Birren, B., Nusbaum, C. et al., Nature 2001, 409, 860-921) was made possible, in part, by developments in automation of sequencing technologies. Before these technologies, sequencing was a laborious, expensive, and personnel-intensive task. Similarly, automation and robotics are changing the field of proteomics today. Proteomics is defined as the effort to understand and characterize proteins in the categories of structure, function and interaction (Englbrecht, C. C., Facius, A., Comb. Chem. High Throughput Screen. 2005, 8, 705-715). As such, this field nicely lends itself to automation technologies since these methods often require large economies of scale in order to achieve cost and time-saving benefits. This article describes some of the technologies and methods being applied in proteomics in order to facilitate automation within the field as well as in linking proteomics-based information with other related research areas.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Weizhao; Zhang, Zixuan; Lu, Jie

    Carbon fiber composites have received growing attention because of their high performance. One economic method to manufacturing the composite parts is the sequence of forming followed by the compression molding process. In this sequence, the preforming procedure forms the prepreg, which is the composite with the uncured resin, to the product geometry while the molding process cures the resin. Slip between different prepreg layers is observed in the preforming step and this paper reports a method to characterize the properties of the interaction between different prepreg layers, which is critical to predictive modeling and design optimization. An experimental setup wasmore » established to evaluate the interactions at various industrial production conditions. The experimental results were analyzed for an in-depth understanding about how the temperature, the relative sliding speed, and the fiber orientation affect the tangential interaction between two prepreg layers. The interaction factors measured from these experiments will be implemented in the computational preforming program.« less

  14. GHz Yb:KYW oscillators in time-resolved spectroscopy

    NASA Astrophysics Data System (ADS)

    Li, Changxiu; Krauß, Nico; Schäfer, Gerhard; Ebner, Lukas; Kliebisch, Oliver; Schmidt, Johannes; Winnerl, Stephan; Hettich, Mike; Dekorsy, Thomas

    2018-02-01

    A high-speed asynchronous optical sampling system (ASOPS) based on Yb:KYW oscillators with a 1-GHz repetition rate is reported. Two frequency-offset-stabilized diode-pumped Yb:KYW oscillators are employed as the pump and probe sources, respectively. The temporal resolution of this system within a 1-ns time window is limited to 500 fs, and a noise floor around 10⁻⁶ (ΔR/R), close to the shot-noise level, is obtained within an acquisition time of a few seconds. Coherent acoustic phonons are investigated by measuring multilayer semiconductor structures with multiple quantum wells and aluminum/silicon membranes in this ASOPS system. A wavepacket-like phonon sequence in the 360 GHz range is detected in the semiconductor structures, and a decaying sequence of acoustic oscillations up to 200 GHz is obtained in the aluminum/silicon membranes. Coherent acoustic phonons generated from the semiconductor structures are further manipulated by a double-pump scheme through control of the pump time delay.

  15. Efficient alignment-free DNA barcode analytics

    PubMed Central

    Kuksa, Pavel; Pavlovic, Vladimir

    2009-01-01

    Background In this work we consider barcode DNA analysis problems and address them using alternative, alignment-free methods and representations which model sequences as collections of short sequence fragments (features). The methods use fixed-length representations (spectrum) for barcode sequences to measure similarities or dissimilarities between sequences coming from the same or different species. The spectrum-based representation not only allows for accurate and computationally efficient species classification, but also opens possibility for accurate clustering analysis of putative species barcodes and identification of critical within-barcode loci distinguishing barcodes of different sample groups. Results New alignment-free methods provide highly accurate and fast DNA barcode-based identification and classification of species with substantial improvements in accuracy and speed over state-of-the-art barcode analysis methods. We evaluate our methods on problems of species classification and identification using barcodes, important and relevant analytical tasks in many practical applications (adverse species movement monitoring, sampling surveys for unknown or pathogenic species identification, biodiversity assessment, etc.). On several benchmark barcode datasets, including ACG, Astraptes, Hesperiidae, Fish larvae, and Birds of North America, proposed alignment-free methods considerably improve prediction accuracy compared to prior results. We also observe significant running time improvements over the state-of-the-art methods. Conclusion Our results show that newly developed alignment-free methods for DNA barcoding can efficiently and with high accuracy identify specimens by examining only few barcode features, resulting in increased scalability and interpretability of current computational approaches to barcoding. PMID:19900305
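
    A minimal Python sketch of the spectrum (k-mer count) representation described above, compared with cosine similarity; the toy barcode fragments, the value k = 3 and the choice of similarity measure are assumptions for illustration, not the paper's exact kernels or datasets:

      from itertools import product
      import numpy as np

      def spectrum(seq, k=3, alphabet="ACGT"):
          """Fixed-length k-mer count vector (the 'spectrum' representation)."""
          index = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
          vec = np.zeros(len(index))
          for i in range(len(seq) - k + 1):
              kmer = seq[i:i + k]
              if kmer in index:                  # skip ambiguous bases such as 'N'
                  vec[index[kmer]] += 1
          return vec

      def cosine_similarity(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      # Toy barcode fragments (made up); the same-species pair should score higher.
      bar1 = "ACGTTGCAGGTACGTTGCAGGTAC"
      bar2 = "ACGTTGCAGGTACGTTGCAGGTAA"
      bar3 = "TTGGCCAATTGGCCAATTGGCCAA"

      print(cosine_similarity(spectrum(bar1), spectrum(bar2)))   # high
      print(cosine_similarity(spectrum(bar1), spectrum(bar3)))   # low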

  16. High speed sampler and demultiplexer

    DOEpatents

    McEwan, T.E.

    1995-12-26

    A high speed sampling demultiplexer based on a plurality of sampler banks, each bank comprising a sample transmission line for transmitting an input signal, a strobe transmission line for transmitting a strobe signal, and a plurality of sampling gates at respective positions along the sample transmission line for sampling the input signal in response to the strobe signal. Strobe control circuitry is coupled to the plurality of banks, and supplies a sequence of bank strobe signals to the strobe transmission lines in each of the plurality of banks, and includes circuits for controlling the timing of the bank strobe signals among the banks of samplers. Input circuitry is included for supplying the input signal to be sampled to the plurality of sample transmission lines in the respective banks. The strobe control circuitry can repetitively strobe the plurality of banks of samplers such that the banks of samplers are cycled to create a long sample length. Second tier demultiplexing circuitry is coupled to each of the samplers in the plurality of banks. The second tier demultiplexing circuitry senses the sample taken by the corresponding sampler each time the bank in which the sampler is found is strobed. A plurality of such samples can be stored by the second tier demultiplexing circuitry for later processing. Repetitive sampling with the high speed transient sampler induces an effect known as "strobe kickout". The sample transmission lines include structures which reduce strobe kickout to acceptable levels, generally 60 dB below the signal, by absorbing the kickout pulses before the next sampling repetition. 16 figs.

  17. An integrated SNP mining and utilization (ISMU) pipeline for next generation sequencing data.

    PubMed

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A V S K; Varshney, Rajeev K

    2014-01-01

    Open source single nucleotide polymorphism (SNP) discovery pipelines for next generation sequencing data commonly require a working knowledge of the command line interface, massive computational resources and considerable expertise, which is a daunting prospect for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline called Integrated SNP Mining and Utilization (ISMU) has been developed by integrating several open source next generation sequencing (NGS) tools with a graphical user interface for SNP discovery and utilization in developing genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction (SAMtools/SOAPsnp/CNS2snp and CbCC) methods and interfaces for developing genotyping assays. The pipeline outputs a list of high quality SNPs between all pairwise combinations of the genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and of errors, if any. The pipeline also provides a confidence score or polymorphism information content value with flanking sequences for identified SNPs in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets, such as whole genome re-sequencing, restriction site associated DNA sequencing and transcriptome sequencing data, at a fast speed. The pipeline is very useful for the plant genetics and breeding community with no computational expertise to discover SNPs and utilize them in genomics, genetics and breeding studies. The pipeline has been parallelized to process huge next generation sequencing datasets. It has been developed in Java and is available at http://hpc.icrisat.cgiar.org/ISMU as standalone free software.

  18. Unravel lipid accumulation mechanism in oleaginous yeast through single cell systems biology study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Xiaoliang; Ding, Shiyou

    Searching for alternative and clean energy is one of the most important tasks today. Our research aimed at finding the best living conditions for certain types of oleaginous yeasts for efficient lipid production. We found that R. glutinis yeast cells have great variability in lipid production among cells, while Y. lipolytica cells have similar oil production ability. We found that some individual cells show a much higher level of oil production. In order to further study these cases, we employed a label-free, chemically sensitive microscopy method called stimulated Raman scattering (SRS). With SRS, we could measure the lipid content in each cell. We combined SRS microscopy with a microfluidic device so that we could isolate cells with high fat content. We also developed an SRS imaging technique with higher imaging speed, which is highly desirable for high-throughput cell screening and sorting. Since these cells have similar genomes, it must be the transcriptome that causes their differences in oil production. We developed a single-cell transcriptome sequencing method to study which genes are responsible for elevated oil production. The methods developed for this project can easily be applied to many other areas of research. For example, the single-cell transcriptome method can be used to study the transcriptomes of other cell types. The high-speed SRS microscopy techniques can be used to speed up chemical imaging for label-free histology or for imaging the distribution of chemicals in the tissues of live mice or humans. The developed microfluidic platform can be used to sort other types of cells, e.g., white blood cells for the diagnosis of cancer or other blood diseases.

  19. An analysis of peak pelvis rotation speed, gluteus maximus and medius strength in high versus low handicap golfers during the golf swing.

    PubMed

    Callaway, Sarahann; Glaws, Kate; Mitchell, Melissa; Scerbo, Heather; Voight, Michael; Sells, Pat

    2012-06-01

    The kinematic sequence of the golf swing is an established principle that occurs in a proximal-to-distal pattern with power generation beginning with rotation of the pelvis. Few studies have correlated the influence of peak pelvis rotation to the skill level of the golfer. Furthermore, minimal research exists on the strength of the gluteal musculature and their ability to generate power during the swing. The purpose of this study was to explore the relationship between peak pelvis rotation, gluteus medius and gluteus maximus strength, and a golfer's handicap. Fifty-six healthy subjects participated. Each subject was assessed using a hand-held dynamometry device per standardized protocol to determine gluteus maximus and medius strength. The K-vest was placed on the subject with electromagnetic sensors at the pelvis, upper torso, and gloved lead hand to measure the rotational speed at each segment in degrees/second. After K-vest calibration and 5 practice swings, each subject hit 5 golf balls, during which the sensors measured pelvic rotation speed. A one-way ANOVA was performed to determine the relationships between peak pelvis rotation, gluteus medius and gluteus maximus strength, and golf handicap. A significant difference was found between the following dependent variables and golf handicap: peak pelvis rotation (p=0.000), gluteus medius strength (p=0.000), and gluteus maximus strength (p=0.000). Golfers with a low handicap are more likely to have increased pelvis rotation speed as well as increased gluteus maximus and medius strength when compared to high handicap golfers. The relationships between increased peak pelvis rotation and gluteus maximus and medius strength in low handicap golfers may have implications in designing golf training programs. Further research needs to be conducted in order to further explore these relationships.
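
    As a toy illustration of the one-way ANOVA used in this study, the sketch below compares simulated peak pelvis rotation speeds between hypothetical low- and high-handicap groups using scipy; the group means, spreads and sample sizes are invented and do not reproduce the study's data.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical peak pelvis rotation speeds (deg/s) for low- and high-handicap golfers.
low_handicap = rng.normal(520, 40, size=28)
high_handicap = rng.normal(450, 40, size=28)

f_stat, p_value = f_oneway(low_handicap, high_handicap)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 would indicate a group difference
```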

  20. High-speed plasma imaging: A lightning bolt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G.A.; Whiteson, D.O.

    Using a gated intensified digital Kodak Ektapro camera system, the authors captured a lightning bolt at 1,000 frames per second, with 100-µs exposure time on each consecutive frame. As a thunderstorm approached while darkness descended (7:50 pm) on July 21, 1994, they photographed lightning bolts with an f22 105-mm lens and 100% gain on the intensified camera. This 15-frame sequence shows a cloud-to-ground stroke at a distance of about 1.5 km, with a series of stepped leaders propagating downwards, followed by the upward-propagating main return stroke.

  1. Quality Control Methodology Of A Surface Wind Observational Database In North Eastern North America

    NASA Astrophysics Data System (ADS)

    Lucio-Eceiza, Etor E.; Fidel González-Rouco, J.; Navarro, Jorge; Conte, Jorge; Beltrami, Hugo

    2016-04-01

    This work summarizes the design and application of a Quality Control (QC) procedure for an observational surface wind database located in North Eastern North America. The database consists of 526 sites (486 land stations and 40 buoys) with varying resolutions of hourly, 3-hourly and 6-hourly data, compiled from three different source institutions with uneven measurement units and changing measuring procedures, instrumentation and heights. The records span from 1953 to 2010. The QC process is composed of different phases focused either on problems related to the providing source institutions or on measurement errors. The first phases deal with problems often related to data recording and management: (1) a compilation stage dealing with the detection of typographical errors, decoding problems, site displacements and the unification of institutional practices; (2) detection of erroneous data sequence duplications within a station or among different ones; (3) detection of errors related to physically unrealistic data measurements. The last phases are focused on instrumental errors: (4) problems related to low variability, placing particular emphasis on the detection of unrealistically low wind speed records with the help of regional references; (5) erroneous records related to high variability; (6) standardization of wind speed record biases due to changing measurement heights, detection of wind speed biases on weekly to monthly timescales, and homogenization of wind direction records. As a result, around 1.7% of wind speed records and 0.4% of wind direction records have been deleted, making a combined total of 1.9% of removed records. Additionally, around 15.9% of wind speed records and 2.4% of wind direction records have also been corrected.
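
    A minimal pandas sketch of the kinds of checks described in phases (2)-(4) is given below; the thresholds, window length and sample values are assumptions chosen for illustration and are not the criteria used in the actual QC procedure.

```python
import pandas as pd

# Hypothetical hourly wind record (assumed units: m/s and degrees).
df = pd.DataFrame({
    "speed":     [3.1, 3.1, 3.1, 3.1, 3.1, 75.0, -0.2, 4.6],
    "direction": [200, 200, 200, 200, 200, 210,  190, 185],
})

# (3) physically unrealistic measurements: negative speeds or speeds above a plausible ceiling
unrealistic = (df["speed"] < 0) | (df["speed"] > 60.0)

# (4) low-variability problems: identical speeds repeated over a 5-hour window
flatline = df["speed"].rolling(window=5).std(ddof=0).eq(0)

# (2) crude duplicate check: rows that exactly repeat the previous (speed, direction) pair
duplicate = df.eq(df.shift()).all(axis=1)

df["flagged"] = unrealistic | flatline | duplicate
print(df[df["flagged"]])
```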

  2. SSh versus TSE sequence protocol in rapid MR examination of pediatric patients with programmable drainage system.

    PubMed

    Brichtová, Eva; Šenkyřík, J

    2017-05-01

    A low radiation burden is essential during diagnostic procedures in pediatric patients due to their high tissue sensitivity. Using MR examination instead of the routinely used CT reduces the radiation exposure and the risk of adverse stochastic effects. Our retrospective study evaluated the possibility of using ultrafast single-shot (SSh) sequences and turbo spin echo (TSE) sequences in rapid MR brain imaging in pediatric patients with hydrocephalus and a programmable ventriculoperitoneal drainage system. SSh sequences seem to be suitable for examining pediatric patients due to the speed of this technique, but significant susceptibility artifacts due to the programmable drainage valve degrade the image quality. Therefore, a rapid MR examination protocol based on TSE sequences, less sensitive to artifacts due to ferromagnetic components, has been developed. Of 61 pediatric patients who were examined using MR and the SSh sequence protocol, a group of 15 patients with hydrocephalus and a programmable drainage system also underwent TSE sequence MR imaging. The susceptibility artifact volume in both rapid MR protocols was evaluated using a semiautomatic volumetry system. A statistically significant decrease in the susceptibility artifact volume has been demonstrated in TSE sequence imaging in comparison with SSh sequences. Using TSE sequences reduced the influence of artifacts from the programmable valve, and the image quality in all cases was rated as excellent. In all patients, rapid MR examinations were performed without any need for intravenous sedation or general anesthesia. Our study results strongly suggest the superiority of the TSE sequence MR protocol compared to the SSh sequence protocol in pediatric patients with a programmable ventriculoperitoneal drainage system due to a significant reduction of susceptibility artifact volume. Both rapid sequence MR protocols provide quick and satisfactory brain imaging with no ionizing radiation and a reduced need for intravenous sedation or general anesthesia.

  3. [Attention to speed and guide traffic signs with eye movements].

    PubMed

    Conchillo Jiménez, Ángela; Pérez-Moreno, Elisa; Recarte Goldaracena, Miguel Ángel

    2010-11-01

    The goal of this research is to describe the visual search patterns for diverse traffic signs. Twelve drivers of both genders and with different levels of driving experience took part in real driving research with an instrumented car equipped with an eye-tracking system. Looking at signs has a weak relation to speed reduction in cases where the actual driving speed was higher. Nevertheless, among the people who looked at the sign, the percentage of those who reduced their speed below the limit is greater than that of those who did not look at the sign. Guide traffic signs, particularly those mounted over the road, are glanced at more frequently than speed limit signs, with glance durations of more than one second, in sequences of more than two consecutive fixations. Implications for driving and the possibilities and limitations of eye movement analysis for traffic sign research are discussed.

  4. Nanopore-CMOS Interfaces for DNA Sequencing

    PubMed Central

    Magierowski, Sebastian; Huang, Yiyun; Wang, Chengjie; Ghafar-Zadeh, Ebrahim

    2016-01-01

    DNA sequencers based on nanopore sensors present an opportunity for a significant break from the template-based incumbents of the last forty years. Key advantages ushered by nanopore technology include a simplified chemistry and the ability to interface to CMOS technology. The latter opportunity offers substantial promise for improvement in sequencing speed, size and cost. This paper reviews existing and emerging means of interfacing nanopores to CMOS technology with an emphasis on massively-arrayed structures. It presents this in the context of incumbent DNA sequencing techniques, reviews and quantifies nanopore characteristics and models and presents CMOS circuit methods for the amplification of low-current nanopore signals in such interfaces. PMID:27509529

  5. Nanopore-CMOS Interfaces for DNA Sequencing.

    PubMed

    Magierowski, Sebastian; Huang, Yiyun; Wang, Chengjie; Ghafar-Zadeh, Ebrahim

    2016-08-06

    DNA sequencers based on nanopore sensors present an opportunity for a significant break from the template-based incumbents of the last forty years. Key advantages ushered by nanopore technology include a simplified chemistry and the ability to interface to CMOS technology. The latter opportunity offers substantial promise for improvement in sequencing speed, size and cost. This paper reviews existing and emerging means of interfacing nanopores to CMOS technology with an emphasis on massively-arrayed structures. It presents this in the context of incumbent DNA sequencing techniques, reviews and quantifies nanopore characteristics and models and presents CMOS circuit methods for the amplification of low-current nanopore signals in such interfaces.

  6. Hierarchical Traces for Reduced NSM Memory Requirements

    NASA Astrophysics Data System (ADS)

    Dahl, Torbjørn S.

    This paper presents work on using hierarchical long term memory to reduce the memory requirements of nearest sequence memory (NSM) learning, a previously published, instance-based reinforcement learning algorithm. A hierarchical memory representation reduces the memory requirements by allowing traces to share common sub-sequences. We present moderated mechanisms for estimating discounted future rewards and for dealing with hidden state using hierarchical memory. We also present an experimental analysis of how the sub-sequence length affects the memory compression achieved and show that the reduced memory requirements do not affect the speed of learning. Finally, we analyse and discuss the persistence of the sub-sequences independent of specific trace instances.
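
    To make the memory-sharing idea concrete, the sketch below stores traces in a simple prefix trie so that traces with a common prefix reuse the same nodes; it is an illustrative toy under assumed observations, not the NSM hierarchical memory itself, which shares arbitrary sub-sequences rather than only prefixes.

```python
class TrieNode:
    """One shared element of the stored traces; children are keyed by the next observation."""
    def __init__(self):
        self.children = {}
        self.q_value = 0.0  # hypothetical per-node estimate of discounted future reward

def insert(root, trace):
    """Insert a trace (sequence of observations); shared prefixes reuse existing nodes."""
    node = root
    for obs in trace:
        node = node.children.setdefault(obs, TrieNode())
    return node

def count_nodes(root):
    return 1 + sum(count_nodes(child) for child in root.children.values())

root = TrieNode()
traces = [("wall", "left", "open"), ("wall", "left", "goal"), ("wall", "right", "open")]
for t in traces:
    insert(root, t)

flat_cost = sum(len(t) for t in traces)  # memory if every trace were stored separately
print(count_nodes(root) - 1, "shared trie nodes vs", flat_cost, "flat elements")
```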

  7. DNA Base-Calling from a Nanopore Using a Viterbi Algorithm

    PubMed Central

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-01-01

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (∼98%), even with a poor signal/noise ratio. PMID:22677395
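
    The decoding step can be illustrated with a generic Viterbi decoder over a toy hidden Markov model; the two states, three observation symbols and all probabilities below are made-up placeholders rather than the 3-bp nanopore model described above.

```python
import numpy as np

def viterbi(log_start, log_trans, log_emit, observations):
    """Most likely hidden-state path for a sequence of observation indices."""
    n_states = log_start.shape[0]
    T = len(observations)
    score = np.full((T, n_states), -np.inf)
    back = np.zeros((T, n_states), dtype=int)
    score[0] = log_start + log_emit[:, observations[0]]
    for t in range(1, T):
        cand = score[t - 1][:, None] + log_trans            # cand[i, j]: come from i, go to j
        back[t] = np.argmax(cand, axis=0)
        score[t] = cand[back[t], np.arange(n_states)] + log_emit[:, observations[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy model: 2 hidden "k-mer" states, 3 discretized current levels (all values invented).
log_start = np.log([0.6, 0.4])
log_trans = np.log([[0.7, 0.3], [0.4, 0.6]])
log_emit = np.log([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(log_start, log_trans, log_emit, [0, 1, 2, 2, 0]))
```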

  8. Magnetic characterization of the stator core of a high-speed motor made of an ultrathin electrical steel sheet using the magnetic property evaluation system

    NASA Astrophysics Data System (ADS)

    Oka, Mohachiro; Enokizono, Masato; Mori, Yuji; Yamazaki, Kazumasa

    2018-04-01

    Recently, the application areas for electric motors have been expanding. For instance, electric motors are used in new technologies such as rovers, drones, cars, and robots. The motors used in such machinery should be small, high-powered, highly efficient, and high-speed. In such motors, losses at high-speed rotation must be kept especially small. Eddy-current loss in the stator core is known to increase greatly at high-speed rotation of the motor. To produce an efficient high-speed motor, we are developing a stator core made of an ultrathin electrical steel sheet with only a small amount of eddy-current loss. Furthermore, magnetic property evaluation of efficient high-speed motor stator cores at conventional commercial frequencies is insufficient. Thus, we built a new high-speed magnetic property evaluation system to evaluate the magnetic properties of the efficient high-speed motor stator core. This system is composed of high-speed A/D converters, D/A converters, and a high-speed power amplifier. In experiments, the ultrathin electrical steel sheet dramatically suppressed iron loss and, in particular, eddy-current loss. In addition, the new high-speed magnetic property evaluation system accurately evaluated the magnetic properties of the efficient high-speed motor stator core.

  9. Application of viromics: a new approach to the understanding of viral infections in humans.

    PubMed

    Ramamurthy, Mageshbabu; Sankar, Sathish; Kannangai, Rajesh; Nandagopal, Balaji; Sridharan, Gopalan

    2017-12-01

    This review is focused on exploring the strengths of modern technology-driven data compiled in the areas of virus gene sequencing and virus protein structures, and their implications for viral diagnosis and therapy. The information for virome analysis (viromics) is generated by the study of viral genomes (the entire nucleotide sequence) and viral genes (coding for protein). Presently, the study of viral infectious diseases, in terms of etiopathogenesis and the development of newer therapeutics, is undergoing rapid change. Currently, viromics relies on deep sequencing, next generation sequencing (NGS) data and public domain databases like GenBank, as well as dedicated virus-specific databases. Two commonly used NGS platforms, Illumina and Ion Torrent, recommend maximum fragment lengths of about 300 and 400 nucleotides for analysis, respectively. Direct detection of viruses in clinical samples is now evolving using these methods. Presently, there are a considerable number of good treatment options for HBV/HIV/HCV. These viruses, however, show development of drug resistance. The drug susceptibility regions of the genomes are sequenced, and the prediction of drug resistance is now possible from three public-domain resources available on the web. This has been made possible through advances in technology, with the advent of high-throughput sequencing and meta-analysis through sophisticated, easy-to-use software and the use of high-speed computers for bioinformatics. More recently, NGS technology has been improved with single-molecule real-time sequencing, in which complete long reads can be obtained with fewer errors, overcoming a limitation of NGS, which is inherently prone to software anomalies that arise in the hands of personnel without adequate training. The development in understanding viruses in terms of their genome, pathobiology, transcriptomics and molecular epidemiology constitutes viromics. These developments will bring about radical changes and advances, especially in the fields of antiviral therapy and diagnostic virology.

  10. 78 FR 22031 - California High-Speed Rail Authority-Construction Exemption-In Merced, Madera and Fresno Counties...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-12

    ... High-Speed Rail Authority--Construction Exemption--In Merced, Madera and Fresno Counties, CA AGENCY... High-Speed Rail Authority (Authority). This Final EIS is titled ``California High-Speed Train: Merced... Final EIS assesses the potential environmental impacts of constructing and operating a high-speed...

  11. Theta Neurofeedback Effects on Motor Memory Consolidation and Performance Accuracy: An Apparent Paradox?

    PubMed

    Reiner, Miriam; Lev, Dror D; Rosen, Amit

    2018-05-15

    Previous studies have shown that theta neurofeedback enhances motor memory consolidation on an easy-to-learn finger-tapping task. However, the simplicity of the finger-tapping task precludes evaluating the putative effects of elevated theta on performance accuracy. Mastering a motor sequence is classically assumed to entail faster performance with fewer errors. The speed-accuracy tradeoff (SAT) principle states that as action speed increases, motor performance accuracy decreases. The current study investigated whether theta neurofeedback could improve both performance speed and performance accuracy, or would only enhance performance speed at the cost of reduced accuracy. A more complex task was used to study the effects of parietal elevated theta on 45 healthy volunteers. The findings confirmed previous results on the effects of theta neurofeedback on memory consolidation. In contrast to the two control groups, in the theta-neurofeedback group the speed-accuracy tradeoff was reversed. The speed-accuracy tradeoff patterns only stabilized after a night's sleep, implying enhancement in terms of both speed and accuracy. Copyright © 2017 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Robust optical flow using adaptive Lorentzian filter for image reconstruction under noisy condition

    NASA Astrophysics Data System (ADS)

    Kesrarat, Darun; Patanavijit, Vorapoj

    2017-02-01

    In optical flow for motion estimation, the reliability of the resulting Motion Vectors (MVs) is an important issue. Several noisy conditions may cause unreliable results in optical flow algorithms. We find that many classical optical flow algorithms perform better under noisy conditions when combined with a modern optimization model. This paper introduces robust models of optical flow that apply an adaptive Lorentzian norm influence function to simple spatio-temporal optical flow algorithms. Experiments on the proposed models confirm better noise tolerance in the optical flow MVs under noisy conditions when they are applied to simple spatio-temporal optical flow algorithms as a filtering model in a simple frame-to-frame correlation technique. We illustrate the performance of the models in experiments on several typical sequences with different foreground and background movement speeds, where the test sequences are contaminated by additive white Gaussian noise (AWGN) at different noise levels in decibels (dB). The results, measured by peak signal-to-noise ratio (PSNR), show the high effectiveness of the noise-tolerant models.
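
    For reference, a commonly used form of the Lorentzian robust penalty and its influence function can be written in a few lines; the sigma value and residuals below are arbitrary examples, and the adaptive selection of the norm parameter described above is not represented.

```python
import numpy as np

def lorentzian_rho(x, sigma):
    """Lorentzian penalty: roughly quadratic for small residuals, logarithmic for outliers."""
    return np.log1p(0.5 * (x / sigma) ** 2)

def lorentzian_psi(x, sigma):
    """Influence function (d rho / d x): large residuals receive progressively less weight."""
    return 2.0 * x / (2.0 * sigma ** 2 + x ** 2)

residuals = np.array([0.1, 0.5, 1.0, 5.0, 20.0])  # hypothetical brightness-constancy residuals
print(lorentzian_rho(residuals, sigma=1.0))
print(lorentzian_psi(residuals, sigma=1.0))
```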

  13. Simultaneous non-contiguous deletions using large synthetic DNA and site-specific recombinases

    PubMed Central

    Krishnakumar, Radha; Grose, Carissa; Haft, Daniel H.; Zaveri, Jayshree; Alperovich, Nina; Gibson, Daniel G.; Merryman, Chuck; Glass, John I.

    2014-01-01

    Toward achieving rapid and large scale genome modification directly in a target organism, we have developed a new genome engineering strategy that uses a combination of bioinformatics aided design, large synthetic DNA and site-specific recombinases. Using Cre recombinase we swapped a target 126-kb segment of the Escherichia coli genome with a 72-kb synthetic DNA cassette, thereby effectively eliminating over 54 kb of genomic DNA from three non-contiguous regions in a single recombination event. We observed complete replacement of the native sequence with the modified synthetic sequence through the action of the Cre recombinase and no competition from homologous recombination. Because of the versatility and high-efficiency of the Cre-lox system, this method can be used in any organism where this system is functional as well as adapted to use with other highly precise genome engineering systems. Compared to present-day iterative approaches in genome engineering, we anticipate this method will greatly speed up the creation of reduced, modularized and optimized genomes through the integration of deletion analyses data, transcriptomics, synthetic biology and site-specific recombination. PMID:24914053

  14. Dual-slit confocal light sheet microscopy for in vivo whole-brain imaging of zebrafish

    PubMed Central

    Yang, Zhe; Mei, Li; Xia, Fei; Luo, Qingming; Fu, Ling; Gong, Hui

    2015-01-01

    In vivo functional imaging at single-neuron resolution is an important approach to visualize biological processes in neuroscience. Light sheet microscopy (LSM) is a cutting edge in vivo imaging technique that provides micron-scale spatial resolution at high frame rates. Due to the scattering and absorption of tissue, however, conventional LSM is inadequate to resolve cells because of the attenuated signal-to-noise ratio (SNR). Using dual-beam illumination and confocal dual-slit detection, a dual-slit confocal LSM is demonstrated here that obtains SNR-enhanced images at a frame rate twice as high as that of the line confocal LSM method. Through theoretical calculations and experiments, the correlation between the slit width and the SNR was determined to optimize the image quality. In vivo whole brain structural imaging stacks and functional imaging sequences of a single slice were obtained for analysis of calcium activities at single-cell resolution. A two-fold increase in imaging speed over conventional confocal LSM makes it possible to capture the sequence of the neurons’ activities and helps reveal potential functional connections in the whole zebrafish brain.

  15. Absolute Position Encoders With Vertical Image Binning

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas B.

    2005-01-01

    Improved optoelectronic pattern-recognition encoders that measure rotary and linear 1-dimensional positions at conversion rates (numbers of readings per unit time) exceeding 20 kHz have been invented. Heretofore, optoelectronic pattern-recognition absolute-position encoders have been limited to conversion rates <15 Hz -- too low for emerging industrial applications in which conversion rates ranging from 1 kHz to as much as 100 kHz are required. The high conversion rates of the improved encoders are made possible, in part, by use of vertically compressible or binnable (as described below) scale patterns in combination with modified readout sequences of the image sensors [charge-coupled devices (CCDs)] used to read the scale patterns. The modified readout sequences and the processing of the images thus read out are amenable to implementation by use of modern, high-speed, ultra-compact microprocessors and digital signal processors or field-programmable gate arrays. This combination of improvements makes it possible to greatly increase conversion rates through substantial reductions in all three components of conversion time: exposure time, image-readout time, and image-processing time.

  16. Speeding-up Bioinformatics Algorithms with Heterogeneous Architectures: Highly Heterogeneous Smith-Waterman (HHeterSW).

    PubMed

    Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel

    2016-10-01

    The Smith-Waterman algorithm has great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in the literature that exploit the different hardware architectures available in a standard PC, such as the GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementations of the Smith-Waterman algorithm on different hardware architectures, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution and up to a 2.58-fold performance gain compared with any other algorithm for searching sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: an Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
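
    For context, the quadratic-time Smith-Waterman recurrence that such heterogeneous implementations accelerate can be written directly; the scoring parameters are arbitrary, and the sketch returns only the best local score, without traceback and without any of the hardware splitting described above.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Minimal Smith-Waterman: best local-alignment score of sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# The database-splitting idea could be mimicked on a CPU by scoring chunks of a sequence
# database in parallel (e.g. with concurrent.futures) and merging the per-chunk best scores.
print(smith_waterman("GATTACA", "GCATGCU"))
```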

  17. High-throughput SNP genotyping in the highly heterozygous genome of Eucalyptus: assay success, polymorphism and transferability across species

    PubMed Central

    2011-01-01

    Background: High-throughput SNP genotyping has become an essential requirement for molecular breeding and population genomics studies in plant species. Large scale SNP developments have been reported for several mainstream crops. A growing interest now exists to expand the speed and resolution of genetic analysis to outbred species with highly heterozygous genomes. When nucleotide diversity is high, a refined diagnosis of the target SNP sequence context is needed to convert queried SNPs into high-quality genotypes using the Golden Gate Genotyping Technology (GGGT). This issue becomes exacerbated when attempting to transfer SNPs across species, a scarcely explored topic in plants that is likely to become significant for population genomics and interspecific breeding applications in less domesticated and less funded plant genera. Results: We have successfully developed the first set of 768 SNPs assayed by the GGGT for the highly heterozygous genome of Eucalyptus from a mixed Sanger/454 database with 1,164,695 ESTs and the preliminary 4.5X draft genome sequence for E. grandis. A systematic assessment of in silico SNP filtering requirements showed that stringent constraints on the SNP surrounding sequences have a significant impact on SNP genotyping performance and polymorphism. SNP assay success was high for the 288 SNPs selected with more rigorous in silico constraints; 93% of them provided high quality genotype calls and 71% of them were polymorphic in a diverse panel of 96 individuals of five different species. SNP reliability was high across nine Eucalyptus species belonging to three sections within subgenus Symphomyrtus and still satisfactory across species of two additional subgenera, although polymorphism declined as phylogenetic distance increased. Conclusions: This study indicates that the GGGT performs well both within and across species of Eucalyptus notwithstanding its nucleotide diversity ≥2%. The development of a much larger array of informative SNPs across multiple Eucalyptus species is feasible, although strongly dependent on having a representative and sufficiently deep collection of sequences from many individuals of each target species. A higher density SNP platform will be instrumental to undertake genome-wide phylogenetic and population genomics studies and to implement molecular breeding by Genomic Selection in Eucalyptus.

  18. 4DCAPTURE: a general purpose software package for capturing and analyzing two- and three-dimensional motion data acquired from video sequences

    NASA Astrophysics Data System (ADS)

    Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake

    2003-07-01

    4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges). The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or they can be exported to a spreadsheet where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats. The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.

  19. Ancestral sequence reconstruction in primate mitochondrial DNA: compositional bias and effect on functional inference.

    PubMed

    Krishnan, Neeraja M; Seligmann, Hervé; Stewart, Caro-Beth; De Koning, A P Jason; Pollock, David D

    2004-10-01

    Reconstruction of ancestral DNA and amino acid sequences is an important means of inferring information about past evolutionary events. Such reconstructions suggest changes in molecular function and evolutionary processes over the course of evolution and are used to infer adaptation and convergence. Maximum likelihood (ML) is generally thought to provide relatively accurate reconstructed sequences compared to parsimony, but both methods lead to the inference of multiple directional changes in nucleotide frequencies in primate mitochondrial DNA (mtDNA). To better understand this surprising result, as well as to better understand how parsimony and ML differ, we constructed a series of computationally simple "conditional pathway" methods that differed in the number of substitutions allowed per site along each branch, and we also evaluated the entire Bayesian posterior frequency distribution of reconstructed ancestral states. We analyzed primate mitochondrial cytochrome b (Cyt-b) and cytochrome oxidase subunit I (COI) genes and found that ML reconstructs ancestral frequencies that are often more different from tip sequences than are parsimony reconstructions. In contrast, frequency reconstructions based on the posterior ensemble more closely resemble extant nucleotide frequencies. Simulations indicate that these differences in ancestral sequence inference are probably due to deterministic bias caused by high uncertainty in the optimization-based ancestral reconstruction methods (parsimony, ML, Bayesian maximum a posteriori). In contrast, ancestral nucleotide frequencies based on an average of the Bayesian set of credible ancestral sequences are much less biased. The methods involving simpler conditional pathway calculations have slightly reduced likelihood values compared to full likelihood calculations, but they can provide fairly unbiased nucleotide reconstructions and may be useful in more complex phylogenetic analyses than considered here due to their speed and flexibility. To determine whether biased reconstructions using optimization methods might affect inferences of functional properties, ancestral primate mitochondrial tRNA sequences were inferred and helix-forming propensities for conserved pairs were evaluated in silico. For ambiguously reconstructed nucleotides at sites with high base composition variability, ancestral tRNA sequences from Bayesian analyses were more compatible with canonical base pairing than were those inferred by other methods. Thus, nucleotide bias in reconstructed sequences apparently can lead to serious bias and inaccuracies in functional predictions.
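
    The compositional bias discussed above can be illustrated with a toy calculation: when the per-site posterior only mildly favours one base, reporting the single best state at every site exaggerates that base's overall frequency relative to averaging over the posterior. The probabilities below are simulated placeholders, not reconstructions from the primate data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-site posterior probabilities for A, C, G, T at 1000 ancestral sites,
# with a mild A bias at every site.
posteriors = rng.dirichlet(alpha=[4, 2, 2, 2], size=1000)

map_states = posteriors.argmax(axis=1)                        # optimization-style reconstruction
map_freqs = np.bincount(map_states, minlength=4) / len(map_states)
avg_freqs = posteriors.mean(axis=0)                           # posterior-averaged composition

print("MAP-based frequencies:        ", np.round(map_freqs, 3))
print("Posterior-average frequencies:", np.round(avg_freqs, 3))
# The MAP composition exaggerates the most probable base, illustrating the bias discussed above.
```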

  20. Explicit instruction of rules interferes with visuomotor skill transfer.

    PubMed

    Tanaka, Kanji; Watanabe, Katsumi

    2017-06-01

    In the present study, we examined the effects of explicit knowledge, obtained through instruction or spontaneous detection, on the transfer of visuomotor sequence learning. In the learning session, participants learned a visuomotor sequence, via trial and error. In the transfer session, the order of the sequence was reversed from that of the learning session. Before the commencement of the transfer session, some participants received explicit instruction regarding the reversal rule (i.e., Instruction group), while the others did not receive any information and were sorted into either an Aware or Unaware group, as assessed by interview conducted after the transfer session. Participants in the Instruction and Aware groups performed with fewer errors than the Unaware group in the transfer session. The participants in the Instruction group showed slower speed than the Aware and Unaware groups in the transfer session, and the sluggishness likely persisted even in late learning. These results suggest that explicit knowledge reduces errors in visuomotor skill transfer, but may interfere with performance speed, particularly when explicit knowledge is provided, as opposed to being spontaneously discovered.

  1. BlochSolver: A GPU-optimized fast 3D MRI simulator for experimentally compatible pulse sequences

    NASA Astrophysics Data System (ADS)

    Kose, Ryoichi; Kose, Katsumi

    2017-08-01

    A magnetic resonance imaging (MRI) simulator, which reproduces MRI experiments using computers, has been developed using two graphics processing unit (GPU) boards (GTX 1080). The MRI simulator was developed to run according to pulse sequences used in experiments. Experiments and simulations were performed to demonstrate the usefulness of the MRI simulator for three types of pulse sequences, namely, three-dimensional (3D) gradient-echo, 3D radio-frequency spoiled gradient-echo, and gradient-echo multislice with practical matrix sizes. The results demonstrated that the calculation speed using two GPU boards was typically about 7 TFLOPS and about 14 times faster than the calculation speed using CPUs (two 18-core Xeons). We also found that MR images acquired by experiment could be reproduced using an appropriate number of subvoxels, and that 3D isotropic and two-dimensional multislice imaging experiments for practical matrix sizes could be simulated using the MRI simulator. Therefore, we concluded that such powerful MRI simulators are expected to become an indispensable tool for MRI research and development.
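
    The core of such a simulator is the repeated application of rotation and relaxation to many isochromats. A minimal CPU-only sketch of one free-precession step is shown below; the T1/T2 values, off-resonance frequencies and step size are toy assumptions, and gradients, RF pulses and the GPU optimization of the actual simulator are not represented.

```python
import numpy as np

def bloch_step(M, dt, off_resonance_hz, T1, T2, M0=1.0):
    """Advance magnetization vectors M (N x 3) by dt: rotate about z by the off-resonance
    phase, then apply T2 decay to the transverse part and T1 recovery to Mz."""
    phi = 2.0 * np.pi * off_resonance_hz * dt           # precession angle per isochromat
    c, s = np.cos(phi), np.sin(phi)
    mx = c * M[:, 0] - s * M[:, 1]
    my = s * M[:, 0] + c * M[:, 1]
    e2, e1 = np.exp(-dt / T2), np.exp(-dt / T1)
    return np.stack([mx * e2, my * e2, M[:, 2] * e1 + M0 * (1.0 - e1)], axis=1)

# Hypothetical voxel of 5 isochromats, tipped into the transverse plane by a 90-degree pulse.
M = np.tile([1.0, 0.0, 0.0], (5, 1))
offsets = np.linspace(-20, 20, 5)                        # Hz, a made-up field inhomogeneity
for _ in range(100):                                     # 100 steps of 0.1 ms
    M = bloch_step(M, 1e-4, offsets, T1=1.0, T2=0.1)
print("net transverse signal:", complex(M[:, 0].sum(), M[:, 1].sum()))
```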

  2. Asymmetric multiscale multifractal analysis of wind speed signals

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaonei; Zeng, Ming; Meng, Qinghao

    We develop a new method called asymmetric multiscale multifractal analysis (A-MMA) to explore the multifractality and asymmetric autocorrelations of signals over a variable scale range. Three numerical experiments are provided to demonstrate the effectiveness of our approach. The proposed method is then applied to investigate the multifractality and asymmetric autocorrelations of difference sequences between wind speed fluctuations with uptrends or downtrends. The results show that these sequences are far more complex and contain abundant information on fractal dynamics. By analyzing the Hurst surfaces of nine difference sequences, we find that all series exhibit multifractal properties and multiscale structures. Meanwhile, asymmetric autocorrelations are observed over all variable scale ranges and the asymmetry results are consistent within a certain spatial range. The sources of multifractality and asymmetry in the nine difference series are further discussed using the corresponding shuffled and surrogate series. We conclude that the multifractality of these series is due to both long-range autocorrelation and a broad probability density function, but the major source of multifractality is long-range autocorrelation, and the source of asymmetry is affected by the spatial distance.

  3. Development of conditioning programs for dressage horses based on time-motion analysis of competitions.

    PubMed

    Clayton, H M

    1993-05-01

    The time-motion characteristics of Canadian basic- and medium-level dressage competitions are described, and the results are applied in formulating sport-specific conditioning programs. One competition was analyzed at the six levels from basic 1 to medium 3. Each test was divided into a series of sequences based on the type and speed of activity. The durations of the sequences were measured from videotapes. The basic-level tests had fewer sequences, and they were shorter in distance and duration than the medium tests (P < 0.10), but the average speed did not differ between the two levels. It is recommended that horses competing at the basic levels be conditioned using 5-min exercise periods, with short (10-s) bursts of lengthened trot and canter included at basic 2 and above. In preparation for medium-level competitions, the duration of the work periods increases to 7 min, 10- to 12-s bursts of medium or extended trot and canter are included, and transitions are performed frequently to simulate the energy expenditure in overcoming inertia.

  4. BALSA: integrated secondary analysis for whole-genome and whole-exome sequencing, accelerated by GPU.

    PubMed

    Luo, Ruibang; Wong, Yiu-Lun; Law, Wai-Chun; Lee, Lap-Kei; Cheung, Jeanno; Liu, Chi-Man; Lam, Tak-Wah

    2014-01-01

    This paper reports an integrated solution, called BALSA, for the secondary analysis of next generation sequencing data; it exploits the computational power of GPU and an intricate memory management to give a fast and accurate analysis. From raw reads to variants (including SNPs and Indels), BALSA, using just a single computing node with a commodity GPU board, takes 5.5 h to process 50-fold whole genome sequencing (∼750 million 100 bp paired-end reads), or just 25 min for 210-fold whole exome sequencing. BALSA's speed is rooted in its parallel algorithms, which effectively exploit a GPU to speed up processes like alignment, realignment and statistical testing. BALSA incorporates a 16-genotype model to support the calling of SNPs and Indels and achieves competitive variant calling accuracy and sensitivity when compared to the ensemble of six popular variant callers. BALSA also supports efficient identification of somatic SNVs and CNVs; experiments showed that BALSA recovers all the previously validated somatic SNVs and CNVs, and it is more sensitive for somatic Indel detection. BALSA outputs variants in VCF format. A pileup-like SNAPSHOT format, while maintaining the same fidelity as BAM in variant calling, enables efficient storage and indexing, and facilitates the App development of downstream analyses. BALSA is available at: http://sourceforge.net/p/balsa.

  5. Ion implantation effects in 'cosmic' dust grains

    NASA Technical Reports Server (NTRS)

    Bibring, J. P.; Langevin, Y.; Maurette, M.; Meunier, R.; Jouffrey, B.; Jouret, C.

    1974-01-01

    Cosmic dust grains, whatever their origin may be, have probably suffered a complex sequence of events including exposure to high doses of low-energy nuclear particles and cycles of turbulent motions. High-voltage electron microscope observations of micron-sized grains either naturally exposed to space environmental parameters on the lunar surface or artificially subjected to space simulated conditions strongly suggest that such events could drastically modify the mineralogical composition of the grains and considerably ease their aggregation during collisions at low speeds. Furthermore, combined mass spectrometer and ionic analyzer studies show that small carbon compounds can be both synthesized during the implantation of a mixture of low-energy D, C, N ions in various solids and released in space by ion sputtering.

  6. Testing and performance analysis of a 650-Mbps quaternary pulse position modulation (QPPM) modem for free-space laser communications

    NASA Astrophysics Data System (ADS)

    Mortensen, Dale J.

    1995-04-01

    The testing and performance of a prototype modem developed at NASA Lewis Research Center for high-speed free-space direct detection optical communications is described. The testing was performed under laboratory conditions using computer control with specially developed test equipment that simulates free-space link conditions. The modem employs quaternary pulse position modulation at 325 Megabits per second (Mbps) on two optical channels, which are multiplexed to transmit a single 650 Mbps data stream. The measured results indicate that the receiver's automatic gain control (AGC), phase-locked-loop slot clock recovery, digital symbol clock recovery, matched filtering, and maximum likelihood data recovery circuits were found to have only 1.5 dB combined implementation loss during bit-error-rate (BER) performance measurements. Pseudo random bit sequences and real-time high quality video sources were used to supply 650 Mbps and 325 Mbps data streams to the modem. Additional testing revealed that Doppler frequency shifting can be easily tracked by the receiver, that simulated pointing errors are readily compensated for by the AGC circuits, and that channel timing skew affects the BER performance in an expected manner. Overall, the needed technologies for a high-speed laser communications modem were demonstrated.
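
    To make the modulation format concrete, the sketch below maps bit pairs onto four-slot QPPM symbol frames and recovers them by picking the most energetic slot; it illustrates only the symbol mapping and ignores the clock recovery, AGC and two-channel multiplexing handled by the actual modem.

```python
# Quaternary PPM: each pair of data bits selects which of four time slots carries the pulse.
def qppm_modulate(bits):
    assert len(bits) % 2 == 0
    symbols = []
    for i in range(0, len(bits), 2):
        slot = (bits[i] << 1) | bits[i + 1]           # 2 bits -> slot index 0..3
        frame = [0, 0, 0, 0]
        frame[slot] = 1                               # one optical pulse per 4-slot symbol frame
        symbols.append(frame)
    return symbols

def qppm_demodulate(symbols):
    bits = []
    for frame in symbols:
        slot = max(range(4), key=lambda k: frame[k])  # pick the brightest slot (maximum likelihood)
        bits += [(slot >> 1) & 1, slot & 1]
    return bits

data = [1, 0, 1, 1, 0, 0]
frames = qppm_modulate(data)
assert qppm_demodulate(frames) == data
print(frames)
```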

  7. Functional sequencing read annotation for high precision microbiome analysis

    PubMed Central

    Zhu, Chengsheng; Miller, Maximilian; Marpaka, Srinayani; Vaysberg, Pavel; Rühlemann, Malte C; Wu, Guojun; Heinsen, Femke-Anouska; Tempel, Marie; Zhao, Liping; Lieb, Wolfgang; Franke, Andre; Bromberg, Yana

    2018-01-01

    Abstract The vast majority of microorganisms on Earth reside in often-inseparable environment-specific communities—microbiomes. Meta-genomic/-transcriptomic sequencing could reveal the otherwise inaccessible functionality of microbiomes. However, existing analytical approaches focus on attributing sequencing reads to known genes/genomes, often failing to make maximal use of available data. We created faser (functional annotation of sequencing reads), an algorithm that is optimized to map reads to molecular functions encoded by the read-correspondent genes. The mi-faser microbiome analysis pipeline, combining faser with our manually curated reference database of protein functions, accurately annotates microbiome molecular functionality. mi-faser’s minutes-per-microbiome processing speed is significantly faster than that of other methods, allowing for large scale comparisons. Microbiome function vectors can be compared between different conditions to highlight environment-specific and/or time-dependent changes in functionality. Here, we identified previously unseen oil degradation-specific functions in BP oil-spill data, as well as functional signatures of individual-specific gut microbiome responses to a dietary intervention in children with Prader–Willi syndrome. Our method also revealed variability in Crohn's Disease patient microbiomes and clearly distinguished them from those of related healthy individuals. Our analysis highlighted the microbiome role in CD pathogenicity, demonstrating enrichment of patient microbiomes in functions that promote inflammation and that help bacteria survive it. PMID:29194524

  8. Comprehensive Oculomotor Behavioral Response Assessment (COBRA)

    NASA Technical Reports Server (NTRS)

    Stone, Leland S. (Inventor); Liston, Dorion B. (Inventor)

    2017-01-01

    An eye movement-based methodology and assessment tool may be used to quantify many aspects of human dynamic visual processing using a relatively simple and short oculomotor task, noninvasive video-based eye tracking, and validated oculometric analysis techniques. By examining the eye movement responses to a task including a radially-organized, appropriately randomized sequence of Rashbass-like step-ramp pursuit-tracking trials, distinct performance measurements may be generated that may be associated with, for example, pursuit initiation (e.g., latency and open-loop pursuit acceleration), steady-state tracking (e.g., gain, catch-up saccade amplitude, and the proportion of the steady-state response consisting of smooth movement), direction tuning (e.g., oblique effect amplitude, horizontal-vertical asymmetry, and direction noise), and speed tuning (e.g., speed responsiveness and noise). This quantitative approach may provide fast results (e.g., a multi-dimensional set of oculometrics and a single scalar impairment index) that can be interpreted by one without a high degree of scientific sophistication or extensive training.

  9. SU-E-J-155: Utilizing Varian TrueBeam Developer Mode for the Quantification of Mechanical Limits and the Simulation of 4D Respiratory Motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moseley, D; Dave, M

    Purpose: Use Varian TrueBeam Developer mode to quantify the mechanical limits of the couch and to simulate 4D respiratory motion. Methods: An in-house MATLAB based GUI was created to make the BEAM XML files. The couch was moved in a triangular wave in the S/I direction with varying amplitudes (1mm, 5mm, 10mm, and 50mm) and periods (3s, 6s, and 9s). The periods were determined by specifying the speed. The theoretical positions were compared to the values recorded by the machine at 50 Hz. HD videos were taken for certain tests as external validation. 4D respiratory motion was simulated by an A/P MV beam being delivered while the couch moved in an elliptical manner. The ellipse had a major axis of 2 cm (S/I) and a minor axis of 1 cm (A/P). Results: The path planned by the TrueBeam deviated from the theoretical triangular form as the speed increased. Deviations were noticed starting at a speed of 3.33 cm/s (50mm amplitude, 6s period). The greatest deviation occurred in the 50mm-3s sequence with a correlation value of −0.13 and a 27% time increase; the plan essentially became out of phase. Excluding these two, the plans had correlation values of 0.99. The elliptical sequence effectively simulated a respiratory pattern with a period of 6s. The period could be controlled by changing the speeds or the dose rate. Conclusion: The work first shows the quantification of the mechanical limits of the couch and the speeds at which the proposed plans begin to deviate. These limits must be kept in mind when programming other couch sequences. The methodology can be used to quantify the limits of other axes. Furthermore, the work shows the possibility of creating 4D respiratory simulations without using specialized phantoms or motion platforms. This can be further developed to program patient-specific breathing patterns.

  10. 36 CFR 1192.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 36 Parks, Forests, and Public Property 3 2010-07-01 2010-07-01 false High-speed rail cars... TRANSPORTATION VEHICLES Other Vehicles and Systems § 1192.175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including but not limited to those using “maglev” or high speed...

  11. 36 CFR 1192.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 36 Parks, Forests, and Public Property 3 2012-07-01 2012-07-01 false High-speed rail cars... TRANSPORTATION VEHICLES Other Vehicles and Systems § 1192.175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including but not limited to those using “maglev” or high speed...

  12. 36 CFR 1192.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 36 Parks, Forests, and Public Property 3 2011-07-01 2011-07-01 false High-speed rail cars... TRANSPORTATION VEHICLES Other Vehicles and Systems § 1192.175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including but not limited to those using “maglev” or high speed...

  13. 36 CFR § 1192.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 36 Parks, Forests, and Public Property 3 2013-07-01 2012-07-01 true High-speed rail cars... TRANSPORTATION VEHICLES Other Vehicles and Systems § 1192.175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including but not limited to those using “maglev” or high speed...

  14. 36 CFR 1192.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 36 Parks, Forests, and Public Property 3 2014-07-01 2014-07-01 false High-speed rail cars... TRANSPORTATION VEHICLES Other Vehicles and Systems § 1192.175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including but not limited to those using “maglev” or high speed...

  15. Sam2bam: High-Performance Framework for NGS Data Preprocessing Tools

    PubMed Central

    Cheng, Yinhe; Tzeng, Tzy-Hwa Kathy

    2016-01-01

    This paper introduces a high-throughput software tool framework called sam2bam that enables users to significantly speed up pre-processing for next-generation sequencing data. The sam2bam is especially efficient on single-node multi-core large-memory systems. It can reduce the runtime of data pre-processing in marking duplicate reads on a single node system by 156–186x compared with de facto standard tools. The sam2bam consists of parallel software components that can fully utilize multiple processors, available memory, high-bandwidth storage, and hardware compression accelerators, if available. The sam2bam provides file format conversion between well-known genome file formats, from SAM to BAM, as a basic feature. Additional features such as analyzing, filtering, and converting input data are provided by using plug-in tools, e.g., duplicate marking, which can be attached to sam2bam at runtime. We demonstrated that sam2bam could significantly reduce the runtime of next generation sequencing (NGS) data pre-processing from about two hours to about one minute for a whole-exome data set on a 16-core single-node system using up to 130 GB of memory. The sam2bam could reduce the runtime of NGS data pre-processing from about 20 hours to about nine minutes for a whole-genome sequencing data set on the same system using up to 711 GB of memory. PMID:27861637

  16. Quantitative Study for the Surface Dehydration of Vocal Folds Based on High-Speed Imaging.

    PubMed

    Li, Lin; Zhang, Yu; Maytag, Allison L; Jiang, Jack J

    2015-07-01

    To quantitatively estimate, from the perspective of the glottal area and the mucosal wave, differences in vocal fold vibratory activity during phonation at three different dehydration levels. Controlled study with three sets of tests. A dehydration experiment on 10 excised canine larynges was conducted at 16 cm H2O. According to the dehydration cycle time (H), dehydration levels were divided into three degrees (0% H, 50% H, 75% H). The glottal area and mucosal wave at the three dehydration levels were extracted from high-speed images and digital videokymography (DKG) image sequences. Direct and non-direct amplitude components were derived from the glottal areas. The amplitude and frequency of the mucosal wave were calculated from the DKG image sequences. These parameters at the three dehydration levels were compared statistically. The results showed a significant difference in the direct (P = 0.001; P = 0.005) and non-direct (P = 0.005; P = 0.016) components of the glottal areas between every two different dehydration levels. Considering the right-upper, right-lower, left-upper, and left-lower portions of the vocal fold, the amplitudes of the mucosal waves consistently decreased with increasing dehydration level. However, there was no significant difference in frequency. Surface dehydration can give rise to complex changes in vocal fold tissue and vibratory mechanics, which need to be analyzed from multiple perspectives. The results suggest that combining the glottal area and the mucosal wave is better suited to studying changes of the vocal fold at different dehydration levels, and could become a crucial research tool for the clinical treatment of dehydration-induced laryngeal pathologies. Copyright © 2015 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  17. An improved filtering algorithm for big read datasets and its application to single-cell assembly.

    PubMed

    Wedemeyer, Axel; Kliemann, Lasse; Srivastav, Anand; Schielke, Christian; Reusch, Thorsten B; Rosenstiel, Philip

    2017-07-03

    For single-cell or metagenomic sequencing projects, it is necessary to sequence with a very high mean coverage in order to make sure that all parts of the sample DNA get covered by the reads produced. This leads to huge datasets with lots of redundant data. Filtering this data prior to assembly is advisable. Brown et al. (2012) presented the algorithm Diginorm for this purpose, which filters reads based on the abundance of their k-mers. We present Bignorm, a faster and quality-conscious read filtering algorithm. An important new algorithmic feature is the use of phred quality scores together with a detailed analysis of the k-mer counts to decide which reads to keep. We qualify and recommend parameters for our new read filtering algorithm. Guided by these parameters, we remove a median of 97.15% of the reads while keeping the mean phred score of the filtered dataset high. Using the SPAdes assembler, we produce assemblies of high quality from these filtered datasets in a fraction of the time needed for an assembly from the datasets filtered with Diginorm. We conclude that read filtering is a practical and efficient method for reducing read data and for speeding up the assembly process. This applies not only to single-cell assembly, as shown in this paper, but also to other projects with high mean coverage datasets, such as metagenomic sequencing projects. Our Bignorm algorithm allows assemblies of competitive quality in comparison to Diginorm, while being much faster. Bignorm is available for download at https://git.informatik.uni-kiel.de/axw/Bignorm .
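
    A loosely inspired, quality-aware k-mer filter is sketched below to show the flavour of such read filtering; the k-mer length, abundance cap, phred cutoff and novelty ratio are invented parameters, and the published Bignorm algorithm makes its keep/discard decision differently.

```python
from collections import defaultdict

K = 21              # k-mer length (a typical choice; Bignorm's actual parameters may differ)
ABUNDANCE_CAP = 20  # k-mers seen at least this often are considered redundant
MIN_QUAL = 20       # phred threshold below which a base's k-mers are ignored

kmer_counts = defaultdict(int)

def keep_read(seq, quals):
    """Keep the read if enough of its confidently-called k-mers are still rare."""
    novel = total = 0
    for i in range(len(seq) - K + 1):
        if min(quals[i:i + K]) < MIN_QUAL:       # skip k-mers containing low-quality calls
            continue
        total += 1
        if kmer_counts[seq[i:i + K]] < ABUNDANCE_CAP:
            novel += 1
    if total == 0 or novel / total < 0.1:        # mostly redundant -> drop the read
        return False
    for i in range(len(seq) - K + 1):            # accepted reads update the k-mer counts
        kmer_counts[seq[i:i + K]] += 1
    return True

print(keep_read("ACGT" * 10, [30] * 40))         # first read of its kind -> kept
```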

  18. Deep Packet/Flow Analysis using GPUs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Qian; Wu, Wenji; DeMar, Phil

    Deep packet inspection (DPI) faces severe performance challenges in high-speed networks (40/100 GE), as it requires a large amount of raw computing power and high I/O throughput. Recently, researchers have tentatively used GPUs to address these issues and boost the performance of DPI. Typically, DPI applications involve highly complex operations at both the per-packet and per-flow data level, often in real time. The parallel architecture of GPUs fits exceptionally well for per-packet network traffic processing. However, for stateful network protocols such as TCP, the data streams need to be reconstructed at a per-flow level to deliver a consistent content analysis. Since the flow-centric operations are naturally antiparallel and often require large memory space for buffering out-of-sequence packets, they can be problematic for GPUs, whose memory is normally limited to several gigabytes. In this work, we present a highly efficient GPU-based deep packet/flow analysis framework. The proposed design includes purely GPU-implemented flow tracking and TCP stream reassembly. Instead of buffering and waiting for TCP packets to become in-sequence, our framework processes the packets in batches and uses a deterministic finite automaton (DFA) with a prefix-/suffix-tree method to detect patterns across out-of-sequence packets that happen to be located in different batches. Evaluation shows that our code can reassemble and forward tens of millions of packets per second and conduct stateful signature-based deep packet inspection at 55 Gbit/s using an NVIDIA K40 GPU.
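
    As a much-simplified CPU analogue of carrying match state across packet boundaries, the sketch below runs a KMP-style matching automaton over successive payload chunks while preserving its state between them; the pattern and chunks are made up, and the GPU batching, flow tracking and prefix-/suffix-tree machinery of the actual framework are not represented.

```python
def build_kmp(pattern):
    """Failure links for a KMP matcher, usable as a small string-matching automaton."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def scan_chunk(chunk, pattern, fail, state):
    """Scan one payload chunk; 'state' carries match progress across chunk boundaries."""
    hits = []
    for pos, byte in enumerate(chunk):
        while state and byte != pattern[state]:
            state = fail[state - 1]
        if byte == pattern[state]:
            state += 1
        if state == len(pattern):
            hits.append(pos)                     # the pattern ends at this offset in the chunk
            state = fail[state - 1]
    return hits, state

pattern = b"attack"
fail = build_kmp(pattern)
state = 0
for chunk in (b"...att", b"ack payload attack..."):  # the signature straddles the first boundary
    hits, state = scan_chunk(chunk, pattern, fail, state)
    print(hits)
```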

  19. Changes in the flagellar bundling time account for variations in swimming behavior of flagellated bacteria in viscous media

    NASA Astrophysics Data System (ADS)

    Qu, Zijie; Temel, Fatma; Henderikx, Rene; Breuer, Kenneth

    2017-11-01

    The motility of the bacterium E. coli in viscous fluids has been widely studied, although conflicting results on the effect of viscosity on swimming speed abound. The swimming mode of wild-type E. coli is idealized as a run-and-tumble sequence in which periods of straight swimming at a constant speed are randomly interrupted by a tumble, defined as a sudden change of direction at very low speed. Using a tracking microscope, we follow cells for extended times and find that the swimming behavior of a single cell can exhibit a variety of behaviors, including run-and-tumble and "slow-random-walk", in which the cells move at relatively low speed without the characteristic run. Although the characteristic swimming speed varies between individuals and in different polymer solutions, we find that the skewness of the speed distribution is solely a function of viscosity and uniquely determines the ratio of the average speed to the characteristic run speed. Using Resistive Force Theory and the cell-specific measured characteristic run speed, we show that differences in the swimming behavior observed in solutions of different viscosity are due to changes in the flagellar bundling time, which increases as the viscosity rises, due to the lower rotation rate of the flagellar motor. National Science Foundation.

  20. ‘Postage-stamp PIV’: small velocity fields at 400 kHz for turbulence spectra measurements

    NASA Astrophysics Data System (ADS)

    Beresh, Steven J.; Henfling, John F.; Spillers, Russell W.; Spitzer, Seth M.

    2018-03-01

    Time-resolved particle image velocimetry recently has been demonstrated in high-speed flows using a pulse-burst laser at repetition rates reaching 50 kHz. Turbulent behavior can be measured at still higher frequencies if the field of view is greatly reduced and lower laser pulse energy is accepted. Current technology allows image acquisition at 400 kHz for sequences exceeding 4000 frames but for an array of only 128 × 120 pixels, giving the moniker of ‘postage-stamp PIV’. The technique has been tested far downstream of a supersonic jet exhausting into a transonic crossflow. Two-component measurements appear valid until 120 kHz, at which point a noise floor emerges whose magnitude is dependent on the reduction of peak locking. Stereoscopic measurement offers three-component data for turbulent kinetic energy spectra, but exhibits a reduced signal bandwidth and higher noise in the out-of-plane component due to the oblique camera images. The resulting spectra reveal two regions exhibiting power-law dependence describing the turbulent decay. The frequency response of the present measurement configuration exceeds nearly all previous velocimetry measurements in high speed flow.
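
    For reference, a velocity power spectrum of the kind discussed above is commonly estimated from a time-resolved record with Welch's method. The sketch below uses a synthetic signal sampled at the 400 kHz rate quoted above; the tone frequency and noise level are arbitrary, and the scipy call is only one of several reasonable estimator choices:

      # Sketch: turbulence-style power spectrum from a time-resolved velocity signal
      # using Welch's method.  The signal here is synthetic.
      import numpy as np
      from scipy.signal import welch

      fs = 400_000.0                        # samples per second (400 kHz)
      t = np.arange(2 ** 14) / fs
      rng = np.random.default_rng(0)
      u = 5.0 * np.sin(2 * np.pi * 30_000 * t) + rng.normal(scale=2.0, size=t.size)

      f, psd = welch(u, fs=fs, nperseg=2048)    # frequency axis and spectral density
      peak = f[np.argmax(psd)]
      print(f"spectral peak near {peak / 1000:.1f} kHz")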

  1. Multiple DNA and protein sequence alignment on a workstation and a supercomputer.

    PubMed

    Tajima, K

    1988-11-01

    This paper describes a multiple alignment method using a workstation and a supercomputer. The method is based on the alignment of a set of aligned sequences with a new sequence, and uses a recursive procedure of such alignments. The alignment is executed in reasonable computation time on platforms ranging from a workstation to a supercomputer, and is evaluated in terms of both alignment results and the computational speed gained by parallel processing. The application of the algorithm is illustrated by several examples of multiple alignment of 12 amino acid and DNA sequences of HIV (human immunodeficiency virus) env genes. Colour graphic programs on a workstation and parallel processing on a supercomputer are discussed.

  2. An optimized and low-cost FPGA-based DNA sequence alignment--a step towards personal genomics.

    PubMed

    Shah, Hurmat Ali; Hasan, Laiq; Ahmad, Nasir

    2013-01-01

    DNA sequence alignment is a cardinal process in computational biology but is also computationally expensive when performed on traditional computational platforms such as CPUs. Of the many off-the-shelf platforms explored for speeding up the computation, the FPGA stands out as the best candidate due to its performance per dollar spent and performance per watt. These two advantages make the FPGA the most appropriate choice for realizing the aim of personal genomics. Previous FPGA implementations of DNA sequence alignment did not take into consideration the price of the device on which the optimization was performed. This paper presents optimizations over a previous FPGA implementation that increase the overall speed-up achieved while also taking into account the price incurred by the platform being optimized. The optimizations are: (1) the array of processing elements is made to run on changes in input value rather than on a clock, eliminating the need for tight clock synchronization; (2) the implementation is unrestrained by the size of the sequences to be aligned; (3) the waiting time required for the sequences to load onto the FPGA is reduced to the minimum possible; and (4) an efficient method is devised to store the output matrix that makes it possible to save the diagonal elements to be used in the next pass, in parallel with the computation of the output matrix. Implemented on a Spartan3 FPGA, this implementation achieved a 20 times performance improvement in terms of CUPS over a GPP implementation.
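
    For reference, the recurrence that such an array of processing elements typically parallelizes (assuming the standard Smith-Waterman local-alignment algorithm, which the abstract does not name explicitly) is sketched below in plain Python with illustrative match, mismatch, and gap scores; on an FPGA, the cells of each anti-diagonal can be computed simultaneously:

      # Plain Smith-Waterman local-alignment recurrence -- the computation that a
      # systolic array of processing elements parallelizes along anti-diagonals.
      # Match/mismatch/gap scores are illustrative.

      def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
          rows, cols = len(a) + 1, len(b) + 1
          H = [[0] * cols for _ in range(rows)]
          best = 0
          for i in range(1, rows):
              for j in range(1, cols):
                  diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
                  best = max(best, H[i][j])
          return best

      print(smith_waterman("ACACACTA", "AGCACACA"))   # best local-alignment score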

  3. Assessment of potential aerodynamic effects on personnel and equipment in proximity to high-speed train operations : safety of high-speed ground transportation systems

    DOT National Transportation Integrated Search

    1999-12-01

    Amtrak is planning to provide high-speed passenger train service at speeds significantly higher than their current top speed of 125 mph, and with these higher speeds, there are concerns with safety from the aerodynamic effects created by a passing tr...

  4. DNA base-calling from a nanopore using a Viterbi algorithm.

    PubMed

    Timp, Winston; Comer, Jeffrey; Aksimentiev, Aleksei

    2012-05-16

    Nanopore-based DNA sequencing is the most promising third-generation sequencing method. It has superior read length, speed, and sample requirements compared with state-of-the-art second-generation methods. However, base-calling still presents substantial difficulty because the resolution of the technique is limited compared with the measured signal/noise ratio. Here we demonstrate a method to decode 3-bp-resolution nanopore electrical measurements into a DNA sequence using a Hidden Markov model. This method shows tremendous potential for accuracy (~98%), even with a poor signal/noise ratio. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.
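
    For orientation, the sketch below is a generic log-space Viterbi decoder over a toy two-state HMM; the states, transitions, and emission model are placeholders and do not reproduce the 3-bp nanopore model used in the paper:

      # Generic log-space Viterbi decoder for a small HMM.  The states and emission
      # model here are toy values, not the 3-bp nanopore model of the paper.
      import math

      def viterbi(observations, states, log_start, log_trans, log_emit):
          """Return the most probable state path for the observation sequence."""
          V = [{s: log_start[s] + log_emit[s][observations[0]] for s in states}]
          back = [{}]
          for t in range(1, len(observations)):
              V.append({})
              back.append({})
              for s in states:
                  prev, score = max(
                      ((p, V[t - 1][p] + log_trans[p][s]) for p in states),
                      key=lambda x: x[1])
                  V[t][s] = score + log_emit[s][observations[t]]
                  back[t][s] = prev
          last = max(states, key=lambda s: V[-1][s])
          path = [last]
          for t in range(len(observations) - 1, 0, -1):
              path.append(back[t][path[-1]])
          return list(reversed(path))

      # Toy two-state example with "lo"/"hi" signal levels as observations.
      lg = math.log
      states = ["A", "C"]
      log_start = {"A": lg(0.5), "C": lg(0.5)}
      log_trans = {"A": {"A": lg(0.7), "C": lg(0.3)}, "C": {"A": lg(0.4), "C": lg(0.6)}}
      log_emit = {"A": {"lo": lg(0.8), "hi": lg(0.2)}, "C": {"lo": lg(0.3), "hi": lg(0.7)}}
      print(viterbi(["lo", "hi", "hi"], states, log_start, log_trans, log_emit))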

  5. X-Raying the Beating Heart of a Newborn Star: Rotational Modulation of High-Energy Radiation from V1647 Ori

    NASA Technical Reports Server (NTRS)

    Hamaguchi, Kenji; Grosso, Nicolas; Kastner, Joel H.; Weintraub, David A.; Richmond, Michael; Petre, Robert; Teets, William K.; Principe, David

    2012-01-01

    We report a periodicity of approx. 1 day in the highly elevated X-ray emission from the protostar V1647 Ori during its two recent multiple-year outbursts of mass accretion. This periodicity is indicative of protostellar rotation at near-breakup speed. Modeling of the phased X-ray light curve indicates that the high-temperature (~50 MK), X-ray-emitting plasma, which is most likely heated by accretion-induced magnetic reconnection, resides in dense (~5 × 10^10 cm^-3), pancake-shaped magnetic footprints where the accretion stream feeds the newborn star. The sustained X-ray periodicity of V1647 Ori demonstrates that such protostellar magnetospheric accretion configurations can be stable over timescales of years. Subject headings: stars: formation; stars: individual (V1647 Ori); stars: pre-main sequence; X-rays: stars

  6. Modeling of Beams’ Multiple-Contact Mode with an Application in the Design of a High-g Threshold Microaccelerometer

    PubMed Central

    Li, Kai; Chen, Wenyuan; Zhang, Weiping

    2011-01-01

    Beam’s multiple-contact mode, characterized by multiple and discrete contact regions, non-uniform stopper heights, an irregular contact sequence, a seesaw-like effect, indirect interaction between different stoppers, and a complex coupling relationship between loads and deformation, is studied. A novel analysis method and a novel high-speed calculation model are developed for the multiple-contact mode under mechanical load and electrostatic load, without limitations on stopper height and distribution, provided the beam has a stepped or curved shape. Accurate values of deflection, contact load, contact region and so on are obtained directly, with a subsequent validation by CoventorWare. A new concept design of a high-g threshold microaccelerometer based on the multiple-contact mode is presented, featuring multiple acceleration thresholds for one sensitive component and consequently a small sensor size. PMID:22163897

  7. 2008 13th Expeditionary Warfare Conference

    DTIC Science & Technology

    2008-10-23

    Ships 6 Joint High Speed Vessel (JHSV) • Program Capability – High-speed lift ship capable of transporting cargo and personnel across intra... high-speed aluminum trimaran hullform that enables the ship to reach sustainable speeds of over 40 knots and range in excess of 3,500 nautical miles...advancing concepts for a very high-speed, manned submersible,

  8. Optical diagnostics of mercury jet for an intense proton target.

    PubMed

    Park, H; Tsang, T; Kirk, H G; Ladeinde, F; Graves, V B; Spampinato, P T; Carroll, A J; Titus, P H; McDonald, K T

    2008-04-01

    An optical diagnostic system is designed and constructed for imaging a free mercury jet interacting with a high-intensity proton beam in a pulsed high-field solenoid magnet. The optical imaging system employs a back-illuminated, laser shadow photography technique. Object illumination and image capture are transmitted through radiation-hard multimode optical fibers and flexible coherent imaging fibers. A retroreflected illumination design allows the entire passive imaging system to fit inside the bore of the solenoid magnet. A sequence of synchronized short laser light pulses is used to freeze the transient events, and the images are recorded by several high-speed charge coupled devices. Quantitative and qualitative data analysis using image processing based on a probability approach is described. The characteristics of the free mercury jet as a high-power target for beam-jet interaction at various levels of the magnetic induction field are reported in this paper.

  9. [Separation and identification of bovine lactoferricin by high performance liquid chromatography-matrix-assisted laser desorption/ionization time of flight/ time of flight mass spectrometry].

    PubMed

    An, Meichen; Liu, Ning

    2010-02-01

    A high performance liquid chromatography-matrix-assisted laser desorption/ionization time of flight/time of flight mass spectrometry (HPLC-MALDI-TOF/TOF MS) method was developed for the separation and identification of bovine lactoferricin (LfcinB). Bovine lactoferrin was hydrolyzed by pepsin and then separated by ion exchange chromatography and reversed-phase liquid chromatography (RP-LC). The antibacterial activities of the fractions from the RP-LC separation were determined and the protein concentration of the fraction with the highest activity was measured; its sequence was identified by MALDI-TOF/TOF MS. The relative molecular mass of LfcinB was 3 124.89 and the protein concentration was 18.20 microg/mL. The method of producing LfcinB proposed in this study is fast and offers high accuracy and high resolution.

  10. High-speed and ultrahigh-speed cinematographic recording techniques

    NASA Astrophysics Data System (ADS)

    Miquel, J. C.

    1980-12-01

    A survey is presented of various high-speed and ultrahigh-speed cinematographic recording systems (covering a range of speeds from 100 to 14 million pps). Attention is given to the functional and operational characteristics of cameras and to details of high-speed cinematography techniques (including image processing and illumination). A list of cameras (many of them French) available in 1980 is presented.

  11. Development of a DC propulsion system for an electric vehicle

    NASA Technical Reports Server (NTRS)

    Kelledes, W. L.

    1984-01-01

    The suitability of the Eaton automatically shifted mechanical transaxle concept for use in a near-term dc powered electric vehicle is evaluated. A prototype dc propulsion system for a passenger electric vehicle was designed, fabricated, tested, installed in a modified Mercury Lynx vehicle and track tested at the contractor's site. The system consisted of a two-axis, three-speed, automatically-shifted mechanical transaxle, a 15.2 kW rated, separately excited traction motor, and a transistorized motor controller with a single chopper providing limited armature current below motor base speed and full range field control above base speed at up to twice rated motor current. The controller utilized a microprocessor to perform motor and vehicle speed monitoring and shift sequencing by means of solenoids applying hydraulic pressure to the transaxle clutches. Bench dynamometer and track testing was performed. Track testing showed best system efficiency for steady-state cruising speeds of 65-80 km/h (40-50 mph). Test results include acceleration, steady speed and SAE J227A/D cycle energy consumption, braking tests and coast down to characterize the vehicle road load.

  12. Evaluation of the Performance Characteristics of the CGLSS and NLDN Systems Based on Two Years of Ground-Truth Data from Launch Complex 39B, Kennedy Space Center, Florida

    NASA Technical Reports Server (NTRS)

    Mata, Carlos T.; Hill, Jonathan D.; Mata, Angel G.; Cummins, Kenneth L.

    2014-01-01

    From May 2011 through July 2013, the lightning instrumentation at Launch Complex 39B (LC39B) at the Kennedy Space Center, Florida, has obtained high-speed video records and field change waveforms (dE/dt and three-axis dH/dt) for 54 negative polarity return strokes whose strike termination locations and times are known with accuracies of the order of 10 m or less and 1 µs, respectively. A total of 18 strokes terminated directly on the LC39B lightning protection system (LPS), which contains three 181 m towers in a triangular configuration, an overhead catenary wire system on insulating masts, and nine down conductors. An additional 9 strokes terminated on the 106 m lightning protection mast of Launch Complex 39A (LC39A), which is located about 2.7 km southeast of LC39B. The remaining 27 return strokes struck either the ground or low-elevation grounded objects within about 500 m of the LC39B LPS. Leader/return stroke sequences were imaged at 3200 frames/sec by a network of six Phantom V310 high-speed video cameras. Each of the three towers on LC39B had two high-speed cameras installed at the 147 m level with overlapping fields of view of the center of the pad. The locations of the strike points of the 54 return strokes have been compared to time-correlated reports of the Cloud-to-Ground Lightning Surveillance System (CGLSS) and the National Lightning Detection Network (NLDN), and the results of this comparison will be presented and discussed.

  13. FliMax, a novel stimulus device for panoramic and highspeed presentation of behaviourally generated optic flow.

    PubMed

    Lindemann, J P; Kern, R; Michaelis, C; Meyer, P; van Hateren, J H; Egelhaaf, M

    2003-03-01

    A high-speed panoramic visual stimulation device is introduced which is suitable to analyse visual interneurons during stimulation with rapid image displacements as experienced by fast moving animals. The responses of an identified motion sensitive neuron in the visual system of the blowfly to behaviourally generated image sequences are very complex and hard to predict from the established input circuitry of the neuron. This finding suggests that the computational significance of visual interneurons can only be assessed if they are characterised not only by conventional stimuli as are often used for systems analysis, but also by behaviourally relevant input.

  14. Demonstration of Two-Atom Entanglement with Ultrafast Optical Pulses

    NASA Astrophysics Data System (ADS)

    Wong-Campos, J. D.; Moses, S. A.; Johnson, K. G.; Monroe, C.

    2017-12-01

    We demonstrate quantum entanglement of two trapped atomic ion qubits using a sequence of ultrafast laser pulses. Unlike previous demonstrations of entanglement mediated by the Coulomb interaction, this scheme does not require confinement to the Lamb-Dicke regime and can be less sensitive to ambient noise due to its speed. To elucidate the physics of an ultrafast phase gate, we generate a high entanglement rate using just ten pulses, each of ˜20 ps duration, and demonstrate an entangled Bell state with (76 ±1 )% fidelity. These results pave the way for entanglement operations within a large collection of qubits by exciting only local modes of motion.

  15. Demonstration of Two-Atom Entanglement with Ultrafast Optical Pulses.

    PubMed

    Wong-Campos, J D; Moses, S A; Johnson, K G; Monroe, C

    2017-12-08

    We demonstrate quantum entanglement of two trapped atomic ion qubits using a sequence of ultrafast laser pulses. Unlike previous demonstrations of entanglement mediated by the Coulomb interaction, this scheme does not require confinement to the Lamb-Dicke regime and can be less sensitive to ambient noise due to its speed. To elucidate the physics of an ultrafast phase gate, we generate a high entanglement rate using just ten pulses, each of ∼20  ps duration, and demonstrate an entangled Bell state with (76±1)% fidelity. These results pave the way for entanglement operations within a large collection of qubits by exciting only local modes of motion.

  16. Identification of sequence motifs significantly associated with antisense activity.

    PubMed

    McQuisten, Kyle A; Peek, Andrew S

    2007-06-07

    Predicting the suppression activity of antisense oligonucleotide sequences is the main goal of the rational design of nucleic acids. To create an effective predictive model, it is important to know what properties of an oligonucleotide sequence associate significantly with antisense activity. Also, for the model to be efficient we must know what properties do not associate significantly and can be omitted from the model. This paper will discuss the results of a randomization procedure to find motifs that associate significantly with either high or low antisense suppression activity, analysis of their properties, as well as the results of support vector machine modelling using these significant motifs as features. We discovered 155 motifs that associate significantly with high antisense suppression activity and 202 motifs that associate significantly with low suppression activity. The motifs range in length from 2 to 5 bases, contain several motifs that have been previously discovered as associating highly with antisense activity, and have thermodynamic properties consistent with previous work associating thermodynamic properties of sequences with their antisense activity. Statistical analysis revealed no correlation between a motif's position within an antisense sequence and that sequence's antisense activity. Also, many significant motifs existed as subwords of other significant motifs. Support vector regression experiments indicated that the feature set of significant motifs increased correlation compared to all possible motifs as well as several subsets of the significant motifs. The thermodynamic properties of the significantly associated motifs support existing data correlating the thermodynamic properties of the antisense oligonucleotide with antisense efficiency, reinforcing our hypothesis that antisense suppression is strongly associated with probe/target thermodynamics, as there are no enzymatic mediators to speed the process along like the RNA Induced Silencing Complex (RISC) in RNAi. The independence of motif position and antisense activity also allows us to bypass consideration of this feature in the modelling process, promoting model efficiency and reducing the chance of overfitting when predicting antisense activity. The increase in SVR correlation with significant features compared to nearest-neighbour features indicates that thermodynamics is likely not the only factor in determining antisense efficiency.
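
    The randomization idea above can be illustrated for a single motif: shuffle the activity values across sequences many times and ask how often the shuffled association is at least as strong as the observed one. The sketch below uses made-up sequences, activities, and a made-up motif, and tests only one motif (the study tests many motifs and analyzes them jointly):

      # Sketch of a randomization (permutation) test for whether a single motif
      # associates with antisense activity.  Data and the test statistic are
      # illustrative; multiple-testing issues are ignored here.
      import random

      def permutation_pvalue(sequences, activities, motif, n_perm=10000, seed=1):
          has_motif = [motif in s for s in sequences]
          def mean_diff(acts):
              with_m = [a for a, h in zip(acts, has_motif) if h]
              without = [a for a, h in zip(acts, has_motif) if not h]
              return sum(with_m) / len(with_m) - sum(without) / len(without)
          observed = mean_diff(activities)
          rng = random.Random(seed)
          shuffled = list(activities)
          extreme = 0
          for _ in range(n_perm):
              rng.shuffle(shuffled)                 # break the sequence/activity pairing
              if abs(mean_diff(shuffled)) >= abs(observed):
                  extreme += 1
          return observed, extreme / n_perm

      seqs = ["CCGGAU", "GGCCAU", "CCGGUU", "AAUUGG", "AUAUAU", "GCGCGC"]
      acts = [0.9, 0.8, 0.85, 0.2, 0.1, 0.3]
      print(permutation_pvalue(seqs, acts, "CCGG"))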

  17. Mining of high utility-probability sequential patterns from uncertain databases

    PubMed Central

    Zhang, Binbin; Fournier-Viger, Philippe; Li, Ting

    2017-01-01

    High-utility sequential pattern mining (HUSPM) has become an important issue in the field of data mining. Several HUSPM algorithms have been designed to mine high-utility sequential patterns (HUSPs). They have been applied in several real-life situations such as consumer behavior analysis and event detection in sensor networks. Nonetheless, most studies on HUSPM have focused on mining HUSPs in precise data. But in real life, uncertainty is an important factor as data is collected using various types of sensors that are more or less accurate. Hence, data collected in a real-life database can be annotated with existence probabilities. This paper presents a novel pattern mining framework called high utility-probability sequential pattern mining (HUPSPM) for mining high utility-probability sequential patterns (HUPSPs) in uncertain sequence databases. A baseline algorithm with three optional pruning strategies is presented to mine HUPSPs. Moreover, to speed up the mining process, a projection mechanism is designed to create a database projection for each processed sequence, which is smaller than the original database. Thus, the number of unpromising candidates can be greatly reduced, as well as the execution time for mining HUPSPs. Substantial experiments both on real-life and synthetic datasets show that the designed algorithm performs well in terms of runtime, number of candidates, memory usage, and scalability for different minimum utility and minimum probability thresholds. PMID:28742847
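
    The two thresholds that define a pattern of interest here can be made concrete with a small sketch: a candidate sequential pattern qualifies only if both its total utility and its accumulated sequence probability reach minimum values. The database, item utilities, probabilities, and thresholds below are toy values, and candidate generation, pruning, and the projection mechanism of the paper are all omitted:

      # Sketch of the two thresholds behind a high utility-probability sequential
      # pattern: minimum utility and minimum probability over an uncertain database.

      def is_subsequence(pattern, sequence):
          it = iter(sequence)
          return all(item in it for item in pattern)

      def evaluate(pattern, database, min_util, min_prob):
          """database: list of (sequence, item_utilities, sequence_probability)."""
          matches = [(utils, p) for seq, utils, p in database
                     if is_subsequence(pattern, seq)]
          utility = sum(sum(utils[i] for i in pattern) for utils, _ in matches)
          probability = sum(p for _, p in matches)
          return utility >= min_util and probability >= min_prob, utility, probability

      db = [
          (["a", "b", "c"], {"a": 5, "b": 2, "c": 1}, 0.9),
          (["a", "c", "d"], {"a": 4, "c": 3, "d": 2}, 0.6),
          (["b", "d"],      {"b": 1, "d": 6},         0.8),
      ]
      print(evaluate(["a", "c"], db, min_util=10, min_prob=1.0))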

  18. 14 CFR 23.253 - High speed characteristics.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false High speed characteristics. 23.253 Section... Requirements § 23.253 High speed characteristics. If a maximum operating speed VMO/MMO is established under § 23.1505(c), the following speed increase and recovery characteristics must be met: (a) Operating...

  19. 78 FR 36823 - California High-Speed Rail Authority-Construction Exemption-in Merced, Madera and Fresno Counties...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    ...-Speed Rail Authority--Construction Exemption--in Merced, Madera and Fresno Counties, Cal AGENCY: Surface...-Speed Rail Authority (Authority) to construct an approximately 65- mile high-speed passenger rail line... statewide California High-Speed Train System. This exemption is subject to environmental mitigation...

  20. 14 CFR 23.253 - High speed characteristics.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false High speed characteristics. 23.253 Section... Requirements § 23.253 High speed characteristics. If a maximum operating speed VMO/MMO is established under § 23.1505(c), the following speed increase and recovery characteristics must be met: (a) Operating...

  1. Predictors of older drivers' involvement in high-range speeding behavior.

    PubMed

    Chevalier, Anna; Coxon, Kristy; Rogers, Kris; Chevalier, Aran John; Wall, John; Brown, Julie; Clarke, Elizabeth; Ivers, Rebecca; Keay, Lisa

    2017-02-17

    Even small increases in vehicle speed raise crash risk and resulting injury severity. Older drivers are at increased risk of involvement in casualty crashes and injury compared to younger drivers. However, there is little objective evidence about older drivers' speeding. This study investigates the nature and predictors of high-range speeding among drivers aged 75-94 years. Speed per second was estimated using Global Positioning System devices installed in participants' vehicles. High-range speeding events were defined as traveling an average of 10+ km/h above the speed limit over 30 seconds. Descriptive analysis examined speeding events by participant characteristics and mileage driven. Regression analyses were used to examine the association between involvement in high-range speeding events and possible predictive factors. Most (96%, 182/190) participants agreed to have their vehicle instrumented, and speeding events were accurately recorded for 97% (177/182) of participants. While 77% (136/177) of participants were involved in one or more high-range events, 42% (75/177) were involved in more than five events during 12 months of data collection. Participants involved in high-range events drove approximately twice as many kilometres as those not involved. High-range events tended to be infrequent (median = 6 per 10,000 km; IQR = 2-18). The rate of high-range speeding was associated with better cognitive function and attention to the driving environment. This suggests that those older drivers with poorer cognition and visual attention may drive more cautiously, thereby reducing their high-range speeding behavior.
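
    The event definition above (an average of 10+ km/h over the limit sustained across a 30-second window) is easy to state in code. The sketch below applies it to a synthetic per-second trace of speed and posted limit; the windowing and non-overlap rule are one plausible reading of the definition, not the authors' exact processing:

      # Sketch: detect high-range speeding events in a per-second (speed, limit) trace.

      def high_range_events(samples, window=30, margin=10.0):
          """samples: list of (speed_kmh, limit_kmh), one per second, in time order."""
          events, i, n = [], 0, len(samples)
          while i + window <= n:
              chunk = samples[i:i + window]
              mean_over = sum(s - l for s, l in chunk) / window
              if mean_over >= margin:
                  events.append((i, i + window))   # (start_s, end_s) of the event
                  i += window                      # do not double-count the same seconds
              else:
                  i += 1
          return events

      # 60 s of driving: the middle 35 s are ~12 km/h over a 60 km/h limit.
      trace = [(62.0, 60.0)] * 15 + [(72.0, 60.0)] * 35 + [(61.0, 60.0)] * 10
      print(high_range_events(trace))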

  2. New insights into the shock tube ignition of H2/O2 at low to moderate temperatures using high-speed end-wall imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ninnemann, Erik; Koroglu, Batikan; Pryor, Owen

    In this study, the effects of pre-ignition energy releases on H2/O2 mixtures were explored in a shock tube with the aid of high-speed imaging and conventional pressure and emission diagnostics. Ignition delay times and time-resolved camera image sequences were taken behind the reflected shockwaves for two hydrogen mixtures. High-concentration experiments spanned temperatures between 858 and 1035 K and pressures between 2.74 and 3.91 atm for a 15% H2/18% O2/Ar mixture. Low-concentration data were also taken at temperatures between 960 and 1131 K and pressures between 3.09 and 5.44 atm for a 4% H2/2% O2/Ar mixture. These two model mixtures were chosen as they were the focus of recent shock tube work conducted in the literature. Experiments were performed in both a clean and a dirty shock tube facility; however, no deviations in ignition delay times between the two types of tests were apparent. The high-concentration mixture (15% H2/18% O2/Ar) experienced energy releases in the form of deflagration flames followed by local detonations at temperatures < 1000 K. Measured ignition delay times were compared to predictions by three chemical kinetic mechanisms: GRI-Mech 3.0, AramcoMech 2.0, and the Burke et al. (2012) mechanism. It was found that when proper thermodynamic assumptions are used, all mechanisms were able to accurately predict the experiments, with superior performance from the well-validated AramcoMech 2.0 and Burke et al. mechanisms. The current work provides better guidance in using available literature hydrogen shock tube measurements, which span more than 50 years but were conducted without the aid of high-speed visualization of the ignition process, and their modeling using combustion kinetic mechanisms.

  3. New insights into the shock tube ignition of H2/O2 at low to moderate temperatures using high-speed end-wall imaging

    DOE PAGES

    Ninnemann, Erik; Koroglu, Batikan; Pryor, Owen; ...

    2017-09-21

    In this study, the effects of pre-ignition energy releases on H2/O2 mixtures were explored in a shock tube with the aid of high-speed imaging and conventional pressure and emission diagnostics. Ignition delay times and time-resolved camera image sequences were taken behind the reflected shockwaves for two hydrogen mixtures. High-concentration experiments spanned temperatures between 858 and 1035 K and pressures between 2.74 and 3.91 atm for a 15% H2/18% O2/Ar mixture. Low-concentration data were also taken at temperatures between 960 and 1131 K and pressures between 3.09 and 5.44 atm for a 4% H2/2% O2/Ar mixture. These two model mixtures were chosen as they were the focus of recent shock tube work conducted in the literature. Experiments were performed in both a clean and a dirty shock tube facility; however, no deviations in ignition delay times between the two types of tests were apparent. The high-concentration mixture (15% H2/18% O2/Ar) experienced energy releases in the form of deflagration flames followed by local detonations at temperatures < 1000 K. Measured ignition delay times were compared to predictions by three chemical kinetic mechanisms: GRI-Mech 3.0, AramcoMech 2.0, and the Burke et al. (2012) mechanism. It was found that when proper thermodynamic assumptions are used, all mechanisms were able to accurately predict the experiments, with superior performance from the well-validated AramcoMech 2.0 and Burke et al. mechanisms. The current work provides better guidance in using available literature hydrogen shock tube measurements, which span more than 50 years but were conducted without the aid of high-speed visualization of the ignition process, and their modeling using combustion kinetic mechanisms.

  4. Genomic Encyclopedia of Type Strains, Phase I: The one thousand microbial genomes (KMG-I) project

    DOE PAGES

    Kyrpides, Nikos C.; Woyke, Tanja; Eisen, Jonathan A.; ...

    2014-06-15

    The Genomic Encyclopedia of Bacteria and Archaea (GEBA) project was launched by the JGI in 2007 as a pilot project with the objective of sequencing 250 bacterial and archaeal genomes. The two major goals of that project were (a) to test the hypothesis that there are many benefits to the use of the phylogenetic diversity of organisms in the tree of life as a primary criterion for generating their genome sequence and (b) to develop the necessary framework, technology and organization for large-scale sequencing of microbial isolate genomes. While the GEBA pilot project has not yet been entirely completed, both of the original goals have already been successfully accomplished, leading the way for the next phase of the project. Here we propose taking the GEBA project to the next level, by generating high quality draft genomes for 1,000 bacterial and archaeal strains. This represents a combined 16-fold increase in both scale and speed as compared to the GEBA pilot project (250 isolate genomes in 4+ years). We will follow a similar approach for organism selection and sequencing prioritization as was done for the GEBA pilot project (i.e. phylogenetic novelty, availability and growth of cultures of type strains and DNA extraction capability), focusing on type strains as this ensures reproducibility of our results and provides the strongest linkage between genome sequences and other knowledge about each strain. In turn, this project will constitute a pilot phase of a larger effort that will target the genome sequences of all available type strains of the Bacteria and Archaea.

  5. Bioinformatics and genomic analysis of transposable elements in eukaryotic genomes.

    PubMed

    Janicki, Mateusz; Rooke, Rebecca; Yang, Guojun

    2011-08-01

    A major portion of most eukaryotic genomes are transposable elements (TEs). During evolution, TEs have introduced profound changes to genome size, structure, and function. As integral parts of genomes, the dynamic presence of TEs will continue to be a major force in reshaping genomes. Early computational analyses of TEs in genome sequences focused on filtering out "junk" sequences to facilitate gene annotation. When the high abundance and diversity of TEs in eukaryotic genomes were recognized, these early efforts transformed into the systematic genome-wide categorization and classification of TEs. The availability of genomic sequence data reversed the classical genetic approaches to discovering new TE families and superfamilies. Curated TE databases and their accurate annotation of genome sequences in turn facilitated the studies on TEs in a number of frontiers including: (1) TE-mediated changes of genome size and structure, (2) the influence of TEs on genome and gene functions, (3) TE regulation by host, (4) the evolution of TEs and their population dynamics, and (5) genomic scale studies of TE activity. Bioinformatics and genomic approaches have become an integral part of large-scale studies on TEs to extract information with pure in silico analyses or to assist wet lab experimental studies. The current revolution in genome sequencing technology facilitates further progress in the existing frontiers of research and emergence of new initiatives. The rapid generation of large-sequence datasets at record low costs on a routine basis is challenging the computing industry on storage capacity and manipulation speed and the bioinformatics community for improvement in algorithms and their implementations.

  6. Optimal word sizes for dissimilarity measures and estimation of the degree of dissimilarity between DNA sequences.

    PubMed

    Wu, Tiee-Jian; Huang, Ying-Hsueh; Li, Lung-An

    2005-11-15

    Several measures of DNA sequence dissimilarity have been developed. The purpose of this paper is three-fold. Firstly, we compare the performance of several word-based or alignment-based methods. Secondly, we give a general guideline for choosing the window size and determining the optimal word sizes for several word-based measures at different window sizes. Thirdly, we use a large-scale simulation method to simulate data from the distribution of SK-LD (symmetric Kullback-Leibler discrepancy). These simulated data can be used to estimate the degree of dissimilarity beta between any pair of DNA sequences. Our study shows that (1) for whole-sequence similarity/dissimilarity identification the window size taken should be as large as possible, but probably not >3000, as restricted by CPU time in practice, (2) for each measure the optimal word size increases with window size, (3) when the optimal word size is used, SK-LD performance is superior in both simulation and real data analysis, (4) the estimate of beta based on SK-LD can be used to quickly filter out a large number of dissimilar sequences and speed up alignment-based database searches for similar sequences, and (5) this estimate is also applicable in local similarity comparison situations. For example, it can help in selecting oligo probes with high specificity and, therefore, has potential in probe design for microarrays. The SK-LD algorithm, the beta estimator and the simulation software are implemented in MATLAB code and are available at http://www.stat.ncku.edu.tw/tjwu
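
    As a concrete illustration of a word-based dissimilarity measure, the sketch below computes a symmetric Kullback-Leibler discrepancy between the word (k-mer) frequency distributions of two short sequences. The word size and the pseudocount used to avoid zero frequencies are illustrative choices, not the optimal values studied in the paper:

      # Sketch of a word-based dissimilarity: symmetric Kullback-Leibler discrepancy
      # between k-mer frequency distributions of two DNA sequences.
      import math
      from collections import Counter
      from itertools import product

      def word_freqs(seq, k, alphabet="ACGT", pseudocount=1.0):
          counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
          words = ["".join(w) for w in product(alphabet, repeat=k)]
          total = sum(counts[w] + pseudocount for w in words)
          return {w: (counts[w] + pseudocount) / total for w in words}

      def sk_ld(seq1, seq2, k=2):
          p, q = word_freqs(seq1, k), word_freqs(seq2, k)
          # KL(P||Q) + KL(Q||P) simplifies to sum over words of (p - q) * log(p / q).
          return sum((p[w] - q[w]) * math.log(p[w] / q[w]) for w in p)

      print(sk_ld("ACGTACGTACGT", "ACGGACGGACGG", k=2))
      print(sk_ld("ACGTACGTACGT", "ACGTACGTACGA", k=2))   # more similar -> smaller value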

  7. Genomic Encyclopedia of Type Strains, Phase I: The one thousand microbial genomes (KMG-I) project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kyrpides, Nikos C.; Woyke, Tanja; Eisen, Jonathan A.

    The Genomic Encyclopedia of Bacteria and Archaea (GEBA) project was launched by the JGI in 2007 as a pilot project with the objective of sequencing 250 bacterial and archaeal genomes. The two major goals of that project were (a) to test the hypothesis that there are many benefits to the use of the phylogenetic diversity of organisms in the tree of life as a primary criterion for generating their genome sequence and (b) to develop the necessary framework, technology and organization for large-scale sequencing of microbial isolate genomes. While the GEBA pilot project has not yet been entirely completed, both of the original goals have already been successfully accomplished, leading the way for the next phase of the project. Here we propose taking the GEBA project to the next level, by generating high quality draft genomes for 1,000 bacterial and archaeal strains. This represents a combined 16-fold increase in both scale and speed as compared to the GEBA pilot project (250 isolate genomes in 4+ years). We will follow a similar approach for organism selection and sequencing prioritization as was done for the GEBA pilot project (i.e. phylogenetic novelty, availability and growth of cultures of type strains and DNA extraction capability), focusing on type strains as this ensures reproducibility of our results and provides the strongest linkage between genome sequences and other knowledge about each strain. In turn, this project will constitute a pilot phase of a larger effort that will target the genome sequences of all available type strains of the Bacteria and Archaea.

  8. TotalReCaller: improved accuracy and performance via integrated alignment and base-calling.

    PubMed

    Menges, Fabian; Narzisi, Giuseppe; Mishra, Bud

    2011-09-01

    Currently, re-sequencing approaches use multiple modules serially to interpret raw sequencing data from next-generation sequencing platforms, while remaining oblivious to the genomic information until the final alignment step. Such approaches fail to exploit the full information from both the raw sequencing data and the reference genome that can yield better quality sequence reads, SNP calls, variant detection, as well as an alignment at the best possible location in the reference genome. Thus, there is a need for novel reference-guided bioinformatics algorithms for interpreting analog signals representing sequences of the bases ({A, C, G, T}), while simultaneously aligning possible sequence reads to a source reference genome whenever available. Here, we propose a new base-calling algorithm, TotalReCaller, to achieve improved performance. A linear error model for the raw intensity data and Burrows-Wheeler transform (BWT) based alignment are combined utilizing a Bayesian score function, which is then globally optimized over all possible genomic locations using an efficient branch-and-bound approach. The algorithm has been implemented in software and hardware [field-programmable gate array (FPGA)] to achieve real-time performance. Empirical results on real high-throughput Illumina data were used to evaluate TotalReCaller's performance relative to its peers (Bustard, BayesCall, Ibis and Rolexa) based on several criteria, particularly those important in clinical and scientific applications. Namely, it was evaluated for (i) its base-calling speed and throughput, (ii) its read accuracy and (iii) its specificity and sensitivity in variant calling. A software implementation of TotalReCaller, as well as additional information, is available at: http://bioinformatics.nyu.edu/wordpress/projects/totalrecaller/ fabian.menges@nyu.edu.

  9. Effects of support size and orientation on symmetric gaits in free-ranging tamarins of Amazonian Peru: implications for the functional significance of primate gait sequence patterns.

    PubMed

    Nyakatura, John A; Heymann, Eckhard W

    2010-03-01

    The adoption of a specific gait sequence pattern during symmetrical locomotion has been proposed to have been a key advantage for the exploitation of the fine branch niche in early primates. Diverse aspects of primate locomotion have been extensively studied in technically equipped laboratory settings, but evolutionary conclusions derived from these investigations have rarely been verified in wild primates. Bridging the gap from the lab to the field, we conducted an actual performance determination of symmetrical gaits in two free-ranging tamarin species (Saguinus mystax and Saguinus fuscicollis) of Amazonian Peru by analyzing high-speed video recordings of naturally occurring locomotor bouts. Tamarins arguably represent viable models for aspects of early primate locomotion. We tested three specific hypotheses derived from laboratory studies to test for the influence of support size and orientation and to gain further insight into the functional significance of primate gait sequence patterns: (1) The tamarins utilize symmetrical gaits at a higher rate on small supports than on larger ones. (2) During symmetrical locomotion on small supports, diagonal sequences are utilized at a higher rate than on larger supports. (3) On inclines, diagonal sequences are predominantly used and on declines, lateral sequences are predominantly used. Our results corroborated hypotheses 1 and 3. We found no clear support for hypothesis 2. In conclusion, our results add to the notion that primate gait plasticity, rather than uniform adoption of diagonal sequence gaits, enabled early primates to accommodate different support types and effectively exploit the small branch niche. Copyright 2009 Elsevier Ltd. All rights reserved.

  10. Highly sensitive luciferase reporter assay using a potent destabilization sequence of calpain 3.

    PubMed

    Yasunaga, Mayu; Murotomi, Kazutoshi; Abe, Hiroko; Yamazaki, Tomomi; Nishii, Shigeaki; Ohbayashi, Tetsuya; Oshimura, Mitsuo; Noguchi, Takako; Niwa, Kazuki; Ohmiya, Yoshihiro; Nakajima, Yoshihiro

    2015-01-20

    Reporter assays that use luciferases are widely employed for monitoring cellular events associated with gene expression in vitro and in vivo. To improve the response of the luciferase reporter to acute changes of gene expression, a destabilization sequence is frequently used to reduce the stability of luciferase protein in the cells, which results in an increase of sensitivity of the luciferase reporter assay. In this study, we identified a potent destabilization sequence (referred to as the C9 fragment) consisting of 42 amino acid residues from human calpain 3 (CAPN3). Whereas the half-life of Emerald Luc (ELuc) from the Brazilian click beetle Pyrearinus termitilluminans was reduced by fusing PEST (t1/2 = 9.8 to 2.8 h), the half-life of C9-fused ELuc was significantly shorter (t1/2 = 1.0 h) than that of PEST-fused ELuc when measurements were conducted at 37°C. In addition, firefly luciferase (luc2) was also markedly destabilized by the C9 fragment compared with the humanized PEST sequence. These results indicate that the C9 fragment from CAPN3 is a much more potent destabilization sequence than the PEST sequence. Furthermore, real-time bioluminescence recording of the activation kinetics of nuclear factor-κB after transient treatment with tumor necrosis factor α revealed that the response of C9-fused ELuc is significantly greater than that of PEST-fused ELuc, demonstrating that the use of the C9 fragment realizes a luciferase reporter assay that has faster response speed compared with that provided by the PEST sequence. Copyright © 2014 Elsevier B.V. All rights reserved.

  11. Two-color, 30 second microwave-accelerated Metal-Enhanced Fluorescence DNA assays: a new Rapid Catch and Signal (RCS) technology.

    PubMed

    Dragan, Anatoliy I; Golberg, Karina; Elbaz, Amit; Marks, Robert; Zhang, Yongxia; Geddes, Chris D

    2011-03-07

    For analyses of DNA fragment sequences in solution we introduce a 2-color DNA assay, utilizing a combination of the Metal-Enhanced Fluorescence (MEF) effect and microwave-accelerated DNA hybridization. The assay is based on a new "Catch and Signal" technology, i.e. the simultaneous specific recognition of two target DNA sequences in one well by complementary anchor-ssDNAs, attached to silver island films (SiFs). It is shown that fluorescent labels (Alexa 488 and Alexa 594), covalently attached to ssDNA fragments, play the role of biosensor recognition probes, demonstrating strong response upon DNA hybridization, locating fluorophores in close proximity to silver NPs, which is ideal for MEF. Subsequently the emission dramatically increases, while the excited state lifetime decreases. It is also shown that 30s microwave irradiation of wells, containing DNA molecules, considerably (~1000-fold) speeds up the highly selective hybridization of DNA fragments at ambient temperature. The 2-color "Catch and Signal" DNA assay platform can radically expedite quantitative analysis of genome DNA sequences, creating a simple and fast bio-medical platform for nucleic acid analysis. Copyright © 2010 Elsevier B.V. All rights reserved.

  12. Gapped Spectral Dictionaries and Their Applications for Database Searches of Tandem Mass Spectra*

    PubMed Central

    Jeong, Kyowon; Kim, Sangtae; Bandeira, Nuno; Pevzner, Pavel A.

    2011-01-01

    Generating all plausible de novo interpretations of a peptide tandem mass (MS/MS) spectrum (Spectral Dictionary) and quickly matching them against the database represent a recently emerged alternative approach to peptide identification. However, the sizes of the Spectral Dictionaries quickly grow with the peptide length making their generation impractical for long peptides. We introduce Gapped Spectral Dictionaries (all plausible de novo interpretations with gaps) that can be easily generated for any peptide length thus addressing the limitation of the Spectral Dictionary approach. We show that Gapped Spectral Dictionaries are small thus opening a possibility of using them to speed-up MS/MS searches. Our MS-GappedDictionary algorithm (based on Gapped Spectral Dictionaries) enables proteogenomics applications (such as searches in the six-frame translation of the human genome) that are prohibitively time consuming with existing approaches. MS-GappedDictionary generates gapped peptides that occupy a niche between accurate but short peptide sequence tags and long but inaccurate full length peptide reconstructions. We show that, contrary to conventional wisdom, some high-quality spectra do not have good peptide sequence tags and introduce gapped tags that have advantages over the conventional peptide sequence tags in MS/MS database searches. PMID:21444829

  13. Research on natural frequency based on modal test for high speed vehicles

    NASA Astrophysics Data System (ADS)

    Ma, Guangsong; He, Guanglin; Guo, Yachao

    2018-04-01

    A high speed vehicle is a vibration system, and resonance generated in flight may be harmful to it. The resonance problem can be addressed by acquiring the natural frequency of the high speed vehicle and then taking measures to avoid exciting that frequency. Therefore, in this paper, the modal test of a high speed vehicle was carried out using the running hammer method and the PolyMAX modal parameter identification method. Firstly, the total frequency response function and coherence function of the high speed vehicle are obtained from the running hammer excitation test, and the modal assurance criterion (MAC) is used to determine the accuracy of the estimated parameters. Secondly, the first three order frequencies and the pole steady state diagram of the high speed vehicle are obtained by the PolyMAX modal parameter identification method. Finally, the natural frequency of the vibration system is accurately obtained by the running hammer method.

  14. PathoScope 2.0: a complete computational framework for strain identification in environmental or clinical sequencing samples

    PubMed Central

    2014-01-01

    Background Recent innovations in sequencing technologies have provided researchers with the ability to rapidly characterize the microbial content of an environmental or clinical sample with unprecedented resolution. These approaches are producing a wealth of information that is providing novel insights into the microbial ecology of the environment and human health. However, these sequencing-based approaches produce large and complex datasets that require efficient and sensitive computational analysis workflows. Many recent tools for analyzing metagenomic sequencing data have emerged; however, these approaches often suffer from issues of specificity and efficiency, and typically do not include a complete metagenomic analysis framework. Results We present PathoScope 2.0, a complete bioinformatics framework for rapidly and accurately quantifying the proportions of reads from individual microbial strains present in metagenomic sequencing data from environmental or clinical samples. The pipeline performs all necessary computational analysis steps, including reference genome library extraction and indexing, read quality control and alignment, strain identification, and summarization and annotation of results. We rigorously evaluated PathoScope 2.0 using simulated data and data from the 2011 outbreak of Shiga-toxigenic Escherichia coli O104:H4. Conclusions The results show that PathoScope 2.0 is a complete, highly sensitive, and efficient approach for metagenomic analysis that outperforms alternative approaches in scope, speed, and accuracy. The PathoScope 2.0 pipeline software is freely available for download at: http://sourceforge.net/projects/pathoscope/. PMID:25225611

  15. Circumstellar Material on and off the Main Sequence

    NASA Astrophysics Data System (ADS)

    Steele, Amy; Debes, John H.; Deming, Drake

    2017-06-01

    There is evidence of circumstellar material around main sequence, giant, and white dwarf stars that originates from the small-body population of planetary systems. These bodies tell us something about the chemistry and evolution of protoplanetary disks and the planetary systems they form. What happens to this material as its host star evolves off the main sequence, and how does that inform our understanding of the typical chemistry of rocky bodies in planetary systems? In this talk, I will discuss the composition(s) of circumstellar material on and off the main sequence to begin to answer the question, “Is Earth normal?” In particular, I look at three types of debris disks to understand the typical chemistry of planetary systems—young debris disks, debris disks around giant stars, and dust around white dwarfs. I will review the current understanding on how to infer dust composition for each class of disk, and present new work on constraining dust composition from infrared excesses around main sequence and giant stars. Finally, dusty and polluted white dwarfs hold a unique key to our understanding of the composition of rocky bodies around other stars. In particular, I will discuss WD1145+017, which has a transiting, disintegrating planetesimal. I will review what we know about this system through high speed photometry and spectroscopy and present new work on understanding the complex interplay of physics that creates white dwarf pollution from the disintegration of rocky bodies.

  16. Potential scenarios of concern for high speed rail operations

    DOT National Transportation Integrated Search

    2011-03-16

    Currently, multiple operating authorities are proposing the : introduction of high-speed rail service in the United States. : While high-speed rail service shares a number of basic : principles with conventional-speed rail service, the operational : ...

  17. The effects of processing and sequence organization on the timing of turn taking: a corpus study

    PubMed Central

    Roberts, Seán G.; Torreira, Francisco; Levinson, Stephen C.

    2015-01-01

    The timing of turn taking in conversation is extremely rapid given the cognitive demands on speakers to comprehend, plan and execute turns in real time. Findings from psycholinguistics predict that the timing of turn taking is influenced by demands on processing, such as word frequency or syntactic complexity. An alternative view comes from the field of conversation analysis, which predicts that the rules of turn taking and sequence organization may dictate the variation in gap durations (e.g., the functional role of each turn in communication). In this paper, we estimate the role of these two different kinds of factors in determining the speed of turn taking in conversation. We use the Switchboard corpus of English telephone conversation, already richly annotated for syntactic structure, speech act sequences, and segmental alignment. To this we add further information including Floor Transfer Offset (the amount of time between the end of one turn and the beginning of the next), word frequency, concreteness, and surprisal values. We then apply a novel statistical framework (“random forests”) to show that these two dimensions are interwoven together with indexical properties of the speakers as explanatory factors determining the speed of response. We conclude that an explanation of the timing of turn taking will require insights from both processing and sequence organization. PMID:26029125
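
    A minimal version of the modelling step described above, using scikit-learn's random forest regressor on synthetic data, is sketched below. The predictor names are illustrative stand-ins for the corpus annotations (word frequency, surprisal, turn type), and the coefficients used to generate the fake Floor Transfer Offset values are arbitrary:

      # Sketch: random-forest regression of Floor Transfer Offset (FTO) on a mix of
      # "processing" and "sequence organization" predictors, with importances.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      n = 500
      word_freq = rng.normal(size=n)              # processing predictor (synthetic)
      surprisal = rng.normal(size=n)              # processing predictor (synthetic)
      is_question = rng.integers(0, 2, size=n)    # sequence-organization predictor
      fto_ms = (200 - 40 * word_freq + 60 * surprisal
                + 120 * is_question + rng.normal(scale=50.0, size=n))

      X = np.column_stack([word_freq, surprisal, is_question])
      model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, fto_ms)

      for name, imp in zip(["word_freq", "surprisal", "is_question"],
                           model.feature_importances_):
          print(f"{name}: importance {imp:.2f}")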

  18. Iterative MMSE Detection for MIMO/BLAST DS-CDMA Systems in Frequency Selective Fading Channels - Achieving High Performance in Fully Loaded Systems

    NASA Astrophysics Data System (ADS)

    Silva, João Carlos; Souto, Nuno; Cercas, Francisco; Dinis, Rui

    A MMSE (Minimum Mean Square Error) DS-CDMA (Direct Sequence-Code Division Multiple Access) receiver coupled with a low-complexity iterative interference suppression algorithm was devised for a MIMO/BLAST (Multiple Input, Multiple Output / Bell Laboratories Layered Space Time) system in order to improve system performance, considering frequency selective fading channels. The scheme is compared against the simple MMSE receiver, for both QPSK and 16QAM modulations, under SISO (Single Input, Single Output) and MIMO systems, the latter with 2Tx by 2Rx and 4Tx by 4Rx (MIMO order 2 and 4 respectively) antennas. To assess its performance in an existing system, the uncoded UMTS HSDPA (High Speed Downlink Packet Access) standard was considered.

  19. Low-level wind response to mesoscale pressure systems

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.; Physick, W. L.

    1983-09-01

    Observations are presented which show a strong correlation between low-level wind behaviour (e.g., rotation near the surface) and the passage of mesoscale pressure systems. The latter are associated with frontal transition zones, are dominated by a pressure-jump line and a mesoscale high pressure area, and produce locally large horizontal pressure gradients. The wind observations are simulated by specifying a time sequence of perturbation pressure gradient and subsequently solving the vertically-integrated momentum equations with appropriate initial conditions. Very good agreement is found between observed and calculated winds; in particular, (i) a 360° rotation in wind on passage of the mesoscale high; (ii) wind-shift lines produced dynamically by the pressure-jump line; (iii) rapid linear increase in wind speed on passage of the pressure jump.
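
    The modelling approach described above can be sketched as a slab (vertically averaged) momentum balance driven by a prescribed perturbation pressure gradient, with rotation and linear friction. The pulse shape, friction time scale, latitude, and integration settings below are illustrative only and are not the values used in the paper:

      # Sketch: integrate slab momentum equations forced by a prescribed time
      # sequence of perturbation pressure gradient (x-direction only).
      import math

      f = 2 * 7.292e-5 * math.sin(math.radians(-35.0))   # Coriolis parameter (1/s)
      rho = 1.2                                          # air density (kg m^-3)
      tau = 6 * 3600.0                                   # linear friction time scale (s)
      dt = 60.0                                          # time step (s)

      def dpdx(t):
          """Prescribed perturbation pressure gradient (Pa/m): a 2-hour pulse."""
          return 2e-3 if 3600.0 < t < 3600.0 + 7200.0 else 0.0

      u, v = 0.0, 0.0
      for step in range(int(12 * 3600 / dt)):            # 12-hour integration
          t = step * dt
          du = (-dpdx(t) / rho + f * v - u / tau) * dt
          dv = (              - f * u - v / tau) * dt
          u, v = u + du, v + dv
          if step % 60 == 0:                             # print hourly wind components
              print(f"t={t / 3600:4.1f} h  u={u:7.2f}  v={v:7.2f}"
                    f"  speed={math.hypot(u, v):6.2f} m/s")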

  20. Proton Radiography at Los Alamos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saunders, Alexander

    2017-02-28

    The proton radiography (pRad) facility at Los Alamos National Lab uses high energy protons to acquire multiple frame flash radiographic sequences at megahertz speeds: that is, it can make movies of the inside of explosions as they happen. The facility is primarily used to study the damage to and failure of metals subjected to the shock forces of high explosives as well as to study the detonation of the explosives themselves. Applications include improving our understanding of the underlying physical processes that drive the performance of the nuclear weapons in the United States stockpile and developing novel armor technologies in collaboration with the Army Research Lab. The principle and techniques of pRad will be described, and examples of some recent results will be shown.

  1. A revision of the subtract-with-borrow random number generators

    NASA Astrophysics Data System (ADS)

    Sibidanov, Alexei

    2017-12-01

    The most popular and widely used subtract-with-borrow generator, also known as RANLUX, is reimplemented as a linear congruential generator using large integer arithmetic with a modulus size of 576 bits. Modern computers, as well as the specific structure of the modulus inferred from RANLUX, allow for the development of a fast modular multiplication - the core of the procedure. This was previously believed to be slow and have too high a cost in terms of computing resources. Our tests show a significant gain in generation speed which is comparable with other fast, high quality random number generators. An additional feature is the fast skipping of generator states, leading to a seeding scheme which guarantees the uniqueness of random number sequences. Licensing provisions: GPLv3. Programming language: C++, C, Assembler
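
    For orientation, the classic subtract-with-borrow recurrence behind RANLUX (base b = 2^24, lags r = 24 and s = 10) is sketched below in plain Python. The reimplementation described above instead evaluates the mathematically equivalent linear congruential generator with a 576-bit modulus; that equivalence, the luxury-level skipping, and the fast big-integer multiplication are not reproduced here, and the seeding shown is a crude placeholder:

      # Sketch of the subtract-with-borrow recurrence x_n = (x_{n-S} - x_{n-R} - c) mod B.

      B, R, S = 2 ** 24, 24, 10          # base and lags of the RANLUX recurrence

      def swb_stream(seed_words, n_values):
          """Yield n_values subtract-with-borrow outputs in [0, B)."""
          assert len(seed_words) == R and any(seed_words)
          state, carry = list(seed_words), 0
          for n in range(n_values):
              i_r, i_s = n % R, (n - S) % R          # positions of x_{n-R} and x_{n-S}
              t = state[i_s] - state[i_r] - carry
              if t < 0:
                  t, carry = t + B, 1
              else:
                  carry = 0
              state[i_r] = t                          # overwrite x_{n-R} with the new x_n
              yield t

      # Crude seed; real implementations seed more carefully and skip states for "luxury".
      seed = [(69069 * (k + 1)) % B for k in range(R)]
      print(list(swb_stream(seed, 5)))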

  2. A new principle for the standardization of long paragraphs for reading speed analysis.

    PubMed

    Radner, Wolfgang; Radner, Stephan; Diendorfer, Gabriela

    2016-01-01

    To investigate the reliability, validity, and statistical comparability of long paragraphs that were developed to be equivalent in construction and difficulty. Seven long paragraphs were developed that were equal in syntax, morphology, and number and position of words (111), with the same number of syllables (179) and number of characters (660). For validity analyses, the paragraphs were compared with the mean reading speed of a set of seven sentence optotypes of the RADNER Reading Charts (mean of 7 × 14 = 98 words read). Reliability analyses were performed by calculating the Cronbach's alpha value and the corrected item-total correlation. Sixty participants (aged 20-77 years) read the paragraphs and the sentences (distance 40 cm; font: Times New Roman 12 pt). Test items were presented randomly; reading time was measured with a stopwatch. Reliability analysis yielded a Cronbach's alpha value of 0.988. When the long paragraphs were compared in pairwise fashion, significant differences were found in 13 of the 21 pairs (p < 0.05). In two sequences of three paragraphs each and in eight pairs of paragraphs, the paragraphs did not differ significantly, and these paragraph combinations are therefore suitable for comparative research studies. The mean reading speed was 173.34 ± 24.01 words per minute (wpm) for the long paragraphs and 198.26 ± 28.60 wpm for the sentence optotypes. The maximum difference in reading speed was 5.55% for the long paragraphs and 2.95% for the short sentence optotypes. The correlation between long paragraphs and sentence optotypes was high (r = 0.9243). Despite good reliability and equivalence in construction and degree of difficulty, a statistically significant difference in reading speed can occur between long paragraphs. Since statistical significance should depend only on the persons tested, either standardizing long paragraphs for statistical equality of reading speed measurements or increasing the number of presented paragraphs is recommended for comparative investigations.
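
    For readers unfamiliar with the reliability statistic reported above, the following sketch computes Cronbach's alpha from a participants-by-paragraphs matrix of reading speeds. The numbers are made up solely to show the calculation; they are not the study's data.

    ```python
    import numpy as np

    def cronbach_alpha(scores):
        """scores: 2-D array, rows = participants, columns = test items (paragraphs)."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)          # variance of each item
        total_var = scores.sum(axis=1).var(ddof=1)      # variance of participants' totals
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical reading speeds (wpm) for 5 participants x 3 paragraphs
    speeds = [[170, 175, 168],
              [150, 148, 155],
              [200, 205, 198],
              [180, 178, 185],
              [160, 162, 158]]
    print(round(cronbach_alpha(speeds), 3))
    ```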

  3. 14 CFR 25.253 - High-speed characteristics.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false High-speed characteristics. 25.253 Section...-speed characteristics. (a) Speed increase and recovery characteristics. The following speed increase and... inadvertent speed increases (including upsets in pitch and roll) must be simulated with the airplane trimmed...

  4. 14 CFR 25.253 - High-speed characteristics.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false High-speed characteristics. 25.253 Section...-speed characteristics. (a) Speed increase and recovery characteristics. The following speed increase and... inadvertent speed increases (including upsets in pitch and roll) must be simulated with the airplane trimmed...

  5. Automated drug identification system

    NASA Technical Reports Server (NTRS)

    Campen, C. F., Jr.

    1974-01-01

    System speeds up analysis of blood and urine and is capable of identifying 100 commonly abused drugs. System includes computer that controls entire analytical process by ordering various steps in specific sequences. Computer processes data output and has readout of identified drugs.

  6. Race and Dyslexia

    ERIC Educational Resources Information Center

    Hoyles, Asher; Hoyles, Martin

    2010-01-01

    This article begins with a definition of dyslexia as genetic, involving language processing and phonological awareness. It goes beyond reading and writing difficulties to include, for example, sequencing, orientation, short-term memory, speed, circumlocution, organisational skills, visual thinking, self-esteem and anger. Dyslexia, though…

  7. High-speed adaptive optics for imaging of the living human eye

    PubMed Central

    Yu, Yongxin; Zhang, Tianjiao; Meadway, Alexander; Wang, Xiaolin; Zhang, Yuhua

    2015-01-01

    The discovery of high-frequency temporal fluctuations of the human ocular wave aberration dictates the need for high-speed adaptive optics (AO) correction for high-resolution retinal imaging. We present a high-speed AO system for an experimental adaptive optics scanning laser ophthalmoscope (AOSLO). We developed a custom high-speed Shack-Hartmann wavefront sensor and maximized the wavefront detection speed based upon a trade-off among the wavefront spatial sampling density, the dynamic range, and the measurement sensitivity. We examined the temporal dynamic properties of the ocular wavefront under the AOSLO imaging condition and improved the dual-thread AO control strategy. The high-speed AO can be operated with a closed-loop frequency up to 110 Hz. Experimental results demonstrated that the high-speed AO system can provide improved compensation for the wave aberration up to 30 Hz in the living human eye. PMID:26368408
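
    As a conceptual illustration of closed-loop AO correction (not the authors' dual-thread controller), the sketch below runs a plain integrator loop in which each cycle measures the residual wavefront and feeds a fraction of it back into the accumulated correction. The aberration vector, sensor noise, and loop gain are arbitrary stand-ins.

    ```python
    import numpy as np

    # Simulated integrator-based closed loop: sensor and deformable mirror are
    # represented by simple vector arithmetic; all values are illustrative.
    rng = np.random.default_rng(1)
    aberration = rng.standard_normal(97)     # static ocular aberration (arbitrary modes)
    correction = np.zeros_like(aberration)
    gain = 0.3                               # integrator gain (illustrative)

    for step in range(50):                   # each iteration is one closed-loop cycle
        residual = aberration - correction + 0.01 * rng.standard_normal(97)  # noisy sensor
        correction += gain * residual        # integrator update sent to the mirror
        if step % 10 == 0:
            print(step, np.sqrt(np.mean((aberration - correction) ** 2)))    # residual RMS
    ```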

  8. 49 CFR 236.1007 - Additional requirements for high-speed service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 4 2011-10-01 2011-10-01 false Additional requirements for high-speed service..., AND APPLIANCES Positive Train Control Systems § 236.1007 Additional requirements for high-speed... by this subpart, and which have been utilized on high-speed rail systems with similar technical and...

  9. Event generators for address event representation transmitters

    NASA Astrophysics Data System (ADS)

    Serrano-Gotarredona, Rafael; Serrano-Gotarredona, Teresa; Linares Barranco, Bernabe

    2005-06-01

    Address Event Representation (AER) is an emergent neuromorphic interchip communication protocol that allows real-time virtual massive connectivity between huge numbers of neurons located on different chips. By exploiting high-speed digital communication circuits (with nanosecond timings), synaptic neural connections can be time multiplexed, while neural activity signals (with millisecond timings) are sampled at low frequencies. Also, neurons generate 'events' according to their activity levels. More active neurons generate more events per unit time and access the interchip communication channel more frequently, while neurons with low activity consume less communication bandwidth. In a typical AER transmitter chip, there is an array of neurons that generate events. They send events to a peripheral circuit (call it the "AER Generator") that transforms those events into neuron coordinates (addresses), which are put sequentially on an interchip high-speed digital bus. This bus includes a parallel multi-bit address word plus Rqst (request) and Ack (acknowledge) handshaking signals for asynchronous data exchange. There have been two main approaches published in the literature for implementing such "AER Generator" circuits. They differ in the way they handle event collisions coming from the array of neurons. One approach is based on detecting and discarding collisions, while the other incorporates arbitration for sequencing colliding events. The first approach is simpler and faster, while the second is able to handle much higher event traffic. In this article we concentrate on the second, arbiter-based approach. Boahen has published several techniques for implementing and improving the arbiter-based approach. Originally, he proposed an arbitration scheme by rows, followed by a column arbitration. In this scheme, while one neuron was selected by the arbiters to transmit its event out of the chip, the rest of the neurons in the array were frozen and could not transmit any further events during this time window. This limited the maximum transmission speed. In order to improve this speed, Boahen proposed an improved 'burst mode' scheme: after the row arbitration, a complete row of events is pipelined out of the array and arbitrated out of the chip at higher speed. During this single-row event arbitration, the array is free to generate new events and communicate them to the row arbiter, in a pipelined mode. This scheme significantly improves the maximum event transmission speed, especially for high-traffic situations where speed is more critical. We have analyzed and studied this approach and have detected some shortcomings in the circuits reported by Boahen, which may produce erroneous behaviour under certain statistical conditions. The present paper proposes some improvements to overcome such situations. The improved "AER Generator" has been implemented in an AER transmitter system.
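
    Purely as a toy illustration of the burst-mode readout described above (not the circuits of Boahen or of this paper), the sketch below latches a whole row of pending events into a buffer, releases the row so new events can be generated, and then drains the buffered addresses onto the bus one at a time.

    ```python
    from collections import deque

    # Toy "row then column" burst-mode readout; array size and event positions
    # are hypothetical, and the Rqst/Ack handshake is only indicated by a comment.
    ROWS, COLS = 4, 4
    pending = [[False] * COLS for _ in range(ROWS)]
    for r, c in [(1, 0), (1, 3), (2, 2)]:      # hypothetical spiking neurons
        pending[r][c] = True

    bus = deque()                               # transmitted (row, col) addresses
    for r in range(ROWS):                       # row arbiter: scan rows in fixed order
        if any(pending[r]):
            latched = [c for c in range(COLS) if pending[r][c]]   # latch the whole row
            for c in range(COLS):               # release the row so it can spike again
                pending[r][c] = False
            for c in latched:                   # column stage drains the buffered events
                bus.append((r, c))              # Rqst/Ack handshake would occur here

    print(list(bus))                            # [(1, 0), (1, 3), (2, 2)]
    ```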

  10. 49 CFR 38.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 1 2011-10-01 2011-10-01 false High-speed rail cars, monorails and systems. 38....175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including... for high-platform, level boarding and shall comply with § 38.111(a) of this part for each type of car...

  11. 49 CFR 38.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 1 2014-10-01 2014-10-01 false High-speed rail cars, monorails and systems. 38....175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including... for high-platform, level boarding and shall comply with § 38.111(a) of this part for each type of car...

  12. 49 CFR 38.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 1 2013-10-01 2013-10-01 false High-speed rail cars, monorails and systems. 38....175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including... for high-platform, level boarding and shall comply with § 38.111(a) of this part for each type of car...

  13. 49 CFR 38.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 1 2010-10-01 2010-10-01 false High-speed rail cars, monorails and systems. 38....175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including... for high-platform, level boarding and shall comply with § 38.111(a) of this part for each type of car...

  14. 49 CFR 38.175 - High-speed rail cars, monorails and systems.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 1 2012-10-01 2012-10-01 false High-speed rail cars, monorails and systems. 38....175 High-speed rail cars, monorails and systems. (a) All cars for high-speed rail systems, including... for high-platform, level boarding and shall comply with § 38.111(a) of this part for each type of car...

  15. High Speed Balancing Applied to the T700 Engine

    NASA Technical Reports Server (NTRS)

    Walton, J.; Lee, C.; Martin, M.

    1989-01-01

    The work performed under Contracts NAS3-23929 and NAS3-24633 is presented. MTI evaluated the feasibility of high-speed balancing for both the T700 power turbine rotor and the compressor rotor. Modifications were designed for the existing Corpus Christi Army Depot (CCAD) T53/T55 high-speed balancing system for balancing T700 power turbine rotors. Tests conducted under these contracts included a high-speed balancing evaluation for T700 power turbines in the Army/NASA drivetrain facility at MTI. The high-speed balancing tests demonstrated the reduction of vibration amplitudes at operating speed for both low-speed balanced and non-low-speed balanced T700 power turbines. In addition, vibration data from acceptance tests of T53, T55, and T700 engines were analyzed and a vibration diagnostic procedure developed.

  16. Integrating Epigenomics into the Understanding of Biomedical Insight.

    PubMed

    Han, Yixing; He, Ximiao

    2016-01-01

    Epigenetics is one of the most rapidly expanding fields in biomedical research, and the popularity of high-throughput next-generation sequencing (NGS) highlights the accelerating speed of epigenomics discovery over the past decade. Epigenetics studies the heritable phenotypes that result from chromatin changes without alteration of the DNA sequence. Epigenetic factors and their interactive networks regulate almost all fundamental biological processes, and incorrect epigenetic information may lead to complex diseases. A comprehensive, genome-wide understanding of epigenetic mechanisms, their interactions, and their alterations in health and disease has become a priority in biological research. Bioinformatics is expected to make a remarkable contribution to this purpose, especially in processing and interpreting large-scale NGS datasets. In this review, we introduce pioneering epigenetic achievements in health and in complex diseases; next, we give a systematic review of epigenomics data generation, summarize public resources and integrative analysis approaches, and finally outline the challenges and future directions in computational epigenomics.

  17. Integrating Epigenomics into the Understanding of Biomedical Insight

    PubMed Central

    Han, Yixing; He, Ximiao

    2016-01-01

    Epigenetics is one of the most rapidly expanding fields in biomedical research, and the popularity of high-throughput next-generation sequencing (NGS) highlights the accelerating speed of epigenomics discovery over the past decade. Epigenetics studies the heritable phenotypes that result from chromatin changes without alteration of the DNA sequence. Epigenetic factors and their interactive networks regulate almost all fundamental biological processes, and incorrect epigenetic information may lead to complex diseases. A comprehensive, genome-wide understanding of epigenetic mechanisms, their interactions, and their alterations in health and disease has become a priority in biological research. Bioinformatics is expected to make a remarkable contribution to this purpose, especially in processing and interpreting large-scale NGS datasets. In this review, we introduce pioneering epigenetic achievements in health and in complex diseases; next, we give a systematic review of epigenomics data generation, summarize public resources and integrative analysis approaches, and finally outline the challenges and future directions in computational epigenomics. PMID:27980397

  18. DNA Sequencing by Capillary Electrophoresis

    PubMed Central

    Karger, Barry L.; Guttman, Andras

    2009-01-01

    Sequencing of human and other genomes has been at the center of interest in the biomedical field over the past several decades and is now leading toward an era of personalized medicine. During this time, DNA sequencing methods have evolved from labor-intensive slab gel electrophoresis, through automated multicapillary electrophoresis systems using fluorophore labeling with multispectral imaging, to the “next generation” technologies of cyclic array, hybridization-based, nanopore and single-molecule sequencing. Deciphering the genetic blueprint and follow-up confirmatory sequencing of Homo sapiens and other genomes was only possible through the advent of modern sequencing technologies, which were the result of step-by-step advances with contributions from academics, medical personnel and instrument companies. While next-generation sequencing is moving ahead at breakneck speed, the multicapillary electrophoretic systems played an essential role in the sequencing of the Human Genome, the foundation of the field of genomics. In this perspective, we overview the role of capillary electrophoresis in DNA sequencing, based in part on several of our articles in this journal. PMID:19517496

  19. Production of Supra-regular Spatial Sequences by Macaque Monkeys.

    PubMed

    Jiang, Xinjian; Long, Tenghai; Cao, Weicong; Li, Junru; Dehaene, Stanislas; Wang, Liping

    2018-06-18

    Understanding and producing embedded sequences in language, music, or mathematics is a central characteristic of our species. These domains are hypothesized to involve a human-specific competence for supra-regular grammars, which can generate embedded sequences that go beyond the regular sequences engendered by finite-state automata. However, is this capacity truly unique to humans? Using a production task, we show that macaque monkeys can be trained to produce time-symmetrical embedded spatial sequences whose formal description requires supra-regular grammars or, equivalently, a push-down stack automaton. Monkeys spontaneously generalized the learned grammar to novel sequences, including longer ones, and could generate hierarchical sequences formed by an embedding of two levels of abstract rules. Compared to monkeys, however, preschool children learned the grammars much faster, using a chunking strategy. While supra-regular grammars are accessible to nonhuman primates through extensive training, human uniqueness may lie in the speed and learning strategy with which they are acquired. Copyright © 2018 Elsevier Ltd. All rights reserved.
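
    To illustrate why time-symmetrical ("mirror") embedded sequences sit beyond finite-state machinery, here is a minimal sketch in which the second half of a sequence is produced by popping a stack filled while producing the first half. The item labels are arbitrary placeholders, not the spatial stimuli used in the study.

    ```python
    def mirror_sequence(first_half):
        """Produce a time-symmetrical embedded sequence using a push-down stack."""
        stack = []
        for item in first_half:
            stack.append(item)                 # push while producing the first half
        second_half = []
        while stack:
            second_half.append(stack.pop())    # pop to produce the mirrored half
        return list(first_half) + second_half

    print(mirror_sequence(["A", "B", "C"]))    # ['A', 'B', 'C', 'C', 'B', 'A']
    ```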

  20. An Integrated SNP Mining and Utilization (ISMU) Pipeline for Next Generation Sequencing Data

    PubMed Central

    Azam, Sarwar; Rathore, Abhishek; Shah, Trushar M.; Telluri, Mohan; Amindala, BhanuPrakash; Ruperao, Pradeep; Katta, Mohan A. V. S. K.; Varshney, Rajeev K.

    2014-01-01

    Open-source single nucleotide polymorphism (SNP) discovery pipelines for next-generation sequencing data commonly require working knowledge of a command line interface, massive computational resources, and expertise, which makes them daunting for biologists. Further, the SNP information generated may not be readily usable for downstream processes such as genotyping. Hence, a comprehensive pipeline called Integrated SNP Mining and Utilization (ISMU) has been developed by integrating several open-source next-generation sequencing (NGS) tools with a graphical user interface, for SNP discovery and for their utilization in developing genotyping assays. The pipeline features functionalities such as pre-processing of raw data, integration of open-source alignment tools (Bowtie2, BWA, Maq, NovoAlign and SOAP2), SNP prediction methods (SAMtools/SOAPsnp/CNS2snp and CbCC), and interfaces for developing genotyping assays. The pipeline outputs a list of high-quality SNPs between all pairwise combinations of the genotypes analyzed, in addition to the reference genome/sequence. Visualization tools (Tablet and Flapjack) integrated into the pipeline enable inspection of the alignment and of errors, if any. The pipeline also provides a confidence score or polymorphism information content value, with flanking sequences, for identified SNPs in the standard format required for developing marker genotyping (KASP and Golden Gate) assays. The pipeline enables users to process a range of NGS datasets, such as whole-genome re-sequencing, restriction-site-associated DNA sequencing and transcriptome sequencing data, at high speed. The pipeline is very useful for the plant genetics and breeding community with no computational expertise, enabling them to discover SNPs and utilize them in genomics, genetics and breeding studies. It has been parallelized to process huge next-generation sequencing datasets. It has been developed in the Java language and is available at http://hpc.icrisat.cgiar.org/ISMU as standalone free software. PMID:25003610
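
    For orientation only, the sketch below chains a bare-bones version of the kind of steps such a pipeline automates, using widely available tools (bowtie2, samtools, bcftools). The file names are placeholders, and bcftools stands in here for the SNP callers ISMU actually integrates, so this is not the ISMU workflow itself.

    ```python
    import subprocess

    # Illustrative re-sequencing SNP workflow; file names are placeholders and
    # bcftools is used in place of the callers integrated in ISMU.
    def run(cmd):
        print("+", cmd)
        subprocess.run(cmd, shell=True, check=True)

    run("bowtie2-build ref.fa ref_index")                 # index the reference
    run("bowtie2 -x ref_index -U reads.fq -S aln.sam")    # align reads to the reference
    run("samtools view -b -o aln.bam aln.sam")            # convert SAM to BAM
    run("samtools sort -o aln.sorted.bam aln.bam")        # coordinate-sort the BAM
    run("samtools index aln.sorted.bam")                  # index for visualization tools
    run("bcftools mpileup -f ref.fa aln.sorted.bam | bcftools call -mv -o snps.vcf")  # call SNPs
    ```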
